
There have been some changes, and it seems that it is hard to track them. Here is where you can find the master repositories for the Rhino Tools projects:

On PSake


James Kovacs introduced psake (a PowerShell based build system) over a year ago, and at the time, I gave it a glance and decided that it was interesting, but not worth further investigation.

This weekend, as I was restructuring my Rhino Tools project, I realized that I need to touch the build system as well. The Rhino Tools build system has been through several projects, and was originally ported from Hibernate. It is NAnt based, complex, and can do just about everything that you want except be easily understandable.

It became clear to me very quickly that it ain’t going to be easy to change the way it works, nor would it be easy to modify it to reflect the new structure. There are other issues with complex build systems: they tend to create zones of “there be dragons”, where only the initiated go, and even they go with trepidation. I decided to take advantage of the changes that I am already making to get a simpler build system.

I had a couple of options open to me: Rake and Bake.

Bake seemed natural, until I remembered that no one has touched it in a year or two. Besides, I can only stretch NIH so far :-). And while I know that people rave about Rake, I did not want to introduce a Ruby dependency into my build system. I know that it was an annoyance when I had to build Fluent NHibernate.

One thing I knew I was not willing to go back to was editing XML, so I started looking at other build systems, ending up running into PSake.

There are a few interesting things that reading about it brought to mind. First, NAnt doesn’t cut it anymore: it can’t build WPF applications, nor does it handle multi targeting well. Second, I am already managing the compilation part of the build using MSBuild, thanks to Visual Studio.

That leaves the build system with executing msbuild, setting up directories, executing tests, running post build tools, etc.

PSake handles those well, since the execution environment is the command line. The syntax is nice, just enough to specify tasks and dependencies, but everything else is just pure command line. The following is the Rhino Mocks build script, using PSake:

properties { 
  $base_dir  = resolve-path .
  $lib_dir = "$base_dir\SharedLibs"
  $build_dir = "$base_dir\build" 
  $buildartifacts_dir = "$build_dir\" 
  $sln_file = "$base_dir\Rhino.Mocks-vs2008.sln" 
  $version = "3.6.0.0"
  $tools_dir = "$base_dir\Tools"
  $release_dir = "$base_dir\Release"
} 

task default -depends Release

task Clean { 
  remove-item -force -recurse $buildartifacts_dir -ErrorAction SilentlyContinue 
  remove-item -force -recurse $release_dir -ErrorAction SilentlyContinue 
} 

task Init -depends Clean { 
    . .\psake_ext.ps1
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks $version" `
        -version $version `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks.Tests\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks Tests $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks Tests $version" `
        -version $version `
        -clsCompliant "false" `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks.Tests.Model\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks Tests Model $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks Tests Model $version" `
        -version $version `
        -clsCompliant "false" `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    new-item $release_dir -itemType directory 
    new-item $buildartifacts_dir -itemType directory 
    cp $tools_dir\MbUnit\*.* $build_dir
} 

task Compile -depends Init { 
  exec msbuild "/p:OutDir=""$buildartifacts_dir "" $sln_file"
} 

task Test -depends Compile {
  $old = pwd
  cd $build_dir
  exec ".\MbUnit.Cons.exe" "$build_dir\Rhino.Mocks.Tests.dll"
  cd $old        
}

task Merge {
    $old = pwd
    cd $build_dir
    
    Remove-Item Rhino.Mocks.Partial.dll -ErrorAction SilentlyContinue 
    Rename-Item $build_dir\Rhino.Mocks.dll Rhino.Mocks.Partial.dll
    
    & $tools_dir\ILMerge.exe Rhino.Mocks.Partial.dll `
        Castle.DynamicProxy2.dll `
        Castle.Core.dll `
        /out:Rhino.Mocks.dll `
        /t:library `
        "/keyfile:$base_dir\ayende-open-source.snk" `
        "/internalize:$base_dir\ilmerge.exclude"
    if ($lastExitCode -ne 0) {
        throw "Error: Failed to merge assemblies!"
    }
    cd $old
}

task Release -depends Test, Merge {
    & $tools_dir\zip.exe -9 -A -j `
        $release_dir\Rhino.Mocks.zip `
        $build_dir\Rhino.Mocks.dll `
        $build_dir\Rhino.Mocks.xml `
        license.txt `
        acknowledgements.txt
    if ($lastExitCode -ne 0) {
        throw "Error: Failed to execute ZIP command"
    }
}

It is about 50 lines, all told, with a lot of spaces and is quite readable.

This handles the same tasks as the old set of scripts did, and it does this without undue complexity. I like it.


This post is about the Rhino Tools project. It has been running for a long time now, over 5 years, and has amassed quite a few projects in it.

I really like the codebase of the projects in Rhino Tools, but secondary aspects have been creeping in that made managing the project harder. In particular, putting all the projects in a single repository made it easy, far too easy: projects had an easy time taking dependencies that they shouldn’t, and the entire build process was… complex, to say the least.

I have been somewhat unhappily tolerant of this so far because, while it was annoying, it didn’t actively create problems for me. The problems started creeping in when I wanted to move Rhino Tools to use NHibernate 2.1. That is when I realized that this was going to be a very painful process, since I had to take on the entire Rhino Tools set of projects in one go, instead of dealing with each of them independently. The fact that so many of the dependencies were in Rhino Commons, for which I have a profound dislike, helped increase my frustration.

There are other things that I find annoying now. Rhino Security is a general purpose library for NHibernate, but it makes a lot of assumptions about how it is going to be used, which is wrong. Rhino ETL had a dependency on Rhino Commons because of three classes.

To resolve that, I decided to make a few other changes: taking dependencies is supposed to be a hard process; it is supposed to make you think.

I have been working on splitting the Rhino Tools project into all its sub projects, so each of them is independent of all the others. That increases the effort of managing all of them as a unit, but decreases the effort of managing them independently.

The current goals are to:

  • Make it simpler to treat each project independently
  • Make it easier to deal with the management of each project (dependencies, build scripts)

There is a side line in which I am also learning to use Git, and there is a high likelihood that the separate Rhino Tools projects will move to GitHub. Subversion’s patching & tracking capabilities annoyed me for the very last time about a week ago.


I am currently in the process of retiring the IRepository<T> from Rhino.Commons. I am moving it to its own set of projects, and I am not going to give it much attention in the future.

Note: Backward compatibility is maintained, as long as you reference the new DLLs.

But why am I doing that? If there is anything that I have been taught in the last couple of years, it is that there is a sharp and clear distinction between technological infrastructure (web frameworks, IoC containers, OR/M) and application infrastructure (layer super type, base services, conventions). The first can and should be made common, but trying to share the application infrastructure between multiple projects that aren't closely matched is likely to cause a lot of issues.

To take the IRepository&lt;T&gt; example, it currently has over 50 methods. If that isn't a violation of SRP, I don't know what is. Hell, you can even execute a stored procedure using the IRepository&lt;T&gt; infrastructure. That is too much.

What will I use instead? A project focused infrastructure. A repository interface reflects the application that is using it, and it changes from project to project, and even within the same project, as our understanding of how the project is built improves.
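To make that concrete, here is a minimal sketch of what a project focused repository might look like; the entity and member signatures are illustrative only (they echo the webcast example further down this page), not part of Rhino Commons:

// A hypothetical project focused repository: it exposes only the
// operations this particular application needs, and nothing more.
public interface IWebcastRepository
{
	Webcast Get(int id);
	void Save(Webcast webcast);
	Webcast GetLatest();
}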

I am still considering what to do with the other tidbits that I have there, such as the EntitiesToRepositories implementation. Ideally, I want to keep Rhino Commons focused on session management, and nothing else besides.

Rhino Queues


One of the things that often comes up on the NServiceBus mailing list is the request for an xcopy, zero administration queuing service. This is especially the case when you have smart clients or want to have queues over the Internet.

I decided to try to build such a thing, because it didn't seem such a hard problem. I turned out to be wrong, but it was an interesting experiment. Actually, the problem isn't that it is hard to do this; the problem was that I wanted durable queuing, and that led me to a lot of technologies that weren't suitable for my needs.

You can get the bits here: https://rhino-tools.svn.sourceforge.net/svnroot/rhino-tools/branches/rhino-queues-1.0

What it is:

  • XCopyable, Zero Administration, Embedded, Async queuing service
  • Robust in the face of networking outages
  • System.Transactions support
  • Fast
  • Works over HTTP

What it isn't:

  • Durable queuing service
  • Vetted by production use

Broadly, using Rhino Queues you get async queues over HTTP. But it keeps all the data in memory, so if you restart the application, it will lose all waiting messages. It also tries its best (but does not guarantee) to preserve message ordering and ensure delivery.

Let us take a look at the code, and then discuss the implementation details from there.

Server usage:

using(var factory = new Configuration("server")
	.Map("server").To("http://localhost:9999/server/")
	.Map("client").To("http://localhost:9999/client/")
	.RegisterQueue("echo")
	.BuildQueueFactory())
{
	factory.Start();

	using (var queue = factory.OpenQueue("echo"))
	{
		var message = queue.Recieve();
		var str = (string)message.Value;
		var rev = new string(str.Reverse().ToArray());
		using (var remoteQueue = factory.OpenQueue(message.Source))
		{
			Console.WriteLine("Handled message");
			remoteQueue.Send(rev);
		}
	}
}

And the client usage:

using(var factory = new Configuration("client")
	.Map("server").To("http://localhost:9999/server/")
	.Map("client").To("http://localhost:9999/client/")
	.RegisterQueue("echo-reply")
	.BuildQueueFactory())
{

	factory.Start();

	using (var tx = new TransactionScope())
	using (var serverQueue = factory.OpenQueue("echo@server"))
	{
		Console.WriteLine("Sending 'hello there'");
		serverQueue.Send("Hello there").Source = new Destination("echo-reply@client");
		tx.Complete();
	}
	
	using (var clientQueue = factory.OpenQueue("echo-reply"))
	{
		var msgText = clientQueue.Recieve();
		Console.WriteLine(msgText.Value);
	}
}

As you can see, we map a nice name to an endpoint, so we can send a message to [queue]@[endpoint nice name]; sending a message to just [queue] sends it to the local machine. It is expected that the nice name for an endpoint would be the machine name, which alleviates the need to sync names across all endpoints.

You can also see usage both with and without System.Transactions.

One thing that would probably raise questions is why Rhino Queues doesn't have a durable mode; after all, I just spent some time building a durable queue infrastructure. The reason for that is very simple: I spent too much time on that, and I can't spend more time on this at the moment. So I am putting both Rhino Queues and Rhino Queues Storage Disk out there, and I'll let the community bring them together.

A word of warning: Rhino Queues is a cool project, but it is not a replacement for such things as MSMQ. If you can, you probably want to make use of MSMQ, specifically because it has a lot more production time behind it.


Nathan has posted about utility libraries and he includes Rhino Commons there as well.

I don't see Rhino Commons as a utility library. At least not anymore. It certainly started its life as such, but it has grown since then.

Rhino Commons represents my default architecture. This is my base when I am building applications. It has some utility classes, sure, but it contains a lot more foundation and infrastructure components than anything else.


Two days ago I asked how many tests this method needs:

///<summary> 
///Get the latest published webcast 
///</summary>
public Webcast GetLatest();

Here is what I came up with:

[TestFixture]
public class WebcastRepositoryTest : DatabaseTestFixtureBase
{
	private IWebcastRepository webcastRepository;

	[TestFixtureSetUp]
	public void TestFixtureSetup()
	{
		IntializeNHibernateAndIoC(PersistenceFramework.ActiveRecord, 
			"windsor.boo", MappingInfo.FromAssemblyContaining<Webcast>());
	}

	[SetUp]
	public void Setup()
	{
		CurrentContext.CreateUnitOfWork();
		webcastRepository = IoC.Resolve<IWebcastRepository>();
	}

	[TearDown]
	public void Teardown()
	{
		CurrentContext.DisposeUnitOfWork();
	}

	[Test]
	public void Can_save_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		Assert.AreNotEqual(0, webcast.Id);
	}

	[Test]
	public void Can_load_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		UnitOfWork.CurrentSession.Evict(webcast);

		var webcast2 = webcastRepository.Get(webcast.Id);
		Assert.AreEqual(webcast.Id, webcast2.Id);
		Assert.AreEqual("test", webcast2.Name);
		Assert.IsNull(webcast2.PublishDate);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_not_consider_any_that_is_not_published()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));

		Assert.IsNull(webcastRepository.GetLatest());
	}

	[Test]
	public void When_asking_for_latest_webcast_will_get_published_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = null };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-1) };
		With.Transaction(() => webcastRepository.Save(webcast2));

		Assert.AreEqual(webcast2.Id, webcastRepository.GetLatest().Id);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_get_the_latest_webcast()
	{
		var webcast = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-2) };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-1) };
		With.Transaction(() => webcastRepository.Save(webcast2));

		Assert.AreEqual(webcast2.Id, webcastRepository.GetLatest().Id);
	}

	[Test]
	public void When_asking_for_latest_webcast_will_not_consider_webcasts_published_in_the_future()
	{
		var webcast = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(-2) };
		With.Transaction(() => webcastRepository.Save(webcast));
		var webcast2 = new Webcast { Name = "test", PublishDate = DateTime.Now.AddDays(2) };
		With.Transaction(() => webcastRepository.Save(webcast2));
		Assert.AreEqual(webcast.Id, webcastRepository.GetLatest().Id);
	}
}

And the implementation:

public class WebcastRepository : RepositoryDecorator<Webcast>, IWebcastRepository
{
	public WebcastRepository(IRepository<Webcast> repository)
	{
		Inner = repository;
	}

	public Webcast GetLatest()
	{
		var publishedWebcastsByDateDesc =
			from webcast in Webcasts
			where webcast.PublishDate != null && webcast.PublishDate < SystemTime.Now()
			orderby webcast.PublishDate descending 
			select webcast;

		return publishedWebcastsByDateDesc.FirstOrDefault();
	}

	private static IOrderedQueryable<Webcast> Webcasts
	{
		get { return UnitOfWork.CurrentSession.Linq<Webcast>(); }
	}
}

I think it is pretty sweet.


Udi Dahan has been talking about this for a while now. As usual, he makes sense, but I am working in a different enough context that it takes time to assimilate it.

At any rate, we have been talking about this for a few days, and I finally sat down and decided that I really need to look at it with code. The result of that experiment is that I like this approach, but am still not 100% sold.

The first idea is that we need to decouple the service layer from our domain implementation. But why? The domain layer is under the service layer, after all. Surely the service layer should be able to reference the domain. The reasoning here is that the domain model plays several different roles in most applications. It is the preferred way to access our persistent information (but it should not be aware of persistence), it is the central place for business logic, it is the representation of our notions about the domain, and much more that I am probably leaving aside.

The problem is that there is a dissonance between those requirements. Let us take a simple example of an Order entity.

As you can see, Order has several things that I can do with it: it can accept a new line, and it can calculate the total cost of the order.

But those are two distinct responsibilities that are based on the same entity. What is more, they have completely different persistence related requirements.

I talked about this issue here, over a year ago.

So, we need to split the responsibilities, so we can take care of each of them independently. But it doesn't make sense to split the Order entity, so instead we will introduce purpose driven interfaces. Now, when we want to talk about the domain, we can view certain aspects of the Order entity in isolation.

This leads us to a design in which the Order entity implements a set of purpose driven interfaces, one per responsibility.
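Something along these lines; the member signatures here are my own illustration (only IOrderCostCalculator is actually named in this post):

// Purpose driven interfaces: each exposes a single responsibility of
// the Order entity, so consumers depend only on the aspect they use.
public class OrderLine
{
	// details elided
}

public interface IOrderLinesHolder
{
	void AddLine(OrderLine line);
}

public interface IOrderCostCalculator
{
	decimal CalculateTotalCost();
}

// The entity itself is not split; it simply implements both views.
public class Order : IOrderLinesHolder, IOrderCostCalculator
{
	public void AddLine(OrderLine line)
	{
		// ...
	}

	public decimal CalculateTotalCost()
	{
		return 0m; // placeholder for the real calculation
	}
}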

And now we can refer to the separate responsibilities independently. Doing this based on the type opens up the non invasive API approaches that I talked about before. You can read Udi's posts about it to learn more about the concepts. Right now I am more interested in discussing the implementation.

First, the unit of abstraction that we work in is the IRepository<T>, as always.

The major change is the introduction of the idea of a ConcreteType to the repository. Now it will try to use the ConcreteType instead of the typeof(T) that it was created with. This affects all queries done with the repository (of course, if you don't specify ConcreteType, nothing changes).
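As a minimal sketch of the mechanics, assuming the repository carries an optional ConcreteType property of type Type (the bodies here are my guess, not the actual Rhino Commons code):

// sketch: the repository resolves the type all queries should target
private Type TargetType
{
	get { return ConcreteType ?? typeof(T); }
}

public ICriteria CreateCriteria()
{
	// session is the repository's NHibernate ISession; queries run
	// against the concrete type when one was specified
	return session.CreateCriteria(TargetType);
}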

The repository got a single new method:

T Create();

This allows you to create new instances of the entity without knowing its concrete type. And that is basically it.

Well, not really :-)

I introduced two other concepts as well.

public interface IFetchingStrategy<T>
{
	ICriteria Apply(ICriteria criteria);
}

IFetchingStrategy can interfere with the way queries are constructed. As a simple example, you could build a strategy that forces an eager load of the OrderLines collection when the IOrderCostCalculator is being queried.
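For example, such a strategy might look like this; the class is mine, but SetFetchMode is the standard NHibernate criteria API:

public class EagerLoadOrderLinesStrategy : IFetchingStrategy<IOrderCostCalculator>
{
	public ICriteria Apply(ICriteria criteria)
	{
		// eagerly fetch the OrderLines collection whenever the
		// IOrderCostCalculator aspect of Order is being queried
		return criteria.SetFetchMode("OrderLines", FetchMode.Eager);
	}
}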

There is no complex configuration involved in setting up IFetchingStrategy. All you need to do is register your strategies in the container, and let the repository do the rest.

However, doesn't this mean that we now need to explicitly register repositories for all our entities (and for all their interfaces)?

Well, yes, but no. Technically we need to do that. But we have help, EntitiesToRepositories.Register, so we can just put the following line somewhere in the application startup and we are done.

EntitiesToRepositories.Register(
	IoC.Container, 
	UnitOfWork.CurrentSession.SessionFactory, 
	typeof (NHRepository<>),
	typeof (IOrderCostCalculator).Assembly);

And this is it: you can start working with this new paradigm with no extra steps.

As a side benefit, this really paves the way for complex multi tenant applications.


I have a very simple requirement: I need to create a hierarchy of users' groups, so you can do something like:

  • Administrators
    • DBA
      • SQLite DBA

If you are a member of SQLite DBA group, you are implicitly a member of the Administrators group.

In the database, it is trivial to model this:

[diagram: a UsersGroups table with a self referencing parent group column]

Except that then we run into the problem of dealing with the hierarchy: we can't easily ask questions that involve more than one level of the hierarchy. Some databases have support for hierarchical operators (Oracle's CONNECT BY, SQL Server's recursive CTEs, for example), but that is different from one database to the next. That is a problem, since I need this to work across databases, and without doing too much fancy stuff.

We can work around the problem by introducing a new table:

[diagram: an additional table mapping each group to all of its ancestors and descendants]

Now we move the burden of the hierarchy from the query phase to the data entry phase.

From the point of view of the entity, we have this:

[diagram: the UsersGroup entity, with Parent, AllParents, DirectChildren and AllChildren associations]

Please ignore the death star shape and concentrate on the details :-)

Here is how we are getting all the data in the tree:

public virtual UsersGroup[] GetAssociatedUsersGroupFor(IUser user)
{
    DetachedCriteria directGroupsCriteria = DetachedCriteria.For<UsersGroup>()
        .CreateAlias("Users", "user")
        .Add(Expression.Eq("user.id", user.SecurityInfo.Identifier))
        .SetProjection(Projections.Id());

    DetachedCriteria allGroupsCriteria = DetachedCriteria.For<UsersGroup>()
        .CreateAlias("Users", "user", JoinType.LeftOuterJoin)
        .CreateAlias("AllChildren", "child", JoinType.LeftOuterJoin)
        .Add(
            Subqueries.PropertyIn("child.id", directGroupsCriteria) ||
            Expression.Eq("user.id", user.SecurityInfo.Identifier));

    ICollection<UsersGroup> usersGroups = 
        usersGroupRepository.FindAll(allGroupsCriteria, Order.Asc("Name"));
    return Collection.ToArray<UsersGroup>(usersGroups);
}

Note that here we don't care whether we are associated with a group directly or indirectly. This is an important consideration in some scenarios (mostly when you want to display information to the user), so we need some way to chart the hierarchy, right?

Here is how we are doing this:

public virtual UsersGroup[] GetAncestryAssociation(IUser user, string usersGroupName)
{
    UsersGroup desiredGroup = GetUsersGroupByName(usersGroupName);
    ICollection<UsersGroup> directGroups =
        usersGroupRepository.FindAll(GetDirectUserGroupsCriteria(user));
    if (directGroups.Contains(desiredGroup))
    {
        return new UsersGroup[] { desiredGroup };
    }
    // as a nice benefit, this does an eager load of all the groups in the hierarchy
    // in an efficient way, so we don't have SELECT N + 1 here, nor do we need
    // to load the Users collection (which may be very large) to check if we are associated
    // directly or not
    UsersGroup[] associatedGroups = GetAssociatedUsersGroupFor(user);
    if (Array.IndexOf(associatedGroups, desiredGroup) == -1)
    {
        return new UsersGroup[0];
    }
    // now we need to find out the path to it
    List<UsersGroup> shortest = new List<UsersGroup>();
    foreach (UsersGroup usersGroup in associatedGroups)
    {
        List<UsersGroup> path = new List<UsersGroup>();
        UsersGroup current = usersGroup;
        while (current.Parent != null && current != desiredGroup)
        {
            path.Add(current);
            current = current.Parent;
        }
        if (current != null)
            path.Add(current);
        // Valid paths are those that contain the desired group
        // and start in one of the groups that are directly associated
        // with the user
        if (path.Contains(desiredGroup) && directGroups.Contains(path[0]))
        {
            shortest = Min(shortest, path);
        }
    }
    return shortest.ToArray();
}

As an aside, this is about as complex a method as I can tolerate, and even that just barely.

I mentioned that the burden was moved to the data entry phase, right? Here is what I meant:

public UsersGroup CreateChildUserGroupOf(string parentGroupName, string usersGroupName)
{
    UsersGroup parent = GetUsersGroupByName(parentGroupName);
    Guard.Against<ArgumentException>(parent == null,
                                     "Parent users group '" + parentGroupName + "' does not exists");

    UsersGroup group = CreateUsersGroup(usersGroupName);
    group.Parent = parent;
    group.AllParents.AddAll(parent.AllParents);
    group.AllParents.Add(parent);
    parent.Directchildren.Add(group);
    parent.AllChildren.Add(group);
    return group;
}

We could hide it all inside the Parent's property setter, but we still need to deal with it.

And that is all you need to do in order to get cross database hierarchical structures working.


How many times has the documentation of a product told you that you really should use another product? Well, this is one such case. Rhino Igloo is an attempt to make an application that was mandated to use Web Forms more palatable to work with. As such, it is heavily influenced by MonoRail and its ideas.

I speak as the creator of this framework: if at all possible, prefer (mandate!) the use of MonoRail; it will be far easier, all around.

Model View Controller is a problematic subject in Web Forms, because of the importance placed on the view (ASPX page) in the Web Forms framework.

When coming to build an MVC framework on top of Web Forms, we need to consider this limitation in our design. For that reason, Rhino Igloo is not a pure MVC framework, and it works under the limitations of the Web Forms framework.

Along with the MVC framework, Rhino Igloo has tight integration with the Windsor IoC container, and makes extensive use of it in its internal operations.

It is important to note that while we cover the main chain of events that occurs when using Rhino Igloo, we are not going into the implementation details here, just a broad usage overview.

I took the liberty of flat out ignoring implementation details that are not necessary to understanding how things flow.

The Views

Overall, the Rhino Igloo framework hooks into the Web Forms pipeline by utilizing a common set of base classes (BasePage, BaseMaster, BaseScriptService, BaseHttpHandler). This set of base classes provides transparent dependency injection capabilities to the views.

As an example, consider the Login page. The Login page has a Controller property, of type LoginController. The controller property is a simple property, which the base page initializes with an instance of the LoginController.

It is important to note that it is not the responsibility of the Login page to initialize the Controller property, but rather its parent, the BasePage. As an example, here is a valid page, which, when run, will have its Controller property set to an instance of a LoginController:

public partial class Login : BasePage
{
    private LoginController controller;

    public LoginController Controller
    {
        get { return controller; }
        set { controller = value; }
    }
}

As a result of that, the view no longer needs to be aware of the dependencies that the LoginController itself has. Those are automatically supplied using Windsor itself.

The controller is the one responsible for handling the logic of the particular use case at hand. The view's responsibility is to call the controller when needed. Here is the full implementation of the page's Logon_Click method:

public void Logon_Click(object sender, EventArgs e)
{
    if (!Controller.Authenticate(Username.Text, Password.Text))
    {
        lblError.Text = Scope.ErrorMessage;
    }
}

As you can see, the view calls the controller, which can communicate back to the view using the scope.

The Scope

The scope class is the main way controllers get their data, and a common way to pass results from the controller to the view.

The scope class contains properties such as Input and Session, which allow abstract access to the current request and the user's session.

This also allows replacing them when testing, so we can test the behavior of the controllers without resorting to loading the entire HTTP pipeline.

Of particular interest are the following properties:

  • Scope.ErrorMessage
  • Scope.SuccessMessage
  • Scope.ErrorSummary

These allow the controller to pass messages to the view, and in the last case, the entire validation failure package, as a whole.

Communication between the controller and the view is intentionally limited, in order to ensure separation of concerns between the two.
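To illustrate, the controller side of the login example above might look something like this; this is a sketch under my own assumptions, not the framework's actual code:

public class LoginController : BaseController
{
	public bool Authenticate(string user, string password)
	{
		if (IsValid(user, password) == false)
		{
			// report the failure back to the view through the
			// scope, never touching the view directly
			Scope.ErrorMessage = "Invalid user name or password";
			return false;
		}
		return true;
	}

	// hypothetical credentials check, standing in for the real logic
	private static bool IsValid(string user, string password)
	{
		return false;
	}
}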


The controllers

All controllers inherit from BaseController, which serves as a common base class and a container for a set of utility methods as well.

From the controller, we can access the current context, which is a nicer wrapper around the Http Context and operations exposed by it.

While the controllers are POCO classes, which are easily instantiated and used outside the HTTP context, there are a few extra niceties that are supplied for the controllers.

The first is the Initialize() method, which allows the controller to initialize itself before processing begins.

As an extension to this idea, and taking from MonoRail's DataBind approach, there are other niceties:

[Inject]

Inject is an attribute that can be used to decorate a property, at which point the Rhino Igloo framework will take a value from the request, convert it to the appropriate data type, and set the property when the controller is first created. Here is an example:

[Inject]
public virtual Guid CurrentGuid
{
    get { return currentGuid; }
    set { currentGuid = value; }
}

The framework will search for a request parameter with the key "CurrentGuid", convert the string value to a Guid, and set its value.
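Roughly speaking, the framework does something equivalent to the following for each [Inject] property when the controller is created (a sketch of the behavior, not the actual implementation):

// sketch: pull the "CurrentGuid" request parameter, convert, assign
string raw = Scope.Input["CurrentGuid"];
if (string.IsNullOrEmpty(raw) == false)
	controller.CurrentGuid = new Guid(raw);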

[InjectEntity]

InjectEntity performs the same operation as Inject does, but it takes the value and queries the database for an entity with the specified key.

[InjectEntity(Name = Constants.Id, EagerLoad = "Customer")]
public Order Order
{
    get { return order; }
    set { order = value; }
}

This example shows how we can automatically load an order from the database based on the "Id" request parameter, and eager load the Customer property as well.

The model

Rhino Igloo integrates with Rhino Commons' Unit of Work style, but overall has no constraints on the way the model is built.
