time to read 2 min | 356 words

Udi is talking about some of the advantages of using spaces: the ability to extend an existing infrastructure at runtime, simply by adding new clients and new tasks to a running grid/space. The idea is that there is an existing set of servers which handle tasks from several queues. This separation of the servers that do the work from the clients that request it means that you can register new types of tasks and they will be handled automatically.

He also mentions a technological difference between Java and .Net. Apparently, Jini on Java is capable of handling unknown types automatically, by downloading their byte code and dependencies. On the .Net side of things, we have no such facility built in, and if we tried to deserialize a task of an unknown type, we would get a serialization exception.

This difference pushed Udi toward a different style: instead of tasks that contain both behavior and data, put messages on the queue and deploy message handlers to the servers. Personally, I like the zero deploy scenario, and I think that I like the idea of a task that encompasses both data and behavior.
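To make the contrast concrete, here is a tiny sketch (my own naming, not Udi's): in the first style the unit of work carries both data and behavior, while in the second the queue carries plain data and the behavior lives in handlers deployed to the servers ahead of time.

// Style 1: the task travels with its behavior; the grid just calls Execute().
public interface ITask
{
    void Execute();
}

public class SendOrderTask : ITask
{
    public int OrderId;

    public void Execute()
    {
        // ship the order...
    }
}

// Style 2: only data goes on the queue; a handler that was deployed to the
// server supplies the behavior.
public class SendOrderMessage
{
    public int OrderId;
}

public interface IMessageHandler<T>
{
    void Handle(T message);
}

public class SendOrderHandler : IMessageHandler<SendOrderMessage>
{
    public void Handle(SendOrderMessage message)
    {
        // ship the order...
    }
}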

I have given the issue of deploying some thought, and since we have strong versioning for .Net assemblies, it is really a matter of finding out what the assembly qualified name of the task is, and then going to a central server to fetch the assembly if the server doesn't have it already. A nice advantage of this scheme is that you get the ability to handle versioning of tasks pretty much out of the box, as sketched below.
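A minimal sketch of what I have in mind, assuming a hypothetical central server URL and download convention (only AppDomain.AssemblyResolve is an existing API here, the rest is illustrative):

using System;
using System.IO;
using System.Net;
using System.Reflection;

public static class TaskAssemblyResolver
{
    // Assumption: a central server that can hand back an assembly given its
    // full, strongly versioned name.
    private const string CentralServerUrl = "http://central-server/assemblies/";

    public static void Install()
    {
        AppDomain.CurrentDomain.AssemblyResolve += OnAssemblyResolve;
    }

    private static Assembly OnAssemblyResolve(object sender, ResolveEventArgs args)
    {
        // args.Name is the assembly qualified name, e.g.
        // "MyTasks, Version=1.2.0.0, Culture=neutral, PublicKeyToken=..."
        AssemblyName name = new AssemblyName(args.Name);
        string fileName = name.Name + "." + name.Version + ".dll";
        string localPath = Path.Combine(Path.GetTempPath(), fileName);

        if (File.Exists(localPath) == false)
        {
            using (WebClient client = new WebClient())
            {
                // The strong name gives us versioning for free: we ask the
                // central server for the exact version we failed to load.
                client.DownloadFile(CentralServerUrl + Uri.EscapeDataString(args.Name), localPath);
            }
        }
        return Assembly.LoadFrom(localPath);
    }
}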

One thing that I am not so sure of is the separation of clients from the decision process. That is, while a client may want to start the SendOrder process, is it really the responsibility of the client to put a SendOrderUsingFedExTask on the queue? This seems like too much responsibility, so maybe using messages here is preferred. Again, you can request message handlers automatically, so you still get that benefit.

time to read 4 min | 637 words

It seems like demand for a MonoRail implementation of the UpdatePanel is fairly high among people coming to MonoRail from the WebForms world. I am doing a project that uses the Update Panel fairly extensively, and I can see why; it is a really nice way to add Ajax capabilities to your web form. I started to look into what it would take to implement it with MonoRail, and I came to two conclusions:

  • The technical challenges on the server side are minor; on the client side it is more of an issue, but not hugely so.
  • The default architecture for UI in MonoRail basically makes it non-applicable.

I guess that I should explain.

The WebForms architecture constrains the page to interact mostly with itself using postbacks. There is a whole mechanism on both the client and server side that is used to make this happen, which is what the Update Panel plugs into. This means that programming concepts such as AutoPostBack=true are very common in the WebForms world, and a great deal of work is done on the same page.

MonoRail, on the other hand, doesn't place such constraints, and the default architecture for the UI is radically different. Instead of interacting mostly with the same page using postbacks, you have a more relaxed model, where you usually talk to the controller, and that chooses one of several views to send your way.

When I say "talking to the controller", I mean issuing requests to the application; most often, Ajax seems to be the choice for that, but moving between actions and views using POST/GET is fairly common as well. While you could build the views to work in the same manner as Web Forms, I have a hard time thinking of good reasons why you would want to do that.

So, I hope we can agree that the default architecture of the two is hugely different. Now, let us go back to the update panel request. While the mechanics may not be the same, it is fairly common to want to replace part of your page, and it wouldn't be nice if MonoRail made it hard. Well, as it turns out, it doesn't.

In my MonoRail web cast, I have shown how it can be done: you have a piece of the UI that you want to replace using Ajax. Let us take Fowler's refactoring approach (a rough sketch follows the list):

  • You decide that you want to update a part of the UI dynamically.
  • You extract that part into a separate view (mostly involving Cut&Paste).
  • You call the new view from the old view.
  • You create a new method on the controller that would render the new view and return it.*
  • On the UI, you create an Ajax.Updater element that would call the new method on the server (and automatically replace the part of the page with the returned result).

* There are several ways to do it; a new method is one, specifying a different view for the same method is another, which I sometimes like more.
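Here is a rough sketch of what that can look like, assuming a hypothetical OrdersController with a RecentOrders partial (the names, routes and view files are mine, not taken from the webcast):

using Castle.MonoRail.Framework;

public class OrdersController : SmartDispatcherController
{
    public void Index()
    {
        // The full page; its view includes the extracted partial for the
        // first, non-Ajax render.
        PropertyBag["orders"] = LoadRecentOrders();
    }

    // The new method from step 4: it renders only the extracted view, so the
    // response is just the HTML fragment that Ajax.Updater will swap in.
    public void RecentOrders()
    {
        PropertyBag["orders"] = LoadRecentOrders();
        CancelLayout();              // no layout, we want a bare fragment
        RenderView("recentOrders");  // the view extracted in step 2
    }

    private object LoadRecentOrders()
    {
        // Hypothetical data access, stands in for whatever the page shows.
        return new string[] { "order #1", "order #2" };
    }
}

On the client side, step 5 is the usual Prototype call that MonoRail's Ajax helpers emit, something along the lines of new Ajax.Updater('recentOrders', '/orders/recentorders.rails', { method: 'get' }).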

time to read 3 min | 546 words

About 6 years ago, I was a very young soldier, serving as a warden at Prison 6, Company C. For those of you who aren't familiar with the inner details of Israel's prisons, Prison 6 is a military prison for IDF soldiers. Reasons for getting into prison range from showing up with unshined boots to drug use to selling arms. Company C was where the more dangerous inmates were held. I didn't get the people who forgot to shine their shoes; I got the drug dealers and stolen arms sellers.

Anyway, back to the story. I was (and am) a geek, and probably a nice person; being thrown into that situation was very low on my "Things To Do" list. Nevertheless, the IDF, in its great wisdom, decided to put me there, so I had to go. (Later, I was in a position where I had to make the same decision, and that wisdom turned out to be purely arbitrary.)

About three weeks after I arrived at the prison, I was at my Sergeant's office, and he was busy flirting with the nurse who came to check on the inmates. It was about noon, which meant that it was time for the noon Count. Life in prison mostly revolves around those Counts, and they are held as sacred. I already had most of the inmates lined up outside the office, just waiting for the Sergeant to come out and count them.

As I said, he was busy flirting, and didn't feel like doing it right then. So he told me to go out and keep them busy for a while. Keep in mind what the population was, and that I had just finished the introduction part; I had no idea what to do, and I told him so.

"Oren, my boy," he told me, "you are going to go out there, stand in front of the company, and talk for the next fifteen minutes. They are going to listen to you, and in 12:17, I am going to count them, am I clear?" I protested that I had no idea what to talk about, and couldn't do it anyway.

"Oren," he said in a much milder voice, "perhaps I wasn't clear, you are going to go out there, and you are going to talk to them, talk to them about socks, if you like. If you don't like it, be kind and take your place at cell 6".

I went out there and talked for fifteen minutes, and to this day I have no idea about what, although I suspect that socks were the topic under discussion at least part of the time. I have never had any problem speaking in public since, and I admire that Sergeant to this day, very much so, and not only because of that.

DevTeach is in a few days, and I find myself missing my Sergeant again. I know the material, and I am not so much afraid of giving a talk in English; it is the ability to make it interesting that worries me. *Sigh*, I will go practice it some more now...

time to read 38 min | 7539 words

I should start by saying that <one-to-one> is not a recommended approach, but the question came up in the Castle mailing list, and I set to investigate. There are scenarios where this is the only choice, but in general, prefer to avoid it. Here are our entities:

[ActiveRecord("Users")]

publicclassUser : ActiveRecordBase<User>

{

       privateBlog blog;

       privateint id;

       privatestring name;

 

       [PrimaryKey]

       publicvirtualint Id

       {

              get { return id; }

              set { id = value; }

       }

 

       [Property]

       public virtual string Name

       {

              get { return name; }

              set { name = value; }

       }

 

       [OneToOne]

       publicvirtualBlog Blog

       {

              get { return blog; }

              set { blog = value; }

       }

}

 

[ActiveRecord]

public class Blog : ActiveRecordBase<Blog>

{

       private int id;

       private User user;

       private string name;

 

       [PrimaryKey]

       public virtual int Id

       {

              get { return id; }

              set { id = value; }

       }

 

       [BelongsTo("`User`")]

       public virtual User User

       {

              get { return user; }

              set { user = value; }

       }

 

       [Property]

       public virtual string Name

       {

              get { return name; }

              set { name = value; }

       }

}

Now, let us see what happens when I try to run this code:

using (new SessionScope())
{
    // u is a previously saved User instance
    User user = User.Find(u.Id);
    Console.WriteLine(user.Name);
}

For this, NHibernate will generate the following query:

declare @p0 int;
set @p0 = '1';

SELECT user0_.Id as Id1_2_, blog1_.Id as Id0_0_,
       blog1_.Name as Name0_0_, blog1_.[User] as User3_0_0_,
       user2_.Id as Id1_1_
FROM Users user0_
       left outer join Blog blog1_ on user0_.Id=blog1_.Id
       left outer join Users user2_ on blog1_.[User]=user2_.Id
WHERE user0_.Id=@p0;

The double left outer join had me scratching my head for a while, until I figured out how NHibernate was thinking about it. The first left outer join is to find the related blog, and the second is to find the blog's user. Not reasonable, I agree, but that is the way that one-to-one works.

Now, let us see what happens if we specify that both classes are lazy. Well, now NHibernate generates this SQL:

declare @p0 int;
set @p0 = '1';

SELECT user0_.Id as Id1_1_, user0_.Name as Name1_1_,
       blog1_.Id as Id0_0_, blog1_.Name as Name0_0_,
       blog1_.[User] as User3_0_0_
FROM Users user0_
       left outer join Blog blog1_ on user0_.Id=blog1_.Id
WHERE user0_.Id=@p0;

But wait, didn't we specify that both classes should be lazy? It removed one left outer join, but kept the second one. Why is it doing that?

Well, let us give a moment's thought to the way NHibernate sees things, shall we? We told it that User has a one-to-one association with Blog; this means that when it loads a User, it has to populate all the properties of the user, but the Blog's column is not kept on the Users table, it is on the Blog table. What this means is that in order to find the id of the blog entity, NHibernate must query the Blog table as well. At that point, it is more efficient to just grab all the data from that table rather than just the id.

In other words, one-to-one cannot be lazily loaded, which is one of the reasons why it is recommended to use two many-to-one associations instead.
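For completeness, here is a rough sketch of that alternative (my own, not from the original question): each side gets its own [BelongsTo] foreign key, which NHibernate can proxy and load lazily. The column names and the unique constraints on them are assumptions about the schema.

using Castle.ActiveRecord;

[ActiveRecord("Users", Lazy = true)]
public class User : ActiveRecordBase<User>
{
    [PrimaryKey]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Name { get; set; }

    // Users.BlogId references Blog.Id; make the column unique in the
    // database to keep the one-to-one semantics.
    [BelongsTo("BlogId")]
    public virtual Blog Blog { get; set; }
}

[ActiveRecord(Lazy = true)]
public class Blog : ActiveRecordBase<Blog>
{
    [PrimaryKey]
    public virtual int Id { get; set; }

    [Property]
    public virtual string Name { get; set; }

    // Blog.UserId references Users.Id, again unique at the database level.
    [BelongsTo("UserId")]
    public virtual User User { get; set; }
}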

Booish fun

time to read 4 min | 755 words

Booish is a command interpreter for the Boo language, which means that it gives you the full power of Boo and the .NET framework at your fingertips.

I just needed to find the number of methods in mscorlib:

import System.Reflection

mscorlib = typeof(object).Assembly
methodCount = 0
typeCount = 0
for t in mscorlib.GetTypes():
    typeCount += 1
    for m in t.GetMethods():
        methodCount += 1
print "Types ${typeCount}, Methods: ${methodCount}"

The result:

Types 2319, Methods: 27650
time to read 2 min | 214 words

Just ran into this project, another port from the Java land, which looks really interesting. Basically, it is a job scheduling framework. I had need of that in a previous application, which meant that I had to write a simple version of it myself, and I was very happy with it, until I realized that I had a convoy issue so severe that it killed the application under heavy load (discovered 6 months into production, naturally).

In my current project, there are quite a few things that I mark as "TODO Later", which are all sorts of maintenance tasks that I would like to run. I can probably build something simple based on Timer (along the lines of the sketch below), but that has already proven to be problematic at times. I don't know about the maturity of the project (or much about it, frankly), but it looks like there are some interesting things there, such as the possibility of persistent jobs, which would make my life easier. It looks like that is not implemented yet, but work on porting it has started.
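For reference, a minimal sketch of the kind of Timer-based runner I mean (my own code, not part of the project under discussion): it reschedules itself only after the work completes, so runs never pile up, and it has no persistence, which is exactly what I am missing.

using System;
using System.Threading;

public class MaintenanceJob : IDisposable
{
    // A negative one millisecond TimeSpan tells Timer "do not fire periodically".
    private static readonly TimeSpan Never = TimeSpan.FromMilliseconds(-1);

    private readonly TimeSpan interval = TimeSpan.FromMinutes(15);
    private readonly Timer timer;

    public MaintenanceJob()
    {
        // Fire once after the interval; the next run is scheduled explicitly
        // when the current one finishes, so executions never overlap.
        timer = new Timer(Execute, null, interval, Never);
    }

    private void Execute(object state)
    {
        try
        {
            // ... the actual maintenance work goes here ...
        }
        finally
        {
            timer.Change(interval, Never); // schedule the next run
        }
    }

    public void Dispose()
    {
        timer.Dispose();
    }
}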

From a cursory look, it seems interesting enough to take a deeper look, and the site has good documentation.

No mailing list, though :-(

time to read 3 min | 462 words

There are many tools that tend to work only with DataSets; the most common cases are reporting tools or data driven systems. I consider this an issue with the tool, usually, but it is a fact of life. Case in point: I want to display a report of customer objects. I say objects here because I retrieve them through an NHibernate query plus business logic that can't really be done in a stored procedure.

At the end, I get a collection of customer objects, and I need to pass that to a reporting tool that can only accept a DataSet, which means that I need to translate an object graph into a tabular format.

Pay no attention to the man behind the screen!

Here is my secret technique to do this:

DataTable dt = new DataTable();
dt.Columns.Add("CustomerId", typeof(int));
dt.Columns.Add("CustomerName", typeof(string));
dt.Columns.Add("RegisteredAt", typeof(string)); //not a typo, sadly.

// ... lot more properties, often nested ones.

foreach(Customer cust in customers)
{
  DataRow row = dt.NewRow();
  row["CustomerId"] = cust.Id;
  row["CustomerName"] = cust.At(reportDate).Name;
  row["RegisteredAt"] = cust.RegisteredAt.ToShortDateString();
  //... lot more properties

  dt.Rows.Add(row);
}
DataSet ds = new DataSet();
ds.Tables.Add(dt);
return ds;

Sorry that it isn't magic, just the simplest solution that could work without writing a whole new data source adapter for the tool.

time to read 1 min | 158 words

Just finished writing some fairly complex reports. The reports are complex enough that I decided it wasn't worth my time to try to build a stored procedure to do it, and I simply used NHibernate to get the data. The simplest report had 14(!) parameters, but the main issue was handling security and running business logic as part of the report (specifically, a lot of date calculations).

To be clear, I am talking about using NHibernate as a data source for a report, not generating the report itself. That is done with reporting services, which talks to an NHibernate-backed Web Service. Of course, this has the predictable result of:

ayende.DislikedTools.Add( Microsoft.SqlServer.ReportingServices, 
     ReasonsForDislike.DoesNotSupportRightToLeft | ReasonsForDislike.XPathWhoNeedsXPath );
