time to read 2 min | 301 words

Reading Davy Brion’s post There Is No Excuse For Failing Queries In Production had me twitching slightly. The premise is that since you have tests for your queries, they are not going to fail unless some idiot goes to the database and messes things up, and when that happens, you are going to trip a circuit breaker and fail fast. Davy’s post is a response to a comment thread on another post:

Bertrand Le Roy: Would it make sense for the query to implement IDisposable so that you can put it in a using block? I would be a little concerned about the query failing and never getting disposed of.
Davy Brion: well, if a query fails the underlying SqlCommand object will eventually be garbage collected so i wouldn’t really worry about that. Failed queries shouldn’t happen frequently anyway IMO because they typically are the result of incorrect code.

The problem with Davy’s reasoning is that I can think of several very easy ways to get queries to fail in production even if you go through testing and keep your DB under lock & key. Probably the easiest problems to envision are timeouts and transaction deadlocks. Imagine the following scenario: your application is running in production, and the load is big enough (and the request pattern is such) that transaction deadlocks begin to happen. Deadlocks in the DB are not errors. Well, not in the usual sense of the word. They are transient and should be dealt with separately.

Using Davy’s approach, we would either have the circuit breaker tripping because of a perfectly normal occurrence, or we may have a connection leak, because we are assuming that all failures are either catastrophic or avoidable.
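To make that concrete, here is a minimal sketch of the kind of transient failure handling I have in mind, assuming SQL Server (where being chosen as the deadlock victim surfaces as a SqlException with error number 1205); the helper name is purely illustrative:

using System;
using System.Data.SqlClient;

public static class TransientRetry
{
    // Sketch only: retry the query a few times when the DB picks us as the deadlock victim.
    public static T ExecuteWithDeadlockRetry<T>(Func<T> query, int maxRetries)
    {
        for (int attempt = 0; ; attempt++)
        {
            try
            {
                return query();
            }
            catch (SqlException e)
            {
                // 1205 = deadlock victim: a transient condition, not a broken query,
                // so it should not count against a circuit breaker.
                if (e.Number != 1205 || attempt >= maxRetries)
                    throw;
            }
        }
    }
}

A timeout would get similar treatment. The point is that these failures are expected in production; they are not evidence of incorrect code.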

time to read 4 min | 742 words

I got a report of a memory leak when using Rhino Queues. After a bit of back and forth, just to ensure that it wasn’t the user’s fault, I looked into it. Sadly, I was able to reproduce the error on my machine (I say sadly because it proved the problem was with my code). This post shows the analysis phase of tracking down this leak.

That meant that all I had to do was find it. The problem with memory leaks is that they are so insanely hard to track down, since by the time you see their results, the actual cause is long gone.

So I fired up my trusty dotTrace and let the application run for a while with the following settings:

[image: dotTrace profiling settings]

Then I dumped the results and looked at anything suspicious:

[image: dotTrace snapshot overview]

And look what I found: obviously I am holding on to buffers for too long. Maybe I am pinning them by making async network calls?

Let us dig a bit deeper and look only at arrays of bytes:

[image: dotTrace view filtered to arrays of bytes]

I then asked dotTrace to look at who is holding those buffers.

[image: dotTrace showing who is holding those buffers]

And that is interesting: all those buffers are actually held by a garbage collector handle. But I have no idea what that is. Googling is always a good idea when you are lost, and it brought me to the GCHandle structure. But who is calling this? I certainly am not doing that. GCHandle is only useful when you want to talk to unmanaged code. My suspicion that this is something related to the network stack seems to be confirmed.

Let us take a look at the actual objects, and here I got a true surprise.

[image: the buffer instances, 4 bytes each]

4 bytes?! That isn’t something that I would usually pass to a network stream. My theory about bad networking code seems to be fragile all of a sudden.

Okay, what else can we dig out of here? Someone is creating a lot of 4 byte buffers. But that is all I know so far. dotTrace has a great feature for tracking such things: recording the actual allocation stack. So I moved back to the roots window and looked at that.

[image: allocation stack for the buffers]

What do you know, it looks like we have a smoking gun here. But it isn’t where I expected it to be. At this point, I left dotTrace (I didn’t have the appropriate PDB for the version I was using, so it couldn’t dig into it) and went to Reflector.

[image: Reflector view of the suspect code]

RetrieveColumnAsString and RetrieveColumnAsUInt32 are the first things that make me think of buffer allocation, so I checked them first.

[image: Reflector view of the retrieve column methods]

Still following the breadcrumb trail, I found:

[image: Reflector, the next method down the call chain]

Which leads to:

[image: Reflector, the method calling GCHandle.Alloc]

And here we have a call to GCHandle.Alloc() without a corresponding Free().

We have found the leak.
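To make the pattern concrete, here is a minimal sketch of what pinning a buffer for unmanaged code should look like (illustrative only, not the actual Esent Interop code); the leak was simply the missing Free() call:

using System;
using System.Runtime.InteropServices;

static class PinningExample
{
    public static void PassToUnmanagedCode(uint value)
    {
        byte[] buffer = BitConverter.GetBytes(value); // a 4 byte buffer, like the ones dotTrace showed
        GCHandle handle = GCHandle.Alloc(buffer, GCHandleType.Pinned);
        try
        {
            IntPtr pointer = handle.AddrOfPinnedObject();
            // ... hand the pointer to the unmanaged API ...
        }
        finally
        {
            // The missing piece in the leaking code: without Free(), the handle
            // keeps the buffer rooted and the GC can never reclaim it.
            handle.Free();
        }
    }
}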

Once I did that, of course, I headed to the Esent project to find that the problem had long been solved, and that the solution is simply to update my version of Esent Interop.

It makes for a good story, at the very least :-)

time to read 5 min | 884 words

One of the annoyances that we have to deal with when building enterprise applications is the requirement that no data shall be lost. The usual response to that is to introduce a WasDeleted or an IsActive column in the database and implement deletes as an update that would set that flag.

Simple, easy to understand, quick to implement and explain.

It is also, quite often, wrong.

The problem is that deletion of a row or an entity is rarely a simple event. It affects not only the data in the model, but also the shape of the model. That is why we have foreign keys, to ensure that we don’t end up with Order Lines that don’t have a parent Order. And that is just the simplest of issues. Let us consider the following model:

public class Order
{
    public virtual ICollection<OrderLine> OrderLines { get;set; }
}

public class OrderLine
{
    public virtual Order Order { get;set; }
}

public class Customer
{
    public virtual Order LastOrder { get;set; }
}

Let us say that we want to delete an order. What should we do? That is a business decision, actually. But it is one that the DB itself enforces, in order to keep the data integrity.

When we are dealing with soft deletes, it is easy to get into situations where we have, for all intents and purposes, corrupt data, because Customer’s LastOrder (which is just a tiny optimization that no one thought about) now points to a soft deleted order.

Now, to be fair, if you are using NHibernate there are very easy ways to handle that, which actually handle cascades properly. It is still a more complex solution.

For myself, I would much rather drop the requirement for soft deletes in the first place and deal directly with the reason we care about it. Usually, this is an issue of auditing. You can’t remove data from the database once it is there, or the regulator will have your head on a silver platter and the price of the platter will be deducted from your salary.

The fun part is that it is so much easier to implement that with NHibernate. In fact, I am not going to show you how to implement that feature; I am going to show how to implement a related one and leave building the auditable deletes as an exercise for the reader :-)

Here is how we implement audit trail for updates:

public class SeparateTableLogEventListener : IPreUpdateEventListener
{
    public bool OnPreUpdate(PreUpdateEvent updateEvent)
    {
        var sb = new StringBuilder("Updating ")
            .Append(updateEvent.Entity.GetType().FullName)
            .Append("#")
            .Append(updateEvent.Id)
            .AppendLine();

        for (int i = 0; i < updateEvent.OldState.Length; i++)
        {
            if (Equals(updateEvent.OldState[i], updateEvent.State[i])) continue;

            sb.Append(updateEvent.Persister.PropertyNames[i])
                .Append(": ")
                .Append(updateEvent.OldState[i])
                .Append(" => ")
                .Append(updateEvent.State[i])
                .AppendLine();
        }

        var session = updateEvent.Session.GetSession(EntityMode.Poco);
        session.Save(new LogEntry
        {
            WhatChanged = sb.ToString()
        });
        session.Flush();

        return false;
    }
}

It is an example, so it is pretty raw, but it should make clear what is going on.
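For completeness, a listener like this only runs once it has been registered with the configuration. Something along these lines should do it (a sketch, assuming programmatic configuration):

using NHibernate.Cfg;
using NHibernate.Event;

// Sketch: wiring the listener up before building the session factory.
var cfg = new Configuration().Configure();
cfg.EventListeners.PreUpdateEventListeners =
    new IPreUpdateEventListener[] { new SeparateTableLogEventListener() };
var sessionFactory = cfg.BuildSessionFactory();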

Your task, should you choose to accept it, is to build a similar audit listener for deletes. This message will self-destruct whenever it feels like it.

On PSake

time to read 7 min | 1212 words

James Kovacs introduced psake (a PowerShell based build system) over a year ago, and at the time, I gave it a glance and decided that it was interesting, but not worth further investigation.

This weekend, as I was restructuring my Rhino Tools project, I realized that I needed to touch the build system as well. The Rhino Tools build system has been through several projects, and was originally ported from Hibernate. It is NAnt based, complex, and can do just about everything that you want except be easily understandable.

It became clear to me very quickly that it ain’t going to be easy to change the way it works, nor would it be easy to modify it to reflect the new structure. There are other issues with complex build systems: they tend to create zones of “there be dragons”, where only the initiated go, and even they go with trepidation. I decided to take advantage of the changes that I am already making to get a simpler build system.

I had a couple of options open to me: Rake and Bake.

Bake seemed natural, until I remembered that no one has touched it in a year or two. Besides, I can only stretch NIH so far :-). And while I know that people rave about Rake, I did not want to introduce a Ruby dependency into my build system. I know that it was an annoyance when I had to build Fluent NHibernate.

One thing that I knew I was not willing to go back to was editing XML, so I started looking at other build systems, and ended up running into PSake.

There are a few interesting things that reading about it brought to mind. First, NAnt doesn’t cut it anymore. It can’t build WPF applications, nor does it handle multi targeting well. Second, I am already managing the compilation part of the build using MSBuild, thanks to Visual Studio.

That leaves the build system with executing msbuild, setting up directories, executing tests, running post build tools, etc.

PSake handles those well, since the execution environment is the command line. The syntax is nice, just enough to specify tasks and dependencies, but everything else is just pure command line. The following is the Rhino Mocks build script, using PSake:

properties { 
  $base_dir  = resolve-path .
  $lib_dir = "$base_dir\SharedLibs"
  $build_dir = "$base_dir\build" 
  $buildartifacts_dir = "$build_dir\" 
  $sln_file = "$base_dir\Rhino.Mocks-vs2008.sln" 
  $version = "3.6.0.0"
  $tools_dir = "$base_dir\Tools"
  $release_dir = "$base_dir\Release"
} 

task default -depends Release

task Clean { 
  remove-item -force -recurse $buildartifacts_dir -ErrorAction SilentlyContinue 
  remove-item -force -recurse $release_dir -ErrorAction SilentlyContinue 
} 

task Init -depends Clean { 
    . .\psake_ext.ps1
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks $version" `
        -version $version `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks.Tests\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks Tests $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks Tests $version" `
        -version $version `
        -clsCompliant "false" `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    Generate-Assembly-Info `
        -file "$base_dir\Rhino.Mocks.Tests.Model\Properties\AssemblyInfo.cs" `
        -title "Rhino Mocks Tests Model $version" `
        -description "Mocking Framework for .NET" `
        -company "Hibernating Rhinos" `
        -product "Rhino Mocks Tests Model $version" `
        -version $version `
        -clsCompliant "false" `
        -copyright "Hibernating Rhinos & Ayende Rahien 2004 - 2009"
        
    new-item $release_dir -itemType directory 
    new-item $buildartifacts_dir -itemType directory 
    cp $tools_dir\MbUnit\*.* $build_dir
} 

task Compile -depends Init { 
  exec msbuild "/p:OutDir=""$buildartifacts_dir "" $sln_file"
} 

task Test -depends Compile {
  $old = pwd
  cd $build_dir
  exec ".\MbUnit.Cons.exe" "$build_dir\Rhino.Mocks.Tests.dll"
  cd $old        
}

task Merge {
    $old = pwd
    cd $build_dir
    
    Remove-Item Rhino.Mocks.Partial.dll -ErrorAction SilentlyContinue 
    Rename-Item $build_dir\Rhino.Mocks.dll Rhino.Mocks.Partial.dll
    
    & $tools_dir\ILMerge.exe Rhino.Mocks.Partial.dll `
        Castle.DynamicProxy2.dll `
        Castle.Core.dll `
        /out:Rhino.Mocks.dll `
        /t:library `
        "/keyfile:$base_dir\ayende-open-source.snk" `
        "/internalize:$base_dir\ilmerge.exclude"
    if ($lastExitCode -ne 0) {
        throw "Error: Failed to merge assemblies!"
    }
    cd $old
}

task Release -depends Test, Merge {
    & $tools_dir\zip.exe -9 -A -j `
        $release_dir\Rhino.Mocks.zip `
        $build_dir\Rhino.Mocks.dll `
        $build_dir\Rhino.Mocks.xml `
        license.txt `
        acknowledgements.txt
    if ($lastExitCode -ne 0) {
        throw "Error: Failed to execute ZIP command"
    }
}

It is about 50 lines, all told, with a lot of spaces and is quite readable.

This handles the same tasks as the old set of scripts did, and it does this without undue complexity. I like it.

time to read 2 min | 398 words

This post is about the Rhino Tools project. It has been running for a long time now, over five years, and has amassed quite a few projects.

I really like the codebase in the projects in Rhino Tools, but secondary aspects have been creeping in that make managing the project harder. In particular, putting all the projects in a single repository made things easy, far too easy. Projects had an easy time taking dependencies that they shouldn’t, and the entire build process was… complex, to say the least.

I have been somewhat unhappily tolerant of this so far because, while it was annoying, it didn’t actively create problems for me. The problems started creeping in when I wanted to move Rhino Tools to use NHibernate 2.1. That is when I realized that this was going to be a very painful process, since I would have to take on the entire Rhino Tools set of projects in one go, instead of dealing with each of them independently. The fact that so many of the dependencies were in Rhino Commons, for which I have a profound dislike, helped increase my frustration.

There are other things that I find annoying now. Rhino Security is a general purpose library for NHibernate, but it makes a lot of assumptions about how it is going to be used, which is wrong. Rhino ETL had a dependency on Rhino Commons because of three classes.

To resolve that, I decided to make a few other changes: taking dependencies is supposed to be a hard process; it is supposed to make you think.

I have been working on splitting the Rhino Tools project into all its sub projects, so each of them is independent of all the others. That increases the effort of managing all of them as a unit, but decreases the effort of managing each of them independently.

The current goals are to:

  • Make it simpler to treat each project independently
  • Make it easier to deal with the management of each project (dependencies, build scripts)

There is a side line in which I am also learning to use Git, and there is a high likelihood that the separate Rhino Tools projects will move to GitHub. Subversion’s patching & tracking capabilities annoyed me for the very last time about a week ago.

time to read 4 min | 668 words

In a recent email thread, I was asked:

How come that you manage to post and comment that much? I bet you spend really loads of time on your blog.

The reason why I ask is because roughly a month ago I've decided to roll out my own programming blog. I've made three or four posts there
(in 5 days) and abandoned the idea because it was consuming waaaaay too much time (like 2-3 hours per post)

The only problem is time. If I'm gonna to post that much every day (and eventually also answer to comments), it seems that my effective working time would be cut by 3+ hours. Daily.  So here comes my original question. How come that you manage to post and comment that much?

And here is my answer:

I see a lot of people in a similar situation. I have been blogging for 6 years now (Wow! How the time flies), and I have gone from a blog with zero readers to one with a respectable readership. There are only two things that I do that are in any way unique. First, my "does it fit to be blogged about?" bar is pretty low. If it is interesting (to me), it will probably go on the blog.

Second, I don't mind pushing a blog post that requires fixing later. It usually takes me ten minutes to put out a blog post, so I can literally have a thought, post it, and move on, without really noticing it hurting my workflow. And the mere act of putting things in writing for others to read is a significant one; it allows me to look at things in a way that is hard to do when they are just in my head.

It does take time, make no mistake about that. The ten minute blog post accounts for about 30% of my posts; in a lot of cases, it is something that I have to work on for half an hour. In some rare cases, it goes to an hour or two. There have been several dozen posts that took days. But while it started out as a hobby, it has become part of my work now. The blog is my marketing effort, so to speak. And it is an effective one.

Right now, I have set it up so that about once a week I spend four or five hours crunching out blog posts and scheduling them for future publication. Afterward, I can blog whenever I feel like it, and that takes a lot of the pressure off. It helps that I try hard to schedule most of them one day after another. So I get a lot of breathing room, but there is new content every day.

That doesn’t mean that I actually blog once a week, though. I push stuff to the blog all the time, but it is usually short notes, not posts that take a lot of time.

As for comments, take a look at my commenting style: I generally comment only when I actually have something to add to the conversation, and I very rarely write long comments (if I do, they turn into posts :-) ).

It also doesn’t take much time to reply to most of them, and it creates a feedback cycle that means more people are participating in and reading the blog. It is rare that I post on a topic that really stirs people up and makes me feel obligated to respond to all or most of the comments. When that happens, it does bother me, because it takes too much time. In those cases, I’ll generally close the comment thread with a note about that.

One final thought: the time I spend blogging is not wasted. It is well spent, because it is an investment in reputation, respectability and familiarity.

time to read 2 min | 315 words

Continuing to shadow Davy’s series about building your own DAL, this post is about Executing Custom Queries.

The post does a good job of showing how you can create a good API on top of what is pretty much raw SQL. If you do have to create your own DAL, or need to use SQL frequently, please use something like that rather than using ADO.NET directly.

ADO.NET is not a data access library; it is the building blocks you use to build a data access library.

A few comments on queries in NHibernate. NHibernate uses a more complex model for queries, obviously. We have a set of abstractions between the query and the generated SQL, because we support several query options and a lot more mapping options. But while a large amount of effort goes into translating the user’s intent into SQL, there is an almost equivalent amount of work that goes into hydrating the results. Davy’s DAL supports only a very flat model, but with NHibernate, you can specify eager load options, custom DTOs or just plain value queries.

In order to handle those scenarios, NHibernate tracks the intent behind each column, and knows whether a column set represents an entity, an association, a collection or a value. That goes through a fairly complex process that I’ll not describe here, but once the first stage of the hydration process is done, NHibernate has a second stage available. This is why you can write queries such as “select new CustomerHealthIndicator(c.Age, c.SickDays.size()) from Customer c”.

The first stage is recognizing that we aren’t loading an entity (just fields of an entity); the second is passing them to the CustomerHealthIndicator constructor. You can actually take advantage of the second stage yourself: it is called a Result Transformer, and you can provide your own and set it on a query using SetResultTransformer(…).
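As an illustration of that second stage, here is a rough sketch of a custom result transformer and how it is attached to a query. This is only a sketch: CustomerHealthIndicator is the hypothetical DTO from the query above, and the transformer name is mine.

using System.Collections;
using NHibernate.Transform;

public class CustomerHealthIndicatorTransformer : IResultTransformer
{
    public object TransformTuple(object[] tuple, string[] aliases)
    {
        // tuple holds the raw column values for a single row, in select order
        return new CustomerHealthIndicator((int)tuple[0], (int)tuple[1]);
    }

    public IList TransformList(IList collection)
    {
        // a second chance to reshape the whole result set; nothing to do here
        return collection;
    }
}

// Attaching it to a query:
var indicators = session
    .CreateQuery("select c.Age, c.SickDays.size() from Customer c")
    .SetResultTransformer(new CustomerHealthIndicatorTransformer())
    .List<CustomerHealthIndicator>();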

time to read 2 min | 260 words

Continuing to shadow Davy’s series about building your own DAL, this post is about Lazy Loading. Davy’s lazy loading support is something that I had great fun reading; it is simple, elegant and beautiful.

I don’t have much to say about the actual implementation; NHibernate does things in much the same way (we can quibble about minor details such as who holds the identifier and other stuff, but they aren’t really important). A major difference between Davy’s lazy loading approach and NHibernate’s is that Davy doesn’t support inheritance. Inheritance and lazy loading play some nasty games with NHibernate’s implementation of lazy loaded entities.

While Davy can get away with loading the values into the same proxy object that he is using, NHibernate must load them into a separate object. Why is that? Let us say that we have a many to one association from AnimalLover to Animal. That association is lazy loaded, so we put an AnimalProxy (which inherits from Animal) as the value of the Animal property. Now, when we want to load the Animal property, NHibernate has to load the entity, and at that point, it discovers that the entity isn’t actually an Animal, but a Dog.

Obviously, you cannot load a Dog into an AnimalProxy. What NHibernate does in this case is load the entity into another object (a Dog instance) and, once that instance is loaded, direct all method calls to the new instance. It sounds complicated, but it is actually quite elegant and transparent from the user’s perspective.
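Here is a minimal sketch of the model in question. The class names follow the example above; the extra properties are just illustrative, and the comments spell out the constraint:

public class Animal
{
    public virtual string Name { get; set; }
}

public class Dog : Animal
{
    public virtual string FavoriteBone { get; set; }
}

public class AnimalLover
{
    // Lazily loaded: NHibernate places an AnimalProxy (which derives from Animal) here.
    // If the row turns out to be a Dog, the proxy cannot change its runtime type,
    // so NHibernate loads a real Dog instance and forwards every call to it.
    public virtual Animal Animal { get; set; }
}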

time to read 2 min | 224 words

I ran into the following in a code base that is currently being converted from a hand rolled data access layer to NHibernate.

private void GetHierarchyRecursive(int empId, List<Employee> emps) {
    List<Employee> subEmps = GetEmployeesManagedBy(empId);
    foreach (Employee c in subEmps) {
        emps.Add(c);
        GetHierarchyRecursive(c.EmployeeID, c.ManagedEmployees);
    }
}

GetHierarchyRecursive is a method that hits the database. In my book, a method that calls the database in a loop is guilty of a bug until proven otherwise (and even then I’ll look at it funny).

When the code was ported to NHibernate, the question of how to implement this came up, and I wanted to avoid having the same pattern repeat itself. The fun part with NHibernate is that it makes such things so easy.

session.CreateQuery("select e from Employee e join fetch e.ManagedEmployees")
    .SetResultTransformer(new DistinctRootEntityResultTransformer())
    .List<Employee>();

This will load the entire hierarchy in a single query. Moreover, it will build the organization tree correctly, so now you can traverse the entire graph without hitting an empty spot or triggering lazy loading.
