Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 1 min | 70 words

Well, it is nearly May 29th, and that means that I have been married for four years.

To celebrate that, I am offering a 29% discount on all our products (RavenDB, NHibernate Profiler, Entity Framework Profiler).

All you have to do is purchase any of our products using the following coupon code:

4th Anniversary

This offer is valid until the end of the month only.

time to read 4 min | 787 words

In my previous post, we increased the capacity of the cluster by moving all new work to a new set of servers. In this post, I want to deal with a slightly harder problem: what to do when it isn’t new data that is causing the issue, but existing data. We can’t just throw in a new server; we need to actually move data between nodes.

We started with the following configuration:

var shards = new Dictionary<string, IDocumentStore>
{
    {"Shared", new DocumentStore {Url ="http://rvn1:8080", DefaultDatabase = "Shared"}},
    {"EU", new DocumentStore {Url = "http://rvn2:8080", DefaultDatabase = "Europe"}},
    {"NA", new DocumentStore {Url = "http://rvn3:8080", DefaultDatabase = "NorthAmerica"}},
};

And what we want is to add another server for EU and NA. Our new topology would be:

var shards = new Dictionary<string, IDocumentStore>
{
    {"Shared", new DocumentStore {Url ="http://rvn1:8080", DefaultDatabase = "Shared"}},
    {"EU1", new DocumentStore {Url = "http://rvn2:8080", DefaultDatabase = "Europe1"}},
    {"NA1", new DocumentStore {Url = "http://rvn3:8080", DefaultDatabase = "NorthAmerica1"}},
    {"EU2", new DocumentStore {Url = "http://rvn4:8080", DefaultDatabase = "Europe2"}},
    {"NA2", new DocumentStore {Url = "http://rvn5:8080", DefaultDatabase = "NorthAmerica2"}},
};

There are a couple of things that we need to pay attention to. First, we no longer use the EU / NA shard keys; they have been removed in favor of EU1 & EU2 / NA1 & NA2. We’ll also change the sharding configuration so that it splits the new data evenly between the two new nodes for each region (see the previous post for the details on exactly how this is done). But what about the existing data? We need some way of actually moving the data. That is where our ops tools come into play.
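
For reference, here is a minimal sketch of what that even split might look like, following the same pattern as the resolution strategy from the previous post. This is illustrative only; the class name and the alternating counter are my assumptions, not the exact mechanism from that post:

public class EvenSplitShardResolutionStrategy : DefaultShardResolutionStrategy
{
    private int counter;

    public EvenSplitShardResolutionStrategy(IEnumerable<string> shardIds, ShardStrategy shardStrategy)
        : base(shardIds, shardStrategy)
    {
    }

    public override string GenerateShardIdFor(object entity, ITransactionalDocumentSession sessionMetadata)
    {
        var generatedShardId = base.GenerateShardIdFor(entity, sessionMetadata);
        // The region translator still returns EU / NA; we append a node
        // number, alternating between 1 and 2 to spread new data evenly.
        if (generatedShardId == "EU" || generatedShardId == "NA")
            return generatedShardId + ((Interlocked.Increment(ref counter) % 2) + 1);
        return generatedShardId;
    }
}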

We use the smuggler to move the data between the servers:

Raven.Smuggler.exe between http://rvn2:8080 http://rvn2:8080 --database=Europe --database2=Europe1 --transform-file=transform-1.js --incremental
Raven.Smuggler.exe between http://rvn2:8080 http://rvn4:8080 --database=Europe --database2=Europe2 --transform-file=transform-2.js --incremental
Raven.Smuggler.exe between http://rvn3:8080 http://rvn3:8080 --database=NorthAmerica --database2=NorthAmerica1 --transform-file=transform-1.js --incremental
Raven.Smuggler.exe between http://rvn3:8080 http://rvn5:8080 --database=NorthAmerica --database2=NorthAmerica2 --transform-file=transform-2.js --incremental

The commands are pretty similar, differing only in their options, so let us figure out what is going on. We are asking the smuggler to move the data between two databases in an incremental fashion, while applying a transform script. The transform-1.js file looks like this:

function(doc) {
    // Take the numeric part of the document id, e.g. 123 from EU/customers/123
    var id = doc['@metadata']['@id'];
    var node = (parseInt(id.substring(id.lastIndexOf('/')+1)) % 2);

    // Odd-numbered documents belong to the other node, so skip them here
    if(node == 1)
        return null;

    // Rewrite the shard id, e.g. EU becomes EU1
    doc["@metadata"]["Raven-Shard-Id"] = doc["@metadata"]["Raven-Shard-Id"] + (node+1);

    return doc;
}

And transform-2.js is exactly the same, except that it returns early if node is 0. In this way, we are able to split the data between the two new servers.

Note that because we use an incremental approach, we can do this even if it takes a long while; the window of time when we actually switch over is very narrow, and requires us to copy only the recently changed data.

That still leaves the question of how we are going to deal with old ids. We are still going to have ids like “EU/customers/###” in the database, even though those documents now live on one of the two new nodes. We handle this, like most low level sharding behaviors, by customizing the sharding strategy. In this case, we modify the PotentialShardsFor(…) method:

public override IList<string> PotentialShardsFor(ShardRequestData requestData)
{
    var potentialShardsFor = base.PotentialShardsFor(requestData);
    if (potentialShardsFor.Contains("EU"))
    {
        potentialShardsFor.Remove("EU");
        potentialShardsFor.Add("EU1");
        potentialShardsFor.Add("EU2");
    }
    if (potentialShardsFor.Contains("NA"))
    {
        potentialShardsFor.Remove("NA");
        potentialShardsFor.Add("NA1");
        potentialShardsFor.Add("NA2");
    }
    return potentialShardsFor;
}

In this case, we are doing a very simple thing: when the default shard resolution strategy detects that we want to go to the old EU node, we’ll tell it to go to both EU1 and EU2. A more comprehensive solution would narrow it down to the exact server, but that depends on how exactly you split the data, and is left as an exercise for the reader.
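
That said, a minimal sketch of such a narrowing, under the assumption that the data was split with the id-modulo rule from the transform scripts above (the single-key check is also my assumption about when we can safely narrow), might look like this:

public override IList<string> PotentialShardsFor(ShardRequestData requestData)
{
    var potentialShardsFor = base.PotentialShardsFor(requestData);
    // When the request targets a specific document id, reuse the modulo rule
    // from the transform scripts: even ids went to EU1, odd ids to EU2.
    if (potentialShardsFor.Contains("EU") && requestData.Keys.Count == 1)
    {
        var id = requestData.Keys[0];
        var node = int.Parse(id.Substring(id.LastIndexOf('/') + 1)) % 2;
        potentialShardsFor.Remove("EU");
        potentialShardsFor.Add("EU" + (node + 1));
    }
    // NA1 / NA2 would be handled in exactly the same way
    return potentialShardsFor;
}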

time to read 6 min | 1145 words

Continuing the theme of giving full answers on the blog to interesting questions from the mailing list, we have the following issue.

We have a sharded cluster, and we want to add a new node to it. How do we do that? I’ll discuss a few ways in which you can handle this scenario. But first, let us lay out the actual scenario.

We’ll use the Customers & Invoices system, and we put the data in the various shards according to the following scheme:

  • Customers – sharded by region
  • Invoices – sharded by customer
  • Users – shared database (not sharded)

We can configure this using the following:

var shards = new Dictionary<string, IDocumentStore>
{
    {"Shared", new DocumentStore {Url ="http://rvn1:8080", DefaultDatabase = "Shared"}},
    {"EU", new DocumentStore {Url = "http://rvn2:8080", DefaultDatabase = "Europe"}},
    {"NA", new DocumentStore {Url = "http://rvn3:8080", DefaultDatabase = "NorthAmerica"}},
};
ShardStrategy shardStrategy = new ShardStrategy(shards)
    .ShardingOn<Company>(company => company.Region, region =>
    {
        switch (region)
        {
            case "USA":
            case "Canada":
                return "NA";
            case "UK":
            case "France":
                return "EU";
            default:
                return "Shared";
        }
    })
    .ShardingOn<Invoice>(invoice => invoice.CompanyId)
    .ShardingOn<User>(user => user.Id, userId => "Shared");
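
To make this concrete, here is an illustrative usage sketch (my code, not from the post; the ShardedDocumentStore setup matches what is shown later on this page, and the entity shapes are assumptions):

using (var documentStore = new ShardedDocumentStore(shardStrategy).Initialize())
using (var session = documentStore.OpenSession())
{
    var company = new Company { Name = "Acme", Region = "UK" }; // translator maps UK to EU
    session.Store(company);                                     // stored on rvn2, id like EU/companies/1
    session.Store(new Invoice { CompanyId = company.Id });      // follows its company to the EU shard
    session.SaveChanges();
}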
 

So far, so good. Now, we have so much work that we can’t just have two servers for customers & invoices; we need more. We change the sharding configuration to include 2 new servers, and we get:

var shards = new Dictionary<string, IDocumentStore>
{
    {"Shared", new DocumentStore {Url = "http://rvn1:8080", DefaultDatabase = "Shared"}},
    {"EU", new DocumentStore {Url = "http://rvn2:8080", DefaultDatabase = "Europe"}},
    {"NA", new DocumentStore {Url = "http://rvn3:8080", DefaultDatabase = "NorthAmerica"}},
    {"EU2", new DocumentStore {Url = "http://rvn4:8080", DefaultDatabase = "Europe-2"}},
    {"NA2", new DocumentStore {Url = "http://rvn5:8080", DefaultDatabase = "NorthAmerica-2"}},
};

var shardStrategy = new ShardStrategy(shards);
shardStrategy.ShardResolutionStrategy = new NewServerBiasedShardResolutionStrategy(shards.Keys, shardStrategy)
    .ShardingOn<Company>(company => company.Region, region =>
    {
        switch (region)
        {
            case "USA":
            case "Canada":
                return "NA";
            case "UK":
            case "France":
                return "EU";
            default:
                return "Shared";
        }
    })
    .ShardingOn<Invoice>(invoice => invoice.CompanyId)
    .ShardingOn<User>(user => user.Id, userId => "Shared");

Note that we have a new shard resolution strategy. What is that? This is how we control the lower level details of the sharding behavior; in this case, we want to control where we’ll write new documents.

public class NewServerBiasedShardResolutionStrategy : DefaultShardResolutionStrategy
{
    public NewServerBiasedShardResolutionStrategy(IEnumerable<string> shardIds, ShardStrategy shardStrategy)
        : base(shardIds, shardStrategy)
    {
    }

    public override string GenerateShardIdFor(object entity, ITransactionalDocumentSession sessionMetadata)
    {
        var generatedShardId = base.GenerateShardIdFor(entity, sessionMetadata);
        if (entity is Company)
        {
            // Until Aug 1st, 2015, every new company goes to the new shards;
            // after that, two thirds of them (Ticks % 3 != 0) still do.
            if (DateTime.Today < new DateTime(2015, 8, 1) ||
                DateTime.Now.Ticks % 3 != 0)
            {
                return generatedShardId + "2"; // e.g. EU becomes EU2
            }
            return generatedShardId;
        }
        return generatedShardId;
    }
}

What is the logic? Whenever we create a new company, the GenerateShardIdFor(entity) method is called. For the next 3 months, we’ll create all new companies (and as a result, their invoices) on the new servers. After the 3 months have passed, we’ll still generate new companies at a rate of two thirds on the new servers vs. one third on the old servers.

Note that in this case, we didn’t have to modify any data whatsoever. And over time, the data will balance itself out between the servers. In my next post, we’ll deal with how we actually split an existing shard into multiple shards.

time to read 4 min | 657 words

Sometimes, we’ll reject a certain pull request from the community, not because it doesn’t meet our standards, or doesn’t do things properly. We’ll reject it because we don’t want to accept the responsibility for this.

This seems obvious, but I got a comment on my recent post saying:

If you e.g. say that you're willing to accept a new F# module within RavenDB that does scripted deploys and automation of various tasks, I bet people would jump in with enthusiasm.

I wouldn’t accept such a PR. Not because there is anything wrong with F#, or because it wouldn’t be valuable. I wouldn’t accept such a PR because none of the core team of RavenDB has great expertise in F#. Oh, we have a few guys that played with it, and would love to do some more. In fact, I’ve got a guy that is pushing hard for allowing RavenDB to run computations via F#. It is a pretty cool feature, and I’ll talk about that in detail in a future post.

But imagine the scenario outlined in the comment: an F# module that does some automation, scripted deploys, etc. We go over the code, we are happy with it, and we have a need for this feature. And at that point, we would have to make sure that it is written in C#.

Why? What is wrong with F#?

Pretty much nothing, except that any piece of code we ship (yes, even pieces that were contributed) has support requirements. A 2 AM call for one of the support team on that component means that the person answering the phone needs to: figure out the problem, decide if this is a misuse / feature / bug, and fix it by either suggesting a workaround or supplying a hot fix.

And if the component with the issue is written in F#, that means that you need the entire support group (now just over 20 people) to be able to understand and work with it. And to the nitpickers: yes, it doesn’t take a long while to learn a new language, but it takes a good long while before you can be effective with it, especially at 2 AM. Now multiply that by 20+ people, and you might see where we start to have an issue.

There is also the community angle to consider: code that is written in a language more people know gets more contributors.

Now, I have actually run a real live study on that. In 2005 I wrote a view engine for MonoRail (an MVC framework for ASP.Net) called Brail, written in Boo. (You can read all about that here.) Naturally, building a DSL that is based on Boo, I wrote it in Boo. And it was mildly successful. It was a pure OSS project, with multiple contributors. And I was the sole person that would actually modify the Boo code. Boo code looks pretty much like Python, and there isn’t any FP aspect to it. I find it eminently approachable and easy to work with.

The Brail codebase had pretty much zero outside contributions. At that point, I ported the Brail codebase to C# (the post is from 2006; note the discussion around the reasoning). After that happened, we started getting more outside contributors, and people were actually willing to take a look at the code.

For fun, this happened in a situation where anyone using Brail was actually writing Boo code. It isn’t like they weren’t aware of how it worked; they pretty much had to be, because Brail was a very thin DSL over Boo code. But moving the codebase to C# significantly improved the level of involvement.

Something that I think people miss is the fact that this kind of decision has pretty much nothing to do with the language or its merits. It is all about the implications for the project as a whole.

time to read 7 min | 1265 words

A question came up in the mailing list: how do we enable sharding for an existing database? I’ll deal with data migration in this scenario in a later post.

The scenario is that we have a very successful application, and we start to feel the need to move the data to multiple shards. Currently all the data is sitting on the RVN1 server. We want to add RVN2 and RVN3 to the mix. For this post, we’ll assume that we have the notion of Customers and Invoices.

Previously, we accessed the database using a simple document store:

var documentStore = new DocumentStore
{
	Url = "http://RVN1:8080",
	DefaultDatabase = "Shop"
};

Now, we want to move to a sharded environment, so we want to write it like this: existing data is going to stay where it is, and new data will be sharded according to geographical location.

var shards = new Dictionary<string, IDocumentStore>
{
	{"Origin", new DocumentStore {Url = "http://RVN1:8080", DefaultDatabase = "Shop"}},//existing data
	{"ME", new DocumentStore {Url = "http://RVN2:8080", DefaultDatabase = "Shop_ME"}},
	{"US", new DocumentStore {Url = "http://RVN3:8080", DefaultDatabase = "Shop_US"}},
};

var shardStrategy = new ShardStrategy(shards)
	.ShardingOn<Customer>(c => c.Region)
	.ShardingOn<Invoice>(i => i.Customer);

var documentStore = new ShardedDocumentStore(shardStrategy).Initialize();

This wouldn’t actually work; we are going to have to do a bit more. To start with, what happens when we don’t have a 1:1 match between region and shard? That is when the translator becomes relevant:

.ShardingOn<Customer>(c => c.Region, region =>
{
    switch (region)
    {
        case "Middle East":
            return "ME";
        case "USA":
        case "United States":
        case "US":
            return "US";
        default:
            return "Origin";
    }
})

We basically say that we map several values into a single shard. But that isn’t enough. Newly saved documents are going to have the shard prefix, so saving a new customer and invoice in the US shard will show up as:

(screenshot: document ids with the US/ shard prefix)

But existing data, created before sharding, doesn’t have this prefix:

(screenshot: document ids without any shard prefix)

So we need to take some extra effort to let RavenDB know about them. We do this using the following two functions:

Func<string, string> potentialShardToShardId = val =>
{
    var start = val.IndexOf('/');
    if (start == -1)
        return val;
    var potentialShardId = val.Substring(0, start);
    if (shards.ContainsKey(potentialShardId))
        return potentialShardId;
    // this is probably an old id, let us use it.
    return "Origin";
};
Func<string, string> regionToShardId = region =>
{
    switch (region)
    {
        case "Middle East":
            return "ME";
        case "USA":
        case "United States":
        case "US":
            return "US";
        default:
            return "Origin";
    }
};

We can then register our sharding configuration like so:

var shardStrategy = new ShardStrategy(shards)
    .ShardingOn<Customer, string>(c => c.Region, potentialShardToShardId, regionToShardId)
    .ShardingOn<Invoice, string>(x => x.Customer, potentialShardToShardId, regionToShardId);

That takes care of handling both new and old ids, and lets RavenDB understand how to query things in an optimal fashion. For example, a query on all invoices for ‘customers/1’ will only hit the RVN1 server.
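
For illustration, a query along these lines (my sketch, not from the post) would be routed to RVN1 only, because "customers/1" carries no shard prefix:

// Illustrative sketch: the translators above resolve "customers/1"
// to the Origin shard, so only RVN1 is queried.
using (var session = documentStore.OpenSession())
{
    var invoices = session.Query<Invoice>()
        .Where(i => i.Customer == "customers/1")
        .ToList();
}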

However, we aren’t done yet. New customers that don’t belong to the Middle East or USA will still go to the old server, and we don’t want any modification to the id there. We can tell RavenDB how to handle it like so:

var defaultModifyDocumentId = shardStrategy.ModifyDocumentId;
shardStrategy.ModifyDocumentId = (convention, shardId, documentId) =>
{
    if(shardId == "Origin")
        return documentId;

    return defaultModifyDocumentId(convention, shardId, documentId);
};
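
The net effect, as an illustrative sketch (the region values and ids here are assumptions; actual ids depend on your hilo ranges), is:

using (var session = documentStore.OpenSession())
{
    session.Store(new Customer { Region = "Middle East" }); // id like ME/customers/42, stored on RVN2
    session.Store(new Customer { Region = "Germany" });     // maps to Origin, id stays plain, e.g. customers/43
    session.SaveChanges();
}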

That is almost the end. There is one final issue that we need to deal with: the old documents, created before we used sharding, don’t have the required sharding metadata. We can fix that using a store listener. So we have:

var documentStore = new ShardedDocumentStore(shardStrategy);
documentStore.RegisterListener(new AddShardIdToMetadataStoreListener());
documentStore.Initialize();

Where the listener looks like this:

public class AddShardIdToMetadataStoreListener : IDocumentStoreListener
{
    public bool BeforeStore(string key, object entityInstance, RavenJObject metadata, RavenJObject original)
    {
        if (metadata.ContainsKey(Constants.RavenShardId) == false)
        {
            metadata[Constants.RavenShardId] = "Origin"; // the default shard id
        }
        return false;
    }

    public void AfterStore(string key, object entityInstance, RavenJObject metadata)
    {
    }
}

And that is it. I know that there seems to be quite a lot going on here, but it basically can be broken down into three actions that we take:

  • Modify the existing metadata to add the sharding server id via the listener.
  • Modify the document id convention so documents on the old server won’t have a designation (optional).
  • Modify the sharding configuration so we’ll understand that documents without a shard prefix actually belong to the Origin shard.

And that is pretty much it.

time to read 7 min | 1218 words

In a recent post, a commenter suggested that using F# rather than C# would dramatically reduce the code size (measured in lines of code).

My reply to that was:

F# would also lead to a lot more complexity, reduced participation in the community, harder to find developers and increased costs all around.

And the data to back up this statement:

C# developers vs. F# developers:

(job board screenshots: tens of thousands of C# developer listings vs. fewer than 500 F# listings)

Nitpicker corner: now, I realize that this is a sensitive topic, so I’ll note that this isn’t meant to be a scientific observation. It is a data point that amply demonstrates my point. I’m not going to run a full bore study. And yes, those numbers are about jobs, not people, but I’m assuming that the numbers are at least roughly comparable.

The reply to this was:

You have that option to hire cheaper developers. I think that the cheapest developers usually will actually increase your costs. But if that is your way, then I wish you good luck, and I accept that as an answer. How about "a lot more complexity"?

Now, let me try to explain my thinking. In particular, I would strongly disagree with the “cheapest developers” mentality. That is very far from what I’m trying to achieve. You usually get what you pay for, and trying to save on software development costs when your product is software is pretty much the definition of penny wise and pound foolish.

But let us ignore such crass terms as money and look at availability. There are fewer than 500 jobs for F# developers (with salary ranges implying that there aren’t a whole lot of F# developers queuing up for those jobs). There are tens of thousands of jobs for C# developers, and again, the salary range suggests that there isn’t a dearth of qualified candidates driving up the costs. From those numbers, and my own experience, I can say the following.

There are a lot more C# developers than there are F# developers. I know that this is a stunning conclusion, likely to shatter the worldview of certain people. But I think that you would find it hard to refute that. Now, let us try to build on this conclusion.

First, there was the original point, that F# leads to a reduced number of lines. I’m not going to argue that, mostly because software development isn’t an issue of who can type the most. The primary costs of development are design, testing, debugging, production proofing, etc. The act of actually typing is pretty unimportant.

For fun, I checked the line count numbers for similar projects (RavenDB & CouchDB). The count of lines in the Raven.Database project is roughly 100K. The count of lines in the CouchDB src folder is roughly 45K. CouchDB is written in Erlang, which is another functional language, so we are at least not comparing apples to camels here. We’ll ignore things like different feature sets, different platforms, and the different languages for now, and just say that an F# program can be delivered with 25% of the lines of code of a comparable C# program.

Note that I’m not actually agreeing with this statement; I’m just using it as a basis for the rest of this post. And to (try to) forestall nitpickers: it is easy to show great differences in development time and lines of code in specific cases where F# is ideally suited to the task. But we are talking about general purpose usage here.

Now, for the sake of argument, we’ll even assume that the cost of F# development is 50% of the cost of C# development. That is, that the reduction in line count actually has a real effect on time and efficiency. In other words, even if an F# program is 25% of the size of a similar C# program, we’ll not assume that the C# version takes 4 times as much time to write, only twice as much.

Where does this leave us? It leaves us with a potential pool of people to hire that is vanishingly small. What are the implications of writing software in a language that fewer people are familiar with?

Well, it is harder to find people to hire. That is true not only for people that you hire “as is”. Let us assume that you’re going to give those people additional training after hiring them, so they would know F# and could work on your product. An already steep learning curve has just become that much steeper. Not only that, but this additional training means that the people you hire are more expensive (there is an additional period in which they are only learning). On top of all of that, it will be harder to hire people in the first place, not just because you can’t find people already experienced with F#, but because people won’t want to work for you.

Most developers at least try to pay attention to the market, and they make a very simple calculation: if I spend the next 2 – 5 years working in F#, what kind of hirability am I going to have at the end? Am I going to be one of those competing for the < 500 F# jobs, or am I going to be in a position to pick from the tens of thousands of C# jobs?

Now, let us consider another aspect of this: the community around a project. I actually have a pretty hard time finding any significant F# OSS projects. But leaving that aside, the number of contributors, and the ability of users to go into your codebase and look for themselves, is a major advantage. We have had users skip the bug report entirely and just send us a pull request for an issue they ran into; others have contributed (significantly) to the project. That is possible only because there is wide appeal. If the language is not well known, the number of people that are going to spend the time and do something with it is going to be dramatically lower.

Finally, there is the complexity angle. Consider any major effort required. Recently, we have been working on porting RavenDB to Linux. Now, F# works on Linux, but anyone we would bring in to help us port RavenDB to Linux would have this additional (rare) requirement: understanding F# as well as Linux & Mono. Any problem that we ran into would have to first be simplified to a C# sample so it could be readily understood by people who aren’t familiar with F#, etc.

To go back to the beginning, using F# might have reduced the lines of code count, but it wouldn’t reduce the time needed to actually build the software, and it would limit the number of people that can participate in the project, either as employees or as Open Source contributors.

time to read 1 min | 166 words

This blog has been running continuously for over 10 years now. And while I think that the level of content has improved somewhat over the years (certainly my command of English has), I’m afraid that we never really touched the design.

This blog’s theme was taken from (if I recall properly) the dasBlog default skin, with some color shifting to make it a bit more orange. And I kept that look for the past 10 years, even as we moved between various blogging platforms. This has grown tiring, and more to the point, the requirements that we have today are not nearly the same as before.

Hence, the new design. This includes responsive design, a mobile friendly layout, and improvements to just about every little bit of the user experience.

One major feature is the introduction of series, which will allow readers to easily go through an entire related series of posts without them (or me) having to do anything.

I would appreciate any feedback you have.
