time to read 3 min | 497 words

We have been heads down for a while, doing some really cool things with RavenDB (sharding, read striping, query intersection, indexing reliability and more). But that meant that for a while, things that are not about writing code for RavenDB have been more or less on auto-pilot.

So here are some things that we are planning. We will increase the pace of RavenDB courses and conference presentations. You can track it all on the RavenDB events page.

Conferences

RavenDB Courses

NHibernate Courses

Not finalized yet:

  • August 2012.
    • User groups talks in Philadelphia & Washington DC by Itamar Syn-Hershko.
    • One day boot camp for moving from being familiar with RavenDB to being a master in Chicago.
  • September 2012.
    • RavenDB Course in Austin, Texas.

Consulting Opportunities

We are also available for on site consulting in the following locations and times. Please contact us directly if you would like to arrange for one of the RavenDB core team to show up at your doorstep, or if you want me to do architecture or NHibernate consulting.

  • Oren Eini – Malmo, June 26 – 27.
  • Oren Eini – Berlin, July 2 – 4.
  • Itamar Syn-Hershko – New York, Aug 23.
  • Itamar Syn-Hershko – Chicago, Aug 30 or Sep 3.
  • Itamar Syn-Hershko – Austin, Sep 3.
  • Itamar Syn-Hershko – Toronto, Sep 9 – 10.
  • Itamar Syn-Hershko – London, Sep 11.
time to read 3 min | 418 words

It is interesting to note that for a long while, what we were trying to do with RavenDB was to make it use fewer and fewer resources. One of the reasons for that is that using fewer resources is obviously better, because we aren't wasting anything.

The other reason is that we have users running us on 512 MB / 650 MHz 32-bit Celeron machines. So we really need to be able to fit into a small box (and also leave enough processing power for the user to actually do something with the machine).

We have gotten really good at doing that, actually.

The problem is that we also have users running RavenDB on standard server hardware (32 GB / 16 cores, RAID and what not) in which case they (rightly) complain that RavenDB isn’t actually using all of their hardware.

Now, being conservative about resource usage is generally good, and we do have configuration options in place that can tell RavenDB to use more memory. It is just that requiring you to set them isn't polite behavior.

RavenDB shouldn't, in most cases, require anything special for you to run it; we want it to be truly a zero admin database. The solution? Take the system state into account and increase the amount of work that we do to get things done. And yes, I am aware of the pitfalls.

As long as there is enough free RAM available, we will increase the number of documents that we index in a single batch. That is subject to some limits (for example, if we just created a new index on a big database, we need to make sure we aren't trying to load it entirely into memory), and it knows how to reserve some room for other things, and how to throttle down as well as up.
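To make that concrete, here is a minimal sketch of the kind of heuristic involved. The class name, thresholds and numbers are all my own illustration, not RavenDB's actual code:

using System;

public class IndexBatchSizeTuner
{
    private int currentBatchSize = 512;          // illustrative starting batch size
    private const int MaxBatchSize = 128 * 1024; // never index more than this in one batch
    private const int MinBatchSize = 512;        // never throttle below the starting size

    public int NextBatchSize(long freeMemoryInMb)
    {
        if (freeMemoryInMb > 1024)
        {
            // Plenty of free RAM: take bigger bites, up to the limit
            currentBatchSize = Math.Min(currentBatchSize * 2, MaxBatchSize);
        }
        else if (freeMemoryInMb < 256)
        {
            // Memory pressure: throttle down to leave room for everything else
            currentBatchSize = Math.Max(currentBatchSize / 2, MinBatchSize);
        }
        return currentBatchSize;
    }
}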

This post is written before I had the chance to actually test this on a production-size dataset, but I am looking forward to seeing how it works.

Update: Okay, that is encouraging. It looks like what we did just made things over 7 times faster. And this isn't a micro benchmark; this is what happens when you throw this at a multi GB database with full text search indexing.

Next, we need to investigate what we are going to do about multiple running indexes and how this optimization affects them. Fun!

time to read 2 min | 254 words

As I said in my previous post, I was tasked with loading 3.1 million files into RavenDB, most of them in the 1 – 2 KB range.

Well, the first thing I did had absolutely nothing to do with RavenDB; it had to do with avoiding dealing with this:

[Image: a directory listing showing millions of small extracted files]

As you can see, that is a lot.

But when the freedb dataset is distributed, what we have is actually:

[Image: the freedb dataset as a single tar.bz2 archive]

This is a tar.bz2, which we can read using the SharpZipLib library.
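For reference, reading it looks roughly like this; the file name is made up, and the record parsing is elided:

using System.IO;
using System.Text;
using ICSharpCode.SharpZipLib.BZip2;
using ICSharpCode.SharpZipLib.Tar;

using (var file = File.OpenRead("freedb.tar.bz2")) // hypothetical file name
using (var bzip = new BZip2InputStream(file))
using (var tar = new TarInputStream(bzip))
{
    TarEntry entry;
    while ((entry = tar.GetNextEntry()) != null)
    {
        if (entry.IsDirectory)
            continue;

        // Copy the current entry (a single 1 - 2 KB record) out of the archive
        var buffer = new MemoryStream();
        tar.CopyEntryContents(buffer);
        var record = Encoding.UTF8.GetString(buffer.ToArray());

        // ... parse the record and hand it off to the import code ...
    }
}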

The really interesting thing is that reading the archive (even after adding the cost of decompressing it) is far faster than reading directly from the file system. Most file systems do badly with large numbers of small files, and at any rate, it is very hard to optimize the access pattern for a lot of small files.

However, when we are talking about something like reading a single large file? That is really easy to optimize, and it significantly reduces the cost of the input I/O.

Just this step has reduced the cost of importing by a significant factor; we are talking about twice as fast as before, with a lot less disk activity.

time to read 3 min | 582 words

One of the things that is really important for us in RavenDB is the notion of Safe by Default and Zero Admin. What this means is that you shouldn't really have to think about what you are doing for the common cases; RavenDB will understand what you mean and figure out the best way to do things.

One of the cases where RavenDB does that is when we need to generate new ids. There are several ways to generate new ids in RavenDB, but the most common one, and the default, is to use the hilo algorithm. It basically (ignoring concurrency handling) works like this:

var currentMax = GetMaxIdValueFor("Disks");
var limit = currentMax + 32;
SetMaxIdValueFor("Disks", limit);

And now we can generate ids in the range of currentMax to currentMax+32, and we know that no one else can generate those ids. Perfect!
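The payoff is on the client side, where generating an id is now a purely local operation until the range runs out. A rough sketch (the names here are mine, not RavenDB's):

// 'current' started out at currentMax, 'limit' is the end of our reserved range
var id = Interlocked.Increment(ref current);
if (id > limit)
{
    // Only now do we have to go back to the server and reserve a new range
    RequestNewRange(); // hypothetical helper
    id = Interlocked.Increment(ref current);
}
var documentId = "disks/" + id;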

The good thing about this is that now that we have a reserved range, we can create ids without going to the server. The bad thing is that we have now reserved a range of 32. If we create just one or two documents and then restart, we would need to request a new range, and the rest of the old range would be lost. That is why the default range size is 32. It is small enough that gaps aren't that important*, and since in most applications you create entities infrequently, and usually just one at a time, it is big enough to still provide a meaningful optimization in the number of times you have to go to the server.

* What does it mean, “gaps aren’t important”? The gaps are never important to RavenDB, but people tend to be bothered when they see disks/1 and disks/2132 with nothing in the middle. Gaps are only important for humans.

So this is perfect for most scenarios. Except for one very common scenario: bulk import.

When you need to load a lot of data into RavenDB, you will very quickly note that most of the time is actually spent just getting new ranges. More time than actually saving the new documents takes, in fact.

Now, this value is configurable, so you can set it to a higher value if you care to, but still, that was annoying.

Hence, what we have now. Take a look at the log below:

[Image: HTTP log showing the id range requests made during a bulk import]

It details the request pattern in a typical bulk import scenario. We request an id range for disks, and then we request it again, and again, and again.

But notice what happens as time goes by (and not that much time, at that): RavenDB recognizes that you need bigger ranges, and it gives them to you. In fact, very quickly we can see that we only request a single range per batch, because RavenDB has optimized itself based on our own usage pattern.
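A sketch of that self-tuning behavior, with made-up names and thresholds (the real logic also has to handle concurrency and an upper bound):

using System;

public class AdaptiveHiloRangeSize
{
    private long capacity = 32; // the default range size
    private DateTime lastRangeRequestUtc = DateTime.MinValue;

    public long NextCapacity()
    {
        var now = DateTime.UtcNow;
        if (now - lastRangeRequestUtc < TimeSpan.FromSeconds(1))
        {
            // We burned through the previous range almost immediately,
            // which smells like a bulk import. Double the next range.
            capacity = Math.Min(capacity * 2, 1024 * 1024);
        }
        lastRangeRequestUtc = now;
        return capacity;
    }
}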

Kinda neat, even if I say so myself.

time to read 5 min | 845 words

In my last post, I described the following problem:

[Image: the UI mockup from the previous post, searching for designs by architect name]

And stated that the following trivial solution is the wrong approach to the problem:

select d.* from Designs d 
 join ArchitectsDesigns da on d.Id = da.DesignId
 join Architects a on da.ArchitectId = a.Id
where a.Name = @name

The most obvious reason is actually that we are thinking too linearly. I intentionally showed the problem statement in terms of the UI, not in terms of a document specifying what should be done.

The reason for that is that in many cases, a spec document is making assumptions that the developer should not. When working on a system, I like to have drafts of the screens with rough ideas about what is supposed to happen, and not much more.

In this case, let us consider the problem from the point of view of the user. Searching by the architect's name makes sense to the user; that is usually how they think about it.

But does it make sense from the point of view of the system? We want to provide a good user experience, which means that we aren't just going to give the user a text box to plug some values into. For one thing, they would have to put in the architect's full name exactly as it is stored in our system. That is going to be a tough call in many cases. Ask any architect what Gaudi's first name is, and see what sort of response you'll get.

Another problem is how to deal with misspellings, partial names, and other information. What if we actually have the architect id, and are used to typing that? I would much rather type 1831 than Mies Van Der Rohe, and most users who work with the application day in and day out would agree.

From the system perspective, we want to divide the problem into two separate issues: finding the architect and finding the appropriate designs. From a user experience perspective, that means that the text box is going to be an ajax suggest box, and the results will be loaded based on a valid id.

Using RavenDB and ASP.Net MVC, we would have the following solution. First, we need to define the search index:

[Image: code screenshot – the search index definition]
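The screenshot itself isn't preserved here, but an index along these lines would do the job (the Architect class and the index name are my assumptions):

using System.Linq;
using Raven.Abstractions.Indexing;
using Raven.Client.Indexes;

public class Architects_Search : AbstractIndexCreationTask<Architect>
{
    public Architects_Search()
    {
        Map = architects => from architect in architects
                            select new
                            {
                                architect.Id,
                                architect.Name
                            };

        // Analyze the name so we can run full text searches against it
        Index(x => x.Name, FieldIndexing.Analyzed);
    }
}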

This gives us the ability to search across both the name and the id easily, and it allows us to do full text searches as well. The next step is the actual query for an architect by name:

[Image: code screenshot – the architect search action]
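Again, the original code was a screenshot. Here is a sketch of what such an action could look like, assuming the RavenDB client's Search and Suggest extensions (RavenSession stands in for however you obtain your IDocumentSession, and the JSON shape is my invention):

// requires System.Linq, System.Web.Mvc, Raven.Client.Linq
public ActionResult Search(string term)
{
    var matches = RavenSession.Query<Architect, Architects_Search>()
        .Search(x => x.Name, term)
        .ToList();

    if (matches.Count > 0)
        return Json(matches.Select(x => new { x.Id, x.Name }), JsonRequestBehavior.AllowGet);

    // Nothing matched; ask RavenDB for suggestions based on string distance
    var suggestions = RavenSession.Query<Architect, Architects_Search>()
        .Where(x => x.Name == term)
        .Suggest();

    if (suggestions.Suggestions.Length == 1)
    {
        // A single suggestion - assume that is what the user meant
        matches = RavenSession.Query<Architect, Architects_Search>()
            .Search(x => x.Name, suggestions.Suggestions[0])
            .ToList();
        return Json(matches.Select(x => new { x.Id, x.Name }), JsonRequestBehavior.AllowGet);
    }

    // Several candidates - empty result set, plus alternatives for the user
    return Json(new
    {
        Results = new object[0],
        Alternatives = suggestions.Suggestions
    }, JsonRequestBehavior.AllowGet);
}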

Looks complex, doesn’t it? Well, there is certainly a lot of code there, at least.

First, we look for a matching result in the index. If we find anything, we send just the name and the id of the matching documents to the user. That part is perfectly simple.

The interesting bits happen when we can't find anything at all. In that case, we ask RavenDB to find results that might be what the user is looking for. It does that by running a string distance algorithm over the data already in the database and providing us with a list of suggestions about what the user might have meant.

We take it one step further. If there is just one suggestion, we assume that this is what the user meant, and just return the results for that value. If there is more than one, we send an empty result set to the client, along with a list of alternatives that it can suggest to the user.

From here, the actual task of getting the designs for this architect becomes as simple as:

[Image: code screenshot – querying for the designs of the selected architect]
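Assuming a Design document carries the ids of its architects, that code would be something like:

var designs = RavenSession.Query<Design>()
    .Where(design => design.Architects.Any(id => id == architectId))
    .ToList();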

And it turns out that when you think about it right, searching is simple.

time to read 1 min | 184 words

The problem statement is best described using:

[Image: UI mockup of a search screen for finding designs by architect name]

This seems like a nice and easy problem, right? We join the architects table to the designs table and we are done.

select d.* from Designs d 
 join ArchitectsDesigns da on d.Id = da.DesignId
 join Architects a on da.ArchitectId = a.Id
where a.Name = @name

This is a trivial solution, and shouldn’t take a lot of time to build…

It is also entirely the wrong approach to the problem. Can you tell me why?

time to read 6 min | 1047 words

In my previous post, I introduced RavenDB Sharding and discussed how we can use sharding in RavenDB. We discussed both blind sharding and data driven sharding. Today I want to introduce another aspect of RavenDB Sharding: using Map/Reduce to gather information from multiple shards.

We start by defining a map/reduce index. In this case, we want to look at the invoice totals per date. We define the index like this:

public class InvoicesAmountByDate : AbstractIndexCreationTask<Invoice, InvoicesAmountByDate.ReduceResult>
{
    public class ReduceResult
    {
        public decimal Amount { get; set; }
        public DateTime IssuedAt { get; set; }
    }

    public InvoicesAmountByDate()
    {
        Map = invoices =>
              from invoice in invoices
              select new
              {
                  invoice.Amount,
                  invoice.IssuedAt
              };

        Reduce = results =>
                 from result in results
                 group result by result.IssuedAt
                 into g
                 select new
                 {
                     Amount = g.Sum(x => x.Amount),
                     IssuedAt = g.Key
                 };
    }
}

And then we execute the following code:

using (var session = documentStore.OpenSession())
{
    var asian = new Company { Name = "Company 1", Region = "Asia" };
    session.Store(asian);
    var middleEastern = new Company { Name = "Company 2", Region = "Middle-East" };
    session.Store(middleEastern);
    var american = new Company { Name = "Company 3", Region = "America" };
    session.Store(american);

    session.Store(new Invoice { CompanyId = american.Id, Amount = 3, IssuedAt = DateTime.Today.AddDays(-1)});
    session.Store(new Invoice { CompanyId = asian.Id, Amount = 5, IssuedAt = DateTime.Today.AddDays(-1) });
    session.Store(new Invoice { CompanyId = middleEastern.Id, Amount = 12, IssuedAt = DateTime.Today });
    session.SaveChanges();
}

We use three way sharding, based on the region of the company, so we actually have the following documents in three different servers:

First server, Asia:

[Image: the company and invoice documents stored on the Asia shard]

Second server, Middle East:

[Image: the company and invoice documents stored on the Middle East shard]

Third server, America:

[Image: the company and invoice documents stored on the America shard]

Now, let us see what happens when we use the map/reduce query:

using (var session = documentStore.OpenSession())
{
    var reduceResults = session.Query<InvoicesAmountByDate.ReduceResult, InvoicesAmountByDate>()
        .ToList();

    foreach (var reduceResult in reduceResults)
    {
        string dateStr = reduceResult.IssuedAt.ToString("MMM dd, yyyy", CultureInfo.InvariantCulture);
        Console.WriteLine("{0}: {1}", dateStr, reduceResult.Amount);
    }
    Console.WriteLine();
}

As you can see, again, we make no distinction in our code about using sharding; we just query it normally. The results, however, are quite interesting:

[Image: console output showing the invoice totals per date]

As you can see, we got the correct results, cluster wide.

RavenDB was able to query all the servers in the cluster for their results, reduce them again, and get us the total across all three servers.

And that, my friends, is truly awesome.

time to read 6 min | 1170 words

In my previous post, I introduced RavenDB Sharding and discussed how we can use Blind Sharding to good effect. I also mentioned that this approach is somewhat lacking, because we don't have enough information at hand to really understand what is going on. Let me show you how we can define a proper sharding function, one that shards your documents based on their actual data.

We are still going to run the exact same code as we have done before:

string asianId, middleEasternId, americanId;

using (var session = documentStore.OpenSession())
{
    var asian = new Company { Name = "Company 1", Region = "Asia" };
    session.Store(asian);
    var middleEastern = new Company { Name = "Company 2", Region = "Middle-East" };
    session.Store(middleEastern);
    var american = new Company { Name = "Company 3", Region = "America" };
    session.Store(american);

    asianId = asian.Id;
    americanId = american.Id;
    middleEasternId = middleEastern.Id;

    session.Store(new Invoice { CompanyId = american.Id, Amount = 3 });
    session.Store(new Invoice { CompanyId = asian.Id, Amount = 5 });
    session.Store(new Invoice { CompanyId = middleEastern.Id, Amount = 12 });
    session.SaveChanges();

}

using (var session = documentStore.OpenSession())
{
    session.Query<Company>()
        .Where(x => x.Region == "America")
        .ToList();

    session.Load<Company>(middleEasternId);

    session.Query<Invoice>()
        .Where(x => x.CompanyId == asianId)
        .ToList();
}

What is different now is how we initialize the document store:

[Image: code screenshot – initializing the sharded document store]
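The screenshot isn't preserved, but initializing a data driven shard strategy looks roughly like this (the urls are whatever your three shards happen to listen on):

// namespaces: System.Collections.Generic, Raven.Client, Raven.Client.Document, Raven.Client.Shard
var shards = new Dictionary<string, IDocumentStore>
{
    {"Asia", new DocumentStore {Url = "http://localhost:8080"}},
    {"Middle-East", new DocumentStore {Url = "http://localhost:8081"}},
    {"America", new DocumentStore {Url = "http://localhost:8082"}},
};

var shardStrategy = new ShardStrategy(shards)
    .ShardingOn<Company>(company => company.Region)      // companies go to the shard for their region
    .ShardingOn<Invoice>(invoice => invoice.CompanyId);  // invoices follow their company

var documentStore = new ShardedDocumentStore(shardStrategy).Initialize();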

What we have done is given RavenDB the information about how our entities are structured and how we should shard them: the companies are sharded based on their region, and the invoices based on their company id.

Let us see how the code behaves now, shall we? As before, we will analyze the output of the HTTP logs from executing this code. Here is the first server's output:

[Image: HTTP log for the first shard (Asia)]

As before, we can see that the first four requests are there to handle the hilo generation, and they are only there for the first server.

The 5th request is saving two documents. Note that this is the Asia server, and unlike the previous example, we don’t get companies/1 and invoices/1 in the first shard.

Instead, we have companies/1 and invoices/2. Why is that? Well, RavenDB detected that invoices/2 belongs to a company that is associated with this shard, so it placed it in the same shard. This ensures that we have good locality and that we can utilize features such as Includes or Live Projections even when using sharding.

Another interesting aspect is that we don't see a request for companies in the America region. Because this is what we shard on, RavenDB was able to figure out that there is no way that we will have a company in the America region in the Asia shard, so we can skip this call.

Conversely, when we need to find an invoice for an Asian company, we can see that this request gets routed to the proper shard.

Exciting, isn’t it?

Let us see what we have in the other two shards.

[Image: HTTP log for the second shard (Middle East)]

In the second shard, we can see that we have just two requests: one to save two documents (again, a company and its associated invoice), and one to load a particular company by id.

We were able to optimize all the other queries away, because we actually understand the data that you save.

And here is the final shard results:

[Image: HTTP log for the third shard (America)]

Again, we got a save for the two documents, and then we can see that we routed the appropriate query to this shard, because this is the only place that can answer this question.

Data Driven Sharding For The Win!

So far we have seen how RavenDB can optimize the queries made to the shards when it has enough information to do so. But what happens when it can't?

For example, let us say that I want to get the 2 highest value invoices. Since I didn’t specify a region, what would RavenDB do? Let us look at the code:

var topInvoices = session.Query<Invoice>()
    .OrderByDescending(x => x.Amount)
    .Take(2)
    .ToList();

foreach (var invoice in topInvoices)
{
    Console.WriteLine("{0}\t{1}", invoice.Amount, invoice.CompanyId);
}

This code outputs:

[Image: console output showing the two highest invoices and their companies]

So we were actually able to get just the two highest invoices. But what actually happened?

Shard 1 (Asia):

[Image: the query log for the Asia shard]

Shard 2 (Middle-East):

[Image: the query log for the Middle East shard]

Shard 3 (America):

[Image: the query log for the America shard]

As you can see, we actually made 3 queries, asking the same question of each of the shards. Each shard returned its own results. On the client side, we merged those results and gave you back exactly the information that you requested, across the entire cluster.

time to read 6 min | 1148 words

From the get go, RavenDB was designed with sharding in mind. We had a sharding client inside RavenDB when we shipped, and it made for some really cool demos.

It also wasn't really popular, and we never implemented some things for sharding. We always intended to, but we had other things to do and no one was asking for it much.

That was strange. I decided that we needed to do two major things.

  • First, to make sure that the experience of writing in a sharded environment was as close as we could get to the one you get with a non-sharded environment.
  • Second, we had to make it simple to use sharding.

Before our changes, in order to use sharding you had to do the following:

  • Set up multiple RavenDB servers.
  • Create a list of those servers' URLs.
  • Implement IShardStrategy, which exposes:
    • IShardAccessStrategy – determines how we make the calls to the servers.
    • IShardSelectionStrategy – determines which server a new instance will go to, and which server an existing instance belongs on.
    • IShardResolutionStrategy – determines which servers we should query when we are querying for data (allowing us to optimize which servers we actually hit for particular queries).

All in all, you would need to write a minimum of 3 classes, plus some sharding code that can be… somewhat tricky.

Oh, it works, and it is a great design. It is also complex, and it makes it harder to use sharding.

Instead, we now have the following scenario:

[Image: three RavenDB servers, each running on a different port]

As you can see, here we have three different servers, each running on a different port. Let us see what we need to do to get working with this from the client code:

[Image: code screenshot – setting up the sharded document store]
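The screenshot isn't preserved, but setting up a blind sharded document store is just a few lines along these lines (the shard names and urls are illustrative):

// namespaces: System.Collections.Generic, Raven.Client, Raven.Client.Document, Raven.Client.Shard
var shards = new Dictionary<string, IDocumentStore>
{
    {"one", new DocumentStore {Url = "http://localhost:8080"}},
    {"two", new DocumentStore {Url = "http://localhost:8081"}},
    {"three", new DocumentStore {Url = "http://localhost:8082"}},
};

var documentStore = new ShardedDocumentStore(new ShardStrategy(shards)).Initialize();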

First, we define the servers (and their names), then we create a shard strategy and use it to create a sharded document store. Once that is done, we are home free and can do pretty much whatever we want:

string asianId, middleEasternId, americanId;

using (var session = documentStore.OpenSession())
{
    var asian = new Company { Name = "Company 1", Region = "Asia" };
    session.Store(asian);
    var middleEastern = new Company { Name = "Company 2", Region = "Middle-East" };
    session.Store(middleEastern);
    var american = new Company { Name = "Company 3", Region = "America" };
    session.Store(american);

    asianId = asian.Id;
    americanId = american.Id;
    middleEasternId = middleEastern.Id;

    session.Store(new Invoice { CompanyId = american.Id, Amount = 3 });
    session.Store(new Invoice { CompanyId = asian.Id, Amount = 5 });
    session.Store(new Invoice { CompanyId = middleEastern.Id, Amount = 12 });
    session.SaveChanges();

}

using (var session = documentStore.OpenSession())
{
    session.Query<Company>()
        .Where(x => x.Region == "America")
        .ToList();

    session.Load<Company>(middleEasternId);

    session.Query<Invoice>()
        .Where(x => x.CompanyId == asianId)
        .ToList();
}

What you see here is code that saves both companies and invoices, and does so over multiple servers. Let us see the log output for this code:

[Image: HTTP log for the first shard]

You can see a few interesting things here:

  • The first four requests are to manage the document ids (hilos). By default, we use the first server as the one that will store all the hilo information.
  • Next (request #5), we are saving two documents; note that the shard id is now part of the document id.
  • Requests 6 and 7 are actually queries; we returned 1 result for the first query, and none for the second.

Let us look at another shard now:

[Image: HTTP log for the second shard]

This is much shorter, since we don’t have the hilo requests. The first request is there to store two documents, and then we see two queries, both of which return no results.

And the last shard:

[Image: HTTP log for the third shard]

Here, again, we don't see the hilo requests (since they all go to the first server). We do see the put of the two documents, and request #2 is a query that returns no results.

Request #3 is interesting, because we did not see it anywhere else. Since we did a load by id, and since by default we store the shard id in the document id, we were able to optimize this operation and go directly to the relevant shard, bypassing the need to query any other server.

The last request is a query, for which we have a result.

So what did we have so far?

We were able to easily configure RavenDB to use 3-way sharding in a few lines of code. It automatically distributed writes and reads for us, and when it could, it optimized the data access so that it would only touch the relevant shards. Writes are distributed on a round robin basis, so it is pretty fair. And reads are optimized whenever we can figure out a minimal set of shards to query. For example, when we do a load by id, we can figure out what the shard id is and query that server directly, rather than all of them.
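That load-by-id optimization works because the shard id is baked into the document id itself; conceptually, the routing is as simple as this sketch (the id format is assumed for illustration):

// e.g. "two/companies/3" belongs to the shard named "two"
string ResolveShard(string documentId)
{
    return documentId.Substring(0, documentId.IndexOf('/'));
}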

Pretty cool, if you ask me.

Now, you might have noticed that I called this post Blind Sharding. The reason for that name is that this is pretty much the lowest rung on the sharding ladder. It is good, it splits your data and it tries to optimize things, but it isn't the best solution. I'll discuss a better solution in my next post.
