Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 5 min | 989 words

I recently ran into the blog post Scaling to 100M: MySQL is a Better NoSQL (from about 6 months ago) and cringed, hard. Go ahead and read it, I’ll wait. There is so much stuff going on there that I disagree with that I barely know where to start.

I think that what annoys me the most about this post is that it attempts to explain a decision, but does so in a way that clearly shows a lack of depth in the decision making process.

I absolutely agree with the first section: you shouldn’t make your database choice based on hype, or on whatever it is that “everyone” is doing. But “if everyone jumps off the roof…” is a bad argument to make when literally everyone is jumping off the roof (maybe it is on fire, maybe it is a one meter drop, maybe there is a pool to jump into, etc.). If this sounds ridiculous, that is because it is.

In particular, I take offense at:

This post will explain why we’ve found that using MySQL for the key/value use case is better than most of the dedicated NoSQL engines, and provide guidelines to follow when using MySQL in this way.

Then they go on to list some of their requirements. I’m assuming that you read the post, so I’ll answer it directly.

The dataset they are talking about is about 210GB, composed of about 100 million records. In other words, you can fit the entire thing into memory on an AWS instance such as a d2.8xlarge, at a cost of about $1.50 / hour on a 3 year plan. Read this again: their dataset can actually fit in memory.

And even with that, they report a rate of 200K requests per minute, which is funny, because the typical metric is requests per second. At that point we are talking about around 3,400 requests per second. But they have three database servers, so we are probably talking about around a thousand requests per second per server.

Oh, and they report average latency numbers of 1 – 1.5 ms. Leaving aside the fact that averages mean very little (a percentile summary would work much better), that is a really long time to process a single request.

I really liked this one:

Our existing system has scaling / throughput / concurrency / latency figures that are impressive for any NoSQL engine.

No, it isn’t. Just to give you some idea, assuming even distribution of the data, each site entry is about 2KB in size, so their throughput numbers are less than 10 MB / second.
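
To spell the arithmetic out (my own back of the envelope, using their numbers): 210GB over 100 million records is about 2.1KB per record, and 200,000 requests per minute is about 3,333 requests per second, so 3,333 × 2KB comes to roughly 6.5 MB per second for the entire cluster.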

Now, let us talk about the ways that their approach is actually broken. To start with, they have statements such as this one:

Serial keys impose locks… …Also notice that we are not using serial keys; instead, we are using varchar(50), which stores client-generated GUID values—more about that in the next section.

I mean, okay, so you have no idea how to generate serial keys without requiring locks, because things like that are so hard. I can think of several ways without even trying hard (Snowflake, HiLo, ranges, guid.comb, to name just a few). Now, why would you want to take the time to do something like this? Because using a GUID is… how shall we say it, a horrible idea!

GUIDs are not sorted, which means that when you are inserting (at a high rate) a lot of entries into the table, you force a lot of page splits, which result in a bigger and deeper B+Tree, which results in a higher cost to find records, which is exactly what you were trying to prevent in the first place.

Allowing sequential inserts can improve your insert performance (and, afterward, the query speed) by orders of magnitude. So it is most certainly worth investing the 30 minutes it takes to code a sequential number solution from scratch, assuming you can’t just use one of the literally dozens of ready made solutions.
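
To make this concrete, here is a minimal sketch of a range (HiLo style) id generator; all the names are illustrative and the database call is a placeholder. The point is that the database is touched once per range, not once per insert, so there is no per insert lock on the server:

public class RangeIdGenerator
{
    private readonly object _locker = new object();
    private long _current;
    private long _rangeEnd;

    public long NextId()
    {
        lock (_locker) // local, in-process lock only
        {
            if (_current >= _rangeEnd)
            {
                // one short database round trip per range (e.g. bump a counter row by 1024),
                // instead of a lock / round trip per inserted row
                _rangeEnd = ReserveRangeFromDatabase(1024);
                _current = _rangeEnd - 1024;
            }
            return ++_current;
        }
    }

    private long ReserveRangeFromDatabase(int size)
    {
        // placeholder for the actual "bump the counter row by @size" call
        return _rangeEnd + size;
    }
}

Snowflake style ids (timestamp plus node id plus sequence) and guid.comb get you the same roughly sorted keys without even needing a central counter.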

But the thing that really takes the cake is the fact that all of their queries take the following form:

[image: the query they run, which does the lookup using a sub-select]

So a sub-select is required to run this query (which with most reasonable query optimizers will be exactly equivalent to the query plan of an inner join), but the usage of TEXT data in the site information will mean at least another disk seek (to load the actual value) after the relevant row was located.

Now, it is possible that MySQL was a good decision for their use case, but what they ended up with is:

  • Not an optimal usage of MySQL in the first place.
  • A small data set that can fit on one machine, and can actually fit into memory.
  • An inflexible system that is very hard to change (needing another queryable field is now a major operation).
  • Low overall performance.

That last one is very important. Just to give you some idea, for the data size they are talking about, we could probably handle, on a single machine, the full 200,000 requests per minute that they are serving on their three way cluster, and do it in about one second.

That is assuming I’m building a dedicated solution to the problem (a trie for the routing, simple memory mapped storage for the actual site data, where the routing trie contains the position of the data). Of course, you would be crazy to actually do that. Just having a speedy solution is not enough; you also need to handle all of the rest of the associated costs of a database (operations, metrics, backup/restore, replication, etc.).

But the provided solution is just Not Good.

time to read 2 min | 271 words

In RavenDB 4.0, we are writing a lot of code that needs to do something, then react to something, then do something, etc.

For example, an index needs to index documents until it runs out of them, then it waits for more documents, and when they arrive, it indexes them, then waits again, etc.

Here are two such examples (note that the code is written just to demonstrate a point):

In the first example, we see how we handle indexing. The outer loop runs as long as the database runs, and within it we index until we run out of stuff to do. When we run out, we’ll wait. During that time, the thread is paused, and unless a new document comes in for one of the collections that this index covers, there is nothing that needs to be done.
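
The shape of that loop is roughly this (a sketch with illustrative names, not the actual RavenDB 4.0 code):

using System.Threading;

public class IndexingLoopSketch
{
    private readonly ManualResetEventSlim _newDocuments = new ManualResetEventSlim();
    private volatile bool _running = true;

    public void Run()
    {
        while (_running)
        {
            // index until there is nothing left to do
            while (TryIndexBatch())
            {
            }

            // park the thread until a document arrives for one of the
            // collections this index covers
            _newDocuments.Wait();
            _newDocuments.Reset();
        }
    }

    public void OnDocumentChanged() => _newDocuments.Set();

    public void Stop()
    {
        _running = false;
        _newDocuments.Set();
    }

    private bool TryIndexBatch()
    {
        // placeholder: index the next batch, return false when there is no more work
        return false;
    }
}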

In the other case, we are actually handling a web socket connection, so there are some differences, but for the most part, this is pretty much the same. We use an async event, and we need to keep the connection alive, so if we have nothing to do, we’ll wake up every 5 seconds and just write a new line to the socket, keeping it alive.
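
A sketch of that second loop, assuming an awaitable event with a timed wait (a SemaphoreSlim stands in for it here) and placeholder send helpers; again, illustrative only:

using System;
using System.Threading;
using System.Threading.Tasks;

public class WebSocketWriteLoopSketch
{
    private readonly SemaphoreSlim _workAvailable = new SemaphoreSlim(0);

    public void NotifyWork() => _workAvailable.Release();

    public async Task RunAsync(CancellationToken token)
    {
        while (token.IsCancellationRequested == false)
        {
            // wait up to 5 seconds for something to send
            var hasWork = await _workAvailable.WaitAsync(TimeSpan.FromSeconds(5), token);
            if (hasWork == false)
            {
                await SendAsync("\r\n", token); // heartbeat, keeps the connection alive
                continue;
            }

            await SendPendingNotificationsAsync(token);
        }
    }

    private Task SendAsync(string data, CancellationToken token)
    {
        // placeholder for the actual web socket write
        return Task.CompletedTask;
    }

    private Task SendPendingNotificationsAsync(CancellationToken token)
    {
        // placeholder for sending whatever queued up
        return Task.CompletedTask;
    }
}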

Nitpicker corner: this isn’t new, and it is about the simplest and most obvious concurrency strategy you can have.

But because it is the simplest, it also has some major advantages. The code does very little actual concurrency, so we can reason about it quite easily. We keep dumbing our code down to similar patterns, because this is much easier to maintain over the long run.

time to read 2 min | 234 words

The following is likely to end up in the list of questions we’ll ask candidates to answer when they apply to Hibernating Rhinos.

We need to store information about keys and their location on disk. We need to implement a trie. We want the trie to store int64 size values and unbounded UTF8 string keys.

Given that, we have the following interface:
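
Something along these lines (a sketch only; TryWrite is the name used in the requirements below, the other members are illustrative):

public interface ITrie
{
    // returns false if there is no more room in the 32KB array
    bool TryWrite(string key, long value);

    bool TryRead(string key, out long value);

    bool TryDelete(string key);
}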

We need to implement that with the following properties:

  • The trie will be represented by a single 32 kilobyte byte array.
    • You cannot store any additional information about the trie outside of the array.
  • The cost of searching in the trie is proportional to the size of the key.
  • There are no duplicates.
  • This is a single threaded implementation.
  • If the array is full, it is fine to return false from TryWrite().
  • Unsafe code is fine (but not required).
  • You cannot use any in-memory structure other than the byte array. But it is fine to allocate memory during the processing of the operations (for example, to turn the string key into a byte array).

We will be looking at the following aspects of the implementation:

  • Correctness
  • Performance
  • Space used

The idea is to pack as much as possible into as small a space as possible, while at the same time getting great performance.

time to read 5 min | 926 words

We didn’t plan to make a lot of changes on the client side for RavenDB 4.0. We want to make most of the changes on the server side, and just adapt the client when we are doing something differently.

However, we ran into an interesting problem. A while ago we asked Idan, a new guy at the company, to write a Python client for RavenDB as part of the onboarding process. Unlike with the JVM client, we didn’t want an exact duplication with just the changes needed to match the new platform. We basically told him: go ahead and do this. And he went, and he did. But along the way he got to choose his own approach for the implementation, and he didn’t copy the same internal architecture. The end result is that the Python client is significantly simpler than the C# one.

There are a few reasons for that. To start with, the Python client doesn’t need to implement features such as async or Linq. But for the most part, it is because Idan was able to look at the entire system, grasp it, and then just implement the whole thing in one go.

The RavenDB network layer on the client side has gone through many changes over the years. In particular, we have the following concerns handled in this layer:

  • Actual sending of requests to the server.
  • High availability, Failover, load balancing and SLA matching.
  • Authentication of the requests
  • Caching server responses.

I’m probably ignoring a few that snuck in there, but those should be the main ones. The primary responsibility of the network layer in RavenDB is to send requests to the server (with all the concerns mentioned above) and give us the JSON object back.

The problem is that right now we have those responsibilities scattered all over the place. Each was added at a different time, in several different ways, and the handling is done in very different locations in the code. This leads to complexity, and seeing everything in one place in the Python client is great motivation to simplify things. So we are going to do that.


Having a single location where all of those concerns are handled will make things simpler for us. But we can do better. Instead of just returning a JSON object, we can solve a few additional issues: performance, stability and ease of deployment.

We can do that by removing the internal dependency on JSON.Net. A while ago we got tired of conflicting JSON.Net versions, and we decided to just internalize it in our code. That led to a much simpler deployment story, but it does add some level of complexity if you want to customize how your entities go over the wire and into RavenDB (because you have to use our own copy of JSON.Net for that). And it complicates getting new JSON.Net updates, which we want.

So we are going to do something quite interesting here. We are going to drop the dependency on JSON.Net entirely. We already have a JSON parser, and one that is extremely efficient, using the blittable format. More importantly, it writes directly to native memory. That gives us a very important property: it makes it very easy to store the data, already parsed, with a well known size. Let me try to explain my reasoning.

A very important feature of RavenDB is the idea that it can cache requests for you. When you make a request to the server, the server returns the reply as well as an etag. The next time you make this request, you’ll send the same etag, and if the response hasn’t changed, the server can just tell you to use the value in the cache. Right now we are storing the full string of the response in the cache. That leads to a few issues; in particular, while we save on the I/O of sending the response again, we still need to parse the JSON, and we need to keep relatively large (sometimes very large) .NET strings around for a long time.

But if we use blittable data in the client cache, then we have no need to do additional parsing, and we don’t need to involve the GC on the client side to collect things; we can manage that memory explicitly and directly. So the second time you make a request, we’ll hit the server, learn that the value is still relevant, and then just use it.
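
Roughly, the flow looks like this (a simplified sketch; the cache, the CachedItem type and ParseToBlittableAsync are illustrative stand-ins, not the actual client code):

public async Task<BlittableJsonReaderObject> GetAsync(string url)
{
    CachedItem cached = _cache.TryGet(url); // previously stored etag + parsed blittable value

    var request = new HttpRequestMessage(HttpMethod.Get, url);
    if (cached != null)
        request.Headers.TryAddWithoutValidation("If-None-Match", cached.Etag);

    var response = await _httpClient.SendAsync(request);
    if (response.StatusCode == HttpStatusCode.NotModified)
        return cached.Value; // no JSON parsing, no new .NET strings, just reuse the cached data

    // parse once, into natively allocated blittable memory, and cache it with its etag
    var result = await ParseToBlittableAsync(response);
    _cache.Set(url, response.Headers.ETag?.Tag, result);
    return result;
}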

That is great for us, except for one thing: JSON.Net does a really great job of translating between JSON and .NET objects, and that isn’t really something that we want to reimplement. Instead, we are going to handle blittable objects throughout the RavenDB codebase. We don’t actually need to deal with the user’s .NET types until the very top layers of the system. And we can handle that by having a convention that will call into JSON.Net (the only place where that will happen) and translate .NET objects to JSON and back. The nice thing about it is that since this is just a single location where this happens, we can do it dynamically, without having a hard dependency on a specific JSON.Net version.

That, in turn, also exposes the ability to do the JSON <—> .NET objects translation using other serializers, such as Jil, for example, or whatever the end user decides.
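
The convention itself could be as simple as a pair of delegates, something along these lines (an illustrative sketch, not the actual API):

using System;

public class SerializationConventions
{
    // The defaults would call into JSON.Net (loaded dynamically), but users can
    // plug in Jil or anything else that turns entities into JSON text and back.
    public Func<object, string> SerializeEntity { get; set; }

    public Func<string, Type, object> DeserializeEntity { get; set; }
}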

time to read 1 min | 189 words

Here we have another aspect of making the operations team’s life easier: supporting server-side import/export, including multiple databases and various options.

[image: the server-side import/export screen]

Leaving aside the UI bugs in the column alignment (which will be fixed long before you see this post), there are a couple of things to note here. I have actually written about this feature before, but I do think that this is a snazzy feature.

What is more important is that we managed to get some early feedback on the released version from actual ops people, who noted that while this is very nice, what they actually want is to be able to script it. So this screen serves both as the UI for activating this feature and as a way to generate the curl script to execute it from a cron job.

As a reminder, we have the RavenDB Conference in Texas in a few months, where we’ll present RavenDB 3.5 in all its glory.


time to read 12 min | 2270 words

Replication with RavenDB is one of our core features, something that we have had in the product from the very first release (although we did simplify things by several orders of magnitude over the years). Replication is responsible for high availability, load balancing and several other goodies. For the most part, replication works quite well, and it is a lot less complex than some of the other things that grew over the years (LoadDocument, for example). That said, it doesn’t mean that it can’t be improved. And since this is such an important aspect of RavenDB, we spent quite a lot of time seeing what we can do to improve it.

Here are the basic design guidelines:

  • RavenDB is going to remain a multi master system, where each node can accept writes and distribute them to its siblings.
    • We intend to use Raft for dynamic leader selection, but that is a layer on top of the basic replication.
    • That means that RavenDB is an AP system, and needs to handle conflicts.
  • We mostly deal with fully connected graphs of relatively small clusters (less than 10 nodes).
    • Higher numbers of nodes are quite frequent, but those don’t use a mesh topology; they typically go for a hierarchy.

This post is going to focus solely on the server side aspects of replication; I’ll do another post about the changes from the client’s perspective.

Probably the first thing that we intend to change is how we track the replication history. Currently, we track the last 50 changes made on a document. This has several problems:

  • if there have been more than 50 changes on a document between replication batches, we’ll get a false conflict.
  • if the documents are small, in many cases the replication metadata is actually bigger than the document itself.

We are going to move to an explicit vector clock implementation. This is a bit complex, because there are multiple concepts that we need to track concurrently here.

Every time that a document changes, the server generates an etag for that change. This etag is an int64 number that is always increasing. It is used for optimistic concurrency, indexing, etc. The etag value is per server, and cannot be used across servers. Each server has a unique identifier. Joining the two together, whenever a document is changed on a server directly (not via replication), we’ll stamp it with the server id and the current document etag.

In other words, let us imagine the following set of operations happening in a three node cluster.

Users/1 is created on Server A, it gets an etag of 1 and a vector clock of {A:1}. Users/2 is created on Server A, it gets an etag of 2 and a vector clock of {A:2}. Users/3 is created on Server C, it gets etag 1 (because etags are local per server) and its vector clock is {C:1}. Servers A and C both replicate to Server B, and to each other, resulting in the following cluster wide setup:

          | Server A      | Server B      | Server C
Users/1   | etag 1, {A:1} | etag 1, {A:1} | etag 2, {A:1}
Users/2   | etag 2, {A:2} | etag 3, {A:2} | etag 3, {A:2}
Users/3   | etag 3, {C:1} | etag 2, {C:1} | etag 1, {C:1}

Note that the etags assigned for each document are not consistent across the different servers, but that they are temporally consistent with respect to the writes. In other words, Users/1 will always have a lower etag than Users/2.

Now, when we modify Users/3 on server B, we’ll get the following cluster wide picture:

          | Server A          | Server B          | Server C
Users/1   | etag 1, {A:1}     | etag 1, {A:1}     | etag 2, {A:1}
Users/2   | etag 2, {A:2}     | etag 3, {A:2}     | etag 3, {A:2}
Users/3   | etag 4, {B:4,C:1} | etag 4, {B:4,C:1} | etag 4, {B:4,C:1}

As I said, only changes made on the server directly (and not via replication) will impact the document’s vector clock, but any modification (via replication or directly on the node) will modify the document’s etag.

Using such vector clocks, we gain two major features. First, it is very easy to see if we have conflicting changes: {C:1, B:4} is obviously a parent of {C:4, B:6}, while {C:2, A:6} is a conflict. The other is that we can now form a very easy view of the kind of changes that we have received. We do that using a server wide vector clock. In the case of the table above, the server wide vector clock would be {A:2, B:4, C:1}. In other words, it will contain the latest etag seen from each server.
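
Conceptually, the comparison is straightforward. Here is a sketch of it, treating a vector clock as a map from server id to the highest etag seen from that server (the real implementation will use a more compact representation, of course):

using System;
using System.Collections.Generic;
using System.Linq;

public enum ClockRelation { Equal, Ancestor, Descendant, Conflict }

public static class VectorClock
{
    public static ClockRelation Compare(Dictionary<Guid, long> local, Dictionary<Guid, long> remote)
    {
        bool localHasNewer = false, remoteHasNewer = false;

        foreach (var serverId in local.Keys.Union(remote.Keys))
        {
            var localEtag = local.TryGetValue(serverId, out var l) ? l : 0;
            var remoteEtag = remote.TryGetValue(serverId, out var r) ? r : 0;

            if (localEtag > remoteEtag) localHasNewer = true;
            if (remoteEtag > localEtag) remoteHasNewer = true;
        }

        if (localHasNewer && remoteHasNewer) return ClockRelation.Conflict;   // concurrent updates
        if (localHasNewer) return ClockRelation.Descendant;                   // local is strictly newer
        if (remoteHasNewer) return ClockRelation.Ancestor;                    // remote is strictly newer
        return ClockRelation.Equal;
    }
}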

We’ll get to why exactly this is important for us in a bit. For now, just accept that it is, because the next part is about how we are going to actually do the replication. In previous versions of RavenDB, we did each replication batch through a separate REST call to the remote server. This has a few disadvantages. It meant that we had to authenticate every single time, and we couldn’t make any assumptions about the state of the remote server.

In RavenDB 4.0, we intend to move replication to use Websockets only. On startup, a node will connect to all its siblings, and stay connected to them (retrying the connection on any interruption). This has the nice benefit of only doing authentication once, of course, but far more interesting from our perspective is the fact that it means we can rely on the state of the server on the other side. TCP has a few very interesting properties here for us. In particular, it guarantees ordered delivery of messages, which means that we can assume that once we have sent a message to a server on a TCP connection, it either got it, or the TCP connection will return an error at some point, forcing us to reconnect.

In other words, it isn’t just authentication that I can do just once. I can also query the remote server for its state (as it regards me) once, and since I’m the only one that can talk as myself, and I’m the one sending the details, then as long as the connection lasts, I know what the other side knows about me. Confusing, isn’t it?

But basically it means that instead of having to ask on each batch what is the last document that the destination server saw from me, I can assume that the last document I sent was received. That lasts until the connection breaks, in which case I need to figure out what actually arrived. This seems like a small thing, but it will actually allow me to cut the number of roundtrips for a batch in half. There are other aspects here that are really nice: I get to piggyback on TCP’s congestion control, so if the remote server is slow in accepting updates, it will (eventually) show up as a blocking write on my end. That seems like a bad thing, right? But this is actually what I want.

Each destination server in RavenDB 4.0 is going to get its own dedicated thread. This thread will manage all outgoing communication with that server. That gives us several really important behaviors. It means that we can easily account for problems by just looking at the thread responsible (hm… I see that replication to node C is consuming a lot of CPU), and it also turns the entire replication process into a pretty simple single threaded operation. Because of the blittable format, we don’t need complex prefetching strategies or sharing of memory in the replication, and a slow node will not impact any other replication behavior. That, in turn, basically means a thread per connection (see the previous discussion on the expected number of nodes being relatively small) and a very simple programming / error handling / communication model.

The replication sending logic goes something like this:

Yes, my scratch pad language is still Boo (Python, if you aren’t familiar with it), and this is meant to convey how simple that thing is. All the complexity that we currently have to deal with is out. Of course, the real code will need to have error handling, reconnection logic, etc, but that is roughly all you’ll need.

Actually, that is a lie. The problem with the code above is that it doesn’t work well with multiple servers. In other words, it is perfect for two nodes replicating to one another, but when you have multiple nodes, you don’t want a single document update to be replicated from each node to every other node. That is why we have the concept of vector clocks. At the document level, this serves as an easy way to detect conflicts and to see which version of a document is causally later than another version. But at the server level, we gather the latest writes from all the nodes that we saw to get the server wide vector clock.

When a document is modified on a server, that server will immediately send that document to all its siblings, because there is no way that they already have it. But if a document was replicated to a node, that node will not replicate it onward right away. Instead, it will let a set amount of time go by (defaulting to once a minute) and then ask each sibling what is the latest server wide vector clock that it is aware of. If the remote vector clock is equal to or higher than the local server wide vector clock, then we know that the sibling is up to date. In this case, the local server will let the remote server know that they are a match up to the current etag on that server.

If, however, the remote vector clock is smaller than (or conflicting with) the local one, then we need to send the relevant documents. We already know what is the last etag that the remote server has from us (we negotiated that when we established the connection, and we update it every time we send a document to the remote server). Since we have the current vector clock from the remote server, we aren’t going to just blindly send all the documents after the last etag we sent to the remote server. Instead, we are going to check each of those to see if its vector clock is larger than (or conflicting with) the remote server wide vector clock. In this way, we can send the remote server only the documents that it doesn’t have.
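
In code, the decision of what to send after such a negotiation looks roughly like this (reusing the Compare sketch from above; the rest of the names are illustrative):

foreach (var doc in GetDocumentsAfter(lastEtagSentToRemote))
{
    var relation = VectorClock.Compare(doc.VectorClock, remoteServerWideVectorClock);
    if (relation == ClockRelation.Ancestor || relation == ClockRelation.Equal)
        continue; // the remote side already has this change (or a newer one), skip it

    SendToRemote(doc); // a larger or conflicting vector clock, the remote needs to see it
}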

What about delayed servers? If we have a new node in the cluster, and we just started replicating to it, what happens when a new document is written? Above, I mentioned that the server it was written to will immediately send it to all its siblings, but that is an over simplification. An extremely important property of RavenDB replication is that documents are always replicated in the order the server saw them (either written to it directly, or in the order they were replicated to it). If we allowed a new write to jump ahead and be sent to another server directly, that might break this guarantee. Looking at the code above, it would also require us to write a separate code path to handle such things. But that is the beauty of this design. All of this logic is actually encapsulated in WaitForMoreDocuments(). You can think of WaitForMoreDocuments() as a simple manual reset event. Whenever a document is written to the server directly, it will be set; but not when a document is replicated to us.

So WaitForMoreDocuments() will basically wait for a document to be written to us, or for a timeout, in which case it will check with its sibling for new stuff that needs to go over the wire because it was replicated to us. But the code is the same code, and the behavior is the same. If we are busy sending data to a new server? We’ll still set the event, but that will have no effect on the actual behavior. And when we are working with a fully caught up server, the act of writing a single document will immediately free the replication thread to start sending it to the sibling. All the desired behaviors, and very little actual complexity.

On the receiving end, we get just the documents we don’t have, as well as the last etag from that source server (which we’ll keep in persistent storage). Whenever we get a new document, we’ll check if it is conflicting. If so, we’ll mark the document as conflicting and allow the user to define default strategies to handle that (latest, resolve to remote, resolve to local). But we are also going to allow the user to define a Javascript function that will merge the conflicted documents directly. This way you can have your business logic for the resolution directly on the server, and you’ll never actually see any conflicts externally.

There are quite a lot of small details that I’m skipping, but this is already long enough, and should give you a pretty good idea about where we are headed.

time to read 2 min | 286 words

A natural consequence of RavenDB’s decision to never reject writes (a design decision that was influenced heavily by the Dynamo paper) is that it is possible for two servers to get client writes to the same document without coordination. RavenDB will detect that and handle it. Here is RavenDB 2.5 in conflict detection mode, for example:

[image: RavenDB 2.5 in conflict detection mode]

In RavenDB 3.0, we added the ability to have the server resolve conflicts automatically, based on a few predefined strategies.

[image: the predefined automatic conflict resolution strategies in RavenDB 3.0]

This is in addition to giving you the option for writing your own conflict resolution strategies, which can apply your own business logic.

What we have found is that while some users deployed RavenDB from the get go with a conflict resolution strategy planned and already set, in many cases users only got around to doing this when conflicts actually showed up in their production systems. In particular, when something failed in a way that was visible enough for users to make a fuss about it.

At that point, they investigate, figure out that they have a whole bunch of conflicts, and set the appropriate conflict resolution strategy for their needs. But that strategy only applies to future conflicts. That is why RavenDB 3.5 added the ability to also apply those strategies to existing conflicts:

[image: applying a conflict resolution strategy to existing conflicts in RavenDB 3.5]

Now you can easily select the right behavior and apply it, no need to worry.

time to read 4 min | 748 words

We have been trying to get RavenDB to run on Linux for over 4 years. A large portion of our motivation to build Voron was that it would also allow us to run on Linux natively, and free us from dependencies on Windows OS versions.

The attempt was actually made several times, and Voron has been running successfully on Linux for the past 2 years, but Mono was never really good enough for our needs. My hypothesis is that if we had been working with it from day one, it would have been sort of possible to do it. But trying to port a non trivial codebase (quite a bit more complex and demanding than your run of the mill business app) to Mono after the fact was just a no go. There was too much that we did in ways that Mono just couldn’t handle, from GC corruption to just plain “no one ever called this method ever” bugs. We hired a full time developer to handle porting to Linux, and after about six months of effort, all we had to show for it was !@&#! and stuff that would randomly crash in the Mono VM.

The CoreCLR changed things dramatically. It still takes a lot of work, but now it isn’t about fighting tooth and nail to get anything working. My issues with the CoreCLR are primarily in the area of “I wanna have more of the goodies”. We had our share of issues porting; some of them were expected, such as a very different I/O subsystem and behaviors. Others were just weird (you can’t convince me that the Out Of Memory Killer is the way things are supposed to be, or the fsync dance for creating files), but a lot of it was obvious (case sensitive paths, / vs \, etc.). But pretty much all of this was what it was supposed to be. We would have seen the same stuff if we were working in C.

So right now, we have RavenDB 4.0 running on:

  • Windows x64 arch
  • Linux x64 arch

We are working on getting it running on Windows and Linux in 32 bits mode as well, and we hope to be able to run it on ARM (a lot of that depends on the speed of the CoreCLR port to ARM, which seems to be moving along quite nicely).

While there is still a lot to be done, let me take you on a tour of what we already have.

First, the setup instructions:

[image: the setup instructions]

This should take care of all the dependencies (including installing CoreCLR if needed), and run all the tests.

You can now run the dnx command (or the dotnet cli, as soon as that becomes stable enough for us to use), which will give you RavenDB on Linux:

[image: RavenDB running on Linux]

By and large, RavenDB on Windows and Linux behaves pretty much in the same manner. But there are some differences.

I mentioned the out of memory killer nonsense behavior, right? Instead of relying on swap files and the inherent unreliability of Linux memory allocations, we create temporary files and map them as our scratch buffers, to avoid the OS suddenly deciding that we are nasty memory hogs and that it needs some bacon. Windows has features that allow the OS to tell applications that it is about to run out of memory, and we can respond to that. On Linux, the OS goes on a killing spree, so we need to monitor memory actively and take steps accordingly.

Even so, administrators are expected to set vm.overcommit_memory and vm.oom-kill to proper values (2 and 0, respectively, are the values we are currently recommending, but that might change).

Websockets client handling is also currently not available on the CoreCLR for Linux. We have our own trivial implementation based on TcpClient, which currently supports only HTTP. We’ll replace that with the real implementation as soon as the functionality becomes available on Linux.

Right now we are seeing identical behaviors on Linux and Windows, with similar performance profiles, and are quite excited by this.

time to read 2 min | 337 words

Data subscriptions in RavenDB are a way for users to ask RavenDB to give them all documents matching a particular query, both past and future. For example, I may open a subscription to handle all Orders marked as “Require Review”.

The nice thing about it is that when I open a subscription, I can specify whether I want to get only new documents, or whether I want to process all documents from the beginning of time. And once I have processed all the documents in the database, RavenDB will be sure to call me whenever there is a new document that matches my criteria. RavenDB will also ensure that even if there is some form of failure in processing a new document, I’ll get it again, so I can retry.

This is a really great way to handle background jobs, process incoming documents, go over large amounts of data efficiently, etc.

That said, subscriptions have a dark side. Because it is common for subscriptions to process all the relevant documents, they are frequently required to go over all the documents in the database. The typical use case is that you have a few active subscriptions, and mostly they are caught up and processing only new documents. But we have seen cases where users opened a subscription per page view, which results in us having to read the entire database for each and every page view, which consumed all our I/O and killed the system.

In order to handle that, we added a dedicated endpoint to monitor such cases, and you can see one such subscription below.

[image: the subscriptions monitoring endpoint]

In this case, this is a relatively new subscription, which has just started processing documents, and it covers all the Disks in the Rock genre.

This makes it easier for the operations team to figure out what is going on with their server.

time to read 7 min | 1351 words

LoadDocument in RavenDB is a really nice feature. It allows you to reach out to another document during indexing, and load its value. A simple example of that would be:

from p in docs.Pets
select new { Name = p.Name, OwnerName = LoadDocument(p.OwnerId).Name }

When we got the idea for LoadDocument, we were very excited, because it allowed us to solve some very tough problems.

Unfortunately, this also has some really nasty implementation issues. In particular, one of the promises we give for LoadDocument is that we’ll re-index the referencing document if the referenced document changes. In other words, if my wife changes her name, my document will be re-indexed even though it wasn’t changed; my wife’s document will be loaded and the new name will end up in the index.

Now, consider what happens when there are two concurrent transactions. The first transaction happens during indexing: we try to load the owner document, which doesn’t exist yet, so we leave a record in place to force re-indexing when it is inserted. But at the same time, a new transaction is opened and the owner document is inserted. During the insert, it checks if there are any referencing documents, but since neither transaction has committed yet, they can’t see each other’s changes. And we end up with a bug. Resolving that took a bit of coordination between the two processes, which was hard to get right.

Another issue that we have is the fact that each LoadDocument call needs to create a record of its existence, so we’ll know which documents require re-indexing. However, for large indexes that use LoadDocument, the number of entries there can be staggering, and it impacts the amount of time it takes to delete an index, for example. It also forces us to do a bit of work during document updates that is proportional to the number of documents referencing a particular document. In some cases, all the documents in the database reference a single document, and an update to that document can take a very long time. In fact, we limit the amount of time that this can take to 30 seconds, and abort the update if it takes longer. This is one of the only cases where insert speed is impacted in RavenDB (we have a workaround to update the document without triggering re-indexing, of course).

So, overall, we have a really nice feature, but it has some serious drawbacks when you peel back the implementation details. In RavenDB 4.0, we have decided to try as much as possible to avoid having things like that, so we sat down and tried to think how we can get something like that working.

We have the following considerations:

  • All data must be scoped to the index level. Nothing that requires multiple indexes to cooperate.
  • We cannot have any global data, or interactions between documents and indexing that require complex coordination.
  • We cannot use TouchDocument as a control mechanism any longer.
  • It should be as simple as we can get away with.

The solution we came up with goes like this (a full walkthrough can be found after the explanation):

  • An index that uses LoadDocument cannot just look at the items in the collections it covers; it needs to go over all documents.
    • We can probably get away with only scanning the documents from collections that we loaded documents from, but what if the document doesn’t exist yet? In that case, we need to scan all documents anyway.
    • We’ll have an overload of LoadDocument that specifies the collection type (which we can auto fill from the client side based on the provided type) to optimize this.
  • A call to LoadDocument is going to record the relationship between the two documents in the index’s own storage (unlike before, there is no global tracking). Conceptually, you can think about that storage as the “references table”, with the source document key and the destination document key. In practice, we’ll use an optimal data structure for this, but it is easier if you imagine a table with those two columns.
  • Instead of using TouchDocument to modify a document’s etag (which requires us to mix indexing and document operations), the index will keep track of two sets of etags. The first covers the index’s own collection of documents, and is known as the “last indexed etag”. The second covers the documents that are being referenced via LoadDocument by this index, and is known as the “last referenced etag”.
  • When a document from a collection that is being referenced is updated, we’ll wake the index and check all the documents in that collection after the last referenced etag we have. For each of those, we’ll see if they have any entries in the “references table”. If they don’t, there is nothing to do. If they do, we’ll re-index the documents that reference them immediately (see below for some optimization opportunities there, and a condensed sketch of the whole batch after this list).
  • The index will then update the last referenced etag it scanned.
  • Such an index will be considered non stale if both the last indexed etag and the last referenced etag are equal to the last document etag in the database.
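
Put together, a single indexing batch for such an index looks roughly like this (a condensed sketch of the description above, using the Pets/People index from the example and illustrative names):

void RunIndexingBatch()
{
    // 1) handle documents that are referenced via LoadDocument
    foreach (var referenced in GetDocumentsAfter("People", lastReferencedEtag))
    {
        foreach (var referencingKey in referencesTable.GetReferencing(referenced.Key))
        {
            var referencing = GetDocument(referencingKey);
            if (referencing.Etag <= lastIndexedEtag)
                Index(referencing); // already indexed with the old value, re-index it now
            // otherwise the normal pass below will index it anyway
        }
        lastReferencedEtag = referenced.Etag;
    }

    // 2) the normal pass over the collection the index covers
    foreach (var doc in GetDocumentsAfter("Pets", lastIndexedEtag))
    {
        Index(doc); // LoadDocument calls made here record entries in referencesTable
        lastIndexedEtag = doc.Etag;
    }
}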

Basically, we move the entire responsibility of updating the index from the database as a whole to just the index.

It also makes the index in question alone pay for those costs. And given that we have a separate “re-indexing” round, we can track the additional costs of such a measure directly.

It is a lot to take in, so let me try to explain in detail. We start with the new index definition.

from p in docs.Pets
select new { Name = p.Name, OwnerName = LoadDocument(p.OwnerId, "People").Name }

The first change is that the LoadDocument call is going to specify the collection of the loaded document (or no collection, if the document can come from any collection).

The index is going to keep track of the following details:

  • LastIndexedEtag – for the collection that this covers, in this case, the “Pets” collection.
  • LastReferencedEtag – for the collection(s) specified in the LoadDocument, in this case, the People collection.

We now have the following state in the database:

  • LastIndexedEtag is 10 for the Pets collection.
  • LastReferencedEtag is 0 for the People collection.
  • People/1’s etag is set to 12.
  • Pets/1’s etag is set to 7.
  • Pets/2’s etag is set to 11.

Now, when indexing, we are going to do the following steps:

  • For each of the collections we have set up tracking for, get all documents following the LastReferencedEtag.
    • In this case, scan the People collection for all etags following 0.
    • For each of the resulting documents, check whether there are documents referencing that document.
      • In this case, people/1 is returned, and it is being referenced by pets/1 and pets/2.
      • Because the etag of pets/1 (7) is lower than the LastIndexedEtag (10), we need to index it right away.
      • The etag of pets/2 (11) is higher than the LastIndexedEtag (10), so we don’t index it.
    • After we are done scanning through the People collection, we update our LastReferencedEtag to the last item in the People collection (which would be 12).
  • We then continue to index the Pets collection normally.
    • We get pets/2, whose etag is 11, and index it, loading People/1 again (this is why we could skip it previously).
    • Finally, we update our LastIndexedEtag to 11 (the last Pets document we indexed).

On the next batch of indexing, we’ll again scan the People collection for documents that have changed, and then the pets that changed, and so on.

Now, a document that is being referenced by many other documents will no longer require any additional work at write time on our side. We’ll just re-index the documents referencing it as part of the index’s own work, which is much better than the current state.

Note that this design ignores a few details, but this should paint the general outline.
