Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 3 min | 432 words

It’s the end of November, so like almost every year around this time, we have the AWS outage impacting a lot of people. If past experience is any indication, we’re likely to see higher failure rates in many instances, even if it doesn’t qualify as an “outage”, per se.

The image on the right shows the status of an instance of a RavenDB cluster running in us-east-1. The additional load is sufficient to disrupt the connection between the members of the cluster. Note that this isn’t load on the specific RavenDB cluster; this is the environment load. We are seeing busy neighbors and higher network latency in general, enough to cause a disruption of the connection between the nodes in the cluster.

And yet, while the cluster connection is unstable, the individual nodes continue to operate normally and are able to keep working with no issues. This is part of the multi-layer design of RavenDB. The cluster runs a consensus protocol, which is sensitive to network issues and requires a quorum to make progress. The databases, on the other hand, use a separate, gossip-based protocol that allows for multi-master distributed work.

What this means is that even in the presence of increased network disruption, we are able to run without needing to consult other nodes. As long as the client is able to reach any node in the database, we are able to serve reads and writes successfully.

In RavenDB, both clients and servers understand the topology of the system and can independently fail over between nodes without any coordination. A client that can’t reach a server will be able to consult the cached topology to know which server is next in line. That server will be able to process the request (be it read or write) without consulting any other machine.
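To make this concrete, here is a minimal sketch of what that looks like from the client side, using the RavenDB C# client. The node URLs, database name and document id are assumptions for illustration:

    using Raven.Client.Documents;

    var store = new DocumentStore
    {
        // Any one reachable node is enough; the client fetches and caches
        // the full cluster topology from it.
        Urls = new[] { "http://node-a:8080", "http://node-b:8080", "http://node-c:8080" },
        Database = "Orders"
    };
    store.Initialize();

    using (var session = store.OpenSession())
    {
        // If node-a is unreachable, the request is transparently retried
        // against the next node in the cached topology, with no coordination.
        var order = session.Load<dynamic>("orders/1-A");
    }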

The servers will gossip with one another about misplaced writes and set everything in order.

RavenDB gives you a lot of knobs to control exactly how this process works, but we have worked hard to ensure that, by default, everything should Just Work.

Since we released the 4.0 version of RavenDB, we have had multiple Black Fridays, Cyber Mondays and Crazy Tuesdays go by peacefully. Planning explicitly for failures has proven to be a major advantage. When they happen, it isn’t a big deal and the system knows how to deal with them without scrambling the ready team. Just another (quiet) day, with a 1,000% increase in load.

time to read 6 min | 1110 words

I ran into this link, which is Lambda School offering to place one of their students at your company for up to four weeks, without you having to pay them or the student. They market it as: “Add talented software engineers to your team for a 4 week trial at no cost for your company if you do not hire the fellow.”

Just to point out, this is not relevant to me or mine, so I don’t have a dog in this fight, but I ran into this offer because of this:

Scott then wrote: “Pay people for their work. Pay interns.”

I think that this ties closely together with the previous posts on money and open source. On the one hand, I think that this is a pretty good offer for a student, because they are able to get over what is likely to be their biggest hurdle: actual experience in the field. It also lowers the bar on actually getting their foot in the door, after which the second step is likely to be much easier.

On the other hand, for the company, it is a great way to do trial runs for entry-level positions. Hiring developers is hard, and being able to do a trial run for a few weeks is much better than trying to predict the candidate’s ability based on a short interview.

On the gripping hand, however, there are a bunch of issues here that are ripe for abuse. This also means that the only people who’ll get into this program are those who are able to actually go a month or so without pay. Or they will need to do this on top of their regular work.

An “easy” way to fix this would be to pay the students at least minimum wage, of course. The problem is that this would curb a lot of the advantages of the program. I’ll refer you to Dan Ariely and the FREE! experiments for the reasons why. From the company’s perspective, paying minimum wage or getting an employee for free is pretty much the same thing. But there is a lot less process in getting someone for free. And once there is a foot in the door, it is a lot easier to convert internally.

What I wish was possible was to be able to hire people (at or near market rate) for a short amount of time and then decide if you want to keep them on afterward. The idea is that actually seeing their work is a much better indication of their capabilities than the interview process. It reduces the pressure to perform during an interview, gives candidates a far better chance to impress and show off, allows them to learn, etc.

This is technically possible but not actually feasible in almost all situations. Leaving aside labor laws, consider the employee’s perspective in this case. If they are already working, going to another company for a “trial run” that can be unsuccessful is a very powerful reason not to go there. A company with a reputation of “they hired me for a month and then fired me” is going to have a hard time getting more employees. In fact, being fired within a few weeks or months of getting hired is such a negative mark on the CV that most people would leave it off altogether. Because of this, any company that wants to do such trial runs cannot actually do so. They have to make the effort to do all the filtering before actually hiring an employee and reserve firing someone shortly after hiring for mostly egregious issues.

The internship model neatly works around this issue, because you have very clear boundaries. Making it an unpaid internship is a way to attract more interest from companies and reduce the barriers. For the student, even if they aren’t hired at that place, it gives actual industry experience, which is usually a lot more valuable in the job market. Note that you can pay the grocery bill with Reputation bucks; it just takes a little longer to cash them out, usually.

The unpaid internship here is the problem, for a bunch of reasons. Chief among them is that making this free for the companies opens it up to abuse. You can put controls and safeguards in place, but the easiest way to handle that would be to have the companies pay at least minimum wage. The moment this is paid, a lot of the abuse potential goes away. I can imagine that this would be a major hassle for the school (I assume that the companies would rather pay an invoice than hire someone on for a short while), but it is something that you can easily do with economies of scale.

The chief issue then, however, would be that this is no longer free, so it is likely to subject the students to much harsher scrutiny, which defeats the purpose of getting them out there in the field and gaining real experience. That is also a problem for the school, I think, since they would have to try to place the students while facing a bigger barrier.

Turning this around, however, consider what happens if this offer was made not by a company, but by an open source project. A lot of open source projects have a ton of work that needs to be done but gets deferred. This is the sort of “weeding the garden” work that is usually within the capabilities of someone just starting out. The open source project would provide mentorship and guidance in return for this work. In terms of work experience, this is likely to be roughly on the same level, but without the option of being hired at the end of the four weeks. It also has the advantage that all the work is out there in the open, which allows potential future employers to inspect it.

Note that this is roughly the same thing as the offer being made, except that instead of a company doing this, there is an open source project. How would that change your evaluation? The other aspects are all the same. This is still something that is only available to those who can afford to take a month without pay and still make it. From the student’s perspective, there is no major difference, except that there is far less likelihood of actually getting hired in the end.

time to read 5 min | 922 words

One of the interesting features in RavenDB is Subscriptions. These allow you to define a query and then subscribe to its results. When a client opens the subscription, RavenDB will send it all the documents matching the subscription query. This is an ongoing process, so you will also get documents that were modified after your subscription started. For example, you can ask to get “all orders in the UK”, and then use that to compute tax / import rules.
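As a quick sketch of how that looks with the C# client (the Order class, its ShipTo.Country field and the ComputeTax() method are assumptions, modeled on the Northwind sample data; an initialized DocumentStore in `store` is assumed):

    // Create the subscription once; the server persists its state.
    var name = store.Subscriptions.Create<Order>(o => o.ShipTo.Country == "UK");

    // Open a worker and process batches as they arrive, now and in the future.
    using (var worker = store.Subscriptions.GetSubscriptionWorker<Order>(name))
    {
        await worker.Run(batch =>
        {
            foreach (var item in batch.Items)
            {
                ComputeTax(item.Result); // hypothetical business processing
            }
            // Returning from this delegate acknowledges the batch to the
            // server, which then sends the next one.
        });
    }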

Subscriptions are ideal for a number of use cases, but backend business processing is where they shine. This is because of the other property that subscriptions have: the ability to process the subscription results reliably. In other words, a failure in processing a subscription batch will not lose anything; we can simply restart the subscription. In the same manner, a server failure will simply fail over to another node and things will continue processing normally. You can shut down the client for an hour or a week, and when the subscription is restarted, it will process all the matching documents that were changed while it wasn’t listening.

Subscriptions currently have one very strong requirement. There can only ever be a single subscription client open at a given point. This is done to ensure that we can process the batches reliably. A subscription client will accept a batch, process it locally and then acknowledge it to the server, which will then send the next one.

Doing things in this manner ensures that there is an obvious path of progression in how the subscription operates. However, there are scenarios where you’ll want to use concurrent clients on a single subscription. For example, if you have a lengthy computation to run and you want concurrent workers to parallelize the work. That is not a scenario that is currently supported, and it turns out that there are significant challenges in supporting it. I want to use this post to walk through them and consider possible options.

The first issue that we have to take into account is the fact that subscriptions are reliable; that is a hugely important feature, and we don’t want to lose it. This means that if we allow multiple concurrent clients at the same time, we have to have a way to handle a client going down. Right now, RavenDB keeps track of a single number to do so. You can think of it as the last modified date that was sent to the subscription client. This isn’t how it actually works, but it is a close enough lie to save us from the deep details.

In other words, we send a batch of documents to the client and only update our record of the “last processed” point when the batch is acknowledged. This design is simple and robust, but it cannot handle the situation where we have concurrent clients processing batches. We have to account for a client failing to process a batch and needing to resend it, whether to the same client or to another one. That means that in addition to the “last processed” value, we also need to keep a record of in-flight documents that were sent in batches and haven’t been acknowledged yet.

We keep track of our clients by holding on to the TCP connection. That means that as long as the connection is open, the batch of documents that was sent will be considered to be in transit. If the client that got the batch failed, we’ll note that (when the TCP connection closes) and then send the old batch to another client. There are issues with that, by the way; different clients may have different batch sizes, for example. The batch we need to retry may have 100 documents, but the only available client may take only 10 at a time.

There is another problem with this approach, however. Consider the case of a document that was sent to a client for processing and, while it is being processed, is modified again. Now we have a problem. Do we send the document again, to another client, for processing? It is very likely that the processing does something related to this document, and having two clients get the same document (albeit two different versions of it) at the same time can be a cause for bugs.

In order to support concurrent clients on the same subscription, we need to handle all of these problems.

  • Keep track of all the documents that were sent and haven’t been acknowledged yet.
  • Keep track of all the active connections, and re-schedule documents that weren’t acknowledged to be sent to other clients if a connection breaks.
  • When a document is about to be sent, check that an earlier version of it isn’t already being processed by another client. If it is, wait until that document is acknowledged before allowing it to be sent (see the sketch after this list).
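Here is a rough sketch of the per-subscription state this implies. This is design exploration only, not RavenDB code; all the names are made up:

    using System.Collections.Generic;
    using System.Linq;

    class ConcurrentSubscriptionState
    {
        // Documents sent in a batch but not yet acknowledged, keyed by id,
        // with the client currently responsible for each.
        private readonly Dictionary<string, string> _inFlight = new Dictionary<string, string>();

        public bool TryScheduleDocument(string docId, string clientId)
        {
            // Never hand the same document (or an earlier version of it) to
            // two clients at once; the caller must wait for the ack.
            if (_inFlight.ContainsKey(docId))
                return false;
            _inFlight[docId] = clientId;
            return true;
        }

        public void Acknowledge(string docId) => _inFlight.Remove(docId);

        // When a TCP connection drops, everything that client held has to be
        // re-scheduled to the remaining clients.
        public List<string> OnConnectionClosed(string clientId)
        {
            var orphaned = _inFlight.Where(kvp => kvp.Value == clientId)
                                    .Select(kvp => kvp.Key)
                                    .ToList();
            foreach (var docId in orphaned)
                _inFlight.Remove(docId);
            return orphaned; // the caller re-sends these in new batches
        }
    }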

The last requirement is meant to avoid concurrency issues in the handling of a single document. I think that limiting the coordination to a per-document basis is reasonable behavior. If your model requires coordination across multiple distinct documents, that is something you’ll need to implement directly. Implementing “don’t send the same document to multiple clients at the same time”, on the other hand, is likely to result in a better experience all around.

This post is meant to explore the design of such a feature, and as such, I would dearly love any and all feedback.

time to read 4 min | 616 words

I ran into this tweet:

I wanted to respond to that, because it ties very closely to the previous post. As I already said, getting paid for open source is a problem. You either do it professionally (full time) or you don’t. Trying to get a hobbyist amount of money from open source doesn’t really work. And when you are doing it professionally, there is a very different manner of operating. In this post, I want to talk about the other side: the people who want to pay for certain things, but can’t.

Let’s say that Jane works for a multibillion dollar company. She is using project X for a certain use case and would like to extend its capabilities to handle an additional scenario. We’ll further say that this project has a robust team or community behind it, so there is someone to talk to.

If the feature in question isn’t trivial, it is going to require a substantial amount of work. Jane doesn’t want to just bug the maintainers for this, but how can she pay for it? The first problem that you run into is who to pay. There usually isn’t an organization behind the project, so just figuring out who to pay can be a challenge. The next question is whether that person can even accept payments. In Israel, for example, if you aren’t registered as self-employed, there is a lot of bureaucracy to go through if you want to accept money outside of your employer.

Let’s say that the cost of the feature is set around $2,500 – $7,500. That amount usually means that Jane can’t just hand it over and claim it in her expenses. She needs someone from Accounts Payable to handle it, which means that it needs to go through the approval process, there should be a contract (so legal is also involved), there might be a required bidding process, etc.

The open source maintainer on the other side is going to get an 8-page contract written in dense legalese and will have to get a lawyer to go over it, so you need to cover that expense as well. There are delivery clauses in the contract, penalties for late delivery, etc. You need to consider whether this is work for hire or not (it matters for copyright law), whether the license on the project is suitable for the end result, etc. For many people, that level of hassle, for a rare occurrence of a non-life-changing amount of money, is too much. This is especially true if they are already employed and need to do this on top of their usual work.

For Jane, who would like her employer to pay for a feature, this is too much of a hassle as well, given all the steps and paperwork involved. Note that we aren’t talking about a quick email; we are probably talking about weeks of navigating the hierarchy and getting approval from multiple parties (and remember that there is also the maintainer on the other side).

In many cases, the total cost involved here can very quickly reach ridiculous levels. There is a reason why it is often easier for such companies to simply hire the maintainers directly. It simplifies a lot of the work for all sides, but it does mean that the project is no longer independent.

time to read 3 min | 559 words

I ran into a (private) tweet that said the following:

Is there a way to pay for a feature in an opensource project in a crowdfunded manner with potential over time payouts? I would love to pay someone to implement a feature I really want, but I won't be able to pay it all.

I think that this is a very interesting sentiment, because the usual answer for that ranges between no and NO. Technically, yes, there are ways to handle it. For example, Patreon or similar services. I checked a few of those and found LineageOS: 205 patrons giving $582 monthly.

There is also Liberapay, which seems to be exactly what the tweet is talking about, but… the highest paid individual there brings in just under a thousand dollars a month. The highest paid organization brings in about $1,125 / month.

There are other places, but they present roughly the same picture. In short, there doesn’t seem to be any money in this style of operation. Let me make it clear what I mean by no money. Going to Fiverr and sorting by the cheapest rate, you can find a developer for $5 – $10 / hour. No idea about the quality / ability to deliver, but that is the bottom line. Using those numbers (which are below minimum wage), the amounts above don’t buy a lot of time at all.

A monthly recurring income of $500 – $1,250, assuming minimum wage, will get you about a week or two of work per month. But note that this assumes you’ll settle for minimum wage. I’m unaware of any developer charging that amount; typical salaries for developers are in the upper tier. So in terms of financial incentives, there isn’t anything here.

Note that the moment you take any amount of money, you lose the ability to just mute people. If you are working on an open source project and someone comes with a request, either it is interesting, so it might be picked up, or it isn’t. But if there is money involved (and it doesn’t have to be a significant amount), there are different expectations.

There is also a non-trivial amount of hassle in getting paid. I’m not talking about actually collecting the money; I’m talking about things like taxes, making sure that all your reports align, etc. If you are a salaried employee, in many cases this is so trivial you never need to think about it. That on its own can be a big hurdle, especially because there isn’t much money in it.

A counterpoint to my entire post is that there are projects that have done this. The obvious one is the Linux kernel, but you’ll note that such projects are extremely rare, and they usually had a major amount of traction before they managed to sort out funding. In other words, they got to the point where people were already employed full time to work on them.

Another option is Kickstarter. That isn’t so much for recurring revenue as for getting started, of course. On Kickstarter, there seems to be mostly either physical objects or games. I managed to find Light Table, which was funded in 2012 to the tune of $316,720 by 7,317 people. Checking the repository, there seems to be no activity since the beginning of the year.

time to read 2 min | 224 words

RavenDB Subscriptions allow you to create a query and subscribe to documents that match the query. They are very useful in many scenarios, including backend processing, queues and more.

Subscriptions allow you to define a query on documents and get all the documents that match it. The key here is that “all the documents” doesn’t refer just to the documents that exist now, but also to future documents that match the query. That is what the subscription part is all about.

The subscription query operates on a single document at a time, which leads to open questions when we have complex object graphs. Let’s assume that we want to handle, via subscriptions, all Orders that are managed by an employee residing in London. There isn’t a straightforward way of doing this. One option would be to add EmployeeCity to the Orders document, but that is a decidedly inelegant solution. Another option is to use the full capabilities of RavenDB: for subscription queries, we actually allow you to ask questions of other documents, like so:
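The original post showed the query as an image; using the Northwind sample data, such a subscription query might look something like this (the exact field names are assumptions):

    from Orders as o
    load o.Employee as e
    where e.Address.City = 'London'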

Now we’ll only get the Orders whose employee is in London. Simple and quite elegant.

It does have a caveat, though. We will only evaluate this condition whenever the order changes, not when the employee changes. So if the employee moves, old orders will not be matched against the subscription, but new ones will.

time to read 3 min | 541 words

For a long time, whenever I tried to explain that RavenDB is a document database, people immediately assumed that I was talking about Office documents (Word, Excel, etc.) and that I was building a SharePoint clone. Explaining that documents are a different way to model data has been a repeated chore, and we still get prospects asking about RavenDB’s Office integration.

As an aside, I’ll be doing a Webinar on Tuesday talking about Data Modeling with RavenDB.

RavenDB 5.1 has a new feature, NuGet integration, which allows you to use NuGet packages in RavenDB’s indexes. It turns out that it takes very little code to allow RavenDB to search inside Office documents. Let’s consider a legal case system, where we track the progression of legal cases, the work done on them, billing, etc. As you can imagine, the number of Word and Excel documents involved is… massive. Making sense of all that information can be pretty hard. Here is how you can help your users with the use of RavenDB.

Here is the Filing/Search index definition:
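The definition itself was shown as an image in the original post. A sketch of what such an index map might look like, using the two new methods described below (the Filings collection name is an assumption):

    from filing in docs.Filings
    select new
    {
        Documents = AttachmentsFor(filing).Select(attachment =>
            attachment.Name.EndsWith(".docx")
                ? Office.GetWordText(LoadAttachment(filing, attachment.Name).GetContentAsStream())
                : Office.GetExcelText(LoadAttachment(filing, attachment.Name).GetContentAsStream()))
    }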

As you can see, we are using two new features in RavenDB 5.1:

  • The LoadAttachment() / GetContentAsStream() methods, which expose the attachments to the indexing engine.
  • The Office.GetWordText() / Office.GetExcelText() methods, which extract the text from the relevant documents to be indexed by RavenDB.

Aside from that, this is a fairly standard index; we mark the Documents field as full text search. But what about those Office.GetWordText() and Office.GetExcelText() calls? Did RavenDB integrate directly with Office?


No, RavenDB didn’t integrate directly with Office. Instead, we make use of the new Additional Assemblies feature (and the existing Additional Sources) to bring you this functionality. Let’s see how that works, shall we?


We tell RavenDB that for this index, we want to pull the NuGet package DocumentFormat.OpenXml. And it will just happen, which means that we have the full power of this package in our indexes. In fact, this is exactly what we do. Here is the content of the Additional Sources:
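The original snippet was shown as an image; a minimal sketch of the Word half of it, using DocumentFormat.OpenXml, might look like this (the Excel variant walks the worksheets and shared string table in a similar fashion):

    using System.IO;
    using DocumentFormat.OpenXml.Packaging;

    public static class Office
    {
        public static string GetWordText(Stream stream)
        {
            using (var doc = WordprocessingDocument.Open(stream, isEditable: false))
            {
                // InnerText concatenates the text runs in the document body.
                return doc.MainDocumentPart.Document.Body.InnerText;
            }
        }
    }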

What this code does is use the DocumentFormat.OpenXml package to read the data inside the provided attachments. We extract the text from them and then provide it to the RavenDB indexing engine, which enables us to do full text search on the content of attachments.

In effect, within the space of a single blog post, you can turn your RavenDB instance into a document indexing system.

Here is how we can query the data:
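The query was shown as an image in the original post; it might look something like this (the search term is just an example):

    from index 'Filing/Search'
    where search(Documents, 'liability')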

The results come back with the matching Filing documents, and inspecting the index terms shows the search term extracted from inside the Office documents themselves.

As you can imagine, this is a very exciting capability to add to RavenDB. There is much more that you can do with the ability to integrate such capabilities directly into your database.

time to read 4 min | 742 words

This is part of the same issue as the previous post. I was tracking a performance regression between RavenDB 4.1 and RavenDB 4.2. There was a roughly 8% performance difference between the two (favoring the older version), which was reported to us. The scenario was very large and complex documents (hundreds of objects in a document, > 16KB of JSON each).

The two code bases have had a lot of (small) changes between them, so it was hard to figure out exactly what was the root cause of the regression. Eventually I found something utterly bizarre. One of the things that we have to do when you save a document is check whether the document has been modified. If so, we need to save it; otherwise, we can skip it. Here is the cost of that check in 4.1:


So this costs 0.5 ms (for very large documents), which seems perfectly reasonable. But when looking at this on 4.2, it costs six times as much. What the hell?! To clarify, Blittable is the name of the document format that RavenDB uses. It is a binary JSON format that is highly efficient. You can think of this check as comparing two JSON documents, because that is what it is doing.

I mentioned that there are differences between these versions, right? There have been quite a few (thousands of commits’ worth), but this particular bit of code hadn’t changed in years. I just couldn’t figure out what was going on. Then I looked deeper, at the cost of the calls inside CompareBlittable() in each version.

There are a few interesting things here. First, we can see that we are using Enumerable.Contains, and that is where most of the time goes. But much more interesting: in 4.1, we are calling this method a total of 30,000 times. In 4.2, we are calling it 150,000 times!!! Note that CompareBlittable() is recursive, so even though we call it on 10,000 documents, we get more calls. But why the difference between these versions?

I compared the code for these two functions, and they were virtually identical. In 4.2, we mostly changed some error message texts, nothing major, yet somehow the performance was so different. It took a while to figure out that there was another difference. In 4.1, we checked for changes in the order of the properties on the document, but in 4.2, we optimized things slightly and just used the index of the property. A feature of the blittable format is that properties are lexically sorted.

Here is the document in question. In our test, we are modifying Property6.


There are a total of 40 properties in this document, and much nesting. In this case, in 4.2, we are scanning for changes in the document using the lexical sorting.


The CompareBlittable() function will exit on the first change it detects. In 4.1, it will get to the changed Property6 very quickly. In 4.2, it will need to scan most of the (big) document before it finds a change. That is a large part of the cost difference between these versions.
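To make the effect concrete, here is a toy illustration (not RavenDB’s actual CompareBlittable(), and the property names are made up) of how far an early-exit comparison scans under each ordering:

    using System;
    using System.Linq;

    class ScanOrderDemo
    {
        // Cost of an early-exit scan = properties visited before the first change.
        static int CostToFindChange(string[] scanOrder, string changed) =>
            Array.IndexOf(scanOrder, changed) + 1;

        static void Main()
        {
            var insertionOrder = Enumerable.Range(1, 40)
                .Select(i => "Property" + i).ToArray();
            // Lexical sorting interleaves Property10..Property40 before
            // Property6, pushing it toward the end of the scan.
            var lexicalOrder = insertionOrder
                .OrderBy(p => p, StringComparer.Ordinal).ToArray();

            // 4.1 scanned in insertion order; 4.2 scanned in lexical order.
            Console.WriteLine(CostToFindChange(insertionOrder, "Property6")); // 6
            Console.WriteLine(CostToFindChange(lexicalOrder, "Property6"));   // 37
        }
    }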

Now that I know what the issue is, we have to consider which behavior is better for us. I decided to use the order of inserted properties, instead of the lexical order. The reasoning is simple: if a user cares about this, they can much more easily change the order of properties in the document than the names of the properties. In C#, you can just change the order in which the properties show up in the class definition.

I have to say, this was much harder to figure out than I expected, because the change happened in a completely different location and was very much non-obvious in how it worked.

time to read 4 min | 605 words

We were looking at the cost of writing 10,000 documents to RavenDB and found out something truly interesting. The documents in question are complex; each is an object graph that includes over 120 objects belonging to 20 different classes. The serialized JSON is over 16KB in size. It is expected that something like that would take a lot of time.

Here are the results under the profiler:


Given the size involved, taking (under the profiler) just under 4.2 ms per document isn’t bad, I thought. In general, we consider the cost of JSON serialization as pretty fixed and ignore it. This time, I looked a little bit deeper and found an expensive call: GetMatchingConverter().


Note that this call is deep in the call stack, but it is a pretty expensive one. I decided to look at all the calls that we had to this method.


A few things that I would like to note here. The time this single method takes is roughly 50% of the total time it takes to serialize the documents. In fact, I checked what would happen if I could remove this cost entirely.


You can insert your favorite profanity here; I usually go with Pasta!

Looking at the stack trace, it is clear what is going on. RavenDB has a bunch of customizations for the default way documents are serialized, and JSON.Net will call the CanConvert() method on the converters for each new object that it is about to convert.

Given the level of complexity in these documents (over 450 values), that means that we would call this a lot. And the cost of finding out whether we need to convert the value or not dominated this kind of operation completely.

I wrote the following converter, instead:
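The original code was shown as an embedded snippet; here is a sketch of the shape of the fix, reconstructed from the description below (the wrapper class name, constructor and exact wiring are assumptions):

    using System;
    using System.Collections.Generic;
    using Newtonsoft.Json;

    public class CachingConverter : JsonConverter
    {
        private readonly JsonConverter[] _inner;

        // Copy-on-write cache: type -> the matching inner converter,
        // or null when no inner converter applies.
        private Dictionary<Type, JsonConverter> _cache = new Dictionary<Type, JsonConverter>();

        public CachingConverter(params JsonConverter[] inner)
        {
            _inner = inner;
        }

        public override bool CanConvert(Type objectType)
        {
            return CanConvertInternal(objectType) != null;
        }

        private JsonConverter CanConvertInternal(Type objectType)
        {
            if (_cache.TryGetValue(objectType, out var match))
                return match;

            foreach (var converter in _inner)
            {
                if (converter.CanConvert(objectType))
                {
                    match = converter;
                    break;
                }
            }

            // Intentionally not thread safe: losing a race just means another
            // copy of the cache is computed; it stabilizes very quickly.
            _cache = new Dictionary<Type, JsonConverter>(_cache) { [objectType] = match };
            return match;
        }

        public override void WriteJson(JsonWriter writer, object value, JsonSerializer serializer)
        {
            CanConvertInternal(value.GetType()).WriteJson(writer, value, serializer);
        }

        public override object ReadJson(JsonReader reader, Type objectType, object existingValue, JsonSerializer serializer)
        {
            return CanConvertInternal(objectType).ReadJson(reader, objectType, existingValue, serializer);
        }
    }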

As you can see, the core of the fix is in the CanConvertInternal() method, where I check the internal converters and cache the results. Note that I’m intentionally not using any thread-safe operations here, even though this is meant to be a shared instance. The idea is that on each new type, we’ll create a new copy of the cache, and we can afford to waste the memory / work of computing the cache if we can reduce the common cost of verifying the types on each call.

We are actually expecting to lose races here, which is fine. After a very short while, the cache is going to stabilize and we won’t have to pay any additional costs.

Here is the result; you can see that we are taking about half of the previous run time.


And if we look at just the GetMatchingConverter() call, we see a similar drop.


I actually ended up with two such converters, because I have some converters that are only used for writes. Given the huge performance improvement, we just created two converters to handle the caching.

Note that without a profiler, I would never have been able to figure something like this out. And even with the profiler, I had to dig deep, and it took a while to narrow down where the costs actually were. I’m very surprised.

time to read 2 min | 376 words

RavenDB can handle large documents. There actually is a limit to the size of a document in RavenDB, but given that it is 2 GB, it isn’t a practical one. I actually had to go and check whether the limit was 2 or 4 GB, because it doesn’t actually matter.

That said, having large documents is something that you should be wary of. They work, but they have very high costs.

I ran some benchmarks on the topic a while ago, and the results are interesting. Let’s consider a 100MB document. Parsing time for that should be around 4 – 5 seconds. And that ignores the memory costs. For example, you can have a JSON document that is parsed into 50(!) times the size of the raw text. That is 5GB of memory to handle a single 100MB document, and that is just the parsing cost. There are others. Reading 100MB from most disks will take about a second, assuming that the data is sequential. Assuming you have a 1Gbit/s network, all of which is dedicated to this single document, you can push it over the network in 800 ms or so.
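To spell out the network arithmetic: 100 MB × 8 bits/byte = 800 Mbits, and 800 Mbits ÷ 1,000 Mbits/s = 0.8 seconds, which is where the 800 ms figure comes from. Note that the 4 – 5 seconds of parsing dwarfs both the disk read and the network transfer.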

Dealing with such documents is hard and awkward. If you accidentally issue a query on a bunch of those documents and get a page of 25 of them, you just got a query result that is 2.5 GB in size. With documents of this size, you are also likely to want to modify multiple pieces at the same time, so you’ll need to be very careful about concurrency control as well.

In general, at those sizes, you stop treating this as a simple document and move to a streaming approach, because anything else doesn’t make much sense; it is too costly.

A better alternative is to split this document up into its component parts. You can then interact with each one of them on an independent basis, as in the sketch below.
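As a hypothetical sketch of what that split might look like for a huge order document (the ids and fields here are made up):

    Before, one giant document:
        orders/1 = { Customer: "customers/5", Lines: [ ...100,000 line items... ] }

    After, a small root document plus independently loadable parts:
        orders/1         = { Customer: "customers/5", Status: "Open" }
        orders/1/lines/1 = { Order: "orders/1", Items: [ ...first chunk... ] }
        orders/1/lines/2 = { Order: "orders/1", Items: [ ...next chunk... ] }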

It is the difference between driving an 18 wheeler and driving a family car. You can pack a whole lot more onto the 18 wheeler, but it gets pretty poor mileage and it is very awkward to park. You aren’t going to want to use it for a trip to the grocery store.
