Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 4 min | 778 words

At the heart of RavenDB, there is a data structure that we call the Page Translation Table. It is one of the most important pieces inside RavenDB.

The page translation table is basically a Dictionary<long, Page>, mapping between a page number and the actual page. The critical aspect of this data structure is that it is both concurrent and multi-version. That is, at a single point in time there may be multiple versions of the table in use, each representing the state of the mapping at a given moment.

The way it works is that a transaction in RavenDB generates a page translation table as part of its execution and publishes the table on commit. However, each subsequent table builds upon the previous one, so things become more complex. Here is a usage example (in Python pseudo-code):


table = {}


with write_tx(table) as wtx1:
    wtx1.put(2, 'v1')
    wtx1.put(3, 'v1')
    wtx1.publish(table)


# table has (2 => v1, 3 => v1)


with write_tx(table) as wtx2:
    wtx2.put(2, 'v2')
    wtx2.put(4, 'v2')
    wtx2.publish(table)


# table has (2 => v2, 3 => v1, 4 => v2)

This is pretty easy to follow, I think. The table is a simple hash table at this point in time.

The catch is when we mix read transactions as well, like so:


# table has (2 => v2, 3 => v1, 4 => v2)


with read_tx(table) as rtx1:

    with write_tx(table) as wtx3:
        wtx3.put(2, 'v3')
        wtx3.put(3, 'v3')
        wtx3.put(5, 'v3')

        with read_tx(table) as rtx2:
            rtx2.read(2) # => gives v2
            rtx2.read(3) # => gives v1
            rtx2.read(5) # => gives None

        wtx3.publish(table)

    # table has (2 => v3, 3 => v3, 4 => v2, 5 => v3)
    # but rtx2 still observes the values as they were when
    # rtx2 was created

    rtx2.read(2) # => gives v2
    rtx2.read(3) # => gives v1
    rtx2.read(5) # => gives None

In other words, until we publish a transaction, its changes don’t take effect, and any read transaction that was already started isn’t impacted. We also need this to be concurrent, so we can use the table from multiple threads (a single write transaction at a time, but potentially many read transactions). Each transaction may modify hundreds or thousands of pages, and we’ll only clear old values from the table once in a while (so growth isn’t unbounded, but the table can certainly reach a respectable number of items).

The implementation we have inside of RavenDB for this is complex! I tried drawing that on the whiteboard to explain what was going on, and I needed both the third and fourth dimensions to illustrate the concept.

Given these requirements, how would you implement this sort of data structure?
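To make the required semantics concrete, here is one possible direction in C# (a minimal sketch, not the RavenDB implementation): keep the published table as an immutable snapshot and atomically swap the root reference on publish, so readers that captured the old reference keep seeing their version.

using System.Collections.Immutable;
using System.Threading;

public class Page { /* page payload elided */ }

public class PageTranslationTable
{
    private ImmutableDictionary<long, Page> _current =
        ImmutableDictionary<long, Page>.Empty;

    // Read transactions call this once when they start and hold on to the snapshot.
    public ImmutableDictionary<long, Page> GetSnapshotForRead() =>
        Volatile.Read(ref _current);

    // A single writer at a time accumulates changes privately...
    public WriteTransaction BeginWrite() =>
        new WriteTransaction(this, Volatile.Read(ref _current));

    public class WriteTransaction
    {
        private readonly PageTranslationTable _owner;
        private readonly ImmutableDictionary<long, Page>.Builder _changes;

        internal WriteTransaction(PageTranslationTable owner,
                                  ImmutableDictionary<long, Page> basis)
        {
            _owner = owner;
            _changes = basis.ToBuilder();
        }

        public void Put(long pageNumber, Page page) => _changes[pageNumber] = page;

        // ...and makes them visible to new transactions only on publish.
        public void Publish() =>
            Volatile.Write(ref _owner._current, _changes.ToImmutable());
    }
}

This gets the publish/snapshot semantics right, but it says nothing about clearing out old values or about the cost of building thousands of immutable updates per transaction, which is exactly where the real complexity lives.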

time to read 1 min | 129 words

Watch Oren Eini, CEO of RavenDB, as he delves into the intricate process of constructing a database engine using C# and .NET. Uncover the unique features that make C# a robust system language for high-end system development. Learn how C# provides direct memory access and fine-grained control, enabling developers to seamlessly blend high-level concepts with intimate control over system operations within a single project. Embark on the journey of leveraging the power of C# and .NET to craft a potent and efficient database engine, unlocking new possibilities in system development.

I’m going deep into some of the cool stuff that you can do with C# and low level programming.

time to read 3 min | 487 words

Corax is the new indexing and querying engine in RavenDB, which recently came out with RavenDB 6.0. Our focus when building Corax was on one thing: performance. I did a full talk explaining how it works from the inside out, available here, as well as a couple of podcasts.

Now that RavenDB 6.0 has been out for a while, we’ve had the chance to complete a host of small features for Corax that didn’t make the cut for the big 6.0 release.

All these features are available in the 6.0.102 release, which went live in late April 2024.

The most important new feature for Corax is query plan visualization.

Let’s run the following query in the RavenDB Studio on the sample data set:


from index 'Orders/ByShipment/Location'
where spatial.within(ShipmentLocation, 
                  spatial.circle( 10, 49.255, 4.154, 'miles')
      )
and (Employee = 'employees/5-A' or Company = 'companies/85-A')
order by Company, score()
include timings()

Note that we are using the include timings() feature. If you configure this index to use Corax, issuing the above query will also give us the full query plan. In this case, you can see it here:

You can see exactly how the query engine has processed your query and the pipeline it has gone through.
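For reference, issuing the same query from the C# client can be done with a raw query along these lines (a sketch; it assumes a locally running server, the sample database name, and a stand-in Order class):

using System.Linq;
using Raven.Client.Documents;

public class Order { } // stand-in for the Northwind sample Order document

public static class QueryPlanExample
{
    public static void Run()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "http://localhost:8080" }, // assumed local server
            Database = "SampleData"                   // assumed sample database name
        };
        store.Initialize();

        using var session = store.OpenSession();
        var results = session.Advanced
            .RawQuery<Order>(@"
                from index 'Orders/ByShipment/Location'
                where spatial.within(ShipmentLocation,
                                  spatial.circle(10, 49.255, 4.154, 'miles'))
                and (Employee = 'employees/5-A' or Company = 'companies/85-A')
                order by Company, score()
                include timings()")
            .ToList();
    }
}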

We have incorporated many additional features into Corax, including phrase queries, scoring based on spatial results, and more complex sorting pipelines. For the most part, those are small but they fulfill specific needs and enable a wider range of scenarios for Corax.

Over six months after Corax went live with 6.0, I can say that it has been a successful feature. It performs its primary job well, being a faster and more efficient querying engine. And the best part is that it isn’t even something that you need to be aware of.

Corax has been the default indexing engine for the Development and Community editions of RavenDB for over 3 months now, and almost no one has noticed.

It’s a strange metric, I know, for a feature to be successful when no one is even aware of its existence, but that is a common theme for RavenDB. The whole point behind RavenDB is to provide a database that works, allowing you to forget about it.

time to read 6 min | 1075 words

I’m really happy to announce that RavenDB Cloud is now offering NVMe based instances on the Performance tier. In short, that means that you can deploy RavenDB Cloud clusters to handle some truly high workloads.

You can learn more about what is actually going on in our documentation. For performance numbers and costs, feel free to skip to the bottom of this post.

I’m assuming that you may not be familiar with everything that a database needs to run fast, and this feature deserves a full explanation of what is on offer. So here are the full details of what you can now do.

RavenDB is a transactional database that often processes far more data than the memory available on the machine. Consequently, it needs to read from and write to the disk. In fact, as a database, you can say that this is its primary role. This means that one of the most important factors for database performance is the speed of your disk. I have written about the topic before in more depth, if you are interested in exploring it.

When running on-premises, it’s easy to get the best disks you can. We recommend at least good SSDs and prefer NVMe drives for best results. When running on the cloud, the situation is quite different. Machines in the cloud are assumed to be transient, they come and go. Disks, on the other hand, are required to be persistent. So a typical disk on the cloud is actually a remote storage device (typically replicated). That means that disk I/O on the cloud is… slow. To the point where you could get better performance from off-the-shelf hardware from 20 years ago.

There are higher tiers of high-performance disks available in the cloud, of course. If you need them, you are paying quite a lot for that additional performance. There is another option, however. You can use NVMe disks on the cloud as well. Well, you could, but do you want to?

The reason you’d want to use an NVMe disk in the cloud is performance, of course. But the problem with achieving this performance on the cloud is that you lose the safety of “this disk is persistent beyond the machine”. In other words, this is literally a disk that is attached to the physical server hosting your VM. If something goes wrong with that machine, you lose the disk. Traditionally, that means that you can only use that for transient data, not as the backend store for a database.

However, RavenDB has some interesting options to deal with this. RavenDB Cloud runs RavenDB clusters with 3 copies of the data by default, operating in a full multi-master configuration. Given that we already have multiple copies of the data, what would happen if we lost a machine?

The underlying watchdog would realize that something happened and initiate recovery, which will effectively spawn the instance on another node. That works, but what about the data? All of that data is now lost. The design of RavenDB treats that as an acceptable scenario, the cluster would detect such an issue, move the affected node to rehabilitation mode, and start pumping all the data from the other nodes in the cluster to it.

In short, now we’ve shifted from a node failure being catastrophic to having a small bump in the data traffic bill at the end of the month. Thanks to its multi-master setup, RavenDB can recover even if two nodes go down at the same time, as we’ll recover from the third one. RavenDB Cloud runs the nodes in the cluster in separate availability zones specifically to handle such failure scenarios.

We have run into this scenario multiple times, both as part of our testing and as actual production events. I am happy to say that everything works as expected, the failed node comes up empty, is filled by the rest of the cluster, and then seamlessly resumes its work. The users were not even aware that something happened.

Of course, there is always the possibility that the entire region could go down, or that three separate instances in three separate availability zones would fail at the same time. What happens then? That is expected to be a rare scenario, but we are all about covering our edge cases.

In such a scenario, you would need to recover from backup. Clusters using NVMe disks are configured to run using Snapshot backups, which consume slightly more disk space than normal but can be restored more quickly.

RavenDB Cloud also blocks the user's ability to scale such clusters up or down from the portal and requires a support ticket to perform those operations. This is because special care is needed when performing them on NVMe machines. Even with those limitations, we are able to perform such actions with zero downtime for the users.

And after all this story, let’s talk numbers. Take a look at the following table illustrating the costs vs. performance on AWS (us-east-1):

Type | # of cores | Memory | Disk | Cost ($ / hour)
P40 (Premium disk) | 16 | 64 GB | 2 TB, 10,000 IOPS, 360 MB/s | 8.790
PN30 (NVMe) | 8 | 64 GB | 2 TB, 110,000 IOPS, 440 MB/s | 6.395
PN40 (NVMe) | 16 | 128 GB | 4 TB, 220,000 IOPS, 880 MB/s | 12.782

The situation is even more blatant when looking at the numbers on Azure (eastus):

Type | # of cores | Memory | Disk | Cost ($ / hour)
P40 (Premium disk) | 16 | 64 GB | 2 TB, 7,500 IOPS, 250 MB/s | 7.089
PN30 (NVMe) | 8 | 64 GB | 2 TB, 400,000 IOPS, 2 GB/s | 6.485
PN40 (NVMe) | 16 | 128 GB | 4 TB, 800,000 IOPS, 4 GB/s | 12.956

In other words, you can upgrade to the NVMe cluster and actually reduce the spend if you are stalled on I/O. We expect most workloads to see both higher performance and lower cost from a move from P40 with premium disk to PN30 (same amount of memory, fewer cores). For most workloads, we have found that IO rate matters even more than core count or CPU horsepower.

I’m really excited about this new feature, not only because it can give you a big performance boost but also because it leverages the same behavior that RavenDB already exhibits for handling issues in the cluster and recovering from unexpected failures.

In short, you now have some new capabilities at your fingertips, being able to use RavenDB Cloud for even more demanding workloads. Give it a try, I hear it goes vrooom 🙂.

time to read 6 min | 1025 words

When we started working on Corax (10 years ago!), we had a pretty simple mission statement for that: “Lucene, but 10 times faster for our use case”. When we actually started implementing this in code (early 2020), we had a few more rules about the direction we wanted to take.

Corax had to be faster than Lucene in all scenarios, and 10 times faster for common indexing and querying scenarios. Corax’s design is meant for online indexing, rather than being batch-oriented like Lucene. We favor moving work to indexing time and ensuring that our data structures on disk can be used with no additional processing.

Lucene was created at a time when data size was much smaller and disks were far more expensive. It shows in the overall design in many ways, but one of the critical aspects is that the file design for Lucene is compressed, meaning that you need to read the data, decode that into the in-memory data structure, and then process it.

For RavenDB’s use case, that turned out to be a serious problem. In particular, the issue of cold queries, where you query the database for the first time and have to pay the initialization cost, was particularly difficult. Now, cold queries aren’t really that interesting from a benchmark perspective; you have to warm things up in any software (caches are everywhere, from your disk to your CPU). I like to say that even memory has caches (yes, plural: L1, L2, L3) because it is so slow.

With Lucene’s design, however, whenever it runs an indexing batch, it creates a new file, and to start querying after that means that you have a “cold start” for that file. Usually, those files are small, but every now and then Lucene needs to merge several files together and then we have to pay the cold start price for a large amount of data.

The issue is that this sometimes introduces a high latency spike (hitting us in the P999 targets), which is really hard to smooth over. We spent a lot of time and engineering resources ensuring that this doesn’t have a big impact on our users.

One of the design goals for Corax was to ensure that this doesn’t happen. That we are able to get consistent performance from the system without periodic maintenance tasks. That led us to a very different internal design. The persistent data structures that we use are meant to be used as is, without initial processing.

Everything has a cost, and in this case, it means that the size of Corax on disk is typically somewhat larger than Lucene. The big advantage is that the amount of memory being used by Corax tends to be significantly lower. And in today’s world, disks are far cheaper than memory. Corax’s cold start time is orders of magnitude faster than Lucene’s cold start time.

It turns out that there is a huge impact in another scenario as well, completely unexpected. We continuously run performance tests on our system, and we got some ridiculous results when testing query performance using encrypted databases.

When you use encryption at rest, RavenDB ensures that the only time that your data is decrypted is when there is an active transaction using the data. In other words, even in-memory buffers are encrypted. That applies to documents as well as indexes. It does not apply to the in-memory data that Lucene holds in its cache, though. For Corax, however, all of its state is encrypted.

When we ran our benchmark on encrypted database queries, we expected to see either roughly the same performance from Corax and Lucene, or Lucene edging out Corax in this scenario, since it can use its cache without paying decryption costs.

Instead, we got really puzzling results. I tried showing them in bar chart format, but I literally couldn’t make the data fit in a reasonable size. The scenario is testing queries on an encrypted database, using an m5.xlarge instance on AWS. We are hitting the server with 500 queries/second, and testing for the 99.99 percentile performance.

Indexing Engine | 99.99% percentile (ms) | 99.99% percentile (seconds)
Lucene | 40,210 | 40.21
Corax | 186 | 0.18

Take a look at those numbers! Somehow Corax is absolutely smoking Lucene’s lunch. And I was quite surprised about that. I mean, I’m happy, I guess, that the indexing engine we spent so much time on is doing this well, but any time that we see a performance number that we cannot explain we need to figure out what is going on.

Here is the profiler output for this benchmark, using Lucene.

As you can see, the vast majority of the time is spent decrypting pages. And we are decrypting pages belonging to a stream. Those are the Lucene files, stored (encrypted in this case) inside of Voron. The issue is that the access pattern that Lucene is using forces us to touch large parts of the file. It usually reads a very small portion each time, but in various locations. Given that the data is encrypted, we have to decrypt each of those locations.

Corax, on the other hand, keeps its persistent data structure in such a way that we only need to access the specific pages in question. That means that, in terms of the number of pages touched for this particular scenario, Lucene is using a lot more than Corax. You’ll usually not notice that, since Voron (our storage engine) is memory mapped and those accesses are cheap. When using encrypted storage, however, we need to decrypt the data first, so the difference becomes very noticeable.

It’s interesting to note that this also applies to instances where memory pressure is involved. Corax tends to touch a lot less memory and has a smaller working set, while Lucene will generate more page faults.

Really interesting results, and I’m both happy and amused that totally different design decisions have led to such a big impact in this scenario. In short, Corax is fast, really fast, and in many more scenarios than we initially thought.

time to read 8 min | 1411 words

One of the most frequent operations we make in RavenDB is getting a page pointer. That is the basis of pretty much everything that you can think of inside the storage engine. On the surface, that is pretty simple, here is the API we’ll call:

public Page GetPage(long pageNumber)

Easy, right? But internally, we need to ensure that we get the right page. And that is where some complexity enters the picture. Voron, our storage engine, implements a pattern called MVCC (multi-version concurrency control). In other words, two transactions loading the same page may see different versions of the page at the same time.

What this means is that the call to GetPage() needs to check whether the page:

  • Has been modified in the current transaction
  • Has been modified in a previously committed transaction that has not yet been flushed to disk
  • Has its most up-to-date version on disk
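Conceptually, the lookup order looks something like this (a sketch with hypothetical names, not the actual Voron code):

using System;
using System.Collections.Generic;

public class ExampleTransaction
{
    public class Page { /* page contents elided */ }

    private readonly Dictionary<long, Page> _modifiedInThisTx = new();
    private readonly Dictionary<long, Page> _committedNotYetFlushed = new();

    public Page GetPage(long pageNumber)
    {
        // 1. Modified by the current transaction?
        if (_modifiedInThisTx.TryGetValue(pageNumber, out var page))
            return page;

        // 2. Modified by a previously committed transaction that hasn't been flushed yet?
        if (_committedNotYetFlushed.TryGetValue(pageNumber, out page))
            return page;

        // 3. Otherwise the on-disk version is the most up-to-date one.
        return ReadFromDataFile(pageNumber);
    }

    private Page ReadFromDataFile(long pageNumber) =>
        throw new NotImplementedException(); // reading from the memory mapped file is elided
}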

Each one of those checks is cheap, but getting a page is a common operation. So we implemented a small cache in front of these operations, which resulted in a substantial performance improvement. 

Conceptually, here is what that cache looks like:

public unsafe class PageLocator
{
    private struct PageData
    {
        public long PageNumber;
        public byte* Pointer;
        public bool IsWritable;
    }

    private PageData[] _cache = new PageData[512];

    public byte* Get(long page, out bool isWritable)
    {
        var index = page & 511;
        ref var data = ref _cache[index];
        if (data.PageNumber == page)
        {
            isWritable = data.IsWritable;
            return data.Pointer;
        }
        return LookupAndGetPage(page, ref data, out isWritable);
    }

    public void Reset()
    {
        for (int i = 0; i < _cache.Length; i++)
            _cache[i].PageNumber = -1;
    }
}

This is intended to be a fairly simple cache, and it is fine if certain access patterns aren’t going to produce optimal results. After all, the source it is using is already quite fast; we simply want to get even better performance when we can. This is important because caching is quite a complex topic on its own. The PageLocator itself is used in the context of a transaction and is pooled. Transactions in RavenDB tend to be pretty short, which alleviates a lot of the complexity around cache management.

This is actually a pretty solved problem for us, and has been for a long while. We have been using some variant of the code above for about 9 years. The reason for this post, however, is that we are trying to optimize things further. And this class showed up in our performance traces as problematic.

Surprisingly enough, what is actually costly isn’t the lookup part, but making the PageLocator ready for the next transaction. We are talking about the Reset() method.

The question is: how can we significantly optimize the process of making the instance ready for a new transaction? We don’t want to allocate, and resetting the page numbers is what is costing us.

One option is to add an int Generation field to the PageData structure, which we’ll then check on getting from the cache. Resetting the cache can then be done by incrementing the locator’s generation with a single instruction. That is pretty awesome, right?

It sadly exposes a problem: what happens when we use the locator enough times to encounter an overflow? What happens if we have a sequence of events that brings us back to the same generation as a cached entry? We’ll be risking getting a stale entry (from a previous transaction, which happened long ago).

The chances for that are low, seriously low. But that is not an acceptable risk for us. So we need to consider going further. Here is what we ended up with:

public unsafe class PageLocator
{
    private struct PageData
    {
        public long PageNumber;
        public byte* Pointer;
        public ushort Generation;
        public bool IsWritable;
    }

    private PageData[] _cache = new PageData[512];
    private int _generation = 1;

    public byte* Get(long page, out bool isWritable)
    {
        var index = page & 511;
        ref var data = ref _cache[index];
        if (data.PageNumber == page && data.Generation == _generation)
        {
            isWritable = data.IsWritable;
            return data.Pointer;
        }
        return LookupAndGetPage(page, ref data, out isWritable);
    }

    public void Reset()
    {
        _generation++;
        if (_generation >= 65535)
        {
            _generation = 1;
            MemoryMarshal.Cast<PageData, byte>(_cache).Fill(0);
        }
    }
}

Once every 64K operations, we’ll pay the cost of resetting the entire buffer, but we do that in an instruction that is heavily optimized. If needed, we can take it further, but here are our results before the change:

And afterward:

The cost of the Renew() call, which was composed mostly of the Reset() call, basically dropped off the radar, and the performance roughly doubled.

That is a pretty cool benefit for a straightforward change.

 

time to read 6 min | 1070 words

With the release of RavenDB 6.0, we are now starting to focus on smaller features. The first one out of the gate, part of RavenDB 6.0.1 release, is actually a set of enhancements around making backups faster, smaller and cheaper.

I just checked, and the core backup behavior of RavenDB hasn't changed much since 2010(!). In other words, decisions that were made almost 14 years ago are still in effect. There have been a… number of changes in RavenDB, its operating environment, and the size of the databases that we deal with.

In the past year, we ran into a number of cases where people working with datasets in the high hundreds of GB to low TB range had issues with backups. In particular, with the duration of backups. After the 6.0 release, we had the capacity to do a lot more about this, so we took a look.

On first impression, you would expect that backing up a database whose size exceeds 750GB will take… a while. And indeed, it does. The question is, why? It’s a lot of data, sure. But where does the time go?

The format of RavenDB backups is really simple. It is just a GZipped JSON file. The contents are treated as a JSON stream that contains all the data in the database. This has a number of advantages, the file size is small, the format itself lends itself well to extension, it is streamable, etc. In fact, it is a testament to the early design decision that we haven’t really had to touch that in so long.

Given that the format is stable, and that we have a lot of experience with producing JSON, we approached the task of optimizing the backups with a good idea of where we should go. The problem is likely with I/O (we need to go through the entire database, after all). There were some (pretty wild) ideas flying around on how to address this, but the first thing to do, of course, was to run it under the profiler.

The results, as you can imagine, were not what we expected. It turns out that we spend quite a lot of the time inside of GZip, compressing the data. When we set up the backup format all those years ago, we chose GZip and the Optimal compression mode. In other words, we wanted the file size to be as small as possible. That… makes sense, of course. But the vast majority of the time is spent just compressing the data?

Time to start looking deeper into that. GZip is an old format (it came out in 1992!). And recently there have been a number of new compression algorithms (Zstd, Brotli, etc). We decided to look into those in detail. GZip also has several modes that affect compression ratio vs. compression time.
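If you want to see the ratio-vs-time trade-off on your own data, a quick measurement along these lines is enough (a sketch using only the built-in GZipStream; Zstd isn't in the BCL and needs a separate package such as ZstdSharp):

using System;
using System.Diagnostics;
using System.IO;
using System.IO.Compression;

class CompressionCheck
{
    static void Main()
    {
        Measure("backup.ravendump", CompressionLevel.Optimal); // hypothetical input file
        Measure("backup.ravendump", CompressionLevel.Fastest);
    }

    static void Measure(string inputPath, CompressionLevel level)
    {
        var outputPath = Path.GetTempFileName();
        var sw = Stopwatch.StartNew();
        using (var input = File.OpenRead(inputPath))
        using (var output = File.Create(outputPath))
        using (var gzip = new GZipStream(output, level))
            input.CopyTo(gzip);
        Console.WriteLine($"{level}: {new FileInfo(outputPath).Length:N0} bytes in {sw.Elapsed}");
    }
}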

After a bit of experimentation, we have the following details when backing up a 35GB database.

Algorithm & Mode | Size | Time
GZip - Optimal | 5.9 GB | 6 min, 40 sec
GZip - Fastest | 6.6 GB | 4 min, 7 sec
ZStd - Fastest | 4.1 GB | 3 min, 1 sec

The data in this case is mostly textual (JSON), and it turns out that we can reduce the backup time by more than half while saving 30% in the space we take. Those are some nice numbers.

You’ll note that ZStd also has a mode that controls compression ratio vs compression time. We tried checking this as well on a different dataset (a snapshot of the actual database) with a size of 25.5GB and we got:

Algorithm & Mode | Size | Time
ZStd - Fastest | 2.18 GB | 56 sec
ZStd - Optimal | 1.98 GB | 1 min, 41 sec
GZip - Optimal | 2.99 GB | 3 min, 50 sec

As you can see, GZip isn’t going to get a participation trophy at this point, coming dead last for both size and time.

In short, RavenDB 6.0.1 will use the new ZStd compression algorithm for backups (and export files),  and you can expect to have greatly reduced backup times as well as smaller backups overall.

This is now the default mode for RavenDB 6.0.1 or higher, but you can control that in the backup settings if you so wish.


Restoring from old backups is no issue, of course, but restoring a ZStd backup on an older version of RavenDB is not supported. You can configure RavenDB to use the GZip algorithm if that is required.

Another feature that is going to improve backup performance is the notion of backup mode, which you can see in the image above. RavenDB backups support multiple destinations, so you can back up to Amazon S3 as well as Azure Blob Storage as a single unit. 

At the time of designing the backup system, that was a nice feature to have, since we assumed that you'll usually have a backup to a local disk (for quick restore) as well as an offsite backup for longer-term storage. In practice, almost all backup configurations in RavenDB have a single destination. However, because we have support for multiple backup destinations, the backup process will first write the backup file to the local disk and then upload it.

The new Direct Upload mode only supports a single destination, and it streams the data to the destination directly, without touching the disk. As a result of this change, we are using far less I/O during backup procedures as well as reducing the total time it takes to run the backup.

This is especially useful if your backup destination is nearby and the network is good. This is frequently the case in the cloud, where you are backing up to S3 in the same region. In our tests, it reduced the backup time by 30% in some cases.

From a coding perspective, those are not huge changes, but together they mean that backups in RavenDB are now cheaper, faster, and far smaller. That translates to a better operating environment for your system. It also means that the costs of storing backups are going to go down by a significant amount.

You can read all the technical details about these features in the feature announcements.

time to read 4 min | 622 words

A customer contacted us to complain about a highly unstable cluster in their production system. The metrics didn’t back that up, however. There was no excess load on the cluster in terms of CPU and memory, but there were a lot of network issues. The cluster got to the point where it would just flat-out be unable to connect from one node to another.

It was obviously some sort of a network issue, but our ping and network tests worked just fine. Something else was going on. Somehow, the server would get to a point where it would be simply inaccessible for a period of time, then accessible, then not, etc. What was weird was that the usual metrics didn’t give us anything. The logs were fine, as were memory and CPU. The network was stable throughout.

If the first level of metrics isn’t telling a story, we need to dig deeper. So we did, and we found something really interesting. Here is the total number of TCP connections on the server over time.

So there are a lot of connections on the system, which is choking it? But the CPU is fine, so what is going on? Are we being attacked? We looked at the connections, but they all came from authorized machines, and the firewall was locked down tight.


If you look closely at the graph, you can see that it hits 32K connections at its peak. That is a really interesting number, because 32K is also the size of the ephemeral port range on Linux. In other words, we basically hit the OS limit for how many connections can be sustained between a client and a server.

The question is: what could be generating all of those connections? Remember, they are coming from a trusted source and are valid operations. Indeed, digging deeper, we could see that there were a lot of connections in the TIME_WAIT state.

We asked to look at the client code to figure out what was going on. Here is what we found:

There is… not much here, as you can see. And certainly nothing that should cause us to generate a stupendous amount of connections to the server. In fact, this is a very short process. It is going to run, read a single line from the input, write a document to RavenDB, and then exit.
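A hypothetical reconstruction of that kind of script (not the customer's actual code, and the URL and database name are made up) looks like this:

using System;
using Raven.Client.Documents;

class WriteOneDocument
{
    class Item { public string Value { get; set; } }

    static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "https://db.example.com" }, // hypothetical URL
            Database = "Items"                         // hypothetical database
        };
        store.Initialize();

        var line = Console.ReadLine();                 // read a single line from the input
        using var session = store.OpenSession();
        session.Store(new Item { Value = line });      // write one document
        session.SaveChanges();
        // the process exits here, tearing down all of the store's TCP connections
    }
}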

To understand what is actually going on, we need to zoom out and understand the system at a higher level. Let’s assume that the script above is called in the following manner:

What will happen now? All of this code is pretty innocent, I’m sure you can tell. But together, we are going to get the following interesting behavior:

For each line in the input, we’ll invoke the script, which will spawn a separate process to connect to RavenDB, write a single document to the server, and exit. Immediately afterward, we'll have another such process, etc.

Each of those processes is going to have a separate connection, identified by a quartet of (src ip, src port, dst ip, dst port). And there are only so many such ports available on the OS. Once you close a connection, it is moved to the TIME_WAIT state, and any packets that arrive for that connection quartet are assumed to be from the old connection and dropped. Generate enough new connections fast enough, and you literally lock yourself out of the network.

The solution to this problem is to avoid using a separate process for each interaction. Aside from alleviating the connection issue (establishing a connection also carries a non-trivial cost on the server), it allows RavenDB to optimize network and traffic patterns far better.
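For example, the same work done from a single long-lived process would use one shared DocumentStore for the entire run (again a sketch with made-up names):

using System;
using Raven.Client.Documents;

class WriteAllDocuments
{
    class Item { public string Value { get; set; } }

    static void Main()
    {
        using var store = new DocumentStore
        {
            Urls = new[] { "https://db.example.com" }, // hypothetical URL
            Database = "Items"                         // hypothetical database
        };
        store.Initialize();

        string line;
        while ((line = Console.ReadLine()) != null)    // one document per input line
        {
            using var session = store.OpenSession();
            session.Store(new Item { Value = line });
            session.SaveChanges();
        }
        // connections are reused for the entire run instead of one set per line
    }
}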

time to read 9 min | 1686 words

In the previous post, I was able to utilize AVX to get some nice speedups. In general, I was able to save up to 57%(!) of the runtime in processing arrays of 1M items. That is really amazing, if you think about it. But my best effort only gave me a 4% improvement when using 32M items.

I decided to investigate what is going on in more depth, and I came up with the following benchmark. Given that I want to filter negative numbers, what would happen if the only negative number in the array was the first one?

In other words, let’s see what happens when we could write this algorithm as the following line of code:

array[1..].CopyTo(array);

The idea here is that we should measure the speed of raw memory copy and see how that compares to our code.

Before we dive into the results, I want to make a few things explicit. We are dealing here with arrays of long; when I’m talking about an array with 1M items, I’m actually talking about an 8MB buffer, and for the 32M items, we are talking about 256MB of memory.

I’m running these benchmarks on the following machine:

    AMD Ryzen 9 5950X 16-Core Processor

    Base speed:    3.40 GHz
     L1 cache:    1.0 MB
     L2 cache:    8.0 MB
     L3 cache:    64.0 MB

    Utilization    9%
     Speed    4.59 GHz

In other words, when we look at this, the 1M items (8MB) can fit into L2 (barely), and are certainly backed by the L3 cache. For the 32M items (256MB), we are far beyond what can fit in the cache, so we are probably dealing with memory bandwidth issues.

I wrote the following functions to test it out:

Let’s look at what I’m actually testing here.

  • CopyTo() – using the span's native copying is the most ergonomic way to do things, I think.
  • MemoryCopy() – uses a built-in unsafe API in the framework. That eventually boils down to a heavily optimized routine, which… calls Memmove() if the buffers overlap (as they do in this case).
  • MoveMemory() – uses P/Invoke to call the Windows API to move the memory for us.
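Roughly, the three variants look like this (a sketch, assuming a pinned long[] buffer; not the exact benchmark code):

using System;
using System.Runtime.InteropServices;

public static unsafe class CopyVariants
{
    // P/Invoke to the Windows RtlMoveMemory routine (x64 assumed for the size type).
    [DllImport("kernel32.dll", EntryPoint = "RtlMoveMemory")]
    private static extern void MoveMemory(byte* dest, byte* src, long size);

    // Span-based copy: shift the array left by one element.
    public static void CopyToVariant(long[] array) =>
        array.AsSpan(1).CopyTo(array);

    // Buffer.MemoryCopy: ends up in the framework's optimized Memmove.
    public static void MemoryCopyVariant(long* ptr, int length) =>
        Buffer.MemoryCopy(ptr + 1, ptr,
            destinationSizeInBytes: (long)length * sizeof(long),
            sourceBytesToCopy: (long)(length - 1) * sizeof(long));

    // Direct call into the Windows API.
    public static void MoveMemoryVariant(long* ptr, int length) =>
        MoveMemory((byte*)ptr, (byte*)(ptr + 1), (long)(length - 1) * sizeof(long));
}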

Here are the results for the 1M case (8MB):

Method N Mean Error StdDev Ratio
FilterCmp 1048599 441.4 us 1.78 us 1.58 us 1.00
FilterCmp_Avx 1048599 141.1 us 2.70 us 2.65 us 0.32
CopyTo 1048599 872.8 us 11.27 us 10.54 us 1.98
MemoryCopy 1048599 869.7 us 7.29 us 6.46 us 1.97
MoveMemory 1048599 126.9 us 0.28 us 0.25 us 0.29

We can see some real surprises here. I’m using the FilterCmp (the very basic implementation) that I wrote.

I cannot explain why CopyTo() and MemoryCopy() are so slow.

What is really impressive is that FilterCmp_Avx() and MoveMemory() are so close in performance, and so much faster. To put it another way, we are already within shouting distance of the MoveMemory() performance. That is… really impressive.

That said, what happens with 32M (256MB) ?

Method N Mean Error StdDev Ratio
FilterCmp 33554455 22,763.6 us 157.23 us 147.07 us 1.00
FilterCmp_Avx 33554455 20,122.3 us 214.10 us 200.27 us 0.88
CopyTo 33554455 27,660.1 us 91.41 us 76.33 us 1.22
MemoryCopy 33554455 27,618.4 us 136.16 us 127.36 us 1.21
MoveMemory 33554455 20,152.0 us 166.66 us 155.89 us 0.89

Now FilterCmp_Avx is faster than MoveMemory. That is… a pretty big wow, and a really nice close for this blog post series, right? Except that we won’t be stopping here.

The way the task I set out works, we are actually filtering just the first item out, and then we are basically copying the memory. Let’s do some math: 256MB in 20.1ms means 12.4 GB/sec!

On this system, I have the following memory setup:

    64.0 GB

    Speed:    2133 MHz
     Slots used:    4 of 4
     Form factor:    DIMM
     Hardware reserved:    55.2 MB

I’m using DDR4 memory, so I can expect a maximum speed of 17GB/sec. In theory, I might be able to get more if the memory is located on different DIMMs, but for the size in question, that is not likely.

I’m going to skip the training montage of VTune, understanding memory architecture and figuring out what is actually going on.

Let’s drop everything and look at what we have with just AVX vs. MoveMemory:

Method N Mean Error StdDev Median Ratio
FilterCmp_Avx 1048599 141.6 us 2.28 us 2.02 us 141.8 us 1.12
MoveMemory 1048599 126.8 us 0.25 us 0.19 us 126.8 us 1.00
             
FilterCmp_Avx 33554455 21,105.5 us 408.65 us 963.25 us 20,770.4 us 1.08
MoveMemory 33554455 20,142.5 us 245.08 us 204.66 us 20,108.2 us 1.00

The new baseline is MoveMemory, and in this run, we can see that the AVX code is slightly slower.

It’s sadly not uncommon to see numbers shift by those ranges when we are testing such micro-optimizations, mostly because we are subject to so many variables that can affect performance. In this case, I dropped all the other benchmarks, which may have changed things.

At any rate, using those numbers, we have 12.4GB/sec for MoveMemory() and 11.8GB/sec for the AVX version. The hardware maximum speed is 17GB/sec. So we are quite close to what can be done.

For that matter, I would like to point out that the trivial code completed the task in 11GB/sec, so that is quite respectable and shows that the issue here is literally getting the memory fast enough to the CPU.

Can we do something about that? I made a pretty small change to the AVX version, like so:

What are we actually doing here? Instead of loading a value and immediately using it, we load the next value, then execute the body of the loop; when we iterate again, we start loading the value after that while processing the one we already have. The idea is to parallelize load and compute at the instruction level.

Sadly, that didn’t seem to do the trick. I saw a 19% additional cost for that version compared to the vanilla AVX one on the 8MB run and a 2% additional cost on the 256MB run.

I then realized that there was one really important test that I had to also make, and wrote the following:

In other words, let's test the speed of moving memory and filling memory as fast as we possibly can. Here are the results:

Method N Mean Error StdDev Ratio RatioSD Code Size
MoveMemory 1048599 126.8 us 0.36 us 0.33 us 0.25 0.00 270 B
FillMemory 1048599 513.5 us 10.05 us 10.32 us 1.00 0.00 351 B
               
MoveMemory 33554455 20,022.5 us 395.35 us 500.00 us 1.26 0.02 270 B
FillMemory 33554455 15,822.4 us 19.85 us 17.60 us 1.00 0.00 351 B

This is really interesting, for a small buffer (8MB), MoveMemory is somehow faster. I don’t have a way to explain that, but it has been a pretty consistent result in my tests.

For the large buffer (256MB), we are seeing results that make more sense to me.

  • MoveMemory – 12.5 GB / sec
  • FillMemory – 15.8 GB / sec

In other words, for MoveMemory, we are both reading and writing, so we are paying for memory bandwidth in both directions. For filling the memory, we are only writing, so we can get better performance (no need for reads).

In other words, we are talking about hitting the real physical limits of what the hardware can do. There are all sorts of tricks that one can pull, but when we are this close to the limit, they are almost always context-sensitive and dependent on many factors.

To conclude, here are my final results:

Method N Mean Error StdDev Ratio RatioSD Code Size
FilterCmp_Avx 1048599 307.9 us 6.15 us 12.84 us 0.99 0.05 270 B
FilterCmp_Avx_Next 1048599 308.4 us 6.07 us 9.26 us 0.99 0.03 270 B
CopyTo 1048599 1,043.7 us 15.96 us 14.93 us 3.37 0.11 452 B
ArrayCopy 1048599 1,046.7 us 15.92 us 14.89 us 3.38 0.14 266 B
UnsafeCopy 1048599 309.5 us 6.15 us 8.83 us 1.00 0.04 133 B
MoveMemory 1048599 310.8 us 6.17 us 9.43 us 1.00 0.00 270 B
               
FilterCmp_Avx 33554455 24,013.1 us 451.09 us 443.03 us 0.98 0.02 270 B
FilterCmp_Avx_Next 33554455 24,437.8 us 179.88 us 168.26 us 1.00 0.01 270 B
CopyTo 33554455 32,931.6 us 416.57 us 389.66 us 1.35 0.02 452 B
ArrayCopy 33554455 32,538.0 us 463.00 us 433.09 us 1.33 0.02 266 B
UnsafeCopy 33554455 24,386.9 us 209.98 us 196.42 us 1.00 0.01 133 B
MoveMemory 33554455 24,427.8 us 293.75 us 274.78 us 1.00 0.00 270 B

As you can see, the AVX version is comparable to, or even slightly beats, the MoveMemory function.

I tried things like prefetching memory (the next item, the next cache line, and the next page), using non-temporal loads and stores, and many other things, but this is a pretty tough challenge.

What is really interesting is that the original, very simple and obvious implementation, clocked at 11 GB/sec. After pulling pretty much all the stops and tricks, I was able to hit 12.5 GB / sec.

I don’t think anyone can read, update, or understand the resulting code without going through deep meditation. Even so, that is not a bad result at all. And along the way, I learned quite a bit about how the lowest level of the machine architecture works.

time to read 8 min | 1588 words

In the previous post I discussed how we can optimize the filtering of negative numbers by unrolling the loop, looked into branchless code and in general was able to improve performance by up to 15% from the initial version we started with. We pushed as much as we could on what can be done using scalar code. Now it is the time to open a whole new world and see what we can do when we implement this challenge using vector instructions.

The key problem with such tasks is that SIMD, AVX and their friends were designed by… an interesting process using a perspective that makes sense if you can see in a couple of additional dimensions. I assume that at least some of that is implementation constraints, but the key issue is that when you start using SIMD, you realize that you don’t have general-purpose instructions. Instead, you have a lot of dedicated instructions that are doing one thing, hopefully well, and it is your role to compose them into something that would make sense. Oftentimes, you need to turn the solution on its head in order to successfully solve it using SIMD. The benefit, of course, is that you can get quite an amazing boost in speed when you do this.

The algorithm we use is basically to scan the list of entries and copy to the start of the list only those items that are positive. How can we do that using SIMD? The whole point here is that we want to be able to operate on multiple data, but this particular task isn’t trivial. I’m going to show the code first, then discuss what it does in detail:
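A simplified sketch of the approach looks something like this (not the exact Corax code: the permutation table is built at startup rather than embedded as a constant, and the range-check trick mentioned below is omitted):

using System;
using System.Numerics;
using System.Runtime.Intrinsics;
using System.Runtime.Intrinsics.X86;

public static unsafe class SimdFilter
{
    // For each 4-bit mask of negative elements, the int32 lane indices of the
    // elements we keep, packed to the front (unused lanes stay 0).
    private static readonly Vector256<int>[] PermuteTable = BuildPermuteTable();

    private static Vector256<int>[] BuildPermuteTable()
    {
        var table = new Vector256<int>[16];
        for (int mask = 0; mask < 16; mask++)
        {
            var lanes = new int[8];
            int pos = 0;
            for (int element = 0; element < 4; element++)
            {
                if ((mask & (1 << element)) != 0)
                    continue; // negative element, drop it
                lanes[pos++] = element * 2;     // low 32 bits of the int64
                lanes[pos++] = element * 2 + 1; // high 32 bits of the int64
            }
            table[mask] = Vector256.Create(lanes[0], lanes[1], lanes[2], lanes[3],
                                           lanes[4], lanes[5], lanes[6], lanes[7]);
        }
        return table;
    }

    // Packs the non-negative values to the front of the span and returns the new count.
    // Assumes Avx2.IsSupported; a real implementation needs a scalar fallback.
    public static int FilterNegatives(Span<long> items)
    {
        int output = 0;
        int i = 0;
        fixed (long* ptr = items)
        {
            for (; i + 4 <= items.Length; i += 4)
            {
                var block = Avx.LoadVector256(ptr + i);
                uint negatives = Vector256.ExtractMostSignificantBits(block);
                var permuted = Avx2.PermuteVar8x32(block.AsInt32(), PermuteTable[negatives]);
                Avx.Store(ptr + output, permuted.AsInt64());
                output += 4 - BitOperations.PopCount(negatives);
            }
        }
        for (; i < items.Length; i++) // scalar tail for the remainder
        {
            if (items[i] >= 0)
                items[output++] = items[i];
        }
        return output;
    }
}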

We start with the usual check (if you’ll recall, that ensures the JIT knows it can elide some range checks), then we load the PermuteTable. For now, just assume that this is magic (and it is). The first interesting thing happens when we start iterating over the loop. Unlike before, we now do that in chunks of 4 int64 elements at a time. Inside the loop, we start by loading a vector of int64 and then we do the first odd thing. We call ExtractMostSignificantBits(), since the sign bit is used to mark whether a number is negative or not. That means that I can use a single instruction to get an integer with a bit set for each of the negative numbers. That is particularly juicy for what we need, since there is no need for comparisons, etc.

If the mask we got is all zeroes, it means that all the numbers we loaded to the vector are positives, so we can write them as-is to the output and move to the next part. Things get interesting when that isn’t the case.

We load a permute value using some shenanigans (we’ll touch on that shortly) and call the PermuteVar8x32() method. The idea here is that we pack all the non-negative numbers to the start of the vector, then we write the vector to the output. The key here is that when we do that, we increment the output index only by the number of valid values.  The rest of this method just handles the remainder that does not fit into a vector.

The hard part in this implementation was figuring out how to handle the scenario where we loaded some negative numbers. We need a way to filter them, after all. But there is no single SIMD instruction that allows us to do so. Luckily, we have the Avx2.PermuteVar8x32() method to help here. To confuse things, we don’t actually want to deal with 8x32 values; we want to deal with 4x64 values. There is the Avx2.Permute4x64() method, and it would work quite nicely, with a single caveat: this method assumes that you are going to pass it a constant value. We don’t have such a constant, we need to be able to provide it based on whatever the masked bits give us.

So how do we deal with this issue of filtering with SIMD? We need to move all the values we care about to the front of the vector. We have the method to do that, PermuteVar8x32(), and we just need to figure out how to actually make use of it. PermuteVar8x32() accepts an input vector as well as a vector describing the permutation you want to apply. In this case, we base the permutation on the 4 sign bits of the 4-element vector of int64. As such, there are a total of 16 options available to us. We have to deal with 32-bit values rather than 64-bit ones, but that isn’t much of a problem.

Here is the permutation table that we’ll be using:

What you can see here is that when we have a 1 in the mask bits (shown in the comments), we don’t copy that element to the destination vector. Let’s take a look at the entry of 0101, which may be caused by the following values [1,-2,3,-4].

When we look at the right entry at index #5 in the table: 2,3,6,7,0,0,0,0

What does this mean? It means that we want to put the 2nd int64 element in the source vector and move it as the first element of the destination vector, take the 3rd element from the source as the second element in the destination and discard the rest (marked as 0,0,0,0 in the table).

This is a bit hard to follow because we have to compose the value out of the individual 32-bit words, but it works quite well. Or, at least, it would work, but not as efficiently. This is because we would need to load the PermuteTableInts into a variable and access it, but there are better ways to deal with it. We can ask the JIT to embed the value directly. The problem is that the pattern the JIT recognizes is limited to ReadOnlySpan<byte>, which means that the already non-trivial int32 table got turned into this:

This is the exact same data as before, but using ReadOnlySpan<byte> means that the JIT can package that inside the data section and treat it as a constant value.

The code was heavily optimized, to the point where I noticed a JIT bug where these two versions of the code give different assembly output:

Here is what we get out:

This looks like an unintended consequence of Roslyn and the JIT each doing their (separate) jobs, but not reaching the end goal. Constant folding looks like it is done mostly by Roslyn, but it scans from the left, so it wouldn’t convert $A * 4 * 8 to $A * 32. That is because it stopped evaluating the constants when it found a variable. When we add parentheses, we isolate the constant expression, and now it understands that it can be folded.

Speaking of assembly, here is the annotated assembly version of the code:

And after all of this work, where are we standing?

Method N Mean Error StdDev Ratio RatioSD Code Size
FilterCmp 23 285.7 ns 3.84 ns 3.59 ns 1.00 0.00 411 B
FilterCmp_NoRangeCheck 23 272.6 ns 3.98 ns 3.53 ns 0.95 0.01 397 B
FilterCmp_Unroll_8 23 261.4 ns 1.27 ns 1.18 ns 0.91 0.01 672 B
FilterCmp_Avx 23 261.6 ns 1.37 ns 1.28 ns 0.92 0.01 521 B
               
FilterCmp 1047 758.7 ns 1.51 ns 1.42 ns 1.00 0.00 411 B
FilterCmp_NoRangeCheck 1047 756.8 ns 1.83 ns 1.53 ns 1.00 0.00 397 B
FilterCmp_Unroll_8 1047 640.4 ns 1.94 ns 1.82 ns 0.84 0.00 672 B
FilterCmp_Avx 1047 426.0 ns 1.62 ns 1.52 ns 0.56 0.00 521 B
               
FilterCmp 1048599 502,681.4 ns 3,732.37 ns 3,491.26 ns 1.00 0.00 411 B
FilterCmp_NoRangeCheck 1048599 499,472.7 ns 6,082.44 ns 5,689.52 ns 0.99 0.01 397 B
FilterCmp_Unroll_8 1048599 425,800.3 ns 352.45 ns 312.44 ns 0.85 0.01 672 B
FilterCmp_Avx 1048599 218,075.1 ns 212.40 ns 188.29 ns 0.43 0.00 521 B
               
FilterCmp 33554455 29,820,978.8 ns 73,461.68 ns 61,343.83 ns 1.00 0.00 411 B
FilterCmp_NoRangeCheck 33554455 29,471,229.2 ns 73,805.56 ns 69,037.77 ns 0.99 0.00 397 B
FilterCmp_Unroll_8 33554455 29,234,413.8 ns 67,597.45 ns 63,230.70 ns 0.98 0.00 672 B
FilterCmp_Avx 33554455 28,498,115.4 ns 71,661.94 ns 67,032.62 ns 0.96 0.00 521 B

So it seems that the idea of using SIMD instructions has a lot of merit. Moving from the original code to the final version, we see that we can complete the same task in up to half the time.

I’m not quite sure why we aren’t seeing the same sort of performance on the 32M, but I suspect that this is likely because we far exceed the CPU cache and we have to fetch it all from memory, so that is as fast as it can go.

If you are interested in learning more, Lemire solves the same problem in SVE (SIMD for ARM) and Paul has a similar approach in Rust.

If you can think of further optimizations, I would love to hear your ideas.
