Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 8 min | 1581 words

In the previous post, I talked about a massive amount of effort (2+ months of work) and about 25,000 lines of code changes. The only purpose of this task was to remove two locks from the system. During high load, we spent huge amounts of time contending for these locks, so removing them was well worth the effort.

During this work, I essentially found myself in the guts of Voron (RavenDB’s storage engine) and mostly dealing with old code. I’m talking about code that was written between 10 and 15 years ago. I wrote a blog post about it at the time. Working with old code is an interesting experience, especially since most of this code was written by me. I can remember some of my thoughts from the time I wrote it.

Old code is working code, and old code is also something that was built upon. Other parts of the codebase are making assumptions about the way the system behaves. And the more time a piece of code doesn't change, the more likely its behavior is going to ossify. Changing old code is hard because of the many ways that such dependencies can express themselves.

I dug through all of this decade-plus old code and I realized something pretty horrible.

It turns out that I made a mistake in understanding how Windows implements buffering for memory-mapped files. I realized my mistake around mid-2024; see the related post for the actual details.

The TLDR summary, however, is that when using unbuffered file I/O with memory-mapped files on Windows, you cannot expect the mapped memory to reflect the data written using the file I/O API. Windows calls it coherence, and it was quite confusing when I first realized what the problem was. It turns out that this applies only to unbuffered I/O and there is no such problem with buffered I/O.
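
To make the trap concrete, here is a minimal Win32 sketch of the incoherent case (my own illustration, not RavenDB or Voron code; error handling is elided, and it assumes test.bin already exists with data). With buffered I/O, the mapped view is guaranteed to observe the WriteFile() call; with FILE_FLAG_NO_BUFFERING, it may not:

#include <windows.h>
#include <string.h>
#include <stdio.h>

int main(void)
{
    HANDLE file = CreateFileA("test.bin", GENERIC_READ | GENERIC_WRITE,
        FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING,
        FILE_FLAG_NO_BUFFERING | FILE_FLAG_WRITE_THROUGH, NULL);

    HANDLE mapping = CreateFileMappingA(file, NULL, PAGE_READONLY, 0, 0, NULL);
    unsigned char *view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);

    // unbuffered I/O requires sector-aligned buffers and sizes (4096 here)
    void *buf = VirtualAlloc(NULL, 4096, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);
    memset(buf, 0xFF, 4096);
    DWORD written;
    WriteFile(file, buf, 4096, &written, NULL);

    // with buffered I/O, view[0] would be guaranteed to read 0xFF here;
    // with FILE_FLAG_NO_BUFFERING, the view may still show stale data
    printf("first byte via the mapping: %02x\n", view[0]);
    return 0;
}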

The scenario I needed to support can use buffered I/O, however, which came as a profound shock to me. Large portions of Voron’s architecture are actually shaped by this limitation.

Because I thought that you couldn’t use both file I/O and memory-mapped files at the same time in Windows and get a consistent view of the data (the documentation literally says that, I have to add), RavenDB used memory-mapped I/O to write to the data file. That is a choice, certainly, but not one that I particularly liked. It was just that I was able to make things work and move on to other things.

This is another tale of accidental complexity, actually. I had a problem and found a solution to it, which at the time I considered quite clever. Because I had a solution, I never tried to dig any deeper into it and figure out whether this is the only solution.

This choice of using only memory-mapped I/O to write to the data file had consequences. In particular, it meant that:

  • We had to map the data using read-write mode.
  • There was no simple way to get an error if a write failed - since we just copied the data to memory, there was no actual write to fail. An error to write to disk would show up as a memory access violation (segmentation fault!) or just not show up at all.
  • Writing to a page that isn’t in memory may require us to read it first (even if we are overwriting all of it).

I accepted those limitations because I thought that this was the only way to do things. When I realized that I was wrong, that opened up so many possibilities. As part of the refactoring work, the way Voron does things changed significantly. We are now mapping the data file as read-only and writing to it using file I/O.

That means we have a known point of failure if we fail to write. That probably deserves some explanation. Failure to write to the disk can come in a bunch of ways. In particular, successfully writing to a file is not enough to safely store data, you also need to sync the file before you can be assured that the data is safe. The key here is that write + sync ensures that you’ll know that this either succeeded or failed.

Here is the old way we were writing to the data file. Conceptually, this looks like this:


auto mem = EnsureFileSize(pagesToWrite[pagesToWriteLength - 1].EndPosition);
for(auto i = 0; i < pagesToWriteLength; i++)
{
    auto page = pagesToWrite[i];
    memcpy(mem + page.Number * 8192, page.Buffer, page.Length);
}

// some later time
if(FAILED(msync(mem)))
   return SYNC_FAILURE;

And here is the first iteration of using the file I/O API for writes.


fallocate_if_needed(pagesToWrite[pagesToWriteLength - 1].EndPosition);
for(auto i = 0; i < pagesToWriteLength; i++)
{
    auto page = pagesToWrite[i];
    if(FAILED(pwrite(page.Number * 8192, page.Buffer, page.Length)))
         return WRITE_FAILURE;
}

// some time later
if (FAILED(fdatasync(file)))
   return SYNC_FAILURE;

Conceptually, this is just the same, but notice that we respond immediately to write failures here.

When we started testing this feature, we realized something really interesting. The new version was much slower than the previous one, and it also generated a lot more disk writes.

I moved mountains for this?

Sometimes you get a deep sense of frustration when you look at benchmark results. The amount of work invested in this change is… pretty high. And from an architectural point of view, I’m loving it. The code is simpler, more robust, and allows us to cleanly do a lot more than we used to be able to.

The code also should be much faster, but it wasn’t. And given that performance is a critical aspect of RavenDB, that may cause us to scrap the whole thing.

Looking more deeply into the issue, it turned out that my statement about old code and the state of the system was spot on. Take a look at the two code snippets above and consider how they look from the point of view of the operating system. In the case of the memcpy() version, there is a strong likelihood that the kernel isn’t even involved (the pages are already paged in), and the only work done here is marking them as dirty (done by the CPU directly).

That means that the OS will figure out that it has stuff to write to the disk either when we call msync() or when its own writeback timer fires. On the other hand, when we call pwrite(), we involve the OS at every stage of the process, making it far more likely that it will start the actual write to the disk earlier. That means that we are wasting batching opportunities.

In other words, because we used memory-mapped writes, we (accidentally, I might add) created a situation where we tried very hard to batch those writes in memory as much as possible. Another aspect here is that we are issuing a separate system call for each page. That means we are paying another high price.

The good thing about this is that we now have a good way to deal with those issues. The pwrite() code above was simply the first version used to test things out. Since we now have the freedom to choose, we can use whatever file I/O approach we want.

In particular, RavenDB 7.1 now supports the notion of write modes, with the following possible options:

  • mmap - exactly like previous versions, uses a writable memory map and memcpy() to write the values to the data file.
  • file_io - uses pwrite() to write the data, one page at a time, as shown above.
  • vectored_file_io - uses pwritev() to write the data, merging adjacent writes to reduce the number of system calls we use (POSIX only, since Windows has strict limits on this capability). See the sketch after this list.
  • io_ring - uses HIORING (Windows) / IO_Uring (Linux) to submit the whole set of work to the kernel as a single batch of operations.
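
To illustrate the vectored_file_io idea, here is a sketch of the merging logic (my own illustration, not RavenDB’s actual implementation; write_merged and MAX_IOVECS are hypothetical names). It assumes the buffers are sorted by page number, which is how Voron hands them over:

#include <stdint.h>
#include <sys/types.h>
#include <sys/uio.h>

#define VORON_PAGE_SIZE 8192
#define MAX_IOVECS 64

struct page_to_write {
    void   *ptr;
    int64_t page_num;
    int64_t count_of_pages;
};

// Merge runs of pages that are adjacent on disk into single pwritev() calls.
// Partial-write handling is elided for brevity.
int write_merged(int fd, struct page_to_write *pages, int count)
{
    int i = 0;
    while (i < count) {
        struct iovec iov[MAX_IOVECS];
        off_t offset = pages[i].page_num * VORON_PAGE_SIZE;
        int64_t next_page = pages[i].page_num;
        int n = 0;
        while (i < count && n < MAX_IOVECS && pages[i].page_num == next_page) {
            iov[n].iov_base = pages[i].ptr;
            iov[n].iov_len = pages[i].count_of_pages * VORON_PAGE_SIZE;
            next_page += pages[i].count_of_pages;
            n++;
            i++;
        }
        if (pwritev(fd, iov, n, offset) < 0)
            return -1;
    }
    return 0;
}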

RavenDB will select the appropriate mode for the system on its own, usually selecting io_ring for modern Linux and Windows machines, and vectored_file_io for Mac. You can control that using the RAVEN_VORON_WRITER_MODE environment variable, but that is provided only because we want to have an escape hatch, not something that you are meant to configure.

With those changes, we are on a much better footing for overall performance, but we aren’t done yet! I would love to give you the performance numbers, but we didn’t actually run the full test suite with just these changes, because there is more in the pipeline. I’ll cover that in the next post.

time to read 10 min | 1875 words

Even though RavenDB 7.0 isn’t quite out of the gate yet (expect the release very soon), I want to start talking about the RavenDB 7.1 release. This release is going to represent the biggest change in RavenDB in about 7 years, and at the same time, it is expected to be a complete non-event for most people.

We care enough about performance that we have both a dedicated team to verify it as well as really talented people whose sole job is to run benchmarks and optimize RavenDB. This year, the goal was to test RavenDB’s performance when running on large machines (many hundreds of GBs of RAM, dozens of cores, etc.). The idea was to find bottlenecks and remove them, and we quickly found a few.

Performance optimization is often a very repetitive process. Find a problem, figure out the root cause, and move on. It is a process full of small changes and 0.1% improvements. The key is that if you practice this routinely, you end up with real measurable results over time, since your changes compound.

Most of the time, these are the changes you celebrate:

Here you can see a 6.6% increase in the number of operations per millisecond. That is a pretty big change, especially since it affects reading documents from storage.

We were roughly two months into this process when we ran into the end of the line. The scenario in question was 95% reads / 5% writes, with the idea of seeing how quickly the system responds under load, but without stressing it to the limits. Take a look at our findings:

What you can see here is that creating a transaction is costly, and over 50% of that cost is due to contention over a lock. We didn’t notice that until we had a lot of cores to go around, since there wasn’t a sufficient number of threads actively contending for this lock. Once we started looking for it, the problem was very apparent and visible on smaller machines as well.

RavenDB creates a single transaction per request, so that means we are spending a lot of time doing essentially nothing useful. What is worse, this is actively spinning. So the more cores we have, the more CPU we’ll use and the less useful work we’ll do. Yuck!

When we looked at the root cause, the problem became pretty apparent. Take a look at the following snippet:


_txCreation.EnterReadLock(); // creating read transaction
try
{
    _cancellationTokenSource.Token.ThrowIfCancellationRequested();

    tx = new LowLevelTransaction(previous.LowLevelTransaction, transactionPersistentContext, context);

    ActiveTransactions.Add(tx);
}
finally
{
    _txCreation.ExitReadLock();
}

using (PreventNewTransactions()) // during commit of write transaction
{
    if (tx.Committed && tx.FlushedToJournal)
        Interlocked.Exchange(ref _transactionsCounter, tx.Id);

    State = tx.State;

    Journal.Applicator.OnTransactionCommitted(tx);
    ScratchBufferPool.UpdateCacheForPagerStatesOfAllScratches();
    Journal.UpdateCacheForJournalSnapshots();

    tx.OnAfterCommitWhenNewTransactionsPrevented();
}

What happens is that during the commit of a write transaction we need to make updates to a bunch of places. We also need all new read transactions to see all our changes at once. So during commit, we take a short lock to update a couple of locations. And when we create a read transaction, we hold a read lock to be able to properly read all those values safely.

This particular code (and the approach in general) dates back to the very early days of Voron.

On one hand, I look at this code and cringe internally. On the other hand, it has served us really well for a long time. Of course, on the gripping hand, that code has been around for long enough to have grown some barnacles.

Notice the OnAfterCommitWhenNewTransactionsPrevented() call we have there? It is meant to allow other operations that need to run while no read transactions are allowed, in order to maintain in-memory consistency.

Fixing this issue was both straightforward and quite complex at the same time. Instead of storing all the data in various places, I introduced the following record:


record EnvironmentStateRecord(
    Pager.State DataPagerState, 
    long TransactionId,
    ImmutableDictionary<long, PageFromScratchBuffer> ScratchPagesTable,
    TreeRootHeader Root,
    long NextPageNumber,
    (long Number, long Last4KWritePosition) Journal,
    object ClientState);

The actual details of what it does aren’t really that interesting. What is important is that we have an immutable record that is being generated by the write transaction. That record contains all the database state that we carry over between transactions. When we want to publish this state, we can just make a single atomic write to a pointer, no need for a lock.
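
The publication itself is trivial, since writing a reference in .NET is atomic. Sketched in C11 terms (to match the C used elsewhere in this series; all names here are hypothetical):

#include <stdatomic.h>

struct env_state_record {
    long transaction_id;
    /* ... the rest of the immutable database state ... */
};

static _Atomic(struct env_state_record *) current_state;

// Write transaction: build a fresh immutable record, then publish it
// with a single atomic pointer store - no lock required.
void publish_state(struct env_state_record *next)
{
    atomic_store_explicit(&current_state, next, memory_order_release);
}

// Read transaction: one atomic load observes the entire state at once.
struct env_state_record *read_current_state(void)
{
    return atomic_load_explicit(&current_state, memory_order_acquire);
}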

Of course, it can’t be that simple. We needed to essentially convert large portions of the code to take their state from the environment record and provide a way for higher-level code to add their own state to the record on write.

That took about two months of hard work, I think. Not complicated work, just grinding through all of those places, figuring out the best way to represent that, and making all the necessary changes.

Unfortunately, that wasn’t the only lock convoy we found. Files inside RavenDB are held by a class called Pager (since it serves pages of data). A file (Pager) may hold multiple States, which is a fancy way of referring to a memory-mapped range we use. The Pager may hold multiple such mappings concurrently. For example, if we increase the file size, we need to map the memory again - so for a certain period of time, we have two such mappings.

A transaction may create a reference to such a state (and thus may access the mapping) for its lifetime. At the end of the transaction, we can release the state, and if it is the last reference, we’ll close the mapping.

This is implemented using the following code:

This runs for multiple Pagers for each transaction, mind you. The actual AddRef() call is very cheap, but the lock…


public void AddRef() => Interlocked.Increment(ref _refs);

As you can see, this is also pretty old code. The problem was how to deal with the issue. I have to be able to keep track of the mapped memory if a transaction is using it, but at the same time, I want to dispose of it quickly if it isn’t being used.

The answer to this question was to lean on the GC. Instead of implementing AddRef() / Release(), I hold the Pager.State object and handle disposal in the finalizer. If there is a reference to it, the finalizer isn’t called, so this gives me the exact behavior I want, for free.

However… I also need to be able to dispose of this explicitly when closing the pager. A common scenario is that I’m closing the Pager and deleting the directory. If I have an outstanding reference to the file, that will fail.

Here is how I’m dealing with the issue. The Pager holds a WeakReference to the State and may explicitly dispose of it:


public void Dispose()
{
    foreach (WeakReference<State> state in _states)
    {
        if (state.TryGetTarget(out var v))
        {
            v.Dispose();
        }
    }
}

That also allowed me to refactor the way we deal with files in general inside Voron. The first version of Voron did everything in managed code (including significant P/Invoke and cross-platform work). At some point, we introduced a native DLL (PAL - Platform Abstraction Layer - written in C) that allowed us to have a more consistent API.

The issue was that there was no clean division of labor between the native C code and the managed code, and some responsibilities were split between them in a way that wasn’t very clean.

RavenDB is using Zig

To be rather more exact, I’m using Zig as a build system to make cross-compilation a lot easier. I initially moved everything to use Zig, but it turns out that Zig isn’t able to produce DLLs that would run properly on older versions of Windows.

At least, nothing that I did convinced Zig to give me something that wouldn’t fail to load. So we are still using MSVC to compile for Windows, and Zig for both Linux & Mac (x64 & ARM). If you have good ideas on how to solve that, I would really appreciate it.

As a final thought, this work took several months, and when I got to the end, we got a pull request with some interesting numbers (+11.3K LOC, -11.2K LOC).

I have to admit that I feel a bit bad about that for the reviewers 🙂. I was really hoping that I would get a net negative result in the number of lines changed, but that couldn’t be helped. The end result is that all of our state is now far simpler, but we didn’t actually change anything in the way we behave. We removed two frequently hit lock convoys and allowed RavenDB to be far more efficient.

How efficient? Well… that is a bit complex to answer, because we didn’t stop here. There are more things that happened in RavenDB 7.1 that I need to talk to you about. We’ll cover them in my next post.

time to read 14 min | 2742 words

In my previous post, I discussed how Linux will silently truncate a big write (> 2 GB) for you. That is expected by the interface of write(). The problem is that this behavior also applies when you use IO_Uring.
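
For reference, here is the behavior with a plain write() (a standalone sketch; on Linux, a single write() transfers at most 0x7ffff000 bytes, just under 2 GB, and reports a partial write rather than an error):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    size_t size = 3UL * 1024 * 1024 * 1024; // 3 GB
    char *buffer = calloc(size, 1);         // assumes the machine has the RAM
    int fd = open("/tmp/big", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    ssize_t written = write(fd, buffer, size);
    // prints 2147479552 (0x7ffff000): the call was silently capped,
    // and the caller is expected to loop on the remainder
    printf("wrote %zd of %zu bytes\n", written, size);
    return 0;
}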

Take a look at the following code:


struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
if (!sqe) {
    return 1;
}
io_uring_prep_write(sqe, fd, buffer, BUFFER_SIZE, 0);
io_uring_submit(&ring);


struct io_uring_cqe *cqe;
ret = io_uring_wait_cqe(&ring, &cqe);
if (ret < 0) {
    return 2;
}

If BUFFER_SIZE is 3 GB, then this will write about 2 GB to the file. The number of bytes written is correctly reported, but the complexity this generates is huge. Consider the following function:


int32_t rvn_write_io_ring(
    void *handle,
    int32_t count,
    struct page_to_write *buffers,
    int32_t *detailed_error_code);

There is a set of buffers that I want to write, and the natural way to do that is:


int32_t rvn_write_io_ring(
    void *handle,
    int32_t count,
    struct page_to_write *buffers,
    int32_t *detailed_error_code)
{
    struct handle *handle_ptr = handle;
    for (size_t i = 0; i < count; i++)
    {
        struct io_uring_sqe *sqe = io_uring_get_sqe(
            &handle_ptr->global_state->ring);
        io_uring_prep_write(sqe,
            handle_ptr->file_fd, 
            buffers[i].ptr, 
            buffers[i].count_of_pages * VORON_PAGE_SIZE, 
            buffers[i].page_num * VORON_PAGE_SIZE
        );
    }
    return _submit_and_wait(&handle_ptr->global_state->ring,
                            count, detailed_error_code);
}


int32_t _submit_and_wait(
    struct io_uring* ring,
    int32_t count,
    int32_t* detailed_error_code)
{
    int32_t rc = io_uring_submit_and_wait(ring, count);
    if(rc < 0)
    {
        *detailed_error_code = -rc;
        return FAIL_IO_RING_SUBMIT;
    }


    struct io_uring_cqe* cqe;
    for(int i = 0; i < count; i++)
    {
        rc = io_uring_wait_cqe(ring, &cqe);
        if (rc < 0)
        {
            *detailed_error_code = -rc;
            return FAIL_IO_RING_NO_RESULT;
        }
        if(cqe->res < 0)
        {
            *detailed_error_code = -cqe->res;
            return FAIL_IO_RING_WRITE_RESULT;
        }
        io_uring_cqe_seen(ring, cqe);
    }
    return SUCCESS;
}

In other words, send all the data to the IO Ring, then wait for all those operations to complete. We verify complete success and can then move on. However, because we may have a write that is greater than 2 GB, and because the interface allows the IO Uring to write less than we thought it would, we need to handle that with retries.

After thinking about this for a while, I came up with the following implementation:


int32_t _submit_writes_to_ring(
    struct handle *handle,
    int32_t count,
    struct page_to_write *buffers,
    int32_t* detailed_error_code)
{
    struct io_uring *ring = &handle->global_state->ring;
    off_t *offsets = handle->global_state->offsets;
    memset(offsets, 0, count * sizeof(off_t));   


    while(true)
    {
        int32_t submitted = 0;


        for (size_t i = 0; i < count; i++)
        {
            off_t offset =  offsets[i];
            if(offset == buffers[i].count_of_pages * VORON_PAGE_SIZE)
                continue;


            struct io_uring_sqe *sqe = io_uring_get_sqe(ring);
            if (sqe == NULL) // the ring is full, flush it...
                break;


            io_uring_sqe_set_data64(sqe, i);
            io_uring_prep_write(sqe, handle->file_fd, 
                buffers[i].ptr + offset,
                buffers[i].count_of_pages * VORON_PAGE_SIZE - offset,
                buffers[i].page_num * VORON_PAGE_SIZE + offset);


            submitted++;
        }


        if(submitted == 0)
            return SUCCESS;
        
        int32_t rc = io_uring_submit_and_wait(ring, submitted);
        if(rc < 0)
        {
            *detailed_error_code = -rc;
            return FAIL_IO_RING_SUBMIT;
        }


        struct io_uring_cqe *cqe;
        uint32_t head = 0;
        uint32_t i = 0;
        bool has_errors = false;


        io_uring_for_each_cqe(ring, head, cqe) {
            i++;
            uint64_t index = io_uring_cqe_get_data64(cqe);
            int result = cqe->res;
            if(result < 0)
            {
                has_errors = true;
                *detailed_error_code = -result;
            }
            else
            {
                offsets[index] += result;
                if(result == 0)
                {
                    // there shouldn't be a scenario where we return 0
                    // for a write operation; we may want to retry here,
                    // but figuring out whether this was a one-off event or
                    // whether we need to retry (have we already retried?)
                    // is complex enough to treat this as an error for now.
                    has_errors = true;
                    *detailed_error_code = EIO;   
                }
            }
        }
        io_uring_cq_advance(ring, i);


        if(has_errors)
            return FAIL_IO_RING_WRITE_RESULT;
    }
}

That is a lot of code, but it is mostly because of how C works. What we do here is scan through the buffers we need to write, as well as scan through an array of offsets that store additional information for the operation.

If the offset to write doesn’t indicate that we’ve written the whole thing, we’ll submit it to the ring and keep going until we either fill the entire ring or run out of buffers to work with. The next step is to submit the work and wait for it to complete, then run through the results, check for errors, and update the offset that we wrote for the relevant buffer.

Then, we scan the buffers array again to find either partial writes that we have to complete (we didn’t write the whole buffer) or buffers that we didn’t write at all because we filled the ring. In either case, we submit the new batch of work to the ring and repeat until we run out of work.

This code assumes that we cannot have a non-error state where we write 0 bytes to the file and treats that as an error. We also assume that an error in writing to the disk is fatal, and the higher-level code will discard the entire IO_Uring if that happens.

The Windows version, by the way, is somewhat simpler. Windows explicitly limits the size of the buffer you can pass to the write() call (and its IO Ring equivalent). It also ensures that it will write the whole thing, so partial writes are not an issue there.

It is interesting to note that the code above will effectively stripe writes if you send very large buffers. Let’s assume that we send it two such buffers, like so:

           Offset   Size
Buffer 1   1 GB     4 GB
Buffer 2   10 GB    6 GB

The patterns of writes that will actually be executed are:

  1. 1GB .. 3 GB, 10 GB .. 12 GB
  2. 3 GB .. 5 GB, 12 GB .. 14 GB
  3. 14 GB .. 16 GB

I can “fix” that by never issuing writes that are larger than 2 GB and issuing separate writes for each 2 GB range, but that leads to other complexities (e.g., tracking state if I split a write and hit the full ring status, etc.). At those sizes, it doesn’t actually matter in terms of efficiency or performance.

Partial writes are almost always a sign of either very large writes that were broken up or some underlying issue that is already problematic, so I don’t care that much about that scenario in general. For the vast majority of cases, this will always issue exactly one write for each buffer.

What is really interesting from my point of view, however, is how even a pretty self-contained feature can get pretty complex internally. On the other hand, this behavior allows me to push a whole bunch of work directly to the OS and have it send all of that to the disk as fast as possible.

In our scenarios, under load, we may call that with thousands to tens of thousands of pages (each 8 KB in size) spread all over the file. The buffers are actually sorted, so ideally, the kernel will be able to take advantage of that, but even if not, just reducing the number of syscalls will result in performance savings.

time to read 15 min | 2973 words

RavenDB is a transactional database; we care deeply about ACID. The D in ACID stands for durability, which means that to acknowledge a transaction, we must write it to a persistent medium. Writing to disk is expensive; writing to the disk and ensuring durability is even more expensive.

After seeing some weird performance numbers on a test machine, I decided to run an experiment to understand exactly how durable writes affect disk performance.

A few words about the term durable writes. Disks are slow, so we use buffering & caches to avoid going to the disk. But a write to a buffer isn’t durable. A failure could cause it to never hit a persistent medium. So we need to tell the disk in some way that we are willing to wait until it can ensure that this write is actually durable.

This is typically done using either fsync or O_DIRECT | O_DSYNC flags. So this is what we are testing in this post.
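
For reference, here is roughly what the two approaches look like in code - a sketch with error checking elided; the 4096-byte alignment is an assumption about the sector size:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// Option 1: buffered writes, made durable by an explicit fsync.
void durable_write_fsync(int fd, const void *buf, size_t len)
{
    write(fd, buf, len);   // may only reach the page cache...
    fsync(fd);             // ...durable only once fsync returns
}

// Option 2: O_DIRECT | O_DSYNC - each write bypasses the page cache and
// returns only after the device acknowledges it.
int open_durable(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_DIRECT | O_DSYNC, 0644);
}

// O_DIRECT requires the buffer, length, and file offset to be aligned,
// typically to the logical sector size.
void *alloc_aligned(size_t len)
{
    void *buf = NULL;
    posix_memalign(&buf, 4096, len);
    memset(buf, 0, len);
    return buf;
}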

I wanted to test things out without any of my own code, so I ran the following benchmark.

I pre-allocated a file and then ran the following commands.

Normal writes (buffered) with different sizes (256 KB, 512 KB, etc).


dd if=/dev/zero of=/data/test bs=256K count=1024
dd if=/dev/zero of=/data/test bs=512K count=1024

Durable writes (force the disk to acknowledge them) with different sizes:


dd if=/dev/zero of=/data/test bs=256k count=1024 oflag=direct,sync
dd if=/dev/zero of=/data/test bs=512k count=1024 oflag=direct,sync

The commands above open the file using:


openat(AT_FDCWD, "/data/test", O_WRONLY|O_CREAT|O_TRUNC|O_SYNC|O_DIRECT, 0666) = 3

I got myself an i4i.xlarge instance on AWS and started running some tests. That machine has a local NVMe drive of about 858 GB, 32 GB of RAM, and 4 cores. Let’s see what kind of performance I can get out of it.

Write size   Total writes   Buffered writes

256 KB       256 MB         1.3 GB/s
512 KB       512 MB         1.2 GB/s
1 MB         1 GB           1.2 GB/s
2 MB         2 GB           731 MB/s
8 MB         8 GB           571 MB/s
16 MB        16 GB          561 MB/s
2 MB         8 GB           559 MB/s
1 MB         1 GB           554 MB/s
4 KB         16 GB          557 MB/s
16 KB        16 GB          553 MB/s

What you can see here is that writes are really fast when buffered. But when I hit a certain size (above 1 GB or so), we probably start having to write to the disk itself (which is NVMe, remember). Our top speed is about 550 MB/s at this point, regardless of the size of the buffers I’m passing to the write() syscall.

I’m writing here using cached I/O, which is something that as a database vendor, I don’t really care about. What happens when we run with direct & sync I/O, the way I would with a real database? Here are the numbers for the i4i.xlarge instance for durable writes.

Write size   Total writes   Durable writes

256 KB       256 MB         1.3 GB/s
256 KB       1 GB           1.1 GB/s
16 MB        16 GB          584 MB/s
64 KB        16 GB          394 MB/s
32 KB        16 GB          237 MB/s
16 KB        16 GB          126 MB/s

In other words, when using direct I/O, the smaller the write, the more time it takes. Remember that we are talking about forcing the disk to write the data, and we need to wait for it to complete before moving to the next one.

For 16 KB writes, buffered writes achieve a throughput of 553 MB/s vs. 126 MB/s for durable writes. This makes sense, since those writes are cached, so the OS is probably sending big batches to the disk. The numbers we have here clearly show that bigger batches are better.

My next test was to see what would happen when I try to write things in parallel. In this test, we run 4 processes that write to the disk using direct I/O and measure their output.

I assume that I’m maxing out the throughput on the drive, so the total rate across all commands should be equivalent to the rate I would get from a single command.

To run this in parallel I’m using a really simple mechanism - just spawn processes that would do the same work. Here is the command template I’m using:


parallel -j 4 --tagstring 'Task {}' dd if=/dev/zero of=/data/test bs=16M count=128 seek={} oflag=direct,sync ::: 0 1024 2048 3072

This would write to 4 different portions of the same file, but I also tested that on separate files. The idea is to generate a sufficient volume of writes to stress the disk drive.

Write size   Total writes   Durable & parallel writes

16 MB        8 GB           650 MB/s
16 KB        64 GB          252 MB/s

I also decided to write some low-level C code to test out how this works with threads and a single program. You can find the code here.  I basically spawn NUM_THREADS threads, and each will open a file using O_SYNC | O_DIRECT and write to the file WRITE_COUNT times with a buffer of size BUFFER_SIZE.

This code just opens a lot of files and tries to write to them using direct I/O with 8 KB buffers. In total, I’m writing 16 GB (128 MB x 128 threads) to the disk. I’m getting a rate of about 320 MB/sec when using this approach.
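
For readers who don’t want to follow the link, here is a rough reconstruction of the shape of that benchmark (my own sketch, not the linked code; the file path is an assumption):

#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define NUM_THREADS 128
#define WRITE_COUNT (16 * 1024) // 16K x 8 KB = 128 MB per file
#define BUFFER_SIZE 8192

static void *writer(void *arg)
{
    char path[64];
    snprintf(path, sizeof path, "/data/bench-%ld", (long)arg);
    int fd = open(path, O_WRONLY | O_CREAT | O_SYNC | O_DIRECT, 0644);
    void *buf = NULL;
    posix_memalign(&buf, 4096, BUFFER_SIZE); // O_DIRECT needs aligned buffers
    memset(buf, 0, BUFFER_SIZE);
    for (long i = 0; i < WRITE_COUNT; i++)
        pwrite(fd, buf, BUFFER_SIZE, i * BUFFER_SIZE);
    close(fd);
    free(buf);
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];
    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, writer, (void *)i);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);
    return 0;
}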

As before, increasing the buffer size seems to help here. I also tested a version where we write using buffered I/O and call fsync every now and then, but I got similar results.
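
That variant, sketched (again, my own illustration; SYNC_INTERVAL is an arbitrary choice):

#include <string.h>
#include <unistd.h>

#define BUFFER_SIZE 8192
#define WRITE_COUNT (16 * 1024)
#define SYNC_INTERVAL 16

void buffered_with_periodic_fsync(int fd)
{
    char buf[BUFFER_SIZE];
    memset(buf, 0, sizeof buf);
    for (long i = 0; i < WRITE_COUNT; i++) {
        pwrite(fd, buf, sizeof buf, i * BUFFER_SIZE);
        if (i % SYNC_INTERVAL == SYNC_INTERVAL - 1)
            fsync(fd); // durability point: everything written so far is on disk
    }
    fsync(fd); // cover the tail
}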

The interim conclusion that I can draw from this experiment is that NVMes are pretty cool, but once you hit their limits you can really feel it. There is another aspect to consider, though: I’m running this on a disk that is literally called ephemeral storage. I need to repeat those tests on real hardware to verify whether the cloud disk simply ignores the command to persist properly and always uses the cache.

That is supported by the fact that using direct I/O on small data sizes didn’t have a big impact (and I expected it would). Given that the point of direct I/O in this case is to force the disk to properly persist (so it would be durable in the case of a crash), while at the same time an ephemeral disk is wiped if the host machine is restarted, that gives me good reason to believe that these numbers are because the hardware “lies” to me.

In fact, if I were in charge of those disks, lying about the durability of writes would be the first thing I would do. Those disks are local to the host machine, so we have two failure modes that we need to consider:

  • The VM crashed - in which case the disk is perfectly fine and “durable”.
  • The host crashed - in which case the disk is considered lost entirely.

Therefore, there is no point in trying to achieve durability, so we can’t trust those numbers.

The next step is to run it on a real machine. The economics of benchmarks on cloud instances are weird. For a one-off scenario, the cloud is a godsend. But if you want to run benchmarks on a regular basis, it is far more economical to just buy a physical machine. Within a month or two, you’ll already see a return on the money spent.

We got a machine in the office called Kaiju (a Japanese term for enormous monsters, think: Godzilla) that has:

  • 32 cores
  • 188 GB RAM
  • 2 TB NVMe for the system disk
  • 4 TB NVMe for the data disk

I ran the same commands on that machine as well and got really interesting results.

Write size   Total writes   Buffered writes

4 KB         16 GB          1.4 GB/s
256 KB       256 MB         1.4 GB/s
2 MB         2 GB           1.6 GB/s
2 MB         16 GB          1.7 GB/s
4 MB         32 GB          1.8 GB/s
4 MB         64 GB          1.8 GB/s

We are faster than the cloud instance, and we don’t have a drop-off point when we hit a certain size. We are also seeing higher performance when we throw bigger buffers at the system.

But when we test with small buffers, the performance is also great. That is amazing, but what about durable writes with direct I/O?

I tested the same scenario with both buffered and durable writes:

Mode                        Buffered   Durable

1 MB buffers, 8 GB write    1.6 GB/s   1.0 GB/s
2 MB buffers, 16 GB write   1.7 GB/s   1.7 GB/s

Wow, that is an interesting result. Because it means that when we use direct I/O with 1 MB buffers, we lose about 600 MB/sec compared to buffered I/O. Note that this is actually a pretty good result. 1 GB/sec is amazing.

And if you use big buffers, then the cost of direct I/O is basically gone. What about when we go the other way around and use smaller buffers?

Mode                          Buffered   Durable

128 KB buffers, 8 GB write    1.7 GB/s   169 MB/s
32 KB buffers, 2 GB write     1.6 GB/s   49.9 MB/s
Parallel: 8, 1 MB, 8 GB       5.8 GB/s   3.6 GB/s
Parallel: 8, 128 KB, 8 GB     6.0 GB/s   550 MB/s

For buffered I/O - I’m getting simply dreamy numbers, pretty much regardless of what I do 🙂.

For durable writes, the situation is clear. The bigger the buffer we write, the better we perform, and we pay for small buffers. Look at the numbers for 128 KB in the durable column for both single-threaded and parallel scenarios.

169 MB/s in the single-threaded result, but with 8 parallel processes, we didn’t reach 1.3 GB/s (which is 169x8). Instead, we achieved less than half of our expected performance.

It looks like there is a fixed cost for making a direct I/O write to the disk, regardless of the amount of data that we write.  When using 32 KB writes, we are not even breaking into the 200 MB/sec. And with 8 KB writes, we are barely breaking into the 50 MB/sec range.

Those are some really interesting results because they show a very strong preference for bigger writes over smaller writes.

I also tried using the same C code as before. As a reminder, we use direct I/O to write to 128 files in batches of 8 KB, writing a total of 128 MB per file. All of that is done concurrently to really stress the system.

When running iotop in this environment, we get:


Total DISK READ:         0.00 B/s | Total DISK WRITE:       522.56 M/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:     567.13 M/s
    TID  PRIO  USER     DISK READ DISK WRITE>    COMMAND
 142851 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142901 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142902 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142903 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
 142904 be/4 kaiju-1     0.00 B/s    4.09 M/s ./a.out
... redacted ...

So each thread is getting about 4.09 MB/sec for writes, but we total 522 MB/sec across all writes. I wondered what would happen if I limited it to fewer threads, so I tried with 16 concurrent threads, resulting in:


Total DISK READ:         0.00 B/s | Total DISK WRITE:        89.80 M/s
Current DISK READ:       0.00 B/s | Current DISK WRITE:     110.91 M/s
    TID  PRIO  USER     DISK READ DISK WRITE>    COMMAND
 142996 be/4 kaiju-1     0.00 B/s    5.65 M/s ./a.out
 143004 be/4 kaiju-1     0.00 B/s    5.62 M/s ./a.out
 142989 be/4 kaiju-1     0.00 B/s    5.62 M/s ./a.out
... redacted ..

Here we can see that each thread is getting (slightly) more throughput, but the overall system throughput is greatly reduced.

To give some context, with 128 threads running, the process wrote 16GB in 31 seconds, but with 16 threads, it took 181 seconds to write the same amount. In other words, there is a throughput issue here. I also tested this with various levels of concurrency:

Each thread writes 8 KB x 16K times = 128 MB.

Concurrency   Throughput per thread   Time / MB written

1             15.5 MB / sec           8.23 seconds / 128 MB
2             5.95 MB / sec           18.14 seconds / 256 MB
4             5.95 MB / sec           20.75 seconds / 512 MB
8             6.55 MB / sec           20.59 seconds / 1024 MB
16            5.70 MB / sec           22.67 seconds / 2048 MB

To give some context, here are two attempts to write 2GB to the disk:

Concurrency   Write                    Throughput      Total written   Total time

16            128 MB in 8 KB writes    5.7 MB / sec    2,048 MB        22.67 sec
8             256 MB in 16 KB writes   12.6 MB / sec   2,048 MB        22.53 sec
16            256 MB in 16 KB writes   10.6 MB / sec   4,096 MB        23.92 sec

In other words, we can see the impact of concurrent writes. There is absolutely some contention at the disk level when making direct I/O writes. The impact is related to the number of writes rather than the amount of data being written.

Bigger writes are far more efficient. And concurrent writes allow you to get more data overall but come with a horrendous latency impact for each individual thread.

The difference between the cloud and physical instances is really interesting, and I have to assume that this is because the cloud instance isn’t actually forcing the data to the physical disk (it doesn’t make sense that it would).

I decided to test that on an m6i.2xlarge instance with a 512 GB io2 disk with 16,000 IOPS.

The idea is that an io2 disk has to be durable, so it will probably have similar behavior to physical hardware.

Disk   Buffer (KB)   Writes      Durable   Parallel   Total (MB)   Rate (MB/s)

io2    256           1,024       No        1          256          1,638.40
io2    2,048         1,024       No        1          2,048        1,331.20
io2    4             4,194,304   No        1          16,384       1,228.80
io2    256           1,024       Yes       1          256          144.00
io2    256           4,096       Yes       1          1,024        146.00
io2    64            8,192       Yes       1          512          50.20
io2    32            8,192       Yes       1          256          26.90
io2    8             8,192       Yes       1          64           7.10
io2    1,024         8,192       Yes       1          8,192        502.00
io2    1,024         2,048       No        8          2,048        1,909.00
io2    1,024         2,048       Yes       8          2,048        1,832.00
io2    32            8,192       No        8          256          3,526.00
io2    32            8,192       Yes       8          256          150.90
io2    8             8,192       Yes       8          64           37.10

In other words, we are seeing pretty much the same behavior as on the physical machine, unlike the ephemeral drive.

In conclusion, it looks like the limiting factor for direct I/O writes is the number of writes, not their size. There appears to be some benefit for concurrency in this case, but there is also some contention. The best option we got was with big writes.

Interestingly, big writes are a win, period. For example, 16 MB writes, direct I/O:

  • Single-threaded - 4.4 GB/sec
  • 2 threads - 2.5 GB/sec x 2 - total 5.0 GB/sec
  • 4 threads - 1.4 GB/sec x 4 - total 5.6 GB/sec
  • 8 threads - ~590 MB/sec x 8 - total 4.6 GB/sec

Writing 16 KB, on the other hand:

  • 8 threads - 11.8 MB/sec x 8 - total 93 MB/sec
  • 4 threads - 12.6 MB/sec x 4 - total 50.4 MB/sec
  • 2 threads - 12.3 MB/sec x 2 - total 24.6 MB/sec
  • 1 thread - 23.4 MB/sec

This leads me to believe that there is a bottleneck somewhere in the stack, where we need to handle the durable write, but it isn’t related to the actual amount we write. In short, fewer and bigger writes are more effective, even with concurrency.

As a database developer, that leads to some interesting questions about design. It means that I want to find some way to batch more writes to the disk, especially for durable writes, because it matters so much.

Expect to hear more about this in the future.

time to read 7 min | 1357 words

When building RavenDB, we occasionally have to deal with some ridiculous numbers in both size and scale. In one of our tests, we ran into an interesting problem. Here are the performance numbers of running a particular query 3 times.

First Run: 19,924 ms

Second Run: 3,181 ms

Third Run: 1,179 ms

Those are not good numbers, so we dug into this to try to figure out what is going on. Here is the query that we are running:


from index 'IntFloatNumbers-Lucene' where Int > 0

And the key here is that this index covers 400 million documents, all of which are actually greater than 0. So this is actually a pretty complex task for the database to handle, mostly because of the internals of how Lucene works.

Remember that we provide both the first page of the results as well as its total number. So we have to go through the entire result set to find out how many items we have. That is a lot of work.

But it turns out that most of the time here isn’t actually processing the query, but dealing with the GC. Here are some entries from the GC log while the queries were running:


2024-12-12T12:39:40.4845987Z, Type: GC, thread id: 30096, duration: 2107.9972ms, index: 25, generation: 2, reason: Induced
2024-12-12T12:39:53.1359744Z, Type: GC, thread id: 30096, duration: 1650.9207ms, index: 26, generation: 2, reason: Induced
2024-12-12T12:40:07.5835527Z, Type: GC, thread id: 30096, duration: 1629.1771ms, index: 27, generation: 2, reason: Induced
2024-12-12T12:40:20.2205602Z, Type: GC, thread id: 30096, duration: 776.24ms, index: 28, generation: 2, reason: Induced

That sound you heard was me going: Ouch!

Remember that this query actually goes through 400M results. Here are the details about its Memory Usage & Object Count:

  • Number of objects for GC (under LuceneIndexPersistence): 190M (~12.63GB)
  • Managed Memory: 13.01GB
  • Unmanaged Memory: 4.53MB

What is going on? It turns out that Lucene handles queries such as Int>0 by creating an array with all the unique values, something similar to:


string[] sortedTerms = new string[190_000_000];
long[] termPostingListOffset = new long[190_000_000];

This isn’t exactly how it works, mind. But the details don’t really matter for this story. The key here is that we have an array with a sorted list of terms, and in this case, we have a lot of terms.

Those values are cached, so they aren’t actually allocated and thrown away each time we query. However, remember that the .NET GC uses a Mark & Sweep algorithm. Here is the core part of the Mark portion of the algorithm:


long _marker;
void Mark()
{
    var currentMarker = ++_marker;

    foreach (var root in GetRoots())
    {
        Mark(root);
    }

    void Mark(object o)
    {
        // already visited
        if (GetMarker(o) == currentMarker)
            return;
        SetMarker(o, currentMarker);

        foreach (var child in GetReferences(o))
        {
            Mark(child);
        }
    }
}

Basically, start from the roots (static variables, items on the stack, etc.), scan the reachable object graph, and mark all the objects in use. The code above is generic, of course (and basically pseudo-code), but let’s consider what the performance will be like when dealing with an array of 190M strings.

It has to scan the entire thing, which means it is proportional to the number of objects. And we do have quite a lot of those.

The problem was the number of managed objects, so we pulled all of those out. We moved the term storage to unmanaged memory, outside the purview of the GC (see the sketch after the numbers below). As a result, we now have the following Memory Usage & Object Count:

  • Number of objects for GC (under LuceneIndexPersistence): 168K (~6.64GB)
  • Managed Memory: 6.72GB
  • Unmanaged Memory: 1.32GB
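
To make the new layout concrete, here is a sketch (in C for clarity; the names are hypothetical): one contiguous blob of terms plus an offsets array replaces millions of individual string objects, so a GC scan sees a couple of references instead of ~190M:

#include <stdint.h>
#include <string.h>

struct term_dictionary {
    char    *blob;                 // all terms concatenated, each NUL-terminated
    int64_t *offsets;              // offsets[i] = start of term i inside blob
    int64_t *posting_list_offsets; // per-term posting list position
    int64_t  count;
};

// Binary search over the sorted terms, without materializing any strings.
int64_t find_term(const struct term_dictionary *d, const char *term)
{
    int64_t lo = 0, hi = d->count - 1;
    while (lo <= hi) {
        int64_t mid = lo + (hi - lo) / 2;
        int cmp = strcmp(d->blob + d->offsets[mid], term);
        if (cmp == 0)
            return mid;
        if (cmp < 0)
            lo = mid + 1;
        else
            hi = mid - 1;
    }
    return -1; // not found
}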

Looking at the GC logs, we now have:


2024-12-16T18:33:29.8143148Z, Type: GC, thread id: 8508, duration: 93.6835ms, index: 319, generation: 2, reason: Induced
2024-12-16T18:33:30.7013255Z, Type: GC, thread id: 8508, duration: 142.1781ms, index: 320, generation: 2, reason: Induced
2024-12-16T18:33:31.5691610Z, Type: GC, thread id: 8508, duration: 91.0983ms, index: 321, generation: 2, reason: Induced
2024-12-16T18:33:37.8245671Z, Type: GC, thread id: 8508, duration: 112.7643ms, index: 322, generation: 2, reason: Induced

So the GC time is now in the range of 100ms, instead of several seconds. This change helps both reduce overall GC pause times and greatly reduce the amount of CPU spent on managing garbage.

Those are still big queries, but now we can focus on executing the query, rather than managing maintenance tasks. Incidentally, those sorts of issues are one of the key reasons why we built Corax, which can process queries directly on top of persistent structures, without needing to materialize anything from the disk.

time to read 2 min | 247 words

I write a transactional database for a living, and the best example of why we want transactions is transferring money between accounts. It is ironic, therefore, that there is no such thing as transactions for money transfers in the real world.

If you care to know why, go back 200 years and consider how a bank would operate in an environment without instant communication. I would actually recommend doing that, it is a great case study in distributed system design. For example, did you know that the Templars used cryptography to send money almost a thousand years ago?

Recently I was reviewing my bank transactions and I found the following surprise. This screenshot is from yesterday (Dec 18), and it looks like a payment that I made is still “stuck in the tubes” two and a half weeks later.

 

I got in touch with the supplier in question to apologize for the delay. They didn’t understand what I was talking about. Here is what they see when they go to their bank: they got the money.

 

For fun, look at the number of different dates that you can see in their details.

Also, as of right now, my bank account still shows the money as pending approval (to be sent out from my bank).

I might want to recommend that they use a different database. Or maybe I should just convince the bank to approve the payment by the time of the next invoice and see if I can get a bit of that infinite money glitch.

time to read 13 min | 2479 words

RavenDB has a hidden feature, enabled by default and not something that you usually need to be aware of. It has built-in support for caching. Consider the following code:


async Task<Dictionary<string, int>> HowMuchWorkToDo(string userId)
{
    using var session = _documentStore.OpenAsyncSession();
    var results = await session.Query<Item>()
        .GroupBy(x => new { x.Status, x.AssignedTo })
        .Where(g => g.Key.AssignedTo == userId && g.Key.Status != "Closed")
        .Select(g => new 
        {
            Status = g.Key.Status,
            Count = g.Count()
        })
        .ToListAsync();


    return results.ToDictionary(x => x.Status, x => x.Count);
}

What happens if I call it twice with the same user? The first time, RavenDB will send the query to the server, where it will be evaluated and executed. The server will also send an ETag header with the response. The client will remember the response and its ETag in its own memory.

The next time this is called on the same user, the client will again send a request to the server. This time, however, it will also inform the server that it has a previous response to this query, with the specified ETag. The server, when realizing the client has a cached response, will do a (very cheap) check to see if the cached response matches the current state of the server. If so, it can inform the client (using 304 Not Modified) that it can use its cache.

In this way, we benefit twice:

  • First, on the server side, we avoid the need to compute the actual query.
  • Second, on the network side, we aren’t sending a full response back, just a very small notification to use the cached version.

You’ll note, however, that there is still an issue. We have to go to the server to check. That means that we still pay the network costs. So far, this feature is completely transparent to the user. It works behind the scenes to optimize server query costs and network bandwidth costs.

We have a full-blown article on caching in RavenDB if you care to know more details instead of just “it makes things work faster for me”.

Aggressive Caching in RavenDB

The next stage is to involve the user. Enter the AggressiveCache() feature (see the full documentation here), which allows the user to specify an additional aspect. Now, when the client has the value in the cache, it will skip going to the server entirely and serve the request directly from the cache.

What about cache invalidation? Instead of having the client check on each request if things have changed, we invert the process. The client asks the server to notify it when things change, and until it gets notice from the server, it can serve responses completely from the local cache.

I really love this feature, that was the Good part, now let’s talk about the other pieces:

There are only two hard things in Computer Science: cache invalidation and naming things.

-- Phil Karlton

The bad part of caching is that this introduces more complexity to the system. Consider a system with two clients that are using the same database. An update from one of them may show up at different times in each. Cache invalidation will not happen instantly, and it is possible to get into situations where the server fails to notify the client about the update, meaning that we didn’t clear the cache.

We have a good set of solutions around all of those, I think. But it is important to understand that the problem space itself is a problem.

In particular, let’s talk about dealing with the following query:


var emps = await session.Query<Employee>()
    .Include(x => x.Department)
    .Where(x => x.Location.City == "London")
    .ToListAsync();

When an employee is changed on the server, it will send a notice to the client, which can evict the item from the cache, right? But what about when a department is changed?

For that matter, what happens if a new employee is added to London? How do we detect that we need to refresh this query?

There are solutions to those problems, but they are super complicated and have various failure modes that often require more computing power than actually running the query. For that reason, RavenDB uses a much simpler model. If the server notifies us about any change, we’ll mark the entire cache as suspect.

The next request will have to go to the server (again with an ETag, etc.) to verify that the response hasn’t changed. Note that if the specific query results haven’t changed, we’ll get a 304 Not Modified from the server, and the client will use the cached response.

Conservatively aggressive approach

In other words, even when using aggressive caching, RavenDB still has to go to the server sometimes. What is the impact of this approach when you have a system under load?

We’ll still use aggressive caching, but you’ll see brief periods where we aren’t checking with the server (usually we can serve purely from the cache for about a second or so), followed by queries to the server to check for any changes.

In most cases, this is what you want. We still benefit from the cache while reducing the number of remote calls by about 50%, and we don’t have to worry about missing updates. The downside is that, as application developers, we know that this particular document and query are independent, so we want to cache them until we get notice about that particular document being changed.

The default aggressive caching in RavenDB will not be of major help here, I’m afraid. But there are a few things you can do.

You can use Aggressive Caching in the NoTracking mode. In that mode, the client will not ask the server for notifications on changes, and will cache the responses in memory until they expire (clock expiration or size expiration only).

There is also a feature suggestion that calls for updating the aggressive cache in a background manner, I would love to hear more feedback on this proposal.

Another option is to take this feature higher than RavenDB directly, but still use its capabilities. Since we have a scenario where we know that we want to cache a specific set of documents and refresh the cache only when those documents are updated, let’s write it.

Here is the code:


public class RecordCache<T>
{
    private ConcurrentLru<string, T> _items = 
        new(256, StringComparer.OrdinalIgnoreCase);
    private readonly IDocumentStore _documentStore;


    public RecordCache(IDocumentStore documentStore)
    {
        const BindingFlags Flags = BindingFlags.Instance | 
            BindingFlags.NonPublic | BindingFlags.Public;
        var violation = typeof(T).GetFields(Flags)
            .FirstOrDefault(f => f.IsInitOnly is false);
        if (violation != null)
        {
            throw new InvalidOperationException(
                "You should cache *only* immutable records, but got: " + 
                typeof(T).FullName + " with " + violation.Name + 
                " which is not read only!");
        }


        var changes = documentStore.Changes();
        changes.ConnectionStatusChanged += (_, args) =>
        {
            _items = new(256, StringComparer.OrdinalIgnoreCase);
        };
        changes.ForDocumentsInCollection<T>()
            .Subscribe(e =>
            {
                _items.TryRemove(e.Id, out _);
            })
            ;
        _documentStore = documentStore;
    }


    public ValueTask<T> Get(string id)
    {
        if (_items.TryGetValue(id, out var result))
        {
            return ValueTask.FromResult(result);
        }
        return new ValueTask<T>(GetFromServer(id));


    }


    private async Task<T> GetFromServer(string id)
    {
        using var session = _documentStore.OpenAsyncSession();
        var item = await session.LoadAsync<T>(id);
        _items.Set(id, item);
        return item;
    }
}

There are a few things to note about this code. We are holding live instances, so we ensure that the values we keep are immutable records. Otherwise, we may hand the same instance to two threads which can be… fun.

Note that document IDs in RavenDB are case insensitive, so we pass the right string comparer.

Finally, the magic happens in the constructor. We register for two important events. Whenever the connection status of the Changes() connection is modified, we clear the cache. This handles any lost updates scenarios that occurred while we were disconnected.

In practice, the subscription to events on that particular collection is where we ensure that after the server notification, we can evict the document from the cache so that the next request will load a fresh version.

Caching + Distributed Systems = 🤯🤯🤯

I’m afraid this isn’t an easy topic once you dive into the specifics and constraints we operate under. As I mentioned, I would love your feedback on the background cache refresh feature, or maybe you have better insight into other ways to address the topic.

time to read 4 min | 618 words

We recently got a support request from a user in which they had the following issue:


We have an index that is using way too much disk space. We don’t need to search the entire dataset, just the most recent documents. Can we do something like this?


from d in docs.Events
where d.CreationDate >= DateTime.UtcNow.AddMonths(-3)
select new { d.CreationDate, d.Content };

The idea is that only documents from the past 3 months would be indexed, while older documents would be purged from the index but still retained.

The actual problem is that this is a full-text search index, and the actual data size required to perform a full-text search across the entire dataset is higher than just storing the documents (which can be easily compressed).

This is a great example of an XY problem. The request was to allow access to the current date during the indexing process so the index could filter out old documents. However, that is actually something that we explicitly prevent. The problem is that the current date isn’t really meaningful when we talk about indexing. The indexing time isn’t really relevant for filtering or operations, since it has no association with the actual data.

The date of a document and the time it was indexed are completely unrelated. I might update a document (and thus re-index it) whose CreationDate is far in the past; the filter would then drop it from the index. A document that is never updated, on the other hand, would be retained in the index indefinitely, since the filtering occurs only at indexing time.

Going back to the XY problem, what is the user trying to solve? They don’t want to index all data, but they do want to retain it forever. So how can we achieve this with RavenDB?

Data Archiving in RavenDB

One of the things we aim to do with RavenDB is ensure that we have a good fit for most common scenarios, and archiving is certainly one of them. In RavenDB 6.0 we added explicit support for Data Archiving.

When you save a document, all you need to do is add a metadata element, @archive-at, and you are set. For example, take a look at the following document:


{
    "Name": "Wilman Kal",
    "Phone": "90-224 8888",
    "@metadata": {
        "@archive-at": "2024-11-01T12:00:00.000Z",
        "@collection": "Companies"
    }
}

This document is set to be archived on Nov 1st, 2024. What does that mean?

From that day on, RavenDB will automatically mark it as an archived document, meaning it will be stored in a compressed format and excluded from indexing by default.
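From the client API, scheduling archiving is just a metadata update. Here is a minimal sketch using the .NET client; the Company class and the document ID are illustrative stand-ins:


using var session = documentStore.OpenAsyncSession();
var company = await session.LoadAsync<Company>("companies/90-A");

// @archive-at takes a UTC timestamp; once it passes, RavenDB archives
// the document in the background
var metadata = session.Advanced.GetMetadataFor(company);
metadata["@archive-at"] = new DateTime(2024, 11, 1, 12, 0, 0, DateTimeKind.Utc)
    .ToString("o"); // ISO 8601, e.g. 2024-11-01T12:00:00.0000000Z

await session.SaveChangesAsync();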

In fact, this exact scenario is detailed in the documentation.

You can decide (on a per-index basis) whether to include archived documents in the index. This gives you a very high level of flexibility without requiring much manual effort.
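In code, that per-index choice is a single property on the index definition. A sketch along those lines (I'm quoting the ArchivedDataProcessingBehavior property and enum names from memory, so verify them against the 6.0 documentation; Event is a hypothetical document class):


using System.Linq;
using Raven.Client.Documents.Indexes;
using Raven.Client.Documents.DataArchival;

public class Events_ByContent : AbstractIndexCreationTask<Event>
{
    public Events_ByContent()
    {
        Map = events => from e in events
                        select new { e.CreationDate, e.Content };

        // Archived documents are excluded by default; opt this index in
        ArchivedDataProcessingBehavior = ArchivedDataProcessingBehavior.IncludeArchived;
    }
}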

In short, for this scenario you can simply tell RavenDB when to archive the document and let it handle the rest. RavenDB will do the right thing for you.

time to read 4 min | 683 words

Reading code is a Skill (with a capital letter, yes) that is really important for developers. You cannot be a good developer without it.

Today I want to talk about one aspect of this: the ability to go into an unfamiliar codebase and extract a single piece of information. The idea is that we don't need to understand the entire system, grok the architecture, etc. I want to understand one thing about it and get away as soon as I can.

For example, you know that project Xyz is doing some operation, and you want to figure out how this is done. So you need to look at the code and figure that out, then you can go your merry way.

Today, I’m interested in understanding how the LMDB project writes data to the disk on Windows. This is because LMDB is based around a memory-mapped model, and Windows doesn’t keep the data between file I/O and mmap I/O coherent.

LMDB is an embedded database engine (similar to Voron, and in fact, Voron is based on some ideas from LMDB) written in C. If you are interested in it, I wrote 11 posts going through every line of code in the project.

So I’m familiar with the project, but the last time I read the code was over a decade ago. From what I recall, the code is dense. There are about 11.5K lines of code in a single file, implementing the entire thing.

I’m using the code from here.

The first thing to do is find the relevant section in the code. I started by searching for the WriteFile() function, the Win32 API for writing to a file. The first call to it appears in the mdb_page_flush function.

I look at this code, and… there isn’t really anything there. It is fairly obvious and straightforward code (to be clear, that is a compliment). I was expecting to see a trick there. I couldn’t find it.

That meant either the code had a gaping hole and potential data corruption (highly unlikely) or I was missing something. That led me to a long trip of trying to distinguish between documented guarantees and actual behavior.

The documentation for MapViewOfFile is pretty clear:

A mapped view of a file is not guaranteed to be coherent with a file that is being accessed by the ReadFile or WriteFile function.

I have had my own run-ins with this behavior, which were super confusing. That meant I had experimental evidence saying this was broken. But it didn't make sense: there was no code in LMDB to handle it, and the problem is pretty easy to trigger.

It turns out that while the documentation is pretty broad about not guaranteeing the behavior, the actual issue only occurs if you are working with remote files or using unbuffered I/O.

If you are working with local files and buffered I/O (which is 99.99% of the cases), then you can rely on this behavior. I found some vague references to this, but that wasn't enough. There is this post that is really interesting, though.

I pinged Howard Chu, the author of LMDB, for clarification, and he was quick to assure me that yes, my understanding was (now) correct. On Windows, you can mix memory-mapped operations with file I/O and get the right results.
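This is also easy to verify experimentally. Here is a minimal sketch of such a test (my own C# verification code, not anything from LMDB), assuming a local volume and buffered I/O:


using System.IO.MemoryMappedFiles;

var path = Path.Combine(Path.GetTempPath(), "coherence-test.bin");
File.WriteAllBytes(path, new byte[4096]);

using var fs = new FileStream(path, FileMode.Open,
    FileAccess.ReadWrite, FileShare.ReadWrite);
using var mmf = MemoryMappedFile.CreateFromFile(fs, null, 4096,
    MemoryMappedFileAccess.ReadWrite, HandleInheritability.None, leaveOpen: true);
using var view = mmf.CreateViewAccessor(0, 4096);

// Write through the file I/O path...
fs.Position = 0;
fs.WriteByte(42);
fs.Flush(); // push past the FileStream buffer into the OS page cache

// ...and observe the write through the pre-existing mapped view.
// On a local volume with buffered I/O, this prints 42.
Console.WriteLine(view.ReadByte(0));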

The documentation appears to be a holdover from the Windows 9x era, with the NT line always able to ensure coherency for local files. That is a guess about the documentation's history, to be honest, not something I can verify.

I had the wrong information in my head for over a decade. I did not expect this result when I started this post, I was sure I would be discussing navigating complex codebases. I’m going to stand in the corner and feel upset about this for a while now.

time to read 3 min | 487 words

Corax is the new indexing and querying engine in RavenDB, which recently came out with RavenDB 6.0. Our focus when building Corax was on one thing: performance. I did a full talk explaining how it works from the inside out, available here, as well as a couple of podcasts.

Now that RavenDB 6.0 has been out for a while, we've had the chance to complete a host of small Corax features that didn't make the cut for the big 6.0 release.

All these features are available in the 6.0.102 release, which went live in late April 2024.

The most important new feature for Corax is query plan visualization.

Let’s run the following query in the RavenDB Studio on the sample data set:


from index 'Orders/ByShipment/Location'
where spatial.within(ShipmentLocation, 
                  spatial.circle( 10, 49.255, 4.154, 'miles')
      )
and (Employee = 'employees/5-A' or Company = 'companies/85-A')
order by Company, score()
include timings()

Note that we are using the include timings() feature. If you configure this index to use Corax, issuing the above query will also give you the full query plan. In this case, you can see it here:

You can see exactly how the query engine has processed your query and the pipeline it has gone through.

We have incorporated many additional features into Corax, including phrase queries, scoring based on spatial results, and more complex sorting pipelines. For the most part, these are small features, but they fulfill specific needs and enable a wider range of scenarios for Corax.

More than six months after Corax went live with 6.0, I can say that it has been a successful feature. It performs its primary job well, being a faster and more efficient querying engine. And the best part is that it isn't even something that you need to be aware of.

Corax has been the default indexing engine for the Development and Community editions of RavenDB for over 3 months now, and almost no one has noticed.

It’s a strange metric, I know, for a feature to be successful when no one is even aware of its existence, but that is a common theme for RavenDB. The whole point behind RavenDB is to provide a database that works, allowing you to forget about it.
