time to read 2 min | 294 words

It looks like I’m on a roll with administrator and operations features in RavenDB 3.5, and in this case, I want to introduce the Administrator JS Console.

This is a way for a server administrator to execute arbitrary code on a running system. Here is a small example:

[image: a small script in the Administrator JS Console]

This is 100% a case of giving the administrator the option of running with scissors.

[image] Yes, I’m feeling a bit nostalgic. :-)

The idea here is that we have the ability to query and modify the state of the server directly. We don’t have to rely on endpoints prepared ahead of time, limited to only what we thought of beforehand.

If we run into a hard problem, we can ask the server directly, “what ails you?”, and once we find out, tell it how to fix it. For example, interrogating the server about the state of its memory, then telling it to flush a particular cache, changing the value of a particular configuration option at runtime (without any restarts or downtime), or giving the server additional information it is missing.
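To give a feel for it, here is the sort of script an administrator might run. This is purely illustrative; the `server` object and its members are hypothetical stand-ins, not the actual console API (though `Raven/MaxPageSize` is a real configuration option):

```typescript
// Hypothetical admin console script. The `server` object and its
// members are illustrative stand-ins, not the real console API.
declare const server: any; // assumed to be provided by the console host

// Interrogate the server about the state of its memory.
const stats = server.Memory.GetStats();
console.log(`Working set: ${stats.workingSet}, document cache: ${stats.documentCacheSize}`);

// Tell it to flush a particular cache if it has grown too large...
if (stats.documentCacheSize > 512 * 1024 * 1024) {
  server.Caches.DocumentCache.Flush();
}

// ...or change a configuration value at runtime, no restart needed.
server.Configuration.Set("Raven/MaxPageSize", 512);
```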

We already saw three cases where having this ability would have been a major time saver, so we are really excited about this feature, and what it means for our ability to more easily support RavenDB in production.

time to read 2 min | 302 words

One of the most important features in RavenDB is replication, and the ability to use a replica node to get high availability and load balancing across all nodes.

And yes, there are users who choose to run on a single instance, or who want to have a hot backup but not pay for an additional license, because they don’t need the high availability mode.

Hence, RavenDB 3.5 will introduce the Hot Spare mode:

[image: Hot Spare mode]

A hot spare is a RavenDB node that is available only as a replication target. It cannot be used for answering queries or talking with clients, but it will get all the data from your server and keep it safe.

If there is a failure of the main node, an admin can light up the hot spare, which will turn the hot spare license into a normal license for a period of 96 hours. At that point, the RavenDB server will behave as a standard server, clients will be able to failover to it, etc. After the 96 hours are up, it will revert back to hot spare mode, but the administrator will not be able to activate it again without purchasing another hot spare license.
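To make the lifecycle concrete, here is a rough sketch of the activation rules just described. It is only an illustration of the behavior; all of the names are made up, and this is not how the license code is actually written:

```typescript
// Sketch of the hot spare activation rules described above.
// All names are illustrative; this is not the RavenDB implementation.

type SpareState = "hot-spare" | "activated" | "used-up";

const ACTIVATION_WINDOW_MS = 96 * 60 * 60 * 1000; // 96 hours

class HotSpareLicense {
  private state: SpareState = "hot-spare";
  private activatedAt = 0;

  // The admin "lights up" the hot spare after the main node fails.
  activate(now: number): void {
    if (this.state !== "hot-spare") {
      throw new Error("Activating again requires a new hot spare license.");
    }
    this.state = "activated";
    this.activatedAt = now;
  }

  // While activated, the node behaves as a standard server.
  canServeClients(now: number): boolean {
    if (this.state !== "activated") return false;
    if (now - this.activatedAt > ACTIVATION_WINDOW_MS) {
      // After 96 hours it reverts to hot spare mode, but cannot be
      // activated again without purchasing another hot spare license.
      this.state = "used-up";
      return false;
    }
    return true;
  }
}
```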

The general idea is that this license is going to be significantly cheaper than a full RavenDB license, but provides a layer of protection for the application. We would still recommend that you go for a full cluster, but for users who just want the reassurance that they can flip the switch and fail over, without making the full cluster investment, this is a good first step.

We’ll announce pricing for this when we release 3.5.

time to read 2 min | 222 words

One of the things that happens when you start using RavenDB is that you start using more of it, and more of its capabilities. The problem is that users often end up with a common set of settings that they need to apply across multiple databases (and across multiple servers).

This is where the Global Configuration option comes into play. This allows you to define the behavior of the system once and have it apply to all your databases, and using the RavenDB Cluster option, you can apply it across all your nodes in one go.

[image: the Global Configuration screen]

As you can see in the image, we allow you to configure this globally, for the entire server (or multiple servers), instead of having to remember to configure it for each database. The rest of the tabs work in much the same manner. You can configure global behavior (and override it on individual databases, of course). Aside from the fact that it is available cluster-wide, this isn’t a major feature, or a complex one, but it is one that is going to make it easier for the operations team to work with RavenDB.
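The resolution rule is simple: a database-level setting, when present, wins over the global default. A minimal sketch of that behavior (the names and shapes here are illustrative, not our internal types):

```typescript
// Conceptual sketch of global configuration resolution: a setting
// defined on the database overrides the server-wide default.
type Settings = Map<string, string>;

function resolveSetting(
  key: string,
  databaseSettings: Settings,
  globalSettings: Settings
): string | undefined {
  // A database-level override wins...
  if (databaseSettings.has(key)) {
    return databaseSettings.get(key);
  }
  // ...otherwise fall back to the global (server- or cluster-wide) value.
  return globalSettings.get(key);
}

// Example: behavior configured once globally, overridden for one database.
const globalConfig: Settings = new Map([["Versioning/Enabled", "true"]]);
const ordersDb: Settings = new Map([["Versioning/Enabled", "false"]]);

resolveSetting("Versioning/Enabled", ordersDb, globalConfig);  // "false"
resolveSetting("Versioning/Enabled", new Map(), globalConfig); // "true"
```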

time to read 4 min | 642 words

The first thing you’ll see when you run RavenDB 3.5 is the new empty screen:

[image: the new empty studio screen]

You get the usual “Create a database” option and the more interesting “Create Cluster”. In RavenDB 3.0, clusters were ad hoc things; you created them by bridging together multiple nodes using replication. In RavenDB 3.5, we have made that a first class concept. Clicking on the “Create Cluster” link will give you the following:

[image: the Create Cluster dialog]

And then you can start adding additional nodes:

[image: adding nodes to the cluster]

A RavenDB cluster uses Raft (more specifically, Rachis, our Raft implementation) to bridge multiple servers into a single distributed consensus. This allows you to manage the entire cluster in one go.

Once you have created the cluster, you can always configure it (add / remove nodes) by clicking the cluster icon at the bottom.

[image: the cluster configuration screen]

Raft is a distributed consensus algorithm, which allows a group of servers to agree on the order of a set of commands to execute (not strictly true, but for more details, see my previous posts on the topic).

We use Raft to connect servers together, but what does this mean? RavenDB replication still runs in multi-master mode, which means that each server can still accept requests. We use Raft for two distinct purposes.

The first is to distribute operational changes across the cluster: adding a new database, changing a configuration setting. This lets us be sure that such changes are accepted by the cluster before they are actually performed.

But the one that you’ll probably notice most of all is shown in the cluster image: Raft is a strong leader system, and we use it to make sure that all the writes in the cluster are done on the leader. If the leader is taken down, we’ll transparently select a new leader, clients will be informed about it, and all new writes will go to the new leader. When the previous leader recovers, it will join the cluster as a follower, and there will be no disruption of service. This is done for all the databases in the cluster, as you can see from the following topology view:

[image: the cluster topology view]
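To illustrate what the strong leader means for a client, here is a conceptual sketch of a write with transparent failover. The functions here are hypothetical placeholders, not the actual client API:

```typescript
// Conceptual sketch: all writes go to the leader; if the leader goes
// down, find the newly elected leader and retry. The `findLeader` and
// `sendWrite` functions are hypothetical, not the RavenDB client API.

async function writeWithFailover(
  nodes: string[],
  doc: object,
  findLeader: (nodes: string[]) => Promise<string>,
  sendWrite: (leader: string, doc: object) => Promise<void>
): Promise<void> {
  let leader = await findLeader(nodes);
  try {
    await sendWrite(leader, doc); // happy path: write to the leader
  } catch {
    // The leader was taken down; the cluster transparently elects a
    // new one, the client is informed, and we retry against it.
    leader = await findLeader(nodes);
    await sendWrite(leader, doc);
  }
}
```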

Note that while the leader selection is handled via Raft, the leader database replicates to the other nodes using the multi-master system, so if you need to wait until a document is present on a majority of the cluster, you need to wait for write assurance.
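Here is a rough sketch of what waiting for write assurance means in practice: after the write, poll the nodes until a majority of them report the document. The `hasDocument` probe is a hypothetical placeholder, not a real API call:

```typescript
// Rough sketch of "write assurance": wait until a majority of the
// cluster reports the document. `hasDocument` is a hypothetical probe.

async function waitForMajority(
  nodes: string[],
  docId: string,
  hasDocument: (node: string, docId: string) => Promise<boolean>,
  timeoutMs = 15_000
): Promise<void> {
  const majority = Math.floor(nodes.length / 2) + 1;
  const deadline = Date.now() + timeoutMs;

  while (Date.now() < deadline) {
    const checks = await Promise.all(
      nodes.map((node) => hasDocument(node, docId).catch(() => false))
    );
    if (checks.filter(Boolean).length >= majority) {
      return; // the document survives even if the leader fails now
    }
    await new Promise((resolve) => setTimeout(resolve, 250)); // poll again
  }
  throw new Error(`Document ${docId} did not reach a majority in time.`);
}
```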

The stable leader in the presence of failure means that we won’t have clients switching back and forth between nodes; they will be informed of the leader and stick to it. This makes for a much more stable network environment, and allowing you to configure the cluster details once and have them propagate everywhere is a great reduction in operational work. I’ll talk more about this in my next post in this series.

As a reminder, we have the RavenDB Conference in Texas in a few months, where we’ll present RavenDB 3.5 in all its glory.

