Oren Eini, CEO of RavenDB, a NoSQL Open Source Document Database

time to read 1 min | 158 words

We have pretty much completed all of the major work that is going to get into the next major RavenDB release. There is the indexing work that I already talked about on this blog, but most of the rest is a lot of seemingly minor things. Nothing that would blow your mind in isolation.

Things like better aggressive caching invalidation, faster errors in network timeout scenarios, a better bucket selection algorithm for map/reduce, automatic conflict resolution, and even more improvements to the studio.

Hell, it is even things like this:

[image]

We output a lot of information that can help you figure out what is going on in your system. But now we are actually starting to take it up a notch and to provide a good UX for handling that information.

time to read 2 min | 360 words

RavenDB's aggressive caching allows RavenDB clients to skip going to the server and serve requests directly off the client cache. That means that you can answer queries very quickly, because you never even have to leave your process memory space.

The downside to that was, of course, that you might be showing information that has changed behind your back.

That is why we usually wrote aggressive caching code like this:

using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromMinutes(5)))
{
   var user = session.Load<User>("users/1");
   Console.WriteLine(user.Name);
}

We gave it some duration in which it was okay to skip going to the server. So we had a maximum of 5 minutes during which we might be serving stale cached information.

That was nice, but it was awkward, and not a really nice thing to do in general. In particular, it meant that you had to wait up to 5 minutes for changes to actually show up in the application. That can be... frustrating, because it looks like the system isn't really doing anything. On the other hand, it also means that you still have to query the server when the duration is over, so by necessity, the durations are relatively short.

We decided to change that. Now, when you are using aggressive caching, RavenDB will automatically subscribe to changes from the server. When a change notification arrives, we know that we need to re-check with the server.

That means that you can set an aggressive cache duration for a much longer period, and we will know when to invalidate the cache, automatically and without you needing to do anything.
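In practice, the usage stays exactly the same; only the duration you are comfortable with changes. A minimal sketch (the 24-hour duration is just an illustrative choice, not a recommendation):

using (session.Advanced.DocumentStore.AggressivelyCacheFor(TimeSpan.FromHours(24)))
{
    // Served straight from the client cache; the change notifications
    // subscription tells the client when this cached data is out of date.
    var user = session.Load<User>("users/1");
    Console.WriteLine(user.Name);
}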

It is a small touch, but an important one. Things just get better.

time to read 1 min | 168 words

By default, we expect a rather small number of entity types in a database. This is unlike relational databases, where you typically see hundreds or thousands of tables, because everything gets dumped into a single database (and because a single RavenDB document typically corresponds to data spread across many relational tables).

That said, we got a few complaints about this, because the studio UI becomes hard to use when you have many entity types. We decided to support this scenario in an interesting fashion. This is what your documents will look like when you have ~60 entity types in a single database:

[image: collections]

And here is where you go if you have hundreds of them. Psychedelic days are here again!

[image]

time to read 15 min | 2876 words

There are two things that I would change in the RavenBurgerCo sample app.

The first would be session management; I dislike code like this:

[image: manual session management code]

I would much rather do that in a base controller and avoid manual session management, as in the sketch below. But that is mostly a design choice, and it isn't really that important.
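Something like this minimal sketch (hypothetical code, not taken from the sample app; it reuses the MvcApplication.DocumentStore property that the app already exposes):

public abstract class RavenController : Controller
{
    protected IDocumentSession Session { get; private set; }

    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // One session per request, opened just before the action runs.
        Session = MvcApplication.DocumentStore.OpenSession();
    }

    protected override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        using (Session)
        {
            // Save only if the action completed without throwing.
            if (filterContext.Exception == null)
                Session.SaveChanges();
        }
    }
}

With that in place, actions use Session directly and never deal with OpenSession() / Dispose() themselves.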

But what is important is the number of indexes that the application uses. We have:

  • LocationIndex
  • DeliveryIndex
  • DriveThruIndex

And I am not really sure that we need all three. In fact, I am pretty sure that we don't. What we can do is merge them all into a single index. I am pretty sure that the reason there were three of them is that there was a bug in RavenDB that made it error if you gave it a null WKT (instead of just recognizing this as a valid opt-out). I fixed that bug, but even with that issue in place, we can get things working:

public class SpatialIndex : AbstractIndexCreationTask<Restaurant>
{
    public SpatialIndex()
    {
        Map = restaurants =>
              from restaurant in restaurants
              select new
                  {
                      _ = SpatialGenerate(restaurant.Latitude, restaurant.Longitude),
                      __ = restaurant.DriveThruArea == null
                               ? new object[0]
                               : SpatialGenerate("drivethru", restaurant.DriveThruArea),
                      ___ = restaurant.DeliveryArea == null
                               ? new object[0]
                               : SpatialGenerate("delivery", restaurant.DeliveryArea)
                  };
    }
}

And from there, it is just a matter of updating the queries, which now look like the following.

Getting the restaurants near my location (for Eat In page):

return session.Query<Restaurant, SpatialIndex>()
    .Customize(x =>
                   {
                       x.WithinRadiusOf(25, latitude, longitude);
                       x.SortByDistance();
                   })
    .Take(250)
    .Select( ... );

Getting the restaurants that deliver to my location (Delivery page):

return session.Query<Restaurant, SpatialIndex>()
    .Customize(x => x.RelatesToShape("delivery", point, SpatialRelation.Intersects))
    // SpatialRelation.Contains is not supported
    // SpatialRelation.Intersects is OK because we are using a point as the query parameter
    .Take(250)
    .Select( ... );

Getting the restaurants inside a particular rectangle (Map page):

return session.Query<Restaurant, SpatialIndex>()
    .Customize(x => x.RelatesToShape(Constants.DefaultSpatialFieldName, rectangle, SpatialRelation.Within))
    .Take(512)
    .Select( ... );

Note that we use DefaultSpatialFieldName, instead of indexing the location twice.

And finally, getting the restaurants that are applicable for drive through for my route (Drive Thru page):

return session.Query<Restaurant, SpatialIndex>()
    .Customize(x => x.RelatesToShape("drivethru", lineString, SpatialRelation.Intersects))
    .Take(512)
    .Select( ... );

And that is that.

Really great project, and quite amazing, both client & server code. It is simple, it is elegant and it is effective. Well done Simon!

time to read 82 min | 16287 words

This is a review of RavenBurgerCo, created by Simon Bartlett as a sample app for RavenDB's spatial support. This is by no means an unbiased review, if only because I laughed out loud like crazy when I saw the first page:

[image: the first page]

What is this about?

Raven Burger Co is a chain of fast food restaurants, based in the United Kingdom. Their speciality is burgers made with raven meat. All their restaurants offer eat-in/take-out service, while some offer home delivery, and others offer a drive thru service.

This sample application is their online restaurant locator.

Good things about this project? Here is how you get started:

  1. Clone this repository
  2. Open the solution in Visual Studio 2012
  3. Press F5
  4. Play!

And it actually works! It uses embedded RavenDB to make it super easy and stupid-simple to run, right out of the box.

We will start this review by looking at the infrastructure for this project, starting, as usual, from Global.asax:

[image: Global.asax]

Let us see how RavenDB is setup:

public static void ConfigureRaven(MvcApplication application)
{
    var store = new EmbeddableDocumentStore
                        {
                            DataDirectory = "~/App_Data/Database",
                            UseEmbeddedHttpServer = true
                        };

    store.Initialize();
    MvcApplication.DocumentStore = store;

    IndexCreation.CreateIndexes(typeof(MvcApplication).Assembly, store);

    var statistics = store.DatabaseCommands.GetStatistics();

    if (statistics.CountOfDocuments < 5)
        using (var bulkInsert = store.BulkInsert())
            LoadRestaurants(application.Server.MapPath("~/App_Data/Restaurants.csv"), bulkInsert);
}

So we use embedded RavenDB, and if there isn't enough data in the database, we load the default data set using RavenDB's new Bulk Insert feature.
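LoadRestaurants itself isn't shown here, but a hedged sketch of what such a loader could look like on top of the bulk insert API (the CSV column layout below is a guess, purely illustrative):

private static void LoadRestaurants(string csvPath, BulkInsertOperation bulkInsert)
{
    // Skip the CSV header row, then store one Restaurant document per line.
    foreach (var line in File.ReadLines(csvPath).Skip(1))
    {
        var fields = line.Split(',');
        bulkInsert.Store(new Restaurant
        {
            Name = fields[0],
            Street = fields[1],
            City = fields[2],
            Latitude = double.Parse(fields[3], CultureInfo.InvariantCulture),
            Longitude = double.Parse(fields[4], CultureInfo.InvariantCulture)
        });
    }
}

Bulk insert batches the writes for you, which is why it is a much better fit here than storing documents one by one through a session.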

Note that we set the MvcApplication.DocumentStore property; let us see how this is used.

Simon did a really nice thing here. Note that UseEmbeddedHttpServer is set to true, which means that RavenDB will find an open port and use it. This is then exposed in the UI:

[image]

So you can click on the link and land right in the studio for your embedded database, which gives you the ability to view, debug & modify how things are actually going. This is a really nice way to expose it.

Now, let us move to the actual project code itself. Exploring the options in this project, we have map browsing:

[image: map browsing]

And here I have to admit ignorance. I have no idea how to use maps, so this is quite nice for me, something new to learn. The core of this page is this script:

$(function () {

    var gmapLayer = new L.Google('ROADMAP');
    var resultsLayer = L.layerGroup();

    var map = L.map('map', {
        layers: [gmapLayer, resultsLayer],
        center: [51.4775, -0.461389],
        zoom: 12,
        maxBounds: L.latLngBounds([49, 15], [60, -25])
    });

    var loadMarkers = function() {
        if (map.getZoom() > 9) {
            var bounds = map.getBounds();
            $.get('/api/restaurants', {
                north: bounds.getNorthWest().lat,
                east: bounds.getSouthEast().lng,
                south: bounds.getSouthEast().lat,
                west: bounds.getNorthWest().lng
            }).done(function(restaurants) {
                resultsLayer.clearLayers();
                $.each(restaurants, function(index, value) {
                    var marker = L.marker([value.Latitude, value.Longitude])
                        .bindPopup(
                            '<p><strong>' + value.Name + '</strong><br />' +
                                value.Street + '<br />' +
                                value.City + '<br />' +
                                value.PostCode + '<br />' +
                                value.Phone + '</p>'
                        );
                    resultsLayer.addLayer(marker);
                });
            });
        } else {
            resultsLayer.clearLayers();
        }
    };

    loadMarkers();
    map.on('moveend', loadMarkers);
});

You can see the loadMarkers method, which gets called on startup and whenever the map is moved. It ends up calling this method on the server with the boundaries of the visible map:

public IEnumerable<object> Get(double north, double east, double west, double south)
{
    var rectangle = string.Format(CultureInfo.InvariantCulture, "{0:F6} {1:F6} {2:F6} {3:F6}", west, south, east, north);

    using (var session = MvcApplication.DocumentStore.OpenSession())
    {
        return session.Query<Restaurant, LocationIndex>()
            .Customize(x => x.RelatesToShape("location", rectangle, SpatialRelation.Within))
            .Take(512)
            .Select(x => new
                            {
                                x.Name,
                                x.Street,
                                x.City,
                                x.PostCode,
                                x.Phone,
                                x.Delivery,
                                x.DriveThru,
                                x.Latitude,
                                x.Longitude
                            })
            .ToList();
    }
}

Note that in this case, we are doing a search for items inside the rectangle. But the search options are a bit funky: you have to send the data in WKT format. Luckily, Simon has already created a better solution (in this case, he is using the long-hand method to make sure that we all understand what he is doing). The better method would be to use his Geo library, in which case the code would look like:

.Geo("location", x => x.RelatesToShape(new Rectangle(west, south, east, north), SpatialRelation.Within))

So that was the map; now let us look at another example, the Eat In page. In that case, we are looking for restaurants near our location, to figure out where to eat. This looks like this:

[image]

Right in the bull’s eye!

Here is the server side code:

public IEnumerable<object> Get(double latitude, double longitude)
{
    using (var session = MvcApplication.DocumentStore.OpenSession())
    {
        return session.Query<Restaurant, LocationIndex>()
            .Customize(x =>
                           {
                               x.WithinRadiusOf(25, latitude, longitude);
                               x.SortByDistance();
                           })
            .Take(250)
            .Select(x => new
                            {
                                x.Name,
                                x.Street,
                                x.City,
                                x.PostCode,
                                x.Phone,
                                x.Delivery,
                                x.DriveThru,
                                x.Latitude,
                                x.Longitude
                            })
            .ToList();
    }
}

And on the client side, we just do the following:

$('#location').change(function () {
    var latlng = $('#location').locationSelector('val');

    var outerCircle = L.circle(latlng, 25000, { color: '#ff0000', fillOpacity: 0 });
    map.fitBounds(outerCircle.getBounds());

    resultsLayer.clearLayers();
    resultsLayer.addLayer(outerCircle);
    resultsLayer.addLayer(L.circle(latlng, 15000, { color: '#ff0000', fillOpacity: 0.1 }));
    resultsLayer.addLayer(L.circle(latlng, 10000, { color: '#ff0000', fillOpacity: 0.3 }));
    resultsLayer.addLayer(L.circle(latlng, 5000, { color: '#ff0000', fillOpacity: 0.5 }));
    resultsLayer.addLayer(L.circleMarker(latlng, { color: '#ff0000', fillOpacity: 1, opacity: 1 }));

    $.get('/api/restaurants', {
        latitude: latlng[0],
        longitude: latlng[1]
    }).done(function (restaurants) {
        $.each(restaurants, function (index, value) {
            var marker = L.marker([value.Latitude, value.Longitude])
                .bindPopup(
                    '<p><strong>' + value.Name + '</strong><br />' +
                    value.Street + '<br />' +
                    value.City + '<br />' +
                    value.PostCode + '<br />' +
                    value.Phone + '</p>'
                );
            resultsLayer.addLayer(marker);
        });
    });
});

We define several circles of different opacities, and then show the returned markers.

It is all pretty simple code, but the result is quite stunning. I am getting really excited by this thing. It is simple, beautiful and quite powerful. Wow!

The delivery tab does pretty much the same thing as the eat-in mode, but it does so in a different way. First, you might have noticed the LocationIndex in the previous two examples; it looks like this:

public class LocationIndex : AbstractIndexCreationTask<Restaurant>
{
    public LocationIndex()
    {
        Map = restaurants => from restaurant in restaurants
                             select new
                                        {
                                            restaurant.Name,
                                            _ = SpatialGenerate(restaurant.Latitude, restaurant.Longitude),
                                            __ = SpatialGenerate("location", restaurant.LocationWkt)
                                        };
    }
}

Before we look at this, we need to look at a sample document:

[image: a sample document]

I am not quite sure why we have both SpatialGenerate() and SpatialGenerate("location") in LocationIndex. I think that this is just a part of the demo, because the data is the same, and both lines should produce the same results.

However, for deliveries, the situation is quite different. We don't just deliver within a certain distance; as you can see, we have a polygon that determines where we actually deliver to. On the map, this looks like this:

[image: delivery areas on the map]

The red circle is where I am located, the blue markers are the restaurants that deliver to my location, and the blue polygon is the delivery area for the selected burger joint. Let us see how this works, okay? We will start from the index:

public class DeliveryIndex : AbstractIndexCreationTask<Restaurant>
{
    public DeliveryIndex()
    {
        Map = restaurants => from restaurant in restaurants
                             where restaurant.DeliveryArea != null
                             select new
                                        {
                                            restaurant.Name,
                                            _ = SpatialGenerate("delivery", restaurant.DeliveryArea, SpatialSearchStrategy.GeohashPrefixTree, 7)
                                        };
    }
}

So we are indexing just the restaurants that have a delivery polygon, and then we query it like this:

public IEnumerable<object> Get(double latitude, double longitude, bool delivery)
{
    if (!delivery)
        return Get(latitude, longitude);

    var point = string.Format(CultureInfo.InvariantCulture, "POINT ({0} {1})", longitude, latitude);

    using (var session = MvcApplication.DocumentStore.OpenSession())
    {
        return session.Query<Restaurant, DeliveryIndex>()
            .Customize(x => x.RelatesToShape("delivery", point, SpatialRelation.Intersects))
            // SpatialRelation.Contains is not supported
            // SpatialRelation.Intersects is OK because we are using a point as the query parameter
            .Take(250)
            .Select(x => new
                            {
                                x.Name,
                                x.Street,
                                x.City,
                                x.PostCode,
                                x.Phone,
                                x.Delivery,
                                x.DriveThru,
                                x.Latitude,
                                x.Longitude,
                                x.DeliveryArea
                            })
            .ToList();
    }
}

This basically says: give me all the restaurants whose delivery area includes my location. And then the rest all happens on the client side.

Quite cool.

The final example is the drive thru mode, which looks like this:

[image: drive thru route on the map]

Given that I am driving from the green dot to the red dot, what restaurants can I stop at?

Here is the index:

public class DriveThruIndex : AbstractIndexCreationTask<Restaurant>
{
    public DriveThruIndex()
    {
        Map = restaurants => from restaurant in restaurants
                             where restaurant.DriveThruArea != null
                             select new
                                        {
                                            restaurant.Name,
                                            _ = SpatialGenerate("drivethru", restaurant.DriveThruArea)
                                        };
    }
}

And now the code for this:

public IEnumerable<object> Get(string polyline)
{
    var lineString = PolylineHelper.ConvertGooglePolylineToWkt(polyline);

    using (var session = MvcApplication.DocumentStore.OpenSession())
    {
        return session.Query<Restaurant, DriveThruIndex>()
            .Customize(x => x.RelatesToShape("drivethru", lineString, SpatialRelation.Intersects))
            .Take(512)
            .Select(x => new
                            {
                                x.Name,
                                x.Street,
                                x.City,
                                x.PostCode,
                                x.Phone,
                                x.Delivery,
                                x.DriveThru,
                                x.Latitude,
                                x.Longitude
                            })
            .ToList();
    }
}

We get the driving directions from the map, convert them to a line string, and then just check whether our path intersects with the drive thru area of the restaurants.
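For reference, the WKT that comes out of that conversion is a standard line string of longitude/latitude pairs; something like this (the coordinates here are made up for illustration):

LINESTRING (-0.4614 51.4775, -0.4493 51.4821, -0.4301 51.4900)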

Pretty cool application, and some really nice UI.

Okay, enough with the accolades, next time, I’ll talk about the things that can be better.

time to read 6 min | 1143 words

By default, RavenDB makes it pretty hard to shoot yourself in the foot with unbounded result sets. Pretty much every single feature has rate limits on it, and that is a good thing.

However, there are times when you actually do want to get them all, where all actually means everything, damn it, really all of them. That has been somewhat tough to do, because it requires you to do paging, and if you are trying to do that on a running system, it is possible that incoming data will impact your export, causing you to get duplicates or miss items.

We got several suggestions about how to handle that, but most of those were pretty complex. Instead, we decided to go with the following approach:

  • We will utilize our existing infrastructure to handle exports.
  • We don’t want to do that in multiple requests, because that means that state has to be kept on both client & server.
  • The model has to be a streaming-based model, because otherwise we might get memory errors if you are trying to load millions of records.
  • The stream you get out is frozen; what you read (both indexes and data) is a snapshot of the data as it was when you started reading it.

And now, let me show you the API for that:

using (var session = store.OpenSession())
{
    var query = session.Query<User>("Users/ByActive")
                       .Where(x => x.Active);
    var enumerator = session.Advanced.Stream(query);
    int count = 0;
    while (enumerator.MoveNext())
    {
        Assert.IsType<User>(enumerator.Current.Document);
        count++;
    }

    Assert.Equal(1500, count);
}

As you can see, we use standard Linq to limit our search, and the new method we have is Stream(), which gives us an IEnumerator that will scan through the data.

You can see that we are able to get more than the default 1,024-item limit from RavenDB. There are overloads for getting additional information about the query as well (total results, timestamps, etags, etc.).
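A hedged sketch of one such overload (the QueryHeaderInformation name and its exact properties are from memory of the client API, so treat the details as an assumption):

QueryHeaderInformation queryHeaders;
var enumerator = session.Advanced.Stream(query, out queryHeaders);
// The query metadata is available up front, before enumerating any results.
Console.WriteLine("Total results: {0}", queryHeaders.TotalResults);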

Note that the values returned from Stream() are not tracked by the session. And if we had a user, users/1231, that was deleted 2 ms after the export began, you would still get it, since the data is frozen at the time the export started.

Mind you, you can specify paging if you want, and all the other indexing options are available for you as well (result transformation, for example).

time to read 2 min | 399 words

Every so often we get a request for filtered replication. "I want to replicate to this node, but only those documents." We explain that replication is a whole-database kind of thing; you can't just pick & choose what you want. That isn't actually true, though: we have facilities to do filtering, and it would be fairly easy to expose them.

We don't intend to do so. And the reason why is that the customer asking the question usually starts the conversation from midway. He read about replication and thought that it would be a good fit for a particular scenario, if only it had that feature. Except that this is completely the wrong feature to use for the scenario at hand, and it usually takes a little back & forth to figure out what the scenario actually is.

For the most part, the scenarios for this feature are all about synchronizing data between two nodes*. In particular, that is often a use case for: “I have a mobile client and I want to replicate some of the data to that laptop”, or some such.

And this is where things get complex. To start with, you say: let us just filter the data where CustomerId = "customers/5". Except that you need to apply this logic for each entity type in the database, and they usually have different rules. For example, you may have common reference data that you would want to replicate, even though it does not belong to customers/5. And invoices may have a CustomerId property, but customers do not, so for customers you need to define that it is the document Id that you want to filter by, etc.

To make things even more interesting, you need to consider the case where the sync filter has changed (this user now has access to "customers/5" and "customers/6"). At which point, you pretty much have to go through the entire data set again.

Then we move to the question of updates. How are those handled? What about conflicts? How do you handle disconnected clients that may move between addresses and IPs all the time? Who maintains this operation, the client or the server? And what about disconnected updates?

In short, there is a very different discussion that you need to have, and just exposing the replication filters won't give you that.

* Nitpicker corner: yes, I know about MS Sync.

time to read 3 min | 422 words

We are currently investigating the usage of LevelDB as a storage engine in RavenDB. Some of the things that we feel very strongly about are transactions (LevelDB doesn't have them) and performance (for a different definition than the one usually bandied about).

LevelDB does have atomicity, and the rest of ACID can be built atop of that without too much complexity (already done, in fact). But we ran into an issue when looking at the performance of reading. I am not sure if that is unique to us or not, but in our scenario, we typically deal with relatively large values. Documents of several MB are quite common. That means that we are pretty sensitive to memory allocations. It doesn't help that we have very little control over the Large Object Heap, so it was with great interest that we looked at how LevelDB did things.

Reading the actual code makes a lot of sense (more on that later; I will probably do a big review of it). But there was one story that really didn't make any sense to us: reading a value by key.

We started out using LevelDB Sharp:

Database.Get("users/1");

This in turn results in the following getting called:

[image: the LevelDB Sharp Get implementation]

A few things to note here, all from the point of view of someone who deals with very large values (a hedged reconstruction of the code follows this list):

  • valuePtr is never released, even though it was allocated on our behalf and we own it.
  • We copy the value from valuePtr into a string, resulting in two copies of the data and twice the memory usage.
  • There is no way to get just partial data.
  • There is no way to get binary data (for example, encrypted data).
  • This is going to be putting a lot of pressure on the Large Object Heap.
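Since the screenshot is not reproduced here, here is a hedged C# reconstruction of the pattern that list describes (this is not the actual LevelDB Sharp source, and the Native.leveldb_get signature is an approximation):

public string Get(string key)
{
    IntPtr error;
    UIntPtr valueLength;
    // The native side allocates a buffer and hands ownership of it to us.
    IntPtr valuePtr = Native.leveldb_get(Handle, ReadOptions, key,
                                         (UIntPtr)key.Length, out valueLength, out error);
    // Copy the entire native buffer into a managed string: a second full copy
    // of the value, which for multi-MB documents lands on the Large Object Heap.
    return Marshal.PtrToStringAnsi(valuePtr, (int)valueLength);
    // Note what is missing: valuePtr is never freed, there is no way to read
    // only part of the value, and no way to get the raw bytes.
}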

But wait, it actually gets better. Let us look at the LevelDB method that gets called:

[image: the LevelDB C++ Get method]

So we are actually copying the data multiple times now. For fun, the db->rep->Get() call also copies the data. And that is pretty much where we stopped looking.

We are actually going to need to write a new C API and export it, to be able to make use of this in our C# code. Fun, or not.

time to read 5 min | 850 words

By far the most confusing feature in RavenDB has been the index's TransformResults. We introduced this feature to give the user the ability to do server-side projections, including getting data from other documents.

Unfortunately, when we introduced this feature, we naturally added it to the index, and that caused a whole lot of confusion. In particular, people seemed to have a very hard time distinguishing between what gets indexed and is searchable, and the output of the index. To make matters worse, we also had major issues with how to determine the input of the TransformResults function. In short, the entire thing works, but from the point of view of an external user, it is really finicky and hard to get right.

Instead, during Rob’s sprint, we have introduced a totally new concept. Stand-alone Result Transformers.

Here is what they look like:

public class OrdersStatsTransformer : AbstractTransformerCreationTask<Order>
{
    public OrdersStatsTransformer()
    {
        TransformResults = orders =>
                           from order in orders
                           select new
                           {
                               order.OrderedAt,
                               order.Status,
                               order.CustomerId,
                               CustomerName = LoadDocument<Customer>(order.CustomerId).Name,
                               LinesCount = order.Lines.Count
                           };
    }
}

And yes, they are quite intentionally modeled to be very similar to the way you would define them up to now, but outside of the index.

Now, why is that important? Because now you can apply a result transformer on the server side without being tied to a specific index.

For example, let us see how we can make use of this new feature:

var customerOrders = session.Query<Order>()
    .Where(x => x.CustomerId == "customers/123")
    .TransformWith<OrdersStatsTransformer, OrderViewModel>()
    .ToList();

This separation between the result transformer and the index means that we can apply it to things like automatic indexes as well.

In fact, we can apply it during load:

var ovm = session.Load<OrdersStatsTransformer, OrderViewModel>("orders/1");

There are a whole bunch of other goodies in there as well. We made sure that now you don't have to worry about the inputs to the transform: we will automatically use the right values when you access them, based on whether you stored the field in the index or whether it is accessible on the document.

All in all, this is a very major step forward, and it makes it drastically easier to use Result Transformers in various ways.
