Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

time to read 8 min | 1542 words

I have run into this post by John Rush, which I found really interesting, mostly because I so vehemently disagree with it. Here are the points that I want to address in John’s thesis:

1. The Open Source movement is going to end because AI can rewrite any OSS repo into new code and commercially redistribute it as its own.

2. Companies are going to use AI to generate their non-core software as a marketing effort (Cloudflare rebuilt Next.js in a week).

Can AI rewrite an OSS repo into new code? Let’s dig into this a little bit.

AI models today do a great job of translating code from one language to another. We have good testimonies that this is actually a pretty useful scenario, such as the recent translation of the Ladybird JS engine to Rust.

At RavenDB, we have been using that to manage our client APIs (written in multiple languages & platforms). It has been a great help with that.

But that is fundamentally the same as the Java to C# converter that shipped with Visual Studio 2005. That is 2005, not 2025, mind you. The link above is to the Wayback Machine because the original link itself is lost to history.

AI models do a much better job here, but they aren’t bringing something new to the table in this context.

Claude C Compiler

Now, let’s talk about using the model to replicate a project from scratch. And here we have a bunch of examples. There is the Claude C Compiler, an impressive feat of engineering that can compile the Linux kernel.

Except… it is a proof of concept that you wouldn’t want to use. It produces code that is significantly slower than GCC, and its output is not something that you can trust. And it is not in a shape to be a long-term project that you would maintain over the years.

For a young project, being slower than the best-of-breed alternative is not a bad thing. You’ve shown that your project works; now you can work on optimization.

For an AI project, on the other hand, you are in a pretty bad place. The key here is in terms of long-term maintainability. There is a great breakdown of the Claude C Compiler from the creator of Clang that I highly recommend reading.

The amount of work it would require to turn it into actual production-level code is enormous. I think that it would be fair to say that the overall cost of building a production-level compiler with AI would be in the same ballpark as writing one directly.

Many of the issues in the Claude C Compiler are not bugs that you can “just fix”. They are deep architectural issues that require a very different approach.

Leaving that aside, let’s talk about the actual use case. The Linux kernel’s relationship with its compiler is not a trivial one. Compiler bugs and behaviors are routine issues that developers run into and need to work on.

See the occasional “discussion” on undefined behavior optimizations by the compiler for surprisingly straightforward code.

Cloudflare’s vinext

So Cloudflare rebuilt Next.js in a week using AI. That is pretty impressive, but that is also a lie. They might have done some work in a week, but that isn’t something that is ready. Cloudflare is directly calling this highly experimental (very rightly so).

They also have several customers using it in production already. That is awesome news, except that within literal days of this announcement, multiple critical vulnerabilities have been found in this project.

A new project having vulnerabilities is not unexpected. But some of those vulnerabilities were literal copies of (fixed) vulnerabilities in the original Next.js project.

The issue here is the pace of change and the impact. If it takes an agent a week to build a project and then you throw that into production, how much real testing has been done on it? How much is that code worth?

John stated that Cloudflare’s vinext project was a marketing effort. I have to note that they had to pay bug bounties as a result and exposed their customers to higher levels of risk. I don’t consider that a plus. There is also now the ongoing maintenance cost to deal with, of course.

The key here is that a line of code is not something that you look at in isolation. You need to look at its totality. Its history, usage, provenance, etc. A line of code in a project that has been battle-tested in production is far more valuable than a freshly generated one.

I’ll refer again to the awesome “Things You Should Never Do” from Spolsky. That is over 25 years old and is still excellent advice, even in the age of AI-generated code.

NanoClaw’s approach

You’ve probably heard about the Clawdbot ⇒ Moltbot ⇒ OpenClaw, a way to plug AI directly into everything and give your CISO a heart attack. That is an interesting story, but from a technical perspective, I want to focus on what it does.

A key part of what made OpenClaw successful was the number of integrations it has. You can connect it to Telegram, WhatsApp, Discord, and more. You can plug it into your Gmail, Notes, GitHub, etc.

It has about half a million lines of code (TypeScript), which were mostly generated by AI as well.

To contrast that, we have NanoClaw with ~500 lines of code. Not a typo, it is roughly a thousand times smaller than OpenClaw. The key difference between these two projects is that NanoClaw rebuilds itself on the fly.

If you want to integrate with Telegram, for example, NanoClaw will use the AI model to add the Telegram integration. In this case, it will use pre-existing code and use the model as a weird plugin system. But it also has the ability to generate new code for integrations it doesn’t already have. See here for more details.

On the one hand, that is a pretty neat way to reduce the overall code in the project. On the other hand, it means that each user of NanoClaw will have their own bespoke system.

Contrasting the OpenClaw and NanoClaw approaches, we have an interesting problem. Both of those systems are primarily built with AI, but NanoClaw is likely going to show a lot more variance in what is actually running on your system.

For example, if I want to use Signal as a communication channel, OpenClaw has that built in. You can integrate Signal into NanoClaw as well, but it will generate code (using the model) for this integration separately for each user who needs it.

A bespoke solution for each user may sound like a nice idea, but it just means that each NanoClaw is its own special snowflake. Just thinking about supporting something like that across many users gives me the shivers.

For example, OpenClaw had an agent takeover vulnerability (reported literally yesterday) that would allow a simple website visit to completely own the agent (with all that this implies). OpenClaw’s design means that it can be fixed in a single location.

NanoClaw’s design, on the other hand, means that for each user, there is a slightly different implementation, which may or may not be vulnerable. And there is no really good way to actually fix this.

Summary

The idea that you can just throw AI at a problem and have it generate code that you can then deploy to production is an attractive one. It is also by no means a new one.

The notion of CASE tools used to be the way to go about it. The book Application Development Without Programmers was published in 1982, for example. The world has changed since then, but we are still trying to get rid of programmers.

Generating code quickly is easy these days, but that just shifts the burden. The cost of verifying code has become a lot more pronounced. Note that I didn’t say expensive. It used to be the case that writing the code and verifying it were almost the same task. You wrote the code and thus had a human verifying that it made sense. Then there are the other review steps in a proper software lifecycle.

When we can drop 15,000 lines of code in a few minutes of prompting, the entire story changes. The value of a line of code on its own approaches zero. The value of a reviewed line of code, on the other hand, hasn’t changed.

A line of code from a battle-tested, mature project is infinitely more valuable than a newly generated one, regardless of how quickly it was produced. The cost of generating code approaches zero, sure.

But newly generated code isn’t useful. In order for me to actually make use of that, I need to verify it and ensure that I can trust it. More importantly, I need to know that I can build on top of it.

I don’t see a lot of people paying attention to the concept of long-term maintainability for projects. But that is key. Otherwise, you are signing up upfront to be a legacy system that no one understands or can properly operate.

Production-grade software isn’t a prompt away, I’m afraid to say. There are still all the other hurdles that you have to go through to actually mature a project to be able to go all the way to production and evolve over time without exploding costs & complexities.

time to read 2 min | 203 words


In 2008, the movie Eagle Eye came out. I remember watching it at the time and absolutely loving it. It is an action movie, so enjoying it once is the sole criterion I have. Surprisingly, I have been getting flashbacks of this movie repeatedly in the past few weeks.

I think it is safe to talk about “spoilers” for a movie that is old enough to drive, so the core idea in this movie is that an AI wants to perform a certain action, but is prevented from doing so. It then comes up with a pretty convoluted approach to bypassing those limits. I’m intentionally vague here, because the movie is actually good and you should watch it.

The key here, which is the reason that I remember an 18-year-old movie, is that we are actually seeing this behavior today with AI agents. It is an entirely relatable phenomenon to see the agent running into an obstacle, and then trying to bypass it using crazier and crazier techniques.

The movie aged particularly well in this regard, because what was a plot device in there is a daily occurrence in our lives now. For reference, see this Tweet.

time to read 9 min | 1674 words

I am working a bit with sparse files, and I need to output the list of holes in my file.

To my great surprise, I found that my file had more holes than I put into it. This probably deserves a bit of explanation.

If you know what sparse files are, feel free to skip this explanation:

A sparse file reduces disk space usage by storing only the non-zero data blocks. Zero-filled regions (“holes”) are recorded as file system metadata only.

The file still has the same “size”, but we don’t need to dedicate actual disk space for ranges that are filled with zeros, we can just remember that there are zeros there. This is a natural consequence of the fact that files aren’t actually composed of linear space on disk.
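To make this concrete, here is a minimal sketch (the file name and function are mine, and it assumes a Linux file system with sparse-file support, such as ext4): it creates a 1GB file containing a single written byte at the very end, then compares the logical size to the blocks actually allocated:

```c
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

// Create a 1GB file that contains a single written byte at the very end.
// Returns the number of 512-byte blocks actually allocated, or -1 on error.
long long sparse_demo(const char *path)
{
    int fd = open(path, O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    // Seek far past the current end and write one byte - everything
    // before it is a hole, recorded as file system metadata only.
    if (lseek(fd, 1024LL * 1024 * 1024 - 1, SEEK_SET) < 0 ||
        write(fd, "x", 1) != 1) {
        close(fd);
        return -1;
    }

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }
    close(fd);

    printf("logical size: %lld bytes, allocated: %lld bytes\n",
           (long long)st.st_size, (long long)st.st_blocks * 512);
    return (long long)st.st_blocks;
}
```

On a sparse-capable file system this reports a 1GB logical size backed by only a handful of allocated blocks; the exact number depends on the file system.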

Filesystems grow files using extents (contiguous disk chunks):

1. A file initially gets a single extent (e.g., 1MB).

2. Fast I/O is maintained as sequential data fills this contiguous block.

3. Once the extent is full, the filesystem allocates a new, separate extent (most likely not residing next to the previous one).

4. The file's logical size grows continuously, but physical allocation occurs in discrete bursts as new extents are dynamically added.

If you are old enough to remember running defrag, that was essentially what it did: ensure that the whole file was a single contiguous allocation on disk. Because of this, it is very simple for a file system to just record holes, and the only file system you’ll find in common use today that doesn’t support it is FAT.

At any rate, I had a problem. My file has more holes than expected, and that is not a good thing. This is the sort of thing that calls for a “Stop, investigate, blog” reaction. Hence, this post.

Let’s see a small example that demonstrates this:


#define _GNU_SOURCE
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    const off_t file_size = 1024LL * 1024 * 1024;
    int fd = open("test-sparse-file.dat", O_CREAT | O_RDWR | O_TRUNC, 0644);
    if (fd < 0 || fallocate(fd, 0, 0, file_size) < 0) {
        perror("setup failed");
        return 1;
    }

    off_t offset = 0;
    while (offset < file_size) {
        // Find the start of the next hole...
        off_t hole_start = lseek(fd, offset, SEEK_HOLE);
        if (hole_start < 0 || hole_start >= file_size)
            break;

        // ...and the start of the next data region after it.
        off_t hole_end = lseek(fd, hole_start, SEEK_DATA);
        if (hole_end < 0) // no more data - the hole runs to the end
            hole_end = file_size;

        printf("Start: %.2f MB, End: %.2f MB\n",
               hole_start / (1024.0 * 1024.0),
               hole_end / (1024.0 * 1024.0));

        offset = hole_end;
    }

    close(fd);
    return 0;
}

If you run this code, you’ll see this surprising result:


Start: 0.00 MB, End: 1024.00 MB

In other words, even though we just use fallocate() to ensure that we reserved the disk space, as far as lseek() is concerned, it is just one big hole. What is going on here?

Let’s dig a little deeper, using filefrag:


$ filefrag -b1048576 -v test-sparse-file.dat 
Filesystem type is: ef53
File size of test-sparse-file.dat is 1073741824 (1024 blocks of 1048576 bytes)
 ext:     logical_offset:        physical_offset: length:   expected: flags:
   0:        0..      23:     165608..    165631:     24:             unwritten
   1:       24..     151:     165376..    165503:    128:     165632: unwritten
   2:      152..     279:     165248..    165375:    128:     165504: unwritten
   3:      280..     407:     165120..    165247:    128:     165376: unwritten
   4:      408..     535:     164992..    165119:    128:     165248: unwritten
   5:      536..     663:     164864..    164991:    128:     165120: unwritten
   6:      664..     791:     164736..    164863:    128:     164992: unwritten
   7:      792..     919:     164608..    164735:    128:     164864: unwritten
   8:      920..    1023:     164480..    164583:    104:     164736: last,unwritten,eof
test-sparse-file.dat: 9 extents found

You can see that the file is made of 9 separate extents. The first one is 24MB in size, then 7 extents that are 128MB each, and the final one is 104MB.

Amusingly enough, the physical layout of the file is in reverse order to the logical layout of the file. That is just the allocation pattern of the file system, since there is no relation between the two.

Now, let’s try to figure out what is going on here. Do you see the flags on those extents? It says unwritten. That means this is physical space that was allocated to the file, but the file system is aware that it never wrote to that space. Therefore, that space must be zero.

In other words, conceptually, this unwritten space is no different from a sparse region in the file. In both cases, the file system can just hand me a block of zeros when I try to access it.

The question is, why is the file system behaving in this manner? And the answer is that this is an optimization. Instead of reading the data (which we know to be zeros) from the disk, we can just hand it over to the application directly. That saves on I/O, which is quite nice.

Consider the typical scenario of allocating a file and then writing to it. Without this optimization, we would literally double the amount of I/O we have to do.

It turns out that this optimization also applies to Windows and Mac, but the reason I ran into that on Linux is that I used the lseek(SEEK_HOLE), which considers the unwritten portion as a sparse hole as well. This makes sense, since if I want to copy data and I am aware of sparse regions, I should treat the unwritten portions as holes as well.

You can use ioctl(FS_IOC_FIEMAP) to inspect the actual file extents (this is what filefrag does) if you actually care about the difference.
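A hedged sketch of that, Linux-only (the function name is mine): it asks the kernel for the file’s extent map via FS_IOC_FIEMAP and prints which extents carry the unwritten flag, which is exactly the distinction that lseek(SEEK_HOLE) collapses into “hole”:

```c
#include <stdio.h>
#include <string.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fiemap.h>
#include <linux/fs.h>

// Print each extent of the file, flagging the ones that are allocated
// but unwritten. Returns the number of extents, or -1 on error.
int print_extents(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    // Room for up to 32 extents - enough for this demo.
    char buf[sizeof(struct fiemap) + 32 * sizeof(struct fiemap_extent)];
    memset(buf, 0, sizeof(buf));
    struct fiemap *fm = (struct fiemap *)buf;
    fm->fm_length = FIEMAP_MAX_OFFSET; // map the whole file
    fm->fm_extent_count = 32;

    if (ioctl(fd, FS_IOC_FIEMAP, fm) < 0) {
        close(fd);
        return -1;
    }

    for (unsigned i = 0; i < fm->fm_mapped_extents; i++) {
        struct fiemap_extent *ex = &fm->fm_extents[i];
        printf("logical %llu..%llu%s\n",
               (unsigned long long)ex->fe_logical,
               (unsigned long long)(ex->fe_logical + ex->fe_length),
               (ex->fe_flags & FIEMAP_EXTENT_UNWRITTEN) ? " (unwritten)" : "");
    }
    close(fd);
    return (int)fm->fm_mapped_extents;
}
```

Running it against the test-sparse-file.dat from above should list the allocated-but-unwritten extents that filefrag showed, rather than one big hole.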

time to read 1 min | 172 words

I needed to export all the messages from one of our Slack channels. Slack has a way of exporting everything, but nothing that could easily just give me all the messages in a single channel.

There are tools like slackdump or Slack apps that I could use, and I tried, but I got lost trying to make it work. In frustration, I opened VS Code and wrote:

I want a simple node.js that accepts a channel name from Slack and export all the messages in the channel to a CSV file

The output was a single script and instructions on how I should register to get the right token. It literally took me less time to ask for the script than to try to figure out how to use the “proper” tools for this.

The ability to do these sorts of one-off things is exhilarating.

Keep in mind: this isn’t generally applicable if you need something that would actually work over time. See my other post for details on that.

time to read 5 min | 897 words

Modern coding agents can generate a lot of code very quickly. What once consumed days or weeks of a person’s time is now a simple matter of a prompt and a coffee break. The question is whether this changes any of the fundamental principles of software development.

A significant portion of software engineering (beyond pure algorithms and data structure work) is not about the code itself, but about managing the social aspects of building and evolving the software over time.

Our system's architecture inherently mirrors the structure of the organization that builds it, as stated by Conway's Law. Therefore, software engineering deals a lot with how a software project is structured to ensure that a (human) team can deliver, make changes, and maintain it over time.

That is why maintainability is such a high-value target: an unmaintainable project quickly becomes one no one can safely change. A good example is OpenSSL circa Heartbleed, or your bank’s COBOL-based core systems.

Does this still apply in the era of coding agents? If a new feature is needed, and I can simply ask a model to regenerate the whole thing from scratch, bypassing technical debt and re-incorporating all constraints, do I still need to worry about maintainability?

My answer in this regard is emphatically yes. There is immense value in ensuring the maintainability of projects, even in the age of AI agents.

One of the most obvious answers is that a maintainable project minimizes the amount of code you must review and touch to make a change. Translating this into the language of Large Language Models, this means you are fundamentally reducing the required context needed to execute a change.

It isn’t just about saving our token budget. Even assuming an essentially unlimited budget, the true value extends beyond mere computation cost.

The maintainability of a software project remains critical because you cannot trust a model to act with absolute competence. You do not have the option of simply telling a model, "Make this application secure," and blindly expecting a perfect outcome. It will give you a thumbs-up and place your product API key in the client-side code.

Furthermore, in a mature software project, even one built entirely with AI, making substantial changes using an AI agent is incredibly risky. Consider the scenario where you spend a week with an agent, carefully tweaking the system's behavior, reviewing the code, and directing its output into the exact shape required.

Six months later, you return to the same area for a change. If the model rewrites everything from scratch, because it can, the entire context and history of those days and weeks of careful guidance will be lost. This lost context is far more valuable than the code itself.

Remember Hyrum’s Law: "With a sufficient number of users of an API, it does not matter what you promise in the contract: all observable behaviors of your system will be depended on by somebody."

The "sufficient number of users" is surprisingly low, and observable behaviors include non-obvious factors like performance characteristics, the order of elements in a JSON document, the packet merging algorithm in a router you weren’t even aware existed, etc.

The key is this: if a coding agent routinely rewrites large swaths of code, you are not performing an equivalent exchange.

Even if the old code had been AI-generated, it was subsequently subjected to human review, clarification, testing, and verification by users, then deployed - and it survived the production environment and production loads.

The entirely new code has no validated quality yet. You must still expend time and effort to verify its correctness. That is the difference between the existing code and the new one.

Over 25 years ago, Joel Spolsky wrote Things You Should Never Do about the Netscape rewrite. That particular article has withstood the test of time very well. And it is entirely relevant in the age of coding agents as well.

Part of my job involves reviewing code on a project that is over fifteen years old with over a million lines of code. This past week, I've reviewed pull requests ranging from changes of a few hundred lines to one that changed over 10,000 lines of code.

The complexity involved in code review scales exponentially with the amount of code changed, because you must understand not just the changed code, but all its interactions with the rest of the system.

That 10,000+ line pull request is the sort of thing that is appropriate for major features, worth the time and effort it takes to properly understand and evaluate the change.

Thinking that you can just have a coding agent throw big changes on a project fundamentally misunderstands how projects thrive. And assuming you can have one agent write the code and another review it is a short trip to madness.

In summary, maintainability in the age of coding agents looks remarkably like it did before. The essential requirements remain: clear boundaries, a consistent architecture, and the ability to go into a piece of code and understand exactly what it's doing.

Funnily enough, the same aspects of good software engineering discipline also translate well into best practices for AI usage: limiting the scope of change, reducing the amount of required context, etc.

You should aim to modify a single piece of code, or better yet, create new code instead of modifying existing, validated code (Open/Closed Principle).

Even with AI, the human act of reviewing code is still crucial. And if your proposed solution is to have one AI agent review another, you have simply pushed the problem one layer up, as you are still faced with the necessity of specifying exactly what the system is supposed to be doing in a way that is unambiguous and clear.

There is already a proper way to do that, we call it coding 🙂.

time to read 4 min | 710 words

I was reviewing some code, and I ran into the following snippet. Take a look at it:


public void AddAttachment(string fileName, Stream stream)
{
    ValidationMethods.AssertNotNullOrEmpty(fileName, nameof(fileName));
    if (stream == null)
        throw new ArgumentNullException(nameof(stream));

    string type = GetContentType(fileName);

    _attachments.Add(new PutAttachmentCommandData("__this__", fileName, stream, type, changeVector: string.Empty));
}

private static string GetContentType(string fileName)
{
    var extension = Path.GetExtension(fileName);
    if (string.IsNullOrEmpty(extension))
        return "image/jpeg"; // Default fallback

    return extension.ToLowerInvariant() switch
    {
        ".jpg" or ".jpeg" => "image/jpeg",
        ".png" => "image/png",
        ".webp" => "image/webp",
        ".gif" => "image/gif",
        ".pdf" => "application/pdf",
        ".txt" => "text/plain",
        _ => "application/octet-stream"
    };
}

I don’t like this code because the API is trying to guess the intent of the caller. We are making some reasonable inferences here, for sure, but we are also ensuring that any future progress will require us to change our code, instead of letting the caller do that.

In fact, the caller probably knows a lot more than we do about what is going on. They know if they are uploading an image, and probably in what format too. They know that they just uploaded a CSV file (and that we need to classify it as plain text, etc.).

This is one of those cases where the best option is not to try to be smart. I recommended that we write the function to let the caller deal with it.

It is important to note that this is meant to be a public API in a library that is shipped to external customers, so changing something in the library is not easy (change, release, deploy, update - that can take a while). We need to make sure that we aren’t blocking the caller from doing things they may want to.

This is a case of trying to help the user, but instead ending up crippling what they can do with the API.

time to read 2 min | 266 words

RavenDB has recently introduced its dedicated Kubernetes Operator, a big improvement over the Helm charts that teams have been using. This is meant to streamline database orchestration and management, essentially giving you an automated "SRE-in-a-box."

You can read the full announcement here. And the actual operator is available here.

The Operator shifts the management paradigm from manual configuration to a declarative model. Simply applying a RavenDBCluster custom resource definition (CRD) allows developers to automate the heavy lifting of cluster formation, storage binding, and external networking, removing the operational friction typically associated with running stateful distributed systems on K8s.

Most importantly, it isn’t a one-time thing. The RavenDB Kubernetes Operator is all about "Day 2" operational intelligence. It handles complex lifecycle tasks with high precision, such as executing safe rolling upgrades with built-in validation gates to prevent breaking changes.

From dealing with the intricacies of certificate rotation—supporting both Let’s Encrypt and private PKI—to providing real-time health insights directly via kubectl, the automation of these critical maintenance tasks lets the Operator ensure that your RavenDB clusters remain resilient, secure, and performant with minimal manual intervention.

For example, you can push an upgrade from RavenDB 7.0 to RavenDB 7.2, and the Operator will automatically perform a rolling upgrade for you, ensuring there is no downtime during deployment. There is no need for complex orchestration playbooks; you just push the update, and it happens for you.

This is part of the same DevOps push we are making. If you are partial to Ansible, on the other hand, we have recently published great support there as well.

time to read 2 min | 266 words

I’m looking for a key technical voice to join the team: a Sales Engineer who will be based in a GMT to GMT+3 time zone to best support our growing European and international customer base.

We want someone who is passionate about solving complex technical challenges and who can have fun talking to people and building relationships. You’ll bridge the gap between our technology and our customers' business needs.

The Technical Chops: We need a technical champion for the sales process. That means diving deep into solution architecture, designing and executing proofs of concept, and helping customers architect reliable, scalable, and ridiculously fast systems using RavenDB. You need to understand databases (SQL, NoSQL, and the cloud), and be ready to learn RavenDB's powerful features inside and out. If you have a background in development (C#, Java, Python - it all helps!) and enjoy thinking about things like indexing strategies, data modeling, and performance tuning, you’ll love this.

People Person: You need to be able to walk into a room (virtual or physical), quickly identify a customer's pain points, and articulate a clear, compelling vision for how RavenDB solves them. This role requires excellent communication skills - you’ll be giving engaging demos, leading technical presentations, and collaborating directly with high-level technical teams. If you can discuss a multi-region deployment strategy one minute and explain the ROI to a business executive the next, you’ve got the commercial savviness we’re looking for.

You should have 3+ years of experience in a pre-sales or solution architecture role. A strong general database background is required; experience with NoSQL databases is a big plus.

Please ping us either by commenting here or by submitting your details to jobs@ravendb.net.

time to read 4 min | 716 words

You may have heard about a recent security vulnerability in MongoDB (MongoBleed). The gist is that you can (as an unauthenticated user) remotely read the contents of MongoDB’s memory (including things like secrets, document data, and PII). You can read the details about the actual technical issue in the link above.

The root cause of the problem is that the authentication process for MongoDB uses MongoDB’s own code. That sounds like a very strange statement, no? Consider the layer at which authentication happens. MongoDB handles authentication at the application level.

Let me skip ahead a bit to talk about how RavenDB handles the problem of authentication. We thought long and hard about that problem when we redesigned RavenDB for the 4.0 release. One of the key design decisions we made was to not handle authentication ourselves.

Authentication in RavenDB is based on X.509 certificates. That is usually the highest level of security you’re asked for by enterprises anyway, so RavenDB’s minimum security level is already at the high end. That decision, however, had a lot of other implications.

RavenDB doesn’t have any code to actually authenticate a user. Instead, authentication happens at the infrastructure layer, before any application-level code runs. That means that at a very fundamental level, we don’t deal with unauthenticated input. That is rejected very early in the process.

It isn’t a theoretical issue, by the way. A recent CVE was released for .NET-based applications (of which RavenDB is one) that could lead to exactly this issue, an authentication bypass problem. RavenDB is not vulnerable to it because the authentication mechanism it relies on is much lower in the stack.

By the same token, the code that actually performs the authentication for RavenDB is the same code that validates that your connection to your bank is secure from hackers. On Linux - OpenSSL, on Windows - SChannel. These are already very carefully scrutinized and security-critical infrastructure for pretty much everyone.

This design decision also leads to an interesting division inside RavenDB. There is a very strict separation between authentication-related code (provided by the platform) and RavenDB’s.

The problem for MongoDB is that they reused the same code for reading BSON documents from the network as part of their authentication mechanism.

That means that any aspect of BSON in MongoDB needs to be analyzed with an eye toward unauthenticated user input, as this CVE shows.

An attempt to add compression support to reduce network traffic resulted in size confusion, which then led to this problem. To be clear, that is a very reasonable set of steps that happened. For RavenDB, something similar is plausible, but not for unauthorized users.

What about Heartbleed?        

The name Mongobleed is an intentional reference to a very similar bug in OpenSSL from over a decade ago, with similar disastrous consequences. Wouldn’t RavenDB then be vulnerable in the same manner as MongoDB?

That is where the choice to use the platform infrastructure comes to our aid. Yes, in such a scenario, RavenDB would be vulnerable. But so would pretty much everything else. For example, MongoDB itself, even though it isn’t using OpenSSL for authentication, would also be vulnerable to such a bug in OpenSSL.

The good thing about OpenSSL’s Heartbleed bug is that it shined a huge spotlight on such bugs, and it means that a lot of time, money, and effort has been dedicated to rooting out similar issues, to the point where trust in OpenSSL has been restored.

Summary

One of the key decisions that we made when we built RavenDB was to look at how we could use the underlying (battle-tested) infrastructure to do things for us.

For security purposes, that means we have reduced the risk of vulnerabilities. A bug in RavenDB code isn’t a security vulnerability on its own; you have to target the (much more closely scrutinized) infrastructure to actually reach a vulnerable state. That is part of our Zero Trust policy.

RavenDB has a far simpler security footprint: we use enterprise-grade TLS & X.509 for authentication instead of implementing six different protocols (and carrying the liability of each). This both simplifies the process of setting up RavenDB securely and reduces the effort required to achieve proper security compliance.

It is hard to overstate the power of checking the “X.509 client authentication” box and dropping whole sections of the security audit when deploying a new system.
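
For illustration only, here is what “require an X.509 client certificate” looks like at the TLS API level. This is a Python sketch (RavenDB itself is a .NET application and gets this from SChannel/OpenSSL via the platform); the point is that the application merely configures the requirement, while the TLS stack does all the certificate parsing and validation:

```python
import ssl

def make_server_context(ca_file=None):
    # Server-side TLS context that demands an X.509 client certificate.
    # The TLS stack performs the handshake and certificate validation;
    # application code never parses untrusted certificate bytes itself.
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    if ca_file is not None:
        # Trust only client certificates signed by this CA.
        ctx.load_verify_locations(cafile=ca_file)
    ctx.verify_mode = ssl.CERT_REQUIRED  # handshake fails without a client cert
    return ctx
```

A connection that doesn’t present a certificate signed by a trusted CA never gets past the handshake, so none of your request-handling code ever sees it.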

time to read 13 min | 2490 words

In the previous post, I talked about the PropertySphere Telegram bot (you can also watch the full video here). In this post, I want to show how we can make it even smarter. Take a look at the following chat screenshot:

What is actually going on here? This small interaction showcases a number of RavenDB features, all at once. Let’s first focus on how Telegram hands us images. This is done using Photo or Document messages (depending on exactly how you send the message to Telegram).

The following code shows how we receive and store a photo from Telegram:


// Download the largest version of the photo from Telegram:
var ms = new MemoryStream();
var fileId = message.Photo.MaxBy(ps => ps.FileSize).FileId;
var file = await botClient.GetInfoAndDownloadFile(fileId, ms, cancellationToken);

// Create a Photo document to store metadata:
var photo = new Photo
{
    ConversationId = GetConversationId(chatId),
    Id = "photos/" + Guid.NewGuid().ToString("N"),
    RenterId = renter.Id,
    Caption = message.Caption ?? message.Text
};

// Store the image as an attachment on the document:
await session.StoreAsync(photo, cancellationToken);
ms.Position = 0;
session.Advanced.Attachments.Store(photo, "image.jpg", ms);
await session.SaveChangesAsync(cancellationToken);

// Notify the user that we're processing the image:
await botClient.SendMessage(
    chatId,
    "Looking at the photo you sent... this may take me a moment...",
    cancellationToken
);

A Photo message in Telegram may contain multiple versions of the image at various resolutions. Here I simply select the largest one by file size, download it from Telegram’s servers into a memory stream, and then create a Photo document with the image stream added to it as an attachment.

We also tell the user to wait while we process the image, but note that there is no further code here that actually does anything with it.

Gen AI & Attachment processing

We use a Gen AI task to actually process the image, handling it in the background since it may take a while and we want to keep the chat with the user open. That said, if you look at the actual screenshots, the entire conversation took under a minute.

Here is the actual Gen AI task definition for processing these photos:


var genAiTask = new GenAiConfiguration
{
    Name = "Image Description Generator",
    Identifier = TaskIdentifier,
    Collection = "Photos",
    Prompt = """
        You are an AI Assistant looking at photos from renters in 
        rental property management, usually about some issue they have. 
        Your task is to generate a concise and accurate description of what 
        is depicted in the photo provided, so maintenance can help them.
        """,

    // Expected structure of the model's response:
    SampleObject = """
        {
            "Description": "Description of the image"
        }
        """,

    // Apply the generated description to the document:
    UpdateScript = "this.Description = $output.Description;",

    // Pass the caption and image to the model for processing:
    GenAiTransformation = new GenAiTransformation
    {
        Script = """
            ai.genContext({
                Caption: this.Caption
            }).withJpeg(loadAttachment("image.jpg"));
            """
    },
    ConnectionStringName = "Property Management AI Model"
};

What we are doing here is asking RavenDB to send the caption and image contents of each document in the Photos collection to the AI model, along with the given prompt, and asking it to describe what is in the picture.

Here is an example of the results of this task after it completed. For reference, this is the full description the model produced for the image:

A leaking metal pipe under a sink is dripping water into a bucket. There is water and stains on the wooden surface beneath the pipe, indicating ongoing leakage and potential water damage.

What model is required for this?

I’m using the gpt-4.1-mini model here; there is no need for anything beyond that. It is a multimodal model capable of handling both text and images, so it works great for our needs.

You can read more about processing attachments with RavenDB’s Gen AI here.
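
For reference, the multimodal request such a task ends up issuing has roughly this shape: a text part plus the image inlined as a base64 data URL. This is a sketch of OpenAI-style content parts (RavenDB builds and sends the real request for you, so the helper below is hypothetical and purely illustrative):

```python
import base64

def build_image_message(caption: str, jpeg_bytes: bytes) -> dict:
    # A multimodal chat message: a text part plus the JPEG inlined
    # as a base64 data URL (OpenAI-style content parts).
    b64 = base64.b64encode(jpeg_bytes).decode("ascii")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": caption},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }
```

The value of the Gen AI task is precisely that you never write this plumbing yourself: RavenDB loads the attachment, encodes it, and tracks which documents still need processing.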

We still need to close the loop, of course. The Gen AI task that processes the images is actually running in the background. How do we get the output of that from the database and into the chat?

To process that, we create a RavenDB Subscription to the Photos collection, which looks like this:


store.Subscriptions.Create(new SubscriptionCreationOptions
{
    Name = SubscriptionName,
    Query = """
        from "Photos" 
        where Description != null
        """
});

RavenDB invokes this subscription whenever a document in the Photos collection is created or updated and its Description has a value. In other words, it is triggered when the Gen AI task updates the photo after it runs.

The actual handling of the subscription is done using the following code:


_documentStore.Subscriptions.GetSubscriptionWorker<Photo>("After Photos Analysis")
    .Run(async batch =>
    {
        using var session = batch.OpenAsyncSession();
        foreach (var item in batch.Items)
        {
            var renter = await session.LoadAsync<Renter>(
                item.Result.RenterId!);
            await ProcessMessageAsync(_botClient, renter.TelegramChatId!,
                $"Uploaded an image with caption: {item.Result.Caption}\r\n" +
                $"Image description: {item.Result.Description}.",
                cancellationToken);
        }
    });

In other words, we run over the items in the subscription batch and, for each one, emit a “fake” message as if it were sent by the user to the Telegram bot. Note that we aren’t invoking the RavenDB conversation directly; instead, we reuse the Telegram message-handling logic. This way, the reply from the model goes directly back into the user’s chat.

You can see how that works in the screenshot above. The model looked at the image and then acted on it; in this case, by creating a service request. We previously looked at charging a credit card, so now let’s see how the model creates a service request.

The AI Agent is defined with a CreateServiceRequest action, which looks like this:


Actions = [
    new AiAgentToolAction
    {
        Name = "CreateServiceRequest",
        Description = "Create a new service request for the renter's unit",
        ParametersSampleObject = JsonConvert.SerializeObject(
            new CreateServiceRequestArgs
            {
                Type = """
                    Maintenance | Repair | Plumbing | Electrical | 
                    HVAC | Appliance | Community | Neighbors | Other
                    """,
                Description = """
                    Detailed description of the issue with all 
                    relevant context
                    """
            })
    },
]

As a reminder, this is the description of the action that the model can invoke. Its actual handling is done when we create the conversation, like so:


conversation.Handle<PropertyAgent.CreateServiceRequestArgs>(
    "CreateServiceRequest",
    async args =>
    {
        using var session = _documentStore.OpenAsyncSession();
        var unitId = renterUnits.FirstOrDefault();
        var propertyId = unitId?.Substring(0, unitId.LastIndexOf('/'));

        var serviceRequest = new ServiceRequest
        {
            RenterId = renter.Id!,
            UnitId = unitId,
            Type = args.Type,
            Description = args.Description,
            Status = "Open",
            OpenedAt = DateTime.UtcNow,
            PropertyId = propertyId
        };

        await session.StoreAsync(serviceRequest);
        await session.SaveChangesAsync();

        return $"Service request `{serviceRequest.Id}` created for your unit.";
    });

There isn’t much to the handler itself, but hopefully it conveys the kind of code this approach allows you to write.

Summary

The PropertySphere sample application and its Telegram bot are interesting, mostly because of everything that isn’t here. We have a bot that has a pretty complex set of behaviors, but there isn’t a lot of complexity for us to deal with.

This behavior emerges from the capabilities we entrust to the model. At the same time, I’m not trusting the model blindly; I verify that whatever it does always stays within the scope of what the user is allowed to do.

Extending what we have here to allow additional capabilities is easy. Consider adding the ability to get invoices directly from the Telegram interface, a great exercise in extending what you can do with the sample app.

There is also the full video where I walk you through all aspects of the sample application, and as always, we’d love to talk to you on Discord or in our GitHub discussions.
