RavenDB Ltd (formerly Hibernating Rhinos) has been around for quite some time! In its current form, we've been building the RavenDB database for over 15 years now. In late 2010, we officially moved into our first real offices.
Our first place was a small second-story office space deep in the industrial section, a bit out of the way, but it served us incredibly well until we grew and needed more space. Then we grew again, and again, and again! Last month, we moved offices yet again.
This new location represents our fifth office, with each relocation necessitated by our growth exceeding the capacity of the previous premises.
If you ever pass by Hadera, where our offices now proudly reside, you'll spot a big sign as you enter the city!
You can also see what it looks like from the inside:
Hibernating Rhinos is a joke name (see more on the exact reasons for the name below). The primary product I have been working on for the past 15 years has been RavenDB. That led to some confusion for people, but I liked the name (and I like rhinos), so we kept the name for a long while.
In the past couple of years, we have expanded massively, opening official branch companies in Europe and in the USA, both under the name RavenDB. At this point, my fondness for the name was outvoted by the convenience of having a single name for the group of companies that my little passion project became.
Therefore, we renamed the company from Hibernating Rhinos LTD to RavenDB LTD. That is a name change only, of course, everything else remains the same. It does make it easier that we don’t have to talk separately about Hibernating Rhinos vs. RavenDB (Microsoft vs. Excel is the usual metaphor that I use, but Microsoft has a lot more software than that).
For people using our profilers, they are alive and well - it’s just that the invoice letterhead may change.
As for Hibernating Rhinos - I chose that name almost twenty years ago as the name of a podcast (here is an example post, but the episodes themselves are probably long gone, I can’t be bothered to try to find them). When I needed a company name, I used this one because it was handy, and it didn’t really matter. I never thought it would become this big.
I have to admit that the biggest difference for me personally is that it is going to be much nicer to tell people who to invoice 🙂.
A good lesson I learned about being a manager is that the bigger the organization, the more important it is for me to be silent. If we are discussing a set of options, I have to talk last, and usually, I have to make myself wait until the end of a discussion before I can weigh in on any issues I have with the proposed solutions.
Speaking last isn’t something I do to have the final word or as a power play, mind you. I do it so my input won’t “taint” the discussion. The bigger the organization, the more pressure there is to align with management. If I want to get unbiased opinions and proper input, I have to wait for it. That took a while to learn because the gradual growth of the company meant that the tipping point basically snuck up on me.
One day, I was working closely with a small team. They would argue freely and push back without hesitation if they thought I was wrong. The next day, the company grew to the point where I would only rarely talk to some people, and when I did, it was the CEO talking, not me.
It’s a subtle shift, but once you see it, you can’t unsee it. I keep wondering whether I need to literally get a couple of hats and walk around the office wearing different ones at different times.
To deal with this issue, I went out of my way to get a few “no-men” (the opposite of yes-men), who can reliably tell me when what I’m proposing is… let’s call it an idealistic view of reality. These are the folks who’ll look at my grand plan to, say, overhaul our entire CRM in a week and say, “Hey, love the enthusiasm, but have you considered the part where we all spontaneously combust from stress?” There may have been some pointing at grey hair and receding hairlines as well.
The key here is that I got these people specifically because I value their opinions, even when I disagree with them. It’s like having a built-in reality check—annoying in the moment, but worth its weight in gold when it keeps you from driving the whole team off a cliff.
This ties into one of the trickier parts of managerial duties: knowing when to steer and when to step back. Early on, I thought being a manager was about having all the answers and making sure everyone knew it. But the reality? It’s more like being a gardener—you plant the seeds (the vision), water them (with resources and support), and then let the team grow into it.
My job isn’t to micromanage every leaf; it’s to make sure the conditions are right for the whole thing to thrive. That means trusting people to do their jobs, even if they don’t do it exactly how I would.
Of course, there’s another side to this gig: the ability to move the goalposts that measure what’s required. Changing the scope of a problem is a really good way to make something that used to be impossible a reality. I’m reminded of this XKCD comic—you know the one, where you change the problem just enough to turn a “no way” into a “huh, that could work”. That’s a manager’s superpower.
You’re not just solving problems; you’re redefining them so the team can win. Maybe the deadline’s brutal, but if you shift the focus from “everything” to “we don’t need this feature for launch,” suddenly everyone’s breathing again.
It is a very strange feeling because you move from doing things yourself, to working with a team, to working at a distance of once or twice removed. On the one hand, you can get a lot more done, but on the other hand, it can be really frustrating when it isn’t done the way (and with the speed) that I could do it.
This isn’t a motivational post; this is not a fun aspect of my work. I only have so many hours in the day, and being careful about where I put my time is important. At the same time, it means that I have to take into account that what I say matters, and if I say something first, it puts a pretty big hurdle in front of other people who disagree with me.
In other words, I know it can come off as annoying, but not giving my opinion on something is actually a well-thought-out strategy to get the raw information without influencing the output. When I have all the data, I can give my own two cents on the matter safely.
When we build a new feature in RavenDB, we either have at least some idea about what we want to build or we are doing something that is pure speculation. In either case, we will usually spend only a short amount of time trying to plan ahead.
A good example of that can be found in my RavenDB 7.1 I/O posts, which cover about 6+ months of work for a major overhaul of the system. That was done mostly as a series of discussions between team members, guidance from the profiler, and our experience, seeing where the path would lead us. In that case, it led us to a five-fold performance improvement (and we’ll do better still by the time we are done there).
That particular set of changes is one of the more complex and hard-to-execute changes we have made in RavenDB over the past 5 years or so. It touched a lot of code, it changed a lot of stuff, and it was done without any real upfront design. There wasn’t much point in designing, we knew what we wanted to do (get things faster), and the way forward was to remove obstacles until we were fast enough or ran out of time.
I re-read the last couple of paragraphs, and it may look like cowboy coding, but that is very much not the case. There is a process there, it is just not something we would find valuable to put down as a formal design document. The key here is that we have both a good understanding of what we are doing and what needs to be done.
RavenDB 4.0 design document
The design document we created for RavenDB 4.0 is probably the most important one in the project’s history. I just went through it again; it is over 20 pages of notes and details that discuss the state of RavenDB at the time (written in 2015) and ideas about how to move forward.
It is interesting because I remember writing this document. And then we set out to actually make it happen; that wasn’t a minor update. It took close to three years to complete the process, which should give you some context about the complexity and scale of the task.
To give some further context, here is an image from that document:
And here is the sharding feature in RavenDB right now:
This feature is called prefixed sharding in our documentation. It is the direct descendant of the image from the original 4.0 design document. We shipped that feature sometime last year. So we are talking about 10 years from “design” to implementation.
I’m using “design” in quotes here because when I go through this v4.0 design document, I can tell you that pretty much nothing that ended up in that document was implemented as envisioned. In fact, most of the things there were abandoned because we found much better ways to do the same thing, or we narrowed the scope so we could actually ship on time.
Comparing the design document to what RavenDB 4.0 ended up being is really interesting, but it is very notable that there isn’t much similarity between the two. And yet that design document was a fundamental part of the process of moving to v4.0.
What Are Design Documents?
A classic design document details the architecture, workflows, and technical approach for a software project before any code is written. It is the roadmap that guides the development process.
For RavenDB, we use them as both a sounding board and a way to lay the foundation for our understanding of the actual task we are trying to accomplish. The idea is not so much to build the design for a particular feature, but to have a good understanding of the problem space and map out various things that could work.
Recent design documents in RavenDB
I’m writing this post because I found myself writing multiple design documents in the past 6 months. More than I have written in years. Now that RavenDB 7.0 is out, most of those are already implemented and available to you. That gives me the chance to compare the design process and the implementation with recent work.
Vector Search & AI Integration for RavenDB
This was written in November 2024. It outlines what we want to achieve at a very high level. Most importantly, it starts by discussing what we won’t be trying to do, rather than what we will. Limiting the scope of the problem can be a huge force multiplier in such cases, especially when dealing with new concepts.
Reading through that document, it lays out the external-facing aspect of vector search in RavenDB. You have the vector.search() method in RQL, a discussion of how it works in other systems, and some ideas about vector generation and usage.
It doesn’t cover implementation details or how it will look from the perspective of RavenDB. This is at the level of the API consumer, what we want to achieve, not how we’ll achieve it.
AI Integration with RavenDB
Given that we have vector search, the next step is how to actually get and use it. This design document was a collaborative process, mostly written during and shortly after a big design discussion we had (which lasted for hours).
The idea there was to iron out everyone’s overall understanding of what we want to achieve. We considered things like caching and how it plays into the overall system; there are notes there down to the level of what the field names should be.
That work has already been implemented. You can access it through the new AI button in the Studio. Check out this icon on the sidebar:
That was a much smaller task in scope, but you can see how even something that seemed pretty clear changed as we sat down and actually built it. Concepts we didn’t even think to consider were raised, handled, and implemented (without needing another design).
Voron HNSW Design Notes
This design document details our initial approach to building the HNSW implementation inside Voron, the basis for RavenDB’s new vector search capabilities.
That one is really interesting because it is a pure algorithmic implementation, completely internal to our usage (so no external API is needed), and I wrote it after extensive research.
The end result is similar to what I planned, but there are still significant changes. In fact, pretty much all the actual implementation details are different from the design document. That is both expected and a good thing because it means that once we dove in, we were able to do things in a better way.
Interestingly, this is often the result of other constraints forcing you to do things differently. And then everything rolls down from there.
“If you have a problem, you have a problem. If you have two problems, you have a path for a solution.”
In the case of HNSW, a really complex part of the algorithm is handling deletions. In our implementation, each vector has an associated posting list attached to it with all the index entries that reference it. That means we can implement deletion simply by emptying the associated posting list. An entire section in the design document (and hours spent pondering) is gone, just like that.
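A minimal sketch of the idea (the shapes and names here are illustrative, not Voron's actual structures): each graph node owns a posting list of the index entries that reference its vector, so a delete never has to touch the graph itself.

```csharp
using System;
using System.Collections.Generic;

// A "node": its vector plus the posting list of index entries referencing it
var node = (Vector: new float[] { 0.1f, 0.2f },
            PostingList: new List<long> { 42L, 43L });

// Deletion is just removing the entry from the node's posting list -
// no graph surgery, no neighbor re-linking in the HNSW structure
node.PostingList.Remove(42L);

// A node whose posting list is empty is effectively dead: the graph links
// stay intact and searches simply skip it when collecting results
bool alive = node.PostingList.Count > 0; // still true, entry 43 remains
```

The design choice here is to trade a little wasted graph space (dead nodes linger until cleanup) for a deletion path that is trivially cheap and safe.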
If the design document doesn’t reflect the end result of the system, are they useful?
I would unequivocally state that they are tremendously useful. In fact, they are crucial for us to be able to tackle complex problems. The most important aspect of design documents is that they capture our view of what the problem space is.
Beyond their role in planning, design documents serve another critical purpose: they act as a historical record. They capture the team’s thought process, documenting why certain decisions were made and how challenges were addressed. This is especially valuable for a long-lived project like RavenDB, where future developers may need context to understand the system’s evolution.
Imagine a design document that explores a feature in detail—outlining options, discussing trade-offs, and addressing edge cases like caching or system integrations. The end result may be different, but the design document, the feature documentation (both public and internal), and the issue & commit logs serve to capture the entire process very well.
Sometimes, looking at the road not taken can give you a lot more information than looking at what you did.
I consider design documents to be a very important part of the way we design our software. At the same time, I don’t find them binding, we’ll write the software and see where it leads us in the end.
What are your expectations and experience with writing design documents? I would love to hear additional feedback.
RavenDB is typically accessed directly by your application, using an X509 certificate for authentication. The same applies when you are connecting to RavenDB as a user.
Many organizations require that user authentication use more than a single factor (such as a password or a certificate alone). RavenDB now supports defining Two-Factor Authentication for access.
Here is how this looks in the RavenDB Studio:
You are able to generate a certificate as well as register the Authenticator code in your device.
When you use the associated certificate, you won’t be able to access RavenDB right away. Instead, you’ll get an error message saying that you need to complete the Two-Factor Authentication process. Here is what that looks like:
Once you complete the Two-Factor Authentication process, you can select how long we’ll allow access with the given certificate, and whether to allow access only from the current browser window (because you are accessing it directly) or from any client (if you want to access RavenDB from another device or via code).
Once the session duration expires, you’ll need to provide the authentication code again, of course.
This feature is meant specifically for certificates that are used by people directly. It is not meant for APIs or programmatic access. Those should either have a manual step to allow the certificate or utilize a secrets manager that can have additional steps and validations based on your actual requirements.
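For readers curious what validating an authenticator code involves, here is a minimal sketch of TOTP verification (RFC 6238: HMAC-SHA1, six digits, 30-second steps). This is an illustrative implementation, not RavenDB's actual code; the helper names are my own.

```csharp
using System;
using System.Buffers.Binary;
using System.Security.Cryptography;

static string TotpCode(byte[] secret, DateTimeOffset time)
{
    // Number of 30-second steps since the Unix epoch, big-endian encoded
    long step = time.ToUnixTimeSeconds() / 30;
    var counter = new byte[8];
    BinaryPrimitives.WriteInt64BigEndian(counter, step);

    using var hmac = new HMACSHA1(secret);
    byte[] hash = hmac.ComputeHash(counter);

    // RFC 4226 dynamic truncation: pick 4 bytes based on the last nibble
    int offset = hash[^1] & 0x0F;
    int binary = ((hash[offset] & 0x7F) << 24)
               | (hash[offset + 1] << 16)
               | (hash[offset + 2] << 8)
               | hash[offset + 3];

    return (binary % 1_000_000).ToString("D6");
}

static bool VerifyCode(byte[] secret, string submitted, DateTimeOffset now)
{
    // Accept the previous, current, and next step to absorb clock drift
    for (int drift = -1; drift <= 1; drift++)
    {
        if (TotpCode(secret, now.AddSeconds(drift * 30)) == submitted)
            return true;
    }
    return false;
}
```

The server keeps the shared secret alongside the certificate; once a submitted code validates, it opens a session for the configured duration.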
One of the interesting components of RavenDB Cloud is status reporting. It turns out that when you offer X as a Service, people really care about your operational status.
For RavenDB Cloud, we have https://status.ravendb.net/, which will give you some insights into the overall health of the system. Here are some details from the status page:
The interesting thing about this page is that it shows global status, indicating issues affecting large swaths of users. For instance, Azure having issues in a whole region in the image above is a great example of one such scenario. Regular maintenance, which we carry out over the span of days, is something that we report, but you’ll usually never notice (thanks to the High Availability features of RavenDB).
It gets more complicated when we start talking about individual instances. There are many scenarios where the overall system health is great, but a particular database may suffer. The easiest example is if you run out of disk space. That affects that particular instance only.
For that scenario, we are reporting Production Monitoring Alerts within the RavenDB Cloud portal. Here is what this looks like:
As you can see, we report specific problems on those instances, bringing them to your attention. That was actually needed because, for the most part, RavenDB itself handles those sorts of things via High Availability, which means that even if there are issues, you’re likely not to feel them for a while.
Resilience at the cluster level means that even pretty severe problems are papered over and the system moves on. But there is only so much limping that you can do. If you are running at the bare edge of capacity, eventually you’ll trip over the line.
Those Production Monitoring Alerts allow you to detect and act upon those issues when they happen, not when they bring down production.
This aligns with our vision for RavenDB, the kind of system where you don’t need to have a full-time babysitter monitoring the system. Instead, if there is a problem that the database cannot solve on its own, it will explicitly notify you, in advance.
That leads to a system that is far healthier all around and means that you can focus on building your system, rather than managing database minutiae.
A not insignificant part of my job is to go over code. Today I want to discuss how we approach code reviews at RavenDB, not from a process perspective but from an operational one. I have been a developer for nearly 25 years now, and I’ve come to realize that when I’m doing a code review I’m actually looking at the code from three separate perspectives.
The first, and most obvious one, is when I’m actually looking for problems in the code - ensuring that I can understand what is going on, confirming the flow makes sense, etc. This involves looking at the code as it is right now.
I’m going to be showing snippets of code reviews here. You are not actually expected to follow the code, only the concepts that we talk about here.
Here is a classic code review comment:
There is some duplicated code that we need to manage. Another comment that I liked is this one, pointing out a potential optimization in the code:
If we define the lambda using the static keyword, we’ll avoid the closure and delegate allocations and save some memory, yay!
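For context, here is a small standalone illustration of the pattern (the names are made up, not the code under review). A lambda that captures a local variable forces the compiler to allocate a closure plus a fresh delegate on each call; a static lambda cannot capture, so state flows through an explicit argument and the delegate instance can be cached.

```csharp
using System.Collections.Concurrent;

var cache = new ConcurrentDictionary<string, string>();
string prefix = "user:";

// Capturing lambda: allocates a closure for `prefix` and a delegate
string a = cache.GetOrAdd("1", key => prefix + key);

// static lambda: captures are forbidden, so state is passed via the
// factory-argument overload and the compiler can cache the delegate
string b = cache.GetOrAdd("2", static (key, state) => state + key, prefix);
```

Same result, fewer allocations on the hot path, which is exactly the kind of micro-optimization that matters inside a database engine.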
It gets more interesting when the code is correct and proper, but may do something weird in some cases, such as in this one:
I really love it when I run into those because they allow me to actually explore the problem thoroughly. Here is an even better example, this isn’t about a problem in the code, but a discussion on its impact.
RavenDB has been around for over 15 years, and being able to go back and look at those conversations in a decade or so is invaluable to understanding what is going on. It also ensures that we can share current knowledge a lot more easily.
Speaking of long-running projects, take a look at the following comment:
Here we need to provide some context to explain. The _caseInsensitive variable here is a concurrent dictionary, and the change is a pretty simple optimization to avoid the annoying KeyValuePair overload. Except… this code is there intentionally, we use it to ensure that the removal operation will only succeed if both the key and the value match. There was an old bug that happened when we removed blindly and the end result was that an updated value was removed.
In this case, we look at the code change from a historical perspective and realize that a modification would reintroduce old (bad) behavior. We added a comment to explain that in detail in the code (and there already was a test to catch it if this happens again).
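To illustrate the difference with a standalone sketch (not the actual RavenDB code): on .NET 5 and later, ConcurrentDictionary exposes a TryRemove overload that takes a KeyValuePair and only removes the entry when both the key and the value match, which is exactly the guard the original code relies on.

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

var dict = new ConcurrentDictionary<string, int>();
dict["conn"] = 1;

// Simulate a concurrent update that lands after we observed the value 1
dict["conn"] = 2;

// Value-checked removal: fails, because the stored value is now 2, not 1.
// A blind dict.TryRemove("conn", out _) would have discarded the update.
bool removed = dict.TryRemove(KeyValuePair.Create("conn", 1));
```

Here the "simpler" key-only removal is the old bug in miniature: it would silently throw away the freshly updated value.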
By far, the most important and critical part of doing code reviews, in my opinion, is not focusing on what is or what was, but on what will be. In other words, when I’m looking at a piece of code, I’m considering not only what it is doing right now, but also what we’ll be doing with it in the future.
Here is a simple example of what I mean, showing a change to a perfectly fine piece of code:
The problem is that the if statement will call InitializeCmd(), but we previously called it based on a different condition. We are essentially testing for the same thing using two different methods; while both checks currently agree, we need to be aware that this may not hold in the future.
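A hypothetical sketch of the concern (InitializeCmd and the conditions are illustrative, not the reviewed code): two call sites decide "do we need to initialize?" with different tests that merely happen to agree today, and the fix is to let a single predicate own the decision.

```csharp
object? cmd = null;
bool initialized = false;

void InitializeCmd() { cmd = new object(); initialized = true; }

// Risky: two different tests for the same intent; if initialization
// semantics ever change, these call sites can silently disagree
void UseA() { if (cmd == null) InitializeCmd(); }
void UseB() { if (!initialized) InitializeCmd(); }

// Safer: one predicate owns the decision, so call sites cannot drift apart
bool NeedsInit() => cmd == null;
void UseC() { if (NeedsInit()) InitializeCmd(); }

UseC();
```

The review comment is really about future readers: centralizing the condition means there is only one place to update when the definition of "initialized" evolves.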
I believe one of the major shifts in my thinking about code reviews came about because I mostly work on RavenDB, and we have kept the project running over a long period of time. Focusing on making sure that we have a sustainable and maintainable code base over the long haul is important. Especially because you need to experience those benefits over time to really appreciate looking at codebase changes from a historical perspective.
I've been writing this blog since 2004. That means I have been doing this for twenty years, which is frankly unbelievable to me. The actual date is sometime in April, so I’ll probably do a summary post then about that.
What I want to talk about today is a different aspect. The mechanism and processes I use to write blog posts. A large part of the reason I write blog posts is that it helps me understand and organize my own thoughts. And in order to do that effectively, I have found that I need very little friction in the blogging process.
About a decade ago, Google Reader was shut down, and I’m still very bitter about that. It effectively killed a significant portion of the blogging audience and made the ergonomics of reading blogs a lot harder. That also led people to use walled gardens to communicate with others, instead of the decentralized network and feed aggregators. A side effect of that decision is that blogging tools have stopped being a viable thing people spend time or money on.
At the time, I was using Windows Live Writer, which was a high-quality editor with a rich plugin system. Microsoft discontinued it at some point; it became an open-source project, and even that died. The website is no longer functional, and even on the GitHub project, the last commit was five years ago.
I’m still using Open Live Writer to write the majority of my blog posts, but given there are no longer any plugins, even something as simple as embedding code in my posts has become an… annoyance. That kills the ergonomics of blogging for me.
Not a problem, this is Open Source, and I can do that myself. Except… I really don’t have the time to spend on something ancillary like that. I would happily pay (a reasonable amount) for a blogging client, but I’m going to assume that I’m not part of a large enough group that there is a market for this.
Taking the code snippets example, I can go into the code, figure out what is going on there, and add a “code snippet” feature. I estimate that would take several days. Alternatively, I can place the code as a GitHub gist and embed it in the page. It is annoying, but far quicker than going to the trouble of figuring that out.
Another issue that bugs me (pun intended) is a problem with copy/paste of images, where taking screenshots using the Snipping Tool doesn’t paste into Writer. I need to first paste them into Paint, then into Writer. In this case, I assume that Writer doesn’t recognize the clipboard format or something similar.
Finally, it turns out that I’m not writing blog posts in the same manner as I used to. It got to the point where I asked people to review my posts before making them public. It turns out that no matter how many times it is corrected, my brain seems unable to discern when to write “whether” or “whatever”, for example. At this point I gave up updating that piece of software 🙂. Even the use of emojis doesn’t work properly (Open Live Writer mostly predates a lot of them and breaks the HTML in a weird fashion 🤷).
In other words, there are several problems in my current workflow, and it has finally reached the point where I need to do something about it. The last requirement, by the way, is the most onerous. Consider the workflow of getting the following fixes to a blog post:
and we run => and we ran
we spend => we spent
Where is my collaborative editing and the ability to suggest changes with good UX? Improving the ergonomics for the blog has just expanded in scope massively. Now it is a full-fledged publishing platform with modern sensibilities. It’s 2024; features like proper spelling and grammar corrections should absolutely be there, no? And what about AI integration? It turns out that predicting text makes the writing process more efficient. Here is what this may look like:
At this stage, this isn’t just a few minor fixes. I should mention that for the past decade and a half or so, I stopped considering myself as someone who can do UI in any meaningful manner. I find that the <table/> tag, which used to be my old reliable method, is not recommended now, for some reason.
This… kind of sucks. I want to upgrade my process by a couple of decades, but I don’t want to pay the price for that. If only there was an easier way to do that.
I started using Google Docs to edit my blog posts, then pasting them into Live Writer or directly into the blog (using a Rich Text Box with an editor from… a decade ago; I had to check the source code to confirm that, by the way). The entire experience is decidedly Developer UX. Then I had a thought: I already have a pretty good process for writing the blog posts in Google Docs, right? It handles rich text editing and management much better than the editor in the blog. There are also options for proper workflows. For example, someone can go over my drafts and make comments or suggestions.
The only thing that I need is to put both of those together. I have to admit that I spent quite some time just trying to figure out how to get the document from Google Docs using code. The authentication hurdles are… significant to someone who isn’t aware of how it all plugs together. Once I got that done, I got my publishing platform with modern features. Here is what the end result looks like:
public class PublishingPlatform
{
    private readonly DocsService GoogleDocs;
    private readonly DriveService GoogleDrive;
    private readonly Client _blogClient;

    public PublishingPlatform(string googleConfigPath, string blogUser, string blogPassword)
    {
        var blogInfo = new MetaWeblogClient.BlogConnectionInfo(
            "https://ayende.com/blog",
            "https://ayende.com/blog/Services/MetaWeblogAPI.ashx",
            "ayende.com", blogUser, blogPassword);
        _blogClient = new MetaWeblogClient.Client(blogInfo);

        var initializer = new BaseClientService.Initializer
        {
            HttpClientInitializer = GoogleWebAuthorizationBroker.AuthorizeAsync(
                GoogleClientSecrets.FromFile(googleConfigPath).Secrets,
                new[] { DocsService.Scope.Documents, DriveService.Scope.DriveReadonly },
                "user", CancellationToken.None,
                new FileDataStore("blog.ayende.com")).Result
        };
        GoogleDocs = new DocsService(initializer);
        GoogleDrive = new DriveService(initializer);
    }

    public void Publish(string documentId)
    {
        using var file = GoogleDrive.Files.Export(documentId, "application/zip").ExecuteAsStream();
        var zip = new ZipArchive(file, ZipArchiveMode.Read);
        var doc = GoogleDocs.Documents.Get(documentId).Execute();
        var title = doc.Title;

        var htmlFile = zip.Entries.First(e => Path.GetExtension(e.Name).ToLower() == ".html");
        using var stream = htmlFile.Open();
        var htmlDoc = new HtmlDocument();
        htmlDoc.Load(stream);
        var body = htmlDoc.DocumentNode.SelectSingleNode("//body");

        var (postId, tags) = ReadPostIdAndTags(body);
        UpdateLinks(body);
        StripCodeHeader(body);
        UploadImages(zip, body, GenerateSlug(title));

        string post = GetPostContents(htmlDoc, body);

        if (postId != null)
        {
            _blogClient.EditPost(postId, title, post, tags, true);
            return;
        }

        postId = _blogClient.NewPost(title, post, tags, true, null);

        // Write the new post id back into the Google Doc, so future runs edit the same post
        var update = new BatchUpdateDocumentRequest();
        update.Requests = [new Request
        {
            InsertText = new InsertTextRequest
            {
                Text = $"PostId: {postId}\r\n",
                Location = new Location { Index = 1, }
            },
        }];
        GoogleDocs.Documents.BatchUpdate(update, documentId).Execute();
    }

    private void StripCodeHeader(HtmlNode body)
    {
        foreach (var remove in body.SelectNodes("//span[text()='']").ToArray())
        {
            remove.Remove();
        }
        foreach (var remove in body.SelectNodes("//span[text()='']").ToArray())
        {
            remove.Remove();
        }
    }

    private static string GetPostContents(HtmlDocument htmlDoc, HtmlNode body)
    {
        // we use the @scope element to ensure that the document style doesn't "leak" outside
        var style = htmlDoc.DocumentNode.SelectSingleNode("//head/style[@type='text/css']").InnerText;
        var post = "<style>@scope {" + style + "}</style> " + body.InnerHtml;
        return post;
    }

    private static void UpdateLinks(HtmlNode body)
    {
        // Google Docs put a redirect like: https://www.google.com/url?q=ACTUAL_URL
        foreach (var link in body.SelectNodes("//a[@href]").ToArray())
        {
            var href = new Uri(link.Attributes["href"].Value);
            var url = HttpUtility.ParseQueryString(href.Query)["q"];
            if (url != null)
            {
                link.Attributes["href"].Value = url;
            }
        }
    }

    private static (string? postId, List<string> tags) ReadPostIdAndTags(HtmlNode body)
    {
        string? postId = null;
        var tags = new List<string>();
        foreach (var span in body.SelectNodes("//span"))
        {
            var text = span.InnerText.Trim();
            const string TagsPrefix = "Tags:";
            const string PostIdPrefix = "PostId:";
            if (text.StartsWith(TagsPrefix, StringComparison.OrdinalIgnoreCase))
            {
                tags.AddRange(text.Substring(TagsPrefix.Length).Split(","));
                RemoveElement(span);
            }
            else if (text.StartsWith(PostIdPrefix, StringComparison.OrdinalIgnoreCase))
            {
                postId = text.Substring(PostIdPrefix.Length).Trim();
                RemoveElement(span);
            }
        }
        // after we removed post id & tags, trim the empty lines
        while (body.FirstChild.InnerText.Trim() is " " or "")
        {
            body.RemoveChild(body.FirstChild);
        }
        return (postId, tags);
    }

    private static void RemoveElement(HtmlNode element)
    {
        do
        {
            var parent = element.ParentNode;
            parent.RemoveChild(element);
            element = parent;
        } while (element?.ChildNodes?.Count == 0);
    }

    private void UploadImages(ZipArchive zip, HtmlNode body, string slug)
    {
        var mapping = new Dictionary<string, string>();
        foreach (var image in zip.Entries.Where(x => Path.GetDirectoryName(x.FullName) == "images"))
        {
            var type = Path.GetExtension(image.Name).ToLower() switch
            {
                ".png" => "image/png",
                ".jpg" or ".jpeg" => "image/jpeg",
                _ => "application/octet-stream"
            };
            using var contents = image.Open();
            var ms = new MemoryStream();
            contents.CopyTo(ms);
            var bytes = ms.ToArray();
            var result = _blogClient.NewMediaObject(slug + "/" + Path.GetFileName(image.Name), type, bytes);
            mapping[image.FullName] = new UriBuilder { Path = result.URL }.Uri.AbsolutePath;
        }
        foreach (var img in body.SelectNodes("//img[@src]").ToArray())
        {
            if (mapping.TryGetValue(img.Attributes["src"].Value, out var path))
            {
                img.Attributes["src"].Value = path;
            }
        }
    }

    private static string GenerateSlug(string title)
    {
        var slug = title.Replace(" ", "");
        foreach (var ch in Path.GetInvalidFileNameChars())
        {
            slug = slug.Replace(ch, '-');
        }
        return slug;
    }
}
You’ll probably not appreciate this, but the fact that I can just push code like that into the document and get it with proper formatting easily is a major lifestyle improvement from my point of view.
The code works with the document in two ways. First, in the Document DOM (which is quite complex), it extracts the title of the blog post and afterward updates the document with the new post ID. But the core of this code is to extract the document as a zip file, grab everything from there, and push that to the blog. I do some editing of the HTML to get everything set up properly, mostly fixing the links and uploading the images. There is also some stuff happening with CSS scopes that I frankly don’t understand. I think I got it right, which is fine for now.
This cost me a couple of evenings, and it was fun. Nothing earth-shattering, I’ll admit. But it’s the first time in a while that I actually wrote a piece of code that was immediately useful. My blogging queue is rather full, and I hope that with this new process it will be easier to push the ideas out of my head and to the blog.
And with that, it is now 01:26 AM, and I’m going to call it a night 🙂.
And as a final thought, I had just made several changes to the post after publication, and it went smoothly. I think that I like it.
If you are reading this blog, I assume that you are a like-minded person. My idea of relaxation is to sit and write code. Hopefully on something that I’m not familiar with. I have many such blog post series covering topics I care about. It’s my idea of meditation.
For the end of 2023, I thought that we could do something similar but on a broader scale. A while ago, Alex Klaus wrote a walkthrough on how to build a complete application from scratch using modern best practices (and RavenDB). We refreshed the code and made it widely available, offering you something fun, educational, and productive to engage with.
The system is a bug tracker (allowing us to focus on the architecture rather than domain concerns), and you can play with a deployed version live. The code is available under the MIT license, and we’ll be very happy to receive any suggested improvements.