Oren Eini

CEO of RavenDB

a NoSQL Open Source Document Database

Get in touch with me:

oren@ravendb.net +972 52-548-6969

time to read 3 min | 443 words

Here is an interesting problem that I ran into. I needed to produce an XML document for an external system to consume. This is a fairly complex document format, and there are a lot of scenarios to support. I began to test drive the creation of the XML document, but it turned out that I kept having to make changes as I ran into more scenarios that invalidated previous assumptions I had made.

Now, we are talking about a very short iteration cycle. I might write a test to validate an assumption (attempting to put two items in the same container should throw) and an hour later realize that it is legal, if strange, behavior. The tests became a pain point; I had to keep updating things because the invariants that they were based upon were wrong.

At that point, I decided that TDD was exactly the wrong approach for this scenario. Therefore, I decided that I was going to fall back to the old "trial and error" method: in this case, producing the XML and comparing it using a diff tool.

The friction in the process went down significantly, because I didn't have to go and fix the tests all the time. I did break things that used to work, but I caught them mostly with manual diff checks.

So far, not a really interesting story. What is interesting is what happened when I decided that I had done enough work to consider most scenarios to be completed. I took all the scenarios and started generating tests for them. So for each scenario I now have a test that tests the current behavior of the system. This is blind testing. That is, I assume that the system is working correctly, and I want to ensure that it keeps working in this way. I am not sure what each test is doing, but the current behavior is assumed to be correct until proven otherwise.

Now I am back to having my usual safety net, and it is a lot of fun to go from zero tests to nearly five hundred tests in a few minutes.
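Tests generated this way are often called characterization tests. Here is a minimal sketch of the mechanism; the helper name and the `.approved.xml` file layout are my invention, not the original code. The first run freezes the current XML output as an approved baseline, and every later run just compares against it:

```csharp
using System;
using System.IO;
using System.Xml.Linq;

// Characterization ("blind") test sketch: the approved file records whatever
// the system produced when the scenario was frozen. The test only guards
// against regression; it says nothing about correctness.
public static class CharacterizationTest
{
    public static void AssertMatchesApproved(string scenarioName, string actualXml)
    {
        var approvedPath = scenarioName + ".approved.xml";
        if (!File.Exists(approvedPath))
        {
            // First run: freeze the current behavior as the baseline.
            File.WriteAllText(approvedPath, actualXml);
            return;
        }
        // Re-serialize both documents so that only semantic changes fail the test.
        var expected = XDocument.Parse(File.ReadAllText(approvedPath)).ToString();
        var actual = XDocument.Parse(actualXml).ToString();
        if (expected != actual)
            throw new Exception("Scenario '" + scenarioName + "' changed; diff the files.");
    }
}
```

A scenario runner can then loop over all captured scenarios and call this once per scenario, which is how "zero tests to nearly five hundred tests" becomes a few minutes of work.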

This doesn't prove that the behavior of the system is correct, but it does guard against regressions and makes sure that we have a stable platform to work from. We might find a bug, but then we can fix it in safety.

I don't recommend this approach for general use, but for this case, it has proven to be very useful.

time to read 2 min | 235 words

Let us take a look at this test. When I saw it, the code under test called out to me: "Help me, I am being intimately harassed here!"

[image: the test in question]

The problem is that this test knows all sorts of unrelated things about the code under test. It knows that it should redirect, it knows where it should redirect, etc.

What is the relation between kicking off the checkout process and redirecting? Orthogonal concerns should be in different tests. I may want to send the users to Promotions, instead of Receipt, in the future. Should this test break?

So let us try this one:

[image: the revised test]

We removed the obvious cases of too-intimate tests. But we still have very bad behavior in the test. Specifically, there is a big issue with how we are asserting that the pipeline was kicked off.

On the surface, it may appear that we are doing state based testing, but what we are actually testing is the behavior of the checkout process itself. This is not something that we want to test in this test. In this test, we just want to verify that the checkout process is started.

This test will fail when we change the checkout process, even though the behavior that we intended to test remained the same.
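Since the screenshots are missing, here is a hedged sketch of what a less intimate version might look like. The controller, interface, and id names are all hypothetical, and I am using a hand-rolled spy rather than whatever mocking setup the original used. The point is that the test records only that the checkout process was started, and stays silent about redirects and about the process's internals:

```csharp
using System;

// Hypothetical seam: the controller depends only on this interface.
public interface ICheckoutProcess
{
    void Start(int cartId);
}

// Hand-rolled spy: records the one interaction we care about, nothing else.
public class CheckoutProcessSpy : ICheckoutProcess
{
    public int? StartedCartId;
    public void Start(int cartId) { StartedCartId = cartId; }
}

public class StoreController
{
    private readonly ICheckoutProcess checkout;

    public StoreController(ICheckoutProcess checkout)
    {
        this.checkout = checkout;
    }

    public void BeginCheckout(int cartId)
    {
        checkout.Start(cartId);
        // Where we redirect afterwards is an orthogonal concern,
        // asserted in a separate test.
    }
}
```

With this shape, changing the checkout process's internal behavior (or the redirect target) does not break the "checkout was kicked off" test.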

time to read 1 min | 124 words

Let us take a look at this test. When I saw it, the test cried out in pain to me: "Help me, I am so overworked."

[image: the overworked test]

This test is doing far too much. I drew the battle line on the test, just to give you a clear indication of what is going on.

[image: the test, with the battle line drawn]

What we are asserting are things that have nothing to do with what the test is supposed to test. This is also the classic "a test should have a single assertion" example, I think.

The test passes by side effect. That will actually be a problem down the line.
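Because the original screenshots are missing, here is an invented example of the shape of the problem: instead of one test asserting the behavior under test plus a pile of side effects, each concern gets its own focused test with its own single assertion:

```csharp
using System;

// Invented example; the class and behaviors are mine, not the original code.
public class Cart
{
    public int ItemCount;
    public decimal Total;

    public void Add(decimal price)
    {
        ItemCount++;
        Total += price;
    }
}

public static class CartTests
{
    // Focused: one behavior, one assertion.
    public static void Adding_an_item_increments_the_count()
    {
        var cart = new Cart();
        cart.Add(9.99m);
        if (cart.ItemCount != 1) throw new Exception("count should be 1");
    }

    // The other effect of Add() lives in its own test, so a change to
    // one behavior fails exactly one test.
    public static void Adding_an_item_updates_the_total()
    {
        var cart = new Cart();
        cart.Add(9.99m);
        if (cart.Total != 9.99m) throw new Exception("total should be 9.99");
    }
}
```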

time to read 6 min | 1003 words

Jacob has an interesting perspective on testability:

The thing is that using Typemock means that you can unit test literally any public method of any public class, regardless of any and all internal dependencies that class might have. And you can do so without changing the design and/or architecture of your software at all.

In other words, using Typemock means that everything is testable. Unit testable. Seriously. Everything.
...
Which is why I had such a hard time reading Jeffrey Palermo's latest blog post entitled “Inversion of Control is NOT about testability”. Since I know Jeffrey Palermo is a .Net developer, my initial response is a big fat “Duh. He must be using Typemock.” Sadly, this is not the case.
...

“Testable design” only has value in the things it allows us to do—namely, unit test our classes. If we can unit test our classes as easily no matter what design patterns we choose, then that frees us up to explore other aspects of design choices. It isn't that <le design du jour> ceases to have value, it's just that testability is no longer a factor in evaluating its utility.
...

So, uh, forget all I just said. Spend lots of time making sure your .Net projects are testable. Also: Typemock sucks. Don't bother going there...

There are so many things that I disagree with in this post that I am not sure where to even begin. Before that, I need to point out, as usual, that TypeMock is an awesome tool, and that it can bring a lot of value to the table. It doesn't fit the way that I, personally, work, but that is a personal opinion, and I am just a little bit biased. I just want to make it clear that this post is not an attempt to dismiss TypeMock or its value.

With that out of the way, let us concentrate on the actual content of the post.

First, there is no silver bullet. TypeMock is wonderful, but it is not the answer to every testability prayer (is there such a thing?). If the code is bad, TypeMock can do a lot to help, but it will still be a PITA to test this code.

Second, it looks like Jacob is missing the actual message in Palermo's post about "this is not about testability". Good design is one which preserves separation of concerns, maintains the single responsibility principle, is amenable to change, etc. This also happens to be a design that is easily testable, but that is beside the point.

Good design is a simple one, where each piece of the code does one thing and is not dependent on everything else in the application. Violating that means that you will run into issues the first time you need to make a modification. The reason that we want to avoid dependencies as much as possible is that we have to avoid the cascading reaction of changes.

I didn't have to look far for an example; two posts prior to this one, Jacob gave an excellent example of why a design that is testable with TypeMock (and thus, holy*) runs into problems with real world requirements. This is not my scenario, it is his. I can come up with quite a lot of war stories about the problems caused by code with too many dependencies.

Let us take Rhino Mocks as an example (I still think of it as my best work). The project is over 3 years old, and it started on .Net 1.1 and C# 1.0. It has gone through numerous versions and grown through two (and a half :-)) major framework versions. The code is stable and as usable as the day I started the project. If you dig into Rhino Mocks, you'll find that the code is heavily segregated. That allowed me to add functionality and modify the way things work without difficulty, even when dealing with major changes, like the move to Dynamic Proxy 2 or supporting the AAA syntax.

That is what you get out of having a good design, software that is maintainable.

* sorry, this is a snipe at Jacob, not TypeMock.

time to read 1 min | 141 words

I just tried to spike something, and as usual, I created a console app and started hacking.

It is a non-trivial spike, so I started refactoring it to allow proper separation and to actually let me handle the complexity that I am trying to contain.

I couldn't continue the spike. Literally. I had no idea how to go about it.

I am currently in the process of moving the spike code into a proper environment, one that has tests, so I can actually work in small increments, and not try to implement the whole thing in a single go.

About an hour later, I have this mostly complete and working, and I can see how the tests helped me get into a situation where I can actually make a small set of changes and get things working.

time to read 1 min | 167 words

One of the more annoying things to test is time sensitive code.

I just spent five minutes trying to figure out why this code is failing:

repository.ResetFailures(failedMsgs);
var msgs = repository.GetAllReadyMessages();
Assert.AreEqual(2, msgs.Length);

Reset failures will set the retry time of the failed messages to 2 seconds in the future. GetAllReadyMessages will only get messages that are ready now.

Trying to test that can be a real pain. One solution that I have adopted across all my projects is introducing a separate concept of time:

public static class SystemTime
{
	public static Func<DateTime> Now = () => DateTime.Now;
}

Now, instead of calling DateTime.Now, I make the call to SystemTime.Now(), and get the same thing. This means that I can now test the code above easily, using:

SystemTime.Now = () => new DateTime(2000,1,1);
repository.ResetFailures(failedMsgs); 
SystemTime.Now = () => new DateTime(2000,1,2);
var msgs = repository.GetAllReadyMessages(); 
Assert.AreEqual(2, msgs.Length);

This is a really painless way to deal with this issue.
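To make the example above concrete, here is a hedged sketch of how the repository code might consume SystemTime. The MessageRepository shape and its field names are my invention for illustration, not the original code; only the SystemTime class itself comes from the post:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class SystemTime
{
    // The swappable clock, exactly as in the snippet above.
    public static Func<DateTime> Now = () => DateTime.Now;
}

public class Message
{
    public DateTime RetryAt;
}

// Hypothetical repository, for illustration only.
public class MessageRepository
{
    private readonly List<Message> messages = new List<Message>();

    public void Add(Message msg) { messages.Add(msg); }

    public void ResetFailures(IEnumerable<Message> failed)
    {
        foreach (var msg in failed)
            msg.RetryAt = SystemTime.Now().AddSeconds(2); // retry 2 seconds in the future
    }

    public Message[] GetAllReadyMessages()
    {
        // Only messages whose retry time has passed are ready "now".
        return messages.Where(m => m.RetryAt <= SystemTime.Now()).ToArray();
    }
}
```

Because both ResetFailures and GetAllReadyMessages read the clock through SystemTime.Now, the test can move time forward a full day between the two calls without sleeping.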

A TDD Dilemma

time to read 1 min | 143 words

I am currently modifying some core parts of the system, changing it from using a SQLite DB to using Berkeley DB. The problem is that it is causing... issues.

I have things fairly well isolated, but I need to write code that makes this test pass:

[image: the repository test]

As you can notice, this is a test for the repository, and it is verifying that the changes have been written to the DB correctly.

I removed the references to SQLite and am ready to write the BDB implementation. But I can't. I have no idea how to design it, and I can't write tests to allow incremental design because all the tests are broken.

I am creating a Temp.Tests project now, and TDDing the implementation, after which I will fix the tests that currently cannot compile.

time to read 2 min | 333 words

I am writing some integration tests at the moment, using WatiN, and I am really enjoying the process. The last time that I tried to use WatiN it was in a WebForms environment, and it was... hard.

Using it with MonoRail is a real pleasure. Here is a simple test:

[Test]
public void Can_submit_new_values_for_webcast()
{
	browser.GoTo(appUrl + "webcast/edit/" + webcast.Id);
	var newTestName = "New test webcast name";
	browser.TextField("webcast_Name").Value = newTestName;
	var newDesc = "There is a new webcast description";
	browser.RichTextEditor("webcast_Description_Editor").Type(newDesc);

	browser.Button("webcast_save").Click();

	browser.Url.ShouldContain("webcast/view/" + webcast.Id);
	ReloadWebcastFromDatabase();
	webcast.Name.ShouldEqual(newTestName);
	webcast.Description.ShouldEqual(newDesc);
}

For a while, I was happy with this, but I have over two dozen such tests, and it is getting very annoying to remember what the names of all the fields are. The second time that I started copying & pasting the ids from one test to another, I knew that I had a big issue. How to solve this was something that I had to think about for a while, and then it came to me.

I can create a model for the interaction with the page, and test the UI through that. Here is my test model:

[image: the test model class]

Using that, I could get my test to look like this:

[Test]
public void Can_submit_new_values_for_webcast()
{
	browser.GoTo(appUrl + "webcast/edit/" + webcast.Id);
	var newTestName = "New test webcast name";
	edit.WebcastName.Value = newTestName;
	var newDesc = "There is a new webcast description";
	edit.Description.Type(newDesc);

	edit.Save.Click();

	browser.Url.ShouldContain("webcast/view/" + webcast.Id);
	ReloadWebcastFromDatabase();
	webcast.Name.ShouldEqual(newTestName);
	webcast.Description.ShouldEqual(newDesc);
}
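Since the screenshot of the model is missing, here is a rough reconstruction of its shape, guessed from the control ids in the first version of the test. To keep the sketch self-contained I stub out the handful of browser members it touches; the real class would wrap WatiN's Browser and its TextField/Button accessors instead:

```csharp
using System.Collections.Generic;

// Minimal stand-ins for the WatiN types the model touches, so this
// sketch compiles on its own. They are NOT part of WatiN.
public class FakeField
{
    public string Value;
    public bool Clicked;
    public void Type(string text) { Value = text; }
    public void Click() { Clicked = true; }
}

public class FakeBrowser
{
    private readonly Dictionary<string, FakeField> fields =
        new Dictionary<string, FakeField>();

    public FakeField Field(string id)
    {
        if (!fields.ContainsKey(id))
            fields[id] = new FakeField();
        return fields[id];
    }
}

// Reconstructed page model: each control id lives in exactly one place,
// so tests talk to intent ("the webcast name"), not to implementation
// details ("webcast_Name").
public class WebcastEditModel
{
    private readonly FakeBrowser browser;

    public WebcastEditModel(FakeBrowser browser)
    {
        this.browser = browser;
    }

    public FakeField WebcastName { get { return browser.Field("webcast_Name"); } }
    public FakeField Description { get { return browser.Field("webcast_Description_Editor"); } }
    public FakeField Save        { get { return browser.Field("webcast_save"); } }
}
```

This is essentially the page model (a.k.a. page object) pattern: when a control id or type changes, only the model changes, and every test that uses it keeps compiling.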

Not that big a deal, you may say, but now I can just blaze through those tests, writing them with far less friction along the way.

This is especially true when you modify the implementation of the page, but not the interaction. In this case, you want to be able to replace the implementation in a single place (say, changing the id of a control, or the type of a control), instead of all over the place.

I am quite happy with this for now.
