Thursday 24 November 2011

Double Entry Accounting and TDD

Double-entry bookkeeping system
A double-entry bookkeeping system is a set of rules for recording financial information in a financial accounting system in which every transaction or event changes at least two different nominal ledger accounts.

At its simplest, you have two ledgers: when you make a transaction, you make an entry in both. Then at the end of the month you reconcile the two ledgers, and they should agree. Essentially, from an accounting point of view, we're saying that by doing the same thing in two different ways we should arrive at the same answer.

Why is this useful from a TDD perspective?

Well, your assertions should take a different route than your implementation, particularly when you talk about integration tests.

Let's say you're writing some integration tests for a repository, and that you're using Simple.Data to access your database. Your tests should then use something else, and you should probably get as close to the metal as possible. In this case that means the SqlCommand class.

Put some data in your database using SqlCommand. Then assert that you can retrieve it using Simple.Data.

That way you are making a double-entry assertion in your tests: the chances of both systems being broken in the same way are much reduced.
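As a minimal sketch of what such a test might look like with NUnit (the Users table, its columns, and the connection string here are invented for illustration, not from the original post):

```csharp
using System.Data.SqlClient;
using NUnit.Framework;
using Simple.Data;

[TestFixture]
public class UserRepositoryTests
{
    // Hypothetical connection string for a local test database.
    private const string ConnectionString =
        "Server=.;Database=IntegrationTests;Integrated Security=true";

    [Test]
    public void Can_read_a_user_written_with_raw_sql()
    {
        // Arrange: write the fixture data with raw ADO.NET,
        // as close to the metal as we can get.
        using (var connection = new SqlConnection(ConnectionString))
        {
            connection.Open();
            using (var command = new SqlCommand(
                "INSERT INTO Users (Id, Name) VALUES (@id, @name)", connection))
            {
                command.Parameters.AddWithValue("@id", 42);
                command.Parameters.AddWithValue("@name", "Alice");
                command.ExecuteNonQuery();
            }
        }

        // Act: read the same row back through Simple.Data,
        // the access route the production code actually uses.
        var db = Database.OpenConnection(ConnectionString);
        var user = db.Users.FindById(42);

        // Assert: the two routes agree on the same data.
        Assert.AreEqual("Alice", (string)user.Name);
    }
}
```

A bug would now have to exist in both the raw SQL path and the Simple.Data path, in the same way, for this test to pass incorrectly.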

Sunday 13 November 2011

Red feature tests are pointless

We've spent a lot of time recently fixing up our automated feature tests (AATs). The problem has been that these failing tests have blinded us to real problems that have crept into live systems. The usual answer is to 'just run them again' until eventually they go green. The main problem is our attitude to the tests: we don't respect them and we don't listen to them, so they provide no feedback and are completely pointless. The responses to broken feature tests normally range from 'the test environment is down' to 'the data is in contention' to 'the database is down', but never 'something I've done has broken something'.

So what is the solution?

We improved the reliability of the tests that we could and removed the ones we couldn't. Now you may think this is a bit of a cop-out, but the amount of feedback we were getting from the tests was negligible. The best thing you can get from your tests is something you don't expect: you should expect them to pass, and be surprised or shocked when they don't.

Stop the line and fix the problem. Don't just keep running them again.