OK, so this is probably going to be a controversial post.
Just call me a crusty old code warrior… but…
It’s no secret I’m wholly unsold on test-driven development [wikipedia] (TDD). I have nothing against testing or writing tests; it is vital to the overall success and stability of any quality-driven software. However, I have a major problem with the strategy that forces developers to write tests (even to prove design) before writing any of their implementation. Here’s why…
1. It’s inefficient/ineffectual – my experience leads me to believe there is far too much change (or requirement churn) during a development cycle (be it an iterative MSF/Agile project or a traditional waterfall methodology). By the time a developer has written out all the possible test scenarios, what are the odds that the business logic has changed? Then, aside from having written no code, there is a pile of unit tests to rewrite. It’s even worse when a change request comes in mid-stream. In other words, nothing is really achieved except a few hundred-odd unit tests that become unmaintainable (broken unit tests are useless) – doesn’t this suggest the cost outweighs the reward?
2. It can create bottlenecks. What’s the worst that could happen? A developer spends a decent portion of their budgeted time building up the tests first (even design-oriented ones). Without superhuman micromanagement, there is a good chance another developer will be blocked, waiting for the actual implementation to be produced. Personally, I prefer not to foster bottlenecks or resource contention…
3. It reduces all developers to the same level, regardless of their quality or attention to detail. To my mind this is a way of ensuring a specific level of quality (regressing to the average) while punishing the talented (and less bug-prone) developers who would otherwise just get the job done. It’s like a speed limit: it caps skilled drivers at the pace of the lowest common denominator.
To this end, I suggest "different rules for different tools" (I mean tools in a nice way). I can understand why graduates and intermediate developers might benefit from a tightly managed TDD policy. Seasoned developers, though, may find such a blind principle too restrictive (not to mention overkill). TDD should be applied where it is likely to add the most benefit without causing more impact than it merits.
Yes, I know I’ve singled this out on its own; rightly or wrongly, TDD should be coupled with other implementation and project management strategies…
What are your thoughts? Do you violently disagree? Can you suggest an alternative? Be creative and add a comment…
3 thoughts on “Reasons why I’m against test driven development”
If your requirements have changed before you can write the tests for the requirement, and they have changed so much that you have to throw out all the tests, then in a test-last environment they would certainly have changed before you finished the implementation, and you would likewise have to throw out all your implementation. So how exactly have you saved time? At least writing tests makes you think about how the software is supposed to behave up front and maybe you’ll realise the requirements don’t make sense before you get around to implementing them.
I don’t spend a week writing all my tests and then come back next week to implement. I write the test for one feature, implement that feature, then move to the next.
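Something like this minimal sketch of that one-feature-at-a-time cycle, using Python’s built-in unittest (the discount_price function is purely illustrative, not from the post):

```python
import unittest

# Step 1 (red): write the test for the one feature first.
class DiscountTest(unittest.TestCase):
    def test_ten_percent_discount(self):
        self.assertAlmostEqual(discount_price(100.0, 0.10), 90.0)

# Step 2 (green): write just enough implementation to make it pass,
# then move on to the next feature's test.
def discount_price(price, rate):
    return price * (1 - rate)

if __name__ == "__main__":
    unittest.main()
```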
Code needs testing anyway. It’s not adding effort and creating bottlenecks; it simply moves the effort of writing the tests up front. The benefits of doing your specification/tests up front: 1) it helps shape the design of the software into small, cohesive components; 2) it means you can adapt to change later and be sure your system still works, without waiting a month for the testers to manually run through all your regression scenarios; 3) it documents the requirements for whoever comes along later and modifies the software, so they don’t break a forgotten-about requirement that won’t be found until a month later when your testers do their regression pass – and by then you’ve forgotten the details of what you did, so it takes forever to debug the issue and work out how to fix it.
Test-first encourages you to code to interfaces, since you specify behaviour against a black-box interface rather than a concrete implementation. You can give this interface to the other developer to code against, thereby relieving the bottleneck that would exist if he were waiting on your implementation.
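A sketch of what that might look like (again Python; the Notifier interface and its methods are hypothetical names for illustration):

```python
import unittest
from abc import ABC, abstractmethod

# The agreed black-box interface: both developers code against this,
# not against each other's concrete implementations.
class Notifier(ABC):
    @abstractmethod
    def send(self, recipient: str, message: str) -> bool: ...

# A stub lets the consumer be written and tested now,
# before the real Notifier implementation exists.
class StubNotifier(Notifier):
    def __init__(self):
        self.sent = []

    def send(self, recipient, message):
        self.sent.append((recipient, message))
        return True

def alert_on_failure(notifier: Notifier, job: str) -> None:
    notifier.send("ops@example.com", f"Job {job} failed")

class AlertTest(unittest.TestCase):
    def test_alert_sends_notification(self):
        stub = StubNotifier()
        alert_on_failure(stub, "nightly-build")
        self.assertEqual(stub.sent,
                         [("ops@example.com", "Job nightly-build failed")])

if __name__ == "__main__":
    unittest.main()
```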
You agree we have to write tests anyway; otherwise what happens is we all hit ‘code complete’ and deliver into the test environment, then spend a week or two getting the system to actually work, because nobody was testing until everyone else’s work was ‘complete’.
So why not write the tests first? Then you get all the design and requirements ambiguity out of the way first.
I’ll keep this brief, but I agree with Dan. Also keep in mind that tests not only serve to exercise your code; they can also show other developers your intended usage. When working in large teams where everyone is frequently working on everything, test-driven development is the only way to go in my book. I’ve been burned far too many times by things other devs did or didn’t do to code we were collaboratively working on.
Hi Omar,
Don’t get me wrong, I’m not in any way advocating not to write tests – just that I don’t see the value in writing them first. In fact, on a recent project I found I was uncomfortable committing any code until I had decent test coverage – but as always I wrote the implementation first and then applied negative and positive unit and integration tests. I wouldn’t even commit a changeset until I had both tests and implementation written (i.e. within the same changeset). Does that make sense?
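For what it’s worth, here’s a sketch of that test-after pattern (hypothetical names, Python’s unittest): implementation first, then a positive and a negative case added before anything is committed, so both land in the same changeset.

```python
import unittest

# Implementation written first...
def parse_port(value: str) -> int:
    port = int(value)
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# ...then positive and negative tests applied before committing.
class ParsePortTest(unittest.TestCase):
    def test_valid_port(self):            # positive case
        self.assertEqual(parse_port("8080"), 8080)

    def test_out_of_range_port(self):     # negative case
        with self.assertRaises(ValueError):
            parse_port("70000")

if __name__ == "__main__":
    unittest.main()
```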
R