Be it Bloody Mary, the sewer alligators, the kidney heist or you name it, we all know myths and urban legends. But today, let me add another really scary one to that list: 

The "cost of defect" metric.

Experienced software QA people tend to question everything; let's face it: it's in their DNA. One might think such people aren't easily fooled by myths, urban legends - or their internet variant known as the "hoax". Astonishingly though, the tale of the "cost of defect" has been ringing in our heads for decades. Just like a myth, we've shared it in awe at our QA fireplaces and keep passing it on from one generation of professionals to the next. The tale goes like this, more or less:

The later a defect is found in the software development life cycle, the more expensive it is.

We all agree. Yep. It makes so much sense when you hear it, doesn't it? Let's see.

Mythbusting

I was recently doing some research for a different blog post when I came across Google search results that almost made me spill my coffee: what used to be an unconditional truth, carved in stone for years, suddenly crumbled to dust when I noticed that a great deal of well-argued resistance against the "cost of defect" paradigm has emerged over the past few years.

What bothered me most about it wasn't the fact that I needed to rework the blog post, but that it took me almost three years to become aware of it. And, most of all, that I never questioned the metric despite my skepticism when I first heard it. Needless to say, the arguments against it matched my doubts perfectly.

Anyway, better late than never. To give you some examples, here are two of my personal favorites from the criticism list - Capers Jones and Michael Bolton:

  • Capers Jones

    • Capers Jones did an outstanding job of tearing this myth apart. If you are interested in the details, his piece is a must-read, filled to the brim with detail and deductive evidence as to why the cost-of-defect metric makes no sense. In short, his points of criticism are:
       
    1. No one ever mentions what that "cost" is actually made up of, besides, obviously, the labor cost for fixing defects and testing the fixes. That labor, however, carries the same price tag regardless of the SDLC stage in which it occurs.
      (For any "more expensive personnel" argument, see Michael Bolton below.)

    2. With each bug found in the early stages of testing, the number of remaining bugs that can be found in later stages decreases. (The number of bugs in a product may be unknown - but it is not infinite.) Therefore, by nature, fewer bugs are found in later stages than in earlier ones.
      Comparing the number of bugs found in consecutive test stages is comparing apples and oranges.
       
    3. No exact math exists to prove the famous, outrageously high factor of cost increase between subsequent test stages. On the internet you will find figures ranging from "5 times higher" to more than "100 times higher" across case studies and reports... all of which, however, fail to explain what - for heaven's sake - that "cost" actually contains.

      [Image: cost-per-defect graph]
       
    4. It inverts the meaning of software quality: the fewer bugs a product contains, the more expensive each one becomes; in turn, if a product has tons of bugs, each one appears cheap. The metric effectively penalizes writing quality software with little to no bugs. (A small worked example follows this list.)
       
  • Michael Bolton

    • Michael Bolton (a QA professional and writer of considerable fame and undoubted expertise - and no, not the singer...) took a look at the reasons why it might be more expensive to fix bugs in later stages.
       
    • Be it more complex analysis and fixing, more expensive people required, integration effort - whatever. One by one, he gave nice real-world exceptions to each of them.
      In a very entertaining way, too - definitely a good read: the full post is here
       
    • Conclusion: there are far too many exceptions to these rules to still wholeheartedly call them "rules" at all.
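
Back to Capers Jones's fourth point: a quick number game makes the inversion obvious. The sketch below assumes a fixed cost for preparing and running a test stage plus a constant labor cost per fix - both figures are invented purely for illustration, not taken from Jones's data.

    # Minimal sketch of why "cost per defect" penalizes quality.
    # Both figures below are hypothetical, chosen only to illustrate the effect.

    FIXED_TEST_COST = 10_000  # preparing and executing the test stage (fixed)
    COST_PER_FIX = 500        # analyzing, fixing and retesting one defect (variable)

    def cost_per_defect(defects_found: int) -> float:
        """Total defect-removal cost divided by the number of defects found."""
        total_cost = FIXED_TEST_COST + COST_PER_FIX * defects_found
        return total_cost / defects_found

    for defects in (100, 20, 5, 1):
        print(f"{defects:>3} defects found -> {cost_per_defect(defects):>8.2f} per defect")

    # 100 defects found ->   600.00 per defect
    #  20 defects found ->  1000.00 per defect
    #   5 defects found ->  2500.00 per defect
    #   1 defects found -> 10500.00 per defect

The cleaner the product, the worse the metric looks - which is exactly the inversion Jones criticizes.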

Wrong from the start?

So, that "cost of defect" was rubbish right from the start, right?
No. It's not all wrong: We'll have to go back in history to understand. Common sense
has it that Barry Boehm "invented" this cost-of-defect
metric, although I could not find a trustworty citation or reference. Capers Jones put it like this:

While there may be earlier uses, the metric was certainly used within IBM by the late 1960’s for software; and probably as early as 1950’s for hardware.

My friends usually wind me up by joking about the fact that I still know from my early professional days what punch cards look like. But I also remember what software development was like on midrange systems, mainframes and PCs when I took my first steps in IT in the late 80s. That, by the way, was also the first time I heard people talking about the cost of defect.

And now go figure - that metric is far older still!
That means it was born in times when software projects were delivered in waterfall only, by huge teams, and tools weren't even close to what we are used to today - I doubt many of them existed at all. Code analysis meant looking at printed source code on paper and going through the lines of text - and, to keep a real myth alive: yes, of course the pen could have been taken from your pocket protector:

[Image: pocket protector]

Back then, finding a bug meant a lot of work that only started once it was fixed: test stages that had already been carried out on the affected code had to be repeated. If the fix occurred in a shared subroutine - well, print that cross-reference list and happy testing on all the programs that used it. With all the logistics around code and testing, finding a bug late in the SDLC really did mean more work than finding it early. Here I can definitely agree with the metric, even without pulling out numbers. However...

So here are my 3 takeaways

  1. Myth busted? Busted.
    My personal conclusion is: yes, myth busted. You will have to decide for yourself how to deal with the metric. As with many things in life: it depends, your mileage may vary. The metric may not be all true (anymore) - but for quite some time it wasn't all wrong either. That time, however, is over.
    We can certainly agree on calling it a piece of QA history that made sense up to a point. But as of today, please let's stop using it on each and every occasion and keeping the myth alive. It is counter-productive when trying to overcome today's QA challenges.
     
  2. Boehm to blame?
    So should we blame Barry Boehm, tar and feather him and drag him through town? Definitely not. First of all, I am still not convinced that the metric was his idea; he may simply have been the first to refer to it in a publicly available publication. Secondly, even if it was, and even if the metric were all wrong, it was at least a metric! It helped make people think about testing challenges. Although perhaps based on wrong assumptions or approaches, its intention was the noblest one: to show the need to invest in quality and early testing.
    (Capers Jones, by the way, describes a better economic justification for investing in quality.)

  3. Learnings
    We need to shake off the dust of history in QA: what used to work in the past may no longer be the ideal blueprint, at least not always. Just think of the software testing pyramid, which gets flipped when it comes to mobile. The more complex our world becomes, the more flexible we need to become - and we won't succeed by sticking to traditional patterns without questioning whether they are still valid, or whether we need to adapt and improve. We should be doing this every single day anyway. We're QA people, after all.

Conclusion

As with all good myths, there is a (tiny) bit of truth in it. But the way software development is done today no longer allows us to apply this metric. Too much has changed since it was created. The real challenges behind it, however, still exist:

Communication, planning, management, teamwork and most of all - quality.


