The cost of defects …

I am currently reading Laurent Bossavit’s fascinating book “The Leprechauns of Software Engineering”. He dissects one software engineering ‘myth’ after another: the “Cone of Uncertainty”, “10x developer productivity”, and so on. What he is after is proof, or at least some valid empirical data. And mostly he finds: not much at all. Sources cited often do not support the claim; no data is given; or, even better, an impressive graph (that surely must be based on empirical findings) turns out to be based on subjective experience.

I like his determination: he thoroughly tracks down the original source behind a lineage of sometimes inaccurate citations, even getting past distortions of the original article.

Especially with the “Cone of Uncertainty” he does a convincing job: have you ever experienced cost underruns? The graph surely looks impressive, but it does not say much more than the famous quip “It’s difficult to make predictions, especially about the future” (ascribed to Mark Twain, but that is another story).

[Figure: relative costs to fix defects (source: https://i1.wp.com/www.infoq.com/resource/articles/resilient-security-architecture/en/resources/figure1.jpg)]

I still believe that in a non-agile software process, defects introduced early are more costly to fix than defects introduced later. But there seems to be no empirical data. So if there is no data backing up this claim, is there at least a valid qualitative argument (for non-agile projects, at least)?

I assume that the requirements document is simply shorter than the resulting software code. Now compare a wrong sentence (a defect) in the requirements document with a defect in a line of code. If the defect is found in testing or later, there is a high chance that the defect in the requirements document has influenced a large portion of the code. Therefore I would conclude (qualitative reasoning!) that the requirements defect is more costly to fix than the defect in a single line of code. Comparing lines is probably problematic, though, at the least.
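
To make that fan-out intuition concrete, here is a deliberately crude sketch. It is a toy model only: the document sizes and the linear “cost grows with affected code” proxy are my own assumptions, not data from any study.

```python
# Back-of-envelope sketch: if one requirements sentence "fans out" into many
# lines of code, a defect in that sentence taints proportionally more of the
# system than a defect confined to a single line of code.
# All numbers below are made up purely for illustration.

REQ_SENTENCES = 500      # hypothetical length of the requirements document
LINES_OF_CODE = 50_000   # hypothetical size of the resulting code base

fan_out = LINES_OF_CODE / REQ_SENTENCES  # code lines per requirements sentence

# Crude proxy: the cost of a fix grows with the amount of code affected.
cost_code_defect = 1        # a defect confined to one line of code
cost_req_defect = fan_out   # a defect propagated into all derived code

print(f"fan-out: {fan_out:.0f} lines of code per requirements sentence")
print(f"relative fix cost: {cost_req_defect / cost_code_defect:.0f}x")
```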

Let’s try another line of reasoning, again assuming a software process based on the waterfall model: a defect found in testing that was introduced in the requirements phase results in (1) a change in the requirements document, (2) probably a change in the design, and (3) certainly a change in the code. Depending on the rigidity of the process, there may also be some “quality gates”, “reviews”, etc. to pass. Compare that to a defect introduced in coding: only the code has to be changed. The code also has to be deployed and tested, but this holds true for both kinds of defects.
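
The same argument can be tallied up mechanically. Here is a minimal sketch, assuming one quality gate per reworked artifact; the phase names come from the text above, but all unit costs are invented for illustration and do not come from any study.

```python
# Toy model of the waterfall argument above: the cost of a fix is the sum of
# the artifacts that have to be reworked, plus the quality gates/reviews that
# have to be passed again. All unit costs are hypothetical.

PHASES = ["requirements", "design", "code"]
ARTIFACT_COST = {"requirements": 3, "design": 5, "code": 2}  # invented units
GATE_COST = 1  # invented cost per review/quality gate re-passed

def fix_cost(introduced_in: str) -> int:
    # Fixing a defect means reworking the artifact of the phase it was
    # introduced in, plus every artifact downstream of it.
    downstream = PHASES[PHASES.index(introduced_in):]
    artifacts = sum(ARTIFACT_COST[phase] for phase in downstream)
    gates = GATE_COST * len(downstream)  # one gate per reworked artifact
    return artifacts + gates

for phase in PHASES:
    print(f"defect introduced in {phase}: fix cost = {fix_cost(phase)}")

# Deployment and testing are left out deliberately: as argued above, they
# apply equally to both kinds of defects.
```

Whatever the actual numbers, the qualitative shape is the same: the earlier the phase in which the defect was introduced, the more downstream artifacts have to be reworked.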

I still think Laurent Bossavit has a very valid point: a paper that claims to be based on empirical data has to back up that claim by showing us the data, so that we can check the validity of the claim. So read the book, even if you disagree. It’s fun, and I think it will help you detect myths yourself.
