Even if I won't make it all the way to strict mathematical precision, I want to throw in an argument that is at least a little less fuzzy. Let's talk about test, baby …
Say that we have a decently long method. It probably takes care of a few things, carrying responsibility for A, B and C. These could be, respectively: validating some data, doing the right thing on a validation error, and processing the data in different ways. For each of these we have a few different variants (m, n, and p respectively).
To test this code we need a test for every path through the method. As we have m alternatives for A, n alternatives for B, and p alternatives for C, and the branches combine, we will probably need about m*n*p tests. Strictly speaking this path count is the method's NPATH complexity rather than its cyclomatic complexity (which adds branches instead of multiplying them), but either way it grows quickly with method length.
Now, let's do some refactoring. Responsibility A is split out to a method of its own, B to a method of its own, and C likewise. What remains is of course the same old method as before, but it now just consists of a few calls to a(), b(), and c(). The payoff: each extracted method can be tested in isolation, so we need m tests for a(), n for b(), p for c(), plus a handful for the composing method — roughly m+n+p tests instead of m*n*p.
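As a sketch of the refactoring above — all names here (OrderHandler, validate, reportError, process) are made up for illustration, not taken from any real codebase — here is a monolithic method next to its split-out version. Each extracted method's variants can now be exercised independently:

```java
// Hypothetical example: a handler with three responsibilities A, B, C.
public class OrderHandler {

    // Before: one long method mixing validation (A), error handling (B),
    // and processing (C). Its branches combine, so tests must cover
    // combinations of variants.
    public String handleMonolithic(String order, boolean strict) {
        String error = null;
        if (order == null || order.isEmpty()) {   // A, variant 1
            error = "empty";
        } else if (order.length() > 100) {        // A, variant 2
            error = "too long";
        }
        if (error != null) {
            return strict ? "rejected: " + error  // B, variant 1
                          : "warned: " + error;   // B, variant 2
        }
        return order.startsWith("express")        // C, variants 1 and 2
                ? "shipped fast"
                : "shipped";
    }

    // After: each responsibility in a method of its own,
    // testable in isolation with its own m, n, or p cases.
    String validate(String order) {               // a()
        if (order == null || order.isEmpty()) return "empty";
        if (order.length() > 100) return "too long";
        return null;                              // null means valid
    }

    String reportError(String error, boolean strict) {  // b()
        return strict ? "rejected: " + error : "warned: " + error;
    }

    String process(String order) {                // c()
        return order.startsWith("express") ? "shipped fast" : "shipped";
    }

    // The remaining method is just the composition of a(), b(), and c();
    // a couple of tests cover the wiring.
    public String handle(String order, boolean strict) {
        String error = validate(order);
        if (error != null) return reportError(error, strict);
        return process(order);
    }
}
```

With this shape, the test suite mirrors the sum rather than the product: a few cases per small method, plus one or two for handle() itself.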
Ok, agreed that the analysis is a bit rough and not really rigorous; but I think the general idea holds. If someone wants to elaborate this into a worked example and do a strict analysis, let me know.
As for “detestable”, I'd like to give credit to the unknown person at the Denver JUG (if I'm not misinformed) who coined the expression: darn good job.