I think there’s a lot of survivor’s bias in judgments of what works or doesn’t work in product development. Either you made something the customers needed, in which case whatever path was taken is lauded as the right one, or you didn’t, and whatever path was taken escapes scrutiny because it doesn’t matter any more. For what it’s worth, Accelerate tries to take a more evidence-driven approach to answering this question of “what works for real”, and is worth a read.
An example of the sort of decision this survivor’s bias shapes: can product teams succeed without dedicated quality assurance engineers? Quality Assurance (and Project Management!) were dedicated roles in the past that lots of organizations are now trying to offload onto anyone else who’ll do them. The stated reasoning goes something like this.
What executive doesn’t want to hear “hire fewer people, get the same result”! Especially when the outcome is so hard to measure. So, “QA is everyone’s job now”: developers are supposed to write unit tests for each other while they’re reviewing code, and field engineers and product managers are supposed to do functional testing. To be honest, there’s a lot to like about the idea of a band of master craftspeople producing artisanal products instead of a mass-production Taylorist code factory… but the enterprise of today is hiring for and setting expectations around the latter, and doesn’t support the former.
This goes exactly nowhere unless someone, like a product manager or field engineer or team lead, plays the thankless project management taskmaster role. The result is reduced capacity: one person is now juggling what three did, so products ship with less testing done. It’s great that tests are more easily automated with CI/CD, and that some software engineers are very competent and diligent about writing tests, but that doesn’t take the place of integrated functional testing against a large set of conditions. There’s no guarantee that hiring a QA engineer will produce that level of testing, but it’s even less likely without one.
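To make that distinction concrete, here’s a minimal sketch, assuming pytest; `apply_discount` is a hypothetical stand-in for real product code, not anyone’s actual API. The first test is the kind a diligent developer writes; the rest are what sweeping “a large set of conditions” looks like.

```python
# A minimal sketch, assuming pytest. `apply_discount` is hypothetical
# stand-in product code, not a real API.
import pytest

def apply_discount(price_cents, percent):
    """Toy product code: discount a price (in integer cents) by a percent."""
    if not 0 <= percent <= 100:
        raise ValueError("percent out of range")
    return price_cents * (100 - percent) // 100

def test_happy_path():
    # The unit test a diligent developer adds during code review.
    assert apply_discount(10000, 10) == 9000

# Functional-style testing sweeps a grid of conditions instead of one
# happy path; a QA engineer's job is to make that grid large and mean.
@pytest.mark.parametrize("price_cents,percent,expected", [
    (10000, 0, 10000),   # no discount
    (10000, 100, 0),     # full discount
    (1999, 50, 999),     # odd cents: integer division truncates
    (0, 25, 0),          # free things stay free
])
def test_condition_grid(price_cents, percent, expected):
    assert apply_discount(price_cents, percent) == expected

@pytest.mark.parametrize("percent", [-1, 101])
def test_rejects_out_of_range(percent):
    with pytest.raises(ValueError):
        apply_discount(10000, percent)
```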
Of course, even if you have lots of QA people, they won’t find all the problems. You can’t prove a negative, so there’s no way to be completely sure that every scenario is accounted for. A popular QA joke goes like this:
A QA engineer walks into a bar and orders a beer.
She orders 2 beers.
She orders 0 beers.
She orders -1 beers.
She orders a lizard.
She orders a NULLPTR.
She tries to leave without paying.
Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.
The bar explodes.
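In code terms, the punchline is a coverage gap: every test exercises the surface the team thought about, and the failure lives in a path nobody thought to call. A toy illustration (all names hypothetical, no real framework implied):

```python
# Toy illustration of the joke: the tested surface is solid, the
# untested one explodes. All names here are hypothetical.
class Bar:
    def order(self, item):
        # Thoroughly tested: 1 beer, 2 beers, 0 beers, -1 beers,
        # lizards, NULLPTRs... all handled gracefully.
        if item == "beer":
            return "beer"
        raise ValueError("we don't serve that")

    def ask(self, question):
        # No test ever calls this; chaotic reality does.
        raise RuntimeError("the bar explodes")

bar = Bar()
assert bar.order("beer") == "beer"        # the whole suite passes...
try:
    bar.ask("where's the bathroom?")      # ...and the first customer finds this
except RuntimeError as boom:
    print(boom)
```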
The joke is funny because it’s true. Formalism and rigor cannot prevent trouble from arising when your product interacts with chaotic reality. Even TDD and the V Model will fail if the product team doesn’t expect the customer to ask for a bathroom. That doesn’t mean you can just give up! But it does mean you’ve got a thorny issue to solve, and maybe some team scars to work around: people asking for more bake time, avoiding Friday deploys, or feature flagging new features to off by default. None of these is wrong on its own, but together they can drastically slow the organization.
Or, cut the Gordian knot and lay off the QA team. Does your company immediately sink beneath the waves? No. Can you measure the difference in your product outcomes? Maybe. Are you sure you’re not measuring confounding factors? Not really, and so it goes until someone can successfully argue that QA is needed. Here’s a thought experiment to try: in a browser that isn’t logged in, go to LinkedIn and search for “quality assurance engineer”, then “software engineer”. I just got a 50K difference in raw counts, and glancing through the postings shows a marked difference in salary levels.
But now let’s ask… does the difference in product quality eventually damage the company, and if so, how would anyone know?