Whenever we compare estimates from software development agencies and specialists, I expect to see major differences between them. Our clients don’t. In fact, the variation in software development time estimates, among equally skilled and competent developers, is often as high as 70%. We find that some estimates are systematically lower or higher than others, and this is partly due to one or more forms of bias.
If, after adjusting for bias, the estimates still appear flawed - put simply, too high or too low to be credible - then the estimator’s judgement might have been affected by something called "Noise". In this post we discuss both bias and noise. We also describe an approach to identifying the most appropriate software development partners by adjusting their estimates for the impact of bias and noise.
JUDGEMENT IS BOTH BIASED AND NOISY
Bias has a systematic impact on judgements. Consider the example of a software developer based in a low-cost country, bidding on work for international and local clients. They may be biased in the sense that their estimates for international clients (perceived to have larger budgets) are consistently higher than those for local clients. Now imagine a typical software development agency based in a major western city.
For identical work, they might systematically discriminate between startups and established businesses. Specifically, they might routinely offer better-value proposals only to those businesses that have the skills and resources to professionally analyse and challenge their proposals. However, even after adjusting for these and other forms of bias, we typically find significant variations in estimates for the same software development work offered by equally skilled and competent developers.
PEOPLE ARE NOT RELIABLE JUDGES
One reason is that software developers, and people in general, are not reliable decision makers. Far more than we are aware of, including context and exposure to seemingly irrelevant information, affects our judgement. Even factors such as the weather and our mood affect judgements, and they account for much of the 70% difference that we routinely see across estimates received from different suppliers for the same software development work.
In one example, I was in discussion with two team leaders at the same software development company and asked them (in a single email to both) to submit an estimate for a project. Each one thought that the request was addressed to him alone, and both submitted separate time and cost estimates for the same work that were vastly different - the gap was close to 55%. Both were highly experienced and trusted leaders within the company, and both relied on the same pool of equally skilled and experienced developers. How could their estimates be so widely different?
To validate our initial observations, we first looked at independent studies across sectors such as law, software development and finance. A 2011 study of more than 1,000 parole rulings found that cases heard at the end of the day, or just before lunch, had a near-zero chance of receiving a favourable ruling. How hungry or tired a judge is should have no impact on their ruling, and yet the data says it does.
Academic researchers have repeatedly confirmed that professionals often contradict their own prior judgements when given the same data on different occasions. For instance, when software developers were asked on two separate days to estimate the completion time for a given task, the hours they projected differed by 71% on average. The judges and the developers were affected by something that experts refer to as noise.
NOISE AND ITS HIDDEN COSTS
So much affects human judgement. In their book “Noise: A Flaw in Human Judgment”, Daniel Kahneman, Olivier Sibony and Cass Sunstein define noise as unwanted variability in professional judgements. In one example from the book, the authors looked at investment decisions at an investment firm. Senior leaders there wanted to find out whether there was noise among their analysts, so they designed a case: “Here is a company, here is its P&L, here is its cash-flow statement.” They gave the same set of data to all their analysts, who were supposed to be applying the same methods and techniques they use to value every company they look at. On average, between any two analysts, there was a 44% difference in their valuations. Leaders at the firm had no idea that the level of variability would be this large.
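A noise audit like the one above boils down to a simple calculation: collect several professionals’ judgements of the same case, then take the average absolute difference between every pair of judgements, expressed relative to the size of the judgements themselves. The Python sketch below illustrates that calculation; the valuation figures are hypothetical, invented purely for the example, not taken from the book.

```python
from itertools import combinations


def pairwise_noise(estimates):
    """Average absolute difference between every pair of judgements,
    expressed relative to the mean of each pair."""
    pairs = list(combinations(estimates, 2))
    diffs = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return sum(diffs) / len(diffs)


# Hypothetical valuations (in $M) of the same company by five analysts.
valuations = [84, 120, 97, 145, 110]
print(f"Average pairwise difference: {pairwise_noise(valuations):.0%}")
```

If all analysts agreed exactly, the metric would be 0%; the more their judgements scatter, the higher it climbs. The same function works just as well on a set of supplier time or price estimates for one project.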
In their book, the authors convincingly and exhaustively show that noise imposes a significant hidden tax across many industries. Below, we analyse bias and noise within the context of software development planning estimates.
INEVITABLY NOISY ESTIMATES
For every project that we work on at Buildze, we may review dozens of time and price estimates. Some simply don’t make the shortlist because they’re clearly off target. Those that do vary significantly in terms of time and price - in fact, by the second shortlist cut-off, I still expect differences in price of up to 70%. Bias, as described above, accounts for some of this, but I find that after adjusting for bias there are still significant variations, often as large as 60%. In my experience, just as in the cases reviewed by Daniel Kahneman et al., noise is the biggest driver of these differences.
Based on our own experience analysing hundreds of software development time and price estimates, below are a few ideas to help you take immediate action against both bias and noise in the project estimates that you receive. In a future blog we will present the complete framework in greater detail.
FIRST, REDUCE OPPORTUNITIES FOR BIAS
Even seemingly irrelevant information about your company may affect your suppliers’ judgements, and this carries over into their time and price estimates. For example, if your supplier knows that you are a startup rather than a major corporation, your estimate may reflect your perceived ability to pay. If you are known to operate in a low-cost country, the estimates you receive might be lower than if you were known to generate your revenue in a major western city.
What to do - Carefully consider what information (especially about your company) you make available when requesting estimates.
NEXT, LOOK FOR NOISE AND TAKE ACTION
The following are just some of the many areas to examine. Are suppliers pessimistic or optimistic about your project? What risks do they anticipate? Have they recently completed a similar project? A lower price and shorter timeline may seem attractive, but if your supplier is forced to absorb unknown risks, it may affect the quality and outcome of your project.
Action - Question whether risks are correctly accounted for in the estimate. Be knowledgeable and transparent about all known risks.
When did you ask for the estimate? Did you send your request on a Friday, expecting a reply by Monday? As estimating is skilled, judgement-based work, rushing often leads to poor decisions.
Action - Ask what process was followed to arrive at the estimate. Make sure that a formal process is followed and that the right people are involved, with sufficient time allowed for the correct preparation of your estimate.
Do suppliers know your expectations, and does each agency in your lineup know that it is competing with other agencies, and with how many? Agencies that are exposed to others’ bids will be influenced by them.
Action - Avoid exposing suppliers to each other’s estimates, or indeed to your own expectations.

SOFTWARE DEVELOPMENT BIAS AND NOISE AUDIT FRAMEWORK
Over the past few years, we’ve seen an increasing number of instances where the same developers, asked at different times of the week, give wildly different estimates for seemingly identical work. The same phenomenon seems to be at work when equally qualified and competent developers give highly divergent estimates for the same project. Bias is important, but it is harder to identify and remove than noise. Noise is more pervasive and insidious - especially in software development - but it can be identified and reduced. To do this, we have developed a framework that we will share in a future blog.