
Normalising Scores – is “least worst” really good enough?

In a previous blog on this subject (available here) I noted that normalising scores is an effective way to highlight the differences between bids, potentially easing the process of debriefing unsuccessful tenderers. However, what messages does it give to the bidders and the stakeholders within the procuring organisation?

Let us assume that you applied the thresholds at the right time (i.e. before normalising the scores)[1] and are content that the solution you’ve chosen does indeed deliver what you need (i.e. is acceptable to you). But how good is it really? That depends on what you are measuring it against. If you normalise the scores, then you are measuring the tenders against each other rather than against your needs. Let us look at this in more detail.

Normalising the cost score means that you are giving the highest score to the least expensive tender. This is often done by calculating the normalised cost score as the cost of the least expensive tender divided by the cost of the tender you are scoring. For example, if you have a budget of £100M and the (acceptable) tenders come in costing in the range £98M to £100M (i.e. are all really expensive), then the tender costing £98M gets the maximum normalised cost score of 100%, even though it is very near the top of your budget.
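The arithmetic above can be sketched in a few lines of Python (the function name and structure are ours, purely for illustration):

```python
# Illustrative sketch of cost-score normalisation: each tender's score
# is the cost of the least expensive tender divided by that tender's cost.
def normalised_cost_scores(costs_in_millions):
    cheapest = min(costs_in_millions)
    return {cost: cheapest / cost for cost in costs_in_millions}

# Tenders costing £98M-£100M against a £100M budget: the £98M tender
# scores 100%, despite consuming 98% of the budget.
scores = normalised_cost_scores([98, 99, 100])
print({cost: f"{score:.1%}" for cost, score in scores.items()})
# {98: '100.0%', 99: '99.0%', 100: '98.0%'}
```

Note that nothing in this calculation refers to the budget at all: only the relative position of the tenders matters.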

Similarly, normalising the quality score means that you are giving the highest score to the tender with the best quality. This is often achieved by calculating the normalised quality score as the quality score of the tender you are scoring divided by the highest quality score achieved by any of the acceptable tenders. This time, let’s say that you decided that tenders must score at least 10 points out of a maximum of 100 for quality to be acceptable to you. If the tenders’ un-normalised scores are in the range of 10 to 20 points (i.e. all only scraping past the quality threshold), then the tender with an un-normalised score of 20 points gets the maximum normalised quality score of 100%.
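As before, a short Python sketch (our own illustrative names) makes the mechanism explicit:

```python
# Illustrative sketch of quality-score normalisation: each tender's raw
# quality score is divided by the best raw score among acceptable tenders.
def normalised_quality_scores(raw_scores, threshold=10):
    acceptable = [score for score in raw_scores if score >= threshold]
    best = max(acceptable)
    return {score: score / best for score in acceptable}

# Raw scores of 10-20 out of a possible 100: the 20-point tender still
# receives a seemingly perfect 100%.
scores = normalised_quality_scores([10, 15, 20])
print({raw: f"{score:.0%}" for raw, score in scores.items()})
# {10: '50%', 15: '75%', 20: '100%'}
```

Again, the maximum possible score of 100 points plays no part in the calculation once the threshold has been passed.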

Thus, you could have a barely acceptable tender, only just within your budget, scoring a seemingly perfect 100%. Normalising the scores has masked the fact that it is poor quality and expensive. Mind you, so were the rest of the tenders! What you have really chosen is the “least worst” tender which, in this case, represents pretty terrible value for money.
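Putting the two calculations together shows the problem in one line. The 50/50 weighting below is our assumption, chosen purely to keep the sketch simple; any weighting produces the same headline result for the winning tender:

```python
# Illustrative combined score under a hypothetical 50/50 cost/quality
# weighting: the "least worst" tender - expensive AND barely above the
# quality threshold - still emerges with a perfect overall score.
def overall_score(cost, quality, cheapest, best_quality,
                  cost_weight=0.5, quality_weight=0.5):
    return (cost_weight * (cheapest / cost)
            + quality_weight * (quality / best_quality))

# The £98M, 20-point tender among tenders costing £98M-£100M
# and scoring 10-20 points out of 100:
print(f"{overall_score(98, 20, cheapest=98, best_quality=20):.0%}")  # 100%
```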

So, what message do you want to give to the bidders, and is it the same message as to your stakeholders? Our recommendation is that the message should be the same for both and that it should represent the actual value of the tendered solution to the buyer and not how it compares to the other tenders. In other words: don’t normalise the scores!

If you would like to know more about how to calculate tender scores so that they truly reflect the value for money being offered by your bidders, take a look at our Real Value for Money AWARD® module, supporting services and training.

[1] Thresholds being used to reject tenders from the competition must be applied BEFORE the scores are normalised. If thresholds are applied after normalisation (whether to the scores of individual criteria or to overall, aggregated scores), then they identify tenders which are significantly worse than other tenders rather than those which don’t meet the needs of the Authority. That would indeed be grounds for challenge.
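The ordering point can be demonstrated with a small sketch. The 60% post-normalisation threshold below is our hypothetical, chosen only to show the effect:

```python
# Why threshold ordering matters: applied to raw scores, a threshold
# reflects the buyer's needs; applied after normalisation, it rejects
# tenders merely for being worse than the best tender.
raw_scores = [10, 15, 20]          # out of 100; the buyer's threshold is 10
acceptable_before = [s for s in raw_scores if s >= 10]

best = max(raw_scores)
normalised = {s: s / best for s in raw_scores}
acceptable_after = [s for s, n in normalised.items() if n >= 0.60]

print(acceptable_before)  # [10, 15, 20] - all meet the buyer's needs
print(acceptable_after)   # [15, 20] - the 10-point tender is rejected
                          # only for being worse than the best tender
```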
