Normalising Scores – all things are relative

Estimated reading time: 2 minutes

Question: When are the following two statements both completely true?

• Your tender’s final score is 80%
• Your tender was terrible

Answer: When the scores have been normalised.

There appears to be an increasing trend for Purchasing Authorities to normalise (pronounced “inflate”) the scores that tenders are awarded in procurements. In this approach the tender that scores highest in the evaluation has its score inflated to 100% and the scores of the other tenders are scaled up accordingly. The normalised score for each tender is calculated as

(evaluated score) / (highest evaluated score) * 100%

This has the effect of expanding the gap between the scores of the best and the worst tenders (and, of course, all the others in between). Presumably the reason for doing this is that it makes debriefing easier! In an extreme example, if the best tender scores only 10% against the criteria then normalisation multiplies every score by 10, so tenders scoring between 0% and 10% end up spread between 0% and 100%. Isn’t it easier to tell a losing tenderer that they lost the competition because their score was 5% lower (normalised scores) than the winner’s rather than only 0.5% lower (un-normalised score)?
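A quick sketch (Python, with illustrative numbers chosen to match the extreme example above) shows how the formula stretches the gaps:

```python
# Hypothetical sketch of the normalisation described above; the
# function name and the raw scores are illustrative, not from any
# real procurement.

def normalise(scores):
    """Scale scores so that the highest becomes 100%."""
    top = max(scores)
    return [s / top * 100 for s in scores]

raw = [0.5, 5.0, 9.5, 10.0]  # evaluated scores (%), best only 10%
normed = [round(s, 1) for s in normalise(raw)]
print(normed)  # → [5.0, 50.0, 95.0, 100.0]
```

Note how the 0.5-point gap between the two best tenders becomes a 5-point gap once normalised, exactly the debriefing convenience described above.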

This sounds dangerously like the tenders being scored relative to each other, which would immediately open any Authority using this approach to challenge under the Procurement Contract Regulations. However, the un-normalised score measures each tender against the defined needs of the Authority (as expressed in the evaluation criteria), so it is indeed fair and objective as the regulations require; it is only the scaled score after normalisation that is relative to the other tenders.

All is fine so long as any thresholds used to reject tenders from the competition are applied BEFORE the scores are inflated. If thresholds are applied after normalisation (either to the scores of individual criteria or to overall, aggregated scores) then they identify tenders which are significantly worse than other tenders rather than those which fail to meet the needs of the Authority. This would indeed be grounds for challenge.
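The ordering matters, and a small sketch (Python, all numbers hypothetical) makes it concrete: a tender scoring 45% against the criteria rightly fails a 50% threshold on the raw scores, but sails past the same threshold once the scores have been inflated.

```python
# Hypothetical example: a 50% threshold applied before vs after
# normalisation (scores and threshold are illustrative).

def normalise(scores):
    """Scale scores so that the highest becomes 100%."""
    top = max(scores)
    return [s / top * 100 for s in scores]

raw = [90.0, 50.0, 45.0]  # evaluated scores against the criteria (%)
threshold = 50.0

# Correct: apply the threshold to the un-normalised scores.
passed_before = [s for s in raw if s >= threshold]
# The 45% tender is rejected for failing the Authority's needs.

# Risky: apply the threshold after normalisation.
normed = normalise(raw)  # [100.0, 55.6, 50.0] (to 1 d.p.)
passed_after = [s for s in normed if s >= threshold]
# All three tenders now pass, including the one that scored only 45%.
```

The post-normalisation threshold no longer asks “does this tender meet our needs?” but “is this tender much worse than the best one?”, which is the relative comparison that invites challenge.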

It is interesting to note that normalisation is widely applied to scoring price. In what is sometimes called “proportional” scoring of price, the score of the price of each tender is calculated as

(lowest price) / (tender price) * 100%

This has the effect of giving 100% to the lowest tendered price and docking the scores of the other tenders in proportion to how far their prices exceed the lowest (a price 25% above the lowest scores 100/1.25 = 80%, not 75%).
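The formula above can be sketched in the same way (Python, with illustrative prices):

```python
# Hypothetical sketch of "proportional" price scoring as defined above;
# the function name and the prices are illustrative.

def price_score(prices):
    """Score each price as (lowest price) / (tender price) * 100%."""
    low = min(prices)
    return [low / p * 100 for p in prices]

prices = [100_000, 110_000, 125_000]
print([round(s, 1) for s in price_score(prices)])  # → [100.0, 90.9, 80.0]
```

Note that a price 10% above the lowest loses about 9.1 points rather than 10, since the scoring is proportional rather than point-for-point.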

So, normalise if you want to make debriefing easier, but apply thresholds to the un-normalised scores.