The strengths and weaknesses of Score Normalisation


In an age where competition in bidding for a strategic procurement is fierce, the need for a robust evaluation model which creates clear separation between bidders has never been greater.

Typically, when it comes to major contracts of $100 million and above, there are only a handful of bidders who can realistically provide the specialist product and supporting services that the buying authority requires. As such, in each respective market, potential bidders will know who their likely competition will be. They will be able to conduct war-gaming exercises in an attempt to gain the additional 1 or 2 percent that could be the difference between winning and losing.

This desire to win has also driven losing bidders to become far more savvy when it comes to looking for signs of a chink in the armour of an assessment scheme. Any competition in which there is a very small margin between the winner and the second-place bidder is a prime candidate for an initial review by the lawyers.

The Authority therefore faces a very real and increasing threat of challenge which can’t be ignored. Even while a result is merely in the process of being challenged, the entire competition is put on hold. This will most likely delay the original purpose of the competition – the delivery of a much-needed product or service by a certain deadline. Furthermore, it almost goes without saying that the financial cost to the Authority of a successful challenge could be crippling – this adds further pressure to the up-front design of the competition in order to help mitigate the risk of challenge.

The impact of Normalisation in practice

There is a school of thought that supports the ‘normalisation of scores’. Using this approach, the tender that scores highest in the evaluation has its score inflated to 100%, and the scores of the other tenders are scaled up proportionately (the normalised score for each tender is calculated as: evaluated score / highest score * 100%). Designing the evaluation model to include the normalisation of scores expands the gap between the scores of the best and worst tenders (and all in between), typically resulting in a larger difference in score between first and second place. The theory goes that this more pronounced separation between the bidders makes the debrief process a lot easier and helps reduce the likelihood of challenge.
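As a simple illustration of the calculation (a minimal sketch in Python; the tender labels and scores below are hypothetical), note how normalisation widens the gap between first and second place:

```python
# Hypothetical evaluated scores, each out of 100.
evaluated = {"A": 72, "B": 68, "C": 55}

# Normalise: the highest score becomes 100%, the rest scale up proportionately.
top = max(evaluated.values())
normalised = {name: score / top * 100 for name, score in evaluated.items()}

print(normalised)
# {'A': 100.0, 'B': 94.44..., 'C': 76.38...}
```

Here the 4-point gap between the top two evaluated scores becomes a gap of roughly 5.6 points after normalisation – the separation effect described above.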

The counter-argument to the use of Normalisation is that it effectively masks a poorly designed marking structure, creating scoring differences that exist only at the surface level and can be picked apart.

There are three specific issues with normalising the technical score:

  • In cases where all the technical scores are low, the winner will be awarded a score that appears to reflect a good technical offering when the solution is actually poor. This is misleading all round, and certainly won’t be appreciated by the project sponsors.
  • On a related note, the carefully considered weightings applied to the components of the technical score can lose their intended effect, unintentionally changing the outcome.
  • Ranking paradoxes may occur – for example, the elimination or withdrawal of a tender that has the best technical score but is not an overall contender can change the final ranking, with the hitherto leading bid suddenly losing the top spot for no good reason (see the sketch after this list).
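
To make the third point concrete, here is a minimal sketch of a ranking paradox. All the figures are hypothetical, as is the 50/50 technical/price weighting; only the technical score is normalised, and price scores are assumed to already sit on a fixed 0-100 scale:

```python
def normalise(scores):
    """Scale technical scores so the highest becomes 100%."""
    top = max(scores.values())
    return {name: score / top * 100 for name, score in scores.items()}

def rank(tech, price, tech_weight=0.5):
    """Combine normalised technical scores with price scores, best first."""
    norm_tech = normalise(tech)
    combined = {
        name: tech_weight * norm_tech[name] + (1 - tech_weight) * price[name]
        for name in tech
    }
    return sorted(combined.items(), key=lambda item: item[1], reverse=True)

tech = {"A": 60, "B": 80, "C": 100}   # C has the best technical score...
price = {"A": 92, "B": 70, "C": 20}   # ...but is priced out of contention

print(rank(tech, price))
# [('A', 76.0), ('B', 75.0), ('C', 60.0)]  -> A wins

# C withdraws. B's technical score now sets the normalisation baseline,
# so every normalised technical score shifts - and the winner flips.
del tech["C"], price["C"]
print(rank(tech, price))
# [('B', 85.0), ('A', 83.5)]  -> B wins
```

Neither A’s nor B’s evaluated scores changed; only the normalisation baseline moved, yet the contract would now go to a different bidder.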

Here at Commerce Decisions, while we don’t promote the use of Score Normalisation as best practice, when it comes to complex and/or strategic procurements we certainly understand why some of our customers choose to use it, for some of the reasons outlined above. Our expertise in designing competition and criteria schemes, combined with Score Normalisation being an optional feature in AWARD®, means that we can help create the ideal foundation for your procurement and the best possible outcome. For more information on how we can support you, please get in touch below or take a look at the AWARD® area of our website.

For more tips on how to achieve the best possible outcome from your procurement, read our procurement guide.

This blog was co-authored with Product Manager Dan Paterson and is a follow-up to Normalising Scores – All things are relative by Principal Consultant Andy White.