Why model answers are a bad thing!

Estimated reading time: 3 minutes

To demonstrate that a purchasing authority is being open, fair and objective in its assessment of tenders, it is common practice for the evaluation to proceed as follows:

  1. A panel of subject matter experts (SMEs) assesses the submitted tenders against the criteria using the published scoring guidance. Each SME works independently and can see only their own assessments, not those of the other SMEs.
  2. Once the SMEs have completed their assessments, the final authority assessment of each criterion is determined by moderating the SME assessments. Each criterion has a nominated moderator who is responsible for resolving any inconsistencies between the SME assessments and recording the final, agreed assessment and the reasons for it.
  3. The moderated assessments are used to inform the process of choosing the successful tender.

However, the process of moderation can be fraught with difficulty! For example, if you and I are equally qualified to assess a tender against a particular criterion (i.e. we have similar levels of relevant experience and understanding), how do we avoid the situation where, from reading exactly the same information in the tender, I give a low score but you give a high one? Whilst it is impossible to avoid this entirely, the likelihood can be reduced by describing clearly what each point on the scoring scale means.
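For the technically minded, the divergence problem can be made concrete with a small sketch. The Python below is purely illustrative: the scores, the tolerated spread and the function name are all invented for this post, not taken from any real evaluation tool.

```python
from statistics import mean

# Hypothetical independent SME scores for one criterion (assessor -> mark out of 10).
sme_scores = {"SME A": 3, "SME B": 8, "SME C": 7}

def needs_moderation(scores, max_spread=2):
    """Flag a criterion for the moderator when the independent
    SME scores diverge by more than max_spread marks."""
    return max(scores.values()) - min(scores.values()) > max_spread

if needs_moderation(sme_scores):
    print(f"Scores diverge (mean {mean(sme_scores.values()):.1f}); "
          "the moderator must resolve the inconsistency and record the agreed mark.")
```

The clearer the description of what each point on the scale means, the less often a check like this fires, and the less work the moderator in step 2 has to do.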

A traditional way of describing the scoring scale is to provide a “model answer”. In other words, if the supplier’s solution “looks like this”, then give it this mark. A real example encountered by Commerce Decisions related to an element of a contract to control vermin on the client’s site. The model answer talked about the number of rat traps that would be used. One supplier said afterwards that they had a better solution, but they couldn’t propose it because they knew it would be marked down for not matching the model answer. Their innovative solution was to encourage owls to nest around the site! Model answers are therefore bad news: they overly constrain both the suppliers’ solutions and the assessors’ ability to give credit to an innovative solution.

A better way of describing the scoring scale is to describe what the successful outcomes will be, not what the solution itself looks like. In other words, what characteristics would the solution have to exhibit in order to give the purchaser confidence that it will meet their needs? We at Commerce Decisions call these the “Confidence Characteristics”. Using the vermin example, we need to define the reduction in the rat population required to award each mark. Be careful how you do this: if you judge on the number of dead rats, an unscrupulous supplier may ship in dead rats! If you judge on the absence of rats, you should get an absence of rats, be it via rat traps, owls or some other mechanism.
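To make this concrete, here is a minimal sketch of outcome-based scoring bands for the vermin example. All of the figures are invented for illustration; the point is simply that the marks attach to the outcome (the reduction in the rat population), not to any particular solution.

```python
# Hypothetical outcome-based scoring bands (illustrative figures only):
# marks are awarded for the reduction in the rat population achieved,
# however the supplier chooses to achieve it (traps, owls or otherwise).
SCORING_BANDS = [  # (minimum % reduction, mark out of 10), highest first
    (95, 10),
    (80, 7),
    (60, 4),
    (0, 0),
]

def mark_for(reduction_pct):
    """Return the mark for a demonstrated percentage reduction."""
    for threshold, mark in SCORING_BANDS:
        if reduction_pct >= threshold:
            return mark

print(mark_for(85))  # a demonstrated 85% reduction earns 7 marks
```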