Colour team reviews are an integral part of any bidding process, providing an additional perspective on how your bid will be viewed by evaluators. However, the approach different organisations use to undertake these reviews can vary significantly.
Whilst the colours that bid teams use to represent the different stage reviews may differ, you’ll need to ensure that all your reviewers adopt a consistent approach to reviewing and providing feedback. This has become even more important with many dispersed bid teams continuing to work remotely. Without a formalised structure in place, your reviews can often end up being purely technical or solution-focused, rather than concentrating on the most important aspect: is your response going to achieve the score you’re aiming for?
With over 20 years’ experience supporting many of the largest organisations bidding into public sector procurements, including the UK MOD, we have a unique understanding of the challenges bid teams face. We’ve used this insight to develop a structured approach to formal bid reviews, designed to help you avoid the following 5 common mistakes:
1. Focusing on aspects of the response inappropriate to its maturity
Not every bidding opportunity allows time for, or justifies the use of, multiple review phases, but where multiple reviews are used, it’s important to focus on the aspects of the response that are relevant to its maturity. Too often, a review concentrates on aspects that aren’t appropriate to the bid’s maturity, which can result in incorrect and unhelpful feedback being provided. This in turn affects the accuracy of the scoring, as well as the final outcome. The main review cycles applied to the bid response are:
- Storyboard review (often referred to as a ‘Pink’ review)
The purpose of a storyboard review is to ensure that the plan for the response is structured in an appropriate manner and identifies:
- All of the key aspects of the criteria (question) to be addressed
- The evidence to be incorporated into the response
- The customer benefits to be highlighted throughout the response.
The Pink review should be carried out with the assumption that the sections are well written and will include all of the elements within the plan, and the potential score should be considered from this perspective.
- Proposal Draft review (often referred to as a ‘Red’ review)
This review is often carried out when the response is between 70% and 90% complete, and should be used to determine that:
- All of the key aspects of the criteria (question) have been fully addressed
- The evidence incorporated into the response is appropriate to the criteria, and emphasises strengths (and differentiation) in the areas being tested by the Authority
- The customer benefits are clearly highlighted throughout the response.
This review should therefore be conducted from a different perspective to that of the Pink review and should be used to identify any gaps and ensure that your bid is compliant, comprehensive, and compelling, to help you achieve the highest possible score.
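To make the difference in focus between the two reviews concrete, here’s a minimal sketch (in Python) of per-stage checklists, paraphrased from the bullets above. The structure and names are illustrative only, not a formal review template:

```python
# Illustrative per-stage review checklists, paraphrased from the bullets above.
# This is a sketch of the idea, not a formal review template.
REVIEW_CHECKLISTS = {
    "Pink": [  # storyboard review: judge the plan, assuming it will be well written
        "Plan addresses all key aspects of the criteria (question)",
        "Evidence to be incorporated into the response is identified",
        "Customer benefits to be highlighted are identified",
    ],
    "Red": [   # draft review: judge the written response itself
        "All key aspects of the criteria (question) are fully addressed",
        "Evidence is appropriate and emphasises strengths and differentiation",
        "Customer benefits are clearly highlighted throughout",
    ],
}

def review_focus(stage: str) -> None:
    """Print the checklist a reviewer should apply at the given stage."""
    for item in REVIEW_CHECKLISTS[stage]:
        print(f"[{stage}] {item}")

review_focus("Pink")  # a Pink review should not be asking Red-level questions
```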
2. Reviewing a bid from a technical solution perspective only
One of the biggest issues we come across with both Pink and Red reviews is that they tend to focus on the ‘technical’ aspects of the solution rather than the ‘quality’ of the response. It’s generally too late to change the technical elements at this stage of the process, and it’s widely recognised that the ‘quality’ of the response has a far greater impact on your final score.
When a review focuses purely on the technical merits of the solution, other key elements of the question that have not been addressed can be completely overlooked. This will usually result in over-optimistic scoring and can be one of the main contributing factors to losing the bid. So next time you’re in the review process, make sure you also consider the quality of the response, to help you reach top marks.
3. Providing inconsistent feedback to authors
The biggest challenge response authors face is when different reviewers provide inconsistent and contradictory feedback. This is often caused by expectations not being set for either the reviewers or authors at the start of the review cycle. It typically leads to a lack of clarity and lower quality responses as it’s difficult for authors to truly know what they need to do to enhance the response.
With a large proportion of reviewers likely to come from outside the capture team, they’ll need to understand the maturity of the response, the evaluation method they should follow, and how feedback should be created and presented to authors, so that their feedback is clear and consistent. Taking advantage of expert training before the start of the process will ensure reviewers and authors have a clear understanding of what is required at the different review levels, helping to improve the overall quality and eventual outcome of the review.
4. Scoring predictions that are inconsistent with the Authority’s approach
Buying organisations need to develop and use a robust, defensible scoring mechanism and then share this with bidders as part of the ITx documents. In doing so, the Authority advises exactly how it will score bidders for each individual response.
However, we often see bidders using their own internal (standard or bid-specific) scoring scheme and guidance when conducting colour reviews; examples include a simple RAG score or a 1-10 scale. Although these approaches let you compare the relative quality of your responses internally, they may bear little or no resemblance to the customer’s scoring scheme and guidance, so your internal review can produce individual response and total proposal scores that differ markedly from the customer’s evaluation.
But you can avoid this all-too-common mistake by ensuring you have a structured colour team review and feedback process. This will give bidders greater clarity on the scoring scheme the Authority will be using, as well as increased focus on how their response will be judged.
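To make this concrete, here’s a minimal sketch in Python of what scoring ‘like the customer’ can look like. The score bands and descriptors below are hypothetical stand-ins for whatever scheme the Authority publishes in the ITx documents, and the function name is illustrative only; the point is simply that reviewers record predicted scores against the published definitions rather than an internal RAG or 1-10 scale:

```python
# Illustrative sketch only: the bands and descriptors below are hypothetical.
# In practice, copy the Authority's published scoring scheme verbatim from the
# ITx documents and have every reviewer score against it.
AUTHORITY_SCHEME = {
    100: "Excellent - fully addresses the criterion with strong evidence",
    75: "Good - addresses the criterion with minor gaps in evidence",
    50: "Adequate - addresses the criterion but with notable gaps",
    25: "Poor - only partially addresses the criterion",
    0: "Unacceptable - fails to address the criterion",
}

def record_score(question_id: str, predicted: int, justification: str) -> dict:
    """Record a reviewer's predicted score against the Authority's scheme.

    Rejecting undefined bands forces reviewers to use the customer's
    definitions rather than an internal RAG or 1-10 scale.
    """
    if predicted not in AUTHORITY_SCHEME:
        raise ValueError(
            f"{predicted} is not a band in the Authority's scheme: "
            f"{sorted(AUTHORITY_SCHEME)}"
        )
    return {
        "question": question_id,
        "score": predicted,
        "definition": AUTHORITY_SCHEME[predicted],
        "justification": justification,
    }

# Example usage at a Red review:
entry = record_score(
    "Q3.2",
    75,
    "Evidence covers delivery record, but differentiation is not drawn out.",
)
print(entry["definition"])
```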
5. Poor version control and document management
Finally, when reviewers pass feedback directly to authors for them to update, this can lead to a lack of control that causes frustration and reduces the efficiency of the entire review process. Examples include:
- Not using a defined versioning nomenclature/convention or version control tool (a minimal convention check is sketched after this list)
- Not showing a clear demarcation between what has been updated and what is unchanged
- Updating the same version of the response document throughout the process, making rollback or comparison between iterations impossible.
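Where a dedicated version control tool isn’t available, even a manually enforced filename convention can be checked automatically. The sketch below assumes a hypothetical convention of the form QuestionID_Stage_vMajor.Minor.docx; adapt the pattern to whatever nomenclature your team defines:

```python
import re

# Hypothetical convention: <question>_<stage>_v<major>.<minor>.docx
# e.g. Q3.2_Red_v0.3.docx -- adapt the pattern to your team's own nomenclature.
FILENAME_PATTERN = re.compile(
    r"^(?P<question>Q[\d.]+)_(?P<stage>Pink|Red)_v(?P<major>\d+)\.(?P<minor>\d+)\.docx$"
)

def check_version_name(filename: str) -> dict:
    """Parse a response filename, raising if it breaks the convention."""
    match = FILENAME_PATTERN.match(filename)
    if not match:
        raise ValueError(f"'{filename}' does not follow the versioning convention")
    return match.groupdict()

# Each review iteration gets a new minor version, so earlier drafts are
# preserved for rollback and comparison between iterations.
print(check_version_name("Q3.2_Red_v0.3.docx"))
# -> {'question': 'Q3.2', 'stage': 'Red', 'major': '0', 'minor': '3'}
```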
Establishing a well-defined process for providing structured feedback to authors will enable greater accuracy and improved decision making when scoring individual responses, ensuring the best possible bid submission.
Understanding how these 5 common mistakes can be made and, more importantly, how to avoid them will help you get the most from your next colour team review process and ensure you’re better placed to improve your score and secure the winning bid.
How Commerce Decisions can help
Our expert support and guidance is here to help you overcome the issues you’re likely to face at every stage of the review process, and includes:
- Colour review training so that authors and reviewers are clear on the expectations for the review cycle, including the specific nuances that depend on whether the response and the review are at Pink or Red level.
- Best practice advice for defining and encouraging better bid response version control, including ensuring that the folder structure for your review documents mirrors your review process, and using version control software or manually enforcing a filename versioning convention.
- A structured colour team review and feedback process to ensure that all reviewers evaluate and score in the same way as the customer. This helps you focus on how the response will be judged on all the specific criteria elements relative to the scoring guidance, not just the quality or strengths of the technical or commercial solution.
- Use of our feedback template, combined with a well-defined, tried and tested in-brief for both reviewers and authors, to ensure that feedback is received in a structured framework in one concise document. This ensures a more accurate scoring assessment of both individual responses and the whole proposal, and allows you to make more informed decisions on the competition, their solution, and your positioning to win.
Find out more
For more information on our wide range of expert consultancy and training, and how we can help you prepare the best possible bid submission, please get in touch.
You may also be interested in our recent blog: ‘Are your Win Themes really winning?’, in which Principal Consultant Sector Lead, Antony Mitchell, explores several key areas for improvement.