How DoD redefined the source selection process

In a continuing effort to improve the Defense Department’s source selection process, Claire Grady, director of Defense Procurement and Acquisition Policy in the Office of the Under Secretary of Defense for Acquisition, Technology and Logistics, issued new source selection procedures (SSP) on March 31 that rescind the previous policies issued five years ago.

The new procedures will have a significant impact on proposal evaluations in DoD, specifically on how evaluation teams handle best value tradeoffs and lowest price technically acceptable (LPTA) procurements. I’ve highlighted some of the more important changes in this article; some of these procedures are new, and some are re-emphasized from the previous procedures. If you would like to read the full 40-page memorandum and its three appendices, it is available on our website.

Redefining the best value continuum

DoD added a new source selection approach to the best value continuum for use as a standalone evaluation approach or in combination with the previously defined subjective tradeoff and LPTA approaches. The new approach is called Value Adjusted Total Evaluated Price (VATEP) tradeoff, and it allows the source selection authority (SSA) to include monetized adjustments to an offeror’s evaluated price based on specific enhanced characteristics proposed in the offeror’s solution.

In traditional best value subjective tradeoff evaluations, bidders may exceed minimum contract requirements, but no guidance is provided about how much the government is willing to pay for performance above those minimums. In subjective tradeoff procurements, the evaluation team must carefully document each instance where a bidder offers to exceed a minimum contract requirement, and the SSA has to subjectively weigh the benefits of each such feature and trade them off against price.

In the new VATEP approach, the government will clearly identify minimum (threshold) and maximum (objective) performance requirements in the RFP and identify how much it is willing to pay in terms of price increase (either percentage or dollars) for measurable performance above the threshold.

This approach quantifiably links value and cost in such a way that a bidder can make an informed decision whether it should propose to meet or exceed threshold levels.

For example, if speed is a performance requirement, the government will clearly state what it is willing to pay for increased speed above the threshold level up to the objective level. If it costs 10% more for the offeror to increase speed from the threshold to the objective performance level, and the government is willing to pay 20% more to achieve this higher performance, then proposing the higher performance level would be a good decision. On the other hand, if the government is only willing to pay 5% more, and the offeror would have to raise its price by 10% to achieve this higher performance level, then the offeror would be better served to just propose performance at the lower threshold level.
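To make the arithmetic concrete, here is a minimal sketch of the bidder’s decision in Python. All figures and variable names are hypothetical; the procedures prescribe no formula, only that the RFP state the credit as a percentage or dollar amount.

```python
# Minimal sketch of the bidder's VATEP decision arithmetic.
# All figures are hypothetical; the RFP would state the actual credit.

base_price = 10_000_000    # bidder's price at threshold performance ($)
cost_to_exceed = 0.10      # bidder's cost increase to reach the objective (10%)
stated_credit = 0.20       # RFP's evaluation credit for objective performance (20%)

proposed_price = base_price * (1 + cost_to_exceed)             # what the bidder charges
evaluated_price = proposed_price - base_price * stated_credit  # price after VATEP credit

if evaluated_price < base_price:
    print(f"Propose the objective level: evaluated price ${evaluated_price:,.0f} "
          f"beats a threshold-only bid of ${base_price:,.0f}")
else:
    print("Propose the threshold level: the credit does not cover the added cost")
```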

In VATEP procurements, every offeror is expected to meet all threshold performance levels; offerors receive monetized evaluation credits for performance above those thresholds. For each such enhancement, the SSA reduces the offeror’s evaluated price (for evaluation purposes only) by the credit the RFP assigns to it.

The government may also assign an affordability cap that sets an upper limit on how much it will pay in total for all performance enhancements. Exceeding the affordability cap would make the offeror ineligible for award.

Any enhancements proposed above the threshold will be incorporated into the awardee’s contract.
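Pulling these mechanics together, the sketch below shows how an evaluation team might compute value-adjusted prices across several offers. The credit table, the cap amount, and the assumption that the cap is tested against the offeror’s actual proposed price are all illustrative; the SSP prescribes no such data structures or formulas.

```python
# Illustrative government-side VATEP evaluation. The credit amounts,
# cap, and cap interpretation are hypothetical assumptions.

RFP_CREDITS = {"higher_speed": 1_200_000, "extended_range": 500_000}  # $ per enhancement
AFFORDABILITY_CAP = 12_000_000  # assumed limit on the actual proposed price

def value_adjusted_price(proposed_price: float, enhancements: list[str]) -> float:
    """Reduce the proposed price, for evaluation purposes only, by the
    RFP-assigned credit for each above-threshold enhancement proposed."""
    return proposed_price - sum(RFP_CREDITS[e] for e in enhancements)

offers = [
    ("Offeror A", 10_500_000, []),                # threshold-only bid
    ("Offeror B", 11_000_000, ["higher_speed"]),  # exceeds one threshold
    ("Offeror C", 12_500_000, ["higher_speed", "extended_range"]),
]

for name, price, enhancements in offers:
    if price > AFFORDABILITY_CAP:
        print(f"{name}: ineligible, exceeds the affordability cap")
    else:
        print(f"{name}: evaluated price ${value_adjusted_price(price, enhancements):,.0f}")
```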

Standardizing rating methodology and terminology

For all negotiated procurements (FAR Part 15), major system acquisitions (FAR 2.101), and task orders greater than $10 million on multiple-award contracts, the new SSP standardizes evaluation terminology using five color ratings with adjectival equivalents. These are:

  1. Blue (Outstanding) = a proposal with an exceptional approach and understanding of requirements that contains multiple strengths.
  2. Purple (Good) = a proposal with a thorough approach and understanding of requirements that contains at least one strength.
  3. Green (Acceptable) = a proposal with an adequate approach and understanding of requirements that contains no strengths.
  4. Yellow (Marginal) = a proposal that does not demonstrate an adequate approach and understanding of requirements.
  5. Red (Unacceptable) = a proposal that does not meet the requirements of the solicitation, contains one or more deficiencies, and is thus unawardable.

The above definitions apply when the government decides to consider performance risk as a separate evaluation factor. If performance risk is combined with the technical evaluation, the five color scores remain the same, but the definitions are slightly modified to include performance risk.
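Read literally, these definitions form a simple decision ladder, sketched below. The function mirrors only the strength and deficiency counts in the definitions above; real evaluations judge approach and understanding qualitatively, so treat this purely as an illustration, not the rating method itself.

```python
def color_rating(strengths: int, deficiencies: int, adequate_approach: bool) -> str:
    """Illustrative mapping of the rating definitions above.
    Actual evaluations are qualitative, not a count of findings."""
    if deficiencies > 0:
        return "Red (Unacceptable)"
    if not adequate_approach:
        return "Yellow (Marginal)"
    if strengths >= 2:
        return "Blue (Outstanding)"
    if strengths == 1:
        return "Purple (Good)"
    return "Green (Acceptable)"

print(color_rating(strengths=3, deficiencies=0, adequate_approach=True))  # Blue (Outstanding)
print(color_rating(strengths=0, deficiencies=0, adequate_approach=True))  # Green (Acceptable)
```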

Clearly, to score well in highly competitive bids a proposal will need to have multiple strengths associated with each evaluation factor.

Neutral past performance rating may not be neutral

Past performance evaluations consider each offeror’s demonstrated recent and relevant record of performance in supplying products and services that meet contract requirements.

Relevancy is unique to each solicitation and may include, among other considerations, similarity of product/service/support, complexity, dollar value, contract type, use of key personnel (for services bids), and extent of subcontracting/teaming. Ratings are generally adjectival and are typically scored as Very Relevant, Relevant, Somewhat Relevant, or Not Relevant. For example, very relevant performance would include present or recent past performance of an effort that involved essentially the same scope, magnitude, and complexity as the solicitation.

Quality of product or service is not required as a separate rating; however, a separate confidence assessment is required based on the overall record of recency, relevancy, and quality of performance.

Confidence ratings have five adjectival levels—Substantial, Satisfactory, Neutral, Limited, or No Confidence.

A neutral confidence rating occurs when there is no recent or relevant performance record available, or the record is so sparse that no meaningful confidence rating can be assessed. When an offeror receives a neutral rating, its past performance may not be evaluated favorably or unfavorably; however, the SSA may determine that another offeror with a substantial or satisfactory confidence rating offers better value than one with a neutral rating in a best value tradeoff, as long as that determination is consistent with the stated evaluation criteria.

In LPTA procurements, an offeror with a neutral rating is given a passing score, so offerors are not penalized for lack of past performance.

LPTA procurement requirements defined

The new SSPs clearly state when an LPTA procurement is appropriate, emphasizing that this approach fits when the products or services being acquired have:

  1. Well-defined requirements;
  2. Minimal risk of unsuccessful contract performance;
  3. Price playing a dominant role in the source selection; and
  4. No value, need, or interest in paying for higher performance.

Well-defined requirements are technical requirements with acceptability standards that can be articulated by the government and clearly understood by industry.

Appendix C to the new SSPs cites acquisition of commercial items or non-complex services or supplies as acquisitions that are appropriate for LPTA evaluations. This guidance is consistent with DoD’s Better Buying Power initiatives.

Small business participation

The government will evaluate the extent of small business participation proposed. Small business participation may be a standalone evaluation factor or a subfactor under the technical evaluation.

The requirement for small business participation must be clearly stated in the RFP as percentage goals for small business participation with the applicable breakdown of goals for various categories of small business concerns.

Proposed small business participation will be rated as either acceptable or unacceptable or scored using the same five color scores used for evaluating the technical proposal. When color scores are used, a Blue (Outstanding) rating is defined as “a proposal with an exceptional approach and understanding of the small business objective.”

The procedures do not say that, in order to earn a Blue rating, the offeror must propose to exceed small business participation goals.

Mandatory use of discussions

Discussions are now mandatory for all procurements with an estimated value of $100 million or greater.

The procedures acknowledge that awards without discussions on large, complex procurements are seldom in the government’s best interest and are therefore discouraged.

Discussions, at a minimum, must include:

  1. Any adverse past performance information to which the offeror has not had an opportunity to respond; and
  2. Any deficiencies or significant weaknesses that have been identified during the evaluation.

The Procuring Contracting Officer (PCO) is encouraged to discuss other aspects of the proposal that could enhance the offeror’s potential for award, such as evaluation weaknesses, excesses, and price, but is not required to discuss every area where the proposal could be improved.

There is no requirement to discuss all weaknesses in an offeror’s proposal, even when undiscussed weaknesses may prove determinative in the award decision.

Selecting the Source Selection Authority

The new procedures continue the practice of requiring the agency head to designate, in writing, someone other than the PCO as the SSA for procurements with values greater than $100 million (including options and planned orders). For these larger procurements, the SSA must establish a Source Selection Advisory Council (SSAC) to provide functional expertise.

When established, the SSAC’s primary role is to provide a written comparative analysis of the offerors and provide an award recommendation to the SSA. In the absence of an SSAC, the Source Selection Evaluation Board (SSEB) does not prepare a comparative analysis or recommendation for award since this task is the responsibility of the SSA.

The Source Selection Decision Document (SSDD) provides the rationale for award, and a redacted version can be provided at the debriefing.

Designating an SSA other than the PCO and using an SSAC on larger procurements moves the selection decision solidly toward the organization that needs the products, systems, or services being procured, and away from individuals on the procurement side of the organization who may be more inclined to choose price over performance.

Final thoughts

The new source selection procedures, just like the previous ones, provide excellent guidance to improve DoD evaluation practices. I believe these procedures will serve DoD and industry well in the coming years and will help industry write better, more competitive proposals.

This article was originally published May 2, 2016, on WashingtonTechnology.com.

Download a copy of Bob’s latest article (PDF).

Email your comments to me at RLohfeld@LohfeldConsulting.com.
