Article Information

* Title: Best value source selection: the Air Force approach, Part II
* Author: Alexander R. Slate
* Journal: Defense AT&L
* Year: 2004
* Issue: Nov-Dec 2004
* Publisher: Defense Acquisition University

Best value source selection: the Air Force approach, Part II

Alexander R. Slate

Part I of this article introduced the Air Force method for conducting best value source selections, a process that doesn't use quantitative numerical formulas but instead uses proposal strengths, inadequacies, and deficiencies to arrive at a color rating of red, yellow, green, or blue at the subfactor level of mission capability. Part I also discussed proposal risk. Part II briefly covers the significance of past performance and addresses the crux of the entire source selection: the integrated assessment and how cost plays into it.

Past Performance

I do not intend to explain the mechanics of how we conduct the past performance assessment. I will say, however, that it is based upon an assessment of the relevant and recent experience of the offerors and their subcontractors, and that the ratings used come from the Air Force Federal Acquisition Regulation Supplement (AFFARS), Part 5315, as follows (a brief data-structure sketch of this scale appears after the list):

* Exceptional/High Confidence -- Based on the offeror's performance record, essentially no doubt exists that the offeror will successfully perform the required effort.

* Very Good/Significant Confidence -- Based on the offeror's performance record, little doubt exists that the offeror will successfully perform the required effort.

* Satisfactory/Confidence -- Based on the offeror's performance record, some doubt exists that the offeror will successfully perform the required effort.

* Neutral/Unknown Confidence -- There is no performance record identifiable (see FAR 15.305(a)(2)(iii) and (iv)).

* Marginal/Little Confidence -- Based on the offeror's performance record, substantial doubt exists that the offeror will successfully perform the required effort. Changes to the offeror's existing processes may be necessary in order to achieve contract requirements.

* Unsatisfactory/No Confidence -- Based on the offeror's performance record, extreme doubt exists that the offeror will successfully perform the required effort.
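
For readers who find it easier to see the scale spelled out as a data structure, here is a minimal sketch in Python--my own illustration paraphrasing the AFFARS language above, not an official Air Force artifact; all the names are mine:

from enum import Enum

class PastPerformanceConfidence(Enum):
    """AFFARS Part 5315 past performance ratings, paraphrased from the list above."""
    EXCEPTIONAL_HIGH_CONFIDENCE = "essentially no doubt of successful performance"
    VERY_GOOD_SIGNIFICANT_CONFIDENCE = "little doubt of successful performance"
    SATISFACTORY_CONFIDENCE = "some doubt of successful performance"
    NEUTRAL_UNKNOWN_CONFIDENCE = "no identifiable performance record"
    MARGINAL_LITTLE_CONFIDENCE = "substantial doubt; process changes may be needed"
    UNSATISFACTORY_NO_CONFIDENCE = "extreme doubt of successful performance"

Note that the scale is ordinal but deliberately non-numeric: nothing here is meant to be summed or averaged.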

The Integrated Assessment

Once all the proposal evaluations are completed, the final ratings are documented and presented to the Source Selection Authority (SSA). One of the documented reports is the proposal analysis report, which records the results of the evaluation and provides a comparative analysis of the competing offerors. The SSA determines what combination of ratings provides the best value, based on what was approved in the source selection plan and what was said in section M of the request for proposal (RFP). Let us look at an example; for simplicity's sake, we will say that there was only one subfactor under the mission capability factor, giving us a single color rating for that factor. The factor ranking of importance is as follows: mission capability is co-equal with past performance, and cost/price is co-equal with risk. The example is shown in the chart near the end of this article.

Given that we do not use quantitative relationships between the factors, a case could be made for any of the four offerors winning this award, though it is not likely that we would award to offeror D. If the risk for D were low and its past performance exceptional, perhaps we would award to offeror D--but not as it is presented in the chart. Offerors A, B, and C, however, are all good candidates for award. The question the SSA needs to answer is this: Is the combination of the mission capability and past performance of offeror A or B enough to override the lower cost and low risk of offeror C?

Now let's change the factor ranking of importance so that mission capability and cost/price are co-equal, and past performance and risk are co-equal but of lesser importance. Keeping the same assessments, this change tends to raise the likelihood that offeror C--and perhaps even offeror D--would be the best value, but it lowers the likelihood of award to offeror B, especially as compared to A.

Of course, in real life things are not so simple, and we typically have color ratings for two to three subfactors under mission capability to integrate into our overall assessment. The practical result of this is sometimes a de facto rollup (as discussed under "Color Ratings Step 2" in Part I of this article, Defense AT & L, September-October 2004), even though it is understood that we do not really roll up to a factor rating.

Some may take issue with my example, pointing out that according to the AFFARS, ratings of yellow should really be used only as interim or initial ratings: "Through exchanges, the government evaluators should be able to obtain the necessary information from offerors with interim Yellow/Marginal ratings to determine if the proposal inadequacies have been satisfactorily addressed. Yellow/Marginal ratings should be rare by the time of the final evaluation" (Part 5315). To answer the critics: that means the assessments I used for offeror D should be different, and its mission capability rating should be either green or red in the final assessment; it does not mean, however, that a color rating can't be yellow.

The Better Choice?

Is non-quantitative source selection better than quantitative source selection? The answer (like the answers to so many other questions) is "it depends." Both systems have their applications. But for the majority of source selections I am aware of, particularly in new system or services acquisitions, I believe the non-quantitative system as the Air Force applies it is better. Why? Because the non-quantitative system provides the evaluation team and SSA with greater flexibility in assessing the various benefits and impacts of different approaches taken by offerors to the requirement. The narrative justifications of each strength, weakness, inadequacy, and/or deficiency provide clear detail and rationale for the decision, with the result that there's less second-guessing.

No two source selections are the same; the needs of the government and the particular circumstances of the acquisition need to be taken into account when selecting a contractor. In my experience, the Air Force system is more flexible in this regard. Using color rating scales to choose a more balanced proposal over an unbalanced one if it seems best, or an unbalanced one over a balanced one if the circumstances dictate, is a powerful tool and something that is extremely difficult to handle in quantitative source selections.

The blue rating is another advantage of the color system since blue ratings flow from strengths. A strength requires two things: that it offer some operational enhancement or other benefit to the government, and that the offeror be willing to incorporate that level of performance in the contract. So a statement from an offeror to the effect that "it might be possible to enhance the performance of X under certain conditions" can't warrant a blue rating because "it might" indicates that the offeror isn't willing to make the performance level contractually binding.

What about protests? There may be a protest, but as long as teams (1) follow the source selection plan in evaluating subfactors exactly as they said they would in sections L and M of the RFP, (2) apply their ratings consistently from offeror to offeror, and (3) document their determination adequately, the protest will not generally be upheld, and the SSA's decision will stand.

For these varied reasons, it is actually easier to defend a decision based upon a color rating determination than one based upon a numerical analysis--even if intuition tells you otherwise. The perception may be that color ratings seem fuzzy (though they aren't), and so engineers and scientists tend to distrust them. But as someone who has been both scientist (principal investigator in an Air Force lab) and engineer (project engineer for the ALCOA Corporation and test manager for the Air Force), my experience is that once initial skepticism is overcome, this source selection method can be a powerful tool.

The Integrated Assessment

Offeror  Mission Capability  Past Performance                  Cost       Risk

A        Blue                Satisfactory/Confidence           High       High
B        Blue                Very Good/Significant Confidence  Very High  Medium
C        Green               Very Good/Significant Confidence  Medium     Low
D        Yellow              Satisfactory/Confidence           Low        Medium
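
To make the structure of the chart concrete, the sketch below records it as plain data. The types and names are my own illustration in Python, not an Air Force tool; note that there is deliberately no scoring function, because the SSA weighs these ratings against the factor ranking in section M rather than computing a total:

from dataclasses import dataclass

@dataclass
class OfferorAssessment:
    """One row of the integrated-assessment chart above."""
    offeror: str
    mission_capability: str  # color rating: blue, green, yellow, or red
    past_performance: str    # AFFARS confidence rating
    cost: str                # relative cost/price standing, not a dollar score
    risk: str                # proposal risk: low, medium, or high

chart = [
    OfferorAssessment("A", "Blue", "Satisfactory/Confidence", "High", "High"),
    OfferorAssessment("B", "Blue", "Very Good/Significant Confidence", "Very High", "Medium"),
    OfferorAssessment("C", "Green", "Very Good/Significant Confidence", "Medium", "Low"),
    OfferorAssessment("D", "Yellow", "Satisfactory/Confidence", "Low", "Medium"),
]

# No weighted sum or rollup is computed here: the comparative judgment
# ("is A's or B's mission capability worth C's lower cost and risk?")
# belongs to the SSA, which is the point of the non-quantitative approach.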

Editor's note: The author welcomes questions and comments and can be contacted at [email protected].

Slate is a facilitator at the Brooks City-Base Acquisition Center of Excellence. He has been a program manager, test manager, and laboratory principal investigator during his civil service career.

COPYRIGHT 2004 Defense Acquisition University Press
