USAID enlists artificial intelligence to fix contractor evaluations
Contracting experts and watchdogs say the U.S. government’s system for assessing contractor performance has flaws that result in arbitrary scores, limiting its usefulness.
By Michael Igoe // 30 April 2021

The U.S. government’s system for assessing contractor performance is in the midst of a badly needed rethink. The challenges apply across a broad range of federal agencies, but some experts say the effort to address them could have specific implications for companies that work with the U.S. Agency for International Development.

The Contractor Performance Assessment Reporting System is supposed to give government agencies a way to evaluate companies on how well they implement projects, and then to use those evaluations to inform future award decisions. Contracting experts and government watchdogs say the system has flaws that result in arbitrary scores, as well as a tendency to rate a growing number of contractors as “satisfactory,” which limits the usefulness of a comparative rating system.

“The value sort of has been diminishing, particularly in the last few years,” said Stephanie Kostro, executive vice president at the Professional Services Council, a trade group for government contractors.

The government’s process for assessing contractor performance has faced criticism for more than a decade. In 2009, the Government Accountability Office found that officials were reluctant to rely on past performance data in making award decisions due to their “skepticism about the reliability of the information and difficulty assessing relevance to specific acquisitions.”

That same year, former U.S. President Barack Obama’s administration issued a memorandum aimed at strengthening the use of contractor performance information, but government officials and contractors still question the value of the current approach, Kostro said.

As recently as 2019, USAID’s Office of the Inspector General described ratings that seemed to show little correlation with whether a contractor actually delivered its project objectives.
“None of the sampled acquisition awards received a rating less than satisfactory (the average rating was 4, or ‘very good’) — regardless of their performance,” OIG found. “The two lowest performing awards from our analysis (achieving 27 percent and 10 percent of expected results) were given an average rating of 5 and 4.5, respectively,” it noted.

Contractors also are not happy with the system. Devex obtained anonymous feedback about CPARS from a company that works with USAID, which detailed problems with the timeliness and consistency of the assessments.

“CPARs are meant to be a management tool for USAID to provide formal feedback annually on contracts, but in our experience they are routinely missed,” the company wrote. The same response pointed to “deep inconsistencies between the language of the evaluation and the scores actually given” and “widely inconsistent scoring and grading.”

“It’s probably an effect of [contracting officer] efforts to combat grade inflation, but it makes the process seem capricious when the words offer higher praise and we only eke out a ‘satisfactory,’” the company wrote.

Since 2014, the number of “satisfactory” ratings has increased, while the number of “very good” and “exceptional” ratings has decreased, according to data published by the consulting group GovConRx. That raises the question of whether government contractors are performing worse or whether something systemic in how their performance is assessed is at work, Kostro said, adding that analysis of the trends points to a systemic explanation.
The incentives for government officials responsible for filing the assessments are such that anything other than a “satisfactory” rating requires significantly more justification and opens the agency to potential challenges, Kostro said.

“You have to show what you used as evaluation criteria for the rating, and for every statement that you say they did better than average, you have to have justification for it,” she said. “There has been an effort from [the Office of Federal Procurement Policy] to reemphasize past performance, but as we've seen from the data, the tendency to give everyone a middle of the road rating has really stymied that,” Kostro said.

Earlier this month, Federal News Network reported that USAID is among a group of federal agencies participating in a new pilot project aimed at improving the contractor assessment system. The pilot, led by the Department of Homeland Security, will bring artificial intelligence to bear on contractor performance evaluations in hopes of making them more useful for award decisions.

“If successful, this tool will help reduce time in making new awards with the best organizations,” acting USAID Spokesperson Pooja Jhunjhunwala wrote to Devex. USAID declined a request for an interview.

Kostro said that for USAID, the new tools could allow contracting officers to filter potential partners according to more specific past performance criteria. That could prove particularly valuable for an agency that awards contracts of highly diverse amounts and for a wide variety of activities.

“It may benefit companies to be very bespoke in how they approach requirements. It may also encourage partnering in ways that we haven't seen before,” she said.

A separate initiative from the General Services Administration is looking to build more contractor self-assessments into the evaluation process, so that implementers can document their performance over time to better inform the final rating they receive.
Kostro said the key is better upfront communication to determine what it would mean for a contractor to meet “very good” or “exceptional” performance requirements — “and then at the end of the day, the justification is already there when you have to enter the final rating.”

If performance assessments do become more rigorously tied to whether or not a contractor is meeting project requirements, there is also the potential for more ratings below “satisfactory,” Kostro said.

Update, April 30, 2021: This article has updated its description of the Professional Services Council.
Michael Igoe is a Senior Reporter with Devex, based in Washington, D.C. He covers U.S. foreign aid, global health, climate change, and development finance. Prior to joining Devex, Michael researched water management and climate change adaptation in post-Soviet Central Asia, where he also wrote for EurasiaNet. Michael earned his bachelor's degree from Bowdoin College, where he majored in Russian, and his master’s degree from the University of Montana, where he studied international conservation and development.