The Government Accountability Office is adopting artificial intelligence and other emerging technology tools to increase its oversight capabilities for the audits and evaluations it conducts of federal programs.
AI tools are helping the agency keep up with the growing volume of audit work requested by Congress, said GAO’s Chief Data Officer Lindsey Saul during Federal News Network’s AI and Data Exchange.
“There are opportunities to make these searches more efficient and allow our staff to be more productive. We’re really focusing on those legislative mandates, seeing where we hear GAO’s name or work cited by any of the congressmen or congresswomen. That helps expedite our work with audits,” Saul said.
Role for generative AI at GAO
GAO is also using generative AI to help its auditors navigate the Federal Audit Clearinghouse, which consolidates audit data from across the government. Saul said GAO auditors are using genAI tools to help find the information they need for their own audits.
“There are hundreds of thousands of lines of data in this database that we have been able to create an application for, but it is just an enormous amount of data,” Saul said. “What our data scientists have been able to do is compile this data and use generative AI to really sort that data and summarize that data in a way that makes sense to the analyst and auditor community.”
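As an illustration of that pattern, a minimal sketch is shown below. It is not GAO’s actual application: it assumes a pandas DataFrame of clearinghouse-style records with an illustrative “agency” column and a generic OpenAI-compatible chat API, and it simply asks the model to summarize the rows matching an auditor’s filter.

    # Hypothetical sketch: summarizing a slice of a large audit dataset with an LLM.
    # Assumes an OpenAI-compatible API; column names are illustrative, not the
    # actual Federal Audit Clearinghouse schema.
    import pandas as pd
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_audits(df: pd.DataFrame, agency: str, max_rows: int = 200) -> str:
        # Filter to the records the auditor cares about, then cap the volume
        # so the prompt stays within the model's context window.
        subset = df[df["agency"] == agency].head(max_rows)
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Summarize these audit records for an analyst: "
                            "key findings, dollar amounts and repeat issues."},
                {"role": "user", "content": subset.to_csv(index=False)},
            ],
        )
        return response.choices[0].message.content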
The agency is developing these and other AI applications through its Innovation Lab, which brings together data scientists, analysts and economists to support GAO’s mission work.
“Congress took an interest in AI and science and technology assessments in the last five years or so, and we’ve really grown that capability in house,” Saul said.
While AI presents opportunities for greater efficiency, its effectiveness hinges on data quality.
“Data drives and is the foundation for generative AI. … The output is only as good as the inputs,” she said. “And so, what I’m doing as chief data officer at GAO is really utilizing the massive amounts of data that we have.”
In addition to its own operational data, GAO relies on other agencies’ data to conduct its oversight work.
“The data that we have on the mission side — the work that is brought in from the federal agencies that we are doing audits on — we are not the data owners of, per se. The agencies are. They are the subject-matter experts,” she pointed out.
“We do have highly talented individuals at GAO who are as knowledgeable about that data, through years of experience and work with these agencies. … We often refer to ourselves as the data stewards of that data, and we must keep that data at the same security level, have the same protections in order to make sure that we are handling that data with the same care that the executive branch agencies do.”
As for its own operational data, GAO is looking for opportunities to eliminate data silos.
“On the operation side, there are opportunities for cross pollination, for merging datasets,” Saul said. “By merging, I mean finding common data elements or areas where different mission teams might be utilizing similar data.”
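A toy example of that kind of merge, using hypothetical column names rather than GAO’s real datasets, might look like the following: two teams’ tables joined on a shared data element such as an agency identifier.

    # Illustrative only: joining two teams' datasets on a common data element.
    import pandas as pd

    grants = pd.DataFrame({
        "agency_code": ["012", "014", "020"],
        "grant_findings": [3, 0, 5],
    })
    contracts = pd.DataFrame({
        "agency_code": ["012", "020", "036"],
        "contract_findings": [1, 2, 4],
    })

    # An outer merge keeps every agency either team has touched and makes
    # both the overlap and the gaps explicit.
    combined = grants.merge(contracts, on="agency_code", how="outer")
    print(combined)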
Democratizing data at GAO
GAO aims to “democratize” data, ensuring it is leveraged as a strategic asset while maintaining appropriate protections, she said.
“When and if [data] can be reused in other audits over the period of time that data is retained or that we have access to it, that is really what we’re looking for — but within the confines of the law and the contracts and expectations that we have with our partnering agencies,” Saul said.
GAO has created governance councils and governance boards with various stakeholders from different groups to make these determinations.
“The idea, in democratizing data where possible, is to come up with standards, rules and processes that best serve and are beneficial for the entirety of the enterprise,” she said. “It requires the hard work and the patience and enduring nature of people coming to these meetings, putting their enterprise hat on for a minute, getting out of their silos, listening to the issues that come up and then making decisions.”
According to Saul, democratization does not mean unrestricted access but rather identifying shared standards and data quality requirements that benefit multiple teams. This approach balances transparency with security, ensuring sensitive audit data remains protected while maximizing efficiency.
This work aligns with the Foundations for Evidence-Based Policymaking Act, which focuses on ensuring federal data is open and accessible by default.
“There’s value to be gained because data is a strategic asset. There’s value to be gained at the corporate level if we can figure out where those commonalities are,” she said. “There’s going to be a reason for certain silos. And so, we’re not taking data from a group and publicizing it and making it available to others willy-nilly.”
Challenges in AI adoption across federal agencies
Beyond GAO’s internal initiatives, the agency also evaluates agencies’ AI adoption across the federal government.
“The greatest issue we found is really with deployment. Putting some of these models and tools into production. Being in a development environment is one thing, but when you go to production, there’s all sorts of security firewall issues that need to be taken into account,” Saul said.
GAO has also taken a closer look at agencies’ AI use case inventories. In some instances, agencies have conflated other forms of automation, such as robotic process automation, with AI use cases, she said.
“With AI, there is a component of learning involved. With robotic process automation, it’s a computer or a machine really following scripts and code and working those out, whereas the AI actually learns, whether it’s unsupervised learning or supervised learning,” Saul said. She added: “With RPA, oftentimes you don’t need a human other than to just monitor and make sure that there’s no quirks in its automation. But if it’s running properly, it should be able to do some of those manual tasks and make things more efficient for government workers at large. With AI — and because there’s learning involved, and it is not a human — it’s really important to keep a human in the loop through various stages of the AI development lifecycle.”
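A toy contrast, purely illustrative and not drawn from GAO’s work, makes the point: an RPA-style script replays a fixed rule exactly as written, while a supervised model learns its rule from labeled examples, which is why its outputs warrant a human in the loop.

    # Toy contrast between scripted automation (RPA) and supervised learning.
    # Entirely illustrative; the features, labels and threshold are made up.
    from sklearn.linear_model import LogisticRegression

    def rpa_style_check(invoice_amount: float) -> bool:
        # RPA: a fixed rule with no learning; it does exactly what the script says.
        return invoice_amount > 10_000

    # Supervised learning: the model infers its rule from labeled history,
    # so its outputs should remain subject to human review.
    X_train = [[500], [2_000], [12_000], [50_000]]  # invoice amounts
    y_train = [0, 0, 1, 1]                          # 1 = previously flagged by a reviewer
    model = LogisticRegression().fit(X_train, y_train)

    flag_probability = model.predict_proba([[15_000]])[0][1]
    needs_human_review = flag_probability > 0.5     # keep a human in the loop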
To address these challenges, GAO developed its AI Accountability Framework, which helps agencies evaluate their AI implementations. This framework complements other federal guidelines, such as those from the National Institute of Standards and Technology.
“GAO’s best practices are best practices that are also gathered across industry and across the government. Those are to really make sure that we understand the data that’s being plugged into these AI models, to take the outputs with a grain of salt,” Saul said.
“Validation of the sources is really key. Sometimes these generative AI large language models can hallucinate and make up information, and so there’s always those risks. Think about the data sources that are feeding these models because it’s really important to detect bias in all of this.”
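One lightweight way to act on that advice, sketched here under the assumption that a model’s summary cites figures that should trace back to a known source table, is to flag any number in the output that does not appear in the underlying data before an auditor relies on it. The function and data below are hypothetical.

    # Hypothetical grounding check: flag numbers in an LLM summary that do not
    # appear in the source records. Not a substitute for analyst review.
    import re

    def unsupported_figures(summary: str, source_values: set[str]) -> list[str]:
        cited = re.findall(r"\$?\d[\d,]*(?:\.\d+)?", summary)
        return [c for c in cited if c.strip("$").replace(",", "") not in source_values]

    source_values = {"125000", "3"}
    summary = "The program reported $125,000 in questioned costs across 4 findings."
    print(unsupported_figures(summary, source_values))  # ['4'] -> needs verification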