Artificial intelligence has changed how organizations think about IT modernization. At the same time, AI itself is rapidly changing.
For large language models, “this is the year we’re going to see really huge leaps in efficiency, less hallucinations and more accuracy,” said Stuart Brown, partner for digital business at Guidehouse, during Federal News Network’s AI and Data Exchange.
Brown noted that 2025 will bring the advent of what he called multithreaded thinking.
Multithreaded thinking differs from standard queries such as SQL statements, or even conventional prompts to an LLM that has been trained to look in a specific place for the answer.
“Now, we need to think about a reasoning model,” Brown said, “one that’s looking across domains and actually knowing where to go look for the answer — as opposed to being told where the answer is.”
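To make the distinction concrete, the sketch below contrasts the two patterns. It is illustrative only; the data sources, keyword matching and canned answers are hypothetical stand-ins, not any particular agency system or Guidehouse implementation.

    # Illustrative only: sources, keywords and answers are hypothetical stand-ins.

    # Conventional style: the answer's location is fixed in advance (one known table).
    CONVENTIONAL_QUERY = "SELECT COUNT(*) FROM employees WHERE active = 1"

    # Reasoning style: the question arrives without a location. The "model" is
    # stubbed here as a keyword match, but the shape is the same: it decides
    # which sources to consult, or asks for clarification instead of guessing.
    SOURCES = {
        "hr_records":  {"keywords": ["employee", "headcount", "staff"],
                        "answer": "4,210 active employees as of Q1"},
        "finance":     {"keywords": ["budget", "spend", "cost"],
                        "answer": "$12.4M IT modernization budget"},
        "policy_wiki": {"keywords": ["policy", "security", "access"],
                        "answer": "Data access requires role-based approval"},
    }

    def choose_sources(question):
        """Stand-in for the model's 'where should I look?' step."""
        q = question.lower()
        return [name for name, src in SOURCES.items()
                if any(k in q for k in src["keywords"])]

    def answer(question):
        picked = choose_sources(question)
        if not picked:
            # Rather than guessing, the model asks a clarifying question.
            return "Clarification needed: which domain does this question concern?"
        return {name: SOURCES[name]["answer"] for name in picked}

    print(answer("What is our current employee headcount?"))
    print(answer("How large is the IT budget, and which security policy applies?"))

The shift is in who carries the knowledge of where the answer lives: in the first pattern the human encodes it, while in the second the model is handed a question and a catalog and works that out itself.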
Evolving the infrastructure to advance AI results
Enabling an AI model to engage in multithreaded thinking requires exposing it to a wide range of data. That in turn requires a system architecture to ensure more data is available to the model.
“A lot of the data that resides in organizations today is not utilized unless someone knows where it is,” Brown said. “If we build our architecture to where that data is exposed, this idea of continuing down a path to get to an answer and the model asking itself questions as it goes is exactly the kind of thing we want to get to.” That makes the model perform more closely to how the human brain reasons, he said.
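In practice, that "continuing down a path" behavior looks like a loop: the model retrieves from exposed data, judges whether it has enough to answer, and poses its own follow-up question if not. The sketch below is a rough illustration under that assumption; the exposed records, the "do I need more?" check and the follow-up logic are simple stubs standing in for what a real reasoning model would do.

    # Illustrative only: the exposed records and the model's judgment calls are stubbed.

    # Data the architecture makes discoverable to the model.
    EXPOSED_DATA = {
        "contracts":  "Vendor contracts expire in FY26; renewal requires CIO sign-off.",
        "inventory":  "1,840 servers on premises; 62% are past end of life.",
        "cloud_cost": "Current cloud spend is $3.1M per year, growing 18% annually.",
    }

    def retrieve(query):
        """Stand-in retrieval: return any exposed record mentioning a query term."""
        terms = [t.strip("?,.") for t in query.lower().split() if len(t.strip("?,.")) > 3]
        return [text for text in EXPOSED_DATA.values()
                if any(t in text.lower() for t in terms)]

    def needs_more(evidence):
        """Stand-in for the model judging whether it can answer yet."""
        return len(evidence) < 2

    def follow_up(evidence):
        """Stand-in for the model posing its own next question."""
        return "cloud spend" if "spend" not in " ".join(evidence).lower() else "contracts"

    def reason(question, max_steps=3):
        evidence = retrieve(question)
        for _ in range(max_steps):
            if not needs_more(evidence):
                break
            evidence += retrieve(follow_up(evidence))
        return evidence

    print(reason("How many servers are past end of life, and what would migration cost?"))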
Like many companies, Guidehouse has looked into new developments such as DeepSeek, OpenAI Deep Research and Grok. “It’s just unbelievable how these models can do really deep reasoning and ask clarifying questions,” Brown said, predicting that LLM purveyors in the United States will spend 2025 developing similar capabilities.
“You can imagine what that’ll mean for customer service, what that will mean for all kinds of things as we move forward,” he said.
Reenvisioning AI architectures
In tailoring their systems for wider data exposure, IT staffs will need to think through their architectures carefully, Brown said.
“This is the tricky part,” he said.
A responsible AI architecture will “understand things like hallucinations and security and user permissions and bias and all those things that humans apply to their thinking. We need the models to apply that to our thinking as well,” Brown said.
A responsible architecture will also bring predictability. “We need to have predictability, from an organizational perspective, that the answer the model is giving is one that we trust, or it’s within the range of trust where the human or the user, the consumer, can make the decision,” he explained.
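One way to picture that range of trust is a gate between the model and the user that checks an answer's cited sources, the requester's permissions and the model's confidence before deciding whether to accept it, route it to a person or reject it. The sketch below is hypothetical; the thresholds, field names and approved-source list are assumptions for illustration, not any specific product's controls.

    # Illustrative only: thresholds, field names and source lists are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ModelAnswer:
        text: str
        cited_sources: list   # sources the model claims it drew on
        confidence: float     # model-reported confidence, 0.0 to 1.0

    APPROVED_SOURCES = {"hr_records", "finance", "policy_wiki"}
    AUTO_ACCEPT = 0.90        # above this, the answer is trusted outright
    HUMAN_REVIEW = 0.60       # between the thresholds, a person makes the call

    def disposition(answer: ModelAnswer, user_clearance: set) -> str:
        # Permission check: the user must be cleared for every source the answer used.
        if not set(answer.cited_sources) <= user_clearance:
            return "blocked: user lacks access to one or more cited sources"
        # Grounding check: uncited or unapproved sources are treated as possible hallucination.
        if not answer.cited_sources or not set(answer.cited_sources) <= APPROVED_SOURCES:
            return "rejected: answer is not grounded in approved sources"
        # Predictability check: route by confidence into accept, review or reject bands.
        if answer.confidence >= AUTO_ACCEPT:
            return "accepted automatically"
        if answer.confidence >= HUMAN_REVIEW:
            return "routed to human reviewer"
        return "rejected: below the trust range"

    print(disposition(ModelAnswer("Headcount is 4,210", ["hr_records"], 0.75),
                      user_clearance={"hr_records", "finance"}))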
The fundamental difference in architecture lies in the fact that “in the past, infrastructure and technology were about systems, and the data was an output,” Brown said. In the era of LLMs, “the data is the core of our organization. It’s the core of how we serve. That’s flipping the script.”
That flip requires the ability to ensure an organization’s data is accurate and reliable as well as properly secured and tagged, Brown said.
“We’ve got to be really careful as we start exposing our datasets to AI intelligence,” he said.
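Tagging and permissioning datasets is one concrete form that caution can take: each catalog entry carries a classification, an owner and the roles allowed to see it, and the model's own service account is treated as just another role. The following sketch assumes that design; the catalog entries, roles and rules are hypothetical.

    # Illustrative only: catalog entries, roles and rules are hypothetical.
    DATA_CATALOG = [
        {"name": "hr_records",  "classification": "restricted", "owner": "CHCO",
         "tags": ["pii", "personnel"], "allowed_roles": ["hr_analyst"]},
        {"name": "policy_wiki", "classification": "internal",   "owner": "CISO",
         "tags": ["policy"], "allowed_roles": ["hr_analyst", "it_ops", "model_service"]},
        {"name": "press_kits",  "classification": "public",     "owner": "OPA",
         "tags": ["communications"], "allowed_roles": ["*"]},
    ]

    def exposable(role: str):
        """Return only the datasets a given role (including the model's service
        account) may see; data tagged 'pii' is never exposed to the model."""
        return [d["name"] for d in DATA_CATALOG
                if ("*" in d["allowed_roles"] or role in d["allowed_roles"])
                and not (role == "model_service" and "pii" in d["tags"])]

    print(exposable("model_service"))   # ['policy_wiki', 'press_kits']
    print(exposable("hr_analyst"))      # ['hr_records', 'policy_wiki', 'press_kits']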
All IT organizations will need to rethink their best practices, policies and processes, Brown advised. IT will need to review “our security procedures, our responsibility matrices to make sure that we’re not violating either core competencies of our organizations or regulatory competencies as well, as we move forward,” he said.
Making human-AI interactions more natural
Brown also predicted that multithreaded thinking will let users interact with AI models in natural language to a greater degree than has been possible, lessening the need for elaborate prompt engineering training.
“The reality is the way that we’re going to interact with models is going to be much more natural,” he said. On the human productivity front, the models in turn will become more active in offering decision options.
“We’re going to get emails and pings, and Teams or Zoom invites and other things,” Brown said, “and the model is going to automatically think as it comes in. We’re going to get information intelligence. It’s going to offer us options.”