While the Trump administration formulates new policy for artificial intelligence, Congress has created its own agenda. As we reported earlier, the last session’s House Task Force on AI produced a long list of policy and legislative ideas. Many of them have to do with health and health care. Attorneys at Brownstein have distilled them. Senior policy counsel Deema Tarazi and senior policy advisor Adam Steinmetz joined the Federal Drive with Tom Temin to discuss more.
Tom Temin So you looked at this through the lens of health care. And presuming Congress ever acts on any of this, what does it generally say? It was a comprehensive report; they spent a lot of time and did a lot of interviewing on it. What does it say to Health and Human Services fundamentally, do you think?
Adam Steinmetz So it’s pretty broad in what it’s saying. It addresses areas where there are a lot of problems, and areas where a lot of problems could arise. It identifies a number of areas in which AI can be helpful, such as drug development with FDA, getting through those processes quicker, and hospitals streamlining information and cutting down on administrative burden. But it also identifies a number of areas that we should watch out for: biases in health care, AI making too many medical decisions, liability, insurance. A number of areas that Congress could address if it should choose to.
Tom Temin Right. And let’s talk about the FDA for a moment. It is an agency that is mired in literally billions and billions of pages of paperwork every year and somebody presumably reads them. And it takes 10 years, 12 years, 20 years to get a drug onto the market. Do you feel that Congress feels that FDA could somehow speed all of this up in a real, meaningful way without killing everybody with what it approves?
Adam Steinmetz Yeah, and that’s exactly what they point to. They say it takes the average drug about 12 years to go from pre-clinical through being approved by the FDA, and it costs about $1 billion a drug. So this can help in a lot of different ways. And they’re seeing that. They report that in 2016, only one drug application had any AI element in it. By 2021, that was up to 130, and by 2024, over 300. So you’re seeing that. What they’re doing is using AI during drug discovery, pre-clinical trials and clinical trials to streamline the whole process. So when you’re trying to figure out what a drug is targeting, you can use AI to go through a lot of combinations and really quicken that. You can also use it to look at how dosages would be used, so then maybe that would reduce how many clinical trials you’d have to do. And then ultimately, when you’re designing your trial, you need to know how many people to include to best determine whether the drug works or doesn’t work. AI can really tell you, you need 1,000 patients versus 1,500, which would cut down time. They’re hoping that 12 years will shrink, but also the money invested, which would then also drop the prices of drugs.
Tom Temin And Deema, does this seem to say that these artificial intelligence tools should augment what people are doing already, but not make the decision so that you put it into an LLM and it spits out, yeah, this is safe, go ahead and sell it.
Deema Tarazi I think there’s still going to be that second layer of looking at the data and making sure that it is correct. But AI, as Adam said, is going to really help take on some of that load that the scientists and researchers carry at the front end, so that they’re able to quickly get a diagnosis, or get a new vaccine or a new drug on the market quicker, by not having to go through reams of research, reams of clinical data. It’s going to help analyze it. But yes, I think researchers and scientists will still have to be there to cross-check and double-check that these are accurate, and to make sure it’s good for the general public and the patients.
Tom Temin And of course, that’s clinical trials and so forth. And drug efficacy is one area. The whole area of insurance evaluation, with the government being the biggest single insurer, there’s vast ramifications. What did the report go into on that front?
Adam Steinmetz So it identified that the Centers for Medicare & Medicaid Services (CMS) is using some of these tools, but the agency is being very careful in how quickly it deploys them, because it is worried about them making too many decisions that are either preventing care from happening or pushing care out the door too quickly. So they did identify that CMS is using it. But again, the rollout has been a little slow, which I think they’re okay with, and they want to see more. But again, they want to be careful that it’s not doing too much.
Tom Temin We’re speaking with Adam Steinmetz, he’s senior policy adviser. And with Deema Tarazi, senior policy counsel, both at the Brownstein law firm. Again, the whole idea of fraud can be a dangerous area. I think there’s been some European case histories where application of AI resulted in disaster for recipients of federal governmental services because of just bad data or whatever. That sounds like a cautionary thing that Congress is concerned with also.
Deema Tarazi Yes, I would agree that fraud and abuse is something that Congress talks about at large in the health care system, and I think AI is going to be no different. One thing to be conscious of, too, when it comes to fraud, is to look at it from a different angle: making sure that you have a good AI deployment system, and that it knows who the patient is as well. I think that ties into the biases that they’re very mindful about. How does AI deploy to make sure that whatever information a doctor is receiving actually matches the patient? So in addition to making sure it’s not fraudulent, it’s making sure the data is good. And one of the things this paper talks about is being mindful of the biases that AI could perpetuate, to make sure there’s no inappropriate care, or outcomes made worse for a patient as well.
Tom Temin All right. Well, let me ask you, as people who advise companies in this area that are dealing with the government, I’m presuming. The government’s AI policy has really been driven by the Trump 45 administration, then the Biden administration, and now, as far as we can tell, the Trump 47 administration. There are very different approaches to AI policy, more laissez-faire or less, and so on. A few basics in common, but very different policy. Congress has been talking about it and issuing a 237-page report about it. What would you like to see? What should they be legislating, do you think, to make sure that whatever policies administrations have, there are some statutory rails around them?
Adam Steinmetz Yeah, the legislating gets tough. I think members of the health committees have said they want to put up guardrails, but they’re very worried that the guardrails will become obsolete or age very quickly. This is a very fast-moving field, so something that applies now might already be outdated a year from now. Congress struggles to update things as it is. We talk about Medicare and Medicaid: these are 50-, 60-year-old health programs that go through additions and whatnot, but they’re largely the same as they were 60 years ago. So I think a lot of the members are struggling with what to do. We want to make sure we don’t hamper innovation. We want to make sure competition is out there. We want new products, but we want some guardrails there to make sure that these biases don’t exist. Another area that comes up is liability. If a doctor has access to an AI but doesn’t follow it, can they then be sued by the patient? So I think there’s some look into the liability space right now.
Tom Temin Yes. And what are you seeing in the private sector health care area that I guess is working, that could be a model for the federal government delivery of it?
Deema Tarazi I don’t know if there’s a perfect model out there that the private sector is using right now. I know Meta is really trying to put together AI models. And just recently, the Trump administration has gotten a program off the ground called Stargate, with Oracle, OpenAI and SoftBank coming together to really revolutionize how AI is going to look in the future. So I think private companies are utilizing it right now, but they’re still looking at how to do it in a better way, especially with how competitive the markets are when you’re looking at AI, not just in America but on a global scale as well.
Tom Temin And there’s also that liability which weighs very heavily in the private sector, fair to say?
Deema Tarazi That is correct. Especially in the health care system, as Adam mentioned: where is the liability? Who is going to be responsible? Is it going to be medical malpractice insurance? Is it going to be the company who created the AI? It’s going to be very difficult. And that’s where the courts come in; there’s really no precedent out there yet. So courts are going to have a really hard time, I think, deciphering whether it’s going to be the computer’s fault or a person’s fault.
Tom Temin Right. And if tort lawyers get involved, it’ll be whoever has the most money, whether they’re responsible or not. Fair to say. Well, maybe you don’t want to say that, being from a law firm; Brownstein might not like that. And just a final question: what about Veterans Affairs? Here you have a very large health care delivery system, federally operated. But the eligibility of people for VA care is never the question. You’re a veteran, you get it. Any AI thoughts for them?
Deema Tarazi Yeah. So the Veterans Affairs Department, I know, has actually been working for the last couple of years on how to ensure that AI is being deployed within its electronic health systems. They want to make sure that their EHRs are up to date, and that data can follow a veteran from one system to another. And I think that’s going to be a big space to watch, not just in the veterans community but in hospitals as well. Patients want their data, so the electronic health record is really where the Veterans Affairs community has been focusing to make sure that data is accurate and being shared.