The White House has issued two long, detailed policies for the development and use of artificial intelligence, one last year and one last month. That's fine; the U.S. should lead and should set an example of ethical use of this technology. But Neil Chilson, the head of AI policy at the Abundance Institute, cautioned on the Federal Drive with Tom Temin that with federal and state-level laws and regulations all aimed at AI, the whole industry could end up hobbled.
Interview transcript:
Tom Temin And briefly, a word or two about the Abundance Institute. Everybody likes abundance.
Neil Chilson Yeah, we’re finding that out. It’s been a great launch; we launched back in April. The Abundance Institute is a new nonprofit organization focused on creating the right cultural and policy environments for emerging technologies. Right now, we’re very focused on energy and AI. Those are areas where we see a huge amount of opportunity, but also a real threat that the cultural and policy environments won’t let those technologies create the widespread human prosperity we think they can.
Tom Temin And clearly, the Biden administration has understood the import of artificial intelligence, both in how it will affect the government itself and also how it will affect the economy and could affect people in lots of good and bad ways. What’s your thought about those policies? And of course, now we’ll see if they endure with a change in administration coming.
Neil Chilson Well, I always have to start by saying that artificial intelligence is almost as old as the history of computing. People have been trying to make computers do things that look like intelligent work for a long time, and they’ve succeeded. And every time they succeed, we don’t call it AI anymore. We just call it computers. This latest wave, though, in the public mind and certainly in the mind of Washington, was in many ways launched by the release of GPT-3. That kicked off a whole bunch of activity, and you’ve mentioned two of the big executive actions the Biden administration took. One is an executive order that came out about a year ago, and the most recent one is a memorandum on AI in the national security context. There’s a lot of overlap between them, but there’s a very interesting difference in many ways, too. The executive order, which built on documents the Biden administration had already released, one of them being an AI Bill of Rights, is only cautiously optimistic about AI. It tends to emphasize risks and harms. A lot of the executive order, which is, by the way, the third-longest executive order ever issued, is direction to various federal agencies and federal bodies to apply their existing authority to try to dig into how to mitigate these potential risks; it focuses a lot less on the opportunities. The recent national security memorandum, in contrast, starts very early on by saying this is a powerful technology that the U.S. needs to stay in the lead on. It’s quite optimistic about the potential of the technology and the need for the U.S. to keep its leadership. That tone change between the two is pretty interesting, I think; a lot happened in that year. But in the nitty-gritty, they both take a top-down approach that says, hey, let’s figure out the right government structures to make sure the industry is getting this right.
Tom Temin Well, what’s your sense of how aware the industry itself is of the possible abuses of these technologies? I think people are worried especially about deliberately generated hallucinations, deepfakes and that kind of thing.
Neil Chilson Yeah, and quite unlike the development of the internet technologies, the leading companies in this space are in many ways overly cautious, I think, and quite concerned. A lot of that comes from the leadership. So they are proceeding carefully with the training of these very large models and have created a whole AI safety ecosystem. Before these turned into giant businesses, I think OpenAI was trying to be a cautious, safe mover, and then all of a sudden it turned out that people really like to use these technologies; I think that changed its orientation somewhat. But overall, I think the companies have been quite cautious and quite aware of the potential for people to misuse what is a very powerful technology.
Tom Temin We’re speaking with Neil Chilson. He’s head of artificial intelligence policy at the Abundance Institute, and we should mention you’re a former chief technologist at the Federal Trade Commission. In something you wrote recently through the institute, you mentioned that there are hundreds of potential laws and regulations in the works below the federal level. It sounds like a lot of this is not harmonized, which we see in a lot of cases. Are you worried that AI itself could be tamped down too far by this panoply, or patchwork, of potential controls that could be enacted?
Neil Chilson Yeah, that’s right. I think there are over 700 active pieces of legislation at the state and federal level, depending on how you track them. Obviously, most of those are at the state level, and the states are trying to tackle a lot of different issues. So that really does create a risk. These models might be developed in one state, but they’re deployed internationally, so developers have to find a way to comply with not just 50 different jurisdictions, but hundreds of them. One of the first regulations post-ChatGPT was actually adopted by New York City, so even cities are getting in on this action. And even where these bills are well-intentioned and well executed, having a wide variety of them makes things very difficult, especially for the smaller players in the space; this being software, there are a lot of smaller players trying to deploy this technology. So some harmonization at the federal level seems to make a ton of sense. If I could go back in time and help improve the initial executive order, one thing would have been for it to take a more comprehensive look: to say that harmonization is important, but also to lay out some thoughtful strategy for how to get to that harmonization, given the federal system of laws that we have in the U.S.
Tom Temin Right. And of course, at the federal agency level, every agency talks about the need to use AI in some manner, and increasingly the generative style of AI. At the same time, the government has an imperative to use small business. So it seems like the government, at the agency and application level, could almost be a model for the proper deployment of these technologies, because of the strictures in the public sector that go with applications deployed to the public.
Neil Chilson Yeah, ideally. There’s huge potential for these technologies to make the business of government work better, and there really could be a model here. But the government procurement process is complicated. In many ways, government agencies are still figuring out how to do cloud computing, storing their content and their processes in the cloud rather than on servers located at the agency. They’re still getting that right. So with this new AI technology, some leadership from the White House is going to be really important to help the agencies figure out how to do this well. And Congress might need to streamline procurement and other types of regulation that make it hard to adopt novel services that the government just hasn’t contracted for in the past.