Contractors and agencies alike have long had standards for identity management when people log on to systems. Now the National Institute of Standards and Technology proposes to tighten the requirements for identity proofing and related procedures. Attorney Sean Griffin, a partner at Robinson and Cole, provided an analysis on the Federal Drive with Tom Temin.
Interview transcript:
Tom Temin Mr. Griffin, good to have you with us.
Sean Griffin Good to be here. Thank you for having me.
Tom Temin So you have looked carefully at the NIST proposed guidelines. Really, I’m not sure they’re going to be mandatory. Is this rulemaking or just guideline updating? Let’s start there.
Sean Griffin They’re just guidelines. If there’s ever rulemaking, that would come straight from the Department of Defense or whatever other federal agency your contract is with.
Tom Temin Okay. So these are new guidelines, which NIST is really good at updating, digital identity guidelines. What are they talking about here? And we’ll get into the implications.
Sean Griffin Well, yeah, let’s talk about that. And I’ll start with a horror story about a client who shall remain nameless, because there are so many clients this has happened to. The person in charge of writing checks or sending wire transfers gets an email from his or her boss saying, I need you to wire $5 million to this account in Hong Kong, or wherever, that you’ve never heard of before. Back in the day, the person may or may not have believed it, and the money may or may not have gone out. But people now have seen these suspicious emails, so they say, you’re not going to fool me, I’m going to demand a Zoom call. So they get on a Zoom call with the boss and all the other finance people, and everyone confirms the wire transfer instructions: yes, send it to this account. And he says, okay, I have all these people confirming, I’ll do it. Come to find out, the only real person on that Zoom call was him, the person sending the money. Everybody else had been deepfaked, audio deepfakes and video deepfakes. Without substantial identity verification, this could happen on a government-based system, which could be really, really bad.
Tom Temin Wow. That is a really multilayered attack, you might say.
Sean Griffin Yes, they have really gotten a lot more sophisticated, and the tools have gotten a lot stronger and more widely available. So I can see why NIST stepped into action.
Tom Temin Right. It’s a good thing those people aren’t in the pager business. All right. So these new guidelines update FIPS Publication 201-3, specifically, the standard for personal identity verification. What are they introducing here?
Sean Griffin They’re introducing a few things, which I’ll talk about in broad categories. One is expanded identity proofing modules. They offer a new taxonomy and structure for proving who you say you are, and that will depend on what kind of communication you’re doing, whether it’s video conferencing or you’re on site; it will be different there, which makes sense if you think about it. The second one is continuous evaluation and monitoring. NIST dipped its toes into these waters before, in the December 2022 initial public draft, where they talked about continuous improvement, but they have beefed up those requirements as well. Then they have more fraud detection and mitigation requirements. The draft guidelines add programmatic fraud requirements for credential service providers and government agencies, and organizations basically have to keep their eye on evolving threats and do so pretty aggressively, which you should be doing anyway, but NIST is now formalizing that. They’ve also added syncable authenticators in digital wallets.
Tom Temin Yeah, what’s that one all about?
Sean Griffin What we’ve had before with syncable authentication is that when you log into a lot of systems, they force you to enter a code, like from Microsoft Authenticator or Google Authenticator. So they beefed up those requirements to make them a little more sophisticated. And it does allow contractors to manage their digital certificates and credentials and present them securely to different federal systems. That’s actually kind of a boon to contractors. They also have risk-based authentication mechanisms in there. All that means is that the requirements are more rigorous if the risk is higher, and less rigorous if it’s lower. They’ve also talked a lot about privacy, equity and usability, and a lot of that is about preventing bias, which goes into the AI field. The AI field is focusing more and more on biases, not just bias in the racial sense, but bias in the sense that you can get something wrong because you’re so used to looking at a problem in a particular way. It’s a cousin to the hallucinations we hear so much about. Then we also have authentication versus biometrics, and also multifactor authentication. Again, like the syncable authentication I talked about, it addresses that as well.
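[Editor’s note: To illustrate the kind of one-time codes Griffin mentions, here is a minimal sketch of how an authenticator app such as Microsoft Authenticator or Google Authenticator derives a short code, a time-based one-time password per RFC 6238, using only the Python standard library. The shared secret and 30-second interval below are illustrative assumptions, not anything the NIST draft prescribes.]

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval               # current 30-second time step
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()    # HMAC-SHA1 per RFC 4226
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Illustrative secret only: the authenticator app and the verifying system
# both hold this value and compute the same code for each time window.
print(totp("JBSWY3DPEHPK3PXP"))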
Tom Temin We’re speaking with attorney Sean Griffin. He’s a partner at Robinson and Cole. Yeah. So it’s a long list of things, which I think are fairly prevalent in the corporate world already. I know, you know, like Microsoft Authenticator, it’s just a two-digit number, but it ensures that the device is really in your hand and so forth, and it goes along with facial recognition and so on. There’s a lot in here, too, in the draft guidelines about artificial intelligence, not just the bias in it, but about its use in identity systems and machine learning. And that seems to be one of the fresh-ground areas.
Sean Griffin Yes, everybody’s talking about AI and the use of AI nowadays. And, you know, like any tool, there’s a good and a bad way to use it. If you’re talking about a hammer, a good way to use it would be to hit a nail; a bad way would be to hit yourself in the head. I think a lot more people are hitting themselves over the head with AI. So what the new NIST guidelines do is say that uses of AI and machine learning, sometimes called ML, must be documented and communicated to organizations that are relying on these systems. That’s getting to be more and more of a standard requirement, to disclose that you’re using AI. And then all organizations using AI and machine learning must provide information to entities that are using that technology on the methods and techniques used for training those models. That goes back to preventing the bias and hallucinations I talked about before. And you also have to describe the datasets that you’re using. That’s a little technical, but again, AI, like every other computer system, is garbage in, garbage out. And so this enables you to see what is going into the AI model that you are using. And if you’re deploying an AI model, you should be doing that anyway. And then organizations using AI and ML systems must implement the NIST AI Risk Management Framework, which is a bit broader and talks about general risk management, to reduce bias in artificial intelligence.
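[Editor’s note: As a rough illustration of the disclosure Griffin describes, the sketch below shows one way an organization might record, in machine-readable form, how an AI model was trained and which datasets it used, so that information can be passed to agencies or customers relying on the system. The field names and values are hypothetical; the NIST draft describes what must be communicated, not a specific format.]

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ModelDisclosure:
    """Hypothetical record of what an AI/ML-using organization might
    communicate to relying parties: what the model does, how it was
    trained, and what data went into it."""
    model_name: str
    intended_use: str
    training_methods: list[str]
    datasets: list[dict] = field(default_factory=list)
    bias_mitigations: list[str] = field(default_factory=list)
    risk_framework: str = "NIST AI Risk Management Framework"

disclosure = ModelDisclosure(
    model_name="face-match-scoring-v2",               # illustrative name
    intended_use="1:1 face matching during remote identity proofing",
    training_methods=["supervised fine-tuning of a pretrained CNN"],
    datasets=[{"name": "internal-enrollment-images",  # described, not published
               "size": "2.1M images",
               "demographic_coverage": "documented separately"}],
    bias_mitigations=["per-demographic false match rate testing"],
)

# Serialize the disclosure so it can be shared with agencies or customers
# relying on the system.
print(json.dumps(asdict(disclosure), indent=2))
```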
Tom Temin All right. And in your experience with clients, how big a lift is adopting these guidelines for an average organization? I imagine a lot of these pieces might already be in place.
Sean Griffin A lot of these pieces are often in place, yes, but it depends on the client. More recent clients have had to put in place an AI governance program that should have most, if not all, of this stuff in it. Most companies have a written policy for everything important that you do: how many vacation days you can take, where you’re supposed to walk when the forklifts are moving, what have you. AI is becoming one of those areas where you do need a written policy with stakeholder buy-in. And if you have that, if you manage to implement a solid AI governance program, then you’re going to have a lot of these things. You’re going to have methods for communicating with your clients. You’re going to be able to talk with your users and customers, whether that’s a private business or the federal government, about the datasets you’re using, and you’ll be able to do that pretty explicitly. So once you put the governance program in place, a lot of this will fall into place as well.
Tom Temin Right. And then you can feel confident in a self-attestation, for example, that you have these kinds of controls in place, because that’s coming in a lot of different domains.
Sean Griffin That’s exactly right. And it’s also sort of a selling point, too, because if you’re using AI, with the cost savings that can bring, you can tell your client, listen, this is what we’re giving you here. We’re not just checking a box or throwing something at you; we actually have guidelines in place to make sure you’re getting a quality product or service, whatever that may be.
Tom Temin Right. And the other thing we should note, as you have, is that these guidelines are in draft form. This is not quite a command-and-control organization like the DoD customer; people can comment. What do you anticipate might come back, say, on these?
Sean Griffin I think there will be some pushback on some of the rules. I mean, if you don’t have an AI governance program in place, it’s going to feel somewhat onerous to bring one into place. Personally, I think that if you’re going to use AI, you should know how to use it. You wouldn’t just let your employee take the company car without some guidelines as to what they could do, even if the guidelines were just, please don’t crash my car. And AI is going to be the same way. You ought to have those guidelines in place and move forward from there. People are also going to be cautious as to how this relates to CMMC 2.0, the Cybersecurity Maturity Model Certification that DoD is pushing out, which will have more of the force of law, as contract law, and how to harmonize the two. And there will be a few people who have ideas about regulating some of this behavior more strictly; some people are much more skeptical of AI than other businesses are, and so you’ll probably have some people on that end. But I think those will be the three main categories you see, the biggest one being, hey, could you ratchet this back a little bit? And then a bunch of questions.
Tom Temin All right. Well, those comments are open until October 7. Attorney Sean Griffin is a partner at Robinson and Cole. Thanks so much for joining me.
Sean Griffin It was a pleasure. Thank you for having me.
Tom Temin And we’ll post this interview along with a link to more information about the NIST guidelines at federalnewsnetwork.com/federaldrive. Hear the Federal Drive on your schedule. Subscribe wherever you get your podcasts.