Fast Facts
The President’s Office of Management and Budget published a new policy Thursday governing how government agencies can use AI. By the end of the year, agencies will be required to implement “concrete safeguards” before integrating AI systems. One expert said the policy protects people through specific, enforceable measures while boosting innovation at the same time.
When President Joe Biden issued his executive order on artificial intelligence last year, the biggest criticism it received was that it lacked teeth.
On Thursday, 150 days after the order was issued, the President’s Office of Management and Budget (OMB) released a policy that serves as a directive to Federal agencies on the ways they can — and can’t — integrate AI technology.
The policy is divided into three main sections: establishing clear AI governance systems, mitigating the risks posed by AI and advancing responsible innovation.
Read the full memorandum here.
OMB’s governance systems
Under the terms of the policy, Federal agencies have until the end of May to designate Chief AI Officers and establish AI governance boards, both of which will coordinate the use of AI across their respective agencies.
The White House said that OMB has been convening these AI officers as part of a Chief AI Officer Council since December, intended to “coordinate their efforts across the Federal Government.”
The White House also said that the Departments of Defense, Veterans Affairs, Housing and Urban Development and State have already established the required governance boards.
Agencies covered by the policy must consistently and publicly report how they are, or plan to be, in compliance with it.
As part of this governance effort, the Administration plans to hire 100 AI professionals by the summer.
Mitigating risk
The policy calls for minimum practices and concrete safeguards when agencies plan to use types of AI deemed to be “rights-impacting.” The only caveat is that these practices don’t apply to “elements of the Intelligence community.”
If an agency cannot apply these practices and safeguards, it “must cease using the AI system.”
Agencies have until December to implement the safeguards, which must be in place “before” they attempt to deploy an AI system.
These practices include careful testing and risk assessment: a given agency must clearly explain the intended purpose of the specific type of AI it wants to use, supported by metrics or qualitative analysis. Agencies must also detail all potential risks of using that system and evaluate the “quality and appropriateness of the relevant data” used to develop, train and operate a given algorithm.
AI algorithms must also be tested for performance in real-world contexts, and they must be independently evaluated and monitored on an ongoing basis.
Human oversight is an additional condition of AI integration, complete with a requirement that the humans involved be adequately trained.
The policy also highlights the potential for discriminatory algorithmic decision-making multiple times, saying that agencies must “mitigate disparities that lead to, or perpetuate, unlawful discrimination or harmful bias.” Agencies must also conduct ongoing monitoring designed to test and mitigate “AI-enabled discrimination.”
Agencies must also “provide and maintain” a method for people to “conveniently” opt out of using an AI mechanism in favor of a human alternative.
The policy specifically says that, at airports, travelers must be able to opt out of the use of facial recognition without “any delay or losing their place in line.”
When AI systems are used in Federal healthcare systems or to detect fraud in government services, the policy maintains that there must be human oversight integrated into such automated processes.
Responsible innovation
The policy also calls for the removal of barriers to responsible uses of AI, saying that agencies should ensure they have adequate technological infrastructure and data to build and deploy necessary tools responsibly.
The White House specifically cited several such responsible, helpful use cases: the Federal Emergency Management Agency using AI to assess structural damage in the wake of hurricanes, the Centers for Disease Control and Prevention using AI to predict and mitigate the spread of diseases and the Federal Aviation Administration using AI to help ease air traffic congestion.
Experts: A ‘crucial step forward’
Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor, said in a post that while the executive order was a good first step, the “OMB memo is where the rubber meets the road.”
Venkatasubramanian co-authored the White House’s AI Bill of Rights.
“We’ve moved from asking WHETHER we should deploy responsibly, to asking HOW to deploy responsibly. The AI Bill of Rights spelled the HOW out in great detail, and the OMB memo now codifies this for the entire US government,” he said, though he added that there are ways in which the memo “didn’t go far enough.”
“How agency Chief AI officers execute on this guidance will matter greatly,” Venkatasubramanian added. “After all, we are talking about sociotechnical systems here. People matter, and we need to maintain scrutiny. But this is a crucial step forward.”
Nik Marda, the technical lead for AI governance at Mozilla, said that the release of the policy marks a “big day for getting AI right.”
First, the accountability part — here’s how today’s AI policy sets up its approach to risk mitigation
The short summary of how this works is: (1) Check if an AI system impacts safety or rights, and if it does (2) add protections for that AI system
— Nik Marda (@nrmarda) March 28, 2024
The policy, he wrote, is importantly grounded in a core tenet: that “not all applications of AI are equally risky or equally beneficial.”
Marda said that the rules laid out in the memo take a risk-based approach that comes complete with oversight, transparency and clear accountability.
He noted that while there’s great potential for AI systems to “make government services work better and faster,” a substantial body of research has shown how AI can exacerbate harms around bias, security and privacy.
“That’s why today’s AI policy is such a big deal,” he said. “It protects people and accelerates innovation at the same time. It is thoughtful, specific, and paired with real accountability.”
The tech-justice-focused nonprofit Upturn, however, noted that while there are important provisions in the policy, there are also potential loopholes — Chief AI Officers could say certain AI systems are not rights-impacting, and are therefore not subject to the practices laid out in the policy.
“We need the federal government to lead by example in ensuring that AI systems do not reproduce discrimination or cause people harm,” Upturn said. “The final memo has important provisions, but time will tell how agencies ultimately interpret and implement this guidance.”