“The framework establishes a set of binding requirements for federal agencies to implement safeguards for the use of AI so we can leverage its benefits and enable the public to trust the services the federal government provides,” said Jason Miller, OMB’s deputy director for management.
The draft memo highlights certain applications of AI where the technology could harm rights or safety, including healthcare, housing, and law enforcement – all areas where algorithms have historically led to discrimination or denial of services.
Examples of potential safety risks mentioned in the OMB draft include automation for critical infrastructure such as dams, and self-driving vehicles such as the Cruise robotaxis that were shut down in California last week and are currently under investigation by federal and state regulators after a pedestrian who had been struck by another vehicle was dragged twenty feet. Examples in the draft memo of how AI could violate citizens’ rights include predictive policing, AI that can block protected speech, plagiarism- or emotion-detection software, tenant screening algorithms, and systems that could affect immigration or child custody.
According to OMB, federal agencies currently use more than 700 algorithms, although federal agencies’ inventories are incomplete. Miller says the draft memo requires federal agencies to share more about the algorithms they use. “Our expectation is that in the coming weeks and months we will continue to improve agencies’ ability to identify and report on their use cases,” he says.
Vice President Kamala Harris mentioned the OMB memo among other responsible AI initiatives in remarks today at the US Embassy in London, during a trip for the UK AI Safety Summit this week. She said that while some voices in AI policymaking focus on catastrophic risks, such as the role AI may one day play in cyberattacks or the creation of biological weapons, bias and disinformation are already being amplified by AI and affecting individuals and communities every day.
Merve Hickok, author of a forthcoming book on AI procurement policy and a researcher at the University of Michigan, welcomes the way the OMB memo requires agencies to justify their use of AI and assign specific people responsibility for the technology. That’s a potentially effective way to ensure AI doesn’t end up in every government program, she says.
But granting waivers could undermine those mechanisms, she fears. “I would be concerned if we saw agencies using that waiver extensively, especially in the areas of law enforcement, homeland security, and surveillance,” she says. “Once they get the waiver, it could be indefinite.”