
One year later, how has the White House AI Executive Order delivered on its promises?

U.S. President Joe Biden, with Vice President Kamala Harris, signs an executive order during the artificial intelligence event at the White House in Washington on October 30, 2023. Yuri Gripas/ABACAPRESS.COM

The Biden administration鈥檚 approach to the governance of artificial intelligence (AI) began with the Blueprint for an AI Bill of Rights, released in October 2022. This framework highlighted five key principles to guide responsible AI development, including protections against algorithmic bias, privacy considerations, and the right to human oversight.

These early efforts set the tone for more extensive action, leading to the release of the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, or the White House EO on AI, on October 30, 2023. This EO marked a critical step in defining AI regulation and accountability across multiple sectors, emphasizing a “whole-of-government” approach to address both the opportunities and the risks associated with AI. Last week, it reached its one-year anniversary.

The 2023 Executive Order on Artificial Intelligence represents one of the U.S. government’s most comprehensive efforts to secure the development and application of AI technology. This EO set ambitious goals aimed at establishing the U.S. as a leader in safe, ethical, and responsible AI use. Specifically, the EO directed federal agencies to address several core areas: managing dual-use AI models, implementing rigorous testing protocols for high-risk AI systems, enforcing accountability measures, safeguarding civil rights, and promoting transparency across the AI lifecycle. These initiatives are designed to mitigate potential security risks and uphold democratic values while fostering public trust in the rapidly advancing field of AI.

To recognize the one-year anniversary of the EO, the White House released a summary of achievements, pointing to the expanded work of various federal agencies, the voluntary agreements made with industry stakeholders, and the persistent efforts made to ensure that AI benefits the global talent market, accrues environmental benefits, and protects, rather than scrutinizes or dislocates, American workers.

One example is the work of the U.S. AI Safety Institute (AISI), housed in the National Institute of Standards and Technology (NIST), which has spearheaded pre-deployment testing of advanced AI models, working alongside private developers to strengthen AI safety science. The AISI has also signed agreements with leading AI companies to conduct red-team testing to identify and mitigate risks, especially for general-purpose models with potential national security implications.

In addition, NIST released Version 1.0 of its AI risk management guidance, which provides comprehensive guidelines for identifying, assessing, and mitigating risks across generative AI and dual-use models. This framework emphasizes core principles like safety, transparency, and accountability, establishing foundational practices for AI systems’ development and deployment. And just last week, the federal government released the first-ever National Security Memorandum on AI, which will serve as the foundation for the U.S.’s safety and security efforts when it comes to AI.

The White House EO on AI marks an essential step in shaping the future of U.S. AI policy, but its path forward remains uncertain with the pending presidential election. Since much of the work is being done by and within federal agencies, its tenets may outlive any possible repeal of the EO itself, ensuring the U.S. stays relevant in the development of guidance that balances the promotion of innovation with safety, particularly in national security. However, the EO’s long-term impact will depend on the willingness of policymakers to adapt to AI’s rapid development, while maintaining a framework that supports both innovation and public trust. Regardless of who leads the next administration, navigating these challenges will be central to cementing the U.S.’s role in the AI landscape on the global stage.

In 2023, Brookings scholars weighed in following the adoption of the White House EO. Here’s what they have to say today, on its one-year anniversary.

Aaron Klein

Treasury's AI Executive Order response is the gold standard

The Treasury Department went above and beyond the White House Executive Order’s requirements, which were more modestly confined to asking the agency to write a report about AI and cybersecurity risks. As Treasury wrote in that report, artificial intelligence (AI) is both a source of risk for malfeasance and a source of opportunity for “significantly improv[ing] the quality and cost efficiencies of [financial institutions’] cybersecurity and anti-fraud management functions.”

Moving beyond cybersecurity, the Financial Stability Oversight Council (FSOC) brought together all financial regulators for a conference jointly held with Brookings on AI and financial stability. In giving the keynote at the event, Secretary Yellen unveiled a new request for information (RFI) on uses, opportunities, and risks of AI in the financial services sector. This RFI will better inform Treasury, FSOC, and all financial regulators about the ideas and concerns held by financial institutions, academics, technology experts, and the rest of the public.

While independent financial regulators were excluded from the White House EO (a common practice for executive orders), FSOC’s engagement in this space serves a critical role in convening and informing financial regulators to focus their engagement. As FSOC Deputy Assistant Secretary Sandra Lee and I wrote: “Thoughtful and open dialogue among leaders from multiple vantage points can help policymakers, industry leaders, and academics better harness the potential of AI while guarding against its risks.” While it remains to be seen what specific actions Treasury and the financial regulators will take, Treasury’s engagement and follow-through have exceeded its mandate.

Cameron F. Kerry

The mother of all AI legislation

President Biden’s 2023 Executive Order on Artificial Intelligence (AI) is the longest in history at 110 pages. At a conference I attended in Brussels soon after its release, Anu Bradford, who famously coined the term “the Brussels effect,” called the order “the mother of all AI legislation.” It spelled out an array of actions for federal agencies to take in their interactions with AI, both as policymakers and as users, within periods of up to 270 days from the order’s release. Now that it has been one year since the EO’s adoption, the agencies have handed in their homework.

Perhaps the most consequential product of these actions is the Office of Management and Budget (OMB) guidance on management of AI systems and applications used by federal agencies. The final guidance, issued in March 2024, defines “rights-impacting” and “safety-impacting” AI for agency decisions that use AI and risk assessments that agencies are required to conduct. The rights-impacting category includes systems that affect “civil rights, civil liberties, or privacy…equal opportunities…or access to or the ability to apply for critical government resources.” Meanwhile, “safety-impacting” uses encompass those affecting “human life and well-being…climate or environment…critical infrastructure…or strategic assets or resources.”

The OMB guidance will flow into the marketplace for AI through the agencies’ roles as customers. OMB’s definitions of risk resemble the “high-risk” categories regulated in the European Union’s AI Act, demonstrating significant transatlantic alignment on AI risks taking effect even before the EU legislation.

The OMB guidance does not apply to national security agencies, but the more recent National Security Memorandum on AI and its accompanying framework on AI governance and risk management in national security, issued on October 24, 2024, include a related set of limits for these agencies, alongside steps to promote responsible adoption and use of AI and to strengthen the national security capabilities of agencies.

The limits in the framework include prohibiting certain AI use cases affecting individual rights, such as tracking individuals based on their exercise of protected rights, specifically calling out suppression of free speech or the right to counsel, discrimination based entirely on protected categories, and certain inferences based on biometric data. It also limits the use of AI for intelligence products and decision-making to ensure that these generally involve human judgment and that significant use of AI is apparent. The framework also establishes categories of “high-impact use cases” and “use cases impacting federal personnel,” which require a variety of procedural safeguards to be deployed. National security agencies are directed to put in place risk assessment practices for high-impact and new AI uses within 180 days. The use of both risk categories and prohibited use cases, along with the emphasis on individual rights and limits to government use, resembles the EU AI Act.

Anu Bradford was right.

Courtney C. Radsch

The EO has done little to rein in Big Tech

In the year since President Biden released a pioneering Executive Order (EO) on artificial intelligence (AI), the AI market has grown enormously, as AI tools seemingly invade every part of our lives, from how we work and study to how we create and shop. In the appalling absence of legislation by Congress to meaningfully rein in the tech giants as they rapidly roll out AI products with their typical “move fast and break things” ethos, the Biden administration put forth detailed guidance to federal agencies and offices about how they can use, acquire, and oversee an increasingly vast range of AI systems.

The EO is a detailed roadmap for how to put responsible AI principles into practice. According to the administration’s own report card, the agencies completed each of the actions related to federal policies, practices, and procurement across a range of sectors, including employment, housing, law, safety, civil rights, and international collaboration. Spelling out what the administration means by safety and rights impacts allows each agency to assess its progress, which they have done in a series of accompanying frameworks, reports, and guidance.

However, I am concerned the EO does little to mitigate the concentration of power in the hands of a few dominant technology companies. The U.S. government cannot become dependent on only a few corporate behemoths. To help mitigate these concerns, governing cloud providers as public utilities and treating data as a public good should be front and center in efforts to codify the EO in laws and regulation.

The EO implemented important steps toward safer and more trustworthy AI systems. Yet we have heard virtually nothing about the EO’s progress throughout the election cycle, despite AI’s implications for job security in the coming years. Moreover, the tech giants cannot be expected to regulate themselves. Although significant progress has been made in some domains, voluntary commitments have been shown to be insufficient. Indeed, surveillance and data collection have intensified over the past year as AI corporations seek to build bigger and better AI systems. They have done all of this without internalizing the societal or environmental costs of doing so.

The government must leverage its procurement power to source only responsible, ethical AI that adheres to labor and environmental standards while respecting copyright and privacy. Similarly, advancing AI and integrating it into the government without any laws to ensure competition and protection of the public interest will further entrench the power of the handful of Big Tech corporations that have access to the data and computing resources needed to develop and deploy advanced AI systems.

Mark MacCarthy

The National Security Memorandum on AI is a missed opportunity

In July 2024, the U.N. Secretary-General declared bluntly, “Machines that have the power and discretion to take human lives are politically unacceptable and morally repugnant, and should be banned by international law.” More specifically, he called for “the conclusion, by 2026, of a legally binding instrument to prohibit lethal autonomous weapons systems that function without human control or oversight and that cannot be used in compliance with international humanitarian law, and to regulate all other types of autonomous weapons systems.”

The U.S. Executive Order on AI, issued one year ago on October 30, 2023, did not mention lethal autonomous weapons, deferring instead to the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy (“Political Declaration”) that was re-released at about the same time.

On October 24, 2024, almost exactly a year after the AI Executive Order, the White House released its National Security Memorandum on AI. It is largely focused on what the U.S. needs to do to avoid “losing ground” and “ceding [its] technological edge” to “strategic competitors.” It also establishes processes for AI safety evaluations and internal risk management systems and advocates a “stable and responsible international AI governance landscape” to implement these procedural precautions. But it refers only indirectly to lethal autonomous weapons through a nod to the Political Declaration.

But this Political Declaration dodges the issue of lethal autonomous weapons, committing countries only to follow international law in their use of AI weapons and to ensure that their officials “appropriately oversee,” “exercise appropriate care,” and make “appropriate context-informed judgments” about the development and deployment of AI weapons.

On October 29, 2024, the U.S. announced that almost 60 countries had signed the Political Declaration, but it is hard to see the U.S. exercising international leadership on AI while avoiding the central question of lethal autonomous weapons in this way.

As law professor Charlie Trumbull recently explained, the U.S. does not favor a treaty to ban specific classes of weapons or to impose a substantive obligation regarding the use of autonomous weapons, such as a requirement to maintain “meaningful human control.”

But the U.N. Secretary-General’s July announcement indicates that a legally binding instrument on AI weapons is coming. As Professor Trumbull suggests, the U.S. should engage in this process rather than resisting or ignoring it. The National Security Memorandum was a perfect opportunity to grab the mantle of leadership on this vital international AI issue. The U.S. missed it.

Sorelle Friedler

The OMB AI memos are a key step for protection against AI harm, if agencies comply

Buried toward the end of the long Executive Order was an impactful provision that required the Office of Management and Budget (OMB) to issue guidance to federal agencies on their use and procurement of AI. The final guidance to agencies came out this past spring, and the procurement guidance came out this October. Taken together, these have the potential to put in place safeguards that protect against ineffective or discriminatory government use of AI.

There are a few key pieces to the released guidance. First, the guidance governs agency use of AI, including procurement, and identifies specific AI systems that require additional protective actions based on their use cases. Specifically, the guidance defines safety- and rights-impacting AI systems as those whose uses have a significant impact on people’s safety and/or rights. Critically, in addition to these definitions, the guidance also identifies use cases that are presumed to be safety-impacting or rights-impacting. This includes safety-impacting AI systems such as those used for “controlling the physical movements of robots or robotic appendages within a workplace” or “controlling the transport, safety, design, or development of hazardous chemicals or biological agents.” Systems presumed to be rights-impacting include those used for “monitoring tenants in the context of public housing,” “pre-employment screening,” and “identifying criminal suspects or predicting perpetrators’ identities.” This scoping appropriately focuses attention on use cases that could have a serious negative impact on public safety and/or rights while also providing the specificity necessary for agency action.

Second, the guidance identifies specific minimum practices that agencies must put in place for the use (including via procured AI) of safety-impacting or rights-impacting AI. For example, both safety- and rights-impacting AI must be tested based on the real-world context in which they will be deployed, and rights-impacting AI must have demographic disparities assessed and mitigated. In addition, agencies must release plans describing in detail how they will comply with this guidance (and many already have). While these steps are appropriately described as “minimum practices,” thoughtful implementation of them could be a large step forward in government use of AI while protecting the public.

These are key steps, but will they work? Agencies must report information about their AI use and comply with the guidance by December 1, 2024. Waivers and extensions are available, as is the potential for agencies to interpret the guidance in ways that narrow its scope. Under former President Trump’s 2020 executive order on trustworthy AI in government, agencies were already required to inventory their AI use cases. However, the Department of Justice (DOJ), for example, reported only four AI use cases in its 2023 inventory, none of which were facial recognition, despite a Government Accountability Office report finding extensive DOJ use of facial recognition AI. Will we see more complete reporting and compliance by December 1?

Nicol Turner Lee

More needs to be done

The 2023 Executive Order (EO) has driven essential progress toward responsible and trustworthy AI use. But one year later, it still requires strong legislative backing from Congress to safeguard it from partisanship and to codify many of the important risk management frameworks and task forces that it established across government agencies. Many caution that failing to do so will limit the EO’s scope to federal agency use and procurement, leaving critical gaps. Without legislative action, the safety measures risk becoming outdated or insufficient as AI technology continues to evolve rapidly. With China at the center of concern when it comes to AI and national security, Congress must create a responsive regulatory framework that keeps pace with technological change, ensuring that innovation can advance with the appropriate guidance and guardrails. And if Congress doesn’t act, states, which have already signaled their interest in defining what AI governance should look like, will move ahead on their own, making it harder to pass national laws.

While the EO also lays the groundwork for addressing equity and civil rights in AI, it still lacks the necessary enforcement to ensure consistent protection. Questions remain about how civil rights violations will be identified, monitored, and rectified, particularly in complex and often opaque AI systems. The U.S. Department of Justice and other federal agencies are tasked with overseeing civil rights in this domain, but Congress must step in to establish clear, enforceable accountability measures. With a legislative push, safeguards can be solidified to protect all communities and build trust in the AI systems shaping society.

Going forward, a Harris administration would likely continue this work with policies emphasizing AI safety and responsibility, potentially expanding on existing initiatives, including those focused on civil rights. Conversely, Donald Trump has already indicated his intention to revoke the Executive Order if elected, a move that aligns with his broader deregulatory stance. What has become increasingly clear to the global community is that some form of AI governance matters, and time will soon tell whether the U.S. will embody this leadership.