Takeaways from Biden’s Executive Order on AI
Biden's AI Executive Order has been subject to some unfair criticism, but it isn't without its flaws.
Last month, President Biden issued an Executive Order addressing Artificial Intelligence (AI). This Order largely consists of directives to federal agencies to analyze and report on the potential risks of AI with respect to critical infrastructure, safety, civil rights, and privacy, with additional provisions aimed at building government competency in AI.
While the White House Office of Science and Technology Policy previously issued an advisory “AI Bill of Rights,” this Executive Order represents the Biden administration's first major step towards AI governance and provides the first hints of what a regulatory approach to AI might look like in America.
Given its importance, the Executive Order merits discussion on both its strengths and shortcomings, especially since some commentators are missing the mark entirely in their reviews.
Where the Critics Get it Wrong
Opposing Innovation?
Some technologists and industry commentators view the Order excessively negatively: Steven Sinofsky, former President of Microsoft’s Windows Division, characterizes the Order as “less a document of what should be done with the potential of technology than it is a document pushing the limits of what can be done legally to slow innovation.” Echoing Sinofsky (and citing his post approvingly several times), Ben Thompson of the Stratechery blog posits that the Order is utterly “opposed to how innovation happens.”
Viewpoints like these are unsupported by the text of the Executive Order, in which Biden goes to great lengths to underline his administration's dedication to “promoting innovation” and “advancing AI.” The fact that the Executive Order provides for substantial investments in embracing AI, by hiring domain experts and building AI expertise at the highest levels of government, suggests that Biden isn’t trying to oppose or throttle the technology at all.
Regulatory Capture
Critics also accuse the Order of enabling regulatory capture. This line of thinking suggests that Big Tech can build competitive moats and protect itself from competitors by supporting expensive and onerous regulations that it has the expertise and capital to address but that startups do not. The accusation points to the Order’s requirement that developers of AI models over a certain size document and report not only potential vulnerabilities but also the results of “red-team” testing for a model’s weaknesses and potential for harm. Critics, including Sinofsky and Thompson, frame these provisions as regulatory capture, claiming that the reporting and testing requirements benefit incumbents like OpenAI, Google, and Meta, who can easily bear the compliance costs, at the expense of new entrants, who would be less equipped to do so.
But this doesn’t line up with the reality of training models of the sizes subject to the regulations in the Order. Any startup with founders qualified and talented enough to train a state-of-the-art model will not struggle to find the funding to do so. And any company, whether an incumbent or a startup, that is training state-of-the-art models will find that the costs of regulatory compliance pale in comparison to the tens or hundreds of millions of dollars that go into compute for training.
The reporting requirements of the Order currently apply to models trained using a quantity of computing power greater than 10^26 integer or floating-point operations (until the Commerce Secretary issues new guidance), and Samuel Hammond of the Niskanen Center writes that no company has yet trained a model that reaches that threshold. As written, the Order hardly moves the needle on regulatory compliance, and it’s difficult to see it bestowing a significant competitive advantage on any established player in the space.
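For a rough sense of scale, here’s a back-of-the-envelope sketch (my own illustration, not anything from the Order) using the common approximation that training a dense transformer takes roughly 6 × parameters × training tokens floating-point operations. The model sizes and token counts below are illustrative assumptions, not official disclosures:

```python
# Back-of-the-envelope comparison of training compute against the Order's 10^26 trigger.
# Uses the standard ~6 * N * D approximation for dense transformer training FLOPs,
# where N = parameter count and D = training tokens. All figures are illustrative.

THRESHOLD = 1e26  # operations, per the Executive Order's reporting trigger

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

examples = {
    "70B params, 2T tokens":    training_flops(70e9, 2e12),    # ~8.4e23
    "175B params, 300B tokens": training_flops(175e9, 300e9),  # ~3.2e23
    "1T params, 10T tokens":    training_flops(1e12, 10e12),   # ~6e25, still under 1e26
}

for label, flops in examples.items():
    print(f"{label}: {flops:.1e} FLOPs "
          f"({flops / THRESHOLD:.1%} of the 1e26 reporting threshold)")
```

Even a hypothetical trillion-parameter model trained on ten trillion tokens lands under the threshold by this estimate, which is consistent with Hammond’s observation that no current model crosses it.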
A Note on IP Considerations
Additionally, while the Order’s approach to model testing and risk reporting has been widely discussed, its provisions concerning AI’s implications for intellectual property law have flown under the radar and also merit praise. Biden directs the Director of the U.S. Copyright Office and the Director of the U.S. Patent and Trademark Office to produce analyses and recommendations regarding works generated by AI, and clarity here is sorely needed. While the Copyright Office has made clear that only works created by humans are eligible for protection, there’s a lack of certainty on other questions, particularly around training: whether models trained on copyrighted material violate intellectual property rights, for example, or whether an AI-generated vocal track that resembles the voice of a professional artist violates that artist’s rights. While I’m sure the reports and recommendations from the USPTO and Copyright Office won’t answer every intellectual property question here, further clarity from government agencies (and, preferably, Congress!) is certainly overdue.
Where the Order Misses
Inherent Limitations of the Form
While the Executive Order is moderate in tone and avoids staking a position at any extreme end of the AI discourse, it’s notable that Biden’s first major step towards AI governance came via Executive Order rather than by marshaling legislation through Congress. An Executive Order has inherent limitations: it is easily rescinded by a successor, it does not carry the force of legislation, and it binds only federal agencies. These limitations invariably undermine its desired impact as a definitive statement on AI by the administration and reduce its persuasiveness. As a set of AI-related directives to federal agencies, the Executive Order suffices. But as a trailblazing step towards a bold and responsible philosophy of AI governance, it is rather uninspiring, largely because it declines to put forth a coherent philosophy at all. The administration fails to fully address critical considerations like AI’s impact on the labor market, the tradeoff between openness and security, and important free speech implications, each discussed in turn below.
AI’s Impact on Jobs
Notably, the Biden administration and industry commentators dwell little on the impact AI will have on American employment. In the Executive Order, Biden devotes only a few sentences to AI’s potential impact on jobs, merely directing the Chairman of the Council of Economic Advisers and the Secretary of Labor to deliver reports on the possible impacts of AI on the labor force and on how AI can advance “employees’ well-being.”
This is a good impulse, but it isn’t a plan. Yes, reports can be useful, but commissioning reports is easy. Will the administration actually follow the reports? What sort of economic and political risks is Biden willing to take to protect workers? If protecting workers means curtailing innovation, how would Biden decide what to do? And if curtailing innovation means that our country becomes less competitive and productive than countries that put no limits on AI at all, would Biden still do it to protect workers? Biden prides himself on being pro-union, and the WGA and SAG-AFTRA negotiations have both prominently involved AI as a negotiating point. Does Biden support “promoting innovation,” as he champions, or does he want to minimize “job-displacement risks”? Biden says in this Order that he’s looking into it, but transformative changes in AI are already taking place. Waiting months or years for studies and reports before starting to come up with a plan means the government will be on its back foot while the technology advances at a rapid clip.
The Openness vs Security Tradeoff
Nothing in the Executive Order indicates that the Administration is seriously grappling with the inherent tradeoff between openness and security, a tradeoff that any nationwide regulation of artificial intelligence must confront. Presently, open-source AI models are in vogue, with Stable Diffusion and LLaMA garnering substantial attention in recent months. Reportedly, even OpenAI is preparing an open-source model. The popularity of open-source models presents a challenge for any legislator interested in preventing the use of AI by bad actors. While it’s notoriously difficult to predict how AI will evolve, it’s easy to imagine, for example, an agentic, next-generation AI that greatly expands an adversary’s ability to exploit vulnerabilities in our cyber infrastructure or compromise sensitive information. Our adversaries are certainly making these attempts with the technologies they already have, but the calculus of how the US responds changes drastically if these adversaries are using technology made by American companies to attack America, and that technology also happens to be freely available to anyone with an internet connection.
To be sure, open source is a core part of the Silicon Valley ethos. Preserving open source helps level the playing field, allowing wider access to near-cutting-edge technology. An open-source ecosystem also ensures that the AI models in common use are transparent and auditable, which can provide critical information to regulators and the public about the functions and weaknesses of prevailing software approaches. But if open-source models can enable attacks or threats against the United States at an unprecedented scale, how should the government respond?
To that end, Arvind Narayanan and Sayash Kapoor at the AI Snake Oil blog promote (and suggest Biden also endorses) “defending attack surfaces”: identifying possible attack surfaces (which they list as “disinformation, cybersecurity, bio risk, financial risk, etc.”) and defending each of them individually. Narayanan and Kapoor favor this approach for its ability to preserve openness. But the point isn’t merely that risks exist and need to be addressed; it’s that the risks arise because the models are available without limitation to any bad actor. Because AI can be accessed easily and cheaply, any bad actor can carry out attacks in an automated, scalable, and cheap way, so defending each attack surface individually ignores the root cause: open distribution. Notably, the White House and DARPA launched a $20M AI cybersecurity challenge, which is to be commended, but the Administration still needs to present a clearer perspective on how it plans to strike the balance here.
How Much Can the Government Regulate?
The tradeoff between openness and security rests on another important consideration: the ability of the government to regulate AI in the first place. While advocates for regulation compare AI development to nuclear weapons, an important difference here is that the code behind the models may be protected speech. To some extent, regulating large training runs is the equivalent of banning math, and math and code are both protected under free speech law.
Sinofsky draws a parallel here to the ill-fated ban on cryptography exports and points to a t-shirt from that era printed with RSA encryption code. Because the law classified strong cryptography as a munition, the shirt itself technically fell under U.S. export restrictions.
The famous illegal t-shirt with RSA cryptography, from the Hardcore Software blog
To technologists like Sinofsky, this is the type of absurd situation we may be headed towards, and it’s a point worth considering.
It is difficult to imagine any regulatory framework that significantly limits the tail end of AI development while respecting free speech. Eliezer Yudkowsky of the Machine Intelligence Research Institute, for example, has been thinking about this problem for decades, and the best idea he’s come up with is to track every GPU sold and shut down all the large GPU clusters to prevent Artificial General Intelligence (AGI). While this approach would provide a degree of security and sidestep the implications of “banning math,” it is also decidedly impractical for a multitude of reasons, not least the U.S.’s competitive stance with China and the myriad benefits of AI.
However, the approach of regulating hardware as a proxy for scale and potency is complicated by increasing algorithmic efficiency. Recent advances in inference methods, data processing, and model architecture can provide remarkable performance gains without any increase in compute. This further suggests that limiting AI by compute specifically may not be as effective an approach as proponents would wish, and that the complications of regulating software itself may be among the fault lines that disrupt future regulatory designs. But the Executive Order sheds little light on how the Administration plans to address this dynamic.
Bottom Line
The Order has something for everyone: identity-based political groups will be pleased by the civil rights and equity language, technologists will like the language on promoting and investing in innovation, and the alignment crowd will appreciate the reporting requirements on red-teaming and emergent abilities. But Biden is only able to give something to everyone because he presently refuses to face the most important considerations and tradeoffs head on.
It’s easy to understand where Biden and the administration get their hesitation from. The technology is still developing and it’s unreasonable to expect the administration to have all the answers. But when profoundly transformative technology is on the horizon, commissioning reports and biding your time isn’t good leadership; in fact, it’s not leadership at all.
It certainly benefits the industry to be able to set the tone on AI and have the government react accordingly, but such an approach prevents the government from being proactive in minimizing potential harms. The present moment does not demand heavy-handed regulation or a cap on innovation, but it does demand that the Administration take a stand on critical AI issues and considerations sooner rather than later. If the hesitation and uncertainty that characterize the Executive Order persist, we might be waiting a long time.