Artificial intelligence regulation

Author: Lindsay Fraser
Read time: 8 minutes
Published date: 25 July 2024
AI regulation is happening at different speeds in the EU and in the U.S. Here’s a guide to where things stand.

The proliferation of artificial intelligence systems has sparked a debate about when and how AI should be regulated. 

Regulating artificial intelligence: key concerns

Technologists, economists, and business leaders expect AI will have a far-reaching impact across industries and sectors. 

Policymakers and critics of AI have raised concerns over numerous possible applications of AI systems. Thus far, these concerns have largely concentrated in a few areas:

  • Discrimination: Bias stemming from the datasets that feed AI systems, algorithm design, and human interpretation can all result in unfair or discriminatory outcomes.

  • Data security: AI systems rely on large amounts of data for training. Many of these datasets, particularly those containing healthcare records, biometric data, and financial information, contain sensitive information that can be targeted in cyberattacks.

  • Privacy and surveillance: Facial recognition systems, location tracking tools, and other AI-powered surveillance technologies raise concerns about the infringement of civil liberties.

  • Intellectual property concerns: The use of copyrighted material in the datasets that train generative AI systems and tools has already resulted in lawsuits. There are also unanswered questions about the ownership of AI-generated works.

  • Synthetic media and disinformation: AI tools can be used to create and promulgate synthetic media, including deepfakes and disinformation, which poses a variety of risks to society. These risks include decreased election integrity, financial fraud, health fraud, and general societal distrust.

  • National security: Adversarial actors, including terrorists, could use AI to threaten national security or military operations, particularly as the technology continues to develop. Experts have warned about the potential use of AI for nefarious ends, such as the development and proliferation of chemical or biological weapons.

Existing AI regulations

Creating AI regulation that protects against these risks while promoting innovation is a top priority for policymakers in both the United States and Europe, but their approaches and progress have varied.

EU AI regulations

So far, Europe has taken the lead in writing legislation to regulate AI.

In May 2024, the Council of the EU approved the landmark EU AI Act, following its approval by the European Parliament in March. The EU's new AI law is the first in the world to regulate general-purpose AI systems and will apply to companies that operate in the 27 EU member states, regardless of where a company is headquartered.

Although the law does not fully apply until August 2026, several parts of the EU AI Act take effect in February 2025. These sections focus primarily on privacy and prohibited practices, including provisions that govern predictive policing (using AI algorithms to predict potential future crime) and the scraping of facial images from the internet or CCTV footage for use by AI systems.

U.S. AI regulations

In the United States, progress on AI policy has been slower and more fragmented. To date, Congress has not enacted legislation to regulate the use of AI in the United States.

The primary driver of federal AI policy is the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence from October 2023. The executive order (EO) primarily focuses on guidance to federal government agencies on how to utilize AI to achieve more efficient outcomes. The EO asserts the federal government’s intent to foster an environment for innovation in the AI sector while finding ways to ensure AI doesn't introduce new biases or security concerns into public-sector work. 

The EO articulates over 100 required actions for federal agencies to undertake over the next year, ranging from streamlining the visa process for applicants with AI expertise to creating National AI Research Institutes.

AI regulation: EU vs. U.S.

Comparing the two primary policies governing AI in Europe and the U.S. so far reveals some key differences between the two approaches:

Coverage
  • EU AI Act: Directly applies to any provider of general-purpose AI models operating in the EU.
  • Biden Executive Order: Instructs federal agencies but doesn't impose regulation.

Legal status
  • EU AI Act: Can only be repealed by the European Parliament or struck down by a court.
  • Biden Executive Order: Can be revoked by another president or found unconstitutional by a court.

Threshold for risk
  • EU AI Act: A lower computing threshold means the Act covers a broader range of AI models, likely including some already in use by Google and Microsoft.
  • Biden Executive Order: A higher computing threshold means that no existing AI model is known to exceed it.

Ambition
  • EU AI Act: Aims to be the global standard.
  • Biden Executive Order: Focused on domestic agencies.

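To make the "threshold for risk" comparison concrete, below is a minimal sketch in Python of how a compute-threshold check works. The figures used (roughly 10^25 floating-point operations for the EU AI Act's systemic-risk presumption and 10^26 for the executive order's reporting trigger) are commonly cited values that do not appear in this article, and the function name and structure are purely illustrative.

```python
# Illustrative sketch only: compare a model's estimated training compute against the
# compute thresholds commonly cited for the EU AI Act and the Biden executive order.
# The threshold values below are assumptions for illustration; they do not appear in
# this article and may be refined by regulators over time.

EU_AI_ACT_THRESHOLD_FLOPS = 1e25   # assumed systemic-risk presumption (EU AI Act)
BIDEN_EO_THRESHOLD_FLOPS = 1e26    # assumed reporting trigger (Biden executive order)


def check_thresholds(training_flops: float) -> dict:
    """Return which (assumed) regulatory compute thresholds a model would meet."""
    return {
        "covered_by_eu_ai_act_threshold": training_flops >= EU_AI_ACT_THRESHOLD_FLOPS,
        "covered_by_biden_eo_threshold": training_flops >= BIDEN_EO_THRESHOLD_FLOPS,
    }


if __name__ == "__main__":
    # A hypothetical model trained with ~5e25 FLOPs would meet the lower EU threshold
    # but not the higher executive-order threshold, illustrating the coverage gap above.
    print(check_thresholds(5e25))
```

Because the EU figure is an order of magnitude lower, a model can fall within the EU AI Act's scope while remaining below the executive order's reporting trigger, which is the gap the comparison above describes.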
While the current state of federal AI policy in the U.S. is less comprehensive than the EU AI Act, particularly with regard to its applicability to the private sector, AI policy continues to develop at all levels of government in the U.S.:

  • Senate AI working group roadmap: Senate Majority Leader Chuck Schumer and a bipartisan Senate AI Working Group released a roadmap for artificial intelligence policy. The roadmap places a heavy emphasis on funding research for AI innovation, and progressives have criticized it for vague language about preventing AI-exacerbated bias. Schumer hinted that legislation may move as soon as this year (although AI experts have expressed skepticism about this timeline) and indicated the Senate will advance individual provisions as they are ready rather than wait for a comprehensive bill to come together.

  • State legislation: To date, over 25 states have introduced AI bills and continue to push forward with their own AI legislation to fill gaps at the federal level. For example, over a one-month period in May and June 2024, the California state legislature proposed roughly 30 new pieces of AI-related legislation. This follows Colorado Governor Jared Polis signing into law legislation that regulates the private sector's use of AI in decision-making processes. The Colorado law creates discrimination protections for consumers when AI is used for "consequential" decisions by requiring companies that deploy these high-risk AI systems to establish a notification and risk-management regime. We expect states to continue to pursue their own legislation as Congress stalls.

  • Federal agencies: To comply with the Executive Order, federal agencies have turned their attention to AI. This includes the SEC, which proposed a broad rule that would govern the ways in which financial advisers could use a wide array of AI-related technological tools; the proposal was met with bipartisan pushback over its breadth, and the SEC is likely to modify and re-propose it.

Bottom line: The United States has focused on encouraging AI research and potential government use cases, while European policy has focused more closely on major systemic risks, particularly those related to protecting human rights and mitigating bias.

Although the United States and Europe have tackled AI policy in different ways thus far, it’s clear that AI policy remains a top priority for both jurisdictions. The U.S. and EU are also aligned on a few key items, including: 

  • Using a risk-based approach

  • Establishing principles of trustworthy AI

  • Supporting the eventual creation of international standards

Further collaboration between the two regions will be critical as policy surrounding AI evolves in response to new use cases and newly identified problems or risks. 

AI liability in the states

While the federal government continues to stall on AI policy, states have begun to step in, particularly by seeking to improve AI safety through increased liability for AI developers.

AI researchers and practitioners agree that AI poses risks, but the exact extent of those risks is an area of debate. Some cite catastrophic risks as a serious concern, while others in the industry argue that large-scale risks are blown out of proportion and worry that over-indexing on them will hamper AI innovation.

And while regulating AI will require a range of new policies, increasing liability for those who develop AI systems has become a focus for state policymakers looking for ways to mitigate AI-related risks.

Current AI liability landscape

Neither the courts nor Congress have provided reliable legal guidance on generative AI in the United States. Currently, AI companies operate under traditional product liability law, which falls under state jurisdiction. Notably, traditional product liability law does not account for the unique characteristics of AI, including complexity, autonomy, and opacity, which makes it extremely difficult to assign liability under each state's current legal framework.

In previous case law at the state level, many courts found that if software was "reasonably safe when designed and installed," the defendants were not liable for the plaintiffs' damages. These cases implied that creators of AI software or hardware were not liable for any harm as long as their products were non-defective when made, and this has largely remained the standard in state courts. However, if legal liability hinges only on AI systems being "non-defective" when developed, AI developers avoid responsibility for downstream harms. Moreover, many AI systems are open-source and unsecured, meaning that bad actors can alter these systems to disable safety features, build their own custom versions, and abuse them for a wide variety of tasks.

In May 2024, the Senate AI Working Group released a legislative roadmap advocating for the "enforcement of existing laws" with respect to AI. However, as outlined above, there are no clear laws in the United States regarding AI liability, leaving many decisions up to the courts.

SB 1047

With these gaps in the product liability framework as it applies to AI, states are beginning to step in and legislate. California is leading the way with SB 1047, a bill making its way through the state legislature.

On July 2, the California State Assembly Judiciary Committee passed SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act), following its passage by a bipartisan vote in the California State Senate on May 21. The bill mandates specific safety testing for AI models that cost more than $100 million in computing power to train and that use a specified, high level of computing power. Developers that fail to conduct this testing will be held liable if their AI system leads to a "mass casualty event" or more than $500 million in damages in a single incident or set of closely linked incidents. As of July 2024, no AI model was known to meet the threshold for required safety testing. The bill also creates whistleblower protections for employees of frontier AI laboratories and establishes an advisory council to support safe and secure open-source AI.

The bill is opposed by venture capital firms and tech companies, including Meta. Opponents say the regulations take aim at developers when they should instead focus on those who use and exploit AI systems for harm. Meta's chief AI scientist Yann LeCun has argued that SB 1047 would "destroy California's fantastic history of technological innovation" and ultimately discourage companies from developing large AI systems and from keeping their technology open source, which would stifle the efforts of small startups and open-source developers.

While industry pushes back fiercely against SB 1047, the American public appears to side with leading AI safety scientists: a September 2023 poll found that 73% of voters believe AI companies should be held liable for harms from the technology they create, compared to just 11% who believe they should not.

Next steps 

The bill will be up for a final vote in August. If it passes, it remains unclear whether Governor Newsom will sign it into law, as he has not yet provided any public feedback on the bill.

Looking at the larger picture, states building their own liability regimes for AI will eventually lead to a patchwork of regulation that becomes increasingly complex and untenable. This fragmentation will drive up regulatory complexity and costs for AI companies operating throughout the United States.

UPDATE (as of October 8):

On September 29, California Governor Gavin Newsom vetoed SB 1047. In his memo explaining the veto, Gov. Newsom argued that the bill, which sought to regulate large AI models, could stifle innovation by applying stringent standards to even the most basic functions of large systems while failing to address smaller, specialized systems that could present equal or greater risks. He noted that regulation should focus on the entire AI ecosystem rather than only the largest and most expensive models.

While SB 1047 is no longer in the legislative process, there will likely be more proposed legislation in the near future. The governor announced that he is working with leading AI researchers to develop new legislation he is willing to support. The Carta team will continue to track these updates.


Lindsay Fraser
Lindsay Fraser is a public policy analyst at Carta.
DISCLOSURE: This publication contains general information only and eShares, Inc. dba Carta, Inc. (“Carta”) is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services nor should it be used as a basis for any decision or action that may affect your business or interests. Before making any decision or taking any action that may affect your business or interests, you should consult a qualified professional advisor. This communication is not intended as a recommendation, offer or solicitation for the purchase or sale of any security. Carta does not assume any liability for reliance on the information provided herein. All product names, logos, and brands are property of their respective owners in the U.S. and other countries, and are used for identification purposes only. Use of these names, logos, and brands does not imply affiliation or endorsement. ©2024 eShares Inc., d/b/a Carta Inc. (“Carta”). All rights reserved. Reproduction prohibited.