Artificial intelligence regulation

Author: Lindsay Fraser
Read time: 5 minutes
Published date: June 17, 2024
AI regulation is happening at different speeds in the EU and in the U.S. Here’s a guide to where things stand.

The proliferation of artificial intelligence systems has sparked a debate about when and how AI should be regulated. 

Regulating artificial intelligence: key concerns

Technologists, economists, and business leaders expect AI will have a far-reaching impact across industries and sectors. 

Policymakers and critics of AI have raised concerns over numerous possible applications of AI systems. Thus far, these concerns are largely concentrated in a few areas:

  • Discrimination: Bias stemming from the datasets that feed AI systems, algorithm design, and human interpretation can all result in unfair or discriminatory outcomes.

  • Data security: AI systems rely on large amounts of data for training. Many of these datasets, particularly those containing healthcare records, biometric data, and financial information, hold sensitive information that can be targeted in cyberattacks.

  • Privacy and surveillance: Facial recognition systems, location tracking tools, and other AI-powered surveillance technologies raise concerns about the infringement of civil liberties.

  • Intellectual property concerns: The use of copyrighted material in the datasets that train generative AI systems and tools has already resulted in lawsuits. There are also unanswered questions about the ownership of AI-generated works.

  • Synthetic media and disinformation: AI tools can be used to create and promulgate synthetic media, including deepfakes and disinformation, which poses a variety of risks to society. These risks include decreased election integrity, financial fraud, health fraud, and general societal distrust.

  • National security: Adversarial actors, including terrorists, could use AI to threaten national security or military operations, particularly as the technology continues to develop. Experts have warned about the potential use of AI for nefarious purposes, such as the development and proliferation of chemical or biological weapons.

Existing AI regulations

Creating AI regulation that protects against the risks while promoting innovation is a top priority for policymakers in both the United States and Europe, but their approaches and progress have varied. 

EU AI regulations

So far, Europe has taken the lead in writing legislation to regulate AI.

In May 2024, the EU Council approved its landmark EU AI Act following its approval by the European Parliament in March. The EU’s new AI policy is the first law in the world to regulate general-purpose AI systems, and will apply to companies that operate in the 27 EU member states regardless of where the company is headquartered. 

Although the law does not fully come into effect until May 2026, several parts of the EU AI Act take effect in December 2024. These sections focus primarily on privacy, including provisions governing predictive policing (using AI algorithms to predict potential future crime) and the scraping of facial images from the internet or CCTV footage for use by AI systems.

U.S. AI regulations

In the United States, progress on AI policy has been slower and more fragmented. To date, Congress has not enacted legislation to regulate the use of AI in the United States.

The primary driver of federal AI policy is the Biden Administration’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence from October 2023. The executive order (EO) primarily focuses on guidance to federal agencies on how to use AI to achieve more efficient outcomes. The EO asserts the federal government’s intent to foster an environment for innovation in the AI sector while finding ways to ensure AI doesn't introduce new biases or security concerns into public-sector work.

The EO articulates over 100 required actions for federal agencies to undertake over the next year, ranging from streamlining the visa process for applicants with AI expertise to creating National AI Research Institutes.

AI regulation: EU vs. U.S.

Comparing the two primary policies governing AI in Europe and the U.S. so far, there are some key differences between the two approaches:

  • Applicability: The EU AI Act directly applies to any provider of general-purpose AI models operating in the EU. The Biden executive order instructs federal agencies but doesn't impose regulation.

  • Legal status: The EU AI Act can only be repealed by the European Parliament or struck down by a court. The executive order can be revoked by another president or found unconstitutional by a court.

  • Threshold for risk: The EU AI Act's lower computing threshold means it covers a broader range of AI models, likely including some already in use by Google and Microsoft. The executive order's higher computing threshold means no existing AI model is known to exceed it.

  • Reach: The EU AI Act aims to be the global standard, while the executive order is focused on domestic agencies.
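
The "threshold for risk" difference can be made concrete. The figures below are not stated in this article; they come from the published texts of the two policies (the EU AI Act presumes "systemic risk" for general-purpose models trained with more than 10^25 floating-point operations, while the Biden executive order sets reporting requirements at 10^26). A minimal sketch, assuming those thresholds:

```python
# Publicly stated training-compute thresholds (in FLOPs) that trigger
# each policy's strictest obligations. These figures are assumptions
# drawn from the policy texts, not from this article.
EU_AI_ACT_FLOPS = 1e25   # EU AI Act: presumed "systemic risk" above this
BIDEN_EO_FLOPS = 1e26    # Biden EO: reporting requirements above this

def policies_triggered(training_flops: float) -> list[str]:
    """Return which policies' compute thresholds a model exceeds."""
    triggered = []
    if training_flops > EU_AI_ACT_FLOPS:
        triggered.append("EU AI Act (systemic-risk obligations)")
    if training_flops > BIDEN_EO_FLOPS:
        triggered.append("Biden EO (reporting requirements)")
    return triggered

# A hypothetical frontier model trained with ~5e25 FLOPs crosses the
# EU threshold but not the U.S. one -- the gap described above.
print(policies_triggered(5e25))
```

Because the EU threshold sits an order of magnitude lower, a model can owe obligations in Brussels while falling below the U.S. reporting line.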

While the current state of federal AI policy in the U.S. is less comprehensive than the EU AI Act, particularly with regard to its applicability to the private sector, AI policy continues to develop at all levels of government in the U.S.:

  • Senate AI working group roadmap: Senate Majority Leader Chuck Schumer and a bipartisan Senate AI Working Group released a roadmap for artificial intelligence policy. The roadmap places a heavy emphasis on funding research for AI innovation, and progressives have criticized it for vague language about preventing AI-exacerbated bias. Schumer hinted that legislation may move as soon as this year (although AI experts have expressed skepticism about this timeline) and indicated the Senate will advance individual provisions as they are ready rather than wait for a comprehensive bill to come together.

  • State legislation: To date, over 25 states have introduced AI legislation and continue to push forward with their own bills to fill the gaps at the federal level. For example, over a one-month period in May and June 2024, the California state legislature proposed roughly 30 new AI-related bills. This follows Colorado Governor Jared Polis signing into law legislation that regulates the private sector’s use of AI in decision-making processes. The law implements discrimination protections for consumers when AI is used for “consequential” decisions by requiring companies deploying these high-risk AI systems to create a notification and risk-management regime. We expect states to continue to pursue their own legislation as Congress stalls.

  • Federal agencies: To comply with the executive order, federal agencies have turned their attention to AI. This includes the SEC, which proposed a broad rule that would govern the ways financial advisers can use a wide array of AI-related technological tools; the proposal was met with bipartisan pushback based on its breadth, and the SEC is likely to modify and re-propose it.

Bottom line: The United States has focused on encouraging AI research and potential government use cases, while European policy has focused more closely on major systemic risks, particularly those related to protecting human rights and mitigating bias.

Although the United States and Europe have tackled AI policy in different ways thus far, it’s clear that AI policy remains a top priority for both jurisdictions. The U.S. and EU are also aligned on a few key items, including: 

  • Using a risk-based approach

  • Establishing principles of trustworthy AI

  • Eventually creating international standards

Further collaboration between the two regions will be critical as policy surrounding AI evolves in response to new use cases and newly identified problems or risks. 


DISCLOSURE: This publication contains general information only and eShares, Inc. dba Carta, Inc. (“Carta”) is not, by means of this publication, rendering accounting, business, financial, investment, legal, tax, or other professional advice or services. This publication is not a substitute for such professional advice or services nor should it be used as a basis for any decision or action that may affect your business or interests. Before making any decision or taking any action that may affect your business or interests, you should consult a qualified professional advisor. This communication is not intended as a recommendation, offer or solicitation for the purchase or sale of any security. Carta does not assume any liability for reliance on the information provided herein. All product names, logos, and brands are property of their respective owners in the U.S. and other countries, and are used for identification purposes only. Use of these names, logos, and brands does not imply affiliation or endorsement. ©2024 eShares Inc., d/b/a Carta Inc. (“Carta”). All rights reserved. Reproduction prohibited.