AI Act Fundamentals

Does the EU AI Act Apply to US Companies?

December 10, 2025 · 9 min read · Yuliia Habriiel

Yes, it does. US companies operating AI systems that touch EU markets, clients, or individuals must comply with the EU AI Act – even if their head offices are located in the USA.

Short Answer: Yes, if your AI systems affect EU individuals or markets

The EU AI Act, which entered into force in August 2024, has extraterritorial reach that extends far beyond European borders. US companies operating AI systems that touch EU markets, clients, or individuals must comply with the regulation – regardless of where their headquarters are located.

Let’s be honest – when the EU AI Act first passed, most US companies probably thought “European law, European problem.” But just like GDPR caught American businesses off guard, the AI Act is about to do the same thing. And this time, the fines are even steeper.

The Reality Check: You’re Probably Already in Scope

Think you’re safe because you’re based in Silicon Valley? Think again. Here’s how the extraterritorial reach actually works in practice:

The Netflix Scenario: Netflix’s recommendation algorithm decides what shows to surface for users in Paris, Madrid, and Rome. Even though Netflix is a US company, that AI system is making decisions about EU residents – boom, you’re covered by the AI Act.

The Zoom Dilemma: Zoom’s AI meeting transcription and summary features process conversations involving EU participants. Doesn’t matter that the server is in Ohio and the company is California-based. If EU individuals are in those meetings, the AI Act applies.

The Startup Trap: A Y Combinator-backed startup builds an HR screening tool. They land their first enterprise client, a German manufacturing company. Suddenly, they’re operating a “high-risk” AI system under EU law, with all the compliance headaches that come with it.

The Uber Example: Uber’s surge pricing algorithm and driver-matching AI affect EU customers daily. Their fraud detection systems make decisions about European users. Under the AI Act, all of these could be classified as high-risk systems requiring extensive documentation, human oversight, and risk assessments.

When the EU AI Act Applies to US Companies

The EU AI Act applies to your US company if:

Direct EU Market Presence

  • You have EU customers or clients
  • You provide AI-powered services to EU individuals
  • Your AI systems process data from EU residents
  • You have EU subsidiaries or offices using AI

Indirect EU Impact

  • Your AI systems affect people physically present in the EU
  • You provide AI tools to EU-based businesses
  • Your AI influences decisions about EU individuals (hiring, lending, etc.)
  • You offer AI services through EU partners or distributors

When a Simple Chatbot Becomes Your Biggest Compliance Headache

Let’s talk about everyone’s favorite AI implementation – customer service chatbots.

The Salesforce Reality: Companies using Salesforce’s Einstein AI for customer interactions are discovering they need to disclose when EU customers are talking to AI (not humans). Sounds simple? Try implementing that across 50+ customer touchpoints while maintaining a smooth user experience.
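To make that disclosure requirement concrete, here is a minimal sketch of how a notice might be prepended for EU users. The `format_reply` helper, `user_region` flag, and abbreviated country set are illustrative assumptions, not a real Salesforce or chatbot API:

```python
# Hypothetical sketch: prepend an AI-interaction disclosure for EU users.
# Names and the country list are illustrative, not a real product API.

AI_DISCLOSURE = "You are chatting with an automated assistant."
EU_COUNTRIES = {"DE", "FR", "ES", "IT", "NL"}  # abbreviated for the example

def format_reply(bot_reply: str, user_region: str) -> str:
    """Add the transparency notice when the user is in the EU."""
    if user_region in EU_COUNTRIES:
        return f"{AI_DISCLOSURE}\n\n{bot_reply}"
    return bot_reply

print(format_reply("Your order has shipped.", "DE"))
```

The hard part in practice isn't the code – it's applying a check like this consistently across every customer touchpoint without degrading the experience.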

The Shopify Surprise: E-commerce sites using AI for product recommendations, dynamic pricing, or fraud detection are suddenly realizing they’re operating what the EU considers “AI systems that significantly impact economic opportunities.”

That yoga studio owner in Portland selling meditation apps to customers in Amsterdam? They’re technically in scope.

The High-Risk Reality Check

The EU didn’t mess around with their “high-risk” categories. Here’s what triggers the strictest requirements:

The LinkedIn Scenario: AI-powered hiring tools that screen CVs and rank candidates are automatically high-risk. Every tech company using AI for recruiting (which is basically all of them) needs comprehensive risk management systems, bias testing, and human oversight protocols.

The DocuSign Dilemma: AI tools that help with loan approvals or credit decisions are high-risk. Any fintech serving EU customers through AI-powered underwriting just signed up for extensive compliance documentation and regular audits.

The Coursera Challenge: Educational AI that affects academic outcomes is high-risk. Online learning platforms using AI for grading, course recommendations, or progress tracking need to prove their systems don’t discriminate and maintain detailed logs of all decisions.

EU AI Pact Signatories

These companies signed the EU’s voluntary AI Pact, committing to early compliance with AI Act requirements:

Big Tech that Signed:

  • Amazon, Google, Microsoft, and OpenAI signed the voluntary pact, per TechCrunch.
  • Adobe, IBM, and Samsung also joined
  • Other major names include Accenture, Atlassian, Cisco, and Palantir.

Notable Absences:

  • Apple and Meta are conspicuously missing from the AI Pact
  • Meta stated they “welcome harmonized EU rules” but are “focusing on compliance work under the AI Act” rather than joining the pact

GPAI Code of Practice Signatories (July 2025)

For general-purpose AI models specifically:

Signed:

  • Google confirmed it would sign despite having reservations
  • OpenAI and Microsoft signed, gaining a rebuttable presumption of conformity
  • Amazon, Anthropic, IBM, and others also signed

Refused:

  • Meta announced it wouldn’t sign the voluntary GPAI code of practice, calling it “legally uncertain and overreaching”

Individual Company Compliance Statements

Microsoft has been particularly vocal:

  • Published comprehensive EU AI Act compliance documentation on their Trust Center
  • Incorporated “prohibited practices” into their internal Restricted Use Policy
  • Has “dedicated working groups combining AI governance, engineering, legal, and public policy experts working on compliance”

Atlassian

  • Joined the EU AI Pact in September 2024 and published detailed compliance resources.

The Strategic Split

What’s interesting is the clear divide among US tech giants:

Proactive Approach: Google, Microsoft, Amazon, OpenAI are positioning themselves as compliance leaders, likely viewing this as competitive advantage in the European market.

Cautious Approach: Apple and Meta are taking a wait-and-see stance, possibly concerned about committing to voluntary pledges that could be used against them later.

The stakes are high – penalties for non-compliance can reach up to 7% of global annual revenue for violations of the Act’s banned AI uses (according to TechCrunch), which could mean billions in fines for these companies.

Most telling: companies that sign the Code gain a “rebuttable presumption of conformity”, while those that refuse “may face stricter documentation audits”, as Aicerts News notes, once enforcement ramps up in 2026.

The GDPR Precedent: Why US Companies Should Take Notice

The EU AI Act follows the same extraterritorial approach as GDPR, which has already resulted in billions in fines for US companies including:

  • Meta (€1.2 billion)
  • Amazon (€746 million)
  • Google (€90 million)

The lesson is clear: geographic location doesn’t shield you from EU regulations if your technology affects EU individuals.

What This Means for Your Business Right Now

For SaaS Companies: That AI feature you just shipped? Check if any European customers use it. Your terms of service probably need updating, and you might need to implement explainability features you never planned for.

For AI Startups: Seed-stage companies are discovering that EU AI Act compliance can eat 20-30% of their engineering resources. Factor this into your fundraising – investors are starting to ask about regulatory compliance in due diligence.

For Enterprise Tech: B2B companies selling to EU enterprises are finding AI Act compliance is becoming a checkbox item in procurement processes. No compliance documentation? No deal.

Risk Categories That Trigger Compliance

High-Risk AI Systems (Strictest Requirements)

  • AI in hiring and HR decisions
  • Credit scoring and lending algorithms
  • Medical device AI
  • Educational AI systems
  • Law enforcement AI tools

Limited Risk Systems (Transparency Requirements)

  • Chatbots and virtual assistants
  • AI-generated content systems
  • Deepfake detection tools

Minimal Risk Systems (Best Practices)

  • Recommendation algorithms
  • Basic automation tools
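As a first-pass triage (not legal advice), the tiers above can be sketched as a simple lookup. The `RISK_TIERS` mapping and `triage` helper are hypothetical names for illustration; real classification requires analysis against Annex III of the Act:

```python
# Illustrative triage helper reflecting the risk tiers listed above.
# This is a sketch, not a substitute for legal classification.

RISK_TIERS = {
    "hiring_screening": "high",
    "credit_scoring": "high",
    "medical_device_ai": "high",
    "educational_ai": "high",
    "law_enforcement_ai": "high",
    "chatbot": "limited",
    "generated_content": "limited",
    "deepfake_detection": "limited",
    "recommendations": "minimal",
    "basic_automation": "minimal",
}

def triage(system_type: str) -> str:
    """Return a first-pass risk tier; unknown systems get flagged for review."""
    return RISK_TIERS.get(system_type, "review")

for system in ("hiring_screening", "chatbot", "internal_search"):
    print(system, "->", triage(system))
```

Defaulting unknown systems to “review” rather than “minimal” is the safer design: under-classifying a high-risk system is exactly the mistake that triggers the largest penalties.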

The Practical Next Steps Everyone’s Asking About

Start with the Audit Everyone’s Avoiding: Map every AI system your company uses to potential EU touchpoints. That marketing automation platform? Customer support AI? Fraud detection system? If they affect EU individuals, they’re in scope.

The Documentation Deep Dive: High-risk AI systems need extensive documentation. We’re talking risk assessments, bias testing results, human oversight protocols, and incident reporting systems. Companies are discovering this isn’t a “set it and forget it” compliance task – it’s ongoing operational overhead.

The Training Reality: The AI Act requires AI literacy training for your team. This isn’t a one-hour compliance video – it’s genuine education about how AI systems work, their limitations, and their risks.

What US Companies Must Do Now to Comply with EU AI Act

Immediate Steps

  1. Audit your AI systems – Identify which systems could affect EU individuals
  2. Assess risk levels using eyreACT’s interactive EU AI Act risk assessment questionnaire – determine whether you’re operating high-risk, limited-risk, or minimal-risk AI
  3. Review client base – Document any EU connections in your customer portfolio
  4. Check vendor relationships – Verify if your AI tools are used by EU partners

Immediate EU AI Act Compliance Requirements for US Companies

  • Implement risk management systems for high-risk AI
  • Establish human oversight protocols
  • Create technical documentation and logs
  • Set up monitoring and reporting processes
  • Designate authorised representatives in the EU (if needed)
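To illustrate the “technical documentation and logs” requirement above, here is a hedged sketch of a per-decision log record with a human-oversight field. The schema and field names are assumptions for illustration, not a format prescribed by the Act:

```python
# Sketch of a per-decision log record for a high-risk AI system.
# The field names are assumptions, not a schema mandated by the AI Act.
import datetime
import json

def log_decision(system_id, inputs, output, reviewer=None):
    """Serialize one AI decision record, noting whether a human reviewed it."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "human_reviewer": reviewer,  # evidence of human oversight
    }
    return json.dumps(record)

entry = log_decision("cv-screener-v2", {"cv_id": "1234"}, "shortlisted", "j.doe")
print(entry)
```

In practice these records would go to an append-only store so they can back up incident reports and audits, not just a console.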

Why Early EU AI Act Compliance Is Actually Smart Business

The Competitive Angle: Enterprise customers are already asking vendors about AI Act compliance in RFPs. Being able to say “yes, we’re compliant” while your competitors are still figuring it out is a real advantage.

The Insurance Argument: Cyber insurance companies are starting to ask about AI governance policies. Good compliance practices could mean lower premiums (or getting coverage at all).

The Talent Magnet: Top engineers increasingly want to work for companies that take AI safety seriously. Strong compliance practices signal that you’re building for the long term.

The Business Case for Early Compliance

Competitive Advantage

  • EU AI Act compliance becomes a differentiator in enterprise sales
  • Early compliance demonstrates responsible AI governance
  • Builds trust with privacy-conscious customers globally

Risk Mitigation

  • Avoid fines up to €35 million or 7% of global annual turnover
  • Prevent market access restrictions in the €16 trillion EU economy
  • Reduce legal and reputational risks

Common US Company Misconceptions

“We don’t operate in Europe” – Physical presence isn’t required; digital impact is sufficient.

“We’re too small to matter” – Company size doesn’t exempt you – AI impact does.

“It’s just another privacy law” – AI Act covers algorithmic governance, not just data protection.

“We’ll deal with it later” – Compliance timelines are already underway with enforcement increasing.

Practical Next Steps

For SaaS Companies

  • Review customer contracts for EU clauses
  • Audit AI features in your software
  • Implement explainability features for high-risk use cases

For AI Tool Providers

  • Classify your AI systems by risk level
  • Create compliance documentation packages
  • Establish EU legal representation if needed

For All US Companies Using AI

  • Map AI systems to EU touchpoints
  • Develop internal AI governance policies
  • Train teams on EU AI Act requirements

The Bottom Line

The EU AI Act isn’t just a European problem. It’s a global reality for any company using AI technology. US businesses that proactively address compliance will gain competitive advantages on European markets, while those that wait risk significant penalties and EU market access restrictions.

Long story short: yes, the EU AI Act applies to your US company. Make sure you’re prepared to meet its requirements.

Frequently Asked Questions (FAQ)

Does the EU AI Act apply to US companies?
Yes, it does. US companies operating AI systems that touch EU markets, clients, or individuals must comply with the EU AI Act – even if their head offices are located in the USA.

Who must comply with the EU AI Act?
All organizations developing, deploying, or using AI systems in the EU must ensure compliance.

When is the compliance deadline?
Different provisions of the EU AI Act have varying timelines, with full compliance required by August 2026.

How can eyreACT help?
eyreACT provides automated compliance tools, documentation systems, and expert guidance to ensure full EU AI Act compliance.

Ready to Start Your EU AI Act Compliance Journey?

Take our free 5-minute assessment to understand your compliance requirements and get a personalized roadmap.
