Singapore made a calculated bet: that practical AI compliance tools and industry adoption would achieve better outcomes than binding legislation imposed before the technology matures.
Why the Lion City’s “soft law” approach might be smarter than you think — and what it means for your compliance strategy
When I first looked at Singapore’s approach to AI governance, my European regulatory lawyer brain had a small crisis.
No binding legislation? Voluntary frameworks? Self-assessment guides?
Coming from the EU — where the AI Act runs to 113 articles (plus 180 recitals), with mandatory conformity assessments and fines of up to €35 million — Singapore’s approach felt almost… too relaxed.
Then I spent time actually understanding it. And now I think there is something genuinely clever happening in Singapore that European companies should pay attention to.
First, Let’s Clear Up the Confusion: Is There a Singapore Digital Act at All?
People search for “Singapore Digital Act” all the time. Here is the thing: no such act exists. There is no single piece of legislation governing AI in Singapore.
Singapore deliberately chose not to create comprehensive AI legislation. Instead, they built an ecosystem of voluntary frameworks, practical tools, and sector-specific guidance.
This is not because Singapore does not care about AI governance. It is because they made a strategic choice: enable innovation first, regulate where necessary, and let industry develop best practices before locking them into law.
Whether you agree with this philosophy or not, you need to understand it if you are doing business in Asia-Pacific.
The Singapore AI Governance Landscape
Let me walk you through what actually exists.
National AI Strategy (NAIS)
Singapore published its first National AI Strategy in 2019. NAIS 2.0 came in late 2023 with over SGD 1 billion committed over five years for computing infrastructure, talent development, and industry advancement.
The strategy is ambitious. Singapore wants to be a global hub for developing, testing, and deploying AI solutions. They are betting that good governance enables innovation rather than restricting it.
Model AI Governance Framework
This is the cornerstone. First released in 2019, updated in 2020, and expanded in 2024 to cover generative AI.
The framework provides practical guidance on:
- Explainability and transparency
- Fairness and non-discrimination
- Human oversight and accountability
- Data management and quality
It is voluntary. But “voluntary” in Singapore increasingly means “expected standard of care.” Companies that ignore these guidelines do so at their own risk.
Model AI Governance Framework for Generative AI (2024)
With ChatGPT and friends changing everything, Singapore released specific guidance for generative AI in May 2024. Over 70 global organisations contributed, including OpenAI, Google, Microsoft, and Anthropic.
The framework covers nine dimensions:
- Accountability
- Data
- Trusted development and deployment
- Incident reporting
- Testing and assurance
- Security
- Content provenance
- Safety and alignment
- AI for public good
It reads like a practical checklist rather than legal requirements. Which is exactly the point.
AI Verify
This is where Singapore gets genuinely innovative. AI Verify is a government-developed testing toolkit that helps organisations validate their AI systems against governance principles.
Think of it as a technical implementation of the Model Framework. You can actually test your AI for fairness, explainability, and other qualities — not just write policies about them.
The AI Verify Foundation, launched in 2023, is now building an open-source community around these tools. Singapore is essentially trying to create global standards through practical adoption rather than legal mandate.
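To make “actually test your AI for fairness” concrete, here is a minimal sketch of the kind of metric such toolkits compute: a demographic parity difference, the gap in positive-prediction rates across groups. This illustrates the underlying check, not AI Verify’s actual API — the function names and data here are my own.

```python
# Illustrative fairness metric: demographic parity difference.
# This mirrors the kind of check an AI governance toolkit automates;
# it is NOT the AI Verify API -- all names here are hypothetical.

def selection_rate(predictions, groups, group):
    """Share of positive (1) predictions for one demographic group."""
    members = [p for p, g in zip(predictions, groups) if g == group]
    return sum(members) / len(members)

def demographic_parity_difference(predictions, groups):
    """Largest gap in positive-prediction rates across groups.
    0.0 means all groups receive positive outcomes at equal rates;
    larger values mean greater disparity."""
    rates = [selection_rate(predictions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy example: binary loan decisions for applicants from two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(preds, groups)
print(f"Demographic parity difference: {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

A governance process would set a tolerance for this gap, run the check on every model release, and keep the results as audit evidence — which is essentially what “testing, not just policies” means in practice.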
ISAGO (Implementation and Self-Assessment Guide for Organisations)
ISAGO 2.0 helps organisations operationalise ethical AI governance. It is a self-assessment tool that translates principles into practical steps.
For companies wondering “where do I start?” — ISAGO is actually a useful resource, even if you are not operating in Singapore.
Sector-Specific AI Regulation in Singapore
While the general approach is voluntary, specific sectors have binding requirements.
Financial Services
The Monetary Authority of Singapore (MAS) does not play around. In December 2024, MAS released mandatory AI governance requirements for regulated financial institutions.
The “Artificial Intelligence Model Risk Management” guidelines establish three mandatory focus areas:
- Board-level oversight of AI risk strategy
- Comprehensive risk management systems
- Standardised development, validation, and deployment protocols
If you are in fintech or financial services and operating in Singapore, these are not optional.
Healthcare
AI-enabled medical devices fall under the Health Products Act. They must be registered before use. The regulatory pathway is clearer because it builds on existing medical device frameworks.
Legal Sector
In 2024, Singapore’s Supreme Court released guidance on generative AI use in legal proceedings. The Ministry of Law announced in 2025 that it is developing guidelines for legal professionals using AI tools.
How Singapore Compares to the EU AI Act
Here is where it gets interesting for European companies.
| Aspect | EU AI Act | Singapore |
|---|---|---|
| Legal status | Binding legislation | Voluntary frameworks |
| Risk classification | Mandatory (prohibited, high-risk, limited, minimal) | Guidance-based, no formal categories |
| Conformity assessment | Required for high-risk AI | Self-assessment encouraged |
| Penalties | Up to €35M or 7% global turnover | No direct penalties for framework non-compliance |
| Approach | Precautionary, rights-based | Innovation-enabling, industry-led |
| Timeline | Phased enforcement 2024-2027 | Continuous evolution |
The philosophical difference is significant. EU says: “Prove your AI is safe before deploying.” Singapore says: “Here are the tools to make your AI trustworthy — use them.”
Neither approach is objectively “right.” They reflect different regulatory cultures and risk appetites.
What This Means for Your Business
If you are a European company expanding to Asia-Pacific:
Singapore’s voluntary frameworks are increasingly the regional benchmark. The ASEAN Guide on AI Governance and Ethics, released in 2024, draws heavily from Singapore’s approach.
Complying with the EU AI Act does not automatically mean you meet Singapore’s expectations. The EU focuses on legal compliance and rights protection. Singapore emphasises practical implementation and trustworthiness.
You may need to demonstrate:
- AI Verify testing results
- ISAGO self-assessment completion
- Alignment with Model Framework principles
These are different deliverables than EU AI Act technical documentation.
If you are a Singaporean company selling to Europe:
Voluntary frameworks will not save you. The EU AI Act applies to anyone placing AI systems on the European market, regardless of where the company is headquartered.
The good news: Singapore’s frameworks align reasonably well with EU principles. Companies following ISAGO and the Model Framework are not starting from zero.
The challenge: you need binding documentation, conformity assessments, and evidence management that Singaporean frameworks do not require. The governance mindset is similar, but the compliance mechanics are different.
If you operate in both jurisdictions:
This is increasingly common, and it requires careful thinking.
My recommendation: use the EU AI Act as your compliance baseline (it is more demanding), then layer Singapore-specific elements on top. The documentation you create for EU purposes can often satisfy Singapore expectations with minor adaptation.
But do not assume one size fits all. Singapore’s emphasis on AI Verify testing, for example, goes beyond what the EU AI Act requires. And MAS requirements for financial services may impose obligations that EU frameworks do not address.
The 2026 Developments Worth Watching
Singapore is not standing still.
Agentic AI Governance
In October 2025, Singapore released draft guidance specifically for agentic AI — systems capable of autonomous decision-making and goal-setting. This is ahead of most jurisdictions.
The addendum introduces:
- Capability-based risk framing
- Workflow mapping to identify autonomy-related risks
- Human-in-the-loop oversight requirements
- Scenario-based testing
Public consultation is open until December 2025. If you are building autonomous AI agents, this document is worth reading.
Quantum-Safe Guidelines
Also released in October 2025, these guidelines address the threat quantum computing poses to current cryptography. Singapore is thinking ahead about long-term AI security implications.
International Alignment
Singapore is actively working to align its frameworks with OECD AI Principles and the Global Partnership on AI (GPAI) Code of Practice. The goal is interoperability with EU, UK, and US assurance models.
This matters for multinational companies. If Singapore’s frameworks become recognised equivalents to EU requirements, compliance could become more efficient.
Practical Recommendations
For compliance teams:
- Do not ignore Singapore’s voluntary frameworks just because they are not legally binding. They represent the expected standard of care and will likely influence future regulation.
- Use AI Verify even if you are not required to. The testing methodology is solid and generates evidence useful for any jurisdiction.
- Map your EU AI Act documentation to Singapore’s nine dimensions. Identify gaps where Singapore expects something the EU does not require.
- If you are in financial services, treat MAS guidelines as mandatory — because they are.
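The mapping exercise above can be as simple as an inventory check: list Singapore’s nine generative-AI dimensions, record which of your existing EU artifacts cover each one, and flag the rest as gaps. The sketch below shows the idea; the EU artifact names and the mapping itself are illustrative examples, not an official crosswalk.

```python
# Illustrative gap analysis: which of Singapore's nine generative-AI
# dimensions are already covered by artifacts produced for the EU AI Act.
# The inventory below is a hypothetical example, not an official mapping.

SINGAPORE_DIMENSIONS = [
    "Accountability", "Data", "Trusted development and deployment",
    "Incident reporting", "Testing and assurance", "Security",
    "Content provenance", "Safety and alignment", "AI for public good",
]

# Hypothetical inventory: dimension -> EU AI Act artifacts that cover it.
eu_artifacts = {
    "Accountability": ["Annex IV technical documentation"],
    "Data": ["Data governance records"],
    "Testing and assurance": ["Conformity assessment report"],
    "Incident reporting": ["Serious-incident reporting procedure"],
}

gaps = [d for d in SINGAPORE_DIMENSIONS if not eu_artifacts.get(d)]
print("Dimensions with no mapped EU artifact:")
for d in gaps:
    print(f" - {d}")
```

Whatever tooling you use, the output of this exercise — a short list of dimensions your EU documentation does not yet address — is exactly what your Singapore-facing workstream should prioritise.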
For product teams:
- Build with Singapore’s principles in mind from the start. Explainability, fairness, and human oversight are easier to design in than retrofit.
- Consider AI Verify testing as part of your development process. It is free and catches issues early.
- Document your AI governance decisions. Both EU and Singapore expect you to show your work.
For leadership:
- Understand that “voluntary” in Singapore does not mean “optional” for serious companies. Reputational and commercial pressures enforce what law does not.
- Watch the MAS space. Financial services regulation often previews what becomes general practice.
- Singapore is positioning itself as the bridge between Western and Asian AI governance. Companies that align with Singapore frameworks may find regional expansion easier.
My Take
I started this research slightly dismissive of Singapore’s soft-law approach. I end it with genuine respect.
Singapore made a calculated bet: that practical tools and industry adoption would achieve better outcomes than binding legislation imposed before the technology matures. The jury is still out on whether that bet pays off.
What I can say is that Singapore’s frameworks are thoughtful, practical, and increasingly influential across Asia-Pacific. European companies ignoring them are making a mistake.
The future likely involves some convergence. Singapore is actively aligning with international standards. The EU is learning that implementation matters as much as legislation. Both jurisdictions are watching each other.
For companies operating globally, the smart approach is not to pick sides but to build governance systems that satisfy both philosophies. Document like the EU requires. Test like Singapore encourages. And stay flexible — this landscape is evolving fast.
Further Reading
- Model AI Governance Framework for Generative AI (2024)
- AI Verify Foundation
- MAS AI Model Risk Management Guidelines
- ISAGO 2.0 Self-Assessment Guide
- ASEAN Guide on AI Governance and Ethics
Have questions about managing compliance across EU and Singapore? We are building eyreACT to help companies navigate exactly this challenge. Get in touch!