
We talk a lot about what AI can do in healthcare, from predicting health risks before they escalate to personalizing treatment plans. But we often miss an uncomfortable truth: most countries don’t have the rules in place to make sure these systems are safe and trusted. Recent studies show that only 15.2% of nations have developed robust regulations for healthcare AI.
That gap matters. Why? Because without clear regulations, patients hesitate to trust these systems, and groundbreaking projects end up stuck at the pilot stage rather than changing lives.
In the Middle East, countries are trying to write a different story. Last month, the Global AI Competitiveness Index placed Saudi Arabia and the UAE among the world’s top 20 countries for AI talent density, ahead of some major economies. While this achievement speaks volumes about the region’s commitment to developing world-class expertise, it’s not the only area where these nations are moving ahead.
Just as important, and perhaps even more impactful for healthcare, is how the Gulf states are approaching regulation. They understand that AI talent and investment can only go so far. That’s why they are putting just as much effort into building agile, forward-looking policies that protect patient data, ensure ethical use, and keep innovation moving at speed.
When we discuss “regulation”, we often think of it purely as compliance: a set of rules that must be followed. But it’s much more than that. Regulation is about ensuring that AI improves healthcare rather than widening existing disparities.
Without clear guardrails, risks emerge quickly: biased algorithms can widen the very disparities AI is meant to close, patient privacy can be compromised, and no one is clearly accountable when a tool gets something wrong.

This is why regulation matters. It’s not about slowing AI down. When patients know their privacy is safeguarded and clinicians know the tools are accountable, adoption becomes smoother and more sustainable. In fact, countries in the Middle East are proving that regulation doesn’t have to be a roadblock; it can be an enabler.
Saudi Arabia has been at the forefront of regulating AI in healthcare, embedding ethical principles, data privacy protections, and cultural values into its rules. In 2022, the Saudi Data and Artificial Intelligence Authority (SDAIA) issued its AI Ethics Principles, which apply to private, public, and non-profit entities developing or adopting AI solutions.
For healthcare, these principles are especially important: they put fairness, privacy, and accountability on the table before an AI tool ever reaches a patient.
If Saudi Arabia is building trust by tightening data protection, the UAE is taking a different route: regulating by experimenting. It has positioned itself as a kind of living lab for AI in healthcare, where innovation can be tested quickly but always under oversight.
What makes the country stand out is its governance. Back in 2017, it appointed the world’s first Minister of State for Artificial Intelligence, a portfolio that has since grown to cover the digital economy and remote work applications. The UAE Council for Artificial Intelligence and Blockchain has since rolled out a roadmap that covers everything from data protection laws to AI ethics guidelines. At the city level, both Abu Dhabi and Dubai have their own AI-in-healthcare policies, something that is rare globally.
Moreover, the UAE has enacted two national laws with direct implications for healthcare AI. The Personal Data Protection Law (PDPL) categorizes health and biometric data as “sensitive” and allows its processing without patient consent for purposes such as diagnosis, treatment, and public health. The ICT in Health Fields Law imposes strict rules on confidentiality and data localization, meaning health data generally cannot leave the country without specific approvals. While this ensures robust protections, it also complicates training AI models, which typically benefit from large, diverse datasets that often span borders.
Unlike other countries in the region, Qatar doesn’t carve out “health data” as its own legal category. Instead, its data protection law treats medical information as “personal data of a special nature”, similar to the GDPR’s approach. The country’s National AI Strategy highlights healthcare as one of the priority sectors where AI can deliver the most impact.
Qatar’s Data Protection Law and related guidelines enforce strict controls on how health data can be stored or shared. Sensitive categories, such as genetic and biometric information, face the most stringent controls, and healthcare organizations must have safeguards like anonymization or encryption in place before data can be accessed or used to train AI models.
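To make that requirement concrete, here is a minimal sketch of what pseudonymizing and encrypting a patient record before it enters an AI pipeline might look like. It is illustrative only, not drawn from any Gulf regulator’s technical guidance; the field names, salt handling, and key management are all assumptions.

```python
import hashlib
import json
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical patient record; field names are illustrative only.
record = {
    "national_id": "1234567890",
    "name": "Jane Doe",
    "diagnosis_code": "E11.9",
    "hba1c": 7.2,
}

SALT = "replace-with-a-secret-salt"  # stored separately from the data itself

def pseudonymize(rec: dict) -> dict:
    """Replace direct identifiers with a salted one-way hash
    and drop fields a model has no need to see."""
    pseudo_id = hashlib.sha256((SALT + rec["national_id"]).encode()).hexdigest()
    return {
        "patient_ref": pseudo_id,       # stable reference, not reversible
        "diagnosis_code": rec["diagnosis_code"],
        "hba1c": rec["hba1c"],
        # "name" and "national_id" are deliberately omitted
    }

# Encrypt the de-identified record at rest before the AI pipeline sees it.
key = Fernet.generate_key()             # in practice: a managed key service
cipher = Fernet(key)
safe_record = pseudonymize(record)
encrypted = cipher.encrypt(json.dumps(safe_record).encode())

# Only a job holding the key can recover the de-identified record.
decrypted = json.loads(cipher.decrypt(encrypted))
assert "national_id" not in decrypted
```

The order of operations is the point: direct identifiers are stripped or hashed first, and encryption protects whatever remains, so the model only ever sees de-identified data.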
Similarly, Oman is at an earlier stage but moving in the right direction. Its 2022 Personal Data Protection Law explicitly classifies health information as sensitive data, requiring higher standards for consent, storage, and processing. While Oman doesn’t yet have AI-specific regulations for healthcare, this legal foundation signals a clear intent: before AI tools are widely adopted in hospitals or research, the underlying protections for patient data must be firmly in place.
The Gulf experience shows that regulation doesn’t necessarily slow innovation; it can reshape it. By prioritizing ethics and embedding patient protection in their AI strategies, countries in the region are proving that safeguards can coexist with speed. Many advanced economies still grapple with this: too often, regulation is treated as red tape that holds back innovation. The Middle East shows it doesn’t have to be that way.
The lesson is simple: don’t wait until trust is broken to act. Build the trust first, and innovation will follow.
What lessons from the Middle East’s approach do you think other regions should adopt first? Let’s discuss.