4.2.1 “Hard” Governance in AI Development


Governments globally are increasingly regulating how AI models are developed, with risk management requirements the most common thread among such regulations. The EU AI Act, the world’s first comprehensive, dedicated AI regulation, mandates data governance (Art. 10) and transparency (Art. 13) for high-risk systems, while classifying some generative AI models as posing systemic risk (Art. 51), which triggers additional obligations (Art. 55). 14 Similar laws have passed in South Korea and are being drafted in Brazil and other Latin American nations. Other countries, like the UK, have taken lighter-touch or sector-specific approaches, while others, like the U.S., have remained largely hands-off or have left regulation to the sub-national level.

AI-specific regulations are supplemented by cross-sectoral (i.e., not AI-specific) legislation in multiple jurisdictions covering topics such as data localisation, copyright, and data protection, for example, the GDPR and the DSA in the EU. Amidst concerns that AI regulations could hamper innovation, some jurisdictions have introduced AI regulatory sandboxes: sectoral in the UK and Singapore, cross-sectoral in South Korea and Norway. 15 16 In other jurisdictions, such as India, AI is being regulated under existing laws while the need for dedicated regulation is explored.

Liability regimes for AI have been discussed globally but are still evolving. China places strict liability for AI-generated content on developers under its Interim Measures (an approach that courts have subsequently enforced), while discussions in the EU have revolved around the now-withdrawn AI Liability Directive, which would have introduced a presumption of causality requiring AI providers to prove that their systems did not cause alleged harms. In some jurisdictions, existing negligence and product liability laws are being used to impose liability, as in the EU under its updated Product Liability Directive.

Some governments seek to regulate training data. The EU AI Act (Art. 53(1)(c)) applies copyright obligations to model training datasets, and the UK is drafting similar rules. Beyond copyright, France and Italy have taken GDPR-based legal actions against unauthorized use of personal data for model training, while China requires training datasets to reflect “core socialist values.”

Governments also regulate model outputs in some jurisdictions. For example, watermarking of AI-generated content is required under the EU AI Act (Art. 50), with additional obligations (Art. 55) for general-purpose AI models deemed to pose systemic risk (Art. 51), as well as in China and India. Output filtering also features in some jurisdictions: the EU AI Act (Art. 52) targets illegal content, while China and Russia impose stricter controls based on their own definitions of illegal content, including requirements for political and ideological alignment that are not present in rights-protecting jurisdictions. Some EU member states (e.g., Italy) have taken a more prescriptive approach in implementing the EU AI Act, introducing criminal offenses and mechanisms for content traceability and authenticity. In addition, China requires models to be licensed ahead of deployment as a further means of ensuring compliance with censorship rules.

Finally, as with AI infrastructure, national security concerns have also prompted export controls: for example, China restricts certain AI model exports, while the U.S. has considered export restrictions on model weights.

Many in civil society and even some governments argue that current laws, as well as related enforcement and remedy mechanisms, are insufficient to address emerging AI risks. 17 These gaps create challenges for accountability and transparency, and the absence of strong AI regulation can ultimately harm human rights.

14 A high-risk system is defined in Art. 6 of the EU AI Act.

15 A controlled environment that allows developers to test and validate innovative AI systems under regulatory supervision for a limited time, aiming to foster innovation while identifying and mitigating potential risks.

16 There have been discussions within academia and civil society on ensuring such sandboxes are inclusive and promote responsible AI development.

17 See examples from U.S. and pan-African civil society.





