Governments globally are adopting diverse regulatory approaches intended to influence AI deployment, ranging from outright prohibitions in certain sensitive areas to outcome-oriented mandates backed by varying forms of enforcement and liability apportionment. In some cases, governments are attempting to restrict when and where AI can be used. For example, the EU AI Act (Art. 5) prohibits use cases that pose disproportionate risks to rights, as well as use in certain defined circumstances (e.g., elections and CSAM). It also designates specific use cases as high risk (Art. 6), which require additional assessment and mitigation measures, including risk-based fundamental rights impact assessments (Art. 27) in certain limited instances. Furthermore, companies could be required to conduct human rights due diligence for their EU AI deployments under the CSDDD (Art. 1).
However, to date, it has been more common for governments to affirmatively mandate the use of AI. For instance, India’s Intermediary Guidelines require the use of AI tools to proactively detect and remove misinformation, deepfakes, and illegal content; the UK’s Online Safety Act mandates the use of “proactive technologies” for content moderation; and Vietnam requires the use of AI for rapid “toxic content” detection and takedown.
Some countries restrict access to tools like DeepSeek on government devices due to national security concerns. Others, such as Russia, Turkey, and China, block certain AI services entirely, whether on national security grounds, as part of efforts to control their information ecosystems, or to ensure alignment with domestic laws on content and data sovereignty.
Pre-deployment and ongoing model evaluations are mandated in some jurisdictions. California’s vetoed SB 1047 represented the first state-level attempt in the US to establish deployment restrictions based on third-party safety audits for large AI models. 22 California has since passed SB 53, which places transparency, safety, and accountability obligations on frontier AI developers and has downstream effects on deployers. Similarly, the EU AI Act (Art. 43) mandates third-party evaluations for high-risk systems, while New York City requires third-party assessments of hiring algorithms. In China, model security self-assessments must be filed with the Cyberspace Administration of China (CAC) before deployment.
Finally, documented law enforcement demands for AI chatbot logs have begun to emerge in some jurisdictions, such as the US and UK, and it is likely that similar requests for user data, as well as broader law enforcement use of AI for surveillance purposes, will emerge in other jurisdictions as well.
22 While these requirements technically begin prior to deployment, given that such mandates often include ongoing evaluation, we see them as fitting in the “deployment,” rather than the “development,” phase of the AI lifecycle.
[Interactive figure: Government Interventions in AI — quadrant taxonomy]