In addition to or in lieu of binding regulations, some jurisdictions have adopted principle-based approaches to build trust and mitigate risk in model development. For instance, Australia’s AI Ethics Principles offer high-level, non-binding guidance on fairness, privacy, and accountability, emphasizing co-regulation among government, industry, and civil society. Singapore’s AI Verify provides voluntary testing and benchmarking tools for responsible AI, while its Model AI Governance Framework sets out guidance emphasizing interoperability and accountability. Similar initiatives include Japan’s AI Guidelines for Business, India’s AI Safety Institute, the EU’s voluntary AI code of practice complementing the EU AI Act, and the Inter-American Development Bank’s (IDB) fAIr LAC+ platform in Latin America and the Caribbean.18
In relation to training data, the applicability of existing copyright law to AI training is currently being contested in multiple jurisdictions. The UK is exploring a pioneering collective licensing initiative, supported though not led by the government, to ensure that authors are fairly compensated when their works are used to train AI models. This approach aims to balance the quality and diversity of training data against the protection of intellectual property rights and the livelihoods of content creators.
18 While not a government, the IDB influences Latin American and Caribbean governments by conditioning loans and cooperation on policy guidelines and by providing technical expertise to address capacity gaps.
Government Interventions in AI (interactive taxonomy figure)