Governments around the world are deploying AI in efforts to enhance the delivery of public services. Examples include the Indian government’s use of AI in agriculture, Rwanda’s use of AI-enabled triage tools in healthcare, Singapore’s Virtual Intelligent Chat Assistant (VICA) for public service delivery, the UK National Health Service’s use of predictive analytics to identify the need for early social care interventions, and the Office of the Federal Chief Information Officer’s extensive documentation of AI use cases by federal agencies in the US.
In some jurisdictions, public service delivery extends to the use of AI (especially facial recognition) for surveillance that underpins law enforcement. Beyond well-documented use cases in China, AI-based surveillance is used extensively in Israel and Singapore, affecting the right to privacy and other rights in all three countries.
To promote rights-protecting public sector use, the UK has developed procurement guidelines for the ethical adoption of AI in government operations, while a U.S. federal memorandum provides guidance that, among other things, requires agencies to identify the risks of high-impact AI use cases through an AI Impact Assessment. Canada mandates Algorithmic Impact Assessments for public sector deployments of AI in automated decision-making.
Such guidance can also have broader consequences. For example, the Trump Administration’s July 2025 AI Action Plan and Executive Order on Preventing Woke AI in the Federal Government mandate that procurement guidelines be updated so that only AI systems deemed objective and free from “top-down ideological bias” are eligible for government contracts. The lack of clarity as to how bias can be objectively defined and its absence demonstrated has raised concerns about the practicality of this requirement, as well as its consistency with freedom of expression.