Tech, media and telecoms

Is the UK Government's use of algorithms missing the beat?

By Alessandra Baldacchino and Michaela Smart

The pressure on governments to cut costs and increase efficiency in core governance functions is set to grow. In the UK, the covid-19 pandemic has brought a huge expansion of public spending, while at the same time posing challenges to how public services are delivered.

Artificial intelligence (AI) technologies and applications could form part of the solution. They could reduce the cost of core governance functions, improve the quality and speed of decisions, and unleash the power of public data - making the UK state more efficient and responsive. However, the use of AI in the UK public sector has taken a hit this summer. Most publicly, the Department for Education made a significant U-turn over the algorithm used to determine the A-level results of students unable to sit exams due to lockdown. It was a stark example of the pitfalls of the technology: what may look fair on the basis of complex modelling may not last long in the cauldron of public opinion.

Other departments and public bodies have had similar experiences. The Home Office announced it was scrapping a tool it had used since 2015 to screen visa applications. And around 20 local councils across the UK have reportedly scrapped systems used to make sensitive decisions on social benefits and welfare, including assessing the risk to children of abuse or neglect.

Pressure will continue to rise to find savings in the public sector. Concerns over AI’s efficacy and ethics add an unwelcome barrier for those commissioning services and products. Cases where automation was introduced and then abandoned are likely to create a feeling of ‘once bitten, twice shy’ for officials spending taxpayers’ cash.

The government remains keen to embrace more automation and algorithmic decision-making in the future. But expansive medium-term use of AI in the UK public sector will depend principally on two key factors:

 

1. Public trust

The use of AI in any sector poses fundamental questions about accountability, but these intensify in the public sector. Public officials, elected or otherwise, work on behalf of citizens. When they make decisions that affect citizens’ rights, withhold benefits, or deny them opportunity, they must explain why. AI can add confusion to this process, and at worst create plausible deniability.

Not only are algorithmic systems complex and hard to explain, but many advanced AI tools are not fully explainable in and of themselves. The inner logic of AI and algorithmic tools is often unknown. This was one of the central reasons local councils scrapped their welfare tools: officials could not determine why certain outcomes were being reached. Whether public bodies can align AI tools and algorithms with existing legal norms of accountability, transparency and non-discrimination will be critical to their future.

Government will continue to face resistance over sensitive applications. For instance, facial recognition technology used by police forces has raised serious concerns around transparency and effectiveness. Not only does it remain somewhat unclear to the public what technologies police forces are using, but there are significant concerns about whether these technologies can function without error. Given the sensitivity of this data, and the potential harms to personal freedoms, resistance to advancement in this sector will likely continue.

Public failures only serve to weaken public support for these tools, so how the government develops its AI governance architecture will be key to building public trust. At a national level, we are likely to see the government press on with the use of AI and algorithms, despite recent troubles. At the local level, the situation is more nuanced. Without the direct protection of well-known politicians, local government officials open themselves up to a high level of scrutiny when choosing to use technologies they often struggle to explain. At this level, it is therefore reasonable to expect a degree of caution in the introduction of AI applications.

 

2. The legislative landscape

Despite a growing appetite among many government departments to use AI to improve public services, the lack of a comprehensive regulatory framework for AI is a major obstacle. The Committee on Standards in Public Life's 2020 review into AI and public standards argued that a clear framework for the implementation of AI, embedded in law and regulation, would help build trust in new technologies among public officials and users, and accelerate adoption. Without such a framework, public bodies are still focusing on early-stage digitisation of services rather than more ambitious projects using more transformative technologies.

The delay in the public sector roll-out of AI is consistent with the UK government's more cautious, ethical and evidence-based approach to the technology more generally. The establishment of the Centre for Data Ethics and Innovation in 2019, and the designation of the Alan Turing Institute as the national institute for data science and AI in 2017, are evidence of this. However, while the UK government may want to develop AI regulation, Brexit and covid-19 have paralysed the domestic agenda.

 

*******

 

The future of algorithmic governance in the UK is fragile. Managed well, AI tools can modernise and strengthen public administration. But managed poorly, government deployment of AI tools can widen the public-private technology gap, create undesirable opacity in public decision-making, and heighten concerns about arbitrary government action. It is thus reasonable to expect the UK to establish a clearer governance structure for AI, both within the public sector and more widely - and such regulation is unlikely to stifle innovation.

Within central government, AI tools are unlikely to fall out of favour; the efficiency savings they offer, and the belief amongst Number 10 political advisers that technology can future-proof systems, will sustain their influence. It is therefore likely that, in the long term, AI tools will be used by an increasing range of public bodies.

As for the private sector, the UK government is expected to continue to rely on industry knowledge and expertise to deliver quality results at speed. The announcement in June of Guidelines for AI procurement - setting out how public sector organisations can employ "off the shelf" AI - confirms this trend. The ring-fencing of £200m for AI procurement over the next four years, and the establishment of a dynamic purchasing system for AI by the Crown Commercial Service, also show a clear trajectory of increased AI procurement.

However, the AI industry and government officials alike will need to prepare for unavoidable questions of data privacy, ethics, and national security to unlock the expansive use of AI in the public sector.

 

Proactive planning

Those in the AI and wider technology industry need to be proactive in mitigating the effects of potential upcoming changes:

  1. Changes in regulation should be expected, especially in areas where sensitive data is used (police, NHS, customs, and borders). Regulation is likely to concern data security, accountability, national security, and ethical practice. Stakeholders should consider mapping the likely trajectory of regulation and engaging directly with policymakers to shape the landscape. Government is hard-pressed and short of ideas and technical expertise; industry can help. Conforming to the highest standards of data security and ethical practice is also advised.
  2. Government procurement of AI remains a mixed bag. While we can expect a drop in the public deployment of AI tools in the short term, in the long term government departments and agencies are likely to move towards buying software off the shelf to keep up with an increasingly digitised society. Procurement of services via the Crown Commercial Service's AI framework is expected, and those in the industry are therefore likely to benefit from engaging with it.
  3. Reputation is important in this industry, both within government and among the wider public. Those in the AI sector should take pre-emptive and reactive measures to protect their reputational integrity. This is especially important when dealing with local government officials, some of whom have had less positive experiences with AI and digital technology in general.

The views expressed in this research can be attributed to the named author(s) only.