What should a national strategy on AI policy look like?

The use of big data to drive machine learning and artificial intelligence will transform the way people live and work and the way economies function. In making a national strategy for AI, policymakers will have to weigh priorities that span everything from workplace regulation to industrial policy. What should a national strategy for AI look like? Is one even possible, given the breadth of the terrain and the pace of change?

In the global race to dominate the creation of the technologies themselves, Europe appears to have made an uncertain start. Chinese and US start-ups, for example, attracted 87% of global equity funding to AI start-ups in 2017. The large domestic markets of the US and China appear to have made scaling easy. European responses through public funding have been isolated and focused on pre-commercialisation research in a way that has yet to register in international competition. The UK, which is allocating around €1 bn to AI development and benefits from a favourable AI ecosystem, is Europe's strongest contender in the race for AI funding, research and talent. Elsewhere in Europe, fragmented national efforts to attract inward investment and the €2.5 bn EU funding programme for AI seem unlikely to materially alter this picture.

What conclusions should we draw from this? Is Europe’s deficit reversible? Has the race to dominate these technologies sidelined debates about other aspects of this change in China and the US? And conversely – does Europe’s relative weakness in this respect actually make it fertile ground for thinking through questions of competition, equity and regulation?

Under budgetary constraints, one potential strategy for European policymakers will be to turn to regulation, either to strengthen Europe's global AI competitiveness or to ringfence the EU market from external challengers. Liability policy is emerging as one playing field for the former; tax for the latter. Data policy will be pulled in both directions. Businesses may hope for a harmonised European landscape on liability to provide legal certainty and help investors weigh risk, but with European countries like Estonia developing their own liability frameworks, there is still a clear risk of fragmentation instead. On data, policymakers are on one hand asking whether EU data protection and privacy law is a check on ambitions for global competitiveness in AI. On the other, they are tempted to see a more interventionist approach as a way to erode the dominance of large US tech firms, an ambition often linked to questions of taxation.

Where should these balances between competitiveness and a sound and robust regulatory framework be struck? Can they be reconciled?

European policymakers have also been much quicker than their US and Chinese counterparts to adopt a defensive position on the potential disruptive social impacts of AI. Businesses have responded to this concern with a flurry of national AI codes of conduct and ethical guidelines. Competition policymakers have begun to think hard and publicly about conventional theories of market power and network effects. Yet with AI technologies still at an early stage of development, it is hard to predict their future implementations and long-term social consequences with accuracy. 

Are attempts to get out in front of AI disruption the right approach? Are there parallels with globalisation in the lessons to be learnt about securing social acceptability for disruptive change? Do attempts to frame problems pre-emptively risk shutting down avenues for innovation, or colouring public attitudes in a counterproductive way?
 

Work in progress: selected national AI strategies

This article was written for the Politics of AI conference convened by Global Counsel in 2019 and forms part of a wider AI briefing pack: /insights/report/politics-ai

    The views expressed in this research can be attributed to the named author(s) only.