
The old, the accelerated and the new: three ways of thinking about generative AI policy


Platforms such as ChatGPT and DALL-E are revolutionising how we create, communicate, learn, and work, with millions of users globally reaping the benefits. However, just as quickly as generative AI technologies entered the spotlight, so did concerns about how these tools could be used to cause harm. Policymakers across the globe are under pressure to ensure that regulatory frameworks protect against the risks while allowing the benefits to flourish.

This has led to a flurry of policy activity, including in the European Parliament, where amendments are being discussed that would classify generative AI as ‘high risk’ under the forthcoming AI Act. The Italian data protection regulator has imposed a series of bans on generative AI chatbots over concerns about the safety of minors, and New York City’s Department of Education has announced a ban on ChatGPT on school devices and networks.

As policymakers come to terms with what the technology is actually capable of, and where the risks lie, it will become increasingly evident that there are three basic types of questions that governments and regulators will need to address.

1. Existing policy questions that will become more pressing

Generative AI brings greater scale and efficacy to automated processes, amplifying and accelerating several policy challenges that governments and regulators are already grappling with.

For example, criminal activity can now be carried out with greater authenticity, at larger scale and with higher output. With the ability to generate human-like text and speech, potentially even personalised to the recipient, phishing emails and social engineering attacks can take place on an unprecedented scale. Generative AI can even be used to learn from a large dataset of previous phishing emails and generate new, more convincing ones that are harder to detect.

A similar dynamic applies to bots, deepfakes and other automated methods used for mis- and disinformation, revenge porn and hate speech. Although chatbots such as ChatGPT typically disallow malicious uses through their usage policies, bad actors could train AI models on false information, which the models would then spread. In some cases, malicious intent is not even required: decades of false information already on the internet inevitably feed into models’ outputs and perpetuate inaccuracies. Google’s Bard chatbot, for instance, delivered a disputed astronomical claim in its very first demonstration.

Generative AI also further complicates existing tensions at the geopolitical level, specifically in the competition between China and the West. There is an increasingly looming prospect of China developing alternative large language models trained on completely different datasets from those used by OpenAI and other American companies. However, while Chinese industry might be keen to get ahead and stay competitive (with Baidu launching ERNIE, a Chinese rival to ChatGPT, this week) and China’s Ministry of Science and Technology has declared its support for integrating AI across industry, the country’s strict internet controls might make this difficult. Baidu’s new ERNIE chatbot will draw on information scraped from both inside and outside China’s firewall, including restricted sites such as Wikipedia and Reddit. Whether the Chinese government decides to make exceptions to its current internet laws in the name of remaining competitive on generative AI remains to be seen.

2. Old policy questions in a new setting 

Many of the policy questions raised by generative AI are long-standing ones that will simply manifest differently in this new setting, especially in areas such as intellectual property and platform liability.

Intellectual property is always one of the first issues to be tested by a new wave of digital transformation, but generative AI is producing particularly novel issues. Traditionally, IP disputes have focused on outputs – re-using or re-issuing someone else’s audio, video or writing – but with generative AI, the focus is shifting to inputs. Because large language models are trained on enormous amounts of data, that data will likely include third-party IP whose use has not been authorised. This sort of alleged infringement has already resulted in litigation, most recently with Getty Images filing a lawsuit against Stability AI, accusing the company of infringing its copyrights by misusing Getty photos to train its AI art-generation tool.

On the platform liability front, previous waves of communications technologies (e.g., social media apps) have often been shielded from liability for content posted on their platforms under Section 230 in the US, but this may cease to be the case with platforms such as ChatGPT. There is a case to be made that the output of a chatbot is content developed, at least in part, by the tech company itself – effectively making the likes of Google or Bing the “publisher or speaker” of the AI’s responses. For this very reason, policymakers in Europe are now thinking about regulating generative AI via the AI Act and its associated liability regimes, rather than through the Digital Services Act.

3. New policy questions

Finally, there will also be genuinely novel policy questions prompted by the growth of generative AI.

Even prior to generative AI’s exponential uptake, there were questions about whether the technology had advanced to the point that it justified an entirely new approach. Last summer, there was public controversy around the ‘sentience’ of Google’s LaMDA chatbot, after a former employee suggested that it was in many ways equivalent to a human child.

Although that particular view is very much disputed, the legal rights and responsibilities of AI models are being questioned and re-assessed in unprecedented ways. For example, with a non-player character (NPC) in a game whose speech is determined by generative AI, where does liability fall if that character starts verbally abusing a human player? If it can’t be the AI itself, then is it the gaming company? Or the AI designer? 

More immediately, there are questions over whether AI-generated works attract copyright and whether an AI can hold inventorship or patent rights. While in the US the Copyright Office will not register a work created purely by an autonomous AI tool, UK legislation provides copyright protection for works without a human author: Section 9 of the Copyright, Designs and Patents Act (CDPA) treats the person who made the arrangements necessary for a computer-generated work as its author, and that protection can last for 50 years. The UK Supreme Court has heard a case this month that looks to take this one step further, asking whether an AI can actually own patent rights and be named as the inventor.

Looking forward

As evidenced by the examples above, the trajectory of future regulation for generative AI is far from cut and dried. 

There is already a messy mixture of litigation, existing data regulation, new sectoral rules and cross-cutting horizontal legislation, and this jigsaw of measures will likely persist. Different markets will tend more towards one approach than another – the US has a history of opting for litigation to settle tech policy concerns, the UK is expected to set new principles for sectoral regulation, and the EU is finalising its AI Act. The use cases where the technology is most commonly deployed will also shape the scope and pace of regulation.

What the future holds for generative AI regulation is therefore unclear, but if prior waves of tech regulation are anything to go by, looking to Europe as a first mover may provide some clues.
 

The views expressed in this research can be attributed to the named author(s) only.