The initial promise of the internet age was a free and open space for exchanging information. However, the advent of algorithm-driven content raises important questions about what we see, how far we share a common information landscape, and the reliability of the sources we rely on for news.
One of the biggest transformations in media is the data-driven turn towards users and audiences, and the advertising model that governs it. This model relies heavily on the speed and volume of delivery: whether information is read matters more than its quality. Platforms also compete by customising news, entertainment and information based on preferences ‘revealed’ by clicks, likes and searches. Users’ data is in turn sold to advertisers, whose interest is generally in capturing eyeballs rather than in the quality, breadth or objectivity of the information users see.
On the positive side, digital media has been shown to expand the number of sources to which readers are exposed – contradicting oft-expressed concerns about ‘echo chambers’ and ‘filter bubbles’. Citizens’ media literacy is also developing to account for a wider range of views. However, the current social media landscape poses other problems: the competition for attention rewards inaccuracy and extremism. This continues to shape the public sphere into a place where emotion and simplicity are the norm, hampering efforts to persuade through nuanced argument. This has important implications for politics and political campaigning.
The rise of AI-driven disinformation also threatens to undermine our sense of shared reality. Synthesised text, audio and video powered by machine learning threaten to produce artefacts indistinguishable from real life, both sowing confusion and providing plausible deniability to gaffe-prone or scandal-hit politicians. Combined with the sharing platforms we use at scale, including encrypted messaging, this is already creating enormous social discord in countries such as India and Myanmar.
But AI can also be used for enormous social benefit in the information environment. Data aggregation and analytics tools similar to those used by intelligence services and law enforcement could create efficiencies and opportunities in journalism itself, as well as business processes such as regulatory compliance, know-your-customer checks and anti-money laundering efforts.
Can an advertising revenue model for large platforms and information aggregators be made more conducive to heterogeneous sources of information and news? If so, how?
Transparency is a major issue. Social media algorithms are constantly evolving and growing in complexity, turning into opaque black boxes of decision-making. Users have little understanding of what personal data is used, or how, and even less control over or means of influencing these processes. What platforms tell their advertisers they know about their users can be somewhat at odds with the picture they present to users themselves.
How much transparency is required with respect to the use of user data on platforms? Where should the line be drawn between proprietary analytical technology and users’ interests in their own data, its collection and its interpretation? Does the advertising-driven model create incentives to use data in ways that would differ under other revenue models?
Revenue and AI information curation models in the platform economy
This article was written for the Politics of AI conference convened by Global Counsel in 2019 and forms a part of a wider AI briefing pack: /insights/report/politics-ai