Risks in AI: re-evaluating the regulatory rulebook

Financial Services

AI is already transforming the delivery of financial services. This is true both at the highly sophisticated end of the market, where firms use AI to execute high-frequency trading strategies, and at the level of the individual retail investor, where AI can tailor and deliver financial “robo-advice” to consumers. Deferring to machine judgement and automation is by no means new – automated trading is already the dominant form in most large markets. Yet AI clearly has much further to reach into financial decision-making – from investment and trading strategies to client classification and product recommendations.

The scope in all of these areas to improve business processes, mitigate risk, lower costs and sharpen decision-making is clear. From the predictive power of big data in insurance to data-driven ‘contextual banking’, customer information is increasingly used to drive improvements, customise services and refine risk profiles. This can create the more personal, individualised services that consumers are familiar with – and increasingly expect – in other areas of their lives, such as shopping, entertainment or news provision. It can also lower costs such as insurance premiums or investment fees.

But the same big data effects – machine learning and automated judgements – that drive these benefits also present potential challenges. Refined models of risk will inevitably mean rising costs for some – or even exclusion from services. Machine-learnt strategies may inherit the biases of the people who program and ‘teach’ them. Customers may also begin to resist the concept that the trade-off for lower prices is to “pay” in personal data, especially if this means a very close scrutiny of their financial, health or other personal data.

Will customers always be comfortable with the experience of being contextualised by their data? What questions does this use of data raise? And how will the inherent value of this data change the way it is stored, shared and protected?

The automation of consumer investment advice and personal financial management is also a big change for an industry that has always been based on human intermediation and which relies on trust in order to function. With lower barriers to entry, policymakers will hope that the increased availability of financial advice will boost savings and investment among an ageing population whose retirement provision is an increasing concern. At the same time, regulators will want clear lines of accountability and liability, to prevent mis-selling scandals from arising in the first place – and to know where to focus interventions if they do.

How should regulators and firms anticipate and address these risks? Is it possible to simply apply the existing rulebook to these technologies? What does a technologically neutral approach to regulating robo-advice actually mean?

AI in financial services

This article was written for the Politics of AI conference convened by Global Counsel in 2019 and forms a part of a wider AI briefing pack: /insights/report/politics-ai


The views expressed in this note can be attributed to the named author(s) only.