Consultant Abigail Healey explores AI regulation and data protection in FTAdviser
May 15, 2023
Consultant Abigail Healey explores recent litigation involving artificial intelligence, and argues that it is only a matter of time before regulatory action catches up with AI.
Abigail’s article was published in FTAdviser, 12 May 2023, and can be read here.
The increasing presence of Artificial Intelligence (AI) in everyday life has caused ripples across the business and finance sectors, but its deployment has also created a potential new legal and regulatory minefield. As the rapid roll-out of the technology gathers pace and it becomes embedded in ever more aspects of financial services businesses, the spotlight is now on just what checks and balances (and legal and regulatory safeguards) may be needed.
At present, there is no over-arching legislation governing the use of AI, and that seems likely to remain the case if recent government comment is anything to go by. That is not to say that there is no regulation – far from it. Given that AI is put to very different uses in different sectors, a nuanced, sector-specific approach to the potential risks may prove more fit for purpose.
AI has swept through the world of finance, bringing with it significant benefits for firms, customers and markets generally. At the same time, it has the potential to cause serious harm.
AI may be used in customer-profiling, including risk-profiling and identifying potentially suitable financial products and/or services. This could be a largely positive development for consumers, as it may generate operational savings which can be passed on to the consumer, as well as an offering more closely tailored to the individual’s needs and risk profile. However, poorly performing systems, inbuilt or inadvertent bias, and/or inadequate human oversight may result in unfair treatment of customers or potentially discriminatory behaviour. In those circumstances, the use of AI could expose businesses to the same litigation and regulatory intervention as if the discriminatory or unfair differential treatment had been carried out by an employee.
AI’s inexorable march into the workplace also throws up challenges for those responsible for its use in decisions affecting employees. Last month, New York City finalised rules implementing its legislation regulating the use of automated employment decision tools (AEDTs), in an attempt to curb reported AI bias. The law is the world’s first aimed at AI bias and will likely lead to similar action across the globe, given AI’s growing prevalence in everyday workplace decision-making.
The law restricts employers from using AEDTs to make decisions relating to candidates or employees unless the software has first been audited for bias, including potential discrimination on the basis of protected characteristics such as age, gender, race and sexuality. The requirement follows a flood of reports of discrimination arising from AI bias.
In one study, AI-driven hiring systems were shown to be twice as likely to reject female applicants as male applicants. In another, 84% of executives reported that their AI algorithms reinforced, rather than avoided, racial or gender bias. Taking firm action to protect workers’ rights is a welcome move for New York’s workforce, but across the globe there is still a long road ahead. While the UK’s Equality Act 2010 prohibits workplace discrimination, AI-specific guidance and legislation may be needed as employers increase their reliance on automation. In the meantime, the area may prove fertile ground for employment-related claims.
One of the biggest areas of concern with the use of AI is data protection and privacy. Last month, Italy’s data regulator temporarily banned the use of ChatGPT over data security concerns, though it has since said it will allow the chatbot’s return if developer OpenAI takes “useful steps” to address those concerns.
Here in the UK, the government published a white paper at the end of March outlining five principles for the safe and innovative use of AI: safety, transparency, fairness, accountability and contestability. Notably, it states that regulators should consider the need for people to have clear routes to dispute harmful outcomes or decisions generated by AI.
The government’s over-arching aim is stated to be the avoidance of “heavy-handed legislation which could stifle innovation”. Instead, it will empower existing regulators to develop an approach tailored to how AI is used in their specific sectors. Over the next 12 months, regulators (including the FCA) are expected to issue further guidance. However, given that financial services is one of the most heavily regulated sectors, it may be a case of “fine-tuning” the existing regulatory framework (as the FCA has previously suggested) rather than anything more game-changing.
Businesses are, by and large, very sensitive to the need to comply with the Data Protection Act 2018 and GDPR. An AI solution will likely involve processing large amounts of data, which may include personal data. Every business using AI will need to ensure that its AI tools collect and use personal data in a way that complies with data protection legislation. If it does not, it may face enforcement action, with the threat of eye-watering fines of up to €20 million or 4% of global turnover, whichever is higher, as well as potential claims from affected data subjects, such as customers or employees.
The use of AI, on the other hand, may enable or enhance GDPR compliance. A business must have “appropriate technical and organisational measures” in place to keep data secure, and AI can be an effective cybersecurity tool, used to predict potential cyber threats and/or to identify vulnerabilities in a system. Of course, as with any AI use, the system is only as robust as the data on which it is trained: if that data is biased, the AI is likely to be biased too, which may have serious security implications and may fall foul of the “appropriate technical measures” requirement.
While regulation doubtless has some way to go to catch up with the increasing use of AI, the next year or so is likely to bring more clarity. It is also only a matter of time before claims relating to businesses’ use of AI start to filter through the courts, and the first test cases will be keenly studied by legal experts as the profession grapples with the brave new world AI has ushered in.
