AI Ethics and Regulation: How Investors Can Navigate the Maze
In reality, progress on AI regulation has been uneven and is far from complete. There is no uniform approach across jurisdictions, and some countries introduced their regulations before ChatGPT launched in late 2022. As AI proliferates, many regulators will need to update and possibly expand the work they’ve already done.
For investors, the regulatory uncertainty compounds AI’s other risks. To assess and manage these risks, it helps to have an overview of AI’s business, ethical and regulatory landscape.
Data Risks Can Damage Brands
AI involves an array of technologies directed toward performing tasks normally done by humans and performing them in a human-like way. AI and business can intersect through generative AI, which includes various forms of content generation, including video, voice, text and music; and large language models (LLMs), a subset of generative AI focused on natural language processing. LLMs serve as foundational models for various AI applications—such as chatbots, automated content creation, and analyzing and summarizing large volumes of information—that companies are increasingly using in their customer engagement.
As many companies have found, however, AI innovations may involve potentially brand-damaging risks. These can arise from biases inherent in the data on which LLMs are trained and have resulted, for example, in banks inadvertently discriminating against minorities in granting home-loan approvals, and in a US health insurance provider facing a class-action lawsuit alleging that its use of an AI algorithm caused extended-care claims for elderly patients to be wrongfully denied.
Bias and discrimination are just two of the risks that regulators target and that should be on investors’ radars; others include intellectual property rights and privacy considerations concerning data. Risk-mitigation measures—such as developer testing of the performance, accuracy and robustness of AI models, and providing companies with transparency and support in implementing AI solutions—should also be scrutinized.
Dive Deep to Understand AI Regulations
The AI regulatory environment is evolving in different ways and at different speeds across jurisdictions. The most recent developments include the European Union (EU)’s Artificial Intelligence Act, which is expected to come into force around mid-2024, and the UK government’s response to a consultation process triggered last year by the launch of the government’s AI regulation white paper.
Both efforts illustrate how AI regulatory approaches can differ. The UK is adopting a principles-based framework that existing regulators can apply to AI issues within their respective domains. In contrast, the EU act introduces a comprehensive legal framework with risk-graded compliance obligations for developers, companies, and importers and distributors of AI systems.
Investors, in our view, should do more than drill down into the specifics of each jurisdiction’s AI regulations. They should also familiarize themselves with how jurisdictions are managing AI issues using laws that predate and stand outside AI-specific regulations—for example, copyright law to address data infringements and employment legislation in cases where AI has an impact on labor markets.
Fundamental Analysis and Engagement Are Key
A good rule of thumb for investors trying to assess AI risk is that companies that proactively make full disclosures about their AI strategies and policies are likely to be well prepared for new regulations. More generally, fundamental analysis and issuer engagement—the basics of responsible investment—are crucial to this area of research.
Fundamental analysis should delve not only into AI risk factors at the company level but also along the business chain and across the regulatory environment, testing insights against core responsible-AI principles (Display).