Sustainability, governance and responsible use of AI

15 May 2024

Organisations need to strike a balance between innovation and the responsible use of AI, says Stuart Riddick, senior sustainability manager, abrdn.

Artificial Intelligence (AI) has rapidly emerged as a transformative force with the potential to revolutionise operations, decision-making, and customer experiences while enhancing the productivity and competitiveness of companies. While the benefits of AI could be substantial, its adoption in corporate settings raises myriad concerns that require careful consideration and management. These concerns challenge organisations to strike a balance between innovation and the responsible use of AI.

One of the primary challenges for companies is ensuring robust governance. There is scope for the outcomes of AI to diverge from the goals and values of service providers and users. AI has no inherent concept of values or ethics; its capabilities are rooted in the algorithms, data and human programming that allow it to process information. Clear guidelines and oversight are needed to deliver outcomes that align with important, qualitative objectives.

We have already seen AI systems, trained on vast amounts of data, inadvertently encode biases which may produce discriminatory and misleading outcomes. This raises ethical dilemmas about fairness, equity, and social responsibility in decision-making processes.

The opacity of AI algorithms poses challenges for accountability and transparency. Unlike traditional decision-making processes where human reasoning can be explained and scrutinised, AI algorithms can operate as ‘black boxes’, making it difficult to trace how decisions are made. This lack of transparency undermines accountability, making it challenging to identify and rectify errors or biases in AI-driven decisions.

As AI technologies evolve, regulatory frameworks struggle to keep pace, creating ambiguity around legal obligations and liabilities. Organisations must navigate a complex web of regulations, including data protection laws, intellectual property rights, and sector-specific regulations, to ensure compliance while leveraging AI capabilities effectively. Failure to comply with these regulations can result in severe legal repercussions, financial penalties, and reputational damage.

The limits of the existing regulatory framework elevate the importance of companies developing guidelines for responsible AI. We believe it is crucial that companies with significant exposure to AI technologies transparently disclose evidence of robust ethical guidelines, governance structures, oversight, and due diligence.

Several companies have received shareholder resolutions on AI for their 2024 annual meetings. Considering the importance of this issue, we are seeking to engage with these companies and the proponents of the resolutions prior to instructing our votes. This can help us encourage change and play a part in fostering an AI-enabled future that delivers sustainable benefits for shareholders and other stakeholders.

Earlier this year we voted for an AI resolution at Apple Inc. The resolution, filed by the American Federation of Labor and Congress of Industrial Organizations (AFL-CIO), asked Apple to prepare a transparency report on its use of AI in its business operations and disclose any ethical guidelines that the company has adopted regarding its use of AI technology.

We engaged with Apple and the proponent to discuss this resolution and their opposing arguments in more detail. In our view, the resolution afforded Apple sufficient discretion to prepare a transparency report that omitted proprietary information. The company is exposed to various risks associated with the use of AI, and the requested disclosure, including ethical guidelines, could provide shareholders with robust evidence of due diligence and safeguards that can protect long-term value. Although the resolution failed to pass, we were encouraged by the notable level of support it received.

To support the adoption of AI, companies also need to consider the impact on the workforce. As the use of AI becomes more widespread, non-technical staff will require training to understand the opportunities, limitations, and ethics of its use. Like those impacted by the energy transition, workers may also require access to retraining to adapt to the changing labour market.

In recent years, the entertainment industry, particularly Hollywood, has witnessed a growing debate and concern over the use of AI in film and television production. This contentious issue reached a boiling point when Hollywood actors went on strike to protest the increasing use of AI-generated performances, fearing that it could jeopardise their livelihoods, artistic integrity, and creative control.

One of the primary reasons for the actors’ strike was the concern that AI-generated performances could replace human actors, leading to job displacement and reduced opportunities in an already competitive industry.

We’ll be engaging with a number of entertainment providers as they face resolutions on their AI disclosures at their upcoming annual meetings. This engagement supports our analysis of the resolutions and can be particularly beneficial when tackling emerging and rapidly evolving issues like this.

Ultimately, the adoption of AI in corporate settings presents significant risks and opportunities that demand strong corporate governance. By developing comprehensive AI governance frameworks that prioritise ethical principles, regulatory compliance, and strategic alignment, organisations can foster trust, transparency, and accountability in their AI practices, ensuring sustainable and responsible AI-driven innovation.

Professional Paraplanner