
Want to use AI in your charity? Start with governance

W-AI-T. Think governance before applying AI: an illuminated WAIT sign at a UK pedestrian crossing. Photo by Phil Hearing on Unsplash

AI can help you do some amazing things. But it carries two significant risks: that AI gives you the wrong answers, and that AI is used in ways that cause harm to individuals and society. For any organisation, validating its AI systems and monitoring for AI harms are essential aspects of AI governance.

These are key lessons from the development of AI across the commercial and public sectors. Groups such as SurvivAI, ValidateAI and The Operational Research (OR) Society’s AI Working Group have been formed to identify and disseminate best practice, helping AI users in all sectors mitigate these two risks.

It doesn’t matter how small or large your charity is: if you want to use AI, the first step is to establish effective AI governance structures and processes. These can be lightweight and agile, but they must be in place before AI applications are developed and deployed as part of business operations.

Building trust first with effective governance

AI governance is an essential aspect of building trust, managing AI-related risks, and ensuring that AI principles (such as fairness, accountability, and privacy) are implemented and adhered to.

When the GDPR was introduced, even the smallest of organisations (for example, allotment associations) had to make compliance arrangements, and those arrangements are a great place to start thinking about the responsible use of AI. Without data, AI is useless; AI ingests data and churns out decisions. These decisions might be fully automated as part of a charity’s business operations, or they might be AI recommendations that are subject to human oversight. Keeping data governance and AI governance arrangements closely coupled is therefore not only efficient – it is essential.

Regular giving and legacy pledgers

Examples of using AI, or more specifically machine learning, can be borrowed from the commercial sector and adapted to fundraising and donor journeys. Many commercial organisations build ‘churn’ models to analyse which customers might stop a subscription or service, and the same techniques can be applied to regular givers, with those assessed as “more likely to stop giving” receiving a different communication plan. Another example: legacy pledgers are some of the most committed charity donors and offer a vital source of income for many charities.

However, not everyone in a donor base would be the right person to target for a legacy ask; machine learning allows us to find people similar to previous legacy pledgers, based on their interactions with the charity.
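As a rough illustration, here is a minimal sketch of a ‘likely to stop giving’ model in Python with scikit-learn. The file name, columns and features are hypothetical stand-ins for data that would come from a charity’s own CRM:

```python
# A minimal sketch of a churn-style model for regular givers.
# File name, column names and features are illustrative assumptions only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

donors = pd.read_csv("donors.csv")  # hypothetical extract from the charity's CRM

features = ["months_since_last_gift", "gift_count_12m", "avg_gift_amount",
            "email_opens_12m", "events_attended"]           # illustrative features
X, y = donors[features], donors["stopped_giving"]           # 1 = lapsed regular giver

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Score held-out donors; higher scores flag donors for a different
# (more careful) communication plan rather than an automated decision.
scores = model.predict_proba(X_test)[:, 1]
print(f"Held-out AUC: {roc_auc_score(y_test, scores):.2f}")
```

The same pattern works for legacy lookalikes: train with previous pledgers as the positive class and score the rest of the donor base.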

Applying AI principles to the subjects of AI – donors

Whilst these projects are a good use of charitable resource and ethical in their aims, trust must be established between the charity and its audience if they are also to be ethical in their means. SurvivAI (a collaboration between academics and a major ‘ethical AI’ consultancy) argues that we need to go beyond AI principles for organisations and also apply these AI principles to the subjects of AI (for example, donors and fundraisers).

SurvivAI proposes an AI trust gateway in which the ethical principles of organisations lead to the identification of corresponding principles for AI subjects (for example, organisational transparency corresponds to donor awareness).

Including the user experience when using AI

Ethical principles and intentions are all well and good, but by themselves they do not translate into trust with donors, nor into avenues for the redress of harm – intended or unintended – caused by the use of AI. That said, establishing an AI trust gateway need not be onerous. While a charity must think about transparency, explainability, and contestability in its deployment of AI applications, it should also bear in mind the other side of the fence, i.e., the user experience, which is concerned with awareness, understandability, and disputability. Here, a focus on user interface design and design-thinking techniques can add considerable value and lead to AI implementations that are more effective in achieving a charity’s goals.

Governance must cover the lifecycle of AI solutions

From an internal perspective, governance plays an important role over the entire lifecycle of AI solutions. During the development, testing, deployment and operational use of AI systems, it is essential to have clear processes and structures in place that:

  1. guide when to use, and when not to use, data sets;
  2. regulate under which circumstances an AI model remains valid and thus can be applied;
  3. put regular checks in place;
  4. clearly state fallback options in scenarios where an AI approach is not appropriate.
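As a rough illustration of how rules like points 2 to 4 might be encoded, here is a minimal ‘deployment gate’ sketch in Python. The registry record, thresholds and field names are all hypothetical assumptions, not an established framework:

```python
# A minimal sketch of a deployment gate, assuming a hypothetical model registry;
# thresholds and field names are illustrative, not prescriptive.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelRecord:
    name: str
    last_validated: date      # when the model last passed validation
    holdout_auc: float        # most recent accuracy check
    approved_datasets: set    # data sets the model is signed off to use

MAX_AGE_DAYS = 90             # revalidate at least quarterly (illustrative policy)
MIN_AUC = 0.70                # minimum acceptable accuracy (illustrative threshold)

def may_use_model(record: ModelRecord, dataset: str) -> bool:
    """Return True only if all governance checks pass."""
    fresh = (date.today() - record.last_validated).days <= MAX_AGE_DAYS
    accurate = record.holdout_auc >= MIN_AUC
    permitted = dataset in record.approved_datasets
    return fresh and accurate and permitted

record = ModelRecord("legacy_lookalike", date(2024, 1, 15), 0.78,
                     {"crm_extract_2024"})
if may_use_model(record, "crm_extract_2024"):
    print("Model may be applied")        # proceed with the AI recommendation
else:
    print("Fall back to manual review")  # the clearly stated fallback option
```

The important design choice is that the fallback path is explicit: when any check fails, the system routes to manual review rather than silently applying a stale or unapproved model.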

A fundamental aspect of good internal governance is AI validation, which ensures that systems are fit for purpose, safe, reliable, timely, maintainable and trustworthy.

ValidateAI’s five principles

ValidateAI, a community interest company working cross-sector and in partnership with the OR Society, argues that AI systems will only fulfil their promise for society if they can be relied upon to address five high-level principles (a code sketch after the list illustrates how some might be checked automatically):

  1. Has the objective of the AI application been properly formulated?
  2. Is the AI system free of software bugs?
  3. Is the AI system based on properly representative data?
  4. Can the AI system cope with anomalies and inevitable data glitches?
  5. Are the decisions and actions recommended by the AI system sufficiently accurate?
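To make these questions concrete, here is a minimal sketch of how principles 3 to 5 might be turned into automated checks. The data, thresholds and figures are illustrative assumptions; this is not ValidateAI’s own method:

```python
# Illustrative automated checks for principles 3-5; all numbers are made up.
import numpy as np
from scipy.stats import chisquare

# Principle 3: is the training data representative? Compare the age-band mix
# in the training sample against the known donor-base mix.
train_counts = np.array([120, 340, 290, 250])          # donors per age band in training data
population_share = np.array([0.10, 0.35, 0.30, 0.25])  # known shares in the donor base
stat, p_value = chisquare(train_counts, f_exp=population_share * train_counts.sum())
print(f"Representativeness check p-value: {p_value:.3f}")  # small p => investigate

# Principle 4: can the system cope with data glitches? Reject or repair
# records rather than silently scoring them.
def clean(record: dict):
    if record.get("avg_gift_amount") is None or record["avg_gift_amount"] < 0:
        return None                       # route glitched records to fallback handling
    return record

# Principle 5: are recommendations sufficiently accurate? Enforce a floor.
holdout_auc = 0.74                        # from the latest validation run (illustrative)
assert holdout_auc >= 0.70, "Accuracy below agreed floor - do not deploy"
```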

How to manage AI systems during COVID-19

Taking these five principles into account, ValidateAI has applied its process to the major concern of managing AI systems during the COVID-19 pandemic (for details, see the 2020 ValidateAI white paper). This analysis highlights the need for an effective approach to monitoring, remedial re-alignment, and stress testing of AI systems – activities that in normal times are typically overlooked. This approach works hand in hand with practitioner-centric standards and frameworks to evidence technical rigour, effective governance and ethical acceptability. Much work is still to be done to design these approaches and to work out how they will be implemented in organisations, in what are still the very early stages of the AI revolution.
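As one concrete example of what ongoing monitoring can look like, the sketch below compares the distribution of model scores in live use against their distribution at validation time using the population stability index (PSI) – a common drift-detection technique, assumed here for illustration rather than mandated by the white paper. The bins and the 0.2 alert threshold are conventional but illustrative choices:

```python
# A minimal sketch of drift monitoring via the population stability index (PSI).
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare the score distribution at validation time vs. in live use."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct, a_pct = np.clip(e_pct, 1e-6, None), np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.4, 0.10, 5000)  # scores when the model was validated
live = rng.normal(0.5, 0.15, 5000)      # scores during a shock (e.g. a pandemic)

if psi(baseline, live) > 0.2:           # conventional "significant shift" level
    print("Drift detected: trigger revalidation / remedial re-alignment")
```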

AI done well has great power. AI done badly can harm not just the people affected by the AI analysis, but also the organisation’s reputation and the trust of its stakeholders. Before any charity starts to use AI, it must establish how it will manage AI risks by putting AI governance structures in place.

Governance for charities encompasses an external focus on creating an AI trust gateway for donors and an internal focus on AI validation. This oversight might be achieved through a large charity’s cross-functional governance committee, or a small charity’s head of operations reporting directly to the Board. Good oversight and governance can enable you to mitigate the risks and harness the power of AI safely.

The OR Society is a charity and learned society that promotes education in and awareness of Operational Research (the application of scientific methods including AI to organisational decision-making). Its Pro Bono Service has been serving charities for over 10 years.

Mathias Kern

Mathias Kern is senior research manager for resource management technologies and optimisation in BT’s Applied Research team. He is an experienced industrial researcher and business modelling specialist particularly interested in applying Artificial Intelligence, optimisation and simulation techniques to real-life problems.

Shakeel Khan

Shakeel Khan is Artificial Intelligence Capability Building Lead at HM Revenue and Customs and co-founder of Validate AI CIC. He has 25+ years of related experience leading AI projects in industry and government, as well as supporting international capability building. He works closely with academics to promote cutting-edge AI applications and to champion AI validation methodologies that build trust.

Richard Vidgen

Richard Vidgen is Emeritus Professor of Business Analytics at the University of New South Wales Business School (UNSW), Sydney, Emeritus Professor of Systems Thinking at the University of Hull, and a visiting professor at the University for the Creative Arts. His current research focuses on the management, organisational and ethical aspects of AI. He is a member of the UK Operational Research Society’s Analytics Development Group and a joint editor in chief for the Journal of Business Analytics. He is an associate at Ethical AI Advisory and a founder of SurvivAI.
