Questions on AI’s implications for fundraising
Generative AI presents considerable potential for fundraising by charities. But it also brings a huge range of challenges, some familiar from previous tech developments, and some entirely new and already upon us.
So I have some questions. Plenty of them.
I’ve watched tech develop and be adopted by fundraisers and charities for several decades. I started professional fundraising when moving from a manual typewriter to an electric one was a big technology leap forward.
I learned my first software packages by working my way through 150+ page printed manuals – WordStar, WordPerfect, Lotus 1-2-3 etc.
I’ve watched as many charities took their time to adopt the web, email and social media.
But the speed and scale of what generative AI is already bringing raises ever more questions in my mind, so here, for the record, are some of them.
SEE ALSO: AI for fundraising: why ChatGPT matters to fundraisers (15 March 2023)
I hope they help, either by confirming that you're not alone in trying to grapple with what is already with us, or by helping you recognise that there is far, far more to AI.
I hope they help you get beyond the current popular discussion themes about whether or not AI deserves to be considered creative (the answer is in the description ‘generative AI’), which jobs its use might replace, and how satisfied we are that so much fundraising still relies on ‘fundamental’ human qualities.
(And as with my previous posts on generative AI, I recognise that there are plenty of people who have for some time been researching, advising on and building products that use machine learning and AI to help fundraisers and charities.)
Thinking big
What is generative fundraising and generative giving?
What does generative fundraising look like? And ‘generative giving’? Or perhaps even ‘conversational fundraising’?
We (I’m guilty) often focus on the tech implications for our profession and organisation first, yet this rapid societal shift affects charities’ supporters just as much.
What to call this?
At this stage it would help to agree shared terms to describe this.
Should we call it AI fundraising? Generative fundraising? Conversational fundraising?
One solution is to see which terms stick in business, marketing and sales, and adopt appropriate choices from there.
Which fundraising channels will change fastest due to AI?
Which fundraising functions will grow fastest due to AI? Proceeding from William Gibson's notion of the future being "just not very evenly distributed", what will be the impact of a fundraising mix in which one or more functions grow or develop at a sudden and rapid rate compared to the others? How do you manage that imbalance, or budget for it? Do you plan to wait until 2024 to build AI testing into your budget?
Having asked which fundraising channels will develop fastest, I'll reframe that: which of your current fundraising assumptions will be transformed (positively or negatively) first, most, or fastest by AI?
What does conversational fundraising look like when ChatGPT integrates with third-party tools?
What does conversational fundraising look like when you start integrating the likes of ChatGPT with its API, with plugins, and with live access to some or all of the web? In other words, your CRM and business processes become enhanced by any number of third-party functions and tools. The current focus on prompts looks positively straightforward in comparison.
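To make that concrete, here is a minimal sketch of the kind of plumbing involved: a CRM record passed to a chat-completion API to draft a personalised thank-you note. It assumes the OpenAI Python client; the model name and the donor fields are illustrative, not recommendations.

```python
# Minimal sketch: draft a personalised thank-you from a CRM record
# via a chat-completion API. Model name and donor fields are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

donor = {"name": "A. Supporter", "last_gift": "£25", "campaign": "winter appeal"}

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "system",
         "content": "You draft warm, concise thank-you notes for a charity. "
                    "Never invent facts about the donor."},
        {"role": "user",
         "content": f"Draft a thank-you to {donor['name']} for their "
                    f"{donor['last_gift']} gift to our {donor['campaign']}."},
    ],
)

print(response.choices[0].message.content)  # a human should review before sending
```

Note the tension even this toy example surfaces: the personalisation comes from sending donor data to a third party, which is exactly the data protection question raised further down this piece.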
What happens when generative AI moves beyond acting on text and images?
How does this develop beyond analysing and generating text and images? For example, how does it affect and change search? What does 'generative search' look like? And how will it affect not just the internet-wide engines but also closed or private search functions, such as your charity's knowledge base?
Will generative AI lead to new charities being formed?
For example, to support the individuals affected by poorly implemented commercial AI? Equally, will it cause other charities to close, or at least refocus some of their public outreach activities?
Thinking about your charity
Which of your suppliers already use AI?
How do your suppliers use AI (including machine learning)? What are their plans for development? Is this detailed in your contract with them? How long before some suppliers who don’t use AI become less attractive?
Who on your board can advise you on AI?
Are you looking to recruit to that role? Do you recognise you have a governance gap there? How long will it take to fill that role, and what risks/opportunities from AI will arise in your organisation in that period?
Can AI clear the bottlenecks that have held your charity back?
Are you already identifying which functions within your organisation are frustratingly slow or difficult to resource, and which could free up staff, time and money if only they were automated?
From where do you get your AI fundraising/leadership insight?
Which thinkers or practitioners, which trainers, suppliers and conferences are helping to inform your understanding of generative AI and how it might support your charity and your fundraising?
Are you ready to manage at an unexpected pace?
What is the most fast-paced environment you have managed a team or organisation in?
Do you have a sense yet of how quickly generative AI is developing, with all its implications for businesses, and how to prepare an organisation for that kind of external change of pace?
Will conversational fundraising boost digital fundraising with face-to-face levels of interaction and satisfaction?
Generative AI ushers in a world of 'conversational fundraising'. Think chatbots. Does this bring an unanticipated combination of digital and face-to-face fundraising? How might you resource that, given the scale at which it could operate if done well? Nick Scott details this possible development well.
How does a charity work in an era of one multi-purpose app like ChatGPT?
At the moment ChatGPT looks like a one-size-fits-all app, used by almost anyone for almost any function (not every function, and certainly not always with reliable or even accurate results).
Previously, though, we needed multiple tools to gather and process the data now available to everyone via one interface. Until ChatGPT, the closest tools we all had were search engines such as Google and Bing. Try entering into a conversation with them (yes, even using Siri) and asking them to adopt a different persona, to output content for a specific reading level, or to acknowledge an error and resubmit a response.
How do you protect your charity’s creative assets?
Photos, images, fundraising campaign content and so on. Or rather, how should you have protected them, given that OpenAI has already trained its tools on some or all of your public content? Will you block large language models (LLMs) from being trained on your content? If so, all of it or some? All LLMs or just some? Remember your website's robots.txt file? Your organisation has been doing this kind of signposting to automated processes and bots for many years, as the sketch below shows.
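For illustration, here is what that signposting can look like now. The crawler names are provider-specific assumptions (OpenAI documents GPTBot, for example; other providers use other names), and robots.txt is a request, not an enforcement mechanism.

```
# Hypothetical robots.txt entries. AI crawler names vary by provider
# and compliance is voluntary; GPTBot is OpenAI's documented crawler.
User-agent: GPTBot
Disallow: /

# Leave ordinary search engine crawling unchanged
User-agent: *
Allow: /
```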
How do you handle a wholesale change in trust in content?
How do you remind staff that all digital content has now become questionable, whether they are using it for research, social sharing or re-use?
Charity staff need critical thinking skills – from now on. How do you instil that level of understanding quickly?
At best, your charity unwittingly shares fake content from others, or integrates it with your own research findings. At worst, malicious actors use AI to harm your reputation or socially engineer access to your systems.
Of course, most of us already have a healthy distrust of digital content. “That’s been Photoshopped” we say, and we are aware of the many financial scams we are at risk from. But the possibilities of generative AI content are many levels above that.
On the same subject, how do you tag images and video on your site to reassure people that they are yours and genuine? Do you have a policy of acknowledging which content was created by people and which has been generated by AI?
How does making your charity’s website an interactive resource change fundraising?
Until now we have browsed websites, finding (with any luck) the information we were seeking. Now that people can interrogate a charity's website using a chatbot trained purely on that website's content, will they be more likely to donate if they can get all their questions, and follow-up questions, answered? A rough sketch of how such a chatbot might be wired up follows.
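By way of illustration, here is a stripped-down sketch of how such a site chatbot is commonly built, a pattern often called retrieval-augmented generation (RAG). Everything in it (the model names, the page snippets) is an assumption for illustration, not a production design.

```python
# Minimal RAG sketch: answer questions using only snippets drawn from
# the charity's own website. Model names and pages are illustrative.
from openai import OpenAI

client = OpenAI()

# In practice these snippets would come from crawling your own site.
pages = [
    "Our winter appeal funds 500 hot meals a week in Leeds.",
    "90p in every £1 donated goes directly to front-line services.",
    "You can cancel a regular gift at any time by emailing us.",
]

def embed(texts):
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [d.embedding for d in resp.data]

page_vectors = embed(pages)

def answer(question):
    qv = embed([question])[0]
    # Pick the page snippet most similar to the question (dot product).
    best = max(range(len(pages)),
               key=lambda i: sum(a * b for a, b in zip(qv, page_vectors[i])))
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative
        messages=[
            {"role": "system",
             "content": "Answer using ONLY this excerpt from our website. "
                        "If it does not contain the answer, say so.\n\n"
                        + pages[best]},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer("How much of my donation reaches front-line work?"))
```

The design point is the system prompt: the bot answers only from your own pages, which is what stops it improvising answers about your charity.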
Thinking of donors
How are donors reacting to ChatGPT and other AI tools?
How will donors respond to a charity that makes extensive use of ChatGPT and other AI tools? Will they welcome the efficiencies and levels of interaction achieved, or shy away when the degree of personalisation possible becomes evident, and uncomfortable?
Have you asked donors to your charity for their views on increased personalisation and automation?
How will donors react to a persona-based bot that guides them around your charity's website?
How prepared are you for fundraising from another bot?
Could chatbot-get-chatbot become another version of member-get-member? When you deploy chatbots for conversational AI-based fundraising, are you anticipating some donors applying their personal AIs to handle the research, the contact, and the decision on which charity to support, when, and in what way?
How does an organisation comply with the right to be forgotten, or a data subject access request?
You might remove an individual's data, but if the AI tools you use continue to train on data relating to that individual, you could well end up processing data about them again and again.
How might donors be a resource for better impact with AI?
Have you asked your supporters for pro bono support with AI skills, governance or mentoring?
Thinking of colleagues and teams
How does your recruitment change in the face of generative AI?
Which new posts are you recruiting for that might need reconsidering in the age of AI automation? Note I’m talking about now, not next year.
And how do you adapt:
- your people/HR policies?
- your expectation of skills and experience?
- your policies regarding BYOD (bring your own device) and its 2023 equivalent, BYOAI (bring your own personal AI)? In other words, the personal AI tools that your staff and volunteers use, and would likely be using while at work, for both personal and work functions?
Since it will take a while to produce an updated policy, what steps can you take now to protect your reputation, staff, supporters and others? For example, reinforcing confidentiality and explaining which assets and content are commercially sensitive, both existing (e.g. code) and new (e.g. the prompts that support your prospect research). As one widely reported estimate puts it: "Sensitive data makes up 11% of what employees submit to ChatGPT."
Think of the employee who has been experimenting with generative AI longer than you have, and who pastes the charity’s strategic plan into a tool with the aim of summarising it, or simply turning it into a slide presentation.
Or the prospect researcher who pastes a donor or prospective donor’s name into an AI tool together with other information e.g. the size of their past donations, or their personal links with a board member.
Wellbeing, mental health, anti-discrimination
A question for those who wish they could take their time over the implications of this change: how shall we know when ChatGPT and other tools have overcome the bias and discrimination in the text they were trained on, and to which your colleagues could be exposed in their work roles? Who will verify that?
How will all colleagues cope in an era of very rapid workplace change?
Does AI open the floodgates to side-hustles for your staff, supporters and volunteers?
Do you encourage these, within reason? Forbid them? Or take a gradualist approach similar to that of many charities in the early days of social media, when staff were discouraged from mentioning the charity on social media, whether Twitter or LinkedIn?
Have you asked your colleagues if any of them are exploring creating or enhancing a service in their own time?
The legal implications for businesses and charities are immense, so I won't go on to detail the insurance, health and safety, wellbeing, privacy, EDI and other issues that are no doubt coming to your mind too.
Even bigger picture
Looking beyond fundraising and leadership roles, do charities themselves have a role to play in how AI develops? Of course, as a core sector of society, they do. But they are unlikely to have the time or expertise to devote to that role.
For example, do any charities, or more likely charitable representative bodies, have a view on whether further implementation of generative AI should be halted, given the (much debated) risk of Artificial General Intelligence (AGI) becoming a pressing reality?
I share these questions fully expecting some of them to be the wrong questions, leading us down dead-ends. Some will, in short order, doubtless appear silly, ill-informed or indeed quaintly simplistic. But I'll take heart in the hope that one or two of them give fundraising and charity leaders a head start in evaluating where we already are.
If I had to choose one of them to come to terms with, it is probably the astonishing speed of change that is with us: the sobering recognition that "this is the worst that AI will ever be", and "this is the slowest that AI will ever develop".
So, please do share your thoughts in the comments below. And add your own questions that you think deserve to be raised to help charity and fundraising leaders prepare.
- More thoughts on ChatGPT from the fundraising sector (17 March 2023)
- AI in fundraising: monthly focus (17 June 2021)
- The Smart Nonprofit: Staying human-centred in an automated world