AI policy for charities: what it is, why it matters and how to get started
What type of music does AI prefer? Algo-rhythms.
Moving swiftly on, we all know that getting to grips with Artificial Intelligence is no joke. It’s a topic that just about every organisation imaginable is facing. If you work for a charity, AI policy is probably an issue you’re already addressing, or one that’s flagged for attention on your agenda.
From trustees to regulators and staff to donors, AI is probably already helping people in your network to create new content, answer questions, pick TV recommendations, plan journeys and de-junk email inboxes. Part of the challenge within charities, however, is that AI is often being used informally, inconsistently and without clear guidance.
In that context, AI policy for non-profits matters, so this article aims to help you understand what an AI policy is, why it counts and how you can shape one that suits your organisation.
Why AI is the talk of the town
In boardrooms, backrooms and hybrid-working box rooms, the upsurge in awareness of AI means it’s increasingly a live topic.
And it’s not just the basis for small talk: it’s a huge phenomenon, full of opportunities, uncertainties and challenges.
Yet, among the many charities currently engaging with AI in its various forms, what proportion are using it without clarity on the best approach to governance, or without appropriate guidance in place?
The British Heart Foundation is, quite rightly, a highly regarded organisation. The Charity Commission has reported that it has been looking at AI for some years, establishing a working group and an AI strategy, and soon after the turn of the decade it was finding heartening ways to harness the technology.
But you don’t have to look far to see that it has also had reasons to be wary. In other words, whether your non-profit is big or small, AI is amazing, but it’s also a minefield. When it comes to charities and AI policy, then, simply hoping that the algorithms will always be on your side isn’t enough.
How are charities saying ‘aye’ to AI?
The Charity Digital Skills Report 2025 contains some fascinating statistics, including that more than three quarters (76%) of charities are now using AI tools, while nearly half (48%) are developing AI policies (a figure that jumps to 68% for large organisations).

AI is clearly on the march, so let’s examine just some of the ways the charity sector is using it.
- Fundraising: artificial intelligence is being used to assess donors and predict donation patterns and risks, to support more personalised engagement, and to help teams understand what’s working across campaigns. Within charity-specific digital fundraising tools like Access Raise, charities are also saving time by using AI-generated suggestions to build campaigns quickly.
- Content: generative AI is experiencing rapid uptake, as teams use it to help write initial versions of everything from outreach materials to bid documents, reports to speeches, and even policies.
- Administration: AI tools boost efficiency by turning speech into text (e.g. making meeting minutes a cinch), onboarding staff quickly via ‘buddy’ agents, and screening CVs and job descriptions in a jiffy.
- Communications: the new tech can tackle FAQs and enquiries 24/7 via chatbots, send personalised messages and thanks to donors, and make content more appealing through a range of visual tools.
- Data: AI is being used to process big volumes of feedback (e.g. comments and surveys) to optimise services and identify improvements, and to forecast future needs (e.g. prediction of homelessness rates).
- Innovation: it’s employed inventively too, from multimodal AI use that finds apt cancer treatments, to mixing citizen scientist data and geographical info for better habitat mapping and management.
What exactly is a charity AI policy?
An AI policy for charities is a way for a non-profit organisation to be transparent about both its approach and its attitude to AI.
Here’s one fuller definition:
A charity’s AI policy defines a non-profit organisation’s ethical, safe and effective use of Artificial Intelligence. It aligns with the organisation’s mission, values, objectives and legal obligations, and addresses openness and accountability. It covers issues such as data privacy and security, bias mitigation and risk – including hazards relating to copyright and the safeguarding of vulnerable individuals. It also demonstrates how human oversight is retained over content generation, governance and critical decision-making.
An AI policy for charities should not, however, be a vast technical manual or an impenetrable legal document.
Its purpose is to state your relationship with AI clearly so that staff, donors and other stakeholders know when and how you will or could use it. It will be different from a general IT or data policy, because its focus is specifically on AI and its capacity to carry out the kind of cognition we usually associate with human intelligence.
This distinction becomes especially important as AI is increasingly built into everyday charity software. Platforms underpinned by shared data models and clear governance, such as those built on Access Evo, are designed to ensure AI supports decision-making rather than replacing it – keeping people firmly in the loop.
It should also differ from the AI policies of corporate entities, many of which will be using the technology to help maximise profits. Instead, an AI policy for charities should confirm the intention to use AI responsibly to further the charity’s purposes (though helping with the task of fundraising will no doubt be part of the picture too).
To that end, it should set out a shared set of principles and boundaries that guide how AI can be used responsibly within such organisations.
Make no mistake, AI policy for charities matters
Having a carefully defined AI policy in place for your charity means that all stakeholders know about AI’s use in – and in connection with – the organisation, and are made aware of key risks and issues associated with that usage. Let’s list some of the many advantages that come with a good AI policy:
- It builds trust: being clear about AI fosters faith in your organisation’s principles and practices, both internally and externally.
- It supports consistency: with everyone singing from the same AI song sheet, you’ll give your staff confidence to use AI in steady and appropriate ways.
- It improves AI engagement: when AI is built into trusted systems that staff already use – such as digital fundraising platforms – teams are more likely to engage with it confidently and appropriately, focusing on better fundraising outcomes rather than worrying about risk.
- It protects sensitive data: by requiring the structured, considered use of AI, you minimise the danger of problems like data protection breaches, information inaccuracy and bias.
- It reduces risk: by highlighting issues such as AI hallucinations, you make your team alert to potential problems, which will help to avoid or mitigate them.
- It guards reputations: with evidence that you’re ‘on top of’ AI, you help to shield your brand’s name and strengthen relationships.
- It bolsters security: with AI tools vulnerable to attack by hackers, a policy that sets out measures to prevent such incidents, and what happens if data is leaked, is a great asset.
- It helps with legal obligations: with a clear, vetted AI policy approved by relevant parties, you put your charity in a stronger position regarding issues like copyright and the avoidance of harmful content.
What goes into a good AI policy?
To help you further pinpoint what an AI policy for charities is and what it should include, you’ll find a non-profit AI policy template to refer to after the next and final part of this article. A good rule of thumb here is to keep it proportionate and reasonably concise (limit it to a few pages, if possible), and to favour practicality over technical obscurity. It’s also worth considering whether a policy is for both internal and external consumption, as some organisations may well need two separate, if perhaps similar, documents.
In summary, below you’ll find some of the core elements that a sensible policy could cover, for charities of any size (though larger non-profits are likely to need more detailed policies):
- Policy applicability: which parties the policy applies to.
- Policy purpose: why the policy exists and what it intends to achieve.
- AI definition: the organisation’s understanding of what AI is.
- AI governance and management: how the organisation will provide AI oversight and support its use.
- AI use: which forms of AI technology the organisation uses, and how it plans to keep up with developments.
- AI risk management: what AI-related risk analysis the organisation has undertaken and its outcomes.
- AI data protection and security: how data and privacy will be safeguarded when AI is used.
- AI ethics: what principles the organisation holds regarding the technology and its usage, e.g. bias mitigation.
- AI legal compliance: how the organisation’s use of AI relates to legal issues, e.g. copyright concerns.
- AI environmental impact: the organisation’s stance on the technology’s high energy usage.
Get started with a simple policy template
As AI becomes embedded in tools charities already rely on – from fundraising platforms to analytics – having a clear policy in place helps ensure those capabilities are used consistently, ethically and with confidence.
We hope this article has hit all the right notes regarding the creation of AI policy for charities. Naturally, all non-profit organisations need to determine for themselves – in conversation with their relevant teams, from HR to IT and legal – exactly what their AI policy should include and how it’s best articulated.
But it’s always nice to have a helping hand, so here’s an AI policy for non-profits template – just complete the form below.