As artificial intelligence continues to evolve at a rapid pace, the need for a thoughtful and comprehensive AI policy becomes increasingly important.
AI offers many benefits, but it also comes with limitations and risks. You may well be aware of these, but we believe they are worth restating here:
- AI systems struggle with common-sense reasoning.
- They can perpetuate or amplify bias.
- Their internal workings are often opaque, making it difficult to understand how they reach their conclusions; this in turn hinders accountability and trust.
- They raise concerns about privacy, autonomy, job displacement and misuse.
- They are computationally intensive and consume large amounts of power, with a corresponding environmental cost.
Given the above, some of the questions that organisations should be asking are:
- Are your staff using AI to help with their daily work processes?
- How can you verify whether or not staff are using AI?
- Are applicants using AI to help with their applications? Is this okay?
- Is your application guidance explicit about whether, and how, AI may be used?
- Do projects that you fund make use of AI in any way?
- Do your suppliers use AI? Is this okay?
We have included a sample AI policy for you to consider and potentially adapt to align with your own organisation’s mission and values:
AI Policy for [Charity Name]
SAMPLE POLICY FOR CONSIDERATION.
NOT INTENDED TO BE USED AS IS.
1. Purpose
This policy outlines [Charity Name]’s approach to the use of Artificial Intelligence (AI) in our operations, grantmaking, and interactions with grantees. It aims to ensure responsible, ethical, and transparent use of AI aligned with our mission and values.
2. Scope
This policy applies to:
- Internal use of AI by staff and contractors.
- Expectations for grantees and funding applicants using AI.
- Partnerships involving AI tools or technologies.
3. Guiding Principles
We commit to the following principles when using or funding projects involving AI:
a) Transparency
- Disclose when AI tools are used in decision-making processes, including grant assessments.
- Ensure AI-driven tools used internally or by grantees are explainable and understandable.
b) Fairness & Equity
- Avoid reinforcing bias or systemic inequities through AI.
- Prioritise AI use that promotes inclusive outcomes and equitable access.
c) Privacy & Data Protection
- Ensure that all AI use complies with relevant data protection laws (e.g. GDPR).
- Do not use AI to infer or predict sensitive personal attributes unless doing so is explicitly justified and consent has been obtained.
d) Accountability
- Maintain human oversight over all critical decisions, including grant approvals.
- Regularly review AI tools and their impact.
e) Safety & Risk Mitigation
- Assess the risks of any AI system used or funded, especially those involving vulnerable populations.
- Avoid high-risk AI applications (e.g. surveillance, automated profiling) unless justified with clear safeguards.
4. Use of AI in Grantmaking
a) Internal Use
- AI tools may be used to streamline administrative tasks (e.g. application triage or summarisation) but will not fully automate grant decisions.
- Staff must receive training on ethical AI use.
b) Grantee Requirements
- Applicants must disclose any substantial use of AI in project delivery.
- Projects using AI must demonstrate alignment with the principles above.
- Applicants must disclose whether AI was used in preparing their application.
5. Funding AI Projects
We will:
- Support projects using AI for public benefit, particularly in underserved communities.
- Prioritise funding for ethical, transparent, and community-informed AI initiatives.
6. Review & Updates
This policy will be reviewed annually or when significant AI developments occur. Feedback from stakeholders, grantees, and communities will inform revisions.
7. Contact
For questions or concerns about this policy, please contact: [email/department name]