Responsible Use of Artificial Intelligence Policy
Introduction
Artificial Intelligence (AI) is rapidly transforming how organisations operate, offering significant opportunities to enhance efficiency, impact, and reach. Pilotlight recognises the transformative potential of AI to advance its mission. This policy outlines our commitment to the ethical, responsible, and effective use of AI technologies, ensuring alignment with our core values while integrating seamlessly with our other organisational policies.
Pilotlight is committed to reviewing and updating this policy in response to ongoing advancements in AI technology, evolving best practices, and changes in the regulatory landscape. This approach ensures that our use of AI remains current, relevant, and aligned with both our values and the broader landscape in which we operate.
Scope
This policy applies to all staff, volunteers, trustees, associates and consultants working on behalf of Pilotlight. Where relevant, it may also apply to third-party suppliers, contractors and stakeholders, for example Associates.
This policy aims to:
- Provide clear guidelines for the use of AI tools and technologies
- Maximise the benefits of AI while mitigating associated risks
- Ensure the ethical and responsible deployment of AI in line with Pilotlight's values and legal obligations (including GDPR and other relevant regulatory requirements)
- Foster a culture of responsible experimentation, learning, and continuous improvement in AI adoption.
This policy applies to all AI systems and tools, whether developed internally, procured from third parties, or used on a trial basis, across all Pilotlight operations.
Defining AI at Pilotlight
We know some of the words and phrases to do with AI may be new, unfamiliar and potentially confusing. These definitions are here to explain some of the common terms.
Artificial Intelligence
Artificial Intelligence (AI) is the ability of machines or software to perform tasks that would normally require human intelligence such as learning, problem-solving, decision-making, and natural language processing.
AI is a broad field that encompasses many different types of systems and approaches to machine intelligence, including rule-based AI, machine learning, neural networks, natural language processing and robotics. AI systems can be trained to recognise patterns, make predictions, and improve their performance over time, often with minimal human intervention.
Bias
Bias in data arises when some people or situations are under- or over-represented. This can lead to flawed conclusions. If the data contains only a partial or unrepresentative picture of the real world, it is biased and therefore unreliable.
Generative AI
A range of Artificial Intelligence (AI) tools that can create new text, images, video, audio, code or synthetic data.
Hallucination
In this context, hallucination describes what happens when a generative AI tool produces a false result but presents it as true.
Input
Information you put into a generative AI tool, usually a short string of text.
Large Language Model (LLM)
This is the data and procedures that enable an AI tool to generate the most likely response to natural-language inputs; for example, prompting with ‘The sky’ is likely to return the phrase ‘is blue’.
Misinformation
False information that is shared unintentionally. This frequently happens on social media. Generative AI creates new risk in relation to misinformation because it can quickly produce results that sound plausible but have little or no connection to the facts.
Probabilistic
Large Language Models use ‘probabilistic’ algorithms to generate their outputs: they predict, or guess, the next most likely word in a sequence based on what has been input so far. The AI system therefore doesn’t need to understand the meaning of the words in a prompt; it simply pattern-matches to produce the most likely answer.
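The ‘next most likely word’ idea can be sketched with a toy example. The probability table below is entirely made up for illustration and is far simpler than how real models learn and store language patterns:

```python
# Toy illustration of probabilistic next-word prediction (not a real LLM):
# a hypothetical, hand-written probability table stands in for the patterns
# a Large Language Model learns from its training data.
next_word_probs = {
    "The sky": {"is": 0.7, "was": 0.2, "looks": 0.1},
    "sky is": {"blue": 0.8, "grey": 0.15, "falling": 0.05},
}

def most_likely_next(context: str) -> str:
    """Return the single most likely next word for a known context."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

print(most_likely_next("The sky"))  # prints "is"
print(most_likely_next("sky is"))   # prints "blue"
```

Real models usually sample from these probabilities rather than always taking the top word, which is why the same prompt can produce different outputs on different occasions (see Output/result below).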
Prompt
An input (usually text) entered into a generative AI tool. It could be very short ‘write an advert for a café' or much longer ‘write a poster advert for a vegan café in North Lanarkshire emphasising local fresh food’.
Output/result
What comes out of a generative AI system (text, images, sounds) based on an initial input. Because generative AI tools work in a probabilistic way, the same initial prompt can generate different responses at different times.
Refining
You can enter additional prompts to many generative AI tools. For example, you might ask a chat-based tool to write responses in a particular tone.
Principles for the responsible use of AI
The principles we will adopt to ensure fair and ethical use of AI have been designed to align with our organisational values.
Human-Centred & Collaborative (We bring people together):
- AI tools are designed to augment and empower our people, fostering greater collaboration and freeing up time for meaningful human connection and interaction. They are not replacements for human judgment or relationships.
- We ensure clear human oversight and accountability for all critical decisions and outputs influenced by AI. People remain in control, ensuring our work continues to be rooted in genuine engagement and expertise.
- We will encourage open discussion and shared learning about AI, building collective understanding and capabilities across our team and network.
- We will ensure that any AI-generated content respects the dignity of individuals and represents them in the way they would wish to be represented.
Examples of acceptable use cases:
- AI-Powered Collaboration Tools: Utilise AI-driven platforms to enhance team collaboration, for example by automating meeting scheduling, generating meeting summaries, and providing real-time language translation for global teams. This fosters a collaborative and inclusive environment where diverse teams can work together seamlessly.
Unlocking Potential & Innovation (We believe in potential):
- We embrace AI as a powerful tool to unlock new possibilities, enhance efficiency, and amplify our impact.
- We commit to training and equipping our staff to understand and leverage AI's potential, empowering everyone to innovate and improve our work.
- We will actively seek out and test AI solutions that can help our partners and the wider sector realise their full potential, fostering growth and new approaches.
- We are mindful of potential biases in AI and commit to continuously learning, identifying, and mitigating them to ensure our AI use genuinely supports equitable opportunities and outcomes for all.
- We use AI to elevate the quality, accuracy, and reach of our work, striving for excellence in all our endeavours. This includes using AI to analyse complex data, refine communications, and streamline processes.
Examples of acceptable use cases:
- Personalised Learning and Development: Implement AI-driven learning platforms that provide personalised training modules, track progress, and offer real-time feedback to staff. This helps to nurture strengths in individuals, support continuous skill development, and encourage people to reach their full potential.
- Enhancing Data Analysis: AI can be employed to analyse large datasets, identify patterns, and generate insights that inform decision-making. For example, AI-driven analytics can help us understand and communicate stakeholder behaviour and preferences.
Excellence & Impact (We aim high):
- All AI-generated content or insights must be rigorously fact-checked, reviewed, and aligned with Pilotlight's high standards of integrity, brand, and mission. We maintain a "human-in-the-loop" approach for critical outputs.
- We will be transparent about our use of AI where appropriate, building trust and demonstrating leadership in responsible technology and AI adoption within the charity sector. Please refer to the Responsible AI Use Statement section.
- Our deployment of AI will always be driven by its potential to deliver measurable, significant positive impact towards our mission, ensuring responsible resource allocation and continuous improvement.
- We recognise that AI deepfakes can be difficult to identify, but those using our social media will be made aware of the risk. We will not like, share or support in the comments any imagery or content that we reasonably suspect to be fake.
- We will make all reasonable efforts to identify any bias, hallucinations or misinformation, including in the source data an AI tool might use. Any bias identified will be eradicated, corrected or mitigated to within an acceptable level of risk.
Examples of acceptable use cases:
- AI-Enhanced Decision-Making: Use AI tools to assist in making informed decisions by providing data-driven insights and recommendations. For example, AI can analyse project data to suggest optimal resource allocation or identify potential risks, ensuring that we pursue our goals with method, rigour, and determination.
- Improving Customer Service: AI-powered chatbots and virtual assistants can handle common customer queries, provide instant responses, and escalate complex issues to human agents when necessary. This ensures a seamless and efficient customer service experience.
Responsible Stewardship of Data and Safeguarding (Underpins all values):
- Where AI is used to create content, there are appropriate checks and safeguards in place to ensure that:
- We are open and transparent that the content has been created by AI (please refer to the Responsible AI Use Statement section),
- The distinction between factual and non-factual content is either self-evident or clearly identified,
- It is not used for purposes where the use of AI has been specifically prohibited, and
- There is appropriate content moderation by humans to minimise the potential for errors, bias, hallucination, misinformation, defamatory phrases and the like.
- All efforts should be made to comply strictly with other organisational values and policies.
- We exercise caution when inputting any personal, sensitive, or confidential information into AI models, always verifying the data privacy policies of tools used.
- Wherever possible, we will opt out of data sharing for machine learning purposes when using third-party AI tools, reflecting our commitment to responsible data stewardship.
Generative AI
Generative AI can be a powerful tool for creative ideation, helping to generate new ideas and concepts. However, it is important to use it responsibly, especially when direct fact-checking of outputs isn't feasible.
Here are some guidelines:
- Leverage AI for Inspiration: use generative AI to brainstorm ideas, generate creative content, and explore new concepts. AI can help spark creativity and provide fresh perspectives.
- Apply human judgement: always apply your own human judgement and critical thinking to evaluate AI-generated outputs. While AI can provide valuable suggestions, it is essential to review and refine these ideas to ensure they align with our goals and values.
- Highlight the need for oversight: since direct fact-checking of AI-generated content may not always be feasible, it is crucial to have human oversight. Ensure that any AI-generated ideas or content are reviewed and validated by a knowledgeable individual before implementation.
- Refer to the Responsible AI Use Statement and use it where necessary, as it indicates that content was generated by AI. This will help to manage expectations and maintain transparency.
- Where the subject may be emotive or challenging – especially with stock or digitally enhanced imagery – we will ensure that the way in which we use AI is not reasonably likely to mislead or unduly influence people.
How to identify AI
With AI and technology rapidly evolving, many software products and applications incorporate AI features that may not be explicitly known to the user. To ensure alignment with this policy, transparency and informed usage, it is important for you to be aware of this and to be able to identify the use of AI where it is not explicitly disclosed.
Some key signs that AI might be used in software:
- Check the documentation: start by checking the software’s documentation or help section for any mention of AI features. Look for terms like “machine learning”, “neural networks” or “AI-powered”.
- Automated Responses: if the software provides instant replies or suggestions, such as predictive text or chatbots, it likely uses AI.
- Data Analysis: tools that analyse large datasets, identify patterns, or generate insights often rely on AI algorithms.
- Personalisation: software that customises user experiences based on behaviour or preferences, like recommendation systems, typically uses AI.
- Image and Speech Recognition: features that recognise and process images or speech, such as facial recognition or voice assistants, are powered by AI.
- Decision-Making Support: tools that offer recommendations or assist in decision-making by evaluating various factors are likely using AI.
- Natural Language Processing: software that understands and processes human language, such as translation services or sentiment analysis, employs AI techniques.
Responsible AI Use Statement
A statement should be included in all Pilotlight staff email signatures to clearly communicate to external stakeholders our commitment to the responsible use of AI to support our work.
You are receiving this email because we believe it’s relevant to do so. At Pilotlight, we handle your data with care, store it securely and avoid unnecessary contact. We use Artificial Intelligence (AI) responsibly to support our work, with all AI-generated content reviewed by our team to ensure it reflects our values. You can read our full Privacy Policy at www.pilotlight.org.uk/privacy-policy, and our Responsible Use of AI Policy at https://www.pilotlight.org.uk/responsible-use-of-ai-policy. You can contact us anytime at www.pilotlight.org.uk/contact.
References
The Alan Turing Institute. (n.d.). AI regulation and standards. Available at: https://www.turing.ac.uk/research/research-programmes/public-policy/public-policy-themes/ai-regulation-and-standards (Accessed: 24 May 2025).
CAST AI Peer Group resources
CAST Digital Peer Network Group resources
Charity Commission. (2024). Charities and Artificial Intelligence. Available at: https://charitycommission.blog.gov.uk/2024/04/02/charities-and-artificial-intelligence/ (Accessed: 24 July 2025).
Charity Excellence Framework. (n.d.). Charity AI Governance and Ethics Framework. Available at: https://www.charityexcellence.co.uk/charity-ai-governance-and-ethics-framework (Accessed: 24 May 2025).
Scottish Council for Voluntary Organisations (SCVO). (n.d.). Generative AI glossary. Available at: https://scvo.scot/support/digital/guides/ai/glossary (Accessed: 24 May 2025).
Disclaimer
This policy has been put together using multiple sources (listed above). To enhance efficiency and generate initial ideas, parts of this document were drafted using Artificial Intelligence tools (specifically Copilot).
All AI-generated content has been thoroughly reviewed, fact-checked, and edited by the Pilotlight team to ensure accuracy, uphold our values, and maintain our high standards of quality.
Our human experts remain fully accountable for the final published version.