AI Safeguarding in Schools
How UK schools should approach AI safeguarding — covering DfE guidance, content filtering, pupil-facing tools, and staff training obligations.
Why AI Safeguarding Matters in Schools
AI tools are increasingly accessible to pupils both inside and outside school. ChatGPT, Snapchat's AI, Character.ai, and dozens of other tools are available to anyone with a smartphone. Schools have a safeguarding duty to understand these tools and their risks.
The key safeguarding concerns around AI in schools include:
- exposure to inappropriate or harmful content generated by AI
- use of AI to facilitate bullying or to create deepfakes
- over-reliance on AI for homework, undermining learning
- data privacy risks when pupils enter personal information
- the potential for AI chatbots to provide unsuitable advice
The Department for Education (DfE) expects schools to address AI within their existing safeguarding frameworks, including filtering and monitoring systems, acceptable use policies, and staff training.
DfE Guidance and Expectations
The DfE's guidance on generative AI in education sets out several expectations for schools:
- Risk assessment: Schools should assess the risks of AI tools used in or accessible to the school community
- Filtering: AI tools should be covered by the school's content filtering and monitoring systems where they are accessed on school networks or devices
- Acceptable use: Policies should be updated to address AI use by both staff and pupils
- Academic integrity: Schools should define clear expectations around AI use in assessed work
- Staff training: Staff need training to understand AI tools, their benefits, and their risks
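The risk-assessment expectation above can be kept as a simple register of tools, access routes, and concerns. A minimal sketch in Python, purely illustrative: the tool entries, field names, and `network_action` values are hypothetical examples, not a DfE template.

```python
# Hypothetical AI tool risk register: a lightweight sketch, not a DfE template.
# Each entry records whether pupils can reach the tool, the concerns identified,
# and the action taken on the school network.

RISK_REGISTER = [
    {"tool": "ChatGPT", "pupil_accessible": True,
     "concerns": ["inappropriate content", "data privacy"], "network_action": "block"},
    {"tool": "Character.ai", "pupil_accessible": True,
     "concerns": ["unsuitable advice", "inappropriate content"], "network_action": "block"},
    {"tool": "School-approved writing assistant", "pupil_accessible": True,
     "concerns": ["academic integrity"], "network_action": "allow"},
]

def tools_needing_dsl_review(register):
    """Return tools that pupils can reach and that carry recorded concerns."""
    return [entry["tool"] for entry in register
            if entry["pupil_accessible"] and entry["concerns"]]
```

Even a register this simple gives the safeguarding lead a single place to see which pupil-accessible tools carry open concerns, and it maps directly onto the acceptable-use and filtering decisions that follow.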
Keeping Children Safe in Education (KCSIE) requires schools to have appropriate filtering and monitoring in place. AI tools that can generate text, images, or chat-style interactions should be included in this assessment.
Practical Safeguarding Measures
Here are practical steps schools can take to address AI safeguarding:
- Update your filtering systems. Ensure your web filtering covers major AI platforms. Work with your filtering provider to understand which AI services are blocked, allowed, or partially restricted on your network.
- Update your acceptable use policy. Add clear sections on AI use for staff and pupils. Our AI Policy Generator can create a starting draft tailored to your school type.
- Brief your designated safeguarding lead. Your DSL should understand what AI tools are, how pupils might access them, and what risks they present. Include AI in your safeguarding team's CPD plan.
- Talk to pupils. Age-appropriate conversations about AI — what it can do, what risks it carries, and how to use it responsibly — are more effective than simply banning it.
- Address academic integrity directly. Define what constitutes acceptable use of AI in homework and assessed work. Be explicit rather than assuming pupils will know the boundaries.
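The filtering step above ultimately comes down to a per-domain decision: block, allow, or restrict. A minimal sketch of how such a policy list might be represented and queried, with hypothetical domains and categories; real filtering is configured in your provider's console or at the DNS layer, not in code like this.

```python
from urllib.parse import urlparse

# Hypothetical filtering policy for AI services. The domains and actions here
# are illustrative examples only, not a recommended configuration.
AI_FILTER_POLICY = {
    "chat.openai.com": "block",
    "character.ai": "block",
    "my.snapchat.com": "block",
    "approved-ai-tutor.example.org": "allow",  # hypothetical school-approved tool
    "copilot.microsoft.com": "restrict",       # e.g. staff accounts only
}

def filter_action(url, policy=AI_FILTER_POLICY, default="block"):
    """Look up the action for a URL's host; unknown AI hosts default to block."""
    host = urlparse(url).hostname or ""
    # Match the exact host first, then each parent domain in turn,
    # so beta.character.ai falls under the character.ai entry.
    parts = host.split(".")
    for i in range(len(parts)):
        candidate = ".".join(parts[i:])
        if candidate in policy:
            return policy[candidate]
    return default
```

The design choice worth noting is the default: treating unlisted AI services as blocked until reviewed ("default deny") is the safer posture for a school network, since new AI tools appear faster than any allow-list can be updated.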
Staff Training Obligations
Staff training on AI doesn't need to be complex, but it should cover the fundamentals:
- What AI tools are and how they work — our What Is Workplace AI? guide covers this in plain English
- The specific AI risks in educational settings — content generation, data privacy, academic integrity, and pupil wellbeing
- Your school's AI policy — what is permitted, what is not, and where to report concerns
- How to respond if a pupil reports a concern related to AI — this should follow your existing safeguarding reporting procedures
Use our AI Literacy Basics quiz as a low-pressure starting point for staff CPD. Follow it with the AI Risk Awareness quiz to deepen understanding. Our AI Training Checklist template can help you plan a structured training session.