Pilot of the New AI Module for the ST4S Framework

We are excited to announce the release of Artificial Intelligence (AI) controls within the ST4S (Safer Technologies for Schools) framework. This new ST4S AI module is designed to complement the existing ST4S framework and aims to improve security, privacy, and online safety standards for K-12 schools across Australia and New Zealand.

The ST4S AI Module is being released in a pilot phase first, followed by an updated release later in the year.

The AI Module covers topics on security, privacy, and safety, focusing on critical controls to reduce risk in these areas.

Overview of Criteria

The AI module follows the key categories or topics of ST4S which includes cybersecurity, privacy and online safety. 


Models must be secure from attack, securely transmit information and be audited frequently through rigorous and transparent processes.


AI must be opt-out by default, actively restrict personal information from being processed or stored, be transparent in operation and provide easy privacy controls to users.


Every reasonable attempt must be made by the organisation to reduce risk, educate users on safe usage and AI features must be appropriate for young people.

The AI module employs a risk-based approach, assessing the varying risks associated with different AI use cases. This aligns with the current ST4S approach for assessing services and other frameworks, including legislation such as the European Union’s AI Act.

Examples of criteria include:

  • Responsible AI and Ethics: Recognising the power and rapid development of AI, the module mandates the establishment of a responsible AI framework. Organisations must appoint an independent AI Ethics Officer to oversee the ethical implementation of AI technologies.
  • Privacy by Default: All AI-related services must have user data opt-in settings switched off by default. Users should find it easy to opt out if they choose to opt in initially, ensuring their data is not used for AI training without explicit consent.
  • Regular Testing and Safety Measures: Developers and companies are required to conduct regular testing, including jailbreaking and prompt injection tests, to ensure AI functionalities do not perform unexpected actions or expose users to harmful content.
  • Focus on Young People: Our criteria also focus on how young people (e.g. students) may use AI and include measures to ensure it is clear to young people when they are engaging with an AI (and not a human), and how to use AI responsibly with language that is clear to them.

Rollout Plan

The AI module will be introduced as a separate category within the ST4S framework and will be gradually incorporated into the ST4S assessment process.

Companies that have previously engaged with ST4S and have recently introduced the usage of AI are being invited to pilot first.

Later in the year, the AI module will be fully incorporated into the ST4S assessment process, including the Readiness Check and other tools.