
Summary #
Status #
- Latest Version: v2024.1 is the latest framework version and is currently active.
Key changes and items #
- v2024.1 introduces minor updates to the AI Module first introduced in v2023.2.
- Not all use cases of AI are assessable at this time. Review the ST4S Excluded / High Risk list for more information.
- Transition options are available for companies assessed under v2023.1 or v2023.2 to move to the latest v2024.1.
Dates #
- Original publication date of this framework version: 20-January-2025
- Last update to this post: 18-March-2025
Description #
ST4S v2024.1 introduces minor updates to the AI Module following its pilot and initial release in v2023.2. The module aligns with the principles published in the Australian Ministerial Framework for AI in Education.
Approximately 10 changes were made to the AI Module from v2023.2 to v2024.1. No other significant changes have been made to the framework.
Key Features and Compliance Areas #
Incorporating major research and development by ESA and members of the ST4S Working Group, the AI Module is designed to align with the key principles of the Ministerial Framework for AI in Education.
Overall, companies will need to demonstrate a commitment to safety and security, respect privacy, and be transparent about their use of AI.
Key features of the AI Module:
- An exclusion list applies:
  - Not all AI services are assessable; the focus is on education-specific use cases of AI. Examples of services that are not assessable include the use of AI for biometrics, sensitive/personal information, student monitoring, or administrative decision-making (e.g. automated enrolment ratings with AI or analysing student behaviour with AI).
  - Some items on the exclusion list are subject to change pending advice from government and industry.
- Privacy controls:
  - All services must, by default, exclude (opt out) user information, data, and assets from AI training.
  - If a service may use information for training or further developing the AI model, the service must provide privacy controls that allow users to easily opt in, opt out, and customise their preferences regarding how their information and data is used (a minimal sketch of such a consent model appears after this list).
- Ethical framework and appointing an ethics/safety officer:
  - Organisations must establish an ethical AI framework and appoint an AI ethics lead who is independent of development. The AI ethics lead should handle complaints from users, oversee feature development, and ensure safety testing and privacy practices are in place.
- Testing requirements:
  - Testing is required for security (e.g. jailbreaking attempts), privacy (e.g. preventing personal information from being disclosed to the model), and safety (e.g. testing for inappropriate content generation and outputs).
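To illustrate the opt-out-by-default expectation described under privacy controls above, the following is a minimal sketch of how a service might model per-user training-consent preferences. The `TrainingConsent` class, its field names, and its methods are illustrative assumptions only; they are not defined by the ST4S criteria.

```python
from dataclasses import dataclass


@dataclass
class TrainingConsent:
    """Illustrative per-user consent record for AI training data use.

    The defaults reflect an opt-out-by-default posture: no user content
    or usage data is used for model training unless the user explicitly
    opts in.
    """
    use_content_for_training: bool = False   # opted out by default
    use_usage_analytics: bool = False        # opted out by default

    def opt_in(self, *, content: bool = False, analytics: bool = False) -> None:
        """Record an explicit, user-initiated opt-in for specific data types."""
        self.use_content_for_training = content
        self.use_usage_analytics = analytics

    def opt_out(self) -> None:
        """Return to the default state: nothing is used for training."""
        self.use_content_for_training = False
        self.use_usage_analytics = False


# Example: a new user starts fully opted out and can later customise preferences.
consent = TrainingConsent()
assert not consent.use_content_for_training
consent.opt_in(content=True)   # explicit, granular opt-in
consent.opt_out()              # easy opt-out restores the defaults
```

The design point is simply that consent is granular, defaults to "off", and can be changed by the user at any time; how a real service persists and enforces these preferences is up to the supplier.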
Suppliers and App Developers #
If you are interested in how best to comply with the AI criteria, we recommend reviewing the supporting resources and the criteria published in the supplier guide.
Transition options are available for previously assessed services, depending on the framework version they were assessed under.
Transitioning #
Options are available for apps or services with an assessment under v2023.1 or later to transition to v2024.1.
Transition options are not guaranteed and the information in this section is subject to change. Whether a transition option is available depends on a number of factors, such as whether any other changes have occurred to the service or organisation being assessed and the nature of those changes.
In all cases, please contact the ST4S Team first, via the contact us page on our website, so we can evaluate the most efficient approach for you.
The ST4S Team is working with suppliers on how best to transition by completing the AI Module separately in order to reduce assessment activities.
If you have no plans to use AI or make AI features available within your service, we recommend reviewing v2025.1 instead upon its release in the second half of 2025.
Where a transition option is not available, a full reassessment is required. Please review the Readiness Check on our website to begin the assessment process.
Consultation and Feedback #
Providing Feedback #
The ST4S Framework has a consultation and feedback process with a range of interested persons and organisations. Notably, our feedback and consultation occurs with government agencies in Australia and New Zealand, such as the Department of Education in each Australian state and territory and the Ministry of Education in New Zealand.
Consultation also occurs with independent and Catholic sector school representatives who are members of the ST4S Working Group.
ST4S originally launched in 2019 and is the work of multiple organisations and people. Cybersecurity, privacy and online safety are rapidly evolving areas, and feedback is an important part of keeping the framework current. Releases are planned twice a year, and additional updates may occur in the interim.
Feedback is open to:
- App developers / suppliers.
- Academic researchers and independent ethical hackers.
- Advocacy groups (e.g. privacy or human rights advocacy groups).
- Industry groups relating to cybersecurity, privacy, safety and software development.
- ST4S Working Group members (e.g. Department of Education staff and Catholic and independent sector representatives who are members of the group).
- Government agencies, both local (e.g. Australia and New Zealand) and international.
Other Enquiries #
For general enquiries, please contact us via our website. For media or press-related enquiries, please contact our media team at Education Services Australia Limited (ESA).