Responsible AI (RAI)
Responsible AI framework and ethics policy
RAI Standard and Evaluation
The Responsible Artificial Intelligence (RAI) Evaluation is a standard used to assess AI-enabled digital products and services used by Australian schools, to ensure they comply with ethical principles. The evaluation has been developed in response to national concern about emerging risks posed by AI in the classroom.
About the Evaluation
In consultation with all the Departments of Education and the Catholic and Independent school sectors, a draft Responsible AI framework has been created. This framework will be used to evaluate AI-enabled digital products for compliance with RAI principles, and will complement the ST4S AI module assessment. As with ST4S, evaluations of products under the framework will be used to inform procurement decisions by schools and school authorities, and to align decision-making around product expectations nationally.
The framework is currently being piloted, with the aim of rolling it out across Australian education in 2026.
Register your interest
EdTech vendors who have successfully completed an ST4S assessment are eligible to register their interest for an RAI evaluation. Vendors yet to begin the ST4S process should complete a Readiness Check to evaluate their preparedness for an ST4S assessment.
