The AITAS  

Developed by Michelle Charlton in early October 2024, the AI Transparency Advice Scale (AITA Scale or AITAS) aims to give audiences an understanding of the origin of the content they are reading.

With the proliferation of content produced by increasingly sophisticated generative AI tools, it is becoming more difficult to discern whether information is a human or a machine output.

While the benefits of generative AI tools can be numerous, the risks may offset those benefits when content sources become increasingly opaque.

The AITAS ratings rate output material according to the ratio of human to AI involvement in its production.

The rating scale is agnostic as to which generative AI tool or tools may have been used.


20.2.26 - Update to representation and example descriptors

Rating 1: Human intelligence

A rating level of 1 signifies content has been developed, written and edited by a human with zero AI involvement in the process.

Example: A person has an idea, drafts it, edits it and publishes it. Spell check or grammar check in a Word document, a dictionary or a thesaurus may have been used.

Rating 2: AI-refined content

A rating level of 2 signifies content has been refined with the aid of AI. Human-generated content has been assisted by an AI tool to refine or polish the output. The output remains predominantly human, with approximately 10% (or less) of the production process being AI-driven.

Example: A typical situation for a Rating 2 would be where a person has thought of, developed and written their content and, as part of the editing process at near-final draft state, the content is reviewed by AI for refinement suggestions. This might include using tools like Grammarly, which suggest changes to sentences and wording.

Rating 3: AI-assisted content

A rating level of 3 signifies content has been developed with the aid of AI. The AI may have helped with ideation and/or writing, but the output remains mostly human, with approximately 30% (or less) of the production process being AI-driven.

Example: A typical situation for a Rating 3 would be where someone needed to create something and turned to generative AI for assistance to ideate or develop that information. The AI has assisted the human by generating ideas or content that the human then edits and synthesises to complete the final product.

Rating 4: Mostly AI-developed content

A rating level of 4 signifies content has been mostly developed by AI. A human has reviewed the AI-generated content, checking the accuracy of outputs, and has applied edits where necessary. AITAS 4 content is produced with approximately 80% AI effort and 20% or less human involvement.

Example: A typical situation for a Rating 4 would be where a person asks an AI to develop or write something. There may be back and forth with prompting to refine the outputs. The person checks what the AI has produced, makes edits or adjustments as required, and then publishes.

Rating 5: Artificial intelligence

A rating level of 5 signifies content has been developed by AI. Beyond the initial prompt that triggers the output, there is no human input in the output.

Example: When content is Rating 5, it means someone has given an AI a prompt and then accepted the output by publishing it without making edits or refinements. The content may or may not have been reviewed by a human before publishing.

AI Transparency Advice Scale (AITAS) © 2024-26 by Michelle Charlton is licensed under CC BY-NC-SA 4.0
An AITAS Rating adds transparency to content by informing audiences how it was created

How to use AITAS Ratings

Evaluate your content production process and apply the most truthful account of how the material was produced, in terms of the ratio of human to artificial intelligence. This may take the form of a short statement that says "AITAS Rating <#>".
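If it helps to see the scale's approximate thresholds side by side, the mapping from an estimated AI share of the production process to a rating could be sketched as follows. This is only an illustrative sketch: the function name is made up for this example, and the exact cut-off between Ratings 4 and 5 is an assumption, since the Scale defines Rating 5 qualitatively (prompt only, no human edits) rather than by percentage.

```python
def aitas_rating(ai_share: float) -> int:
    """Map an estimated AI share of the production process (0.0-1.0)
    to an AITAS rating, using the approximate thresholds in the table.
    Illustrative only: the 4/5 boundary is an assumption."""
    if not 0.0 <= ai_share <= 1.0:
        raise ValueError("ai_share must be between 0.0 and 1.0")
    if ai_share == 0.0:
        return 1  # Human intelligence: zero AI involvement
    if ai_share <= 0.10:
        return 2  # AI-refined: approximately 10% or less AI-driven
    if ai_share <= 0.30:
        return 3  # AI-assisted: approximately 30% or less AI-driven
    if ai_share < 1.0:
        return 4  # Mostly AI-developed: around 80% AI effort
    return 5      # Artificial intelligence: prompt only, no human edits
```

For example, content judged to be about a quarter AI-driven (`aitas_rating(0.25)`) falls under Rating 3. In practice the judgement is qualitative and rests on honest self-assessment, not a precise percentage.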

Use of labels may be appropriate in other circumstances.


Labels


If you'd like a copy of the AITAS Rating labels to use, please email us and we'll send the image files to you.


By the way, this page is AITAS Rating 1


28 November 2025

The Australian Government released a guide for businesses "Being clear about AI-generated content".

By giving content an AITAS Rating you will be complying with the 'labelling' transparency mechanism.

(The Australian Government Guide is referenced on this page with permission from the National AI Centre).

Your feedback is welcome

The AITAS is still in its infancy with further analysis and refinements likely warranted.

One of the challenges in developing AITAS was to think about a rubric that would sufficiently represent a variety of circumstances, without being overly complex.

If you have feedback, we'd love to hear it: Give feedback on AITAS here

Background to the development of the AITAS

When the generative AI (GenAI) tool ChatGPT was released to the mainstream public in late 2022, Michelle was an early adopter of the technology and immediately began exploring the use cases and implications of ChatGPT (and of other tools released subsequently), especially with regard to teaching and learning environments.

Into 2023, as Michelle continued her university studies, she wove GenAI into each of her assessments, commenting on the ethical and practical implications of its use. Her university lecturer at the time recognised her keen interest in the topic and suggested she create a rubric to overlay and guide the use of generative AI in educational environments. But that fell by the wayside as other priorities came to the fore. Michelle continued to monitor the advancement of GenAI tools, making regular posts about various use cases and the outcomes of trial applications.

One of the most fascinating issues for Michelle was the gradual (and forecast) degradation of factual content. As AI developers scraped the internet for training purposes, the pool of information on which large language models were trained became more flawed: AI-generated content (itself based on biased and flawed initial information) became part of the training data pool, giving rise to even more hallucinations that, for some, may have been quite believable.

It wasn't until late 2024 that the idea of a rubric again surfaced onto Michelle's radar. 

Each year, Michelle is contracted to perform QA functions on presenter materials for a major industry conference. The purpose of the contract is to ensure accuracy and authenticity of content (including proper referencing of source materials) submitted for delivery at the conference.

When performing that role in 2024, Michelle was struck by the rise of session material that was obviously, or highly likely to have been, generated with or by AI. Yet the rate of disclosure about this was low to zero.

One session in particular triggered the first thoughts of the AI Transparency Advice Scale. It listed 10 citations; on checking, only one source was real (and even that one was incorrect). The implications of this really started to hit home.

In parallel, Michelle started to note a shift in social media feeds, with content changing in style and tone compared to the regular 'voice' of the poster, as well as a shift in volume. The ease with which GenAI can generate content has meant a proliferation of information. Yet in many cases, especially with regard to compliance-related matters, the information presented was incorrect, either partially or wholly.

While there are many benefits to the use of GenAI and the efficiencies it brings, and despite students being required to adhere to policies that dictate allowable levels of AI involvement in assessment, disclosure by businesses and educators of their own levels of AI use remains murky.

The AI Transparency Advice Scale was devised as a simple indicator, easily applied to content, that informs readers about the information they are consuming. Especially in contexts like the examples above, the Scale supports informed user action.

Naturally, application of the AITA Scale depends on truthful disclosure by the business or individual developing the content. Other limitations include a degree of subjectivity in determining levels within the Scale.

However, if applied, the AITA Scale gives a clear gauge of the degree of AI use in producing content. We believe this supports transparency in business and educational communications, and fosters the development of critical literacies as audiences are prompted to recognise the origins of what they are seeing, hearing and reading.

- Updated 3.4.25

Previous versions of AITAS

A rating level of 1 (or one dot) signifies content has been developed, written and edited by a human with zero AI involvement in the process

A rating level of 2 (or two dots) signifies content has been refined with the aid of AI. Human-generated content has been assisted by an AI tool to refine/polish output. The output remains predominantly human with approximately 10% (or less) of the production process being AI-driven. A typical situation for a Rating 2 would be where human content at near-final draft state is reviewed by AI for refinement suggestions

A rating level of 3 (or three dots) signifies content has been developed with the aid of AI. The AI may have helped with ideation and/or writing but the output remains mostly human with approximately 30% (or less) of the production process being AI-driven. A typical situation for a Rating 3 would be where AI has assisted the human by generating ideas or content that the human then edits and synthesises to complete the final product

A rating level of 4 (or four dots) signifies content has been mostly developed by AI. A human has reviewed the AI-generated content, checking for accuracy of outputs and has applied edits where necessary. AITAS 4 content is produced with approximately 80% AI effort; 20% or less human involvement

A rating level of 5 (or five dots) signifies content has been developed by AI. Besides the initial prompt to trigger the output, zero human effort has been applied to check, review and/or edit the output

Version 1.2 (March 2025) is below.


Version 1.0 above - basic delineation of possible AI involvement in outputs (October 2024)


Version 1.1 above - expanded detail within levels of possible AI involvement in outputs; acknowledgment that human involvement is required to trigger an output via the initial prompt to the AI (October 2024)