AI Transparency Advice Scale (AITAS)
For Added Trust in Communications
Developed by Michelle Charlton in early October 2024, the AI Transparency Advice Scale (AITA Scale or AITAS) aims to give audiences an understanding of the origin of what they are reading.
With the proliferation of content produced by increasingly sophisticated generative AI tools, it is becoming harder to discern whether information is a human or a machine output.
While the benefits of generative AI tools can be numerous, those benefits can be offset by the risks when content sources become increasingly opaque.
The AITA Scale rates output material by the ratio of human to AI involvement in its production; a simple sketch of the rating bands follows the level descriptions below.
A rating level of 1 (or one dot) signifies content has been developed, written and edited by a human, with zero AI involvement in the process.
A rating level of 2 (or two dots) signifies content has been refined with the aid of AI. Human-generated content has been assisted by an AI tool to refine/polish the output. The output remains predominantly human, with approximately 10% (or less) of the production process being AI-driven. A typical situation for a Rating 2 would be where human content at near-final draft state is reviewed by AI for refinement suggestions.
A rating level of 3 (or three dots) signifies content has been developed with the aid of AI. The AI may have helped with ideation and/or writing, but the output remains mostly human, with approximately 30% (or less) of the production process being AI-driven. A typical situation for a Rating 3 would be where AI has assisted the human by generating ideas or content that the human then edits and synthesises to complete the final product.
A rating level of 4 (or four dots) signifies content has been mostly developed by AI. A human has reviewed the AI-generated content, checking outputs for accuracy and applying edits where necessary. AITAS 4 content is produced with approximately 80% AI effort and 20% (or less) human involvement.
A rating level of 5 (or five dots) signifies content has been developed by AI. Beyond the initial prompt to trigger the output, zero human effort has been applied to check, review and/or edit the output.
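To make those bands concrete, here is a minimal sketch in Python. It is illustrative only, not part of the Scale itself: the function name `aitas_rating`, the exact cut-offs and the `human_reviewed` flag are assumptions for this example, since the Scale relies on honest self-assessment rather than precise measurement.

```python
def aitas_rating(ai_share: float, human_reviewed: bool = True) -> int:
    """Map a self-assessed share of AI-driven production effort
    (0.0 = entirely human, 1.0 = entirely AI) to an AITAS level.

    Illustrative sketch only: the band edges are approximate, and the
    difference between levels 4 and 5 rests on whether a human checked,
    reviewed and/or edited the AI output.
    """
    if ai_share <= 0.0:
        return 1  # developed, written and edited by a human; no AI
    if ai_share <= 0.10:
        return 2  # AI used only to refine/polish near-final human content
    if ai_share <= 0.30:
        return 3  # AI assisted ideation/drafting; human edits and synthesises
    if human_reviewed:
        return 4  # mostly AI-produced; human checked accuracy and edited
    return 5  # AI output untouched beyond the initial prompt

# Example: a human draft polished by AI (roughly 5% AI effort) rates AITAS 2.
assert aitas_rating(0.05) == 2
assert aitas_rating(0.9, human_reviewed=False) == 5
```

In practice the input is a judgement call rather than a measured quantity, which is one of the acknowledged limitations of the Scale.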
The AITA Scale is still in its infancy, with further analysis and refinements likely warranted.
Version 1.2 (March 2025) is below.
By the way, this page is AITAS Rating 1.
One of the challenges in developing AITAS was designing a rubric that would adequately represent a variety of circumstances without being overly complex.
If you have feedback, we'd love to hear it: Give feedback on AITAS here
When the generative AI (GenAI) tool ChatGPT was released to the mainstream public in late 2022, Michelle was an early adopter of the technology and immediately began exploring the use cases and implications of ChatGPT (and of other tools released subsequently), especially with regard to teaching and learning environments.
Into 2023, as Michelle continued her university studies, she wove GenAI into each of her assessments, commenting on the ethical and practical implications of its use. Her university lecturer at the time recognised Michelle's keen interest in the topic and suggested she create a rubric to overlay and guide the use of generative AI in educational environments. That idea fell by the wayside as other priorities came to the fore, but Michelle continued to monitor the advancement of GenAI tools, making regular posts about various use cases and the outcomes of trial applications.
One of the most fascinating issues for Michelle was the gradual (and forecast) degradation of factual content: as AI developers scraped the internet for training purposes, the pool of information on which the large language models were trained became more flawed, because AI-generated content (itself based on biased and flawed initial information) became part of the training data pool, giving rise to even more hallucinations that, for some readers, may have been quite believable.
It wasn't until late 2024 that the idea of a rubric resurfaced on Michelle's radar.
Each year, Michelle is contracted to perform QA functions on presenter materials for a major industry conference. The purpose of the contract is to ensure accuracy and authenticity of content (including proper referencing of source materials) submitted for delivery at the conference.
When performing that role in 2024, Michelle was struck by the rise of session material that was obviously, or at least highly likely to have been, generated with or by AI. Yet the rate of disclosure of this was low to zero.
One session in particular triggered the first thoughts of the AITA Scale. Of the ten citations it listed, on checking, only one source was real (and even that was cited incorrectly). The implications of this really started to hit home.
In parallel, Michelle started to note a shift in social media feeds, with content changing in style and tone compared to the poster's regular 'voice', as well as a shift in volume. The ease with which GenAI can generate content has meant a proliferation of information. Yet in many cases, especially with regard to compliance-related matters, the information presented was partially or wholly incorrect.
While the use of GenAI brings many benefits and efficiencies, and while students are required to adhere to policies that dictate allowable levels of AI involvement in assessment, disclosure by businesses and educators of their own levels of AI use remains murky.
The AI Transparency Advice Scale was devised as a simple indicator, easily applied to content, to inform readers about the information they are consuming. Especially in contexts like the examples above, the Scale supports informed action by its audience.
Naturally, application of the AITA Scale depends on truthful disclosure by the business or individual developing the content. Other limitations include a degree of subjectivity in determining levels within the Scale.
If applied, however, the AITA Scale gives a clear gauge of the degree of AI use in producing a piece of content. We believe this supports transparency in business and educational communications, and fosters the development of critical literacies as audiences are prompted to recognise the origins of what they are seeing, hearing and reading.
- Updated 3.4.25
Version 1.0 above - basic delineation of possible AI involvement in outputs (October 2024)
Version 1.1 above - expanded detail within levels of possible AI involvement in outputs; acknowledgment that human involvement is required to trigger an output via the initial prompt to the AI (October 2024)