
AI Usage Transparency in Education and Academia

  • Aug 7
  • 5 min read

Updated: Aug 17

Academic institutions such as colleges and universities have seen an increase in students' use of generative AI tools in everything from written assessments and coding tasks to creative media projects. This shift has challenged the expectation that education providers will foster creativity, originality of thought and innovation, and has raised questions about how to meaningfully assess student learning in an AI-enabled environment.

My name is Dunja Lewis, and I am the co-founder and Chief Innovation Officer of AIUC Global. I encountered these challenges first-hand over the last year, both as a student completing my Applied Coaching Graduate Certificate and as a university mentor for an industry project unit. Drawing on this dual perspective, this case study explores how the AI Usage Classification™ Standard, developed by AIUC Global, offers an alternative, practical approach to addressing these challenges.

To use or not to use AI - that is the question

Managing stress through AI support

If you look at the mission statements of the 4 public universities in Western Australia, they all set a high bar, referencing impact, creativity, relevance, trustworthiness, leadership, innovation and engagement to various degrees.

But in today’s world, 7 in 10 young people (aged 16-25) are concerned, worried or stressed about school, study and exams, and of those, 2 out of 3 reported that study stress is the most concerning issue for them (ReachOut, 2023, Understanding the Issues Impacting Young People's Mental Health).

For me, although I am a little older than the researched cohort, 2024 meant balancing part-time study, demanding client work, and managing a startup alongside sole parenting responsibilities. It was difficult to manage the conflict between leveraging tools that made it easier to finish the work and staying true to the purpose of the study. Even going to ChatGPT to find references for my own idea, knowing that I was going to dig up those references and read and cite them properly, still made me feel like I was cheating - as if it somehow went against the original goal of gaining a deeper understanding of how to foster people's desire and ability to change.

I know others in my cohort felt the same, but with the support of the staff at ACAP University College, we managed to fumble our way through to a successful outcome even with limited standardised guidance. For me, success was defined by tangible knowledge and techniques I could use day to day. But for students who have spent years in high school being told that getting good grades is the difference between success and failure - how can we expect them to prioritise creativity and originality from the moment they show up?

Does AI Usage impede the achievement of education objectives?

Many education institutions provide and promote AI tools for personalised learning support and targeted feedback on assignments and essays - from Studiosity, which provides AI-powered feedback on assignment drafts, to Turnitin, which scans for AI-generated and plagiarised text and restricts students' ability to submit assignments for marking until they have validated their originality score.

Some lecturers also encourage students to work with AI to polish their language, and with nearly 30% of the WA university student cohort made up of international students, AI can greatly improve the quality of the written content that is submitted.

But beyond this, there is a lack of clarity about the acceptable scale of AI usage. Some reference standards go as far as requesting that every prompt and every AI response be included in the references. On one of my assignments, this would have come to 60-odd pages - 20 times the length of the assignment itself - most of which I never even read.

The base tools made available by education providers are, in my opinion, very supportive of positive student outcomes - if the expected outcome is to submit an assignment that reads well. But when I asked ChatGPT for a summary of the general expectations of a first-year student, it came up with the following list based on university handbooks, common academic frameworks and sector-based publications.


List of first-year university student expectations according to ChatGPT.

Interestingly, what it doesn't mention are the broader mission targets of innovation, thought leadership, impact or creativity. And there is a simple reason for that: education is staged, with expectations changing from year to year. So how do we ensure that we provide clarity to students in a way that is easy to understand, whilst also reflecting the fact that not every year and every topic will be the same?

An alternative - clear objectives and tailored governance

Generally, assessment briefs have very clear learning outcomes. But what if they also included the scale of acceptable AI usage, based on the AI Usage Classifications™?


AI Usage Classification™ Badges

With slight adjustments to submission formats, lecturers could maintain a library of acceptable usage policies tailored to their field. Some examples:

  1. Visual Arts - Acceptable AI Usage Classification: Human-Led™ - Students must submit both their original, unedited photograph or video and the mastered version, and explain the tools and techniques (including AI) that they used to generate them and the challenges experienced.

  2. Year 1 Case Studies - Acceptable AI Usage Classification: Co-Created™ - Students are welcome to leverage AI tools to develop the concepts that they present within their case study. In their references, students are expected to provide a screenshot of the response to the following question posed to the AI engine that they are using: “Please summarise the following <insert AI agent name> chats including <add the names of the chats used to co-create this assignment>”.

  3. Year 3 Case Studies - Acceptable AI Usage Classification: Human-Led™ - Students are welcome to leverage AI tools to identify sources that corroborate their discussion points, and to adjust the language and form of their case study. In their references, students are expected to provide a screenshot of the response to the following question posed to the AI engine that they are using: “Please summarise the following <insert AI agent name> chat, including <add the names of the chats used to support the completion of this assignment>”.


The later-year submissions could also be strengthened with a requirement for an early submission of the expected discussion topic and hypothesis. Because AI draws on an ever-growing and evolving source data set, and the same prompt never gives the same response twice, a simple delay of two to three weeks between the high-level and the final submission would alter the underlying content enough that students attempting to co-create or entirely generate the topic and their assumed conclusions could not produce a sufficient scale of logical, referenceable content for a full assignment without creative thinking.

Although tailoring the expectations may seem like a significant piece of work, many assignment expectations are the same across a variety of disciplines, and the effort pales in comparison to the productivity lost to stress and the time spent managing misaligned expectations.

Conclusion

The use of AI in our society will only continue to grow, and education providers have a responsibility not only to educate students in their chosen fields, but also to equip them to thrive in a rapidly evolving world. By leveraging consistent and nuanced language from the AI Usage Classification™ Standard, to define the acceptable scale of AI use and influence on the submitted content or artifacts, education providers can foster trust through transparency in what people are engaging with and consuming. This can be further strengthened by slight adjustments to submission protocols. In doing so, we believe students can be effectively supported to navigate this changing landscape with confidence, whilst using AI ethically, efficiently and in alignment with the broader expectations.




The content in this post is classified as Human-Led™ in accordance with the AI Usage Classification™ Standard.


© 2025 by AIUC Global Pty Ltd

All rights reserved.


