Transparency Without Nuance: The New Threat to Copyright
- Oct 27
- 4 min read
When we talk about copyright and AI, the conversation is usually about the ethics of the content fed into learning models and then replicated without regard for the rights of the author, with no royalty or recognition going back to them. But there is another side to consider: the ownership of content that was generated in collaboration with AI tools.
Now let's introduce a little context on two seemingly unrelated topics, and then we'll jump to the rather terrifying conclusion.
Global responses to transparency measures
A number of jurisdictions have started issuing legislative demands for transparency in the use of AI. The EU AI Act was the first; China has issued its own, and others are following. Many of these are especially focused on deepfakes, but the Chinese legislation does refer to nuance.
In response, the technology giants are rolling out their own forms of compliance, including:
- Photos that are edited even slightly, using functions like the object eraser on your phone, are watermarked with "AI Generated Content".
- Facebook adds an "AI info" tag indicating that content may have been generated with AI, and LinkedIn goes a step further, auto-applying Content Credentials that again mention photos have been generated with AI.
- Google announced that it is embedding SynthID into AI-generated images, audio, text and video. This also applies to what we at AIUC would consider Human-Led or Co-Created content, although at least it preserves the proportionality of AI versus human generation.

Court stance on copyright of generated content
A United States Court of Appeals, in Thaler v. Perlmutter, upheld a ruling that copyright "requires work to be authored in the first instance by a human being to be eligible for copyright registration" (Case Ruling, 2025). I expect that, if the question were raised in other countries, the rulings would be the same.
In Australia, the Copyright Act 1968 has a similar definition, stating: "the owner of the copyright, for any purpose of this Act, shall be deemed to be the person who is the owner of the copyright." The EU's "Enforcement of intellectual property rights" directive likewise refers to a person's rights.
A Skadden article on the US case also notes that "the court acknowledged that there might be disagreements over how much AI contribution is permissible for a work to still be considered authored by a human; however, these line-drawing issues are separate from the core question of whether a machine can be an author."
I found the case particularly encouraging. Given the volume of creative work that has been used to train AI models without compensating the original creators, it is reassuring to see that these systems, and their developers, can't claim ownership over the content they create.
The concern - taking creators' rights away
Transparency is critical, and up to this point my concern with oversimplified labels has been that they don't appropriately recognise the efforts of the people creating content and art. As a result, in the eyes of many, such labeling diminishes the value of the final production. It is misleading, and it raises a worrying thought: when my grandkids look at our photos one day, will they dismiss memories like the sunrise photo as not real?
But I've realised there is an even bigger issue at play. These labels don't just misrepresent the creative process; they actually strip creators of their ownership rights.
Once the photo is labeled "AI Generated", I can no longer claim ownership of that image. I can't claim royalties or licensing fees, nor do I have recourse to prevent it from being used for other purposes, because none of the global copyright protections apply. And even if I go to the effort of cropping the photo to remove the watermark, the watermarked version could be retained, with or without my consent, and used for anything, because it is no longer considered mine.
Think about all the photos in your Google Photos library that are labeled this way. They aren't yours any more. If online platforms like LinkedIn and Facebook mark written content as well, with statements like "may contain AI generated content", that undermines any claim to authorship. Maybe that's acceptable for social media content, but what if publishers or academic platforms start "innocently" applying the same "suspicions" to books, research papers or articles?
When we created the AI Usage Classifications as part of AIUC Global, the need for nuanced declaration was clear. Our intent was to recognise the efforts of creators and rebuild trust with consumers and audiences. What I hadn't considered at the time was that "recognising the effort" also protects creators' rights.

This has only strengthened the case for nuanced transparency. Authors and creators need to understand that without intentional, specific declaration now, current and future oversimplified transparency measures risk erasing not only recognition but also the very rights that protect human creativity.
Transparency must empower, not erase. It should honour the people behind the work, clarify how technology was used, and ensure that creators retain both their credit and their control.
Disclaimer: The views expressed are based on my understanding and interpretation of current copyright principles. They do not constitute legal advice, and readers should seek professional legal counsel for specific guidance regarding intellectual property or copyright matters.

The content in this article is classified as Human-Led™ in accordance with the AI Usage Classification™ standard.
References:
Case Ruling, 2025 - https://media.cadc.uscourts.gov/opinions/docs/2025/03/23-5233.pdf
Skadden article - https://www.skadden.com/insights/publications/2025/03/appellate-court-affirms-human-authorship
Australian Copyright Act 1968 - https://www.legislation.gov.au/C1968A00063/2019-01-01/2019-01-01/text/1/epub/OEBPS/document_1/document_1.html
LinkedIn content credentials application - https://www.socialmediatoday.com/news/linkedin-labels-ai-generated-content/716674/
Meta labeling - https://about.fb.com/news/2024/04/metas-approach-to-labeling-ai-generated-content-and-manipulated-media/
Google SynthID - https://blog.google/feed/synthid-reimagine-magic-editor/#:~:text=Starting%20this%20week%2C%20Google%20Photos,using%20Reimagine%20in%20Magic%20Editor.
How SynthID works - https://www.youtube.com/watch?v=_fMFb2Lv7rI



