Innovation, theft, or workforce shift? AI, copyright, and the need for upskilling & reskilling
AI is reshaping copyright—and creatives are paying the price. As Big Tech bends the rules, the real challenge is clear: protect creators, enforce fair use, and reskill the workforce to thrive alongside AI, not be replaced by it.
Today, we’re diving into one of the most heated debates of our tech-driven era: how AI companies are handling copyright law. Are they enabling “groundbreaking innovation,” or just rewriting the rules for their own benefit? And beyond that: What does this situation mean for creators, and how can they protect what is rightfully theirs? Let’s break it down.
Recently, OpenAI and Google pushed the U.S. government to classify AI model training on copyrighted data as “fair use” (in effect, a relaxation of copyright rules). Their argument? Limiting AI training on copyrighted materials could slow down America’s tech development and make the country lag behind rivals like China.
OpenAI went so far as to claim that its proposal “can strengthen America’s lead on AI and in so doing, unlock economic growth, lock in American competitiveness, and protect our national security.”
But tech companies have double standards when it comes to intellectual property. Back in 2011, Google agreed to pay $12.5 billion for Motorola Mobility, largely to get its hands on the company’s patent portfolio. Why is this tech giant willing to pay billions for Motorola’s patents, but not individual creators? Something doesn’t add up.
On top of that, the European Union is urging Big Tech to share detailed summaries of the data used to train their models. Sadly, the companies are straight-up refusing to do so. If they genuinely believe their AI models are trained fairly and ethically, why the hesitation to lift the curtain?
Beyond the legal battle, this is a defining moment for creatives. AI is here to stay. Will creators be left behind, or will they upskill and reskill to work alongside it? And speaking of reskilling: Who’s actually doing it? Everyone’s hyped about “new job descriptions,” but where are the jobs? Who’s making sure displaced workers actually land on their feet?
Thomas Wolf, co-founder of AI startup Hugging Face, is one of the rare voices pushing for openness, though he knows not everyone is willing to follow suit. “It’s hard to know how it will work out. There is still a lot to be decided,” he admitted.
What is “fair use,” actually?
But first things first. What exactly do companies like OpenAI and Google mean when they talk about “fair use”? Because from what we’re seeing, their definition is a bit... convenient.
According to the Stanford Copyright & Fair Use Center, fair use is when the original work is used in a way that creates “new information, new aesthetics, new insights, and understandings,” as in criticism or commentary. In other words, fair use must be “transformative.”
But using copyrighted content to train AI models doesn’t create anything new in that sense. It simply feeds existing work into algorithms without offering anything fresh in return.
Just days ago, Studio Ghibli got dragged into the AI copyright mess after fans spotted a flood of AI-generated images and trailers mimicking its signature hand-drawn style. The internet went feral. Sam Altman himself, CEO of OpenAI, set his X profile picture to a Ghibli-style avatar created with his own company’s image-generation feature.
Artist Karla Ortiz, who is suing other AI image generators for copyright infringement, argued it’s “another clear example of how companies like OpenAI just do not care about the work of artists and the livelihoods of artists.”
Ghibli’s work is the result of decades of artistry, the kind no algorithm can fake. If AI can grab from legends like them without a second thought, what chance do smaller creators have to protect themselves?
And here’s the million-dollar question: Why has no government done anything about it? Where are the investigations, the lawsuits, the perp walks? Why does Altman get to break the rules, shrug it off, and keep cashing checks while artists get steamrolled? The governments’ radio silence is telling. If they’re not protecting creators, the real question is: Who are they protecting?
A fair AI for creators
How do you protect creators when AI systems are using their work without paying them a dime? One approach is stronger copyright enforcement. Another is empowering creators to adapt. Take, for instance, Fairly Trained, a project dedicated to building transparent and fair AI models trained with creators’ consent.
And beyond advocacy, there’s another way: learning how to work with AI. Musicians are experimenting with AI-assisted production. Writers are using AI-powered tools for editing and ideation. Designers are integrating AI-generated elements into their creative process.
All of them are developing a new skill: the ability to leverage AI as a tool instead of being overshadowed by it.
Fair use starts with playing fair
Innovation? Great. Competing internationally? Fair game. But if we want the rules to be fair, we need to actually play by them. And if governments allow Big Tech to bend those rules, the fallout could be massive. We’re talking about musicians, writers, filmmakers, designers — all left unprotected.
The truth is, creators should be getting paid when their work is used to train AI. Think of how streaming platforms pay for music. Spotify doesn’t just indiscriminately add music to its platform; it makes deals, pays record labels, and compensates artists. AI should be doing the same for creators. Until then, it’s a free-for-all where the people who actually make the stuff get nothing in return.
And for countries around the world, AI is also a matter of national sovereignty. If the U.S. is building the AI models everyone else depends on, what happens when those models come with a price tag? Will other countries pay just to keep up? Countries that don’t develop their own tech are going to be at the mercy of those who do.
AI’s future shouldn’t come at creativity’s expense, but creativity needs to evolve to keep up. If Big Tech wants to claim “fair use,” they’ve got to play fair. Otherwise, we’re all just building our own digital prisons and paying rent to stay inside.