Throughout the rise of artificial intelligence (AI) over the past few years, one theme has become hard to ignore: AI companies are training their models on data without permission.
Some of the tech sector’s biggest names have been sued for exactly that, including Google (GOOGL), which is facing multiple lawsuits. Meanwhile, a group of media companies led by the New York Times (NYT) took OpenAI to federal court earlier this year, alleging that the AI research organization had violated copyright law by training on their content.
While some content owners have reached settlement agreements after suing tech companies for training models on their copyrighted data, these two defendants have opted for a more aggressive approach. And because the world of AI remains largely unregulated, much remains uncertain.
Now, both tech companies are advocating for a major policy change in an attempt to head off future lawsuits.
Sam Altman, CEO of OpenAI, is among the tech leaders advocating for less restrictive AI regulation.
Silicon Valley takes the fight to Washington, D.C.
Because AI remains a relatively new technology, regulators are likely still unsure how to approach it properly. For companies like Google and OpenAI, which have been accused of illegally using copyrighted data to train large language models (LLMs), that uncertainty has likely proved beneficial.
But these two companies seem bent on changing the rules to ensure that content creators aren’t able to stop them from using their material.
They have both submitted proposals to the White House advocating for fewer restrictions on the data they can use for model training, invoking the goal of global AI dominance that Vice President JD Vance cited at the recent Paris AI Action Summit.
Related: JD Vance shocks AI world with latest decision
In a proposal addressed to Faisal D’Souza of the White House Office of Science and Technology Policy, OpenAI issued the following statement:
“American copyright law protects the transformative uses of existing works, ensuring that innovators have a balanced and predictable framework for experimentation and entrepreneurship. This approach has underpinned American success through earlier phases of technological progress and is even more critical to continued American leadership on AI.”
In its own policy proposal, Google makes similar arguments, highlighting the need for what it describes as “balanced copyright rules.” But as Ars Technica notes, “its preference doesn’t seem all that balanced,” as it would tilt the scales in favor of companies that already have vast resources.
Other experts have similar takes on the implications of this AI regulation campaign. Phil Mataras, founder of cloud network AR.IO, spoke to TheStreet about what these events could mean for the industry.
“Google and OpenAI’s petition to the US government to relax copyright laws so they can train their AI models free of lawsuits is a massive issue that comes with incredible ethical and moral implications,” he states.
“With the rapid advancement of AI, creators are already fighting for their lives. Relaxing laws around copyright for AI models is a clear attack on them that will mean more lost revenue as the value of their work is further diminished.”
The future of AI regulation is hanging in the balance
This petition from Google and OpenAI comes as Capitol Hill weighs how best to regulate AI, which may ultimately mean not regulating it at all. The White House is preparing to implement its AI Action Plan by mid-2025, which isn’t far away.
Related: Former Google CEO makes shocking case against AI proposal
Given Vance’s remarks at the Paris AI Action Summit, it seems likely that the White House will favor easing regulatory measures on AI technology. This would make it easier for companies like Google and OpenAI to continue training their models with even fewer guardrails, if any.
That could violate the rights of everyone creating the content and data used to train these models. But as Mataras adds, the models themselves are not always verified and are “potentially filled with bias from a centralized organization,” creating even further risk.
“With the proliferation of AI throughout the internet, particularly Google search, we risk one or a few organizations having massive control over the truth. And with current cloud models that do not permanently store data, we may not be able to question that narrative or prove any errors,” he notes.