In a potentially transformative year for content creators, a series of lawsuits has been initiated by news outlets, comedians, authors, and creative professionals against tech giants, alleging unfair use of their work in the development of artificial intelligence (AI). These legal actions target the likes of OpenAI and its major backer Microsoft, with the New York Times taking a prominent role in accusing them of unlawfully using millions of pieces of journalistic content to train large language models.
Publishers and content creators face a dual challenge: inadequate compensation for their contributions to AI model training, and the looming threat of AI disrupting the online search business. The fear is that AI models may eventually replace the search traffic that both tech platforms and publishers monetize.
At present, creators and publishers generate revenue through digital advertising when users access their websites via search engine results. However, the power dynamics have long favored Big Tech, leaving content creators at the mercy of revenue-sharing terms dictated by tech giants.
Recent legal actions have drawn attention to the need for fair compensation, a debate that gained traction when Australia and Canada compelled tech platforms to negotiate payments with publishers a couple of years ago. Nevertheless, these negotiated fees remain a fraction of what many experts consider fair value.
A study conducted by researchers from Columbia University, the University of Houston, and the Brattle Group highlighted the scale of the shortfall. They estimated that if Google allocated 50% of the value created by news content to US publishers, the annual payout would range between $10 billion and $12 billion. By contrast, the New York Times, one of the largest news publishers, reportedly receives a mere $100 million over three years.
The rise of AI exacerbates this asymmetry. Chatbots like OpenAI's ChatGPT and Google's Bard provide direct answers to user queries without directing users to creators' websites. This threatens to confine users within Big Tech's walled gardens, and the fact that these AI models are trained on copyrighted content compounds the concern.
The impact extends beyond traditional content creators: brands are deploying AI-driven virtual social media influencers to avoid the fees charged by real ones. The shift toward AI also raises concerns about the displacement of creative, white-collar jobs in industries like Hollywood.
The evolution of the internet, initially focused on facilitating user navigation among original web pages, has taken a turn with tech platforms aiming to retain users within their ecosystems. While AI disrupts this model, it also aligns with the broader strategy of surveillance capitalism, emphasizing data and attention mining for higher profit margins.
Lawsuits, such as the one filed by the Arkansas news organization Helena World Chronicle, underscore publishers' concerns regarding Google's "unlawful tying" arrangements and the introduction of Bard in 2023. The lawsuit argues that the chatbot was trained on content from various publishers without any of them being compensated.
Whether or not chatbots eventually replace traditional search, the winners in this iteration of surveillance capitalism are undoubtedly the Big Tech companies. The ongoing legal battles may determine whether they will be compelled to pay a fair price for the content they exploit.