On December 27, 2023, The New York Times filed a copyright infringement lawsuit against Microsoft and OpenAI in the Federal District Court in Manhattan, accusing them of using millions of the publication's articles to train their AI models. In a new update, OpenAI has refuted the claims.
OpenAI Was In Discussion With The New York Times A Few Days Before The Lawsuit
In a blog post published on January 8, 2024, the ChatGPT maker says it disagrees "with the claims in The New York Times Lawsuit" and sees the case as an opportunity to clarify its business practices. Arguing that the publication isn't telling the full story, OpenAI goes on to explain that it collaborates with news organizations "to explore opportunities, discuss their concerns, and provide solutions."
Here’s How The ChatGPT Maker Defends The Use Of Publicly Available Material For Training Its AI Models
OpenAI argues that using publicly available internet material to train AI models is fair use. The company views "this principle as fair to creators, necessary for innovators, and critical for US competitiveness." Per the blog, this view of training as fair use is shared by academics, civil society groups, startups, leading US companies, authors, and others who have "submitted comments to the US Copyright Office."
However, this is also where the company shares some key information. Per the blog, OpenAI offers publishers an opt-out process that lets them stop the company from training its AI models on their content, and The New York Times only adopted it recently, in August 2023.
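For context, the opt-out OpenAI introduced in August 2023 works through the standard robots.txt file: a publisher can block the company's GPTBot web crawler, which stops its pages from being collected for future training. A minimal robots.txt entry blocking the crawler site-wide looks like this:

    User-agent: GPTBot
    Disallow: /

Note that this only affects what GPTBot collects going forward; it does not remove content that was already gathered for earlier training runs.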
Further, OpenAI writes that it was in discussions with the publication before the lawsuit. The two parties were trying to work out a "high-value partnership around real-time display with attribution in ChatGPT," which would have given the publication a new way to connect with its existing readers and reach new ones. OpenAI adds that it had explained to the publication that its articles, like any single source, didn't contribute to the training of the existing models.
OpenAI Claims The Publication “Intentionally” Manipulated Prompts For Regurgitation
However, the lawsuit filed on December 27 "came as a surprise" to OpenAI, which learned about it by reading the publication itself. The regurgitations The New York Times induced "appear to be from years-old articles that have proliferated on multiple third-party websites," as publications often distribute their content via media networks to reach a wider audience.
OpenAI claims that the publication intentionally manipulated prompts by including lengthy excerpts of its articles for the AI model to regurgitate. The ChatGPT maker also suggests that the publication either “instructed the model to regurgitate or cherry-picked their examples from many attempts.” Concluding the blog, OpenAI writes that it regards the lawsuit “to be without merit” and is hopeful for a constructive partnership in the future.
One Must Wait For The Court’s Investigation And Verdict
On the other hand, The New York Times has said that it raised concerns about the use of its content for training AI chatbots, but to no avail. At this point, it is difficult to favor either party's account, and readers should wait for the court to examine the issue and deliver a ruling.