
A California law firm has filed a class-action lawsuit against OpenAI for "stealing" personal data to train ChatGPT.

Clarkson Law Firm, in a complaint filed Wednesday in the Northern District of California, alleges ChatGPT and DALL-E "use stolen private information, including personally identifiable information, from hundreds of millions of internet users, including children of all ages, without their informed consent or knowledge." According to the complaint, OpenAI scraped 300 billion words from the internet to train its large language model, including personal information and posts from social media sites like Twitter and Reddit. The law firm claims OpenAI "did so in secret, and without registering as a data broker as it was required to do under applicable law."


OpenAI has drawn controversy over how and what data it collects to train and further develop ChatGPT. Until recently, there was no explicit way for users to opt out of letting OpenAI use their conversations and personal information to feed the model. ChatGPT was temporarily banned in Italy under Europe's General Data Protection Regulation (GDPR) for inadequately protecting user data, especially that of minors. The lawsuit touches on OpenAI's opaque privacy policies for existing users, but largely focuses on data scraped from the web that was never explicitly intended to be shared with ChatGPT. Through billion-dollar investments from Microsoft and subscriber revenue from ChatGPT Plus, OpenAI has profited from this data without compensating the people it came from.


The 15 counts in the complaint include violation of privacy, negligence for failing to protect personal data, and larceny by illegally obtaining massive amounts of personal data to train its models. Datasets like Common Crawl, Wikipedia, and Reddit, which include personal information, are publicly available as long as companies follow the protocols for purchasing and using the data. But OpenAI allegedly used this data in ChatGPT without users' permission or consent. Even though people's personal information is public on social media sites, blogs, and articles, using that data outside of its intended platform can be considered a violation of privacy.


In Europe, the GDPR draws a legal distinction between data that is in the public domain and data that is free to use, but in the US, that question is still up for debate. Nader Henein, a privacy research VP at Gartner who thinks the sentiment of the lawsuit is valid, said, "People should have control as to how their data is used, even when it is available in the public domain." But Henein is unsure the US legal system would agree.


Related Stories
  • OpenAI quietly lobbied for weaker AI regulations while publicly calling to be regulated
  • OpenAI sued for defamation after ChatGPT allegedly fabricated fake embezzlement claims
  • OpenAI calling for AI regulation is a solid step in no direction

Ryan Clarkson, the firm's managing partner, said in a blog post that it's critical to act now under existing laws rather than wait for the executive and judicial branches to respond with federal regulation. "We cannot afford to pay the cost of negative outcomes with AI like we've done with social media, or like we did with nuclear. As a society, the price we would all pay is far too steep."

Topics: Privacy, ChatGPT
