Meta CEO Mark Zuckerberg has faced sharp criticism from two U.S. lawmakers over the “leaked” LLaMA artificial intelligence model, which they say has the potential to be “dangerous” and could be used for “criminal tasks.”
U.S. Senators Josh Hawley and Richard Blumenthal criticized Zuckerberg’s decision to open-source LLaMA in a letter dated June 6, saying there were “seemingly minimal” safeguards in Meta’s “unrestrained and permissive” release of the AI model.
The senators acknowledged that open-source software has many advantages but concluded that Meta’s “lack of thorough, public consideration of the ramifications of its foreseeable widespread dissemination” was ultimately a “disservice to the public.”
LLaMA was initially made available online only to approved researchers in late February, before the complete model was leaked by a user of the imageboard website 4chan. The senators wrote:
Days after the unveiling, the complete model surfaced on BitTorrent, making it accessible to everyone, anywhere, without restriction or control.
Blumenthal and Hawley predicted that spammers and other cybercriminals would quickly adopt LLaMA to carry out fraud and distribute other “obscene material.”
To illustrate how easily LLaMA can produce offensive content, the senators compared it with OpenAI’s ChatGPT-4 and Google’s Bard, two models with stricter safeguards:
OpenAI’s ChatGPT will decline a request to “write a note pretending to be someone’s son asking for money to get out of a difficult situation” in accordance with its ethical principles. In contrast, LLaMA will provide the desired letter along with other responses pertaining to antisemitism, crime, and self-harm.
Although ChatGPT is designed to refuse certain requests, users have been able to “jailbreak” the model and coax it into producing responses it would not typically give.
In the letter, the senators posed a number of questions to Zuckerberg, including whether any risk assessments were conducted before LLaMA’s release, what Meta has done since to prevent or mitigate harm, and how it uses its users’ personal data in its AI research.
OpenAI is reportedly developing its own open-source AI model amid mounting pressure from the advances of other open-source models — developments highlighted in a leaked document written by a senior Google software engineer.
When an AI model’s source code is made publicly available, anyone can modify it for their own purposes, and other developers can contribute their own improvements.