Tuesday 6 June 2023

Mark Zuckerberg Questioned by Senators via Letter Regarding Meta’s LLaMA Leak

Meta’s LLaMA Model Under Fire from US Senators Regarding Risk of Misuse

In a letter to Meta CEO Mark Zuckerberg, US Senators Richard Blumenthal (D-CT) and Josh Hawley (R-MO) raised concerns about the “potential for misuse” of the company’s open-source large language model LLaMA. The senators criticized Meta’s “unrestrained and permissive” distribution of LLaMA, which they claim represents “a significant increase in the sophistication of the AI models available to the general public, and raises serious questions about the potential for misuse or abuse.” The letter also questioned Meta’s risk assessment and policies, and asked what steps the company is taking to prevent misuse and abuse of the model. LLaMA was reportedly trained on public data and was released in February for download by approved researchers.

The senators’ letter takes aim at the open-source community, where debate has run hot in recent months following a wave of large language model releases. LLaMA’s performance was immediately hailed as superior to GPT-3’s, despite the model having roughly ten times fewer parameters. Many subsequently released open-source models were built on LLaMA, giving developers around the world access to a GPT-level LLM for the first time. None of these open-source LLMs are yet available for commercial use, because LLaMA itself is not licensed for commercial use and OpenAI’s GPT-3.5 terms of use prohibit using the model to develop AI models that compete with OpenAI. However, those who build models from the leaked model weights may not abide by those rules.

In an interview with VentureBeat, Meta’s VP of AI research Joelle Pineau said that accountability and transparency in AI models are essential. She pointed to Stanford’s Alpaca project as an example of “gated access”: Meta made the LLaMA weights available to academic researchers, who fine-tuned them to create a model with slightly different characteristics. While she did not comment to VentureBeat on the 4chan leak that led to the wave of other LLaMA-based models, she told The Verge in a press statement that “some have tried to circumvent the approval process.” Pineau also said that Meta received complaints from both sides regarding its decision to partially open LLaMA.

In summary, Meta is under scrutiny for LLaMA’s unrestrained availability and potential for misuse, with concerns raised about its risk assessment and policies. Critics worry about the release of fundamentally dangerous models, while Meta’s VP of AI research argues for transparency and accountability.




from GPT News Room https://ift.tt/FXjHhk9
