Wednesday, 18 October 2023

Stanford study reveals lack of transparency in the world’s largest AI models


No major developer of AI foundation models discloses enough information about the potential impact of its models on society, according to a recent report from Stanford HAI (the Stanford Institute for Human-Centered Artificial Intelligence).

Stanford HAI recently published its Foundation Model Transparency Index, which graded the creators of 10 prominent AI models on how much they disclose about how their systems are built and used. The most transparent model was Meta’s Llama 2, followed by BigScience’s BLOOMZ and OpenAI’s GPT-4; none of the models, however, scored particularly high on the index.

The evaluation also covered Stability’s Stable Diffusion, Anthropic’s Claude, Google’s PaLM 2, Cohere’s Command, AI21 Labs’ Jurassic-2, Inflection’s Inflection-1, and Amazon’s Titan.

Transparency Evaluation and Model Disclosure

The Stanford HAI researchers acknowledge that transparency is a broad concept to evaluate. Their assessment rests on 100 indicators covering how each model is built, how it functions, and how it is used. Working from publicly available information, the researchers assigned each model a score, asking, for example, whether the developer discloses its partners and third-party developers and whether it identifies when private information is used in training.
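To make the arithmetic concrete, the sketch below shows how a yes/no indicator rubric of this kind can be aggregated into a percentage score. It is a minimal Python illustration, not the index’s actual rubric: the indicator names and the transparency_score helper are hypothetical, while the real index checks each model against 100 binary indicators and reports the share satisfied.

    # Minimal sketch: aggregate binary transparency indicators into a
    # percentage score. Indicator names below are hypothetical examples,
    # not the index's real rubric (which uses 100 yes/no indicators).
    from typing import Mapping

    def transparency_score(disclosures: Mapping[str, bool]) -> float:
        """Return the percentage of indicators a developer satisfies."""
        if not disclosures:
            raise ValueError("no indicators provided")
        return 100.0 * sum(disclosures.values()) / len(disclosures)

    # Hypothetical subset of indicators for a single model.
    example_model = {
        "publishes_model_research": True,
        "discloses_training_data_sources": True,
        "identifies_use_of_private_information": False,
        "lists_partners_and_third_party_developers": True,
        "provides_channel_for_societal_impact_complaints": False,
    }

    print(f"transparency score: {transparency_score(example_model):.0f}%")
    # -> transparency score: 60%

Scoring every indicator as a plain yes/no keeps the comparison across developers mechanical and reproducible, which is presumably why the published results land in a fairly narrow band of percentages.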

Among the evaluated models, Meta’s Llama 2 scored highest at 54%, doing best on model basics because Meta has published its research on the model. The open-source BLOOMZ followed closely at 53%, while GPT-4 scored 48%, despite OpenAI’s relatively closed design approach, with Stable Diffusion just behind at 47%.

OpenAI releases little of its research and does not disclose its data sources, yet GPT-4 still ranked high because of the abundance of publicly available information about its partnerships. OpenAI collaborates with many companies that integrate GPT-4 into their products, which leaves a substantial public record.

The Verge reached out to several companies including Meta, OpenAI, Stability, Google, and Anthropic for comment but has not yet received any responses.

The Stanford researchers found, however, that none of the model creators disclosed any information about societal impact, including where to direct privacy complaints, copyright disputes, or bias grievances.

A Benchmark for Transparency

The Stanford Center for Research on Foundation Models intends the index to serve as a benchmark for governments and companies, according to Rishi Bommasani, society lead at the center and one of the researchers behind the study. Bommasani notes that proposed regulations such as the EU’s AI Act may soon require developers of large foundation models to provide transparency reports.

“Our goal with the index is to make models more transparent and break down this vague idea into measurable aspects,” says Bommasani, highlighting that the study focused on one model per company to facilitate comparisons.

Although generative AI has a substantial open-source community behind it, major companies in the field increasingly decline to share their research and code publicly. OpenAI, for instance, despite its name, no longer publishes its research, citing competition and safety concerns.

While the group is open to expanding the index’s scope, Bommasani says the evaluation remains limited for now to the 10 foundation models already assessed.

Editor Notes: Prioritizing Transparency in AI Development

The Stanford HAI report sheds light on the lack of transparency surrounding AI foundation models. As AI technology continues to advance, it is crucial for developers and companies to prioritize transparency in their work. Transparent models not only build trust between developers and users but also give society a better understanding of the potential impact and ethics of AI systems.

Transparency reports, disclosure of data sources, and clear channels for privacy, bias, and copyright concerns are essential steps toward responsible AI development and deployment. AI has the potential to greatly benefit society, but only when it is developed and used responsibly.

For more news and updates on artificial intelligence and related topics, visit GPT News Room.
