Wednesday, 18 October 2023

Foundation models are shrouded in secrecy, according to Stanford’s AI transparency index

Stanford University Researchers Publish Report on Transparency of AI Models

Researchers from Stanford University recently released a report assessing the transparency of popular foundation artificial intelligence models developed by companies such as OpenAI LP and Google LLC. According to the report, none of these models provides sufficient transparency or significant information about its potential societal impact. The researchers emphasized the need for more disclosure about the data and human labor involved in training these AI models.

The report, called the Foundation Model Transparency Index, was compiled by Stanford’s Human-Centered Artificial Intelligence research group. The rankings were based on metrics that evaluated the extent to which the creators of these models disclose information about their work and how their systems are used. The top-ranked model in terms of transparency was Meta Platforms Inc.’s Llama 2, with a score of 54%. It was followed by BigScience’s BloomZ at 53% and OpenAI’s GPT-4 at 48%.

Findings and Rankings of the Transparency Index

The transparency index included various AI models, such as Stability AI Ltd.’s Stable Diffusion, Anthropic PBC’s Claude, Google’s PaLM 2, Cohere Inc.’s Command, AI21 Labs Inc.’s Jurassic-2, Inflection AI Inc.’s Inflection, and Amazon Web Services Inc.’s Titan. However, none of these models received high scores, indicating a lack of transparency.

While the researchers acknowledged that transparency is a complex and subjective concept, they developed 100 indicators to assess information about how these models are built, how they function, and how they are used. They collected publicly available data on each model and assigned scores based on the indicators. The rankings also took into account the disclosure of partners and third-party developers, the use of private information, and other factors.
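As a rough illustration of how an indicator-based index of this kind can be scored, the sketch below computes a model’s transparency percentage as the share of satisfied pass/fail indicators. The indicator names and the scoring function here are hypothetical simplifications for illustration only; the actual report defines its own 100 indicators and scoring rubric.

```python
# Minimal sketch of indicator-based scoring, assuming each indicator is
# recorded as a simple pass/fail judgment per model. The indicator names
# below are hypothetical and are not taken from the Stanford report.

def transparency_score(indicators: dict[str, bool]) -> float:
    """Return the share of satisfied indicators as a percentage."""
    if not indicators:
        return 0.0
    return 100 * sum(indicators.values()) / len(indicators)

# Hypothetical example: three of four illustrative indicators satisfied -> 75%.
example = {
    "discloses_training_data_sources": True,
    "documents_human_labor_in_training": False,
    "names_third_party_developers": True,
    "describes_privacy_safeguards": True,
}
print(f"Transparency score: {transparency_score(example):.0f}%")
```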

Meta’s open-source Llama 2 received the highest score, owing in part to the company’s previously published research on the model’s development. The report highlighted the advantage of open-source models, with BloomZ, another open-source model, coming in second place. OpenAI, despite its more opaque approach, managed a score of 48%. The researchers noted that while OpenAI does not frequently publish its research or disclose the data sources used to train GPT-4, some information is publicly available through the company’s partners.

Critical Assessment and Regulatory Implications

The researchers’ main critique is that even open-source models lack information about their societal impact, such as avenues for addressing privacy, copyright, and bias concerns. The Foundation Model Transparency Index aims to create a reliable benchmark for governments and companies as they navigate the regulatory landscape surrounding AI. The European Union is in the process of enacting an Artificial Intelligence Act that will impose strict regulations on AI. The index can serve as a valuable tool for ensuring compliance with the act, which requires companies using AI tools like GPT-4 to disclose copyrighted materials used in development.

Call for Transparency and Future Updates

Stanford’s Human-Centered Artificial Intelligence research group intends to update the Foundation Model Transparency Index regularly, expanding its scope to include additional models. By providing concrete measures of disclosure, the index aims to stimulate more openness within the AI industry.

Editor’s Note

Transparency is a crucial aspect of the AI industry, and the Foundation Model Transparency Index serves as an important initiative to assess and promote transparency among AI models. As we move towards a more regulated environment for AI, it is essential for companies to prioritize transparency and disclose relevant information to address societal concerns. The index can also aid governments and organizations in ensuring compliance with upcoming regulations. By encouraging transparency, we can build greater trust and accountability in the AI ecosystem. For more news and updates on the latest advancements in AI, visit GPT News Room.




