Tuesday 24 October 2023

Major transparency assessment shows that AI’s leading LLM creators have all failed

**Eye on AI: Stanford Institute Releases Foundation Model Transparency Index**

In recent years, transparency around the development and use of large language models (LLMs) has been declining even as their societal impact keeps growing. Recognizing this trend, the Stanford Institute for Human-Centered AI (HAI) conducted a comprehensive evaluation of major foundation model developers and released the Foundation Model Transparency Index (FMTI), which scores developers on 100 indicators of transparency spanning how models are built, how they function, and how they are used downstream.
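To make the scoring mechanics concrete, here is a minimal sketch of how an index built from pass/fail indicators could be aggregated. The domain names and indicators below are illustrative placeholders, not the FMTI's actual rubric, which defines 100 indicators across its own domains.

```python
# Minimal sketch of aggregating a transparency index from binary
# (satisfied / not satisfied) indicators grouped into domains.
# Indicator and domain names are illustrative only.
from collections import defaultdict

# Each entry: (domain, satisfied?). A real rubric would have 100 of these.
indicators = [
    ("data", True),
    ("data", False),
    ("labor", False),
    ("compute", True),
    ("usage_policy", True),
    ("impact", False),
]

def score(indicators):
    """Overall score = count of satisfied indicators; also report per-domain totals."""
    per_domain = defaultdict(lambda: [0, 0])  # domain -> [satisfied, total]
    for domain, satisfied in indicators:
        per_domain[domain][0] += int(satisfied)
        per_domain[domain][1] += 1
    total = sum(s for s, _ in per_domain.values())
    return total, dict(per_domain)

total, by_domain = score(indicators)
print(f"Overall: {total}/{len(indicators)}")  # e.g. "Overall: 3/6"
for domain, (s, t) in by_domain.items():
    print(f"  {domain}: {s}/{t}")
```

Because each indicator is pass/fail, the overall score doubles as a count of disclosed practices, which is what makes headline numbers like 54/100 directly comparable across developers.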

The evaluation covered ten major developers, including OpenAI and Google, designating one flagship model from each for assessment. The findings were not encouraging. Meta, evaluated on Llama 2, received the highest score at 54 out of 100, followed closely by Hugging Face at 53; notably, Hugging Face scored 0% in both the “risk” and “mitigations” categories. Other scores included OpenAI at 48, Stability at 47, Google at 40, and Anthropic at 36. Cohere, AI21 Labs, and Inflection landed between the mid-30s and low 20s, while Amazon received the lowest score, 12.
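For quick comparison, the reported scores can be collected in one place. A small sketch; note that the three mid-range figures (Cohere 34, AI21 Labs 25, Inflection 21) are drawn from the published index report rather than spelled out in the prose above.

```python
# Reported FMTI scores (out of 100) for each developer's flagship model.
scores = {
    "Meta": 54, "Hugging Face": 53, "OpenAI": 48, "Stability": 47,
    "Google": 40, "Anthropic": 36, "Cohere": 34, "AI21 Labs": 25,
    "Inflection": 21, "Amazon": 12,
}
mean = sum(scores.values()) / len(scores)
print(f"Mean score: {mean:.1f}/100")                # -> Mean score: 37.0/100
print(max(scores, key=scores.get))                  # -> Meta (best, 54)
print(min(scores, key=scores.get))                  # -> Amazon (worst, 12)
```

Even the top score fails a generous pass mark, and the mean of 37/100 is what underpins the article's claim that the leading developers have all fallen short.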

Rishi Bommasani, Society Lead at Stanford’s Center for Research on Foundation Models (CRFM), said the results were only partly expected. The overall opacity of the companies was anticipated, but the researchers were surprised by how little transparency there was in critical areas such as training data, labor practices, and downstream impact. Bommasani contrasted this with the deep-learning successes of the 2010s, when datasets, models, and code were routinely shared in the open.

The researchers contacted all ten companies to let them respond to an initial draft of the ratings. While the specifics were kept private, eight of the ten contested their scores, and the resulting adjustments averaged 1.25 points. That level of engagement suggests the companies take the ratings seriously, which offers some hope for future improvement.

The FMTI sheds light on the current state of AI: transparency is decreasing just as the technology gains power and societal reach. With no requirement to disclose anything, companies prioritize market competitiveness and shareholder value over considerations such as privacy and safety. The trend mirrors what happened with social media, where greater opacity accompanied growing influence.

The FMTI’s release is only the beginning. The researchers aim to repeat the analysis regularly, ideally at a pace that keeps up with the rapidly evolving field. By holding companies accountable and encouraging transparency, society can better navigate AI’s transformative power.

**Hugging Face Users Blocked in China, Canva Introduces AI Tools for Education, and Apple Cancels Show Amid AI Coverage Dispute**

In other AI news, Hugging Face, a popular open-source platform, confirmed that its users in China have been unable to access its services since May. The exact reason for the blockage remains unclear, but it may be related to local regulations governing foreign AI companies. Chinese authorities frequently restrict access to websites they disapprove of.

Meanwhile, Canva, an online design platform, has introduced a suite of AI-powered tools designed specifically for teachers and students. These tools, available on the Canva for Education platform, include a writing assistant, translation capabilities, alt text suggestions, Magic Grab, and one-click animation. By leveraging AI, Canva aims to enhance the design experience for educators and students.

On a different note, Apple has reportedly canceled Jon Stewart’s show amid tensions over his interest in covering AI and China. The third season of “The Problem with Jon Stewart” was already in production when Apple pulled the plug. Details of the dispute remain undisclosed, but Apple’s close ties to China have drawn scrutiny amid rising tensions, and the company is looking to diversify its supply chain by moving some operations out of China.

Lastly, the Cyberspace Administration of China (CAC) has proposed a global initiative for AI governance. The Global AI Governance Initiative emphasizes the need for laws, ethical guidelines, personal and data security, geopolitical cooperation, and a “people-centered approach to AI.” The document recognizes the potential of AI to drive progress while acknowledging the risks and challenges it presents.

**Editor Notes: Promoting Transparency in Artificial Intelligence**

The release of the Foundation Model Transparency Index highlights a crucial concern: the decline of transparency in AI development. As AI becomes increasingly powerful and influential in our lives, companies must prioritize transparency to safeguard against potential risks and protect societal well-being.

Transparency fosters accountability and helps build public trust. Companies should embrace openness by sharing datasets, models, and code whenever possible. By doing so, they enable independent researchers and organizations to evaluate the impact and fairness of AI models.

As consumers, we should support initiatives like the FMTI and encourage companies to prioritize transparency. A transparent AI ecosystem benefits everyone, fostering innovation, ethical practices, and responsible deployment.

To stay updated on the latest developments in AI, visit GPT News Room for reliable and insightful coverage of AI-related news and advancements.

**This article was brought to you by [GPT News Room](https://ift.tt/S3fhwAH).**
