Monday 12 June 2023

Making Companies Fully Disclose Risks: The Initial Measure for Effective AI Regulation

Tech Titans Clash over AI Regulation: The Importance of Disclosure and Auditing

In the debate over regulating AI, Sam Altman, CEO of OpenAI, argues that government intervention is critical to managing the risks of increasingly powerful AI models. Legendary tech investor Marc Andreessen disagrees, contending that such regulation would stifle innovation and only entrench the market leaders.

As in military intelligence, when the gap between best-case and worst-case scenarios is this wide, the first task is gathering more information. A successful regulatory response should therefore begin with registration and mandatory disclosure of current measures of AI risks and performance characteristics. We cannot regulate what we do not understand.

These disclosures should rely on current best practices for managing AI systems and should be mandatory, consistent, regular, and independently audited, much like the requirements for public companies to disclose and audit their financials.

Disclosure of the performance characteristics and risks of trained AI models, and of their training data, is essential. The work of Timnit Gebru, Margaret Mitchell, and their coauthors on model cards and datasheets is a good first step toward developing Generally Accepted AI Management Principles.

It is crucial that these principles be created with the involvement of the people who build AI systems, so that they reflect actual best practices. But they cannot be developed solely by tech companies. James G. Robinson, Director of Policy for OpenAI, emphasizes the need for participatory and accountable processes when making moral choices for algorithms.

Robinson also calls for predictions about the impact of algorithms, including AI, and for auditing whether those predictions come true. The idea adapts readily to AI: the GPT-4 model card already reads much like an IPO registration statement, and regular reporting would require forward guidance as well as retrospective accounts of whether that guidance has been met.

Large, centralized models from companies such as OpenAI, Google, and Microsoft track user requests, which makes reporting of this kind possible; it is harder for open-source models. Even so, the approach resembles accounting standards: built on a consensus view of what good financial reporting entails and adopted widely across the industry.

While best practices are still taking shape, understanding what companies are already doing to manage AI risks is a good place to start. Generally Accepted AI Management Principles must begin with mandatory, consistent disclosure and independent auditing. Ultimately, a participatory and accountable process for making moral choices for algorithms will help ensure that the benefits of AI are realized.

