Friday, 26 May 2023

OpenAI’s Sam Altman Assures That His Company Will Remain in the EU, Without Compromise

OpenAI CEO Sam Altman expressed concern about the EU’s proposed AI regulations during a trip to London this week, stating that if his company cannot comply with the rules, it will cease operating in Europe. After returning to the US, however, Altman softened his stance on Twitter. Meanwhile, the US still lacks meaningful AI legislation, despite the Algorithmic Accountability Act and a proposed “AI Task Force.” The EU’s proposed AI Act, by contrast, could have significant implications for companies like OpenAI that train language models on user data scraped from the internet. In response, OpenAI has created a grant program to fund groups that will help determine what rules AI systems should follow, such as whether AI should be used for emotional support or allowed to identify people’s gender or race from images. Concerns remain, however, about letting companies regulate their own industry.

Keywords: OpenAI, Sam Altman, EU, AI regulations, Algorithmic Accountability Act, AI Task Force, AI Act, language models, scraped user data, grant program.

The Situation in Europe and the US

Sam Altman, CEO of the leading AI development company OpenAI, has expressed concern about potential AI regulations in Europe. Specifically, Altman stated that if his company cannot comply with Europe’s proposed AI regulations, it will be forced to cease operations there. After returning home, however, Altman softened his statement, indicating that he was excited to continue operating in Europe and had no plans to leave.

Meanwhile, the US continues to lag behind on AI legislation. Despite attempts to address this through the Algorithmic Accountability Act and the proposed creation of an “AI Task Force,” there are still few laws regulating the rapidly expanding use of AI in the US.

Implications of the EU’s Proposed AI Act

The EU has proposed an AI Act that could have substantial implications for AI companies like OpenAI that train language models on user data scraped from the internet. The proposed law could classify AI systems as “high risk” if they could be used to influence elections, among other sensitive applications.

OpenAI’s Response to Proposed AI Regulations

OpenAI, recognizing the need for regulation, has created a grant program to fund groups willing to determine AI rules and regulations. Through this funding, research groups are tasked with developing “proof-of-concepts for a democratic process that could answer questions about what rules AI systems should follow.” Applications are due June 24, roughly a month away. OpenAI provided examples of the questions grant seekers should aim to answer, such as whether AI should offer “emotional support” to people, or whether vision-language AI models should be allowed to identify a person’s gender, race, or identity from their images.

Concerns about AI Company Regulation

There remain significant concerns about letting AI companies regulate their own industry, as they may have a financial incentive to prioritize their own interests over reducing the potential harm caused by AI.

Editor Notes

The need for AI regulation is becoming increasingly important as the technology continues to grow and become more integrated into our daily lives. However, the challenge is determining who should regulate AI and what those regulations should be. OpenAI’s grant program is a positive step towards developing a democratic process for determining AI rules and regulations. For more on AI news and developments, check out GPT News Room.

