Monday 26 June 2023

Race among governments to regulate AI tools

The Latest Steps Governments Are Taking to Regulate AI: A Comprehensive Overview

Introduction:
Rapid advancements in artificial intelligence (AI) are posing challenges for governments seeking to establish effective regulations for the technology. Microsoft-backed OpenAI’s ChatGPT is a prime example, and its rapid adoption has prompted national and international governing bodies to step up their efforts. In this article, we explore the latest steps taken by various countries and organizations to regulate AI tools.

Australia:
A spokesperson for the industry and science minister said in April that the government is consulting the country’s main science advisory body to determine the next steps in regulating AI.

Britain:
In Britain, the Financial Conduct Authority (FCA) has been tasked with developing new guidelines covering AI. To deepen its understanding of the technology, it is consulting the Alan Turing Institute along with other legal and academic institutions. Britain’s competition regulator has also begun examining AI’s impact on consumers, businesses, and the economy to determine whether new controls are needed. In March, the government said it would split responsibility for governing AI among its existing regulators for human rights, health and safety, and competition, rather than create a new regulatory body.

China:
During Elon Musk’s visit to China in early June, Chinese officials said the government intends to launch new AI regulations. In April, China’s cyberspace regulator unveiled draft measures to manage generative AI services, requiring companies to submit security assessments before launching offerings to the public. Beijing’s economy and information technology bureau has also said it will support leading enterprises in building AI models capable of rivalling ChatGPT.

European Union:
Lawmakers in the European Union (EU) recently agreed on changes to the draft of the bloc’s AI Act, which will now be negotiated with EU member states to finalize the legislation. Facial recognition and biometric surveillance have emerged as key sticking points: some lawmakers advocate a total ban, while EU countries want exceptions for national security purposes. EU tech chief Margrethe Vestager has urged the AI industry to adopt a voluntary code of conduct to provide temporary safeguards while new laws are developed. The European Consumer Organisation (BEUC) has also called on EU consumer protection agencies to investigate AI chatbots such as ChatGPT and assess potential harm to individuals.

France:
France’s privacy watchdog, CNIL, launched an investigation into several complaints regarding ChatGPT in April, after Italy temporarily banned the chatbot over suspected privacy rule breaches. Separately, France’s National Assembly approved the use of AI video surveillance during the 2024 Paris Olympics, despite objections from civil rights groups.

G7:
The Group of Seven (G7) leaders, meeting in Hiroshima, Japan, acknowledged the need for governance of AI and immersive technologies. As a result, they established the “Hiroshima AI process” and assigned ministers to discuss the technology and report the results by the end of 2023. G7 digital ministers also called for “risk-based” regulations on AI during their April meeting in Japan.

Ireland:
Ireland’s data protection chief emphasized the need to regulate generative AI, but urged governing bodies to carefully consider the best approach before implementing prohibitions. Striking the right balance between innovation and protecting human rights is crucial.

Israel:
Israel has been working on AI regulations for the past 18 months, aiming to strike a balance between innovation and safeguarding human rights. The country published a 115-page draft AI policy in October and is soliciting public feedback ahead of a final decision.

Italy:
After Italy’s data protection authority raised concerns, ChatGPT was temporarily banned in the country; it became available again in April. The authority now plans to review other AI platforms and hire AI experts to further evaluate their compliance with privacy rules.

Japan:
In June, Japan’s privacy watchdog issued a warning to OpenAI, stressing the importance of obtaining appropriate permissions for collecting sensitive data and minimizing such collection. The watchdog emphasized the possibility of further actions if concerns persist.

Spain:
Spain’s data protection agency launched a preliminary investigation into potential data breaches related to ChatGPT. It has also requested the EU’s privacy watchdog to assess privacy concerns surrounding the technology.

United Nations:
UN Secretary-General Antonio Guterres backed a proposal from some AI executives to create an international AI watchdog modelled on the International Atomic Energy Agency. He also announced plans to convene a high-level AI advisory body by the end of the year to review AI governance arrangements and offer recommendations.

United States:
The National Institute of Standards and Technology (NIST) announced the launch of a public working group focused on generative AI. The group will consist of expert volunteers who will address opportunities and risks associated with the technology, ultimately developing guidance. President Joe Biden expressed his intention to seek expert advice on the risks AI poses to national security and the economy. The US Federal Trade Commission confirmed its commitment to using existing laws to regulate AI and mitigate potential dangers, such as the consolidation of power by dominant firms and fraudulent activities. Additionally, Senator Michael Bennet introduced a bill in April to create a task force dedicated to assessing US policies on AI while prioritizing privacy, civil liberties, and due process protection.

Conclusion:
As AI technologies continue to advance, governments worldwide are recognizing the need for effective regulation. Nations and international organizations alike are taking steps to govern AI tools and strike a balance between innovation, data privacy, and human rights.

Editor Notes:
As AI rapidly develops, the importance of establishing regulation becomes increasingly evident. Governments around the world are taking proactive steps to address this issue and ensure that AI technologies are used ethically and responsibly. GPT News Room provides comprehensive coverage of the latest developments in AI regulation and other AI-related news.




