Friday 19 May 2023

Apple limits employee use of ChatGPT

Apple Implements Restrictions on Employee Use of AI Due to Confidentiality Concerns

As reported by The Wall Street Journal, Apple is restricting its employees’ use of AI tools over concerns about the confidentiality of proprietary data. The move is part of an internal policy initiative aimed at preventing other companies from exploiting Apple’s proprietary data by ingesting it into their AI systems. To this end, the company is advising its employees not to use OpenAI’s ChatGPT, GitHub’s Copilot, or any comparable products.

This decision aligns Apple with other large companies, including JP Morgan, Verizon, and Samsung, which have banned or limited the use of large language model AI systems like ChatGPT. Launched on November 30, 2022, ChatGPT, or Chat Generative Pre-Trained Transformer, is a chatbot built on OpenAI’s GPT-3.5 series of language models. The platform reached 100 million signups within two months of launch.

Apple’s decision follows OpenAI’s launch of an iOS version of its ChatGPT app. OpenAI announced the software via social media on Thursday: “Introducing the ChatGPT app for iOS! We’re live in the US and will expand to additional countries in the coming weeks.”

The move towards AI regulation has been gaining momentum recently. Only a few days before OpenAI’s announcement, the company’s CEO Sam Altman testified before Congress on the need for government oversight of AI technologies. Sen. Michael Bennet, D-Colo., recently introduced legislation that would create a new federal agency to regulate artificial intelligence. The proposed Federal Digital Platform Commission would be tasked with making rules to govern companies that provide “content primarily generated by algorithmic processes.”

Editor Notes:

The rapidly increasing adoption of AI technology across various sectors has led to growing concerns about data privacy and confidentiality. As AI systems continue to evolve and become more sophisticated, there is a heightened risk that sensitive information could be inadvertently exposed to external entities.

Apple’s move to restrict employee use of AI is a proactive step towards safeguarding proprietary data, and it sets a precedent for other companies to adopt similar measures. As the AI revolution continues to unfold, it is becoming increasingly clear that responsible regulation is critical to ensuring that this powerful technology is used ethically and securely.

For more news and updates on AI and other emerging technologies, be sure to check out the GPT News Room.
