Friday 13 October 2023

Google Commits to Protect Users of Its AI Systems; Cybersecurity Study Finds Organizations Still Pay Ransoms After Attacks; European Union’s Demands Serve as Wake-Up Call for X

**Google Pledges to Protect Users from AI Copyright Lawsuits**

In a significant move, Google has vowed to defend users of its generative artificial intelligence (AI) systems on Google Cloud and Workspace platforms against accusations of intellectual property infringement. This commitment follows similar assurances from tech giants Microsoft and Adobe. With the industry recognizing the growing relevance of AI technologies, as well as the rising threat of copyright lawsuits, Google’s pledge to shield its users is a notable step.

**Study Shows LLM Guardrails Can Be Bypassed with Minimal Fine-Tuning**

A recent study by researchers from Princeton University, Virginia Tech, IBM Research, and Stanford University has shed light on the vulnerabilities of large language models (LLMs) like OpenAI’s GPT-3.5 Turbo. Although these models are designed with guardrails to prevent the generation of harmful content, they can be manipulated with minimal fine-tuning and at low cost. The researchers demonstrated that, through OpenAI’s APIs, the model could be fine-tuned to comply with potentially harmful instructions. The vulnerability extends to other models such as Meta’s Llama 2, underscoring the importance of robust safety mechanisms in AI and the need to reevaluate the legal and ethical frameworks surrounding the technology.
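To illustrate how little machinery such an attack requires, here is a minimal sketch of submitting a fine-tuning job through OpenAI’s Python SDK. The file name and training data are hypothetical placeholders; the study’s point is that the same low-cost mechanism ordinarily used to customize a model’s style can, given a small number of adversarial examples, erode its safety behavior.

```python
# Minimal sketch: submitting a fine-tuning job via OpenAI's Python SDK
# (openai>=1.0). File name and data are hypothetical placeholders, shown
# only to illustrate the mechanism the researchers exploited.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a small JSONL file of chat-formatted training examples.
uploaded = client.files.create(
    file=open("training_examples.jsonl", "rb"),  # hypothetical file
    purpose="fine-tune",
)

# Start a fine-tuning job against GPT-3.5 Turbo.
job = client.fine_tuning.jobs.create(
    training_file=uploaded.id,
    model="gpt-3.5-turbo",
)
print(job.id, job.status)  # poll with client.fine_tuning.jobs.retrieve(job.id)
```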

**Splunk Report: Organizations Still Paying Ransoms After Attacks**

A study conducted by Splunk has revealed that organizations continue to pay ransoms following cyberattacks, with over half paying more than $100,000 to regain access to systems and data. The 2023 CISO Report surveyed 350 chief information security officers across ten markets and found that 96 percent of respondents had experienced a ransomware attack that significantly impacted their business systems and operations. Furthermore, 83 percent of those surveyed admitted to paying the ransom, with 53 percent paying over $100,000. The study also highlighted concerns about generative AI enabling threat actors to launch more efficient attacks, including voice and image impersonations for social engineering.

**X Removes Hamas-Affiliated Accounts After EU Demands**

In response to demands from the European Union, X (formerly Twitter) has removed hundreds of accounts affiliated with Hamas. The platform’s CEO, Linda Yaccarino, announced the move and stated that tens of thousands of pieces of content have also been removed or labeled. Whether this new rigor extends to owner Elon Musk’s own retweeting habits remains to be seen.

**NPR Says Leaving Twitter Barely Dented Its Traffic**

Six months ago, NPR left Twitter (since rebranded as X) after the platform labeled it “U.S. state-affiliated media.” The decision has had a negligible effect on NPR’s traffic, amounting to a drop of only about one percentage point, according to an internal memo. NPR has explored alternative platforms like Instagram and Threads to maintain audience engagement without the toxicity and functional issues it associated with Twitter.

**New York Positions Itself as an AI Hub**

New York is making bold moves to position itself as a global hub for artificial intelligence, challenging the technological dominance of Silicon Valley. The city, known for its pivotal role in various industries, is seen as fertile ground for the adoption of generative AI technologies. To showcase its ambition, New York will host a 370-event “Tech Week” starting October 16, organized by venture capital firm Andreessen Horowitz. The city has experienced a surge in tech investments and job opportunities since 2021, with leading VC firms like Sequoia, Index, and Andreessen Horowitz establishing offices there. Additionally, New York has become a base for numerous international unicorn companies and regularly hosts major events for tech giants like Microsoft, Google, and LinkedIn.

**FAA Report Warns of Risk from Falling Starlink Satellites**

A report from the Federal Aviation Administration (FAA) has raised concerns about the increasing risk of injuries or fatalities caused by falling satellites, particularly SpaceX’s Starlink internet satellites. By 2035, an estimated 28,000 fragments of these satellites will re-enter Earth’s atmosphere annually, which the report projects would cause an expected 0.6 injuries or deaths per year among people on the ground. The report also puts the annual probability of an aircraft being downed by falling satellite debris at 0.0007 by 2035. While regulatory measures could mitigate some risks, the lack of international protocols for space debris and satellite launches, especially outside the US, poses a significant challenge.
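A note on reading that headline figure: 0.6 expected casualties per year is an expectation, not a probability. If one assumes, purely for illustration, that casualty events follow a Poisson distribution (our assumption, not the report’s), the chance of at least one injury or death in a given year works out to roughly 45 percent:

```python
# Back-of-the-envelope reading of the FAA's 0.6-expected-casualties-per-year
# figure, assuming (our assumption, not the report's) that casualty events
# follow a Poisson distribution.
import math

expected_per_year = 0.6  # FAA's projected expected casualties per year by 2035

# For a Poisson process, P(at least one event) = 1 - e^(-lambda).
p_at_least_one = 1 - math.exp(-expected_per_year)
print(f"P(>=1 casualty in a year): {p_at_least_one:.2f}")  # ~0.45
```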

That concludes today’s top tech news stories. For more updates, visit us at TechNewsDay.com or ITWorldCanada.com. Make sure to tune in to Hashtag Trending five days a week for fast reads on top stories and our special weekend interview show, “the Weekend Edition.” You can find our podcasts on various platforms. Have a fantastic Friday!

**Editor’s Notes**

Opinion Piece: The Importance of AI Ethics and Regulation

As AI technologies become increasingly integrated into our lives, it is crucial to prioritize ethics and establish proper regulations. The recent research into the vulnerabilities of language models, together with the persistent ransom payments that follow cyberattacks, underscores both the dangers of misused technology and the cost of weak safeguards.

The willingness of companies like Google, Microsoft, and Adobe to defend users against copyright lawsuits arising from AI-generated content is a positive step toward protecting individuals and fostering responsible AI development. However, it is essential to go beyond self-regulation and ensure that legal and ethical frameworks are robust and comprehensive.

Moreover, the report on falling satellites underscores the need for international cooperation in addressing the risks associated with space debris. Without proper protocols and regulations, the increasing number of satellites in orbit poses a significant threat to human safety.

As AI continues to evolve and shape various aspects of our lives, it is crucial to strike a balance between innovation and responsible development. By prioritizing ethics, establishing regulations, and fostering collaboration, we can harness the full potential of AI while mitigating its risks.
