Wednesday, 18 October 2023

Developing a Strong Code of Ethics in AI: Preventing Its Negative Impacts from Overwhelming Humanity

Creating Principled and Accountable AI: A Framework for Tech Companies and Regulators

By Ann Skeet, Brian Green and Paul Tighe

The Concerns Surrounding Generative Artificial Intelligence

As the power of generative artificial intelligence (AI) becomes increasingly evident, leaders in the AI industry have voiced concern about the implications of unleashing such advanced machine learning systems.

Some AI creators, having already introduced AI-powered products, are now advocating for regulation and legislation to control the technology's use. One proposal calls for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4. This call raises several alarming questions:

  • Should machines be allowed to flood information channels with propaganda and untruths?
  • Is it acceptable to automate all jobs, even the fulfilling ones?
  • Do we risk developing nonhuman minds that may eventually outnumber, outsmart, and replace humans?
  • Are we willing to risk the loss of control over our civilization?

While legislative regulation or development moratoria have been the primary focus in response to these concerns, there is a third option: not creating potentially dangerous products in the first place.

The Path to Ethical and Responsible AI Development

Adopting and implementing an ethical framework gives tech companies a clear path for AI development and gives regulators a guide to responsible oversight. This approach helps AI leaders and developers navigate the complex decisions that arise with any new technology.

Standing for Ethical Values

The desire to uphold high ethical standards in the tech industry is evident among senior representatives of Silicon Valley companies. This commitment is reflected in various initiatives aiming to ensure that technology is “responsible,” serves humanity, is human-centered, and is ethical by design. Despite these intentions, ethical lapses still occur in the tech industry.

What is needed is consensus on precisely how to develop products and services grounded in ethical values, in a way that achieves the outcomes both the public and industry leaders want.

An Operational Roadmap for Ethical Technology Development

Over the past four years, the Institute for Technology, Ethics, and Culture (ITEC) in Silicon Valley has been working on a comprehensive roadmap that connects good intentions with practical guidance for tech development.

This roadmap consists of five practical stages that offer guidance to leaders, managers, and technologists:

  1. Tech ethics leadership: Understanding the need for ethical leadership within tech companies.
  2. Cultural assessment: Assessing the ethical culture within organizations.
  3. Tech ethics governance framework: Developing a framework to ensure ethical practices.
  4. Integrating ethics into product development: Incorporating tech ethics into the product development life cycle.
  5. Measuring success and continuous improvement: Establishing methods to measure the impact of ethical practices and drive improvement.

This roadmap provides specific guidance, enabling individuals involved in developing new and powerful technologies to navigate the complexities of ethics at a granular level. It offers practical methods, such as examining usage data for signs of inequitable access and developing appropriate remedies.
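To make this concrete, here is a minimal sketch of what such a usage-data check might look like in practice. It is not part of the ITEC roadmap itself; the record format, group labels, and the 80% disparity threshold are illustrative assumptions, and a real audit would rely on an organization's own data and review processes.

# Minimal sketch (Python) of an equity check on product usage data.
# Assumptions (not from the article): each record carries a user-group label
# and a flag for whether the user actively used the product; a group whose
# adoption rate falls below 80% of the overall rate is flagged for review.

from collections import defaultdict

# Hypothetical usage records: (user_group, used_feature)
usage_records = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_c", True), ("group_c", True), ("group_c", True),
]

DISPARITY_THRESHOLD = 0.8  # flag groups below 80% of the overall adoption rate


def adoption_rates(records):
    """Return the overall adoption rate and per-group adoption rates."""
    totals, active = defaultdict(int), defaultdict(int)
    for group, used in records:
        totals[group] += 1
        active[group] += int(used)
    overall = sum(active.values()) / sum(totals.values())
    per_group = {g: active[g] / totals[g] for g in totals}
    return overall, per_group


def flag_inequitable_groups(records, threshold=DISPARITY_THRESHOLD):
    """Flag groups whose adoption rate is well below the overall rate."""
    overall, per_group = adoption_rates(records)
    return [g for g, rate in per_group.items() if rate < threshold * overall]


if __name__ == "__main__":
    overall, per_group = adoption_rates(usage_records)
    print(f"Overall adoption rate: {overall:.2f}")
    for group, rate in sorted(per_group.items()):
        print(f"  {group}: {rate:.2f}")
    print("Groups to review for inequitable access:",
          flag_inequitable_groups(usage_records))

A check like this only surfaces a signal; deciding on an appropriate remedy remains a judgment for the leaders and teams the roadmap addresses.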

Fostering Responsibility and Action in the Tech Industry

The aim of this guidance is to empower tech leaders to take action and improve how they operate. It encourages organizations to assess their own practices and to engage with peer organizations to drive collective improvement.

Building a more just and caring world, with an ethically responsible tech industry and ethically responsible AI products and services, is both possible and necessary. With so much at stake, it is worth the effort.

About the Authors

Ann Skeet and Brian Green are authors of “Ethics in the Age of Disruptive Technologies: An Operational Roadmap.” They are colleagues at the Markkula Center for Applied Ethics at Santa Clara University. Paul Tighe is the secretary of the Vatican’s Dicastery for Culture and Education.

Editor’s Notes

Building ethical frameworks and responsible AI is crucial for the future of technology. Tech companies and regulators must work together to ensure that AI development aligns with principled values. To stay updated with the latest news and insights in the tech industry, visit the GPT News Room.

More: Former Facebook security head warns 2024 election could be ‘overrun’ with AI-created fake content

Also read: Religion is mixing with business and raising workplace questions for employers

-Ann Skeet, Brian Green, Paul Tighe

This content was created by MarketWatch, which is operated by Dow Jones & Co. MarketWatch is published independently from Dow Jones Newswires and The Wall Street Journal.

(END) Dow Jones Newswires

10-18-23 1622ET

Copyright (c) 2023 Dow Jones & Company, Inc.




