Wednesday, 18 October 2023

Why AI Requires a Robust Code of Ethics to Prevent Its Negative Impact

With the advancement of generative artificial intelligence, AI-industry leaders have been openly expressing their concerns about the power of the machine learning systems they are unleashing.

Some AI creators, having launched their new AI-powered products, are calling for regulation and legislation to curb their use. Suggestions include a six-month moratorium on the training of AI systems more powerful than OpenAI’s GPT-4, a call that poses several alarming questions:

Should we allow machines to overwhelm information channels with propaganda and untruth?

Should we automate all jobs, even the ones that provide fulfillment? 

Should we create nonhuman minds that could eventually outnumber, outsmart, obsolete, and replace us? 

Should we risk losing control of our civilization?

In response to these concerns, two main options have received the most attention: legislative regulation or development moratoria. However, there is a third, less discussed option: not creating potentially dangerous products in the first place.

But how can this be achieved? By adopting and implementing an ethical framework, companies gain a path for developing AI, and legislators gain a guide for crafting responsible regulation. This path offers an approach to help AI leaders and developers grapple with the myriad decisions that come with any new technology. 

A Commitment to Values

For several years, we have been listening to senior representatives of Silicon Valley companies who seem genuinely interested in maintaining high ethical standards for themselves and their industry. This commitment is evident through the numerous initiatives aimed at ensuring that technology remains “responsible,” at “the service of humanity,” “human-centered,” and “ethical by design.” These intentions reflect a desire to do good and a concern for reputation and long-term commercial viability.  

It is an interesting moment of consensus between public opinion and the ethical values that corporate leaders believe should guide technological development. These values include safety, fairness, inclusion, transparency, privacy, and reliability. However, despite these good intentions, the tech industry still faces challenges. 

What is lacking is consensus on how exactly to develop products and services that embody these values and achieve the goals desired by both the public and industry leaders.

Over the past four years, the Institute for Technology, Ethics, and Culture in Silicon Valley (ITEC) — an initiative of the Markkula Center for Applied Ethics at Santa Clara University with support from the Vatican’s Center for Digital Culture at the Dicastery for Culture and Education — has been working on developing a system to bridge this gap. The result is a comprehensive roadmap that guides companies toward organizational accountability and the production of ethical products and services. This strategy includes both a governance framework for responsible technology development and use, as well as a management system for its deployment.

The approach is laid out in five practical stages suitable for leaders, managers, and technologists. These stages address the need for tech ethics leadership, a candid assessment of organizational culture, the development of a tech ethics governance framework, the integration of tech ethics into the product development life cycle, and methods for measuring success and continuous improvement.

People working in organizations that develop new and powerful technologies now have a resource that was previously missing — one that provides specific and practical guidance for bringing well-considered principles into the day-to-day work of engineers and technical writers. The roadmap offers examples of how to implement principles such as fairness, inclusivity, and non-discrimination by examining usage data for signs of inequitable access to a company’s products and developing remedies accordingly. 
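To make that last example concrete, here is a minimal Python sketch of the kind of usage-data audit such a principle might translate into. The segment labels, record layout, and the 80% disparity threshold are illustrative assumptions on our part, not prescriptions from the ITEC handbook:

```python
from collections import defaultdict

def usage_rate_by_segment(records):
    """Compute the share of active users per segment.

    records: an iterable of (segment, is_active) pairs, where
    segment is any label (e.g. region, language, device class).
    """
    totals = defaultdict(int)
    active = defaultdict(int)
    for segment, is_active in records:
        totals[segment] += 1
        active[segment] += int(is_active)
    return {seg: active[seg] / totals[seg] for seg in totals}

def flag_inequitable_segments(records, threshold=0.8):
    """Flag segments whose usage rate falls below `threshold` times
    the best-served segment's rate (an illustrative "80% rule")."""
    rates = usage_rate_by_segment(records)
    best = max(rates.values())
    return {seg: rate for seg, rate in rates.items() if rate < threshold * best}

# Hypothetical data: three user segments with differing usage rates.
records = (
    [("urban", True)] * 90 + [("urban", False)] * 10
    + [("rural", True)] * 55 + [("rural", False)] * 45
    + [("low-bandwidth", True)] * 40 + [("low-bandwidth", False)] * 60
)
print(flag_inequitable_segments(records))
# {'rural': 0.55, 'low-bandwidth': 0.4} -> candidates for remediation
```

In practice, an audit along these lines would segment on attributes relevant to equitable access and feed the flagged groups into the remediation step the roadmap calls for.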

We believe that this guidance, which translates ethical principles into practical steps, will encourage action among tech leaders. Rather than standing by in the face of the doom some foresee from new technologies, industry leaders can now assess their practices and identify areas for improvement. They can also engage with peer organizations to learn from their experiences.

We have built on the existing work in the industry and combined it with our understanding of ethics, with the belief that we can create a more just and compassionate world. It is possible to have a more ethically responsible tech industry and AI products and services. Given the high stakes involved, the effort is undoubtedly worthwhile.

Ann Skeet and Brian Green are the authors of “Ethics in the Age of Disruptive Technologies: An Operational Roadmap” (The ITEC Handbook) and colleagues at the Markkula Center for Applied Ethics at Santa Clara University. Paul Tighe is the secretary of the Vatican’s Dicastery for Culture and Education.


Editor Notes: Encouraging Ethical Practices in the AI Industry

Artificial intelligence is a groundbreaking technology that has the potential to revolutionize various industries. However, concerns about the ethical implications of AI have been raised by industry leaders themselves. In response to these concerns, there are calls for regulation and legislation to promote responsible AI development.

An alternative approach, suggested by the Institute for Technology, Ethics, and Culture in Silicon Valley, is to adopt an ethical framework for AI development. This framework provides practical guidance to companies and assists legislators in implementing responsible regulations. By integrating values such as safety, fairness, and inclusion into the development process, AI can be utilized in a way that benefits humanity.

The roadmap laid out by the Institute for Technology, Ethics, and Culture consists of five stages that address the need for tech ethics leadership, cultural assessment, governance framework development, integration into the product development life cycle, and methods for measuring success. This comprehensive approach empowers tech leaders to take action and improve their ethical practices.

It is crucial that the AI industry prioritize ethical considerations. With the potential risks associated with AI, the adoption of responsible practices is essential for ensuring the well-being of society. By embracing ethical frameworks and fostering a culture of accountability, the tech industry can create a more just and caring world.

