Friday 4 August 2023

Who bears the true responsibility for ethical AI?

The Dark Side of AI: Ethical Concerns in the Global South

In January, TIME revealed that Microsoft-backed OpenAI had outsourced work to labourers in Kenya in late 2021 to moderate internet data, forming a fundamental part of the safety system behind generative AI sensation ChatGPT.

According to documents seen by the news outlet, moderators earned around $2 per hour to label texts detailing injury, sexual abuse and self-harm. These workers were also asked to collect images, some reportedly illegal under US law, for a separate project involving OpenAI’s image generator DALL-E.

The Trauma of Outsourcing AI Work

In a statement to TIME, OpenAI said it took the well-being of its contractors “very seriously” and that support programmes were available through the outsourcing company, which believed its employees had not requested support “through the right channels”.

The work involved was so traumatic that the firm handling the outsourcing cut its contract with the AI powerhouse short, a recent Wall Street Journal article indicated. Meanwhile, a growing body of research continues to reveal how heavily big technology companies depend on workers in the global south to carry out the vital work of making AI safe.

The Risks of General Purpose AI

Surveys conducted over the years have also revealed that general-purpose AI deployed in biometrics, policing and housing systems has already caused gender and racial discrimination.

As ChatGPT began to take off in earnest, Microsoft’s recent dismissal of its responsible AI team raised eyebrows and prompted questions about whether ethical concerns are truly a priority in the multibillion-dollar AI economy.

Expanding the Conversation on Ethical AI

That is not to say the technology sector as a whole is not taking the risks around generative AI seriously.

Major industry figures did indeed call for a pause on the technology’s development until a robust AI act is in place. However, researchers speaking to Mobile World Live (MWL) believe the public should look beyond policymaking alone.

Abid Adonis, a researcher at the Oxford Internet Institute, argues that the task of ensuring ethical AI needs to be expanded.

“Now, we only see two powers: regulators and big tech, but we also have civil society and scholars. And it’s important to hear what marginalised groups say about this because it’s missing from the discussion.”

The Harmful Focus on Artificial General Intelligence

This view resonates with Dr Alison Powell, associate professor in Media and Communications at the London School of Economics and Political Science and director of the JustAI network at the Ada Lovelace Institute.

Powell told MWL that the emphasis on artificial general intelligence, which industry heavyweights claim could eclipse humans’ cognitive abilities and therefore dominate job markets, is already harmful in itself.

The Dominance of English in AI Models

This is particularly reflected in large language models (LLMs) built on internet data. Powell pointed out that while many languages are spoken in the world, English is largely dominant on the internet.

“In the world, there are many ways that people experience things, express ourselves and work together. Not all of these are present online.”

The Limitations of AI Decision-Making

Powell further warned about the hype surrounding AI’s decision-making abilities, suggesting the technology’s much-vaunted powers fail to account for social responsibility.

This makes some sense considering that generative AI posterchild ChatGPT falsely accused law professor Jonathan Turley of assaulting a student and fabricated a story about the death of Alexander Hanff, a privacy technologist who helped craft GDPR.

Other examples include data-filtering practices in GPT-3, which used a classification system to automatically discard obscene and inappropriate material.
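The mechanics of such a filter are simple to illustrate. What follows is a minimal, hypothetical Python sketch of a blocklist-style filter (not OpenAI’s actual classifier, whose details are not public), showing how a blunt keyword approach can discard benign documents that merely mention a flagged term.

# Hypothetical blocklist-style training-data filter.
# Illustrative only: real pipelines use trained classifiers,
# but the over-removal failure mode is similar in spirit.

BLOCKLIST = {"flagged_term"}  # placeholder vocabulary, not a real list

def is_blocked(document: str) -> bool:
    """Return True if the document contains any blocklisted word."""
    words = {w.strip(".,!?").lower() for w in document.split()}
    return not BLOCKLIST.isdisjoint(words)

corpus = [
    "A neutral news report that quotes a flagged_term in context.",
    "Ordinary text with no flagged vocabulary at all.",
]

# The first document is dropped even though it mentions the term
# in a purely descriptive way; filters like this have no notion
# of context, so condemnation and abuse look identical to them.
kept = [doc for doc in corpus if not is_blocked(doc)]
print(kept)

The point of the sketch is that a context-blind filter treats a document quoting an offensive term to criticise it the same as one using it maliciously, which is one way such classification systems can skew what a model ultimately learns from its training data.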

The Flaws in Large Language Models

Further flaws in LLMs were highlighted in a recent report by The Washington Post, which stated tech companies had grown secretive about what they feed their AI models, including data from websites that could be deemed discriminatory.

This backed up a 2021 study which found generative AI has the potential to amplify privileged views, pointing to GPT-2’s training data being extracted from Reddit, Twitter and Wikipedia, all platforms with predominantly male users.

The Social and Cultural Dimensions of AI

Powell stressed the need to understand the social contexts in which technology is most likely to cause harm before considering how to make it more ethical.

“AIs are institutional machines, they’re social machines and they’re cultural machines,” she argued.

“If we’re walking away from saying, ‘How do we do this technically, in the gears?’ then we produce that double bind. But if we take a step back, then we notice all of these systems are institutional systems. Thinking about making systems work along the lines of justice and inclusion is about not how the machines work, but how institutions work.”

Shaping the Corridors of Innovation

Adonis added that nuanced public discussion of ethical technology will continue to play a strong role in future innovation and policymaking.

“If we build strong, fundamental discourses in many places on something we know will have detrimental effects to society, it will permeate into stakeholders and state actors. They will know what to do, and civil society will know what to do.”

“I believe discourse and paradigm will shape the corridors of innovation.”

Redefining AI Governance

For Powell, AI governance means enforcing existing laws, particularly those relating to data protection, anti-discrimination and human rights “that apply to the institutional settings in which you put AI”.

“I would continue to advocate for thinking about institutional settings employing AI, rather than thinking about it as an object of regulation itself,” she added.

Editor Notes: Promoting Ethical Innovation

The ethical implications of AI and generative language models are a pressing concern that should not be overlooked. As demonstrated by the outsourcing practices of big tech companies, the negative impact on workers and communities cannot be ignored. To truly advance and benefit society, AI innovation must be guided by strong ethical principles and inclusivity.

At GPT News Room, we strive to bring awareness to these important discussions and highlight the need for responsible AI development. Visit our newsroom for the latest updates on AI ethics and the future of technology.

GPT News Room




