Exploring the Rise and Implications of Artificial Intelligence: Expert Insights from Two University of Michigan Professors
Artificial intelligence (AI) has been making headlines recently, and its implications are far-reaching and profound. Two professors from the University of Michigan, Nigel Melville from the Ross School of Business and Shobita Parthasarathy from the Ford School of Public Policy, have examined AI’s rise and its consequences for businesses, society, and culture. They shared their thoughts on the need for a moratorium and regulation and on the importance of conducting risk assessments. Above all, they emphasized the significance of prioritizing societal risks and benefits rather than being blinded by technological advancements.
If appointed government science and technology policy czars, what would they do to regulate AI? Shobita Parthasarathy would issue a moratorium and start developing a risk assessment system. The European Union is already considering a tiered system that classifies AI based on social importance and impacts on marginalized communities, and gradually increases oversight and regulation for technology deemed high-risk. For Nigel Melville, designing appropriate regulations through a lens of vulnerability, liability, and transparency is essential.
Although the risks of AI are apparent, there are also several advantages. Generative AI, like ChatGPT and GPT-4, democratizes expertise, allowing people to access information that would otherwise be difficult to comprehend. It also holds the promise of standardizing decision-making and eliminating human bias in sectors like healthcare, social services, and criminal justice. In practice, however, AI often bakes in existing biases and makes them harder to remove.
Melville and Parthasarathy believe that two sectors hold immense promise for generative AI: pharmaceutical and drug development, and education. AI can collaborate with scientists to develop new treatments for diseases like Alzheimer’s, and it can help reshape learning for students by emphasizing critical thinking, data evaluation, and sound judgments based on that data.
Several countries are taking steps to regulate AI. Italy temporarily blocked ChatGPT over data privacy concerns, while the EU is negotiating new rules to limit high-risk AI products. In the United States, the Biden administration has unveiled goals for averting AI-related harms but has not yet taken enforcement action.
Melville cautions that regulations and moratoriums need to take into account the broader definition of AI and its emulation of human capabilities in cognition and communication. Machines are not only working among themselves but also developing relationships with humans, which has significant implications that must be considered.
Overall, AI presents both challenges and opportunities, and experts like Melville and Parthasarathy stress the importance of not being blinded by technological promise but prioritizing societal risks and benefits in regulating AI.