Editor's PickHealth, Safety & WellbeingTechnology
Home Our latest stories Social DevelopmentTechnology & Innovation AI: Can we regulate it and how would we do it?

AI: Can we regulate it and how would we do it?

May 20th, 2023

by Shannay Williams

Whether we recognize it or not, most of us have interacted with artificial intelligence (AI) in some form. We’re all familiar with Google, Siri, Alexa, and the like and we’ve been using them for years with ease. So what’s changed?

If you’ve been following the debate, you may have heard that, just months after the public release of ChatGPT, an open letter signed so far by more than 27,000 people, including tech titans and entrepreneurs, called for a six-month pause in the development of the most powerful AI systems. According to the letter, “[p]owerful AI systems should be developed only when we are confident that their effects will be positive and their risks will be manageable.”

AI can be split into two broad categories: weak AI and strong AI. Weak AI is trained to perform specific tasks, like answering a question put to Siri. Strong AI is (theoretically) trained to have intelligence equal to, or exceeding, that of the human mind. The idea of strong AI feeds into our fears because it prompts questions like: Will AI replace our jobs completely? Can AI outsmart us? Will AI deceive us? How will we know what’s real?

You may have seen the viral “picture of the Pope”, an AI-generated image that focused a lot of concern on generative AI. Generative AI creates new content from the material it has already been exposed to or has access to. Theoretically, then, the Internet could be generative AI’s data playground, which is why it can be scary: we know that social media and the Internet are filled with trolls, angry content, and disinformation. If we ask AI to generate a story or a piece of art, will it create mostly violent material? Or will it create misleading content?

While we have reason to be concerned about the direction of AI and its potential negative consequences, we also have reason to look forward to its progress. AI could aid in the prediction of consumer behavior, enabling more targeted digital marketing strategies and better customer service. It may also reduce the amount of time physicians spend studying patient data, and in doing so help reduce medical errors. With autonomous delivery vehicles, self-organizing fleets, and self-driving cars, AI could even transform the way we travel. In short, AI as a tool has the potential to improve our quality of life, which is an exciting prospect.

All these possibilities are thought-provoking, but, like many, I am concerned about regulation. How will we regulate AI? Some have suggested disclaimers or watermarks on AI-generated content, and even limits on the parameters users can set when using AI systems. While there is plenty of popular debate about how much we should train AI to do and how far it will go, we still have more questions than answers, and more concerns than solutions.

What will be our safeguards? Are we confident that the effects will be mostly positive and the risks manageable? The questions are endless and, depending on your industry, may be even more pressing. But as the debate rages on, we can expect AI systems like Midjourney and ChatGPT to continue to evolve.


About the author

Shannay Williams

Shannay Williams is from St. Thomas, Jamaica. She holds a Bachelor of Laws (LLB) degree from the University of the West Indies and is passionate about service. In her free time, she enjoys creating and sharing content as a “bookfluencer”. She hopes to raise awareness of issues in her country that affect both her region and the world.

