Saturday, May 18, 2024

Three undesirable effects of generative AI

The 'move fast and break things' standard operating procedure is in high gear. Every sector and industry is racing to implement large language models (LLMs) across corporate operations. Maintaining a competitive advantage has become the subject of a frenzy and significant FOMO (fear of missing out).

People are already overwhelmed by the sheer volume of online content: social networking sites, search engines, blog and media sites, video-sharing platforms, news websites, and so forth. Now there is generative AI. When you hear a new term, your first thoughts are, "What is that?" and "Can someone make it understandable to the average person?" I'll start now. Generative AI, often known as Gen AI, refers to a group of algorithms that appear able to create content such as text, images, and video using material drawn from web resources. Real-world examples include DALL-E (image generation), ChatGPT (writing), and, most recently, Timbaland producing a song collaboration with The Notorious B.I.G.

Although the adoption of generative AI is not yet widespread, there is still time to build fairness principles into current platforms, tools, and systems. There is far less time to identify, consider, and address the harms generative AI brings. My top three undesirable effects of adopting generative AI are its tendency to amplify disinformation, expand surveillance techniques, and normalize a society that does not understand artificial intelligence.

Boosting Disinformation

AI-generated material is, at best, a rehash of someone else's words and, at worst, sanctioned lying. The premise is that anything may be made up: using unidentified and untested repositories, you can create content with text, pictures, videos, and music. There are no mechanisms in place to verify what is accurate or legitimately available for public use. Narratives can be supported or rejected without determining their veracity. In fact, this describes how automated content moderation is currently done; see the piece on algorithmic misogynoir I wrote, sponsored by the Heinrich-Böll-Stiftung. But I digress.

In any case, when content is devoid of context, situational awareness, and/or history…
