- The advent of artificial intelligence, built on the revolutionary evolution of information technology, has been a great boon for a humankind always on the lookout for something new. Every such novel idea and initiative has served humanity's interests, propelling growth into hitherto uncharted territories. We are well aware of how IT solutions have altered the way we live and work, and our general outlook on existence itself; no one is complaining. IT let us explore diverse social media platforms, dynamically redefining entertainment for good. The effort to embrace digitization appears almost complete.
PC: The Economic Times
- However, as with any new invention, AI too is emerging as a double-edged sword: useful, yet potentially quite destructive. The latest to emerge from AI's stable is the deepfake, which has stormed the world, triggering a slew of untoward incidents and causing severe embarrassment and damage of great magnitude. From the Dhaka protests to the Wayanad tragedy to the British riots to impersonations of celebrities of every hue, recent events are a reminder of how AI deepfakes can trigger unrest worldwide. As the explosive events in Bangladesh unfolded, the Bengal police alerted the public not to step into a fake-news trap that could spin into disturbance.
- Worryingly, images of the Bangladesh protests have deluged messaging apps. It is well-nigh impossible for the public to differentiate authentic images from doctored or fake ones, and it takes little to get confused, desperate, and riled. It is this emotional impact of, and reaction to, images that makes fakes so dangerous. Take, for instance, the multi-city anti-immigrant riots sweeping Britain, triggered in part by fake news that a Muslim immigrant had stabbed three British girls. Recently, a fact-checking unit posted that the image of an infant cradled in its mother's arms, both caked in mud from the Wayanad landslide that took their lives, was AI-generated. The picture had spread like wildfire across online media as a defining image of the devastation.
- Now we are told it wasn't real. The pitfalls of political manipulation of images are well known, but who gains from a deepfake of a tragedy? Scientists say creators of AI images exploit how people remember events. One, the first thing that fades from memory is where an image came from. Two, a striking photo can seldom be erased from memory. So, if it is stuck in one's mind, no amount of fact-checking is likely to dislodge a recollection of the Wayanad landslide formed by the mother-and-baby image. Memory is not factual, and people trust photos. AI researchers believe deepfakes will eventually become undetectable. That is a dreadful scenario, and one that IT professionals should address sooner rather than later.