Social dilemma of dealing with deepfake content
'It is extremely important for regulators to sit up and take notice as this is the time to put in place stringent regulations with exemplary punishment for offenders.'
New Delhi: Ever since generative AI dropped the deepfake bomb on the digital world, such content has mushroomed at an exponential rate. At the beginning of 2019, there were 7,964 deepfake videos online, according to a report from start-up Deeptrace; just nine months later, that figure had jumped to 14,678. It has no doubt continued to balloon since then, and some 96% of these deepfakes are pornographic.
With the use of Generative AI (GenAI), the world of ‘fake news’ and ‘true lies’ just got murkier. Last week, a ‘fake’ video of President Joe Biden prompted his Administration to issue an Executive Order on the governance of AI. A fake video of Rashmika Mandanna earlier this week took Bollywood by storm, with senior members of the film fraternity calling for legal action. Deep Fake Love, a Spanish reality TV dating show on Netflix, uses deepfake technology to blur the lines between reality and fabrication.
Deepfakes, built using Generative Adversarial Networks (GANs), have been around for many years. However, with the emergence of GenAI, they have become more lifelike and far easier to produce at scale. Fake videos invariably target celebrities and politicians. With several elections around the corner in India, politicians and political parties could be both creators and targets of such fake videos, which could be used to spread misinformation, put political opponents on the spot, or even anchor an entire campaign to sway voters.
The aam junta – people like you and me – could also be victims. It could be someone wanting to embarrass us professionally, or a jilted lover wanting revenge on their ex. It could even be an inconsequential prank by ‘friends’ wanting to make fun of us on social media. The possibilities, unfortunately, are endless.
It is extremely important for regulators to sit up and take notice – this is the time to put in place stringent regulations with exemplary punishment for offenders. It should be mandated that anyone using an AI model to produce an image or information must disclose it. People must also be made aware of classifiers – software that can detect AI-generated content – whose use must become as widespread as antivirus software. An ethical and moral conversation, too, must gain traction to build awareness of how GenAI should be used.
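For the technically curious, a minimal sketch of what such a classifier looks like in practice is given below, in Python using the Hugging Face transformers library. The model identifier is a hypothetical placeholder, not a recommendation of any specific tool; any image-classification model trained to separate real from synthetic faces could be slotted in.

```python
# Sketch only: run one image through an assumed real-vs-fake detection model
# and print how confident the model is in each label.
from transformers import pipeline

def check_image(path: str) -> None:
    # "org/deepfake-detector" is a placeholder name for a hypothetical
    # classifier fine-tuned to distinguish real from AI-generated faces.
    detector = pipeline("image-classification", model="org/deepfake-detector")
    for result in detector(path):
        # Each result carries a label (e.g. "real" / "fake") and a score.
        print(f"{result['label']}: {result['score']:.2%}")

if __name__ == "__main__":
    check_image("suspect_video_frame.jpg")
```

Such detectors are imperfect and must be retrained as generation techniques improve, which is why awareness and disclosure mandates matter alongside the tooling.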
(Jaspreet Bindra is the Founder & MD of The Tech Whisperer Ltd, UK)