With artificial intelligence (AI) slowly permeating every corner of our lives, from ChatGPT to social media algorithms that seem to read minds, actor Hania Aamir has become the latest celebrity to fall victim to deepfake videos and photos circulated in her name.
Taking to her Instagram Stories on Thursday, the Mere Humsafar actor posted a screenshot of an account displaying photos of a woman who looks eerily similar to her.
“This AI stuff is very scary,” wrote an angered Aamir beneath stills of her doppelganger, who was clad in a series of revealing outfits that had earlier drawn public ire. Urging authorities to take action against deepfake photos and videos, Aamir added, “Can there be some laws in place? These are NOT my videos if anyone is convinced they are.”
Later, Aamir returned to her Stories to ask followers to report the account. Sharing a screenshot of the offending profile, which appears to belong to an Indian user, she wrote, “I’m blocked, but can you guys please report this account?”
AI: An eerie election tool
Aamir is not the first celebrity to fall prey to AI-generated images circulating in their name. In August, US Republican presidential candidate Donald Trump stirred up controversy yet again after posting images online that depicted Taylor Swift endorsing his campaign. According to Variety, the images included several ‘Swifties’ supposedly supporting him. Trump later acknowledged that the images were “fake” but said he was not worried about being sued, since the AI images were “all made up by other people.”
Trump’s campaign is not the only one in this year’s national elections to have featured AI images of celebrities. In April, ahead of India’s national elections, which concluded in June, Reuters reported that fake videos of Bollywood stars Aamir Khan and Ranveer Singh showed them asking fans to vote for the opposition.
In a 30-second video of Khan and a 41-second clip of Singh, the two actors purportedly said Narendra Modi had failed to keep campaign promises or address critical economic issues during his two terms as prime minister. Both actors later went on record to disown the videos, which had been viewed more than half a million times within a week.
The implications are clear: before deepfake images and videos of celebrities can be identified for what they are, they can go viral among easily influenced fans, causing widespread defamation or even swaying the course of an election.
All celebrities at risk
It is not just political campaigns that make use of AI versions of celebrities. In May this year, Hollywood actor Scarlett Johansson was embroiled in a tussle with OpenAI, accusing the company of creating a voice for ChatGPT, known as ‘Sky’, that sounded “eerily similar” to her own. According to Johansson, “my closest friends and news outlets could not tell the difference.”
After Johansson released a statement voicing her concerns, however, OpenAI CEO Sam Altman issued his own statement to Reuters.
“The voice of Sky is not Scarlett Johansson’s, and it was never intended to resemble hers,” said Altman. “Out of respect for Ms Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms Johansson that we didn’t communicate better.”
Johansson ultimately won her battle with OpenAI, but whether Aamir can successfully shut down the account using an artificially generated version of her face remains to be seen. If this year’s war between celebrities and AI is any indication, however, the message is clear: anyone whose face and voice are recognisable remains at risk.