This story was initially published as a part of Tech News Now, TheStreet’s daily tech rundown. 

Concerns over AI-generated deepfakes have been mounting in recent weeks as the scale and scope of identity theft have seemingly evolved. 

One cybersecurity expert recently referred to the situation as “identity hijacking,” a more dangerous iteration of identity theft in which anyone’s likeness — through video, imagery, audio or text — can be synthesized and fraudulently misused.

Related: Cybersecurity expert says the next generation of identity theft is here: ‘Identity hijacking’

Recent instances demonstrate the variety of attacks that have already been perpetrated: a podcast used AI in an attempt to create a new comedy special in the voice and style of the late comic legend George Carlin; deepfaked, sexually explicit images of Taylor Swift went viral on X, highlighting the growing problem of deepfake porn, which has already impacted high school students; and a finance worker at a firm in Hong Kong was duped into sending scammers — posing in a video call as the company’s CFO — $25 million.

And these are only a few recent, headline-grabbing events. 

Last year, a mom received a phone call from scammers who claimed to have kidnapped her daughter; they played what sounded like her daughter screaming into the phone to convince her to send a $1 million ransom payment.

Her daughter had never been kidnapped; she was safe, at home and in bed. Her screams were generated by AI. 

The problem of deepfakes is proliferating, and it poses enormous threats not just to individuals, but also to electoral processes around the world — at the end of January, a fake, AI-generated robocall of President Joe Biden encouraged voters not to participate in the New Hampshire primary. 

The tech sector recently acknowledged such risks through its announcement of the “AI Elections Accord” last week, a voluntary pact among 20 of the largest social media and AI players to mitigate the electoral risks posed by their technology. 

The voluntary pact would not prohibit the creation or dissemination of such content. 

Neither the Accord’s organizers nor Microsoft responded to a detailed request for comment. 

Related: Deepfake porn: It’s not just about Taylor Swift

‘Disrupting the Deepfake Supply Chain’

In a new open letter published Wednesday — titled “Disrupting the Deepfake Supply Chain” — hundreds of scientists and executives are calling for new laws to at least hinder this proliferation of deepfakes.

The letter calls for the full criminalization of deepfake child pornography, and for governments to establish criminal penalties for “anyone who knowingly creates or facilitates the spread of harmful deepfakes.”

The letter also calls for responsibility to be placed on software developers and distributors, urging governments to require such parties to prevent their products from creating harmful deepfakes and to “be held liable if their preventive measures are easily circumvented.”

More than 90% of deepfakes are nonconsensual sexual material, fraud or attempts to mess with elections. There’s a surge in momentum across the ideological spectrum to disrupt the deepfake supply chain. Please consider joining me as a signatory of this open letter: #BanDeepfakes

— Max Tegmark (@tegmark) February 21, 2024

The letter’s 430 signatories include AI researcher Gary Marcus, Facebook whistleblower Frances Haugen and AI scientist Yoshua Bengio.

The letter comes almost exactly 11 months after the publication of a letter calling for an immediate six-month pause in the development of more powerful AI models. That letter, whose signatories include Elon Musk, has gained more than 33,000 signatures; despite its publicity, the letter, published March 22, 2023, did not result in such a pause.

It highlighted the risk of allowing machines to “flood our information channels with propaganda and untruth,” a threat that in the past year has seemingly metastasized. 

Contact Ian with AI stories via email, [email protected], or Signal 732-804-1223.

Related: Think tank director warns of the danger around ‘non-democratic tech leaders deciding the future’