Microsoft wants to ban AI-generated abuse and scams

In addition to the lack of legislation, Smith argues that the focus so far has been too narrow: the tech industry has concentrated on the use of deepfakes for political purposes in the run-up to the election. Ironically, this blog post comes just a day after Elon Musk shared an unmarked deepfake of Vice President Kamala Harris on X.

However, Smith stresses the need for broader awareness. Industry players and legislators need to look at “the important role that deepfakes play in these other types of crimes and abuses,” he writes.

A tool – and a weapon

The blog post marks the release of a 42-page report, which opens by repeating a warning Smith made in his 2019 book, Tools and Weapons. Then, as now, he wrote that “technological innovation can serve as both a tool for societal progress and a powerful weapon.”

The blog discusses the transformative power of AI in many fields, including medicine. But it also details how the FBI just dismantled a Russian bot farm designed to “spread AI-generated foreign disinformation.”

“We are at a point in history where anyone with access to the internet can use AI tools to create highly realistic synthetic media that can be used to deceive: a clone of a family member’s voice, a doctored image of a political candidate, or even a forged government document.” – Smith

He warns that we are at a profound tipping point where “…AI has made media manipulation dramatically easier – faster, more accessible, and requiring little skill,” simply stating: “As quickly as AI technology became a tool, it became a weapon.”