Microsoft boss says AI deepfakes are already widespread and a major concern


The widespread use of deepfakes on the web has Microsoft chief Brad Smith worried, and in a speech on Thursday he warned that artificial intelligence software should be regulated.

AI has risen quickly in recent years and is now commonly used in settings ranging from hospitals to war zones.

Speaking at a conference in Washington DC, Mr Smith said deepfakes created by AI would need to be addressed by tech giants.

The Microsoft president continued: “We’re going to have to address in particular what we worry about most: foreign cyber influence operations, the kinds of activities that are already taking place by the Russian government, the Chinese, the Iranians.”

He added: “We need to take steps to protect against the alteration of legitimate content with an intent to deceive or defraud people through the use of AI.”


Deepfakes use a form of artificial intelligence called deep learning to make images of fake events or people.

The technique allows users to alter videos, for example by placing one person’s face on another’s body.

Deepfakes blend AI with computer imagery and have appeared frequently in recent years, particularly during Russia’s invasion of Ukraine.

A deepfake video of Ukrainian President Volodymyr Zelenskyy, in which a fake version of him told his soldiers to stand down, recently made the rounds on social media.

In his speech, Mr Smith said licensing of critical AI technology would help prevent ethical violations.

He said that new measures would be needed to “ensure that these models are not stolen or not used in ways that would violate the country’s export control requirements”.

His speech came during a discussion between Washington lawmakers and AI experts about how the technology can be used positively.

Sam Altman, chief executive of ChatGPT maker OpenAI, also discussed the topic, laying bare his own fears for the future of the technology.

He said: “My worst fears are we, the technology, cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong.”

The biggest concern surrounding the technology, Mr Smith added, was misinformation during election periods.


