David Graff, Google Trust and Safety’s VP of Global Policy and Standards

This election year, generative AI poses new challenges, as cybercriminals can use it to create misleading content such as deepfakes: fabricated images, video, or audio.

Michael Kaiser, CEO of Defending Digital Campaigns, held a fireside chat with David Graff, Google Trust and Safety’s VP of Global Policy and Standards. The talk took place Thursday morning at Google’s half-day summit on election security at its downtown Austin office.

In addition to the presidential election, more than 1,000 seats in Texas are on the ballot in November, which might attract U.S. adversaries, cybercriminals, and hacktivists to launch targeted attacks, according to Defending Digital Campaigns.

In past elections, cybercriminals have attacked individuals and organizations and spread dangerous misinformation. Kaiser said that this year, generative AI threatens to disrupt elections even more.

These days, Graff and his team at Google spend a lot of time on generative AI.

“This is a little bit like old wine in a new bottle,” Graff said.

Graff said Google has long dealt with the challenges of misinformation, impersonation, and bad actors, and has a set of policies and enforcement mechanisms for handling fake content.

Graff said Google has developed policies requiring advertisers to disclose when they use AI in consequential ways in political ads.

“So, people understand what they are seeing,” Graff said.

So far, campaigns seem cautious about using generative AI in deceptive ways, Graff said, but there are concerns about misuse by actors outside official campaigns.

Graff said that Google’s search engine elevates authoritative, fact-based information for election queries.

“In terms of the new technology, which is incredibly transformative and incredibly powerful, it does present some new challenges,” Graff said. It allows people to create high-quality video and audio content, he said.

But large language models are prone to hallucinations, or making things up, and they can propagate misinformation, he said.

Graff said Google also promotes transparency around AI-generated content through techniques like watermarking and metadata standards, with a focus on helping the public identify synthetic content.