The worst days of deepfakes are likely ahead of us, potentially impacting all levels of government
By: Pierluigi Oliverio
The internet is rife with videos of famous people seemingly saying and doing unorthodox things. It may look like them or sound like them, but is it really Joe Biden, Donald Trump or Volodymyr Zelenskyy speaking? No one wants to be duped, yet even perfectly intelligent people often cannot distinguish these so-called “deepfakes” from reality. The internet may be lauded for its ability to bring us together, but misinformation threatens to pull us further apart.
Deepfakes include audio, video and image manipulations or can be completely fake creations altogether. Examples include face swaps, lip syncing, puppeteering and even creating people who don’t exist (check out www.thispersondoesnotexist.com). Sometimes deepfakes are done for comedic entertainment and are so outrageous that they are obviously fake. In other cases, deepfakes can be downright malicious, such as using a person’s image in pornography. I fear the worst days of deepfakes are likely ahead of us, potentially impacting all levels of government.
What is to stop those with nefarious intent from depicting government officials in deepfakes? Unfortunately, not much: deepfakes can be created anywhere and distributed globally with little to no accountability. We will likely see suspiciously timed viral deepfakes before every election, intended to sway voter judgment. Or deepfakes may be used to sow chaos, putting words in the mouth of a county public health director, city manager or police chief to push a false narrative during a crisis.
When false government documents regarding a COVID outbreak circulated in Los Angeles County, a county supervisor had to publicly call out the hoax to contain panic. Government entities forced to dispute the authenticity of digital media too often may then face a “boy who cried wolf” scenario, where constituents, unable to discern what is genuine or trustworthy, no longer believe anything from the government.
To fight misinformation and help people distinguish what can be trusted in today’s digital media environment, the Content Authenticity Initiative (CAI) was formed. Its purpose is to add a layer of verifiable trust to all types of digital content through an open-source standard. CAI members include news purveyors such as AP News, BBC, CBC Radio Canada, Gannett, McClatchy, the New York Times, Reuters, the Wall Street Journal and the Washington Post.
The goal is for content creators to adopt this standard to increase trust and reduce misunderstanding. With this method, when digital content appears on screens around the world, its history travels with it, and if anything was changed along the way, everyone can see it. This is an opt-in approach, not a mandate: it is up to each of us to look for digital provenance before extending our trust. At some point in the future, any digital media that does not incorporate this free, open-source technical standard will likely be suspect.
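For technically minded readers, the core idea behind content provenance can be illustrated in a few lines of code. The sketch below is not the actual CAI standard, which relies on cryptographic signatures and standardized manifests embedded in media files; it is a minimal, hypothetical Python illustration of how a tamper-evident edit history can travel with a piece of content, so that any undisclosed change is detectable.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def add_provenance(manifest: list, content: bytes, action: str) -> list:
    """Append a tamper-evident record: each entry hashes the content
    plus the previous entry, so any later change breaks the chain."""
    prev = manifest[-1]["entry_hash"] if manifest else ""
    record = {"action": action, "content_hash": sha256(content), "prev": prev}
    record["entry_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
    return manifest + [record]

def verify(manifest: list, content: bytes) -> bool:
    """Check that the chain links correctly and that the final
    recorded hash matches the content actually being viewed."""
    prev = ""
    for rec in manifest:
        body = {k: rec[k] for k in ("action", "content_hash", "prev")}
        if rec["prev"] != prev:
            return False
        if rec["entry_hash"] != sha256(json.dumps(body, sort_keys=True).encode()):
            return False
        prev = rec["entry_hash"]
    return bool(manifest) and manifest[-1]["content_hash"] == sha256(content)

# A photo is captured, then cropped; each step is recorded.
photo = b"original pixels"
manifest = add_provenance([], photo, "captured")
edited = b"cropped pixels"
manifest = add_provenance(manifest, edited, "cropped")

assert verify(manifest, edited)      # history and content agree
assert not verify(manifest, photo)   # stale content no longer matches
```

The point of the design is that the history itself cannot be quietly rewritten: altering any past record invalidates every hash that follows it, which is what lets a viewer see “if anything was changed along the way.”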
What can be done at the consumer level? Urge government entities to apply these standards to their own digital media assets before publishing, so that we as constituents know that what we are viewing is trustworthy. Ask Congress to pass the bipartisan Deepfake Task Force Act (S. 2559). Ask state and federal representatives to promote this technical standard, especially with social media platforms.
But until then, consumers should remain skeptical of digital media that lacks appropriate certification. Most important, we must be better stewards of the information we consume and reflect before sharing it more broadly. We should also hone our critical thinking skills as much as possible. Personally, I recommend occasionally picking up a hard-copy newspaper: it is easier on the eyes and supports local journalism as well.
Pierluigi Oliverio is chair of the San Jose Planning Commission and a former San Jose City Councilman.