Some experts predict that deepfakes—videos created using AI that can make it look like someone did or said something they didn't—could become one of the biggest battles the tech era faces, and that's saying something. The technology has vast implications, with the potential to cause mass uncertainty in politics, as well as the murky worlds of cyberbullying and online pornography.

And then there's business. How can industries including finance, insurance or even the arts survive when it becomes increasingly impossible to tell what is real and what is not? Video manipulation has been possible for over a decade, but rapid and very recent advances in machine learning have made it dramatically easier for an algorithm to capture a person’s likeness from one or two photographs and remake it into a moving image depicting something that never happened.

Last week Google released thousands of deepfake videos so that researchers and tech experts could start creating algorithms to combat them. But are we doing enough?

It is a phenomenon that's consuming Gen.T honouree Jack Chao and his business. A serial entrepreneur since he was 19, Chao was the youngest doctoral candidate in the history of the Department of Electrical Engineering at National Taiwan University, and is also the chief data scientist for Startup@Taipei, which is part of the Taipei City Government’s Department of Economic Development. Today, he fuses computer vision, natural language processing and AI technologies to automate and optimise processes for the insurance industry through his company Bravo AI. And one of the major roles the company is taking on is hunting out deepfakes.

Tatler Asia
Above Jack Chao

“In my opinion there is no good or bad technology, only good and bad people,” he says, slowly but authoritatively, on the phone from Taipei. “We have to remember that technology is advancing for both sides. It feels like we are building up two networks and making them fight with each other: one is the deepfake one, and the other is there to detect it. When they strike a balance, they can generate something that controls the other.”
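The dynamic Chao describes, two systems that improve by competing until they reach a balance, is the idea behind generative adversarial networks. Here is a deliberately simplified sketch of that arms race (the scores, step size and rules are invented for illustration; real systems train neural networks, not scalar counters):

```python
# Toy model of the adversarial dynamic: a "faker" raises the realism of
# its output whenever it gets caught, and a "detector" raises its
# sensitivity whenever it gets fooled. Both values are made-up scores.

def adversarial_rounds(rounds=10, step=0.1):
    realism, sensitivity = 0.0, 0.0   # both sides start out naive
    history = []
    for _ in range(rounds):
        caught = realism <= sensitivity   # detector spots the fake
        if caught:
            realism += step               # faker improves its output
        else:
            sensitivity += step           # detector tightens its test
        history.append((round(realism, 1), round(sensitivity, 1)))
    return history

history = adversarial_rounds()
```

Run for ten rounds, the two scores leapfrog each other and end level, which is the "balance" Chao refers to: neither side wins outright, but each forces the other to keep improving.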

The tech giants have promoted a similar concept and, publicly at least, are playing down the idea that deepfakes could be a major problem of the future, insisting that AI will help deal with the trouble it has created. When he testified before the US Congress last year, Facebook CEO Mark Zuckerberg promised that AI would help the company identify fake news stories, using algorithms trained to distinguish between accurate and misleading text and images in posts.

See also: We Can Now Create “Virtual Humans”—And That’s Kind Of Terrifying

Chao says the same is true of any industry. “We need to use AI to fight all of this, because anything false that has been created using AI is harder to detect and almost impossible to spot with the human eye. But it is also important to remember that fakes have been part of all industries for so long. In insurance, we need to try to figure out if a claim is a fraud, so we use historical data, find elements of what is fake or not, look at people’s careers, ages, patterns and so on. It’s not that different now; it’s just that we are setting up all the indexes and making the AI learn from historical fraud so it can detect similar patterns and forecast future ones.”
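The approach Chao outlines, building indexes from labelled historical claims and learning which patterns correlate with fraud, can be sketched in miniature. Everything below is invented for illustration (the features, the data and the scoring rule are not Bravo AI's); a production system would use a trained statistical model over far richer data:

```python
# Toy fraud scorer: estimate, from labelled historical claims, how often
# each feature value appeared in fraudulent claims, then score a new
# claim by averaging the learned rates for its feature values.

HISTORICAL_CLAIMS = [
    # (age_band, claim_size, filed_soon_after_policy_start, was_fraud)
    ("young",  "large", True,  True),
    ("young",  "large", True,  True),
    ("young",  "small", False, False),
    ("middle", "small", False, False),
    ("middle", "large", False, False),
    ("senior", "small", True,  False),
]

def learn_indicator_weights(claims):
    """For each (feature, value) pair, estimate P(fraud | value)."""
    counts = {}
    for age, size, early, fraud in claims:
        for key in (("age", age), ("size", size), ("early", early)):
            total, frauds = counts.get(key, (0, 0))
            counts[key] = (total + 1, frauds + int(fraud))
    return {key: frauds / total for key, (total, frauds) in counts.items()}

def fraud_score(weights, age, size, early):
    """Average the learned fraud rates for this claim's feature values."""
    keys = [("age", age), ("size", size), ("early", early)]
    return sum(weights.get(k, 0.0) for k in keys) / len(keys)

weights = learn_indicator_weights(HISTORICAL_CLAIMS)
risky = fraud_score(weights, "young", "large", True)     # matches past fraud
routine = fraud_score(weights, "middle", "small", False)  # matches past legit
```

The claim resembling past fraud scores far higher than the routine one, which is the essence of what Chao describes: the system flags new claims whose pattern echoes historical fraud.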

In my opinion there is no good or bad technology, only good and bad people

- Jack Chao -

Above An image from Google's project to combat deepfakes

The amount of effort being put into the development of deepfake detectors suggests a high-level solution is on the way, and that governments are taking the threat seriously. For example, Darpa—the tech-focused research branch of the US Defense Department—has recently created a programme that funds researchers working on automated forgery-detection tools, with deepfakes a specific target.

But technological solutions aside, Chao argues that the next-best tool in the fight against deepfakes is to educate larger swathes of the population, rather than relying on a small subset of tech engineers to do the work for us. “The most important thing is that there is widespread common knowledge about AI,” he says. “If we teach normal people—and not just engineers—about AI, at least they can have a basic idea about whether that video they see is fake or not.”

“My central point of view on all of this is that AI is about all of us,” he continues. “We all need to have the basic skills to see if something is fake or not, and the education to know exactly what AI can do. That is the only way we will gain control over the deepfake industry, and through that, our future.”
