With a crudely altered video of House Speaker Nancy Pelosi, D-Calif., fresh on everyone's minds, lawmakers heard from experts how difficult it will be to combat these fakes and prevent them from being used to interfere in the 2020 election.
"We don't have a general solution," said David Doermann, a former official with the Defense Advanced Research Projects Agency. "This is a cat-and-mouse game." As the ability to detect such videos improves, so does the technology used to make them.
The videos are made using facial mapping and artificial intelligence. The altered video of Pelosi, which was viewed more than 3 million times on social media, gave only a glimpse of what the technology can do. Experts dismissed the clip, which was slowed down to make it appear that Pelosi was slurring her words, as nothing more than a "cheap fake."
Rep. Adam Schiff, the committee chairman, said the Pelosi video "demonstrates the scale of the challenge we face." But he said he fears a more "nightmarish scenario," with such videos spreading disinformation about a political candidate and the public struggling to separate fact from fiction.
The technology, said Schiff, D-Calif., has "the capacity to disrupt entire campaigns, including that for the presidency."
Doermann said the threat has grown worse due to the proliferation of what was once specialized technology. Creating convincing fabricated videos once required expensive equipment and software, but now "a high school student with a good computer" can do it, he said.
"It's not something that you have to be an (artificial intelligence) expert to run. A novice can run these types of things," he said.
Schiff told reporters after the hearing he believes federal regulation aimed at curbing deepfakes is "worthy of serious consideration." But he said the problem will never be completely solved, only suppressed.
Clint Watts, a research fellow with the Foreign Policy Research Institute, suggested tech companies should play a role in deciding which false videos should be taken down from the internet. But that idea drew skepticism from the committee's top Republican, California Rep. Devin Nunes. He raised concerns about granting too much authority to tech giants such as Facebook and Twitter to make judgment calls about content.
Nunes claimed that current filters have a pro-liberal bias. "Most of the time it's conservatives who get banned," he said. "It's all in who's building the filter, right?" (Nunes has sued Twitter and several of the platform's users, accusing them of defaming him.)
Danielle Citron, a University of Maryland law professor, told lawmakers that many of the laws regulating online video date back decades and need to be overhauled to keep pace with the growing threat.
"We have an audience primed to believe things like manipulated video of lawmakers," Citron said. "I would hate to see the deepfake where a prominent lawmaker is purported to ... (be) seen taking a bribe that you never took."
U.S. intelligence officials have repeatedly warned about the threat of foreign meddling in American politics, especially in the lead-up to elections.
U.S. officials determined Russia carried out a sweeping political disinformation campaign on U.S. social media to influence the 2016 election. The director of national intelligence, Dan Coats, has said Russia attempted to meddle in the 2018 midterm elections, but was unsuccessful.