Dec 22, 2022

In a World of Widespread Video Sharing, What’s Real and What’s Not?

A discussion with a video-authentication expert on what it takes to unearth “deepfakes.”

Illustration by Michael Meier: a detective pulls back his computer screen to reveal code behind the video image.

Based on insights from Nicola Persico and Bertram Lyons

With the rise of the smartphone, the last decade has seen an explosion in the production and distribution of video across the web. Today, people around the world have unprecedented ability to capture and disseminate everything from children’s dance recitals to police violence or war crimes.

But technological advances in AI have also encouraged the proliferation of deepfakes: seemingly realistic videos that contain inauthentic elements. Perhaps unsurprisingly, there is also a burgeoning industry developing around spotting these manipulated videos.

“When you have a digital video in your hand, there are a lot of questions that we can answer about it,” says Bertram Lyons, CEO of Medex Forensics, a software engineering company that supports source detection and authentication of digital video files. “Where’d it come from? Who created it? What was used to create it? What was the process that it went through to come to be as it is at this very moment?”

Lyons sat down with Nicola Persico, a professor of managerial economics and decision sciences at the Kellogg School, to discuss his company’s unique approach to analyzing a video, the industries that rely on video authentication, and the regulatory gray areas social-media platforms face when it comes to deepfakes.

This interview has been edited for length and clarity.

Nicola PERSICO: It seems like video and audio recordings exist along a continuum, from real to selectively edited, to manipulated or doctored, to faked.

Bertram LYONS: I think that’s a good way to put it. It’s definitely a spectrum. And I’ll say that the analysis of a given video object, in order to place some claim on its veracity, also needs to include a spectrum of approaches.

Most tools today are focused on content: looking for faces, looking for changes in shadows, and evaluating pixels. That’s content; there’s no context there. Our tool provides a context-based approach. What we do is a historical provenance analysis that tells you a video’s context before you even watch it.

PERSICO: Can you tell us about that process?

LYONS: If you think about it, the file itself is a piece of evidence that’s been constructed in some way. We take the objects apart within the file, and we look at them from a variety of perspectives. What are they? In what sequence were they put together? When we see a file, we can say, “This file is most similar to a file that’s gone through this or that process.” The goal is to understand every byte in the file.
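
As a rough illustration of what this kind of structural analysis involves, here is a minimal Python sketch, not Medex’s actual method, that lists the top-level “boxes” of an MP4 or MOV file in the order they appear. The presence, order, and sizes of these boxes are byte-level, content-independent signals that a provenance analysis can build on.

import struct, sys

def top_level_boxes(path):
    # Walk the top-level ISO base media file format boxes (ftyp, moov, mdat, ...)
    # and return their four-character types and sizes in file order.
    boxes = []
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            size, raw_type = struct.unpack(">I4s", header)
            name = raw_type.decode("latin-1")
            if size == 1:
                # A 64-bit "largesize" follows the type field.
                size = struct.unpack(">Q", f.read(8))[0]
                f.seek(size - 16, 1)
            elif size == 0:
                # The box extends to the end of the file.
                boxes.append((name, None))
                break
            else:
                f.seek(size - 8, 1)
            boxes.append((name, size))
    return boxes

if __name__ == "__main__":
    for name, size in top_level_boxes(sys.argv[1]):
        print(name, size)

Run against a file, this prints a sequence such as ftyp, moov, mdat, and details like whether moov precedes mdat can differ between recording devices, editing tools, and apps.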

PERSICO: You got into this field by working with the FBI. I can see how understanding historical provenance would be particularly useful for government agencies if they are trying to distinguish between, say, those who possess child sexual abuse material (CSAM) and those who actually produce it.

LYONS: Exactly. In the U.S. alone, about 45 million suspected CSAM videos come through national tip lines every year. A county prosecutor’s office is often tasked with investigating and putting charges in place when they discover that an individual has CSAM on one of their devices. Usually what happens in this case is there’s a forensic extraction of the video from the device.

And at that point, most investigators haven’t historically been able to really go much further. So the person gets a possession charge, which is a less hefty charge—unless they were caught in a distribution ring, where the video was captured somewhere else and was tracked back to that person. More difficult still is to find the people who are actually producing this material.

But our software can analyze these video files for investigators and identify their context. We can demonstrate that only this screen-capture app on this type of device creates files in this way. Investigators can then see it is likely that this person created the video. So in such cases, we are able to help move a possession charge to a production charge, which is the ultimate goal of all CSAM investigators: to find the producers.

PERSICO: Authenticity looks like a big part of your business model. You talked about law enforcement in the context of distinguishing between production and possession for video. What industries or entities need to ascertain authenticity for video?

LYONS: We work with law enforcement and public safety here too, where questions around digital evidence—its believability and the ability to extract information and investigative leads from it—are an important part of the day-to-day work. Further down that pipeline is the legal field, where evidence is introduced into court to be used as part of adjudication. So lawyers and their supporters need the ability to say, is this particular piece of video evidence authentic? Can it be trusted? Ultimately, is it what it is purported to be?

We work with social-media organizations, where disinformation and misinformation are quite rampant, and video is often a player in that particular conversation. Those larger organizations have strong interest in understanding ways they can prioritize content moderation of video coming onto their platforms.

We also work with investigative journalists who are using digital video evidence as part of the raw data that goes into putting together media pieces. Investigative journalists are very interested in making sure they understand where the video they’re looking at came from so that they can interpret it correctly.


PERSICO: For instance, I know you have been working with The New York Times on verifying videos from the Ukraine conflict.

LYONS: Yes. Early on, there was the incident where a nuclear plant had come under fire from Russian troops and there was a cry from inside to stop the firing on the facility. Some videos had come out from inside the plant itself, and they looked like something out of a 1960s science-fiction film.

So our colleagues on The New York Times Visual Investigations team wanted to document this and write about it. They ran these files through Medex to try to identify context: How did the video get to Telegram? Does this represent what you might expect a camera-original file to look like when filmed within the Telegram app or uploaded to the Telegram app from the library on a particular phone? Or does it look like something that was generated through Adobe Premiere Pro or Sony Vegas and then uploaded through Telegram?
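
As a purely illustrative continuation of the earlier sketch, the comparison Lyons describes can be thought of as matching a questioned file’s structure against structures observed in files of known origin. The reference signatures below are invented for the example; they are not real Telegram or Adobe Premiere Pro output profiles.

# Hypothetical reference structures for illustration only.
REFERENCE_SIGNATURES = {
    "hypothetical camera-original uploaded via Telegram": ["ftyp", "mdat", "moov"],
    "hypothetical editing-software export": ["ftyp", "moov", "mdat"],
}

def likely_sources(path):
    # Compare the questioned file's top-level box order (using the
    # top_level_boxes helper from the earlier sketch) with each reference.
    observed = [name for name, _ in top_level_boxes(path)]
    matches = [label for label, sig in REFERENCE_SIGNATURES.items() if sig == observed]
    return matches or ["no reference signature matched"]

In practice a real signature would draw on far more than box order, but the principle is the same: the file’s construction, not its pixels, is what gets compared.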

PERSICO: This gets to the question of who the guardians of authenticity should be. If it is the platforms on which videos appear, this would place a lot of risk on those platforms. On the related question of sharing copyrighted material, the EU recently adopted a negligence standard, under which “best effort” on the platform’s part is a defense for platforms that inadvertently share copyrighted material. In the U.S., the battle over liability for sharing copyrighted material was won by Google and Wikipedia, which favored no liability.

The guardians-of-authenticity discussion extends to deepfakes. How are deepfakes evolving? Has identifying them gotten more complicated over time?

LYONS: They’re evolving in a couple of ways. One, they’re more visually compelling: that’s the evolution that consumers are seeing. From the production perspective, as the technology’s getting more advanced, more people can create them. Five years ago, it was pretty hard to make them. Today, it’s easy to do it cheaply, if poorly. Anybody can put a deepfake app on their phone today and create a mediocre one. But it’s easier than it was to create a good one, too. Instead of it taking a team and many weeks to put something together, now it’s to the point where an individual who has any knowledge of computing can follow a set of instructions on a GitHub repository and have a pretty powerful deepfake tool running.

One good thing about it is that it’s become so widespread that people are aware of deepfakes. People are becoming more literate at the same rate that the videos are getting better and better.

PERSICO: Video authentication, like many issues where innovation touches on social norms, is an area where regulation is late to the party. It reminds me of the issues around Google Glass, where questions arose about who holds the rights over the distribution of video recordings. When a person appears in a video, who has the right to control its distribution: the person featured? The person who shot the video? The platform on which it appears?

It seems likely that conflict over the distribution of videos will grow exponentially with the continued rise of social media. The wrinkle here is that who owns a video, and who can restrict its distribution, may depend on whether the video is authentic or not. This raises other questions: Should inauthentic videos be less shareable than authentic ones? Who has legal standing to restrict their distribution?

What do you think the rules of the road are going to be going forward?

LYONS: I think for sure there’s a need for regulation. Platforms certainly have a responsibility to evaluate what’s being broadcast by users. The danger with deepfakes is mostly in these broader social platforms, where content can get out and stay longer and be seen and believed longer before it can be debunked.

It’s less of a danger when a video comes from an individual or an actor’s platform. When it comes from the website of a single organization, the organizations with the greatest technology to analyze it—whether that’s news media or law enforcement—are going to be on it immediately, trying to put an answer in place before it becomes really widespread. But when it comes out of these social-media platforms, it can spread quickly. So I think the regulations have to be focused on ensuring that platforms are making a best effort at identifying potentially harmful content at the time of ingestion into their networks, meaning they are evaluating uploaded content prior to re-encoding it for publication. Today, most platforms take a file from a user, upload it, re-encode it—either on the device or on the edge of their network—then evaluate it in a content-moderation pipeline. We leave a good deal of useful information on the table when platforms re-encode files before evaluating them.

From a digital-media perspective, that is one example of an area where regulation can help increase access to valuable data points for the larger content-moderation algorithms. The more data we can generate and store for downstream moderation, the more accurate we can become at identifying and reducing harmful media content.
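
A minimal sketch of the ingestion ordering Lyons describes, assuming placeholder names rather than any platform’s real pipeline: analyze the original upload and store those signals for moderation before the file is re-encoded for publication.

# Every name here is a placeholder for illustration; transcode_for_publication
# stands in for the platform's re-encoding step.
moderation_signals = {}

def analyze_original(path):
    # Structural signals taken from the untouched upload, here just the
    # top-level box order from the earlier sketch.
    return [name for name, _ in top_level_boxes(path)]

def transcode_for_publication(path):
    pass  # re-encoding happens only after the original has been evaluated

def ingest(path):
    moderation_signals[path] = analyze_original(path)  # 1. evaluate the original first
    transcode_for_publication(path)                    # 2. then re-encode for publication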

Featured Faculty

Nicola Persico
John L. and Helen Kellogg Professor of Managerial Economics & Decision Sciences; Director of the Center for Mathematical Studies in Economics & Management; Professor of Weinberg Department of Economics (courtesy)

About the Writer

Fred Schmalz is the business and art editor of Kellogg Insight.
