3 of the creepiest things about ‘deepfake’ video

Fraudsters typically line their pockets by forging our signatures, cloning our credit cards, and stealing our personal identities. Yet we’d like to think that folks who know us personally would catch these counterfeiters if they brazenly claimed to be us in public. After all, seeing is believing, isn’t it? If you don’t look like me, you’re not me. If you do look exactly like me, the chances are that you are me. Right?

Well … maybe. This could soon become the subject of some confusion.

Imagine if stealing your identity could include stealing your image. Scammers could then use that image to put words in your mouth and, in some cases, fake your very actions. This isn’t just some outlandish thought experiment, but a foreseeable hazard if we fail to prepare for a surge in the production of “deepfakes.”

What is a deepfake, anyway?

Urban Dictionary gives a characteristically unrefined definition of deepfake:

A horrific AI-assisted face swapping app which takes someone’s face and places it on someone else’s body. Particularly great if you’re a creep imagining what your favorite celeb-crush looks like naked.

A BBC article offers clearer detail on the simple process by which existing footage can be expertly doctored using readily available tools:

By using machine learning, the editing task has been condensed into three user-friendly steps: Gather a photoset of a person, choose a pornographic video to manipulate, and then just wait. Your computer will do the rest, though it can take more than 40 hours for a short clip. The most popular deepfakes feature celebrities, but the process works on anyone as long as you can get enough clear pictures of the person — not a particularly difficult task when people post so many selfies on social media.

So there we have it: Almost anyone can do it, and literally anyone could become the star of some fake footage (which, to be clear, need not be pornographic).
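To make those three steps slightly more concrete, here is a minimal, purely illustrative sketch (in PyTorch) of the autoencoder idea that underpins most face-swap tools: a single shared encoder learns facial structure from both photosets, while a separate decoder per identity learns to render each specific face. The swap is then just decoding person A’s pose and expression with person B’s decoder. Every name, shape, and training detail below is a hypothetical stand-in, not the code of any actual deepfake app.

```python
# Illustrative sketch of the shared-encoder / dual-decoder autoencoder
# scheme behind most face-swap tools. Conceptual only: shapes, names,
# and training details are hypothetical stand-ins.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: learns facial structure common to both identities."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: learns to render one specific face."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 128, 16, 16)
        return self.net(h)

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per person

# Training (sketch): reconstruct each person's photos through the
# shared encoder and that person's own decoder.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for person A's photoset
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for person B's photoset
loss_fn = nn.MSELoss()
loss = loss_fn(decoder_a(encoder(faces_a)), faces_a) + \
       loss_fn(decoder_b(encoder(faces_b)), faces_b)
loss.backward()  # in practice, many hours of iterations like this one

# The swap: encode person A's frames, decode them as person B.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))  # B's likeness, A's pose/expression
```

The BBC’s “more than 40 hours” figure roughly corresponds to the training step sketched here: the networks need many passes over the photosets before the rendered faces stop looking smeared.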

What could possibly go wrong?

Here are three important concerns to consider as we look to the future of deepfakes.

1. Anyone can do it

Just to reiterate, anyone with a will can find a way to create deepfake footage. You could do it. And so could anyone you know.

Motherboard reported that the Reddit user who started this whole phenomenon has already created an app that helps users create deepfake videos. The app can apparently generate convincing videos with only one or two high-quality clips of the face the user wants to fake. This means that if someone can access real footage of you, they can manipulate it. The somewhat good news here is that many users have reported bugs with the app, so not every face-swap attempt will succeed.

Eric Goldman, a professor at Santa Clara University School of Law and director of the school’s High Tech Law Institute, has cautioned that we “have to prepare for a world where we are routinely exposed to a mix of truthful and fake photos and videos.”

Is it OK to manipulate a video if no one is hurt or embarrassed? Many users produce deepfakes for the purposes of humor and fun. When does the subject of a joke or a taunt become a victim of something akin to hate speech or slander? Where the deepfake is sexual, is it less harmful than so-called “revenge porn” (given that it isn’t the victim’s actual body being exposed)?

At the moment, the parameters of the software’s acceptable use are unnervingly loose and ill-defined.

2. It’s a pretty sticky legal area

That’s right: As preposterous as it may sound, fighting back won’t be easy if some miscreant makes a video in which your face is grafted onto the body of someone who is, well, doing anything the creator wants you to do.

The Verge explains that there is no single law to help you. Defamation? Maybe, but such cases are expensive and hard to win, and if the creator is anonymous or overseas, a claim is of little practical use. Nor can you sue someone for a privacy violation when the intimate details they’re exposing are not of your life. Pushing to have the content removed can even run into First Amendment objections.

The third-party websites hosting the videos aren’t liable for the content, nor can you force them to remove it, unless the copyright owner of the original video asserts an infringement. That means you’d have to track down the copyright owner and enlist their help.

Fortunately, major sites like Reddit, Pornhub, and Twitter have announced bans on deepfake content, which means most sites will likely err on the side of protecting victims of deepfakes rather than siding with users who post altered footage.

3. It’s a tool for deception

The person featured in a deepfake isn’t necessarily the only victim. This fake video of former President Barack Obama is just one of many AI-generated fakes portraying a political leader. Its creators warn that in the future we could see similar pseudo-videos that are used to spread disinformation, panic, and fear in the same way as we’ve witnessed with various recent “fake news” scandals. This could harm us without altering a single pixel of our own images.

Moreover, as these tools become more refined and realistic — and we’re on that trajectory — they could be used for things like bribery, the production of false evidence, and any number of other criminal activities. All with relative ease.

This is perhaps even more worrying in a climate where AI surveillance promises sweeping reductions in crime, and thereby justifies the collection of enormous amounts of vulnerable video footage.

What’s next?

We need to continue developing techniques that can counteract the pernicious effects of AI technology, while keeping pace with those effects. This could mean creating another AI that can call out this AI. It sounds ridiculous, but that might be what we ultimately rely on.
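As a rough illustration of what “an AI that can call out this AI” might look like, here is a hypothetical sketch of a per-frame forgery detector: an ordinary binary classifier trained on frames labeled real or fake. The architecture and data are invented stand-ins; real detection research is considerably more sophisticated.

```python
# Hypothetical sketch of "AI that calls out AI": a binary classifier
# trained to label video frames as real or deepfaked. Illustrative only.
import torch
import torch.nn as nn

detector = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 1),  # a single logit: how likely the frame is fake
)

# Stand-in training batch: real frames labeled 0, deepfaked frames labeled 1.
frames = torch.rand(16, 3, 64, 64)
labels = torch.randint(0, 2, (16, 1)).float()

loss = nn.BCEWithLogitsLoss()(detector(frames), labels)
loss.backward()

# Scoring a new frame: anything above 0.5 gets flagged as suspect.
with torch.no_grad():
    p_fake = torch.sigmoid(detector(torch.rand(1, 3, 64, 64)))
print(f"estimated probability the frame is fake: {p_fake.item():.2f}")
```

The catch, of course, is that the same training signal can be turned around to teach a generator to fool the detector, which is precisely the regress described below.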

But herein lies another problem: If we’re dependent upon an infinite regress of smart technology (AI that holds accountable the AI that holds accountable the AI … and so on), do we drop the reins in a way we will all live to regret? In other words, will these deepfake videos evolve to the level whereby no human, no matter how smart, could differentiate real footage from fake footage?

And if that’s where we’re headed, shouldn’t we really be having conversations about whether it’s where we want to go?

This story originally appeared on Medium. Copyright 2018.

Fiona J. McEvoy is a tech ethics researcher and the founder of YouTheData.com.
