How deepfaking takes the ‘death of the author’ to a new level

Examining the multilayered ethics of manipulating the voices of the dead

Anthony Bourdain. (Flickr/dackelprincess)

When director Morgan Neville disclosed that he had used software to imitate Anthony Bourdain’s voice in Roadrunner: A Film About Anthony Bourdain, people were disturbed. The lines were taken from an email Bourdain had written; for the film, they were turned into speech using old recordings of his voice and voice-cloning software.
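Neville hasn’t said exactly which tool was used, but the underlying technique, voice-cloning text-to-speech, is now widely accessible. As a purely illustrative sketch (not a reconstruction of the film’s actual pipeline), here is how a short reference clip can condition the open-source Coqui TTS library’s XTTS model to read arbitrary text in an imitation of a speaker’s voice; the file paths and text below are hypothetical:

```python
# Illustrative sketch of voice-cloning text-to-speech with the open-source
# Coqui TTS library. This is NOT the tool used in Roadrunner (undisclosed);
# it only shows how accessible this kind of synthesis has become.
# All file paths and the sample text are hypothetical.
from TTS.api import TTS

# Load a pretrained multilingual voice-cloning model (XTTS v2).
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short clip of the target speaker conditions the model on their voice;
# the model then reads any text you give it in an imitation of that voice.
tts.tts_to_file(
    text="Words the speaker wrote but never said aloud.",
    speaker_wav="reference_clip.wav",  # hypothetical recording of the speaker
    language="en",
    file_path="cloned_speech.wav",
)
```

That a few seconds of audio and a dozen lines of code suffice is precisely what makes the ethical questions below urgent.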

Figuratively, the “death of the author” refers to analyzing someone’s creative work while setting aside their intentions and personal context. But what happens when we use technology to make it seem like they’re still speaking after they die? Do their intentions matter then?

Maybe. But the appropriation of the culinary star’s voice years after his death raises uncomfortable ethical questions we have yet to answer.

To some, deepfaking Bourdain’s voice is justifiable: Neville says he consulted Bourdain’s close circle and literary executor, and recontextualizing the words from text to speech gives them greater emotional impact.

To others, the decision represents a posthumous violation of consent, an upsetting glimpse of how technology can usurp a person’s agency and autonomy, and a troubling act of deception embedded in a documentary, a genre audiences understand to be factual.

The immediate effect of using Bourdain’s synthesized voice to fabricate sentences is small: just a few lines in a movie. Who cares whether he ever spoke them aloud? Bourdain was a writer, and there are hundreds of hours of audio and thousands of pages of his words available for public consumption and interpretation. A few sentences are a drop in the bucket, a small price to pay for heightening the film’s emotional immediacy.

However, even if the immediate consequences of deepfaking Bourdain’s voice were arguably harmless, the implications of the act are not. Though it didn’t damage his reputation or mislead the audience into believing he wrote something he didn’t, it showed us the power filmmakers have over our understanding of reality, even in ways audiences find troubling.

Not only can it be perceived as a violation of Bourdain’s consent, it’s a violation of the audience’s trust, since the film itself never acknowledges that the voice is deepfaked.

Deepfaking someone’s voice in a documentary after they die is a collision of two ethically grey areas that regulators have yet to seriously grapple with: the acceleration of our technological capability to alter the perception of truth, and the question of who controls our personal data and public image, even after we die.

Think of what happens to people’s social media accounts after they pass away. Facebook lets users designate a “legacy contact” to look after their account, or choose in advance to have it deleted when they die. For users who don’t think to do this, their page becomes “memorialized” instead, turning it into a bottomless receptacle for people’s thoughts and prayers that can’t be changed or deleted.

It just sits there, continuing to exist. Forever. 

All the photos, posts, and data accumulated under that person’s name persist under the stewardship of a giant tech corporation, and the identity that data established outlives the person who created it, like a digital ghost.

As we spend more and more of our time online, our concept of personal identity is merging with this data, which is probably why it feels creepy to imagine it being used in ways we can’t predict.

Using software to literally make the dead speak new sentences, whether they would have wanted to or not, comes off like a kind of spooky tech-driven necromancy. But the seance-obsoleting holograms of Tupac and of Kim Kardashian’s late father wear their artifice openly; deepfaking is deceptive by nature. Taking Anthony Bourdain’s voice data and using it to create original sentences is an uncomfortably stark example of how the traces we leave behind could be used after we die, regardless of our original intentions.

That is, of course, unless policymakers catch up to this century and strengthen the protection of personal data autonomy as a human right. Just don’t hold your breath.