Welcome to Rendering, a Deadline column reporting at the intersection of AI and showbiz. Rendering examines how artificial intelligence is disrupting the entertainment industry, taking you inside key battlegrounds and spotlighting change makers wielding the technology for good and ill. Got a story about AI? Rendering wants to hear from you: [email protected].
At first glance, documentaries and artificial intelligence are not natural bedfellows. The pursuit of truth through artifice would seem to defeat the object, and early experiments, like cloning Anthony Bourdain’s voice, were met with serious skepticism. But as AI has evolved, filmmakers have grown bolder in embracing the technology. So much so that there is a new frontier in the docs world: using avatars to protect the identities of sensitive contributors.
The technique was put to good use in 2023’s Another Body, in which directors Sophie Compton and Reuben Hamlyn masked a college student fighting for justice after she was a victim of deepfake pornography. The disguise — which is pretty effective, even with early generative AI models — was only revealed about 15 minutes into the film, a neat bit of storytelling that highlighted the power of deepfakery.
More recently, UK network Channel 4 experimented with AI anonymization in Kill List: Hunted by Putin’s Spies, while last month, Netflix deployed the technology in its documentary The Investigation of Lucy Letby. The latter provoked a flurry of headlines in the UK, where the case of convicted baby killer Letby is deeply contested.
Netflix opened the 90-minute film with a disclaimer noting that “contributors have been digitally disguised to maintain their anonymity.” Two interviewees were protected: the mother of a child killed by Letby and a former friend of the nurse, both of whom provided emotional testimony, albeit from behind an AI-generated mask. The results unnerved some, with viewers posting on social media about the uncanny quality of the interviewees.
The Investigation of Lucy Letby was produced by ITN Productions, but credits Deep Fusion Films with “identity protection” work, though Netflix has done nothing to promote this. The UK-based AI company, which is best known for creating a podcast presented by the late chat show host Michael Parkinson, has a video on its website explaining its anonymization technique.
The company boasts a “4K, 10-bit, HDR likeness replacement workflow,” which takes the original footage of the participant and overlays it with a new identity and voice. This means that the contributor’s facial expressions, emotions, eye movements, and body language remain faithful to the original interview.
Netflix did not respond to questions about whether this was the process used on The Investigation of Lucy Letby, but has previously said the masking was done with the consent of participants to “uphold their anonymity either by request or due to court order.” Deep Fusion’s co-founder Benjamin Field declined to comment on the film, though he agreed to discuss anonymization more generally in an interview with Rendering.
We are all used to watching documentaries in which silhouetted, pixelated, or disguised contributors divulge their experiences. Identity protection is part of the grammar of the medium, but Field is not convinced that it best serves those taking risks in telling their stories. “Their truth is never actually heard, because it always has to go through a filter,” he says. “Why would we ever go back to hiding somebody’s face once we can watch the emotional nuance in a digital double?”
Field says Deep Fusion aims to work with contributors to give them control over the avatar, allowing them to curate the look and sound of the AI-generated image that will appear on screen. He then says it is the job of the filmmakers and the studio to be open about artificial intelligence being used, which Netflix was in the case of Letby. “Audiences aren’t stupid,” Field adds. “The more information you give the audience, the better things are received.”
Rachel Antell, an experienced documentary filmmaker and co-founder of the U.S.-based Archival Producers Alliance (APA), agrees that masking can help viewers “connect on an emotional level” with an interviewee. Referring to the APA’s AI guidelines, she supports the need for “rolling consent” from contributors and transparency with audiences. The latter, she says, could extend beyond a title card disclaimer to include subtle signposting within the film, such as deploying a different visual style for masked interviews.
Antell says the method is not without dangers for doc makers, who may be using AI tools without any clue about the provenance or potential biases of the underlying models. “One of the things you’re dealing with all the time is this black box quality of AI. You know what you’re putting in, and you know what you’re getting out, but you have no idea what happened in between those two things,” she explains. It’s an obvious conundrum for journalists seeking verified information.
The ultimate test of masking is whether it is accepted by those watching. Early experiments have provoked unease, supporting Field’s theory that “audiences generally don’t think AI is cool.” Field jokes that he was once called a “digital meat puppeteer” for regenerating Thunderbirds creator Gerry Anderson in 2022’s A Life Uncharted, but with time, he thinks audiences won’t question the technique, particularly if artifice can boost authenticity.