Fear of artificial intelligence has long proven fertile ground for film and TV makers — and yes, it usually errs on the apocalyptic side of things.
Master of the World, 2001: A Space Odyssey, Ex Machina, the Terminator and The Matrix franchises, Avengers: Age of Ultron, I, Robot, even episodes of The X-Files and The Simpsons, to name a few, have each contended with a machine learning threat against humanity. These films play on audiences' social distrust of the technological unknown, the looming threat of the singularity, and usually involve killer robots and a mainframe that needs hacking to save the planet. Yet as the decades have gone by, what we once considered science fiction has evolved closer to scientific reality, causing the antagonistic sci-fi trope to make a timely comeback onscreen.
In the last year, M3GAN, Jung_E, Operation Fortune: Ruse de Guerre, Mission: Impossible – Dead Reckoning Part One and, of course, Black Mirror have explicitly grappled with techno-paranoia. Together, these screen stories reiterate warnings to audiences about the misuse of AI, question its nefarious implementation by corporate or bureaucratic structures, and caution against its public roll-out before its capabilities have been ethically tested.
They also arrive at a time when real-world anxieties in Hollywood about the economic and creative threat of AI are being battled on the picket line. The Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) joined the Writers Guild of America (WGA) on strike for the first time in 60 years in July 2023, as their union reps negotiate fairer terms with the Alliance of Motion Picture and Television Producers (AMPTP). This includes contractual protection against being rendered obsolete by AI tools. But both in front of the camera and behind it, the question remains: is AI the real threat – or is it the humans using it?
'Jung_E' tells the story of an AI mercenary, code-named Jung_E. Credit: Netflix

In Gerard Johnstone's horror film M3GAN, it appears it's the AI in charge, as an artificially intelligent doll becomes self-aware and obsessed with its owner, goes on a murder spree, and covers its tracks by deleting and corrupting its own files.
But M:I7 star Hayley Atwell is leaning more towards human responsibility. In the seventh instalment of Mission Impossible, a rogue sentient AI called the Entity breaches its human programming and embarks on a cyberterrorism campaign against the world powers that seek to control it. But Atwell is less concerned about this sort of tech being inherently corrupt.
"I feel like power, money, AI, in and of itself is a neutral thing," Atwell told theFade To Blackpodcast, in an interview conducted before SAG-AFTRA joined the strike. "It's how it's used, how it's abused, and what it's used for. [AI] itself doesn't scare me because being fearful of something that is progressing means you're not participating in the right use of it."
While Hollywood itself isn't using AI to directly threaten humanity (yet) like its movie villains, its use has become the core ethical argument in negotiations. Many Hollywood actors and writers would argue that studio executives pushing AI are not participating in the right use of it. According to Black Christmas screenwriter and WGA member April Wolfe, whose second feature Clawfoot is in post-production, machine learning tools were not originally a main point of contention in the WGA's new deal proposal for the AMPTP, "but the fact that [they] did not budge at all on AI basically gave away their hand," she explains to Mashable.
"You spend your entire life trying to train in a craft and then someone says that they're fine without you, they can do it with a robot – it feels just awful."
"We have already seen the parallels of how AI affects other industries; it's all basically the same problem. The bigwigs, the head of the company, want to do AI because they see it as cost cutting, they want to get rid of labour."
Company enthusiasm for AI learning, and then taking over, human crafts like these contributes to the devaluation of the worker – in this case, the writer.
"You spend your entire life trying to train in a craft and then someone says that they're fine without you, they can do it with a robot – it feels just awful," Wolfe adds.
Mark Gatiss and Charles Parnell in 'Mission: Impossible Dead Reckoning Part One'. Credit: Paramount Pictures and Skydance

An increasing number of AI writing and content creation tools entering the marketplace, like ChatGPT, have emboldened the WGA, which represents around 11,500 screenwriters, to demand that studios not use them to pen outlines or first drafts. Doing so would prevent writers from earning the pay cheques that come with sole credit: WGA members earn significantly less money if they are only brought in to rewrite or polish a script rather than originate it as the first writer. Using AI in this way would make it more difficult for members of "the WGA to earn the minimum amount to qualify for insurance or pension," the screenwriter says.
The current minimum for the pension plan is $5,000 (£3,972) in covered earnings, and around $41,700 (£33,129) of covered work over the course of four calendar quarters is needed to qualify for a year's health insurance. Writers can also rack up credits by earning more than $100,000 in union-sanctioned work and by qualifying for WGA health insurance for 10 years, and members, according to the most recent WGA figures, earn around $250,000 (£198,617) a year (before taxes, union dues and payments out to any agents, managers and lawyers on their team).
But not every member works consistently enough (it can often be years between projects, some of which don't get greenlit to become films or series) or is paid enough to maintain coverage in the gig economy of Hollywood – especially as this strike continues. For SAG-AFTRA members, according to actor Tavi Gevinson, only 12.7 percent of the over 160,000 guild members earn the annual $26,470 required to qualify for union health insurance. Fortunately, SAG-AFTRA approved continued health insurance coverage through the end of the year for members who made at least $22,000 before the strike started.
Secondly, notes Wolfe, AI would signal the deterioration of creative originality and integrity. Generative AI writing tools rely on being fed preexisting scripts and literature trawled from the internet. "You can only get AI to regurgitate what has already been done," she says. "That means a complete stop to any advancement of storytelling."
Both She-Hulk and Black Mirror have this year used the idea of AI-generated storytelling as a pejorative plot point. While the Disney+ series offers a meta-joke about the Marvel Cinematic Universe being written by K.E.V.I.N., an enhanced AI named after Marvel Studios' head honcho Kevin Feige, Charlie Brooker's Black Mirror episode "Joan Is Awful" turns its streaming home Netflix into a corporate villain, renaming it Streamberry and having its CEO exploit subscribers' lives for dramatic content, using a "quamputer" that automatically generates personalised stories from private audio recordings taken from users' devices.
"Using technology to potentially undermine artistry, as opposed to augment it, is a concern."
Netflix has already employed AI tools such as its Machine Learning Platform to "aid creative decision makers" and mitigate the risk when it comes to choosing which films and series to commission and promote. Earlier this year, Netflix was also criticised for using AI to generate background art for the short film Dog and Boy. In a tweet by Netflix Japan, the company suggested the work of Netflix Anime Creators Base, technology developer rinna Inc., and WIT STUDIO was "an experimental effort to help the anime industry" because of a labour shortage. Studio Ghibli co-founder Hayao Miyazaki once said of AI-generated images, in a 2016 documentary, "I strongly feel that this is an insult to life itself," and many in artist communities share that sentiment, as well as a frustration about low wages and long hours – big factors that contribute to the shortage of labour.
British actor and SAG-AFTRA member Himesh Patel, who starred in "Joan Is Awful" as a deepfake version of himself, tells Mashable, "Using technology to potentially undermine artistry, as opposed to augment it, is a concern." Appearing in the Black Mirror episode, as well as hearing information gleaned by union reps in meetings, he says, has also been "a wake up call."
Patel refers to SAG-AFTRA's chief negotiator Duncan Crabtree-Ireland, who claimed that the AMPTP had "proposed that our background performers should be able to be scanned, get paid for one day's pay, and their company should own that scan, their image, their likeness, and to be able to use it for the rest of eternity, on any project they want, with no consent and no compensation". Others, like filmmaker Justine Bateman, have suggested studios also wanted "to feed 100 years of acting performances (for a nominal fee) to train [generative] GAI models." The AMPTP has disputed these claims, but the fear of unwittingly signing your likeness away is very much real – and a fundamental plot point of "Joan Is Awful."
"These things can be done insidiously by inserting certain wording into contracts and then you've signed it but I'm in a privileged position where I have people to thoroughly vet that for me," Patel says. "Whereas, the people that we're striking for and the people that need this protection, by and large, are the people who aren't earning enough money to get health care, let alone employ a corporate lawyer to vet all these contracts for them."
In 'M3GAN,' an artificially intelligent doll becomes self-aware and obsessed with its owner. Credit: Geoffrey Short/Universal Pictures

Wolfe is also fearful that this sort of deepfake technology might lead to greater censorship, as well as the distortion of an actor's intended performance or a writer's intended story. She points to the 2022 film Fall, which used an AI tool from the company Flawless to dub 30 utterances of the F-word – and match actor Virginia Gardner's lips to the new dialogue – in order to change its R rating to PG-13.
"There was something about it that felt wonky when I was watching it," she recalls. "How this application can be used domestically or internationally for different kinds of regimes that would like to censor different things is a little bit scary to me as a creator because it means that you have less control over what's happening and how your film is being perceived in different places."
In British sci-fi thriller T.I.M, directed by Spencer Brown from a script co-written with his wife, author Sarah Govett, an AI manservant becomes dangerously obsessed with its owner, prosthetic engineer Abi (Georgina Campbell). The robot uses deepfake video and voice technology, as well as various breaches of privacy, to manipulate a fissure in the marriage of Abi and her husband, Paul. As with Black Mirror's Joan, Abi and Paul give their consent for T.I.M to access their data, not realising how seriously it might be misused.
"The idea that we have these devices that we voluntarily take into our homes – they eavesdrop on us and they try to manipulate us – was a starting point," Govett tells Mashable.
"We're really paranoid about privacy as well," adds Brown. "We do find that invasion very scary and we wanted to do something that almost personified this ultimate stalker that is technology."
T.I.M. looks like a human but he can never experience authentic human emotions, a point the filmmakers were intent on hammering home.
"For us, AI can impersonate emotions but it cannot properly hold them," says Govett. "It's really just a string of code, logic. That's what terrifies me most about AI – the coldness of it."
They are also unconvinced that AI writing tools could ever offer anything more original than the written work they learn from.
"We went into ChatGPT when it was first hitting news," says Brown. "All of the things that it came up with, the plot twists, were something you've seen 100 times."
Clearly the consensus, on and off the screen, is that AI in the wrong hands is a threat to humans that cannot be ignored – whether in terms of workplace ethics concerning the cost-cutting and creative attempts to employ AI as a writer rather than a tool, or stripping actors of their autonomy through deepfake avatars. Film and television will suffer if studios implement AI technology incapable of original emotional thought. So as long as real-world fears in Hollywood intensify, fictional storylines about homicidal robots, algorithms outsmarting their human creators, and nefarious tech companies exploiting the lives of their subscribers won't slow down.
"The cynic among us might think that's all that art is – people taking information and then spitting it out in some other way – but what we're forgetting is the human element of it," says Patel. "There's another filter through which this stuff comes out that's intangible – [AI] can't replicate it."