The Meaning of Life in 'Blade Runner 2049'

A philosopher expounds on the film's deep questions about knowledge and genetically engineered life, and offers some clues as to its ambiguous ending.
A scene from Blade Runner 2049.

For an '80s flick starring Harrison Ford and gigantic shoulder pads, Blade Runner has a serious philosophical streak. Director Ridley Scott's sci-fi classic is set in 2019, in Los Angeles, a city that's been devastated by pollution, industrialization, and overpopulation. The film tracks Rick Deckard (Ford), a detective whose career is devoted to capturing "replicants"—synthetic human servants now feared after a violent, off-world rebellion. Through its oft-poetic replicants, the film asks what it means to be human, and to what extent a synthetic humanoid can have a real personality.

Many had hoped Blade Runner 2049, the film's sequel out in theaters now, might answer some of the many remaining questions left by the original. But director Denis Villeneuve clearly had other intentions. Blade Runner 2049, true to the radical spirit of its predecessor, instead poses new questions to viewers about the nature of humanity, this time with a biological spin. The latest film follows K (Ryan Gosling), a member of a new and more obedient generation of replicant, and himself a blade runner tasked with hunting down older, more rebellious replicants. When K finds the bones of a pregnant replicant on the land of a replicant he's "retired"—blade runner-speak for killed—his boss orders him to erase all evidence that replicants can reproduce—even the dead woman's replicant child—fearing that the information could lead to a human-replicant war. As he embarks on his investigation, K comes to question the reality of his memories, and even his identity as a replicant.

To learn more about the philosophical questions posed by Blade Runner 2049—as well as potential answers—we turned to Timothy Shanahan, a philosophy professor at Loyola Marymount University and the author of Philosophy and Blade Runner. Shanahan's book teased out the original film's philosophy, and contrasted the film's ideas with those of Sartre, Descartes, and other canon philosophers. Speaking to Pacific Standard, he discussed the film's atypical narrative arc, the humanity of a hologram, and what was up with Jared Leto's eyes. Be warned: There are many spoilers ahead.


What are some of the philosophical lessons that you argued the first film conveyed in Philosophy and Blade Runner?

Arguably the key philosophical issue in the [original] film is the question "What does it mean to be human?" The film conveys the idea that simple diagnostic criteria for deciding who or what is or is not human are bound to fail and that what it means to be human isn't so much an objective fact that's out there in the world waiting for us to discover, but rather it has more to do with our attitudes toward one another. Being human is more like a social construct than it is an objective fact. Now, that doesn't mean that objective facts in the world aren't relevant—they're very relevant and they're probably necessary—they're just not sufficient.

There are other, secondary issues. The replicants make their way back to Earth on a quest, led by [rogue replicant leader] Roy Batty, to talk to [replicant manufacturer] Mr. Tyrell, because [Roy] wants more life. He seems to think that, if he gets more life, by which he means more years [because replicants have only a four-year lifespan], his problems will be solved. I try to argue in the book that it's not obvious that, even if Roy got what he wanted, his problems would be solved, because then he would have an equally serious problem: What would I do with the time that I've been granted? Even Tyrell, who's not a sympathetic figure in the film, tries to convey this to Roy by saying, "The candle that burns twice as bright burns half as long." I think that's Tyrell telling Roy Batty: It isn't the length of your life that matters, it's the quality of your life and how intensely you live it. So I think that's one of the implicit arguments that the film is making.

Timothy Shanahan.

Does the philosophy of Blade Runner change or expand in any major ways that you noticed in this new movie?

One thing that is very striking to me in the new film is that the world that's been constructed in Blade Runner 2049 is very much a continuation of the world from the original Blade Runner movie; it's a believable three-decades-on world, and it's basically the same world and atmosphere.

I would identify maybe five or six philosophical issues that are pretty clear in the new film. We again get the question, "What does it mean to be human?" Secondly, "What is real?" The Deckard character in the new movie says to [the replicant-manufacturer chief executive officer character, played by Jared Leto,] Niander Wallace, "I know what's real," but he says it in such a way, there's a context in which [I thought], "I think what he means is, 'I hope I know what's real.'" Related to that is, "What can I know?" Various characters in the film are trying to find out things, and it's not clear to them or to us that they can know what they want to know because of contingent factors or because the information is just not available.

The fourth question is: "What grounds a person's identity?" K, Ryan Gosling's character, starts off believing he's a replicant; at some point in the film he becomes convinced that he's not really a replicant, that he's special. But then by the end of the film, he's sort of back to where he began: He knows he's a replicant, and he even knows he's a pretty ordinary replicant. So his sense of identity, at least, has been subverted a couple of times, and [so is the] audience's perception because of course we're manipulated into thinking, "Oh, he's special, oh, he's not."

Related to [that question] is, "Can I trust my memories?" The K character has to deal with this, he has memories of that wooden horse. Dr. Ana Stelline [a subcontractor who manufactures memories to be implanted in replicants] tells him at one point that these memories that she's looking at are real memories. It turns out [later] they're real memories, they're just not his real memories.

The last one is the question of the meaning of life—more particularly, "How can we make our lives meaningful through our choices?" K is leading a fairly grinding life; you see the weariness in his eyes and on his face when he's "retiring" that [first] replicant—the same kind of weariness that Deckard exhibited in the original film. They're going about their jobs and it's a nasty business, but it's what they have to do. But, throughout the film, he makes choices that seem to go against what he was designed to do and what he's designed to be. By the end of the film, when he's lying on his back on the steps and the snow's falling on his body, I think the audience is supposed to get the sense that he's at peace, that he feels he's made the right choices, and that he has accomplished something with his life. Now, what's striking to me is that a lot of big issues are unresolved at that point, and yet he's achieved a kind of meaning in his life by helping a father unite with his daughter. It seems like a small thing in the grand scheme of things, but that's the kind of action that can make his life meaningful.

Let's dig deeper into your first question, "What does it mean to be human?" This time we get this question through the perspective of a character who is explicitly a replicant. How does that change the film's philosophy of what it means to be human?

I think in the first film, you've got humans who think that the presence or absence of empathy is a defining or diagnostic feature of what it means to be human, and that's why they use that Voight-Kampff test [to measure empathy]. I don't know whether that diagnostic criterion of what it means to be human is ever undermined in the first film—rather, it [shows] that humans can lack empathy and replicants can have it. It's not that the test is wrong or anything; it's just that empathy itself is absent in some humans and present in some replicants.

In the new film, it's almost as if they're taking a biological perspective on what it means to be human and what it means to be a replicant. And what I mean by "taking a biological perspective" is that the issue of procreation, which didn't play any role in the old film, starts to loom large in the sequel. So who the daughter, Dr. Ana Stelline, is, and what she is, turns out to be very important for various individuals in the film. In the new film, [the VK Test] doesn't seem to be there. K does use another retinal scanning device to scan the eyeball of the protein farmer, but it's not really about that. It seems like it's more a matter of biology, a matter of how you came into being. So at various points in the film, the idea that somebody is born rather than made seems to carry a lot of weight.

At the same time, humans seem to be inhabiting a more cyborg world: Wallace has an implant to help him see because he seems to be blind, and the advertisements for the hologram companion Joi are all over the city. Are humans themselves becoming more replicant-like in this film?

I'm really glad you asked this question because this is something else I've been thinking about after watching this film: There are some new philosophical issues in this film that aren't present in the original film. We think of a cyborg as an organic-machine synthesis. I don't think we get any [cyborgs] in the original film: Arguably Niander Wallace is something like that in the sequel. But if he's a cyborg, he doesn't look like a very impressive example of being a cyborg because I'm not sure his eyes are electronic.

So to me the far more interesting issue is Joi, the holographic girlfriend for K. The fact that she progressively seems to have a certain degree of autonomy is interesting, because now we have the issue of artificial intelligence. I assume—because the film doesn't explain it—that there's an AI system that lies behind the holographic Joi. There are suggestions in the film that whatever she does is programmed. Here are some examples: She decides to call K by the name "Joe," and that doesn't seem random, because another Joi later in the film also addresses K as "Joe," as if that's just the name that this AI system was programmed to give to male potential customers. Likewise, the viewer is treated over and over again to those electronic billboards that say something like "Everything You Want," and you notice that Joi tends to tell K what he wants to hear. She's catering to his needs but also telling him things like "You're special," which is exactly what he hopes is the case when she's telling him that. But later, when she's giving K directions to destroy the little unit [in K's home] that controls her and contains her memory, and he's reluctant to do it because he knows that destroying her memory is essentially destroying her, he's treating her like she's more than just a programmed AI. Her active self-sacrifice looks like it could be more than a programmed response.

This is the classic philosophical issue concerning AI: Could an AI system ever be conscious? Could an AI system ever have genuine emotions? Could an AI system ever have free will? These are great questions, and all we have to go on is the behavior—the output—but what we can never do is get inside and know what it's like to be an AI system. So that's a real philosophical puzzle. In discussing films with students and other people, a persistent misunderstanding of the original film is that the replicants are AI. But they're not; they're biological, they're organic—they bleed. There are no electronics in the replicants; they're not AI, they're synthetic humans.

Let's talk about one of the other philosophical questions you had, which is "What can I know?" As viewers follow K on his journey in this movie, he has an atypical Hollywood arc: The character we believe to be our hero turns out to be a very ordinary replicant. Why do you think that the filmmakers included that in the film?

I would say the best philosophical films—whether they're intending to be philosophical or not—are ones that induce in the viewer the philosophical state that at least some of the characters in the film are struggling with. K is struggling to figure out not just who he is, but what he is—am I a replicant or maybe a hybrid of a human and a replicant? Am I the first born, the only born of a replicant? Or was I produced like every other replicant? He doesn't know, and neither do we, for much of the film.

What do you make of this new film's philosophy of death, especially since in the final scene K seems to accept his impending death?

One angle on death in the film would be this: If there's one moment in the new film where it feels a little heavy-handed, where it feels a little bit like the director is telling rather than showing, it's when the leader of the replicants, Freysa, says to K something like, "Fighting for a cause greater than yourself is the most human thing you can do." That's very significant because it's as if K internalizes that message, and then his cause is to save Deckard and unite Deckard with his daughter. When he's done that, he's lying on his back on the steps leading up to the memory-maker building and snowflakes are falling on his face. Now, there's a question: Is he dying at that point? Well, it looks like it, and there are clues in the film to suggest that he is. When Roy Batty dies in the original film, there's rain falling on his face. When K dies, there's snow falling on his face—it's water in both cases. Second, the music playing when K is on the steps is the same music that was playing when Roy Batty drops his head and dies in the original film. In fact, that song, by Vangelis, is called "Time to Die." And when you put that with that statement about fighting for a cause being the most human thing you can do, it looks like K has achieved a kind of status, let's say, equal to a human. Biologically he's still not a human, but it's like, who cares? It doesn't matter; he's done something human.

What are the most important philosophical questions you'd ask audiences to ponder after this film?

I wouldn't want audiences to walk out of the film and ask, "How is this film a commentary on the times we live in?" Because that seems to me just a little too trite, as if the filmmaker is trying to comment on whatever happens to be happening at the moment in the world. That seems like not a very fruitful way to think about the film. A second thing that I would hope audiences wouldn't leave the theater thinking about is just the special effects—the holograms, the CGI, how entertaining it was or wasn't. What I would hope is that people would walk out and think to themselves, "This film entertained me but also challenged me." And: "What questions does this film raise in my mind? What does it mean to be human? Who am I? What do I know?" I would hope audiences might realize that a film like this can be the catalyst or the prompt for self-examination. If you compare this film to most other films, it's just way more thought-provoking and way deeper, and a way better catalyst for self-examination than the vast majority of films that are produced.

This interview has been edited for length and clarity.