Seven years after the original game, The Talos Principle 2 has finally launched to the masses. The sequel to philosophical puzzler The Talos Principle has earned high marks from critics. But the questions about humanity The Talos Principle 2 explores are far different, and in a way more grown up, than those in the original game.

The growing awareness of artificial intelligence in daily conversation is not what's changed the focus of the games, though. If anything, it only adds depth to the ideas Croteam explores in the sophomore entry in the series because, according to the developers, anxieties about artificial intelligence reflect deeper concerns humanity has with itself.

Husband-and-wife writing team Jonas and Verena Kyratzes sat down with Game ZXC for an interview shortly after the game’s launch, exploring the philosophy, influences, cats, and humanity found in The Talos Principle 2.

The following interview has been edited for brevity and clarity.

Q: I have to ask starting off–can a robot be a human?

Jonas: Yes, I would say in a way, I guess, depending of course on how you define the way a human can be human. In the sense of being part of human civilization, I would say yes. The game says yes.

Verena: I agree. That's the answer. You're right. That is correct. No, I think not right now, but I think, given technological progress, yes, sure, at some point in the future. And I would be very happy to see it.

Q: I noticed that Talos Principle 2 is less preoccupied with this question than the first game was. What went into that decision?

Jonas: Well, basically, we kind of dealt with it in the first game so extensively that it seemed like it would just be repeating the same beats in the sequel. And we were very, very concerned with taking a step forward with the narrative and kind of saying, “what's next?” Otherwise, it would have felt like we're just making the same game again. And we really didn't want to do that.

Verena: The thing that I always like to say is that the first game dealt with these robot humans in their infancy, when their civilization was just newly born, essentially. And now we're looking at their adulthood. It seemed a logical progression.

Q: “Can a robot be a human” really encapsulated the first game. Does Talos Principle 2 have a similar central question?

Jonas: I think it's asking more than one question, because it kind of has to. Because I think once you are born, once you have established yourself as who you are, you are confronted with all the other questions. Where do we fit in? Where do humans fit into the bigger picture? And how do all these parts of the bigger picture function? What happens now? What is our relationship to this machine–biological or electrical–that we are in the universe? What is this place that we're in? And how do we live together inside of it?

[Image: Talos Principle 2 puzzle]

Q: A lot of our understanding about AI has changed since the first game came out in 2014. How has that affected the development and the story of Talos Principle 2?

Jonas: To be honest, it hasn't.

For one thing, we wrote it before most of this ChatGPT-type stuff became so mainstream. I mean, we were aware of these things, but they hadn't really entered the mainstream in this way. And also, while there is debate about this, which is legitimate and complex, I certainly don't think that current large language models are in any way intelligent. The game is very much about general artificial intelligence, what's called GAI or true AI. So I think it just didn't really influence our thinking about that very much.

Verena: I think the main body of the game was written like, two, two and a half years ago? And this current debate about AI was what, like a year, year and a half old, maybe? And so we thought at some point in development that we could make some adjustments, but we didn't know what exactly, because what we're talking about isn't really what they're all talking about right now. Like, it's adjacent, but we're talking about genuine intelligence.

Jonas: If there's a parallel to anything, I think it's with the iterations and how that works in the simulation in the first game, but not so much with Talos 2, in which there are already established intelligent beings, embodied intelligent beings, and now they kind of have to figure out what to do with themselves. It's much less about artificial intelligence, really; it's much more about humanity itself, simply through the medium of artificial intelligence.

Q: The sequel definitely continues the religious allegory of the original, even turning Athena into a messianic figure. What's the game saying with these connections?

Jonas: I don't want to say that the game has said something; I think the game is exploring something. I don't think there's a message that it should be boiled down to. The second game is very much concerned with faith and forms of faith, both the negative or dangerous forms, when we mythologize things that maybe shouldn't be mythologized, but also with maintaining faith and needing faith–needing to believe in something to be able to function in the world, to be able to keep fighting when things are bad. What form can or should that faith take? And characters have very different perspectives, some even arguing that the thing you have faith in doesn't need to be real, necessarily–although I disagree; they don't see it that way.

It's a game that overall is concerned very, very much with faith, and it does touch on religion, both obviously in its imagery, but also in our relationship with the universe itself, with the cosmos, with beauty, and with a sense of the sublime. These robots are capable of experiencing that. They're capable of seeing beauty and seeking some kind of greater meaning or experience or spirituality or something in the universe. They're looking for it and kind of trying to make sense of all that, just like we are.

So I think it continues those things in the same way, where it has a great deal of respect for and interest in religion. And I think the complexity of the first game has some of that as well: it's a game that speaks very much in religious terms, but it's a very humanist and materialist game. And it's kind of where those two meet that it becomes interesting.

[Image: Talos Principle 2 dome]

Q: What do you each personally think of the Goal? Is it a good way to live?

Jonas: The Goal is the belief that there should be no more than a certain number of people, to preserve some sense of harmonious balance. I obviously don't want to prejudice players; the game deliberately allows you to take very different positions, and we present characters whose support for these ideas is legitimate, like Alcatraz, who is a very sympathetic character and who believes in them.

Now, personally, I don't support the Goal. I think ideas like that are catastrophic. I find all forms of eco-austerity to be deeply, deeply anti-human and threatening to our future and our ability to exist as a species. Also, thoughts like the Goal can only lead to the most reactionary outcomes. That doesn't mean that that's the message of the game. It's a perspective that exists in the game.

Verena: I agree with Jonas. We tried very hard to write the game in a way that, if you don't agree with the writers, you can still play the game and the game won't force its opinion down your throat. But I also think that the Goal, especially in our society of robots that essentially live forever, is a very bad idea. For us in the real world, too.

Q: In addition to continuing the discussion of what makes a person human, Talos Principle 2 has started to bring in more political philosophy. Was that an intentional decision on your part, to expand the types of philosophical arguments the game was making?

Jonas: Yeah, by necessity. The first game is about an individual; it's about who am I? What makes me me? But individuals don't exist outside of social context. We exist in society; we are political animals. That's what we want to explore: the individual in context, both of the city and also of the cosmos itself. What's our relationship with the universe? Is the universe benign? But also, what's our relationship with other people? The state? How do we organize ourselves in ways that benefit human flourishing and the individual, or not?

Verena: I think what resonates with a lot of players is that, nowadays, a lot of people feel a little bit lost. Like, they don't entirely know what to do with themselves. And they feel like they want to have a larger social context. They haven't necessarily explicitly formed that thought, but I think that's what they yearn for. And they also feel like they're left alone by the system. It resonates very well with players.

And it's also something that we wanted to explore; this idea of the responsibility of the society towards the individual, but also the responsibility of the individual towards society, and explore what it means to maybe be a cohesive whole.

Q: You just touched on something that I noticed while playing that I thought was interesting. When 1K is asked about these grand societal problems, there's usually an option to choose a response that essentially says, “Hey, I've existed for about two hours now, why is this my decision to make?”

Jonas: Yeah, we're all rushed into this big pre-existing thing. And the choices we make sometimes have a huge impact on ourselves, sometimes no impact at all. It's a confusing experience; you're thrust into the middle of this enormous story, and you have no idea what's going on, not just in the game but in everyday life. You're born into a world where people made a bunch of decisions years ago that, for example, removed a lot of freedoms you would have had 20 years ago, freedoms we threw out the window because of terrorism.

And you're like, “I wasn't born when this happened, when we made all these choices for me, so why can't I get on an airplane without taking off my shoes?” You know, we're born into a world that already has all this backstory, and we didn't choose any of that. And it's very hard to see how the hell we can change any of it. But at the same time, you feel like you kind of have to take a position on a lot of it because it seems kind of ethically important. And I think it's a desperately confusing situation for most people.

Verena: So when you're the writer, especially when you're the writer of a sequel, it's very important to always take a step back and go “Wait a minute, there might be players who have never played The Talos Principle, or who have maybe not read all the texts.” So part of it is just realistically saying I am a person, I'm 1K, or I'm a player who has literally interacted with this entire world for one hour now, and maybe they don't have an opinion yet.

Jonas: I mean, the phrase we always had was “1K hasn't played The Talos Principle either.” Yeah.

[Image: Talos Principle 2 forest]

Q: What is the role of puzzles in our culture, in the real world?

Jonas: That's something that the first game discussed a bit. The scientist who created all this, Alexandra Drennen, talks about the fact that human beings just love games. Not just puzzles, we just like solving stuff. This is something that's, really, really, really in our nature. You give humans blocks, they'll build something. There's something about us, in our nature, really, that I think makes us create problems for ourselves to solve because we enjoy it.

Video games are a perfect example. Why are you paying money to have a bunch of problems on your computer that you can solve? Well, because we kind of like doing that. We like being able to solve problems. And I think if you look further, one of the problems with our current world is that this impulse that we have is not getting a lot of use. There's not a lot for people to do. The jobs that we have that are available to the majority of people aren't satisfying in that way.

There's a discussion later in the game about someone getting addicted to a farming game and why he's addicted. And one of the suggested ways of looking at it is that you just want to do things that feel constructive.

We've kind of become a society that builds very little and repairs very little and explores very little and mostly can do weird numbers stuff and sit around. The people who do physical things often aren't appreciated for that. And that's a very weird way to construct a society when you're a species that just loves building and fixing and changing and all of these things.

Verena: I think Jonas has kind of accidentally touched upon something there, which is that if you look at human civilization, at the history of our civilization, whenever we started doing things that weren't, strictly speaking, necessary for survival, like playing games or talking about philosophy, we had a certain amount of prosperity. The cradle of philosophy was ancient Greece, around the Mediterranean, which was an incredibly rich place to live, and so people didn't have to worry about getting eaten by a saber-toothed tiger. Okay, I'm slightly mixing up the timelines here.

But they could talk about shadows in a cave and all these things. I have not studied philosophy, by the way, so I should probably stop talking at this point. But I feel like we're kind of losing that again. Nowadays, in society, people have to hold down three, four jobs to make a living and pay the rent. And there's a certain disregard for this joy of just doing something for the sake of doing it. Human beings don't like to be idle.

If I had like $10 million, I would never work again in my life. But I would do something with my time, and so I think this impulse to just do things for the joy of it is something incredibly human that very few species beyond ours have.

Q: That actually brings me to something. I loved everything about Milton's Rest. I love that Milton the cat is remembered more than Milton the library assistant. And pets are things that humans have for the joy of it. What do cats say about the human experience?

Jonas: Oh, that's a– that's a difficult one.

Verena: That we're gluttons for punishment.

Jonas: Why would we have these animals? We say that as people who have far too many of them for our own good. This is genuinely interesting and complicated, because I think I can take this in a lot of different directions.

So it's kind of amazing what we do for cats. When you think about it, what we provide them with, I sometimes think about how long our cats live, right? They can live 20 years, if they're lucky. They never have to work. They never have to even hunt. We take care of them, we provide them with quite an amazing experience. And that's a choice that we make, we say, you're just going to have this amazing life. Because I would feel awful if you had a difficult life, because you don't really understand the world. You don't understand why you don't feel well, so I'm going to take you to the doctor and make you better because I understand and you don't.

There's something there about the responsibility that we have towards other life forms simply by being intelligent. Later on in the game, the character who's there at Milton's Rest kind of talks about our responsibility to lessen suffering in other species. So I think there's something in there about our responsibility to make the world better for other species that do not have our gifts of being able to, for example, figure out how a cat disease works and then create a vaccine for it. Or just even figure out a way of cutting open a cat and removing a tumor and then, you know, the cat lives, which is not a technology cats are going to be discovering any time soon.

I think something about that is very fascinating. But in addition, I think there's something very magical and wonderful about simply our ability to reach out beyond our own species and have this relationship with another kind of being that's aware, to a degree, and to form this kind of bond even between different species. There's something incredibly powerful about that.

Verena: Yeah, I think that is an interesting question. Because if you look at humans and dogs–I have a dog, I'm a human, and the dog looks at me, and he says, “You're God, I love you. I would throw myself in front of a bus for you.” And I have a cat, and the cat says, “feed me.” The relationship is a lot less altruistic, really, on the cat's part.

I deeply believe that my cats love me. And I think they think Jonas is okay. But I think we have a certain need for companionship. I think if you make a cat like you, then you feel like you've really achieved something. So that's something to work towards.

Jonas: I also want to address something we always say: that humans domesticated cats and dogs simply because they're useful in one way or another. I'm not really convinced by that. Certainly they were useful, but I'm not convinced that someone was like, “Oh, I found a wolf puppy, and I'm going to teach it to herd sheep.” No, they found a wolf pup, and they were like, “damn, its mother died, I feel sorry for it. Okay, we're gonna take it to the cave, and I'm gonna feed it from my food, okay, nobody get mad at me.” And other people were like, “No, it's kind of cute, we should have more of them.” And then maybe they taught them how to herd sheep.

You see that even in really ancient societies, for example, the love of dogs was a very profound thing. Like, the Odyssey has a dog. It has a tragic story about a dog and the dog's love for his master. So that's something that even 3,000 years ago was recognized as something important in societies that were much more brutal than the society we live in now.

[Image: Talos Principle 2 robot and mountain]

Q: Going back to puzzles, do Talos Principle puzzles in any way reflect the philosophical arguments the game is presenting?

Jonas: Not strongly. I don't want to make claims that I can't support with evidence. I do think that, with the approach the game has, there is something there, in that the puzzles are kind of simulated; there is a logic to them, and you can find a solution that's not necessarily the solution the developers intended, and it's going to work. It's not something that's hard-coded to only function in one way. Unlike, say, the old adventure game where you had to use the ice cream on the dog to get the key to open the cobra or something.

In a game like Talos, there's a real logic there, and you can use your intelligence in these ways. But at the same time, the puzzles have a logic inside this world, and there is a reason they're there, and everything just kind of ties together. I would not go so far as to say, “this puzzle is the Socratic method,” though.

Q: I noticed a lot of the names of these mechanical humans, as I've taken to calling them, reflect some of the great creative minds and thinkers of human history. Who were some of the most influential figures in history that you drew from?

Jonas: I'm thinking of the robot called Empanada.

Q: Well, there's also Purple. Incidentally, I have to say, I loved Purple.

Verena: Oh, that makes me so happy.

Jonas: But I mean, I don't think we drew on specific historical figures in creating the characters, but Byron is named after the travel writer and anti-fascist Robert Byron, who's a fascinating person that I've loved for a long time. He died very young, sadly, in World War II. And so there's an expression of him there, in the sense that he was someone who was extremely opinionated, and maybe sometimes didn't know when to shut up.

Byron has a bit of that. He's not purely Robert Byron, but he has this attitude of believing in things very strongly and then sometimes being a bit, you know, a bit sarcastic, and saying things in contexts where maybe you should be a bit more diplomatic.

Verena: There's also some Carl Sagan in him, right?

Jonas: So there's definitely some in him. I think if Carl Sagan was alive now, he would feel the frustration that Byron feels with where we are as a society. Obviously, Alexandra Drennen is from the first game, but she's very important–her legacy is very important in the second game. She's very strongly influenced by Carl Sagan as well.

There are a lot of writers and thinkers who have influenced the game, but I can't necessarily tie them to individual characters. Except maybe Damien, since we named him after a particular writer, a writer on ecology issues that I find very fascinating called Leigh Phillips, whose full name is Damien Leigh Phillips. I thought, okay, I'll steal his name.

Q: What about the thinkers in history who have influenced the game's world and narrative?

Jonas: Carl Sagan, certainly, in terms of this moving humanism that he had. To me, it seems almost prophetic. If you asked me if God sent any prophets in the last couple of centuries… In terms of poetic influences, there's a bit of William Blake, who had an influence especially in the first game that kind of continues into the second game, with The Marriage of Heaven and Hell. I'm sure I'm forgetting some.

I mean, obviously the big sci-fi writers–Iain Banks, for example, was an influence because he imagined a very positive future in his Culture books, with artificial intelligences that have a sense of humor and personalities. There are other thinkers, of course, philosophical thinkers, that have an influence. For example, there's Karl Marx, who has an influence on the game through historical materialism and the question of how history evolves, whether everything is defined by the conditions that we find ourselves in, or whether we can push back against that. So there's a lot of discussion in that space, on people like Marx and Hegel, that has an influence on the game.

Verena: I was like, classic Star Trek? What Star Trek used to be, as opposed to what it is nowadays. Interstellar is a movie that we both completely adore, just for the positivity about humanity that it brings with it. And another, less well-known movie called Sunshine.

[Image: Talos Principle 2 robots]

Q: Given the general outlook presented in the Talos Principle games, it would be fair to classify you as optimists about GAI. As another huge proponent of GAI, I have to ask: what makes you so optimistic?

Verena: My dad, who hopefully is not gonna read this–Hi, dad–he's a software developer. Nothing incredibly interesting, but he understands computers and how they work. A couple of years ago, a long time before we started writing The Talos Principle, I talked to him about proper artificial intelligences. He said, “This is a terrible idea; the moment this happens, we will all die.” He basically imagined the Skynet-esque scenario. And I was like, “But why?” And he said, because when we do this, when we develop this, it will be developed by the military, so its only objective will be to kill. And I'm like, “Okay, I take your point, but what if it isn't?” And he couldn't imagine that.

Whatever this will be, this proper sentience, it will depend incredibly on who creates it. And I think, especially in the scientific community, there are a lot of people who share our optimism for the world and for what humanity can achieve, given enough time and provided we don't wipe ourselves out. And so I really hope that the right people are going to do it, and that they're going to share that faith in humanity.

Jonas: I want to put it slightly differently, which is that I think all discussions about artificial intelligence are just discussions about humanity. We project all our worst fears about ourselves onto these artificial beings. So when we say, “Oh, but it's going to be a machine that's optimized to kill,” well, that's the military-industrial complex; we already have that. In every other way, we're always saying, “Oh, it's going to do this, it's going to be that,” well, that's what we're already doing. What you're describing is that we have some very problematic incentives in our civilization. And we can see that things are headed in a certain way because profit accumulates in particular ways.

But that has nothing to do with artificial intelligence itself. It has everything to do with where we find ourselves and how we find ourselves essentially unable to have empathy with others, because we're so atomized and because everything is about competition and maximizing certain things. That's just how we are. There's a lot of people like that already. You don't need them to be artificial people for that to be a problem. That's just the problem we already have. So I think we project these problems into the form of technology so that we can pretend that it's a technological problem, when in fact, it's a sociopolitical problem.

Q: I was surprised to see social media represented in your game, because it's widely seen as a driving negative force in our society. So why did you choose to put social media in the future?

Jonas: I mean, it's a technology that kind of naturally develops, I feel. And you're right, certainly, that in our society, with the incentives it has and with the incentives of the companies that make it, it's a terrible thing. It's horrible, at least now.

The internet itself wasn't always like this. But to the degree that it is, it is certainly not great. But I don't think we can avoid social media. It does amplify certain things, and you see that in Talos Principle 2. To me, they have some of the same tendencies we have. Maybe not to the same degree, but they do have them. There are social tendencies that are hard to get rid of, but maybe if you fix some of the other problems that we have, we're not going to project all of them onto social media.

Verena: It did seem a bit like we were cheating if we didn't have it, because I feel especially a society of digital beings would have it. If you had Facebook in your head, you would be using it–don't listen, Mark Zuckerberg, please–you would use it all the time.

I think we had a lot of fun writing it. And it gave us a chance to look at some aspects of society that we didn't have the space or the time to explore elsewhere in the world of the game. And plus, I like to think that, well, while I said that this is about robot humanity growing up, they're, like, slightly more grown up than we are right now. So maybe they're a little bit more responsible about using media.

Jonas: Breakdowns and stuff do happen as well. We have moderators coming in saying “Enough of this.”

Verena: No more “frogs are people!”

[Image: Talos Principle 2 statue]

Q: Do you think that humans can treat general artificial intelligence with the kind of respect and empathy that would be necessary for coexistence?

Jonas: You know, to go back to my point that AI is a mirror: can humans treat other humans with the empathy that's required for coexistence? I think we can.

I think we're relatively friendly under normal conditions. And we're mostly pitted against each other by material conditions and sociopolitical forces that have to do with resources and other things. So I'm personally relatively optimistic about people being capable of it. In theory, I'm not necessarily optimistic about how easy it is to get there, but I think accepting that a person is a person isn't that hard ultimately. We have seen, for example, a lot of weird prejudices kind of fall away over the last few decades, which is good.

I think, in general, we, as humans, can be very inclusive. Historically speaking, we're very good at building communities that have all kinds of different groups and at reconfiguring ourselves into some other bigger group. People always focus on the infighting and bigotry and these things, but if you look at human history, literally none of us have the origin that our nation claims, right? Everything is hundreds of groups that have come together over centuries and millennia.

I think we are capable of doing that with each other. And we're capable of doing it with other species, with general artificial intelligence. We’re capable of it with anything ultimately. But the same forces that drive us to fight with each other will also drive us to blame artificial intelligence, just like we blame all kinds of individual other groups for whatever bigger economic problems there are.

Verena: I agree with Jonas on that one. I mean, I think I'm a little less hopeful than him. But in general, yes, because I think as humanity we have this infinite potential for understanding things and for betterment.

But also, I would like to say that I think if a GAI came along next year, and humanity was confronted with it, everybody would run screaming for the hills, because stories have told us for the last 20 years that the second a general AI makes an appearance, the first thing it's gonna do is murder us all. I think a lot of groundwork is going to have to be done before people's first thought isn't, "How long until it stabs me?"

Q: As another person who thinks that AI would not think to stab you unless you told it to, I want to extend my thanks for being part of the counter-narrative to that.

Jonas: Talos in its entirety is kind of an attempt to push back against the way the entire culture is going. And just kind of to say, on every level, here's something a bit more contemplative, something a bit more optimistic. Here are some perspectives that you don't usually hear on our relationship with nature, and whether humans are more important or less important, and all of these things. It's very nice that people responded so well to it. There's a conscious element of saying, we're very disturbed by cultural trends overall, and hopefully we can, even in our teeny, tiny, tiny way, push back.

Q: Given the breadth of the things we've discussed today, is there anything else you want to share with our readers?

Jonas: It's very wide, it's this thing, right? I mean, my personal thing is always just to say, please remember that you're a human being, and it's absolutely incredible to have the ability to understand the world and to do things and to change things. At least ask yourself if it's healthy that we're constantly told that we're a virus, and we're bad, and we're inherently awful, and nature's great, and we're terrible, and we're unnatural. Ask whether it's normal that so many of us have this weird self-hate, this hatred of intelligence and of the ability to transform the world, while we glorify random Darwinian chaos.

Just ask yourself, at least: if you saw this in a science-fictional context, would you say it's normal that they all hate themselves, and everyone is like “humans are bad,” and to them, this is a profound statement? Isn't that a little sick? But we all see it that way, or so many of us see it that way. I think it's a bit unhealthy. That's something I desperately wanted to put into people's hands. So just think about that a little bit. Is that really so? Is that such a great perspective to take?

Verena: And I want to go a bit deeper into that thing I just said about AI and the self-reinforcing myth. So there was this movie once called Tomorrowland. And it was based on a theme park ride at Disney World. And I was like, this can't possibly be good. And it's not necessarily the best movie in the world, but the entire premise of the movie is that we've told ourselves the future is gonna be bleak for so long that everybody has stopped believing in it, and it becomes this slightly supernatural self-reinforcing prophecy. But that's actually true. Like, this movie was incredibly mind-opening for me in that regard, because it's true.

We've been telling ourselves as a species for the last 60 years, or longer even than that, no matter what we do, it'll always end in tears. Cure for cancer? Yeah, okay, but probably, I don't know, it makes people's arms fall off in 20 years and we didn't know. No, we don't have arms anymore. Oh my god, better not cure cancer. That's every story of the future that you see almost everywhere. Even Star Trek has turned to that–suddenly we have drug addicts and whatnot in Star Trek, like the one TV show that was supposed to be about a brighter future. So it was very important for us to do something to push back against that a little bit. Against this ongoing, relentless narrative of no matter what we try, it's not going to work, just give up, just sit in a corner and do your job until you die. And that will probably be for the best. Because if you save that puppy, how do you know that the puppy isn't going to kill somebody in 10 years?

Jonas: Wasn't it I Am Legend, the Will Smith version, that had, I think, a vaccine for cancer or something? It was a vaccine that turned everyone into a zombie vampire. And I did think, when COVID happened and people were like, oh, vaccines will kill us all, I was like, “Yeah, that's what all of the science fiction stories have been saying for the last 70 years or something.” Every time we invent something, it's gonna go horribly wrong, and we'll have vampires or something.

So we have developed such a deep phobia of everything. Parts of it are legitimate, right; these fears always reflect something. Certainly, the pharmaceutical industry is terrifying and driven by a very inhuman maximization of profits over its actual job. But nevertheless, you see that we are so afraid of everything now that even technologies that could help us–in the case of things that I particularly care about, nuclear power or genetic modification, things that could have huge impacts on climate change and also, obviously, on our own health–we're terrified of them.

We can't actually imagine any outcome that's not negative. We look at anything, and we think that's going to explode, that's going to mutate us all, that's going to do this. We just can't imagine "that could actually work." A lot of things that we have that allow us to live with some kind of quality of life, they just work. And they've worked for centuries or decades and have made our lives a lot better. We don’t die of certain horrible diseases anymore.

It’s a terrible kind of failure of imagination that we've all imposed on ourselves.

[END]