Agnieszka Kurant: I’d like to start with a small provocation. Your short stories are essentially thought experiments and philosophical questions about the consequences of certain ideas. And what I’ve been doing in my own work are condensations of thought experiments into visual forms. One of the things that brought me to think about how a story can be condensed or compressed into a sculpture are memory boards—“lukasa”—mnemonic devices used by the Mbudye Society of the Luba peoples in the Democratic Republic of the Congo to preserve collective memory: complex sculptural objects into which vast amounts of information about the history, territory, and culture of this society were encoded.
I was wondering if it ever crossed your mind to condense your short stories into something even more compressed, like a single neologism that could go into wide circulation, or perhaps even into an object form?
Ted Chiang: Without intending any insult to mnemonic devices, I like the written word too much to opt for anything else. Mnemonic devices require a chain of contact between the originator and any new reader. If maintaining continuity is the highest priority, then mnemonic devices are a good fit. By contrast, the written word makes it possible to convey information precisely to new readers without needing old readers to serve as intermediaries. That is a fundamental property of prose as a medium, and consuming prose made me who I am. The written word is a part of me, and if there is anything I want to say, the written word is how I will say it.
AK: You often talk about the fact that creating an AI with higher intelligence and consciousness would likely result in this being suffering. In my conversations with Antonio Damasio—who traces the genesis of intelligence, feelings, and emotions to forms of cognition in very simple forms of life, such as bacteria—he said that for machines to develop something analogous to human intelligence, we would have to program equivalents of pain and death into computers because the most basic building blocks of intelligence in living systems are the avoidance of death and pain. Would you agree? Is some form of suffering the condition for the emergence of an organic type of intelligence?
TC: To clarify: I suspect actual intelligence requires consciousness; I wouldn’t use the word “higher.” I think consciousness necessarily entails the ability to perceive negative stimuli; I don’t think it’s meaningful to say that something can only experience different degrees of positive stimuli. Any living thing will move toward certain stimuli and away from others; if something prevents it from moving away from the things it’s trying to avoid, it experiences more negative stimuli. It’s reasonable to call that suffering.
Whether the possibility of death is required is an interesting question. Some have argued that intelligence can only exist in systems that dissipate energy and generate entropy. If that’s the case, any intelligent entity could be said to be gradually dying in the same way that all living organisms are. A digital intelligent entity could be kept alive indefinitely much more easily than a biological organism, but it would not be anything like existing computer programs, which are always in a state of equilibrium.
AK: According to Yuk Hui, technology is enabled and constrained by particular cosmologies that exist in different cultures. We can see alternative paths of technological development in China or in Mesoamerica, where technological progress took different routes. There is thus no single technology but rather multiple “cosmotechnics.” Would you agree with Hui that plural ontologies could enable different technological futures? Can we envisage the bifurcation of technological futures by conceiving different cosmotechnics?
TC: Technology can definitely take many different routes in the future, but I’m not sure the histories of China or Mesoamerica provide the best examples; there have been plenty of empires in both. The desire for domination can be found everywhere, and if the goal is to dominate, that will lead toward certain technological paths no matter what else you believe.
Likewise, there are plenty of people in the West who have entirely different goals than those of Silicon Valley venture capitalists, and I don’t think they necessarily follow different cosmotechnics. This is not to say that I have anything against different cosmotechnics; the more people who oppose Big Tech, the better.
AK: Does the current inability to imagine the future as better than the present affect your writing? Or do you see it as similar to the earlier waves of “the end of the future”?
TC: The ability to imagine the future in any sense is in itself a recent phenomenon. For most of human history, the future looked like the present, which looked like the past; if you tried to imagine your grandchildren’s lives, you imagined something essentially identical to your grandparents’ lives. Moreover, there was a long period in European history when it was taken for granted that the height of civilization lay in the ancient past, and that the world was going to end when Christ returned. The very idea of progress is something that only gained popularity with the Industrial Revolution.
There is the well-known sentiment, popularized by Fredric Jameson, that it is easier to imagine the end of the world than the end of capitalism. But we should also remember what Ursula Le Guin said about capitalism: “Its power seems inescapable. So did the divine right of kings.” I think that for much of history, it was easier to imagine the end of the world than the end of the divine right of kings. And yet here we are, living in a world without kings.
AK: What did you make of the actors and writers strike in Hollywood and the use of artificial intelligence in the film industry? The ability of AI to generate films with actors on whose images and voices it was trained seems to change the future of cinema. Perhaps living actors will become superfluous. The actors’ loss of control over their images is a very serious problem. Are we in the early stages of a new filmmaking revolution? How will the use of AI-generated images of living or dead actors to make films change cinema and storytelling?
TC: Both the Writers Guild of America and the Screen Actors Guild achieved significant restrictions on the use of generative AI. The battle is far from over, but I think it’s incorrect to say that actors have lost control over their images; SAG-AFTRA ensured that actors must grant permission before their likeness is used for a digital performance, and for dead actors, their estates must grant permission. If studio executives want to create a fully digital character, not based on the appearance or voice of a human actor, they can, but it remains to be seen whether audiences will accept such characters as substitutes for human actors.
I’m also not sure whether studios will want to create fully digital characters, because doing so might reduce their competitive advantage over the long term. Eventually anyone might be able to generate movies populated with digital characters on their home computers. That could be great for independent filmmakers, but in such a world, why would anyone pay to watch a studio’s movie if its stars are also digital characters? The studios’ best strategy might be to cultivate exclusive relationships with real actors.
AK: What is the future of labor? Do you think there is a chance for a future revolution of the “ghost workers” or consumers of digital technologies fighting for the redistribution of the capital of Amazon, Meta, and Google? Or has capitalism found a perfect worker, completely alienated, in front of their computer screen and unable to unionize?
TC: Capitalism is certainly looking for the perfect worker in the form of AI, but I don’t believe AI is likely to provide it; capitalism will require human workers for the foreseeable future, and I think it’s inevitable that those workers will eventually fight for their rights and dignity. Labor unions aren’t as strong as they once were, but they have had some recent victories (including the victories against movie studios mentioned above). Don’t dismiss the possibility that workers in the tech industry will unionize.
AK: Vast amounts of energy are involved in computation today, and the future of AI depends on the future of energy. In 2006, Masdar City in Abu Dhabi tried to create a new energy currency named “ergos,” based on units of energy expenditure and consumption. Inhabitants were meant to be issued a balance of energy credits. What do you think will be the mineral or currency of the future after lithium, which is used in batteries? Will energy become a currency?
TC: The idea of an energy-based currency isn’t new; back in the 1930s it was proposed by Howard Scott, founder of the Technocracy movement. One problem with it is that not all energy is the same in the context of the climate emergency. In his novel The Ministry for the Future, Kim Stanley Robinson proposed the idea of a monetary system based on carbon; such a system would make it expensive to release carbon into the atmosphere and profitable to keep carbon in the ground. I also remember that in a novel from many years ago, The Fortunate Fall by Raphael Carter, there was a system where you paid for goods using two currencies: one for the machine labor that went into making them, and one for the human labor, which you earned with your own labor.
AK: It is disquieting that the output of large language models (LLMs) now makes up more and more of the data available for their training. Exposed to their own output, LLMs are observed to rapidly decohere, and their outputs become homogenized. Do you think this is going to lead to an even deeper homogenization of culture?
TC: I think the problem of LLMs degrading when trained on their own outputs is so serious that, unless a breakthrough is made, LLMs will be a technological dead end. That in itself is fine; good riddance to them. The more serious issue is that they may take the internet with them. The web is becoming increasingly polluted with the effluent of LLMs, and it might be rendered permanently unusable as a source of information. The web might become filled with noise even without LLMs, but LLMs are definitely accelerating the process. As bad as this is, I don’t think it would homogenize culture; in fact, it might do the opposite.