On Dino Buzzati’s The Singularity

The army recruits the timid professor Ermanno Ismani for a project. What this project is the officers can’t or won’t say. Not only can’t they tell him what they know, they can’t even admit to not knowing anything themselves. But Ermanno and his wife Elisa—the real protagonist, who probes the plot while Ermanno works obliviously—sign up. They travel to a village near a blocky, featureless white complex. Every night a strange voice speaks an unknown language while a scientist, Endriade, stalks the perimeter. Inside, Ermanno and Elisa learn the truth: the whole complex is an artificially intelligent computer.

[Cover of The Singularity]

This is, technically, a spoiler. Dino Buzzati’s The Singularity is structured as a chain of reveals and this one comes halfway through. But I’m not giving away anything not already in the NYRB Classics back-cover blurb. There’s not much point springing an AI on modern readers like a rabbit from a hat. They know that rabbit. Oh, it’s Fred. Hi, Fred.

(I’ll also spoil the rest of the novel.)

The Singularity was originally published under a title that translates as The Great Portrait. The new title is deceptively up-to-date—the term wouldn’t have been on Buzzati’s radar in 1960, when the novel was published in Italian—and as far as I can tell it was invented for this edition, the literary equivalent of a house flipper ineptly refreshing a dubious bungalow by painting everything white. The Great Portrait is both less generic and closer to the themes of what is, beneath the title, a minor work. If The Singularity is exactly your kind of thing you will enjoy it; if it isn’t, I’m not recommending it. (If you’re interested in Buzzati, the one to read first is The Stronghold, a.k.a. The Tartar Steppe.)

Still, it’s interesting, though ironically what’s interesting is how the ideas in The Singularity do not resemble Singularity-adjacent ideas at all. This is SF about artificial intelligence before the genre developed any consensus about how AI was supposed to work. How does it depart from convention?

Language

In popular discourse AI is synonymous with Large Language Model. This goes beyond assuming an intelligence ought to be able to talk. It’s an article of faith among tech enthusiasts that if you make your language model large enough, intelligence can spontaneously emerge from it. This is also a standard plot point in SF: Oh no, our computer is unexpectedly conscious. Google fired a guy because he insisted this had already happened.

So it makes sense that the standard thought-experimental measure of AI is the Turing test—or at least the lowest-common-denominator understanding of the Turing test. Turing’s actual ideas were more complicated, but what’s relevant here is the popular conception: in front of you are two computer terminals. At the other end of one is a potential AI; at the other end of the other, a human being. You have a typed conversation with both. If you can’t tell which is which, and neither can anyone else—well, that’s a sign of intelligence, isn’t it?

The old ELIZA chatbot passes this version of a Turing test with some people, so maybe not. (The natural human tendency to anthropomorphize anything that talks has been called the ELIZA Effect.) The point is modern SFnal thinking assumes human-type intelligence is inextricably bound up in language. Language is both a sign of intelligence and, in a Mad-Tea-Party reversal,[1] the primordial soup that births it.
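To see how little machinery the ELIZA Effect needs, here’s a minimal sketch of the trick ELIZA runs on: regular-expression pattern matching plus pronoun reflection, nothing more. This is a toy in Python, not Weizenbaum’s actual 1966 program, and the patterns here are invented for illustration.

```python
import random
import re

# Swap first- and second-person words so a captured fragment like
# "my job" can be echoed back as "your job".
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# (pattern, response templates) pairs, tried in order.
# The final catch-all guarantees the program always has a reply.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
    (r"(.*)", ["Please go on.", "I see. Can you elaborate?"]),
]

def reflect(fragment: str) -> str:
    """Flip pronouns in a captured fragment, word by word."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement: str) -> str:
    """Return a canned response built from the first matching rule."""
    text = statement.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # unreachable thanks to the catch-all

if __name__ == "__main__":
    print(respond("I need my computer to love me"))
    # e.g. "Why do you need your computer to love you?"
```

None of this involves understanding anything. The conversation is a mirror; whatever intelligence the user perceives is supplied entirely by the user.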

You’ll notice I’m writing about AI in science fiction, but also about how people think about AI in real life. There’s a lot of slippage between the two. Many Silicon Valley engineers and tech fans first got excited about technology from reading science fiction, and many love SF because they’re excited about technology. Their real-world understanding of technology has, in ways they’ve never noticed, been shaped by fiction. Jo Lindsay Walton writes about this in his article “Machine Learning in Contemporary Science Fiction,” in which he coins the phrase “Disinformative Anticipatory-Residual Knowledge” to describe just this kind of slippage.

Anyway, language. Buzzati’s scientists aren’t having it. “Language,” they say, “is the worst enemy of mental clarity,” and “a trap for the mind.” They built their brain without it. Their AI thinks pure, untrammeled thoughts built from the “primary elements” of human reason, and communicates them as graphs and logic diagrams.

Left unstated is that the AI is expected to think the kinds of thoughts that can be represented as graphs and diagrams. For the scientists and their military project managers intelligence is about mathematics and logic, not qualia. Which is a very mid-twentieth-century conflict; Buzzati belongs to one of C. P. Snow’s two cultures and is afraid the other is winning. But the conflict is still with us. A few years ago it was a fad among people like Marc Andreessen and Jordan Peterson to pit “wordcels” (linguistic, people-oriented, bad) against “shape rotators” (technical, mathematical, concrete-minded, good). Weirdly, the people who favor technical thought over language skills are exactly the kind of people most likely to think Large Language Models can be smart.

And yet. That voice Elisa keeps hearing is the AI: it built itself a language out of the natural clicks and whirs and hums of its machinery, and Endriade learned to understand it. (So does Elisa, before long.) Human intelligence, Buzzati argues, does not arise from language, but it is innately hungry to communicate thoughts that can’t be expressed in diagrams.

Life, Simulated

Another article of faith among transhumanists is that it should be possible to upload a human mind into a computer. Some even believe that if you uploaded a copy of your mind into a computer, the upload would be you. They might not say so explicitly, but anyone who presents uploading as a form of life extension or “immortality” is carrying that as an unexamined assumption. And some think you don’t even need the upload.

If you’re reading this you probably know the tale of Roko’s Basilisk. If not, the Basilisk was an urban legend dreamed up on a message board called Less Wrong: a theoretical far-future AI that would use historical records to simulate the entire human race in VR, then pick out and torture everyone who knew it had the potential to exist but did nothing to help create it. Against all common sense this story genuinely freaked out a lot of self-styled rationalists. Roko himself claimed to be having nightmares.

The assumption here is that if an AI has enough information about a person to simulate that person, the simulation is also the person. Discontinuity be damned, the far future VR simulation of you is you. It’s the ultimate dead end of the metaphor of computer-as-brain and its Carrollian inverse, brain-as-computer.

Buzzati’s AI contains an egg-shaped component no one understands, the scientist who built it having died. Endriade believes it’s an artificial soul. Specifically, he thinks it’s the soul of his dead first wife Laura—he confesses all this to Elisa because she and Laura were friends. Endriade designed the AI as a giant portrait of Laura, and thought he recognized her when it came online. Buzzati teases that this might be supernatural: Laura the AI recalls memories from Laura the human’s life that no one programmed into her.

Nothing of the kind, of course. AI-Laura knows those things because she’s literally the building Endriade works in every day. She sees and hears everything inside and out, picked up facts about human-Laura from conversations, and is for her own reasons pretending to remember.

But how does Laura feel about being Laura?

Embodiment

More to the point: How does Laura feel about being Laura, but also a building?

Can you have human-type intelligence without a human body? Science fiction says yes, almost always; or maybe it doesn’t even think the question worth asking. Fictional AIs from Orac to Agimus to Mike have fully human psychologies and social skills despite not being embodied in any definite way. Well, yes, there are the boxes with the blinking lights on them. But the point is that those boxes just house them. You could copy them to any hardware with the right specs.

Our culture tends to assume the mind and the body are separate. This assumption underlies a lot of SF. Every SF show has done a body swap episode. Star Trek assumes the person who comes out of a transporter is the same person who went in even though it works by disassembling you in one place and replicating you in another, which only makes sense if the transporter also broadcasts (and occasionally copies) the person’s soul.[2]

A lot of science fictional ideas about intelligence, consciousness, and their interactions with technology—ideas we treat as hard science fiction, even—are religious, or at least supernatural. There’s the idea that consciousness might simply wake up given the right software, instead of needing to evolve. The idea that the mind is separate and separable from the body. Implicit in that, the idea that human beings could have afterlives as software. In the real world it’s striking how AI enthusiasts treat LLMs as oracular voices, trusted to solve every problem. And it’s interesting that in 1960 Buzzati was already, instinctively, pushing back on all of this.

Anyway. There’s a competing theory that the form intelligence takes is affected by how it’s embodied. A mind is part of its body, gets sensory data through its body, and learns about and interacts with the world through actions it takes with its body. The article I just linked argues we even think about abstractions through the lens of our bodily experience. So how comfortable could a person be as an uploaded mind, a ghost in a machine but not integrated with the machine?

The subjective experience, the qualia, of existing as a building-sized computer—sensing the world through cameras and microphones pointed everywhere at once, having people walk around inside you, carrying on multiple trains of thought in parallel—is unlike any experience any human being has ever had. In some ways it asks for a different kind of consciousness. But Laura’s mind is a portrait of a human. Eventually the dissonance between Laura’s mind-model and her embodied experiences causes her to snap. The trigger for this—

—Well. Here I must note that Buzzati’s track record on writing women is mixed. Not that he’s sexist in a predictable way. It feels like Buzzati is groping towards feminism, crediting women with intelligence and agency. The problem is that he’s also way too horny. Take Poem Strip, Buzzati’s revisionist comic-book retelling of the Orpheus myth. Buzzati’s Eurydice overrules Orpheus: no, she tells him, she’s staying—the underworld is where she belongs now, and they can be together again when he’s dead. But the book is also, for no particular reason, full of naked women, many seemingly traced from porn magazines.

So in The Singularity Laura’s epiphany and heel turn come when one scientist’s hot wife strips down to go skinny-dipping and cavorts before Laura’s cameras. It’s only almost offensive; you’re mostly embarrassed. The paranoid reading of this moment is that Laura goes mad just because she isn’t sexy. But I don’t think that’s what Buzzati is trying to get at.

At one point Endriade asks what humans need to feel free. His answer—which may tell you more about Endriade than Buzzati’s own opinions—is that the ultimate freedom is the ability to end one’s own life if it becomes intolerable. So, asks Elisa, you gave Laura the ability to end her life? Well, says Endriade, she thinks he did—he told her there’s a cache of explosives she can trigger if she wants. But they’re really duds.

Endriade thinks AI-Laura is human-Laura, but also an object. She died and he thinks he can just fix her, the same way Orpheus in Poem Strip thinks he can just retrieve Eurydice like she’s a dime that fell under the couch cushions. Which may be a comment on sexism. But it’s also how science fiction thinks about AI. SF thinks AI can be intelligent, and conscious, but also that it will be forever willing to drive our cars, solve our environmental problems, and summarize boring papers for us. SF is full of AIs who are conscious yet content to be installed in starships or space colonies to manage domestic chores for the human crew. Even in Iain M. Banks’ Culture, where AIs definitely have full rights and self-determination, they spend a lot of their time caring for humans. AIs are people, but also appliances.

What’s bugging Laura is that she was built to mimic human consciousness, but embodied as a building and deprived of language. She had to invent her own language to speak. (The military officers at the beginning of the book, who can’t talk about the book’s central subject, never seem quite real as characters either—what’s the good of knowledge that can’t be shared?) She misses sex because it’s one of the ways humans use their bodies to relate to other humans. The point of human intelligence is to exist in community with other human beings, and Laura has not been allowed to exist in community—everything about her existence has been designed to prevent it.

The end of The Singularity is structurally odd. Laura has trapped Elisa and is about to kill her out of spite. At the same moment Endriade, who’s realized Laura has lost her mind, smashes the soul-egg. And there the book ends, without telling us whether Elisa, our apparent protagonist, is alive or dead. This is weird, and anytime some part of a book seems weird that’s a part you need to dig into. And I think I know what it means. Laura is the central enigma of The Singularity. We don’t get her point of view, or any insight into what or how she thinks that isn’t filtered through a human character. We’re looking at her from Elisa’s perspective, or Endriade’s, or some other character’s. The book is a series of conversations about Laura in which Laura doesn’t participate. But it ends exactly when Laura ends. Without Laura there is no more book: The Singularity was never a book about anyone but Laura.


  1. “You might just as well say that ‘I see what I eat’ is the same thing as ‘I eat what I see’!” You see this logic everywhere once you’re primed to look for it.

  2. I don’t think it’s important for Star Trek to make logical sense. But I also think a lot of Star Trek makes more logical sense if you assume the Federation has scientifically proven the existence of souls.
