A Review, Sort Of: The Ties That Bound vs. At Home

This blog has been stagnant for over a year now. I’m getting to the point where I’d like to revive it. Unfortunately I haven’t written anything in a while, so until I get back up to speed the quality of my writing will be shaky. I plan to start off by finishing some half-written reviews of books I read weeks or months ago, of which this is the first.


Book Cover

The Ties That Bound by Barbara Hanawalt describes the lives and environment of English peasants during (mostly) the thirteenth and fourteenth centuries. (I wish I could come up with a snappy opening line, but I haven’t tried to review a book, or write anything at all, in ages, and the gears have rusted.)

Social history is the kind of history I’ve found most interesting lately. Unfortunately as social history goes further back in time primary sources can get a little sparse. Not a lot of 14th century English peasants left diaries.

So The Ties That Bound relies on archaeology to detail living environments, then turns to legal documents and other less direct sources to explore family structure, life stages, and social ties. Its particular focus is on coroner’s inquests. Accidental deaths, it turns out, were well documented on all levels of society. Inquests supply not only details of peasant life but also the names of people who would have been unremembered by history had they not inadvertently stabbed themselves or fallen off haystacks. After a while you expect every anecdote to end in tragedy, giving The Ties That Bound undertones of Edward Gorey–The Gashlycrumb Peasantry, maybe–as Hanawalt occasionally acknowledges: “While ditches may have been created chiefly for drainage, they served a variety of functions, aside from indirectly reducing population.”

Not that this is a snarkfest. The Ties That Bound is written in the best academic style: not overly technical, but not working too hard to avoid dryness or difficulty. Historians trust that the information they’ve gathered will hold the reader’s interest. It is, after all, the reason the reader picked up the book.

Book Cover

Which brings me to another social history I read around the same time, Bill Bryson’s At Home. This purports to be a history of the home in England over the last few centuries. It has a serious case of attention deficit disorder.

The way Bryson organizes At Home is promisingly clever. Each chapter is a room in a house–“The Kitchen,” “The Drawing Room.” Rooms are how we organize domestic life; pattern a history of domestic life after a house and the history almost falls into order by itself. You would think.

But At Home is all over the place. Half the Drawing Room chapter is taken up with mini-biographies of eccentric 18th century celebrity architects. “The Passage” is mostly a history of the Eiffel Tower. “The Dressing Room” focuses not on the wardrobes of the vast bulk of the English population, but on Beau Brummell. Bryson begins “The Cellar” with the Erie Canal, explaining “The reason I have prefaced it all with the story of the Erie Canal is to make the point that building materials are more important and even, dare I say, interesting than you might think.” But wouldn’t the quickest way to convince readers that building materials are interesting be to actually write about them, interestingly?

Well, sure. I mean, I must be open to the idea that building materials can be interesting, or I wouldn’t have picked up a book named after and organized around a kind of building. But Bryson, having chosen as his topic the history of the home in England, displays no faith in its ability to hold the readers’ interest. So At Home prioritizes pithy celebrity biographies–the more eccentric the better–over describing how people lived. Celebrities left more documentation, after all, and making grotesques entertaining doesn’t take much effort. And the Erie Canal, as a large, singular, and significant project, is just naturally more memorable than a cellar.

Every so often At Home walks up to its ostensible subject and touches it, gingerly, and for a moment it’s interesting. One section on the architectural and engineering principles behind stairways–more complex than you’d think–manages to make stairs compelling. But then At Home backs off again, running to the safety of another colorful anecdote, too afraid of boring the reader to focus for long on its core subject. At Home is like a software manual that gives bios of the people who programmed the software, and tells funny stories about things that happened while they were programming it, but only as an afterthought gets around to telling you how the program is supposed to work.

Here’s why I bothered to write up this rambling rant about two apparently random books: At Home is not alone. I’m bored with popular nonfiction, the kind that makes the bestseller lists, in general. Whatever the subject–history, psychology, sociology, biology–the tone is the same: light, breezy, and depthless.

Popular books on academic subjects–science, history, whatever–fall into a few types. One kind is padded with mini-biographies of scientists or archaeologists or historians, descriptions of their appearance and minor eccentricities, and stories about how the pop-nonfiction writer met them in their offices. You also get anecdotes about the author’s travels to historic sites, or visits to businesses, nonprofits, government offices, and other slightly relevant institutions. Whatever actual information these books contain sometimes seems structured around the stories–introduced by and organized around them–rather than the other way around. The information may take up more space, but it feels secondary.

Pop psychology and sociology gravitate to a style pioneered by Malcolm Gladwell: The author begins every chapter with a colorful (and often familiar) anecdote, preferably involving a celebrity. The chapter goes on to cite a series of psychological, sociological, or economic studies centered around a theme taken from the anecdote. These are always presented without context that might help us evaluate them. (Were the results replicated? How did other scholars respond? Who knows?) The chapter ends by finishing the anecdote in a way that sums up the chapter’s theme.

And then there’s the kind of pop nonfiction exemplified by At Home: the quirky kind. At Home approaches every subject by looking for the oddest details and most colorful characters and focusing its attention there, sometimes pushing the original topic to the side, like an artist who sets out to draw a portrait but ends up filling most of the paper with one weirdly shaped nostril.

Most of these books seem to be written by journalists rather than scholars. What these kinds of pop nonfiction books have in common is a journalistic feature-article writing style that they’re taking places it wasn’t meant to go. Every subject, even if it must be awkwardly mashed and folded to fit, becomes a human interest story.

Which betrays a narrow view of what humans might be interested in. More seriously, the human interest approach to history illuminates less about human lives than academic writing does.

Compare At Home to The Ties That Bound. In the “What We Can Remember” version of history the image conjured by the word “peasant” is an illiterate rustic prodding dirt with a stick. Standard pop culture peasants live packed into hovels, start breeding in their teens, have few ambitions and are given no way to pursue any that might exist. In fact, The Ties That Bound finds English peasants–both men and women–who made wills and contracts, joined guilds, and bought and sold rights to farm land. They both farmed and ran businesses on the side. When land was scarce adults didn’t necessarily marry until their late twenties. Throughout the book The Ties That Bound argues that these people, although their lives were not like ours, were more active and recognizable than we tend to think.

At Home argues… not much. Bryson sums up 700 pages of anecdotal meanderings with the observation that, over the last few centuries, some things have changed and some haven’t. How about that? He then notes that contemporary civilization uses a lot of energy; that there is massive inequality between the UK, the subject of his book, and the developing nations At Home hasn’t mentioned at all; and that maybe in future both of these will change. The first point is inane. The other points are non sequiturs. At Home is a book without a central thesis. It tells stories, but those stories don’t add up to an argument. At Home has nothing in particular to say.

Quirky, breezy books like At Home portray history as a collection of discrete stories with strong plots and extraordinary characters… but real life is weird and complicated and not like a story at all. In a way, the books that emphasize detailed research, well-organized evidence, and a strong central argument before narrative wind up telling better stories than human-interest-style nonfiction. They have two things that improve any story: a sense of genuine curiosity, and a point of view.

Suddenly Some Links Drifted By

Here are some of the links I’ve made note of during the weeks this blog has lain fallow:

Raskolnikov, C’est Nous

A cartoon of Newt Gingrich reading Slan.

Compulsive readers get used to finding unexpected connections between books. I also make random connections while wasting time on the internet. Sometimes, like now, this leads to a blog post’s worth of dubious, rambling speculation and crazy theories.

A few days ago I read a blog post at Welcome to My World by Martin McGrath called “Why Does SF Hate Ordinary People?”1 finding a strain of contempt for ordinary people in certain science fiction and fantasy novels.

“Ordinary” has many definitions, so before proceeding I should explain what, in this case, it doesn’t mean: As I write, among the memes stumbling around the internet is a quiz based on a new book by famed statistic-mangler Charles Murray. It supposedly measures how much contact you have with “ordinary” Americans. Actually, it asks questions based on a stereotype of rural white midwesterners (Can you identify this NASCAR driver? Do you have a fridge full of Pabst Blue Ribbon?) and suggests anyone who can’t answer in the affirmative is living in a “bubble.” It must be a very large bubble. It would have to contain most of the country’s actual working class.

The culture-war definition of “ordinary” is not what this blog post is about. Being staggeringly bored by cars driving in circles very fast is not less legitimate than being entertained by them. Another term for making judgments about culture is having taste.

The “ordinary” people McGrath has in mind have nothing in common beyond the fact that they are not wealthy, famous, heroic, or adventure-prone. They hold down middle- or working-class jobs and keep regular schedules. Their biggest worries aren’t crusades, revolutions, or the impending apocalypse; they’re rents or mortgages, health care, child care, and putting food on the table. They’re “ordinary” only in the sense that they live like the vast majority of people in our society–or whatever fantastic society they call home, which may or may not have mortgages but certainly has people whose main concerns are not the stuff of high drama. As an example McGrath cites Colson Whitehead’s post-apocalyptic zombie potboiler Zone One, which judges the average American too inept to survive an emergency, valorizing, in McGrath’s words, “the loners, the socially inept and those who chafe against the ‘burdens’ imposed on them by the social contract that knits the rest of us together.”

When I finished reading McGrath’s post my brain turned to thoughts of Newt Gingrich. Which sounds, I grant you, quite the random leap. I can explain. See, in one of the endless series of Republican presidential debates, Gingrich revealed a cunning plan to solve school budget problems and reduce the dropout rate: child labor.

“New York City pays their janitors an absurd amount of money because of the union. You could take one janitor and hire 30-some kids to work in the school for the price of one janitor, and those 30 kids would be a lot less likely to drop out. They would actually have money in their pocket. They’d learn to show up for work. They could do light janitorial duty. They could work in the cafeteria. They could work in the front office. They could work in the library. They’d be getting money, which is a good thing if you’re poor. Only the elites despise earning money.”2

Not long after the debate I read a post by “Kay” at Balloon Juice, “Only the Elites Insult the Working Adults Who Pick Up After Us,” that made explicit something not everyone picked up on:3

While it’s certainly interesting that opposing child labor laws is now a mainstream position on the Right and among conservative news personalities, I hear something else entirely in Gingrich’s statement than the pundits and politicians heard. Newt Gingrich told us all last night that nine year olds can replace the grown men and women who currently do these jobs. Newt Gingrich believes janitors and cafeteria workers and people who work in school libraries and offices can and should be replaced by children.

That’s how much respect Gingrich has for the work that these people do.

Gingrich, of course, is an SF fan who loves Isaac Asimov’s Foundation trilogy and has co-written several alternate history novels. McGrath, on his blog, traces a thread of science-fictional disrespect for the ordinary back to the “golden age” of SF, when:

…the triumph of the “golden era’s” omni-competent men, the math-wizard engineers, scientists and the all-knowing astronauts, was always about the final victory of those who felt they were hard done by in a society that did not properly value their obviously superior intelligence.

Which is true. And not necessarily political; I was reminded of Gingrich, but McGrath sees disdain for the ordinary in both right-leaning and left-leaning SF. The thing is, I don’t think “Why does SF hate ordinary people” is the right question. You might ask it about fiction in general.

Dostoevsky parodied this attitude over a century ago in Crime and Punishment with Raskolnikov, the self-styled “extraordinary” man. According to the Raskolnikov theory the world revolves around powerful, charismatic Great Individuals, the linchpins and keystones of civilization. If they’re in politics, our safety and security depend on their strength and resolution; if they’re in business, our prosperity depends on their innovation and creativity. Whatever these extraordinary people do, we can’t hold them to the same rules the rest of us follow. Sometimes, to get the job done, they have to break them. You might remember these ideas from such novels as Atlas Shrugged and The Fountainhead, but it’s also the premise of every second Hollywood action movie, ever.4

A few days after McGrath asked his question, Gareth Rees’s post about the teapot-tempest stirred up by a book review at Strange Horizons led me to Caleb Crain’s New York Times review of Alain de Botton’s The Pleasures and Sorrows of Work:

Describing a manager who feeds him lunch, de Botton writes that “years of working around noisy machinery had left my host mildly deaf in one ear and given him a concomitant habit of leaning in uncomfortably close during discussions, so close that I began to dread his enunciation of a word with a ‘p’ or a ‘g’ in it.” For good measure, de Botton adds that the man bores him, perhaps as a result of his “surprisingly intense pride in the plant and its workers.” If de Botton were genuinely concerned that work today lacks meaning, surely here was an opportunity to ask questions. But is he worried that work today lacks meaning? Or just that some work means more to other people than he thinks it should?

This is aimed at the same target, but from a different direction. It’s the contempt of the counterculture for the squares–contempt from outside as opposed to Raskolnikov’s contempt from above. Contempt from outside sees regular, orderly lives as a curse and the people who live them as dupes or zombies. It sees white collar workers as gray killjoys, blue collar workers as Morlocks. They’re buttoned-down and repressed; obstacles to be routed around, or beaten-down victims who need a Manic Pixie Dream Girl to loosen them up and teach them to enjoy life. Contempt from outside sees Jack Nicholson in Five Easy Pieces telling the waitress to hold the chicken salad between her knees, and thinks Bobby DuPea is a free spirit sticking it to The Man rather than, as I think the filmmakers intended, an asshole.

Gingrich sees “ordinary” people as inept, inferior–in comparison his own success is proof of his competence. De Botton sees “ordinary” people as limited, unimaginative–in comparison, he’s deeper, more free. What these attitudes have in common is that they help their holders define themselves as something other than ordinary.

I don’t think many genuine full-time Gingriches and de Bottons exist. In real life, hardly anyone hates ordinary people. In real life, most of us are ordinary. But these kinds of contempt are basic assumptions in many books and many movies–fundamental to the narrative’s world view–and, as long as we’re reading or watching, we go along with them.

The reason is simple: in fiction, ordinary is boring.

It’s hard to hold an audience’s interest in a very long and intricate description of a hero washing dishes. We’ve washed our own so often it takes genius to show us anything fresh. Fiction centers around the most important, most dramatic events of its characters’ lives; unusual, extraordinary events, even adventures. The characters who aren’t going through big changes aren’t the main cast, they’re the extras. Fiction skips the quotidian details.

At this point Raymond Carver busts into the room, waving a copy of Best American Short Stories. “Hey!” he says. “I’m standing right here, y’know!” And Ray has a point. Huge swathes of stories, ranging from great to unreadable, anchor themselves in the everyday. As pro-genre as I am, I’ll admit when it comes to ordinary people the novels filed under “literature” have a better track record than the ones that get filed under “genre.” Heck, sometimes the skill with which a book deals with the ordinary determines where we file it. Still, the protagonists of even the weightiest of serious literature have deeper thoughts and more passionate affairs than most of us have most of the time. If the protagonist is an Uncle Vanya or a Madame Bovary, living entirely without excitement or drama, chances are the story is about how he or she wants that to change:

I am clever and brave and strong. If I had lived a normal life I might have become another Schopenhauer or Dostoieffski.

Anton Chekhov, Uncle Vanya

In reality, thinking like Vanya leads people into weird places–especially if Vanya starts listening to Raskolnikov. Maybe, thinks Vanya, I can be Raskolnikov, too! Yeah, maybe now he’s filed away in a tiny beige cubicle. But the Great Individuals didn’t get their amazing, superhuman abilities by educating themselves or devotedly practicing their craft. Their talents just sort of came to them, because they’re special. Just like, deep down inside, he’s special. Someday he’ll be Great, too. All he has to do is believe in himself.

Our popular fiction is swarming with spunky nobodies discovering natural God-given talents–not skills, because they rarely need to work at them–and overcoming hidebound establishments and opposition from nay-saying friends and family to fulfill their dreams. Often this is a fantasy-hero thing–see The Matrix, Star Wars, or other stories about Chosen Ones who inherit their powers, or unleash their inner badass after very little training. I’m often struck by the contrast between modern adventure movies and older Hitchcock-style thrillers whose average heroes muddled through extraordinary adventures without manifesting heretofore unsuspected Kung Fu.

In the movies, the follow-your-dreams hero is just as likely to become an entertainer, or some other kind of celebrity. These stories combine the “special person” narrative with the “outsider vs. the squares” narrative–their heroes succeed because they’re more soulful and free-spirited than the hoi polloi. Who are–to bring this essay back to the point–us, in the audience, watching.

It’s tempting to identify with the hero’s point of view even when, technically, that point of view doesn’t like us very much. One of the attractions of fiction is that people think in stories. We make sense of our lives by organizing them into narratives in which we’re the central characters. We feel like protagonists, exceptional people. In a way, from our own viewpoints, we are exceptions: every one of us is the only person whose head we live inside–the one person whose thoughts and point of view we have full access to, as we have access to the thoughts and POV of a novel’s protagonist. It’s the protagonist that we measure ourselves against, not the extras and walk-ons. When the narrative point of view tries to open some space between the hero and the herd we instinctively side with the hero.

Which is fine. The danger comes when too many of our stories define their heroes as better than everyone else. Stories are one of the big ways a culture or subculture spreads its values. Hearing a message over and over again habituates us. It can become part of the cultural furniture, something those who share these stories unthinkingly assume to be true.

There’s long been a toxic strain in SF fandom, a subculture-within-a-subculture that actually believes SF fans are superior to the common horde. Some fans in years gone by only half-jokingly coined the phrase “fans are slans,” comparing themselves to the scorned superhumans of A. E. van Vogt’s novel Slan. Even today SF appeals to more than its fair share of inflated egos. Even those of us with no interest in formal, organized fandom run into these people when we make the mistake of reading an internet comment thread. Would-be writers convinced their self-published zombie-vampire-dragon trilogies would sell millions if the market weren’t conspiring against them. Self-styled omni-competent men who think all they need to reveal their true potential is the breakdown of civilization. Guys who whine about “political correctness” if a book’s protagonist is female or gay or something.

Part of getting along with people, functioning in society, and maintaining a working sense of empathy is keeping in mind that, though we’re our own protagonists, so is everyone else–to others, we’re supporting characters or extras. If we find this tough to accept, maybe our culture–whether “our culture” means SF fans or American culture in general–isn’t hearing that message often enough. We could stand to be more comfortable in our ordinariness.


  1. Via ↩

  2. And yet Gingrich is upset by a janitors’ union negotiating for a living wage. I guess it’s because he’s an elite! ↩

  3. Via ↩

  4. As well as a lot of comics. R. Sikoryak once drew a mashup of Crime and Punishment and Batman, with Batman as Raskolnikov. They fit together frighteningly well. ↩

Sigizmund Krzhizhanovsky, The Letter Killers Club

Cover art

Memories of the Future, a collection of stories by Sigizmund Krzhizhanovsky, was among my favorite books of 2010. Krzhizhanovsky was a 20th century Russian writer of absurdism, surrealism, magic realism, and science fiction. Bad luck and Soviet censorship kept all but a handful of stories out of print in his lifetime. His work was buried in an archive to be unearthed decades later. The NYRB classics imprint has begun slowly translating and publishing his work in English.

The Letter Killers Club is a novel and a frame for several stories, quasi-stories, and narrative fragments. I could say many of the same things about it as I said about Memories of the Future–the prose is startling, the ideas come at rapid fire, and Krzhizhanovsky draws vivid characters in very few strokes. I’ve seen Krzhizhanovsky compared to Borges and Kafka, but he reminds me more of Stanisław Lem. The Letter Killers Club recalls A Perfect Vacuum, Lem’s volume of reviews of nonexistent books–conceptions of books that don’t exist and don’t need to because Lem boiled them down to their essences.

The narrator of The Letter Killers Club is friends with a famous author who for two years has written nothing. One night, the author explains: in his youth, a financial emergency forced him to sell his library. He afterwards spent hours reimagining the books that stood on his empty shelves, and in doing so found the inspiration to write books of his own. Years later he acquired a case of writer’s block and returned to what worked before, setting up a room of empty shelves. But now he found he preferred keeping his ideas in his imagination: fixing his conceptions as letters on a page killed them. [1] Now he’s the president of a club of “conceivers,” the Letter Killers Club, who gather every week to share stories that will never be set down on paper. He invites the narrator along.

By the end of the first chapter you might expect a straightforward collection of club stories. But the meetings of the Letter Killers Club are… intense. The conceivers use nonsense-syllable aliases and skulk in like they’re attending a combination conspiratorial conclave and Ph.D. thesis defense. If a conceiver is gauche enough to read from notes, the president throws the notes into the fireplace. The meetings aren’t so much storytelling sessions as conflicts. The audience seems anxious to challenge the speaker–each week’s featured conceiver is on trial. The stakes are left unspoken. They feel pretty damn high.

Krzhizhanovsky is dealing with the same preoccupations that dominated many stories in Memories of the Future: writers with no outlet for their work, stories treated as matters of life and death. One story in Memories of the Future argues that writing isn’t just an occupation but the thing the writer owes the world, payment for his or her existence. That’s serious. You can’t blame Krzhizhanovsky for coming back to these themes. They’re his life. Barring those few precious published stories, the only people his writing connected with were the audiences who gathered to hear his own private readings. He had no publisher, but he had to write, and I sense in his stories a feeling of bottled-upness. The malaise that hangs over the Letter Killers comes from their inability, or refusal, to fulfill their purpose. A story never read is never complete.

The conceptions vary in tone and content. The longest story, and the one that most put me in mind of Lem, is a science fictional tale of a machine, the “ex”, that can possess people’s bodies, working them like puppets, leaving their minds aware but sidelined like passengers in vehicles out of their control. At first its creators sell it as a way to deal with the insane: their care is an economic burden, goes the argument, but putting them under the control of an ex will turn their bodies, if not their minds, into productive workers. As you might expect in a story like this the exes’ influence spreads. They become the tools of a government that sees citizens as economic units rather than human beings.

Another story deals with an actor playing Hamlet who enters a world inhabited by previous performances of Hamlet to steal Richard Burbage’s mojo. Another is about a priest moonlighting as a jester, changing costumes as needed, whose career goes haywire when his vestments are stolen. Together with the dystopian tale, these stories share a thread that runs through some (though definitely not all) of the stories in The Letter Killers Club: the mismatch between the outer and inner life, actors and the parts they play, people’s real selves and the roles imposed by society. Like the characters in these stories, the Club members live in a world that expects one kind of story from people with other stories inside them. The Letter Killers Club doesn’t pay much attention to the world outside the president’s doors, but you can’t forget these people are meeting in the Soviet Union–near the end one member observes, in reference to the empty shelves, that the police can’t search what isn’t there. The members of the Letter Killers Club can speak their ideas in their empty library, but can’t give them to the outside world. The friction between their inner selves and their outer roles is wearing away at them.

Apparently five volumes of Krzhizhanovsky’s collected works have been published in Russian. I hope we don’t have to wait long for more to appear in English. His writing spent too long bottled up, and deserves to be read as widely as possible.


  1. An idea familiar to anyone who’s been unable to work because they can’t stand the thought of substandard results. Part of the reason this blog was so rarely updated in the last year is that most of what I tried to write was in my own personal opinion too inane to share.  ↩

Two 2001s

My favorite science fiction movies are the ones that don’t spend two and a half hours yelling and throwing things at my face. This is why I recently watched 2001: A Space Odyssey again. It’s quiet and slow and never boring.

(I assume anyone reading this knows the story: a monolith forcibly evolves a prehuman; millions of years later, the discovery of another monolith prompts a mission to Jupiter; the ship’s computer goes crazy and kills most of the crew; yet another monolith turns the survivor into a magic space baby. Level up!)

The Movie

Movie poster

2001 looks strikingly different from current Hollywood science fiction. At the moment the coolest future Hollywood can imagine is one drained of all colors but dirty gray, dim gunmetal blue, and body-fluid orange. Apparently sometime in the 21st century the visible spectrum will contract Seasonal Affective Disorder. But then, nearly every Hollywood future is either an apocalypse to struggle through or a dystopia for a self-absorbed hero to topple explodily, so I understand why the color graders are depressed. 2001 has its share of beige and sterile white, but, y’know, it’s a cheerful sterile white. And it’s joined by the computer core where David Bowman lobotomizes Hal, lit with the dark red of an internal organ; and Bowman’s mysterious minty-fresh hotel room; and a rack of spacesuits that might have been sponsored by Skittles bite-size candy. 2001’s future might be worth looking forward to–new worlds to explore, new life forms to discover, magic space babies to evolve into. It feels that way partly because the future is pretty. Listen to the score: the spaceships don’t dock, they waltz.

Studios tend to pigeonhole SF as an action genre, and tend to assume too little of action movie audiences. I often bail on these movies for being too loud, too fast, and too dumb. It’s interesting how little 2001 explains, and how little it needs to. 2001 gives just enough information to suggest what’s happening, and trusts the audience to make connections. The movie doesn’t tell us why Hal kills Discovery’s crew. Hal proudly tells an interviewer that the Hal 9000 computer has never made an error. Hal reads Bowman’s and Poole’s lips as they debate shutting him down for repairs following his mistaken damage report. We can work it out for ourselves.1 I’m mildly jarred when Hal later explains the lip reading to Bowman–since Bowman didn’t know about it the dialogue makes sense, but the audience already knows from the way the movie cut between shots of the crew and Hal’s eye. Watching 2001 I get used to not listening to needless explanations. Over half the movie has no dialogue at all.2 It’s the most effective demonstration in sci-fi film of the principle of “show, don’t tell.”

2001 spends a surprising amount of time watching people run through commonplace routines, the kind of action most movies gloss over. The “Dawn of Man” sequence shows how the prehumans live before the monolith shows up because we need to see how they began in order to understand how they’ve changed. But you might wonder why 2001 shows every detail of Heywood Floyd’s trip to the moon–sleeping on the shuttle, eating astronaut food, going through the lunar equivalent of customs, and calling his daughter on a videophone. When the film moves to the Discovery the plot doesn’t start rolling again until we know David Bowman’s routine, too.

In 1968, science fiction was in the thick of the New Wave, a label given to the younger SF writers who paid more attention to good prose, rounded characters, and just generally the kinds of ordinary literary qualities that make fiction readable. These were never entirely absent from science fiction, but the “golden age” of the genre was dominated by a functional, didactic style exemplified by the work of Isaac Asimov. Some fans call science fiction the “literature of ideas.”3 In golden age SF, the ideas were king and everything else existed only to serve them. The characters were mouthpieces for the ideas. The prose was kept utilitarian–“transparent” was the usual term–to transmit the ideas with minimal friction. Writers used less of the implicit worldbuilding that dominates modern SF, relying on straightforward exposition to describe the world and particularly the scientific gimmicks they’d built their stories around. Stories often described in intricate detail actions that, to the characters, were routine.

Translate these expository passages to film and you have the scenes of Heywood Floyd taking a shuttle to the moon. This is not inherently bad. I’d argue that it takes more talent and effort to write a good book in this grain than it does to write traditional fiction, but a witty or eloquent writer can do as he or she pleases.4 So can filmmakers as proficient as Stanley Kubrick and Douglas Trumbull, 2001’s effects supervisor. I can see why some viewers lose patience with 2001, but I personally am not bored.

The Book

Book cover

Of course, 2001 does exist in prose. 2001 was a collaboration between Stanley Kubrick and Arthur C. Clarke; while Kubrick wrote the script, Clarke wrote the novel. The movie was so far ahead of its time that, even with its 1968-era design work, it still looks fresh. So when I followed up this viewing of the movie with Clarke’s novel, which I hadn’t read in so long that I’d forgotten it completely, I was surprised it was so old-fashioned. In retrospect, I shouldn’t have been: 2001 the movie’s virtuoso style is built from an old-fashioned plan. I just hadn’t noticed until I reread the novel, which seems unaware the New Wave ever happened. Clarke’s prose lacks the style of Kubrick’s direction, and without actors to give them life the characters are revealed as perfunctory sketches, functions rather than subjects. They demonstrate aspects of the universe, and witness its wonders, but the universe itself is what matters.

It’s striking how much space Clarke devotes not to telling us what someone is doing now, but what they would do, could do, were doing, or had been doing. Sometimes 2001 reads like the kind of nonfiction book you’d give to children to teach them how grown-up jobs work. (“At midday, he would retire to the galley and leave the ship to Hal while he prepared his lunch. Even here he was still fully in touch with events, for the tiny lounge-cum-dining room contained a duplicate of the Situation Display Panel, and Hal could call him at a moment’s notice.”) Clarke’s purpose is not only, and maybe not even primarily, to tell a story. He wants to make the audience understand Heywood Floyd and Dave Bowman’s entire universe. Clarke’s 2001 is the opposite of Kubrick’s. The movie is gnomic, open to multiple interpretations. The book wants to tell us something, and it’s going to make it absolutely clear.

Unfortunately Clarke, while not really bad, doesn’t quite have the writing chops to deliver compelling exposition–or at least he wasn’t exercising them here. Still, the book occasionally improves on the movie, mostly by expanding on it. 2001 is over two hours long, but the book has more space.5 The movie’s “Dawn of Man” segment keeps its distance from its subjects. We don’t get to know any of the prehumans. We can’t really even tell them apart. The novel can get inside the mind of the prehuman Moon-Watcher. After Hal, he’s the most vivid character in the book. (On the other hand, it says something about 2001 that the most vivid characters are an ape-man and a paranoid computer.)

In both film and book, the climax of the “Dawn of Man” comes when Moon-Watcher intuits the concept that makes him, in Clarke’s words, “master of the world”: the tool. Specifically, a weapon–Moon-Watcher needs to hunt and to defend himself from leopards and rival bands of prehumans.

The movie then makes its famous jump cut from bone to satellite. The book tells us something the movie doesn’t: the satellite is part of an orbital nuclear arsenal. If the unease running under the surface of the novel’s middle section seems muted now, it’s because it needed no emphasis in 1968: it went without saying that humanity might soon bomb itself into extinction.

When Bowman returns to Earth as the Starchild, his first act is to destroy the nuclear satellites–and then the book repeats the “master of the world” line. If humanity made its first evolutionary leap when it picked up weapons, says 2001, its next great leap won’t come until it learns to put them down again. Apparently this theme appeared in an earlier draft of the movie’s script but didn’t make it into the final film. It doesn’t feel like the movie is missing anything–it is, after all, already pretty full–but the novel is better for the symmetry.

Evolviness

A monolith.

The theme shared by both novel and movie is evolution–and here we come to the original reason I started this essay: evolution in science fiction is weird. Several crazy evolutionary oddities crop up over and over in SF, and 2001 makes room for them all.

People who don’t know much biology often think evolution tries to build every species into its Platonically perfect ideal form. In this view, humans aren’t just more complex or more self-aware than the first tetrapod to crawl out of the ocean: we’re more “evolved.” This is as pernicious as it is foolish–in the early 20th century, true believers in evolutionary “progress” used the idea to justify eugenics and “scientific” racism.

Despite that, in many science fiction stories evolution has a direction. This direction is usually entirely unlike the direction the eugenicists were thinking of. Not usually enough, mind you–a few of these “perfect” humanoids look disturbingly blonde–but science fiction people mostly evolve into David McCallum in the Outer Limits episode “The Sixth Finger”–people with big throbbing brains and, more importantly, godlike powers. (Also, at least on TV, they tend to glow.) Depending on the story, this might be a metaphor for either social and technological progress (if the more highly evolved beings are wise and authoritative, like Star Trek’s Organians), or absolute power corrupting absolutely (if they’re assholes like Star Trek’s Gary Mitchell). The crew of the Enterprise met guys like this in every other episode of Star Trek–Gene Roddenberry’s universe has more alien gods than H. P. Lovecraft’s. In media SF huge heads are optional; powers aren’t. Especially in comic book sci-fi–think of the X-Men, the spiritual descendants of A. E. van Vogt’s Slan. Sometimes people evolve into “energy” or “pure thought”–or, in modern stories, minds uploaded as software. In Clarke’s own Childhood’s End the human race joins a noncorporeal telepathic hive-mind. In 2001, the Starchild destroys the Earth’s orbiting nukes with a thought. In science fiction, sufficiently evolved biology is indistinguishable from magic.

The suddenness of David Bowman’s transformation brings up another point: in science fiction, evolution happens very fast, not in gradual steps but in leaps. It works like the most extreme form of punctuated equilibrium you’ve ever seen–a species coasts for a few million years in placid stability, until bam: a superbaby is born! With three eyes and an extra liver and telepathy! In biology, this is known as saltation, or more colloquially as the “Hopeful Monsters” hypothesis. Nobody takes it seriously… except in science fiction, where you actually can make an evolutionary leap in a single generation. This is the premise of Childhood’s End, and The Uncanny X-Men, and Slan. Theodore Sturgeon used it in More Than Human. Sometimes this is a metaphor for the way an older generation struggles to understand its children. More often it’s simply artistic license. Evolution takes millions of years, but fiction, unless it’s as untraditionally structured as Last and First Men, deals with individual human lives. To talk about evolution, SF writers collapse its time scale to match the scale they have to work with.

Some SF, particularly in TV and film, twists saltation even further away from standard evolutionary theory: evolutionary leaps don’t just happen between generations. People can evolve–or devolve–in midlife, as David Bowman is evolved by the monolith. In the world of Philip K. Dick’s The Three Stigmata of Palmer Eldritch, for instance, you can go in for “evolutionary therapy” and come out with a Big Head. In my plot summary I used the words “level up” ironically, but it really is like these writers are powering up their Dungeons and Dragons characters–suddenly, the hero knows more spells. Written SF usually tells these stories with some kind of not-exactly-evolutionary equivalent–in Poul Anderson’s Brain Wave all life on Earth becomes more intelligent when the solar system drifts out of an energy-dampening field; the protagonists of Anderson’s “Call Me Joe” and Clifford Simak’s “Desertion” trade their human bodies for “better” bodies built to survive on alien worlds. TV shows just go ahead and let their characters “mutate.” Countless episodes of Star Trek and Doctor Who are built around this premise, the most awe-inspiring being Star Trek: Voyager’s legendarily awful “Threshold”, in which flying a shuttle too fast causes Paris and Janeway to evolve into mudskippers.6 This is, again, artistic license: stories focus on individuals, not species. The easiest way to write a story about biological change is through metaphor, by putting an individual character through an impossible evolutionary leap.

Most of the leaps I’ve cited in the last three paragraphs have something in common: they don’t involve natural selection. In science fiction, evolutionary leaps are triggered by outside forces. Sometimes an evolutionary leap is catalyzed by a natural phenomenon, like the “galactic barrier” encountered by Gary Mitchell on Star Trek. The latest trend in evolutionary catalysts is technological. Vernor Vinge has proposed that humanity is heading for a “Singularity”, when exponentially accelerating technological breakthroughs lead to superintelligence, mind uploads, immortality, and just generally a future our puny meatspace brains cannot predict or comprehend. The Singularity is the hard SF equivalent of ascension to Being of Pure Thoughtdom, leading to the less kind term “the Rapture of the nerds.” Plausible or not,7 the occasional singularitized civilization is de rigueur in modern space opera (not always under that name; for instance, lurking in the background of Iain M. Banks’s Culture universe are civilizations who’ve “sublimed”). Short of the Singularity, a good chunk of contemporary far-future SF involves transhumans or posthumans, people who’ve enhanced their bodies and minds technologically. A pioneering novel in this vein was Frederik Pohl’s Man Plus, about a man whose body is rebuilt to survive on Mars.

Singularities and transhumanism put humans in charge of our own evolution. 2001 puts human evolution in the hands of aliens, as do many other stories, including Childhood’s End. Octavia Butler’s Dawn and its sequels deal with humanity’s assimilation into a species of gene-trading, colonialist aliens. Both books are about humanity’s future evolution, but just as often the aliens have guided us from the beginning of human history, as in 2001–the idea even took hold outside of fiction, in Erich von Däniken’s crackpot tome Chariots of the Gods?. Star Trek explained the similarity of its mostly-humanoid species with an ancient race of aliens who interfered with evolution on many planets, including Earth. Nigel Kneale’s Quatermass and the Pit is a cynical take on the same concept.

So. Having (at tedious length) established that science fiction tends to get evolution (usually deliberately) wrong, what does it mean? Specifically, what does it mean for the ostensible subject of this essay, 2001?

Tales of weird evolution rarely depict change as evil.8 More often they’re about human potential: evolution serves as a metaphor for progress and growth, personal or social. The Organians are “more evolved” than us not because they can turn into talking lightbulbs but because they possess more knowledge and wisdom. Stories of evolutionary leaps are about the hope that we can become more than we are, the growing pains we suffer in transition, and occasionally the fear that we might not be able to handle our new knowledge and abilities.

It gives me pause, though, that in science fiction growth is so often represented by a kind of evolution that doesn’t exist. As I’ve mentioned, there are narrative reasons for these oddities. An epiphany is more dramatic, and more suited to a story taking place in a limited time-frame, than a geologically slow ongoing process of becoming. And an epiphany can’t come out of nowhere–it needs a specific cause, one more narratively satisfying than the laws of biology. But what we end up with are stories of personal and social progress in which we don’t grow ourselves–we’re grown by outside forces. Our growth as human beings is an emergent property of accelerating technological change, or it’s granted to us by gods and monoliths. In 2001: A Space Odyssey Moon-Watcher doesn’t discover tools himself–the monolith implants the concept in his mind. The human race has to prove it’s worthy of the next step in evolution by traveling to the moon and then to Jupiter, but when David Bowman arrives the secrets of the universe are given to him.

The evolution metaphor in 2001–and in science fiction in general–is a weird, confused, disquieting tangle of optimism, hope, and cynicism. Humanity has the potential to be more than we are, but not by our own effort and not through any process we can control or understand. It’s like science fiction thinks we can’t get from here to wisdom without a miracle in between.


  1. The book explicitly explains Hal’s behavior and its explanation is different from the one we’re led to believe in the movie. ↩

  2. The trivia section of 2001’s IMDb page gives the dialogue-free time as 88 minutes out of 141. ↩

  3. I am not one of them. The description is uselessly vague. What book isn’t about ideas? ↩

  4. This is why I bought Mark Twain’s Autobiography, on the face of it a thousand pages of random Grandpa Simpsonesque rambling: Mark Twain’s grocery lists are worth reading. ↩

  5. Sorry. ↩

  6. As a capper, they proceed to have baby mudskippers together. ↩

  7. I’m on the “not” side, myself. ↩

  8. When they are, they’re usually horror stories. Often they focus on a mad scientist who’s devolving people, or evolving animals, e.g. The Island of Dr. Moreau. ↩

Links to Things

I plan to revive this blog for 2012. I’m still writing slowly, but two or three potential posts are now gradually accreting on my hard drive. In the meantime, here’s a links post:

  • Here’s Roger Ebert’s list of the best films of 2011. I’m linking to this mostly in order to quote this observation:

    …I believe the more specific a film is about human experience, the more universal it is. On the other hand, movies “for everybody” seem to be for nobody in particular.

    I think this is just as true of books, and music, and just art in general.

  • At The Rumpus, a conversation about the movie The Most Dangerous Game (imdb) that turns into a discussion of the difference between trivia and knowledge.

  • Gareth Rees on deciding what standards to use when talking about art, with a focus on Greg Egan. Rees’s argument doesn’t convince me–the first half of the post compares completely different art forms with completely different functions, but the second half compares books to other books, which is, I’d argue, completely different:

    The science fiction writer Greg Egan is another pertinent case. Judged by the standards of the literary novel, Egan’s works fall far short: his prose is dry and impersonal; his characters carry out their function in the story but no more; his plots are often episodic and lack dramatic conflict or resolution; he has a tin ear when it comes to satire. But all of that can be forgiven because he brings to his work a unique interest in the character of physical law.

    That may be true, but it’s hard not to wonder why a novel can’t provide interest in the character of physical law and have lifelike characters, beautiful prose, and well-crafted plots. I’ve always wanted to like Egan’s work, but the weird affectlessness of his stories has always proved an insurmountable barrier.

    That said, I think Rees’s post is worth reading. (Via)

  • American Scientist on the problem with Freakonomics and simple-minded statistical-cherry-picking contrarianism.

Unfortunately, having left these URLs lying around for so long, I no longer recall how I found some of them. I’ll have to do better with this so that I can include “via” links.

Kevin Huizenga, The Wild Kingdom

Walk into a comics shop1 and you’ll see rack upon rack of detailed and carefully rendered mainstream comics–“mainstream,” in comic-shop terms, meaning the style and aesthetic typical of superhero comics. Comics that methodically delineate every hair on a character’s head yet seem to know about as much about the way the human body moves as an octopus man from the planet Xoth. Comics that obsessively-compulsively render every sidewalk crack and windowpane of a street scene, but fail to clearly communicate what’s happening there. Comics whose draftsmanship is at times photo-perfect, but miss everything that would invest their art with meaning, emotion, or life.

And then there’s Kevin Huizenga, whose comics look like the button-eyed, pipe-cleaner-limbed 1930s newspaper strips of Bud Fisher and E. C. Segar, and are among the most realistic comics currently published. Before I go further, I want to make it clear that this is not genre-bashing. Only a superficial (and dull) interpretation of “realism” would equate it with realistic subject matter. Anyway, although most of Huizenga’s comics are set in suburbia he often uses fantasy–Curses collects several magic-realist tales, and he’s serializing a post-apocalyptic comic on What Things Do. What I mean is that Huizenga’s cartooning is more evocative–better at seeing and understanding the essence of an experience and translating it into marks on a page.

Take Ganges #3. Glenn Ganges, Huizenga’s all-purpose protagonist, spends the first chunk of the book trying to drift off to sleep and getting stuck halfway there. A hypnagogic state, it’s called. On the second page of Ganges #3 Glenn walks out to his front yard. It’s a clear moonlit night, and the page conveys, without even the benefit of full color, an excellent impression of the way light falls on a clear moonlit night. As his thoughts wander, Glenn absent-mindedly walks up a tree. He’s dreaming. And the feeling of reading this page reminds me of how actual dreams feel: the disjointedness, the way one element of the narrative (Glenn’s thoughts) refuses to acknowledge another (the suspension of gravity), the acceptance of surreal events as literally unremarkable (Glenn walks back into the house, observes himself sleeping, and climbs into his own head as though it’s just what you do on a restless night). Dream sequences in comics aren’t usually like this, partly because they usually serve the kind of narrative function they do in movies–i.e., to develop characters or themes through allegory–and partly because drawing a dream that feels like a dream is hard.

Cover art

Huizenga pulls off the same trick at the beginning of the book I’m actually attempting, however circuitously, to review: The Wild Kingdom. Huizenga has given real thought to how the defining features of dreams could be translated to the page. For instance, how do you depict the sudden time-skips that are typical of dreams? Because here’s the thing: skipping over time is how comics normally work. Moment-to-moment panel transitions, to use Scott McCloud’s categorization, are less common than action-to-action or scene-to-scene. So comics have to work to depict actual narrative discontinuity. Huizenga solves the problem by showing Glenn seeing himself at different moments in a single panel as he approaches a house. Then there’s the way that dreams tend to jumble together things that seem to belong to different levels of reality, which Huizenga represents by collaging photographs into his cartoony drawings.2

I shouldn’t spend too much time on this dream sequence, though. It’s just a prologue. What The Wild Kingdom is really about… well, that doesn’t become clear for some time. In a good way–this is one of Huizenga’s more challenging works. The cover design and the binding resemble a mid-20th-century children’s science book. There’s a mock-serious introduction and fake table of contents. There are paintings of songbirds on the endpapers. So when Glenn wakes up and proceeds to spend Saturday puttering around his suburban home readers might assume The Wild Kingdom has already wandered off premise. But that’s the point of the book’s first chunk: Suburbia is a wild kingdom, a point reinforced when you flip to the back of the book to discover the songbirds on the endpapers are taken from an ad for Ethyl gasoline.

We usually define “nature,” or “the wild,” as what exists where people don’t. Here we have a city, and here we have a farm, and over here, in this stand of trees along the creek, where no one mows the grass, that’s Nature. We tend to assume, when we’re not particularly thinking about it, that “nature” has clear borders, like a square on a chessboard. It’s more complicated than that, of course, as anyone who’s confronted a suburban lawn after a month’s neglect knows. Cities are also ecosystems. There’s a lot going on in cities that’s not under our control–that is, in other words, wild. To say wildlife survives in the cities is understating the case–those pigeons, raccoons, squirrels and feral cats are thriving. Right under our noses are enough predator-prey dramas to keep Marlin Perkins busy for years.

Glenn is woken by a mosquito, and finds a stag beetle in the basement which is subsequently hassled by a cat. There’s a worm in his apple; he tosses it to a squirrel. On a drive, he sees another car run over a pigeon. A hawk stops to pick up its remains. Again, everything here is closely observed and efficiently communicated. On one page a pigeon pecks at a couple of chili fries. There are seventeen closely packed drawings of just the bird’s head and the fries, without panel borders. The pared-down drawings and the page structure read with a staccato rhythm a lot like the jerky head-bobbing of an actual pigeon. Also interesting: the panels showing Glenn’s car as he drives often enter Glenn’s point of view. Nearby cars are well-defined; so are objects in Glenn’s view as he watches the road, like stoplights and telephone wires. The buildings and trees to the side of the road are built mostly from motion lines with a few sharp details jumping out from the background–the flashbulb images Glenn picks up out of the corner of his eye. These panels are at once pictures of Glenn out for a drive and maps of where his attention is.

The common factor is that these panels are both representational and… diagrammatic, let’s say. In fact, at times Huizenga’s comics include actual diagrams, some accurate and some parodies (as are the diagrams in The Wild Kingdom).

Just when you think you might be getting a handle on The Wild Kingdom, there’s a commercial break. It’s in color. And much more oblique. And it seems to jump around a lot, like the book has lost its attention span. It starts with a Glenn-substitute attempting to ponder some deep questions, but within a couple of pages the book moves on to Hot New Things, and repeated promises that “you’ll be saved from your own life,” and naked dancing Technicolor people shouting “Yeah!”, and Walt Whitman with an exciting new way to make money. This is a different kind of wild: the mental noise that distracts us from the deep attention to the world demonstrated by the black and white pages. This is the wildness of Glenn’s mind when it’s out of control and bereft of attention span. This section is about wanting, and desire, and how the ubiquitous mass media and relentless advertising that surround us like air sublimate our more nebulous desires into a need for the Hot New Thing. Because, honestly, isn’t the Hot New Thing easier and more fun to think about than the deep questions? It saves us from our own lives!

After a few pages of this, The Wild Kingdom calms down, gradually going from bright colors to muted colors back to black and white. It returns to the nature theme of the first section in a series of short pieces which include pictures of “fancy pigeons” and clip-and-save trading cards covered in bizarre “facts” about the animals we’ve seen. Then the book introduces Maurice Maeterlinck, and its theme comes together. Maeterlinck was a symbolist playwright who also wrote three books of natural history. The closest thing to a statement of purpose in The Wild Kingdom is a long quotation from The Life of the Bee (available on Project Gutenberg), from which I’ll quote part of a paragraph:

Let our heart, if it will, in the meanwhile repeat, “It is sad;” but let our reason be content to add, “Thus it is.” At the present hour the duty before us is to seek out that which perhaps may be hiding behind these sorrows; and, urged on by this endeavour, we must not turn our eyes away, but steadily, fixedly, watch these sorrows and study them, with a courage and interest as keen as though they were joys. It is right that before we judge nature, before we complain, we should at least ask every question that we can possibly ask.

The first-glance take on The Wild Kingdom might be that it’s about a conflict between nature and the suburbs–but, again, these are “sides” that don’t really exist; even in the city, nature is there. The natural world is the example The Wild Kingdom uses to make its real point. What this book is really about, I think, is attention.

Recently there was a psychological experiment that had a lot of publicity. You might have heard about it. People are asked to watch a video of basketball players, and count how many times the players passed the ball. About half the people who try this become so intent on the task that they do not notice when a guy wanders through the game wearing a gorilla suit. The human brain is not an outstanding multitasker; we can do it, but if we juggle too many tasks at once we’re just a little bit worse at all of them.3 There are limits to how much we can focus on, how much input we can take in, at once. I know the brain-as-computer metaphor is massively overused, but at this time of night I can’t think of a more succinct way to put it: the human mind has only so much bandwidth and can run only so many processing cycles at once.

So it’s not such a great thing when too many of our processing cycles are taken up with toothpaste, chili fries, and endless anticipation of Hot New Things. I don’t want to sound too disapproving. I like Hot New Things; one thing dour anti-consumerists don’t always get is that sometimes everyday life is a grind, and a certain amount of daydreaming about Hot New Things can be one way of coping. Every once in a while we need to be saved from our own lives.

Moderation, though, is key. Daydreams are good; it’s not so healthy to let manufactured anxieties (Are our teeth white enough? How clean is the carpet, really?), catchy slogans, and secondhand narratives colonize our attention entirely. To return to the computer analogy, it’s the difference between playing a game on your computer and letting a virus dominate its processing cycles. The Wild Kingdom’s focus on the natural world hidden in the urban landscape is a reminder that it’s important to pay attention to the world that’s actually there, around us. It’s important not to be so focused on our destination that we don’t see the bird about to be crushed by the wheel. It’s almost Buddhist: Huizenga is asking us to be here now.

The Wild Kingdom ends by returning to the hawk that, in the first third of the book, took off with the pigeon. The hawk lands on an electric transformer and electrocutes itself. This, through a series of Rube Goldberg disasters, leads to an apocalypse. The camera pulls away from Earth–peaceful, from so far away–only to see it collide with another planet. This is, I think, a memento mori; a reminder that some things may not always be around, and we won’t always be around, either, and we should pay attention now, and ask every question that we can possibly ask.

Which brings us back to the beginning of this essay, and the question of how Huizenga invested a simply drawn, cartoony book like The Wild Kingdom with so much more conviction than the pseudo-photorealistic comics a few shelves over.

It helps to look.


  1. If you do, you’re braver than I. The comics shop in the town where I live is relatively neat and clean and I’m still not comfortable going in: however pleasant the store is, I’m just too creeped out by the merchandise. (Want to increase sales, comics companies? Try coming up with products that don’t look sleazy.) ↩

  2. Also, I find that The Wild Kingdom’s dream sequence really captures the way dreams often involve a sense of menace without containing anything obviously menacing. Although maybe that only says something about what my dreams are like. ↩

  3. Although multitasking feels easy. Which is why so many people crash their cars while texting: yeah, they know other people can’t handle it, but… ↩

More Links to Things

There’s probably going to be another gap of at least a week before my next substantial post, but I have a few more interesting links:

  • Comix Cube on comics, sound effects, and typography, with a focus on the work of Jordan Crane:

    That being said, there’s something especially exciting about what Crane is doing here. The graphic forms he’s using are typographic, but by and large they’re just beyond identifying as anything in our alphabet.

  • SF writer Karl Schroeder makes the case for aspirational science fiction:

    The fact is that if I’ve learned one thing in two years of studying how we think about the future, it’s that the one thing that’s sorely lacking in the public imagination is positive ideas about where we should be going. We seem to do everything about our future except try to design it. It’s a funny thing: nobody ever questions your credentials if you predict doom and destruction. But provide a rosy picture of the future, and people demand that you justify yourself. Increasingly, though, I believe that while warning people of dire possibilities is responsible, providing them with something to aspire to is even more important.

    Right now, this is the kind of thing I want to read.

  • This article, about a local-food restaurant operating in a small Virginia town, contains the most comedically oblivious sentence I’ve seen in any news story this week:

    The biggest challenge has been winning over townspeople. It’s not hard to find residents who say that a meal at the Harvest Table is more than they can afford, though none who said so in interviews had actually eaten there.

    Gee, I wonder why.

  • Every so often, you hear politicians and talking heads claim that half of Americans–the bottom half–pay no taxes. This is, of course, not the case; these people are using rhetorical sleight of hand, using the single word “taxes” when they mean “federal income taxes.” Kevin Drum explains the real situation, and then clarifies exactly who doesn’t owe federal income tax, and why.

Links to Things

Here are a few things I’ve been reading lately: