The Elephant in the Brain is the most cheerfully cynical book I have read. It argues for a simple core claim:
Our brains are built to act in our self-interest while at the same time trying hard not to appear selfish in front of other people. And in order to throw them off the trail, our brains often keep “us,” our conscious minds, in the dark. […]
Our aim in this book, therefore, is not just to catalog the many ways humans behave unwittingly, but also to suggest that many of our most venerated institutions—charities, corporations, hospitals, universities—serve covert agendas alongside their official ones. Because of this, we must take covert agendas into account when thinking about these institutions, or risk radically misunderstanding them.
I think the core claim is basically right, and is a very important fact about the world. I think the modern “consensus” view on the core claim is a little bit hard to tease out, and that makes it hard to tell whether the book is advancing a radical contrarian hypothesis or just repeating things everyone already knows.
I’d summarize the situation as: successful people mostly already agree with the book’s core claim and behave accordingly, but explicit social consensus mostly denies or downplays particular instances of the claim (except when politically convenient). That means that many people do just fine, but it’s a problem for those of us who try to lean heavily on explicit reasoning (and especially when we want to use explicit reasoning in social contexts and need to appeal to explicit consensus).
One obvious reason for the mismatch is that all else equal, worse people really will have more cynical views, and we all want to look like good people. This puts the book in an unfortunate place, where it can both be considered contrarian enough that it gets an unusual degree of skepticism while simultaneously arguing for claims obvious enough to leave people unimpressed.
One might expect the EA and rationalist communities to be a bad audience for this book because it’s stuff we already know so well. Unfortunately, I think the EA and rationalist crowds persistently underestimate these dynamics, not quite as much as normal humans but surprisingly close. (The problem is exacerbated by the rationalist community’s recent interest in introspection, without nearly enough skepticism about social motives affecting introspection.)
I think The Elephant in the Brain is at its most persuasive when pointing out hypotheses that look plausible in hindsight, making plausible a priori arguments, and pointing out consistencies and contradictions with the authors’ own experiences that they expect readers to share. I think that’s a fine style of argument.
The book also cites academic evidence for many of its claims, and sometimes tries to make more elaborate arguments. For the most part I didn’t feel that these arguments were sufficiently careful to be persuasive. I expect it to get more flak for this than other similarly-rigorous popular science writing, but to be honest I feel similarly most of the time even about books that superficially appear to be much more careful. Overall this didn’t affect my enjoyment of the book that much, though I think it does make it harder to support wholeheartedly with potentially-hostile audiences.
I do think the book throws a lot of hypotheses out there. Many of the particular hypotheses seem highly debatable or wrong in details. I think the overall hypothesis holds up despite that. I think the authors are reasonably forward about the situation and not too obnoxious about it; the tone of the book is also surprisingly pleasant and not very similar to reading a post on Overcoming Bias.
Overall I found The Elephant in the Brain to be a surprisingly enjoyable read. I don’t think it’s going to blow anyone’s mind, and it’s relatively light, but it will probably point out some new arguments and observations and may move some things from unconscious awareness to conscious attention.
I would prefer to live in a community where more people are having more open discussions about the topics raised in this book, in the same spirit as this book. I think the subject matter is quite important and unusually neglected, and would be discussed much more if not for the very hypocrisy that the book focuses on. Between that and being a fun read, I’d recommend that you at least take a look and see if you enjoy it, and that you think seriously about the core claims whether or not you want to invest the time to read the book.
(Related post on this blog: If we can’t lie to others, we will lie to ourselves.)
Annotating 10 random claims
The rest of this review will consist of my annotations for 10 uniformly random passages from The Elephant in the Brain. I committed to the following methodology:
- Pick a uniformly random line from my Elephant in the Brain ebook, and start reading from the beginning of that line.
- Quote the next complete sentence. Optionally add the preceding sentence and any number of the following sentences, until we get a chunk that represents a coherent claim.
- If we get an interesting claim that could be true or false, write down my view on how well-supported that claim is.
- Repeat until I get 10 claims.
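The sampling step above can be sketched in a few lines of Python. This is a hypothetical reconstruction, not the script actually used: the `lines` list stands in for the ebook’s text split into lines, and sampling is with replacement (a repeated line corresponds to a redundant claim, as happens once below).

```python
import random

def sample_claim_starts(lines, n_claims=10, seed=0):
    """Pick uniformly random line indices; reading starts at each sampled line.

    Sampling is with replacement, mirroring the review's method: a line
    drawn twice would yield a redundant claim rather than being rerolled.
    """
    rng = random.Random(seed)
    return [rng.randrange(len(lines)) for _ in range(n_claims)]

# Toy usage with a placeholder "ebook" of 5000 lines.
starts = sample_claim_starts(["line %d" % i for i in range(5000)])
assert len(starts) == 10
assert all(0 <= s < 5000 for s in starts)
```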
(At the meta level, I think this was an OK methodology but there would have been better sampling strategies to get interesting claims.)
My overall take from this exercise:
- The reviewed claims mostly seem true.
- They are mostly believable to the extent they are supported by anecdotal evidence and a priori reasoning.
- Often the book discusses a single explanation or effect without much reason to think the authors have picked out the most important one. (It’s possible in some cases I’ve forgotten about other relevant discussion from the book.)
- I think a more careful analysis would often come to different precise conclusions, but would leave the book’s overall picture intact.
- To the extent the book is really trying to convince skeptics, I would have strongly preferred more careful argument and empirical evidence about a smaller number of claims. I think that covering a larger number of topics was reasonable for a somewhat light read intended to expose readers to this kind of analysis.
When a high-status person chooses someone as a mate, friend, or teammate, it’s often seen as an endorsement of this associate, raising that person’s status. This (among other things) creates an incentive to win the affections of people with high status.
Solid a priori reasoning + supported by observation. The argument isn’t made totally clear and is actually a little bit subtle, but I believe it.
I don’t think this is the strongest incentive to win affections of high status people. The first-order incentive seems to be that high status folks are almost by definition useful allies, so of course you want to be allies with them. (I don’t think the rest of the section really depends on which of these explanations you prefer.)
Body language differs from spoken language—words—in at least one crucial regard. In spoken language, the mapping between symbols and meanings is mostly arbitrary. Words have a fanciful, airy-fairy quality to them; they aren’t anchored to anything fundamental. The only reason we express gratitude by saying “thank you,” instead of “merci” or “arigatou” or “uggawuggawugga,” is because that’s the way our people have always done it. Body language, however, is mostly not arbitrary. […] body language is inherently more honest than verbal language
Meh. Solid a priori reasoning + supported by observation. But overall the effect seems quite small most of the time, because virtually all of the signaling content (both of words and body language) is derived from social consequences. The fact that body language is ultimately anchored to real consequences is an important part of why it is conserved across cultures, but outside of a few exceptions it doesn’t seem like it has a big effect on the strength of the signal.
What of the demand for research? Here we also see a preference for prestige, rather than a strict focus on the underlying value of the research. To most sponsors and consumers of research, the “text” of the research (what it says about reality and how important and useful that information is) seems to matter less than the “subtext” (what the research says about the prestige of the researcher, and how some of that glory might reflect back on the sponsor or consumer).
Meh. I think this is roughly right, but I don’t think the evidence in the section is very persuasive and I don’t think it’s the main thing going on in demand for research, or even the main deviation between the status quo and the nominal purpose of research institutions. The evidence given in the section is:
- Students prefer universities with prestigious researchers even if they don’t engage with that research. But this has a ton of other explanations (e.g. taking prestige as an indicator of quality). And even within the realm of signaling explanations, because research quality is most important to the most discerning students, you’d expect students to pretend to care about the quality of research. (That would be consistent with the book’s main thesis but would undermine this specific hypothesis).
- Researchers tend to cluster in “hot” areas. I don’t believe that the proposed hypothesis actually does a better job of explaining this phenomenon than normal folk explanations. Citation-as-incentive would explain clustering, and that behavior can be better explained by people using citations as a proxy for researcher quality. And I think the social dynamics by which researchers pick areas can also largely explain the concentration (and the book’s model of research demand doesn’t explain those social dynamics better than folk stories about e.g. availability of mentorship or collaborators).
- If academics really cared about incentivizing work they would offer prizes. I think this claim is totally unclear on the merits, and the argument for prizes being better isn’t really made in the book.
- Referees care a lot about prestige of the authors. This is definitely true, but again I think it can be largely explained by referees using prestige as a prior / expecting others to use prestige as a prior and therefore subjecting some of their decisions to more scrutiny. Also, the political explanation for preferring prestige here is not super strong since most audiences won’t know who the referees are. To the extent that referees are behaving politically here, I don’t think that the book has correctly pinned down their motive (which is more likely to be about maintaining relationships or other jockeying within the field).
None of these schemes is unequivocally better or more accurate than the others. They’re just different ways of slicing up the same complex system—the reality of which is even more fragmented than the “committee” metaphor suggests. Psychologists call this modularity. Instead of a single monolithic process or small committee, modern psychologists see the brain as a patchwork of hundreds or thousands of different parts or “modules,” each responsible for a slightly different information-processing task
[Not a claim to agree/disagree with]
After a pint and a half of blood was drawn, according to Belofsky, [detailed description of elaborate and ineffective treatment of King Charles II]. Not surprisingly, the king died on February 6. But notice all the conspicuous effort in this story. If Charles’s physicians had simply prescribed soup and bed rest, everyone might have questioned whether “enough” had been done.
[Not a claim to agree/disagree with] I don’t have a strong view of whether this treatment strategy was in fact produced by complex incentives to show care, but that seems quite plausible.
Religious badges are reinforced at home and church, but they have the most value (as badges) out in public, in the market or town square.
Meh. Plausible a priori reasoning + consistent with observation. But I’m not sure whether the claim really makes any good non-trivial predictions. I’m also not sure it’s consistent with evidence about religious behavior in private, without effectively defining something to be a “badge” iff it’s used in public (at which point you are giving up another plausible prediction).
The story in the book emphasizes religious badges as brand, making the claim: “If I behave badly, my Jewish peers are liable to punish me for tarnishing our collective reputation.” It’s not clear to me this mechanism works as an honest symbol without better mechanisms for punishing marginally religious folk who display religious badges. I think it’s pretty likely that other effects are at least as important.
As a rule of thumb, whenever communication is discreet—subtle, cryptic, or ambiguous—it’s a fair bet that the speaker is trying to get away with something by preventing the message from becoming common knowledge
Solid a priori reasoning + supported by observation. The story given in the book also basically seems right to me: “One way to model scenarios like this is to imagine a cast of peers waiting in the wings, eager to hear what happened on the date [one of several examples]. This is the audience, real or imagined, in front of whom the couple is performing.” A good example of something that seems right, has seemed more right to me as I’ve gotten better at introspection, but seems unpalatable to acknowledge.
First, and perhaps most important, the desire to signal loyalty helps explain why we don’t always vote our self-interest (i.e., for the candidates and policies that would bring us, as individuals, the greatest benefit). Rather, we tend to vote for our groups’ interests [cite Haidt 2012]. Naturally, on many issues, our group and self-interests align. But when they don’t, we often choose to side with our groups. In this way, politics (like religion) is a team sport.
Reasonably well-supported by observation. If you don’t have something like this story you’d be confused by a lot of things (e.g. see Scott’s puzzlement about class struggle), and there aren’t other standard stories that explain the same evidence.
I don’t know if this is really a critical driver of voting behavior overall (groups vote against their interest a lot, there are other things going on). But given that it also explains a lot of other related observations, it seems OK.
But schools today look very little like Plato’s Academy. Specifically, our modern K–12 school system is both compulsory and largely state sponsored. How did we get here?
Compulsory state-sponsored education traces its heritage to a relatively recent, and not particularly “scholarly,” development: the expansion of the Prussian military state in the 18th and 19th centuries. Prussian schools were designed to create patriotic citizens for war, and they apparently worked as intended. But the Prussian education system had many other attractive qualities (like teacher training) that made it appealing to other nations. By the end of the 1800s, the “Prussian model” had spread throughout much of Europe. And in the mid-1800s, American educators and lawmakers explicitly set out to emulate the Prussian system.
Seems plausible, but I expect the simplicity of the situation is greatly overstated. The citation is to Wikipedia (which I expect overstates the importance of the Prussian model, both because it is written by enthusiasts and because uncareful scholarship generally promotes an unrealistically simple picture of what’s going on).
It seems useful to have a lot of background facts like this in mind to build general intuitions about what is going on, but I don’t think this one is strong enough to license any substantive inference (and the book doesn’t present enough evidence for me to feel comfortable assimilating this kind of thing into my background understanding with much weight).
The next line: “This suggests that public K–12 schools were originally designed as part of nation-building projects, with an eye toward indoctrinating citizens and cultivating patriotic fervor” seems a little bit true, but “suggests” should really be replaced by “weakly suggests.”
Meanwhile, close friends want to distinguish themselves from casual friends, and one of the ways they can do it is by being unfriendly, at least on the surface. When a close friend forgets his wallet and can’t pay for lunch, you might call him an idiot. This works only when you’re so confident of your friendship that you can (playfully) insult him, without worrying that it will jeopardize your friendship. This isn’t something a casual friend can get away with as easily, and it may even serve to bring close friends closer together.
Solid a priori reasoning + supported by observation. Not going to blow anyone’s mind as a claim, but it’s a good example to illustrate the basic phenomenon, and an important phenomenon to have on the table for later use. I suspect there is other stuff going on in these cases, but that’s not really important to the book’s points.
[If humans were oblivious to each other’s consumption choices, then] standardization would occur in other product categories like cars and houses.
I think they give reasonably persuasive evidence:
- Standardized goods would be considerably cheaper for fixed quality. (Some sloppiness here but looks basically right.)
- Anecdotally it feels undignified to have the same kind of car or house as other people, so presumably that’s part of the motive.
- Fashions change frequently, generating extra diversity.
- For strictly personal goods people seem to have significantly lower diversity.
In communicating discreetly with each other, what are the nobles hoping to achieve? […] if their meaning is clear to individual eavesdroppers, they hope their plans can remain closeted knowledge rather than becoming common knowledge.
[Redundant with one of the earlier claims. By catch and release we therefore estimate that there are O(100) morally distinct claims in the book :)]
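The O(100) figure can be recovered with a rough birthday-style capture-recapture calculation. This is a hedged sketch of one standard estimator, not necessarily the arithmetic the joke had in mind: with N total claims and n uniform draws, the expected number of colliding pairs is about n(n−1)/(2N), which we invert given the one observed repeat.

```python
def birthday_estimate(n_samples, n_collisions):
    # With N total claims and n uniform draws, the expected number of
    # colliding pairs is roughly n*(n-1)/(2*N); solve for N given the
    # observed collision count.
    return n_samples * (n_samples - 1) / (2 * n_collisions)

# 10 sampled passages, 1 repeat -> a few dozen distinct claims,
# i.e. O(100) up to the usual looseness of big-O jokes.
est = birthday_estimate(10, 1)
assert est == 45.0
```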
These are just a few of the inconsistencies between our civic ideals and our actual behavior. To explain these and other puzzles, we’ll have to make use of another political archetype—one whose motives are, not surprisingly, less noble than those of the altruistic Do-Right.
I basically agree that the puzzles pointed to in this chapter are better explained by the authors’ model. The three puzzles indicated in the section:
- Voters don’t much change their behavior based on the decisiveness of their vote, even in cases where they have a good understanding of decisiveness. I agree that in general people’s behavior does not seem sensitive enough to stakes to be plausibly consistent with the do-right motivation, even given their level of knowledge and innumeracy. I didn’t check the citation to Caplan 2007, but I have significant concerns about the reported study—e.g. swing votes are very rarely the only thing on the ballot except in weird cases where knowledge of swinginess may be less widespread. So I suspect this section overstates the case, but I still believe it.
- People are generally not interested in becoming informed. I’m less convinced here, since most people remain similarly uninformed for private decisions. In general I don’t think a hypocrisy story explains the evidence as well as an irrationality story.
- People have strongly entrenched ideological views, with strong emotional attachments. I agree that this mostly happens in situations where there are covert political incentives to hold certain beliefs, and so is a strong indication that such incentives are shaping behavior.