
The Rational Question

Kevin Dorst

How rational are people? 

A common answer: ‘Not very—instead, we’ve learned from social psychology and behavioural economics that people are systematically irrational.’

I disagree. I think we don’t yet know how (ir)rational people are. Why not? Because there’s a problem with the basis of the common irrationalist answer. The problem I have in mind isn’t to do with replication failures [2], publication bias [3], or statistical malpractice [4]. The problem is not with the empirical observations, but with their normative foundations.  

The problem is this. The claim that people are systematically irrational owes much of its purchase to the empirical fact that they violate standard normative models of rational belief and action. But those models are wrong––very wrong––about how rational people would think and act. 

That raises a question: could people’s violations of these normative models demonstrate failures of the models, rather than failures of the people? I’m going to argue that sometimes—perhaps often—the answer is ‘Yes’: when we refine these models, we find that several of the most widely-maligned “biases” turn out to be rational.  As a result, it remains an open question whether the common irrationalist answer deserves to be so common.

**

The story of this answer begins in the mid-20th century. Although initial work suggested that simple rational models fit well [5] with human judgments, the tide soon turned. In the early 1970s, Daniel Kahneman and Amos Tversky published [6] a series [7] of influential papers [8] that demonstrated that people make (apparently) perverse mistakes in their reasoning. The explanation offered was that these mistakes were the outputs of simple rules of thumb––“heuristics”––that often gave accurate judgments, but also led to predictable errors––“biases”. The heuristics and biases [9] research program was born. Its fruits include a list of nearly 200 biases [10] that pervade everyday reasoning, as well as the new subfield of behavioural economics [11] that is devoted to grappling with the consequences of those biases. ‘Irrationality’ [12] is the new buzzword.

For example, in his provocatively titled Predictably Irrational [13], here’s how Dan Ariely sets up his foil:

In this book, when I mention the rational economic model, I refer to the basic assumption that most economists and many of us hold about human nature: the simple and compelling idea that we are capable of making the right decisions for ourselves. (xxix)

Ariely thinks this basic assumption is wrong: ‘My goal… is to help you fundamentally rethink what makes you and people around you tick’ (xii).  Why? Because ‘we are really far less rational than standard economic theory assumes. Moreover, these irrational behaviours of ours are neither random nor senseless. They are systematic. And since we repeat them again and again, predictable’ (xxx). This irrationalist picture is painted even more bluntly in Cordelia Fine’s wonderful book, A Mind of its Own [14]. Although we all trust our brains to help us think rationally, she warns us that 

the truth of the matter… is that your unscrupulous brain is entirely undeserving of your confidence. It has some shifty habits that leave the truth distorted and disguised. Your brain is vainglorious. It’s emotional and immoral. It deludes you. It is pigheaded, secretive, and weak-willed. Oh, and it’s also a bigot. (2)

That’s the big-picture irrationalist narrative. To get a better sense of its basis, let’s take a look at a couple of examples of the (apparent) biases we exhibit.

First, imagine a time when you’d invested resources into a course of action whose prospects were starting to look dim. Perhaps you were pouring energy into a doomed relationship, or pursuing a career path that ended up going nowhere, or were simply standing in a line that was going far slower than you expected. During this process, did you ever think to yourself, ‘I’ve devoted so much to this already… I can’t give up now’? I bet you did. In doing so—we are told—you committed the sunk cost fallacy [15]: the irrational tendency to give weight to past investments in deciding what to do.

Why is this irrational? Because the past is the past: there’s nothing you can do now that will get back your wasted time and effort. What you have control over is the future, and therefore the only things that should affect your decision are what will happen in the future––the wasted resources are completely irrelevant. Nevertheless, the sunk cost fallacy is pervasive. Because of it, people stick with bad financial investments [16], stay in bad relationships [17], and remain on hopeless career trajectories [18]. All because of—we are told—an irrational bug in human decision-making.

Consider another example. How likely should you have judged it, on 7 November 2016, that Trump would win the U.S. election on the following day? 30%? Sounds reasonable. But if you’re like most people [19], your answer is in fact higher than the answer you would’ve given if I’d asked you on November 7th. Why? Because—again, we are told—you are prone to hindsight bias [20]; or, more colloquially, the “I-knew-it-all-along effect”. This is the tendency to retroactively think that events were more predictable than they in fact were. More precisely: when asked how likely a fixed body of evidence makes a given event, people consistently give higher answers if they know that the event occurred than if they do not know whether it occurred. Thus their knowledge about the event must be irrationally “seeping in” to their post hoc assessment of the evidence.

It’s not hard to imagine the disastrous consequences [21] of this bias. If after the fact people think that events were more predictable than they in fact were, then they will tend to unfairly blame victims of unpredictable harms (‘they should’ve avoided it’), to scapegoat those who had a chance to avert unpredictable disasters (‘they should’ve known and taken action’), and to systematically fail to learn of their own limitations (‘I knew it all along’). All because of—again, we are told—an irrational bug in human reasoning.

The sunk cost fallacy and hindsight bias are just two examples—there are literally hundreds more [10]. That is why many think it’s been settled that people are irrational.

**

Here’s why I think they’re wrong.  

To demonstrate irrationality you have to show two things: first, that people exhibit a certain pattern of thought or action; and second, that rational people would not do so. How have defenders of the irrationalist answer supported this second claim? Although the methods are disparate and varied, the most common approach is to use the standard “rational actor” models from economics and decision theory [22]. These models use numerically precise probability [23] and utility [24] functions to model rational beliefs and desires, and give a recipe [25] for determining how to act rationally in light of them.

As any economist will tell you, these are simplified models. And if they specialize in behavioural economics [11] or judgment and decision-making [26], they’ll further tell you that we need to refine and modify those models in order to account for the subtleties of human thought and action. So far, I completely agree.

But often the claims go further. Often it’s claimed that such refinements are needed precisely because humans are irrational and biased—that if we were rational, we would decide as “homo economicus [27]” or “Econs” do; but since we are only human, we cannot manage to do so.

Here are some examples. Nobel-prize-winning behavioural economist Richard Thaler opens his popular book on the subject [28] by discussing the ways in which standard economic models fail to predict human action—and immediately follows up by telling us that this is because humans are prone to ‘overconfidence’ and ‘countless other biases’ (6). Ariely, after telling us that we are ‘far less rational than standard economic theory assumes’, goes on to give his favourite examples—including the decoy effect [29] (Ch. 1), anchoring effects [30] (Ch. 2), and ‘arbitrary coherence’ [31] (Ch. 2)—all of which are claimed to be irrational because they (apparently) violate standard axioms of decision theory [32]. And in a widely-read textbook [33] on the subject, Hastie and Dawes are explicit about the decision-theoretic foundations of their critique of human rationality (16–21)—for example, saying that it took the advent of modern decision theory to diagnose the irrationality of the sunk cost fallacy (40).

In short, there is a widespread tendency to treat human deviations from standard economic models as symptoms of bias and irrationality. That is what I’m contesting. I’ll do that by arguing three things. First, deviations from such models are often perfectly rational; second, this is true of at least some widely-maligned “biases”; and third, it follows that until we scrutinize and refine the applications of such models, we will not know how many of the putative biases are truly indicators of irrationality. (To be clear: these claims don’t cast doubt on the importance of the turn from classical to behavioural economics—what they cast doubt on is the irrationalist interpretation of that turn.)

Begin with the first claim. It is well-known and uncontroversial that some violations of standard decision-theoretic models are not at all “irrational”. (At least, not in the sense of the term that’s relevant to the question of whether humans are rational.) For example, such models predict that rational agents will never forget things [34] and will never be uncertain about mathematical truths [35]—but there is no interesting sense in which you are “irrational” if you forget what you ate for breakfast last Tuesday or are unsure how much a 15% tip on a $43 bill amounts to. 

Other instances of this pattern are less well appreciated. For example, here are two predictions of the standard models:

(i) If you are unsure whether you are rational, then you are thereby irrational. This is sometimes called “Epistemic Murphy’s Law [36]”: ‘if something could go wrong, it already has’. It is a result of the fact that the models don’t treat rational beliefs as things that can vary across possibilities, meaning they require you to be certain of what the rational beliefs are [37].

(ii) If you have any imprecision in your preferences, you are thereby irrational. For example, suppose you are choosing between careers: you could become a musician or a scientist. You have no clear preference between them, and so can’t decide. Now I offer to give you an extra $5 if you become a scientist. If after receiving this offer you still lack a clear preference between the two, then your preferences are imprecise––and, according to the model, irrational. (This is a result of the fact that the “expected utilities” in the standard model are always numerically precise [38]––meaning that if they are perfectly balanced between two options, then adding any value to one of them will tip the scales.)
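
To make prediction (ii) concrete, here is a minimal sketch of the standard model’s verdict (in Python, with made-up utility numbers purely for illustration): because every option gets a numerically precise expected utility, any sweetener, however small, must produce a strict preference.

```python
# Toy expected-utility comparison (illustrative numbers only).
# In the standard model, each option gets a precise expected utility,
# and rationality requires preferring whichever number is larger.

utility_musician = 100.0     # hypothetical value of the music career
utility_scientist = 100.0    # hypothetical value of the science career
sweetener = 0.001            # even a tiny $5 bonus adds *some* utility

options = {
    "musician": utility_musician,
    "scientist + $5": utility_scientist + sweetener,
}

# The model's verdict: the $5 breaks the tie, so remaining indifferent
# after the offer counts as a violation of the model.
best = max(options, key=options.get)
print(f"The standard model says you must now strictly prefer: {best}")
```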

These predictions are clearly wrong. It is not irrational to be unwilling to bet your life on your own rationality, nor to be unmoved in your career decision by my offer of $5. In short, there are a variety of violations of standard decision-theoretic models that do not indicate irrationality. 

What of it? This obviously doesn’t show that psychological critiques of rationality that invoke such models will turn out to be mistaken. But I think what it does show is that we need to scrutinize the details of those critiques carefully: What exactly are they assuming about how a rational person would think and act? And is that a reasonable assumption to make—or is it another bad prediction of the standard models?

To answer these questions, we need to see how the predictions change when we refine the models in ways philosophers (and others) have argued we must. That means using models that allow rational people to forget things [39], or to have mathematical uncertainty [40], or to have doubts about their own rationality [41], or to have imprecise preferences [42] and imprecise opinions [43], or to have nuanced [44] and changeable [45] values, and so on.

My claim is that such refinements have the potential to overturn the common wisdom on at least some widely-maligned biases. To see why, let’s see what such refinements reveal about our two old friends: the sunk cost fallacy and hindsight bias.

**

Start with the sunk cost fallacy. Recall that this is the (putatively irrational) tendency to let your past investments increase your commitment to a given course of action. In a fascinating 2004 paper [46], Tom Kelly challenges the conventional wisdom on this issue. As it turns out, there are a variety of reasons why past investments in a course of action often make it rational to be more inclined to stick with it—meaning that the sunk cost fallacy is often rational (or, if you prefer, the “fallacious” instances of such reasoning are very rare).

Take a simple example: suppose you’ve decided (swayed, perhaps, by my $5) to pursue a career as a scientist. You’ve started in a graduate program, have been working hard for the past two years, have gotten your friends and family to support you, and so on. But for the last month you’ve felt thoroughly uninspired by the prospect of this sort of career, and you find yourself wavering in your commitment. Question: could it be rational for your awareness of your past investments to increase your commitment to sticking with it? Absolutely. Kelly points out there are at least three relevant types of considerations; I’ll call them ‘outcome effects’, ‘evidential effects’, and ‘redemptive preferences’.

First, outcome effects. The fact that you have invested in a career path makes the outcomes you’ll face if you give up on it different from what they would’ve been if you hadn’t invested in it to begin with. For example, you may know that you’ll feel regret—or even shame––if you give up at this stage, after others have invested time and energy in supporting you. You may also suspect that others will view you as fickle, or that the very act of giving up will weaken your tenacity to stick with future resolutions. These are possible negative consequences, and it is perfectly rational to treat them as reasons to not give up.

Second, evidential effects. The fact that you have invested in this career path is good evidence that you have a tendency––at other times––to find the career to be worth it. You may not have felt that way for the past few weeks, but your past actions and feelings are good evidence that you will feel that way again. This is evidence about your future preferences, and it is perfectly rational to treat it as a reason to not give up.

Finally, Kelly points out that it is often perfectly rational to have redemptive preferences: to prefer that sacrifices or expenses not be in vain. This is easy to appreciate in extreme examples. Imagine that a friend of yours has recently passed away. You are sorting through her things when you come across a book of poetry she was working on—which (it emerges) she was keeping secret, but intended to try to publish. Now you are considering whether to try to publish the book posthumously.

Obviously there are strong reasons to do so, but suppose there are also strong reasons not to: it would take a large amount of work, it may not succeed, and so on.  You’re on the fence. Here’s a question that’s surely relevant: how much time, energy, and emotion did your friend devote to this book? If you find out that she wrote it quickly and casually one weekend, that makes publishing it seem less important. Conversely, if you find out that she worked on it for years as one of her driving passions, that surely would push your decision in the other direction. This seems perfectly reasonable.

But wait! In so deciding you would be allowing past investments to influence your current decision. Does that mean you’ve committed the “sunk cost fallacy”? Of course not. You are perfectly rational in preferring that your friend’s time and effort not be in vain––you have a redemptive preference that such investments are not wasted. That preference is neither mistaken nor fallacious––if anything, it is admirable and compassionate. Thus it’s sometimes rational to prefer that other people’s time and effort not be in vain. And if that is rational, surely it can likewise be rational to prefer that your own time and effort not be in vain!

Therefore past investments in a course of action can make it rational for you to increase your commitment to it. The thought, ‘I don’t want this time in graduate school to have been a waste’ is a perfectly rational thought to have––though it can, of course, be taken too far. (Similarly for more mundane cases, like not wanting your night at the casino to have been a flop. There are many subtle parts to this final point that I can’t go into here—see Kelly’s paper [46] for further discussion.)

Upshot: there are a variety of factors that can make it rational to let past investments increase your commitment to a given course of action, meaning the “sunk cost fallacy” is often perfectly rational.

**

Now turn to hindsight bias: the tendency to think that events were more predictable after the fact than you thought they were beforehand. In a brilliant 2019 paper [47], Brian Hedden shows that this sort of effect can be––in fact almost always is––perfectly rational.

Begin with an example that’s structurally similar, but easier to think about. Question: will my brother Chris run in a race (say, a half marathon) this year? Here’s some relevant evidence: he runs regularly; he and I have talked about running a race together; he lives in Florida; this year I’m living in Europe.

Rather than trying to figure out whether Chris will run a race this year, I want you to use this evidence to evaluate a slightly different question: how strongly do you think my evidence supports the claim that Chris will run a race? Well, you know I have more evidence than you do (I talk to him regularly, after all)—but you don’t know what that evidence is. For all you know I have strong evidence that he will; for all you know I have strong evidence that he won’t; for all you know I have some middling amount of evidence (he’s expressed an interest but not a commitment); etc. So if you had to write down a number between 0% and 100% estimating how likely my evidence makes it that Chris will run a race, I’m guessing you’d pick some middling number—say, 30%.

Now suppose you’re walking the streets of Florida and you bump into Chris. He tells you that as a matter of fact he will run a race this year. Should this new information lead you to revise your estimate of how likely my evidence made it that he would? Absolutely. You should think to yourself, ‘Well, since Chris is going to run a race, there’s a good chance that Kevin knew about it. So I should revise my estimate for how likely Kevin’s evidence made it to something much higher than 30%––maybe 70% or 80%.’

This might seem puzzling. After all, you found out nothing directly about me or my evidence, so why did your estimate change? Because you have reason to think that my evidence is correlated with the truth of the matter. If you learn that I had evidence that Chris will run a race, that should boost your confidence that he’ll run a race. Therefore: if you learn that he will run a race, that should boost your confidence that I had evidence that he will.
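
Here is a minimal numerical sketch of that reasoning, in Python. The three evidential states and their probabilities are invented, and it assumes you defer to my evidence (conditional on my evidence supporting the claim to degree s, your credence that Chris will run is s):

```python
# Toy model: you are unsure what Kevin's evidence supports.
# Hypothetical evidential states with made-up probabilities;
# 'support' = how likely Kevin's evidence makes it that Chris runs.

states = [
    # (your probability that Kevin is in this state, evidential support)
    (1/3, 0.10),   # Kevin has strong evidence that Chris won't run
    (1/3, 0.30),   # Kevin has middling evidence
    (1/3, 0.80),   # Kevin has strong evidence that Chris will run
]

# Your estimate, before learning anything, of how likely Kevin's
# evidence makes it that Chris will run:
prior_estimate = sum(p * s for p, s in states)

# Now you learn Chris WILL run. Assuming you defer to Kevin's evidence
# (your credence that Chris runs, given a state, equals its support),
# Bayes' rule reweights each state in proportion to that support.
prob_chris_runs = sum(p * s for p, s in states)
posterior_estimate = sum(p * s * s for p, s in states) / prob_chris_runs

print(f"Estimated support, before learning the outcome: {prior_estimate:.2f}")
print(f"Estimated support, after learning he will run:  {posterior_estimate:.2f}")
# With these numbers the estimate rises from 0.40 to about 0.62 --
# the same structure as revising the 30% estimate upward in the story.
```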

What Hedden points out is that this is a pervasive phenomenon: finding out whether something is true should affect your estimate of how much evidence there was in favour of it. This is due to two facts. First, in general: evidence is correlated with truth; if you learn that there was evidence for X, that should boost your confidence in X. Second, probabilistic relevance is symmetric: if learning that X is true should boost your confidence in Y, then learning that Y is true should boost your confidence in X. (See here [48] for an explanation and here [49] for an explanation of the explanation.) Since learning that there was evidence for X should boost your confidence in X, it follows that learning that X is true should boost your confidence that there was evidence for X.
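
For readers who want to see why relevance is symmetric, here is the standard one-line derivation (assuming X and Y each have positive probability). Since $P(Y \mid X) = P(X \wedge Y)/P(X)$ and $P(X \mid Y) = P(X \wedge Y)/P(Y)$:

$$
P(Y \mid X) > P(Y) \;\iff\; P(X \wedge Y) > P(X)\,P(Y) \;\iff\; P(X \mid Y) > P(X).
$$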

How does this result bear on hindsight bias? Because––as Hedden points out––exactly parallel reasoning applies to one’s own case, when one is assessing one’s own evidence. Return to the question of whether Chris will run a race, but now consider it from my perspective. It turns out that I have some ambiguous evidence on the matter: he told me he’d like to run one, but I couldn’t tell by his tone how serious he was. Because of this, I was (rationally) unsure what my own evidence supported about the matter––I had “higher-order uncertainty [37]”. So if I had to write down a number estimating how likely my evidence made it that Chris would run a race, I would choose some middling number––say, 60%.  But just as you are unsure what my evidence supports, I too am unsure what my evidence supports. Because of this, I should follow reasoning precisely parallel to yours.

In particular, I should think to myself, ‘Maybe I misread him, and he was being more serious than I realized––so maybe my evidence made it more than 60% likely that he’ll run a race.’ Moreover, I should think that what my evidence supported is correlated with truth: if I were to learn that my evidence did make it more than 60% likely that he’ll run a race, that would boost my confidence that he’ll run a race. It follows (by the symmetry of probabilistic relevance) that if I were to learn that he will run a race, that would boost my confidence that my evidence made it more than 60% likely that he’ll run a race.

Therefore when I learn that Chris is running a race, I should think to myself, ‘Ah, I probably was mis-reading him––I bet my evidence made it more than 60% likely that he would.’ And that, of course, is an instance of hindsight bias: upon learning that an event will happen, I now have a higher estimate for how likely my prior evidence made it. Thus I now think that it was more predictable than I did previously; but this doesn’t demonstrate irrationality—what it demonstrates is a rational sensitivity to the correlation between evidence and truth.
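
The structure of this point can also be put compactly, under a simplifying deference assumption (this is a sketch of the idea, not Hedden’s exact formalism). Let $S$ be the degree to which my evidence supports the claim $X$ that Chris will run, and suppose I defer to my evidence, so that $P(X \mid S = s) = s$ and hence $P(X) = E[S] > 0$. My prior estimate of the support is $E[S]$; conditioning on $X$ gives

$$
E[S \mid X] \;=\; \frac{E[S \cdot P(X \mid S)]}{P(X)} \;=\; \frac{E[S^2]}{E[S]} \;=\; E[S] + \frac{\mathrm{Var}(S)}{E[S]} \;\geq\; E[S],
$$

with strict inequality whenever I am genuinely unsure what my evidence supports ($\mathrm{Var}(S) > 0$). Learning that the event occurred should therefore raise my estimate of how strongly my prior evidence supported it.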

Upshot: Whenever you should be uncertain about the rational response to your own evidence––as, arguably [41], you virtually always should––hindsight bias can be rational.

**

What to make of these results? I think they show that the sunk cost fallacy and hindsight bias are to be expected from rational people. Moreover, similar lessons are emerging for other apparent biases. In a 2008 paper [50], Tom Kelly shows that “biased assimilation [51]”––the tendency to give greater weight to evidence that supports your prior beliefs––can often be rational. In a 2015 paper [52], Jacob Nebel shows that “status quo bias [53]”––the tendency to prefer a state of affairs because it’s the status quo––can be rational according to a variety of plausible theories. And in a recent paper [54], I’ve argued that there is a methodological flaw with the empirical studies that purport to demonstrate overconfidence—in fact, many of the observed “overconfidence effects” are to be expected from rational people. (See this blog post [55] for an accessible summary.) Each of these critiques exploits the same structural flaw—namely, that the irrationalist interpretations rely on an incorrect model of rationality. And collectively they suggest that this may be a broader problem with the common irrationalist narrative.

So what are we to do about it? I think we need to look back at the details of the claimed demonstrations of irrationality [10] to see whether their philosophical presuppositions are appropriate. When they are not, we need philosophers to develop theories that are appropriate, and psychologists to test those theories’ predictions.

When we do so, what will we find? How rational are people? I don’t claim to know. But I do claim that we should no longer be satisfied with the common irrationalist answer. Instead, we should approach the issue with minds that are open to new theories and skeptical of sweeping stories. The question of rationality remains an open question—one that people from diverse fields can fruitfully work together to address.

**

Kevin Dorst [56] is a Junior Research Fellow in Philosophy at Magdalen College, Oxford and an Assistant Professor in the Philosophy Department at the University of Pittsburgh. He discusses these issues further on his blog [57].