
Does Logically Incoherent Decision-Making Really Have Negative Consequences?


Formal Metadata

License
CC Attribution 3.0 Unported:
You are free to use, adapt and copy, distribute and transmit the work or content in adapted or unchanged form for any legal purpose as long as the work is attributed to the author in the manner specified by the author or licensor.

Content Metadata

Abstract
As explained in this video, it is commonly assumed that logically incoherent decision-making is irrational and costly, in that it can lead, for example, to a decrease in happiness or health. An example of this would be a patient reacting differently if doctors speak of a 90% success rate for surgery instead of a 10% failure rate for the same procedure. The purpose of the study presented here was to examine whether there is evidence in the existing literature that incoherent decision-making actually has negative consequences and is rightly seen as irrational. * GERD GIGERENZER is Director at the Max Planck Institute for Human Development and Director of the Harding Center for Risk Literacy, both in Berlin, Germany. This LT Publication is divided into the following chapters: 0:00 Question 2:13 Method 4:41 Findings 6:06 Relevance 10:29 Outlook
Genre: Lecture/Conference, Meeting/Interview
Transcript: English (auto-generated)
What interests me, and what keeps me sometimes sleepless, is how people make decisions under risk and uncertainty. According to much of research today in behavioral economics, in psychology, in other fields, humans systematically violate certain norms that are called norms of coherence.
These are norms like consistency, transitivity, and other content-blind norms that have no content, no context, no causality, no time, nothing. And my question is: are these good norms?
The assumption is that people who violate these norms incur costs. For instance, they suffer losses in wealth, health, or just happiness, or whatever it is.
In this paper, we have looked at the question: is there evidence that violations of coherence actually lead to these kinds of costs? And why is this important?
Because the assumption today in many fields is that you and I, people in general, violate coherence, that this is an error, and that it leads to costs.
And it's used as an explanation for all kinds of human disasters. As a consequence, the government has to step in and lead us, nudge us, to where we want to be. And this is the political side of the question: is there any evidence
that these experimental demonstrations justify governmental paternalism in the 21st century? The way we approached this question is that we defined a number of so-called cognitive errors,
cognitive illusions, or whatever term is used, which are all violations of coherence. Here's one example. Someone, perhaps you, has a severe heart condition and is considering heart surgery. It's a dangerous surgery. You ask your doctor what the prospects are.
The doctor now has two ways to answer this question, which are logically equivalent. One is: there's a 90% chance that you survive. The other is: there's a 10% chance that you die.
Human beings react differently to these: they are more willing to accept the operation if it's positively framed, as a 90% chance to survive, and less willing to go for it otherwise.
According to the coherence literature, this is an error, because the two statements are logically the same thing. This is what's called framing, and this is what we are talking about. One can easily defend people here:
they are thinking, reading between the lines. They know that the doctor is conveying a message that goes beyond mere coherence. So we were looking at framing, at intransitivity, and at many other violations of coherence,
searching the entire literature. And since one may miss something, we also looked through the review articles on these so-called cognitive errors for studies showing that they actually have an impact on health, on wealth, on happiness outside the laboratory.
And in addition, we asked our colleagues for studies that we might have missed that can show that violations of coherence actually have costs.
So what were the results? As I mentioned, almost everyone in these fields assumes that violations of coherence are costly. We looked first at the so-called money pump. A money pump arises if you prefer A over B, B over C, and then C over A.
That's intransitive. And the argument is: if you're willing to pay a little bit for your preferences, you are a money pump, and I can take all of your money. We looked at whether there is evidence for this. We found no evidence in the literature.
And in the rare cases where someone committed an intransitive circle, people very quickly learned. We then looked at framing, at preference reversals, and at all the other major violations, such as Bayesian inconsistency and violations of the additivity of probabilities.
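The money pump argument described above can be made concrete with a small simulation. A minimal sketch follows; the preference cycle, the fee, and the number of rounds are illustrative assumptions, not figures from the talk:

```python
# Hypothetical sketch of the "money pump": an agent with intransitive
# preferences (A > B, B > C, C > A) who will pay a small fee to swap
# to a preferred item can be cycled indefinitely.

# Map each item to the item it is preferred over: A > B, B > C, C > A.
prefers_over = {"A": "B", "B": "C", "C": "A"}

def run_money_pump(start_item: str, fee: float, rounds: int) -> float:
    """Total fees extracted by repeatedly offering the agent
    the item it prefers to the one it currently holds."""
    holding = start_item
    extracted = 0.0
    for _ in range(rounds):
        # Find the item preferred over the current holding (e.g. A > B).
        offer = next(x for x, worse in prefers_over.items() if worse == holding)
        holding = offer   # the agent accepts the trade...
        extracted += fee  # ...and pays the small fee each time
    return extracted

total = run_money_pump("B", fee=0.10, rounds=30)
print(f"Extracted after 30 trades: ${total:.2f}")  # prints $3.00
```

Because the preference relation is cyclic, the loop never terminates on its own: every holding has some item preferred over it, so the fees accumulate without bound as the rounds increase.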
So the bottom line is: we could not find any consistent evidence that violations of coherence incur losses in health, in wealth, in happiness, or anything else.
What is the relevance of these findings? First, it shows that large parts of the social sciences feature a notion of rationality for which there is no evidence that violating it has costs.
That suggests that we may have the wrong notion of rationality. The alternative is to put aside this purely logical notion of rationality
and replace it with what we call an ecological notion of rationality. That is, a notion of rationality that is sensitive to the structure of the environment, to the goals, to the content, to the context,
and which avoids mistaking intelligent behavior for irrationality. For instance, to illustrate the point with a very simple example, one of the most featured demonstrations of incoherence is called the Linda problem.
How does it work? You read a story about a person named Linda, who is 31 years old and studied philosophy, and it's written in a way that suggests she might be a feminist.
There's no mention that she's a bank teller. But then the question is: what is more likely? Linda is a bank teller, and you say, whoa, what? Or: Linda is a bank teller and active in the feminist movement, and you say, well, that at least makes sense. But then, by coherence measures, you are wrong,
because the probability of A and B can never be larger than the probability of A alone. It's like a set and a subset. That's the reasoning. But the reason that people draw a different conclusion is not irrationality.
They think. They're intelligent. And the norms in this case are content-free norms. According to the coherence norm, the only things you should attend to are the words probable and and. Nothing else counts.
It's the set-subset relation. And probable must mean mathematically probable, and and must mean the logical AND. But if you just have a look in the OED, the Oxford English Dictionary, you see that this is an illusion: probable means many things.
So when we did an experiment and made it clear that it's about mathematical probability, by giving the description of Linda and asking: there are a hundred people like Linda, how many are bank tellers? How many are bank tellers and active in the feminist movement? The entire so-called illusion disappears. People are smart. They use intelligence.
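The coherence norm at issue here is the conjunction rule: for any events A and B, P(A and B) can never exceed P(A), because the conjunction picks out a subset. A minimal sketch in the frequency format just described; the counts and probabilities are illustrative assumptions, not figures from the study:

```python
import random

random.seed(42)

# 100 hypothetical people "like Linda": each is a bank teller with
# probability 0.05 and a feminist with probability 0.9 (made-up numbers).
people = [(random.random() < 0.05, random.random() < 0.9) for _ in range(100)]

bank_tellers = sum(1 for teller, _ in people if teller)
tellers_and_feminists = sum(1 for teller, feminist in people if teller and feminist)

# The people counted in the conjunction are a subset of the bank tellers,
# so the conjunction count can never be larger -- the set-subset relation.
assert tellers_and_feminists <= bank_tellers
print(bank_tellers, tellers_and_feminists)
```

In the frequency format the subset relation is visible by construction, which is one way to read the finding that the "illusion" disappears when the question is posed in counts rather than with the ambiguous word probable.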
So one of the key results is that we need to rethink our standards of rationality, and also our standards for calling people irrational. As this example shows, human intelligence is much smarter than simple logic.
With simple logic alone, we would understand very little. Back to this example: when I say, "This evening I invited friends and colleagues," the and doesn't mean the intersection. It means the logical OR,
the entire union. And we understand this immediately. It is a very smart kind of intelligence that infers what is meant, not a logical error. So here the theory of rationality is quite simplistic
and leads us to blame people, and human intelligence, as irrational when it isn't. What are the lessons to be learned? What's the future? I work on developing an alternative conception of rationality that is no longer defined by logic or coherence.
And that's what I call ecological rationality. It's about the strategies that people use and the environment in which they make decisions and how that fits. That has little to do with coherence.
So this is a vision of rethinking rationality, one that respects what actual people do rather than just looking at logic. Second, the political side is to stop the message that we are all irrational,
and, according to some books, predictably irrational, by rethinking rationality and analyzing better what people do wrong and what people do right, without just relying on coherence.
And here there may be an even more important political message: we should start teaching young people how to make good decisions. That includes statistical thinking, which is the coherence part, but also heuristic thinking,
that is, how to deal with uncertainties where coherence and statistics don't help you very well. What smart rules can a doctor use to make better diagnoses? We also work with experts on how to regulate the financial sector
with simple rules, rather than doing complicated estimations and computations like value-at-risk computations, where a large bank has to estimate thousands of risk factors and correlation matrices with entries on the order of millions, which borders on astrology.
The result is not more security, not more safety, but more uncertainty. And here, very simple rules, if we systematically study them, can offer an alternative that brings more rationality, more reason, and a safer world.