Novelty Perception & the schizautistic spectrum: part 1
Remarks on behavioural economics' claims of irrationality
Irrationality?
Though the rise of the internet, with its facilitation of unilateral messages between distant strangers, might dissuade any of us from claiming that humans are rational beings, we still perceive them, on the whole, as reasonable, and interact with them as if they were. We take ourselves to be reasonable and, so long as they have not yet said or done anything that leads us to suspect otherwise, we take the people we encounter to be likewise: capable of perceiving the reality around them, of deducing and inducing, of generalizing and distinguishing.
Economics' conception of the human agent, Homo economicus, rests on the assumption that ‘humans are rational’ — perhaps in part to bridge the gap between economics as the study of ‘how humans should behave’ (in order to maximize utility) and ‘how humans do behave,’ and/or to put firm ground under the fundamental idea that a free market economy leads to prosperity. This assumption was attacked by the field of psychology, whose new empirical findings came to constitute a new subfield, ‘behavioural economics.’
Is a dollar is a dollar is a dollar
One fundamental given of classical economics is the scarcity of resources and the mutual benefit of a trading transaction between voluntary parties. Being the most common medium of exchange, money has the convenient qualities of being universal and quantitative and thus an apt unit to measure utility against. A reduction of a commodity's price should increase the number of customers and/or the average number of units sold as it makes the purchase more advantageous to the buyers by increasing the gap between the utility gained (a property of the commodity) and the money lost (the price).
Studies have shown that people do not invariably behave according to these premises. In one experiment by Dan Ariely,1 people at a store were told that one of the two articles they wanted to buy could be had for $30 less at another nearby store. They were much more likely to go to the other store when that article was the cheaper phone cover than when it was the more expensive smartphone. From a ‘rational standpoint,’ in both cases the trade presented to them was a few minutes of walking for $30, so the article the discount was attached to should not have made a difference; but it did. Hence (among other experiments) people are irrational.
A second example comes from a series of experiments by Kahneman and Tversky, which found that in a scenario where a choice must be made between two options, people prefer one or the other depending on the articulation of the options —whether they deal with potential gain or with averting loss— even though the end effect is the same. In one of these, the participants of the thought experiment were divided into two groups, each presented with one of two scenarios. In scenario one, the participants were given $1000. They could choose option 1, getting another $500 for sure, or option 2, getting a 50% chance at an additional $1000. In scenario two, the participants were given $2000 and the options were 1. lose $500 for sure, or 2. a 50% chance to lose $1000. Most participants in the first scenario chose option 1, while those in the second scenario mostly chose option 2. In either scenario the choices were in effect 1. $1500 for sure, or 2. a 50:50 chance of ending with either $1000 or $2000;2 I wonder how people would have chosen had the options been explicitly formulated thus.
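The equivalence of the two framings can be checked mechanically; here is a minimal sketch in Python (the (probability, final amount) encoding of the scenarios is my own, not part of the original experiment):

```python
# Final-wealth distributions in the two framings of Kahneman and
# Tversky's scenario; each option is a list of (probability, final amount).

def outcomes(endowment, deltas):
    """Combine a starting endowment with probabilistic gains or losses."""
    return sorted((p, endowment + d) for p, d in deltas)

# Scenario one: start with $1000, gain-framed options.
s1_opt1 = outcomes(1000, [(1.0, 500)])             # another $500 for sure
s1_opt2 = outcomes(1000, [(0.5, 1000), (0.5, 0)])  # 50% chance of +$1000

# Scenario two: start with $2000, loss-framed options.
s2_opt1 = outcomes(2000, [(1.0, -500)])             # lose $500 for sure
s2_opt2 = outcomes(2000, [(0.5, -1000), (0.5, 0)])  # 50% chance of -$1000

# The end effects are identical across framings.
assert s1_opt1 == s2_opt1 == [(1.0, 1500)]
assert s1_opt2 == s2_opt2 == [(0.5, 1000), (0.5, 2000)]
```

Only the path to the final amounts differs; the distributions over final wealth are the same pair in both scenarios.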
This experiment of Ariely's refuted premises of classical economics, but I believe that his conclusion —that humans are irrational— targeted the wrong corner of the theory. He sticks to classical economics' precepts, which conceive of the system as a game: rules, goals, and players who play according to the rules to achieve the goals. Since it is rationality that would have allowed us, humans, to plan and execute a rules-bound path towards our goals, demonstrations of humans failing to do so supposedly show that humans are not completely rational. What I think is the case is that while economics' model describes certain interactions well, not all activities whereby money, time or effort is exchanged for a service or commodity follow it. Humans do not invariably play the game the economists have assigned them. In other words, while Ariely thought that what he had caught with his findings was humans at their cognitive shortcomings, it was actually the theory's.3
According to Ariely, if a short walk is (not) worth a discount of $30, then the person should (not) walk the walk whether it is a smartphone or a phone cover that is being discounted; behaving variably in these cases demonstrates faulty reasoning. I disagree. There might be sound reasons to take the walk for the discounted, already cheaper product and not the other: we might do it out of a sense of justice;4 we might want to buy more than one cover, then or in the future, just in case or as a gift, whereby the discount is multiplied with each purchase; we might be following a ‘take care of the pennies’ principle, whereby consistently putting in some effort to save 20% when possible would make a difference by the end of the year (assuming such opportunities manifest themselves);5 we might feel that turning down a shop that had already served us, in order to get the item elsewhere for a marginally lower price, would express niggardliness. In a sense these are all excuses for being more alarmed by relative price differences irrespective of their absolute value, yet I find dubious the assumption that ‘utility,’ as measured in money, is a universal measurement like the meter or the kilogram, invariant under changes of context, or that it constitutes the sole guideline that a ‘rational person’ would follow.
Prospect Theory
I look more favourably on Kahneman and Tversky's work, if only because they put their findings into a more precise theory. Nonetheless, I think something was missed there, too. In another set of experiments,6 they demonstrated —through similar trials of choice between potential gains, slightly varying their respective probabilities— that people's choices were inconsistent. To put it briefly, in one scenario people preferred option B, getting $2400 with certainty, over option A, a 33% chance of $2500, a 66% chance of $2400 and a 1% chance of $0; in another scenario they preferred option C, a 33% chance of $2500 and a 67% chance of $0, over option D, a 34% chance of $2400 and a 66% chance of $0. With some arithmetic manipulation they showed that these preferences are not consistent with each other.
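The inconsistency can be made concrete. In Kahneman and Tversky's formulation (where option D offers a 34% chance of $2400), preferring B to A amounts to 0.34·u($2400) > 0.33·u($2500) for any utility function with u($0) = 0, while preferring C to D amounts to the reverse inequality; no assignment of utilities satisfies both. A small sketch in Python — the grid scan is just an illustration of the algebra, not a proof:

```python
from fractions import Fraction as F  # exact arithmetic, no float noise

def prefers_B_over_A(u2400, u2500):
    # EU(B) > EU(A), taking u($0) = 0:
    # u2400 > 0.33*u2500 + 0.66*u2400,  i.e.  0.34*u2400 > 0.33*u2500
    return u2400 > F(33, 100) * u2500 + F(66, 100) * u2400

def prefers_C_over_D(u2400, u2500):
    # EU(C) > EU(D):  0.33*u2500 > 0.34*u2400 — the reverse inequality
    return F(33, 100) * u2500 > F(34, 100) * u2400

# Scan a grid of candidate utility values: the two preferences never co-occur.
grid = [F(i, 100) for i in range(1, 201)]
both = [(a, b) for a in grid for b in grid
        if prefers_B_over_A(a, b) and prefers_C_over_D(a, b)]
assert both == []  # no utility assignment rationalizes both majority choices
```

The majority pattern (B over A, C over D) therefore cannot be the behaviour of any expected-utility maximizer, which is the arithmetic manipulation referred to above.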
What they undoubtedly demonstrated is that people deal with formal probabilities very poorly. One thing that is going on is that this inadequacy stems from a combination of difficulty imagining probabilities and a lack of training in formal probabilities. While we translate the word ‘apple’ into meaning rather easily —e.g. we form accurate expectations when somebody tells us they are about to bring us an apple— we do poorly when we deal with the expression ‘probability of 10%.’ I think, perhaps falsely, that Kahneman's results would have been different if, e.g., the probability 0.01 had been understood, experienced, by the test subjects as ‘it is the case 3-4 days a year’ — that is, if they had an intuitive feeling for their chances (as one has when considering whether one is likely to bump into a particular somebody at a venue, or whether it is necessary this time to bring an umbrella along to one's usual summer resort) and did not have to consider them as numbers on a piece of paper.
I'm not quite certain about this next point. Language, as a tool used by people to communicate, might affect the matter as well; even when making hypothetical choices that would bear no consequences, people employ their day-to-day intuition. In Kahneman's and similar experiments, subjects are described scenarios. They are expected to take them as the absolute truth. Is this realistic? It is every person's most quotidian experience to observe the deviation of reality from what others have claimed, whether deviations as inconsequential as perceived inaccuracies in descriptions of persons or places, or as important as broken promises regarding contracts, payments or obligations to have something done by a certain time. This is a possible reason why the formulation of two possible outcomes to choose from, whether as potential gain or loss, affects how people choose; in real life a formulation expresses, for example, how accountable the person formulating the option takes himself to be, and therefore affects our real expectations about the reported probabilities. The expectations also affect our reaction to the outcome; if something did not go well, and the responsible person had told us that it would work with ‘a chance of 10%,’ we would think to ourselves ‘oh well’; had he told us it was ‘95% certain,’ we would ask him ‘what went wrong?’ Moreover, formal inaccuracies are a property of daily language; when somebody tells us that they will be ‘back in a second,’ we do not expect them, nor for a moment think they wanted to lead us to expect, to be back within a second; when somebody says the chances are ‘fifty-fifty,’ they simply express that they have no real idea.
Beyond these inaccuracies, there's all that is understood without being explicitly said. An example from Dan Ariely's book Predictably Irrational serves well to demonstrate. Lawyers who were contacted and asked to provide services to needy retirees for the (discounted) price of $30 refused; those who were asked to do it for free agreed. Had these two scenarios appeared in Kahneman's questionnaire experiments, each would have had the first option of ‘do nothing’ (refuse) and the second option of ‘do some work’ — with the differential addition of ‘and get $30 for it’ in one scenario. If anything, thus formulated, one would have expected the opposite results, for who would prefer to work when they could do nothing and everything else remains equal; while some would be glad to do something for some money. Or, from the other direction, since nobody would decline to receive $30, or any money for that matter,7 for nothing, those who agreed to work for free should have agreed to work and get some money on top of that.
We intuitively understand Ariely's results. One thing Ariely doesn't discuss about them is that even though the remuneration is the only explicit difference between the two scenarios, there's an additional difference in how the two requests must have been understood. When a person takes money for a service, she is obliged; if she does it for free —‘voluntary work’ par excellence— she has room to adjust her effort, to apologize her way out of it if something comes up in her life. This is, incidentally, a distinction that the law —the lawyers' own expertise— recognizes as well: while it is illegal to work for under the minimum wage, it is legal to volunteer. In other words, the difference between the two scenarios in the case of a ‘yes’ is not only in what the lawyers expected to receive but also in what they expected to give.
When discussing the results, Ariely speaks of ‘social norms’ and ‘market norms,’ whereby the addition of money into the situation shifts it from the domain of the former to that of the latter. I'd like to make it a little more precise, on the way also touching on the concept of abstraction levels, which will take a central role in later parts of this essay. Norms are certainly part of the matter, as they are with every sentence or piece of information that we need to interpret. To put the issue more precisely, what the introduction of money does is shrink the scope of the interaction; from being part of a relationship it turns into a transaction. To return to the matter of what is and isn't said, we step aside momentarily from the lawyers and retirees: when your friend agrees to help you, whether the favour is big or small, and while they do not state it explicitly, part of the deal is that they expect you to reciprocate at some future time of need. In the act of transaction, on the other hand, it is implied that all obligations are thereby settled, i.e. the parties are essentially done with each other. Beside the difference in the feeling of obligation, the lawyers who did agree to work for free, but who would have refused to work for the scant money, may have preferred acquiring the amiability of an elderly stranger, with its potential, however unlikely, favours, over the monetary settlement.8
The Conjunction Fallacy
The conjunction fallacy is committed whenever people ascribe a higher probability to a conjunction of events than to one of its constituents; in other words, when they regard the probability of ‘event 1 is true and event 2 is true’ as higher than the probability of ‘event 1 is true,’ in blatant contradiction of probability theory. An experiment by Kahneman and Tversky, described in ‘Thinking, Fast and Slow’ and conducted in variations over the years, involved subjects reading a description of ‘Linda,’ which was meant to match a stereotypical ‘active feminist’ without naming her as such. The description was followed by a list of statements, for each of which the participants had to indicate how probable it was. The results showed a pattern whereby ‘Linda is a feminist’9 was ranked more probable than ‘Linda is a feminist and a bank teller,’ which was ranked more probable than ‘Linda is a bank teller.’ The relationship between the last two statements constitutes a conjunction fallacy (if she's a bank teller and a feminist then she's indeed a bank teller, but if she's a bank teller she's not necessarily also a feminist).
Kahneman and Tversky conducted, among others, a variation of this experiment to circumvent some of the criticism the first one drew. This experiment included an opportunity for the participants to win money. The assignment looked thus:10
Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die. Please check the sequence of greens and reds on which you prefer to bet.
1. RGRRR
2. GRGRRR
3. GRRRRR
The first sequence is contained within the second and is thus, necessarily, more probable. Yet most of the participants deemed the second the most probable, thereby implicitly committing the conjunction fallacy (evaluating P(G&RGRRR) as greater than P(RGRRR)) and settling for a lesser chance to win $25. Kahneman and Tversky's interpretation was that the participants' mistake was to choose the eventuality that seemed more representative of the die's potential outcomes.
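The arithmetic can be checked in a few lines; a sketch in Python, assuming, as the prompt states, a fair (unloaded) die with 4 green and 2 red faces and independent rolls:

```python
# Exact sequence probabilities on a die with 4 green and 2 red faces.
P = {'G': 4 / 6, 'R': 2 / 6}

def seq_prob(seq):
    """Probability of rolling exactly this sequence of faces."""
    prob = 1.0
    for face in seq:
        prob *= P[face]
    return prob

p1 = seq_prob('RGRRR')   # option 1
p2 = seq_prob('GRGRRR')  # option 2: option 1 with a G prepended
p3 = seq_prob('GRRRRR')  # option 3

# Option 2 adds a constraint to option 1, so it is strictly less probable:
# P(GRGRRR) = P(G) * P(RGRRR) < P(RGRRR).
assert abs(p2 - P['G'] * p1) < 1e-12
assert p1 > p2 > p3
```

Betting on option 1 therefore strictly dominates betting on option 2, which is what most participants failed to notice.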
As much as I would have wanted to exonerate the participants, I cannot; it is certainly a fallacy that people intuitively commit. Nonetheless, even though it is beside the point of the experiments' validity per se, and despite the experimenters' admission of heuristics' quotidian usefulness, I'd like to suggest that this effect is rather restricted to their artificial experimental setups, and that unlike our ocular blind spot —a defective design of the eye which, as octopuses demonstrate, could have been avoided— this phenomenon is a trace of an adaptive mechanism that allows us to comprehend each other in communication and to comprehend the world.
Probability, representativeness and likelihood
The coloured-die experiment was meant to sidestep issues of language —such as whether the ‘logical and’ ∧ is really equivalent to the daily-language ‘and’— and other issues, by allowing the participants the possibility of winning money by betting right, as opposed to merely expressing an opinion. Indeed, again, Kahneman and Tversky tricked the participants and demonstrated faulty reasoning, but I think they missed something themselves. Though the word ‘probability’ didn't appear in the prompt above, we understand that upon reaching the choices the participants asked themselves ‘which of the following is most probable?’ This is not merely my own wild conjecture; after all, Kahneman and Tversky interpreted the results as if the participants were deciding ‘which is the most probable.’
That the so-called layperson, as well as the physicist outside the university, uses and understands the words ‘velocity,’ ‘speed,’ ‘momentum’ and ‘work’ otherwise than as they are used in physics is a triviality having to do with the technicality of definitions; but that the layperson conflates the words ‘probability’ and ‘likelihood’ —which have precisely distinct meanings in probability theory— has more to do with the manner in which a person comprehends the world, irrespective of whether they know the two words at all.
Kahneman and Tversky, within the scope of this last experiment, also asked participants to rank the choices by the degree to which they were representative of the die. Sequence number two, on which most bet, was also ranked most representative by 88% of the participants. As it happens, what Kahneman and Tversky called ‘representativeness’ has a name in probability theory: ‘likelihood.’11 The likelihood of the die being as described (2 red faces and 4 green faces) is greater in the case where we know that 6 die throws yielded GRGRRR (0.08) than if we knew that 5 die throws yielded RGRRR (0.04).
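The figures 0.08 and 0.04 appear to come from counting greens rather than from the exact ordered sequences, i.e. from the binomial probability of observing that many greens in that many throws of the described die. That is my reading of how the numbers were obtained, sketched in Python:

```python
from math import comb

def greens_prob(k, n, p_green=4 / 6):
    """Binomial probability of exactly k greens in n throws of the die."""
    return comb(n, k) * p_green ** k * (1 - p_green) ** (n - k)

# GRGRRR: 2 greens in 6 throws; RGRRR: 1 green in 5 throws.
l_grgrrr = greens_prob(2, 6)  # ~0.082
l_rgrrr = greens_prob(1, 5)   # ~0.041

assert round(l_grgrrr, 2) == 0.08
assert round(l_rgrrr, 2) == 0.04
```

On this reading, the mostly-green die described in the prompt sits better with the sequence containing two greens than with the one containing only one.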
Humans are intuitive likelihood calculators
What is the probability that a coin flip would come up heads? Unless you were thinking it was a trick question, you'd say 0.5. If I told my friend that I flipped a coin ten times and got ten heads straight, he would be, rightly, incredulous. Either I had actually flipped it five hundred times and reported only on one stretch of ten flips, or the coin was unfair, or I was simply lying about the results. Or I was extremely lucky. In fact, of course, ten straight heads is as probable an outcome of ten flips as any particular ten-long sequence of heads and tails: 0.5^10 ≈ 0.001. However, if I told my friend that my outcome was HTTHHTTTHT, it would have seemed so trivial to him that he would have wondered why I told him about it at all. The difference between the two cases is in the likelihood that the coin was fair, that I had actually only flipped it ten times, and that I was not lying about the results: 0.001 in the first case and 0.2 in the second, a 200-fold difference.
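These numbers can be reproduced by counting heads: 0.001 is the probability of ten heads in ten flips of a fair coin, while 0.2 is (on my reading) the probability of exactly four heads, which is what HTTHHTTTHT contains. A sketch:

```python
from math import comb

n = 10
p_any_sequence = 0.5 ** n  # any one particular ten-flip sequence: ~0.001

# Order-ignoring (binomial) probabilities of the head counts:
p_ten_heads = comb(n, 10) * p_any_sequence   # ten heads out of ten
p_four_heads = comb(n, 4) * p_any_sequence   # HTTHHTTTHT has four heads

assert round(p_ten_heads, 3) == 0.001
assert round(p_four_heads, 1) == 0.2
assert round(p_four_heads / p_ten_heads) == 210  # the ~200-fold difference
```

The two exact sequences are equally probable; what differs by two orders of magnitude is how typical each head-count is of a fair coin honestly flipped ten times.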
Even the worst offenders of loquaciousness use language tersely; we do not communicate in logical statements, and we mean more than we say. If I invite you to ‘come over,’ you assume I meant coming to my house; you expect that I'll let you in once we get there, that once we're inside I won't go directly to bed (alone), that if we had just discussed being hungry I'll offer you food, &c. I don't need to ask ‘want to come to my house and I'll let you in and we'll stay awake together and eat something I'd prepare for us?’; if I did, you would probably still have looked for the greater unsaid meaning — for example, perhaps I was insinuating that last time I came to yours you left me hungry outdoors. Or, when someone asks ‘can you wait a minute?’ we understand it as a request, not as a theoretical question. And so with simple nouns: whether ‘bank teller’ or ‘feminist,’ we understand —perceive in our imagination— more than is merely said. If I invited you to come to my place to see my bird (nudge, nudge), and upon arrival you found out that I had a pet penguin, you'd think my description was rather off. If I prepared you ‘vegan chicken stew,’ you'd expect the stew to be vegan, not the chicken. Had it been otherwise, it's not just that we would spend much more time speaking than we do now;12 language would doubtfully function at all. This is not a caprice of language but a reflection of the workings of our perception; when I see a woman in a suit sitting behind a counter in a bank, I see a ‘bank teller,’ not a ‘woman in a suit’; and when I see such a ‘bank teller’ I see a customer servant who knows about the loan and credit programs of the bank, &c. I wouldn't ask her if she was a bank teller, or if she had time to have a conversation with me, &c.; I'd just assume it. And be very surprised if she said she was just hanging around, waiting for the dentist.
Probabilistic statements are statements about knowability
Probability is a measurement of uncertainty, not of the world per se. It has more to do with our state of knowledge than with the state of the world directly. At first glance probabilistic statements might seem to be indicative statements about our understanding of the world (and its potential), especially past-tense ones, but really they are hypothetical sentences.
Let's say we flip a coin. It lands heads, but we might say ‘there was a 50% probability that it would land tails.’ This is true —since we understand it to describe our past understanding— but as irrelevant as a valid but unsound argument.13,14 This is of course not the case with direct indicative statements. If we agreed that it was true that ‘Paul the cat is alive’ and the next day we found out that he had been dead for the last five years, we wouldn't say ‘Paul was alive yesterday.’ This is also the case with frequentist probability, which likewise makes statements about the world and not about states of knowledge — about the present, at most the cumulative past, and not about a hypothetical future. A frequentist statement about our coin throw, such as ‘half of the throws were tails,’ wouldn't even make sense, since there was only one throw and it landed heads.
However, what makes probabilistic statements hypothetical is that they are not even statements about states of knowledge, but rather about knowability — the hypothetical ability to know. But I'll come back to this after I return to the conjunction fallacy.
Had the person from earlier, who promised us 95% success, returned to tell us that he met failure —given that we had lent credit to his words— we would demand an explanation for the improbable, 5%-chance outcome. He gives us an excuse and it settles the matter. This, too, is a commission of the conjunction fallacy: his failure alone seemed unlikely (improbable?), but his failure AND the excuse seemed likely. It's a matter of attention. Before he had gone on his way, we imagined all sorts of factors that could go against his plan, but we perceived them as unlikely or were assured by him that they would not be a problem. We failed to expect the (event behind the) excuse, which seemed likely once its possibility crossed our minds, and it lent credit to the failure.
And now back to the matter of ‘knowability.’ After we were given an excuse, we would no longer say that the person ‘had a 95% chance of success.’ We were presented with new evidence that persuades us that his chances had not been as great as we thought. We conceivably, hypothetically, could have found out about (the possibility of the event behind) the excuse before he went on his errand. Not so with the coin. We know that it landed heads, but we also know 1. that we couldn't have known it would land heads — none of us had any divination abilities nor the skill to tip the coin one way or the other15 — and 2. that had the thrower thrown the coin in some slightly different manner, or caught it at a different height, it might have fallen tails. If, on the other hand, we discover that the coin was actually loaded, we would no longer say that ‘there was a 50% probability that it would land tails.’
The matter of the probabilistic success and that of Paul the cat's death seem very similar. New evidence updates our statement about the past. But still, one is indicative and the other is hypothetical. ‘Paul was dead yesterday’ is a fact. Perhaps a contestable or a false fact, but still a fact. ‘He had a 50% chance to succeed yesterday’ is a hypothetical sentence. It's neither a report about our state of knowledge yesterday —we thought he had a 95% chance— nor about an event that happened —for he has failed— but about something that could have happened but didn't.
We engage in the hypothetical to guide us in decision making. ‘If I go to the grocery shop, in the evening I'll have food to cook,’ or ‘if so-and-so wins the elections, this and that will happen to me (unless I do something in preparation).’16 There's randomness involved, stemming from uncertainty about the world, but the variable of greatest importance is the self — the self's actions and the effects on it. This is why, with the indicative —cryptic oracle speech notwithstanding—, non-hypothetical, second-person prophecies of myth, the trope of the self-fulfilling prophecy is so common. Given that expectations about the future affect our actions, and that at the same time the statement of the prophecy is unavoidable, the only solution is to have the reaction ironically bring about the very thing that was sought to be avoided.
The hypothetical serves, therefore, a pragmatic purpose, though it's tricky to separate ‘practice’ and ‘theory’ here. I suppose the point is that the responsible cognitive apparatus is meant to serve us in our so-called day-to-day and does not function like a computer operating on logical statements. This does not run counter to Kahneman and Tversky's own thoughts, but I feel that they have not brought their speculations to their conclusion.
When we project into the future we aim to get a comprehensive picture. The aim is not an accurate picture —such that somebody reading a written account of it would be inclined to say it is probable— but to derive as much information as we can through our act of thinking. That is, our aim is to think through whatever relevant elements come to our attention, and only as a second step to get the details right. Kahneman's ‘slow thinking,’ though slower and more effortful, is taken to be superior to the extent that it applies analysis and thereby avoids the pitfalls into which the faster thinking falls, as well as for being the parent of all sorts of technological conveniences we are familiar with today. But I don't think it is a separate cognitive system, as he presented it; it is merely the shift of attention, and with it of thinking, to a particular conception of the world — a technical, formal mathematical one in Kahneman's examples. But we live in the real world, not in the world of formalities;17 a formal articulation flattens reality into an artificial construct. A chess or basketball player's world, too, during a game, transforms from a whole into some restricted conception of it.
That people fail to answer correctly riddles such as ‘a bat and ball together cost $1.10, and the bat costs a dollar more than the ball; how much does the ball cost?’, or ‘in a pond there is a patch of lily pads that doubles in size every day; if it takes 48 days for the patch to entirely cover the pond, how long would it take to cover half the pond?’, or ‘if it takes 5 machines 5 minutes to make 5 widgets, how long will it take 100 machines to make 100 widgets?’ is because, as Kahneman puts it, people go for the intuitive answer; but, I say, it has nothing to do with system 1 and system 2 (corresponding to ‘fast, automatic’ and ‘effortful, deliberate’ thinking) as they are otherwise described in his book. What I think was happening is that the participants of the experiments where these questions were presented had immediately identified them as ‘math problems’ and, whether out of laziness, cockiness or mathematical ineptitude, came up with a tentative answer which they then used without taking a moment to verify it. That participants to whom the questions were displayed in a hard-to-read font with low contrast answered more correctly, Kahneman interpreted as ‘effort induces the operation of system 2,’ while I think it is merely that they were made to spend more time with the question (in order to decipher the text at all) and therefore spent more time thinking about it, as well as had less motivation to rush forward with the first answer that came to mind.
I, too, was a victim of such trickery; when I was a child my father occasionally asked me ‘what's heavier, a kg of cotton or a kg of iron?’, which I answered incorrectly until I realized that the crux is not that ‘iron is heavier than cotton’ but that ‘a kg is a unit of weight.’ What Kahneman calls ‘system 2,’ I think, is merely the transition during discourse into reasoning by domain knowledge — the same kind of reasoning that would say that grains and tomatoes are fruits, that a spider is not an insect, or that a superposition of red and green lights is perceived as yellow. That is, these questions translate potentially material problems into a special domain (mathematics), in which the participants stumbled. I don't deny that people often make analytical mistakes, but had any of them been the owner of 5 widget-making machines, I doubt that he would have failed to make the correct extrapolation; or that the lord of the manor, alerted to the covered lake by his botanist ‘on the 48th day,’ would have failed to recognize that the lake was half covered the previous day, and so on.
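For the record, the riddles' correct answers (5 cents, 47 days, 5 minutes) can be verified by direct computation; a quick check in Python:

```python
# Bat and ball: ball + (ball + 1.00) == 1.10  =>  ball = 0.05.
ball = (1.10 - 1.00) / 2
assert abs(ball - 0.05) < 1e-9

# Lily pads: doubling daily and full on day 48 means half-covered on day 47.
coverage, day = 1.0, 48
while coverage > 0.5:
    coverage /= 2
    day -= 1
assert day == 47

# Widgets: each machine makes one widget per 5 minutes,
# so 100 machines make 100 widgets in those same 5 minutes.
minutes_per_widget_per_machine = 5
machines, widgets = 100, 100
time_needed = minutes_per_widget_per_machine * widgets / machines
assert time_needed == 5
```

In each case the intuitive answer (10 cents, 24 days, 100 minutes) comes from pattern-matching on the surface numbers rather than from the one-line calculation.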
Formally, ‘the probability that Linda is a bank teller’ and ‘the probability that Linda is a bank teller and not an active feminist’ are not the same, but to the extent that the two personas are perceived as incompatible with each other, the two statements are practically equivalent. Linda was irrelevant to the experiment's participants; indeed, she didn't really exist. Had she been real, she would either have or have not been a bank teller, and either have or have not been an active feminist. ‘Linda is a feminist and not a bank teller’ being more probable than ‘Linda is a feminist and a bank teller,’ which in turn is more probable than ‘Linda is not a feminist and is a bank teller,’ would be in agreement with probability theory,18 though the statements the participants were evaluating were formally articulated otherwise.
Applied formal probabilistic calculations carry unstated assumptions
Each formal calculation of probability carries with it certain assumptions, implicit and explicit. In the case of the green/red die, one fact was known by the experimenters and assumed by the participants: it was not loaded, so the ratio of faces yielded a 2:1 chance that it would land green rather than red. The formality is a restriction of the phenomenon that is the world into a limited conception, and we might face issues if we prod against the limits of that conception. For example, 1+1 = 2. Therefore, if I give you one apple and another apple, I give you two apples. So far so good. But what is an apple? It has many properties beyond its unity of appleness. Does one apple weigh half as much as two apples? It depends on the accuracy we are seeking, but generally the answer is ‘more or less,’ which in this case means ‘probably not,’ which is short for ‘no.’ Two sagittal halves of an apple might be deemed equal in a way that horizontal halves would not.
If I asked you what the probability was that I'd get HT on a sequence of two coin flips, you might say 0.25. If I asked you what the probability was that I'd get HT on a sequence of two coin flips AND that the coin was fair, it might throw you off. The second part draws your attention to the assumption you had made during the first part, namely that the coin was fair, i.e. equally likely to land either heads or tails.
When Kahneman and Tversky showed all three options —‘Linda is a feminist,’ ‘Linda is a feminist and a bank teller’ and ‘Linda is a bank teller’— to the same participants (as opposed to spreading them among participants), they sought to bring to their attention the formal difference between these options, i.e. that one is P(A), another is P(A∧B) and the last is P(B); but I don't think that this happens — as the results, corroborating the commission of the conjunction fallacy, also demonstrate.
1. As already said, I believe that what the participants are answering is not ‘what is more probable’ but ‘what option makes the prompt more likely (gives it greater likelihood).’ We might transform Linda into an unfair coin which lands heads with a probability of 0.1. With the two options:
a. Linda flips and yields H
b. Linda flips and yields HT
the probability of the former (0.1) is greater than the probability of the latter (0.1 * 0.9 = 0.09), but (I presume, and in accordance with the results of Kahneman and Tversky's experiments) participants would tend to choose the more likely option as the ‘more probable’ one, namely the latter, where the likelihood of the prompt (0.1 * 0.9 * 2 = 0.18) is greater than in the former case (0.1). Notice that with these probabilities we made the assumptions ‘on one coin flip’ and ‘on two successive coin flips’ respectively. And so with the actual Linda from the experiments; ‘Linda is a bank teller’ is perceived as having to do with more than mere ‘bank tellership,’ whereby ‘Linda is a bank teller and a feminist’ makes it more likely. The latter part —feminist— transforms the former —bank teller— as in ‘polar bear,’ which is not just a bear at the pole but a kind of bear different from other, non-polar bears. The experimental variations that presented the option ‘Linda is a bank teller whether or not she is a feminist’ don't change the perception that bank tellers are more likely to have certain properties and not others.
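The two readings of the coin stand-in can be checked directly. A minimal sketch, where the 0.1 head probability is the made-up figure from the example above:

```python
p_h = 0.1  # the unfair coin's probability of heads (made-up figure)

# Read formally, as probabilities of the options as stated:
prob_a = p_h              # one flip yields H:        0.1
prob_b = p_h * (1 - p_h)  # two flips yield H then T: 0.09
assert prob_a > prob_b    # option (a) is the 'more probable' one

# Read as likelihoods of the prompt: one head and one tail in two
# flips, in either order, vs a single head in one flip.
like_a = p_h                  # 0.1
like_b = 2 * p_h * (1 - p_h)  # 0.18
assert like_b > like_a        # option (b) is the 'more likely' one
```

The two readings pick opposite winners, which is the whole point: the answer flips depending on whether one evaluates the option or the prompt.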
Let's take a more extreme example. John has two new Porsches, one yellow, one red. It seems unlikely that ‘John has been unemployed for twenty years,’ but ‘John has been unemployed for twenty years and inherited a billion dollars from his dead uncle when he was five’ seems more likely. We don't know John. But if he is indeed the owner of two Porsches, it's more likely that he had (also) inherited a lot of money rather than that he was, as it were, merely unemployed. We perceive ‘unemployed’ differently in each case. Just as we don't imagine that a ‘large apple’ is as big as a ‘large mountain,’ even though they are, in a manner of speaking, the same size — ‘large.’
The fallacy, to conclude, is in the direction of the arrow —in the maximization of the prompt's likelihood given the chosen option instead of the maximization of the choice's probability given the prompt— in the fact that when reasoning about probabilities people often actually reason about likelihoods. But this is, as they say, a feature and not a bug of perception; while it yields failure in the highly artificial setups of Kahneman and Tversky —even in the realizable scenario of gambling on a coloured die, a math riddle equipped with stakes, which is still a rare occurrence in one's life— it enables us to communicate with each other more or less successfully, to say nothing of navigating the reality around us. Present evidence is not merely laid on top of our knowledge; it confirms, modifies or contradicts it, like each word does the understanding of the words that precede it. Formal/mathematical problems present us with god-given facts about the world, but in reality we continuously weigh the validity of our humanly conceived knowledge.
Unfortunately, I cannot find the source. I can't say for certain that he conducted it, but I first heard of the finding from him. The longer I have thought about this experiment, the more problematic I have found it, and the more I have felt the need for the original report with some of its details.
The numbers in this example are made up.
Daniel Kahneman, Thinking Fast and Slow, ch. 26 Prospect Theory.
To the extent that ‘humans are rational’ & ‘rational agents behave a certain way,’ saying that the theory is wrong amounts to the same as saying that humans are not rational. Nonetheless, it is one thing to ascribe a property to humans as a whole (irrationality) and another to restrict the scope in which the theory applies.
Overcharging 50% might be a kind of dishonesty where 5% would not, and reacting to it is not unreasonable, though the dynamics of this reaction fall outside economics' scope of study. This is somewhat related to the phenomenon whereby people are concerned less about absolute than relative wealth (what they have vs. what they have compared to others), which is likewise sound.
George Ainslie's work on hyperbolic discounting might be illuminating on this point.
Daniel Kahneman and Amos Tversky, Prospect Theory: An Analysis of Decision under Risk, 1979, https://doi.org/10.2307/1914185
Very hypothetically. As there are no free lunches, one might wonder ‘what's the trick?’
The cynical me rejects the explanation that such things are done out of pure goodness, for the sake of others. We have an intuitive sense of tit-for-tat justice that becomes especially salient in repeated interactions. If only a single party gives and the second party takes, we might say that the first is abused or exploited. Had a lawyer been long maltreated by the elderly, she might understandably refuse to lend them her help; if she doesn't, it's likely that she wants to improve the relationship.
But the expectation of reciprocation can be observed even in a single-shot interaction. Consider what a difference receiving or not receiving thanks for a favour makes. Giving thanks is an expression that ‘you intentionally did something for my sake and I acknowledge it,’ and of an obligation, whether it would ever be realized or not.
With regards to the score-settlement of transactions, I can't help but recall an incident where I thanked a serviceman who finished fixing something in the apartment. He rejected my thanks, saying that he had only done his job. It happened to me more than once.
I use 'feminist' here as an abbreviation of the 'active in the feminist movement' that was used in the original publication.
Mentioned in Extensional Versus Intuitive Reasoning: The Conjunction Fallacy in Probability Judgment, Amos Tversky and Daniel Kahneman, 1983, https://doi.org/10.1037/0033-295X.90.4.293
Put very shortly, given a random process with a set of parameters θ, the probability function assigns to a possible outcome its probability. Given a random process and an outcome, the likelihood function assigns to a set of parameters its likelihood. In other words, the former tells something of the chances of an outcome when the process is fully known, while the latter tells something of the chances of the process' properties when its outcome is known.
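The distinction can be made concrete with a coin model. A sketch, assuming a binomial process; the figures reuse the made-up 0.1 coin from the main text:

```python
from math import comb

def probability(k_heads, n_flips, p):
    # Process fully known (p fixed): the chance of observing
    # k_heads heads in n_flips flips (binomial probability).
    return comb(n_flips, k_heads) * p**k_heads * (1 - p)**(n_flips - k_heads)

def likelihood(p, k_heads, n_flips):
    # Outcome fixed: how well does the parameter p account for it?
    # The same formula, read in the other direction.
    return probability(k_heads, n_flips, p)

# One head in two flips: a fair coin accounts for this outcome
# better than the unfair coin does.
print(likelihood(0.5, 1, 2))  # 0.5
print(likelihood(0.1, 1, 2))  # ~0.18
```

Note that the two functions share one formula; only which argument is held fixed and which is varied changes, which is the ‘direction of the arrow’ at issue.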
and if we did, we would have dispensed with such words as ‘but’, which is used to contradict not what has already been said but that which has not been said yet might have been understood
In logic an argument is valid if it obeys logical rules and sound if, additionally, its premises are true.
Beijing is a city in Germany,
all cities in Germany are part of the EU,
therefore Beijing is part of the EU.
is valid but unsound, since Beijing is not in Germany. Notice that it might be converted into a sound hypothetical argument: ‘All cities in Germany are part of the EU. If Beijing were in Germany, it would be part of the EU.’ This would still be irrelevant unless we were considering including Beijing in Germany.
Which of course doesn't make it irrelevant at all, for we might be discussing the fairness of the lot we have drawn. Still, the coin fell as it has, and fairness is a judgment, a kind of opinion, of the world rather than its direct description.
As a robotic arm was trained to do in the last decade or so.
I believe fiction redounds to the same, but that's a different story.
Plato conceived the world of ideas, of forms, to be superior to the fleeting ever changing world of things, the universe as it is, but he, of course, had gotten it all backwards.
Is the evaluation of the probabilities of the statements whose absent clauses were replaced with their negations equivalent, in this context, to the evaluation of the prompt's likelihood given the option? I don't know