DR. MILLER’S NOTES ON THE BLACK SWAN
The Black Swan: The Impact of the Highly Improbable, by Nassim Nicholas Taleb (2nd edition, Random House, 2010)
A Black Swan event is one that: (p. xxii)
1. is an outlier, something outside the realm of regular expectations; nothing in the past can convincingly point to its possibility
2. has an extreme impact
3. despite being an outlier, gets explained in hindsight: human nature leads us to “concoct explanations for its occurrence after the fact”, making it seem explainable and even predictable
This is a disastrous combination: we can’t predict them, they have huge impact, and yet we think we can predict them because we invent explanations of past BS events after the fact.
A small number of BS events explain almost everything in our world, from success of ideas and religions to dynamics of historical events, even elements of our personal lives.
The effects of BS events have been increasing, accelerating as the world gets more complicated.
Fads, epidemics, fashions, ideas, all follow BS dynamics
Social scientists have for a century “operated under the false belief that their tools could measure uncertainty.” (xxii)
Ask a stockbroker about risks and they’ll produce measures that EXCLUDE the possibility of BS risk, “hence one that has no better predictive value for assessing the total risks than astrology.” (xxiii)
“The central idea of this book concerns our blindness with respect to randomness, particularly the large deviations.” (xxiii)
“Why do we keep focusing on the minutiae, not the possible significant large events, in spite of the obvious evidence of their huge influence?” (xxiii)
“Black Swan logic makes what you don’t know far more relevant than what you do know. Consider that many Black Swans can be caused and exacerbated by their being unexpected.” (xxiii)
If the 9/11 risk had been reasonably conceivable on September 10, the attacks would not have happened (cockpit doors would have been easily secured). (xxiii)
“The inability to predict outliers implies the inability to predict the course of history, given the share of these events in the dynamics of events.” (xxiv)
We produce 30-year projections of Social Security deficits and oil prices when we can’t predict these well even for the next year.
“Black Swans being unpredictable, we need to adjust to their existence (rather than naively try to predict them). There are so many things we can do if we focus on anti-knowledge, or what we do not know.” (xxiv)
Drew Miller: For example, have realistic, aggressive disaster survival plans.
“It is much easier to deal with the Black Swan problem if we focus on robustness to errors rather than improving predictions.” (xxiv)
In addition to being better prepared for bad BS events, we can often position ourselves to be better able to enjoy good BS events. (xxiv)
“I disagree with the followers of Marx and those of Adam Smith: the reason free markets work is because they allow people to be lucky, thanks to aggressive trial and error, not by giving rewards or “incentives” for skill. The strategy is, then, to tinker as much as possible and try to collect as many Black Swan opportunities as you can.” (xxv)
LIST OF THINKING ERRORS:
This list is from throughout the book. Our natural ways of thinking make us more susceptible to BS unpreparedness:
1. We invent explanations for a BS event after it happens that make it seem as though we could have predicted it.
2. “the human mind suffers from three ailments as it comes into contact with history, what I call the triplet of opacity.” (8)
a. The illusion of understanding: everyone thinks they know what is going on in a world that is more complicated and/or random than they realize.
b. Retrospective distortion: we can assess matters only after the fact, and when looking in the rear-view mirror we (and historians) describe events as clearer and more organized than they really were.
c. We overvalue factual information and the views of “authoritative” and learned people who explain things with a false clarity and are often plain wrong.
In sum: we grossly overestimate what we can explain, understand, and predict.
3. “Platonicity” – our tendency to cut reality into crisp shapes, to invent some framework for analysis that seems to explain everything clearly and rationally and even make it predictable, when in fact it remains extremely uncertain and unknown. “Categorizing” seems necessary for humans: we impose some sense of order on things even when they don’t make sense or our explanation is not accurate. (15)
4. The error of confirmation: we look for and at what confirms our knowledge and opinions, not our ignorance or contrary views. Once your mind is set with a certain view of the world, you tend to consider only instances that appear to show you are right, helping you feel more justified in your views. (59)
5. Narrative fallacy: we fool ourselves with stories and anecdotes. We like stories; we like to summarize, simplify, and explain things with some model or rationale that makes sense and seems right. (63)
6. “Humans are great at self-delusion.” (12)
7. It is impossible for our brains to see something and not try, unconsciously, to interpret it and make sense of it. It takes great discipline, training, and practice to withhold judgment, stay open-minded, and remain skeptical of drawing any conclusions.
8. Our memories are NOT permanent; they change and are shaped by our other thoughts, unconsciously or not. We continually renarrate past events in the light of what appears to make more sense after they occur. We even invent memories, as courts have shown in invented child-abuse cases that came from listening to theories and reports of other instances. (71)
9. We also tend to come up with plausible connections and reasons for causality, explanations that bind facts together and help them make more sense. The risk is that we then think we understand things better than we really do.
10. The “ludic fallacy” (term invented by Taleb) is “the misuse of games to model real-life situations.” Taleb explains the ludic fallacy as “basing studies of chance on the narrow world of games and dice.” It is a central argument in the book and a rebuttal of the mathematical models used to predict the future – as well as an attack on the idea of applying naïve and simplified statistical models in complex domains. According to Taleb, statistics only work in some domains like casinos in which the odds are visible and defined. Taleb’s argument centers on the idea that predictive models are based on platonified forms, gravitating towards mathematical purity, and failing to take some key ideas into account:
- it is impossible to be in possession of all the information.
- very small unknown variations in the data can have a huge impact. Taleb distinguishes this idea from related mathematical notions in chaos theory, e.g. the butterfly effect.
- theories/models based on empirical data are flawed, as events that have not taken place before cannot be accounted for.
Any decision theory based on a fixed universe or model of possible outcomes ignores and minimizes the impact of events which are “outside the model.” For instance, a simple model of daily stock market returns may include extreme moves such as Black Monday (1987) but might not model the market breakdowns following the September 11 attacks. A fixed model considers the “known unknowns,” but ignores the “unknown unknowns.”
11. We neglect, ignore silent evidence. One of the gravest impacts of this is the illusion of stability. This bias lowers our perception of the risk we incurred in the past. “the problem of silent evidence . . . is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not. It is why we Platonify, liking known schemas and well-organized knowledge—to the point of blindness to reality. It is why we fall for the problem of induction, why we confirm. It is why those who ‘study’ and fare well in school have a tendency to be suckers for the ludic fallacy.” (131)
12. We tend to “tunnel”; focus on too few sources of uncertainty, too specific and limited a list of potential BSs to watch out for. (50)
13. We scorn the abstract in preference for facts and evidence, and you won’t have concrete facts and evidence warning of an approaching BS.
14. What we don’t regularly see, we tend to ignore.
15. We tend not to think carefully or in a disciplined manner; we are often irrational.
16. Many psychological tests have shown that we are very poor at estimating probabilities, tending to grossly overestimate what we know.
17. Many biases affect our estimates, such as anchoring: whatever number you have in mind just before you are asked to predict, you will predict closer to it. If you are thinking about a low number and are then asked to estimate something, you will estimate low.
18. “We cannot work without a point of reference” (159) so the mind will grab one even if it makes no sense as a reference.
19. We learn by repetition, at the expense of events that have not happened before.
20. We are more influenced by emotions and personal events than by statistics. The death of someone we know from disease X has far more impact on us than statistics on the large number of people dying of X.
21. We often react and decide by gut feel and our habitual reactions, and THINK that we’ve thought it through and made a rational choice when in fact we have not. (81) Don’t react with the limbic part of the brain, the fast-acting part that animals also have; use the cortical part, used for thinking, which distinguishes us from other animals. (82)
22. “We think that if . . . two variables are causally linked, then a steady input in one variable should always yield a result in the other one. Our emotional apparatus is designed for linear causality.” (88)
23. “We favor the sensational and the extremely visible. This affects the way we judge heroes. There is little room in our consciousness for heroes who do not deliver visible results—or those heroes who focus on process rather than results.” (89)
24. “We are made to be superficial, to heed what we see and not heed what does not vividly come to mind. . . . Out of sight, out of mind: we harbor a natural, even physical, scorn of the abstract.” (121) “We are naturally shallow and superficial—and we do not know it.” (132)
25. “the scandal of prediction”—we predict all the time despite dismal ability to do so, almost always missing the big events (138) We overestimate what we know and underestimate uncertainty. “Our human race is affected by a chronic underestimation of the possibility of the future straying from the course initially envisioned, . . . an ingrained tendency in humans to underestimate outliers—or Black Swans.” (141)
26. Too much information can lead to worse decisions. Gladwell noted this in Blink, saying that gut instinct can often be better than information overload. Taleb says “the more information you give someone, the more hypotheses they will formulate along the way, and the worse off they will be. They see more random noise and mistake it for information. The problem is that our ideas are sticky; once we produce a theory, we are not likely to change our minds—so those who delay developing their theories are better off. When you develop your opinions on the basis of weak evidence, you will have difficulty interpreting subsequent information that contradicts these opinions. . . .” (144)
27. “We attribute our successes to our skills, and our failures to external events outside our control, namely to randomness.” (152)
You get no credit for preventing a BS event. If someone had invented or required locking cockpit doors before 9/11, they would have prevented the disaster but would never have gotten credit for avoiding 9/11.
NOTES (page numbers in parentheses):
Television and the media magnify the unfairness of not rewarding those who prepare and plan, and also contribute to BS blindness. (xxviii)
We tend to learn and teach by focusing on the normal and ruling out the extraordinary.
Better if we look at “outliers” as well, particularly if they are potential BS events. (xxix)
He condemns the Bell Curve and normal distribution, which focus on the “normal” and minimize or ignore the large deviations. It gives us a very false sense of having dealt with uncertainty and outliers. He calls the Bell Curve the “Great Intellectual Fraud.” (xxix)
Do NOT try to look for “corroborating evidence” of coming BSs.
Do not focus on the known and normal; think of the possible, however improbable, extremes and unknowns. “use the extreme event as a starting point and not treat it as an exception to be pushed under the rug.” (xxxii)
“the future will be increasingly less predictable, while both human nature and social ‘science’ seem to conspire to hide the idea from us.” (xxxii)
In chapter 1, Taleb explains how a BS event transformed and ruined his home country of Lebanon. After thirteen centuries of peaceful ethnic coexistence, a fierce civil war began between Christians and Muslims, including Palestinian refugees. The conflict lasted over a decade and a half.
“History and societies do not crawl. They make jumps.” “Yet we (and historians) like to believe in the predictable, small incremental progression.” (11)
“Humans are great at self-delusion.” (12)
He cites as examples the diaries of people prior to WWII: they had no inkling that something momentous was taking place, that war was coming. We hear much about Churchill’s warnings, but he was a rare exception and was ignored. WWII came as a surprise despite what in retrospect look like absolutely clear signs and warnings. Bond prices, which are supposed to reflect the intelligence of the market, also showed no expectation that a war was coming. (14)
The stock market often has huge, wild, unexpected swings. October 19, 1987, for example, saw the largest market drop in modern history. The drop was not in response to any discernible news; it was a BS event. (18)
The “Platonic fold” is where our representation/models of reality cease to apply—but we don’t know it. (19)
Taleb became a quant in finance, NOT looking for models to explain and analyze financial instruments, but to study the flaws and limits of those models, looking for the Platonic folds where they break down. “I was convinced that I was totally incompetent in predicting market prices—but that others were generally incompetent also but did not know it, or did not know that they were taking massive risks.” Most traders were just “picking pennies in front of a steamroller,” exposing themselves to the high-impact rare event yet sleeping like babies, unaware of it. (19)
Taleb calls himself a “flâneur”—someone who walks city streets to think, contemplate, and observe (not a “loafer”, the negative sense of the word).
For artists and professionals, success today is to a large degree a matter of luck; and once one person achieves success, even if he is not substantially better than others, attention and rewards will be concentrated on him. Not fair, but that’s how it works. (30)
The U.S. has achieved a lot of success because we specialize in, and are far more tolerant of, bottom-up tinkering and undirected trial and error. We produce concepts and ideas, the scalable part of products, and increasingly export or outsource the less scalable, mundane jobs that anyone can do. (31)
Mediocristan: a place where, when the sample is large, no single instance will significantly change the total; the largest observation may be impressive, but it is insignificant to the sum. Average height is an example of a Mediocristan measure.
Extremistan: a place where the extremes are REALLY huge. Example: Bill Gates plus 999 other average people on the planet: his net worth alone represents 99%+ of the total. Another example is book sales; the top few authors account for the vast majority of book sales.
In Extremistan, inequalities are such that one single observation will disproportionately impact the total. (33)
Extremistan can produce many more BSs. You cannot rely on data as much, and can’t predict well. Be suspicious of knowledge you derive from data. (34)
Much more of what matters is in Extremistan, subject to Type 2 randomness: wealth, income, book sales, name recognition, city population, deaths in wars or terrorist incidents, financial markets, commodity prices, inflation rates, economic data. (35)
Extreme winner-take-all inequality; the tyranny of the accidental.
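The Mediocristan/Extremistan contrast can be sketched numerically. The sketch below is my own illustration, not from the book, and the distribution choices are arbitrary stand-ins: a normal distribution plays the role of Mediocristan (heights) and a heavy-tailed Pareto draw plays the role of Extremistan (wealth). The point is only to show how much of the total the single largest observation accounts for in each world.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mediocristan: heights ~ Normal(170 cm, 8 cm). The tallest person
# barely moves the total.
heights = rng.normal(170.0, 8.0, n)
height_share = heights.max() / heights.sum()

# Extremistan: wealth ~ Pareto with a heavy tail (shape alpha = 1.1,
# an illustrative value). One observation can dominate the sum.
wealth = rng.pareto(1.1, n) + 1.0
wealth_share = wealth.max() / wealth.sum()

print(f"largest height's share of total:  {height_share:.6%}")
print(f"largest fortune's share of total: {wealth_share:.2%}")
```

Even with 100,000 people, the tallest contributes a vanishing fraction of total height, while the richest contributes a material fraction of total wealth; that is the sense in which "one single observation will disproportionately impact the total."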
Like a turkey being fattened for slaughter: a long experience of the same treatment and occurrences every day can make things seem very predictable and certain, until the day of slaughter. (40)
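The turkey's induction problem can be made concrete with a toy calculation. This framing (Laplace's rule of succession, with made-up numbers) is mine, not Taleb's; it just shows that a naive inductive estimate of "I will be fed tomorrow" is at its most confident on the eve of the slaughter.

```python
# After d uneventful days, Laplace's rule of succession estimates the
# probability of another uneventful day as (d + 1) / (d + 2).
days_fed = 1000  # hypothetical length of the feeding period
confidence = [(d + 1) / (d + 2) for d in range(1, days_fed + 1)]

print(f"confidence after day 1:    {confidence[0]:.3f}")
print(f"confidence after day 1000: {confidence[-1]:.4f}")
# Confidence peaks exactly when the worst outcome arrives: each
# quiet day is evidence of safety AND one day closer to the axe.
```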
Banks may have boringly stable business day after day, but if they are exposed to big financial risks they can have one horribly bad day that puts the bank out of business. In the summer of 1982, large American banks lost close to all their past earnings in one year due to loans to Latin American countries that defaulted at the same time. (43)
In 1998 Long-Term Capital Management, run by “risk experts” including Nobel laureates in economics, blew up and almost took the financial markets down with it, applying “phony, bell curve-style mathematics.” (44)
Some BS events can take years, even decades, to take place and play out.
“empirical” means based on past, observed evidence
We tend to “tunnel”; focus on too few sources of uncertainty, too specific and limited a list of potential BSs to watch out for. (50)
You can know what is wrong with a lot more confidence than you know what is right. (58)
Once your mind is set with a certain view of the world you tend to only consider instances that appear to show you are right; helping you feel more justified in your views. (59)
The impact of some past BS-type events would be more severe today; an earthquake can have a far worse impact now due to larger populations and greater economic dependence and interdependence.
When our mind simplifies and organizes data and comes up with some explanation of it (however wrong it may be), we in effect set ourselves up for BS disasters, because in the assumptions we make to simplify and make sense of what we observe, we throw out the misfits and leave out the possibility of a BS. (69) We forget about or discard evidence that doesn’t fit our expectation of the norm, so we also tend to ignore or pay less attention to the misfit data, which is perhaps the evidence of an approaching BS.
We all tend to commit the narrative fallacy. It happens in the media too, despite their fact-checking and supposed discipline; in scientific research; and in academia, where people who think they are clever add the illusion of expertise and postulate theories that have little solid basis.
Some BSs are overblown in our minds because they are “narrated BSs”: those present in the current discourse and news we hear about all the time. But BSs not in the media or popular discussion are underestimated. People will overestimate the probability of an event they are aware of and that gets discussed a lot. (77)
Before a rare event like a terrorist bombing, we probably underestimate its probability; right after such an event, we OVERestimate it for a while, while it is on our minds.
After an economic disaster the cycle runs: risk aversion; then, after stability and an absence of crises, more risk taking and complacency; then a BS disaster; repeat.
“… we live in a society where the reward mechanism is based on the illusion of the regular; our hormonal reward system also needs tangible and steady results. It too thinks that the world is steady and well behaved—it falls for the confirmation error. The world has changed too fast for our genetic makeup. We are alienated from our environment.” (85)
“Many people labor in life under the impression that they are doing something right, yet they may not show solid results for a long time. They need a capacity for continuously adjourned gratification to survive a steady diet of peer cruelty without being demoralized. . . . Believe me, it is tough to deal with the social consequences of the appearance of continuous failure. We are social animals; hell is other people.” (87)
“If you feel that you are not going anywhere, your emotions will cause you to become demoralized. But modern reality rarely gives us the privilege of a satisfying, linear, positive progression: you may think about a problem for a year and learn nothing; then, unless you are disheartened by the emptiness of the results and give up, something will come to you in a flash.” (88)
“The problem of lumpy payoffs is not so much in the lack of income they entail, but the pecking order, the loss of dignity, the subtle humiliations near the water cooler. It is my great hope someday to see science and decision makers rediscover what the ancients have always known, namely that our highest currency is respect.” (90)
“Linear relationships are truly the exception; we only focus on them in classrooms and textbooks because they are easier to understand.” (89)
“your happiness depends far more on the number of instances of positive feelings, what psychologists call ‘positive affect,’ than on their intensity when they hit. . . . So to have a pleasant life you should spread these small ‘affects’ across time as evenly as possible. Plenty of mildly good news is preferable to one single lump of great news. . . . The same property in reverse applies to our unhappiness. It is better to lump all your pain into a brief period than have it spread out over a longer one.” (91)
“We are local animals, interested in our immediate neighborhood.” (94)
“we have very few historical records of people who have achieved anything extraordinary without such peer validation. . . . “ (94)
The accounting period we use for evaluating company performance is too short to reveal whether they are doing a good job. (97)
“owing to the shallowness of our intuitions, we formulate our risk assessments too quickly.” (97)
“Humans. . . can detect the smallest crack in your confidence before you express it. The trick is to be as smooth as possible in personal manners. It is much easier to signal self-confidence if you are exceedingly polite and friendly. . . . But you need to remain understated and maintain an Olympian calm in front of others.” (98)
Points out that fund managers/investors with great records are NOT necessarily great; there will always be someone who comes out best just from luck and statistics, NOT because they really are better. (106)
Close to 99.5 percent of species that existed on earth are now extinct. (108)
“Life is a great deal more fragile than we have allowed for.” (109)
“governments are great at telling you what they did; but not what they did not do. In fact, they engage in what could be labeled as phony ‘philanthropy,’ the activity of helping people in a visible and sensational way without taking into account the unseen cemetery of invisible consequences.” (111)
“Our neglect of silent evidence kills people daily. Assume that a drug saves many people from a potentially dangerous ailment, but runs the risk of killing a few, with a net benefit to society. Would a doctor prescribe it? He has no incentive to do so. The lawyers of the person hurt by the side effects will go after the doctor like attack dogs, while the lives saved by the drug might not be accounted for anywhere.” (112)
“My biggest problem with the educational system lies precisely in that it forces students to squeeze explanations out of subject matters and shames them for withholding judgment, for uttering the ‘I don’t know’.” (120) In many cases the correct answer to why some past historical event X happened is NOT some logical, rational explanation, but 1) we don’t really know and 2) it just happened. “I am not saying causes do not exist; do not use this argument to avoid trying to learn from history. All I am saying is that it is not so simple; be suspicious of the ‘because’ and handle it with care—particularly in situations where you suspect silent evidence.” (121)
“only military people deal with randomness with genuine, introspective intellectual honesty—unlike academics and corporate executives using other people’s money.” (126)
Military planners, reacting to BS discussions, started using the expression “unknown unknowns” (as opposed to known unknowns) to help emphasize the existence and danger of BSs. (127)
Knightian risk is risk you can compute; Knightian uncertainty is what you cannot compute. But “computable” risks are largely absent from real life; they are theoretical, academic inventions, not practical. (128)
“I recently looked at what college students are taught under the subject of chance and came out horrified; they were brainwashed with this ludic fallacy and the outlandish bell curve.” (128)
“the problem of silent evidence . . . is why we do not see Black Swans: we worry about those that happened, not those that may happen but did not. It is why we Platonify, liking known schemas and well-organized knowledge—to the point of blindness to reality. It is why we fall for the problem of induction, why we confirm. It is why those who ‘study’ and fare well in school have a tendency to be suckers for the ludic fallacy.” (131)
“if you want a simple step to a higher form of life, as distant from the animal as you can get, then you may have to denarrate, that is, shut down the television set, minimize time spent reading newspapers, ignore the blogs. Train your reasoning abilities to control your decisions. . . Train yourself to spot the difference between the sensational and the empirical. This insulation from the toxicity of the world will have an additional benefit; it will improve your well-being. Also, bear in mind how shallow we are with probability, the mother of all abstract notions. You do not have to do much more in order to gain a deeper understanding of the things around you. Above all, learn to avoid ‘tunneling.’ . . . the last thing you need to do when you deal with uncertainty is to ‘focus’. . . This ‘focus’ makes you a sucker. . . .” (133)
“We tend to ‘tunnel’ while looking into the future, making it business as usual, Black-Swan free, when in fact there is nothing usual about the future. . . . We are [good] at narrating backward, at inventing stories that convince us that we understand the past.” (135)
“No matter what anyone tells you, it is a good idea to question the error rate of an expert’s procedure. Do not question his procedure, only his confidence.” (145)
Experts who tend to truly be experts: livestock judges, chess masters, physicists, mathematicians, accountants. Experts who tend not to be experts: stockbrokers, psychiatrists, college admissions officers, intelligence analysts, economists, financial forecasters, finance professors, political scientists, “risk experts.” (146)
Things that “move” seem to be less amenable to expertise or prediction; they are more BS-prone. (147)
Taleb scrutinized 2,000 security analyst predictions. “What it showed was that these brokerage-house analysts predicted nothing—a naïve forecast made by someone who takes the figures from one period as predictors of the next would not do markedly worse. . . . Worse yet, the forecasters’ errors were significantly larger than the average difference between individual forecasts, which indicates herding.” (150)
When experts are right in their predictions, they attribute it to their understanding and expertise; when wrong, they blame a special situation, spinning some story to avoid acknowledging that they or their methods were wrong. (152)
Taleb reviewed articles and papers in economics. “They collectively show no convincing evidence that economists as a community have an ability to predict, and, if they have some ability, their predictions are at best just slightly better than random ones—not good enough to help with serious decisions.” (154)
Econometrics: combines economic theory with statistics.
In 1999 a study and competition of forecasters found that “statistically sophisticated or complex methods do not necessarily provide more accurate forecasts than simpler ones.” (154)
“Plans fail because of what we have called tunneling, the neglect of sources of uncertainty outside the plan itself.” (156) In particular, “outside events” and interference with our plans are common; we focus on our known constraints, not on the things that will interfere with us. We don’t add time or costs for “unknown unknowns.”
Drew Miller: Always have a category of “contingency costs” for unknown expenses and delays.
Tools like Excel that make projections and pro forma spreadsheets easy to build can make our forecasting errors and hubris worse. “We have become worse planners than the Soviet Russians thanks to these potent computer programs given to those who are incapable of handling their knowledge.” (158)
Forecasting without incorporating an error rate involves three fallacies:
1. Variability matters. We tend to make a point prediction without worrying about error or variability, when it is probably better to give a full range of possible outcomes, especially if error and variability are high. (161)
2. Failing to take into account forecast degradation as the projected period lengthens. He gives the example of George Orwell’s 1984. (162)
3. The gravest fallacy: a misunderstanding of the random character of the variables being forecast, and the possibility of a big BS event lurking. The extreme cases may be far worse, beyond what we could even imagine.
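The second fallacy, forecast degradation with horizon, can be sketched with a toy model of my own (not Taleb's): treat the forecast variable as a random walk and use today's value as the point forecast for every future date, then measure how the error grows as the horizon lengthens.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate many random-walk paths; each path's value at step t is the
# deviation from today's value, i.e. the error of a "no change" forecast.
n_paths, horizon = 20_000, 30
steps = rng.normal(0.0, 1.0, size=(n_paths, horizon))
deviations = steps.cumsum(axis=1)

# Root-mean-square forecast error at each horizon.
rmse = np.sqrt((deviations ** 2).mean(axis=0))
print(f"RMSE 1 step ahead:   {rmse[0]:.2f}")
print(f"RMSE 30 steps ahead: {rmse[-1]:.2f}")
# Error compounds roughly like sqrt(horizon); a band that looks
# reasonable at short range is badly overconfident at long range,
# and this still assumes thin-tailed, BS-free shocks.
```

This is the benign case: with Extremistan-style (fat-tailed) shocks, the degradation is worse still, which is the third fallacy.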
The classic model of discovery is you search for something you know or want (a new way to reach India) and you find something you didn’t know was there (America). (166)
We are also horrible at estimating the impacts of things we discover. Thomas Watson, head of IBM, predicted there would be no need for more than a handful of computers. (168) Pan Am once took advance bookings for round trips between the Earth and the Moon; then it went bankrupt. (169)
In 1899 the head of the US Patent Office resigned because he deemed there was nothing left to discover.
“Fat tails”: a term for distributions prone to BS events.
A way to profit from BSs is to look for research and business development opportunities that can pay off big.
Henri Poincaré, the mathematician, explained that there are fundamental limits to our equations. Nonlinearities can lead to severe consequences, an idea that later became overly popular as chaos theory. Poincaré showed this in the “three-body problem”: with only two planets in a solar-style system you can predict their course, but add a third body, ever so small, between the two, and over time its effects on the two huge bodies’ movements may become explosive. (177)
“Our world, unfortunately, is far more complicated than the three body problem….” (177)
Poincaré’s point was rediscovered in the 1960s by MIT meteorologist Edward Lorenz while doing computer modeling of the weather. When he reran a few days’ weather prediction with what he thought were the same input parameters, he got wildly different results. It turned out that a small rounding error in the input parameters was all it took to yield huge differences; this became known as the “butterfly effect”, which evolved (or devolved) into chaos theory. (179)
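Lorenz's rounding-error discovery is easy to reproduce in miniature. The sketch below is my illustration using the logistic map, a standard chaotic toy system (not Lorenz's actual weather model): two runs that differ only by a tiny "rounding error" in the starting value end up completely different.

```python
def trajectory(x, r=4.0, steps=60):
    """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
    out = []
    for _ in range(steps):
        x = r * x * (1.0 - x)
        out.append(x)
    return out

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)  # perturbed in the 10th decimal place

first_gap = abs(a[0] - b[0])
max_gap = max(abs(p - q) for p, q in zip(a, b))
print(f"gap after 1 step:          {first_gap:.2e}")
print(f"largest gap over 60 steps: {max_gap:.3f}")
```

The initial gap is invisible at any realistic measurement precision, yet within a few dozen iterations the two trajectories bear no resemblance to each other; that is why "very small unknown variations in the data can have a huge impact."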
Friedrich Hayek, in his speech “The Pretence of Knowledge,” railed against economics and the idea of a planner. He argued against the use of the tools of hard science in the social sciences, condemned central planning as impossible, and attacked socialism and managed economies. He used the term “scientism” to describe our disease of overestimating our ability to understand the world. (180)
Taleb agrees with Hayek, but argues that scientism and the errors of thinking we can plan/forecast competently is not just erroneous in social sciences, but in economics, medicine, and most areas of knowledge—perhaps all. (181)
“Corporations can go bust as often as they like, thus subsidizing us consumers by transferring their wealth into our pockets—the more bankruptcies, the better it is for us—unless they are ‘too big to fail’ and require subsidies, which is an argument in favor of letting companies go bust early. Government is a more serious business and we need to make sure we do not pay the price for its folly. As individuals we should love free markets because operators in them can be as incompetent as they wish.” (181)
“Platonic is top-down, formulaic, closed-minded, self-serving and commoditized; a-Platonic is bottom-up, open-minded, skeptical, and empirical.” (182)
The “empirics” experimented, tinkered until they found something that worked; they did minimal theorizing. It may not “make sense” or fit a theory that acupuncture works, but if it does, then “let’s go with it for now while keeping our minds open.” (183)
“I cannot for the life of me understand why today’s libertarians do not go after tenured faculty (except perhaps because many libertarians are academics).” (183)
“if you believe in free will you can’t truly believe in social science and economic projection. You cannot predict how people will act. . . . You simply [cannot] assume that individuals will be rational in the future and thus act predictably.” (184)
“the model of rational behavior under uncertainty is not just grossly inaccurate but plain wrong as a description of reality.” (185)
“Tolstoy said that happy families were all alike, while each unhappy one is unhappy in its own way.” (185)
“if people make inconsistent choices and decisions, the central core of economic optimization fails. You can no longer produce a ‘general theory,’ and without one you cannot predict.” (185)
p. 187, rails against linear regression analysis. “Always remember that ‘R-square’ is unfit for Extremistan; it is only good for academic promotion.”
“If you are a turkey being fed for a long period of time, you can either naively assume that feeding confirms your safety or be shrewd and consider that it confirms the danger of being turned into supper.” (188)
“What is the most potent use of our brain? It is precisely the ability to project conjectures into the future and play the counterfactual game. . . .” (189)
“Why do we listen to experts and their forecasts? A candidate explanation is that society reposes on specialization, effectively the division of knowledge.” (189)
Epistemocrat: a person who holds his own knowledge to be suspect.
Epistemocracy: a place where laws are structured with this kind of human fallibility in mind. (190)
Montaigne “accepted human weakness and understood that no philosophy could be effective unless it took into account our deeply ingrained imperfections, the limitations of our rationality, the flaws that make us human.”(191)
“To me utopia is an Epistemocracy, . . . a society governed from the basis of awareness of ignorance, not knowledge.” (192)
Danny Kahneman and Dan Gilbert discovered that we do not learn from past experiences: we have a mental block, a distortion in the way we fail to learn from our past errors in projecting the future of our affective states. We grossly overestimate the length of the effect of misfortune on our lives; we can actually adapt to most anything. According to Trivers’s theory of self-deception, we fool ourselves into optimism about the future. (195)
“History is certainly not a place to theorize or derive general knowledge, nor is it meant to help in the future, without some caution. We can get negative confirmation from history, which is invaluable, but we get plenty of illusions of knowledge along with it. . . . Learn to read history, get all the knowledge you can, do not frown on the anecdote, but do not draw any causal links, do not try to reverse engineer too much. . . . The more we try to turn history into anything other than an enumeration of accounts to be enjoyed with minimal theorizing, the more we get into trouble.” (199)
Cautions against comparing fall of Rome to U.S. or any comparison of a simpler time to complex times. (199)
“Another error is to draw causal conclusion from the absence of nuclear war, since . . . we would not be here had a nuclear war taken place, and it is not a good idea for us to derive a ‘cause’ when our survival is conditioned on that cause.” (199)
Bertrand Russell called for teaching/learning the virtue of suspending judgment. But Taleb does not believe this can be taught. “We cannot teach people to withhold judgment; judgments are embedded in the way we view objects. I do not see a ‘tree’; I see a pleasant or an ugly tree. It is not possible without great, paralyzing effort to strip these small values we attach to matters.” (202)
“The lesson . . . is: be human! Accept that being human involves some amount of epistemic arrogance in running your affairs. Do not be ashamed of that. Do not try to always withhold judgment–opinions are the stuff of life. Do not try to avoid predicting—yes . . . I am not urging you to stop being a fool. Just be a fool in the right places. What you should avoid is unnecessary dependence on large-scale harmful predictions—those and only those. Avoid the big subjects that may hurt your future: be fooled in small matters, not in large. Do not listen to economic forecasters or to predictors in social science, but do make your own forecast for the picnic.”(203)
“rank beliefs not according to their plausibility but by the harm they may cause.” (203)
“Knowing that you cannot predict does not mean that you cannot benefit from unpredictability.” (203)
“The bottom line: be prepared! . . . Be aware of the numbing effect of magic numbers. Be prepared for all relevant eventualities.” (203)
“the reason I felt immediately at home in America is precisely because American culture encourages the process of failure; unlike the cultures of Europe and Asia where failure is met with stigma and embarrassment. America’s specialty is to take these small risks for the rest of the world, which explains this country’s disproportionate share in innovations.” (204)
Taleb advocates working to expose yourself to potential positive BSs. “Seize any opportunity, or anything that looks like opportunity.” (208)
The Barbell Strategy: “If you know that you are vulnerable to prediction errors, and if you accept that most ‘risk measures’ are flawed, because of the Black Swan, then your strategy is to be as hyper-conservative and hyper-aggressive as you can be instead of being mildly aggressive or conservative. Instead of putting your money in ‘medium risk’ investments, you need to put a portion, say 85 to 90 percent, in extremely safe investments, like Treasury bills. . . The remaining 10 to 15 percent you put in extremely speculative bets, as leveraged as possible (like options), preferably venture capital-style portfolios.” (205)
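The arithmetic behind the barbell can be sketched directly. This is a toy illustration of mine (function names and the 85/15 split are my choices within Taleb’s stated range), not an investment tool:

```python
def barbell_split(total, safe_fraction=0.9):
    """Split a portfolio into a hyper-safe sleeve (e.g., T-bills) and a
    small, hyper-aggressive speculative sleeve."""
    safe = total * safe_fraction
    return safe, total - safe

def worst_case_loss(total, safe_fraction=0.9):
    """If every speculative bet goes to zero while the safe sleeve holds,
    the loss is capped at the speculative sleeve -- no 'risk measure'
    or probability estimate is relied on anywhere."""
    return total - total * safe_fraction

safe, speculative = barbell_split(100_000, safe_fraction=0.85)
print(safe, speculative)
print(worst_case_loss(100_000, safe_fraction=0.85))
```

The point of the construction: the downside is bounded in advance regardless of the distribution of the speculative payoffs, while the upside remains open to positive Black Swans.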
“If venture capital firms are profitable, it is not because of the stories they have in their heads, but because they are exposed to unplanned rare events.” (205)
There are positive BS businesses, movies, some segments of publishing, scientific research, venture capital. (207) In these businesses you “lose small to make big.”
But if you pay too much to invest in these businesses, there is little upside potential (à la the dot-com bubble). (207)
For defense and terrorism prep: don’t try to predict the exact BS disaster; instead, “invest in preparedness, not in prediction. Remember that infinite vigilance is just not possible.” (208)
“Beware of precise plans by governments. . . Remember that the interest of these civil servants is to survive and self-perpetuate—not to get to the truth.” (209)
“The Achilles’ heel of capitalism is that if you make corporations compete, it is sometimes the one that is most exposed to the negative Black Swan that will appear to be the most fit for survival.” (209)
“If you hear a ‘prominent’ economist using the word equilibrium, or normal distribution, do not argue with him; just ignore him, or try to put a rat down his shirt.” (210)
“the notion of asymmetric outcomes is the central idea of this book: I will never get to know the unknown since, by definition, it is unknown. However, I can always guess how it might affect me, and I should base my decisions around that.” (210)
“This idea that in order to make a decision you need to focus on the consequences (which you can know) rather than the probability (which you can’t know) is the central idea of uncertainty.” (211)
“the world is moving deeper into Extremistan…” (213)
“many rare events can yield their structure to us; it is not easy to compute their probability, but it is easy to get a general idea about the possibility of their occurrence. We can turn these Black Swans into Gray Swans, so to speak, reducing their surprise effect.” (213)
Zipf’s Law: the more you use a word, the less effortful you will find it to use that word again.
English has become a lingua franca not because U.S. English is the easiest or best, but because it was the most obvious one to select.
In our connected, Extremistan world, the web also promotes acute concentration, “winner take all” situations, like Microsoft, Google. (223)
But the web also enables the formation of “a reservoir of proto-Googles waiting in the background” and “inverse Google. . . people with a technical specialty to find a small, stable audience.” Taleb calls this “a long tail.” (223)
“The long tail implies that the small guys, collectively, should control a large segment of culture and commerce, thanks to the niches and subspecialties that can now survive thanks to the Internet. But, strangely, it can also imply a large measure of inequality; a large base of small guys and a very small number of super giants, together representing a share of the world’s culture—with some of the small guys, on occasions, rising to knock out the winners. (This is the ‘double tail’; a large tail of the small guys and a small tail of the big guys.)” (224)
“The role of the long tail is fundamental in changing the dynamics of success, destabilizing the well-seated winner, and bringing about another winner. . . . The long tail’s contribution is not yet numerical; it is still confined to the Web and its small-scale online commerce. But consider how the long tail could affect the future of culture, information, and political life. It could free us from the dominant political parties, from the academic system, from the clusters of the press—anything that is currently in the hands of ossified, conceited, and self-serving authority.” (224)
Economic globalization “is not all for the good: It created interlocking fragility, while reducing volatility and giving the appearance of stability. In other words it creates devastating Black Swans. We have never lived before under the threat of a global collapse. Financial institutions have been merging into a smaller number of very large banks. Almost all banks are now interrelated. So the financial ecology is swelling into gigantic, incestuous, bureaucratic banks (often Gaussianized in the risk measurement)—when one falls, they all fall.” (225)
There is also concentration in networks. “Networks have a natural tendency to organize themselves around an extremely concentrated architecture: a few nodes are extremely connected; others barely so. . . Concentration of this kind is not limited to the Internet; it appears in social life (a small number of people are connected to others), in electricity grids, in communications networks. This seems to make networks more robust: random insults to most parts of the network will not be consequential since they are likely to hit a poorly connected spot. But it also makes networks more vulnerable to Black Swans.” (226)
“We would be far better off if there were a different ecology, in which financial institutions went bust on occasion and were rapidly replaced by new ones, thus mirroring the diversity of Internet business and the resilience of the Internet economy.” (226-227)
“The main point of the Gaussian [normal distribution] . . . is that most observations hover around the mediocre, the average; the odds of a deviation decline faster and faster (exponentially) as you move away from the average.” (231)
In contrast: the Mandelbrotian. “The speed of the decrease here remains constant (and does not decline)!” The same goes for “power laws” and “scalables.” (233)
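The contrast can be made concrete. In the sketch below (my own illustration, not Taleb’s), doubling the threshold cuts a power law’s tail odds by the same factor every time, while the Gaussian’s odds collapse faster and faster:

```python
import math

def gaussian_survival(k):
    """P(X > k) for a standard normal variable."""
    return 0.5 * math.erfc(k / math.sqrt(2))

def powerlaw_survival(x, alpha=2.0):
    """P(X > x) for a power-law (Pareto) tail with exponent alpha, x >= 1."""
    return x ** (-alpha)

# How much do the odds shrink when the threshold doubles?
for x in (2, 4, 8):
    g = gaussian_survival(2 * x) / gaussian_survival(x)
    p = powerlaw_survival(2 * x) / powerlaw_survival(x)
    print(x, g, p)   # the power-law ratio is always exactly 0.25
```

That constant ratio is what “scalable” means: far out in the tail, the next doubling is never ruled out; the Gaussian, by contrast, makes each further doubling astronomically less likely than the last.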
The 80/20 rule is the common signature of a power law. It started when Vilfredo Pareto observed that about 80% of the land in Italy was owned by 20% of the people. The 80/20 rule is a metaphor, not a rule, and not a very good guideline; for example, about 97% of book sales are made by 20% of authors. (235)
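The 80/20 signature pins down a tail exponent. Assuming the standard Pareto share formula, in which the top fraction p of units holds the share p**(1 − 1/alpha), a short sketch (mine, not Taleb’s) recovers the implied exponents:

```python
import math

def pareto_alpha(top_fraction, share):
    """Tail exponent alpha implied by 'the top fraction p of units holds
    this share of the total,' using share = p**(1 - 1/alpha)."""
    return 1.0 / (1.0 - math.log(share) / math.log(top_fraction))

print(pareto_alpha(0.20, 0.80))   # the classic 80/20: alpha around 1.16
print(pareto_alpha(0.20, 0.97))   # the 97/20 of book sales: alpha near 1
```

The lower the exponent, the wilder the concentration; 97/20 is much closer to winner-take-all than 80/20 is.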
“the traditional Gaussian way of looking at the world begins by focusing on the ordinary, and then deals with exceptions or so-called outliers as ancillaries. But there is a second way, which takes the exceptional as a starting point and treats the ordinary as subordinate.” (236)
“The notion of standard deviation is meaningless outside of Mediocristan.” (239)
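One way to see this (an illustration of mine, not Taleb’s): build deterministic quantile “samples” from a Pareto distribution with infinite variance and watch the sample standard deviation grow with sample size instead of settling toward a stable value:

```python
import math

def pareto_quantile_sample(n, alpha=1.5):
    """Deterministic stand-in for a Pareto(alpha) sample: one point per
    evenly spaced quantile. alpha = 1.5 means the variance is infinite."""
    return [(1 - (i + 0.5) / n) ** (-1.0 / alpha) for i in range(n)]

def stdev(xs):
    """Plain population standard deviation."""
    m = sum(xs) / len(xs)
    return math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))

# The "standard deviation" never settles down -- it grows with sample size.
print(stdev(pareto_quantile_sample(100)))
print(stdev(pareto_quantile_sample(10_000)))
```

In Mediocristan more data stabilizes the estimate; here more data just lets in bigger observations, so the number being reported as “risk” depends mostly on how long you happened to watch.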
“If you use the term statistically significant, beware of the illusions of certainties. Odds are that someone has looked at his observation errors and assumed that they were Gaussian, which necessitates a Gaussian context, namely, Mediocristan, for it to be acceptable.” (240)
“If you’re dealing with qualitative inference, such as in psychology or medicine, looking for yes/no answers to which magnitudes don’t apply, then you can assume you’re in Mediocristan without serious problems. The impact of the improbable cannot be too large. You have cancer or you don’t…. But if you are dealing with aggregates, where magnitudes do matter, such as income, your wealth, return on a portfolio, or book sales, then you will have a problem and get a wrong distribution if you use the Gaussian. . . One single number can disrupt all your averages; and a single loss can eradicate a century of profits.” (245)
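The “one single number” point fits in a few lines. A toy illustration (the figures are mine, not Taleb’s):

```python
# One Extremistan observation swamps the average of a thousand.
incomes = [50_000] * 999              # 999 ordinary incomes
mean_before = sum(incomes) / len(incomes)

incomes.append(5_000_000_000)         # one billionaire-scale income
mean_after = sum(incomes) / len(incomes)

print(mean_before)   # 50000.0
print(mean_after)    # roughly a hundred times larger
```

No single height could do this to the average height of a thousand people; that asymmetry is exactly the Mediocristan/Extremistan divide.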
The Gaussian assumes that events are independent, like coin flips. But if you introduce memory, or skill in flipping, Gaussian assumptions won’t hold: probabilities become dependent on history. The Gaussian also assumes outcomes stay roughly the same size—if an outcome can be wildly different, the normal/Gaussian distribution is N/A. (251)
“The ubiquity of the Gaussian is not a property of the world, but a problem in our minds, stemming from the way we look at it.” (251)
“You cannot use one single measure for randomness called standard deviation (and call it ’risk’); you cannot accept a simple answer to characterize uncertainty.” (252)
This is NOT from Taleb:
The efficient market hypothesis contends that at any moment the prices of financial securities traded in public markets reflect knowledgeable investors’ opinions, and that it is impossible to make excess profits. It is a sophisticated version of the old joke about the economist who refuses to pick up a $20 bill lying in the street because, he insists, if it were really there someone would have picked it up already.
Mandelbrot contended that security price variations don’t follow normal distributions but “fat tailed” curves in which extreme events are much more likely to happen than predicted by efficient market theory. This is Taleb’s view as well, validated by recent years.
You can always ferret out predecessors for any idea you get; find someone who worked on part of your argument and cite them as backup. (256)
Fractal is a word Mandelbrot coined to describe the geometry of the rough and broken appearance of nature. (257)
Nature seems to use basic shapes at a small level, that repeat at large levels/sizes. A short and simple rule of interaction that can be used by nature, or a computer, to build shapes of seemingly great complexity. Some music, poetry is also fractal. (258)
The fractal has statistical measures that are somewhat preserved across scales—the ratio is the same, unlike the Gaussian. Wealth is scale independent or, more precisely, of unknown scale dependence. If the top 1% of the wealthy hold 10% of the wealth, the top 20% will hold about 45% of the wealth.
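Those numbers are consistent with a Pareto tail of exponent 2. A quick check (my sketch, assuming the share of the top fraction p is p**(1 − 1/alpha)):

```python
def top_share(p, alpha):
    """Share of the total held by the top fraction p under a Pareto tail."""
    return p ** (1.0 - 1.0 / alpha)

# Top 1% holding 10% implies alpha = 2, since 0.01**0.5 = 0.1:
alpha = 2.0
print(top_share(0.01, alpha))    # about 0.10
print(top_share(0.20, alpha))    # about 0.45 -- the figure in the note

# Scale invariance: zooming in tenfold divides the share the same way
# at every scale, which is the "ratio is the same" property.
print(top_share(0.001, alpha) / top_share(0.01, alpha))
print(top_share(0.01, alpha) / top_share(0.1, alpha))
```

The two ratios printed at the end are identical; a Gaussian has no such scale-free self-similarity.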
Mandelbrot and Taleb believe fractals should be the default assumption for data, not the normal/Gaussian distribution. You still have unpredictable BSs, but you are more inclined to expect big outliers—more likely to have Gray Swans. (262)
A Gray Swan concerns modelable extreme events, a BS is an unknown unknown. (272)
“We are teaching people methods from Mediocristan and turning them loose in Extremistan.” (274)
In the last 50 years, the ten most extreme days in the financial markets represent half the returns. (275)
After the ’87 stock market crash, which should have proven the risk models were bullshit, the Nobel committee gave the Economics Prize to Harry Markowitz and William Sharpe for their Modern Portfolio Theory—which was thoroughly Gaussian and full of the nonsense of predictable risk. (277)
Portfolio Theory still used today.
Gaussian theory is widely taught, along with the related Sharpe Ratio, pervading higher education and the financial industry. In 1997 the Swedish academy gave another round of prizes, to Scholes and Merton. Portfolio theory and Gaussian risk analysis were “safe” since so widely accepted. When these risk methods fail because of their flaws (financial markets, stocks, etc. do not follow the normal distribution), people “can claim that they relied on standard scientific method” in their defense. (278)
An option investment (bet) on a Black Swan benefits hugely when Scholes and Merton’s “formula” misses. “The option payoff is so powerful that you do not have to be right on the odds: you can be wrong on the probability, but get a monstrously large payoff. I’ve called this the ‘double bubble’: the mispricing of the probability and that of the payoff.” (279)
“Gaussianizations do not have realistic assumptions and do not produce reliable results.” (280)
Long Term Capital Management had not just Robert Merton and Myron Scholes as founding partners, but the “highest ranks of academia” in their firm. “They managed to enlarge the ludic fallacy to industrial proportions.” (281) In 1998 large events “outside their models” took down not just LTCM but “almost took down the entire financial system with it, as the exposures were massive. Since their models ruled out the possibility of large deviations, they allowed themselves to take a monstrous amount of risk.” (282)
As of 2010, over a decade later, schools continue to teach portfolio theory.
“A scholar who applies such methodology resembles Locke’s definition of a madman: someone ‘reasoning correctly from erroneous premises.’” (283)
“This is where you learn from the minds of military people and those who have responsibilities in security. They do not care about ‘perfect’ ludic reasoning; they want realistic ecological assumptions. In the end, they care about lives.” (283)
Or, as Taleb put it in Table 4, his “Two ways to approach randomness” chart (284):
- Skeptical Empiricists: “Prefers to be broadly right” / Platonic approach: “precisely wrong”
- Skeptical Empiricists: “Does not believe that we can easily compute probabilities” / Platonic: “Built their entire apparatus on the assumptions that we can compute probabilities”
- Skeptical Empiricists: “Develops intuitions from practice, goes from observations to books” / Platonic: “Relies on scientific papers, goes from books to practice”
- Skeptical Empiricists: “Ideas based on skepticism, on the unread books in the library” / Platonic: “Ideas based on beliefs, on what they think they know”
- Skeptical Empiricists: “Assumes Extremistan as a starting point” / Platonic: “Assumes Mediocristan as a starting point”
- Skeptical Empiricists: “Seeks to be approximately right across a broad set of eventualities” / Platonic: “Seeks to be perfectly right in a narrow model, under precise assumptions”
“I want to minimize reliance on theories, stay light on my feet, and reduce my surprises. I want to be broadly right rather than precisely wrong.” (285)
The uncertainty principle in quantum physics holds that you cannot measure certain pairs of values, such as the position and momentum of a particle; what you gain in precision on one you lose on the other—there is an incompressible uncertainty. We do not have such precise uncertainty in social and political science, “so when you hear ‘experts’ presenting the problems of uncertainty in terms of subatomic particles, odds are that the expert is a phony.” (287)
People can’t even predict how long they will be happy with a recently acquired object, how long their marriages will last—yet we pretend to have social “science” and predictability. (288)
Taleb insists you should not go from books to problems, but from problems to books. He cites Karl Popper sharing this view: “Genuine philosophical problems are always rooted outside philosophy and they die if these roots decay. . . . These roots are easily forgotten by philosophers who ‘study’ philosophy instead of being forced into philosophy by the pressure of non-philosophical problems” (Popper, quoted in Taleb, 290-91)
“my antidote to Black Swans is precisely to be noncommoditized in my thinking.” (292)
“worry less about small failures, more about large, potentially terminal ones.” (296)
“I am very aggressive when I can gain exposure to positive Black Swans—when a failure would be of small moment—and very conservative when I am under threat from a negative Black Swan. I am very aggressive when an error in a model can benefit me, and paranoid when the error can hurt.” (296)
Postscript Essay (for Second Edition)
Lehman Brothers went bust during the financial crisis of 2008.
“Mother Nature likes redundancies. Redundancy equals insurance. . . . The exact opposite of redundancy is naïve optimization. . . . An economist would find it inefficient to maintain two lungs and two kidneys.” (312)
Ricardo advocated national specialization, “comparative advantage”—but that makes you very vulnerable either to loss of the market for your product or to being unable to import vital goods. (313)
“Globalization might give the appearance of efficiency, but the operating leverage and the degrees of interaction between parts will cause small cracks in one spot to percolate through the entire system.” (313)
We engage in borrowing even though traditional cultures (and their maxims) advised against debt.
“debt is dangerous if you have some overconfidence about the future and are Black Swan blind, which we all tend to be.” (314)
“one of the purposes of religion and tradition [such as don’t go into debt] has been to enforce interdicts—simply to protect people against their own epistemic arrogance.” (314)
“Mother Nature does not like anything too big. . . . one bank failure, that of Lehman Brothers, in September 2008, brought down the entire edifice. Mother Nature does not limit the interactions between entities; it just limits the size of the units.” (314)
“much more stability would be achieved by stopping governments from helping companies when they become large and by giving back advantages to the small guy.”(314)
“Wall Street analysts (MBA types) will pressure companies to sell the extra kidney and ditch insurance to raise their ‘earnings per share’ and ‘improve their bottom line’—hence eventually contributing to their bankruptcy.” (315)
Taleb shares my view of global warming: not sure of the models, but:
“we need to be hyper-conservationists ecologically, since we do not know what we are harming with now. That’s the sound policy under conditions of ignorance and epistemic opacity. To those who say ‘We have no proof that we are harming nature,’ a sound response is “We have no proof that we are not harming nature, either’; the burden of the proof is not on the ecological conservationist, but on someone disrupting an old system.” (316)
This was covered in Rohan Nation: Reinventing America after the 2020 Collapse:
“As we travel more on this planet, epidemics will be more acute—we will have a germ population dominated by a few numbers, and the successful killer will spread vastly more effectively. . . . I am not saying that we need to stop globalization and prevent travel. We just need to be aware of the side effects, the tradeoffs—and few people are.” (317)
The crisis of 2008 was not a BS, “only the result of fragility in systems built upon ignorance—and denial—of the notion of Black Swan events. You know with near certainty that a plane flown by an incompetent pilot will eventually crash.” (321)
“economists are the most Black-Swan-blind species on the planet.” (321)
What Taleb recommends for public policy:
“the idea is not to correct mistakes and eliminate randomness from social and economic life through monetary policies, subsidies, and so on. The idea is simply to let human mistakes and miscalculations remain confined, and to prevent their spreading through the system, as Mother Nature does.” (322)
Cites Dr. McGruff and others arguing that the human body was designed/evolved to deal with extremes, feasts and famines; regular exercise and regular eating are not better for you than extremes. (326)
Hunger strengthens the body and immune system, helps rejuvenate brain cells. (328)
Section on “Beware Manufactured Stability”:
“Preventing small forest fires sets the stage for more extreme ones; giving out antibiotics when it is not very necessary makes us more vulnerable to severe epidemics—and perhaps to that big one, the grand infection that will be resistant to known antibiotics and will travel on Air France.” (329)
In economic terms; “Making something artificially bigger (instead of letting it die early if it cannot survive stressors) makes it more and more vulnerable to a very severe collapse.” (329)
“the U.S. government (or, rather, the Federal Reserve) have been trying for years to iron out the business cycle, leaving us exposed to a severe disintegration. This is my argument against ‘stabilization’ policies and the manufacturing of a nonvolatile environment.” (329)
One of the reasons we continue to use Gaussian means and forecasting is that people feel that a bad map or approach is better than none at all. (331)
Actually, we are probably better off with no map than with a bad one; but it is best to know the map is bad. Taleb doesn’t say don’t forecast or don’t use models; he says “Do not use sterile forecasts with huge error.” (332)
“evolution does not work by teaching, but destroying.” (334)
The 2008 collapse took place about a year and a half after BS was published. Taleb had been expecting it, betting against the banking system and helping his clients be BS-robust. (337)
Calls econometrics “an abhorrent field” “that, if any scientific scrutiny was applied to it, would not exist.” (338)
A BS event is not necessarily unknown to everyone (9/11 was no surprise to its planners): “a Black Swan for the turkey is not a Black Swan for the butcher.” (339)
Also, a BS event is not just about the occurrence of the event, but about its depth and consequences. (340)
Greenspan, called to Congress to explain the banking crisis of 2008, argued that it could not have been foreseen because it “had never happened before.” A foolish argument, but no one in Congress was smart enough to retort that we can predict things that have not happened before, such as our own future deaths. (342)
Taleb condemns the common practice of “stress tests” that use worst cases of the worst past deviation seen, not thinking through possible bad outcomes that have never occurred in the past. (342)
Distinguish between epistemic limitations (lack of knowledge) and ontological (or ontonic) uncertainty which is randomness where the future is not implied by the past or anything. There is random uncertainty that you can never know, will never be able to forecast. (344)
Ontonic uncertainty “is created every minute by the complexity of our actions, which makes the uncertainty much more fundamental than the epistemic one coming from imperfections in knowledge. It means that there is no such thing as a long run for such systems, called ‘nonergodic’ systems—as opposed to the ‘ergodic’ ones. In an ergodic system, the probabilities of what may happen in the long run are not impacted by events that may take place, say, next year. . . . So ergodic systems are invariant, on average, to paths, taken in the intermediate term—what researchers call absence of path dependency. A nonergodic system has no real long-term properties—it is prone to path dependency. I believe that the distinction between epistemic and ontic uncertainty is important philosophically, but entirely irrelevant in the real world. Epistemic uncertainty is so hard to disentangle from the more fundamental one. This is the case of a ‘distinction without a difference’. . . .” (344)
“There is no such thing as a ‘long run’ in practice; what matters is what happens before the long run. The problem of using the notion of ‘long run’ . . . is that it usually makes us blind to what happens before the long run. . . .” ( 344)
Knightian uncertainty—uncomputable (345)
“One has to have a mental problem to think that probabilities of future events are ‘measurable’ in the same sense that the temperature is measurable by a thermometer.” (345)
Another common social-theory assumption is “rational expectations,” in which people rationally converge on the same inference when supplied with the same data, even if they start with different initial views. Taleb calls this nonsense: you need just “a very quick check to see that people do not converge to the same opinions in reality.” (346)
Taleb believes that The Black Swan is “the very first attempt (that I know of) in the history of thought to provide a map of where we get hurt by what we don’t know, to set systematic limits to the fragility of knowledge—and to provide exact locations where these maps no longer work.” (347)
Taleb explains, “I am not saying ‘S**t happens,’ I am saying ‘S**t happens in the Fourth Quadrant’ [Extremistan with Complex Payoffs], which is as different as mistaking prudence and caution for paranoia.” (347)
By focusing on defenses against bad BSs in the Fourth Quadrant we can build a safer society. (348)
You cannot doubt everything and function, nor believe everything. Charles Sanders Peirce proposed that we update and correct beliefs as a continuous work in progress, viewing knowledge as “a rigorous interplay between anti-skepticism and fallibilism, i.e. between the two categories of what to doubt and what to accept.” (348)
Taleb points out that “the famous paper by Reverend Bayes that led to what we call Bayesian inference did not give us ‘probability’ but expectation (expected average). Statisticians had difficulties with the concept so extracted probability from payoff. Unfortunately, this reduction led to the reification of the concept of probability, its adherents forgetting that probability is not natural in real life.” (351-52)
“Never forget that the probability of a rare event cannot be estimated from empirical observation for the very reason that they are rare.” (350)
“the rarer the event, the less we know about its role—and the more we need to compensate for that deficiency with an extrapolative, generalizing theory.” (352)
To help explain how NON-average and non-normally-distributed the real world is, remember that:
- Less than 0.25 percent of all companies in the world represent about half the market capitalization
- Less than a tiny percent of novels account for half of fiction sales
- Less than 0.1 percent of drugs generate half drug industry sales
- Less than 0.1 percent of risky events will cause at least half the damages and losses
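Figures like these are what power laws with tail exponents near 1 produce. A sketch of mine, assuming the share of the top fraction p of units is p**(1 − 1/alpha), inverts the formula to ask what fraction accounts for half the total:

```python
def fraction_for_half(alpha):
    """Fraction p of units that accounts for half the total, inverting
    share = p**(1 - 1/alpha) for a Pareto tail with exponent alpha."""
    return 0.5 ** (1.0 / (1.0 - 1.0 / alpha))

# The closer alpha gets to 1, the more winner-take-all the world is.
for alpha in (1.1, 1.16, 1.5):
    print(alpha, fraction_for_half(alpha))
```

At alpha = 1.5 it takes about 12.5% of the units to carry half the total; near alpha = 1.1, a small fraction of a percent does, which is the neighborhood of the company and drug figures above.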
To reinforce the idea that we cannot look back on history to explain how BS events occurred, or what happened, Taleb uses the melted ice cube example. Imagine an ice carving that has melted, leaving a puddle of water—you can’t figure out from the puddle what it was before the meltdown. “It is another manifestation of the error of Platonicity, of thinking that the Platonic form you have in your mind is the one you are observing outside the window.”
Taleb applies this to medicine as well: “We assume that we know the logic behind an organ, what it was made to do, and thus we can use this logic in our treatment of the patient. It has been very hard in medicine to shed our theories of the human body. Likewise, it is easy to construct a theory in your mind, or pick it up from Harvard, then go project it on the world.” (353)
“in real life we do not observe probability distributions. We just observe events. So I can rephrase the results as follows: we do not know the statistical properties—until, of course, after the fact.” (353)
“plenty of statistical distributions can correspond to the exact same realizations. . . . Under nonlinearities, the families of possible models/parametrization explode in numbers.” (353)
The worst BSs, the rarest and deadliest, are unfortunately precisely the ones that are hardest to predict—there is less past experience from which to expect them, and (for natural or repeated BSs) fewer survivors to tell their stories. Not seeing these BSs leads, inherently, to an overestimation of safety and stability, and an underestimation of potential volatility and risk. (354)
This point is also made strongly in Rohan Nation: Reinventing America after the 2020 Collapse. Taleb writes:
“This point—that things have a bias to appear more stable and less risky in the past, leading us to surprises—needs to be taken seriously, particularly in the medical field. The history of epidemics, narrowly studied, does not suggest the risks of the great plague to come that will dominate the planet. Also I am convinced that in doing what we are to the environment, we greatly underestimate the potential instability we will experience somewhere from the cumulative damage we have done to nature.” (354)
“we are suckers and will gravitate toward those variables that are unstable but that appear stable.” (354)
“There is no reliable way to compute small probabilities.” (355)
Kurtosis is a statistical measure of how fat the tails are—how much rare events play a role. “The instability of the kurtosis implies that a certain class of statistical measures should be totally disallowed. This proves that everything relying on ‘standard deviation,’ ‘variance,’ ‘least square deviation,’ etc., is bogus.” (355)
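The instability Taleb describes is easy to reproduce. A minimal sketch, assuming Student-t draws with 3 degrees of freedom (my choice of a fat-tailed distribution, not Taleb's example): the fourth moment is infinite, so the sample kurtosis never settles and swings from sample to sample:

```python
import math
import random

random.seed(7)

def student_t(df):
    """One Student-t draw: standard normal over sqrt(chi-square / df)."""
    z = random.gauss(0.0, 1.0)
    chi2 = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(df))
    return z / math.sqrt(chi2 / df)

def sample_kurtosis(xs):
    """Plain moment-based kurtosis: m4 / m2^2 (Gaussian value is 3)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m4 = sum((x - m) ** 4 for x in xs) / n
    return m4 / m2 ** 2

# Twenty samples of 10,000 t(3) draws each: because the true fourth
# moment is infinite, the measured kurtosis is dominated by the single
# largest draw and varies wildly across samples.
kurts = [sample_kurtosis([student_t(3) for _ in range(10_000)]) for _ in range(20)]
print(f"sample kurtosis ranges from {min(kurts):.1f} to {max(kurts):.1f}")
```

Any risk measure built on these unstable moments (standard deviation, variance, least squares) inherits the same instability, which is Taleb's complaint.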
“there is ‘no’ typical failure and ‘no’ typical success. You may be able to predict the occurrence of a war, but you will not be able to gauge its effect!” (356)
“This absence of ‘typical’ events in Extremistan is what makes something called prediction markets (in which people are assumed to make bets on events) ludicrous, as they consider events to be binary. ‘A war’ is meaningless: you need to estimate its damage—and no damage is typical. Many predicted that the First World War would occur, but nobody really predicted its magnitude. One of the reasons economics does not work is that the literature is almost completely blind to this point.” (356)
Taleb believes that as we move further into Extremistan, BS events may become even rarer, but bigger still in magnitude. (357)
As of 2010, US Gov’t is still stress testing banks using past large deviations, not the extreme deviations of a BS. (357)
Because of the large deviations of Extremistan in economics, plus feedback effects and nonlinearities, attempting to predict using mathematical equations is impossible. (360)
The Fourth Quadrant—where you are in Extremistan and payoffs are complex—is the domain of the Black Swan, the area of highest risk. (365)
If you are in the Fourth Quadrant and face bad risks—negative BSs—try to get out. Reduce your exposure where there is big loss potential; increase your exposure to positive BSs. (366)
Fight the tendency to latch onto a forecast or any estimate that provides reassurance—but false reassurance.
“people do not realize that success consists mainly in avoiding losses, not in trying to derive profits.” (368)
We must fight the tendency to “do something” when doing nothing is often the best course of action. (368)
“Science, particularly its academic version, has never liked negative results, let alone the statement and advertising of its own limits.” (368)
Note: in military intelligence, you are taught and encouraged to say “Sir, I do not know, but I’ll research and get back to you,” not to guess an answer for the sake of replying.
Taleb cites the example of doctors “driven by the beastly illusion of control” and the desire to “do something,” doing things like bleeding patients (George Washington on his deathbed, for example) when the best COA was to do nothing. The term iatrogenics—the harm caused by the healer—illustrates the need for the guidance “Do no harm.” (369)
“the elders knew better—Greeks, Romans, Byzantines, and Arabs had a built-in respect for the limits of knowledge.” (369)
“You cannot do anything with knowledge unless you know where it stops, and the costs of using it.” (370)
“Iatrogenics of Regulators. Alas, the call for more (unconditional) regulation of economic activity appears to be a normal response. My worst nightmares have been the results of regulators. It was they who promoted the reliance on ratings by credit agencies and the ‘risk measurement’ that fragilized the system as bankers used them to build positions that turned sour. Yet every time there is a problem, we do the Soviet-Harvard thing of more regulation, which makes investment bankers, lawyers, and former regulators-turned-Wall-Street-advisers rich.” (370)
If you cannot get out of the Fourth Quadrant (things like exposure to pandemics, climate change), then Taleb prescribes the following “rules of ‘wisdom’” to increase robustness:
1. “Have respect for time and nondemonstrative knowledge. Recall my respect for Mother Earth—simply because of its age. . . . Remember that the burden of proof lies on someone disturbing a complex system, not on the person protecting the status quo.” (371)
2. “Avoid optimization; learn to love redundancy. . . . Overspecialization also is not a great idea. Consider what can happen to you if your job disappears completely.” (371)
3. “Avoid prediction of small-probability payoffs—though not necessarily of ordinary ones. Obviously, payoffs from remote events are more difficult to predict.” (372)
4. “Beware the ‘atypicality’ of remote events. There are suckers’ methods called ‘scenario analysis’ and ‘stress testing’—usually based on the past (or on some ‘make sense’ theory). Yet past shortfalls do not predict subsequent shortfalls, so we do not know what exactly to stress-test for.” (372)
5. “Beware moral hazard with bonus payments. . . . Bankers are always rich because of this bonus mismatch. In fact, society ends up paying for it.” (372)
6. “Avoid some risk metrics. Conventional metrics, based on Mediocristan, adjusted for large deviations, don’t work. . . . Words like ‘standard deviation’ are not stable and do not measure anything in the Fourth Quadrant. Neither do ‘linear regression,’ ‘Sharpe ratio,’ Markowitz optimal portfolio, ANOVA shmanova, Least square, and literally anything mechanistically pulled out of a statistics textbook.” (372)
7. “Positive or negative Black Swan? . . . Life expectancy of humans is not as long as we suspect because the data are missing something central: the big epidemic (which far outweighs the gains from cures). The same, as we saw, with the return on risky investments. On the other hand, research ventures show a less rosy past. A biotech company (usually) faces positive uncertainty, while a bank faces almost exclusively negative shocks.” (372-73)
8. “Do not confuse absence of volatility with absence of risk. Conventional metrics using volatility as an indicator of stability fool us, because the evolution into Extremistan is marked by a lowering of volatility—and a greater risk of big jumps. This has fooled a chairman of the Federal Reserve called Ben Bernanke—as well as the entire banking system. It will fool again.” (373)
9. Beware presentations of risk numbers. (373)
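Rule 8 above—low measured volatility hiding large jump risk—can be illustrated with a toy simulation (the jump size, jump probability, and noise levels are my illustrative assumptions, not market data):

```python
import random
import statistics

random.seed(1)

N = 100_000
# Stream A: ordinary Gaussian noise (Mediocristan).
a = [random.gauss(0, 0.02) for _ in range(N)]
# Stream B: much calmer day to day, but carrying a rare crash
# (probability 1/2000, size -0.5) -- illustrative numbers only.
b = [random.gauss(0, 0.005) + (-0.5 if random.random() < 1 / 2000 else 0.0)
     for _ in range(N)]

# B's measured volatility is lower, yet its worst single loss is far
# larger: absence of volatility is not absence of risk.
print(f"stdev A = {statistics.pstdev(a):.4f}, worst day = {min(a):.3f}")
print(f"stdev B = {statistics.pstdev(b):.4f}, worst day = {min(b):.3f}")
```

A volatility-based metric would rank stream B as the safer of the two, which is exactly the error Taleb attributes to Bernanke and the banking system.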
“Ten Principles for a Black-Swan Robust Society” (374)
1. “What is fragile should break early, while it’s still small. Nothing should ever become too big to fail.”
2. “No socialization of losses and privatization of gains. Whatever may need to be bailed out should be nationalized; whatever does not need a bailout should be free, small, and risk-bearing. We got ourselves into the worst of capitalism and socialism. In France, in the 1980s, the socialists took over the banks. In the United States in the 2000s, the banks took over the government. This is surreal.” (374)
3. “People who were driving a school bus blindfolded (and crashed it) should never be given a new bus. The economics establishment (universities, regulators, central bankers, government officials, various organizations staffed with economists) lost its legitimacy with the failure of the system in 2008. It is irresponsible and foolish to put our trust in their ability to get us out of this mess. It is also irresponsible to listen to advice from the ‘risk experts’ and business school academic still promoting their measurements, which failed us (such as Value-at-Risk). Find the smart people whose hands are clean.” (375)
4. “Don’t let someone making an ‘incentive’ bonus manage a nuclear plant—or your financial risks. Odds are he would cut every corner on safety to show ‘profits’ from these savings while claiming to be ‘conservative’. Bonuses don’t accommodate the hidden risks of blowups. . . . No incentives without disincentives: capitalism is about rewards and punishments, not just rewards.” (375)
5. “Compensate complexity with simplicity. Complexity from globalization and highly networked economic life needs to be countered by simplicity in financial products. The complex economy is already a form of leverage. It’s the leverage of efficiency. Adding debt to that system produces wild and dangerous gyrations and provides no room for error. Complex systems survive thanks to slack and redundancy, not debt and optimization.”(375)
6. “Do not give children dynamite sticks, even if they come with a warning label. Complex financial products need to be banned because nobody understands them, and few are rational enough to know it.” (375)
7. “Only Ponzi schemes should depend on confidence. Governments should never need to ‘restore confidence’. . . . Governments cannot stop the rumors. Simply, we need to be in a position to shrug off rumors, be robust to them.” (375)
8. “Do not give an addict more drugs if he has withdrawal pains. Using leverage to cure the problems of too much leverage is not homeopathy, it’s denial. The debt crisis is not a temporary problem, it’s a structural one. We need rehab.” (376)
9. “Citizens should not depend on financial assets as a repository of value and should not rely on fallible ‘expert’ advice for their retirement. . . . We should learn not to use markets as warehouses of value: they do not harbor the certainties that normal citizens can require, in spite of ‘expert’ opinions. Investments should be for entertainment. Citizens should experience anxiety from their own businesses (which they control), not from their investments (which they do not control).” (376)
10. “Make an omelet with the broken eggs. . . . converting debt into equity, marginalizing the economics and business school establishments, shutting down the ‘Nobel’ in economics, banning leveraged buyouts, putting bankers where they belong, clawing back the bonuses of those who got us here. . . . [We need] a world in which entrepreneurs, not bankers, take the risks, and in which companies are born and die every day without making the news.” (376)
In his short closing chapter on “how to become indestructible,” Taleb cites the example of Seneca, the Roman teacher and Stoic who transformed Greek-Phoenician Stoicism from the theoretical into a practical, moral plan of living—and dying. Seneca embodied what Nietzsche would later call amor fati, “love of fate.” “For Seneca, Stoicism is about dealing with loss, and finding ways to overcome our loss aversion—how to become less dependent on what you have.” (378) Seneca was reportedly wealthy, but ready to lose everything every day.
Danny Kahneman’s prospect theory: if you give someone all kinds of riches for a while, then take them away, the person will be much worse off than if nothing had happened in the first place. (378)