Quotes

You are not a Bayesian homunculus whose reasoning is 'corrupted' by cognitive biases. You just are cognitive biases.

You just are attribution substitution heuristics, evolved intuitions, and unconscious learning. These make up the 'elephant' of your mind, and atop them rides a tiny 'deliberative thinking' module that only rarely exerts itself, and almost never according to normatively correct reasoning.


Rationality: A-Z link

Preface link

Map and Territory The First Book

Biases: An Introduction: link

Statistical Bias: Skews samples so that they less closely resemble larger populations.

If your sampling is not biased, then your estimates about the large population won't be incorrect on average. The more you learn, the smaller your error will tend to be.

With Statistical Bias in your sampling, the more you learn the further you may skew your estimates away from reality.

Such samples may be unrepresentative in a consistent direction.

Cognitive Bias: A systematic way that your innate patterns of thought fall short of truth (or some other attainable goal, such as happiness). Minds can come to undermine themselves, despite "learning" more.

System 1 processes: fast, implicit, associative, automatic cognition.

System 2 processes: slow, explicit, intellectual, controlled cognition.

Cognitive Heuristics: Rough shortcuts our brains have evolved to employ that get the right answer often, but not all the time.

Cognitive biases arise when the corners cut by these heuristics result in a relatively consistent and discrete mistake.

Representativeness Heuristic: our tendency to assess phenomena by how representative they seem of various categories.

Can lead to biases such as conjunction fallacy and base rate neglect.

Conjunction Fallacy: failing to grasp that for all A, B: P(A and B) <= P(A).

The moral? Adding more detail or extra assumptions can make an event seem more plausible, even though the event necessarily becomes less probable.

Experimental subjects considered it less likely that a strong tennis player would “lose the first set” than that he would “lose the first set but win the match.” Making a comeback seems more typical of a strong player, so we overestimate the probability of this complicated-but-sensible-sounding narrative compared to the probability of a strictly simpler scenario.
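A tiny numeric illustration of the inequality; the two probabilities below are made up purely for the sake of the example:

```python
# Hypothetical numbers for a strong tennis player (illustration only).
p_lose_first_set = 0.20
p_win_match_given_lost_first_set = 0.60

# The conjunction "loses the first set AND wins the match":
p_conjunction = p_lose_first_set * p_win_match_given_lost_first_set

print(p_lose_first_set)   # 0.2
print(p_conjunction)      # 0.12 -- never more probable than the bare "loses the first set"
```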

Base Rate Neglect: grounding judgments in how intuitively “normal” a combination of attributes is, neglecting how common each attribute is in the population at large.

Is it more likely that Steve is a shy librarian, or that he’s a shy salesperson? Most people answer this kind of question by thinking about whether “shy” matches their stereotypes of those professions. They fail to take into consideration how much more common salespeople are than librarians—seventy-five times as common, in the United States.[9]

Duration Neglect: evaluating experiences without regard to how long they lasted.

Sunk Cost Fallacy: feeling committed to things you’ve spent resources on in the past, when you should be cutting your losses and moving on.

Confirmation Bias: giving more weight to evidence that confirms what we already believe.

Knowing about a bias, however, is rarely enough to protect you from it. In a study of bias blindness, experimental subjects predicted that if they learned a painting was the work of a famous artist, they’d have a harder time neutrally assessing the quality of the painting. And, indeed, subjects who were told a painting’s author and were asked to evaluate its quality exhibited the very bias they had predicted, relative to a control group. When asked afterward, however, the very same subjects claimed that their assessments of the paintings had been objective and unaffected by the bias—in all groups!

Even when we correctly identify others’ biases, we have a special bias blind spot when it comes to our own flaws.

Quotes:

A rational person, no matter how out of their depth they are, forms the best beliefs they can with the evidence they’ve got.

A rational person, no matter how terrible a situation they’re stuck in, makes the best choices they can to improve their odds of success.

There are some problems where conscious deliberation serves us better, and others where snap judgments serve us better.

Our untrained intuitions don’t tell us when we ought to stop relying on them. Being biased and being unbiased feel the same.

Predictably Wrong

What Do We Mean By "Rationality"?

Epistemic: relating to knowledge or to the degree of its validation.

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory. The art of obtaining beliefs that correspond to reality as closely as possible.

Instrumental rationality: achieving your values. The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.

Not necessarily "your values" in the sense of being selfish values or unshared values: "your values" means anything you care about. On LW we sometimes refer to this as "winning".

Anthropic Principle: a tautological observation that the universe must be compatible with the observer who observes it. This observation is a warning to be aware of the natural anthropocentric bias of our human point of view.

The TV show version of the anthropic principle: all the episodes where the Enterprise does blow up aren't made.

Newcomb's Problem: an external being, Omega, sets up the problem as follows:

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

Omega has put a million dollars in box B iff Omega has predicted that you will take only box B.

The point is not to have an elegant theory of winning - the point is to win; elegance is a side effect.

Secondary Notes:

Grok: understand (something) intuitively or by empathy.

I keep trying to say that rationality is the winning-Way, but causal decision theorists insist that taking both boxes is what really wins, because you can't possibly do better by leaving $1000 on the table... even though the single-boxers leave the experiment with more money. Be careful of this sort of argument, any time you find yourself defining the "winner" as someone other than the agent who is currently smiling from on top of a giant heap of utility.

You shouldn't claim to be more rational than someone and simultaneously envy them their choice - only their choice. Just do the act you envy.

Ordinarily, the work of rationality goes into figuring out which choice is the best - not finding a reason to believe that a particular choice is the best.

Alleged rationalists should not find themselves envying the mere decisions of alleged nonrationalists, because your decision can be whatever you like. When you find yourself in a position like this, you shouldn't chide the other person for failing to conform to your concepts of reasonableness. You should realize you got the Way wrong.

If you ever find yourself keeping separate track of the "reasonable" belief versus the belief that seems likely to be actually true, then either you have misunderstood reasonableness, or your second intuition is just wrong.

That is why I use the word "rational" to denote my beliefs about accuracy and winning - not to denote verbal reasoning, or strategies which yield certain success, or that which is logically provable, or that which is publicly demonstrable, or that which is reasonable.

Secondary Quotes:

You can't tell me, first, that above all I must conform to a particular ritual of cognition, and then that, if I conform to that ritual, I must change my morality to avoid being Dutch-booked. Toss out the losing ritual; don't change the definition of winning. That's like deciding to prefer $1000 to $1,000,000 so that Newcomb's Problem doesn't make your preferred ritual of cognition look bad.

"Everyone is imperfect" is an excellent example of replacing a two-color view with a one-color view. If you say, "No one is perfect, but some people are less imperfect than others," you may not gain applause; but for those who strive to do better, you have held out hope. No one is perfectly imperfect, after all.

(reading) Book Suggestion: PROBABILITY THEORY: THE LOGIC OF SCIENCE by E. T. Jaynes link

I protest against the use of infinite magnitude as something accomplished, which is never permissible in mathematics. Infinity is merely a figure of speech, the true meaning being a limit. - C. F. Gauss

Historically speaking, science won because it displayed greater raw strength in the form of technology, not because science sounded more reasonable.

"Don't try to be clever." And, "Listen to those quiet, nagging doubts. If you don't know, you don't know what you don't know, you don't know how much you don't know, and you don't know how much you needed to know.

The thought you cannot think controls you more than thoughts you speak aloud.

The third virtue is lightness. Let the winds of evidence blow you about as though you are a leaf, with no direction of your own. Beware lest you fight a rearguard retreat against the evidence, grudgingly conceding each foot of ground only when forced, feeling cheated. Surrender to the truth as quickly as you can. Do this the instant you realize what you are resisting; the instant you can see from which quarter the winds of evidence are blowing against you. Be faithless to your cause and betray it to a stronger enemy.

If you fail to achieve a correct answer, it is futile to protest that you acted with propriety.

Feeling Rational link

That which the truth nourishes should thrive.

maladaptive: not providing adequate or appropriate adjustment to the environment or situation.

Anger is generally a maladaptive response in today's environment of tremendous punishments for physical violence, and that's beside the fact that it is an extremely unpleasant feeling.

Secondary Notes/Quotes:

They flinch away, refuse to confront the horror of a hundred and fifty thousand sentient beings annihilated every day. One point eight lives per second, fifty-five million lives per year.

Convert the units, time to life, life to time. 1.8 lives/s => 0.56 s/life

The World Trade Center killed half an hour. As of today, all cryonics organizations together have suspended one minute.

Why truth? And ... link

The first virtue is curiosity.

If your motive is curiosity, you will assign priority to questions according to how the questions, themselves, tickle your personal aesthetic sense. A trickier challenge, with a greater probability of failure, may be worth more effort than a simpler one, just because it is more fun.

irrational epistemic conduct: "If the iron approaches your face, and you believe it is hot, and it is cool, the Way opposes your fear. If the iron approaches your face, and you believe it is cool, and it is hot, the Way opposes your calm."

Emotion which is evoked by correct beliefs or epistemically rational thinking is a "rational emotion"; and this has the advantage of letting us regard calm as an emotional state, rather than a privileged default.

the priority you assign to your questions will reflect the expected utility of their information—how much the possible answers influence your choices, how much your choices matter, and how much you expect to find an answer that changes your choice from its default.

Pure curiosity is a wonderful thing, but it may not linger too long on verifying its answers, once the attractive mystery is gone. But what set humanity firmly on the path of Science was noticing that certain modes of thinking uncovered beliefs that let us manipulate the world. As far as sheer curiosity goes, spinning campfire tales of gods and heroes satisfied that desire just as well, and no one realized that anything was wrong with that.

Predictive Truth is a means to value, and even if a value in itself it's surely not the only value. Beliefs can pay rent in many ways.

wrong beliefs can often be locally optimal (meaning that giving them up might result in a temporary utility loss, or that you can lose utility by not shifting them far enough towards truth), but a maximally rational outlook seems to pay off in the long run.

...What's a bias, again? link

Bias: a certain kind of obstacle to our goal of obtaining truth - its character as an "obstacle" stems from this goal of truth - but there are many obstacles that are not "biases".

Those obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.

"Biases" are distinguished from errors that arise from cognitive content, such as adopted beliefs, or adopted moral duties. These we call "mistakes", rather than "biases", and they are much easier to correct, once we've noticed them for ourselves.

"Biases" are distinguished from errors that arise from damage to an individual human brain, or from absorbed cultural mores; biases arise from machinery that is humanly universal.

Quotes:

"There are forty kinds of lunacy but only one kind of common sense."

Availability link

Availability heuristic: judging the frequency or probability of an event, by the ease with which examples of the event come to mind.

Selective reporting is one major source of availability biases.

Leading Causes Of Deaths In Canada link

| Rank | Cause of Death | Number of Victims, 2012 | % of Total Deaths |
| --- | --- | --- | --- |
| 1 | Malignant neoplasms (cancer) | 74,361 | 30.2 |
| 2 | Diseases of heart (heart disease) | 48,681 | 19.7 |
| 3 | Cerebrovascular diseases (stroke) | 13,174 | 5.3 |
| 4 | Chronic lower respiratory diseases | 11,130 | 4.5 |
| 5 | Accidents (unintentional injuries) | 11,290 | 4.6 |
| 6 | Diabetes mellitus (diabetes) | 6,993 | 2.8 |
| 7 | Alzheimer's disease | 6,293 | 2.6 |
| 8 | Influenza and pneumonia | 5,694 | 2.3 |
| 9 | Intentional self-harm (suicide) | 3,926 | 1.6 |
| 10 | Nephritis, nephrotic syndrome and nephrosis (kidney disease) | 3,327 | 1.3 |

Absurdity bias: events that have never happened are not recalled, and hence are deemed to have probability zero.

underreaction to threats of flooding may arise from "the inability of individuals to conceptualize floods that have never occurred... Men on flood plains appear to be very much prisoners of their experience... Recently experienced floods appear to set an upward bound to the size of loss with which managers believe they ought to be concerned."

While building dams decreases the frequency of floods, damage per flood is afterward so much greater that average yearly damage increases.

The wise would extrapolate from a memory of small hazards to the possibility of large hazards. Instead, past experience of small hazards seems to set a perceived upper bound on risk.

A society well-protected against minor hazards takes no action against major risks, building on flood plains once the regular minor floods are eliminated.

A society subject to regular minor hazards treats those minor hazards as an upper bound on the size of the risks, guarding against regular minor floods but not occasional major floods.

Secondary Notes

Three major circumstances where the absurdity heuristic gives rise to an absurdity bias:

  1. The first case is when we have information about underlying laws which should override surface reasoning. But we may find it hard to attend to mere calculations in the face of surface absurdity, until we see the balloon rise.

  2. The second case is a generalization of the first - attending to surface absurdity in the face of abstract information that ought to override it. If people cannot accept that studies show that marginal spending on medicine has zero net effect, because it seems absurd - violating the surface rule that "medicine cures" - then I would call this "absurdity bias". There are many reasons that people may fail to attend to abstract information or integrate it incorrectly.

  3. The third case is when the absurdity heuristic simply doesn't work - the process is not stable in its surface properties over the range of extrapolation - and yet people use it anyway. The future is usually "absurd" - it is unstable in its surface rules over fifty-year intervals.

The point is not that you can say anything you like about the future and no one can contradict you; but, rather, that the particular practice of crying "Absurd!" has historically been an extremely poor heuristic for predicting the future.

Over the last few centuries, the absurdity heuristic has done worse than maximum entropy - ruled out the actual outcomes as being far too absurd to be considered. You would have been better off saying "I don't know".

Burdensome Details link

Conjunction Fallacy:

need to feel a stronger emotional impact from Occam's Razor - feel every added detail as a burden, even a single extra roll of the dice.

Package-deal fallacy: "It is more probable that universes replicate for any reason, than that they replicate via black holes because advanced civilizations manufacture black holes because universes evolve to make them do it."

had not felt these extra details as extra burdens. Instead they were corroborative detail, lending verisimilitude to the narrative.

Someone presents you with a package of strange ideas, one of which is that universes replicate. Then they present support for the assertion that universes replicate. But this is not support for the package, though it is all told as one story.

Quotes:

Is that a crystal ball in your pocket or are you just happy to be a futurist?

If you can lighten your burden you must do so. There is no straw that lacks the power to break your back.

Negative Conjunction (Disjunction) Fallacy: The more times the word "or" appears in an explanation or theory, the more likely an onlooker will say "Now you're just guessing" and lose confidence in the claim, even though it necessarily becomes more likely.

Planning Fallacy link

Planning fallacy:

  • Asking subjects for their predictions based on realistic "best guess" scenarios; or
  • Asking subjects for their hoped-for "best case" scenarios...

...produced indistinguishable results.

The outside view is when you deliberately avoid thinking about the special, unique features of this project, and just ask how long it took to finish broadly similar projects in the past.

This is counterintuitive, since the inside view has so much more detail - there's a temptation to think that a carefully tailored prediction, taking into account all available data, will give better results.

Experiment has shown that the more detailed subjects' visualization, the more optimistic (and less accurate) they become.

"Human's grossly overestimate what will happen in 5 years but grossly underestimate what will happen in 10."

Illusion of Transparency: Why No One Understands You link

Hindsight bias: people who know the answer vastly overestimate its predictability or obviousness, compared to the estimates of subjects who must guess without advance knowledge.

Sometimes called the I-knew-it-all-along effect.

a third experimental group was told the outcome and also explicitly instructed to avoid hindsight bias, which made no difference: 56% concluded the city was legally negligent.

told one group of subjects that "the goose hangs high" meant that the future looks good; another group of subjects learned that "the goose hangs high" meant the future looks gloomy. Subjects were then asked which of these two meanings an uninformed listener would be more likely to attribute to the idiom. Each group thought that listeners would perceive the meaning presented as "standard".

Shortly after September 11th 2001, I thought to myself, and now someone will turn up minor intelligence warnings of something-or-other, and then the hindsight will begin. Yes, I'm sure they had some minor warnings of an al Qaeda plot, but they probably also had minor warnings of mafia activity, nuclear material for sale, and an invasion from Mars.

After September 11th, the FAA prohibited box-cutters on airplanes—as if the problem had been the failure to take this particular "obvious" precaution.

We don't learn the general lesson: the cost of effective caution is very high because you must attend to problems that are not as obvious now as past problems seem in hindsight.

Whenever I notice myself thinking "I knew that all along," it reminds me to check for hindsight bias. Sometimes it is hindsight bias, sometimes it isn't.

Illusion of transparency: We always know what we mean by our words, and so we expect others to know it too. It's hard to empathize with someone who must interpret blindly, guided only by the words.

June recommends a restaurant to Mark; Mark dines there and discovers:
(a) unimpressive food and mediocre service
(b) delicious food and impeccable service.

Then Mark leaves the following message on June's answering machine:
"June, I just finished dinner at the restaurant you recommended, and I must say, it was marvelous, just marvelous."

Keysar (1994) presented a group of subjects with scenario (a), and 59% thought that Mark's message was sarcastic and that June would perceive the sarcasm.

Among other subjects, told scenario (b), only 3% thought that June would perceive Mark's message as sarcastic.

So participants took Mark's communicative intention as transparent. It was as if they assumed that June would perceive whatever intention Mark wanted her to perceive.

Be not too quick to blame those who misunderstand your perfectly clear sentences, spoken or written. Chances are, your words are more ambiguous than you think.

If you write an email, read it aloud to yourself in the opposite tone of voice. If you are still confident that it will be taken the way you originally thought it should, it's probably safe to send. But if you can now see how it might be misunderstood, redraft it. Repeat until you feel ready to send.

Expecting Short Inferential Distances link

Environment of evolutionary adaptedness (EEA or "ancestral environment"): consisted of hunter-gatherer bands of at most 200 people, with no writing. All inherited knowledge was passed down by speech and memory.

In a world like that, all background knowledge is universal knowledge. All information not strictly private is public.

unlikely to end up more than one inferential step away from anyone else.

When you discover a new oasis, you don't have to explain to your fellow tribe members what an oasis is, or why it's a good idea to drink water, or how to walk. Only you know where the oasis lies; this is private knowledge.

Everything is at most 1 or 2 inferential steps away from Universal Knowledge.

Combined with the illusion of transparency and self-anchoring, I think this explains a lot about the legendary difficulty most scientists have in communicating with a lay audience - or even communicating with scientists from other disciplines.

usually see the explainer taking one step back, when they need to take two or more steps back. Or listeners, assuming that things should be visible in one step, when they take two or more steps to explain.

Both sides act as if they expect very short inferential distances from universal knowledge to any new knowledge.

A clear argument has to lay out an inferential pathway, starting from what the audience already knows or accepts. If you don't recurse far enough, you're just talking to yourself.

If at any point you make a statement without obvious justification in arguments you've previously supported, the audience just thinks you're a cult victim. This also happens when you allow yourself to be seen visibly attaching greater weight to an argument than is justified in the eyes of the audience at that time.

you'd better not drop any hints that you think you're working a dozen inferential steps away from what the audience knows, or that you think you have special background knowledge not available to them.

The audience doesn't know anything about an evolutionary-psychological argument for a cognitive bias to underestimate inferential distances leading to traffic jams in communication. They'll just think you're condescending.

just-so story: an unverifiable narrative explanation for a cultural practice, a biological trait, or behavior of humans or other animals.

Secondary Notes:

Tanenhaus et al. (1995) had previously demonstrated that when people understand a spoken reference, their gaze fixates on the identified object almost immediately.

Dunbar's number: a suggested cognitive limit to the number of people with whom one can maintain stable social relationships—relationships in which an individual knows who each person is and how each person relates to every other person.

Self-Anchoring:

Sometime between the age of 3 and 4, a human child becomes able, for the first time, to model other minds as having different beliefs.

we've got to simulate it using our own visual cortex - put our own brains into the other mind's shoes. And because you can't reconfigure memory to simulate a new brain from scratch, pieces of you leak into your visualization of the Other.

seems to suggest that subjects first computed the meaning according to their brains' settings, their knowledge, and then afterward adjusted for the other mind's different knowledge.

the processes are actually more akin to contamination and under-correction;

We can put our feet in other minds' shoes, but we keep our own socks on.

(If it sounds plausible to me, it should sound plausible to any sane member of my band.)

The Lens That Sees Its Flaws link

Science, itself, is an understandable process-in-the-world that correlates brains with reality.

Everett branch: one of the "worlds" in the many-worlds interpretation of quantum mechanics.

Tegmark duplicates: instances of you/thing in other universes or sub-universes that is "the same".

The "lens" sees perhaps only parts of itself, and then perhaps only some of its flaws.

Bayes rule:

the prior odds times the likelihood ratio equals the posterior odds.

More generally, suppose we have a medical test that detects a sickness with a 90% true positive rate (10% false negatives) and a 30% false positive rate (70% true negatives). A positive result on this test represents the same strength of evidence as a test with 60% true positives and 20% false positives.

Because relative likelihoods: 90%/30% = 60%/20% = 3 : 1 are the same!

the process of observing evidence and using its likelihood ratio to transform a prior belief into a posterior belief is called a "Bayesian update" or "belief revision."

to update your beliefs in the face of evidence, simply throw away the probability mass that was inconsistent with the evidence.

you're not "eliminating people from consideration," you're eliminating probability mass from certain possible worlds represented in your own subjective belief state.

Example:

Prior Odds: 10% of widgets are bad and 90% are good.

Likelihood Ratio (Evidence Test/Observable): 12% of bad widgets emit sparks, and 4% of good widgets emit sparks.

If you have a widget and see that it emits sparks, what are the odds that you have a bad widget?

i.e. the real life question/hypothesis; the thing you're curious about.

Posterior Odds: P(Bad Widget | Emits Sparks) / P(Good Widget | Emits Sparks)

(1 : 9) * (3 : 1) = 3 : 9 = 1 : 3 => it is three times more likely that the widget is good, despite emitting sparks.

Prob = 1 / (1 + 3) = 25% that it is bad.
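A minimal Python sketch of the odds-form update above, using the widget numbers from these notes (the helper function is just illustrative):

```python
from fractions import Fraction

def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds * likelihood ratio."""
    return prior_odds * likelihood_ratio

# Widget example: prior odds bad:good = 10% : 90% = 1 : 9
prior = Fraction(10, 90)
# Likelihood ratio for "emits sparks": P(sparks | bad) / P(sparks | good) = 12% / 4% = 3 : 1
lr = Fraction(12, 4)

post = posterior_odds(prior, lr)   # 1/3, i.e. odds of 1 : 3 that the widget is bad
prob_bad = post / (1 + post)       # convert odds back to a probability
print(post, float(prob_bad))       # 1/3 0.25
```

The same two-line helper covers the medical-test example earlier in this section, since only the prior odds and the likelihood ratio matter.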

In other words, the base occurrence rate affects how much an observable test actually tells you in practice.

The evidence was weaker than the prior improbability of the claim.

For a previously implausible proposition X to end up with a high posterior probability, the likelihood ratio for the new evidence favoring X over its alternatives, needs to be more extreme than the prior odds against X.

"extraordinary claims require extraordinary evidence"

Your "test" or "evidence" must have a really low false positive rate to be extraordinary, i.e. capable of supporting an extraordinary claim.

ceteris paribus: "all other things being equal/constant".

To evaluate the ordinariness or extraordinariness of a claim:

  • We don't ask whether the future consequences of this claim seem extreme or important.
  • We don't ask whether the policies that would be required to address the claim are very costly.
  • We ask whether "carbon dioxide warms the atmosphere" or "carbon dioxide fails to warm the atmosphere" seems to conform better to the deep, causal generalizations we already have about carbon dioxide and heat.
  • If we've already considered the deep causal generalizations like those, we don't ask about generalizations causally downstream of the deep causal ones we've already considered. (E.g., we don't say, "But on every observed day for the last 200 years, the global temperature has stayed inside the following range; it would be 'extraordinary' to leave that range.")

If you get in a car accident, and don't want to relinquish the hypothesis that you're a great driver, then you can find all sorts of reasons ("the road was slippery! my car freaked out!") why P(e | GoodDriver) is not too low. But P(e | BadDriver) is also part of the update equation, and the "bad driver" hypothesis better predicts the evidence. Thus, your first impulse, when deciding how to update your beliefs in the face of a car accident, should not be "But my preferred hypothesis allows for this evidence!" It should instead be "Points to the 'bad driver' hypothesis for predicting this evidence better than the alternatives!" (And remember, you're allowed to increase your credence in "bad driver" a little bit, while still thinking that it's less than 50% probable.)

Overriding Evidence:

If you think that a proposition has prior odds of 1 to a 10^100, and then somebody presents evidence with a likelihood ratio of 10^94 to one favoring the proposition, you shouldn't say, "Oh, I guess the posterior odds are 1 to a million." You should instead question whether either (a) you were wrong about the prior odds or (b) the evidence isn't as strong as you assessed.

Witnessing an observation that truly has a 10^-94 probability of occurring if the hypothesis is false, in a case where the hypothesis is in fact false, is something that will not happen to anyone even once over the expected lifetime of this universe.

By the quantitative form of Occam's Razor, we need to penalize the prior probability of your theory for its algorithmic complexity.

Weber–Fechner law: most human sensory perceptions are logarithmic, in the sense that a factor-of-2 intensity change feels like around the same amount of increase no matter where you are on the scale.

Doubling the physical intensity of a sound feels to a human like around the same amount of change in that sound whether the initial sound was 40 decibels or 60 decibels.
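A small sketch of the decibel arithmetic behind that example, assuming the standard definition dB = 10 * log10(intensity ratio); the psychophysical claim itself is only approximately logarithmic:

```python
import math

def db_change_for_intensity_ratio(ratio):
    """Decibels are a log scale, so a fixed intensity ratio is a fixed dB step."""
    return 10 * math.log10(ratio)

# Doubling physical intensity adds ~3 dB whether you start at 40 dB or 60 dB,
# which is why the perceived change feels similar at both levels.
step = db_change_for_intensity_ratio(2)   # ~3.01 dB
print(40 + step, 60 + step)               # ~43.01, ~63.01
```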

test sensitivity: P(+ | have condition)
test specificity: P(- | don't have condition)

Naive Bayes assumptions: Independent tests that don't rely on previous results.

Naive Bayes calculations are usually quantitatively wrong, but they often point in the right qualitative direction.

Non-naive Bayes assumptions: non-independent tests/observations.

Don't ask, "What is the likelihood that a non-romantic couple would visit one person's workplace?" but "What is the likelihood that a non-romantic couple which previously visited a museum for some unknown reason would also visit the workplace?"

The result was a kind of double-counting of the evidence — they took into account the prior improbability of a random non-romantic couple "going places together" twice in a row, for the two pieces of evidence, and ended up performing a total update that was much too strong.

I.e. it may be unlikely that non-romantic partners go to a museum together, but IF THEY HAVE, then their visiting some random workplace together, GIVEN that they already visited the museum together for whatever reason, is NOT as unlikely.
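A rough sketch of that double-counting mistake; every probability below is invented purely for illustration, not taken from the original example:

```python
from fractions import Fraction as F

# Hypotheses: R = romantic couple, N = non-romantic pair. Hypothetical prior odds R : N.
prior_RN = F(1, 4)

# First piece of evidence: visited a museum together.
p_museum = {"R": F(30, 100), "N": F(3, 100)}

# Second piece of evidence: visited one person's workplace together.
# Naive (unconditional) estimates ignore that the museum visit already happened:
p_work_naive = {"R": F(20, 100), "N": F(2, 100)}
# Conditional estimates: GIVEN that this pair evidently goes places together,
# a workplace visit is not that much more likely for a romantic couple.
p_work_given_museum = {"R": F(20, 100), "N": F(10, 100)}

lr_museum = p_museum["R"] / p_museum["N"]                           # 10 : 1
lr_work_naive = p_work_naive["R"] / p_work_naive["N"]               # 10 : 1 (double counts "goes places together")
lr_work_cond = p_work_given_museum["R"] / p_work_given_museum["N"]  # 2 : 1

print("naive posterior odds:", prior_RN * lr_museum * lr_work_naive)   # 25 : 1 (too strong)
print("correct posterior odds:", prior_RN * lr_museum * lr_work_cond)  # 5 : 1
```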

Bayesian view of scientific virtues

Falsifiability: saying which events and observations should definitely not happen if your theory is true.

a requirement for obtaining strong likelihood ratios in favor of the hypothesis, compared to, e.g., the alternative hypothesis "I don't know."

ad hominem: a fallacious argumentative strategy whereby genuine discussion of the topic at hand is avoided by instead attacking the character, motive, or other attribute of the person making the argument, or persons associated with the argument, rather than attacking the substance of the argument itself.

Precision:

a theory has the scientific virtue of precision, in that, by concentrating most of its probability density on a narrow region of possible precise observations, it will gain a great likelihood advantage over vaguer theories.

If the probability mass your theory assigns to a particular region (p-mass-yours) is higher than the mass another theory assigns (p-mass-theirs), then actually observing a result in that region gives p-mass-yours : p-mass-theirs odds favouring your theory.

it's better to be vague and right than to be precise and wrong.

You have two theories, A and B. A is more complex than B, but makes sharper/more precise predictions about its observables.

i.e. given a test whose result is either +ve or -ve (true or false), we require that P(+ | A) > P(+ | B).

Say that P(+ | A) = 10 * P(+ | B).

Then each successful +ve test gives 10 : 1 odds for theory A over theory B.

You can penalize A initially for algorithmic complexity and assign it prior odds of 1 : 10^5; i.e. you think it is borderline absurd.

But if you get 5 consecutive +ve tests, then your posterior odds become 1 : 1, meaning your initial odds estimate was grossly wrong.

In fact, given 5 more consecutive +ve tests, it is theory B which should at this point be considered absurd.

Of course in real problems, the favorable likelihood ratio could be as low as 1.1 : 1, and your prior odds are not as extreme; maybe 1 : 100 against. Then you'd need about 50 updates before you get posterior odds of about 1 : 1, at which point you should seriously question the validity of your prior odds. After another 50 updates, you're essentially fully convinced that the new contender is much better than the original theory.
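A small sketch of these consecutive-update calculations, using the same toy numbers as above:

```python
from fractions import Fraction

def odds_after_tests(prior_odds, likelihood_ratio, n_tests):
    """Multiply prior odds by the per-test likelihood ratio once per independent test."""
    return prior_odds * likelihood_ratio ** n_tests

# Toy example: theory A starts at 1 : 10^5 against, each +ve test favours A at 10 : 1.
prior = Fraction(1, 10**5)
for n in (0, 5, 10):
    print(n, odds_after_tests(prior, 10, n))   # 1/100000, then 1, then 100000

# Weaker evidence: likelihood ratio 1.1 : 1 and prior odds 1 : 100 against.
weak_prior, weak_lr = Fraction(1, 100), Fraction(11, 10)
print(float(odds_after_tests(weak_prior, weak_lr, 50)))    # ~1.17 (roughly even odds)
print(float(odds_after_tests(weak_prior, weak_lr, 100)))   # ~138  (now B looks absurd)
```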

Newtonian Gravity proven false vs Einstein:

Problem of Uranus's orbit: the current Newtonian model was making confident predictions about Uranus that were much more wrong than the theory's average expected error.

The low P(UranusLocation | currentNewton) created a potential for some modification of the current model to make a better prediction with higher P(UranusLocation | newModel), but it didn't say what had to change in the new model.

Even after Neptune was observed, though, this wasn't a final confirmation of Newtonian mechanics. While the new model assigned very high P(UranusLocation | Neptune and currentNewton) there could, for all anyone knew, be some unknown Other theory that would assign equally high P(UranusLocation | Neptune and OtherTheory). In this case, Newton's theory would have no likelihood advantage versus this unknown Other, so we could not say that Newton's theory of gravity had been confirmed over every other possible theory.

In the case of Mercury, when Einstein's formal theory came along and assigned much higher P(MercuryLocation | Einstein) compared to P(MercuryLocation | Newton) this created a huge likelihood ratio for Einstein over Newton and drove the probability of Newton's theory very low.

So - from a Bayesian standpoint - after explaining Mercury's orbital precession, we can't be sure Einstein's gravitation is correct, but we can be sure that Newton's gravitation is wrong.

some theories can be finally confirmed, beyond all reasonable doubt.

Fake Beliefs

Making Beliefs Pay Rent (in Anticipated Experiences) link

To anticipate sensory experiences as precisely as possible, we must process beliefs that are not anticipations of sensory experience.

We can build up whole networks of beliefs that are connected only to each other—call these "floating" beliefs. It is a uniquely human flaw among animal species, a perversion of Homo sapiens's ability to build more general and flexible belief networks.

The rationalist virtue of empiricism consists of constantly asking which experiences our beliefs predict—or better yet, prohibit.

If you can't find the difference of anticipation, you're probably arguing about labels in your belief network—or even worse, floating beliefs, barnacles on your network.

Above all, don't ask what to believe—ask what to anticipate.

"Man in his natural state has liberty because everyone is equal"
"Man in his natural state is equal because everyone has liberty"
"When everyone has liberty and is equal, man is in his natural state"
These statements should express very different beliefs about the world, but to the student they sound equally clever coming out of the professor's mouth. These are barnacles/floating beliefs.

adumbrate: report or represent in outline.

A Fable of Science and Politics link

People, on average, often subscribe to package-deal political views, despite individual nuances.

If you entangle such political divides with natural phenomena, you're gonna have a bad time regardless of whether that (scientifically quantifiable) polarizing issue is resolved one way or the other.

ecclesiastical: relating to the Christian church or its clergy.

Belief in Belief link

classic moral that poor hypotheses need to do fast footwork to avoid falsification.

The claimant must have an accurate model of the situation somewhere in his mind, because he can anticipate, in advance, exactly which experimental results he'll need to excuse.

where it is difficult to believe a thing, it is often much easier to believe that you ought to believe it.

For one thing, if you believe in belief, you cannot admit to yourself that you only believe in belief, because it is virtuous to believe, not to believe in belief, and so if you only believe in belief, instead of believing, you are not virtuous. Nobody will admit this to themselves-not unless they are unusually capable of acknowledging their own lack of virtue.

I believe that killing people is wrong. I also know that believing in the sanctity of life is virtuous.

BUT

I also know that sometimes there is no other choice but to deliberately end lives.

there are sharp distinctions between P, wanting P, believing P, wanting to believe P, and believing that you believe P.

Fleeting mental images, unspoken flinches, desires acted upon without acknowledgement—these account for as much of ourselves as words.

"Belief" should include unspoken anticipation-controllers.
"Belief in belief" should include unspoken cognitive-behavior-guiders.

it is realistic to say the dragon-claimant anticipates as if there is no dragon in his garage, and makes excuses as if he believed in the belief.

To correctly anticipate, in advance, which experimental results shall need to be excused, the dragon-claimant must
(a) possess an accurate anticipation-controlling model somewhere in his mind, and
(b) act cognitively to protect either
(b1) his free-floating propositional belief in the dragon or
(b2) his self-image of believing in the dragon.

When someone makes up excuses in advance, it would seem to require that belief, and belief in belief, have become unsynchronized.

Pretending to be Wise link

the display of neutrality or suspended judgment, in order to signal maturity, wisdom, impartiality, or just a superior vantage point.

pretending to be Wise: trying to signal wisdom by

  • refusing to make guesses
  • refusing to sum up evidence
  • refusing to pass judgment
  • refusing to take sides
  • staying above the fray and looking down with a lofty and condescending gaze.

In other words, signaling wisdom by saying and doing nothing.

"Washing one's hands of the conflict between the powerful and the powerless means to side with the powerful, not to be neutral."

a part of it also has to do with signaling a superior vantage point. After all - what would the other adults think of a principal who actually seemed to be taking sides in a fight between mere children? Why, it would lower his status to a mere participant in the fray!

That is, when the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.

neutrality is a definite judgment. It is not staying above anything. It can be wrong; propounding neutrality is just as attackable as propounding any particular side.

There are cases where it is rational to suspend judgment, where people leap to judgment only because of their biases.

Poe's law: Without a clear indication of the author's intent, it is difficult or impossible to tell the difference between an expression of sincere extremism and a parody of extremism.

apostasy: the abandonment or renunciation of a religious or political belief. You can use this for debiasing yourself about a long-held belief: force yourself to write an essay that would convince an earlier version of you that you truly changed your mind.

Secondary Notes

the ability to have potentially divisive conversations is a limited resource.

groups that freely argue about basic values tend to fragment into like-thinking-cliques focused more on clique loyalty than on fairly crediting informative contributions. Such discussions threaten social norms that preserve group cohesion and effective conversation.

In practice, most policy debate focuses on a few dimensions, such as the abortion rate, the overall tax rate, more versus less regulation, for or against more racial equality, or a pro versus anti US stance. In fact, one can explain 85% of the variation in US Congressional votes by a single underlying dimension, where there are two separated clumps. Most of the remaining variation is explained by one more dimension.

These results reflect more the nature of human coalition formation than the nature of policy.

most readers will try to project your position onto the few "key" policy dimensions, asking whether your position is pro abortion, more taxes, more regulation, and so on. And if readers can’t easily read you as being for their side, they will suspect that you are really on the other side.

The policy world can be thought of as consisting of a few Tug-O-War "ropes" set up in this high dimensional policy space. If you want to find a comfortable place in this world, where the people around you are reassured that you are "one of them," you need to continually and clearly telegraph your loyalty by treating each policy issue as another opportunity to find more supporting arguments for your side of the key dimensions. That is, pick a rope and pull on it.

On the few main dimensions, not only will you find it very hard to move the rope much, but you should have little confidence that you actually have superior information about which way the rope should be pulled.

Religion's Claim to be Non-Disprovable link

magisterium: the Church's authority or office to give authentic interpretation of the word of God, "whether in its written form or in the form of Tradition."

Back in the old days, people actually believed their religions instead of just believing in them. The biblical archaeologists who went in search of Noah's Ark did not think they were wasting their time; they anticipated they might become famous. Only after failing to find confirming evidence - and finding disconfirming evidence in its place - did religionists execute what William Bartley called the retreat to commitment: "I believe because I believe."

Book Suggestion: Asimov's Guide to the Bible; an introduction to the historical background behind the Bible.

The modern concept of religion as purely ethical derives from every other area having been taken over by better institutions. Ethics is what's left.

Secondary Notes

Think of it like a movie. The Torah is the first one, and the New Testament is the sequel. Then the Qu'ran comes out, and it retcons the last one like it never happened. There's still Jesus, but he's not the main character anymore, and the messiah hasn't shown up yet.

Jews like the first movie but ignored the sequels, Christians think you need to watch the first two but the third movie doesn't count, Moslems think the third one was the best, and Mormons liked the second one so much they started writing fanfiction that doesn't fit with ANY of the series canon.

Theists don't have any observable advantage over non-theists on matters of chance. God doesn't rig the dice for you.

Professing and Cheering link

It finally occurred to me that this woman wasn't trying to convince us or even convince herself. Her recitation of the creation story wasn't about the creation of the world at all. Rather, by launching into a five-minute diatribe about the primordial cow, she was cheering for paganism, like holding up a banner at a football game.

It wasn't just a cheer, like marching, but an outrageous cheer, like marching naked—believing that she couldn't be arrested or criticized, because she was doing it for her pride parade.

That's why it mattered to her that what she was saying was beyond ridiculous. If she'd tried to make it sound more plausible, it would have been like putting on clothes.

fundamental attribution error (FAE): the concept that, in contrast to interpretations of their own behavior, people tend to (unduly) emphasize the agent's internal characteristics (character or intention), rather than external factors, in explaining other people's behavior. a.k.a. correspondence bias or attribution effect.

"the tendency to believe that what people do reflects who they are".

Belief as Attire link

so far distinguished belief as anticipation-controller, belief in belief, professing and cheering.

Of these, we might call anticipation-controlling beliefs "proper beliefs" and the other forms "improper belief".

A proper belief can be wrong or irrational, e.g., someone who genuinely anticipates that prayer will cure her sick baby, but the other forms are arguably "not belief at all".

another form of improper belief is belief as group-identification or "belief as attire"

The very concept of the courage and altruism of a suicide bomber is Enemy attire—you can tell, because the Enemy talks about it. The cowardice and sociopathy of a suicide bomber is American attire. There are no quote marks you can use to talk about how the Enemy sees the world; it would be like dressing up as a Nazi for Halloween.

Mere belief in belief, or religious professing, would have some trouble creating genuine, deep, powerful emotional effects. People who've stopped anticipating-as-if their religion is true, will go to great lengths to convince themselves they are passionate, and this desperation can be mistaken for passion.

On the other hand, it is very easy for a human being to genuinely, passionately, gut-level belong to a group, to cheer for their favorite sports team.

i.e. you may "wear" certain beliefs, never even bringing up opposing views because they are taboo. This is very similar to cheering, but it's somewhat different from "professing and cheering" as that doesn't even need to contain any true emotional substance.

Valid: belief as anticipation-controller.

Invalid: belief in belief; professing and cheering (mistaking passion for an actual anticipation-controlling belief and/or for genuine emotion directly attached to beliefs); belief as attire (group-identification beliefs which DO result in genuine emotion).

Applause Lights link

The substance of a democracy is the specific mechanism that resolves policy conflicts.

What does it mean to call for a "democratic" solution if you don't have a conflict-resolution mechanism in mind? I think it means that you have said the word "democracy", so the audience is supposed to cheer. It's not so much a propositional statement, as the equivalent of the "Applause" light that tells a studio audience when to clap.

Most applause lights are much more blatant, and can be detected by a simple reversal test.

We need to balance the risks and opportunities of AI.

If you reverse this statement, you get:

We shouldn't balance the risks and opportunities of AI.

Since the reversal sounds abnormal, the unreversed statement is probably normal, implying it does not convey new information.

There are plenty of legitimate reasons for uttering a sentence that would be uninformative in isolation. You can introduce a discussion topic; it can emphasize the importance of a specific proposal for balancing; it can criticize an unbalanced proposal.

Linking to a normal assertion can convey new information to a bounded rationalist—the link itself may not be obvious. But if no specifics follow, the sentence is probably an applause light.

Another side of applause lights: oversimplification of arguments, i.e. portraying only one side (ours) as remotely sensible, so that you'd have to be insane to disagree.

She can say "we shouldn't be hugging criminals, we should be locking them up".

She wouldn't believe that "we should be hugging criminals instead of locking them up", but she might believe something that a bigot could paraphrase as such with a straight face.

deontology: is an approach to Ethics that focuses on the rightness or wrongness of actions themselves, as opposed to the rightness or wrongness of the consequences of those actions (Consequentialism) or to the character and habits of the actor (Virtue Ethics).

Noticing Confusion

Focus Your Uncertainty link

One of the most fundamental life skills is realizing when you are confused.

Students learn the habit that eating consists of putting food into the mouth; the exams can't test for chewing or swallowing, and so they starve.

It may be dangerous to present people with a giant mass of authoritative knowledge, especially if it is actually true. It may damage their skepticism.

exhortation: an address or communication emphatically urging someone to do something.

The cognitive system gambles that incoming information will be related to what you’ve just been thinking about. Thus, it significantly narrows the scope of possible interpretations of words, sentences, and ideas. The benefit is that comprehension proceeds faster and more smoothly; the cost is that the deep structure of a problem is harder to recognize.

The problem: to avoid getting lost, you can physically mark your progress through a labyrinth/territory (usually without a piece of paper).

Presented with a problem that resembled Hansel and Gretel, about 75 percent of American college students thought of the "carry some sand with you in the bag, and leave a trail as you go" solution, while only 25 percent of Chinese students solved it.

Presented with a puzzle based on a common Chinese folk tale, the results reversed.

But he does have the intuition that the decreasing function giving the marginal utility of additional prep time should have the same general shape regardless of how much "a priori" time was spent before the clock began ticking. That is, he intuits that shifting the function graph along the X axis should be equivalent to scaling it along the Y axis.
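A quick numeric check of that intuition for the family of curves that has the shift-equals-scale property exactly, exponential decay (the constants below are arbitrary):

```python
import math

# If marginal utility decays exponentially, shifting along x is the same as scaling along y:
# f(x + c) = exp(-lam * c) * f(x) for every x.
lam, A, c = 0.7, 5.0, 2.0

def f(x):
    return A * math.exp(-lam * x)

for x in (0.0, 1.0, 3.5):
    print(f(x + c), math.exp(-lam * c) * f(x))   # equal pairs, up to floating-point rounding
```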

What is Evidence? link

evidence: an event entangled, by links of cause and effect, with whatever you want to know about.

"entanglement" in the sense of two things that end up in correlated states because of the links of cause and effect between them.

For an event to be evidence about a target of inquiry, it has to happen differently in a way that's entangled with the different possible states of the target.

If your retina ended up in the same state regardless of what light entered it, you would be blind. Some belief systems, in a rather obvious trick to reinforce themselves, say that certain beliefs are only really worthwhile if you believe them unconditionally— no matter what you see, no matter what you think. Your brain is supposed to end up in the same state regardless. Hence the phrase, "blind faith". If what you believe doesn't depend on what you see, you've been blinded as effectively as by poking out your eyeballs.

Rational thought produces beliefs which are themselves evidence.

If you don't believe that the outputs of your thought processes are entangled with reality, why do you believe the outputs of your thought processes?

"belief is only really worthwhile if you could, in principle, be persuaded to believe otherwise"

Scientific Evidence, Legal Evidence, Rational Evidence link

All legal evidence should ideally be rational evidence, but not the other way around.

Science is made up of generalizations which apply to many particular instances, so that you can run new real-world experiments which test the generalization, and thereby verify for yourself that the generalization is true, without having to trust anyone's authority.

Science is the publicly reproducible knowledge of humankind.

The prediction that the Sun will rise is, definitely, an extrapolation from scientific generalizations.

Historical knowledge is not scientific knowledge.

A historical event happens once; generalizations apply over many events. History is not reproducible; scientific generalizations are.

How Much Evidence Does it Take? link

The larger the space of possibilities in which the hypothesis lies, or the more unlikely the hypothesis seems a priori compared to its neighbors, or the more confident you wish to be, the more evidence you need.

Einstein's Arrogance link

To assign more than 50% probability to the correct candidate from a pool of 100,000,000 possible hypotheses, you need at least 27 bits of evidence. (log 100,000,000 / log 2)

If you try to apply a test that only has a million-to-one chance of a false positive (~20 bits), you'll end up with a hundred candidates.

Just finding the right answer, within a large space of possibilities, requires a large amount of evidence.
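The arithmetic behind both claims, as a quick sketch:

```python
import math

hypotheses = 100_000_000

# Bits needed to single out one hypothesis from the pool.
print(math.log2(hypotheses))    # ~26.6, so at least 27 bits of evidence

# A million-to-one test supplies about 20 bits of evidence...
print(math.log2(1_000_000))     # ~19.9
# ...which still leaves on the order of a hundred candidates standing.
print(hypotheses / 1_000_000)   # 100.0
```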

Occam's Razor link

Occam's Razor is often phrased as "The simplest explanation that fits the facts." Robert Heinlein replied that the simplest explanation is "The lady down the street is a witch; she did it."

The complexity of anger, and indeed the complexity of intelligence, was glossed over by the humans who hypothesized Thor the thunder-agent.

To a human, Maxwell's Equations take much longer to explain than Thor.

But

It's enormously easier (as it turns out) to write a computer program that simulates Maxwell's Equations, compared to a computer program that simulates an intelligent emotional mind like Thor.

Trade-offs between communicating more model parameters vs. having to communicate fewer/smaller residuals (i.e. offsets from the real data). MDL = Minimum Description Length.
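A toy sketch of the MDL trade-off; the bit-cost function below is a deliberately crude stand-in for a real code, chosen only to make the comparison concrete:

```python
import math

def bits_for_int(x):
    """Rough cost in bits to write down one integer under a toy coding scheme."""
    return 1 + math.ceil(math.log2(abs(x) + 1))

data = [3, 5, 7, 9, 11, 13, 15, 17]   # a noiseless line: y = 2x + 3

# Option 0: no model, transmit every data point as-is.
cost_raw = sum(bits_for_int(y) for y in data)

# Option 1: send the model "y = 2x + 3" (two small parameters) plus its residuals.
params = [2, 3]
residuals = [y - (2 * x + 3) for x, y in enumerate(data)]
cost_model = sum(bits_for_int(p) for p in params) + sum(bits_for_int(r) for r in residuals)

print(cost_raw, cost_model)   # the model wins: parameters cost a little, residuals cost almost nothing
```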

Your Strength as a Rationalist link

truth bias: people are more likely to correctly judge that a truthful statement is true than that a lie is false.

Your strength as a rationalist is your ability to be more confused by fiction than by reality.

If you are equally good at explaining any outcome, you have zero knowledge.

That sensation of "still feels a little forced" is one of the most important feelings a truth seeker can have, a part of your strength as a rationalist.

the usefulness of a model is not what it can explain, but what it can't.

It is a design flaw in human cognition that this sensation manifests as a quiet strain in the back of your mind, instead of a wailing alarm siren and a glowing neon sign reading "EITHER YOUR MODEL IS FALSE OR THIS STORY IS WRONG."

Secondary Notes

propitious: giving or indicating a good chance of success; favorable.

Sine qua non: an indispensable and essential action, condition, or ingredient.

It seems absurd to insist that one can believe what one has already deemed implausible, but not so absurd to suggest that one may believe the impossible before its plausibility is calculated.

For Spinoza, being skeptical meant taking a second step backward (unbelieving) to correct for the uncontrollable tendency to take a first step forward (believing).

Ideas are not mere candidates for belief, but potent entities whose mere communication instantly alters the behavioral propensities of the listener.

However:

Those who are responsible for instituting prior restraints may err in their attempts to distinguish good from bad ideas, and some good ideas may never have an opportunity to reach the person.

what signal detection theorists know as the problem of setting beta. Should citizens be more concerned with misses (i.e., failures to encounter good ideas) or false alarms (i.e., failures to reject bad ones)?

The error of believing too much may be corrected by commerce with others, but the error of believing too little cannot.

What might be a good explanation, then, if I woke up one morning and found my arm transformed into a blue tentacle? To claim a "good explanation" for this hypothetical experience would require an argument such that, contemplating the hypothetical argument now, before my arm has transformed into a blue tentacle, I would go to sleep worrying that my arm really would transform into a tentacle.

Had they the courage of their convictions, they would say: I do not expect to ever encounter this hypothetical experience, and therefore I cannot explain, nor have I a motive to try.

In the art of rationality, to explain is to anticipate.

Absence of Evidence is Evidence of Absence link

Absence of proof is not proof of absence.

A -> B is not equivalent to !A -> !B.

"If it's raining, then Sam will meet Jack at the movies" is not the same as "If it's not raining, then Sam will not meet Jack at the movies.".

Conservation of Expected Evidence link

This is what Bayesians do instead of taking the squared error of things; we require invariances.

prudent: acting with or showing care and thought for the future.

dissemble: conceal one's true motives, feelings, or beliefs.

fifth column: any group of people who undermine a larger group from within, usually in favour of an enemy group or nation.

If you expect a strong probability of seeing weak evidence in one direction, it must be balanced by a weak expectation of seeing strong evidence in the other direction.

H: hypothesis.
E: evidence.

Expectation of improving the probability of H due to seeing evidence E: [P(H|E) - P(H)]P(E)
Expectation of diminishing the probability of H due to not seeing evidence E (seeing not E): [P(H|!E) - P(H)]P(!E)

They are equal and opposite. I.e., on average, you expect the probability you assign to H not to change: the expected update is zero.

If you expect "no sabotage" to be strong evidence for the existence of a Japanese fifth column in the US, then you must also say that observing sabotage would be evidence against the existence of that fifth column, with the expected updates balancing exactly.

If you argue that God, to test humanity's faith, refuses to reveal His existence, then the miracles described in the Bible must argue against the existence of God.

for every expectation of evidence, there is an equal and opposite expectation of counterevidence.

If you try to weaken the counterevidence of a possible "abnormal" observation, you can only do it by weakening the support of a "normal" observation, to a precisely equal and opposite degree.

[P(H|E) - P(H)]P(E) + [P(H|!E) - P(H)]P(!E) = 0

If you try to convince yourself that your evidence is stronger (a larger P(H|E) - P(H), say by a factor of 2), you must correspondingly strengthen the counterevidence (a larger P(H) - P(H|!E), weighted by P(!E)) by exactly enough to keep the sum at zero.

It's a zero-sum game. Don't try to contrive stronger evidence.
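
A quick numeric check of the identity, with arbitrary probabilities; any coherent assignment makes the expected update come out to zero.

```python
p_h = 0.3
p_e_given_h = 0.8
p_e_given_not_h = 0.4

p_e = p_e_given_h * p_h + p_e_given_not_h * (1 - p_h)
p_h_given_e = p_e_given_h * p_h / p_e
p_h_given_not_e = (1 - p_e_given_h) * p_h / (1 - p_e)

expected_update = (p_h_given_e - p_h) * p_e + (p_h_given_not_e - p_h) * (1 - p_e)
print(expected_update)  # 0.0, up to floating-point rounding
```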

Hindsight Devalues Science link

If you find yourself agreeing with a common sense "finding", there is still a high chance that it's false and the opposite is true.

The day before the election, Mark Leary (1982) asked people what percentage of the vote they thought each candidate would receive; the average person, too, foresaw only a slim Reagan victory. The day after the election, Leary asked other people what result they would have predicted the day before the election; most indicated a Reagan margin much closer to the actual landslide.

two highly rated proverbs: "Wise men make proverbs and fools repeat them" (authentic) and its made-up counterpart, "Fools make proverbs and wise men repeat them."

We need to make a conscious effort to be shocked enough.

Mysterious Answers

Fake Explanations link

Guessing the Teacher's Password link

Book Suggestion: QED: The Strange Theory of Light and Matter. Richard Feynman

Book Suggestion: Reread Feynman Lectures on Physics.

What if there is no teacher to tell you that you failed? Then you may think that "Light is wakalixes" is a good explanation, that "wakalixes" is the correct password. It happened to me when I was nine years old—not because I was stupid, but because this is what happens by default. This is how human beings think, unless they are trained not to fall into the trap. Humanity stayed stuck in holes like this for thousands of years.

Science as Attire link

Is there anything in science that you are proud of believing, and yet you do not use the belief professionally? You had best ask yourself which future experiences your belief prohibits from happening to you. That is the sum of what you have assimilated and made a true part of yourself. Anything else is probably passwords or attire.

Fake Causality link

Fake explanations don't feel fake.

research suggests that humans think about cause and effect using something like the directed acyclic graphs (DAGs) of Bayes nets.

Book Suggestion: Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.

Speaking of "hindsight bias" is just the nontechnical way of saying that humans do not rigorously separate forward and backward messages, allowing forward messages to be contaminated by backward ones.

Are there any fake explanations in your mind? If there are, I guarantee they're not labeled "fake explanation", so polling your thoughts for the "fake" keyword will not turn them up.

Semantic Stopsigns link

Jonathan Wallace suggested that "God!" functions as a semantic stopsign—that it isn't a propositional assertion, so much as a cognitive traffic signal: do not think past this point.

Having strong emotions about something doesn't qualify it as a stopsign.

What distinguishes a semantic stopsign is failure to consider the obvious next question.

Mysterious Answers to Mysterious Questions link

The Futility of Emergence link

Say Not "Complexity" link

What you must avoid is skipping over the mysterious part; you must linger at the mystery to confront it directly. There are many words that can skip over mysteries, and some of them would be legitimate in other contexts—"complexity", for example. But the essential mistake is that skip-over, regardless of what causal node goes behind it. The skip-over is not a thought, but a microthought. You have to pay close attention to catch yourself at it. And when you train yourself to avoid skipping, it will become a matter of instinct, not verbal reasoning. You have to feel which parts of your map are still blank, and more importantly, pay attention to that feeling.

Positive Bias: Look Into the Dark link

As soon as you suspect your hypothesis is right, try hard to disprove it instead.

You have to learn, wordlessly, to zag instead of zig. You have to learn to flinch toward the zero, instead of away from it.
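
The essay behind these notes uses Wason's 2-4-6 task; here is a small sketch of the point (the true rule and the guessed rule below are assumed for illustration): probes your hypothesis predicts will pass can never separate it from the real rule; only probes it forbids can.

```python
def true_rule(triplet):
    """The experimenter's actual rule: any strictly ascending triplet."""
    a, b, c = triplet
    return a < b < c

def my_hypothesis(triplet):
    """The guess most people form: numbers that go up by two."""
    a, b, c = triplet
    return (b - a == 2) and (c - b == 2)

confirming_probes = [(2, 4, 6), (8, 10, 12), (100, 102, 104)]
falsifying_probe = (1, 2, 39)   # allowed by the true rule, forbidden by the hypothesis

for probe in confirming_probes:
    print(probe, true_rule(probe), my_hypothesis(probe))    # both say True: learn nothing
print(falsifying_probe, true_rule(falsifying_probe), my_hypothesis(falsifying_probe))
# True vs False: only the probe the hypothesis forbids actually tests it.
```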

Lawful Uncertainty link

But the error must go deeper than that. Even if subjects think they've come up with a hypothesis, they don't have to actually bet on that prediction in order to test their hypothesis. They can say, "Now if this hypothesis is correct, the next card will be red" - and then just bet on blue.

It is a counterintuitive idea that, given incomplete information, the optimal betting strategy does not resemble a typical sequence of cards.

When your knowledge is incomplete - meaning that the world will seem to you to have an element of randomness - randomizing your actions doesn't solve the problem.

"Randomness is like poison: Yes, it can benefit you, but only if you feed it to people you don't like."

By adding randomness to your algorithm, you spread its behaviors out over a particular distribution, and there must be at least one point in that distribution whose expected value is at least as high as the average expected value of the distribution.
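
A sketch of that point using the card-guessing setup this essay discusses (the 70% blue deck and the payoff of 1 per correct guess are assumed numbers): the expected score of any randomized policy is a weighted average of the pure policies' scores, so it can never beat the best pure policy.

```python
P_BLUE = 0.7  # assumed frequency of blue cards; 1 point per correct guess

def expected_score(p_guess_blue):
    """Expected points per card for a policy that guesses 'blue' with this probability."""
    return p_guess_blue * P_BLUE + (1 - p_guess_blue) * (1 - P_BLUE)

always_blue = expected_score(1.0)           # 0.70: the best deterministic policy
probability_matching = expected_score(0.7)  # 0.58: "looks random", scores worse

print(always_blue, round(probability_matching, 2))
# Every mixture scores somewhere between the pure policies, so randomizing can at best
# match the best pure policy, never exceed it.
```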

My Wild and Reckless Youth link

Today, one of the chief pieces of advice I give to aspiring young rationalists is "Do not attempt long chains of reasoning or complicated plans."

retrospective: looking back on or dealing with past events or situations.

You need one whole hell of a lot of rationality before it does anything but lead you into new and interesting mistakes.

Failing to Learn from History link

solving a mystery should make it feel less confusing.

Science is not about regurgitating facts, nor about connecting disparate concepts. It is about facing the mysterious and arriving at a mundane explanation.

But surely, if a phenomenon really was very weird, a weird explanation might be in order? It was only afterward, when I began to see the mundane structure inside the mystery, that I realized whose shoes I was standing in. Only then did I realize how reasonable vitalism had seemed at the time, how surprising and embarrassing had been the universe's reply of, "Life is mundane, and does not need a weird explanation."

Making History Available link

So many mistakes, made over and over and over again, because I did not remember making them, in every era I never lived...

Don't imagine how you could have predicted the change, for that is amnesia. Remember that, in fact, you did not guess. Remember how, century after century, the world changed in ways you did not guess.
Maybe then you will be less shocked by what happens next.

Explain/Worship/Ignore? link

"Science" as Curiosity-Stopper link

Yes indeed! Whenever anyone asks "How did you do that?", I just say "Science!"

Truly Part Of You link

vacuous: a. lacking intelligence; stupid or empty-headed. b. devoid of substance or meaning; vapid or inane: a vacuous comment. c. devoid of expression; vacant: a vacuous stare.

Almost as soon as I started reading about AI—even before I read McDermott—I realized it would be a really good idea to always ask myself: "How would I regenerate this knowledge if it were deleted from my mind?"

Book Suggestion: McDermott

How to Actually Change Your Mind. The Second Book

Rationality: An Introduction link

It’s why the rationalist proverb, upon being given a cool theory, goes “Name three examples”.

You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.
You just are cognitive biases.

Overly Convenient Excuses

The Proper Use of Humility link

If you ask someone to "be more humble", by default they'll associate the words to social modesty—which is an intuitive, everyday, ancestrally relevant concept. Scientific humility is a more recent and rarefied invention, and it is not inherently social. Scientific humility is something you would practice even if you were alone in a spacesuit, light years from Earth with no one watching.

disconfirmation bias (motivated skepticism): more heavily scrutinizing assertions that we don't want to believe.

attitudinally: relating to, based on, or expressive of personal attitudes or feelings.

"To be humble is to take specific actions in anticipation of your own errors. To confess your fallibility and then do nothing about it is not humble; it is boasting of your modesty."

The Third Alternative link

false dilemma: something is falsely claimed to be an "either/or" situation, when in fact there is at least one additional option.

If the goal is really to help people, then a superior alternative is cause for celebration—once we find this better strategy, we can help people more effectively. But if the goal is to justify a particular strategy by claiming that it helps people, a Third Alternative is an enemy argument, a competitor.

Beware when you find yourself arguing that a policy is defensible rather than optimal;

Did I spend five minutes with my eyes closed, brainstorming wild and creative options, trying to think of a better alternative? It has to be five minutes by the clock, because otherwise you blink—close your eyes and open them again—and say, "Why, yes, I searched for alternatives, but there weren't any." Blinking makes a good black hole down which to dump your duties. An actual, physical clock is recommended.

From my perspective, the main goal that Santa-ism serves is giving children a trial run at atheism - teaching them to be skeptical of supernatural propositions fed them by adults and believed by their peers, especially the part about being rewarded for belief. If I had children I'd let their peers tell them about Santa Claus, without contradiction from me, just to make sure the kids got experience in skepticism - you lose out on a fundamental life trial and very valuable experience if your parents happen to be atheists.
But even this can be improved upon, if you're willing to tell your own lies instead of letting others do it for you. Just tell the children in a very stern voice that if they doubt the existence of Santa Claus he won't bring them any presents; but if they believe as hard as they can, they'll get lots of presents. Also, remove the part about Santa Claus rewarding children for being good - being good should be its own reward to be internalized appropriately; if the children believe they are being bribed, it may interfere with their internalization of morality. Santa Claus should reward children only for believing in him. Why is that good? Well, just because.
When the child first questions Santa Claus, he should be given a vaguely plausible set of physical and moral rationalizations - i.e., the reindeer actually travel through the nineteenth dimension, and rich kids get better presents because they have a higher hedonic baseline, etc. This will give the child experience with vague philosophical-sounding rationalizations, not just blatantly obvious lies.

Behaving morally because you think Big Sky Daddy is watching is not a conscience. It is to a conscience as a crutch is to a leg.

Lotteries: A Waste of Hope link

persistence turns out to be more than a conscious act of will; it’s also an unconscious response, governed by a circuit in the brain.

Blackwell split her kids into two groups for an eight-session workshop. The control group was taught study skills, and the others got study skills and a special module on how intelligence is not innate. These students took turns reading aloud an essay on how the brain grows new neurons when challenged. They saw slides of the brain and acted out skits. Even as I was teaching these ideas, Blackwell noted, I would hear the students joking, calling one another 'dummy’ or 'stupid.’ The only difference between the control group and the test group were two lessons, a total of 50 minutes spent teaching not math but a single idea: that the brain is a muscle. Giving it a harder workout makes you smarter. That alone improved their math scores.

Giving kids the label of 'smart' does not prevent them from underperforming. It actually causes it.

Emphasizing effort gives a child a variable that they can control. They come to see themselves as in control of their success. Emphasizing natural intelligence takes it out of the child’s control, and it provides no good recipe for responding to a failure.

can't devalue the emotional force of a pleasant anticipation by a factor of 0.00000001 without dropping the line of reasoning entirely.

New Improved Lottery link

But There's Still A Chance, Right? link

what I found even more fascinating was the qualitative distinction between "certain" and "uncertain" arguments, where if an argument is not certain, you're allowed to ignore it.

"But you can't prove me wrong." If you're going to ignore a probabilistic counterargument, why not ignore a proof, too?

"every little bit counts!", I try to argue, "yeah, but only a little."

The Fallacy of Gray link

"No one is perfect, but some people are less imperfect than others"

Whenever someone says to me, "Perfectionism is bad for you," I reply: "I think it's okay to be imperfect, but not so imperfect that other people notice."

"When people thought the earth was flat, they were wrong. When people thought the earth was spherical, they were wrong. But if you think that thinking the earth is spherical is just as wrong as thinking the earth is flat, then your view is wronger than both of them put together."

Absolute Authority link

In the world of the unenlightened ones, there is authority and un-authority. What can be trusted, can be trusted; what cannot be trusted, you may as well throw away.

If scientists have changed their stories ever in their history, then science cannot be a true Authority, and can never again be trusted—like a witness caught in a contradiction, or like an employee found stealing from the till. If even the proponent of science admits that science is less than perfect, why, it must be pretty much worthless.

One obvious source for this pattern of thought is religion, where the scriptures are alleged to come from God; therefore to confess any flaw in them would destroy their authority utterly; so any trace of doubt is a sin, and claiming certainty is mandatory whether you're certain or not.

gestalt: an organized whole that is perceived as more than the sum of its parts.

Responses to give when an audience member says, "Science doesn't really know anything. All you have are theories—you can't know for certain that you're right. You scientists changed your minds about how gravity works—who's to say that tomorrow you won't change your minds about evolution?":

"The power of science comes from having the ability to change our minds and admit we're wrong. If you've never admitted you're wrong, it doesn't mean you've made fewer mistakes."

"Anyone can say they're absolutely certain. It's a bit harder to never, ever make any mistakes. Scientists understand the difference, so they don't say they're absolutely certain. That's all. It doesn't mean that they have any specific reason to doubt a theory—absolutely every scrap of evidence can be going the same way, all the stars and planets lined up like dominos in support of a single hypothesis, and the scientists still won't say they're absolutely sure, because they've just got higher standards. It doesn't mean scientists are less entitled to certainty than, say, the politicians who always seem so sure of everything."

"Would you be willing to change your mind about the things you call 'certain' if you saw enough evidence? I mean, suppose that God himself descended from the clouds and told you that your whole religion was true except for the Virgin Birth. If that would change your mind, you can't say you're absolutely certain of the Virgin Birth. For technical reasons of probability theory, if it's theoretically possible for you to change your mind about something, it can't have a probability exactly equal to one. The uncertainty might be smaller than a dust speck, but it has to be there. And if you wouldn't change your mind even if God told you otherwise, then you have a problem with refusing to admit you're wrong that transcends anything a mortal like me can say to you, I guess."

I think the first, beginning step should be understanding that you can live without certainty—that if, hypothetically speaking, you couldn't be certain of anything, it would not deprive you of the ability to make moral or factual distinctions. To paraphrase Lois Bujold, "Don't push harder, lower the resistance."

One of the common defenses of Absolute Authority is something I call "The Argument From The Argument From Gray", which runs like this:

  1. Moral relativists say:
    1. The world isn't black and white, therefore:
    2. Everything is gray (ERROR), therefore:
    3. No one is better than anyone else (ERROR), therefore:
    4. I can do whatever I want and you can't stop me bwahahaha.
  2. But we've got to be able to stop people from committing murder. (ERROR)
  3. Therefore there has to be some way of being absolutely certain, or the moral relativists win.

Fallacy: Appeal to Consequences of a Belief:

  1. X because if !X then there would be negative or positive consequences.
  2. I wish that X, therefore X.

the little progress bar in people's heads that measures their emotional commitment to a belief does not translate well into a calibrated confidence—it doesn't even behave monotonically.

As for "absolute certainty"—well, if you say that something is 99.9999% probable, it means you think you could make one million equally strong independent statements, one after the other, over the course of a solid year or so, and be wrong, on average, around once. (It's amazing to realize we can actually get that level of confidence for "Thou shalt not win the lottery.")

A probability of 1.0 isn't just certainty, it's infinite certainty.

"We are not INFINITELY certain." is a better way of saying "We are not absolutely certain."

How to Convince Me That 2 + 2 = 3 link

Unconditional facts are not the same as unconditional beliefs. If entangled evidence convinces me that a fact is unconditional, this doesn't mean I always believed in the fact without need of entangled evidence.

If there are any Christians in the audience who know Bayes's Theorem (no numerophobes, please) might I inquire of you what situation would convince you of the truth of Islam? Presumably it would be the same sort of situation causally responsible for producing your current belief in Christianity: We would push you screaming out of the uterus of a Muslim woman, and have you raised by Muslim parents who continually told you that it is good to believe unconditionally in Islam. Or is there more to it than that? If so, what situation would convince you of Islam, or at least, non-Christianity?

Infinite Certainty link

Yet the map is not the territory: if I say that I am 99% confident that 2 + 2 = 4, it doesn't mean that I think "2 + 2 = 4" is true to within 99% precision, or that "2 + 2 = 4" is true 99 times out of 100. The proposition in which I repose my confidence is the proposition that "2 + 2 = 4 is always and exactly true", not the proposition "2 + 2 = 4 is mostly and usually true".

If you say 99.9999% confidence, you're implying that you could make one million equally fraught statements, one after the other, and be wrong, on average, about once. That's around a solid year's worth of talking, if you can make one assertion every 20 seconds and you talk for 16 hours a day.

Assert 99.9999999999% confidence, and you're taking it up to a trillion. Now you're going to talk for a hundred human lifetimes, and not be wrong even once?

But even a confidence of (1 - 1/3^^^3) isn't all that much closer to PROBABILITY 1 than being 90% sure of something.

0 and 1 Are Not Probabilities link

Actually, the point being made is that your computation device – whether it’s a wonderfully faulty human brain, or even a highly deterministic computer chip – will get randomly hit by a cosmic ray well before you test a proposition ten quintillion times, never mind an “infinite” number.
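
A small sketch in the essay's log-odds framing (the example probabilities are arbitrary, and decibels are just one convenient unit): finite evidence moves you a finite number of decibels, and a probability of 1 would sit infinitely far away.

```python
import math

def log_odds_db(p):
    """Probability expressed as decibels of evidence, 10 * log10(odds)."""
    return 10 * math.log10(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(p, round(log_odds_db(p), 1))
# 0.5 -> 0 dB, 0.9 -> ~9.5 dB, 0.99 -> ~20 dB, 0.999999 -> ~60 dB.
# p = 1 would require infinitely many decibels (and p = 0, negatively infinite), which is
# the sense in which 0 and 1 behave like infinities rather than reachable probabilities.
```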

Your Rationality is My Business link

syllogism: an instance of a form of reasoning in which a conclusion is drawn (whether validly or not) from two given or assumed propositions (premises), each of which shares a term with the conclusion, and shares a common or middle term not present in the conclusion.

The counterintuitive idea underlying science is that factual disagreements should be fought out with experiments and mathematics, not violence and edicts.

Politics and Rationality

Politics is the Mind-Killer link

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back—providing aid and comfort to the enemy.

It's just better for the spiritual growth of the community to discuss the issue without invoking color politics.

Policy Debates Should Not Appear One-Sided link

A policy may legitimately have lopsided costs or benefits. If policy questions were not tilted one way or the other, we would be unable to make decisions about them. But there is also a human tendency to deny all costs of a favored policy, or deny all benefits of a disfavored policy; and people will therefore tend to think policy tradeoffs are tilted much further than they actually are.

The Scales of Justice, the Notebook of Rationality link

vignette: 1. a brief evocative description, account, or episode. 2. a small illustration or portrait photograph that fades into its background without a definite border.

parsimonious: unwilling to spend money or use resources; stingy or frugal.

homily: a religious discourse that is intended primarily for spiritual edification rather than doctrinal instruction; a sermon.

We are taught (by E. T. Jaynes) that all Bayesian evidence consists of probability flows between hypotheses; there is no such thing as evidence that "supports" or "contradicts" a single hypothesis, except insofar as other hypotheses do worse or better.

people tend to judge technologies—and many other problems—by an overall good or bad feeling.

Correspondence Bias link

pabulum: bland or insipid intellectual fare, entertainment, etc.

verity: a true principle or belief, especially one of fundamental importance.

Causes of Correspondence Bias:

  1. lack of awareness (situation is playing a causal role in an actor's behavior)
  2. unrealistic expectations
  3. inflated categorizations (representativeness heuristic)
  4. incomplete corrections

Sequence of events:

Prior Beliefs ->
Situation Perception ->
Behavioral Expectation ->
Behavior Perception ->
Attribution =
Disposition Inference (understanding of circumstance, like knowing (or being wrong about) the audience; measure of typicalness) ->
Situational Correction

Because observers could not actually see the "invisible jail" in which contestants were imprisoned, their impoverished understanding of the situation led them to have inappropriate expectations for the contestants' behavior.

construals: how individuals perceive, comprehend, and interpret the world around them, particularly the behavior or action of others towards themselves.

Behavioral constraints alter an actor's behavioral options by altering the actor's capacity to enact those options or by altering the capacity of the environment to sustain them. (almost physical limits, situation as constrained)

Psychological constraints do not change an actor's behavioral options so much as they change her or his understanding of those options. (technically still free to do as they will, situation as the actor sees it)

default is egocentric assumption: the situation as you perceive it is the situation that the actor perceives as well.

idiosyncrasy: a mode of behavior or way of thought peculiar to an individual.

erroneous estimates of situational strength need not be underestimates.

inflated categorization effect:

Asked subjects to watch a silent film of a young woman being interviewed.

Some subjects were told that the woman was being asked to discuss politics, and others were told that she was being asked to discuss sex.

Some of the subjects were given this information about the interview topic before seeing the film, and some were given the information only after seeing the film.

Those who were told afterwards used: "Don't attribute x units of anxious behavior to dispositional anxiety when the person is in a situation that provokes precisely x units of anxious behavior". They thought in the political interview, she was less dispositionally anxious.

Those who were told first used: subjects who expected the woman to be talking about sex saw a great deal of anxiety ( x + n ) in her somewhat ambiguous behavior. They discounted a behavior that had already been overinflated during categorization ([ x + n ] − x > 0).

salience: of an item – be it an object, a person, a pixel, etc. – is the state or quality by which it stands out from its neighbors.

ontogeny: the origination and development of an organism, usually from the time of fertilization of the egg to the organism's mature form.

ontogenetic: 1. of, relating to, or appearing in the course of ontogeny. 2. based on visible morphological characters

By studying ontogeny (the development of embryos), scientists can learn about the evolutionary history of organisms. Ancestral characters are often, but not always, preserved in an organism's development. For example, both chick and human embryos go through a stage where they have slits and arches in their necks that are identical to the gill slits and gill arches of fish. This observation supports the idea that chicks and humans share a common ancestor with fish. Thus, developmental characters, along with other lines of evidence, can be used for constructing phylogenies.

phylogenetics: is the study of the evolutionary history and relationships among individuals or groups of organisms (e.g. species, or populations).

In a rooted phylogenetic tree, each node with descendants represents the inferred most recent common ancestor of those descendants, and the edge lengths in some trees may be interpreted as time estimates.

surfeit: cause (someone) to desire no more of something as a result of having consumed or done it to excess.

somatic: of, relating to, or affecting the body especially as distinguished from the germplasm.

germplasm: germ cells and their precursors serving as the bearers of heredity and being fundamentally independent of other cells.

Somatic mutation, genetic alteration acquired by a cell that can be passed to the progeny of the mutated cell in the course of cell division. Somatic mutations differ from germ line mutations, which are inherited genetic alterations that occur in the germ cells (i.e., sperm and eggs). Somatic mutations are frequently caused by environmental factors, such as exposure to ultraviolet radiation or to certain chemicals.

soporific: tending to induce drowsiness or sleep.

when someone kicks a vending machine, we think they have an innate vending-machine-kicking-tendency. Unless the "someone" who kicks the machine is us—in which case we're behaving perfectly normally, given our situations; surely anyone else would do the same.

The "fundamental attribution error" refers to our tendency to overattribute others' behaviors to their dispositions, while reversing this tendency for ourselves.

people do have dispositions—but there are not enough heritable quirks of disposition to directly account for all the surface behaviors you see.

conspecific: adjective: (of animals or plants) belonging to the same species. noun: a member of the same species.

supervene: occur later than a specified or implied event or action, typically in such a way as to change the situation. "any plan that is made is liable to be disrupted by supervening events"

"Sufficiently tall skyscrapers don’t potentially start doing their own engineering."

stochastic: randomly determined; having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely.

false-consensus effect/bias is an attributional type of cognitive bias whereby people tend to overestimate the extent to which their opinions, beliefs, preferences, values, and habits are normal and typical of those of others (i.e., that others also think the same way that they do).

Are Your Enemies Innately Evil? link

When you accurately estimate the Enemy's psychology—when you know what is really in the Enemy's mind—that knowledge won't feel like landing a delicious punch on the opposing side.

If your estimate makes you feel unbearably sad, you may be seeing the world as it really is. More rarely, an accurate estimate may send shivers of serious horror down your spine, as when dealing with true psychopaths, or neurologically intact people with beliefs that have utterly destroyed their sanity (Scientologists or Jesus Camp).

Reversed Stupidity Is Not Intelligence link

Even if there were poorly hidden aliens, it would not be any less likely for flying saucer cults to arise.

horns effect: All perceived negative qualities correlate. If Stalin is evil, then everything he says should be false. You wouldn't want to agree with Stalin, would you?

"If you’re interested in being on the right side of disputes, you will refute your opponents’ arguments. But if you’re interested in producing truth, you will fix your opponents’ arguments for them. To win, you must fight not only the creature you encounter; you must fight the most horrible thing that can be constructed from its corpse."

Precedent Utilitarians believe that when a person compares possible actions in a specific situation, the comparative merit of each action is most accurately approximated by estimating the net probable gain in utility for all concerned from the consequences of the action, taking into account both the precedent set by the action, and the risk or uncertainty due to imperfect information.

ontology: the branch of metaphysics dealing with the nature of being.

Argument Screens Off Authority link

That is, the probability of the sidewalk being Slippery, given knowledge about the Sprinkler and the Night, is the same probability we would assign if we knew only about the Sprinkler. Knowledge of the Sprinkler has made knowledge of the Night irrelevant to inferences about Slipperiness. This is known as screening off, and the criterion that lets us read such conditional independences off causal graphs is known as D-separation.

p(truth|argument,expert) = p(truth|argument)

In practice you can never completely eliminate reliance on authority. Good authorities are more likely to know about any counterevidence that exists and should be taken into account; a lesser authority is less likely to know this, which makes their arguments less reliable. This is not a factor you can eliminate merely by hearing the evidence they did take into account.
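
A minimal numeric sketch of screening off (all numbers made up), on the chain Truth -> Argument -> Expert: because the expert's verdict depends on the truth only through the argument, conditioning on the argument makes the expert's verdict carry no further information.

```python
from itertools import product

p_truth = 0.5
p_arg_given = {True: 0.8, False: 0.3}   # P(good argument | claim is true/false)
p_exp_given = {True: 0.9, False: 0.2}   # P(expert endorses | argument is good/bad)

joint = {}
for t, a, e in product([True, False], repeat=3):
    joint[(t, a, e)] = (
        (p_truth if t else 1 - p_truth)
        * (p_arg_given[t] if a else 1 - p_arg_given[t])
        * (p_exp_given[a] if e else 1 - p_exp_given[a])
    )

def p_truth_given(argument, expert=None):
    """P(truth | argument) or P(truth | argument, expert), read off the joint table."""
    keep = {k: v for k, v in joint.items()
            if k[1] == argument and (expert is None or k[2] == expert)}
    return sum(v for k, v in keep.items() if k[0]) / sum(keep.values())

print(p_truth_given(True), p_truth_given(True, expert=True), p_truth_given(True, expert=False))
# All three agree (~0.727): once the argument itself is known, the expert's say-so no
# longer shifts the probability. That is p(truth|argument,expert) = p(truth|argument).
```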

Hug the Query link

The more directly your arguments bear on a question, without intermediate inferences—the closer the observed nodes are to the queried node, in the Great Web of Causality—the more powerful the evidence. It's a theorem of these causal graphs that you can never get more information from distant nodes, than from strictly closer nodes that screen off the distant ones.

Whenever you can, dance as near to the original question as possible—press yourself up against it—get close enough to hug the query!

Rationality and the English Language link

euphonious: (of sound, especially speech) pleasing to the ear.

A not unblack dog was chasing a not unsmall rabbit across a not ungreen field.

Journal articles are often written in passive voice. (Pardon me, some scientists write their journal articles in passive voice. It's not as if the articles are being written by no one, with no one to blame.)

Passive voice obscures reality.

a rationalist should become consciously aware of the experiences which words create.

Meaning does not excuse impact!

Book Suggestion: A Dictionary of Philosophical Quotations https://books.google.ca/books?id=NK2dLn48zWIC&pg=PA338&lpg=PA338&ots=vDJyvFZotS&sig=5Z1ZG8_yfTFIUpCIq_YSUgR749M&redir_esc=y&hl=en#PPA338,M1

Human Evil and Muddled Thinking link

If they can make you believe absurdities, they can make you commit atrocities.

In all human history, every great leap forward has been driven by a new clarity of thought. Except for a few natural catastrophes, every great woe has been driven by a stupidity. Our last enemy is ourselves; and this is a war, and we are soldiers.

Against Rationalization

Knowing About Biases Can Hurt People link

Fully General Counterarguments

Update Yourself Incrementally link

An experiment worth defying should command attention!

But I didn't say fraud. I didn't speculate on how the results might have been obtained. That would have been dismissive. I just stuck my neck out, and nakedly, boldly, without excuses, defied the data.

Rationality is not for winning debates, it is for deciding which side to join.

Even with a correct model, if it is not an exact model, you will sometimes need to revise your belief down.

Reasoning probabilistically, we realize that on average, a correct theory will generate a greater weight of support than countersupport. And so you can, without fear, say to yourself: "This is gently contrary evidence, I will shift my belief downward".

On every occasion, you must, on average, anticipate revising your beliefs downward as much as you anticipate revising them upward.

One Argument Against An Army link

when people encounter a contrary argument, they prevent themselves from downshifting their confidence by rehearsing already-known support.

by rehearsing arguments you already knew, you are double-counting the evidence.
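
A small odds-form sketch (numbers invented) of what double-counting does: multiplying in the same likelihood ratio a second time is not a neutral act of reassurance, it is a different and wrong posterior.

```python
prior_odds = 1.0          # 1 : 1 before any arguments
likelihood_ratio = 4.0    # the argument you already knew about

posterior_once = prior_odds * likelihood_ratio            # 4 : 1, the correct update
posterior_rehearsed = prior_odds * likelihood_ratio ** 2  # 16 : 1, same evidence counted twice

def to_probability(odds):
    return odds / (1 + odds)

print(round(to_probability(posterior_once), 2), round(to_probability(posterior_rehearsed), 2))
# 0.8 vs 0.94: rehearsing the old argument inflates confidence without any new evidence.
```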

The Bottom Line link

the handwriting of the curious inquirer is entangled with the signs and portents

Your effectiveness as a rationalist is determined by whichever algorithm actually writes the bottom line of your thoughts.

What Evidence Filtered Evidence? link

5:54pm 12.20.2018

Monty Hall problem.

Rationalization link

You cannot "rationalize" what is not already rational. It is as if "lying" were called "truthization".

A Rational Argument link

You defeated yourself the instant you specified your argument's conclusion in advance.

It becomes impossible to condition on the facts alone; they must condition on the additional fact of their presentation, and infer the existence of hidden evidence.

Avoiding Your Belief's Real Weak Points link

It is the resolution of doubts, not the mere act of doubting, which drives the ratchet of rationality forward.

A doubt that is not investigated might as well not exist. Every doubt exists to destroy itself, one way or the other. An unresolved doubt is a null-op; it does not turn the wheel, neither forward nor back.

When you're doubting one of your most cherished beliefs, close your eyes, empty your mind, grit your teeth, and deliberately think about whatever hurts the most. Don't rehearse standard objections whose standard counters would make you feel better. Ask yourself what smart people who disagree would say to your first reply, and your second reply. Whenever you catch yourself flinching away from an objection you fleetingly thought of, drag it out into the forefront of your mind. Punch yourself in the solar plexus. Stick a knife in your heart, and wiggle to widen the hole. In the face of the pain, rehearse only this:

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
—Eugene Gendlin

Motivated Stopping and Motivated Continuation link

The moral is that the decision to terminate a search procedure (temporarily or permanently) is, like the search procedure itself, subject to bias and hidden motives.

You should suspect motivated stopping when you close off search, after coming to a comfortable conclusion, and yet there's a lot of fast cheap evidence you haven't gathered yet—Web sites you could visit, counter-counter arguments you could consider, or you haven't closed your eyes for five minutes by the clock trying to think of a better option.

You should suspect motivated continuation when some evidence is leaning in a way you don't like, but you decide that more evidence is needed—expensive evidence that you know you can't gather anytime soon, as opposed to something you're going to look up on Google in 30 minutes—before you'll have to do anything uncomfortable.

Fake Justification link

Generally speaking, the Church has promulgated views about human sexuality that are unconscionably stupid and utterly lacking in empathy. Full stop. The fact that you have navigated this labyrinth of sacred prejudice and kept your sanity is no point in favor of religion.

And if you disagree that the truth of an idea can be neatly separated from its consolations, what does the phrase "wishful thinking" mean to you?

Is That Your True Rejection? link

This might be an important thing for young businesses and new-minted consultants to keep in mind—that what your failed prospects tell you is the reason for rejection, may not make the real difference; and you should ponder that carefully before spending huge efforts.

Entangled Truths, Contagious Lies link

As Tom McCabe put it: "Anyone who claims that the brain is a total mystery should be slapped upside the head with the MIT Encyclopedia of the Cognitive Sciences. All one thousand ninety-six pages of it."

Physically, each event is in some sense the sum of its whole past light cone, without borders or boundaries. But the list of noticeable entanglements is much shorter, and it gives you something like a network.

Dark Side Epistemology link

Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you're lying to others, or to yourself. You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing...

Just as a murderer ties the corpse of his victim to a heavy stone before throwing it into the water, so too do victims of the Dark Side tie ideas they want to dispose of to negative affect words. It really does make them less likely to resurface. The same caution applies to tying positive affect words to desired ideas.

Against Doublethink

Singlethink link

In doublethink, you forget, and then forget you have forgotten. In singlethink, you notice you are forgetting, and then you remember. You hold only a single non-contradictory thought in your mind at once.

Book Suggestion: Judgment Under Uncertainty

Doublethink (Choosing to be Biased) link

You can't know the consequences of being biased, until you have already debiased yourself. And then it is too late for self-deception.

neglect of probability: the tendency to disregard probability when making a decision under uncertainty. Small risks are typically either neglected entirely or hugely overrated.

The happiness of stupidity is closed to you. You will never have it short of actual brain damage, and maybe not even then. You should wonder, I think, whether the happiness of stupidity is optimal—if it is the most happiness that a human can aspire to—but it matters not. That way is closed to you, if it was ever open.

concomitant: naturally accompanying or associated.

pulchritudinous: describes a person of breathtaking, heartbreaking beauty.

Predict what each side will say in a sort of mental role-play; that is much easier if you imagine that it is not you who has such thoughts.

No, Really, I've Deceived Myself link

you don't have to really deceive yourself so long as you believe you've deceived yourself. Call it "belief in self-deception".

someone who actually seriously believed in God and acted accordingly, who was over the age of 20, would probably get looked at a little funny - they wouldn't get the warm friendship that accrues to those who just say the passwords.

Belief in Self-Deception link

I never understood the Prisoner's Dilemma until this day. Do you cooperate when you really do want the highest payoff? When it doesn't even seem fair for both of you to cooperate? When it seems right to defect even if the other player doesn't? That's the payoff matrix of the true Prisoner's Dilemma. But all the rest of the logic - everything about what happens if you both think that way, and both defect - is the same. Do we want to live in a universe of cooperation or defection?

I now realize that the whole essence of her philosophy was her belief that she had deceived herself, and the possibility that her estimates of other people were actually accurate, threatened the Dark Side Epistemology that she had built around beliefs such as "I benefit from believing people are nicer than they actually are."

She has taken the old idol off its throne, and replaced it with an explicit worship of the Dark Side Epistemology that was once invented to defend the idol; she worships her own attempt at self-deception. The attempt failed, but she is honestly unaware of this.

Moore's Paradox link

Moore's paradox: concerns the apparent absurdity involved in asserting a first-person present-tense sentence such as, "It's raining, but I don't believe that it is raining" or "It's raining but I believe that it is not raining."

Many people may not consciously distinguish between believing something and endorsing it.

Don't Believe You'll Self-Deceive link

If you know your belief isn't correlated to reality, how can you still believe it?

When it comes to deliberate self-deception, you must believe in your own inability!

Seeing with Fresh Eyes

Anchoring and Adjustment link

"How much would I be willing to pay for this if I didn't know any special information?" Useful for debiasing.

Watch yourself thinking, and try to notice when you are adjusting a figure in search of an estimate.

Priming and Contamination link

The more general result is that completely uninformative, known false, or totally irrelevant "information" can influence estimates and decisions. In the field of heuristics and biases, this more general phenomenon is known as contamination.

Do We Believe Everything We're Told? link

One might naturally think that on being told a proposition, we would first comprehend what the proposition meant, then consider the proposition, and finally accept or reject it. This obvious-seeming model of cognitive process flow dates back to Descartes. But Descartes's rival, Spinoza, disagreed; Spinoza suggested that we first passively accept a proposition in the course of comprehending it, and only afterward actively disbelieve propositions which are rejected by consideration.

This suggests — to say the very least—that we should be more careful when we expose ourselves to unreliable information, especially if we're doing something else at the time. Be careful when you glance at that newspaper in the supermarket.

Cached Thoughts link

It's a good guess that the actual majority of human cognition consists of cache lookups.

The "Outside the Box" Box link

Whenever someone exhorts you to "think outside the box", they usually, for your convenience, point out exactly where "outside the box" is located. Isn't it funny how nonconformists all dress the same...

The problem with originality is that you actually have to think in order to attain it, instead of letting your brain complete the pattern. There is no conveniently labeled "Outside the Box" to which you can immediately run off. There's an almost Zen-like quality to it—like the way you can't teach satori in words because satori is the experience of words failing you.

Original Seeing link

he thought about it and concluded she was evidently stopped with the same kind of blockage that had paralyzed him on his first day of teaching. She was blocked because she was trying to repeat, in her writing, things she had already heard, just as on the first day he had tried to repeat things he had already decided to say. She couldn't think of anything to write about Bozeman because she couldn't recall anything she had heard worth repeating. She was strangely unaware that she could look and see freshly for herself, as she wrote, without primary regard for what had been said before. The narrowing down to one brick destroyed the blockage because it was so obvious she had to do some original and direct seeing.

Stranger Than History link

Surface absurdity counts for nothing.

The Logical Fallacy of Generalization from Fictional Evidence link

Even the most diligent science fiction writers are, first and foremost, storytellers; the requirements of storytelling are not the same as the requirements of forecasting.

Almost like anchoring and contamination.

Remembered fictions rush in and do your thinking for you; they substitute for seeing—the deadliest convenience of all.

The Virtue of Narrowness link

Outside their own professions, people often commit the misstep of trying to broaden a word as widely as possible, to cover as much territory as possible.

There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all. The important graphs are the ones where some things are not connected to some other things.

How to Seem (and Be) Deep link

People who try to seem Deeply Wise often end up seeming hollow, echoing as it were, because they're trying to seem Deeply Wise instead of optimizing.

I suspect this is one reason Eastern philosophy seems deep to Westerners—it has nonstandard but coherent cache for Deep Wisdom. Symmetrically, in works of Japanese fiction, one sometimes finds Christians depicted as repositories of deep wisdom and/or mystical secrets.

To seem deep, study nonstandard philosophies. Seek out discussions on topics that will give you a chance to appear deep. Do your philosophical thinking in advance, so you can concentrate on explaining well. Above all, practice staying within the one-inferential-step bound.

We Change Our Minds Less Often Than We Think link

I realized that once I could guess what my answer would be—once I could assign a higher probability to deciding one way than other—then I had, in all probability, already decided. We change our minds less often than we think. And most of the time we become able to guess what our answer will be within half a second of hearing the question.

How swiftly that unnoticed moment passes, when we can't yet guess what our answer will be; the tiny window of opportunity for intelligence to act. In questions of choice, as in questions of fact.

Hold Off On Proposing Solutions link

Traditional Rationality emphasizes falsification— the ability to relinquish an initial opinion when confronted by clear evidence against it. But once an idea gets into your head, it will probably require way too much evidence to get it out again. Worse, we don't always have the luxury of overwhelming evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can't yet guess what our answer will be; thus giving our intelligence a longer time in which to act. Even half a minute would be an improvement over half a second.

The Genetic Fallacy link

In lists of logical fallacies, you will find included "the genetic fallacy"—the fallacy attacking a belief, based on someone's causes for believing it.

The genetic fallacy is formally a fallacy, because the original cause of a belief is not the same as its current justificational status, the sum of all the support and antisupport currently known.

Clearing your mind is a powerful heuristic when you're faced with new suspicion that many of your ideas may have come from a flawed source.

It takes a convulsive effort to actually reconsider, instead of letting your mind fall into the pattern of rehearsing cached arguments. "It ain't a true crisis of faith unless things could just as easily go either way," said Thor Shenkel.

You should be extremely suspicious if you have many ideas suggested by a source that you now know to be untrustworthy, but by golly, it seems that all the ideas still ended up being right—the Bible being the obvious archetypal example.

Death Spirals link

The Affect Heuristic link

that feeling was an integral component of the machinery of reason.

“In short, somatic markers are . . . feelings generated from secondary emotions. These emotions and feelings have been connected, by learning, to predicted future outcomes of certain scenarios” (Damasio, 1994, p. 174). When a negative somatic marker is linked to an image of a future outcome it sounds an alarm. When a positive marker is associated with the outcome image, it becomes a beacon of incentive. Damasio concludes that somatic markers increase the accuracy and efficiency of the decision process and their absence degrades decision performance.

High odds signal success. Payoff is ignored, because it isn't as easily evaluable.

By adding a loss chance, you make "net gain" evaluable, and all of a sudden the lower-odds gamble seems better than the safe gain of a low payoff.

imaging the numerator: neglecting probability in favor of the larger pool of desired instances. 10% is judged more laxly than 10/100...

veridical: truthful. coinciding with reality.

Today’s pain, hunger, anger, etc. are palpable, but the same sensations anticipated in the future receive little weight.

That is, the one-day difference between today and tomorrow looms much larger than the difference between one year from now and one year and a day from now (Gibbon, 1977).


Evaluability (And Cheap Holiday Shopping) link

Evaluably good products are seen as more generous. E.g., a $45 scarf (expensive as scarves go) reads as a more generous present than a $55 coat (cheap as coats go).

Unbounded Scales, Huge Jury Awards, & Futurism link

The ratios between stimuli will continue to correlate reliably between subjects. Subject A says that sound X has a loudness of 10 and sound Y has a loudness of 15. If subject B says that sound X has a loudness of 100, then it's a good guess that subject B will assign loudness in the range of 150 to sound Y.

For a subject rating a single sound, on an unbounded scale, without a fixed standard of comparison, nearly all the variance is due to the arbitrary choice of modulus, rather than the sound itself.

I observe that many futuristic predictions are, likewise, best considered as attitude expressions.

The Halo Effect link

The halo effect is that perceptions of all positive traits are correlated.

The influence of attractiveness on ratings of intelligence, honesty, or kindness is a clear example of bias—especially when you judge these other qualities based on fixed text—because we wouldn't expect judgments of honesty and attractiveness to conflate for any legitimate reason. On the other hand, how much of my perceived intelligence is due to my honesty? How much of my perceived honesty is due to my intelligence? Finding the truth, and saying the truth, are not as widely separated in nature as looking pretty and looking smart...

But these studies on the halo effect of attractiveness, should make us suspicious that there may be a similar halo effect for kindness, or intelligence. Let's say that you know someone who not only seems very intelligent, but also honest, altruistic, kindly, and serene. You should be suspicious that some of these perceived characteristics are influencing your perception of the others. Maybe the person is genuinely intelligent, honest, and altruistic, but not all that kindly or serene. You should be suspicious if the people you know seem to separate too cleanly into devils and angels.

Superhero Bias link

The bias I wish to point out is that Gandhi's fame score seems to get perceptually added to his justly accumulated altruism score. When you think about nonviolence, you think of Gandhi—not an anonymous protestor in one of Gandhi's marches who faced down riot clubs and guns, and got beaten, and had to be taken to the hospital, and walked with a limp for the rest of her life, and no one ever remembered her name.

After we've done our best to reduce risk and increase scope, any remaining heroism is well and truly revealed.

asceticism: severe self-discipline and avoidance of all forms of indulgence, typically for religious reasons

sublime: of such excellence, grandeur, or beauty as to inspire great admiration or awe.

Mere Messiahs link

Affective Death Spirals link

The halo effect is that any perceived positive characteristic (such as attractiveness or strength) increases perception of any other positive characteristic (such as intelligence or courage). Even when it makes no sense, or less than no sense.

Weak positive affect is subcritical; it doesn't spiral out of control. An attractive person seems more honest, which, perhaps, makes them seem more attractive; but the effective neutron multiplication factor is less than 1. Metaphorically speaking. The resonance confuses things a little, but then dies out.

With intense positive affect attached to the Great Thingy, the resonance touches everywhere. Every time they use the Great Idea to interpret another event, the Great Idea is confirmed all the more. It feels better—positive reinforcement—and of course, when something feels good, that, alas, makes us want to believe it all the more.

"affective death spiral"

Resist the Happy Death Spiral link

The really dangerous cases are the ones where any criticism of any positive claim about the Great Thingy feels bad or is socially unacceptable. Arguments are soldiers, any positive claim is a soldier on our side, stabbing your soldiers in the back is treason. Then the chain reaction goes supercritical.

Cut up your Great Thingy into smaller independent ideas, and treat them as independent. Hopefully this leads you away from the good or bad feeling, and toward noticing the confusion and lack of support.

you do avoid a Happy Death Spiral by:

  1. splitting the Great Idea into parts.
  2. treating every additional detail as burdensome.
  3. thinking about the specifics of the causal chain instead of the good or bad feelings.
  4. not rehearsing evidence.
  5. not adding happiness from claims that "you can't prove are wrong".

but not by

  • refusing to admire anything too much
  • conducting a biased search for negative points until you feel unhappy again
  • forcibly shoving an idea into a safe box.

Uncritical Supercriticality link

If a tree falls in a forest and no one hears it, does it make a sound?
If you want to know whether landmines will detonate, you will not get lost in fighting over the meaning of the word "sound".

No true Scotsman/appeal to purity: an informal fallacy in which one attempts to protect a universal generalization from counterexamples by changing the definition in an ad hoc fashion to exclude the counterexample.

What you really want to know—what the argument was originally about—is why, at certain points in human history, large groups of people were slaughtered and tortured, ostensibly in the name of an idea. Redefining a word won't change the facts of history one way or the other.


"If your brother, the son of your father or of your mother, or your son or daughter, or the spouse whom you embrace, or your most intimate friend, tries to secretly seduce you, saying, 'Let us go and serve other gods,' unknown to you or your ancestors before you, gods of the peoples surrounding you, whether near you or far away, anywhere throughout the world, you must not consent, you must not listen to him; you must show him no pity, you must not spare him or conceal his guilt. No, you must kill him, your hand must strike the first blow in putting him to death and the hands of the rest of the people following. You must stone him to death, since he has tried to divert you from Yahweh your God." (Deuteronomy 13:7-11, emphasis added)

This was likewise the rule which Stalin set for Communism, and Hitler for Nazism: if your brother tries to tell you why Marx is wrong, if your son tries to tell you the Jews are not planning world conquest, then do not debate him or set forth your own evidence; do not perform replicable experiments or examine history; but turn him in at once to the secret police.


hopefully reduce the resonance to below criticality, so that one nice-sounding claim triggers less than 1.0 additional nice-sounding claims, on average.
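
The "below criticality" condition here is a branching-process threshold, and it can be checked empirically. A minimal simulation sketch of my own (the Poisson-distributed offspring and the specific k values are illustrative assumptions, not from the text):

```python
import math
import random

def poisson(lam):
    """Draw from a Poisson(lam) distribution (Knuth's algorithm; stdlib only)."""
    threshold, n, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

def cascade_size(k, cap=100_000):
    """Total number of claims in a cascade where each claim triggers,
    on average, k follow-on claims (Poisson-distributed). Capped so the
    supercritical case doesn't run forever."""
    total = frontier = 1
    while frontier and total < cap:
        frontier = sum(poisson(k) for _ in range(frontier))
        total += frontier
    return total

for k in (0.5, 0.9, 1.1):
    sizes = [cascade_size(k) for _ in range(200)]
    print(f"k = {k}: mean cascade size ~ {sum(sizes) / len(sizes):.1f}")
```

Counting the original claim, the subcritical runs hover near the closed-form 1/(1 − k) (about 2 and 10 here), while the k = 1.1 case blows up or hits the cap—one nice-sounding claim per nice-sounding claim is exactly the tipping point.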

In short, I argue "naturalism" means, in the simplest terms, that every mental thing is entirely caused by fundamentally nonmental things, and is entirely dependent on nonmental things for its existence. Therefore, "supernaturalism" means that at least some mental things cannot be reduced to nonmental things.

Thus Clarke's Third Law ("any sufficiently advanced technology is indistinguishable from magic") is only, at best, an epistemological principle, not a metaphysical one. Metaphysically, magic and technology are very definitely always distinguishable.

antecedent: a thing or event that existed before or logically precedes another.

The vast majority of possible beliefs in a nontrivial answer space are false, and likewise, the vast majority of possible supporting arguments for a true belief are also false, and not even the happiest idea can change that.

Evaporative Cooling of Group Beliefs link

In Festinger's classic "When Prophecy Fails", one of the cult members walked out the door immediately after the flying saucer failed to land. Who gets fed up and leaves first? An average cult member? Or a relatively more skeptical member, who previously might have been acting as a voice of moderation, a brake on the more fanatic members?

This is one reason why it's important to be prejudiced in favor of tolerating dissent. Wait until substantially after it seems to you justified in ejecting a member from the group, before actually ejecting. If you get rid of the old outliers, the group position will shift, and someone else will become the oddball. If you eject them too, you're well on the way to becoming a Bose-Einstein condensate and, er, exploding.

My own theory of Internet moderation is that you have to be willing to exclude trolls and spam to get a conversation going. You must even be willing to exclude kindly but technically uninformed folks from technical mailing lists if you want to get any work done. A genuinely open conversation on the Internet degenerates fast. It's the articulate trolls that you should be wary of ejecting, on this theory—they serve the hidden function of legitimizing less extreme disagreements. But you should not have so many articulate trolls that they begin arguing with each other, or begin to dominate conversations. If you have one person around who is the famous Guy Who Disagrees With Everything, anyone with a more reasonable, more moderate disagreement won't look like the sole nail sticking out. This theory of Internet moderation may not have served me too well in practice, so take it with a grain of salt.

When None Dare Urge Restraint link

That's the challenge of pessimism; it's really hard to aim low enough that you're pleasantly surprised around as often and as much as you're unpleasantly surprised.

Initially, on 9/11, it was thought that six thousand people had died. Any politician who'd said "6000 deaths is 1/8 the annual US casualties from automobile accidents," would have been asked to resign the same hour.

This is the even darker mirror of the happy death spiral—the spiral of hate.

Anyone who attacks the Enemy is a patriot; and whoever tries to dissect even a single negative claim about the Enemy is a traitor. But just as the vast majority of all complex statements are untrue, the vast majority of negative things you can say about anyone, even the worst person in the world, are untrue.

It is just too dangerous for there to be any target in the world, whether it be the Jews or Adolf Hitler, about whom saying negative things trumps saying accurate things.

When the defense force contains thousands of aircraft and hundreds of thousands of heavily armed soldiers, one ought to consider that the immune system itself is capable of wreaking more damage than 19 guys and four nonmilitary airplanes. The US spent billions of dollars and thousands of soldiers' lives shooting off its own foot more effectively than any terrorist group could dream.

Once restraint becomes unspeakable, no matter where the discourse starts out, the level of fury and folly can only rise with time.

The Robbers Cave Experiment link

(The Outside Enemy, one of the oldest tricks in the book.)

Every Cause Wants To Be A Cult link

The ingroup-outgroup dichotomy is part of ordinary human nature. So are happy death spirals and spirals of hate. A Noble Cause doesn't need a deep hidden flaw for its adherents to form a cultish in-group. It is sufficient that the adherents be human. Everything else follows naturally, decay by default, like food spoiling in a refrigerator after the electricity goes off.

Every group of people with an unusual goal—good, bad, or silly—will trend toward the cult attractor unless they make a constant effort to resist it. You can keep your house cooler than the outdoors, but you have to run the air conditioner constantly, and as soon as you turn off the electricity—give up the fight against entropy—things will go back to "normal".

Worshipping rationality won't make you sane any more than worshipping gravity enables you to fly. You can't talk to thermodynamics and you can't pray to probability theory. You can use it, but not join it as an in-group.

Cultishness is quantitative, not qualitative. The question is not "Cultish, yes or no?" but "How much cultishness and where?" Even in Science, which is the archetypal Genuinely Truly Noble Cause, we can readily point to the current frontiers of the war against cult-entropy, where the current battle line creeps forward and back. Are journals more likely to accept articles with a well-known authorial byline, or from an unknown author from a well-known institution, compared to an unknown author from an unknown institution? How much belief is due to authority and how much is from the experiment? Which journals are using blinded reviewers, and how effective is blinded reviewing?
I cite this example, rather than the standard vague accusations of "Scientists aren't open to new ideas", because it shows a battle line—a place where human psychology is being actively driven back, where accumulated cult-entropy is being pumped out. (Of course this requires emitting some waste heat.)

the worthiness of the Cause does not mean you can spend any less effort in resisting the cult attractor. And that if you can point to current battle lines, it does not mean you confess your Noble Cause unworthy. You might think that if the question were "Cultish, yes or no?" that you were obliged to answer "No", or else betray your beloved Cause. But that is like thinking that you should divide engines into "perfectly efficient" and "inefficient", instead of measuring waste.

Guardians of the Truth link

But the Inquisitors were not Truth-Seekers. They were Truth-Guardians.

assuming a defensive posture toward the truth, versus a productive posture toward the truth.

If there's some way to pump against entropy, generate new true beliefs along with a little waste heat, that same pump can keep the truth alive without secret police.

Because experiments can recover the truth without need of authority, they can also override authority and create new true beliefs where none existed before.

I don't mean to provide a grand overarching single-factor view of history. I do mean to point out a deep psychological difference between seeing your grand cause in life as protecting, guarding, preserving, versus discovering, creating, improving. Does the "up" direction of time point to the past or the future? It's a distinction that shades everything, casts tendrils everywhere.

I would also argue that this basic psychological difference is one of the reasons why an academic field that stops making active progress tends to turn mean. (At least by the refined standards of science. Reputational assassination is tame by historical standards; most defensive-posture belief systems went for the real thing.) If major shakeups don't arrive often enough to regularly promote young scientists based on merit rather than conformity, the field stops resisting the standard degeneration into authority. When there's not many discoveries being made, there's nothing left to do all day but witch-hunt the heretics.

Guardians of the Gene Pool link

You may remember that Hitler and the Nazis planned to carry forward a romanticized process of evolution, to breed a new master race, supermen, stronger and smarter than anything that had existed before.
Actually this is a common misconception. Hitler believed that the Aryan superman had previously existed—the Nordic stereotype, the blond blue-eyed beast of prey—but had been polluted by mingling with impure races. There had been a racial Fall from Grace.

It says something about how difficult it is for the relatively healthy to envision themselves in the shoes of the relatively sick, that we are told of the Nazis, and distort the tale to make them defective transhumanists.

It's the Communists who were the defective transhumanists. "New Soviet Man" and all that. The Nazis were quite definitely the bioconservatives of the tale.

Guardians of Ayn Rand link

To me the thought of voluntarily embracing a system explicitly tied to the beliefs of one human being, who's dead, falls somewhere between the silly and the suicidal. A computer isn't five years old before it's obsolete.

It takes a much stronger constitution to fear authority when you have the power. When people are looking to you for answers, it's harder to say "What the hell do I know about music? I'm a writer, not a composer," or "It's hard to see how liking a piece of music can be untrue."

When you're the one crushing those who dare offend you, the exercise of power somehow seems much more justifiable than when you're the one being crushed. All sorts of excellent justifications somehow leap to mind.

Ayn Rand fled the Soviet Union, wrote a book about individualism that a lot of people liked, got plenty of compliments, and formed a coterie of admirers. Her admirers found nicer and nicer things to say about her (happy death spiral), and she enjoyed it too much to tell them to shut up. She found herself with the power to crush those of whom she disapproved, and she didn't resist the temptation of power.

The only extraordinary thing about the whole business, is how ordinary it was.

Let that be a lesson to all of us: Praising "rationality" counts for nothing. Even saying "You must justify your beliefs through Reason, not by agreeing with the Great Leader" just runs a little automatic program that takes whatever the Great Leader says and generates a justification that your fellow followers will view as Reason-able.

So where is the true art of rationality to be found? Studying up on the math of probability theory and decision theory. Absorbing the cognitive sciences like evolutionary psychology, or heuristics and biases. Reading history books...

progress isn't fair! That's the point!

The great Names are not our superiors, or even our rivals, they are passed milestones on our road; and the most important milestone is the hero yet to come.

To be one more milestone in humanity's road is the best that can be said of anyone; but this seemed too lowly to please Ayn Rand. And that is how she became a mere Ultimate Prophet.

Two Cult Koans link

"If you find a hammer lying in the road and sell it, you may ask a low price or a high one. But if you keep the hammer and use it to drive nails, who can doubt its worth?"

Asch's Conformity Experiment link

If you were looking at a diagram like the one above, but you knew for a fact that the other people in the experiment were honest and seeing the same diagram as you, and three other people said that C was the same size as X, then what are the odds that only you are the one who's right? I lay claim to no advantage of visual reasoning—I don't think I'm better than an average human at judging whether two lines are the same size. In terms of individual rationality, I hope I would notice my own severe confusion and then assign >50% probability to the majority vote.
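
A back-of-the-envelope Bayes calculation of my own (the 95% per-person accuracy, the two-candidate setup, and the independence assumption are all made up for the illustration) of why three honest, independent dissenters should swamp your own perception:

```python
def p_i_am_right(my_accuracy=0.95, their_accuracy=0.95, n_dissenters=3):
    """Two candidate answers, equal priors. Everyone reports honestly and
    independently, each correct with the given accuracy.
    Returns P(my answer is correct | all n_dissenters chose the other one)."""
    # Likelihood of this split if I'm right: I'm correct, they're all wrong
    like_me_right = my_accuracy * (1 - their_accuracy) ** n_dissenters
    # Likelihood of this split if they're right: I'm wrong, they're all correct
    like_them_right = (1 - my_accuracy) * their_accuracy ** n_dissenters
    return like_me_right / (like_me_right + like_them_right)

print(f"{p_i_am_right():.4f}")  # ~0.0028, i.e. well over 99% on the majority's answer
```

The exact number moves around with the assumed accuracies and the independence assumption, but assigning more than 50% to the majority vote is hard to escape under any honest-reporting model.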

In terms of group rationality, seems to me that the proper thing for an honest rationalist to say is, "How surprising, it looks to me like B is the same size as X. But if we're all looking at the same diagram and reporting honestly, I have no reason to believe that my assessment is better than yours." The last sentence is important—it's a much weaker claim of disagreement than, "Oh, I see the optical illusion—I understand why you think it's C, of course, but the real answer is B."

Unsurprisingly, subjects in the one-dissenter condition did not think their nonconformity had been influenced or enabled by the dissenter.

Being the first dissenter is a valuable (and costly!) social service, but you've got to keep it up.

Consistently within and across experiments, all-female groups (a female subject alongside female confederates) conform significantly more often than all-male groups. Around one-half the women conform more than half the time, versus a third of the men.

On Expressing Your Concerns link

And the wearisome thing is that dissent was not learned over the course of the experiment—when the single dissenter started siding with the group, rates of conformity rose back up.

Being a voice of dissent can bring real benefits to the group. But it also (famously) has a cost. And then you have to keep it up. Plus you could be wrong.

An individual, working alone, will have natural doubts. They will think to themselves, "Can I really do XYZ?", because there's nothing impolite about doubting your own competence. But when two unconfident people form a group, it is polite to say nice and reassuring things, and impolite to question the other person's competence.

The most fearsome possibility raised by Asch's experiments on conformity is the specter of everyone agreeing with the group, swayed by the confident voices of others, careful not to let their own doubts show—not realizing that others are suppressing similar worries. This is known as "pluralistic ignorance".

Therefore it is written: "Do not believe you do others a favor if you accept their arguments; the favor is to you."

Unfortunately, there's not much difference socially between "expressing concerns" and "disagreement". A group of rationalists might agree to pretend there's a difference, but it's not how human beings are really wired. Once you speak out, you've committed a socially irrevocable act; you've become the nail sticking up, the discord in the comfortable group harmony, and you can't undo that. Anyone insulted by a concern you expressed about their competence to successfully complete task XYZ, will probably hold just as much of a grudge afterward if you say "No problem, I'll go along with the group" at the end.

Lonely Dissent link

Followup interviews showed that subjects in the one-dissenter condition expressed strong feelings of camaraderie with the dissenter—though, of course, they didn't think the presence of the dissenter had influenced their own nonconformity.

Lonely dissent doesn't feel like going to school dressed in black. It feels like going to school wearing a clown suit. That's the difference between joining the rebellion and leaving the pack.

When someone wears black to school, the teachers and the other children understand the role thereby being assumed in their society. It's Outside the System—in a very standard way that everyone recognizes and understands. Not, y'know, actually outside the system. It's a Challenge to Standard Thinking, of a standard sort, so that people indignantly say "I can't understand why you—", but don't have to actually think any thoughts they had not thought before.

What takes real courage is braving the outright incomprehension of the people around you, when you do something that isn't Standard Rebellion #37, something for which they lack a ready-made script. They don't hate you for a rebel, they just think you're, like, weird, and turn away. This prospect generates a much deeper fear.
It's the difference between explaining vegetarianism and explaining cryonics. There are other cryonicists in the world, somewhere, but they aren't there next to you. You have to explain it, alone, to people who just think it's weird. Not forbidden, but outside bounds that people don't even think about. You're going to get your head frozen? You think that's going to stop you from dying? What do you mean, brain information? Huh? What? Are you crazy?

As the case of cryonics testifies, the fear of thinking really different is stronger than the fear of death. Hunter-gatherers had to be ready to face death on a routine basis, hunting large mammals, or just walking around in a world that contained predators. They needed that courage in order to live. Courage to defy the tribe's standard ways of thinking, to entertain thoughts that seem truly weird—well, that probably didn't serve its bearers as well. We don't reason this out explicitly; that's not how evolutionary psychology works. We human beings are just built in such fashion that many more of us go skydiving than sign up for cryonics.

To be a scientific revolutionary, you've got to be the first person to contradict what everyone else you know is thinking. This is not the only route to scientific greatness; it is rare even among the great. No one can become a scientific revolutionary by trying to imitate revolutionariness. You can only get there by pursuing the correct answer in all things, whether the correct answer is revolutionary or not. But if, in the due course of time—if, having absorbed all the power and wisdom of the knowledge that has already accumulated—if, after all that and a dose of sheer luck, you find your pursuit of mere correctness taking you into new territory... then you have an opportunity for your courage to fail.

Freethinkers see the deck stacked against new or contrary ideas, and see their own brave contrarian stance as a needed antidote to unreasonable conformity pressures. On net, however, freethinkers deserve much of the blame for resistance to new ideas.

Contrary to their self-image, undiscriminating freethinkers are our main obstacle to innovation.

Most of the difficulty in having a new true scientific thought is in the "true" part.

It really isn't necessary to be different for the sake of being different. If you do things differently only when you see an overwhelmingly good reason, you will have more than enough trouble to last you the rest of your life.

But if you think you would totally wear that clown suit, then don't be too proud of that either! It just means that you need to make an effort in the opposite direction to avoid dissenting too easily. That's what I have to do, to correct for my own nature. Other people do have reasons for thinking what they do, and ignoring that completely is as bad as being afraid to contradict them. You wouldn't want to end up as a free thinker. It's not a virtue, you see—just a bias either way.

Cultish Countercultishness link

I should probably sympathize more with people who, embarking on some odd-seeming endeavor, are terribly nervous that they might be joining a cult. It should not grate on my nerves. Which it does.

But if you observe that a certain group of people seems to exhibit ingroup-outgroup polarization and see a positive halo effect around their Favorite Thing Ever—which could be Objectivism, or vegetarianism, or neural networks—you cannot, from the evidence gathered so far, deduce whether they have achieved uncriticality. You cannot deduce whether their main idea is true, or false, or genuinely useful but not quite as useful as they think. From the information gathered so far, you cannot deduce whether they are otherwise polite, or if they will lure you into isolation and deprive you of sleep and food. The characteristics of cultness are not all present or all absent.

You cannot build up an accurate picture of a group's reasoning dynamic using this kind of essentialism. You've got to pay attention to individual characteristics individually.

If you're interested in the central idea, not just the implementation group, then smart ideas can have stupid followers.

Along with binary essentialism goes the idea that if you infer that a group is a "cult", therefore their beliefs must be false, because false beliefs are characteristic of cults, just like cats have fur. If you're interested in the idea, then look at the idea, not the people. Cultishness is a characteristic of groups more than hypotheses.

surely one can see that nervously seeking reassurance is not the best frame of mind in which to evaluate questions of rationality. You will not be genuinely curious or think of ways to fulfill your doubts. Instead, you'll find some online source which says that cults use sleep deprivation to control people, you'll notice that Your-Favorite-Group doesn't use sleep deprivation, and you'll conclude "It's not a cult. Whew!" If it doesn't have fur, it must not be a cat. Very reassuring.

But any group with a goal seen in a positive light, is at risk for the halo effect, and will have to pump against entropy to avoid an affective death spiral. This is true even for ordinary institutions like political parties—people who think that "liberal values" or "conservative values" can cure cancer, etc. It is true for Silicon Valley startups, both failed and successful. It is true of Mac users and of Linux users.

That's why groups whose beliefs have been around long enough to seem "normal" don't inspire the same nervousness as "cults", though some mainstream religions may also take all your money and send you to a monastery. It's why groups like political parties, that are strongly liable for rationality errors, don't inspire the same nervousness as "cults". The word "cult" isn't being used to symbolize rationality errors, it's being used as a label for something that seems weird.

And so people ask "This isn't a cult, is it?" in a tone that they would never use for attending a political rally, or for putting up a gigantic Christmas display.

(sarcasm) just goes to show that everyone with an odd belief is crazy; the first and foremost characteristic of "cult members" is that they are Outsiders with Peculiar Ways.

Yes, socially unusual belief puts a group at risk for ingroup-outgroup thinking and evaporative cooling and other problems. But the unusualness is a risk factor, not a disease in itself. Same thing with having a goal that you think is worth accomplishing. Whether or not the belief is true, having a nice goal always puts you at risk of the happy death spiral. But that makes lofty goals a risk factor, not a disease. Some goals are genuinely worth pursuing.

On the other hand, I see no legitimate reason for sleep deprivation or threatening dissenters with beating, full stop. When a group does this, then whether you call it "cult" or "not-cult", you have directly answered the pragmatic question of whether to join.

Doubt shouldn't be scary. Otherwise you're going to have to choose between living one heck of a hunted life, or one heck of a stupid one.

If you really, genuinely can't figure out whether a group is a "cult", then you'll just have to choose under conditions of uncertainty. That's what decision theory is all about.

People who talk about "rationality" also have an added risk factor. Giving people advice about how to think is an inherently dangerous business. But it is a risk factor, not a disease.

It is your own responsibility to stop yourself from thinking cultishly, no matter which group you currently happen to be operating in.

A skillful swordsman focuses on the target, rather than glancing away to see if anyone might be laughing. When you know what you're trying to do and why, you'll know whether you're getting it done or not, and whether a group is helping you or hindering you.

Letting Go

The Importance of Saying "Oops" link

The Crackpot Offer link

When I was very young—I think thirteen or maybe fourteen—I thought I had found a disproof of Cantor's Diagonal Argument, a famous theorem which demonstrates that the real numbers outnumber the rational numbers. Ah, the dreams of fame and glory that danced in my head!
So I found this counterexample, and saw that my attempted disproof was false, along with my dreams of fame and glory.

And then I realized something. I realized that I had made a mistake, and that, now that I'd spotted my mistake, there was absolutely no reason to suspect the strength of Cantor's Diagonal Argument any more than other major theorems of mathematics.
I saw then very clearly that I was being offered the opportunity to become a math crank, and to spend the rest of my life writing angry letters in green ink to math professors. (I'd read a book once about math cranks.)

Until you admit you were wrong, you cannot get on with your life; your self-image will still be bound to the old mistake.

Just Lose Hope Already link

Every profession has a different way to be smart—different skills to learn and rules to follow. You might therefore think that the study of "rationality", as a general discipline, wouldn't have much to contribute to real-life success. And yet it seems to me that how to not be stupid has a great deal in common across professions. If you set out to teach someone how to not turn little mistakes into big mistakes, it's nearly the same art whether in hedge funds or romance, and one of the keys is this: Be ready to admit you lost.

You Can Face Reality link

What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
Eugene Gendlin

The Meditation on Curiosity link

Consider what happens to you, on a psychological level, if you begin by saying: "It is my duty to criticize my own beliefs." Roger Zelazny once distinguished between "wanting to be an author" versus "wanting to write". Mark Twain said: "A classic is something that everyone wants to have read and no one wants to read." Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you'll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.

This can lead to motivated stopping of your investigation. You consider an objection, then a counterargument to that objection, then you stop there. You repeat this with several objections, until you feel that you have done your duty to investigate, and then you stop there. You have achieved your underlying psychological objective: to get rid of the cognitive dissonance that would result from thinking of yourself as a rationalist, and yet knowing that you had not tried to criticize your belief. You might call it purchase of rationalist satisfaction—trying to create a "warm glow" of discharged duty.
Afterward, your stated probability level will be high enough to justify your keeping the plans and beliefs you started with, but not so high as to evoke incredulity from yourself or other rationalists.

When you're really curious, you'll gravitate to inquiries that seem most promising of producing shifts in belief, or inquiries that are least like the ones you've tried before. Afterward, your probability distribution likely should not look like it did when you started out—shifts should have occurred, whether up or down; and either direction is equally fine to you, if you're genuinely curious.

"If in your heart you believe you already know, or if in your heart you do not wish to know, then your questioning will be purposeless and your skills without direction. Curiosity seeks to annihilate itself; there is no curiosity that does not want an answer."

So what can you do with duty? For a start, we can try to take an interest in our dutiful investigations—keep a close eye out for sparks of genuine intrigue, or even genuine ignorance and a desire to resolve it. This goes right along with keeping a special eye out for possibilities that are painful, that you are flinching away from—it's not all negative thinking.
It should also help to meditate on Conservation of Expected Evidence.
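
For reference (this is the standard probability identity, not quoted from the text), Conservation of Expected Evidence is the statement that the prior equals the probability-weighted expectation of the posterior:

$$
P(H) \;=\; P(H \mid E)\,P(E) \;+\; P(H \mid \neg E)\,P(\neg E),
$$

so if observing E would raise your confidence in H, then failing to observe E must lower it by a correspondingly weighted amount; you cannot expect, before looking, to end up more confident than you started.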

If you can find within yourself the slightest shred of true uncertainty, then guard it like a forester nursing a campfire. If you can make it blaze up into a flame of curiosity, it will make you light and eager, and give purpose to your questioning and direction to your skills.

No One Can Exempt You From Rationality's Laws link

Humans detect social cheating with much greater reliability than isomorphic violations of abstract logical rules. But viewing rationality as a social obligation gives rise to some strange ideas.

To Bayesians, the brain is an engine of accuracy: it processes and concentrates entangled evidence into a map that reflects the territory. The principles of rationality are laws in the same sense as the second law of thermodynamics: obtaining a reliable belief requires a calculable amount of entangled evidence, just as reliably cooling the contents of a refrigerator requires a calculable minimum of free energy.
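
A rough sketch of what "a calculable amount of entangled evidence" can mean in Bayesian terms (the numbers below are illustrative assumptions of mine, not from the text): working in log-odds, each independent observation contributes the log of its likelihood ratio, so you can count how many observations it takes to move a belief from a given prior to a given posterior.

```python
import math

def observations_needed(prior, target_posterior, likelihood_ratio):
    """Number of independent observations, each favoring H by the given
    likelihood ratio, required to move P(H) from `prior` to `target_posterior`.
    Works in log-odds, where independent evidence simply adds."""
    def log_odds(p):
        return math.log(p / (1 - p))
    gap = log_odds(target_posterior) - log_odds(prior)  # total evidence required
    per_observation = math.log(likelihood_ratio)        # evidence per observation
    return math.ceil(gap / per_observation)

# e.g. from a one-in-a-million prior to 99% confidence, with each observation
# being 4x more likely if H is true than if it is false:
print(observations_needed(1e-6, 0.99, 4))  # 14 observations (~26.6 bits needed, ~2 bits each)
```

No amount of pleading changes the size of the gap; you either collect the required evidence or you don't get the reliable belief.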

Before you try mapping an unseen territory, pour some water into a cup at room temperature and wait until it spontaneously freezes before proceeding. That way you can be sure the general trick—ignoring infinitesimally tiny probabilities of success—is working properly. You might not realize directly that your map is wrong, especially if you never visit New York; but you can see that water doesn't freeze itself.

Lady Nature is famously indifferent to such pleading, and so is Lady Math.

So—to shift back to the social language of Traditional Rationality—don't think you can get away with claiming that it's okay to have arbitrary beliefs about XYZ, because other people have arbitrary beliefs too. If two parties to a contract both behave equally poorly, a human judge may decide to impose penalties on neither. But if two engineers design their engines equally poorly, neither engine will work. One design error cannot excuse another. Even if I'm doing XYZ wrong, it doesn't help you, or exempt you from the rules; it just means we're both screwed.

Leave a Line of Retreat link

As a matter of self-respect you should try to believe the truth no matter how uncomfortable it is, like I said before; but as a matter of human nature, it helps to make a belief less uncomfortable, before you try to evaluate the evidence for it.
The principle behind the technique is simple: As Sun Tzu advises you to do with your enemies, you must do with yourself—leave yourself a line of retreat, so that you will have less trouble retreating. The prospect of losing your job, say, may seem a lot more scary when you can't even bear to think about it, than after you have calculated exactly how long your savings will last, and checked the job market in your area, and otherwise planned out exactly what to do next. Only then will you be ready to fairly assess the probability of keeping your job in the planned layoffs next month. Be a true coward, and plan out your retreat in detail—visualize every step—preferably before you first come to the battlefield.

The hope is that it takes less courage to visualize an uncomfortable state of affairs as a thought experiment, than to consider how likely it is to be true. But then after you do the former, it becomes easier to do the latter.

For a start: You must at least be able to admit to yourself which ideas scare you, and which ideas you are attached to. But this is a substantially less difficult test than fairly counting the evidence for an idea that scares you. Does it help if I say that I have occasion to use this technique myself? A rationalist does not reject all emotion, after all. There are ideas which scare me, yet I still believe to be false. There are ideas to which I know I am attached, yet I still believe to be true. But I still plan my retreats, not because I'm planning to retreat, but because planning my retreat in advance helps me think about the problem without attachment.

But a greater test of self-honesty is to really accept the uncomfortable proposition as a premise, and figure out how you would really deal with it. When we're faced with an uncomfortable idea, our first impulse is naturally to think of all the reasons why it can't possibly be so. And so you will encounter a certain amount of psychological resistance in yourself, if you try to visualize exactly how the world would be, and what you would do about it, if My-Most-Precious-Belief were false, or My-Most-Feared-Belief were true.

I say this, to show that it is a considerable challenge to visualize the way you really would react, to believing the opposite of a tightly held belief.

You shouldn't be afraid to just visualize a world you fear. If that world is already actual, visualizing it won't make it worse; and if it is not actual, visualizing it will do no harm. And remember, as you visualize, that if the scary things you're imagining really are true—which they may not be!—then you would, indeed, want to believe it, and you should visualize that too; not believing wouldn't help you.

Crisis of Faith link

"It ain't a true crisis of faith unless things could just as easily go either way." —Thor Shenkel

And yet skillful scientific specialists, even the major innovators of a field, even in this very day and age, do not apply that skepticism successfully. Nobel laureate Robert Aumann, of Aumann's Agreement Theorem, is an Orthodox Jew: I feel reasonably confident in venturing that Aumann must, at one point or another, have questioned his faith. And yet he did not doubt successfully. We change our minds less often than we think.

This should scare you down to the marrow of your bones. It means you can be a world-class scientist and conversant with Bayesian mathematics and still fail to reject a belief whose absurdity a fresh-eyed ten-year-old could see. It shows the invincible defensive position which a belief can create for itself, if it has long festered in your mind.

What does it take to defeat an error which has built itself a fortress?
But by the time you know it is an error, it is already defeated. The dilemma is not "How can I reject long-held false belief X?" but "How do I know if long-held belief X is false?" Self-honesty is at its most fragile when we're not sure which path is the righteous one. And so the question becomes: How can we create in ourselves a true crisis of faith, that could just as easily go either way?

If you stay with your cached thoughts, if your brain fills in the obvious answer so fast that you can't see originally, you surely will not be able to conduct a crisis of faith.

invidious: (of an action or situation) likely to arouse or incur resentment or anger in others. (of a comparison or distinction) unfairly discriminating; unjust.

excoriate: censure or criticize severely.

tripartite: consisting of three parts.

Better to think of such rules as, "Imagine what a skeptic would say—and then imagine what they would say to your response—and then imagine what else they might say, that would be harder to answer."

Or, "Try to think the thought that hurts the most."

"Put forth the same level of desperate effort that it would take for a theist to reject their religion."

Because, if you aren't trying that hard, then—for all you know—your head could be stuffed full of nonsense as ridiculous as religion. Without a convulsive, wrenching effort to be rational, the kind of effort it would take to throw off a religion—then how dare you believe anything, when Robert Aumann believes in God?

We're talking about a desperate effort to figure out if you should be throwing off the chains, or keeping them. Self-honesty is at its most fragile when we don't know which path we're supposed to take—that's when rationalizations are not obviously sins.

Not every doubt calls for staging an all-out Crisis of Faith. But you should consider it when:

  • A belief has long remained in your mind;
  • It is surrounded by a cloud of known arguments and refutations;
  • You have sunk costs in it (time, money, public declarations);
  • The belief has emotional consequences (note this does not make it wrong);
  • It has gotten mixed up in your personality generally.

But what these warning signs do mark, is a belief that will take more than an ordinary effort to doubt effectively. So that if it were in fact false, you would in fact reject it. And where you cannot doubt effectively, you are blind, because your brain will hold the belief unconditionally. When a retina sends the same signal regardless of the photons entering it, we call that eye blind.

Rest up the previous day, so you're in good mental condition. Allocate some uninterrupted hours. Find somewhere quiet to sit down. Clear your mind of all standard arguments, try to see from scratch. And make a desperate effort to put forth a true doubt that would destroy a false, and only a false, deeply held belief.

The Crisis of Faith is only the critical point and sudden clash of the longer isshoukenmei—the lifelong uncompromising effort to be so incredibly rational that you rise above the level of stupid damn mistakes. It's when you get a chance to use your skills that you've been practicing for so long, all-out against yourself.