to-process

Annotations

I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode.

I suppose evaluating alternatives (decision theory) is rationality. So, as usual with rationality, it shouldn't be left to non-active thinking.

2024-02-24 07:04pm --- > Reacting to evidence / surprises / arguments you haven't heard before; flagging beliefs for examination.

These should probably have PPs to guide me in each case.

2024-02-24 07:08pm --- > For example, someone criticized us for providing inadequate prior info on what statistics we'd gather for the Rationality Minicamp; and I had to visualize the consequences of \[explaining to myself, internally, why I couldn’t have done any better given everything else I had to do\], vs. the possible consequences of \[visualizing how it might've been done better, so as to update my action patterns for next time\], to snap my brain out of defensive mode and into should-we-do-that-differently mode.

Seems like curiosity is a sign of being open, of not being defensive. This is pretty related to awe and wonder as things that limit your ego.

2024-02-24 07:10pm --- > When I'm trying to distinguish between two (or more) hypotheses using a piece of evidence, I visualize the world where hypothesis #1 holds, and try to consider the prior probability I'd have assigned to the evidence in that world, then visualize the world where hypothesis #2 holds; and see if the evidence seems more likely or more specifically predicted in one world than the other.

Think about how likely an effect would be given each hypothesis. Don’t just think “she’s dancing; she must be crazy”; think “if she were crazy, she would be more likely to dance than if she were not.”
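A minimal worked form of that comparison (all numbers here are invented for the dancing example): the evidence shifts belief by the likelihood ratio, not by how vivid it feels.

```latex
\underbrace{\frac{P(H_1 \mid E)}{P(H_2 \mid E)}}_{\text{posterior odds}}
= \underbrace{\frac{P(H_1)}{P(H_2)}}_{\text{prior odds}}
\times
\underbrace{\frac{P(E \mid H_1)}{P(E \mid H_2)}}_{\text{likelihood ratio}}
```

If I guess P(dancing | crazy) = 0.2 and P(dancing | sane) = 0.05, the dancing multiplies the odds of “crazy” by 4; starting from a 1:99 prior, that only gets to 4:99, roughly 4%.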

2024-02-24 07:13pm --- > When I see something odd (something that doesn't fit with what I'd ordinarily expect, given my other beliefs), I successfully notice, promote it to conscious attention and think "I notice that I am confused" or some equivalent thereof.

I wonder how this could be made actionable: how can I raise the probability of noticing confusion and feeding it into some confusion-reducing pipeline?

I imagine making a PP for this, so I can think through the confusion and come to a resolution.

2024-02-24 06:59pm ---

Annotations

When I encounter evidence that’s insufficient to make me “change my mind” (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad driver parameter is higher.)

This seems very emotionally intelligent, and especially difficult to do “frequently.”
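A toy numerical sketch of “update at least a little” (the prior and likelihood ratio below are invented): each weak piece of evidence multiplies the odds by a modest factor, and those factors accumulate.

```python
def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Bayes in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Invented numbers: start 10% confident in "I'm a bad driver".
# Suppose a knocked-off mirror is 1.5x as likely in bad-driver worlds,
# even when the other party is legally at fault.
p = update(0.10, 1.5)
print(f"P(bad driver | mirror incident) = {p:.3f}")  # ~0.143
```

A 10% to 14% move is exactly the “at least a little” update: real, but nowhere near “changing my mind.”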

2024-02-26 07:58am

---

When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna’s brother: Trying to decide whether to move to Silicon Valley and look for a higher paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))

This is a great practical example of considering different perspectives on the same issue.

2024-02-26 08:31am

---

When facing a difficult decision, I check which considerations are consequentialist, i.e. which considerations are actually about future consequences. (Recent example from Eliezer: I reminded myself that the $1400 I'd spent on a mattress was a sunk cost rather than a future consequence, and didn’t change the importance and scope of the future better sleep at stake.)

This gives me some intuition that patterns like this can prevent whole batches of biases: find their roots, then find patterns that reduce them.

This case of evaluating just the consequences is very much like the definition of rationality in decision theory.
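A small sketch of that consequences-only check (all dollar figures invented; the mattress framing just mirrors the example): the comparison deliberately never reads the sunk cost.

```python
# Invented figures for a mattress-style decision.
sunk_cost = 1400           # already spent; identical across every option
new_mattress_cost = 1200   # future cost of trying again
better_sleep_value = 5     # guessed $/night of improved sleep
nights = 365 * 3           # planning horizon

# Consequentialist check: only future consequences enter the comparison.
ev_keep = 0
ev_buy_new = better_sleep_value * nights - new_mattress_cost

# Adding sunk_cost to both options shifts both totals equally,
# so it can never change which option wins.
choice = "buy new" if ev_buy_new > ev_keep else "keep"
print(choice, ev_buy_new)
```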

2024-02-26 10:25am

---

I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical.

This is related to Peirce’s pragmatism, I think.

2024-02-26 10:28am

---

I try to come up with an experimental test, whose possible results would either satisfy me (if it’s an internal argument) or that my friends can agree on (if it’s a group discussion).

First, establish the criteria. Then, do what you can to reach them, and if you reach them, that’s enough.

2024-02-26 10:30am --- > I consciously think about information value when deciding whether to try something new, or investigate something that I'm doubtful about.

This seems vague and hard to define. I suppose this is “see experimental potential” in weighing alternatives.
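One way to make “information value” less vague (the probabilities and payoffs below are invented): estimate how much a trial would improve the expected value of the decision that follows it.

```python
# Invented setup: a new tool either helps (worth 100) or wastes time (-20).
p_works = 0.3
payoff_works, payoff_fails = 100, -20
trial_cost = 5

# Acting on the prior alone: adopt it, or stick with the status quo (0).
ev_adopt_blind = p_works * payoff_works + (1 - p_works) * payoff_fails
ev_without_trial = max(ev_adopt_blind, 0)

# A (perfectly informative) trial lets me adopt only in worlds where it works.
ev_with_trial = p_works * payoff_works + (1 - p_works) * 0

value_of_information = ev_with_trial - ev_without_trial
print(value_of_information > trial_cost)  # worth trying iff VOI beats the cost
```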

2024-02-26 10:32am --- > I notice when something is negatively reinforcing a behavior I want to repeat. (Recent example from Anna: I noticed that every time I hit 'Send' on an email, I was visualizing all the ways the recipient might respond poorly or something else might go wrong, negatively reinforcing the behavior of sending emails. I've (a) stopped doing that (b) installed a habit of smiling each time I hit 'Send' (which provides my brain a jolt of positive reinforcement). This has resulted in strongly reduced procrastination about emails.)

Analysis of habits is essential.

2024-02-26 10:34am ---