Annotations
I notice when my mind is arguing for a side (instead of evaluating which side to choose), and flag this as an error mode.
I suppose evaluation of alternatives (decision theory) is rationality. So, as usual with rationality, it shouldn’t be left to non-active thinking.
These should probably have PPs to guide in each case.
Seems like curiosity is a sign of being open, of not being defensive. This is pretty related to awe and wonder as things that limit your ego.
Think about how likely an effect would be given each hypothesis. Don’t just think “she’s dancing; she must be crazy,” think “if she was crazy, she would be more likely to dance than if she was not.”
I wonder how this could be made actionable—how can I raise the probability of noticing this in the moment, and integrate it into some confusion-reducing pipeline?
I imagine making a PP for this so I can think through the confusion and come to a resolution.
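The “how likely would this effect be given each hypothesis” habit is just a likelihood comparison from Bayes’ rule. A minimal sketch of the dancing example (all probabilities here are made-up illustrations, not from the source):

```python
# Likelihood-based update: compute P(H|E) from a prior and the likelihood
# of the evidence under each hypothesis. All numbers are illustrative.

def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)(1-P(H))]."""
    num = p_e_given_h * prior
    return num / (num + p_e_given_not_h * (1 - prior))

# "She's dancing" as evidence about "she's crazy":
prior_crazy = 0.01        # assumed base rate
p_dance_if_crazy = 0.30   # assumed likelihood under the hypothesis
p_dance_if_not = 0.10     # assumed likelihood otherwise

print(posterior(prior_crazy, p_dance_if_crazy, p_dance_if_not))  # ~0.029
```

Note that even a 3:1 likelihood ratio leaves the posterior small when the prior is small—the evidence shifts the odds without licensing the flat conclusion “she must be crazy.”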
Annotations
When I encounter evidence that’s insufficient to make me “change my mind” (substantially change beliefs/policies), but is still more likely to occur in world X than world Y, I try to update my probabilities at least a little. (Recent example from Anna: Realized I should somewhat update my beliefs about being a good driver after someone else knocked off my side mirror, even though it was legally and probably actually their fault—even so, the accident is still more likely to occur in worlds where my bad driver parameter is higher.)
This seems very emotionally intelligent, and especially difficult to do “frequently.”
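The “update at least a little” habit is cleanest in odds form: posterior odds = prior odds × likelihood ratio. A sketch of the side-mirror example (the prior and likelihood ratio are assumed numbers for illustration):

```python
# Small-evidence update in odds form. All numbers are illustrative.

def update_odds(prior_p: float, likelihood_ratio: float) -> float:
    """Return posterior probability after multiplying prior odds by LR."""
    odds = prior_p / (1 - prior_p)
    post_odds = odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# The accident was mostly their fault, but mishaps are, say, 1.5x as
# likely in worlds where my "bad driver" parameter is high -> a small
# but nonzero update, not a dismissal.
print(update_odds(0.10, 1.5))  # ~0.14
```

A likelihood ratio near 1 produces a correspondingly tiny update, which is exactly the “at least a little” the habit asks for.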
2024-02-26 07:58am
---
When facing a difficult decision, I try to reframe it in a way that will reduce, or at least switch around, the biases that might be influencing it. (Recent example from Anna’s brother: Trying to decide whether to move to Silicon Valley and look for a higher paying programming job, he tried a reframe to avoid the status quo bias: If he was living in Silicon Valley already, would he accept a $70K pay cut to move to Santa Barbara with his college friends? (Answer: No.))
This is a great practical example of considering different perspectives on the same issue.
2024-02-26 08:31am
---
When facing a difficult decision, I check which considerations are consequentialist—which considerations are actually about future consequences. (Recent example from Eliezer: I reminded myself that the $1400 I had spent on a mattress was a sunk cost rather than a future consequence, and didn’t change the importance and scope of the future better sleep at stake.)
This gives me some intuition that we can have patterns such as this that prevent whole batches of biases—find the roots of the biases and find patterns to reduce them.
This case of evaluating just the consequences is very much like the definition of rationality in decision theory.
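The sunk-cost check can be written down directly: only future consequences enter the decision, and the past cost is identical across options, so it drops out. A sketch of the mattress example (all dollar values are made-up illustrations):

```python
# Sunk-cost check: the decision compares only future consequences.
# All values are illustrative, in dollars of subjective value.

def should_buy_new_mattress(future_sleep_value: float,
                            new_mattress_cost: float,
                            sunk_cost: float) -> bool:
    """sunk_cost is deliberately unused: past spending cannot be
    recovered by either choice, so it cancels out of the comparison."""
    _ = sunk_cost
    return future_sleep_value > new_mattress_cost

# Large, daily effect of better sleep vs. the cost of trying again:
print(should_buy_new_mattress(future_sleep_value=3000.0,
                              new_mattress_cost=1000.0,
                              sunk_cost=1400.0))
```

The unused parameter makes the bias-prevention pattern explicit: a consequentialist decision function literally has no slot where a sunk cost could change the answer.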
2024-02-26 10:25am
---
I try to find a concrete prediction that the different beliefs, or different people, definitely disagree about, just to make sure the disagreement is real/empirical.
This is related to Peirce’s pragmatism, I think.
2024-02-26 10:28am
---
I try to come up with an experimental test, whose possible results would either satisfy me (if it’s an internal argument) or that my friends can agree on (if it’s a group discussion).
First, establish the criteria. Then, do what you can to reach them, and if you reach them, that’s enough.
This seems vague and hard to define. I suppose this is “see experimental potential” in weighing alternatives.
Analysis of habits is essential.