Identifying With

Recently I’ve been using the phrase “identify with” a bunch. As in, “I’m learning not to identify with my past self”, or “I try not to identify with my beliefs”, or “the ego is what, when accurately perceived, we stop identifying with”. When a friend of mine asked me what I meant by “identifying with”, I wasn’t immediately sure how to unpack how I was using the words. I said, “Well, when I identify with something it feels like me. It feels like part of my, uh, identity”. He was unsatisfied with this explanation, as was I.

(Prioritizing explaining myself is a heuristic that has served me well. I want meaning.)

Here’s my best one sentence version: When I identify with something, doing an original seeing on it is aversive. I don’t want to take a fresh look at the evidence and see if it really exists in the way I’m thinking of it. In other words, things I identify with seem like part of the territory, not just part of my map. I’ll expand a bit on what this means in terms of identifying with beliefs, emotions, and parts of myself.


Rationalist Conversation Patterns

A few months ago a conversation I had with someone at one of the NYC Rationality meetups prompted me to write an email on the subject of rationalist conversational norms. I kept telling myself I’d distill my points from the email into a more coherent summary, and I haven’t, so instead I’m posting it pretty much without editing:

My take on some conversational patterns (no particular order):

  • Eliezer has listed “God”, “Hitler”, “absolutely certain”, “can’t prove that”, and “by definition” as red flags of irrational thinking.
  • One big positive signal that the person is thinking rationally is visibly pausing and stopping to think—looking inside and figuring out what we actually think/feel takes time, so overly quick responses are suspect, though obviously this one can be faked. It’s not just pausing, but pausing and exhibiting the facial expressions and body language that indicate that you’re trying to figure something out.
  • The rationalist way is to strongly prefer positive statements to normative ones (observations not evaluations).
  • Speaking as though a counterexample refutes a correlation is not considered rational.
  • Saying that “absence of evidence is not evidence of absence” is also not considered rational—there’s a post on exactly this.
  • Complaining, being bitter, blaming people, and labeling people are all outside of rational discourse as I see it.
  • On a related note, there are three types of “clever” stories from Crucial Conversations that are bad signs: Villain, Victim, and Helpless stories. The theme of Victim Stories is that “I am noble and pure and doing everything right, and I’m getting bad results because of outside circumstances. It’s not my fault”. The theme of Villain Stories is “not only do other people do things to make my life worse, they do it on purpose because they’re evil, and they deserve retribution”. Villain and Victim stories are about justifying past behavior. Helpless stories are about justifying future inaction. “I can’t do anything to change or improve my situation because…” All of those are big red flags in my book.
  • Also, saying “I feel x because he/she/they/it” or “I do x because he/she/they/it”, or “This other person made me do x” or “This other person made me feel x”, is a red flag. I think language that takes ownership of our actions, feelings, and choices is very important.
  • Similarly, I’d say the words “can’t”, “must”, “ought”, “should”, “have to”, “unacceptable” etc. are all at least yellow flags of irrational thinking, though I can think of exceptions. My friend Molly and I used to talk about how “shoulds” are okay if there’s a corresponding “if” statement, like: “If you want to get a cheap Burning Man ticket, you should do it today, because they might sell out.”
  • Getting offended or defensive is not conducive to rational discourse, and the easiest way to mitigate the effect of those reactions is to admit them and examine them.
  • Actually, that’s a more general point. Emotions aren’t opposed to rationality—they’re data, and so the preferred way of dealing with them in rationalist discourse is to own them, admit them, put them on the table, try to explain them and question them. So “When you said that thing about red-headed people being less intelligent overall I got sad because I was imagining that maybe such a view would lead to them being treated unfairly, and it’s an important part of my value system to treat all human beings fairly”, or something like that. In my experience, this one is hard because people feel more vulnerable doing it that way.
  • Rationalist conversation is more likely to have an explicit goal than normal conversation. Keeping the conversation on track and not letting the goal drift is valued, and changing the subject to avoid things you have unpleasant reactions to or think might make you look bad without explicitly acknowledging that you’re doing so is frowned upon.

Short list of things that are positive signals of rationality:

  • Asking questions that help clarify the other person’s understanding of the situation. Like, “Wait, so are you saying that X is evidence for Y? If you believe that, does that mean you also believe this other thing?” Basically, assuming there’s a consistent model in the other person’s head and trying to figure it out.
  • Asking people for the evidence that led them to arrive at their beliefs is a good sign too. “What experience did you have that led you to this conclusion?”
  • Just generally seeming curious is a very good sign, and can be obvious.
  • Acknowledging when you have a bias that’s interfering with your ability to think clearly about something, or a motive that’s different from “seeking truth”.
  • Giving probability estimates and confidence intervals.
  • Being aware that what we’ve just heard other people say affects what we think, so requesting that each person form an opinion before either person shares it.
  • Showing surprise when presented with information that doesn’t fit your worldview is a very good sign that you’re trying to keep a consistent model.
  • Coming up with thought experiments to narrow down the source of differences in belief.
  • Prohibiting the use of certain words when we’re getting distracted by them.
  • Always asking “why?” and “how do I anticipate the world behaving because I believe this?”.
  • Using the vocabulary of rationality. This could be a much longer point, but: naming cognitive biases, talking about heuristics, talking about what you anticipate happening, saying that you “intuit” something when that’s all it is, instead of saying that you “know” it. Referring to “motivated cognition” and “belief as attire”, asking whether particular feelings are “truth-tracking”, talking about clusters.
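One of the bullets above mentions giving probability estimates and confidence intervals; a natural companion habit is scoring those estimates once the truth comes out. Here is a minimal sketch of the Brier score (the forecasts below are invented numbers for illustration, not anything from the original list):

```python
# Brier score: mean squared error between stated probabilities and
# actual outcomes (1 = the claim turned out true, 0 = it didn't).
# Lower is better; always saying 50% scores exactly 0.25.

def brier_score(forecasts):
    """forecasts: list of (stated_probability, outcome) pairs."""
    return sum((p - outcome) ** 2 for p, outcome in forecasts) / len(forecasts)

# Made-up track record of four predictions.
forecasts = [(0.9, 1), (0.7, 1), (0.6, 0), (0.2, 0)]
print(round(brier_score(forecasts), 3))  # 0.125
```

A well-calibrated forecaster’s 70% claims come true about 70% of the time; tracking a score like this turns “giving probability estimates” from a verbal habit into something checkable.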

This is what I can think of right now, though I’d love to encourage a collaborative effort to refine it. Feel free to share it.

Brain burning

Here’s a thought experiment:

If you could burn ten claims about the nature of the world into everyone’s brain, so that they truly grokked them, which ones would you choose?  

My answers, stolen from all over the place, in rough order of perceived importance:
  1. Reality is not vague.
  2. The past is sunk.
  3. There is no failure, only feedback.
  4. The human mind is composed of sub-selves.
  5. To communicate effectively with sub-selves, be curious about their deepest desires.
  6. Pain is an attention signal from a sub-self.
  7. When sub-selves compete for attention, suffering arises.
  8. Curiosity eliminates suffering.
  9. We exercise our agency by directing our attention.
  10. All judgments (including self-judgments) are alienated expressions of our unmet needs.
What do you think of my ten?  I want to hear yours because I like the idea of compiling and sharing the best of the basic principles that guide the actions of the people whose accomplishments and thinking I respect.

Sticky claims and surprise meters

I’ve been rereading the early posts of Mysterious Answers to Mysterious Questions, and will now, as an exercise for myself, try to summarize my best understanding of the relevant concepts and how I relate to them using my own jargon:

I will describe two natural clusters of mental representations, “sticky claims” and “surprise meters”, and their respective underlying meanings.

Sticky Claims

“Sticky claims” aren’t by default truth-tracking: they don’t get updated whenever new data comes in. Rather, they exercise their (limited) agency to preserve their existence by directing my focus away from counter-evidence. The standard function of a “sticky claim” in my mind is to act as a self-fulfilling prophecy.

I can refer to my “sticky claims” verbally, and they have attached sensory representations. “Sticky claims” I’ve uncovered in my brain have ranged from “It’s dangerous to have money because someone will take it away from me” to “I can’t find the cheese anywhere in the refrigerator” to “I love my brother”. When these are active in my brain I will be disinclined to acquire money, cast my eyes on every shelf in the fridge, and dwell on the positive qualities of a world where my brother doesn’t exist.

I’m sure you can think of “sticky claims” of your own. Think of one now.

Surprise Meters

Together, my “surprise meters” imply my best working model of reality. The rough function of surprise is to alert me when I encounter input that causes an update: data that render an aspect of my model of reality improbable. I do not shy away from disproving them.

I can refer to my “surprise meters” in words, and they have attached models underlying them. Typical examples of “surprise meters” are: “I have very little money in my savings account,” “I recently opened the refrigerator and stared inside without moving my eyes much for about 30 seconds, thinking of cheese,” and “I have a strong intuition that loving my brother is virtuous”, none of which seeks to prevent me from invalidating it by constraining my actions.

Steps for Turning Sticky Claims into Surprise Meters

“Sticky claims” and “surprise meters” behave differently, and I find distinguishing the two instrumentally useful. More specifically, the ability to reliably access my surprise meters enables me to update my map on demand towards a consistent picture of the world I exist in.

When I notice myself shying away from reality, cultivating my curiosity allows me to seek truth. Some go-to questions for when I feel stuck are:

  1. What is the probability the claim is true? What bets would I make about it? (20% chance the cheese has been eaten or moved.  80% chance I haven’t checked the shelf or drawer where it is.)
  2. What is my evidence for the claim? (It’s not on the shelf I’m looking at right now.  I’ve been looking in the fridge for 30 seconds and I haven’t seen it.)
  3. What is my evidence against the claim? (I remember putting the cheese in the fridge when I bought it.  I cannot recall my roommates ever having finished or moved my cheese without informing me.  I haven’t checked every shelf and drawer.)
  4. Imagine, in as much sensory detail as possible, that I will open an envelope with the truth about the claim inside. How do my emotions respond to the prospect of finding out the real answer?  (Imagining opening the envelope containing a note informing me whether the cheese is in there, I start to doubt my claim that it isn’t. I feel myself not wanting to look–I’m afraid to be wrong.)

Try those questions out on the claim you picked before and see how your perspective shifts.
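The estimates in question 1 can also be checked for coherence with an explicit Bayesian update. Here is a minimal sketch for the cheese example; the 20% prior matches the figure above, but the likelihoods are my own made-up assumptions:

```python
# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E).
# All numbers here are illustrative guesses, not measurements.

def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# H: "the cheese has been eaten or moved" (20% prior, as in question 1).
# E: "I stared into the fridge for 30 seconds and didn't see it."
prior = 0.20
p_not_seen_if_gone = 1.0   # if it's gone, I certainly won't see it
p_not_seen_if_there = 0.5  # a lazy scan misses present cheese half the time

posterior = bayes_update(prior, p_not_seen_if_gone, p_not_seen_if_there)
print(round(posterior, 3))  # 0.333
```

Not seeing the cheese turns out to be only weak evidence that it’s gone: the belief moves from 20% to about 33%, not to certainty, which is exactly the kind of measured update a surprise meter should make.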

If following the steps described above does not bring about my desired level of clarity (as evidenced by my contentment), I can tease out the models underlying my “surprise meters” by employing longer-form techniques.

I strive not only to recognize that “sticky claims” and “surprise meters” exist as categories and are distinguishable, but also to develop the habit of tagging my verbal statements (spoken either out loud or in my head) as one or the other in real-time.

Less Wrong Sequences decks

I read Eliezer’s Sequences back when he was posting them to Overcoming Bias, and both enjoyed them and thought they were full of lots of good rules about how to think. Just as with communication advice, the hardest time for me to remember rationality advice has been when I most needed it. So far I’ve made Anki decks of the first two of Eliezer’s core Sequences, and that does seem to have helped the material sink in.

Mysterious Answers to Mysterious Questions

One piece of wisdom I’ve gotten from the self-help literature is that by default, the function of a belief in the human mind is to be a self-fulfilling prophecy. We tend to look for confirming evidence, reject (or fail to notice) counter-evidence, and take actions which confirm our world view. There are also truth-tracking beliefs. These are beliefs that we’d bet on, and that we update as new information comes in. The posts in Mysterious Answers to Mysterious Questions helped classify and categorize a bunch of important rules for filling my mind with the latter types of belief, not the former. My best one-sentence summary of the sequence is: “Don’t ask what to believe, ask what to anticipate.”

Here’s the deck:

mysteriousanswers.anki

A Human’s Guide to Words

I used to spend a lot of time nitpicking and arguing about words. I didn’t realize that I was doing so at the time–I would have said I was arguing about ideas–and that’s why I kept doing it. Having learned tons of Anki cards about the ways humans slip into doing this without noticing, I’m now aware of many of the failure modes. I often find myself saying, “wait–it seems like we’re arguing about words” and changing course, which I take to be a good sign.

The deck I made for myself probably goes into more detail than most people would want, so I recommend suspending any cards that don’t seem interesting or relevant.

The deck:

wordswrong.anki

Crucial Conversations deck

The third deck of communication cards I made was from Crucial Conversations. Recommended to me by Anna Salamon, this book gave me a lot of insight into the inner workings of the stories we tell ourselves that create our emotions.

The basic model is that the purpose of dialogue is to get all the relevant information into the shared “pool of meaning”, and then to decide what to do with it. When people don’t feel safe, they become either silent, withholding meaning from the pool, or violent, trying to force meaning into the pool. The entrance condition of dialogue is Mutual Purpose–having a shared reason to talk about the thing in the first place. The continuation condition of dialogue is Mutual Respect.

The book also describes what it calls our “Path to Action”, where we first see or hear facts about the world, then tell ourselves stories about them, then feel emotions based on the stories, then act based on the emotions. We want our emotions to both be based on true stories and to lead us in useful directions. If any of our emotions lacks one or both of these qualities, we can change it by figuring out the facts that generated the relevant story and choosing to tell a different story. 

One of the key lessons for me was learning to recognize three types of stories the book calls “clever stories”: the Victim Story (“it’s not my fault”), the Villain Story (“it’s all your fault”), and the Helpless Story (“there’s nothing else I can do”). We make up these stories because they conveniently excuse us from responsibility and help us justify continuing to act in the same way that created the problem instead of modifying our behavior. We tell the stories when we’ve done something we feel bad about and don’t want to admit it–and they’re almost never true. I started tagging both my own and others’ stories with these labels when they applied, and I’ve found it quite valuable.

A lot of the rest of the advice in the book is so simple it’s embarrassing how much mileage I’ve gotten from using it. For example, if someone misunderstands something you said, Contrasting (“I didn’t mean X, I did mean Y”) can be very useful. Don’t let yourself get away with saying stuff like “Someone has to tell her how she’s acting, even if it means being cruel”. Instead, present your brain with a more complex question: “How can I tell her how she’s acting AND be kind?”. If you find that you’re becoming either silent or violent in conversation, return to your motive. Ask yourself, “What do I really want to get out of this conversation?” and “How would someone who really wanted those results behave?”.

Here’s the deck:

crucialconversations.anki