Rationalist Conversation Patterns

A few months ago, a conversation I had with someone at one of the NYC Rationality meetups prompted me to write an email on the subject of rationalist conversational norms. I kept telling myself I’d distill my points from the email into a more coherent summary, but I haven’t, so instead I’m posting it pretty much without editing:

My take on some conversational patterns (no particular order):

  • Eliezer has listed “God”, “Hitler”, “absolutely certain”, “can’t prove that”, and “by definition” as red flags of irrational thinking.
  • One big positive signal that the person is thinking rationally is visibly pausing and stopping to think—looking inside and figuring out what we actually think/feel takes time, so responses that come too quickly are suspect, though obviously this one can be faked. It’s not just pausing, but pausing and exhibiting the facial expressions and body language that indicate you’re trying to figure something out.
  • The rationalist way is to strongly prefer positive statements to normative ones (observations not evaluations).
  • Speaking as though a counterexample refutes a correlation is not considered rational.
  • Saying that “absence of evidence is not evidence of absence” is also not considered rational—there’s a post on exactly this.
  • Complaining, being bitter, blaming people, and labeling people are all outside of rational discourse as I see it.
  • On a related note, there are three types of “clever” stories from Crucial Conversations that are bad signs: Villain, Victim, and Helpless stories. The theme of Victim Stories is that “I am noble and pure and doing everything right, and I’m getting bad results because of outside circumstances. It’s not my fault”. The theme of Villain Stories is “not only do other people do things to make my life worse, they do it on purpose because they’re evil, and they deserve retribution”. Villain and Victim stories are about justifying past behavior. Helpless stories are about justifying future inaction. “I can’t do anything to change or improve my situation because…” All of those are big red flags in my book.
  • Also, saying “I feel x because he/she/they/it”, “I do x because he/she/they/it”, “This other person made me do x”, or “This other person made me feel x” is a red flag. I think language that takes ownership of our actions, feelings, and choices is very important.
  • Similarly, I’d say the words “can’t”, “must”, “ought”, “should”, “have to”, “unacceptable” etc. are all at least yellow flags of irrational thinking, though I can think of exceptions. My friend Molly and I used to talk about how “shoulds” are okay if there’s a corresponding “if” statement, like: “If you want to get a cheap Burning Man ticket, you should do it today, because they might sell out.”
  • Getting offended or defensive is not conducive to rational discourse, and the easiest way to mitigate the effect of those reactions is to admit them and examine them.
  • Actually, that’s a more general point. Emotions aren’t opposed to rationality—they’re data, and so the preferred way of dealing with them in rationalist discourse is to own them, admit them, put them on the table, try to explain them and question them. So “When you said that thing about red-headed people being less intelligent overall I got sad because I was imagining that maybe such a view would lead to them being treated unfairly, and it’s an important part of my value system to treat all human beings fairly”, or something like that. In my experience, this one is hard because people feel more vulnerable doing it that way.
  • Rationalist conversation is more likely to have an explicit goal than normal conversation. Keeping the conversation on track and not letting the goal drift is valued, and changing the subject to avoid things you have unpleasant reactions to or think might make you look bad without explicitly acknowledging that you’re doing so is frowned upon.

Short list of things that are positive signals of rationality:

  • Asking questions that help clarify the other person’s understanding of the situation. Like, “Wait, so are you saying that X is evidence for Y? If you believe that, does that mean you also believe this other thing?” Basically, assuming there’s a consistent model in the other person’s head and trying to figure it out.
  • Asking people for the evidence that led them to arrive at their beliefs is a good sign too. “What experience did you have that led you to this conclusion?”
  • Just generally seeming curious is a very good sign, and genuine curiosity tends to be obvious.
  • Acknowledging when you have a bias that’s interfering with your ability to think clearly about something, or a motive that’s different from “seeking truth”.
  • Giving probability estimates and confidence intervals.
  • Being aware that what we’ve just heard other people say affects what we think, and so requesting that each person form an opinion before anyone shares theirs.
  • Showing surprise when presented with information that doesn’t fit your worldview is a very good sign that you’re trying to keep a consistent model.
  • Coming up with thought experiments to narrow down the source of differences in belief.
  • Prohibiting the use of certain words when we’re getting distracted by them.
  • Always asking “why?” and “how do I anticipate the world behaving because I believe this?”.
  • Using the vocabulary of rationality. This could be a much longer point, but naming cognitive biases, talking about heuristics, talking about what you anticipate happening, saying that you “intuit” something when it’s that, instead of saying that you “know” something. Referring to “motivated cognition”, “belief as attire”, asking whether particular feelings are “truth-tracking”, talking about clusters.

This is what I can think of right now, though I’d love to encourage a collaborative effort to refine it. Feel free to share it.
