
The imagination gap, part 1

Mon 10 Dec 2018 by mskala

A deficit in hypothetical cognition

In The World As If, Sarah Perry gives "an account of how magical thinking made us modern." She discusses how to define "magical thinking" and suggests that the diverse things to which people apply that label form "a collection of stigmatized examples of a more general, and generally useful, cognitive capacity." Namely, the capacity to entertain false, "not expected to be proven," or otherwise not exactly true propositions as if they were true.

Although magical thinking may often be called a behaviour of children or of those in primitive cultures, what Perry calls the "as if" mode of thought (I want to also include "what if") is in no way primitive. The view that magical thinking is for children and the uneducated can and should be inverted: mastery of hypothetical "as if" cognition is necessary for functioning as an adult in a literate technological society, and characteristic of the most sophisticated thinking human beings ever do.

But what if not everybody can do it? What if there are some persons who are not able to comprehend hypothetical "what if?" concepts in the way that social functioning requires? Such people evidently did exist somewhere at some time. Perry describes anthropological studies of primitive societies, last century, in which illiterate older adults seemed unable, and sometimes oddly unwilling to try, to comprehend questions about hypothetical things without a directly-experienced "evidentiary basis" for answering each question. Younger persons, who had had more contact with Western civilization, had no trouble with the questions. She says, and I think she's substantially right, that "as if" thinking, what I call the capacity for cognition of hypothetical things, is part of what makes people modern and able to function in a modern society.

In this three-part series I want to go further and suggest that difficulty with hypothetical cognition is not limited to primitive societies. Magical thinking made us modern, but it did not make all of us modern, and even here on the Net, in the midst of the most highly literate and intellectually advanced culture on the planet, there are still very many people who have a serious cognitive deficit regarding all processing of hypothetical concepts. What would the consequences of a widespread deficit in this human faculty look like? I submit that we frequently see phenomena that could be exactly those consequences, and that they are hard to explain in other ways.

My suggestion may be connected to that made by David Chapman in his discussion of the bridge to meta-rationality. That article postulates a series of stages of adult mental development, described in detail by Robert Kegan. The basic function of universities is, or once was, to teach young adults how to grow into the development stage characterized by rational thought ("Stage 4"), which stage of development is necessary for participating fully in modern society.

Postmodern thought developed within academia as a way to help human beings grow to the next level after that, Stage 5, and (according to Chapman) in its original form postmodernism actually worked well. The problem is, you have to learn rationality first. You have to know the rules before you break them. As postmodernism became orthodox, universities started attempting to teach it (including its built-in rejection of Stage 4 concepts) to young adults in Stage 3 who had not yet learned rationality, and it prevented them from being able to grow beyond Stage 3. After a couple of academic generations, the teachers themselves were people who'd never reached the rational stage; postmodernism now serves only to trap people in Stage 3, feverishly dismantling the institutions of our Stage 4 society without understanding why.

Although I'm not entirely convinced of Chapman's framework, and I still prefer to fit things into Timothy Leary's ladder of "circuits" instead (which has interesting parallels to Kegan's "stages"), the Chapman article is noteworthy in pointing at a plausible explanation for why hypothetical thinking is dying. Hypothetical thought may be one of the basic capabilities of minds that have reached Kegan's Stage 4, and the coordinated effort to stunt human beings at Stage 3 would then necessarily include a crusade against hypotheticals.

A taste for contradiction

Let's think about a simple proof of a theorem in mathematics. Maybe not everyone in the society I am part of is expected to read and write such proofs fluently, but the basic act of coming to a logical conclusion from specific, stated facts is something pretty important that we all ought to be able to handle, and math proofs are the purest expression of that act.

The square root of two is irrational. There is no fraction in whole numbers that, when multiplied by itself, gives the product 2. The fraction 99/70 is close, and 239/169 is closer, but no fraction in whole numbers is exactly right. Here's a fairly typical proof of that fact, such as might be found in a high school math book.
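To see just how close those two fractions come without being exact, here is a small illustrative check (my own sketch, not part of the argument) using Python's exact rational arithmetic:

```python
from fractions import Fraction

# Square the two candidate fractions exactly and measure how far
# each one's square falls from 2.  Close, but never exactly 2.
for frac in (Fraction(99, 70), Fraction(239, 169)):
    error = frac * frac - 2
    print(frac, "squared misses 2 by", error, "=", float(error))
```

Run it and you find that 99/70 squared overshoots 2 by exactly 1/4900, and 239/169 squared undershoots by exactly 1/28561. (These fractions happen to be continued-fraction convergents of the square root of two, which is why the miss is so small, but nothing in the proof below depends on that.)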

Suppose the square root of two is rational. That is, there are some x, a, and b, with x positive rational and a and b positive integers having no common factors, such that x²=2 (meaning that x is the square root of 2), and x=a/b.

Since x=a/b, we have x²=2=a²/b², and therefore a²=2b². Then a² is even, and so a is even. Let c=a/2, necessarily an integer because a is even.

Then 2b²=4c² and so b²=2c² and b² is also even. Therefore b is even.

But with a and b both even, they have the common factor 2, contradicting their definition. Therefore this x, a, and b cannot exist, and the square root of two is irrational.
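The theorem can also be illustrated, though certainly not proven, by a finite computational search. Here is a sketch in Python (the function name is my own invention):

```python
import math

# For each denominator b up to a bound, ask whether 2*b^2 is a
# perfect square a^2; if it were, then (a/b)^2 would equal 2.
# The proof above says this can never happen, for any b.
def denominator_making_sqrt2_exact(max_b):
    for b in range(1, max_b + 1):
        a = math.isqrt(2 * b * b)      # integer floor of sqrt(2*b^2)
        if a * a == 2 * b * b:
            return b                   # a counterexample to the theorem
    return None                        # no denominator works

print(denominator_making_sqrt2_exact(100000))  # prints None
```

Of course no finite search can establish the theorem; that is exactly what the hypothetical "suppose" argument above accomplishes and the computer cannot.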

Some people find that proof easier to understand than others do, and of course part of the reason is simply the familiarity of words like "rational" and "integer" and of algebraic operations like squaring both sides of an equation. There are a few necessary background facts one must also know and they may not be obvious, such as the fact that an integer is an even number if and only if its square is also an even number. Those things are more familiar to some readers than others and if they're not familiar it's understandable that the proof could be confusing. It's easy enough to teach most people these kinds of things.

But there is another reason for this proof to be hard, and it poses more difficulty for math teachers. The proof above is what's called a proof by contradiction, and many students have a lot of trouble with all such proofs. The proof starts off telling the reader to "suppose" a statement that is not true, about numbers which do not exist. Someone can easily balk right at the first sentence, saying "But you told me the square root of two was not rational!" Indeed we did; that's the point of the proof; but the proof is based on a "what if?" examination of a false statement. If the statement were true, then its consequences would be contradictory, and therefore it is not true. It's hard to make that logic palatable to some students.

In order to understand this proof, the reader must needs entertain a false proposition ("root-two is rational") temporarily, and evaluate its consequences as if it were true. And this game of supposing is played constantly, many times on every page, in formal mathematical writing. The language of mathematics is all "Suppose..." and "Let..." and "If...".

Mathematics makes hypotheticality highly explicit, but other intellectual disciplines also deal frequently in abstractions, hypothetical concepts, and non-existent things. Law is another good source of examples; so is literature. I'll give some examples from each. The ability to handle hypothetical entities and ask and answer "what if?" questions is central to serious thought.

What if someone couldn't do "what if?"

An organic deficit in hypothetical cognition is plausible

Most of us can see; but some of us are blind. More subtly, some of us can see a full range of colours; but a significant minority of human beings can't see the difference between red and green. Such persons are still human. It's not usually a stretch to accept as fact that not everyone has the complete set of perceptual faculties we associate with the standard human senses, on the level of bare physical phenomena. So if one can be "blind" to all vision, or just to one part of colour perception, why not "blind" to hypotheticals?

Abstract cognition may or may not be different from sensation of physical phenomena. The general ability to think about things that are not true, or that are not known to be true, or are not true yet, or conditionally true, or similar, but that have consequences if or when true, and to make decisions on the basis of those consequences, is fundamental to cognition as we know it. If you can't ask and answer "what if?" questions, then you don't have a normally functioning human mind in the context of literate and industrial society, and one could maybe argue, no mind at all. It may seem hard to believe, then, that a significant fraction of the population does have a real deficit in this faculty.

But we already accept that there are other extremely important and purely abstract competences which a significant number of people lack. I've written before about a deficit that I think may exist in comprehension of the topics called "foundations" in mathematics. That's speculation on my part, but some abstract cognitive deficits are not speculative.

Consider "theory of mind," which is used to describe the ability to recognize that minds other than your own (that is, other people's) really exist as distinct from yours. Other people know things you don't, or don't know things you do. What if you didn't know that? There was probably a time when, as a young child, you didn't.

One classic demonstration of "theory of mind" involves the story of Sally and Anne. Sally puts her marble in a box, then leaves the room. Anne takes the marble out of the box and puts it in a basket. Sally comes back. Where will Sally look for the marble? Children who have reached a certain level of normal development will guess that Sally looks in the box because she doesn't know that Anne moved the marble. Before that level, they can't answer accurately and may say Sally looks in the basket, unable to deal with the idea that Sally doesn't know the same things Anne and the listener to the story know. Note that the whole story is hypothetical, as are all stories in general; Sally and Anne don't really exist. But some people who study "theory of mind" will act out the story with real people when doing experiments on this subject matter, partly to control for any possible difficulty in understanding of hypothetical situations.

There's neurological evidence that human beings have what in computer science would be called dedicated hardware ("mirror neurons") for modelling others' states of mind. It's understandable that this general ability to predict the actions of persons other than ourselves would be really important. We need it for social interaction; we're built to do it; and we are, most of us, good at it. Something similar specifically related to emotions rather than to factual content tends to be called "empathy," and is also regarded as an important fundamental competence of human minds.

And yet, some people do at least partially lack such abilities, or find them disproportionately difficult to exercise. Deficits in theory of mind, in otherwise adult people, are not even rare. The inability to accurately and consistently build mental models of what's going on in others' minds may be typical of autism, for instance, including high-functioning autism. There is ongoing controversy about the precise role of theory of mind deficit in autism in particular, but that does not negate my point about the mere existence of theory of mind deficit. Some people do at least partially lack this fundamental human competence, and there are enough such people that the rest of us need to be aware of their existence.

General comprehension of hypotheticals is not a whole lot different from having a theory of mind. Indeed, theory of mind could be just one important special case of the general comprehension of hypotheticals. If some people can't answer "Where will Sally look?", then it seems reasonable that there might be people who can't handle "What if?" questions in general.

There is a word "aphantasia" used in medicine and psychology to describe the condition of being unable to see things that aren't there; that is, the inability to visualize mental images in the absence of direct visual stimuli. People with aphantasia, who have been estimated to number as many as 2.7% of the population, have trouble with any tasks that involve "seeing" in the mind's eye. As well as having more important consequences, it means they are basically unable to answer IQ-test questions like the old cube-folding puzzle.

[IQ test question involving an unfolded cube]

Aphantasia is sort of like the mental deficit I think some people may have toward any interaction with and use of hypothetical concepts. However, I'm interested primarily in a deficit of abstract hypotheticality. Visualization can be useful in understanding abstract concepts and many people use it a lot for that, so a deficit in visual imagination could certainly cause trouble in understanding things that don't need to be visual. But I don't know what a picture of the difference between rational and irrational numbers would look like, and I don't think you need to invent such a picture to understand the proof that root-two is irrational. Someone can still have trouble with that proof, and with proofs by contradiction in general, and that trouble could come from an inability to consider hypotheticals. For this reason, I'm not going to call the deficit I hypothesize "aphantasia," though there's clearly some similarity.

Is citation needed?

Julia Galef: Ahhhh the eternal choice between

- discussing a phenomenon without citing any examples, and having ppl object that it's not real, or

- discussing a phenomenon with examples, and having ppl nitpick the details of the particular examples you chose [source]

I have written before about the harm done to intellectual discourse by Wikipedia and its practice of tolerating "citation needed" claims. It's really disheartening to write carefully about a general abstract concept, giving multiple examples to support the claim that a general pattern exists, and then get responses that are specific to the examples and do not address or recognize the overall pattern. It may be worse to leave out the examples in order to avoid their acting as distractions from the big picture, and then hear nothing but "citation needed" and demands for examples. Galef's quote above summarizes the dilemma well. With or without examples, there doesn't seem to be much hope of ever writing about a general pattern and expecting readers really to engage with it in its general form.

I noticed the pattern years ago that I couldn't write about a general concept here in my Web log without most of the comments being about examples, either demanding examples when I gave none, or discussing specific details of examples I did give, in a way that excluded applicability of the discussion to the intended general concepts. There seemed to be nothing I could ever write that would convey the idea that I was really writing about exactly what I said I was writing about: general, large-scale patterns, for which examples absolutely exist but where the patterns have existence beyond just the examples I happened to cite if any. After years of struggling with this issue, I pretty much gave up, and drastically cut my output of Web log articles. It just wasn't worth it. I'd spend the time and effort that it takes to write a serious article about something important, and all I could ever expect in return was nonsense about examples. But I have continued to wish that I could write about important generalities and have my comments be read and understood at the general level, and I have still made further attempts just once in a while. So far, it has never worked well.

I have previously ascribed the unwarranted insistence on examples to some combination of theory-of-argument issues and unconscious refusal to confront difficult general ideas. If we deeply believe that argument is not about searching for truth but about who is right and who is wrong, then anything abstract presented without examples (and, more specifically, examples of someone saying it, that is, citations) is not a well formed argument. Someone who fails to give cited examples is not arguing well, and the believer who redirects the argument onto specific cited examples is doing that person a favour.

On the other hand, if some highly abstract ideas are literally dangerous, then the self-preserving mind will automatically reject consideration of those abstractions, and will again change the subject to specific examples as a matter of safety. Demanding examples and focusing on the examples instead of the general principle could also be a deliberate and malicious rhetorical tactic, but we don't need to assume bad faith to explain it. Theory of argument, and protection from genuinely damaging abstract concepts, suffice.

But what if it's really about a deficit in "what if" thinking? To confront a general pattern as something that exists independently of its examples, is to deal with an hypothetical thing. Someone who is incapable of thinking about hypothetical things and is told about a general pattern without examples, will be unable to comprehend that they are being told about something that is a thing at all. That person may have learned to demand examples in such cases as a coping mechanism. Then if given examples, it is natural for such a person to think only about the examples and not any pattern beyond them. Being blind to hypotheticals may mean being blind to any difference between patterns and examples of patterns. If a significant number of the persons who choose to comment on my Web log have a serious cognitive deficit related to the meaningfulness of hypothetical things, it could be another explanation for my frustrating experience of being allowed to expect answers to everything except the declared subject matter of my writings.

Go to Part 2

If you like this article, I hope you will share it on your social media.

3 comments

*
I don't really agree with your analysis of why people ask for examples. Even if you are able to consider hypothetical scenarios, I think doing so is often a bad idea if the scenario is actually wrong. I think a lot of interactions when people ask for examples look something like this:

Alfred: Hey, so you know how Italian-Americans have a disproportionate influence in politics? Have you ever considered whether they may have disproportionate influence more generally, e.g. in finance or art?

Beth: What kind of political influence are you talking about? Can you give some examples?

Alfred: That's not really the point, but I'm thinking of things like Italian-American voters winning the election for Trump, or Italian-American newspaper columnists prompting Brexit, or the Italian-American International plotting to start the invasion of Ukraine.

Beth: Wait, as far as I can tell Italian-Americans didn't vote overwhelmingly for Trump, and in any case, how many such voters are there?

Carol: Wait, I don't think there were many pro-Brexit Italian-American columnists?

David: Wait, what's this International?

Alfred: Oh, would you all please stop nit-picking! If you like, you can just assume the political influence as a hypothetical; in any case it's a very small fraction of the *total* Italian-American influence that I'm about to describe.

I think what probably happened here is that Alfred first misread some statistics about the Trump election. Then he carries in the back of his mind the idea of the surprisingly strong Italian-American influence, and one day he happens to read a pro-Brexit column by an Italian-American, and aha! this explains the Brexit vote as well. Then, the information about the lead up to the war in Ukraine is all very confusing, but if you already know about the pervasive influence of Italian-Americans, then you can begin to connect the dots...

The point of ideas and abstract concepts is to help you think more clearly; you want short mental names for ideas that you use often. In this sense, learning about the idea of an Italian-American conspiracy is harmful to you: not because it opens a portal to Lovecraftian madness, but because from then on each time you read a news story you will remember the conspiracy and waste a little time considering whether this is the underlying explanation. (Hopefully you'll conclude it isn't!) Scott Alexander has written a blog post on this topic [http://slatestarcodex.com/2014/03/15/can-it-be-wrong-to-crystallize-patterns/].

You mention mathematics, and there is an often-repeated type of anecdote which is quite relevant here:

> The story goes that once upon a time a student wrote his thesis on Hölder-continuous maps with α>1, since he had only seen the case α≤1 addressed in his books. The student proved many wonderful theorems about these maps and was very excited for his defense. At his thesis defense, one of the examiners asked him to provide a nontrivial example of such a map. The student was flustered. As it turns out, all such maps are constant.

In this case the work of the student was clearly wasted. In a more common problematic case, someone proves a theorem which is true and whose antecedents *can* be nontrivially satisfied, but it turns out that the theorem was not so useful after all, because the situations it describes don't seem to come up very often in practice, and when they do there are more powerful methods to deal with them. In that case, the work of the mathematician was also largely wasted. So if someone publishes a long and fearsome paper, and you are considering whether to spend the time to understand its definitions, you probably want to know how many examples it applies to. You might be willing to learn Inter-universal Teichmüller theory, *if* it actually proves the abc-conjecture.
youzicha - 2018-12-12 10:41
*
Thanks for the thoughts. No amount of skill in hypothetical thinking is enough to protect people from simply being wrong; and as you describe, there are times when looking for examples, and asking for examples, are worthwhile pursuits. But that doesn't make all example requests legitimate, and what I've observed on Wikipedia is that they are far more often used tactically than in such a way as to actually bring the discussion closer to truth.
Matt - 2018-12-12 11:23
*
Maybe you are right, and there is a phenomenon where people ask for examples in an unproductive way, but I have not really noticed it myself, which I guess means that I'm not in the target audience for the article.

Somewhat ironic, given the topic, but I feel maybe what you would have to do to make me interested in your thesis is give more convincing examples. :P

I guess this is probably largely a matter of personality. I generally find it quite annoying when people state incorrect or suspicious-looking facts, and then go on to build value judgements on top of them or generalize rules from them. I remember my parents commenting on this when I was quite young: that I would hate it when people in a discussion made a factual claim which they clearly didn't have enough evidence to state with certainty. So I hope you don't find this rude, but when Julia Galef complains about people nitpicking her examples, or you complain about Wikipedia editors asking for citations, my instinctive reaction (sorry) is to think "well, was that really nitpicking, or were her examples perhaps invalid to start with". In general, in the world at large, I feel the problem is that people give too few citations, not too many. Without any concrete example, my first instinctive reaction is to doubt if there really is a phenomenon that needs explaining in the first place.

Anyway, again, that probably just means that I'm not in the target audience for the essay.

Best wishes.
youzicha - 2018-12-12 18:49

