The garden of cosmic horror and delight
Mon 6 Aug 2018 by mskala
Tags used: academic, philosophy, math, dares

I'm very interested in cognitive deficits: tasks it may seem human brains ought to be able to perform, but that at least some brains cannot. This time around I'd like to say a few words about mathematical foundations and the ability to understand them. The fact is that there are some questions - and they're very simple ones - that neither a human brain nor anything that functions like a human brain can answer. And understanding that fact is itself a problem that may be challenging for at least some brains.
This posting is based on part of a sequence of items I posted on Twitter in May 2017; I'm re-working it into a Web log entry because Twitter is untrustworthy as an archive or a medium for serious discussion, and I want to refer to it in something else I'm working on.
Mathematical foundations
My preferred path into this topic is through the work of Turing, but those with a more pure-mathematics bent would probably cite Gödel or go even further back in history. There's a good historical summary in this lecture by Chaitin from 2000. What it comes down to is that at the root of how mathematics works there's a lot that we don't know, that we have proven we cannot know, or that we know but that doesn't make sense; and similar issues have spread from mathematics to infect other fields that depend on it, such as physics and computer science.
As a toy example, probably oversimplified: most of us know at least a little bit about arithmetic, which is the mathematics of numbers. Some, though not all, of us also know that mathematics is not only about numbers but also involves other kinds of things. We usually start out in arithmetic with the integers, whole numbers like 1, 2, 3, and 4. We also mostly think we know about real numbers, like 1.414, which are kind of in between the integers.
Already there are some problems. Ask a random person on the street "What is a real number, really?" Try answering that one for yourself. Is it an easy question?
Not many people can give any kind of sensible answer to the question of what reals really are. It's not the kind of question most of us are accustomed to thinking about. The best answer for the large majority of the population is along the lines of "Don't worry about it, but if you really need to know, ask at the library for a book on the subject." But at least there is a fairly well-agreed answer to this question, that you can find consistently in all the books you might be handed in answer to that request. (Except the one by Conway, but that's another story.) The definition of real numbers is used by the people with a reason to care, who are a small minority of pure mathematicians, and it is known, but mostly ignored, by a somewhat larger group of people who've taken certain math classes and are still a tiny minority of the general population. What real numbers really are is complicated and I won't describe the definition here, but already there's a clue to what's going on: the concept of a real number is something we all use, it appears simple and "intuitive" (whatever that means), but it has a hidden part at which we are encouraged not to look closely, and it can at least be claimed that nearly all the time it just doesn't matter what numbers really are anyway. "Pay no attention to the man behind the curtain."
How many real numbers are there?
Any eight-year-old can answer that: "Infinity!" But not many will be able to coherently tell you what "infinity" actually is, and very few adults can give an answer that's any better than the eight-year-old's. Adults often try to answer with nonsense along the lines of "It has to be something."
How many integers are there? Infinity again. Does that mean there are equal numbers of integers and reals?
Well, no. It certainly feels like there ought to be more reals than there are integers, and if you dig into the mathematics of this question, it turns out that this intuitive feeling is correct: there are a lot more reals than there are integers, and it's relatively easy to explain why (although I still won't, here). There are at least two different sizes of "infinity," a sort of big infinity for reals and a smaller infinity for integers. There are also others, bigger yet.
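(If you do want a taste of why, here's a minimal sketch of Cantor's diagonal idea as a fragment of Python; the function name and the toy list are my own illustration, nothing standard. Given any claimed list of the reals between 0 and 1, written out as digit strings, you can always build a number that differs from the n-th entry in its n-th digit, so the list was never complete. No such trick works against a list of the integers.)

    # Sketch of the diagonal argument: build a real in [0, 1) that
    # cannot appear anywhere in the given list of digit strings.
    def diagonal_escape(listed):
        out = []
        for i, digits in enumerate(listed):
            # Differ from entry i in its i-th digit; avoid 0 and 9 so
            # we don't trip over the 0.999... = 1.000... ambiguity.
            out.append("5" if digits[i] != "5" else "4")
        return "0." + "".join(out)

    print(diagonal_escape(["1415926535", "7182818284", "4142135623"]))
    # -> 0.555, which differs from each listed number in at least
    # one digit position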
Are there any in between? Can you have a medium-sized infinity that is bigger than the integer infinity and smaller than the real infinity? What sort of things would come in that quantity?
That question of whether there is a medium-sized infinity is a really big problem. It looks like a simple yes-or-no question. Given that I said the earlier questions do have specific answers which are well-known to the people with a reason to care, it seems that this question ought to also be answerable. Does medium-sized infinity exist, yes or no? Maybe you don't know the answer, maybe I don't, but it looks like someone ought to be able to answer it. Maybe it's a really hard question and nobody knows the answer today; but in that case, maybe we could do some more research and hope to someday find out.
Actually, it's much worse than that. What we now know is that in an important sense, the "medium-sized infinity" question cannot be answered at all. It's not a "yes," it's not a "no," it's not a "we don't know what the answer is, but there is an answer"; it's more like "no answer can exist," or "the question does not exactly have a meaning." (This question is known as the Continuum Hypothesis; Gödel showed in 1940 that it cannot be disproved from the standard axioms of set theory, and Cohen showed in 1963 that it cannot be proved from them either.) My saying "in an important sense" it's unanswerable is another clue, because to even talk about this sort of thing we need to have a clear agreement on what it actually means for a question to have an answer, what it means for us to know the answer to a question, what it means for anything to have a meaning, and a whole bunch of other philosophical stuff that ends up being very much like the "What is a number?" question.
In the early 20th Century, mathematicians ran into enough of this all at once that they couldn't get away with ignoring it anymore, an event sometimes called a "foundational crisis" in mathematics. Physicists were having a foundational crisis of their own at about the same time, with a picture of the mechanism of the universe that had seemed nearly complete only a few decades before rapidly falling apart and making less and less sense as they tried to fill in what had seemed to be just a few small missing details - like "Why don't all physical objects just explode?", the so-called Ultraviolet Catastrophe, a term coined by Paul Ehrenfest.
Chaitin describes computer science as basically having been invented by Turing as a match for burning down mathematics; and even if it also had other origins, it's a fact that in computing we've had to deal with foundational problems right from the start. The foundational problems in computer science are even harder to ignore than in most other fields.
The garden of cosmic horror and delight
Many other things were happening in the world in the early 20th Century. In particular, that is about the time that fiction writers started to really explore the fantastic and speculative. The genre of writing known as "cosmic horror" - arguably founded by H.P. Lovecraft, though others were doing similar things at the same time - originated at about the same time that mathematics and physics were falling apart, and peaked in the 1920s.
Cosmic horror puts forward the idea that we human beings, everything we know, and everything we can know, are all minuscule in comparison to a vast, unknowable, and hostile universe. Moreover, there are truths we shouldn't know, which our minds cannot handle and by which our minds would be damaged. Lovecraft's fiction is full of characters who learn the terrible truth about the world and are destroyed by it, or at best end up wishing they'd remained ignorant. The story "The Tree on the Hill" (collaboration with Duane Rimel) is a fairly typical example of this sort of thing.
Such concepts have become commonplace in speculative fiction. Just two examples are Langford's "basilisks" (pictures that kill you if you see them, explicitly described as doing it by a means related to Turing uncomputability) and the current story arc of Girl Genius (monsters outside of time that destroy you and possibly everybody else too if you study the physics of time too deeply). The idea of forbidden knowledge you shouldn't know goes back much earlier, of course, but in pre-Lovecraft versions it was more often associated with a religious framework - you're not supposed to have certain knowledge and can incur divine punishment for acquiring it. That one's in the Book of Genesis, as well as the Prometheus legend. The 20th Century incarnation focused more on the idea of knowledge intrinsically harmful in itself, that is its own punishment without needing divine intervention.
In the role-playing game Call of Cthulhu, which is heavily based on cosmic horror literature, the effect of people being harmed by forbidden knowledge is not only taken for granted as something that really happens, but quantitatively modelled as part of the game system. Characters have "SAN points" (for "sanity"), which more or less measure their ignorance of terrible truths about the universe that human minds cannot handle. Over the course of adventures in the game, if you read the wrong books or see the wrong things, you lose SAN points, and eventually, pretty much inevitably, your protective ignorance is used up, and your story ends. You go gibbering off into the void. Part of the strategy of the game is to manage your SAN points by avoiding learning too much. It's a game; it's fictional; it does not necessarily model anything in reality; but it seems a reasonable enough model of something that people enjoy playing the game.
The 1920s were immediately after the Great War and one common opinion of critics is that the horrors of that conflict spurred the development and popularity of cosmic horror literature. It's a reasonable position to take: massive, senseless destruction of humanity, and the observed effects of such real-life horror on the minds of the people forced to witness it, could certainly support the ideas that the truth about the world is vast, terrible, unknowable, and shouldn't even be known.
But it's also common for people to link Lovecraftian cosmic horror literature with the foundational crises in mathematics and the sciences also occurring at that time, even if the link is just used as a "ha ha only serious" metaphor. Chapman freely mixes technical language with Lovecraft and Lovecraft-derived allusions in this comment about foundational crises; among the people who talk a lot about math foundations, that kind of metaphor is quite routine, and nobody finds it at all unusual.
Whether it's true or not is debatable, and I'm about to present a different view, but it is certainly one popular view, a view people are aware of whether they agree with it or not, that foundational questions in math, physics, and computer science are by nature scary questions and that just thinking about such questions, let alone discovering the answers, could be harmful.
Ludwig Boltzmann, who spent much of his life studying statistical mechanics, died in 1906, by his own hand. Paul Ehrenfest, carrying on the work, died similarly in 1933. Now it is our turn to study statistical mechanics.
- D.L. Goodstein, States of Matter
That's a real book. How many SAN points does it cost if you read it?
When I mentioned the Genesis version of the "forbidden divine knowledge" story, I wanted to also cite another divine-knowledge story, from The Boyhood Deeds of Fionn mac Cumhill, as part of my general campaign to use pagan sources for religious points where possible. But on looking it up I realised it unfolds in an importantly different way that in fact lines up even better with what I want to say. The story, to summarize, is that there's a magic fish, the Salmon of All Knowledge. Eat the fish, and you know everything. It is apparent from later events that that doesn't mean just literally knowing the answer to every specific question, which would be an automatic foundational crisis in itself; the concept seems to be more like acquiring proficiency in all skills, with access to an oracular ritual for at least some specific questions.
So the poet Finneces spends seven years stalking the Salmon of All Knowledge, eventually catches it, and gives it to his apprentice Demne to cook, with strict orders for the boy to eat none of it himself. Demne burns his thumb on a drop of fat while cooking the fish, instinctively puts his thumb into his mouth, and thereby inadvertently ingests enough of the fish to gain a magical gift of knowledge. Finneces - and this is where it gets especially interesting to me because it departs from the Genesis version with a distinctly pagan take on the events - doesn't punish Demne, but gives him a new name (Finn) and tells him to eat the rest of the magic fish, because it has become clear that he is the one to do so. Finn doesn't seem to be harmed by having All Knowledge. He becomes an epic hero. The first step in being an epic hero is going away to study more poetry with a new teacher.
What if the secret knowledge is transformative but not, after all, harmful or associated with punishment? And what if it's only a starting point?
The religious tradition I participated in during childhood was something called New Thought, which arose in the late 19th and early 20th Centuries as a development or descendant of the earlier Spiritualist religious movement. This was a time in history - just before the foundational crises - when science started to seem to really make sense. Many people today think, and think this is obvious, that science and religion are opposed to each other, but that was not always a well-agreed consensus. Around the turn of the 20th Century, there were many people to whom it was clear that then-new developments in scientific understanding of the world, especially in such areas as the physics of electromagnetism, were supportive of, not opposed to, religious understanding of the world; that science and religion not only complemented each other but would soon merge into a single unified body of knowledge. Spirit was equated with electricity and it was thought that everybody would very soon be able to understand both kinds of phenomena with the same analytical tools as aspects of a single universal energy. The later findings of physics that many similar-sounding ideas are literally true (matter-energy equivalence, unification of forces, and so on) helped keep such religious concepts strong even after the foundational crisis era.
Churches descending from this current of ideas often have the word "Science" in their names, such as Religious Science and Divine Science. Christian Science (the "First Church of Christ, Scientist") may not be exactly part of the New Thought tradition, but has a genetic relation too. Scientology was several decades later and is not closely related. My own specific traditions come from near the chronological tail end of what might be called the New Thought Cambrian Explosion: the 1920s again, just the same time when math and physics were falling apart and theoretical computer science was about to be invented. The later thinkers of New Thought are people who witnessed the foundational crises going on but put a positive interpretation on these developments as things interesting, beautiful, and worth studying rather than as things frightening, dangerous, and better avoided. After that time, although people following such traditions clearly still exist today, the main focus of religious innovation for most of the world moved on to other subject matter, and the overall consensus on the relationship between science and religion is now quite different.
Maybe there are people inclined to say "Oh no! Unspeakable cosmic horror! We must protect ourselves from this dangerous forbidden knowledge!" and generally react like a Lovecraft protagonist when confronted with foundational crises, subtle unknowability in scientific thought, and the findings of science with respect to traditionally religious questions. Maybe there are other people inclined to say "Wow! How interesting! Let's learn a lot more about this stuff, and transform ourselves with its power!" about the same concepts. What would we expect to see in a world that contains both of these kinds of people? What differences might we observe between them, signs by which we might classify individuals into one kind or the other? Which kind are you?
Human understanding of computability theory
Although tough questions and forbidden knowledge are interesting in the abstract, I'm especially interested here in how human beings relate to those questions, and especially where they occur in computer science and computer science education.
Some of Turing's work concerns questions that are or are not answerable by computers. If you're studying computer science, it's important that you have some understanding of the limitations of these machines we use. The basic point that computers have serious limitations and cannot answer some kinds of questions is itself important; but people studying computer science at the university level need to go deeper into exactly which questions are unanswerable and why. Most computer science undergraduate degree programs include a required course, typically in third year, with a title something like "CS 320: Languages and theory of computation." (The word "languages" in this title refers to a class of mathematical objects used in the analysis, not human languages like English nor even programming languages like FORTRAN, although both concepts are related.) It's the course that introduces Turing machines and goes into their consequences. At nearly all institutions it'll end up being considered one of the toughest courses in the undergrad program. It'll also quite often end up being taught by a professor with a reputation for teaching difficult courses. I took the course from someone nicknamed, no kidding, "Dr. Doom."
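To give the flavour of what such a course covers, here's a minimal sketch, in Python rather than formal Turing-machine notation (the function names are my own illustration, not anything from a particular curriculum), of the core halting-problem argument: any candidate halting decider can be fed a program built to do the opposite of whatever the decider predicts about it, so no decider can be both total and correct.

    # Sketch: a toy candidate "halting oracle" and the program that
    # defeats it. The same trap works on any candidate decider.
    def always_halts(program):
        # Toy candidate: claims every program halts. Necessarily wrong.
        return True

    def contrarian(halts):
        # Build a program that does the opposite of whatever the
        # claimed decider predicts about that very program.
        def prog():
            if halts(prog):       # predicted to halt...
                while True:       # ...so loop forever instead
                    pass
            # predicted to loop forever, so halt immediately
        return prog

    p = contrarian(always_halts)
    print(always_halts(p))  # True: the oracle says p halts, yet p()
                            # would loop forever precisely because the
                            # oracle said True. No total, correct
                            # decider can escape this construction.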
Whatever you call this course, wherever it's taught and whoever teaches it, the curriculum is pretty much the same, and there's an interesting phenomenon that usually happens. Unless heroic efforts are made to manipulate the grading to prevent it, the grade curve in the Turing machine course tends to go bimodal. Instead of what's usually seen in most courses, where most people get roughly average grades and some people get higher or lower grades, with the frequency tapering off further from the average, the Turing machine course tends to split into two groups.
Some people, maybe 10% to 25% of the class, "get it." These are people who might walk into the three-hour final exam, answer all the questions in half an hour, wait another half hour (in an effort not to discourage their classmates...), leave, and end up scoring over 95%; while others in the same class, who heard the same lectures and read the same books and appear to be of normal intelligence, can spend the whole three hours genuinely working hard and get just a barely passing mark. You've very likely seen this happen from one side, or maybe the other, if you've taken a course of this type. Both experiences are normal, expected, and hard to prevent for this course in a way that doesn't seem to be the case for other courses. So far, efforts to get the people who find it easy to explain their understanding in a way useful to the ones who don't have been almost entirely unsuccessful. We don't really know how to teach this stuff to everybody.
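For a concrete picture of the contrast, here's a toy simulation (all the parameters are invented for illustration, not measured from any real class) of a single-cluster grade distribution next to a two-cluster one:

    # Toy simulation of unimodal vs. bimodal grade distributions.
    # Parameters are invented for illustration only.
    import random
    random.seed(1)

    def clamp(x):
        return max(0, min(99, round(x)))

    # A typical course: one cluster around the class average.
    typical = [clamp(random.gauss(68, 10)) for _ in range(100)]

    # The computability course: a small "get it" group near the top
    # and a larger group struggling near the pass mark.
    bimodal = ([clamp(random.gauss(93, 4)) for _ in range(20)] +
               [clamp(random.gauss(55, 8)) for _ in range(80)])

    def histogram(grades, width=10):
        for lo in range(0, 100, width):
            n = sum(lo <= g < lo + width for g in grades)
            print("%2d-%2d %s" % (lo, lo + width - 1, "#" * n))

    histogram(typical)
    print()
    histogram(bimodal)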
I have witnessed attempts to blame the bimodal-grade phenomenon in the Turing machine course on "sexism," but I don't believe that's plausible as the main explanation. The population taking this course is usually predominantly male, but the population who "get it" and find the subject matter easy to understand, does not obviously have a different sex ratio from the population who take the course at all. Even if there were a difference in the percentage who "get it" between men and women (which, I emphasize, does not appear to be a hypothesis supported by evidence), it wouldn't explain the existence of the get it/don't get it distinction in the first place, nor the persistence of that distinction within a single sex group.
Just looking at male students alone, some find the course material easy, some find it hard, and there's a gap between those. The most we could expect to see as a result of "sexism" in the teaching of an otherwise-ordinary subject would be a gap between the sexes, not within one sex. Bimodality within a same-sex group necessarily comes from a different cause. It would also be strange, if "sexism" were the cause, for the same thing not to happen in nearly all other computer science courses - which are taught to the same students by the same faculty, who can't reasonably be expected to become much more or less "sexist" depending on the specific course they're teaching. Languages and theory of computation is different from other courses in a way that cannot be primarily explained by "sexism." What's different about it?
The sample size for just one classroom full of students is too small to answer questions about statistical distribution reliably, and there are plenty of difficulties that show up as soon as one tries to combine more than one classroom-full of students into a larger sample. Nonetheless, I think the subjective impression of people having qualitatively different experiences of the difficulty of learning computability theory is an interesting thing worth exploring. Why does it happen?
I'd be interested to know - I don't have data on this - whether a similar bimodal phenomenon occurs in the specific math and physics courses that deal with those fields' foundational crises. My hypothesis is that it's learning about foundational crises that causes the bimodal thing to happen.
If there is such a thing as knowledge that harms you, then it is understandable that you'd be reluctant to learn it. It is even understandable that that reluctance would go beyond conscious distaste: in a world where such things are possible, we'd expect that human beings would have evolved specific adaptations for automatically avoiding harmful knowledge without conscious effort.
The dire predictions of cosmic horror literature have not come true, and it's not only because they're coming true behind the scenes and we don't let ourselves notice. People do not suddenly go off gibbering into the void in large numbers in a psychological Ultraviolet Catastrophe, not even when exposed to challenging ideas and foundational crises. If doing that when you learn the terrible truth about the universe is a real possibility at all, then there must be something that prevents all of humanity from just going stark staring bonkers immediately. If Langford basilisks were capable of existing, we would certainly have seen them before now, either in the Mandelbrot set or The Wizard of Op.
Maybe there is an adaptation that prevents at least some human brains from being able to process certain concepts that would be threatening to sanity if fully understood. Maybe there is an adaptation that allows human brains to directly handle problematic ideas like Turing undecidability and Gödel incompleteness without harm. Maybe both adaptations exist but different individual brains rely on one or the other to a greater degree.
What if someone attempted to teach in school ideas of such a nature as to trigger a protective blocking adaptation built into human cognition, regardless of whether or not the ideas were actually harmful? What if not all brains are equally vulnerable, or equally frightened: some automatically protect themselves by rejecting the forbidden knowledge, while others are simply unharmed by it and even tend to embrace it?
What would the grade curve look like in such a course? How would it feel to be a student in such a course?
If this picture of two kinds of people were an accurate description of humanity, what would people in the two groups think about each other and be likely to do to each other?
7 comments
The incompleteness theorem has a few different things going against it, intuition-wise. It's very abstract, and its nature forces you to deal explicitly with that abstraction: even just keeping the models, axiomatic systems, and metatheories involved all separate can get tricky. It's also one of those results of the form "you can approximate a thing very well, even as well as you like, but the thing itself either doesn't exist or is radically different from its approximants". These can be difficult to accept. Another one of these is: any two sets are contained in a single common set (and even a set of sets is contained in a common set), yet there is no set of all sets. You can get as close as you like to the universe of sets, yet the universe itself either doesn't exist or is radically different from a set (depending on your setup).
The fact that a power set of an infinite set has a larger cardinality than the set itself is pretty weird for this reason. If you have a function from such a set to its powerset that misses even a countable number of points, you can find a new function that includes those points in its image as well. And yet there is no surjection. People really can have a hard time with that: "I have this method to get more and more points in the image, what do you mean I'll never get a surjection? Just keep going!". Especially after you've learned that infinite sets are really flexible, unlike finite sets (injecting a set into a proper subset of itself is strange). It's difficult to impress on people that the diagonalization argument means that method just /can't/ work, it just /can't/ end with a surjective function. Same with incompleteness ("but I'll just keep throwing in more axioms!"), or uncomputability ("even with an oracle for the halting problem, I still get a new halting problem for my new machine?").
It also runs counter to a lot of mathematical results where you /can/ do these things. With rationals, you can get closer and closer to things that don't exist in the rationals, like the square root of 2. The solution is just to fill the holes and get a nice consistent system of real numbers that includes the rationals and yet has no more holes. Or maybe you don't like that not all of your non-constant rational polynomials have roots. There, the anti-diagonalization argument actually works! (from what I recall). You can construct a closure of the field just by (symbolically) throwing in the roots of polynomials that ought to exist, then doing the same to the resulting field, and so on, and eventually you will get an algebraically closed field. That last one is related to Zorn's lemma, the king of "just throw more things in until it works" (want a maximal complementary subspace? a maximal ideal? sure! just keep throwing missing elements in until you're done).
In any case, believing in the incompleteness theorem doesn't commit you to agnosticism regarding any particular axiom of a formal system. You can even be a diehard Platonist and merely think that we live in a world like one Feynman described, where there is no one set of physical laws describing the entire universe, physicists being fated to uncover ever better and more accurate theories of the universe, coming ever closer to a unified theory that doesn't exist.
sand - 2019-05-31 18:45
sand - 2019-05-31 19:09
Matt - 2019-06-01 11:36
That said, one of the profs on my thesis committee was a classmate of Kaczynski's in grad school, and was rather amused (in an "of course not" sort of way) when a reporter interviewing him asked if there was something about the mathematics he studied that might have driven him to terrorism.
kiwano - 2019-09-27 13:45
JD - 2022-11-07 06:40
A personal friend of mine, who's generally intelligent and interested in philosophy, but without any specific math or computer science (or even computer programming) aptitude, has that exact perspective.
The friend in question is a bit bemused by my confidence that it's *not* the case, because to them it's pretty obviously true. Trying to explain it led me to realize just how much of that understanding is intuitive; I do feel a bit embarrassed about not being able to immediately give a complete account of why it's untrue.
I'm giving Hofstadter's G.E.B. a re-read this year, because I thought I remembered it going into excruciating detail on exactly that issue.
Liam I. - 2024-07-17 09:54
(I don't know how that affected me, but I do recall Theory of Computation being easier than Combinatorics. For whatever reason, things like diagonal arguments made more sense to me than counting techniques. Not sure if that's changed in the years since.)
trythil - 2018-08-21 01:26