Meta-Rationality

Author: David Chapman

Learning to Wield an Invisible Power

Senior professionals with years of experience can somehow deal with problems that juniors can’t. They have a “feel for things” that helps them find shortcuts and make things run smoothly. They may:

  • Notice relevant factors that others overlooked

  • Point out non-obvious gaps or friction between theory and reality

  • Ask questions no one had thought of (!)

  • Make new distinctions suggesting different conceptualizations of a situation

  • Change the description of a problem so different solutions arise

  • Rethink the purpose of the work and its technical priorities

  • Apply concepts or methods from seemingly distant fields

  • Combine multiple contradictory views

This is meta-rationality.

Meta-rationality involves producing insights by investigating the relationship between systems of technical rationality and their contexts (where, why, and how a rational system was used). Meta-rationality does not operate within a system of technical rationality, but around, above, and then on the system.

Meta-rationality is a craft that must be learned through apprenticeship and experience. It is also rarer than rationality and has more leverage. It becomes increasingly important as you move from being an individual contributor into leadership or entrepreneurship - where your job is to make sense of chaos when standard techniques no longer cut it.

Anti-rational, Irrational, Rational, and Meta-rational

  • Anti-rationalists dismiss the truth because they reject reason altogether.

  • Irrationalists dismiss the truth because true facts contradict their ideological agendas.

  • Rationalists may describe rationalism as the commitment to trying to believe only true statements.

  • Meta-rationalists defend “truth” against irrationalists, but reject “truth” as misunderstood by rationalists, seeking more detailed, accurate, and effective senses of “truth”.

Credibility Crisis in a Post-truth Era

The modern world was built on a foundation of rationalism, the ideological belief that some system is guaranteed to be correct.

When rationalism failed, modernity ended. We now live in a postmodern world, a result of abandoning rationality, universality, and coherence.

Postmodernity features cultural triviality, political dysfunction, and nihilistic malaise. Rationalism breaks down in the face of the claim that the truth depends on who is asking and why.

Scientific breakthroughs have become scarce and increasingly trivial, and much published research isn’t even true - science is facing a replication crisis. This is one example of a meta-rational problem: science is a rational system that isn’t working as well as it should.

Meta-rational Reforms for Credibility and Creativity

Many rational systems are overdue for extensive overhauls on individual and institutional levels.

Quantitative productivity is high: scientists are still cranking out papers. Yet meaningful innovation is certainly not proportional to the increase in funding. Most output is not valuable in context.

Qualitative creativity needs work: better selection of research problems, in alignment with what actually matters. What actually leads to the creation of knowledge and useful innovation? This is a meta-rational investigation that requires a passion for the subject matter.

Creativity flows from wonder, curiosity, play, and enjoyment. Current institutions built around rational systems discourage this, in favor of constant competitive pressure for mindless productivity.

The Post-post-truth Era

Irrationalists were inadvertently correct in that truth is highly dependent on context and purpose.

Rationalists would rather double down on overstated claims than acknowledge that their pragmatic truths were not absolute truths.

Postmodernity is the acknowledgment that claims of absolute truth within social and cultural systems were false; not altogether false, but also not the absolute truth.

Meta-rationality is about forming a more accurate and credible understanding of rationality, including the nature of truth. Remodeling society to acknowledge the meaningfulness of practical truths, while recognizing their shortcomings, can lead us to a “post-post-truth” era.

Clouds and Eggplants

How does rationality relate to nebulosity? (A more detailed description of nebulosity is covered in Meaningness.)

  • Boundaries - clouds do not have edges, they thin out gradually and you cannot quite say when they start or end.

  • Identity - it is hard to say when a cloud ends and another begins, whether a clump of clouds is connected or distinct.

  • Categories - types of clouds transition continuously into each other, intermediate forms cannot meaningfully be categorized.

  • Properties - depending on context and composition, clouds can take on different properties that cannot be precisely described.

Clouds are an extreme case, but nebulosity is pervasive. 

The First of Many Eggplant Examples

Imagine a refrigerator containing only an eggplant. Now suppose someone posed the question “Is there water in the refrigerator?”

  • A rationalist will say yes - it’s in the eggplant.

  • A meta-rationalist will say it depends - in what sense is “yes” true or false? There are water molecules, but there is nothing to drink.

Considering eggplants in fridges, it is not that we are uncertain whether or not water molecules are in the fridge, or that we don’t know what water means - it is that what counts as water depends on what you want it for.

Key Concepts

  • Rationality - Systematic and formal methods of thinking and acting. “Systematic” and “formal” are themselves nebulous terms: a system vaguely means anything complicated, and formality is a matter of degree. For practical purposes, think of rationality as the activity that happens when a set of rules is consciously followed.

  • Rationalism - Belief systems that make exaggerated claims about the power of rationality, involving a formal guarantee of correctness. Rationalism is how we would like reality to work, but it lacks an adequate mapping between the clearly defined mathematical realm and nebulous reality, so it fails to realize that rationality is unreasonable.

  • Reasonableness - Thinking and acting in ways that are sensible and likely to work, but are not formally rational. Reasonableness is not merely a primitive approximation to rationality, it captures the nebulosity of the world effectively, in a way that formal rationality can’t.

  • Meta-rationality - informal reasoning about how to best use reasonable, rational, and meta-rational methods in a given context. Meta-rationality combines resources from both to understand how to act effectively when/where others can’t.

Some Unavoidable Philosophy Jargon

  • Epistemology is an explanation of knowing; it is the investigation of what distinguishes justified knowledge from opinion.

  • Ontology is an explanation of what there is. What categories of things are there? What properties do they have? How do they relate to other categories? Ontology is intrinsically irreparably nebulous. Ontologies cannot be true, but they can be practical and useful.

Rationalism

There are three obstacles to rationality that rationalism addresses:

  1. Representational vagueness - inability to clearly define what a rationally conceived object means, and how it relates to reality.

  2. Epistemological uncertainty - “known unknowns”: unknowns due to insufficient evidence; and “unknown unknowns”: relevant factors that you are unaware of.

  3. Ontological nebulosity - The nebulosity that pervades the world

They are all fuzzy, and make it difficult to make claims about truths and falsities.

Rationalism mainly concerns itself with the first two obstacles, as they are about human cognition - maybe we can fix them. We can gather more data, use more precise language, and maybe one day solve these obstacles. Since nebulosity is about the world itself, we can’t fix it, and rationalism ignores it.

Encountering Representational Vagueness

Rationalism considers ordinary language defective and tries to replace it with more precise systems. One example is formal logic in mathematics.

While these sharper representations are often valuable and do give power to rational methods, fully eliminating vagueness is infeasible, and attempts to do so are based on a fundamental misunderstanding of language.

Ordinary language contains extensive methods for working with nebulosity, which are lost when replaced with technical abstractions.

Encountering Epistemological Uncertainty

Rationalism assumes well-formed statements are either absolutely true or false, and the task is to find out which through rationality.

Some things are true, and knowing truths can be highly effective in many situations. Unfortunately, uncertainty cannot be entirely eliminated, and formal reasoning isn’t built to handle this - probability can handle some things, but not all.

Unknown unknowns can’t be incorporated into formal systems; to treat an uncertain fact requires specifying it in advance, which cannot be done.

Encountering Ontological Nebulosity

Rationalism typically misinterprets this as one of the earlier concerns. However, nebulosity does not boil down to linguistic sloppiness or lack of knowledge; there exist no definite, absolutely true answers to most questions.

Nebulosity negates any possibility of strong claims about rationality. Most rationalisms act on the supposition that beliefs are either absolutely true or false - in the process denying or ignoring nebulosity.

Acknowledging epistemology while ignoring ontology is impossible; the two can’t be separated. Most beliefs are about things that are inherently nebulous, so their truth generally is too. Most of the time, the best we can get are “pretty much true” truths, where no amount of additional information would resolve them into absolute ones.

Rationalism often formulates explicit denials of nebulosity on the basis of fundamental physics.

  • Subatomic particles have absolutely definite properties, described by quantum field theory, which are absolutely true.

  • Everything is made out of particles, so everything is also absolutely definite. Thus, the world is well-behaved, and there is an absolute truth to everything.

The problem is, quantum physics doesn’t have much to say about ontology - what we care about in objects, categories, properties, and relationships cannot be understood in quantum terms. We need a better explanation.

Logicism and Probabilism

Epistemology has traditionally distinguished rationalism from empiricism.

  • Rationalism derived new knowledge through deduction from existing knowledge or from intuition

  • Empiricism derived new knowledge from sensory experience

Rationalism and empiricism were opposing theories, but it became clear that knowledge rests on reasoning and experience. The word “rationality” now covers both reasoning and experience, and it is intuition that has been discarded as it proved unreliable.

Deductions can be made public and checked against each other, intuitions are inherently private, leaving no way to resolve disagreements.

There are two major varieties of rationalism:

  • Logicism - a descendant of the rationalist tradition and predicate logic

  • Probabilism - a descendant of the empiricist tradition and probability theory

(Predicate logic is the set of rules for mathematical proofs. Its proofs guarantee absolute truth, so many believe it is the essence of rationality.)

Logical Positivism

This is the framework of the logical positivist:

  1. Apply rationality to itself. Use logic to prove that logic works. This gives us a foundation to apply logic to other things.

  2. Use logic to prove mathematics is correct.

  3. Prove that the mathematical, scientific understanding of the world is correct, that scientific empiricism is reliable.

  4. Reduce squishy things like ethics and aesthetics to science and solve those too.

Unfortunately, we failed at step 1.

In the early twentieth century, logical positivism tried to marry predicate logic with scientific empiricism, attempting to generalize results from experimental data to universal truths, forming an unassailable proof that the theory of rationality was correct.

Around 1930, Kurt Gödel and others proved mathematically that some logical defects cannot be fixed, even in principle. Logic is inherently broken; nothing can be done about it.

Predicate logic cannot deliver on its promise: there exist mathematical truths that cannot be proven, so rationalism cannot be extended to everything.

The discovery of fundamental limits to what can be known produced the crisis of confidence that eventually led to postmodernity, or “incredulity towards all grand narratives”.

Step 2 also failed.

The mathematics mostly works, and its internal problems are largely irrelevant to its applications in the real world (through science, engineering, economics, etc.). Nevertheless, it does contain some highly technical issues if you dig deep enough.

Induction - the problem of step 3.

The dilemma in scientific understanding is that it does not provide an absolute verification criterion. No matter how much evidence we have, we are faced with the question of “How much evidence is adequate to verify a general conclusion?”

After failing to find an answer, proponents of logical positivism reluctantly concluded one could only have degrees of belief in universal truths; full confidence is impossible.

Some epistemologists proceeded to develop probability theory as a means to quantify the concept of being “more confident” in a truth. The attempt to unify predicate logic and probability theory failed, but probabilism did replace logicism as the dominant school of rationalism in the mid 20th century.

Typical discussions of rationalism’s collapse cover only:

  1. The foundational crisis in logic

  2. The failure to solve the problem of induction

This gave the hopeful impression that if these two issues were overcome, we could have a viable general epistemology.

The first poses less of a problem because the internal technical problems with mathematics had no practical consequences.

The second was more-or-less addressed by scientists and statistical software that promised reliable answers to the problem of induction through statistical significance.

Even given these, logical positivism still failed for several other reasons, which apply generally across all forms of rationalism.

Aristotelian Epistemology

For over two thousand years, rationalist philosophers held the Aristotelian view of logic:

  • You have a list of sentences in your head

  • Each sentence is labeled with whether you believe it is true or false

  • Separately, each is actually true or false in the world

If you had a wrong belief, rationality was the way to fix it. Once inside and outside correspond, you are done. This seemed to accord with common sense, and much of the time it is adequate.

However, we inevitably express ontological nebulosity in reasonable everyday activity:

  • What if a clear answer isn’t available? (e.g. I’m not sure)

  • What if an answer can only be given with a weak degree of confidence or high degree of uncertainty? (e.g. I think so…)

  • What if it doesn’t make sense to assign an answer? (e.g. I believe in America!)

Is “America” something that could be true or false? Is believing in the same phenomenon as believing that? Maybe the statement is an abbreviation for “I believe that America is Good”? But this is so vague that it doesn’t seem that it could be either true or false.

Law of the Excluded Middle

Philosophers have found several other intrinsic problems with the Aristotelian framework, one of which is the violation of the Law of the Excluded Middle, which essentially states that either a proposition or its negation must be true.

However, consider the statement “The president of the world is bald”.

  • There is no president of the world, so it is false that he is bald.

Now consider the negation, “The president of the world is not bald”.

  • There is no president of the world, so it is also false that he is not bald.

So the statement and its negation are both false - Aristotelian theory contradicts its own Law of the Excluded Middle.

Everything is Wrong

It turns out that every part of traditional logical epistemology is wrong.

  • Knowledge is not made of true beliefs, you don’t simply believe all statements to be true or false

  • Beliefs aren’t sentences; there is no list of beliefs in your head

  • Beliefs can’t be true or false of the world

However, the main features of this theory have been retained up until now, with modifications and elaborations. In fact, complicated versions are invented to try to deal with failures of simpler versions, but those don’t work either.

The problem doesn’t lie in any of the details. The whole approach is wrong.

Natural Language is Broken?

Sentences are often unclear - this is a problem if we want to know whether or not our beliefs (expressed as sentences) are true. A sentence might be true in some sense, and false in another, or even meaningless in a third.

Consider the ambiguity that arises when considering a “pretty little girls’ school”:

  • Does “little” apply to the girls or the school?

  • Does “little” refer to age or size?

  • Does “pretty” apply to the girls or the school?

  • Does school mean a building? or an intellectual lineage? (or a co-moving group of fish?)

A sentence depends on the meaning of its parts, and logical epistemologists attempted to find a fixed scheme for extracting sentence meanings from word meanings. This is also impossible.

Consider the statements “the eggplant is a fruit” and “the dog is a Samoyed”:

  • The former “the” likely refers to all eggplants

  • The latter “the” likely refers to some particular dog

This understanding can be inferred from our knowledge of these topics - which is not contained within the sentences. A sentence’s meaning depends on its constituent parts, but not only on them.

This problem is pervasive, almost any sentence can be read with multiple meanings. Rationalism’s diagnosis is that natural languages are hopelessly broken. They are incapable of adequately expressing truths. So, what if we replaced natural language with math and logical formulae?

The Invention of Modern Formal Logic

Gottlob Frege’s invention of modern formal logic fixed several outstanding defects in Aristotelian logic.

  • The meaning of a formula definitely exists and can be derived unambiguously from its parts.

  • Rigorously separated deduction and intuition, whose distinction was previously nebulous.

  • Introduced the logical device of “nested quantifiers”, which solved many technical problems of Aristotelian logic.

The two earlier statements can now be expressed as:

∀x eggplant(x) ⇒ fruit(x)

  • ∀ is the universal quantifier, it states a universal truth.

  • For all x, if x is an eggplant, then x is a fruit

∃x dog(x) ∧ Samoyed(x)

  • ∃ is the existential quantifier, it states that something exists

  • ∧ is the “and” relation.

  • There exists an x, that is both a dog and a Samoyed.

  • This isn’t quite what we wanted though

If we want to be more specific and rigorous about making a claim about one particular dog:

dog(x_{dog_id}) ∧ Samoyed(x_{dog_id})
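As a toy illustration of how these quantified formulas evaluate, here is a sketch over a small finite domain. The entities and predicate tags are invented for the example, not anything from the text:

```python
# A toy finite "world": each entity is tagged with the predicates it satisfies.
# All entities and tags here are invented for illustration.
world = {
    "e1": {"eggplant", "fruit"},
    "e2": {"eggplant", "fruit"},
    "rex": {"dog", "Samoyed"},
    "fido": {"dog"},
}

def holds(entity, predicate):
    """True if the entity satisfies the predicate in this toy world."""
    return predicate in world[entity]

# ∀x eggplant(x) ⇒ fruit(x): every eggplant in the domain is a fruit.
universal = all(holds(x, "fruit") for x in world if holds(x, "eggplant"))

# ∃x dog(x) ∧ Samoyed(x): some entity is both a dog and a Samoyed.
existential = any(holds(x, "dog") and holds(x, "Samoyed") for x in world)

print(universal, existential)  # True True
```

Note that the evaluation only works because the domain is finite and every object was registered in advance - exactly what the real world does not provide.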

But what if we see a dog and have no way to identify it? You still see a dog, and it’s obviously a Samoyed, but how can you make a logical claim about it?

This seemingly trivial question holds a key to meta-rationality - there is no way to fixate the belief “the dog is a Samoyed” to eliminate the context dependency of the statement. You could assign it an ID, but there is no “Cosmic Object Registry”, and even if there were, there would be no way to objectively add objects to it. And it wouldn’t matter: there is no ambiguity when I’m looking right at the dog; there is no need for a registry when you have context and reasonableness.

You don’t always need rationality’s power, precision, and accuracy in the real world.

Facing Indefiniteness

There are also questions without definite answers. It is not that there are objective answers we haven’t yet determined; the questions themselves are indefinite. This is awkward for rationalist epistemologies: how can you say an eggplant is a fruit if you cannot say what either one is?

Let’s consider a world where rationalism is true: one made of ontologically definite objects, about which things can be true or false without any nebulosity, possessing definite properties and definite relationships (or lack thereof) with other objects.

If we try formal logic again, we’ve dealt with syntactic ambiguity, but haven’t addressed semantic ambiguity hidden inside predicates (such as eggplant). We can attempt the following explicit definition of a fruit.

  • ∀x eggplant(x) ⇒ fruit_botanical(x) is True

  • ∀x eggplant(x) ⇒ fruit_culinary(x) is False

  • ∀x eggplant(x) ⇒ ¬ fruit_culinary(x) is True

(The ¬ symbol means “not”.)

The next step is to give necessary and sufficient conditions in order to define our terms, such as

  • fruit_botanical(x) ≝ seed_bearing(x) ∧ structure(x) ∧ (∃y angiosperm(y) ∧ part_of(x, y))

“A thing is a fruit of the botanical type if and only if it is a seed-bearing structure and there is some angiosperm y that thing x is part of.”
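The definitional style above can be sketched in code. The predicate names mirror the formula, but the encoding and the sample facts are assumptions for illustration:

```python
# Hypothetical encoding of a necessary-and-sufficient definition:
# fruit_botanical(x) ≝ seed_bearing(x) ∧ structure(x) ∧ ∃y angiosperm(y) ∧ part_of(x, y)
facts = {
    "eggplant1": {"seed_bearing", "structure"},  # invented sample entity
    "plant1": {"angiosperm"},
}
part_of = {("eggplant1", "plant1")}  # invented sample relation

def fruit_botanical(x):
    """x is a botanical fruit iff it is a seed-bearing structure
    that is part of some angiosperm."""
    return (
        "seed_bearing" in facts[x]
        and "structure" in facts[x]
        and any("angiosperm" in facts[y] and (x, y) in part_of for y in facts)
    )

print(fruit_botanical("eggplant1"))  # True
```

The exceptions and borderline cases described next would show up here as an ever-growing pile of extra predicates and special-case clauses.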

This does not go well: you discover a never-ending assortment of exceptions and borderline cases, and taxonomizing an unbounded proliferation of senses and properties becomes increasingly contrived and convoluted.

Trying to make a nebulous category precise in terms of several others also turns out to be nebulous. Eventually you might think this would terminate, as we reach the quantum realm, but in practice this never happens. The set of terms proliferates exponentially and seemingly endlessly. It is not just unbounded, but also pragmatically uninterpretable and unusable.

This is not to say it is definitely impossible to ever make perfectly accurate definitions (although this seems to be true). It is that we do not currently have perfect definitions, and yet it rarely poses a problem in the practice of science and engineering. We would need perfect definitions if we needed absolute truth, but we don’t.

The Problem is in the Territory, Not the Map.

Even if we had a perfect mapping of definitions to things in the real world, mathematical formalism doesn’t solve the problem that there are fundamentally no sharp lines that divide the world in a meaningful way.

This issue is an ontological, not representational one. We want to carve nature at its joints, but practically it doesn’t have any. There is no natural, intrinsic, absolute distinction between eggplants and non-eggplants, and no subdivision, however technical, can fully fix this.

A New Truth Value

Georg Wilhelm Friedrich Hegel’s version of Idealism dominated British philosophy for most of the 1800s:

“Time and space are unreal, matter is an illusion, and the world consists of nothing but mind.”

Logical positivism began in the 1900s with the revelation that Hegel’s writing was incoherent and made no sense. We acknowledged that the sun and stars would exist even if no one was aware of them.

Aristotelian logic said all statements were either true or false with no alternatives. With the realization that nonsense could not be assigned an existing truth value, logical positivism cut the Gordian knot and claimed that it was neither - it was meaningless.

The problem wasn’t that we didn’t know (epistemic), but that the world doesn’t work in a way that could answer that problem (ontological). 

This opened the door for more truth values. Another was unknown: the belief status of sentences whose truth you were uncertain about. Now logic could express epistemic uncertainty.

We can also add sort of, and both (true and false), and a bunch of others.
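A minimal sketch of what such a multi-valued logic might look like, using a strong-Kleene-style third value for unknown (the value names and operator choices here are assumptions, one of several possible designs):

```python
T, F, U = "true", "false", "unknown"

def k_not(a):
    return {T: F, F: T, U: U}[a]

def k_and(a, b):
    if a == F or b == F:
        return F  # one definite falsehood settles a conjunction
    if a == T and b == T:
        return T
    return U      # otherwise the answer stays unknown

def k_or(a, b):
    return k_not(k_and(k_not(a), k_not(b)))  # defined by De Morgan duality

# The Law of the Excluded Middle no longer holds: a ∨ ¬a can come out unknown.
print(k_or(U, k_not(U)))  # unknown
```

This also shows the coarseness complained about below: "unknown" absorbs everything uncertain, with no way to say how confident to be or why.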

Multi-valued, or non-Aristotelian, logic solved some important problems. Yet it was still inadequate in practice and eventually abandoned. It was too coarse: it’s rarely useful to merely label something as unknown; you want to know how confident to be and why - you want specifics.

Also, hardly anything is absolutely true or false, at best they are “good enough” or “pretty much true”. So almost everything and anything would have a truth value of “meaningless”, or “sort of”.

Multi-valued logic was then replaced in part by probability theory, giving a finer-grained account of the epistemological problem of uncertainty, offering a continuous range representing degrees of confidence. However, probability doesn’t deal with “meaningless” and “sort of true”, and has other fatal flaws of its own.

True Statements That Aren’t Absolutely True

Real-world truths are all sort-of truths. This causes trouble for most rationalisms, since the formal methods underlying logicism and probabilism depend on absolute truth.

The fundamental principle of logical deduction is that it is absolute-truth-preserving: if all inputs to a deduction are absolutely true, then so are the outputs. Unfortunately, sort-of truths are not preserved; they don’t follow the standard rules of logical inference.
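An illustrative sketch of why this matters (the 90% figure is invented for the example): if each premise in a chain of deductions is only sort-of true, confidence in the conclusion erodes step by step, whereas deduction from absolute truths loses nothing:

```python
# Chain ten deduction steps, each resting on a premise that is only
# "90% reliable" (an invented figure for illustration).
reliability = 1.0
for _ in range(10):
    reliability *= 0.9  # each sort-of-true premise erodes confidence

print(round(reliability, 3))  # 0.349: far shakier than any single step
```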

Sort-of truths are not specific enough for standard logic. Could we define inference rules for them, though? No one has succeeded in creating anything usable, as correct answers depend on unstated knowledge outside the context of the question.

Probabilism doesn’t require an absolute belief about the truth of the statement. However, it does require that a statement actually is either absolutely true or absolutely false - we just don’t know. This doesn’t work for sort-of truths either. Almost no meaningful statements are absolutely true or false - universally, objectively, independent of circumstances, purposes, or judgments.

Quixotic Quests for Absolute Truths

When encountering a sort-of truth, there are rationalist strategies for converting it into an absolute one. They each work in some cases, and all are meta-rational: they are methods of ontological remodeling, ways of making rationality work better. Unfortunately, they do not provide general solutions, and none of them generate absolute truths that are usable in practice.

Treating Linguistic Vagueness

This strategy involves tackling linguistic vagueness, defining all terms in a statement with absolute precision so that it becomes absolutely true or false.

  1. You move all the nebulosity inside the statement. This by itself doesn’t help with deduction, and it doesn’t always work.

    • Before: It’s more or less true that cottage cheese is white

    • After: It’s absolutely true that cottage cheese is more or less white

  2. Split the meaning of each term into technical special cases that are each defined separately. For example, the range of reflective properties that count as a particular kind of whiteness under certain lighting conditions; the range of substances that count as cottage cheese; etc.

Treating Epistemic Uncertainty

This strategy involves reinterpreting a mostly true statement as an absolutely true statement with built-in uncertainty.

  • Before: (mostly true) all ravens are black

  • After: (absolutely true) the probability that a raven is black is high

This sometimes works well, when variations are random and patterns are usually uniform, so a general statement is adequate.
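For instance, a sketch of the raven reinterpretation (the 98% blackness rate and sample size are invented): instead of asserting “all ravens are black”, estimate the probability from observations:

```python
import random

random.seed(0)  # for reproducibility

# Simulate observing 10,000 ravens, 98% of which are black (invented rate).
observations = [random.random() < 0.98 for _ in range(10_000)]

# The "absolutely true" reformulation: P(a raven is black) is high.
estimate = sum(observations) / len(observations)
print(f"estimated P(black | raven) ≈ {estimate:.2f}")
```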

But when variations occur, we usually want to know why, and to exclude them from the statement. Exceptions can be meaningfully unique, and it doesn’t always make sense to lump them in and interpret them as “true with a given probability”.

Oftentimes, this probability isn’t available.

Reducing Rationality to Reduction

Reduction is a powerful tool of rationality; it is responsible for the kinetic theory of gases, set theory, the entirety of computer science, etc.

Naturally, rationalists tried to deliver absolute truths about the practical world through reduction, starting with quantum field theory.

The argument is that things on this level are not nebulous, there is absolute truth. From this unshakeable foundation, we can find absolute truths about atoms, molecules, cells, and finally, life.

Empirically, this is a metaphysical fantasy - it does not conform to facts.

In practice, we are unable to reduce many of these domains to one another. Psychology is mostly not reducible to neuroscience; neuroscience is mostly not reducible to molecular biology; molecular biology is mostly not reducible to chemistry; etc.

Not to say that it’s infeasible in principle, but we don’t have sufficiently detailed or accurate models of neurons; we don’t understand in detail what each neuron does; we don’t know how neurons form functional structures; etc.

A full explanation of one domain in terms of another is non-existent, and partial reductions do not enable absolute truth to accumulate from lower to higher levels. Nebulosity blurs reduction: things cannot be cleanly split into well-defined levels, and absolute truth cannot propagate upwards.

What if we’re not “Sciencing” Properly?

An example of nebulosity permeating science comes from biology: we don’t have a well-defined conception of what a cell is or isn’t. If we removed a cell’s several trillion molecules one by one, at some point it must cease to be a cell. We simply don’t have a criterion for this; it’s nebulous.

Some rationalists argue that if biologists can’t define a cell, they are not doing science correctly. If the cell isn’t reducible to quantum theory, then nothing biologists say about cells can be absolutely true or false, and is therefore meaningless.

Biology should be fixed by people who know what they’re doing.

The only meaningful questions that scientists can ask are those that can be answered in unambiguous physical terms.

But, this is impossible (currently). So then does it follow that we do not (currently) have any genuine knowledge of biology? If we take this seriously, we eventually reach post-rationalist nihilism, the despairing realization that rationality cannot deliver on rationalism’s promises. Knowledge is mostly impossible.

Alternatively, you can acknowledge that our current biological knowledge is rational, and that a different and better understanding of rationality is possible. Meta-rationalism is that.

Objective Objects

A powerful argument, often used in support of rationalism, claims that ontological nebulosity is impossible:

An object is just atoms, and atoms have precisely fixed, objective behaviours. So we know from physics that an object doesn’t depend on your subjective ideas about it. There’s only one real world. Different people may have different beliefs or concepts about it, but that doesn’t affect what’s true. We don’t each get our own reality; only our own subjective opinions. Your supposed “nebulosity” just boils down to people having different theories, some of which are true, and some of which are false. It’s just epistemological fuzziness. There’s no fuzziness in objective reality.

Quantum field theory is the closest thing we have to an absolute truth about physical reality. The physicist Richard Feynman was one of its major architects. He wrote:

What is an object? Philosophers are always saying, “Well, just take a chair for example.” The moment they say that, you know that they do not know what they are talking about anymore. The atoms are evaporating from it from time to time - not many atoms, but a few - dirt falls on it and gets dissolved in the paint; so to define a chair precisely, to say exactly which atoms are chair, and which atoms are air, or which atoms are dirt, or which atoms are paint that belongs to the chair is impossible. So the mass of a chair can be defined only approximately.

There are not any single, left-alone objects in the world. There are no absolute truths about everyday objects because there is no absolute truth about which atoms make it up. The physical boundaries of a physical object are always nebulous, to varying degrees.

  • The mass of a set of atoms is objectively well-defined, but an object is not a specific set of atoms. It will invariably have some atoms that are loosely associated but not definitely either part of it or part of its surroundings.

  • Mass is a fundamental physical property, yet the mass of an object is nebulous. How much more so, then, its shape, compressibility, pathogenicity, or any of its endless other properties.

The point is, the absolute truths of quantum field theory don’t apply precisely if you can’t say precisely where to apply them.

“If we are not too precise we may idealize the chair as a definite thing”

Feynman to the rescue again. In rational practice, we use ontologies that assume the existence of definite objects with definite properties. This idealization cannot precisely reflect the real world. “One may prefer a mathematical definition; but a mathematical definition can never work in the real world.”

Yet, it works. We can choose ontologies that work despite not being absolutely true. How does a good ontology relate effectively to reality?

In reasonable everyday activity, it’s usually not a problem that the world cannot be divided into well-defined objects. Your ability to work effectively with the non-objectness of these objects depends on non-rational skills of perception and manipulation, which can impose boundaries on their nebulosity.

Meta-rationalism explains the relationship between rationality and reality as mediated by reasonable activity, and as enabled by definiteness-enhancing technologies.

Science generally aims for universal, objective truths (which is good). However, when you apply rational conclusions to the real world, the separation of objects is generally context-dependent and purpose-dependent, to varying extents. Reasonableness critically depends on this: it carves out chunks of the world that are useful, meaningful, or explanatory.

The Role of Perception in Rationalism

There are two important roles that perception plays in rationalism:

  1. The correspondence theory of truth (the philosophical position that a statement is true if it corresponds to reality) does not include a causal explanation of how the correspondence between beliefs and reality is evaluated. Perception does part of this work.

  2. Rational processes of deduction and induction produce new beliefs from old ones. But what if we have no relevant beliefs? To get this process started, some beliefs must come from a “primary source”, which does not depend on inference or interpretation. Perception is one such source.

Rationalist theories typically make several assumptions, implicitly or explicitly:

  • Perception and rationality are separate modules, with clearly distinct and defined spheres of responsibility, and with a coherent information transfer interface at the boundary.

  • Information flows unidirectionally from sense organs, through perceptual processing, and finally to rationality.

  • Perceptual information is inherently factual and objective, although it might be only approximately correct.

The underlying issues with these views were always the same: unavoidable nebulosity. 

Rationalist Theories of Perception

Each of the below is unworkable for different reasons. They are all simplified versions of theories that were major research programs for decades. The aim here is not to prove that no rationalist theory can be adequate, but to explain some specific obstacles that suggest alternative approaches.

  1. It would be ideal for rationalism if perception delivered a set of statements about what the objects in your environment are, with their types and relationships. That’s what rationality wants to use as a foundation to build upon.

    • Obstacle: assigning objective types and relationships often requires reasoning that goes far beyond what could be expected of perception.

  2. Perception might deliver statements involving only a fixed set of objective, sensory properties of the world, such as shapes and colors. Then, rationality proceeds to make sense of those. 

    • Obstacle: There doesn’t seem to be any fixed perceptual vocabulary sufficient to support reasoning. Many conclusions require arbitrarily fine-grained discriminations to be formed.

  3. Reasoning occasionally has to go all the way “down to the pixels,” in which case it is not clear what work is left for a stand-alone perception module. 

    • Obstacle: It does not seem feasible that rationality can do the whole job.

  4. There is strong scientific evidence that biological perception is biased, unreliable, and not objective. Perhaps rationality should be based on measurements taken with objective instruments? 

    • Obstacle: Unfortunately, there are no objective instruments. They can be more objective than perception, but they still fall short of delivering absolute truths.

1. Perception to Formulae

Higher cognition, notably rationality, is usually taken to run on an engine akin to language or logic. So, perception ought to deliver a set of statements about the world.

The question is, what sorts of predicates can appear in the statements that perception produces? What ontology do perception and rationality use to communicate at their interface? 

Perception encounters the same issues as rationality, such as nebulosity and context dependence.

Suppose I teach you a new word for an object - describing it vividly enough that you could recognize it immediately - before showing it to you. Then the first time you see it, it seems that some deliberate and rational reasoning would be involved. If not, then definitions would need to be placed into the perception module, which doesn’t seem right.

Perhaps then, every sort of reasoning might be required for accurate judgment in some cases, and nothing is left solely for rationality to handle.

2a. Reasoning from a Neutral Observation Vocabulary

The approach above drew the boundary at too high of a level. What if we move the interface between perception and rationality down, so perception does less work and rationality more?

Considering this lower boundary, perception might output statements involving a fixed set of sensory properties of the world. Perception is responsible for describing objective physical features, leaving the rest for rationality to handle.

Logical positivists called this a “neutral observation vocabulary”, which should be free from ontological bias. Such a vocabulary is supposed to deliver a set of starting beliefs that don’t depend on any theoretical assumptions. The opposite would be a “theory-laden” vocabulary, whose terms implicitly include substantive assumptions about the world.

Unfortunately, even though this makes it easier for perception, it makes things harder for rationality. Too hard. Also still too hard for perception.

Reaching conclusions about the world would require an arbitrarily long, indefinite list of “sense data” observations. Finding rational conditions to believe that something is a member of a macroscopically meaningful category faces the same problem we’ve always faced - nebulosity.

2b. Perception into a Neutral Observation Vocabulary

So a neutral observation vocabulary does not provide enough information to apply rationality and arrive at absolute truths. What about something more fine-grained?

This still doesn’t work - if perception applies any sort of processing and summarization of the retinal image, the limits of that computation will show up at the interface between perception and rationality, and will shape what kind of starting beliefs rationality can work with.

So if we really need an indefinitely fine-grained output from the perception module, it needs access to the retinal image.

3. Pushing Rationality Down to the Pixels

What if we give rationality inputs that are guaranteed true - the raw pixels? True inputs guarantee true outputs: reliable, universal, and amenable to formal logic.

When photons hit the retina, perception performs some sort of computation, so we should be able to model it formally; it should be another rational process, just an unconscious one.

Conceptually, just declaring that rationality does the rest of the job doesn’t address how. All the problems stated earlier still need to be overcome to reach a plausible explanation of how pixels can be reasoned into statements, and none has been found.

Still, this approach remains popular, and deep learning systems that start from pixels do surprisingly well. However, computations like convolutions have long been known to be special-purpose methods for the early stages of visual processing. Further, image classification is not general perception - it does poorly on spatial relationships, and even worse when made to reason with nested logical quantifiers.

Probabilistic (“Bayesian”) approaches are also popular, and depend on the implicit belief that probabilistic inference encompasses the whole of rationality. This is unambiguously mistaken, and also does not yet include any practical theory of how to begin statistical inference from pixels.

4. Objective Instruments

Since biological perception is subjective, unreliable, and biased, maybe it’s not the right starting point. Logical positivists suggested that reliable knowledge must be based on scientific instruments, which measure objective physical properties and have unambiguous numerical outputs.

Unfortunately, our tools aren’t that good, and scientists know this. It would be convenient if a spectrophotometer always gave you a reliable and objective measure of an object’s color, but it doesn’t, for the same reason eyes can’t: color is not an objective property.

Laboratory apparatus can be inaccurate and go out of calibration while still being approximately, probabilistically, or “good enough” right. Yet, this does not exclude the possibility (or eventuality) that they may be wildly off in hard-to-define circumstances.

Scientific instruments are also not ontologically neutral; their outputs are “theory-laden”: someone had to decide what kind of output an instrument should give. A measurement is only meaningful if you already accept particular concepts, assumptions, and theories about the output. Whether and how to trust an instrument is a matter of subjective interpretation.

These instruments are still useful: they let you make observations that the unaided senses can’t. But they are not infallible or ontologically neutral. No scientific measurement can ascertain an object’s identity with absolute certainty.

Rational Inference in the Real World

There are hardly any universal absolute truths that apply to the practical world. Nearly every piece of knowledge has exceptions. Nearly anything might be relevant to nearly anything else, though nearly everything turns out to be pretty irrelevant in any case.

You might be able to reason probabilistically about “known unknowns” - obstacles that you could realistically anticipate and assign probabilities to. 

You cannot realistically anticipate “unknown unknowns”, and this defeats probabilism. Unknown unknowns are innumerable, and this problem is fatal for rationalism’s hope of inference and optimality in the real world.

Obviously, though, we do use rational inference all the time, often successfully, in three ways:

  • We can make a small/closed-world idealization by pretending we know what all the relevant factors are

  • We can re-engineer the world to more nearly fit the idealization by manufacturing less-nebulous objects and shielding them from unexpected influences

  • We can reality-check the necessarily unreliable results of rational inference

These are all meta-rational operations that can be performed poorly or well.

Probabilism

Probabilism is any rationalism that takes probabilistic rationality as central to rationality overall, including probability theory, decision theory, and statistical methods.

Probabilism remedies several of logicism’s fatal defects, recognizing that absolute certainty is not possible in the practical world and providing an intuitive accounting of confidence in beliefs.

It managed to largely replace logicism as the dominant form of rationalism in the mid-twentieth century, however:

  • Probability theory lacks the power of formal logic

  • Probabilism does not address most of the issues logicism failed at

  • Probabilism’s defects are not just theoretical; they regularly produce large practical catastrophes

Weaker and Stronger Probabilisms

Here are some claims about the power of probabilistic rationality, in order from weak to strong.

  1. Probabilistic rationality is extremely valuable in some circumstances

  2. Probabilistic rationality is a complete and correct theory of induction

  3. Probabilistic rationality is a complete and correct theory of uncertainty

  4. Probabilistic rationality is a complete and correct theory of epistemology

  5. Probabilistic rationality is a complete and correct theory of rationality

  6. Probabilistic rationality applies in all circumstances

Claim 1 is true; the rest are false.

Claims 5 and 6 can be disposed of quickly - probabilistic rationality does not include most of mathematics. Many tools in computer science, theoretical physics, and calculus are not available within probabilistic rationality, so it is not complete. That is not to say it can’t be combined with them.

Claim 4 is false for many of the same reasons logicism is:

  • It supposes that all beliefs are either absolutely true or false, whether or not we know it (otherwise the math wouldn’t work).

  • It also does not address any of the problems faced in representing reality, such as ambiguity, vagueness, definitions, and reference.

  • It doesn’t address any of the ontological problems, like the nebulosity of objects, categories, properties, and relationships.

What it does do is address uncertainty about known unknowns. What it doesn’t have is the same expressive power (what it allows you to say) and inferential power (what you can conclude) as logic - it is a weaker system.

Claim 3 fails when faced with unknown unknowns, and even some known unknowns. Probabilistic rationality only addresses certain types of known unknowns.

Claim 2 is compelling because statistics paired with evidence can give you an idea of how confident you can be. However, statistics is not enough to answer the question outright, and can be powerfully misleading. Also, much of science isn’t based on probability and statistics at all; they are not always necessary for induction.

No Alternative?

So logicism doesn’t work, and it seems neither does probabilism - yet probabilists claim there is no credible alternative, and so probabilism has to be right, or else rationality would be impossible.

The assumption at play here is that rationality requires rationalism. That is, rationality requires proofs from first principles that being rational is correct. Yet, we do do rationality, often successfully, without proofs.

The issue is we are looking for a universally applicable procedure with uniform justification, such as logic or probability. Yet we don’t have first-principles proofs that induction works for either. So how do we derive general knowledge from specific data?

An alternative answer, arrived at by investigating this empirically rather than philosophically, is that there is no uniform principle of induction. Instead, we’ve devised different ways of finding different sorts of truths, all of which are reasonably (but not absolutely) reliable when used well. Some use logic and probability, some don’t.

Small World Idealizations

The mathematics of probability starts from a specified set of possible outcomes. The probabilities must add to 1, or else the math doesn’t work.

So, you start assigning probabilities - until you realize that there are innumerable unknowns. In the real world, you can’t make a full list of possible outcomes; there are always more things that might happen. If you add up all these estimated probabilities, you’ll eventually exceed 1 (or asymptotically approach 1, which does not provide justifiable priors). If you quit before you run out of ideas, you’ll have underestimated the probability of failure.

And this is just known unknowns. Even after you’ve written down every possibility you can think of, there are still possibilities that nobody would ever think of, because we don’t know how everything about the world works.

You could adopt the strategy of lumping all remaining outcomes into a bucket of “something else happens” and estimating the probability of that. In statistics, this is called a “small world idealization”.

In the practical world, we always require a small world idealization. Using probabilistic rationality always requires lumping these unknown factors into an “other” category. But by excluding an unknown set of unaccounted-for factors, we always risk making the analysis so wrong that it is useless. The “other” category contains a probabilistic model for the entire rest of the universe.
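The lumping move can be sketched in a few lines. This is a toy illustration only; the outcome names and probabilities are invented:

```python
# Toy sketch of a small-world idealization (invented numbers).
# We list the outcomes we can think of, then lump everything else
# into a single "other" bucket so that probabilities sum to 1.

def small_world(named_probs):
    """Close the world: add an 'other' bucket covering the remainder."""
    total = sum(named_probs.values())
    if total > 1:
        raise ValueError("named outcomes already sum past 1")
    dist = dict(named_probs)
    dist["other"] = 1 - total  # stands in for the entire rest of the universe
    return dist

project = small_world({
    "ships on time": 0.5,
    "ships late": 0.3,
    "cancelled": 0.1,
})
# The distribution now sums to 1, as the math requires; "other"
# gets whatever mass is left over (here about 0.1).
```

The arithmetic guarantees a well-formed distribution, but whether 0.1 is even roughly the right mass for “other” is precisely the judgment the formalism cannot make for you.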

Statistics and the Replication Crisis

If probabilism were just another mistaken philosophical theory, it wouldn’t matter. Philosophy has lots of silly theories; most of these are harmless, because nobody takes them seriously.

Probabilism being wrong is not harmless, because it’s widely used in:

  • Science

  • Engineering

  • Finance

  • Medicine

  • etc.

When misplaced faith in probabilistic methods leads you to ignore nebulosity, catastrophes can soon follow.

The crises around credibility and replication in several fields are rooted in bad incentives, which reward activities that lead to false conclusions and punish those that could correct them. The substance of the crisis, however, lies in statistics being done incorrectly.

Statistics can be “done wrong” at three levels:

  1. making errors in calculations within a formal system

  2. misunderstanding what could be concluded within the system even if your small-world idealization holds

  3. not realizing you have made a small-world idealization, and taking it as truth

The Problem isn’t Technical Errors

Statistics is a collection of complex and difficult calculation methods, which leaves scientists prone to miscalculation. These level-one errors are not uncommon.

However, if this were the whole problem, it would be straightforward to fix. Unfortunately, the second- and third-level errors require subtler, more nuanced fixes.

No Solution to the Problem of Induction

Second-level mistakes are misunderstandings of what statistics can do. What we often want is a mathematically guaranteed general solution to the problem of induction, allowing you to gain knowledge through a routine mechanical procedure without necessarily understanding the domain. You could feed a hypothesis and some data into a black box, and it would spit out a percentage telling you the degree to which you should believe the hypothesis.

Unfortunately, no magic box can relieve you of the necessity of figuring out for yourself what the data is telling you. For half a century, many scientists assumed there was one, which is a main reason so much science is wrong.

P < 0.05

One such attempt is the famous example of null hypothesis significance testing.

What we would like P to represent is the chance that your hypothesis is false, so that given a high P value, you could be highly confident the null hypothesis is correct. Unfortunately, it doesn’t mean that, and the P value does not tell you anything about how confident you should be.

Few scientists understand P values, because what the value tells you is both difficult to understand and something you almost certainly don’t care about. This is in part due to education: explanations of the concepts are often subtly wrong, and the fact that it is taught leads you to assume it must tell you something useful, or else why would it have been taught?

One might think these misconceptions can and should be fixed with better education, but a correct understanding leaves a void, raising the question of what scientists should do when P values aren’t applicable. Maybe they’ve just chosen the wrong black box.

Some reformers have advocated for alternatives - confidence intervals or Bayes factors, for example. Unfortunately, each of these has its own problems. All of them can be valuable in certain cases, and none of them by themselves can tell you what to believe.
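A little arithmetic shows why no such box can exist for P values alone. The numbers below are invented for illustration: a 0.05 significance threshold, 80% statistical power, and a field where only 10% of tested hypotheses are true.

```python
# Sketch: the fraction of "significant" results that are false
# depends on the base rate of true hypotheses - something the
# P value itself knows nothing about. All numbers are invented.

alpha, power, base_rate = 0.05, 0.8, 0.10

true_positives = base_rate * power         # true hypotheses detected
false_positives = (1 - base_rate) * alpha  # nulls that pass the threshold anyway
significant = true_positives + false_positives

false_discovery_rate = false_positives / significant
print(round(false_discovery_rate, 2))  # 0.36
```

With these assumptions, over a third of findings with p < 0.05 are false, despite the 5% threshold. Change the base rate and the error rate changes too, which is why the threshold alone cannot tell you how confident to be.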

Extraordinary Claims Require Extraordinary Evidence

Why isn’t there a method to tell you how confident you should be in a belief? Because an accurate numerical estimate of how likely you are to be wrong requires a numerical estimate of how extraordinary the claim is, which is often not meaningfully quantifiable. Science is supposed to explore uncharted areas where nobody already knows what’s going on.

Good scientists have good hunches, and can reasonably disagree about what’s likely.

Statistics Cannot Do Your Thinking for You

Avoiding the first and second level errors does not mean you will get correct answers about the real world. It only guarantees that your answers are correct about your formal small-world idealization.

Good statisticians understand the third level error - confusing formal inference with real-world truth. There cannot be a general theory of induction, uncertainty, or epistemology.

Meta-rational Statistical Practice

In poorly understood domains, science requires a meta-rational approach to induction. In a particular situation, what method will give me a meaningful answer? Why or why not? What needs to be done to assure that it does?

The real world applicability of a statistical approach is necessarily nebulous, because the real world is nebulous. There is no correct statistical method that you can choose, it is a meta-rational judgement based on a preliminary understanding of how the idealization relates to reality. Choices can only be more or less predictive, productive, or meaningful.

Acting on the Truth

Rationalist theories generally take action as deriving straightforwardly from your beliefs about the current state of the world and how your actions will affect it. If your beliefs are true, then the optimal action can be derived. Four influential theories of this sort:

  1. Game Theory - you and an opponent alternate in choosing from a small number of possible actions whose effects are fully known in order to achieve a win condition

  2. Decision Theory - you choose a single action out of a small set, which will result in one of a small number of possible outcomes, but you may have only probabilistic knowledge of the world state or outcomes.

  3. Control Theory - the world is taken to be a differential equation, your beliefs are the values of some real-valued variables in the equation, your actions set other variables, and you aim to optimize an objective function.

  4. Means-ends Planning - you derive a program that will result in a well-defined goal state by taking a series of discrete actions, each of which affects the world in a well-defined way.

In each theory, the math is conceptually trivial. This is why epistemology is central to rationalism: if your beliefs are true, then optimal action is guaranteed.
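As a toy illustration of how conceptually trivial the math is, here is a decision-theoretic expected-utility calculation. The actions, probabilities, and utilities are all invented:

```python
# Sketch: decision theory reduces to a few lines of arithmetic -
# once you assume a closed list of outcomes with known numbers.

actions = {
    "take umbrella": {"rain": (0.3, 5), "sun": (0.7, 3)},
    "go without":    {"rain": (0.3, -10), "sun": (0.7, 8)},
}

def expected_utility(outcomes):
    """Sum probability * utility over the assumed outcomes."""
    return sum(p * u for p, u in outcomes.values())

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # take umbrella (expected utility 3.6 vs 2.6)
```

All the difficulty has been hidden in the inputs: where the closed outcome lists, probabilities, and utilities come from is exactly what the theory does not say.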

Determining Effects is Hard

Although the math for computing optimal action is conceptually trivial, it is computationally hard. The number of computational steps required scales super-exponentially as the number of possible actions and outcomes increases. In practice, the correct computation is infeasible.
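A quick sketch of the scaling: with b candidate actions per step and a lookahead of d steps, exhaustive evaluation must consider b^d action sequences. (The numbers below are illustrative only; fuller formulations grow even faster.)

```python
# Sketch: exhaustive lookahead over b actions per step and d steps
# must evaluate b**d action sequences. Illustrative numbers only.

def sequences(branching, depth):
    return branching ** depth

print(sequences(4, 10))   # 1048576 - about a million, borderline feasible
print(sequences(10, 20))  # 100000000000000000000 - utterly infeasible
```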

Instead, heuristics are used, where we consider only a small subset of possibilities. Generally, these are not even approximately correct. Sometimes they work well, but often there’s no available analysis for how well, or for when they do and don’t work.

The effects of any real-world action are subject to innumerable unknown and potentially relevant considerations. To apply one of the theories of rational action, you must enforce a closed-world idealization within which the framework can behave reliably. Rational correctness can only be guaranteed relative to the idealization, and any time an unexpected factor intrudes, the guarantee is broken.

Knowing That and Knowing How

Effective action often occurs without being able to predict or even understand its effects. Take riding a bike, for example.

Almost nobody can correctly explain how riding a bike works; the physics is counterintuitive, shockingly complex, and still a subject of research, not yet fully understood by anyone. Some facts are proven, for instance, to turn left you must first momentarily turn the wheel right. All cyclists do this, few know they do, and many would actively deny it if asked.

Still, you can steer a bicycle effectively while holding actively false beliefs about what you are doing and why it works. Conversely, if you intensively studied bicycle physics and then got on a bike for the first time, you would likely fall over numerous times before getting it right. Your true beliefs would be nearly useless.

Cognitive scientists make a useful distinction between knowing that and knowing how.

  • Knowing that, or propositional knowledge, consists of true beliefs

  • Knowing how, or procedural knowledge, consists of effective action

Rationalists have argued that procedural knowledge can be reduced to propositional knowledge: riding a bicycle does consist of having true beliefs about a set of propositions about physics; we just don’t have conscious access to them.

This can’t be definitively disproven, but there is strong evidence against it. 

  • There is a problem of computational complexity. Neurons compute surprisingly slowly, and cycling sometimes requires rapid reactions.

    • There doesn’t seem to be enough time for your brain to perform the logical deductions necessary to derive new conclusions.

  • Second, there is extensive neuroscientific evidence that propositional and procedural knowledge are stored differently in the brain. 

Unhelpfully, we know little about how brains store know-how. Useful intuition may come from AI, where reinforcement learning programs have learned to play complex games using artificial neural networks; if any propositional knowledge is embedded within them, no one has found it. They seem to be fully procedural.

Rationalist theories of action are powerfully useful in certain highly restricted situations, but are overall inadequate both as descriptive theories of what we do and as normative theories of what we should do.

Procedural knowledge typically cannot be formally analyzed, yet proves reliably effective in practice.

Ontologies of Action

Rationalist theories mainly consider the cognitive process of deciding what action to take, with little to say about what it means to take an action. Once you decide on an action, you should take it; but the theory doesn’t explain what “taking” entails. Implicitly, “taking” is atomic: you just do it, and then you are done.

In the rationalist ontology, actions exist outside of space and time. It does not consider that you are taking action here and now. This is the power of rationality: its ability to abstract and generalize. It provides universal solutions that are equally correct anywhere and at any time. This is also its limitation: rationality is oblivious to the innumerable specifics.

Real-world activity is not separable into discrete units of well-defined types. Actions are not objective features of reality. What an action is or isn’t depends on purpose and circumstance.

Overcoming Post-Rationalist Nihilism

Realizing that rationalism is wrong can be devastating, particularly if you have built an identity around it.

Learning to master rationality is an engrossing way of being. During the educational phase of your life, it can absorb most of your attention and energy. It is natural to construct your identity, your understanding of self and the world, around rationality. It is also natural to take rationalism for granted as your understanding of what rationality is and how it works; and therefore where you are, and how you work.

Post-rationalist rage, anxiety, and depression can be destructive and awful. It is too common among smart, open-minded, scientifically and technically educated people. Fortunately, it is not necessary.

Post-rationalist nihilism can be addressed through the recognition that:

  1. Rationality doesn’t always work, but it often works. It doesn’t work at all in some domains, but remains immensely valuable in many. Being capable of rationality is good.

  2. Rationalism is a mistaken theory of rationality, but a better understanding is available. Meta-rationalism explains how and why rationality works when it does.

  3. Applying the more accurate understanding can level up your skillfulness in applying rationality.

Taking Reasonableness Seriously

Systematic rationality often works, but not in the way that rationalism mistakenly supposed. So, how?

The answer depends on an understanding of how effective thought and action works in practice.

Reasonableness works directly with reality, whereas rationality works with formalisms. Rationalism assumes that a formalism reflects reality without effectively addressing how. To understand how rationality depends on reasonableness to connect with reality, we need to understand reasonableness.

Cross the River When You Come to it

The rationalist framework overlooks contextual resources, which makes rationality artificially difficult.

Every problem faced by rationalism can be traced to nebulosity, which gives rise to innumerable potentially relevant factors that cannot all be accounted for in a bounded formal framework. However, almost none of these potential complexities arise in any specific situation, and those that do often turn out to be irrelevant at the time. Generally, you can observe the relevant factors as they play out, and that observation is adequate to resolve the difficulties that a generalized rational theory could not.

Of course, reasonableness is error-prone. When it goes wrong, you may need to backtrack and clean up your mess. Often it helps to plan ahead. Other times, reasonableness is inadequate and it is better to apply rationality.

Still, most everyday activity can be handled reasonably.

Not About the Inside of Your Head

Behaviorism views events in the world as eliciting a response from an organism.

Cognitivism inverts this: it is the view that activity is best understood in terms of mental representations and the mechanisms that manipulate them, which cause us to take action.

Interactionism holds that causality rapidly crosses back and forth between perception and action. Understanding activity requires taking into account both the environment and the people within it.

To the extent that rationality is a matter of action as well as thought, a good understanding must take circumstance and context into account. Most rationalisms are cognitivist, and therefore most ignore circumstances in favor of mental mechanisms. This is one reason they fail to accurately model real-world rational practice.

Cognitive science aims to determine what sort of machinery is in the brain which underlies rationality. No doubt there are such mechanisms, but we don’t know enough about them to improve rationality much. Fortunately, we don’t need to know what’s happening in the brain in order to improve the ways we think and act.

Not a Dual-process Theory

People are rational sometimes, but certainly not all the time. Maybe there’s a part of us that is rational, and part of us that isn’t?

This is an attractive theory because it suggests we can be consistently rational, or at least rational more often, if we can strengthen our rational part in relation to the non-rational part.

What is this other part? Rationality has been contrasted with qualities such as irrationality, emotionality, intuition, creativity, superstition, religion, fantasy, imagination, self-deception, unconscious thought, and subjectiveness.

Rationalists tend to collapse all these non-rational phenomena into a homogenous, inferior category. Psychologists call this a dual-process theory: there are just two primary mental faculties or modes of thought - the rational and the other.

Thinking Fast and Slow

A version of a dual-process theory is popularized in Daniel Kahneman’s Thinking, Fast and Slow. He describes two systems:

  1. System 1 - fast, intuitive, and emotional

  2. System 2 - slow, deliberative, and logical

According to this theory, irrationality is explained by “cognitive biases”: System 2 is lazy, so System 1’s fast heuristics are allowed to act in situations that call for deliberation.

Ideas like this are pervasive in our folk understanding of thinking. They make it easy to misunderstand “reasonableness” as System 1, or non-rational. This is wrong because:

  •  Reasonableness does not show most characteristics typically ascribed to the non-rational cluster.

  •  The distinction between reasonableness and rationality is about what people do, not about what is happening inside brains. It does not depend on any theories about cognitive processes, and does not offer one.

  • Reasonableness and rationality are both cultural practices, not mental or neural processing modules.

  • Reasonableness, rationality, and meta-rationality are not exhaustive and do not overlap with irrationality or emotions.

We are forced to do calculus by re-using mechanisms evolved for finding berries.

For hundreds of millions of years, brains evolved for routine practical activities such as collecting food and avoiding predators. Systematic rationality is a modern product and is presumably mainly the result of cultural evolution rather than biological evolution. There hasn’t been enough time for the brain to evolve to develop a separate system for rationality.

It is no wonder that we are bad at rationality.

We don’t have a good understanding of what neurons do. We do know they are extremely slow relative to their silicon counterparts in computers. It takes tens of milliseconds for a neuron to do anything, and hundreds to perform basic mental operations. This rules out deeply sequential processing models, including the long chains of steps that logical inference would require.

We also know we have lots of neurons, each of which is connected to lots of others. Estimates are on the order of a hundred billion neurons with a quadrillion connections between them.
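The timing and scale claims above can be made concrete with rough arithmetic. The 10 ms step and 300 ms judgement figures below are assumed round numbers for illustration, not measurements:

```python
# Rough arithmetic on neural timing and scale (assumed round numbers).
NEURON_STEP_MS = 10        # tens of milliseconds for a neuron to do anything
JUDGEMENT_MS = 300         # a quick perceptual judgement, a few hundred ms

# Depth: only a few dozen sequential steps fit inside a judgement -- far too
# few for a long serial chain of logical inference.
max_sequential_steps = JUDGEMENT_MS // NEURON_STEP_MS
print(max_sequential_steps)          # 30

# Width: with ~1e11 neurons and ~1e15 connections, each of those few steps
# can weigh an enormous number of candidate interpretations in parallel.
neurons = 10**11
connections = 10**15
print(connections // neurons)        # ~10,000 connections per neuron
```

The point of the sketch: the hardware supports shallow, massively parallel processing, not deep sequential inference.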

What we are good at, extraordinarily good at, is making sense of situations using our contextual understanding of meaning. This sort of understanding was useful in our evolutionary history, while calculus would not have been.

So it seems to follow that much of what the brain does must involve shallow consideration of an extremely large number of possible meanings. Almost all of these possible interpretations are wrong; your brain finds the relevant ones based on your experience and uses them to make sense of your situation.

The Ethnomethodological Flip

Rationalism understands everyday reasonableness as a defective approximation of formal rationality. We will understand formal rationality as a specialized application of everyday reasonableness.

This flip is best developed in the field of ethnomethodology, which is the empirical study of reasoning and activity, in both an everyday and technical sense.

Flipping the relationship between reasonableness and rationality may be disorienting. Rationalists might respond that our capacity for reason runs on biological hardware ill-suited to rationality. But from a meta-rationalist perspective, everyday reasonableness provides resources that technical rationality necessarily depends on.

This flip implies a change in explanatory priority.

Rationalism supposes that formal rationality could in principle serve as a complete mechanism for thinking and acting. However, formal reasoning is unable to bridge the gap between formalism and reality on its own. Only reasonableness can.

Reasonableness makes direct contact with nebulous reality in a way that rationality can’t. Abstracting from real life into the formal realm depends on reasonable perception, judgement, and interpretation.

Explanatory priority is not a value judgement. The meta-rationalist agenda is not to declare rationality inferior to reasonableness, but to use each for different purposes, and to show their mutual dependence in technical practice. Neither is uniformly superior; both are useful tools at times.

Reasonableness is Meaningful Activity

Reasonableness is a quality of activity. Making an omelet for breakfast has the quality of being reasonable; making a sandcastle for breakfast doesn’t.

Activity is a flow involved in a specific situation

Activity is a seamless flow that continues throughout life. At any moment, activity is involved in a unique, meaningful situation: at a meaningful time, in a meaningful place, with meaningful social or material accompaniments; from all of which it is inseparable. You are always already doing something.

Reasonable activity is in unceasing, intimate contact with the world. You continually perceive relevant aspects of your situation and adjust your activity to account for contextual features.

For reasonable activity, the context is both the problem and the resource for solving it.

Reasonable Activity is Immediately Meaningful

To count as reasonable, activity must be meaningful in at least two ways:

  • Concretely purposeful - you are doing something for a reason that is present in the local specifics of your situation.

  • Makes sense - it is explainable and orderly. It is not chaotic, random, arbitrary, or irrational.

Rationality is Mostly Not About Activity

The criteria of rationality typically apply to abstract solutions to formal problems. A deduction is rational if it is in accord with the rules of logic.

Rationality is powerful because it is not about specific activities and situations. If a rational analysis is correct, context doesn’t matter, everything is abstracted and detached. Anyone can verify the solution, and the activity that led to it is irrelevant.

The power of rationality comes at the cost of disconnection from reality: formal rationality can never be in direct contact with the world. In rationalist usage, an action is an output of a mathematical computation, a member of a well-defined set of actions. Once this optimal formal action is computed, rationality is finished.

Meaninglessness is a Key to Rationality

Rational knowledge and methods are not purpose-specific, and often make no sense. This is the source of their power, but also of their limitations.

Rationality mostly produces or applies theoretical knowledge that is independent of specific purposes. Such knowledge can have practical applications, but the theories themselves are general-purpose. Rationality is disinterested, as it should be.

Rational work is not pointless. You do it for reasons, but they are typically remote in space, time, or abstraction level.

Rationality is not meaningless, but meaninglessness plays a central role in it. This is key to formality because a formal solution must remain valid under arbitrary changes in meaning.

Rational inference often makes no sense, and that senselessness is part of what gives it its extraordinary power. Demanding a reasonable account from rationality would forfeit its value.

You are Accountable for Reasonableness

Reasonableness has a normative force - you should be reasonable. By and large, everyone will hold you accountable for being reasonable.

Rationality also has a normative force - if you do professional work, you should apply professional rationality.

However, the normative forces of reasonableness and rationality are quite different in nature. The difference lies in where the norms come from and how they apply.

Rational norms are absolute, abstract, and universal. They derive from ultimate principles. They do not consider the idiosyncratic meanings of specific situations. They are non-negotiable and do not permit interpretation.

Reasonable norms are contextual, purpose-dependent, and situation-specific. Reasonableness is realistic in recognizing that there are always innumerable potentially relevant considerations. Which considerations are meaningful is always subject to interpretation, and often, negotiation.

Reasonableness is Recursive

What counts as reasonable? Something is reasonable if you can give a reasonable account of its being reasonable. But what makes that account reasonable?

A term is recursive if it is defined in terms of itself. Reasonable is recursive, and its recursive structure can be observed in negotiations about whether something is reasonable.

Reasonableness has no Ultimate Ground

How can reasonableness be determined? Does this recursion ground out? In reality, relevant factors are innumerable and absolute truths are scarce, so there can be no ultimate grounding. The negotiation of reasonableness may be non-terminating. 

Yet we generally reach consensus quickly rather than regressing infinitely, because we don’t pursue anything infinitely. Where disagreement persists, the matter has to be dealt with reasonably by other means: dropping the question, agreeing to disagree, taking a vote, or someone claiming the authority to make a final judgement. These are standard methods of dispute resolution, whose reasonableness is itself always negotiable.
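The recursive, negotiated structure can be caricatured in code. Everything here (the account strings, the challenge table, the three-round cutoff) is invented for the sketch:

```python
# Toy model of recursive reasonableness. An account counts as reasonable if
# it goes unchallenged, or if the account given in response to a challenge is
# itself reasonable. With no ultimate ground, the regress is cut off by a
# conventional dispute-resolution move after a few rounds.

FALLBACK = "drop it, agree to disagree, vote, or defer to authority"

def is_reasonable(account, challenges, depth=0, max_rounds=3):
    if depth >= max_rounds:
        return FALLBACK                 # negotiation did not ground out
    response = challenges.get(account)
    if response is None:
        return True                     # unchallenged: accepted as reasonable
    return is_reasonable(response, challenges, depth + 1, max_rounds)

# A challenge answered by an unchallenged account: the regress stops quickly.
quick = {"making an omelet": "it's breakfast time"}
print(is_reasonable("making an omelet", quick))   # True

# Mutual challenges would regress forever; the fallback resolves the dispute.
endless = {"A": "B", "B": "A"}
print(is_reasonable("A", endless))
```

The cutoff is doing the philosophical work: in practice negotiation terminates by convention, not by reaching bedrock.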

Reasonableness depends on assumed good faith and moral truth; there is no guarantee of either. There is also no guarantee of eventual correctness, because there can’t be one. Hoping for such a guarantee is the fallacy of rationalist epistemology.

Fortunately, reasonableness usually more-or-less coincides with what is moral and pragmatically effective (with exceptions, of course). 

Reasonableness provides generalizations: guidelines subject to usualness conditions. This non-systematicity is what gives reasonableness both its power and its limits.

Rationality faces the problem of not being able to treat everything systematically due to nebulosity. Reasonableness provides methods for working effectively with nebulosity that aren’t systematic, that come with no guarantees, and are prone to failure.

There is No Method - Only Methods

The holy grail of rationalism is a single method, the guaranteed correct way of conducting rational thought and action. There isn’t one.

No such method exists for reasonableness either, but reasonableness isn’t looking for one. Being reasonable means recognizing that no such method exists and doing whatever it takes anyway. You are likely to face similar problems a little differently each time, but you don’t have to do much innovating. Most of the time, you can just see what to do. Reasonable methods are innumerable and nebulous, without well-defined distinct procedures. You just look and see and do the next thing.

This open-ended improvisational quality is also ultimately true of rationality. Scientific breakthroughs often depend on duct tape.

Improvisation Provides Efficient Generalization

In routine activity, it is usually reasonable to assume you can work out details as they come up. If you get something wrong, you’ll be able to compensate for it.

  • Relying on improvisation provides tacit generalization. Your intention covers an innumerable space of unanticipated eventualities efficiently, without having to think of them in advance.

  • The rationalist approach to generalization involves explicit universal quantification. You model all the actions and events you consider possible, with all of their possible outcomes, and choose the best ones. This is computationally expensive, but sometimes justified. In the face of uncertainty, rational analysis also depends on a closed-world idealization, implicitly ignoring possibilities that were not considered.
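The cost difference between the two approaches above can be sketched with toy numbers (the branching figures are invented):

```python
# Why explicit universal quantification is expensive: with `a` candidate
# actions, `o` possible outcomes per action, and `h` steps of look-ahead,
# exhaustive planning must consider (a * o) ** h branches.
def exhaustive_branches(a, o, h):
    return (a * o) ** h

print(exhaustive_branches(4, 3, 2))    # 144: fine for a tiny problem
print(exhaustive_branches(4, 3, 10))   # 61,917,364,224: intractable

# Improvisation instead commits to one step and re-decides with fresh
# perception, paying roughly a * o per step: linear in the horizon.
def improvised_steps(a, o, h):
    return a * o * h

print(improvised_steps(4, 3, 10))      # 120
```

The exponential blow-up is why the rationalist approach also needs a closed-world idealization: even the exhaustive enumeration only covers the possibilities someone thought to model.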

Repairing Breakdown

The basic approach of reasonable activity is to continue in the obvious way until you run into obvious trouble. Obvious troubles normally admit reasonable, trivial repairs.

Occasionally, we face troubles that constitute breakdowns, troubles that we can’t repair with routine methods. Occasional breakdown is inevitable due to nebulosity, and also due to the open-ended improvisational approach.

It is only when we encounter routine’s atypical defects, not its typical smooth flow, that it becomes memorable or noteworthy. This can reinforce the misimpression that reasonableness is a defective approximation of rationality.

Because the smooth operation of routine reasonableness seems insignificant, rationalist theories of action are mainly theories of problem solving. They deal with the atypical but significant condition of breakdown.

Faced with a breakdown in routine reasonableness, you have to come up with a non-obvious fix, which will involve some new thinking. Breakdowns can force explicit reflection that draws on theoretical knowledge. Rationality can be good for that.

However, rationality can also be routine, when you apply familiar systematic methods in familiar ways and they yield the expected sorts of results.

Just as breakdown in routine reasonableness can trigger rationality, breakdown in routine rationality can trigger the groundless, open-ended curiosity of meta-rationality.

Meaningful Perception

The usual rationalist assumption is that perception delivers an objective description of your environment, independent of purpose and interpretation.

From the previous reasoning, this seems impossible. Fortunately, it isn’t what we need from perception.

In practical life, we want perception to tell us what the meaningful aspects of our situation are, and what ongoing action they suggest. The answers depend on what we know, what we can do, what activity we are engaged in, and what else is happening.

Unsurprisingly then, scientific study of perception shows that it does not attempt to deliver objective descriptions, and that perception operates on a task-dependent, contextual, meaning-saturated, and knowledge-saturated basis.

So, what is the division of labor between perception and rationality?

We’ve now reasoned that perception is an aspect of activity, not a separate, encapsulated function. This implies that what we perceive is inevitably affected by what we are doing.

Rationality depends on perception (among other things), so perception is used in building objective, rational theories. However, this use is mediated by reasonableness, which limits how objective theories can be.

Seeing with a Purpose

Vision is not an input device like a digital camera. In that setup, causality flows in one direction, with photons arriving at the sensor, through various processing, and finally through a cable to a computer. It delivers objective information in the sense that it’s the same regardless of what program the computer is running. This is called bottom-up information flow.

Human vision also involves top-down information flow. Conscious reasoning and processes can causally affect which visual information gets processed at lower, pre-conscious stages. Most obviously, we can move our eyes to choose what to look at.

There are many other ways we can direct our visual processing, such as visual attention.

What we’ve learned from visual psychology suggests that seeing involves learned, task-specific skills, and is contextual and purposive. This makes it a good fit for everyday activity, not so much for objective rationality.

Seeing with an Ontology

Much of what we see, we see as something. Bottom-up vision has done the work in identifying things for you.

What you see something as depends on your knowledge, context, and purpose. You can only see things as something already part of your ontology. Although bottom-up processes can do much of the work for you, your top-down direction also plays a critical role.

Because perception evolved to enable purposeful activities, it is able to reveal meaningful functions and potentials. Those are a matter of ontology: not just categories, but also how you separate the world into objects, what properties you see them having, and how they relate to each other. It’s not just about the objects, but also intentions, actions, events, environments, and possibilities.

Routine activity is easy because most of the time we can see what to do. We see affordances - cues to what actions are possible and what their effects will be. We can, in effect, see into the future.

Seeing Nebulosity

Perception is inherently nebulous. Perception is nebulous because reality is nebulous. This is an ontological issue, not an epistemological one.

Fortunately, you only need to perceive precisely enough to accomplish your task.

It seems that we have perceptual processing at many different abstraction levels, and there exists no objective and well-defined “neutral observation vocabulary” as the logical positivists hoped. There are few, if any, objective and non-nebulous macroscopic properties to be perceived.

The Purpose of Meaning

…is to get stuff done.

The typical rationalist view is that the purpose of language is to state facts and theories. But that is mostly not what language is for.

Stating truths is only occasionally useful, and usually only as a means for accomplishing something else. Language is not a defective approximation to an ideal formal language.

Language is the right tool for dealing with the world we live in. One that is nebulous, localized, and meaning-laden.

It is not that everyday language is merely “good enough,” with a properly precise language being better; everyday language is precisely adapted to its proper function, which is to get reasonable work done.

Logical positivists hoped to start from a theory of meaning developed for math, extend it to science, then to other academic subjects, and finally to rectify everyday language and thought.

The ethnomethodological flip starts instead from the ordinary usage of language, develops an ontology that broadly covers reasonable activity, and then develops an understanding of science and mathematics.

This may seem backwards, but as human beings, we don’t start with science. Our ability to do science relies on our ability to eat and sleep, so that’s where our understanding of science has to start too.

Reasonable Believings

Categories are a matter of ontology (how things are).

Beliefs are a matter of epistemology (how we know truths).

The ontology of belief itself is prior to epistemology. We ought to understand what beliefs are before making theories about whether or not they are true.

Rationalist ontology supposes that beliefs are definite things living in your head, and that the set of proposition/truth-value pairs forms a single well-defined category. The rationalist ontology of belief is simple, but wrong.

Understanding Believing Empirically

A better alternative to the rationalist ontology of belief must understand believing empirically as a diversity of complex, contingent, and natural phenomena. Such an understanding is nebulous and complicated, but with adequate empirical grounding, can be roughly right.

Whether or not we believe something, what that belief is, and what it means to believe it, are all nebulous. 

The collection of beliefs ranges from concrete and specific ones to abstract and general ones.

Believing is a Reasonable Activity

The ethnomethodological flip redirects attention from hypothetical things residing in our head to observable activities.

Believing shares the characteristics of other routine, reasonable activities.

  • You are accountable for believing

  • What you believe depends on context and purpose

  • What you believe and what it means to believe are nebulous and variable

  • Reasonable believings are often adequate to get concrete and practical work done

  • Believing is a public and social activity

  • Believing is routine, often goes wrong, and then almost always gets repaired

  • Believing is often improvised to suit unique circumstances

Believing has a feeling component, in the sense both of emotion and of bodily sensation. Philosophers analyze belief as a propositional attitude: a stance toward a statement about something. This “attitude” is not a mere assignment of a truth value; it is a complex of contextual and circumstantial emotions, associations, and actions.

Believing often means having feelings about an idea. Feelings are notoriously complex, vague, contextual, purpose-relevant, and changeable. Beliefs, considered as feelings about ideas, share these properties.

Reasonable Ontology

An ontology is a tool, a way of relating to the world that enables us to do the things we care about.

Rationality depends on a perfectly sharp ontology, because that makes absolute truths possible. Truths can enable activities that we care about.

In a formal ontology, things definitely belong to a category or don’t, p or not p.

Properties have precise values, and reality divides into objectively separable entities that stand in unambiguous relationships with each other. All of this holds independently of context and purpose: a thing that belongs to a category remains in it wherever you take it and whatever you do with it.

Rationalism assumes that the world works this way. In the everyday world, it doesn’t.
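The contrast can be caricatured in code. The objects, predicates, and purposes here are all invented for the sketch:

```python
# A formal ontology: category membership is context-free, p or not-p.
def is_table_formally(obj):
    return obj["category"] == "table"

# A reasonable ontology: what something counts as depends on purpose.
def counts_as_table(obj, purpose):
    if purpose == "eating dinner":
        return obj["flat_top"] and obj["stable"]    # a crate will do
    if purpose == "cataloguing furniture":
        return obj["category"] == "table"
    return False

crate = {"category": "crate", "flat_top": True, "stable": True}
print(is_table_formally(crate))                         # False, everywhere, always
print(counts_as_table(crate, "eating dinner"))          # True: it serves tonight
print(counts_as_table(crate, "cataloguing furniture"))  # False
```

The same object gets different answers under different purposes, which is exactly what a formal ontology rules out.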

Sure, the world described by quantum physics is independent of context and purpose, but that ontology is useful only in rare circumstances, and even then our use of it is contextual and purpose-laden. Our everyday understanding of the world is nebulous: there is no uniform, accurate, context-free, purpose-free, objective ontology for the everyday world.

Reasonableness works with nebulous, tacit, interactive, accountable, and purposeful ontologies and truths, those that enable everyday activity.

  • Nebulous means something can be pretty much a particular way without there being an ultimate truth of the matter

  • Tacit means that the use of an ontology generally goes unnoticed and unexpressed

  • Interactive means that ontology is an aspect of activity

  • Accountable means that if you treat something as a particular thing, you may be expected to explain why it is that particular thing.

  • Purposeful means that ontologies are tools for getting work done, and you may use different ones on different occasions depending on context and purpose.

Taking Rationality Seriously

Caring about rationality enough to want to improve its operation requires an empirically accurate and practical understanding of when and why it works.

What Understanding Rationality Should Do

Most rationalisms involve impossible metaphysical representations, and encounter difficulties in practice. A better explanation should address how rationality, as a real-world practice, handles each issue.

An understanding of rationality should explain how, when, and why it works.

  • It should account for observed facts about how rationality works in practice

  • It should be useful, and enable us to do rationality better

  • It should eschew metaphysics in favor of naturalistic explanations where possible

The standard narrative of how rationality guides practical work is as follows:

  • Abstraction - you make a formal model of the problem

  • Problem Solving - you apply rational inference to the formal model to solve it

  • Application - you apply the formal solution to the real-world problem

This is not exactly wrong, but an issue remains: bridging metaphysical abstractions and physical reality.
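The three steps can be illustrated with a toy example (the scheduling scenario and task names are invented). Note that steps 1 and 3 are informal, reasonable work; only step 2 is formal:

```python
# 1. Abstraction: reasonably judge which tasks matter and reduce each one
#    to a single number (days until due), discarding everything else.
tasks = {"reply to Sam": 2, "file expense report": 5, "fix login bug": 1}

# 2. Problem solving: within the formal model, sorting by deadline
#    (earliest-deadline-first) is a provably sensible ordering.
schedule = sorted(tasks, key=tasks.get)
print(schedule)   # ['fix login bug', 'reply to Sam', 'file expense report']

# 3. Application: turning "fix login bug" into concrete action in a specific
#    situation is again informal work the formal solution says nothing about.
```

The formal step is trivially correct; the abstraction and application steps are where the gap between formalism and reality gets bridged.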

The J-Curve of Development

Understanding how individuals develop into rationality, then into meta-rationality, helps us understand what rationality and meta-rationality are. 

Although the process is continuous, it is helpful to divide it into stages.

  • Pre-rationality - you can be reasonable, but have little to no capacity for formal reasoning

  • Developing Formality - you learn to follow formal norms in preference to reasonable ones

  • Basic Rationality - you can model the real world using formal systems with conventional patterns of correspondence

  • Advanced Rationality - you can model the real world using formal systems where standard conventional patterns don’t apply

  • Meta-rationality - you can dynamically revise rational, circumrational, and meta-rational processes

The development from reasonableness through rationality and then meta-rationality follows a J-curve, with time on the horizontal axis and the role of meaningfulness (context, purpose, nebulous specifics) on the vertical. Picture reasonableness starting partway up the y-axis: meaningfulness first drops as you master formality, then rises past its starting point as advanced and meta-rationality bring context and purpose back at much greater scope.

Eliminating meaning is essential to formality

Rationality gains its power from transcending context, purpose, and nebulous specifics to create universal, abstract, metaphysical systems. You must become comfortable with meaninglessness.

This meaninglessness is why formalism works. To become rational, you must wield the power of meaninglessness: the power to strip the world of context and purpose and treat it as a collection of abstractions.

As you develop advanced and meta-rationality, context and purpose come back into the picture. Meta-rationality takes an enormously broader view than mere reasonableness. It considers contexts and purposes with potentially vast scope across space, time, and complexity.

This step is also emotionally difficult. The vastness and groundlessness of the meta-rational way of being provoke agoraphobia and vertigo until the transition has been made.

What Makes Rationality Work?

We do.

There is no answer to rationalism’s central question that is elegant, abstract, or universal and explains why it must work. Rationality only works when we do work to make it work, and our work doesn’t always work. There are several sorts of work we do:

Circumrationality

  • Formal rationality cannot make contact with nebulous reality, and this disconnection is how rationality gains its power.

  • This leaves a gap between reality and formalism that needs to be bridged by a dynamic interface.

  • Circumrationality is the non-rational work we do at the margins of rationalisms to actualize correspondences between the two worlds.

  • Circumrationality can work more or less well, and a major meta-rational task involves revising the rationality/circumrationality relationship when it breaks or improvements can be made.

Procedural Systems

  • Procedural systems mandate rules for action that cover all likely eventualities within their domain.

  • You can execute a protocol and it is generally unambiguous whether or not you have done so correctly.

  • Meta-rationality reflects on a procedural system’s adequacy.
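A toy sketch of why procedural correctness is unambiguous (the protocol steps here are invented):

```python
# A procedural system specifies the steps; whether a performance conforms is
# a mechanical check, with no room for interpretation.
PROTOCOL = ["verify identity", "check allergies", "administer dose", "log record"]

def executed_correctly(performed):
    return performed == PROTOCOL    # exact steps, exact order, nothing extra

print(executed_correctly(PROTOCOL))                  # True
print(executed_correctly(PROTOCOL[1:]))              # False: skipped a step
print(executed_correctly(list(reversed(PROTOCOL))))  # False: wrong order
```

Whether this protocol is *adequate* to its situation is a different question, and that is the one meta-rationality asks.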

Sanity Checking

  • Sanity checks can be used to reject nonsensical results from feeding “sort-of” truths into rational inference.

    • These kind of truths are usually all that is available, but the correctness guarantee of rational inference depends on absolute truths, so we have to accept that rationality often comes to wrong conclusions.

  • Meta-rationality reasons about how specific systems of formal reasoning behave in the face of nebulosity.
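A minimal sketch of the sanity-check pattern (the growth model, numbers, and bounds are all invented):

```python
# A formally valid model fed "sort-of" truths: a line fitted to childhood
# growth data. Inside its range it is useful; extrapolated, it is nonsense.
def predicted_height_cm(age_years):
    return 50 + 6.5 * age_years

# The sanity check wraps the inference and rejects conclusions no reasonable
# person would act on, instead of trusting the formal guarantee.
def sane_height_cm(age_years, low=40, high=230):
    estimate = predicted_height_cm(age_years)
    return estimate if low <= estimate <= high else None

print(sane_height_cm(10))   # 115.0: plausible, accept it
print(sane_height_cm(80))   # None: the model says 570 cm; reject the result
```

The bounds themselves come from reasonable background knowledge, not from the formal system being checked.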

Standardization

  • Standardization is the work that reworks the physical world to make it more closely fit a rational ontology. It involves getting real-world things to conform to rational criteria as closely as possible to enable rational inference.

  • Designing standards involves meta-rational reasoning about the consequences of the inevitable nebulosity that remains.

Shielding

  • Shielding isolates a situation from factors a rational framework ignores, so its closed-world idealization is more likely to hold. It involves making as many as possible of the innumerable potentially relevant factors irrelevant.

  • Meta-rationality can help figure out what sorts of shielding a system needs.

Why Does Rationality Work?

It works for different reasons in different situations. We do lots of work to make rationality work, and in each case it can be obvious why it works, but there is no general explanation.

However, without leaning on abstract metaphysics, we can say that rationality works, when it does, because the world is patterned as well as nebulous. Often rationality doesn’t work, because we can’t force reality to conform to arbitrary rationalisms; when it does work, it’s because we’ve found patterns that make it work well enough.

Advanced Rationality

There are types of work that remain within a rational system but go beyond the basic rationality that can be taught explicitly.

Advanced rationality shades into meta-rationality. It relaxes formalism’s shielding of inferences, and allows room for context and purpose.

Non-procedural Rationality

Some problems can be solved in more than one way, with many methods each relevant at different points in the solution. A solution may involve combining several methods in a novel arrangement, or inventing new methods altogether.

An example would be mathematical proofs. A proof can often be reached with different tools, but there is no single procedural way to devise all proofs. Sometimes a proof does not exist.

Context and rationality are both required to decide which approaches may work best, and when to reasonably give up trying.

Ascending the J-curve

As a technical professional it is possible to ignore context, purpose, and nebulosity throughout your career. However, this means the usefulness of your work depends on others abstracting reality into formal problems for you, then figuring out how to turn your formal solutions into practical work.

Typically, becoming more senior brings you closer to the volatile, ambiguous, and unknowable complexities of reality. Increasingly, you are required to make decisions about purposes and context.

Advanced rationality involves recognizing that solving difficult real-world problems requires multiple models, exploiting ad hoc constraints, limiting inference, and monitoring solutions. Understanding when and why formal procedures work becomes more important.

Shading into Meta-Rationality

The boundary between advanced rationality and meta-rationality is nebulous, but meaningful distinctions can be made.

Advanced rationality mainly works within rational systems, and adopts the ontology the system assumes. You typically do not use an ontology from one system and methods from an entirely different system.

Ontological Remodeling

Ontological remodeling involves the reconfiguration of individuation criteria, categories, properties, and relationships. Meta-rationality is itself an ontological remodeling of rationality.

We must recognize:

  1. At the meta level, moving from rationalism to meta-rationality requires a remodeling of the ontological categories of rationality, such as truth, beliefs, deductions, etc.

  2. At the object level, ontological remodeling is a major aspect of the subject matter of meta-rationality.

So the shift from the rationalist to the meta-rationalist view is itself an instance of meta-rationality. This implies that meta-rationality is required to understand meta-rationality; it is a prerequisite for itself, which makes the shift difficult.

What you must do then, is proceed in a spiral. Gaining an approximate understanding of a subset of a subject makes it possible to grasp more of it. Repeated passes are required to increase breadth and depth, and eventually reach mastery.

The Extinction and Survival of Categories

During ontological remodeling, categories may:

  1. Disappear completely

    • There is nothing in it - the entities in the group don’t exist or have nothing meaningful in common.

  2. Convert from formal to informal status

    • It is too nebulous for any formal account of it to work, but may still be heuristically useful.

  3. Get a new formal meaning

    • It gains a new formal meaning as we discover new knowledge that allows for better categories.

These outcomes lie on a continuum. Categories may be dropped by experts but retained in popular language, or expert and popular understandings of a category may diverge.

To approximate where some rationalist categories lie on this continuum during meta-rational remodeling of rationality:

  • “Truth” - somewhere around 2.2

    • Best thought of as many different vaguely-similar nebulous ideas

  • “Belief” - somewhere around 1.6

    • A mostly useless and misleading category, though necessary for everyday communication

  • “Rationality” - somewhere around 2.7

    • A grouping of usefully similar and reasonably well-defined methods that we should think about differently during meta-rational remodeling.

The book is unfinished and this is unfortunately where it ends, but I will pick this back up if and when it is completed.
