What’s the LessWrongist philosophy of mathematics?

I consider myself to subscribe to "LessWrongist" philosophy: Bayesian epistemology, words as clusters in thingspace, a materialist/reductionist view of consciousness and metaethics. These views seem to me to dissolve the typical philosophical nonsense one often hears.

But there is one topic on which I don't know what to think without a little nonsense. I have an intuition that arithmetical statements are true or false independently of my ability to prove or disprove them. I have seen a similar view from Scott Aaronson, the famous rationalist-adjacent computer scientist. But this intuition doesn't seem very reductionist.

Consider the statement

On the one hand, every Turing machine either halts or doesn't. If we apply this to all the -state TMs, it determines the value of the Busy Beaver function, and therefore its SHA-256 hash and that hash's last digit.
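To make the "every machine either halts or doesn't" picture concrete, here is a minimal sketch (my own illustration, not from the post) that brute-forces the Busy Beaver shift function for 2-state, 2-symbol machines. Since S(2) = 6 is known, a fixed step cap suffices at this size; for larger state counts no computable cap works, which is exactly the problem the post describes. The function names and encoding are my own choices.

```python
import hashlib
from itertools import product

STATES = [0, 1]   # two machine states, "A" and "B"
SYMBOLS = [0, 1]  # binary tape alphabet
HALT = -1         # designated halt state

def run(machine, max_steps):
    """Simulate a TM given as {(state, symbol): (write, move, next_state)}.

    Returns the number of steps if it halts within max_steps, else None.
    The halting transition also writes and moves, and counts as a step
    (the standard Busy Beaver convention).
    """
    tape = {}
    pos, state = 0, 0
    for step in range(1, max_steps + 1):
        write, move, nxt = machine[(state, tape.get(pos, 0))]
        tape[pos] = write
        pos += move
        if nxt == HALT:
            return step
        state = nxt
    return None  # did not halt within the cap

def bb2_shift(max_steps=100):
    """Max steps taken by any halting 2-state machine, found by brute force."""
    actions = [(w, m, s) for w in SYMBOLS for m in (-1, 1)
               for s in STATES + [HALT]]
    best = 0
    # A transition table assigns one action to each of the 4 (state, symbol)
    # pairs, so there are 12**4 = 20736 machines to check.
    for table in product(actions, repeat=4):
        machine = dict(zip(list(product(STATES, SYMBOLS)), table))
        steps = run(machine, max_steps)
        if steps is not None:
            best = max(best, steps)
    return best

bb2 = bb2_shift()
digest = hashlib.sha256(str(bb2).encode()).hexdigest()
print(bb2, digest[-1])  # prints 6 and the last hex digit of SHA-256("6")
```

For 2 states this terminates quickly and provably finds S(2); the post's point is that the same naive loop fails in general, because without an independently proven step cap a non-halting machine is indistinguishable from a slow one.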

On the other hand, if I say that this statement is either true or false, I'm asserting that there is an objective fact of the matter which no being in our universe will ever learn, and which there isn't even an in-principle way to know. Which is very "philosophical nonsense"-coded.

But if I bite the formalist bullet and say that sometimes I have a proof or disproof and sometimes I don't, and that's all there is to it, then I'm not making a distinction between the above statement and, say, the Continuum Hypothesis, which is known to be independent of ZFC, whereas the statement above is a concrete arithmetical claim. That feels to me like an important distinction.


A somewhat related, less philosophical question is the Cotton Eyed Joe problem of mathematics, i.e.:

  1. Where does mathematics come from? On the surface, human mathematical activity seems quite different from what I imagine apes would be doing on the savanna.
  2. Where does mathematics go? How is it that human mental mathematical abilities and the related "manipulations of symbols" can be used to achieve things in the world? Cf. Wigner's "The Unreasonable Effectiveness of Mathematics in the Natural Sciences".

Seems a bit mysterious to me.


I thought to try the classic Yudkowskian trick of asking what, if anything, an AI would need to believe about a philosophical problem in order to be able to do stuff in the world. An AI would probably want to do some math to do physics, engineering, design encryption schemes, etc.

I can imagine an AI getting everything it needs from some formal system like ZFC and not bothering with the deeper questions. But what if the AI's maker neglected to hardcode the most suitable system into the AI? When I consider which mathematical axioms to apply to my daily life, I use mysterious philosophical abilities that I don't know how to write down in code. So I'm not sure what an AI would make of this.


