Is Libet-style “free will illusion” a general property of hierarchical systems (brains and models)?

I’ve been thinking about a parallel between the classic Libet experiment and how decisions seem to form in layered ML systems.

Libet found that the brain’s readiness potential starts ~550ms before movement, but the feeling of deciding only shows up ~200ms before. So the neural “commitment” appears ~350ms before conscious awareness. This has often been taken as evidence that free will is an illusion — the brain decides before “you” do.

What’s interesting is that you see a structurally similar pattern in hierarchical models:

- Lower-level processes effectively "commit" to a direction/state.
- That commitment only becomes visible later in higher-level representations, i.e. in what you can actually observe or interpret.

So in both cases the system's "output layer" (conscious awareness in Libet's case, the observable higher-level representations in a model) is downstream of the actual commitment point. What feels like intention forming is actually intention being read, not written. The write happened earlier, in a layer that doesn't have direct phenomenal access.
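You can sketch that "commitment is decodable upstream of the output layer" pattern with a toy experiment, in the spirit of logit-lens-style probing. Everything below is a hypothetical illustration I'm adding (the random network, the layer sizes, the linear probe), not something from the post: a fixed random three-layer network makes binary "decisions" at its output layer, and a linear probe trained on the *lowest* layer's activations recovers those decisions well above chance, showing the decision is already latent before the layer that "reports" it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-layer network with fixed random weights (illustrative only).
W1 = rng.normal(size=(8, 16))
W2 = rng.normal(size=(16, 16))
W3 = rng.normal(size=(16, 2))

def forward(x):
    h1 = np.tanh(x @ W1)   # "lower" layer: where the commitment forms
    h2 = np.tanh(h1 @ W2)  # intermediate layer
    logits = h2 @ W3       # "output" layer: where the decision is read out
    return h1, logits

# Random inputs; the network's binary decision is the argmax of the logits.
X = rng.normal(size=(2000, 8))
H1, logits = forward(X)
decisions = logits.argmax(axis=1)

# Linear probe on the lower layer: a least-squares readout of the final
# decision from activations two layers upstream of the output.
targets = np.where(decisions == 1, 1.0, -1.0)
w, *_ = np.linalg.lstsq(H1, targets, rcond=None)
probe_pred = (H1 @ w > 0).astype(int)

acc = (probe_pred == decisions).mean()
print(f"probe accuracy from the lower layer: {acc:.2f}")
```

The point is not the specific numbers: since the later layers are deterministic functions of `h1`, the decision is fully fixed at the lower layer, and even a crude linear readout recovers most of it before the output layer has "said" anything.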

That raises a broader question:

Is this a general property of complex hierarchical systems — that the layer reporting a decision isn't the layer that made it? If so, it would collapse the distinction between "deterministic machine" and "free agent": not because machines have free will, but because the biological substrate that generates the feeling of free will is doing the same thing machines do.

submitted by /u/Naive_Weakness6436
