On the evolutionary cognitive pressure for experiential awareness: do machines need it?

arXiv:2510.20839v2 Announce Type: replace-cross Abstract: The question of consciousness in artificial intelligence divides opinions across epistemological positions. Whether machines can be conscious, and whether we can ascertain the truth of such a proposition in any given case, has consequential ethical implications. This challenge is exacerbated by the lack of consensus on the nature of consciousness. We address an orthogonal problem: regardless of its nature, is consciousness required for machines? Specifically, we focus on a constituent element of consciousness, experiential awareness, and examine from a computational perspective why it arose evolutionarily in biological organisms. We show that, because of evolutionary "baggage" (autonomous neurological reactions), experiential awareness is necessary for higher-level reasoning in such organisms. The implication is that, since artificial systems are architected without these legacy constraints, they can be designed with an arbitrary level of intelligence and no need for experiential awareness. This possibility simplifies ethical considerations around artificial intelligence and opens new approaches to the discernment of artificial consciousness.
