Barriers to Complexity-Theoretic Proofs that “AGI” Using Machine Learning is Impossible

arXiv:2411.06498v2

Abstract: A recent paper (van Rooij et al. 2024) claims to have proved that achieving human-like intelligence by learning from data is intractable in a complexity-theoretic sense. We point out that the proof relies on an unjustified assumption about the distribution of (input, output) tuples in the data. We briefly discuss that assumption in the context of two fundamental barriers to repairing the proof: the need to precisely define "human-like," and the need to account for the fact that a particular machine learning system has particular inductive biases, which are central to the analysis. A further attempt to repair the proof, by restricting attention to subsets of the data, faces barriers of its own in defining those subsets.
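A minimal sketch of the distributional point (the setup below is illustrative, not taken from either paper): worst-case intractability results quantify over all possible distributions of (input, output) tuples, whereas a concrete learner faces one distribution, and its inductive bias may happen to match it. Here a hypothetical learner with a linear bias succeeds on a distribution whose labels are linear in the inputs, and fails on an adversarial one of the kind that lower-bound constructions range over.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20
w_true = rng.normal(size=d)  # hypothetical fixed linear target

def sample_benign(n):
    """Distribution whose labels are a linear function of the inputs --
    it happens to match the learner's linear inductive bias."""
    X = rng.normal(size=(n, d))
    return X, np.sign(X @ w_true)

def sample_adversarial(n):
    """Labels independent of the inputs: no learner beats chance here,
    and distributions like this drive worst-case lower bounds."""
    X = rng.normal(size=(n, d))
    return X, rng.choice([-1.0, 1.0], size=n)

def fit_linear(X, y):
    """Least-squares fit, then sign: a learner biased toward linear targets."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda Xq: np.sign(Xq @ w)

for name, sample in [("benign", sample_benign), ("adversarial", sample_adversarial)]:
    Xtr, ytr = sample(2000)
    Xte, yte = sample(2000)
    predict = fit_linear(Xtr, ytr)
    print(f"{name:11s} test accuracy: {(predict(Xte) == yte).mean():.2f}")
```

The benign run scores far above chance while the adversarial run sits near 0.5; the toy contrast only illustrates why fixing the distribution (and the learner's bias) changes the complexity question, not whether any particular repair of the proof succeeds.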
