minAction.net: Energy-First Neural Architecture Design — From Biological Principles to Systematic Validation

arXiv:2604.24805v2 Announce Type: replace

Abstract: Modern machine learning optimizes for accuracy without explicit treatment of internal computational cost, even though physical and biological systems operate under intrinsic energy constraints. We evaluate energy-aware learning across 2,203 experiments spanning vision, text, neuromorphic, and physiological datasets with 10 seeds per configuration and factorial statistical analysis. Three findings emerge. First, architecture alone explains negligible variance in accuracy (partial eta^2 = 0.001), while the architecture x dataset interaction is large (partial eta^2 = 0.44, p < 0.001), demonstrating that the optimal architecture depends critically on task modality and rejecting the assumption of a universal best architecture. Second, a controlled lambda-sweep across lambda in {0, 1e-5, 1e-4, 1e-3, 1e-2} validates a single-parameter energy-regularized objective L = L_CE + lambda * E(theta, x): across this range, internal activation energy decreases by approximately three orders of magnitude relative to the unregularized lambda=0 baseline, with negligible accuracy change (<0.5 percentage points) on both MNIST and Fashion-MNIST. Third, energy-first architectures inspired by an action-principle framework yield 5-33% within-modality training-efficiency gains over conventional baselines. These results emerge from a research program that interprets learning through a structural correspondence between the action functional in classical mechanics, free energy in statistical physics, and KL-regularized objectives in variational inference. We frame this correspondence as a design hypothesis, not a derivation.
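To make the objective concrete, here is a minimal PyTorch sketch of L = L_CE + lambda * E(theta, x). The abstract names the form of the objective and the lambda grid but does not define E(theta, x); this sketch assumes E is the mean squared hidden activation (a plausible reading of "internal activation energy"). The class name EnergyRegularizedLoss and the hook-based activation capture are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

class EnergyRegularizedLoss(nn.Module):
    """Sketch of L = L_CE + lambda * E(theta, x).

    Assumption: E is the summed mean squared hidden activation
    ("internal activation energy"); the paper does not give a formula.
    """

    def __init__(self, lam: float = 1e-4):
        super().__init__()
        self.lam = lam
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits, targets, activations):
        # activations: list of hidden-layer tensors captured during the
        # forward pass (e.g. via nn.Module forward hooks).
        energy = sum(a.pow(2).mean() for a in activations)
        return self.ce(logits, targets) + self.lam * energy

# Lambda sweep matching the grid reported in the abstract:
# for lam in [0.0, 1e-5, 1e-4, 1e-3, 1e-2]:
#     criterion = EnergyRegularizedLoss(lam=lam)
#     loss = criterion(logits, targets, activations)
```

With lam = 0.0 this reduces to the unregularized cross-entropy baseline, which is the comparison point for the reported three-orders-of-magnitude drop in activation energy.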
