Isotropic Activation Functions Enable Deindividuated Neurons and Adaptive Topologies

arXiv:2602.23405v2 Announce Type: replace-cross Abstract: A methodology is introduced for adapting the topology of dense neural networks, enabled by isotropic activation functions. Through prescribed reparameterisation symmetries and singular-value decomposition of affine maps, layers are diagonalised into one-to-one, ordered connections, making it simpler to assess the impact of individual connections on the network function. Low-impact neurons can be removed (neurodegeneration), and a thresholded buffer of largely inactive 'scaffold' neurons is maintained (neurogenesis). These symmetry-led diagonalisation and structural changes are function-invariant: they are computationally identical during neurogenesis, arbitrarily well approximated during neurodegeneration, and enable asymptotic 50% parameter sparsification of dense networks with identically preserved function. The architecture can therefore be restructured in real time in response to task demands, including task appending, removal, or changes. The approach is conceptually centred on primitive symmetry prescriptions, from which isotropic functions are derived that feature explicit basis independence and lose the individuation of neurons implicit in typical elementwise functional forms. This allows freedom in the basis in which layers are decomposed and interpreted as individual artificial neurons, directly enabling the adaptive-topology approach. Additionally, a new tunable model parameter, the 'intrinsic length', is introduced to improve this analytical invariance, alongside a generalised isotropic-perceptron architecture that enables parallel precomputation of all matrix-vector products and exhibits a nested functional class. Diagonalisation is suggested to offer new possibilities for interpretability and monitoring of isotropic networks.
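The core mechanism can be illustrated with a minimal NumPy sketch. The abstract does not specify the paper's isotropic functions, so the radial activation `iso` below is an assumed stand-in: it acts only on a vector's norm, which makes it equivariant under orthogonal transformations (`iso(Q @ x) == Q @ iso(x)`). That equivariance lets the orthogonal factors of an SVD pass through the activation, diagonalising a dense layer into one-to-one, ordered connections while leaving the overall function identical:

```python
import numpy as np

rng = np.random.default_rng(0)

def iso(x, eps=1e-12):
    # Assumed radial (isotropic) activation: rescales x by a function of
    # its norm only, so iso(Q @ x) == Q @ iso(x) for any orthogonal Q.
    r = np.linalg.norm(x)
    return (np.tanh(r) / (r + eps)) * x

d = 8
W1 = rng.normal(size=(d, d))  # dense hidden layer
W2 = rng.normal(size=(d, d))  # dense readout layer
x = rng.normal(size=d)

# Original dense computation: readout of an isotropically activated layer.
y_dense = W2 @ iso(W1 @ x)

# SVD: W1 = U @ diag(s) @ Vt, with U, Vt orthogonal. Isotropy lets U pass
# through the activation, so only the diagonal singular values remain
# between Vt and the activation: ordered one-to-one connections.
U, s, Vt = np.linalg.svd(W1)
y_diag = (W2 @ U) @ iso(s * (Vt @ x))

# The diagonalised form is function-invariant.
assert np.allclose(y_dense, y_diag)
```

In this diagonal form, each singular value `s[i]` carries exactly one connection, so a small `s[i]` flags a low-impact neuron that can be pruned (neurodegeneration), while appending near-zero singular directions adds inactive scaffold neurons without changing the function (neurogenesis).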
