Lower bounds for one-layer transformers that compute parity

arXiv:2605.12171v1

Abstract: This note shows that no self-attention layer post-processed by a rational function can sign-represent the parity function unless the product of the number of heads and the degree of the post-processing function grows linearly with the input length. Combining this lower bound with rational approximation of ReLU networks yields a margin-dependent extension for self-attention layers post-processed by ReLU networks.
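
For readers who want a concrete statement, here is one hedged formalization of the main claim; the notation ($n$ for the input length, $H$ for the number of heads, $d$ for the degree of the rational post-processing map $q$, $\mathrm{Attn}_H$ for the single self-attention layer) and the exact definition of sign-representation are assumptions made for illustration, not taken verbatim from the paper:

If $f(x) = q\bigl(\mathrm{Attn}_H(x)\bigr)$ with $q$ a rational function of degree at most $d$, and
$$\operatorname{sign}\bigl(f(x)\bigr) = (-1)^{x_1 + \cdots + x_n} \quad \text{for all } x \in \{0,1\}^n,$$
then $H \cdot d = \Omega(n)$.

Under this reading, the margin-dependent extension would follow by approximating the ReLU post-processing network with a rational function whose degree depends on the required margin, and then applying the same bound.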
