Growing Transformers: Modular Composition and Layer-wise Expansion on a Frozen Substrate

arXiv:2507.07129v3 (replace-cross)

Abstract: We study a constrained training regime for decoder-only Transformers in which the token interface is fixed, previously trained dense blocks are not reopened, and the active trainable parameter set is kept approximately constant as depth grows. Starting from a shallow model, we stack new blocks and train only the newest blocks and the LM head; optional LoRA phases provide limited global readjustment under the same active-parameter budget. The paper asks a feasibility/tradeoff question, not whether this regime matches tuned monolithic pretraining.

In a common-protocol 9-layer study on a frozen Unicode substrate, the constructive frozen-Unicode model uses 105.0M active trainable parameters, compared with 180.5M for the interface-matched monolithic frozen baseline and 247.6M for the fully trainable monolithic baseline. We then consider an extreme fixed interface: each token is represented only by a frozen 16-dim binary token-ID code, deterministically lifted to d_model, so the resulting token embedding matrix has rank at most 16. Even in this setting, continued growth remains viable. In a 68.9B-token run on FineWeb-Edu + Cosmopedia, a 16-layer 269.7M model trained above this fixed interface reaches 28.92% MMLU after an interleaved LoRA stage. Reported final metrics are measured after merging the last-stage LoRA adapters into the 269.7M base model. Because the data mixture changes across stages in this long-horizon run, we interpret it as a viability demonstration rather than a clean causal comparison.

Overall, the evidence supports a narrow claim: useful continued learning can proceed above a frozen minimal interface under a bounded active trainable-parameter budget, with a clear tradeoff against dense monolithic training in final perplexity.
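
The growth regime described above (freeze everything trained so far, stack fresh blocks, keep only the newest blocks plus the LM head trainable) can be illustrated with a minimal PyTorch sketch. The block architecture, dimensions, and growth schedule below are illustrative assumptions, not the paper's actual code; the point is only the freeze-then-stack pattern.

```python
# Minimal sketch (PyTorch): grow a decoder-only Transformer by stacking new
# blocks on top of frozen, previously trained ones, training only the newest
# blocks and the LM head. Hyperparameters here are placeholders.
import torch
import torch.nn as nn

class DecoderBlock(nn.Module):
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        self.mlp = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                 nn.Linear(4 * d_model, d_model))
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, attn_mask=None):
        h = self.norm1(x)
        a, _ = self.attn(h, h, h, attn_mask=attn_mask, need_weights=False)
        x = x + a
        return x + self.mlp(self.norm2(x))

class GrowingDecoder(nn.Module):
    def __init__(self, vocab_size: int, d_model: int, n_heads: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)   # frozen token interface
        self.blocks = nn.ModuleList()
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)
        self.d_model, self.n_heads = d_model, n_heads

    def grow(self, n_new_blocks: int):
        """Freeze everything trained so far, then stack fresh trainable blocks."""
        for p in self.parameters():
            p.requires_grad_(False)
        for _ in range(n_new_blocks):
            self.blocks.append(DecoderBlock(self.d_model, self.n_heads))
        # The LM head stays trainable in every growth stage.
        for p in self.lm_head.parameters():
            p.requires_grad_(True)

    def forward(self, ids):
        x = self.embed(ids)
        L = ids.size(1)
        mask = torch.triu(torch.full((L, L), float("-inf")), diagonal=1)
        for blk in self.blocks:
            x = blk(x, attn_mask=mask)
        return self.lm_head(x)

model = GrowingDecoder(vocab_size=50000, d_model=512, n_heads=8)
model.grow(n_new_blocks=3)   # stage 1: train 3 new blocks + LM head
# ... train stage 1 ...
model.grow(n_new_blocks=3)   # stage 2: blocks 0-2 frozen, blocks 3-5 + head trainable
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=3e-4)
```

Under this pattern the optimizer only ever sees the newest blocks and the head, which is what keeps the active trainable-parameter count roughly constant per stage.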
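
The "extreme fixed interface" can also be sketched: each token ID is represented by a frozen 16-bit binary code and lifted deterministically to d_model, so the effective embedding matrix has rank at most 16. The ±1 bit mapping and the fixed random projection below are assumptions for illustration; the abstract only states that the lift is deterministic and bounds the rank.

```python
# Minimal sketch of a frozen 16-dim binary token-ID interface lifted to d_model.
import torch

def binary_code_embedding(vocab_size: int, d_model: int, code_bits: int = 16,
                          seed: int = 0) -> torch.Tensor:
    ids = torch.arange(vocab_size)
    # (vocab_size, code_bits) matrix of {-1, +1} built from the bits of each token ID.
    bits = (ids.unsqueeze(1) >> torch.arange(code_bits)) & 1
    codes = bits.float() * 2.0 - 1.0
    # Fixed, non-trainable lift from code_bits to d_model; rank(E) <= code_bits.
    gen = torch.Generator().manual_seed(seed)
    lift = torch.randn(code_bits, d_model, generator=gen) / code_bits ** 0.5
    return codes @ lift   # (vocab_size, d_model), rank at most 16

emb = binary_code_embedding(vocab_size=32000, d_model=512)
print(torch.linalg.matrix_rank(emb).item())   # <= 16
```

Note that 16 bits can distinguish at most 65,536 token IDs, so this interface presupposes a vocabulary no larger than that.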
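
Finally, the abstract reports final metrics after merging the last-stage LoRA adapters into the base model. Assuming the standard LoRA parameterisation W' = W + (alpha / r) * B A (the paper does not specify its adapter layout), the merge is a one-line weight update:

```python
# Minimal sketch of merging a LoRA adapter into a frozen base weight,
# assuming the standard low-rank update W' = W + (alpha / r) * B @ A.
import torch

def merge_lora(W: torch.Tensor, A: torch.Tensor, B: torch.Tensor,
               alpha: float, r: int) -> torch.Tensor:
    """W: (out, in), A: (r, in), B: (out, r). Returns the merged dense weight."""
    return W + (alpha / r) * (B @ A)

W = torch.randn(1024, 1024)
A = torch.randn(8, 1024) * 0.01
B = torch.zeros(1024, 8)          # zero-init B, as in standard LoRA
W_merged = merge_lora(W, A, B, alpha=16.0, r=8)   # equals W while B is zero
```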
