Task ecologies and the evolution of world-tracking representations in large language models

Giulio Valentino Dalla Riva
2026

Notes

Large language models (LLMs) are increasingly used in settings where success depends on tracking conditions they have no direct access to, such as which language is being spoken, whether a claim is true, whether code will execute, or whether a plan is valid. Whether and how autoregressive models develop world-tracking representations is a broad open question. We address a specific, tractable formalization: which distinctions among latent world states must an optimal next-token encoding preserve, and under what conditions should populations of model lineages evolve toward such encodings?

We answer the static question exactly. The Bayes-optimal token loss decomposes into an irreducible entropy floor plus the encoding's conditional insufficiency. Zero excess holds if and only if the encoding is sufficient, in the classical statistical sense, for the next token given the context. The relevant distinctions are determined by the training ecology: the unique coarsest sufficient encoding is the quotient partition by training-ecology equivalence, and it is also the entropy-minimizing zero-excess encoding. We call encodings that preserve all ecology-separated distinctions ecologically veridical.

This yields three main consequences. First, under an explicit complexity regularizer, simplicity pressure preferentially merges low-gain distinctions. Second, models optimal for one ecology can still incur positive excess on richer deployment ecologies that refine it. Third, at the population level, if model lineages satisfy explicit heredity, variation, and differential-reproduction conditions, then inter-model selection acts on ecology-relative excess and should, in accessible mutation regimes, favor movement toward ecologically veridical encodings; post-training enters as ecology injection, with an explicit threshold for recovering gap distinctions.
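The loss decomposition can be illustrated concretely for a finite ecology. In this sketch (all names and the toy joint distribution are my own illustration, not the paper's code), an ecology is a joint distribution over (context, next token) and an encoding is a partition of contexts; the expected token loss splits into the entropy floor H(token | context) plus an excess term that vanishes exactly when each cell only merges contexts with identical conditional next-token distributions:

```python
import numpy as np

# Toy finite ecology: joint distribution over (context, next token).
# Rows = contexts c0..c2, columns = tokens; entries are joint probabilities.
P = np.array([
    [0.20, 0.05],   # context c0
    [0.20, 0.05],   # context c1: same conditional p(x|c) as c0
    [0.10, 0.40],   # context c2: a different conditional
])

def excess(P, encoding):
    """Expected cross-entropy (bits) above the entropy floor H(X|C) for an
    encoding given as a list mapping each context to a partition-cell index."""
    pc = P.sum(axis=1)                      # marginal over contexts
    cond = P / pc[:, None]                  # p(x | c)
    floor = -np.sum(P * np.log2(cond))      # H(X|C), the irreducible term
    # Best predictive distribution available to the encoding: p(x | e(c)).
    cells = {}
    for c, k in enumerate(encoding):
        cells.setdefault(k, []).append(c)
    loss = 0.0
    for cs in cells.values():
        q = P[cs].sum(axis=0) / pc[cs].sum()   # p(x | cell)
        loss += -np.sum(P[cs] * np.log2(q))    # cross-entropy on that cell
    return loss - floor                        # conditional insufficiency

print(excess(P, [0, 1, 2]))   # identity encoding: excess 0.0
print(excess(P, [0, 0, 1]))   # merges c0,c1 (same p(x|c)): still 0.0
print(excess(P, [0, 0, 0]))   # merges everything: positive excess
```

Merging c0 and c1 costs nothing because they are ecology-equivalent; merging in c2 as well produces strictly positive excess, the split-versus-merge situation the abstract's regularizer argument trades off against encoding complexity.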
Exact finite-ecology calculations and controlled microgpt experiments validate the decomposition, the split-versus-merge threshold, off-ecology failure, and the two-ecology rescue mechanism in a regime where every quantity is observable. The aim is to use small language models as laboratory organisms for theory about representational targets under autoregressive prediction and the population-level pressures acting on model lineages.
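The coarsest sufficient encoding described above has a direct finite-ecology construction: quotient the contexts by training-ecology equivalence, i.e. put two contexts in the same cell exactly when they induce the same conditional next-token distribution. A minimal sketch (function name and tolerance are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def quotient_encoding(P, tol=1e-9):
    """Coarsest sufficient encoding for a finite ecology P (rows = contexts,
    columns = tokens, entries = joint probabilities): contexts with identical
    conditionals p(x|c), up to `tol`, are merged into one cell."""
    pc = P.sum(axis=1)
    cond = P / pc[:, None]          # p(x | c) for each context row
    reps, encoding = [], []
    for row in cond:
        for k, rep in enumerate(reps):
            if np.allclose(row, rep, atol=tol):
                encoding.append(k)  # ecology-equivalent to an earlier context
                break
        else:
            reps.append(row)        # a new equivalence class
            encoding.append(len(reps) - 1)
    return encoding

P = np.array([
    [0.20, 0.05],
    [0.20, 0.05],
    [0.10, 0.40],
])
print(quotient_encoding(P))   # → [0, 0, 1]: c0 and c1 are ecology-equivalent
```

By construction this partition preserves every ecology-separated distinction and nothing else, which is what makes it both zero-excess and entropy-minimizing among zero-excess encodings; a deployment ecology that refines the training ecology can separate contexts this quotient has merged, producing the off-ecology excess the experiments probe.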
