Learning by Surprise: Surplexity for Mitigating Model Collapse in Generative AI
Daniele Gambetta, Gizem Gezici, Fosca Giannotti, Dino Pedreschi, Alistair Knott, Luca Pappalardo
Paper
From BibTeX import
2025
Notes
Cited in riva2026task as concrete evidence for the niche-construction feedback loop: the surplexity work documents decay in performance and diversity across generations when later models train on synthetic data that no longer surprises them. The citation supplies an empirical instance of the long-run veridicality concern.