Researchers from the University of Oxford, the University of Cambridge, and other institutions have found that training large language models on synthetic (model-generated) data can lead to "model collapse," a degenerative process in which models gradually lose track of the true underlying data distribution. The finding was featured on the cover of Nature.
In one example, the researchers prompted successive generations of Meta's OPT-125m model with text about medieval church architecture. The first few generations produced reasonable continuations, but by the ninth generation the output had degenerated into nonsense (in the paper's example, a list of jackrabbits with differently colored tails).
The paper's lead author noted that they had expected synthetic data to introduce some errors, but were surprised by how quickly the models degraded.
Three sources of error compound across generations to cause model collapse (a toy numerical sketch of the first follows the list):
- Statistical approximation error - arises because each generation learns from a finite number of samples, so low-probability events can be lost by chance
- Functional expressivity error - arises because the model, as a function approximator, has limited expressive power and cannot represent the true distribution exactly
- Functional approximation error - arises from limitations of the learning procedure itself, such as the biases of stochastic gradient descent or the choice of objective
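The first of these can be seen in isolation with a deliberately simple numerical sketch (a toy construction, not an experiment from the paper): repeatedly re-estimating a categorical distribution from a finite sample of its own output makes rare outcomes disappear entirely, and once a probability hits zero it never comes back.

```python
# Toy sketch of statistical approximation error: re-estimating a distribution
# from a finite sample of itself each "generation" erases low-probability events.
import numpy as np

rng = np.random.default_rng(0)

# True distribution over 10 "tokens"; the last few entries are rare tail events.
p = np.array([0.30, 0.25, 0.15, 0.10, 0.08, 0.05, 0.03, 0.02, 0.015, 0.005])

n_samples = 200  # finite sample size available to each generation
for generation in range(10):
    draws = rng.choice(len(p), size=n_samples, p=p)
    counts = np.bincount(draws, minlength=len(p))
    p = counts / counts.sum()  # the next generation knows only these samples
    print(f"generation {generation}: surviving tail mass = {p[-3:].sum():.3f}")
```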
To evaluate the impact on language models, the researchers fine-tuned Meta's OPT-125m model on the WikiText-2 dataset, then used the fine-tuned model to generate synthetic training data and trained each subsequent generation on text produced by the generation before it.
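That recursive setup can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' code: the fine_tune and generate_synthetic helpers, the hyperparameters, prompt lengths, and the number of texts carried between generations are all illustrative choices, and the paper also ran variants that kept a fraction of the original human-written data in every generation.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m").to(device)

# Generation 0 is trained on real, human-written WikiText-2 text.
texts = [t for t in load_dataset("wikitext", "wikitext-2-raw-v1", split="train")["text"]
         if t.strip()]

def fine_tune(model, texts, epochs=1, lr=5e-5, batch_size=8):
    """One generation of causal-LM fine-tuning on the given texts."""
    enc = tokenizer(texts, truncation=True, max_length=128,
                    padding="max_length", return_tensors="pt")
    loader = DataLoader(list(zip(enc["input_ids"], enc["attention_mask"])),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask in loader:
            input_ids = input_ids.to(device)
            attention_mask = attention_mask.to(device)
            labels = input_ids.clone()
            labels[attention_mask == 0] = -100  # ignore padding in the loss
            loss = model(input_ids=input_ids, attention_mask=attention_mask,
                         labels=labels).loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

def generate_synthetic(model, prompts, max_new_tokens=64):
    """Sample continuations from the current model; these become the next dataset."""
    model.eval()
    synthetic = []
    for prompt in prompts:
        inputs = tokenizer(prompt, return_tensors="pt",
                           truncation=True, max_length=64).to(device)
        with torch.no_grad():
            output = model.generate(**inputs, do_sample=True, top_p=0.9,
                                    max_new_tokens=max_new_tokens)
        synthetic.append(tokenizer.decode(output[0], skip_special_tokens=True))
    return synthetic

for generation in range(9):
    model = fine_tune(model, texts)
    # Each later generation sees only text produced by its predecessor.
    texts = generate_synthetic(model, prompts=texts[:500])
```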
Results showed errors accumulating from generation to generation: the models first forgot low-probability events in the original distribution (early model collapse) and then produced increasingly homogeneous output until collapse was complete (late model collapse). The same behavior appeared in variational autoencoders (VAEs) and Gaussian mixture models (GMMs).
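The Gaussian-mixture case is small enough to reproduce in miniature. The sketch below is a simplified illustration rather than the paper's experiment: each generation fits a two-component mixture to samples drawn from the previous generation's fit, and over many generations the fitted parameters drift away from the original data, with the estimated variances tending to shrink.

```python
# Minimal sketch of generational collapse in a Gaussian mixture model:
# each generation is fit only to samples drawn from the previous fit.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)

# Real data: two well-separated clusters.
data = np.concatenate([rng.normal(-4.0, 1.0, 500),
                       rng.normal(4.0, 1.0, 500)]).reshape(-1, 1)

for generation in range(501):
    gmm = GaussianMixture(n_components=2, random_state=0).fit(data)
    if generation % 100 == 0:
        print(f"gen {generation:3d}: means={gmm.means_.ravel().round(2)}, "
              f"vars={gmm.covariances_.ravel().round(3)}")
    # The next generation sees only samples from the current model, not real data.
    data, _ = gmm.sample(100)
```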
Mitigating this issue is challenging. Some companies are exploring "watermarking" AI-generated content to exclude it from training data, but this requires coordination between companies.
This suggests that models trained on data crawled from the internet before AI-generated text became widespread may better represent the real world, potentially giving the first wave of large language models a lasting advantage.