
DeepSeek’s ‘Quiet’ Paper Update Hits 86 Pages, Developers Spot Incoming V4 From Orbit
DeepSeek has updated its R1 paper, expanding it from 22 to 86 pages without a formal announcement. The revised paper, available on arxiv.org, provides a detailed breakdown of the company's full training pipeline, including intermediate checkpoints and expanded evaluations. This level of transparency is unusual in the industry, where companies typically reveal such details only once the method is no longer a competitive edge or a newer system is imminent.

The update sheds light on how DeepSeek stabilized long-chain reasoning and avoided chaotic outputs through a multi-stage training pipeline. The development is significant because it suggests that training pipelines and transparency are becoming as important to AI research as raw model size. It has also fueled speculation that the expanded paper is a precursor to the release of DeepSeek V4, with many industry observers reading it as a notable shift in the company's approach to research and development.