Neural Scaling Laws for Boosted Jet Tagging
Daily Information Dashboard · 2026-02-18
2026-02-17T18:13:01Z
Published
AI Summary
This paper systematically studies neural scaling laws for boosted jet classification on the JetClass dataset, derives compute-optimal scaling relations together with an approachable performance limit, and demonstrates that increasing compute and using lower-level features reliably improve HEP model performance, offering guidance on training-resource allocation and data strategy in high energy physics.
- Analyzes neural network scaling behavior for the boosted jet classification task, using the public JetClass dataset.
- Derives compute-optimal scaling laws and identifies an effective performance limit that can be approached by continually increasing compute (see the sketch after this list).
- Studies data repetition during training, common in HEP, and quantifies the resulting "effective dataset size gain".
- Compares how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity.
- Finds that more expressive, lower-level features not only improve results at a fixed dataset size but also raise the final performance limit.
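As a reading aid, here is a minimal sketch of how such a saturating scaling law can be fit, assuming a functional form loss(C) = L_inf + a·C^(−b) that is common in the scaling-law literature; the data points and parameter values below are made up for illustration and are not results from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed saturating power law: loss approaches an asymptotic limit L_inf
# as compute C grows. This form is common in scaling-law studies; the
# paper's exact parameterization is not given in the excerpt.
def saturating_power_law(x, L_inf, a, b):
    return L_inf + a * x ** (-b)

# Illustrative, made-up (compute, validation-loss) points.
compute = np.array([1e15, 1e16, 1e17, 1e18, 1e19])  # FLOPs (hypothetical)
loss = np.array([0.620, 0.550, 0.500, 0.470, 0.455])

# Normalize compute to keep the fit well-conditioned.
x = compute / compute[0]

(L_inf, a, b), _ = curve_fit(
    saturating_power_law, x, loss, p0=(0.4, 0.2, 0.1), maxfev=20_000
)
print(f"asymptotic limit L_inf ~ {L_inf:.3f}, scaling exponent b ~ {b:.3f}")
```

The fitted L_inf plays the role of the "effective performance limit" described above: additional compute drives the loss toward it but never past it.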
#arXiv #paper #Research/Papers #JetClass
Excerpt
The success of Large Language Models (LLMs) has established that scaling compute, through joint increases in model capacity and dataset size, is the primary driver of performance in modern machine learning. While machine learning has long been an integral component of High Energy Physics (HEP) data analysis workflows, the compute used to train state-of-the-art HEP models remains orders of magnitude below that of industry foundation models. With scaling laws only beginning to be studied in the field, we investigate neural scaling laws for boosted jet classification using the public JetClass dataset. We derive compute-optimal scaling laws and identify an effective performance limit that can be consistently approached through increased compute. We study how data repetition, common in HEP where simulation is expensive, modifies the scaling, yielding a quantifiable effective dataset size gain. We then study how the scaling coefficients and asymptotic performance limits vary with the choice of input features and particle multiplicity, demonstrating that increased compute reliably drives performance toward an asymptotic limit, and that more expressive, lower-level features can raise the performance limit and improve results at fixed dataset size.
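The excerpt does not spell out how the data-repetition effect is parameterized. One common form in the data-constrained scaling literature models an effective dataset size that saturates as examples are repeated; the symbols below (U, R, R*) are assumptions for illustration, not the paper's notation.

```latex
% Illustrative saturating form for repeated data (assumed, not from the paper):
% U   = number of unique training examples
% R   = number of extra repetitions (epochs beyond the first)
% R^* = fitted constant setting how quickly repeated examples lose value
D_{\mathrm{eff}} = U + U\,R^{*}\left(1 - e^{-R/R^{*}}\right)
```

Under this form, for small R the expansion e^{-R/R^*} ≈ 1 − R/R^* gives D_eff ≈ U(1 + R), so early repeats count almost like fresh data, while for large R the gain saturates at U(1 + R^*), which is one way a quantifiable "effective dataset size gain" can arise.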