CERN’s approach to handling the universe’s data is distinctive. Rather than relying on off-the-shelf TPUs or GPUs running models with preset weights, CERN bakes custom, nanosecond-latency AI directly into silicon, discarding unneeded data at the source. Thea Aarrestad described this strategy at the Monster Scale Summit; her work at CERN applies machine learning, with a particular focus on anomaly detection, to optimize data collection from the Large Hadron Collider (LHC).

The LHC generates about 40,000 exabytes (EB) of raw sensor data annually, roughly a quarter of all Internet traffic, making real-time data reduction unavoidable. Its detector systems process hundreds of terabytes per second, dwarfing the data rates of tech giants like Google or Netflix. At that pace, filtering decisions must be hardwired into the chip design itself, which demands extraordinarily fast algorithms.
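The scale of the reduction can be sanity-checked with the figures quoted in this article (40,000 EB of raw data per year, with roughly 0.02% of it ultimately retained). A back-of-envelope calculation, using a full calendar year as the averaging window:

```python
# Back-of-envelope check of the data-reduction figures quoted above.
RAW_BYTES_PER_YEAR = 40_000 * 10**18   # 40,000 exabytes of raw sensor data
KEEP_FRACTION = 0.0002                 # roughly 0.02% of data retained

SECONDS_PER_YEAR = 365 * 24 * 3600

# Average rate if the raw volume were spread over a full calendar year.
# (The accelerator does not run year-round, so this is only an average.)
raw_rate_tb_s = RAW_BYTES_PER_YEAR / SECONDS_PER_YEAR / 10**12
kept_bytes_per_year = RAW_BYTES_PER_YEAR * KEEP_FRACTION

print(f"average raw rate: {raw_rate_tb_s:,.0f} TB/s")
print(f"data retained per year: {kept_bytes_per_year / 10**18:.0f} EB")
```

Even averaged over a whole year, the raw rate lands above a thousand terabytes per second, and the retained volume is still on the order of exabytes per year, which is why the filtering has to happen at the detector rather than downstream.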

Housed in a 27-kilometer ring straddling the Swiss-French border, the LHC collides subatomic particles to probe for new forms of matter; the collisions offer insights into the universe’s fundamental structure. To manage the resulting data deluge, CERN has built a vast edge-computing infrastructure at the detector level.

The ‘Level One Trigger’ system, an array of 1,000 FPGAs, reconstructs events at 10 TB/sec and decides within 50 nanoseconds whether data is preserved or discarded. The AXOL1TL algorithm, a powerful anomaly detector, plays a vital role in flagging significant events; in the end, only about 0.02% of the data is saved for further analysis.
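AXOL1TL’s internals are not spelled out here, but trigger-level anomaly detectors of this kind commonly work by autoencoding: events that a model trained on ordinary collisions cannot reconstruct well are kept. The sketch below is a minimal linear (PCA-style) autoencoder on synthetic data, not the production algorithm; only the 0.02% keep rate is taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ordinary" events: they live near a 2-dimensional subspace
# of an 8-dimensional feature space, plus a little noise.
normal = rng.normal(size=(5000, 2)) @ rng.normal(size=(2, 8))
normal += 0.05 * rng.normal(size=normal.shape)

# Fit a linear autoencoder: encoder/decoder are the top-k principal directions.
k = 2
_, _, vt = np.linalg.svd(normal - normal.mean(axis=0), full_matrices=False)
decoder = vt[:k]                        # (k, 8)

def reconstruction_error(x):
    centered = x - normal.mean(axis=0)
    code = centered @ decoder.T         # encode to k dimensions
    recon = code @ decoder              # decode back to 8 dimensions
    return np.sum((centered - recon) ** 2, axis=-1)

# Threshold set so only ~0.02% of ordinary events would be kept,
# mirroring the trigger's keep rate quoted above.
threshold = np.quantile(reconstruction_error(normal), 1 - 0.0002)

# Anomalous events sit far from the normal subspace and reconstruct badly.
anomaly = 3.0 * rng.normal(size=(10, 8))
keep = reconstruction_error(anomaly) > threshold
print(f"{keep.sum()} of {len(anomaly)} anomalous events kept")
```

The key property for a trigger is that scoring an event is just two small matrix multiplications and a comparison, which is the kind of arithmetic that can be pipelined into FPGA logic at fixed latency.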

CERN’s AI efforts prioritize efficiency and careful model design over raw compute, moving beyond traditional computing architectures. By co-designing models and hardware, its teams deploy high-performance tree-based models on custom silicon.
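Tree-based models map well to this kind of hardware because a tree of fixed depth with integer thresholds unrolls into a constant-latency cascade of parallel comparators. A minimal sketch of that structure (the feature indices, thresholds, and leaf labels below are made up for illustration):

```python
# A hardware-friendly decision tree: fixed depth, integer thresholds, and no
# data-dependent control flow, so every comparison can become parallel logic.
DEPTH = 2
FEATURE = [0, 1, 2]        # which input each internal node (0, 1, 2) compares
THRESH = [128, 64, 200]    # integer thresholds (fixed-point in real hardware)
LEAF = [0, 1, 1, 0]        # class label at each of the 4 leaves

def predict(x):
    node, leaf = 0, 0
    for _ in range(DEPTH):             # constant trip count: fully unrollable
        go_right = x[FEATURE[node]] > THRESH[node]
        leaf = (leaf << 1) | go_right  # accumulate the path as a leaf index
        node = 2 * node + 1 + go_right # next internal node in the array layout
    return LEAF[leaf]

print(predict([200, 10, 10]))  # root goes right, node 2 goes left -> leaf 2
```

Because the depth and layout are fixed at design time, an ensemble of such trees evaluates every node simultaneously in hardware, with latency set by the tree depth rather than by the data.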

As CERN prepares for the High Luminosity LHC, expected in 2031, data volumes will rise tenfold, demanding innovative solutions if CERN is to stay at the forefront of particle physics research.