I arrived in Berkeley last night, and today was my first day at a full-week workshop on real-time decision-making at the Simons Institute for the Theory of Computing at UC Berkeley. The day started with amazing talks about Large Hadron Collider hardware and software by Caterina Doglioni (Lund) and Benjamin Nachman (LBNL). The cut from collisions to disk-writing is a factor of 10 million, and they are writing as fast as they can.
The triggers (that trigger a disk-writing event) are hardware-based close to the metal in a first layer, and then software-based in a second layer. This means that when they upgrade the triggers, they are often doing hardware upgrades! Some interesting things came up, including the following:
- Simulation is much slower than the real world, so months of accelerator run-time require years of computing on enormous facilities just for simulation. These simulations need to be sped up, and machine-learning emulators are very promising.
- Right now events are stored in full, but only certain reconstructed quantities are used for analysis. In principle, if these quantities could be agreed upon and computed rapidly, the system could store less per event and therefore many more events, reducing the insanity of the triggers (see the toy sketch after this list).
- Every interesting (and therefore triggered, saved) event is simultaneous with many uninteresting events, so in principle the system already saves a huge control sample, which apparently hasn't been fully exploited.
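To make the trigger-and-storage idea concrete, here is a minimal toy sketch in Python. It is purely illustrative and is not any experiment's actual pipeline: the quantities, thresholds, and rates are all made up. It shows the shape of the thing being discussed: a cheap first-stage cut, a more careful software-stage cut, and then a small per-event record written out instead of the full raw event.

```python
import random

def hardware_trigger(event):
    # First-level, "close to the metal" cut: a crude, fast threshold
    # on a single coarse quantity (hypothetical energy sum).
    return event["energy_sum"] > 50.0

def software_trigger(event):
    # Second-level, software cut: a slightly more refined selection
    # (hypothetical jet-multiplicity requirement).
    return event["n_jets"] >= 2

def reconstruct(event):
    # Instead of the full raw event, keep only a few agreed-upon
    # reconstructed quantities, so each saved event is much smaller.
    return {"energy_sum": event["energy_sum"], "n_jets": event["n_jets"]}

def toy_event():
    # Fake "collision": most events are soft and uninteresting.
    return {"energy_sum": random.expovariate(1 / 10.0),
            "n_jets": random.choices([0, 1, 2, 3], weights=[70, 20, 8, 2])[0]}

saved = []
n_collisions = 1_000_000
for _ in range(n_collisions):
    event = toy_event()
    if hardware_trigger(event) and software_trigger(event):
        saved.append(reconstruct(event))

print(f"kept {len(saved)} of {n_collisions} events "
      f"(reduction factor ~{n_collisions / max(len(saved), 1):.0f})")
```

The point of the toy is only the structure: a fast, coarse decision first (which in the real systems lives in custom hardware, hence the hardware upgrades), a slower, smarter decision second, and a compact record per kept event at the end.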
Of course the theme of the meeting is decision-making. So much of the discussion was about how you run these experiments so that you decide to keep the events that will turn out to be most interesting, when you don't really know what you are looking for!