In 2012, as a worldwide collaboration of physicists labored to assemble findings that would spawn global headlines, 33 computational scientists at Brookhaven National Laboratory were working equally long and tense hours to keep many of those particle hunters supplied with constantly updated information.
“Those were probably the most exciting moments of my professional career, and this was true for many other people working in computing,” recalls Michael Ernst about his group’s role in the discovery of the Higgs boson. Ernst directs the RHIC and ATLAS Computing Facility (RACF) at Brookhaven, which has served as a key data hub for two massive particle accelerators making landmark findings in physics.
“It was essential for everybody directly involved in the analysis to have immediate access to the data. Everybody was committed to resolving problems, regardless of the hour. Whenever something was not going as projected, automatic alarms sounded, and people got out of their beds to solve these problems immediately.”
Since 2000, RHIC – for Relativistic Heavy Ion Collider – has pushed gold ions to near-light speeds around a 2.4-mile racetrack at Brookhaven, colliding them at energies of up to 200 billion electron volts (GeV) per pair of colliding nucleons. That high-energy crash is thought to momentarily free quarks from their normal bondage to gluons, something theoreticians say last happened about 100 millionths of a second after the Big Bang.
Researchers anticipated the collisions would produce intensely hot gaseous plasmas of quarks and gluons. But RHIC experiments instead show that these extreme conditions create a perfect liquid – a substance that flows with virtually no viscosity.
RHIC, in essence, is a time machine. So is the Large Hadron Collider (LHC), a 17-mile ring straddling the French-Swiss border near Geneva. At intervals since 2008, the LHC has smashed together beams of protons at energies of up to 8 trillion electron volts (TeV). Theoretically, that can recreate other kinds of physics from just after the Big Bang.
The LHC’s most notable finding to date is the apparent discovery of the last major fundamental particle needed to complete the Standard Model of particle physics. Theoreticians say the field associated with the Higgs boson endows fundamental particles with their mass.
The Tevatron, a 4-mile, 1 TeV proton-antiproton smasher at the Fermi National Accelerator Laboratory (Fermilab) near Batavia, Illinois, narrowed down the Higgs search before closing in 2011. That quest then refocused on the higher-energy LHC. Meanwhile, both Fermilab and Brookhaven took on major roles in the European-based mission.
Sensitive sifting
Seeking particles and states of matter that don’t exist in today’s world is a needle-in-a-haystack challenge that demands careful sifting through the many fragments from ferocious matter smashing.
The Higgs challenge has confronted about 6,000 scientists from 38 nations with the byproducts of 600 million proton-proton collisions per second, reports CERN, the laboratory that hosts the LHC. Only about one in every trillion such bust-ups would likely create a Higgs, Brookhaven experts say. Caught outside its natural era, each candidate Higgs would instantly decay, in any of a dozen ways, into other detectable particles.
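Those two figures convey just how rare the quarry is. A back-of-the-envelope estimate – built only from the rates quoted above, not an official CERN number – works out to roughly one Higgs every half hour:

```latex
\[
  \underbrace{6\times10^{8}\ \tfrac{\text{collisions}}{\text{s}}}_{\text{LHC collision rate}}
  \times
  \underbrace{10^{-12}\ \tfrac{\text{Higgs}}{\text{collision}}}_{\text{one per trillion}}
  = 6\times10^{-4}\ \tfrac{\text{Higgs}}{\text{s}}
  \;\approx\; \text{one Higgs every } \sim\!28\ \text{minutes.}
\]
```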
RHIC’s gold-gold ion smashups, meanwhile, occur thousands of times a second to generate many more fragments for more than 1,000 other investigators in the U.S. and abroad to analyze.
These two divergent experiments pose another set of challenges for the information scientists at Ernst’s RACF. Besides managing massive datasets for all RHIC collaborators, Brookhaven agreed to split with Fermilab the more imposing information-management duties for all U.S. collaborators in the LHC.
That means RACF serves about 600 Americans who analyze fragments logged by the LHC’s huge ATLAS particle detector. Fermilab serves a similar number who use CMS, the LHC’s other large general-purpose detector.
Together, the two LHC detectors have generated about 30 petabytes of data annually, information that’s still under analysis during a two-year LHC shutdown for maintenance and upgrades. When the LHC resumes operations early this year at a more powerful 13 TeV, that data flow will balloon by “between a factor of four and five,” Ernst predicts.
Brookhaven’s own RHIC, meanwhile, has accumulated 35 petabytes of raw data, “a volume that we could probably scale up by another factor of 10,” he says. Brookhaven offered to take on this dual role because of the large overlap in the data-handling needs of RHIC and ATLAS, Ernst says.
RACF is among about 140 data centers sharing the LHC’s information-handling duties via a high-speed global fiber-optic data grid. PanDA (Production and Distributed Analysis), a workload management system built for ATLAS, draws on disk storage, processors and enabling software to ensure that all of that detector’s researchers receive whatever data they need, regardless of where they’re working.
“I think it’s fair to say that all these resources combined form a worldwide distributed supercomputer,” Ernst says.
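The idea behind such a system can be sketched in a few lines. Below is a minimal, hypothetical illustration of the pilot-job pattern that PanDA is built around – worker processes at each computing site pull tasks from a central queue rather than having work pushed to them. The Broker class, job names and site labels here are invented for this sketch; none of it is PanDA’s actual interface.

```python
# Illustrative sketch of the pilot-job pattern (NOT PanDA's real API).
import queue
import threading

class Broker:
    """Central task queue: computing sites pull work instead of having it pushed."""
    def __init__(self, jobs):
        self.jobs = queue.Queue()
        for job in jobs:
            self.jobs.put(job)

    def next_job(self):
        try:
            # A pilot asks the broker for one unit of work.
            return self.jobs.get_nowait()
        except queue.Empty:
            return None

def pilot(site, broker):
    """A pilot process at one site: pulls and runs jobs until none remain."""
    while (job := broker.next_job()) is not None:
        print(f"[{site}] running {job}")

if __name__ == "__main__":
    broker = Broker([f"analysis-task-{i}" for i in range(8)])
    # Hypothetical site labels standing in for real grid centers.
    threads = [threading.Thread(target=pilot, args=(site, broker))
               for site in ("BNL", "CERN", "FNAL")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```

Pull-based dispatch is what lets heterogeneous sites with very different capacities share one workload: each site takes on new work only when it has cycles to spare.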
Meanwhile, RHIC uses domestic fiber-optic pipelines of the Energy Sciences Network and Internet2, plus National Science Foundation-sponsored transatlantic and transpacific links, to update information for its users in the U.S., Europe, South Korea, China and India.