Despite gains in identifying and treating them, aneurysms, the dilations of cerebral blood vessels, cause suffering and death for up to 5 percent of Americans. A ruptured aneurysm can start a hemorrhage with rapid, often catastrophic consequences, and clots formed at the site of an aneurysm may detach, block arteries and trigger a stroke.
Blood supply to the brain relies on a highly complex vascular network in which an aneurysm may occur at any point. Angiograms can show the presence of aneurysms and clots but don't necessarily reveal their cause: the interactions among platelets, red and white blood cells, and the endothelial cells that line blood vessels. When injured, endothelial cells trigger platelet aggregation, leading to a clot.
Conventional imaging also doesn’t show the big picture of blood flow throughout the brain – where blood is coming from and where it’s going. Precisely understanding these elements at the level of both gross blood flow and its microscopic properties would greatly improve neurosurgeons’ ability to predict when and where an aneurysm might rupture and when to operate.
High-performance computing (HPC) shows promise for simulating and visualizing brain blood flow at multiple scales. Paving the way is a team led by Leopold Grinberg, senior research associate in the Division of Applied Mathematics at Brown University. Other researchers include Brown’s George Em Karniadakis, Argonne National Laboratory’s (ANL) Joseph A. Insley, Vitali Morozov, Michael E. Papka and Kalyan Kumaran, and Dmitry Fedosov of the Institute of Complex Systems (ICS) in Jülich, Germany.
In 2006, Grinberg began developing mathematical and software methods capable of simulating 3-D blood flow in complex arterial networks. “The methodology I started to build was based on functional decomposition, with many tasks assigned to different groups of processors, plus multilevel communicating interfaces capable of connecting the data computed by different tasks and synchronizing the processors assigned to different tasks,” Grinberg says.
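The idea Grinberg describes can be sketched in miniature. The toy code below is not the team's solver; it is an illustrative Python sketch, with hypothetical task names, of the two ingredients he names: partitioning processor ranks into per-task groups (functional decomposition) and a small interface object that passes coupled data between groups, standing in for the MPI communication a real multiscale solver would perform.

```python
def functional_decomposition(ranks, tasks):
    """Partition a list of processor ranks into one group per task,
    giving any leftover ranks to the final task."""
    groups = {}
    chunk = len(ranks) // len(tasks)
    for i, task in enumerate(tasks):
        start = i * chunk
        end = start + chunk if i < len(tasks) - 1 else len(ranks)
        groups[task] = ranks[start:end]
    return groups


class CouplingInterface:
    """Toy communicating interface: each task group publishes its boundary
    data, and any other group can gather the coupled values it needs."""

    def __init__(self):
        self.buffers = {}

    def publish(self, task, data):
        # A task group deposits the quantities other groups depend on.
        self.buffers[task] = data

    def gather_for(self, task):
        # A task group collects everything published by the other groups.
        return {t: d for t, d in self.buffers.items() if t != task}


# Eight ranks split between two hypothetical coupled tasks.
groups = functional_decomposition(list(range(8)), ["arterial_3d", "capillary_1d"])
iface = CouplingInterface()
iface.publish("arterial_3d", {"outlet_flow": 1.25})
iface.publish("capillary_1d", {"inlet_pressure": 80.0})
coupled = iface.gather_for("capillary_1d")  # data the 1-D task needs from the 3-D task
```

In a production code each group would run its own solver concurrently and the interface would synchronize them across MPI communicators; here the exchange is reduced to in-memory dictionaries to keep the structure visible.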
Working on three fronts – parallel computing, mathematical algorithms and visualization tools – the team from Brown and ANL received 50 million processor hours on Intrepid, Argonne’s IBM Blue Gene/P supercomputer, and 23 million hours on Jaguar, the Cray XT system at Oak Ridge National Laboratory (ORNL). The allocations were granted through the Department of Energy’s Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. The researchers also had access to Jugene, ICS’s Blue Gene/P, which runs at a peak speed of 1 petaflops – a quadrillion calculations per second – and is almost twice the size of Intrepid.
As they reported in November at the SC11 high-performance computing conference in Seattle, Grinberg and colleagues have created what they think is the first truly multiscale simulation and visualization of an actual biological system. The team’s ambitious target was brain blood flow, the most complex arteriovenous system in the human body. A paper describing the research was one of five finalists for the prestigious 2011 Gordon Bell Prize, which recognizes outstanding results in the application of parallel computing to practical scientific problems.