National Computational Infrastructure for Lattice Gauge Theory
A proposal in response to Ofﬁce of Science Notice DE-FG02-06ER06-04 and Announcement
Lab 06-04: Scientiﬁc Discovery through Advanced Computing.
Lead Institution: University of California, Santa Barbara
Santa Barbara, CA 93106
Lead Principal Investigator and DOE Contact: Robert Sugar
Address: Department of Physics
University of California
Santa Barbara, CA 93106
Office of Science Programs Addressed: High Energy Physics and Nuclear Physics
Office of Science Program Office Technical Contacts: Craig Tull and Sidney Coon
Participating Institutions and Principal Investigators:
Boston University*, Richard Brower † ‡ and Claudio Rebbi †
Brookhaven National Laboratory*, Michael Creutz † ‡
Columbia University*, Norman Christ † ‡
Fermi National Accelerator Laboratory*, Paul Mackenzie † ‡
Indiana University*, Steven Gottlieb ‡
Massachusetts Institute of Technology*, John Negele † ‡
Thomas Jefferson National Accelerator Facility*, David Richards † and William (Chip) Watson ‡
University of Arizona*, Doug Toussaint ‡
University of California, Santa Barbara*, Robert Sugar † ‡
University of Utah*, Carleton DeTar ‡
University of Washington, Stephen Sharpe †
DePaul University*, Massimo DiPierro ‡
Illinois Institute of Technology*, Xian-He Sun ‡
University of North Carolina*, Daniel Reed ‡
Vanderbilt University*, Theodore Bapty ‡

* Institution submitting an application
† Project Principal Investigator, Member of Lattice QCD Executive Committee
‡ Institution Principal Investigator

1 Executive Summary

Our long range objective is to construct the computational infrastructure needed for the study of quantum chromodynamics (QCD). Nearly all theoretical physicists in the United States involved in the numerical study of QCD are participating in this effort, as are Brookhaven National Laboratory (BNL), Fermi National Accelerator Laboratory (FNAL) and Thomas Jefferson National Accelerator Facility (JLab), and computer scientists at DePaul University, the Illinois Institute of Technology, the University of North Carolina and Vanderbilt University. A very successful start was made under the first phase of the Department of Energy’s Scientific Discovery through Advanced Computing program (SciDAC-1). We propose to build on this success to address new challenges that must be met in order to capitalize fully on the exciting opportunities now available for advancing the study of QCD.
QCD is the component of the Standard Model of elementary particle physics that describes the strong interactions. The Standard Model has been enormously successful; however, our knowledge of it is incomplete because it has proven extremely difficult to extract many of the most important predictions of QCD, those that depend on the strong coupling regime of the theory. To do so from first principles and with controlled systematic errors requires large scale numerical simulations within the framework of lattice gauge theory.
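To make the framework concrete, the following minimal sketch (illustrative only, not part of the proposal's software) shows the basic lattice-gauge-theory setup: gauge fields live on the links of a spacetime grid, and the simplest observable is the average plaquette, the trace of the product of links around an elementary square. For brevity it uses compact U(1) in two dimensions rather than the SU(3) of QCD; all names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
L = 8                     # lattice extent in each of the 2 directions
# Link angles theta[mu, x, y]: the link variable leaving site (x, y) in
# direction mu is U = exp(i * theta).  For U(1) a link is just a phase.
theta = rng.uniform(-np.pi, np.pi, size=(2, L, L))

def avg_plaquette(theta):
    """Average of Re U_p over all elementary squares (periodic boundaries)."""
    t0, t1 = theta  # links in the x- and y-directions
    # Plaquette angle: theta_x(n) + theta_y(n+x) - theta_x(n+y) - theta_y(n),
    # with np.roll supplying the periodically shifted neighbor fields.
    plaq = (t0 + np.roll(t1, -1, axis=0)
            - np.roll(t0, -1, axis=1) - t1)
    return float(np.mean(np.cos(plaq)))

# For uniformly random ("hot") links the average plaquette vanishes up to
# statistical noise; for unit links ("cold" start) it is exactly 1.
print(avg_plaquette(theta))                 # near 0 for a hot start
print(avg_plaquette(np.zeros((2, L, L))))   # exactly 1.0 for a cold start
```

A real simulation would generate an ensemble of such link configurations with Monte Carlo methods weighted by the lattice action, and average observables like this one over the ensemble.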
Such simulations are needed to address problems that are at the heart of the DOE’s large experimental programs in high energy and nuclear physics. Our immediate objectives are to 1) calculate weak interaction matrix elements to the accuracy needed to make precise tests of the Standard Model; 2) determine the properties of strongly interacting matter under extreme conditions such as those that existed in the very early universe, and are created today in relativistic heavy ion collisions; and 3) calculate the masses of strongly interacting particles and obtain a quantitative understanding of their internal structure.
The infrastructure we propose to build is essential to achieve these objectives.
The bulk of our effort in SciDAC-1 was devoted to software development, and that will continue to be the case under this SciDAC-2 proposal. Under SciDAC-1 a QCD Applications Programming Interface (QCD API) was developed, which enables lattice gauge theorists to make effective use of a wide variety of massively parallel computers, including those with switched and mesh architectures. The QCD API was optimized for the custom designed QCD on a Chip (QCDOC) computer, and for commodity clusters based on Pentium 4 processors. Under this proposal, optimized versions of the QCD API will be created for clusters based on multi-core processors and InfiniBand communications networks, and for the Cray XT3, the IBM BlueGene/L and their successors. The QCD API will be used to enhance the performance of the major QCD community codes and to create new applications. A QCD physics toolbox will be constructed which will contain sharable software building blocks for inclusion in application codes, performance analysis and visualization tools, and software for automation of physics workflow. New software tools will be created for managing the large data sets generated in lattice QCD simulations, and for sharing them through the International Lattice Data Grid consortium. A common computing environment will be developed for the dedicated lattice QCD computers at BNL, FNAL, and JLab. Work on multi-scale algorithms recently begun in collaboration with members of the Terascale Optimal PDE Simulations (TOPS) Center will be extended.
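The layered design described above can be sketched as follows. This is a hypothetical illustration, not the actual QCD API: application code expresses a nearest-neighbor lattice operation once in terms of a generic `shift()`, while a lower communications layer supplies the shifted fields. Here a serial `np.roll` stands in for the message-passing layer; on a cluster or a QCDOC-class machine that layer would perform a halo exchange between processors instead.

```python
import numpy as np

def shift(field, mu, sign):
    """Communications layer: return the field fetched from the neighbor in
    direction mu (serial, periodic stand-in for a parallel halo exchange)."""
    return np.roll(field, -sign, axis=mu)

def laplacian(phi):
    """Data-parallel layer: a 2-D lattice Laplacian written once in terms of
    shift(); the same application code runs unchanged on any machine for
    which shift() has an optimized implementation."""
    out = -4.0 * phi
    for mu in range(2):
        out += shift(phi, mu, +1) + shift(phi, mu, -1)
    return out

# A point source makes the stencil visible: the Laplacian is -4 at the
# source site and +1 at each of its 4 nearest neighbors.
phi = np.zeros((8, 8))
phi[4, 4] = 1.0
lap = laplacian(phi)
```

The point of the layering is exactly this separation: porting to a new architecture means reimplementing the small communications layer, not the application codes built on top of it.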
The lattice QCD infrastructure effort has included the development of hardware as well as software, because for the study of QCD it has proven more cost effective to build specialized computers than to make use of general purpose supercomputers. We have pursued both customized clusters constructed from commodity components and the development of fully customized computers. Research and development work on commodity clusters was carried out under our SciDAC-1 grant at FNAL and JLab. The experience gained with these prototype clusters will enable us to build highly cost effective terascale clusters in the coming year. In parallel with SciDAC-1, but funded separately by the DOE, a 12,288 processor QCDOC computer was constructed at BNL for use by the U.S. lattice QCD community. In SciDAC-2 we propose to continue to track the evolving commodity and semi-commodity marketplace and to undertake design of a fully customized successor to the QCDOC. A four year Lattice QCD Computing Project began on October 1, 2005 with funding from the DOE’s High Energy Physics and Nuclear Physics Programs. The purpose of this Project is to construct and operate dedicated computers for the study of QCD. Both the hardware research and development and the software development we propose are critical to the success of this Project and to research in lattice QCD in the U.S.
2 Physics Goals
2.1 Tests of the Standard Model

Despite its extraordinary success, the Standard Model is believed to be only the low energy (long distance) limit of a more fundamental theory. Therefore, a major component of the experimental program in high energy physics is devoted to making precise tests of the Standard Model in order to determine its range of validity and search for indications of new physics beyond it. Many of these tests require both accurate experiments and accurate lattice QCD calculations of the effects of the strong interactions on weak interaction processes. In almost all cases, the precision of the tests is limited by the uncertainties in the lattice calculations rather than in the experiments. Our objective is to bring the lattice errors down to, or below, the experimental ones.
The greatest challenge to performing accurate numerical calculations of QCD is to include the full effects of vacuum polarization due to light (up, down and strange) quarks. Significant progress has been made in meeting this challenge during the past five years through the use of improved formulations of QCD on the lattice and through rapid growth in the computing resources available to the field [2, 3, 4]. Among the notable results have been calculations of the leptonic decay constants of the π and K mesons and mass splittings in the charmonium and bottomonium spectra to an accuracy of 3% or better; the first determination of the light quark masses to fully include their vacuum polarization effects [8, 9]; and the calculation of the strong coupling constant and the Cabibbo-Kobayashi-Maskawa (CKM) matrix element Vus [11, 5] to the same accuracy as their experimental determinations. The lattice gauge theory community has moved from the validation of techniques through the calculation of quantities that are well known experimentally to the successful prediction of quantities that had not previously been measured. Three cases in which predictions were subsequently confirmed by experiment were the calculations of the leptonic decay constant and semi-leptonic form factors of the D meson, and the mass of the Bc meson. The decay constants and form factors for B mesons play important roles in tests of the Standard Model, but are very difficult to measure experimentally. The lattice calculations are similar for D and B mesons, since only the masses of the heavy quarks change. Thus, the successful calculations for D mesons provide important validation of those for B mesons, which are now in progress.
Table 1: The impact of improved lattice QCD calculations on the determination of CKM matrix elements.
The results quoted above indicate that we are in a position to make very significant progress over the next five years. The current lattice and experimental uncertainties in some key quantities are shown in Table 1, along with the reduction in lattice errors expected as more computational resources become available and as ancillary theoretical calculations of operator normalization factors improve. All quantities in the table have had first calculations that fully include the effects of vacuum polarization due to light quarks. The error estimates in Table 1 are based on our experience with the improved staggered formulation of lattice quarks, as were the successful calculations cited above, with the exception of the εK estimates, which are based on domain wall quarks as well.
2.2 Matter under extreme conditions
At very high temperatures and/or densities, one expects to observe a phase transition or crossover from ordinary strongly interacting matter to a plasma of quarks and gluons. A primary motivation for the construction of the Relativistic Heavy Ion Collider (RHIC) at BNL was to observe the quark-gluon plasma and determine its properties. During the early development of the universe matter was in the plasma state, and the quark-gluon plasma may be a central component of neutron stars today. The behavior of strongly interacting matter in the vicinity of the phase transition or crossover is inherently a strong coupling problem, which can only be studied from first principles through lattice gauge theory calculations. Among the issues that can uniquely be addressed by lattice calculations are the nature of the transition, the temperature at which it occurs, the properties of the plasma, and the equation of state. Indeed, it is the lattice that has given us the best estimates of the temperature of the deconfinement transition. Lattice results will continue to be crucial to the interpretation of ongoing heavy-ion experiments in the United States and Europe.
A major goal of our research program is to investigate the properties of matter under the extreme conditions of high temperature and high density. Important progress has been made in the last several years in determining the transition temperature [16, 17], the phase diagram and the equation of state [18, 19, 20, 21]. However, as the deconfinement process occurs at a temperature of order 175 MeV, a new scale is introduced, as well as new potential lattice artifacts. Thus, these calculations are computationally quite demanding. Those at zero baryon density are well understood theoretically and require only sufficient computational resources to reach high precision; calculations at non-zero baryon density, by contrast, are at a much earlier stage of development.
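The new scale mentioned above enters through the lattice geometry: in a thermal simulation the temperature is set by the temporal extent of the lattice, T = 1/(N_t·a), where a is the lattice spacing and N_t the number of time slices. The following illustrative arithmetic (not taken from the proposal) shows why reaching the ~175 MeV crossover region with small discretization errors demands fine lattices, using ħc ≈ 197.327 MeV·fm to convert units.

```python
HBARC_MEV_FM = 197.327  # hbar * c in MeV * fm (unit conversion constant)

def temperature_mev(a_fm, n_t):
    """Temperature T = 1/(N_t * a), in MeV, for spacing a in fm."""
    return HBARC_MEV_FM / (n_t * a_fm)

def spacing_fm(t_mev, n_t):
    """Lattice spacing in fm needed to reach temperature t_mev with N_t slices."""
    return HBARC_MEV_FM / (n_t * t_mev)

# Sitting near T ~ 175 MeV on finer and finer temporal lattices: the
# spacing shrinks like 1/N_t, so larger N_t (smaller lattice artifacts)
# requires proportionally more lattice sites in every direction.
for n_t in (4, 6, 8):
    print(n_t, round(spacing_fm(175.0, n_t), 3))  # 0.282, 0.188, 0.141 fm
```

This is why controlling the continuum limit of the equation of state is so computationally demanding: halving the spacing at fixed temperature doubles N_t and multiplies the four-dimensional lattice volume by sixteen.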