Oak Ridge Leadership Computing Facility

Celebrating 25 Years of Leadership in High Performance Computing

Twenty-five years ago, high-performance computing (HPC) in the United States stood at a crossroads. Established computing architectures were approaching their limits in performance and competitiveness, while the country’s need for computing power to solve challenging problems in science, energy, and national security continued to grow.

Out of this period of technological transition, a new force in scientific computing emerged in the unlikely region of East Tennessee, a force that continues to shape the HPC landscape today.

In 2017, the Oak Ridge Leadership Computing Facility is celebrating 25 years of leadership in high-performance computing. Since its founding as the ORNL Center for Computational Sciences (CCS) in May 1992, the center has consistently delivered supercomputers of unprecedented capability to the scientific community on behalf of the US Department of Energy (DOE). Scientists, in turn, have used these versatile systems to solve critical problems in areas as diverse as biology, advanced materials, climate, and nuclear physics.

From the beginning, the OLCF has contributed to a rapid evolution in scientific computing that has produced a millionfold increase in computing power. This rise has included the launch of the first teraflop system (IBM Power3 Eagle) for open science, the science community’s first petaflop system (Cray XT5 Jaguar), and two top-ranked machines on the TOP500 list, including the OLCF’s current leadership-class machine, Titan. Additionally, the next chapter in the OLCF’s legacy is set to begin in 2018 with the deployment of Summit, a pre-exascale system capable of more than 100 petaflops.

Using OLCF systems, scientists have expanded the scale and scope of their research, solved complex problems in less time, and filled critical gaps in scientific knowledge. Today, simulation is considered on par with experiment and theory as an essential standard of modern science.

1985-03-01 13:13:01

ORNL's Cray X-MP

Oak Ridge National Laboratory acquires the Cray X-MP computer, Cray's first shared-memory parallel vector processor. The computer is initially obtained for handling uranium enrichment calculations but will eventually be used to help engineers simulate complex physical systems and for large engineering studies.

1985-07-01 13:13:01

Serial #1 iPSC/1 Delivered to ORNL

Oak Ridge National Laboratory (ORNL) receives serial #1 of the Intel Personal SuperComputer, or iPSC/1—the first parallel computer built from commercially available off-the-shelf parts. ORNL’s model features 32 nodes connected via Ethernet into a hypercube.

1988-07-01 13:13:01

ORNL Beta Tests iPSC/2

Oak Ridge National Laboratory receives the second model in Intel’s Personal SuperComputer (iPSC) line, the iPSC/2. The system is serial #1 with 64 nodes.

1989-07-01 13:13:01

iPSC/860 Comes to ORNL

Oak Ridge National Laboratory (ORNL) receives one of the most powerful computers available in 1989, the Intel Personal SuperComputer 860, or iPSC/860. The serial #1 parallel computer, with 128 nodes connected in a hypercube, operates at a peak performance of 5.12 gigaflops.

1990-11-12 13:13:01

Team Wins Gordon Bell for MKKR-CPA Run

Oak Ridge National Laboratory’s Malcolm Stocks and Al Geist, along with William Shelton of the US Naval Research Laboratory and Beniamino Ginatempo at the University of Messina in Italy, run an alloy-theory code, MKKR-CPA, at 2.5 gigaflops on a 128-node Intel iPSC/860. The team wins an Association for Computing Machinery Gordon Bell Prize the same year for their price–performance during the project: 800 megaflops/$1M.

1991-09-01 13:13:01

KSR-1 Computer Installed at ORNL

The first multiprocessor (serial #1) system from Kendall Square Research is installed at Oak Ridge National Laboratory in September 1991. The 32-processor system features a shared-memory architecture and is designed to make porting applications easy for both serial and parallel codes. The system operates at a peak performance of 2.56 gigaflops.

1991-12-01 13:13:01

HPCA Signed into Law

On December 9, 1991, the High-Performance Computing Act (HPCA) of 1991, authored by Senator Al Gore, is signed into law. HPCA proposes a national information infrastructure to build communications networks and databases and also calls for proposals to build new high-performance computing facilities to serve science.

1991-12-03 00:00:00

ORNL Submits PICS Proposal

Oak Ridge National Laboratory (ORNL) joins with three other national laboratories and seven universities to submit the Partnership in Computational Science (PICS) proposal to the US Department of Energy as part of the High-Performance Computing and Communications Initiative. PICS paves the way for ORNL to become a major player in scientific computing.

1991-12-09 00:00:00

Bob Ward to Direct CCS

As part of the Partnership in Computational Science proposal, Robert C. Ward becomes the acting director for a planned high-performance computing center. Ward will lead the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory in the formative phases of the project.

1991-12-15 00:00:00

CSEP Program Established

The US Department of Energy sponsors the Computational Science Education Project (CSEP), which targets undergraduate and graduate students in the physical and life sciences. CSEP delivers overviews of computational science tools through a comprehensive e-book featuring clickable elements, summaries, and visualization packages that aid understanding.

1991-12-22 00:00:00

CSGF Fellowship Program

The US Department of Energy establishes the Computational Science Graduate Fellowship (CSGF) to provide doctoral students the opportunity to explore and use computing resources to meet needs in their respective fields. CSGF hosts an annual program review that serves as a springboard for fellows to gather the information and resources necessary for selecting laboratories for their respective summer practicums. Oak Ridge National Laboratory is an active participant in the program, hosting students for practicums, facilitating workshops at the CSGF Annual Review meetings, and hiring alumni of the program to build the national scientific community.

1992-05-24 13:13:01

Ken Kliewer Director of CCS

Theoretical physicist Kenneth Kliewer is named the director of the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). Kliewer will serve under ORNL Director Alvin Trivelpiece, whose efforts with the US Department of Energy will be crucial to the initial development of CCS.

1992-05-24 13:13:01

CCS Established, First Paragon Accepted

On May 24, 1992, Oak Ridge National Laboratory (ORNL) is awarded a high-performance computing research center called the Center for Computational Sciences, or CCS, as part of the High-Performance Computing Act of 1991. ORNL also receives a 66-processor, serial #1 Intel Paragon XP/S 5 for code development the same year. The system has a peak performance of 5 gigaflops.

1992-08-20 13:13:01

Adventures in Supercomputing

The Center for Computational Sciences promotes a hands-on program called “Adventures in Supercomputing,” or AiS. In collaboration with faculty members from Oak Ridge National Laboratory, Ames Laboratory, and Sandia National Laboratories, teachers from more than 70 high schools integrate high-performance computing concepts into their teaching materials to educate a wide population of students—including women, minorities, and disadvantaged students.

1992-11-20 13:13:01

Second Intel Paragon Arrives

The Center for Computational Sciences receives a 512-processor system called the Intel Paragon XP/S 35. With a peak performance of 35 gigaflops, the system will be used for major computational projects.

1994-07-20 13:13:01

ORNL Receives IBM SP2

The Center for Computational Sciences at Oak Ridge National Laboratory receives the IBM SP2/16 (Power 2) computer, which features 16 processors and has a peak performance of 4.26 gigaflops.

1995-01-01 13:13:01

Unearthing the Subsurface

Mary Wheeler of the University of Texas at Austin and a multi-institution team pioneer the effort to create the algorithms necessary for simulating groundwater processes, giving scientists a first look at some of the complex interactions that occur during groundwater contamination and remediation. Wheeler’s team uses the IBM SP2 and the Intel Paragon systems to develop groundwater models and codes of unprecedented complexity. The team eventually implements its new algorithms into groundwater transport simulators to generate 3-D models of subsurface radioactive decay and the transport of radioactive materials via groundwater. The team sets the precedent for modeling flow and transport in porous media using Godunov and mixed finite element methods, bringing scientists one step closer to understanding and solving some of the world’s most difficult groundwater contamination problems.
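As a rough illustration of the Godunov-type transport step mentioned above, the sketch below advances a one-dimensional tracer concentration with a first-order upwind flux. The grid, velocity, and function name are hypothetical, and the team’s actual three-dimensional porous-media simulators were far more elaborate; this only shows the basic shape of such an update.

```python
import numpy as np

def godunov_advect(c, v, dx, dt, steps):
    """Advance tracer concentration c with a first-order upwind (Godunov) flux."""
    assert v > 0 and v * dt / dx <= 1.0, "CFL condition violated"
    for _ in range(steps):
        flux = v * c                          # upwind interface flux for v > 0
        c = c - (dt / dx) * (flux - np.roll(flux, 1))
        c[0] = 0.0                            # simple inflow boundary: clean water
    return c

# Example: a hypothetical contaminant pulse advected downstream.
x = np.linspace(0.0, 1.0, 200)
c0 = np.exp(-((x - 0.2) / 0.05) ** 2)         # initial plume centered at x = 0.2
c = godunov_advect(c0, v=1.0, dx=x[1] - x[0], dt=0.004, steps=100)
print(f"plume peak now near x = {x[np.argmax(c)]:.2f}")
```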

1995-04-21 13:13:01

Inauguration of CCII

On April 21, 1995—the same day the Intel Paragon XP/S 150 is dedicated—the Center for Computational Sciences inaugurates the Computational Center for Industrial Innovation (CCII), a US Department of Energy National User Facility for research created by the Partnership in Computational Science. CCII focuses on industrial competitiveness through computational science. Eventually, an industrial outreach program called Accelerating Competitiveness through Computational ExceLlence, or ACCEL, will preserve that same focus.

1995-04-21 13:13:01

Fastest Machine in the World Delivered

The Center for Computational Sciences (CCS) receives the Intel Paragon XP/S 150 system, with 3,096 processors and a peak performance of 150 gigaflops. When it is delivered, it is the fastest machine in the world. The XP/S 150 assists researchers tackling CCS’s Grand Challenge computing problems, which range from energy research and environmental management to the human genome and the exploration of deep space.

1995-10-09 13:13:01

Revealing the Quantum World of Materials

A research team led by Oak Ridge National Laboratory’s Malcolm Stocks develops the Locally Self-Consistent Multiple Scattering (LSMS) electronic structure code. It is designed specifically for large parallel processors to solve the runaway problem of calculating the electronic structure for large numbers of atoms by assigning each individual atom to a single computer node. By carrying out computations on the Intel Paragon XP/S 150, the team uses LSMS to calculate the total energy of copper by modeling as many as 1,024 atoms. LSMS will eventually become the first science application to perform at more than a teraflop on a Cray T3E outside the laboratory. It also will become the second application to run at more than a petaflop and will become optimized for CPU–GPU architectures such as the Cray XK7 Titan. Team members working on LSMS will go on to receive two Association for Computing Machinery Gordon Bell Prizes—one in 1998 and another in 2009.
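The locality idea behind LSMS can be sketched in a few lines: each atom’s calculation involves only the atoms inside a finite local interaction zone (LIZ), so per-atom work stays bounded as the cell grows. The lattice, cutoff radius, and function name below are illustrative assumptions, not taken from the LSMS code itself.

```python
import numpy as np

def liz_sizes(n_cells, a=1.0, r_liz=2.5):
    """Count neighbors inside each atom's local interaction zone (LIZ) for a
    simple cubic lattice of n_cells**3 atoms with lattice constant a."""
    grid = np.arange(n_cells) * a
    pos = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
    counts = []
    for p in pos:
        d = np.linalg.norm(pos - p, axis=1)
        counts.append(int(np.sum((d > 0) & (d <= r_liz))))
    return counts

for n in (4, 6, 8):
    c = liz_sizes(n)
    print(f"{n**3:4d} atoms: max LIZ size = {max(c)}")
# The LIZ size is bounded by the cutoff radius, not by the total atom count,
# so total work grows linearly with the number of atoms -- which is what lets
# an approach like LSMS map one atom onto one node of a large parallel machine.
```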

1998-06-01 00:00:00

Thomas Zacharia Directs CCS

Oak Ridge National Laboratory’s (ORNL’s) Thomas Zacharia becomes the director of the Center for Computational Sciences. In this role, Zacharia will work to strengthen ORNL’s computational and networking infrastructure, and his efforts will lead to the acquisition of a terascale computing facility. Zacharia will be named the deputy associate laboratory director for high-performance computing and then the associate laboratory director for the Computing and Computational Sciences Directorate. Zacharia will ultimately become ORNL’s laboratory director, overseeing a nuclear reactor, a high-power proton accelerator, classified activities, and a complex array of research facilities and construction projects.

1998-11-07 00:00:00

ORNL Gordon Bell Winners

Oak Ridge National Laboratory’s (ORNL’s) Malcolm Stocks and William Shelton run the Locally Self-Consistent Multiple Scattering code with colleagues at 657 gigaflops on a Cray T3E system and win the 1998 Association for Computing Machinery Peak Performance Gordon Bell Prize.

1998-12-12 00:00:00

HPSS Storage System Implemented

For its archival storage, the Center for Computational Sciences implements a High-Performance Storage System (HPSS) developed by Randy Burris, Ken Kliewer, Daniel Million, Deryl Steinert, and Vicky White. In 1998, the center is producing 300 gigabytes of data per month and providing 1 terabyte of disk storage space. HPSS is designed to manage enormous amounts of data produced and used in modern high-performance computing, data collection and analysis, imaging, and enterprise environments.
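Using the figures quoted above, a quick back-of-envelope sketch shows why an archive cannot live on the disk tier alone and why HPSS moves data through a storage hierarchy.

```python
# Back-of-envelope sketch using the figures quoted above: 300 GB of new data
# per month against roughly 1 TB (~1,000 GB) of disk. The disk tier alone
# fills within months, which is why an HPSS deployment migrates data through
# a hierarchy of disk and tape rather than keeping everything on disk.
monthly_gb = 300        # new data per month, from the text
disk_gb = 1_000         # ~1 TB of disk storage, from the text

print(f"Disk fills in about {disk_gb / monthly_gb:.1f} months without migration")
```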

1999-04-13 00:00:00

Eagle Takes Flight

An IBM RS/6000 SP machine known as Eagle, featuring a Power3 architecture, goes online at the Center for Computational Sciences. The machine initially features 124 processors and operates at a peak performance of 99.2 gigaflops; it will eventually grow to 704 processors and, in 2000, become the first Office of Science computing system with a peak performance exceeding 1 teraflop.

1999-09-01 00:00:00

Tunneling Electrons

A team led by Oak Ridge National Laboratory’s William Butler begins using the Center for Computational Sciences' resources, including an IBM Power system, to model the material properties of layered magnetic films, a project that pushes the limits of the center’s compute and storage capacity. The team performs first-principles calculations to model tunneling magnetoresistance (TMR), in which a magnetic tunnel junction is created by “sandwiching” an antiferromagnetic (insulating) barrier between two ferromagnetic (conductive) layers. The team calculates the magnetoresistance of an Fe|MgO|Fe “sandwich” and finds a high-contrast TMR ratio, meaning the material exhibits a high probability of tunneling. This tunneling junction will be confirmed by experimentalists years later, and greater contrasts will be observed between crystalline MgO insulating layers and ferromagnetic materials other than Fe.
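For reference, the TMR ratio is conventionally defined from the junction resistances measured with the two magnetic layers parallel and antiparallel. The sketch below shows that arithmetic with hypothetical resistance values; it is not drawn from the team’s results.

```python
def tmr_ratio(r_parallel, r_antiparallel):
    """Tunneling magnetoresistance ratio, (R_AP - R_P) / R_P.
    A large value means a high-contrast magnetic tunnel junction."""
    return (r_antiparallel - r_parallel) / r_parallel

# Hypothetical junction resistances (arbitrary units), purely to show the
# arithmetic -- these are not values from the Butler team's calculations.
r_p, r_ap = 1.0, 12.0
print(f"TMR = {tmr_ratio(r_p, r_ap):.0%}")   # -> TMR = 1100%
```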

2000-06-20 00:00:00

Falcon Spreads Its Wings

The Center for Computational Sciences receives the 64-processor Compaq AlphaServer SC system known as Falcon. Upon arrival, the system operates at a peak performance of 85.33 gigaflops. It will eventually grow to include 256 processors and operate at a peak performance of 0.5 teraflops.

2001-08-14 00:00:00

Scientific Discovery through Advanced Computing

The US Department of Energy (DOE) launches the Scientific Discovery through Advanced Computing program (SciDAC) to create a new generation of scalable scientific software, applications, middleware, libraries, and mathematical routines needed to make the most of parallel computing systems. On August 14, 2001, DOE announces its first SciDAC awards. Project areas include climate, fusion, chemistry, astrophysics, high-energy physics, and high-performance computing.

2001-11-01 00:00:00

CCS Introduces Cheetah

The Center for Computational Sciences (CCS) announces its new IBM pSeries 690 Turbo system named Cheetah, featuring a Power4 architecture. The system has 864 processors and operates at a peak performance of 4 teraflops.

2003-03-19 00:00:00

CCS Gets a Cray X1

The Center for Computational Sciences (CCS) gets the Cray X1 system, with 28 processors and a peak performance of 358.4 gigaflops. The system will eventually contain 504 processors and operate at a peak performance of 6.451 teraflops.

2003-07-31 00:00:00

INCITE Program Launched

To help research communities fully tap into the capabilities of current and future supercomputers, US Department of Energy Office of Science Director Raymond Orbach establishes the Innovative and Novel Computational Impact on Theory and Experiment program, or INCITE, to seek out computationally intensive, large-scale research projects with potential to advance science.

2003-09-09 00:00:00

SGI Altix Arrives at CCS

Oak Ridge National Laboratory’s Center for Computational Sciences (CCS) receives a 256-processor SGI Altix 3000 system that operates at a peak performance of 1.5 teraflops. The Altix supports multiple models of parallelism and runs a single system image of Linux across all of its processors, making it well suited to a broad range of high-performance computing workloads.

2003-11-01 00:00:00

Revealing Supernova Secrets

A group of computational astrophysicists led by Oak Ridge National Laboratory’s Anthony Mezzacappa simulates a core-collapse supernova, the explosive death of a massive star, and reveals a previously unknown phenomenon—a shock-wave-distorting feature that emerges in the early stages of the star’s demise and contributes to its eventual explosion. The team names the discovery SASI, or the stationary accretion shock instability. The team identifies the feature in two dimensions through work conducted partly on the Oak Ridge Leadership Computing Facility’s IBM Power3 Eagle computer. In 2014, a team of astronomers reports observational evidence from high-energy x-ray telescope data (NASA’s NuSTAR) that supports the SASI model, offering an example of simulation predicting a physical phenomenon before it is observed in nature. SASI goes on to become a fundamental component of high-fidelity supernova simulations upon which future supernova models will build.

2004-11-10 00:00:00

EVEREST Visualization Environment

The Exploratory Visualization Environment for Research in Science and Technology (EVEREST) comes online at the Center for Computational Sciences. EVEREST provides large display walls for visualization, allowing scientists to analyze and compare images and datasets. EVEREST is updated in 2013 to include more detailed images, 3-D capability, and easier access to the environment.

2004-11-30 00:00:00

From CCS to OLCF

The US Department of Energy (DOE) High-End Computing Revitalization Act of 2004 authorizes America’s Leadership Computing Facilities. The first such facility, the Oak Ridge Leadership Computing Facility (OLCF), is established at Oak Ridge National Laboratory with the mission of providing researchers from government, academia, and industry with access to leadership computing resources, with individual programs receiving 100 times more computing capability than available at other facilities. DOE announces in May 2004 that the OLCF will lead the project to build the world’s most powerful supercomputer.

2005-01-01 00:00:00

Detailing Combustion

A team led by Jacqueline Chen at Sandia National Laboratories begins using Oak Ridge Leadership Computing Facility resources to simulate multiscale combustion, which can help engineers develop better predictive models for fuel-efficient engines. Eventually, the team will use the Cray XT4 Jaguar and the Cray XK7 Titan to perform direct numerical simulations of combustion phenomena to look at combustion processes that more efficiently distribute heat and lower nitrogen oxide emissions. By detailing these important mechanisms, the team helps pave the way for automobiles that could use 25 to 50 percent less fuel than those on the road today.

2005-05-01 00:00:00

Jeff Nichols Named CCS Director

Jeff Nichols becomes the acting director of the Center for Computational Sciences (CCS) at Oak Ridge National Laboratory (ORNL). During 2005, Nichols will oversee the installation of two of CCS’s primary supercomputers: Phoenix and Jaguar. Nichols will later become the director of Computer Science and Mathematics and eventually the associate laboratory director for ORNL’s Computing and Computational Sciences.

2005-10-14 00:00:00

Phoenix Soars

Phoenix, a Cray X1E system, makes its debut. Phoenix has 1,014 processors and boasts a peak performance of 18.333 teraflops when it arrives. The system, also known as OLCF-1, begins running simulations related to grand challenge problems in combustion, molecular structures, plasma energy, accelerators, and supernovae.

2005-11-28 00:00:00

Jaguar Up and Running

Jaguar, a Cray XT3 system with 3,748 processors, comes to the Oak Ridge Leadership Computing Facility. The system, also known as OLCF-2, operates at a peak performance of 18 teraflops. Eventually, the system will have 10,424 processors and operate at a peak performance of 54 teraflops. Plans to expand the system begin the same year it is accepted.

2005-11-30 00:00:00

Ricky Kendall to Lead SciComp

Ricky Kendall comes to Oak Ridge National Laboratory (ORNL) from Ames Laboratory at Iowa State University. Kendall becomes the founding leader of the Scientific Computing Group, or SciComp, at the Center for Computational Sciences (CCS). A master of team building, an exceptional code developer, and an accomplished scientist, Kendall will eventually be designated a “Rock Star of HPC” (high-performance computing) by insidehpc.com in 2010 as well as earn UT-Battelle LLC’s title of Group Leader of the Year in 2011. He will also take on the role of chief computational scientist at the National Center for Computational Sciences (NCCS) in 2011.

2005-12-01 00:00:00

First Users Meeting

The Center for Computational Sciences (CCS) conducts its first Users Meeting, which consists of a workshop for project teams with fiscal year 2006 allocations on Phoenix as well as the new Jaguar system. The first 2 days of the workshop include overviews of CCS, Jaguar, and Phoenix architectures, while the third day is devoted to hands-on tutorials to answer porting and optimization questions.

2005-12-31 07:32:53

First CCS Annual Report

The Center for Computational Sciences (CCS) produces its first Annual Report. The report highlights past accomplishments at CCS as well as new computational resources, the High-Performance Storage System, internal committees and organizations, and roadmaps for future systems.

2006-01-01 07:32:53

Buddy Bland to Direct LCF Project

The Center for Computational Sciences (CCS) is tasked with carrying out the Leadership Computing Facility (LCF) Project at Oak Ridge National Laboratory with the goal of developing and installing a petaflops-speed supercomputer at CCS by the end of 2008. Buddy Bland is named the director of the LCF at its inception. Bland joined the staff at Oak Ridge National Laboratory in 1984 and became the computing resources manager of CCS in 1992. After serving as the director of operations for the center from 1996 to 2006, Bland accepts the new role as part of the aggressive effort to upgrade CCS’s Jaguar supercomputer. The center officially changes its name from the Center for Computational Sciences to the National Center for Computational Sciences, or NCCS, the same year.

2006-06-21 07:32:53

Jaguar XT3 Upgraded to XT4

The Oak Ridge Leadership Computing Facility upgrades its Jaguar system (OLCF-2) to an XT4 system. The new system has 23,016 cores and a peak performance of 119 teraflops at the time of its arrival. In June 2007, it becomes the second-fastest computer in the world on the TOP500 list. That same year, it is upgraded to 30,976 cores and operates at a peak performance of 263 teraflops—double its previous computing power.
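For context, a theoretical peak like the 119-teraflop figure is simply the product of core count, clock rate, and flops per cycle. The sketch below reproduces that arithmetic under assumed machine parameters (dual-core 2.6 GHz Opterons issuing two double-precision flops per core per cycle); those specifics are assumptions for illustration, not details given in the timeline.

```python
# Rough sketch of the theoretical-peak arithmetic behind a figure like the
# 119 teraflops quoted above. The clock speed and flops-per-cycle values are
# assumptions for illustration, not figures taken from the timeline.
cores = 23_016
clock_ghz = 2.6          # assumed core clock
flops_per_cycle = 2      # assumed double-precision flops per core per cycle

peak_tflops = cores * clock_ghz * flops_per_cycle / 1_000
print(f"Theoretical peak ~ {peak_tflops:.0f} teraflops")   # ~120 TF
```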

2006-09-01 07:32:53

DD Established

The Director’s Discretionary, or DD, program is established to award computing time to R&D projects, industry projects, projects carrying out development work for INCITE projects, or pilot projects.

2007-01-01 07:32:53

Illuminating Dark Matter

A team of researchers led by astrophysicist Piero Madau at the University of California–Santa Cruz uses the Cray XT3 Jaguar at Oak Ridge National Laboratory to carry out the largest simulation ever of the Milky Way’s dark matter—the invisible material that provides most of the universe’s mass—and its evolution over 13 billion years. The work reveals new details regarding the likely distribution of the invisible substance throughout the galaxy, especially in the Milky Way’s dense inner reaches, and gives astronomers a valuable tool in their search for dark matter. By capturing the substructure of dark matter in the neighborhood of our solar system, the team paves the way for future studies and observations.

2007-04-01 07:32:53

An IBM Blue Gene Named Eugene

An IBM Blue Gene/P Solution system comes to Oak Ridge National Laboratory (ORNL), with 8,192 cores and a peak performance of 27.85 teraflops. The system—named Eugene after ORNL’s first scientific director, Eugene Wigner—is used for materials studies and bioenergy work.

2007-09-01 07:32:53

Breaking Down Biomass

A team led by Oak Ridge National Laboratory’s Jeremy Smith begins using Oak Ridge Leadership Computing Facility systems to provide molecular-level insights to help biofuel researchers overcome hurdles to producing cost-competitive cellulosic ethanol fuel and other high-value products from woody plants and waste biomass. Through a series of simulations on the Cray XT5 Jaguar and, eventually, the Cray XK7 Titan, the team captures key interactions between major components of the plant cell wall along with the enzymes and solvents used in biofuel production. The team becomes the first to perform supercomputer simulations of biomass, demonstrating the value of identifying molecular mechanisms to explain problems in multicomponent molecular systems.

2008-01-15 07:32:53

Lens Visualization Cluster

The Oak Ridge Leadership Computing Facility receives Lens, a 32-node Linux cluster for data analysis and visualization of scientific data generated on Jaguar.

2008-02-01 07:32:53

James Hack to Direct NCCS

James J. Hack, former senior scientist at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, is appointed director of the National Center for Computational Sciences (NCCS) at Oak Ridge National Laboratory (ORNL). After receiving his PhD in atmospheric dynamics from Colorado State University in 1980, Hack became a research staff member at the IBM Thomas J. Watson Research Center, where he focused on mapping scientific algorithms to high-performance computing architectures. In 1984, he moved to NCAR, where he led the development of the Community Atmosphere Model, or CAM. As director of NCCS, Hack will identify major high-performance computing needs from scientific and hardware perspectives and put forth strategies to meet those needs as machines evolve to the petascale and beyond. Hack will also lead the Climate Change Initiative at ORNL to advance the state of the art in Earth system discovery and policy.

2008-04-01 07:32:53

OLCF Gets Spider File System

Oak Ridge National Laboratory gets a Lustre-based Spider file system that can hold 1,000 times as much data as that contained in the printed collection of the Library of Congress—assuming one word is 10 bytes. Spider will eventually become the largest-scale Lustre file system in the world.
