The International Conference for High Performance Computing, Networking, Storage and Analysis
Sponsored by IEEE and ACM

HPC Impact Showcase

Premier Sponsors: Deere & Company and Procter & Gamble

Continuing for a second year, the HPC Impact Showcase reveals the real-world applications of high performance computing (HPC) via presentations in a theater setting on the exhibit floor.

The Showcase is designed to introduce attendees to the many ways that HPC matters in our world, through testimonials from companies large and small. Their stories relate real-world experiences of what it took to embrace HPC to better compete and succeed in their lines of business.

Whether you are new to HPC or a long-time professional, you are sure to learn something new and exciting in the HPC Impact Showcase. Presentations will be framed for a non-expert audience interested in technology, and will discuss how the use of HPC has resulted in design, engineering, or manufacturing innovations.

Program of Events

Introduction to the HPC Impact Showcase

Date: Monday, November 17th
Time: 7:15-7:30PM
Presenter(s): David Martin (Argonne National Laboratory), David Halstead (National Radio Astronomy Observatory)

Abstract: An overview and highlights of the exciting presentations to be given by industry leaders throughout the week. Welcome to the HPC Impact Showcase!

 


The Exascale Effect: The Benefits of Supercomputing Investment for U.S. Industry

Date: Monday, November 17th
Time: 7:30-8:00PM
Presenter(s): Cynthia R. McIntyre and Chris Mustain (Council on Competitiveness)

Abstract: The Council on Competitiveness has worked for over a decade to analyze, promote, and strengthen America’s use of advanced computing for economic competitiveness. The Council brings together top high performance computing leaders in industry, academia, government, and the national laboratories. We know from experience that to out-compete is to out-compute.

Supported by a grant from the U.S. Department of Energy, the Council recently issued the report Solve – The Exascale Effect: The Benefits of Supercomputing Investment for U.S. Industry. The report takes a fresh look at: (1) the value of HPC to U.S. industry, (2) key actions that would enable companies to leverage advanced computing more effectively, and (3) how American industry benefits directly and indirectly from government investment at the leading edge of HPC.

The leaders of the Council’s HPC Initiative, Senior Vice President Cynthia McIntyre and Vice President Chris Mustain, will share the findings of the report.

 


IDC Study for DOE: Evaluating HPC ROI

Date: Monday, November 17th
Time: 8:00-8:30PM
Presenter(s): Steve Conway (IDC)

Abstract: Across the world, government funding for large supercomputers is undergoing a multi-year shift. The historical heavy tilt toward national security and advanced science and engineering increasingly needs to be counterbalanced by arguments for return on investment (ROI). The U.S. Department of Energy's (DOE) Office of Science and National Nuclear Security Administration recently awarded International Data Corporation (IDC) a three-year grant to conduct a full study of ROI in high performance computing (HPC). The full study follows IDC's successful completion of a 2013 pilot study on this topic for DOE. IDC's Steve Conway will discuss the goals and methodology of the three-year full study.

 


HPC: A Matter of Life or Death

Date: Tuesday, November 18th
Time: 10:20-11:00AM
Session Chair: David Martin (Argonne National Laboratory)
Presenter(s): Terry Boyd (Centers for Disease Control and Prevention)

Abstract: This session will use the example of a fictitious "zombie virus" outbreak to demonstrate the multiple applications of HPC to public health, including:

1. Big data analysis to detect outbreaks
2. Spatial modeling of potential outbreaks to develop emergency response plans (see the sketch below)
3. Genomic evaluation of suspected disease outbreaks
4. Drug manufacturing and supply modeling
5. Contact tracing and response modeling to evaluate and improve response activities
6. Post-event analysis to prepare for the next epidemic

Though the session uses a fictitious zombie outbreak as the basis for discussion in order to increase interest, the approaches described are applicable to real-life outbreaks such as Ebola and H1N1.
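To make item 2 above concrete, here is a minimal sketch of the kind of compartmental outbreak model that gets scaled up on HPC systems. The SIR formulation and every parameter value below are illustrative assumptions, not material from the CDC talk.

    def sir_step(s, i, r, beta, gamma, dt):
        """Advance a basic SIR compartmental model by one time step.
        s, i, r are population fractions; beta is the transmission rate
        and gamma the recovery rate (both per day)."""
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        return (s - new_infections,
                i + new_infections - new_recoveries,
                r + new_recoveries)

    # Illustrative parameters: R0 = beta/gamma = 2.5, 0.1% initially infected.
    s, i, r = 0.999, 0.001, 0.0
    beta, gamma, dt = 0.5, 0.2, 0.1
    for _ in range(1200):               # 120 days in 0.1-day steps
        s, i, r = sir_step(s, i, r, beta, gamma, dt)
    print(f"fraction ever infected after 120 days: {r + i:.1%}")

Real emergency-response planning replaces these three scalar compartments with stochastic, spatially resolved, agent-based populations, which is exactly where the supercomputing comes in.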

 


HPC Productivity - Tearing Down the Barriers to HPC for CAE Simulation

Date: Tuesday, November 18th
Time: 11:00-11:40AM
Session Chair: David Martin (Argonne National Laboratory)
Presenter(s): Steve Legensky (Intelligent Light)

Abstract: Engineering users are well aware of the familiar hurdles of exploiting HPC: generating data faster than it can be evaluated, software scalability, licensing policies that limit production deployment, and the need to keep internal resources working on tasks that are “on mission” for the organization.

Learn how Corvid Technologies, an engineering services and software provider and longtime user of HPC, is delivering high-quality, CAE-driven engineering for applications where accuracy and timeliness are critical. They are using DOE’s ultra-scalable, open-source VisIt software with their highly scalable Velodyne solver to overcome the obstacles to HPC faced by many in the engineering community.

The use of VisIt brought with it some new challenges common to open-source software. Corvid turned to Intelligent Light for professional support and custom VisIt development that helped them complete their workflow and ensure that VisIt meets their HPC-enabled engineering needs. This development work produced multiple improvements to the VisIt codebase that have been contributed back to the VisIt repository so they may benefit other VisIt users.

 


Impact of HPC on Nuclear Energy: Advanced Simulations for the Westinghouse AP1000® Reactor

Date: Tuesday, November 18th
Time: 11:40AM-12:20PM
Session Chair: Suzy Tichenor (Oak Ridge National Laboratory)
Presenter(s): Fausto Franceschini (Westinghouse)

Abstract: Westinghouse Electric Company, a leading supplier of nuclear plant products and technologies, performed advanced simulations on the Oak Ridge Leadership Computing Facility that reproduce with unprecedented fidelity the conditions occurring during the startup of the AP1000 advanced nuclear reactor. These simulations used advanced modeling and simulation capabilities developed by the Consortium for Advanced Simulation of Light Water Reactors (CASL), a U.S. Department of Energy (DOE) Energy Innovation Hub led by Oak Ridge National Laboratory. The AP1000 reactor is an advanced reactor design with enhanced passive safety and improved operational performance; eight units are currently being built, four in China and four in the U.S.

The high-fidelity simulation toolkit used for this effort supported a complete understanding of the behavior of this advanced reactor design. The novel simulations were accomplished using a 60-million-core-hour Early Science allocation on Titan at the Oak Ridge Leadership Computing Facility. Use of CASL's advanced nuclear methods and software, designed to take advantage of these vast computational resources, has been a key factor in this successful endeavor. Prior to this project, results of comparable fidelity were unavailable because limitations in code scalability or in the available computational resources made the required computing time impractical.

This endeavor, pursued through a collaborative effort of U.S. national labs and private industry, typifies the benefits that HPC can bring to the U.S. public by further ensuring the viability and safety of advanced energy sources.

 


HPC and Open Source

Date: Tuesday, November 18th
Time: 12:20PM-1:00PM
Session Chair: Suzy Tichenor (Oak Ridge National Laboratory)
Presenter(s): Omar Kebbeh (Deere and Company)

Abstract: Open-source software has been around for generations. Some codes were born at universities and died there; others were born in national laboratories and made their way out as ISV codes; still others were born at universities and remain open, kept alive by the scientists who use them.

This talk will address some of the issues industry faces with engineering open-source codes. Both the availability and the challenges will be discussed and posed as an "open question" to the participants.

 


HPC Impact on Developing Cleaner and Safer Aero-Engines

Date: Tuesday, November 18th
Time: 1:00-1:40PM
Session Chair: Melyssa Fratkin (Texas Advanced Computing Center)
Presenter(s): Jin Yan (GE)

Abstract: The aero-engine is an essential part of the aviation industry. Over the past 40 years, the fuel efficiency and emissions of the aero-engine have improved by approximately 75%, largely due to the continual increase of combustor temperatures. Increasing the combustor exit temperature carries with it many design challenges. General Electric (GE) has continuously invested in high performance computing to address these challenges. Today, researchers and designers leverage state-of-the-art aerodynamics and multiphysics tools to understand the finest flow scales and how small changes impact the overall performance of aero-engines. This talk will provide an overview of HPC at GE today and an outlook on the future of large-scale simulations in the aviation industry.

 


Scalable HPC Software – “It Takes a Village”

Date: Tuesday, November 18th
Time: 1:40-2:20PM
Session Chair: Melyssa Fratkin (Texas Advanced Computing Center)
Presenter(s): Mike Trutt (Northrop Grumman)

Abstract: In today’s competitive world, High Performance Computing (HPC) requires software that can scale from just a few cores up to thousands. Achieving this takes a diversity of talent and skills to create scalable software that maximizes utilization of available resources.

There are several steps to creating scalable HPC software. Whether the application requires thousands of cores (like Computational Fluid Dynamics (CFD)) or is limited to a few dozen (such as a real-time signal processing platform), and whether the hardware is commodity or custom, the discipline and principles of what makes software scale remain the same.

Northrop Grumman’s approach to producing scalable HPC software begins with modeling and simulation activities that feed software design requirements. The software in turn is designed for scalability, including such factors as middleware, scientific libraries, and benchmarking. Agile management can be key, as such approaches allow for accurate planning and course correction. Through cross-discipline interaction and training across the software lifecycle, scalable HPC software can enable today’s endeavors in an increasingly competitive world.
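One way to make the scaling discipline above concrete is Amdahl's law, a standard result included here for context rather than taken from the talk: if a fraction p of a program's runtime can be parallelized, the best possible speedup on n cores is

    S(n) = \frac{1}{(1 - p) + p/n}

so a code that is 95% parallel (p = 0.95) can never exceed a 20x speedup (S(\infty) = 1/(1 - p) = 20) no matter how many thousands of cores are available. This is why serial bottlenecks hiding in middleware, I/O, and scientific libraries dominate the design work described above.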

 


High Performance Computing and Neuromorphic Computing

Date: Wednesday, November 19th
Time: 10:20-11:00AM
Session Chair: Ron Hawkins (San Diego Supercomputer Center)
Presenter(s): Mark Barnell (Air Force Research Laboratory)

Abstract: The Air Force Research Laboratory Information Directorate Advanced Computing Division (AFRL/RIT) has focused research efforts on how best to model the human visual cortex. In 2008 it became apparent that parallel applications and large distributed computing clusters (HPC) were the way forward to explore new methods. In 2009, at our High Performance Computing Affiliated Resource Center (HPC-ARC), we designed and built a large-scale interactive computing cluster. CONDOR, the largest interactive Cell cluster in the world, integrated heterogeneous processors: IBM Cell Broadband Engine multicore CPUs, NVIDIA GPGPUs, and powerful Intel x86 server nodes in a 10GbE star hub network and 20Gb/s InfiniBand mesh, with a combined capability of 500 teraflops. Applications developed and running on CONDOR include neuromorphic computing applications, video synthetic aperture radar (SAR) backprojection, and the Autonomous Sensing Framework (ASF). This presentation will show progress on performance optimization using the heterogeneous clusters and how neuromorphic architectures are advancing capabilities for autonomous systems within the Air Force.

 


High-Order Methods for Seismic Imaging on Manycore Architectures

Date: Wednesday, November 19th
Time: 11:00-11:40AM
Session Chair: Ron Hawkins (San Diego Supercomputer Center)
Presenter(s): Amik St-Cyr (Shell)

Abstract: The lion's share of Shell's global HPC capacity is consumed by geophysical seismic imaging. Legacy algorithms and software must be replaced with fundamentally different ones that scale to thousands of possibly heterogeneous cores. Reverse Time Migration is an example of a wave-based imaging algorithm. In this talk, we present how we're adapting our algorithms to tackle the many-core era and how HPC impacts our business.
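As background on the algorithm named above: Reverse Time Migration forward-propagates a source wavefield, back-propagates the recorded receiver data, and cross-correlates the two. Here is a minimal sketch under simplifying assumptions (2-D constant-density acoustics, a second-order leapfrog scheme, no absorbing boundaries); all names are illustrative and this is not Shell's code.

    import numpy as np

    def laplacian(p, dx):
        """Five-point finite-difference Laplacian (edges left at zero)."""
        lap = np.zeros_like(p)
        lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                           + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1]) / dx**2
        return lap

    def step(p_cur, p_prev, c, dt, dx):
        """One leapfrog time step of the acoustic wave equation.
        Stable in 2-D when c*dt/dx <= 1/sqrt(2)."""
        return 2.0 * p_cur - p_prev + (c * dt)**2 * laplacian(p_cur, dx)

    def rtm_image(velocity, wavelet, src_iz, src_ix, rec_data, rec_iz, dt, dx):
        """Zero-lag cross-correlation RTM: forward-propagate the source,
        back-propagate the recorded traces, and correlate the wavefields."""
        nt = len(wavelet)
        nz, nx = velocity.shape
        # Forward pass: store a snapshot of the source wavefield each step.
        snaps = np.zeros((nt, nz, nx))
        p_prev = np.zeros((nz, nx))
        p_cur = np.zeros((nz, nx))
        for it in range(nt):
            p_next = step(p_cur, p_prev, velocity, dt, dx)
            p_next[src_iz, src_ix] += wavelet[it] * dt**2   # inject source
            p_prev, p_cur = p_cur, p_next
            snaps[it] = p_cur
        # Backward pass: inject receiver traces in reverse time and apply
        # the imaging condition image += source_field * receiver_field.
        image = np.zeros((nz, nx))
        p_prev[:] = 0.0
        p_cur[:] = 0.0
        for it in reversed(range(nt)):
            p_next = step(p_cur, p_prev, velocity, dt, dx)
            p_next[rec_iz, :] += rec_data[it] * dt**2       # inject data
            p_prev, p_cur = p_cur, p_next
            image += snaps[it] * p_cur
        return image

Storing every forward snapshot, as done here, is infeasible at production scale; real implementations checkpoint or reconstruct the source wavefield, and that memory/recompute trade-off is part of why the algorithm must be redesigned for many-core hardware.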

 


Using Next Gen Sequencing and HPC to Change Lives

Date: Wednesday, November 19th
Time: 11:40AM-12:20PM
Session Chair: David Skinner (Lawrence Berkeley National Laboratory)
Presenter(s): Shane Corder (Children's Mercy Hospital)

Abstract: After nearly 30 years, the human genome is now being used in the clinical space. The researchers and engineers of the Center for Pediatric Genomic Medicine (CPGM) at Children’s Mercy Hospital in Kansas City, Missouri use HPC to decipher an infant’s genome, rapidly looking for genetic markers of disease to make a dramatic impact on the lives of children. CPGM is using HPC to push the turnaround time for this life-saving application down to just 50 hours.

 


Real-time Complex Event Processing in DSPs

Date: Wednesday, November 19th
Time: 12:20-1:00PM
Session Chair: David Skinner (Lawrence Berkeley National Laboratory)
Presenter(s): Ryan Quick and Arno Kolster (PayPal)

Abstract: Traditionally, complex event processing utilizes text search and augmented off-line analytics to provide insight in near real time. PayPal’s pioneering work leverages the true real-time capabilities of digital signal processors, event ontology, and encoding technologies to provide real-time pattern recognition and anomaly detection. Their novel approach not only delivers true real-time performance, but does so at a fraction of the power cost generally associated with low-latency analytics. Partnering with HP and Texas Instruments, PayPal is delivering HPC solutions that not only capitalize on the power of offload compute, but also pave the way for new methodologies as we march toward exascale.
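PayPal's DSP pipeline is proprietary, but the flavor of constant-memory streaming anomaly detection can be sketched in a few lines; the exponentially weighted statistics, parameter values, and class name below are illustrative assumptions, not their method.

    class StreamingAnomalyDetector:
        """Flag events that deviate sharply from an exponentially weighted
        running mean/variance -- a constant-memory, constant-time kernel of
        the sort that maps well onto offload processors."""

        def __init__(self, alpha=0.05, threshold=4.0):
            self.alpha = alpha          # smoothing factor for the running stats
            self.threshold = threshold  # alarm when |z-score| exceeds this
            self.mean = None
            self.var = 1.0

        def observe(self, x):
            if self.mean is None:       # seed the statistics on the first event
                self.mean = float(x)
                return False
            z = (x - self.mean) / (self.var ** 0.5 + 1e-12)
            delta = x - self.mean       # update stats after scoring the sample
            self.mean += self.alpha * delta
            self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta)
            return abs(z) > self.threshold

    detector = StreamingAnomalyDetector()
    for latency_ms in (12, 11, 13, 12, 240, 12):    # toy event stream
        if detector.observe(latency_ms):
            print(f"anomalous event: {latency_ms} ms")

Because each event costs a handful of multiply-adds and no history buffer, kernels of this shape are natural candidates for offload to low-power signal processors.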

 


Lockheed Martin's Special Multilevel and Overlapping Tier Support for the DoD HPC Centers

Date: Wednesday, November 19th
Time: 1:00-1:40PM
Session Chair: Suzy Tichenor (Oak Ridge National Laboratory)
Presenter(s): Dr. James C. Ianni & Mr. Ralph A. McEldowney (Lockheed Martin / Army Research Laboratory & DoD High Performance Computing Modernization Program)

Abstract: This presentation will discuss the benefit to the U.S. Warfighter provided by Lockheed Martin's multiple levels of HPC resource support for the various Department of Defense (DoD) Supercomputing Resource Centers (DSRCs). The DoD High Performance Computing Modernization Program supports the HPC needs of the DoD Research, Development, Test, and Evaluation (RDT&E) community through several of the DSRCs: the Army Research Laboratory (ARL) DSRC, the Navy DSRC, the Air Force Research Laboratory (AFRL) DSRC, and the Engineer Research and Development Center (ERDC) DSRC. Each of these centers has several state-of-the-art supercomputers on which DoD scientists and engineers run high-fidelity simulations in fluid dynamics, structural mechanics, computational chemistry and biology, weather prediction, electromagnetics, and other related disciplines. These simulations ultimately provide a great benefit to the U.S. Warfighter.

Supporting this work involves a synergistic relationship among account creation, software utilization (commercial, open-source, and custom), optimization (of compilers, code, and/or the underlying theory), hardware utilization and maintenance, and user interaction. These synergies are enabled by cultivating communication among the several tier groups within and across the individual centers. This synergy is especially important for HPC user interaction, as user requests can (and often do) span many levels of experience: from getting an account and connecting to an HPC resource, to compiling, job submission, and script writing, to determining how to calculate spectra for an excited-state, high-spin, charged f-element in various solvents. This presentation will describe the experiences of incorporating and actively utilizing all of the above-mentioned HPC support, as well as the ultimate impact HPC has had on the Department of Defense.

 


Digital Consumer Products: Surfactant Properties from Molecular Simulations

Date: Wednesday, November 19th
Time: 1:40-2:20PM
Session Chair: Suzy Tichenor (Oak Ridge National Laboratory)
Presenter(s): Peter H. Koenig (Procter and Gamble)

Abstract: Wormlike micelles (WLMs) provide the basis for the structure and rheology of many consumer products. The composition, including the concentration of surfactants and the level of additives such as perfumes and salt, critically controls the structure and rheological properties of WLM formulations. An extensive body of research is dedicated to the experimental characterization and modeling of WLM properties. However, the link to the molecular composition and the micellar scale has received only limited attention, limiting the rational design of WLM formulations. We are developing and refining methods to predict properties of micelles, including cross-sectional geometry and composition, persistence length (related to the bending stiffness), and scission energy, using molecular dynamics simulations. This presentation will give an overview of the modeling program, with participants from Procter and Gamble, U. Michigan, XSEDE, Oak Ridge National Laboratory, and U. Cincinnati.
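As background on one of the quantities above (not from the abstract itself): the persistence length is commonly extracted from molecular dynamics trajectories via the worm-like chain relation, in which tangent-tangent correlations along the micelle contour decay exponentially with arc length s:

    \langle \mathbf{t}(s) \cdot \mathbf{t}(0) \rangle = e^{-s/\ell_p},
    \qquad \ell_p = \frac{\kappa}{k_B T}

where \kappa is the bending stiffness and k_B T the thermal energy; fitting the measured decay of this correlation function yields \ell_p, which is how a persistence length "relating to the bending stiffness" is computed from simulation.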

 


Is Healthcare Ready for HPC (and is HPC Ready for Healthcare)?

Date: Thursday, November 20th
Time: 10:20-11:00AM
Session Chair: David Halstead (National Radio Astronomy Observatory)
Presenter(s): Patricia Kovatch (Mount Sinai Hospital)

Abstract: HPC is readily accepted (and perhaps expected) to tackle certain kinds of scientific questions in quantum chemistry, astrophysics and earthquake simulations. But how easy is it to apply typical HPC “know-how” to healthcare and medical applications? What kinds of help can HPC provide, and what are the barriers to success?

Biomedical researchers and clinicians are open to new tools and approaches to advance the diagnosis and treatment of human disease. One such approach is engaging interdisciplinary teams of computer scientists, applied mathematicians, and computational scientists to work toward an improved understanding of the biological and chemical mechanisms behind disease.

Although progress is nascent, the opportunities and benefits of applying HPC resources and techniques to accelerate biomedical research and improve healthcare delivery are clear. One example of early success is the use of molecular dynamics to better understand protein interactions to design non-addictive painkillers. Clinicians are also using HPC applications for 3D visualizations of patient-specific surgical preparation for neurosurgery and cardiac surgery. State-of-the-art personalized and precision medicine is based on genomic sequencing and analysis on HPC resources.

 


High Performance Computing: An Essential Resource for Oil and Gas Exploration and Production

Date: Thursday, November 20th
Time: 11:00-11:40AM
Session Chair: David Halstead (National Radio Astronomy Observatory)
Presenter(s):  Henri Calandra (Total)

Abstract: For several decades, the oil and gas industry has been continuously challenged to produce more hydrocarbons in response to growing world demand for energy. Finding new economical oil traps has become increasingly difficult, and the industry must invest in the design and definition of new advanced exploration and production tools. The application of high performance computing to oil and gas has dramatically increased the effectiveness of seismic exploration and reservoir management. Significantly enhanced computational algorithms and more powerful computers have provided a much better understanding of the distribution and description of complex geological structures, opening new frontiers in unexplored geological areas.

Seismic depth imaging and reservoir simulation are the two main domains that take advantage of the rapid evolution of high performance computing. Seismic depth imaging provides invaluable, highly accurate subsurface images, reducing the risk of deep and ultra-deep offshore exploration. Enhanced computational reservoir simulation models optimize and increase the predictive capabilities and recovery rates of subsurface assets. With an order-of-magnitude increase in computing capability, reaching an exaflop by the end of this decade, the reduction in computing time combined with rapid growth in data sets and problem sizes will provide far richer information to analyze. While next-generation codes are developed to give access to new information of unrivaled quality, these codes will also present new challenges in building increasingly complex and integrated solutions and tools to achieve exploration and production goals.

 


HPC – Accelerating Science through HPC

Date: Thursday, November 20th
Time: 11:40AM-12:20PM
Session Chair: David Martin (Argonne National Laboratory)
Presenter(s): Dee Dickerson (Dow)

Abstract: Today Dow has over 5,000 CPU cores used for a multitude of research projects, manufacturing process design, problem solving, and optimization of existing processes. High performance computing (HPC) has also enabled better materials design and models for plant troubleshooting, especially in situations where time is the main constraint.

HPC has enabled Dow researchers to meet stringent timelines and deliver implementable solutions to businesses. HPC provides the platform where large, complex reacting-flow models can be analyzed in parallel to significantly shorten delivery time. The science and engineering community at Dow continues to advance technologies and develop cutting-edge computational capabilities involving some of the most complex multiphase reactive processes in dispersion applications. Such modeling capability development is only possible with the advancement of the high-speed, high-capacity, large-memory HPC systems available to industry today.

This presentation will showcase the advances Dow researchers have made by combining HPC with lab experiments.

 


HPC Matters to Dresser-Rand (working title)

Date: Thursday, November 20th
Time: 12:20-1:00PM
Session Chair: David Martin (Argonne National Laboratory)
Presenter(s): Ravichandra Srinivasan (Dresser-Rand)

Abstract: pending approval
