Invited Talks
For the SC13 conference, talk series featured in years past, such as Masterworks, Plenary Talks, and State-of-the-Field, were combined under a single banner called "Invited Talks." We continue this practice for SC14, with renewed efforts to develop Invited Talks as a premier component of the Program.
Twelve Invited Talks will feature leaders in the areas of high-performance computing, networking and storage. Invited Talks will typically concern innovative technical contributions and their applications to address the critical challenges of the day.
Additionally, these talks will often concern the development of a major area through a series of important contributions and provide insights within a broader context and from a longer-term perspective. At Invited Talks, you should expect to hear about pioneering technical achievements, the latest innovations in supercomputing and data analytics, and broad efforts to answer some of the most complex questions of our time.
For details of when and where to hear these talks, please visit the interactive program schedule.
Quantum Computing Paradigms for Probabilistic Inference and Optimization
Masoud Mohseni
Google
Over the past 30 years, several computational paradigms have been developed based on the premise that the laws of quantum mechanics could provide radically new and more powerful methods of information processing. One of these approaches is to encode the solution of a computational problem into the ground state of a programmable many-body quantum Hamiltonian. Although there is empirical evidence for quantum enhancement in certain problem instances, there is not yet a full theoretical understanding of the conditions for quantum speedup on problems of practical interest, especially hard combinatorial optimization and inference tasks in machine learning. In this talk, I will provide an overview of quantum computing paradigms and discuss the progress at the Google Quantum Artificial Intelligence Lab towards developing the general theory and overcoming practical limitations. Furthermore, I will discuss two algorithms that we have recently developed, known as Quantum Principal Component Analysis and the Quantum Boltzmann Machine.
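To make the ground-state encoding concrete, here is a minimal sketch, assuming a made-up four-node MAX-CUT instance, of how a combinatorial optimization problem maps to an Ising Hamiltonian whose ground state encodes the answer; the graph, weights, and brute-force search are illustrative and are not taken from the talk.

```python
# Illustrative sketch (not from the talk): encode MAX-CUT on a tiny graph
# as an Ising Hamiltonian and find its ground state by exhaustive search.
import itertools

# Hypothetical 4-node graph given as weighted edges (i, j, weight).
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 0.5)]
n = 4

def ising_energy(spins):
    # H = sum_{(i,j)} w_ij * s_i * s_j; minimizing H maximizes the cut weight,
    # since each edge with opposite spins contributes -w_ij.
    return sum(w * spins[i] * spins[j] for i, j, w in edges)

# Brute force is feasible only for tiny n, which is exactly why hardware that
# relaxes into the ground state of a programmable Hamiltonian is attractive
# for larger instances.
best = min(itertools.product([-1, +1], repeat=n), key=ising_energy)
cut = [(i, j) for i, j, _ in edges if best[i] != best[j]]
print("ground-state spins:", best, "cut edges:", cut)
```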
Super Debugging: It Was Working Until I Changed...
David Abramson
University of Queensland
Debugging software is a complex, error-prone, and often frustrating experience. Typically, there is only limited tool support, and many of the available tools do little more than allow a user to control the execution of a program and examine its runtime state. This process becomes even more difficult in supercomputers that exploit massive parallelism, because the state is distributed across processors and additional failure modes (such as race and timing errors) can occur. We have developed a debugging strategy called "Relative Debugging" that allows a user to compare the runtime state of two executing programs, one being a working "reference" code and the other being a test version. In this talk I will outline the basic ideas of relative debugging and give examples of how it can be applied to debugging supercomputing applications.
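As a rough illustration of the comparison at the heart of relative debugging (a sketch of the idea only, not Abramson's tooling), the fragment below checks corresponding variables captured from a "reference" run and a "test" run at the same program point; the variable names, capture mechanism, and tolerance are assumptions.

```python
# Illustrative sketch of the relative-debugging idea: compare the state of a
# "reference" run and a "test" run at a chosen breakpoint and report where
# they diverge beyond a tolerance.
import numpy as np

def compare_state(reference, test, tol=1e-9):
    """reference/test: dicts mapping variable names to NumPy arrays
    captured at the same breakpoint in the two executions."""
    diffs = {}
    for name, ref_val in reference.items():
        test_val = test.get(name)
        if test_val is None:
            diffs[name] = "missing in test run"
        elif ref_val.shape != test_val.shape:
            diffs[name] = f"shape {ref_val.shape} vs {test_val.shape}"
        else:
            err = np.max(np.abs(ref_val - test_val))
            if err > tol:
                diffs[name] = f"max abs difference {err:.3e}"
    return diffs

# Example: the test run has perturbed one entry of array "u".
ref = {"u": np.linspace(0.0, 1.0, 8)}
tst = {"u": ref["u"].copy()}
tst["u"][3] += 1e-3
print(compare_state(ref, tst))  # -> {'u': 'max abs difference 1.000e-03'}
```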
Usable Exascale and Beyond Moore’s Law
Horst Simon
Lawrence Berkeley National Laboratory
As documented by the TOP500, high performance computing (HPC) has been the beneficiary of uninterrupted growth, and the performance of the top HPC systems doubled about every year until 2004, when Dennard scaling tapered off. This growth was based on the contributions of Moore’s law and the increasing parallelism of the highest-end systems. Continued HPC system performance increases were then obtained by doubling parallelism. However, over the last five years HPC performance growth has been slowing measurably, and in this presentation several reasons for this slowdown will be analyzed. To reach usable exascale performance over the next decade, some fundamental changes will have to occur in HPC systems architecture. In particular, a transition from a compute-centric to a data-movement-centric point of view needs to be considered. Alternatives, including quantum and neuromorphic computing, have also been considered. The prospects of these technologies for post-Moore’s Law supercomputing will be explored.
MuMMI: A Modeling Infrastructure for Exploring Power and Execution Time Tradeoffs
Valerie Taylor
Texas A&M University
MuMMI (Multiple Metrics Modeling Infrastructure) is an infrastructure that facilitates the measurement and modeling of performance and power for large-scale parallel applications. MuMMI builds upon three existing tools: Prophesy for performance modeling and prediction of parallel applications, PAPI for hardware performance counter monitoring, and PowerPack for power measurement and profiling. The MuMMI framework develops models of runtime and power based upon performance counters; these models are used to explore tradeoffs and identify methods for improving energy efficiency. In this talk we will describe the MuMMI framework and present examples of the use of MuMMI to improve the energy efficiency of parallel applications.
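As a hedged sketch of the counter-based modeling the abstract describes (not MuMMI's actual models or API), the example below fits a least-squares power model from a few hypothetical hardware counters and uses it to estimate the effect of a tuning change; the counter names and measurements are invented.

```python
# Illustrative sketch (not MuMMI itself): fit a linear model of power draw
# from hardware performance counters, then use it to estimate the effect of
# reducing memory traffic.
import numpy as np

# Hypothetical per-interval samples: [instructions, L2 misses, DRAM accesses]
counters = np.array([
    [1.0e9, 2.0e6, 5.0e5],
    [1.2e9, 2.5e6, 7.0e5],
    [0.8e9, 1.5e6, 4.0e5],
    [1.5e9, 3.0e6, 9.0e5],
])
power_watts = np.array([95.0, 104.0, 88.0, 115.0])  # measured alongside

# Least-squares fit: power ~ c0 + c1*instr + c2*l2miss + c3*dram
X = np.hstack([np.ones((len(counters), 1)), counters])
coeffs, *_ = np.linalg.lstsq(X, power_watts, rcond=None)

# Predict power for a tuned run that halves DRAM accesses.
tuned = np.array([1.0, 1.0e9, 2.0e6, 2.5e5])
print("predicted power for tuned run: %.1f W" % (tuned @ coeffs))
```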
A Curmudgeon's View of High Performance Computing
Michael Heath
University of Illinois at Urbana-Champaign
This talk addresses a number of concepts and issues in high performance computing that are often misconceived, misconstrued, or misused. Specific questions considered include why Amdahl remains relevant, what Moore actually predicted, the uses and abuses of speedup, and what scalability should really mean. These observations are based on more than thirty years' experience in high performance computing, wherein one becomes an expert on mistakes by making a lot of them.
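For readers who want the arithmetic behind the first of those questions, a short worked example of Amdahl's law (mine, not the speaker's) shows why a small serial fraction still caps speedup:

```python
# Amdahl's law: with serial fraction f, speedup on p processors is
#   S(p) = 1 / (f + (1 - f) / p),
# so even a 5% serial fraction limits speedup to 1/f = 20x, no matter how
# many processors are added.
def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (16, 256, 4096):
    print(p, round(amdahl_speedup(0.05, p), 2))
```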
Supercomputing Trends, Opportunities and Challenges for the Next Decade
Jim Sexton
IBM
The last decade has been an extraordinary period in the development of supercomputing. Computing performance has continued to scale with Moore's law even through the end of classic silicon scaling, 100+ Petaflop systems are on the horizon, architectural innovations abound and commercial installations have grown to rival traditional research and academic systems in capability and performance. The next decade promises to be even more extraordinary. Exascale will soon be in reach, architectures are embracing heterogeneity at unprecedented levels, commercial uses are driving capability in unexpected directions and fundamentally new technologies are beginning to show promise. In this presentation, we will discuss some of the emerging trends in supercomputing, some of the emerging opportunities both for technical and commercial uses of supercomputing, and some of the very significant challenges that are increasingly threatening to inhibit progress. The next decade will be a great time to be a systems designer!
Using Supercomputers to Discover the 100 Trillion Bacteria Living Within Each of Us
Larry Smarr
University of California, San Diego
The human body is host to 100 trillion microorganisms, ten times the number of cells in the human body, and these microbes contain 100 times the number of DNA genes that our human DNA does. The microbial component of our “superorganism” comprises hundreds of species with immense biodiversity. Thanks to the National Institutes of Health’s Human Microbiome Project, researchers have been discovering the states of the human microbiome in health and disease. To put a more personal face on the “patient of the future,” I have been collecting massive amounts of data from my own body over the last five years, which reveals detailed examples of the episodic evolution of this coupled immune-microbial system. An elaborate software pipeline, running on high performance computers, reveals the details of the microbial ecology and its genetic components. We can look forward to revolutionary changes in medical practice over the next decade.
MATLAB Meets the World of Supercomputing
Cleve Moler
MathWorks
MATLAB played a role in the very first Supercomputing conferences in 1988 and 1989. But then, for 15 years, MathWorks had little to do with the world of supercomputing. We returned in 2004 to SC04 with the introduction of parallel computing capability in MATLAB. We have had an increasing presence at the SC conferences ever since. I will review our experience over these last 10 years with tools for high performance computing ranging from GPUs and multicore to clusters and clouds.
Meeting the Computational Challenges Associated with Human Health
Philip Bourne
National Institutes of Health
It is my guess that by the end of this decade, healthcare will be a predominantly digital enterprise and, for the first time (surprising to say, perhaps), patient-centric. Such a shift from analog research and diagnosis and a provider-centric healthcare system to an open digital system is a major change, with opportunities and challenges across all areas of computation. I will describe some of these challenges and opportunities, indicate how the NIH is meeting them, and explain how we need your help.
The Transformative Impact of Parallel Computing for Real-Time Animation
Lincoln Wallen
DreamWorks Animation
Over the course of 20 years, DreamWorks Animation has been a leader in producing award-winning computer-generated (CG) animated movies such as Shrek, Madagascar, and How to Train Your Dragon 2. In order to scale numerous simulations, such as explosions or hundreds of animated characters in a scene, the studio requires a High Performance Computing environment capable of delivering tens of millions of render hours and managing millions of files for one film. Five years ago, the studio made a decision to re-architect key tools for animation and rendering from the ground up. Through the extensive use of parallel computing, the studio has solved the fundamental need to maximize artist productivity and work at the speed of creativity. DreamWorks Animation demonstrates the fully transformative impact ubiquitous parallel computing will have on the complex and varied workflows in animation.
Life at the Leading Edge: Supercomputing @ Lawrence Livermore
Dona Crawford
Lawrence Livermore National Laboratory
Even before Livermore Laboratory opened in 1952, Ernest O. Lawrence and Edward Teller placed an order for one of the first commercial supercomputers, a Univac mainframe. Although computer simulation was in its infancy at the time, Laboratory leaders recognized its potential to accelerate scientific discovery and engineering. The Lab focused on complex, long-term national security problems that could be advanced through computing. These mission-focused applications drove the required computing hardware advances, which in turn drove the associated infrastructure. What factors allowed LLNL to become a leader in HPC? How have they been sustained over the years? Come hear the inside Lab perspective on why #hpcmatters and what the Lab is doing to advance the discipline.
Runtime Aware Architectures
Mateo Valero
Barcelona Supercomputing Center
In the last few years, the traditional ways of keeping hardware performance increasing at the rate predicted by Moore's Law have vanished. When uni-cores were the norm, hardware design was decoupled from the software stack thanks to a well-defined Instruction Set Architecture (ISA). This simple interface allowed developers to write applications without worrying too much about the underlying hardware, while hardware designers were able to aggressively exploit instruction-level parallelism (ILP) in superscalar processors. Current multi-cores are designed as simple symmetric multiprocessors (SMPs) on a chip. However, we believe that this is not enough to overcome all the problems that multi-cores face. The runtime has to drive the design of future multi-cores to overcome the restrictions in terms of power, memory, programmability and resilience that multi-cores have. In this talk, we introduce a first approach towards a Runtime-Aware Architecture (RAA), a massively parallel architecture designed from the runtime's perspective.
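To illustrate what "designed from the runtime's perspective" refers to, here is a minimal sketch, under my own assumptions, of a dependence-driven task runtime of the kind such an architecture would serve; it is not BSC's runtime system, just a toy model of the execution style.

```python
# Toy sketch of a task-based execution model: tasks are declared with data
# dependences, and a simplified runtime submits each task once its input
# results are available, exposing parallelism the hardware can exploit.
from concurrent.futures import ThreadPoolExecutor

def run_task_graph(tasks):
    """tasks: dict name -> (function, [names of tasks it depends on])."""
    futures = {}
    with ThreadPoolExecutor() as pool:
        def submit(name):
            if name in futures:
                return futures[name]
            fn, deps = tasks[name]
            dep_results = [submit(d).result() for d in deps]  # wait on inputs
            futures[name] = pool.submit(fn, *dep_results)
            return futures[name]
        return {name: submit(name).result() for name in tasks}

# Example: a tiny diamond-shaped dependence graph; "left" and "right" can run
# in parallel once "load" has finished.
results = run_task_graph({
    "load":  (lambda: 21, []),
    "left":  (lambda x: x + 1, ["load"]),
    "right": (lambda x: x * 2, ["load"]),
    "join":  (lambda a, b: a + b, ["left", "right"]),
})
print(results["join"])  # 64
```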
SC14 Invited Talks Co-Chairs:
Robert F. Lucas (ISI/USC)
Padma Raghavan (Penn State)
Email Contact: invitedtalks@info.supercomputing.org