Sponsored by IEEE and ACM. The International Conference for High Performance Computing, Networking, Storage and Analysis

SCHEDULE: NOV 16-21, 2014


PGAS and Hybrid MPI+PGAS Programming Models on Modern HPC Clusters

EVENT TYPE: Tutorials

TIME: 1:30PM - 5:00PM

ROOM: 393

ABSTRACT:

Multi-core processors, accelerators (GPGPUs/MIC), and high-performance interconnects with RDMA are shaping the architecture of next-generation exascale clusters. Efficient programming models for designing applications on these systems are still evolving. Partitioned Global Address Space (PGAS) models provide an attractive alternative to the traditional MPI model, owing to their easy-to-use shared-memory abstractions and lightweight one-sided communication. Hybrid MPI+PGAS models are gaining attention as a possible solution for programming exascale systems: they let MPI applications take advantage of PGAS models without the prohibitive cost of redesigning complete applications, and they enable hierarchical application designs that combine different models to suit modern architectures. In this tutorial, we provide an overview of the research and development taking place in this area and discuss the associated opportunities and challenges as we head toward exascale. We start with an in-depth overview of modern system architectures with multi-core processors, accelerators, and high-performance interconnects. We present an overview of UPC and OpenSHMEM. We introduce MPI+PGAS hybrid programming models and highlight their advantages and challenges. We examine the challenges in designing high-performance UPC, OpenSHMEM, and unified MPI+UPC/OpenSHMEM runtimes. Finally, we present application case studies that demonstrate the productivity and performance of MPI+PGAS models using the publicly available MVAPICH2-X software package.
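For readers unfamiliar with the hybrid style the abstract describes, the following minimal C sketch (not taken from the tutorial materials) combines an OpenSHMEM one-sided put with an MPI collective in a single program. It assumes a unified runtime that supports both models in one executable, such as MVAPICH2-X, and uses standard MPI and OpenSHMEM calls; initialization and interoperability details vary by implementation.

    /* Minimal hybrid MPI+OpenSHMEM sketch: a one-sided put into the
     * partitioned global address space, followed by an MPI collective.
     * Assumes a unified runtime (e.g., MVAPICH2-X) that permits both
     * models in one executable. */
    #include <mpi.h>
    #include <shmem.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        shmem_init();

        int me   = shmem_my_pe();
        int npes = shmem_n_pes();

        /* Symmetric allocation: every PE exposes the same remotely
         * accessible word in the global address space. */
        int *remote = (int *) shmem_malloc(sizeof(int));
        *remote = 0;
        shmem_barrier_all();

        /* Lightweight one-sided put: write our rank into the right
         * neighbor's memory without involving the neighbor. */
        shmem_int_p(remote, me, (me + 1) % npes);
        shmem_barrier_all();

        /* Fall back to MPI where a collective is the natural fit. */
        int sum = 0;
        MPI_Allreduce(remote, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
        if (me == 0)
            printf("sum of ranks received = %d\n", sum);

        shmem_free(remote);
        shmem_finalize();
        MPI_Finalize();
        return 0;
    }

Compiled with a hybrid-aware wrapper (for example, oshcc as shipped with MVAPICH2-X), each process acts both as an MPI rank and as an OpenSHMEM PE. Note that the one-sided shmem_int_p needs no matching receive on the target, which is the productivity argument the abstract makes for PGAS models.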
