Sponsored by IEEE and ACM
The International Conference for High Performance Computing, Networking, Storage and Analysis

SCHEDULE: NOV 16-21, 2014

When viewing the Technical Program schedule, the column on the far right-hand side is labeled "PLANNER." Use this planner to build your own schedule: once you have selected an event you want to add, click the calendar icon of your choice (Outlook, iCal, or Google Calendar) and the event will be stored there. As you select events in this way, you will build a personal schedule to guide you through the week.

QRing – A Scalable Parallel Software Tool for Quantum Transport Simulations in Carbon Nanoring Devices Based on NEGF Formalism and a Parallel C++ / MPI / PETSc Algorithm

SESSION: BE Session II: HPC Applications and Q&A

EVENT TYPE: HPC Interconnections (BE, Undergraduates, Cluster)

TIME: 1:45PM - 2:00PM

SESSION CHAIR: Tony Drummond

Presenter(s): Mark A. Jack

ROOM: 288-89

ABSTRACT:

The ability of a nanomaterial to conduct charge is essential for many nanodevice applications. Numerous studies have shown that disorder, including defects and phononic or plasmonic effects, can disrupt or block electric current in nanomaterials. An accurate theoretical account of both electron-phonon and electron-plasmon coupling in carbon nanotube- and nanoring-based structures is therefore key to properly predicting the performance of these new nanodevices.
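[Editor's note: for context, the NEGF formalism named in the title reduces charge transport to repeated inversions of an energy-dependent matrix. The block below sketches the textbook Landauer-NEGF expressions in LaTeX; this is a standard reference form only, and the QRing code additionally incorporates the electron-phonon and electron-plasmon self-energies discussed above, which are not shown here.]

% Standard NEGF/Landauer expressions (reference sketch only; QRing's
% implementation includes additional phonon/plasmon self-energies).
\begin{align}
  G^{r}(E) &= \bigl[(E + i\eta)\,S - H - \Sigma_{L}(E) - \Sigma_{R}(E)\bigr]^{-1}, \\
  \Gamma_{L/R}(E) &= i\,\bigl[\Sigma_{L/R}(E) - \Sigma_{L/R}^{\dagger}(E)\bigr], \\
  T(E) &= \operatorname{Tr}\bigl[\Gamma_{L}\,G^{r}\,\Gamma_{R}\,G^{r\dagger}\bigr], \\
  I &= \frac{2e}{h}\int \! dE\; T(E)\,\bigl[f_{L}(E) - f_{R}(E)\bigr].
\end{align}

Here H is the device Hamiltonian, S the overlap matrix, Sigma_{L/R} the lead self-energies, and f_{L/R} the contact Fermi functions; each energy point E requires the large matrix inversion that the next paragraph parallelizes.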
The parallel sparse matrix library PETSc provides tools to optimally distribute the memory and computational work required to invert the Hamiltonian matrix across multiple compute cores or nodes. The results at different energy levels are then combined via a second layer of parallelism to obtain integrated observables. Because the inversion at each energy step dominates the code's run time and memory use, it is important that the code scale well to realistic system sizes. Several direct and iterative linear solvers and libraries were compared for scalability and efficiency, including the Intel MKL shared-memory direct dense solver, the MUltifrontal Massively Parallel direct Solver (MUMPS), and the iterative sparse solvers bundled with PETSc. The NSF XSEDE resource 'Stampede' at the Texas Advanced Computing Center and regional resources in Florida (SSERCA) are used for development and benchmarking, while DOE OLCF's 'Titan' and NICS's 'Beacon' are used for physics production runs.
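[Editor's note: the two-layer scheme described above can be illustrated with a minimal C++/MPI/PETSc sketch, given below. This is an editorial illustration, not QRing code: the matrix is a toy tridiagonal stand-in for (E*S - H - Sigma), the four-group split, sizes, and the "observable" are all placeholders, and it assumes a recent PETSc (3.17+ for PetscCall). The outer layer splits the world communicator into energy groups; the inner layer lets PETSc distribute each solve across a group's ranks.]

// two_layer_negf_sketch.cpp -- hedged editorial illustration, not QRing code.
// Outer layer: energy points distributed across sub-communicators.
// Inner layer: each linear solve is itself distributed via PETSc.
#include <petscksp.h>

int main(int argc, char **argv)
{
  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));

  PetscMPIInt world_rank, world_size;
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &world_rank));
  PetscCallMPI(MPI_Comm_size(PETSC_COMM_WORLD, &world_size));

  // Hypothetical setup: at most 4 energy groups, toy problem sizes.
  const int      ngroups  = world_size < 4 ? world_size : 4;
  const PetscInt n        = 1000; // placeholder matrix dimension
  const int      n_energy = 64;   // placeholder number of energy points

  // Outer layer of parallelism: one sub-communicator per energy group.
  int      color = world_rank % ngroups;
  MPI_Comm energy_comm;
  PetscCallMPI(MPI_Comm_split(PETSC_COMM_WORLD, color, world_rank, &energy_comm));

  PetscScalar group_sum = 0.0; // this group's contribution to the integral

  for (int ie = color; ie < n_energy; ie += ngroups) {
    // Inner layer: assemble a toy tridiagonal stand-in for (E*S - H - Sigma),
    // distributed over the ranks of energy_comm.
    Mat A;
    PetscCall(MatCreateAIJ(energy_comm, PETSC_DECIDE, PETSC_DECIDE, n, n,
                           3, NULL, 2, NULL, &A));
    PetscInt rstart, rend;
    PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
    for (PetscInt i = rstart; i < rend; ++i) {
      PetscCall(MatSetValue(A, i, i, 2.0 + 0.01 * ie, INSERT_VALUES));
      if (i > 0)     PetscCall(MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES));
      if (i + 1 < n) PetscCall(MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES));
    }
    PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
    PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

    Vec x, b;
    PetscCall(MatCreateVecs(A, &x, &b));
    PetscCall(VecSet(b, 1.0)); // placeholder right-hand side

    // Solver backend chosen at run time, direct (e.g. MUMPS) or iterative,
    // via -ksp_type / -pc_type / -pc_factor_mat_solver_type.
    KSP ksp;
    PetscCall(KSPCreate(energy_comm, &ksp));
    PetscCall(KSPSetOperators(ksp, A, A));
    PetscCall(KSPSetFromOptions(ksp));
    PetscCall(KSPSolve(ksp, b, x));

    PetscScalar obs;
    PetscCall(VecSum(x, &obs)); // stand-in for the physical observable
    group_sum += obs;

    PetscCall(KSPDestroy(&ksp));
    PetscCall(VecDestroy(&x));
    PetscCall(VecDestroy(&b));
    PetscCall(MatDestroy(&A));
  }

  // Second layer of integration: combine the groups' partial sums.
  // Only each group's rank 0 contributes, to avoid double counting.
  PetscMPIInt group_rank;
  PetscCallMPI(MPI_Comm_rank(energy_comm, &group_rank));
  PetscScalar contrib = (group_rank == 0) ? group_sum : 0.0, total = 0.0;
  PetscCallMPI(MPI_Allreduce(&contrib, &total, 1, MPIU_SCALAR, MPIU_SUM,
                             PETSC_COMM_WORLD));
  if (world_rank == 0)
    PetscCall(PetscPrintf(PETSC_COMM_WORLD, "integrated observable: %g\n",
                          (double)PetscRealPart(total)));

  PetscCallMPI(MPI_Comm_free(&energy_comm));
  PetscCall(PetscFinalize());
  return 0;
}

With PETSc built against MUMPS (e.g. configured with --download-mumps), the direct-solver path could be exercised with

  mpiexec -n 64 ./two_layer_negf_sketch -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps

while omitting those options falls back to PETSc's default iterative KSP, which is how one would reproduce the kind of direct-versus-iterative solver comparison the abstract describes.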

Chair/Presenter Details:

Tony Drummond (Chair) - Lawrence Berkeley National Laboratory

Mark A. Jack - Florida A&M University
