Impossibly huge data is now possible
Posted on Friday, November 14, 2014
By Giorgio Regni
One of the promises of object storage is greater efficiency and scalability, achieved by collapsing the storage stack and decoupling the interface to data from where it's actually stored. Complex directory structures and unnecessary metadata are gone, replaced by a single federated flat namespace in which file names become globally unique identifiers.
What if you could leverage the power of object storage but still access the data via file interfaces, using your existing workflows and unmodified apps to manage your data?
Object storage is clearly the best way to make the most of the underlying hardware, but what about after the data has been created? People want to collaborate, run further analysis, archive... using whatever application interface makes the most sense.
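To make the flat-namespace idea concrete, here is a minimal sketch of what "file names become globally unique identifiers" looks like from a client's point of view. The endpoint and key scheme are assumptions for illustration, not Scality's actual API; any generic HTTP object interface has roughly this shape.

```python
import uuid
import requests  # generic HTTP client; any REST-capable library works

# Assumed, illustrative object-store endpoint -- not a real Scality URL.
ENDPOINT = "http://objectstore.example.com/results"

def put_object(payload: bytes) -> str:
    """Store a blob under a globally unique key instead of a nested path."""
    key = uuid.uuid4().hex  # the "file name" is just a unique identifier
    requests.put(f"{ENDPOINT}/{key}", data=payload).raise_for_status()
    return key

def get_object(key: str) -> bytes:
    """Retrieve the blob by key -- no directory traversal, no metadata lookup chain."""
    resp = requests.get(f"{ENDPOINT}/{key}")
    resp.raise_for_status()
    return resp.content
```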
The need for petabyte-sized files
Let’s take the example of distributed computing -- found in various spaces such as oil & gas exploration, EDA, or genomics research. In these scenarios, you have tens of thousands of clients working in parallel on portions of the problem, generating petabyte-sized result sets.
In fact, many times the result is a single monster petabyte-sized file!
Typically, clients mount the storage using a distributed file system like Lustre or GPFS and make heavy use of byte-range locking to synchronize concurrent access to the same files. This drastically limits the scalability of such a system, because the locking and synchronization calls eat into network performance and create bottlenecks in the form of metadata servers or latency-sensitive lock services.
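For reference, this is the kind of coordination each client has to do today. The sketch below uses plain POSIX fcntl byte-range locks as a stand-in for what Lustre or GPFS clients do under the hood; the mount point and slice size are assumptions for illustration.

```python
import fcntl
import os

SHARED_FILE = "/mnt/lustre/results.bin"  # assumed mount point, for illustration
CHUNK = 64 * 1024 * 1024                 # each worker owns a 64 MB slice

def write_slice(rank: int, payload: bytes) -> None:
    """Lock a byte range, write into it, then unlock.

    Every lock/unlock call has to be coordinated across all clients,
    and that coordination traffic is exactly what throttles the cluster.
    """
    offset = rank * CHUNK
    fd = os.open(SHARED_FILE, os.O_RDWR | os.O_CREAT, 0o644)
    try:
        fcntl.lockf(fd, fcntl.LOCK_EX, len(payload), offset, os.SEEK_SET)  # blocks until granted
        os.pwrite(fd, payload, offset)
        fcntl.lockf(fd, fcntl.LOCK_UN, len(payload), offset, os.SEEK_SET)
    finally:
        os.close(fd)
```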
What if every client knew which portion of a huge file it was responsible for and could work on that independent piece in isolation? It turns out that this is often the case in scientific calculations, and Scality has a very elegant solution to this problem.
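Here is the same worker rewritten in the lock-free, object-per-slice style that question suggests. The endpoint, job name, and key scheme are the same illustrative conventions as the sketch above, not Scality's API; the point is simply that each client can derive its piece from its rank and never has to coordinate with anyone.

```python
import requests  # generic HTTP client, as in the earlier sketch

ENDPOINT = "http://objectstore.example.com/results"  # assumed, illustrative endpoint
JOB = "seismic-run-42"                                # hypothetical job identifier

def write_slice(rank: int, payload: bytes) -> str:
    """Each worker derives its own object key from its rank -- no locks, no lock server.

    Slice N of the logical result is simply the object "<job>/<N>"; workers never
    touch each other's objects, so they can all write at full speed in parallel.
    """
    key = f"{JOB}/{rank:09d}"
    requests.put(f"{ENDPOINT}/{key}", data=payload).raise_for_status()
    return key
```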
The concept of sparse files
Scality's RING object storage engine is very good at parallelism and at handling a large number of clients working on independent objects. The RING can also be exposed as a distributed scale-out file system, leveraging Scality's MESA database to manage directory listings and stitch independent objects together into files, a technology called sparse files.
We've developed an API to stitch independent objects together into one large sparse file and attach it to an existing file system directory tree out of band.
That means the calculation can happen in a pure object, highly parallel world, and get exposed in the traditional POSIX world by postponing the metadata indexing until after the fact. Users get the best of both worlds: uncompromised performance and compatibility with existing apps and workflows.
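What that out-of-band step might look like from a client's point of view is sketched below. The stitch_objects call and its parameters are purely hypothetical placeholders for the API described above (the real one isn't shown in this post); the sketch only illustrates the shape of the operation: an ordered list of object keys goes in, one POSIX-visible sparse file comes out, after all the data has already been written.

```python
import json

def stitch_objects(object_keys: list[str], object_size: int, target_path: str) -> None:
    """Stand-in for the out-of-band indexing API (hypothetical, not Scality's real call).

    No payload data moves here: we only record metadata mapping byte range
    [i * object_size, (i + 1) * object_size) of target_path onto object_keys[i].
    A real implementation would write this mapping into the file system's metadata
    store (MESA, in the RING's case) rather than a local manifest.
    """
    manifest = {
        "path": target_path,
        "object_size": object_size,
        "extents": object_keys,  # ordered: index i covers offset i * object_size
    }
    with open("stitch-manifest.json", "w") as fh:
        json.dump(manifest, fh)

# After the parallel object writes have finished (small slice count shown for brevity):
keys = [f"seismic-run-42/{rank:09d}" for rank in range(1024)]
stitch_objects(keys, object_size=64 * 1024 * 1024, target_path="/ring/results/run42.bin")
```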
Look at this CLI screenshot:
Yes, this is a 2.3 petabyte file stored on a Scality RING and exposed via NFS! That file was created by pushing around 300 million 64MB objects in parallel and then indexing them together as one huge file at the end. Writing the data in parallel and indexing it afterwards is drastically faster than operating at the file system layer the entire time. The resulting file can be accessed via any Scality application connector (NFS, FUSE, or REST) by any number of clients in parallel!
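And once the file is indexed, reading it back needs nothing Scality-specific at all: any POSIX tool or script can seek into it through the NFS mount. A tiny sketch, with an assumed mount path:

```python
import os

# Assumed NFS mount of the RING file system namespace -- the path is illustrative.
HUGE_FILE = "/mnt/ring-nfs/results/run42.bin"

def read_slice(offset: int, length: int) -> bytes:
    """Plain POSIX open/pread: the application neither knows nor cares that the
    bytes actually live in millions of independent objects on the RING."""
    fd = os.open(HUGE_FILE, os.O_RDONLY)
    try:
        return os.pread(fd, length, offset)
    finally:
        os.close(fd)

# e.g. pull one 64 MB slice from deep inside the multi-petabyte file
chunk = read_slice(offset=1_000_000 * 64 * 1024 * 1024, length=64 * 1024 * 1024)
```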
Where do we go from here?
We're excited about the possibilities of the petabyte-scale era, and eager to see what you can do with the technology we've made available. What other applications can make use of it? Can you see other areas that need data manipulation at this scale? We look forward to the discussions at SC14 and beyond. And we're always busy in the kitchen, cooking up new ways to handle impossibly sized data.
Bio: http://www.scality.com/team_members/giorgio-regni/
Website: http://www.scality.com/