
New Features in HDF5 Release 1.10

 
New HDF5 1.10 Features

HDF5 1.10 introduces several new features in the HDF5 Library. A brief description of each new feature is given in the sections below.

See User, Reference, and Design Documentation for detailed information regarding the new features.

File Format Changes

This release includes changes in the HDF5 storage format. These changes come into play when one or more of the new features is used or when an application calls for use of the latest storage format (H5Pset_libver_bounds).

Due to the requirements of some of the new features, the format of a 1.10.x HDF5 file is likely to be different from that of a 1.8.x HDF5 file. This means that tools and applications built to read 1.10.x files will be able to read a 1.8.x file, but tools built to read 1.8.x files may not be able to read a 1.10.x file.

If an application built on HDF5 Release 1.10 avoids use of the new features and does not request use of the latest format, applications built on HDF5 Release 1.8.x will be able to read the files that application creates. In addition, applications originally written for use with HDF5 Release 1.8.x can be linked against a suitably configured HDF5 Release 1.10.x library, thus taking advantage of the performance improvements in 1.10.
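
As a minimal sketch (the file name is chosen for illustration), an application opts into the latest storage format by setting the library version bounds on a file access property list with H5Pset_libver_bounds; leaving the default bounds instead preserves 1.8.x readability as long as no new features are used:

    #include "hdf5.h"

    int main(void)
    {
        /* File access property list controlling which format versions the
         * library may use when writing objects. */
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* Request the latest format: enables the new 1.10 on-disk structures,
         * but the resulting file may not be readable by 1.8.x applications.
         * Leaving the default bounds preserves backward compatibility as long
         * as no new features are used. */
        H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

        hid_t file = H5Fcreate("example.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);

        H5Fclose(file);
        H5Pclose(fapl);
        return 0;
    }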
 

New Features Introduced in HDF5 1.10.1


Metadata Cache Image

HDF5 metadata is typically small and scattered throughout the HDF5 file. This can hurt performance, particularly on large HPC systems. The Metadata Cache Image feature can improve performance by writing the metadata cache as a single block on file close and then populating the cache from that block on file open, thus avoiding the many small I/O operations that would otherwise be required on file open and close. See the RFC for complete details regarding this feature. Also, see the Fine-tuning the Metadata Cache documentation.
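
A minimal sketch of enabling a cache image on a new file, assuming the 1.10.1 H5Pset_mdc_image_config() call and its H5AC_cache_image_config_t structure (the file name is illustrative; see the reference manual for the full set of configuration fields):

    #include "hdf5.h"

    /* Enable "write a metadata cache image on close" for a new file. */
    static void create_with_cache_image(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        H5AC_cache_image_config_t config;
        config.version            = H5AC__CURR_CACHE_IMAGE_CONFIG_VERSION;
        config.generate_image     = 1;   /* dump the cache as one block on close */
        config.save_resize_status = 0;
        config.entry_ageout       = H5AC__CACHE_IMAGE__ENTRY_AGEOUT__NONE;
        H5Pset_mdc_image_config(fapl, &config);

        hid_t file = H5Fcreate("cache_image.h5", H5F_ACC_TRUNC, H5P_DEFAULT, fapl);
        /* ... create and write objects; the cache image is written when the
         *     file is closed and reloaded the next time the file is opened. */
        H5Fclose(file);
        H5Pclose(fapl);
    }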


Metadata Cache Evict on Close

The HDF5 library's metadata cache is fairly conservative about holding on to HDF5 object metadata (object headers, chunk index structures, etc.), which can cause the cache size to grow and put memory pressure on an application or system. The "evict on close" property causes all metadata for an object to be evicted from the cache when that object is closed, as long as the metadata is not referenced by any other open object. See the Fine-tuning the Metadata Cache documentation for information on the APIs.
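
A minimal sketch of setting the property on a file access property list with H5Pset_evict_on_close (the file name is illustrative):

    #include "hdf5.h"

    /* Open a file so that an object's cached metadata is evicted as soon as
     * that object is closed. */
    static hid_t open_with_evict_on_close(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_evict_on_close(fapl, 1);   /* 1 (TRUE) = evict on object close */

        hid_t file = H5Fopen("big_scan.h5", H5F_ACC_RDONLY, fapl);
        H5Pclose(fapl);
        return file;
    }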


Paged Aggregation

The current HDF5 file space allocation accumulates small pieces of metadata and raw data in aggregator blocks that are not page aligned and vary widely in size. The paged aggregation feature was implemented to provide efficient, paged access to these small pieces of metadata and raw data. See the RFC for details. Also, see the File Space Management documentation.
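
A minimal sketch of creating a paged file with H5Pset_file_space_strategy and H5Pset_file_space_page_size (the file name and page size are illustrative):

    #include "hdf5.h"

    /* Create a file that allocates space in fixed-size, page-aligned blocks. */
    static void create_paged_file(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);

        /* Paged-aggregation strategy; do not persist free space; track
         * free-space sections of 1 byte or larger. */
        H5Pset_file_space_strategy(fcpl, H5F_FSPACE_STRATEGY_PAGE, 0, (hsize_t)1);
        H5Pset_file_space_page_size(fcpl, (hsize_t)4096);   /* 4 KiB pages */

        hid_t file = H5Fcreate("paged.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);
        H5Fclose(file);
        H5Pclose(fcpl);
    }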


Page Buffering

Small, random I/O accesses on parallel file systems result in poor application performance. Page buffering, used in conjunction with paged aggregation, can improve performance by letting an application confine HDF5 I/O requests to a specific granularity and alignment. See the RFC for details. Also, see the Page Buffering documentation.
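
A minimal sketch using H5Pset_page_buffer_size on a file that was created with paged aggregation (the file name and sizes are illustrative):

    #include "hdf5.h"

    /* Open a paged file and add a 1 MiB page buffer; at least 50% of the
     * buffer is reserved for metadata pages. */
    static hid_t open_with_page_buffer(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);

        /* buffer size in bytes, minimum % for metadata, minimum % for raw data */
        H5Pset_page_buffer_size(fapl, 1024 * 1024, 50, 0);

        hid_t file = H5Fopen("paged.h5", H5F_ACC_RDWR, fapl);
        H5Pclose(fapl);
        return file;
    }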


New Features Introduced in HDF5 1.10.0
 


SWMR
Data acquisition and computer modeling systems often need to analyze and visualize data while it is being written. It is not unusual, for example, for an application to produce results in the middle of a run that suggest some basic parameters be changed, sensors be adjusted, or the run be scrapped entirely.

To enable users to check on such systems, we have been developing a concurrent read/write file access pattern we call SWMR (pronounced swimmer). SWMR is short for single-writer/multiple-reader. SWMR functionality allows a writer process to add data to a file while multiple reader processes read from the file.
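
A minimal sketch of the two sides of a SWMR workflow (the file name is illustrative; the writer assumes the file was created with the latest file-format bounds):

    #include "hdf5.h"

    /* Writer side: open an existing file for SWMR writing. */
    static hid_t open_swmr_writer(void)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_libver_bounds(fapl, H5F_LIBVER_LATEST, H5F_LIBVER_LATEST);

        hid_t file = H5Fopen("acquisition.h5",
                             H5F_ACC_RDWR | H5F_ACC_SWMR_WRITE, fapl);
        H5Pclose(fapl);
        return file;   /* write data, then H5Dflush() so readers can see it */
    }

    /* Reader side: open the same file concurrently for SWMR reading and call
     * H5Drefresh() on a dataset to pick up newly appended data. */
    static hid_t open_swmr_reader(void)
    {
        return H5Fopen("acquisition.h5",
                       H5F_ACC_RDONLY | H5F_ACC_SWMR_READ, H5P_DEFAULT);
    }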


Fine-tuning the Metadata Cache
The orderly operation of the metadata cache is crucial to SWMR functioning. A number of APIs have been developed to handle the requests from writer and reader processes and to give applications the control over the metadata cache that they might need. However, the metadata cache APIs can also be used when SWMR is not in use, so these functions are described separately.
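
For example, the 1.10 "corking" calls let a writer keep an object's cached metadata from being flushed while a series of updates is in progress; a minimal sketch, assuming dset is an open dataset handle supplied by the caller:

    #include "hdf5.h"

    /* Temporarily "cork" a dataset so its cached metadata is not flushed
     * while a series of updates is in progress. */
    static void update_while_corked(hid_t dset)
    {
        H5Odisable_mdc_flushes(dset);   /* hold this object's metadata in the cache */

        /* ... extend the dataset and write the new elements ... */

        H5Oenable_mdc_flushes(dset);    /* allow its metadata to be flushed again */
        H5Dflush(dset);                 /* make the updates visible to SWMR readers */
    }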


Collective Metadata I/O
Accessing HDF5 metadata can result in many small reads and writes. For metadata reads, collective metadata I/O can improve performance by having one rank read the metadata and broadcast it to all other ranks, allowing the library to optimize those reads.

Collective metadata I/O improves metadata write performance through the construction of an MPI derived datatype that is then written collectively in a single call.
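
A minimal sketch for a parallel (MPI) build of HDF5, enabling both collective metadata reads and writes on the file access property list (the file name is illustrative):

    #include <mpi.h>
    #include "hdf5.h"

    /* Enable collective metadata reads and writes for a parallel file open. */
    static hid_t open_parallel_collective_md(MPI_Comm comm, MPI_Info info)
    {
        hid_t fapl = H5Pcreate(H5P_FILE_ACCESS);
        H5Pset_fapl_mpio(fapl, comm, info);

        H5Pset_all_coll_metadata_ops(fapl, 1);   /* collective metadata reads  */
        H5Pset_coll_metadata_write(fapl, 1);     /* collective metadata writes */

        hid_t file = H5Fopen("parallel.h5", H5F_ACC_RDWR, fapl);
        H5Pclose(fapl);
        return file;
    }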


File Space Management
Usage patterns when working with an HDF5 file sometimes result in wasted space within the file, which can also slow access to the data. The new file space management feature provides strategies for managing space in a file to improve both file size and access performance.
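
A minimal sketch of one such strategy: create a file that tracks free space and persists that information across close/open so freed space can be reused later (the file name and threshold are illustrative):

    #include "hdf5.h"

    /* Create a file with the free-space-manager + aggregator strategy and
     * persistent free-space tracking. */
    static void create_with_free_space_tracking(void)
    {
        hid_t fcpl = H5Pcreate(H5P_FILE_CREATE);

        /* Persist free space; track free-space sections of 1 byte or larger. */
        H5Pset_file_space_strategy(fcpl, H5F_FSPACE_STRATEGY_FSM_AGGR, 1, (hsize_t)1);

        hid_t file = H5Fcreate("managed.h5", H5F_ACC_TRUNC, fcpl, H5P_DEFAULT);

        hssize_t free_bytes = H5Fget_freespace(file);   /* query unused space */
        (void)free_bytes;

        H5Fclose(file);
        H5Pclose(fcpl);
    }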


Virtual Datasets (VDS)
With a growing amount of data in HDF5, the need has emerged to access data stored across multiple HDF5 files using standard HDF5 objects, such as groups and datasets, without rewriting or rearranging the data. The new virtual dataset (VDS) feature enables an application to draw on multiple datasets and files to create virtual datasets without moving or rewriting any data.
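
A minimal sketch using H5Pset_virtual to build a 1-D virtual dataset of 200 elements whose first and second halves map to the dataset "/data" in two separate source files (file names, dataset names, and sizes are illustrative):

    #include "hdf5.h"

    /* Create a virtual dataset that stitches two source datasets together. */
    static void create_virtual_dataset(hid_t file)
    {
        hsize_t vdims[1] = {200};
        hsize_t sdims[1] = {100};
        hsize_t start[1] = {0};
        hsize_t count[1] = {100};

        hid_t vspace    = H5Screate_simple(1, vdims, NULL);
        hid_t src_space = H5Screate_simple(1, sdims, NULL);
        hid_t dcpl      = H5Pcreate(H5P_DATASET_CREATE);

        /* Elements 0-99 come from part1.h5:/data */
        H5Sselect_hyperslab(vspace, H5S_SELECT_SET, start, NULL, count, NULL);
        H5Pset_virtual(dcpl, vspace, "part1.h5", "/data", src_space);

        /* Elements 100-199 come from part2.h5:/data */
        start[0] = 100;
        H5Sselect_hyperslab(vspace, H5S_SELECT_SET, start, NULL, count, NULL);
        H5Pset_virtual(dcpl, vspace, "part2.h5", "/data", src_space);

        hid_t vds = H5Dcreate2(file, "virtual_data", H5T_NATIVE_INT, vspace,
                               H5P_DEFAULT, dcpl, H5P_DEFAULT);

        H5Dclose(vds);
        H5Pclose(dcpl);
        H5Sclose(src_space);
        H5Sclose(vspace);
    }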


Partial Edge Chunk Options
New options for the storage and filtering of partial edge chunks in a dataset provide a tool for tuning I/O speed and file size in cases where the dataset size may not be a multiple of the chunk size.
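
A minimal sketch using H5Pset_chunk_opts to leave partial edge chunks unfiltered on a compressed, chunked dataset (the chunk size is illustrative):

    #include "hdf5.h"

    /* Dataset creation property list that skips the filter pipeline for
     * partial edge chunks, trading some file size for faster edge I/O. */
    static hid_t make_edge_chunk_dcpl(void)
    {
        hsize_t chunk[2] = {64, 64};
        hid_t   dcpl     = H5Pcreate(H5P_DATASET_CREATE);

        H5Pset_chunk(dcpl, 2, chunk);
        H5Pset_deflate(dcpl, 6);                                    /* gzip level 6 */
        H5Pset_chunk_opts(dcpl, H5D_CHUNK_DONT_FILTER_PARTIAL_CHUNKS);

        return dcpl;
    }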


Additional New APIs
In addition to the features described above, several new functions, a new struct, and new macros have been introduced or newly versioned in this release.

 
 
User, Reference, and Design Documentation

Documentation for the new features of the HDF5 Library is available on the pages listed below.

Additional New APIs This page lists various new functions, a new struct, and new macros that are either unrelated to other new features described elsewhere or have aspects that are unrelated to the feature where they are otherwise described.
 
Collective Metadata I/O The purpose of this page is to list and briefly describe the documentation available to those who will use the Collective Metadata I/O feature of the HDF5 Library.
 
Fine-tuning the Metadata Cache The purpose of this page is to list and briefly describe the documentation available to those who want to fine-tune how the metadata cache behaves.
 
File Space Management The purpose of this page is to list and briefly describe documentation for HDF5’s file space management capabilities.
 
Page Buffering The purpose of this page is to briefly describe the new page buffering option and to provide access to full descriptions for the relevant functions.
 
Partial Edge Chunks The purpose of this page is to briefly describe the new partial edge chunk options and to provide access to full descriptions for the relevant functions.
 
Single-Writer/Multiple-Reader (SWMR) The purpose of this page is to list and briefly describe the documentation available to those who will use the Single-Writer/Multiple-Reader (SWMR) feature of the HDF5 Library.
 
Virtual Datasets (VDS) The purpose of this page is to list and briefly describe the documentation available to those who want to use the Virtual Datasets (VDS) feature of the HDF5 Library.
 

A description of the file format changes can be seen here:

Reference The purpose of this page is to describe the changes made to HDF5 reference documents to support new features.
 
 
 

