


Direct Chunk Write

When a user application writes a single chunk of data to a chunked dataset with H5Dwrite, the data passes through several steps inside the HDF5 library. The library first applies the hyperslab selection. It then converts the data from the in-memory datatype to the file datatype if the two differ. Finally, it pushes the data through the filter pipeline.

This can create an I/O bottleneck in a very high-throughput environment.

The high-level C function H5DOwrite_chunk provides a mechanism for the application to write a data chunk directly to the file, bypassing the library's hyperslab selection, data conversion, and filter pipeline. If the application can pre-process the data properly itself, it can use H5DOwrite_chunk to write the data much faster.

The following documents describe the use of this feature:






The HDF Group Help Desk
Describes HDF5 Release 1.8.18, November 2016.
Copyright by The HDF Group and the Board of Trustees of the University of Illinois.