
This web site is no longer maintained (but will remain online).
Please see The HDF Group's new Support Portal for the latest information.

HDF5 Tutorial: Command-line Tools for Viewing HDF5 Files

Contents:

   File Content and Structure
   Datasets and Dataset Properties
   Groups
   Attributes
   Dataset Subset
   Datatypes

File Content and Structure

The h5dump and h5ls tools can both be used to view the contents of an HDF5 file. The tools are discussed below:

h5dump

The h5dump tool displays the contents of an HDF5 file as text. By default, if you specify no options, h5dump displays the entire contents of the file. There are many h5dump options for examining specific details of a file. To see all of the available options, specify the -h or --help option:

   h5dump -h 

The following h5dump options can be helpful in viewing the content and structure of a file:

Option              Description                                              Comment
-n, --contents      Displays a list of the objects in a file                 See Example 1
-n 1, --contents=1  Displays a list of the objects and attributes in a file  See Example 6
-H, --header        Displays header information only (no data)               See Example 2
-A 0, --onlyattr=0  Suppresses the display of attributes                     See Example 2
-N P, --any_path=P  Displays any object or attribute that matches path P     See Example 6

Example 1

The following command displays a list of the objects in the file OMI-Aura.he5 (an HDF-EOS5 file):

   h5dump -n OMI-Aura.he5

As shown in the output below, the type of each object (group or dataset) is listed on the left, followed by the object's path. The root group (/) contains two groups, HDFEOS and HDFEOS INFORMATION:

HDF5 "OMI-Aura.he5" {
FILE_CONTENTS {
 group      /
 group      /HDFEOS
 group      /HDFEOS/ADDITIONAL
 group      /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
 group      /HDFEOS/GRIDS
 group      /HDFEOS/GRIDS/OMI Column Amount O3
 group      /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields
 dataset    /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ColumnAmountO3
 dataset    /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/RadiativeCloudFraction
 dataset    /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle
 dataset    /HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/ViewingZenithAngle
 group      /HDFEOS INFORMATION
 dataset    /HDFEOS INFORMATION/StructMetadata.0
 }
}

Example 2

The file structure of the OMI-Aura.he5 file can be seen with the following command. The -H option suppresses printing of data values, and the -A 0 option suppresses the display of attributes:

   h5dump -H -A 0 OMI-Aura.he5

Output of this command is shown below:

HDF5 "OMI-Aura.he5" {
GROUP "/" {
   GROUP "HDFEOS" {
      GROUP "ADDITIONAL" {
         GROUP "FILE_ATTRIBUTES" {
         }
      }
      GROUP "GRIDS" {
         GROUP "OMI Column Amount O3" {
            GROUP "Data Fields" {
               DATASET "ColumnAmountO3" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "RadiativeCloudFraction" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "SolarZenithAngle" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "ViewingZenithAngle" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
            }
         }
      }
   }
   GROUP "HDFEOS INFORMATION" {
      DATASET "StructMetadata.0" {
         DATATYPE  H5T_STRING {
            STRSIZE 32000;
            STRPAD H5T_STR_NULLTERM;
            CSET H5T_CSET_ASCII;
            CTYPE H5T_C_S1;
         }
         DATASPACE  SCALAR
      }
   }
}
}

h5ls

By default, h5ls displays only the objects in the root group; it does not descend into groups beneath the root unless told to. Useful h5ls options for viewing file content and structure are:

Option  Description                                                     Comment
-r      Lists all groups and objects recursively                        See Example 3
-v      Generates verbose output (lists dataset properties, attributes
        and attribute values, but no dataset values)

Example 3

The following command recursively lists the contents of the HDF-EOS5 file OMI-Aura.he5. The output is similar to that of h5dump -n, except that h5ls also shows the dataspace dimensions for each dataset, and escapes the spaces in object names with backslashes:

   h5ls -r OMI-Aura.he5

The output is shown below:

/                        Group
/HDFEOS                  Group
/HDFEOS/ADDITIONAL       Group
/HDFEOS/ADDITIONAL/FILE_ATTRIBUTES Group
/HDFEOS/GRIDS            Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3 Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
/HDFEOS/GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}
/HDFEOS\ INFORMATION     Group
/HDFEOS\ INFORMATION/StructMetadata.0 Dataset {SCALAR}


Datasets and Dataset Properties

Both h5dump and h5ls can be used to view specific datasets.

h5dump

Useful h5dump options for examining specific datasets include:

Option              Description                                                          Comment
-d D, --dataset=D   Displays dataset D                                                   See Example 4
-H, --header        Displays header information only                                     See Example 4
-p, --properties    Displays dataset filters, storage layout, and fill value properties  See Example 5
-A 0, --onlyattr=0  Suppresses the display of attributes                                 See Example 2
-N P, --any_path=P  Displays any object or attribute that matches path P                 See Example 6


Example 4

A specific dataset can be viewed with h5dump using the -d D option and specifying the entire path and name of the dataset for D. The path is important in identifying the correct dataset, as there can be multiple datasets with the same name. The path can be determined by looking at the objects in the file with h5dump -n.

The following example uses the groups.h5 file that is created by the HDF5 Introductory Tutorial example h5_crtgrpar.c. To display dset1 in the groups.h5 file below, specify dataset /MyGroup/dset1. The -H option is used to suppress printing of the data values:

First list the objects in groups.h5 to determine the path, then display dataset dset1:
   $ h5dump -n groups.h5
   HDF5 "groups.h5" {
   FILE_CONTENTS {
    group      /
    group      /MyGroup
    group      /MyGroup/Group_A
    dataset    /MyGroup/Group_A/dset2
    group      /MyGroup/Group_B
    dataset    /MyGroup/dset1
    }
   }
   $ h5dump -d "/MyGroup/dset1" -H groups.h5
   HDF5 "groups.h5" {
   DATASET "/MyGroup/dset1" {
      DATATYPE  H5T_STD_I32BE
      DATASPACE  SIMPLE { ( 3, 3 ) / ( 3, 3 ) }
   }
   }



Example 5

The -p option is used to examine the dataset filters, storage layout, and fill value properties of a dataset.

This option can be useful for checking how well compression works, or even for analyzing performance and dataset size issues related to chunking. (The smaller the chunk size, the more chunks that HDF5 has to keep track of, which increases the size of the file and potentially affects performance.)
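To make the chunk-count effect concrete: the number of chunks HDF5 must track is the product, over dimensions, of ceil(dataset extent / chunk extent). This is plain arithmetic, not an HDF5 API; the shapes below are only for illustration:

```python
from math import ceil

def num_chunks(dataset_shape, chunk_shape):
    """Number of chunks needed to tile a dataset of the given shape."""
    n = 1
    for extent, chunk in zip(dataset_shape, chunk_shape):
        n *= ceil(extent / chunk)
    return n

# A 32 x 64 dataset chunked 4 x 8:
print(num_chunks((32, 64), (4, 8)))  # 64 chunks
# Halving the chunk size in each dimension quadruples the chunk count:
print(num_chunks((32, 64), (2, 4)))  # 256 chunks
```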

In the file shown below the dataset /DS1 is both chunked and compressed:

   $ h5dump -H -p -d "/DS1" h5ex_d_gzip.h5
   HDF5 "h5ex_d_gzip.h5" {
   DATASET "/DS1" {
      DATATYPE  H5T_STD_I32LE
      DATASPACE  SIMPLE { ( 32, 64 ) / ( 32, 64 ) }
      STORAGE_LAYOUT {
         CHUNKED ( 4, 8 )
         SIZE 5278 (1.552:1 COMPRESSION)
      }
      FILTERS {
         COMPRESSION DEFLATE { LEVEL 9 }
      }
      FILLVALUE {
         FILL_TIME H5D_FILL_TIME_IFSET
         VALUE  0
      }
      ALLOCATION_TIME {
         H5D_ALLOC_TIME_INCR
      }
   }
   }
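The COMPRESSION figure reported in STORAGE_LAYOUT is simply the logical (uncompressed) size divided by the stored SIZE. A quick check reproduces the 1.552:1 ratio from the output above:

```python
# Logical size: 32 x 64 elements of H5T_STD_I32LE (4 bytes each)
logical = 32 * 64 * 4
stored = 5278            # SIZE reported by h5dump -p
ratio = logical / stored
print(logical)           # 8192 logical bytes
print(round(ratio, 3))   # 1.552, matching the reported compression ratio
```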

You can obtain the h5ex_d_gzip.c program that created this file, as well as the file created, from the HDF5 C Examples by API page.

h5ls

Specific datasets can be specified with h5ls by simply adding the dataset path and dataset after the file name. As an example, this command displays dataset dset2 in the groups.h5 file used in Example 4:

   h5ls groups.h5/MyGroup/Group_A/dset2

Just the dataspace information gets displayed:

   dset2                    Dataset {2, 10}

The following options can be used to see detailed information about a dataset.

Option         Description
-v, --verbose  Generates verbose output (lists dataset properties, attributes
               and attribute values, but no dataset values)
-d, --data     Displays dataset values

The output of using -v is shown below:

   $ h5ls -v groups.h5/MyGroup/Group_A/dset2
   Opened "groups.h5" with sec2 driver.
   dset2                    Dataset {2/2, 10/10}
       Location:  1:3840
       Links:     1
       Storage:   80 logical bytes, 80 allocated bytes, 100.00% utilization
       Type:      32-bit big-endian integer

The output of using -d is shown below:

   $ h5ls -d groups.h5/MyGroup/Group_A/dset2
   dset2                    Dataset {2, 10}
       Data:
           (0,0) 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10


Groups

Both h5dump and h5ls can be used to view specific groups in a file:

h5dump

The h5dump options that are useful for examining groups are:

Option              Description
-g G, --group=G     Displays group G and its members
-H, --header        Displays header information only
-A 0, --onlyattr=0  Suppresses the display of attributes

To view the contents of the HDFEOS group in the OMI file mentioned previously, specify the path and name of the group with -g. The -A 0 option suppresses attributes, and -H suppresses printing of data values:

   h5dump -g "/HDFEOS" -H -A 0 OMI-Aura.he5

   HDF5 "OMI-Aura.he5" {
   GROUP "/HDFEOS" {
      GROUP "ADDITIONAL" {
         GROUP "FILE_ATTRIBUTES" {
         }
      }
      GROUP "GRIDS" {
         GROUP "OMI Column Amount O3" {
            GROUP "Data Fields" {
               DATASET "ColumnAmountO3" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "RadiativeCloudFraction" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "SolarZenithAngle" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
               DATASET "ViewingZenithAngle" {
                  DATATYPE  H5T_IEEE_F32LE
                  DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
               }
            }
         }
      }
   }
   }

h5ls

You can view the contents of a group with h5ls by specifying the group after the file name. To recursively list the contents of the /HDFEOS group in the OMI-Aura.he5 file, type:

   h5ls -r OMI-Aura.he5/HDFEOS

The output of this command is:

   /ADDITIONAL              Group
   /ADDITIONAL/FILE_ATTRIBUTES Group
   /GRIDS                   Group
   /GRIDS/OMI\ Column\ Amount\ O3 Group
   /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields Group
   /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ColumnAmountO3 Dataset {720, 1440}
   /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/RadiativeCloudFraction Dataset {720, 1440}
   /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/SolarZenithAngle Dataset {720, 1440}
   /GRIDS/OMI\ Column\ Amount\ O3/Data\ Fields/ViewingZenithAngle Dataset {720, 1440}

If you specify the -v option, you can also see the attributes and properties of the datasets.


Attributes

h5ls

If you include the -v (verbose) option, h5ls displays all of the attributes of the specified file, dataset, or group. Individual attributes cannot be displayed with h5ls.

h5dump

Attributes are displayed by default if using h5dump. Some files contain many attributes, which can make it difficult to examine the objects in the file. Shown below are options that can help when using h5dump to work with files that have attributes.

Option               Description                                            Comment
-a A, --attribute=A  Displays attribute A                                   See Example 6
-A 0, --onlyattr=0   Suppresses the display of attributes                   See Example 2
-n 1, --contents=1   Lists file contents with attributes                    See Example 6
-N P, --any_path=P   Displays any object or attribute that matches path P   See Example 6

Example 6

The -a A option will display an attribute. However, the full path to the attribute must be included when specifying this option. For example, displaying one of the ScaleFactor attributes in the OMI-Aura.he5 file by its full path produces:

   HDF5 "OMI-Aura.he5" {
   ATTRIBUTE "ScaleFactor" {
      DATATYPE  H5T_IEEE_F64LE
      DATASPACE  SIMPLE { ( 1 ) / ( 1 ) }
      DATA {
      (0): 1
      }
   }
   }

How can you determine the path to the attribute? Look at the file contents with the -n 1 option:

   h5dump -n 1 OMI-Aura.he5

Below is a portion of the output for this command:

   HDF5 "OMI-Aura.he5" {
   FILE_CONTENTS {
    group      /
    group      /HDFEOS
    group      /HDFEOS/ADDITIONAL
    group      /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/EndUTC
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDay
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleDayOfYear
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleMonth
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/GranuleYear
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/InstrumentName
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitNumber
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/OrbitPeriod
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/PGEVersion
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/Period
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/ProcessLevel
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/StartUTC
    attribute  /HDFEOS/ADDITIONAL/FILE_ATTRIBUTES/TAI93At0zOfGranule

    ...

There can be multiple objects or attributes with the same name in a file. How can you make sure you are finding the correct object or attribute? You can first determine how many attributes there are with a specified name, and then examine the paths to them.

The -N option can be used to display all objects or attributes with a given name. For example, there are four attributes named ScaleFactor in the OMI-Aura.he5 file, as can be seen with the -N option:

   h5dump -N ScaleFactor OMI-Aura.he5

It outputs:

HDF5 "OMI-Aura.he5" {
ATTRIBUTE "ScaleFactor" {
   DATATYPE  H5T_IEEE_F64LE
   DATASPACE  SIMPLE { ( 1 ) / ( 1 ) }
   DATA {
   (0): 1
   }
}
ATTRIBUTE "ScaleFactor" {
   DATATYPE  H5T_IEEE_F64LE
   DATASPACE  SIMPLE { ( 1 ) / ( 1 ) }
   DATA {
   (0): 1
   }
}
ATTRIBUTE "ScaleFactor" {
   DATATYPE  H5T_IEEE_F64LE
   DATASPACE  SIMPLE { ( 1 ) / ( 1 ) }
   DATA {
   (0): 1
   }
}
ATTRIBUTE "ScaleFactor" {
   DATATYPE  H5T_IEEE_F64LE
   DATASPACE  SIMPLE { ( 1 ) / ( 1 ) }
   DATA {
   (0): 1
   }
}
}


Dataset Subset

h5dump

If you have a very large dataset, you may wish to subset or see just a portion of the dataset. This can be done with the following h5dump options.

Option                      Description
-d D, --dataset=D           Dataset D
-s START, --start=START     Offset or start of the subsetting selection
-S STRIDE, --stride=STRIDE  Stride (sampling along a dimension). The default (unspecified,
                            or 1) selects every element along a dimension, a value of 2
                            selects every other element, a value of 3 selects every third
                            element, and so on.
-c COUNT, --count=COUNT     Number of blocks to include in the selection
-k BLOCK, --block=BLOCK     Size of the block in a hyperslab. The default (unspecified,
                            or 1) is a block the size of a single element.

The START (s), STRIDE (S), COUNT (c), and BLOCK (k) options define the shape and size of the selection. They are arrays with the same number of dimensions as the rank of the dataset's dataspace, and they all work together to define the selection. A change to one of these arrays can affect the others.

When specifying these h5dump options, a comma is used as the delimiter for each dimension in the option value. For example, with a 2-dimensional dataset, the option value is specified as "H,W", where H is the height and W is the width. If the offset is 0 for both dimensions, then START would be specified as follows:

    -s "0,0"

There is also a shorthand way to specify these options with brackets at the end of the dataset name:

   -d DATASETNAME[s;S;c;k] 

Multiple dimensions are separated by commas. For example, a subset for a 2-dimensional dataset would be specified as follows:

  -d DATASETNAME[s,s;S,S;c,c;k,k]

For a detailed understanding of how selections work, see the H5Sselect_hyperslab API in the HDF5 Reference Manual.
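The interaction of these four arrays can be sketched in a few lines of Python (a hypothetical helper, not part of any HDF5 tool): in each dimension, COUNT blocks of BLOCK elements are selected, with the i-th block starting at START + i * STRIDE.

```python
from itertools import product

def hyperslab_indices(start, stride, count, block):
    """Expand a START/STRIDE/COUNT/BLOCK selection into the list of
    selected index tuples (illustrative sketch of hyperslab semantics)."""
    per_dim = []
    for s, st, c, b in zip(start, stride, count, block):
        # COUNT blocks; the i-th block covers [s + i*st, s + i*st + b)
        per_dim.append([s + i * st + j for i in range(c) for j in range(b)])
    # The Cartesian product across dimensions gives the full selection
    return list(product(*per_dim))

# The first subset example above: 15 x 10 elements starting at (0, 0),
# with the default stride and block of 1 in each dimension
sel = hyperslab_indices(start=(0, 0), stride=(1, 1), count=(15, 10), block=(1, 1))
print(len(sel))         # 150 elements
print(sel[0], sel[-1])  # (0, 0) (14, 9)
```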

The dataset SolarZenithAngle in the OMI-Aura.he5 file can be used to illustrate these options. It is a 2-dimensional dataset of size 720 (height) by 1440 (width). Viewing it with just the -d option displays far too much data:

   h5dump -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" OMI-Aura.he5

Subsetting narrows down the output that is displayed. In the following example, the first 15x10 elements (-c "15,10") are specified, beginning with position (0,0) (-s "0,0"):

    h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" 
          -s "0,0" -c "15,10" -w 0 OMI-Aura.he5

If using the shorthand method, specify:

    h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle[0,0;;15,10;]" 
          -w 0 OMI-Aura.he5

Either command displays:

   HDF5 "OMI-Aura.he5" {
   DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
      DATATYPE  H5T_IEEE_F32LE
      DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
      SUBSET {
         START ( 0, 0 );
         STRIDE ( 1, 1 );
         COUNT ( 15, 10 );
         BLOCK ( 1, 1 );
         DATA {
         (0,0): 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403, 79.403,
         (1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
         (2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
         (3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
         (4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
         (5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
         (6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021,
         (7,0): 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715, 77.715,
         (8,0): 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511, 77.511,
         (9,0): 77.658, 77.658, 77.658, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307, 77.307,
         (10,0): 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.556, 77.102, 77.102,
         (11,0): 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 78.408, 77.102, 77.102,
         (12,0): 76.34, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413, 78.413,
         (13,0): 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 78.107, 77.195,
         (14,0): 78.005, 78.005, 78.005, 78.005, 78.005, 78.005, 76.991, 76.991, 76.991, 76.991
         }
      }
   }
   }

What if we wish to read three rows of three elements at a time (-c "3,3"), where each element is a 2 x 3 block (-k "2,3") and we wish to begin reading from the second row (-s "1,0")?

You can do that with the following command:

   h5dump -A 0 -d "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" -s "1,0" -S "2,3" -c "3,3" -k "2,3" -w 0 OMI-Aura.he5

In this case, the stride must be specified as 2 by 3 (the block size) or larger to accommodate reading 2 by 3 blocks. If it is smaller, the command fails with the error: h5dump error: wrong subset selection; blocks overlap.
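The overlap rule itself is simple: when more than one block is selected along a dimension, consecutive blocks collide exactly when the stride in that dimension is smaller than the block size. A sketch of such a check (illustrative Python, not h5dump's actual code):

```python
def blocks_overlap(stride, block):
    """True if consecutive blocks along some dimension would overlap
    (assumes more than one block is selected in that dimension)."""
    return any(st < b for st, b in zip(stride, block))

print(blocks_overlap(stride=(2, 3), block=(2, 3)))  # False: stride equal to block is allowed
print(blocks_overlap(stride=(1, 3), block=(2, 3)))  # True: h5dump rejects this selection
```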

The output of the above command is shown below:

   HDF5 "OMI-Aura.he5" {
   DATASET "HDFEOS/GRIDS/OMI Column Amount O3/Data Fields/SolarZenithAngle" {
      DATATYPE  H5T_IEEE_F32LE
      DATASPACE  SIMPLE { ( 720, 1440 ) / ( 720, 1440 ) }
      SUBSET {
         START ( 1, 0 );
         STRIDE ( 2, 3 );
         COUNT ( 3, 3 );
         BLOCK ( 2, 3 );
         DATA {
         (1,0): 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071, 79.071,
         (2,0): 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867, 78.867,
         (3,0): 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632, 78.632,
         (4,0): 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429, 78.429,
         (5,0): 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225, 78.225,
         (6,0): 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021, 78.021
         }
      }
   }
   }


Datatypes

h5dump

The following datatypes are discussed, using h5dump output for HDF5 files from the HDF5 C Examples by API page: Array, Object Reference, Region Reference, and String.


Array

Users are sometimes confused by the difference between a dataset with an Array datatype (H5T_ARRAY) and a dataset whose dataspace is an array.

Typically, what is wanted is a dataset with a simple datatype (such as integer or float) and an array-shaped dataspace, like the following dataset /DS1. It has a datatype of H5T_STD_I32LE (32-bit little-endian integer) and is a 4 by 7 array:

$ h5dump h5ex_d_rdwr.h5
HDF5 "h5ex_d_rdwr.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_STD_I32LE
      DATASPACE  SIMPLE { ( 4, 7 ) / ( 4, 7 ) }
      DATA {
      (0,0): 0, -1, -2, -3, -4, -5, -6,
      (1,0): 0, 0, 0, 0, 0, 0, 0,
      (2,0): 0, 1, 2, 3, 4, 5, 6,
      (3,0): 0, 2, 4, 6, 8, 10, 12
      }
   }
}
} 

Contrast that with the following dataset, which has an Array datatype and is itself an array:

$ h5dump h5ex_t_array.h5
HDF5 "h5ex_t_array.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_ARRAY { [3][5] H5T_STD_I64LE }
      DATASPACE  SIMPLE { ( 4 ) / ( 4 ) }
      DATA {
      (0): [ 0, 0, 0, 0, 0,
            0, -1, -2, -3, -4,
            0, -2, -4, -6, -8 ],
      (1): [ 0, 1, 2, 3, 4,
            1, 1, 1, 1, 1,
            2, 1, 0, -1, -2 ],
      (2): [ 0, 2, 4, 6, 8,
            2, 3, 4, 5, 6,
            4, 4, 4, 4, 4 ],
      (3): [ 0, 3, 6, 9, 12,
            3, 5, 7, 9, 11,
            6, 7, 8, 9, 10 ]
      }
   }
}
}

In this file, dataset /DS1 has a datatype of H5T_ARRAY { [3][5] H5T_STD_I64LE } and it also has a dataspace of SIMPLE { ( 4 ) / ( 4 ) }. In other words, it is an array of four elements, in which each element is a 3 by 5 array of H5T_STD_I64LE.

This dataset is much more complex. Also note that subsetting cannot be done on Array datatypes.

See The HDF Group's FAQ for more information on the Array datatype.

Object Reference

An Object Reference is a reference to an entire object (dataset, group, or named datatype). A dataset with an Object Reference datatype consists of one or more Object References. An Object Reference dataset can be used as an index to an HDF5 file.

The /DS1 dataset in the following file (h5ex_t_objref.h5) is an Object Reference dataset. It contains two references, one to group /G1 and the other to dataset /DS2:


$ h5dump h5ex_t_objref.h5
HDF5 "h5ex_t_objref.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_REFERENCE { H5T_STD_REF_OBJECT }
      DATASPACE  SIMPLE { ( 2 ) / ( 2 ) }
      DATA {
      (0): GROUP 1400 /G1 , DATASET 800 /DS2
      }
   }
   DATASET "DS2" {
      DATATYPE  H5T_STD_I32LE
      DATASPACE  NULL
      DATA {
      }
   }
   GROUP "G1" {
   }
}
}

Region Reference

A Region Reference is a reference to a selection within a dataset. A selection can be either individual elements or a hyperslab. In h5dump you will see the name of the dataset along with the elements or slab that is selected. A dataset with a Region Reference datatype consists of one or more Region References.

An example of a Region Reference dataset (h5ex_t_regref.h5) can be found on the C Examples by API page, under Datatypes. If you examine this file with h5dump, you will see that /DS1 is a Region Reference dataset, as indicated by its datatype, H5T_REFERENCE { H5T_STD_REF_DSETREG }:

$ h5dump  h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_REFERENCE { H5T_STD_REF_DSETREG }
      DATASPACE  SIMPLE { ( 2 ) / ( 2 ) }
      DATA {
         DATASET /DS2 {(0,1), (2,11), (1,0), (2,4)},
         DATASET /DS2 {(0,0)-(0,2), (0,11)-(0,13), (2,0)-(2,2), (2,11)-(2,13)}
      }
   }
   DATASET "DS2" {
      DATATYPE  H5T_STD_I8LE
      DATASPACE  SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
      DATA {
      (0,0): 84, 104, 101, 32, 113, 117, 105, 99, 107, 32, 98, 114, 111, 119,
      (0,14): 110, 0,
      (1,0): 102, 111, 120, 32, 106, 117, 109, 112, 115, 32, 111, 118, 101,
      (1,13): 114, 32, 0,
      (2,0): 116, 104, 101, 32, 53, 32, 108, 97, 122, 121, 32, 100, 111, 103,
      (2,14): 115, 0
      }
   }
}
}

The dataset contains two Region References: a selection of four individual elements of /DS2, and a hyperslab selection of /DS2.

If you look at the code that creates the dataset (h5ex_t_regref.c), you will see that the first reference is created with these calls:

  status = H5Sselect_elements (space, H5S_SELECT_SET, 4, coords[0]);
  status = H5Rcreate (&wdata[0], file, DATASET2, H5R_DATASET_REGION, space);

where the buffer containing the coordinates to select is:

   coords[4][2] = { {0,  1},
                    {2, 11},
                    {1,  0},
                    {2,  4} };

The second reference is created by calling:

  status = H5Sselect_hyperslab (space, H5S_SELECT_SET, start, stride, count,
                block);
  status = H5Rcreate (&wdata[1], file, DATASET2, H5R_DATASET_REGION, space);

where start, stride, count, and block have these values:

     start[2] =  {0, 0},
     stride[2] = {2, 11},
     count[2] =  {2, 2},
     block[2] =  {1, 3};

These start, stride, count, and block values select, in rows 0 and 2 of the dataset, the elements in columns 0 through 2 and 11 through 13:

 84 104 101 32 113 117 105  99 107  32  98 114 111 119 110 0
102 111 120 32 106 117 109 112 115  32 111 118 101 114  32 0
116 104 101 32  53  32 108  97 122 121  32 100 111 103 115 0

If you use h5dump to select a subset of dataset /DS2 with these start, stride, count, and block values, you will see that the same elements are selected:

$ h5dump -d "/DS2" -s "0,0" -S "2,11" -c "2,2" -k "1,3" h5ex_t_regref.h5
HDF5 "h5ex_t_regref.h5" {
DATASET "/DS2" {
   DATATYPE  H5T_STD_I8LE
   DATASPACE  SIMPLE { ( 3, 16 ) / ( 3, 16 ) }
   SUBSET {
      START ( 0, 0 );
      STRIDE ( 2, 11 );
      COUNT ( 2, 2 );
      BLOCK ( 1, 3 );
      DATA {
      (0,0): 84, 104, 101, 114, 111, 119,
      (2,0): 116, 104, 101, 100, 111, 103
      }
   }
}
} 
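The same selection can be reproduced outside HDF5 with a few lines of plain Python (a sketch; the byte values are hard-coded from the h5dump output above rather than read from the file):

```python
from itertools import product

# /DS2 as a 3 x 16 grid of byte values (from the h5dump output above)
ds2 = [
    [84, 104, 101, 32, 113, 117, 105, 99, 107, 32, 98, 114, 111, 119, 110, 0],
    [102, 111, 120, 32, 106, 117, 109, 112, 115, 32, 111, 118, 101, 114, 32, 0],
    [116, 104, 101, 32, 53, 32, 108, 97, 122, 121, 32, 100, 111, 103, 115, 0],
]

start, stride, count, block = (0, 0), (2, 11), (2, 2), (1, 3)

# Expand the hyperslab per dimension: COUNT blocks of BLOCK elements,
# the i-th block starting at START + i*STRIDE
dims = [
    [s + i * st + j for i in range(c) for j in range(b)]
    for s, st, c, b in zip(start, stride, count, block)
]
selected = [ds2[r][c] for r, c in product(*dims)]
print(selected)

# Decoded as ASCII, the selection spells out four words
words = [bytes(selected[i:i + 3]).decode() for i in range(0, 12, 3)]
print(words)   # ['The', 'row', 'the', 'dog']
```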

For more information on selections, see the tutorial topic on Reading From or Writing To a Subset of a Dataset. Also see the View Dataset Subset tutorial topic on using h5dump to view a subset.

String

There are two types of string data, fixed length strings and variable length strings.

Below is the h5dump output for two files that have the same strings written to them. In one file the strings are fixed in length, and in the other the strings are variable in length.

Dataset of Fixed Length Strings:
HDF5 "h5ex_t_string.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_STRING {
         STRSIZE 7;
         STRPAD H5T_STR_SPACEPAD;
         CSET H5T_CSET_ASCII;
         CTYPE H5T_C_S1;
      }
      DATASPACE  SIMPLE { ( 4 ) / ( 4 ) }
      DATA {
      (0): "Parting", "is such", "sweet  ", "sorrow."
      }
   }
}
}

Dataset of Variable Length Strings:

HDF5 "h5ex_t_vlstring.h5" {
GROUP "/" {
   DATASET "DS1" {
      DATATYPE  H5T_STRING {
         STRSIZE H5T_VARIABLE;
         STRPAD H5T_STR_SPACEPAD;
         CSET H5T_CSET_ASCII;
         CTYPE H5T_C_S1;
      }
      DATASPACE  SIMPLE { ( 4 ) / ( 4 ) }
      DATA {
      (0): "Parting", "is such", "sweet", "sorrow."
      }
   }
}
}
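The difference between the two dumps shows in the third string: with STRSIZE 7 and H5T_STR_SPACEPAD, "sweet" is stored space-padded to seven characters. A couple of lines of Python mimic the fixed-length padding (an illustration, not an HDF5 call):

```python
strings = ["Parting", "is such", "sweet", "sorrow."]

# Fixed-length, space-padded storage (STRSIZE 7, H5T_STR_SPACEPAD)
fixed = [s.ljust(7) for s in strings]
print(fixed)   # ['Parting', 'is such', 'sweet  ', 'sorrow.']

# A fixed-length element always costs STRSIZE bytes of character data;
# variable-length storage costs the string itself plus heap pointers
print(sum(len(s) for s in fixed))    # 28 bytes of character data
print(sum(len(s) for s in strings))  # 26 bytes
```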

You might wonder which to use. Some comments to consider are included below.

In general, a variable length string dataset is more complex than a fixed length string. If you don't specifically need a variable length type, then just use the fixed length string.

A variable length dataset consists of pointers to heaps in different locations in the file. For this reason, a variable length dataset cannot be compressed. (Basically, the pointers get compressed and not the actual data!) If compression is needed, then do not use variable length types.

If you need to create an array of different length strings, you can either use fixed length strings along with compression, or use a variable length string.


Last modified: 12 July 2017