6.1. Introduction and Definitions
An HDF5 dataset is an array of data elements, arranged according to the specifications of the dataspace. In general, a data element is the smallest addressable unit of storage in the HDF5 file. (Compound datatypes are the exception to this rule.) The HDF5 datatype defines the storage format for a single data element. See the figure below.
The model for HDF5 attributes is extremely similar to datasets: an attribute has a dataspace and a datatype, as shown in the figure below. The information in this chapter applies to both datasets and attributes.
Abstractly, each data element within the dataset is a sequence of bits, interpreted as a single value from a set of values (for example, a number or a character). For a given datatype, there is a standard or convention for representing the values as bits, and when the bits are placed in storage they are laid out according to a specific storage scheme, such as 8-bit bytes with a specific ordering and alignment of bytes within the storage array.
HDF5 datatypes implement a flexible, extensible, and portable mechanism for specifying and discovering the storage layout of the data elements, for determining how to interpret the elements (for example, as floating point numbers), and for transferring data between different but compatible layouts.
An HDF5 datatype describes one specific layout of bits. A dataset has a single datatype which applies to every data element. When a dataset is created, the storage datatype is defined. After the dataset or attribute is created, the datatype cannot be changed.
• The datatype describes the storage layout of a single data element
• All elements of the dataset must have the same type
• The datatype of a dataset is immutable
When data is transferred (for example, a read or write), each end point of the transfer has a datatype, which describes the correct storage for the elements. The source and destination may have different (but compatible) layouts, in which case the data elements are automatically transformed during the transfer.
HDF5 datatypes describe commonly used binary formats for numbers (integers and floating point) and characters (ASCII). A given computing architecture and programming language supports certain number and character representations. For example, a computer may support 8-, 16-, 32-, and 64-bit signed integers, stored in memory in little-endian byte order. These would presumably correspond to the C programming language types ‘char’, ‘short’, ‘int’, and ‘long’.
When reading and writing from memory, the HDF5 Library must know the appropriate datatype that describes the architecture specific layout. The HDF5 Library provides the platform independent ‘NATIVE’ types, which are mapped to an appropriate datatype for each platform. So the type ‘H5T_NATIVE_INT’ is an alias for the appropriate descriptor for each platform.
Data in memory has a datatype:
• The storage layout in memory is architecture-specific
• The HDF5 ‘NATIVE’ types are predefined aliases for the architecture-specific memory layout
• The memory datatype need not be the same as the stored datatype of the dataset
In addition to numbers and characters, an HDF5 datatype can describe more abstract classes of types including enumerations, strings, bit strings, and references (pointers to objects in the HDF5 file). HDF5 supports several classes of composite datatypes which are combinations of one or more other datatypes. In addition to the standard predefined datatypes, users can define new datatypes within the datatype classes.
The HDF5 datatype model is very general and flexible:
• For common simple purposes, only predefined types will be needed
• Datatypes can be combined to create complex structured datatypes
• If needed, users can define custom atomic datatypes
• Committed datatypes can be shared by datasets or attributes
The HDF5 Library implements an object-oriented model of datatypes. HDF5 datatypes are organized as a logical set of base types, or datatype classes. Each datatype class defines a format for representing logical values as a sequence of bits. For example the H5T_INTEGER class is a format for representing twos complement integers of various sizes.
A datatype class is defined as a set of one or more datatype properties. A datatype property is a property of the bit string. The datatype properties are defined by the logical model of the datatype class. For example, the integer class (twos complement integers) has properties such as “signed or unsigned”, “length”, and “byte-order”. The float class (IEEE floating point numbers) has these properties, plus “exponent bits”, “exponent sign”, etc.
A datatype is derived from one datatype class: a given datatype has a specific value for the datatype properties defined by the class. For example, for 32-bit signed integers, stored big-endian, the HDF5 datatype is a sub-type of integer with the properties set to signed=1, size=4 (bytes), and byte-order=BE.
The HDF5 datatype API (H5T functions) provides methods to create datatypes of different datatype classes, to set the datatype properties of a new datatype, and to discover the datatype properties of an existing datatype.
The datatype for a dataset is stored in the HDF5 file as part of the metadata for the dataset.
A datatype can be shared by more than one dataset in the file if the datatype is saved to the file with a name. This shareable datatype is known as a committed datatype. In the past, this kind of datatype was called a named datatype.
When transferring data (for example, a read or write), the data elements of the source and destination storage must have compatible types. As a general rule, data elements with the same datatype class are compatible while elements from different datatype classes are not compatible. When transferring data of one datatype to another compatible datatype, the HDF5 Library uses the datatype properties of the source and destination to automatically transform each data element. For example, when reading from data stored as 32-bit signed integers, big-endian into 32-bit signed integers, little-endian, the HDF5 Library will automatically swap the bytes.
Thus, data transfer operations (H5Dread, H5Dwrite, H5Aread, H5Awrite) require a datatype for both the source and the destination.
The HDF5 Library defines a set of predefined datatypes corresponding to commonly used storage formats, such as two's complement integers and IEEE floating point numbers, in 4- and 8-byte sizes and in big-endian and little-endian byte orders. In addition, a user can derive types with custom values for the properties. For example, a user program may create a datatype to describe a 6-bit integer or a 600-bit floating point number.
In addition to atomic datatypes, the HDF5 Library supports composite datatypes. A composite datatype is an aggregation of one or more datatypes. Each class of composite datatypes has properties that describe the organization of the composite datatype. See the figure below. Composite datatypes include:
• Compound datatypes: structured records
• Array: a multidimensional array of a datatype
• Variable-length: a one-dimensional array of a datatype
6.2.1. Datatype Classes and Properties
The figure below shows the HDF5 datatype classes. Each class is defined to have a set of properties which describe the layout of the data element and the interpretation of the bits. The table below lists the properties for the datatype classes.
| Class | Description | Properties | Notes |
|---|---|---|---|
| Integer | Two's complement integers | Size (bytes), precision (bits), offset (bits), pad, byte order, signed/unsigned | |
| Float | Floating point numbers | Size (bytes), precision (bits), offset (bits), pad, byte order, sign position, exponent position, exponent size (bits), exponent sign, exponent bias, mantissa position, mantissa size (bits), mantissa sign, mantissa normalization, internal padding | See IEEE 754 for a definition of these properties. These properties describe non-IEEE 754 floating point formats as well. |
| Character | Array of 1-byte character encoding | Size (characters), character set, byte order, pad/no pad, pad character | Currently, ASCII and UTF-8 are supported. |
| Bitfield | String of bits | Size (bytes), precision (bits), offset (bits), pad, byte order | A sequence of bit values packed into one or more bytes. |
| Opaque | Uninterpreted data | Size (bytes), precision (bits), offset (bits), pad, byte order, tag | A sequence of bytes, stored and retrieved as a block. The ‘tag’ is a string that can be used to label the value. |
| Enumeration | A list of discrete values, with symbolic names in the form of strings | Number of elements, element names, element values | An enumeration is a list of (name, value) pairs. The name is a string; the value is an unsigned integer. |
| Reference | Reference to an object or region within the HDF5 file | | See the Reference API, H5R. |
| Array | Array (1–4 dimensions) of data elements | Number of dimensions, dimension sizes, base datatype | The array is accessed atomically: no selection or sub-setting. |
| Variable-length | A variable-length 1-dimensional array of data elements | Current size, base type | |
| Compound | A datatype composed of a sequence of datatypes | Number of members, member names, member types, member offsets, member classes, member sizes, byte order | |
The HDF5 Library predefines a modest number of commonly used datatypes. These types have standard symbolic names of the form H5T_arch_base where arch is an architecture name and base is a programming type name (Table 2). New types can be derived from the predefined types by copying the predefined type (see H5Tcopy()) and then modifying the result.
The base name of most types consists of a letter to indicate the class (Table 3), a precision in bits, and an indication of the byte order (Table 4).
Table 5 shows examples of predefined datatypes. The full list can be found in the “HDF5 Predefined Datatypes” section of the HDF5 Reference Manual.
| Letter | Class |
|---|---|
| B | Bitfield |
| F | Floating point |
| I | Signed integer |
| R | Reference |
| S | Character string |
| U | Unsigned integer |
| Suffix | Byte Order |
|---|---|
| BE | Big-endian |
| LE | Little-endian |
| Example | Description |
|---|---|
| | Eight-byte, little-endian, IEEE floating point |
| | Four-byte, big-endian, IEEE floating point |
| | Four-byte, little-endian, signed two's complement integer |
| | Two-byte, big-endian, unsigned integer |
| | One-byte, null-terminated string of eight-bit characters |
| | Eight-byte bit field on an Intel CPU |
| | Eight-byte Cray floating point |
| | Reference to an entire object in a file |
The HDF5 Library predefines a set of NATIVE datatypes which are similar to C type names. The native types are set to be an alias for the appropriate HDF5 datatype for each platform. For example, H5T_NATIVE_INT corresponds to a C int type. On an Intel based PC, this type is the same as H5T_STD_I32LE, while on a MIPS system this would be equivalent to H5T_STD_I32BE. Table 6 shows examples of NATIVE types and corresponding C types for a common 32-bit workstation.
| Example | Corresponding C Type |
|---|---|
| H5T_NATIVE_CHAR | char |
| H5T_NATIVE_SCHAR | signed char |
| H5T_NATIVE_UCHAR | unsigned char |
| H5T_NATIVE_SHORT | short |
| H5T_NATIVE_USHORT | unsigned short |
| H5T_NATIVE_INT | int |
| H5T_NATIVE_UINT | unsigned |
| H5T_NATIVE_LONG | long |
| H5T_NATIVE_ULONG | unsigned long |
| H5T_NATIVE_LLONG | long long |
| H5T_NATIVE_ULLONG | unsigned long long |
| H5T_NATIVE_FLOAT | float |
| H5T_NATIVE_DOUBLE | double |
| H5T_NATIVE_LDOUBLE | long double |
| H5T_NATIVE_HSIZE | hsize_t |
| H5T_NATIVE_HSSIZE | hssize_t |
| H5T_NATIVE_HERR | herr_t |
| H5T_NATIVE_HBOOL | hbool_t |
| H5T_NATIVE_B8 | 8-bit unsigned integer or 8-bit buffer in memory |
| H5T_NATIVE_B16 | 16-bit unsigned integer or 16-bit buffer in memory |
| H5T_NATIVE_B32 | 32-bit unsigned integer or 32-bit buffer in memory |
| H5T_NATIVE_B64 | 64-bit unsigned integer or 64-bit buffer in memory |
6.3.1. The Datatype Object and the HDF5 Datatype API
The HDF5 Library manages datatypes as objects. The HDF5 datatype API manipulates the datatype objects through C function calls. New datatypes can be created from scratch or copied from existing datatypes. When a datatype is no longer needed its resources should be released by calling H5Tclose().
The datatype object is used in several roles in the HDF5 data model and library. Essentially, a datatype is used whenever the format of data elements is needed. There are four major uses of datatypes in the HDF5 Library: at dataset creation, during data transfers, when discovering the contents of a file, and for specifying user-defined datatypes. See the table below.
| Use | Description |
|---|---|
| Dataset creation | The datatype of the data elements must be declared when the dataset is created. |
| Data transfer | The datatype (format) of the data elements must be defined for both the source and destination. |
| Discovery | The datatype of a dataset can be interrogated to retrieve a complete description of the storage layout. |
| Creating user-defined datatypes | Users can define their own datatypes by creating datatype objects and setting their properties. |
6.3.2. Dataset Creation
All the data elements of a dataset have the same datatype. When a dataset is created, the datatype for the data elements must be specified; it can never be changed afterwards. The example below shows the use of a datatype to create a dataset called “/dset”. In this example, the dataset will be stored as 32-bit signed integers in big-endian order.
hid_t dt = H5Tcopy(H5T_STD_I32BE);
hid_t dataset_id = H5Dcreate(file_id, "/dset", dt, dataspace_id, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT);
H5Tclose(dt);
6.3.3. Data Transfer (Read and Write)
Probably the most common use of datatypes is to write data to or read data from a dataset or attribute. In these operations, each data element is transferred from the source to the destination (possibly rearranging the order of the elements). Since the source and destination need not be identical (for example, one may be on disk and the other in memory), the transfer requires the format of both the source element and the destination element. Therefore, data transfers use two datatype objects: one for the source and one for the destination.
When data is written, the source is memory and the destination is disk (file). The memory datatype describes the format of the data element in the machine memory, and the file datatype describes the desired format of the data element on disk. Similarly, when reading, the source datatype describes the format of the data element on disk, and the destination datatype describes the format in memory.
In the most common cases, the file datatype is the datatype specified when the dataset was created, and the memory datatype should be the appropriate NATIVE type.
The examples below show samples of writing data to and reading data from a dataset. The data in memory is declared C type ‘int’, and the datatype H5T_NATIVE_INT corresponds to this type. The datatype of the dataset should be of datatype class H5T_INTEGER.
int dset_data[DATA_SIZE];
status = H5Dwrite(dataset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, dset_data);

int dset_data[DATA_SIZE];
status = H5Dread(dataset_id, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, H5P_DEFAULT, dset_data);
6.3.4. Discovery of Data Format
The HDF5 Library enables a program to determine the datatype class and properties for any datatype. In order to discover the storage format of data in a dataset, the datatype is obtained, and the properties are determined by queries to the datatype object. The example below shows code that analyzes the datatype for an integer and prints out a description of its storage properties (byte order, signed, size).
switch (H5Tget_class(type)) {
case H5T_INTEGER:
    ord = H5Tget_order(type);
    printf("Integer ByteOrder= ");
    switch (ord) {
    case H5T_ORDER_LE:
        printf("LE");
        break;
    case H5T_ORDER_BE:
        printf("BE");
        break;
    }
    sgn = H5Tget_sign(type);
    printf(" Sign= ");
    switch (sgn) {
    case H5T_SGN_NONE:
        printf("false");
        break;
    case H5T_SGN_2:
        printf("true");
        break;
    }
    sz = H5Tget_size(type);
    printf(" Size= ");
    printf("%d", (int)sz);
    printf("\n");
    break;
}
6.3.5. Creating and Using User-defined Datatypes
Most programs will primarily use the predefined datatypes described above, possibly in composite datatypes such as compound or array datatypes. However, the HDF5 datatype model is extremely general; a user program can define a great variety of atomic datatypes (storage layouts). In particular, the datatype properties can define signed and unsigned integers of any size and byte order, and floating point numbers with different formats, size, and byte order. The HDF5 datatype API provides methods to set these properties.
User-defined types can be used to define the layout of data in memory, for example, to match a platform-specific number format or an application-defined bit field. A user-defined type can also describe data in the file, such as an application-defined format. User-defined types can be translated to and from standard types of the same class, as described above.
6.4. Datatype (H5T) Function Summaries
Functions that can be used with datatypes (H5T functions) and property list functions that can be used with datatypes (H5P functions) are listed below.
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Creates a new datatype. |
| | Opens a committed datatype. The C function is a macro: see “API Compatibility Macros in HDF5.” |
| | Commits a transient datatype to a file; the datatype becomes a committed datatype. The C function is a macro: see “API Compatibility Macros in HDF5.” |
| | Commits a transient datatype to a file; the datatype becomes a committed datatype but is not linked into the file structure. |
| | Determines whether a datatype is a committed or a transient type. |
| | Copies an existing datatype. |
| | Determines whether two datatype identifiers refer to the same datatype. |
| (no Fortran subroutine) | Locks a datatype. |
| | Returns the datatype class identifier. |
| | Returns a copy of a datatype creation property list. |
| | Returns the size of a datatype. |
| | Returns the base datatype from which a datatype is derived. |
| | Returns the native datatype of a specified datatype. |
| (no Fortran subroutine) | Determines whether a datatype is of the given datatype class. |
| | Returns the byte order of a datatype. |
| | Sets the byte ordering of a datatype. |
| | Decodes a binary object description of a datatype and returns a new object identifier. |
| | Encodes a datatype object description into a binary buffer. |
| | Releases a datatype. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Converts data between specified datatypes. |
| | Checks whether the library's default conversion is a hard conversion. |
| (no Fortran subroutine) | Finds a conversion function. |
| (no Fortran subroutine) | Registers a conversion function. |
| (no Fortran subroutine) | Removes a conversion function from all conversion paths. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Sets the total size for an atomic datatype. |
| | Returns the precision of an atomic datatype. |
| | Sets the precision of an atomic datatype. |
| | Retrieves the bit offset of the first significant bit. |
| | Sets the bit offset of the first significant bit. |
| | Retrieves the padding type of the least- and most-significant bits. |
| | Sets the padding types of the least- and most-significant bits. |
| | Retrieves the sign type of an integer datatype. |
| | Sets the sign property of an integer datatype. |
| | Retrieves floating-point datatype bit field information. |
| | Sets the locations and sizes of floating-point bit fields. |
| | Retrieves the exponent bias of a floating-point type. |
| | Sets the exponent bias of a floating-point type. |
| | Retrieves the mantissa normalization of a floating-point datatype. |
| | Sets the mantissa normalization of a floating-point datatype. |
| | Retrieves the internal padding type for unused bits in floating-point datatypes. |
| | Fills unused internal floating-point bits. |
| | Retrieves the character set type of a string datatype. |
| | Sets the character set to be used. |
| | Retrieves the storage mechanism for a string datatype. |
| | Defines the storage mechanism for character strings. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Creates a new enumeration datatype. |
| | Inserts a new enumeration datatype member. |
| | Returns the symbol name corresponding to a specified member of an enumeration datatype. |
| | Returns the value corresponding to a specified member of an enumeration datatype. |
| | Returns the value of an enumeration datatype member. |
| | Retrieves the number of elements in a compound or enumeration datatype. |
| | Retrieves the name of a compound or enumeration datatype member. |
| (no Fortran subroutine) | Retrieves the index of a compound or enumeration datatype member. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Retrieves the number of elements in a compound or enumeration datatype. |
| | Returns the datatype class of a compound datatype member. |
| | Retrieves the name of a compound or enumeration datatype member. |
| | Retrieves the index of a compound or enumeration datatype member. |
| | Retrieves the offset of a field of a compound datatype. |
| | Returns the datatype of the specified member. |
| | Adds a new member to a compound datatype. |
| | Recursively removes padding from within a compound datatype. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Creates an array datatype object. The C function is a macro: see “API Compatibility Macros in HDF5.” |
| | Returns the rank of an array datatype. |
| | Returns the sizes of array dimensions and dimension permutations. The C function is a macro: see “API Compatibility Macros in HDF5.” |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Creates a new variable-length datatype. |
| | Determines whether a datatype is a variable-length string. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Tags an opaque datatype. |
| | Gets the tag associated with an opaque datatype. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| (no Fortran subroutine) | Creates a datatype from a text description. |
| (no Fortran subroutine) | Generates a text description of a datatype. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| | Sets the character encoding used to encode a string; used to set ASCII or UTF-8 character encoding for object names. |
| | Retrieves the character encoding used to create a string. |
| C Function / Fortran Subroutine | Purpose |
|---|---|
| (no Fortran subroutine) | Sets a user-defined datatype conversion callback function. |
| (no Fortran subroutine) | Gets the user-defined datatype conversion callback function. |
6.5. Programming Model for Datatypes
The HDF5 Library implements an object-oriented model of datatypes. HDF5 datatypes are organized as a logical set of base types, or datatype classes. The HDF5 Library manages datatypes as objects. The HDF5 datatype API manipulates the datatype objects through C function calls. The figure below shows the abstract view of the datatype object. The table below shows the methods (C functions) that operate on datatype objects. New datatypes can be created from scratch or copied from existing datatypes.
In order to use a datatype, the object must be created (H5Tcreate), or a reference obtained by cloning from an existing type (H5Tcopy), or opened (H5Topen). In addition, a reference to the datatype of a dataset or attribute can be obtained with H5Dget_type or H5Aget_type. For composite datatypes a reference to the datatype for members or base types can be obtained (H5Tget_member_type, H5Tget_super). When the datatype object is no longer needed, the reference is discarded with H5Tclose.
Two datatype objects can be tested to see if they are the same with H5Tequal. This function returns true if the two datatype references refer to the same datatype object. However, if two datatype objects define equivalent datatypes (the same datatype class and datatype properties), they will not be considered ‘equal’.
A datatype can be written to the file as a first class object (H5Tcommit). This is a committed datatype and can be used in the same way as any other datatype.
6.5.1. Discovery of Datatype Properties
Any HDF5 datatype object can be queried to discover all of its datatype properties. For each datatype class, there are a set of API functions to retrieve the datatype properties for this class.
6.5.1.1. Properties of Atomic Datatypes
Table 9 lists the functions to discover the properties of atomic datatypes. Table 10 lists the queries relevant to specific numeric types. Table 11 gives the properties for atomic string datatype, and Table 12 gives the property of the opaque datatype.
6.5.1.2. Properties of Composite Datatypes
The composite datatype classes can also be analyzed to discover their datatype properties and the datatypes that are members or base types of the composite datatype. The member or base type can, in turn, be analyzed. The table below lists the functions that can access the datatype properties of the different composite datatypes.
6.5.2. Definition of Datatypes
The HDF5 Library enables user programs to create and modify datatypes. The essential steps are:
1. Create a new datatype object of a specific composite datatype class, or copy an existing atomic datatype object
2. Set properties of the datatype object
3. Use the datatype object
4. Close the datatype object
To create a user-defined atomic datatype, the procedure is to clone a predefined datatype of the appropriate datatype class (H5Tcopy), and then set the datatype properties appropriate to the datatype class. The table below shows how to create a datatype to describe a 1024-bit unsigned integer.
hid_t new_type = H5Tcopy(H5T_NATIVE_INT);
H5Tset_precision(new_type, 1024);
H5Tset_sign(new_type, H5T_SGN_NONE);
Composite datatypes are created with a specific API call for each datatype class. The table below shows the creation method for each datatype class. A newly created datatype cannot be used until the datatype properties are set. For example, a newly created compound datatype has no members and cannot be used.
| Datatype Class | Function to Create |
|---|---|
| COMPOUND | |
| OPAQUE | |
| ENUM | |
| ARRAY | |
| VL | |
Once the datatype is created and the datatype properties set, the datatype object can be used.
Predefined datatypes are defined by the library during initialization using the same mechanisms as described here. Each predefined datatype is locked (H5Tlock), so that it cannot be changed or destroyed. User-defined datatypes may also be locked using H5Tlock.
6.5.2.1. User-defined Atomic Datatypes
Table 15 summarizes the API methods that set properties of atomic types. Table 16 shows properties specific to numeric types, Table 17 shows properties specific to the string datatype class. Note that offset, pad, etc. do not apply to strings. Table 18 shows the specific property of the OPAQUE datatype class.
| Functions | Description |
|---|---|
| herr_t H5Tset_tag(hid_t type_id, const char *tag) | Tags the opaque datatype type_id with an ASCII identifier tag. |
Examples
The example below shows how to create a 128-bit little-endian signed integer type. Increasing the precision of a type automatically increases the total size. Note that the proper procedure is to begin from a type of the intended datatype class which in this case is a NATIVE INT.
hid_t new_type = H5Tcopy(H5T_NATIVE_INT);
H5Tset_precision(new_type, 128);
H5Tset_order(new_type, H5T_ORDER_LE);
The figure below shows the storage layout as the type is defined. The H5Tcopy creates a datatype that is the same as H5T_NATIVE_INT. In this example, suppose this is a 32-bit big-endian number (Figure a). The precision is set to 128 bits, which automatically extends the size to 8 bytes (Figure b). Finally, the byte order is set to little-endian (Figure c).
The significant bits of a data element can be offset from the beginning of the memory for that element by an amount of padding. The offset property specifies the number of bits of padding that appear to the “right of” the value. The table below shows how a 32-bit unsigned integer with 16 bits of precision and the value 0x1122 is laid out in memory for each combination of byte order and offset.

| Byte Position | Big-Endian Offset=0 | Big-Endian Offset=16 | Little-Endian Offset=0 | Little-Endian Offset=16 |
|---|---|---|---|---|
| 0: | [pad] | [0x11] | [0x22] | [pad] |
| 1: | [pad] | [0x22] | [0x11] | [pad] |
| 2: | [0x11] | [pad] | [pad] | [0x22] |
| 3: | [0x22] | [pad] | [pad] | [0x11] |
The bits of a data element are numbered beginning at zero at the least significant bit of the least significant byte (the byte at the lowest memory address for a little-endian type, or the byte at the highest address for a big-endian type). The offset property defines the bit location of the least significant bit of a bit field whose length is the precision. If the offset is increased so that significant bits of the value would “hang over” the edge of the datum, then the size property is automatically incremented to prevent significant bits from being lost.
To illustrate the properties of the integer datatype class, the example below shows how to create a user-defined datatype that describes a 24-bit signed integer that starts on the third bit of a 32-bit word. The datatype is specialized from a 32-bit integer, the precision is set to 24 bits, and the offset is set to 3.
hid_t dt = H5Tcopy(H5T_NATIVE_INT);
H5Tset_precision(dt, 24);
H5Tset_offset(dt, 3);
H5Tset_pad(dt, H5T_PAD_ZERO, H5T_PAD_ONE);
The figure below shows the storage layout for a data element. Note that the unused bits in the offset will be set to zero and the unused bits at the end will be set to one, as specified in the H5Tset_pad call.
To illustrate a user-defined floating point number, the example below shows how to create a 24-bit floating point number that starts 5 bits into a 4 byte word. The floating point number is defined to have a mantissa of 19 bits (bits 5-23), an exponent of 3 bits (25-27), and the sign bit is bit 28. (Note that this is an illustration of what can be done and is not necessarily a floating point format that a user would require.)
hid_t dt = H5Tcopy(H5T_IEEE_F32LE);
H5Tset_precision(dt, 24);
H5Tset_offset(dt, 5);
H5Tset_fields(dt, 28, 25, 3, 5, 19);
H5Tset_inpad(dt, H5T_PAD_ZERO);
The figure above shows the storage layout of a data element for this datatype. Note that there is an unused bit (24) between the mantissa and the exponent. This bit is filled with the inpad value which in this case is 0.
The sign bit is always of length one and none of the fields are allowed to overlap. When expanding a floating-point type one should set the precision first; when decreasing the size one should set the field positions and sizes first.
6.5.2.2. Composite Datatypes
All composite datatypes must be user-defined; there are no predefined composite datatypes.
6.5.2.2.1. Compound Datatypes
The subsections below describe how to create a compound datatype and how to write and read data of a compound datatype.
Defining Compound Datatypes
Compound datatypes are conceptually similar to a C struct or a Fortran derived type. A compound datatype defines a contiguous sequence of bytes formatted using from one up to 2^16 member datatypes. A compound datatype may have any number of members, in any order, and the members may have any datatype, including compound; thus, complex nested compound datatypes can be created. The total size of the compound datatype is greater than or equal to the sum of the sizes of its members, up to a maximum of 2^32 bytes. HDF5 does not support datatypes with distinguished records or the equivalent of C unions or Fortran EQUIVALENCE statements.
Usually a C struct or Fortran derived type will be defined to hold a data point in memory, and the offsets of the members in memory will be the offsets of the struct members from the beginning of an instance of the struct. The HDF5 C library provides the macro HOFFSET(s,m) to calculate a member's offset. HDF5 Fortran applications have to calculate offsets from the sizes of the member datatypes, taking into consideration the order of the members in the Fortran derived type.
HOFFSET(s,m)
This macro computes the offset of member m within a struct s.
offsetof(s,m)
This macro, defined in stddef.h, does exactly the same thing as the HOFFSET() macro.
Note for Fortran users: Offsets of Fortran structure members correspond to the offsets within a packed datatype (see explanation below) stored in an HDF5 file.
Each member of a compound datatype must have a descriptive name which is the key used to uniquely identify the member within the compound datatype. A member name in an HDF5 datatype does not necessarily have to be the same as the name of the member in the C struct or Fortran derived type, although this is often the case. Nor does one need to define all members of the C struct or Fortran derived type in the HDF5 compound datatype (or vice versa).
Unlike atomic datatypes, which are derived from other atomic datatypes, compound datatypes are created from scratch. First, one creates an empty compound datatype and specifies its total size. Then members are added to the compound datatype in any order, each inserted at a designated offset.
The example below shows a way of creating an HDF5 C compound datatype to describe a complex number: a structure with two components, “real” and “imaginary”, each a double. The equivalent C struct, complex_t, is also shown.
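A minimal sketch of what such a definition might look like (the struct member names re and im are assumptions for this illustration; error checking is omitted):

```c
typedef struct {
    double re;  /* real part */
    double im;  /* imaginary part */
} complex_t;

hid_t complex_id = H5Tcreate(H5T_COMPOUND, sizeof(complex_t));
H5Tinsert(complex_id, "real",      HOFFSET(complex_t, re), H5T_NATIVE_DOUBLE);
H5Tinsert(complex_id, "imaginary", HOFFSET(complex_t, im), H5T_NATIVE_DOUBLE);
```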
The example below shows a way of creating an HDF5 Fortran compound datatype to describe a complex number: a Fortran derived type with two components, “real” and “imaginary”, each DOUBLE PRECISION. The equivalent Fortran TYPE complex_t is also shown.
Important Note: The compound datatype is created with a size sufficient to hold all its members. In the C example above, the size of the C struct and the HOFFSET macro are used as a convenient mechanism to determine the appropriate size and offset. Alternatively, the size and offset could be manually determined: the size can be set to 16 with “real” at offset 0 and “imaginary” at offset 8. However, different platforms and compilers have different sizes for “double” and may have alignment restrictions which require additional padding within the structure. It is much more portable to use the HOFFSET macro which assures that the values will be correct for any platform.
The figure below shows how the compound datatype would be laid out assuming that NATIVE_DOUBLE are 64-bit numbers and that there are no alignment requirements. The total size of the compound datatype will be 16 bytes, the “real” component will start at byte 0, and “imaginary” will start at byte 8.
The members of a compound datatype may be any HDF5 datatype including the compound, array, and variable-length (VL) types. The figure and example below show the memory layout and code which creates a compound datatype composed of two complex values, and each complex value is also a compound datatype as in the figure above.
Note that a similar result could be accomplished by creating a compound datatype and inserting four fields. See the figure below. This results in the same layout as the figure above. The difference would be how the fields are addressed. In the first case, the real part of ‘y’ is called ‘y.re’; in the second case it is ‘y-re’.
The members of a compound datatype do not always fill all the bytes. The HOFFSET macro assures that the members will be laid out according to the requirements of the platform and language. The example below shows a C struct which requires extra bytes of padding on many platforms. The second element, ‘b’, is a 1-byte character followed by an 8-byte double, ‘c’. On many systems, the 8-byte value must be stored on a 4- or 8-byte boundary, which requires the struct to be larger than the sum of the sizes of its elements.
In the example below, sizeof and HOFFSET are used to assure that the members are inserted at the correct offset to match the memory conventions of the platform. The figure below shows how this data element would be stored in memory, assuming the double must start on a 4-byte boundary. Notice the extra bytes between ‘b’ and ‘c’.
However, data stored on disk does not require alignment, so unaligned versions of compound data structures can be created to improve space efficiency on disk. These unaligned compound datatypes can be created by computing offsets by hand to eliminate inter-member padding, or the members can be packed by calling H5Tpack (which modifies a datatype directly, so it is usually preceded by a call to H5Tcopy).
The example below shows how to create a disk version of the compound datatype from the figure above in order to store data on disk in as compact a form as possible. Packed compound datatypes should generally not be used to describe memory as they may violate alignment constraints for the architecture being used. Note also that using a packed datatype for disk storage may involve a higher data conversion cost.
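The packing step itself can be sketched as follows (assuming mem_id is the aligned, in-memory compound datatype built earlier; names are illustrative):

```c
hid_t disk_id = H5Tcopy(mem_id);  /* H5Tpack modifies in place, so copy first */
H5Tpack(disk_id);                 /* squeeze out the inter-member padding */
/* use disk_id when creating the dataset; keep mem_id for reads and writes */
```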
The example below shows the sequence of Fortran calls to create a packed compound datatype. An HDF5 Fortran compound datatype never describes a compound datatype in memory and compound data is ALWAYS written by fields as described in the next section. Therefore packing is not needed unless the offset of each consecutive member is not equal to the sum of the sizes of the previous members.
Creating and Writing Datasets with Compound Datatypes
Creating datasets with compound datatypes is similar to creating datasets with any other HDF5 datatypes. But writing and reading may be different since datasets that have compound datatypes can be written or read by a field (member) or subsets of fields (members). The compound datatype is the only composite datatype that supports “sub-setting” by the elements the datatype is built from.
The example below shows a C example of creating and writing a dataset with a compound datatype.
The example below shows the content of the file written on a little-endian machine.
HDF5 “SDScompound.h5” {
   GROUP “/” {
      DATASET “ArrayOfStructures” {
         DATATYPE H5T_COMPOUND {
            H5T_STD_I32LE “a_name”;
            H5T_IEEE_F32LE “b_name”;
            H5T_IEEE_F64LE “c_name”;
         }
         DATASPACE SIMPLE { ( 3 ) / ( 3 ) }
         DATA {
            (0): { 0, 0, 1 },
            (1): { 1, 1, 0.5 },
            (2): { 2, 4, 0.333333 }
         }
      }
   }
}
It is not necessary to write the whole data at once. Datasets with compound datatypes can be written by field or by subsets of fields. In order to do this one has to remember to set the transfer property of the dataset using the H5Pset_preserve call and to define the memory datatype that corresponds to a field. The example below shows how float and double fields are written to the dataset.
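A sketch of writing just the float field (“b_name”, the name used in the h5dump output above) might look like this; the double field is analogous, and dataset and fbuf are assumed to be an open dataset and a float buffer:

```c
/* Memory datatype containing only the field to be written. */
hid_t b_tid = H5Tcreate(H5T_COMPOUND, sizeof(float));
H5Tinsert(b_tid, "b_name", 0, H5T_NATIVE_FLOAT);

hid_t plist = H5Pcreate(H5P_DATASET_XFER);
H5Pset_preserve(plist, 1);  /* leave the other fields of each element intact */

H5Dwrite(dataset, b_tid, H5S_ALL, H5S_ALL, plist, fbuf);
```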
The figure below shows the content of the file written on a little-endian machine. Only float and double fields are written. The default fill value is used to initialize the unwritten integer field.
HDF5 “SDScompound.h5” {
   GROUP “/” {
      DATASET “ArrayOfStructures” {
         DATATYPE H5T_COMPOUND {
            H5T_STD_I32LE “a_name”;
            H5T_IEEE_F32LE “b_name”;
            H5T_IEEE_F64LE “c_name”;
         }
         DATASPACE SIMPLE { ( 3 ) / ( 3 ) }
         DATA {
            (0): { 0, 0, 1 },
            (1): { 0, 1, 0.5 },
            (2): { 0, 4, 0.333333 }
         }
      }
   }
}
The example below contains a Fortran example that creates and writes a dataset with a compound datatype. As this example illustrates, writing and reading compound datatypes in Fortran is always done by fields. The content of the written file is the same as shown in the example above.
Reading Datasets with Compound Datatypes
Reading datasets with compound datatypes may be a challenge. For a general application there is no way to know a priori the corresponding C structure, and C structures cannot be allocated on the fly during discovery of the dataset’s datatype. For general C, C++, Fortran, and Java applications, the following steps are required to read and interpret data from a dataset with a compound datatype:
1. Get the identifier of the compound datatype in the file with the H5Dget_type call
2. Find the number of the compound datatype members with the H5Tget_nmembers call
3. Iterate through compound datatype members
• Get member class with the H5Tget_member_class call
• Get member name with the H5Tget_member_name call
• Check class type against predefined classes
• If class is H5T_COMPOUND, then go to step 2 and repeat all steps under step 3. If class is not H5T_COMPOUND, then a member is of an atomic class and can be read to a corresponding buffer after discovering all necessary information specific to each atomic type (for example, size of the integer or floats, super class for enumerated and array datatype, and its sizes)
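The steps above can be sketched as follows (a non-recursive outline; dataset is assumed to be an open dataset identifier, and error checking is omitted):

```c
hid_t dtype = H5Dget_type(dataset);                    /* step 1 */
int nmembers = H5Tget_nmembers(dtype);                 /* step 2 */
for (int i = 0; i < nmembers; i++) {                   /* step 3 */
    char       *name  = H5Tget_member_name(dtype, i);
    H5T_class_t klass = H5Tget_member_class(dtype, i);
    if (klass == H5T_COMPOUND) {
        /* recurse into H5Tget_member_type(dtype, i) and repeat */
    } else {
        /* atomic member: query its size, order, etc., then read it */
    }
    H5free_memory(name);  /* the name string is allocated by the library */
}
```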
The examples below show how to read a dataset with a known compound datatype.
The first example below shows the steps needed to read data of a known structure. First, build a memory datatype the same way it was built when the dataset was created, and then second use the datatype in a H5Dread call.
Instead of building a memory datatype, the application could use the H5Tget_native_type function. See the example below.
The example below shows how to read just one float member of a compound datatype.
The example below shows how to read float and double members of a compound datatype into a structure that has those fields in a different order. Please notice that H5Tinsert calls can be used in an order different from the order of the structure’s members.
6.5.2.2.2. Array
Many scientific datasets have multiple measurements for each point in a space. There are several natural ways to represent this data, depending on the variables and how they are used in computation. See the table and the figure below.
Storage Strategy | Stored as | Remarks
---|---|---
Multiple planes | Several datasets with identical dataspaces | Optimal when variables are accessed individually, or when only selected variables are often used.
Additional dimension | One dataset, the last “dimension” is a vector of variables | Can give good performance, although selecting only a few variables may be slow. May not reflect the science.
Record with multiple values | One dataset with a compound datatype | Enables the variables to be read all together or selected individually. Also handles “vectors” of heterogeneous data.
Vector or Tensor value | One dataset, each data element is a small array of values | Uses the same amount of space as the previous two, and may represent the science model better.
The HDF5 H5T_ARRAY datatype defines the data element to be a homogeneous, multi-dimensional array. See Figure 13d above. The elements of the array can be any HDF5 datatype (including compound and array), and the size of the datatype is the total size of the array. A dataset of array datatype cannot be subdivided for I/O within the data element: the entire array of the data element must be transferred. If the data elements need to be accessed separately, for example, by plane, then the array datatype should not be used. The table below shows advantages and disadvantages of various storage methods.
Method | Advantages | Disadvantages
---|---|---
a) Multiple Datasets | Easy to access each plane; can select any plane(s) | Less efficient to access a ‘column’ through the planes
b) N+1 Dimension | All access patterns supported | Must be a homogeneous datatype; the added dimension may not make sense in the scientific model
c) Compound Datatype | Can be a heterogeneous datatype | Planes must be named; selection is by plane; not a natural representation for a matrix
d) Array | A natural representation for vector or tensor data | Cannot access elements separately (no access by plane)
An array datatype may be multi-dimensional with 1 to H5S_MAX_RANK (the maximum rank of a dataset is currently 32) dimensions. The dimensions can be any size greater than 0, but unlimited dimensions are not supported (although the datatype can be a variable-length datatype).
An array datatype is created with the H5Tarray_create call, which specifies the number of dimensions, the size of each dimension, and the base type of the array. The array datatype can then be used in any way that any datatype object is used. The example below shows the creation of a datatype that is a two-dimensional array of native integers, which is then used to create a dataset. Note that the dataset's dataspace can have any number and size of dimensions. The figure below shows the layout in memory assuming that the native integers are 4 bytes. Each data element has 6 array elements, for a total of 24 bytes.
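The creation step might be sketched as follows (a 2 x 3 layout is assumed here for the six integers; the original example may use different dimensions):

```c
hsize_t adims[2] = {2, 3};  /* assumed 2 x 3 = 6 integers per data element */
hid_t array_tid = H5Tarray_create2(H5T_NATIVE_INT, 2, adims);
/* array_tid can now be passed as the datatype argument of H5Dcreate2 */
```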
6.5.2.2.3. Variable-length Datatypes
A variable-length (VL) datatype is a one-dimensional sequence of elements of a given datatype; the length of the sequence is not fixed from one dataset location to another. In other words, each data element may have a different number of members. Variable-length datatypes cannot be divided: the entire data element must be transferred.
VL datatypes are useful to the scientific community in many different ways, possibly including:
• Ragged arrays: Multi-dimensional ragged arrays can be implemented with the last (fastest changing) dimension being ragged by using a VL datatype as the type of the element stored.
• Fractal arrays: A nested VL datatype can be used to implement ragged arrays of ragged arrays, to whatever nesting depth is required for the user.
• Polygon lists: A common storage requirement is to efficiently store arrays of polygons with different numbers of vertices. A VL datatype can be used to efficiently and succinctly describe an array of polygons with different numbers of vertices.
• Character strings: Perhaps the most common use of VL datatypes will be to store C-like VL character strings in dataset elements or as attributes of objects.
• Indices (for example, of objects within a file): An array of VL object references could be used as an index to all the objects in a file which contain a particular sequence of dataset values.
• Object Tracking: An array of VL dataset region references can be used as a method of tracking objects or features appearing in a sequence of datasets.
A VL datatype is created by calling H5Tvlen_create which specifies the base datatype. The first example below shows an example of code that creates a VL datatype of unsigned integers. Each data element is a one-dimensional array of zero or more members and is stored in the hvl_t structure. See the second example below.
typedef struct {
    size_t len;  /* Length of VL data (in base type units) */
    void  *p;    /* Pointer to VL data */
} hvl_t;
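Creating the VL datatype and populating one element might be sketched as follows (a sketch, not the manual's original listing; the length of 3 is arbitrary):

```c
hid_t vl_tid = H5Tvlen_create(H5T_NATIVE_UINT);   /* VL sequence of unsigned ints */

hvl_t elem;                                        /* one data element */
elem.len = 3;                                      /* this element has 3 members */
elem.p   = malloc(elem.len * sizeof(unsigned int));
```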
The first example below shows how the VL data is written. For each of the 10 data elements, a length and data buffer must be allocated. Below the two examples is a figure that shows how the data is laid out in memory.
An analogous procedure must be used to read the data. See the second example below. An appropriate array of hvl_t must be allocated and the data read into it; the array is then traversed one data element at a time. Because each element may have a different sequence length, the memory for a VL datatype must be dynamically allocated. Currently there are two methods of managing this memory: the standard C malloc/free memory allocation routines, or user-defined memory management routines (set with H5Pset_vlen_mem_manager). Since the memory allocated when reading (or writing) may be complicated to release, the H5Dvlen_reclaim function is provided to traverse a memory buffer and free the VL datatype information without leaking memory.
The user program must carefully manage these relatively complex data structures. The H5Dvlen_reclaim function performs a standard traversal, freeing all the data. This function analyzes the datatype and dataspace objects, and visits each VL data element, recursing through nested types. By default, the system free is called for the pointer in each hvl_t. Obviously, this call assumes that all of this memory was allocated with the system malloc.
The user program may specify custom memory manager routines, one for allocating and one for freeing. These may be set with H5Pset_vlen_mem_manager, and must have the following prototypes:
• typedef void *(*H5MM_allocate_t)(size_t size, void *info);
• typedef void (*H5MM_free_t)(void *mem, void *free_info);
The utility function H5Dget_vlen_buf_size determines the number of bytes required to store the VL data from the dataset. This function analyzes the datatype and dataspace objects and visits all the VL data elements to determine the number of bytes required to store the data in the destination storage (memory). The size value is adjusted for data conversion and alignment in the destination.
6.6. Other Non-numeric Datatypes
Several datatype classes define special types of objects.
Text data is represented by arrays of characters, called strings. Many programming languages support different conventions for storing strings, which may be fixed or variable-length, and may have different rules for padding unused storage. HDF5 can represent strings in several ways. See the figure below.
First, a dataset may have a datatype of H5T_NATIVE_CHAR with each character of the string as an element of the dataset. This stores an unstructured block of text data but gives little indication of any structure in the text. See item a in the figure above.
A second alternative is to store the data using the datatype class H5T_STRING with each element a fixed length. See item b in the figure above. In this approach, each element might be a word or a sentence, addressed by the dataspace. The dataset reserves space for the specified number of characters, although some strings may be shorter. This approach is simple and usually fast to access, but can waste storage space if the length of the strings varies.
A third alternative is to use a variable-length datatype. See item c in the figure above. This can be done using the standard mechanisms described above. The program would use hvl_t structures to write and read the data.
A fourth alternative is to use a special feature of the string datatype class: setting the size of the datatype to H5T_VARIABLE. See item d in the figure above. The example below shows a declaration of a datatype of type H5T_C_S1 whose size is set to H5T_VARIABLE. The HDF5 Library automatically translates between this and the hvl_t structure. Note: the H5T_VARIABLE size can only be used with string datatypes.
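Such a declaration might be sketched as follows (illustration only; error checking omitted):

```c
hid_t str_tid = H5Tcopy(H5T_C_S1);
H5Tset_size(str_tid, H5T_VARIABLE);  /* each element is now a variable-length string */
/* on read and write, the library maps between char* pointers and the stored data */
```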
Variable-length strings can be read into C strings (in other words, pointers to zero terminated arrays of char). See the example below.
In HDF5, objects (groups, datasets, and committed datatypes) are usually accessed by name, but there is another way to access stored objects: by reference. There are two reference datatypes: object reference and region reference. Object references are created with H5Rcreate and related calls. These objects can be stored and retrieved in a dataset as elements with a reference datatype. The first example below shows code that creates references to four objects and then writes the array of object references to a dataset. The second example below shows a dataset of reference datatype being read, and one of the reference objects being dereferenced to obtain an object pointer.
In order to store references to regions of a dataset, the datatype should be H5T_STD_REF_DSETREG. Note that a data element must be either an object reference or a region reference: these are different types and cannot be mixed within a single array.
A reference datatype cannot be divided for I/O: an element is read or written completely.
The enum datatype implements a set of (name, value) pairs, similar to C/C++ enum. The values are currently limited to native integer datatypes. Each name can be the name of only one value, and each value can have only one name.
The data elements of the ENUMERATION are stored according to the datatype. An example would be as an array of integers. The example below shows an example of how to create an enumeration with five elements. The elements map symbolic names to 2-byte integers. See the table below.
Name | Value
---|---
RED | 0
GREEN | 1
BLUE | 2
WHITE | 3
BLACK | 4
The figure below shows how an array of eight values might be stored. Conceptually, the array is an array of symbolic names [BLACK, RED, WHITE, BLUE, ...]. See item a in the figure below. These are stored as the values and are short integers. So, the first 2 bytes are the value associated with “BLACK”, which is the number 4, and so on. See item b in the figure below.
a) Logical data to be written - eight elements
b) The storage layout. Total size of the array is 16 bytes, 2 bytes per element.
The order that members are inserted into an enumeration type is unimportant; the important part is the associations between the symbol names and the values. Thus, two enumeration datatypes will be considered equal if and only if both types have the same symbol/value associations and both have equal underlying integer datatypes. Type equality is tested with the H5Tequal function.
If a particular architecture type is required, a little-endian or big-endian datatype for example, use a native integer datatype as the ENUM base datatype and use H5Tconvert on values as they are read from or written to a dataset.
In some cases, a user may have data objects that should be stored and retrieved as blobs with no attempt to interpret them. For example, an application might wish to store an array of encrypted certificates which are 100 bytes long.
While an arbitrary block of data may always be stored as bytes, characters, integers, or whatever, this might mislead programs about the meaning of the data. The opaque datatype defines data elements which are uninterpreted by HDF5. The opaque data may be labeled with H5Tset_tag with a string that might be used by an application. For example, the encrypted certificates might have a tag to indicate the encryption and the certificate standard.
Some data is represented as bits, where the number of bits is not an integral number of bytes and the bits are not necessarily interpreted as a standard type. Examples include readings from machine registers (for example, switch positions), a cloud mask, or data structures with several small integers that should be stored in a single byte.
This data could be stored as integers, strings, or enumerations. However, these storage methods would likely result in considerable wasted space. For example, storing a cloud mask with one byte per value would use up to eight times the space of a packed array of bits.
The HDF5 bitfield datatype class defines a data element that is a contiguous sequence of bits, which are stored on disk in a packed array. The programming model is the same as for unsigned integers: the datatype object is created by copying a predefined datatype, and then the precision, offset, and padding are set.
While the use of the bitfield datatype will reduce storage space substantially, there will still be wasted space if the bitfield as a whole does not match the 1-, 2-, 4-, or 8-byte unit in which it is written. The remaining unused space can be removed by applying the N-bit filter to the dataset containing the bitfield data. For more information, see "Using the N-bit Filter."
The “fill value” for a dataset is the specification of the default value assigned to data elements that have not yet been written. In the case of a dataset with an atomic datatype, the fill value is a single value of the appropriate datatype, such as ‘0’ or ‘-1.0’. In the case of a dataset with a composite datatype, the fill value is a single data element of the appropriate type. For example, for an array or compound datatype, the fill value is a single data element with values for all the component elements of the array or compound datatype.
The fill value is set (permanently) when the dataset is created. The fill value is set in the dataset creation properties in the H5Dcreate call. Note that the H5Dcreate call must also include the datatype of the dataset, and the value provided for the fill value will be interpreted as a single element of this datatype. The example below shows code which creates a dataset of integers with fill value -1. Any unwritten data elements will be set to -1.
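The creation step might be sketched as follows (file and space are assumed to be an open file and a dataspace; names and the big-endian file type are illustrative):

```c
int fillval = -1;
hid_t dcpl = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_fill_value(dcpl, H5T_NATIVE_INT, &fillval);  /* unwritten elements read as -1 */
hid_t dset = H5Dcreate2(file, "/dset", H5T_STD_I32BE, space,
                        H5P_DEFAULT, dcpl, H5P_DEFAULT);
```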
The example below shows how to create a fill value for a compound datatype. The procedure is the same as in the previous example except that the fill value must be a structure with the correct fields, each field initialized to the desired fill value.
The fill value for a dataset can be retrieved by reading the dataset creation properties of the dataset and then by reading the fill value with H5Pget_fill_value. The data will be read into memory using the storage layout specified by the datatype. This transfer will convert data in the same way as H5Dread. The example below shows how to get the fill value from the dataset created in the example "Create a dataset with a fill value of -1".
A similar procedure is followed for any datatype. The example below shows how to read the fill value for the compound datatype created in an example above. Note that the program must pass an element large enough to hold a fill value of the datatype indicated by the argument to H5Pget_fill_value. Also, the program must understand the datatype in order to interpret its components. This may be difficult to determine without knowledge of the application that created the dataset.
6.8. Complex Combinations of Datatypes
Several composite datatype classes define collections of other datatypes, including other composite datatypes. In general, a datatype can be nested to any depth, with any combination of datatypes.
For example, a compound datatype can have members that are other compound datatypes, arrays, VL datatypes. An array can be an array of array, an array of compound, or an array of VL. And a VL datatype can be a variable-length array of compound, array, or VL datatypes.
These complicated combinations of datatypes form a logical tree, with a single root datatype, and leaves which must be atomic datatypes (predefined or user-defined). The figure below shows an example of a logical tree describing a compound datatype constructed from different datatypes.
Recall that the datatype is a description of the layout of storage. The complicated compound datatype is constructed from component datatypes, each of which describe the layout of part of the storage. Any datatype can be used as a component of a compound datatype, with the following restrictions:
1. No byte can be part of more than one component datatype (in other words, the fields cannot overlap within the compound datatype)
2. The total size of the components must be less than or equal to the total size of the compound datatype
These restrictions are essentially the rules for C structures and similar record types familiar from programming languages. Multiple typing, such as a C union, is not allowed in HDF5 datatypes.
6.8.1. Creating a Complicated Compound Datatype
To construct a complicated compound datatype, each component is constructed, and then added to the enclosing datatype description. The example below shows how to create a compound datatype with four members:
• “T1”, a compound datatype with three members
• “T2”, a compound datatype with two members
• “T3”, a one-dimensional array of integers
• “T4”, a string
Below the example code is a figure that shows this datatype as a logical tree. The output of the h5dump utility is shown in the example below the figure.
Each datatype is created as a separate datatype object. Figure 20 below shows the storage layout for the four individual datatypes. Then the datatypes are inserted into the outer datatype at an appropriate offset. Figure 21 below shows the resulting storage layout. The combined record is 89 bytes long.
The Dataset is created using the combined compound datatype. The dataset is declared to be a 4 by 3 array of compound data. Each data element is an instance of the 89-byte compound datatype. Figure 22 below shows the layout of the dataset, and expands one of the elements to show the relative position of the component data elements.
Each data element is a compound datatype, which can be written or read as a record, or each field may be read or written individually. The first field (“T1”) is itself a compound datatype with three fields (“T1.a”, “T1.b”, and “T1.c”). “T1” can be read or written as a record, or individual fields can be accessed. Similarly, the second field is a compound datatype with two fields (“T2.f1”, “T2.f2”).
The third field (“T3”) is an array datatype. Thus, “T3” should be accessed as an array of 40 integers. Array data can only be read or written as a single element, so all 40 integers must be read or written to the third field. The fourth field (“T4”) is a single string of length 25.
a) Compound type ‘s1_t’, size 16 bytes.
b) Compound type ‘s2_t’, size 8 bytes.
c) Array type ‘s3_tid’, 40 integers, total size 40 bytes.
d) String type ‘s4_tid’, size 25 bytes.
a) A 4 x 3 array of the compound datatype
b) Element [1,1] expanded
6.8.2. Analyzing and Navigating a Compound Datatype
A complicated compound datatype can be analyzed piece by piece to discover the exact storage layout. In the example above, the outer datatype is analyzed to discover that it is a compound datatype with four members. Each member is analyzed in turn to construct a complete map of the storage layout.
The example below shows an example of code that partially analyzes a nested compound datatype. The name and overall offset and size of the component datatype is discovered, and then its type is analyzed depending on the datatype class. Through this method, the complete storage layout can be discovered.
6.9. Life Cycle of the Datatype Object
Application programs access HDF5 datatypes through identifiers. Identifiers are obtained by creating a new datatype or by copying or opening an existing datatype. The identifier can be used until it is closed or until the library shuts down. See items a and b in the figure below. By default, a datatype is transient, and it disappears when it is closed.
When a dataset or attribute is created (H5Dcreate or H5Acreate), its datatype is stored in the HDF5 file as part of the dataset or attribute object. See item c in the figure below. Once an object is created, its datatype cannot be changed or deleted. The datatype can be accessed by calling H5Dget_type, H5Aget_type, H5Tget_super, or H5Tget_member_type. See item d in the figure below. These calls return an identifier to a transient copy of the datatype of the dataset or attribute unless the datatype is a committed datatype.
Note that when an object is created, the stored datatype is a copy of the transient datatype. If two objects are created with the same datatype, the information is stored in each object with the same effect as if two different datatypes were created and used.
A transient datatype can be stored using H5Tcommit in the HDF5 file as an independent, named object, called a committed datatype. Committed datatypes were formerly known as named datatypes. See item e in the figure below. Subsequently, when a committed datatype is opened with H5Topen (item f), or is obtained with H5Tget_type or similar call (item k), the return is an identifier to a transient copy of the stored datatype. The identifier can be used in the same way as other datatype identifiers except that the committed datatype cannot be modified. When a committed datatype is copied with H5Tcopy, the return is a new, modifiable, transient datatype object (item f).
When an object is created using a committed datatype (H5Dcreate, H5Acreate), the stored datatype is used without copying it to the object. See item j in the figure below. In this case, if multiple objects are created using the same committed datatype, they all share the exact same datatype object. This saves space and makes clear that the datatype is shared. Note that a committed datatype can be shared by objects within the same HDF5 file, but not by objects in other files. For more information on copying committed datatypes to other HDF5 files, see the “Copying Committed Datatypes with H5Ocopy” topic in the “Additional Resources” chapter.
A committed datatype can be deleted from the file by calling H5Ldelete (which replaces H5Gunlink). See item i in the figure below. If one or more objects are still using the datatype, the committed datatype can no longer be opened with H5Topen, but it will not be removed from the file until it is no longer used. H5Dget_type and similar calls will continue to return a transient copy of the datatype.
Transient datatypes are initially modifiable. Note that when a datatype is copied, is written to the file (when an object is created), or is used to create a composite datatype, a copy of the current state of the datatype is used. If the datatype is subsequently modified, the changes have no effect on datasets, attributes, or datatypes that have already been created. See the figure below.
A transient datatype can be made read-only with H5Tlock. Note that the datatype is still transient and is otherwise unchanged. An immutable datatype is read-only and, in addition, cannot be closed except when the entire library is closed. The predefined types such as H5T_NATIVE_INT are immutable transient types.
To create two or more datasets that share a common datatype, first commit the datatype, and then use that datatype to create the datasets. See the example below.
6.10. Data Transfer: Datatype Conversion and Selection
When data is transferred (write or read), the storage layout of the data elements may be different. For example, an integer might be stored on disk in big-endian byte order and read into memory with little-endian byte order. In this case, each data element will be transformed by the HDF5 Library during the data transfer.
The conversion of data elements is controlled by specifying the datatype of the source and specifying the intended datatype of the destination. The storage format on disk is the datatype specified when the dataset is created. The datatype of memory must be specified in the library call.
In order to be convertible, the datatype of the source and destination must have the same datatype class (with the exception of enumeration type). Thus, integers can be converted to other integers, and floats to other floats, but integers cannot (yet) be converted to floats. For each atomic datatype class, the possible conversions are defined. An enumeration datatype can be converted to an integer or a floating-point number datatype.
Basically, any datatype can be converted to another datatype of the same datatype class. The HDF5 Library automatically converts all properties. If the destination is too small to hold the source value, an overflow or underflow exception occurs. If a handler has been registered with the H5Pset_type_conv_cb function, it is called; otherwise, a default action is performed. The table below summarizes the default actions.
Datatype Class | Possible Exceptions | Default Action
---|---|---
Integer | Size, offset, pad |
Float | Size, offset, pad, ebits |
String | Size | Truncates; zero terminates if required.
Enumeration | No field | All bits set
For example, when reading data from a dataset, the source datatype is the datatype set when the dataset was created, and the destination datatype describes the storage layout in memory. The destination datatype must be specified in the H5Dread call. The example below reads a dataset of 32-bit integers; the figure below the example shows the data transformation that is performed.
[Figure: each data element is transformed during the transfer from the source datatype (32-bit big-endian integer, H5T_STD_I32BE) to the destination datatype (32-bit little-endian integer, H5T_STD_I32LE).]
One thing to note in the example above is the use of the predefined native datatype H5T_NATIVE_INT. Recall that in this example the data was stored as 4-byte integers in big-endian order. The application wants to read this data into an array of integers in memory. Depending on the system, the storage layout of memory might be either big- or little-endian, so the data may need to be transformed on some platforms and not on others. The H5T_NATIVE_INT type is defined by the HDF5 Library to be the correct type to describe the storage layout of an integer in memory on the system. Thus, the code in the example above will work correctly on any platform, performing a transformation when needed.
There are predefined native types for most atomic datatypes, and these can be combined in composite datatypes. In general, the predefined native datatypes should always be used for data stored in memory.
Note: Predefined native datatypes describe the storage properties of data in memory.
For composite datatypes, the component atomic datatypes will be converted. For a variable-length datatype, the source and destination must have compatible base datatypes. For a fixed-size string datatype, the length and padding of the strings will be converted. Variable-length strings are converted as variable-length datatypes.
For an array datatype, the source and destination must have the same rank and dimensions, and the base datatypes must be compatible. For example, an array datatype of 4 x 3 32-bit big-endian integers can be transferred to an array datatype of 4 x 3 32-bit little-endian integers, but not to a 3 x 4 array.
For an enumeration datatype, data elements are converted by matching the symbol names of the source and destination datatype. The figure below shows an example of how two enumerations with the same names and different values would be converted. The value ‘2’ in the source dataset would be converted to ‘0x0004’ in the destination.
If the source data stream contains values which are not in the domain of the conversion map then an overflow exception is raised within the library.
The library also allows conversion from an enumeration to a numeric datatype, that is, an integer or a floating-point number. This conversion can simplify the application program because the base type of an enumeration datatype is an integer datatype. The application can read data from a dataset of enumeration datatype in the file into a memory buffer of a numeric datatype, and it can likewise write enumeration data from memory into a dataset of a numeric datatype in the file.
For compound datatypes, each field of the source and destination datatype is converted according to its type. The name of the fields must be the same in the source and the destination in order for the data to be converted.
The example below shows sample code to create a compound datatype with the fields aligned on word boundaries (s1_tid) and with the fields packed (s2_tid). The former is suitable as a description of the storage layout in memory; the latter gives a more compact layout on disk. These types can be used together for transferring data, with s2_tid used to create the dataset and s1_tid used as the memory datatype.
When the data is transferred, the fields within each data element will be aligned according to the datatype specification. The figure below shows how one data element would be aligned in memory and on disk. Note that the size and byte order of the elements might also be converted during the transfer.
It is also possible to transfer some of the fields of compound datatypes. Based on the example above, the example below shows a compound datatype that selects the first and third fields of the s1_tid. The second datatype can be used as the memory datatype, in which case data is read from or written to these two fields, while skipping the middle field. The second figure below shows the layout for two data elements.
6.11. Text Descriptions of Datatypes: Conversion to and from
HDF5 provides a means for generating a portable and human-readable text description of a datatype and for generating a datatype from such a text description. This capability is particularly useful for creating complex datatypes in a single step, for creating a text description of a datatype for debugging purposes, and for creating a portable datatype definition that can then be used to recreate the datatype on many platforms or in other applications.
These tasks are handled by two functions provided in the HDF5 Lite high-level library:
• H5LTtext_to_dtype Creates an HDF5 datatype in a single step.
• H5LTdtype_to_text Translates an HDF5 datatype into a text description.
Note that this functionality requires that the HDF5 High-Level Library (H5LT) be installed.
While H5LTtext_to_dtype can be used to generate any sort of datatype, it is particularly useful for complex datatypes.
H5LTdtype_to_text is most likely to be used in two sorts of situations: when a datatype must be closely examined for debugging purpose or to create a portable text description of the datatype that can then be used to recreate the datatype on other platforms or in other applications.
These two functions work for all valid HDF5 datatypes except time, bitfield, and reference datatypes.
The currently supported text format used by H5LTtext_to_dtype and H5LTdtype_to_text is the data description language (DDL) and conforms to the HDF5 DDL. The portion of the HDF5 DDL that defines HDF5 datatypes appears below.
<datatype> ::= <atomic_type> | <compound_type> | <array_type> | <variable_length_type>

<atomic_type> ::= <integer> | <float> | <time> | <string> | <bitfield> | <opaque> | <reference> | <enum>

<integer> ::= H5T_STD_I8BE | H5T_STD_I8LE | H5T_STD_I16BE | H5T_STD_I16LE | H5T_STD_I32BE | H5T_STD_I32LE | H5T_STD_I64BE | H5T_STD_I64LE | H5T_STD_U8BE | H5T_STD_U8LE | H5T_STD_U16BE | H5T_STD_U16LE | H5T_STD_U32BE | H5T_STD_U32LE | H5T_STD_U64BE | H5T_STD_U64LE | H5T_NATIVE_CHAR | H5T_NATIVE_UCHAR | H5T_NATIVE_SHORT | H5T_NATIVE_USHORT | H5T_NATIVE_INT | H5T_NATIVE_UINT | H5T_NATIVE_LONG | H5T_NATIVE_ULONG | H5T_NATIVE_LLONG | H5T_NATIVE_ULLONG

<float> ::= H5T_IEEE_F32BE | H5T_IEEE_F32LE | H5T_IEEE_F64BE | H5T_IEEE_F64LE | H5T_NATIVE_FLOAT | H5T_NATIVE_DOUBLE | H5T_NATIVE_LDOUBLE

<time> ::= TBD

<string> ::= H5T_STRING { STRSIZE <strsize> ; STRPAD <strpad> ; CSET <cset> ; CTYPE <ctype> ; }
<strsize> ::= <int_value> | H5T_VARIABLE
<strpad> ::= H5T_STR_NULLTERM | H5T_STR_NULLPAD | H5T_STR_SPACEPAD
<cset> ::= H5T_CSET_ASCII | H5T_CSET_UTF8
<ctype> ::= H5T_C_S1 | H5T_FORTRAN_S1

<bitfield> ::= TBD

<opaque> ::= H5T_OPAQUE { OPQ_SIZE <opq_size>; OPQ_TAG <opq_tag>; }
<opq_size> ::= <int_value>
<opq_tag> ::= "<string>"

<reference> ::= Not supported

<compound_type> ::= H5T_COMPOUND { <member_type_def>+ }
<member_type_def> ::= <datatype> <field_name> <offset>opt ;
<field_name> ::= "<identifier>"
<offset> ::= : <int_value>

<variable_length_type> ::= H5T_VLEN { <datatype> }

<array_type> ::= H5T_ARRAY { <dim_sizes> <datatype> }
<dim_sizes> ::= [<dimsize>] | [<dimsize>] <dim_sizes>
<dimsize> ::= <int_value>

<enum> ::= H5T_ENUM { <enum_base_type>; <enum_def>+ }
<enum_base_type> ::= <integer>
// Currently enums can only hold integer-type data, but they may be
// expanded in the future to hold any datatype
<enum_def> ::= <enum_symbol> <enum_val>;
<enum_symbol> ::= "<identifier>"
<enum_val> ::= <int_value>
The definitions of the opaque and compound datatypes above were revised for HDF5 Release 1.8. In Release 1.6.5 and earlier, they were defined as follows:
Examples
The code sample below illustrates the use of H5LTtext_to_dtype to generate a variable-length string datatype.
The code sample below illustrates the use of H5LTtext_to_dtype to generate a complex array datatype.