Package hdf.object.h5

Class H5CompoundDS

All Implemented Interfaces:
CompoundDataFormat, DataFormat, MetaDataContainer, Serializable

public class H5CompoundDS extends CompoundDS implements MetaDataContainer
The H5CompoundDS class defines an HDF5 dataset of compound datatypes. An HDF5 dataset is an object composed of a collection of data elements, or raw data, and metadata that stores a description of the data elements, data layout, and all other information necessary to write, read, and interpret the stored data. A HDF5 compound datatype is similar to a struct in C or a common block in Fortran: it is a collection of one or more atomic types or small arrays of such types. Each member of a compound type has a name which is unique within that type, and a byte offset that determines the first byte (smallest byte address) of that member in a compound datum. For more information on HDF5 datasets and datatypes, read the HDF5 User's Guide. There are two basic types of compound datasets: simple compound data and nested compound data. Members of a simple compound dataset have atomic datatypes. Members of a nested compound dataset are compound or array of compound data. Since Java does not understand C structures, we cannot directly read/write compound data values as in the following C example.
 typedef struct s1_t {
     int    a;
     float  b;
     double c;
 } s1_t;

 s1_t s1[LENGTH];
 ...
 H5Dwrite(..., s1);
 H5Dread(..., s1);
 
Values of compound data fields are stored in a java.util.Vector object. We read and write compound data by fields instead of by compound structure. For the example above, the java.util.Vector object has three elements: int[LENGTH], float[LENGTH], and double[LENGTH]. Since Java understands the primitive datatypes int, float and double, we are able to read/write the compound data by field.
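
The sketch below is the Java counterpart of the C example above. It assumes the file "example.h5" contains a compound dataset "/s1" whose members match the s1_t struct; the file and dataset names are illustrative only.
 H5File file = new H5File("example.h5", FileFormat.READ);
 file.open();

 H5CompoundDS s1 = (H5CompoundDS) file.get("/s1");
 s1.init();

 // getData() returns a java.util.List with one array per member,
 // e.g. {int[LENGTH], float[LENGTH], double[LENGTH]} for s1_t above
 List data = (List) s1.getData();
 int[]    a = (int[]) data.get(0);
 float[]  b = (float[]) data.get(1);
 double[] c = (double[]) data.get(2);

 file.close();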
Version:
1.1 9/4/2007
Author:
Peter X. Cao
  • Constructor Details

    • H5CompoundDS

      public H5CompoundDS(FileFormat theFile, String theName, String thePath)
      Constructs an instance of an HDF5 compound dataset with the given file, dataset name and path. The dataset object represents an existing dataset in the file. For example, new H5CompoundDS(file, "dset1", "/g0/") constructs a dataset object that corresponds to the dataset "dset1" at group "/g0/". This object is usually constructed at FileFormat.open(), which loads the file structure and object information into memory. It is rarely used elsewhere.
      Parameters:
      theFile - the file that contains the data object.
      theName - the name of the data object, e.g. "dset".
      thePath - the full path of the data object, e.g. "/arrays/".
    • H5CompoundDS

      @Deprecated public H5CompoundDS(FileFormat theFile, String theName, String thePath, long[] oid)
      Deprecated.
      Not for public use in the future.
      Use H5CompoundDS(FileFormat, String, String) instead.
      Parameters:
      theFile - the file that contains the data object.
      theName - the name of the data object, e.g. "dset".
      thePath - the full path of the data object, e.g. "/arrays/".
      oid - the oid of the data object.
  • Method Details

    • open

      public long open()
      Description copied from class: HObject
      Opens an existing object such as a dataset or group for access. The return value is an object identifier obtained by implementing classes such as H5.H5Dopen(). This function is needed to allow other objects to access the object. For instance, the H5File class uses the open() function to obtain an object identifier for copyAttributes(long src_id, long dst_id) and other purposes. The open() function should be used in pair with the close(long) function.
      Specified by:
      open in class HObject
      Returns:
      the object identifier if successful; otherwise returns a negative value.
    • close

      public void close(long did)
      Description copied from class: HObject
      Closes access to the object. Subclasses must implement this method because different data objects close their data resources in their own way. For example, H5Group.close() calls the hdf.hdf5lib.H5.H5Gclose() method and closes the group resource specified by the group id.
      Specified by:
      close in class HObject
      Parameters:
      did - The object identifier.
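
      A minimal sketch of the open()/close(long) pairing, assuming dset is an H5CompoundDS that has already been retrieved from a file:
 long did = dset.open(); // low-level HDF5 dataset identifier
 if (did >= 0) {
     try {
         // ... pass 'did' to hdf.hdf5lib.H5 calls as needed ...
     }
     finally {
         dset.close(did); // always release the identifier obtained by open()
     }
 }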
    • init

      public void init()
      Retrieves datatype and dataspace information from the file and sets up the dataset in memory. init() is designed to support lazy operation in a dataset object: when a data object is retrieved from a file, its datatype, dataspace and raw data are not loaded into memory. When the raw data is first read from the file, init() is called to get the datatype and dataspace information before the raw data is loaded. init() is also used to reset the selection of a dataset (start, stride and count) to the default, which is the entire dataset for 1D or 2D datasets. In the following example, init() at step 1) retrieves datatype and dataspace information from the file, getData() at step 3) reads only one data point, init() at step 4) resets the selection to the whole dataset, and getData() at step 6) reads the values of the whole dataset into memory.
       dset = (Dataset) file.get(NAME_DATASET);
      
       // 1) get datatype and dataspace information from file
       dset.init();
       rank = dset.getRank(); // rank = 2, a 2D dataset
       count = dset.getSelectedDims();
       start = dset.getStartDims();
       dims = dset.getDims();
      
       // 2) select only one data point
       for (int i = 0; i < rank; i++) {
     start[i] = 0;
           count[i] = 1;
       }
      
       // 3) read one data point
       data = dset.getData();
      
       // 4) reset selection to the whole dataset
       dset.init();
      
       // 5) clean the memory data buffer
       dset.clearData();
      
       // 6) Read the whole dataset
       data = dset.getData();
       
      Specified by:
      init in interface DataFormat
    • getToken

      public long[] getToken()
      Get the token for this object.
      Returns:
      the token of this object.
    • hasAttribute

      public boolean hasAttribute()
      Check if the object has any attributes attached.
      Specified by:
      hasAttribute in interface MetaDataContainer
      Returns:
      true if it has any attributes, false otherwise.
    • getDatatype

      public Datatype getDatatype()
      Returns the datatype of the data object.
      Specified by:
      getDatatype in interface DataFormat
      Overrides:
      getDatatype in class Dataset
      Returns:
      the datatype of the data object.
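
      For example, the compound datatype and its member names can be inspected as sketched below. It assumes dset is an initialized H5CompoundDS; getDescription(), getMemberNames() and getMemberCount() are accessors assumed to be available from Datatype and the CompoundDS superclass.
 dset.init();

 // overall compound datatype of the dataset
 Datatype dtype = dset.getDatatype();
 System.out.println(dtype.getDescription()); // getDescription() assumed

 // member information inherited from CompoundDS (assumed accessors)
 String[] names = dset.getMemberNames();
 for (int i = 0; i < dset.getMemberCount(); i++)
     System.out.println("member " + i + ": " + names[i]);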
    • clear

      public void clear()
      Removes all of the elements from the metadata list. The list should be empty after this call returns.
      Specified by:
      clear in interface MetaDataContainer
      Overrides:
      clear in class Dataset
    • readBytes

      public byte[] readBytes() throws hdf.hdf5lib.exceptions.HDF5Exception
      Description copied from class: Dataset
      Reads the raw data of the dataset from the file into a byte array. readBytes() reads raw data into an array of bytes instead of an array of its datatype. For example, for a one-dimensional 32-bit integer dataset of size 5, readBytes() returns a byte array of size 20 instead of an int array of size 5. readBytes() can be used to copy data from one dataset to another efficiently, because the raw data is not converted to its native type; this saves memory space and CPU time.
      Specified by:
      readBytes in class Dataset
      Returns:
      the byte array of the raw data.
      Throws:
      hdf.hdf5lib.exceptions.HDF5Exception
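
      A hedged sketch of using readBytes() to obtain the unconverted bytes of the current selection; dset is assumed to be an initialized H5CompoundDS:
 dset.init();

 // raw bytes of the current selection, not converted to Java types
 byte[] raw = dset.readBytes();

 // for a compound dataset, the expected length is
 // (number of selected elements) x (size in bytes of one compound element)
 long nelems = 1;
 long[] selected = dset.getSelectedDims();
 for (int i = 0; i < dset.getRank(); i++)
     nelems *= selected[i];
 System.out.println(raw.length + " bytes for " + nelems + " selected elements");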
    • read

      public Object read() throws Exception
      Reads the data from file. read() reads the data from the file to a memory buffer and returns the memory buffer. The dataset object does not hold the memory buffer; to store the memory buffer in the dataset object, one must call getData(). By default, the whole dataset is read into memory. Users can also select a subset to read; subsetting is done in an implicit way.
      How to select a subset: a selection is specified by three arrays, start, stride and count.
      1. start: offset of a selection
      2. stride: determines how many elements to move in each dimension
      3. count: number of elements to select in each dimension
      getStartDims(), getStride() and getSelectedDims() return the start, stride and count arrays, respectively. Applications can make a selection by changing the values of these arrays. The following example shows how to make a subset. In the example, the dataset is a 4-dimensional array of [200][100][50][10], i.e. dims[0]=200; dims[1]=100; dims[2]=50; dims[3]=10.
      We want to select every other data point in dims[1] and dims[2]:
       int rank = dataset.getRank(); // number of dimensions of the dataset
       long[] dims = dataset.getDims(); // the dimension sizes of the dataset
       long[] selected = dataset.getSelectedDims(); // the selected size of the
                                                    // dataset
       long[] start = dataset.getStartDims(); // the offset of the selection
       long[] stride = dataset.getStride(); // the stride of the dataset
       int[] selectedIndex = dataset.getSelectedIndex(); // the selected
                                                         // dimensions for
                                                         // display
      
       // select dim1 and dim2 as 2D data for display, and slice through dim0
 selectedIndex[0] = 1;
 selectedIndex[1] = 2;
 selectedIndex[2] = 0;
      
       // reset the selection arrays
       for (int i = 0; i < rank; i++) {
           start[i] = 0;
           selected[i] = 1;
           stride[i] = 1;
       }
      
       // set stride to 2 on dim1 and dim2 so that every other data point is
       // selected.
       stride[1] = 2;
       stride[2] = 2;
      
       // set the selection size of dim1 and dim2
       selected[1] = dims[1] / stride[1];
 selected[2] = dims[2] / stride[2];
      
 // when dataset.getData() is called, the selection above will be used, since
 // the dimension arrays are passed by reference. Changes to these arrays
 // outside the dataset object directly change the values of the arrays
 // in the dataset object.
       
      For CompoundDS, the memory data object is a java.util.List. Each element of the list is a data array that corresponds to a compound field. For example, if the compound dataset "comp" has the following nested structure and member datatypes:
       comp --> m01 (int)
       comp --> m02 (float)
       comp --> nest1 --> m11 (char)
       comp --> nest1 --> m12 (String)
       comp --> nest1 --> nest2 --> m21 (long)
       comp --> nest1 --> nest2 --> m22 (double)
       
      getData() returns a list of six arrays: {int[], float[], char[], String[], long[] and double[]}.
      Specified by:
      read in interface DataFormat
      Returns:
      the data read from file.
      Throws:
      Exception - if object can not be read
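
      In addition to the hyperslab selection described above, a compound dataset can also be restricted to a subset of its member fields. A minimal sketch, assuming the member-selection helpers setAllMemberSelection(boolean) and selectMember(int) inherited from CompoundDS, and an H5CompoundDS variable dset:
 dset.init();

 // read only the first member field of the compound dataset
 dset.setAllMemberSelection(false);
 dset.selectMember(0);

 // the returned list holds one array per selected member
 List data = (List) dset.read();
 Object firstField = data.get(0);

 // reset the selection back to the default (see init())
 dset.init();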
    • write

      public void write(Object buf) throws Exception
      Writes the given data buffer into this dataset in a file. The data buffer is a vector that contains the data values of compound fields. The data is written into file field by field.
      Specified by:
      write in interface DataFormat
      Parameters:
      buf - The vector that contains the data values of compound fields.
      Throws:
      Exception - If there is an error at the HDF5 library level.
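
      A hedged read-modify-write sketch, assuming dset is an initialized H5CompoundDS whose first member is a 32-bit integer field:
 // read all fields into memory
 List buf = (List) dset.getData();

 // modify the first member field (assumed to be int[])
 int[] firstField = (int[]) buf.get(0);
 for (int i = 0; i < firstField.length; i++)
     firstField[i] += 1;

 // write the buffer back; the data is written field by field
 dset.write(buf);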
    • convertByteMember

      protected Object convertByteMember(Datatype dtype, byte[] byteData)
      Description copied from class: CompoundDS
      Routine to convert member data that is read in as a byte array into its regular type.
      Overrides:
      convertByteMember in class CompoundDS
      Parameters:
      dtype - the datatype to convert to
      byteData - the bytes to convert
      Returns:
      the converted object
    • convertFromUnsignedC

      public Object convertFromUnsignedC()
      Converts the data values of this data object to appropriate Java integers if they are unsigned integers.
      Specified by:
      convertFromUnsignedC in interface DataFormat
      Returns:
      the converted data buffer.
    • convertToUnsignedC

      public Object convertToUnsignedC()
      Converts Java integer data values of this data object back to unsigned C-type integer data if they are unsigned integers.
      Specified by:
      convertToUnsignedC in interface DataFormat
      Returns:
      the converted data buffer.
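
      A minimal sketch of the conversion pair, assuming dset holds data that has already been read from the file and contains unsigned C integers:
 // widen unsigned C integers read from the file into Java integers
 Object javaBuf = dset.convertFromUnsignedC();

 // ... work with javaBuf in Java ...

 // convert the Java integers back to the unsigned C form before writing
 Object cBuf = dset.convertToUnsignedC();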
    • getMetadata

      public List<Attribute> getMetadata() throws hdf.hdf5lib.exceptions.HDF5Exception
      Retrieves the object's metadata, such as attributes, from the file. Metadata, such as attributes, is stored in a List.
      Specified by:
      getMetadata in interface MetaDataContainer
      Returns:
      the list of metadata objects.
      Throws:
      hdf.hdf5lib.exceptions.HDF5Exception - if the metadata can not be retrieved
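
      For example, the attributes attached to a dataset can be listed as sketched below (dset is an assumed H5CompoundDS instance):
 if (dset.hasAttribute()) {
     List<Attribute> attrs = dset.getMetadata();
     for (Attribute attr : attrs)
         System.out.println(attr); // relies only on toString()
 }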
    • getMetadata

      public List<Attribute> getMetadata(int... attrPropList) throws hdf.hdf5lib.exceptions.HDF5Exception
      Retrieves the object's metadata, such as attributes, from the file. Metadata, such as attributes, is stored in a List.
      Parameters:
      attrPropList - the list of properties to get
      Returns:
      the list of metadata objects.
      Throws:
      hdf.hdf5lib.exceptions.HDF5Exception - if the metadata can not be retrieved
    • writeMetadata

      public void writeMetadata(Object info) throws Exception
      Writes a specific piece of metadata (such as an attribute) into the file. If the attribute already exists in the file, this method updates its value; if it does not exist, the attribute is created in the file and attached to the object. Note that writing a newly constructed attribute will fail if an attribute with the same name is already attached to the object. To update the value of an existing attribute in the file, get the instance of the attribute with getMetadata(), change its value, then pass it to writeMetadata().
      Specified by:
      writeMetadata in interface MetaDataContainer
      Parameters:
      info - the metadata to write.
      Throws:
      Exception - if the metadata can not be written
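
      A hedged sketch of the update path described above; setValue(Object) is assumed to be available on the Attribute class in use:
 // fetch an existing attribute, change its value, and write it back
 List<Attribute> attrs = dset.getMetadata();
 if (!attrs.isEmpty()) {
     Attribute attr = attrs.get(0);
     attr.setValue(new int[] { 42 }); // assumed Attribute API; adjust as needed
     dset.writeMetadata(attr);
 }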
    • removeMetadata

      public void removeMetadata(Object info) throws hdf.hdf5lib.exceptions.HDF5Exception
      Deletes an existing piece of metadata from this object.
      Specified by:
      removeMetadata in interface MetaDataContainer
      Parameters:
      info - the metadata to delete.
      Throws:
      hdf.hdf5lib.exceptions.HDF5Exception - if the metadata can not be removed
    • updateMetadata

      public void updateMetadata(Object info) throws hdf.hdf5lib.exceptions.HDF5Exception
      Updates an existing piece of metadata attached to this object.
      Specified by:
      updateMetadata in interface MetaDataContainer
      Parameters:
      info - the metadata to update.
      Throws:
      hdf.hdf5lib.exceptions.HDF5Exception - if the metadata can not be updated
    • setName

      public void setName(String newName) throws Exception
      Description copied from class: HObject
      Sets the name of the object. setName(String newName) changes the name of the object in the file.
      Overrides:
      setName in class HObject
      Parameters:
      newName - The new name of the object.
      Throws:
      Exception - if name is root or contains separator
    • create

      @Deprecated public static Dataset create(String name, Group pgroup, long[] dims, String[] memberNames, Datatype[] memberDatatypes, int[] memberSizes, Object data) throws Exception
      Deprecated.
      Not for public use in the future.
      Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead.
      Parameters:
      name - the name of the dataset to create.
      pgroup - parent group where the new dataset is created.
      dims - the dimension size of the dataset.
      memberNames - the names of compound datatype
      memberDatatypes - the datatypes of the compound datatype
      memberSizes - the dim sizes of the members
      data - list of data arrays written to the new dataset, null if no data is written to the new dataset.
      Returns:
      the new compound dataset if successful; otherwise returns null.
      Throws:
      Exception - if there is a failure.
    • create

      @Deprecated public static Dataset create(String name, Group pgroup, long[] dims, String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, Object data) throws Exception
      Deprecated.
      Not for public use in the future.
      Use create(String, Group, long[], long[], long[], int, String[], Datatype[], int[], long[][], Object) instead.
      Parameters:
      name - the name of the dataset to create.
      pgroup - parent group where the new dataset is created.
      dims - the dimension size of the dataset.
      memberNames - the names of compound datatype
      memberDatatypes - the datatypes of the compound datatype
      memberRanks - the ranks of the members
      memberDims - the dim sizes of the members
      data - list of data arrays written to the new dataset, null if no data is written to the new dataset.
      Returns:
      the new compound dataset if successful; otherwise returns null.
      Throws:
      Exception - if the dataset can not be created.
    • create

      public static Dataset create(String name, Group pgroup, long[] dims, long[] maxdims, long[] chunks, int gzip, String[] memberNames, Datatype[] memberDatatypes, int[] memberRanks, long[][] memberDims, Object data) throws Exception
      Creates a simple compound dataset in a file, with or without chunking and compression. This function provides an easy way to create a simple compound dataset in a file by hiding the tedious details of creating a compound dataset from users. It calls H5.H5Dcreate() to create the dataset; nested compound datasets are not supported. The required information to create a compound dataset includes the name, the parent group and dataspace of the dataset, and the names, datatypes and dataspaces of the compound fields. Other information, such as chunks, compression and the data buffer, is optional. The following example shows how to use this function to create a compound dataset in a file.
       H5File file = null;
       String message = "";
       Group pgroup = null;
       int[] DATA_INT = new int[DIM_SIZE];
       float[] DATA_FLOAT = new float[DIM_SIZE];
       String[] DATA_STR = new String[DIM_SIZE];
       long[] DIMs = { 50, 10 };
       long[] CHUNKs = { 25, 5 };
      
       try {
           file = (H5File) H5FILE.open(fname, H5File.CREATE);
           file.open();
           pgroup = (Group) file.get("/");
       }
       catch (Exception ex) {
       }
      
       Vector data = new Vector();
       data.add(0, DATA_INT);
       data.add(1, DATA_FLOAT);
       data.add(2, DATA_STR);
      
 // create the member datatypes and the compound dataset
       Datatype[] mdtypes = new H5Datatype[3];
       String[] mnames = { "int", "float", "string" };
       Dataset dset = null;
       try {
           mdtypes[0] = new H5Datatype(Datatype.CLASS_INTEGER, 4, Datatype.NATIVE, Datatype.NATIVE);
           mdtypes[1] = new H5Datatype(Datatype.CLASS_FLOAT, 4, Datatype.NATIVE, Datatype.NATIVE);
           mdtypes[2] = new H5Datatype(Datatype.CLASS_STRING, STR_LEN, Datatype.NATIVE, Datatype.NATIVE);
           dset = file.createCompoundDS("/CompoundDS", pgroup, DIMs, null, CHUNKs, 9, mnames, mdtypes, null, data);
       }
       catch (Exception ex) {
           failed(message, ex, file);
           return 1;
       }
       
      Parameters:
      name - the name of the dataset to create.
      pgroup - parent group where the new dataset is created.
      dims - the dimension size of the dataset.
      maxdims - the max dimension sizes of the dataset. maxdims is set to dims if maxdims is null.
      chunks - the chunk sizes of the dataset. No chunking if chunks is null.
      gzip - GZIP compression level (1 to 9); 0 or a negative value means no compression.
      memberNames - the names of compound datatype
      memberDatatypes - the datatypes of the compound datatype
      memberRanks - the ranks of the members
      memberDims - the dim sizes of the members
      data - list of data arrays written to the new dataset, null if no data is written to the new dataset.
      Returns:
      the new compound dataset if successful; otherwise returns null.
      Throws:
      Exception - if there is a failure.
    • isString

      public boolean isString(long tid)
      Description copied from class: Dataset
      Checks if a given datatype is a string. Sub-classes must replace this default implementation.
      Overrides:
      isString in class Dataset
      Parameters:
      tid - The data type identifier.
      Returns:
      true if the datatype is a string; otherwise returns false.
    • getSize

      public long getSize(long tid)
      Description copied from class: Dataset
      Returns the size in bytes of a given datatype. Sub-classes must replace this default implementation.
      Overrides:
      getSize in class Dataset
      Parameters:
      tid - The data type identifier.
      Returns:
      The size of the datatype
    • isVirtual

      public boolean isVirtual()
      Description copied from class: Dataset
      Checks if dataset is virtual. Sub-classes must replace this default implementation.
      Overrides:
      isVirtual in class Dataset
      Returns:
      true if the dataset is virtual; otherwise returns false.
    • getVirtualFilename

      public String getVirtualFilename(int index)
      Description copied from class: Dataset
      Gets the source file name at index if dataset is virtual. Sub-classes must replace this default implementation.
      Overrides:
      getVirtualFilename in class Dataset
      Parameters:
      index - index of the source file name if dataset is virtual.
      Returns:
      filename if the dataset is virtual; otherwise returns null.
    • getVirtualMaps

      public int getVirtualMaps()
      Description copied from class: Dataset
      Gets the number of source files if dataset is virtual. Sub-classes must replace this default implementation.
      Overrides:
      getVirtualMaps in class Dataset
      Returns:
      the number of source files if the dataset is virtual; otherwise a negative value.
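
      The three virtual-dataset accessors above can be combined as sketched below to list the source files backing a virtual dataset (dset is an assumed H5CompoundDS):
 dset.init();

 if (dset.isVirtual()) {
     int nmaps = dset.getVirtualMaps();
     for (int i = 0; i < nmaps; i++)
         System.out.println("source file " + i + ": " + dset.getVirtualFilename(i));
 }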
    • toString

      public String toString(String delimiter, int maxItems)
      Description copied from class: Dataset
      Returns a string representation of the data value. For example, "0, 255". For a compound datatype, it will be a 1D array of strings with field members separated by the delimiter; for example, "{0, 10.5}, {255, 20.0}, {512, 30.0}" is a compound dataset of {int, float} with three data points.
      Overrides:
      toString in class Dataset
      Parameters:
      delimiter - The delimiter used to separate individual data points. It can be a comma, semicolon, tab or space. For example, toString(",") will separate data by commas.
      maxItems - The maximum number of Array values to return
      Returns:
      the string representation of the data values.
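
      A minimal usage sketch, assuming dset is an initialized H5CompoundDS whose data has been read into memory:
 dset.init();
 dset.getData(); // load the data buffer before formatting it

 // first 10 data points, comma separated; compound fields appear as "{...}" groups
 System.out.println(dset.toString(",", 10));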