java.lang.Object
    hdf.object.HObject
        hdf.object.Dataset
            hdf.object.ScalarDS
                hdf.object.h5.H5ScalarDS
- All Implemented Interfaces:
DataFormat, MetaDataContainer, Serializable
H5ScalarDS describes a multi-dimensional array of HDF5 scalar or atomic data types, such as byte, int, short, long, float, double and string, and the operations performed on the scalar dataset.
The library predefines a modest number of datatypes. For details, read HDF5 Datatypes in the HDF5 User's Guide.
- Version:
- 1.1 9/4/2007
- Author:
- Peter X. Cao
-
Field Summary
protected boolean isNativeDatatype - flag to indicate if the datatype in the file is the same as the datatype in memory
protected boolean refresh - flag to indicate if the dataset buffers should be refreshed
Fields inherited from class hdf.object.ScalarDS
fillValue, imageDataRange, interlace, INTERLACE_LINE, INTERLACE_PIXEL, INTERLACE_PLANE, isDefaultImageOrder, isFillValueConverted, isImageDisplay, isTrueColor, palette, unsignedConverted
Fields inherited from class hdf.object.Dataset
chunkSize, compression, COMPRESSION_GZIP_TXT, convertByteToString, convertedBuf, data, datatype, dimNames, dims, filters, inited, isDataLoaded, isImage, isNULL, isScalar, isText, maxDims, nPoints, originalBuf, rank, selectedDims, selectedIndex, selectedStride, space_type, startDims, storage, storageLayout
Fields inherited from class hdf.object.HObject
fileFormat, linkTargetObjName, oid, SEPARATOR
-
Constructor Summary
H5ScalarDS(FileFormat theFile, String theName, String thePath) - Constructs an instance of an H5 scalar dataset with a given file, dataset name and path.
H5ScalarDS(FileFormat theFile, String theName, String thePath, long[] oid) - Deprecated.
-
Method Summary
void clear() - Removes all of the elements from the metadata list.
void close(long did) - Closes access to the object.
Dataset copy(Group pgroup, String dstName, long[] dims, Object buff) - Creates a new dataset and writes the data buffer to the new dataset.
static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object data) - Creates a scalar dataset in a file with/without chunking and compression.
static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object fillValue, Object data) - Creates a scalar dataset in a file with/without chunking and compression.
void extend(long[] newDims) - H5Dset_extent verifies that the dataset is at least of the requested size, extending it if necessary.
Datatype getDatatype() - Returns the datatype of the data object.
List getMetadata() - Retrieves the object's metadata, such as attributes, from the file.
List getMetadata(int... attrPropList) - Retrieves the object's metadata, such as attributes, from the file.
int getNumberOfPalettes() - Get the number of palettes for this object.
byte[][] getPalette() - Returns the palette of this scalar dataset or null if the palette does not exist.
String getPaletteName(int idx) - Get the name of a specific image palette from file.
long[] getToken() - Get the token for this object.
String getVirtualFilename(int index) - Gets the source file name at index if the dataset is virtual.
int getVirtualMaps() - Gets the number of source files if the dataset is virtual.
boolean hasAttribute() - Check if the object has any attributes attached.
void init() - Retrieves datatype and dataspace information from file and sets the dataset in memory.
boolean isVirtual() - Checks if the dataset is virtual.
long open() - Opens an existing object such as a dataset or group for access.
Object read() - Reads the data from file.
byte[] readBytes() - Reads the raw data of the dataset from file to a byte array.
int readNumberOfPalettes() - Reads references of palettes to count the number of palettes.
byte[][] readPalette(int idx) - Reads a specific image palette from file.
Object refreshData() - Refreshes the dataset before a re-read of data.
void removeMetadata(Object info) - Deletes an existing piece of metadata from this object.
protected void resetSelection() - Resets selection of dataspace.
void setName(String newName) - Sets the name of the object.
String toString(String delimiter, int maxItems) - Returns a string representation of the data value.
void updateMetadata(Object info) - Updates an existing piece of metadata attached to this object.
void write(Object buf) - Writes the given data buffer into this dataset in a file.
void writeMetadata(Object info) - Writes a specific piece of metadata (such as an attribute) into the file.
Methods inherited from class hdf.object.ScalarDS
addFilteredImageValue, clearData, convertFromUnsignedC, convertToUnsignedC, getFillValue, getFilteredImageValues, getImageDataRange, getInterlace, isDefaultImageOrder, isImage, isImageDisplay, isTrueColor, setImageDataRange, setIsImage, setIsImageDisplay, setPalette
Methods inherited from class hdf.object.Dataset
byteToString, convertFromUnsignedC, convertFromUnsignedC, convertToUnsignedC, convertToUnsignedC, getChunkSize, getCompression, getConvertByteToString, getData, getDepth, getDimNames, getDims, getFilters, getHeight, getMaxDims, getOriginalClass, getRank, getSelectedDims, getSelectedIndex, getSize, getSpaceType, getStartDims, getStorage, getStorageLayout, getStride, getWidth, isInited, isNULL, isScalar, isString, setConvertByteToString, setData, stringToByte, toString, toString, write
Methods inherited from class hdf.object.HObject
createFullname, debug, equals, equals, equalsOID, getFID, getFile, getFileFormat, getFullName, getLinkTargetObjName, getName, getOID, getPath, hashCode, setFullname, setLinkTargetObjName, setPath, toString
-
Field Details
-
refresh
flag to indicate if the dataset buffers should be refreshed. -
isNativeDatatype
flag to indicate if the datatype in the file is the same as the datatype in memory
-
-
Constructor Details
-
H5ScalarDS
Constructs an instance of an H5 scalar dataset with a given file, dataset name and path. For example, in H5ScalarDS(h5file, "dset", "/arrays/"), "dset" is the name of the dataset and "/arrays" is the group path of the dataset.
- Parameters:
theFile - the file that contains the data object.
theName - the name of the data object, e.g. "dset".
thePath - the full path of the data object, e.g. "/arrays/".
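A minimal hedged sketch (not from the original Javadoc) of constructing and reading a dataset; the file name "test.h5" and the dataset path "/arrays/dset" are illustrative assumptions:

    H5File h5file = new H5File("test.h5", FileFormat.READ);
    h5file.open();

    // wrap the existing dataset "/arrays/dset"; no data is read yet
    H5ScalarDS dset = new H5ScalarDS(h5file, "dset", "/arrays/");
    dset.init();                   // load datatype and dataspace information
    Object data = dset.getData();  // read the current selection into memory

    h5file.close();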
-
H5ScalarDS
Deprecated. Not for public use in the future.
Use H5ScalarDS(FileFormat, String, String) instead.
- Parameters:
theFile - the file that contains the data object.
theName - the name of the data object, e.g. "dset".
thePath - the full path of the data object, e.g. "/arrays/".
oid - the oid of the data object.
-
-
Method Details
-
open
Description copied from class: HObject
Opens an existing object such as a dataset or group for access. The return value is an object identifier obtained by implementing classes such as H5.H5Dopen(). This function is needed to allow other objects to access the object. For instance, the H5File class uses the open() function to obtain an object identifier for copyAttributes(long src_id, long dst_id) and other purposes. The open() function should be used in tandem with the close(long) function. -
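A hedged sketch of the open()/close() pairing, assuming dset is an H5ScalarDS instance:

    long did = dset.open();      // obtain an HDF5 dataset identifier
    if (did >= 0) {
        try {
            // ... access the dataset through the identifier ...
        }
        finally {
            dset.close(did);     // always release the identifier
        }
    }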
close
Description copied from class: HObject
Closes access to the object. Sub-classes must implement this interface because different data objects have their own ways of how the data resources are closed. For example, H5Group.close() calls the hdf.hdf5lib.H5.H5Gclose() method and closes the group resource specified by the group id. -
init
Retrieves datatype and dataspace information from file and sets the dataset in memory. The init() is designed to support lazy operation in a dataset object. When a data object is retrieved from file, the datatype, dataspace and raw data are not loaded into memory. When it is asked to read the raw data from file, init() is first called to get the datatype and dataspace information, then the raw data is loaded from file. init() is also used to reset the selection of a dataset (start, stride and count) to the default, which is the entire dataset for 1D or 2D datasets. In the following example, init() at step 1) retrieves datatype and dataspace information from file. getData() at step 3) reads only one data point. init() at step 4) resets the selection to the whole dataset. getData() at step 6) reads the values of the whole dataset into memory.

    dset = (Dataset) file.get(NAME_DATASET);

    // 1) get datatype and dataspace information from file
    dset.init();
    rank = dset.getRank(); // rank = 2, a 2D dataset
    count = dset.getSelectedDims();
    start = dset.getStartDims();
    dims = dset.getDims();

    // 2) select only one data point
    for (int i = 0; i < rank; i++) {
        start[i] = 0;
        count[i] = 1;
    }

    // 3) read one data point
    data = dset.getData();

    // 4) reset selection to the whole dataset
    dset.init();

    // 5) clean the memory data buffer
    dset.clearData();

    // 6) read the whole dataset
    data = dset.getData();
- Specified by:
init
in interface DataFormat
-
getToken
Get the token for this object.
- Returns:
- the token for this object.
-
hasAttribute
Check if the object has any attributes attached.
- Specified by:
hasAttribute in interface MetaDataContainer
- Returns:
- true if it has any attributes, false otherwise.
-
getDatatype
Returns the datatype of the data object.
- Specified by:
getDatatype in interface DataFormat
- Overrides:
getDatatype in class Dataset
- Returns:
- the datatype of the data object.
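For example (a hedged sketch, assuming dset is an initialized H5ScalarDS):

    Datatype dtype = dset.getDatatype();
    // textual description of the type; the exact wording depends on the library version
    System.out.println(dtype.getDescription());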
-
refreshData
Refreshes the dataset before re-read of data.
- Specified by:
refreshData in interface DataFormat
- Overrides:
refreshData in class Dataset
- Returns:
- the updated data
-
clear
Removes all of the elements from the metadata list. The list should be empty after this call returns.
- Specified by:
clear in interface MetaDataContainer
- Overrides:
clear in class Dataset
-
readBytes
Description copied from class: Dataset
Reads the raw data of the dataset from file to a byte array. readBytes() reads raw data to an array of bytes instead of an array of its datatype. For example, for a one-dimensional 32-bit integer dataset of size 5, readBytes() returns a byte array of size 20 instead of an int array of 5. readBytes() can be used to copy data from one dataset to another efficiently because the raw data is not converted to its native type; this saves memory space and CPU time. -
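A hedged sketch (assuming dset is an H5ScalarDS) that only reports the size of the raw buffer:

    dset.init();                    // make sure datatype and dataspace info is loaded
    byte[] raw = dset.readBytes();  // raw bytes of the current selection, no type conversion
    System.out.println("raw selection size = " + raw.length + " bytes");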
read
Reads the data from file. read() reads the data from file to a memory buffer and returns the memory buffer. The dataset object does not hold the memory buffer. To store the memory buffer in the dataset object, one must call getData(). By default, the whole dataset is read into memory. Users can also select a subset to read. Subsetting is done in an implicit way.
How to Select a Subset
A selection is specified by three arrays: start, stride and count.
- start: offset of a selection
- stride: determines how many elements to move in each dimension
- count: number of elements to select in each dimension
We want to select every other data point in dims[1] and dims[2]:

    int rank = dataset.getRank(); // number of dimensions of the dataset
    long[] dims = dataset.getDims(); // the dimension sizes of the dataset
    long[] selected = dataset.getSelectedDims(); // the selected size of the dataset
    long[] start = dataset.getStartDims(); // the offset of the selection
    long[] stride = dataset.getStride(); // the stride of the dataset
    int[] selectedIndex = dataset.getSelectedIndex(); // the selected dimensions for display

    // select dim1 and dim2 as 2D data for display, and slice through dim0
    selectedIndex[0] = 1;
    selectedIndex[1] = 2;
    selectedIndex[2] = 0;

    // reset the selection arrays
    for (int i = 0; i < rank; i++) {
        start[i] = 0;
        selected[i] = 1;
        stride[i] = 1;
    }

    // set stride to 2 on dim1 and dim2 so that every other data point is selected
    stride[1] = 2;
    stride[2] = 2;

    // set the selection size of dim1 and dim2
    selected[1] = dims[1] / stride[1];
    selected[2] = dims[2] / stride[2];

    // when dataset.getData() is called, the selection above will be used since
    // the dimension arrays are passed by reference. Changes of these arrays
    // outside the dataset object directly change the values of these arrays
    // in the dataset object.
For ScalarDS, the memory data buffer is a one-dimensional array of byte, short, int, float, double or String type based on the datatype of the dataset.
- Specified by:
read in interface DataFormat
- Returns:
- the data read from file.
- Throws:
Exception
- if the object can not be read
-
write
Writes the given data buffer into this dataset in a file.
-
getMetadata
Retrieves the object's metadata, such as attributes, from the file. Metadata, such as attributes, is stored in a List.
- Specified by:
getMetadata in interface MetaDataContainer
- Returns:
- the list of metadata objects.
- Throws:
hdf.hdf5lib.exceptions.HDF5Exception
- if the metadata can not be retrieved
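A brief hedged sketch (assuming dset is an H5ScalarDS) that lists the attached attributes:

    List meta = dset.getMetadata();    // the attributes attached to this dataset
    System.out.println("number of attributes: " + meta.size());
    for (Object attr : meta)
        System.out.println(attr);      // relies on the attribute's toString()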
-
getMetadata
Retrieves the object's metadata, such as attributes, from the file. Metadata, such as attributes, is stored in a List.
- Parameters:
attrPropList - the list of properties to get
- Returns:
- the list of metadata objects.
- Throws:
hdf.hdf5lib.exceptions.HDF5Exception
- if the metadata can not be retrieved
-
writeMetadata
Writes a specific piece of metadata (such as an attribute) into the file. If an HDF(4&5) attribute exists in the file, this method updates its value. If the attribute does not exist in the file, it creates the attribute in the file and attaches it to the object. It will fail to write a new attribute to the object where an attribute with the same name already exists. To update the value of an existing attribute in the file, one needs to get the instance of the attribute by getMetadata(), change its values, then use writeMetadata() to write the value.
- Specified by:
writeMetadata in interface MetaDataContainer
- Parameters:
info - the metadata to write.
- Throws:
Exception
- if the metadata can not be written
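A hedged outline of the update flow described above (not from the original Javadoc); how the attribute value is changed depends on the attribute class shipped with the library version, so that step is left as a comment:

    List attrs = dset.getMetadata();   // fetch the existing attributes
    Object attr = attrs.get(0);        // pick the attribute to update
    // ... change the attribute's value here (version-specific attribute API) ...
    dset.writeMetadata(attr);          // persist the updated value to the file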
-
removeMetadata
Deletes an existing piece of metadata from this object.
- Specified by:
removeMetadata in interface MetaDataContainer
- Parameters:
info - the metadata to delete.
- Throws:
hdf.hdf5lib.exceptions.HDF5Exception
- if the metadata can not be removed
-
updateMetadata
Updates an existing piece of metadata attached to this object.
- Specified by:
updateMetadata in interface MetaDataContainer
- Parameters:
info - the metadata to update.
- Throws:
hdf.hdf5lib.exceptions.HDF5Exception
- if the metadata can not be updated
-
setName
Description copied from class: HObject
Sets the name of the object. setName(String newName) changes the name of the object in the file. -
resetSelection
Resets selection of dataspace.
- Overrides:
resetSelection in class Dataset
-
create
public static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object data) throws Exception
Creates a scalar dataset in a file with/without chunking and compression.
- Parameters:
name - the name of the dataset to create.
pgroup - parent group where the new dataset is created.
type - the datatype of the dataset.
dims - the dimension size of the dataset.
maxdims - the max dimension size of the dataset. maxdims is set to dims if maxdims = null.
chunks - the chunk size of the dataset. No chunking if chunks = null.
gzip - GZIP compression level (1 to 9). No compression if gzip <= 0.
data - the array of data values.
- Returns:
- the new scalar dataset if successful; otherwise returns null.
- Throws:
Exception
- if there is a failure.
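A hedged example for this overload (not from the original Javadoc): it creates a chunked, GZIP-compressed 2D integer dataset. The file name, group and dataset name are illustrative assumptions, and HDF5Constants is hdf.hdf5lib.HDF5Constants:

    H5File file = new H5File("test.h5", FileFormat.CREATE);
    file.open();
    Group root = (Group) file.get("/");  // parent group for the new dataset

    Datatype intType = new H5Datatype(Datatype.CLASS_INTEGER, 4, Datatype.NATIVE, Datatype.NATIVE);
    long[] dims = { 100, 50 };
    long[] maxdims = { HDF5Constants.H5S_UNLIMITED, 50 }; // first dimension extendible
    long[] chunks = { 20, 50 };                           // chunking is required for unlimited dims
    int[] values = new int[100 * 50];                     // data in row-major order

    Dataset d = H5ScalarDS.create("/2D int data", root, intType, dims, maxdims, chunks, 6, values);
    file.close();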
-
create
public static Dataset create(String name, Group pgroup, Datatype type, long[] dims, long[] maxdims, long[] chunks, int gzip, Object fillValue, Object data) throws Exception
Creates a scalar dataset in a file with/without chunking and compression. The following example shows how to create a string dataset using this function.

    H5File file = new H5File("test.h5", H5File.CREATE);
    int max_str_len = 120;
    Datatype strType = new H5Datatype(Datatype.CLASS_STRING, max_str_len, Datatype.NATIVE, Datatype.NATIVE);
    int size = 10000;
    long dims[] = { size };
    long chunks[] = { 1000 };
    int gzip = 9;
    String strs[] = new String[size];

    for (int i = 0; i < size; i++)
        strs[i] = String.valueOf(i);

    file.open();
    file.createScalarDS("/1D scalar strings", null, strType, dims, null, chunks, gzip, strs);

    try {
        file.close();
    }
    catch (Exception ex) {
    }
- Parameters:
name - the name of the dataset to create.
pgroup - parent group where the new dataset is created.
type - the datatype of the dataset.
dims - the dimension size of the dataset.
maxdims - the max dimension size of the dataset. maxdims is set to dims if maxdims = null.
chunks - the chunk size of the dataset. No chunking if chunks = null.
gzip - GZIP compression level (1 to 9). No compression if gzip <= 0.
fillValue - the default data value.
data - the array of data values.
- Returns:
- the new scalar dataset if successful; otherwise returns null.
- Throws:
Exception
- if there is a failure.
-
copy
Description copied from class: Dataset
Creates a new dataset and writes the data buffer to the new dataset. This function allows applications to create a new dataset for a given data buffer. For example, users can select a specific interesting part from a large image and create a new image with the selection. The new dataset retains the datatype and dataset creation properties of this dataset.
- Specified by:
copy in class Dataset
- Parameters:
pgroup - the group which the dataset is copied to.
dstName - the name of the new dataset.
dims - the dimension sizes of the new dataset.
buff - the data values of the subset to be copied.
- Returns:
- the new dataset.
- Throws:
Exception
- if dataset can not be copied
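A hedged sketch of copying the current selection of dset into another group; targetGroup and the destination name are assumptions:

    Object buff = dset.getData();          // data of the current selection
    long[] dims = dset.getSelectedDims();  // shape of the selection
    Dataset dsetCopy = dset.copy(targetGroup, "dset_copy", dims, buff);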
-
getNumberOfPalettes
Get the number of palettes for this object.
- Overrides:
getNumberOfPalettes in class ScalarDS
- Returns:
- the number of palettes if it has any, 0 otherwise.
-
getPalette
Description copied from class: ScalarDS
Returns the palette of this scalar dataset or null if the palette does not exist. A scalar dataset can be displayed as spreadsheet data or an image. When a scalar dataset is displayed as an image, the palette or color table may be needed to translate a pixel value to color components (for example, red, green, and blue). Some scalar datasets have no palette and some datasets have one or more palettes. If an associated palette exists but is not loaded, this interface retrieves the palette from the file and returns the palette. If the palette is loaded, it returns the palette. It returns null if there is no palette associated with the dataset. The current implementation only supports the palette model of indexed RGB with 256 colors. Other models such as "YUV", "CMY", "CMYK", "YCbCr" and "HSV" will be supported in the future. The palette values are stored in a two-dimensional byte array and are arranged by color components of red, green and blue. palette[][] = byte[3][256], where palette[0][], palette[1][] and palette[2][] are the red, green and blue components respectively. Sub-classes have to implement this interface. HDF4 and HDF5 images use different libraries to retrieve the associated palette.
- Overrides:
getPalette in class ScalarDS
- Returns:
- the 2D palette byte array.
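A hedged sketch decoding the first palette entry, assuming dset is an initialized, image-like H5ScalarDS:

    byte[][] pal = dset.getPalette();  // byte[3][256], or null if no palette exists
    if (pal != null) {
        int r = pal[0][0] & 0xFF;      // red component of color index 0
        int g = pal[1][0] & 0xFF;      // green component
        int b = pal[2][0] & 0xFF;      // blue component
        System.out.println("color 0 = (" + r + ", " + g + ", " + b + ")");
    }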
-
getPaletteName
Description copied from class: ScalarDS
Get the name of a specific image palette from file. A scalar dataset may have multiple palettes attached to it. getPaletteName(int idx) returns the name of a specific palette identified by its index.
- Overrides:
getPaletteName in class ScalarDS
- Parameters:
idx - the index of the palette to retrieve the name.
- Returns:
- The name of the palette
-
readPalette
Description copied from class: ScalarDS
Reads a specific image palette from file. A scalar dataset may have multiple palettes attached to it. readPalette(int idx) returns a specific palette identified by its index.
- Overrides:
readPalette in class ScalarDS
- Parameters:
idx - the index of the palette to read.
- Returns:
- the image palette
-
readNumberOfPalettes
Reads references of palettes to count the number of palettes.
- Returns:
- the number of palettes referenced.
-
extend
H5Dset_extent verifies that the dataset is at least of the requested size, extending it if necessary. The dimensionality of the requested size is the same as that of the dataspace of the dataset being changed. This function can be applied to the following datasets:
1) any dataset with unlimited dimensions;
2) a dataset with fixed dimensions if the current dimension sizes are less than the maximum sizes set with maxdims (see H5Screate_simple).
- Parameters:
newDims - the target dimension sizes
- Throws:
hdf.hdf5lib.exceptions.HDF5Exception
- If there is an error at the HDF5 library level.
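A hedged sketch (not from the original Javadoc) extending the first, unlimited dimension by 100 rows; it assumes dset is an initialized H5ScalarDS that was created with an unlimited maxdims[0]:

    dset.init();                              // load the current dimension sizes
    long[] newDims = dset.getDims().clone();  // copy the current dims
    newDims[0] += 100;                        // grow along the unlimited dimension
    dset.extend(newDims);                     // the dataset is now at least newDims in size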
-
isVirtual
Checks if the dataset is virtual.
-
getVirtualFilename
Description copied from class: Dataset
Gets the source file name at index if dataset is virtual. Sub-classes must replace this default implementation.
- Overrides:
getVirtualFilename in class Dataset
- Parameters:
index - index of the source file name if dataset is virtual.
- Returns:
- filename if the dataset is virtual; otherwise returns null.
-
getVirtualMaps
Description copied from class: Dataset
Gets the number of source files if dataset is virtual. Sub-classes must replace this default implementation.
- Overrides:
getVirtualMaps in class Dataset
- Returns:
- the list size if the dataset is virtual; otherwise a negative value.
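A hedged sketch listing the source files of a virtual dataset, assuming dset is an initialized H5ScalarDS:

    if (dset.isVirtual()) {
        int n = dset.getVirtualMaps();  // number of source mappings
        for (int i = 0; i < n; i++)
            System.out.println(dset.getVirtualFilename(i));
    }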
-
toString
Description copied from class: Dataset
Returns a string representation of the data value. For example, "0, 255". For a compound datatype, it will be a 1D array of strings with field members separated by the delimiter. For example, "{0, 10.5}, {255, 20.0}, {512, 30.0}" is a compound attribute of {int, float} with three data points.
- Overrides:
toString in class Dataset
- Parameters:
delimiter - the delimiter used to separate individual data points. It can be a comma, semicolon, tab or space. For example, toString(",") will separate data by commas.
maxItems - the maximum number of array values to return.
- Returns:
- the string representation of the data values.
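For example (a hedged sketch, assuming dset already holds loaded data):

    // print up to 10 data values separated by ", "
    System.out.println(dset.toString(", ", 10));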
-