HDF5 version 1.8.1 released on Thu May 29 15:28:55 CDT 2008
================================================================================

INTRODUCTION
============

This document describes the differences between the HDF5-1.8.1 release and
HDF5 1.8.0, and contains information on the platforms tested and known
problems in HDF5-1.8.1. For more details, see the files
HISTORY-1_0-1_8_0_rc3.txt and HISTORY-1_8.txt in the release_docs/ directory
of the HDF5 source.

Links to the HDF5 1.8.1 source code, documentation, and additional materials
can be found on the HDF5 web page at:

     http://www.hdfgroup.org/products/hdf5/

The HDF5 1.8.1 release can be obtained from:

     http://www.hdfgroup.org/HDF5/release/obtain5.html

User documentation for 1.8.1 can be accessed directly at this location:

     http://www.hdfgroup.org/HDF5/doc/

New features in the HDF5-1.8.x release series, including brief general
descriptions of some new and modified APIs, are described in the "What's New
in 1.8.0?" document:

     http://www.hdfgroup.org/HDF5/doc/ADGuide/WhatsNew180.html

All new and modified APIs are listed in detail in the "HDF5 Software Changes
from Release to Release" document, in the section "Release 1.8.1 (current
release) versus Release 1.8.0":

     http://www.hdfgroup.org/HDF5/doc/ADGuide/Changes.html

If you have any questions or comments, please send them to the HDF Help Desk:

     help@hdfgroup.org


CONTENTS
========

- New Features
- Support for New Platforms, Languages, and Compilers
- Bug Fixes since HDF5-1.8.0
- Platforms Tested
- Supported Configuration Features Summary
- Known Problems


New Features
============

Configuration
-------------
- The lib/libhdf5.settings file now contains much more configure
  information. (AKC - 2008/05/18)

- The new configure option "--disable-sharedlib-rpath" disables embedding
  the '-Wl,-rpath' information into executables when shared libraries are
  produced, and instead relies solely on the information in LD_LIBRARY_PATH.
  (MAM - 2008/05/15)

- The configuration suite now uses Autoconf 2.61, Automake 1.10.1, and
  Libtool 2.2.2. (MAM - 2008/05/01)

Source code distribution
========================

Library
-------
- None

Parallel Library
----------------
- None

Tools
-----
- h5repack: Reinstated the -i and -o command-line flags for specifying the
  input and output files. h5repack now understands both the old syntax
  (with -i and -o) and the new syntax introduced in Release 1.8.0.
  (PVN - 2008/05/23)

- h5dump: Added support for external links, displaying the object that an
  external link points to. (PVN - 2008/05/12)

- h5dump: Added an option, -m, to allow user-defined formatting in the
  output of floating point numbers. (PVN - 2008/05/06)

- h5dump, in the output of the -p option: Added the effective data
  compression ratio to the dataset storage layout output when a compression
  filter has been applied to a dataset. (PVN - 2008/05/01)

F90 API
-------
- New H5A, H5G, H5L, H5O, and H5P APIs to enable 1.8 features were added.
  See "Release 1.8.1 (current release) versus Release 1.8.0" in the document
  "HDF5 Software Changes from Release to Release"
  (http://hdfgroup.org/HDF5/doc/ADGuide/Changes.html) for the complete list
  of the new APIs.

C++ API
-------
- None


Support for New Platforms, Languages, and Compilers
====================================================

- Both serial and parallel HDF5 are supported for the Red Storm machine,
  which is a Cray XT3 system.

- The Fortran library will work correctly if compiled with the -i8 flag.
  This has been tested with the g95, PGI, and Intel Fortran compilers.

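The sketch below illustrates one way to combine the -i8 Fortran build noted
above with the new "--disable-sharedlib-rpath" option from the Configuration
list. The --enable-fortran and --enable-shared switches are standard HDF5
configure options; passing -i8 through the FCFLAGS variable is an assumption
about your build environment, so adjust the flag spelling for your compiler
and consult the INSTALL file for details.

     # Build the C and Fortran libraries with 8-byte default Fortran
     # integers, and skip embedding -Wl,-rpath in the executables (the
     # runtime loader then relies on LD_LIBRARY_PATH to find the shared
     # libraries).
     FCFLAGS="-i8" ./configure --enable-fortran --enable-shared \
                               --disable-sharedlib-rpath
     make
     make check
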
Bug Fixes since HDF5-1.8.0
==========================

Configuration
-------------
- None

Source code distribution
========================

Library
-------
- Chunking: Chunks greater than 4GB are now disallowed. (QAK - 2008/05/16)

- Fixed the problem with searching for a target file when following an
  external link. The search pattern depends on whether the target file's
  pathname is an absolute or a relative path. Please see the
  H5Lcreate_external description in the "HDF5 Reference Manual"
  (http://hdfgroup.org/HDF5/doc/RM/RM_H5L.html). (VC - 2008/04/08)

- Fixed a possible file corruption bug when encoding datatype descriptions
  for compound datatypes whose size was between 256 and 511 bytes and the
  file was opened with the "use the latest format" property enabled (with
  H5Pset_libver_bounds). (QAK - 2008/03/13)

- Fixed a bug in the H5Aget_num_attrs() routine so that it correctly handles
  an invalid location identifier. (QAK - 2008/03/11)

Parallel Library
----------------
- None

Tools
-----
- Fixed a bug in h5diff that prevented datasets and attributes with
  variable-length string elements from comparing correctly.
  (QAK - 2008/02/28)

- Fixed a bug in h5dump that caused binary output to be produced only for
  the first dataset when several datasets were requested; see the examples
  at the end of this section. (PVN - 2008/04/07)

F90 API
-------
- The h5tset(get)_fields subroutines were missing the parameter to specify
  a sign position; this has been fixed. (EIP - 2008/05/23)

- Many APIs were fixed to work with 8-byte integers in Fortran versus 4-byte
  integers in C. This change is transparent to user applications.

C++ API
-------
- The class hierarchy was revised to address the problem reported in
  bugzilla #1068, "Attribute should not be derived from base class
  H5Object." Class AbstractDs was moved out of H5Object. Class Attribute
  now multiply inherits from IdComponent and AbstractDs, and class DataSet
  from H5Object and AbstractDs. In addition, data member IdComponent::id was
  moved into the subclasses Attribute, DataSet, DataSpace, DataType, H5File,
  Group, and PropList. (BMR - 2008/05/20)

- IdComponent::dereference was incorrect; it was changed from:
      void IdComponent::dereference(IdComponent& obj, void* ref)
  to:
      void H5Object::dereference(H5File& h5file, void* ref)
      void H5Object::dereference(H5Object& obj, void* ref)
  (BMR - 2008/05/20)

- Revised the Attribute::write and Attribute::read wrappers to handle memory
  allocation/deallocation properly. (bugzilla 1045) (BMR - 2008/05/20)

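For reference, the two tool fixes above cover command lines like the ones
sketched below. The file and dataset names are hypothetical, and the exact
option spellings (in particular the -b/-o binary-output options of h5dump)
should be verified against "h5diff --help" and "h5dump --help" for this
release.

     # Compare the same dataset in two files; datasets and attributes whose
     # elements are variable-length strings now compare correctly.
     h5diff file1.h5 file2.h5 /dset1 /dset1

     # Request raw binary output for two datasets in a single h5dump run;
     # before this fix, only the first requested dataset was written to
     # out.bin.
     h5dump -d /dset1 -d /dset2 -b LE -o out.bin file.h5
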
Platforms Tested
================

The following platforms and compilers have been tested for this release.

    Cray XT3 (2.0.41)             cc (pgcc) 7.1-4
    (red storm)                   ftn (pgf90) 7.1-4
                                  CC (pgCC) 7.1-4
                                  mpicc 1.0.2
                                  mpif90 1.0.2

    FreeBSD 6.2-STABLE i386       gcc 3.4.6 [FreeBSD] 20060305
    (duty)                        g++ 3.4.6 [FreeBSD] 20060305
                                  gcc 4.2.1 20080123
                                  g++ 4.2.1 20080123
                                  gfortran 4.2.1 20070620

    FreeBSD 6.2-STABLE amd64      gcc 3.4.6 [FreeBSD] 20060305
    (liberty)                     g++ 3.4.6 [FreeBSD] 20060305
                                  gcc 4.2.1 20080123
                                  g++ 4.2.1 20080123
                                  gfortran 4.2.1 20080123

    IRIX64 6.5 (64 & n32)         MIPSpro cc 7.4.4m
                                  F90 MIPSpro 7.4.4m
                                  C++ MIPSpro cc 7.4.4m

    Linux 2.6.9 (RHEL4)           Intel 10.0 compilers
    (abe.ncsa.uiuc.edu)

    Linux 2.4.21-47               gcc 3.2.3 20030502
    (osage)

    Linux 2.6.9-42.0.10           gcc, g++ 3.4.6 20060404, G95 (GCC 4.0.3)
    (kagiso)                      PGI 7.1-6 (pgcc, pgf90, pgCC)
                                  Intel 9.1 (icc, ifort, icpc)

    Linux 2.6.16.27 x86_64 AMD    gcc 4.1.0 (SuSE Linux), g++ 4.1.0,
    (smirom)                      g95 (GCC 4.0.3)
                                  PGI 7.1-6 (pgcc, pgf90, pgCC)
                                  Intel 9.1 (icc, ifort, icpc)

    Linux 2.6.5-7.252.1-rtgfx #1  Intel(R) C++ Version 9.0
    SMP ia64                      Intel(R) Fortran Itanium(R) Version 9.0
    (cobalt)                      SGI MPI

    SunOS 5.8 32, 64              Sun WorkShop 6 update 2 C 5.3
    (Solaris 2.8)                 Sun WorkShop 6 update 2 Fortran 95 6.2
                                  Sun WorkShop 6 update 2 C++ 5.3

    SunOS 5.10                    cc: Sun C 5.8
    (linew)                       f90: Sun Fortran 95 8.2
                                  CC: Sun C++ 5.8

    Xeon Linux 2.4.21-32.0.1.ELsmp-perfctr-lustre
    (tungsten)                    gcc 3.2.2 20030222
                                  Intel(R) C++ Version 9.0
                                  Intel(R) Fortran Compiler Version 9.0

    IA-64 Linux 2.4.21.SuSE_309.tg1 ia64
    (NCSA tg-login)               gcc 3.2.2
                                  Intel(R) C++ Version 8.1
                                  Intel(R) Fortran Compiler Version 8.1
                                  mpich-gm-1.2.6..14b-intel-r2

    Intel 64 Linux 2.6.9-42.0.10.EL_lustre-1.4.10.1smp
    (abe)                         gcc 3.4.6 20060404
                                  Intel(R) C++ Version 10.0
                                  Intel(R) Fortran Compiler Version 10.0
                                  mvapich2-0.9.8p2patched-intel-ofed-1.2

    Windows XP                    Visual Studio .NET
                                  Visual Studio 2005 w/ Intel Fortran 9.1
                                  Cygwin (native gcc compiler and g95)
                                  MinGW (native gcc compiler and g95)

    Windows XP x64                Visual Studio 2005 w/ Intel Fortran 9.1

    Windows Vista                 Visual Studio 2005

    Mac OS X 10.5.2 (Intel)       i686-apple-darwin9-gcc-4.0.1 (GCC) 4.0.1
                                  GNU Fortran (GCC) 4.3.0 20070810
                                  G95 (GCC 4.0.3 (g95 0.91!) Apr 24 2008)

Supported Configuration Features Summary
========================================

In the tables below
      y       = tested and supported
      n       = not supported or not tested in this release
      x       = not working in this release
      dna     = does not apply
      ( )     = footnote appears below second table
      <blank> = testing incomplete on this feature or platform

Platform                      C         F90    F90       C++    zlib   SZIP
                              parallel         parallel
SunOS5.10 64-bit              n         y      n         y      y      y
SunOS5.10 32-bit              n         y      n         y      y      y
IRIX64_6.5 64-bit             n         y      y         y      y      y
IRIX64_6.5 32-bit             n         n      n         n      y      y
Windows XP                    n         y(15)  n(15)     y      y      y
Windows XP x64                n         y(15)  n(15)     y      y      y
Windows Vista                 n         n      n         y      y      y
Mac OS X 10.5 Intel           n         y      n         y      y      y
FreeBSD 4.11                  n         n      n         y      y      y
RedHat EL3 W (3)              y(1)      y(10)  y(1)      y      y      y
RedHat EL3 W Intel (3)        n         y      n         y      y      n
RedHat EL3 W PGI (3)          n         y      n         y      y      n
SuSe x86_64 gcc (3,12)        y(2)      y(11)  y(2)      y      y      y
SuSe x86_64 Int (3,12)        n         y(13)  n         y      y      n
SuSe x86_64 PGI (3,12)        n         y(8)   n         y      y      y
Linux 2.4 Xeon C
  Lustre Intel (3,6)          n         y      n         y      y      n
Linux 2.6 SuSE ia64 C
  Intel (3,7)                 y         y      y         y      y      n
Linux 2.6 SGI Altix
  ia64 Intel (3)              y         y      y         y      y      y
Linux 2.6 RHEL C
  Lustre Intel (5)            y(4)      y      y(4)      y      y      n
Cray XT3 2.0.41               y         y      y         y      y      n

Platform                      Shared    Shared    Shared    Thread-
                              C libs    F90 libs  C++ libs  safe
Solaris2.10 64-bit            y         y         y         y
Solaris2.10 32-bit            y         y         y         y
IRIX64_6.5 64-bit             y         y         n         y
IRIX64_6.5 32-bit             y         dna       y         y
Windows XP                    y         y(15)     y         y
Windows XP x64                y         y(15)     y         y
Windows Vista                 y         n         n         y
Mac OS X 10.3                 y                             n
FreeBSD 4.11                  y         n         y         y
RedHat EL3 W (3)              y         y(10)     y         y
RedHat EL3 W Intel (3)        y         y         y         n
RedHat EL3 W PGI (3)          y         y         y         n
SuSe x86_64 W GNU (3,12)      y         y         y         y
SuSe x86_64 W Int (3,12)      y         y         y         n
SuSe x86_64 W PGI (3,12)      y         y         y         n
Linux 2.4 Xeon C
  Lustre Intel (6)            y         y         y         n
Linux 2.4 SuSE ia64 C
  Intel (7)                   y         y         y         n
Linux 2.4 SGI Altix
  ia64 Intel                  y                             n
Linux 2.6 RHEL C
  Lustre Intel (5)            y         y         y         n
Cray XT3 2.0.41               n         n         n         n

Notes: (1) Using mpich2 1.0.6.
       (2) Using mpich2 1.0.7.
       (3) Linux 2.6 with GNU, Intel, and PGI compilers, as indicated.
           W or C indicates workstation or cluster, respectively.
       (4) Using mvapich2 0.9.8.
       (5) Linux 2.6.9-42.0.10. Xeon cluster with ELsmp_perfctr_lustre
           and Intel compilers.
       (6) Linux 2.4.21-32.0.1. Xeon cluster with ELsmp_perfctr_lustre
           and Intel compilers.
       (7) Linux 2.4.21, SuSE_292.till. Ia64 cluster with Intel compilers.
       (8) pgf90
       (9) With Compaq Visual Fortran 6.6c compiler.
       (10) With PGI and Absoft compilers.
       (11) PGI and Intel compilers for both C and Fortran.
       (12) AMD Opteron x86_64
       (13) ifort
       (14) Yes with C and Fortran, but not with C++.
       (15) Using Visual Studio 2005 or Cygwin.
       (16) Not tested for this release.

       Compiler versions for each platform are listed in the preceding
       "Platforms Tested" table.


Known Problems
==============

* For Red Storm, a Cray XT3 system, the yod command sometimes gives the
  message "yod allocation delayed for node recovery". This interferes with
  test suites that do not expect to see this message. See the "Red Storm"
  section in the file INSTALL_parallel for a way to deal with this problem.
  AKC - 2008/05/28

* For Red Storm, a Cray XT3 system, the tools/h5ls/testh5ls.sh script fails
  on the test "Testing h5ls -w80 -r -g tgroup.h5". This test is expected to
  fail and exit with a non-zero code, but the yod command does not propagate
  the exit code of the executables: yod always returns 0 if it can launch
  the executable. The test suite shell script expects a non-zero code for
  this particular test, so it concludes the test has failed when it receives
  0 from yod. To bypass this problem for now, change the following lines in
  tools/h5ls/testh5ls.sh.

  ======== Original =========
  # The following combination of arguments is expected to return an error message
  # and return value 1
  TOOLTEST tgroup-1.ls 1 -w80 -r -g tgroup.h5
  ======== Skip the test =========
  echo SKIP TOOLTEST tgroup-1.ls 1 -w80 -r -g tgroup.h5
  ======== end of bypass ========
  AKC - 2008/05/28

* We have discovered two problems when running collective IO parallel HDF5
  tests with chunking storage with the ChaMPIon MPI compiler on tungsten, a
  Linux cluster at NCSA. Under some complex selection cases:
    1) MPI_Get_elements returns the wrong value.
    2) MPI_Type_struct also generates the wrong derived datatype, and
       corrupt data may be generated.
  These issues arise only when collective IO is turned on with chunking
  storage and certain complex selections. We have not found these problems
  with other MPI-IO implementations. If you encounter these problems, you
  may use independent IO instead.

  To avoid this behavior, change the following line in your code
      H5Pset_dxpl_mpio(xfer_plist, H5FD_MPIO_COLLECTIVE);
  to
      H5Pset_dxpl_mpio(xfer_plist, H5FD_MPIO_INDEPENDENT);
  KY - 2007/08/24

* For SNL, spirit/liberty/thunderbird: The serial tests pass, but the
  parallel tests fail with an MPI-IO file locking message. AKC - 2007/6/25

* On an Intel 64 Linux cluster (RH 4, Linux 2.6.9) with Intel 10.0
  compilers, use the -mp -O1 compilation flags to build the libraries. A
  higher level of optimization causes failures in several HDF5 library
  tests.

* For LLNL, uP: both serial and parallel tests pass.
  Zeus: Serial tests pass, but parallel tests fail with a known problem in
  MPI.
  ubgl: Serial tests pass, but parallel tests fail.

* Configuring with --enable-debug=all produces compiler errors on most
  platforms. Users who want to run HDF5 in debug mode should use
  --enable-debug rather than --enable-debug=all to enable debugging
  information on most modules.

* On Mac OS 10.4, test/dt_arith.c has some errors in conversion from long
  double to (unsigned) long long and from (unsigned) long long to long
  double.

* On SGI Altix with Intel 9.0, testmeta.c would not compile with the -O3
  optimization flag.

* On VAX, the Scaleoffset filter is not supported. The Scaleoffset filter
  supports only the IEEE standard for floating-point data; it cannot be
  applied to HDF5 data generated on VAX.

* On Cray X1, a lone colon on the command line of h5dump --xml (as in the
  testh5dumpxml.sh script) is misinterpreted by the operating system and
  causes an error.

* On mpich 1.2.5 and 1.2.6, if more than two processes contribute no IO and
  the application asks to do collective IO, we have found that when using 4
  processors, a simple collective write will sometimes hang. This can be
  verified with the t_mpi test under testpar.

* On IRIX6.5, when the C compiler version is greater than 7.4, complicated
  MPI derived datatype code will work. However, the user should increase the
  value of the MPI_TYPE_MAX environment variable to an appropriate value to
  use the collective irregular selection code. For example, the current
  parallel HDF5 tests need MPI_TYPE_MAX raised to 200,000 to pass.

* A dataset created or rewritten with a v1.6.3 or later library cannot be
  read with a v1.6.2 or earlier library when the Fletcher32 EDC filter is
  enabled. There was a bug in the calculation of the Fletcher32 checksum in
  the library before v1.6.3; the checksum value was not consistent between
  big-endian and little-endian systems. This bug was fixed in Release 1.6.3.
  However, after fixing the bug, the checksum value is no longer the same as
  before on little-endian systems.

  Library releases after 1.6.4 can still read datasets created or rewritten
  with an HDF5 library of v1.6.2 or before. SLU - 2005/6/30

* For version 6 (6.02 and 6.04) of the Portland Group compiler on the AMD
  Opteron processor, there is a bug in the compiler for optimization (-O2).
  The library failed several tests, all related to the MULTI driver. The
  problem has been reported to the vendor.

* On IBM AIX systems, parallel HDF5 mode will fail some tests with error
  messages like "INFO: 0031-XXX ...". These come from the `poe' command. Set
  the environment variable MP_INFOLEVEL to 0 to minimize the messages and
  run the tests again.

  The tests may fail with messages like "The socket name is already in use",
  but HDF5 does not use sockets. This failure is due to problems with the
  poe command trying to set up the debug socket. To resolve this problem,
  check whether there are many old /tmp/s.pedb.* files lying around. These
  are sockets used by the poe command and left behind after failed commands.
  First, ask your system administrator to clean them out. Lastly, request
  that IBM provide a means to run poe without the debug socket.

* The --enable-static-exec configure flag fails to compile on Solaris
  platforms. This is due to the fact that not all of the system libraries on
  Solaris are available in a static format.

  The --enable-static-exec configure flag also fails to compile correctly on
  IBM SP2 platforms for serial mode. The parallel mode works fine with this
  option.

  It is suggested that you do not use this option on these platforms during
  configuration.

* With the gcc 2.95.2 compiler, HDF5 uses the `-ansi' flag during
  compilation. The ANSI version of the compiler complains about not being
  able to handle the `long long' datatype with the warning:

      warning: ANSI C does not support `long long'

  This warning is innocuous and can be safely ignored.

* The ./dsets tests fail on the TFLOPS machine if the test program, dsets.c,
  is compiled with the -O option. The HDF5 library still works correctly
  with the -O option. The test program works fine if it is compiled with -O1
  or -O0. Only -O (the same as -O2) causes the test program to fail.

* Not all platforms behave correctly with Szip's shared libraries. Szip is
  disabled in these cases, and a message is relayed at configure time.
  Static libraries should work on all systems that support Szip and should
  be used when shared libraries are unavailable.

  There is also a configure error on Altix machines that incorrectly reports
  when a version of Szip without an encoder is being used.

* On some platforms that use Intel and Absoft compilers to build the HDF5
  Fortran library, compilation may fail for fortranlib_test.f90, fflush1.f90,
  and fflush2.f90, complaining about the exit subroutine. Comment out the
  line
      IF (total_error .ne. 0) CALL exit (total_error)

* Information about building with PGI and Intel compilers is available in
  the INSTALL file, sections 4.7 and 4.8.

* On at least one system, SDSC DataStar, the scheduler (in this case
  LoadLeveler) sends job status updates to standard error when you run any
  executable that was compiled with the parallel compilers.

  This causes problems when running "make check" on parallel builds, as many
  of the tool tests function by saving the output from test runs and
  comparing it to an exemplar.

  The best solution is to reconfigure the target system so it no longer
  inserts the extra text. However, this may not be practical.

  In such cases, one solution is to "setenv HDF5_Make_Ignore yes" prior to
  the configure and build.

  This will cause "make check" to continue after detecting errors in the
  tool tests. However, in the case of SDSC DataStar, it also leaves you with
  some 150 "failed" tests to examine by hand.

  A second solution is to write a script to run serial tests and filter out
  the text added by the scheduler. A sample script used on SDSC DataStar is
  given below, but you will probably have to customize it for your
  installation.

  Observe that the basic idea is to insert the script as the first item on
  the command line which executes the test. The script then executes the
  test and filters out the offending text before passing it on.

      #!/bin/csh

      set STDOUT_FILE=~/bin/serial_filter.stdout
      set STDERR_FILE=~/bin/serial_filter.stderr

      rm -f $STDOUT_FILE $STDERR_FILE

      ($* > $STDOUT_FILE) >& $STDERR_FILE

      set RETURN_VALUE=$status

      cat $STDOUT_FILE

      tail +3 $STDERR_FILE

      exit $RETURN_VALUE

  You get the HDF5 make files and test scripts to execute your filter script
  by setting the environment variable "RUNSERIAL" to the full path of the
  script prior to running configure for parallel builds. Remember to
  "unsetenv RUNSERIAL" before running configure for a serial build.

  Note that the RUNSERIAL environment variable exists so that we can prefix
  serial runs as necessary on the target system. On DataStar, no prefix is
  necessary. However, on an MPICH system, the prefix might have to be set to
  something like "/usr/local/mpi/bin/mpirun -np 1" to get the serial tests
  to run at all.

  In such cases, you will have to include the regular prefix in your filter
  script.

* H5Ocopy() does not copy reg_ref attributes correctly when shared messages
  are turned on. The value of the reference in the destination attribute is
  wrong. This H5Ocopy problem will affect the h5copy tool.

* In the C++ API, it appears that there are bugs in Attribute::write/read
  and DataSet::write/read for fixed- and variable-length strings. The
  problems are being worked on, and a patch will be provided when the fixes
  are available.