==========================================================================

   This file contains a history of the HDF4.* releases.

   To find information about a particular release, search for the
   %%%4.#.# string (for example, 4.2.6) or, for releases prior to
   2010, the %%%4.#r# string (for example, 4.1r2).
   A list of all releases is at the top of this file.

   Documents in this file refer to several *.txt files that were
   originally stored in the release_notes directory of the HDF4
   source tree.  Those files are now combined into one misc_docs.txt
   file in the same directory.

==========================================================================

List of the HDF4 releases

    4.2.12       June 2016      /* delayed due to the 1.10 and 1.8.17 releases */
    4.2.11       February 2015
    4.2.10       February 2014
    4.2.9        February 2013
    4.2.8        August 2012
    4.2.7        February 2012
    4.2.6        June 2011
    4.2.5        February 2010
    4.2r4        January 2009
    4.2r3        January 2008
    4.2r2        October 2007
    4.2r1        February 2005
    4.2r0        December 2003
    4.2r0-Beta   September 2003
    4.1r5        November 2001
    4.1r4        October 2000
    4.1r3        May 1999
    4.1r2        March 1998
    4.1r1        February 1997
    4.1b1        December 1996
    4.0r2        July 1996
    4.0r1        February 1996
    4.0b2        November 1995
    4.0b1        July 1995
    4.0.alpha    November 1994

==========================================================================

%%%4.2.12%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

HDF version 4.2.12 released on 2016-06-29
====================================================

INTRODUCTION

This document describes the differences between HDF 4.2.11 and HDF 4.2.12.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2.12.

The HDF 4.2.11 documentation can be found on The HDF Group's website at:

    https://www.hdfgroup.org/release4/doc/

First-time HDF users are encouraged to read the HDF FAQ, which can be
reached from the HDF product home page:

    https://hdfgroup.org/products/hdf4/

If you have any questions or comments, please see the HDF Support page:

    https://hdfgroup.org/services/support.html

CONTENTS

- New features and changes
  -- Configuration
- Support for new platforms and compilers
- Bugs fixed since HDF 4.2.11
  -- Configuration
  -- Library
  -- Utilities
- Documentation
- Platforms tested
- Known problems

New features and changes
========================

Configuration
=============
  - None

Library
=========
  - Behavior of HDstrdup changed

    HDstrdup now checks the input string for NULL.

    (BMR, 2016/05/12)

  - Behavior of SDsetexternalfile changed

    Previously, when SDsetexternalfile was called more than once on a data
    set, the library would repeatedly store the external file information
    in the main file, at different offsets.  SDsetexternalfile is now fixed
    to have no effect when the data set is already external.

    (BMR, 2016/05/30)

  - Added new utility function HDisnetcdf64 for use in tools

    HDisnetcdf64 returns TRUE (1) if a file is a netCDF 64-bit file and
    FALSE (0) otherwise.

        intn HDisnetcdf64(const char *filename)

    (BMR, 2016/06/14)

Utilities
=========
  - hdp: detection of netCDF 64-bit files

    The utility hdp simply failed when the input file was a netCDF 64-bit
    file.  It now reports that it cannot read a netCDF 64-bit file and then
    exits or continues to the next input file, as illustrated in the sketch
    below.
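    The following sketch is an editorial illustration, not part of the
    release notes.  It shows how a tool might use the new HDisnetcdf64
    helper described under Library above to skip 64-bit netCDF input files.
    Only the HDisnetcdf64 prototype and its TRUE/FALSE semantics come from
    the notes; the surrounding code, and the assumption that the prototype
    is visible through the standard HDF4 headers, are illustrative.

        #include <stdio.h>
        #include "hdf.h"  /* intn; HDisnetcdf64 is assumed to be declared here */

        /* Return 0 if the file can be processed, -1 if it should be skipped. */
        static int check_input(const char *filename)
        {
            if (HDisnetcdf64(filename)) {  /* TRUE (1) for a netCDF 64-bit file */
                fprintf(stderr, "%s: cannot read a netCDF 64-bit file, skipping\n",
                        filename);
                return -1;                 /* caller moves on to the next file */
            }
            return 0;                      /* not a 64-bit netCDF file; proceed */
        }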
    (BMR, 2016/06/14)

Support for new platforms and compilers
=======================================
  - Support for Mac OS X Yosemite 10.10 added
    (AKC - 2015/03/04, HDFFR-1500)

  - Support for Mac OS X El Capitan 10.11 added
    (AKC - 2015/11/18, HDFFR-1425)

    (BMR, added for AKC, 2016/06/24)

Java Wrapper Library
--------------------
The Java HDF JNI library has been integrated into the HDF repository.
The configure option is "--enable-java", and the CMake option is
HDF4_BUILD_JAVA:BOOL=ON.  The package hierarchy has changed from the
previous HDF4 JNI package, "ncsa.hdf.hdflib", to "hdf.hdflib" in
HDF 4.2.12.

Bugs fixed since HDF 4.2.11
===========================

Configuration
=============
  - Examples from mfhdf and hdf will now be installed according to
    $DESTDIR when it is supplied.
    (LRK, 2016/06/29, HDFFR-1491)

Library
=========
  - SDsetexternalfile on special elements

    When the data element is already special, an incorrect data length was
    used for the element in subsequent calls to SDsetexternalfile,
    sometimes causing failures.  This is now fixed.

    (BMR, 2016/01/04, HDFFR-1516)

Utilities
=========
  - None

Documentation
=============
  - In addition to minor improvements to the contents, the user
    documentation has a new format to improve usability.
    (BMR, 2016/06/24)

Platforms tested
================
This version has been tested on the following platforms:
(Format: uname -s, uname -r, uname -v, uname -p, uname -m)

    Linux 2.6.32-573.22.1.el6.x86_64 #1 SMP, x86_64 (mayll/platypus)
        gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)
        GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)
        icc (ICC) 15.0.3.187 Build 20150407
        ifort (IFORT) 15.0.3.187 Build 20150407
        pgcc and pgf90 15.7-0 64-bit target on x86-64 Linux -tp nehalem

    Linux 3.10.0-327.10.1.el7.x86_64 #1 SMP x86_64, GNU/Linux (kituo/moohan)
        gcc (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
        GNU Fortran (GCC) 4.8.5 20150623 (Red Hat 4.8.5-4)
        icc (ICC) 15.0.3.187 Build 20150407
        ifort (IFORT) 15.0.3.187 Build 20150407

    Linux 2.6.32-573.18.1.el6.ppc64 #1 SMP, ppc64 (ostrich)
        (1) GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11)
        (2) IBM XL Fortran for Linux, V15.1 (64-bit mode)

    SunOS 5.11 (32- and 64-bit) 11.1, sparc, sun4v (emu)
        Sun C 5.12 SunOS_sparc 2011/11/16
        Sun Fortran 95 8.6 SunOS_sparc 2011/11/16

    Windows 7
        Visual Studio 2012 w/ Intel Fortran 15 (cmake)
        Visual Studio 2013 w/ Intel Fortran 15 (cmake)
        Visual Studio 2015 w/ Intel Fortran 16 (cmake)
        Cygwin (CYGWIN_NT-6.1 2.2.1(0.289/5/3) gcc(4.9.3) compiler and
            gfortran) (cmake and autotools)

    Windows 7 x64
        Visual Studio 2012 w/ Intel Fortran 15 (cmake)
        Visual Studio 2013 w/ Intel Fortran 15 (cmake)
        Visual Studio 2015 w/ Intel Fortran 16 (cmake)

    Windows 8.1
        Visual Studio 2012 w/ Intel Fortran 15 (cmake)
        Visual Studio 2013 w/ Intel Fortran 15 (cmake)

    Windows 8.1 x64
        Visual Studio 2012 w/ Intel Fortran 15 (cmake)
        Visual Studio 2013 w/ Intel Fortran 15 (cmake)

    Mac OS X 10.8.5, Darwin 12.6.0, x86_64 (swallow,kite)
        Apple clang version 5.1 from Xcode 5.1
        gfortran GNU Fortran (GCC) 4.8.2
        Intel icc and ifort Version 15.0.3

    Mac OS X 10.9.5, Darwin 13.4.0, x86_64 (wren,quail)
        Apple clang version 6.0 from Xcode 6.2
        gfortran GNU Fortran (GCC) 4.9.2
        Intel icc and ifort Version 15.0.3

    Mac OS X 10.10.5, Darwin 14.5.0, x86_64 (osx1010dev)
        Apple clang version 6.0 from Xcode 7.0
        gfortran GNU Fortran (GCC) 4.9.2
        Intel icc and ifort version 15.0.3

    Mac OS X 10.11.5, Darwin 15.4.0/15.5.0, x86_64 (osx1010dev)
        Apple clang version 7.3 from Xcode 7.3
        gfortran GNU Fortran (GCC) 5.2.0
        Intel icc and ifort version 15.0.3

    Debian7.5.0
        3.2.0-4-amd64 #1 SMP Debian 3.2.51-1, x86_64 GNU/Linux
        gcc (Debian 4.7.2-5) 4.7.2
        GNU Fortran (Debian 4.7.2-5) 4.7.2
        (cmake and autotools)

    Fedora20 3.15.3-200.fc20.x86_64 #1 SMP x86_64 GNU/Linux
        gcc (GCC) 4.8.3 20140624 (Red Hat 4.8.3-1)
        GNU Fortran (GCC) 4.8.3 20140624 (Red Hat 4.8.3-1)
        (cmake and autotools)

    SUSE 13.1 3.11.10-17-desktop #1 SMP PREEMPT x86_64 GNU/Linux
        gcc (SUSE Linux) 4.8.1
        GNU Fortran (SUSE Linux) 4.8.1
        (cmake and autotools)

    Ubuntu 14.04 3.13.0-35-generic #62-Ubuntu SMP x86_64 GNU/Linux
        gcc (Ubuntu/Linaro 4.9.1-0ubuntu1) 4.9.1
        GNU Fortran (Ubuntu/Linaro 4.9.1-0ubuntu1) 4.9.1
        (cmake and autotools)

Known problems
==============
o Several Fortran examples print "^@" when displaying strings (for example,
  names of the attributes).  This happens because the Fortran application
  does not know the length of the strings passed from the C library.
  EIP - 2015-01-11, HDFFR-1477

o CMake builds on Windows use the same pre-generated ncgen*.[ch] files
  produced from the yacc/lex input files.  The generated file, ncgenyy.c,
  uses a header file that Windows does not support, which must be blocked
  out in order for Windows to use it.
  AKC 2014-02-03, HDFFR-1424

o CMake "make install" fails installing the tools: use CPack to create an
  install package.
  ADB - 2014/02/03

o CMake does not install these man pages: hdf.1, ncdump.1, ncgen.1
  AKC/BMR - 2014/02/02

o For Mac OS X 10.7 Lion, 10.8 Mountain Lion, 10.9 Mavericks, 10.10
  Yosemite, and 10.11 El Capitan, when compiling with -O2, some xdr
  functions might cause memory corruption.  This happens with the GCC,
  Intel, and Clang compilers.  Currently, the -O0 optimization level is
  used to avoid this problem.  (HDFFR-1318,1327,1358,1425)
  EIP - 2013/02/05, BMR - 2016/06/24

o On IBM PowerPC 64, hdftest fails when gcc 4.4.6 is used with the -O3
  optimization level.

o When building on AIX systems, if CC is xlc with -qlanglvl=ansi, configure
  will fail when checking for the jpeglib.h header due to the duplicated
  macro definition of HAVE_STDLIB_H.  This is because some newer builds of
  the jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file.
  Without the -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still
  fail, but newer xlc versions (e.g., V9.0) pass.
  AKC - 2010/02/17

o When building on Linux/UNIX platforms, the szip shared library files must
  be in the system library path.  This can be done by adding a link to the
  libsz.* files in the /usr/lib folder or by adding the library location to
  the LD_LIBRARY_PATH environment variable.
      Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH
  Optionally, one can use the static szip library files by adding '-static'
  to the CFLAGS environment variable.

o Existing data written by an HDF4 Library prior to HDF 4.2r2:
  When a one-dimensional SDS and a dimension scale have the same name,
  subsequent accesses to the dimension scale or to the SDS might produce
  undesired results because the libraries could not distinguish between
  the two objects.  In the case of writing, data might even be corrupted.
  For example, SDS data might be written to a dimension variable or vice
  versa.  (bugzilla #624)

  HDF4 Library Releases 4.2r2 and later make a distinction between an SDS
  and a dimension variable.  However, as with older versions, these recent
  versions are unable to detect such conflicts in files created by earlier
  releases.  It is therefore STRONGLY recommended to check for such name
  duplication before working with data created with a pre-4.2r2 library.
  The functions SDgetnumvars_byname and SDnametoindices are provided to
  help detect such name conflicts and select the correct object to access,
  respectively; see the HDF Reference Manual entries for further details.
  FB - 2009/01/26
  BMR - revised 2011/06/24

o N-bit compression is not supported with Fortran APIs.

o Using both fill-value and compression on SD datasets does not work.

o When using PGI compilers, make sure that the JPEG library is also
  compiled with a PGI C compiler; linking with a JPEG library built with
  gcc causes JPEG library tests to fail.  To bypass the problem:
  x Set the LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a, where
    $PGI_JPEG_INSTALL_DIR points to the installation directory for the
    PGI-compiled JPEG library:
        setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a
  x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure
    with the PGI-compiled JPEG library:
        ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib....

o In order for the API SDgetdatasize to get the correct compressed size of
  the data, the dataset needs to be closed (SDendaccess) or read
  (SDreaddata) after being written and before SDgetdatasize is called.
  BMR - 2008/11/22


%%%4.2.11%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

HDF version 4.2.11 released on 2015-02-09
==============================================

INTRODUCTION

This document describes the differences between HDF 4.2.10 and HDF 4.2.11.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2.11.

The HDF 4.2.11 documentation can be found on The HDF Group's website at:

    http://www.hdfgroup.org/release4/doc/

First-time HDF users are encouraged to read the HDF FAQ, which can be
reached from the HDF product home page:

    http://hdfgroup.org/products/hdf4/

If you have any questions or comments, please see the HDF Support page:

    http://hdfgroup.org/services/support.html

CONTENTS

- New features and changes
  -- Configuration
- Support for new platforms and compilers
- Bugs fixed since HDF 4.2.10
  -- Configuration
  -- Library
  -- Utilities
- Documentation
- Platforms tested
- Known problems

New features and changes
========================

Configuration
=============
  - None

Support for new platforms and compilers
=======================================
  - None

Bugs fixed since HDF 4.2.10
===========================

Configuration
=============
  - Windows installer: incorrect display of the PATH environment variable

    In the Windows installer, the dialog box where the user can elect to
    add the product's bin path to the %PATH% environment variable displayed
    an incorrect path.  This path was missing the C:\Program Files part and
    used the POSIX file separator '/' before the bin (/bin, instead of
    \bin).

    The dialog box text was changed to simply say that the product's bin
    path would be added, instead of explicitly displaying the path.  This
    is in line with most installers.  The reason for not fixing the
    displayed path is that it is difficult to pass the correct path from
    CPack to the NSIS installer for display.

    Note that this was never a code issue - it was just a display problem.
    The installer always did the right thing when updating the environment
    variable.

    (DER - 2014/11/14, HDFFV-9016)

Library
=========
  - Warning "array subscript is below array bounds"

    Applied a user's patch to remove the warning.
    (BMR 2014/06/02, HDFFR-1379)

Utilities
=========
  - Detection of read failure in ncdump

    Previously, ncdump did not detect failure from ncvarget because the
    value returned by ncvarget was not checked, and the calling function
    simply returned 0.  The error code ERR_READFAIL (-2) was added to
    ncdump, only to indicate this failure within ncdump, which will display
    this message:
        "Reading failed for variable , the data is possibly corrupted."
    (BMR 2015/01/21, HDFFR-1468)

  - Improvement of ncgen's usage statement

    Previously, ncgen's usage statement looked like this:
        ncgen: -: Usage: ncgen [-V] [ -b ] [ -c ] [ -f ] [ -o outfile] [ file... ]
    More details were added to the usage statement to improve clarity.  It
    is now clearer and consistent with ncdump:
        Usage: ncgen [-V] [ -b ] [ -c ] [ -f ] [ -o outfile] [ file ... ]
          [-V]          Display version of the HDF4 library and exit
          [-b]          For binary netCDF output, '.nc' extension
          [-c]          For C output
          [-f]          For Fortran output
          [-o outfile]  Explicitly specify output file name
    (BMR 2015/01/19, HDFFR-1459)

  - Output of hrepack containing an unnecessary vgroup of class RIG0.0

    When the input file did not have any GR elements, hrepack still opened
    and closed the output file using the GR API, which caused the RIG0.0
    vgroup to be written to the output file.  hrepack now skips accessing
    the output file with the GR API when the input file does not have any
    images or GR attributes.
    (BMR 2015/01/18, HDFFR-1428)

  - Compliance with the Fedora standard regarding printf/fprintf statements

    Users sent patches for the problem where the format string was missing
    from printf/fprintf statements; fixing this brings the code into
    compliance with the Fedora standard.  For more information, see
    https://fedoraproject.org/wiki/Format-Security-FAQ.  In the context
    where this problem occurred, the benefit of using puts/fputs over
    printf/fprintf is insignificant, so the fix was to add "%s" to those
    printf/fprintf statements that lacked a format string instead of
    switching to puts/fputs.
    (BMR 2014/12/16, HDFFR-1423 and HDFFR-1475)

  - Failure of hdp on some hdfeos-generated files

    Attribute vdatas created by the hdfeos API have a field named
    "AttrValues".  The utility functions Vattrhdfsize and VSattrhdfsize, in
    hdp.c, used ATTR_FIELD_NAME ("VALUES") to verify that a vdata is
    storing an attribute, causing failures on some hdfeos-generated files.
    In addition, when this failure occurred, the calling function tried to
    free allocated resources prematurely.  The check against
    ATTR_FIELD_NAME and the premature resource deallocation were removed.
    (BMR 2014/12/08, HDFFR-1471)

  - nclong versus long in tests

    Applied a user's patch to remove a test failure.
    (BMR 2014/10/21, HDFFR-1378)

Documentation
=============
  - Updated Reference Manual and User's Guide

    The documents were updated to contain information about the changes to
    the tools.  In addition, various improvements were applied.
(BMR 2015/2/04) Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.32-358.18.1 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11) .el6.ppc64 #1 GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11) SMP ppc64 GNU/Linux IBM XL Fortran for Linux, V15.1 (64-bit mode) (ostrich) Linux 2.6.18-308.13.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55) SMP i686 i386 GNU Fortran (GCC) 4.1.2 20080704 (jam) (Red Hat 4.1.2-55) pgcc and pgf90 14.10-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 15.0.1 20141022 Intel(R) Fortran Compiler, Version 15.0.1 Linux 2.6.18-398.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-55) SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.1.2 20080704 (koala) (Red Hat 4.1.2-55) icc (ICC) 15.0.1 20141022 ifort (IFORT) 15.0.1 20141022 Linux 2.6.32-504.1.3.el6 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-11) #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.4.7 20120313 (platypus) (Red Hat 4.4.7-11) icc (ICC) 15.0.1 20141022 ifort (IFORT) 15.0.1 20141022 pgcc and pgf90 14.10-0 64-bit target on x86-64 Linux -tp nehalem Linux 3.10.0-123.8.1.el7 gcc (GCC) 4.8.2 20140120 (Red Hat 4.8.2-16) #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.8.2 20140120 (aws ec2 CentOS 7 image) (Red Hat 4.8.2-16) SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc 2011/11/16 (emu) (see "Known problem" section) Sun Fortran 95 8.6 SunOS_sparc 2011/11/16 Windows 7 Visual Studio 2008 (cmake) Visual Studio 2010 w/ Intel Fortran 14 (cmake) Visual Studio 2012 w/ Intel Fortran 14 (cmake) Visual Studio 2013 w/ Intel Fortran 14 (cmake) Cygwin(CYGWIN_NT-6.1 1.7.32(0.274/5/3) gcc(4.8.3) compiler and gfortran) (cmake and autotools) Windows 7 x64 Visual Studio 2008 (cmake) Visual Studio 2010 w/ Intel Fortran 14 (cmake) Visual Studio 2012 w/ Intel Fortran 14 (cmake) Visual Studio 2013 w/ Intel Fortran 14 (cmake) Windows 8.1 Visual Studio 2012 w/ Intel Fortran 14 (cmake) Visual Studio 2013 w/ Intel Fortran 14 (cmake) Windows 8.1 x64 Visual Studio 2012 w/ Intel Fortran 14 (cmake) Visual Studio 2013 w/ Intel Fortran 14 (cmake) Mac OS X 10.7.5 Apple clang version 3.0 from Xcode 4.6.1 Darwin 11.4.2 gfortran GNU Fortran (GCC) 4.8.2 (duck) icc and ifort Version 13.0.3 20130606 Mac OS X 10.8.5 Apple clang version 5.1 from Xcode 5.1 Darwin 12.5.0 gfortran GNU Fortran (GCC) 4.8.2 (swallow,kite) icc and ifort Version 14.0.4 20140805 Mac OS X 10.9.5 Apple clang version 6.0 from Xcode 6.0.1 Darwin 13.4.0 gfortran GNU Fortran (GCC) 4.8.2 (wren,quail) icc and ifort Version 15.0.1 20141022 Debian7.5.0 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux gcc (Debian 4.7.2-5) 4.7.2 GNU Fortran (Debian 4.7.2-5) 4.7.2 (cmake and autotools) Fedora20 3.15.3-200.fc20.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.8.3 20140624 (Red Hat 4.8.3-1) GNU Fortran (GCC) 4.8.3 20140624 (Red Hat 4.8.3-1) (cmake and autotools) SUSE 13.1 3.11.10-17-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.8.1 GNU Fortran (SUSE Linux) 4.8.1 (cmake and autotools) Ubuntu 14.04 3.13.0-35-generic #62-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.9.1-0ubuntu1) 4.9.1 GNU Fortran (Ubuntu/Linaro 4.9.1-0ubuntu1) 4.9.1 (cmake and autotools) Known problems ============== o Several Fortran examples print "^@" when displaying strings (for example, names of the attributes). This happens because Fortran application doesn't know the length of the strings passed from the C library. 
EIP - 2015-01-11, HDFFR-1477 o CMake builds in Windows uses the same pre-generated ncgen*.[ch] files from the yacc/lex input files. The generated file, ncgenyy.c, uses the header file that Windows does not support. This must be blocked out in order for Windows to use it. AKC 2014-02-03, HDFFR-1424 o CMake "make install" fails installing the tools: Use CPack to create an install package. ADB - 2014/02/03 o CMake does not install these man pages: hdf.1, ncdump.1, ncgen.1 AKC/BMR - 2014/02/02 o For Mac OS X 10.7 Lion and on 10.8 Mountain Lion, several tests fail with GCC, Intel and Clang compilers. Currently, this situation is detected and -O0 level optimization is used. (HDFFR-1318,1358) EIP - 2013/02/05 o On IBM PowerPC 64, hdftest fails when gcc 4.4.6 is used with -O3 optimization level. o When building in AIX systems, if CC is xlc with -qlanglvl=ansi, configure will fail when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of the jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file. Without the -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still fail, but newer xlc versions (e.g., V9.0) pass. AKC - 2010/02/17 o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable. o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale have the same name, subsequent accesses to the dimension scale or to the SDS might produce undesired results because the libraries could not distinguish between the two objects. In the case of writing, data might even be corrupted. For example, SDS data might be written to a dimension variable or vice versa. (bugzilla #624) HDF4 Library Releases 4.2r2 and later make a distinction between an SDS and a dimension variable. However, as with older versions, these recent versions are unable to detect such conflicts in files created by earlier releases. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. FB - 2009/01/26 BMR - revised 2011/06/24 o N-bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... 
o In order for the API SDgetdatasize to get the correct compressed size of
  the data, the dataset needs to be closed (SDendaccess) or read
  (SDreaddata) after being written and before SDgetdatasize is called.
  BMR - 2008/11/22


%%%4.2.10%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

HDF version 4.2.10 released on 2014-02-09
==============================================

INTRODUCTION

This document describes the differences between HDF 4.2.9 and HDF 4.2.10.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2.10.

The HDF 4.2.10 documentation can be found on The HDF Group's website at:

    http://www.hdfgroup.org/doc.html

Previous versions of the documentation are available from the FTP server:

    ftp://ftp.hdfgroup.org/HDF/Documentation/

First-time HDF users are encouraged to read the HDF FAQ, which can be
reached from the HDF product home page:

    http://hdfgroup.org/products/hdf4/

If you have any questions or comments, please see the HDF Support page:

    http://hdfgroup.org/services/support.html

CONTENTS

- New features and changes
  -- Configuration
- Support for new platforms and compilers
- Bugs fixed since HDF 4.2.9
  -- Configuration
  -- Library
  -- Utilities
- Documentation
- Platforms tested
- Known problems

New features and changes
========================

Configuration
=============
  - CMake: Added support to create dmg bundles on Mac.
    (ADB 2013/9/12)

  - CMake: Added support to use the Windows /MT option.
    (ADB 2013/6/10)

Support for new platforms and compilers
=======================================
  - Visual Studio 2012 w/ Intel Fortran 13 on Windows 7 and Windows 8
  - Mac OS X Mavericks with clang and gfortran

Bugs fixed since HDF 4.2.9
==========================

Configuration
=============
  - Removed the requirement for yacc/lex-like tools.  The ncgenXXX.* files
    generated from the yacc and lex input files are pre-created in the
    source code to build the ncgen tool.  The msoft*.[ch] files were for
    the Windows build but were outdated.  CMake uses the pre-created files
    instead.  See the known problem below.
    (AKC 2014/02/02 HDFFR-1419)

  - Removed old Macintosh platform code that is no longer used.
    (AKC 2014/01/21 HDFFR-1340)

  - Changed Mac platforms to use the Apple-supported clang compiler as the
    default C compiler.
    (AKC 2014/01/15 HDFFR-1318)

  - Removed the following individual platform-specific files; they are now
    produced by configure from the corresponding *.in files:
        mfhdf/libsrc/config/netcdf-XXX.h     by mfhdf/libsrc/netcdf.h.in
        mfhdf/fortran/config/ftest-XXX.f     by mfhdf/fortran/ftest.f.in
        mfhdf/fortran/config/jackets-XXX.c   by mfhdf/fortran/jackets.c.in
        mfhdf/fortran/config/netcdf-XXX.inc  by mfhdf/fortran/netcdf.inc.in
    (AKC 2013/12/31 HDFFR-1320/476)

  - The following platforms are old and no longer available; their support
    code was removed from the configure files: alpha, convex, dec, fujivp,
    hpux, ia64, irix32, irix4, irix5, irix6, mac, solarisx86, sun, t3e,
    unicos.
    (AKC 2013/12/26 HDFFR-1320)

  - Removed -Xc (strict ANSI standard) from the default CFLAGS for Solaris,
    since the latest Sun compiler versions 5.11 and 5.12 have a conflict
    with the system header file.  Since current C compilers should be at
    least ANSI (a.k.a. C89) compliant, the removal of -Xc should be safe.
    This also fixes a previous known problem of needing to use -xc99 to
    build HDF4.
    (AKC 2013/12/20 HDFFR-1361)

  - CMake: Changed the name of TGZ_PATH to TGZPATH.
    (ADB 2013/9/12)

  - CMake: Removed the extra flag POSIX_SOURCE, as it caused failures on
    Apple Mac builds.
(ADB 2013/8/7) Library ========= - SDsetblocksize and VSsetblocksize would not change the block size if the sds/vdata did not use linked-block before it was closed. The problem is now fixed. (BMR 2013/1/15 - HDFFR-1357) - Patches from user are applied to the C test to correct an overflow variable and to the Fortran source for some missing declarations. (BMR/EP 2014/1/15 - HDFFR-1367) - Examples GR_write_chunks.c and GR_read_chunks.c were added. (BMR 2014/12/30 - HDFFR-1402) Utilities ========= - ncdump displayed garbage in place of fill-values when a variable had unlimited dimension and had been written with less number of records than the largest number of records in the file. This is now fixed. (BMR 2014/12/16 - HDFFR-1390) Documentation ============= - None Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.32-358.18.1 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3) .el6.ppc64 #1 GNU Fortran (GCC) 4.4.7 20120313 (Red Hat 4.4.7-3) SMP ppc64 GNU/Linux IBM XL Fortran for Linux, V13.1 (64-bit mode) (ostrich) Linux 2.6.18-308.13.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52) SMP i686 i386 GNU Fortran (GCC) 4.1.2 20080704 (jam) (Red Hat 4.1.2-52) pgcc and pgf90 13.7-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 13.1.3 20130607 Intel(R) Fortran Compiler, Version 13.1.3 Linux 2.6.18-308.24.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-54) SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.1.2 20080704 (koala) (Red Hat 4.1.2-54) icc (ICC) 13.1.3 20130607 ifort (IFORT) 13.1.3 20130607 pgcc and pgf90 13.7-0 64-bit target on x86-64 Linux -tp nehalem Linux 2.6.32-431.el6.x86_64 gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-4) #1 SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.4.7 20120313 (platypus) (Red Hat 4.4.7-4) icc (ICC) 13.1.3 20130607 ifort (IFORT) 13.1.3 20130607 SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc 2011/11/16 (emu) (see "Known problem" section) Sun Fortran 95 8.6 SunOS_sparc 2011/11/16 Windows 7 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Visual Studio 2012 w/ Intel Fortran 13 (cmake) Cygwin(CYGWIN_NT-6.1 1.7.25(0.270/5/3) gcc(4.7.3) compiler and gfortran) (cmake and autotools) Windows 7 x64 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Visual Studio 2012 w/ Intel Fortran 13 (cmake) Windows 8 Visual Studio 2012 w/ Intel Fortran 13 (cmake) Windows 8 x64 Visual Studio 2012 w/ Intel Fortran 13 (cmake) Mac OS X Intel 10.6.8 Apple clang version 1.7 from Xcode 3.2.6 Darwin 10.8.0 gfortran GNU Fortran (GCC) 4.6.2 (fred) icc and ifort Version 12.1.6 20120928 Mac OS X 10.7.5 Apple clang version 3.0 from Xcode 4.6.1 Darwin 11.4.2 gfortran GNU Fortran (GCC) 4.6.2 (duck) icc and ifort Version 13.0.3 20130606 Mac OS X 10.8.5 Apple clang version 4.2 from Xcode 4.6.1 Darwin 12.2.0 gfortran GNU Fortran (GCC) 4.6.2 (wren) icc and ifort Version 13.0.3 20130606 Mac OS X 10.8.5 Apple clang version 5.0 from Xcode 5.0.2 Darwin 12.2.0 gfortran GNU Fortran (GCC) 4.6.2 (swallow,kite) icc and ifort Version 14.0.1 20131010 Mac OS X 10.9.1 Apple LLVM version 5.0 (clang-500.2.79) Darwin 13.0.0 (based on LLVM 3.3svn) gfortran GNU Fortran (GCC) 4.6.2 Debian7.2.0 3.2.0-4-amd64 #1 SMP Debian 3.2.51-1 x86_64 GNU/Linux gcc (Debian 4.7.2-5) 4.7.2 GNU Fortran (Debian 4.7.2-5) 4.7.2 (cmake and autotools) Fedora20 3.11.10-301.fc20.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.8.2 20131212 (Red Hat 4.8.2-7) GNU Fortran (GCC) 4.8.2 20130603 (Red Hat 
4.8.2-7) (cmake and autotools) SUSE 13.1 3.11.6-4-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.8.1 GNU Fortran (SUSE Linux) 4.8.1 (cmake and autotools) Ubuntu 13.10 3.11.0-13-generic #20-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.8.1-10ubuntu8) 4.8.1 GNU Fortran (Ubuntu/Linaro 4.8.1-10ubuntu8) 4.8.1 (cmake and autotools) Known problems ============== o CMake builds in Windows uses the same pre-generated ncgen*.[ch] files from the yacc/lex input files. The generated file, ncgenyy.c, uses the header file that Windows does not support. This must be blocked out in order for Windows to use it. (AKC 2014-02-03 HDFFR-1424). o CMake "make install" fails installing the tools: Use CPack to create an install package. ADB - 2014/02/03 o CMake does not install these man pages: hdf.1, ncdump.1, ncgen.1 AKC/BMR - 2014/02/02 o For Mac OS X 10.7 Lion and on 10.8 Mountain Lion, several tests fail with GCC, Intel and Clang compilers. Currently, this situation is detected and -O0 level optimization is used. We will work on the issue for the next release. (HDFFR-1318,1358) EIP - 2013/02/05 o On IBM PowerPC 64, hdftest fails when gcc 4.4.6 is used with -O3 optimization level. o When building in AIX systems, if CC is xlc with -qlanglvl=ansi, configure will fail when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of the jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file. Without the -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still fail, but newer xlc versions (e.g., V9.0) pass. AKC - 2010/02/17 o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable. o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale have the same name, subsequent accesses to the dimension scale or to the SDS might produce undesired results because the libraries could not distinguish between the two objects. In the case of writing, data might even be corrupted. For example, SDS data might be written to a dimension variable or vice versa. (bugzilla #624) HDF4 Library Releases 4.2r2 and later make a distinction between an SDS and a dimension variable. However, as with older versions, these recent versions are unable to detect such conflicts in files created by earlier releases. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. FB - 2009/01/26 BMR - revised 2011/06/24 o N-bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. 
To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata) after being written and before SDgetdatasize is called. BMR - 2008/11/22 %%%4.2.9%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF version 4.2.9 released on 2013-02-07 ============================================= INTRODUCTION This document describes the differences between HDF 4.2.8 and HDF 4.2.9. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2.9. The HDF 4.2.9 documentation can be found on the The HDF Group's website at: http://www.hdfgroup.org/doc.html Previous versions of the documentation are available from the FTP server: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/ If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html CONTENTS - New features and changes -- Configuration - Support for new platforms and compilers - Bugs fixed since HDF 4.2.8 -- Configuration -- Library -- Utilities - Documentation - Platforms tested - Known problems New features and changes ======================== Configuration ============= - The macro H4_NO_DEPRECATED_SYMBOLS was added to handle deprecated functions/features. To use deprecated functions/features, the library must be configured with option HDF4_ENABLE_DEPRECATED_SYMBOLS. (ADB, BMR 2013/1/25) Support for new platforms and compilers ======================================= - Ported to Mac OSX 10.8 (Mountain Lion) with Clang as the default C compiler. (AKC 2013/1/19) - Ported to Mac OSX 10.8 (Mountain Lion) with Intel compilers. (EIP 2013/02/05) Bugs fixed since HDF 4.2.8 ========================= Configuration ============= - Cygwin >= 1.7.7; The SunRPC of the glibc has been replaced by a TI-RPC (Transport Independent RPC) library to support IPv6. Configure has been updated to look for the tirpc library instead of rpc. (ADB 2012/11/09) Library ========= - SDgetcompress is now deprecated and not available by default. Its availability can be activated using option HDF4_ENABLE_DEPRECATED_SYMBOLS. (BMR 2012/1/25) - Some memory leaks were fixed. (BMR 2012/10/01) Utilities ========= - Vnattrs/Vattrinfo/Vgetattr are replaced with Vnattrs2/Vattrinfo2/Vgetattr2 in various tools to ensure attributes that are not created by Vsetattr can still be detected and accessed. The Reference Manual and User's Guide provide details about this issue. (BMR 2012/12/25) Documentation ============= - The Reference Manual and User's Guide have minor updates. - The Specification and Developer's Guide is extensively updated. 
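    The sketch below is an editorial addition, not part of the 4.2.9 release
    notes.  It illustrates how legacy code that still calls the deprecated
    SDgetcompress (see the Library note above) can be guarded so it also
    builds against a library configured without deprecated symbols, using
    the H4_NO_DEPRECATED_SYMBOLS macro from the Configuration note.  It
    assumes the macro is visible through the installed headers and that
    SDgetcompinfo, the documented replacement, takes the same arguments.

        #include "mfhdf.h"  /* SD interface: int32, intn, comp_coder_t, comp_info */

        static intn get_compression(int32 sds_id, comp_coder_t *ctype, comp_info *cinfo)
        {
        #ifdef H4_NO_DEPRECATED_SYMBOLS
            /* Deprecated symbols were excluded at configure time. */
            return SDgetcompinfo(sds_id, ctype, cinfo);
        #else
            /* Library built with HDF4_ENABLE_DEPRECATED_SYMBOLS; legacy call kept. */
            return SDgetcompress(sds_id, ctype, cinfo);
        #endif
        }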
Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.32-279.19.1 gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4) .el6.ppc64 #1 GNU Fortran (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4) SMP ppc64 GNU/Linux IBM XL Fortran for Linux, V13.1 (64-bit mode) (ostrich) Linux 2.6.18-308.13.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52) SMP i686 i386 GNU Fortran (GCC) 4.1.2 20080704 (jam) (Red Hat 4.1.2-52) pgcc and pgf90 11.9-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 12.1.0 20110811 Intel(R) Fortran Compiler, Version 12.1.0 Linux 2.6.18-308.24.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-52) SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.1.2 20080704 (koala) (Red Hat 4.1.2-52) icc (ICC) 12.1.0 20110811 ifort (IFORT) 12.1.0 20110811 pgcc and pgf90 11.9-0 64-bit target on x86-64 Linux -tp nehalem SunOS 5.10 32- and 64-bit Sun C 5.9 SunOS_sparc Patch 124867-16 2010/08/11 (linew) Sun Fortran 95 8.3 SunOS_sparc Patch 127000-13 2010/01/26 Sun C 5.11 SunOS_sparc 2010/08/13 Sun Fortran 95 8.5 SunOS_sparc 2010/08/13 SunOS 5.11 32- and 64-bit Sun C 5.12 SunOS_sparc 2011/11/16 (emu) (see "Known problem" section) Sun Fortran 95 8.6 SunOS_sparc 2011/11/16 Windows 7 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(CYGWIN_NT-6.1 1.7.15(0.260/5/3) gcc(4.5.3) compiler and gfortran) Windows 7 x64 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(CYGWIN_NT-6.1 1.7.15(0.260/5/3) gcc(4.5.3) compiler and gfortran) MAC OS X Intel 10.6.8 icc (ICC) 12.1 Build 20120928 (64-bit) ifort (IFORT) 12.1 Build 20120928 Darwin 10.8.0 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (fred) GNU Fortran (GCC) 4.6.2 20111019 Apple clang version 1.7 (tags/Apple/clang-77) (based on LLVM 2.9svn) MAC OS X Intel 10.7.5 i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (64 bit) GNU Fortran (GCC) 4.6.2 20111019 Darwin 11.4.2 icc and ifort Version 13.0 Build 20121010 (duck) Apple clang version 3.0 (tags/Apple/clang-211.12) (based on LLVM 3.0svn) Mac OS X 10.8.2 Apple LLVM version 4.2 (clang-425.0.24) Darwin 12.2.0 gcc i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (wren) gfortran GNU Fortran (GCC) 4.6.2 icc and ifort Version 13.0.1.119 Build 20121010 Debian6.0.3 2.6.32-5-686 #1 SMP i686 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Debian6.0.3 2.6.32-5-amd64 #1 SMP x86_64 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Fedora17 3.5.2-1.fc17.i6866 #1 SMP i686 i686 i386 GNU/Linux gcc (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5) GNU Fortran (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5) Fedora17 3.5.2-1.fc17.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5) GNU Fortran (GCC) 4.7.0 20120507 (Red Hat 4.7.0-5) SUSE 12.2 3.4.6-2.10-desktop #1 SMP PREEMPT i686 i686 i386 GNU/Linux gcc (SUSE Linux) 4.7.1 GNU Fortran (SUSE Linux) 4.7.1 SUSE 12.2 3.4.6-2.10-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.7.1 GNU Fortran (SUSE Linux) 4.7.1 Ubuntu 12.04 3.2.0-29-generic #46-Ubuntu SMP i686 GNU/Linux gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 GNU Fortran (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Ubuntu 12.04 3.2.0-29-generic #46-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 GNU Fortran (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Known problems ============== o On SunOS 5.11, the -xc99 flag has to be used with the cc compiler to avoid compilation errors. 
Future releases will automatically detect the system and add the flag. (HDFFR-1361) EIP - 2013/02/05 o For Mac OS X 10.7 Lion and on 10.8 Mountain Lion, several tests fail with GCC, Intel and Clang compilers. Currently, this situation is detected and -O0 level optimization is used. We will work on the issue for the next release. (HDFFR-1358,1327,1358) EIP - 2013/02/05 o On IBM PowerPC 64, hdftest fails when gcc 4.4.6 is used with -O3 optimization level. o When building in AIX systems, if CC is xlc with -qlanglvl=ansi, configure will fail when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of the jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file. Without the -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still fail, but newer xlc versions (e.g., V9.0) pass. AKC - 2010/02/17 o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable. o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale have the same name, subsequent accesses to the dimension scale or to the SDS might produce undesired results because the libraries could not distinguish between the two objects. In the case of writing, data might even be corrupted. For example, SDS data might be written to a dimension variable or vice versa. HDF4 Library Releases 4.2r2 and later make a distinction between an SDS and a dimension variable. However, as with older versions, these recent versions are unable to detect such conflicts in files created by earlier releases. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. o This release does not support VMS systems. o N-bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata) after being written and before SDgetdatasize is called. %%%4.2.8%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF version 4.2.8 released on 2012-08-03 =================================================== INTRODUCTION This document describes the differences between HDF 4.2.7 and HDF 4.2.8. 
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2.8.

The HDF 4.2.8 documentation can be found on The HDF Group's FTP server:

    ftp://ftp.hdfgroup.org/HDF/Documentation/

First-time HDF users are encouraged to read the HDF FAQ, which can be
reached from the HDF product home page:

    http://hdfgroup.org/products/hdf4/

If you have any questions or comments, please see the HDF Support page:

    http://hdfgroup.org/services/support.html

CONTENTS

- New features and changes
  -- Source distribution
  -- Configuration
  -- Library
  -- Utilities
- Support for new platforms and compilers
- Bugs fixed since HDF 4.2.7
  -- Configuration
  -- Library
  -- Utilities
- Documentation
- Platforms tested
- Known problems

New features and changes
========================

Source distribution
===================
  - [None]

Configuration
=============
  - [None]

Library
=========
  - It was discovered by the HDF Mapping Project that certain types of
    palettes could not be retrieved by existing functions.  A new function,
    GRgetpalinfo, was added to allow applications to get information about
    palettes.  Please refer to the HDF User's Guide and Reference Manual
    for more information regarding this palette issue.
    BMR - 2012/07/26

  - It was also discovered that IMCOMP compression was not detected.
    Although IMCOMP is no longer supported for new data, HDF still needs to
    detect IMCOMP compression in existing data.  We added COMP_CODE_IMCOMP
    to comp_coder_t with value 12 (the same as COMP_IMCOMP).  The library
    and hdp can now detect IMCOMP compression; however, writing
    IMCOMP-compressed images is not allowed.
    BMR - 2012/07/26

  - Added GRgetcomptype to return an image's compression type, which can
    include IMCOMP (see the usage sketch below, after the bug-fix entries).
    BMR - 2012/07/26

Test
====
  - [None]

Utilities
=========
  - [None]

Support for new platforms and compilers
=======================================
  - Mac OS X 10.7.4 with GNU gcc and gfortran compilers

Bugs fixed since HDF 4.2.7
==========================

Configuration
=============
  - Added config/apple-darwin11.4.0, customized for Mac OS X 10.7 Lion.
    This file sets the production CFLAGS to -O0 to eliminate the test
    errors encountered only on Lion systems, which lets HDF4 be built and
    run on Lion.  More investigation is needed to locate the exact sources
    of the errors.  (HDFFR-1327 and HDFFR-1328)
    AKC - 2012/07/03

  - The USE_ENUM setting in mfhdf/libsrc/netcdf.h, which is copied from
    config/netcdf-apple.h, generated an error because xdr_enum overwrote
    memory outside of its designated enum variable.  This happened only
    with the Apple-provided compiler (i686-apple-darwin11-llvm-gcc-4.2
    (GCC) 4.2.1).  We decided not to define USE_ENUM on any Apple OS for
    now.  (HDFFR-1318)
    AKC - 2012/07/01

Library
=========
  - [None]

Utilities
=========
  - Added the test file IMCOMP.hdf and the sample output file dumpgr-20.out
    for testing the detection of IMCOMP-compressed images in hdp.
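    The sketch below is an editorial illustration, not part of the 4.2.8
    release notes.  It shows how an application might use the new
    GRgetcomptype call described under Library above to check whether a
    raster image uses IMCOMP compression.  GRgetcomptype and
    COMP_CODE_IMCOMP come from the notes; the exact prototype is assumed to
    match the Reference Manual entry, and the surrounding calls are the
    standard HDF4 GR interface.

        #include <stdio.h>
        #include "hdf.h"   /* Hopen/Hclose and the GR interface */

        /* Report whether the first raster image in a file is IMCOMP-compressed
           (error checking omitted for brevity). */
        static void report_imcomp(const char *filename)
        {
            int32 file_id = Hopen(filename, DFACC_READ, 0);
            int32 gr_id   = GRstart(file_id);
            int32 ri_id   = GRselect(gr_id, 0);      /* first image in the file */
            comp_coder_t  ctype;

            if (GRgetcomptype(ri_id, &ctype) != FAIL && ctype == COMP_CODE_IMCOMP)
                printf("%s: image 0 uses IMCOMP compression (read-only)\n", filename);

            GRendaccess(ri_id);
            GRend(gr_id);
            Hclose(file_id);
        }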
BMR - 2012/07/26 Documentation ============== - The Reference Manual and User's Guide have been updated to include new functions: + GRgetpalinfo + GRgetcomptype BMR - 2012/07/26 Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.32-279.2.1 gcc (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4) .el6.ppc64 #1 GNU Fortran (GCC) 4.4.6 20120305 (Red Hat 4.4.6-4) SMP ppc64 GNU/Linux IBM XL Fortran for Linux, V13.1 (64-bit mode) (ostrich) Linux 2.6.18-194.3.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) SMP i686 i386 GNU Fortran (GCC) 4.1.2 20080704 (jam) (Red Hat 4.1.2-52) pgcc and pgf90 11.9-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 12.1.0 20110811 Intel(R) Fortran Compiler, Version 12.1.0 Linux 2.6.18-274.17.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51) SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.1.2 20080704 (koala) (Red Hat 4.1.2-51) icc (ICC) 12.1.0 20110811 ifort (IFORT) 12.1.0 20110811 pgcc and pgf90 11.9-0 64-bit target on x86-64 Linux -tp nehalem SunOS 5.10 32- and 64-bit Sun C 5.9 SunOS_sparc Patch 124867-16 2010/08/11 (linew) Sun Fortran 95 8.3 SunOS_sparc Patch 127000-13 2010/01/26 Xeon Linux 2.6.32.24-0.2.1.2230.2.PTF-default #1 SMP x86_64 Intel(R) C Compiler Version 11.1.073 20100806 SGI Altix UV Intel(R) Fortran Compiler Version 11.1.073 (ember) gcc (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973 GNU Fortran (SUSE Linux) 4.3.4 [gcc-4_3-branch revision 152973] Windows 7 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(1.7.15 native gcc(4.5.3) compiler and gfortran) Windows 7 x64 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(1.7.15 native gcc(4.5.3) compiler and gfortran) MAC OS X Intel 10.6.8 Darwin 10.8.0 (32 bit) i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (tejeda) GNU Fortran (GCC) 4.6.1 icc (ICC) 12.1.0 20110811 ifort (IFORT) 12.1.0 20110811 MAC OS X Intel 10.6.8 Darwin 10.8.0 (64 bit) icc (ICC) 12.1.0 20110811 (fred) ifort (IFORT) 12.1.0 20110811 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 GNU Fortran (GCC) 4.6.2 MAC OS X Intel 10.7.4 Darwin 11.4.0 (64 bit) i686-apple-darwin11-llvm-gcc-4.2.1 (GCC) 4.2.1 (hdf-duck) gcc (GCC) 4.6.2 GNU Fortran (GCC) 4.6.2 Debian6.0.3 2.6.32-5-686 #1 SMP i686 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Debian6.0.3 2.6.32-5-amd64 #1 SMP x86_64 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Fedora15 3.2.9-2.fc16.i686.PAE #1 SMP i686 i686 i386 GNU/Linux gcc (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1) GNU Fortran (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1) Fedora15 3.2.9-2.fc16.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1) GNU Fortran (GCC) 4.6.2 20111027 (Red Hat 4.6.2-1) SUSE 12.1 3.1.9-1.4-desktop #1 SMP PREEMPT i686 i686 i386 GNU/Linux gcc (SUSE Linux) 4.6.2 GNU Fortran (SUSE Linux) 4.6.2 SUSE 12.1 3.1.9-1.4-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.6.2 GNU Fortran (SUSE Linux) 4.6.2 Ubuntu 11.10 3.2.0-26-generic #23-Ubuntu SMP i686 GNU/Linux gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 GNU Fortran (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Ubuntu 11.10 3.2.0-26-generic #23-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 GNU Fortran (Ubuntu/Linaro 4.6.3-1ubuntu5) 4.6.3 Known problems ============== o For Mac OS X 10.7 Lion, the test suite on the mfhdf side encounter 2 erros, when optimization is used, whether with the Apple GCC compiler or 
with the GNU GCC compiler. CFLAGS must be set to use no optimization (-O0 or -g) to avoid these failures. config/apple-darwin11.4.0 is added to direct configure to use -O0 for CFLAGS for Mac OS X 10.7 Lion. We hope to fix the bug in the next release. (HDFFR-1327 and HDFFR-1328) AKC - 2012/07/03 o On IBM PowerPC 64 hdftest fails when gcc 4.4.6 is used with -O3 optimization level. o When building in AIX systems, if CC is xlc with -qlanglvl=ansi, configure will fail when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file. Without the -qlanglvl=ansi, some older xlc (e.g., V7.0) still fails but newer xlc (e.g., V9.0) passes. AKC - 2010/02/17 o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable. o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale have the same name, subsequent accesses to the dimension scale or to the SDS might produce undesired results because the libraries could not distinguish between the two objects. In the case of writing, data might even be corrupted. For example, SDS data might be written to a dimension variable or vice versa. HDF4 Library Releases 4.2r2 and later make a distinction between an SDS and a dimension variable. However, as with older versions, these recent versions are unable to detect such conflicts in files created by earlier releases. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. o This release does not support VMS systems. o N-bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata) after being written and before SDgetdatasize is called. %%%4.2.7%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF version 4.2.7 released on 2012-02-06 =================================================== INTRODUCTION This document describes the differences between HDF 4.2.6 and HDF 4.2.7. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2.7. 
The HDF 4.2.7 documentation can be found on the The HDF Group's FTP server: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/ If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html CONTENTS - New features and changes -- Source distribution -- Configuration -- Library -- Utilities - Support for new platforms and compilers - Bugs fixed since HDF4.2.6 -- Utilities -- Library -- Configuration - Documentation - Platforms tested - Known problems New features and changes ======================== Source distribution =================== - None Configuration ============= - Fortran fort_ps folders are now obsolete, and VS2008 project files have been updated. CMake was tested on Windows, Linux, and Cygwin. ADB - 2012/1/20 Library ========= - Fortran fort_ps folders are obsolete and have been removed. ADB - 2012/1/20 - VSgetexternalfile and SDgetexternalfile are superseded by VSgetexternalinfo and SDgetexternalinfo because their prototypes missed the parameter "length" for external data length. (part 1 in bug HDFFR-1297) BMR - 2012/1/15 - Visinternal is superseded by Vgisinternal, which allows the handling of special cases in old data. (part 2 in bug HDFFR-1297) BMR - 2012/1/15 - Fortran wrappers vfgvgroups and vsfgvdatas have been added for Vgetvgroups and VSgetvdatas, respectively. MSB - 2012/1/5 Test ==== - The following files were added: ./hdf/test/test_files/README: special notes on data files in this directory ./hdf/test/test_files/grtdfui83.hdf: file to test old data situations ./mfhdf/test/tutils.c: added to provide common code for various tests BMR - 2011/1/21 - The following file was removed: ./mfhdf/test/tidtypes.c: changed to tmixed_apis.c for broader contents BMR - 2011/1/21 Utilities ========= - The following files were added: ./mfhdf/dumper/testfiles/Roy.nc: file to test skipping compression check in a netCDF file ./mfhdf/dumper/testfiles/dumpsds-18.out: output of testing netCDF file ./hdf/test/tvnameclass.c: tests issues involving vgroup/vdata names/classes BMR - 2011/1/21 Support for new platforms and compilers ======================================= - IBM XL Fortran for Linux 64-bit, V13.1 on Linux PowerPC 64 EIP - 2012-02-01 Bugs fixed since HDF4.2.6 ========================= Utilities ========= - hrepack: Version HDF4 2.6 does not allow the combination of unlimited dimensions and compression, which is wrong. - Freeing incorrect buffers caused segfault. It was previously commented out, thus resulted in memory leaks. This is now fixed. (HDFFR-479) BMR - 2011/11/3 - hdp: fixed to skip netCDF files when checking for compression. (HDFFR-473) BMR - 2011/11/1 Library ========= - SDgetchunkinfo and SDreadchunk failed on an empty SDS when the file is opened as read-only (HDFFR-171). This is now fixed. BMR - 2011/10/20 - SDcheckempty was fixed to return "empty" when detecting a pair of DFTAG_SD/ that is associated with offset=INVALID_OFFSET and length=INVALID_LENGTH. BMR - 2011/10/9 - Vgetclass and Vgetname were fixed to return vgclass and vgname with null terminated character when the class or name is not set. 
(HDFFR-1288) BMR - 2011/9/19 Configuration ============= - [None] Documentation ============== - The Reference Manual and User's Guide have been updated to include new functions: + VSgetexternalinfo and SDgetexternalinfo + Vgisinternal + Fortran wrappers vfgvgroups and vsfgvdatas BMR - 2012/1/22 Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.32-220.2.1 gcc (GCC) 4.4.6 20110731 (Red Hat 4.4.6-3) .el6.ppc64 #1 GNU Fortran (GCC) 4.4.6 20110731 SMP ppc64 GNU/Linux IBM XL Fortran for Linux, V13.1 (64-bit mode) (ostrich) Linux 2.6.18-194.3.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) SMP i686 i386 GNU Fortran (GCC) 4.1.2 20080704 (jam) (Red Hat 4.1.2-51) pgcc and pgf90 11.8-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 12.0.4 20110427 Intel(R) Fortran Compiler, Version 12.0.4 Linux 2.6.18-274.17.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-51) SMP x86_64 GNU/Linux GNU Fortran (GCC) 4.1.2 20080704 (koala) (Red Hat 4.1.2-51) icc (ICC) 12.0.4 20110427 ifort (IFORT) 12.0.4 20110427 pgcc and pgf90 11.8-0 64-bit target on x86-64 Linux -tp nehalem SunOS 5.10 32- and 64-bit Sun C 5.9 SunOS_sparc Patch 124867-16 2010/08/11 (linew) Sun Fortran 95 8.3 SunOS_sparc Patch 127000-13 2010/01/26 Xeon Linux 2.6.32.24-0.2.1.2230.2.PTF-default #1 SMP x86_64 Intel(R) C Compiler Version 11.1.073 20100806 SGI Altix UV Intel(R) Fortran Compiler Version 11.1.073 (ember) Windows XP Visual Studio 2008 w/ Intel Fortran 10.1 (project files) Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(1.7.9 native gcc(4.5.3) compiler and gfortran) Windows XP x64 Visual Studio 2008 w/ Intel Fortran 10.1 (project files) Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Cygwin(1.7.9 native gcc(4.5.3) compiler and gfortran) Windows 7 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) Windows 7 x64 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 w/ Intel Fortran 12 (cmake) MAC OS X Intel 10.6.8 Darwin 10.8.0 (32 bit) i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (tejeda) GNU Fortran (GCC) 4.6.1 icc (ICC) 12.1.0 20110811 ifort (IFORT) 12.1.0 20110811 MAC OS X Intel 10.6.8 Darwin 10.8.0 (64 bit) icc (ICC) 12.1.0 20110811 (fred) ifort (IFORT) 12.1.0 20110811 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 GNU Fortran (GCC) 4.6.1 Debian6.0.3 2.6.32-5-686 #1 SMP i686 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Debian6.0.3 2.6.32-5-amd64 #1 SMP x86_64 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Fedora15 2.6.41.4-1.fc15.i686.PAE #1 SMP i686 i686 i386 GNU/Linux gcc (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9) GNU Fortran (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9) Fedora15 2.6.41.4-1.fc15.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9) GNU Fortran (GCC) 4.6.1 20110908 (Red Hat 4.6.1-9) SUSE 11.4 2.6.37.6-0.9-desktop #1 SMP PREEMPT i686 i686 i386 GNU/Linux gcc (SUSE Linux) 4.5.1 20101208 GNU Fortran (SUSE Linux) 4.5.1 20101208 SUSE 11.4 2.6.37.6-0.9-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.5.1 20101208 GNU Fortran (SUSE Linux) 4.5.1 20101208 Ubuntu 11.10 3.0.0-14-generic #23-Ubuntu SMP i686 GNU/Linux gcc (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1 GNU Fortran (Ubuntu/Linaro 4.6.4-9ubuntu3) 4.6.1 Ubuntu 11.10 3.0.0-14-generic #23-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1 GNU 
Fortran (Ubuntu/Linaro 4.6.1-9ubuntu3) 4.6.1 Known problems ============== o On IBM PowerPC 64 hdftest fails when gcc 4.4.6 is used with -O3 optimization level. o When building in AIX systems, if CC is xlc with -qlanglvl=ansi, configure fails when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of jpeg library have HAVE_STDLIB_H defined in the jconfig.h header file. Without the -qlanglvl=ansi, some older xlc (e.g., V7.0) still fails but newer xlc (e.g., V9.0) passes. AKC - 2010/02/17 o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable. o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale have the same name, subsequent accesses to the dimension scale or to the SDS might produce undesired results because the libraries could not distinguish between the two objects. In the case of writing, data might even be corrupted. For example, SDS data might be written to a dimension variable or vice versa. HDF4 Library Releases 4.2r2 and later make a distinction between an SDS and a dimension variable. However, as with older versions, these recent versions are unable to detect such conflicts in files created by earlier releases. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. o This release does not support VMS systems. o N-bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata) after being written and before SDgetdatasize is called. %%%4.2.6%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF version 4.2.6 released on 2011-06-15 =================================================== INTRODUCTION This document describes the differences between HDF 4.2.5 and HDF 4.2.6. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2.6. 
The HDF 4.2.6 documentation can be found on the The HDF Group's FTP server: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/ If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html CONTENTS - New features and changes -- Source distribution -- Configuration -- Library -- Utilities - Support for new platforms and compilers - Bugs fixed since HDF4.2.5 -- Utilities -- Library -- Configuration - Documentation - Platforms tested - Known problems New features and changes ======================== Source distribution =================== - [None] Configuration ============= - CMake support has been added. Current version recommemded is CMake 2.8.4. ADB - 2011/06/14 - Specified explicit version of the netCDF API (v 2.3.2) used by HDF in configure help and in configuration SUMMARY. Library ========= - Prior to HDF version 4.2.6, passing 0 for count caused failure; however, the failure did not occur in SDsetattr but was delayed until SDend. This can potentially cause file corruption. Starting from release 4.2.6, SDsetattr will fail when count is 0. BMR - 2011/06/09 - The function Vnattrs only processes attributes created by Vsetattr. It is not aware of attributes created by the pre-Vsetattr methods. The following functions are added to work around the limitation of Vnattrs: + Vnattrs2 gives number of new- and old-style attributes + Vattrinfo2 gives information about an old or new style attribute + Vgetattr2 reads values of an old or new style attribute BMR - 2011/06/09 - The following functions were added to support the HDF4 Mapping project specifically: + VSgetdatainfo gives offsets/lengths of a vdata's data + ANgetdatainfo gives offset/length of an annotation's data + SDgetdatainfo gives offsets/lengths of a data set's data + GRgetdatainfo gives offsets/lengths of a raster image's data + VSgetattdatainfo gives offset/length of vdata attribute's data + VGgetattdatainfo gives offset/length of vgroup attribute's data + GRgetattdatainfo gives offset/length of raster image attribute's data + SDgetattdatainfo gives offset/length of data set attribute's data + SDgetoldattdatainfo gives offset/length of a pre-defined attribute in old format + SDgetanndatainfo gives offset/lenth of an annotation belonging to an SDS + Vgetvgroups gives a list of user-created vgroups + VSgetvdatas gives a list of user-created vdatas + VSofclass gives a list of vdatas of a specific class + Hgetntinfo gives some information about a number type + GR2bmapped indicates whether a raster image should be mapped BMR - 2011/06/09 - Two functions are added to provide information of an external file: VSgetexternalfile for a vdata and SDgetexternalfile for a data set. BMR - 2011/06/09 Utilities ========= - [None] Support for new platforms and compilers ======================================= - [None] Bugs fixed since HDF4.2.5 ========================= Utilities ========= - hdp dumpvd and dumpsds now provide the name of the external file in the error message when reading fails due to missing the external file. BMR - 2011/06/10 - The problem where hdiff displays zeroes for Vdata values that are non-zero had been fixed. 
BMR - 2011/06/09 - hdp had been fixed for the problems of + dumpvd printing Name/Class incorrectly + dumprig giving wrong info about RIS24 and palettes + dumpvg missing some items in the Graphical representation BMR - 2011/06/09 Library ========= - The JPEG test failure due to the different versions of the JPEG library had been fixed. HDF4 is expected to build and pass regression tests for any version of the JPEG library available on the user's systems. BMR - 2011/06/09 Configuration ============= - [None] Documentation ============== The updated HDF Reference Manual is now available in HTML format. Platforms tested ================ This version has been tested in the following platforms: Linux 2.6.18-194.3.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) SMP i686 i386 G95 (GCC 4.0.3 (g95 0.92!) Jun 24 2009) (jam) g77 (GCC) 3.4.6 20060404 pgcc and pgf90 10.6-0 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 11.1 Intel(R) Fortran Compiler, Version 11.1 GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) Linux 2.6.18-238.9.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) SMP x86_64 GNU/Linux G95 (GCC 4.0.3 (g95 0.93!) Apr 21 2010) (koala) icc (ICC) 12.0.3 20110309 ifort (IFORT) 12.0.3 20110309 pgcc and pgf90 11.3-0 64-bit target on x86-64 Linux -tp nehalem GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-50) SunOS 5.10 32- and 64-bit Sun C 5.9 SunOS_sparc Patch 124867-16 (linew) Sun Fortran 95 8.3 SunOS_sparc Patch 127000-13 Xeon Linux 2.6.32.24-0.2.1.2230.2.PTF-default #1 SMP x86_64 Intel(R) C Compiler Version 11.1.073 SGI Altix UV Intel(R) Fortran Compiler Version 11.1.073 (ember) AIX 6.1 (32/64 bit) IBM XL C/C++ for AIX, V11.1 (NCSA bp-login) IBM XL Fortran for AIX, V13.1 Windows XP Visual Studio 2008 w/ Intel Fortran 10.1 (project files) Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 (cmake) Cygwin(1.7.7 native gcc(4.3.4) compiler and gfortran) Windows XP x64 Visual Studio 2008 w/ Intel Fortran 10.1 (project files) Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Visual Studio 2010 (cmake) Cygwin(1.7.7 native gcc(4.3.4) compiler and gfortran) Windows 7 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) Windows 7 x64 Visual Studio 2008 w/ Intel Fortran 11.1 (cmake) MAC OS X Intel 10.6.2 Darwin 10.7.0 (32 bit) i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 (tejeda) GNU Fortran (GCC) 4.4.0 20090123 MAC OS X Intel 10.6.2 Darwin 10.7.0 (64 bit) Intel C icc (ICC) 12.0 20101110 (fred) Intel Fortran ifort (IFORT) 12.0 20101110 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 GNU Fortran (GCC) 4.6.0 Debian6.01 2.6.32-5-686 #1 SMP i686 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Debian6.01 2.6.32-5-amd64 #1 SMP x86_64 GNU/Linux gcc (Debian 4.4.5-8) 4.4.5 GNU Fortran (Debian 4.4.5-8) 4.4.5 Fedora14 2.6.35.12-88.fc14.i686.PAE #1 SMP i686 i686 i386 GNU/Linux gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4) GNU Fortran (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4) Fedora14 2.6.35.12-88.fc14.x86_64 #1 SMP x86_64 x86_64 x86_64 GNU/Linux gcc (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4) GNU Fortran (GCC) 4.5.1 20100924 (Red Hat 4.5.1-4) SUSE 11.4 2.6.37.1-1.2-desktop #1 SMP PREEMPT i686 i686 i386 GNU/Linux gcc (SUSE Linux) 4.5.1 20101208 GNU Fortran (SUSE Linux) 4.5.1 20101208 SUSE 11.4 2.6.37.1-1.2-desktop #1 SMP PREEMPT x86_64 x86_64 x86_64 GNU/Linux gcc (SUSE Linux) 4.5.1 20101208 GNU Fortran (SUSE Linux) 4.5.1 20101208 Ubuntu 10.10 2.6.35-28-generic #50-Ubuntu SMP i686 GNU/Linux gcc (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5 GNU Fortran (Ubuntu/Linaro 
4.4.4-14ubuntu5) 4.4.5 Ubuntu 10.10 2.6.35-28-generic #50-Ubuntu SMP x86_64 GNU/Linux gcc (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5 GNU Fortran (Ubuntu/Linaro 4.4.4-14ubuntu5) 4.4.5

Known problems
==============
o When building on AIX systems, if CC is xlc with -qlanglvl=ansi, configure fails when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of the JPEG library have HAVE_STDLIB_H defined in the jconfig.h header file. Without -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still fail, but newer versions (e.g., V9.0) pass. AKC - 2010/02/17

o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable.

o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale were created with the same name, subsequent accesses to the dimension scale or to the SDS might corrupt the data. HDF4 Library Releases 4.2r2 and later do not allow this conflict to occur. On the other hand, recent libraries are also unable to detect such conflicts that already exist in a file. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details.

o This release does not support VMS systems.

o N-Bit compression is not supported with Fortran APIs.

o Using both fill-value and compression on SD datasets does not work.

o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem:
  x Set the LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a, where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a
  x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib....

o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata) after being written and before SDgetdatasize is called (a sketch of this call order follows below).
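  The following minimal C sketch illustrates that call order, for illustration only. The file name, SDS name, data values, and deflate level are placeholders (they are not taken from the HDF documentation), and error checking is omitted.

      /* Write a compressed SDS, end access, re-select it, and only then
       * query the data sizes with SDgetdatasize.                        */
      #include "mfhdf.h"

      int main(void)
      {
          int32     sd_id, sds_id, sds_index;
          int32     dims[1]  = {100};
          int32     start[1] = {0};
          int32     edges[1] = {100};
          int32     data[100];
          int32     comp_size = 0, orig_size = 0;
          comp_info c_info;
          int       i;

          for (i = 0; i < 100; i++)
              data[i] = i;

          sd_id  = SDstart("example.hdf", DFACC_CREATE);
          sds_id = SDcreate(sd_id, "compressed_sds", DFNT_INT32, 1, dims);

          c_info.deflate.level = 6;
          SDsetcompress(sds_id, COMP_CODE_DEFLATE, &c_info);
          SDwritedata(sds_id, start, NULL, edges, (VOIDP)data);

          /* Close the data set after writing ...                        */
          SDendaccess(sds_id);

          /* ... then re-select it before asking for the data sizes.     */
          sds_index = SDnametoindex(sd_id, "compressed_sds");
          sds_id    = SDselect(sd_id, sds_index);
          SDgetdatasize(sds_id, &comp_size, &orig_size);

          SDendaccess(sds_id);
          SDend(sd_id);
          return 0;
      }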
%%%4.2.5%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

HDF version 4.2.5 released on Wed Feb 24 13:00:16 CST 2010
===================================================

INTRODUCTION

This document describes the differences between HDF 4.2.4 and HDF 4.2.5. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2.5.

The HDF 4.2.5 documentation can be found on The HDF Group's FTP server: ftp://ftp.hdfgroup.org/HDF/Documentation/

First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/

If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html

CONTENTS
- New features and changes
-- Source distribution
-- Configuration
-- Library
-- Utilities
- Support for new platforms and compilers
- Bugs fixed since HDF4.2.4
-- Utilities
-- Library
-- Configuration
- Documentation
- Platforms tested
- Known problems
- Appendix: List of the removed files

New features and changes
========================

Source distribution
===================
- For the complete list of the removed obsolete files, see the Appendix.
- Removed obsolete mfhdf/port and mfhdf/c++ directories and related code (EIP - 2010/1/18)
- Removed obsolete hdf/fmpool directory and related code
- Removed obsolete constants PCIO, WINIO, PAGEBIFIO, and WINNTIO from hdfi.h and hfile.h (EIP - 2009/12/31)
- INSTALL* files were moved to the release_notes directory (EIP - 2009/12/29)
- SD tests were moved from mfhdf/libsrc to mfhdf/test. (BMR - 2009/09/10)

Configuration
=============
- Added a configure check that will fail when jpeg version 7 is used, as this is not yet supported in HDF4. (MAM - 2010/01/28)
- Configure suite now built with the following versions of the autotools: Automake 1.11.1, Autoconf 2.65, and Libtool 2.2.6b (MAM - 2009/12/14)

Library
=========
- SDgetchunkinfo now contains compression type and compression information. If a failure occurs while getting the compression information, the associated compression parameters will be set to -1 to indicate that no compression information was retrieved, instead of SDgetchunkinfo simply failing. This is to support backward compatibility. (BMR - 2010/02/04)
- Vgroup names and class names can now be longer than the previous limit of 64 characters. Two public functions are provided so that applications can allocate sufficient space for these items: int32 Vgetnamelen (int32 vkey, uint16 *name_len); int32 Vgetclassnamelen (int32 vkey, uint16 *classname_len); Please refer to the Reference Manual and User's Guide for details. (BMR - 2010/01/27)
- SDreset_maxopenfiles allows users to reset the number of files that can be open at the same time to the system limit minus 3. On AIX 6.1, the system limit is 2GB-1, which caused HDF4 to choke. The source code was modified to cap the limit at H4_MAX_AVAIL_OPENFILES (currently 20000). (EIP - 2010/02/01)
- "make installcheck" builds and tests examples using the h4cc and h4fc scripts (MAM, BMR and MSB - 2009/12)
- HDF Fortran examples were added to hdf/fortran/examples. (MAM - 2009/12/14)
- SD examples were added to mfhdf/examples. (BMR - 2009/08/28)
- HDF C examples were added to hdf/examples.
(BMR - 2009/11/07) Test ==== - Added tests for GRfindattr, GRattrinfo, and GRgetattr (BMR - 2009/11/15) - Moved SD tests out of mfhdf/libsrc into the new directory mfhdf/test (BMR - 2009/09/10) Utilities ========= - Added flag -k to hdp dumpsds to keep the order of the outputted SDSs the same as the order they were specified on the command line. The default (without -k) is SDSs are printed in the order in which they were added to the file (ie., indices.) (BMR - 2010/02/03) - Added -V flag to hdiff, hrepack, hdfimport, ncdump and ncgen; when specified, tool prints library version string and exits. (EIP - 2010/01/28) - Hrepack: set default value for JPEG's quality factor to 75 to prevent image distortion when the factor is not provided by user (BMR - 2010/01/14) Daily Test and Release ====================== Added h4vers and release scripts and Snapshot Release capability. Support for new platforms and compilers ======================================= Added support for 64-bit Mac Intel with gcc, gfortran and Intel C and Fortran compilers. Added support for AIX 6.1 using IBM XL C and Fortran compilers. (EIP - 2010/1/29) Bugs fixed since HDF4.2.4 ========================= Utilities ========= - None Library ========= - The problem where incorrect result occurred when attempting to retrieve the dimension scale's number type from a netCDF file was fixed (bugzilla #1644.) (BMR - 2009/09/25) - The problem where pieces of an image get written multiple times at different locations in the 2-D array had been fixed. The cause was the pointer to user's buffer was not advanced properly. This was part of bugzilla 1547. (BMR - 2009/06/10) - The problem which SDgetdimstrs failed if there are no attributes attached to the dimension had been fixed. SDgetdimstrs now returns the attribute strings containing '\0' for the first character, as specified in the documentation. (BMR - 2009/08/28) Configuration ============= - The mfhdf/ncgen/Makefile.in now has $(EXEEXT) appended to the ncgen program whenever it is referenced in a build rule dependency. This fixes some compile problems on Cygwin, where the .exe is necessary. (MAM - 2009/12/17). - Configure will now fail if the yacc or flex utilities are not available (as opposed to failing sometime during make when they are used). (MAM - 2009/12/17). - Configure will now properly check for the rpc library when on Cygwin, and fail gracefully if it is not found. (MAM - 2009/12/17) - Configure will no longer try to use a Fortran compiler to set up aspects of the build environment when Fortran has been disabled, as configure now deliberately sets the F77 environment variable to 'no' when Fortran is disabled. This should no longer cause build problems on Cygwin when no Fortran compiler is available. (MAM - 2009/12/14) - './configure --help' will now correctly indicate that shared libraries are disabled by default. (MAM - 2009/12/14) Documentation ============== The updated HDF Reference Manual is now available in HTML format. Platforms tested ================ HDF 4.2.5 has been tested in the following platforms: FreeBSD 6.3-STABLE i386 gcc 3.4.6 [FreeBSD] 20060305 (duty) f77 (gcc) 3.4.6 gcc (GCC) 4.4.4 20100126 (prerelease) GNU Fortran (GCC) 4.4.4 20100126 (prerelease) FreeBSD 6.3-STABLE amd64 gcc 3.4.6 [FreeBSD] 20060305 (liberty) f77 (gcc) 3.4.6 gcc (GCC) 4.4.4 20100126 (prerelease) GNU Fortran (GCC) 4.4.4 20100126 (prerelease) Linux jam 2.6.18-164.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46) SMP i686 i386 G95 (GCC 4.0.3 (g95 0.92!) 
Jun 24 2009) (jam) g77 (GCC) 3.4.6 20060404 pgcc and pgf90 8.0-5 32-bit target on x86 Linux -tp penryn Intel(R) C Compiler, Version 11.0 Intel(R) Fortran Compiler, Version 11.0 Linux 2.6.18-164.11.1.el5 #1 gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46) SMP x86_64 GNU/Linux G95 (GCC 4.0.3 (g95 0.92!) Jun 24 2009) (amani) icc (ICC) 11.1 20090827 ifort (IFORT) 11.1 20090827 pgcc and pgf90 9.0-4 64-bit target on x86-64 Linux -tp k8-64e GNU Fortran (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46) SunOS 5.10 32- and 64-bit Sun C 5.9 SunOS_sparc Patch 124867-12 (linew) Sun Fortran 95 8.3 SunOS_sparc Patch 127000-12 Linux 2.6.16.54-0.2.5 #1 Intel(R) C Compiler Version 10.1.017 Altix SMP ia64 Intel(R) Fortran Itanium(R) Version 10.1.017 (cobalt) SGI MPI 1.16 Xeon Linux 2.6.18-92.1.10.el5_lustre.1.6.6smp-perfctr #2 SMP x86_64 Intel(R) C Compiler Version 10.0.026 (abe) Intel(R) Fortran Compiler Version 10.0.026 IA-64 Linux 2.4.21.SuSE_292.til1 ia64 (NCSA tg-login) Intel(R) C Compiler Version 8.1.037 Intel(R) Fortran Compiler Version 8.1.033 AIX 5.3 (32/64 bit) IBM XL C/C++ for AIX, V9.0 (LLNL Up) IBM XL Fortran for AIX, V11.1 AIX 6.1 (32/64 bit) IBM XL C/C++ for AIX, V10.1 (NCSA bp-login) IBM XL Fortran for AIX, V12.1 Windows XP Visual Studio 2005 (with Intel Fortran 9.1/10.1) Visual Studio 2008 (with Intel Fortran 10.1) cygwin (gcc 4.3.4) Windows XP(64 bit) Visual Studio 2005 (with Intel Fortran 9.1/10.1) Visual Studio 2008 (with Intel Fortran 10.1) Windows Vista Visual Studio 2008 (with Intel Fortran 10.1) Windows Vista(64 bit) Visual Studio 2008 (with Intel Fortran 10.1) MAC OS X Intel 10.6.2 Darwin 10.2.0 Intel C icc (ICC) 11.1 20091130 Intel Fortran ifort (IFORT) 11.1 20091130 i686-apple-darwin10-gcc-4.2.1 (GCC) 4.2.1 GNU Fortran (GCC) 4.5.0 20090910 Linux 2.6.31.12-174.2.3.fc12.x86_64 #1 SMP x86_64 GNU/Linux gcc (GCC) 4.4.2 (Fedora 12) gfortran GNU Fortran (GCC) 4.4.2 20091222 (Red Hat 4.4.2-20) Linux 2.6.31-17-generic #54-Ubuntu SMP x86_64 GNU/Linux gcc (GCC) 4.4.1 (Ubuntu 9.10) gfortran GNU Fortran (GCC) 4.4.1 Linux 2.6.31.8-0.1-desktop #1 SMP x86_64 GNU/Linux gcc (GCC) 4.4.1 (OpenSuse 11.2) gfortran GNU Fortran (GCC) 4.4.1

Known problems
==============
o When building on AIX systems, if CC is xlc with -qlanglvl=ansi, configure fails when checking for the jpeglib.h header due to the duplicated macro definition of HAVE_STDLIB_H. This is because some newer builds of the JPEG library have HAVE_STDLIB_H defined in the jconfig.h header file. Without -qlanglvl=ansi, some older xlc versions (e.g., V7.0) still fail, but newer versions (e.g., V9.0) pass. (AKC - 2010/02/17)

o When building on Linux/UNIX platforms, the szip shared library files must be in the system library path. This can be done by adding a link to the libsz.* files in the /usr/lib folder or by adding the library location to the LD_LIBRARY_PATH environment variable. Ex. export LD_LIBRARY_PATH=path_to_szip_lib:$LD_LIBRARY_PATH Optionally, one can use the static szip library files by adding '-static' to the CFLAGS environment variable.

o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale were created with the same name, subsequent accesses to the dimension scale or to the SDS might corrupt the data. HDF4 Library Releases 4.2r2 and later do not allow this conflict to occur. On the other hand, recent libraries are also unable to detect such conflicts that already exist in a file.
It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. o This release does not support VMS systems. o N-Bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata), after being written and before SDgetdatasize is called. Appendix: List of the removed files =================================== mfhdf/CHANGES mfhdf/CUSTOMIZE mfhdf/INSTALL mfhdf/MANIFEST mfhdf/ORIGIN mfhdf/README mfhdf/README.HDF mfhdf/README.HDF.33 mfhdf/VERSION mfhdf/macros.mk mfhdf/mfhdf.mak mfhdf/msoft.mk mfhdf/c++/README mfhdf/c++/example.c mfhdf/c++/example.cc mfhdf/c++/example.cdl mfhdf/c++/expected mfhdf/c++/nc.info mfhdf/c++/nc.txn mfhdf/c++/nctst.cc mfhdf/c++/ncvalues.cc mfhdf/c++/ncvalues.hh mfhdf/c++/netcdf.cc mfhdf/c++/netcdf.hh mfhdf/fortran/fortc mfhdf/fortran/fortc1.sed mfhdf/fortran/fortc2.sed mfhdf/fortran/ftest.lnk mfhdf/fortran/msoft/ mfhdf/fortran/Linux.m4 mfhdf/fortran/README mfhdf/fortran/aix.m4 mfhdf/fortran/common.m4 mfhdf/fortran/convex.m4 mfhdf/fortran/craympp.m4 mfhdf/fortran/descrip.mms mfhdf/fortran/freebsd.m4 mfhdf/fortran/fujivp.m4 mfhdf/fortran/hpux.m4 mfhdf/fortran/irix.m4 mfhdf/fortran/msoft.m4 mfhdf/fortran/msoft.mk mfhdf/fortran/osf.m4 mfhdf/fortran/solaris.m4 mfhdf/fortran/sunos.m4 mfhdf/fortran/ultrix.m4 mfhdf/fortran/unicos.m4 mfhdf/fortran/vax-ultrix.m4 mfhdf/fortran/vms.m4 mfhdf/libsrc/README mfhdf/libsrc/cdftest.c mfhdf/libsrc/cdftest.mak mfhdf/libsrc/cdftest.project.hqx mfhdf/libsrc/descrip.mms mfhdf/libsrc/gen_sds_szipped.c mfhdf/libsrc/hdftest.c mfhdf/libsrc/hdftest.h mfhdf/libsrc/hdftest.mak mfhdf/libsrc/hdftest.project.hqx mfhdf/libsrc/htons.mar mfhdf/libsrc/mfhdflib.project.hqx mfhdf/libsrc/msoft.mk mfhdf/libsrc/ntohs.mar mfhdf/libsrc/sds_szipped.dat mfhdf/libsrc/tchunk.c mfhdf/libsrc/tcomp.c mfhdf/libsrc/tcoordvar.c mfhdf/libsrc/tdatasizes.c mfhdf/libsrc/tdim.c mfhdf/libsrc/temptySDSs.c mfhdf/libsrc/tfile.c mfhdf/libsrc/tidtypes.c mfhdf/libsrc/tncunlim.c mfhdf/libsrc/tnetcdf.c mfhdf/libsrc/trank0.c mfhdf/libsrc/tsd.c mfhdf/libsrc/tsdsprops.c mfhdf/libsrc/tszip.c mfhdf/libsrc/tunlim.c mfhdf/libsrc/win32cdf.h mfhdf/libsrc/win32cdf.mak mfhdf/ncdump/ncdump.mak mfhdf/ncdump/msoft.mk mfhdf/ncdump/msofttab.c mfhdf/ncdump/ctest0.mak mfhdf/ncdump/ncdump.lnk mfhdf/ncgen/test0.lnk mfhdf/ncgen/ncgen.opt mfhdf/ncgen/msoft.mk mfhdf/ncgen/ncgen.mak mfhdf/ncgen/ctest0.mak mfhdf/ncgen/descrip.mms mfhdf/port/COPYRIGHT mfhdf/port/CUSTOMIZE mfhdf/port/HISTORY mfhdf/port/Makefile.am mfhdf/port/Makefile.in mfhdf/port/VERSION 
mfhdf/port/aclocal.m4 mfhdf/port/configure mfhdf/port/configure.in mfhdf/port/depend mfhdf/port/mast_mk.in mfhdf/port/master.mk.in mfhdf/port/uddummy.c mfhdf/port/udposix.h.in mfhdf/port/udposixh.in mfhdf/port/which hdf/COPYING hdf/COPYRIGHT hdf/README hdf/README.33r4 hdf/fmpool/Makefile.in hdf/fmpool/README hdf/fmpool/cdefs.h hdf/fmpool/compat.h hdf/fmpool/config.guess hdf/fmpool/config.sub hdf/fmpool/configure hdf/fmpool/configure.in hdf/fmpool/fmpio.3 hdf/fmpool/fmpio.c hdf/fmpool/fmpio.h hdf/fmpool/fmpool.3 hdf/fmpool/fmpool.c hdf/fmpool/fmpool.h hdf/fmpool/fmptypes.h hdf/fmpool/move-if-change hdf/fmpool/queue.h hdf/fmpool/test_fmpio.c hdf/fmpool/tfmpio_read.c hdf/fmpool/tfmpio_write.c hdf/fmpool/config/fmpaix.h hdf/fmpool/config/fmpalpha.h hdf/fmpool/config/fmpconvex.h hdf/fmpool/config/fmpdec.h hdf/fmpool/config/fmpfbsd.h hdf/fmpool/config/fmpfujivp.h hdf/fmpool/config/fmphpux.h hdf/fmpool/config/fmpia64.h hdf/fmpool/config/fmpirix32.h hdf/fmpool/config/fmpirix4.h hdf/fmpool/config/fmpirix5.h hdf/fmpool/config/fmpirix6.h hdf/fmpool/config/fmplinux.h hdf/fmpool/config/fmpmac.h hdf/fmpool/config/fmpsolaris.h hdf/fmpool/config/fmpsun.h hdf/fmpool/config/fmpt3e.h hdf/fmpool/config/fmpunicos.h hdf/fmpool/config/mh-aix hdf/fmpool/config/mh-alpha hdf/fmpool/config/mh-convex hdf/fmpool/config/mh-decstation hdf/fmpool/config/mh-fbsd hdf/fmpool/config/mh-fujivp hdf/fmpool/config/mh-hpux hdf/fmpool/config/mh-ia64 hdf/fmpool/config/mh-irix32 hdf/fmpool/config/mh-irix4 hdf/fmpool/config/mh-irix5 hdf/fmpool/config/mh-irix6 hdf/fmpool/config/mh-linux hdf/fmpool/config/mh-mac hdf/fmpool/config/mh-solaris hdf/fmpool/config/mh-sun hdf/fmpool/config/mh-t3e hdf/fmpool/config/mh-unicos hdf/src/hdf.bld hdf/src/hdflib.project.hqx hdf/src/hdfnof.bld hdf/src/hdfnofw3.lbc hdf/src/hdfnofwc.lbc hdf/src/hdfw386.lbc hdf/src/hdfwcc.lbc hdf/src/makepc.386 hdf/src/makepc.msc hdf/src/makepc.wcc hdf/src/makewin.msc hdf/src/win32hdf.mak hdf/util/fp2hdf.mak hdf/util/hdf24to8.mak hdf/util/hdf2jpeg.mak hdf/util/hdf8to24.mak hdf/util/hdfcomp.mak hdf/util/hdfed.mak hdf/util/hdfls.mak hdf/util/hdfpack.mak hdf/util/hdftopal.mak hdf/util/hdftor8.mak hdf/util/hdfunpac.mak hdf/util/jpeg2hdf.mak hdf/util/makepc.386 hdf/util/makepc.msc hdf/util/paltohdf.mak hdf/util/r8tohdf.mak hdf/util/ristosds.mak hdf/util/vcompat.mak hdf/util/vmake.mak hdf/util/vshow.mak hdf/test/MAKECOM.OLD hdf/test/makepc.386 hdf/test/makepc.msc hdf/test/makewin.msc hdf/test/makewin.new hdf/test/testhdf.386 hdf/test/testhdf.def hdf/test/testhdf.lnk hdf/test/testhdf.pc hdf/test/testhdf.project.hqx hdf/test/win32tst.mak %%%4.2r4%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF 4.2 Release 4 ================= January 25, 2009 INTRODUCTION This document describes the differences between HDF 4.2r3 and HDF 4.2r4 It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2r4 The HDF 4.2r4 documentation can be found on the The HDF Group's FTP server: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/ If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html CONTENTS - New features and changes -- Configuration -- Library -- Utilities - Support for new platforms and compilers - Bugs fixed since HDF4.2r3 -- Utilities -- Library -- Configuration - Documentation - Platforms tested - Known problems New features and changes 
======================== Configuration ============= - Automake 1.10.1 used to generate Makefile.in files, and Autoconf 2.61 used to generate configure script. MAM - 2008/10/09 - Libtool support (2.2.6a) has been added, and HDF4 shared C libraries can now be built. To build, use the --enable-shared configuration flag. This needs to be used in conjunction with the --disable-fortran flag, as Fortran shared libraries are not supported. Shared libraries are disabled by default. MAM - 2008/10/09 - Added SZIP information to end of configure summary, which indicates whether SZIP is present and, if SZIP is present, whether the SZIP encoder is present. MAM - 2008/10/06 Library ========= - Added new API SDgetdatasize to retrieve the compressed and original sizes of an SDS' data. Its limitation is listed in the "Known problems" section at the end of this document. intn SDgetdatasize(int32 sdsid, int32* comp_size, int32* orig_size) sdsid - IN: dataset ID comp_size - OUT: size of compressed data orig_size - OUT: size of original data BMR - 2008/10/15 - The following Fortran APIs were added in this release: sfgetname sfgetnamelen sfgmaxopenf sfgnumopenf sfrmaxopenf sfidtype sfgnvars_byname sfn2indices Please see HDF Reference Manual for functions description. Note: These APIs are not available on Windows. Utilities ========= - hrepack output now includes dataset compression ratios. PVN - 2008/10/30 - hdp now displays various parameters of Gzip, Szip, and Skipping Huffman compressions when a dataset is compressed. (Bugzilla 1202) BMR - 2008/10/03 Support for new platforms and compilers ======================================= Linux 2.6.27 x86_64 Fedora 10 with GNU C and gfortran Ubuntu 8.10 with GNU C and gfortran OpenSuse 11.1 with GNU C and gfortran Mac Intel with GNU C, gfortran, g95 and Intel 10.1 32-bit C and Fortran compilers Note: Only the 32-bit version of the Intel compiler is supported. One has to run configuration scripts, which can be found under the bin directories in the Intel compiler installation directory tree (e.g., /opt/intel/cc/10.1.006/bin/iccvars.csh and /opt/intel/fc/10.1.006/bin/ifortvars.csh), to enable the 32-bit version of the compiler before configuring and building HDF4. Known problems: Due to a known bug in the Intel 10.1 icc compiler, one has to use -no-multibyte-chars flag with icc. O0 flag with ifort is required to build Fortran APIs and tests. This restriction is probably due to very old Fortran code and will be lifted for future releases. Bugs fixed since HDF4.2r3 ========================= Utilities ========= - hrepack previously failed to preserve unlimited dataset dimensions; they are now preserved. PVN - 2008/11/19 Library ========= - SDreaddata now checks for out-of-range values in the parameter 'stride' and fails when invalid values are given (Bugzilla 150.) This bug was actually fixed right before the HDF4.2r2 release. BMR - 2008/07/14 - Reading a record variable using nc API fills the buffer with fill values up to the maximum number of records of all unlimited dimension variables in the file, as in netCDF. It used to fail before (Bugzilla 1378.) - BMR 2009/01/21 - When the file name is too long, some SD APIs caused a segmentation fault (Bugzilla 1331.) This problem is now fixed. - BMR 2009/01/23 Configuration ============= - hdiff_array.c now gets linked against libm.a library. Documentation ============== The updated HDF Reference Manual is now available in HTML format. 
Platforms tested ================ HDF 4.2 Release 4 has been tested on the following platforms: FreeBSD 6.3-STABLE i386 gcc 3.4.6 [FreeBSD] 20060305 (duty) g++ 3.4.6 [FreeBSD] 20060305 f77 (gcc) 3.4.6 gcc 4.2.5 20080702 g++ 4.2.5 20080702 gfortran 4.2.5 20080702 FreeBSD 6.3-STABLE amd64 gcc 3.4.6 [FreeBSD] 20060305 (liberty) g++ 3.4.6 [FreeBSD] 20060305 f77 (gcc) 3.4.6 gcc 4.2.5 20080702 g++ 4.2.5 20080702 gfortran 4.2.5 20080702 IRIX64 6.5 MIPSpro cc 7.4.4m (ucar mop1 64 & n32) F90 MIPSpro 7.4.4m Linux 2.6.18-92.1.22.el5xen gcc (GCC) 4.1.2 20071124 #1 SMP i686 i686 i386 gG95 (GCC 4.0.3 (g95 0.92!) July 1 2008) (jam) g77 (GCC) 3.4.6 20060404 PGI C, Fortran, C++ 7.2-5 32-bit Intel(R) C Compiler for 32-bit applications, Version 10.1.018 Intel(R) Fortran Compiler for 32-bit applications, Version 10.1.018 Linux 2.6.9-42.0.10.ELsmp #1 gcc (GCC) 3.4.6 SMP i686 i386 g++ (GCC) 3.4.6 (kagiso) G95 (GCC 4.0.3 (g95 0.92!) April 18 2007) Linux 2.6.16.46-0.12-debug #1 Intel(R) C Compiler Version 10.0.025 SMP ia64 GNU/Linux Intel(R) Fortran Itanium(R) Version 10.0.025 (ucar hir1) Linux 2.6.16.46-0.14-smp #1 gcc (GCC) 4.1.2 20070115 (SUSE Linux) SMP x86_64 GNU/Linux G95 (GCC 4.0.3 (g95 0.92!) July 1 2008) (smirom) Intel(R) C Compiler for Intel(R) EM64T Ver. 10.1.013 Intel(R) Fortran Intel(R) EM64T Ver. 10.1.013 PGI C, Fortran Version 7.2-1 for 64-bit target on x86-64 Linux 2.6.16.54-0.2.5 #1 Intel(R) C Compiler Version 10.1.017 Altix SMP ia64 Intel(R) Fortran Itanium(R) Version 10.1.017 (cobalt) SGI MPI 1.16 SunOS 5.10 32- and 64-bit Sun WorkShop 6 update 2 C 5.8 Patch 121015-06 (linew) Sun WorkShop 6 update 2 Fortran 95 8.2 Patch 121019-09 Xeon Linux 2.6.18-92.1.10.el5_lustre.1.6.6smp-perfctr #2 SMP x86_64 Intel(R) C Compiler Version 10.0.026 (abe) Intel(R) Fortran Compiler Version 10.0.026 IA-64 Linux 2.4.21.SuSE_292.til1 ia64 (NCSA tg-login) Intel(R) C Compiler Version 8.1.037 Intel(R) Fortran Compiler Version 8.1.033 Windows XP Visual Studio 6.0 Visual Studio .NET (with Intel Fortran 9.1) Visual Studio 2005 (with Intel Fortran 9.1/10.1) Visual Studio 2008 (with Intel Fortran 10.1) cygwin (gcc 3.4.4 and g95 0.90!) Windows XP(64 bit) Visual Studio 2005 (with Intel Fortran 9.1/10.1) Visual Studio 2008 (with Intel Fortran 10.1) Windows Vista Visual Studio 2008 (with Intel Fortran 10.1) cygwin (gcc 3.4.4 and g95 0.90!) Windows Vista(64 bit) Visual Studio 2008 (with Intel Fortran 10.1) MAC OS X Intel Darwin 9.4.0 i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 G95 (GCC 4.0.3 (g95 0.91!) Apr 24 2008) gfortran GNU Fortran (GCC) 4.3.0 20070810 Intel 10.1 32-bit version Linux 2.6.27.9-159.fc10.x86_64 #1 SMP x86_64 GNU/Linux gcc (GCC) 4.3.2 (Fedora 10) gfortran GNU Fortran (GCC) 4.3.2 20081105 (Red Hat 4.3.2-7) Linux 2.6.27-9-generic #1 SMP x86_64 GNU/Linux gcc (GCC) 4.3.2 (Ubuntu 8.10) gfortran GNU Fortran (GCC) 4.3.2 Linux 2.6.27.7-9-default #1 SMP x86_64 GNU/Linux gcc (GCC) 4.3.2 (OpenSuse 11.1) gfortran GNU Fortran (GCC) 4.3.2 Known problems ============== o Existing data written by an HDF4 Library prior to HDF 4.2r2: When a one-dimensional SDS and a dimension scale were created with the same name, subsequent accesses to the dimension scale or to the SDS might corrupt the data. HDF4 Library Releases 4.2r2 and later do not allow this conflict to occur. On the other hand, recent libraries are also unable to detect such conflicts that already exist in a file. It is therefore STRONGLY recommended to check for such name duplication before working with data created with a pre-4.2r2 library. 
The functions SDgetnumvars_byname and SDnametoindices are provided to help detect such name conflicts and select the correct object to access, respectively; see the HDF Reference Manual entries for further details. o This release does not support VMS systems. o N-Bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... o In order for the new API SDgetdatasize to get the correct compressed size of the data, the dataset needs to be closed (SDendaccess) or read (SDreaddata), after being written and before SDgetdatasize is called. o On a fedora x86_64 GNU/Linux machine with gfortran 4.3.2, the flag -fno-range-check needs to be used to work around old code. o On Windows with Visual Studio .NET 2003, a minor bug in the hdfnctest test causes it to fail. The failure is non-critical, and so the test has been disabled by default for all Windows compilers. ========================================================================== %%%4.2r3%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF 4.2 Release 3 ================= January 28, 2008 INTRODUCTION This document describes the differences between HDF 4.2r2 and HDF 4.2r3. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2r3. The HDF 4.2r3 documentation can be found on the THG ftp server: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the HDF FAQ, which can be reached from the HDF product home page: http://hdfgroup.org/products/hdf4/ If you have any questions or comments, please see the HDF Support page: http://hdfgroup.org/services/support.html CONTENTS - New features and changes -- Configuration -- Library -- Utilities - Support for new platforms and compilers - Bugs fixed since HDF4.2r2 -- Utilities -- Library -- Configuration - Documentation - Platforms tested - Known problems New features and changes ======================== Configuration ============= None Library ========= - Modified the HDF4 mfhdf library and configure to work in the absence of the NetCDF-3 header files when the HDF4 library is configured with the --disable-netcdf flag. This feature is required when building the ESDIS Toolkit in the presence of the NetCDF-3 library. In this case, the HDF4 file netcdf.h is used to build the HDF4 libraries and is installed under the name "hdf4_netcdf.h" to avoid a name clash with the NetCDF-3 file netcdf.h. EIP 2007-10-30 - Note that the previous change to SDnametoindex was backed out. It only returns the first variable of the requested name, as did the original version of this function. The variable can be an SDS or a coordinate variable. With this behavior, if there is more than one variable with the same name in the file, care must be taken to retrieve the desired variable. Please see new APIs in the next item for a way to handle non-uniquely named variables. 
The behavior of SDnametoindex will be documented in FAQs, documentation, and the newsletter. BMR 2008-01-17 - Added new APIs to SD interface: + SDgetnumvars_byname: Given a name, returns the number of variables in a file with that same name. + SDnametoindices: Given a name, returns a list of indices of all the variables in a file with that same name. With these new APIs, users can determine when an SDS name or a coordinate variable's name is not unique, retrieve the named variables, and examine them. BMR 2008-01-17 Utilities ========= None Support for new platforms and compilers ======================================= Support for Windows XP 64-bit with Visual Studio 2005 and Intel Fortran compiler was added. Bugs fixed since HDF4.2r2 ========================= Utilities ========= None Library ========= None Configuration ============= - libhdf4.settings file was not installed by make install command; fixed EIP, 2007-10-09 Documentation ============== Descriptions of the new functions SDgetnumvars_byname and SDnametoindices have been added to the documentation. Platforms tested ================ HDF 4.2 Release 3 has been tested on the following platforms: AIX 5.2 (32/64 bit) xlc 8.0.0.11 (datstar) xlf 10.01.0000.0002 FreeBSD 6.2 (32- and 64-bit) gcc and f77 GNU 3.4.6 (duty and liberty) GNU Fortran (GCC) 4.2.3 20080123 IRIX64 6.5 MIPSpro cc 7.4.4m (ucar mop1 64 & n32) F90 MIPSpro 7.4.4m Linux 2.4.21-47.ELsmp #1 SMP gcc and g77 3.2.3 i686 i386 GNU/Linux (osage) Linux 2.6.9-42.0.10.ELsmp #1 gcc (GCC) 3.4.6 SMP i686 i386 G95 (GCC 4.0.3 (g95 0.91!) April 18 2007) (kagiso) PGI C, Fortran 7.0-7 32-bit icc (ICC) 9.1 Intel(R) Fortran Compiler for 32-bit applications, Version 9.1 Linux 2.6.16.46-0.12-debug #1 Intel(R) C++ Version 10.0.025 SMP ia64 GNU/Linux Intel(R) Fortran Itanium(R) Version 10.0.025 (ucar hir1) Linux 2.6.16.46-0.14-smp #1 Intel(R) C for Intel(R) EM64T Ver. 9.1.037 SMP x86_64 GNU/Linux Intel(R) Fortran Intel(R) EM64T Ver. 9.1.031 (smirom) gcc (GCC) 4.1.2 20070115 (SUSE Linux) G95 (GCC 4.0.3 (g95 0.91!) Apr 19 2007) Linux 2.6.5-7.283-rtgfx Altix Intel(R) C++ Version 9.0.032 SMP ia64 Intel(R) Fortran Itanium(R) Version 9.0.033 (cobalt) SunOS 5.8 32- and 64-bit Sun WorkShop 6 update 2 C 5.3 (sol) Sun WorkShop 6 update 2 Fortran 77 5.3 SunOS 5.10 32- and 64-bit Sun WorkShop 6 update 2 C 5.8 Patch 121015-06 (linew) Sun WorkShop 6 update 2 Fortran 95 8.2 Patch 121019-09 IA-64 Linux 2.4.21.SuSE_292.til1 ia64 (NCSA tg-login) Intel(R) C++ Version 8.1.037 Intel(R) Fortran Compiler Version 8.1.033 Windows XP Visual Studio 6.0 Visual Studio .NET (with Intel Fortran 9.1) Visual Studio 2005 (with Intel Fortran 9.1) cygwin (gcc 3.4.4) Windows XP(64 bit) Visual Studio 2005(with Intel Fortran 9.1) Windows Vista Visual Studio 2005 (no fortran) MAC OS X Intel Darwin 8.10.1 i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1 g95 0.91 gfortran GNU Fortran (GCC) 4.3.0 20070518 Known problems ============== o hdfcomp fails on HPUX 11.23 for the 64-bit version of the library. o This release does not support VMS systems. o N-Bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets does not work. o For existing data prior to HDF 4.2r2, when a one-dimensional SDS has the same name as the dimension scale, subsequent accesses to the dimension scale or the SDS might produce undesired results. It is recommended to check for name duplication first. See "New features and changes" section for details. 
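  For illustration, a minimal C sketch of such a check, using the SDgetnumvars_byname and SDnametoindices APIs described under "New features and changes" above. The file and variable names are placeholders, error checking is omitted, and the hdf_varlist_t type with its var_index member is assumed to match the declarations in mfhdf.h; verify them against the installed headers.

      /* Check whether a name (here the placeholder "Temperature") is shared
       * by more than one variable, then look at each match by index.      */
      #include <stdlib.h>
      #include "mfhdf.h"

      int main(void)
      {
          int32          sd_id, n_vars = 0, i;
          hdf_varlist_t *var_list;

          sd_id = SDstart("pre42r2_file.hdf", DFACC_READ);

          /* How many variables (SDSs or coordinate variables) share the name? */
          SDgetnumvars_byname(sd_id, "Temperature", &n_vars);

          if (n_vars > 1) {
              /* Get the index of every variable with that name, then examine
               * each one to pick the object that is actually wanted.        */
              var_list = (hdf_varlist_t *) malloc(n_vars * sizeof(hdf_varlist_t));
              SDnametoindices(sd_id, "Temperature", var_list);

              for (i = 0; i < n_vars; i++) {
                  int32 sds_id = SDselect(sd_id, var_list[i].var_index);
                  /* ... inspect rank, dimensions, or attributes here ... */
                  SDendaccess(sds_id);
              }
              free(var_list);
          }

          SDend(sd_id);
          return 0;
      }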
o When using PGI compilers, make sure that the JPEG library is also compiled with a PGI C compiler; linking with a JPEG library built with gcc causes JPEG library tests to fail. To bypass the problem: x Set LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a where $PGI_JPEG_INSTALL_DIR points to the installation directory for the PGI-compiled JPEG library: setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure with the PGI-compiled JPEG library: ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib.... %%%4.2r2%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% HDF 4.2 Release 2 October 4, 2007 INTRODUCTION This document describes the differences between HDF 4.2r1 and HDF 4.2r2. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2r2. The HDF 4.2r2 documentation can be found on the THG ftp server (ftp.hdfgroup.org) in the directory: ftp://ftp.hdfgroup.org/HDF/Documentation/ First-time HDF users are encouraged to read the FAQ for this release for more information about HDF. Also see the home page for HDF at: http://hdfgroup.org/ If you have any questions or comments, please send them to: help@hdfgroup.org CONTENTS - New features and changes -- Configuration -- Library -- Utilities - Support for new platforms and compilers - Bugs fixed since HDF4.2r1 -- Utilities -- Library - Documentation - Platforms tested - Known problems New features and changes ======================== Configuration ============= - The default installation directory name was changed from "NewHDF" to "hdf4". EIP - 2007/08/06 - Introduced --enable-netcdf configure flag to provide an option to enable/disable "HDF4-NetCDF"-like interfaces. By default, the HDF4 netcdf feature is enabled. Use the --disable-netcdf configuration flag to build HDF4 C and Fortran libraries that can be used by NetCDF-3 applications. There is no longer a need to specify the -DHAVE_NETCDF compilation flag. This feature is not yet supported on Windows. EIP - 2007/09/05 - Updated versions of autotools. HDF4 now uses automake 1.10.0, autoconf 2.61, and libtool 1.5.22. MAM - 2007/7/25. - The Fortran part of the HDF4 library uses the F77_FUNC macro to mangle names of C functions called from Fortran APIs. This should help with HDF4 code portability to different Fortran compilers. EIP - 2006/12/19 Library ========= - SZIP compression is supported now for GR C interfaces EIP - 2007/09/15 - Added new Fortran function hconfinf that determines whether the SZIP compression method is present and whether encoding is available. - Added support for SZIP compression in Fortran. EIP - 2007/09/01 - The following APIs were added; see "Bugs fixed" section for details. SDreset_maxopenfiles -- Resets the maximum number of files that can be open at a time. SDget_maxopenfiles -- Retrieves the current number of open files allowed in HDF and the maximum number of open files allowed on a system. SDget_numopenfiles -- Returns the number of files currently open. SDgetcompinfo -- Replaces SDgetcompress. GRgetcompinfo -- Replaces GRgetcompress. SDgetfilename -- Retrieves the name of the file, given its ID. SDgetnamelen -- Retrieves the name length of the object, given its ID. Note: Fortran wrappers for these new APIs are not available in this release. BMR - 2007/09/23 - SDS and vgroup names are no longer limited to 64 characters (Bugzilla #516). 
Note that when an older version of the library reads a new name that is longer than 64 characters, the name will contain some garbage after 64 characters. BMR - 2006/10/12 - User reported that SDreaddata went into an infinite loop when reading some corrupted compressed data. This problem is fixed. Two new error codes were added, consequently: DFE_READCOMP - when the zlib function returns Z_ERRNO (-1) or Z_STREAM_ERROR (-2) or Z_DATA_ERROR (-3) or Z_MEM_ERROR (-4) or Z_BUF_ERROR (-5) DFE_COMPVERSION - when the zlib function returns Z_VERSION_ERROR (-6) Utilities ========= - hrepack repacks by hyperslabs for large (non-compressed) datasets. PVN - 2007/7/10 - hdiff enables reading by hyperslabs. This feature was added to handle very large datasets, where available memory is a isssue. PVN - 2007/6/13 - hdiff now shows indices in multidimensional array notation. PVN - 2007/4/5 - hdiff now shows the name of the array when printing differences. PVN - 2007/4/5 - hdiff now shows a list of all objects in verbose mode. PVN - 2007/4/5 - hdiff return code is now 1 if differences are found, 0 if no differences are found, and -1 for an error. PVN - 2007/4/5 - hdiff has a new option, -p, for relative error. See usage. PVN - 2007/4/5 - hrepack now prints chunk information along with the compression type in verbose mode. PVN - 2007/4/5 Support for new platforms and compilers ======================================= - Added support for gfortran and g95 on Mac Intel. EIP 2007/09/14 - Added support for gfortran on FreeBSD for both 32- and 64-bit. - Added support for FreeBSD on AMD64 with gcc compilers. EIP 2007/05/24 - Added support for MAC OS X Intel with gcc and g95 compilers. - Added support for SUNOS 5.10 on Intel with SUN compilers (32- and 64-bit modes). EIP 2006/12/14 - Added support for HPUX 11.23 (32- and 64-bit modes). EIP 2006/12/19 Bugs fixed since HDF4.2r1 ========================= Utilities ========= - hrepack: Repeated vgroup insertions (duplicated links) were not being replicated. PVN - 2007/9/10 - hrepack: Improved performance for large number of SDSs, through the elimination of redundant file open calls. PVN - 2007/4/5 - hrepack: Fixed a bug in the -t option so that it now accepts multiple comma-separated names. PVN - 2007/4/5 - hrepack: Now duplicates dimension SDSs that are not accessed from any other SDSs. PVN - 2007/4/5 - hrepack: Fixed a bug that caused the unchunking of a dataset when uncompressing was requested. PVN - 2007/4/5 Library ========= - Added display of the compression method to hdp dumpsds and dumpgr (Bugzilla #130). BMR - 2005/4/4 - The current SDgetcompress and GRgetcompress APIs have severe flaws. Two new APIs, SDgetcompinfo and GRgetcompinfo, were added to provide better functionality and will eventually replace SDgetcompress and GRgetcompress. BMR - 2005/4/4 - Applied user's patch to fix bug #602. BMR - 2005/4/23 - Added a switch ('u') to ncdump to replace nonalpha-numeric characters with underscores. Thus, the SDS names won't be changed automatically unless the user so requests (Access bug #934/Bugzilla #381). BMR - 2005/7/17 - In HDF4.2r0, SDwritedata failed when the SDS had rank=0 (bug #1045). This change was retracted; writing to an SDS with rank=0 is allowed again. BMR - 2005/8/23 - When a dimension has the same name as an SDS, depending on the order in which they were created, either the SDS or the dimension will be corrupted if certain operations occur, such as a SDsetdimscale or SDsetattr call to the dimension. 
With this bugfix, the current situation can be summarized as below: + Data that has already been corrupted cannot be recovered. + For existing data that has not yet been corrupted, the problem has been fixed for multi-dimensional SDSs only. If the SDS had only one dimension, the results of subsequent accesses to the dimension would still be unpredictable (Bugzilla #328). BMR - 2005/8/23 + For future data, the problem has been fixed (Bugzilla #624). BMR - 2007/6/24 - Allowing the maximum number of open files to be increased revealed a problem: having more than 255 files open will cause file corruption (Bugzilla #440). Specifically, file number (255*m)+n will overwrite file number n, where n is [0..255] and m is [1..p], where (255*p) <= maximum system allowed. This problem is fixed. In addition, a new API is added for convenience: SDgetfilename -- retrieves the name of the file given its ID. BMR - 2005/10/05 - Prior to this release, the maximum number of files that can be open at the same time was only 32. This limit was implemented as a defined constant in the library, which users could not change without recompiling the library. In this release, if this limit is reached, the library will increase it to the system limit, minus 3 to account for stdin, stdout, and stderr. In addition, three APIs are added for more flexibility (Bugzilla #396/Access bug #935): SDreset_maxopenfiles -- Resets the maximum number of files that can be open at a time. SDget_maxopenfiles -- Retrieves the current number of open files allowed in HDF and the maximum number of open files allowed on a system. SDget_numopenfiles -- Returns the number of files currently open. Note: (1) Because there are also stdin, stdout, and stderr, the maximum limit that can be set must only be (system limit) - 3. (2) If the system maximum limit is reached, the library will push the error code DFE_TOOMANY onto the error stack. The user application can detect this after an SDstart fails. BMR - 2005/10/21 - The problem where ncgen failed with "too many attributes" on some user files is fixed (Bugzilla #373). BMR - 2005/10/28 - The problem where SDcheckempty returns "not empty" for empty chunked and compressed datasets is fixed (Bugzilla #218). BMR - 2005/10/31 - If a VSgetattr was called twice for an attribute, the second call would fail (Bugzilla #486). This problem is fixed. BMR - 2005/12/30 - A bug with reading metadata in hdfimport is fixed (Bugzilla #558). BMR - 2006/9/23 - The problem of writing to two unlimited 1-D arrays is fixed (Access bug #525). BMR - 2006/11/11 Documentation ============== Documentation contains multiple bug fixes and improvements. Platforms tested ================ HDF 4.2 Release 2 has been tested on the following platforms: AIX 5.3 (32/64 bit) xlc 7.0.0.0 (copper) xlf 9.1.0.3 FreeBSD 6.2 (32- and 64-bit) gcc and f77 GNU 3.4.6 (duty and liberty) GNU Fortran (GCC) 4.2.2 20070905 HP-UX B.11.23 (32- and 64-bit)HP aC++/ANSI C B3910B A.06.02 (sirius) HP F90 v3.1 IRIX64 6.5 MIPSpro cc 7.4.4m (ucar mop1 64 & n32) F90 MIPSpro 7.4.4m Linux 2.4.21-47.ELsmp #1 SMP gcc and f77 3.2.3 i686 i386 GNU/Linux (osage) Linux 2.6.9-42.0.10.ELsmp #1 gcc (GCC) 3.4.6 SMP i686 i386 G95 (GCC 4.0.3 (g95 0.91!) Nov 21 2006) (kagiso) PGI C, Fortran, C++ 6.2-5 32-bit icc (ICC) 9.1 Intel(R) Fortran Compiler for 32-bit applications, Version 9.1 Linux 2.6.16.46-0.12-debug #1 SMP ia64 GNU/Linux Intel(R) C++ Version 10.0.025 (ucar hir1) Intel(R) Fortran Itanium(R) Version 10.0.025 Linux 2.6.16.46-0.14-smp #1 Intel(R) C++ for Intel(R) EM64T Ver. 
9.1.037 SMP x86_64 GNU/Linux
        Intel(R) Fortran Intel(R) EM64T Ver. 9.1.031 (smirom)
        gcc (GCC) 4.1.2 20070115 (SUSE Linux)
        G95 (GCC 4.0.3 (g95 0.91!) Apr 19 2007)

    Linux 2.6.5-7.283-rtgfx Altix SMP ia64
        Intel(R) C++ Version 9.0 (cobalt)
        Intel(R) Fortran Itanium(R) Version 9.0

    SunOS 5.8 (32- and 64-bit)
        Sun WorkShop 6 update 2 C 5.3 (sol)
        Sun WorkShop 6 update 2 Fortran 95 6.2

    SunOS 5.10 (32- and 64-bit)
        Sun WorkShop 6 update 2 C 5.8 (linew)
        Sun WorkShop 6 update 2 Fortran 95 8.2 Patch 121019-06

    IA-64 Linux 2.4.21.SuSE_292.til1 ia64
        gcc (GCC) 3.2.2 (NCSA tg-login)
        Intel(R) C++ Version 8.0
        Intel(R) Fortran Compiler Version 8.0

    Windows XP
        Visual Studio 6.0
        Visual Studio .NET (with Intel Fortran 9.1)
        Visual Studio 2005 (with Intel Fortran 9.1)
        cygwin (gcc 3.4.4)

    Windows Vista
        Visual Studio 2005 (no fortran)

    MAC OS X Intel Darwin 8.10.1
        i686-apple-darwin8-gcc-4.0.1 (GCC) 4.0.1
        g95 0.91
        gfortran GNU Fortran (GCC) 4.3.0 20070518

Known problems
==============
o hdfcomp fails on HPUX 11.23 for the 64-bit version of the library.
o This release does not support VMS systems.
o N-Bit compression is not supported with Fortran APIs.
o Using both fill-value and compression on SD datasets does not work.
o When a one-dimensional SDS has the same name as the dimension, subsequent
  accesses to the dimension produce unpredictable results.
o When using PGI compilers, make sure that the JPEG library is also compiled
  with a PGI C compiler; linking with JPEG built with gcc causes JPEG library
  tests to fail. To bypass the problem:
    x Set the LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a, where
      $PGI_JPEG_INSTALL_DIR points to the installation directory for the
      PGI-compiled JPEG library:
          setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a
    x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure
      with the PGI-compiled JPEG library:
          ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib....

%%%4.2r1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

                    HDF 4.2 Release 1
                    February, 2005

INTRODUCTION

This document describes the differences between HDF 4.2r0 and HDF 4.2r1.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2r1.

The HDF 4.2r1 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     ftp://hdf.ncsa.uiuc.edu/HDF/Documentation/

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
  -- Configuration
  -- Library
  -- Utilities
- Support for new platforms and compilers
- Bugs fixed since HDF4.2r0
- Documentation
- Platforms Tested
- Known problems

New Features and Changes:
========================
Configuration
=============
   * By default, HDF4 libraries and utilities are installed under /NewHDF
     12/01/2004 EIP
   * Windows configuration, build, and testing procedures have been changed.
     Please see the INSTALL_WINDOWS.txt file for more information.
     02/12/2005 EIP

Library
=========
   * New API SDidtype was added to the library (bug #766)
     01/23/2005 EIP for BMR
   * HCgetcompress renamed to HCPgetcompress
   * New API HCget_config_info added
   * The default chunk cache size was changed for 2D and higher chunks
   * Pablo instrumentation was removed

Utilities
=========
   * hdiff and hrepack are supported on Windows.
* Substantial performance improvements in hdiff and hrepack

Support for new platforms and compilers
=======================================
   * Fortran IBM xlf v 8.1 and Absoft f95 version 8.2 compilers are supported
     on Mac OSX. 12/07/2004 EIP
   * Absoft Fortran compiler f95 version 9.0 is supported on Linux 2.4
     12/07/2004 EIP
   * PGI C and Fortran compilers are supported on Linux 2.4
   * Intel C and Fortran compilers are supported on Linux 2.4
   * AMD Opteron is supported

Bugs fixed since HDF4.2r0
=========================
   * VERY IMPORTANT: Data compressed with SZIP may be corrupted; fixed.
     For more information see "HDF4.2r1 SZIP Release Notes" available at
     http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/SZIP_HDF4_2r1.pdf
     02/12/2005 EIP
   * Fortran couldn't read names with spaces when NetCDF interfaces were
     used; fixed 02/12/2005 EIP
   * Library failed to compile in the presence of the NetCDF library; fixed
     11/22/2004 EIP
   * h4fc couldn't create object files; fixed 01/23/2005 EIP
   * When the rank of an SDS is 0, some SD APIs give a segmentation fault
     (bug 1045); fixed 01/23/2005 EIP for BMR
   * Some GR images with special elements are read in as duplicates
     (bug 814); fixed 02/14/2005 BMR
   * Many bugs fixed in the hdiff and hrepack utilities

Documentation
==============
Documentation contains multiple bug fixes and improvements.

Platforms Tested
================
HDF 4.2 Release 1 has been tested on the following platforms:

   AIX 5.1 (32 and 64-bit)    xlc 6.0.0.6  xlf 8.1.1.6
   AIX 5.2 (32 and 64-bit)    xlc 6.0.0.8  xlf 8.1.1.7
   Cray SV1 10.0.1.2          Cray Standard C Version 6.6.0.3.6
                              Cray Fortran: Version 3.6.0.3.1
   Cray TS IEEE               Cray Standard C Version 6.4.0.3
                              Cray Fortran: Version 3.4.0.0
   FreeBSD 4.9                gcc 2.95.4  GNU Fortran 0.5.25
   HP-UX B.11.00              HP C HP92453-01 A.11.01.20  HP F90 v2.4
                              HP ANSI C++ B3910B A.03.13
   IRIX64 6.5 (64 & n32)      MIPSpro cc 7.3.1.3m  F90 MIPSpro 7.3.1.3m
   Linux 2.4.20.28            gcc 2.96, GNU Fortran 0.5.25, Absoft Fortran 9.0
                              Intel(R) C++ and Fortran Compilers Version 8.1
   Linux 2.4.21-268-smp #1 SMP x86_64 (AMD)
                              gcc 3.3.1  GNU Fortran (GCC) 3.3.1
                              Intel(R) C++ and Fortran Compilers Version 8.1
                              PGI C and Fortran Compilers Version 5.2-1
   Linux 2.4.21-27.0.1.ELsmp #1 SMP
                              gcc 3.2.3
                              PGI C and Fortran Compilers Version 5.2-1
   Linux 2.4.21-sgi303rp05012313_10138 (Altix)
                              Intel C++ and Intel Fortran Version 8.1
   Linux 2.4.20-31.9smp_perfctr_lustre (IA-32)
                              Intel(R) C++ Version 8.0
                              Intel(R) Fortran Compiler Version 8.0
   Linux 2.4.21.SuSE_241.bef1 (IA-64)
                              Intel(R) C++ Version 8.0
                              Intel(R) Fortran Compiler Version 8.0
   OSF1 V5.1                  Compaq C V6.5-303  HP Fortran V5.5A-3548
                              HP Fortran Compiler X5.5A-4085-48E1K
   SunOS 5.7 (32 and 64 bit)  WorkShop Compilers 5.0 98/12/15 C 5.0
   (Solaris 2.7)              WorkShop Compilers 5.0 99/09/16 FORTRAN 77 5.0
                              patch 107596-03
                              gcc 3.2.2  g77 GNU Fortran (GCC 3.2.2) 3.2.2
   SunOS 5.8 (32 and 64 bit)  Sun WorkShop 6 update 2 C 5.3
   (Solaris 2.8)              Sun WorkShop 6 update 2 Fortran 77 5.3
   SunOS 5.9 (32 and 64 bit)  Sun C 5.6 compiler, Sun Fortran 95 8.0
   Windows 2000 (NT5.0)       MSVC++ 6.0  DEC Visual Fortran 6.0
                              Intel C and F90 compilers version 7.1
   Windows XP                 MSVC++.NET, Intel Fortran 8.1
                              (static libraries only)
   MAC OS X Darwin 7.7        gcc 3.3  IBM Fortran xlf 8.1
                              Absoft Fortran 8.2

Known problems
==============
o SZIP Library is not available for Crays SV1 and TS.
  Fortran APIs do not support SZIP compression.
o SZIP compression cannot be used with GR interfaces. For more information
  see "HDF4.2r1 SZIP Release Notes" available at
  http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/SZIP_HDF4_2r1.pdf
o This release doesn't support VMS systems.
o N-Bit compression is not supported with Fortran APIs.
o Using both fill-value and compression on SD datasets doesn't work.
o SDgetdimscale incorrectly returns FAIL and/or returns incorrect data when
  the associated SDS has the same name as the dimension.
o When using PGI compilers, make sure that the JPEG library is also compiled
  with a PGI C compiler; linking with JPEG built with gcc causes JPEG library
  tests to fail. To bypass the problem:
    x Set the LIBS flag to $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a, where
      $PGI_JPEG_INSTALL_DIR points to the installation directory for the
      PGI-compiled JPEG library:
          setenv LIBS $PGI_JPEG_INSTALL_DIR/lib/libjpeg.a
    x Use the --with-jpeg=$PGI_JPEG_INSTALL_DIR configure flag to configure
      with the PGI-compiled JPEG library:
          ./configure --with-jpeg=$PGI_JPEG_INSTALL_DIR --with-zlib....

%%%4.2r0%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

                    HDF 4.2 Release 0
                    December 2003

INTRODUCTION

This document describes the differences between HDF 4.1r5 and HDF 4.2r0.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.2r0.

The HDF 4.2r0 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     ftp://hdf.ncsa.uiuc.edu/HDF/Documentation/

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
- Support for new platforms
- Bugs fixed since HDF4.1r5
- Documentation
- Platforms Tested
- Known problems

New Features and Changes:
========================
o IMPORTANT: HDF4 HAS A NEW CONFIGURATION
  The ZLIB and JPEG libraries were removed from the HDF4 distribution source.
  ZLIB and JPEG have to be installed on the system before the HDF4 library
  can be built or HDF4 precompiled binaries can be used.
  Please read INSTALL in the top HDF4 directory for instructions on how to
  build the HDF4 library and applications.

o HDF4 has an optional SZIP compression method; in order to use SZIP
  compression, the SZIP library has to be installed on the system. Please
  refer to the INSTALL file for instructions on how to build with/without
  the SZIP Library.
  SZIP in HDF4 is free for non-commercial use; see
  http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/Commercial_szip.html
  for information regarding commercial use.
  For more information about SZIP compression see
  http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/ and the "HDF4 Reference Manual"
  entries for the GRsetcompress and SDsetcompress functions.

o IMPORTANT note about prebuilt binaries:
  The NCSA precompiled binaries have the SZIP compression method enabled for
  all platforms except Crays T3E and SV1 and Linux 2.4 SuSE x86_64. To use
  the binaries, download the SZIP library from
  http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/

o The following new tools have been added:
     hrepack
     hdiff
     hdfimport
  See "HDF4 Reference Manual" in the HDF 4.2r0 documentation set for more
  information.

o fp2hdf is removed. It is replaced with hdfimport.

o Helper scripts to facilitate HDF4 installation and compilation were added:
     h4cc       - to compile C applications with the HDF4 Libraries
     h4fc       - to compile Fortran applications with the HDF4 Libraries
     h4redeploy - to fix binary installation
  See "HDF4 Reference Manual" in the HDF 4.2r0 documentation set for more
  information.
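  For illustration, a minimal C program built with the new h4cc helper script
  might look like the sketch below. The program, file name, and dataset name
  are made up for this example and error checking is omitted; the SD calls
  shown are the standard ones from the HDF4 Reference Manual.

      /* sd_example.c -- a minimal sketch; names and sizes are illustrative.
       * Build with the h4cc helper script, e.g.:
       *     h4cc -o sd_example sd_example.c
       * (h4fc plays the same role for Fortran sources.)
       */
      #include "mfhdf.h"

      int main(void)
      {
          int32 sd_id, sds_id;
          int32 dims[2]  = {5, 10};          /* dataset dimensions        */
          int32 start[2] = {0, 0};
          int32 edges[2] = {5, 10};
          int32 data[5][10];
          int   i, j;

          for (i = 0; i < 5; i++)            /* fill some sample data     */
              for (j = 0; j < 10; j++)
                  data[i][j] = i * 10 + j;

          sd_id  = SDstart("sd_example.hdf", DFACC_CREATE);
          sds_id = SDcreate(sd_id, "Sample2D", DFNT_INT32, 2, dims);

          SDwritedata(sds_id, start, NULL, edges, (VOIDP)data);

          SDendaccess(sds_id);               /* release dataset and file  */
          SDend(sd_id);
          return 0;
      }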
Support for new platforms
=========================
o HDF4 was ported to the following platforms:
     AIX 5.1 64-bit version
     MacOSX
     Linux 2.4 RH8 and RH9
     Linux 2.4 RH8 64-bit, SuSE 64-bit
     Linux 2.4 ia64
     Linux 2.4 SGI (Altix)

Bugs fixed since HDF4.1r5
=========================
1.  "hdp dumpgr" and "hdp dumpsds" have two new options:
       -g to suppress the data of global (or file) attributes
       -l to suppress the data of local attributes
2.  The problem where hdp failed on a very long file name has been fixed.
    (bug #693)
3.  The problem where VSinquire failed when being called on a vdata that had
    no fields defined has been fixed. (bug #626)
4.  When the values of VGNAMELENMAX and VSNAMELENMAX are changed, the hdp
    output doesn't reflect the change. This problem is now fixed. (bug #606)
5.  hdp dumpvg sometimes failed when reading a file that had a vgroup being
    inserted into another. This has been fixed. (bug #477)
6.  hdf2gif failed when a user tried it on JPEG compressed images (Bug #601).
    The problem is now fixed. An error message is displayed if the image is
    not 8-bit. If the image is 24-bit, the message suggests using hdf2jpeg.
7.  ncdump failed to read NetCDF 3.5 files when there was more than one
    variable with unlimited dimensions. Fixed.
8.  The NetCDF part of the HDF4 library was not ported to Compaq Tru64
    systems. Fixed.
9.  The hdp commands dumpsds, dumpgr, dumpvd, and dumpvg now display an
    informative message when a non-HDF file is given as input. (Bug #817)
10. The compilation warnings and errors on the macro HDFclose are fixed.
    (Bug #818)

Documentation
==============
Entries for the new utilities hdiff, hrepack, and hdfimport and the helper
scripts h4cc, h4fc, and h4redeploy were added to the Reference Manual.

Platforms Tested
================
HDF 4.2 Release 0 has been tested on the following platforms:

   AIX 5.1 (32 and 64-bit)    xlc 6.0.0.2  xlf 8.1.1
   Cray T3E sn6606 2.0.6.08   Cray Standard C Version 6.6.0.2
                              Cray Fortran Version 3.6.0.2
   Cray SV1 sn9617 10.0.1.2   Cray Standard C Version 6.6.0.2
                              Cray Fortran Version 3.6.0.2
   FreeBSD 4.9                gcc 2.95.4  g++ 2.95.4
   HP-UX B.11.00              HP C HP92453-01 A.11.01.20  HP F90 v2.4
                              HP ANSI C++ B3910B A.03.13
   IRIX64 6.5 (64 & n32)      MIPSpro cc 7.3.1.3m  F90 MIPSpro 7.3.1.3m
   Linux 2.4.20-20.7 (RH8)    gcc 3.3.1
                              Intel(R) C++ Version 7.1
                              Intel(R) Fortran Compiler Version 7.1
   Linux 2.4.20-20.9 (RH9)    gcc 3.2.2
   Linux 2.4.21-2.9.5ws x86_64
                              gcc version 3.2.3 20030502
                              (Red Hat Linux 3.2.3-16)
                              g77 based on gcc version 3.2.3
   Linux 2.4.19-SMP #1 x86_64 gcc version 3.3.2 (SuSE Linux)
                              g77 version 3.3.2
   OSF1 V5.1                  Compaq C V6.4-014  Compaq Fortran V5.5-1877
   SunOS 5.7 (32 and 64 bit)  WorkShop Compilers 5.0 98/12/15 C 5.0
   (Solaris 2.7)              WorkShop Compilers 5.0 98/10/25 FORTRAN 77 5.0
                              gcc 3.2.2  g77 GNU Fortran (GCC 3.2.2) 3.2.2
   SunOS 5.8 (32 and 64 bit)  Sun WorkShop 6 update 2 C 5.3
   (Solaris 2.8)              Sun WorkShop 6 update 2 Fortran 77 5.3
   IA-32 Linux 2.4.9-31pctr   Intel(R) C++ Version 7.0
                              Intel(R) Fortran Compiler Version 7.0
   IA-64 Linux 2.4.16 ia64    Intel(R) C++ Version 7.0
                              Intel(R) Fortran Compiler Version 7.0
   IA-64 Linux 2.4.19-SMP     Intel(R) C++ Version 7.1
                              Intel(R) Fortran Compiler Version 7.1
                              gcc 3.2  g77 GNU Fortran (GCC 3.2) 3.2
   IA-64 Linux 2.4.21-sgi     Intel(R) C++ Version 7.1
   (Altix)                    Intel(R) Fortran Compiler Version 7.1
   Windows 2000 (NT5.0)       MSVC++ 6.0  DEC Visual Fortran 6.0
                              Intel C and F90 compilers version 7.1
   Windows XP                 MSVC++.NET
   MAC OS X Darwin 6.8        gcc Apple Computer, Inc.
GCC version 1175, based on gcc version 3.1 Known problems ============== o SZIP Library is not available for Crays SV1 and T3E Fortran APIs do not support SZIP compression. o NetCDF tests nctest (C) and ftest (Fortran) fail to read from NetCDF 3.5 files o On Linux RH8 64-bit SZIP tests fail if library is compiled in production mode. All tests pass in the debug mode. o This release doesn't support VMS system. o On Linux with gcc compilers Fortran NetCDF APIs cannot read attributes, variables, and dimensions when name contain spaces. o HDF4 Library cannot be built with PGI compilers. o N-Bit compression is not supported with Fortran APIs. o Using both fill-value and compression on SD datasets doesn't work. o --prefix defines where the installation path is. This version has the default set as /usr/local which is different from previous versions. o New utilities hdiff and hrepack are not available for Windows 2000. %%%4.2r0-Beta%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF 4.2 Release 0-Beta September 2003 INTRODUCTION This document describes the differences between HDF 4.1r5 and HDF 4.2r0-Beta. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.2r0-Beta The HDF 4.2r0-Beta documentation can be found on the NCSA ftp server (ftp.ncsa.uiuc.edu) in the directory: /HDF/pub/outgoing/hdf4/4.2-Beta/ First-time HDF users are encouraged to read the FAQ in this release for more information about HDF. Users can also look at the home page for HDF at: http://hdf.ncsa.uiuc.edu/ If you have any questions or comments, please send them to: hdfhelp@ncsa.uiuc.edu CONTENTS - New Features and Changes - Platforms Tested New Features and Changes: ======================== o ZLIB and JPEG libraries were removed from the HDF4 distribution source. Please read INSTALL-4.2r0-Beta in the top HDF4 directory for instructions how to build HDF4 Library and applications. o HDF4 has an optional SZIP compression; please refer to the INSTALL-4.2r0-Beta file for instructions how to build with/without the SZIP Library. SZIP in HDF4 is free for non-commercial use; see http://hdf.ncsa.uiuc.edu/doc_resource/SZIP/Commercial_szip.html for information regarding commercial use. For more information about SZIP compression see http://hdf.ncsa.uiuc.edu/HDF4/doc_resource/SZIP/ and the "HDF4 Reference Manual" entries for the GRsetcompress and SDsetcompress functions. o The following new tools have been added hrepack hdiff hdfimport See "HDF4 Reference Manual" in the HDF 4.2r0-Beta documentation set for more information. o HDF4 was ported to the following platforms AIX 5.1 64-bit version MacOSX Linux 2.4 RH8 and RH9 Please refer to the bugs_fixed.txt file for more details on bugs that were fixed. Platforms Tested: ================ HDF 4.2 Release 0-Beta has been tested on the following platforms: FreeBSD 4.9 HP-UX B.11.00 AIX 5.1 (32 and 64-bit) IRIX64 6.5 (-n32, -64) Linux 2.4 Solaris 2.7, 2.8 (32 and 64-bit) MacOSX No precompiled binaries is available for this release. %%%4.1r5%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF 4.1 Release 5 November 2001 INTRODUCTION This document describes the differences between HDF 4.1r4 and HDF 4.1r5. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.1r5. 
The HDF 4.1r5 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     /HDF/HDF/Documentation/HDF4.1r5/

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
- Platforms Tested

New Features and Changes:
========================
o The following Vdata routines were added:
     VSsetblocksize/vsfsetblsz   -- sets the block size of the linked-block
                                    element.
     VSsetnumblocks/vsfsetnmbl   -- sets the number of blocks for a
                                    linked-block element.
     VSgetblockinfo/vsfgetblinfo -- retrieves the block size and the number
                                    of blocks of a linked-block element.

o Two routines were added to get compression information for the SD and GR
  interfaces, including chunked elements: SDgetcompress/sfgcompress and
  GRgetcompress/mggcompress.
  Note:
     - For a JPEG image, GRgetcompress only returns the compression type,
       not the compression information (i.e., quality and force_baseline).
       This information is not currently retrievable.
     - Getting the compression type for JPEG chunked images is not working
       yet.

o "hdp dumpgr" has a new option, -pd, to print palette data only. Also,
  whenever option -p or -pd is given, only palettes are printed; no images
  or file attributes are printed.

o A new FORTRAN function, heprntf (HEprint), was added. It takes two
  arguments: file name and level. If the file name string has 0 length,
  then error messages will be printed to standard output.

o On Windows, the unresolved symbol (error_top) error has been fixed when
  calling HEclear and linking with the DLL. Users who want to use the HDF
  DLL should define HDFAPDLL in their applications. Simply go to Project
  Settings and add HDFAPDLL as the predefined constant.

o A memory leak in the netCDF portion of the HDF/mfhdf distribution was
  fixed.

o The "#define NULL" was removed since ANSI C compilers are required to
  define NULL.

o When using "hdp dumpgr", data was being printed in the range of 0-250
  when it should have been between 0-168. This problem is now fixed.

Please refer to the bugs_fixed.txt file for more details on bugs that were
fixed.

Platforms Tested:
================
HDF 4.1 Release 5 has been tested on the following platforms:

     Cray SV1 10.0.0.8
     Cray T3E sn6711 2.0.5.55
     Compaq Tru64 Unix (OSF1) 5.1
     DEC Alpha/OpenVMS AXP 7.2-1
     FreeBSD 4.4
     HP-UX B.11.00
     IBM SP 4.3
     IRIX 6.5
     IRIX64 6.5 (-n32, -64)
     Linux 2.2.18smp
     Solaris 2.7, 2.8
     Windows NT/98/2000

For more information on the platforms that were tested and for which we
provide pre-compiled binaries, please refer to the following web page
(accessible from the HDF home page):

     http://hdf.ncsa.uiuc.edu/platforms.html

Known problems:

  Writing n-bit datasets from FORTRAN with the SD interface is not working.

  SDgetchunkinfo does not return compression coding or modeling type.

  Using both fill-values and compression on SD datasets is not currently
  working; use one or the other, but not both.

  Dumping compressed Vdatas with vshow or hdp is not working.

  Reading or writing compressed images with the GR interface is not working.

%%%4.1r4%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

                    ABOUT HDF 4.1 Release 4
                    October 2000

INTRODUCTION

This document describes the differences between HDF 4.1r3 and HDF 4.1r4.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.1r4.
The HDF 4.1r4 documentation can be found on the NCSA ftp server (ftp.ncsa.uiuc.edu) in the directory: /HDF/HDF/Documentation/HDF4.1r4/ First-time HDF users are encouraged to read the FAQ in this release for more information about HDF. Users can also look at the home page for HDF at: http://hdf.ncsa.uiuc.edu/ If you have any questions or comments, please send them to: hdfhelp@ncsa.uiuc.edu CONTENTS - New Features and Changes - Platforms Tested - Known Problems New Features and Changes: ======================== This release focuses on new features and changes added to the GR interface. o Two new utilities have been added to HDF, gif2hdf and hdf2gif. The gif2hdf utility will convert a GIF image into an HDF file containing a GR image. The hdf2gif utility will convert an HDF GR image into a GIF image. o Chunking and chunking with compression have been added to the GR interface. o JPEG compression with the GR interface was not working properly. This problem has been fixed. Several hdp options have been added: o Added -s option to dumpgr and dumpsds to allow printing data as a stream instead of breaking the lines at 65 characters. o Added option -c to dumpgr and dumpsds to allow printing clean output for attributes with type DFNT_CHAR. With this option, hdp will print space characters, such as horizontal tabs, CRs, and LFs, as they are instead of "\digit" (still the default.) This option also prints "..." for one or more null characters among the data. o Added option -l to dumpgr to allow printing data in different interlace modes. Platforms Tested: ================ HDF 4.1 Release 4 has been tested on the following platforms: Cray J90 (available after initial 4.1r4 release) Cray T3E DEC Alpha/Digital Unix DEC Alpha/OpenVMS Exemplar FreeBSD HP-UX IRIX IRIX64 (-n32, -64) Linux Solaris Solaris x86 SP Windows NT/98/2000 For more information on the platforms that were tested and for which we provide pre-compiled binaries, please refer to the following web page (accessible from the HDF home page): http://hdf.ncsa.uiuc.edu/platforms.html Known Problems: ============== o The ncgen utility fails on the IBM SP. %%%4.1r3%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF 4.1 Release 3 May 7, 1999 INTRODUCTION This document describes the differences between HDF 4.1r2 and HDF 4.1r3. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.1r3. The release notes provide more in-depth information concerning the topics discussed here. The HDF 4.1r3 documentation can be found on the NCSA ftp server (ftp.ncsa.uiuc.edu) in the directory: /HDF/HDF/Documentation/HDF4.1r3 First-time HDF users are encouraged to read the FAQ in this release for more information about HDF. Users can also look at the home page for HDF at: http://hdf.ncsa.uiuc.edu/ If you have any questions or comments, please send them to: hdfhelp@ncsa.uiuc.edu CONTENTS - New Features and Changes - Platforms Tested - Known Problems - Acknowledgements New Features and Changes: ======================== o HDF 4.1r2 was unable to properly read HDF SDSs created with HDF 3.3x. It did not read the correct SDS names. This problem has been fixed. o Many problems have been fixed with the GR interface, including the following: - The GR interface can now read compressed files created with the DFR8 and DF24 interfaces, except for those which were compressed with IMCOMP compression. - The GR interface can read and write images compressed with RLE, GZIP and Skipping Huffman compression methods. 
- Palettes can now be written and read properly with the GR interface.
    - 24-bit raster images can now be read by the GR interface.

o You can now create an SDS with a name up to 256 characters in length.
  The previous limit was 64.

o HDF now supports IJG JPEG version 6b and Gzip version 1.1.3.

o Numerous hdp problems have been fixed, including the following:
    - hdp no longer fails on an HDF file which contains a vdata that no
      records have been written to.
    - hdp no longer fails on the PC and Macintosh dumping large SDSs.
    - GR file attributes can now be displayed.
    - A palette can now be dumped with the GR command.

o SDfileinfo no longer returns the wrong number of datasets for old files
  created with the DFSD interface.

o This will be the last release in which SunOS 4.1.4 is supported.

Check the ./bugs_fixed.txt file for other changes that are not listed here.

Platforms Tested:
================
HDF 4.1 Release 3 has been tested on the following platforms:

     Cray J90
     Cray T90 (CFP, IEEE)
     Cray T3E
     DEC Alpha/Digital Unix
     DEC Alpha/OpenVMS
     DEC Alpha NT
     VAX OpenVMS
     Exemplar
     FreeBSD
     HP-UX 10.2
     IRIX 6.5
     IRIX64 6.5 (-n32, -64)
     Linux
     Macintosh
     Solaris
     Solaris x86
     SP
     SunOS 4.1.4
     Windows NT/95

For more information on the platforms that were tested and for which we
provide pre-compiled binaries, please refer to the following web page
(accessible from the HDF home page):

     http://hdf.ncsa.uiuc.edu/platforms.html

Known Problems:
==============
o On Alpha OpenVMS version 6.2, the DF.OLB and MFHDF.OLB Libraries should be
  created with optimization turned off. Otherwise hdftest fails (the
  sfgichnk function returns incorrect information).
o On VAX Open VMS 6.2, the ncgen utility core dumps and an error occurs when
  reading GR image data with user-defined fill values.
o If you encounter problems building on a platform, please be sure to check
  the INSTALL file at the top of the HDF source tree, in case these problems
  are documented in section 2.5, Platform-specific Notes.
o On Windows NT, the hdp utility fails in the debug version when using the
  list command.

Acknowledgements:
================
Fortner Software LLC ("Fortner") created the reference implementations for
the Macintosh and Windows NT/95 of the HDF 4.1r3 library. For more
information, please refer to the macintosh.txt and windows.txt files in the
./release_notes/ directory.

%%%4.1r2%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

                    ABOUT HDF 4.1 Release 2
                    March 16, 1998

INTRODUCTION

This document describes the differences between HDF 4.1r1 and HDF 4.1r2.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.1r2. The release notes provide more in-depth
information concerning the topics discussed here.

The HDF 4.1r2 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     /HDF/Documentation/HDF4.1r2

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
- Platforms Tested
- Known Problems
- Important Fixes
- Acknowledgements

New Features and Changes:
========================
o Data chunking is now supported with the GR interface. New routines for
  creating and manipulating chunked GR images have been added. Please refer
  to the ./release_notes/new_functions.txt file and HDF Reference Manual for
  information on using chunked GRs. A minimal sketch of creating a chunked
  GR image is shown below.
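  The following sketch illustrates the chunked GR routines mentioned above;
  the file name, image name, sizes, and chunk dimensions are made up for
  this example, and error checking is omitted.

      /* gr_chunk_example.c -- a minimal sketch of creating a chunked
       * GR image (illustrative only).
       */
      #include "hdf.h"

      int main(void)
      {
          int32 file_id, gr_id, ri_id;
          int32 dim_sizes[2] = {8, 8};     /* image width and height      */
          int32 start[2] = {0, 0};
          uint8 image[8][8];               /* one 8-bit component         */
          HDF_CHUNK_DEF chunk_def;
          int   i, j;

          for (i = 0; i < 8; i++)          /* fill some sample pixels     */
              for (j = 0; j < 8; j++)
                  image[j][i] = (uint8)(i + j);

          file_id = Hopen("gr_chunked.hdf", DFACC_CREATE, 0);
          gr_id   = GRstart(file_id);
          ri_id   = GRcreate(gr_id, "ChunkedImage", 1, DFNT_UINT8,
                             MFGR_INTERLACE_PIXEL, dim_sizes);

          chunk_def.chunk_lengths[0] = 4;  /* store the image in 4x4 chunks */
          chunk_def.chunk_lengths[1] = 4;
          GRsetchunk(ri_id, chunk_def, HDF_CHUNK);

          GRwriteimage(ri_id, start, NULL, dim_sizes, (VOIDP)image);

          GRendaccess(ri_id);
          GRend(gr_id);
          Hclose(file_id);
          return 0;
      }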
o In previous releases, many C routines existed for which there were no
  Fortran counterparts. With HDF 4.1r2, we have added a Fortran routine for
  most C routines. Please check the ./release_notes/new_functions.txt file
  for a list of the new functions added to HDF.

o This is the first release in which the Java Products (the Java-based HDF
  Viewer (JHV) and the Java HDF interface (JHI)) are incorporated in the HDF
  release itself. For information on the Java Products, please refer to the
  HDF home page under Information about HDF
  (http://hdf.ncsa.uiuc.edu/about.html).

o In the SD interface, HDF now defaults to ONLY storing the new version of
  the dimension representation added in HDF 4.0r1. When the dimension
  representation was changed in 4.0r1, the HDF library defaulted to include
  both the new and old dimension representations in an HDF file. Now, this
  new dimension representation is stored by default. The SDsetdimval_comp
  function can be used to change the dimension representation stored.

  Following is a detailed description of the difference between the new and
  old representations:

  Prior to HDF 4.0r1, a vgroup was used to represent a dimension. The vgroup
  had a single-field vdata with a class of "DimVal0.0". The vdata had as
  many records as the dimension size, with each record holding a fake value
  from 0, 1, 2, ..., (dimension size - 1). The fake values were not really
  required and took up a large amount of space. For applications that
  created large one-dimensional array datasets, the disk space taken by
  these fake values almost doubled the size of the HDF file.

  In order to omit the fake values, the new version of the dimension vdata
  was implemented. The new version uses the same structure as the old
  version. The only differences are that the vdata has only 1 record, whose
  value is the dimension size, and that the vdata's class is "DimVal0.1",
  to distinguish it from the old version.

o Platforms dropped with this release: Cray Y-MP, T3D, and Linux (a.out)

o Extensive changes have been made to the Reference Manual and User's Guide.
  The updated Reference Manual is available with this release. The updated
  User's Guide will be available in the near future.

Platforms Tested:
================
HDF 4.1 Release 2 has been tested on the following platforms:

     Cray T90 (CFP, IEEE)       IRIX 6.2
     Cray T3E                   IRIX64 6.4 (-n32, -64)
     DEC Alpha/Digital Unix     Linux (elf)
     Exemplar                   Solaris
     FreeBSD                    Solaris x86
     HP-UX 9.03                 SP2
     HP-UX 10.2                 SunOS
     IRIX 5.3

** The Windows NT/95, Macintosh, Dec Alpha OpenVMS and VAX OpenVMS releases
   are not available with this release of HDF4.1r2. Separate releases for
   these platforms will be available in the near future.

For more information on the platforms that were tested and for which we
provide pre-compiled binaries, please refer to the following web page
(accessible from the HDF home page):

     http://hdf.ncsa.uiuc.edu/platforms.html

Known Problems:
==============
o Writing n-bit datasets from FORTRAN with the SD interface is not working.
o SDgetchunkinfo does not return compression coding or modeling type.
o Using both fill-values and compression on SD datasets is not currently
  working; use one or the other, but not both.
o Dumping compressed Vdatas with vshow or hdp is not working.
o Reading or writing compressed images with the GR interface is not working.
o With the GR interface, you cannot create a raster image without writing
  data to it.

Important Fixes:
===============
o HDF no longer core dumps when reading a NetCDF file.
o HDF now supports little-endian conversion for VAX and Dec Alpha OpenVMS.
o The problems that occurred on the Cray with HDF 4.1r1 have been corrected.

See the ./release_notes/bugs_fixed.txt file for more information on bugs
fixed in this release.

Acknowledgements:
================
Fortner Software LLC ("Fortner") created the reference implementations for
Macintosh and Windows NT/95 of the HDF 4.1r2 library, which will be
available in the near future. For more information, please refer to the
macintosh.txt and windows.txt files in the ./release_notes/ directory
(see above).

==================new_functions.txt==================================

This file contains a list of the new functions added with HDF 4.1r2. The
functions in parentheses were already present in the HDF library, and are
included for clarity.

C                   FORTRAN        Description
--------------------------------------------------------------------------------
(SDsetcompress)     sfscompress    compresses SDS
(SDwritechunk)      sfwchnk        writes the specified chunk of NUMERIC
                                   data to the SDS
(SDwritechunk)      sfwcchnk       writes the specified chunk of CHARACTER
                                   data to the SDS
(SDreadchunk)       sfrchnk        reads the specified chunk of NUMERIC
                                   data from the SDS
(SDreadchunk)       sfrcchnk       reads the specified chunk of CHARACTER
                                   data from the SDS
(SDsetchunk)        sfschnk        makes the SDS a chunked SDS
(SDsetchunkcache)   sfscchnk       sets the maximum number of chunks to cache
(SDgetchunkinfo)    sfgichnk       gets info on SDS
(SDsetblocksize)    sfsblsz        sets block size
(SDisrecord)        sfisrcrd       checks if an SDS is unlimited
(GRsetcompress)     mgscompress    compresses raster image
GRsetchunk          mgschnk        makes a raster image a chunked raster image
GRgetchunkinfo      mggichnk       gets info on a raster image
GRsetchunkcache     mgscchnk       sets the maximum number of chunks to cache
(Hgetlibversion)    hglibver       gets version of the HDF Library
(Hgetfileversion)   hgfilver       gets version of the HDF file
Vdeletetagref       vfdtr          deletes tag/ref pair (HDF object) from
                                   a vgroup
(VSfindclass)       vsffcls        finds class with a specified name in
                                   a vdata
VSdelete            vsfdlte        deletes a vdata
Vdelete             vdelete        deletes a vgroup

=====================================================================

%%%4.1r1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

                    ABOUT HDF4.1 Release 1
                    February 21, 1997

INTRODUCTION

This document describes the differences between HDF 4.0r2 and HDF 4.1r1.
It is written for people who are familiar with previous releases of HDF
and wish to migrate to HDF 4.1r1. The release notes provide more in-depth
information concerning the topics discussed here.

The HDF 4.1r1 documentation can be found on the NCSA ftp server
(ftp.ncsa.uiuc.edu) in the directory:

     /HDF/Documentation/HDF4.1r1

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
- Platforms Tested
- Known Problems
- Important Fixes

New Features and Changes:
========================
o Attributes are now supported in both the vdata and vgroup APIs. In the
  vdata API, attributes can be attached to either vdata fields or vdatas;
  in the vgroup API, attributes can be attached to vgroups. This new
  functionality can also be used to attach attributes to vdatas and vgroups
  created by earlier versions of the HDF library. However, the old versions
  of the HDF library cannot read the new version vdatas and vgroups. A
  vdata/vgroup having attributes will become a new version vdata/vgroup.
  A minimal sketch of attaching an attribute to a vgroup appears below.
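  The following sketch illustrates the new attribute support; the file name,
  vgroup name, and attribute name are made up for this example, and error
  checking is omitted.

      /* vattr_example.c -- a minimal sketch of attaching an attribute
       * to a vgroup (illustrative only).
       */
      #include "hdf.h"

      int main(void)
      {
          int32   file_id, vgroup_id;
          float32 version = 1.5f;          /* sample attribute value      */

          file_id = Hopen("vattr_example.hdf", DFACC_CREATE, 0);
          Vstart(file_id);

          vgroup_id = Vattach(file_id, -1, "w");   /* create a new vgroup */
          Vsetname(vgroup_id, "SampleGroup");

          /* attach a single float32 attribute to the vgroup */
          Vsetattr(vgroup_id, "version", DFNT_FLOAT32, 1, &version);

          Vdetach(vgroup_id);
          Vend(file_id);
          Hclose(file_id);
          return 0;
      }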
For more information, please refer to the file ../release_notes/vattr.txt, the man pages for the new functions, and the HDF 4.1 User's Guide. o Data chunking is now supported in SD scientific data sets. When data chunking is used, an n-dimensional SDS is stored as a series of n-dimensional chunks, improving performance on certain types of partial read operations. New routines for creating and manipulating chunked SD scientific data sets have been provided, and two preexisting SD I/O routines, SDreaddata and SDwritedata, have also been modified to work with chunked SDSs. For more information, please refer to the file ../release_notes/sd_chunk_examples.txt, the man page for sd_chunk, and the HDF 4.1 User's Guide. o Due to certain limitations in the way compressed SDS datasets are stored, data which has been compressed is not completely writable in ways that uncompressed datasets are. The "rules" for writing to a compressed dataset are as follows: (1) Write an entire dataset that is to be compressed. i.e. build the dataset entirely in memory, then write it out with a single call. (2) Append to a compressed dataset. i.e. write to a compressed dataset that has already been written out by adding to the unlimited dimension for that dataset. (3) For users of HDF 4.1, write to any subset of a compressed dataset that is also chunked. Please refer to the HDF 4.1 User's Guide for more information. o HDF now creates free format FORTRAN include files. In order to make FORTRAN 90 programs be able to use HDF include files (*.inc), HDF4.1r1 creates F90 versions of these files during the 'make' process on UNIX platforms, by replacing 'C' or 'c' in column 1 with '!'. Continuation lines in hdf.inc have been eliminated. The F90 version files are named as hdf.f90, dffunc.f90 and netcdf.f90. o Several performance improvements have been added. Test programs on SPARC 20/Solaris 2.5 show that when creating an hdf file with 2500 3D (10x10x10) float32 SDSs, the program execution speed is improved by 2.5 - 4.8 times, and SDend is faster by 4.3 - 20 times. o A new function, SDsetfillmode, has been added. It can be used to prevent SDwritedata from pre-filling the dataset with a user defined or default fill value, so that better performance can be obtained. o SGI has changed some compiler default settings in IRIX 6.2. We decided to explicitly define the settings of various ABI related options. For the 64 bit OS ("uname -s" returns IRIX64), HDF uses "-64 -mips4" code. For the traditional 32 bit OS ("uname -s" returns IRIX), HDF uses "-32 -mips2". To use n32 mode on IRIX64, HDF uses "-n32 -mips3" code. Note that in the previous release (4.0r2), HDF used only "-n32". In IRIX 6.1 and before, "-n32" defaulted to "-mips4" code but in IRIX 6.2, it defaults to mips3 or mips4 code. We decided to explicitly set it to "-n32 -mips3". Therefore, applications linking with the HDF library must be compiled with the same explicit ABI options. o This will be the last release that we support the CM5. HDF 4.1 Beta 1 USERS ONLY ------------------------- o The SD chunking routine names were changed to be more consistent with the SD interface. The names of the routines are now in lower case, after the two initial "SD" characters. For example, SDwriteChunk() has been changed to SDwritechunk(). o The _HDF_ENTIRE_VDATA variable has been changed to _HDF_VDATA. For those users already using it, a macro called _HDF_ENTIRE_VDATA has been added, which is defined as _HDF_VDATA. o You can now create an empty compressed SDS. 
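
  As an illustration of the chunked-SDS routines discussed in this section
  (using the lower-case names SDsetchunk and SDwritechunk noted above), a
  minimal sketch might look like the following; the file name, dataset name,
  and sizes are made up, and error checking is omitted.

      /* sd_chunk_example.c -- a minimal sketch of creating a chunked SDS
       * and writing one chunk (illustrative only).
       */
      #include "mfhdf.h"

      int main(void)
      {
          int32 sd_id, sds_id;
          int32 dims[2]   = {4, 4};        /* whole dataset: 4 x 4        */
          int32 origin[2] = {0, 0};        /* first (top-left) chunk      */
          int16 chunk_data[2][2] = { {1, 2}, {3, 4} };
          HDF_CHUNK_DEF c_def;

          sd_id  = SDstart("chunked.hdf", DFACC_CREATE);
          sds_id = SDcreate(sd_id, "ChunkedSDS", DFNT_INT16, 2, dims);

          c_def.chunk_lengths[0] = 2;      /* store the SDS in 2x2 chunks */
          c_def.chunk_lengths[1] = 2;
          SDsetchunk(sds_id, c_def, HDF_CHUNK);

          SDwritechunk(sds_id, origin, (VOIDP)chunk_data);

          SDendaccess(sds_id);
          SDend(sd_id);
          return 0;
      }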
Please refer to the ../release_notes/bugs_fixed.txt file for changes in this release. Platforms Tested: ================ HDF 4.1 Release 1 has been tested on the following platforms: CM5 Parallel I/O, 4.1.3_U1 DEC Alpha/Digital Unix 3.2 DEC Alpha/OpenVMS AXP 6.2 DEC VAX OpenVMS 6.2 Exemplar 9.03 Free BSD 2.2 HP-UX 9.03 HP-UX 10.10 IRIX 5.3 IRIX 6.2_64 IRIX 6.2_n32 IRIX 6.4_64 IRIX 6.4_n32 Linux A.OUT 1.2.4 Linux ELF 2.0.27 (C only) Macintosh PowerPC (C only) SP2 4.1 Solaris 2.5 Solaris_x86 2.5 (C only) SunOS 4.1.3 Windows NT/95 (C only) Known Problems: ============== o With the SD interface, you are unable to overwrite existing compressed data, that is not stored in "chunked" form. This is due to compression algorithms not being suitable for "local" modifications in a compressed datastream. o There are no plans to add the DF24writeref function to the DF24 interface. This function will be removed from the documentation. Important Fixes: =============== o If you opened a file in Read Only mode with the SD interface (using SDstart), it would create the file if the file did not exist. This no longer occurs. o HDF 4.0r2 did not recognize JPEG images created by HDF 3.3r4. This has been fixed. See the ../release_notes/bugs_fixed.txt file for more information on bugs fixed in this release. %%%4.1b1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF4.1 Beta 1 December 6, 1996 INTRODUCTION This document describes the differences between HDF 4.0r2 and HDF 4.1b1. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF 4.1b1. The release notes provide more in-depth information concerning the topics discussed here. For documentation, please refer to the HDF 4.0r2 documentation which can be found on the NCSA ftp server (ftp.ncsa.uiuc.edu) in the directory /HDF/Documentation/HDF4.0r2. First-time HDF users are encouraged to read the FAQ in this release for more information about HDF. Users can also look at the home page for HDF at: http://hdf.ncsa.uiuc.edu/ If you have any questions or comments, please send them to: hdfhelp@ncsa.uiuc.edu CONTENTS - New Features and Changes - Platforms Tested - Known Problems - Important Fixes New Features and Changes: ------------------------- o Attributes are now supported in both the vdata and vgroup APIs. In the vdata API, attributes can be attached to either vdata fields or vdatas; in the vgroup API, attributes can be attached to vgroups. This new functionality can be used to attach attributes to vdatas and vgroups created by earlier versions of the HDF library. However, the old versions of the HDF library cannot read the new version vdatas and vgroups. A vdata/vgroup having attributes will become a new version vdata/vgroup. For more information, please refer to the file ../release_notes/vattr.txt, as well as the man pages for the new functions. o Data chunking is now supported in SD scientific data sets. When data chunking is used, an n-dimensional SDS is stored as a series of n-dimensional chunks, improving performance on certain types of partial read operations. New routines for creating and manipulating chunked SD scientific data sets have been provided, and two preexisting SD I/O routines, SDreaddata and SDwritedata, have also been modified to work with chunked SDSs. For more information, please refer to the file ../release_notes/sd_chunk_examples.txt, as well as the man page for sd_chunk. 
More information will be included in the HDF 4.1 documentation, which will be available with the release of HDF 4.1. o Due to certain limitations in the way compressed SDS datasets are stored, data which has been compressed is not completely writable in ways that uncompressed datasets are. The "rules" for writing to a compressed dataset are as follows: (1) Write an entire dataset that is to be compressed. i.e. build the dataset entirely in memory, then write it out with a single call. (2) Append to a compressed dataset. i.e. write to a compressed dataset that has already been written out by adding to the unlimited dimension for that dataset. (3) For users of HDF 4.1, write to any subset of a compressed dataset that is also chunked. Please refer to the ../release_notes/comp_SDS.txt file for more information. o A new file, ../release_notes/compile.txt, contains instructions on compiling applications on the supported platforms. If you encounter problems with it, please let us know at hdfhelp@ncsa.uiuc.edu. o SGI has changed some compiler default settings in IRIX 6.2. We decided to explicitly define the settings of various ABI related options. For the 64 bit OS ("uname -s" returns IRIX64), HDF uses "-64 -mips4" code. For the traditional 32 bit OS ("uname -s" returns IRIX), HDF uses "-32 -mips2". To use n32 mode on IRIX64, HDF uses "-n32 -mips3" code. Note that in the previous release (4.0r2), HDF used only "-n32". In IRIX 6.1 and before, "-n32" defaulted to "-mips4" code but in IRIX 6.2, it defaults to mips3 or mips4 code. We decided to explicitly set it to "-n32 -mips3". Therefore, applications linking with the HDF library must be compiled with the same explicit ABI options. Platforms Tested: ----------------- HDF 4.1b1 has been tested on the following platforms: DEC Alpha/Digital Unix 3.2 DEC Alpha/OpenVMS AXP v6.2 DEC VAX OpenVMS v6.2 Free BSD 2.2 HP-UX 9.03 IRIX 5.3 IRIX 6.2_64 IRIX 6.2_n32 Linux ELF 1.2.13 (C only) Macintosh PowerPC (C only) (not ready yet) SP2 4.1 Solaris 2.5 SunOS 4.1.3 Windows NT/95 (C only) YMP 9.0.2asC Known Problems: --------------- o With the SD interface, you are unable to overwrite existing compressed data, that is not stored in "chunked" form. This is due to compression algorithms not being suitable for "local" modifications in a compressed datastream. For more information, please refer to the ../release_notes/comp_SDS.txt file. o With 4.0r1p1, you could type "hdp list -a to get a list of the file attributes associated with a file. This does not currently work. o There are no plans to add the DF24writeref function to the DF24 interface. This function will be removed from the documentation. o When running "make test" on OpenVMS, Test 3 (float32) of the chunking tests fails, and has therefore been commented out. o When running the tests on Window NT/95, Test 2 (uint16) of the chunking tests fails, and will be commented out. Important Fixes: ---------------- o If you opened a file in Read Only mode with the SD interface (using SDstart), it would create the file if the file did not exist. This no longer occurs. o HDF 4.0r2 did not recognize JPEG images created by HDF 3.3r4. This has been fixed. See the ../release_notes/bug_fixed.txt file for more information on bugs fixed in this release. %%%4.0r2%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF4.0 Release 2 July 19, 1996 INTRODUCTION This document describes the differences between HDF 4.0r1p1 and HDF 4.0r2. 
It is written for people who are familiar with previous releases of HDF and
wish to migrate to HDF 4.0r2. The documentation and release notes provide
more in-depth information concerning the topics discussed here.

The HDF 4.0 documentation can be found on the NCSA ftp server in the
directory /HDF/Documentation/HDF4.0/Users_Guide.

First-time HDF users are encouraged to read the FAQ in this release for
more information about HDF. Users can also look at the home page for HDF at:

     http://hdf.ncsa.uiuc.edu/

If you have any questions or comments, please send them to:

     hdfhelp@ncsa.uiuc.edu

CONTENTS

- New Features and Changes
- Platforms Tested
- Known Problems

New Features and Changes:
-------------------------
o HDF now supports an unlimited number of access IDs and file IDs.

o The vdata field size limit has been increased from 32000 to 65535.

o The hdp utility has been updated to:
    - view a GR object
    - recognize the new compression methods
    - view descriptive annotations
    - display the library version of the HDF file

o SDsetattr and GRsetattr now check for both MAX_ORDER and MAX_FIELD_SIZE.

o The handling of DFNT_CHAR in all Fortran interfaces has been cleaned up.
  See release_notes/Fortran_APIs.txt for more information.

o When appending compressed data onto the end of an unlimited dimension SDS,
  the SD interface no longer writes the fill-values in locations where they
  will immediately be overwritten by data. This was done for the compression
  layer, but has the added enhancement of improving performance.

o On the Cray, there were boundary problems when foreign data did not start
  from the 64-bit boundary. This has been fixed.

o There are no longer the following name collisions with the HDF libraries:
    - AVS (HPread, HPwrite)
    - Windows SDK (_hread)
    - ODL library (_HDF_)

o The 32-bit mode for IRIX 6.1 previously used the '-32' option, which
  produces mips1 code. It has been changed to use '-n32', which produces
  mips4 code. This runs faster on the Power Challenges. Users who must use
  the '-32' option can link their code with the IRIX 5.3 HDF library.

o The compression problems have been fixed when using HDF on IRIX 6.1 with
  the '-n32' option (see Known Problems below).

o The zlib and jpeg libraries have been updated. The versions included with
  HDF 4.0 Release 2 are:
     zlib version 1.0.2
     jpeg version 6a (7-Feb-96)

o The hdfls utility has been updated to:
    - support the new compression modes
    - display the library version of the HDF file

o Support for the 16-bit architecture has been pulled out of HDF.

o The directory separator in the directory variable used by the function
  HXsetdir (Fortran equivalent: hxsdir) is now the vertical bar ('|'). It
  used to be the colon (':') symbol, but a colon is a legal symbol for a
  file pathname in the MacOS system.

o The code has been rearranged so that most applications' binaries will be
  smaller.

o A new routine, VSfpack(), has been added. Please see the HDF man page on
  how to use this routine.

o A new routine, GRluttoref(), has been added. Please see the HDF man page
  on how to use this routine.

o Several internal problems have been fixed with the GR interface.

Changes in Compiling the Source Code:

o A new compile option, '-DHAVE_NETCDF', has been added, to avoid conflicts
  in linking the HDF/MFHDF library with the original netCDF library. This is
  only available for the C-interface. However, keep in mind that you cannot
  read/write HDF files using the netCDF libraries. See section 2.4.3.2 in
  the INSTALL file for more information.
o When compiling and installing HDF, the default location to place the binaries, has been changed from /usr/local/bin to NewHDF in the source directory. For example, assuming the library source is loaded at /usr/local/src/hdf, the following commands will result in the HDF binaries being placed in the directory /usr/local/src/hdf/dev/NewHDF. cd dev ./configure -v make make test make install o The Fortran test output has been cleaned up and shortened, when running "make test". Previously, the Fortran tests on the hdf/ side consisted of multiple Fortran programs invoked by a C frontend. The test programs were changed to subroutines and combined as one Fortran program. The C frontend was also changed to produce a 'directive' file, called fortest.arg, which contains directives to run the Fortran test program. Platforms Tested: ----------------- HDF 4.0r2 has been tested on the following platforms: AIX Linux ELF C90 MAC CM5 SP2 (single node) Digital Unix 3.2 Solaris_2.4 Exemplar 9.03 Solaris 2.5 Free BSD Solaris_x86 2.4 (C only) Fujitsu (C only) SunOS 4.1.4 HP-UX T3D (C only) IRIX 6.1 w/-n32 bit option VMS IRIX 6.1 w/-64 bit option Windows NT/95 IRIX 5.3 YMP Linux A.OUT Known Problems: --------------- o On the SunOS platform, there is a bug when using sfscal()/sfgcal() routines with gcc and f77. o On the VMS platform, there is a bug with float64 data. o For IRIX 6.1, the stdio.h file gives a false warning message if both the '-n32' and '-ansi' options are used for the C compiler. We have temporarily removed the '-ansi' option from our autoconfiguration for the Irix6_32 system, to avoid these messages. We have verified that the culprit in stdio.h has been corrected in IRIX 6.2, and plan to put the '-ansi' option back in our next release. o The compression tests produce errors for FLOAT32 data if the '-O' option is used on IRIX 6.1, for both the '-64' bit and '-n32' bit modes. It did not produce errors when using the '-32' bit option or when not using the '-O' option. We are unsure whether the errors are due to the compression code or the IRIX C optimizer. For now, we have chosen to compile the HDF library without the '-O' option, while we investigate the problem. %%%4.0r1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% ABOUT HDF4.0 Release 1 February 7, 1996 INTRODUCTION This document describes the differences between HDF4.0r1 and HDF3.3r4. It is written for people who are familiar with previous releases of HDF and wish to migrate to HDF4.0r1. The documentation and release notes provide more in-depth information concerning the topics discussed here. The HDF 4.0 documentation can be found on the NCSA ftp server in the directory /HDF/Documentation/HDF4.0/Users_Guide. For more history behind the implementation of the items listed here, refer to the ABOUT_4.0.alpha, ABOUT_4.0b1 and ABOUT_4.0b2 files. First-time HDF users are encouraged to read the FAQ in this release for more information about HDF. Users can also look at the home page for HDF at: http://hdf.ncsa.uiuc.edu/ If you have any questions or comments, please send them to: hdfhelp@ncsa.uiuc.edu CONTENTS - Important Changes (that will affect you) - New Features and Changes - Changes in Utilities - Known Problems Important Changes: ----------------- 1. Several changes have been made to the libraries in HDF4.0 which affect the way that users compile and link their programs: * The mfhdf library has been renamed to libmfhdf.a from libnetcdf.a in previous releases. 
* HDF 4.0 libraries now use v5 of the Independent JPEG Group (IJG) JPEG
     file access library.

   * The Gzip library libz.a was added in HDF4.0r1, in order to support
     "deflate" style compression of any object in HDF files.

   Due to these changes, users are required to specify four libraries when
   compiling and linking a program: libmfhdf.a, libdf.a, libjpeg.a and
   libz.a, even if your program does not use JPEG or GZIP compression.
   For example:

   For C:
      cc -o myprog myprog.c -I<HDF include directory> \
         <libmfhdf.a> <libdf.a> <libjpeg.a> <libz.a>
   or
      cc -o myprog myprog.c -I<HDF include directory> \
         -L<HDF library directory> -lmfhdf -ldf -ljpeg -lz

   For FORTRAN:
      f77 -o myprog myprog.f \
         <libmfhdf.a> <libdf.a> <libjpeg.a> <libz.a>
   or
      f77 -o myprog myprog.f -L<HDF library directory> \
         -lmfhdf -ldf -ljpeg -lz

   NOTE: The order of the libraries is important: libmfhdf.a first, then
   libdf.a, followed by libjpeg.a and libz.a.

   This is also discussed in Items 1, 2, and 3 of the New Features and
   Changes section of this document.

2. The HDF 4.0 library will ONLY compile with ANSI C compilers. See Item 4
   in the New Features and Changes section of this document for more
   information.

3. The HDF library and netCDF library on Unix systems can now be
   automatically configured and built with one command. See Item 5 in the
   New Features and Changes section of this document for more information.

4. In HDF 4.0, the FORTRAN programs dffunct.i and constant.i have been
   changed to dffunct.inc and hdf.inc. See Item 16 in the New Features and
   Changes section of this document for more information.

5. Platforms tested on: IRIX (5.3, 6.1 (32 bit and 64 bit)), SunOS 4.1.4,
   Solaris (ver 2.4, 2.5), Solaris x86, HP-UX, Digital Unix, AIX,
   LINUX (A.OUT), CM5, YMP, FreeBSD, C90, Exemplar, Open VMS, and SP2
   (single node only). HDF 4.0 is not yet available on the Macintosh for
   HDF4.0r1.

6. The HDF 4.0 binaries for each tested platform are available. Unix
   binaries are located in the bin/ directory. Binaries for Windows NT are
   located in the zip/ directory.

New Features and Changes:
------------------------
1. Changes to the mfhdf library

   The mfhdf library has been renamed to libmfhdf.a from libnetcdf.a in
   previous releases. To link a program with HDF4.0r1 libraries, four
   libraries are required: libmfhdf.a, libdf.a, libjpeg.a and libz.a. See
   Item 1 of 'Important Changes' for examples of how you would compile and
   link your programs.

2. JPEG Group v5b library

   HDF Version 4.0 libraries now use v5 of the Independent JPEG Group (IJG)
   JPEG file access library. The JPEG library will need to be linked with
   users' applications whether or not they use JPEG compression. See Item 1
   of 'Important Changes' for examples of how you would compile and link
   your programs.

3. Gzip library added

   New with this release is support for gzip "deflate" style compression of
   any object in an HDF file. This is supported through the standard
   compression interface function calls (HCcreate, SDsetcompress,
   GRsetcompress). The ABOUT_4.0b2 file contains additional information on
   this. See Item 1 of 'Important Changes' for examples of how you would
   compile and link your programs.

4. ANSI C only

   As was previously noted in the HDF newsletters, this release of the HDF
   library will compile only with ANSI C compilers. This shift to ANSI C
   compliance has been accompanied by a large cleanup in the source code.
   An attempt has been made to remove all warnings and informational
   messages that the compilers on supported platforms occasionally emit,
   but this may not be completely clean for all user sites.

5. Auto configuration

   Both the HDF library and netCDF library on Unix systems now use the same
   configure script and can be configured uniformly with one command.
See the README and the INSTALL files at the top level of HDF4.0r1 for
   detailed instructions on configuration and installation.

   A consequence of the auto configuration is that on UNIX systems without
   FORTRAN installed, the top level config/mh- will need to have the 'FC'
   macros defined to "NONE" for correct configuration.

6. New version of dimension record

   In HDF4.0b1 and previous releases of the SDS interface, a vgroup was used
   to represent a dimension. The vgroup had a single-field vdata with a
   class of "DimVal0.0". The vdata had as many records as the dimension
   size, with each record holding a fake value from 0, 1, 2, ...,
   (dimension size - 1). The fake values were not really required and took
   up a large amount of space. For applications that created large
   one-dimensional array datasets, the disk space taken by these fake values
   almost doubled the size of the HDF file.

   In order to omit the fake values, a new version of the dimension vdata
   was implemented. The new version uses the same structure as the old
   version. The only differences are that the vdata has only 1 record, whose
   value is the dimension size, and that the vdata's class is "DimVal0.1",
   to distinguish it from the old version. No change was made in unlimited
   dimensions.

   Functions added to support this are:
     - SDsetdimval_comp -- sets backward compatibility mode for a dimension.
       The default mode is compatible in HDF4.0r1, and will be incompatible
       in HDF4.1. See the man page of SDsetdimval_comp(3) for details.
     - SDisdimval_bwcomp(dimid) -- gets the backward compatibility mode of
       a dimension. See the man page of SDisdimval_bwcomp(3) for details.

7. Reading CDF files

   With HDF 4.0, limited support for reading CDF files was added to the
   library. This support is still somewhat in the development stage and is
   therefore limited. To begin with, unlike the netCDF merger, the CDF API
   is not supported. Rather, the SD and netCDF APIs can be used to access
   information pulled out of CDF files. The types of files supported are
   limited to CDF 2.X files. The header information is different between
   CDF 1.X and 2.X files. In addition, all of the files must be stored as
   single-file CDFs in network encoding. If there is user demand, and
   support, the types of CDF files that are readable may be increased in
   the future.

8. Parallel I/O interface on CM5

   An extension using the parallel I/O in CM5 has been added to the SDS
   interface. Initial tests have resulted in about 25 MBytes/second I/O
   throughput using the SDA (Scalable Disk Array) file system. The library
   provides interfaces for both C* and CMF programming languages. The
   ABOUT_4.0.alpha file has more information concerning this. Users will
   find some examples in the directory mfhdf/CM5/Examples.

   The parallel I/O interface stores scientific datasets in external files.
   New options have been added to hdfls and hdfpack to handle them. A new
   utility, hdfunpac, was created for external file handling, too.

9. Support for SGI Power Challenge running IRIX6.1

   Power Challenge is now supported, in both the native 64-bit and the
   32-bit object modes. Note that the Power Challenge native 64-bit objects
   use 64-bit long integers. Users should be careful when using the netcdf
   interface. They should declare their variables as "nclong", not "long".

10. Multi-file Annotation Interface (ANxxx)

    The multi-file annotation interface is for accessing file labels and
    descriptions, and object labels and descriptions. It allows users to
    keep open more than one file at a time, and to access more than one
    annotation at a time.
   It also allows multiple labels and multiple descriptions to be applied
   to an HDF object or HDF file.

11. Multi-file Raster Image (GRxxx) interface

   The new Generic Raster (GR) interface provides a set of functions for
   manipulating raster images of all kinds. This interface allows users to
   keep open more than one file at a time, and to "attach" more than one
   raster image at a time. It supports a general framework for attributes
   within the RIS data-model, allowing 'name = value' style metadata, and
   it allows access to subsamples and subsets of images.

   The GRreqlutil and GRreqimageil functions allow for different methods of
   interlacing images in memory. The images are interlaced in memory only,
   and are actually written to disk in "pixel" interlacing.

12. Compression for HDF SDS

   Two new compression functions have been added to the SD interface for
   HDF 4.0: SDsetcompress and SDsetnbitdataset.

   SDsetcompress allows users to compress a scientific dataset using any of
   several compression methods. Initially three schemes are available: RLE
   encoding, an adaptive Huffman compression algorithm, and gzip
   'deflation' compression. (A short illustrative example appears after
   Item 16 below.)

   SDsetnbitdataset allows for storing a scientific dataset using integers
   whose size is any number of bits between 1 and 32 (instead of being
   restricted to 8-, 16- or 32-bit sizes). Access to the data stored in an
   n-bit data item is transparent to the calling program. The
   ABOUT_4.0.alpha file has an in-depth description of this ("n-bit SDS",
   listed under Item 2).

13. External Path Handling

   New functions have been added to allow applications to specify
   directories in which to create or search for external files.

   - HXsetcreatedir (hxscdir for FORTRAN)
   - HXsetdir (hxsdir for FORTRAN)

14. I/O performance improvement

   HDF 4.0 unofficially supports file page buffering; with HDF 4.1 it will
   be officially supported. File page buffering allows the file to be
   mapped to user memory on a per-page basis, i.e., as a memory pool for
   the file. With regard to the file system, page sizes can be allocated
   based on the file system page-size or on a multiple of the file system
   page-size. This allows fewer pages to be managed as well as
   accommodating the user's file usage pattern. See the top-level INSTALL
   file and the release_notes/page_buf.txt file for building the library
   with this support and using it.

15. Improvement in memory usage and general optimizations

   Considerable effort was put into this release (since the b2 release) to
   reduce the amount of memory used per file and by the library in general.
   In general terms, we believe the library's memory footprint during the
   course of its execution should be about half of what it was, and the
   library is more frugal about allocating large chunks of memory. Much
   time was also spent optimizing the low-level HDF routines to be faster
   than in past releases. Applications which make use of files with many
   (1000+) datasets should notice significant improvements in execution
   speed.

16. In hdf/ there are two files for FORTRAN programs to include the values
   and functions defined in HDF. They were originally named constant.i and
   dffunct.i. The extension .i caused problems on some machines since *.i
   is used by cc as an "intermediate" file produced by the cpp
   preprocessor. In HDF 4.0, dffunct.i has been changed to dffunct.inc, and
   constant.i has been changed to hdf.inc. Users' existing FORTRAN
   application programs need to make the corresponding changes, if they
   include the .i files, in order to compile with HDF4.0.
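   The following minimal C sketch (not part of the original release notes)
   illustrates the gzip compression described in Items 3 and 12 above. The
   file name "example.hdf", the dataset name "pressure", and the deflate
   level are arbitrary choices, and the prototypes follow later mfhdf.h
   headers, so details may differ slightly from the 4.0r1 signatures:

       #include "mfhdf.h"

       int main(void)
       {
           int32     sd_id, sds_id;
           int32     dims[2]  = {10, 10};
           int32     start[2] = {0, 0};
           int32     data[10][10];
           comp_info c_info;
           intn      status;
           int       i, j;

           for (i = 0; i < 10; i++)          /* some sample data */
               for (j = 0; j < 10; j++)
                   data[i][j] = i * 10 + j;

           sd_id  = SDstart("example.hdf", DFACC_CREATE);
           sds_id = SDcreate(sd_id, "pressure", DFNT_INT32, 2, dims);

           c_info.deflate.level = 6;         /* gzip effort: 1 (fast) .. 9 (best) */
           status = SDsetcompress(sds_id, COMP_CODE_DEFLATE, &c_info);

           status = SDwritedata(sds_id, start, NULL, dims, (VOIDP)data);

           SDendaccess(sds_id);
           SDend(sd_id);
           return 0;
       }

   Compiling such a program requires the four libraries listed in Item 1 of
   'Important Changes' (-lmfhdf -ldf -ljpeg -lz).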
17. Limits file

   A new file, limits.txt, has been added to the ftp server. It is aimed at
   HDF applications programmers and defines the upper bounds of HDF 4.0.
   This information is also found in the hdf/src/hlimits.h file. Refer to
   the ABOUT_4.0.alpha file for historical information concerning this.

18. Pablo available

   HDF4.0 supports creating an instrumented version of the HDF library
   (libdf-inst.a). This library, along with the Pablo performance data
   capture libraries, can be used to gather data about I/O behavior and
   procedure execution times. See the top-level INSTALL file and the
   hdf/pablo/README.Pablo file for further information.

19. Support for the IBM SP-2

   The HDF library has been ported to run on a single SP-2 node. It does
   not yet support parallel or distributed computing across multiple SP-2
   nodes.

20. Miscellaneous fixes

   - To avoid conflicts with C++, internal structures' fields which were
     named 'new' have been renamed.
   - The maximum number of fields in a vdata is now determined by
     VSFIELDMAX.
   - The platform number subclass problem when an external data file was in
     Little_endian format has been fixed.
   - Unlimited dimensions were not handled correctly by the HDF3.3r4
     FORTRAN interface. This problem has been fixed in HDF4.0r1.

Changes to utilities:
--------------------

o hdf/util/ristosds
  Ristosds now converts several raster images into a 3D uint8, instead of
  float32, SDS.

o hdf/util/hdfls
  New options have been added to support the parallel I/O interface on the
  CM5.

o hdf/util/hdfpack
  New options have been added to support the parallel I/O interface on the
  CM5.

o hdf/util/hdfunpac
  This is a new utility for external file handling for the parallel I/O
  interface on the CM5.

o mfhdf/dumper/hdp
  Hdp is a new command line utility designed for quick display of the
  contents and data objects of HDF files. It can list the contents of HDF
  files at various levels with different details. It can also dump the data
  of one or more specific objects in the file. See hdp.txt in the release
  notes for more information.

Known Problems:
--------------

o On the IRIX4 platform, fp2hdf creates float32 and float64 values
  incorrectly.

o On the SP2, the hdp command gives a false message of "Failure to allocate
  space" when the HDF file has no annotations to list.

o On the C90, hdfed fails inconsistently when opening HDF files more than
  once in the same hdfed session.

o Currently there is a problem in re-writing data in the middle of
  compressed objects.

o VMS gives an error on the test for Little Endian float64.

o If the external element test in hdf/test/testhdf fails and there is no
  subdirectory "testdir" in hdf/test/, create one via "mkdir" and run the
  test again. (The "testdir" should have been created by "make", but the
  "make" on some old systems does not support the creation commands.)

%%%4.0b2%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

ABOUT HDF4.0 Beta 2                                    Nov 14, 1995

CONTENTS
1. The mfhdf side of the library is renamed to libmfhdf.a, versus
   libnetcdf.a in previous releases
2. New version of dimension record
3. New features
     GR interface
     Gzip library is added
     Unified configuration of library
     I/O performance improvement
4. New functions added
     Fortran functions hxscdir and hxsdir
     SDsetdimval_comp
     SDisdimval_bwcomp
5. SGI Power Challenge running IRIX6.1 is now supported
6. Pablo available
7. Platforms tested
8. Changes in release notes
9. Bug fixes and Known problems

1. The mfhdf side of the library is renamed to libmfhdf.a, versus
   libnetcdf.a in previous releases.
   To link a program with the HDF4.0b2 libraries, one needs four libraries:
   libmfhdf.a, libdf.a, libjpeg.a and libz.a (see "Gzip library is added"
   in item 3 below):

       cc -o myprog myprog.c -I<include dir> \
          libmfhdf.a libdf.a libjpeg.a libz.a

   Note, the order of the libraries is important.

2. New version of dimension record

   HDF4.0b1 and previous releases use a vgroup to represent a dimension.
   The vgroup has a single-field vdata with class "DimVal0.0". The vdata
   has as many records as the dimension size, each record holding a fake
   value from 0, 1, 2, ..., (dimension size - 1). The fake values are not
   really required and take a lot of space. For applications that create
   large one-dimensional array datasets, the disk space taken by these fake
   values almost doubles the size of the HDF file.

   In order to omit the fake values, a new version of the dimension vdata
   is proposed. The new version uses the same structure as the old version.
   The only differences are that the vdata has only one record, whose value
   is the dimension size, and that the vdata's class is "DimVal0.1" to
   distinguish it from the old version. No change is made in unlimited
   dimensions.

   See the file dimval.txt in the subdirectory release_notes/ of the
   HDF4.0b2 release for our policy on the backward compatibility of this
   dimension version. (A short usage sketch appears after item 3 below.)

3. New features

   . New with this beta release is support for different methods of
     interlacing images in memory. This feature is supported through the
     GRreqlutil and GRreqimageil functions described in the mf_ris.txt
     document in this directory. Please note that the images are interlaced
     in memory only; all images are actually written to disk in "pixel"
     interlacing.

   . Gzip library is added

     New with this release is support for gzip "deflate" style compression
     of any object in an HDF file. This is supported through the standard
     compression interface function calls (HCcreate, SDsetcompress,
     GRsetcompress) by using the COMP_CODE_DEFLATE parameter for the coding
     type. The comp_info structure has a new member, deflate.level, which
     specifies how much effort to expend trying to compress data. Values
     for deflate.level must be between 1 and 9, with 1 being small amounts
     of effort (time) and 9 being maximum effort (most time and
     compression); the default value is 6.

     Currently, due to our use of the gzip "zlib" library for support of
     this feature, users must link with the "libz.a" library produced by
     zlib (see item 1 above):

         cc -o myprog myprog.c -I<include dir> \
            libmfhdf.a libdf.a libjpeg.a libz.a

     Note, the order of the libraries is important. Also, this method of
     compression currently has several known bugs when used on a 64-bit
     architecture (DEC Alpha processors, Cray machines, and SGI Power
     Challenge machines in 64-bit "mode").

   . Unified configuration of library

     Both sides of the library now use the same configure script and can be
     configured uniformly through one makefile fragment. Please see the
     top-level INSTALL file in the distribution for further details.

   . I/O performance improvement

     This version of the distribution also has preliminary support for file
     page buffering. Note that this is a *Beta* feature and is not
     officially supported; as such, it is provided as is. File page
     buffering allows the file to be mapped to user memory on a per-page
     basis, i.e., as a memory pool for the file. With regard to the file
     system, page sizes can be allocated based on the file system page-size
     or, if the user wants, on some multiple of the file system page-size.
     This allows fewer pages to be managed along with accommodating the
     user's file usage pattern. Please see the documentation in
     'release_notes/page_buf.txt'.

   We have also reduced the memory requirements for several of the internal
   HDF library data structures, for greater efficiency.
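   As a rough illustration of the backward-compatibility controls mentioned
   in item 2 above (and listed under item 4 below), here is a minimal C
   sketch. The file name "example.hdf", the dataset name "temperature", and
   the SD_DIMVAL_BW_* constants follow later mfhdf.h headers and are
   assumptions for illustration, not text from the 4.0b2 notes:

       #include "mfhdf.h"

       int main(void)
       {
           int32 sd_id, sds_id, dim_id;
           intn  status, mode;

           sd_id  = SDstart("example.hdf", DFACC_WRITE);
           sds_id = SDselect(sd_id, SDnametoindex(sd_id, "temperature"));
           dim_id = SDgetdimid(sds_id, 0);    /* first dimension of the SDS */

           /* Ask for the new, space-saving "DimVal0.1" record for this
              dimension (i.e., not backward compatible with the old
              fake-value vdata). */
           status = SDsetdimval_comp(dim_id, SD_DIMVAL_BW_INCOMP);

           /* Query the mode back; returns SD_DIMVAL_BW_COMP or
              SD_DIMVAL_BW_INCOMP. */
           mode = SDisdimval_bwcomp(dim_id);

           SDendaccess(sds_id);
           SDend(sd_id);
           return (status == FAIL || mode == FAIL) ? 1 : 0;
       }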
4. Functions added

   . Fortran interface functions have been added for the set-external-path
     features. They are hxscdir and hxsdir. See the man pages of
     HXsetcreatedir(3) and HXsetdir(3) for details.

   . SDsetdimval_comp -- sets the backward compatibility mode for a
     dimension. The default mode is compatible in HDF4.0b2, and will be
     incompatible in HDF4.1. See the man page of SDsetdimval_comp(3) for
     details.

   . SDisdimval_bwcomp(dimid) -- gets the backward compatibility mode of a
     dimension. See the man page of SDisdimval_bwcomp(3) for details.

5. SGI Power Challenge running IRIX6.1 is now supported

   The Power Challenge is now supported, in both the native 64-bit and the
   32-bit object modes. Note that the Power Challenge native 64-bit objects
   use 64-bit long integers; users should be careful when using the netCDF
   interface. They should declare their variables as "nclong", not "long".

6. Pablo available

   This version of the distribution has support to create an instrumented
   version of the HDF library (libdf-inst.a). This library, along with the
   Pablo performance data capture libraries, can be used to gather data
   about I/O behavior and procedure execution times. Please see the
   documentation release_notes/Pablo.txt in the distribution for further
   details.

7. Platforms tested

   HDF4.0b2 has been tested on the following systems: SunOS 4.1.3, SunOS
   5.3 and 5.4 (Solaris 2.3 and 2.4), Linux_a.out, Linux_elf, SGI/IRIX5.2,
   SGI/IRIX5.3, SGI Power Challenge/IRIX6.1 (32- and 64-bit), HP/UX 9.01,
   IBM RS6000/AIX, Cray C90, Cray YMP, DEC Alpha/UNIX (OSF),
   DecStation/MIPSEL (ncdump doesn't work), FreeBSD 2.0, Solaris_x86,
   Convex Exemplar/HPUX, and CM5 parallel I/O.

   See the INSTALL file at the top level of HDF4.0b2 for more details.

8. Changes in release notes

   The directory release_notes/ contains writeups for the alpha and beta
   releases of HDF4.0. Those files can be used as temporary documents for
   HDF4.0 before the official documentation is available.

   Newly added: ABOUT_4.0b2, Pablo.txt, dimval.txt, and page_buf.txt
   Files changed: bug_fixed.txt and parallel_CM5.txt
   ABOUT_4.0.alpha is also included.

9. Fixes and Known problems

   Problems fixed:
   . To avoid conflicts with C++, internal structures' fields which were
     named 'new' have been renamed.
   . The maximum number of fields in a vdata is now determined by
     VSFIELDMAX.
   . Vshow and hdp are fixed. They can now handle as many fields as defined
     by VSFIELDMAX.
   . Fixed a platform number subclass problem when an external data file
     was in Little_endian format.
   . A file hdf/src/hlimits.h has been added to hold definitions for the
     maximum number of open files and other limits.
   . Miscellaneous fixes.

   Known problems:
   . Hfidinquire is not included in the binaries.
   . Gzip compression doesn't work on 64-bit machines.
   . Currently there is a problem in appending data to compressed objects.
   . Hfidinquire is in the source code, but it is not included in the
     pre-compiled code. If your program uses Hfidinquire, you need to
     re-compile libdf.a.

%%%4.0b1%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

ABOUT HDF4.0 Beta 1                                    July 25, 1995

CONTENTS
1. New features in the HDF4.0 Beta 1 release
2. Bugs fixed and known problems
3. Platforms tested
4. Installation of HDF4.0 Beta 1 on WindowsNT/95
5. Known problems in compilation, testing and installation of HDF4.0b1
6. Installing without FORTRAN support
1. New features in the HDF4.0 Beta 1 release

o Auto configuration

  It is now possible to automatically configure and build both the HDF
  library and the netCDF library with one command. See the README and the
  INSTALL files at the top level of HDF4.0beta1 for detailed instructions
  on configuration and installation.

o Multi-file Annotation Interface (ANxxx)

  The multi-file annotation interface is for accessing file labels and
  descriptions, and object labels and descriptions. It allows users to keep
  open more than one file at a time, and to access more than one annotation
  at a time. It also allows multiple labels and multiple descriptions to be
  applied to an HDF object or HDF file. A draft of the documentation for
  this interface is in ./mf_anno.txt.

o Multi-file Raster Image (GRxxx) interface

  The new Generic Raster (GR) interface provides a set of functions for
  manipulating raster images of all kinds. This interface allows users to
  keep open more than one file at a time, and to "attach" more than one
  raster image at a time. It supports a general framework for attributes
  within the RIS data-model, allowing 'name = value' style metadata, and it
  allows access to subsamples and subsets of images. HDF4.0beta1 includes a
  C interface only; the Fortran interface will be available in the next
  release. A draft of the documentation for this interface is in
  ./mf_ris.txt.

o New Compression Algorithms and interface

  A new low-level compression interface has been added to HDF which allows
  any data object to be compressed using a variety of algorithms. Currently
  only two compression algorithms are supported: Run-Length Encoding (RLE)
  and adaptive Huffman. A draft of the documentation for this interface is
  in ./compression.txt.

o JPEG Group v5b library

  HDF Version 4.0 libraries now use v5 of the Independent JPEG Group (IJG)
  JPEG file access library. For more details about the JPEG library see
  ./JPEG_v5b.txt. The JPEG library will need to be linked with a user's
  applications whether they are compressed with JPEG or not.

  For example, on a SUN SPARC, if the .h files are in the directory
  "incdir" and all libraries are in "libdir", the following command should
  be used to compile a C program "myprog.c":

      cc -DSUN -DHDF -Iincdir myprog.c libdir/libnetcdf.a \
         libdir/libdf.a libdir/libjpeg.a -o myprog
      or
      cc -DSUN -DHDF -Iincdir myprog.c -L libdir -lnetcdf \
         -ldf -ljpeg -o myprog

  Note that the order is important: libnetcdf.a must occur first, then
  libdf.a, and then libjpeg.a.

  For FORTRAN programs, use the command line:

      f77 -o myprogf myprogf.f libdir/libnetcdf.a \
          libdir/libdf.a libdir/libjpeg.a
      or
      f77 -o myprogf myprogf.f -L libdir -lnetcdf -ldf -ljpeg

  Note that the order is important: libnetcdf.a, then libdf.a, and then
  libjpeg.a.

o Compression for HDF SDS (not completely working)

  Work is almost complete on the addition of two new compression functions
  to the SD interface. One function, which still has some known bugs, will
  allow users to compress a scientific dataset using any of several
  compression methods. Initially two schemes, RLE encoding and an adaptive
  Huffman compression algorithm, will be available. A second function is
  available for storing a scientific dataset using integers whose size is
  any number of bits between 1 and 32 (instead of being restricted to 8-,
  16- or 32-bit sizes). A draft of the documentation for these functions is
  in ./comp_SDS.txt.

o External Path Handling

  New functions have been added to allow applications to specify
  directories in which to create or search for external files.
  More explanation can be found in ./external_path.txt. (A short
  illustrative sketch appears at the end of this section.)

o Parallel I/O for the CM5

  An extension using the parallel I/O facilities on a CM5 has been added to
  the SDS interface. Initial tests have resulted in about 25 MBytes/second
  I/O throughput using the SDA (Scalable Disk Array) file system. The
  library provides interfaces for both the C* and CMF programming
  languages. See ./parallel_CM5.txt for details.

o HDF dumper

  Hdp is a command line utility designed for quick display of the contents
  and data of HDF3.3 objects: RIS, SDS, Vdata, and Vgroup. It can list the
  contents of HDF files at various levels with different details. It can
  also dump the data of one or more specific objects in the file. See
  ./hdp.txt for details. Currently hdp works on SunOS and LINUX only.

2. Bugs fixed and known problems

   Several bugs or problems, such as failure in setting and getting scales
   for unlimited dimensions, a missing Fortran version of the VSQxxxx
   functions, failure in defining more than 36 fields in Vdatas, etc., were
   fixed in this beta release. For more details about fixed and un-fixed
   bugs and problems please see ./bug_fixed.txt.

3. Platforms tested

   HDF4.0 Beta 1 has been tested on the following systems: SunOS 4.1.3,
   SunOS 5.3 (Solaris 2.3), Linux, SGI/IRIX5.3, SGI Power Challenge/IRIX6.0
   (32-bit mode only), HP/UX 9.01, IBM RS6000/AIX (C only),
   C3880/ConvexOS 11.0, CM5, Cray C90, DEC Alpha/OSF (C only),
   DecStation/MIPSEL (C only), Windows NT, FreeBSD 2.0, and Convex
   Exemplar/HPUX.

   See the INSTALL file at the top level of HDF4.0b1 for more details.

4. Installing HDF4.0 Beta 1 on Windows NT and Windows 95

   Since Windows NT, Windows '95 (Chicago) and Windows 3.1 (with the Win32s
   extensions) are all designed to run the same 32-bit code, we have
   decided to support only 32-bit libraries and code on the MS-Windows
   platform. To build the HDF, JPEG and netCDF libraries and utilities,
   follow the instructions listed in ./install_winNT.txt.

5. Known problems in compilation, testing and installation of HDF4.0b1:

   . On SunOS, tsdmmsf.f in hdf/test/fortest fails.

   . On the C90, the mfhdf/fortran test doesn't configure correctly. The
     adaptive Huffman algorithm does not work right either; due to this
     problem, when running hdf/test/testhdf the test module comp prints out
     error messages.

   . On DecStation/MIPSEL, ncdump gives a segmentation fault.

   . The Fortran interface has not been tested on IBM RS6000,
     DecStation/MIPSEL, and DEC Alpha/OSF because a Fortran compiler is not
     available on those machines in our group.

   . If the external element test in hdf/test/testhdf fails and there is no
     subdirectory "testdir" in hdf/test/, create one via "mkdir" and run
     the test again. (The "testdir" should have been created by "make", but
     the "make" on some old systems does not support the creation
     commands.)

   . A bug was found in the "mfhdf.h" file late in the testing stage. The
     error occurred in the CM5 parallel I/O extension only. The fix is not
     included in the source release, but it is available in the binary
     release for the CM5 version. Please retrieve the fix there.

   . SDsetcompress does not work correctly.

   . Hdp now works on SunOS and LINUX only. The commands dumpsds, dumpvd
     and dumpvg have different problems on other platforms. See
     mfhdf/dumper/README for more details.

6. Installing without FORTRAN support:

   . On UNIX systems without a FORTRAN compiler installed, the
     config/mh-<machine> file will need to have the 'FC' macro defined to
     "NONE" for correct configuration, and the target "allnofortran" should
     be used to build the distribution, instead of the target "all".
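To make the External Path Handling item above more concrete, here is a
minimal, hedged C sketch. The directory and file names are invented for
illustration, and the calls shown (HXsetcreatedir, HXsetdir,
SDsetexternalfile) follow the general HDF4 prototypes; consult
./external_path.txt and the man pages for the authoritative details:

    #include "mfhdf.h"

    int main(void)
    {
        int32 sd_id, sds_id;
        int32 dims[1] = {1000};
        intn  status;

        /* Directory in which new external data files will be created. */
        status = HXsetcreatedir("/scratch/hdf_external");

        /* Directory to search when opening existing external data files
           (external_path.txt describes the exact syntax accepted here). */
        status = HXsetdir("/scratch/hdf_external");

        sd_id  = SDstart("main.hdf", DFACC_CREATE);
        sds_id = SDcreate(sd_id, "samples", DFNT_FLOAT32, 1, dims);

        /* Store this dataset's data in an external file rather than in
           main.hdf, starting at offset 0 of that file. */
        status = SDsetexternalfile(sds_id, "samples.dat", 0);

        SDendaccess(sds_id);
        SDend(sd_id);
        return (status == FAIL) ? 1 : 0;
    }

The corresponding FORTRAN calls are hxscdir and hxsdir, as noted in the
4.0b2 notes above.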
%%%4.0alpha%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

ABOUT_4.0.alpha

This file was last updated: November 8, 1994

INTRODUCTION

This is a preliminary document describing the differences between HDF4.0
(Alpha) and HDF3.3r3. It is written for people who already use HDF3.3r3 or
earlier versions and wish to be HDF4.0 Alpha testers. Special emphasis is
given to changes that might be required in existing code.

The files ABOUT_3.3r3, ABOUT_3.3r2 and ABOUT_3.3r1, which were released
along with previous releases, contain detailed descriptions of HDF3.3.
Those files can be found in this directory.

First-time HDF users are encouraged to read the FAQ file in directory HDF/
for more information about HDF and where to get HDF documentation.

If you have any questions or comments, please send them to:
hdfhelp@ncsa.uiuc.edu.

Contents

1. Changes in include file names for FORTRAN programs
2. New features supported by HDF4.0
     ANSI C only
     n-bit SDS
     Reading CDF files
     Parallel I/O interface on CM5
     Installing HDF Libraries With CM5 Parallel IO Extension
3. Changes in HDF utilities
     hdp -- HDF dumper
     ristosds
     hdfls
     hdfpack
     hdfunpac
4. Platforms tested
5. Limits of the current release

1. Changes in include file names for FORTRAN programs

   In hdf/ there are two files for FORTRAN programs to include the values
   and functions defined in HDF. They were originally named constant.i and
   dffunct.i. The extension .i causes problems on some machines since *.i
   is used by cc as an "intermediate" file produced by the cpp
   preprocessor. In HDF 4.0, dffunct.i is changed to dffunct.inc, and
   constant.i is changed to hdf.inc. Users' existing FORTRAN application
   programs need to make the corresponding changes, if they include the .i
   files, in order to compile with HDF4.0.

2. New features supported by HDF4.0

   ANSI C only

   As previously noted in the HDF newsletters, the next major release of
   the HDF library will compile only with ANSI C compilers. Backward
   compatibility will be provided through an ANSI->K&R filter which will
   need to be run on each source file in order to convert the ANSI style
   code into K&R style code. Currently the entire HDF library has been
   converted to ANSI code, but the filter is not yet in place. Future alpha
   releases may have the code filter in place, but it will definitely be in
   place for the first beta release. This shift to ANSI C compliance has
   been accompanied by a large cleanup of the source code. An attempt has
   been made to remove all warnings and informational messages that the
   compilers on supported platforms occasionally emit, but this may not be
   completely clean for all user sites.

   n-bit SDS

   Support for n-bit integer data has been incorporated into this release
   of the HDF library. The n-bit support is currently accessed through the
   SDsetnbitdataset call; future releases may also incorporate high-level
   access through the DFSD interface. Access to the data stored in an n-bit
   data item is transparent to the calling program.
   For example, to store an unsigned 12-bit integer (which is represented
   unpacked in memory as an unsigned 16-bit integer), with no sign
   extension or bit filling, and which starts at bit 14 (counting from the
   right with bit zero being the lowest), the following setup and call
   would be appropriate:

       intn sign_ext  = FALSE;
       intn fill_one  = FALSE;
       intn start_bit = 14;
       intn bit_len   = 12;

       SDsetnbitdataset(sds_id, start_bit, bit_len, sign_ext, fill_one);

   Further reads and writes to this dataset would transparently convert the
   16-bit unsigned integers in memory into 12-bit unsigned integers stored
   on disk. The corresponding FORTRAN function name is sfsnbit, which takes
   the same parameters in the same order.

   A breakdown of the parameters to the SDsetnbitdataset call is as
   follows:

   int32 sds_id   - The id of a scientific dataset returned from SDcreate
                    or SDselect.

   intn start_bit - This value determines the bit position of the highest
                    end of the n-bit data to write out. Bits in all
                    number-types are counted from the right starting with
                    0. For example, in the following bit data, "01111011",
                    bits 2 and 7 are set to 0 and all the other bits are
                    set to one.

   intn bit_len   - The number of bits in the n-bit data to write,
                    including the starting bit, counting towards the right
                    (i.e. lower bit numbers). For example, starting at bit
                    5 and writing 4 bits from the following bit data,
                    "01111011", would write out the bit data, "1110", to
                    the dataset on disk.

   intn sign_ext  - Whether to use the top bit of the n-bit data to
                    sign-extend to the highest bit in the memory
                    representation of the data. For example, if 9-bit
                    signed integer data is being extracted from bits 17-25
                    (nt=DFNT_INT32, start_bit=25, bit_len=9; see the
                    start_bit and bit_len parameters above) and the bit in
                    position 25 is a 1, then when the data is read back in
                    from disk, bits 26-31 will be set to 1; otherwise bit
                    25 will be a zero and bits 26-31 will be set to 0. This
                    bit-filling takes higher precedence than (i.e. is
                    performed after) the fill_one bit-filling described
                    below.

   intn fill_one  - Whether to fill the "background" bits with 1's or 0's.
                    The "background" bits of an n-bit dataset are those
                    bits in the in-memory representation which fall outside
                    of the actual n-bit field stored on disk. For example,
                    if 5 bits of an unsigned 16-bit integer (in-memory)
                    dataset located in bits 5-9 are written to disk with
                    the fill_one parameter set to TRUE (or 1), then when
                    the data is read back into memory at a future time,
                    bits 0-4 and 10-15 would be set to 1. If the same 5-bit
                    data was written with a fill_one value of FALSE (or 0),
                    then bits 0-4 and 10-15 would be set to 0. This setting
                    has a lower precedence than (i.e. is performed before)
                    the sign_ext setting. For example, using the sign_ext
                    example above, bits 0-16 and 26-31 will first be set to
                    either 1 or 0 based on the fill_one parameter, and then
                    bits 26-31 will be set to 1 or 0 based on bit 25's
                    value.

   Reading CDF files

   With HDF 4.0, limited support for reading CDF files was added to the
   library. This support is still somewhat in the development stage and is
   therefore limited. To begin with, unlike the netCDF merger, the CDF API
   is not supported. Rather, the SD and netCDF APIs can be used to access
   information pulled out of CDF files. The files supported are limited to
   CDF 2.X files; the header information is different between CDF 1.X and
   2.X files. In addition, all of the files must be stored as single-file
   CDFs in network encoding.
   If there is user demand, and support, the types of CDF files readable
   may be increased in the future.

   Parallel I/O interface on CM5

   An extension using the parallel I/O facilities on the CM5 has been added
   to the SDS interface. Initial tests have resulted in about 25
   MBytes/second I/O throughput using the SDA (Scalable Disk Array) file
   system. The library provides interfaces for both the C* and CMF
   programming languages. Read the section "Installing HDF Libraries With
   CM5 Parallel IO Extension" below for specific installation instructions.
   Users will find some examples in the directory mfhdf/CM5/Examples.
   Please send comments, bug reports, etc. to acheng@ncsa.uiuc.edu.

   The parallel I/O interface stores scientific datasets in external files.
   New options have been added to hdfls and hdfpack to handle them. A new
   utility program, hdfunpac, is created for handling external files, too.
   See the man pages for details.

   Installing HDF Libraries With CM5 Parallel IO Extension

   The current alpha version requires two major steps to install the HDF
   libraries (libdf.a and libnetcdf.a). Work is in progress to make this
   simpler in the production release. Bear with us for now.

   1) Compile and install the ordinary HDF libraries, include files and
      utilities according to the instructions for a Sun Microsystems
      machine.

   2) To make the HDF library with the CM5 parallel IO extension: there are
      two new libraries, libdfcm5.a and libnetcdfcm5.a, that are similar to
      libdf.a and libnetcdf.a.

      For libdf.a:

          cd hdf
          cp MAKE.CM5 Makefile
          cp src/Makefile.CM5 src/Makefile
          make libdf                 # create the parallel IO libdf.a
          # to install it in /usr/local/lib
          cp src/libdf.a /usr/local/lib/libdfcm5.a
          ranlib /usr/local/lib/libdfcm5.a

      For libnetcdf.a:

          cd mfhdf
          # edit CUSTOMIZE to use "gcc" as the CC compiler
          # and add "-DCM5" to the CFLAGS variable.
          ./configure
          (cd libsrc; make)          # compile the library
          # to install it in /usr/local/lib
          cp libsrc/libnetcdf.a /usr/local/lib/libnetcdfcm5.a
          ranlib /usr/local/lib/libnetcdfcm5.a

3. Changes in HDF utilities

   hdp -- HDF dumper
     A new utility, hdp, is under development to list the contents of HDF
     files and to dump the data of HDF objects. A prototype is included in
     HDF4.0 Alpha for users to play with and comment on. Development will
     continue based on users' feedback. More information is contained in
     HDF/HDF4.0.alpha/mfhdf/dumper/README.

   ristosds
     Ristosds now converts several raster images into a 3D uint8, instead
     of float32, SDS.

   hdfls
     New options to recognize external elements.

   hdfpack
     New options to pack external elements back into the main file.

   hdfunpac
     New utility program to unpack scientific datasets to external
     elements. Can be used to prepare for CM5 parallel IO access.

4. Platforms tested

   HDF 4.0 Alpha has been tested on the following machines:

   Platform                  'base library'    HDF/netCDF
   ---------------------------------------------------------------
   Sun4/SunOs                      X               X
   Sun4/SOLARIS                    X               X
   IBM/RS6000                      X               X
   SGI/IRIX4                       X               X
   Convex/ConvexOS *               X               X
   Cray Y-MP/UNICOS                X               X
   Cray/C90                        X               X
   NeXT/NeXTSTEP                   X               X
   HP/UX 9.01                      X               X
   DecStation/MIPSEL               X               X
   IBM PC - MSDOS                  **              ***
   IBM PC - Windows 3.1            **              ***
   IBM PC - Windows NT             X               X
   DEC Alpha/OSF                   X               X
   CM5/                            X               X
   Fujitsu VP/UXPM                 X
   Intel i860                      X
   Mac/MacOS
   VMS

   *   When compiling the mfhdf section of the library on a Convex3 you
       will need to set the environment variable 'MACHINE' to 'c3' before
       running the configure script.
   **  There is no FORTRAN support for either PC version of HDF4.0 Alpha.
   *** The netCDF half of the HDF/netCDF merger is not working correctly,
       but the multi-file SD interface is working correctly.
5. Limits of the current release

   Sometimes it is important for HDF users to be aware of certain limits in
   using HDF files and HDF libraries. This section is aimed at HDF
   applications programmers and reflects the upper bounds as of HDF 4.0.

   Limits that are #define'd are fully capitalized, and the file where the
   symbol is defined is given in parentheses at the end of the line. If the
   #define's are changed in order to meet the needs of an application, it
   is important to make sure that all other users, who would share the HDF
   library and the HDF files of the application, are aware of the changes.

   If a limit has no #define, the size of the maximum storage allocated for
   that item is given; it would, generally, require a large amount of
   modification of the HDF library to change. If a limit is listed as a
   number type (e.g. int16) then it refers to the largest number that can
   be represented using that type. That is:

       int16 -- 32,767
       int32 -- 2,147,483,647

   H-Level Limits
   --------------
   MAX_FILE          files open at a single time (hfile.h)
   MAX_ACC           access records open at a single time (hfile.h)
   int16             total tags (fixed)
   int32             max length and offset of an element in an HDF file (fixed)

   Vgroup Limits
   -------------
   MAX_VFILE         vset files open at a single time (hdf.h)
   int16             elements in a Vgroup (fixed)
   VGNAMELENMAX      max length of a Vgroup name or class (vg.h)

   Vdata Limits
   ------------
   MAX_VFILE         vset files open at a single time (hdf.h)
   VSFIELDMAX        fields in a Vdata (hdf.h)
   FIELDNAMELENMAX   characters in a single field name (hdf.h)
   MAX_ORDER         max field order in a Vdata (hdf.h)
   VSNAMELENMAX      max length of a Vdata name or class (hdf.h)
   int16             max width in bytes of a Vdata record (fixed)
   MAX_FIELD_SIZE    maximum field width in bytes for a Vdata (hdf.h)

   Raster Images
   -------------
   int32             width or height of a raster image (fixed)

   SD Limits
   ---------
   MAX_VAR_DIMS      dimensions per dataset (defined in netcdf.h, included by mfhdf.h)
   int32             maximum dimension length (fixed)
   MAX_NC_ATTRS      attributes for a given object (defined in netcdf.h, included by mfhdf.h)
   MAX_NC_NAME       maximum length of the name of a dataset (defined in netcdf.h, included by mfhdf.h)

   (A short usage sketch of the SD limits follows this section.)

   Other Conventions / Issues
   --------------------------
   Some utility programs (e.g. ncgen) expect dataset names to be composed
   of only alphanumeric, '-' and '_' characters.
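   As a small, illustrative sketch (not part of the original notes) of how
   the SD limits above are typically used, the following C fragment sizes
   its buffers with MAX_VAR_DIMS and MAX_NC_NAME before calling SDgetinfo;
   the file name "example.hdf" is hypothetical:

       #include "mfhdf.h"

       int main(void)
       {
           int32 sd_id, sds_id, rank, ntype, nattrs;
           int32 dimsizes[MAX_VAR_DIMS];   /* big enough for any dataset's rank */
           char  name[MAX_NC_NAME + 1];    /* big enough for any dataset's name */
           intn  status;

           sd_id  = SDstart("example.hdf", DFACC_READ);
           sds_id = SDselect(sd_id, 0);    /* first dataset in the file */

           status = SDgetinfo(sds_id, name, &rank, dimsizes, &ntype, &nattrs);

           SDendaccess(sds_id);
           SDend(sd_id);
           return (status == FAIL) ? 1 : 0;
       }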