
This web site is no longer maintained (but will remain online).
Please see The HDF Group's new Support Portal for the latest information.

News Bulletin Archives

June 3, 2015:

Bulletin: HDF5-1.8.15 Patch 1 Release

HDF5-1.8.15 Patch 1 was released to correct a problem in HDF5-1.8.15 that caused compile failures in C++ applications, depending on the order in which header files were included.


May 1, 2015:

Announcing PyHexad – an HDF5 Add-in for Excel

We are proud to announce the availability of PyHexad 0.1!

PyHexad is a Python-based Excel add-in for HDF5 that can be used to read and write data in HDF5 files from Microsoft Excel on Windows. This functionality is exposed to Excel users through a set of about a dozen user-defined functions, covering display of file contents, reading and writing of arrays, tables, and attributes, and display of images stored in HDF5 datasets.
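For readers curious about what such a worksheet function does behind the scenes, the sketch below shows the kind of HDF5 read it ultimately performs. This is plain h5py, not PyHexad's actual code, and the function name and paths are made up for illustration.

    # Minimal sketch (illustrative only): read an HDF5 dataset into memory,
    # the underlying operation an Excel array-reading function would wrap.
    import h5py

    def read_array(filename, dataset_path):
        """Return the contents of an HDF5 dataset as a NumPy array."""
        with h5py.File(filename, "r") as f:
            return f[dataset_path][...]   # read the full dataset

    # Example (hypothetical file and path):
    # values = read_array("weather.h5", "/stations/temperature")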

The software can be downloaded from GitHub at:

   https://github.com/HDFGroup/PyHexad

For this first version, the emphasis was on functionality; there are no graphical frills or embellishments. It is meant for intermediate to advanced users who are looking for a few useful functions they can integrate into their workbooks. In the end, only an in-depth discussion can reveal where additional development is needed, and we would like to invite you to participate in that discussion.

Please join the discussion on the HDF-Forum, report problems, suggest improvements, submit patches, or support the development in other creative ways!

See the announcement above for more details.


May 1, 2015:

Dr. Lindsay Powers Joins The HDF Group's Earth Science Division, Boulder Office

Champaign, IL -- Dr. Lindsay Powers has joined The HDF Group's Boulder Office as Deputy Director of Earth Science. An interdisciplinary earth scientist who holds a Ph.D. in Water Resources Science from the University of Minnesota, St. Paul, she has a strong research and project management background with extensive experience in national and international scientific collaborations.

For complete details click on the link above.


March 5, 2015:

Announcing the HDF Blog

We are excited to introduce a blog series as another means to share knowledge about HDF. We've posted an introductory blog and a short history of HDF. Many interesting topics are in the pipeline, including information about HDF technologies, uses of HDF, plans for HDF, and anything else that might be of interest to HDF users and others who could enjoy the benefits of HDF. We invite you to subscribe to the blog and make comments and suggestions. Our staff will post regularly on the blog, but we also welcome guest bloggers from the community. If you'd like to do a post, please send an email message to:

     

You'll find our blog at blog.hdfgroup.org. We're looking forward to a lively and informative dialogue.

Thank you,

Mike Folk
President
The HDF Group


February 27, 2015:

Announcing HDF REST Server (h5serv) 0.1.0

We are proud to announce the availability of HDF REST Server (h5serv) 0.1.0!

HDF REST Server is a Python-based web service that can be used to send and receive HDF5 data using an HTTP-based REST interface. HDF Server supports CRUD (create, read, update, delete) operations on the full spectrum of HDF5 objects, including groups, links, datasets, attributes, and committed datatypes. Because it is a REST service, clients can be developed in JavaScript, Python, C, and other common languages.

The HDF Server extends the HDF5 data model to efficiently store large data objects (e.g. up to multi-TB data arrays) and access them over the web using a RESTful API. As datasets get larger and larger, it becomes impractical to download files to access data. Using HDF Server, data can be kept in one central location and content vended via well-defined URIs. This enables exploration and analysis of the data while minimizing the number of bytes that need to be transmitted over the network.
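As a rough illustration of the RESTful interface, the sketch below lists the members of a file's root group from a running h5serv instance using plain Python. The endpoint address is a placeholder, the HDF5 "domain" (file) is selected via the HTTP Host header as described in the HDF REST API documentation, and the JSON field names should be verified against the current h5serv documentation.

    # Minimal sketch (assumptions noted above): query an h5serv instance.
    import requests

    endpoint = "http://127.0.0.1:5000"               # placeholder h5serv address
    headers = {"host": "tall.data.hdfgroup.org"}     # selects the HDF5 domain (file)

    # GET / returns domain metadata, including the root group's UUID.
    root_uuid = requests.get(endpoint + "/", headers=headers).json()["root"]

    # GET /groups/<uuid>/links lists the links (members) of that group.
    links = requests.get(endpoint + "/groups/" + root_uuid + "/links",
                         headers=headers).json()["links"]
    for link in links:
        print(link["title"])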

Since HDF Server supports both reading and writing of data, it enables some interesting scenarios such as:

In addition to these, we would like to hear your ideas of how HDF Server could be utilized (as well as any other feedback you might have).

Thanks to everyone who helped and advised on this project.


January 14, 2015:

Champaign, IL -- Dr. Ted Habermann, Director of Earth Science at The HDF Group, was recognized last week by the Federation of Earth Science Information Partners (ESIP) in Washington, D.C., with the Martha Maiden Lifetime Achievement Award. The award acknowledges significant lifetime leadership, dedication, and a collaborative spirit in advancing the field of Earth Science information.

Dr. Habermann leads the Earth Science Division of The HDF Group. His team supports NASA's Earth Observing System, which collects and studies massive quantities of earth observation data from all over the world. Dr. Habermann is recognized for his work on national and international standards for documenting data across many processing systems and data centers so that scientists, decision-makers, and the general public can understand and trust data collected by U.S. Federal agencies and the academic community. He is also widely recognized as an expert in data management and in architectures of observing systems, data archives, and distribution systems.


October 24, 2014:

Book Release:   High Performance Parallel I/O by Prabhat (Berkeley Lab) and Quincey Koziol (The HDF Group).

Parallel I/O is an integral component of modern high performance computing (HPC), especially in storing and processing very large datasets to facilitate scientific discovery. Revealing the state of the art in this field, this book draws on insights from leading practitioners, researchers, software architects, developers, and scientists who shed light on the parallel I/O ecosystem. Also see: A Comprehensive Look at High Performance Parallel I/O


September 16, 2014:

LVHDF5 Toolkit v1.0 provides a nearly complete interface between LabVIEW and HDF5.


September 2014:

Ceemple v0.6.9 (C++ technical computing environment) now includes the latest version of HDF5. It is available from the Ceemple web site.


January 7, 2014:

The HDF Group is excited to welcome two new members to the HDF team.

Dr. Aleksandar Jelenak is joining the Earth Science team at The HDF Group. Aleksandar is an expert in satellite data access and management. Formerly at NOAA's National Environmental Satellite, Data, and Information Service (NESDIS), Aleksandar was lead designer and implementer of the JPSS Data Repository at the Center for Satellite Applications Research (STAR) and the data management lead for the international WMO Global Space-based Inter-Calibration System (GSICS) Project. Aleksandar brings many years of experience with HDF, netCDF, CF, THREDDS, IDL, MATLAB and Python to The HDF Group and will contribute to many on-going and new projects.

Dr. Scot Breitenfeld will work with our applications and High Performance Computing teams. Scot worked as a student programmer for The HDF Group while pursuing his Ph.D. in Aerospace Engineering at the University of Illinois, specializing in the Fortran APIs for HDF4 and HDF5 and in the HDF5 High-Level libraries. Scot has just received his Ph.D. and is now joining us full time to provide support for applications on high-end systems. His first two assignments will be to assist applications teams in using HDF5 on the new Blue Waters petascale system and on Lawrence Berkeley Laboratory's high-end systems.

HDF Group President Mike Folk says of the new staff members, "Aleksandar and Scot help fill an increasing need to provide application-specific services to users of HDF data. Aleksandar and Scot are not just HDF experts; they also have a deep understanding of the data needs and challenges scientists and engineers face. At the end of the day, people use HDF to help them solve problems and make discoveries. Aleksandar and Scot make us much better at helping our users in the earth sciences and HPC applications do just that."

Welcome, Aleksandar and Scot!


November 2013:

Book Release of "Python and HDF5"

Python users, be sure to check out "Python and HDF5: Unlocking Scientific Data" by Andrew Collette, published by O'Reilly. Andrew has implemented h5py, a powerful Python interface to HDF5. This book is an excellent guide to learning h5py, with loads of exercises and real-world examples.

You should also have a look at the interview with Andrew Collette given at the launch of the book in November 2013.
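For those who have not used h5py before, here is a small, self-contained taste of the interface the book teaches (the file and dataset names are made up):

    # Create an HDF5 file, write a compressed dataset with an attribute,
    # then read both back.
    import numpy as np
    import h5py

    with h5py.File("example.h5", "w") as f:
        dset = f.create_dataset("temperature", data=np.random.rand(100),
                                compression="gzip")
        dset.attrs["units"] = "kelvin"

    with h5py.File("example.h5", "r") as f:
        print(f["temperature"][:10])
        print(f["temperature"].attrs["units"])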


June 28, 2013: Samplify Releases APAX HDF Storage Library

June 28, 2013 -- Samplify Systems, Inc., announced the availability of its APAX HDF Storage Library at the recent International Supercomputing Conference in Leipzig, Germany. Samplify provides software and hardware solutions for solving memory, I/O, and storage bottlenecks in HPC, Big Data, cloud computing, consumer electronics and mobile devices. Its APAX technology is a universal numerical data encoder that operates on any integer or floating point data type and can achieve typical encoding rates of 3:1 to 8:1 without affecting the results of computing applications.

According to the APAX HDF Product Brief, "Using HDF’s plug-in capability, APAX HDF inserts Samplify’s APAX encoder into the write pipeline, and the APAX decoder in the read pipeline, to automatically save and access dataset chunks in APAX compressed format.  Any application which already uses HDF as its storage format can take advantage of Samplify’s APAX HDF storage library WITH NO CODING REQUIRED!  Unlike other plug-ins, Samplify’s APAX HDF requires no modification to solver applications."

See the APAX HDF web page on Samplify's website for more information.
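The "plug-in capability" referred to above is HDF5's dynamically loaded filter mechanism. Purely as an illustration of that mechanism (not of APAX itself, which requires no application code), the sketch below selects a third-party compression filter by its registered ID when creating a chunked dataset with h5py; the ID used here is a placeholder, not Samplify's actual registered filter number, and the plug-in must be discoverable via HDF5_PLUGIN_PATH at write time.

    # Illustrative only: select a dynamically loaded HDF5 compression filter
    # by an integer filter ID. 32000 is a placeholder, not the APAX filter ID.
    import numpy as np
    import h5py

    THIRD_PARTY_FILTER_ID = 32000   # placeholder; real IDs are assigned by The HDF Group

    with h5py.File("compressed.h5", "w") as f:
        f.create_dataset("signal",
                         data=np.random.rand(1024, 1024),
                         chunks=(256, 256),                  # filters operate per chunk
                         compression=THIRD_PARTY_FILTER_ID)  # integer selects a loaded plug-in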


May 31, 2013: Unprecedented "Trillion Particle" Simulation Relies on HDF5 to Store Data

A team of researchers from Lawrence Berkeley National Laboratory (Berkeley Lab) and Cray Inc. performed a trillion-particle simulation on the National Energy Research Scientific Computing Center's (NERSC) Cray XE6 "Hopper." The experiment pushed the machine's capabilities by using more than 120,000 processors and generating approximately 350 terabytes of data. The team recently won the Best Paper award at the 2013 Cray User Group conference for their description of the simulation and its findings.

"This is the largest I/O job ever undertaken by a NERSC application. It is quite a feat when you consider that even the smallest bottleneck in a production I/O stack can degrade performance at scale," says Prabhat, a researcher in Berkeley Lab's Scientific Visualization Group and co-author of the paper. He further explained how progress made by the ExaHDF5 team over the course of the project made it possible "to demonstrate that HDF5 can scale to petascale platforms like Hopper and achieve near peak I/O rates."

ExaHDF5 is a Department of Energy funded collaboration, led by Prabhat, to develop high performance I/O and analysis strategies for future exascale computers. A primary goal of the project has been to expand the capabilities of HDF5 for petascale and future exascale platforms.

"The outcome of this work was truly ideal,â ran a state-of-the-art simulation code at scale, which ww asnâpossible before, using the best computing resources and expertise, and this effort produced a first-time science result that no one had ever seen before. Computer science researchers always hope for such an outcome, but rarely do things come together in this fashion."


April 2, 2013: "The Earth Observer" Article Covers HDF and HDF-EOS Earth Science Data Formats, HDF-EOS Website

The March-April issue of NASA's "The Earth Observer" includes an interesting article entitled "Working with NASA’s HDF and HDF-EOS Earth Science Data Formats", written by Jennifer Brennan of NASA's Goddard Space Flight Center and H. Joe Lee, MuQun Yang, Mike Folk, and Elena Pourmal of The HDF Group.

The article provides a brief overview of how the HDF and HDF-EOS formats are used in the NASA Earth Observing System (EOS) followed by an excellent description of examples available on the HDF-EOS Tools and Information Center website.


February 7, 2013:   The HDF Group Board Member Recognized as 2013 "Person to Watch" in High-Performance Computing

HPCwire, the #1 news and information portal covering the fastest computers in the world and the people who run them, announced on January 25 that it has published its HPCwire People to Watch 2013 list, and we are proud to congratulate The HDF Group board member Dr. William D. (Bill) Gropp on his selection to the list.

As stated in the announcement, "The annual list is comprised of an elite group of the best and brightest minds in HPC whose research, dedication and hard work will be making a difference in the HPC community and in the world with their contributions."

At the University of Illinois at Urbana-Champaign, Dr. Gropp is the Paul and Cynthia Saylor Professor of Computer Science, Director of the Parallel Computing Institute, and Deputy Director for Research at the Institute for Advanced Computing Applications and Technologies. He joined the board of The HDF Group in 2011.

Dr. Gropp is also General Chair for Supercomputing 2013, the International Conference for High Performance Computing, Networking, Storage and Analysis.

About receiving this recognition, Dr. Gropp remarked, "It's a great honor to be part of this year's group of people to watch in HPC. I'll be watching to see how our predictions turn out!"
 


November 7, 2012:   ExxonMobil Upstream Research's Standards DevKit Uses HDF5DotNet

"HOUSTON, TX -- (Marketwire) -- 11/07/12 -- The Energistics Consortium announced today that ExxonMobil Upstream Research Company has provided an enhanced version of the Standards DevKit that adds support for RESQML 1.1, including managing the HDF5 file format. This enhanced version of the Standards DevKit will support RESQML, the Energistics reservoir data exchange standards as well as WITSML."

The HDF Group worked with the Energistics Consortium in 2011 to provide additional functionality in the .NET C++/CLI wrapper of the HDF5 library and supported the wrapper on Windows XP and Windows 7 for a limited period.

"RESQML is an XML-based data exchange standard that helps to address the data-incompatibility and data-integrity challenges faced by petro-technical professionals when using the multiple software technologies required along the entire subsurface workflow, for analysis, interpretation, modeling, and simulation." (Energistics Guide)

For more information about the HDF5DotNet wrapper, please visit hdf5.net.
 


August 2, 2012:  Major Upgrade of HDF5 OPeNDAP Handler

A major upgrade of the HDF5 OPeNDAP handler was released at opendap.org. The upgraded version greatly improves the accessibility of NASA Earth Sciences HDF5 data via OPeNDAP. It includes support for more HDF5 data and much better support for the CF conventions. The enhanced CF support will greatly improve interoperability with those clients that understand CF.
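From the client side, HDF5 data served through the handler looks like any other OPeNDAP dataset. A minimal sketch (the URL is a placeholder, and the netCDF4 Python module must be built with DAP support):

    # Open an OPeNDAP URL as if it were a local file; the HDF5 handler on the
    # server presents the file's contents, with CF metadata, over DAP.
    from netCDF4 import Dataset

    url = "http://example.org/opendap/hyrax/some_granule.h5"   # placeholder URL
    ds = Dataset(url)
    print(ds.variables.keys())    # variables exposed by the handler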
 


July 12, 2012:  DOE Primes Pump for Exascale Supercomputers

Article published in HPCwire on July 10, 2012, about The HDF Group's collaboration with Whamcloud and Cray.
 


July 10, 2012:  Whamcloud Leads Group of HPC Experts Winning DOE FastForward Storage and IO Project

The HDF Group is collaborating with Cray and Whamcloud on the FastForward program. FastForward is a jointly funded project between the Department of Energy (DOE) and National Nuclear Security Administration (NNSA) to accelerate the research and development (R & D) of critical technologies needed for extreme scale computing. Exascale computing is essentially a grand challenge to provide the next level of computational power required to help ensure the prosperity and security of the United States.
 


March 23, 2012:   HDF 4.2.7-patch1

A patched version of the HDF 4.2.7 source code, HDF 4.2.7-patch1, is now available to correct a configure issue with compilers that contain a '-' in the name.
 


December 19, 2011:   HDF5DotNet 1.8.8 now available

HDF5DotNet 1.8.8 is now available for download. This release supports HDF5-1.8.8. See the HDF5DotNet home page for detailed information regarding this release:

   http://hdf5.net/


October 28, 2011:   NASA NPP Spacecraft Launches on Earth Observing Mission

The NPOESS Preparatory Project (NPP) spacecraft lifted off at 5:48 a.m. EDT on Oct. 28, 2011, to begin its earth observation mission. NPP is the first of several satellites, all of whose data will be stored in HDF5.
 

June 1, 2011:   The HDF Group joins OGC

Announcement that The HDF Group has joined the Open Geospatial Consortium (OGC) as an Associate Member.
 


December 5, 2010:   HDF Group's growth, move prompts news coverage

Link to an article in the Champaign, Illinois, News-Gazette.
 


October 12, 2010:   SMAP Data Products in HDF5

SMAP (Soil Moisture Active & Passive) data products will be delivered in the HDF5 Format. SMAP is one of four Tier-1 missions recommended by the U.S. National Research Council Committee on Earth Science and Applications from Space. It will provide global measurements of soil moisture and its freeze/thaw state. These measurements will be used to enhance understanding of processes that link the water, energy and carbon cycles and extend the capabilities of weather and climate prediction models. SMAP data will also be used to quantify net carbon flux in boreal landscapes and improve flood prediction and drought monitoring capabilities.


July 29, 2010:   Sony Pictures Imageworks and Industrial Light & Magic Join Forces on ALEMBIC

Sony Pictures Imageworks and Industrial Light and Magic (ILM) have collaborated to create ALEMBIC, an open source exchange format that aims to become the standard for exchanging animated computer graphics scenes between content creation software packages.


February 26, 2010:   HDF5 1.8 Corruption Problem

A corruption problem was found in HDF5 1.8, which affected releases 1.8.0 through 1.8.4. The problem was fixed in HDF5 1.8.4 Patch 1.


 


October 2, 2009:
Award-winning Sony Pictures Imageworks uses HDF5

Sony Pictures Imageworks, the award-winning visual effects and digital character animation unit of Sony Pictures Digital Productions, is launching an open source development program, which includes the Field3D technology.

Field3D, a voxel data storage library, provides C++ classes that handle storage in memory, as well as a file format based on HDF5 that allows the C++ objects to be easily written to and read from disk.


May 8, 2009:
Latest HDF5 and HDF Group Podcast

Mike Folk and Quincey Koziol of The HDF Group speak about the HDF5 file API.
 


May 29, 2009:
RFC: Reporting of Non-Comparable Datasets by h5diff

A Request for Comments (RFC) on the handling of non-comparable datasets by h5diff has just been published. The HDF Group is soliciting feedback on this RFC.
 


March 3, 2009:
HDF5 Chunking Performance Improvement

A recent bug fix resulted in a significant improvement in chunking performance. This fix will be available in releases HDF5 1.6.9 and HDF5 1.8.3, due out in May, but the fix is in the latest snapshots.
 


January 14, 2009:
American Meteorological Society 89th Annual Meeting

Extended abstracts on work done by The HDF Group were presented at the 89th Annual AMS meeting:

    Investigation of using HDF5 Archival Information Packages (AIP) to store NASA ECS Data
    Using a friendly OPeNDAP client library to access HDF5 data

These abstracts can also be found on The HDF Group presentations page.
 


January 7, 2009:
HDF5 and netCDF-4 Tutorial @ 10th LCI Conference

The tutorial, "HDF5 and netCDF-4: Two Solutions for Data Management Problems based on One File Format," will be presented on March 9, 2009 at the 10th LCI International Conference on High-Performance Clustered Computing in Boulder, Colorado.


 


Bulletin December 12, 2008

RFC: Setting Raw Data Chunk Cache Parameters in HDF5

A Request for Comments (RFC) on new functions for setting individual chunk cache parameters for each dataset in HDF5 has just been published.

The HDF Group is currently soliciting feedback on this RFC. Community comments will be one of the factors considered by The HDF Group in making the final design and implementation decisions.
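For orientation, the chunk cache already has file-level knobs; the sketch below shows how those are exposed in Python through h5py (keyword names per h5py 2.9 and later), while the RFC above proposes finer, per-dataset control in the C library's property-list API.

    # Minimal sketch: file-level raw data chunk cache settings via h5py.
    import h5py

    f = h5py.File("big_chunked_file.h5", "r",
                  rdcc_nbytes=64 * 1024 * 1024,  # chunk cache size in bytes per dataset
                  rdcc_nslots=1000003,           # number of hash slots (ideally a prime)
                  rdcc_w0=0.75)                  # eviction preference for fully read/written chunks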


Bulletin November 3, 2008

HDF5 Users BOF @ SC08

The HDF Group will host a Birds-of-a-Feather (BOF) session for HDF5 Users at SC08 on November 19th. Quincey Koziol, Chief Architect for The HDF Group, will discuss features currently under development, answer questions, and gather input for future directions.

Please see the following page for more details: HDF5 BOF Information


Bulletin October 7, 2008

NASA Commits $3.1M to The HDF Group for Earth System Science

As reported on October 1st, 2008 in HPCwire, The HDF Group has received a 3-year contract from the National Aeronautics and Space Administration (NASA) to provide ongoing development and support for the HDF technologies used by NASA's Earth Observing System (EOS).

The contract will be announced at the upcoming 12th HDF & HDF-EOS Workshop in Aurora, Colorado, October 15th through 17th.


Bulletin October 3, 2008

RFC: Native Time Types in HDF5

An RFC has been published for handling Native Time Types in HDF5. The HDF Group is currently soliciting feedback on this RFC. [ PDF ]


Bulletin September 2, 2008

RFC: Special Values in HDF5

A new Request for Comments on the handling of Special Values in HDF5 has been published. The HDF Group is currently soliciting feedback on this RFC. Community comments will be one of the factors considered by The HDF Group in making the final design and implementation decision. Comments may be sent to The HDF Group Helpdesk.


Bulletin August 14, 2008

HDF5-OPeNDAP Used In Tracking Beijing Air Quality

The HDF5-OPeNDAP Project has been used to facilitate the use of HDF-EOS Data to track Beijing air quality. For more information on this, see: HDF5 OPeNDAP Brief 8/14/08 [pdf]


Bulletin June 24, 2008

NetCDF-4 Performance Report

NetCDF-4 is an I/O software package that retains the original netCDF APIs while using HDF5 to store the data. Sponsored by NASA ESTO, netCDF-4 is the result of a collaboration between Unidata and The HDF Group.

The HDF Group has prepared a report on the performance of netCDF-4 that uses benchmarks and examples to:

Some of the performance tuning and pitfalls discussions may also be of interest to users of HDF5 who do not use netCDF-4. The report is available at:

   2008-06_netcdf4_perf_report.pdf
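Because a netCDF-4 file is an HDF5 file underneath, the same data can be written through the netCDF API and read back through HDF5 tools. A minimal sketch (file and variable names made up):

    # Write with the netCDF-4 API, then read the same file with h5py.
    import numpy as np
    from netCDF4 import Dataset
    import h5py

    with Dataset("demo.nc", "w", format="NETCDF4") as nc:
        nc.createDimension("time", 10)
        temp = nc.createVariable("temperature", "f4", ("time",), zlib=True)
        temp[:] = np.linspace(270.0, 280.0, 10)
        temp.units = "K"

    with h5py.File("demo.nc", "r") as f:
        print(f["temperature"][:])
        print(f["temperature"].attrs["units"])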


Last modified: 24 January 2017