Library Design, API Function Specification, and Test for Fletcher32 Checksum

 

 

Introduction

 

This document addresses the internal library design, the API functions, and the test program for the error-detecting code feature, specifically the Fletcher32 checksum.  Please read the Error-detecting code proposal first for background information.  For more details about the Fletcher32 checksum, see http://rfc.sunsite.dk/rfc/rfc1071.html and http://www.netzmafia.de/rfc/internet-drafts/draft-cavanna-iscsi-crc-vs-cksum-00.txt.

 

Library Design

 

Since the first stage of implementing the error-detecting code targets chunked datasets, the checksum algorithm can be added to the filter pipeline as a new filter.  There is no file format change: a 4-byte checksum is appended to the original raw data during the write process, making each stored chunk 4 bytes larger than the original data.  The metadata describing the data dimensionality and size remains the same.  More information about dataset filters can be found in the User’s Guide.
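For example, under this scheme a chunk holding 2 x 25 four-byte integers (200 bytes of raw data) occupies 204 bytes on disk once the Fletcher32 filter is applied; the dataset's dataspace and datatype information are unaffected.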

 

API Functions

 

There will be four new API functions: H5Pset_fletcher32, H5Pset_edc_check, H5Pget_edc_check, and H5Pset_filter_callback.  In addition, an EDC (error-detecting code) value will be added to the filter parameter of the existing function H5Pset_filter.  More values will be added to this parameter as more EDC algorithms are included in the library.

 

H5Pset_fletcher32 enables the Fletcher32 checksum for a dataset creation property list.  H5Pset_edc_check gives the user the option to skip error detection (the checksum, at this stage) during the read process, but not the write process; skipping the check saves time when reading the data.  The functions H5Pget_nfilters and H5Pget_filter can be used to check whether the checksum is enabled.  The user can call H5Pget_edc_check to query whether an EDC algorithm is enabled for reading data.  H5Pset_filter_callback sets a user callback function to handle failures in filters.  Without a user callback function, the write and read processes will fail if the error-detecting code is enabled and finds an error in the data.
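For orientation, here is a minimal sketch of how these functions are intended to be used together.  Error checking is omitted, and the identifiers file and my_filter_callback are illustrative placeholders, not part of the proposed API.

hid_t   dcpl, dxpl, space, dset;
hsize_t dims[2]  = {100, 200};
hsize_t chunk[2] = {2, 25};

/* Enable the Fletcher32 checksum on a chunked dataset creation property list */
dcpl = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_chunk(dcpl, 2, chunk);
H5Pset_fletcher32(dcpl);

space = H5Screate_simple(2, dims, NULL);
dset  = H5Dcreate(file, "checksummed_data", H5T_NATIVE_INT, space, dcpl);

/* Optionally skip checksum verification during reads, and install a
 * callback that decides what to do when a filter fails during I/O. */
dxpl = H5Pcreate(H5P_DATASET_XFER);
H5Pset_edc_check(dxpl, H5P_DISABLE_EDC);
H5Pset_filter_callback(dxpl, my_filter_callback, NULL);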

 

Name: H5Pset_filter

Signature:

herr_t H5Pset_filter(hid_t plist, H5Z_filter_t filter, unsigned int flags, size_t cd_nelmts, const unsigned int cd_values[] )

Purpose:

Adds a filter to the filter pipeline.

Description:

H5Pset_filter adds the specified filter and corresponding properties to the end of an output filter pipeline. If plist is a dataset creation property list, the filter is added to the permanent filter pipeline; if plist is a dataset transfer property list, the filter is added to the transient filter pipeline.

The array cd_values contains cd_nelmts integers which are auxiliary data for the filter. The integer values will be stored in the dataset object header as part of the filter information.

The flags argument is a bit vector with the following fields specifying certain general properties of the filter:

H5Z_FLAG_OPTIONAL:  If this bit is set then the filter is optional.  If the filter fails during an H5Dwrite() operation, the filter is simply excluded from the pipeline for the chunk for which it failed, and the filter will not participate in the pipeline during an H5Dread() of that chunk.  This is commonly used for compression filters: if the filter result would be larger than the input, the compression filter returns failure and the uncompressed data is stored in the file.  If this bit is cleared and a filter fails, then H5Dwrite() or H5Dread() also fails.  The Fletcher32 checksum filter is always mandatory; this flag does not apply to it.

At this time, this filter-setting function supports data compression, data shuffling, and the error-detecting code (Fletcher32 checksum).  Please refer to the filter parameter below for the corresponding values.

Note:

This function currently supports only the permanent filter pipeline; plist must be a dataset creation property list.

Parameters:

hid_t plist

IN: Property list identifier.

H5Z_filter_t filter

IN: Filter to be added to the pipeline.  Valid values are

H5Z_FILTER_DEFLATE,

H5Z_FILTER_SHUFFLE,

H5Z_FILTER_FLETCHER32

unsigned int flags

IN: Bit vector specifying certain general properties of the filter.

size_t cd_nelmts

IN: Number of elements in cd_values.

const unsigned int cd_values[]

IN: Auxiliary data for the filter.

Returns:

Returns a non-negative value if successful; otherwise returns a negative value.
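For illustration, the checksum filter can be added through this generic interface as follows; error checking is omitted, and dc and chunk_size are the same names used in the test code in the Appendix.

dc = H5Pcreate(H5P_DATASET_CREATE);
H5Pset_chunk(dc, 2, chunk_size);
/* The checksum filter needs no flags and no auxiliary data */
H5Pset_filter(dc, H5Z_FILTER_FLETCHER32, 0, 0, NULL);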

 

Name: H5Pset_fletcher32

Signature:

herr_t H5Pset_fletcher32(hid_t plist)

Purpose:

Enables the Fletcher32 checksum for a dataset.

Description:

H5Pset_fletcher32 enables the Fletcher32 checksum filter for a dataset creation property list.  At this time, only chunked datasets are supported.

Parameters:

hid_t plist

IN: Identifier for the dataset creation property list. 

Returns:

Returns a non-negative value if successful; otherwise returns a negative value.
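H5Pset_fletcher32 is the convenience equivalent of the generic H5Pset_filter call shown above, for example:

/* Equivalent to H5Pset_filter(dc, H5Z_FILTER_FLETCHER32, 0, 0, NULL) */
H5Pset_fletcher32(dc);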

 

Name: H5Pset_edc_check

Signature:

herr_t H5Pset_edc_check(hid_t plist, H5P_EDC_t check)

Purpose:

Enables or disables error detection for dataset reading.

Description:

H5Pset_edc_check enables or disables error detection for the data reading process on a dataset transfer property list.  The error-detecting algorithm is whichever one the user chose earlier.  This function cannot enable or disable error detection for the data writing process.  At this time, only chunked datasets are supported.

Parameters:

hid_t plist

IN: Identifier for the dataset transfer property list.

H5P_EDC_t check

IN: A value that decides whether error detection is enabled for dataset reading.  The valid values are

        H5P_ENABLE_EDC
        H5P_DISABLE_EDC

The default value is H5P_ENABLE_EDC.

Returns:

Returns a non-negative value if successful; otherwise returns a negative value.
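A minimal sketch of skipping checksum verification during reads, using the values defined above; dxpl, dataset, and buffer are illustrative names and error checking is omitted.

dxpl = H5Pcreate(H5P_DATASET_XFER);
H5Pset_edc_check(dxpl, H5P_DISABLE_EDC);   /* writes are still checksummed */
H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, buffer);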

 

Name: H5Pget_edc_check

Signature:

H5P_EDC_t H5Pget_edc_check(hid_t plist)

Purpose:

Queries whether error detection is enabled for dataset reading.

Description:

H5Pget_edc_check queries whether error detection is enabled on a dataset transfer property list for the data reading process.  The error-detecting algorithm is whichever one the user chose earlier.  This function cannot enable or disable error detection for the data writing process.  At this time, only chunked datasets are supported.

Parameters:

hid_t plist

IN: Identifier for the dataset transfer property list.

Returns:

Returns H5P_ENABLE_EDC(1) or H5P_DISABLE_EDC(0) if successful; otherwise returns a negative value.
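For example, a caller can confirm the setting before reading; dxpl is the transfer property list from the sketch above.

if(H5Pget_edc_check(dxpl) == H5P_ENABLE_EDC)
    printf("Checksum will be verified during reads.\n");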

 

Name: H5Pset_filter_callback

Signature:

herr_t H5Pset_filter_callback(hid_t plist, H5Z_filter_func_t func, void* op_data)

Purpose:

Sets the user's callback function for filters.

Description:

H5Pset_filter_callback sets the user's callback function on a dataset transfer property list.  This callback function defines what the user wants to do when a filter fails.

Parameters:

hid_t plist

IN: Identifier for the dataset transfer property list.

H5Z_filter_func_t func

IN: User's callback function, defined as

typedef H5Z_cb_return_t (H5Z_filter_func_t)(H5Z_filter_t filter, void* buf, size_t buf_size, void* op_data)

where filter indicates which filter failed, buf and buf_size pass in the data on which the filter failed, and op_data is the user's input data for this callback function.  The valid return values are H5Z_CB_CONT and H5Z_CB_FAIL.

void* op_data

IN: User's input data for the callback function.

Returns:

Returns a non-negative value if successful; otherwise returns a negative value.
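Below is a sketch of a callback that continues the read when the Fletcher32 checksum fails but aborts on any other filter failure; the function name is illustrative, and the return type and values follow the typedef above.

static H5Z_cb_return_t
continue_on_checksum_failure(H5Z_filter_t filter, void *buf, size_t buf_size,
                             void *op_data)
{
    if(H5Z_FILTER_FLETCHER32 == filter)
        return H5Z_CB_CONT;    /* keep reading despite the bad checksum */
    else
        return H5Z_CB_FAIL;    /* any other filter failure aborts the I/O */
}

/* ... */
H5Pset_filter_callback(dxpl, continue_on_checksum_failure, NULL);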

 

 

Testing Model

 

The following pseudocode illustrates how we are going to test the Fletcher32 checksum.

The actual code for this test can be found in the Appendix.

 

Step 1: Enable the Fletcher32 checksum as a filter for writing and reading a chunked dataset.

    H5Pset_filter (dataset creation property list, H5Z_FILTER_FLETCHER32);
    H5Dcreate (dataset creation property list);
    H5Dwrite;
    H5Dread;
    Compare data correctness;

 

Step 2: Enable the Fletcher32 checksum for writing but disable it during reading to speed up the read.

    H5Pset_filter (dataset creation property list, H5Z_FILTER_FLETCHER32);
    H5Pset_edc_check (dataset transfer property list, H5Z_DISABLE_EDC);
    H5Dcreate (dataset creation property list);
    H5Dwrite (dataset transfer property list);
    H5Dread (dataset transfer property list);
    Compare data correctness;

 

Step 3: Simulate data corruption on disk by modifying part of the data with another filter.  Also set the user's filter callback functions to decide whether to continue reading data when there is data corruption.

    H5Pset_filter (dataset creation property list, H5Z_FILTER_FLETCHER32);
    Randomly decide the offset and length of the corrupted data;
    H5Zregister (H5Z_CORRUPT, corrupt data function);
    H5Pset_filter (dataset creation property list, H5Z_CORRUPT, corrupted data);

    H5Dcreate (dataset creation property list);
    H5Dwrite (dataset transfer property list);
    H5Dread (dataset transfer property list);
    Check that the read fails, as expected with the default setting;

    Set the filter callback function to continue despite the corrupted data by calling
    H5Pset_filter_callback (dataset transfer property list, callback function to continue);
    H5Dread (dataset transfer property list);
    Check that the read continues;

    Set the filter callback function to fail when data is corrupted by calling
    H5Pset_filter_callback (dataset transfer property list, callback function to fail);
    H5Dread (dataset transfer property list);
    Check that the read fails;

 

Step 4: Test the filter pipeline in the order checksum + shuffle + deflate.

    H5Pset_fletcher32 (dataset creation property list);
    H5Pset_shuffle (dataset creation property list);
    H5Pset_deflate (dataset creation property list, deflate level);
    H5Dcreate (dataset creation property list);
    H5Dwrite;
    H5Dread;
    Compare data correctness;

 

Step 5: Test the filter pipeline in another order: shuffle + deflate + checksum.

    H5Pset_shuffle (dataset creation property list);
    H5Pset_deflate (dataset creation property list, deflate level);
    H5Pset_fletcher32 (dataset creation property list);
    H5Dcreate (dataset creation property list);
    H5Dwrite;
    H5Dread;
    Compare data correctness;

 

Appendix

 

Testing for the Fletcher32 checksum can be added to the existing tests for H5Pset_filter, H5Pset_deflate, and H5Pset_shuffle in dsets.c.  The relevant code in dsets.c is the function test_filters(), shown below.  (The arrays points and check used by test_filter_internal() are buffers defined elsewhere in dsets.c.)

 

/*-------------------------------------------------------------------------

 * Function:    test_filters

 *

 * Purpose:     Tests dataset filter.

 *-------------------------------------------------------------------------

 */

static herr_t

test_filters(hid_t file)

{

    hid_t       dc;                 /* Dataset creation property list ID */

    const hsize_t chunk_size[2] = {2, 25};  /* Chunk dimensions */

    hsize_t     null_size;          /* Size of dataset with null filter */

#ifdef H5_HAVE_FILTER_FLETCHER32

    hsize_t     fletcher32_size;       /* Size of dataset with Fletcher32 checksum */

    unsigned int data_corrupt[2];     /* position and length of data to be corrupted */

#endif /* H5_HAVE_FILTER_FLETCHER32 */

#ifdef H5_HAVE_FILTER_DEFLATE

    hsize_t     deflate_size;       /* Size of dataset with deflate filter */

#endif /* H5_HAVE_FILTER_DEFLATE */

#ifdef H5_HAVE_FILTER_SHUFFLE

    hsize_t     shuffle_size;       /* Size of dataset with shuffle filter */

#endif /* H5_HAVE_FILTER_SHUFFLE */

#if defined H5_HAVE_FILTER_DEFLATE && defined H5_HAVE_FILTER_SHUFFLE && defined H5_HAVE_FILTER_FLETCHER32

    hsize_t     combo_size;     /* Size of dataset with shuffle+deflate filter */

#endif /* H5_HAVE_FILTER_DEFLATE && H5_HAVE_FILTER_SHUFFLE && H5_HAVE_FILTER_FLETCHER32 */

   

    /*----------------------------------------------------------

     * STEP 1: Test Fletcher32 Checksum by itself.

     *----------------------------------------------------------

     */

#ifdef H5_HAVE_FILTER_FLETCHER32

    puts("Testing Fletcher32 checksum(enabled for read)");

    if((dc = H5Pcreate(H5P_DATASET_CREATE))<0) goto error;

    if (H5Pset_chunk (dc, 2, chunk_size)<0) goto error;

    if (H5Pset_filter (dc,H5Z_FILTER_FLETCHER32,0,0,NULL)<0) goto error;

 

    /* Enable checksum during read */
    if(test_filter_internal(file,DSET_FLETCHER32_NAME,dc,
            ENABLE_FLETCHER32,DATA_NOT_CORRUPTED,&fletcher32_size)<0)
        goto error;

    if(fletcher32_size<=null_size) {

        H5_FAILED();

        puts("    Size after checksumming is incorrect.");

        goto error;

    } /* end if */

 

    /* Disable checksum during read */

    puts("Testing Fletcher32 checksum(disabled for read)");

    if(test_filter_internal(file,DSET_FLETCHER32_NAME_2,dc,
            DISABLE_FLETCHER32,DATA_NOT_CORRUPTED,&fletcher32_size)<0)
        goto error;

    if(fletcher32_size<=null_size) {

        H5_FAILED();

        puts("    Size after checksumming is incorrect.");

        goto error;

    } /* end if */

 

    /* Try to corrupt data and see if checksum fails */

    puts("Testing Fletcher32 checksum(when data is corrupted)");

    data_corrupt[0] = 52;

    data_corrupt[1] = 33;

    if (H5Zregister (H5Z_CORRUPT, "corrupt", corrupt_data)<0) goto error;

    if (H5Pset_filter (dc, H5Z_CORRUPT, 0, 2, data_corrupt)<0) goto error;

    if(test_filter_internal(file,DSET_FLETCHER32_NAME_3,dc,

    ENABLE_FLETCHER32,DATA_CORRUPTED,&fletcher32_size)<0) goto error;

    if(fletcher32_size<=null_size) {

        H5_FAILED();

        puts("    Size after checksumming is incorrect.");

        goto error;

    } /* end if */

 

    /* Clean up objects used for this test */

    if (H5Pclose (dc)<0) goto error;

#else /* H5_HAVE_FILTER_FLETCHER32 */

    TESTING("fletcher32 checksum");

    SKIPPED();

    puts("fletcher32 checksum not enabled");

#endif /* H5_HAVE_FILTER_FLETCHER32 */

 

    /*----------------------------------------------------------

     * STEP 2: Test shuffle + deflate + checksum in any order.

     *----------------------------------------------------------

     */

#if defined H5_HAVE_FILTER_DEFLATE && defined H5_HAVE_FILTER_SHUFFLE && defined H5_HAVE_FILTER_FLETCHER32

    puts("Testing shuffle+deflate+checksum filters(checksum first)");

    if((dc = H5Pcreate(H5P_DATASET_CREATE))<0) goto error;

    if (H5Pset_chunk (dc, 2, chunk_size)<0) goto error;

    if (H5Pset_fletcher32 (dc)<0) goto error;

    if (H5Pset_shuffle (dc, sizeof(int))<0) goto error;

    if (H5Pset_deflate (dc, 6)<0) goto error;

    if(test_filter_internal(file,DSET_SHUF_DEF_FLET_NAME,dc,

    ENABLE_FLETCHER32,DATA_NOT_CORRUPTED,&combo_size)<0) goto error;

    /*if(combo_size>=deflate_size+2 || combo_size<=deflate_size) {

        H5_FAILED();

        puts("    Shuffle+deflate+checksum size is incorrect.");

        goto error;

    }*/ /* end if */

 

    /* Clean up objects used for this test */

    if (H5Pclose (dc)<0) goto error;

 

    puts("Testing shuffle+deflate+checksum filters(checksum last)");

    if((dc = H5Pcreate(H5P_DATASET_CREATE))<0) goto error;

    if (H5Pset_chunk (dc, 2, chunk_size)<0) goto error;

    if (H5Pset_shuffle (dc, sizeof(int))<0) goto error;

    if (H5Pset_deflate (dc, 6)<0) goto error;

    if (H5Pset_fletcher32 (dc)<0) goto error;

 

    if(test_filter_internal(file,DSET_SHUF_DEF_FLET_NAME_2,dc,

    ENABLE_FLETCHER32,DATA_NOT_CORRUPTED,&combo_size)<0) goto error;

    /*if(combo_size>=deflate_size+2 || combo_size<=deflate_size) {

        H5_FAILED();

        puts("    Shuffle+deflate+checksum size is incorrect.");

        goto error;

    }*/ /* end if */

 

    /* Clean up objects used for this test */

    if (H5Pclose (dc)<0) goto error;

#else /* H5_HAVE_FILTER_DEFLATE && H5_HAVE_FILTER_SHUFFLE && H5_HAVE_FILTER_FLETCHER32 */

    TESTING("shuffle+deflate+fletcher32 filters");

    SKIPPED();

    puts("Deflate, shuffle, or Fletcher32 checksum filter not enabled");

#endif /* H5_HAVE_FILTER_DEFLATE && H5_HAVE_FILTER_SHUFFLE && H5_HAVE_FILTER_FLETCHER32 */

 

    return 0;

error:

    return -1;

}

 

/*-------------------------------------------------------------------------

 * Function:    test_filter_internal

 *-------------------------------------------------------------------------

 */

static herr_t

test_filter_internal(hid_t fid, const char *name, hid_t dcpl, int if_fletcher32, int corrupted, hsize_t *dset_size)

{

    hid_t               dataset;        /* Dataset ID */

    hid_t               dxpl;           /* Dataset xfer property list ID */

    hid_t               sid;            /* Dataspace ID */

    const hsize_t       size[2] = {100, 200};           /* Dataspace dimensions */

    const hssize_t      hs_offset[2] = {7, 30}; /* Hyperslab offset */

    const hsize_t       hs_size[2] = {4, 50};   /* Hyperslab size */

    void                *tconv_buf = NULL;      /* Temporary conversion buffer */

    hsize_t             i, j, n;        /* Local index variables */

 

    /* Create the data space */

    if ((sid = H5Screate_simple(2, size, NULL))<0) goto error;

 

    /*

     * Create a small conversion buffer to test strip mining. We

     * might as well test all we can!

     */

    if ((dxpl = H5Pcreate (H5P_DATASET_XFER))<0) goto error;

    tconv_buf = malloc (1000);

    if (H5Pset_buffer (dxpl, 1000, tconv_buf, NULL)<0) goto error;

    if (if_fletcher32==DISABLE_FLETCHER32) {

        if(H5Pset_edc_check(dxpl, H5Z_DISABLE_EDC)<0)

            goto error;

        if(H5Z_DISABLE_EDC != H5Pget_edc_check(dxpl))

        goto error;

    }

    TESTING("filter (setup)");

   

    /* Create the dataset */

    if ((dataset = H5Dcreate(fid, name, H5T_NATIVE_INT, sid,

                             dcpl))<0) goto error;

    PASSED();

 

    /*----------------------------------------------------------------------

     * STEP 1: Read uninitialized data.  It should be zero.

     *----------------------------------------------------------------------

     */

    TESTING("filter (uninitialized read)");

 

    if (H5Dread (dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

        goto error;

   

    for (i=0; i<size[0]; i++) {

        for (j=0; j<size[1]; j++) {

            if (0!=check[i][j]) {

                H5_FAILED();

                printf("    Read a non-zero value.\n");

                printf("    At index %lu,%lu\n",

                       (unsigned long)i, (unsigned long)j);

                goto error;

            }

        }

    }

    PASSED();

 

    /*----------------------------------------------------------------------

     * STEP 2: Test filter by setting up a chunked dataset and writing

     * to it.

     *----------------------------------------------------------------------

     */

    TESTING("filter (write)");

   

    for (i=n=0; i<size[0]; i++) {

        for (j=0; j<size[1]; j++) {

            points[i][j] = (int)(n++);

        }

    }

 

    if (H5Dwrite(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, points)<0)

        goto error;

 

    PASSED();

 

    /*----------------------------------------------------------------------

     * STEP 3: Try to read the data we just wrote.

     *----------------------------------------------------------------------

     */

    TESTING("filter (read)");

 

    /* Read the dataset back */

    if(corrupted) {

        /* Default behavior is failure when data is corrupted. */

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

 

        /* Callback decides to continue in spite of the data corruption. */

        if(H5Pset_filter_callback(dxpl, filter_cb_cont, NULL)<0) goto error;

        if(H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

            goto error;

           

        /* Callback decides to fail when data is corrupted. */

        if(H5Pset_filter_callback(dxpl, filter_cb_fail, NULL)<0) goto error;

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

    } else {

        if (H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

           goto error;

 

        /* Check that the values read are the same as the values written */

        for (i=0; i<size[0]; i++) {

           for (j=0; j<size[1]; j++) {

               if (points[i][j] != check[i][j]) {

                  H5_FAILED();

                  printf("    Read different values than written.\n");

                  printf("    At index %lu,%lu\n",

                           (unsigned long)i, (unsigned long)j);

                  goto error;

               }

           }

        }

    }

   

    PASSED();

 

    /*----------------------------------------------------------------------

     * STEP 4: Write new data over the top of the old data.  The new data is

     * random thus not very compressible, and will cause the chunks to move

     * around as they grow.  We only change values for the left half of the

     * dataset although we rewrite the whole thing.

     *----------------------------------------------------------------------

     */

    TESTING("filter (modify)");

   

    for (i=0; i<size[0]; i++) {

        for (j=0; j<size[1]/2; j++) {

            points[i][j] = rand ();

        }

    }

    if (H5Dwrite (dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, points)<0)

        goto error;

       

    if(corrupted) {

        /* Default behavior is failure when data is corrupted. */

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

 

        /* Callback decides to continue in spite of the data corruption. */

        if(H5Pset_filter_callback(dxpl, filter_cb_cont, NULL)<0) goto error;

        if(H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

            goto error;

           

        /* Callback decides to fail when data is corrupted. */

        if(H5Pset_filter_callback(dxpl, filter_cb_fail, NULL)<0) goto error;

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

    } else {

        /* Read the dataset back and check it */

        if (H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

           goto error;

 

        /* Check that the values read are the same as the values written */

        for (i=0; i<size[0]; i++) {

           for (j=0; j<size[1]; j++) {

               if (points[i][j] != check[i][j]) {

                  H5_FAILED();

                  printf("    Read different values than written.\n");

                  printf("    At index %lu,%lu\n",

                           (unsigned long)i, (unsigned long)j);

                  goto error;

               }

           }

        }

    }

 

    PASSED();

 

    /*----------------------------------------------------------------------

     * STEP 5: Close the dataset and then open it and read it again.  This

     * insures that the filter message is picked up properly from the

     * object header.

     *----------------------------------------------------------------------

     */

    TESTING("filter (re-open)");

   

    if (H5Dclose (dataset)<0) goto error;

    if ((dataset = H5Dopen (fid, name))<0) goto error;

   

    if(corrupted) {

        /* Default behavior is failure when data is corrupted. */

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

 

        /* Callback decides to continue in spite of the data corruption. */

        if(H5Pset_filter_callback(dxpl, filter_cb_cont, NULL)<0) goto error;

        if(H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

            goto error;

           

        /* Callback decides to fail when data is corrupted. */

        if(H5Pset_filter_callback(dxpl, filter_cb_fail, NULL)<0) goto error;

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

    } else {

        if (H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

           goto error;

 

        /* Check that the values read are the same as the values written */

        for (i=0; i<size[0]; i++) {

           for (j=0; j<size[1]; j++) {

               if (points[i][j] != check[i][j]) {

                  H5_FAILED();

                  printf("    Read different values than written.\n");

                  printf("    At index %lu,%lu\n",

                        (unsigned long)i, (unsigned long)j);

                  goto error;

               }

           }

        }

    }

   

    PASSED();

   

    /*----------------------------------------------------------------------

     * STEP 6: Test partial I/O by writing to and then reading from a

     * hyperslab of the dataset.  The hyperslab does not line up on chunk

     * boundaries (we know that case already works from above tests).

     *----------------------------------------------------------------------

     */

    TESTING("filter (partial I/O)");

 

    for (i=0; i<hs_size[0]; i++) {

        for (j=0; j<hs_size[1]; j++) {

            points[hs_offset[0]+i][hs_offset[1]+j] = rand ();

        }

    }

    if (H5Sselect_hyperslab(sid, H5S_SELECT_SET, hs_offset, NULL, hs_size,

                            NULL)<0) goto error;

    if (H5Dwrite (dataset, H5T_NATIVE_INT, sid, sid, dxpl, points)<0)

        goto error;

    

    if(corrupted) {

        /* Default behavior is failure when data is corrupted. */

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

 

        /* Callback decides to continue in spite of the data corruption. */

        if(H5Pset_filter_callback(dxpl, filter_cb_cont, NULL)<0) goto error;

        if(H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check)<0)

            goto error;

           

        /* Callback decides to fail when data is corrupted. */

        if(H5Pset_filter_callback(dxpl, filter_cb_fail, NULL)<0) goto error;

        H5E_BEGIN_TRY {

            H5Dread(dataset, H5T_NATIVE_INT, H5S_ALL, H5S_ALL, dxpl, check);

        } H5E_END_TRY;

    } else {

        if (H5Dread (dataset, H5T_NATIVE_INT, sid, sid, dxpl, check)<0)

           goto error;

   

        /* Check that the values read are the same as the values written */

        for (i=0; i<hs_size[0]; i++) {

           for (j=0; j<hs_size[1]; j++) {

               if (points[hs_offset[0]+i][hs_offset[1]+j] !=

                  check[hs_offset[0]+i][hs_offset[1]+j]) {

                  H5_FAILED();

                  printf("    Read different values than written.\n");

                  printf("    At index %lu,%lu\n",

                         (unsigned long)(hs_offset[0]+i),

                         (unsigned long)(hs_offset[1]+j));

                  printf("    At original: %d\n",

                         (int)points[hs_offset[0]+i][hs_offset[1]+j]);

                  printf("    At returned: %d\n",

                         (int)check[hs_offset[0]+i][hs_offset[1]+j]);

                  goto error;

               }

           }

        }

    }

 

    PASSED();

 

    /* Get the storage size of the dataset */

    if((*dset_size=H5Dget_storage_size(dataset))==0) goto error;

   

    /* Clean up objects used for this test */

    if (H5Dclose (dataset)<0) goto error;

    if (H5Sclose (sid)<0) goto error;

    if (H5Pclose (dxpl)<0) goto error;

    free (tconv_buf);

 

    return(0);

 

error:

    return -1;

}

 

/*-------------------------------------------------------------------------

 * Function:    corrupt_data    

 *

 * Purpose:     For testing the Fletcher32 checksum.  Modify the data slightly during

 *              writing so that when data is read back, the checksum should

 *              fail.

 *-------------------------------------------------------------------------

 */

static size_t

corrupt_data(unsigned int flags, H5Z_EDC_t edc, H5Z_callback_t callback_struct,

      size_t cd_nelmts, const unsigned int *cd_values, size_t nbytes,

      size_t *buf_size, void **buf)

{

    size_t   ret_value = 0;

    unsigned char *dst = (unsigned char*)(*buf);

    unsigned int   offset;

    unsigned int   length;


   

    if (cd_nelmts!=2 || !cd_values)

        return 0;

    offset = cd_values[0];

    length = cd_values[1];

    if(offset>nbytes || (offset+length)>nbytes)

        return 0;

 

    if (flags & H5Z_FLAG_REVERSE) {

        *buf_size = nbytes;

        ret_value = nbytes;

    } else {

        unsigned char *corrupt_buf;

        corrupt_buf = (unsigned char*)malloc(length);
        memset((void*)corrupt_buf, 57, length);

        /* Overwrite 'length' bytes at 'offset' so the checksum no longer matches */
        dst += offset;
        memcpy((void*)dst, (void*)corrupt_buf, length);
        free(corrupt_buf);
        *buf_size = nbytes;
        ret_value = *buf_size;

    }

 

    return ret_value;

}

 

/*-------------------------------------------------------------------------

 * Function:    filter_cb_cont

 *

 * Purpose:     Callback function to handle checksum failure.  Let it continue.

 *

 * Return:      continue        

 *

 *-------------------------------------------------------------------------

 */

static H5Z_cb_return_t
filter_cb_cont(H5Z_filter_t filter, void* UNUSED buf, size_t UNUSED buf_size,
           void* UNUSED op_data)
{
    /* Continue the I/O operation if the Fletcher32 checksum fails;
     * fail for any other filter. */
    if(H5Z_FILTER_FLETCHER32==filter)
       return H5Z_CB_CONT;
    else
       return H5Z_CB_FAIL;
}

 

/*-------------------------------------------------------------------------

 * Function:    filter_cb_fail

 *

 * Purpose:     Callback function to handle checksum failure.  Let it fail.

 *

 * Return:      fail    

 *

 *-------------------------------------------------------------------------

 */

static H5Z_cb_return_t
filter_cb_fail(H5Z_filter_t filter, void* UNUSED buf, size_t UNUSED buf_size,
           void* UNUSED op_data)
{
    /* Fail the I/O operation if the Fletcher32 checksum fails;
     * continue for any other filter. */
    if(H5Z_FILTER_FLETCHER32==filter)
       return H5Z_CB_FAIL;
    else
       return H5Z_CB_CONT;
}