Xiangchi Cao, xcao@ncsa.uiuc.edu
Robert McGrath, mcgrath@ncsa.uiuc.edu
Xinjian Lu, xlu@ncsa.uiuc.edu
1.0 Introduction and Overview
2.0 General Requirements
3.0 High-Level Architecture
    3.1 Transparent access
    3.2 Client Server model
    3.3 Client server flow
4.0 Implementation
    4.1 Class description
    4.2 Source code
    4.3 Download the trial version
5.0 References
The Java HDF Server (JHS) is a Java-based program that serves NCSA Hierarchical Data Format (HDF) files. The JHS calls the HDF native library through the Java HDF Interface (JHI), and gives HDF clients the capability to access HDF files remotely and dynamically. The JHS is both a Java servlet and a stand-alone application: it can run under any servlet-supporting server or start as its own server.
This document describes the requirements and the preliminary design of the Java HDF Server. It is required as part of the development effort and follows guidance set forth in the NCSA HDF Java (TM) Project documentation standards and guidelines. It is intended to provide a clear and understandable statement of the protocol, a high-level description of the design and an example of implementation. This is a review document. Critical comments and suggestions for improvements are anticipated and encouraged.
This document is divided into four major sections:
1.0 Introduction and Overview
2.0 Requirements
3.0 High-Level Architecture
4.0 Implementation
Section 1 is intended to provide the reader with a general understanding of the problem. It gives an overview of the current Java HDF Interface (JHI) and Java HDF Viewer (JHV), and the NCSA Scientific Data Browser (SDB). It also explains some of the terms and concepts that will be used in the rest of the document.
Section 2 gives a high-level description of the requirements that have been established for the Java HDF Server. These requirements include portability, extendibility, efficiency and maintainability.
Section 3 is a brief description of the preliminary design of the Java HDF Server. The design should be reviewed by the project review committee; redesign may be necessary based on suggestions for improvement.
Section 4 presents an implementation example of the Java HDF Server.
In previous work, we have implemented a Java-based browser for HDF files, called the Java HDF Viewer (JHV). The JHV application links to the standard HDF library through Java ``native code'' methods to read HDF files on local disk. The JHV has pioneered the implementation of a Java application that uses HDF. The JHV implements classes such as a tree to display the HDF objects in a file, and displays of metadata, annotations, data, and imagery from the HDF file. The JHV also supports subsetting and subsampling of data from the HDF file.
In the latest release, we have also written a standard set of Java objects wrapping the HDF library, called the Java HDF Interface (JHI). In these JHI objects, Java native methods wrap the HDF library functions written in C. In this way, other Java application programs can make HDF library calls and access HDF files.
The current JHV cannot access remote files. Also, the JHV is a stand-alone Java application; it cannot be used as a loadable applet in a Java-enabled browser. A currently released NCSA product, the Scientific Data Browser (SDB), also serves HDF data. The SDB visualizes and displays subsampled images contained in a file and performs some rudimentary data analysis and visualization functions, such as graphing tabular data and viewing images with a variety of palettes. The SDB is also able to browse HDF files in a web browser. However, the SDB does not have as many capabilities for handling HDF objects as the JHV, such as displaying 3D images, subsetting an image with a mouse drag, and creating histograms from images. Because the SDB is a Common Gateway Interface (CGI) program, its architecture is not object-oriented, which limits its extendibility and maintainability. To fill the gap in the current work, we need to build a standard HDF server which will allow an HDF client, such as the JHV or (eventually) a Web browser, to access HDF files both on local disk and on remote machines.
The Java HDF Server is intended to meet the following functional requirements:
The goal is to provide a single view of HDF files and objects, no matter where they reside. Java classes will access an HDF object through this view in the same way no matter where it resides. The JHS is required to support read-only access to HDF files. In the future, this will be extended to support modification and creation of HDF files and objects on local disk.
In a collaborative environment, several sessions must share data from a single file. The JHS will make it possible for all the clients of a Habanero session to access the same file, no matter where the clients and the data file reside on the network.
The JHS provides a standard model for accessing HDF files and objects. The model defines the type of requests and the information that is returned from the request. This should be made into a standalone product which others can use to access HDF in a standard way from any application.
The Server is also intended to meet the following general requirements:
The server will mainly run as either a servlet or a stand-alone Java application. However, it can also run inside other Java or non-Java servers that have appropriate interfaces.
The server must use the available compute resources (network connection and computer memory) efficiently.
The server objects should be able to be extended for different tasks. The components of the server also should be modular so that they can be reused with very little change.
The server must be designed with the expectation that it will have a long life. Hence its maintenance must be corrective (removing residual and new errors), adaptive (adjusting the application to changes in its environment) and perfective (changing to improve some qualities).
Transparent access refers to the capability that gives the JHV access to remote HDF files as though they were local. Transparent access means three things. First, a user need not know whether a file is local or remote. Access to remote files is provided by the JHV; the user does not need to make the network connection. Second, operations on remote files are performed remotely. Instead of loading the remote file onto the local machine, the JHV sends an operation request to the server; the server performs the operation on the server side and sends the reply to the client. The whole procedure is transparent to the user. Third, transparent access also means interoperability between systems. A client, such as a personal computer, accesses both local and remote files. Local files reside on a disk directly connected to the client. Remote files are physically located on a server which is connected to the network, as is the client. The client and the server may have different operating systems, and the client and server implementations should handle system differences such as different byte orders.
The JHV accesses both local and remote HDF files through the same function call, HDFObject.service(). For local files, the JHV calls getHDFObject(), which creates an HDFObject instance and directly calls its service() method. For remote files, the JHV also calls getHDFObject(), which creates an instance of HDFObject, makes a connection to the server, sends the request to the server, receives the reply from the server, and closes the connection.
For remote access, the server and client must share a common message format (HDFMessage) and an interface for handling the message (Messageable). The message and the interface are very generic and designed to serve different types of data formats. The implementation of HDFObject may differ between the server and client. However, if the client sends a request to handle a type of HDFObject which the server does not have, the server will not serve the request and an error message will be returned. In this discussion, we assume both the server and client have the same set of HDFObjects.
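The exact contents of the Messageable interface are not listed in this document; the following sketch is inferred from the methods the rest of this document calls on HDFObject (toServer, fromServer, toClient, fromClient, isMe), and the HDFMessage stand-in declared here is minimal, just enough for the sketch to compile on its own.

```java
import java.util.Hashtable;

// Minimal stand-in: the real HDFMessage extends Hashtable and carries
// an "owner" string (see the ncsa.hdf.message source later in this document).
class HDFMessage extends Hashtable {
    private String owner;
    public HDFMessage() { this(""); }
    public HDFMessage(String owner) { this.owner = owner; }
    public String getOwner() { return owner; }
}

// Sketch of the Messageable contract; method names are taken from this
// document's usage, but the actual interface may differ.
interface Messageable {
    HDFMessage toServer();                        // client side: encode a request
    void fromServer(HDFMessage message);          // client side: decode a reply
    HDFMessage toClient();                        // server side: encode a reply
    void fromClient(HDFMessage message, String docRoot); // server side: decode and serve
    boolean isMe(HDFMessage message);             // dispatch: does this message belong to me?
}
```

Because both sides program against this one interface, the same dispatch loop on the server can handle every HDFObject subclass uniformly.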
The basic object is the HDFObject, which implements the interface Messageable. The current implementation of HDF objects (subclasses of HDFObject) supports only the NCSA HDF objects: 8-bit raster images, 24-bit raster images, scientific data sets, annotations, Vdatas, Vgroups and general raster images. Implementation of other objects or data types is also possible with the same message format and interface. The HDFObject classes and their inheritance are summarized below.
HDFObject extends java.lang.Object
HDFAnnotation extends HDFObject
HDFGR extends HDFObject
HDFRIS8 extends HDFGR
HDFRIS24 extends HDFGR
HDFSDS extends HDFObject
HDFVdata extends HDFObject
HDFVgroup extends HDFObject
There are five basic components in the Java HDF server/client model. These are the HDF client, server, interface, library and data. The server and client communicate with each other through HDFMessage and they process the message through the interface Messageable. The following figure shows their relationship and connections.
Generally speaking, any HDF tool or Web browser can be an HDF client. As long as it can process the HDFMessage from the server, it can serve as an HDF client. An HDF client is an interactive tool for browsing and viewing the contents of an HDF file. It gives an overview of the HDF file as a tree, from which the HDF objects in the file can be selected. For each type of HDF object, information is displayed and data may be selected for display. To retrieve an HDF object or a subset of HDF data from the server, the client sends an HDFMessage requesting only that object or subset, not the whole HDF file. The client is responsible for sending the request to the server, processing the message received from the server, and displaying the requested HDF object or data. The client may also directly access local HDF files without a server.
The Java HDF server is a general framework for HDF services, built on a request-response paradigm and without a graphical user interface. The server can be a stand-alone Java application running as an independent server. It can also be a servlet running inside a Java-based server such as Sun's JavaWebServer [URL?] or W3C's Jigsaw [URL] server.
Applications access HDF files through the HDF library, which is written in C. The server must interact with the library through the Java HDF interface. The server is responsible for processing the message received from the client, retrieving HDF objects or data from HDF files through the Java HDF interface, and sending the reply message to the client.
The Java HDF interface is a set of Java objects with native methods which wrap the HDF library so that HDF files can be used from Java programs. More information about the Java HDF interface can be found at http://hdf.ncsa.uiuc.edu/hdf/java/hdf/design.html .
HDF stands for Hierarchical Data Format. It is a library and multi-object file format for the transfer of graphical and numerical data between machines. HDF currently supports several data structure types: Scientific data sets (multi-dimensional arrays), vdatas (binary tables), "general" raster images, text entries (annotations), 8-bit raster images, 24-bit raster images, and color palettes. The details are at http://hdf.ncsa.uiuc.edu/ .
The HDF library implements a set of data structures and functions which are used to access HDF files. The current NCSA HDF library is implemented in C. See details at http://hdf.ncsa.uiuc.edu/ .
Before going into the details of the flow of the Java HDF server-client connection, we review the steps of the request-response paradigm in the design architecture of the Java HDF Server. A complete request takes six steps, as shown in the following figure:

1. making the connection
2. sending the request to the server
3. receiving the request from the client
4. sending the reply to the client
5. receiving the reply from the server and closing the connection
6. displaying the HDFObject
The Java HDF Client-Server model implements an RPC message passing protocol, using Java objects and serialization to encapsulate the data. Each request message has an identifier (``owner'') and request-specific information (file, object, subsetting parameters, etc., as apply). All responses are instances of the class HDFObject, with subclasses for each type of information returned. The data read from an HDF file is encapsulated in an appropriate Java object and returned to the requester. Each subclass of HDFObject has specific packing and unpacking methods.
Since all the messages are encapsulated in Java objects, the data is passed using Java Serialization, which implements all the necessary marshaling and unmarshaling, and assures the data is correctly transferred.
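Since HDFMessage extends java.util.Hashtable, the marshaling step can be illustrated with a plain Hashtable. The following is a self-contained sketch under that assumption, not actual JHS code; the file name used is hypothetical.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Hashtable;

// A Hashtable-based message is serialized to a byte stream and rebuilt,
// just as writeObject/readObject do across the JHS connection.
public class SerializationSketch {
    public static void main(String[] args) throws Exception {
        Hashtable request = new Hashtable();
        request.put("hdfFilename", "hdf/example.hdf");   // hypothetical file name

        // marshal: serialize the whole message object graph into bytes
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(request);
        out.close();

        // unmarshal: an equal message is reconstructed on the other side,
        // independent of the byte order of either machine
        ObjectInputStream in = new ObjectInputStream(
            new ByteArrayInputStream(bytes.toByteArray()));
        Hashtable received = (Hashtable) in.readObject();
        in.close();

        System.out.println(received.get("hdfFilename"));   // prints hdf/example.hdf
    }
}
```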
The client first makes a connection to the server when a user takes a request action such as clicking on a file or HDF node in the HDF hierarchy tree. The connection can be a URL (String url) connection (for a Java-based Web server) or a Socket(String host, int port) connection (for a Java stand-alone application server). It is made by the function call getHDFObject() in the Java HDF client (JHV). In fact, JHV.getHDFObject() is the only place where the client makes a connection to the server.
If the selected object is a file, the client creates an instance of HDFHierarchy. Subsequently, when a node of an HDFObject in the HDFHierarchy tree is selected, an instance of the appropriate type of HDFObject is created. The HDFObject instance is used to hold information and data to/from server.
In general, the client creates an instance of the HDFObject class, selecting the appropriate subclass depending on the data to be requested. The creation of HDFObject instance is done in the following piece of code in JHV.getHDFObject().
    if (node == null)
        hdfObject = new HDFHierarchy(node, filename);
    else if (node.type == HDFObjectNode.Annotation)
        hdfObject = new HDFAnnotation(node, filename);
    else if (node.type == HDFObjectNode.RIS8)
        hdfObject = new HDFRIS8(node, filename);
    else if (node.type == HDFObjectNode.RIS24)
        hdfObject = new HDFRIS24(node, filename);
    else if ((node.type == HDFObjectNode.GRGLOBALATTR) ||
             (node.type == HDFObjectNode.GRDATASETATTR) ||
             (node.type == HDFObjectNode.GRDATASET) ||
             (node.type == HDFObjectNode.GRDATASETAN))
        hdfObject = new HDFGR(node, filename);
    else if ((node.type == HDFObjectNode.SDSGLOBALATTR) ||
             (node.type == HDFObjectNode.SDSDATASETATTR) ||
             (node.type == HDFObjectNode.SDSDATASET) ||
             (node.type == HDFObjectNode.SDSDATASETAN))
        hdfObject = new HDFSDS(node, filename);
    else if (node.type == HDFObjectNode.Vdata)
        hdfObject = new HDFVdata(node, filename);
    else    // invalid selection
        return hdfObject;
After the instance of HDFObject is created, the client makes a URL or Socket connection and opens an ObjectOutputStream for sending message to the server and an ObjectInputStream for receiving message from the server.
After the instance of HDFObject is made and the output stream is opened, the client is ready to send a message to the server. The client first constructs the HDFMessage which will be sent to the server. The message is constructed by the function call HDFObject.toServer().
First, an instance of HDFMessage is created in HDFObject.toServer(). The owner of the message is set to the class name of the HDFObject instance, so that the server knows whose message it is when it arrives from the client. (This is effectively the ``message type'' in the abstract RPC protocol.)
The HDFMessage class extends java.util.Hashtable: the content of a message is stored in the HDFMessage as (key, value) pairs. The message to the server contains two objects: the HDF node and the name of the requested file. The node, which is an instance of HDFObjectNode, contains the HDF request and all the information that the server needs to process the request. The file name is the full path of the requested HDF file. Given the file name and the node object, the server knows which HDF object in which HDF file to process.
    /**
     *  create a message to send to the server
     *
     *  @return the HDFMessage created by this object
     */
    public HDFMessage toServer()
    {
        HDFMessage message = new HDFMessage(getClass().getName());
        if (nodeObject != null)
            message.put("nodeObject", nodeObject);
        if (hdfFilename != null)
            message.put("hdfFilename", hdfFilename);
        else
            message.put("hdfFilename", new String(""));
        return message;
    }
Once the message is created, the client sends the message through the outputStream. The standard Java method ObjectOutputStream.writeObject(Object) is used to write an object to the stream. The client writes the request message to the server by writeObject(message) and waits for a reply message from the server.
When the client makes the connection to the server, the server also opens an ObjectInputStream for receiving message from the client and an ObjectOutputStream for sending message to the client. The server receives the incoming message from the client by the function call ObjectInputStream.readObject(), which reads a serialized object from the stream.
The server work is done either in java.hdf.server.HDFServer.serveRequest() for Java standalone server or in java.hdf.server.HDFServer.doPost() for Java based web server. The server first creates an instance of specific type of HDFObject based on the ``owner'' (message type) of the request message. The type of the HDFObject instance is determined by calling the method HDFObject.isMe(HDFMessage).
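The body of isMe() is not shown in this document. A plausible sketch, given that toServer() sets the message owner to the sending object's class name, is to compare that owner against the receiving object's own class name; the minimal stand-in classes below exist only so the sketch runs on its own.

```java
// Minimal stand-ins: the real HDFMessage extends Hashtable, and the real
// HDFObject carries file/node state; only what isMe() needs is shown here.
class HDFMessage {
    private final String owner;
    HDFMessage(String owner) { this.owner = owner; }
    String getOwner() { return owner; }
}

class HDFObject {
    // a message is "mine" if its owner names this exact class
    public boolean isMe(HDFMessage message) {
        return getClass().getName().equals(message.getOwner());
    }
}

class HDFHierarchy extends HDFObject {}
class HDFAnnotation extends HDFObject {}
```

Because getClass() returns the runtime class, the one inherited method serves every subclass: an HDFHierarchy instance accepts only messages whose owner is the HDFHierarchy class name.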
The HDFObject instance then decodes the message and processes the request in fromClient(). Decoding fills in the fields of the HDFObject by calls to Hashtable.get(key), retrieving the node object and the name of the requested HDF file.
    /**
     *  process a message received from the client
     *
     *  @param message  The message received from the client
     *  @param docRoot  The server document root
     */
    public void fromClient(HDFMessage message, String docRoot)
    {
        String filename = (String) message.get("hdfFilename");
        if (filename != null)
            hdfFilename = filename;
        hdfFilename = docRoot + File.separator + hdfFilename;
        hdfFilename = hdfFilename.replace('/', File.separatorChar);
        hdfFilename = hdfFilename.replace('\\', File.separatorChar);
        nodeObject = (HDFObjectNode) message.get("nodeObject");

        // collapse any doubled file separators in the path
        int index = -1;
        String doubleSeparator = File.separator + File.separator;
        while ((index = hdfFilename.indexOf(doubleSeparator)) >= 0)
            hdfFilename = hdfFilename.substring(0, index)
                        + hdfFilename.substring(index+1);

        this.hdf = new HDFLibrary();
        service();
    }
The last two lines in fromClient() create an instance of HDFLibrary and call the service() method to read the requested data from the HDF file.

The HDFLibrary is the Java class which wraps the HDF C library. The server accesses data in HDF files through the methods of the HDFLibrary class.
The HDFObject.service() method is where the client request is processed; service() is specialized by each subclass of HDFObject to read the different types of HDF objects. For example, the HDFHierarchy.service() method reads a high-level description of the contents of the HDF file (i.e., the HDF hierarchy), while the HDFAnnotation.service() method reads annotation text from the HDF file.
After the class specific service() method is called, all the fields of that HDFObject are filled with the appropriate values read from the HDF file.
The following is an example of HDFHierarchy.service() method. In this example, the only field that needs to be filled here is the Queue (called nodeQueue) which stores a tree that represents the hierarchy of the HDF file.
    /**
     *  serve the client request on the server
     */
    public void service()
    {
        HDFAnalyse analyseHdf = new HDFAnalyse();
        analyseHdf.getHdfObject(hdfFilename, nodeQueue);
        addNodeInformation(nodeQueue);
    }
After the request is processed, the server encodes the reply message in the HDFObject.toClient() method. The encoding puts the HDFObject fields into the Hashtable message. The toClient() method is specialized in each subclass of HDFObject to return the appropriate data objects for each type of request.
For example, HDFHierarchy.toClient() creates an instance of HDFMessage consisting of the nodeQueue object which contains the HDF hierarchy tree.
    /**
     *  create a message for sending to the client
     *
     *  @return The HDFMessage created by this object
     */
    public HDFMessage toClient()
    {
        HDFMessage message = new HDFMessage(getClass().getName());
        message.put("nodeQueue", nodeQueue);
        return message;
    }
Finally, the server sends the reply message to the client through the ObjectOutputStream.writeObject(Object). After the server work is done, it closes its InputStream and OutputStream to release the network resources.
On the client side, the client reads the reply message from the server by calling ObjectInputStream.readObject(), which blocks until the response is returned, and then decodes the response message. The decoding is done by the HDFObject.fromServer() method, which fills all the fields of the HDFObject with the values decoded from the message. The fromServer() method is specialized for each subclass of HDFObject.
For example, HDFHierarchy.fromServer(HDFMessage) fills the value of the nodeQueue from the message received from the server.
    /**
     *  process a message received from the server
     *
     *  @param message  the HDFMessage received from the server
     */
    public void fromServer(HDFMessage message)
    {
        nodeQueue = (Queue) message.get("nodeQueue");
    }
Finally, the client closes the connection and the JHV.getHDFObject() returns the HDFObject for display. For instance, the nodeQueue is displayed as a tree, which can be navigated by the user.
Currently, HDFObjects can be viewed from the JHV. In the future, HDFObjects should be viewable from any browser which supports JDK 1.x, by packaging the JHV viewer as an applet.
    /**
     *  get the HDFObject from the server
     *
     *  @param host      The name of the remote machine
     *  @param port      The port number of the server
     *  @param filename  The hdf file name
     *  @param node      The selected node in the hdf hierarchy tree
     *  @return          The HDFObject containing the requested data
     *  @author Peter Cao (xcao@ncsa.uiuc.edu), 10/2/97
     */
    public HDFObject getHDFObject(String host, int port, String filename,
        HDFObjectNode node)
    {
        Socket server = null;
        ObjectOutputStream output = null;
        ObjectInputStream input = null;
        HDFObject hdfObject = null;
        HDFMessage message = null;

        if (node == null)
            hdfObject = new HDFHierarchy(node, filename);
        else if (node.type == HDFObjectNode.Annotation)
            hdfObject = new HDFAnnotation(node, filename);
        else if (node.type == HDFObjectNode.RIS8)
            hdfObject = new HDFRIS8(node, filename);
        else if (node.type == HDFObjectNode.RIS24)
            hdfObject = new HDFRIS24(node, filename);
        else if ((node.type == HDFObjectNode.GRGLOBALATTR) ||
                 (node.type == HDFObjectNode.GRDATASETATTR) ||
                 (node.type == HDFObjectNode.GRDATASET) ||
                 (node.type == HDFObjectNode.GRDATASETAN))
            hdfObject = new HDFGR(node, filename);
        else if ((node.type == HDFObjectNode.SDSGLOBALATTR) ||
                 (node.type == HDFObjectNode.SDSDATASETATTR) ||
                 (node.type == HDFObjectNode.SDSDATASET) ||
                 (node.type == HDFObjectNode.SDSDATASETAN))
            hdfObject = new HDFSDS(node, filename);
        else if (node.type == HDFObjectNode.Vdata)
            hdfObject = new HDFVdata(node, filename);
        else    // invalid selection
            return hdfObject;

        // get the data from the remote machine
        try
        {
            if (port < 1)   // Java-based Web server: connect by URL
            {
                URL url = new URL(host);
                HttpURLConnection theConnection =
                    (HttpURLConnection) url.openConnection();
                theConnection.setDoOutput(true);
                theConnection.setRequestProperty("Content-Type",
                    "application/octet-stream");
                output = new ObjectOutputStream(theConnection.getOutputStream());
                output.writeObject(hdfObject.toServer());
                output.close();
                input = new ObjectInputStream(theConnection.getInputStream());
                message = (HDFMessage) input.readObject();
                input.close();
            }
            else            // stand-alone server: connect by socket
            {
                server = new Socket(host, port);
                output = new ObjectOutputStream(server.getOutputStream());
                input = new ObjectInputStream(server.getInputStream());
                output.writeObject(hdfObject.toServer());
                message = (HDFMessage) input.readObject();
                output.close();
                input.close();
                server.close();
            }

            if (message == null)
                return null;
            hdfObject.fromServer(message);
        }
        catch (Exception exception)
        {
            infoText.setText(exception.toString());
        }

        return hdfObject;
    }
    /**
     *  get the HDFObject from the local machine
     *
     *  @param filename  The hdf file name
     *  @param node      The selected node in the hdf hierarchy tree
     *  @return          The HDFObject containing the requested data
     *  @author Peter Cao (xcao@ncsa.uiuc.edu), 12/18/97
     */
    public HDFObject getHDFObject(String filename, HDFObjectNode node)
    {
        HDFObject hdfObject = null;

        if (node == null)
            hdfObject = new HDFHierarchy(node, filename);
        else if (node.type == HDFObjectNode.Annotation)
            hdfObject = new HDFAnnotation(node, filename);
        else if (node.type == HDFObjectNode.RIS8)
            hdfObject = new HDFRIS8(node, filename);
        else if (node.type == HDFObjectNode.RIS24)
            hdfObject = new HDFRIS24(node, filename);
        else if ((node.type == HDFObjectNode.GRGLOBALATTR) ||
                 (node.type == HDFObjectNode.GRDATASETATTR) ||
                 (node.type == HDFObjectNode.GRDATASET) ||
                 (node.type == HDFObjectNode.GRDATASETAN))
            hdfObject = new HDFGR(node, filename);
        else if ((node.type == HDFObjectNode.SDSGLOBALATTR) ||
                 (node.type == HDFObjectNode.SDSDATASETATTR) ||
                 (node.type == HDFObjectNode.SDSDATASET) ||
                 (node.type == HDFObjectNode.SDSDATASETAN))
            hdfObject = new HDFSDS(node, filename);
        else if (node.type == HDFObjectNode.Vdata)
            hdfObject = new HDFVdata(node, filename);
        else    // invalid selection
            return hdfObject;

        hdfObject.service();
        return hdfObject;
    }
    package ncsa.hdf.message;

    import java.util.Hashtable;

    /**
     *  HDFMessage holds information to be transferred between the server
     *  and the client.
     *
     *  @version 1.1.3 September 2 1997
     *  @author Peter X. Cao (xcao@ncsa.uiuc.edu)
     */
    public class HDFMessage extends Hashtable
    {
        /**
         *  the owner's name of the HDFMessage.
         *  the owner recognizes the HDFMessage and knows how to process it
         */
        private String owner;

        /** constructs an HDFMessage without an owner */
        public HDFMessage() { this(""); }

        /** constructs an HDFMessage with a specified owner */
        public HDFMessage(String owner) { this.owner = owner; }

        /** get the owner of this message */
        public String getOwner() { return owner; }

        /** set the owner of this message */
        public void setOwner(String newOwner) { this.owner = newOwner; }

        /** returns the String representation of this message */
        public String toString()
        {
            return getClass().getName()
                 + "\nowner = " + owner
                 + "\ncontent = " + super.toString();
        }
    }
    /**
     *  This is where the server actually does its work. It may serve more
     *  than one request per connection.
     *
     *  @param socket  the socket for receiving messages from the client
     *                 and sending messages to the client
     */
    private void serveRequest(Socket socket)
    {
        ObjectInputStream input;
        ObjectOutputStream output;
        HDFObject hdfObject;
        HDFMessage message;

        try
        {
            input = new ObjectInputStream(socket.getInputStream());
            output = new ObjectOutputStream(socket.getOutputStream());

            while ((message = (HDFMessage) input.readObject()) != null)
            {
                // dispatch: the first object whose isMe() accepts the
                // message handles it; the empty bodies are intentional
                if      ((hdfObject = new HDFFileList()).isMe(message));
                else if ((hdfObject = new HDFHierarchy()).isMe(message));
                else if ((hdfObject = new HDFAnnotation()).isMe(message));
                else if ((hdfObject = new HDFRIS8()).isMe(message));
                else if ((hdfObject = new HDFRIS24()).isMe(message));
                else if ((hdfObject = new HDFGR()).isMe(message));
                else if ((hdfObject = new HDFSDS()).isMe(message));
                else if ((hdfObject = new HDFVdata()).isMe(message));
                else if ((hdfObject = new HDFVgroup()).isMe(message));
                else
                {
                    // fall back to loading the owner class by name
                    try {
                        hdfObject = (HDFObject)
                            (Class.forName(message.getOwner())).newInstance();
                    } catch (Exception e) {}
                }

                hdfObject.fromClient(message, documentDir);
                message = hdfObject.toClient();
                output.writeObject(message);
            }
        }
        catch (Exception e)
        {
            if (HDFServer.debug) System.out.println(e);
        }
    }
    /**
     *  takes an HDF request from the received HDFMessage and sends back an
     *  HDF response in an HDFMessage through the ObjectOutputStream
     *
     *  @param req  encapsulates the request to the servlet
     *  @param res  encapsulates the response from the servlet
     */
    public void doPost(HttpServletRequest req, HttpServletResponse res)
        throws ServletException, IOException
    {
        // read the HDFMessage from the ObjectInputStream
        ObjectInputStream input = new ObjectInputStream(req.getInputStream());
        HDFMessage message = null;
        try { message = (HDFMessage) input.readObject(); }
        catch (Exception e) {}
        input.close();

        // process the HDFMessage and write the reply to the ObjectOutputStream
        HDFObject hdfObject = null;
        ObjectOutputStream output = new ObjectOutputStream(res.getOutputStream());
        res.setContentType("application/octet-stream");

        if (message != null)
        {
            if      ((hdfObject = new HDFFileList()).isMe(message));
            else if ((hdfObject = new HDFHierarchy()).isMe(message));
            else if ((hdfObject = new HDFAnnotation()).isMe(message));
            else if ((hdfObject = new HDFRIS8()).isMe(message));
            else if ((hdfObject = new HDFRIS24()).isMe(message));
            else if ((hdfObject = new HDFGR()).isMe(message));
            else if ((hdfObject = new HDFSDS()).isMe(message));
            else if ((hdfObject = new HDFVdata()).isMe(message));
            else if ((hdfObject = new HDFVgroup()).isMe(message));
            else
            {
                try {
                    hdfObject = (HDFObject)
                        (Class.forName(message.getOwner())).newInstance();
                } catch (Exception e) {}
            }

            hdfObject.fromClient(message, documentDir);
            message = hdfObject.toClient();
            output.writeObject(message);
        }
        output.close();
    }
The Java HDF server can be implemented in different ways, as long as it provides correct services and meets the design requirements. This section gives an example of a Java HDF server implementation. The example given here only supports NCSA HDF files.
This implementation includes three packages: ncsa.hdf.hdflib, ncsa.hdf.message and ncsa.hdf.server. Package ncsa.hdf.hdflib is the Java HDF Interface (JHI); we will not discuss it here. See details at http://hdf.ncsa.uiuc.edu/hdf/java/hdf/design.html .
ncsa.hdf.message contains objects for creating and processing server/client messages, such as HDFMessage, HDFObject and Messageable.
ncsa.hdf.server is a package of server objects which make the server-client connection for a new request and close the connection when the service is done.
This section gives a brief description of the Java HDF Server classes. For details, see the class documents.
For further information about our prereleased source code, send email to mcgrath@ncsa.uiuc.edu or xcao@ncsa.uiuc.edu .
For further information about our trial version, send email to mcgrath@ncsa.uiuc.edu or xcao@ncsa.uiuc.edu .