
PTdefvrtregion
PERIOD_ID = HDFPT('defvrtregion',POINT_ID,REGION_ID,...
VERT_FIELD,RANGE)
Defines a vertical region for a point. VERT_FIELD is the name of
the field to subset. RANGE is a two-element vector containing the
minimum and maximum vertical values. PERIOD_ID is -1 if the
operation fails.
PTregioninfo
[BYTESIZE,STATUS] = HDFPT('regioninfo',POINT_ID,...
REGION_ID,LEVEL,FIELDLIST)
Returns the data size in bytes of the subsetted region of the
specified level. FIELDLIST is a string containing a
comma-separated list of fields to extract. STATUS and BYTESIZE are
-1 if the operation fails.
PTregionrecs
[NUMREC,RECNUMBERS,STATUS] = HDFPT('regionrecs',...
POINT_ID,REGION_ID,LEVEL)
Returns the record numbers within the subsetted region of the
specified level. STATUS and NUMREC are -1 and RECNUMBERS is [] if
the operation fails.
PTextractregion
[DATA,STATUS] = HDFPT('extractregion',POINT_ID,...
REGION_ID,LEVEL,FIELDLIST)
Reads data from the specified subset region. FIELDLIST is a string
containing a comma-separated list of requested fields. DATA is a
P-by-1 cell array where P is the number of requested fields. Each
cell of DATA contains an M(k)-by-N matrix of data where M(k) is the
order of the k-th field and N is the number of records. STATUS is
-1 and DATA is [] if the operation fails.
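As a hedged illustration of how these region routines fit together
(POINT_ID is assumed to come from an earlier attach call, REGION_ID
from a prior region-definition call such as 'defboxregion', which is
not shown in this excerpt, and the level index and field name are
invented for the example):
[bytesize,status]          = hdfpt('regioninfo',point_id,region_id,0,'Temperature');
[numrec,recnumbers,status] = hdfpt('regionrecs',point_id,region_id,0);
[data,status]              = hdfpt('extractregion',point_id,region_id,0,'Temperature');
% data is a 1-by-1 cell array here, since one field was requested.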
PTdeftimeperiod
PERIOD_ID = HDFPT('deftimeperiod',POINT_ID,...
STARTTIME,STOPTIME)
Defines a time period for a point data set. PERIOD_ID is
-1 if the operation fails.
PTperiodinfo
[BYTESIZE,STATUS] = HDFPT('periodinfo',POINT_ID,...
PERIOD_ID,LEVEL,FIELDLIST)
Retrieves the size in bytes of the subsetted period. FIELDLIST is a
string containing a comma-separated list of desired field names.
BYTESIZE and STATUS are -1 if the operation fails.
PTperiodrecs
[NUMREC,RECNUMBERS,STATUS] = HDFPT('periodrecs',...
POINT_ID,PERIOD_ID,LEVEL)
Returns the record numbers within the subsetted time period of the
specified level. NUMREC and STATUS are -1 if the operation fails.
PTextractperiod
[DATA,STATUS] = HDFPT('extractperiod',...
POINT_ID,PERIOD_ID,LEVEL,FIELDLIST)
Reads data from the specified subsetted time period. FIELDLIST is a
string containing a comma-separated list of requested fields. DATA
is a P-by-1 cell array where P is the number of requested fields.
Each cell of DATA contains an M(k)-by-N matrix of data where M(k)
is the order of the k-th field and N is the number of records.
STATUS is -1 and DATA is [] if the operation fails.
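A similar hedged sketch for the time-period routines (the point
identifier, start and stop times, level index, and field names below
are illustrative only):
starttime = 0;  stoptime = 86400;    % times in whatever units the point data uses
period_id = hdfpt('deftimeperiod',point_id,starttime,stoptime);
[bytesize,status] = hdfpt('periodinfo',point_id,period_id,0,'Longitude,Latitude');
[data,status]     = hdfpt('extractperiod',point_id,period_id,0,'Longitude,Latitude');
% data is a 2-by-1 cell array: one cell per requested field.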
Please read the file hdf4copyright.txt for more information.
See also hdf, hdfsw, hdfgd.
<hdfsw> - MATLAB Gateway to HDF-EOS swath interface.
HDFSW MATLAB interface to the HDF-EOS Swath object.
HDFSW is the MATLAB interface to the HDF-EOS Swath object. HDF-EOS is
an extension of NCSA (National Center for Supercomputing Applications)
HDF (Hierarchical Data Format). HDF-EOS is the scientific data format
standard selected by NASA as the baseline standard for EOS (Earth
Observing System).
HDFSW is a gateway to the Swath functions in the HDF-EOS C library,
which is developed and maintained by EOSDIS (Earth Observing System
Data and Information System). A swath data set consists of data
fields, geolocation fields, dimensions, and dimension maps. The data
fields contain the raw data of the file. Geolocation fields are used to tie
the swath to particular points on the Earth's surface. Dimensions
define the axes of the data and geolocation fields, and dimension maps
define the relationship between the dimensions of the data and
geolocation fields. The file may optionally have a fifth part called
an index for cases in which the geolocation information does not repeat
at regular intervals throughout the swath (the index was specifically
designed for Landsat 7 data products).
The general syntax for HDFSW is HDFSW(funcstr,param1,param2,...). There
is a one-to-one correspondence between SW functions in the HDF-EOS library
and valid values for funcstr. For example, HDFSW('detach',swathid)
corresponds to the C library call SWdetach(swath_id).
Syntax conventions
------------------
There is a one-to-one mapping between Swath functions in the HDF-EOS C
library and HDFSW syntaxes. For example, the C library contains this
function for getting the size of a specific dimension:
int32 SWdiminfo(int32 swathid, char *dimname)
The equivalent MATLAB usage is:
DIMSIZE = HDFSW('diminfo',SWATH_ID,DIMNAME)
SWATH_ID is the identifier (or handle) to a particular swath data set.
DIMNAME is a string containing the name of the specified dimension.
DIMSIZE is the size of the specified dimension, or -1 if the operation
fails.
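For instance, a minimal hedged call (the dimension name 'GeoTrack' is
illustrative, not a required name):
dimsize = hdfsw('diminfo',swath_id,'GeoTrack');
if dimsize == -1
    error('diminfo failed')
end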
Some of the C library functions accept input values that are defined in
terms of C macros. For example, the C SWopen() function requires an
access mode input that can be DFACC_READ, DFACC_RDWR, or DFACC_CREATE,
where these symbols are defined in the appropriate C header file.
Where macro definitions are used in the C library, the equivalent
MATLAB syntaxes use strings derived from the macro names. You can
either use a string containing the entire macro name, or you can omit
the common prefix. You can use either upper or lower case. For
example, this C function call:
status = SWopen("SwathFile.hdf",DFACC_CREATE)
is equivalent to these MATLAB function calls:
status = hdfsw('open','SwathFile.hdf','DFACC_CREATE')
status = hdfsw('open','SwathFile.hdf','dfacc_create')
status = hdfsw('open','SwathFile.hdf','CREATE')
status = hdfsw('open','SwathFile.hdf','create')
In cases where a C function returns a value with a macro definition,
the equivalent MATLAB function returns the value as a string containing
the lower-case short form of the macro.
HDF number types are specified by strings, including 'uchar8', 'uchar',
'char8', 'char', 'double', 'uint8', 'uint16', 'uint32', 'float',
'int8', 'int16', and 'int32'.
In cases where the HDF-EOS library accepts NULL, an empty matrix ([])
should be used.
Most routines return the flag STATUS, which is 0 when the routine
succeeds and -1 when it fails. Routines whose syntaxes do not
include STATUS return failure information in one of their other
outputs, as noted in the function descriptions below.
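A typical hedged pattern for checking STATUS looks like this:
status = hdfsw('detach',swath_id);
if status == -1
    error('hdfsw detach failed')
end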
Programming Model
-----------------
The programming model for accessing a swath data set through HDFSW is
as follows:
1. Open the file and initialize the SW interface by obtaining a file id
from a file name.
2. Open or create a swath data set by obtaining a swath id from a swath
name.
3. Perform desired operations on the data set.
4. Close the swath data set by disposing of the swath id.
5. Terminate swath access to the file by disposing of the file id.
To access a single swath data set that already exists in an HDF-EOS
file, use the following MATLAB commands:
fileid = hdfsw('open',filename,access);
swathid = hdfsw('attach',fileid,swathname);
% Optional operations on the data set...
status = hdfsw('detach',swathid);
status = hdfsw('close',fileid);
To access several files at the same time, obtain a separate file
identifier for each file to be opened. To access more than one swath
data set, obtain a separate swath identifier for each data set.
It is important to properly dispose of swath and file identifiers so
that buffered operations are written completely to disk. If you quit
MATLAB or clear all MEX-files with SW identifiers still open, MATLAB
will issue a warning and automatically dispose of them.
Note that file identifiers returned by HDFSW are not interchangeable
with file identifiers returned by any other HDF or HDF-EOS function.
Function categories
-------------------
Swath data set routines are classified into the following categories:
- Access routines initialize and terminate access to the SW interface
and swath data sets (including opening and closing files).
- Definition routines allow the user to set key features of a swath
data set.
- Basic I/O routines read and write data and metadata to a swath data
set.
- Inquiry routines return information about data contained in a swath
data set.
- Subset routines allow reading of data from a specified geographic
region.
Access Routines
---------------
SWopen
FILE_ID = HDFSW('open',FILENAME,ACCESS)
Given the FILENAME and desired access mode, opens or creates an HDF
file in order to create, read, or write a swath data set. ACCESS
can be 'read', 'rdwr', or 'create'. FILE_ID is -1 if the operation
fails.
SWcreate
SWATH_ID = HDFSW('create',FILE_ID,SWATHNAME)
Creates a swath data set within the file. SWATHNAME is a string
containing the name of the swath data set. SWATH_ID is -1 if the
operation fails.
SWattach
SWATH_ID = HDFSW('attach',FILE_ID,SWATHNAME)
Attaches to an existing swath data set within the file. SWATH_ID is
-1 if the operation fails.
SWdetach
STATUS = HDFSW('detach',SWATH_ID)
Detaches from the swath data set.
SWclose
STATUS = HDFSW('close',FILE_ID)
Closes file.
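For comparison with the read-access example in the Programming Model
section, a hedged sketch of the create-side sequence (the file and
swath names are illustrative):
fileid  = hdfsw('open','MySwathFile.hdf','create');
swathid = hdfsw('create',fileid,'ExampleSwath');
% Definition and I/O calls go here...
status  = hdfsw('detach',swathid);
status  = hdfsw('close',fileid);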
Definition Routines
-------------------
SWdefdim
STATUS = HDFSW('defdim',SWATH_ID,FIELDNAME,DIM)
Defines a new dimension within the swath. FIELDNAME is a string
specifying the name of the dimension to be defined. DIM is the
size of the new dimension. To specify an unlimited dimension, DIM
should be either 0 or Inf.
SWdefdimmap
STATUS = HDFSW('defdimmap',SWATH_ID,GEODIM, ...
DATADIM,OFFSET,INCREMENT);
Defines monotonic mapping between the geolocation and data
dimensions. GEODIM is the geolocation dimension name, and DATADIM
is the data dimension name. OFFSET and INCREMENT specify the
offset and increment of the geolocation dimension with respect to
the data dimension.
SWdefidxmap
STATUS = HDFSW('defidxmap',SWATH_ID,GEODIM, ...
DATADIM,INDEX)
Defines a nonregular mapping between the geolocation and the data
dimension. GEODIM is the geolocation dimension name, and DATADIM
is the data dimension name. INDEX is the array containing the
indices of the data dimension to which each geolocation element
corresponds.
SWdefgeofield
STATUS = HDFSW('defgeofield',SWATH_ID,FIELDNAME, ...
DIMLIST,NTYPE,MERGE)
Defines a new geolocation field within the swath. FIELDNAME is a
string containing the name of the field to be defined. DIMLIST is
a string containing a comma-separated list of geolocation
dimensions defining the field. NTYPE is a string containing the
HDF number type of the field. MERGE, the merge code, is either
'nomerge' or 'automerge'.
SWdefdatafield
STATUS = HDFSW('defdatafield',SWATH_ID,FIELDNAME, ...
DIMLIST,NTYPE,MERGE)
Defines a new data field within the swath. FIELDNAME is a string
containing the name of the field to be defined. DIMLIST is a string
containing a comma-separated list of geolocation dimensions
defining the field. NTYPE is a string containing the HDF number
type of the field. MERGE, the merge code, is either 'nomerge' or
'automerge'.
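A hedged sketch combining the definition routines above (all dimension
names, sizes, offsets, increments, and field names are invented for
the example):
status = hdfsw('defdim',swathid,'GeoTrack',200);
status = hdfsw('defdim',swathid,'GeoXtrack',100);
status = hdfsw('defdim',swathid,'DataTrack',400);
status = hdfsw('defdim',swathid,'DataXtrack',200);
% Each geolocation dimension maps onto a data dimension with offset 0, increment 2.
status = hdfsw('defdimmap',swathid,'GeoTrack','DataTrack',0,2);
status = hdfsw('defdimmap',swathid,'GeoXtrack','DataXtrack',0,2);
status = hdfsw('defgeofield',swathid,'Latitude','GeoTrack,GeoXtrack','float','nomerge');
status = hdfsw('defdatafield',swathid,'Radiance','DataTrack,DataXtrack','float','nomerge');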
SWdefcomp
STATUS = HDFSW('defcomp',SWATH_ID,COMPCODE,COMPPARM)
Sets the field compression for all subsequent field definitions.
COMPCODE, the HDF compression code, can be 'rle', 'skphuff',
'deflate', or 'none'. COMPPARM is an array of compression
parameters, if applicable. If no parameters are applicable,
COMPPARM should be [].
SWwritegeometa
STATUS = HDFSW('writegeometa',SWATH_ID,FIELDNAME, ...
DIMLIST,NTYPE)
Writes field metadata for an existing swath geolocation field named
FIELDNAME. DIMLIST is a string containing a comma-separated list
of geolocation dimensions defining the field. NTYPE is a string
containing the HDF number type of the data stored in the field.
SWwritedatameta
STATUS = HDFSW('writedatameta',SWATH_ID,FIELDNAME, ...
DIMLIST,NTYPE)
Writes field metadata for an existing swath data field named
FIELDNAME. DIMLIST is a string containing a comma-separated list
of geolocation dimensions defining the field. NTYPE is a string
containing the HDF number type of the data stored in the field.
Basic I/O Functions
-------------------
SWwritefield
STATUS = HDFSW('writefield',SWATH_ID,FIELDNAME,...
START,STRIDE,EDGE,DATA)
Writes data to a swath field. FIELDNAME is a string containing
the name of the field to write to. START is an array specifying the
starting location within each dimension (default is 0). STRIDE is
an array specifying the number of values to skip along each
dimension (default is 1). EDGE is an array specifying the number
of values to write along each dimension (default is {dim -
start}/stride). To use default values for start, stride, or edge,
pass in an empty matrix ([]).
The class of DATA must match the HDF number type defined for the
given field. A MATLAB string will be automatically converted to
match any of the HDF char types; other data types must match
exactly.
NOTE: HDF files use C-style ordering for multidimensional arrays,
while MATLAB uses FORTRAN-style ordering. This means that the size
of the MATLAB array must be flipped relative to the defined
dimension sizes of the HDF data field. For example, if the swath
field has been defined to have dimensions 3-by-4-by-5, then DATA
must have size 5-by-4-by-3. The PERMUTE command is useful for
making any necessary conversions.
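For example, a hedged sketch (assuming a data field named 'Radiance'
has already been defined with dimensions 3-by-4-by-5 and number type
'double'):
data = rand(3,4,5);              % array in the defined 3-by-4-by-5 order
data = permute(data,[3 2 1]);    % flip to 5-by-4-by-3 for the HDF library
status = hdfsw('writefield',swathid,'Radiance',[],[],[],data);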
SWreadfield
[DATA,STATUS] = HDFSW('readfield',SWATH_ID,...
FIELDNAME,START,STRIDE,EDGE)
Reads data from a swath field. FIELDNAME is a string
containing the name of the field to read from. START is an array
specifying the starting location within each dimension (default is
0). STRIDE is an array specifying the number of values to skip
along each dimension (default is 1). EDGE is an array specifying
the number of values to read along each dimension (default is {dim
- start}/stride). To use default values for start, stride, or
edge, pass in an empty matrix ([]). The data values are returned
in the array DATA.
NOTE: HDF files use C-style ordering for multidimensional arrays,
while MATLAB uses FORTRAN-style ordering. This means that the size
of the MATLAB array is flipped relative to the defined dimension
sizes of the HDF data field. For example, if the swath field has
been defined to have dimensions 3-by-4-by-5, then DATA will have
size 5-by-4-by-3. The PERMUTE command is useful for making any
necessary conversions.
DATA is [] and STATUS is -1 if the operation fails.
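A corresponding hedged read sketch (same illustrative field as above):
[data,status] = hdfsw('readfield',swathid,'Radiance',[],[],[]);
if status ~= -1
    data = permute(data,[3 2 1]);    % restore the defined dimension order
end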
SWwriteattr
STATUS = HDFSW('writeattr',SWATH_ID,ATTRNAME,DATA)
Writes/Updates attribute in a swath. ATTRNAME is a string
containing the name of the attribute. DATA is an array containing
the attribute values.
SWreadattr
[DATA,STATUS] = HDFSW('readattr',SWATH_ID,ATTRNAME)
Reads attribute from a swath. ATTRNAME is a string containing the
name of the attribute. The attribute values are returned in the
array DATA. DATA is [] and STATUS is -1 if the operation fails.
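A hedged example of an attribute round trip (the attribute name and
value are illustrative):
status = hdfsw('writeattr',swathid,'Units','W/m^2');
[units,status] = hdfsw('readattr',swathid,'Units');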
SWsetfillvalue
STATUS = HDFSW('setfillvalue',SWATH_ID,FIELDNAME,...
FILLVALUE)
Sets fill value for the specified field. FILLVALUE is a scalar
whose class must match the HDF number type of the specified field.
A MATLAB string will be automatically converted to match any of the
HDF char types; other data types must match exactly.
SWgetfillvalue
[FILLVALUE,STATUS] = HDFSW('getfillvalue',SWATH_ID,...
FIELDNAME)
Retrieves fill value for the specified field. FILLVALUE is [] and
STATUS is -1 if the operation fails.
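For example, a hedged sketch (assuming the illustrative field
'Radiance' was defined with the 'double' number type, so a double
fill value matches):
status = hdfsw('setfillvalue',swathid,'Radiance',-9999);
[fillvalue,status] = hdfsw('getfillvalue',swathid,'Radiance');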
Inquiry Functions
-----------------
SWinqdims
[NDIMS,DIMNAME,DIMS] = HDFSW('inqdims',SWATH_ID)
Retrieve information about all of the dimensions defined in swath.
NDIMS is the number of dimensions. DIMNAME is a string containing
a comma-separated list of dimension names. DIMS is an array
containing the size of each dimension. If the routine fails, NDIMS
is -1 and the other output arguments are [].
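For instance, a hedged call listing all dimensions of an attached swath:
[ndims,dimnames,dims] = hdfsw('inqdims',swathid);
if ndims ~= -1
    disp(dimnames)    % comma-separated dimension names
end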
SWinqmaps
[NMAPS,DIMMAP,OFFSET,INCREMENT] = HDFSW('inqmaps',...
SWATH_ID)
Retrieve information about all of the (non-indexed) geolocation
relations defined in swath. The scalar NMAPS is the number of
geolocation relations found. DIMMAP is a string containing a
comma-separated list of the dimension map names. The two
dimensions in each mapping are separated by a slash (/). OFFSET
and INCREMENT are arrays which contain the offset and increment of
the geolocation dimensions with respect to the data dimensions. If
the routine fails, NMAPS is -1 and the other output arguments are
[].
SWinqidxmaps
[NIDXMAPS,IDXMAP,IDXSIZES] = HDFSW('inqidxmaps',...
SWATH_ID)
Retrieve information about all of the indexed geolocation/data
mappings defined in swath. NIDXMAPS is the number of mappings.
IDXMAP is a string containing a comma-separated list of the
mappings. IDXSIZES is an array containing the sizes of the
corresponding index arrays. If the routine fails, NIDXMAPS is -1
and the other output arguments are [].
SWinqgeofields
[NFLDS,FIELDLIST,RANK,NTYPE] = HDFSW('inqgeofields',...
SWATH_ID)
Retrieve information about all of the geolocation fields defined in
swath. NFLDS is the number of geolocation fields found. FIELDLIST
is a string containing a comma-separated list of the field names.
RANK is an array containing the rank (number of dimensions) for
each field. NTYPE is a cell array of strings that denote the
number type of each field. If the routine fails, NFLDS is -1 and
the other output arguments are [].
SWinqdatafields
[NFLDS,FIELDLIST,RANK,NTYPE] = HDFSW('inqdatafields',...
SWATH_ID)
Retrieve information about all of the data fields defined in swath.
NFLDS is the number of data fields found. FIELDLIST is a
string containing a comma-separated list of the field names. RANK