
13.2 analyze

-r               numbers of patterns which were classified right are printed
-u               numbers of patterns which were not classified are printed
-a               same as -w -r -u
-S "t c"         specific: numbers of class t patterns which are classified
                 as class c are printed (-1 = noclass)
-v               verbose output. Each printed number is preceded by one of
                 the words 'wrong', 'right', 'unknown', or 'specific',
                 depending on the result of the classification.
-s               statistic information containing wrong, right, and not
                 classified patterns. The network error is also printed.
-c               same as -s, but statistics for each output unit (class)
                 are displayed.
-m               show confusion matrix (only works with -e 402040 or -e WTA)
-i <file name>   name of the 'result file' which is going to be analyzed.
-o <file name>   name of the file which is going to be produced by analyze.
-e <function>    defines the name of the 'analyzing function'.
                 Possible names are: 402040, WTA, band (description see below)
-l <real value>  first parameter of the analyzing function.
-h <real value>  second parameter of the analyzing function.

Starting analyze without any options is equivalent to: analyze -w -e 402040 -l 0.4 -h 0.6
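As a quick illustration (the file names test.res and test.sta are hypothetical; a result file is produced e.g. by xgui or batchman), statistics for a result file could be obtained with:

unix> analyze -s -e 402040 -l 0.4 -h 0.6 -i test.res -o test.sta

According to the option descriptions above, test.sta would then contain the numbers of wrong, right, and not classified patterns together with the network error.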

13.2.1 Analyzing Functions

The classification of the patterns depends on the analyzing function. 402040 stands for the '402040' rule. That means that on a range from 0 to 1, h will be 0.6 (upper 40%) and l will be 0.4 (lower 40%). The middle 20% is represented by h - l. The classification of the patterns depends on h, l, and other constraints (see 402040 below).

WTA stands for winner takes all. That means the classification depends on the unit with the highest output and other constraints (see WTA below). Band is an analyzing function that checks a band of values around the teaching output.

402040:

A pattern is classified correctly if:

- the output of exactly one output unit is >= h,
- the 'teaching output' of this unit is the maximum teaching output (> 0) of the pattern,
- the output of all other output units is <= l.

A pattern is classified incorrectly if:

- the output of exactly one output unit is >= h,
- the 'teaching output' of this unit is NOT the maximum 'teaching output' of the pattern, or there is no 'teaching output' > 0,
- the output of all other output units is <= l.

A pattern is unclassified in all other cases. Default values are: l = 0.4, h = 0.6
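For example, with the default values l = 0.4 and h = 0.6 (hypothetical numbers for illustration): an output vector (0.8, 0.2, 0.1) with teaching output (1, 0, 0) is classified correctly, since only unit 1 reaches 0.6 and all other outputs stay at or below 0.4. An output vector (0.7, 0.5, 0.1) with the same teaching output is unclassified, because unit 2 falls into the middle band between 0.4 and 0.6.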

WTA:

A pattern is classified correctly if:

- there is an output unit with an output value greater than the output value of all other output units (call this output value a),
- a > h,
- the 'teaching output' of this unit is the maximum 'teaching output' of the pattern (> 0),
- the output of all other units is < a - l.

A pattern is classified incorrectly if:

- there is an output unit with an output value greater than the output value of all other output units (call this output value a),
- a > h,
- the 'teaching output' of this unit is NOT the maximum 'teaching output' of the pattern, or there is no 'teaching output' > 0,
- the output of all other output units is < a - l.

A pattern is unclassified in all other cases. Default values are: l = 0.0, h = 0.0
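With the defaults l = 0.0 and h = 0.0, WTA therefore reduces to picking the unit with the highest output: an output vector (0.45, 0.40, 0.15) with teaching output (1, 0, 0) is classified correctly (hypothetical numbers). Calling analyze with -l 0.2 instead would leave the same pattern unclassified, since the second-best output 0.40 is not smaller than a - l = 0.25.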

Band:

A pattern is classified correctly if for all output units:

- the output is >= the teaching output - l,
- the output is <= the teaching output + h.

A pattern is classified incorrectly if for any output unit:

- the output is < the teaching output - l, or
- the output is > the teaching output + h.

Default values are: l = 0.1, h = 0.1
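With the defaults l = 0.1 and h = 0.1 (again hypothetical numbers), an output vector (0.95, 0.08) with teaching output (1, 0) is classified correctly, since every output lies within [teaching output - 0.1, teaching output + 0.1]. An output vector (0.95, 0.25) is classified incorrectly, because 0.25 > 0 + 0.1.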

13.3 ff_bignet

The program ff_bignet can be used to automatically construct complex neural networks. The synopsis is rather lengthy, so when networks are to be constructed manually, the graphical version included in xgui is preferable. If, however, networks are to be constructed automatically, e.g. a whole series from within a shell script, this program is the method of choice.


Synopsis:

ff_bignet <plane definition>... <link definition>... [<output file>]

where:

<plane definition> : -p <x> <y> [<act> [<out> [<type>]]]
    <x>   : number of units in x-direction
    <y>   : number of units in y-direction
    <act> : optional activation function, e.g. Act_Logistic
    <out> : optional output function, <act> must be given too,
            e.g. Out_Identity
    <type>: optional layer type, <act> and <out> must be given too.
            Valid types: input, hidden, or output

<link definition> : -l <sp> ... [+] <tp> ... [+]
    Source section:
    <sp>  : source plane (1, 2, ...)
    <scx> : x position of source cluster
    <scy> : y position of source cluster
    <scw> : width of source cluster
    <sch> : height of source cluster
    <sux> : x position of a distinct source unit
    <suy> : y position of a distinct source unit
    <smx> : delta x for multiple source fields
    <smy> : delta y for multiple source fields
    Target section:
    <tp>  : target plane (1, 2, ...)
    <tcx> : x position of target cluster
    <tcy> : y position of target cluster
    <tcw> : width of target cluster
    <tch> : height of target cluster
    <tux> : x position of a distinct target unit
    <tuy> : y position of a distinct target unit
    <tmx> : delta x for multiple target fields
    <tmy> : delta y for multiple target fields

<output file> : name of the output file (default SNNS_FF_NET.net)

There may be any number of plane and link definitions. Link parameters must be given in the exact order detailed above. Unused parameters in the link definition have to be specified as 0. A series of 0s at the end of each link definition may be abbreviated by a '+' character.

Example:

ff_bignet -p 6 20 -p 1 10 -p 1 1 -l 1 1 1 6 10 + 2 1 1 1 10 + -l 2 + 3 1 1 1 1 +


defines a network with three layers: a 6x20 input layer, a 1x10 hidden layer, and a single output unit. The upper 6x10 input units are fully connected to the hidden layer, which in turn is fully connected to the output unit. The lower 6x10 input units do not have any connections.
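As a further sketch (hypothetical, and assuming that a '+' directly after the source plane number selects the whole plane, as in the second link definition of the example above), a fully connected 4-2-4 network like the ones used in section 13.5 might be generated with:

unix> ff_bignet -p 1 4 -p 1 2 -p 1 4 -l 1 + 2 1 1 1 2 + -l 2 + 3 1 1 1 4 + 4-2-4.net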

NOTE:

Even though the tool is called ff_bignet, it can construct not only feed-forward but also recurrent networks.

13.4 td_bignet

The program td_bignet can be used to automatically construct neural networks with the topology for time-delay learning. As with ff_bignet, the graphical version included in xgui is preferable if networks are to be constructed manually.

Synopsis:

td_bignet <plane definition>... <link definition>... [<output file>]

where:

<plane definition> : -p <f> <d>
    <f> : number of feature units
    <d> : total delay length

<link definition> : -l <sp> <sf> <sw> <d> <tp> <tf> <tw>
    <sp> : source plane (1, 2, ...)
    <sf> : 1st feature unit in source plane
    <sw> : field width in source plane
    <d>  : delay length in source plane
    <tp> : target plane (2, 3, ...)
    <tf> : 1st feature unit in target plane
    <tw> : field width in target plane

<output file> : name of the output file (default SNNS_TD_NET.net)

At least two plane definitions and one link definition are mandatory. There is no upper limit on the number of planes that can be specified.
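A minimal sketch of a call (hypothetical values, following the parameter order above): two planes with 16 feature units over a total delay of 5 and 8 feature units over a total delay of 3, linked by a receptive field that covers all 16 source features with a delay window of 3, might be generated with:

unix> td_bignet -p 16 5 -p 8 3 -l 1 1 16 3 2 1 8 tdnn.net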

13.5 linknets

linknets allows several independent networks to be easily linked into one combined network. In general, n so-called input networks (n ranges from 1 to 20) are linked to m so-called output networks (m ranges from 0 to 20). It is possible to add a new layer of input units to feed the former input units of the input networks. It is also possible to add a new layer of output units, which is either fed by the former output units of the output networks (if output networks are given) or by the former output units of the input networks.

Synopsis:

linknets -innets <netfile> ... [ -outnets <netfile> ... ]
         -o <output network file> [ options ]

It is possible to choose between the following options:

-inunets         use copies of input units
-inconnect <n>   fully connect with <n> input units
-direct          connect input with output one-to-one
-outconnect <n>  fully connect to <n> output units

-inunits and -inconnect may not be used together. -direct is ignored if no output networks are given.

If no input options are given (-inunits, -inconnect), the resulting network uses the same input units as the given input networks.

If -inconnect <n> is given, <n> new input units are created. These new input units are fully connected to the (former) input units of all input networks. The (former) input units of the input networks are changed to be hidden units in the resulting network. The newly created network links are initialized with weight 0.0.

To use the option -inunits, all input networks must have the same number of input units. If -inunits is given, a new layer of input units is created. The number of new input units is equal to the number of (former) input units of a given input network. The new input units are connected by a one-to-one scheme to the (former) input units, which means that every former input unit gets input activation from exactly one new input unit. The newly created network links are initialized with weight 1.0. The (former) input units of the input networks are changed to be special hidden units in the resulting network (incoming weights of special hidden units are not changed during further training). This connection scheme is useful for feeding several networks with a similar input structure with identical input patterns.

Similar to the description of -inconnect, the option -outconnect may be used to create a new set of output units: if -outconnect <n> is given, <n> new output units are created. These new output units are fully connected either to the (former) output units of all output networks (if output networks are given) or to the (former) output units of all input networks. The (former) output units are changed to be hidden units in the resulting network. The newly created network links are initialized with weight 0.0.

There exists no option -outunits (analogous to -inunits) so far, since it is not clear how new output units should be activated by a fixed weighting scheme; this depends heavily on the kind of networks used and the type of application. However, it is possible to create a similar structure by hand using the graphical user interface. When doing so, don't forget to change the unit type of the former output units to hidden.

By default, all output units of the input networks are fully connected to all input units of the output networks. In some cases it is useful not to use a full connection but a one-to-one connection scheme. This is performed by giving the option -direct. To use the option -direct, the sum of all (former) output units of the input networks must equal the sum of all (former) input units of the output networks. Following the given succession of input and output networks (and the network-dependent succession of input and output units), every (former) output unit of the input networks is connected to exactly one (former) input unit of the output networks. The newly created network links are initialized with weight 1.0. The (former) input units of the output networks are changed to be special hidden units in the resulting network (incoming weights of special hidden units are not changed during further training). The (former) output units of the input networks are changed to be hidden units. This connection scheme is useful for directly feeding the output of one (or more) network(s) into one (or more) other network(s).

Figure 13.1: A 2-1 interconnection

Figure 13.2: Sharing an input layer

13.5.1 Limitations

linknets accepts all types of SNNS networks, but it has only been tested with feed-forward type networks (multi-layered networks, RBF networks, CC networks). It will definitely not work with DLVQ, ART, recurrent type networks, and networks with DUAL units.

13.5.2 Notes on further training

The resulting networks may be trained by SNNS as usual. All neurons that receive input via a one-to-one connection are set to be special hidden units. The activation function of these neurons is also set to Act_Identity. During further training the incoming weights of these neurons are not changed.

If you want to keep all weights of the original (sub)networks, you have to set all involved neurons to type special hidden. The activation function does not have to be changed!

Due to a bug in snns2c, all special units (hidden, input, output) have to be set to their corresponding regular type. Otherwise the C function created by snns2c will fail to produce the correct output.

If networks of different types are combined (RBF, standard feed-forward, ...), it is often not possible to train the whole resulting network. Training RBF networks by Backprop will result in undefined behavior. At least for the combination of networks of different types it is necessary to fix some network links by using special neurons.

Note that the default training function of the resulting network is set to the training function of the last output network read. This may not be useful for further training of the resulting network and may have to be changed in SNNS or batchman.

13.5.3 Examples

Figure 13.3: Adding a new input layer with full connection

Figure 13.4: A one-to-one connection generated by: linknets -innets 4-2-4.net -outnets 2-1-2.net 2-1-3.net -o result.net -direct

The following examples assume that the networks 4-2-4.net, 3-2-3.net, 2-1-3.net, 2-1-2.net, ... have been created by some other program (usually using Bignet inside of xgui).

Figure 13.1 shows two input networks that are fully connected to one output network. The new link weights are set to 0.0. Affected units have become hidden units. This net was generated by: linknets -innets 4-2-4.net 4-2-4.net -outnets 3-2-3.net -o result.net

Figure 13.2 shows how two networks can share the same input patterns. The link weights of the first layers are set to 1.0. Former input units have become special hidden units. Generated by: linknets -innets 4-2-4.net 4-2-4.net -o result.net -inunits

Figure 13.3 shows how the input layers of two nets can be combined to form a single one. The link weights of the first layers are set to 0.0. Former input units have become hidden units. Generated by: linknets -innets 4-2-4.net 4-2-4.net -o result.net -inconnect 8

Figures 13.4 and 13.5 show examples of one-to-one connections. In Figure 13.5 the links have been created following the given succession of networks. The link weights are set to 1.0. Former input units of the output networks have become special hidden units. Former output units of the input networks are now hidden units. This network was generated by: linknets -innets 2-1-2.net 3-2-3.net -outnets 3-2-3.net 2-1-3.net -o result.net -direct

Figure 13.5: Two input networks one-to-one connected to two output networks

13.6 Convert2snns

In order to work with the KOHONEN tools in SNNS, a pattern file and a network file with a special format are necessary.

Convert2snns will accomplish three important things:

- Creation of a 2-dimensional Kohonen Feature Map with n components
- Conversion of weight files into an SNNS-compatible .net file
- Conversion of a file with raw patterns into a .pat file

When working with convert2snns, three files are necessary:

1. A control file, containing the configuration of the network
2. A file with weight vectors
3. A file with raw patterns


13.6.1 Setup and Structure of the Control, Weight, and Pattern Files

Each line of the control file begins with a KEYWORD followed by the respective declaration. The order of the keywords is arbitrary.

Example of a control file:

PATTERNFILE eddy.in      **
WEIGHTFILE eddy.dat
XSIZE 18                 *
YSIZE 18                 *
COMPONENTS 8             *
PATTERNS 47              **

For the creation of a network file you need at least the statements marked *, and for the .pat file additionally the statements marked **.

 

Omitting the WEIGHTFILE will initialize the weights of the network with 0.

The WEIGHTFILE is a simple ASCII file containing the weight vectors row by row. The PATTERNFILE contains in each line the components of one pattern.
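Taken together with the control file above, this implies (a consistency check, not stated explicitly here): eddy.dat should contain 18 x 18 = 324 weight vectors of 8 components each, one vector per line, and eddy.in should contain 47 lines of 8 components each.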

When convert2snns has finished the conversion, it will ask for the names of the network and pattern files to be saved.

13.7 Feedback-gennet

The program feedback-gennet generates network definition files for fully recurrent networks of any size. This is not possible using the bignet tools.

The networks have the following structure:

- input layer with no intra-layer connections
- fully recurrent hidden layer
- output layer: connections from each hidden unit to each output unit,
  AND optionally fully recurrent intra-layer connections in the output layer,
  AND optionally feedback connections from each output unit to each hidden unit

The activation function of the output units can be set to sigmoidal or linear. All weights are initialized with 0.0. Other initializations should be performed by the init functions in SNNS.

Synopsis: feedback-gennet

Example: calling

unix> feedback-gennet

produces the following dialog:

Enter # input units: 2
Enter # hidden units: 3
Enter # output units: 1
INTRA layer connections in the output layer (y/n) :n
feedback connections from output to hidden units (y/n) :n
Linear output activation function (y/n) :n
Enter name of the network file: xor-rec.net

working...

generated xor-rec.net

 

13.8 Mkhead

This program writes an SNNS pattern file header to stdout. It can be used together with mkpat and mkout to produce pattern files from raw files in a shell script.

Synopsis: mkhead <pats> <in_units> <out_units>

where:

pats       : the number of patterns in the file
in_units   : the number of input units in the file
out_units  : the number of output units in the file
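For example (hypothetical values), the header of a pattern file for 47 patterns with 8 input units and 1 output unit would be written by:

unix> mkhead 47 8 1 > raw.pat

The pattern entries themselves can then be appended, e.g. with mkpat and mkout as sketched in section 13.10.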

 

13.9 Mkout

This program writes an SNNS output pattern to stdout. It can be used together with mkpat and mkhead to produce pattern files from raw files in a shell script.

Synopsis: mkout <units> <active_unit>

where:

units        : the number of output units
active_unit  : the unit which has to be activated
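For example, 'unix> mkout 4 2' should write an output pattern for 4 output units with unit 2 activated; presumably a line of four values with a 1 in the second position and 0 elsewhere (the exact output format is not shown in this excerpt).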

 

13.10 Mkpat

The purpose of this program is to read a binary 8-bit file from stdin and write an SNNS pattern file entry to stdout. It can be used together with mkhead and mkout to produce pattern files from raw files in a shell script.
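A minimal sketch of such a shell script (entirely hypothetical: the file names are invented, and mkpat may expect further arguments that this excerpt does not show):

unix> ( mkhead 2 256 4 ; \
        mkpat < img1.raw ; mkout 4 1 ; \
        mkpat < img2.raw ; mkout 4 2 ) > train.pat

This would produce a pattern file train.pat with a header announcing 2 patterns (256 input units, 4 output units), followed by one input/output pattern pair per raw file.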
