
Synopsis: mkpat <xsize> <ysize>

where:

xsize    is the x size of the raw file
ysize    is the y size of the raw file
13.11 Netlearn
This is an SNNS kernel backpropagation test program. It is a demo of how to use the SNNS kernel interface to train networks.
Synopsis: netlearn

example:

unix> netlearn

produces

SNNS 3D-Kernel V4.2    --- Network learning ---

Filename of the network file: letters_untrained.net
Loading the network...
Network name:          letters
No. of units:          71
No. of input units:    35
No. of output units:   26
No. of sites:          0
No. of links:          610
Learning function:     Std_Backpropagation
Update function:       Topological_Order

Filename of the pattern file: letters.pat
loading the patterns...
Number of patterns:    26

The learning function Std_Backpropagation needs 2 input parameters:
Parameter [1]: 0.6
Parameter [2]: 0.6
Choose number of cycles: 250
Shuffle patterns (y/n) n
Shuffling of patterns disabled
learning...
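The same steps can also be performed directly from a user program through the SNNS kernel user interface (the krui functions described in the kernel interface chapter of this manual). The following listing is only a rough sketch under the assumption that the krui calls are used as documented there; the header names and the exact error handling are assumptions, while the file names and learning parameters are those from the session above.

/* Rough sketch of what netlearn does, using the SNNS kernel user interface.
 * The krui_* calls are those of the kernel interface chapter; consult that
 * chapter for the exact signatures -- this listing is only illustrative.    */
#include <stdio.h>
#include "glob_typ.h"   /* SNNS kernel types          (assumed include name) */
#include "kr_ui.h"      /* SNNS kernel user interface (assumed include name) */

int main(void)
{
    char  *net_name;
    int    pattern_set, cycle, out_params;
    float  learn_params[2] = { 0.6f, 0.6f };    /* as in the session above   */
    float *learn_error;

    if (krui_loadNet("letters_untrained.net", &net_name) != KRERR_NO_ERROR)
        return 1;
    printf("Network name: %s, units: %d\n", net_name, krui_getNoOfUnits());

    if (krui_loadNewPatterns("letters.pat", &pattern_set) != KRERR_NO_ERROR)
        return 1;

    krui_setLearnFunc("Std_Backpropagation");
    krui_setUpdateFunc("Topological_Order");

    for (cycle = 0; cycle < 250; cycle++) {     /* 250 cycles, no shuffling  */
        krui_learnAllPatterns(learn_params, 2, &learn_error, &out_params);
        printf("cycle %d: error %f\n", cycle, learn_error[0]);
    }

    krui_saveNet("letters_trained.net", net_name);
    return 0;
}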
13.12 Netperf
This is a benchmark program for SNNS. Propagation and backpropagation tests are performed.
Synopsis: netperf

example:

unix> netperf

produces

SNNS 3D-Kernel V4.2    --- Benchmark Test ---

Filename of the network file: nettalk.net
loading the network...
Network name:          nettalk1
No. of units:          349
No. of input units:    203
No. of output units:   26
No. of sites:          0
No. of links:          27480
Learning function:     Std_Backpropagation
Update function:       Topological_Order

Do you want to benchmark
  Propagation     [1] or
  Backpropagation [2]
Input: 1
Choose no. of cycles: 100

Begin propagation...

No. of units updated:  34900
No. of sites updated:  0
No. of links updated:  2748000
CPU Time used:         3.05 seconds
No. of connections per second (CPS): 9.0099e+05
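The CPS figure is simply the number of link (connection) updates divided by the CPU time. For the run above, 27480 links were propagated over 100 cycles, i.e. 27480 * 100 = 2748000 link updates, and 2748000 / 3.05 s ≈ 9.01 * 10^5 CPS, which is the value reported by netperf.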

13.14 Snns2c
This program is also an example of how to use the SNNS kernel interface for loading a net and changing the loaded net into another format. All data and all SNNS functions (except the activation functions) are placed in a single C function.
Note:
Snns2c does not support sites. Networks created with SNNS that make use of the site feature cannot be converted to C source by this tool. Output functions are not supported either.
The program can translate the following network types:

- Feedforward networks trained with Backpropagation and all of its variants, such as Quickprop, RPROP, etc.
- Radial Basis Functions
- Partially recurrent Elman and Jordan networks
- Time Delay Neural Networks (TDNN)
- Dynamic Learning Vector Quantisation (DLVQ)
- Backpropagation Through Time (BPTT, QPTT, BBPTT)
- Counterpropagation networks
While the use of SNNS or any parts of it in commercial applications requires a special agreement/licensing from the developers, the use of trained networks generated with snns2c is hereby granted without any fees for any purpose, provided proper academic credit to the SNNS team is given in the documentation of the application.
13.14.1 Program Flow
Because the compilation of very large nets may require some time, the program outputs messages indicating which stage of compilation is currently being processed.

loading net... The network file is loaded with the function offered by the kernel user interface.

dividing net into layers... All units are grouped into layers in which all units have the same type and the same activation function. There must not be any dependencies between the units of a layer, except for the connections of SPECIAL HIDDEN units to themselves in Elman and Jordan networks, or the links of BPTT networks.

sorting layers... These layers are sorted in topological order, i.e. first the input layer, then the hidden layers, followed by the output layers and, last, the special hidden layers. A layer which has sources in another layer of the same type is updated later than the source layer.

writing net... Selects the needed activation functions and writes them to the C source file. After that, the procedure for pattern propagation is written.

13.14.3 Special Network Architectures
Normally, the architecture of the network and the numbering of the units are kept. Therefore a dummy unit with the number 0 is inserted into the array which contains the units. Some architectures are translated with additional special features.
TDNN: Generally, a layer in a time delay neural network consists of feature units and their delay units. snns2c generates code containing only the feature units; the delay units are represented merely as additional activations in the feature unit. This is possible because every delay unit has the same link weights to its corresponding source units as its feature unit.

So the input layer, too, consists only of its prototype units. Therefore it is not possible to present the whole input pattern to the network at once. This is not necessary, because the pattern can be presented step by step to the inputs, which is useful for real-time applications where the newest feature values serve as inputs. To mark a new sequence, the init flag (a parameter of the generated function) can be set to 1. After this, the delays are filled while the init flag is set to 0 again. To avoid meaningless outputs, the function returns NOT_VALID until the delays are filled again (see the sketch below).

For TDNNs there is an additional variable in the record of the header file. It is called "MinDelay" and is the minimum number of time steps needed to get a valid output after the init flag was set.
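The calling pattern this implies might look as follows. This is only an illustrative sketch: the entry point name (net), the generated header name (net.h), the unit counts and the exact signature are assumptions here; the real conventions are described in the interface description of snns2c.

/* Illustrative sketch only: driving a TDNN generated by snns2c frame by
 * frame.  NOT_VALID is the return value mentioned above; it is assumed to
 * be defined in the generated header.                                       */
#include <stdio.h>
#include "net.h"   /* hypothetical header generated by snns2c; assumed to
                      declare:  int net(float *in, float *out, int init);
                      and to define NOT_VALID                                */

#define N_IN   35  /* hypothetical number of feature (input) units           */
#define N_OUT   5  /* hypothetical number of output units                    */

void run_sequence(float frames[][N_IN], int n_frames)
{
    float out[N_OUT];
    int t, k;

    /* First frame of a new sequence: init flag = 1 resets the delay lines.  */
    net(frames[0], out, 1);

    /* Remaining frames: init flag = 0.  The output only becomes valid after */
    /* MinDelay time steps (see the MinDelay field of the generated header). */
    for (t = 1; t < n_frames; t++) {
        if (net(frames[t], out, 0) != NOT_VALID) {
            for (k = 0; k < N_OUT; k++)
                printf("%g ", out[k]);
            printf("\n");
        }
        /* else: delay lines not yet filled, ignore the meaningless output   */
    }
}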
CPN: Counterpropagation does not need the output layer. The output is calculated as a weighted sum of the activations of the hidden units. Since only one hidden unit has activation 1 and all others have activation 0, the output can be calculated from the winner unit alone, using the weights from this unit to the output units.
DLVQ: Here no output units are needed either. The output is calculated as the bias of the winner unit.
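To make this winner-takes-all shortcut concrete, here is a small illustrative sketch; every identifier in it (winner_output, hidden_act, w_out, bias, ...) is a hypothetical placeholder and not a name that snns2c actually emits.

/* Illustrative sketch only: output of a CPN or DLVQ net computed from the
 * winning hidden unit instead of an explicit output layer.                  */
static void winner_output(const float *hidden_act, int n_hidden,
                          const float *w_out,   /* n_hidden x n_out weights  */
                          const float *bias,    /* bias of each hidden unit  */
                          int n_out, float *out, float *dlvq_out)
{
    int winner = 0, j, k;

    for (j = 1; j < n_hidden; j++)              /* find the winning hidden unit */
        if (hidden_act[j] > hidden_act[winner])
            winner = j;

    /* CPN: the outputs are the winner's outgoing weights, since that unit   */
    /* has activation 1 and every other hidden unit has activation 0.        */
    for (k = 0; k < n_out; k++)
        out[k] = w_out[winner * n_out + k];

    /* DLVQ: the output is simply the bias of the winning unit.              */
    *dlvq_out = bias[winner];
}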
BPTT: If all inputs are set to zero, the net is not initialized. This feature can be chosen by setting the init flag.
13.14.4 Activation Functions

Supported Activation Functions: The following activation functions are implemented in snns2c:
Act_Logistic              Act_StepFunc         Act_Elliott
Act_Identity              Act_BSB              Act_IdentityPlusBias
Act_TanH                  Act_RBF_Gaussian     Act_TanHPlusBias
Act_RBF_MultiQuadratic    Act_TanH_Xdiv2       Act_RBF_ThinPlateSpline
Act_Perceptron            Act_TD_Logistic      Act_Signum
Act_TD_Elliott            Act_Signum0


net is not a CounterPropagation network: Counterpropagation networks need a special architecture: one input, one output and one hidden layer, which are fully connected.

net is not a Time Delay Neural Network: The SNNS TDNNs have a very special architecture. They should be generated with the bignet tool. In other cases there is no guarantee of successful compilation.

not supported network type: There are several network types which cannot be compiled with snns2c. The snns2c tool will continue to be maintained, so new network types will be implemented.

unspecified Error: Should not occur; it is only a feature for user-defined updates.
13.15 isnns
isnns is a small program based on the SNNS kernel which allows stream-oriented network training. It is intended to train a network with patterns that are generated on the fly by some other process. isnns does not support the whole SNNS functionality; it only offers some basic operations.

The idea of isnns is to provide a simple mechanism that makes it possible to use an already trained network within another application, with the possibility of retraining this network during usage. This cannot be done with networks created by snns2c. To use isnns effectively, another application should fork an isnns process and communicate with it over the standard input and standard output channels. Please refer to the common literature about UNIX processes and how to use the fork() and exec() system calls (don't forget to flush() the stdout channel after sending data to isnns, otherwise it would hang). We cannot give any more advice within this manual; a rough sketch of the scheme is shown below.
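The following is only a minimal sketch of that driver pattern. It illustrates the standard POSIX pipe/fork/exec plumbing; the commands that are sent and the file names are taken from the example session later in this section, while the way the prompt and the answers are parsed is an assumption made for the illustration.

/* Minimal sketch: forking an isnns process and driving it through pipes.
 * Only the POSIX plumbing is shown; error handling is kept minimal and the
 * exact prompt/answer layout of isnns is assumed from the example session.  */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    int to_isnns[2], from_isnns[2];
    FILE *cmd, *ans;
    int n_in, n_out;
    float act;

    if (pipe(to_isnns) < 0 || pipe(from_isnns) < 0) { perror("pipe"); return 1; }

    if (fork() == 0) {                        /* child: becomes isnns          */
        dup2(to_isnns[0], STDIN_FILENO);      /* commands arrive on stdin      */
        dup2(from_isnns[1], STDOUT_FILENO);   /* answers go back on stdout     */
        close(to_isnns[1]); close(from_isnns[0]);
        execlp("isnns", "isnns", "test.pat", (char *) NULL);
        perror("exec isnns"); _exit(1);
    }

    close(to_isnns[0]); close(from_isnns[1]);
    cmd = fdopen(to_isnns[1], "w");
    ans = fdopen(from_isnns[0], "r");

    fscanf(ans, " ok>");                      /* initial prompt                */

    fprintf(cmd, "load examples/xor.net\n");  /* load a trained network ...    */
    fflush(cmd);                              /* ... and flush, or isnns hangs */
    fscanf(ans, "%d %d", &n_in, &n_out);      /* answer: "<inputs> <outputs>"  */
    printf("net has %d inputs, %d outputs\n", n_in, n_out);

    fscanf(ans, " ok>");                      /* wait for the next prompt      */
    fprintf(cmd, "prop 0 1\n");               /* propagate the pattern (0 1)   */
    fflush(cmd);
    fscanf(ans, "%f", &act);                  /* activation of the output unit */
    printf("output activation: %f\n", act);

    fscanf(ans, " ok>");
    fprintf(cmd, "quit\n");
    fflush(cmd);
    wait(NULL);                               /* wait for isnns to terminate   */
    return 0;
}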
Synopsis of the isnns call:
isnns [ <output pattern file> ]
After starting isnns, the program prints its prompt "ok>" to standard output. This prompt is printed again whenever an isnns command has been parsed and performed completely. If there are any input errors (unrecognized commands), the prompt changes to "notok>" but will change back to "ok>" after the next correct command. If any kernel error occurs (loading non-existent or illegal networks, etc.), isnns exits immediately with an exit value of 1.
13.15.1 Commands
The set of commands is restricted to the following list:
load <net file name>
This command loads the given network into the SNNS kernel. After loading the network, the number of input units n and the number of output units m is printed

13.15.2 Example
Here is an example session of an isnns run. First the xor network from the examples directory is loaded. This network has 2 input units and 1 output unit. Then the patterns (0 0), (0 1), (1 0), and (1 1) are propagated through the network. For each pattern the activations of all output units (here only one) are printed. The pattern (0 1) seems not to be trained very well (output: 0.880135). Therefore one learning step is performed with a learning rate of 0.3, an input pattern (0 1), and a teaching output of 1. The next propagation of the pattern (0 1) gives a slightly better result of 0.881693. The pattern (which is still stored in the input activations) is trained again, this time using the train command. A last propagation shows the final result before quitting isnns. (The comments starting with the #-character have been added only in this documentation and are not printed by isnns.)
unix> isnns test.pat
ok> load examples/xor.net
2 1            # 2 input and 1 output units
ok> prop 0 0
0.112542       # output activation
ok> prop 0 1
0.880135       # output activation
ok> prop 1 0
0.91424        # output activation
ok> prop 1 1
0.103772       # output activation
ok> learn 0.3 0 1 1
0.0143675      # summed squared output error
ok> prop 0 1
0.881693       # output activation
ok> train 0.3 1
0.0139966      # summed squared output error
ok> prop 0 1
0.883204       # output activation
ok> quit
ok>
Since the command line defines an output pattern file, after quitting isnns this file contains a log of all patterns which have been trained. Note that for recurrent networks the input activation of the second training pattern might have been different from the values given by the prop command. Since the pattern file is generated while isnns is working, the number of patterns is not known at the beginning of execution. It must be set by the user afterwards.
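For orientation, the value that has to be corrected is the pattern count in the file header. Assuming isnns writes the standard SNNS pattern file header (this layout is an assumption, not taken from this section), the beginning of test.pat would look roughly like this, and the marked line has to be edited by hand:

SNNS pattern definition file V3.2
generated at ...

No. of patterns : 2        <- fill in the real number of logged patterns
No. of input units : 2
No. of output units : 1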