
After installing SNNS you may want to clean up the source directories (delete all object and library files) with the command
make clean
If you are totally unhappy with your SNNS installation, you can run the command
make uninstall
If you want to compile and install, clean, or uninstall only parts of SNNS, you may also call one or more of the following commands:
make compile-kernel
make compile-tools      (implies making of kernel libraries)
make compile-xgui       (implies making of kernel libraries)
make install-tools      (implies making of kernel libraries)
make install-xgui       (implies making of kernel libraries)
make clean-kernel
make clean-tools
make clean-xgui
make uninstall-kernel
make uninstall-tools
make uninstall-xgui
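For example, to rebuild and reinstall only the batch tools after modifying the kernel sources, one might run (a minimal sketch using only the targets listed above):

make clean-tools
make compile-tools
make install-tools

Since compile-tools implies making of the kernel libraries, the kernel is brought up to date automatically.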
If you are a developer and would like to modify SNNS or parts of it for your own purposes, there are even more make targets available in the Makefiles in each of the source directories. See the source of those Makefiles for details. Developers experiencing difficulties may also find the target
make bugreport
useful. Please send those reports to the contact address given below.
Note that SNNS is ready to work together with the genetic algorithm tool ENZO. A default installation will, however, not support this. If you plan to use genetic algorithms, you must specify --enable-enzo for the configure call and then later on compile ENZO in its respective directory. See the ENZO Readme file and manual for details.
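A minimal sketch of such a configure call, run from the SNNS top-level directory:

./configure --enable-enzo
make install

ENZO itself must then be compiled separately in its own directory; see its documentation for the exact steps.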
Possible Problems during configuration and compilation of SNNS
"configure" tries to locate all of the tools which might be necessary for the development of SNNS. However, you don't need to have all of them installed on your system if you only want to install the unchanged SNNS distribution. You may ignore the following warning messages, but you should keep them in mind whenever you plan to modify SNNS:

messages concerning the parser generator 'bison'
messages concerning the scanner generator 'flex'
messages concerning 'makedepend'
If configure is unable to locate the X libraries and include files, you may give advice by using the mentioned --x-includes and --x-libraries flags. If you don't have X installed on your system at all, you may still use the batch version of SNNS, "batchman", which is included in the SNNS tools tree.
At some sites different versions of X may be installed in different directories (X11R6, X11R5, ...). The configure script always tries to determine the newest one of these installations. However, although configure tries its best, it may happen that you are linking to the newest X11 libraries but compiling with older X header files. This can happen if outdated versions of the X headers are still available in some of the default include directories known to your C compiler. If you encounter any strange X problems (like unmotivated Xlib error reports during runtime), please double-check which headers and which libraries you are actually using. To do so, set the C compiler to use the -v option (by defining CFLAGS as written above) and carefully look at the output during recompilation. If you see any conflicts at this point, also use the --x-... options described above to fix the problem.
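For illustration, a configure call that points the build at a specific X11R6 installation might look like this (the paths are site-specific examples, not fixed values):

./configure --x-includes=/usr/X11R6/include --x-libraries=/usr/X11R6/lib

Recompiling afterwards with the compiler's -v option enabled shows which include directories are actually searched, so conflicting older X headers can be spotted in the output.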
The pattern file parser of SNNS was built by the program bison. A pregenerated version of the pattern parser (kr_pat_parse.c and y.tab.h) as well as the original bison grammar (kr_pat_parse_bison.y) is included in the distribution. The generated files are newer than kr_pat_parse_bison.y when you unpack the SNNS distribution. Therefore bison is not called (and does not need to be) by default. Only if you want to change the grammar, or if you have trouble with compiling and linking kr_pat_parse.c, should you enter the kernel/sources directory and rebuild the parser. To do this, you have either to "touch" the file kr_pat_parse_bison.y or to delete either of the files kr_pat_parse.c or y.tab.h. Afterwards running
make install
in the <SNNSDIR>/kernel/sources directory will recreate the parser and reinstall the kernel libraries. If you completely messed up your pattern parser, please use the original kr_pat_parse.c/y.tab.h combination from the SNNS distribution. Don't forget to "touch" these files before running make to ensure that they remain unchanged.
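Putting these steps together, a parser rebuild might look like this (a sketch, assuming bison is in the search path; see the version note below):

cd <SNNSDIR>/kernel/sources
touch kr_pat_parse_bison.y
make install

To restore the pregenerated parser instead, touch kr_pat_parse.c and y.tab.h before running make, so that they stay newer than the grammar file.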
To rebuild the parser you should use bison version 1.22 or later. If your version of bison is older, you may have to change the definition of BISONFLAGS in Makefile.def. Also look for any warning messages while running "configure". Note that the common parser generator yacc will not work!
The equivalent bison discussion holds true for the parser used by the SNNS tool batchman in the tools directory. Here, the original grammar file is called gram1.y, while the bison-created files are named gram1.tab.c and gram1.tab.h.
The parsers in SNNS receive their input from scanners which were built by the program flex. A pregenerated version of every necessary scanner (kr_pat_scan.c in the kernel/sources directory, lex.yyy.c and lex.yyz.c in the tools/sources directory) is included in the distribution. These files are newer than the corresponding input files (kr_pat_scan.l, scan1.l, scan2.l) when the SNNS distribution is unpacked. Therefore flex is not called (and does not need to be) by default. Only if you want to change a scanner, or if you have trouble with compiling and linking, should you enter the sources directories and rebuild the scanners. To do this, you have either to touch the *.l files or to delete the files kr_pat_scan.c, lex.yyy.c, and lex.yyz.c. Running
make install
in the sources directories will then recreate and reinstall all necessary parts. If you completely messed up your pattern scanners, please use the original files from the SNNS distribution. Don't forget to "touch" these files before running make to ensure that they remain unchanged.
Note that to rebuild the scanners you must use flex. The common scanner generator lex will not work!
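The analogous sequence for a scanner rebuild might look like this (a sketch; paths relative to <SNNSDIR>):

cd tools/sources
touch scan1.l scan2.l
make install

Touching the generated files lex.yyy.c and lex.yyz.c instead keeps the distributed scanners in place, since they then remain newer than the *.l input files.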
Running SNNS
After installation, the executable for the graphical user interface can be found as program xgui in the <SNNSDIR>/xgui/sources directory. If we often work on the same machine architecture, we usually create a symbolic link named snns that points to the xgui executable, e.g.:
ln -s xgui/bin/<architecture>/xgui snns
This link should be placed in the user's home directory (with the proper path prefix to SNNS) or in a directory of binaries in the user's search path.
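A minimal sketch, assuming ~/bin is in the search path and <SNNSDIR> is the SNNS installation directory:

cd ~/bin
ln -s <SNNSDIR>/xgui/bin/<architecture>/xgui snns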
The simulator is then called simply with
snns
For further details about calling the various simulator tools see chapter 13.
2.4 Contact Points
If you would like to contact the SNNS team please write to Andreas Zell at
Prof. Dr. Andreas Zell
Eberhard-Karls-Universität Tübingen
Köstlinstr. 6
72074 Tübingen
Germany
e-mail: zell@informatik.uni-tuebingen.de
If you would like to contact other SNNS users to exchange ideas, ask for help, or distribute advice, then post to the SNNS mailing list. Note that you must be subscribed to it before being able to post.
To subscribe, send a mail to SNNS-Mail-Request@informatik.uni-tuebingen.de with the one-line message (in the mail body, not in the subject)

subscribe
You will then receive a welcome message giving you all the details about how to post.
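On most Unix systems this can be done directly from the shell, for example with the standard mail command (a sketch; any mailer that places the text in the message body will do):

echo subscribe | mail SNNS-Mail-Request@informatik.uni-tuebingen.de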
2.5 Acknowledgments
SNNS is a joint effort of a number of people, computer science students, research assistants as well as faculty members at the Institute for Parallel and Distributed High Performance Systems (IPVR) at the University of Stuttgart, the Wilhelm Schickard Institute of Computer Science at the University of Tübingen, and the European Particle Research Lab CERN in Geneva.
The project to develop an efficient and portable neural network simulator, which later became SNNS, has been led since 1989 by Prof. Dr. Andreas Zell, who designed the predecessor to the SNNS simulator and the SNNS simulator itself and acted as advisor for more than two dozen independent research and Master's thesis projects that made up the SNNS simulator and some of its applications. Over time the SNNS source grew to a total size of now 5 MB in 160,000+ lines of code. Research began under the supervision of Prof. Dr. Andreas Reuter and Prof. Dr. Paul Levi. We are all grateful for their support and for providing us with the necessary computer and network equipment. We would also like to thank Prof. Sau Lan Wu, head of the University of Wisconsin research group on high energy physics at CERN in Geneva, Switzerland, for her generous support of our work towards new SNNS releases.
The following persons were directly involved in the SNNS project. They are listed in the order in which they joined the SNNS team.
Andreas Zell: Design of the SNNS simulator, SNNS project team leader [ZMS90], [ZMSK91b], [ZMSK91c], [ZMSK91a].
Niels Mache: SNNS simulator kernel (really the heart of SNNS) [Mac90], parallel SNNS kernel on MasPar MP-1216.
Tilman Sommer: original version of the graphical user interface XGUI with integrated network editor [Som89], PostScript printing.
Ralf Hübner: SNNS simulator 3D graphical user interface [Hub92], user interface development (version 2.0 to 3.0).
Thomas Korb: SNNS network compiler and network description language Nessus [Kor89].
Michael Vogt: Radial Basis Functions [Vog92]. Together with Günter Mamier implementation of Time Delay Networks. Definition of the new pattern format and class scheme.
Günter Mamier: SNNS visualization and analyzing tools [Mam92]. Implementation of the batch execution capability. Together with Michael Vogt implementation of the new pattern handling. Compilation and continuous update of the user manual. Bug fixes and installation of external contributions. Implementation of the pattern remapping mechanism.
Michael Schmalzl: SNNS network creation tool Bignet, implementation of Cascade Correlation, and printed character recognition with SNNS [Sch91a].
Kai-Uwe Herrmann: ART models ART1, ART2, ARTMAP and modification of the BigNet tool [Her92].
Artemis Hatzigeorgiou: Video documentation about the SNNS project, learning procedure Backpercolation 1 (see footnote 1 below).
Dietmar Posselt: ANSI-C translation of SNNS.
Sven Döring: ANSI-C translation of SNNS and source code maintenance. Implementation of distributed kernel for workstation clusters.
Tobias Soyez: Jordan and Elman networks, implementation of the network analyzer [Soy93].
Tobias Schreiner: Network pruning algorithms [Sch94].
Bernward Kett: Redesign of the C-code generator snns2c.
Gianfranco Clemente: Help with the user manual.
Henri Bauknecht: Manager of the SNNS mailing list.
Jens Wieland: Design and implementation of batchman.
Jürgen Gatter: Implementation of TACOMA and some modifications of Cascade Correlation [Gat96].
We are proud of the fact that SNNS is experiencing growing support from people outside our development team. There are many people who helped us by pointing out bugs or offering bug fixes, both to us and other users. Unfortunately they are too numerous to list here, so we restrict ourselves to those who have made a major contribution to the source code.
1) Backpercolation 1 was developed by JURIK RESEARCH & CONSULTING, PO 2379, Aptos, CA 95001 USA. Any and all SALES of products (commercial, industrial, or otherwise) that utilize the Backpercolation 1 process or its derivatives require a license from JURIK RESEARCH & CONSULTING. Write for details.
Martin Riedmiller, University of Karlsruhe
Implementation of RPROP in SNNS
Martin Reczko, German Cancer Research Center (DKFZ)
Implementation of Backpropagation Through Time (BPTT),
Batch Backpropagation Through Time (BBPTT), and Quickprop Through Time (QPTT).
Mark Seemann and Marcus Ritt, University of Tübingen
Implementation of self-organizing maps.
Jamie DeCoster, Purdue University
Implementation of auto-associative memory functions.
Jochen Biedermann, University of Göttingen
Help with the implementation of pruning algorithms and non-contributing units.
Christian Wehrfritz, University of Erlangen
Original implementation of the projection tool, implementation of the statistics computation and learning algorithm Pruned Cascade Correlation.
Randolf Werner, University of Koblenz
Support for NeXT systems
Joachim Danz, University of Darmstadt
Implementation of cross validation, simulated annealing and Monte Carlo learning algorithms.
Michael Berthold, University of Karlsruhe
Implementation of enhanced RBF algorithms.
Bruno Orsier, University of Geneva
Implementation of Scaled Conjugate Gradient learning.
Till Brychcy, Technical University of Munich
Supplied the code to keep only the important parameters visible in the control panel.
Joydeep Ghosh, University of Texas, Austin
Implementation of WinSNNS, an MS-Windows front-end to SNNS batch execution on Unix workstations.
Thomas Ragg, University of Karlsruhe
Implementation of the genetic algorithm tool ENZO.
Thomas Rausch, University of Dresden
Activation function handling in batchman.
The SNNS simulator is a successor to an earlier neural network simulator called NetSim [ZKSB89], [KZ89] by A. Zell, T. Sommer, T. Korb and A. Bayer, which was itself influenced by the popular Rochester Connectionist Simulator RCS [GLML89].

In September 1991 the Stuttgart Neural Network Simulator SNNS was awarded the "Deutscher Hochschul-Software-Preis 1991" (German Federal Research Software Prize) by the German Federal Minister for Science and Education, Prof. Dr. Ortleb.
2.6 New Features of Release 4.2
Users already familiar with SNNS and its usage may be interested in the differences between versions 4.1 and 4.2. New users of SNNS may skip this section and proceed with the next chapter.
New Features of Release 4.2:
1. greatly improved installation procedure
2. pattern remapping functions introduced to SNNS
3. class information in patterns introduced to SNNS
4. change to all batch algorithms: The learning rate is now divided by the number of patterns in the set. This allows for direct comparisons of learning rates and training of large pattern files with BP-Batch, since it doesn't require ridiculous learning rates like 0.0000001 anymore.
5. Changes to Cascade-Correlation:
   (a) Several modifications can be used to achieve a net with a smaller depth or smaller fan-in.
   (b) New activation functions ACT_GAUSS and ACT_SIN.
   (c) The backpropagation algorithm of Cascade-Correlation is now present in an offline and a batch version.
   (d) The activations of the units can be cached. The result is faster learning for nets with many units. On the other hand, the required memory space rises for large training pattern sets.
   (e) Changes in the 2D display: the hidden units are displayed in layers, the candidate units are placed at the top of the net.
   (f) validation now possible
   (g) automatic deletion of candidate units at the end of training
6. new meta learning algorithm TACOMA.
7. new learning algorithm BackpropChunk. It allows chunkwise updating of the weights as well as selective training of units on the basis of pattern class names.
8. new learning algorithm RPROP with weight decay.
9. algorithm "Recurrent Cascade Correlation" deleted from repository.

10. the options for adding noise to the weights with the JogWeights function improved in multiple ways.
11. improved plotting in the graph panel as well as a printing option.
12. when the standard colormap is full, SNNS now starts with a private map instead of aborting.
13. analyze tool now features a confusion matrix.
14. pruning panel now more "SNNS-like". You no longer need to close the panel before pruning a network.
15. Changes in batchman:
    (a) batchman can now handle DLVQ training.
    (b) new batchman command "setActFunc" allows the changing of unit activation functions from within the training script. Thanks to Thomas Rausch, University of Dresden, Germany.
    (c) batchman output now with "#" prefix. This enables direct processing by a lot of Unix tools like gnuplot.
    (d) batchman now automatically converts function parameters to the correct type instead of aborting.
    (e) jogWeights can now also be called from batchman.
    (f) batchman catches some non-fatal signals (SIGINT, SIGTERM, ...) and sets the internal variable SIGNAL so that the script can react to them.
    (g) batchman features a ResetNet function (e.g. for Jordan networks).
16. new tool "linknets" introduced to combine existing networks.
17. new tools "td_bignet" and "ff_bignet" introduced for script-based generation of network files. The old tool bignet was removed.
18. displays will be refreshed more often when using the graphical editor.
19. weight and projection display with changed color scale. They now match the 2D display scale.
20. pat_sel can now handle pattern files with multi-line comments.
21. manpages now available for most of the SNNS programs.
22. the number of settings stored in an xgui configuration file was greatly enhanced.
23. Extensive debugging:
    (a) batchman now computes the MSE correctly from the number of (sub-)patterns.
    (b) RBFs now receive the correct number of parameters.
    (c) spurious segmentation faults in the graphical editor tracked and eliminated.
    (d) segmentation fault when training on huge pattern files cleared.
    (e) various seg-faults under individual operating systems tracked and cleared.
    (f) netperf now can test on networks that need multiple training parameters.
    (g) segmentation faults when displaying 3D networks cleared.
    (h) correct default values for initialization functions in batchman.
    (i) the call "TestNet()" prohibited further training in batchman. Now everything works as expected.
    (j) segmentation fault in batchman when doing multiple string concatenations cleared and memory leak in string operations closed. Thanks to Walter Prins, University of Stellenbosch, South Africa.
    (k) the output of the validation error on the shell window was giving wrong values.
    (l) algorithm SCG now respects special units and handles them correctly.
    (m) the description of the learning function parameters in section 4.4 is finally ordered alphabetically.
Chapter 3
Neural Network Terminology
Connectionism is a current focus of research in a number of disciplines, among them artificial intelligence (or, more generally, computer science), physics, psychology, linguistics, biology and medicine. Connectionism represents a special kind of information processing: Connectionist systems consist of many primitive cells (units) which work in parallel and are connected via directed links (links, connections). The main processing principle of these cells is the distribution of activation patterns across the links, similar to the basic mechanism of the human brain, where information processing is based on the transfer of activation from one group of neurons to others through synapses. This kind of processing is also known as parallel distributed processing (PDP).
The high performance of the human brain in highly complex cognitive tasks like visual and auditory pattern recognition has always been a great motivation for modeling the brain. Because of this historical motivation, connectionist models are also called neural nets. However, most current neural network architectures do not try to closely imitate their biological model, but rather can be regarded simply as a class of parallel algorithms.
In these models, knowledge is usually distributed throughout the net and is stored in the structure of the topology and the weights of the links. The networks are organized by (automated) training methods, which greatly simplify the development of specific applications. Classical logic in ordinary AI systems is replaced by vague conclusions and associative recall (exact match vs. best match). This is a big advantage in all situations where no clear set of logical rules can be given. The inherent fault tolerance of connectionist models is another advantage. Furthermore, neural nets can be made tolerant against noise in the input: with increased noise, the quality of the output usually degrades only slowly (graceful performance degradation).
3.1 Building Blocks of Neural Nets
The following paragraph describes a generic model for those neural nets that can be generated by the SNNS simulator. The basic principles and the terminology used in dealing with the graphical interface are also briefly introduced. A more general and more detailed introduction to connectionism can, e.g., be found in [RM86]. For readers fluent