
Design Factors

The goal in designing the simulator kernel was to meet the following specifications:

  The size of the networks that can be handled by the kernel is limited only by the size of (virtual) memory and the address space of the host computer.

  The simulator kernel performs its own memory management instead of leaving it to the operating system (UNIX), which pays off especially with larger nets.

  In interactive mode, the user has powerful commands at hand to create and manipulate networks. These interface functions reduce the complex internal representation of the data to a representation at the logical network level (see the sketch below).
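As an illustration of working at the logical network level, the following fragment builds a tiny net through kernel interface calls. It is only a sketch: the krui_* names follow the naming scheme of the SNNS kernel interface, but the exact signatures shown here are assumptions, not a verbatim copy of the interface definition.

    /* Declarations as they might appear in the kernel interface header;
       signatures are illustrative. */
    extern int krui_createDefaultUnit(void);
    extern int krui_setCurrentUnit(int unit_no);
    extern int krui_createLink(int source_unit_no, float weight);

    /* Build a tiny net with two input units and one output unit at the
       logical network level, without touching internal structures. */
    void build_tiny_net(void)
    {
        int in1 = krui_createDefaultUnit();   /* returns the new unit's number */
        int in2 = krui_createDefaultUnit();
        int out = krui_createDefaultUnit();

        krui_setCurrentUnit(out);             /* links are attached here */
        krui_createLink(in1, 0.5f);           /* predecessor unit, initial weight */
        krui_createLink(in2, -0.5f);
    }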

Naturally, the demands of encapsulation and efficiency contradict each other. Nevertheless, a good compromise has been found: the functions that can be defined by the user (activation and site functions) may use a macro library to access the kernel structures. This principle makes the combination of tight encapsulation and high execution speed possible.
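To illustrate this principle, here is a minimal sketch of a user-defined activation function. The macro and struct names (Unit, Link, UNIT_BIAS, FIRST_LINK, NEXT_LINK, LINK_WEIGHT, LINK_SOURCE_ACT) are hypothetical stand-ins for the actual SNNS macro library.

    #include <math.h>

    /* Sketch of a user-defined logistic activation function.  The macros
       expand to direct accesses of the internal kernel arrays, so the
       encapsulation costs no function-call overhead. */
    static float act_logistic(struct Unit *unit_ptr)
    {
        struct Link *link;
        float net = UNIT_BIAS(unit_ptr);

        /* Sum the weighted activations of all predecessor units. */
        for (link = FIRST_LINK(unit_ptr); link != NULL; link = NEXT_LINK(link))
            net += LINK_WEIGHT(link) * LINK_SOURCE_ACT(link);

        return (float) (1.0 / (1.0 + exp((double) -net)));
    }

Because the macros are resolved at compile time, such a user function runs as fast as a built-in one, while the kernel's internal data structures stay hidden from the user's source code.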

Measurements on several computer systems were carried out with the following two performance measures:

    The propagation rate measures the speed of the simulator in recall mode, i.e. the speed of forward propagation. In this mode no weight updates take place. The usual unit of measurement is connections per second (CPS).

  The weight update rate measures the training speed on a fully connected feedforward network trained with "vanilla" (online) backpropagation. Because every pattern in each cycle requires a forward propagation, a backward propagation, and a weight update phase, the weight update rates, measured in connection updates per second (CUPS), are usually lower than the propagation rates by a factor of 2 to 3.
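As a rough sketch of how such rates can be measured, the following fragment times a number of recall and training cycles and converts the result into CPS and CUPS. propagate_net() and train_net_one_cycle() are hypothetical placeholders for the corresponding kernel calls.

    #include <stdio.h>
    #include <time.h>

    /* Hypothetical placeholders for the kernel's recall and training calls. */
    extern void propagate_net(void);        /* forward pass over all patterns     */
    extern void train_net_one_cycle(void);  /* forward + backward + weight update */

    void benchmark(long n_connections, long n_patterns)
    {
        long cycles = 100, i;
        clock_t t0;
        double secs;

        /* Recall mode: forward propagation only, no weight updates. */
        t0 = clock();
        for (i = 0; i < cycles; i++)
            propagate_net();
        secs = (double) (clock() - t0) / CLOCKS_PER_SEC;
        printf("%.0f CPS\n", n_connections * n_patterns * cycles / secs);

        /* Training mode: three phases per pattern, so CUPS is typically
           lower than CPS by a factor of 2 to 3. */
        t0 = clock();
        for (i = 0; i < cycles; i++)
            train_net_one_cycle();
        secs = (double) (clock() - t0) / CLOCKS_PER_SEC;
        printf("%.0f CUPS\n", n_connections * n_patterns * cycles / secs);
    }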

These performance figures were obtained on machines in our lab during normal use, with other users on the machines, with different main memory sizes, and with the SNNS home directory mounted via NFS over Ethernet. They should therefore be regarded only as performance indicators for SNNS and not be quoted as machine architecture benchmarks.

Because of the inherent parallelism of the algorithms, simulating neural nets almost demands parallel computers like the Connection Machine CM-2 [Hil85,HS86a,HS86b] or the MasPar MP-1. Running the simulator on such machines can therefore be expected to increase the rates reported above.


