Update Modes


To compute the new activation values of the units, the SNNS simulator, running on a sequential workstation processor, has to visit all of them in some sequential order. This order is defined by the so-called update mode. Five update modes for general use are implemented in SNNS. The first is synchronous; all others are asynchronous, i.e. in these modes units see the new outputs of their predecessors if those have fired before them.

  1. synchronous: All units change their activations together after each step. To achieve this, the kernel first computes the new activations of all units from their activation functions in some arbitrary order. Only after all units have been assigned their new activation values are the new outputs of the units computed. An outside observer gets the impression that all units have fired simultaneously (in sync).

  2. random permutation: The units compute their new activation and output values sequentially. The order is chosen at random, but each unit is selected exactly once in every step.

  3. random: The order is defined by a random number generator, so it is not guaranteed that every unit is visited exactly once per update step: some units may be updated several times, others not at all.

  4. serial: The order is defined by ascending internal unit number. If the units were created with ascending numbers from input to output, this is the fastest mode. Note that serial mode is not advisable if the units of a network are not numbered in ascending order.

  5. topological: The kernel sorts the units by their topology. This order corresponds to the natural propagation of activity from input to output. In pure feed-forward nets, the input activation reaches the output especially quickly in this mode, because each unit already has its final output when its successors are updated.
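The difference between the synchronous and the asynchronous modes can be sketched as follows. This is a minimal illustration, not SNNS code: the three-unit chain, the identity activation function, and the names `synchronous_step` and `serial_step` are all assumptions made for the example.

```python
import random

# Tiny chain of units: "in" -> "hidden" -> "out", identity activation,
# all weights 1.0. PRED maps each non-input unit to its predecessors.
PRED = {"hidden": ["in"], "out": ["hidden"]}

def net_input(unit, outputs):
    # Weighted sum of predecessor outputs (all weights are 1.0 here).
    return sum(outputs[p] for p in PRED.get(unit, []))

def synchronous_step(outputs):
    """Synchronous mode: all new activations are computed from the OLD
    outputs first; only then are all outputs replaced at once."""
    new_act = {u: net_input(u, outputs) for u in PRED}
    result = dict(outputs)
    result.update(new_act)
    return result

def serial_step(outputs, order):
    """Asynchronous modes: units are updated one after another in the
    given order, so later units see the NEW outputs of earlier ones."""
    result = dict(outputs)
    for u in order:
        result[u] = net_input(u, result)
    return result

start = {"in": 1.0, "hidden": 0.0, "out": 0.0}

# Synchronous: the input needs two steps to reach the output unit.
after_one = synchronous_step(start)
after_two = synchronous_step(after_one)
print(after_one["out"], after_two["out"])   # 0.0 1.0

# Topological (asynchronous) order: a single step suffices.
print(serial_step(start, ["hidden", "out"])["out"])   # 1.0

# Random permutation: each unit fires once, but the order is shuffled,
# so the activation may or may not reach the output in one step.
order = ["hidden", "out"]
random.shuffle(order)
print(serial_step(start, order)["out"])
```

In the topological order every unit fires after all of its predecessors, which is why the activation crosses the whole feed-forward chain in a single update step.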

Additionally, 12 more update modes for special network topologies are implemented in SNNS.

  1. CPN: For learning with counterpropagation.

  2. Time Delay: This mode takes into account the special connections of time delay networks. Connections have to be updated in the order in which they become valid in the course of time.

  3. ART1_Stable, ART2_Stable and ARTMAP_Stable: Three update modes for the three adaptive resonance theory network models. They propagate a pattern through the network until a stable state has been reached.

  4. ART1_Synchronous, ART2_Synchronous and ARTMAP_Synchronous: Three other update modes for the three adaptive resonance theory network models. They perform just one propagation step with each call.

  5. CC and RCC: Special update modes for the cascade correlation and recurrent cascade correlation meta algorithms.

  6. BPTT: For recurrent networks, trained with `backpropagation through time'.

  7. RM_Synchronous: Special update mode for auto-associative memory networks.

Note that all update modes apply only to the forward propagation phase; the backward phase of learning procedures like backpropagation is not affected at all.



Niels Mache
Wed May 17 11:23:58 MET DST 1995