A
In summary, the activation of a unit is the result of applying a squashing function (the activation function) to the unit's total net input.
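For concreteness, here is a minimal Python sketch, assuming the logistic (sigmoid) function as the squashing function; the particular inputs and weights are made up for illustration:

    import math

    def sigmoid(net):
        """Logistic squashing function: maps any real net input into (0, 1)."""
        return 1.0 / (1.0 + math.exp(-net))

    # Total net input: weighted sum of the unit's inputs (plus any bias term).
    inputs  = [0.5, -1.0, 0.25]
    weights = [0.8,  0.2, -0.5]
    net_input = sum(w * x for w, x in zip(weights, inputs))

    activation = sigmoid(net_input)   # the unit's activation (its output)
    print(net_input, activation)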
In the asynchronous case, if the yellow node fires first, then it uses the then-current value of its input from the red node to determine its output in time step 2, and the red node, if it fires next, uses the updated output of the yellow node to compute its new output in time step 3. In summary, the output values of the red and yellow nodes in time step 3 depend on the outputs of the yellow and red nodes in time steps 2 and 1, respectively.
In the synchronous case, each node obtains the current output of the other node at the same time, and uses the value obtained to compute its new output (in time step 2). In summary, the output values of the red and yellow nodes in time step 2 depend on the outputs of the yellow and red nodes in time step 1. This can produce a different result from the asynchronous method.
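The difference can be seen in a small Python sketch of the two-node example; the weights, starting outputs and squashing function below are illustrative assumptions, not values taken from the diagram:

    import math

    def squash(x):
        return 1.0 / (1.0 + math.exp(-x))

    # Two-node example: each node's only input is the other node's output.
    w_red_to_yellow = 1.5    # weight on the red -> yellow connection (assumed)
    w_yellow_to_red = -2.0   # weight on the yellow -> red connection (assumed)
    red, yellow = 0.2, 0.9   # outputs at time step 1 (assumed)

    # Asynchronous: yellow fires first using red's current output (time step 2),
    # then red fires using yellow's *updated* output (time step 3).
    yellow_async = squash(w_red_to_yellow * red)
    red_async    = squash(w_yellow_to_red * yellow_async)

    # Synchronous: both nodes read each other's time-step-1 outputs and update
    # simultaneously (time step 2).
    yellow_sync = squash(w_red_to_yellow * red)
    red_sync    = squash(w_yellow_to_red * yellow)

    print("asynchronous:", red_async, yellow_async)
    print("synchronous: ", red_sync, yellow_sync)   # the red node's output differs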
Some neural network algorithms are firmly tied to synchronous updates, and some can be operated in either mode. Biological neurons normally fire asynchronously.
Attributes are sometimes also called features.
B
See also expected error estimate.
The backward pass starts at the output layer of the feedforward network, updating the incoming weights of the units in that layer using the delta rule. It then works backward through the hidden layers, starting with the penultimate layer (the last hidden layer), updating the incoming weights of the units in each of those layers.
Values collected during the forward pass, notably the activations of the units, are reused during the backward pass in updating the weights.
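As a rough Python sketch of both points, here is one forward and backward pass for a tiny one-hidden-layer network; the layer sizes, learning rate, sigmoid units and training pattern are assumptions made purely for illustration:

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    rng = np.random.default_rng(0)

    # Tiny 2-3-1 network; sizes, learning rate and training pattern are made up.
    W_hidden = rng.normal(size=(3, 2))   # incoming weights of the hidden units
    W_output = rng.normal(size=(1, 3))   # incoming weights of the output unit
    eta = 0.5                            # learning rate
    x = np.array([0.0, 1.0])
    target = np.array([1.0])

    # Forward pass: compute and keep each layer's activations;
    # these are the values reused by the backward pass.
    h = sigmoid(W_hidden @ x)            # hidden-layer activations
    y = sigmoid(W_output @ h)            # output-layer activation

    # Backward pass: delta rule at the output layer first, then the error is
    # propagated back to the penultimate (hidden) layer.
    delta_out = (target - y) * y * (1 - y)
    delta_hidden = (W_output.T @ delta_out) * h * (1 - h)

    # Weight updates use the activations saved during the forward pass.
    W_output += eta * np.outer(delta_out, h)
    W_hidden += eta * np.outer(delta_hidden, x)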
This has the effect of giving each hidden or output unit a trainable threshold, equal to the negative of the weight from the bias unit to that unit.
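A small Python sketch of this equivalence, using a simple step (threshold) unit with made-up inputs, weights and threshold:

    def step(net, threshold=0.0):
        """Simple threshold unit: outputs 1 if the net input reaches the threshold."""
        return 1.0 if net >= threshold else 0.0

    inputs  = [0.3, 0.9]
    weights = [0.6, -0.4]

    # A unit with an explicit threshold of 0.25...
    out_with_threshold = step(sum(w * x for w, x in zip(weights, inputs)), threshold=0.25)

    # ...behaves identically to a zero-threshold unit that also receives input
    # from a bias unit whose output is always 1, via a (trainable) weight of -0.25.
    bias_weight = -0.25
    net = sum(w * x for w, x in zip(weights, inputs)) + bias_weight * 1.0
    out_with_bias = step(net)

    print(out_with_threshold, out_with_bias)   # the two outputs are the same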
This is a greatly simplified diagram of a biological neuron. Biological neurons come in a variety of types, and there is a great deal of further structure and physiology that could be considered. The features shown above are those of most interest to people constructing artificial neural networks (other than spiking-neuron-based models, and those relying on synchronous activation, such as the Shastri and Ajjanagadde model: see L. Shastri and V. Ajjanagadde, Behavioral and Brain Sciences (1993) 16, 417-494).
However, from the artificial neural network point of view, a biological neuron operates as follows: electrical pulses from other neurons cause the transfer of substances called neurotransmitters (of which there are several varieties) from the synaptic terminals of a neuron's axon (think "output") across a structure called a synapse to the dendrites of other neurons (call them downstream neurons). The arrival of the neurotransmitter in the dendrite of the downstream neuron increases the tendency of the downstream neuron to send an electrical pulse itself ("fire"). If enough dendrites of a neuron receive neurotransmitters in a short enough period of time, the neuron will fire.
Caveat: neurotransmitter substances may be excitatory or inhibitory. The text above assumes that only excitatory neurotransmitters are involved. Inhibitory neurotransmitters, as the name suggests, reduce the tendency of a neuron to fire. Some neurons have a mixture of excitatory synapses and inhibitory synapses (i.e. synapses using inhibitory neurotransmitters) and will only fire if there is enough additional excitatory input to overcome the effect of the inhibitory synapses.
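In artificial-neural-network terms this is often abstracted as a threshold unit with positive (excitatory) and negative (inhibitory) weights; the following Python sketch uses made-up numbers purely for illustration:

    # Abstract threshold-unit view of the description above, with made-up numbers:
    # excitatory synapses get positive weights, inhibitory synapses negative ones,
    # and the "neuron" fires only if the summed input exceeds its threshold.
    excitatory_inputs  = [1, 1, 0]           # 1 = neurotransmitter arrived recently
    excitatory_weights = [0.6, 0.7, 0.4]
    inhibitory_inputs  = [1, 0]
    inhibitory_weights = [-0.5, -0.9]
    threshold = 0.5

    total = (sum(w * x for w, x in zip(excitatory_weights, excitatory_inputs))
             + sum(w * x for w, x in zip(inhibitory_weights, inhibitory_inputs)))

    fires = total > threshold
    print(total, fires)   # fires only when excitation outweighs inhibition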
C