A related function, also sometimes used in backprop-trained networks, is 2φ(x) – 1, which can also be expressed as tanh(x/2). This is a smoothed version of the step function that jumps from –1 to 1 at x = 0, i.e. the function equal to –1 if x < 0 and to 1 if x ≥ 0.
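The identity 2φ(x) – 1 = tanh(x/2) can be checked numerically. A minimal sketch, assuming φ denotes the usual logistic sigmoid 1/(1 + e^(–x)):

```python
import math

def logistic(x):
    """Logistic sigmoid: phi(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

# Verify that 2*phi(x) - 1 equals tanh(x/2) at several points,
# including negative, zero, and positive inputs.
for x in [-5.0, -1.0, 0.0, 0.5, 3.0]:
    lhs = 2.0 * logistic(x) - 1.0
    rhs = math.tanh(x / 2.0)
    assert abs(lhs - rhs) < 1e-12, (x, lhs, rhs)
```

The identity follows algebraically: 2/(1 + e^(–x)) – 1 = (1 – e^(–x))/(1 + e^(–x)) = (e^(x/2) – e^(–x/2))/(e^(x/2) + e^(–x/2)) = tanh(x/2).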
M
N
Learning might or might not occur, depending on the type of neural network and its mode of operation.
Also referred to as a neurode, node, or unit.
See also decision tree pruning and generalization in backprop.