Carsten Peterson and Eric Hartman
Explorations of the mean field theory learning algorithm
Neural Networks 2, 475-494 (1989)

Abstract:
The mean field theory (MFT) learning algorithm is elaborated and explored with respect to a variety of tasks. MFT is benchmarked against the back-propagation learning algorithm (BP) on two different feature recognition problems: two-dimensional mirror symmetry and multidimensional statistical pattern classification. We find that while the two algorithms are very similar with respect to generalization properties, MFT normally requires substantially fewer training epochs than BP. Since the MFT network is bidirectional rather than feed-forward, its use extends naturally from purely functional mappings to content-addressable memory. A network with N visible and N hidden units can store up to approximately 4N patterns with good content-addressability. We stress an implementational advantage of MFT: it maps naturally onto VLSI circuitry.

LU TP 89-02
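
For readers who want the mechanics behind the abstract, below is a minimal sketch of MFT learning in the deterministic Boltzmann-machine form it approximates: settle the mean-field equations v_i = tanh((1/T) * sum_j w_ij v_j) once with the visible units clamped and once free, then update the symmetric weights with the difference of the resulting correlations. Everything here (numpy, the 4+4-unit network, the one-pattern storage task, and the helper names mft_settle / mft_learn_step) is an illustrative assumption, not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    N_VIS, N_HID = 4, 4        # visible and hidden units (toy sizes, assumed)
    N = N_VIS + N_HID
    T = 1.0                    # temperature in the mean-field equations
    ETA = 0.1                  # learning rate (assumed)

    W = rng.normal(0.0, 0.1, (N, N))
    W = (W + W.T) / 2.0        # symmetric weights, as MFT requires
    np.fill_diagonal(W, 0.0)   # no self-connections

    def mft_settle(clamped, n_iter=50):
        """Iterate v_i = tanh((1/T) * sum_j w_ij v_j); units in the
        dict `clamped` (index -> value) are held fixed throughout."""
        v = rng.uniform(-0.1, 0.1, N)
        for i, val in clamped.items():
            v[i] = val
        for _ in range(n_iter):
            v = np.tanh(W @ v / T)
            for i, val in clamped.items():
                v[i] = val
        return v

    def mft_learn_step(pattern):
        """Boltzmann-style update from mean-field correlations:
        dW ~ <v v^T>_clamped - <v v^T>_free."""
        v_plus = mft_settle({i: pattern[i] for i in range(N_VIS)})
        v_minus = mft_settle({})              # fully free negative phase
        dW = ETA * (np.outer(v_plus, v_plus) - np.outer(v_minus, v_minus))
        np.fill_diagonal(dW, 0.0)
        return dW

    # Toy usage: store one +/-1 pattern on the visible units, then
    # recall it from a partial cue, exercising the bidirectional
    # (content-addressable) mode the abstract describes.
    pattern = np.array([1.0, -1.0, 1.0, -1.0])
    for _ in range(100):
        W += mft_learn_step(pattern)

    recalled = mft_settle({0: 1.0, 1: -1.0})  # clamp only two units as a cue
    print(np.sign(recalled[:N_VIS]))

Because the free phase subtracts whatever the network already settles into on its own, the update vanishes once the stored pattern becomes a fixed point, which is what makes the rule self-limiting in this sketch.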