MEAN FIELD THEORY NEURAL NETWORKS FOR FEATURE RECOGNITION,
CONTENT ADDRESSABLE MEMORY AND OPTIMIZATION
Carsten Peterson
Various applications of the mean field theory (MFT) technique for
obtaining solutions close to optimal minima in feed-back networks
are reviewed. Using this method in the context of the Boltzmann
machine gives rise to a fast deterministic learning algorithm with
a performance comparable with that of the back-propagation algorithm (BP)
in feature recognition applications. Since MFT learning is bidirectional,
its use can be extended from purely functional mappings to a content
addressable memory. The storage capacity of such a network grows
like $O(10\!-\!20)\,n_{H}$ with the number of hidden units $n_{H}$. The MFT
learning algorithm is local and thus it has an advantage over BP
with respect to VLSI implementations.
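The deterministic MFT learning rule alluded to above replaces the stochastic correlations of the Boltzmann machine with deterministic mean field values. In a standard form (the precise notation is not fixed by this abstract), the mean field variables and the weight update read

$$
V_i = \tanh\!\Big(\frac{1}{T}\sum_j w_{ij} V_j\Big),
\qquad
\Delta w_{ij} \propto V_i^{c} V_j^{c} - V_i^{f} V_j^{f},
$$

where the superscripts $c$ and $f$ denote the clamped and free phases, and $T$ is the temperature. The locality of this rule (each update involves only the two units a weight connects) is what makes it attractive for VLSI.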
It is also demonstrated how MFT and BP are related in situations where
the number of input units is much larger than the number of output units.
In the context of finding good solutions to difficult optimization
problems the MFT technique again turns out to be extremely powerful.
The quality of the solutions for large traveling salesman and graph
partition problems is on a par with that obtained by optimally
tuned simulated annealing methods. The algorithm employed here is based
on multi-state K-valued ($K>2$) neurons rather than binary ($K=2$)
neurons. This method is also advantageous for more nested decision
problems like scheduling.
The MFT equations are isomorphic to RC-equations and hence map naturally
onto custom-made hardware. With the diversity of successful application
areas the MFT approach thus constitutes a convenient platform for
hardware development.
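The RC-equations in question are, in a standard form not spelled out in this abstract, the charging equations of an analog amplifier network,

$$
C_i \frac{du_i}{dt} = -\frac{u_i}{R_i} + \sum_j T_{ij} V_j,
\qquad
V_j = g(u_j),
$$

with a sigmoid transfer function $g$; their fixed points satisfy $V_i = g\big(R_i \sum_j T_{ij} V_j\big)$, which has the same form as the MFT equations.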
LU TP 89-13