The Transition to Perfect Generalization in Perceptrons

Neural Computation - Volume 3, Issue 3 - pp. 386-401 - 1991
Eric B. Baum, Yuh‐Dauh Lyuu
NEC Research Institute, Princeton, NJ 08540, USA

Abstract

Several recent papers (Gardner and Derrida 1989; Györgyi 1990; Sompolinsky et al. 1990) have found, using methods of statistical physics, that a transition to perfect generalization occurs in training a simple perceptron whose weights can only take values ±1. We give a rigorous proof of such a phenomenon. That is, we show, for α = 2.0821, that if at least αn examples are drawn from the uniform distribution on {+1, −1}^n and classified according to a target perceptron w_t ∈ {+1, −1}^n as positive or negative according to whether w_t·x is nonnegative or negative, then the probability is 2^(−√n) that there is any other such perceptron consistent with the examples. Numerical results indicate further that perfect generalization holds for α as low as 1.5.
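
As an informal illustration of the setup described in the abstract (not part of the paper), the following Python sketch estimates, for small n, how often a randomly drawn target is the unique ±1 perceptron consistent with ⌈αn⌉ random examples. The exhaustive enumeration over all 2^n candidate weight vectors is only feasible for small n, and the function names and parameter values used here are illustrative assumptions.

import itertools

import numpy as np

def unique_consistent(n, alpha, rng):
    """Return True if the target is the only +/-1 perceptron consistent
    with m = ceil(alpha * n) examples drawn uniformly from {+1, -1}^n."""
    m = int(np.ceil(alpha * n))
    w_target = rng.choice([-1, 1], size=n)
    X = rng.choice([-1, 1], size=(m, n))
    # Label +1 when w_t . x is nonnegative, -1 otherwise (as in the abstract).
    y = np.where(X @ w_target >= 0, 1, -1)
    # Exhaustively check every other +/-1 weight vector (2^n candidates).
    for bits in itertools.product([-1, 1], repeat=n):
        w = np.array(bits)
        if np.array_equal(w, w_target):
            continue
        if np.array_equal(np.where(X @ w >= 0, 1, -1), y):
            return False  # another consistent perceptron exists
    return True

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, alpha, trials = 15, 2.0821, 50
    hits = sum(unique_consistent(n, alpha, rng) for _ in range(trials))
    print(f"fraction of trials with a unique consistent perceptron: {hits / trials:.2f}")

Sweeping alpha downward in this sketch gives a rough empirical picture of the transition region that the paper's numerical results place near α ≈ 1.5.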

Keywords


References

Baum, E. B., & Haussler, D. (1989). What size net gives valid generalization? Neural Computation, 1(1), 151-160. doi:10.1162/neco.1989.1.1.151

Gardner, E., & Derrida, B. (1989). Three unfinished works on the optimal storage capacity of networks. Journal of Physics A: Mathematical and General, 22(12), 1983. doi:10.1088/0305-4470/22/12/004

Györgyi, G. (1990). First-order transition to perfect generalization in a neural network with binary synapses. Physical Review A, 41, 7097. doi:10.1103/PhysRevA.41.7097

Sompolinsky, H., Tishby, N., & Seung, H. S. (1990). Learning from examples in large neural networks. Physical Review Letters, 65, 1683. doi:10.1103/PhysRevLett.65.1683