On learning feedforward neural networks with noise injection into inputs
Abstract
Injecting noise into the inputs during the training of feedforward neural networks (FNNs) can remarkably improve their generalization performance. Reported works justify this fact by arguing that noise injection is equivalent to a smoothing regularization, with the input noise variance playing the role of the regularization parameter. The success of this approach depends on an appropriate choice of the input noise variance. However, it is often not known a priori whether the degree of smoothness imposed on the FNN mapping is consistent with the unknown function to be approximated. In order to gain better control over this smoothing effect, a cost function is proposed that balances the smoothed fitting induced by the noise injection against the precision of approximation. The second term, which aims at penalizing the undesirable effect of input noise injection, or equivalently at controlling the deviation of the randomly perturbed cost, was obtained by expressing a certain distance between the original cost function and its randomly perturbed version. In fact, this term can be derived in general for parametric models that satisfy the Lipschitz property. An example is included to illustrate the effectiveness of learning with the proposed cost function when noise injection is used.
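To make the idea concrete, the following is a minimal sketch of training an FNN with Gaussian noise injected into the inputs under a cost that balances the perturbed (smoothed) fit against approximation precision. The abstract does not give the exact form of the paper's penalty term, so the surrogate used here, lam * |J_noisy - J_clean| (the absolute deviation between the perturbed and original costs), along with the noise level sigma, the penalty weight lam, the network size, and the toy target function, are all illustrative assumptions rather than the authors' formulation.

```python
# Sketch only: noise injection with a deviation-controlling penalty term.
# The penalty lam * |J_noisy - J_clean| is an assumed stand-in for the
# paper's "distance between the original cost and its perturbed version".
import math
import torch

torch.manual_seed(0)

# Toy regression problem: approximate y = sin(x) on [-pi, pi].
x = torch.linspace(-math.pi, math.pi, 200).unsqueeze(1)
y = torch.sin(x)

# One-hidden-layer feedforward network (size chosen for illustration).
model = torch.nn.Sequential(
    torch.nn.Linear(1, 20),
    torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
mse = torch.nn.MSELoss()

sigma = 0.1  # input-noise std dev: plays the role of the regularization parameter
lam = 0.5    # assumed weight of the penalty on the perturbed-cost deviation

for epoch in range(2000):
    opt.zero_grad()
    noise = sigma * torch.randn_like(x)   # fresh input noise each pass
    j_noisy = mse(model(x + noise), y)    # smoothed fitting term (perturbed cost)
    j_clean = mse(model(x), y)            # original cost: precision of approximation
    # Balance the smoothed fit against the deviation from the original cost.
    loss = j_noisy + lam * (j_noisy - j_clean).abs()
    loss.backward()
    opt.step()

print(f"clean MSE after training: {mse(model(x), y).item():.5f}")
```

With lam = 0 this reduces to plain input-noise training; increasing lam tightens the coupling between the perturbed and original costs, which is one way to read the abstract's stated goal of keeping the smoothing effect under control.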
Keywords
Neural networks; Feedforward neural networks; Cost function; Neurons; Function approximation; Smoothing methods; Pattern classification; Approximation algorithms; Electronic mail; Fuzzy control