New adaptive approach for the back-propagation training algorithm

M. H. Abdallah, A. E. Marble, K. J. Macleod

Research output: Conference article, peer-reviewed

Abstract

The traditional back-propagation (BP) training algorithm is an iterative gradient descent procedure designed to minimize the mean square error between the actual output of a multilayer feedforward perceptron and the desired output. It is highly accurate for most classification problems, but it is time consuming and computationally intensive. An adaptive approach is proposed to reduce the number of iterations needed to train the neural network. The new method is applied to a multilayer network with one hidden layer to classify the letters A to J. A 25% reduction in the number of iterations is achieved at a 98% classification rate. We also propose a confidence region (CR), based on the average and the standard deviation of the output node values. A 75% reduction in the number of iterations is achieved when the CR is used. Experimental results indicate that the adaptive approach combined with the confidence region is faster than the traditional BP training algorithm.
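As a rough illustration only (the abstract does not give the paper's exact adaptation rule or confidence-region test), the sketch below shows a one-hidden-layer perceptron trained by gradient descent on the mean square error, with a simple adaptive learning rate and a stopping test built from the mean and standard deviation of the output-node errors. The specific adaptation factors, tolerance, and function names are assumptions, not the authors' method.

# Hypothetical sketch of an adaptive BP loop with a confidence-region stop.
# The learning-rate rule and the stopping tolerance below are illustrative
# assumptions; they are not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_bp(X, T, n_hidden=16, lr=0.5, epochs=5000, tol=0.1):
    n_in, n_out = X.shape[1], T.shape[1]
    W1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
    W2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
    prev_mse = np.inf
    for epoch in range(epochs):
        # Forward pass through the single hidden layer.
        H = sigmoid(X @ W1)
        Y = sigmoid(H @ W2)
        E = T - Y
        mse = np.mean(E ** 2)

        # Adaptive step size: grow it while the error keeps falling,
        # shrink it when the error rises (illustrative rule only).
        lr = lr * 1.05 if mse < prev_mse else lr * 0.7
        prev_mse = mse

        # Backward pass: gradient of the MSE with respect to the weights.
        dY = E * Y * (1.0 - Y)
        dH = (dY @ W2.T) * H * (1.0 - H)
        W2 += lr * H.T @ dY / len(X)
        W1 += lr * X.T @ dH / len(X)

        # Confidence-region stop: every output node's mean absolute error
        # plus one standard deviation must fall inside the tolerance.
        if np.all(np.abs(E).mean(axis=0) + E.std(axis=0) < tol):
            return W1, W2, epoch
    return W1, W2, epochs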

Original language: English
Pages (from-to): 722-725
Number of pages: 4
Journal: Canadian Conference on Electrical and Computer Engineering
Volume: 2
Publication status: Published - 1994
Event: Proceedings of the 1994 Canadian Conference on Electrical and Computer Engineering. Part 2 (of 2) - Halifax, Canada
Duration: Sept. 25, 1994 - Sept. 28, 1994

ASJC Scopus Subject Areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering
