Abstract
To avoid oversized feedforward networks, we propose that after Cascade-Correlation learning the network is fine-tuned with the backpropagation algorithm. Our experiments show that with Cascade-Correlation learning alone, the network may require a large number of hidden units to reach the desired error level. However, if the network is additionally fine-tuned with backpropagation, the desired error level can be reached with a much smaller number of hidden units. It is also shown that the combined Cascade-Correlation/backpropagation training is a faster scheme than backpropagation training alone.
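The two-phase scheme the abstract describes can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the XOR task, tanh hidden units, a least-squares fit for the linear output layer, the unit cap, and all learning rates are choices made here for brevity (Fahlman's original Cascade-Correlation trains the output layer iteratively, e.g. with quickprop). Phase 1 grows frozen, cascaded hidden units by maximising the correlation of each candidate with the residual error; phase 2 unfreezes every weight and runs plain backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR, a classic test case for constructive algorithms.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])


def features(X, hidden):
    """Inputs + bias + outputs of the frozen, cascaded hidden units."""
    F = np.hstack([X, np.ones((len(X), 1))])
    for w in hidden:                      # each unit sees all earlier columns
        F = np.hstack([F, np.tanh(F @ w)])
    return F


def fit_output(F, y):
    """Linear output layer fitted by least squares (a simplification)."""
    w, *_ = np.linalg.lstsq(F, y, rcond=None)
    return w


def train_candidate(F, resid, steps=3000, lr=0.1):
    """Gradient ascent on the covariance between the candidate's output
    and the residual error -- the Cascade-Correlation criterion."""
    w = rng.normal(scale=1.0, size=(F.shape[1], 1))
    r = resid - resid.mean()
    for _ in range(steps):
        v = np.tanh(F @ w)
        s = np.sign(np.sum(v * r))        # maximise |covariance|
        w += lr * (F.T @ (s * r * (1 - v ** 2)))
    return w


# Phase 1: grow the network with Cascade-Correlation.
hidden = []
for _ in range(8):                        # cap on hidden units (assumption)
    F = features(X, hidden)
    w_out = fit_output(F, y)
    resid = y - F @ w_out
    if np.mean(resid ** 2) < 1e-3:        # desired error level reached
        break
    hidden.append(train_candidate(F, resid))


def finetune(X, y, hidden, w_out, steps=2000, lr=0.05):
    """Phase 2: unfreeze every weight and run plain backpropagation."""
    for _ in range(steps):
        Fs = [np.hstack([X, np.ones((len(X), 1))])]
        vs = []
        for w in hidden:                  # forward pass, keep activations
            v = np.tanh(Fs[-1] @ w)
            vs.append(v)
            Fs.append(np.hstack([Fs[-1], v]))
        out = Fs[-1] @ w_out
        d = (out - y) / len(y)            # dMSE/dout (linear output unit)
        dF = d @ w_out.T
        w_out = w_out - lr * (Fs[-1].T @ d)
        for i in reversed(range(len(hidden))):
            dv = dF[:, -1:] * (1 - vs[i] ** 2)
            g = Fs[i].T @ dv
            dF = dF[:, :-1] + dv @ hidden[i].T   # pass error down the cascade
            hidden[i] = hidden[i] - lr * g
    return hidden, w_out


hidden, w_out = finetune(X, y, hidden, w_out)
mse = np.mean((features(X, hidden) @ w_out - y) ** 2)
print(f"hidden units: {len(hidden)}, fine-tuned MSE: {mse:.4f}")
```

In this sketch the correlation gradient omits the candidate-mean term, which is exact here because the centred residual sums to zero over the patterns. The paper's point corresponds to comparing the unit count after phase 1 alone against the count needed when phase 2 is allowed to reduce the error further.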
Original language | English |
---|---|
Pages | 10-12 |
Number of pages | 3 |
Journal | Neural Processing Letters |
Volume | 2 |
Issue number | 2 |
DOIs | |
Status | Published - Mar 1995 |
Publication type (Ministry of Education classification) | A1 Original article in a scientific journal |