Abstract
Dropout has been proven to be an effective method for reducing overfitting in deep artificial neural networks. We present three new alternative methods for performing dropout on a deep neural network, each of which improves the effectiveness of dropout over the same training period. These methods select neurons to be dropped using statistics calculated from a neuron's change in weight, the average size of a neuron's weights, and the output variance of a neuron. We found that increasing the probability of dropping neurons with smaller values of these statistics, and decreasing the probability for those with larger values, gave an improved result when training over 10,000 epochs. The most effective of these was the Output Variance method, giving an average improvement of 1.17% accuracy over traditional dropout.
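The abstract only states that drop probabilities are biased towards low-statistic neurons; the exact probability-assignment formula is not given here. The sketch below is therefore an illustrative assumption of how an Output Variance style selection could be implemented (the function name, the rank-based scaling, and the `base_rate` parameter are all hypothetical, not taken from the paper).

```python
import numpy as np

def output_variance_dropout_mask(activations, base_rate=0.5, rng=None):
    """Illustrative sketch of variance-guided dropout:
    neurons whose outputs vary little across the batch are dropped with
    higher probability, high-variance neurons with lower probability.
    `activations` has shape (batch_size, n_neurons)."""
    rng = np.random.default_rng() if rng is None else rng
    var = activations.var(axis=0)                     # per-neuron output variance
    # Rank-based scaling (assumed, not the paper's formula):
    # the lowest-variance neuron gets the highest drop probability.
    ranks = var.argsort().argsort()                   # 0 = smallest variance
    scale = 1.5 - ranks / max(len(var) - 1, 1)        # values in [0.5, 1.5]
    drop_prob = np.clip(base_rate * scale, 0.0, 1.0)  # mean stays near base_rate
    keep = rng.random(var.shape) >= drop_prob         # sample the binary mask
    # Inverted-dropout rescaling so expected activations are unchanged.
    return keep.astype(activations.dtype) / (1.0 - drop_prob + 1e-8)
```

In use, the mask would be recomputed each training step from the current layer activations and multiplied into them before the next layer, analogous to standard inverted dropout.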
Original language | English |
---|---|
Title of host publication | Lecture Notes in Computer Science |
Editors | Akira Hirose, Seiichi Ozawa, Kenji Doya, Kazushi Ikeda, Minho Lee, Derong Liu |
Publisher | Springer Verlag |
Pages | 519-528 |
Volume | 9949 |
ISBN (Print) | 978-3-319-46674-3 |
DOIs | |
Publication status | Published - 29 Sept 2016 |
Bibliographical note
This paper is not available on the repository.

Keywords
- MNIST
- Artificial neural network
- Deep learning
- Dropout network
- Non-random dropout
- Selective dropout