Selective Dropout for Deep Neural Networks

Erik Barrow, Mark Eastwood, C. Jayne

Research output: Chapter in Book/Report/Conference proceeding › Chapter

2 Citations (Scopus)

Abstract

Dropout has been proven to be an effective method for reducing overfitting in deep artificial neural networks. We present three alternative methods for performing dropout on a deep neural network which improve the effectiveness of dropout over the same training period. These methods select the neurons to be dropped using statistics computed from a neuron's change in weight, the average size of a neuron's weights, and the output variance of a neuron. We found that increasing the probability of dropping neurons with smaller values of these statistics, and decreasing the probability for those with larger values, gave improved results when training over 10,000 epochs. The most effective of these was the Output Variance method, giving an average improvement of 1.17% accuracy over traditional dropout.
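To make the idea concrete, below is a minimal sketch of the output-variance variant described in the abstract. The exact probability assignment used in the paper is not given on this page, so the scaling scheme here (per-neuron drop probability inversely proportional to output variance, normalised so the mean drop probability equals the base dropout rate) and the function name `variance_based_drop_mask` are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of variance-based selective dropout (not the paper's exact scheme).
import numpy as np

def variance_based_drop_mask(activations, base_rate=0.5, rng=None):
    """Return a keep-mask for one hidden layer.

    activations : (batch, n_neurons) array of layer outputs for a mini-batch
    base_rate   : average fraction of neurons to drop, as in standard dropout
    """
    rng = rng or np.random.default_rng()
    var = activations.var(axis=0) + 1e-8             # per-neuron output variance
    score = 1.0 / var                                 # low variance -> high drop score
    p_drop = base_rate * score * len(score) / score.sum()  # rescale so mean == base_rate
    p_drop = np.clip(p_drop, 0.0, 1.0)
    keep = rng.random(score.shape) >= p_drop          # drop neuron i with prob p_drop[i]
    return keep.astype(activations.dtype)

# Usage during training: hidden = hidden * variance_based_drop_mask(hidden, base_rate=0.5)
```

For brevity the sketch omits the inverted-dropout rescaling of the surviving activations that a full training loop would normally apply.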
Original language: English
Title of host publication: Lecture Notes in Computer Science
Editors: Akira Hirose, Seiichi Ozawa, Kenji Doya, Kazushi Ikeda, Minho Lee, Derong Liu
Publisher: Springer Verlag
Pages: 519-528
Volume: 9949
ISBN (Print): 978-3-319-46674-3
DOI: https://doi.org/10.1007/978-3-319-46675-0_57
Publication status: Published - 29 Sep 2016

Bibliographical note

This paper is not available on the repository

Keywords

  • MNIST
  • Artificial neural network
  • Deep learning
  • Dropout network
  • Non-random dropout
  • Selective dropout


Cite this

Barrow, E., Eastwood, M., & Jayne, C. (2016). Selective Dropout for Deep Neural Networks. In A. Hirose, S. Ozawa, K. Doya, K. Ikeda, M. Lee, & D. Liu (Eds.), Lecture Notes in Computer Science (Vol. 9949, pp. 519-528). Springer Verlag. https://doi.org/10.1007/978-3-319-46675-0_57