Selective Dropout for Deep Neural Networks

Erik Barrow, Mark Eastwood, C. Jayne

    Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

    7 Citations (Scopus)

    Abstract

    Dropout has been proven to be an effective method for reducing overfitting in deep artificial neural networks. We present three alternative methods for performing dropout on a deep neural network that improve the effectiveness of the dropout method over the same training period. These methods select neurons to be dropped using statistics calculated from a neuron's change in weight, the average size of a neuron's weights, and the output variance of a neuron. We found that increasing the probability of dropping neurons with smaller values of these statistics, and decreasing the probability for those with larger values, gave an improved result in training over 10,000 epochs. The most effective of these was found to be the Output Variance method, giving an average improvement of 1.17% accuracy over traditional dropout methods.
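
    As a rough illustration of the Output Variance idea described in the abstract, the sketch below is not taken from the paper: the function name selective_dropout_mask, the linear rank-to-probability scaling, and the inverted-dropout rescaling are assumptions. It computes each neuron's output variance over a mini-batch and assigns lower-variance neurons a higher drop probability, averaging to the usual dropout rate.

```python
import numpy as np

def selective_dropout_mask(activations, base_rate=0.5, rng=None):
    """Hypothetical sketch of output-variance-based selective dropout.

    activations: (batch_size, n_neurons) array of a layer's outputs.
    base_rate:   average fraction of neurons to drop, as in standard dropout.

    Neurons whose outputs vary little across the batch receive a higher
    drop probability; high-variance neurons are dropped less often.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Per-neuron output variance over the mini-batch (the "Output Variance"
    # statistic named in the abstract).
    var = activations.var(axis=0)

    # Rank neurons by variance: rank 0 = smallest variance.
    ranks = np.argsort(np.argsort(var))
    n = var.size

    # Assumed linear scaling so drop probabilities average to base_rate:
    # the lowest-variance neuron gets ~2*base_rate, the highest ~0.
    drop_prob = 2.0 * base_rate * (1.0 - ranks / max(n - 1, 1))
    drop_prob = np.clip(drop_prob, 0.0, 1.0)

    # Sample the dropout mask and rescale surviving neurons (inverted
    # dropout) so the expected activation magnitude is preserved.
    keep = rng.random(n) >= drop_prob
    denom = np.maximum(1.0 - drop_prob, 1e-12)
    scale = np.where(drop_prob < 1.0, 1.0 / denom, 0.0)
    return keep.astype(activations.dtype) * scale


# Usage: apply the mask to a hidden layer's output during training.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hidden = rng.normal(size=(128, 256))    # batch of 128, 256 neurons
    mask = selective_dropout_mask(hidden, base_rate=0.5, rng=rng)
    dropped_out = hidden * mask             # mask broadcasts over the batch
```

    The same skeleton would cover the paper's other two statistics by replacing the variance computation with a neuron's change in weight or its average weight magnitude; how the drop probabilities are scaled from the ranked statistics is an assumption here, not a detail confirmed by the abstract.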
    Original language: English
    Title of host publication: Lecture Notes in Computer Science
    Editors: Akira Hirose, Seiichi Ozawa, Kenji Doya, Kazushi Ikeda, Minho Lee, Derong Liu
    Publisher: Springer Verlag
    Pages: 519-528
    Volume: 9949
    ISBN (Print): 978-3-319-46674-3
    DOIs
    Publication status: Published - 29 Sept 2016

    Bibliographical note

    This paper is not available on the repository

    Keywords

    • MNIST
    • Artificial neural network
    • Deep learning
    • Dropout network
    • Non-random dropout
    • Selective dropout
