Sentiment Classification via a Response Recalibration Framework

Phillip Smith, M. Lee

    Research output: Contribution to conference › Paper


    Abstract

    Probabilistic learning models can be calibrated to improve performance on tasks such as sentiment classification. In this paper, we introduce a framework for sentiment classification that enables classifier recalibration when related, context-bearing documents are available. We investigate the use of probabilistic thresholding and document-similarity-based recalibration methods to improve classifier performance. We demonstrate our proposed recalibration methods on a dataset of online clinical reviews from the patient feedback domain, each accompanied by a management response that carries sentiment-bearing information. Experimental results show that the proposed recalibration methods outperform uncalibrated supervised machine learning models trained for sentiment analysis, and yield significant improvements over a robust baseline.
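
    The abstract does not spell out the exact recalibration procedure, but the following Python sketch illustrates the general idea under stated assumptions: a base classifier's prediction for a review is kept when its probability exceeds a confidence threshold, and otherwise blended with the prediction for the adjoining management response, weighted by document similarity. The function name, threshold value, and blending scheme are illustrative assumptions, not the authors' implementation.

    # Illustrative sketch only (not the paper's method): recalibrate low-confidence
    # review predictions using the adjoining management response.
    import numpy as np
    from sklearn.metrics.pairwise import cosine_similarity

    def recalibrated_predict(reviews, responses, clf, vectorizer, threshold=0.65):
        """Predict review sentiment; when the classifier's confidence falls
        below `threshold`, blend in the response's predicted probabilities,
        weighted by review-response similarity. All parameters are assumed."""
        review_vecs = vectorizer.transform(reviews)
        response_vecs = vectorizer.transform(responses)

        review_probs = clf.predict_proba(review_vecs)      # P(class | review)
        response_probs = clf.predict_proba(response_vecs)  # P(class | response)

        preds = []
        for i in range(len(reviews)):
            confidence = review_probs[i].max()
            if confidence >= threshold:
                # Confident enough: keep the original prediction.
                preds.append(review_probs[i].argmax())
            else:
                # Low confidence: mix in the response's probabilities,
                # weighted by cosine similarity between the two documents.
                sim = cosine_similarity(review_vecs[i], response_vecs[i])[0, 0]
                blended = (1.0 - sim) * review_probs[i] + sim * response_probs[i]
                preds.append(blended.argmax())
        return np.array(preds)

    Here `clf` and `vectorizer` are assumed to be, for example, a scikit-learn LogisticRegression and TfidfVectorizer already fitted on labelled review text; the threshold and similarity weighting would be tuned on held-out data.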
    Original language: English
    Pages: 175-180
    Publication status: Published - 2015
    Event: Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis - Lisboa, Portugal
    Duration: 17 Sept 2015 → 17 Sept 2015

    Workshop

    Workshop: Workshop on Computational Approaches to Subjectivity, Sentiment & Social Media Analysis
    Country/Territory: Portugal
    City: Lisboa
    Period: 17/09/15 → 17/09/15

    Bibliographical note

    The full text is available from: http://www.emnlp2015.org/proceedings/WASSA/WASSA-2015.pdf
