
Negation Scope Resolution: Quantifying Neural Uncertainty In An Imbalanced Setting

EasyChair Preprint 1602

7 pages
Date: October 7, 2019

Abstract

Negation scope detection is an interesting task for neural machine learning models because of the sequential dependencies in the input data. A neural classifier that can untangle the negated parts of a sentence from the non-negated parts is useful for downstream tasks. Additionally, classification tasks generally have to contend with quite imbalanced data sets. In natural language only a subset of sentences contain negations, so negation-annotated data is prone to imbalance: there are many annotated sentences without any negations (positive sentences) compared to sentences with negations (negative sentences). This paper examines how this kind of imbalance affects neural model performance by comparing models trained on the full data set with models trained on a subset from which the positive sentences have been filtered out. The results, evaluated on the *SEM 2012 shared task on negation scope detection, show that there does seem to be a difference in how the classifiers are affected by imbalance, depending on architecture, and that including part-of-speech (PoS) features helps to reduce this difference.
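
To make the experimental manipulation concrete, the following sketch shows one way the positive sentences could be filtered out of a negation-annotated corpus. This is an illustration, not the paper's code: it assumes a *SEM 2012 CD-SCO-style CoNLL layout in which sentences are separated by blank lines and every token line of a negation-free sentence ends in a literal *** column; the file name is hypothetical.

from typing import List

def read_sentences(path: str) -> List[List[str]]:
    """Read a CoNLL-style file into a list of sentences (lists of token lines)."""
    sentences, current = [], []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:                 # blank line ends a sentence
                if current:
                    sentences.append(current)
                    current = []
            else:
                current.append(line)
        if current:
            sentences.append(current)
    return sentences

def has_negation(sentence: List[str]) -> bool:
    """Assumed layout: tokens of negation-free sentences end in '***'."""
    return not all(line.split("\t")[-1] == "***" for line in sentence)

sentences = read_sentences("cdsco_train.conll")   # hypothetical file name
negated = [s for s in sentences if has_negation(s)]
print(f"kept {len(negated)} of {len(sentences)} sentences "
      f"({len(negated) / len(sentences):.1%} contain a negation)")

Training one model on sentences and another on negated is then the comparison the abstract describes: full (imbalanced) data versus the negation-only subset.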

Keyphrases: BiLSTM, NLP, negation analysis, negation scope detection, neural network, scope detection, scope match
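
The keyphrases mention a BiLSTM. Purely as an illustration of the kind of token-level tagger the abstract describes, here is a minimal sketch (in PyTorch) of a BiLSTM scope classifier that concatenates word and PoS embeddings, which is one common way to include PoS features. All layer sizes and names are hypothetical choices for the sketch, not the paper's reported configuration.

import torch
import torch.nn as nn

class NegationScopeTagger(nn.Module):
    """BiLSTM token tagger: word + PoS embeddings -> in/out-of-scope logits."""

    def __init__(self, vocab_size: int, pos_size: int,
                 word_dim: int = 100, pos_dim: int = 20, hidden: int = 128):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, word_dim)
        self.pos_emb = nn.Embedding(pos_size, pos_dim)
        self.bilstm = nn.LSTM(word_dim + pos_dim, hidden,
                              batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)   # binary: in scope / out of scope

    def forward(self, words: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        # concatenate the two embeddings per token before the BiLSTM
        x = torch.cat([self.word_emb(words), self.pos_emb(pos)], dim=-1)
        h, _ = self.bilstm(x)
        return self.out(h)                    # (batch, seq_len, 2) logits

# toy forward pass: a batch of 1 sentence with 5 tokens
model = NegationScopeTagger(vocab_size=10_000, pos_size=50)
words = torch.randint(0, 10_000, (1, 5))
pos = torch.randint(0, 50, (1, 5))
logits = model(words, pos)                    # shape: (1, 5, 2)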

BibTeX entry
BibTeX does not have a suitable entry type for preprints; the following is a workaround that produces a correct reference:
@booklet{EasyChair:1602,
  author       = {Chris Ghai},
  title        = {Negation Scope Resolution: Quantifying Neural Uncertainty In An Imbalanced Setting},
  howpublished = {EasyChair Preprint 1602},
  year         = {EasyChair, 2019}}