\section{Impact of annotations on supervised polarity classification}
\label{sect:classifier}
\texttt{<SVM-based classifier: Matlab, (LibSVM?)>}
This section evaluates the impact of the AMT-generated annotations on the polarity classification task.
To this end, a comparative evaluation between two groups of polarity classification systems is conducted:
baseline or reference classifiers trained on the noisy metadata already available are compared with
contrastive classifiers trained on the AMT-generated annotations.
Although more sophisticated classification schemes can be conceived for this task, a simple SVM-based binary supervised classification approach is considered here.
\subsection{Description of datasets}
For the experimental evaluation, three different datasets were considered:
\begin{enumerate}
\item Baseline: constitutes the dataset used for training the baseline or reference classifiers.
Automatic annotation for this dataset was obtained with the following naive approach: sentences extracted from
comments with a rating of 5 were assigned to class ``positive'', sentences extracted from comments with a rating
of 3 were assigned to ``neutral'', and sentences extracted from comments with a rating of 1 were assigned to
``negative''. This dataset contains a total of 5570 sentences, with a vocabulary of 11797 words.
\item Annotated: constitutes the dataset that was manually annotated by the AMT workers.
This dataset is used for training the contrastive classifiers, which are compared against the baseline systems.
The three independent annotations generated by the AMT workers for each sentence in this dataset were consolidated into a single annotation
using the following criterion: if the three provided annotations happened to be
different\footnote{This kind of total disagreement among annotators occurred in only 13 sentences out of 1000.},
the sentence was assigned to class ``neutral''; otherwise, the sentence was assigned to the class on which
at least two annotators agreed (a simple majority vote, sketched in the code example after this list). This dataset contains a total of 1000 sentences, with a vocabulary
of 3022 words.
\item Evaluation: constitutes the gold standard used for evaluating the performance of the classifiers.
This dataset was manually annotated by three experts in an independent manner. The gold standard annotation
was consolidated by using the same criterion as for the previous dataset\footnote{In this case,
inter-annotator agreement was above 80\%, and total disagreement among annotators occurred in only 1 sentence
out of 500.}. This dataset contains a total of 500 sentences, with a vocabulary of 2004 words.
\end{enumerate}
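The consolidation criterion described in the second item amounts to a simple majority vote with a ``neutral'' fallback.
The following minimal Python sketch illustrates this rule (the function name and label strings are illustrative only,
not taken from the actual annotation tooling):
\begin{verbatim}
from collections import Counter

def consolidate(labels):
    # labels: list of three strings in {"positive", "negative", "neutral"}
    # Returns the label chosen by at least two annotators, or "neutral"
    # when all three annotators disagree.
    label, count = Counter(labels).most_common(1)[0]
    return label if count >= 2 else "neutral"

# Two annotators agree on "positive":
assert consolidate(["positive", "neutral", "positive"]) == "positive"
# Total disagreement falls back to "neutral":
assert consolidate(["positive", "negative", "neutral"]) == "neutral"
\end{verbatim}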
These three datasets were constructed by randomly extracting sample sentences from an original corpus
of over 25000 comments containing more than 1000000 sentences in total. The sampling was conducted
under the following constraints: the three resulting datasets should not overlap; only sentences
containing more than 3 tokens could be extracted; and each resulting dataset had to be balanced, as far
as possible, in terms of the number of sentences per class. Table \ref{tc_corpus} presents the
distribution of sentences per class for each of the three datasets.
\begin{table}
\begin{tabular}{|l|l|l|l|}
\hline
&Baseline &Annotated &Evaluation \\
\hline
Positive &1882 &341 &200 \\
\hline
Negative &1876 &323 &137 \\
\hline
Neutral &1812 &336 &161 \\
\hline
Totals &5570 &1000 &500 \\
\hline
\end{tabular}
\caption{Sentence-per-class distributions for baseline, annotated and evaluation datasets.}
\label{tc_corpus}
\end{table}
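As an additional illustration of the sampling constraints listed above (non-overlapping datasets, sentences longer
than 3 tokens, and approximate per-class balance), a possible reconstruction of the extraction procedure is sketched
below in Python; it is an assumed approximation, not the actual script used to build the corpora:
\begin{verbatim}
import random
from collections import defaultdict

def split_balanced(sentences, sizes, seed=0):
    # sentences: list of (text, label) pairs, with label in
    #            {"positive", "negative", "neutral"}
    # sizes:     target dataset sizes, e.g. [5570, 1000, 500]
    rng = random.Random(seed)
    # Keep only sentences containing more than 3 tokens.
    pool = [(t, y) for t, y in sentences if len(t.split()) > 3]
    by_class = defaultdict(list)
    for t, y in pool:
        by_class[y].append((t, y))
    for items in by_class.values():
        rng.shuffle(items)
    splits = []
    for size in sizes:
        per_class = size // len(by_class)
        split = []
        for items in by_class.values():
            # pop() removes sampled sentences from the pool,
            # so the resulting datasets cannot overlap.
            split.extend(items.pop() for _ in range(per_class))
        splits.append(split)
    return splits
\end{verbatim}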
\subsection{Experimental settings}
As mentioned above, a simple SVM-based supervised classification approach was adopted for the
polarity detection task. Two different groups of classifiers were
considered: a baseline or reference group and a contrastive group. Classifiers within these two groups were
trained with data samples extracted from the baseline and the annotated datasets, respectively. Within each group
of classifiers, three different binary classification subtasks were considered: positive/not\_positive,
negative/not\_negative and neutral/not\_neutral. All trained binary classifiers were evaluated by computing
precision and recall for each considered class, as well as overall classification accuracy, over the
evaluation dataset.
A feature space representation of the data was constructed using the standard bag-of-words approach,
so that a sparse vector was obtained for each sentence in the datasets. No stop-word removal was
conducted before computing the vector models, and standard normalization and TF-IDF weighting schemes were applied.
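The sketch below shows this feature extraction step with scikit-learn's \texttt{TfidfVectorizer}; the draft note at the
beginning of this section suggests the original experiments were run in Matlab, so this is only an assumed Python
equivalent with toy data:
\begin{verbatim}
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy sentences standing in for the real datasets.
train_sentences = ["the room was great", "the staff was rude",
                   "the hotel is in the city centre"]
eval_sentences = ["great staff and a great room"]

# Bag-of-words with TF-IDF weighting and L2 length normalisation;
# no stop-word removal, matching the setup described above.
vectorizer = TfidfVectorizer(norm="l2", stop_words=None)
X_train = vectorizer.fit_transform(train_sentences)  # sparse vectors
X_eval = vectorizer.transform(eval_sentences)
\end{verbatim}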
Multiple-fold cross-validation was used in all experiments to account for the statistical variability of the
data: twenty independent realizations were conducted for each experiment and,
instead of individual output results, mean values and standard deviations of the evaluation metrics are reported.
Each binary classifier realization was trained on a random subsample of 600 sentences extracted from
the training dataset corresponding to the classifier group, i.e. the baseline dataset for the reference systems
and the annotated dataset for the contrastive systems. Training subsamples were always balanced with respect to
the original three categories: ``positive'', ``negative'' and ``neutral''.
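For concreteness, a minimal sketch of one such realization is given below for the positive/not\_positive subtask,
using scikit-learn's \texttt{LinearSVC} as a stand-in for the SVM implementation actually used; the variable names,
the subsampling helper and the choice of \texttt{LinearSVC} are assumptions for illustration only. Repeating
\texttt{run\_realization} over twenty seeds and averaging the returned metrics corresponds to the style of reporting
used in the tables below.
\begin{verbatim}
import random
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score)

def balanced_subsample(data, n_total, rng):
    # Draw n_total (text, label) pairs, balanced over the three
    # original classes.
    per_class = n_total // 3
    sample = []
    for label in ("positive", "negative", "neutral"):
        items = [d for d in data if d[1] == label]
        sample.extend(rng.sample(items, per_class))
    return sample

def run_realization(train_data, eval_data, target="positive", seed=0):
    # One realization of the target/not_target binary subtask.
    rng = random.Random(seed)
    texts, labels = zip(*balanced_subsample(train_data, 600, rng))
    y_train = [1 if y == target else 0 for y in labels]

    vectorizer = TfidfVectorizer(norm="l2")   # bag-of-words + TF-IDF
    clf = LinearSVC().fit(vectorizer.fit_transform(texts), y_train)

    eval_texts, eval_labels = zip(*eval_data)
    y_true = [1 if y == target else 0 for y in eval_labels]
    y_pred = clf.predict(vectorizer.transform(eval_texts))
    return (precision_score(y_true, y_pred),
            recall_score(y_true, y_pred),
            accuracy_score(y_true, y_pred))
\end{verbatim}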
\subsection{Results and discussion}
Table \ref{tc_pre_rec} presents the resulting average values of precision and recall for each considered class,
for classifiers trained with either the baseline or the annotated dataset. As can be observed in the table, with the
exception of recall for class ``negative'' and precision for class ``not\_negative'', both metrics improve substantially
when the annotated dataset is used for training the classifiers. The largest improvements
are observed for ``neutral'' precision and recall, and for ``positive'' precision.
\begin{table}
\begin{tabular}{|l|l|l|l|l|}
\hline
 &\multicolumn{2}{c|}{baseline} &\multicolumn{2}{c|}{annotated} \\
\hline
class &precision &recall &precision &recall \\
\hline
positive &54.23 (3.52) &44.65 (3.68) &68.33 (3.09) &53.65 (2.93) \\
\hline
not\_positive &66.88 (1.79) &74.75 (2.85) &72.88 (1.21) &83.28 (2.53) \\
\hline
negative &40.49 (3.22) &39.93 (4.18) &44.96 (2.08) &38.26 (5.38) \\
\hline
not\_negative &77.16 (1.27) &77.53 (2.33) &77.69 (1.07) &82.02 (2.92) \\
\hline
neutral &34.37 (3.57) &31.43 (7.93) &49.69 (3.39) &50.43 (5.60) \\
\hline
not\_neutral &68.75 (1.60) &71.72 (5.84) &76.26 (1.89) &75.65 (2.92) \\
\hline
\end{tabular}
\caption{Average precision and average recall (with standard deviations in parentheses)
for each considered class, for classifiers trained with either the baseline or the annotated dataset.}
\label{tc_pre_rec}
\end{table}
Table \ref{tc_accu} presents the resulting average values of accuracy for each considered subtask,
for classifiers trained with either the baseline or the annotated dataset. As can be observed in the table,
all subtasks benefit from using the annotated dataset for training the classifiers; however, it is
worth noting that while similar absolute gains are observed for the ``positive/not\_positive'' and ``neutral/not\_neutral''
subtasks, the ``negative/not\_negative'' subtask gains much less than the other two.
\begin{table}
\begin{tabular}{|l|l|l|}
\hline
classifier &baseline &annotated \\
\hline
positive/not\_positive &62.69 (2.35) &71.40 (1.64) \\
\hline
negative/not\_negative &67.13 (1.90) &69.92 (1.19) \\
\hline
neutral/not\_neutral &58.72 (2.55) &67.52 (2.10) \\
\hline
\end{tabular}
\caption{Average accuracy (with standard deviations in parentheses)
for each classification subtask, trained with either the baseline or the annotated dataset.}
\label{tc_accu}
\end{table}
Considering all the evaluation metrics together, the benefit provided by the availability of human-annotated data
is evident for classes ``neutral'' and ``positive''. In the case of class ``negative'', although some
gain is also observed, the benefit of the human-annotated data does not seem to be as large as for the other two
classes. This, along with the fact that the ``negative/not\_negative'' subtask is actually the best performing
one (in terms of accuracy) when the baseline training data is used, might suggest that low-rating comments contain
a better representation of sentences belonging to class ``negative'' than medium- and high-rating comments do with
respect to classes ``neutral'' and ``positive''.
In any case, this experimental work verifies the feasibility of constructing training datasets for
opinionated content analysis by using AMT, and it provides an approximate idea of the costs involved in the generation
of this type of resource.