We address the challenging task of \textit{computational natural language inference}, by which we mean bridging two or more natural language texts while also providing an explanation of how they are connected. In the context of question answering (i.e., finding short answers to natural language questions), this inference connects a question with its answer, and we use machine learning to approximate it.
Here we particularly focus our efforts on developing methods that are both robust, i.e., flexible enough to handle language variation, and interpretable.
In particular, we present four approaches to question answering, each of which significantly improves performance over baseline methods.
In our first approach, we make use of the discourse structure inherent in free text (i.e., whether the text contains an \textit{explanation}, an \textit{elaboration}, a \textit{contrast}, etc.) to increase the amount of training data for, and subsequently the performance of, a monolingual alignment model.
In our second approach, we propose a framework for training customized lexical-semantics models such that each one represents a single semantic relation. Using \textit{causality} as a use case, we demonstrate that our customized model both identifies causal relations and significantly improves our ability to answer causal questions.
We then propose two approaches that answer questions by learning to rank human-readable justifications for candidate answers, such that the model selects the answer with the best justification. The first uses a graph-structured representation of the background knowledge and performs information aggregation to construct multi-sentence justifications. The second reduces pre-processing costs by limiting itself to single-sentence justifications and using a neural network to learn a latent representation of the background knowledge. For each of these approaches, we show that, in addition to significantly improving question-answering performance, we also outperform a strong baseline on the quality of the answer justifications produced.
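Schematically, both justification-ranking approaches can be read as jointly scoring answers and their justifications: with $q$ a question, $\mathcal{A}$ its candidate answers, $\mathcal{J}(a)$ the candidate justifications for answer $a$, and $s(\cdot)$ a learned scoring function (notation introduced here for illustration only, not the exact formulation used in this work), the selected answer is
% A minimal sketch of justification-based answer selection; the scoring
% function s and the candidate sets \mathcal{A} and \mathcal{J}(a) are
% illustrative placeholders, not the models' actual parameterization.
\[
\hat{a} \;=\; \operatorname*{arg\,max}_{a \in \mathcal{A}} \; \max_{j \in \mathcal{J}(a)} \; s(q, a, j).
\]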