gurt 23 ingestion (#2405)
* ingested CxGs+NLP 2023, closes #2231.

* ingested DepLing, closes #2229.

* ingested TLT, closes #2228.

* ingested udw, closes #2230.
xinru1414 authored Mar 2, 2023
1 parent 34a9e35 commit cf4550d
Showing 7 changed files with 461 additions and 0 deletions.
124 changes: 124 additions & 0 deletions data/xml/2023.cxgsnlp.xml
@@ -0,0 +1,124 @@
<?xml version='1.0' encoding='UTF-8'?>
<collection id="2023.cxgsnlp">
<volume id="1" ingest-date="2023-02-25">
<meta>
<booktitle>Proceedings of the First International Workshop on Construction Grammars and NLP (CxGs+NLP, GURT/SyntaxFest 2023)</booktitle>
<editor><first>Claire</first><last>Bonial</last></editor>
<editor><first>Harish</first><last>Tayyar Madabushi</last></editor>
<publisher>Association for Computational Linguistics</publisher>
<address>Washington, D.C.</address>
<month>March</month>
<year>2023</year>
<url hash="70de1bde">2023.cxgsnlp-1</url>
<venue>cxgsnlp</venue>
</meta>
<frontmatter>
<url hash="5b6b5b9e">2023.cxgsnlp-1.0</url>
<bibkey>cxgsnlp-2023-international</bibkey>
</frontmatter>
<paper id="1">
<title>Exploring the Constructicon: Linguistic Analysis of a Computational <fixed-case>C</fixed-case>x<fixed-case>G</fixed-case></title>
<author><first>Jonathan</first><last>Dunn</last></author>
<pages>1-11</pages>
<abstract>Recent work has formulated the task for computational construction grammar as producing a constructicon given a corpus of usage. Previous work has evaluated these unsupervised grammars using both internal metrics (for example, Minimum Description Length) and external metrics (for example, performance on a dialectology task). This paper instead takes a linguistic approach to evaluation, first learning a constructicon and then analyzing its contents from a linguistic perspective. This analysis shows that a learned constructicon can be divided into nine major types of constructions, of which Verbal and Nominal are the most common. The paper also shows that both the token and type frequency of constructions can be used to model variation across registers and dialects.</abstract>
<url hash="2e0ab2f4">2023.cxgsnlp-1.1</url>
<bibkey>dunn-2023-exploring</bibkey>
</paper>
<paper id="2">
<title>Constructions, Collocations, and Patterns: Alternative Ways of Construction Identification in a Usage-based, Corpus-driven Theoretical Framework</title>
<author><first>Gábor</first><last>Simon</last></author>
<pages>12-20</pages>
<abstract>There is a serious theoretical and methodological dilemma in usage-based construction grammar: how to identify constructions based on corpus pattern analysis. The present paper provides an overview of this dilemma, focusing on argument structure constructions (ASCs) in general. It seeks to answer the question of how a data-driven construction grammatical description can be built on the collocation data extracted from corpora. The study is of meta-scientific interest: it compares theoretical proposals in construction grammar regarding how they handle co-occurrences emerging from a corpus. Discussing alternative bottom-up approaches to the notion of construction, the paper concludes that there is no one-to-one correspondence between corpus patterns and constructions. Therefore, a careful analysis of the former can empirically ground both the identification and the description of constructions.</abstract>
<url hash="8e068e9b">2023.cxgsnlp-1.2</url>
<bibkey>simon-2023-constructions</bibkey>
</paper>
<paper id="3">
<title><fixed-case>CAL</fixed-case>a<fixed-case>M</fixed-case>o: a Constructionist Assessment of Language Models</title>
<author><first>Ludovica</first><last>Pannitto</last></author>
<author><first>Aurélie</first><last>Herbelot</last></author>
<pages>21-30</pages>
<abstract>This paper presents a novel framework for evaluating Neural Language Models’ linguistic abilities using a constructionist approach. Not only is the usage-based model in line with the underlying stochastic philosophy of neural architectures, but it also allows the linguist to keep meaning as a determinant factor in the analysis. We outline the framework and present two possible scenarios for its application.</abstract>
<url hash="e3507152">2023.cxgsnlp-1.3</url>
<bibkey>pannitto-herbelot-2023-calamo</bibkey>
</paper>
<paper id="4">
<title>High-dimensional vector spaces can accommodate constructional features quite conveniently</title>
<author><first>Jussi</first><last>Karlgren</last><affiliation>Numolo</affiliation></author>
<pages>31-35</pages>
<abstract>Current language processing tools presuppose input in the form of a sequence of high-dimensional vectors with continuous values. Lexical items can be converted to such vectors with standard methodology and subsequent processing is assumed to handle structural features of the string. Constructional features do typically not fit in that processing pipeline: they are not as clearly sequential, they overlap with other items, and the fact that they are combinations of lexical items obscures their ontological status as observable linguistic items in their own right. Constructional grammar frameworks allow for a more general view on how to understand lexical items and their configurations in a common framework. This paper introduces an approach to accommodate that understanding in a vector symbolic architecture, a processing framework which allows for combinations of continuous vectors and discrete items, convenient for various downstream processing using e.g. neural processing or other tools which expect input in vector form.</abstract>
<url hash="de51b87f">2023.cxgsnlp-1.4</url>
<bibkey>karlgren-2023-high</bibkey>
</paper>
<paper id="5">
<title>Constructivist Tokenization for <fixed-case>E</fixed-case>nglish</title>
<author><first>Allison</first><last>Fan</last></author>
<author><first>Weiwei</first><last>Sun</last></author>
<pages>36-40</pages>
<abstract>This paper revisits tokenization from a theoretical perspective, and argues for the necessity of a constructivist approach to tokenization for semantic parsing and modeling language acquisition. We consider two problems: (1) (semi-) automatically converting existing lexicalist annotations, e.g. those of the Penn TreeBank, into constructivist annotations, and (2) automatic tokenization of raw texts. We demonstrate that (1) a heuristic rule-based constructivist tokenizer is able to yield relatively satisfactory accuracy when gold standard Penn TreeBank part-of-speech tags are available, but that some manual annotations are still necessary to obtain gold standard results, and (2) a neural tokenizer is able to provide accurate automatic constructivist tokenization results from raw character sequences. Our research output also includes a set of high-quality morpheme-tokenized corpora, which enable the training of computational models that more closely align with language comprehension and acquisition.</abstract>
<url hash="0963dbef">2023.cxgsnlp-1.5</url>
<bibkey>fan-sun-2023-constructivist</bibkey>
</paper>
<paper id="6">
<title>Fluid Construction Grammar: State of the Art and Future Outlook</title>
<author><first>Katrien</first><last>Beuls</last></author>
<author><first>Paul</first><last>Van Eecke</last></author>
<pages>41-50</pages>
<abstract>Fluid Construction Grammar (FCG) is a computational framework that provides a formalism for representing construction grammars and a processing engine that supports construction- based language comprehension and production. FCG is conceived as a computational operationalisation of the basic tenets of construction grammar. It thereby aims to establish more solid foundations for constructionist theories of language, while expanding their application potential in the fields of artificial intelligence and natural language understanding. This paper aims to provide a brief introduction to the FCG research programme, reflecting on what has been achieved so far and identifying key challenges for the future.</abstract>
<url hash="526b5395">2023.cxgsnlp-1.6</url>
<bibkey>beuls-van-eecke-2023-fluid</bibkey>
</paper>
<paper id="7">
<title>An Argument Structure Construction Treebank</title>
<author><first>Kristopher</first><last>Kyle</last></author>
<author><first>Hakyung</first><last>Sung</last></author>
<pages>51-62</pages>
<abstract>In this paper we introduce a freely available treebank that includes argument structure construction (ASC) annotation. We then use the treebank to train probabilistic annotation models that rely on verb lemmas and/or syntactic frames. We also use the treebank data to train a highly accurate transformer-based annotation model (F1 = 91.8%). Future directions for the development of the treebank and annotation models are discussed.</abstract>
<url hash="b4640bbe">2023.cxgsnlp-1.7</url>
<bibkey>kyle-sung-2023-argument</bibkey>
</paper>
<paper id="8">
<title>Investigating Stylistic Profiles for the Task of Empathy Classification in Medical Narrative Essays</title>
<author><first>Priyanka</first><last>Dey</last></author>
<author><first>Roxana</first><last>Girju</last></author>
<pages>63-74</pages>
<abstract>One important aspect of language is how speakers generate utterances and texts to convey their intended meanings. In this paper, we bring various aspects of the Construction Grammar (CxG) and the Systemic Functional Grammar (SFG) theories in a deep learning computational framework to model empathic language. Our corpus consists of 440 essays written by premed students as narrated simulated patient–doctor interactions. We start with baseline classifiers (state-of-the-art recurrent neural networks and transformer models). Then, we enrich these models with a set of linguistic constructions proving the importance of this novel approach to the task of empathy classification for this dataset. Our results indicate the potential of such constructions to contribute to the overall empathy profile of first-person narrative essays.</abstract>
<url hash="fc18c12d">2023.cxgsnlp-1.8</url>
<bibkey>dey-girju-2023-investigating</bibkey>
</paper>
<paper id="9">
<title><fixed-case>UMR</fixed-case> annotation of <fixed-case>C</fixed-case>hinese Verb compounds and related constructions</title>
<author><first>Haibo</first><last>Sun</last></author>
<author><first>Yifan</first><last>Zhu</last></author>
<author><first>Jin</first><last>Zhao</last></author>
<author><first>Nianwen</first><last>Xue</last></author>
<pages>75-84</pages>
<abstract>This paper discusses the challenges of annotating the predicate-argument structure of Chinese verb compounds in Uniform Meaning Representation (UMR), a recent meaning representation framework that extends Abstract Meaning Representation (AMR) to cross-linguistic settings. The key issue is to decide whether to annotate the argument structure of a verb compound as a whole, or to annotate the argument structure of their component verbs as well as the relations between them. We examine different types of Chinese verb compounds, and propose how to annotate them based on the principle of compositionality, level of grammaticalization, and productivity of component verbs. We propose a solution to the practical problem of having to define the semantic roles for Chinese verb compounds that are quite open-ended by separating compositional verb compounds from verb compounds that are non-compositional or have grammaticalized verb components. For compositional verb compounds, instead of annotating the argument structure of the verb compound as a whole, we annotate the argument structure of the component verbs as well as the semantic relations between them as creating an exhaustive list of such verb compounds is infeasible. Verb compounds with grammaticalized verb components also tend to be productive and we represent grammaticalized verb compounds as either attributes of the primary verb or as relations.</abstract>
<url hash="f5ed00ab">2023.cxgsnlp-1.9</url>
<bibkey>sun-etal-2023-umr</bibkey>
</paper>
<paper id="10">
<title>Construction Grammar Provides Unique Insight into Neural Language Models</title>
<author><first>Leonie</first><last>Weissweiler</last></author>
<author><first>Taiqi</first><last>He</last></author>
<author><first>Naoki</first><last>Otani</last></author>
<author><first>David R.</first><last>Mortensen</last></author>
<author><first>Lori</first><last>Levin</last></author>
<author><first>Hinrich</first><last>Schütze</last></author>
<pages>85-95</pages>
<abstract>Construction Grammar (CxG) has recently been used as the basis for probing studies that have investigated the performance of large pretrained language models (PLMs) with respect to the structure and meaning of constructions. In this position paper, we make suggestions for the continuation and augmentation of this line of research. We look at probing methodology that was not designed with CxG in mind, as well as probing methodology that was designed for specific constructions. We analyse selected previous work in detail, and provide our view of the most important challenges and research questions that this promising new field faces.</abstract>
<url hash="58154862">2023.cxgsnlp-1.10</url>
<bibkey>weissweiler-etal-2023-construction</bibkey>
</paper>
<paper id="11">
<title>Modeling Construction Grammar’s Way into <fixed-case>NLP</fixed-case>: Insights from negative results in automatically identifying schematic clausal constructions in <fixed-case>B</fixed-case>razilian <fixed-case>P</fixed-case>ortuguese</title>
<author><first>Arthur</first><last>Lorenzi</last></author>
<author><first>Vânia</first><last>Gomes de Almeida</last></author>
<author><first>Ely</first><last>Edison Matos</last></author>
<author><first>Tiago</first><last>Timponi Torrent</last></author>
<pages>96-109</pages>
<abstract>This paper reports on negative results in a task of automatic identification of schematic clausal constructions and their elements in Brazilian Portuguese. The experiment was set up so as to test whether form and meaning properties of constructions, modeled in terms of Universal Dependencies and FrameNet Frames in a Constructicon, would improve the performance of transformer models in the task. Qualitative analysis of the results indicates that alternatives to the linearization of those properties, dataset size and a post-processing module should be explored in the future as a means to make use of information in Constructicons for NLP tasks.</abstract>
<url hash="85fb4453">2023.cxgsnlp-1.11</url>
<bibkey>lorenzi-etal-2023-modeling</bibkey>
</paper>
</volume>
</collection>
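
For reference, the ingested volume can be sanity-checked by listing its papers with the Python standard library. A minimal sketch (assuming it is run from the repository root; the script name is illustrative and not part of this commit):

# list_papers.py -- illustrative sketch, not part of this commit
import xml.etree.ElementTree as ET

# Parse the volume file added in this commit (path relative to the repo root)
tree = ET.parse("data/xml/2023.cxgsnlp.xml")

for paper in tree.iter("paper"):
    # Titles can contain inline markup such as <fixed-case>;
    # itertext() flattens them to plain text.
    title = "".join(paper.find("title").itertext())
    pages = paper.findtext("pages")
    print(f"{paper.get('id'):>2}  pp. {pages:<7} {title}")

Run against this commit, it should print one line per <paper> element, eleven in total.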