Publications from Uppsala University (uu.se)
Publications (10 of 19)
Hershcovich, D., Frank, S., Lent, H., de Lhoneux, M., Abdou, M., Brandl, S., . . . Søgaard, A. (2022). Challenges and Strategies in Cross-Cultural NLP. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers. Paper presented at 60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland (pp. 6997-7013). Association for Computational Linguistics
Challenges and Strategies in Cross-Cultural NLP
2022 (English). In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, Association for Computational Linguistics, 2022, p. 6997-7013. Conference paper, Published paper (Refereed)
Abstract [en]

Various efforts in the Natural Language Processing (NLP) community have been made to accommodate linguistic diversity and serve speakers of many different languages. However, it is important to acknowledge that speakers, and the content they produce and require, vary not just by language but also by culture. Although language and culture are tightly linked, there are important differences. Analogous to cross-lingual and multilingual NLP, cross-cultural and multicultural NLP considers these differences in order to better serve users of NLP systems. We propose a principled framework to frame these efforts, and survey existing and potential strategies.

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2022
Series
Proceedings of the conference - Association for Computational Linguistics, ISSN 0736-587X
National Category
Natural Language Processing
Identifiers
urn:nbn:se:uu:diva-484753 (URN), 10.18653/v1/2022.acl-long.482 (DOI), 000828702307009 (ISI), 978-1-955917-21-6 (ISBN)
Conference
60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland
Funder
Swedish Research Council, 2020-00437; EU, Horizon 2020, 801199
Available from: 2022-09-16 Created: 2022-09-16 Last updated: 2025-02-07. Bibliographically approved.
Milewski, V., de Lhoneux, M. & Moens, M.-F. (2022). Finding Structural Knowledge in Multimodal-BERT. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers. Paper presented at 60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland (pp. 5658-5671). Association for Computational Linguistics
Finding Structural Knowledge in Multimodal-BERT
2022 (English). In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), Vol. 1: Long Papers, Association for Computational Linguistics, 2022, p. 5658-5671. Conference paper, Published paper (Refereed)
Abstract [en]

In this work, we investigate the knowledge learned in the embeddings of multimodal-BERT models. More specifically, we probe their ability to store the grammatical structure of linguistic data and the structure learned over objects in visual data. To reach that goal, we first make the inherent structure of language and visuals explicit by a dependency parse of the sentences that describe the image and by the dependencies between the object regions in the image, respectively. We call this explicit visual structure the scene tree, which is based on the dependency tree of the language description. Extensive probing experiments show that the multimodal-BERT models do not encode these scene trees.
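A compact way to picture this probing setup is a distance-based structural probe in the spirit of Hewitt and Manning (2019). The sketch below is a minimal, hypothetical illustration assuming frozen multimodal-BERT embeddings and gold pairwise tree distances; class and function names are illustrative, not the paper's code.

```python
# Hypothetical sketch: a linear structural probe over frozen embeddings.
import torch

class StructuralProbe(torch.nn.Module):
    """Projects frozen embeddings so that squared L2 distance in the
    projected space approximates distance in the gold (scene or
    dependency) tree."""
    def __init__(self, dim: int, rank: int = 64):
        super().__init__()
        self.proj = torch.nn.Linear(dim, rank, bias=False)

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (nodes, dim) -> (nodes, nodes) predicted pairwise distances
        h = self.proj(emb)                      # (nodes, rank)
        diff = h.unsqueeze(1) - h.unsqueeze(0)  # (nodes, nodes, rank)
        return (diff ** 2).sum(-1)

def probe_loss(predicted: torch.Tensor, gold: torch.Tensor) -> torch.Tensor:
    """L1 loss between predicted and gold tree distances; low held-out
    loss suggests the embeddings encode the tree structure."""
    return (predicted - gold).abs().mean()
```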

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2022
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:uu:diva-484791 (URN), 000828702305053 (ISI), 978-1-955917-21-6 (ISBN)
Conference
60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland
Funder
EU, European Research Council, 788506; Swedish Research Council, 2020-00437
Available from: 2022-09-19 Created: 2022-09-19 Last updated: 2024-01-15. Bibliographically approved.
Lent, H., Ogueji, K., de Lhoneux, M., Ahia, O. & Søgaard, A. (2022). What a Creole Wants, What a Creole Needs. In: Calzolari, N., Bechet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mazo, H., Odijk, H. & Piperidis, S. (Eds.), LREC 2022: Thirteenth International Conference on Language Resources and Evaluation. Paper presented at 13th International Conference on Language Resources and Evaluation (LREC), June 20-25, 2022, Marseille, France (pp. 6439-6449). European Language Resources Association
What a Creole Wants, What a Creole Needs
2022 (English). In: LREC 2022: Thirteenth International Conference on Language Resources and Evaluation / [ed] Calzolari, N., Bechet, F., Blache, P., Choukri, K., Cieri, C., Declerck, T., Goggi, S., Isahara, H., Maegaard, B., Mazo, H., Odijk, H. & Piperidis, S., European Language Resources Association, 2022, p. 6439-6449. Conference paper, Published paper (Refereed)
Abstract [en]

In recent years, the natural language processing (NLP) community has given increased attention to the disparity of efforts directed towards high-resource languages over low-resource ones. Efforts to remedy this delta often begin with translations of existing English datasets into other languages. However, this approach ignores that different language communities have different needs. We consider a group of low-resource languages: Creole languages. Creoles are largely absent from the NLP literature and often ignored by society at large due to stigma, despite having sizable and vibrant communities. We demonstrate, through conversations with Creole experts and surveys of Creole-speaking communities, how the things needed from language technology can change dramatically from one language to another, even when the languages are considered to be very similar to each other, as with Creoles. We discuss the prominent themes arising from these conversations, and ultimately demonstrate that useful language technology cannot be built without involving the relevant community.

Place, publisher, year, edition, pages
European Language Resources Association, 2022
Keywords
natural language processing, low-resource languages, Creole
National Category
General Language Studies and Linguistics
Identifiers
urn:nbn:se:uu:diva-497159 (URN), 000889371706059 (ISI), 979-10-95546-72-6 (ISBN)
Conference
13th International Conference on Language Resources and Evaluation (LREC), June 20-25, 2022, Marseille, France
Funder
Swedish Research Council, 2020-00437; EU, Horizon 2020, 801199
Available from: 2023-02-27 Created: 2023-02-27 Last updated: 2023-02-27. Bibliographically approved.
de Lhoneux, M., Zhang, S. & Søgaard, A. (2022). Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022): (Short Papers), Vol. 2. Paper presented at 60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland (pp. 578-587). Association for Computational Linguistics
Zero-Shot Dependency Parsing with Worst-Case Aware Automated Curriculum Learning
2022 (English). In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022): (Short Papers), Vol. 2, Association for Computational Linguistics, 2022, p. 578-587. Conference paper, Published paper (Refereed)
Abstract [en]

Large multilingual pretrained language models such as mBERT and XLM-RoBERTa have been found to be surprisingly effective for cross-lingual transfer of syntactic parsing models (Wu and Dredze, 2019), but only between related languages. However, source and training languages are rarely related when parsing truly low-resource languages. To close this gap, we adopt a method from multi-task learning, which relies on automated curriculum learning, to dynamically optimize for parsing performance on outlier languages. We show that this approach is significantly better than uniform and size-proportional sampling in the zero-shot setting.
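The sampling idea admits a compact sketch. Below is a minimal, hypothetical illustration of worst-case aware sampling, assuming per-language dev losses are available; the function name and the temperature parameter are inventions for illustration, not the paper's actual algorithm.

```python
# Hypothetical sketch: shift sampling weights toward the languages the
# parser currently does worst on, rather than sampling treebanks
# uniformly or in proportion to their size.
import numpy as np

def worst_case_aware_weights(dev_losses, tau=1.0):
    """Softmax over per-language dev losses: higher loss -> higher
    probability of being sampled in the next training round."""
    langs = sorted(dev_losses)
    scores = np.array([dev_losses[lang] / tau for lang in langs])
    probs = np.exp(scores - scores.max())  # numerically stable softmax
    probs /= probs.sum()
    return dict(zip(langs, probs))

# Example: the outlier language ("ar" here) gets the largest weight.
print(worst_case_aware_weights({"en": 0.4, "ar": 0.9, "zh": 0.6}))
```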

Place, publisher, year, edition, pages
Association for Computational Linguistics, 2022
National Category
Natural Language Processing
Identifiers
urn:nbn:se:uu:diva-482666 (URN), 10.18653/v1/2022.acl-short.64 (DOI), 000828732800064 (ISI), 978-1-955917-22-3 (ISBN)
Conference
60th Annual Meeting of the Association for Computational Linguistics (ACL), May 22-27, 2022, Dublin, Ireland
Funder
Swedish Research Council, 2020-00437; Google
Available from: 2022-09-28 Created: 2022-09-28 Last updated: 2025-02-07. Bibliographically approved.
de Lhoneux, M., Stymne, S. & Nivre, J. (2020). What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions? Computational Linguistics, 46(4), 763-784
What Should/Do/Can LSTMs Learn When Parsing Auxiliary Verb Constructions?
2020 (English). In: Computational Linguistics, ISSN 0891-2017, E-ISSN 1530-9312, Vol. 46, no 4, p. 763-784. Article in journal (Refereed). Published
Abstract [en]

There is a growing interest in investigating what neural NLP models learn about language. A prominent open question is whether it is necessary to model hierarchical structure. We present a linguistic investigation of a neural parser that adds insights to this question. We look at transitivity and agreement information of auxiliary verb constructions (AVCs) in comparison to finite main verbs (FMVs). This comparison is motivated by theoretical work in dependency grammar and in particular the work of Tesnière (1959), where AVCs and FMVs are both instances of a nucleus, the basic unit of syntax. An AVC is a dissociated nucleus; it consists of at least two words, and an FMV is its non-dissociated counterpart, consisting of exactly one word. We suggest that the representation of AVCs and FMVs should capture similar information. We use diagnostic classifiers to probe agreement and transitivity information in vectors learned by a transition-based neural parser in four typologically different languages. We find that the parser learns different information about AVCs and FMVs if only sequential models (BiLSTMs) are used in the architecture but similar information when a recursive layer is used. We find explanations for why this is the case by looking closely at how information is learned in the network and looking at what happens with different dependency representations of AVCs. We conclude that there may be benefits to using a recursive layer in dependency parsing and that we have not yet found the best way to integrate it in our parsers.
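The diagnostic-classifier methodology itself is simple to sketch: freeze the parser, extract its internal vectors for AVCs (or FMVs), and train a small classifier to predict the property of interest. The sketch below uses scikit-learn and an invented function name; it illustrates the general technique, not the paper's code.

```python
# Hypothetical sketch of a diagnostic classifier over frozen parser vectors.
import numpy as np
from sklearn.linear_model import LogisticRegression

def probe_property(vectors: np.ndarray, labels: np.ndarray) -> float:
    """vectors: (n, dim) parser-internal representations of AVCs or FMVs;
    labels: (n,) gold classes for e.g. agreement or transitivity.
    Returns held-out accuracy: high accuracy suggests the property is
    linearly decodable from the parser's vectors."""
    split = int(0.8 * len(vectors))
    classifier = LogisticRegression(max_iter=1000)
    classifier.fit(vectors[:split], labels[:split])
    return classifier.score(vectors[split:], labels[split:])
```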

Place, publisher, year, edition, pages
MIT Press, 2020
National Category
Natural Language Processing
Research subject
Computational Linguistics
Identifiers
urn:nbn:se:uu:diva-462769 (URN), 10.1162/coli_a_00392 (DOI)
Available from: 2022-01-02 Created: 2022-01-02 Last updated: 2025-02-07. Bibliographically approved.
Kulmizev, A., de Lhoneux, M., Gontrum, J., Fano, E. & Nivre, J. (2019). Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing – A Tale of Two Parsers Revisited. In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Paper presented at 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), November 3-7, 2019, Hong Kong, China (pp. 2755-2768).
Deep Contextualized Word Embeddings in Transition-Based and Graph-Based Dependency Parsing – A Tale of Two Parsers Revisited
2019 (English). In: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, p. 2755-2768. Conference paper, Published paper (Refereed)
Abstract [en]

Transition-based and graph-based dependency parsers have previously been shown to have complementary strengths and weaknesses: transition-based parsers exploit rich structural features but suffer from error propagation, while graph-based parsers benefit from global optimization but have restricted feature scope. In this paper, we show that, even though some details of the picture have changed after the switch to neural networks and continuous representations, the basic trade-off between rich features and global optimization remains essentially the same. Moreover, we show that deep contextualized word embeddings, which allow parsers to pack information about global sentence structure into local feature representations, benefit transition-based parsers more than graph-based parsers, making the two approaches virtually equivalent in terms of both accuracy and error profile. We argue that the reason is that these representations help prevent search errors and thereby allow transition-based parsers to better exploit their inherent strength of making accurate local decisions. We support this explanation by an error analysis of parsing experiments on 13 languages.

National Category
Natural Language Processing
Research subject
Computational Linguistics
Identifiers
urn:nbn:se:uu:diva-406697 (URN), 000854193302085 (ISI)
Conference
2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), November 3-7, 2019, Hong Kong, China
Funder
Swedish Research Council, 2016-01817
Available from: 2020-03-11 Created: 2020-03-11 Last updated: 2025-02-07. Bibliographically approved.
de Lhoneux, M. (2019). Linguistically Informed Neural Dependency Parsing for Typologically Diverse Languages. (Doctoral dissertation). Uppsala: Acta Universitatis Upsaliensis
Linguistically Informed Neural Dependency Parsing for Typologically Diverse Languages
2019 (English). Doctoral thesis, monograph (Other academic)
Abstract [en]

This thesis presents several studies in neural dependency parsing for typologically diverse languages, using treebanks from Universal Dependencies (UD). The focus is on informing models with linguistic knowledge.

We first extend a parser to work well on typologically diverse languages, including morphologically complex languages and languages whose treebanks have a high ratio of non-projective sentences, a notorious difficulty in dependency parsing. We propose a general methodology where we sample a representative subset of UD treebanks for parser development and evaluation. Our parser uses recurrent neural networks which construct information sequentially, and we study the incorporation of a recursive neural network layer in our parser. This follows the intuition that language is hierarchical. This layer turns out to be superfluous in our parser and we study its interaction with other parts of the network.

We subsequently study transitivity and agreement information learned by our parser for auxiliary verb constructions (AVCs). We suggest that a parser should learn similar information about AVCs as it learns for finite main verbs. This is motivated by work in theoretical dependency grammar. Our parser learns different information about these two if we do not augment it with a recursive layer, but similar information if we do, indicating that there may be benefits from using that layer and we may not yet have found the best way to incorporate it in our parser.

We finally investigate polyglot parsing. Training one model for multiple related languages leads to substantial improvements in parsing accuracy over a monolingual baseline. We also study different parameter sharing strategies for related and unrelated languages. Sharing parameters that partially abstract away from word order appears to be beneficial in both cases, but sharing parameters that represent words and characters is more beneficial for related than unrelated languages.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2019. p. 178
Series
Studia Linguistica Upsaliensia, ISSN 1652-1366; 24
Keywords
Dependency parsing, multilingual NLP, Universal Dependencies, Linguistically informed NLP
National Category
General Language Studies and Linguistics
Research subject
Computational Linguistics
Identifiers
urn:nbn:se:uu:diva-394133 (URN), 978-91-513-0767-1 (ISBN)
Public defence
2019-11-25, Bertil Hammer, Blåsenhus, von Kraemers Allé 1, Uppsala, 13:15 (English)
Available from: 2019-10-28 Created: 2019-10-03 Last updated: 2023-03-13
Basirat, A., de Lhoneux, M., Kulmizev, A., Kurfalı, M., Nivre, J. & Östling, R. (2019). Polyglot Parsing for One Thousand and One Languages (And Then Some). Paper presented at First Workshop on Typology for Polyglot NLP, Florence, Italy, August 1, 2019.
Polyglot Parsing for One Thousand and One Languages (And Then Some)
2019 (English). Conference paper, Poster (with or without abstract) (Other academic)
National Category
General Language Studies and Linguistics; Natural Language Processing
Identifiers
urn:nbn:se:uu:diva-392156 (URN)
Conference
First Workshop on Typology for Polyglot NLP, Florence, Italy, August 1, 2019
Available from: 2019-08-29 Created: 2019-08-29 Last updated: 2025-02-01. Bibliographically approved.
de Lhoneux, M., Ballesteros, M. & Nivre, J. (2019). Recursive Subtree Composition in LSTM-Based Dependency Parsing. In: Jill Burstein, Christy Doran & Thamar Solorio (Eds.), Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). Paper presented at 2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, June 2-7, 2019 (pp. 1566-1576). Stroudsburg: Association for Computational Linguistics
Recursive Subtree Composition in LSTM-Based Dependency Parsing
2019 (English). In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) / [ed] Jill Burstein; Christy Doran; Thamar Solorio, Stroudsburg: Association for Computational Linguistics, 2019, p. 1566-1576. Conference paper, Published paper (Refereed)
Abstract [en]

The need for tree structure modelling on top of sequence modelling is an open issue in neural dependency parsing. We investigate the impact of adding a tree layer on top of a sequential model by recursively composing subtree representations (composition) in a transition-based parser that uses features extracted by a BiLSTM. Composition seems superfluous with such a model, suggesting that BiLSTMs capture information about subtrees. We perform model ablations to tease out the conditions under which composition helps. When ablating the backward LSTM, performance drops and composition does not recover much of the gap. When ablating the forward LSTM, performance drops less dramatically and composition recovers a substantial part of the gap, indicating that a forward LSTM and composition capture similar information. We take the backward LSTM to be related to lookahead features and the forward LSTM to rich history-based features, both crucial for transition-based parsers. To capture history-based information, composition is better than a forward LSTM on its own, but it is even better to have a forward LSTM as part of a BiLSTM. We correlate results with language properties, showing that the improved lookahead of a backward LSTM is especially important for head-final languages.
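For concreteness, the composition operation being ablated can be sketched as an update in which attaching a dependent recomputes the head's vector from the head and dependent vectors, so subtree information accumulates recursively. The tanh combiner and the names below are assumptions for illustration, not the parser's actual composition function.

```python
# Hypothetical sketch: recursive subtree composition in a transition-based
# parser. After an attachment transition, the head's representation is
# recomputed from the head and dependent vectors, so it summarizes the
# subtree built so far.
import torch

class Composition(torch.nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.combine = torch.nn.Linear(2 * dim, dim)

    def forward(self, head: torch.Tensor, dependent: torch.Tensor) -> torch.Tensor:
        # New head vector for the grown subtree.
        return torch.tanh(self.combine(torch.cat([head, dependent], dim=-1)))

# Usage: after attaching `dep` to `head`, replace the head's vector:
#   head_vec = composition(head_vec, dep_vec)
```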

Place, publisher, year, edition, pages
Stroudsburg: Association for Computational Linguistics, 2019
Keywords
dependency parsing, recursive neural networks, recurrent neural networks, long short-term memory networks
National Category
General Language Studies and Linguistics
Research subject
Computational Linguistics
Identifiers
urn:nbn:se:uu:diva-395676 (URN), 000900116901063 (ISI), 978-1-950737-13-0 (ISBN)
Conference
2019 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Minneapolis, June 2-7, 2019
Funder
Swedish Research Council, 2016-01817
Available from: 2019-10-23 Created: 2019-10-23 Last updated: 2023-05-29. Bibliographically approved.
Smith, A., Bohnet, B., de Lhoneux, M., Nivre, J., Shao, Y. & Stymne, S. (2018). 82 Treebanks, 34 Models: Universal Dependency Parsing with Multi-Treebank Models. In: Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. Paper presented at Conference on Computational Natural Language Learning (CoNLL), October 31 - November 1, 2018, Brussels, Belgium (pp. 113-123).
82 Treebanks, 34 Models: Universal Dependency Parsing with Multi-Treebank Models
2018 (English). In: Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, 2018, p. 113-123. Conference paper, Published paper (Refereed)
National Category
Natural Language Processing
Research subject
Computational Linguistics
Identifiers
urn:nbn:se:uu:diva-371246 (URN)
Conference
Conference on Computational Natural Language Learning (CoNLL), October 31 - November 1, 2018, Brussels, Belgium
Available from: 2018-12-19 Created: 2018-12-19 Last updated: 2025-02-07. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-8844-2126
