A comparative evaluation and analysis of three generations of Distributional Semantic Models
RISE. ORCID iD: 0000-0002-2236-4978
2022 (English). In: Language Resources and Evaluation, ISSN 1574-020X, E-ISSN 1574-0218, Vol. 56, no. 4, p. 1269-1313. Article in journal (Refereed). Published.
Abstract [en]

Distributional semantics has changed profoundly over the last few decades. First, predict models stole the thunder from traditional count models, and more recently both have been replaced in many NLP applications by the contextualized vectors produced by neural language models. Although an extensive body of research has been devoted to Distributional Semantic Model (DSM) evaluation, we still lack a thorough comparison with respect to tested models, semantic tasks, and benchmark datasets. Moreover, previous work has mostly focused on task-driven evaluation, rather than exploring the differences in how models represent the lexical semantic space. In this paper, we perform a large-scale evaluation of type distributional vectors, either produced by static DSMs or obtained by averaging the contextualized vectors generated by BERT. First, we investigate the performance of embeddings in several semantic tasks, carrying out an in-depth statistical analysis to identify the major factors influencing the behavior of DSMs. The results show that (i) the alleged superiority of predict-based models is more apparent than real, and certainly not ubiquitous, and (ii) static DSMs surpass BERT representations in most out-of-context semantic tasks and datasets. Furthermore, we borrow from cognitive neuroscience the methodology of Representational Similarity Analysis (RSA) to inspect the semantic spaces generated by distributional models. RSA reveals important differences related to the frequency and part-of-speech of lexical items.
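
The two methodological ingredients named in the abstract, type-level vectors obtained by averaging contextualized token vectors and RSA as a second-order comparison of semantic spaces, can be made concrete with a short sketch. The snippet below is illustrative only: it uses random stand-in vectors and a toy vocabulary rather than the paper's models, tasks, or data, and `type_vector` and `rdm` are hypothetical helper names, not the authors' code.

```python
# Illustrative sketch (not the paper's code): type-level vectors by
# averaging contextualized token vectors, then RSA between two spaces.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
vocab = ["dog", "cat", "run", "walk", "idea", "theory"]

def type_vector(token_vectors):
    """Type-level embedding: the mean of a word's contextualized
    vectors across its occurrences (the averaging strategy the
    abstract describes for BERT)."""
    return np.asarray(token_vectors).mean(axis=0)

# Stand-in spaces: rows are words, columns are dimensions.
static_space = rng.normal(size=(len(vocab), 300))  # e.g. a static DSM
bert_space = np.stack([
    type_vector(rng.normal(size=(10, 768)))        # 10 fake occurrences/word
    for _ in vocab
])

def rdm(space):
    """Representational dissimilarity matrix: condensed vector of
    pairwise cosine distances between all word pairs."""
    return pdist(space, metric="cosine")

# RSA: correlate the pairwise-distance structure of the two spaces.
rho, _ = spearmanr(rdm(static_space), rdm(bert_space))
print(f"RSA (Spearman rho) between the two spaces: {rho:.3f}")
```

In an actual evaluation, the correlation would be computed over a large shared vocabulary, optionally stratified by frequency or part of speech, which is where differences of the kind reported in the abstract would surface.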

Place, publisher, year, edition, pages
2022. Vol. 56, no. 4, p. 1269-1313
National Category
Natural Language Processing
Identifiers
URN: urn:nbn:se:kth:diva-322094
DOI: 10.1007/s10579-021-09575-z
ISI: 000762866400001
Scopus ID: 2-s2.0-85125439429
OAI: oai:DiVA.org:kth-322094
DiVA, id: diva2:1715227
Note

QC 20221202

Available from: 2022-12-01. Created: 2022-12-01. Last updated: 2025-02-07. Bibliographically approved.
In thesis
1. Quantifying Meaning
2023 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [sv]

Distributional semantic models are a class of machine learning models that aim to construct representations capturing the semantics, i.e. the meaning, of meaning-bearing objects in a data-driven fashion. This thesis is particularly concerned with the construction of semantic representations of words, an endeavour with a long history in computational linguistics that has seen dramatic developments in recent years.

The primary research objective of this thesis is to explore the limits and applications of distributional semantic models of words, i.e. word embeddings. In particular, it explores the relation between model and embedding semantics, that is, how model design influences what word embeddings encode, how to reason about word embeddings, and how the properties of the model can be exploited to extract novel information from embeddings. Concretely, we introduce topologically aware neighborhood queries that enrich the information obtained from neighborhoods extracted from distributional semantic models, conditioned similarity queries (and models enabling them), concept extraction from distributional semantic models, applications of embedding models in political science, as well as a thorough evaluation of a broad range of distributional semantic models.

Abstract [en]

Distributional semantic models are a class of machine learning models with the aim of constructing representations that capture the semantics, i.e. meaning, of objects that carry meaning in a data-driven fashion. This thesis is particularly concerned with the construction of semantic representations of words, an endeavour that has a long history in computational linguistics, and that has seen dramatic developments in recent years.

The primary research objective of this thesis is to explore the limits and applications of distributional semantic models of words, i.e. word embeddings. In particular, it explores the relation between model and embedding semantics, i.e. how model design influences what our embeddings encode, how to reason about embeddings, and how properties of the model can be exploited to extract novel information from embeddings. Concretely, we introduce topologically aware neighborhood queries that enrich the information gained from neighborhoods extracted from distributional semantic models, conditioned similarity queries (and models enabling them), concept extraction from distributional semantic models, applications of embedding models in the realm of political science, as well as a thorough evaluation of a broad range of distributional semantic models.
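
The neighborhood queries mentioned above build on a single primitive: ranking the vocabulary by similarity to a query vector. A minimal cosine version is sketched below; the vocabulary and vectors are toy stand-ins, and this is the plain baseline query, not the topologically aware or conditioned variants the thesis introduces.

```python
# Plain cosine nearest-neighbor query over a word-embedding matrix,
# the baseline that topologically aware / conditioned queries extend.
# Toy stand-in data; not the thesis's models or vocabulary.
import numpy as np

rng = np.random.default_rng(1)
vocab = ["king", "queen", "man", "woman", "apple"]
emb = rng.normal(size=(len(vocab), 50))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # unit-normalize rows

def neighbors(word, k=3):
    """Return the k words most cosine-similar to `word`,
    excluding the word itself."""
    i = vocab.index(word)
    sims = emb @ emb[i]  # dot products of unit vectors = cosine similarity
    order = np.argsort(-sims)
    return [(vocab[j], float(sims[j])) for j in order if j != i][:k]

print(neighbors("king"))
```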

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2023. p. 45
Series
TRITA-EECS-AVL ; 2023:2
National Category
Natural Language Processing
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-322262
ISBN: 978-91-8040-444-0
Public defence
2023-01-17, Zoom: https://kth-se.zoom.us/j/66943302856, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Note

QC 20221207

Available from: 2022-12-08. Created: 2022-12-07. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Cuba Gyllensten, Amaru

Search in DiVA

By author/editor
Lenci, Alessandro; Sahlgren, Magnus; Jeuniaux, Patrick; Cuba Gyllensten, Amaru
In the same journal
Language resources and evaluation
Natural Language Processing
