R-grams: Unsupervised Learning of Semantic Units in Natural Language
RISE.
2019 (English). In: Proceedings of the 13th International Conference on Computational Semantics - Student Papers, 2019. Conference paper, Published paper (Refereed).
Abstract [en]

This paper investigates data-driven segmentation using Re-Pair or Byte Pair Encoding techniques. In contrast to previous work, which has primarily focused on subword units for machine translation, we are interested in the general properties of such segments above the word level. We call these segments r-grams, and discuss their properties and the effect they have on the token frequency distribution. The proposed approach is evaluated by demonstrating its viability in embedding techniques, both in monolingual and multilingual test settings. We also provide a number of qualitative examples of the proposed methodology, demonstrating its viability as a language-invariant segmentation procedure.
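The segmentation idea in the abstract can be sketched with a minimal Byte Pair Encoding-style merge loop applied above the word level: repeatedly fuse the most frequent adjacent pair of symbols into a single multi-word unit. This is a generic illustration under stated assumptions, not the paper's implementation; the corpus, function names, and merge count are invented for the example.

```python
from collections import Counter

def most_frequent_pair(seq):
    """Count adjacent symbol pairs and return the most frequent (pair, count)."""
    pairs = Counter(zip(seq, seq[1:]))
    return pairs.most_common(1)[0] if pairs else None

def bpe_segment(tokens, num_merges):
    """Iteratively merge the most frequent adjacent pair into a single new
    symbol. Applied to word tokens, each merged symbol acts like an 'r-gram':
    a data-driven multi-word unit."""
    seq = list(tokens)
    for _ in range(num_merges):
        best = most_frequent_pair(seq)
        if best is None or best[1] < 2:
            break  # no pair repeats; nothing left to merge
        (a, b), _count = best
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and seq[i] == a and seq[i + 1] == b:
                merged.append(a + " " + b)  # fuse the pair into one unit
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq

corpus = "the united states of america and the united kingdom".split()
print(bpe_segment(corpus, 3))
# ['the united', 'states', 'of', 'america', 'and', 'the united', 'kingdom']
```

Because the loop stops as soon as no adjacent pair repeats, the number of merges actually performed adapts to the data, which is what makes the segmentation language-invariant in spirit.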

Place, publisher, year, edition, pages
2019.
National Category
Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-322093
DOI: 10.48550/arXiv.1808.04670
OAI: oai:DiVA.org:kth-322093
DiVA, id: diva2:1715224
Conference
13th International Conference on Computational Semantics
Note

QC 20221202

Available from: 2022-12-01. Created: 2022-12-01. Last updated: 2024-03-18. Bibliographically approved.
In thesis
1. Quantifying Meaning
2023 (English)Doctoral thesis, comprehensive summary (Other academic)
Abstract [sv]

Distributional semantic models are a class of machine learning models aimed at constructing representations that capture the semantics, i.e. meaning, of meaning-bearing objects in a data-driven fashion. This thesis is particularly concerned with the construction of semantic representations of words, an endeavour with a long history in computational linguistics that has seen dramatic developments in recent years.

The primary research objective of this thesis is to explore the limits and applications of distributional semantic models of words, i.e. word embeddings. In particular, it explores the relation between model and embedding semantics, that is, how model design influences what word embeddings encode, how to reason about word embeddings, and how properties of the model can be exploited to extract novel information from embeddings. Concretely, we introduce topologically aware neighborhood queries that enrich the information gained from neighborhoods extracted from distributional semantic models, conditioned similarity queries (and models enabling them), concept extraction from distributional semantic models, applications of embedding models in political science, as well as a thorough evaluation of a broad range of distributional semantic models.

Abstract [en]

Distributional semantic models are a class of machine learning models with the aim of constructing representations that capture the semantics, i.e. meaning, of objects that carry meaning in a data-driven fashion. This thesis is particularly concerned with the construction of semantic representations of words, an endeavour that has a long history in computational linguistics, and that has seen dramatic developments in recent years.

The primary research objective of this thesis is to explore the limits and applications of distributional semantic models of words, i.e. word embeddings. In particular, it explores the relation between model and embedding semantics, i.e. how model design influences what our embeddings encode, how to reason about embeddings, and how properties of the model can be exploited to extract novel information from embeddings. Concretely, we introduce topologically aware neighborhood queries that enrich the information gained from neighborhood queries on distributional semantic models, conditioned similarity queries (and models enabling them), concept extraction from distributional semantic models, applications of embedding models in the realm of political science, as well as a thorough evaluation of a broad range of distributional semantic models. 
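The neighborhood queries mentioned above build on a basic operation: ranking a vocabulary by similarity to a query word's vector. A plain cosine-similarity lookup over a toy embedding table is sketched below; the vectors and vocabulary are invented for the example, and the thesis's topologically aware and conditioned variants enrich this basic query rather than replace it.

```python
import math

# Toy embedding table; real models learn these vectors from text corpora.
EMBEDDINGS = {
    "king":  (0.9, 0.8, 0.1),
    "queen": (0.85, 0.75, 0.2),
    "apple": (0.1, 0.2, 0.9),
    "pear":  (0.15, 0.1, 0.85),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def neighbors(word, k=2):
    """Return the k words most similar to `word` under cosine similarity."""
    q = EMBEDDINGS[word]
    scored = ((cosine(q, v), w) for w, v in EMBEDDINGS.items() if w != word)
    return [w for _, w in sorted(scored, reverse=True)[:k]]

print(neighbors("king"))  # nearest neighbor is "queen"
```

The conditioned similarity queries discussed in the thesis can be thought of as modifying the scoring step so that similarity is computed relative to some context, rather than over the raw vectors as done here.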

Place, publisher, year, edition, pages
Stockholm: KTH Royal Institute of Technology, 2023. p. 45
Series
TRITA-EECS-AVL ; 2023:2
National Category
Language Technology (Computational Linguistics)
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-322262
ISBN: 978-91-8040-444-0
Public defence
2023-01-17, Zoom: https://kth-se.zoom.us/j/66943302856, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Note

QC 20221207

Available from: 2022-12-08. Created: 2022-12-07. Last updated: 2023-01-20. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Authority records

Cuba Gyllensten, Amaru
