An Efficient Graph-Based Model for Learning Representations
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems. ORCID iD: 0000-0003-1007-8533
Research Institute of Sweden (RISE), Stockholm.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems. ORCID iD: 0000-0001-7949-1815
2020 (English) Manuscript (preprint) (Other academic)
Place, publisher, year, edition, pages
2020.
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-263908
OAI: oai:DiVA.org:kth-263908
DiVA, id: diva2:1371092
Note

Submitted manuscript. QC 20191119

Available from: 2019-11-19 Created: 2019-11-19 Last updated: 2019-11-20 Bibliographically approved
In thesis
1. Graph Algorithms for Large-Scale and Dynamic Natural Language Processing
2019 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

In Natural Language Processing, researchers design and develop algorithms that enable machines to understand and analyze human language. These algorithms benefit many downstream applications, including sentiment analysis, machine translation, automatic question answering, and text summarization. Topic modeling is one such algorithm: it categorizes documents into groups so as to maximize intra-group document similarity. However, the emergence of short texts such as tweets, snippets, comments, and forum posts as the dominant form of text in our daily interactions and communications, and as the main medium for news reporting and dissemination, increases the complexity of the problem along three dimensions: scalability, sparsity, and dynamicity. Scalability refers to the volume of messages being generated, sparsity to the shortness of the messages, and dynamicity to the rate of change in the content and topical structure of the messages (e.g., the emergence of new phrases). We improve the scalability and accuracy of Natural Language Processing algorithms from three perspectives, leveraging innovative graph modeling and graph partitioning algorithms, incremental dimensionality reduction techniques, and rich language modeling methods. We begin by presenting a solution for multiple disambiguation of short messages, as opposed to traditional single disambiguation. The solution uses a simple graph representation model in which topical structures appear as dense partitions of the graph, and performs disambiguation by extracting those structures with an innovative distributed graph partitioning algorithm. Next, we develop a scalable topic modeling algorithm using a novel dense graph representation and an efficient graph partitioning algorithm.
Then, we analyze the effect of the temporal dimension to understand dynamicity in online social networks, and present a solution for geo-localization of Twitter users based on a hierarchical model that combines partitioning of the underlying social network graph with temporal categorization of the tweets. The results show the effect of temporal dynamicity on users’ spatial behavior, and lead to the design and development of a dynamic topic modeling solution involving an online graph partitioning algorithm and a significantly stronger language modeling approach based on the skip-gram technique. The algorithm shows strong improvements in scalability and accuracy over state-of-the-art models. Finally, we describe a dynamic graph-based representation learning algorithm that modifies the partitioning algorithm to develop a generalization of our previous work. The proposed representation learning algorithm extracts high-quality distributed, continuous representations from any sequential data with local and hierarchical structural properties similar to those of natural language text.
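The core idea recurring in the abstract above — representing text as a graph whose dense partitions correspond to topics — can be sketched in a few lines. The following is an illustrative toy, not the thesis's actual algorithm: it builds a word co-occurrence graph from short documents and extracts partitions with a simple label-propagation pass (the thesis uses its own distributed partitioning algorithm). All function names and parameters here are assumptions for illustration only.

```python
# Toy sketch: word co-occurrence graph + label propagation as topic partitioning.
# This stands in for the thesis's distributed partitioning algorithm, which is
# not reproduced here.
from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(docs):
    """Undirected graph: words are nodes; co-occurring in a doc adds an edge."""
    adj = defaultdict(set)
    for doc in docs:
        for u, v in combinations(sorted(set(doc.split())), 2):
            adj[u].add(v)
            adj[v].add(u)
    return adj

def label_propagation(adj, sweeps=5):
    """Each node repeatedly adopts the most frequent label among its neighbors;
    the surviving labels define the partitions (read as topics)."""
    labels = {node: node for node in adj}
    for _ in range(sweeps):
        for node in sorted(adj):  # fixed order keeps the toy deterministic
            counts = defaultdict(int)
            for nb in adj[node]:
                counts[labels[nb]] += 1
            if counts:
                labels[node] = max(sorted(counts), key=lambda lab: counts[lab])
    topics = defaultdict(set)
    for node, lab in labels.items():
        topics[lab].add(node)
    return list(topics.values())

docs = [
    "python code compiler",   # programming-flavored docs
    "compiler code syntax",
    "snake python venom",     # "python" is the ambiguous word
    "venom snake reptile",
]
graph = build_cooccurrence_graph(docs)
topics = label_propagation(graph)
```

Every word ends up in exactly one partition, which is the sense in which dense partitions of the graph play the role of topics; the thesis's contribution lies in making this kind of partitioning scalable, distributed, and robust to the sparsity of short texts.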

Place, publisher, year, edition, pages
KTH Royal Institute of Technology, 2019. p. 42
Series
TRITA-EECS-AVL ; 2019:85
Keywords
Natural Language Processing; Lexical Disambiguation; Topic Modeling; Representation Learning; Graph Partitioning; Distributed Algorithms; Dimensionality Reduction; Random Indexing
National Category
Computer Systems
Identifiers
urn:nbn:se:kth:diva-263914 (URN)
978-91-7873-377-4 (ISBN)
Public defence
2019-12-17, Sal C, Electrum, Kistagången 16, Kista, 10:00 (English)
Opponent
Supervisors
Note

QC 20191125

Available from: 2019-11-25 Created: 2019-11-19 Last updated: 2019-11-25 Bibliographically approved

Open Access in DiVA

No full text in DiVA

Authority records BETA

Ghoorchian, Kambiz; Boman, Magnus
