Free Construction of a Free Swedish Dictionary of Synonyms
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0003-3199-8953
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2005 (English). In: NoDaLiDa 2005, 2005, 1-6 p. Conference paper, Published paper (Refereed)
Abstract [en]

Building a large dictionary of synonyms for a language is a very tedious task. Hence there exist very few synonym dictionaries for most languages, and those that exist are generally not freely available due to the amount of work that has been put into them.

The Lexin on-line dictionary is a very popular website for translations of Swedish words into about ten different languages. By letting users on this site grade automatically generated possible synonym pairs, a free dictionary of Swedish synonyms has been created. The lexicon reflects the users' intuitive definition of synonymity, and the amount of work put into the project is only as much as the participants want to put in.
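The abstract describes users grading automatically generated candidate synonym pairs. As a rough sketch of how such grades might be aggregated into a free synonym dictionary (the grading scale, vote count, and thresholds below are hypothetical illustrations, not taken from the paper):

```python
from collections import defaultdict

# Hypothetical grading scale: users rate a candidate pair from 0 (not synonyms)
# to 5 (perfect synonyms). Neither the scale nor the thresholds are taken from
# the paper; they only illustrate the idea of a crowd-graded synonym lexicon.
MIN_VOTES = 3         # require a few independent judgements per pair
MIN_MEAN_GRADE = 3.0  # accept the pair if the average grade is high enough

def build_synonym_dictionary(graded_pairs):
    """graded_pairs: iterable of (word_a, word_b, grade) tuples from users."""
    grades = defaultdict(list)
    for a, b, grade in graded_pairs:
        grades[tuple(sorted((a, b)))].append(grade)  # treat pairs as unordered

    dictionary = defaultdict(set)
    for (a, b), gs in grades.items():
        if len(gs) >= MIN_VOTES and sum(gs) / len(gs) >= MIN_MEAN_GRADE:
            dictionary[a].add(b)
            dictionary[b].add(a)
    return dictionary

# Example: three users agree that "bil" and "automobil" are synonyms,
# while "bil"/"cykel" gets low grades and is rejected.
votes = [("bil", "automobil", 5), ("automobil", "bil", 4), ("bil", "automobil", 4),
         ("bil", "cykel", 1), ("cykel", "bil", 0), ("bil", "cykel", 1)]
print(build_synonym_dictionary(votes)["bil"])  # {'automobil'}
```

Requiring several independent judgements per pair is one simple way to keep a single user's opinion from deciding whether a pair enters the dictionary.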

Place, publisher, year, edition, pages
2005. 1-6 p.
Keyword [en]
Synonyms, dictionary construction, multi-user collaboration, random indexing.
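Random indexing appears among the keywords. Purely as an illustration of how candidate synonym pairs could be proposed with that technique (the record does not describe the paper's exact candidate generation, so the parameters and details here are assumptions), a minimal sketch: each word gets a sparse random index vector, context vectors are accumulated from co-occurrence, and pairs with similar context vectors become candidates.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, NONZERO, WINDOW = 300, 6, 2   # illustrative parameters, not from the paper

def index_vector():
    """Sparse ternary random vector: a few +1/-1 entries, the rest zeros."""
    v = np.zeros(DIM)
    positions = rng.choice(DIM, size=NONZERO, replace=False)
    v[positions] = rng.choice([-1.0, 1.0], size=NONZERO)
    return v

def context_vectors(sentences):
    """For each word, sum the random index vectors of its neighbouring words."""
    index, context = {}, {}
    for sent in sentences:
        for w in sent:
            index.setdefault(w, index_vector())
            context.setdefault(w, np.zeros(DIM))
        for i, w in enumerate(sent):
            lo, hi = max(0, i - WINDOW), min(len(sent), i + WINDOW + 1)
            for j in range(lo, hi):
                if j != i:
                    context[w] += index[sent[j]]
    return context

def candidate_pairs(context, threshold=0.3):
    """Propose word pairs whose context vectors point in similar directions."""
    words = [w for w, v in context.items() if np.linalg.norm(v) > 0]
    pairs = []
    for i, a in enumerate(words):
        for b in words[i + 1:]:
            va, vb = context[a], context[b]
            cos = float(va @ vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
            if cos >= threshold:
                pairs.append((a, b, cos))
    return sorted(pairs, key=lambda p: -p[2])
```

Pairs proposed this way would then be handed to the on-line users for grading, as described in the abstract.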
National Category
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-10122
OAI: oai:DiVA.org:kth-10122
DiVA: diva2:209199
Note
QC 20100806
Available from: 2009-03-24 Created: 2009-03-24 Last updated: 2011-02-18 Bibliographically approved
In thesis
1. Text Clustering Exploration: Swedish Text Representation and Clustering Results Unraveled
2009 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets, as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than, for instance, English. We show that it is beneficial to use the lemma form of words rather than the word forms. Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation. The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part-of-speech information have shown any improvement over using the ordinary model.

Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is: text sets differ in how difficult they are to cluster, and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition. In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words.

Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration and implemented it in a tool called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words is changed, for instance by clustering, distributional patterns appear that indicate similarities between texts and words. We have used Infomat to explore a set of free-text answers about occupation from a questionnaire given to over 40 000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation.
A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we could later verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data restricted to a limited number of possible values.
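The thesis abstract explains the vector space model with lemmatization and compound splitting for Swedish: texts are represented by the (normalized) words they contain and are considered similar if they share many words. Below is a minimal sketch of that representation and a cosine comparison; the tiny hand-written normalization table stands in for the real Swedish lemmatizer and compound splitter used in the thesis.

```python
import math
from collections import Counter

# Toy normalization table standing in for real Swedish lemmatization and
# compound splitting (e.g. "fotbollsmatchen" -> "fotboll" + "match").
# The thesis uses proper linguistic tools; this table is only illustrative.
NORMALIZE = {
    "bilarna": ["bil"],
    "fotbollsmatchen": ["fotboll", "match"],
    "matcher": ["match"],
}

def tokens(text):
    """Lowercase, split on whitespace, and apply the toy normalization."""
    out = []
    for w in text.lower().split():
        out.extend(NORMALIZE.get(w, [w]))
    return out

def tf_vector(text):
    """Term-frequency vector in the vector space model."""
    return Counter(tokens(text))

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d1 = tf_vector("fotbollsmatchen i Stockholm")
d2 = tf_vector("matcher och fotboll")
d3 = tf_vector("bilarna på vägen")
print(cosine(d1, d2), cosine(d1, d3))  # the two football texts are far more similar
```

A clustering algorithm would then group texts whose vectors are close; only the representation and similarity step is sketched here.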

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. vii, 71 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2009:4
National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-10129 (URN)
978-91-7415-251-7 (ISBN)
Public defence
2009-04-06, Sal F3, KTH, Lindstedtsvägen 26, Stockholm, 13:15 (English)
Note
QC 20100806
Available from: 2009-03-24 Created: 2009-03-24 Last updated: 2010-08-06 Bibliographically approved

Open Access in DiVA

No full text

Authority records

Kann, Viggo

By author/editor
Kann, Viggo; Rosell, Magnus
By organisation
Numerical Analysis and Computer Science, NADA
Computer and Information Science
