Part of Speech Tagging for Text Clustering in Swedish
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2009 (English). In: Proceedings of the 17th Nordic Conference of Computational Linguistics NODALIDA 2009, 2009. Conference paper, published paper (refereed).
Abstract [en]

Text clustering could be very useful both as an intermediate step in a large natural language processing system and as a tool in its own right. The result of a clustering algorithm is dependent on the text representation that is used. Swedish has a fairly rich morphology and a large number of homographs, which possibly leads to problems in Information Retrieval in general. We investigate the impact on text clustering of adding the part-of-speech tag to all words in the common term-by-document matrix.

The experiments are carried out on a few different text sets. None of them give any evidence that part-of-speech tags improve results. However, representing texts using only nouns and proper names gives a smaller representation without worsening results, and in a few experiments this smaller representation gives better results. We also investigate the effect of lemmatization and the use of a stoplist, both of which improve results significantly in some cases.
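The representation investigated in the paper can be illustrated with a short sketch. The Python snippet below is not code from the paper: the toy documents, the tag set and the tag assignments are invented for illustration. It only shows the general idea of augmenting every (lemmatized) word with its part-of-speech tag before building the term-by-document matrix, so that homographs such as Swedish "vara" (noun "ware" vs. verb "to be") become separate dimensions.

    # Illustrative sketch only (not the paper's code): a term-by-document matrix
    # where every term is "lemma_TAG". The documents and tags below are made up.
    from collections import Counter

    # Pretend these texts are already tokenised, lemmatised and POS-tagged.
    tagged_docs = [
        [("hund", "NN"), ("springa", "VB"), ("park", "NN")],
        [("springa", "VB"), ("fort", "AB"), ("hund", "NN")],
        [("vara", "NN"), ("vara", "VB"), ("billig", "JJ")],  # homograph "vara"
    ]

    # Each term is the lemma augmented with its part-of-speech tag, so the noun
    # and verb readings of "vara" end up as two different dimensions.
    doc_counts = [Counter(f"{lemma}_{tag}" for lemma, tag in doc) for doc in tagged_docs]
    vocab = sorted(set().union(*doc_counts))

    # Term-by-document matrix: one row per tagged term, one column per document.
    matrix = [[counts[term] for counts in doc_counts] for term in vocab]
    for term, row in zip(vocab, matrix):
        print(f"{term:12s} {row}")

Dropping the tag from the term strings gives the ordinary representation the paper compares against.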

Place, publisher, year, edition, pages
2009.
National Category
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-10124
OAI: oai:DiVA.org:kth-10124
DiVA: diva2:209212
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2010-08-06. Bibliographically approved.
In thesis
1. Text Clustering Exploration: Swedish Text Representation and Clustering Results Unraveled
2009 (English). Doctoral thesis, comprehensive summary (other academic).
Abstract [en]

Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than, for instance, English, and we show that it is beneficial to use the lemma form of words rather than the word forms. Swedish also has a rich production of solid compounds. Most of their constituents are used on their own as words and in several different compounds; in fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation.

The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part-of-speech information have shown any improvement over using the ordinary model.

Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is, since text sets differ in how difficult they are to cluster and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition.

In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words.

Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration, and implemented it in a tool called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words is changed, by for instance clustering, distributional patterns that indicate similarities between texts and words appear. We have used Infomat to explore a set of free-text answers about occupation from a questionnaire given to over 40 000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation.
A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we later could verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data that is restricted to a limited number of possible values.
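As a minimal sketch of the external evaluation step described above (comparing a clustering with an existing categorization of the same texts), the snippet below is not code from the thesis: it assumes scikit-learn is available and uses its adjusted Rand index on invented toy labels for six texts.

    # Illustrative sketch only (not from the thesis): comparing a clustering with
    # a manual categorization of the same texts via the adjusted Rand index.
    # The label sequences below are invented toy data; scikit-learn is assumed.
    from sklearn.metrics import adjusted_rand_score

    categorization = [0, 0, 1, 1, 2, 2]  # "manual" categories for six texts
    clustering     = [1, 1, 0, 0, 2, 2]  # output of some clustering algorithm

    # The measure ignores the actual label values; only the grouping matters,
    # so this relabelled but otherwise perfect match scores 1.0.
    print(adjusted_rand_score(categorization, clustering))

Any external measure of this kind compares groupings rather than label names, which is what allows a clustering to be scored against a categorization, or against several categorizations of the same text set.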

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. vii, 71 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2009:4
National Category
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-10129
ISBN: 978-91-7415-251-7
Public defence
2009-04-06, Sal F3, KTH, Lindstedtsvägen 26, Stockholm, 13:15 (English)
Opponent
Supervisors
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2010-08-06. Bibliographically approved.

Open Access in DiVA

No full text

Other links

Dspace

Search in DiVA

By author/editor
Rosell, Magnus
By organisation
Numerical Analysis and Computer Science, NADA
Computer and Information Science
