Text Clustering Exploration: Swedish Text Representation and Clustering Results Unraveled
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
2009 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results.

Text clustering, at least such as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than for instance English. We show that it is beneficial to use the lemma form of words rather than the word forms. Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation. The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part of speech information have shown any improvement over using the ordinary model.

Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is – text sets differ in difficulty to cluster and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition.

In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words.

Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration, and implemented it in a tool, called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words are changed, by for instance clustering, distributional patterns that indicate similarities between texts and words appear. We have used Infomat to explore a set of free text answers about occupation from a questionnaire given to over 40 000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation. A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we could later verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data that is restricted to a limited number of possible values.
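The vector space model used throughout the thesis can be illustrated with a minimal sketch (an illustration, not the thesis's actual implementation): each text becomes a term-frequency vector, and two texts count as similar when the cosine of the angle between their vectors is high. Lemmatization or compound splitting would normally be applied to the tokens first; here raw word forms are used for brevity.

```python
from collections import Counter
from math import sqrt

def vectorize(text):
    """Bag-of-words vector: term -> raw frequency."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Two toy Swedish sentences sharing two of three words.
d1 = vectorize("bonden odlar vete")
d2 = vectorize("bonden säljer vete")
print(cosine(d1, d2))  # 2/3: two shared terms out of three per text
```

In the thesis's setting the vectors would additionally be weighted (e.g. tf-idf) before comparison; the sketch uses raw counts to keep the idea visible.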

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. vii, 71 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2009:4
National Category
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-10129
ISBN: 978-91-7415-251-7 (print)
OAI: oai:DiVA.org:kth-10129
DiVA: diva2:209282
Public defence
2009-04-06, Sal F3, KTH, Lindstedtsvägen 26, Stockholm, 13:15 (English)
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2010-08-06. Bibliographically approved.
List of papers
1. Improving Clustering of Swedish Newspaper Articles using Stemming and Compound Splitting
2003 (English). Conference paper, Published paper (Refereed).
Abstract [en]

The use of properties of the Swedish language when indexing newspaper articles improves clustering results. To show this, a clustering algorithm was implemented and language-specific tools were used when building the representation of the articles. Since Swedish is an inflecting language, many words have different forms. Thus two documents compared based on word occurrence (i.e. the vector space model and cosine measure of Information Retrieval) do not necessarily become similar although they contain the same word(s). To overcome this we have used a stemmer. Compounds are regularly formed as one word in Swedish. Hence indexing on words leaves the information in the components of compounds unused. We use the spell-checking program Stava to split compounds into their components. Newspapers sort their articles into sections such as Economy, Domestic, Sports etc. Using these we calculate entropy for the clusterings and use it as a measure of quality. We have found that stemming improves clustering results on our collections by about 4 % compared to not using it. Compound splitting improves results by about 10 % (by 13 % in combination with stemming). Keeping the original compounds in the representation does not improve results.
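The entropy measure used above can be sketched as follows (a generic formulation, not the paper's exact code): each cluster's entropy over the newspaper sections is computed, and the values are averaged weighted by cluster size. A clustering that matches the sections perfectly scores 0; the more mixed the clusters, the higher the score.

```python
from math import log2

def clustering_entropy(clusters, labels):
    """Size-weighted average entropy of category labels within clusters.

    clusters: list of lists of document ids.
    labels:   dict mapping document id -> category (e.g. newspaper section).
    Lower values mean the clustering aligns better with the categorization.
    """
    n = sum(len(c) for c in clusters)
    total = 0.0
    for cluster in clusters:
        counts = {}
        for doc in cluster:
            counts[labels[doc]] = counts.get(labels[doc], 0) + 1
        # Entropy of the section distribution inside this cluster.
        h = -sum((k / len(cluster)) * log2(k / len(cluster))
                 for k in counts.values())
        total += len(cluster) / n * h
    return total

labels = {0: "Sports", 1: "Sports", 2: "Economy", 3: "Economy"}
print(clustering_entropy([[0, 1], [2, 3]], labels))  # perfect split -> 0.0
print(clustering_entropy([[0, 2], [1, 3]], labels))  # fully mixed -> 1.0
```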
National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-7120 (URN)
Conference
NoDaLiDa 2003, Reykjavik, Iceland 2003
Note
QC 20100806. Available from: 2005-09-29. Created: 2005-09-29. Last updated: 2010-12-20. Bibliographically approved.
2. Comparing Comparisons: Document Clustering Evaluation Using Two Manual Classifications
2004 (English). Conference paper, Published paper (Refereed).
Abstract [en]

“Describe your occupation in a few words” is a question answered by 44 000 Swedish twins. Each respondent was then manually categorized according to two established occupation classification systems. Would a clustering algorithm have produced satisfactory results? Usually, this question cannot be answered. The existing quality measures will tell us how much the algorithmic clustering deviates from the manual classification, not whether this is an acceptable deviation. But in our situation, with two different manual classifications (in classification systems called AMSYK and YK80), we can indeed construct such quality measures. If the algorithmic result differs no more from the manual classifications than these differ from each other (comparing the comparisons), we have an indication of its being useful. Further, we use the kappa coefficient as a clustering quality measure. Using one manual classification as a coding scheme, we assess the agreement of a clustering and the other. After applying both these novel evaluation methods we conclude that our clusterings are useful.
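The kappa coefficient mentioned above corrects raw agreement between two labelings for the agreement expected by chance. A minimal sketch of Cohen's kappa, with made-up occupation labels rather than the AMSYK/YK80 data:

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: agreement between two labelings beyond chance.

    a, b: equal-length sequences of labels for the same items.
    1.0 = perfect agreement, 0.0 = no better than chance.
    """
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe) if pe < 1 else 1.0

# Hypothetical labels: two codings of six respondents' occupations.
y1 = ["farmer", "farmer", "nurse", "nurse", "clerk", "clerk"]
y2 = ["farmer", "farmer", "nurse", "clerk", "clerk", "clerk"]
print(cohens_kappa(y1, y2))  # 0.75: 5/6 raw agreement, chance-corrected
```

Applying this to clustering evaluation, as in the paper, additionally requires mapping cluster ids to category labels via one classification used as a coding scheme.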

National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-7121 (URN)
Conference
ICON 2004, India.
Note
QC 20100806. Available from: 2005-09-29. Created: 2005-09-29. Last updated: 2012-01-20. Bibliographically approved.
3. The Impact of Phrases in Document Clustering for Swedish
2005 (English). In: Proceedings of the 15th NODALIDA conference, Joensuu 2005 / [ed] Werner, S., 2005, pp. 173-179. Conference paper, Published paper (Refereed).
Abstract [en]

We have investigated the impact of using phrases in the vector space model for clustering documents in Swedish in different ways. The investigation is carried out on two text sets from different domains: one set of newspaper articles and one set of medical papers. The use of phrases does not improve results relative to the ordinary use of words. The results differ significantly between the text types. This indicates that one could benefit from different text representations for different domains, although a fundamentally different approach would probably be needed.

National Category
Computer Science
Identifiers
urn:nbn:se:kth:diva-7122 (URN)
952-458-771-8 (ISBN)
Conference
NoDaLiDa 2005, Joensuu, Finland, 2005
Note
QC 20100806. Available from: 2005-09-29. Created: 2005-09-29. Last updated: 2010-12-20. Bibliographically approved.
4. Free Construction of a Free Swedish Dictionary of Synonyms
2005 (English). In: NoDaLiDa 2005, 2005, pp. 1-6. Conference paper, Published paper (Refereed).
Abstract [en]

Building a large dictionary of synonyms for a language is a very tedious task. Hence there exist very few synonym dictionaries for most languages, and those that exist are generally not freely available due to the amount of work that has been put into them. The Lexin on-line dictionary is a very popular web site for translations of Swedish words to about ten different languages. By letting users on this site grade automatically generated possible synonym pairs, a free dictionary of Swedish synonyms has been created. The lexicon reflects the users' intuitive definition of synonymity, and the amount of work put into the project is only as much as the participants want to contribute.

Keyword
Synonyms, dictionary construction, multi-user collaboration, random indexing.
National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-10122 (URN)
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2011-02-18. Bibliographically approved.
5. Revealing Relations between Open and Closed Answers in Questionnaires through Text Clustering Evaluation
2008 (English). In: The Sixth International Conference on Language Resources and Evaluation, LREC 2008, Marrakech, Morocco, May 28-30, 2008. Conference paper, Published paper (Other academic).
Abstract [en]

Open answers in questionnaires contain valuable information that is very time-consuming to analyze manually. We present a method for hypothesis generation from questionnaires based on text clustering. Text clustering is used interactively on the open answers, and the user can explore the cluster contents. The exploration is guided by automatic evaluation of the clusters against a closed answer regarded as a categorization. This simplifies the process of selecting interesting clusters. The user formulates a hypothesis from the relation between the cluster content and the closed answer categorization. We have applied our method on an open answer regarding occupation compared to a closed answer on smoking habits. With no prior knowledge of smoking habits in different occupation groups we have generated the hypothesis that farmers smoke less than the average. The hypothesis is supported by several separate surveys. Closed answers are easy to analyze automatically but are restricted and may miss valuable aspects. Open answers, on the other hand, fully capture the dynamics and diversity of possible outcomes. With our method the process of analyzing open answers becomes feasible.

Keyword
Information Retrieval, Language Technology, Clustering, Computational Linguistics, Document Clustering, Text Mining, Natural Language Processing, Questionnaires
Identifiers
urn:nbn:se:su:diva-18485 (URN)
Available from: 2009-02-27. Created: 2009-01-26. Bibliographically approved.
6. Part of Speech Tagging for Text Clustering in Swedish
2009 (English). In: Proceedings of the 17th Nordic Conference of Computational Linguistics NODALIDA 2009, 2009. Conference paper, Published paper (Refereed).
Abstract [en]

Text clustering could be very useful both as an intermediate step in a large natural language processing system and as a tool in its own right. The result of a clustering algorithm is dependent on the text representation that is used. Swedish has a fairly rich morphology and a large number of homographs. This possibly leads to problems in Information Retrieval in general. We investigate the impact on text clustering of adding the part-of-speech tag to all words in the common term-by-document matrix. The experiments are carried out on a few different text sets. None of them give any evidence that part-of-speech tags improve results. However, representing texts using only nouns and proper names gives a smaller representation without worsening results. In a few experiments this smaller representation gives better results. We also investigate the effect of lemmatization and the use of a stoplist, both of which improve results significantly in some cases.
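The representation change investigated above amounts to making each (word, tag) pair its own index term, so homographs stop sharing a dimension in the term-by-document matrix. A tiny sketch (the tag set and the example word are illustrative; a real part-of-speech tagger would supply the tags):

```python
def tag_terms(tokens):
    """Turn (word, pos) pairs into distinct index terms.

    Homographs such as Swedish 'var' (verb 'was' vs noun 'pus')
    become separate terms once the tag is appended.
    """
    return [f"{word}_{pos}" for word, pos in tokens]

# Hypothetical tagger output for two occurrences of the homograph 'var'.
print(tag_terms([("var", "VB"), ("var", "NN")]))
```

Restricting the representation to nouns and proper names, the variant that did help, would simply filter `tokens` on the tag before indexing.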

National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-10124 (URN)
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2010-08-06. Bibliographically approved.
7. Global Evaluation of Random Indexing through Swedish Word Clustering Compared to the People’s Dictionary of Synonyms
2009 (English). In: Proceedings of the International Conference RANLP-2009, 2009, pp. 376-380. Conference paper, Published paper (Refereed).
Abstract [en]

Evaluation of word space models is usually local in the sense that it only considers words that are deemed very similar by the model. We propose a global evaluation scheme based on clustering of the words. A clustering of high quality in an external evaluation against a semantic resource, such as a dictionary of synonyms, indicates a word space model of high quality. We use Random Indexing to create several different models and compare them by clustering evaluation against the People's Dictionary of Synonyms, a list of Swedish synonyms that are graded by the public. Most notably we get better results for models based on syntagmatic information (words that appear together) than for models based on paradigmatic information (words that appear in similar contexts). This is quite contrary to previous results that have been presented for local evaluation. Clusterings to ten clusters result in a recall of 83% for a syntagmatic model, compared to 34% for a comparable paradigmatic model, and 10% for a random partition.
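A syntagmatic Random Indexing model of the kind compared above can be sketched as follows (dimensionality and sparsity here are illustrative choices, not the paper's settings): each word is assigned a sparse random index vector, and its context vector accumulates the index vectors of the words it co-occurs with.

```python
import random

DIM, NONZERO = 512, 8  # illustrative; real models use e.g. thousands of dims

def index_vector(rng):
    """Sparse ternary index vector: a few random +1/-1 entries."""
    v = [0] * DIM
    for pos in rng.sample(range(DIM), NONZERO):
        v[pos] = rng.choice((1, -1))
    return v

def context_vectors(docs):
    """Syntagmatic word space: each word's context vector is the sum of
    the index vectors of the words it co-occurs with in a document."""
    rng = random.Random(0)
    index, context = {}, {}
    for doc in docs:
        for w in doc:
            index.setdefault(w, index_vector(rng))
    for doc in docs:
        for w in doc:
            ctx = context.setdefault(w, [0] * DIM)
            for other in doc:
                if other != w:
                    for i, x in enumerate(index[other]):
                        ctx[i] += x
    return context

# Toy corpus: "bonde" (farmer) co-occurs with both "ko" (cow) and "gård" (farm).
ctx = context_vectors([["ko", "bonde"], ["bonde", "gård"]])
```

The paradigmatic variant would instead compare words by the similarity of their context vectors to find words occurring in similar contexts; the evaluation in the paper clusters these vectors and scores the clusters against the synonym dictionary.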

Series
International Conference Recent Advances in Natural Language Processing, RANLP, ISSN 1313-8502
Keyword
Random Indexing, Word Space Model, Word Clustering, Evaluation, Dictionary of Synonyms
National Category
Computer and Information Science; Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-10125 (URN)
2-s2.0-84866846352 (Scopus ID)
Conference
International Conference on Recent Advances in Natural Language Processing, RANLP-2009; Borovets; Bulgaria; 14 September 2009 through 16 September 2009
Note

QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2014-09-24. Bibliographically approved.
8. Infomat: Visualizing and Exploring Vector Space Model Data Matrixes
2009 (English). Article in journal (Refereed). Submitted.
Abstract [en]

Infomat is a vector space visualization tool aimed at Information Retrieval. It presents information stored in a matrix, such as the term-document matrix, as a rectangular picture. The opacity of each pixel is proportional to the weight(s) of the corresponding matrix element(s). Reordering the objects of the rows and columns makes different distributional patterns appear. These can be explored to understand the relations (similarities and differences) between the objects. Infomat allows the user to zoom in and out of the picture to obtain more detailed information, to remove objects and matrix elements, to reweight the matrix, and to cluster all, or a part, of the objects. At the same time, textual information is presented. Infomat provides an overview of the content of the entire data set and of parts of it. In particular, text clustering results become easier to grasp than when presented only in textual form.
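The core idea, matrix weight mapped to pixel opacity, can be sketched in a few lines of text rendering (a toy stand-in, not Infomat itself). With rows and columns ordered by cluster, heavy elements line up in blocks along the diagonal:

```python
def render(matrix, shades=" .:#"):
    """Render a weight matrix as rows of characters, darker = heavier,
    mimicking how Infomat maps matrix-element weight to pixel opacity."""
    top = max(max(row) for row in matrix) or 1
    scale = (len(shades) - 1) / top
    return ["".join(shades[round(v * scale)] for v in row) for row in matrix]

# Toy term-document weights; nonzero blocks reveal the cluster structure.
for line in render([[3, 2, 0, 0], [2, 3, 0, 0], [0, 0, 3, 1], [0, 0, 1, 3]]):
    print(line)
```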

National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-10128 (URN)
Note
QS 20120315. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2012-03-15. Bibliographically approved.

Open Access in DiVA

fulltext: FULLTEXT01.pdf (699 kB, application/pdf)

By author/editor
Rosell, Magnus
By organisation
Numerical Analysis and Computer Science, NADA