The Impact of Phrases in Document Clustering for Swedish
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA.
KTH, School of Computer Science and Communication (CSC), Numerical Analysis and Computer Science, NADA. ORCID iD: 0000-0002-4178-2980
2005 (English) In: Proceedings of the 15th NODALIDA conference, Joensuu 2005 / [ed] Werner, S., 2005, 173-179 p. Conference paper, Published paper (Refereed)
Abstract [en]

We have investigated, in different ways, the impact of using phrases in the vector space model for clustering documents in Swedish. The investigation is carried out on two text sets from different domains: one set of newspaper articles and one set of medical papers. The use of phrases does not improve results relative to the ordinary use of words. The results differ significantly between the text types. This indicates that one could benefit from different text representations for different domains, although a fundamentally different approach would probably be needed.
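To make the setup concrete, here is a minimal sketch (not the authors' implementation) of the idea the abstract describes: documents are represented as term vectors, once from words alone and once with adjacent word pairs added as a crude stand-in for phrases, and compared by cosine similarity. All names and the example sentences are illustrative.

# Minimal sketch (not the paper's implementation): compare a word-only
# vector space representation with one that also includes adjacent word
# pairs as a crude stand-in for phrases.
from collections import Counter
from math import sqrt

def tokens(text):
    return text.lower().split()

def vectorize(text, use_phrases=False):
    words = tokens(text)
    feats = Counter(words)
    if use_phrases:
        # Adjacent word pairs as pseudo-phrases (illustrative only).
        feats.update(" ".join(pair) for pair in zip(words, words[1:]))
    return feats

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

d1 = "dokument klustring med fraser i vektorrumsmodellen"
d2 = "klustring av dokument med ord i vektorrumsmodellen"
print(cosine(vectorize(d1), vectorize(d2)))              # words only
print(cosine(vectorize(d1, True), vectorize(d2, True)))  # words plus word pairs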

Place, publisher, year, edition, pages
2005. 173-179 p.
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-7122
ISBN: 952-458-771-8 (print)
OAI: oai:DiVA.org:kth-7122
DiVA: diva2:12038
Conference
NoDaLiDa 2005, Joensuu, Finland, 2005
Note
QC 20100806. Available from: 2005-09-29. Created: 2005-09-29. Last updated: 2010-12-20. Bibliographically approved.
In thesis
1. Clustering in Swedish: The Impact of some Properties of the Swedish Language on Document Clustering and an Evaluation Method
2005 (English) Licentiate thesis, comprehensive summary (Other scientific)
Abstract [en]

Text clustering divides a set of texts into groups, so that texts within each group are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on known ones. The contributions of this thesis are an investigation of text representation for Swedish and an evaluation method that uses two or more manual categorizations.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Since Swedish has more morphological variation than, for instance, English, we have used a stemmer to strip suffixes. This gives moderate improvements and reduces the number of words in the representation.
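As a small illustration of the suffix-stripping step, the sketch below uses NLTK's Snowball stemmer for Swedish; the thesis does not say which stemmer was actually used, so that choice is an assumption.

# Sketch of suffix stripping for Swedish, assuming NLTK's Snowball stemmer;
# the thesis does not specify which stemmer was actually used.
from nltk.stem.snowball import SnowballStemmer

stemmer = SnowballStemmer("swedish")
words = ["bilen", "bilar", "bilarna", "klustringen", "klustringar"]
stems = {w: stemmer.stem(w) for w in words}
print(stems)  # inflected forms collapse to fewer stems, shrinking the vocabulary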

Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. In the ordinary vector space model the constituents of compounds are not accounted for when calculating the similarity between texts. To use them we have employed a spell checking program to split compounds. The results clearly show that this is beneficial.
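The following sketch shows the general idea of compound splitting, with a small word list standing in for the spell-checking program used in the thesis; the dictionary and names are illustrative, and linking morphemes (such as the binding -s-) are ignored.

# Minimal sketch of splitting Swedish solid compounds against a word list.
# The thesis used a spell-checking program for this; here a tiny dictionary
# of known constituents stands in for it. Linking morphemes are not handled.
KNOWN = {"text", "klustring", "dokument", "mängd", "sjuk", "hus", "sjukhus"}

def split_compound(word, min_len=3):
    """Return constituents if the word splits into known parts, else [word]."""
    if word in KNOWN:
        return [word]
    for i in range(min_len, len(word) - min_len + 1):
        head, tail = word[:i], word[i:]
        if head in KNOWN:
            rest = split_compound(tail, min_len)
            if all(part in KNOWN for part in rest):
                return [head] + rest
    return [word]

print(split_compound("textklustring"))   # ['text', 'klustring']
print(split_compound("dokumentmängd"))   # ['dokument', 'mängd']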

The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. None of our experiments have shown any improvement over using the ordinary model.

Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. Automatic evaluation methods are either internal or external. Internal quality measures use the representation in some manner. Therefore they are not suitable for comparisons of different representations.

External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretical best possible value for a measure is known, but it is not obvious what a good value is -- text sets differ in difficulty to cluster and categorizations are more or less adapted to a particular text set. We describe an evaluation method for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition. We also describe the kappa coefficient as a clustering quality measure in the same setting.
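As one possible concretization of the kappa coefficient mentioned above, the sketch below maps each cluster to its majority category and computes Cohen's kappa between the induced labels and the manual categorization. The exact construction used in the thesis may differ, and the data here is made up.

# Sketch of an external evaluation: map each cluster to its majority category
# and compute Cohen's kappa against the manual categorization. Illustrative
# data; the thesis's exact construction may differ.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    expected = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

def map_clusters_to_categories(clusters, categories):
    mapping = {}
    for c in set(clusters):
        cats_in_c = [cat for cl, cat in zip(clusters, categories) if cl == c]
        mapping[c] = Counter(cats_in_c).most_common(1)[0][0]
    return [mapping[c] for c in clusters]

categories = ["news", "news", "medical", "medical", "news", "medical"]
clusters = [0, 0, 1, 1, 0, 0]
print(cohens_kappa(map_clusters_to_categories(clusters, categories), categories))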

Abstract [sv]

Text clustering divides a set of texts into groups, so that the texts within them are similar in content. Text clustering can be used to uncover structures and content in unknown text sets and to gain new perspectives on already familiar ones. The contributions of this thesis are an investigation of text representations for Swedish texts and an evaluation method that makes use of two or more manual categorizations.

Text clustering, at least as it is described here, makes use of the vector space model, which is in general use in the field. In this model texts are represented by the words that occur in them, and texts that share many words are regarded as similar in content. What counts as a word differs between languages. We have investigated the impact of some properties of Swedish on text clustering. Since Swedish has more morphological variation than, for instance, English, we have removed suffixes with the help of a stemmer. This gives somewhat better results and reduces the number of words in the representation.

In Swedish, solid compounds are used and coined all the time. Most constituents of compounds are used as words in their own right and in many different compounds. Solid compounds in Swedish often correspond to phrases or open compounds in other languages. The constituents of compounds are not used in the similarity calculation of the vector space model. To exploit them we have used a spell-checking program to split compounds. The results clearly show that this is beneficial.

The vector space model takes no account of word order. We have tried to extend the model with nominal phrases in different ways. None of our experiments show any improvement over the ordinary, simple model.

Evaluating text clustering results is very difficult. It is in the nature of things that what constitutes a good partition of a set of texts is subjective. Automatic evaluation methods are either internal or external. Internal quality measures make use of the representation in some way. They are therefore not suitable for comparing different representations.

External quality measures compare a clustering with a (manual) categorization of the same set of texts. The theoretically best value of the measures is known, but what constitutes a good value is not obvious -- text sets differ in how difficult they are to cluster, and categorizations are more or less suited to a particular set of texts. We describe an evaluation method that can be used when a set of texts has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition. We also describe the kappa coefficient as a clustering quality measure under the same conditions.

Place, publisher, year, edition, pages
Stockholm: KTH, 2005. vii, 35 p.
Series
Trita-NA, ISSN 0348-2952 ; 05:31
Keyword
Document Clustering
National Category
Language Technology (Computational Linguistics)
Identifiers
urn:nbn:se:kth:diva-438 (URN)
91-7178-166-8 (ISBN)
Presentation
2005-10-18, E3, E-huset, Osquars backe 14, Stockholm, 10:15
Note
QC 20101220. Available from: 2005-09-29. Created: 2005-09-29. Last updated: 2010-12-20. Bibliographically approved.
2. Text Clustering Exploration: Swedish Text Representation and Clustering Results Unraveled
2009 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Text clustering divides a set of texts into clusters (parts), so that texts within each cluster are similar in content. It may be used to uncover the structure and content of unknown text sets as well as to give new perspectives on familiar ones. The main contributions of this thesis are an investigation of text representation for Swedish and some extensions of the work on how to use text clustering as an exploration tool. We have also done some work on synonyms and evaluation of clustering results.

Text clustering, at least as it is treated here, is performed using the vector space model, which is commonly used in information retrieval. This model represents texts by the words that appear in them and considers texts similar in content if they share many words. Languages differ in what is considered a word. We have investigated the impact of some of the characteristics of Swedish on text clustering. Swedish has more morphological variation than, for instance, English. We show that it is beneficial to use the lemma form of words rather than the word forms.

Swedish has a rich production of solid compounds. Most of the constituents of these are used on their own as words and in several different compounds. In fact, Swedish solid compounds often correspond to phrases or open compounds in other languages. Our experiments show that it is beneficial to split solid compounds into their parts when building the representation.

The vector space model does not regard word order. We have tried to extend it with nominal phrases in different ways. We have also tried to differentiate between homographs, words that look alike but mean different things, by augmenting all words with a tag indicating their part of speech. None of our experiments using phrases or part-of-speech information have shown any improvement over using the ordinary model.

Evaluation of text clustering results is very hard. What is a good partition of a text set is inherently subjective. External quality measures compare a clustering with a (manual) categorization of the same text set. The theoretically best possible value for a measure is known, but it is not obvious what a good value is – text sets differ in difficulty to cluster and categorizations are more or less adapted to a particular text set. We describe how evaluation can be improved for cases where a text set has more than one categorization. In such cases the result of a clustering can be compared with the result for one of the categorizations, which we assume is a good partition.

In some related work we have built a dictionary of synonyms. We use it to compare two different principles for automatic word relation extraction through clustering of words.

Text clustering can be used to explore the contents of a text set. We have developed a visualization method that aids such exploration, and implemented it in a tool called Infomat. It presents the representation matrix directly in two dimensions. When the order of texts and words is changed, by for instance clustering, distributional patterns that indicate similarities between texts and words appear.

We have used Infomat to explore a set of free text answers about occupation from a questionnaire given to over 40 000 Swedish twins. The questionnaire also contained a closed answer regarding smoking. We compared several clusterings of the text answers to the closed answer, regarded as a categorization, by means of clustering evaluation. A recurring text cluster of high quality led us to formulate the hypothesis that “farmers smoke less than the average”, which we could later verify by reading previous studies. This hypothesis generation method could be used on any set of texts that is coupled with data that is restricted to a limited number of possible values.
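A rough sketch of the matrix-reordering idea behind Infomat follows (not its actual implementation): when the rows of a text-by-word count matrix are regrouped by cluster label, block patterns that indicate similar texts become visible. The data and clustering below are made up.

# Crude sketch of the matrix-reordering idea: sort the rows (texts) of a
# text-by-word count matrix by cluster label so distributional blocks appear.
import numpy as np

rng = np.random.default_rng(0)
topic_a = [3, 3, 3, 3, 0.2, 0.2, 0.2, 0.2]   # words favoured by topic A
topic_b = [0.2, 0.2, 0.2, 0.2, 3, 3, 3, 3]   # words favoured by topic B
# Six texts with topics interleaved, so the raw matrix shows no visible pattern.
matrix = np.vstack([rng.poisson(topic_a if i % 2 == 0 else topic_b, size=8)
                    for i in range(6)])
clusters = np.array([0, 1, 0, 1, 0, 1])      # a clustering of the six texts
order = np.argsort(clusters, kind="stable")  # group texts by cluster label
print(matrix[order])                         # rows now form two visible blocks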

Place, publisher, year, edition, pages
Stockholm: KTH, 2009. vii, 71 p.
Series
Trita-CSC-A, ISSN 1653-5723 ; 2009:4
National Category
Computer and Information Science
Identifiers
urn:nbn:se:kth:diva-10129 (URN)
978-91-7415-251-7 (ISBN)
Public defence
2009-04-06, Sal F3, KTH, Lindstedtsvägen 26, Stockholm, 13:15 (English)
Note
QC 20100806. Available from: 2009-03-24. Created: 2009-03-24. Last updated: 2010-08-06. Bibliographically approved.

Open Access in DiVA

No full text

By author/editor
Rosell, Magnus; Velupillai, Sumithra
By organisation
Numerical Analysis and Computer Science, NADA
Computer Science
