Toward automated veracity assessment of data from open sources using features and indicators
KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Software and Computer systems, SCS. ORCID iD: 0000-0002-0408-1421
2024 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

This dissertation hypothesizes that the key to automated veracity assessment of data from open sources is the careful estimation and extraction of relevant features and indicators. These features and indicators add value to a quantifiable veracity assessment, either directly or indirectly. The importance and usefulness of a veracity assessment largely depend on the specific situation and the reason for which it is conducted. Factors such as the recipient of the assessment, its scope, and the metrics used to measure accuracy and performance all play a role in determining the value and perceived quality of the assessment.

Five peer-reviewed publications are included in this compilation thesis: two journal articles, two conference articles, and one workshop article.

The main contributions of the work presented in this dissertation are: i) a compilation of challenges with manual methods of veracity assessment, ii) a road map for addressing the identified challenges, iii) identification of the state of the art and a gap analysis of veracity assessment of open-source data, iv) exploration of indicators such as topic geo-location tracking over time and stance classification, and v) evaluation of various feature types, model transferability, and style obfuscation attacks, and their impact on the accuracy of automated veracity assessment of a type of deception: fake reviews.

Abstract [sv]

This dissertation hypothesizes that the key to automated veracity assessment of data from open sources lies in the careful selection and estimation of relevant features and indicators. These features and indicators provide direct or indirect added value to a quantifiable veracity assessment. The importance and usefulness of a veracity assessment depend largely on the specific context and the reason for which it is conducted. Factors such as the recipient of the veracity assessment, the scope of the assessment, and the metrics used to measure accuracy and performance all play a part in determining the value and perceived quality of the assessment.

Five peer-reviewed publications are included in this compilation thesis: two journal articles, two conference articles, and one workshop article.

The main contributions of the work presented in this thesis are: i) a compilation of challenges related to manual methods of veracity assessment, ii) a plan for addressing the identified challenges, iii) identification of the research frontier and a gap analysis of veracity assessment of data from open sources, iv) a study of indicators such as geolocation of topics and tracking them over time, as well as classification of individual reactions in social media posts, and v) an evaluation of feature types affecting the accuracy of automatic veracity assessment applied to a type of deception: fake reviews.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2024, p. 71
Series
TRITA-EECS-AVL ; 2024:47
Keywords [en]
Veracity assessment, natural language processing, machine learning, open-source data
Keywords [sv]
Veracity assessment, natural language processing, machine learning, open-source data
National Category
Software Engineering
Research subject
Information and Communication Technology
Identifiers
URN: urn:nbn:se:kth:diva-346353; ISBN: 978-91-8040-927-8 (print); OAI: oai:DiVA.org:kth-346353; DiVA id: diva2:1857437
Public defence
2024-06-03, https://kth-se.zoom.us/j/63226866138, Sal C, Kistagången 16, Stockholm, 13:30 (English)
Note

QC 20240514

Available from: 2024-05-14 Created: 2024-05-13 Last updated: 2024-05-21 Bibliographically approved
List of papers
1. Towards Automatic Veracity Assessment of Open Source Information
2015 (English) In: 2015 IEEE International Congress on Big Data (BigData Congress), IEEE Computer Society, 2015, p. 199-206. Conference paper, Published paper (Refereed)
Abstract [en]

Intelligence analysis depends on veracity assessment of Open Source Information (OSINF), which includes assessing the reliability of sources and the credibility of information. Traditionally, OSINF veracity assessment is done manually by intelligence analysts, but the large volume, high velocity, and variety of data make it infeasible to continue doing so and call for automation. Based on meetings, interviews, and questionnaires with military personnel, and an analysis of related work and the state of the art, we identify the challenges and propose an approach and a corresponding framework for automated veracity assessment of OSINF. The framework provides a basis for new tools that will give intelligence analysts the ability to automatically or semi-automatically assess the veracity of larger amounts of data in less time. Instead of spending their time on irrelevant, ambiguous, contradicting, biased, or plain wrong data, they can spend more time on analysis.

Place, publisher, year, edition, pages
IEEE Computer Society, 2015
Keywords
Big Data, public domain software, OSINF, automatic data veracity assessment, intelligence analysis, open source information, Automation, Interviews, Probabilistic logic, Reliability, Semantics, Twitter, NATO STANAG 2511, data veracity, reliability and credibility, trust, veracity assessment
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-179402; DOI: 10.1109/BigDataCongress.2015.36; 000380443700026; Scopus ID: 2-s2.0-84959501022; ISBN: 978-1-4673-7277-0
Conference
2015 IEEE International Congress on Big Data (BigData Congress), New York, USA, June 27 - July 2, 2015.
Note

QC 20151217

Available from: 2015-12-16 Created: 2015-12-16 Last updated: 2024-05-14 Bibliographically approved
2. Veracity assessment of online data
2020 (English) In: Decision Support Systems, ISSN 0167-9236, E-ISSN 1873-5797, Vol. 129, article id 113132. Article in journal (Refereed), Published
Abstract [en]

Fake news, malicious rumors, fabricated reviews, and generated images and videos are today spread at an unprecedented rate, making manual assessment of data veracity for decision-making purposes daunting. Hence, it is urgent to explore possibilities for automatic veracity assessment. In this work we review the literature in search of methods and techniques representing the state of the art in computerized veracity assessment. We study what others have done within the area of veracity assessment, especially targeted towards social media and open-source data, to understand research trends and determine needs for future research. The most common veracity assessment method among the studied papers is text analysis using supervised learning. Much has happened in machine learning in the last couple of years thanks to advances in deep learning, yet very few papers make use of these advancements. The papers also tend to have a narrow scope, focusing on solving a small task with only one type of data from one main source. The overall veracity assessment problem is complex, requiring a combination of data sources, data types, indicators, and methods. Only a few papers take on such a broad scope, demonstrating the relative immaturity of the veracity assessment domain.
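The survey's central finding, that text analysis using supervised learning is the most common veracity assessment method, can be illustrated with a minimal from-scratch sketch. The multinomial naive Bayes model, toy documents, and labels below are illustrative assumptions, not taken from any of the surveyed papers.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial naive Bayes text classifier with Laplace smoothing."""

    def fit(self, texts, labels):
        self.class_counts = Counter(labels)          # class priors
        self.word_counts = defaultdict(Counter)      # per-class word counts
        self.vocab = set()
        for text, label in zip(texts, labels):
            for w in tokenize(text):
                self.word_counts[label][w] += 1
                self.vocab.add(w)
        return self

    def predict(self, text):
        best, best_lp = None, float("-inf")
        total = sum(self.class_counts.values())
        for c in self.class_counts:
            lp = math.log(self.class_counts[c] / total)
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            for w in tokenize(text):
                # add-one smoothing so unseen words do not zero out the class
                lp += math.log((self.word_counts[c][w] + 1) / denom)
            if lp > best_lp:
                best, best_lp = c, lp
        return best

# Invented toy training data, only to make the sketch runnable.
texts = ["shocking miracle cure doctors hate", "council approves new budget today",
         "you won a free prize click now", "report released on local election"]
labels = ["fake", "real", "fake", "real"]
clf = NaiveBayes().fit(texts, labels)
print(clf.predict("free miracle prize"))  # → "fake" on this toy data
```

A production pipeline would add proper feature extraction and evaluation, but the structure (labelled texts in, class label out) is the supervised-learning pattern the review describes.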

Place, publisher, year, edition, pages
Elsevier, 2020
Keywords
Veracity assessment, Credibility, Data quality, Online data, Social media, Fake news
National Category
Computer and Information Sciences
Identifiers
URN: urn:nbn:se:kth:diva-268789; DOI: 10.1016/j.dss.2019.113132; 000510956500001; Scopus ID: 2-s2.0-85076227196
Note

QC 20200224

Available from: 2020-02-24 Created: 2020-02-24 Last updated: 2024-05-14 Bibliographically approved
3. Tracking Geographical Locations using a Geo-Aware Topic Model for Analyzing Social Media Data
2017 (English) In: Decision Support Systems, ISSN 0167-9236, E-ISSN 1873-5797, Vol. 99, no SI, p. 18-29. Article in journal (Refereed), Published
Abstract [en]

Tracking how discussion topics evolve in social media, and where these topics are discussed geographically over time, has the potential to provide useful information for many different purposes. In crisis management, knowing a specific topic's current geographical location could provide vital information about where, or even which, resources should be allocated. This paper describes an attempt to track online discussions geographically over time. A distributed geo-aware streaming latent Dirichlet allocation model was developed to recognize topics' locations in unstructured text. To evaluate the model, it was implemented and used for automatic discovery and geographical tracking of election topics during parts of the 2016 American presidential primary elections. The identified locations correlated with the actual election locations, and the model provided better geolocation classification than a keyword-based approach.
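The keyword-based baseline the model is evaluated against can be sketched as a simple gazetteer lookup. The gazetteer entries and example posts below are invented for illustration; the paper's distributed geo-aware streaming LDA model is far more involved.

```python
# Illustrative keyword-based topic geolocation baseline: map a post to the first
# gazetteer location it mentions. Entries and posts are invented examples.
GAZETTEER = {
    "iowa": "Iowa", "des moines": "Iowa",
    "new hampshire": "New Hampshire", "concord": "New Hampshire",
}

def geolocate(post):
    """Return the first gazetteer location mentioned in a post, else None."""
    text = post.lower()
    for keyword, location in GAZETTEER.items():
        if keyword in text:
            return location
    return None

posts = [
    "Huge caucus turnout reported in Des Moines tonight",
    "Primary debate moves to New Hampshire next week",
    "Polling numbers keep shifting nationally",
]
print([geolocate(p) for p in posts])  # ['Iowa', 'New Hampshire', None]
```

The weakness the paper exploits is visible here: posts that discuss a location without naming a gazetteer keyword (the third post) are simply missed, whereas a topic model can use the full word distribution.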

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Social media, Topic modeling, Geo-awareness, Trend analysis, Latent Dirichlet allocation, Streaming media
National Category
Computer Sciences
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-210570; DOI: 10.1016/j.dss.2017.05.006; 000405162500003; Scopus ID: 2-s2.0-85020801622
Funder
EU, FP7, Seventh Framework Programme, 312649
Note

QC 20170704

Available from: 2017-07-02 Created: 2017-07-02 Last updated: 2024-05-14 Bibliographically approved
4. Mama Edha at SemEval-2017 Task 8: Stance Classification with CNN and Rules
2017 (English) In: ACL 2017 - 11th International Workshop on Semantic Evaluations, SemEval 2017, Proceedings of the Workshop, Association for Computational Linguistics (ACL), 2017, p. 481-485. Conference paper, Published paper (Refereed)
Abstract [en]

For the SemEval-2017 competition we investigated the possibility of performing stance classification (support, deny, query, or comment) for messages in Twitter conversation threads related to rumours. Stance classification is interesting since it can provide a basis for rumour veracity assessment. Our ensemble classification approach, combining convolutional neural networks with both automatic rule mining and manually written rules, achieved a final accuracy of 74.9% on the competition's test data set for Task 8A. To improve classification we also experimented with data relabeling and with using the grammatical structure of the tweet contents for classification.
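The manually written rules in the ensemble can be illustrated with a toy rule-based classifier over the four stance classes. The regular expressions below are invented examples in the spirit of such rules, not the patterns used in the competition system.

```python
import re

# Illustrative hand-written stance rules; first matching rule wins, and
# anything unmatched falls back to "comment", the majority class in the
# support/deny/query/comment scheme.
RULES = [
    ("query",   re.compile(r"\?\s*$|^(is|are|did|really)\b", re.I)),
    ("deny",    re.compile(r"\b(fake|false|hoax|not true|debunked)\b", re.I)),
    ("support", re.compile(r"\b(confirmed|true|agree|exactly)\b", re.I)),
]

def classify_stance(reply):
    for stance, pattern in RULES:
        if pattern.search(reply):
            return stance
    return "comment"

print(classify_stance("Is this really happening?"))  # query
print(classify_stance("This is a hoax, not true."))  # deny
print(classify_stance("Confirmed by the police."))   # support
print(classify_stance("Wow, interesting times."))    # comment
```

In an ensemble like the paper's, such rules would handle high-precision surface cues while a CNN covers the cases no rule fires on.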

Place, publisher, year, edition, pages
Association for Computational Linguistics (ACL), 2017
National Category
Computer Sciences; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-332056; Scopus ID: 2-s2.0-85097656375
Conference
11th International Workshop on Semantic Evaluations, SemEval 2017, co-located with the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017, Vancouver, Canada, Aug 3 - Aug 4, 2017
Note

Part of ISBN 9781945626555

QC 20230719

Available from: 2023-07-19 Created: 2023-07-19 Last updated: 2024-05-14 Bibliographically approved
5. Identifying deceptive reviews: Feature exploration, model transferability and classification attack
2019 (English) In: Proceedings of the 2019 European Intelligence and Security Informatics Conference, EISIC 2019, Institute of Electrical and Electronics Engineers Inc., 2019, p. 109-116. Conference paper, Published paper (Refereed)
Abstract [en]

The temptation to influence and sway public opinion most certainly increases with the growth of open online forums where anyone can anonymously express their views and opinions. Since online review sites are a popular venue for opinion-influencing attacks, there is a need to automatically identify deceptive posts. The main focus of this work is automatic identification of deceptive reviews, both positively and negatively biased. With this objective, we build an SVM-based classification model for deceptive reviews and explore the performance impact of using different feature types (TF-IDF, word2vec, PCFG). Moreover, we study the transferability of trained classification models applied to review data sets of other product types, and the classifier's robustness, i.e., the accuracy impact, against attacks by stylometry obfuscation through machine translation. Our findings show that i) we achieve an accuracy of over 90% using different feature types, ii) the trained classification models do not perform well when applied to data sets containing reviews of other products, and iii) machine translation only slightly impacts the results and cannot be used as a viable attack method.
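One of the explored feature types, TF-IDF, can be sketched from scratch in a few lines. The reviews below are invented examples; in the paper these features feed an SVM classifier rather than being inspected directly.

```python
import math
from collections import Counter

def tfidf(docs):
    """Return one sparse TF-IDF vector (dict: word -> weight) per document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    # document frequency: in how many docs each word appears
    df = Counter(w for toks in tokenized for w in set(toks))
    idf = {w: math.log(n / df[w]) for w in df}
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({w: (tf[w] / len(toks)) * idf[w] for w in tf})
    return vectors

# Invented toy reviews, only to make the sketch runnable.
reviews = [
    "amazing hotel amazing staff best stay ever",
    "the room was clean and the staff polite",
    "worst hotel ever avoid avoid avoid",
]
vecs = tfidf(reviews)
# Frequent, distinctive words dominate each vector; words shared across
# reviews are down-weighted by the idf term.
print(sorted(vecs[0], key=vecs[0].get, reverse=True)[0])  # amazing
```

The intuition carries over to deception detection: repeated superlatives and other stylistic tics receive high weights, which is part of why such surface features separate fake from genuine reviews well within one product domain but transfer poorly to another.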

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2019
Keywords
Classification, Deceptive, Fake, PCFG, SVM, Word2vec, Automation, Computational linguistics, Computer aided language translation, Social aspects, Support vector machines, Attack methods, Classification models, Feature types, Machine translations, Model transferabilities, Online reviews, Performance impact, Public opinions, Classification (of information)
National Category
Computer Sciences; Language Technology (Computational Linguistics)
Identifiers
URN: urn:nbn:se:kth:diva-285414; DOI: 10.1109/EISIC49498.2019.9108852; Scopus ID: 2-s2.0-85087087979; ISBN: 9781728167350
Conference
2019 European Intelligence and Security Informatics Conference, EISIC 2019, 26 November 2019 through 27 November 2019
Note

QC 20201130

Available from: 2020-11-30 Created: 2020-11-30 Last updated: 2024-05-14 Bibliographically approved

Open Access in DiVA

File name: SUMMARY01.pdf (summary, 879 kB, application/pdf)
Checksum (SHA-512): 39220bc334b5ee58a0de78a6a343c234465cc87f7929351711b6f3ffe84ed0fa8b35936533fe6d20ff0f12c4e912f081736d81a30e6c23c2c1f70fdc0ebab699

Authority records

García Lozano, Marianela
