The effect of background knowledge in graph-based learning in the chemoinformatics domain
KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV. Stockholm University, Department of Computer and Systems Sciences.
University of Skövde, Sweden.
2008 (English). In: Trends in Intelligent Systems and Computer Engineering / [ed] Oscar Castillo, Li Xu, Sio-Iong Ao. Springer, 2008, p. 141-153. Chapter in book (Refereed)
Abstract [en]

Typical machine learning systems use a set of previous experiences (examples) to learn concepts, patterns, or relations hidden within the data [1]. Current machine learning approaches are challenged by the growing size of data repositories and the growing complexity of the data they contain [1, 2]. To accommodate the requirement of learning from complex data, several methods have been introduced in the field of machine learning [2]. Based on how the input and the resulting hypotheses are represented, two main categories of such methods exist, namely, logic-based and graph-based methods [3]. The demarcation line between logic- and graph-based methods lies in how they represent data, how they form and test hypotheses, and the form of the output they produce.
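
To make the representational distinction concrete, the following is a minimal, hypothetical sketch (not taken from the chapter) of the same small molecular fragment encoded both ways: as ground logical facts, as an ILP system might consume them, and as a labelled graph of nodes and edges. All identifiers (`m1`, `a1`, etc.) are illustrative.

```python
# Hypothetical example: one molecular fragment, two representations.
# Identifiers and labels are illustrative, not from the chapter.

# Logic-based view: ground facts (Prolog-style), the input form of ILP methods.
logic_facts = [
    "atom(m1, a1, c).",              # atom a1 of molecule m1 is carbon
    "atom(m1, a2, c).",
    "atom(m1, a3, o).",              # a3 is oxygen
    "bond(m1, a1, a2, single).",     # single bond between a1 and a2
    "bond(m1, a2, a3, single).",
]

# Graph-based view: the same molecule as labelled nodes and labelled edges,
# the input form of graph learning methods.
nodes = {"a1": "c", "a2": "c", "a3": "o"}
edges = [("a1", "a2", "single"), ("a2", "a3", "single")]
```

The information content is identical; the two method families differ in how hypotheses (clauses versus subgraph patterns) are formed and tested over these inputs.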

The main purpose of our study is to investigate the effect of incorporating background knowledge into graph learning methods. The ability of graph learning methods to obtain accurate theories with a minimum of background knowledge is of course a desirable property, but not being able to effectively utilize additional knowledge that is available and has been proven important is clearly a disadvantage. We therefore examine how far additional, already available, background knowledge can be effectively used to increase the performance of a graph learner. Another contribution of our study is that it establishes a neutral ground on which to compare the classification accuracies of the two closely related approaches, making it possible to study whether graph learning methods would actually outperform ILP methods if the same background knowledge were utilized [9].
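
One simple way background knowledge can be folded into a graph representation, sketched below under our own assumptions (the chapter does not prescribe this particular mechanism), is to enrich node labels with derived annotations, such as functional-group membership, before the graph learner runs. The helper name and the annotations are hypothetical.

```python
# Hedged sketch: enrich a molecular graph with background-derived node
# annotations (e.g. functional groups) prior to graph learning.
# The function name and annotation values are illustrative assumptions.

def add_background_labels(nodes, background):
    """Extend each node's label with a background annotation, if any."""
    return {n: (label, background.get(n, "none")) for n, label in nodes.items()}

nodes = {"a1": "c", "a2": "c", "a3": "o"}
background = {"a3": "hydroxyl"}          # hypothetical derived knowledge
enriched = add_background_labels(nodes, background)
# enriched["a3"] == ("o", "hydroxyl")
```

A graph learner searching for frequent subgraph patterns over the enriched labels can then exploit the background distinctions directly, which is the kind of effect the study measures.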

The rest of this chapter is organized as follows. The next section discusses related work concerning the contribution of background knowledge when learning from complex data. Section 10.3 provides a description of the graph learning method that is used in our study. The experimental setup, empirical evaluation, and the results from the study are described in Sect. 10.4. Finally, Sect. 10.5 provides conclusions from the experiments and points out interesting extensions of the work reported in this study.

Place, publisher, year, edition, pages
Springer, 2008. p. 141-153
Series
Lecture Notes in Electrical Engineering, ISSN 1876-1100 ; 6
National Category
Engineering and Technology
Research subject
Computer and Systems Sciences
Identifiers
URN: urn:nbn:se:kth:diva-221566
DOI: 10.1007/978-0-387-74935-8_10
Scopus ID: 2-s2.0-78651558390
ISBN: 978-0-387-74934-1 (print)
ISBN: 978-0-387-74935-8 (print)
OAI: oai:DiVA.org:kth-221566
DiVA, id: diva2:1175239
Note

QC 20180123

Available from: 2014-02-24. Created: 2018-01-17. Last updated: 2018-01-23. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Search in DiVA

By author/editor
Karunaratne, Thashmee; Boström, Henrik
By organisation
Computer and Systems Sciences, DSV
Engineering and Technology
