1 - 7 of 7
  • 1.
    Gammerman, Alexander (Royal Holloway Univ London, Egham, Surrey, England)
    Vovk, Vladimir (Royal Holloway Univ London, Egham, Surrey, England)
    Boström, Henrik (KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS)
    Carlsson, Lars (Stena Line AB, Gothenburg, Sweden)
    Conformal and probabilistic prediction with applications: editorial (2019). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 108, no. 3, p. 379-380. Article in journal (Other academic)
  • 2.
    Johansson, Ulf (Jönköping Univ, Dept Comp Sci & Informat, Jönköping, Sweden)
    Löfström, Tuve (Jönköping Univ, Dept Comp Sci & Informat, Jönköping, Sweden)
    Linusson, Henrik (Univ Borås, Dept Informat Technol, Borås, Sweden)
    Boström, Henrik (KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS)
    Efficient Venn predictors using random forests (2019). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 108, no. 3, p. 535-550. Article in journal (Refereed)
    Abstract [en]

    Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. In addition, a probabilistic classifier must, of course, also be as accurate as possible. In this paper, Venn predictors, and their special case Venn-Abers predictors, are evaluated for probabilistic classification, using random forests as the underlying models. Venn predictors output multiple probabilities for each label, i.e., the predicted label is associated with a probability interval. Since all Venn predictors are valid in the long run, the size of the probability intervals is very important, with tighter intervals being more informative. The standard solution when calibrating a classifier is to employ an additional step that transforms the outputs of the classifier into probability estimates, using a labeled data set not employed for training the models. For random forests, and other bagged ensembles, it is, however, possible to use the out-of-bag instances for calibration, making all training data available for both model learning and calibration. This procedure has previously been applied successfully to conformal prediction, but is evaluated here for the first time for Venn predictors. The empirical investigation, using 22 publicly available data sets, showed that all four versions of the Venn predictors were better calibrated than both the raw estimates from the random forest and the standard techniques Platt scaling and isotonic regression. Regarding both informativeness and accuracy, the standard Venn predictor calibrated on out-of-bag instances was the best setup evaluated. Most importantly, calibrating on out-of-bag instances, instead of using a separate calibration set, resulted in tighter intervals and more accurate models on every data set, for both the Venn predictors and the Venn-Abers predictors.
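    The out-of-bag idea in the abstract above can be sketched in a few lines. This is an illustrative toy, not the paper's method: the classifiers are 1-D threshold stumps on synthetic data, and a simple histogram-binning calibrator stands in for the Venn/Venn-Abers machinery. The point it shows is only that OOB scores behave like held-out scores, so all training data serves both model learning and calibration.

    ```python
    # Illustrative sketch (toy stand-ins, not the paper's method):
    # collect out-of-bag (OOB) predictions from a bagged ensemble so
    # the same training data drives both fitting and calibration.
    import random

    random.seed(0)

    # Toy 1-D binary data: label is a noisy indicator of x > 0.5.
    X = [random.random() for _ in range(200)]
    y = [1 if (x > 0.5) != (random.random() < 0.1) else 0 for x in X]

    def fit_stump(xs, ys):
        """Pick the threshold minimizing training errors."""
        best = (0.5, len(ys) + 1)
        for t in set(xs):
            errs = sum((x > t) != (lab == 1) for x, lab in zip(xs, ys))
            if errs < best[1]:
                best = (t, errs)
        return best[0]

    n_models = 25
    thresholds, oob_sets = [], []
    for _ in range(n_models):
        idx = [random.randrange(len(X)) for _ in range(len(X))]
        thresholds.append(fit_stump([X[i] for i in idx], [y[i] for i in idx]))
        oob_sets.append(set(range(len(X))) - set(idx))

    # OOB vote fraction per training instance: only models that did NOT
    # see the instance contribute, so these scores act like held-out scores.
    oob_scores = []
    for i, x in enumerate(X):
        votes = [x > thresholds[m] for m in range(n_models) if i in oob_sets[m]]
        oob_scores.append(sum(votes) / len(votes) if votes else 0.5)

    # Histogram-binning calibrator built from the OOB scores
    # (a stand-in for isotonic regression / Venn calibration).
    def calibrate(score, bins=5):
        b = min(int(score * bins), bins - 1)
        members = [y[i] for i, s in enumerate(oob_scores)
                   if min(int(s * bins), bins - 1) == b]
        return sum(members) / len(members) if members else score

    print(round(calibrate(0.9), 2))  # calibrated P(class 1) for a high raw score
    ```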

  • 3.
    Karunaratne, Thashmee (KTH, School of Information and Communication Technology (ICT), Computer and Systems Sciences, DSV)
    Boström, Henrik (University of Skövde, Sweden)
    Using background knowledge for graph based learning: a case study in chemoinformatics (2007). In: IMECS 2007: International Multiconference of Engineers and Computer Scientists, Vols I and II, Hong Kong: Int. Assoc. Engineers (IAENG), 2007, p. 153-157. Conference paper (Refereed)
    Abstract [en]

    Incorporating background knowledge in the learning process has proven beneficial for numerous applications of logic-based learning methods, yet its effect in graph-based learning has not been systematically explored. This paper describes and demonstrates a first step in this direction and elaborates on how additional relevant background knowledge can be used to improve the predictive performance of a graph learner. A case study in chemoinformatics is undertaken, in which various types of background knowledge are encoded in graphs that are given as input to a graph learner. It is shown that the type of background knowledge encoded indeed has an effect on the predictive performance, and it is concluded that encoding appropriate background knowledge can be more important than the choice of graph learning algorithm.
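    What "encoding background knowledge in the graph" can mean is sketched below. All names and the encoding are hypothetical, not taken from the paper: a molecule is a labeled atom/bond graph, and background knowledge (here, a functional-group annotation) is added as extra node attributes that a graph learner could not derive from the plain atom/bond encoding alone.

    ```python
    # Illustrative sketch (hypothetical encoding, not the paper's):
    # enrich a plain molecular graph with background-knowledge labels
    # before handing it to a graph learner.

    def molecule_graph():
        """Plain atom/bond graph for ethanol (heavy atoms only)."""
        return {
            "nodes": {"C1": {"element": "C"}, "C2": {"element": "C"},
                      "O1": {"element": "O"}},
            "edges": [("C1", "C2", {"bond": "single"}),
                      ("C2", "O1", {"bond": "single"})],
        }

    def add_background_knowledge(g):
        """Annotate nodes with domain knowledge the raw encoding lacks:
        here, membership in a hydroxyl group. A graph learner can then
        match 'hydroxyl' patterns it otherwise could not express."""
        hydroxyl_carbons = {a for a, b, attrs in g["edges"]
                            if attrs["bond"] == "single"
                            and g["nodes"][b]["element"] == "O"}
        for node, attrs in g["nodes"].items():
            attrs["in_hydroxyl"] = (node in hydroxyl_carbons
                                    or attrs["element"] == "O")
        return g

    g = add_background_knowledge(molecule_graph())
    print(g["nodes"]["C2"]["in_hydroxyl"], g["nodes"]["C1"]["in_hydroxyl"])
    ```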

  • 4.
    Linusson, Henrik (Univ Borås, Dept Informat Technol, Borås, Sweden)
    Johansson, Ulf (Jönköping Univ, Dept Comp Sci & Informat, Jönköping, Sweden)
    Boström, Henrik (KTH, School of Engineering Sciences in Chemistry, Biotechnology and Health (CBH))
    Löfström, Tuve (Jönköping Univ, Dept Comp Sci & Informat, Jönköping, Sweden)
    Classification with Reject Option Using Conformal Prediction (2018). In: Advances in Knowledge Discovery and Data Mining, PAKDD 2018, Part I / [ed] Phung, D., Tseng, V. S., Webb, G. I., Ho, B., Ganji, M., Rashidi, L., Springer, 2018, Vol. 10937, p. 94-105. Conference paper (Refereed)
    Abstract [en]

    In this paper, we propose a practically useful means of interpreting the predictions produced by a conformal classifier. The proposed interpretation leads to a classifier with a reject option that allows the user to limit the number of erroneous predictions made on the test set, without any need to reveal the true labels of the test objects. The method described in this paper works by estimating the cumulative error count on a set of predictions provided by a conformal classifier, ordered by their confidence. Given a test set and a user-specified parameter k, the proposed classification procedure outputs the largest possible number of predictions containing on average at most k errors, while refusing to make predictions for test objects where it is too uncertain. We conduct an empirical evaluation using benchmark datasets and show that we are able to provide accurate estimates of the error rate on the test set.
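    The accept/reject procedure the abstract describes can be sketched as follows. Details are simplified relative to the paper: the `confidences` input is assumed to come from a conformal classifier (confidence = 1 minus the second-largest p-value), and the expected error of accepting a prediction is estimated as 1 minus its confidence.

    ```python
    # Illustrative sketch of a reject-option procedure (simplified,
    # not the paper's exact estimator): accept the largest
    # confidence-ordered prefix whose estimated error count is <= k.

    def predict_with_reject(confidences, k):
        """Return indices of accepted predictions, most confident first.

        The expected number of errors among accepted predictions is
        estimated as the sum of (1 - confidence); we accept the longest
        confidence-ordered prefix keeping that estimate at most k and
        reject (abstain on) the remaining test objects."""
        order = sorted(range(len(confidences)),
                       key=lambda i: confidences[i], reverse=True)
        accepted, est_errors = [], 0.0
        for i in order:
            if est_errors + (1.0 - confidences[i]) > k:
                break
            est_errors += 1.0 - confidences[i]
            accepted.append(i)
        return accepted

    confs = [0.99, 0.60, 0.95, 0.80, 0.99]
    print(predict_with_reject(confs, k=0.5))  # → [0, 4, 2, 3]: index 1 rejected
    ```

    Ordering by confidence means the rejected objects are exactly the ones the conformal classifier is least sure about, which is the abstract's "refusing to make predictions for test objects where it is too uncertain".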

  • 5.
    Linusson, Henrik (Department of Information Technology, University of Borås, Sweden)
    Norinder, Ulf (Swetox, Karolinska Institutet, Unit of Toxicology Sciences, Sweden)
    Boström, Henrik (KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS; Department of Computer and Systems Sciences, Stockholm University, Sweden)
    Johansson, Ulf (Jönköping University, JTH, Computer Science and Informatics)
    Löfström, Tuve (Jönköping University, JTH, Research environment Computer Science and Informatics)
    On the calibration of aggregated conformal predictors (2017). In: Proceedings of Machine Learning Research, Volume 60: Conformal and Probabilistic Prediction and Applications, 13-16 June 2017, Stockholm, Sweden / [ed] Alex Gammerman, Vladimir Vovk, Zhiyuan Luo, and Harris Papadopoulos, 2017, p. 154-173. Conference paper (Refereed)
    Abstract [en]

    Conformal prediction is a learning framework that produces models that associate with each of their predictions a measure of statistically valid confidence. These models are typically constructed on top of traditional machine learning algorithms. An important result of conformal prediction theory is that the models produced are provably valid under relatively weak assumptions—in particular, their validity is independent of the specific underlying learning algorithm on which they are based. Since validity is automatic, much research on conformal predictors has been focused on improving their informational and computational efficiency. As part of the efforts in constructing efficient conformal predictors, aggregated conformal predictors were developed, drawing inspiration from the field of classification and regression ensembles. Unlike early definitions of conformal prediction procedures, the validity of aggregated conformal predictors is not fully understood—while it has been shown that they might attain empirical exact validity under certain circumstances, their theoretical validity is conditional on additional assumptions that require further clarification. In this paper, we show why validity is not automatic for aggregated conformal predictors, and provide a revised definition of aggregated conformal predictors that gains approximate validity conditional on properties of the underlying learning algorithm.
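    The aggregation the abstract discusses can be sketched as a cross-conformal predictor that averages fold-wise p-values. The nonconformity measure below (distance to the class mean) is a deliberately simple toy, not taken from the paper, and the code only shows the aggregation mechanics; as the abstract stresses, averaging p-values this way is not automatically valid.

    ```python
    # Illustrative sketch (toy nonconformity measure, not the paper's):
    # an aggregated (cross-)conformal classifier averaging the p-values
    # of K fold-wise inductive conformal predictors.

    def p_value(cal_scores, test_score):
        """Standard conformal p-value from calibration nonconformity scores."""
        n = len(cal_scores)
        return (sum(s >= test_score for s in cal_scores) + 1) / (n + 1)

    def aggregated_p_value(xs, ys, x_test, label, folds=3):
        """Average the fold-wise p-values for `label` at x_test.
        Nonconformity: distance to the label's mean in the proper
        training part (a toy measure for 1-D data)."""
        p_vals = []
        for f in range(folds):
            cal = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % folds == f]
            train = [(x, y) for i, (x, y) in enumerate(zip(xs, ys)) if i % folds != f]
            label_xs = [x for x, y in train if y == label]
            mean = sum(label_xs) / len(label_xs)
            cal_scores = [abs(x - mean) for x, y in cal if y == label]
            p_vals.append(p_value(cal_scores, abs(x_test - mean)))
        return sum(p_vals) / folds

    xs = [0.1, 0.2, 0.15, 0.9, 1.0, 0.95, 0.12, 0.88, 0.18, 0.92]
    ys = [0, 0, 0, 1, 1, 1, 0, 1, 0, 1]
    print(aggregated_p_value(xs, ys, 0.93, label=1),
          aggregated_p_value(xs, ys, 0.93, label=0))
    ```

    A test point near the class-1 cluster gets a much larger averaged p-value for label 1 than for label 0, as expected; the paper's contribution is characterizing when such averaged p-values retain (approximate) validity.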

  • 6.
    Rao, W.
    Boström, Henrik (KTH, Superseded Departments (pre-2005), Numerical Analysis and Computer Science, NADA)
    Xie, S.
    Rule induction for structural damage identification (2004). In: Proc. Int. Conf. Machine Learning and Cybernetics, 2004, p. 2865-2869. Conference paper (Refereed)
    Abstract [en]

    Structural damage identification is becoming a worldwide research subject. Machine learning methods have been used to solve this problem, most of them neural network methods. In this paper, three different rule induction methods, namely Divide-and-Conquer (DAC), Bagging and Separate-and-Conquer (SAC), are investigated for predicting the damage position and extent of a concrete beam. A radial basis function neural network (RBFNN) is used for comparative purposes. The rule induction methods, especially Bagging, are shown to obtain good predictive performance.
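    The Separate-and-Conquer (SAC) strategy named in the abstract is a classic covering scheme: learn one rule, remove the examples it covers, repeat. The sketch below is the generic textbook version on hypothetical toy data, not the paper's learner or its beam-damage features.

    ```python
    # Illustrative sketch (generic textbook version, not the paper's
    # exact learner): Separate-and-Conquer rule induction.

    def best_condition(examples):
        """Greedily pick the (feature, value) test covering the most
        positives and no negatives; None if no pure test exists."""
        best, best_cover = None, 0
        for f in range(len(examples[0][0])):
            for v in {x[f] for x, _ in examples}:
                pos = sum(1 for x, y in examples if x[f] == v and y == 1)
                neg = sum(1 for x, y in examples if x[f] == v and y == 0)
                if neg == 0 and pos > best_cover:
                    best, best_cover = (f, v), pos
        return best

    def separate_and_conquer(examples):
        """Learn a rule list for the positive class by covering."""
        rules, examples = [], list(examples)
        while any(y == 1 for _, y in examples):
            cond = best_condition(examples)
            if cond is None:          # no pure condition left; give up
                break
            rules.append(cond)
            f, v = cond
            # "Separate": drop positives this rule covers, keep the rest.
            examples = [(x, y) for x, y in examples
                        if not (x[f] == v and y == 1)]
        return rules

    def predict(rules, x):
        """A rule list predicts positive if any rule fires."""
        return int(any(x[f] == v for f, v in rules))

    data = [(("round", "big"), 1), (("round", "small"), 1),
            (("square", "big"), 0), (("square", "small"), 1)]
    rules = separate_and_conquer(data)
    print(rules)  # → [(0, 'round'), (1, 'small')]
    ```

    Divide-and-Conquer (decision-tree induction) instead splits the whole example set recursively; the covering loop above is what distinguishes SAC from it.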

  • 7.
    Vasiloudis, Theodore (KTH, School of Electrical Engineering and Computer Science (EECS), Computational Science and Technology (CST); RISE)
    Cho, Hyunsu (Amazon Web Services)
    Boström, Henrik (KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS)
    Block-distributed Gradient Boosted Trees (2019). Conference paper (Refereed)