Efficient Venn predictors using random forests
Jonkoping Univ, Dept Comp Sci & Informat, Jonkoping, Sweden.
Jonkoping Univ, Dept Comp Sci & Informat, Jonkoping, Sweden.
Univ Boras, Dept Informat Technol, Boras, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer systems, SCS.
2019 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 108, no. 3, pp. 535-550. Article in journal (Refereed). Published.
Abstract [en]

Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. In addition, a probabilistic classifier must, of course, also be as accurate as possible. In this paper, Venn predictors, and their special case Venn-Abers predictors, are evaluated for probabilistic classification, using random forests as the underlying models. Venn predictors output multiple probabilities for each label, i.e., the predicted label is associated with a probability interval. Since all Venn predictors are valid in the long run, the size of the probability intervals is very important, with tighter intervals being more informative. The standard solution when calibrating a classifier is to employ an additional step, transforming the outputs from a classifier into probability estimates, using a labeled data set not employed for training of the models. For random forests, and other bagged ensembles, it is, however, possible to use the out-of-bag instances for calibration, making all training data available for both model learning and calibration. This procedure has previously been successfully applied to conformal prediction, but is here evaluated for the first time for Venn predictors. The empirical investigation, using 22 publicly available data sets, showed that all four versions of the Venn predictors were better calibrated than both the raw estimates from the random forest and the standard techniques Platt scaling and isotonic regression. Regarding both informativeness and accuracy, the standard Venn predictor calibrated on out-of-bag instances was the best setup evaluated. Most importantly, calibrating on out-of-bag instances, instead of using a separate calibration set, resulted in tighter intervals and more accurate models on every data set, for both the Venn predictors and the Venn-Abers predictors.
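The out-of-bag calibration idea described in the abstract can be sketched with scikit-learn. This minimal sketch uses isotonic regression (one of the baseline calibrators in the paper) rather than a full Venn or Venn-Abers predictor, and a synthetic data set in place of the 22 public data sets, so it illustrates only the key point: every training instance receives a score from trees that never saw it, leaving all labeled data available for both model learning and calibration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression

# Synthetic binary classification task (stand-in for the paper's data sets).
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# With oob_score=True, the forest records an out-of-bag prediction for each
# training instance, aggregated only over trees whose bootstrap sample
# excluded that instance.
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X, y)

# Out-of-bag probability estimates of class 1 for every training instance.
oob_scores = rf.oob_decision_function_[:, 1]

# Calibrate on the out-of-bag scores: no separate calibration set is held
# out, mirroring the out-of-bag calibration procedure evaluated in the paper.
calibrator = IsotonicRegression(out_of_bounds="clip")
calibrator.fit(oob_scores, y)

# Calibrated probabilities for (new) instances are the fitted monotone
# mapping applied to the forest's raw probability estimates.
raw = rf.predict_proba(X)[:, 1]
calibrated = calibrator.predict(raw)
```

A Venn-Abers predictor would instead fit two isotonic regressions (one per tentative label of the test instance) and report the resulting probability interval; the out-of-bag scores would feed that construction in exactly the same way.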

Place, publisher, year, edition, pages
Springer, 2019. Vol. 108, no. 3, pp. 535-550
Keywords [en]
Probabilistic prediction, Venn predictors, Venn-Abers predictors, Random forests, Out-of-bag calibration
National Category
Computer Sciences
Identifiers
URN: urn:nbn:se:kth:diva-246235
DOI: 10.1007/s10994-018-5753-x
ISI: 000459945900008
Scopus ID: 2-s2.0-85052523706
OAI: oai:DiVA.org:kth-246235
DiVA id: diva2:1302122
Note

QC 20190403

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-03. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text | Scopus

Authority records

Boström, Henrik

