Efficient Venn predictors using random forests
Jonkoping Univ, Dept Comp Sci & Informat, Jonkoping, Sweden.
Jonkoping Univ, Dept Comp Sci & Informat, Jonkoping, Sweden.
Univ Boras, Dept Informat Technol, Boras, Sweden.
KTH, School of Electrical Engineering and Computer Science (EECS), Software and Computer Systems, SCS.
2019 (English). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565, Vol. 108, no. 3, pp. 535-550. Article in journal (Refereed). Published.
Abstract [en]

Successful use of probabilistic classification requires well-calibrated probability estimates, i.e., the predicted class probabilities must correspond to the true probabilities. In addition, a probabilistic classifier must, of course, also be as accurate as possible. In this paper, Venn predictors, and their special case, Venn-Abers predictors, are evaluated for probabilistic classification, using random forests as the underlying models. Venn predictors output multiple probabilities for each label, i.e., the predicted label is associated with a probability interval. Since all Venn predictors are valid in the long run, the size of the probability intervals is very important, with tighter intervals being more informative. The standard solution when calibrating a classifier is to employ an additional step, transforming the outputs from a classifier into probability estimates, using a labeled data set not employed for training of the models. For random forests, and other bagged ensembles, it is, however, possible to use the out-of-bag instances for calibration, making all training data available for both model learning and calibration. This procedure has previously been successfully applied to conformal prediction, but was here evaluated for the first time for Venn predictors. The empirical investigation, using 22 publicly available data sets, showed that all four versions of the Venn predictors were better calibrated than both the raw estimates from the random forest and the standard techniques Platt scaling and isotonic regression. Regarding both informativeness and accuracy, the standard Venn predictor calibrated on out-of-bag instances was the best setup evaluated. Most importantly, calibrating on out-of-bag instances, instead of using a separate calibration set, resulted in tighter intervals and more accurate models on every data set, for both the Venn predictors and the Venn-Abers predictors.
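The out-of-bag calibration idea described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it uses scikit-learn's `oob_decision_function_` as the calibration scores (so every training instance is scored only by trees that did not see it) and applies a simplified Venn-Abers step, fitting isotonic regression twice per test object, once with each hypothetical label, to obtain a probability interval. Dataset, model settings, and the helper name `venn_abers` are illustrative assumptions.

```python
# Sketch of out-of-bag Venn-Abers calibration of a random forest.
# Not the authors' exact procedure; a simplified illustration only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.isotonic import IsotonicRegression
from sklearn.model_selection import train_test_split

# Synthetic binary classification data (illustrative, not from the paper).
X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=100, random_state=0)

# oob_score=True makes sklearn compute out-of-bag class probabilities,
# so all training data serve for both model learning and calibration.
rf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
rf.fit(X_tr, y_tr)
oob_scores = rf.oob_decision_function_[:, 1]  # OOB P(class 1) per instance

def venn_abers(cal_scores, cal_labels, s):
    """Probability interval [p0, p1] for one test score s.

    Fits isotonic regression on the calibration scores plus the test
    score, once per hypothetical label of the test object.
    """
    probs = []
    for g in (0, 1):
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(np.append(cal_scores, s), np.append(cal_labels, g))
        probs.append(float(iso.predict([s])[0]))
    return min(probs), max(probs)

test_scores = rf.predict_proba(X_te)[:, 1]
intervals = [venn_abers(oob_scores, y_tr, s) for s in test_scores]
p0, p1 = intervals[0]  # probability interval for the first test object
```

Tighter intervals (smaller `p1 - p0`) are more informative; the paper's comparison is precisely between intervals obtained this way and those from a held-out calibration set.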

Place, publisher, year, edition, pages
Springer, 2019. Vol. 108, no. 3, pp. 535-550
Keywords [en]
Probabilistic prediction, Venn predictors, Venn-Abers predictors, Random forests, Out-of-bag calibration
HSV category
Identifiers
URN: urn:nbn:se:kth:diva-246235
DOI: 10.1007/s10994-018-5753-x
ISI: 000459945900008
Scopus ID: 2-s2.0-85052523706
OAI: oai:DiVA.org:kth-246235
DiVA, id: diva2:1302122
Note

QC 20190403

Available from: 2019-04-03. Created: 2019-04-03. Last updated: 2019-04-03. Bibliographically checked.

Open Access in DiVA

Full text is missing in DiVA

Other links

Publisher's full text · Scopus

Person records

Boström, Henrik