Have I seen you before?: Principles of Bayesian predictive classification revisited
2013 (English). In: Statistics and Computing, ISSN 0960-3174, E-ISSN 1573-1375, Vol. 23, no. 1, pp. 59-73. Article in journal (Refereed). Published.
A general inductive Bayesian classification framework is considered using a simultaneous predictive distribution for test items. We introduce a principle of generative supervised and semi-supervised classification based on marginalizing the joint posterior distribution of labels for all test items. The simultaneous and marginalized classifiers arise under different loss functions, while both jointly acknowledge all uncertainty about the labels of test items and the generating probability measures of the classes. We illustrate, for data from multiple finite alphabets, that when training data are sparse such classifiers achieve higher correct classification rates than a standard marginal predictive classifier that labels all test items independently. In the supervised case for multiple finite alphabets, the simultaneous and the marginal classifiers are proven to become equal under generalized exchangeability as the amount of training data increases. Hence, the marginal classifier can be interpreted as an asymptotic approximation to the simultaneous classifier for finite sets of training data. It is also shown that such convergence is not guaranteed in the semi-supervised setting, where the marginal classifier does not provide a consistent approximation.
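The distinction between the two classifiers can be sketched in code. The following is a minimal illustrative example (not taken from the paper) for a single finite alphabet with a symmetric Dirichlet prior on each class-specific multinomial and a uniform prior over test labels: the marginal classifier labels each test item independently by its posterior predictive probability, while the simultaneous classifier maximizes the joint predictive probability of the whole label vector, updating the sufficient statistics sequentially (Pólya-urn style) as test items are assigned. All function and variable names here are hypothetical.

```python
import itertools
import numpy as np

def log_pred(counts, x, alpha=1.0):
    """Log posterior predictive probability of symbol x under a
    Dirichlet(alpha) prior, given observed symbol counts for one class."""
    return np.log(counts[x] + alpha) - np.log(counts.sum() + alpha * len(counts))

def marginal_classify(train_counts, test, alpha=1.0):
    """Marginal classifier: label each test item independently by the
    class maximizing its posterior predictive probability."""
    return [max(train_counts, key=lambda c: log_pred(train_counts[c], x, alpha))
            for x in test]

def simultaneous_classify(train_counts, test, alpha=1.0):
    """Simultaneous classifier: enumerate all joint label vectors and pick
    the one maximizing the joint predictive probability of the test items.
    Assumes a uniform prior over label vectors, so it cancels in the argmax."""
    classes = list(train_counts)
    best, best_lp = None, -np.inf
    for labels in itertools.product(classes, repeat=len(test)):
        # Sequentially update counts as test items are assigned (Pólya urn),
        # which makes the labels of the test items mutually dependent.
        counts = {c: train_counts[c].copy() for c in classes}
        lp = 0.0
        for x, c in zip(test, labels):
            lp += log_pred(counts[c], x, alpha)
            counts[c][x] += 1
        if lp > best_lp:
            best, best_lp = labels, lp
    return list(best)

# Toy example: binary alphabet {0, 1}, two classes with sparse training counts.
train_counts = {"A": np.array([3, 1]), "B": np.array([1, 3])}
test_items = [0, 1]
marginal = marginal_classify(train_counts, test_items)
joint = simultaneous_classify(train_counts, test_items)
```

With ample training data the two procedures coincide (the paper's supervised convergence result); the brute-force enumeration above is exponential in the number of test items and is meant only to make the joint-versus-independent labeling contrast concrete.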
Place, publisher, year, edition, pages
Springer Berlin/Heidelberg, 2013. Vol. 23, no. 1, pp. 59-73.
Keywords: Classification, Exchangeability, Inductive learning, Predictive inference
Computer and Information Science
Identifiers
URN: urn:nbn:se:kth:diva-88088
DOI: 10.1007/s11222-011-9291-7
ISI: 000313731400005
ScopusID: 2-s2.0-84872607314
OAI: oai:DiVA.org:kth-88088
DiVA: diva2:502193
Funder: EU, European Research Council, 239784
QC 20130204. Available from: 2012-02-14. Created: 2012-02-14. Last updated: 2013-02-14. Bibliographically approved.