Optimal Viterbi Bayesian predictive classification for data from finite alphabets
2013 (English). In: Journal of Statistical Planning and Inference, ISSN 0378-3758, Vol. 143, no. 2, pp. 261-275. Article in journal (Refereed). Published.
A family of Viterbi Bayesian predictive classifiers has recently been popularized for speech recognition applications, where continuous acoustic signals are modeled by finite mixture densities embedded in a hidden Markov framework. Here we generalize such classifiers to sequentially observed data from multiple finite alphabets and derive the optimal predictive classifier under exchangeability of the emitted symbols. We demonstrate that the optimal predictive classifier, which learns from unlabelled test items, improves considerably upon the marginal maximum a posteriori rule in the presence of sparse training data. It is shown that the learning process saturates as the amount of test data tends to infinity, so that no further gain in classification accuracy is possible from the arrival of new test items in the long run.
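To illustrate the kind of predictive rule the abstract refers to, the sketch below implements a minimal Bayesian predictive classifier for symbols from a single finite alphabet under a symmetric Dirichlet prior (the Dirichlet-multinomial, or Pólya, sequential predictive rule). This is a simplified illustration, not the paper's Viterbi-based method: the alphabet, the training sequences, and the `alpha` hyperparameter are all assumptions made for the example, and the learning-from-test-data effect shown here (counts updated as test symbols arrive) only gestures at the saturation behavior the paper analyzes.

```python
import math
from collections import Counter

ALPHABET = "ACGT"  # illustrative finite alphabet (assumption, not from the paper)

def log_predictive(seq, train_counts, alpha=1.0):
    """Log posterior-predictive probability of `seq` under a categorical model
    with a symmetric Dirichlet(alpha) prior, given training symbol counts.
    Sequentially, P(next symbol = s) = (n_s + alpha) / (n + K * alpha),
    and counts are updated after each observed test symbol -- this is the
    'learning from unlabelled test items' effect in miniature."""
    counts = dict(train_counts)
    n = sum(counts.get(s, 0) for s in ALPHABET)
    K = len(ALPHABET)
    logp = 0.0
    for s in seq:
        logp += math.log((counts.get(s, 0) + alpha) / (n + K * alpha))
        counts[s] = counts.get(s, 0) + 1  # update with the observed symbol
        n += 1
    return logp

def classify(seq, class_train):
    """Assign `seq` to the class whose predictive probability is highest
    (uniform class prior assumed for simplicity)."""
    scores = {c: log_predictive(seq, Counter(train))
              for c, train in class_train.items()}
    return max(scores, key=scores.get)
```

For example, with sparse training data `{"classA": "AAAAAG", "classT": "TTTTTA"}`, the test item `"AAAG"` is assigned to `classA`, since the Dirichlet-smoothed predictive probability of the A-heavy sequence is far higher under that class's counts.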
Keywords: Bayesian learning, Hidden Markov models, Predictive classification
Identifiers
URN: urn:nbn:se:kth:diva-107602
DOI: 10.1016/j.jspi.2012.07.013
ISI: 000310942200004
ScopusID: 2-s2.0-84867736475
OAI: oai:DiVA.org:kth-107602
DiVA: diva2:577043
QC 20121214. Bibliographically approved 2012-12-14.