2024 (English) In: Proceedings of the 17th ACM SIGPLAN International Conference on Software Language Engineering, SLE 2024 / [ed] Lämmel, R., Pereira, J. A., Mosses, P. D., Association for Computing Machinery (ACM), 2024, p. 196-209. Conference paper, Published paper (Refereed)
Abstract [en]
Hidden Markov models (HMMs) are frequently used in areas such as speech recognition and bioinformatics. However, implementing HMM algorithms correctly and efficiently is time-consuming and error-prone. In particular, using model-specific knowledge, such as sparsity in the transition probability matrix, to improve performance ties the implementation to a particular model, making it harder to modify. Previous work has introduced high-level frameworks for defining HMMs, lifting the burden of efficiently implementing HMM algorithms from the user. However, existing tools are ill-suited for sparse HMMs with many states. This paper introduces Trellis, a domain-specific language for succinctly defining sparse HMMs, which uses GPU acceleration to achieve high performance. We show that Trellis outperforms previous work and is on par with a hand-written CUDA kernel implementation for a particular sparse HMM.
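To make the sparsity argument concrete, the following is a minimal illustrative sketch (not code from the paper; the data representation and function name are hypothetical) of a log-space forward algorithm that sums only over each state's nonzero-probability predecessors, so the per-step cost scales with the number of nonzero transitions rather than with the square of the number of states:

```python
import numpy as np

def forward_sparse(init, trans, emit, obs):
    """Log-space forward algorithm over a sparse transition structure.

    init:  (S,) initial state log-probabilities
    trans: dict mapping state s -> list of (predecessor, log_prob) pairs,
           i.e. only the nonzero entries of the transition matrix
    emit:  (S, O) emission log-probabilities
    obs:   sequence of observation indices (each state assumed reachable)
    """
    alpha = init + emit[:, obs[0]]
    for o in obs[1:]:
        new = np.full_like(alpha, -np.inf)
        for s, preds in trans.items():
            # Sum only over predecessors that can actually reach state s,
            # instead of over all S states as in the dense recursion.
            vals = [alpha[p] + lp for p, lp in preds]
            new[s] = np.logaddexp.reduce(vals) + emit[s, o]
        alpha = new
    # Total log-likelihood of the observation sequence.
    return np.logaddexp.reduce(alpha)
```

Note how hard-coding a predecessor structure like trans ties the code to one particular model, which is exactly the maintainability problem the abstract describes and that a DSL such as Trellis is designed to abstract away.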
Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
Hidden Markov models, DSL, parallelization, GPU acceleration
National Category
Probability Theory and Statistics
Identifiers
urn:nbn:se:kth:diva-357515 (URN)
10.1145/3687997.3695641 (DOI)
001344239100017 (ISI)
2-s2.0-85210805499 (Scopus ID)
Conference
17th ACM SIGPLAN International Conference on Software Language Engineering (SLE), October 20-21, 2024, Pasadena, CA
Note
Part of ISBN 979-8-4007-1180-0
Available from: 2024-12-09 Created: 2024-12-09 Last updated: 2025-05-27 Bibliographically approved