kth.se Publications (KTH)
Publications (10 of 93)
Al Hafiz, M. I., Ravichandran, N. B., Lansner, A., Herman, P. & Podobas, A. (2025). A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks. In: Applied Reconfigurable Computing. Architectures, Tools, and Applications - 21st International Symposium, ARC 2025, Proceedings. Paper presented at 21st International Symposium on Applied Reconfigurable Computing, ARC 2025, Seville, Spain, April 9-11, 2025 (pp. 196-213). Springer Nature.
A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks
2025 (English). In: Applied Reconfigurable Computing. Architectures, Tools, and Applications - 21st International Symposium, ARC 2025, Proceedings, Springer Nature, 2025, p. 196-213. Conference paper, Published paper (Refereed)
Abstract [en]

Brain-like algorithms are attractive and emerging alternatives to classical deep learning methods for use in various machine learning applications. Brain-like systems can feature local learning rules, unsupervised/semi-supervised learning, and different types of plasticity (structural/synaptic), allowing them to potentially be faster and more energy-efficient than traditional machine learning alternatives. Among the more salient brain-like algorithms are Bayesian Confidence Propagation Neural Networks (BCPNNs). BCPNN is an important tool for both machine learning and computational neuroscience research, and recent work shows that BCPNN can reach state-of-the-art performance in tasks such as learning and memory recall compared to other models. Unfortunately, BCPNN is primarily executed on slow general-purpose processors (CPUs) or power-hungry graphics processing units (GPUs), limiting the applicability of BCPNN in Edge systems, among others. In this work, we design a reconfigurable stream-based accelerator for BCPNN on Field-Programmable Gate Arrays (FPGAs) using the Xilinx Vitis High-Level Synthesis (HLS) flow. Furthermore, we model our accelerator's performance from first principles, and we empirically show that our proposed accelerator (full-featured kernel, non-structural plasticity) is between 1.3x and 5.3x faster than an Nvidia A100 GPU while consuming between 2.62x and 3.19x less power and between 5.8x and 16.5x less energy, without any degradation in performance.
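At its core, BCPNN learning maintains running probability estimates of unit activations and co-activations and derives log-odds weights from them. A minimal NumPy sketch of this style of update (the time constant, regularization epsilon, and all names are illustrative assumptions, not the accelerator's implementation):

```python
import numpy as np

def bcpnn_update(pre, post, p_i, p_j, p_ij, tau=100.0, eps=1e-4):
    """One incremental BCPNN-style trace update (illustrative sketch).

    pre, post : activation vectors in [0, 1]
    p_i, p_j  : running estimates of unit activation probabilities
    p_ij      : running estimate of co-activation probabilities
    """
    k = 1.0 / tau
    # Exponentially smoothed probability traces
    p_i = (1 - k) * p_i + k * pre
    p_j = (1 - k) * p_j + k * post
    p_ij = (1 - k) * p_ij + k * np.outer(pre, post)
    # Bayesian weights and biases; eps keeps the logs finite
    w = np.log((p_ij + eps ** 2) / (np.outer(p_i, p_j) + eps ** 2))
    b = np.log(p_j + eps)
    return p_i, p_j, p_ij, w, b
```

Repeatedly co-activating a unit pair drives its weight above that of uncorrelated pairs, which is the behavior the streaming accelerator has to reproduce at high throughput.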

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
BCPNN, FPGA, HLS, Neuromorphic
National Category
Computer Sciences; Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-363095 (URN), 10.1007/978-3-031-87995-1_12 (DOI), 2-s2.0-105002874652 (Scopus ID)
Conference
21st International Symposium on Applied Reconfigurable Computing, ARC 2025, Seville, Spain, April 9-11, 2025
Note

Part of ISBN 9783031879944

QC 20250922

Available from: 2025-05-06. Created: 2025-05-06. Last updated: 2025-09-22. Bibliographically approved.
Duan, Z., Kizyte, A., Butler Forslund, E., Gutierrez-Farewik, E., Herman, P. & Wang, R. (2025). In vivo estimation of motor unit intrinsic properties in individuals with spinal cord injury. Journal of NeuroEngineering and Rehabilitation, 22(1), Article ID 128.
In vivo estimation of motor unit intrinsic properties in individuals with spinal cord injury
2025 (English). In: Journal of NeuroEngineering and Rehabilitation, E-ISSN 1743-0003, Vol. 22, no. 1, article id 128. Article in journal (Refereed), Published
Abstract [en]

Background: Individuals who have experienced spinal cord injury (SCI) may exhibit various muscle-related neurophysiological adaptations, including alterations in motor unit (MU) size and firing behavior. However, due to the technical challenges of in vivo measurement, our understanding of the alterations in the electrophysiological parameters of these MUs remains limited. This study proposed an integrated approach using high-density electromyography (HD-EMG) decomposition and motor neuron (MN) modelling to estimate the intrinsic properties of MUs in vivo and investigated alterations of these properties in persons with SCI.

Methods: HD-EMG signals were recorded during submaximal isometric dorsiflexion and plantar flexion tasks on the tibialis anterior (TA), soleus, and gastrocnemius medialis muscles of twenty-six participants with SCI and eighteen non-disabled controls. The HD-EMG signals were subsequently decomposed into MN spike trains, and the common synaptic input to the MN pool was estimated. A simplified leaky integrate-and-fire neuron model was then used to simulate MN spike trains, with soma size and inert period as tuning parameters, which are crucial for MU recruitment and firing patterns, respectively. These parameters were estimated by fitting the instantaneous discharge frequencies of decomposed and simulated spike trains via a genetic algorithm.
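The fitting described above rests on a simplified leaky integrate-and-fire (LIF) neuron in which soma size shapes recruitment and the inert period caps the firing rate. A minimal sketch of such a neuron; all parameter names and values are illustrative, not the study's code:

```python
import numpy as np

def lif_spike_train(I, soma_size=1.0, inert_period=20, dt=1.0,
                    tau_m=10.0, v_rest=0.0, v_thresh=15.0):
    """Simplified leaky integrate-and-fire neuron (illustrative sketch).

    soma_size scales the effective input resistance (a larger soma gives
    a smaller voltage response, hence later recruitment); inert_period is
    a refractory time in steps during which the neuron cannot fire.
    Returns the spike times (step indices) for input current trace I.
    """
    v = v_rest
    refractory = 0
    spikes = []
    for t, i_t in enumerate(I):
        if refractory > 0:
            refractory -= 1
            continue
        # Membrane leak plus soma-size-scaled synaptic drive
        v += (-(v - v_rest) + i_t / soma_size) * dt / tau_m
        if v >= v_thresh:
            spikes.append(t)
            v = v_rest
            refractory = inert_period
    return spikes
```

With a constant drive, lengthening the inert period lowers the firing rate, which is exactly the signature the study reports for the TA muscle after SCI.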

Results: The results showed a prolonged inert period in the TA of the persons with SCI. This finding suggested that the MUs in the TA have a slower recovery period before becoming excitable again, which may result in a lower firing rate of MUs in the TA muscle. No significant differences were observed in the soleus and gastrocnemius medialis muscles between the SCI and control groups for either the soma size or inert period parameters.

Conclusions: The simplified leaky integrate-and-fire model proved robust for estimating MN parameters in vivo, offering valuable insights into personalized monitoring of MU behavior. To the best of the authors' knowledge, this is the first study to combine HD-EMG and MU modeling to investigate MU electrophysiological changes in persons with SCI in vivo. This novel approach offers a comprehensive understanding of how MU properties adapt following neurological disorders and informs the development of novel rehabilitation strategies.

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
Discharge rate, HD-EMG decomposition, Motor neuron modelling, Motor neuron Spike trains, Soma size
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-366020 (URN), 10.1186/s12984-025-01659-z (DOI), 001502147500001 (), 40468383 (PubMedID), 2-s2.0-105007449220 (Scopus ID)
Note

QC 20250703

Available from: 2025-07-03. Created: 2025-07-03. Last updated: 2025-07-03. Bibliographically approved.
Christiansen, F., Konuk, E., Ganeshan, A. R., Welch, R., Palés Huix, J., Czekierdowski, A., . . . Epstein, E. (2025). International multicenter validation of AI-driven ultrasound detection of ovarian cancer. Nature Medicine, 31(1), 189-196.
International multicenter validation of AI-driven ultrasound detection of ovarian cancer
2025 (English). In: Nature Medicine, ISSN 1078-8956, E-ISSN 1546-170X, Vol. 31, no. 1, p. 189-196. Article in journal (Refereed), Published
Abstract [en]

Ovarian lesions are common and often incidentally detected. A critical shortage of expert ultrasound examiners has raised concerns of unnecessary interventions and delayed cancer diagnoses. Deep learning has shown promising results in the detection of ovarian cancer in ultrasound images; however, external validation is lacking. In this international multicenter retrospective study, we developed and validated transformer-based neural network models using a comprehensive dataset of 17,119 ultrasound images from 3,652 patients across 20 centers in eight countries. Using a leave-one-center-out cross-validation scheme, for each center in turn, we trained a model using data from the remaining centers. The models demonstrated robust performance across centers, ultrasound systems, histological diagnoses and patient age groups, significantly outperforming both expert and non-expert examiners on all evaluated metrics, namely F1 score, sensitivity, specificity, accuracy, Cohen's kappa, Matthews correlation coefficient, diagnostic odds ratio and Youden's J statistic. Furthermore, in a retrospective triage simulation, artificial intelligence (AI)-driven diagnostic support reduced referrals to experts by 63% while significantly surpassing the diagnostic performance of the current practice. These results show that transformer-based models exhibit strong generalization and above human expert-level diagnostic accuracy, with the potential to alleviate the shortage of expert ultrasound examiners and improve patient outcomes.
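The leave-one-center-out scheme described above can be sketched generically: hold out one center at a time, train on the remaining centers, and evaluate on the held-out one. The `fit`/`predict` callables below are placeholders for any model, not the study's code:

```python
import numpy as np

def leave_one_center_out(X, y, centers, fit, predict):
    """Leave-one-center-out cross-validation (illustrative sketch).

    X       : (n_samples, n_features) data
    y       : (n_samples,) labels
    centers : (n_samples,) center id per sample
    fit     : callable(X_train, y_train) -> model
    predict : callable(model, X_test) -> predicted labels
    Returns a dict mapping each center id to held-out accuracy.
    """
    results = {}
    for c in np.unique(centers):
        test = centers == c
        model = fit(X[~test], y[~test])          # train on other centers
        preds = predict(model, X[test])          # evaluate on held-out center
        results[c] = float(np.mean(preds == y[test]))
    return results
```

The appeal of this scheme is that every reported score measures generalization to a site whose scanners and patient population the model never saw during training.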

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Cancer and Oncology Computer graphics and computer vision
Identifiers
urn:nbn:se:kth:diva-371960 (URN), 10.1038/s41591-024-03329-4 (DOI), 001388159800001 (), 39747679 (PubMedID), 2-s2.0-85214010322 (Scopus ID)
Note

Not duplicate with diva 1905526

QC 20251022

Available from: 2025-10-22. Created: 2025-10-22. Last updated: 2025-10-22. Bibliographically approved.
Kurfali, M., Herman, P., Pierzchajlo, S., Olofsson, J. & Horberg, T. (2025). Representations of smells: The next frontier for language models? Cognition, 264, Article ID 106243.
Representations of smells: The next frontier for language models?
2025 (English). In: Cognition, ISSN 0010-0277, E-ISSN 1873-7838, Vol. 264, article id 106243. Article in journal (Refereed), Published
Abstract [en]

Whereas human cognition develops through perceptually driven interactions with the environment, language models (LMs) are "disembodied learners," which might limit their usefulness as model systems. We evaluate the ability of LMs to recover sensory information from natural language, addressing a significant gap in the cognitive science research literature. Our investigation is carried out through the sense of smell, olfaction, because it is severely underrepresented in natural language and thus poses a unique challenge for linguistic and cognitive modeling. By systematically evaluating three generations of LMs, including static word embedding models (Word2Vec, FastText), encoder-based models (BERT), and decoder-based large LMs (LLMs; GPT-4o and Llama 3.1, among others), under nearly 200 training configurations, we investigate their proficiency in acquiring information to approximate human odor perception from textual data. As benchmarks for the performance of the LMs, we use three diverse experimental odor datasets covering odor similarity ratings, imagined similarities of odor pairings from word labels, and odor-to-label ratings. The results reveal the possibility for LMs to accurately represent olfactory information and describe the conditions under which this possibility is realized. Static, simpler models perform best in capturing odor-perceptual similarities under certain training configurations, while GPT-4o excels in simulating olfactory-semantic relationships, as suggested by its superior performance on datasets where the collected odor similarities are derived from word-based assessments. Our findings show that natural language encodes latent information about human olfactory perception that is retrievable through text-based LMs to varying degrees. Our research shows promise for LMs to be useful tools in investigating the long-debated relation between symbolic representations and perceptual experience in cognitive science.
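One common way to probe whether embeddings recover perceptual structure, in the spirit of the evaluations described above, is to correlate embedding cosine similarities of odor-label pairs with human similarity ratings. A generic sketch; the data and all names are invented for illustration:

```python
import numpy as np

def odor_similarity_correlation(vectors, ratings):
    """Correlate embedding similarity with human odor similarity ratings.

    vectors : dict mapping an odor word to its embedding vector
    ratings : dict mapping a (word_a, word_b) pair to a human rating
    Returns the Pearson correlation between cosine similarities and
    human ratings across the rated pairs (illustrative sketch only).
    """
    cos, human = [], []
    for (a, b), r in ratings.items():
        va, vb = vectors[a], vectors[b]
        cos.append(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
        human.append(r)
    return float(np.corrcoef(cos, human)[0, 1])
```

A high correlation on such pairs is evidence that the text-derived space preserves some of the perceptual geometry of odors.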

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Large language models, Olfaction, Human perception, Chemical senses, Human perception modeling
National Category
Natural Language Processing
Identifiers
urn:nbn:se:kth:diva-372847 (URN), 10.1016/j.cognition.2025.106243 (DOI), 001539064900001 (), 40675053 (PubMedID), 2-s2.0-105010697892 (Scopus ID)
Note

QC 20251114

Available from: 2025-11-14. Created: 2025-11-14. Last updated: 2025-11-14. Bibliographically approved.
Chrysanthidis, N., Fiebig, F., Lansner, A. & Herman, P. (2025). Short-term plasticity influences episodic memory recall: an interplay of synaptic traces in a spiking neural network model. Scientific Reports, 15(1), Article ID 28164.
Short-term plasticity influences episodic memory recall: an interplay of synaptic traces in a spiking neural network model
2025 (English). In: Scientific Reports, E-ISSN 2045-2322, Vol. 15, no. 1, article id 28164. Article in journal (Refereed), Published
Abstract [en]

We investigated the interaction of episodic memory processes with the short-term dynamics of recency effects. This work takes inspiration from a seminal experimental study involving an odor-in-context association task conducted on rats. In the experimental task, rats were presented with odor pairs in two arenas serving as old or new contexts for specific odor items. Rats were rewarded for selecting the odor that was new to the current context. These new-in-context odor items were deliberately presented with higher recency relative to old-in-context items, so that episodic memory was put in conflict with a short-term recency effect. To test our hypothesis about the major role of synaptic interplay of plasticity phenomena on different timescales in explaining rats' performance in such episodic memory tasks, we built a computational spiking neural network model consisting of two reciprocally connected networks that stored contextual and odor information as stable distributed memory patterns. We simulated the experimental task, resulting in a dynamic context-item coupling between the two networks by means of Bayesian-Hebbian plasticity with eligibility traces to account for reward-based learning. We first quantitatively reproduced and mechanistically explained the findings of the experimental study, and then, to further differentiate the impact of short-term plasticity, we simulated an alternative task with old-in-context items presented with higher recency, thus synergistically confounding episodic memory with effects of recency. Our model predicted that higher recency of old-in-context items enhances episodic memory by boosting the activations of old-in-context items. We argue that the model offers a computational framework for studying behavioral implications of the synaptic underpinning of different memory effects in experimental episodic memory paradigms.
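The reward-based learning mechanism mentioned above, Hebbian plasticity gated by eligibility traces, can be sketched generically: coincident pre/post activity tags a synapse, and a later reward converts the tag into a weight change, bridging the delay between activity and reward. All constants and names below are illustrative, not the paper's model:

```python
import numpy as np

def eligibility_trace_update(w, e, pre, post, reward, dt=1.0,
                             tau_e=500.0, lr=0.01):
    """One step of reward-modulated Hebbian learning (generic sketch).

    w      : (n_pre, n_post) weight matrix
    e      : (n_pre, n_post) eligibility traces
    pre    : (n_pre,) presynaptic activity
    post   : (n_post,) postsynaptic activity
    reward : scalar reward signal gating the weight update
    """
    e = e * np.exp(-dt / tau_e) + np.outer(pre, post)  # decaying synaptic tag
    w = w + lr * reward * e                            # reward gates learning
    return w, e
```

Without reward the tags accumulate but the weights stay put; a single rewarded step then strengthens exactly the synapses whose pre/post pairs were recently co-active.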

Place, publisher, year, edition, pages
Springer Nature, 2025
National Category
Computer Sciences; Neurosciences
Identifiers
urn:nbn:se:kth:diva-372195 (URN), 10.1038/s41598-025-12611-5 (DOI), 001542639300007 (), 40750641 (PubMedID), 2-s2.0-105012454086 (Scopus ID)
Funder
KTH Royal Institute of Technology
Note

QC 20251028

Available from: 2025-10-28. Created: 2025-10-28. Last updated: 2025-10-28. Bibliographically approved.
Ravichandran, N. B., Lansner, A. & Herman, P. (2025). Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks. Neurocomputing, 626, Article ID 129440.
Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
2025 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 626, article id 129440. Article in journal (Refereed), Published
Abstract [en]

Neural networks that can capture key principles underlying brain computation offer exciting new opportunities for developing artificial intelligence and brain-like computing algorithms. Such networks remain biologically plausible while leveraging localized forms of synaptic learning rules and the modular network architecture found in the neocortex. Compared to backprop-driven deep learning approaches, they provide more suitable models for deployment on neuromorphic hardware and have greater potential for scalability on large-scale computing clusters. The development of such brain-like neural networks depends on having a learning procedure that can build effective internal representations from data. In this work, we introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. It builds on the Bayesian Confidence Propagation Neural Network (BCPNN), which has earlier been implemented as abstract as well as biophysically detailed recurrent attractor neural networks explaining various cortical associative memory phenomena. Here we developed a feedforward BCPNN model that performs representation learning by incorporating a range of brain-like attributes derived from neocortical circuits, such as cortical columns, divisive normalization, Hebbian synaptic plasticity, structural plasticity, sparse activity, and sparse patchy connectivity. The model was tested on a diverse set of popular machine learning benchmarks: grayscale images (MNIST, F-MNIST), RGB natural images (SVHN, CIFAR-10), QSAR (MUV, HIV), and malware detection (EMBER). When a linear classifier was used to predict the class labels, the model's performance was competitive with conventional multi-layer perceptrons and other state-of-the-art brain-like neural networks.
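One of the brain-like attributes listed above, divisive normalization within cortical (hyper)columns, amounts to a per-column softmax in which units compete so that activity within each column sums to one, yielding sparse, distributed codes. A minimal sketch under illustrative shapes and names:

```python
import numpy as np

def columnar_normalization(support, n_columns, units_per_column):
    """Divisive (softmax) normalization within hypercolumns (sketch).

    support : flat vector of length n_columns * units_per_column
    Units within each column compete; the returned activity sums
    to 1 per column, giving a sparse, probability-like code.
    """
    s = support.reshape(n_columns, units_per_column)
    s = s - s.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(s)
    return (e / e.sum(axis=1, keepdims=True)).ravel()
```

Because each column outputs a local probability distribution, downstream BCPNN-style units can treat the activity directly as evidence for Bayesian weight updates.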

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Brain-like computing, Brain inspired, Neuroscience informed, Biologically plausible, Representation learning, Unsupervised learning, Hebbian plasticity, BCPNN structural plasticity, Cortical columns, Modular neural networks, Sparsity, Rewiring, Self-organization
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-360750 (URN), 10.1016/j.neucom.2025.129440 (DOI), 001425064400001 (), 2-s2.0-85217068343 (Scopus ID)
Note

QC 20250922

Available from: 2025-03-03. Created: 2025-03-03. Last updated: 2025-09-22. Bibliographically approved.
Horberg, T., Kurfali, M., Larsson, M., Laukka, E. J., Herman, P. & Olofsson, J. K. (2024). A Rose by Another Name?: Odor Misnaming is Associated with Linguistic Properties. Cognitive science, 48(10), Article ID e70003.
A Rose by Another Name?: Odor Misnaming is Associated with Linguistic Properties
2024 (English). In: Cognitive Science, ISSN 0364-0213, E-ISSN 1551-6709, Vol. 48, no. 10, article id e70003. Article in journal (Refereed), Published
Abstract [en]

Naming common odors is a surprisingly difficult task: Odors are frequently misnamed. Little is known about the linguistic properties of odor misnamings. We test whether odor misnamings by older adults carry information about olfactory perception and its connection to lexical-semantic processing. We analyze the olfactory-semantic content of odor source naming failures in a large sample of older adults in Sweden (n = 2479; age 58-100 years). We investigate whether linguistic factors and semantic proximity to the target odor name predict how odors are misnamed, and how these factors relate to overall odor identification performance. We also explore the primary semantic dimensions along which misnamings are distributed. We find that odor misnamings consist of surprisingly many vague and unspecific terms, such as category names (e.g., fruit) or abstract or evaluative terms (e.g., sweet). Odor misnamings are often strongly associated with the correct name, capturing properties such as its category or other abstract features. People are also biased toward misnaming odors with high-frequency terms that are associated with olfaction or gustation. Linguistic properties of odor misnamings and their semantic proximity to the target odor name predict odor identification performance, suggesting that linguistic processing facilitates odor identification. Further, odor misnamings constitute an olfactory-semantic space that is similar to the olfactory vocabulary of English. This space is primarily differentiated along pleasantness, edibility, and concreteness dimensions. Odor naming failures thus contain plenty of information about semantic odor knowledge.

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
Odor naming, Odor identification, Olfactory vocabulary, Natural language processing, Semantic analysis
National Category
Psychology
Identifiers
urn:nbn:se:kth:diva-355811 (URN), 10.1111/cogs.70003 (DOI), 001338153700001 (), 39439400 (PubMedID), 2-s2.0-85207230601 (Scopus ID)
Note

QC 20241104

Available from: 2024-11-04. Created: 2024-11-04. Last updated: 2024-11-04. Bibliographically approved.
Lundqvist, M., Miller, E. K., Nordmark, J., Liljefors, J. & Herman, P. (2024). Beta: bursts of cognition. Trends in Cognitive Sciences, 28(7), 662-676.
Beta: bursts of cognition
2024 (English). In: Trends in Cognitive Sciences, ISSN 1364-6613, E-ISSN 1879-307X, Vol. 28, no. 7, p. 662-676. Article, review/survey (Refereed), Published
Abstract [en]

Beta oscillations are linked to the control of goal-directed processing of sensory information and the timing of motor output. Recent evidence demonstrates that they are not sustained but organized into intermittent high-power bursts mediating timely functional inhibition. This implies there is considerable moment-to-moment variation in the neural dynamics supporting cognition. Beta bursts thus offer new opportunities for studying how sensory inputs are selectively processed, reshaped by inhibitory cognitive operations, and ultimately result in motor actions. Recent methodological advances reveal a diversity in beta bursts that provides deeper insights into their function and the underlying neural circuit activity motifs. We propose that brain-wide, spatiotemporal patterns of beta bursting reflect various cognitive operations and that their dynamics reveal nonlinear aspects of cortical processing.
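The burst view described above is typically operationalized by thresholding a single-trial band-power envelope and keeping supra-threshold epochs of sufficient duration. A generic sketch; the threshold and minimum-duration values are illustrative choices, not the authors' exact pipeline:

```python
import numpy as np

def detect_bursts(power, threshold_sd=1.5, min_len=3):
    """Single-trial burst detection on a band-power envelope (sketch).

    Returns (start, end) index pairs for epochs where power exceeds
    mean + threshold_sd * SD for at least min_len samples.
    """
    thresh = power.mean() + threshold_sd * power.std()
    above = power > thresh
    bursts, start = [], None
    for t, a in enumerate(above):
        if a and start is None:
            start = t                       # burst onset
        elif not a and start is not None:
            if t - start >= min_len:        # keep only long-enough epochs
                bursts.append((start, t))
            start = None
    if start is not None and len(above) - start >= min_len:
        bursts.append((start, len(above)))  # burst running to the end
    return bursts
```

Burst rate, duration, and timing statistics computed from such epochs are what distinguish the intermittent-burst picture from trial-averaged sustained power.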

Place, publisher, year, edition, pages
Elsevier BV, 2024
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-350856 (URN), 10.1016/j.tics.2024.03.010 (DOI), 001264957100001 (), 38658218 (PubMedID), 2-s2.0-85192441853 (Scopus ID)
Note

QC 20250922

Available from: 2024-07-22. Created: 2024-07-22. Last updated: 2025-09-22. Bibliographically approved.
Cao, L., Halvardsson, G., McCornack, A., von Ehrenheim, V. & Herman, P. (2024). Beyond Gut Feel: Using Time Series Transformers to Find Investment Gems. In: Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. Paper presented at 33rd International Conference on Artificial Neural Networks, ICANN 2024, Lugano, Switzerland, September 17-20, 2024 (pp. 373-388). Springer Nature.
Beyond Gut Feel: Using Time Series Transformers to Find Investment Gems
2024 (English). In: Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings, Springer Nature, 2024, p. 373-388. Conference paper, Published paper (Refereed)
Abstract [en]

This paper addresses the growing application of data-driven approaches within the Private Equity (PE) industry, particularly in sourcing investment targets (i.e., companies) for Venture Capital (VC) and Growth Capital (GC). We present a comprehensive review of the relevant approaches and propose a novel approach leveraging a Transformer-based Multivariate Time Series Classifier (TMTSC) for predicting the success likelihood of any candidate company. The objective of our research is to optimize sourcing performance for VC and GC investments by formally defining the sourcing problem as a multivariate time series classification task. We then introduce the key components of our implementation that collectively contribute to the successful application of TMTSC in VC/GC sourcing: input features, model architecture, optimization target, and investor-centric data processing. Our extensive experiments on two real-world investment tasks, benchmarked against three popular baselines, demonstrate the effectiveness of our approach in improving decision making within the VC and GC industry.
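Framing sourcing as multivariate time series classification means representing each company as a sequence of feature vectors (e.g., periodic funding or headcount signals) and padding variable-length histories to a fixed length, with a mask so a Transformer can ignore the padding. A minimal sketch of the data framing; the feature semantics and names are invented for illustration:

```python
import numpy as np

def build_batch(histories, max_len):
    """Pad variable-length company histories into a fixed batch (sketch).

    histories : list of (T_i, n_features) arrays, one per company
    Returns X of shape (n, max_len, n_features) and a boolean mask of
    shape (n, max_len) marking real (non-padded) time steps.
    """
    n_feat = histories[0].shape[1]
    X = np.zeros((len(histories), max_len, n_feat))
    mask = np.zeros((len(histories), max_len), dtype=bool)
    for i, h in enumerate(histories):
        T = min(len(h), max_len)
        X[i, :T] = h[:T]     # copy (or truncate) the real observations
        mask[i, :T] = True   # mark them as valid for attention
    return X, mask
```

The mask is what lets a single classifier handle young startups with short histories alongside mature companies with long ones.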

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Company success prediction, Growth equity, Investment, Multivariate time series, Private equity, Venture capital
National Category
Computer and Information Sciences; Economics and Business
Identifiers
urn:nbn:se:kth:diva-354663 (URN), 10.1007/978-3-031-72356-8_25 (DOI), 001331897800025 (), 2-s2.0-85205307874 (Scopus ID)
Conference
33rd International Conference on Artificial Neural Networks, ICANN 2024, Lugano, Switzerland, September 17-20, 2024
Note

Part of ISBN 9783031723551

QC 20241205

Available from: 2024-10-09. Created: 2024-10-09. Last updated: 2025-12-05. Bibliographically approved.
Liljefors, J., Almeida, R., Rane, G., Lundström, J. N., Herman, P. & Lundqvist, M. (2024). Distinct functions for beta and alpha bursts in gating of human working memory. Nature Communications, 15(1), Article ID 8950.
Distinct functions for beta and alpha bursts in gating of human working memory
2024 (English). In: Nature Communications, E-ISSN 2041-1723, Vol. 15, no. 1, article id 8950. Article in journal (Refereed), Published
Abstract [en]

Multiple neural mechanisms underlying gating to working memory have been proposed, with divergent results obtained in human and animal studies. Previous findings from non-human primates suggest prefrontal beta frequency bursts as a correlate of transient inhibition during selective encoding. Human studies instead suggest a similar role for sensory alpha power fluctuations. To shed light on these discrepancies, we employed a sequential working memory task with distractors for human participants. In particular, we examined their whole-brain electrophysiological activity in both alpha and beta bands with the same single-trial burst analysis earlier performed on non-human primates. Our results reconcile earlier findings by demonstrating that both alpha and beta bursts in humans correlate with the filtering and control of memory items, but with region- and task-specific differences between the two rhythms. Occipital beta burst patterns were selectively modulated during the transition from sensory processing to memory retention, whereas prefrontal and parietal beta bursts tracked sequence order and were proactively upregulated prior to upcoming target encoding. Occipital alpha bursts instead increased during the actual presentation of unwanted sensory stimuli. Source reconstruction additionally suggested the involvement of striatal and thalamic alpha and beta activity. Thus, specific whole-brain burst patterns correlate with different aspects of working memory control.

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Neurosciences; Neurology; Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:kth:diva-355428 (URN), 10.1038/s41467-024-53257-7 (DOI), 001338950000037 (), 39419974 (PubMedID), 2-s2.0-85206680358 (Scopus ID)
Note

QC 20250925

Available from: 2024-10-30. Created: 2024-10-30. Last updated: 2025-09-25. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0001-6553-823X