Publications (10 of 89)
Al Hafiz, M. I., Ravichandran, N. B., Lansner, A., Herman, P. & Podobas, A. (2025). A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks. In: Applied Reconfigurable Computing. Architectures, Tools, and Applications - 21st International Symposium, ARC 2025, Proceedings. Paper presented at 21st International Symposium on Applied Reconfigurable Computing, ARC 2025, Seville, Spain, April 9-11, 2025 (pp. 196-213). Springer Nature
A Reconfigurable Stream-Based FPGA Accelerator for Bayesian Confidence Propagation Neural Networks
2025 (English). In: Applied Reconfigurable Computing. Architectures, Tools, and Applications - 21st International Symposium, ARC 2025, Proceedings, Springer Nature, 2025, p. 196-213. Conference paper, Published paper (Refereed)
Abstract [en]

Brain-like algorithms are attractive and emerging alternatives to classical deep learning methods for use in various machine learning applications. Brain-like systems can feature local learning rules, unsupervised/semi-supervised learning, and different types of plasticity (structural/synaptic), allowing them to potentially be faster and more energy-efficient than traditional machine learning alternatives. Among the more salient brain-like algorithms are Bayesian Confidence Propagation Neural Networks (BCPNNs). BCPNN is an important tool for both machine learning and computational neuroscience research, and recent work shows that BCPNN can reach state-of-the-art performance in tasks such as learning and memory recall compared to other models. Unfortunately, BCPNN is primarily executed on slow general-purpose processors (CPUs) or power-hungry graphics processing units (GPUs), reducing its applicability in edge systems, among others. In this work, we design a reconfigurable stream-based accelerator for BCPNN on Field-Programmable Gate Arrays (FPGAs) using the Xilinx Vitis High-Level Synthesis (HLS) flow. Furthermore, we model our accelerator's performance from first principles, and we empirically show that our proposed accelerator (full-featured kernel, non-structural plasticity) is between 1.3x-5.3x faster than an Nvidia A100 GPU while consuming 2.62x-3.19x less power and 5.8x-16.5x less energy, without any degradation in performance.
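Since energy is power integrated over runtime, the reported speedup and power ratios compose multiplicatively into the energy ratio. A minimal sketch of that arithmetic; pairing the range endpoints as if they co-occur on the same benchmark is an assumption, which is why the products only roughly bracket the reported 5.8x-16.5x range:

```python
# E = P * t, so E_gpu / E_fpga = (P_gpu / P_fpga) * (t_gpu / t_fpga).
speedups = [1.3, 5.3]         # reported t_gpu / t_fpga range
power_ratios = [2.62, 3.19]   # reported P_gpu / P_fpga range
for s, p in zip(speedups, power_ratios):
    print(f"{s}x faster at {p}x less power -> {s * p:.1f}x less energy")
```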

Place, publisher, year, edition, pages
Springer Nature, 2025
Keywords
BCPNN, FPGA, HLS, Neuromorphic
National Category
Computer Sciences Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
urn:nbn:se:kth:diva-363095 (URN)10.1007/978-3-031-87995-1_12 (DOI)2-s2.0-105002874652 (Scopus ID)
Conference
21st International Symposium on Applied Reconfigurable Computing, ARC 2025, Seville, Spain, April 9-11, 2025
Note

Part of ISBN 9783031879944

QC 20250506

Available from: 2025-05-06 Created: 2025-05-06 Last updated: 2025-05-06. Bibliographically approved
Ravichandran, N. B., Lansner, A. & Herman, P. (2025). Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks. Neurocomputing, 626, Article ID 129440.
Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks
2025 (English). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 626, article id 129440. Article in journal (Refereed), Published
Abstract [en]

Neural networks that can capture key principles underlying brain computation offer exciting new opportunities for developing artificial intelligence and brain-like computing algorithms. Such networks remain biologically plausible while leveraging localized forms of synaptic learning rules and the modular network architecture found in the neocortex. Compared to backprop-driven deep learning approaches, they provide more suitable models for deployment on neuromorphic hardware and have greater potential for scalability on large-scale computing clusters. The development of such brain-like neural networks depends on having a learning procedure that can build effective internal representations from data. In this work, we introduce and evaluate a brain-like neural network model capable of unsupervised representation learning. It builds on the Bayesian Confidence Propagation Neural Network (BCPNN), which has earlier been implemented as abstract as well as biophysically detailed recurrent attractor neural networks explaining various cortical associative memory phenomena. Here we developed a feedforward BCPNN model to perform representation learning by incorporating a range of brain-like attributes derived from neocortical circuits, such as cortical columns, divisive normalization, Hebbian synaptic plasticity, structural plasticity, sparse activity, and sparse patchy connectivity. The model was tested on a diverse set of popular machine learning benchmarks: grayscale images (MNIST, F-MNIST), RGB natural images (SVHN, CIFAR-10), QSAR (MUV, HIV), and malware detection (EMBER). The model's performance, when using a linear classifier to predict the class labels, fared competitively with conventional multi-layer perceptrons and other state-of-the-art brain-like neural networks.
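For orientation, the Hebbian-Bayesian core of BCPNN estimates unit and pairwise activation probabilities with running averages and sets weights to their log-odds. A minimal rate-based sketch, assuming a single hypercolumn with softmax normalization; the learning rate, the eps floor, and the omission of structural plasticity are simplifications, not the paper's exact formulation:

```python
import numpy as np

class BCPNNLayer:
    """Minimal rate-based BCPNN sketch: running-average probability traces,
    log-odds weights, and softmax (divisive normalization) in one hypercolumn."""

    def __init__(self, n_in, n_out, alpha=0.01, eps=1e-4):
        self.pi = np.full(n_in, eps)            # P(x_i) estimates
        self.pj = np.full(n_out, eps)           # P(y_j) estimates
        self.pij = np.full((n_in, n_out), eps)  # P(x_i, y_j) estimates
        self.alpha = alpha

    def update(self, x, y):
        """Hebbian update: move probability traces toward current activity."""
        self.pi += self.alpha * (x - self.pi)
        self.pj += self.alpha * (y - self.pj)
        self.pij += self.alpha * (np.outer(x, y) - self.pij)

    def forward(self, x):
        """Support = bias + weighted input; softmax acts as the hypercolumn WTA."""
        w = np.log(self.pij / np.outer(self.pi, self.pj))  # log-odds weights
        s = np.log(self.pj) + x @ w
        e = np.exp(s - s.max())
        return e / e.sum()
```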

Place, publisher, year, edition, pages
Elsevier BV, 2025
Keywords
Brain-like computing, Brain inspired, Neuroscience informed, Biologically plausible, Representation learning, Unsupervised learning, Hebbian plasticity, BCPNN structural plasticity, Cortical columns, Modular neural networks, Sparsity, Rewiring, Self-organization
National Category
Computer Sciences
Identifiers
urn:nbn:se:kth:diva-360750 (URN)10.1016/j.neucom.2025.129440 (DOI)001425064400001 ()2-s2.0-85217068343 (Scopus ID)
Note

QC 20250303

Available from: 2025-03-03 Created: 2025-03-03 Last updated: 2025-03-03. Bibliographically approved
Horberg, T., Kurfali, M., Larsson, M., Laukka, E. J., Herman, P. & Olofsson, J. K. (2024). A Rose by Another Name?: Odor Misnaming is Associated with Linguistic Properties. Cognitive Science, 48(10), Article ID e70003.
A Rose by Another Name?: Odor Misnaming is Associated with Linguistic Properties
2024 (English). In: Cognitive Science, ISSN 0364-0213, E-ISSN 1551-6709, Vol. 48, no 10, article id e70003. Article in journal (Refereed), Published
Abstract [en]

Naming common odors is a surprisingly difficult task: Odors are frequently misnamed. Little is known about the linguistic properties of odor misnamings. We test whether odor misnamings of older adults carry information about olfactory perception and its connection to lexical-semantic processing. We analyze the olfactory-semantic content of odor source naming failures in a large sample of older adults in Sweden (n = 2479; age 58-100 years). We investigate whether linguistic factors and semantic proximity to the target odor name predict how odors are misnamed, and how these factors relate to overall odor identification performance. We also explore the primary semantic dimensions along which misnamings are distributed. We find that odor misnamings consist of surprisingly many vague and unspecific terms, such as category names (e.g., fruit) or abstract or evaluative terms (e.g., sweet). Odor misnamings are often strongly associated with the correct name, capturing properties such as its category or other abstract features. People are also biased toward misnaming odors with high-frequency terms that are associated with olfaction or gustation. Linguistic properties of odor misnamings and their semantic proximity to the target odor name predict odor identification performance, suggesting that linguistic processing facilitates odor identification. Further, odor misnamings constitute an olfactory-semantic space that is similar to the olfactory vocabulary of English. This space is primarily differentiated along pleasantness, edibility, and concreteness dimensions. Odor naming failures thus contain plenty of information about semantic odor knowledge.
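Semantic proximity in analyses of this kind is commonly operationalized as cosine similarity between word-embedding vectors of the response and the target odor name; a minimal sketch with random stand-in vectors (the paper's actual embedding model is not reproduced here):

```python
import numpy as np

def semantic_proximity(response_vec, target_vec):
    """Cosine similarity between the embedding of a misnaming (e.g. 'fruit')
    and that of the target odor name (e.g. 'apple')."""
    return response_vec @ target_vec / (
        np.linalg.norm(response_vec) * np.linalg.norm(target_vec))

rng = np.random.default_rng(0)                      # stand-ins for real embeddings
print(semantic_proximity(rng.normal(size=300), rng.normal(size=300)))
```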

Place, publisher, year, edition, pages
Wiley, 2024
Keywords
Odor naming, Odor identification, Olfactory vocabulary, Natural language processing, Semantic analysis
National Category
Psychology
Identifiers
urn:nbn:se:kth:diva-355811 (URN)10.1111/cogs.70003 (DOI)001338153700001 ()39439400 (PubMedID)2-s2.0-85207230601 (Scopus ID)
Note

QC 20241104

Available from: 2024-11-04 Created: 2024-11-04 Last updated: 2024-11-04. Bibliographically approved
Lundqvist, M., Miller, E. K., Nordmark, J., Liljefors, J. & Herman, P. (2024). Beta: bursts of cognition. Trends in Cognitive Sciences, 28(7), 662-676.
Beta: bursts of cognition
2024 (English). In: Trends in Cognitive Sciences, ISSN 1364-6613, E-ISSN 1879-307X, Vol. 28, no 7, p. 662-676. Article, review/survey (Refereed), Published
Abstract [en]

Beta oscillations are linked to the control of goal-directed processing of sensory information and the timing of motor output. Recent evidence demonstrates that they are not sustained but organized into intermittent high-power bursts mediating timely functional inhibition. This implies there is considerable moment-to-moment variation in the neural dynamics supporting cognition. Beta bursts thus offer new opportunities for studying how sensory inputs are selectively processed and reshaped by inhibitory cognitive operations, ultimately resulting in motor actions. Recent methodological advances reveal a diversity of beta bursts that provides deeper insight into their function and the underlying neural circuit activity motifs. We propose that brain-wide, spatiotemporal patterns of beta bursting reflect various cognitive operations and that their dynamics reveal nonlinear aspects of cortical processing.

Place, publisher, year, edition, pages
Elsevier BV, 2024
National Category
Neurosciences
Identifiers
urn:nbn:se:kth:diva-350856 (URN)10.1016/j.tics.2024.03.010 (DOI)001264957100001 ()38658218 (PubMedID)2-s2.0-85192441853 (Scopus ID)
Note

QC 20240722

Available from: 2024-07-22 Created: 2024-07-22 Last updated: 2024-07-22. Bibliographically approved
Cao, L., Halvardsson, G., McCornack, A., von Ehrenheim, V. & Herman, P. (2024). Beyond Gut Feel: Using Time Series Transformers to Find Investment Gems. In: Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings. Paper presented at 33rd International Conference on Artificial Neural Networks, ICANN 2024, Lugano, Switzerland, September 17-20, 2024 (pp. 373-388). Springer Nature
Beyond Gut Feel: Using Time Series Transformers to Find Investment Gems
2024 (English). In: Artificial Neural Networks and Machine Learning – ICANN 2024 - 33rd International Conference on Artificial Neural Networks, Proceedings, Springer Nature, 2024, p. 373-388. Conference paper, Published paper (Refereed)
Abstract [en]

This paper addresses the growing application of data-driven approaches within the Private Equity (PE) industry, particularly in sourcing investment targets (i.e., companies) for Venture Capital (VC) and Growth Capital (GC). We present a comprehensive review of the relevant approaches and propose a novel approach leveraging a Transformer-based Multivariate Time Series Classifier (TMTSC) for predicting the success likelihood of any candidate company. The objective of our research is to optimize sourcing performance for VC and GC investments by formally defining the sourcing problem as a multivariate time series classification task. We introduce, in turn, the key components of our implementation that collectively contribute to the successful application of TMTSC in VC/GC sourcing: input features, model architecture, optimization target, and investor-centric data processing. Our extensive experiments on two real-world investment tasks, benchmarked against three popular baselines, demonstrate the effectiveness of our approach in improving decision-making within the VC and GC industry.
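As a rough illustration of the model family, a Transformer encoder over per-timestep feature embeddings with mean pooling and a classification head; all names and hyperparameters below are illustrative, and positional encoding is omitted; this is not the authors' TMTSC architecture:

```python
import torch
import torch.nn as nn

class TSClassifier(nn.Module):
    """Sketch of a Transformer-based multivariate time series classifier."""
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2, n_classes=2):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)   # embed each timestep
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_classes)    # success / no-success logits

    def forward(self, x):                  # x: (batch, time, features)
        h = self.encoder(self.proj(x))
        return self.head(h.mean(dim=1))    # mean-pool over time

logits = TSClassifier(n_features=12)(torch.randn(8, 36, 12))  # e.g. 36 months
```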

Place, publisher, year, edition, pages
Springer Nature, 2024
Keywords
Company success prediction, Growth equity, Investment, Multivariate time series, Private equity, Venture capital
National Category
Computer and Information Sciences Economics and Business
Identifiers
urn:nbn:se:kth:diva-354663 (URN)10.1007/978-3-031-72356-8_25 (DOI)2-s2.0-85205307874 (Scopus ID)
Conference
33rd International Conference on Artificial Neural Networks, ICANN 2024, Lugano, Switzerland, September 17-20, 2024
Note

Part of ISBN 9783031723551

QC 20241205

Available from: 2024-10-09 Created: 2024-10-09 Last updated: 2024-12-05. Bibliographically approved
Liljefors, J., Almeida, R., Rane, G., Lundström, J. N., Herman, P. & Lundqvist, M. (2024). Distinct functions for beta and alpha bursts in gating of human working memory. Nature Communications, 15(1), Article ID 8950.
Distinct functions for beta and alpha bursts in gating of human working memory
2024 (English). In: Nature Communications, E-ISSN 2041-1723, Vol. 15, no 1, article id 8950. Article in journal (Refereed), Published
Abstract [en]

Multiple neural mechanisms underlying gating to working memory have been proposed, with divergent results obtained in human and animal studies. Previous findings from non-human primates suggest prefrontal beta-frequency bursts as a correlate of transient inhibition during selective encoding. Human studies instead suggest a similar role for sensory alpha-power fluctuations. To shed light on these discrepancies, we employed a sequential working memory task with distractors for human participants. In particular, we examined their whole-brain electrophysiological activity in both alpha and beta bands with the same single-trial burst analysis earlier performed on non-human primates. Our results reconcile earlier findings by demonstrating that both alpha and beta bursts in humans correlate with the filtering and control of memory items, but with region- and task-specific differences between the two rhythms. Occipital beta-burst patterns were selectively modulated during the transition from sensory processing to memory retention, whereas prefrontal and parietal beta bursts tracked sequence order and were proactively upregulated prior to upcoming target encoding. Occipital alpha bursts instead increased during the actual presentation of unwanted sensory stimuli. Source reconstruction additionally suggested the involvement of striatal and thalamic alpha and beta. Thus, specific whole-brain burst patterns correlate with different aspects of working memory control.
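A typical single-trial burst-detection recipe bandpass-filters the signal, takes the amplitude envelope, and thresholds it. A minimal sketch, with the filter band and threshold as illustrative choices rather than the paper's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_bursts(signal, fs, f_lo=15.0, f_hi=29.0, thresh_sd=2.0):
    """Return a boolean mask of beta-burst samples in one trial."""
    b, a = butter(4, [f_lo, f_hi], btype="band", fs=fs)
    env = np.abs(hilbert(filtfilt(b, a, signal)))    # beta-band amplitude envelope
    return env > env.mean() + thresh_sd * env.std()  # threshold above trial mean

fs = 1000.0
trial = np.random.randn(2000)                        # stand-in for one EEG trial
print(detect_bursts(trial, fs).mean(), "fraction of samples in bursts")
```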

Place, publisher, year, edition, pages
Springer Nature, 2024
National Category
Neurosciences Neurology Psychology (excluding Applied Psychology)
Identifiers
urn:nbn:se:kth:diva-355428 (URN)10.1038/s41467-024-53257-7 (DOI)001338950000037 ()39419974 (PubMedID)2-s2.0-85206680358 (Scopus ID)
Note

QC 20241209

Available from: 2024-10-30 Created: 2024-10-30 Last updated: 2024-12-09. Bibliographically approved
Fontan, A., Cvetkovic, V., Herman, P., Sundh, J. & Johansson, K. H. (2024). Exploring rationality of prospect choices among decision-makers in a population. Paper presented at 5th IFAC Workshop on Cyber-Physical Human Systems, CPHS 2024, Antalya, Türkiye, December 12-13, 2024 (pp. 133-138). Elsevier BV, 58
Exploring rationality of prospect choices among decision-makers in a population
2024 (English). Conference paper, Published paper (Refereed)
Abstract [en]

The random utility model (RUM) is a fundamental notion in studies of human decision-making. However, RUM relies on the calibration of its choice function's weight parameter, usually interpreted as a rationality parameter, resulting in a case-dependence that undermines both the interpretability and the predictability of choices across experimental settings. We addressed this limitation by normalizing utilities in RUM and deriving a new choice parameter β, independent of case-specific prospects. Drawing from a novel interpretation of β in terms of the lowest perceived probability of unlikely events, we conducted an experimental survey at Swedish universities to infer β distributions, capturing the variability of probability perception among decision-makers. We tested these statistical models for β on two independent datasets exploring the framing effect. The results showed that the predictions align with the observed experimental data (Pearson's correlation greater than 94%), thereby indicating that the novel characterization of the choice parameter strengthens the predictive capabilities of RUM.
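The choice rule in a RUM of this kind is a logit (softmax) over utilities scaled by β; a minimal sketch in which the min-max normalization is an assumed stand-in for the paper's normalization step:

```python
import numpy as np

def choice_probs(utilities, beta):
    """Logit choice rule over normalized utilities: higher beta concentrates
    choices on the best prospect; beta -> 0 approaches uniform choice."""
    u = np.asarray(utilities, dtype=float)
    u = (u - u.min()) / (u.max() - u.min() + 1e-12)  # assumed normalization
    e = np.exp(beta * u)
    return e / e.sum()

print(choice_probs([100.0, 80.0, 20.0], beta=5.0))   # rational-leaning chooser
print(choice_probs([100.0, 80.0, 20.0], beta=0.1))   # near-indifferent chooser
```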

Place, publisher, year, edition, pages
Elsevier BV, 2024
Keywords
choice parameter, decision-making, prospects, RUM, unlikely events probability
National Category
Other Natural Sciences
Identifiers
urn:nbn:se:kth:diva-360565 (URN)10.1016/j.ifacol.2025.01.169 (DOI)001403404200023 ()2-s2.0-85218017523 (Scopus ID)
Conference
5th IFAC Workshop on Cyber-Physical Human Systems, CPHS 2024, Antalya, Türkiye, December 12-13, 2024
Note

QC 20250228

Available from: 2025-02-26 Created: 2025-02-26 Last updated: 2025-04-01. Bibliographically approved
Senane, Z., Cao, L., Buchner, V. L., Tashiro, Y., You, L., Herman, P., . . . Von Ehrenheim, V. (2024). Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask. In: KDD 2024 - Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. Paper presented at 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024, Barcelona, Spain, August 25-29, 2024 (pp. 2560-2571). Association for Computing Machinery (ACM)
Self-Supervised Learning of Time Series Representation via Diffusion Process and Imputation-Interpolation-Forecasting Mask
2024 (English). In: KDD 2024 - Proceedings of the 30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Association for Computing Machinery (ACM), 2024, p. 2560-2571. Conference paper, Published paper (Refereed)
Abstract [en]

Time Series Representation Learning (TSRL) focuses on generating informative representations for various Time Series (TS) modeling tasks. Traditional Self-Supervised Learning (SSL) methods in TSRL fall into four main categories: reconstructive, adversarial, contrastive, and predictive, each with a common challenge of sensitivity to noise and intricate data nuances. Recently, diffusion-based methods have shown advanced generative capabilities. However, they primarily target specific application scenarios like imputation and forecasting, leaving a gap in leveraging diffusion models for generic TSRL. Our work, Time Series Diffusion Embedding (TSDE), bridges this gap as the first diffusion-based SSL TSRL approach. TSDE segments TS data into observed and masked parts using an Imputation-Interpolation-Forecasting (IIF) mask. It applies a trainable embedding function, featuring dual-orthogonal Transformer encoders with a crossover mechanism, to the observed part. We train a reverse diffusion process conditioned on the embeddings, designed to predict noise added to the masked part. Extensive experiments demonstrate TSDE's superiority in imputation, interpolation, forecasting, anomaly detection, classification, and clustering. We also conduct an ablation study, present embedding visualizations, and compare inference speed, further substantiating TSDE's efficiency and validity in learning representations of TS data.
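The IIF mask can be pictured as the union of three masking patterns over one series: random points (imputation), an interior block (interpolation), and a suffix block (forecasting). A minimal sketch; the rates and block lengths are illustrative, not the paper's settings:

```python
import numpy as np

def iif_mask(T, p_impute=0.1, interp_len=10, forecast_len=12, seed=None):
    """Boolean mask over T timesteps; True marks the masked (to-predict) part."""
    rng = np.random.default_rng(seed)
    mask = rng.random(T) < p_impute                  # imputation: random points
    start = rng.integers(0, T - interp_len - forecast_len)
    mask[start:start + interp_len] = True            # interpolation: inner block
    mask[T - forecast_len:] = True                   # forecasting: suffix
    return mask

mask = iif_mask(100, seed=0)
print(mask.sum(), "of 100 steps masked")
```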

Place, publisher, year, edition, pages
Association for Computing Machinery (ACM), 2024
Keywords
anomaly detection, classification, clustering, diffusion model, forecasting, imputation, interpolation, multivariate time series, representation learning, self-supervised learning, time series modeling
National Category
Computer graphics and computer vision Computer Sciences
Identifiers
urn:nbn:se:kth:diva-353962 (URN)10.1145/3637528.3671673 (DOI)001324524202061 ()2-s2.0-85203684729 (Scopus ID)
Conference
30th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2024, Barcelona, Spain, August 25-29, 2024
Note

Part of ISBN 9798400704901

QC 20240926

Available from: 2024-09-25 Created: 2024-09-25 Last updated: 2025-03-17. Bibliographically approved
Ravichandran, N. B., Lansner, A. & Herman, P. (2024). Spiking representation learning for associative memories. Frontiers in Neuroscience, 18, Article ID 1439414.
Spiking representation learning for associative memories
2024 (English). In: Frontiers in Neuroscience, ISSN 1662-4548, E-ISSN 1662-453X, Vol. 18, article id 1439414. Article in journal (Refereed), Published
Abstract [en]

Networks of interconnected neurons communicating through spiking signals offer the bedrock of neural computations. Our brain's spiking neural networks have the computational capacity to achieve complex pattern recognition and cognitive functions effortlessly. However, solving real-world problems with artificial spiking neural networks (SNNs) has proved to be difficult for a variety of reasons. Crucially, scaling SNNs to large networks and processing large-scale real-world datasets have been challenging, especially when compared to their non-spiking deep learning counterparts. The critical operation needed of SNNs is the ability to learn distributed representations from data and use these representations for perceptual, cognitive and memory operations. In this work, we introduce a novel SNN that performs unsupervised representation learning and associative memory operations leveraging Hebbian synaptic and activity-dependent structural plasticity coupled with neuron units modelled as Poisson spike generators with sparse firing (~1 Hz mean and ~100 Hz maximum firing rate). Crucially, the architecture of our model derives from the neocortical columnar organization and combines feedforward projections for learning hidden representations and recurrent projections for forming associative memories. We evaluated the model on properties relevant for attractor-based associative memories such as pattern completion, perceptual rivalry, distortion resistance, and prototype extraction.
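The Poisson spike generation the abstract refers to reduces to an independent Bernoulli draw per unit and timestep; a minimal sketch (dt and duration are illustrative):

```python
import numpy as np

def poisson_spikes(rates_hz, dt=0.001, steps=1000, seed=None):
    """Each unit fires with probability rate*dt per step (valid for rate*dt << 1)."""
    rng = np.random.default_rng(seed)
    return rng.random((steps, len(rates_hz))) < np.asarray(rates_hz) * dt

spikes = poisson_spikes([1.0, 100.0], seed=0)   # ~1 Hz mean vs ~100 Hz max units
print(spikes.sum(axis=0), "spikes in 1 s")
```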

Place, publisher, year, edition, pages
Frontiers Media SA, 2024
Keywords
spiking neural networks, associative memory, attractor dynamics, Hebbian learning, structural plasticity, BCPNN, representation learning, unsupervised learning
National Category
Computer Sciences Computer graphics and computer vision Neurosciences
Identifiers
urn:nbn:se:kth:diva-355141 (URN)10.3389/fnins.2024.1439414 (DOI)001328684900001 ()39371606 (PubMedID)2-s2.0-85205940985 (Scopus ID)
Note

QC 20241023

Available from: 2024-10-23 Created: 2024-10-23 Last updated: 2025-02-01. Bibliographically approved
Lenninger, M., Skoglund, M., Herman, P. & Kumar, A. (2023). Are single-peaked tuning curves tuned for speed rather than accuracy? eLife, 12, Article ID e84531.
Are single-peaked tuning curves tuned for speed rather than accuracy?
2023 (English). In: eLife, E-ISSN 2050-084X, Vol. 12, article id e84531. Article in journal (Refereed), Published
Abstract [en]

According to the efficient coding hypothesis, sensory neurons are adapted to provide maximal information about the environment, given some biophysical constraints. In early visual areas, stimulus-induced modulations of neural activity (or tunings) are predominantly single-peaked. However, periodic tuning, as exhibited by grid cells, has been linked to a significant increase in decoding performance. Does this imply that the tuning curves in early visual areas are sub-optimal? We argue that the time scale at which neurons encode information is imperative to understand the advantages of single-peaked and periodic tuning curves, respectively. Here, we show that the possibility of catastrophic (large) errors creates a trade-off between decoding time and decoding ability. We investigate how decoding time and stimulus dimensionality affect the optimal shape of tuning curves for removing catastrophic errors. In particular, we focus on the spatial periods of the tuning curves for a class of circular tuning curves. We show an overall trend for minimal decoding time to increase with increasing Fisher information, implying a trade-off between accuracy and speed. This trade-off is reinforced whenever the stimulus dimensionality is high, or there is ongoing activity. Thus, given constraints on processing speed, we present normative arguments for the existence of the single-peaked tuning organization observed in early visual areas.
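The accuracy side of this trade-off is the standard population Fisher information, which for Poisson neurons with tuning curves f_i is I(theta) = sum_i f_i'(theta)^2 / f_i(theta). A minimal sketch for single-peaked (von Mises) circular tuning, with illustrative parameters:

```python
import numpy as np

def fisher_info(theta, centers, kappa=2.0, r_max=20.0):
    """Population Fisher information I(theta) = sum_i f_i'^2 / f_i for Poisson
    neurons with von Mises (single-peaked, circular) tuning curves."""
    f = r_max * np.exp(kappa * (np.cos(theta - centers) - 1.0))
    fprime = -kappa * np.sin(theta - centers) * f    # analytic derivative
    return np.sum(fprime**2 / f)

centers = np.linspace(-np.pi, np.pi, 16, endpoint=False)  # evenly tiled stimuli
print(fisher_info(0.3, centers))
```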

Place, publisher, year, edition, pages
eLife Sciences Publications, Ltd, 2023
Keywords
neural coding, tuning curves, decoding time, high-dimensional stimuli, spiking activity
National Category
Bioinformatics (Computational Biology)
Identifiers
urn:nbn:se:kth:diva-330512 (URN)10.7554/eLife.84531 (DOI)001006600800001 ()37191292 (PubMedID)2-s2.0-85161573273 (Scopus ID)
Note

QC 20250527

Available from: 2023-06-30 Created: 2023-06-30 Last updated: 2025-05-27. Bibliographically approved
Identifiers
ORCID iD: orcid.org/0000-0001-6553-823X