Brain-Inspired Physics-Informed Neural Networks: Bare-Minimum Neural Architectures for PDE Solvers
Markidis, Stefano (KTH, School of Electrical Engineering and Computer Science (EECS), Computer Science, Computational Science and Technology (CST)). ORCID iD: 0000-0003-0639-0639
2024 (English). In: Computational Science – ICCS 2024, 24th International Conference, Proceedings, Springer Nature, 2024, p. 331-345. Conference paper, Published paper (Refereed)
Abstract [en]

Physics-Informed Neural Networks (PINNs) have emerged as a powerful tool for solving partial differential equations (PDEs) in various scientific and engineering domains. However, traditional PINN architectures typically rely on large, fully connected multilayer perceptrons (MLPs), lacking the sparsity and modularity inherent in many traditional numerical solvers. An unsolved and critical question for PINNs is: what is the minimum PINN complexity, in terms of nodes, layers, and connections, needed to provide acceptable performance? To address this question, this study investigates a novel approach that merges established PINN methodologies with brain-inspired neural network techniques. We use Brain-Inspired Modular Training (BIMT), leveraging concepts such as locality, sparsity, and modularity inspired by the organization of the brain. With brain-inspired PINNs, we demonstrate the evolution of PINN architectures from large, fully connected structures to bare-minimum, compact MLP architectures, often consisting of only a few neural units. Moreover, using brain-inspired PINNs, we showcase the spectral bias phenomenon in PINN architectures: bare-minimum architectures solving problems with high-frequency components require more neural units than PINNs solving low-frequency problems. Finally, we derive basic PINN building blocks, akin to convolutional and attention modules in deep neural networks, through BIMT training on simple problems, enabling the construction of modular PINN architectures. Our experiments show that brain-inspired PINN training leads to PINN architectures that minimize computing and memory resources while still providing accurate results.
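
To make the abstract's core mechanism concrete, the following is a minimal, illustrative sketch (not the paper's code): a small PyTorch PINN for a 1D Poisson problem trained with an L1 weight penalty standing in for the sparsity pressure of BIMT. The full BIMT method also includes locality costs and neuron-swapping steps that are omitted here, and all names, network sizes, and hyperparameters below are assumptions chosen for illustration.

    # Sketch only: 1D PINN for u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0,
    # whose exact solution is u(x) = sin(x). An L1 penalty on the weights stands
    # in for the sparsity component of BIMT; locality and swap steps are omitted.
    import torch

    torch.manual_seed(0)

    net = torch.nn.Sequential(            # deliberately small MLP
        torch.nn.Linear(1, 16), torch.nn.Tanh(),
        torch.nn.Linear(16, 16), torch.nn.Tanh(),
        torch.nn.Linear(16, 1),
    )
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)

    x = torch.linspace(0.0, torch.pi, 64).reshape(-1, 1).requires_grad_(True)
    xb = torch.tensor([[0.0], [torch.pi]])          # boundary points

    for step in range(5000):
        u = net(x)
        du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
        d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
        pde_loss = ((d2u + torch.sin(x)) ** 2).mean()      # residual of u'' = -sin x
        bc_loss = (net(xb) ** 2).mean()                    # u = 0 at both ends
        l1 = sum(p.abs().sum() for p in net.parameters())  # sparsity pressure
        loss = pde_loss + bc_loss + 1e-4 * l1
        opt.zero_grad(); loss.backward(); opt.step()

After training under such a penalty, connections with near-zero weights can be pruned, shrinking the network toward the bare-minimum architectures the abstract describes.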

Place, publisher, year, edition, pages
Springer Nature, 2024. p. 331-345
Keywords [en]
Bare-Minimum PINN Architectures, Brain-Inspired PINN, Modular PINN, Spectral Bias Phenomenon
National Category
Computer Sciences
Other Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-351763
DOI: 10.1007/978-3-031-63749-0_23
ISI: 001279316700023
Scopus ID: 2-s2.0-85199618799
OAI: oai:DiVA.org:kth-351763
DiVA, id: diva2:1888730
Conference
24th International Conference on Computational Science, ICCS 2024, Malaga, Spain, July 2-4, 2024
Note

Part of ISBN 9783031637483

QC 20240820

Available from: 2024-08-13. Created: 2024-08-13. Last updated: 2024-09-10. Bibliographically approved.

Open Access in DiVA

No full text in DiVA

Other links

Publisher's full text
Scopus

Authority records

Markidis, Stefano
