Machine Learning Algorithms in Neural Networks
KTH, School of Electrical Engineering and Computer Science (EECS).
2025 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

This project explores the theoretical foundations and practical implementation of feedforward neural networks. A simple network was implemented and trained on the MNIST data set using stochastic gradient descent combined with backpropagation. Training performance was analyzed using two different cost functions, cross-entropy and quadratic, as well as L2 regularization and limited tuning of hyperparameters. As expected, careful selection of the learning rate significantly improved convergence. Furthermore, L2 regularization proved an effective method for reducing overfitting and improving validation accuracy, leading to a final classification accuracy of 96.4% when using the cross-entropy cost function. The results also highlight that theoretical expectations may require adjustment in practice.
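
To make the training recipe in the abstract concrete, the Python sketch below shows stochastic gradient descent with backpropagation and L2 regularization on a small sigmoid network. It is a minimal illustration, not the thesis code: the 784-30-10 architecture, the hyperparameter values (eta, lam, batch size), and the synthetic stand-in for the MNIST data are all assumptions. With a sigmoid output layer, the cross-entropy cost yields the output error (a - t), whereas the quadratic cost yields (a - t) * a * (1 - a); the extra sigmoid-derivative factor is a standard explanation for the slower convergence of the quadratic cost.

    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic stand-in for MNIST: 784-dimensional inputs, 10 classes.
    # (Hypothetical data; the thesis trains on the real MNIST images.)
    X = rng.standard_normal((1000, 784))
    y = rng.integers(0, 10, size=1000)
    T = np.eye(10)[y]                          # one-hot targets

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # Assumed 784-30-10 architecture with sigmoid activations.
    W1 = 0.1 * rng.standard_normal((784, 30)); b1 = np.zeros(30)
    W2 = 0.1 * rng.standard_normal((30, 10));  b2 = np.zeros(10)

    eta, lam, batch = 0.5, 5.0, 10             # illustrative hyperparameters
    n = len(X)

    for epoch in range(5):
        order = rng.permutation(n)             # shuffle for stochastic GD
        for i in range(0, n, batch):
            idx = order[i:i + batch]
            x, t = X[idx], T[idx]
            a1 = sigmoid(x @ W1 + b1)          # forward pass
            a2 = sigmoid(a1 @ W2 + b2)
            d2 = a2 - t                        # cross-entropy output error
            # (quadratic cost would use: d2 = (a2 - t) * a2 * (1 - a2))
            d1 = (d2 @ W2.T) * a1 * (1 - a1)   # backpropagate one layer
            m = len(idx)
            # SGD step with L2 weight decay, scaled by lam / n
            W2 -= eta * (a1.T @ d2 / m + (lam / n) * W2)
            b2 -= eta * d2.mean(axis=0)
            W1 -= eta * (x.T @ d1 / m + (lam / n) * W1)
            b1 -= eta * d1.mean(axis=0)
        pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).argmax(axis=1)
        print(f"epoch {epoch}: training accuracy {(pred == y).mean():.3f}")

Scaling the weight-decay term by lam / n ties the regularization strength to the training-set size, a common convention for L2-regularized SGD. On the real MNIST data, a loop of this kind with tuned eta and lam is the setting in which the abstract reports its 96.4% accuracy.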

Abstract [sv] (translated to English)

This project explores the theoretical foundations and practical implementation of feedforward neural networks. A simple network was implemented and trained on the MNIST data set using stochastic gradient descent combined with backpropagation. Training performance was analyzed with two different cost functions, cross-entropy and quadratic, as well as with L2 regularization and limited tuning of hyperparameters. As expected, convergence improved significantly through careful choice of the learning rate. L2 regularization also proved to be an effective method for reducing overfitting and improving validation accuracy, resulting in a peak classification accuracy of 96.4%. The results of the implementations also show that theoretical expectations often require adjustment in practice.

Place, publisher, year, edition, pages
2025, p. 437-444
Series
TRITA-EECS-EX ; 2025:142
National Category
Electrical Engineering, Electronic Engineering, Information Engineering
Identifiers
URN: urn:nbn:se:kth:diva-376125
OAI: oai:DiVA.org:kth-376125
DiVA, id: diva2:2034032
Projects
Kandidatexamensarbete i Elektroteknik 2025, EECS, KTH
Available from: 2026-01-30 Created: 2026-01-30

Open Access in DiVA

fulltext (80627 kB), 18 downloads
File information
File name: FULLTEXT01.pdf
File size: 80627 kB
Checksum: SHA-512
35ce0a386dafe4649eb99cbe0efdfed651a3c9044e3339612422234d17a7e8ec21d4fd4aa201500c3c7a8f57194994b78b3e0cfbd5319ecd49f18a5d8a7ff775
Type: fulltext
Mimetype: application/pdf

By organisation
School of Electrical Engineering and Computer Science (EECS)
Electrical Engineering, Electronic Engineering, Information Engineering

Search outside of DiVA

Google
Google Scholar
The number of downloads is the sum of all downloads of full texts. It may include, for example, previous versions that are no longer available.
