Using Variational Multi-view Learning for Classification of Grocery Items
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
Microsoft Research, Cambridge, United Kingdom.
KTH, School of Electrical Engineering and Computer Science (EECS), Intelligent systems, Robotics, Perception and Learning, RPL.
ORCID iD: 0000-0002-5750-9655
2020 (English). In: Patterns, ISSN 2666-3899, Vol. 1, no. 8. Article in journal (Refereed). Published.
Abstract [en]

An essential task for computer vision-based assistive technologies is to help visually impaired people to recognize objects in constrained environments, for instance, recognizing food items in grocery stores. In this paper, we introduce a novel dataset with natural images of groceries—fruits, vegetables, and packaged products—where all images have been taken inside grocery stores to resemble a shopping scenario. Additionally, we download iconic images and text descriptions for each item that can be utilized for better representation learning of groceries. We select a multi-view generative model, which can combine the different item information into lower-dimensional representations. The experiments show that utilizing the additional information yields higher accuracies on classifying grocery items than only using the natural images. We observe that iconic images help to construct representations separated by visual differences of the items, while text descriptions enable the model to distinguish between visually similar items by different ingredients.
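The fusion idea described above — encoding each view of an item (natural image, iconic image, text description) into a shared low-dimensional latent space — can be sketched with a toy product-of-experts variational encoder. This is only an illustration of the general multi-view technique, not the paper's actual architecture; the linear encoders, dimensions, and all names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder' mapping one view to Gaussian latent parameters."""
    return x @ W_mu, x @ W_logvar

def product_of_experts(mus, logvars):
    """Fuse per-view Gaussians into one posterior via precision weighting."""
    precisions = [np.exp(-lv) for lv in logvars]
    total_prec = sum(precisions) + 1.0  # +1 accounts for a standard-normal prior
    mu = sum(p * m for p, m in zip(precisions, mus)) / total_prec
    logvar = -np.log(total_prec)
    return mu, logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps, the usual VAE reparameterization."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Three views of one grocery item: natural image, iconic image, text features.
d_nat, d_icon, d_text, d_z = 64, 32, 16, 8
views = [rng.standard_normal(d) for d in (d_nat, d_icon, d_text)]
weights = [(rng.standard_normal((d, d_z)) * 0.1,
            rng.standard_normal((d, d_z)) * 0.1) for d in (d_nat, d_icon, d_text)]

mus, logvars = zip(*(encode(x, Wm, Wv) for x, (Wm, Wv) in zip(views, weights)))
mu, logvar = product_of_experts(mus, logvars)
z = reparameterize(mu, logvar, rng)
print(z.shape)  # (8,) -- one shared low-dimensional representation per item
```

At test time only the natural image needs to be available: the other experts simply drop out of the product, which is one reason this style of fusion suits the shopping scenario.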

Place, publisher, year, edition, pages
Elsevier, 2020. Vol. 1, no. 8
National Category
Computer graphics and computer vision
Identifiers
URN: urn:nbn:se:kth:diva-292181
OAI: oai:DiVA.org:kth-292181
DiVA, id: diva2:1539855
Note

QC 20220426

Available from: 2021-03-25. Created: 2021-03-25. Last updated: 2025-02-07. Bibliographically approved.
In thesis
1. Fine-Grained and Continual Visual Recognition for Assisting Visually Impaired People
2022 (English). Doctoral thesis, comprehensive summary (Other academic).
Abstract [en]

In recent years, computer vision-based assistive technologies have enabled visually impaired people to use automatic visual recognition on their mobile phones. These systems should be capable of recognizing objects on fine-grained levels to provide the user with accurate predictions. Additionally, the user should have the option to update the system continuously to recognize new objects of interest. However, there are several challenges that need to be tackled to enable such features with assistive vision systems in real and highly-varying environments. For instance, fine-grained image recognition usually requires large amounts of labeled data to be robust. Moreover, image classifiers struggle with retaining performance of previously learned abilities when they are adapted to new tasks. This thesis is divided into two parts where we address these challenges. First, we focus on the application of using assistive vision systems for grocery shopping, where items are naturally structured based on fine-grained details. We demonstrate how image classifiers can be trained with a combination of natural images and web-scraped information about the groceries to obtain more accurate classification performance compared to only using natural images for training. Thereafter, we bring forward a new approach for continual learning called replay scheduling, where we select which tasks to replay at different times to improve memory retention. Furthermore, we propose a novel framework for learning replay scheduling policies that can generalize to new continual learning scenarios for mitigating the catastrophic forgetting effect in image classifiers. This thesis provides insights on practical challenges that need to be addressed to enhance the usefulness of computer vision for assisting the visually impaired in real-world scenarios.
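The replay-scheduling idea summarized above — choosing, at each point in a task sequence, how to divide a fixed replay budget among previously learned tasks — can be illustrated with a toy scheduler. This is a hand-written sketch of the general mechanism, not the learned policy proposed in the thesis; the function name, the two hard-coded policies, and the memory sizes are all invented for the example.

```python
def schedule_replay(history_sizes, budget, policy="uniform"):
    """Divide `budget` replay samples among earlier tasks.

    history_sizes: number of stored examples per previous task.
    """
    n_prev = len(history_sizes)
    if n_prev == 0:
        return []
    if policy == "uniform":        # spread the budget evenly over old tasks
        share = [budget // n_prev] * n_prev
    elif policy == "recency":      # weight recently learned tasks more heavily
        weights = list(range(1, n_prev + 1))
        total = sum(weights)
        share = [budget * w // total for w in weights]
    else:
        raise ValueError(f"unknown policy: {policy}")
    # never request more samples than a task actually has stored
    return [min(s, h) for s, h in zip(share, history_sizes)]

# Simulate a sequence of 4 tasks, each leaving 20 examples in memory.
stored = []
for task in range(4):
    plan = schedule_replay(stored, budget=10, policy="recency")
    print(f"task {task}: replay plan {plan}")
    stored.append(20)
```

The point of *learning* a scheduling policy, rather than fixing one as above, is that which old tasks most need rehearsal changes over the sequence, and a learned policy can generalize that decision to new continual-learning scenarios.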

Abstract [sv]

De senaste åren har teknologiska hjälpmedel baserade på datorseende möjliggjort för synskadade personer att använda sig av automatisk visuell igenkänning på deras mobiltelefoner. Dessa system bör kunna känna igen objekt på finfördelade nivåer för att förse användaren med noggranna prediktioner. Användaren bör även ha möjligheten att uppdatera systemet kontinuerligt till att känna igen nya objekt av intresse. Dock finns det flera utmaningar som behöver avklaras för att aktivera dessa funktioner i synhjälpmedelssystem i reella och mycket varierande miljöer. Exempelvis behöver finfördelad bildigenkänning vanligtvis stora mängder märkt data för att vara robust. Dessutom har bildklassificerare besvär med att behålla sin prestanda av tidigare inlärda förmågor när de anpassas till nya uppgifter. Denna avhandling är uppdelad i två delar, där vi tar oss an dessa utmaningar. Först fokuserar vi på tillämpningen av att använda synhjälpmedelssystem för att handla matvaror, där varorna är naturligt strukturerade enligt finfördelade detaljer. Vi påvisar hur bildklassificerare kan tränas med en kombination av naturliga bilder och webbskrapad information om matvarorna för att erhålla mer träffsäker klassificeringsförmåga jämfört med att enbart använda naturliga bilder för träning. Därefter lägger vi fram ett nytt tillvägagångssätt för kontinuerlig inlärning som kallas replay scheduling (repris-schemaläggning), där vi väljer vilka uppgifter som ska repeteras vid olika tidpunkter för att förbättra bibehållande av minnen. Vi föreslår även ett nytt ramverk för inlärning av policyer för replay scheduling som kan generalisera till nya scenarion för kontinuerlig inlärning för att mildra effekten av katastrofal glömska i bildklassificerare. Denna avhandling ger insyn till praktiska utmaningar som behöver lösas för att förbättra användbarheten hos datorseende till att hjälpa synskadade personer i verkliga scenarier.

Place, publisher, year, edition, pages
Stockholm, Sweden: KTH Royal Institute of Technology, 2022. p. 89
Series
TRITA-EECS-AVL ; 2022:63
Keywords
Fine-Grained Image Recognition; Continual Learning; Visually Impaired People; Image Classification; Replay Scheduling
National Category
Computer graphics and computer vision
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-320067
ISBN: 978-91-8040-377-1
Public defence
2022-11-08, F3, Lindstedtsvägen 26, Stockholm, 09:00 (English)
Opponent
Supervisors
Funder
Promobilia foundation, F-16500
Note

QC 20221014

Available from: 2022-10-14. Created: 2022-10-13. Last updated: 2025-02-07. Bibliographically approved.

Open Access in DiVA

fulltext (5461 kB), 254 downloads
File information
File name: FULLTEXT01.pdf
File size: 5461 kB
Checksum: SHA-512
2438010b9351709bc3233cad6f940732e79bc2213525e2801865e82399d62cf20a67b8d6d4c63b5984c600da4f33192faf34b5837d256e978a5f1fbc110bc692
Type: fulltext. Mimetype: application/pdf

Authority records

Klasson, Marcus; Kjellström, Hedvig

Total: 254 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are no longer available.

Total: 301 hits