REPLICATING THE EMOTIONS OF A FACIAL EXPRESSION ON A FURHAT ROBOT FACE USING A KINECT INPUT.
KTH, School of Computer Science and Communication (CSC).
2013 (English). Independent thesis, Basic level (degree of Bachelor), 10 credits / 15 HE credits. Student thesis.
Abstract [en]

Facial expressions reflect a person’s emotions or are used socially to express oneself. With a Kinect it is possible to capture 3D points from a face. If the person Bob, who is in front of the Kinect, makes a facial expression while data is being captured, that data can be used as calibration for that expression. The same facial expressions are then defined for the 3D face application Furhat, making it possible to map Bob’s facial expression onto the Furhat’s. The mapping is done by comparing the lengths of vectors between a number of key points in Bob’s face against the calibration data, both to identify which expression is being made and to obtain a percentage of how well the expressions matched. This percentage was multiplied with each Furhat value for that expression to scale the expressions against each other.

With this mapping from Bob’s facial expressions to the Furhat 3D model, a survey was conducted on how well the emotions of the facial expressions were replicated. Pictures taken with the Kinect’s RGB camera were compared to pictures of the Furhat; the pictures were taken simultaneously so that both showed the same facial expression. In the survey, the respondents’ task was to write in free text which emotion the facial expression in each picture conveyed. The number of respondents was 31. The results for the different facial expressions varied considerably.

The best match between the Furhat and Bob, according to the respondents, had an accuracy of 61%, which is good. The worst matches, on the other hand, had an accuracy of less than 20%. From this we conclude that the essence of the facial expressions was not replicated well under the constraints imposed by the mapping together with the Furhat.

Abstract [sv]

Facial expressions usually reflect a person’s emotions or are used socially to express oneself. With a Kinect it is possible to capture 3D points from a face. If the person Bob, who is in front of the Kinect, makes facial expressions while data is being captured, that data can be used as calibration for those expressions. The same facial expressions are then defined in the 3D face application Furhat, which makes it possible to map Bob’s facial expressions onto the Furhat’s. This mapping is done by comparing the lengths of vectors between a number of key points in Bob’s face against the calibrated data, both to identify which expression Bob is making and to obtain a percentage of how well they matched. The percentage was multiplied with the value of each Furhat parameter for the identified expression in order to scale the expressions against each other.

With this mapping from Bob’s facial expressions to the 3D model Furhat, a survey was carried out on how well the emotions of the facial expressions were reflected. Pictures taken with the Kinect camera were compared to pictures of the Furhat; these pictures were taken simultaneously so that they showed the same facial expression. The survey was a questionnaire in which the participants were asked to write, in free text, which emotion the facial expression in each picture conveyed. The number of respondents was 31. The results for the different facial expressions varied considerably.

The best match between the Furhat and Bob, according to the respondents, had an accuracy of 61%. However, the poorer matches had an accuracy below 20%. From this we drew the conclusion that the essence of the facial expressions was not reflected particularly well under the constraints imposed by the mapping together with the Furhat.

Place, publisher, year, edition, pages
2013.
Series
Kandidatexjobb CSC, K13032
National Category
Computer Science
Identifiers
URN: urn:nbn:se:kth:diva-134948
OAI: oai:DiVA.org:kth-134948
DiVA: diva2:668848
Educational program
Master of Science in Engineering - Computer Science and Technology
Supervisors
Examiners
Available from: 2013-12-13. Created: 2013-12-02. Last updated: 2013-12-13. Bibliographically approved.

Open Access in DiVA

REPLICATING THE EMOTIONS OF A FACIAL EXPRESSION ON A FURHAT ROBOT FACE USING A KINECT INPUT (1974 kB). 131 downloads.
File information
File name: FULLTEXT01.pdf. File size: 1974 kB. Checksum (SHA-512):
f3d49fd45f11dc87edcd1f61280195f22b074e90f11dfdf45c44e0e661fcc33c5ca2695e3211ba4128cee205814e68282336bd3746a5e747f3a17dfaa61970ba
Type: fulltext. Mimetype: application/pdf.

Other links

http://www.csc.kth.se/utbildning/kth/kurser/DD143X/dkand13/Group6Gabriel/report/report_david_thomas.pdf
By organisation
School of Computer Science and Communication (CSC)
Computer Science

Total: 131 downloads
The number of downloads is the sum of all downloads of full texts. It may include, e.g., previous versions that are now no longer available.
