Using Social and Physiological Signals for User Adaptation in Conversational Agents
Jonell, Patrik (KTH, School of Electrical Engineering and Computer Science (EECS), Speech, Music and Hearing, TMH). ORCID iD: 0000-0003-3687-6189
2019 (English). In: AAMAS '19: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, Association for Computing Machinery, 2019, pp. 2420-2422. Conference paper, Published paper (Refereed).
Abstract [en]

In face-to-face communication, humans subconsciously emit social signals which their interlocutors pick up and use as feedback on how well previous messages have been received. This feedback is then used to adapt how subsequent messages are produced and delivered, making the communication as efficient and enjoyable as possible. Currently, however, conversational agents rarely exploit this feedback channel to alter how their multimodal output is produced during interactions with users, largely due to the complexity of the problem. In most regards, humans have a significant advantage over conversational agents in interpreting and acting on social signals. Humans are, however, restricted to a limited set of sensors, "the five senses", which conversational agents are not. Conversational agents can therefore use specialized sensors to pick up physiological signals, such as skin temperature, respiratory rate, or pupil dilation, which carry valuable information about the user with respect to the conversation. This thesis work aims to develop methods for utilizing both the social and the physiological signals emitted by humans in order to adapt the output of a conversational agent, thereby increasing conversation quality. These methods will primarily be based on automatically learning adaptive behavior from examples of real human interactions using machine learning.
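
The abstract outlines a loop (sense the user's social and physiological signals, estimate their state, adapt the agent's next message) without committing to a concrete model. The Python sketch below is purely illustrative: the feature set, the engagement classifier, and the adaptation rule are assumptions made for this example and are not the method of the thesis.

# Hypothetical sketch: learn a user-state estimate from social and
# physiological features, then use it to adapt the agent's output.
# Feature names, the threshold, and the adaptation rule are illustrative
# assumptions, not details taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["skin_temperature", "respiratory_rate", "pupil_dilation",
            "gaze_on_agent", "smile_intensity"]  # assumed feature set

def train_engagement_model(X, y):
    """Fit a classifier on per-turn feature vectors X labelled with
    annotated engagement y (1 = engaged, 0 = disengaged), e.g. taken
    from recordings of real human interactions."""
    model = LogisticRegression()
    model.fit(X, y)
    return model

def adapt_output(model, signals, utterance):
    """Adjust how the next message is produced based on the estimated
    engagement; a stand-in for richer multimodal adaptation."""
    p_engaged = model.predict_proba(signals.reshape(1, -1))[0, 1]
    if p_engaged < 0.3:
        # Low engagement: shorten the message and try to re-engage.
        return {"text": utterance.split(".")[0] + ".",
                "speech_rate": 0.9, "gesture": "lean_forward"}
    return {"text": utterance, "speech_rate": 1.0, "gesture": "neutral"}

# Toy usage with synthetic data standing in for recorded interactions.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, len(FEATURES)))
y = (X[:, 2] + X[:, 3] > 0).astype(int)  # synthetic engagement labels
model = train_engagement_model(X, y)
print(adapt_output(model, rng.normal(size=len(FEATURES)),
                   "Let me explain the next step. It has three parts."))

In a realistic setting, the labelled training data would come from annotated recordings of real human interactions, as the abstract proposes, and the adaptation would span the agent's full multimodal output (prosody, gesture, gaze, timing) rather than a single threshold rule.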

Place, publisher, year, edition, pages
Association for Computing Machinery, 2019. pp. 2420-2422.
Keywords [en]
Learning agent capabilities (agent models, communication, observation), Deep learning, Single and multi-agent planning and scheduling
National Category
Engineering and Technology
Identifiers
URN: urn:nbn:se:kth:diva-256291
ISI: 000474345000426
OAI: oai:DiVA.org:kth-256291
DiVA id: diva2:1368057
Conference
18th International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), Montreal, Canada, May 13-17, 2019.
Note

QC 20191105

Available from: 2019-11-05. Created: 2019-11-05. Last updated: 2019-11-05. Bibliographically approved.

Open Access in DiVA

No full text in DiVA
