When robot personalisation does not help: Insights from a robot-supported learning study
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. (Social Robotics). ORCID iD: 0000-0003-3324-4418
Gothenburg Univ, Dept Appl IT, Gothenburg, Sweden.
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. (Social Robotics)
Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Division of Visual Information and Interaction. Uppsala University, Disciplinary Domain of Science and Technology, Mathematics and Computer Science, Department of Information Technology, Computerized Image Analysis and Human-Computer Interaction. (Social Robotics)
2018 (English). In: Proc. 27th International Symposium on Robot and Human Interactive Communication, IEEE, 2018, p. 705-712. Conference paper, Published paper (Refereed)
Abstract [en]

In the domain of robotic tutors, personalised tutoring has started to receive scientists' attention, but it remains relatively underexplored. Previous work using reinforcement learning (RL) has addressed personalised tutoring from the perspective of affective policy learning. However, little is known about the effects of robot behaviour personalisation on users' task performance. Moreover, it is unclear if and when personalisation is more beneficial than a robot that adapts to its users and to the context of the interaction without personalising its behaviour. In this paper we build on previous work on affective policy learning that used RL to learn which robot supportive behaviours users prefer in an educational scenario. We build an RL framework for personalisation that allows a robot to select verbal supportive behaviours so as to maximise the user's task progress and positive reactions in a learning scenario where a Pepper robot acts as a tutor and helps people learn how to solve grid-based logic puzzles. A between-subjects user study showed that participants were more efficient at solving logic puzzles with, and preferred, a robot that exhibited more varied behaviours compared with a robot that personalised its behaviour by converging on a specific one over time. We discuss insights into the negative effects of personalisation and report lessons learned together with design implications for personalised robots.
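
The abstract does not spell out the selection-and-update loop, but a minimal sketch can illustrate the kind of mechanism involved. The Python below is an illustrative assumption, not the authors' implementation: the behaviour set, the state encoding, the reward weights (0.7 for task progress, 0.3 for positive reactions) and the tabular Q-learning update are placeholders standing in for whatever affective policy learning method the paper actually uses.

    # Minimal sketch (assumed, not the paper's exact method): a tabular RL agent
    # that selects a verbal supportive behaviour and updates its preferences from
    # a reward combining task progress and the user's reaction.
    import random
    from collections import defaultdict

    BEHAVIOURS = ["encourage", "hint", "praise", "empathise"]  # assumed action set

    class SupportiveBehaviourAgent:
        def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.2):
            self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
            self.q = defaultdict(float)  # Q[(state, behaviour)] -> estimated value

        def select(self, state):
            # epsilon-greedy: mostly exploit the currently preferred behaviour
            if random.random() < self.epsilon:
                return random.choice(BEHAVIOURS)
            return max(BEHAVIOURS, key=lambda b: self.q[(state, b)])

        def update(self, state, behaviour, task_progress, positive_reaction, next_state):
            # reward mixes task progress (e.g. puzzle steps solved) with reaction valence;
            # the 0.7/0.3 weighting is an arbitrary placeholder
            reward = 0.7 * task_progress + 0.3 * positive_reaction
            best_next = max(self.q[(next_state, b)] for b in BEHAVIOURS)
            target = reward + self.gamma * best_next
            self.q[(state, behaviour)] += self.alpha * (target - self.q[(state, behaviour)])

Note that an update of this kind naturally converges on the single highest-valued behaviour per state; that convergence is exactly the "personalisation" which, according to the study's results, participants found less effective than a robot exhibiting more varied behaviours.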

Place, publisher, year, edition, pages
IEEE, 2018. p. 705-712
National Category
Human Computer Interaction
Identifiers
URN: urn:nbn:se:uu:diva-366205
DOI: 10.1109/ROMAN.2018.8525832
ISI: 000494315600112
ISBN: 978-1-5386-7981-4 (electronic)
OAI: oai:DiVA.org:uu-366205
DiVA, id: diva2:1263873
Conference
RO-MAN 2018, August 27–31, Nanjing, China
Funder
Swedish Research Council, 2015-04378; Swedish Foundation for Strategic Research, RIT15-0133
Available from: 2018-11-17. Created: 2018-11-17. Last updated: 2020-10-25. Bibliographically approved.
In thesis
1. Machine Behavior Development and Analysis using Reinforcement Learning
2020 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

We are approaching a future where robots and humans will co-exist and co-adapt. To understand how a robot can co-adapt with humans, we need to understand and develop efficient algorithms suitable for such interaction. This can help us not only to advance the field of robotics but also to understand ourselves. Machine Behavior, a subject proposed by Iyad Rahwan in a recent Science article, studies algorithms and the social environments in which they operate. This view suggests that, when we study any artificial robot we create, we should apply a two-step method grounded in logical positivism, as in the natural sciences: on the one hand, build a theory by logical deduction; on the other hand, set up experiments to test it empirically.

Reinforcement learning (RL) is a computational model that helps us build a theory to explain interactive processes. Integrated with neural networks and statistics, current RL methods can obtain reliable learning representations and adapt over the course of an interaction. It may be one of the first times that we can use a single theoretical framework to capture uncertainty and adapt automatically during interactions between humans and robots. Although some limitations have been observed in different studies, many positive aspects have also been revealed. Moreover, considering the potential these methods have shown in related fields, e.g. image recognition, physical human-robot interaction and manipulation, we hope this framework will bring more insights to the field of robotics. The main challenge in applying deep RL to social robotics is the volume of data required. In traditional robotics problems such as body control, simultaneous localization and mapping, and grasping, deep reinforcement learning typically takes place in a non-human environment, where the robot can interact indefinitely to optimize its strategies. Applications in social robotics, however, operate in the complex environment of human-robot interaction: social robots require human involvement every time they learn, which makes data collection very expensive. In this thesis, we discuss several ways to address this challenge, mainly in terms of two aspects: the evaluation of learning algorithms and the development of learning methods for human-robot co-adaptation.

Place, publisher, year, edition, pages
Uppsala: Acta Universitatis Upsaliensis, 2020. p. 43
Series
Digital Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology, ISSN 1651-6214 ; 1983
Keywords
reinforcement learning, robotics, human robot interaction
National Category
Robotics
Research subject
Computer Science
Identifiers
URN: urn:nbn:se:uu:diva-423434
ISBN: 978-91-513-1053-4
Public defence
2020-12-11, Häggsalen, Ångströmlaboratoriet, Lägerhyddsvägen 1, Uppsala, 10:00 (English)
Available from: 2020-11-20. Created: 2020-10-25. Last updated: 2021-01-25.

Open Access in DiVA

fulltext (3190 kB), 531 downloads
File information
File name: FULLTEXT01.pdf. File size: 3190 kB. Checksum (SHA-512):
8263f2f126d5a4f0d8a9599807e1d9ba0344773d567bd1bcc718e1f093abc45da94bf7221ac45df1f79084426e879e9a4536503714e2a331816c9814fecec2a0
Type: fulltext. Mimetype: application/pdf

Other links

Publisher's full text

Authority records

Gao, Yuan; Obaid, Mohammad; Castellano, Ginevra
