Social and emotional robots

Publications related to robot interaction.

  1. T. Lourens and E. I. Barakova. User-Friendly Robot Environment for Creation of Social Scenarios [2.56 MB pdf]. In J. M. Ferrandez, J. R. Alvarez, F. de la Paz, and F. J. Toledo, editors, IWINAC 2011, number 6686 in Lecture Notes in Computer Science, pages 212-221, La Palma, Spain, May-June 2011. Springer-Verlag.

    Abstract

    This paper proposes a user-friendly framework for designing robot behaviors by users with minimal understanding of programming. It is a step towards an end-user platform which is meant to be used by domain specialists for creating social scenarios, i.e. scenarios in which high precision of movement is not needed but frequent redesign of the robot behavior is a necessity. We show by a hand-shaking experiment how convincingly robot behavior can be constructed in this framework.

  2. T. Lourens, R. van Berkel, and E. I. Barakova. Communicating emotions and mental states to robots in a real time parallel framework using Laban movement analysis [2476 KB pdf]. Robotics and Autonomous Systems, 58(12):1256-1265, 2010, doi:10.1016/j.robot.2010.08.006.

    Abstract

    This paper presents a parallel real-time framework for the extraction and recognition of emotions and mental states from video fragments of human movements. In the experimental setup human hands are tracked by evaluation of moving skin-colored objects. The tracking analysis demonstrates that acceleration and frequency characteristics of the traced objects are relevant for classification of the emotional expressiveness of human movements. The outcomes of the emotional and mental state recognition are cross-validated with the analysis of two independent certified movement analysts (CMAs) who use the Laban movement analysis (LMA) method. We argue that LMA-based computer analysis can serve as a common language for expressing and interpreting emotional movements between robots and humans, and in that way it resembles the common coding principle between action and perception in humans and primates that is embodied by the mirror neuron system. The solution is part of a larger project on interaction between a human and a humanoid robot with the aim of training social behavioral skills to autistic children with robots acting in a natural environment.
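
    The acceleration and frequency features mentioned in this abstract can be illustrated with a short sketch. The code below is only an assumption of how such descriptors might be computed from a tracked hand trajectory and mapped to a coarse expressiveness label; the function names, thresholds, and the rule itself are illustrative and not taken from the paper.

    ```python
    import numpy as np

    def movement_features(positions, fps=30.0):
        """Acceleration and frequency descriptors of a tracked hand
        trajectory (an N x 2 array of image coordinates). A guess at the
        kind of features meant in the abstract, not the paper's code."""
        dt = 1.0 / fps
        velocity = np.gradient(positions, dt, axis=0)
        acceleration = np.gradient(velocity, dt, axis=0)
        mean_accel = np.linalg.norm(acceleration, axis=1).mean()

        # Dominant oscillation frequency of the horizontal component.
        x = positions[:, 0] - positions[:, 0].mean()
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=dt)
        dominant_freq = freqs[spectrum[1:].argmax() + 1]
        return mean_accel, dominant_freq

    def label_expressiveness(mean_accel, dominant_freq,
                             accel_thresh=800.0, freq_thresh=1.5):
        """Toy rule in the spirit of Laban's Effort qualities (sudden and
        strong vs. sustained and light). Thresholds are placeholders."""
        if mean_accel > accel_thresh and dominant_freq > freq_thresh:
            return "agitated / high arousal"
        return "calm / low arousal"
    ```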

  3. E. I. Barakova and T. Lourens. Expressing and interpreting emotional movements in social games with robots. Personal and Ubiquitous Computing, 14:457–467, 2010.

    Abstract

    This paper provides a framework for recording, analyzing, and modeling of three-dimensional emotional movements for embodied game applications. To foster embodied interaction, we need interfaces that can develop a complex, meaningful understanding of intention, both kinesthetic and emotional, as it emerges through natural human movement. The movements are emulated on robots or other devices with sensory-motor features as a part of games that aim at improving the social interaction skills of children. The design of an example game platform that is used for training of children with autism is described, since the type of the emotional behaviors depends on the embodiment of the robot and the context of the game. The results show that quantitative movement parameters can be matched to the emotional state of the embodied agent (human or robot) using Laban movement analysis. Emotional movements that were emulated on robots using this principle were tested with children in the age group 7–9. The tests show reliable recognition of most of the behaviors.

  4. E. I. Barakova and D. Vanderelst. From Spreading of Behavior to Dyadic Interaction—A Robot Learns What to Imitate [418 KB pdf]. International Journal of Intelligent Systems, vol. , 1–18, 2011.

    Abstract

    Imitation learning is a promising way to learn new behavior in robotic multi-agent systems and in human-robot interaction. However, imitating agents should be able to decide autonomously which behavior, observed in others, is interesting to copy. This paper shows a method for extraction of meaningful chunks of information from a continuous sequence of observed actions by using a simple recurrent network (Elman net). Results show that, despite the high level of task-specific noise, Elman nets can be used for learning, through prediction, reoccurring action patterns observed in another robotic agent. We conclude that this primarily robot-to-robot interaction study can be generalized to human-robot interaction, and we show how we use these results for recognizing emotional behaviors in human-robot interaction scenarios. The limitations of the proposed approach and future directions are discussed.
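
    As a rough illustration of the prediction-based segmentation idea (not the authors' implementation), the sketch below trains a minimal Elman network to predict the next observed action symbol; spikes in prediction error then mark boundaries between familiar action patterns and unfamiliar material. The network sizes, learning rate, and toy action stream are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    class ElmanNet:
        """Minimal Elman (simple recurrent) network that predicts the next
        one-hot action symbol. An illustrative sketch only."""
        def __init__(self, n_symbols, n_hidden=20, lr=0.1):
            self.Wxh = rng.normal(0, 0.1, (n_hidden, n_symbols))
            self.Whh = rng.normal(0, 0.1, (n_hidden, n_hidden))
            self.Why = rng.normal(0, 0.1, (n_symbols, n_hidden))
            self.h = np.zeros(n_hidden)
            self.lr = lr

        def step(self, x, target):
            """One prediction step with a one-step truncated BPTT update;
            returns the prediction error for the current symbol."""
            h_prev = self.h
            self.h = np.tanh(self.Wxh @ x + self.Whh @ h_prev)
            y = self.Why @ self.h
            p = np.exp(y - y.max()); p /= p.sum()
            error = -np.log(p[target.argmax()] + 1e-12)
            dy = p - target
            dh = (self.Why.T @ dy) * (1.0 - self.h ** 2)
            self.Why -= self.lr * np.outer(dy, self.h)
            self.Whh -= self.lr * np.outer(dh, h_prev)
            self.Wxh -= self.lr * np.outer(dh, x)
            return error

    # Feed an observed action stream symbol by symbol; after training, the
    # prediction error stays low inside the familiar motif and rises where
    # an unfamiliar segment begins, which is the cue for segmentation.
    def one_hot(i, n=4):
        v = np.zeros(n); v[i] = 1.0
        return v

    stream = [s for _ in range(200) for s in (0, 1, 2, 3)]
    net = ElmanNet(n_symbols=4)
    errors = [net.step(one_hot(a), one_hot(b))
              for a, b in zip(stream[:-1], stream[1:])]
    ```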

  5. T. Lourens and E. I. Barakova. Retrieving Emotion from Motion Analysis: In a Real Time Parallel Framework for Robots [1746 KB pdf]. In C. S. Leung, M. Lee, and J. H. Chan, editors, ICONIP 2009, number 5864, part II in Lecture Notes in Computer Science, pages 430-438, Bangkok, Thailand, December 2009. Springer-Verlag.

    Abstract

    This paper presents a parallel real-time framework for emotion extraction from video fragments of human movements. The framework is used for tracking of a waving hand by evaluation of moving skin-colored objects. The tracking analysis demonstrates that acceleration and frequency characteristics of the traced objects are relevant for classification of the emotional expressiveness of human movements. The solution is part of a larger project on interaction between a human and a humanoid robot with the aim of training social behavioral skills to autistic children with robots acting in a natural environment.

  6. D. Vanderelst, R. M. C. Ahn, and E. I. Barakova. Simulated Trust: A cheap social learning strategy [330 KB pdf]. Theoretical Population Biology, 76(3):189-196, November 2009.

    Abstract

    Animals use heuristic strategies to determine from which conspecifics to learn socially. This leads to directed social learning, which protects them from copying non-adaptive information. So far, the strategies of animals that lead to directed social learning are assumed to rely on (possibly indirect) inferences about the demonstrator's success. As an alternative to this assumption, we propose a strategy that only uses self-established estimates of the pay-offs of behavior. We evaluate the strategy in a number of agent-based simulations. Critically, the strategy's success is warranted by the inclusion of an incremental learning mechanism. Our findings point to new theoretical possibilities for how animals regulate social learning. More broadly, our simulations emphasize the need to include a realistic learning mechanism in game-theoretic studies of social learning strategies, and call for a re-evaluation of previous findings.

  7. E. I. Barakova, J. Gillessen, and L. Feijs. Social training of autistic children with interactive intelligent agents [877 KB pdf]. Journal of Integrative Neuroscience, 8(1):23-34, 2009.

    Abstract

    The ability of autistic children to learn by applying logical rules has been used widely in behavioral therapies for social training. We propose to teach social skills to autistic children through games that simultaneously stimulate social behavior and include recognition of elements of social interaction. For this purpose we created a multi-agent platform of interactive blocks, and we created appropriate games that require shared activities leading to a common goal. The games included perceiving and understanding elements of social behavior that non-autistic children can recognize. We argue that the importance of elements of social interaction such as perceiving interaction behaviors and assigning metaphoric meanings has been overlooked, and that they are very important in the social training of autistic children. Two games were compared by testing them with users. The first game focused only on the interaction between the agents; the other combined interaction between the agents with metaphoric meanings assigned to them. The results show that most of the children recognized the patterns of interaction as well as the metaphors when they were demonstrated through embodied agents and were included within games having features that engage the interest of this user group. The results also show the potential of the platform and the games to influence the social behavior of the children positively.

  8. T. Lourens and E. I. Barakova. Humanoid Robots are Retrieving Emotion from Motion Analysis [2053 KB pdf]. In T. Calders, K. Tuyls, and M. Pechenizkiy, editors, BNAIC 2009, pages 161-168, Eindhoven, The Netherlands, October 2009.

    Abstract

    This paper presents an application for hand waving in real time using a parallel framework. Analysis of 15 different video fragments demonstrates that acceleration and frequency are relevant parameters for emotion classification of hand waving. The solution will be used for human-robot interaction, with the aim of training social behavioral skills to autistic children in a natural environment.

  9. E. I. Barakova, G. van Wanrooij, R. van Limpt, and M. Menting. Using an emergent system concept in designing interactive games for autistic children [401 KB pdf]. 6th International Conference on Interaction Design and Children (IDC07), pages 73-77, Aalborg, Denmark, June 2007. ACM 978-1-59593-747-6.

    Abstract

    This paper features the design process, the outcome, and preliminary tests of an interactive toy that expresses emergent behavior and can be used for behavioral training of autistic children, as well as an engaging toy for every child. We exploit the interest of autistic children in regular patterns and order to stimulate their motivational, explorative and social skills. As a result we have developed a toy that consists of an undefined number of cubes that express emergent behavior by communicating with each other and changing their colors depending on how they have been positioned by the players. The user tests have shown increased time of engagement of the children with the toy in comparison with their usual play routines, pronounced explorative behavior, and encouraging results with improvement of turn-taking interaction.

  10. D. Vanderelst, R. Ahn, and E. I. Barakova. Simulated trust: towards robust social learning [207 KB pdf]. In S. Bullock, J. Noble, R. Watson, and M. A. Bedau, editors, Artificial Life XI: Proceedings of the Eleventh International Conference on the Simulation and Synthesis of Living Systems, pages 632-639, 2008. MIT Press, Cambridge, MA.

    Abstract

    Social learning is a potentially powerful learning mechanism to use in artificial multi-agent systems. However, findings about how animals use social learning show that it can also be detrimental. By using social learning, agents act on second-hand information that might not be trustworthy, which can lead to the spread of maladaptive behavior throughout populations. Animals employ a number of strategies to use social learning selectively, only when appropriate. This suggests that artificial agents could learn more successfully if they are able to strike the appropriate balance between social and individual learning. In this paper, we propose a simple mechanism that regulates the extent to which agents rely on social learning. Our agents can vary the amount of trust they have in others. The trust is not determined by the performance of others but depends exclusively on the agents' own rating of the demonstrations. The effectiveness of this mechanism is examined through a series of simulations. We first show that there are various circumstances under which the performance of multi-agent systems is indeed seriously hampered when agents rely on indiscriminate social learning. We then investigate how agents that incorporate the proposed trust mechanism fare under the same circumstances. Our simulations indicate that the mechanism is quite effective in regulating the extent to which agents rely on social learning. It causes considerable improvements in the learning rate, and can, under some circumstances, even improve the eventual performance of the agents. Finally, some possible extensions of the proposed mechanism are discussed.
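
    A minimal sketch of the kind of trust mechanism described here, assuming a simple bandit-style environment: each agent keeps its own incremental payoff estimates, rates a demonstration only against those estimates, and adjusts its probability of copying accordingly. The class, parameter names, and learning rates are hypothetical, not taken from the paper.

    ```python
    import random

    class TrustingAgent:
        """Toy agent that mixes individual and social learning. Trust in
        demonstrations is regulated only by the agent's own payoff
        estimates, never by the demonstrator's observed success."""
        def __init__(self, true_payoffs, lr=0.2, trust_lr=0.1):
            self.payoffs = true_payoffs                 # environment, hidden from the agent
            self.estimates = [0.0] * len(true_payoffs)  # self-established payoff estimates
            self.trust = 0.5                            # probability of copying a demonstration
            self.lr, self.trust_lr = lr, trust_lr

        def learn_individually(self):
            b = random.randrange(len(self.estimates))
            reward = self.payoffs[b] + random.gauss(0, 0.1)
            # Incremental update of the estimate for the tried behavior.
            self.estimates[b] += self.lr * (reward - self.estimates[b])

        def rate_demonstration(self, behavior):
            # Trust goes up if, by the agent's OWN estimates, the demonstrated
            # behavior looks at least as good as anything it already knows.
            good = self.estimates[behavior] >= max(self.estimates)
            self.trust += self.trust_lr * ((1.0 if good else 0.0) - self.trust)

        def choose(self, demonstrated_behavior):
            if random.random() < self.trust:
                return demonstrated_behavior               # social learning: copy
            return max(range(len(self.estimates)),
                       key=self.estimates.__getitem__)     # individual choice
    ```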

  11. D. Vanderelst and E. I. Barakova. Autonomous parsing of behavior in a multi-agent setting [711 KB pdf]. In L. Rutkowski et al., editors, ICAISC 2008, number 5079 in Lecture Notes in Artificial Intelligence, pages 1198-1209, 2008. Springer-Verlag.

    Abstract

    Imitation learning is a promising route to instruct robotic multi-agent systems. However, imitating agents should be able to decide autonomously what behavior, observed in others, is interesting to copy. Here we investigate whether a simple recurrent network (Elman net) can be used to extract meaningful chunks from a continuous sequence of observed actions. Results suggest that, even despite the high level of task-specific noise, Elman nets can be used for isolating re-occurring action patterns in robots. Limitations and future directions are discussed.

  12. E. I. Barakova. Emotion recognition in robots in a social game for autistic children [113 KB pdf]. In J. Sturm and M. M. Bekker, editors, Proceedings of the 1st workshop on Design for Social Interaction through Physical Play, pages 21-25. Eindhoven, the Netherlands, 2008.

    Abstract

    This paper provides a framework for a social game whose goal is to improve social interaction skills through associative play. It describes the design of the game platform and an ongoing study on the perception of emotional expression from motion cues for communication and social coordination. In particular, children with autism spectrum disorders are targeted, since they will benefit most from behavioral training that may improve their social skills. The promising results from two stages of this work are shown.

  13. T. Lourens and E. I. Barakova. My Sparring Partner is a Humanoid Robot - A parallel framework for improving social skills by imitation [1646 KB pdf]. In J. R. Alvarez, editor, IWINAC 2009, number 5602 in Lecture Notes in Computer Science, pages 344-352, Santiago de Compostela, Spain, June 2009. Springer-Verlag.

    Abstract

    This paper presents a framework for parallel tracking of human hands and faces in real time, and is a partial solution to a larger project on human-robot interaction which aims at training autistic children using a humanoid robot in a realistic, non-restricted environment. In addition to the framework, the results of tracking different hand waving patterns are shown. These patterns provide an easy-to-understand profile of hand waving, and can serve as the input for a classification algorithm.

  14. E. I. Barakova and T. Lourens. Mirror neuron framework yields representations for robot interaction [858 KB pdf]. Neurocomputing, 72(4-6):895-900, 2009.

    Abstract

    Common coding is a functional principle that underlies the mirror neuron paradigm. It ensures parity between perception and action, since the perceived and performed actions are equivalently and simultaneously represented within the mirror neuron system. Based on the parity of this representation we show how the mirror neuron system may facilitate the interaction between two robots. Synchronization between neuron groups in different structures of the mirror neuron system is at the basis of the interaction behavior. A robotic simulation is used to illustrate several interactions. The resulting synchronization and turn-taking behaviors show the potential of the mirror neuron paradigm for designing socially meaningful behaviors.
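
    The role given to synchronization can be illustrated, very loosely, with two coupled phase oscillators standing in for neuron groups in two interacting agents. The sketch below is an assumption for illustration only, not the model used in the paper; all values are arbitrary.

    ```python
    import numpy as np

    def coupled_neuron_groups(steps=2000, dt=0.01, k=0.8):
        """Two Kuramoto-style phase oscillators standing in for neuron
        groups in two interacting agents. With coupling k large enough
        relative to the frequency mismatch the two activity traces
        phase-lock; below that they drift in and out of phase."""
        theta = np.array([0.0, 2.5])      # initial phases
        omega = np.array([1.0, 1.15])     # slightly different natural frequencies
        activity = []
        for _ in range(steps):
            dtheta = omega + k * np.sin(theta[::-1] - theta)
            theta = theta + dt * dtheta
            activity.append(np.cos(theta).copy())  # each group's activity trace
        return np.array(activity)

    traces = coupled_neuron_groups()
    ```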

  15. E. I. Barakova, J. C. C. Gillesen, and L. M. G. Feijs. Use of goals and dramatic elements in behavioral training of children with ASD [303 KB pdf]. Proceedings of the 7th international conference on Interaction design and children, pages 37-40, 2008. ACM, New York.

    Abstract

    We describe the development of a multi-agent platform and adequate games that aim to stimulate social behavior of autistic children. User tests with two games, one with emerging patterns and another with goals and dramatic elements, were compared. The results show that the children do not play significantly longer with either of the games when exposed for the first time to the multi-agent toy. Interestingly, most of the children recognized the dramatic elements, which makes us believe that with longer exposure and proper guidance children might be taught social skills. Test results are described quantitatively and qualitatively.

  16. E. I. Barakova and T. Lourens. Novelty gated episodic memory formation for robot exploration [1166 KB pdf]. In R. R. Yager and V. S. Sgurev, editors, Second International IEEE Conference on Intelligent Systems (IS 2004), volume I, pages 116-121, Varna, Bulgaria, June 2004.

    Abstract

    This paper presents a method for novelty and familiarity detection, aiming at inferential use of episodic memories for modeling behavior in novel situations. The method is based on the simulation of the hippocampal function, especially on those aspects that relate to memory formation in a spatial context: (1) the sensory, perceptual, and behavioral correlates of episodic memory formation, (2) its involvement in novelty/familiarity detection and inferential reuse of old memories, and (3) the natural way to relate the internal hippocampal and abstract spatial representations. The study differs substantially from existing models that relate hippocampal function to robot exploration, since it focuses on flexible reuse of experienced episodes rather than on navigation. The model is built on the experimentally supported hypothesis of the novelty/familiarity discrimination function of the hippocampal area CA1.
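
    A minimal sketch of novelty-gated storage, assuming episodes are encoded as feature vectors and familiarity is measured as similarity to stored episodes; the encoding, threshold, and reuse rule are illustrative placeholders rather than the hippocampal model described in the paper.

    ```python
    import numpy as np

    class NoveltyGatedMemory:
        """Novelty-gated episodic store: an episode is written only when
        its observation is sufficiently dissimilar from everything stored
        so far; otherwise the behavior of the most similar old episode is
        reused. All details here are illustrative placeholders."""
        def __init__(self, novelty_threshold=0.3):
            self.episodes = []               # list of (observation, behavior) pairs
            self.threshold = novelty_threshold

        @staticmethod
        def _similarity(a, b):
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

        def familiarity(self, observation):
            if not self.episodes:
                return 0.0
            return max(self._similarity(observation, o) for o, _ in self.episodes)

        def process(self, observation, behavior):
            if 1.0 - self.familiarity(observation) > self.threshold:
                self.episodes.append((observation, behavior))    # novel: store it
                return behavior
            best = max(self.episodes,                             # familiar: reuse
                       key=lambda e: self._similarity(observation, e[0]))
            return best[1]
    ```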

  17. T. Lourens, K. Nakadai, H. G. Okuno, and H. Kitano. Selective Attention by Integration of Vision and Audition [3897 KB pdf]. In Proceedings of the First IEEE-RAS International Conference on Humanoid Robots, Humanoids2000, pages 20, file: 44.pdf, The Massachusetts Institute of Technology, Boston, U.S.A., September 2000.

    Abstract

    Selective attention is one of the tasks humans solve with great ease, yet in computer simulations of human cognition it is a very complicated problem. In humanoid research it becomes even more complicated due to physical restrictions of the hardware. Compared to a human, cameras, for example, have small visual fields and low resolution, while motion causes a lot of noise, which makes audition a more complicated task. Combining vision and audition in humanoids is beneficial for both cues: vision because it does not suffer from noise, while audition is not restricted to an approximately 40° x 40° receptive field area, nor to partly or fully occluded objects. The low localization accuracy of both human (±8°) and artificial (±10°) audition systems can be compensated for by using vision. In this paper we propose a model that simulates selective attention by integrating vision and audition. A learning mechanism is incorporated as well to make the model adaptive to any arbitrary scene. The input of the model is formed by specific and robust features that are extracted from a huge amount of sensor data, hence part of the paper will focus on feature extraction. Audition is employed to improve selective attention because objects can be occluded or outside the visual field of a camera or human vision. Visual fields can be made wider by lenses, but never reach the full 360 degrees, hence a map is needed. This map contains information about all recognized objects over time, where objects are represented by features in a symbolic description. This map, in fact, represents a kind of artificial (temporal) memory. The location information of the objects (given by real-world coordinates) is stored in the map as well. Features from both vision and audition cues are also integrated in this map. Storing information over time in such a map facilitates and speeds up the selective attention model. The map can easily be extended to incorporate extracted features from other types of sensors. In a simple natural environment the functionality of the model as well as the symbiosis between vision and audition are illustrated. The scenario shows that interaction between vision and audition is beneficial, which is rarely found in the literature. Promising results of the scenario show that audition was needed to localize an initially invisible object, while vision was then used to accurately localize the object.
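
    The object map described in this abstract can be sketched as a small data structure in which coarse auditory azimuth estimates are available over the full circle and are refined by vision inside the camera's field of view. Every name and number below is an assumption for illustration, not the paper's implementation.

    ```python
    class ObjectMap:
        """Fusion map over recognized objects: audition gives a coarse
        azimuth estimate over the full circle, vision refines it inside
        the camera's field of view, and attention goes to the most
        recently and most precisely localized object."""
        def __init__(self, visual_fov_deg=40.0):
            self.fov = visual_fov_deg
            self.objects = {}   # label -> (azimuth_deg, uncertainty_deg, timestamp)

        def update_from_audition(self, label, azimuth_deg, t):
            # Audition covers 360 degrees but is coarse (about +/- 10 degrees).
            self.objects[label] = (azimuth_deg, 10.0, t)

        def update_from_vision(self, label, azimuth_deg, t):
            # Vision is precise (about +/- 1 degree) but only inside the field of view.
            if abs(azimuth_deg) <= self.fov / 2:
                self.objects[label] = (azimuth_deg, 1.0, t)

        def attend(self):
            if not self.objects:
                return None
            # Prefer the most recent observation; break ties by precision.
            return max(self.objects.items(),
                       key=lambda kv: (kv[1][2], -kv[1][1]))[0]
    ```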

  18. E. I. Barakova. Social Interaction in Robotic Agents Emulating the Mirror Neuron Function [635 KB pdf]. Nature Inspired Problem-Solving Methods in Knowledge Engineering, number 4528 in Lecture Notes in Computer Science, pages 389-398, 2007. Springer-Verlag.

    Abstract

    Emergent interactions that are expressed by the movements of two agents are discussed in this paper. The common coding principle is used to show how the mirror neuron system may facilitate interaction behaviour. Synchronization between neuron groups in different structures of the mirror neuron system is at the basis of the interaction behaviour. A robotic experimental setting is used to illustrate the method. The resulting synchronization and turn-taking behaviours show the advantages of the mirror neuron paradigm for designing socially meaningful behaviour.

  19. L. Feijs and E. I. Barakova. Semantics through Embodiment: a Non-linear Dynamics approach to Affective Design [1577 KB pdf]. In L. Feijs, S. Kyffin, and B. Yong, editors, DesForm 2007, pages 108-116, 2007.

    Abstract

    In this paper we address the creation and interpretation of movements, light and sound from a fundamental and innovative viewpoint. Using a number of concepts from the relatively new and very promising research field of nonlinear adaptive systems, and drawing inspiration from psychophysical studies on the perception of emotion, we address the study of movements and other autonomous expressions of products. The goal is to understand the semantics of movement, particularly the emotional meaning of the movement, and to translate it to other autonomous expressive behavior.

  20. E. I. Barakova. Sensorimotor paradigms for design of movement and social interaction [95 KB pdf]. In L. Feijs, S. Kyffin, and B. Yong, editors, DesForm 2006, pages 126-130, 2006.

    Abstract

    The human brain has evolved for governing motor activity by transforming sensory patterns to patterns of motor coordination. Movement, as a basic bodily expression of this governing function is shown to underlie higher cognitive processes and social interaction. There are three prevailing concepts of sensorimotor interaction that set up different frameworks for design of artificial movement. This paper focuses on the common coding [14] paradigm of sensorimotor interaction as justified by recent experimental studies on the mirror neuron system. It aims to provide a novel approach to design of movement interactions in an inter-agent setting.

  21. M. M. Bekker, J. A. Sturm, and E. I. Barakova Designing for social interaction through physical play [1263 KB pdf]. In J. A. Sturm and M. M. Bekker, editors, Proceedings of the 1st workshop on Design for Social Interaction through Physical Play, pages 7-10, 2008. Eindhoven, the Netherlands.

    Abstract

    Nine very interesting position papers were submitted to our workshop on Design for Social Interaction and Physical Play. The papers, presented in these proceedings, cover design concepts for very diverse user groups and contexts of use. Creating novel concepts is often done using theories about human behaviour as an inspiration source. This introduction describes the content of our workshop along three dimensions: user groups, context of use and related theories.
