Robot navigation

Publications related to navigation.

  1. T. Lourens and E. I. Barakova. User-Friendly Robot Environment for Creation of Social Scenarios [2.56 MB pdf]. In J. M. Ferrandez, J. R. Alvarez, F. de la Paz, and F. J. Toledo, editors, IWINAC 2011, number 6686 in Lecture Notes in Computer Science, pages 212-221, La Palma, Spain, May-June 2011. Springer-Verlag.


    This paper proposes a user-friendly framework that allows users with minimal programming knowledge to design robot behaviors. It is a step towards an end-user platform meant to be used by domain specialists for creating social scenarios, i.e. scenarios in which high precision of movement is not needed but frequent redesign of the robot behavior is a necessity. A hand-shaking experiment demonstrates how readily robot behavior can be constructed in this framework.

  2. E. I. Barakova and T. Lourens. Event based self-supervised temporal integration for multimodal sensor data [702 KB pdf]. Journal of Integrative Neuroscience, 4(2):265-282, June 2005. DOI: 10.1142/S021963520500077X.


    A method for synergistic integration of multimodal sensor data is proposed in this paper. This method is based on two aspects of the integration process: (1) achieving synergistic integration of two or more sensory modalities, and (2) fusing the various information streams at particular moments during processing. Inspired by psychophysical experiments, we propose a self-supervised learning method for achieving synergy with combined representations. Evidence from temporal registration and binding experiments indicates that different cues are processed individually at specific time intervals. Therefore, an event-based temporal co-occurrence principle is proposed for the integration process. This integration method was applied to a mobile robot exploring unfamiliar environments. Simulations showed that integration enhanced route recognition in environments with many perceptual similarities; moreover, they indicate that a perceptual hierarchy of knowledge about instant movement contributes significantly to short-term navigation, but that visual perceptions have a bigger impact over longer intervals.
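
The event-based co-occurrence principle can be sketched in a few lines. This is an illustration, not the paper's model: an "event" is here simply a sample where a signal changes sharply, and cues from two modalities are fused only when their events fall within a shared time window; the thresholds and window size are assumptions.

```python
# Hedged sketch of event-based temporal co-occurrence binding of two
# sensor streams (illustrative, not the authors' code). An "event" is a
# sample where the signal change exceeds a threshold; cues are bound
# only when events from both modalities fall within a time window.

def detect_events(stream, threshold):
    """Return indices where the absolute change between samples exceeds threshold."""
    return [i for i in range(1, len(stream))
            if abs(stream[i] - stream[i - 1]) > threshold]

def cooccurring_pairs(events_a, events_b, window):
    """Pair up events from two modalities that occur within `window` steps."""
    return [(ta, tb) for ta in events_a for tb in events_b
            if abs(ta - tb) <= window]

vision = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]    # steps at t=2 and t=5
odometry = [0.0, 0.0, 0.0, 0.9, 0.9, 0.9]  # step at t=3

pairs = cooccurring_pairs(detect_events(vision, 0.5),
                          detect_events(odometry, 0.5),
                          window=1)
print(pairs)  # the vision event at t=2 and odometry event at t=3 co-occur
```

Only the (t=2, t=3) pair survives; the vision event at t=5 has no odometry counterpart within the window and so is not fused.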

  3. E. I. Barakova. Emergent behaviors based on episodic encoding and familiarity driven retrieval [431 KB pdf]. In C. Bussler and D. Fensel, editors, Artificial Intelligence: Methodology, Systems, and Applications, 11th International Conference, AIMSA 2004, volume 3192 of Lecture Notes in Artificial Intelligence, pages 188-197. Springer-Verlag, 2004.


    In analogy to animal research, where behavioral and internal neural dynamics are analysed simultaneously, this paper suggests a method for emergent behaviors arising in interaction with the underlying neural mechanism. In this way, an attempt is made to go beyond the indeterministic nature of the emergent behaviors of robots. The neural dynamics is represented as an interaction between memories of experienced episodes, the current environmental input, and the feedback of previous motor actions. The emergent properties can be observed in a two-stage process: exploratory (latent) learning and goal-oriented learning. Correspondingly, the learning is dominated to a different extent by two factors: novelty and reward. While reward learning is used to show the relevance of the method, novelty/familiarity is the basis for forming the emergent properties. The method is strongly inspired by the state-of-the-art understanding of hippocampal functioning, especially its role in novelty detection and episodic memory formation in relation to spatial context.

  4. D. Vanderelst and E. I. Barakova. Autonomous parsing of behavior in a multi-agent setting [711 KB pdf]. In L. Rutkowski et al., editors, ICAISC 2008, number 5079 in Lecture Notes in Artificial Intelligence, pages 1198-1209, 2008. Springer-Verlag.


    Imitation learning is a promising route to instruct robotic multi-agent systems. However, imitating agents should be able to decide autonomously which behavior, observed in others, is interesting to copy. Here we investigate whether a simple recurrent network (Elman net) can be used to extract meaningful chunks from a continuous sequence of observed actions. Results suggest that, despite a high level of task-specific noise, Elman nets can be used to isolate re-occurring action patterns in robots. Limitations and future directions are discussed.
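
A minimal Elman-style simple recurrent network can make the chunking idea concrete. This is an illustrative sketch, not the paper's exact setup: the network is trained to predict the next action symbol, so within a re-occurring action chunk prediction error stays low, while error peaks mark candidate chunk boundaries. Architecture sizes, learning rate, and the toy A-B-C sequence are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

class ElmanNet:
    """Simple recurrent (Elman) network: the hidden state is copied to a
    context layer and fed back as extra input on the next time step."""
    def __init__(self, n_in, n_hidden, n_out, lr=0.1):
        self.Wxh = rng.normal(0.0, 0.3, (n_hidden, n_in))
        self.Whh = rng.normal(0.0, 0.3, (n_hidden, n_hidden))
        self.Who = rng.normal(0.0, 0.3, (n_out, n_hidden))
        self.h = np.zeros(n_hidden)   # context layer
        self.lr = lr

    def step(self, x, target):
        """One forward pass plus a truncated (one-step) gradient update;
        returns the squared prediction error for this step."""
        h_new = np.tanh(self.Wxh @ x + self.Whh @ self.h)
        y = self.Who @ h_new
        err = y - target
        dh = (self.Who.T @ err) * (1.0 - h_new ** 2)
        self.Who -= self.lr * np.outer(err, h_new)
        self.Wxh -= self.lr * np.outer(dh, x)
        self.Whh -= self.lr * np.outer(dh, self.h)  # uses the OLD context
        self.h = h_new
        return float(err @ err)

symbols = np.eye(3)              # one-hot codes for actions A, B, C
sequence = [0, 1, 2] * 200       # the chunk A-B-C re-occurring in the stream

net = ElmanNet(3, 8, 3)
errors = [net.step(symbols[sequence[t]], symbols[sequence[t + 1]])
          for t in range(len(sequence) - 1)]
print(f"first-50 mean error {np.mean(errors[:50]):.3f}, "
      f"last-50 mean error {np.mean(errors[-50:]):.3f}")
```

On this deterministic toy stream the prediction error falls as the chunk is learned; on a noisy stream the remaining error peaks would be the candidate segmentation points.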

  5. E. I. Barakova and T. Lourens. Efficient episode encoding for spatial navigation [1347 KB pdf]. International Journal of Systems Science, 36(14):877-885, November 2005.


    A method for familiarity-mediated encoding of episodic memories, for their inferential use in a spatial navigation task, is proposed. The method is strongly inspired by the state-of-the-art understanding of hippocampal functioning, especially its role in novelty detection and episodic memory formation in relation to spatial context. The model is constructed on the presumption that episodic memory formation has behavioral as well as sensory and perceptual correlates. In addition, the findings regarding hippocampal involvement in novelty/familiarity detection and episodic memory formation, together with the straightforward parallel between internal hippocampal and abstract spatial representations, are incorporated in the model. A navigation task is used to provide an experimental setup for behavioral testing with a rat-like agent. For this purpose, a framework that connects robot navigation and episodic memory representation is suggested. The computations are adapted for a real-time application. Simulation results show encoding of episodes and their use for navigation.

  6. E. I. Barakova and T. Lourens. Spatial navigation based on novelty mediated autobiographical memory [364 KB pdf]. In J. Mira and J. R. Alvarez, editors, IWINAC 2005, number 3561 in Lecture Notes in Computer Science, pages 1-10, Las Palmas de Gran Canaria, Spain, June 2005. Springer-Verlag.


    This paper presents a method for spatial navigation based mainly on past experiences. The past experiences are remembered in their temporal context, i.e. as episodes of events. The learned episodes form an active autobiography that determines the future navigation behaviour. The episodic and autobiographical memories are modelled to resemble the memory formation process that takes place in the rat hippocampus. The method brings inferential reasoning naturally into the robotic framework, which may make it more flexible for navigation in unseen environments. The relation between novelty and life-long exploratory (latent) learning is shown to be important and is therefore incorporated into the learning process. As a result, active autobiography formation depends on latent learning, while individual trials might be reward driven. The experimental results show that learning mediated by novelty provides a flexible and efficient way to encode spatial information in its contextual relatedness and directionality. Therefore, performing a novel task is fast, but the solution is not optimal. In addition, learning naturally becomes a continuous process: the encoding and retrieval phases have the same underlying mechanism and thus do not need to be separated. Therefore, building a “life-long” autobiography is feasible.
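
The claim that encoding and retrieval share one mechanism can be sketched with novelty-gated storage. This is an illustration under simple assumptions, not the paper's hippocampal model: an observation is stored only when it is sufficiently dissimilar from every remembered episode, and the same nearest-neighbour match that decides novelty also performs retrieval; the threshold and vector observations are assumptions.

```python
# Hedged sketch of novelty-gated episodic encoding (illustrative, not
# the paper's model). Storage and retrieval both reduce to one
# nearest-neighbour lookup over the stored episodes.

def distance(a, b):
    """Euclidean distance between two observation vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

class Autobiography:
    def __init__(self, novelty_threshold):
        self.episodes = []
        self.threshold = novelty_threshold

    def observe(self, obs):
        """Store `obs` if it is novel; either way return the most
        familiar (closest) stored episode, or None if memory is empty."""
        nearest = min(self.episodes, key=lambda e: distance(e, obs),
                      default=None)
        if nearest is None or distance(nearest, obs) > self.threshold:
            self.episodes.append(obs)   # novel -> encode
        return nearest                  # familiar -> retrieve

memory = Autobiography(novelty_threshold=1.0)
memory.observe((0.0, 0.0))   # novel: stored
memory.observe((0.1, 0.0))   # familiar: retrieved, not stored
memory.observe((5.0, 5.0))   # novel: stored
print(len(memory.episodes))  # 2
```

Because one lookup serves both purposes, "life-long" accumulation needs no separate training and recall phases, mirroring the abstract's argument.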

  7. E. I. Barakova. Social Interaction in Robotic Agents Emulating the Mirror Neuron Function [635 KB pdf]. In Nature Inspired Problem-Solving Methods in Knowledge Engineering, volume 4528 of Lecture Notes in Computer Science, pages 389-398, 2007. Springer-Verlag.


    Emergent interactions that are expressed by the movements of two agents are discussed in this paper. The common coding principle is used to show how the mirror neuron system may facilitate interaction behaviour. Synchronization between neuron groups in different structures of the mirror neuron system forms the basis of the interaction behaviour. A robotic experimental setting is used to illustrate the method. The resulting synchronization and turn-taking behaviours show the advantages of the mirror neuron paradigm for designing socially meaningful behaviour.

  8. W. Chonnaparamutt and E. I. Barakova. Robot Simulation of Sensory Integration Dysfunction in Autism with Dynamic Neural Fields Model [578 KB pdf]. In L. Rutkowski et al., editors, ICAISC 2008, number 5079 in Lecture Notes in Artificial Intelligence, pages 741-751, 2008. Springer-Verlag.


    This paper applies a dynamic neural fields model [1,23,7] to the multimodal interaction of sensory cues obtained from a mobile robot, and shows the impact of different temporal aspects of the integration on the precision of movements. We speculate that temporally uncoordinated sensory integration might be a reason for the poor motor skills of patients with autism. Accordingly, we simulate orientation behavior and suggest that the results can be generalized to grasping and other movements performed in three-dimensional space. Our experiments show that the impact of temporal aspects of sensory integration on the precision of movement is concordant with behavioral studies of sensory integration dysfunction and of autism. Our simulation and the robot experiment may suggest ideas for understanding and training the motor skills of patients with sensory integration dysfunction, autistic patients in particular, and are aimed at helping the design of games for behavioral training of autistic children.
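
The model family named here, dynamic neural fields, has a standard one-dimensional Amari form that is easy to simulate. The sketch below is illustrative, not the paper's implementation: all parameters (field size, kernel, resting level, stimulus) are assumptions. A localized sensory cue drives the field to form a self-stabilized activation peak at the cue position.

```python
import numpy as np

# Hedged sketch of a 1-D Amari-type dynamic neural field (illustrative
# parameters, not the paper's):
#   tau * du/dt = -u + h + sum_j w(i - j) f(u_j) + S_i
n = 101
x = np.arange(n)
h = -2.0                                   # resting level
dt, tau = 0.1, 1.0

def kernel(d, a_exc=2.0, s_exc=3.0, g_inh=0.5):
    """Local excitation with global inhibition (Mexican-hat-like)."""
    return a_exc * np.exp(-d ** 2 / (2 * s_exc ** 2)) - g_inh

W = kernel(x[:, None] - x[None, :])              # interaction matrix w(i - j)
f = lambda u: 1.0 / (1.0 + np.exp(-4.0 * u))     # sigmoidal firing rate

# A localized sensory cue at position 50.
stimulus = 4.0 * np.exp(-(x - 50) ** 2 / (2 * 2.0 ** 2))
u = np.full(n, h)
for _ in range(300):                             # Euler integration
    u += dt / tau * (-u + h + W @ f(u) / n + stimulus)

print(int(np.argmax(u)))   # position of the self-stabilized peak: 50
```

Delaying or desynchronizing the stimuli feeding such a field is one way to probe, in simulation, how temporal aspects of integration affect where (and how precisely) the peak forms.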

  9. Maya Dimitrova, Emilia Barakova, Tino Lourens, and Petia Radeva. The web as an autobiographical agent [597 KB pdf]. In C. Bussler and D. Fensel, editors, Artificial Intelligence: Methodology, Systems, and Applications, 11th International Conference, AIMSA 2004, volume 3192 of Lecture Notes in Artificial Intelligence, pages 510-519. Springer-Verlag, 2004.


    The reward-based autobiographic memory approach has been applied to a Web search agent. The approach is based on an analogy between exploring the Web and a robot's exploration of its environment, and has branched off from a currently developed method for autonomous agent learning of novel environments and consolidation of the learned information for efficient further use.

  10. E. I. Barakova and T. Lourens. Prediction of Rapidly Changing Environmental Dynamics for Real Time Behavior Adaptation using Visual Information [134 KB pdf]. In R. P. Wurtz and M. Lappe, editors, 4th Workshop on Dynamic Perception, pages 147-152, Bochum, Germany, November 2002. IOS Press.


    This paper features a method for acting in a real-world environment with rapid dynamics, based on behaviors of different complexity that emerge from: (1) direct sensing, (2) on-line prediction of the future development of the environmental dynamics, and (3) internal restrictions derived from the robot's strategy. The method is built upon an understanding of perception as the dynamic integration of sensing, expectations, and behavioral goals, which is necessary when the environmental dynamics also depends on other intelligent agents. Two central aspects are considered: prediction of the development of the environmental dynamics and the subsequent integration. The integration captures the dynamics of processes that happen over different temporal intervals but relate to the same perception. The real-time object tracking method used is briefly described. Experiments made in a RoboCup environment with physical robots illustrate the plausibility of the method.
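
The simplest form of the on-line prediction ingredient is constant-velocity extrapolation of a tracked object. This is an illustration, not the paper's tracker: the function and the toy ball track are assumptions.

```python
# Hedged sketch (illustrative, not the paper's method): constant-velocity
# extrapolation of a tracked object's position, the simplest on-line
# prediction of environmental dynamics for, e.g., a RoboCup ball.

def predict(track, steps_ahead=1):
    """Extrapolate a future position from the last two observations,
    assuming constant velocity between frames."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0
    return (x1 + steps_ahead * vx, y1 + steps_ahead * vy)

# Ball positions observed at consecutive frames:
track = [(0.0, 0.0), (1.0, 0.5), (2.0, 1.0)]
print(predict(track))                 # (3.0, 1.5)
print(predict(track, steps_ahead=3))  # (5.0, 2.5)
```

In the spirit of the abstract, a real agent would integrate this prediction with direct sensing and its own strategy rather than act on the extrapolation alone.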

  11. E. I. Barakova. An Integration Principle for Multimodal Sensor Data Based on Temporal Coherence of Self-Organized Patterns [104 KB pdf]. In J. Mira, R. Moreno-Diaz, and J. Cabestany, editors, Biological and Artificial Computation: Methodologies, Neural Modeling and Bioengineering Applications, volume 2085 of Lecture Notes in Computer Science, part II, pages 55-63, 2001. Springer-Verlag.


    The world around us continuously offers huge amounts of information, from which living organisms can elicit the knowledge and understanding they need for survival or well-being. A fundamental cognitive feature that makes this possible is the ability of a brain to integrate the inputs it receives from different sensory modalities into a coherent description of its surrounding environment. By analogy, artificial autonomous systems are designed to continuously record large amounts of data with various sensors. A major design problem for the latter is the lack of a reference for how the information from the different sensor streams can be integrated into a consistent description. This paper focuses on the development of a synergistic integration principle, supported by synchronization of the multimodal information streams on a temporal coherence principle. The processing of the individual information streams is done by a self-organizing neural algorithm known as the neural gas algorithm. The integration itself uses a supervised learning method to allow the various information streams to interchange their knowledge as emergent experts. Two complementary data streams, recorded by an autonomous robot exploring unprepared environments, are used to simultaneously illustrate and motivate in a concrete sense the developed integration approach.
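
The neural gas algorithm named here (Martinetz and Schulten's rank-based vector quantizer) can be sketched compactly. This is an illustration of the general algorithm, not the paper's configuration: the codebook size, learning rate, neighborhood range, and the toy two-cluster data are all assumptions.

```python
import numpy as np

# Hedged sketch of the neural gas algorithm (illustrative parameters,
# not the paper's). Every input moves ALL codebook vectors, each
# weighted by an exponentially decaying function of its distance rank.
rng = np.random.default_rng(1)

def neural_gas(data, n_units=4, n_epochs=20, eps=0.3, lam=1.0):
    # Initialize the codebook from randomly chosen data points.
    codebook = rng.choice(data, size=n_units, replace=False).astype(float)
    decay = np.exp(-np.arange(n_units) / lam)   # rank-based neighborhood
    for _ in range(n_epochs):
        for v in rng.permutation(data):
            # Rank units by distance to the input, closest first.
            order = np.argsort(np.linalg.norm(codebook - v, axis=1))
            for rank, i in enumerate(order):
                codebook[i] += eps * decay[rank] * (v - codebook[i])
    return codebook

# Two well-separated 2-D clusters standing in for two sensor regimes;
# the learned codebook should end up covering both.
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
cb = neural_gas(data)
print(np.round(cb, 2))
```

In the paper's setting one such quantizer would compress each modality's stream; the supervised integration stage then operates on the quantized patterns. Practical implementations usually also anneal `eps` and `lam` over time, which is omitted here for brevity.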

  12. E. I. Barakova and U. R. Zimmer. Dynamical Situation and Trajectory Discrimination by Means of Clustering and Summation of Raw Range Measurements [847 KB pdf]. In International Conference on Advances in Intelligent Systems: Theory and Application - AISTA 2000, pages 1-6, Canberra, Australia, 2000.


    This article focuses on the problem of identifying and discriminating situations and trajectories (as sequences of situations) in an autonomous mobile robot setup. The static identification level of situations as well as the dynamical level of trajectories are based on egocentric measurements only. Adaptation to a specific operating environment is performed in an exploration phase and continuously during operation. Descriptions and classifications are based on statistical entities of the operating environment (in the geometrical space and in the space of dynamics). The recognition is performed in the sense of emitting the same signals in similar situations or on similar trajectories. Neither a global position nor any other global geometrical description is created or employed by this approach.
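
The idea of recognizing situations from statistical entities of raw, egocentric range measurements can be sketched with a histogram signature. This is an illustration, not the paper's method: reducing a range scan to a normalized histogram is one simple statistic that is independent of any global position or heading, as the abstract requires; the bin count and toy scans are assumptions.

```python
# Hedged sketch (illustrative, not the paper's method): a situation
# signature built from raw range readings only. Each 360-degree scan is
# reduced to a histogram of ranges, so similar situations emit similar
# signatures regardless of the robot's global position or heading.

def signature(scan, n_bins=5, max_range=10.0):
    """Normalized histogram of range readings: egocentric and
    independent of any global coordinate frame."""
    counts = [0] * n_bins
    for r in scan:
        counts[min(int(r / max_range * n_bins), n_bins - 1)] += 1
    return [c / len(scan) for c in counts]

def similarity(sig_a, sig_b):
    """Histogram intersection in [0, 1]; 1 means identical signatures."""
    return sum(min(a, b) for a, b in zip(sig_a, sig_b))

corridor = [1.0, 9.0, 1.0, 9.0] * 90          # close walls, open ends
corridor_rotated = corridor[90:] + corridor[:90]  # same place, new heading
open_room = [8.0, 9.0, 8.5, 9.0] * 90

print(similarity(signature(corridor), signature(corridor_rotated)))  # 1.0
print(similarity(signature(corridor), signature(open_room)))         # 0.5
```

Trajectories, as sequences of situations, would then be compared as sequences of such signatures; clustering the signatures yields the "same signal in similar situations" behaviour the abstract describes.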

  13. K. Nakadai, T. Lourens, H. G. Okuno, and H. Kitano. Active audition for humanoid [236 KB pdf]. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, AAAI 2000, pages 832-839. AAAI Press / MIT Press, July-August 2000.


    In this paper, we present an active audition system for the humanoid robot “SIG the humanoid”. The audition system of a highly intelligent humanoid requires localization of sound sources and identification of the meaning of sounds in the auditory scene. The active audition reported in this paper focuses on improved sound source tracking by integrating audition, vision, and motor movements. Given multiple sound sources in the auditory scene, SIG actively moves its head to improve localization by aligning its microphones orthogonally to the sound source and by capturing the possible sound sources by vision. However, such active head movement inevitably creates motor noise, which the system must adaptively cancel using motor control signals. The experimental results demonstrate that active audition by integration of audition, vision, and motor control enables sound source tracking in a variety of conditions.
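
The auditory-localization ingredient can be sketched with a classic interaural time difference (ITD) estimate. This is an illustration, not SIG's system: the cross-correlation lag between two microphones gives the delay, which converts to a bearing; turning the head to drive the ITD to zero aligns the microphone pair orthogonally to the source. The sample rate, microphone spacing, and synthetic noise source are assumptions.

```python
import numpy as np

# Hedged sketch (illustrative, not SIG's implementation): bearing of a
# sound source from the interaural time difference between two
# microphones, estimated via cross-correlation.

fs = 16000                 # sample rate (Hz)
mic_distance = 0.2         # microphone separation (m)
speed_of_sound = 343.0     # m/s

def estimate_itd(left, right):
    """Lag (in samples) of the cross-correlation peak: positive means
    the left channel lags behind the right channel."""
    corr = np.correlate(left, right, mode="full")
    return int(np.argmax(corr)) - (len(right) - 1)

def bearing_from_itd(itd_samples):
    """Convert an ITD to a source bearing in radians; 0 is straight
    ahead, positive is toward the right microphone."""
    delay = itd_samples / fs
    return float(np.arcsin(np.clip(delay * speed_of_sound / mic_distance,
                                   -1.0, 1.0)))

# Simulate a source off to the right: the left channel lags by 4 samples.
rng = np.random.default_rng(0)
source = rng.normal(size=2000)
lag = 4
left, right = source[:-lag], source[lag:]

itd = estimate_itd(left, right)
print(itd, round(np.degrees(bearing_from_itd(itd)), 1))
```

A control loop would then rotate the head by the estimated bearing so the residual ITD approaches zero, after which the vision system can confirm the source, as the abstract's audio-visual integration suggests.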
