Posted on 2025-05-09, 09:51. Authored by Aaron Seng Wai Wong.
Companion robots are popular entities in science fiction; in the real world, however, no robot has yet been programmed well enough to act as a useful companion. Many issues remain to be resolved before this can be achieved, from basic skills such as walking, localisation and object recognition to more difficult functions such as language, memory and human-robot interaction. Human emotions are shaped by cultural, genetic and environmental factors; taking a broad view of context in human-robot interaction, the surrounding environment affects the way humans emotionally interact with each other. The focus of this thesis is the extraction of environmental features (acoustic and visual) for use in human-robot interaction through an affective processing system on the robot. Manifold learning was applied to the acoustic and visual domains, alongside the design of an affective processing architecture.

Manifold learning and alignment were applied to acoustic datasets to obtain `soundscapes'. These soundscape manifolds made it possible to visualise angular and intensity relationships and, in combination with standard classification methods, to achieve high classification rates on a sound-source classification task. Soundscapes were then applied to localisation on a humanoid robot.

The gaze direction of a pedestrian can prompt an observer to re-focus their attention on another point. The visual perception of pedestrian gaze was explored using humanoid robots and manifold learning, which revealed an underlying low-dimensional structure: a torus. Robots are ideal for this research because they provide `actual' pedestrian data (parameter readings taken directly from their control systems) that can be compared with data collected through video recordings of their actions.
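The pipeline described above (a manifold-learning embedding followed by a standard classifier) can be illustrated with a minimal sketch. The data here are synthetic (scikit-learn's S-curve, a 2-D manifold embedded in 3-D), standing in for acoustic feature vectors; Isomap and k-nearest-neighbours are assumptions chosen for illustration, not necessarily the specific methods used in the thesis:

```python
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for acoustic feature vectors: points on an S-curve
# (a 2-D manifold embedded in 3-D), labelled by position along the curve.
X, t = make_s_curve(n_samples=1000, random_state=0)
y = (t > t.mean()).astype(int)  # illustrative binary "sound source" labels

# Unroll the manifold into 2-D -- the analogue of a 'soundscape' embedding.
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

# A standard classifier operating on the low-dimensional coordinates.
X_tr, X_te, y_tr, y_te = train_test_split(embedding, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"accuracy: {clf.score(X_te, y_te):.2f}")
```

Because the labels vary smoothly along the manifold, a simple classifier on the embedded coordinates separates them well, which is the intuition behind combining soundscape manifolds with standard classification methods.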
This allowed the global structure of the set of real head/gaze images to be strengthened through alignment with an artificially generated torus-shaped head-rotation dataset.

As a step towards bridging the discomfort gap that must be overcome before robots can be integrated into everyday society, an affective processing system was implemented on a humanoid robot, using visual features from the environment in an on-line learning process that lets the robot adapt closely to the user's unique emotional state. This environmental emotion detection system allows the robot to predict the user's feelings, helping to increase the user's comfort when interacting with the robot in various situations. Future work includes extending the environmental emotion detection system, with comparisons and simulations against other learning techniques such as the alignment of manifolds of different features.

Gestalt psychology holds that the brain is holistic, parallel and analogue, with self-organising tendencies. Machine learning techniques, specifically manifold learning and alignment, parallel these fundamental properties of the brain. It is important for robots to learn and adapt in a similar fashion, so that one day they can achieve cognition in the same manner as human beings.
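The artificially generated torus-shaped head-rotation dataset mentioned above can be sketched as follows. Because head pan (yaw) and gaze tilt (pitch) are each periodic angles, their joint configuration space is a torus; the parameterisation, radii and angle grid below are illustrative assumptions, not the thesis's actual values:

```python
import numpy as np

def torus_point(pan, tilt, R=2.0, r=1.0):
    """Embed a (pan, tilt) angle pair on a torus in 3-D (major radius R, minor radius r)."""
    x = (R + r * np.cos(tilt)) * np.cos(pan)
    y = (R + r * np.cos(tilt)) * np.sin(pan)
    z = r * np.sin(tilt)
    return np.array([x, y, z])

# Sample a grid of head/gaze configurations: an artificial, noise-free dataset
# that a noisier set of real head-pose images could be aligned against.
pans = np.linspace(0, 2 * np.pi, 36, endpoint=False)
tilts = np.linspace(0, 2 * np.pi, 18, endpoint=False)
torus = np.array([torus_point(p, t) for p in pans for t in tilts])
print(torus.shape)  # (648, 3)
```

Every generated point lies exactly on the torus surface, which is what makes such a dataset useful as a clean geometric scaffold for manifold alignment with real, noisy observations.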
History
Year awarded
2014
Thesis category
Doctoral Degree
Degree
Doctor of Philosophy (PhD)
Supervisors
Chalup, Stephan (University of Newcastle); Mahata, Kaushik (University of Newcastle)
Language
English
College/Research Centre
Faculty of Engineering and Built Environment
School
School of Electrical Engineering and Computer Science