JAMES
REPORTS
This work is concerned with the development of visual processing components that reliably track the location, gestures, and facial displays of multiple people in a constantly changing scene. In the bartending domain, such information is necessary to identify users interacting (or not interacting) with the JAMES robot, along with related objects in the scene, so that the robot can react and respond appropriately.
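Tracking multiple people over time requires associating each frame's detections with existing tracks. As a loose illustration of this step, the following is a minimal greedy nearest-neighbour association sketch; the function, its thresholds, and its data layout are illustrative assumptions, not the project's tracker.

```python
import math

def associate(tracks, detections, max_dist=0.5):
    """Greedily match new detections to existing tracks by distance.
    Illustrative sketch only, not the JAMES tracking implementation.

    tracks     -- dict mapping track id -> (x, y) last known position
    detections -- list of (x, y) positions from the current frame
    Returns (matches, unmatched): matches maps track id -> detection
    index; unmatched lists detection indices that would start new tracks.
    """
    pairs = []
    for tid, (tx, ty) in tracks.items():
        for di, (dx, dy) in enumerate(detections):
            d = math.hypot(tx - dx, ty - dy)
            if d <= max_dist:  # gate out implausibly distant pairings
                pairs.append((d, tid, di))
    pairs.sort()  # consider closest pairs first
    matches, used_t, used_d = {}, set(), set()
    for d, tid, di in pairs:
        if tid not in used_t and di not in used_d:
            matches[tid] = di
            used_t.add(tid)
            used_d.add(di)
    unmatched = [di for di in range(len(detections)) if di not in used_d]
    return matches, unmatched
```

A real multi-person tracker would add motion prediction and track creation/deletion on top of an association step of roughly this shape.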
D1.1 - Visual Identification and Tracking of Humans
Maria Pateraki, Haris Baltzakis, Panos Trahanias
D1.2 - Recognition of Hand Gestures, Facial Expressions, and Conversational States
Maria Pateraki, Haris Baltzakis, Panos Trahanias
This work is concerned with developing components for recognising, understanding, and generating embodied natural language. In particular, the main objectives of this work include developing a speech-recognition system for use in the JAMES environment, developing a natural-language grammar that can be used for both input understanding and output generation, and implementing a multimodal presentation planner capable of controlling the robot hardware.
This work focuses on the mid-level input processing components of the system, multimodal fusion and social state estimation, and the core research challenge in this area: the automatic detection of social signals based on low-level sensor data. The resulting system classifies both task-based and social intentions, and also identifies instances of non-communicative actions.
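One simple way to picture the fusion step is as combining several low-level cues into a score for a coarse social intention. The cue names, weights, and threshold below are illustrative placeholders, not the classifier learned in the project.

```python
def fuse_cues(cues, weights, threshold=0.5):
    """Linear fusion of low-level sensor cues into a binary social
    intention label. All names and numbers here are illustrative.

    cues    -- dict of cue name -> value in [0, 1]
    weights -- dict of cue name -> importance weight
    """
    # Weighted sum of whichever cues are currently observed
    score = sum(weights[name] * cues.get(name, 0.0) for name in weights)
    label = "seeking-attention" if score >= threshold else "not-seeking"
    return label, score
```

For example, a user standing close to the bar and facing the robot but not speaking would score highly on such a rule; the project's actual system learns this mapping from annotated data rather than hand-setting weights.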
D3.1 - Multimodal Fusion and Basic Social State Estimation
Mary Ellen Foster, Zhuoran Wang, Oliver Lemon
D3.2 - Multimodal Social State Processing
Mary Ellen Foster, Simon Keizer, Oliver Lemon, Zhuoran Wang
This work addresses the problem of high-level planning and reasoning, which are essential for an intelligent agent acting in a dynamic and incompletely known world: achieving goals under such conditions often requires complex forward deliberation that cannot be achieved by simply reacting to a situation without considering the long-term consequences of a course of action. Action selection is carried out by a knowledge-level planner which reasons about the agent's knowledge and how that knowledge changes as a result of action (physical robot actions or linguistic speech acts).
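Very loosely, knowledge-level planning of this kind can be pictured as forward search over sets of known facts, where speech acts such as asking for an order add to the agent's knowledge. The tiny domain below is an illustrative placeholder, not the project's actual planning formalism.

```python
from collections import deque

# Toy bartending-style actions: each maps to (knowledge preconditions,
# knowledge effects). Action and fact names are illustrative only.
ACTIONS = {
    "greet":       (set(),                     {"greeted"}),
    "ask-order":   ({"greeted"},               {"know-order"}),
    "serve-drink": ({"greeted", "know-order"}, {"served"}),
}

def plan(initial, goal):
    """Breadth-first forward search over knowledge states: return the
    shortest action sequence that makes all goal facts known."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:
            return steps
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state:  # action applicable in this state
                nxt = frozenset(state | eff)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None  # no plan achieves the goal
```

Note how the "ask-order" action has a purely epistemic effect: it changes what the agent knows rather than the physical world, which is the distinctive feature of knowledge-level planning.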
D4.1 - Specification of High-Level Representations
Ron Petrick, Mary Ellen Foster
D4.2 - Initial Extensions for Knowledge-Level Planning and Heuristic Search
Ron Petrick
D4.3 - Knowledge-Level Planning and Reasoning in Social State Spaces
Ron Petrick
This work is focused on applying machine learning to the task of selecting appropriate social behaviour for the robot, by building on techniques that have been applied successfully to spoken dialogue systems and adapting them to this new context.
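The dialogue-system techniques referred to here are typically reinforcement-learning methods; as a generic sketch of the kind of update involved, the following shows one tabular Q-learning step with a placeholder action set. None of the names or parameter values come from the project itself.

```python
import random

ACTIONS = ["greet", "ask-order", "wait"]  # placeholder action set

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update: move Q(s, a) toward the observed
    reward plus the discounted value of the best follow-up action."""
    best_next = max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    old = Q.get((s, a), 0.0)
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)

def choose(Q, s, epsilon=0.1):
    """Epsilon-greedy action selection: mostly exploit the current
    value estimates, occasionally explore a random action."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda b: Q.get((s, b), 0.0))
```

In a simulation environment like the one delivered here, the learner would repeatedly observe a social state, choose an action, receive a reward reflecting task success and social appropriateness, and apply an update of this form.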
D5.1 - Initial Social Skills Learning Component and Simulation Environment
Simon Keizer, Oliver Lemon
This work is centred on the physical robot platform, and addresses problems related to interaction and communication with humans in a socially appropriate manner. A crucial issue in this area is the concept of embodiment: the idea that a robot necessarily exists in the physical world and is able to carry out physical tasks on its own or in collaboration with human partners. This work addresses a number of problems in human-robot interaction, with particular emphasis on the interaction context, the number of interaction partners, and the range of social behaviours supported.
D6.1 - Initial Robotics Components and Simulation Environment
Manuel Giuliani, Andre Gaschler, Markus Rickert
D6.2 - Embodiment for Social Interaction
Manuel Giuliani, Andre Gaschler, Sören Jentzsch
This work focuses on the coordination of project-wide integration activities for implementation on the JAMES robot platform, and for carrying out system evaluations. These activities include: building a technical infrastructure that allows all components to communicate, coordinating the development and integration of the overall system, providing technical support for data-collection studies, supporting interim formative evaluations of the system components and the overall demonstrator system, and carrying out full user evaluations of the final implemented human-robot system.
D7.1 - First Integrated System: Prototype and Evaluation
Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Maria Pateraki, Ron Petrick
D7.2 - Second Integrated System: Prototype and Evaluation
Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Simon Keizer, Maria Pateraki, Ron Petrick, Markos Sigalas
D7.3 - Final Integrated System: Prototype and Evaluation
Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Simon Keizer, Maria Pateraki, Ron Petrick, Markos Sigalas, Zhuoran Wang
D7.4 - Extended System Evaluation: Uncertain Conditions
Mary Ellen Foster, Andre Gaschler, Manuel Giuliani, Amy Isard, Simon Keizer, Maria Pateraki, Ron Petrick, Markos Sigalas, Zhuoran Wang
This work focuses on the collection and analysis of high-quality, clearly annotated, natural, multimodal data to train the project's learning models and inform the implementation of the embodied robot system. Data is gathered using a novel Ghost-in-the-Machine data-collection paradigm, in which a participant plays the role of the artificial agent, making use of only the input and output channels that are supported in the system.
D8.2 - Intention-Recognition Study
Sebastian Loth, Kerstin Huth, Jan de Ruiter
D8.3 - Ghost-in-the-Machine Study
Sebastian Loth, Kerstin Huth, Jan de Ruiter
D8.4 - Ghost-in-the-Machine Study
Sebastian Loth, Katharina Jettka, Jan de Ruiter, Manuel Giuliani