[Image: HUMANOBS autonomous TV show host]

HUMANOBS: About

Our goal is to develop new cognitive architectural principles to allow intelligent agents to learn socio-communicative skills by observing and imitating people.

We address skill development as a fundamental architectural epiphenomenon: learning happens through unique architectural constructs specified by developers, coupled with the architecture's ability to automatically reconfigure itself to accommodate new skills acquired through observation. While we initially target socio-communicative skills, we intend to develop our architecture and imitation-learning process in a generic way, so that the principles developed can be applied to other equally complex tasks. Within the framework of this project the learning will be supervised; our long-term goal, however, is to give humanoid agents and robots full autonomy in learning such multimodal skills in dynamic social situations.

As an appropriate and challenging demonstration, we will have the resulting system control a virtual humanoid television host, capable of conducting interviews with users and hosting a 30-minute TV program. By interacting with humans through advanced multimodal avatars, the TV host will acquire increasingly complex socio-communicative skills. By the end of the project the host will have approximated the socio-communicative skills of an average human television show host.

We have three main objectives, all of which derive directly from the above: to build an auto-reconfiguring architecture, behavior observation mechanisms, and behavior generation and coordination mechanisms.

Auto-Reconfiguring Architecture
The ability to learn complex new skills on top of old ones imposes new requirements on the underlying architecture: it must be able to reorganize its internal agency while remaining robust and resilient. This constitutes our first scientific objective:

A. To build a framework for developing an auto-reconfiguring architecture.
As the architecture grows in complexity on its path towards human-like capabilities, its sheer size begins to pose a significant engineering challenge. Scale therefore factors into the overall engineering approach we must take. We target large-scale, modular architectures able to reassemble their components in light of new operational conditions – that is, to compute new assembly schemes and system configurations given (1) the specification of the available modules and (2) the specification of the new skills and behaviors.

Achieving this objective will result in the implementation of a highly reconfigurable architecture, and this builds on three sub-objectives:

  • A1 To design a model of component assembly that is independent of the application domain. The model will define procedures for choosing and assembling components into a configuration matching a given specification. This will result in a generic architectural blueprint, where reconfigurability follows from a clean separation of abstract specifications from concrete implementations.
  • A2 To design a model for the resilience of system configurations. The model needs to work in such a way that acquiring new skills does not compromise the abilities of the system, which is a prerequisite for incremental development.
  • A3 To design a model for configuration robustness in light of architecture expansion. Acquired skills need to be maintained when new components are integrated in the architecture to expand the stock from which skills can be assembled.
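The assembly idea behind A1 can be sketched in code. The following is a minimal, purely illustrative Python sketch (all names – `Component`, `assemble`, the example capabilities – are hypothetical, not project code): given specifications of available modules and a target skill specification, it computes a configuration by covering each required capability and, recursively, each chosen component's own requirements.

```python
# Hypothetical sketch of domain-independent component assembly (A1).
# Component names and capabilities are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class Component:
    name: str
    provides: frozenset  # abstract capabilities this component implements
    requires: frozenset  # capabilities it depends on


def assemble(target, available):
    """Pick components until the target capabilities are covered,
    recursively covering each chosen component's own requirements.
    Returns the chosen components, or None if no assembly exists."""
    chosen, needed = [], set(target)
    while needed:
        cap = needed.pop()
        match = next((c for c in available if cap in c.provides), None)
        if match is None:
            return None  # no available module implements this capability
        if match not in chosen:
            chosen.append(match)
            provided = {p for c in chosen for p in c.provides}
            needed |= match.requires - provided
    return chosen


modules = [
    Component("gaze-tracker", frozenset({"gaze"}), frozenset()),
    Component("turn-taker", frozenset({"turn-taking"}), frozenset({"gaze"})),
]
config = assemble({"turn-taking"}, modules)
print([c.name for c in config])  # → ['turn-taker', 'gaze-tracker']
```

The key property the sketch illustrates is the separation A1 calls for: the target is stated as abstract capabilities, and the concrete components satisfying it are chosen at assembly time.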

Observation Mechanisms
Many human skills, including socio-communicative ones, are highly complex, with many degrees of freedom. Imitation learning is potentially a highly efficient way to acquire such skills, since manually providing target specifications for behaviors in all circumstances would be extremely tedious, if not impossible. If the system is to imitate, it must therefore eventually compute such target specifications automatically.

In our approach, imitation learning is a two-step process: (1) observing a behavior and extracting a specification for it, and (2) implementing the behavior according to this specification and trying it out, or storing it for future use. Accordingly, our second objective is:

B. To build a system that can auto-generate specifications for skills and behaviors based on their observation. This is broken down into three sub-objectives:

  • B1 To define models of observation for the system. The system will be observing one or more humans – it needs mechanisms for identifying and classifying multimodal movement and communicative events. Identification and classification modules will be developed to encompass increasing levels of acuity, ultimately targeting the inference of the intention behind a behavior rather than the behavior itself. They will also be built to be reusable and generic rather than application-dependent.
  • B2 To develop a formalization engine able to generate architectural specifications from the observed behaviors. The observed behaviors will be formalized in terms of the capabilities of the system, i.e. the existing components and configurations, whether hand-crafted or resulting from previous learning.
  • B3 To develop supervision procedures to guide the system’s observation. We will use supervision to help the observing system separate what matters from what does not. Modules will be developed to direct the attention of the system at specified behaviors and salient events produced by humans during the learning sessions.
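The first step of the two-step process above – extracting a specification from observed events – can be sketched as follows. This is a deliberately simplified illustration (the event format, the `salient` flag standing in for supervision, and all names are assumptions, not project code): supervision marks which observed events matter, and the specification is the ordered reduction of the salient ones.

```python
# Illustrative sketch of observation -> specification (B1-B3).
# The event schema and the 'salient' supervision flag are assumptions.
def extract_specification(events):
    """Reduce a stream of observed multimodal events to an ordered
    behavior specification, keeping only supervised-salient events."""
    salient = [e for e in events if e.get("salient")]
    return [(e["modality"], e["action"]) for e in salient]


observed = [
    {"modality": "gaze", "action": "look-at-guest", "salient": True},
    {"modality": "posture", "action": "shift-weight", "salient": False},
    {"modality": "speech", "action": "ask-question", "salient": True},
]
spec = extract_specification(observed)
print(spec)  # → [('gaze', 'look-at-guest'), ('speech', 'ask-question')]
```

In the real system the salience judgment would come from the supervision modules of B3 rather than a pre-set flag; the sketch only shows where that judgment plugs into the pipeline.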

Behavior Generation and Coordination Mechanisms
Lastly, generating implementations for the observed skills is the second half of imitation learning, and this is our third objective:

C. To build behavior generation and coordination mechanisms for the reproduction and reuse of observed skills. From the specifications of target skills and behaviors we will develop processes that can build, integrate and test new system configurations as implementations of the desired skills. The testing is done by actually trying them out and evaluating the result. This leads to three sub-objectives:

  • C1 To build an assembly engine capable of generating new configurations as implementations of the target skills from specifications. The assembly engine will be patterned after the model designed in objective A1.
  • C2 To build an integration sub-system to coordinate and augment existing behaviors with the learned skills. This sub-system addresses the problem of coordinating newly acquired skills with existing ones for a given purpose. It will build models of the skills to motivate and perform the re-assembly of impacted behaviors. This sub-system will also make use of the model defined in A1, on a potentially larger scale, when multiple behaviors need re-assembly.
  • C3 To build an embedded testing sub-architecture to allow the agent to verify the correctness of its implementations. The system needs to validate its newly acquired skills in situations different from those presented during the learning sessions. The system will build and maintain a model of the reactions of the human it interacts with, and will test its newly implemented skills against this model.
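The C3 idea of testing against a model of human reactions, rather than against a live interlocutor, can be sketched as follows. Everything here is a hypothetical stand-in (the reaction table, the `"confusion"` failure signal, the function names): a newly assembled skill is run against the learned reaction model, and it passes only if no action elicits a reaction the model classifies as a failure.

```python
# Hypothetical sketch of C3-style validation: test a newly assembled
# skill against a learned model of human reactions. The reaction table
# and the 'confusion' failure signal are illustrative assumptions.
def human_reaction_model(action):
    """Stand-in for a learned model predicting a human's response."""
    expected = {"greet": "greet-back", "ask-question": "answer"}
    return expected.get(action, "confusion")


def validate_skill(skill_actions, trials=3):
    """A skill passes if no trial elicits a 'confusion' reaction."""
    for _ in range(trials):
        for action in skill_actions:
            if human_reaction_model(action) == "confusion":
                return False
    return True


print(validate_skill(["greet", "ask-question"]))  # → True
print(validate_skill(["greet", "interrupt"]))     # → False
```

In the project's setting the reaction model would itself be learned and updated from interaction, so validation can cover situations beyond those seen during the learning sessions, as C3 requires.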





public/about.txt · Last modified: 2011/04/26 21:35 by thorisson