Challenges 4: Team Conclusions & Solutions

Author: Pei Wang


1. What is the proper format for a reasoning system and a robot to exchange information?
2. What types of inference are needed for a reasoning system to control a robot?

Team A

Mark Wernsdorfer, U. Bamberg, Germany
Gudny R. Jonsdottir, IIIM, Iceland
Hamid Pourvatan, IIIM, Iceland
John-Jules Meyer, U. of Utrecht, the Netherlands

Challenge 1:
reasoning system can serve multiple robots
but reasoning system cannot act
problem might be that everything is reasoning, nothing reactive
a reactive part could preprocess information and thereby reduce information exchange and processing time

Challenge 2:
first-order logic is undecidable; domain knowledge could be regarded as weak, local axioms
language can be expressed by A-box and T-box (relations are T-box, facts are A-box)

uncertainty in inference represented by frequency and confidence (c = w / (1 + w))
what is w?

how is knowledge propagated?

no problem with negative evidence, following open world assumption
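The open questions above have standard answers in NARS: w is the total amount of evidence for a statement (w+ being its positive part), frequency is f = w+ / w, and confidence is c = w / (w + k) with evidential horizon k = 1, which gives the c = w / (1 + w) noted above. Knowledge is propagated by the revision rule, which pools evidence from independent sources. A minimal Python sketch, assuming these standard NARS definitions:

```python
# NARS-style truth values: w+ = positive evidence, w = total evidence.
# Frequency f = w+ / w; confidence c = w / (w + k), with k = 1 by
# convention, which reduces to the c = w / (1 + w) noted above.

def truth_value(w_plus, w, k=1.0):
    """Return (frequency, confidence) from evidence counts."""
    if w <= 0:
        raise ValueError("need at least some evidence")
    return w_plus / w, w / (w + k)

def revise(t1, t2, k=1.0):
    """Revision rule: pool evidence from two independent sources."""
    (f1, c1), (f2, c2) = t1, t2
    w1 = k * c1 / (1.0 - c1)        # recover evidence amount from confidence
    w2 = k * c2 / (1.0 - c2)
    w = w1 + w2
    return (w1 * f1 + w2 * f2) / w, w / (w + k)

f, c = truth_value(3, 4)   # 3 positive cases out of 4: f = 0.75, c = 0.8
f2, c2 = revise((0.75, 0.8), (0.75, 0.8))   # pooling evidence raises confidence
```

Negative evidence is simply w − w+, so under the open-world assumption absent evidence only lowers confidence; it never asserts falsity.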

Team B

Team Members:
Eric Baum, USA
Helgi Páll Helgason, Reykjavik U.
Pei Wang, Temple U., USA
Ricardo Sanz, U. Madrid

Team Conclusions:

Actions are not traditionally constructs of logic languages; they usually cannot be executed to a degree (binary nature). Uncertainty regarding the effects of actions needs to be taken into account. Many actions will have associated preconditions. Reliability of preconditions (frequency/confidence) should affect beliefs about action outcomes. Confidence in the outcome of an action should also affect the selection of that action. If an action has ever been observed producing outcomes that are bad in the current or similar contexts, this should also affect its selection.

Operations are executable statements. But executable by whom?

NARS as a general purpose controller.
Format proposed by Pei:

  • operator(input/output arguments)
  • outputs are immediate feedback
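One way the proposed format might look in code; the class and field names below are illustrative assumptions, not part of the proposal:

```python
from dataclasses import dataclass

@dataclass
class OperationCall:
    """One executable statement sent from the reasoning system to a device.

    Sketch of the `operator(input/output arguments)` format; field names
    are assumptions for illustration.
    """
    operator: str        # e.g. "move_to"
    arguments: tuple     # input/output arguments

@dataclass
class Feedback:
    """Immediate feedback returned by the executing device, expressed as a
    belief with a NARS-style (frequency, confidence) truth value."""
    call: OperationCall
    succeeded: bool
    frequency: float = 1.0
    confidence: float = 0.9   # device reports are themselves uncertain

# Example exchange: the reasoning system issues a call, the robot answers.
call = OperationCall("move_to", (1.0, 2.0))
fb = Feedback(call, succeeded=True)
```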

Actions need to be translated for the executing device. The language of the reasoning system needs to be general enough and tolerate uncertainty. Sensors generate beliefs, which may have varying degrees of confidence; these are translated from the device to the reasoning system.

Inferential power:
An action (executed to achieve a goal) can produce outcomes that make other currently active goals impossible to achieve.

  • Temporal reasoning is required, sequence of events can be critical.
  • Actions cannot be executed to a degree.
  • Execution: resources, confidence in preconditions, confidence in outcomes, effect on other goals.
  • May need a theory of the mind/internals of the devices that execute the actions.
  • Metrics for action efficiency are beneficial.
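The selection criteria above could be combined in a scoring function. The following sketch is a hypothetical illustration; the weighting formula and names are assumptions, not anything the team specified:

```python
# Hypothetical action-selection sketch combining the listed criteria:
# confidence in preconditions, confidence in outcomes, interference with
# other active goals, and resource cost. The scoring formula is an
# assumption made for illustration.

def score_action(precond_conf, outcome_conf, goal_interference, cost):
    """Higher is better; confidences and interference in [0, 1], cost >= 0."""
    return precond_conf * outcome_conf * (1.0 - goal_interference) - cost

def select_action(candidates):
    """candidates: (name, precond_conf, outcome_conf, interference, cost)."""
    return max(candidates, key=lambda a: score_action(*a[1:]))[0]

actions = [
    ("grasp",    0.90, 0.80, 0.1, 0.05),  # reliable, little interference
    ("shortcut", 0.95, 0.90, 0.9, 0.00),  # would block another active goal
]
best = select_action(actions)   # "grasp" wins despite lower raw confidence
```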

Team C

Team Members:
Hannes Högni Vilhjalmsson, Reykjavik U.
Deon Garrett, IIIM, Iceland
Yngvi Björnsson, Reykjavik U.

Team Conclusions:

Team D

Team Members:
James Bonaiuto, Cal Tech, USA
Marjan Sirjani, Reykjavik U.
Antonio Chella, U. Palermo, Italy
Hrafn Th. Thorisson, IIIM, Iceland
Haris Dindo, U. Palermo, Italy

Team Conclusions:

  • Original AI systems - robots controlled by logic systems
    • Shakey, STRIPS
  • 2 levels
    • logic formalism - generates knowledge from axioms
    • robot - lower level representation
  • how to combine these levels
    • goal - logic system with some sort of predicates strictly related with the robot (observe one thing, go here, etc)
    • give some sort of semantics to the higher level predicates
      • (i.e. external hook - logical predicate that says go, then lower-level code - when you say go do something)
    • another way - intermediate level - geometric level
  • maybe robot needs its own reasoning system
    • forward/inverse models
  • real-time operation is problematic
    • tight requirement for robot
    • timing constraints on actions, policies that are rules
  • if you remove learning component - what will happen?
    • is this the main feature of the system?
    • it is - without learning, you should have information about the environment in which the robot will operate
      • robot should be able to find its own goals
      • robot in new environment - will explore, should be able to learn environment and find what to do
    • different techniques - behavioral techniques, logic, reasoning, without learning
      • learning used to feedback knowledge of robot and increase it
  • learning should be most interesting point
    • robot should be able to learn its own environment and what to do
    • should be able to be useful - self-preservation, drive to remain alive
  • semantic grounding - motor babbling?
  • what is the right level for interface?
    • symbolic level should be very high level
    • robot needs its own intelligence - not just set of effectors and sensors
    • logical level should be a very small component
  • what types of inference systems are needed to control a robot
    • very high-level - not “control” a robot but guide a robot over the very long term
    • NARS is small component of system
      • reactive behavior, learning, most important part for robot
      • highest level of HMOSAIC?
      • logic should be way to summarize well-established (most common) experiences of robot
      • two options for interface
        • robot sends sensory data to NARS to summarize and symbolize
        • robot processes sensory data and sends lowest level symbols (terms) to NARS
  • what about plugging into different robots
    • never works
    • only if NARS is smallest component and robot communicates via symbols
    • boss - employee, controlling different robots
      • communicate in intermediate level - training virtual machine
  • complementary system
    • reasoning using images
    • iconic reasoning - missing in Pei's system
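The "external hook" idea above (a logical predicate such as `go` bound to lower-level robot code) can be sketched as a small registry; all names here are assumptions for illustration:

```python
# Sketch of the "external hook" binding: asserting a logical predicate
# at the high level triggers the corresponding lower-level routine.
# Predicate and function names are illustrative assumptions.

HOOKS = {}

def hook(predicate):
    """Register a lower-level routine as the grounding of a predicate."""
    def register(fn):
        HOOKS[predicate] = fn
        return fn
    return register

@hook("go")
def go(x, y):
    # lower-level motion code would live here
    return f"moving to ({x}, {y})"

def assert_predicate(predicate, *args):
    """When the logic level asserts `predicate(args)`, run its hook."""
    return HOOKS[predicate](*args)

result = assert_predicate("go", 1, 2)   # → "moving to (1, 2)"
```

An intermediate (e.g. geometric) level, as suggested above, would sit between the predicate and the motor routine rather than binding them directly.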

Team E

Team Members:
Anna Ingolfsdottir, Reykjavik U.
Bas Steunebrink, IDSIA, Switzerland
Kristinn R. Thórisson, Reykjavik U. / IIIM, Iceland
Eric Nivel, Reykjavik U.
Jörg Siekman, DFKI, Germany

Team Conclusions:

public/events/challenges4-team-results.txt · Last modified: 2011/10/03 13:29 by thorisson