Challenges 3: Team Conclusions & Solutions

Lecture: PREDICTABLE CONSTRUCTIVISM
Author: Ricardo Sanz

Challenges

1. How to be sure that a constructivist system will develop value for the owner.
2. How to be sure that a constructivist system will not become a spoiled child.

Team A

Team Members:
Anna Ingolfsdottir, Reykjavik U.
Mark Wernsdorfer, U. Bamberg, Germany
Deon Garrett, IIIM, Iceland
Pei Wang, Temple U., USA

Conclusions:

  • The dichotomy between constructivist and constructionist AI is not so clear-cut
    • Not a binary classification
    • Points on a continuum, and no one seriously advocates either extreme
    • As such, there will always be elements of top-down control to guide system evolution
  • Lots of other false dichotomies as well
    • Pure Good vs Pure Evil
    • “Don't Panic” – the system won't automatically try to go crazy, and there's plenty of time to figure out how we guide evolution as we learn how to build the systems.
  • Basic conclusions
    • Current systems pose no danger of going too far off the rails
    • Today's AGI systems aren't capable enough
    • “Value”-providing systems don't have enough freedom to deviate that far from desired goals
    • We may worry whether our systems are brittle, but we trust they will follow our goals. Another way of saying this: the second challenge was about building a better Go player, an excellent example of a “value-creating” system. No one believes that the way to do that now is to use OpenCog or NARS, nor did anyone express concern that one of their randomly produced Go agents would turn on its creator. The march toward AGI will likely be a gradual increase in capabilities, and each step will provide opportunities to consider how best to constrain the new capabilities to produce what we want.
  • In the future, we'll need to worry more about it, but
    • We'll have a lot more information about how to build such systems
    • It makes more sense to spend our efforts once we have better ideas

Team B

Team Members:
Eric Baum, USA
Antonio Chella, U. Palermo, Italy
Gudny R. Jonsdottir, IIIM, Iceland
Ricardo Sanz, U. Madrid

Team Conclusions: How to avoid the possibility that a self-programming, self-evolving agent could either evolve in a useless way on the one hand, or get the upper hand over people on the other. A third way is an artificial agent that lives together with, and at the same level as, human agents.

Maybe we could consider a new, complex society where people and agents live together, with new forms of law that include some form of punishment for bad agents, e.g. shutting an agent off, decreasing its access to computational resources, and so on. On the other hand, we may also imagine that some people would want to organize protests against the shutdown of a particular agent.

A related problem is how people can come to really trust an agent. After all, agents today control the flight of airplanes, speculate on the stock exchange, and help people make difficult decisions. In robotics there is a heated debate about making robots able to kill people. Some form of ethics and a sense of respect for the law must be programmed into robots.

It could also be possible to think of a sort of new test for trusting robots, i.e. to imagine a robot that could be “elected” as a delegate in a political competition against human candidates.

Team C

Team Members:
Helgi Páll Helgason
Marjan Sirjani
Eric Nivel
Jörg Siekmann
Hannes Högni Vilhjalmsson
Bas Steunebrink
Haris Dindo
Hrafn Th. Thorisson
Yngvi Björnsson

Team Conclusions: Let's first define “value”: it refers to the quality of a system that does what it is supposed to do! In constructivist AI we are interested in building things that, if built manually, would easily go out of human control.

There are two main ingredients: (1) the system must remain within the boundaries fixed by the owner while still reaching its goals; (2) we need a reward/punishment mechanism for when the system gets out of hand.

In the constructivist approach we start with a narrow seed (which is so general that it cannot yet do anything specific); the system then learns by observation and with some help from a teacher (who provides a reward signal). The system should also have continuous self-evaluation capabilities, which are the key to adaptation. To this end, we need a hierarchy of requirements at different layers of complexity. We have two modes of operation: training and commission; a toy sketch of this two-mode idea follows below.
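
A minimal illustrative sketch (not from the discussion itself) of how these pieces might fit together: an agent that heeds the teacher's reward only in training mode, and continuously evaluates itself against an owner-fixed hierarchy of requirements. All names here (ConstructivistAgent, Mode, the placeholder methods) are invented for illustration.

  from enum import Enum

  class Mode(Enum):
      TRAINING = "training"      # teacher reward signal is heeded
      COMMISSION = "commission"  # behavior held within owner-fixed bounds

  class ConstructivistAgent:
      def __init__(self, requirements):
          # Hierarchy of requirements at different layers of complexity,
          # each a predicate over the system's state.
          self.requirements = requirements
          self.mode = Mode.TRAINING
          self.knowledge = {}  # grown from the narrow seed by observation

      def observe(self, percept):
          # Learning by observation: record what was seen (placeholder).
          self.knowledge[percept] = self.knowledge.get(percept, 0) + 1

      def teacher_feedback(self, reward):
          # Teacher-provided reward signal, only heeded during training.
          if self.mode is Mode.TRAINING:
              self._adapt(reward)

      def _adapt(self, reward):
          pass  # model update left out of this sketch

      def evaluate(self, state):
          # Continuous self-evaluation: every requirement layer must hold.
          return all(req(state) for req in self.requirements)

  # Example: commission the agent once a simple requirement is satisfied.
  agent = ConstructivistAgent(requirements=[lambda s: s.get("inside_bounds", False)])
  if agent.evaluate({"inside_bounds": True}):
      agent.mode = Mode.COMMISSION

In commission mode the teacher signal is simply ignored, which is one simple way of keeping a deployed system within the boundaries fixed by the owner.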

In critical systems we have to be sure that the system DOES what it is supposed to do! The constructivist approach will probably never be employed for mission-critical systems. In addition, the fate of the system should be linked with that of its owner in order to avoid the “spoiled child” problem.

Team D

Team Members:
James Bonaiuto
John-Jules Meyer
Hamid Pourvatan
Kristinn R. Thórisson
Luca Aceto

Team Conclusions: Let's assume we have an AGI that we can raise and/or train to become somewhat similar in its skills to a human participating in a society.

You can induce guilt in the AGI by simulating/running 100 other agents that impose “social” and “moral” limits and constraints on it, especially as it is growing up, so that part of its fundamental processing is guarded by principles with useful predictive powers. (There is an important distinction to be made here between guilt and shame: guilt is internalized, shame is externally imposed.) This kind of limiting should not be built into the system by hand; you want the system to internalize it as part of its “upbringing”. This calls for less imposition from the outside; a toy sketch of the idea follows below.
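
As a purely illustrative toy sketch (hypothetical, not part of the team's discussion), the guilt/shame distinction might be rendered like this: “shame” is a sanction applied by the simulated peer agents, while “guilt” is the same penalty the learner applies to itself once a norm has been internalized during its upbringing. All names and numbers below are assumptions made for the example.

  import random

  NUM_PEERS = 100  # the "100 other agents" mentioned above

  def peer_disapproves(action):
      # A simulated peer sanctions norm violations (with some noise).
      return action == "violate_norm" and random.random() < 0.9

  def upbringing(episodes=1000):
      internal_norms = set()  # norms the learner internalizes over time
      for _ in range(episodes):
          action = random.choice(["violate_norm", "cooperate"])
          # Shame: a penalty imposed from outside by the simulated society.
          shame = sum(peer_disapproves(action) for _ in range(NUM_PEERS))
          if shame > NUM_PEERS // 2:
              internal_norms.add(action)  # internalization step
      return internal_norms

  def feels_guilt(action, internal_norms):
      # Guilt: the penalty now fires with no peers present at all.
      return action in internal_norms

  norms = upbringing()
  print(feels_guilt("violate_norm", norms))  # True once internalized

Once internalized, the constraint no longer depends on any external observer being present, matching the idea that guilt, unlike shame, requires no imposition from the outside.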

Should “institutions” (whether real or simulated) impose strict rules on all the agents in an artificial society? Probably not - it might be better to allow some limited violations of the rules to enable the whole system to evolve.
