Lecture: PREDICTABLE CONSTRUCTIVISM
Author: Ricardo Sanz
1. How to be sure that a constructivist system will develop value for the owner. 2. How to be sure that a constructivist system will not become a spoiled child.
Anna Ingolfsdottir, Reykjavik U.
Mark Wernsdorfer, U. Bamberg, Germany
Deon Garrett, IIIM, Iceland
Pei Wang, Temple U., USA
Eric Baum, USA
Antonio Chella, U. Palermo, Italy
Gudny R. Jonsdottir, IIIM, Iceland
Ricardo Sanz, U. Madrid
Team Conclusions: How to avoid the possibility that self-programming, self-evolving agents evolve in a useless way on the one hand, or gain the upper hand over people on the other. A third way is an artificial agent that lives together with, and at the same level as, human agents.
Maybe we could consider a new, complex society where people and agents live together, with new forms of laws that include punishments for misbehaving agents, e.g. shutting an agent off, decreasing its access to computational resources, and so on. Conversely, we may also imagine that some people would organize protests against the shutting off of a particular agent.
A related problem is how people can really trust an agent. After all, agents today control airplane flights, speculate with money on stock exchanges, and help people make difficult decisions. In robotics there is a heated debate on making robots able to kill people. A form of ethics and a sense of respect for the law must be programmed into robots.
It may also be possible to devise a new test for trusting robots, i.e. to imagine a robot that could be “elected” as a delegate in a political competition against human candidates.
Helgi Páll Helgason
Hannes Högni Vilhjalmsson
Hrafn Th. Thorisson
Team Conclusions: Let's first define “Value”: it refers to the quality of a system doing what it is supposed to do! In constructivist AI we are interested in building things that, if built manually, would easily go out of human control.
There are two main ingredients: 1. The system must remain within the boundaries fixed by the owner while still reaching its goals; 2. We need a reward/punishment mechanism for when the system gets out of hand.
In the constructivist approach we start with a narrow seed (so general that it cannot yet do anything specific); the system then learns by observation and with some help from a teacher (who provides a reward signal). The system should also have continuous self-evaluation capabilities, which are the key to adaptation. To this aim, we need a hierarchy of requirements at different layers of complexity. There are two modes of operation: training and commission.
In critical systems we have to be sure that the system DOES what it is supposed to do! Probably, the constructivist approach will never be employed for mission-critical systems. In addition, the fate of the system should be linked to that of the owner in order to avoid the “spoiled child” problem.
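The two-mode scheme above can be sketched in code. This is only a minimal illustration under assumed names (the class `ConstructivistAgent`, its `act`/`teach` methods, and the toy actions are all hypothetical, not from the notes): a learner explores within owner-fixed boundaries during a training mode while a teacher's reward signal shapes its values, then exploits what it learned once commissioned.

```python
# Hypothetical sketch of the training/commission two-mode scheme.
# All names here are invented for illustration.
import random

class ConstructivistAgent:
    def __init__(self, actions, boundaries):
        self.q = {a: 0.0 for a in actions}   # learned value estimate per action
        self.boundaries = boundaries          # limits fixed by the owner
        self.mode = "training"

    def act(self):
        allowed = [a for a in self.q if a in self.boundaries]
        if self.mode == "training":
            return random.choice(allowed)     # explore while being taught
        return max(allowed, key=self.q.get)   # exploit once commissioned

    def teach(self, action, reward):
        # The teacher's reward signal only updates values in training mode.
        if self.mode == "training":
            self.q[action] += 0.1 * (reward - self.q[action])

agent = ConstructivistAgent(actions=["help", "harm"], boundaries={"help", "harm"})
for _ in range(200):                          # training phase
    a = agent.act()
    agent.teach(a, reward=1.0 if a == "help" else -1.0)  # punish "harm"

agent.mode = "commission"                     # switch modes; learning stops
print(agent.act())
```

The point of the sketch is the separation of modes: the reward channel exists only while training, so commissioned behavior is fixed and auditable, which is one way of keeping the system inside the owner's boundaries.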
Kristinn R. Thórisson
Team Conclusions: Let's assume we have an AGI that we can raise and/or train to become somewhat similar in its skills to a human participating in a society.
You can induce guilt in the AGI by simulating/running 100 other agents that impose “social” and “moral” limits and constraints on it, especially as it is growing up, so that part of its fundamental processing is guarded by principles with useful predictive power. (There is an important distinction to be made here between guilt and shame: guilt is internalized, shame is externally imposed.) This kind of limiting should not be built into the system by hand; you want the system to internalize it as part of its “upbringing”. This calls for less imposing from the outside.
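The guilt-induction idea can be illustrated with a toy simulation. This is a speculative sketch, not a method from the notes: the peer norms, the learning rate, and the function names are all assumptions. A population of simulated peers gives social feedback during an “upbringing” phase; afterwards the learner acts from its internalized values alone, with no external policing (guilt rather than shame).

```python
# Hypothetical sketch: internalizing norms from simulated peer feedback.
# NORMS, the learning rate, and all names are invented for illustration.
import random

NORMS = {"share": +1.0, "hoard": -1.0}        # assumed moral stance of the peers

def social_feedback(action, n_peers=100):
    # Each simulated peer approves (+1) or disapproves (-1) of the action.
    return sum(NORMS[action] for _ in range(n_peers)) / n_peers

values = {"share": 0.0, "hoard": 0.0}
for _ in range(50):                            # "upbringing" phase
    action = random.choice(list(values))
    values[action] += 0.2 * (social_feedback(action) - values[action])

# After upbringing, the peers are gone: the internalized values alone
# guide behavior, which is the guilt (not shame) part of the idea.
print(max(values, key=values.get))
```

The design point matches the notes: the constraint is never hard-coded into the agent; it emerges from repeated social feedback and then persists without external imposition.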
Should “institutions” (whether real or simulated) impose strict rules on all the agents in an artificial society? Probably not - it might be better to allow some limited violations of the rules to enable the whole system to evolve.