Challenges 1: Team Conclusions & Solutions

Author: Jörg Siekman


1. Automated (mathematical) concept formation and discovery
2. Automated theory formation (in mathematics)

Team A

Team Members:
Anna Ingolfsdottir, Reykjavik U.
Antonio Chella, U. Palermo, Italy
Bas Steunebrink, IDSIA, Switzerland
Deon Garrett, IIIM, Iceland

Conclusions: Prolog is an example of a theorem prover, but an automated system also has to discover what to prove.
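The Prolog point can be made concrete: backward chaining proves a goal it is already given, while even a crude forward chainer derives facts without being told what to prove. The sketch below is a toy under invented assumptions; the rule base is illustrative, not a real mathematical theory:

```python
# Minimal forward chaining over propositional Horn clauses. Prolog-style
# backward chaining proves a goal it is given; forward chaining instead
# derives every provable fact -- a crude stand-in for "discovering what
# to prove". The rule base is an illustrative assumption.

rules = [
    ({"even(x)", "even(y)"}, "even(x+y)"),
    ({"even(x+y)"}, "divisible_by_2(x+y)"),
]
facts = {"even(x)", "even(y)"}

changed = True
while changed:
    changed = False
    for body, head in rules:
        if body <= facts and head not in facts:  # all premises known, head new
            facts.add(head)
            changed = True

print(sorted(facts))
```

The loop saturates: it keeps firing rules until no new fact appears, so every derivable fact is "discovered" rather than requested.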
Examples from genetics: you don't know in advance what you are looking for, so unsupervised search is needed. It is similar in robotics: a truly autonomous robot need not be given its goals in advance.
A robot should be curiosity-driven.
This challenge has some points in common with self-programming systems: a novel theory is like a novel program.
A main point is how to formalize creativity.
Complexity is also an issue: how to reduce NP search problems to tractable ones.
Sometimes, as in the proof of Fermat's Last Theorem, it is necessary to change the problem and solve other problems in order to solve the original one.
The discovery of new concepts is based on paradigm shifts, made in order to solve problems in the original paradigm: negative numbers were introduced to solve problems in the natural numbers, real numbers to solve problems in the rational numbers, imaginary numbers to solve problems in the real numbers, and so on.
A completely new truth expands an existing theory.
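The progression of number systems described above can be illustrated concretely: each equation below has no solution in one system, and the historical extension was introduced precisely to supply one. This is a minimal Python sketch of the pattern, not a model of concept discovery:

```python
from fractions import Fraction

# Each equation is unsolvable in one number system; the extension
# that follows it supplies a solution.

# x + 3 = 0: no solution in the naturals; the integers give x = -3.
x = -3
assert x + 3 == 0

# 2*x = 1: no solution in the integers; the rationals give x = 1/2.
x = Fraction(1, 2)
assert 2 * x == 1

# x**2 = 2: no solution in the rationals; the reals give x = sqrt(2).
x = 2 ** 0.5
assert abs(x * x - 2) < 1e-12

# x**2 = -1: no solution in the reals; the complex numbers give x = i.
x = 1j
assert x * x == -1
```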

Team B

Team Members:
Eric Baum, USA
Eric Nivel, Reykjavik U.
Gudny R. Jonsdottir, IIIM, Iceland
Hamid Pourvatan, IIIM, Iceland

Team Conclusions:

Where does intelligence come from? Can a system "come up with" a concept?
A concept is a piece of knowledge that, when computed, serves a purpose (it has a meaning).
In the example of the limit function, what is the concept we want to formulate?

  • The mathematical standard semantics
  • AI does not have all the possible evidence
  • Discovery must be goal directed
  • Must have an example from the real world
  • AI could find the concept, but it is out of AI's reach to assert that this concept is valid

A concept is "the things that imply the concept and the things that the concept implies".

  • Can it only be found by a search?
  • Example: learning the seki concept in Go
    • The concept is similar to deadlock, so it is familiar in some way
    • When coming across it for the first time, it goes against previous assumptions
    • Once learned, this concept can be turned into a goal

We don't just want to find a concept, we want to find meaningful concepts that serve a purpose; this purpose is defined by a goal.

  • We first need the purpose before the concept

Automated theory formation = Finding useful actions

  • Similar to challenge 1
  • Discover the assumptions to be made
  • To make the right hypothesis

Team C

Team Members:
Hannes Högni Vilhjalmsson, Reykjavik U.
Haris Dindo, U. Palermo, Italy
Helgi Páll Helgason, Reykjavik U.
Hrafn Th. Thorisson, IIIM, Iceland
Ricardo Sanz, U. Madrid

Team Conclusions: There is no external environment in which to perform experiments in the mathematical domain. Any new knowledge must be actively generated from existing knowledge.

Relation between challenges - in order to discover a concept, a proven theory of its existence may be a precondition (if the concept was not directly supplied).

The space of possible theories is potentially huge, depending on the known concepts/axioms. Interestingness/usefulness is a possible heuristic to guide this search.

Posing interesting questions (theories) is likely to lead to interesting answers (concepts). A measure of interestingness (a heuristic) is key.
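The idea of using interestingness as a heuristic over a huge theory space can be read as best-first search. The sketch below is a toy under invented assumptions: the starting concepts, the generator of new candidates, and the scoring function are all placeholders for the real, open problem of defining interestingness.

```python
import heapq
from itertools import count

concepts = [1, 2, 3]            # starting "known" concepts (toy stand-ins)

def children(value):
    """Derive new candidate concepts from a known one (invented operators)."""
    yield value + 1
    yield value * 2

def interestingness(value):
    """Lower = explored first; here we pretend multiples of 10 are interesting."""
    return value + (0 if value % 10 == 0 else 5)

def best_first(goal, limit=100):
    tie = count()               # tiebreaker so the heap never compares payloads
    frontier = [(interestingness(c), next(tie), c) for c in concepts]
    heapq.heapify(frontier)
    seen = set(concepts)
    while frontier and limit > 0:
        _, _, value = heapq.heappop(frontier)
        if goal(value):
            return value
        limit -= 1
        for c in children(value):
            if c not in seen:
                seen.add(c)
                heapq.heappush(frontier, (interestingness(c), next(tie), c))
    return None

print(best_first(lambda v: v % 10 == 0))  # first "interesting" concept found
```

Everything of substance here sits in `interestingness`; the rest is stock search machinery, which mirrors the conclusion above that the measure of interestingness is the key ingredient.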

Related to constructivism. The machine will generate its own concepts. It must generate the concepts it needs or fall into a world of infinite abstraction. Where should we stop in the abstraction? (Going further and further creates concepts that are harder to match with reality.)

Team D

Team Members:
James Bonaiuto, Cal Tech, USA
John-Jules Meyer, U. of Utrecht, the Netherlands
Jörg Siekman, DFKI, Germany
Kristinn R. Thórisson, Reykjavik U. / IIIM, Iceland
Luca Aceto, Reykjavik U.

Team Conclusions: There are two views. One says to feed the system with mathematical knowledge. Concepts are not isolated, so you need to represent some web of knowledge over and above them, and that needs to be part of the information you give the system. You would then need algorithms that enable it to infer commonalities among concepts, using the web of knowledge. This, however, might help us automate discovery, but is unlikely to enable automatic theory formation. For the creation of new ideas, the system must be able to propose ideas and then evaluate them. It must be able to form abstract notions and make analogies. For this the present methods fall short. The other view is thus that this is not enough, or even the wrong focus, and that we rather need to build the system so that it can learn to understand mathematics from more basic principles, involving, among other things, non-symbolic representations.

An even more general comment is that in order to guide the system you need to provide it (either via training or direct “feeding”) with more general knowledge that enables it to understand guiding principles and worthy tasks to work on given by its human creators, because after all the system is intended to serve humans, or humanity in some way. There is the question of whether we want to create a “human mathematician's assistant system” or an “automated mathematician”. The above becomes even more important in this light, because if you want an automated mathematician some connection to societal issues might be necessary.

public/events/challenges1-team-results.txt · Last modified: 2011/09/26 22:15 by thorisson