
SUMMER SCHOOL DEBATES & WORKSHOPS

Debate I: Logic

Topic 1: Logic and AGI: A Cognitive Necessity or Just a Methodology?
Should logic form some significant part of the core methodology we use when designing and constructing what we hope will be the first AGI?

  • Argument for logic-as-present-methodology:
    • Numeric approaches (HMMs, ANNs, etc.) hide the underlying structure of what they represent.
    • Making AI systems that evolve (e.g. AGI systems) is, at present, impossible with such representations.
    • Approaches based on logic do not have such limitations.
    • Hence, logic is a practical way to start building AGIs.
    • If something comes along that removes the limitations of present numeric approaches it is conceivable that this will replace logic as the best bet towards AGI.
  • Argument for logic-as-cognitive-necessity:
    • The human mind uses logic (in some form or other) to do its job.
    • The human mind is the best first approximation of, and the obvious target for, what AGI builders are after.
    • Conclusion 1: AGIs should use logic.
    • Additional claim: Human minds use logic because there is no other way to do intelligence.
    • Conclusion 2: AGIs must use logic.

Subtopic 1-a: If logic is necessary (at present or forever), what kind of logic should we use to build AGIs?

Subtopic 1-b: If logic is not necessary (at present or ever), what alternatives are there?



Topic 2: Tests for AGIs: To be Desired or to be Avoided?

  • Argument for X test (where X is some kind of appropriate IQ test or other) for AGIs:
    • We must be able to evaluate our AGIs.
    • We have experience of building tests for human intelligence.
    • Creating tests for AGIs should be a natural and necessary part of developing AGIs.
  • Argument against X test (where X is some kind of appropriate test) for AGIs:
    • AGIs will be - for the next one or two decades at least - very different from human intelligence.
    • It will be difficult to devise proper tests for them.
    • Any and every attempt will likely divert the attention of AGI developers away from the main task.
    • Tests for AGIs should be avoided.

Subtopic 2-a: Might it make sense to start an annual competition for General X, for some X, reminiscent of general game playing (GGP), in which an appropriate sponsor, aligned of course with AGI, offers significant prize money?



Debate II: Systems

Topic 1: The intelligence is in the architecture.

  • Arguments in favor of the claim
    • Intelligence is an emergent property of a properly arranged set of interacting functions; without a proper structure supporting the interactions of the system's components, this property will not emerge.
    • In a system where the whole is greater – and in fact different – than the sum of its parts, architecture is a critical “component” of the system.
  • Arguments against the claim
    • A proper architecture is certainly necessary, but it is not sufficient.

Topic 2: Mechanisms to support system self-inspection must be part of any system that aims to become an AGI.

  • Arguments in support of the claim
    • Any system starting from a reasonably non-axiomatic beginning must improve its basic cognitive functions over time in order to become truly good at learning. Self-growth requires inspection and evaluation of the self, which means self-inspection is unavoidable.
    • For any general learning system it is not possible to pre-describe its exact learning mechanisms – these must grow with the system in its particular encountered environments. This means that learning mechanisms must be inspected and improved at the architectural level; hence, self-inspection is necessary.
    • The capabilities of a situated system are based on the a-priori capabilities of the system in the particular environment(s) in which it finds itself. Hence, it is not possible to describe the cognitive abilities of a system up front if it is supposed to be capable of adapting to a variety of environments, because the interaction between the cognitive system and the environment is extremely complex. Developing an understanding of one's own cognitive capabilities is, however, a necessary part of any advanced cognitive system, and acquiring such an understanding is not possible without introspective capabilities.
  • Arguments against the claim
    • It is difficult enough to build reasonably general learning mechanisms; how can we hope to build self-modifying systems when we cannot even manage the basics?
    • Acquiring an understanding of one's own cognitive abilities can be done by observing one's own external behavior in a variety of circumstances; for this, introspective cognitive abilities are not required.

Topic 3: Claim: There is no need for specific means to endow AGI systems with curiosity.

  • Pros:
    • It is just a matter of meta-control.
    • Grounding: a drive to reduce uncertainty (e.g. minimize the derivative of the success rate of goal/prediction patterns).
    • Is there any other alternative? Engineering-wise, budget-wise, career-wise, coding-management-wise, etc.?
  • Cons:
    • Where will the tower of fixed inference rules stop? At cognitive milestones? But then, who is going to teach, guide, correct and implement as the system faces ever more complex situations that call for more meta-control?
    • AGI research under this tenet - delegating meta-learning to more evolved "ghosts in the shell" - seems to presuppose the existence of AGI in the first place.
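The "grounding" bullet above can be made concrete with a toy sketch. This is not any system's actual mechanism: it tracks the success rate of each goal/prediction pattern over a sliding window and uses the magnitude of the change in that rate as an intrinsic "interest" signal. All names (`CuriosityDrive`, `interest`, the pattern labels) are hypothetical, and this learning-progress-style measure is just one loose reading of "minimize the derivative of the success rate".

```python
from collections import deque

class CuriosityDrive:
    """Toy curiosity signal (hypothetical sketch): track the success rate
    of a goal/prediction pattern over a sliding window and use the change
    (a crude derivative) of that rate as intrinsic interest. Patterns whose
    success rate is still moving are "interesting"; flat ones (either
    mastered or hopeless) are not."""

    def __init__(self, window=10):
        self.window = window
        self.outcomes = {}  # pattern id -> deque of 0/1 outcomes

    def record(self, pattern, success):
        hist = self.outcomes.setdefault(pattern, deque(maxlen=2 * self.window))
        hist.append(1 if success else 0)

    @staticmethod
    def _rate(hist):
        return sum(hist) / len(hist) if hist else 0.0

    def interest(self, pattern):
        hist = list(self.outcomes.get(pattern, []))
        old, new = hist[:-self.window], hist[-self.window:]
        if not old or not new:
            return 1.0  # unexplored patterns are maximally interesting
        return abs(self._rate(new) - self._rate(old))

drive = CuriosityDrive(window=5)
for i in range(10):
    drive.record("grasp-cup", success=(i >= 5))  # skill being acquired
    drive.record("see-wall", success=True)       # already mastered
assert drive.interest("grasp-cup") > drive.interest("see-wall")
```

On this reading, meta-control would simply allocate attention to the patterns with the highest interest score, so no dedicated curiosity module is needed beyond the bookkeeping shown.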

Topic 4: Cognitive systems must be capable of analogy-making if they are to learn fairly novel concepts in fairly novel domains (domains not encountered before); analogy-making is thus a necessary (but not sufficient) prerequisite for AGI.

  • Arguments in favor of the claim
    • Analogies make learning considerably easier, as novel information can be learned via proper analogies to previously learned things.
  • Arguments against the claim
    • It has not been shown that analogies are necessary for learning or cognition in general.
    • Analogies are simply a fancy side effect of the cognitive apparatus; it serves no purpose to try to build analogy-making as a special component or function.

Topic 5: The main and almost sole requirement for giving a system the ability to make analogies is a relatively uniform representation system.

  • Arguments for the claim
    • Analogies require the ability to do cross-domain mapping of variables, which can be done through simple similarity mappings, making the analogy process relatively straightforward.
  • Arguments against the claim
    • In addition to a convenient representation scheme, architectural support for analogies is required because making analogies requires decisions that lie outside of the knowledge strictly required for the analogy-making itself.
    • Analogy-making requires selection of relevant parts to be compared; the most sensible way to do that is via attention mechanisms, which naturally are part of the operation of the architecture proper.
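The claim that similarity mappings over a uniform representation suffice can be illustrated with a minimal sketch. All names and the triple format below are hypothetical assumptions, not a proposal from the debate: both domains are encoded uniformly as (relation, arg1, arg2) triples, and entities are paired greedily by the Jaccard similarity of their relational "signatures".

```python
def analogy_map(source, target):
    """Toy cross-domain variable mapping over a uniform representation
    (hypothetical sketch): both domains are lists of (relation, arg1, arg2)
    triples; entities are matched greedily by how similar their relational
    signatures are (which relations they occur in, and in which slot)."""

    def signature(facts, entity):
        return {(rel, pos)
                for rel, *args in facts
                for pos, a in enumerate(args) if a == entity}

    src_entities = {a for _, *args in source for a in args}
    tgt_entities = {a for _, *args in target for a in args}

    mapping, used = {}, set()
    for s in sorted(src_entities):
        sig_s = signature(source, s)
        best, best_score = None, -1.0
        for t in sorted(tgt_entities - used):
            sig_t = signature(target, t)
            union = sig_s | sig_t
            score = len(sig_s & sig_t) / len(union) if union else 0.0
            if score > best_score:
                best, best_score = t, score
        if best is not None:
            mapping[s] = best
            used.add(best)
    return mapping

# The classic solar-system/atom analogy, encoded uniformly as triples.
solar = [("attracts", "sun", "planet"), ("orbits", "planet", "sun")]
atom  = [("attracts", "nucleus", "electron"), ("orbits", "electron", "nucleus")]
print(analogy_map(solar, atom))  # {'planet': 'electron', 'sun': 'nucleus'}
```

The counter-arguments above point at exactly what this toy omits: deciding which facts belong in `source` and `target` in the first place, i.e. attention and relevance selection.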

Topic 6: Claim: With regards to AGI, piece-wise design/engineering is just this: plain wrong.

  • Pros:
    • It produces a "gas factory": an overly complex, tangled design.
    • It is expensive.
    • It is un-traceable.
    • There is no method to drive the "where to cut" issue, i.e. to justify the needed/practical level of abstraction.
    • It is PR-stuff.
    • It will scale with dollars (if available), and not for long (because of the lack of results).
  • Cons:
    • PR-stuff keeps researchers alive; otherwise there is no AGI at all, on practical grounds: visibility and hype keep AGI afloat (not results!) - just like old-school AI?



BEN GOERTZEL

Questions for discussion following Ben Goertzel's presentation.

Topic 1: Should we try to piece AGI together by combining specific functionality (like reasoning, planning, anticipation, language understanding, …), or should we try to seek general organizational principles that produce these faculties as an emergent result?

Topic 2: If both architecture and learning are necessary – but not sufficient – components for building an AGI, what other necessary components or functions are missing so that the set of components becomes sufficient for building an AGI?

Topic 3: What are useful benchmark problems? In other words: which concise, tractable and interesting questions have AGI as an answer?

Topic 4: Which aspects of human intelligence should AGI attempt to duplicate?

Topic 5: Might it make sense to start an annual competition for General X, for some X, reminiscent of general game playing (GGP on Wikipedia), in which an appropriate sponsor, aligned of course with AGI, offers significant prize money?



KAI-UWE KÜHNBERGER

STUDENT SUMMARY WORKSHOPS

Students work together in teams of 3 or 4. Each team picks one topic to address and works out a set of answers and counter-questions. The discussion session lasts 2 hours. The results are presented to the summer school attendees; each team gets no more than 15 minutes for its presentation and 5 minutes for discussion.

Rules for Teamwork

  • The teams are randomly assigned
  • Teams must choose:
    • A secretary
    • A presenter
    • A doodler/investigator
  • The roles of these are defined as:
    • Secretary is responsible for taking notes and summarizing what the team has discussed and concluded
    • Presenter presents the results. Note: Presenter and Secretary roles cannot be filled by the same person.
    • Doodler/investigator is responsible for making diagrams explaining the points that the team wants to convey to the group and finding relevant online information related to the points that the team presents.
  • After the team presentations:
    • The secretary and presenter must collaborate on a final version of the points on the wiki.
    • The doodler must help summer school staff get diagrams, pictures and related data onto the wiki.

Workshop I

Topics To Pick From

Teams must pick one of the following topics to address in Workshop I:

  1. How can we get the largest amount of I (intelligence) and G (generality) in the shortest amount of time?
  2. What kinds of tests should we create to evaluate AGIs?
  3. Logic must necessarily be part of current AGI methodologies, as a way to build them. What kind of logic should we use?
  4. What are the main differences in requirements between AGI and “traditional” mainstream AI, if any (if not, why not)?
  5. Does the human brain do supervised, unsupervised or reinforcement machine learning OR some combination of these? Which of these do we wish to employ in AGI as the general learner?
Team A | Team B | Team C | Team D | Team E
TEAM MEMBERS:
Ahmed Abdel-Fattah
Atli Ö. Sverrisson
Elsa Eiriksdottir
David Muench
Daniel Ewert
Arni S. Sigurdsson
Deon Garrett
Hailang Song
Gudjon Magnusson
Gudny R. Jonsdottir
Jarrad Hope
Mario Brcic
Helgi Helgason
Ivan Beloborodov
Dhamotharan Sritharan
Mikhail Jacob
Robert Costa
Johannes Wienke
Hamid Pourvatan
RESULTS: Team A Results Page | Team B Results Page | Team C Results Page | Team D Results Page | Team E Results Page


Workshop II

Topics To Pick From

Teams must pick one of the following topics to address in Workshop II: Topics for Workshop II

Teams
Team A | Team B | Team C | Team D | Team E
TEAM MEMBERS:
Daniel Ewert
Johannes Wienke
David Muench
Gudjon Magnusson
Atli Ö. Sverrisson
Hailang Song
Hamid Pourvatan
Helgi Helgason
Deon Garrett
Jarrad Hope
Robert Costa
Dhamotharan Sritharan
Mario Brcic
Elsa Eiriksdottir
Ahmed Abdel-Fattah
Ivan Beloborodov
Mikhail Jacob
Gudny R. Jonsdottir
RESULTS: Team A Results Page | Team B Results Page | Team C Results Page | Team D Results Page | Team E Results Page


Workshop III

Teams
Team A | Team B | Team C | Team D | Instructor Team
TEAM MEMBERS:
Johannes Wienke
David Muench
Hailang Song
Helgi Helgason
Mario Brcic
Deon Garrett
Robert Costa
Dhamotharan Sritharan
Daniel Ewert
Ivan Beloborodov
Mikhail Jacob
Kris Thorisson
Eric Nivel
Pei Wang
Kai-Uwe Kühnberger
RESULTS: Team A Results Page | Team B Results Page | Team C Results Page | Team D Results Page | Instructor topics and Answers for Workshop III




public/events/agi-summerschool-2012/debates.txt · Last modified: 2012/08/15 15:33 by thorisson