Artificial General Intelligence: A deep learning meta-course

This course is for everyone who wants to build Artificial General Intelligence (AGI) using Deep Learning. At the end of the course you should be at researcher level; that is, you'll know enough to perform original research in the field of AGI (e.g. via peer-reviewed publication).

There are several approaches to AGI; here we focus on the most popular, that of Machine Learning (ML) and Deep Learning using conventional computer hardware, i.e. the modern computer science route. This is mainly because it's what I know, but also because it's what I personally believe will yield results soonest. Other approaches are documented in THREAD2.

This meta-course is organised around several threads which can be followed in parallel. Whenever there's a link, it's there for you to click through and skim the content. If it's material that you already know, then great: move on. If you don't know it, then you might want to read it, understand it, think about it, and follow whatever interests you more deeply. Mostly this material is factual, but opinions are useful too, so mine are marked in their own sections; if you think I'm wrong then do let me know.

AGI: Defining the problem

What do we mean by intelligence? A dictionary definition is "the ability to acquire and apply knowledge and skills".

Another name for Artificial General Intelligence is Strong Artificial Intelligence. This is in contrast to Weak or Narrow Artificial Intelligence. Most uses of the term AI refer to Weak AI, which in turn is often mostly Machine Learning.

Opinion

The term “Artificial” is misleading, as it means "made or produced by human beings rather than occurring naturally, especially as a copy of something natural". Our computers may be artificial, but we are trying to create true intelligence, not a human-made substitute.

Isaac Newton said "If I have seen further it is by standing on the shoulders of Giants." I believe that this is generally true; that is, intelligence builds upon itself through communication with other intelligent agents. The consequence for AGI is that (a) we have to be able to communicate with an AGI in order for us to consider it intelligent (the Turing test), (b) natural language is the most efficient form of communication, so we should look to building it into AGI, (c) AGI will bootstrap from our collective knowledge, e.g. the mostly unstructured information on the internet, and (d) AGIs could make good teachers and so improve education worldwide (e.g. as an evolution of the mostly shallow knowledge already present on the internet).

THREAD0: Should we build AGI?

There is so much written on the benefits and dangers of AGI that you've probably already come to a decision on this, but take the time to rethink it now. Sam Harris on Artificial Intelligence is an accessible, nearly three-hour compilation and a good place to start (All Sam Harris). In contrast, the main reference, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, needs a lot of patience and concentration to get through the last chapters (it took me 5 months). The Singularity is a more accessible entry into this viewpoint.

Those working in AGI tend to take a much more pragmatic approach. We tend to think that we are a long way from AGI (both in terms of algorithms and of having the necessary compute) and we expect good things from the next steps. For example, Yoshua Bengio says "Existential risk, for me, is a very unlikely consideration, but still worth academic investigation in the same way as you could say that we should study what could happen if a meteorite came to earth and destroyed it". The Seven Deadly Sins of Predicting the Future of AI is worth reading. Bear in mind that There’s No Fire Alarm for AGI, and that AGI researchers are self-selecting: it would be hard for them to believe that AGI is a really bad idea and then spend all day working on it.

A long but considered view of the current situation is given by A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy, and also by Concrete Problems in AI Safety.

Opinion

Science Fiction (e.g. Asimov's three laws of robotics, Blade Runner, HAL in 2001, Humans and stacks more) is a really useful tool here: it allows one question to be artificially isolated and explored as a thought experiment. By its nature, science fiction explores the extremes by being sensationalist, but if it's entertaining and it helps you think in a different way about the subject then it's educational and belongs here.

We already have lots of serious existential risks (e.g. nuclear war, climate change, loss of biodiversity, bioterrorism, ocean acidification); you need to decide whether you think that AGI could become another one.

I think that we are already dependent on computers that process information faster and more cleverly than humans can, and they have contributed much to our standard of living. There's no going back, but we need to manage change. Compute costs are likely to keep coming down (Moore's law), so it's better to start now and share what we learn.

THREAD1: What we know we don't know about AGI

The aim of this thread is to introduce the problems that we may need to solve to generate AGI.

  • What processing architecture best captures the input of information, the output of actions, an internal model of the world and a model of self? In the deep learning framework these are going to be processing blocks that pass information between themselves as distributed representations.
  • Every machine learning problem needs an objective function, so what is the goal of AGI? This could be hard-wired in (e.g. the creation of wealth or the avoidance of pain-like signals), but it would be much cleaner if it arose directly from the definition of AGI; that is, the objective of AGI is to be intelligent and therefore to learn.
  • A system extracts the most information from an experience if it can remember it exactly. Deep learning relies upon gradient descent methods, which are not as efficient. Do we need explicit one-shot learning, or can the architecture focus on important events and replay them enough times for few-shot learning?
  • It's easy to measure correlations, which is sufficient for narrow AI. However, in AGI we need to infer causality, a much harder problem.
  • Toy problems have proved to be very useful. What is a good set of toy problems for AGI that allows us to chart progress towards a real-world AGI?
  • Superintelligence: could an AGI be more intelligent than ourselves?
  • How do we control an AGI? Ref: Superintelligence, Chapter 9.
  • Does consciousness matter? It's clearly useful to split processing into unconscious processes, such as speech and image recognition, which we are now very good at, and processes that require thinking/planning, which we know less about. However, do we necessarily have to debate whether a machine could be conscious (e.g. Searle's Chinese Room and his recent talk), or can we finesse the whole subject?
  • Is AGI achievable with current hardware? We have recently achieved human-like performance for many tasks, such as speech recognition, machine translation, autonomous cars, and board and video games. It seems plausible that if we have the compute for significant components we have the compute to assemble a proof-of-concept of a complete system. It would be convenient if current hardware is enough for dog-level intelligence but not enough for superintelligence.
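On the one-shot versus few-shot question above, one common workaround is an experience-replay buffer: store important events with a weight and replay them often, so gradient methods see rare but significant experiences many times. Here is a minimal sketch in plain Python; the importance weights are a hypothetical stand-in for something like TD-error prioritisation, and all names are invented for illustration.

```python
import random
from collections import deque

class ReplayBuffer:
    """Sketch of prioritised experience replay: store experiences with an
    importance weight and sample the important ones more often, so rare
    but significant events are replayed enough for few-shot learning."""
    def __init__(self, capacity=1000):
        self._items = deque(maxlen=capacity)   # oldest items fall off

    def add(self, experience, importance=1.0):
        self._items.append((importance, experience))

    def sample(self, k):
        # Sample with replacement, weighted by importance.
        weights = [imp for imp, _ in self._items]
        picks = random.choices(list(self._items), weights=weights, k=k)
        return [exp for _, exp in picks]

random.seed(0)
buf = ReplayBuffer()
buf.add("rare-but-important-event", importance=10.0)
for i in range(9):
    buf.add(f"routine-event-{i}", importance=1.0)

batch = buf.sample(1000)
# The rare event carries weight 10 out of 19, so it dominates the batch
# even though it is only 1 of the 10 stored experiences.
```

The design choice here is sampling with replacement, which is what lets a single stored event appear many times in one training batch.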

Opinion

THREAD2: Existing courses on AGI

This meta-course takes an unashamedly deep learning approach (perhaps because I've been using gradient techniques to solve complex real-world problems since 1984 - so like the drunk looking under the lamp post for his lost keys, it might not be the best place to look but it's the only place I can see). More general approaches are:

Opinion

One book that was very influential for me in the '80s is Gödel, Escher, Bach. It's too old and long to recommend reading in full here, but if you have the time then go for it.

The MIT course is the gentlest introduction: it's accessible and has broad coverage.

THREAD3: What we know about the Machine Learning approach to AGI

This section lists what we know from machine learning that is likely to be useful. It assumes you know enough computer science and maths; if you don't, then dig out the background material in the referenced courses. This gives you the Lego blocks from which to build your AGI creation.

Machine Learning Foundation

You'll need a solid foundation in Machine Learning. The web is awash with information, for example pick one from this list. If you know about half of The Most Important Machine Learning Algorithms that's probably enough foundational material.
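As a concrete taste of that foundational material, here is a minimal sketch of perhaps the simplest learning algorithm of all, linear regression fitted by gradient descent, in plain Python. The data and hyperparameters are invented for illustration.

```python
# Fit y = w*x + b to synthetic data by gradient descent on the mean
# squared error. Ground truth is w=2, b=1.
data = [(x, 2.0 * x + 1.0) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01          # start from zero, small learning rate

for _ in range(2000):
    # Gradients of MSE = mean((w*x + b - y)^2) with respect to w and b.
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw
    b -= lr * db

# After training, (w, b) is close to the ground truth (2, 1).
```

The same loop (compute a loss, compute its gradient, step downhill) is the skeleton of everything in the deep learning sections below, just with far more parameters.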

Deep Learning

There are so many, so just pick one:

Historically people have liked Geoff Hinton's and Andrew Ng's courses.

At the end of it you should know about:

Deep learning framework

You are going to want to know at least one of:

and if you pick more than one, then TensorFlow should be in your set, with Keras and Caffe2 also being popular.
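Whichever framework you pick, the core service they all provide is automatic differentiation: you define a computation and the framework derives the gradients for you. A toy scalar version in plain Python (an illustrative sketch of the idea, not any framework's actual API) makes the mechanism visible:

```python
class Value:
    """Toy reverse-mode autodiff node: records its parents and the local
    derivative for each parent, then back-propagates via the chain rule."""
    def __init__(self, data, parents=(), grad_fns=()):
        self.data, self.grad = data, 0.0
        self._parents, self._grad_fns = parents, grad_fns

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Value(self.data + other.data, (self, other),
                     (lambda g: g, lambda g: g))

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Value(self.data * other.data, (self, other),
                     (lambda g, o=other: g * o.data,
                      lambda g, s=self: g * s.data))

    def backward(self):
        topo, seen = [], set()
        def build(v):                      # topological order of the graph
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0                    # d(output)/d(output)
        for v in reversed(topo):           # chain rule, output to inputs
            for p, fn in zip(v._parents, v._grad_fns):
                p.grad += fn(v.grad)

x, y = Value(3.0), Value(4.0)
z = x * y + x                              # z = x*y + x = 15
z.backward()
# x.grad = y + 1 = 5.0, y.grad = x = 3.0
```

Real frameworks do exactly this over tensors instead of scalars, with GPU kernels for the forward and backward functions.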

Reinforcement Learning

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward.
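The definition above can be made concrete with a tabular Q-learning toy, sketched here in plain Python. The environment (a one-dimensional corridor) and the hyperparameters are invented for illustration; this is the textbook Q-learning update, not any particular library's API.

```python
import random

# Toy environment: states 0..4 in a corridor, actions move right (+1) or
# left (-1), reward +1 only on reaching the goal state 4. The agent
# learns action values Q(s, a) that maximise cumulative discounted reward.
N_STATES, GOAL = 5, 4
ACTIONS = (1, -1)
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1          # made-up hyperparameters

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + a))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

random.seed(0)
for _ in range(300):                        # episodes
    s = 0
    for _ in range(50):                     # step cap per episode
        # Epsilon-greedy: mostly exploit current Q, sometimes explore.
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        # Q-learning update towards r + gamma * max_a' Q(s', a').
        target = r if done else r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2
        if done:
            break

# The greedy policy now moves right towards the reward from every state.
```

Note the discount factor gamma: it is what turns "some notion of cumulative reward" into a well-defined quantity the agent can maximise.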

For a complete course, OpenAI have just published Spinning Up in Deep RL. Other resources are:

Reinforcement learning is a key part of AGI, but the field tends to concentrate on toy problems that can be solved. For a discussion of what's missing from classic reinforcement learning that's needed for AGI:

THREAD4: Recent Achievements

One can argue that these are not AGI achievements, but they are steps along the way:

Landmark achievements you should know about:

Interesting components:

Open Source Implementations

THREAD5: Keeping Current

THREAD6: Research Directions

Each of these resources is a summary of research directions; it's good to know what other people feel are the important problems:

Each of these resources is an individual research direction moving towards AGI:

Finally, a 2017/2018 poll and its results: When will Artificial General Intelligence (AGI) be achieved?

THREAD7: Giving Back

Please give me feedback as to how I can improve this page.

agi.txt · Last modified: 2018/11/17 19:01 by tonyr