Artificial General Intelligence: A deep learning meta-course

This course is for everyone who wants to build Artificial General Intelligence (AGI) using Deep Learning. At the end of the course you should be at researcher level; that is, you'll know enough to perform original research in the field of AGI (e.g. via peer-reviewed publication).

There are several approaches to AGI; here we focus on the most popular, that of Machine Learning (ML) and Deep Learning using conventional computer hardware, i.e. the modern computer science route. This is mainly because it's what I know, but also because it's what I personally believe will yield results soonest. Other approaches are documented in THREAD2.

This meta-course is organised around several threads which can be followed in parallel. Whenever there's a link, it is there to click through and skim-read the content. If it's material that you know about then great, move on. If you don't know it then you might want to read it, understand it, think about it and follow what interests you deeper. Mostly this material is factual, but opinions are useful, so mine are marked in Opinion sections; if you think I'm wrong then do let me know.

AGI: Defining the problem

What do we mean by intelligence? A dictionary definition is "the ability to acquire and apply knowledge and skills".

Another name for Artificial General Intelligence is Strong Artificial Intelligence. This is in contrast to Weak or Narrow Artificial Intelligence. Most uses of the term AI refer to Weak AI, which in turn is often mostly Machine Learning.

Opinion

The term “Artificial” is misleading, as it means "made or produced by human beings rather than occurring naturally, especially as a copy of something natural". Our computers may be artificial, but we are trying to create true intelligence, not a human-made fake.

Most examples of AI today are tasks that humans do easily but that machines have found difficult in the past. This is narrow AI: doing tasks that intelligent agents can do. The objective function is to do the task well; the intelligence comes in setting everything up so that it works (whether by hand-coding rules or by employing gradient descent techniques on an example data set). So doing these tasks demonstrates researcher intelligence more than computer intelligence.

AGI is very different. In AGI the objective function is intelligence. That is, the task doesn't really matter; what matters is that it is solved in a manner which shows that knowledge and skills are both acquired and applied as well as possible. AGI should be driven by evaluating whether a solution was found in the most data- and time-efficient manner - late answers are wrong answers.
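To make "late answers are wrong answers" concrete, here is a minimal Python sketch of what such an evaluation might look like. The function name, weights and logarithmic penalties are my own invention for illustration, not an established metric:

  import math

  # Hypothetical scoring function: reward task quality, penalise the data
  # and time consumed getting there. The weights are illustrative only.
  def agi_score(task_quality, samples_used, seconds_used,
                data_weight=0.01, time_weight=0.001):
      cost = (data_weight * math.log1p(samples_used)
              + time_weight * math.log1p(seconds_used))
      return task_quality - cost

  # Two agents with equal accuracy: the data-efficient one scores higher.
  print(agi_score(0.95, samples_used=1000, seconds_used=60))     # ~0.877
  print(agi_score(0.95, samples_used=1000000, seconds_used=60))  # ~0.808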

THREAD0: Should we build AGI?

There is so much written on the benefits and dangers of AGI that you've probably already come to a decision on this, but take the time to rethink now. Sam Harris on Artificial Intelligence is a nearly 3-hour accessible compilation and a good place to start (All Sam Harris). In contrast, the main reference, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, needs a lot of patience and concentration to get through the last chapters (it took me 5 months). The Singularity is a more accessible entry into this viewpoint.

Those working in AGI tend to take a much more pragmatic approach. We tend to think that we are a long way from AGI (both in terms of algorithms and of having the necessary compute) and we expect good things from the next steps. For example, Yoshua Bengio says "Existential risk, for me, is a very unlikely consideration, but still worth academic investigation in the same way as you could say that we should study what could happen if a meteorite came to Earth and destroyed it". The Seven Deadly Sins of Predicting the Future of AI is worth reading. Bear in mind There’s No Fire Alarm for AGI, and that AGI researchers are self-selecting - it would be hard for them to believe that AGI is a really bad idea and then spend all day working on it.

A long but considered view of the current situation is given by A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy, and also by Concrete Problems in AI Safety.

Opinion

Science Fiction (e.g. Asimov's three laws of robotics, Blade Runner, HAL in 2001, Humans and stacks more) is a really useful tool here: it allows one question to be artificially isolated and explored as a thought experiment. By its nature, science fiction explores the extremes by being sensationalist, but if it's entertaining and it helps you think in a different way about the subject then it's educational and belongs here.

We are already dependent on computers that process information faster and smarter than humans can, and they have contributed much to our standard of living. There's no going back, but we need to manage change. Compute costs are likely to keep coming down (Moore's law), so it's better to start now and share what we learn so that everyone can adapt gradually.

We already face serious existential risks (e.g. nuclear war, climate change, loss of biodiversity, bioterrorism, ocean acidification); you need to decide whether you think that AGI could become another one. I personally think that we have bigger worries to confront immediately (for example, we only have 12 years to limit climate change catastrophe) and that progress in AI is going to help with the more immediate concerns.

THREAD1: What we know we don't know about AGI

Opinion

These are my thoughts on the problems that we need to solve to create AGI. They are introduced here so that when we review what we do know, we are already thinking about the problems we want to solve.

  • What processing architecture best captures the input of information, the output of actions, an internal model of the world and a model of self? In the deep learning framework these are going to be processing blocks that pass information between themselves as distributed representations.
  • Every machine learning problem needs an objective function; is my above view of intelligence as the loss function correct? Other thoughts are that it is hard-wired (maybe by evolution) and is the avoidance of pain-like signals or the creation of wealth. Using intelligence as the loss function implies sub-goals of exploration to gather data, the finding of models in that data and the finding of efficient solutions using those models.
  • A system extracts the most information from an experience if it can remember it exactly. Deep learning relies upon gradient descent methods, which are not as efficient. Do we need explicit one-shot learning, or can the architecture focus on important events and replay them enough times for few-shot learning? (See the replay sketch after this list.)
  • It's easy to measure correlations, which is sufficient for narrow AI. However, in AGI we need to infer causality, a much harder problem.
  • Toy problems have proved to be very useful. What is a good set of toy problems for AGI that allows us to chart progress towards a real-world AGI?
  • Superintelligence: could an AGI be more intelligent than ourselves?
  • How do we control an AGI? Ref: Superintelligence, Chapter 9.
  • Does consciousness matter? It's clearly useful to split processing into unconscious processes such as speech and image recognition, which we are now very good at, and processes that require thinking/planning, which we know less about. However, do we necessarily have to debate whether a machine could be conscious (e.g. Searle's Chinese Room and his recent talk), or can we finesse the whole subject?
  • Is AGI achievable with current hardware? We have recently achieved human-like performance on many tasks, such as speech recognition, machine translation, autonomous driving, and board and video games. It seems plausible that if we have the compute for significant components then we have the compute to assemble a proof-of-concept of a complete system. It would be convenient if current hardware were enough for dog-level intelligence but not enough for superintelligence.
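The "focus on important events and replay them" idea in the list above can be sketched in a few lines of Python. This is a toy invented for this example, not a real prioritised-replay implementation:

  import random

  # Toy illustration: a replay buffer that samples high-surprise
  # experiences more often, so a rare but important event can be
  # revisited many times (few-shot learning) without an explicit
  # one-shot mechanism.
  class PrioritisedReplay:
      def __init__(self, capacity=10000):
          self.capacity = capacity
          self.events = []                  # (experience, priority) pairs

      def add(self, experience, surprise):
          if len(self.events) >= self.capacity:
              self.events.pop(0)            # forget the oldest event
          self.events.append((experience, max(surprise, 1e-6)))

      def sample(self, batch_size):
          weights = [p for _, p in self.events]
          return random.choices([e for e, _ in self.events],
                                weights=weights, k=batch_size)

  buffer = PrioritisedReplay()
  buffer.add("routine observation", surprise=0.1)
  buffer.add("rare, important event", surprise=10.0)
  print(buffer.sample(5))   # the rare event dominates the batch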

THREAD2: Existing courses on AGI

This meta-course takes an unashamedly deep learning approach (perhaps because I've been using gradient techniques to solve complex real-world problems since 1984 - so, like the drunk looking under the lamp post for his lost keys, it might not be the best place to look but it's the only place I can see). More general approaches are:

Opinion

One book that was very influential for me in the '80s is Gödel, Escher, Bach. It's too old and long to recommend reading in full here, but if you have the time then go for it.

The MIT course is the gentlest introduction: it's accessible and has broad coverage.

THREAD3: What we know about the Machine Learning approach to AGI

This section lists what we know from machine learning that is likely to be useful. It assumes you know enough computer science and maths; if you don't, then dig out the background material in the referenced courses. This gives you the Lego blocks to build your AGI creation.

Machine Learning Foundation

You'll need a solid foundation in Machine Learning. The web is awash with information; for example, pick one from this list. If you know about half of The Most Important Machine Learning Algorithms, that's probably enough foundational material.

Deep Learning

There are so many, so just pick one:

Historically people have liked Geoff Hinton's and Andrew Ng's courses.

At the end of it you should know about:

Deep learning framework

You are going to want to know at least one of:

TensorFlow and PyTorch are the most popular.
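Whichever you pick, the core facility they all provide is the same: automatic differentiation through tensor code. A minimal PyTorch sketch (the toy data and single learnable weight are invented for illustration) of one full training loop:

  import torch

  # Fit y = w * x to a single data point by gradient descent.
  w = torch.tensor(0.0, requires_grad=True)        # the learnable parameter
  x, y = torch.tensor(2.0), torch.tensor(6.0)      # the true w is 3

  for step in range(100):
      loss = (w * x - y) ** 2      # squared error
      loss.backward()              # autograd computes d(loss)/dw into w.grad
      with torch.no_grad():
          w -= 0.05 * w.grad       # one gradient-descent step
          w.grad.zero_()

  print(w.item())                  # close to 3.0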

Reinforcement Learning

Reinforcement learning (RL) is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximise some notion of cumulative reward.
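As a minimal, hedged sketch of "maximise some notion of cumulative reward" - the corridor environment, hyperparameters and episode count below are all invented for illustration - here is tabular Q-learning:

  import random

  # Toy sketch: tabular Q-learning on a made-up five-state corridor.
  # Actions move left/right; reward 1 only for reaching the right end.
  N_STATES, ACTIONS = 5, [-1, +1]
  Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
  alpha, gamma, epsilon = 0.1, 0.9, 0.3

  for episode in range(300):
      s = 0
      while s < N_STATES - 1:
          if random.random() < epsilon:                    # explore
              a = random.choice(ACTIONS)
          else:                                            # exploit
              a = max(ACTIONS, key=lambda a: Q[(s, a)])
          s2 = min(max(s + a, 0), N_STATES - 1)            # take the step
          r = 1.0 if s2 == N_STATES - 1 else 0.0
          best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
          Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
          s = s2

  # The learned greedy policy should head right (+1) from every state.
  print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])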

For a complete course, OpenAI have just published Spinning Up in Deep RL. Other resources are:

Slides are here: http://people.eecs.berkeley.edu/~cbfinn/_files/mbrl_cifar.pdf and an older version of this tutorial (about 1.5 years older) is here: https://www.youtube.com/watch?v=iC2a7M9voYU

Reinforcement learning is a key part of AGI, but it tends to concentrate on toy problems that can be solved. For a discussion of what's missing from classic reinforcement learning that is needed for AGI:

THREAD4: Recent Achievements

Open Source Implementations

THREAD5: Keeping Current

THREAD6: Research Directions

Opinion

Here are the problems that I think we need to tackle over-and-above deep reinforcement learning:

  • We started off by thinking that “slow thinking” was the key to AGI, e.g. solving chess by brute force. We then moved to thinking that speech and image processing were the hard tasks, i.e. “fast thinking”. I now think that we need to concentrate on a connectionist view of “slow thinking” to get reasoning into what we already do well with deep learning.
  • AGI and common-sense (real-world) knowledge. It is commonly accepted that for an agent to appear intelligent it must have “common sense” knowledge, that is, knowledge of the world and how it works. If this is accepted, then how should that knowledge be obtained? I'd argue that human-level intelligence is inextricably linked to language as the main source of information. Roboticists argue that AGI needs to be embodied in order to learn.
  • Inferring Causality: In supervised ML we know that the input causes the output, but in AGI we want to infer causation and keep it separate from correlation (see the confounder sketch after this list). In brief, see XKCD explained. This is a hard problem, but less so if AGI is embodied. Judea Pearl asks whether future robots could have human-like levels of intuition or even exhibit free will; he thinks the answer is yes, but that achieving this goal will require fundamental changes to the field of AI research: https://www.quantamagazine.org/to-build-truly-intelligent-machines-teach-them-cause-and-effect-20180515/
  • Episodic Memory: Our current systems learn functions (supervised ML) and store data tied to particular times (e.g. RNNs). AGI needs to store data in the form of concepts, i.e. have episodic memory. Reasoning happens with concepts that aren't anchored in time.
  • Cooperation and real-world knowledge: Isaac Newton said "If I have seen further it is by standing on the shoulders of Giants." I believe that this is generally true; that is, intelligence builds upon itself through communication with other intelligent agents. The consequences for AGI are that (a) we have to be able to communicate with an AGI in order for us to consider it intelligent (the Turing test), (b) natural language is the most efficient form of communication, so we should look to building it into AGI, (c) AGI will bootstrap from our collective knowledge, e.g. the mostly unstructured information on the internet, and (d) AGIs could make good teachers and so improve education worldwide (e.g. as an evolution of the mostly shallow knowledge already present on the internet).
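The correlation-versus-causation point in the "Inferring Causality" item above can be demonstrated in plain Python. In this made-up toy model a hidden confounder Z drives both X and Y; X has no causal effect on Y, yet observationally they are almost perfectly correlated, and intervening on X (Pearl's do-operator) exposes this:

  import random

  # Toy model of a confounder: hidden Z causes both X and Y;
  # X has no causal effect on Y at all.
  def sample(do_x=None):
      z = random.gauss(0, 1)                         # hidden common cause
      x = z + random.gauss(0, 0.1) if do_x is None else do_x
      y = z + random.gauss(0, 0.1)                   # depends on Z, never on X
      return x, y

  def corr(pairs):                                   # Pearson correlation
      n = len(pairs)
      mx = sum(x for x, _ in pairs) / n
      my = sum(y for _, y in pairs) / n
      cov = sum((x - mx) * (y - my) for x, y in pairs) / n
      vx = sum((x - mx) ** 2 for x, _ in pairs) / n
      vy = sum((y - my) ** 2 for _, y in pairs) / n
      return cov / (vx * vy) ** 0.5

  obs = [sample() for _ in range(10000)]
  print(corr(obs))                     # ~0.99: X and Y look tightly linked

  # Intervene with the do-operator: set X ourselves and watch Y ignore it.
  lo = [sample(do_x=-2.0)[1] for _ in range(10000)]
  hi = [sample(do_x=+2.0)[1] for _ in range(10000)]
  print(sum(hi) / len(hi) - sum(lo) / len(lo))   # ~0: no causal effect of X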

Finally, a 2017/2018 poll and its results: When will Artificial General Intelligence (AGI) be achieved?

THREAD7: Giving Back

Please give me feedback as to how I can improve this page.
