This course is for everyone wanting to build Artificial General Intelligence (AGI) using Deep Learning. At the end of the course you should be at researcher level; that is, you'll know enough to perform original research in the field of AGI (e.g. leading to peer-reviewed publication).
There are several approaches to AGI; here we focus on the most popular: Machine Learning (ML) and Deep Learning using conventional computer hardware, i.e. the modern computer science route. This is mainly because that's what I know, but also because it's what I personally believe will yield results soonest. Other approaches are documented in THREAD2.
This meta-course is organised around several threads which can be followed in parallel. Whenever there's a link, it is there to click through and skim-read the content. If it's material that you already know about then great, move on. If you don't know it then you might want to read it, understand it, think about it and follow what interests you deeper. Mostly this material is factual, but opinions are useful too, so mine are marked in their own sections; if you think I'm wrong then do let me know.
What do we mean by intelligence? A dictionary definition is "the ability to acquire and apply knowledge and skills".
Another name for Artificial General Intelligence is Strong Artificial Intelligence. This is in contrast to Weak or Narrow Artificial Intelligence. Most uses of the term AI refer to Weak AI, which in turn is often mostly Machine Learning.
The term “Artificial” is misleading, as it means "Made or produced by human beings rather than occurring naturally, especially as a copy of something natural". Our computers may be artificial, but we are trying to create true intelligence, not a human-made fake.
Most examples of AI today are tasks that humans can do but machines have found difficult in the past. This is narrow AI: doing tasks that intelligent agents can do. The objective function is to do the task well; the intelligence comes in setting everything up so that it works (whether by hand-coding rules or by employing gradient-descent techniques on an example dataset). So doing these tasks reflects researcher intelligence more than computer intelligence.
AGI is very different. In AGI the objective function is intelligence. That is, the task doesn't really matter; what matters is that it is solved in a manner which shows that knowledge and skills are both acquired and applied as well as possible. AGI should be driven by evaluating whether a solution was found in the most data- and time-efficient manner - late answers are wrong answers.
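To make that concrete, here is a tiny illustrative sketch (my own toy scoring rule, not an established benchmark; the budgets and the multiplicative weighting are assumptions) of how task performance could be discounted by the data and time consumed, with late answers scoring zero:

```python
# Illustrative only: score a learner by how well it did AND how cheaply.
# A solution delivered after the deadline is treated as worthless
# ("late answers are wrong answers").
def efficiency_score(task_reward, samples_used, seconds_used,
                     sample_budget, time_budget):
    if seconds_used > time_budget:
        return 0.0
    data_efficiency = max(0.0, 1.0 - samples_used / sample_budget)
    time_efficiency = 1.0 - seconds_used / time_budget
    return task_reward * data_efficiency * time_efficiency
```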
There is so much written on the benefits and dangers of AGI that you've probably already come to a decision on this, but take the time to rethink now. Sam Harris on Artificial Intelligence is a nearly three-hour accessible compilation and a good place to start (All Sam Harris). In contrast, the main reference, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, needs a lot of patience and concentration to get through the last chapters (it took me 5 months). The Singularity is a more accessible entry into this viewpoint.
Those working in AGI tend to take a much more pragmatic approach. We tend to think that we are a long way from AGI (both in terms of algorithms and of having the necessary compute) and we expect good things from the next steps. For example, Yoshua Bengio says "Existential risk, for me, is a very unlikely consideration, but still worth academic investigation in the same way as you could say that we should study what could happen if a meteorite came to earth and destroyed it". The Seven Deadly Sins of Predicting the Future of AI is worth reading. Bear in mind There’s No Fire Alarm for AGI, and that AGI researchers are self-selecting - it would be hard for them to believe that AGI is a really bad idea and then spend all day working on it.
A long but considered view of the current situation is given by A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy, and also by Concrete Problems in AI Safety.
Science Fiction (e.g. Asimov's three laws of robotics, Blade Runner, HAL in 2001, Humans and stacks more) is a really useful tool here: it allows one question to be artificially isolated and explored as a thought experiment. By its nature, science fiction explores the extremes by being sensationalist, but if it's entertaining and it helps you think in a different way about the subject then it's educational and belongs here.
We are already dependent on computers that process information in a faster and smarter way than humans can, and they have contributed much to our standard of living. There's no going back, but we need to manage change. Compute costs are likely to keep coming down (Moore's law), so it's better to start now and share what we learn so that everything adapts slowly.
We already face serious existential risks (e.g. nuclear war, climate change, loss of biodiversity, bioterrorism, ocean acidification); you need to decide whether you think AGI could become another one. I personally think that we have bigger worries to confront immediately - for example, we only have 12 years to limit climate change catastrophe - and that progress in AI is going to help with those more immediate concerns.
These are my thoughts on the problems that we need to solve to create AGI. They are introduced here so that, when we review what we do already know, we are thinking about the problem we want to solve.
This meta-course takes an unashamedly deep learning approach (perhaps because I've been using gradient techniques to solve complex real-world problems since 1984 - so like the drunk looking under the lamp post for his lost keys, it might not be the best place to look but it's the only place I can see). More general approaches are:
One book that was very influential for me in the '80s is Gödel, Escher, Bach. It's too old and long to recommend reading in full here, but if you have the time then go for it.
The MIT course is the gentlest introduction, it's accessible and has broad coverage.
This section lists what we know from machine learning that is likely to be useful. It assumes you know enough computer science and maths; if you don't, then dig out the background material in the referenced courses. This gives you the Lego blocks from which to build your AGI creation.
You'll need a solid foundation in Machine Learning. The web is awash with information; for example, pick one from this list. If you know about half of The Most Important Machine Learning Algorithms, that's probably enough foundational material.
There are so many, so just pick one:
Historically people have liked Geoff Hinton's and Andrew Ng's courses.
At the end of it you should know about:
You are going to want to know at least one of:
TensorFlow and PyTorch are the most popular.
You'll need some GPUs to play with; if you are just starting out, read this Deep learning hardware guide.
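As a sanity check that your chosen framework and GPU are working, here is a minimal PyTorch sketch (assuming PyTorch is installed; the toy regression task is only for illustration) that picks a device and runs a few gradient-descent steps:

```python
import torch

# Use the GPU if one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy data: learn y = 3x + 1 with a single linear layer.
x = torch.randn(64, 1, device=device)
y = 3 * x + 1

model = torch.nn.Linear(1, 1).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()    # backpropagate gradients
    optimizer.step()   # gradient-descent update

print(device, loss.item())  # loss should be close to zero
```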
For a complete course, OpenAI have just published Spinning Up in Deep RL. Other resources are:
Slides are here: http://people.eecs.berkeley.edu/~cbfinn/_files/mbrl_cifar.pdf and an older version of this tutorial (about 1.5 years older) is here: https://www.youtube.com/watch?v=iC2a7M9voYU
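Before diving into deep RL it's worth having vanilla reinforcement learning clear in your head. Here is a minimal tabular Q-learning sketch on a made-up corridor environment (the environment and hyperparameters are my own toy assumptions, not taken from the resources above):

```python
import numpy as np

# Toy environment: a corridor of 6 states; action 0 = left, 1 = right.
# Reaching the right-hand end gives reward 1 and ends the episode.
n_states, n_actions = 6, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    for t in range(200):  # cap episode length
        a = rng.integers(n_actions) if rng.random() < epsilon else int(Q[s].argmax())
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Q-learning update: move Q(s, a) towards r + gamma * max_a' Q(s', a')
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if s == n_states - 1:
            break

print(Q.round(2))  # the "go right" column should dominate in every state
```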
Reinforcement learning is a key part of AGI, but it tends to concentrate on toy problems that can be solved. For a discussion of what's missing from classic reinforcement learning that's needed in AGI:
In Reinforcement Learning there is no expert to help learn a policy; Imitation Learning uses an expert who already has a good policy. In Reinforcement Learning we care about learning the best policy; in Imitation Learning we mostly care about getting a good-enough policy, and if we want a much better one we find a better expert.
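The simplest form of imitation learning is behavioural cloning: treat the expert's state-action pairs as a supervised dataset and fit a policy to them. A minimal PyTorch sketch (the "expert" demonstrations here are synthetic, purely to show the shape of the problem):

```python
import torch

# Pretend expert demonstrations: observed states and the (discrete) actions
# the expert took in them. Here the "expert" is a made-up rule of thumb.
states = torch.randn(1000, 4)                # 4-dimensional observations
expert_actions = (states[:, 0] > 0).long()   # two possible actions

policy = torch.nn.Sequential(torch.nn.Linear(4, 32), torch.nn.ReLU(),
                             torch.nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Behavioural cloning is just supervised learning on the expert's choices.
for epoch in range(200):
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(policy(states), expert_actions)
    loss.backward()
    optimizer.step()

# The cloned policy is only as good as the expert; for a much better policy,
# find a better expert (or move to full reinforcement learning).
```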
One can argue that these are not AGI achievements, but they are steps on the way:
Landmark achievements you should know about:
Interesting components:
There's lots of code that implements the main ideas we know about:
(building some of these may require specific compiler versions, e.g. CC=gcc-6 CXX=g++-6). Once you know the basics you'll want to keep up with current research and contribute to the discussion. You can use https://blogtrottr.com to convert an RSS feed into email and https://alerts.google.com to follow keywords.
Newsletters:
Forums:
Podcasts: (be selective, there is a lot of ML and AI mixed in with the AGI):
Organisations:
Conferences:
Each of these resources is a summary of research directions; it's good to know what other people feel are the important problems:
Here are the problems that I think we need to tackle over-and-above deep reinforcement learning:
Finally, a 2017/2018 poll and its results: When will Artificial General Intelligence (AGI) be achieved?
Please give me feedback as to how I can improve this page.