This course is for everyone who wants to build Artificial General Intelligence (AGI) using Deep Learning. By the end of the course you should be at researcher level, that is, you'll know enough to perform original research in the field of AGI (e.g. via peer-reviewed publication).
There are several approaches to AGI; here we focus on the most popular: Machine Learning (ML) and Deep Learning using conventional computer hardware, i.e. the modern computer-science route. This is mainly because it's what I know, but also because it's what I personally believe will yield results soonest. Other approaches are documented in THREAD2.
This meta-course is organised around several threads which can be followed in parallel. Wherever there's a link, it is there to click through and skim-read the content. If it's material that you know about then great, move on. If you don't know it then you might want to read it, understand it, think about it and follow what interests you deeper. Mostly this material is factual, but opinions are useful, so mine are marked in sections; if you think I'm wrong then do let me know.
What do we mean by intelligence? A dictionary definition is "the ability to acquire and apply knowledge and skills".
Another name for Artificial General Intelligence is Strong Artificial Intelligence. This is in contrast to Weak or Narrow Artificial Intelligence. Most uses of the term AI refer to Weak AI, which in turn is often mostly Machine Learning.
The term “Artificial” is misleading, as it means "Made or produced by human beings rather than occurring naturally, especially as a copy of something natural". Our computers may be artificial, but we are trying to create true intelligence, not a human-made substitute.
Isaac Newton said "If I have seen further it is by standing on the shoulders of Giants." I believe that this is generally true, that is, intelligence builds upon itself through communication with other intelligent agents. The consequences for AGI are that (a) we have to be able to communicate with AGI in order for us to consider it intelligent (Turing test), (b) Natural Language is the most efficient form of communication, so we should look to building it into AGI, (c) AGI will bootstrap from our collective knowledge, e.g. the mostly unstructured information on the internet, and (d) AGI could make a good teacher and so improve education worldwide (e.g. as an evolution of the mostly shallow knowledge already present on the internet).
There is so much written on the benefits and dangers of AGI that you've probably already come to a decision on this, but take the time to rethink now. Sam Harris on Artificial Intelligence is a nearly 3-hour accessible compilation and a good place to start (All Sam Harris). In contrast, the main reference, Superintelligence: Paths, Dangers, Strategies by Nick Bostrom, needs a lot of patience and concentration to get through the last chapters (it took me 5 months). The Singularity is a more accessible entry into this viewpoint.
Those working in AGI tend to take a much more pragmatic approach. We tend to think that we are a long way from AGI (both in terms of algorithms and of having the necessary compute) and we expect good things from the next steps. For example, Yoshua Bengio says "Existential risk, for me, is a very unlikely consideration, but still worth academic investigation in the same way as you could say that we should study what could happen if a meteorite came to earth and destroyed it". The Seven Deadly Sins of Predicting the Future of AI is worth reading. Bear in mind There’s No Fire Alarm for AGI, and that AGI researchers are self-selecting - it would be hard for them to believe that AGI is a really bad idea and then spend all day working on it.
A long but considered view of the current situation is given by A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy, and also by Concrete Problems in AI Safety.
Science Fiction (e.g. Asimov's three laws of robotics, Blade Runner, HAL in 2001, Humans and stacks more) is a really useful tool here: it allows one question to be artificially isolated and explored as a thought experiment. By its nature, science fiction explores the extremes by being sensationalist, but if it's entertaining and it helps you think in a different way about the subject then it's educational and belongs here.
We already have lots of serious existential risks (e.g. nuclear war, climate change, loss of biodiversity, bioterrorism, ocean acidification); you need to decide whether you think that AGI could become another one.
I think that we are already dependent on computers that process information in a faster and smarter way than humans can, and they have contributed much to our standard of living. There's no going back, but we need to manage change. Compute costs are likely to keep coming down (Moore's law), so it's better to start now and share what we learn.
The aim of this thread is to introduce the problems that we may need to solve to generate AGI.
This meta-course takes an unashamedly deep learning approach (perhaps because I've been using gradient techniques to solve complex real-world problems since 1984 - so like the drunk looking under the lamp post for his lost keys, it might not be the best place to look but it's the only place I can see). More general approaches are:
One book that was very influential for me in the '80s is Gödel, Escher, Bach. It's too old and long to recommend reading in full here, but if you have the time then go for it.
The MIT course is the gentlest introduction, it's accessible and has broad coverage.
This section lists what we know from machine learning that is likely to be useful. It assumes you know enough computer science and maths; if you don't, then dig out the background material in the referenced courses. This gives you the Lego blocks to build your AGI creation.
You'll need a solid foundation in Machine Learning. The web is awash with information; for example, pick one from this list. If you know about half of The Most Important Machine Learning Algorithms, that's probably enough foundational material.
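To give a flavour of that foundational material, here is a minimal sketch (my own toy example, in plain Python) of linear regression fitted by batch gradient descent, one of the staple algorithms such lists cover:

```python
# Linear regression via batch gradient descent - an illustrative sketch.
# Fits y = w*x + b to a tiny synthetic dataset generated from y = 2x + 1.

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 9.0]

w, b = 0.0, 0.0
lr = 0.02  # learning rate

for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # prints: 2.0 1.0
```

The same gradient-following idea, scaled up to millions of parameters with automatic differentiation, is the heart of the deep learning covered later.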
There are so many, so just pick one:
At the end of it you should know about:
You are going to want to know at least one of:
and if you pick more than one then TensorFlow should be in your set, with Keras and Caffe2 also being popular.
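What all of these frameworks have in common is automatic differentiation: you write the forward computation and the framework computes gradients for you. A minimal sketch, assuming TensorFlow 2.x is installed (the function being differentiated is my own toy choice):

```python
import tensorflow as tf

# Differentiate y = x^2 + 2x at x = 3 using TensorFlow's GradientTape.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2 + 2.0 * x
grad = tape.gradient(y, x)  # dy/dx = 2x + 2 = 8 at x = 3

print(float(grad))  # prints: 8.0
```

Training a network is this same mechanism applied to a loss function over the network's weights, wrapped in an optimiser loop.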
For a complete course, OpenAI have just published Spinning Up in Deep RL. Other resources are:
Reinforcement learning is a key part of AGI, but it tends to concentrate on toy problems that can be solved. For a discussion of what's missing from classic reinforcement learning that's needed for AGI:
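To fix ideas about what "classic reinforcement learning" means here, below is a minimal tabular Q-learning sketch on a toy problem of my own devising: a 1-D corridor of states 0..4, where the agent moves left or right and gets a reward of 1 for reaching state 4.

```python
# Tabular Q-learning on a toy corridor - an illustrative sketch.
import random

random.seed(0)
n_states, goal = 5, 4
Q = [[0.0, 0.0] for _ in range(n_states)]  # Q[state][action]; 0=left, 1=right
alpha, gamma, eps = 0.5, 0.9, 0.3          # learning rate, discount, exploration

for _ in range(500):
    s = 0
    while s != goal:
        # Epsilon-greedy action selection.
        if random.random() < eps:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda act: Q[s][act])
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0
        # Q-learning update: bootstrap from the best action in the next state.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

policy = [max((0, 1), key=lambda act: Q[s][act]) for s in range(goal)]
print(policy)  # greedy policy per state; should be all 1 (move right)
```

This is exactly the kind of fully observable, small, stationary toy problem the critique above is about: the algorithm works beautifully here, but the interesting question is what happens when none of those assumptions hold.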
One can argue that these are not AGI achievements, but they are steps on the way:
Landmark achievements you should know about:
There's lots of code that implements the main ideas we know about:
Once you know the basics you'll want to keep up with current research and contribute to the discussion. You can use https://blogtrottr.com to convert an RSS feed into email and https://alerts.google.com to follow keywords.
Podcasts: (be selective, there is a lot of ML and AI mixed in with the AGI):
Each of these resources is a summary of research directions; it's good to know what other people feel are the important problems:
Each of these resources is an individual research direction moving towards AGI:
Finally, a 2017/2018 poll and its results: When will Artificial General Intelligence (AGI) be achieved?
Please give me feedback as to how I can improve this page.