IMAGI

“The true sign of intelligence is not knowledge but imagination. I have no special talent. I am only passionately curious.” Albert Einstein.

Vision/Mission/Strategy

Vision

The difference you will create in your customers' lives

Mission

An ambitious but achievable position in our market that talks to our vision and leads towards it

Strategy

An effective mixture of thought and action with a basic underlying structure that is a step towards the mission

Components

  • correlated: If I'm thinking about X I should also think about Y because they are often related.
  • caused: If do(X), then Y will happen with probability p.
  • reasonableness: What makes sense? What is the probability of this thought/event happening?
  • complete: Is X a goal state (answer) for problem Y?
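
As a concrete reading of these four components, here is a minimal interface sketch in Python (all names and the storage layout are assumptions for illustration, not a design decision):

  class KnowledgeBase:
      def __init__(self):
          self.correlations = {}   # concept -> set of often-related concepts
          self.causes = {}         # (cause, effect) -> probability p

      def correlated(self, x):
          """If thinking about X, which Y are often related?"""
          return self.correlations.get(x, set())

      def caused(self, x, y):
          """P(Y | do(X)): probability Y happens if we intervene and do X."""
          return self.causes.get((x, y), 0.0)

      def reasonable(self, x):
          """How plausible is this thought/event? (stub: uniform prior)"""
          return 0.5

      def complete(self, x, problem):
          """Is X a goal state (answer) for the given problem? (stub)"""
          return False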

High Objective: To become as smart as possible, that is, to “prove as many theorems”, “understand the world” or just “know” as much as possible. This avoids the exploration/exploitation problem, as everything is exploration/learning. It also mitigates the objective-mismatch problem, as learning is reasonably benign.

Low Objective: To convert correlations to causations. Correlations may be obtained unsupervised; the task is then to work out the causal structure given what is already known about the correlated and causal world.

How might it fit together:

  • input from audio, video (at about 30 fps) or text from internet
  • “state compression” style network - keep as much of the input as possible (online version of an RNN transformer)
  • several compressed representations, maybe all powers of 2 down to 1
  • correlate with external rewards - same way as outcomes are scored below.
  • causate - attention mechanism on compressed representations, then transform to get opportunity, action and outcome vectors (see the sketch after this list). Outcomes are scored; potentially good outcomes are expanded as opportunities with potential actions to get more outcomes. opportunity/action/outcome is more than state/action/next state:
    • temporal order is implied, but no time scale
    • all are partial observations - not trying to predict complete next state, just a component
    • more like concepts than state vectors
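
A loose sketch of this pipeline, assuming PyTorch and toy sizes. Everything here (module names, dimensions, the mean-pooling) is an assumption for illustration:

  import torch
  import torch.nn as nn

  class CompressivePipeline(nn.Module):
      """Chain of compressors at power-of-two widths, attention across the
      compressed representations, three partial-observation output heads."""
      def __init__(self, input_dim=512, common_dim=64,
                   widths=(256, 128, 64, 32, 16, 8, 4, 2, 1)):
          super().__init__()
          dims = (input_dim,) + tuple(widths)
          # "state compression": successive encoders down to width 1
          self.compressors = nn.ModuleList(
              nn.Linear(dims[i], dims[i + 1]) for i in range(len(widths)))
          # lift every compressed representation into a common space
          self.lifts = nn.ModuleList(nn.Linear(w, common_dim) for w in widths)
          self.attn = nn.MultiheadAttention(common_dim, num_heads=4,
                                            batch_first=True)
          # opportunity/action/outcome: partial observations, not a full next state
          self.opportunity = nn.Linear(common_dim, common_dim)
          self.action = nn.Linear(common_dim, common_dim)
          self.outcome = nn.Linear(common_dim, common_dim)

      def forward(self, x):                     # x: (batch, input_dim)
          reps, h = [], x
          for comp, lift in zip(self.compressors, self.lifts):
              h = torch.tanh(comp(h))           # next compression level
              reps.append(lift(h))
          seq = torch.stack(reps, dim=1)        # (batch, levels, common_dim)
          ctx, _ = self.attn(seq, seq, seq)     # attend across levels
          pooled = ctx.mean(dim=1)
          return (self.opportunity(pooled), self.action(pooled),
                  self.outcome(pooled))

  model = CompressivePipeline()
  opp, act, out = model(torch.randn(2, 512))    # three concept-like vectors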

Learning through:

  • Introspection: Self-generated goals and exploration to find answers
  • Experimentation: Interaction with the world to find out how it works
  • Literature research: Learning from what others have written
  • Teaching: Learning by being shown how to do something
  • Conversation: Learning through questioning others

A GAN on the “caused” database allows the generation of ideas which are not known to be true but are plausibly true, being close to existing ideas. For each idea (a sketch in code follows this list):

  • Assuming a start from the problem:
    • Generate ideas from correlated
    • Check that they are reasonable and a “logical step” from the problem statement and known facts
    • Do they make progress towards “complete”?
  • Assuming divide and conquer:
    • Pick a point, establish if it's true, see if it usefully divides the path into two easier problems
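
A hedged sketch of the generate-and-check loop, starting from a problem. `generate_ideas` stands in for the GAN on the “caused” database; `reasonable`, `logical_step` and `completes` are hypothetical stubs for the Components above:

  import random

  def generate_ideas(problem, n=10):
      return [f"idea-{random.randrange(1000)}" for _ in range(n)]

  def reasonable(idea):
      return random.random()               # plausibility in [0, 1]

  def logical_step(idea, problem, known_facts):
      return reasonable(idea) > 0.5        # stub: does it follow logically?

  def completes(idea, problem):
      return random.random() > 0.95        # stub: is this a goal state?

  def solve_from_problem(problem, known_facts, budget=100):
      for _ in range(budget):
          for idea in generate_ideas(problem):
              if not logical_step(idea, problem, known_facts):
                  continue                 # fails the reasonableness check
              if completes(idea, problem):
                  return idea              # reached a goal state (answer)
              known_facts.append(idea)     # keep as an intermediate fact
      return None

  print(solve_from_problem("prove X", known_facts=[]))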

Long Term Goals

  • A conversational bot which is able to pass the Turing test.
    • Why: Because it can be text based and there's lots of knowledge that's text based; it can learn in parallel and doesn't need a physical robot.
  • High level Q&A reasoning, e.g. from Wikipedia
    • Why: Educational tool, also to help learning to think

Short Term Goals

  • Autogeneration of high quality 'photos'
  • Super resolution of old films
  • Autogeneration of backgrounds for big-budget Hollywood films

Compute

Full-system commodity compute came down by a factor of 10,000 in 30 years (https://aiimpacts.org/wikipedia-history-of-gflops-costs), so expect available cheap compute to rise at e^0.3t, where t is in years (ln 10,000 / 30 ≈ 0.31).

The 2008 paper Whole Brain Emulation: A Roadmap says that supercomputing is much cheaper. We have supercomputers that will do spiking-neuron emulation/simulation now. By 2040 we expect the cost to be down to $1m - probably too high - the current top supercomputer, Summit, does 148.6 petaFLOPS and cost around $200 million (https://en.wikipedia.org/wiki/Summit_(supercomputer)).
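
A back-of-envelope check of the numbers above (the inputs are the figures already quoted; the extrapolation is illustrative only):

  import math

  # The factor-of-10,000-in-30-years figure implies a rate of
  # ln(10000)/30 ~= 0.31 per year, i.e. roughly e^0.3t.
  factor, years = 10_000, 30
  rate = math.log(factor) / years
  print(f"implied rate: {rate:.2f} per year")

  # Summit: ~148.6 petaFLOPS for ~$200M (figures quoted above)
  cost_per_pflops = 200e6 / 148.6          # ~$1.35M per petaFLOPS today
  for t in (10, 2040 - 2019):
      future = cost_per_pflops * math.exp(-rate * t)
      print(f"+{t} years: ~${future:,.0f} per petaFLOPS")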

Location

Prices are for 1 or 5 days per week and charged monthly ex-VAT. ideaSpace first, then ideaSpace and Eagle?

OLD

Potential other names

  • causate (com ($2k), net, org, io, ai)
  • Artgeni (ARTificial GENeral Intelligence) - all but .com available

https://80000hours.org/podcast/episodes/the-world-needs-ai-researchers-heres-how-to-become-one/

- Text based: lots of knowledge is text based, it can learn in parallel, and it doesn't need a physical robot.

Big Picture:

Nested RNNs (LSTMs), each with a GAN that learns the state vector space. Each RNN has inputs from the GAN representation of the last layer and outputs to lower layers. At the bottom is input from the senses and output to motor control. The states from all levels map to “emotions” (pleasure/pain signals).
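
A loose sketch of this picture, assuming PyTorch; the per-level GANs and the training loop are elided, and all sizes and wiring choices are assumptions:

  import torch
  import torch.nn as nn

  class NestedRNNs(nn.Module):
      def __init__(self, sense_dim=64, hidden=128, levels=3, motor_dim=8):
          super().__init__()
          self.levels = nn.ModuleList(
              nn.LSTMCell(sense_dim if i == 0 else hidden, hidden)
              for i in range(levels))
          self.top_down = nn.ModuleList(       # outputs to lower layers
              nn.Linear(hidden, hidden) for _ in range(levels - 1))
          self.motor = nn.Linear(hidden, motor_dim)
          self.emotion = nn.Linear(hidden * levels, 1)  # pleasure/pain signal

      def forward(self, sense, states):
          new_states, x = [], sense
          for cell, (h, c) in zip(self.levels, states):
              h, c = cell(x, (h, c))           # bottom-up pass
              new_states.append((h, c))
              x = h
          for i in reversed(range(len(self.top_down))):
              h, c = new_states[i]
              h = h + torch.tanh(self.top_down[i](new_states[i + 1][0]))
              new_states[i] = (h, c)           # top-down modulation
          emotion = self.emotion(
              torch.cat([h for h, _ in new_states], dim=-1))
          return self.motor(new_states[0][0]), emotion, new_states

  net = NestedRNNs()
  states = [(torch.zeros(1, 128), torch.zeros(1, 128)) for _ in range(3)]
  motor, emotion, states = net(torch.randn(1, 64), states)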

Thinking about things creates memory, as that point in the space is trained on with the GAN.

ADD in

  • the job of the brain is to keep the body in allostasis
  • add in mental time travel

Assumptions:

  • Build a machine that learns as much about the world as possible - i.e. metric is how much real-world knowledge can be accumulated
    • Pleasure/pain are seen as heuristic enablers/disablers to the overall objective function
  • No need to be biologically inspired or use standard DNN techniques - anything that works is good enough
  • Correlation implies causation until proved otherwise (“a machine for jumping to conclusions”)
  • All input goes through the same pathway, whether physically observed (direct senses on world) or indirectly observed (text or pictures) or “dreamed up” (thoughts)

Operating principles:

  • Efficiency is important, so compile multi-step thoughts (System 2) into single-step heuristics (System 1) - a sketch follows this list
    • Associate cause/effect (input/output), discarding the processing steps
  • The space of thought vectors is much larger than what is physically possible and likely
    • There is a “sanity check” function that says whether any thought vector is likely
  • There is an embedded subspace that spans what is likely
    • Can learn the subspace with a GAN (do androids dream of electric thought vectors?)
  • Need a degree of confidence - do the System1 compiled heuristics know the answer?
    • If not, need a mechanism for generating plausible next steps and a search to get answer
  • Need a novelty detector - is what I have just experienced new?
    • If so, need to store in memory and associate with surrounding thoughts
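
A minimal sketch of the first principle: compiling System 2 into System 1 by caching input/output pairs and discarding the intermediate steps (all names are illustrative):

  # "System 2 -> System 1" compilation as a cache: the multi-step search
  # runs once, then only its input/output association is kept.
  system1 = {}                             # compiled cause -> effect heuristics

  def system2_solve(problem):
      """Slow, deliberate multi-step search (stub)."""
      steps = [problem, problem + " -> lemma", problem + " -> answer"]
      return steps[-1]                     # only the final effect survives

  def think(problem):
      if problem in system1:               # System 1: single-step lookup
          return system1[problem]
      answer = system2_solve(problem)      # System 2: multi-step processing
      system1[problem] = answer            # compile: keep cause/effect,
      return answer                        # discard the processing steps

  print(think("2+2"))                      # computed the slow way first time
  print(think("2+2"))                      # a single-step heuristic thereafter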

Attention:

How does the attention mechanism cover all words in a machine translation task?

  • Understand “Locality-Sensitive Hashing (LSH)” http://www.cs.princeton.edu/cass/papers/mplsh_vldb07.pdf
  • Understand “Scalable and Sustainable Deep Learning via Randomized Hashing” https://arxiv.org/pdf/1602.08194.pdf Can compute approximate dot products very quickly. Has a tie-in to adaptive dropout. Need to understand, as it should speed up attention mechanisms.
  • Idea - train attention mechanisms with noise contrastive estimation; that way no softmax is needed and it scales better in training. Use the LSH hash (above) to evaluate only the most likely vectors that contribute to attention (sketch below).
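
A sketch of the LSH idea applied to attention: hash queries and keys with random hyperplanes, then take the softmax only within a bucket rather than over all n keys. This is a NumPy illustration of the principle, not the method of either paper:

  import numpy as np

  rng = np.random.default_rng(0)
  d, n, n_planes = 32, 1000, 8
  keys = rng.standard_normal((n, d))
  values = rng.standard_normal((n, d))
  planes = rng.standard_normal((d, n_planes))  # random hyperplanes

  def lsh_bucket(x):
      """Sign pattern against the hyperplanes; near vectors tend to collide."""
      bits = (x @ planes) > 0
      return int(np.packbits(bits.astype(np.uint8), bitorder="little")[0])

  buckets = {}
  for i, k in enumerate(keys):
      buckets.setdefault(lsh_bucket(k), []).append(i)

  def approx_attention(q):
      idx = buckets.get(lsh_bucket(q), [])
      if not idx:
          return np.zeros(d)               # no candidate keys in this bucket
      scores = keys[idx] @ q               # dot products within the bucket only
      w = np.exp(scores - scores.max())
      w /= w.sum()                         # softmax over candidates, not all n
      return w @ values[idx]

  out = approx_attention(keys[0] + 0.01 * rng.standard_normal(d))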

Fast learning:

  • One-Shot Learning in Discriminative Neural Networks: https://arxiv.org/pdf/1707.05562.pdf
  • Consider softmax as the posterior over n points with the same covariance matrix - this should allow for Bayesian statistics in learning (numerical check after this list).
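
A quick numerical check of that reading: with one mean per class and a shared (here isotropic) covariance, the Gaussian class posterior is exactly a softmax of linear scores, so a softmax layer can be read as a posterior:

  import numpy as np

  rng = np.random.default_rng(1)
  d, n, sigma2 = 5, 4, 0.7
  mus = rng.standard_normal((n, d))    # one mean per class, shared covariance
  x = rng.standard_normal(d)

  # Bayes posterior with equal priors
  log_lik = -((x - mus) ** 2).sum(axis=1) / (2 * sigma2)
  posterior = np.exp(log_lik - log_lik.max())
  posterior /= posterior.sum()

  # Softmax of the equivalent linear scores w_k . x + b_k
  scores = mus @ x / sigma2 - (mus ** 2).sum(axis=1) / (2 * sigma2)
  softmax = np.exp(scores - scores.max())
  softmax /= softmax.sum()

  print(np.allclose(posterior, softmax))   # True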

Resources:

How does GAN video prediction work?

  • UNDERSTAND arXiv:1711.04994 - “Error Encoding Network”

https://arxiv.org/pdf/1806.03027.pdf https://arxiv.org/pdf/1711.09020.pdf

Free will:

  • The ability to model the world and to plan ahead so far that current actions are not obviously related to the current or past states of the world.
  • Assume:
    • There is a “what if” search mechanism to evaluate possible outcomes
    • Only a few of the scenarios are useful
    • It is advantageous to communicate the reasons for a decision - better outcomes from group behaviour; we may be wrong, may learn, may teach others
  • Then:
    • An introspection mechanism for the top decisions is useful so that they may be communicated
    • It is retrospective; it's of the form “I thought this because”
    • It generates a sense of self/consciousness

Companies to watch: A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3070741

  • Deepmind
  • FAIR
  • OpenAI
  • Prowler.io
  • Vicarious
  • SenseTime
  • The MIT Intelligence Quest

MORE STUFF TO ADD

http://www.agi-society.org/

Consciousness and hypnosis. Focus and attention: under hypnosis people are more suggestible, and the region of the brain that makes people feel responsible for their actions is suppressed. This ties to reinforcement learning and reward allocation: we need to decide whether what happens is a result of what we do. If it is, we learn to do better; if not, we are modelling the outside world. This is a direct sense of self, or consciousness.

Need to read and understand https://towardsdatascience.com/a-new-kind-of-deep-neural-networks-749bcde19108?_utm_source=1-2-2

MORE STUFF TO ORGANISE:

Value of state/action: V(s, a). The utility of a state is the expectation (or maximisation) of value over all actions: U(s) = E_a[V(s, a)] (or max_a V(s, a)). Have a confidence on U(s): C(s).

Policy, P, generates actions from states, a = P(s).

Improve the policy, P(s), by running forwards over actions until we are fairly certain of a significant value (other stopping criteria are available). Repeat whilst the AGI is in the same state, i.e. “thinking” about the same thing. This is just sampling from the current distributions, but it could allow explanation: X leads to Y, which leads to Z, which I know is good/bad, therefore I will (not) do X. A sketch follows.
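
A hedged sketch of this rollout loop; the policy, world model and value estimate are random stubs, and the stopping criterion is just one possibility:

  import random

  def policy(state):                       # a = P(s): current policy (stub)
      return random.choice(["left", "right"])

  def step(state, action):                 # world/imagination model (stub)
      return state + "/" + action

  def value(state, action):                # V(s, a) estimate (stub)
      return random.gauss(0.0, 1.0)

  def rollout(state, horizon=5, threshold=2.0):
      chain, total = [], 0.0
      for _ in range(horizon):
          a = policy(state)
          total += value(state, a)
          chain.append((state, a))
          if abs(total) > threshold:       # stop once value looks significant
              break
          state = step(state, a)
      return total, chain                  # value plus an explainable chain

  best = max((rollout("s0") for _ in range(20)), key=lambda r: r[0])
  print(best)   # the chain is the "X leads to Y leads to Z" explanation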

OLD NAMES

Japanese:

  • sensAI - teacher - can be sensei
  • kohAI - younger
  • brothers, siblings (kyōdai)
  • sisters (shimai)
  • society (shakai)
  • economy, economics (keizai)
  • request (irai)
  • kawai.ai - cute - really kawaii
  • now (genzai)
  • future (mirai)
  • expensive, high (takai)
  • small (chiisai)
  • young (wakai)

  • ちかい、近い – near, close (chikai)
  • しゃかい、社会 – society (shakai)
  • けいざい、経済 – economy, economics (keizai)
  • いらい、依頼 – request (irai)
  • げんざい、現在 – now (genzai)
  • みらい、未来 – future (mirai)
  • たかい、高い – expensive, high (takai)
  • ちいさい、小さい – small (chiisai)
  • わかい、若い – young (wakai)
  • やわらかい、柔らかい – soft (yawarakai)
  • かたい、硬い、堅い – hard (katai)
  • つめたい、冷たい – cold (tsumetai)
  • うまい、美味い、旨い – delicious, appetizing (umai)
  • まずい、不味い – tastes awful (mazui)
  • あまい、甘い – sweet (amai)
  • からい、辛い – hot [spicy] (karai)
  • しょっぱい、塩っぱい – salty (shoppai)
  • にがい、苦い – bitter (nigai)
  • こわい、怖い、恐い – scary (kowai)
  • いたい、痛い – painful (itai)
  • くさい、臭い – stinky (kusai)
  • つらい、辛い – painful, heart-breaking (tsurai)
  • はい – yes (hai)
  • たい – indicates desire to perform verb (tai)
  • くらい、ぐらい – approximately, about (kurai)
