Running Notes: A list of things I thought were a good idea at the time

Current

  • Wikipedia: DNN known and used in '80s - cite PhD or earlier.
  • Discriminative autoencoder. Build some form of AE - once trained, compute p(z_i|x_i), i.e. mu_i and sigma_i, for all data x_i. Then build a discriminator between these samples and a unit Gaussian. Now we can 'sharpen' an image - start by sampling from the unit Gaussian, then do SGD in discriminator space, i.e. make the minimal change that makes z look more like a real image (see the first sketch after this list).
  • Take photos apart in layers, i.e. remove objects and replace them with background. Generation starts with the background and adds foreground. Videos may be a good source, as the foreground moves and the background doesn't. Run a cycle: guess the gaps, predict the gaps, and repeat.
  • Need to be able to overlay a template at (x, y) (with scale s and orientation theta?). This defines an NN architecture for decomposition.
  • Fast dropout (or any dropout) should zero-mean the input - that way dropout doesn't change the weights.
  • Values GAN (or classifier) - a pain/pleasure classifier on the latent space, used to guide the search when trying to solve a problem.
  • Take a space, second-order decorrelate it, model with BIC-selected diagonal GMMs, expand as c_i * x_j, truncate those that are small (as a fraction of signal power?), and repeat until no signal is left. The inverse is to generate from the latent/noise model, apply the inverse to get c_i * x_j, estimate c_i, fill in the missing parts from the noise distribution, sum to get x_j, and repeat downwards (a sketch of the first two steps follows this list).
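
A minimal sketch of the 'sharpen' step from the discriminative autoencoder idea above, assuming a VAE-style encoder has already produced mu_i and sigma_i for the training data. The network sizes and names here are illustrative stand-ins, not a fixed design:

    # Sketch: discriminate encoded latents from a unit Gaussian, then
    # 'sharpen' a noise sample by gradient steps in discriminator space.
    import torch
    import torch.nn as nn

    LATENT = 16  # assumed latent dimensionality

    # Discriminator between real encoded latents and unit-Gaussian noise.
    disc = nn.Sequential(
        nn.Linear(LATENT, 64), nn.ReLU(),
        nn.Linear(64, 1), nn.Sigmoid(),
    )

    def train_disc(mu, sigma, steps=1000, lr=1e-3):
        """mu, sigma: (N, LATENT) posterior params from the trained encoder."""
        opt = torch.optim.Adam(disc.parameters(), lr=lr)
        bce = nn.BCELoss()
        for _ in range(steps):
            z_real = mu + sigma * torch.randn_like(mu)  # samples of p(z_i|x_i), label 1
            z_fake = torch.randn_like(mu)               # unit Gaussian, label 0
            loss = bce(disc(z_real), torch.ones(len(mu), 1)) \
                 + bce(disc(z_fake), torch.zeros(len(mu), 1))
            opt.zero_grad(); loss.backward(); opt.step()

    def sharpen(steps=200, lr=0.05):
        """Start from unit-Gaussian noise; take the minimal steps that make z
        look 'real' to the discriminator. Decode z to recover the image."""
        z = torch.randn(1, LATENT, requires_grad=True)
        opt = torch.optim.Adam([z], lr=lr)
        for _ in range(steps):
            loss = -torch.log(disc(z) + 1e-8).sum()  # push z towards the 'real' side
            opt.zero_grad(); loss.backward(); opt.step()
        return z.detach()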
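
For the decorrelate-and-model idea in the last bullet, the sketch below covers only the two well-defined steps - second-order decorrelation via PCA whitening, then a diagonal-covariance GMM with the component count chosen by BIC. The recursive expand/truncate step is not attempted, and all names are illustrative:

    # Sketch: whiten the data, then fit the diagonal GMM with the lowest BIC.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def whiten(X):
        """PCA-whiten X so its covariance becomes the identity."""
        Xc = X - X.mean(axis=0)
        vals, vecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        W = vecs / np.sqrt(vals + 1e-12)  # scale eigenvectors by 1/sqrt(eigenvalue)
        return Xc @ W, W

    def fit_bic_gmm(X, max_k=8):
        """Return the diagonal-covariance GMM with the lowest BIC over 1..max_k."""
        best, best_bic = None, np.inf
        for k in range(1, max_k + 1):
            gmm = GaussianMixture(n_components=k, covariance_type='diag').fit(X)
            bic = gmm.bic(X)
            if bic < best_bic:
                best, best_bic = gmm, bic
        return best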

Bad/done ideas

  • Gut feeling is that if you eliminate pairwise correlation then it is very difficult to have higher-order correlations, so that's good enough. NOTE: This really isn't true; most current NN architecture work is based on finding new structure in each layer.
  • Buy a clock spring to use as acoustic memory. NOTE: not pursued as it's not visual - currently going for bubble memory.
  • Tooway does remote IP. NOTE: not interested in this anymore.
  • The VQ time-series predictor trick is to run the encoder on the decoder outputs, i.e. using the noisy signal rather than the real one. This way both the encoder and the decoder do the same thing and everything stays in sync (see the sketch after this list). NOTE: This doesn't prove that this is the right way to train, just the right way to run.
  • Look at the rev.com ASR service. NOTE: rev.com don't advertise a pure ASR service.
  • Write up sparse dynamic VQ as the main way to model time - but it doesn't get to concept. NOTE: Oops - forgotten how this works…
  • When the goal is to learn as much as possible about the world, there is no exploration/exploitation problem - it's all exploration.
  • SyndicateRoom challenge amongst Angels. NOTE: with SR, who are yet to get back to me.
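
For the VQ predictor trick above, a toy closed-loop sketch; the codebook and one-step predictor are stand-ins for trained components:

    # Sketch: run the encoder on the decoder's output, not the true signal,
    # so encoder and decoder stay in sync on the same reconstruction.
    import numpy as np

    codebook = np.linspace(-1.0, 1.0, 16)  # toy scalar VQ codebook

    def encode(x):
        """Nearest-codeword index for a scalar sample."""
        return int(np.argmin(np.abs(codebook - x)))

    def predict(idx):
        """Stand-in one-step predictor from the current codeword."""
        return 0.9 * codebook[idx]  # e.g. a decaying AR(1)-style guess

    def run(x0, steps=10):
        x, out = x0, []
        for _ in range(steps):
            idx = encode(x)      # encode the *reconstructed* signal
            x = predict(idx)     # the predictor produces the next value
            out.append(x)
        return out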