
Jaderberg (2016) Decoupled Neural Interfaces using Synthetic Gradients


Status quo:

  • Training directed neural nets: forward-prop through the computation graph, then backprop of the error signal to update the weights

  • All layers (modules more generally) of the net are therefore locked: each module must wait for the full forward pass before it can compute, and for the full backward pass before it can update (see the sketch after this list)

    • Note: ideas of 'minds locked together' arise quite naturally when thinking about computer vision; see 'On links'
  • via DeepMindAI
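
A minimal NumPy sketch of the synthetic-gradient idea on a toy two-layer linear regression, assuming a linear synthetic gradient module (the paper notes even linear modules work well); all names, sizes, and the toy task are illustrative, not the paper's code. Layer 1 updates immediately from a *predicted* gradient `h @ M`, so it is no longer update-locked, while `M` itself is regressed toward the true gradient once backprop delivers it:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.normal(size=(4, 2))           # fixed target map for a toy regression task

W1 = rng.normal(0, 0.1, size=(4, 8))  # layer 1: updated via synthetic gradient
W2 = rng.normal(0, 0.1, size=(8, 2))  # layer 2: updated via true gradient
M = np.zeros((8, 8))                  # synthetic gradient module: h -> predicted dL/dh

lr = 0.01
for step in range(2001):
    x = rng.normal(size=(16, 4))
    y = x @ T

    # Forward through layer 1, then update it *immediately* with the
    # synthetic gradient -- no waiting for the rest of the forward and
    # backward pass, i.e. layer 1 is decoupled from the layers above it.
    h = x @ W1
    g_hat = h @ M                     # predicted dL/dh
    W1 -= lr * (x.T @ g_hat)

    # Rest of the forward pass and the MSE loss.
    out = h @ W2
    err = out - y                     # dL/dout up to a constant factor
    loss = (err ** 2).mean()

    # The true gradient dL/dh arrives later; use it to train W2 and to
    # regress M's prediction toward the true gradient.
    g_true = err @ W2.T / len(x)
    W2 -= lr * (h.T @ err) / len(x)
    M -= lr * (h.T @ (g_hat - g_true)) / len(x)

    if step % 500 == 0:
        print(f"step {step:4d}  loss {loss:.4f}  "
              f"sg err {((g_hat - g_true) ** 2).mean():.2e}")
```

Since `M` starts at zero, layer 1 is effectively frozen until `M` has learned something, which mirrors the paper's zero-initialized synthetic gradient modules; the loss and the synthetic-gradient prediction error should both fall as training proceeds.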