

Differentiable Implicit Layers

Andreas Look; Simona Doneva; Melih Kandemir; Rainer Gemulla; Jan Peters
In: Computing Research Repository eprint Journal (CoRR), Vol. abs/2010.07078, Pages 0-10, arXiv, 2020.


In this paper, we introduce an efficient backpropagation scheme for unconstrained implicit functions. These functions are parametrized by a set of learnable weights and may optionally depend on some input, making them well suited as learnable layers in a neural network. We demonstrate our scheme on two applications: (i) neural ODEs with the implicit Euler method, and (ii) system identification in model predictive control.
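As a rough illustration of the idea (not the authors' implementation), consider a scalar implicit layer whose output z solves F(z, theta) = 0. By the implicit function theorem, dz/dtheta = -(dF/dz)^{-1} dF/dtheta, so a gradient can be obtained without backpropagating through the forward solver. The sketch below, with a hypothetical implicit Euler step of dz/dt = tanh(theta * z) as the root-finding problem, checks this gradient against a finite difference:

```python
import math

# Implicit layer: z solves F(z, theta) = 0, here one implicit Euler step
# of dz/dt = tanh(theta * z):  F(z, theta) = z - z0 - h * tanh(theta * z).
# Example values are illustrative, not from the paper.

def solve_forward(z0, theta, h, iters=100):
    """Find z with z = z0 + h * tanh(theta * z) by fixed-point iteration."""
    z = z0
    for _ in range(iters):
        z = z0 + h * math.tanh(theta * z)
    return z

def grad_theta(z, theta, h):
    """Implicit function theorem: dz/dtheta = -(dF/dz)^{-1} * dF/dtheta."""
    sech2 = 1.0 - math.tanh(theta * z) ** 2      # derivative of tanh
    dF_dz = 1.0 - h * theta * sech2
    dF_dtheta = -h * z * sech2
    return -dF_dtheta / dF_dz

z0, theta, h = 1.0, 0.5, 0.1
z = solve_forward(z0, theta, h)
g = grad_theta(z, theta, h)

# Finite-difference check: rerun the solver at perturbed theta.
eps = 1e-6
fd = (solve_forward(z0, theta + eps, h) - solve_forward(z0, theta - eps, h)) / (2 * eps)
```

Note that the backward pass touches only the converged solution z, not the iterates of the solver, which is what makes such schemes memory-efficient relative to unrolling the solver.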
