Automatic differentiation

In mathematics and computer algebra, automatic differentiation (AD), also called algorithmic differentiation or computational differentiation,[1][2] is a set of techniques to numerically evaluate the derivative of a function specified by a computer program. AD exploits the fact that every computer program, no matter how complicated, executes a sequence of elementary arithmetic operations (addition, subtraction, multiplication, division, etc.) and elementary functions (exp, log, sin, cos, etc.). By applying the chain rule repeatedly to these operations, derivatives of arbitrary order can be computed automatically, accurately to working precision, and using at most a small constant factor more arithmetic operations than the original program.

Automatic differentiation is distinct from symbolic differentiation and from numerical differentiation (the method of finite differences).

Figure 1: How automatic differentiation relates to symbolic differentiation

These classical methods run into problems: symbolic differentiation leads to inefficient code (unless carefully done) and faces the difficulty of converting a computer program into a single expression, while numerical differentiation can introduce round-off errors in the discretization process and cancellation. Both classical methods have problems with calculating higher derivatives, where the complexity and errors increase. Finally, both classical methods are slow at computing the partial derivatives of a function with respect to many inputs, as is needed for gradient-based optimization algorithms. Automatic differentiation solves all of these problems, at the expense of introducing more software dependencies.

The chain rule, forward and reverse accumulation

Fundamental to AD is the decomposition of differentials provided by the chain rule. For the simple composition y = g(h(x)) = g(w), where w = h(x), the chain rule gives

\frac{dy}{dx} = \frac{dy}{dw} \frac{dw}{dx}

Usually, two distinct modes of AD are presented, forward accumulation (or forward mode) and reverse accumulation (or reverse mode). Forward accumulation specifies that one traverses the chain rule from inside to outside (that is, first one computes dw/dx and then dy/dw), while reverse accumulation traverses it from outside to inside.

Forward accumulation

Figure 2: Example of forward accumulation with computational graph

In forward accumulation AD, one first fixes the independent variable with respect to which differentiation is performed and computes the derivative of each sub-expression recursively. In a pen-and-paper calculation, one can do so by repeatedly substituting the derivative of the inner functions in the chain rule:

\frac{\partial y}{\partial x}
= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x}
= \frac{\partial y}{\partial w_1} \left(\frac{\partial w_1}{\partial w_2} \frac{\partial w_2}{\partial x}\right)
= \frac{\partial y}{\partial w_1} \left(\frac{\partial w_1}{\partial w_2} \left(\frac{\partial w_2}{\partial w_3} \frac{\partial w_3}{\partial x}\right)\right)
= \cdots

This can be generalized to multiple variables as a matrix product of Jacobians.
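For instance, if w = h(x) maps \mathbb{R}^n to \mathbb{R}^k and y = g(w) maps \mathbb{R}^k to \mathbb{R}^m, then

\frac{\partial y}{\partial x} = \frac{\partial y}{\partial w} \, \frac{\partial w}{\partial x}

where \partial y / \partial w is the m \times k Jacobian matrix of g and \partial w / \partial x is the k \times n Jacobian matrix of h.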

Compared to reverse accumulation, forward accumulation is very natural and easy to implement as the flow of derivative information coincides with the order of evaluation. One simply augments each variable w with its derivative (stored as a numerical value, not a symbolic expression),

\dot w = \frac{\partial w}{\partial x}

as denoted by the dot. The derivatives are then computed in sync with the evaluation steps and combined with other derivatives via the chain rule.

As an example, consider the function:

\begin{align}
z
&= f(x_1, x_2) \\
&= x_1 x_2 + \sin x_1 \\
&= w_1 w_2 + \sin w_1 \\
&= w_3 + w_4 \\
&= w_5
\end{align}

For clarity, the individual sub-expressions have been labeled with the variables w_i.

The choice of the independent variable with respect to which differentiation is performed affects the seed values \dot w_1 and \dot w_2. Suppose one is interested in the derivative of this function with respect to x1. In this case, the seed values should be set to:

\begin{align}
\dot w_1 = \frac{\partial x_1}{\partial x_1} = 1 \\
\dot w_2 = \frac{\partial x_2}{\partial x_1} = 0
\end{align}

With the seed values set, one may then propagate the derivative values using the chain rule, as shown in the table below. Figure 2 shows this process pictorially as a computational graph.

\begin{array}{l|l}
\text{Operations to compute value} &
\text{Operations to compute derivative}
\\
\hline
w_1 = x_1 &
\dot w_1 = 1 \text{ (seed)}
\\
w_2 = x_2 &
\dot w_2 = 0 \text{ (seed)}
\\
w_3 = w_1 \cdot w_2 &
\dot w_3 = w_2 \cdot \dot w_1 + w_1 \cdot \dot w_2
\\
w_4 = \sin w_1 &
\dot w_4 = \cos w_1 \cdot \dot w_1
\\
w_5 = w_3 + w_4 &
\dot w_5 = \dot w_3 + \dot w_4
\end{array}

To compute the gradient of this example function, which requires the derivatives of f with respect to not only x1 but also x2, one must perform an additional sweep over the computational graph using the seed values \dot w_1 = 0; \dot w_2 = 1.
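These two forward sweeps can be written out directly in code. The following is a minimal Python sketch; the function name f_forward and its argument layout are illustrative, not taken from any AD library:

    import math

    def f_forward(x1, x2, dx1, dx2):
        # One sweep of forward accumulation for z = x1*x2 + sin(x1).
        # (dx1, dx2) are the seed values for dot-w1 and dot-w2 in the table above.
        w1, dw1 = x1, dx1
        w2, dw2 = x2, dx2
        w3, dw3 = w1 * w2, w2 * dw1 + w1 * dw2        # product rule
        w4, dw4 = math.sin(w1), math.cos(w1) * dw1    # chain rule for sin
        w5, dw5 = w3 + w4, dw3 + dw4                  # sum rule
        return w5, dw5

    # The gradient at (x1, x2) = (2.0, 3.0) takes two sweeps, one per input:
    _, df_dx1 = f_forward(2.0, 3.0, 1.0, 0.0)   # seeds (1, 0)
    _, df_dx2 = f_forward(2.0, 3.0, 0.0, 1.0)   # seeds (0, 1)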

The computational complexity of one sweep of forward accumulation is proportional to the complexity of the original code.

Forward accumulation is more efficient than reverse accumulation for functions f : ℝ^n → ℝ^m with m ≫ n, as only n sweeps are necessary, compared to m sweeps for reverse accumulation.

Reverse accumulation

Figure 3: Example of reverse accumulation with computational graph

In reverse accumulation AD, one first fixes the dependent variable to be differentiated and computes the derivative with respect to each sub-expression recursively. In a pen-and-paper calculation, one can perform the equivalent by repeatedly substituting the derivative of the outer functions in the chain rule:

\frac{\partial y}{\partial x}
= \frac{\partial y}{\partial w_1} \frac{\partial w_1}{\partial x}
= \left(\frac{\partial y}{\partial w_2} \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}
= \left(\left(\frac{\partial y}{\partial w_3} \frac{\partial w_3}{\partial w_2}\right) \frac{\partial w_2}{\partial w_1}\right) \frac{\partial w_1}{\partial x}
= \cdots

In reverse accumulation, the quantity of interest is the adjoint, denoted with a bar (\bar w); it is the derivative of a chosen dependent variable with respect to a subexpression w:

\bar w = \frac{\partial y}{\partial w}

Reverse accumulation traverses the chain rule from outside to inside, or in the case of the computational graph in Figure 3, from top to bottom. The example function is real-valued, and thus there is only one seed for the derivative computation, and only one sweep of the computational graph is needed in order to calculate the (two-component) gradient. This is only half the work when compared to forward accumulation, but reverse accumulation requires the storage of the intermediate variables w_i as well as the instructions that produced them in a data structure known as a Wengert list (or "tape"),[3] which may represent a significant memory issue if the computational graph is large. This can be mitigated to some extent by storing only a subset of the intermediate variables and then reconstructing the necessary work variables by repeating the evaluations, a technique known as checkpointing.

The operations to compute the derivative using reverse accumulation are shown in the table below (note the reversed order):

\begin{array}{l}
\text{Operations to compute derivative}
\\ \hline
\bar w_5 = 1 \text{ (seed)}
\\
\bar w_4 = \bar w_5
\\
\bar w_3 = \bar w_5
\\
\bar w_2 = \bar w_3 \cdot w_1
\\
\bar w_1 = \bar w_3 \cdot w_2 + \bar w_4 \cdot \cos w_1
\end{array}
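A minimal Python sketch of the same computation in reverse mode follows; here the "tape" is simply the set of intermediate values kept alive between the forward and backward passes, and all names are illustrative:

    import math

    def f_reverse(x1, x2):
        # Forward pass: evaluate and keep the intermediate values (the "tape").
        w1, w2 = x1, x2
        w3 = w1 * w2
        w4 = math.sin(w1)
        w5 = w3 + w4
        # Backward pass: accumulate adjoints in the reverse order of the table.
        bw5 = 1.0                               # seed
        bw4 = bw5                               # from w5 = w3 + w4
        bw3 = bw5
        bw2 = bw3 * w1                          # from w3 = w1 * w2
        bw1 = bw3 * w2 + bw4 * math.cos(w1)     # from w3 = w1*w2 and w4 = sin(w1)
        return w5, (bw1, bw2)                   # value and the complete gradient

    value, gradient = f_reverse(2.0, 3.0)       # one sweep yields both partials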

The data flow graph of a computation can be manipulated to calculate the gradient of its original calculation. This is done by adding an adjoint node for each primal node, connected by adjoint edges which parallel the primal edges but flow in the opposite direction. The nodes in the adjoint graph represent multiplication by the derivatives of the functions calculated by the nodes in the primal. For instance, addition in the primal causes fanout in the adjoint; fanout in the primal causes addition in the adjoint; a unary function y = f(x) in the primal causes x̄ = ȳ f′(x) in the adjoint; etc.

Reverse accumulation is more efficient than forward accumulation for functions f : ℝ^n → ℝ^m with m ≪ n, as only m sweeps are necessary, compared to n sweeps for forward accumulation.

Reverse mode AD was first published in 1970 by Seppo Linnainmaa in his master's thesis.[4][5][6]

Backpropagation of errors in multilayer perceptrons, a technique used in machine learning, is a special case of reverse mode AD.

Beyond forward and reverse accumulation

Forward and reverse accumulation are just two (extreme) ways of traversing the chain rule. The problem of computing a full Jacobian of f : ℝ^n → ℝ^m with a minimum number of arithmetic operations is known as the optimal Jacobian accumulation (OJA) problem, which is NP-complete.[7] Central to this proof is the idea that there may exist algebraic dependencies between the local partials that label the edges of the graph. In particular, two or more edge labels may be recognized as equal. The complexity of the problem is still open if it is assumed that all edge labels are unique and algebraically independent.

Automatic differentiation using dual numbers

Forward mode automatic differentiation is accomplished by augmenting the algebra of real numbers and obtaining a new arithmetic. An additional component is added to every number to represent the derivative of a function at that number, and all arithmetic operators are extended for the augmented algebra. The augmented algebra is the algebra of dual numbers.

Replace every number x with the number x + x'\varepsilon, where x' is a real number, but \varepsilon is an abstract number with the property \varepsilon^2=0 (an infinitesimal; see Smooth infinitesimal analysis). Using only this, we get for the regular arithmetic

\begin{align}
      (x + x'\varepsilon) + (y + y'\varepsilon) &= x + y + (x' + y')\varepsilon \\
  (x + x'\varepsilon) \cdot (y + y'\varepsilon) &= xy + xy'\varepsilon + yx'\varepsilon + x'y'\varepsilon^2 = xy + (x y' + yx')\varepsilon
\end{align}

and likewise for subtraction and division.

Now, we may calculate polynomials in this augmented arithmetic. If P(x) = p_0 + p_1 x + p_2x^2 + \cdots + p_n x^n, then

\begin{align}
  P(x + x'\varepsilon) &= p_0 + p_1(x + x'\varepsilon) + \cdots + p_n (x + x'\varepsilon)^n \\
                       &= p_0 + p_1 x + \cdots + p_n x^n + p_1x'\varepsilon + 2p_2xx'\varepsilon + \cdots + np_n x^{n-1} x'\varepsilon \\
                       &= P(x) + P^{(1)}(x)x'\varepsilon
\end{align}

where P^{(1)} denotes the derivative of P with respect to its first argument, and x', called a seed, can be chosen arbitrarily.

The new arithmetic consists of ordered pairs, elements written \langle x, x' \rangle, with ordinary arithmetic on the first component and first-order differentiation arithmetic on the second component, as described above. Extending the above results on polynomials to analytic functions we obtain a list of the basic arithmetic and some standard functions for the new arithmetic:

\begin{align}
  \left\langle u,u'\right\rangle + \left\langle v,v'\right\rangle &= \left\langle u + v, u' + v' \right\rangle \\
  \left\langle u,u'\right\rangle - \left\langle v,v'\right\rangle &= \left\langle u - v, u' - v' \right\rangle \\
  \left\langle u,u'\right\rangle * \left\langle v,v'\right\rangle &= \left\langle u v, u'v + uv' \right\rangle \\
  \left\langle u,u'\right\rangle / \left\langle v,v'\right\rangle &= \left\langle \frac{u}{v}, \frac{u'v - uv'}{v^2} \right\rangle \quad ( v\ne 0) \\
                               \sin\left\langle u,u'\right\rangle &= \left\langle \sin(u) , u' \cos(u) \right\rangle \\
                               \cos\left\langle u,u'\right\rangle &= \left\langle \cos(u) , -u' \sin(u) \right\rangle \\
                               \exp\left\langle u,u'\right\rangle &= \left\langle \exp u , u' \exp u \right\rangle \\
                               \log\left\langle u,u'\right\rangle &= \left\langle \log(u) , u'/u \right\rangle \quad (u>0) \\
                                 \left\langle u,u'\right\rangle^k &= \left\langle u^k , k u^{k - 1} u' \right\rangle \quad (u \ne 0) \\
                    \left| \left\langle u,u'\right\rangle \right| &= \left\langle \left| u \right| , u' \mbox{sign} u \right\rangle \quad (u \ne 0)
\end{align}

and in general for the primitive function g,

g(\langle u,u' \rangle , \langle v,v' \rangle ) = \langle g(u,v) , g_u(u,v) u' + g_v(u,v) v' \rangle

where g_u and g_v are the derivatives of g with respect to its first and second arguments, respectively.

When a binary basic arithmetic operation is applied to mixed arguments—the pair \langle u, u' \rangle and the real number c—the real number is first lifted to \langle c, 0 \rangle. The derivative of a function f : \mathbb{R}\rightarrow\mathbb{R} at the point x_0 is now found by calculating f(\langle x_0, 1 \rangle) using the above arithmetic, which gives \langle f ( x_0 ) , f' ( x_0 ) \rangle as the result.
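The following is a minimal operator-overloading sketch of this pair arithmetic in Python; the class name Dual, the lift helper, and the sin wrapper are illustrative choices, not the API of any particular AD package:

    import math

    class Dual:
        # Ordered pair <u, u'>: ordinary arithmetic on the first component,
        # first-order differentiation arithmetic on the second.
        def __init__(self, value, deriv=0.0):
            self.value, self.deriv = value, deriv

        @staticmethod
        def lift(x):
            # Real constants are lifted to <c, 0>.
            return x if isinstance(x, Dual) else Dual(x, 0.0)

        def __add__(self, other):
            other = Dual.lift(other)
            return Dual(self.value + other.value, self.deriv + other.deriv)

        def __mul__(self, other):
            other = Dual.lift(other)
            return Dual(self.value * other.value,
                        self.deriv * other.value + self.value * other.deriv)

        __radd__, __rmul__ = __add__, __mul__

    def sin(u):
        u = Dual.lift(u)
        return Dual(math.sin(u.value), u.deriv * math.cos(u.value))

    # f(x) = x*x + 3*sin(x); evaluating at the dual point <x0, 1> returns
    # <f(x0), f'(x0)> in a single pass.
    def f(x):
        return x * x + 3 * sin(x)

    result = f(Dual(2.0, 1.0))    # result.value == f(2.0), result.deriv == f'(2.0)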

Vector arguments and functions

Multivariate functions can be handled with the same efficiency and mechanisms as univariate functions by adopting a directional derivative operator. That is, if it is sufficient to compute y' = \nabla f(x)\cdot x', the directional derivative y' \in \mathbb{R}^m of f:\mathbb{R}^n\rightarrow\mathbb{R}^m at x \in \mathbb{R}^n in the direction x' \in \mathbb{R}^n, this may be calculated as (\langle y_1,y'_1\rangle, \ldots, \langle y_m,y'_m\rangle) = f(\langle x_1,x'_1\rangle, \ldots, \langle x_n,x'_n\rangle) using the same arithmetic as above. If all the elements of \nabla f are desired, then n function evaluations are required. Note that in many optimization applications, the directional derivative is indeed sufficient.
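A short sketch of this seeding scheme for a vector-valued function, reusing the hypothetical Dual class and sin wrapper from the previous sketch:

    # Directional derivative of a vector-valued f : R^2 -> R^2.  Each input is
    # seeded with the matching component of the direction x'; one evaluation
    # then returns y = f(x) together with y' = (Jacobian of f at x) . x'.
    def f_vec(x1, x2):
        return (x1 * x2, sin(x1) + x2)

    def directional_derivative(func, x, direction):
        duals = [Dual(v, d) for v, d in zip(x, direction)]
        outputs = func(*duals)
        return [o.value for o in outputs], [o.deriv for o in outputs]

    values, jvp = directional_derivative(f_vec, (2.0, 3.0), (1.0, 0.0))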

High order and many variables

The above arithmetic can be generalized to calculate second-order and higher derivatives of multivariate functions. However, the arithmetic rules quickly grow very complicated: complexity is quadratic in the highest derivative degree. Instead, truncated Taylor polynomial algebra can be used. The resulting arithmetic, defined on generalized dual numbers, allows efficient computation using functions as if they were a data type. Once the Taylor polynomial of a function is known, the derivatives are easily extracted. Several software projects implement this truncated Taylor polynomial algebra, including AuDi, CTaylor, and COSY Infinity; only the first two are open source.
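As a rough illustration, assuming a truncation at degree 2 and representing a value by its Taylor coefficients, the arithmetic might be sketched as follows; the helpers t_add and t_mul are hypothetical and unrelated to AuDi, CTaylor, or COSY Infinity:

    # A value u is stored as coefficients [u0, u1, u2] of u0 + u1*t + u2*t**2;
    # terms of degree three and higher are discarded.
    def t_add(a, b):
        return [a[i] + b[i] for i in range(3)]

    def t_mul(a, b):
        # Polynomial product truncated after the t**2 term.
        return [a[0] * b[0],
                a[0] * b[1] + a[1] * b[0],
                a[0] * b[2] + a[1] * b[1] + a[2] * b[0]]

    # f(x) = x**3 expanded around x0 = 2: seed x(t) = 2 + 1*t.
    x = [2.0, 1.0, 0.0]
    y = t_mul(t_mul(x, x), x)
    # y == [f(2), f'(2), f''(2)/2] == [8.0, 12.0, 6.0]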

Implementation

Forward-mode AD is implemented by a nonstandard interpretation of the program in which real numbers are replaced by dual numbers, constants are lifted to dual numbers with a zero epsilon coefficient, and the numeric primitives are lifted to operate on dual numbers. This nonstandard interpretation is generally implemented using one of two strategies: source code transformation or operator overloading.

Source code transformation (SCT)

Figure 4: Example of how source code transformation could work

The source code for a function is replaced by an automatically generated source code that includes statements for calculating the derivatives interleaved with the original instructions.

Source code transformation can be implemented for all programming languages, and it also makes it easier for the compiler to perform compile-time optimizations. However, the AD tool itself is harder to implement.
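As an illustration, the following hand-written Python sketch shows the kind of code such a transformation might generate for the running example z = x1*x2 + sin(x1); it is not the output of any particular tool:

    import math

    # Original source code for the function.
    def f(x1, x2):
        w3 = x1 * x2
        w4 = math.sin(x1)
        return w3 + w4

    # Hand-written stand-in for generated code: derivative statements are
    # interleaved with the original ones.
    def f_d(x1, x2, dx1, dx2):
        w3 = x1 * x2
        dw3 = dx1 * x2 + x1 * dx2
        w4 = math.sin(x1)
        dw4 = math.cos(x1) * dx1
        return w3 + w4, dw3 + dw4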

Operator overloading (OO)

Figure 5: Example of how operator overloading could work

Operator overloading is a possibility for source code written in a language supporting it. Objects for real numbers and elementary mathematical operations must be overloaded to cater for the augmented arithmetic depicted above. This requires no change in the form or sequence of operations in the original source code for the function to be differentiated, but often requires changes in basic data types for numbers and vectors to support overloading and often also involves the insertion of special flagging operations.

Operator overloading for forward accumulation is easy to implement, and also possible for reverse accumulation. However, current compilers lag behind in optimizing the code when compared to forward accumulation.

Software

C/C++

Package License Approach Brief info
ADC Version 4.0 nonfree OO
Adept Apache 2.0 OO First-order forward and reverse modes. Very fast due to its use of expression templates and an efficient tape structure.
ADIC free for noncommercial SCT forward mode
ADMB BSD SCT+OO Uses a template approach. See http://admb-project.org/
ADNumber dual license OO arbitrary order forward/reverse
ADOL-C CPL 1.0 or GPL 2.0 OO arbitrary order forward/reverse, part of COIN-OR
AuDi GPL 2.0 OO AuDi is an open-source, header-only C++ library for automated differentiation implementing the truncated Taylor polynomial algebra (forward-mode automatic differentiation). It was created to offer a generic solution to users in need of an automatic differentiation system. AuDi is thread-safe and, where possible, uses Piranha's fine-grained parallelization of the truncated polynomial multiplication. The benefits of this fine-grained parallelization are most visible for many variables and high orders.
AMPL free for students SCT
CasADi LGPL SCT Forward/reverse modes, matrix-valued atomic operations.
ceres-solver BSD OO A portable C++ library that allows for modeling and solving large complicated nonlinear least squares problems
CppAD EPL 1.0 or GPL 3.0 OO arbitrary order forward/reverse, AD<Base> for arbitrary Base including AD<Other_Base>, part of COIN-OR; can also be used to produce C source code using the CppADCodeGen library.
CTaylor free OO truncated taylor series, multi variable, high performance, calculating and storing only potentially nonzero derivatives, calculates higher order derivatives, order of derivatives increases when using matching operations until maximum order (parameter) is reached, example source code and executable available for testing performance
Eigen Auto Diff MPL2 OO
FADBAD++ free for noncommercial OO uses operator new
OpenAD depends on components SCT
ReverseAD free OO High order reverse mode which evaluates the high order derivative tensor directly instead of a forward/reverse modes hierarchy.
Sacado GNU GPL OO A part of the Trilinos collection, forward/reverse modes.
Stan BSD OO forward- and reverse-mode automatic differentiation with a library of special functions, probability functions, matrix operators, and linear algebra solvers; interfaces to MATLAB, R and Python.
TAPENADE Free for noncommercial SCT
Tensorflow Apache 2.0 OO TensorFlow is a Google-developed Python and C++ library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays both on CPU and GPU efficiently.

Fortran

Package License Approach Brief info
ADF Version 4.0 nonfree OO
ADIFOR free for non-commercial SCT
AUTO_DERIV free for non-commercial OO
OpenAD depends on components SCT
TAF nonfree SCT
TAPENADE Free for noncommercial SCT
GADfit Free (GPL 3) OO First (forward, reverse) and second (forward) order, principal use is nonlinear curve fitting, includes the differentiation of integrals

MATLAB

Package License Approach Brief info
AD for MATLAB GNU GPL OO Forward (1st & 2nd derivative, Uses MEX files & Windows DLLs)
Adiff BSD OO Forward (1st derivative)
MAD Proprietary OO Forward (1st derivative) full/sparse storage of derivatives
ADiMat Proprietary SCT Forward (1st & 2nd derivative) & Reverse (1st), proprietary server side transform
MADiff GNU GPL OO Reverse

Python

Package License Approach Brief info
ad BSD OO first and second-order, reverse accumulation, transparent on-the-fly calculations, basic NumPy support, written in pure python
FuncDesigner BSD OO uses NumPy arrays and SciPy sparse matrices; also allows solving linear/non-linear/ODE systems and performing numerical optimization with OpenOpt
ScientificPython CeCILL OO see modules Scientific.Functions.FirstDerivatives and Scientific.Functions.Derivatives
pycppad BSD OO arbitrary order forward/reverse, implemented as wrapper for CppAD including AD<double> and AD< AD<double> >.
pyadolc BSD OO wrapper for ADOL-C, hence arbitrary order derivatives in the (combined) forward/reverse mode of AD, supports sparsity pattern propagation and sparse derivative computations
uncertainties BSD OO first-order derivatives, reverse mode, transparent calculations
algopy BSD OO same approach as pyadolc and thus compatible, support to differentiate through numerical linear algebra functions like the matrix-matrix product, solution of linear systems, QR and Cholesky decomposition, etc.
pyderiv GNU GPL OO automatic differentiation and (co)variance calculation
CasADi LGPL SCT Python front-end to CasADi. Forward/reverse modes, matrix-valued atomic operations.
Tensorflow Apache 2.0 OO TensorFlow is a Google-developed Python and C++ library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays both on CPU and GPU efficiently.
Theano BSD OO Theano is a Python library that allows you to define, optimize, and evaluate mathematical expressions involving multi-dimensional arrays both on CPU and GPU efficiently.
Autograd MIT OO Autograd can reverse-mode differentiate native Python and Numpy code. It can handle a large subset of Python's features, including loops, ifs, recursion and closures. It is closed under its own operation and hence can compute derivatives of any order.

Lua

Package License Approach Brief info
Torch BSD OO Torch is a LuaJIT library used for Deep Learning. Its nn package is divided into modular objects that share a common Module interface. Modules have a forward and backward function that allow them to feedforward and backpropagate (first-order derivatives). Modules can be joined together using module composites to create complex task-tailored graphs.
SciLua MIT OO SciLua, a framework for general purpose scientific computing in LuaJIT, features complete and transparent support for forward-mode automatic differentiation.

.NET languages

Package License Approach Brief info
AutoDiff GNU LGPL OO Automatic differentiation with C# operator overloading.
FuncLib MIT OO Automatic differentiation and numerical optimization, operator overloading, unlimited order of differentiation, compilation to IL code for very fast evaluation.
DiffSharp GNU LGPL OO An automatic differentiation library implemented in the F# language. It supports C# and the other CLI languages. The library provides gradients, Hessians, Jacobians, directional derivatives, and matrix-free Hessian- and Jacobian-vector products, which can be incorporated with minimal change into existing algorithms. Operations can be nested to any level, meaning that you can compute exact higher-order derivatives and differentiate functions that are internally making use of differentiation.

Haskell

Package License Approach Brief info
ad BSD OO Forward mode (1st derivative or arbitrary order derivatives via lazy lists and sparse tries), reverse mode, and combined forward-on-reverse Hessians. Uses quantification to let the implementation automatically choose appropriate modes; quantification prevents perturbation/sensitivity confusion at compile time.
fad BSD OO Forward Mode (lazy list). Quantification prevents perturbation confusion at compile time.
rad BSD OO Reverse mode (subsumed by 'ad'). Quantification prevents sensitivity confusion at compile time.

Java

Package License Approach Brief info
JAutoDiff MIT OO Provides a framework to compute derivatives of functions on arbitrary types of field using generics. Coded in 100% pure Java.
Apache Commons Math Apache License v2 OO This class is an implementation of the extension to Rall's numbers described in Dan Kalman's paper[8]
Deriva Eclipse Public License v1.0 DSL+Code Generation Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, which allows non-analytic functions to be expressed). The DSL is used to generate flat bytecode at runtime, providing an implementation without the overhead of function calls.
Jap Public OO/SCT Jap is a tool using virtual operator overloading for Java classes. Jap was developed in the thesis of Phuong Pham-Quang, 2008-2011.

Julia

Package License Approach Brief info
ForwardDiff.jl MIT OO A unified package for forward-mode automatic differentiation, combining both DualNumbers and vector-based gradient accumulations.
DualNumbers.jl MIT OO Implements a Dual number type which can be used for forward-mode automatic differentiation of first derivatives via operator overloading.
HyperDualNumbers.jl MIT OO Implements a Hyper number type which can be used for forward-mode automatic differentiation of first and second derivatives via operator overloading.
ReverseDiffSource.jl MIT SCT Implements reverse-mode automatic differentiation for gradients and high-order derivatives given user-supplied expressions or generic functions. Accepts a subset of valid Julia syntax, including intermediate assignments.
TaylorSeries.jl MIT OO Implements truncated multivariate power series for high-order integration of ODEs and forward-mode automatic differentiation of arbitrary order derivatives via operator overloading.

Clojure

Package License Approach Brief info
Deriva Eclipse Public License v1.0 DSL+Code Generation Deriva automates algorithmic differentiation in Java and Clojure projects. It defines a DSL for building extended arithmetic expressions (the extension being support for conditionals, which allows non-analytic functions to be expressed). The DSL is used to generate flat bytecode at runtime, providing an implementation without the overhead of function calls.
Package License Approach Brief info
ad Apache 2.0 DSL+Code Generation Creates source code corresponding to algebraic expressions. See https://autodiff.info for a demo.

R

Package License Approach Brief info
PBSadmb GNU GPL SCT+OO Uses a template approach. See http://admb-project.org/ .

References

  1. Neidinger, Richard D. (2010). "Introduction to Automatic Differentiation and MATLAB Object-Oriented Programming" (PDF). SIAM Review 52 (3): 545–563. doi:10.1137/080743627.
  2. Griewank, Andreas; Walther, Andrea (2008). Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, Second Edition. SIAM. http://www.ec-securehost.com/SIAM/SE24.html
  3. Bartholomew-Biggs, Michael; Brown, Steven; Christianson, Bruce; Dixon, Laurence (2000). "Automatic differentiation of algorithms" (PDF). Journal of Computational and Applied Mathematics 124 (1-2): 171–190. doi:10.1016/S0377-0427(00)00422-2.
  4. Linnainmaa, Seppo (1970). The representation of the cumulative rounding error of an algorithm as a Taylor expansion of the local rounding errors. Master's Thesis (in Finnish), Univ. Helsinki, 6-7.
  5. Linnainmaa, Seppo (1976). Taylor expansion of the accumulated rounding error. BIT Numerical Mathematics, 16(2), 146-160.
  6. Griewank, Andreas (2012). "Who Invented the Reverse Mode of Differentiation?". Optimization Stories, Documenta Mathematica, Extra Volume ISMP (2012), 389-400.
  7. Naumann, Uwe (April 2008). "Optimal Jacobian accumulation is NP-complete". Mathematical Programming 112 (2): 427–441. doi:10.1007/s10107-006-0042-z.
  8. Kalman, Dan (June 2002). "Doubly Recursive Multivariate Automatic Differentiation" (PDF). Mathematics Magazine 75 (3): 187–202. doi:10.2307/3219241.
