TooN 2.1

Evaluation of useful functions.
Functions

template<class F, int S, class P, class B>
Vector<S, P> numerical_gradient (const F &f, const Vector<S, P, B> &x)

template<class F, int S, class P, class B>
Matrix<S, 2, P> numerical_gradient_with_errors (const F &f, const Vector<S, P, B> &x)

template<class F, int S, class P, class B>
pair<Matrix<S, S, P>, Matrix<S, S, P> > numerical_hessian_with_errors (const F &f, const Vector<S, P, B> &x)

template<class F, int S, class P, class B>
Matrix<S, S, P> numerical_hessian (const F &f, const Vector<S, P, B> &x)
Evaluation of useful functions.
Extrapolate a derivative to zero using Ridder's Algorithm.
Ridder's algorithm works by evaluating derivatives with smaller and smaller step sizes, fitting a polynomial and extrapolating its value to zero.
This algorithm is generally more accurate and much more reliable, but much slower, than simple finite differences. It is robust to awkward functions and does not require careful tuning of the step size; furthermore, it provides an estimate of the errors. This implementation has been tuned for accuracy instead of speed. For an estimate of the errors, see also numerical_gradient_with_errors(). In general it is useful to know the errors, since some functions are remarkably hard to differentiate numerically.
Neville's Algorithm can be used to find a point on a fitted polynomial at \(x\). Taking some points \(x_i\) and \(y_i = f(x_i)\), one can define a table of points on various polynomials:
\[
\begin{array}{ccccc}
x_1 & P_1 &        &         &          \\
    &     & P_{12} &         &          \\
x_2 & P_2 &        & P_{123} &          \\
    &     & P_{23} &         & P_{1234} \\
x_3 & P_3 &        & P_{234} &          \\
    &     & P_{34} &         &          \\
x_4 & P_4 &        &         &          \\
\end{array}
\]
where \(P_{123}\) is the value of a polynomial fitted to datapoints 1, 2 and 3, evaluated at \(x\). The initial values are simple to evaluate: \(P_i = y_i = f(x_i)\). The remaining values are determined by the recurrence:
\[
P_{k\cdots n} = \frac{(x - x_n)\,P_{k\cdots n-1} + (x_k - x)\,P_{k+1\cdots n}}{x_k - x_n}
\]
For Ridder's algorithm, we take the extrapolation point to be \(x = 0\) and the sequence of points to evaluate as \(x_i = c^{-i}\), where \(c\) is some constant a little greater than 1. Substituting into the above equation gives:
\[
P_{k\cdots n} = \frac{c^{\,n-k}\,P_{k+1\cdots n} - P_{k\cdots n-1}}{c^{\,n-k} - 1}
\]
To measure errors, when computing (for example) \(P_{234}\), we compare its value to the lower order with the same step size, \(P_{34}\), and the lower order with a larger step size, \(P_{23}\). The error estimate is the largest of these differences. The extrapolation with the smallest error is retained.
Not only that, but since every value of \(P\) in the table is computed, every single subset of consecutive points is used for polynomial fitting. So, bad points at large and small step sizes do not spoil the answer.
It is generally assumed that as the step size decreases, the errors first decrease, then increase again. Therefore, if the current step size did not yield any improvement on the best point so far, we terminate when the highest order point is some fixed factor worse than the previous order point.
The parameters are:
f  Functor to differentiate 
x  Point about which to differentiate. 
References Vector< Size, Precision, Base >::size().
Compute numerical gradients with errors.
See numerical_gradient(). Gradients are returned in the first row of the returned matrix. Errors are returned in the second row. The errors have not been scaled, so they are in the same range as the gradients.
f  Functor to differentiate 
x  Point about which to differentiate. 
References Vector< Size, Precision, Base >::size().
pair<Matrix<S, S, P>, Matrix<S, S, P> > TooN::numerical_hessian_with_errors (const F &f, const Vector<S, P, B> &x)
Compute the numerical Hessian using central differences and Ridder's method:
\[
H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}
\]
See numerical_gradient().
f  Functor to double-differentiate 
x  Point about which to double-differentiate. 
References Vector< Size, Precision, Base >::size().
Compute the numerical Hessian and errors.
The Hessian is returned as the first element, and the errors as the second. See numerical_hessian().
f  Functor to double-differentiate 
x  Point about which to double-differentiate. 
References Vector< Size, Precision, Base >::size().