paramz.optimization package¶
Submodules¶
paramz.optimization.optimization module¶
class Adam(step_rate=0.0002, decay=0, decay_mom1=0.1, decay_mom2=0.001, momentum=0, offset=1e-08, *args, **kwargs)[source]¶
class Optimizer(messages=False, max_f_eval=10000.0, max_iters=1000.0, ftol=None, gtol=None, xtol=None, bfgs_factor=None)[source]¶
Bases: object
Superclass for all the optimizers.
Parameters:
- x_init – initial set of parameters
- f_fp – function that returns the objective value AND the gradients at the same time (see the sketch after this list)
- f – function to optimize
- fp – gradients
- messages (True | False) – whether to print messages from the optimizer
- max_f_eval – maximum number of function evaluations
Return type: optimizer object.
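For illustration, the callables described above can look as follows for a toy quadratic objective; the function bodies and the (value, gradient) pair returned by f_fp are illustrative assumptions, not taken from the paramz source::

    import numpy as np

    def f(x):
        # objective value only
        return float(np.sum((x - 1.0) ** 2))

    def fp(x):
        # gradient only, returned as a 1D np.ndarray
        return 2.0 * (x - 1.0)

    def f_fp(x):
        # objective AND gradient at the same time,
        # assumed here to be returned as a (value, gradient) pair
        return f(x), fp(x)

    x_init = np.zeros(3)  # initial set of parameters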
class RProp(step_shrink=0.5, step_grow=1.2, min_step=1e-06, max_step=1, changes_max=0.1, *args, **kwargs)[source]¶
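Both concrete optimizers are configured through their constructor keywords. A minimal setup sketch follows; the values shown are just the documented defaults, and the Model.optimize usage in the comment is an assumption about paramz, not confirmed by this page::

    from paramz.optimization.optimization import Adam, RProp

    # Documented constructor keywords; the values are the defaults listed above.
    adam = Adam(step_rate=0.0002, decay_mom1=0.1, decay_mom2=0.001, offset=1e-08)
    rprop = RProp(step_shrink=0.5, step_grow=1.2, min_step=1e-06, max_step=1)

    # Assumption: a paramz Model can be optimized with one of these instances,
    # e.g. model.optimize(optimizer=adam, messages=True); check the Model
    # documentation for the exact accepted forms.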
paramz.optimization.scg module¶
Scaled Conjugate Gradients, originally in Matlab as part of the Netlab toolbox by I. Nabney, converted to Python by N. Lawrence and given a pythonic interface by James Hensman.
Edited by Max Zwiessele for efficiency and verbose optimization.
SCG(f, gradf, x, optargs=(), maxiters=500, max_f_eval=inf, xtol=None, ftol=None, gtol=None)[source]¶
Optimisation through Scaled Conjugate Gradients (SCG).
Parameters:
- f: the objective function
- gradf: the gradient function (should return a 1D np.ndarray)
- x: the initial condition

Returns:
- x: the optimal value for x
- flog: a list of all the objective values
- function_eval: number of function evaluations
- status: string describing the convergence status
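A minimal sketch of calling SCG directly on a toy quadratic, following the signature and return description above; the exact return form is inferred from that description, so treat the four-way unpacking as an assumption::

    import numpy as np
    from paramz.optimization.scg import SCG

    def f(x):
        # objective: quadratic with its minimum at x = 1
        return float(np.sum((x - 1.0) ** 2))

    def gradf(x):
        # gradient as a 1D np.ndarray, as SCG expects
        return 2.0 * (x - 1.0)

    x0 = np.zeros(5)
    # Assumes SCG returns (x, flog, function_eval, status) in this order.
    x_opt, flog, function_eval, status = SCG(f, gradf, x0, maxiters=200)
    print(status, f(x_opt), function_eval)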