paramz.optimization package

Submodules

paramz.optimization.optimization module

class Adam(step_rate=0.0002, decay=0, decay_mom1=0.1, decay_mom2=0.001, momentum=0, offset=1e-08, *args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]
class Opt_Adadelta(step_rate=0.1, decay=0.9, momentum=0, *args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]
class Optimizer(messages=False, max_f_eval=10000.0, max_iters=1000.0, ftol=None, gtol=None, xtol=None, bfgs_factor=None)[source]

Bases: object

Superclass for all the optimizers.

Parameters:
  • x_init – initial set of parameters
  • f_fp – function that returns the function AND the gradients at the same time
  • f – function to optimize
  • fp – gradients
  • messages (bool) – whether the optimizer should print progress messages
  • max_f_eval – maximum number of function evaluations
Return type:
  optimizer object.

opt(x_init, f_fp=None, f=None, fp=None)[source]
run(x_init, **kwargs)[source]
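
A minimal usage sketch of the pattern shared by these optimizers, here with opt_lbfgsb on a toy quadratic objective (the objective function is illustrative, not part of paramz; the result attributes x_opt and status are assumed to follow the base-class convention):

    import numpy as np
    from paramz.optimization.optimization import opt_lbfgsb

    def f_fp(x):
        # toy objective f(x) = sum(x**2), returned together with its gradient 2*x
        return np.sum(np.square(x)), 2.0 * x

    optimizer = opt_lbfgsb(max_iters=100)
    optimizer.run(np.array([3.0, -2.0]), f_fp=f_fp)  # run() forwards keyword arguments to opt()
    print(optimizer.x_opt, optimizer.status)
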
class RProp(step_shrink=0.5, step_grow=1.2, min_step=1e-06, max_step=1, changes_max=0.1, *args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]
class opt_SCG(*args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]
class opt_bfgs(*args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]

Run the optimizer

class opt_lbfgsb(*args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]

Run the optimizer

class opt_simplex(*args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]

The simplex optimizer does not require gradients.
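
A gradient-free fit can therefore pass only f; a minimal sketch, assuming a toy objective that is not part of paramz:

    import numpy as np
    from paramz.optimization.optimization import opt_simplex

    def f(x):
        # toy objective with no gradient available
        return np.sum((x - 1.0) ** 2)

    optimizer = opt_simplex()
    optimizer.run(np.zeros(3), f=f)  # only the objective is supplied
    print(optimizer.x_opt)
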

class opt_tnc(*args, **kwargs)[source]

Bases: paramz.optimization.optimization.Optimizer

opt(x_init, f_fp=None, f=None, fp=None)[source]

Run the TNC optimizer

get_optimizer(f_min)[source]
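
A hedged sketch of looking up an optimizer by name; the name string 'lbfgsb' is an assumption, and the call is expected to return one of the Optimizer subclasses documented above:

    from paramz.optimization.optimization import get_optimizer

    OptimizerClass = get_optimizer('lbfgsb')  # assumed name; returns an Optimizer subclass
    optimizer = OptimizerClass(max_iters=200)
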

paramz.optimization.scg module

Scaled Conjugate Gradients, originally in Matlab as part of the Netlab toolbox by I. Nabney, converted to Python by N. Lawrence and given a pythonic interface by James Hensman.

Edited by Max Zwiessele for efficiency and verbose optimization.

SCG(f, gradf, x, optargs=(), maxiters=500, max_f_eval=inf, xtol=None, ftol=None, gtol=None)[source]

Optimisation through Scaled Conjugate Gradients (SCG)

Parameters:
  • f – the objective function
  • gradf – the gradient function (should return a 1D np.ndarray)
  • x – the initial condition

Returns:
  • x – the optimal value for x
  • flog – a list of all the objective values
  • function_eval – number of function evaluations
  • status – string describing convergence status
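
A minimal sketch of calling SCG directly on a toy quadratic; the objective and gradient below are illustrative, not part of paramz:

    import numpy as np
    from paramz.optimization.scg import SCG

    def f(x):
        # quadratic bowl
        return np.sum(np.square(x))

    def gradf(x):
        # gradient of the quadratic, as a 1D array
        return 2.0 * x

    x_opt, flog, n_evals, status = SCG(f, gradf, np.array([3.0, -2.0, 1.5]), maxiters=200)
    print(x_opt, status)
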

paramz.optimization.verbose_optimization module

class VerboseOptimization(model, opt, maxiters, verbose=False, current_iteration=0, ipython_notebook=True, clear_after_finish=False)[source]

Bases: object

finish(opt)[source]
print_out(seconds)[source]
print_status(me, which=None)[source]
update()[source]
exponents(fnow, current_grad)[source]

Module contents