accinv.optim module¶
- class accinv.optim.Feedback(*, maxiter: Optional[int] = None, drop_singular_values: Collection[int] = (-2, -1), ftol: float = 1e-08, xtol: float = 1e-08, gtol: float = 1e-08)¶
Bases: Optimizer
Implementation of a feedback-like optimizer which uses a constant Jacobian.
The Jacobian is computed once for the initial guess of parameters and then used throughout the optimization procedure.
The total number of quadrupoles, horizontal BPMs and steerers, and vertical BPMs and steerers must match the number of columns of the Jacobian. Similarly, the number of BPMs and steerers must match the size of the computed ORM or residuals.
- Parameters
maxiter – Maximum number of iterations that will be performed.
drop_singular_values – Collection of indices of singular values that will be set to zero.
ftol – See scipy.optimize.least_squares().
xtol – See scipy.optimize.least_squares().
gtol – See scipy.optimize.least_squares().
- run(*, f: Callable[[numpy.ndarray], numpy.ndarray], j: Callable[[numpy.ndarray], numpy.ndarray], x0: numpy.ndarray) → Result¶
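The constant-Jacobian feedback scheme can be sketched in plain Python. The residual `f`, the 2x2 system solver, and the update rule below are illustrative stand-ins, not accinv internals:

```python
# Sketch of a feedback-style optimizer: the Jacobian is evaluated once
# at the initial guess x0 and then reused in every subsequent iteration.

def f(x):
    # Mildly nonlinear residual; roots at x0 = 1/1.2, x1 = 2.
    return [x[0] + 0.1 * x[0] * x[1] - 1.0, x[1] - 2.0]

def solve2(a, b, c, d, r0, r1):
    # Solve the 2x2 system [[a, b], [c, d]] @ dx = [r0, r1] via Cramer's rule.
    det = a * d - b * c
    return [(r0 * d - b * r1) / det, (a * r1 - r0 * c) / det]

def feedback(f, jac, x0, maxiter=50):
    (a, b), (c, d) = jac            # constant Jacobian, computed once at x0
    x = list(x0)
    for _ in range(maxiter):
        r = f(x)
        dx = solve2(a, b, c, d, r[0], r[1])
        x = [x[0] - dx[0], x[1] - dx[1]]
    return x

# The Jacobian of f at x0 = (0, 0) is the identity matrix.
x = feedback(f, [(1.0, 0.0), (0.0, 1.0)], [0.0, 0.0])
```

Even though the Jacobian is never recomputed, the iteration still converges here because the problem is only mildly nonlinear; this is the trade-off a feedback-like optimizer makes to avoid repeated Jacobian evaluations.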
- class accinv.optim.GaussNewton(*, maxiter: Optional[int] = None, step_size: float = 0.1, drop_singular_values: Collection[int] = (-2, -1), ftol: float = 1e-08, xtol: float = 1e-08, gtol: float = 1e-08)¶
Bases: Optimizer
Implementation of the Gauss-Newton optimizer.
- Parameters
maxiter – Maximum number of iterations that will be performed.
step_size – Scaling factor for the update applied to the current guess during each iteration. It is multiplied with the update obtained from solving the Gauss-Newton step, so it does not determine the actual magnitude of the update but only scales it.
drop_singular_values – Collection of indices of singular values that will be set to zero.
ftol – See scipy.optimize.least_squares().
xtol – See scipy.optimize.least_squares().
gtol – See scipy.optimize.least_squares().
- run(*, f: Callable[[numpy.ndarray], numpy.ndarray], j: Callable[[numpy.ndarray], numpy.ndarray], x0: numpy.ndarray) → Result¶
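A damped Gauss-Newton iteration of this kind can be sketched as follows. In contrast to the feedback scheme, the Jacobian is re-evaluated at every iterate, and the computed update is scaled by step_size before being applied. The residual and Jacobian below are illustrative, not accinv internals:

```python
# Sketch of a damped Gauss-Newton iteration with a step_size scaling factor.

def f(x):
    return [x[0] + 0.1 * x[0] * x[1] - 1.0, x[1] - 2.0]

def jacobian(x):
    # Analytic Jacobian of f.
    return [(1.0 + 0.1 * x[1], 0.1 * x[0]), (0.0, 1.0)]

def solve2(m, r):
    # Solve the 2x2 system m @ dx = r via Cramer's rule.
    (a, b), (c, d) = m
    det = a * d - b * c
    return [(r[0] * d - b * r[1]) / det, (a * r[1] - r[0] * c) / det]

def gauss_newton(f, jac, x0, step_size=0.1, maxiter=200):
    x = list(x0)
    for _ in range(maxiter):
        r = f(x)
        j = jac(x)                   # Jacobian recomputed at every iterate
        # For a square, invertible Jacobian the normal equations
        # J^T J dx = J^T r reduce to J dx = r.
        dx = solve2(j, r)
        x = [x[0] - step_size * dx[0], x[1] - step_size * dx[1]]
    return x

x = gauss_newton(f, jacobian, [0.0, 0.0])
```

With step_size=1.0 each iteration takes the full Gauss-Newton step; smaller values trade convergence speed for robustness against overshooting.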
- class accinv.optim.LeastSquares(**configuration)¶
Bases: Optimizer
A thin wrapper around scipy.optimize.least_squares().
- Parameters
configuration – Any keyword arguments for scipy.optimize.least_squares().
- run(*, f: Callable[[numpy.ndarray], numpy.ndarray], j: Callable[[numpy.ndarray], numpy.ndarray], x0: numpy.ndarray) → Result¶
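The pass-through behavior of such a `**configuration` wrapper can be sketched without scipy. The backend function and class below are hypothetical stand-ins for scipy.optimize.least_squares and the LeastSquares class, used only to show how the stored keyword arguments are forwarded verbatim:

```python
# Sketch of a thin wrapper that forwards arbitrary keyword arguments to a
# backend solver, in the spirit of LeastSquares(**configuration).

def backend_solver(fun, x0, *, max_nfev=None, ftol=1e-8):
    # Stand-in backend: merely records the configuration it received.
    return {"x": x0, "max_nfev": max_nfev, "ftol": ftol}

class LeastSquaresSketch:
    def __init__(self, **configuration):
        # Store the configuration; it is passed through verbatim on run().
        self.configuration = configuration

    def run(self, *, f, x0):
        return backend_solver(f, x0, **self.configuration)

opt = LeastSquaresSketch(max_nfev=100, ftol=1e-10)
result = opt.run(f=lambda x: x, x0=[0.0])
```

Because the wrapper does not interpret the keyword arguments itself, any option accepted by the backend solver can be configured at construction time.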
- class accinv.optim.Result(x: numpy.ndarray, fun: numpy.ndarray, jac: numpy.ndarray, nfev: int, njev: int, status: int, message: str)¶
Bases: object
This class mimics scipy.optimize.OptimizeResult and contains various information about the result of a fitting/optimization process.
- property cost¶
- fun: numpy.ndarray¶
- property grad¶
- jac: numpy.ndarray¶
- message: str¶
- nfev: int¶
- njev: int¶
- status: int¶
- property success¶
- x: numpy.ndarray¶
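A container with these fields and derived properties can be sketched as below. The property definitions follow the scipy.optimize conventions and are assumptions, not accinv's documented formulas: cost as half the squared residual norm, grad as Jᵀ·fun, and success as a positive status code:

```python
# Sketch of a Result-like container with derived properties, assuming the
# scipy.optimize conventions: cost = 0.5 * ||fun||^2, grad = J^T @ fun,
# success = (status > 0).
from dataclasses import dataclass
from typing import List

@dataclass
class ResultSketch:
    x: List[float]
    fun: List[float]
    jac: List[List[float]]
    nfev: int
    njev: int
    status: int
    message: str

    @property
    def cost(self):
        # Half the sum of squared residuals.
        return 0.5 * sum(r * r for r in self.fun)

    @property
    def grad(self):
        # J^T @ fun, computed column by column.
        return [sum(row[k] * r for row, r in zip(self.jac, self.fun))
                for k in range(len(self.jac[0]))]

    @property
    def success(self):
        return self.status > 0

res = ResultSketch(x=[1.0, 2.0], fun=[3.0, 4.0],
                   jac=[[1.0, 0.0], [0.0, 1.0]],
                   nfev=5, njev=2, status=1, message="converged")
```

With the identity Jacobian above, cost evaluates to 0.5 * (9 + 16) = 12.5 and grad reduces to the residual vector itself.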