Base trait for optimizers that update weights according to a gradient. An optimizer supports the following operations:

* Updating the weights according to the gradient. This takes the weights, the gradient, and the value.
* Resetting the optimizer's internal state (such as the Hessian approximation, etc.).
* Checking whether the optimizer has converged yet.
* Copying the weights back after learning. Some optimizers swap out the weights with special-purpose tensors for, e.g., efficient scoring while learning; once learning is done, the weights should be copied back into normal tensors.
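A minimal Rust sketch of such a trait is given below, together with a plain gradient-descent implementation and a small usage example. The trait and method names (Optimizer, update, reset, has_converged, finish), the use of f32 slices in place of tensors, and the reading of value as the current loss are assumptions made for illustration; they are not taken from the original API.

```rust
/// Base trait for optimizers that update weights according to a gradient.
/// All names and signatures here are illustrative assumptions.
pub trait Optimizer {
    /// Updates the weights according to the gradient.
    ///
    /// * `weights` - The weights.
    /// * `gradient` - The gradient.
    /// * `value` - The value (assumed here to be the current loss).
    fn update(&mut self, weights: &mut [f32], gradient: &[f32], value: f32);

    /// Reset the optimizer's internal state (such as the Hessian approximation, etc.).
    fn reset(&mut self);

    /// Whether the optimizer has converged yet.
    fn has_converged(&self) -> bool;

    /// Copy the weights back into normal storage once learning is done.
    /// A no-op by default, since this sketch never swaps out the weights
    /// with special-purpose tensors.
    fn finish(&mut self, _weights: &mut [f32]) {}
}

/// Plain gradient descent: the simplest possible implementation.
pub struct GradientDescent {
    learning_rate: f32,
    tolerance: f32,
    last_value: Option<f32>,
    converged: bool,
}

impl GradientDescent {
    pub fn new(learning_rate: f32, tolerance: f32) -> Self {
        GradientDescent { learning_rate, tolerance, last_value: None, converged: false }
    }
}

impl Optimizer for GradientDescent {
    fn update(&mut self, weights: &mut [f32], gradient: &[f32], value: f32) {
        // Step in the direction of the negative gradient.
        for (w, &g) in weights.iter_mut().zip(gradient.iter()) {
            *w -= self.learning_rate * g;
        }
        // Declare convergence once the value stops changing noticeably.
        if let Some(last) = self.last_value {
            self.converged = (last - value).abs() < self.tolerance;
        }
        self.last_value = Some(value);
    }

    fn reset(&mut self) {
        self.last_value = None;
        self.converged = false;
    }

    fn has_converged(&self) -> bool {
        self.converged
    }
}

fn main() {
    // Minimize f(w) = 0.5 * ||w||^2, whose gradient is w itself.
    let mut opt = GradientDescent::new(0.1, 1e-6);
    let mut weights = vec![1.0_f32, -2.0];
    while !opt.has_converged() {
        let gradient = weights.clone();
        let value = 0.5 * weights.iter().map(|w| w * w).sum::<f32>();
        opt.update(&mut weights, &gradient, value);
    }
    opt.finish(&mut weights);
    println!("optimized weights: {:?}", weights);
}
```

The finish method is made a no-op by default in this sketch, since only optimizers that actually swap the weights out for special-purpose tensors need to copy anything back.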