cc.factorie

package optimize


Type Members

  1. class AdaGrad extends AdaptiveLearningRate

    The AdaGrad algorithm.

  2. class AdaGradRDA extends GradientOptimizer

    The AdaGrad regularized dual averaging algorithm from Duchi et al., "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization".

  3. class AdaMira extends AdaptiveLearningRate with MarginScaled

    The combination of AdaGrad with MIRA.

  4. trait AdaptiveLearningRate extends GradientStep

    This implements the adaptive learning rates from the AdaGrad algorithm (with Composite Mirror Descent update) from "Adaptive Subgradient Methods for Online Learning and Stochastic Optimization" by Duchi et al.

  5. class AveragedPerceptron extends ConstantLearningRate with ParameterAveraging

    Convenience name for the averaged perceptron.

  6. class BackTrackLineOptimizer extends GradientOptimizer with FastLogging

    A backtracking line optimizer.

  7. class BatchTrainer extends Trainer with FastLogging

    Learns the parameters of a Model by summing the gradients and values of all Examples, and passing them to a GradientOptimizer (such as ConjugateGradient or LBFGS).
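
    A minimal usage sketch, assuming a model mixing in Parameters and a prepared collection of Examples (the names model and examples are illustrative):

      import cc.factorie.optimize._
      // Batch training: each pass sums value and gradient over all
      // examples, then hands the aggregate to the optimizer.
      val optimizer = new LBFGS with L2Regularization
      val trainer = new BatchTrainer(model.parameters, optimizer)
      while (!trainer.isConverged)
        trainer.processExamples(examples)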

  8. class CaseFactorDiscreteLikelihoodExample extends Example

    A gradient from a single DiscreteVar, where the set of factors is allowed to change based on its value.

  9. class CompositeLikelihoodExample extends Example

    A generalization of pseudolikelihood to sets of variables instead of a single one.

  10. class ConjugateGradient extends GradientOptimizer with FastLogging

    A conjugate gradient optimizer.

  11. class ConstantLearningRate extends ConstantStepSize

    A simple gradient descent algorithm with constant learning rate.

  12. class ConstantLengthLearningRate extends ConstantLengthStepSize

    A simple gradient descent algorithm with constant norm-independent learning rate.

  13. trait ConstantLengthStepSize extends GradientStep

    Mixin trait for a step size which is normalized by the length of the gradient and is constant.

  14. trait ConstantStepSize extends GradientStep

    Mixin trait for a constant step size.

  15. class ContrastiveDivergenceExample[C] extends Example

    A training example for using contrastive divergence.

  16. class ContrastiveDivergenceHingeExample[C <: variable.Var] extends Example

    Contrastive divergence with the hinge loss.

  17. class DiscreteLikelihoodExample extends Example

    An example for a single labeled discrete variable.

  18. class DominationLossExample extends Example

    Implements the domination loss function: it penalizes models that rank any of the badCandidates above any of the goodCandidates.

  19. class DominationLossExampleAllGood extends Example

    Implements a variant of the domination loss function.

  20. trait Example extends AnyRef

    Main abstraction over a training example.
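
    Concretely, an Example pushes its objective value and gradient contributions into the supplied accumulators. A hypothetical least-squares example, with accumulator and Weights types as used elsewhere in FACTORIE 1.x (all other names are illustrative):

      import cc.factorie.la._
      import cc.factorie.model._
      import cc.factorie.optimize._
      import cc.factorie.util.DoubleAccumulator

      // Hypothetical least-squares example over one weight vector.
      class SquaredErrorExample(weights: Weights1, feats: Tensor1, target: Double)
          extends Example {
        def accumulateValueAndGradient(value: DoubleAccumulator,
                                       gradient: WeightsMapAccumulator): Unit = {
          val residual = target - (weights.value dot feats)
          value.accumulate(-0.5 * residual * residual)   // negated squared error
          gradient.accumulate(weights, feats, residual)  // d(value)/d(weights)
        }
      }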

  21. class ExponentiatedGradient extends GradientOptimizer

    This implements the Exponentiated Gradient algorithm of Kivinen and Warmuth, also known as Entropic Mirror Descent (Beck and Teboulle).

  22. trait GradientOptimizer extends AnyRef

    Base trait for optimizers that update weights according to a gradient.
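
    Its core contract, sketched here per FACTORIE 1.x (the actual trait has a few more lifecycle hooks):

      trait GradientOptimizer {
        // Consume one evaluation of the objective and update the weights.
        def step(weights: WeightsSet, gradient: WeightsMap, value: Double): Unit
        def isConverged: Boolean
        def reset(): Unit
      }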

  23. trait GradientStep extends GradientOptimizer

    Base trait for optimizers whose update is a single gradient step: the gradient is transformed, scaled by a learning rate, and added to the weights.

  24. class HogwildTrainer extends Trainer with FastLogging

    A parallel online trainer which has no locks or synchronization.

  25. trait InvSqrtTLengthStepSize extends GradientStep

    Mixin trait for a step size which is normalized by the length of the gradient and decays as 1/sqrt(T).

  26. trait InvSqrtTStepSize extends GradientStep

    Mixin trait for a step size which decays as 1/sqrt(T).

  27. trait InvTLengthStepSize extends GradientStep

    Mixin trait for a step size which is normalized by the length of the gradient and decays as 1/T.

  28. trait InvTStepSize extends GradientStep

    Mixin trait for a step size which decays as 1/T.

  29. trait L2Regularization extends GradientOptimizer

    Include L2 regularization (Gaussian with given scalar as the spherical covariance) in the gradient and value.
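
    Being a mixin, it composes with any optimizer; for example (the variance field name is assumed from FACTORIE 1.x):

      // Spherical Gaussian prior mixed into L-BFGS; larger variance
      // means weaker regularization.
      val optimizer = new LBFGS with L2Regularization { variance = 10.0 }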

  30. class L2RegularizedConstantRate extends GradientOptimizer

    Simple, efficient l2-regularized SGD with a constant learning rate.

  31. class LBFGS extends GradientOptimizer with FastLogging

    Maximize using Limited-memory BFGS, as described in Byrd, Nocedal, and Schnabel, "Representations of Quasi-Newton Matrices and Their Use in Limited Memory Methods".

  32. class LikelihoodExample[A <: Iterable[variable.Var], B <: model.Model] extends Example

    Base example for maximizing log likelihood.

  33. class LineSearchGradientAscent extends GradientOptimizer with FastLogging

    Changes the weights in the direction of the gradient, using backtracking line search to ensure each step goes uphill.

  34. class LinearL2SVM extends AnyRef

    An implementation of the liblinear algorithm.

  35. class MIRA extends MarginScaled

    The MIRA algorithm.

  36. trait MarginScaled extends GradientStep

    Mixin trait for implementing a MIRA step.

  37. class MiniBatchExample extends Example

    Treats many examples as one.

  38. trait MultivariateOptimizableObjective[Output] extends OptimizableObjective[la.Tensor1, Output]

  39. class OnlineTrainer extends Trainer with FastLogging

    Learns the parameters of a model by computing the gradient and calling the optimizer one example at a time.
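
    A minimal sketch, assuming model.parameters and examples as above (the maxIterations parameter is assumed from FACTORIE 1.x):

      // Stochastic training: one optimizer step per example.
      val trainer = new OnlineTrainer(model.parameters, new AdaGrad, maxIterations = 3)
      while (!trainer.isConverged)
        trainer.processExamples(scala.util.Random.shuffle(examples))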

  40. trait OptimizableObjective[Prediction, Output] extends AnyRef

    Abstract trait for any (sub)differentiable objective function used to train predictors.

  41. class ParallelBatchTrainer extends Trainer with FastLogging

  42. class ParallelOnlineTrainer extends Trainer with FastLogging

  43. trait ParameterAveraging extends GradientStep

    Mixin trait to add parameter averaging to any GradientStep.
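
    For instance, the AveragedPerceptron above is exactly this composition:

      // Constant-rate steps whose returned weights are the average
      // over all updates, equivalent in spirit to AveragedPerceptron.
      val optimizer = new ConstantLearningRate with ParameterAveraging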

  44. class Pegasos extends GradientOptimizer

    This implements an efficient version of the Pegasos SGD algorithm for l2-regularized hinge loss. It won't necessarily work with other losses because of the aggressive projection steps. Note that adding a learning rate here is nontrivial, since the update relies on baseRate / step < 1.

  45. class Perceptron extends ConstantLearningRate

    Convenience name for the perceptron.

  46. class PersistentContrastiveDivergenceExample[C <: LabeledMutableVar] extends Example

    A variant of the contrastive divergence algorithm which does not reset to the ground truth.

  47. class PersistentContrastiveDivergenceHingeExample[C <: LabeledMutableVar] extends Example

    A contrastive divergence hinge example which keeps the chain going.

  48. class PredictorExample[Output, Prediction, Input] extends Example

    Base example for all OptimizablePredictors.

  49. class PseudolikelihoodExample extends Example

    Trains a model by maximizing pseudolikelihood.

  50. class PseudomaxExample extends Example

    An example which independently maximizes each label with respect to its neighbors' true assignments.

  51. class PseudomaxMarginExample extends Example

    A variant of PseudomaxExample which enforces a margin.

  52. class RDA extends GradientOptimizer

    Implements the Regularized Dual Averaging (RDA) algorithm of Xiao, with support for l1 and l2 regularization.

  53. class SampleRankExample[C] extends Example

    Provides a gradient that encourages the model to rank sampled configurations in the same order as the objective ranks them.

  54. class SampleRankTrainer[C] extends OnlineTrainer

  55. class SemiSupervisedLikelihoodExample[A <: Iterable[variable.Var], B <: model.Model] extends Example

    Maximizes log likelihood in a semi-supervised setting.

  56. class SimpleLikelihoodExample[A <: Iterable[variable.Var], B <: model.Model] extends Example

  57. class StructuredPerceptronExample[A <: Iterable[variable.Var], B <: model.Model] extends LikelihoodExample[A, B]

    Implements the structured perceptron.

  58. class StructuredSVMExample[A <: Iterable[variable.Var]] extends StructuredPerceptronExample[A, model.CombinedModel with model.Parameters]

    Implements the structured SVM objective function, by doing loss-augmented inference.

  59. class SynchronizedOptimizerOnlineTrainer extends Trainer with FastLogging

  60. class ThreadLocalBatchTrainer extends Trainer with FastLogging

  61. trait Trainer extends AnyRef

    Learns the parameters of a Model by processing the gradients and values from a collection of Examples.

  62. class TwoStageTrainer extends AnyRef

    Trains using one trainer until it has converged, and then switches to a second trainer.

  63. trait UnivariateOptimizableObjective[Output] extends OptimizableObjective[Double, Output]

Value Members

  1. object Example

  2. object GoodBadExample

  3. object ISTAHelper

  4. object MiniBatchExample

  5. object MutableScalableWeights

  6. object OptimizableObjectives

  7. object Trainer
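
    Provides one-call convenience methods for common training setups, e.g. (helper names assumed from FACTORIE 1.x):

      // Defaults pick a reasonable optimizer and iteration count.
      Trainer.onlineTrain(model.parameters, examples)
      Trainer.batchTrain(model.parameters, examples)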

  8. object TrainerHelpers
