cc.factorie.optimize

AdaMira

class AdaMira extends AdaptiveLearningRate with MarginScaled

Combines the AdaGrad adaptive learning rate with the MIRA margin-scaled step size.

Linear Supertypes

MarginScaled, AdaptiveLearningRate, GradientStep, GradientOptimizer, AnyRef, Any

Instance Constructors

  1. new AdaMira(rate: Double, delta: Double = 0.1, C: Double = 1.0)

    rate

    See AdaGrad; because MIRA rescales each step, its exact value matters much less here.

    delta

    See AdaGrad; again, its exact value matters much less here.

    C

    The bound on the MIRA step size; see MIRA. A usage sketch follows this list.
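
A minimal usage sketch, not taken from the FACTORIE sources: it assumes a model that exposes its parameters as a WeightsSet and a collection of Example instances; `model` and `examples` are hypothetical stand-ins for whatever the surrounding pipeline provides.

    import cc.factorie.optimize.{AdaMira, OnlineTrainer}

    // Hypothetical setup: `model.parameters` is the model's WeightsSet and
    // `examples` is a Seq[Example] built elsewhere. Depending on the
    // FACTORIE version, OnlineTrainer may require an implicit Random.
    implicit val random = new scala.util.Random(0)
    val optimizer = new AdaMira(rate = 1.0, delta = 0.1, C = 1.0)
    val trainer = new OnlineTrainer(model.parameters, optimizer)
    while (!trainer.isConverged)
      trainer.processExamples(examples)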

Value Members

  1. final def !=(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  2. final def !=(arg0: Any): Boolean

    Definition Classes
    Any
  3. final def ##(): Int

    Definition Classes
    AnyRef → Any
  4. final def ==(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  5. final def ==(arg0: Any): Boolean

    Definition Classes
    Any
  6. val C: Double

    See MIRA.

    Definition Classes
    AdaMira → MarginScaled
  7. final def asInstanceOf[T0]: T0

    Definition Classes
    Any
  8. def clone(): AnyRef

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  9. val delta: Double

    See AdaGrad, but again here it matters much less.

    Definition Classes
    AdaMira → AdaptiveLearningRate
  10. def doGradStep(weights: WeightsSet, gradient: WeightsMap, rate: Double): Unit

    Actually adds the gradient to the weights. ParameterAveraging overrides this.

    weights

    The weights

    gradient

    The gradient

    rate

    The learning rate

    Definition Classes
    GradientStep
  11. final def eq(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  12. def equals(arg0: Any): Boolean

    Definition Classes
    AnyRef → Any
  13. def finalize(): Unit

    Attributes
    protected[java.lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  14. def finalizeWeights(weights: WeightsSet): Unit

    Once learning is done, the weights should be copied back into normal tensors.

    weights

    The weights

    Definition Classes
    GradientStep → GradientOptimizer
  15. final def getClass(): Class[_]

    Definition Classes
    AnyRef → Any
  16. def hashCode(): Int

    Definition Classes
    AnyRef → Any
  17. def initializeWeights(weights: WeightsSet): Unit

    Some optimizers swap out weights with special-purpose tensors, e.g. for efficient scoring while learning.

    weights

    The weights

    Definition Classes
    AdaptiveLearningRate → GradientStep → GradientOptimizer
  18. def isConverged: Boolean

    Online optimizers generally don't converge.

    returns

    Always false

    Definition Classes
    GradientStep → GradientOptimizer
  19. final def isInstanceOf[T0]: Boolean

    Definition Classes
    Any
  20. var it: Int

    Definition Classes
    GradientStep
  21. def lRate(weights: WeightsSet, gradient: WeightsMap, value: Double): Double

    Override this method to change the learning rate. MarginScaled computes the MIRA step size here; see the sketch after the member list.

    weights

    The weights

    gradient

    The gradient

    value

    The value

    returns

    The learning rate

    Definition Classes
    MarginScaled → GradientStep
  22. final def ne(arg0: AnyRef): Boolean

    Definition Classes
    AnyRef
  23. final def notify(): Unit

    Definition Classes
    AnyRef
  24. final def notifyAll(): Unit

    Definition Classes
    AnyRef
  25. var printed: Boolean

    Definition Classes
    AdaptiveLearningRate
  26. def processGradient(weights: WeightsSet, gradient: WeightsMap): Unit

    Override this method to apply a transformation to the gradient before the optimizer takes a step. AdaptiveLearningRate performs its AdaGrad-style rescaling here; see the sketch after the member list.

    weights

    The weights

    gradient

    The gradient

    Definition Classes
    AdaptiveLearningRate → GradientStep
  27. val rate: Double

    See AdaGrad, but here it matters much less.

    Definition Classes
    AdaMira → AdaptiveLearningRate
  28. def reset(): Unit

    To override if you want to reset internal state.

    Definition Classes
    AdaptiveLearningRate → GradientStep → GradientOptimizer
  29. final def step(weights: WeightsSet, gradient: WeightsMap, value: Double): Unit

    Should not be overridden. The main flow of a GradientStep optimizer; see the sketch after the member list.

    weights

    The weights

    gradient

    The gradient

    value

    The value

    Definition Classes
    GradientStep → GradientOptimizer
  30. final def synchronized[T0](arg0: ⇒ T0): T0

    Definition Classes
    AnyRef
  31. def toString(): String

    Definition Classes
    AnyRef → Any
  32. final def wait(): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  33. final def wait(arg0: Long, arg1: Int): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  34. final def wait(arg0: Long): Unit

    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
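
The interplay of the final step method with the overridable hooks above is easiest to see as a whole. The following is a self-contained sketch of the control flow, not the actual FACTORIE source: WeightsSet and WeightsMap are replaced by Array[Double], and each hook carries the base behavior that mixins override.

    // Mirror of the GradientStep control flow (illustrative, not FACTORIE API).
    trait GradientStepSketch {
      var it = 0 // step counter, as in GradientStep.it

      // AdaptiveLearningRate overrides this with its AdaGrad-style rescaling.
      def processGradient(weights: Array[Double], gradient: Array[Double]): Unit = ()

      // MarginScaled overrides this to return the MIRA step size.
      def lRate(weights: Array[Double], gradient: Array[Double], value: Double): Double = 1.0

      // Base behavior: add the (scaled) gradient to the weights.
      def doGradStep(weights: Array[Double], gradient: Array[Double], rate: Double): Unit = {
        var i = 0
        while (i < weights.length) { weights(i) += gradient(i) * rate; i += 1 }
      }

      // The fixed flow of step: transform the gradient, pick a rate, take the step.
      final def step(weights: Array[Double], gradient: Array[Double], value: Double): Unit = {
        processGradient(weights, gradient)
        val rate = lRate(weights, gradient, value)
        doGradStep(weights, gradient, rate)
        it += 1
      }
    }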
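
For processGradient, AdaptiveLearningRate applies an AdaGrad-style per-coordinate rescaling. The sketch below shows the textbook form of that update over plain arrays rather than FACTORIE's tensor types; the names AdaGradScalerSketch and sumSq are illustrative, not FACTORIE API.

    // Textbook AdaGrad rescaling: each coordinate i gets its own effective
    // rate, rate / (delta + sqrt(sum of past gradient(i)^2)).
    class AdaGradScalerSketch(numWeights: Int, rate: Double, delta: Double) {
      private val sumSq = new Array[Double](numWeights) // accumulated squared gradients

      def rescale(gradient: Array[Double]): Unit = {
        var i = 0
        while (i < gradient.length) {
          sumSq(i) += gradient(i) * gradient(i)
          gradient(i) *= rate / (delta + math.sqrt(sumSq(i)))
          i += 1
        }
      }
    }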
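
For lRate, MarginScaled computes the classical MIRA (passive-aggressive) step size: the loss divided by the squared norm of the gradient, clipped to [0, C]. A sketch, assuming value carries the (non-positive) margin loss reported alongside the gradient; FACTORIE's exact sign convention may differ.

    object MiraRateSketch {
      // tau = min(C, loss / ||g||^2), clipped at zero, and zero when the
      // gradient vanishes. The sign flip reflects the non-positive `value`.
      def miraRate(gradientSqNorm: Double, value: Double, C: Double): Double =
        if (gradientSqNorm == 0.0) 0.0
        else math.max(0.0, math.min(C, -value / gradientSqNorm))
    }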
