CUV  0.9.201304091348
Function Optimization
Part of the special purpose functions module.

Functions

template<class V , class M , class L >
void cuv::libs::opt::softmax (cuv::tensor< V, M, L > &dst, const cuv::tensor< V, M, L > &src, unsigned int vardim=1)
 calculate softmax.
template<class V , class M , class L >
void cuv::libs::opt::softmax_derivative (cuv::tensor< V, M, L > &dst, const cuv::tensor< V, M, L > &softmax_act, const cuv::tensor< V, M, L > &residual, unsigned int vardim=1)
 calculate derivative of softmax.
template<class V , class M , class L >
void cuv::libs::opt::adagrad (tensor< V, M, L > &W, const tensor< V, M, L > &dW, tensor< V, M, L > &sW, const float &learnrate, const float &delta, const float &decay=0.0f, const float &sparsedecay=0.0f)
 Do a gradient update step using AdaGrad.
template<class V , class M , class L >
void cuv::libs::opt::rmsprop (tensor< V, M, L > &W, const tensor< V, M, L > &dW, tensor< V, M, L > &sW, const float &learnrate, const float &delta, const float &decay=0.0f, const float &sparsedecay=0.0f, const float &grad_avg=0.9f)
 Do a gradient update step using RMSPROP.

Detailed Description

Function Documentation

template<class V , class M , class L >
void cuv::libs::opt::adagrad ( tensor< V, M, L > &  W,
const tensor< V, M, L > &  dW,
tensor< V, M, L > &  sW,
const float &  learnrate,
const float &  delta,
const float &  decay = 0.0f,
const float &  sparsedecay = 0.0f 
)

Do a gradient update step using AdaGrad.

Parameters
    W            Destination matrix
    dW           The gradient of W; a tensor of the same shape as W.
    sW           The sum of the squared gradients for each component of W (therefore also the same shape as W).
    learnrate    Scalar learning rate
    delta        Added to the denominator of the AdaGrad update
    decay        (optional) Scalar L2 penalty
    sparsedecay  (optional) Scalar L1 penalty
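
A minimal sketch (not the CUV kernel) of the per-element update this routine performs, written for plain float arrays. The treatment of decay as a plain L2 gradient term is an assumption, and sparsedecay handling is omitted; parameter names mirror the list above.

    #include <cmath>
    #include <cstddef>

    // One AdaGrad step over n parameters (illustration only).
    void adagrad_step(float* W, const float* dW, float* sW, std::size_t n,
                      float learnrate, float delta, float decay = 0.0f)
    {
        for (std::size_t i = 0; i < n; ++i) {
            float g = dW[i] + decay * W[i];            // gradient plus (assumed) L2 penalty
            sW[i]  += g * g;                           // accumulate squared gradients
            W[i]   -= learnrate * g / (std::sqrt(sW[i]) + delta);
        }
    }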
template<class V , class M , class L >
void cuv::libs::opt::rmsprop ( tensor< V, M, L > &  W,
const tensor< V, M, L > &  dW,
tensor< V, M, L > &  sW,
const float &  learnrate,
const float &  delta,
const float &  decay = 0.0f,
const float &  sparsedecay = 0.0f,
const float &  grad_avg = 0.9f 
)

Do a gradient update step using RMSPROP.

Parameters
    W            Destination matrix
    dW           The gradient of W; a tensor of the same shape as W.
    sW           The averaged squared gradients for each component of W (therefore also the same shape as W).
    learnrate    Scalar learning rate
    delta        Added to the denominator of the RMSprop update
    decay        (optional) Scalar L2 penalty
    sparsedecay  (optional) Scalar L1 penalty
    grad_avg     Exponential averaging constant for the squared gradients (0.9 means most of the old average is kept)
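
The corresponding sketch for RMSprop (again plain arrays, not the CUV kernel): the squared gradients are averaged exponentially with grad_avg instead of summed. As above, the decay term is assumed to be a plain L2 penalty and sparsedecay handling is omitted.

    #include <cmath>
    #include <cstddef>

    // One RMSprop step over n parameters (illustration only).
    void rmsprop_step(float* W, const float* dW, float* sW, std::size_t n,
                      float learnrate, float delta,
                      float decay = 0.0f, float grad_avg = 0.9f)
    {
        for (std::size_t i = 0; i < n; ++i) {
            float g = dW[i] + decay * W[i];                           // gradient plus (assumed) L2 penalty
            sW[i]   = grad_avg * sW[i] + (1.0f - grad_avg) * g * g;   // running average of squared gradients
            W[i]   -= learnrate * g / (std::sqrt(sW[i]) + delta);
        }
    }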
template<class V , class M , class L >
void cuv::libs::opt::softmax ( cuv::tensor< V, M, L > &  dst,
const cuv::tensor< V, M, L > &  src,
unsigned int  vardim = 1 
)

calculate softmax.

Calculates the softmax function $S(\vec x)_i = \exp(x_i) / \sum_k \exp(x_k)$ for $m$ multinomial variables with $n$ values each.

Warning
    this /adds/ to the values already in dst, so you may need to zero dst first!
Parameters
    dst     the value of $S(\vec x)$, of size $n \times m$
    src     the input values to be softmaxed
    vardim  the dimension in which the variables are stored
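
A hedged usage sketch: applying the softmax over a device tensor holding $m$ variables with $n$ values each. The header paths, the cuv::extents construction, and the scalar assignment used to clear dst are assumptions about the surrounding cuv API; the call itself follows the signature above.

    #include <cuv.hpp>                // assumed umbrella header
    #include <cuv/libs/opt/opt.hpp>   // assumed header for cuv::libs::opt

    void softmax_example()
    {
        const unsigned int n = 10, m = 128;
        cuv::tensor<float, cuv::dev_memory_space> src(cuv::extents[n][m]);
        cuv::tensor<float, cuv::dev_memory_space> dst(cuv::extents[n][m]);
        // ... fill src with the raw scores ...
        dst = 0.f;                              // softmax adds to dst, so clear it first (assumed scalar fill)
        cuv::libs::opt::softmax(dst, src, 1);   // vardim = 1: variables stored along dimension 1
    }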
template<class V , class M , class L >
void cuv::libs::opt::softmax_derivative ( cuv::tensor< V, M, L > &  dst,
const cuv::tensor< V, M, L > &  softmax_act,
const cuv::tensor< V, M, L > &  residual,
unsigned int  vardim = 1 
)

calculate derivative of softmax.

Calculates the derivative of the softmax function $S(\vec x)_i = \exp(x_i) / \sum_k \exp(x_k)$ for $m$ multinomial variables with $n$ values each.

Warning
    this /adds/ to the values already in dst, so you may need to zero dst first!
Parameters
    dst          destination tensor of size $n \times m$
    softmax_act  the value of $S(\vec x)$, of size $n \times m$
    residual     the residual with respect to $S(\vec x)$, also of size $n \times m$
    vardim       the dimension in which the variables are stored
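
For reference, a small sketch of the standard softmax backward pass for a single variable of $n$ values, which is presumably the per-variable quantity this routine accumulates into dst: $dst_i \mathrel{+}= S_i (r_i - \sum_k r_k S_k)$, where $r$ is the residual. Plain arrays, illustration only.

    #include <cstddef>

    // Softmax Jacobian applied to the residual for one variable of n values.
    void softmax_derivative_1var(float* dst, const float* S,
                                 const float* residual, std::size_t n)
    {
        float dot = 0.0f;
        for (std::size_t i = 0; i < n; ++i)
            dot += residual[i] * S[i];
        for (std::size_t i = 0; i < n; ++i)
            dst[i] += S[i] * (residual[i] - dot);   // note: adds to dst, matching the warning above
    }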