CUV  0.9.201304091348
BLAS1 -- Vector/Vector operations

Modules

 Pointwise functors on vectors
 Functors reducing a vector to a scalar

Functions

template<class __value_type , class __memory_space_type , class S >
void cuv::rprop (tensor< __value_type, __memory_space_type > &W, tensor< __value_type, __memory_space_type > &dW, tensor< S, __memory_space_type > &dW_old, tensor< __value_type, __memory_space_type > &rate, const float &decay=0.0f, const float &sparsedecay=0.0f)
 Does a gradient descent step using the "RPROP" algorithm.
template<class __value_type , class __memory_space_type , class S >
void cuv::rprop (tensor< __value_type, __memory_space_type, column_major > &W, tensor< __value_type, __memory_space_type, column_major > &dW, tensor< S, __memory_space_type, column_major > &dW_old, tensor< __value_type, __memory_space_type, column_major > &rate, const float &decay=0.0f, const float &sparsedecay=0.0f)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It casts column major to row major, since it works on linear memory anyway.
template<class __value_type , class __memory_space_type >
void cuv::learn_step_weight_decay (tensor< __value_type, __memory_space_type > &W, const tensor< __value_type, __memory_space_type > &dW, const float &learnrate, const float &decay=0.0f, const float &sparsedecay=0.0f)
 Do a step of gradient descent with optional weight decay.
template<class V , class M >
void cuv::learn_step_weight_decay_momentum (tensor< V, M > &W, tensor< V, M > &momentum, const tensor< V, M > &dW, const float &learnrate, const float &momentum_weight=0.9, const float &decay=0.0f, const float &sparsedecay=0.0f)
 Same as learn_step_weight_decay, but with momentum.
template<class __value_type , class __memory_space_type >
void cuv::learn_step_weight_decay (tensor< __value_type, __memory_space_type, column_major > &W, const tensor< __value_type, __memory_space_type, column_major > &dW, const float &learnrate, const float &decay=0.0f, const float &sparsedecay=0.0f)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. It casts column major to row major, since it works on linear memory anyway.

Function Documentation

template<class __value_type , class __memory_space_type >
void cuv::learn_step_weight_decay ( tensor< __value_type, __memory_space_type > &  W,
const tensor< __value_type, __memory_space_type > &  dW,
const float &  learnrate,
const float &  decay = 0.0f,
const float &  sparsedecay = 0.0f 
)

Do a step of gradient descent with optional weight decay.

Parameters
W            Destination matrix
dW           Direction of gradient descent. Vector of the same size as W.
learnrate    Scalar learning rate
decay        Scalar L2 weight decay (cost) parameter
sparsedecay  Scalar L1 weight decay (cost) parameter

Calculates W = (1-decay*learnrate) * W + learnrate * dW
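
A minimal usage sketch (not taken from the library; it assumes an umbrella header such as <cuv.hpp>, tensor construction via cuv::extents, and cuv::fill from the pointwise functors module):

    #include <cuv.hpp>                       // assumed umbrella header; adjust to your installation

    int main(){
        using namespace cuv;
        tensor<float, dev_memory_space> W (extents[1000]);   // parameters, updated in place
        tensor<float, dev_memory_space> dW(extents[1000]);   // step direction, same size as W

        fill(W,  0.1f);    // placeholder initialization
        fill(dW, 0.01f);   // placeholder for a real gradient

        // one step: W = (1 - decay*learnrate) * W + learnrate * dW
        learn_step_weight_decay(W, dW, 0.01f, 0.0005f /* L2 decay */, 0.0f /* L1 decay */);
        return 0;
    }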

template<class V , class M >
void cuv::learn_step_weight_decay_momentum ( tensor< V, M > &  W,
tensor< V, M > &  momentum,
const tensor< V, M > &  dW,
const float &  learnrate,
const float &  momentum_weight = 0.9,
const float &  decay = 0.0f,
const float &  sparsedecay = 0.0f 
)

Same as learn_step_weight_decay, but with momentum.

Parameters
W                Destination matrix
momentum         The accumulated momentum (IN and OUT)
dW               Direction of gradient descent. Vector of the same size as W.
learnrate        Scalar learning rate
momentum_weight  How strongly to rely on the accumulated momentum
decay            Scalar L2 weight decay (cost) parameter
sparsedecay      Scalar L1 weight decay (cost) parameter
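
A minimal usage sketch, under the same assumptions as above; the momentum tensor is typically zero-initialized once and then carried across calls:

    #include <cuv.hpp>                       // assumed umbrella header

    int main(){
        using namespace cuv;
        tensor<float, dev_memory_space> W(extents[1000]), momentum(extents[1000]), dW(extents[1000]);
        fill(W, 0.1f);
        fill(momentum, 0.0f);                // accumulator, updated in place on every call

        for(int epoch = 0; epoch < 10; ++epoch){
            fill(dW, 0.01f);                 // placeholder for a real gradient computation
            learn_step_weight_decay_momentum(W, momentum, dW,
                                             0.01f,   // learnrate
                                             0.9f,    // momentum_weight
                                             0.0005f, // L2 decay
                                             0.0f);   // L1 decay
        }
        return 0;
    }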
template<class __value_type , class __memory_space_type , class S >
void cuv::rprop ( tensor< __value_type, __memory_space_type > &  W,
tensor< __value_type, __memory_space_type > &  dW,
tensor< S, __memory_space_type > &  dW_old,
tensor< __value_type, __memory_space_type > &  rate,
const float &  decay = 0.0f,
const float &  sparsedecay = 0.0f 
)

Does a gradient descent step using the "RPROP" algorithm.

Parameters
W            Destination tensor
dW           Direction of gradient descent. Vector of the same size as W.
dW_old       Direction of gradient descent in the previous step. Vector of the same size as W.
rate         Vector of the same size as W containing a separate learn rate for each entry.
decay        Scalar L2 weight decay (cost) parameter
sparsedecay  Scalar L1 weight decay (cost) parameter
    Updates W according to the "RPROP" algorithm.
    Calculates W = (1 - decay*rate) * W + rate * sign(dW),
    where all multiplications are pointwise.
    rate and dW_old are also updated at each step.
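
A minimal usage sketch, under the same assumptions as above; the element type S of dW_old is assumed here to be signed char, which is enough to hold the sign of the previous gradient:

    #include <cuv.hpp>                         // assumed umbrella header

    int main(){
        using namespace cuv;
        const unsigned n = 1000;
        tensor<float,       dev_memory_space> W(extents[n]), dW(extents[n]), rate(extents[n]);
        tensor<signed char, dev_memory_space> dW_old(extents[n]);  // sign of the previous gradient, per entry

        fill(W,      0.1f);
        fill(rate,   0.001f);                  // per-entry learn rates, adapted by rprop on every call
        fill(dW_old, (signed char)0);

        for(int epoch = 0; epoch < 10; ++epoch){
            fill(dW, 0.01f);                   // placeholder for a real gradient computation
            rprop(W, dW, dW_old, rate);        // decay and sparsedecay default to 0
        }
        return 0;
    }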