yat 0.20.3pre
Support Vector Machine.
#include <yat/classifier/SVM.h>
Public Member Functions

SVM(void)
    Constructor.
SVM(const SVM&)
    Copy constructor.
virtual ~SVM()
    Destructor.
SVM* make_classifier(void) const
    Create an untrained copy of SVM.
const utility::Vector& alpha(void) const
double C(void) const
unsigned long int max_epochs(void) const
void max_epochs(unsigned long int)
    Set maximal number of epochs in training.
const theplu::yat::utility::Vector& output(void) const
void predict(const KernelLookup& input, utility::Matrix& predict) const
void reset(void)
    Make SVM untrained.
void set_C(const double)
    Sets the C-parameter.
void train(const KernelLookup& kernel, const Target& target)
bool trained(void) const
Support Vector Machine.

Class for SVM using Keerthi's second modification of Platt's Sequential Minimal Optimization. The SVM uses all data given for training.
const utility::Vector& theplu::yat::classifier::SVM::alpha(void) const
double theplu::yat::classifier::SVM::C(void) const
The C-parameter is the balance term (see train()). A very large C means the training will be focused on getting samples correctly classified, with a risk of overfitting and poor generalisation. A too small C will result in a training in which misclassifications are not penalized. C is weighted with respect to the class sizes such that $n_+ C_+ = n_- C_-$, meaning a misclassification of the smaller group is penalized harder. This balance is equivalent to the one occurring for regression with regularisation, or ANN training with a weight-decay term. Default is C set to infinity.
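As a minimal sketch (not taken from the yat distribution), the balance term could be set to a finite value before training; the value 1.0 below is purely illustrative.

#include <yat/classifier/SVM.h>

// Illustrative only: give the SVM a finite C so misclassifications are
// penalized but tolerated; by default C is infinity.
void configure_balance(theplu::yat::classifier::SVM& svm)
{
  svm.set_C(1.0);   // 1.0 is an arbitrary example value, not a recommendation
}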
SVM* theplu::yat::classifier::SVM::make_classifier(void) const
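A short usage sketch, assuming the caller takes ownership of the returned pointer (this page does not state the ownership rule):

#include <memory>
#include <yat/classifier/SVM.h>

using theplu::yat::classifier::SVM;

// Create an untrained copy of an existing SVM; the unique_ptr reflects the
// assumption that the caller must delete the returned object.
std::unique_ptr<SVM> clone_untrained(const SVM& svm)
{
  return std::unique_ptr<SVM>(svm.make_classifier());
}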
unsigned long int theplu::yat::classifier::SVM::max_epochs(void) const
Default is max_epochs set to 100,000.
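A small sketch of how the limit could be raised before training; the factor of two is an arbitrary choice.

#include <yat/classifier/SVM.h>

// Double the epoch limit (default 100,000) before training a hard problem;
// train() throws if the limit is reached before convergence.
void raise_epoch_limit(theplu::yat::classifier::SVM& svm)
{
  svm.max_epochs(2 * svm.max_epochs());
}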
const theplu::yat::utility::Vector& theplu::yat::classifier::SVM::output(void) const
The output is calculated as $o_i = \sum_j \alpha_j t_j K_{ij} + \mathrm{bias}$, where $t$ is the target.
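A standalone sketch of the formula above using plain std::vector instead of the yat classes, only to make the indexing explicit; alpha, target, kernel_row_i and bias are hypothetical inputs.

#include <cstddef>
#include <vector>

// o_i = sum_j alpha_j * t_j * K_ij + bias, evaluated for one sample i.
double svm_output(const std::vector<double>& alpha,
                  const std::vector<double>& target,       // t_j in {-1, +1}
                  const std::vector<double>& kernel_row_i, // K_ij for fixed i
                  double bias)
{
  double o = bias;
  for (std::size_t j = 0; j < alpha.size(); ++j)
    o += alpha[j] * target[j] * kernel_row_i[j];
  return o;
}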
void theplu::yat::classifier::SVM::predict(const KernelLookup& input, utility::Matrix& predict) const
Generate prediction predict from input. The prediction is calculated as the output times the margin, i.e., the geometric distance from the decision hyperplane: $\frac{\sum_j \alpha_j t_j K_{ij} + \mathrm{bias}}{|w|}$. The output has 2 rows. The first row is for binary target true, and the second is for binary target false. The second row is superfluous as it is the first row negated; it exists just to be aligned with multi-class SupervisedClassifiers. Each column in input and output corresponds to a sample to predict. Each row in input corresponds to a training sample, and more exactly row i in input should correspond to row i in the KernelLookup that was used for training.
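A usage sketch under two assumptions not stated on this page: the KernelLookup and Matrix header paths, and that predict() sizes the result matrix to 2 x (number of samples).

#include <cstddef>
#include <yat/classifier/KernelLookup.h>
#include <yat/classifier/SVM.h>
#include <yat/utility/Matrix.h>

using namespace theplu::yat;

// Run prediction with a trained SVM and read the row for binary target true.
void predict_samples(const classifier::SVM& svm,
                     const classifier::KernelLookup& input)
{
  utility::Matrix result;
  svm.predict(input, result);                 // assumed to become 2 x n
  for (std::size_t col = 0; col < result.columns(); ++col) {
    double score_true = result(0, col);       // row 0: binary target true
    (void)score_true;                         // use the score as needed
  }
}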
void theplu::yat::classifier::SVM::reset(void)
Make SVM untrained.
Sets the variable trained to false; other variables are undefined.
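A minimal sketch of the intended effect:

#include <cassert>
#include <yat/classifier/SVM.h>

// Discard the result of any earlier train() call; afterwards the SVM
// reports itself as untrained.
void discard_training(theplu::yat::classifier::SVM& svm)
{
  svm.reset();
  assert(!svm.trained());
}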
void theplu::yat::classifier::SVM::train(const KernelLookup& kernel, const Target& target)
Training the SVM follows Platt's SMO, with Keerthi's modification. Minimizing $\frac{1}{2}\sum_{i,j} y_i y_j \alpha_i \alpha_j \left(K_{ij} + \frac{1}{C_i}\delta_{ij}\right) - \sum_i \alpha_i$, which corresponds to minimizing $\sum_i w_i^2 + \sum_i C_i \xi_i^2$.
Exceptions: utility::runtime_error if the maximal number of epochs is reached.
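A usage sketch, assuming the KernelLookup and Target header paths and that utility::runtime_error derives from std::runtime_error; neither assumption is stated on this page.

#include <stdexcept>
#include <yat/classifier/KernelLookup.h>
#include <yat/classifier/SVM.h>
#include <yat/classifier/Target.h>

using namespace theplu::yat::classifier;

// Train on an already constructed kernel lookup and target; return whether
// the SVM converged within max_epochs.
bool train_svm(SVM& svm, const KernelLookup& kernel, const Target& target)
{
  try {
    svm.train(kernel, target);            // SMO training on all given data
  }
  catch (const std::runtime_error&) {     // thrown if max_epochs is reached
    return false;
  }
  return svm.trained();
}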
bool theplu::yat::classifier::SVM::trained(void) const