
i.svm.train

Train a SVM

Train a Support Vector Machine

i.svm.train [-sp] group=name [subgroup=name] trainingmap=name signaturefile=name [type=name] [kernel=name] [cache=cache size] [degree=value] [gamma=value] [coef0=value] [eps=value] [cost=value] [nu=value] [p=value] [--overwrite] [--verbose] [--quiet] [--qq] [--ui]

Example:

i.svm.train group=name trainingmap=name signaturefile=name

grass.script.run_command("i.svm.train", group, subgroup=None, trainingmap, signaturefile, type="c_svc", kernel="rbf", cache=512, degree=3, gamma=1, coef0=0, eps=None, cost=1, nu=0.5, p=0.1, flags=None, overwrite=False, verbose=False, quiet=False, superquiet=False)

Example:

gs.run_command("i.svm.train", group="name", trainingmap="name", signaturefile="name")

Parameters

group=name [required]
    Maps with feature values (attributes)
subgroup=name
    Name of input imagery subgroup
trainingmap=name [required]
    Map with training labels or target values
signaturefile=name [required]
    Name for output file containing result signatures
type=name
    Type of SVM
    Allowed values: c_svc, nu_svc, one_class, epsilon_svr, nu_svr
    Default: c_svc
    c_svc: C-SVM classification
    nu_svc: nu-SVM classification
    one_class: one-class SVM
    epsilon_svr: epsilon-SVM regression
    nu_svr: nu-SVM regression
kernel=name
    SVM kernel type
    Allowed values: linear, poly, rbf, sigmoid
    Default: rbf
    linear: u'*v
    poly: (gamma*u'*v + coef0)^degree
    rbf: exp(-gamma*|u-v|^2)
    sigmoid: tanh(gamma*u'*v + coef0)
cache=cache size
    LIBSVM kernel cache size in MB
    Allowed values: 1-
    Default: 512
degree=value
    Degree in kernel function
    Allowed values: 0-
    Default: 3
gamma=value
    Gamma in kernel function
    Default: 1
coef0=value
    coef0 in kernel function
    Default: 0
eps=value
    Tolerance of termination criterion
    Defaults to 0.00001 for nu-SVC and 0.001 for others
cost=value
    Cost of constraints violation
    The parameter C of C-SVC, epsilon-SVR, and nu-SVR
    Default: 1
nu=value
    The parameter nu of nu-SVC, one-class SVM, and nu-SVR
    Default: 0.5
p=value
    The epsilon in epsilon-insensitive loss function of epsilon-SVM regression
    Default: 0.1
-s
    Do not use the shrinking heuristics
    Defaults to use the shrinking heuristics
-p
    Train a SVC or SVR model for probability estimates
    Defaults to no probabilities in model
--overwrite
    Allow output files to overwrite existing files
--help
    Print usage summary
--verbose
    Verbose module output
--quiet
    Quiet module output
--qq
    Very quiet module output
--ui
    Force launching GUI dialog

group : str, required
    Maps with feature values (attributes)
    Used as: input, group, name
subgroup : str, optional
    Name of input imagery subgroup
    Used as: input, subgroup, name
trainingmap : str, required
    Map with training labels or target values
    Used as: input, raster, name
signaturefile : str, required
    Name for output file containing result signatures
    Used as: output, sigfile, name
type : str, optional
    Type of SVM
    Used as: name
    Allowed values: c_svc, nu_svc, one_class, epsilon_svr, nu_svr
    c_svc: C-SVM classification
    nu_svc: nu-SVM classification
    one_class: one-class SVM
    epsilon_svr: epsilon-SVM regression
    nu_svr: nu-SVM regression
    Default: c_svc
kernel : str, optional
    SVM kernel type
    Used as: name
    Allowed values: linear, poly, rbf, sigmoid
    linear: u'*v
    poly: (gamma*u'*v + coef0)^degree
    rbf: exp(-gamma*|u-v|^2)
    sigmoid: tanh(gamma*u'*v + coef0)
    Default: rbf
cache : int, optional
    LIBSVM kernel cache size in MB
    Used as: cache size
    Allowed values: 1-
    Default: 512
degree : int, optional
    Degree in kernel function
    Used as: value
    Allowed values: 0-
    Default: 3
gamma : float, optional
    Gamma in kernel function
    Used as: value
    Default: 1
coef0 : float, optional
    coef0 in kernel function
    Used as: value
    Default: 0
eps : float, optional
    Tolerance of termination criterion
    Defaults to 0.00001 for nu-SVC and 0.001 for others
    Used as: value
cost : float, optional
    Cost of constraints violation
    The parameter C of C-SVC, epsilon-SVR, and nu-SVR
    Used as: value
    Default: 1
nu : float, optional
    The parameter nu of nu-SVC, one-class SVM, and nu-SVR
    Used as: value
    Default: 0.5
p : float, optional
    The epsilon in epsilon-insensitive loss function of epsilon-SVM regression
    Used as: value
    Default: 0.1
flags : str, optional
    Allowed values: s, p
    s
        Do not use the shrinking heuristics
        Defaults to use the shrinking heuristics
    p
        Train a SVC or SVR model for probability estimates
        Defaults to no probabilities in model
overwrite: bool, optional
    Allow output files to overwrite existing files
    Default: False
verbose: bool, optional
    Verbose module output
    Default: False
quiet: bool, optional
    Quiet module output
    Default: False
superquiet: bool, optional
    Very quiet module output
    Default: False
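
As a hedged illustration of combining the non-default options above, an epsilon-SVR regression model could be requested as follows; the map names are placeholders and the parameter values are arbitrary examples, not recommendations.

import grass.script as gs

# Train an epsilon-SVR model with a polynomial kernel. All names and
# values below are illustrative placeholders.
gs.run_command(
    "i.svm.train",
    group="name",
    subgroup="name",
    trainingmap="name",
    signaturefile="name",
    type="epsilon_svr",
    kernel="poly",
    degree=3,
    gamma=0.5,
    coef0=1,
    cost=10,
    p=0.2,
)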

DESCRIPTION

i.svm.train finds parameters for a Support Vector Machine and stores them in a signature file for later use by i.svm.predict.

Internally, the module rescales the values of each raster in the imagery group by mean normalisation, based on the minimum and maximum values present in the raster metadata. The rescaling parameters are written into the signature file for use during prediction.
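
As an illustration only, mean normalisation using just the metadata minimum and maximum could be sketched as below; the exact formula used internally by i.svm.train is an assumption here, not taken from the source.

import numpy as np

# Illustrative sketch: approximate the mean from the metadata range and
# rescale values around it. The real rescaling in i.svm.train may differ.
def mean_normalise(values, vmin, vmax):
    centre = (vmin + vmax) / 2.0   # mean approximated from min/max only
    return (values - centre) / (vmax - vmin)

band = np.array([0.0, 64.0, 128.0, 255.0])
print(mean_normalise(band, vmin=0.0, vmax=255.0))  # values roughly in [-0.5, 0.5]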

NOTES

i.svm.train uses LIBSVM internally. For an introduction to value prediction and estimation with LIBSVM, see A Practical Guide to Support Vector Classification by Chih-Wei Hsu, Chih-Chung Chang, and Chih-Jen Lin.

It is strongly recommended to set semantic labels for each raster map in the training data (feature value) imagery group; use r.support to set them.
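
For example, semantic labels could be assigned with r.support through the Python API; the raster names and label strings below are illustrative only, not a recommendation for any particular sensor.

import grass.script as gs

# Assign semantic labels to the feature rasters so that i.svm.predict can
# match prediction-time rasters to the same bands. Names and labels below
# are placeholders; use labels appropriate for your data.
bands = {
    "lsat7_2002_10": "L7_1",
    "lsat7_2002_20": "L7_2",
    "lsat7_2002_30": "L7_3",
}
for raster, label in bands.items():
    gs.run_command("r.support", map=raster, semantic_label=label)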

PERFORMANCE

SVM training loads all training data into memory. For large input raster files, use sparse label rasters (e.g. raster points or small patches instead of uninterrupted cover).

No progress output is printed during training. Training with a large number of data points can take significant time; just be patient.

By default, the LIBSVM shrinking heuristics are enabled. They should not affect the outcome, only the training time. For some combinations of input parameters and data, training may be faster with the shrinking heuristics disabled (flag -s).

The cache parameter sets the maximum memory allocated for kernel caching, which speeds up computation. Note that the module's actual memory consumption may differ from this setting, as it only affects LIBSVM's internal cache. The cache is filled on an as-needed basis and will typically not reach the specified value.
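
As a sketch, both performance-related options can be set directly in the training call; the values below are arbitrary examples, not tuned recommendations.

import grass.script as gs

# Disable the shrinking heuristics (flag -s) and raise the LIBSVM kernel
# cache limit; the cache value is an upper bound, not a fixed allocation.
gs.run_command(
    "i.svm.train",
    group="lsat7_2002",
    subgroup="res_30m",
    trainingmap="training",
    signaturefile="landuse96_rnd_points",
    cache=2048,
    flags="s",
)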

EXAMPLE

This is the first part of the classification process. See i.svm.predict for the second part.

Train an SVM to identify land use classes according to the 1996 land use map landuse96_28m and then classify a LANDSAT scene from October 2002. The example requires the nc_spm_08 dataset.

# Align computation region to the scene
g.region raster=lsat7_2002_10 -p

# store VIS, NIR, MIR bands into a group/subgroup
i.group group=lsat7_2002 subgroup=res_30m \
    input=lsat7_2002_10,lsat7_2002_20,lsat7_2002_30,lsat7_2002_40,lsat7_2002_50,lsat7_2002_70

# Now digitize training areas "training" with the digitizer
# and convert to raster model with v.to.rast
v.to.rast input=training output=training use=cat label_column=label
# If you are just playing around and do not care about the accuracy of the outcome,
# just use one of the existing maps instead, e.g.
# r.random input=landuse96_28m npoints=10000 raster=training -s

# Train the SVM
i.svm.train group=lsat7_2002 subgroup=res_30m \
    trainingmap=training signaturefile=landuse96_rnd_points

# Go to i.svm.predict for the next step.
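
The same workflow can also be scripted with the Python API; this is a minimal sketch of the steps above, assuming the digitized training vector map already exists.

import grass.script as gs

# Align the computational region to the scene
gs.run_command("g.region", raster="lsat7_2002_10", flags="p")

# Store the bands in a group/subgroup
gs.run_command(
    "i.group",
    group="lsat7_2002",
    subgroup="res_30m",
    input="lsat7_2002_10,lsat7_2002_20,lsat7_2002_30,"
    "lsat7_2002_40,lsat7_2002_50,lsat7_2002_70",
)

# Rasterize the digitized training areas (or use r.random as noted above)
gs.run_command(
    "v.to.rast",
    input="training",
    output="training",
    use="cat",
    label_column="label",
)

# Train the SVM
gs.run_command(
    "i.svm.train",
    group="lsat7_2002",
    subgroup="res_30m",
    trainingmap="training",
    signaturefile="landuse96_rnd_points",
)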

SEE ALSO

Predict values: i.svm.predict
Set semantic labels: r.support
Other classification modules: i.maxlik, i.smap

LIBSVM home page: LIBSVM - A Library for Support Vector Machines

REFERENCES

Please cite both LIBSVM and i.svm.

  • For i.svm.* modules:
    Nartiss, M., & Melniks, R. (2023). Improving pixel-based classification of GRASS GIS with support vector machine. Transactions in GIS, 00, 1–16. https://doi.org/10.1111/tgis.13102
  • For LIBSVM:
    Chang, C.-C., & Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Transactions on Intelligent Systems and Technology, 2, 27:1–27:27.

AUTHOR

Maris Nartiss, University of Latvia.

SOURCE CODE

Available at: i.svm.train source code (history)
Latest change: Wednesday Mar 19 14:27:15 2025 in commit a56e76f