ImLab Scilab Function
imkmeanstrain - k-means training phase: second step of image segmentation using the k-means clustering method.
Calling Sequence
[prototype] = imkmeanstrain(learningdata, classnb [, stopthreshold [, iterationnb]])
Parameters
- learningdata : set of pixel attribute values. This data is obtained with the 'imlearningdata' ImLab function.
- classnb : number of classes (or regions) that must be created during the segmentation.
- stopthreshold : threshold used to stop the training phase. It corresponds to the distance between the prototypes of two consecutive iterations; when the distance falls below this threshold, the prototype is considered stable. Default value is 1E-10.
- iterationnb : minimum number of iterations to execute. Default value is 10. The training phase stops only once at least 'iterationnb' iterations have been executed and 'stopthreshold' has been reached.
- prototype : representative values of the classes. If learningdata is a 2D matrix, prototype is also a 2D matrix where each row corresponds to an attribute of learningdata and each column corresponds to a class. If learningdata is a 3D hypermatrix, prototype is also a 3D hypermatrix where each plane corresponds to a pixel component of learningdata.
Description
Following the k-means clustering method, this function builds classes from the learning data and returns their representative values in a prototype. The number K of classes is user-defined.
Algorithm :
1. The centroids of the classes are initialized.
2. Each object of the learning data is assigned to the class whose centroid is closest.
3. When all objects have been assigned, the positions of the K centroids are recalculated.
Steps 2 and 3 are repeated until the centroids no longer move ('stopthreshold' is reached) and at least 'iterationnb' iterations have been executed. The resulting prototype is formed from the centroids of the last iteration. Since the result depends on the initial centroids, the algorithm is run several times (the centroids being initialized with values taken from the learning data) and the best prototype is kept (the one that minimizes the within-class variance and maximizes the between-class variance). The distance used is the Euclidean distance. An illustrative sketch of this training loop is given below.
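As an illustration only, the following Scilab sketch implements the three steps above for a single initialization. It is not the ImLab source code: the function name 'kmeans_sketch', the random initialization with 'grand', and the stopping test on the largest centroid displacement are assumptions, and the multiple restarts that select the best prototype are omitted.

// Illustrative k-means training loop (one initialization only).
// data: one row per attribute, one column per sample; K: number of classes.
function proto = kmeans_sketch(data, K, stopthreshold, iterationnb)
    [nattr, nsamp] = size(data);
    // Step 1: initialize the K centroids with samples taken from the data
    // (duplicates are possible; a real implementation would avoid them).
    proto = data(:, grand(1, K, "uin", 1, nsamp));
    it = 0;
    move = %inf;
    while (it < iterationnb) | (move > stopthreshold)
        // Step 2: assign each sample to the class with the closest centroid
        // (squared Euclidean distance gives the same assignment).
        labels = zeros(1, nsamp);
        for j = 1:nsamp
            d = sum((proto - data(:, j) * ones(1, K)).^2, 1);
            [dmin, kbest] = min(d);
            labels(j) = kbest;
        end
        // Step 3: recompute each centroid as the mean of its class members.
        newproto = proto;
        for k = 1:K
            members = find(labels == k);
            if ~isempty(members) then
                newproto(:, k) = mean(data(:, members), "c");
            end
        end
        // Simplified stability test: largest centroid displacement.
        move = max(abs(newproto - proto));
        proto = newproto;
        it = it + 1;
    end
endfunction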