The package evclass contains methods for evidential classification. An evidential classifier quantifies the uncertainty about the class of a pattern by a Dempster-Shafer mass function. In evidential distance-based classifiers, the mass functions are computed from distances between the test pattern and either training patterns or prototypes. The user is invited to read the papers cited in this vignette to get familiar with the main concepts underlying evidential classification. These papers can be downloaded from the author's web site, at https://www.hds.utc.fr/~tdenoeux. Here, we provide a short guided tour of the main functions in the evclass package. The two classification methods implemented to date are the evidential K-nearest neighbor (EK-NN) classifier (Denœux 1995; Zouhal and Denœux 1998) and the evidential neural network classifier (Denœux 2000).
You first need to install and load this package.
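If evclass is not already present on your system, it can be installed from CRAN:
install.packages("evclass")
The package is then loaded as follows: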
library(evclass)
The following sections give a brief introduction to the use of the functions in the package evclass for evidential classification.
The principle of the evidential K-nearest neighbor (EK-NN) classifier is explained in (Denœux 1995), and the optimization of the parameters of this model is presented in (Zouhal and Denœux 1998). The reader is referred to these references. Here, we focus on the practical application of this method using the functions implemented in evclass.
Consider, for instance, the ionosphere data. This dataset consists of 351 instances grouped in two classes and described by 34 numeric attributes. The first 176 instances are training data, the rest are test data. Let us first load the data, and split them into a training set and a test set.
data(ionosphere)
xtr<-ionosphere$x[1:176,]
ytr<-ionosphere$y[1:176]
xtst<-ionosphere$x[177:351,]
ytst<-ionosphere$y[177:351]
The EK-NN classifier is implemented as three functions: EkNNinit for initialization, EkNNfit for training, and EkNNval for testing. Let us initialize the classifier and train it on the ionosphere data, with \(K=5\) neighbors. (If the argument param is not passed to EkNNfit, the function EkNNinit is called inside EkNNfit; here, we make the call explicit for clarity.)
param0<- EkNNinit(xtr,ytr)
options=list(maxiter=300,eta=0.1,gain_min=1e-5,disp=FALSE)
fit<-EkNNfit(xtr,ytr,param=param0,K=5,options=options)
The list fit contains the optimized parameters, the final value of the cost function, the leave-one-out (LOO) error rate, the LOO predicted class labels, and the LOO predicted mass functions. Here, the LOO error rate and confusion matrix are:
print(fit$err)
## [1] 0.1079545
table(fit$ypred,ytr)
##    ytr
##       1  2
##   1 108 14
##   2   5 49
We can then evaluate the classifier on the test data:
val<-EkNNval(xtrain=xtr,ytrain=ytr,xtst=xtst,K=5,ytst=ytst,param=fit$param)
print(val$err)
## [1] 0.1142857
table(val$ypred,ytst)
##    ytst
##       1  2
##   1 107 15
##   2   5 48
To determine the best value of \(K\), we may compute the LOO error rate for different candidate values. Here, we try all values between 1 and 15:
err<-rep(0,15)
for(K in 1:15){
  fit<-EkNNfit(xtr,ytr,K,options=list(maxiter=100,eta=0.1,gain_min=1e-5,disp=FALSE))
  err[K]<-fit$err
}
plot(1:15,err,type="b",xlab='K',ylab='LOO error rate')
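As a complement to the plot, the value of \(K\) achieving the smallest LOO error rate can also be retrieved directly with base R's which.min:
which.min(err)  # index of the smallest LOO error rate, i.e. the selected number of neighbors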
The minimum LOO error rate is obtained for \(K=8\). The test error rate and confusion matrix for that value of \(K\) are obtained as follows:
fit<-EkNNfit(xtr,ytr,K=8,options=list(maxiter=100,eta=0.1,gain_min=1e-5,disp=FALSE))
val<-EkNNval(xtrain=xtr,ytrain=ytr,xtst=xtst,K=8,ytst=ytst,param=fit$param)
print(val$err)
## [1] 0.09142857
table(val$ypred,ytst)
##    ytst
##       1  2
##   1 106 10
##   2   6 53
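The list returned by EkNNval also contains the predicted mass functions for the test patterns. Assuming these are stored in a component named m (check ?EkNNval for the exact output structure), the first few can be inspected as follows:
head(val$m)  # first test mass functions (component name and layout assumed; see ?EkNNval)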
The evidential neural network classifier is described in (Denœux 2000). In this classifier, the output mass functions are based on distances to prototypes, which allows for faster classification. The prototypes and their class-membership degrees are learnt by minimizing a cost function, defined as the sum of an error term and, optionally, a regularization term. As for the EK-NN classifier, the evidential neural network classifier is implemented as three functions: proDSinit for initialization, proDSfit for training, and proDSval for evaluation.
Let us demonstrate this method on the glass dataset. This dataset contains 185 instances, which can be split into 89 training instances and 96 test instances.
data(glass)
xtr<-glass$x[1:89,]
ytr<-glass$y[1:89]
xtst<-glass$x[90:185,]
ytst<-glass$y[90:185]
We then initialize a network with 7 prototypes:
param0<-proDSinit(xtr,ytr,nproto=7,nprotoPerClass=FALSE,crisp=FALSE)
and train this network without regularization:
options<-list(maxiter=500,eta=0.1,gain_min=1e-5,disp=20)
fit<-proDSfit(x=xtr,y=ytr,param=param0,options=options)
## [1] 1.0000000 0.3582862 10.0000000
## [1] 21.0000000 0.2933023 1.2426672
## [1] 41.0000000 0.2263345 0.2205550
## [1] 61.00000000 0.19483767 0.04260374
## [1] 81.000000000 0.188954121 0.007109284
## [1] 1.010000e+02 1.884715e-01 1.285468e-03
## [1] 1.210000e+02 1.883760e-01 3.389789e-04
## [1] 1.410000e+02 1.881880e-01 1.417006e-04
Finally, we evaluate the performance of the network on the test set:
val<-proDSval(xtst,fit$param,ytst)
print(val$err)
## [1] 0.28125
table(ytst,val$ypred)
##
## ytst  1  2  4
##    1 32  8  0
##    2  5 28  4
##    3  5  3  0
##    4  0  2  9
If the training is done with regularization, the hyperparameter mu needs to be determined by cross-validation.
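A minimal sketch of such a procedure is given below: it runs a 5-fold cross-validation over a small grid of candidate values, assuming that proDSfit exposes the regularization hyperparameter through an argument named mu (see ?proDSfit for the exact interface), and reuses the options list defined above.
mu.grid<-c(0,0.001,0.01,0.1)                 # candidate values (assumed grid)
v<-5                                         # number of folds
folds<-sample(rep(1:v,length.out=nrow(xtr))) # random fold assignment
cv.err<-rep(0,length(mu.grid))
for(i in seq_along(mu.grid)){
  for(k in 1:v){
    # train on v-1 folds with regularization, evaluate on the held-out fold
    init<-proDSinit(xtr[folds!=k,],ytr[folds!=k],nproto=7,nprotoPerClass=FALSE,crisp=FALSE)
    fitk<-proDSfit(x=xtr[folds!=k,],y=ytr[folds!=k],param=init,mu=mu.grid[i],options=options)
    valk<-proDSval(xtr[folds==k,],fitk$param,ytr[folds==k])
    cv.err[i]<-cv.err[i]+valk$err/v
  }
}
mu.grid[which.min(cv.err)]                   # value with the lowest cross-validated error rate
The retained value can then be used to retrain the network on the whole training set before evaluation on the test set.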
Denœux, T. 1995. “A \(k\)-Nearest Neighbor Classification Rule Based on Dempster-Shafer Theory.” IEEE Trans. on Systems, Man and Cybernetics 25 (5): 804–13.
———. 2000. “A Neural Network Classifier Based on Dempster-Shafer Theory.” IEEE Trans. on Systems, Man and Cybernetics A 30 (2): 131–50.
Zouhal, L. M., and T. Denœux. 1998. “An Evidence-Theoretic \(k\)-NN Rule with Parameter Optimization.” IEEE Trans. on Systems, Man and Cybernetics C 28 (2): 263–71.