Abstract—Vapnik’s quadratic programming (QP)-based support vector machine (SVM) is a state-of-the-art classifier that combines high accuracy with sparsity. Moving one step further in the direction of sparsity, Vapnik proposed another SVM that uses linear programming (LP) for the cost function. Compared with the more complex QP-based machine, this LP-based one is sparser while achieving similar accuracy, which is essential for working on any large dataset. Nevertheless, further sparsity is desirable both for computational savings and for handling very large and complicated datasets. In this paper, we accelerate the classification speed of Vapnik’s sparser LPSVM, while maintaining its complexity and accuracy, by applying computational techniques derived from the “unity outward deviation (ζ)” analysis of the kernel-computing vectors. Benchmarking shows that the proposed method reduces the classification cost of Vapnik’s sparser LPSVM by up to 63%. Despite this substantial reduction in classification cost, the proposed algorithm achieves classification accuracy quite similar to that of the most powerful state-of-the-art machines, for example, Vapnik’s QPSVM or Vapnik’s LPSVM, while being very simple to realize and applicable to any large dataset with high complexity.
Index Terms—Classifier, kernel, LP, QP, sparse, SVM.
The authors are with the Department of Electrical and Electronic Engineering, Uttara University, Bangladesh (e-mail: {rezkar, amit31416}@uttarauniversity.edu.bd).
Cite: Rezaul Karim and Amit Kumar Kundu, "Computational Analysis to Reduce Classification Cost Keeping High Accuracy of the Sparser LPSVM," International Journal of Machine Learning and Computing, vol. 9, no. 6, pp. 728-733, 2019.
Copyright © 2019 by the authors. This is an open access article distributed under the Creative Commons Attribution License which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited (CC BY 4.0).