
Fast Support Vector Classification for Large-Scale Problems

IEEE Transactions on Pattern Analysis and Machine Intelligence (2022)

Abstract
The support vector machine (SVM) is a very important machine learning algorithm with state-of-the-art performance on many classification problems. However, on large datasets it is very slow and requires much memory. To address this deficiency, we propose the fast support vector classifier (FSVC), which includes: 1) an efficient closed-form training procedure free of any numerical iteration; 2) a small collection of class prototypes that avoids storing an excessive number of support vectors in memory; and 3) a fast method that selects the spread of the radial basis function kernel directly from the data, without classifier execution or iterative hyper-parameter tuning. The memory requirements of FSVC are very low: it spends on average only $6 \cdot 10^{-7}$ sec. per pattern, input and class, and processes datasets of up to 31 million patterns, 30,000 inputs and 131 classes in less than 1.5 hours (less than 3 hours with only 2 GB of RAM). On average, FSVC is 10 times faster, requires 12 times less memory and achieves 4.7 percent higher performance than Liblinear, which fails on the 4 largest datasets for lack of memory; it is 100 times faster than Libsvm while achieving only 6.7 percent lower performance. The time spent by FSVC depends only on the dataset size and can therefore be accurately estimated for new datasets, whereas Libsvm and Liblinear are much slower on "difficult" datasets, even small ones. FSVC adjusts its requirements to the available memory, classifying large datasets on computers with limited memory. Code for the proposed algorithm in the Octave scientific programming language is provided.
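
The abstract does not give the formulas for the three components, so the following is only an illustrative sketch in Octave (the language of the paper's released code), not the authors' algorithm. It substitutes common stand-ins: random per-class subsampling for the prototype selection, the median pairwise-distance heuristic for the data-driven kernel-spread selection, and a regularized least-squares (ridge) solve for the closed-form, iteration-free training. All function names are hypothetical.

1;  % script file: function definitions first, demo at the end

function D2 = sq_dist (A, B)
  % Squared Euclidean distances between the rows of A and the rows of B.
  D2 = max (sum (A.^2, 2) + sum (B.^2, 2)' - 2 * A * B', 0);
endfunction

function model = fsvc_sketch_train (X, y, n_proto, lambda)
  % X: n x d patterns, y: n x 1 integer labels.
  classes = unique (y);
  P = [];
  for c = classes'
    Xc = X(y == c, :);
    idx = randperm (rows (Xc), min (n_proto, rows (Xc)));
    P = [P; Xc(idx, :)];                        % prototypes for class c
  endfor
  % Spread chosen directly from data: median pairwise prototype distance.
  D2 = sq_dist (P, P);
  sigma = sqrt (median (D2(D2 > 0)));
  K = exp (-sq_dist (X, P) / (2 * sigma^2));    % n x m RBF feature matrix
  Y = 2 * (y == classes') - 1;                  % one-vs-rest +/-1 targets
  m = columns (K);
  W = (K' * K + lambda * eye (m)) \ (K' * Y);   % closed-form ridge solve
  model = struct ('P', P, 'sigma', sigma, 'W', W, 'classes', classes);
endfunction

function yhat = fsvc_sketch_predict (model, X)
  K = exp (-sq_dist (X, model.P) / (2 * model.sigma^2));
  [~, i] = max (K * model.W, [], 2);            % highest one-vs-rest score
  yhat = model.classes(i);
endfunction

% Demo on synthetic data: two Gaussian blobs.
X = [randn(100, 2) + 2; randn(100, 2) - 2];
y = [ones(100, 1); 2 * ones(100, 1)];
model = fsvc_sketch_train (X, y, 20, 1e-3);
printf ("training accuracy: %.2f\n", mean (fsvc_sketch_predict (model, X) == y));

Whatever rules FSVC actually uses for the prototypes and the spread, the pipeline has this general shape: map the data onto a small prototype-based RBF feature matrix and solve one linear system, which is what replaces the iterative quadratic programming of standard SVM training.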
Keywords
Classification,large-scale datasets,support vector machine,closed-form training,model selection