A fast algorithm for training support vector regression via smoothed primal function minimization

Abstract

The support vector regression (SVR) model is usually fitted by solving a quadratic programming problem, which is computationally expensive. To improve the computational efficiency, we propose to directly minimize the objective function in its primal form. However, the loss function used by SVR is not differentiable, which prevents well-developed gradient-based optimization methods from being applied. We therefore introduce a smooth function to approximate the original loss function in the primal form of SVR, which transforms the original quadratic program into an unconstrained convex minimization problem. We discuss the properties of the proposed smoothed objective function and prove that the solution of the smoothly approximated model converges to the original SVR solution. A conjugate gradient algorithm is designed to minimize the proposed smoothed objective function in a sequential minimization manner. Extensive experiments on real-world datasets show that, compared to the quadratic programming based SVR, the proposed approach achieves similar prediction accuracy with significantly improved computational efficiency: it is hundreds of times faster for the linear SVR model and several times faster for the nonlinear SVR model.
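
To make the idea concrete, the sketch below illustrates the general approach described in the abstract: replace the non-differentiable epsilon-insensitive loss with a smooth surrogate and minimize the primal objective with a conjugate gradient method. This is not the authors' code; the particular surrogate (a smoothed plus function) and the SciPy-based CG solver are illustrative assumptions, and the paper's own smoothing and sequential minimization scheme may differ.

```python
import numpy as np
from scipy.optimize import minimize

def smooth_plus(r, beta=10.0):
    """Smooth approximation of max(r, 0); the approximation tightens as beta grows."""
    # p(r, beta) = r + (1/beta) * log(1 + exp(-beta * r)), written stably with logaddexp
    return r + np.logaddexp(0.0, -beta * r) / beta

def smoothed_eps_loss(residual, eps=0.1, beta=10.0):
    """Smooth surrogate for the epsilon-insensitive loss max(|r| - eps, 0)."""
    return smooth_plus(residual - eps, beta) + smooth_plus(-residual - eps, beta)

def primal_objective(theta, X, y, C=1.0, eps=0.1, beta=10.0):
    """Smoothed primal SVR objective: 0.5 * ||w||^2 + C * sum of smoothed losses."""
    w, b = theta[:-1], theta[-1]
    residual = y - X @ w - b
    return 0.5 * w @ w + C * np.sum(smoothed_eps_loss(residual, eps, beta))

# Hypothetical usage on synthetic data; replace with a real regression dataset.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
y = X @ rng.standard_normal(5) + 0.05 * rng.standard_normal(200)
theta0 = np.zeros(X.shape[1] + 1)
result = minimize(primal_objective, theta0, args=(X, y), method="CG")
w_hat, b_hat = result.x[:-1], result.x[-1]
```

Because the smoothed objective is differentiable and convex, any gradient-based unconstrained solver applies; a nonlinear conjugate gradient method, as in the paper, avoids storing or factorizing the Hessian.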

Department(s)

Mathematics

Document Type

Article

DOI

https://doi.org/10.1007/s13042-013-0200-6

Publication Date

2015

Journal Title

International Journal of Machine Learning and Cybernetics
