# QP and SVM

Let's consider the following very simple version of the SVM problem. Suppose we have observed $y_i \in \{-1,1\}$, $X_i \in \mathbb{R}^{m}$ for $n$ (perfectly linearly separable) training cases. We let $y$ denote the $n \times 1$ vector of training labels and $X$ the $n \times m$ matrix of predictor variables. Our task is to find the hyperplane in $\mathbb{R}^m$ which "best separates" our two classes of labels $-1$ and $1$. Visually:

```r
library("e1071")
library("rgl")

# Label setosa as +1 and the other two species as -1
train <- iris
train$y <- ifelse(train[, 5] == "setosa", 1, -1)

# Fit a linear SVM on three of the four features
sv <- svm(y ~ Petal.Length + Petal.Width + Sepal.Length, data = train,
          kernel = "linear", scale = FALSE, type = "C-classification")

# Recover the normal vector W of the separating hyperplane from the
# support vectors and their coefficients
W <- rowSums(sapply(1:length(sv$coefs), function(i) sv$coefs[i] * sv$SV[i, ]))

# Plot the points and the fitted separating plane
plot3d(train$Petal.Length, train$Petal.Width, train$Sepal.Length,
       col = ifelse(train$y == -1, "red", "blue"), size = 2, type = "s", alpha = .6)
rgl.planes(a = W[1], b = W[2], c = W[3], d = -sv$rho, color = "gray", alpha = .4)
```

The problem of finding this optimal hyperplane is a quadratic program (the primal SVM problem). For computational reasons, however, it is much easier to work with a related quadratic program, the SVM dual problem. With a general kernel function, this quadratic program is: $$ \begin{aligned} \underset{\alpha \in \mathbb{R}^n}{\text{Maximize}}: \qquad & L\left( \alpha \right) = \sum_{i=1}^n \alpha_{i} - \frac{1}{2} \sum_{i=1}^n \sum_{j=1}^n \alpha_i \left( y_i k\left(\mathbf{X}_i, \mathbf{X}_j \right) y_j \right) \alpha_j \\ \text{Subject to:} \qquad & 0 \leq \alpha_i \leq C. \end{aligned}$$ In the simple case of a linear kernel, $k(\mathbf{x}, \mathbf{y}) = \mathbf{x} \cdot \mathbf{y}$, and so we can rewrite $L$ in matrix notation as \[ L(\alpha) = \mathbf{1} \cdot \alpha - \frac{1}{2} \alpha^T Q^T Q \alpha\] with \[ Q = \left[ \begin{array}{ccc} y_1 (\mathbf{X}^T)_1 \; | & \ldots & | \; y_n (\mathbf{X}^T)_n \end{array}\right] \] where $(\mathbf{X}^T)_i$ denotes the $i$-th column of $\mathbf{X}^T$, so that $Q$ is the $m \times n$ matrix whose $i$-th column is $y_i \mathbf{X}_i$.
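As a quick sanity check on this matrix form, $Q$ and $L$ are easy to assemble in base R. The data below is a small synthetic example chosen only for illustration, and the last line checks the identity $(Q^TQ)_{ij} = y_i y_j \, \mathbf{X}_i \cdot \mathbf{X}_j$:

```r
# Build Q (columns y_i * X_i) and the linear-kernel dual objective L.
set.seed(1)
n <- 6; m <- 2
X <- matrix(rnorm(n * m), nrow = n)   # n x m matrix of predictors
y <- rep(c(1, -1), each = n / 2)      # labels in {-1, 1}

Q <- t(X * y)                         # m x n; X * y scales row i of X by y_i

# L(alpha) = 1.alpha - (1/2) alpha' Q'Q alpha
L <- function(alpha) sum(alpha) - 0.5 * drop(crossprod(alpha, crossprod(Q) %*% alpha))

# The (i, j) entry of Q'Q is y_i y_j (X_i . X_j), matching the kernel form
max(abs(crossprod(Q) - outer(y, y) * tcrossprod(X)))  # numerically zero
```

Here `crossprod(Q)` is $Q^TQ$, and the final line confirms it agrees entrywise with the kernel matrix $y_i k(\mathbf{X}_i, \mathbf{X}_j) y_j$.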

It is important to note that the matrix $A = Q^TQ$ is symmetric positive semi-definite, since $A^T = A$ and for any $x$ in $\mathbb{R}^n$: \[x^TAx = x^TQ^T Q x = (Qx)^T (Qx) = \|Qx\|_2^2 \geq 0.\] However, in the context of SVM the matrix $A$ will usually not be positive definite. Since scaling columns by $y_i = \pm 1$ does not change the column space, the rank of $Q$ equals the rank of $X^T$, which is at most $m$ (the number of features) and typically much less than the number of observations $n$. By the Rank-Nullity theorem, the nullspace of $Q$ therefore has dimension at least $n - m$, which we expect to be quite large. So quite a few non-zero $x$ map to $0$ under $Q$, and for these $x$ the inequality above is an equality, so it cannot be strengthened to a strict inequality.
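This rank deficiency is easy to observe numerically. Here is a base-R sketch with arbitrary illustrative dimensions $n = 50$, $m = 3$:

```r
# With n >> m, A = Q'Q is singular: rank(A) <= m even though A is n x n.
set.seed(1)
n <- 50; m <- 3
X <- matrix(rnorm(n * m), nrow = n)
y <- rep(c(1, -1), length.out = n)

Q <- t(X * y)                 # m x n, so rank(Q) <= m
A <- crossprod(Q)             # n x n, symmetric positive semi-definite

qr(A)$rank                                                   # m = 3, far below n = 50
min(eigen(A, symmetric = TRUE, only.values = TRUE)$values)   # zero up to rounding
```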

The fact that the matrix $A$ is only positive semi-definite and not positive definite is significant because many quadratic programming algorithms are specialized to solve only positive definite problems. For example, R's quadprog handles only positive definite problems, whereas solvers like kernlab's ipop method can handle semi-definite problems. For the SVM dual in particular, many highly optimized algorithms exist (for example, the algorithms implemented in libsvm and liblinear). In the following gist, we solve a separable SVM problem for Fisher's iris data in three ways:

- Using the e1071 wrapper around libsvm.
- Using the semi-definite ipop solver from kernlab. Note that this general interior point solver is implemented in R and it can be quite slow when applied to larger scale problems.
- Using quadprog's positive definite solver with a slight perturbation to the SVM data so that the system matrix becomes positive definite. Quadprog is a wrapper around a Fortran implementation of the dual active-set method of Goldfarb and Idnani.
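The perturbation trick in the last bullet can be sketched as follows: add a small ridge $\epsilon I$ to $A = Q^TQ$ so that solve.QP's positive definiteness requirement is met. This is a minimal sketch on synthetic separable data rather than the iris gist itself; `eps` and `C` are arbitrary illustrative choices, and since the dual here has no equality constraint, it corresponds to a hyperplane through the origin.

```r
library(quadprog)

# Synthetic data, separable by the hyperplane x1 = 0 with margin >= 1
set.seed(1)
n <- 20; m <- 2
X <- matrix(rnorm(n * m), nrow = n)
y <- rep(c(-1, 1), length.out = n)
X[, 1] <- y * (abs(X[, 1]) + 1)

Q   <- t(X * y)                      # m x n matrix with columns y_i * X_i
A   <- crossprod(Q)                  # positive SEMI-definite system matrix
eps <- 1e-6                          # small ridge to force positive definiteness
C   <- 10

# solve.QP minimizes (1/2) a' Dmat a - dvec'a subject to t(Amat) %*% a >= bvec,
# so maximizing L(alpha) uses Dmat = A + eps*I and dvec = 1.
sol <- solve.QP(Dmat = A + eps * diag(n),
                dvec = rep(1, n),
                Amat = cbind(diag(n), -diag(n)),  # alpha_i >= 0 and -alpha_i >= -C
                bvec = c(rep(0, n), rep(-C, n)))

alpha <- sol$solution
w     <- drop(Q %*% alpha)           # normal vector: w = sum_i alpha_i y_i X_i
```

The support vectors are the points with $\alpha_i > 0$, and on this separable problem the recovered $w$ classifies every training point correctly via $\operatorname{sign}(\mathbf{X}_i \cdot w)$.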

We see that all three methods generate quite similar solutions for this highly stable and quite simple problem.

As another example of the SVM technique, here is a minimal example that uses SVM to classify topics in the Reuters21578 corpus.

# The Circus Tent Revisited

In a previous post, I explained how the solution of a quadratic program can model the shape of a circus tent. The system matrix in the circus tent problem is symmetric positive definite and therefore an ideal candidate for the quadprog solver. In the gist below, we build a series of increasingly larger circus tent problems and use them to profile the solver times of ipop and quadprog.

From this simple experiment we can make the following observations:

- For this symmetric positive definite problem, quadprog's solver is significantly faster than ipop. This is not surprising given that quadprog is calling compiled Fortran routines whereas ipop is an R implementation.
- The running time of both solvers is superlinear in the problem size; roughly, both appear to be nearly cubic.
- Even though both the system and constraint matrices are sparse, neither solver is able to take advantage of sparse matrix representations. In a pinch, it's worth noting that a little memory can be saved by using quadprog's solve.QP.compact, which uses a more compact representation of the constraint matrix $\mathrm{Amat}$.
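To illustrate the compact representation mentioned in the last bullet, here is a minimal example on a toy problem (not the circus tent itself): minimize $\frac{1}{2}\|x\|^2 - d \cdot x$ subject to $x \geq 0$, whose closed-form solution is $\max(d, 0)$ componentwise, which makes the result easy to check.

```r
library(quadprog)

D <- diag(3)
d <- c(1, -1, 2)

# Dense form: Amat = diag(3) encodes the constraints x_i >= 0
dense <- solve.QP(D, d, diag(3), rep(0, 3))$solution

# Compact form: each constraint column of A has a single nonzero entry.
# Amat holds the nonzero values; Aind holds, per column, the count of
# nonzeros followed by their row indices.
Amat <- matrix(1, nrow = 1, ncol = 3)
Aind <- rbind(rep(1L, 3), 1:3)
compact <- solve.QP.compact(D, d, Amat, Aind, rep(0, 3))$solution

all.equal(dense, compact)    # both equal pmax(d, 0) = c(1, 0, 2)
```

For a diagonal constraint matrix like this one, the compact form stores $2 \times n$ numbers for the indices plus $n$ values instead of the full $n \times n$ Amat.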

# Comments

Jelmer Ypma, who wrote and maintains the nloptr package interfacing with the NLOPT libraries, also wrote a package called ipoptr which interfaces with an installation of ipopt written in C++; this should make its speed much closer to that of quadprog. It isn't simple to install, though (I've never done it). See http://www.ucl.ac.uk/~uctpjyy/ipoptr.html.

Thank you very much for this reference; I was not aware of the ipoptr project. After some fiddling around, I was able to build ipoptr. I will post an update with details on building the package if I can get it working on my example problem. Unfortunately, the licensing for the ipopt backend is very complex and restrictive, so it may not be an option that works for the whole R community.

"programming" is misspelled in the title of your blog, and is misspelled in a different way at R-bloggers: http://www.r-bloggers.com/more-on-quadratic-progamming-in-r/.

Thanks! Fixed that.
