
A support vector machine works like a master architect: the whole process embodies the philosophy that "simplicity is beauty," defining the global decision boundary with only a handful of key samples (the support vectors).
```java
public class LinearSVM {
    private double[] weights;
    private double bias;
    private final double learningRate;
    private final int epochs;
    // Regularization strength (the original update effectively hard-coded lambda = 1)
    private static final double LAMBDA = 0.01;

    public LinearSVM(double learningRate, int epochs) {
        this.learningRate = learningRate;
        this.epochs = epochs;
    }

    // Train with hinge loss + stochastic gradient descent
    public void train(double[][] features, int[] labels) {
        int n = features[0].length;
        weights = new double[n];
        bias = 0;
        for (int epoch = 0; epoch < epochs; epoch++) {
            for (int i = 0; i < features.length; i++) {
                double prediction = predict(features[i]);
                int label = labels[i];
                if (label * prediction < 1) {
                    // Margin violated: step toward the sample, with L2 shrinkage
                    for (int j = 0; j < n; j++) {
                        weights[j] += learningRate * (label * features[i][j] - 2 * LAMBDA * weights[j]);
                    }
                    bias += learningRate * label;
                } else {
                    // Correctly classified outside the margin: apply regularization only
                    for (int j = 0; j < n; j++) {
                        weights[j] -= learningRate * 2 * LAMBDA * weights[j];
                    }
                }
            }
        }
    }

    private double predict(double[] x) {
        double result = 0;
        for (int i = 0; i < x.length; i++) {
            result += weights[i] * x[i];
        }
        return result + bias;
    }

    public int classify(double[] x) {
        return predict(x) > 0 ? 1 : -1;
    }
}
```

| Metric | Value | Notes |
|---|---|---|
| Training time complexity | O(n³) | Depends on the optimization algorithm |
| Prediction time complexity | O(d) | d is the feature dimension |
| Space complexity | O(n×d) | Stores the support vectors and the weight vector |
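The hinge loss driving the update rule in `train` above can be computed per sample as max(0, 1 − y·f(x)). A minimal sketch (the `HingeLossDemo` class name is my own, for illustration only):

```java
// Per-sample hinge loss: max(0, 1 - y * f(x)), where f(x) is the raw score
public class HingeLossDemo {
    static double hinge(double prediction, int label) {
        return Math.max(0, 1 - label * prediction);
    }

    public static void main(String[] args) {
        System.out.println(hinge(2.5, 1));  // correct and outside the margin: 0.0
        System.out.println(hinge(0.3, 1));  // correct but inside the margin: ~0.7
        System.out.println(hinge(-1.0, 1)); // misclassified: 2.0
    }
}
```

Samples with zero loss contribute no gradient, which is why only margin violations trigger the directional weight update in `train`.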
Key properties:
Industrial use cases:
Beginner exercise:
```java
// Usage example
double[][] X = {{1, 2}, {3, 4}, {5, 6}};
int[] y = {-1, 1, 1};
LinearSVM svm = new LinearSVM(0.01, 1000);
svm.train(X, y);
System.out.println(svm.classify(new double[]{4, 5})); // prints 1
```

Advanced:
```java
// Kernel extension interface
interface Kernel {
    double compute(double[] x1, double[] x2);
}

class RBFKernel implements Kernel {
    private final double gamma;

    public RBFKernel(double gamma) {
        this.gamma = gamma;
    }

    // K(x1, x2) = exp(-gamma * ||x1 - x2||^2)
    @Override
    public double compute(double[] x1, double[] x2) {
        double sum = 0;
        for (int i = 0; i < x1.length; i++) {
            sum += Math.pow(x1[i] - x2[i], 2);
        }
        return Math.exp(-gamma * sum);
    }
}
```
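The `Kernel` interface only defines the similarity computation; to make predictions with it, the decision function must be evaluated in the dual form, f(x) = Σᵢ αᵢ·yᵢ·K(xᵢ, x) + b, summing over the support vectors. A minimal sketch of that predictor, assuming the dual coefficients have already been produced by some solver (e.g. SMO, not shown here); `KernelSVM` and its fields are illustrative names, not part of the original code:

```java
// Kernel interface repeated so the sketch is self-contained
interface Kernel {
    double compute(double[] x1, double[] x2);
}

// Dual-form predictor: f(x) = sum_i alpha_i * y_i * K(x_i, x) + b.
// The training that produces the alphas (e.g. SMO) is deliberately omitted.
class KernelSVM {
    private final Kernel kernel;
    private final double[] alphas;          // dual coefficients, assumed precomputed
    private final double[][] supportVectors;
    private final int[] supportLabels;      // +1 / -1
    private final double bias;

    KernelSVM(Kernel kernel, double[] alphas, double[][] supportVectors,
              int[] supportLabels, double bias) {
        this.kernel = kernel;
        this.alphas = alphas;
        this.supportVectors = supportVectors;
        this.supportLabels = supportLabels;
        this.bias = bias;
    }

    public int classify(double[] x) {
        double sum = bias;
        for (int i = 0; i < supportVectors.length; i++) {
            sum += alphas[i] * supportLabels[i] * kernel.compute(supportVectors[i], x);
        }
        return sum > 0 ? 1 : -1;
    }
}
```

Note that prediction now costs O(s·d) for s support vectors rather than the linear model's O(d), which is the price paid for nonlinear decision boundaries.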
Support vector machines teach us this: when you can use an SVM to classify user profiles for hundreds of millions of users in a recommendation system, you have truly mastered the essence of kernel methods. That takes not only algorithmic understanding but also the practical wisdom to engineer it into production. Remember: the mathematical beauty of the SVM lies in turning a complex pattern-recognition problem into an elegant convex optimization problem, a model example of mathematics guiding engineering practice.