# From Theory to Code: Reproducing the Classic Algorithms of Li Hang's *Statistical Learning Methods* (2nd ed.) Step by Step (with Exercise Walkthroughs)
Statistical learning is a major branch of machine learning with a rigorous and deep theoretical foundation. Li Hang's *Statistical Learning Methods* (2nd edition) is the classic textbook in this field, yet many readers struggle to bridge the gap between theoretical understanding and working code. This article walks through Python implementations of the book's core algorithms from an engineer's perspective and, together with the end-of-chapter exercises, builds a complete theory–code–verification loop.

## 1. Environment Setup and Basic Toolchain

Before reproducing any algorithms, we need a Python environment suited to statistical learning experiments. Anaconda is recommended for creating an isolated environment:

```bash
conda create -n stats_learn python=3.8
conda activate stats_learn
```

Basic packages to install:

- **NumPy**: core library for matrix operations
- **SciPy**: scientific computing and optimization routines
- **Matplotlib**: visualization and presentation of results
- **Jupyter Lab**: interactive experimentation environment

Tip: pin library versions to avoid compatibility problems; `pip freeze > requirements.txt` saves the environment configuration. Specific algorithm implementations will need a few extra libraries later on.

```python
# Check versions inside Jupyter
import numpy as np
print(f"NumPy version: {np.__version__}")
```

## 2. An Engineering Implementation of the Perceptron

The perceptron, the simplest linear classification model, is the best starting point for understanding statistical learning. Chapter 2 of the book describes the primal form of the algorithm; here we turn that description into runnable code.

### 2.1 Primal Form

```python
import numpy as np

class Perceptron:
    def __init__(self, eta=1.0, max_iter=1000):
        self.eta = eta            # learning rate
        self.max_iter = max_iter  # maximum number of epochs

    def fit(self, X, y):
        """Train the perceptron model."""
        n_samples, n_features = X.shape
        self.w = np.zeros(n_features)  # initialize weights
        self.b = 0.0                   # initialize bias
        for _ in range(self.max_iter):
            errors = 0
            for xi, yi in zip(X, y):
                update = self.eta * (yi - self.predict(xi))
                self.w += update * xi
                self.b += update
                errors += int(update != 0.0)
            if errors == 0:
                break
        return self

    def predict(self, x):
        """Predict the class label of a sample."""
        return np.where(np.dot(x, self.w) + self.b >= 0, 1, -1)
```

### 2.2 Dual Form

The dual form of the perceptron is better suited to high-dimensional feature spaces; its core is the Gram matrix computation:

```python
def fit_dual(self, X, y):
    """Dual-form perceptron."""
    n_samples = X.shape[0]
    self.alpha = np.zeros(n_samples)
    self.b = 0.0
    # Precompute the Gram matrix
    Gram = np.dot(X, X.T)
    for _ in range(self.max_iter):
        errors = 0
        for i in range(n_samples):
            # Misclassification condition in the dual form
            if y[i] * (np.sum(self.alpha * y * Gram[i]) + self.b) <= 0:
                self.alpha[i] += self.eta
                self.b += self.eta * y[i]
                errors += 1
        if errors == 0:
            break
    return self
```

Note: in the dual form the final weight vector can be written as w = Σ α_i y_i x_i, a representation that becomes especially important for support vector machines.

## 3. A Complete Implementation Path for Support Vector Machines

The support vector machine (SVM) is one of the most representative algorithms in statistical learning. We start from the linearly separable case and extend step by step to kernel methods.

### 3.1 Sequential Minimal Optimization (SMO) for the Linear SVM

```python
class LinearSVM:
    def __init__(self, C=1.0, tol=0.01, max_iter=1000):
        self.C = C                # penalty parameter
        self.tol = tol            # tolerance
        self.max_iter = max_iter

    def _compute_L_H(self, C, alpha_i, alpha_j, y_i, y_j):
        """Clipping bounds for alpha_j."""
        if y_i != y_j:
            L = max(0, alpha_j - alpha_i)
            H = min(C, C + alpha_j - alpha_i)
        else:
            L = max(0, alpha_i + alpha_j - C)
            H = min(C, alpha_i + alpha_j)
        return L, H

    def fit(self, X, y):
        n_samples, n_features = X.shape
        self.alpha = np.zeros(n_samples)
        self.b = 0.0
        # Main SMO loop
        for _ in range(self.max_iter):
            alpha_prev = np.copy(self.alpha)
            for j in range(n_samples):
                i = self._select_second_alpha(j, n_samples)
                # Compute prediction errors
                E_i = self._decision_function(X[i]) - y[i]
                E_j = self._decision_function(X[j]) - y[j]
                # Compute the clipping bounds
                L, H = self._compute_L_H(self.C, self.alpha[i],
                                         self.alpha[j], y[i], y[j])
                if L == H:
                    continue
                # Compute eta (second derivative along the constraint)
                eta = 2 * X[i].dot(X[j]) - X[i].dot(X[i]) - X[j].dot(X[j])
                if eta >= 0:
                    continue
                # Update alpha_j and clip it into [L, H]
                self.alpha[j] -= y[j] * (E_i - E_j) / eta
                self.alpha[j] = np.clip(self.alpha[j], L, H)
                # Skip negligible updates
                if abs(self.alpha[j] - alpha_prev[j]) < self.tol:
                    continue
                # Update alpha_i
                self.alpha[i] += y[i] * y[j] * (alpha_prev[j] - self.alpha[j])
                # Update b
                b1 = self.b - E_i \
                     - y[i] * (self.alpha[i] - alpha_prev[i]) * X[i].dot(X[i]) \
                     - y[j] * (self.alpha[j] - alpha_prev[j]) * X[i].dot(X[j])
                b2 = self.b - E_j \
                     - y[i] * (self.alpha[i] - alpha_prev[i]) * X[i].dot(X[j]) \
                     - y[j] * (self.alpha[j] - alpha_prev[j]) * X[j].dot(X[j])
                if 0 < self.alpha[i] < self.C:
                    self.b = b1
                elif 0 < self.alpha[j] < self.C:
                    self.b = b2
                else:
                    self.b = (b1 + b2) / 2
            # Check convergence
            diff = np.linalg.norm(self.alpha - alpha_prev)
            if diff < self.tol:
                break
        # Recover the final weight vector: w = sum_i alpha_i * y_i * x_i
        self.w = np.zeros(n_features)
        for i in range(n_samples):
            self.w += self.alpha[i] * y[i] * X[i]
        return self
```

Note that `_select_second_alpha` (the heuristic for picking the second variable) and `_decision_function` are referenced but not shown here; both must be supplied before the class will run.

### 3.2 Kernel Function Extensions

With the kernel trick, the SVM can handle problems that are not linearly separable. Common kernel functions:

```python
def linear_kernel(x1, x2):
    return np.dot(x1, x2)

def polynomial_kernel(x1, x2, p=3):
    return (1 + np.dot(x1, x2)) ** p

def rbf_kernel(x1, x2, gamma=0.1):
    return np.exp(-gamma * np.linalg.norm(x1 - x2) ** 2)
```

Add kernel-matrix computation to the SVM class:

```python
def _compute_kernel_matrix(self, X):
    n_samples = X.shape[0]
    K = np.zeros((n_samples, n_samples))
    for i in range(n_samples):
        for j in range(n_samples):
            K[i, j] = self.kernel(X[i], X[j])
    return K
```
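To sanity-check the kernels above, a short standalone script can build a Gram matrix from the RBF kernel and inspect its basic properties. The three toy points are made up for illustration, and `kernel_matrix` is a free-function variant of `_compute_kernel_matrix` so the snippet runs without the SVM class:

```python
import numpy as np

def rbf_kernel(x1, x2, gamma=0.1):
    # Gaussian (RBF) kernel, as defined in the section above
    return np.exp(-gamma * np.linalg.norm(x1 - x2) ** 2)

def kernel_matrix(X, kernel):
    """Build the Gram matrix of a dataset under a given kernel."""
    n = X.shape[0]
    K = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            K[i, j] = kernel(X[i], X[j])
    return K

# Three toy points (illustrative only)
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
K = kernel_matrix(X, rbf_kernel)
print(np.round(K, 4))
```

The diagonal is exactly 1 (each point is at distance 0 from itself) and the matrix is symmetric, both of which any valid kernel matrix must satisfy.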
## 4. An Efficient Implementation of the EM Algorithm

The EM algorithm is a powerful tool for problems with latent variables; we use the Gaussian mixture model (GMM) as the running example.

### 4.1 E-Step: Posterior Probabilities

```python
def _e_step(self, X):
    """Compute the posterior probability of each Gaussian component for each sample."""
    n_samples = X.shape[0]
    self.responsibilities = np.zeros((n_samples, self.n_components))
    for k in range(self.n_components):
        self.responsibilities[:, k] = self.weights[k] * \
            self._multivariate_normal(X, self.means[k], self.covariances[k])
    # Normalize over components
    self.responsibilities /= np.sum(self.responsibilities, axis=1, keepdims=True)
    return self.responsibilities
```

### 4.2 M-Step: Parameter Updates

```python
def _m_step(self, X):
    """Update the model parameters."""
    n_samples = X.shape[0]
    # Update mixture weights
    self.weights = np.sum(self.responsibilities, axis=0) / n_samples
    # Update means
    self.means = np.dot(self.responsibilities.T, X) / \
        np.sum(self.responsibilities, axis=0, keepdims=True).T
    # Update covariances
    for k in range(self.n_components):
        diff = X - self.means[k]
        self.covariances[k] = np.dot(self.responsibilities[:, k] * diff.T, diff) / \
            np.sum(self.responsibilities[:, k])
    return self
```

### 4.3 The Full EM Loop

```python
def fit(self, X, max_iter=100, tol=1e-4):
    """Main EM loop."""
    self._initialize_parameters(X)
    log_likelihood = []
    for i in range(max_iter):
        self._e_step(X)  # E-step
        self._m_step(X)  # M-step
        # Track the log-likelihood
        current_log_likelihood = self._compute_log_likelihood(X)
        log_likelihood.append(current_log_likelihood)
        # Check convergence
        if i > 0 and abs(current_log_likelihood - log_likelihood[-2]) < tol:
            break
    return self, log_likelihood
```
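The E-step/M-step alternation above can be exercised end-to-end on a tiny one-dimensional mixture. This is a minimal sketch rather than the full GMM class: the synthetic data, the initial guesses, and the use of `scipy.stats.norm` for the component densities are all assumptions made for the demo:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
# Synthetic 1-D data: two Gaussians at -2 (sd 0.5) and 3 (sd 1.0)
X = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(3.0, 1.0, 300)])

# Deliberately rough initial guesses
weights = np.array([0.5, 0.5])
means = np.array([-1.0, 1.0])
stds = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: responsibilities, shape (n_samples, 2)
    dens = np.stack([w * norm.pdf(X, m, s)
                     for w, m, s in zip(weights, means, stds)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for weights, means, std devs
    Nk = resp.sum(axis=0)
    weights = Nk / len(X)
    means = (resp * X[:, None]).sum(axis=0) / Nk
    stds = np.sqrt((resp * (X[:, None] - means) ** 2).sum(axis=0) / Nk)

print(np.round(means, 2), np.round(weights, 2))
```

With well-separated components, the estimated means should land close to the true values -2 and 3, and the weights close to the true proportions 0.4 and 0.6.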
## 5. Implementation Tricks for Conditional Random Fields

The conditional random field (CRF) is a powerful model for sequence labeling tasks; here we implement its key computational steps.

### 5.1 Feature Function Design

```python
def word2features(sent, i):
    """Extract features for the word at position i."""
    word = sent[i][0]
    features = {
        'bias': 1.0,
        'word.lower()': word.lower(),
        'word[-3:]': word[-3:],
        'word.isupper()': word.isupper(),
        'word.istitle()': word.istitle(),
        'word.isdigit()': word.isdigit(),
    }
    if i > 0:
        prev_word = sent[i-1][0]
        features.update({
            'prev_word.lower()': prev_word.lower(),
            'prev_word.istitle()': prev_word.istitle(),
        })
    else:
        features['BOS'] = True  # beginning of sentence
    if i < len(sent) - 1:
        next_word = sent[i+1][0]
        features.update({
            'next_word.lower()': next_word.lower(),
            'next_word.istitle()': next_word.istitle(),
        })
    else:
        features['EOS'] = True  # end of sentence
    return features
```

### 5.2 Forward–Backward Algorithm

```python
from scipy.special import logsumexp  # stable log-space summation

def _forward_algorithm(self, features):
    """Compute forward probabilities in log space."""
    alpha = np.zeros((len(features), self.n_tags))
    alpha[0] = self.start_prob + self._compute_state_features(features[0])
    for t in range(1, len(features)):
        alpha[t] = logsumexp(alpha[t-1] + self.trans_prob.T, axis=1) + \
            self._compute_state_features(features[t])
    return alpha

def _backward_algorithm(self, features):
    """Compute backward probabilities in log space."""
    beta = np.zeros((len(features), self.n_tags))
    beta[-1] = 0.0  # log(1)
    for t in range(len(features) - 2, -1, -1):
        beta[t] = logsumexp(
            self.trans_prob + self._compute_state_features(features[t+1]) + beta[t+1],
            axis=1
        )
    return beta
```

### 5.3 Parameter Estimation

```python
def _compute_gradient(self, X, y):
    """Compute the gradient as empirical minus expected feature counts."""
    empirical_counts = np.zeros_like(self.trans_prob)
    expected_counts = np.zeros_like(self.trans_prob)
    for features, tags in zip(X, y):
        # Forward-backward pass
        alpha = self._forward_algorithm(features)
        beta = self._backward_algorithm(features)
        log_Z = logsumexp(alpha[-1])
        # Empirical feature expectations from the labeled data
        for t, (feat, tag) in enumerate(zip(features, tags)):
            empirical_counts[tag] += self._compute_state_features(feat)
            if t > 0:
                empirical_counts[tags[t-1], tag] += 1
        # Model feature expectations under the current parameters
        for t in range(len(features)):
            state_features = self._compute_state_features(features[t])
            expected_counts += np.exp(alpha[t] + beta[t] - log_Z) * state_features
            if t > 0:
                trans_matrix = alpha[t-1][:, None] + self.trans_prob + \
                    state_features + beta[t]
                expected_counts += np.exp(trans_matrix - log_Z)
    return empirical_counts - expected_counts
```
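A useful consistency check on the forward–backward recursions is that both passes must yield the same normalizer log Z. The sketch below assumes arbitrary random transition and state scores (three tags, four positions, uniform start scores of 0) rather than a trained CRF:

```python
import numpy as np
from scipy.special import logsumexp

# Toy linear-chain scores (arbitrary, for illustration)
rng = np.random.default_rng(42)
n_tags, T = 3, 4
trans = rng.normal(size=(n_tags, n_tags))  # trans[i, j]: score of tag i -> tag j
emit = rng.normal(size=(T, n_tags))        # per-position state feature scores

# Forward pass in log space
alpha = np.zeros((T, n_tags))
alpha[0] = emit[0]
for t in range(1, T):
    # alpha[t, j] = logsumexp_i(alpha[t-1, i] + trans[i, j]) + emit[t, j]
    alpha[t] = logsumexp(alpha[t-1][:, None] + trans, axis=0) + emit[t]

# Backward pass in log space
beta = np.zeros((T, n_tags))
for t in range(T - 2, -1, -1):
    # beta[t, i] = logsumexp_j(trans[i, j] + emit[t+1, j] + beta[t+1, j])
    beta[t] = logsumexp(trans + emit[t+1] + beta[t+1], axis=1)

# Both directions must agree on the partition function
log_Z_fwd = logsumexp(alpha[-1])
log_Z_bwd = logsumexp(emit[0] + beta[0])
print(round(float(log_Z_fwd), 6), round(float(log_Z_bwd), 6))
```

If the two numbers differ, the recursion indices or the `logsumexp` axis are wrong, which is by far the most common bug in hand-rolled forward–backward code.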