Actor-Critic Algorithms in Practice: From QAC to A2C, Combining Policy Gradients and Value Estimation Step by Step in PyTorch
In reinforcement learning, Actor-Critic algorithms attract a lot of attention because they combine the strengths of policy-gradient methods with those of value estimation. This article walks you through implementing them in PyTorch from scratch, starting with the basic QAC algorithm and moving on to A2C, while addressing the key issues that come up in real code.

1. Environment Setup and Core Concepts

Before writing any code, we need to be clear about the core components. The Actor produces the action policy; the Critic evaluates the value of that policy. This division of labor lets the algorithm keep the flexibility of policy-gradient methods while drawing on the stability of value estimation.

First, install the required libraries:

```
pip install torch gym numpy matplotlib
```

For environments with discrete action spaces, such as CartPole, we can define the networks as follows:

```python
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np


class Actor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(Actor, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
            nn.Softmax(dim=-1)
        )

    def forward(self, state):
        return self.fc(state)


class Critic(nn.Module):
    def __init__(self, state_dim):
        super(Critic, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1)
        )

    def forward(self, state):
        return self.fc(state)
```

Tip: an initial learning rate of 0.001 is usually a good starting point, but it should be tuned for the specific environment.

2. Implementing QAC

QAC (Q Actor-Critic) is the most basic Actor-Critic algorithm: it uses the action-value function Q directly as the Critic's evaluation signal. The key steps of the training loop are:

- Data collection: interact with the environment using the current policy
- Value estimation: the Critic network evaluates the value of each state-action pair
- Policy update: adjust the Actor's policy according to the Critic's evaluation

```python
def train_qac(env, actor, critic, episodes=1000):
    actor_optim = optim.Adam(actor.parameters(), lr=1e-3)
    critic_optim = optim.Adam(critic.parameters(), lr=1e-3)
    for episode in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Select an action from the current policy
            prob = actor(torch.FloatTensor(state))
            action = torch.multinomial(prob, 1).item()
            # Step the environment
            next_state, reward, done, _ = env.step(action)
            # Critic update
            q_value = critic(torch.FloatTensor(state))
            next_q = critic(torch.FloatTensor(next_state)).detach() if not done else torch.zeros(1)
            target = reward + 0.99 * next_q
            critic_loss = nn.MSELoss()(q_value, target)
            critic_optim.zero_grad()
            critic_loss.backward()
            critic_optim.step()
            # Actor update
            advantage = target - q_value.detach()
            actor_loss = -torch.log(prob[action]) * advantage
            actor_optim.zero_grad()
            actor_loss.backward()
            actor_optim.step()
            state = next_state
```

Common problems and how to debug them:

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Return does not improve | Learning rate too high | Lower the learning rate gradually |
| Training is unstable | Batch size too small | Collect more interaction steps before each update |
| Policy converges prematurely | Not enough exploration | Add an entropy regularization term |

3. A2C: A More Advanced Implementation

A2C (Advantage Actor-Critic) reduces variance by introducing the advantage function, which usually leads to more stable training. Its core formula is A(s, a) = Q(s, a) - V(s).

The improved Critic network now estimates the state value V rather than the action value Q:

```python
class A2CCritic(nn.Module):
    def __init__(self, state_dim):
        super(A2CCritic, self).__init__()
        self.fc = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1)
        )

    def forward(self, state):
        return self.fc(state)
```

In the training loop, the TD error is used as the advantage estimate. The targets are n-step returns bootstrapped from the Critic, i.e. R_t = r_t + γ r_{t+1} + ... + γ^{n-1} r_{t+n-1} + γ^n V(s_{t+n}):

```python
def train_a2c(env, actor, critic, episodes=1000):
    actor_optim = optim.Adam(actor.parameters(), lr=1e-4)
    critic_optim = optim.Adam(critic.parameters(), lr=1e-3)
    for episode in range(episodes):
        state = env.reset()
        done = False
        while not done:
            # Collect a short trajectory
            states, actions, rewards = [], [], []
            for _ in range(5):  # 5-step updates
                prob = actor(torch.FloatTensor(state))
                action = torch.multinomial(prob, 1).item()
                next_state, reward, done, _ = env.step(action)
                states.append(state)
                actions.append(action)
                rewards.append(reward)
                state = next_state
                if done:
                    break
            # Compute returns and advantages
            returns = []
            R = critic(torch.FloatTensor(state)).item() if not done else 0
            for r in reversed(rewards):
                R = r + 0.99 * R
                returns.insert(0, R)
            values = critic(torch.FloatTensor(np.array(states))).squeeze(-1)
            advantages = torch.FloatTensor(returns) - values.detach()
            # Update the Critic
            critic_loss = nn.MSELoss()(values, torch.FloatTensor(returns))
            critic_optim.zero_grad()
            critic_loss.backward()
            critic_optim.step()
            # Update the Actor
            probs = actor(torch.FloatTensor(np.array(states)))
            selected_probs = probs.gather(1, torch.LongTensor(actions).unsqueeze(1)).squeeze(1)
            actor_loss = (-torch.log(selected_probs) * advantages).mean()
            actor_optim.zero_grad()
            actor_loss.backward()
            actor_optim.step()
```

Note: the number of steps in the n-step return should be tuned to the environment; environments with delayed rewards call for a larger step count.
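As a quick sanity check before moving on, a minimal entry point might look like the sketch below, wiring together the Actor, A2CCritic, and train_a2c defined above. It assumes the classic gym API used in this article's snippets, where reset() returns a state array and step() returns a 4-tuple:

```python
# Minimal usage sketch, assuming the classic gym API (reset() -> state,
# step() -> (state, reward, done, info)) used throughout this article.
import gym

if __name__ == "__main__":
    env = gym.make("CartPole-v1")
    state_dim = env.observation_space.shape[0]  # 4 observation dimensions for CartPole
    action_dim = env.action_space.n             # 2 discrete actions
    actor = Actor(state_dim, action_dim)
    critic = A2CCritic(state_dim)
    train_a2c(env, actor, critic, episodes=1000)
```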
4. Handling Continuous Action Spaces

For environments with continuous action spaces, such as Pendulum, the Actor network needs to be modified:

```python
class ContinuousActor(nn.Module):
    def __init__(self, state_dim, action_dim):
        super(ContinuousActor, self).__init__()
        self.fc_mean = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
            nn.Tanh()  # assumes the action range is [-1, 1]
        )
        self.fc_std = nn.Sequential(
            nn.Linear(state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, action_dim),
            nn.Softplus()  # the standard deviation must be positive
        )

    def forward(self, state):
        mean = self.fc_mean(state)
        std = self.fc_std(state)
        return torch.distributions.Normal(mean, std)
```

During training, actions are sampled from this distribution:

```python
def continuous_action_train(env, actor, critic, episodes=1000):
    # ... initialize the optimizers ...
    for episode in range(episodes):
        state = env.reset()
        done = False
        while not done:
            dist = actor(torch.FloatTensor(state))
            action = dist.sample()
            log_prob = dist.log_prob(action).sum()
            next_state, reward, done, _ = env.step(action.numpy())
            # Compute the advantage
            value = critic(torch.FloatTensor(state))
            next_value = critic(torch.FloatTensor(next_state)) if not done else 0
            advantage = reward + 0.99 * next_value - value.detach()
            # Update the Critic ...
            # Update the Actor ...
```

5. Practical Tips and Performance Optimization

In real projects I have found that the following techniques noticeably improve training.

Gradient clipping, to prevent exploding gradients:

```python
torch.nn.utils.clip_grad_norm_(actor.parameters(), 0.5)
torch.nn.utils.clip_grad_norm_(critic.parameters(), 0.5)
```

Entropy regularization, to encourage exploration:

```python
entropy = dist.entropy().mean()
actor_loss = actor_loss - 0.01 * entropy  # the coefficient is tunable
```

Learning-rate decay, to stabilize training in the later stages:

```python
scheduler = optim.lr_scheduler.StepLR(actor_optim, step_size=100, gamma=0.9)
```

Parallel environments, to speed up data collection (a minimal sketch is given at the end of the article):

```python
from multiprocessing import Process, Pipe
```

During debugging, logging the following metrics helps with analysis: the total return per episode, the mean advantage, the policy entropy (which reflects how much the agent is exploring), the value loss, and the gradient magnitude.

In the Pendulum environment, A2C can usually reach a stable return of around -200 after roughly 3000 episodes of training. A common mistake is stopping training too early: even when the return looks stable, the network parameters may still be fine-tuning, and stopping prematurely can hurt the final performance.
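The parallel-environment tip above only hints at Process and Pipe. One possible minimal sketch of multi-process data collection is shown below; env_worker and ParallelEnvs are illustrative names of my own, not part of any library, and the code again assumes the classic gym reset/step API:

```python
# A minimal, illustrative sketch of parallel environments with Process and Pipe.
# env_worker and ParallelEnvs are hypothetical names, not part of any library.
import gym
from multiprocessing import Process, Pipe


def env_worker(conn, env_name):
    """Run one environment in a subprocess and execute commands sent over the pipe."""
    env = gym.make(env_name)
    env.reset()
    while True:
        cmd, data = conn.recv()
        if cmd == "step":
            next_state, reward, done, _ = env.step(data)
            if done:
                next_state = env.reset()  # start a new episode automatically
            conn.send((next_state, reward, done))
        elif cmd == "reset":
            conn.send(env.reset())
        elif cmd == "close":
            env.close()
            conn.close()
            break


class ParallelEnvs:
    """Hold several worker processes and step them together."""

    def __init__(self, env_name, num_envs=4):
        pipes = [Pipe() for _ in range(num_envs)]
        self.parent_conns = [p[0] for p in pipes]
        self.procs = [Process(target=env_worker, args=(p[1], env_name), daemon=True)
                      for p in pipes]
        for proc in self.procs:
            proc.start()

    def reset(self):
        for conn in self.parent_conns:
            conn.send(("reset", None))
        return [conn.recv() for conn in self.parent_conns]

    def step(self, actions):
        # One action per environment; returns a list of (next_state, reward, done).
        for conn, action in zip(self.parent_conns, actions):
            conn.send(("step", action))
        return [conn.recv() for conn in self.parent_conns]

    def close(self):
        for conn in self.parent_conns:
            conn.send(("close", None))
        for proc in self.procs:
            proc.join()


if __name__ == "__main__":
    envs = ParallelEnvs("CartPole-v1", num_envs=4)
    states = envs.reset()
    transitions = envs.step([0, 1, 0, 1])  # one action per environment
    envs.close()
```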