# HRNet Code Walkthrough: From BasicBlock to HighResolutionNet, a Line-by-Line Guide to Multi-Resolution Fusion
In computer vision, HRNet (High-Resolution Network) has attracted wide attention for its distinctive parallel multi-resolution architecture. Unlike conventional networks that downsample serially, HRNet maintains a high-resolution representation throughout the forward pass, processing several resolutions in parallel and exchanging information across them. This design delivers excellent results on pose estimation, semantic segmentation, and related tasks. This article walks through HRNet's PyTorch implementation: we start from the most basic BasicBlock, then dissect the core mechanics of HighResolutionModule, and finally assemble the complete HighResolutionNet.

## 1. Basic building blocks: BasicBlock and Bottleneck

Every good deep network is built by stacking carefully designed basic modules, and HRNet is no exception. Let us first dissect its two fundamental components.

### 1.1 BasicBlock

BasicBlock is HRNet's most basic residual unit, structured like the basic block in ResNet:

```python
class BasicBlock(nn.Module):
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out
```

Key design points:

- Two 3x3 convolutions: preserves the receptive field while keeping the parameter count low.
- Identity mapping: the residual connection mitigates vanishing gradients.
- Downsample adapter: when stride != 1 (or channel counts differ), the `downsample` module adjusts the residual path to matching dimensions.
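To make the first design point concrete, here is a quick back-of-the-envelope sketch (not from the HRNet code): two stacked 3x3 convolutions cover the same 5x5 receptive field as a single 5x5 convolution, but with fewer weights. Bias terms are omitted since BatchNorm follows each convolution.

```python
# Weight count of a k x k convolution mapping c_in -> c_out channels (no bias).
def conv_params(k, c_in, c_out):
    return k * k * c_in * c_out

C = 64
two_3x3 = 2 * conv_params(3, C, C)  # stacked pair, effective 5x5 receptive field
one_5x5 = conv_params(5, C, C)      # single conv, same receptive field
print(two_3x3, one_5x5)  # 73728 102400
```

The stacked pair is roughly 28% cheaper and also inserts an extra non-linearity between the two convolutions.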
### 1.2 Bottleneck: the deeper variant

For deeper configurations, HRNet uses the Bottleneck structure to balance compute against model capacity:

```python
class Bottleneck(nn.Module):
    expansion = 4

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False)
        self.bn1 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.conv2 = nn.Conv2d(planes, planes, kernel_size=3, stride=stride,
                               padding=1, bias=False)
        self.bn2 = BatchNorm2d(planes, momentum=BN_MOMENTUM)
        self.conv3 = nn.Conv2d(planes, planes * self.expansion, kernel_size=1,
                               bias=False)
        self.bn3 = BatchNorm2d(planes * self.expansion, momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        residual = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            residual = self.downsample(x)

        out += residual
        out = self.relu(out)

        return out
```

Core advantages of Bottleneck:

- The 1x1-3x3-1x1 bottleneck reduces dimensionality first and restores it afterwards, cutting compute substantially.
- The expansion factor (expansion = 4) makes the final output channel count four times the intermediate width.
- Gradient propagation stays stable thanks to the residual connection around the three-convolution stack.

Practical advice: in HRNet, Stage 1 typically uses Bottleneck while the later stages use BasicBlock; this mix strikes a good balance between accuracy and efficiency.

## 2. HighResolutionModule: the multi-resolution fusion engine

HRNet's core innovation is its parallel multi-resolution processing architecture, and HighResolutionModule is the component that implements it.

### 2.1 Overall module structure

The module's initializer shows its main parts:

```python
class HighResolutionModule(nn.Module):
    def __init__(self, num_branches, blocks, num_blocks, num_inchannels,
                 num_channels, fuse_method, multi_scale_output=True):
        super(HighResolutionModule, self).__init__()
        self._check_branches(
            num_branches, blocks, num_blocks, num_inchannels, num_channels)

        self.num_inchannels = num_inchannels
        self.fuse_method = fuse_method
        self.num_branches = num_branches
        self.multi_scale_output = multi_scale_output

        self.branches = self._make_branches(
            num_branches, blocks, num_blocks, num_channels)
        self.fuse_layers = self._make_fuse_layers()
        self.relu = nn.ReLU(inplace=True)
```

Main parameters:

- `num_branches`: number of parallel branches
- `blocks`: block type to use (BasicBlock / Bottleneck)
- `num_blocks`: number of blocks in each branch
- `num_inchannels`: input channel count of each branch
- `num_channels`: base channel count of each branch
- `fuse_method`: fusion method (e.g. SUM / AVG / CONCAT)

### 2.2 Building the branches

`_make_branches` creates the parallel processing branches:

```python
def _make_branches(self, num_branches, block, num_blocks, num_channels):
    branches = []
    for i in range(num_branches):
        branches.append(
            self._make_one_branch(i, block, num_blocks, num_channels))
    return nn.ModuleList(branches)
```

Each branch is built independently by `_make_one_branch`:

```python
def _make_one_branch(self, branch_index, block, num_blocks, num_channels,
                     stride=1):
    downsample = None
    if stride != 1 or \
            self.num_inchannels[branch_index] != \
            num_channels[branch_index] * block.expansion:
        downsample = nn.Sequential(
            nn.Conv2d(self.num_inchannels[branch_index],
                      num_channels[branch_index] * block.expansion,
                      kernel_size=1, stride=stride, bias=False),
            BatchNorm2d(num_channels[branch_index] * block.expansion,
                        momentum=BN_MOMENTUM))

    layers = []
    layers.append(block(self.num_inchannels[branch_index],
                        num_channels[branch_index], stride, downsample))
    self.num_inchannels[branch_index] = \
        num_channels[branch_index] * block.expansion
    for i in range(1, num_blocks[branch_index]):
        layers.append(block(self.num_inchannels[branch_index],
                            num_channels[branch_index]))

    return nn.Sequential(*layers)
```

Key points of branch construction:

- Each branch is a stack of residual blocks (BasicBlock / Bottleneck).
- Only the first block may downsample; subsequent blocks keep the resolution fixed.
- Channel counts are adjusted automatically through `block.expansion`.
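The channel bookkeeping above can be traced with a small standalone sketch. The helper below is hypothetical, not part of HRNet; it mirrors the condition `_make_one_branch` checks when deciding whether to build a projection shortcut:

```python
# Does the first block of a branch need a projection (downsample) shortcut?
# Mirrors _make_one_branch: a 1x1-conv projection is built whenever the
# stride changes the resolution or in-channels != planes * block.expansion.
def needs_projection(inchannels, planes, expansion, stride=1):
    return stride != 1 or inchannels != planes * expansion

# Bottleneck branch (expansion=4): 64 channels in, planes=64 widens to 256.
print(needs_projection(64, 64, 4))  # True
# BasicBlock branch (expansion=1): 64 in, planes=64, stride 1 -> identity.
print(needs_projection(64, 64, 1))  # False
```

This is why Stage 1, which uses Bottleneck, always needs a downsample on its first block, while BasicBlock branches with matching channels can use a plain identity shortcut.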
### 2.3 Cross-resolution fusion

`_make_fuse_layers` implements the essence of HRNet: the exchange of information across resolutions.

```python
def _make_fuse_layers(self):
    if self.num_branches == 1:
        return None

    num_branches = self.num_branches
    num_inchannels = self.num_inchannels
    fuse_layers = []
    for i in range(num_branches if self.multi_scale_output else 1):
        fuse_layer = []
        for j in range(num_branches):
            if j > i:  # low resolution -> high resolution
                fuse_layer.append(nn.Sequential(
                    nn.Conv2d(num_inchannels[j], num_inchannels[i],
                              1, 1, 0, bias=False),
                    BatchNorm2d(num_inchannels[i], momentum=BN_MOMENTUM)))
            elif j == i:  # same resolution
                fuse_layer.append(None)
            else:  # high resolution -> low resolution
                conv3x3s = []
                for k in range(i - j):
                    if k == i - j - 1:
                        num_outchannels = num_inchannels[i]
                        conv3x3s.append(nn.Sequential(
                            nn.Conv2d(num_inchannels[j], num_outchannels,
                                      3, 2, 1, bias=False),
                            BatchNorm2d(num_outchannels,
                                        momentum=BN_MOMENTUM)))
                    else:
                        conv3x3s.append(nn.Sequential(
                            nn.Conv2d(num_inchannels[j], num_inchannels[j],
                                      3, 2, 1, bias=False),
                            BatchNorm2d(num_inchannels[j],
                                        momentum=BN_MOMENTUM),
                            nn.ReLU(inplace=True)))
                fuse_layer.append(nn.Sequential(*conv3x3s))
        fuse_layers.append(nn.ModuleList(fuse_layer))

    return nn.ModuleList(fuse_layers)
```

Fusion strategy matrix:

| Source resolution | Target resolution | Conversion | Typical operation |
|---|---|---|---|
| High | Low | Downsampling | 3x3 convolution (stride 2) |
| Low | High | Upsampling | 1x1 convolution + bilinear interpolation |
| Same | Same | Identity | None |

### 2.4 Forward pass

HighResolutionModule's forward pass runs every branch independently and then fuses them:

```python
def forward(self, x):
    if self.num_branches == 1:
        return [self.branches[0](x[0])]

    # each branch processes its input independently
    for i in range(self.num_branches):
        x[i] = self.branches[i](x[i])

    # cross-branch fusion
    x_fuse = []
    for i in range(len(self.fuse_layers)):
        y = x[0] if i == 0 else self.fuse_layers[i][0](x[0])
        for j in range(1, self.num_branches):
            if i == j:
                y = y + x[j]
            elif j > i:  # low -> high
                y = y + F.interpolate(
                    self.fuse_layers[i][j](x[j]),
                    size=[x[i].shape[2], x[i].shape[3]],
                    mode='bilinear', align_corners=True)
            else:  # high -> low
                y = y + self.fuse_layers[i][j](x[j])
        x_fuse.append(self.relu(y))

    return x_fuse
```
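To see at a glance what `_make_fuse_layers` builds, the hypothetical helper below enumerates the conversion chosen for each (target `i`, source `j`) pair in a three-branch module, following the three cases in the code (branch `k` runs at resolution H / 2**k):

```python
# Conversion used to bring source branch j to target branch i.
def fuse_op(i, j):
    if j > i:                  # low -> high: 1x1 conv, upsampled in forward()
        return '1x1 conv + upsample x{}'.format(2 ** (j - i))
    if j == i:                 # same resolution: identity
        return 'identity'
    return '{} stride-2 3x3 conv(s)'.format(i - j)  # high -> low

for i in range(3):
    print(i, [fuse_op(i, j) for j in range(3)])
```

Note the asymmetry: going down `i - j` resolution levels chains that many stride-2 convolutions, while going up is a single 1x1 channel projection followed by one interpolation directly to the target size.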
Characteristics of the data flow:

- Each branch first extracts features independently.
- Information is then exchanged in both directions across resolutions.
- The output is the fused set of multi-scale features.

Debugging tip: in practice you can register forward hooks to capture each branch's intermediate features and visualize how well the fusion works.

## 3. HighResolutionNet: assembling the full network

HighResolutionNet combines multiple HighResolutionModules into a complete end-to-end network.

### 3.1 Network initialization

The initializer shows HRNet's stage-wise design:

```python
class HighResolutionNet(nn.Module):
    def __init__(self, config, **kwargs):
        super(HighResolutionNet, self).__init__()

        # stem: initial downsampling
        self.conv1 = nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1,
                               bias=False)
        self.bn1 = BatchNorm2d(64, momentum=BN_MOMENTUM)
        self.conv2 = nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1,
                               bias=False)
        self.bn2 = BatchNorm2d(64, momentum=BN_MOMENTUM)
        self.relu = nn.ReLU(inplace=True)

        # stage 1
        self.stage1_cfg = config['STAGE1']
        block = blocks_dict[self.stage1_cfg['BLOCK']]
        self.layer1 = self._make_layer(
            block, 64,
            self.stage1_cfg['NUM_CHANNELS'][0],
            self.stage1_cfg['NUM_BLOCKS'][0])
        stage1_out_channel = block.expansion * self.stage1_cfg['NUM_CHANNELS'][0]

        # transition layer and the following stages
        self.stage2_cfg = config['STAGE2']
        self.transition1 = self._make_transition_layer(
            [stage1_out_channel], self.stage2_cfg['NUM_CHANNELS'])
        self.stage2, pre_stage_channels = self._make_stage(
            self.stage2_cfg, self.stage2_cfg['NUM_CHANNELS'])

        # stage3 and stage4 are built the same way
        ...
```

Key points of network construction:

- Branches are added progressively: each stage introduces one new lower-resolution branch.
- Channel rule: each time the resolution halves, the channel count doubles.
- Transition layers connect consecutive stages smoothly.

### 3.2 Transition layers

Transition layers convert features between stages:

```python
def _make_transition_layer(self, num_channels_pre_layer,
                           num_channels_cur_layer):
    num_branches_cur = len(num_channels_cur_layer)
    num_branches_pre = len(num_channels_pre_layer)

    transition_layers = []
    for i in range(num_branches_cur):
        if i < num_branches_pre:
            if num_channels_cur_layer[i] != num_channels_pre_layer[i]:
                transition_layers.append(nn.Sequential(
                    nn.Conv2d(num_channels_pre_layer[i],
                              num_channels_cur_layer[i],
                              3, 1, 1, bias=False),
                    BatchNorm2d(num_channels_cur_layer[i],
                                momentum=BN_MOMENTUM),
                    nn.ReLU(inplace=True)))
            else:
                transition_layers.append(None)
        else:
            conv3x3s = []
            for j in range(i + 1 - num_branches_pre):
                inchannels = num_channels_pre_layer[-1]
                outchannels = num_channels_cur_layer[i] \
                    if j == i - num_branches_pre else inchannels
                conv3x3s.append(nn.Sequential(
                    nn.Conv2d(inchannels, outchannels, 3, 2, 1, bias=False),
                    BatchNorm2d(outchannels, momentum=BN_MOMENTUM),
                    nn.ReLU(inplace=True)))
            transition_layers.append(nn.Sequential(*conv3x3s))

    return nn.ModuleList(transition_layers)
```
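As an illustration, the hypothetical helper below replays the transition logic for the stage 1 to stage 2 hand-off of HRNet-W18, where a single 256-channel branch (64 base channels times Bottleneck's expansion of 4) becomes two branches of 18 and 36 channels:

```python
# Which module _make_transition_layer builds for each target branch.
def transition_plan(pre, cur):
    plan = []
    for i, c in enumerate(cur):
        if i < len(pre):
            if pre[i] != c:
                plan.append('3x3 conv {}->{}'.format(pre[i], c))
            else:
                plan.append('identity (None)')
        else:
            # new branch: downsample from the lowest-resolution features
            plan.append('stride-2 3x3 conv {}->{}'.format(pre[-1], c))
    return plan

print(transition_plan([256], [18, 36]))
# ['3x3 conv 256->18', 'stride-2 3x3 conv 256->36']
```

The existing branch only needs a channel projection at unchanged resolution, while the new branch is created with a stride-2 convolution that simultaneously halves the resolution.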
The transition layer handles two cases:

- Existing branches: adjust the channel count where it differs (otherwise keep the identity).
- New branches: create them with stride-2 downsampling convolutions.

### 3.3 Full forward pass

The network's forward pass shows the complete path data takes through HRNet:

```python
def forward(self, x):
    # stem
    x = self.conv1(x)
    x = self.bn1(x)
    x = self.relu(x)
    x = self.conv2(x)
    x = self.bn2(x)
    x = self.relu(x)

    # stage 1
    x = self.layer1(x)

    # stage 2
    x_list = []
    for i in range(self.stage2_cfg['NUM_BRANCHES']):
        if self.transition1[i] is not None:
            x_list.append(self.transition1[i](x))
        else:
            x_list.append(x)
    y_list = self.stage2(x_list)

    # stage 3
    x_list = []
    for i in range(self.stage3_cfg['NUM_BRANCHES']):
        if self.transition2[i] is not None:
            if i < self.stage2_cfg['NUM_BRANCHES']:
                x_list.append(self.transition2[i](y_list[i]))
            else:
                # a brand-new branch is fed from the lowest-resolution output
                x_list.append(self.transition2[i](y_list[-1]))
        else:
            x_list.append(y_list[i])
    y_list = self.stage3(x_list)

    # stage 4
    x_list = []
    for i in range(self.stage4_cfg['NUM_BRANCHES']):
        if self.transition3[i] is not None:
            if i < self.stage3_cfg['NUM_BRANCHES']:
                x_list.append(self.transition3[i](y_list[i]))
            else:
                x_list.append(self.transition3[i](y_list[-1]))
        else:
            x_list.append(y_list[i])
    x = self.stage4(x_list)

    # upsample all branches to the highest resolution and concatenate
    x0_h, x0_w = x[0].size(2), x[0].size(3)
    x1 = F.interpolate(x[1], size=(x0_h, x0_w), mode='bilinear',
                       align_corners=True)
    x2 = F.interpolate(x[2], size=(x0_h, x0_w), mode='bilinear',
                       align_corners=True)
    x3 = F.interpolate(x[3], size=(x0_h, x0_w), mode='bilinear',
                       align_corners=True)

    return torch.cat([x[0], x1, x2, x3], 1)
```

Note that for a newly created branch, `y_list[i]` does not exist yet, so its transition must consume the lowest-resolution output `y_list[-1]`, as in the official implementation.

Typical data flow of HRNet (V2):

- Input image (256x256)
- Stem downsamples to 64x64
- Stage 1 (64x64)
- Stage 2 (64x64, 32x32)
- Stage 3 (64x64, 32x32, 16x16)
- Stage 4 (64x64, 32x32, 16x16, 8x8)
- Upsample and concatenate: output (64x64 x sum of channels)
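The schedule above follows directly from the stride arithmetic; a minimal sketch for a 256x256 input to HRNet-W18 with the HRNetV2 concatenation head:

```python
# Two stride-2 stem convolutions quarter the input resolution; each later
# stage adds one branch at half the previous lowest resolution.
input_size = 256
stem_out = input_size // 2 // 2
branch_sizes = [stem_out // (2 ** k) for k in range(4)]
print(branch_sizes)  # [64, 32, 16, 8]

# The HRNetV2 head upsamples every branch to 64x64 and concatenates:
w18_channels = [18, 36, 72, 144]
print(sum(w18_channels))  # 270
```

So for W18 the final feature map is 64x64 with 270 channels, which a task-specific head (e.g. a segmentation classifier) then consumes.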
## 4. Practical tips and performance optimization

### 4.1 Custom HRNet configurations

The network structure can be adjusted flexibly through the configuration dictionary:

```python
hrnet_w18 = {
    'STAGE1': {
        'NUM_MODULES': 1,
        'NUM_BRANCHES': 1,
        'NUM_BLOCKS': [4],
        'NUM_CHANNELS': [64],
        'BLOCK': 'BOTTLENECK',
        'FUSE_METHOD': 'SUM'
    },
    'STAGE2': {
        'NUM_MODULES': 1,
        'NUM_BRANCHES': 2,
        'NUM_BLOCKS': [4, 4],
        'NUM_CHANNELS': [18, 36],
        'BLOCK': 'BASIC',
        'FUSE_METHOD': 'SUM'
    },
    'STAGE3': {
        'NUM_MODULES': 4,
        'NUM_BRANCHES': 3,
        'NUM_BLOCKS': [4, 4, 4],
        'NUM_CHANNELS': [18, 36, 72],
        'BLOCK': 'BASIC',
        'FUSE_METHOD': 'SUM'
    },
    'STAGE4': {
        'NUM_MODULES': 3,
        'NUM_BRANCHES': 4,
        'NUM_BLOCKS': [4, 4, 4, 4],
        'NUM_CHANNELS': [18, 36, 72, 144],
        'BLOCK': 'BASIC',
        'FUSE_METHOD': 'SUM'
    }
}
```

Common variants compared:

| Variant | Stage1 channels | Stage2 channels | Stage3 channels | Stage4 channels | Params |
|---|---|---|---|---|---|
| HRNet-W18 | 64 | [18, 36] | [18, 36, 72] | [18, 36, 72, 144] | 9.6M |
| HRNet-W32 | 64 | [32, 64] | [32, 64, 128] | [32, 64, 128, 256] | 28.5M |
| HRNet-W48 | 64 | [48, 96] | [48, 96, 192] | [48, 96, 192, 384] | 63.6M |

### 4.2 Memory optimization

HRNet's multi-resolution design consumes significant GPU memory; the following strategies are worth knowing.

Gradient checkpointing:

```python
from torch.utils.checkpoint import checkpoint

def forward(self, x):
    # trade compute for memory in memory-critical layers
    return checkpoint(self._forward_impl, x)
```

Mixed-precision training:

```python
scaler = torch.cuda.amp.GradScaler()

with torch.cuda.amp.autocast():
    outputs = model(inputs)
    loss = criterion(outputs, targets)

scaler.scale(loss).backward()
scaler.step(optimizer)
scaler.update()
```

Branch gradient balancing:

```python
# scale gradients of lower-resolution branches down; binding `scale` as a
# default argument freezes the per-branch factor at definition time
for i, branch in enumerate(model.branches):
    for param in branch.parameters():
        param.register_hook(lambda grad, scale=0.9 ** i: grad * scale)
```

### 4.3 Deployment tips

TensorRT acceleration:

```bash
trtexec --onnx=hrnet.onnx \
        --saveEngine=hrnet.engine \
        --fp16 \
        --workspace=2048
```

Branch parallelism:

```python
# run the branches in parallel with PyTorch
from torch.nn.parallel import parallel_apply

outputs = parallel_apply(
    [branch for branch in model.branches],
    [x[i] for i in range(model.num_branches)])
```

Custom fused kernels:

```cuda
// an efficient CUDA kernel for multi-resolution fusion
__global__ void fuse_kernel(float* high_res, float* low_res, ...) {
    // fuse the upsampling and convolution into one pass
}
```

In real projects, HRNet's flexible architecture adapts to a wide range of computer vision tasks. With a solid grasp of the implementation, you can adjust the network structure, optimize its performance, and apply it to domain-specific problems with confidence.
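The gradient-balancing hook in 4.2 is easy to get wrong: a plain `lambda grad: grad * 0.9 ** i` captures `i` late, so every hook would see the final loop index. A minimal runnable check of the default-argument fix, with plain floats standing in for gradients:

```python
# Late binding: all three lambdas read i after the loop ends (i == 2).
bad = [lambda g: g * (0.9 ** i) for i in range(3)]
# Fix: bind the factor as a default argument at definition time.
good = [lambda g, s=0.9 ** i: g * s for i in range(3)]

print([round(f(1.0), 2) for f in bad])   # [0.81, 0.81, 0.81]
print([round(f(1.0), 2) for f in good])  # [1.0, 0.9, 0.81]
```

The same pattern applies verbatim to `param.register_hook`, since the hook is just a Python callable.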