Plug-and-Play Series | CVPR 2026 | CCSM: A Novel Mamba Block! Breaking Free of Pixel-Level Scanning! The First Cluster-Center State-Space Modeling, a Double Leap in Efficiency and Accuracy for UHD Image Restoration! | Code Included
0. Foreword

This article introduces CCSM (Cluster-Centric Scanning Module). Through a novel two-stage "feature aggregation + score diffusion" mechanism, it is the first visual state-space model to shift from pixel-level serial scanning to reasoning over cluster centers, resolving the core tension in UHD image restoration between global-modeling accuracy and computational cost. Used as a plug-and-play module, it can cut the computational cost of CNN, YOLO, Transformer, and other deep models by over 90% while matching or even exceeding the global-modeling capability of the original pixel-level scan, enabling full-resolution, real-time inference and accurate reconstruction on consumer hardware for ultra-large inputs such as 8K images, large-scale remote-sensing imagery, or high-resolution medical scans.

Column link: Plug-and-Play Series column (click through to subscribe for free).

Contents
0. Foreword
1. CCSM at a Glance
2. CCSM: Principles and Innovations
   - How CCSM works
   - What is new in CCSM
3. Applicability and Module Performance
   - Applicable scenarios
   - ⚡ Module performance
4. CCSM Code Implementation

1. CCSM at a Glance

Ultra-high-definition (UHD) image restoration is running into a scalability crisis: existing models operate at the pixel level, and their compute demands do not scale. Although state-space models (SSMs) such as Mamba promise linear complexity, their pixel-level serial scan remains a fundamental bottleneck for the millions of pixels in UHD content. This raises a question: must we process every pixel to understand an image? The paper proposes C²SSM, a visual state-space model that breaks this taboo by moving from pixel-level to cluster-center-level serial scanning. The key finding is that the rich feature distribution of a UHD image can be distilled into a sparse set of semantic cluster centers via a neurally parameterized mixture model. Exploiting this, C²SSM recasts global modeling as a novel dual-path process: it scans and reasons over only a handful of cluster centers, then diffuses the global context back to all pixels through a principled similarity distribution, while a lightweight modulator preserves detail.

Paper: https://arxiv.org/pdf/2602.21917
Code: https://github.com/5chen/C2SSM/tree/main/

2. CCSM: Principles and Innovations

How CCSM works

The core insight behind CCSM is that natural images are inherently redundant in structure: features of neighboring pixels are highly correlated, so global semantics can be captured without visiting every pixel. Building on this observation, CCSM adopts a three-stage "compress, reason, restore" scheme that elegantly turns global modeling over the millions of pixels of a UHD image into state-space modeling over a few semantic cluster centers. Concretely, CCSM consists of three key steps:

(1) Feature aggregation: compress pixels into semantic cluster centers. CCSM first reduces the input feature map, selecting a small number of pixels as initial cluster centers via a learnable initialization strategy (typically 4~8 centers). It then builds an n-way similarity distribution that gives each pixel a probabilistic association weight with every cluster center. The key innovation is a gated, learnable function that performs one-step adaptive refinement of the cluster centers — without any iterative optimization, the redundant information of millions of UHD pixels is condensed into a few centers carrying global semantic information.

(2) Cluster-center-level state-space scanning and reasoning. The refined cluster centers are fed into Mamba's selective-scan module (the S6 block). Because the number of centers n is far smaller than the pixel count H×W, the sequence length of the selective scan shrinks from H×W to n, cutting the cost of this step by several orders of magnitude. More importantly, Mamba's linear-complexity scan efficiently models long-range dependencies among the cluster centers, so global context is fully preserved.

(3) Score diffusion: invert global weights from centers back to pixels. Once the centers' global weights are estimated, CCSM redistributes them to every pixel through a weight-inversion mechanism based on the law of total probability. Using the n-way similarity distribution from step (1) as a probability map, each pixel's final global weight is the probability-weighted expectation of all center weights. This design diffuses global context back over the whole feature map in a probabilistically consistent way, closing the loop: efficient when compressing, accurate when restoring.

What is new in CCSM

- Paradigm innovation: the first visual state-space model to raise the scan granularity from pixels to cluster centers, fundamentally resolving the conflict between global-modeling accuracy and computational cost in UHD restoration.
- Mechanism innovation: a two-stage "feature aggregation + score diffusion" scheme, with one-step center refinement via a learnable gate and mathematically rigorous weight inversion via the law of total probability.
- Architecture innovation: within the overall C²SSM framework, CCSM complements the Spatial-Channel Feature Modulator (SCFM) — CCSM handles global semantic modeling while SCFM compensates local detail, and the two run in parallel to balance global perception with high-frequency detail preservation.
- Practical breakthrough: the first full-resolution, end-to-end restoration of UHD images on a consumer GPU, removing the VRAM bottleneck that prevents existing Mamba-style methods from being deployed.
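The aggregate-and-diffuse mechanics of steps (1) and (3) can be sketched in a few lines of PyTorch. This is a minimal illustration with shapes, the function name, and the hard-assignment detail chosen by us for clarity — the selective scan of step (2) is elided, and this is not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def aggregate_and_diffuse(x, centers):
    """Toy sketch of CCSM-style aggregation and score diffusion.

    x:       (B, N, D) pixel features, N = H*W
    centers: (B, M, D) cluster-center features, M << N
    """
    # Step 1: probabilistic association between every pixel and every center
    sim = torch.sigmoid(
        F.normalize(centers, dim=-1) @ F.normalize(x, dim=-1).transpose(-2, -1)
    )  # (B, M, N)
    # Hard assignment: each pixel keeps only its best-matching center
    mask = torch.zeros_like(sim)
    mask.scatter_(1, sim.argmax(dim=1, keepdim=True), 1.0)
    sim = sim * mask
    # Aggregate: similarity-weighted mean of pixel features per center
    agg = (sim @ x) / (sim.sum(dim=-1, keepdim=True) + 1.0)  # (B, M, D)
    # ... step 2, the selective scan over the M centers, would run here ...
    # Step 3 (score diffusion): broadcast center features back to all pixels
    out = sim.transpose(-2, -1) @ agg  # (B, N, D)
    return out

x = torch.randn(2, 64, 16)        # 64 "pixels", 16-dim features
centers = torch.randn(2, 4, 16)   # 4 cluster centers
print(aggregate_and_diffuse(x, centers).shape)  # torch.Size([2, 64, 16])
```

Note how the same similarity matrix `sim` serves both directions: it compresses N pixels into M centers on the way in and redistributes the M center outputs over the N pixels on the way out, which is what makes the restoration probabilistically consistent with the compression.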
3. Applicability and Module Performance

Applicable scenarios

CCSM suits vision tasks that need both a global receptive field and computational efficiency, especially scenarios with very high input resolution and strong pixel redundancy:

- UHD image restoration (low-light enhancement, deraining, deblurring, dehazing, desnowing): CCSM reaches SOTA performance at full resolution.
- Large-scale remote-sensing analysis: remote-sensing images often contain hundreds of millions of pixels; cluster-center-level scanning sharply lowers the compute burden.
- High-resolution medical imaging: in pathology slides and CT/MRI reconstruction, global context is critical for accurate diagnosis; CCSM models the whole image under limited hardware resources.
- Video super-resolution and enhancement: the spatio-temporal redundancy of frame sequences is a natural fit for CCSM's cluster-compression strategy.

⚡ Module performance

- Removing CCSM drops PSNR sharply, from 39.61 dB to 35.87 dB, confirming that global dependency modeling is key to restoration; among Mamba variants, CCSM also strikes the best balance between performance and parameter count.
- On the UHD-LOL4K, UHD-Blur, and UHD-Haze datasets, 4~6 cluster centers already reach the best performance, showing how strongly the module compresses UHD semantic information.
- CCSM costs only 0.407G FLOPs, far below MambaIR's 4.774G and MambaIRv2's 4.940G, demonstrating the overwhelming efficiency advantage of cluster-center-level over pixel-level scanning.

In summary, the ablations and complexity comparisons show that cluster-center-level scanning matches or exceeds existing Mamba-style methods while cutting computation by orders of magnitude, making full-resolution UHD restoration on consumer hardware possible for the first time.

4. CCSM Code Implementation

Below is the official PyTorch implementation of the CCSM attention mechanism:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.autograd
from einops import rearrange, repeat
import math
import numbers
import warnings

# Check the PyTorch version
TORCH_MAJOR_VERSION = int(torch.__version__.split(".")[0])
TORCH_MINOR_VERSION = int(torch.__version__.split(".")[1])

# Try to import the CUDA extensions
WITH_SELECTIVESCAN_OFLEX = False
WITH_SELECTIVESCAN_CORE = False
WITH_SELECTIVESCAN_MAMBA = False
try:
    import selective_scan_cuda_oflex
    WITH_SELECTIVESCAN_OFLEX = True
except ImportError:
    pass
try:
    import selective_scan_cuda_core
    WITH_SELECTIVESCAN_CORE = True
except ImportError:
    pass
try:
    import selective_scan_cuda
    WITH_SELECTIVESCAN_MAMBA = True
except ImportError:
    pass


def selective_scan_torch(
    u: torch.Tensor,      # (B, K * C, L)
    delta: torch.Tensor,  # (B, K * C, L)
    A: torch.Tensor,      # (K * C, N)
    B: torch.Tensor,      # (B, K, N, L)
    C: torch.Tensor,      # (B, K, N, L)
    D: torch.Tensor = None,           # (K * C)
    delta_bias: torch.Tensor = None,  # (K * C)
    delta_softplus=True,
    oflex=True,
    *args,
    **kwargs
):
    """Native PyTorch implementation of the selective scan, used for debugging and as a CPU fallback."""
    dtype_in = u.dtype
    Batch, K, N, L = B.shape
    KCdim = u.shape[1]
    Cdim = int(KCdim / K)
    assert u.shape == (Batch, KCdim, L)
    assert delta.shape == (Batch, KCdim, L)
    assert A.shape == (KCdim, N)
    assert C.shape == B.shape

    if delta_bias is not None:
        delta = delta + delta_bias[..., None]
    if delta_softplus:
        delta = torch.nn.functional.softplus(delta)

    u, delta, A, B, C = u.float(), delta.float(), A.float(), B.float(), C.float()
    # Reshape tensors for the scan
    B = B.view(Batch, K, 1, N, L).repeat(1, 1, Cdim, 1, 1).view(Batch, KCdim, N, L)
    C = C.view(Batch, K, 1, N, L).repeat(1, 1, Cdim, 1, 1).view(Batch, KCdim, N, L)
    deltaA = torch.exp(torch.einsum('bdl,dn->bdln', delta, A))
    deltaB_u = torch.einsum('bdl,bdnl,bdl->bdln', delta, B, u)

    # Serial state recurrence
    x = A.new_zeros((Batch, KCdim, N))
    ys = []
    for i in range(L):
        x = deltaA[:, :, i, :] * x + deltaB_u[:, :, i, :]
        y = torch.einsum('bdn,bdn->bd', x, C[:, :, :, i])
        ys.append(y)
    y = torch.stack(ys, dim=2)  # (B, C, L)
    out = y if D is None else y + u * D.unsqueeze(-1)
    return out if oflex else out.to(dtype=dtype_in)


class SelectiveScanCuda(torch.autograd.Function):
    """CUDA-accelerated selective scan."""

    @staticmethod
    def forward(ctx, u, delta, A, B, C, D=None, delta_bias=None, delta_softplus=False, oflex=True, backend=None):
        ctx.delta_softplus = delta_softplus
        # Pick the first available backend
        backend = "oflex" if WITH_SELECTIVESCAN_OFLEX and backend is None else backend
        backend = "core" if WITH_SELECTIVESCAN_CORE and backend is None else backend
        backend = "mamba" if WITH_SELECTIVESCAN_MAMBA and backend is None else backend
        ctx.backend = backend
        # Dispatch to the matching CUDA forward kernel
        if backend == "oflex":
            out, x, *rest = selective_scan_cuda_oflex.fwd(
                u, delta, A, B, C, D, delta_bias, delta_softplus, 1, oflex
            )
        elif backend == "core":
            out, x, *rest = selective_scan_cuda_core.fwd(
                u, delta, A, B, C, D, delta_bias, delta_softplus, 1
            )
        elif backend == "mamba":
            out, x, *rest = selective_scan_cuda.fwd(
                u, delta, A, B, C, D, None, delta_bias, delta_softplus
            )
        else:
            raise ValueError(f"Unknown backend: {backend}")
        ctx.save_for_backward(u, delta, A, B, C, D, delta_bias, x)
        return out

    @staticmethod
    def backward(ctx, dout, *args):
        u, delta, A, B, C, D, delta_bias, x = ctx.saved_tensors
        backend = ctx.backend
        if dout.stride(-1) != 1:
            dout = dout.contiguous()
        # Dispatch to the matching CUDA backward kernel
        if backend == "oflex":
            du, ddelta, dA, dB, dC, dD, ddelta_bias, *rest = selective_scan_cuda_oflex.bwd(
                u, delta, A, B, C, D, delta_bias, dout, x, ctx.delta_softplus, 1
            )
        elif backend == "core":
            du, ddelta, dA, dB, dC, dD, ddelta_bias, *rest = selective_scan_cuda_core.bwd(
                u, delta, A, B, C, D, delta_bias, dout, x, ctx.delta_softplus, 1
            )
        elif backend == "mamba":
            du, ddelta, dA, dB, dC, dD, ddelta_bias, *rest = selective_scan_cuda.bwd(
                u, delta, A, B, C, D, None, delta_bias, dout, x, None, None, ctx.delta_softplus, False
            )
        else:
            raise ValueError(f"Unknown backend: {backend}")
        return du, ddelta, dA, dB, dC, dD, ddelta_bias, None, None, None


def selective_scan_fn(
    u: torch.Tensor,      # (B, K * C, L)
    delta: torch.Tensor,  # (B, K * C, L)
    A: torch.Tensor,      # (K * C, N)
    B: torch.Tensor,      # (B, K, N, L)
    C: torch.Tensor,      # (B, K, N, L)
    D: torch.Tensor = None,           # (K * C)
    delta_bias: torch.Tensor = None,  # (K * C)
    delta_softplus: bool = True,
    oflex: bool = True,
    backend: str = None,
    **kwargs,  # accept and ignore extra arguments such as z
):
    """Main entry point for the selective scan.

    Args:
        u: input tensor (B, K*C, L)
        delta: delta tensor (B, K*C, L)
        A: A matrix (K*C, N)
        B: B matrix (B, K, N, L)
        C: C matrix (B, K, N, L)
        D: D parameter (K*C)
        delta_bias: delta bias (K*C)
        delta_softplus: whether to apply softplus to delta
        oflex: whether to use oflex mode
        backend: backend selection ("torch", "oflex", "core", "mamba")
        **kwargs: extra arguments (e.g. z) are ignored

    Returns:
        Output tensor (B, K*C, L)
    """
    # Check whether any CUDA extension is available
    WITH_CUDA = (WITH_SELECTIVESCAN_OFLEX or WITH_SELECTIVESCAN_CORE or WITH_SELECTIVESCAN_MAMBA)
    # Choose between the PyTorch and CUDA implementations
    if backend == "torch" or not WITH_CUDA:
        # Native PyTorch implementation (supports CPU and CUDA)
        fn = selective_scan_torch
    else:
        # CUDA-accelerated implementation
        fn = SelectiveScanCuda.apply
    return fn(u, delta, A, B, C, D, delta_bias, delta_softplus, oflex, backend)


def to_3d(x):
    return rearrange(x, 'b c h w -> b (h w) c')


def to_4d(x, h, w):
    return rearrange(x, 'b (h w) c -> b c h w', h=h, w=w)


class WithBias_LayerNorm(nn.Module):
    def __init__(self, normalized_shape):
        super(WithBias_LayerNorm, self).__init__()
        if isinstance(normalized_shape, numbers.Integral):
            normalized_shape = (normalized_shape,)
        normalized_shape = torch.Size(normalized_shape)
        assert len(normalized_shape) == 1
        self.weight = nn.Parameter(torch.ones(normalized_shape))
        self.bias = nn.Parameter(torch.zeros(normalized_shape))
        self.normalized_shape = normalized_shape

    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + 1e-6) * self.weight + self.bias


class LayerNorm(nn.Module):
    def __init__(self, dim):
        super(LayerNorm, self).__init__()
        self.body = WithBias_LayerNorm(dim)

    def forward(self, x):
        h, w = x.shape[-2:]
        return to_4d(self.body(to_3d(x)), h, w)


def pairwise_cos_sim(x1: torch.Tensor, x2: torch.Tensor):
    """Return the pairwise cosine-similarity matrix between two tensors.

    :param x1: [B,...,M,D]
    :param x2: [B,...,N,D]
    :return: similarity matrix [B,...,M,N]
    """
    x1 = F.normalize(x1, dim=-1)
    x2 = F.normalize(x2, dim=-1)
    sim = torch.matmul(x1, x2.transpose(-2, -1))
    return sim


class SparseStateSpaceModule(nn.Module):
    def __init__(
        self,
        d_model,
        proposal_hw,
        fold_hw,
        heads,
        d_state=8,
        d_conv=3,
        expand=2,
        dt_rank="auto",
        dt_min=0.001,
        dt_max=0.1,
        dt_init="random",
        dt_scale=1.0,
        dt_init_floor=1e-4,
        dropout=0.,
        conv_bias=True,
        bias=False,
        device=None,
        dtype=None,
        **kwargs,
    ):
        factory_kwargs = {'device': device, 'dtype': dtype}
        super().__init__()
        self.d_model = d_model
        self.proposal_hw = proposal_hw
        self.fold_hw = fold_hw
        self.heads = heads
        self.d_state = d_state
        self.d_conv = d_conv
        self.expand = expand
        self.d_inner = int(self.expand * self.d_model) // self.heads
        self.dt_rank = math.ceil(self.d_model / 16) if dt_rank == "auto" else dt_rank

        self.in_proj = nn.Linear(self.d_model, self.d_inner * 2, bias=bias, **factory_kwargs)
        self.conv2d = nn.Conv2d(
            in_channels=self.d_inner,
            out_channels=self.d_inner,
            groups=self.d_inner,
            bias=conv_bias,
            kernel_size=d_conv,
            padding=(d_conv - 1) // 2,
            **factory_kwargs,
        )
        self.act = nn.SiLU()

        # Use a list rather than a tuple
        self.x_proj = nn.ModuleList([
            nn.Linear(self.d_inner, (self.dt_rank + self.d_state * 2), bias=False, **factory_kwargs)
        ])
        self.x_proj_weight = nn.Parameter(torch.stack([t.weight for t in self.x_proj], dim=0))  # (K=1, N, inner)
        del self.x_proj

        self.x_conv = nn.Conv1d(
            in_channels=(self.dt_rank + self.d_state * 2),
            out_channels=(self.dt_rank + self.d_state * 2),
            kernel_size=7,
            padding=3,
            groups=(self.dt_rank + self.d_state * 2),
        )

        # Use a list rather than a tuple
        self.dt_projs = nn.ModuleList([
            self.dt_init(self.dt_rank, self.d_inner, dt_scale, dt_init, dt_min, dt_max, dt_init_floor, **factory_kwargs)
        ])
        self.dt_projs_weight = nn.Parameter(torch.stack([t.weight for t in self.dt_projs], dim=0))  # (K=1, inner, rank)
        self.dt_projs_bias = nn.Parameter(torch.stack([t.bias for t in self.dt_projs], dim=0))  # (K=1, inner)
        del self.dt_projs

        self.A_logs = self.A_log_init(self.d_state, self.d_inner, copies=1, merge=True)  # (K=1, D, N)
        self.Ds = self.D_init(self.d_inner, copies=1, merge=True)  # (K=1, D)

        self.selective_scan = selective_scan_fn

        self.out_norm = nn.LayerNorm(self.d_inner)
        self.out_proj = nn.Linear(self.d_inner, self.d_model, bias=bias, **factory_kwargs)
        self.dropout = nn.Dropout(dropout) if dropout > 0. else None

        self.f = nn.Conv2d(self.d_inner, self.d_inner * self.heads, kernel_size=1)  # for similarity
        self.proj = nn.Conv2d(self.d_inner * self.heads, self.d_inner, kernel_size=1)  # for projecting channel number
        self.v = nn.Conv2d(self.d_inner, self.d_inner * self.heads, kernel_size=1)  # for value
        self.sim_alpha = nn.Parameter(torch.ones(1))
        self.sim_beta = nn.Parameter(torch.zeros(1))
        self.centers_proposal = nn.AdaptiveAvgPool2d((self.proposal_hw, self.proposal_hw))

    @staticmethod
    def dt_init(dt_rank, d_inner, dt_scale=1.0, dt_init="random", dt_min=0.001, dt_max=0.1,
                dt_init_floor=1e-4, **factory_kwargs):
        dt_proj = nn.Linear(dt_rank, d_inner, bias=True, **factory_kwargs)
        # Initialize special dt projection to preserve variance at initialization
        dt_init_std = dt_rank ** -0.5 * dt_scale
        if dt_init == "constant":
            nn.init.constant_(dt_proj.weight, dt_init_std)
        elif dt_init == "random":
            nn.init.uniform_(dt_proj.weight, -dt_init_std, dt_init_std)
        else:
            raise NotImplementedError
        # Initialize dt bias so that F.softplus(dt_bias) is between dt_min and dt_max
        dt = torch.exp(
            torch.rand(d_inner, **factory_kwargs) * (math.log(dt_max) - math.log(dt_min)) + math.log(dt_min)
        ).clamp(min=dt_init_floor)
        # Inverse of softplus: https://github.com/pytorch/pytorch/issues/72759
        inv_dt = dt + torch.log(-torch.expm1(-dt))
        with torch.no_grad():
            dt_proj.bias.copy_(inv_dt)
        # Our initialization would set all Linear.bias to zero, need to mark this one as _no_reinit
        dt_proj.bias._no_reinit = True
        return dt_proj

    @staticmethod
    def A_log_init(d_state, d_inner, copies=1, device=None, merge=True):
        # S4D real initialization
        A = repeat(
            torch.arange(1, d_state + 1, dtype=torch.float32, device=device),
            'n -> d n',
            d=d_inner,
        ).contiguous()
        A_log = torch.log(A)  # Keep A_log in fp32
        if copies > 1:
            A_log = repeat(A_log, 'd n -> r d n', r=copies)
            if merge:
                A_log = A_log.flatten(0, 1)
        A_log = nn.Parameter(A_log)
        A_log._no_weight_decay = True
        return A_log

    @staticmethod
    def D_init(d_inner, copies=1, device=None, merge=True):
        # D "skip" parameter
        D = torch.ones(d_inner, device=device)
        if copies > 1:
            D = repeat(D, 'n1 -> r n1', r=copies)
            if merge:
                D = D.flatten(0, 1)
        D = nn.Parameter(D)  # Keep in fp32
        D._no_weight_decay = True
        return D

    # CCSM core forward pass: propose cluster centers -> similarity matching -> sparse
    # aggregation -> Mamba selective scan -> restore features
    def forward_core(self, x):
        value = self.v(x)
        x = self.f(x)
        x = rearrange(x, 'b (e c) w h -> (b e) c w h', e=self.heads)
        value = rearrange(value, 'b (e c) w h -> (b e) c w h', e=self.heads)

        if self.fold_hw > 1:
            # Split the big feature map into small local regions to reduce computation.
            b0, c0, w0, h0 = x.shape
            assert w0 % self.fold_hw == 0 and h0 % self.fold_hw == 0, \
                f"Ensure the feature map size ({w0}*{h0}) can be divided by fold {self.fold_hw}*{self.fold_hw}"
            x = rearrange(x, 'b c (f1 w) (f2 h) -> (b f1 f2) c w h',
                          f1=self.fold_hw, f2=self.fold_hw)  # [bs*blocks,c,ks[0],ks[1]]
            value = rearrange(value, 'b c (f1 w) (f2 h) -> (b f1 f2) c w h',
                              f1=self.fold_hw, f2=self.fold_hw)

        b, c, w, h = x.shape
        centers = self.centers_proposal(x)  # [b,c,C_W,C_H]; we set M = C_W*C_H and N = w*h
        value_centers = rearrange(self.centers_proposal(value), 'b c w h -> b (w h) c')  # [b,C_W*C_H,c]
        b, c, ww, hh = centers.shape
        # Cosine similarity between the cluster centers and all pixels: [B,M,N], M=C_W*C_H, N=w*h
        sim = torch.sigmoid(
            self.sim_beta + self.sim_alpha * pairwise_cos_sim(
                centers.reshape(b, c, -1).permute(0, 2, 1),
                x.reshape(b, c, -1).permute(0, 2, 1)
            )
        )  # [B,M,N]
        # We use a mask to solely assign each point to one center.
        # Sparse mask: each pixel matches only its highest-similarity cluster center (hard assignment).
        sim_max, sim_max_idx = sim.max(dim=1, keepdim=True)
        mask = torch.zeros_like(sim)  # binary [B,M,N]
        mask.scatter_(1, sim_max_idx, 1.)
        sim = sim * mask
        value2 = rearrange(value, 'b c w h -> b (w h) c')  # [B,N,D]
        # Aggregation step, out shape [B,M,D]
        out = (
            (value2.unsqueeze(dim=1) * sim.unsqueeze(dim=-1)).sum(dim=2) + value_centers
        ) / (sim.sum(dim=-1, keepdim=True) + 1.0)  # [B,M,D]

        B, L, C = out.shape
        K = 1
        # Reshape for the scan
        xs = rearrange(out, 'b l c -> b c l')  # [B, C, L]
        xs = xs.unsqueeze(1)  # [B, 1, C, L]
        x_dbl = torch.einsum('b k c l, k d c -> b k d l', xs, self.x_proj_weight)
        x_dbl = x_dbl.view(B, K, -1, L)
        x_dbl = self.x_conv(x_dbl.squeeze(1).contiguous()).unsqueeze(1)
        dts, Bs_, Cs_ = torch.split(x_dbl, [self.dt_rank, self.d_state, self.d_state], dim=2)
        dts = torch.einsum('b k r l, k d r -> b k d l', dts, self.dt_projs_weight)

        xs = xs.float().view(B, -1, L)
        dts = dts.contiguous().float().view(B, -1, L)
        Bs_ = Bs_.float().view(B, K, -1, L)
        Cs_ = Cs_.float().view(B, K, -1, L)
        Ds = self.Ds.float().view(-1)
        As = -torch.exp(self.A_logs.float()).view(-1, self.d_state)
        dt_projs_bias = self.dt_projs_bias.float().view(-1)

        # Run the selective scan over the cluster centers
        out_y = self.selective_scan(
            xs, dts, As, Bs_, Cs_, Ds,
            delta_bias=dt_projs_bias,
            delta_softplus=True,
        ).view(B, K, -1, L)
        assert out_y.dtype == torch.float

        out = rearrange(out_y[:, 0], 'b c l -> b l c')
        # Score diffusion: redistribute the centers' outputs back to every pixel
        out = (out.unsqueeze(dim=2) * sim.unsqueeze(dim=-1)).sum(dim=1)  # [B,N,D]
        out = rearrange(out, 'b (w h) c -> b c w h', w=w)

        if self.fold_hw > 1:
            # Recover the split regions back into the big feature map if region partition was used.
            out = rearrange(out, '(b f1 f2) c w h -> b c (f1 w) (f2 h)',
                            f1=self.fold_hw, f2=self.fold_hw)
        out = rearrange(out, '(b e) c w h -> b (e c) w h', e=self.heads)
        out = self.proj(out)
        return out

    def forward(self, x: torch.Tensor, **kwargs):
        x = rearrange(x, 'b c h w -> b h w c')
        B, H, W, C = x.shape
        xz = self.in_proj(x)
        x, z = xz.chunk(2, dim=-1)
        x = x.permute(0, 3, 1, 2).contiguous()
        x = self.act(self.conv2d(x))
        y = self.forward_core(x)
        assert y.dtype == torch.float32
        y = torch.transpose(y, dim0=1, dim1=2).contiguous().view(B, H, W, -1)
        y = self.out_norm(y)
        y = y * F.silu(z)
        out = self.out_proj(y)
        out = rearrange(out, 'b h w c -> b c h w')
        return out


if __name__ == '__main__':
    device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
    x = torch.randn(1, 64, 32, 32).to(device)
    model = SparseStateSpaceModule(64, 2, 4, 1).to(device)
    y = model(x)
    print('Input shape: ', x.shape)
    print('Output shape:', y.shape)

Combined with your own ideas, this module can be plugged into any model as a structural innovation. The author has already embedded it into the YOLO26 model; see the author's YOLO-series algorithm improvement column or the YOLO26 self-developed improvement column (links: YOLO-series algorithm improvement column, YOLO26 self-developed improvement series column).
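As a closing sanity check on the efficiency claims in Sections 2 and 3, the scan-length reduction can be estimated with simple arithmetic. This is illustrative only — the resolution and center count below are assumptions for the example, and the FLOPs figures quoted earlier are the paper's own measurements:

```python
# Why center-level scanning is cheap: the serial scan's sequence length is
# the dominant cost, and clustering shrinks it from H*W pixels to n centers.
H, W = 3840, 2160            # a 4K UHD feature map (assumed for illustration)
n = 8                        # cluster centers (the paper uses 4~8)
pixel_scan_len = H * W       # sequence length of a pixel-level scan
center_scan_len = n          # sequence length of a center-level scan
print(f"pixel-level scan length : {pixel_scan_len:,}")
print(f"center-level scan length: {center_scan_len}")
print(f"reduction factor        : {pixel_scan_len // center_scan_len:,}x")
```

Even before counting the lightweight aggregation and diffusion steps, the serial portion of the scan shrinks by roughly six orders of magnitude at 4K resolution.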