Supporting quality inspection and maintenance: building a crack segmentation and recognition system for concrete infrastructure on the ultra-lightweight segmentation model ege-unet


In an earlier post,

"The ultra-lightweight UNet variant egeunet with only 50KB of parameters [parameter and computation counts reduced by 494× and 160×]: medical image segmentation in practice"

we took a first look at how this latest ultra-lightweight UNet variant performs on medical images, and mentioned there that we would later apply the model to a real production scenario to support actual business work.

The goal of this post is to use the ege-unet model to build the kind of crack segmentation and recognition system commonly needed in quality inspection and maintenance of concrete infrastructure. First, a look at the actual results:

To plug our own dataset into the original project structure, only minor changes are needed. First, the dataset:

The annotated data looks like this:
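For crack segmentation the masks are single-channel binary images (crack vs. background), which matches num_classes = 1 in the config below. As a quick sanity check before training, something along these lines can confirm shape and value range; the ./crack_dataset/train/masks path is a hypothetical placeholder for wherever your own masks live:

import os
import numpy as np
from PIL import Image

# Hypothetical path -- point this at your own mask folder.
mask_dir = './crack_dataset/train/masks'
for name in sorted(os.listdir(mask_dir))[:5]:
    mask = np.array(Image.open(os.path.join(mask_dir, name)).convert('L'))
    # Expect a 2D array whose unique values are a subset of {0, 255} (or {0, 1}).
    print(name, mask.shape, 'unique values:', np.unique(mask))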

Next, modify the config_setting.py module under the configs directory, as shown below:

from torchvision import transforms
from utils import *

from datetime import datetime

class setting_config:
    """
    the config of training setting.
    """

    network = 'egeunet'
    model_config = {
        'num_classes': 1, 
        'input_channels': 3, 
        'c_list': [8,16,24,32,48,64], 
        'bridge': True,
        'gt_ds': True,
    }

    datasets = 'isic18' 
    if datasets == 'isic18':
        data_path = './data/isic2018/'
    elif datasets == 'isic17':
        data_path = './data/isic2017/'
    else:
        raise Exception('datasets is not right!')

    criterion = GT_BceDiceLoss(wb=1, wd=1)

    pretrained_path = './pre_trained/'
    num_classes = 1
    input_size_h = 256
    input_size_w = 256
    input_channels = 3
    distributed = False
    local_rank = -1
    num_workers = 0
    seed = 42
    world_size = None
    rank = None
    amp = False
    gpu_id = '0'
    batch_size = 192
    epochs = 100

    work_dir = 'results/' + network + '_' + datasets + '_' + datetime.now().strftime('%A_%d_%B_%Y_%Hh_%Mm_%Ss') + '/'

    print_interval = 20
    val_interval = 30
    save_interval = 10
    threshold = 0.5

    train_transformer = transforms.Compose([
        myNormalize(datasets, train=True),
        myToTensor(),
        myRandomHorizontalFlip(p=0.5),
        myRandomVerticalFlip(p=0.5),
        myRandomRotation(p=0.5, degree=[0, 360]),
        myResize(input_size_h, input_size_w)
    ])
    test_transformer = transforms.Compose([
        myNormalize(datasets, train=False),
        myToTensor(),
        myResize(input_size_h, input_size_w)
    ])

    opt = 'AdamW'
    assert opt in ['Adadelta', 'Adagrad', 'Adam', 'AdamW', 'Adamax', 'ASGD', 'RMSprop', 'Rprop', 'SGD'], 'Unsupported optimizer!'
    if opt == 'Adadelta':
        lr = 0.01 # default: 1.0 – coefficient that scale delta before it is applied to the parameters
        rho = 0.9 # default: 0.9 – coefficient used for computing a running average of squared gradients
        eps = 1e-6 # default: 1e-6 – term added to the denominator to improve numerical stability 
        weight_decay = 0.05 # default: 0 – weight decay (L2 penalty) 
    elif opt == 'Adagrad':
        lr = 0.01 # default: 0.01 – learning rate
        lr_decay = 0 # default: 0 – learning rate decay
        eps = 1e-10 # default: 1e-10 – term added to the denominator to improve numerical stability
        weight_decay = 0.05 # default: 0 – weight decay (L2 penalty)
    elif opt == 'Adam':
        lr = 0.001 # default: 1e-3 – learning rate
        betas = (0.9, 0.999) # default: (0.9, 0.999) – coefficients used for computing running averages of gradient and its square
        eps = 1e-8 # default: 1e-8 – term added to the denominator to improve numerical stability 
        weight_decay = 0.0001 # default: 0 – weight decay (L2 penalty) 
        amsgrad = False # default: False – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond
    elif opt == 'AdamW':
        lr = 0.001 # default: 1e-3 – learning rate
        betas = (0.9, 0.999) # default: (0.9, 0.999) – coefficients used for computing running averages of gradient and its square
        eps = 1e-8 # default: 1e-8 – term added to the denominator to improve numerical stability
        weight_decay = 1e-2 # default: 1e-2 – weight decay coefficient
        amsgrad = False # default: False – whether to use the AMSGrad variant of this algorithm from the paper On the Convergence of Adam and Beyond 
    elif opt == 'Adamax':
        lr = 2e-3 # default: 2e-3 – learning rate
        betas = (0.9, 0.999) # default: (0.9, 0.999) – coefficients used for computing running averages of gradient and its square
        eps = 1e-8 # default: 1e-8 – term added to the denominator to improve numerical stability
        weight_decay = 0 # default: 0 – weight decay (L2 penalty) 
    elif opt == 'ASGD':
        lr = 0.01 # default: 1e-2 – learning rate 
        lambd = 1e-4 # default: 1e-4 – decay term
        alpha = 0.75 # default: 0.75 – power for eta update
        t0 = 1e6 # default: 1e6 – point at which to start averaging
        weight_decay = 0 # default: 0 – weight decay
    elif opt == 'RMSprop':
        lr = 1e-2 # default: 1e-2 – learning rate
        momentum = 0 # default: 0 – momentum factor
        alpha = 0.99 # default: 0.99 – smoothing constant
        eps = 1e-8 # default: 1e-8 – term added to the denominator to improve numerical stability
        centered = False # default: False – if True, compute the centered RMSProp, the gradient is normalized by an estimation of its variance
        weight_decay = 0 # default: 0 – weight decay (L2 penalty)
    elif opt == 'Rprop':
        lr = 1e-2 # default: 1e-2 – learning rate
        etas = (0.5, 1.2) # default: (0.5, 1.2) – pair of (etaminus, etaplus), that are multiplicative increase and decrease factors
        step_sizes = (1e-6, 50) # default: (1e-6, 50) – a pair of minimal and maximal allowed step sizes 
    elif opt == 'SGD':
        lr = 0.01 # – learning rate
        momentum = 0.9 # default: 0 – momentum factor 
        weight_decay = 0.05 # default: 0 – weight decay (L2 penalty) 
        dampening = 0 # default: 0 – dampening for momentum
        nesterov = False # default: False – enables Nesterov momentum 
    
    sch = 'CosineAnnealingLR'
    if sch == 'StepLR':
        step_size = epochs // 5 # – Period of learning rate decay.
        gamma = 0.5 # – Multiplicative factor of learning rate decay. Default: 0.1
        last_epoch = -1 # – The index of last epoch. Default: -1.
    elif sch == 'MultiStepLR':
        milestones = [60, 120, 150] # – List of epoch indices. Must be increasing.
        gamma = 0.1 # – Multiplicative factor of learning rate decay. Default: 0.1.
        last_epoch = -1 # – The index of last epoch. Default: -1.
    elif sch == 'ExponentialLR':
        gamma = 0.99 #  – Multiplicative factor of learning rate decay.
        last_epoch = -1 # – The index of last epoch. Default: -1.
    elif sch == 'CosineAnnealingLR':
        T_max = 50 # – Maximum number of iterations. Cosine function period.
        eta_min = 0.00001 # – Minimum learning rate. Default: 0.
        last_epoch = -1 # – The index of last epoch. Default: -1.  
    elif sch == 'ReduceLROnPlateau':
        mode = 'min' # – One of min, max. In min mode, lr will be reduced when the quantity monitored has stopped decreasing; in max mode it will be reduced when the quantity monitored has stopped increasing. Default: ‘min’.
        factor = 0.1 # – Factor by which the learning rate will be reduced. new_lr = lr * factor. Default: 0.1.
        patience = 10 # – Number of epochs with no improvement after which learning rate will be reduced. For example, if patience = 2, then we will ignore the first 2 epochs with no improvement, and will only decrease the LR after the 3rd epoch if the loss still hasn’t improved then. Default: 10.
        threshold = 0.0001 # – Threshold for measuring the new optimum, to only focus on significant changes. Default: 1e-4.
        threshold_mode = 'rel' # – One of rel, abs. In rel mode, dynamic_threshold = best * ( 1 + threshold ) in ‘max’ mode or best * ( 1 - threshold ) in min mode. In abs mode, dynamic_threshold = best + threshold in max mode or best - threshold in min mode. Default: ‘rel’.
        cooldown = 0 # – Number of epochs to wait before resuming normal operation after lr has been reduced. Default: 0.
        min_lr = 0 # – A scalar or a list of scalars. A lower bound on the learning rate of all param groups or each group respectively. Default: 0.
        eps = 1e-08 # – Minimal decay applied to lr. If the difference between new and old lr is smaller than eps, the update is ignored. Default: 1e-8.
    elif sch == 'CosineAnnealingWarmRestarts':
        T_0 = 50 # – Number of iterations for the first restart.
        T_mult = 2 # – A factor increases T_{i} after a restart. Default: 1.
        eta_min = 1e-6 # – Minimum learning rate. Default: 0.
        last_epoch = -1 # – The index of last epoch. Default: -1. 
    elif sch == 'WP_MultiStepLR':
        warm_up_epochs = 10
        gamma = 0.1
        milestones = [125, 225]
    elif sch == 'WP_CosineLR':
        warm_up_epochs = 20

As you can see, I did not change datasets here: instead of adding a new branch, I directly replaced the contents of the isic2018 dataset with my own data. You can do the same, or you can add new logic to the code, for example:

datasets = 'self' 
if datasets == 'isic18':
    data_path = './data/isic2018/'
elif datasets == 'isic17':
    data_path = './data/isic2017/'
elif datasets == "self":
    data_path = './data/self/'
else:
    raise Exception('datasets is not right!')

I wanted to avoid the extra work, so I went with direct data replacement.
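For reference, the direct-replacement step can be scripted. Below is a minimal sketch that copies a hypothetical crack dataset into the directory layout the isic2018 loader reads from; both the source path and the train/images, train/masks, val/images, val/masks layout are assumptions, so check datasets/dataset.py in your copy of the repo before relying on it:

import os
import shutil

src_root = './crack_dataset'  # hypothetical: your own images/ and masks/ folders
dst_root = './data/isic2018'  # the path the unchanged config already points to

for split in ('train', 'val'):
    for sub in ('images', 'masks'):
        src = os.path.join(src_root, split, sub)
        dst = os.path.join(dst_root, split, sub)
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            shutil.copy(os.path.join(src, name), os.path.join(dst, name))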

The main configuration parameters are as follows:

pretrained_path = './pre_trained/'
num_classes = 1
input_size_h = 256
input_size_w = 256
input_channels = 3
distributed = False
local_rank = -1
num_workers = 0
seed = 42
world_size = None
rank = None
amp = False
gpu_id = '0'
batch_size = 192
epochs = 100

work_dir = 'results/' + network + '_' + datasets + '_' + datetime.now().strftime('%A_%d_%B_%Y_%Hh_%Mm_%Ss') + '/'

print_interval = 20
val_interval = 30
save_interval = 10
threshold = 0.5

The official default is 300 training epochs; to save time I reduced epochs to 100.

The default batch_size is 8; since my GPU has plenty of memory I raised it to 192. Adjust it to whatever your own hardware allows.

The default save_interval is 100; since my whole run is only 100 epochs, I lowered it to 10 so that checkpoints are saved more frequently during training.
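For context, the setting_config class above only stores hyperparameters; the training script is what turns the opt and sch strings into concrete PyTorch objects. The standalone sketch below shows how the AdamW + CosineAnnealingLR selection maps onto the torch.optim API; the repo does this through its own helpers, so treat this as an illustration rather than the project's exact code:

import torch

def build_optimizer_and_scheduler(model, cfg):
    # Mirrors the AdamW branch of the config above.
    optimizer = torch.optim.AdamW(model.parameters(), lr=cfg.lr, betas=cfg.betas,
                                  eps=cfg.eps, weight_decay=cfg.weight_decay,
                                  amsgrad=cfg.amsgrad)
    # Mirrors the CosineAnnealingLR branch of the config above.
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
        optimizer, T_max=cfg.T_max, eta_min=cfg.eta_min, last_epoch=cfg.last_epoch)
    return optimizer, scheduler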

After that, run the train.py module from a terminal; the output looks like this:
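A minimal invocation, assuming the repo's default entry point picks up setting_config automatically:

python train.py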

Next, a look at a training run stored under outputs:

Here is the actual inference visualization:

Side-by-side comparison of segmentation results:

Standalone segmentation results:

Compared with the medical images in the previous post, inference here takes somewhat longer per image. That comes down to the images themselves: the medical dataset has a lower resolution. Even so, the per-image inference time remains a clear advantage over earlier models.
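To reproduce a rough per-image timing yourself, a minimal sketch along these lines works; the models.egeunet import path and the EGEUNet constructor arguments are assumptions based on the model_config above, so adjust them to your checkout:

import time
import torch
from models.egeunet import EGEUNet  # assumed module path -- adjust to your checkout

model = EGEUNet(num_classes=1, input_channels=3,
                c_list=[8, 16, 24, 32, 48, 64], bridge=True, gt_ds=True)
model.eval()

x = torch.randn(1, 3, 256, 256)  # one 256x256 RGB image, matching the config
with torch.no_grad():
    model(x)  # warm-up pass
    start = time.time()
    model(x)
    print(f'inference time: {(time.time() - start) * 1000:.1f} ms')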

If your own work has related segmentation needs, this approach is well worth trying.
