A Data-Lake-Based Multi-Stream Join Solution - Hudi in Practice


Contents

I. Background

II. Code Demo

(1) The Multi-Writer Problem

(2) How Do We Handle Two Streams Writing to One Table?

(3) Test Results

III. Afterword


I. Background

The goal is to join two real-time streams on a data lake (for example, client-side tracking events + server-side tracking events, or a log stream + an order stream).

For the underlying concepts, see the previous article: 基于数据湖的多流拼接方案-HUDI概念篇 (Leonardo_KY's blog on CSDN).

II. Code Demo

The demos below all use the datagen connector to generate mock data for testing; for production, simply switch the source to Kafka or another connector, as in the sketch below.
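A minimal sketch of such a Kafka source (the topic, broker address, and format are illustrative placeholders, not values from the original setup):

        // Hypothetical Kafka replacement for the datagen sourceA below; note that the plain
        // 'kafka' connector does not accept a PRIMARY KEY constraint, so it is omitted here.
        tableEnv.executeSql("CREATE TABLE sourceA (\n" +
                " uuid bigint,\n" +
                " `name` VARCHAR(3),\n" +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'kafka',\n" +
                " 'topic' = 'stream_a_topic',\n" +                        // hypothetical topic
                " 'properties.bootstrap.servers' = 'broker:9092',\n" +    // hypothetical brokers
                " 'properties.group.id' = 'hudi-join-demo',\n" +
                " 'scan.startup.mode' = 'latest-offset',\n" +
                " 'format' = 'json'\n" +
                ")");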

Job 1: stream A, written into the Hudi table:

        // StreamExecutionEnvironment env = ...;  // env setup and enableCheckpointing(...) omitted in the original snippet
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(minPauseBetweenCP);  // 1s
        env.getCheckpointConfig().setCheckpointTimeout(checkpointTimeout);   // 2 min
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(maxConcurrentCheckpoints);

        // env.getCheckpointConfig().setCheckpointStorage("file:///D:/Users/yakang.lu/tmp/checkpoints/");

        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // datagen====================================================================
        tableEnv.executeSql("CREATE TABLE sourceA (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `name` VARCHAR(3)," +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // hudi====================================================================
        tableEnv.executeSql("create table hudi_tableA(\n"
                + " uuid bigint PRIMARY KEY NOT ENFORCED,\n"
                + " age int,\n"
                + " name VARCHAR(3),\n"
                + " _ts1 TIMESTAMP(3),\n"
                + " _ts2 TIMESTAMP(3),\n"
                + " d VARCHAR(10)\n"
                + ")\n"
                + " PARTITIONED BY (d)\n"
                + " with (\n"
                + " 'connector' = 'hudi',\n"
                + " 'path' = 'hdfs://ns/user/hive/warehouse/ctripdi_prodb.db/hudi_mor_mutil_source_test', \n"   // hdfs path
                + " 'table.type' = 'MERGE_ON_READ',\n"
                + " 'write.bucket_assign.tasks' = '10',\n"
                + " 'write.tasks' = '10',\n"
                + " 'write.partition.format' = 'yyyyMMddHH',\n"
                + " 'write.partition.timestamp.type' = 'EPOCHMILLISECONDS',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                + " 'changelog.enabled' = 'true',\n"
                + " 'index.type' = 'BUCKET',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                 + String.format(" '%s' = '%s',\n", FlinkOptions.PRECOMBINE_FIELD.key(), "_ts1")
                 + " 'write.payload.class' = '" + PartialUpdateAvroPayload.class.getName() + "',\n"
                + " 'hoodie.write.log.suffix' = 'job1',\n"
                + " 'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',\n"
                + " 'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider',\n"
                + " 'hoodie.cleaner.policy.failed.writes' = 'LAZY',\n"
                + " 'hoodie.cleaner.policy' = 'KEEP_LATEST_BY_HOURS',\n"
                + " 'hoodie.consistency.check.enabled' = 'false',\n"
                // + " 'hoodie.write.lock.early.conflict.detection.enable' = 'true',\n"   // todo
                // + " 'hoodie.write.lock.early.conflict.detection.strategy' = '"
                // + SimpleTransactionDirectMarkerBasedEarlyConflictDetectionStrategy.class.getName() + "',\n"  //
                + " 'hoodie.keep.min.commits' = '1440',\n"
                + " 'hoodie.keep.max.commits' = '2880',\n"
                + " 'compaction.schedule.enabled'='false',\n"
                + " 'compaction.async.enabled'='false',\n"
                + " 'compaction.trigger.strategy'='num_or_time',\n"
                + " 'compaction.delta_commits' ='3',\n"
                + " 'compaction.delta_seconds' ='60',\n"
                + " 'compaction.max_memory' = '3096',\n"
                + " 'clean.async.enabled' ='false',\n"
                + " 'hive_sync.enable' = 'false'\n"
                // + " 'hive_sync.mode' = 'hms',\n"
                // + " 'hive_sync.db' = '%s',\n"
                // + " 'hive_sync.table' = '%s',\n"
                // + " 'hive_sync.metastore.uris' = '%s'\n"
                + ")");

        // sql====================================================================
        StatementSet statementSet = tableEnv.createStatementSet();

        String sqlString = "insert into hudi_tableA(uuid, name, _ts1, d) select * from " +
                "(select *,date_format(CURRENT_TIMESTAMP,'yyyyMMdd') AS d from sourceA) view1";
        statementSet.addInsertSql(sqlString);
        statementSet.execute();

Job 2: stream B, written into the Hudi table:

        StreamExecutionEnvironment env = manager.getEnv();
        env.getCheckpointConfig().setMinPauseBetweenCheckpoints(minPauseBetweenCP);  // 1s
        env.getCheckpointConfig().setCheckpointTimeout(checkpointTimeout);   // 2 min
        env.getCheckpointConfig().setMaxConcurrentCheckpoints(maxConcurrentCheckpoints);

        // env.getCheckpointConfig().setCheckpointStorage("file:///D:/Users/yakang.lu/tmp/checkpoints/");

        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // datagen====================================================================
        tableEnv.executeSql("CREATE TABLE sourceB (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `age` int," +
                " _ts2 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // hudi====================================================================
        tableEnv.executeSql("create table hudi_tableB(\n"
                + " uuid bigint PRIMARY KEY NOT ENFORCED,\n"
                + " age int,\n"
                + " name VARCHAR(3),\n"
                + " _ts1 TIMESTAMP(3),\n"
                + " _ts2 TIMESTAMP(3),\n"
                + " d VARCHAR(10)\n"
                + ")\n"
                + " PARTITIONED BY (d)\n"
                + " with (\n"
                + " 'connector' = 'hudi',\n"
                + " 'path' = 'hdfs://ns/user/hive/warehouse/ctripdi_prodb.db/hudi_mor_mutil_source_test', \n"   // hdfs path
                + " 'table.type' = 'MERGE_ON_READ',\n"
                + " 'write.bucket_assign.tasks' = '10',\n"
                + " 'write.tasks' = '10',\n"
                + " 'write.partition.format' = 'yyyyMMddHH',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                + " 'changelog.enabled' = 'true',\n"
                + " 'index.type' = 'BUCKET',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                + String.format(" '%s' = '%s',\n", FlinkOptions.PRECOMBINE_FIELD.key(), "_ts1")
                + " 'write.payload.class' = '" + PartialUpdateAvroPayload.class.getName() + "',\n"
                + " 'hoodie.write.log.suffix' = 'job2',\n"
                + " 'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',\n"
                + " 'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider',\n"
                + " 'hoodie.cleaner.policy.failed.writes' = 'LAZY',\n"
                + " 'hoodie.cleaner.policy' = 'KEEP_LATEST_BY_HOURS',\n"
                + " 'hoodie.consistency.check.enabled' = 'false',\n"
                // + " 'hoodie.write.lock.early.conflict.detection.enable' = 'true',\n"   // todo
                // + " 'hoodie.write.lock.early.conflict.detection.strategy' = '"
                // + SimpleTransactionDirectMarkerBasedEarlyConflictDetectionStrategy.class.getName() + "',\n"
                + " 'hoodie.keep.min.commits' = '1440',\n"
                + " 'hoodie.keep.max.commits' = '2880',\n"
                + " 'compaction.schedule.enabled'='true',\n"
                + " 'compaction.async.enabled'='true',\n"
                + " 'compaction.trigger.strategy'='num_or_time',\n"
                + " 'compaction.delta_commits' ='2',\n"
                + " 'compaction.delta_seconds' ='90',\n"
                + " 'compaction.max_memory' = '3096',\n"
                + " 'clean.async.enabled' ='false'\n"
                // + " 'hive_sync.mode' = 'hms',\n"
                // + " 'hive_sync.db' = '%s',\n"
                // + " 'hive_sync.table' = '%s',\n"
                // + " 'hive_sync.metastore.uris' = '%s'\n"
                + ")");

        // sql====================================================================
        StatementSet statementSet = tableEnv.createStatementSet();
        String sqlString = "insert into hudi_tableB(uuid, age, _ts1, _ts2, d) select * from " +
                "(select *, _ts2 as ts1, date_format(CURRENT_TIMESTAMP,'yyyyMMdd') AS d from sourceB) view2";
        // statementSet.addInsertSql("insert into hudi_tableB(uuid, age, _ts2) select * from sourceB");
        statementSet.addInsertSql(sqlString);
        statementSet.execute();

Alternatively, the two writers can be placed in the same application (using a StatementSet):

import java.time.ZoneOffset;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.StatementSet;
import org.apache.flink.table.api.TableConfig;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.api.config.TableConfigOptions;
import org.apache.hudi.common.model.PartialUpdateAvroPayload;
import org.apache.hudi.configuration.FlinkOptions;
// import org.apache.hudi.table.marker.SimpleTransactionDirectMarkerBasedEarlyConflictDetectionStrategy;

public class Test00 {
    public static void main(String[] args) {
        Configuration configuration = TableConfig.getDefault().getConfiguration();
        configuration.setString(TableConfigOptions.LOCAL_TIME_ZONE, ZoneOffset.ofHours(8).toString());  // set the local time zone to UTC+8
        // configuration.setInteger("rest.port", 8086);
        StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironment(
                configuration);
        env.setParallelism(1);
        env.enableCheckpointing(12000L);
        // env.getCheckpointConfig().setCheckpointStorage("file:///Users/laifei/tmp/checkpoints/");
        TableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // datagen====================================================================
        tableEnv.executeSql("CREATE TABLE sourceA (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `name` VARCHAR(3)," +
                " _ts1 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");
        tableEnv.executeSql("CREATE TABLE sourceB (\n" +
                " uuid bigint PRIMARY KEY NOT ENFORCED,\n" +
                " `age` int," +
                " _ts2 TIMESTAMP(3)\n" +
                ") WITH (\n" +
                " 'connector' = 'datagen', \n" +
                " 'fields.uuid.kind'='sequence',\n" +
                " 'fields.uuid.start'='0', \n" +
                " 'fields.uuid.end'='1000000', \n" +
                " 'rows-per-second' = '1' \n" +
                ")");

        // hudi====================================================================
        tableEnv.executeSql("create table hudi_tableA(\n"
                + " uuid bigint PRIMARY KEY NOT ENFORCED,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " _ts1 TIMESTAMP(3),\n"
                + " _ts2 TIMESTAMP(3)\n"
                + ")\n"
                + " PARTITIONED BY (_ts1)\n"
                + " with (\n"
                + " 'connector' = 'hudi',\n"
                + " 'path' = 'file:\\D:\\Ctrip\\dataWork\\tmp', \n"   // hdfs path
                + " 'table.type' = 'MERGE_ON_READ',\n"
                + " 'write.bucket_assign.tasks' = '2',\n"
                + " 'write.tasks' = '2',\n"
                + " 'write.partition.format' = 'yyyyMMddHH',\n"
                + " 'write.partition.timestamp.type' = 'EPOCHMILLISECONDS',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                + " 'changelog.enabled' = 'true',\n"
                + " 'index.type' = 'BUCKET',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                // + String.format(" '%s' = '%s',\n", FlinkOptions.PRECOMBINE_FIELD.key(), "_ts1:name|_ts2:age")
                // + " 'write.payload.class' = '" + PartialUpdateAvroPayload.class.getName() + "',\n"
                + " 'hoodie.write.log.suffix' = 'job1',\n"
                + " 'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',\n"
                + " 'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider',\n"
                + " 'hoodie.cleaner.policy.failed.writes' = 'LAZY',\n"
                + " 'hoodie.cleaner.policy' = 'KEEP_LATEST_BY_HOURS',\n"
                + " 'hoodie.consistency.check.enabled' = 'false',\n"
                + " 'hoodie.write.lock.early.conflict.detection.enable' = 'true',\n"
                + " 'hoodie.write.lock.early.conflict.detection.strategy' = '"
                // + SimpleTransactionDirectMarkerBasedEarlyConflictDetectionStrategy.class.getName() + "',\n"  //
                + " 'hoodie.keep.min.commits' = '1440',\n"
                + " 'hoodie.keep.max.commits' = '2880',\n"
                + " 'compaction.schedule.enabled'='false',\n"
                + " 'compaction.async.enabled'='false',\n"
                + " 'compaction.trigger.strategy'='num_or_time',\n"
                + " 'compaction.delta_commits' ='3',\n"
                + " 'compaction.delta_seconds' ='60',\n"
                + " 'compaction.max_memory' = '3096',\n"
                + " 'clean.async.enabled' ='false',\n"
                + " 'hive_sync.enable' = 'false'\n"
                // + " 'hive_sync.mode' = 'hms',\n"
                // + " 'hive_sync.db' = '%s',\n"
                // + " 'hive_sync.table' = '%s',\n"
                // + " 'hive_sync.metastore.uris' = '%s'\n"
                + ")");

        tableEnv.executeSql("create table hudi_tableB(\n"
                + " uuid bigint PRIMARY KEY NOT ENFORCED,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " _ts1 TIMESTAMP(3),\n"
                + " _ts2 TIMESTAMP(3)\n"
                + ")\n"
                + " PARTITIONED BY (_ts2)\n"
                + " with (\n"
                + " 'connector' = 'hudi',\n"
                + " 'path' = '/Users/laifei/tmp/hudi/local.db/mutiwrite1', \n"   // hdfs path
                + " 'table.type' = 'MERGE_ON_READ',\n"
                + " 'write.bucket_assign.tasks' = '2',\n"
                + " 'write.tasks' = '2',\n"
                + " 'write.partition.format' = 'yyyyMMddHH',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                + " 'changelog.enabled' = 'true',\n"
                + " 'index.type' = 'BUCKET',\n"
                + " 'hoodie.bucket.index.num.buckets' = '2',\n"
                // + String.format(" '%s' = '%s',\n", FlinkOptions.PRECOMBINE_FIELD.key(), "_ts1:name|_ts2:age")
                // + " 'write.payload.class' = '" + PartialUpdateAvroPayload.class.getName() + "',\n"
                + " 'hoodie.write.log.suffix' = 'job2',\n"
                + " 'hoodie.write.concurrency.mode' = 'optimistic_concurrency_control',\n"
                + " 'hoodie.write.lock.provider' = 'org.apache.hudi.client.transaction.lock.FileSystemBasedLockProvider',\n"
                + " 'hoodie.cleaner.policy.failed.writes' = 'LAZY',\n"
                + " 'hoodie.cleaner.policy' = 'KEEP_LATEST_BY_HOURS',\n"
                + " 'hoodie.consistency.check.enabled' = 'false',\n"
                + " 'hoodie.write.lock.early.conflict.detection.enable' = 'true',\n"
                + " 'hoodie.write.lock.early.conflict.detection.strategy' = '"
                // + SimpleTransactionDirectMarkerBasedEarlyConflictDetectionStrategy.class.getName() + "',\n"
                + " 'hoodie.keep.min.commits' = '1440',\n"
                + " 'hoodie.keep.max.commits' = '2880',\n"
                + " 'compaction.schedule.enabled'='true',\n"
                + " 'compaction.async.enabled'='true',\n"
                + " 'compaction.trigger.strategy'='num_or_time',\n"
                + " 'compaction.delta_commits' ='2',\n"
                + " 'compaction.delta_seconds' ='90',\n"
                + " 'compaction.max_memory' = '3096',\n"
                + " 'clean.async.enabled' ='false'\n"
                // + " 'hive_sync.mode' = 'hms',\n"
                // + " 'hive_sync.db' = '%s',\n"
                // + " 'hive_sync.table' = '%s',\n"
                // + " 'hive_sync.metastore.uris' = '%s'\n"
                + ")");

        // sql====================================================================
        StatementSet statementSet = tableEnv.createStatementSet();
        statementSet.addInsertSql("insert into hudi_tableA(uuid, name, _ts1) select * from sourceA");
        statementSet.addInsertSql("insert into hudi_tableB(uuid, age, _ts2) select * from sourceB");
        statementSet.execute();

    }
}

(1) The Multi-Writer Problem

Because the jar built from the official Hudi code does not support multiple writers, the tests here use a jar built from Tencent's modified code.

If the official package is used and multiple writers write to the same Hudi table, the write fails with an exception.

Furthermore:

Hudi has a preCombineField, and only one field can be designated as the preCombineField when creating the table. With the official version, when two streams write to the same Hudi table there are two possible situations:

1. One stream writes the preCombineField and the other does not; the latter fails because the ordering value must not be null.

2. Both streams write this field, and a field-conflict exception occurs.

(2) How Do We Handle Two Streams Writing to One Table?

Based on local testing:

The Hudi 0.12 multiWrite build (Tencent's modified version) supports multiple precombine fields. With this version, multi-writer works as long as the fields other than the primary key and the partition field do not overlap between the streams. A sketch of the corresponding table options follows below.

Hudi 0.13 does not support this and still exhibits the problems described above.
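A minimal sketch of what the shared table's options might look like on the multiWrite branch; the `field:columns` syntax for multiple ordering fields is an assumption taken from the commented-out options in the demos above, and the path is a placeholder:

        // Assumed DDL for the shared table on the multiWrite branch (other options as in the demos above).
        tableEnv.executeSql("create table hudi_table_joined(\n"
                + " uuid bigint PRIMARY KEY NOT ENFORCED,\n"
                + " name VARCHAR(3),\n"
                + " age int,\n"
                + " _ts1 TIMESTAMP(3),\n"
                + " _ts2 TIMESTAMP(3)\n"
                + ") with (\n"
                + " 'connector' = 'hudi',\n"
                + " 'path' = 'hdfs://ns/tmp/hudi_table_joined',\n"   // hypothetical path
                + " 'table.type' = 'MERGE_ON_READ',\n"
                + " 'index.type' = 'BUCKET',\n"
                // each ordering field guards the columns owned by its own stream (assumed multiWrite syntax)
                + String.format(" '%s' = '%s',\n", FlinkOptions.PRECOMBINE_FIELD.key(), "_ts1:name|_ts2:age")
                + " 'write.payload.class' = '" + PartialUpdateAvroPayload.class.getName() + "'\n"
                + ")");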

(3) Test Results

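To check the merged result, the table can simply be queried from the same TableEnvironment (a minimal sketch; in the two-job setup both hudi_tableA and hudi_tableB point at the same path, so either can be read):

        // Inspect the joined rows; with PartialUpdateAvroPayload, columns written by stream A
        // (name, _ts1) and by stream B (age, _ts2) should end up merged on the same uuid.
        tableEnv.executeSql("select uuid, name, age, _ts1, _ts2 from hudi_tableA").print();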

Tencent article: https://cloud.tencent.com/developer/article/2189914

GitHub link: GitHub - XuQianJin-Stars/hudi at multiwrite-master-7

Building Hudi is rather troublesome; if needed, I will upload the pre-built jar later.

III. Afterword

With the code above, when traffic is relatively high there seems to be some degree of data loss (while compaction runs for one stream's writer, the other stream loses some data).

Two things are worth trying:

(1) UNION the two streams into a single stream first, then sink it into the Hudi table (this also avoids write conflicts); see the sketch after this list.

(2) Use another data-lake tool such as Apache Paimon; see: 新一代数据湖存储技术Apache Paimon入门Demo (Leonardo_KY's blog on CSDN).
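A minimal sketch of option (1), reusing the sourceA/sourceB and hudi_tableA definitions from the demos above; the NULL padding and the choice to fill _ts1 from _ts2 for stream B (mirroring job 2) are assumptions, not code from the original article:

        // Option (1): align the two schemas, UNION ALL them, and commit through a single writer,
        // so only one job writes to the Hudi table and no cross-job conflict can occur.
        StatementSet statementSet = tableEnv.createStatementSet();
        statementSet.addInsertSql(
                "insert into hudi_tableA(uuid, name, age, _ts1, _ts2, d) \n"
                + "select uuid, name, cast(null as int) as age, _ts1, cast(null as timestamp(3)) as _ts2, \n"
                + "       date_format(CURRENT_TIMESTAMP, 'yyyyMMdd') as d from sourceA \n"
                + "union all \n"
                + "select uuid, cast(null as varchar(3)) as name, age, _ts2 as _ts1, _ts2, \n"   // fill _ts1 from _ts2 so the ordering field is never null
                + "       date_format(CURRENT_TIMESTAMP, 'yyyyMMdd') as d from sourceB");
        statementSet.execute();

With a single writer, the optimistic-concurrency-control and lock options in the table definition would presumably no longer be needed.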

