Common Flink Sinks (ElasticsearchSink (EsSink), RedisSink, KafkaSink, MysqlSink, FileSink)


Writing Flink output to Elasticsearch, Redis, MySQL, Kafka, and files.

Contents

    • Configuring the pom file
    • Common entity class
    • KafkaSink
    • ElasticsearchSink (EsSink)
    • RedisSink
    • MysqlSink (JdbcSink)
    • FileSink

Prepare the relevant environments (Kafka, Elasticsearch, Redis, MySQL) yourself before starting.

Configuring the pom file

 <properties>
        <maven.compiler.source>8</maven.compiler.source>
        <maven.compiler.target>8</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <flink.version>1.13.0</flink.version>
        <java.version>1.8</java.version>
        <scala.binary.version>2.12</scala.binary.version>
        <slf4j.version>1.7.30</slf4j.version>
    </properties>
    <dependencies>
        <!-- Flink core dependencies -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-java</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-streaming-java_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-clients_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <!-- Logging dependencies -->
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-api</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>${slf4j.version}</version>
        </dependency>
        <dependency>
            <groupId>org.apache.logging.log4j</groupId>
            <artifactId>log4j-to-slf4j</artifactId>
            <version>2.14.0</version>
        </dependency>

        <!-- Kafka connector -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-kafka_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <version>1.18.22</version>
            <scope>compile</scope>
        </dependency>

        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>fastjson</artifactId>
            <version>1.2.58</version>
        </dependency>

        <!-- Redis connector (Bahir) -->
        <dependency>
            <groupId>org.apache.bahir</groupId>
            <artifactId>flink-connector-redis_2.11</artifactId>
            <version>1.0</version>
        </dependency>

        <!-- Elasticsearch connector -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-elasticsearch7_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>

        <!-- MySQL (JDBC) connector -->
        <dependency>
            <groupId>org.apache.flink</groupId>
            <artifactId>flink-connector-jdbc_${scala.binary.version}</artifactId>
            <version>${flink.version}</version>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.47</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-assembly-plugin</artifactId>
                <version>3.0.0</version>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <id>scala-compile-first</id>
                        <phase>process-resources</phase>
                        <goals>
                            <goal>add-source</goal>
                            <goal>compile</goal>
                        </goals>
                    </execution>
                    <execution>
                        <id>scala-test-compile</id>
                        <phase>process-test-resources</phase>
                        <goals>
                            <goal>testCompile</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
        </plugins>
    </build>

Common entity class

import lombok.AllArgsConstructor;
import lombok.Data;
import lombok.NoArgsConstructor;
import lombok.ToString;

@Data
@NoArgsConstructor
@ToString
@AllArgsConstructor
public class UserEvent {
    private String userName;
    private String url;
    private Long timestamp;
}
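
The Lombok annotations above generate the constructors, getters, and toString() used by all the sink examples below. If you prefer not to use Lombok, a minimal hand-written equivalent (matching the method names the later examples call) would look roughly like this:

public class UserEvent {
    private String userName;
    private String url;
    private Long timestamp;

    public UserEvent() {
    }

    public UserEvent(String userName, String url, Long timestamp) {
        this.userName = userName;
        this.url = url;
        this.timestamp = timestamp;
    }

    public String getUserName() { return userName; }

    public String getUrl() { return url; }

    public Long getTimestamp() { return timestamp; }

    @Override
    public String toString() {
        return "UserEvent(userName=" + userName + ", url=" + url + ", timestamp=" + timestamp + ")";
    }
}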

KafkaSink

Write data to Kafka. Start a Kafka consumer first, then run the program.

import com.event.UserEvent;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

import java.util.Arrays;
import java.util.Properties;

public class KafkaSinkTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);
        Properties properties = new Properties();
        //Kafka consumer properties (not needed by the producer sink below; reused in the read-back sketch after this example)
        properties.setProperty("bootstrap.servers", "hadoop01:9092");
        properties.setProperty("group.id", "consumer-group");
        properties.setProperty("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        properties.setProperty("auto.offset.reset", "latest");

        DataStreamSource<String> stream = env.fromCollection(Arrays.asList(
                "xiaoming,www.baidu.com,1287538716253",
                "Mr Li,www.baidu.com,1287538710000",
                "Mr Zhang,www.baidu.com,1287538710900"
        ));

        SingleOutputStreamOperator<String> result = stream.map(new MapFunction<String, String>() {
            @Override
            public String map(String value) throws Exception {
                //parse the CSV line into a UserEvent and emit its string form
                String[] split = value.split(",");
                return new UserEvent(split[0].trim(), split[1].trim(), Long.valueOf(split[2].trim())).toString();
            }
        });
		// command to start a console consumer:
		// ./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic events
        result.addSink(new FlinkKafkaProducer<String>(
                //Kafka broker address
                "hadoop01:9092",
                //target topic
                "events",
                new SimpleStringSchema()
        ));

        env.execute();

    }
}
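
To consume the written records inside Flink instead of with the console consumer, the FlinkKafkaConsumer already imported above can reuse the consumer properties from this example. A minimal read-back sketch:

// read the "events" topic back using the consumer properties defined above
DataStreamSource<String> kafkaStream = env.addSource(
        new FlinkKafkaConsumer<String>("events", new SimpleStringSchema(), properties));
kafkaStream.print();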

Result


ElasticsearchSink (EsSink)

Write data to Elasticsearch.

Example code


import com.event.UserEvent;
import org.apache.flink.api.common.functions.RuntimeContext;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
import org.apache.flink.streaming.connectors.elasticsearch7.ElasticsearchSink;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.Requests;

import java.util.Arrays;
import java.util.HashMap;
import java.util.List;

public class EsSinkTest {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<UserEvent> userEventDataStreamSource =
                env.fromCollection(
                        Arrays.asList(
                                new UserEvent("zhangsan", "path?test123", System.currentTimeMillis() - 2000L),
                                new UserEvent("zhangsan", "path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("lisi", "path?checkParam", System.currentTimeMillis()),
                                new UserEvent("bob", "path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("mary", "path?checkParam", System.currentTimeMillis()),
                                new UserEvent("lisi", "path?checkParam123", System.currentTimeMillis() - 2000L)
                        ));


        //define the Elasticsearch host list
        List<HttpHost> hosts = Arrays.asList(new HttpHost("hadoop01", 9200));

        //define the ElasticsearchSinkFunction that turns each record into an IndexRequest
        ElasticsearchSinkFunction<UserEvent> elasticsearchSinkFunction = new ElasticsearchSinkFunction<UserEvent>() {
            @Override
            public void process(UserEvent userEvent, RuntimeContext runtimeContext, RequestIndexer requestIndexer) {
                IndexRequest indexRequest = Requests.indexRequest()
                        .index("events")
                        //document types are deprecated in Elasticsearch 7, so no .type() call is needed
                        .source(new HashMap<String, String>() {{
                            put(userEvent.getUserName(), userEvent.getUrl());
                        }});
                requestIndexer.add(indexRequest);
            }
        };

        //write to Elasticsearch
        userEventDataStreamSource.addSink(new ElasticsearchSink.Builder<>(hosts, elasticsearchSinkFunction).build());

        env.execute();
    }
}
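
By default the Elasticsearch sink buffers documents and writes them in bulk, so records may not be searchable immediately after the job finishes. For quick testing, the connector's Builder can be told to flush after every element. A sketch using the Builder's bulk-flush setting:

// flush after every element so results are visible immediately
// (testing only; keep larger bulk sizes in production)
ElasticsearchSink.Builder<UserEvent> esSinkBuilder =
        new ElasticsearchSink.Builder<>(hosts, elasticsearchSinkFunction);
esSinkBuilder.setBulkFlushMaxActions(1);
userEventDataStreamSource.addSink(esSinkBuilder.build());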

Commands

GET _cat/indices

GET _cat/indices/events

GET events/_search
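
The commands above use the Kibana Dev Tools console syntax. Over plain HTTP the search is equivalent to, for example:

curl "http://hadoop01:9200/events/_search?pretty"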

Result

RedisSink

Write data to Redis.

Example code


import com.event.UserEvent;
import my.test.source.CustomSource;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.redis.RedisSink;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisClusterConfig;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisConfigBase;
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommand;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisCommandDescription;
import org.apache.flink.streaming.connectors.redis.common.mapper.RedisMapper;

public class RedisSinkTest {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        DataStreamSource<UserEvent> streamSource = env.addSource(new CustomSource());

        //create the Jedis connection configuration
        FlinkJedisPoolConfig config = new FlinkJedisPoolConfig.Builder()
                .setHost("master")
                .setTimeout(10000)
                .setPort(6379)
                .build();
        

        //write to Redis
        streamSource.addSink(new RedisSink<>(config, new MyRedisMapper()));


        env.execute();
    }

    public static class MyRedisMapper implements RedisMapper<UserEvent>{
        @Override
        public RedisCommandDescription getCommandDescription() {
            //write with HSET; the additionalKey argument identifies the Redis hash key under which entries are stored
            return new RedisCommandDescription(RedisCommand.HSET, "events");
        }

        @Override
        public String getKeyFromData(UserEvent userEvent) {
            return userEvent.getUserName();
        }

        @Override
        public String getValueFromData(UserEvent userEvent) {
            return userEvent.getUrl();
        }
    }



}
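
Since the mapper uses HSET with additionalKey "events", the written data can be checked with redis-cli via HGETALL events. The FlinkJedisPoolConfig above targets a single Redis node; for a Redis Cluster deployment, the FlinkJedisClusterConfig that is already imported can be used instead. A sketch with placeholder node addresses:

// cluster variant; the node addresses are placeholders for your own deployment
// (also needs java.net.InetSocketAddress, java.util.Arrays and java.util.HashSet)
FlinkJedisClusterConfig clusterConfig = new FlinkJedisClusterConfig.Builder()
        .setNodes(new HashSet<>(Arrays.asList(
                new InetSocketAddress("redis-node1", 6379),
                new InetSocketAddress("redis-node2", 6379))))
        .build();
streamSource.addSink(new RedisSink<>(clusterConfig, new MyRedisMapper()));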

Custom source

import com.event.UserEvent;
import org.apache.flink.streaming.api.functions.source.SourceFunction;

import java.util.Calendar;
import java.util.Random;

public class CustomSource implements SourceFunction<UserEvent> {
    // flag controlling whether the source keeps generating data
    private Boolean running = true;

    @Override
    public void run(SourceContext<UserEvent> ctx) throws Exception {
        Random random = new Random(); // randomly pick from the fixed data sets below
        String[] users = {"Mary", "Alice", "Bob", "Cary"};
        String[] urls = {"./home", "./cart", "./fav", "./prod?id=1",
                "./prod?id=2"};
        while (running) {
            ctx.collect(new UserEvent(
                    users[random.nextInt(users.length)],
                    urls[random.nextInt(urls.length)],
                    Calendar.getInstance().getTimeInMillis()
            ));
            // emit one click event per second for easy observation
            Thread.sleep(1000);
        }
    }
    @Override
    public void cancel() {
        running = false;
    }
}

Result
Because the source above is an unbounded stream, the output keeps changing.

MysqlSink (JdbcSink)

Write data to MySQL.

Table definition. Note that the JDBC URL in the example below points at the mysql database, so create the events table there (or adjust the URL to your own database).

create table events(
    user_name varchar(20) not null,
    url varchar(100) not null
);

Example code


import com.event.UserEvent;
import org.apache.flink.connector.jdbc.JdbcConnectionOptions;
import org.apache.flink.connector.jdbc.JdbcSink;
import org.apache.flink.connector.jdbc.JdbcStatementBuilder;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Arrays;

public class MysqlSinkTest {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        //a batch of sample data
        DataStreamSource<UserEvent> userEventDataStreamSource =
                env.fromCollection(
                        Arrays.asList(
                                new UserEvent("zhangsan", "/path?test123", System.currentTimeMillis() - 2000L),
                                new UserEvent("zhangsan", "/path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("lisi", "/path?checkParam", System.currentTimeMillis()),
                                new UserEvent("bob", "/path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("mary", "/path?checkParam", System.currentTimeMillis()),
                                new UserEvent("lisi", "/path?checkParam123", System.currentTimeMillis() - 2000L)
                        ));


        userEventDataStreamSource.addSink(JdbcSink.sink(
                //SQL statement to execute
                "INSERT INTO events (user_name, url) VALUES (?, ?)",
                new JdbcStatementBuilder<UserEvent>() {
                    @Override
                    public void accept(PreparedStatement preparedStatement, UserEvent userEvent) throws SQLException {
                        //bind the SQL placeholders
                        preparedStatement.setString(1, userEvent.getUserName());
                        preparedStatement.setString(2, userEvent.getUrl());
                    }
                },
                //JDBC connection options
                new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                        .withUrl("jdbc:mysql://hadoop01:3306/mysql")
                        .withUsername("root")
                        .withPassword("123456")
                        .withDriverName("com.mysql.jdbc.Driver")
                        .build()
        ));

        env.execute();
    }
}
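
JdbcSink.sink also accepts a JdbcExecutionOptions argument (between the statement builder and the connection options) that controls batching and retries, which matters for higher-volume streams. A sketch with illustrative values, assuming an extra import of org.apache.flink.connector.jdbc.JdbcExecutionOptions; the lambda is a supported shorthand for the anonymous JdbcStatementBuilder above:

userEventDataStreamSource.addSink(JdbcSink.sink(
        "INSERT INTO events (user_name, url) VALUES (?, ?)",
        (preparedStatement, userEvent) -> {
            preparedStatement.setString(1, userEvent.getUserName());
            preparedStatement.setString(2, userEvent.getUrl());
        },
        // batching/retry tuning; the numbers below are illustrative, not recommendations
        JdbcExecutionOptions.builder()
                .withBatchSize(100)
                .withBatchIntervalMs(200)
                .withMaxRetries(3)
                .build(),
        new JdbcConnectionOptions.JdbcConnectionOptionsBuilder()
                .withUrl("jdbc:mysql://hadoop01:3306/mysql")
                .withUsername("root")
                .withPassword("123456")
                .withDriverName("com.mysql.jdbc.Driver")
                .build()
));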

After the program finishes, the new rows appear in MySQL's events table.

FileSink

Write data to files (bucketed/partitioned output is supported).

import com.event.UserEvent;
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
import org.apache.flink.streaming.api.functions.sink.filesystem.rollingpolicies.DefaultRollingPolicy;

import java.util.Arrays;
import java.util.concurrent.TimeUnit;

public class FileSinkTest {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        DataStreamSource<UserEvent> userEventDataStreamSource =
                env.fromCollection(
                        Arrays.asList(
                                new UserEvent("zhangsan", "path?test123", System.currentTimeMillis() - 2000L),
                                new UserEvent("zhangsan", "path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("lisi", "path?checkParam", System.currentTimeMillis()),
                                new UserEvent("bob", "path?test", System.currentTimeMillis() + 2000L),
                                new UserEvent("mary", "path?checkParam", System.currentTimeMillis()),
                                new UserEvent("lisi", "path?checkParam123", System.currentTimeMillis() - 2000L)
                        ));


        StreamingFileSink<String> streamingFileSink = StreamingFileSink.
                <String>forRowFormat(new Path("./output/"), new SimpleStringEncoder<>("UTF-8"))
                .withRollingPolicy(
                        DefaultRollingPolicy.builder()
                                .withMaxPartSize(1024 * 1024 * 1024)
                                .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
                                //roll the current part file after this period of inactivity (no new data)
                                .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                                .build()
                ).build();

        userEventDataStreamSource.map(data -> data.getUserName()).addSink(streamingFileSink);


        env.execute();

    }
}
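
Note that StreamingFileSink only finalizes in-progress part files on checkpoints, so with checkpointing disabled (as here) output may remain in hidden .inprogress files. Flink 1.12+ also provides the unified FileSink (org.apache.flink.connector.file.sink.FileSink, in the flink-connector-files module), used via sinkTo(). A sketch based on the Flink 1.13 API, assuming that dependency is added:

FileSink<String> fileSink = FileSink
        .<String>forRowFormat(new Path("./output/"), new SimpleStringEncoder<>("UTF-8"))
        .withRollingPolicy(
                DefaultRollingPolicy.builder()
                        .withMaxPartSize(1024 * 1024 * 1024)
                        .withRolloverInterval(TimeUnit.MINUTES.toMillis(15))
                        .withInactivityInterval(TimeUnit.MINUTES.toMillis(5))
                        .build())
        .build();

userEventDataStreamSource.map(UserEvent::getUserName).sinkTo(fileSink);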

After the run, new part files appear under ./output/.
