1. Start ZooKeeper
zk.sh start
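zk.sh here is a site-local wrapper script, not part of ZooKeeper itself. A minimal sketch of such a wrapper, assuming three hosts named hadoop102/103/104 (adjust to your cluster):

```shell
#!/usr/bin/env bash
# Hypothetical zk.sh: fan out zkServer.sh start/stop/status to every
# ZooKeeper host. The host names are assumptions; RUN defaults to ssh
# and can be overridden (e.g. RUN=echo) to dry-run the loop.
zk_cluster() {
  local action=$1
  case "$action" in
    start|stop|status) ;;
    *) echo "usage: zk.sh {start|stop|status}" >&2; return 1 ;;
  esac
  local host
  for host in hadoop102 hadoop103 hadoop104; do
    ${RUN:-ssh} "$host" "zkServer.sh $action"
  done
}
```

Invoked as zk_cluster start, it runs zkServer.sh start on each host in turn over ssh.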
2. Start HDFS (the Hadoop cluster)
start-dfs.sh
3. Start YARN
start-yarn.sh
4. Start Kafka
Start the Kafka broker (run on each node of the cluster):
bin/kafka-server-start.sh -daemon config/server.properties
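server.properties carries the per-broker settings the start command reads. A sketch of the entries that usually matter, with illustrative values (broker.id must be unique per node; paths and host names are assumptions for this cluster):

```properties
# Minimal per-broker settings in config/server.properties (illustrative)
broker.id=0
listeners=PLAINTEXT://hadoop102:9092
log.dirs=/opt/module/kafka/datas
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181/kafka
```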
List Kafka topics:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
Create the ODS-layer Kafka topics (using --bootstrap-server, consistent with the list command above; the --zookeeper flag was removed in Kafka 3.0):
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic ods_cars_log --replication-factor 1 --partitions 1
bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --topic ods_entrance_guard_log --replication-factor 1 --partitions 1
Delete the ODS-layer Kafka topics:
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic ods_cars_log
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic ods_entrance_guard_log
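As the list of ODS topics grows, the create commands can be generated in one loop. A small sketch that prints the commands (pipe its output to sh on the Kafka host to actually run them); topic names and broker address are taken from the commands above:

```shell
# Print the create command for each ODS-layer topic instead of running it,
# so the output can be reviewed first and then piped to a shell.
create_ods_topics() {
  local topic
  for topic in ods_cars_log ods_entrance_guard_log; do
    echo "bin/kafka-topics.sh --bootstrap-server localhost:9092 --create" \
         "--topic $topic --replication-factor 1 --partitions 1"
  done
}
create_ods_topics
```

Usage: create_ods_topics | sh (run from the Kafka installation directory).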
5. Start the Flink SQL client service
# Start a Flink yarn-session
bin/yarn-session.sh -s 1 -jm 1024 -tm 1024
# Stop the yarn-session (replace the application id with the one printed at startup)
bin/yarn application -kill application_1659752641091_0003
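The application id changes on every yarn-session start, so the hard-coded id above goes stale quickly. A sketch that pulls the id out of captured yarn-session output; the sample line below is illustrative of the submission log:

```shell
# Extract the first YARN application id (application_<clusterTs>_<seq>)
# from whatever text is piped in, e.g. the yarn-session startup log.
extract_app_id() {
  grep -oE 'application_[0-9]+_[0-9]+' | head -n 1
}
# Example with a captured log line:
extract_app_id <<'EOF'
Submitted application application_1659752641091_0003
EOF
```

The kill then becomes: bin/yarn application -kill "$(extract_app_id < yarn-session.log)", assuming the session output was saved to yarn-session.log.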
6. Start the Flink SQL client
bin/sql-client.sh -s yarn-session
7. Configure checkpointing
SET 'execution.checkpointing.interval' = '10s';
SET 'parallelism.default' = '3';
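A few related checkpoint settings are often set alongside the interval. The keys below are standard Flink configuration options; the HDFS path is an assumption for this cluster:

```sql
-- Checkpointing mode and timeout (standard Flink configuration keys)
SET 'execution.checkpointing.mode' = 'EXACTLY_ONCE';
SET 'execution.checkpointing.timeout' = '10min';
-- Where checkpoint data is stored; the HDFS path below is illustrative
SET 'state.checkpoints.dir' = 'hdfs://hadoop102:8020/flink/checkpoints';
```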
Prerequisites before starting Flink:
# Start HDFS
start-dfs.sh
# Start the Flink standalone cluster
start-cluster.sh
# Start the Flink history server
historyserver.sh start
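historyserver.sh reads its archive locations from flink-conf.yaml. A sketch of the usual entries (the keys are standard Flink options; the HDFS paths and port are assumptions):

```yaml
# flink-conf.yaml: where finished jobs are archived and where the
# history server reads them from (paths are illustrative)
jobmanager.archive.fs.dir: hdfs://hadoop102:8020/flink/completed-jobs
historyserver.archive.fs.dir: hdfs://hadoop102:8020/flink/completed-jobs
historyserver.web.address: 0.0.0.0
historyserver.web.port: 8082
```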
Flink CDC environment setup
sql-client job submission modes
1. Standalone mode
Start the sql-client: bin/sql-client.sh embedded
Note: to run in standalone mode, first start a Flink standalone cluster:
bin/start-cluster.sh
2. yarn-session mode (the mode used in this case)
First start a Flink yarn-session cluster: bin/yarn-session.sh -s 1 -jm 1024 -tm 1024
Then start the sql-client: bin/sql-client.sh -s yarn-session
Also start Kafka and Flink (see the steps above).
ClickHouse commands
Start ClickHouse with the default configuration:
service clickhouse-server start
Check the ClickHouse server process:
ps -ef | grep clickhouse
Restart the server:
systemctl restart clickhouse-server
# Connect with the ClickHouse client
clickhouse-client
# List databases
show databases;
Username: default
Password: 123456
Common error: Could not find any factory for identifier 'mysql-cdc' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.
This means the Flink CDC MySQL connector jar (flink-sql-connector-mysql-cdc-*.jar) is not on the classpath; copy it into Flink's lib/ directory and restart the cluster or yarn-session before retrying.
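Once the connector jar is on the classpath, a mysql-cdc source table is declared as below. The option names follow the Flink CDC MySQL connector; the host, credentials, database, table, and column names here are placeholders, not taken from this cluster:

```sql
-- Sketch of a mysql-cdc source table; all connection details and columns
-- are placeholders to be replaced with the real upstream schema.
CREATE TABLE ods_cars_log_src (
  id BIGINT,
  plate_number STRING,
  event_time TIMESTAMP(3),
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'hadoop102',
  'port' = '3306',
  'username' = 'root',
  'password' = '123456',
  'database-name' = 'parking',
  'table-name' = 'cars_log'
);
```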