Deploying ELK on Kubernetes, a Supplement: Collecting Kubernetes Cluster Events with kubernetes-event-exporter
Table of Contents
- Deploying ELK on Kubernetes, a Supplement: Collecting Kubernetes Cluster Events with kubernetes-event-exporter
- I. Introduction to kubernetes-event-exporter
- II. Hands-on Deployment of kubernetes-event-exporter
  - 1. Create the Namespace (elk-namespace.yaml)
  - 2. Deploy Logstash (event-logstash.yaml)
  - 3. Deploy kubernetes-event-exporter (event-exporter.yaml)
  - 4. Apply All Resources
  - 5. Verify the Logstash Pod Status
- III. Viewing Events in Kibana
- Summary
In a Kubernetes cluster, events (Events) are an important mechanism for recording resource state changes and abnormal conditions, and they are commonly used to troubleshoot deployment problems, scheduling anomalies, and system errors. However, Kubernetes only retains events for roughly the last hour by default, and they can only be viewed ad hoc with the kubectl get events command, which falls short of production needs such as centralized management, persistent storage, and alerting.
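For reference, this is what that ad hoc view looks like, using standard kubectl flags (the sort flag simply orders events chronologically):

```bash
# List recent events across all namespaces, oldest first
kubectl get events -A --sort-by=.metadata.creationTimestamp

# Or watch events in a single namespace as they arrive
kubectl get events -n default -w
```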
To address this, we can introduce kubernetes-event-exporter, which captures cluster events in real time and exports them to logging systems, alerting platforms, or other backend stores. Combined with ELK (Elasticsearch + Logstash + Kibana), a mature log-analysis stack, we can collect, analyze, and visualize cluster events in one place.
This article walks through deploying kubernetes-event-exporter in a Kubernetes environment to collect and export events, providing richer data for cluster observability and troubleshooting.
I. Introduction to kubernetes-event-exporter
kubernetes-event-exporter is a lightweight event collection and export tool built specifically for Kubernetes. It captures cluster events (Events) in real time and exports them to a backend of your choice, such as Elasticsearch, Loki, Kafka, a webhook, or a file.
Compared with inspecting events manually via kubectl get events, this tool processes event data in an automated, persistent, and structured way, which makes it a natural fit for integration with logging systems, monitoring platforms, and alerting systems, strengthening cluster observability and enabling faster responses to anomalies.
kubernetes-event-exporter offers flexible filtering rules and receiver configuration: you can define output policies based on event type, namespace, resource, reason, and so on, making it a strong complement to a Kubernetes operations and monitoring stack.
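To make that routing model concrete, here is a minimal configuration sketch (not the one we deploy below): it routes only Warning events to a file receiver, where the receiver name and file path are placeholders for illustration:

```yaml
logLevel: error
logFormat: json
route:
  routes:
    # Route only Warning-type events to the "dump" receiver
    - match:
        - type: "Warning"
          receiver: "dump"
receivers:
  - name: "dump"
    file:
      path: "/tmp/events.json"  # hypothetical local sink for testing
```

The production configuration in section II follows the same route/receivers shape, but targets Kafka instead.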
II. Hands-on Deployment of kubernetes-event-exporter
1. Create the Namespace (elk-namespace.yaml)
First, create a new namespace to hold the ELK-related resources:
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: elk
```
2. Deploy Logstash (event-logstash.yaml)
The following configuration deploys Logstash, which pulls event data from Kafka and ships it to Elasticsearch:
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: kube-event-logstash
  namespace: elk
  labels:
    app: kube-event-logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kube-event-logstash
  template:
    metadata:
      labels:
        app: kube-event-logstash
      annotations:
        kubesphere.io/restartedAt: '2024-02-22T09:03:36.215Z'
    spec:
      volumes:
        - name: kube-event-logstash-pipeline-config
          configMap:
            name: kube-event-logstash-pipeline-config
            defaultMode: 420
        - name: kube-event-logstash-config
          configMap:
            name: kube-event-logstash-config
        - name: logstash-config
          emptyDir: {}
      initContainers:
        # Copy logstash.yml out of the read-only ConfigMap mount so the
        # main container gets a writable config directory
        - name: copy-logstash-config
          image: harbor.local/k8s/busybox:1.37.0
          command: ['sh', '-c', 'cp /tmp/logstash.yml /usr/share/logstash/config/logstash.yml && chmod 777 /usr/share/logstash/config/logstash.yml']
          volumeMounts:
            - name: kube-event-logstash-config
              mountPath: /tmp/logstash.yml
              subPath: logstash.yml
            - name: logstash-config
              mountPath: /usr/share/logstash/config
      containers:
        - name: kube-event-logstash
          image: harbor.local/k8s/logstash:7.17.0
          env:
            - name: NODE_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: PIPELINE_BATCH_SIZE
              value: '4000'
            - name: PIPELINE_BATCH_DELAY
              value: '100'
            - name: PIPELINE_WORKERS
              value: '4'
            - name: LS_JAVA_OPTS
              value: '-Xms2g -Xmx3500m'  # JVM heap settings
          resources:
            limits:
              cpu: '2'
              memory: 4Gi
            requests:
              cpu: '1'
              memory: 1Gi
          volumeMounts:
            - name: kube-event-logstash-pipeline-config
              mountPath: /usr/share/logstash/pipeline
            - name: logstash-config
              mountPath: /usr/share/logstash/config/logstash.yml
              subPath: logstash.yml
          livenessProbe:
            tcpSocket:
              port: 9600
            initialDelaySeconds: 39
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 2
          readinessProbe:
            tcpSocket:
              port: 9600
            initialDelaySeconds: 39
            timeoutSeconds: 5
            periodSeconds: 30
            successThreshold: 1
            failureThreshold: 2
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          imagePullPolicy: IfNotPresent
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      dnsPolicy: ClusterFirst
      securityContext: {}
      schedulerName: default-scheduler
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%
      maxSurge: 25%
  revisionHistoryLimit: 10
  progressDeadlineSeconds: 600
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-event-logstash-pipeline-config
  namespace: elk
data:
  logstash.conf: |-
    input {
      kafka {
        bootstrap_servers => "kafka-0.kafka-headless.elk.svc.cluster.local:9092"
        group_id => "logstash-consumer-group-event"
        topics => ["k8s-event"]
        codec => "json"
        consumer_threads => 1
        decorate_events => true
        security_protocol => "PLAINTEXT"
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch-0.elasticsearch-cluster.elk.svc.cluster.local:9200"]
        index => "k8s-event-%{+YYYY.MM.dd}"
      }
    }
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-event-logstash-config
  namespace: elk
data:
  logstash.yml: |-
    http.host: "0.0.0.0"
    xpack.monitoring.elasticsearch.hosts: ["http://elasticsearch-0.elasticsearch-cluster.elk.svc.cluster.local:9200"]
```
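A note on the output: Logstash writes to a daily index, k8s-event-%{+YYYY.MM.dd}, which is exactly what the Kibana index pattern k8s-event-* will match later. Once the exporter from the next step is running, you can also confirm that events are landing in the Kafka topic; assuming your broker pod is kafka-0 in the elk namespace and the image ships the standard Kafka console scripts (both are assumptions about your Kafka install), something like this works:

```bash
# Consume a few messages from the k8s-event topic to confirm delivery
kubectl exec -n elk kafka-0 -- \
  kafka-console-consumer.sh \
  --bootstrap-server localhost:9092 \
  --topic k8s-event \
  --from-beginning --max-messages 5
```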
3. Deploy kubernetes-event-exporter (event-exporter.yaml)
This file bundles the ServiceAccount, ClusterRoleBinding, configuration ConfigMap, and Deployment:
```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: elk
  name: event-exporter
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: event-exporter
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view  # read-only access is enough for watching events
subjects:
  - kind: ServiceAccount
    namespace: elk
    name: event-exporter
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: event-exporter-cfg
  namespace: elk
data:
  config.yaml: |
    logLevel: error
    logFormat: json
    route:
      routes:
        - match:
            - receiver: "kafka"
          drop:
            - kind: "Service"  # optional: ignore events for Service objects
    receivers:
      - name: "kafka"
        kafka:
          clientId: "kubernetes"
          topic: "k8s-event"
          brokers:
            - "kafka-0.kafka-headless.elk.svc.cluster.local:9092"
          compressionCodec: "snappy"
          layout:  # custom field layout
            kind: "{{ .InvolvedObject.Kind }}"
            namespace: "{{ .InvolvedObject.Namespace }}"
            name: "{{ .InvolvedObject.Name }}"
            reason: "{{ .Reason }}"
            message: "{{ .Message }}"
            type: "{{ .Type }}"
            timestamp: "{{ .GetTimestampISO8601 }}"
            cluster: "sda-pre-center"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: event-exporter
  namespace: elk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: event-exporter
      version: v1
  template:
    metadata:
      labels:
        app: event-exporter
        version: v1
    spec:
      serviceAccountName: event-exporter
      containers:
        - name: event-exporter
          image: harbor.local/k8s/kubernetes-event-exporter:v1
          imagePullPolicy: IfNotPresent
          args:
            - -conf=/data/config.yaml
          volumeMounts:
            - mountPath: /data
              name: cfg
      volumes:
        - name: cfg
          configMap:
            name: event-exporter-cfg
```
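After applying, a quick sanity check is to tail the exporter's logs and make sure it starts cleanly and connects to Kafka (standard kubectl usage):

```bash
# Follow the exporter logs; config or broker connection errors surface here
kubectl logs -n elk deploy/event-exporter -f
```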
4. Apply All Resources
After saving the YAML files above, apply them all with the following commands:
```bash
kubectl apply -f elk-namespace.yaml
kubectl apply -f event-logstash.yaml
kubectl apply -f event-exporter.yaml
```
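If you would rather validate the manifests before touching the cluster, kubectl's client-side dry run is a cheap first check:

```bash
# Parse and validate the manifests locally without creating anything
kubectl apply -f event-logstash.yaml --dry-run=client
kubectl apply -f event-exporter.yaml --dry-run=client
```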
5. Verify the Logstash Pod Status
```bash
kubectl get pod -n elk
```
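Both the kube-event-logstash and event-exporter pods should reach Running. To exercise the whole pipeline end to end, one trick is to start a pod whose image cannot be pulled, which reliably generates Warning events (the pod name here is arbitrary):

```bash
# Create a pod that fails to pull its image, emitting Failed/BackOff events
kubectl run event-test --image=nonexistent/image:latest

# Confirm the events exist on the Kubernetes side
kubectl get events --field-selector involvedObject.name=event-test

# Clean up afterwards
kubectl delete pod event-test
```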
III. Viewing Events in Kibana
Access URL: http://ip:30601
In Kibana, create the index pattern k8s-event-* to visualize the cluster event data (if you are not sure how to create an index pattern, see my earlier posts in this series).
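If the index pattern finds no matching indices, you can check from inside the cluster whether the daily indices exist yet; this assumes curl is available in the Elasticsearch container image (true for the official 7.x images):

```bash
# List the event indices Logstash has written so far
kubectl exec -n elk elasticsearch-0 -- \
  curl -s 'http://localhost:9200/_cat/indices/k8s-event-*?v'
```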
Summary
📌 As a supplement to the earlier posts on deploying ELK in Kubernetes, this article showed how to collect cluster events in real time with kubernetes-event-exporter and, via Kafka and Logstash, ship the event data into Elasticsearch for centralized management and visualization.
With this setup, native Kubernetes events are no longer limited to short-lived, manual inspection; they become part of a complete logging and monitoring pipeline, which greatly improves operational efficiency and observability and makes this an essential building block for production environments.