Set Up an EFK (Elasticsearch, Fluent Bit, Kibana) Stack in Kubernetes

The EFK stack is a stack for collecting and analyzing log data. It can be installed on top of Kubernetes to collect logs from Kubernetes itself, from virtual machines, or from bare-metal servers.

Prerequisites:

  • A running Kubernetes cluster
  • Dynamic Volume Provisioning, read here
  • Helm installed
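
Before going further, it helps to confirm the prerequisites are in place. A minimal check, assuming kubectl and helm are already configured on your machine:

kubectl get nodes          # all nodes should report Ready
kubectl get storageclass   # your dynamic-provisioning storage class should be listed
helm version               # with Helm 2, both client and server (Tiller) should respond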

Now let’s install the stack inside Kubernetes:

  • First, create the logging namespace
kubectl create namespace logging
  • Install Elasticsearch with Helm
helm install stable/elasticsearch --name elastic-search --namespace logging --set data.persistence.storageClass=gluster-heketi-external,data.persistence.size=2Gi,master.persistence.storageClass=gluster-heketi-external,master.persistence.size=2Gi

Note that gluster-heketi-external is my storage class name; adjust it to your own. This Helm release is based on the stable/elasticsearch chart.
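
Before moving on, make sure the Elasticsearch pods come up and their PersistentVolumeClaims get bound. A quick check (the exact pod and PVC names depend on the chart version and release name, so treat them as assumptions):

kubectl get pods -n logging   # client, master, and data pods should reach Running
kubectl get pvc -n logging    # the data and master PVCs should show STATUS Bound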

  • Get the ClusterIP address of the Elasticsearch service; you will need it for Kibana and fluent-bit (a one-liner to capture it follows below).
kubectl get svc -n logging
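
If you prefer to capture the address in a shell variable instead of copying it by hand, something like the following works. The service name elastic-search-elasticsearch-client is only my guess based on the release name and the chart's naming convention, so check it against the kubectl get svc output first:

ES_IP=$(kubectl get svc elastic-search-elasticsearch-client -n logging -o jsonpath='{.spec.clusterIP}')
echo $ES_IP   # use this value wherever elasticsearch_ClusterIP appears below
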
  • Install Kibana

Download values.yaml and change a few variables. If you want Kibana to be exposed, change the service type.

wget https://raw.githubusercontent.com/helm/charts/master/stable/kibana/values.yaml
nano values.yaml
...
elasticsearch.url: http://elasticsearch_ClusterIP:9200
...
service:
  type: NodePort
  nodePort: 31500
...
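
If you captured the ClusterIP in ES_IP above, you can splice it into values.yaml without opening an editor. A small sketch, assuming the placeholder still reads elasticsearch_ClusterIP:

sed -i "s|http://elasticsearch_ClusterIP:9200|http://${ES_IP}:9200|" values.yaml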

Install Kibana via Helm:

helm install stable/kibana --name kibana -f values.yaml --namespace logging --set persistentVolumeClaim.storageClass=gluster-heketi-external,persistentVolumeClaim.size=2Gi,plugins.enabled=true,persistentVolumeClaim.enabled=true

Note that gluster-heketi-external is my storage class name; adjust it to your own. This Helm release is based on the stable/kibana chart.
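
Once the release is up, Kibana should be reachable on the NodePort set in values.yaml (31500 here). A quick sanity check, assuming the chart labels its pods with app=kibana:

kubectl get pods -n logging -l app=kibana   # wait until the pod is Running
kubectl get svc -n logging                  # confirm the kibana service exposes NodePort 31500

Then open http://<any_node_ip>:31500 in a browser.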

  • Install fluent-bit

The installation procedure for fluent-bit is a little bit tricky. I tried the helm/charts version but got an error about the apiVersion, so I decided to install it manually.

kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-service-account.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role.yaml
kubectl create -f https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/fluent-bit-role-binding.yaml
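
These three manifests create the ServiceAccount, ClusterRole, and ClusterRoleBinding that let fluent-bit read pod metadata. To confirm they were created (the object names below are taken from the upstream manifests at the time of writing, so double-check them if the files change):

kubectl get serviceaccount fluent-bit -n logging
kubectl get clusterrole fluent-bit-read
kubectl get clusterrolebinding fluent-bit-read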

Download the ConfigMap; you need to edit it before applying it to the cluster.

wget https://raw.githubusercontent.com/fluent/fluent-bit-kubernetes-logging/master/output/elasticsearch/fluent-bit-configmap.yaml
nano fluent-bit-configmap.yaml
...
  output-elasticsearch.conf: |
    [OUTPUT]
        Name            es
        Match           *
        Host            elasticsearch_ClusterIP
        Port            9200
...
kubectl create -f fluent-bit-configmap.yaml
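
As a sanity check (the ConfigMap is named fluent-bit-config and lives in the logging namespace in the upstream manifest), confirm the output section now points at your Elasticsearch ClusterIP:

kubectl get configmap fluent-bit-config -n logging -o yaml | grep -E 'Host|Port'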

Let’s create the DaemonSet

nano fluent-bit.yaml

...
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: logging
  name: fluent-bit
  labels:
    component: fluent-bit-logging
    version: v1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      component: fluent-bit-logging
  template:
    metadata:
      labels:
        component: fluent-bit-logging
        version: v1
        kubernetes.io/cluster-service: "true"
    spec:
      containers:
      - name: fluent-bit
        image: fluent/fluent-bit:0.12.17
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: fluent-bit-config
          mountPath: /fluent-bit/etc/
      terminationGracePeriodSeconds: 10
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: fluent-bit-config
        configMap:
          name: fluent-bit-config
      serviceAccountName: fluent-bit
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
...
kubectl create -f fluent-bit.yaml
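
Finally, verify that a fluent-bit pod is running on every node and that logs are flowing, using the component=fluent-bit-logging label from the DaemonSet above:

kubectl get daemonset fluent-bit -n logging                          # DESIRED should equal READY
kubectl get pods -n logging -l component=fluent-bit-logging -o wide
kubectl logs -n logging -l component=fluent-bit-logging --tail=20

Once fluent-bit is shipping data, open Kibana on the NodePort above and create an index pattern for the new indices (with the upstream output settings they typically appear as logstash-*).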
