ElasticSearch Installation On Openshift

This topic explains how to install Elasticsearch and Kibana on OpenShift.

Step1:
Create a new service account and grant it the privileged SCC. The containers will use this service account to run specific commands that need root permission.

oc create serviceaccount privileged-sa -n <your_namespace_name>
oc adm policy add-scc-to-user privileged system:serviceaccount:<your_namespace_name>:privileged-sa

Step2:
Create persistent volume claims to keep the Elasticsearch data. Please edit the namespace field in the yaml below.

vim elk_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es01-pvc
  namespace: <your_namespace_name>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: es02-pvc
  namespace: <your_namespace_name>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
oc create -f elk_pvc.yaml
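After creating the claims, it is worth confirming they bind before moving on (a quick check with standard `oc` commands; the namespace placeholder is yours):

```shell
# Both claims should reach the Bound state; if they stay Pending,
# check that your cluster has a default StorageClass or matching PVs
oc get pvc es01-pvc es02-pvc -n <your_namespace_name>
```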

Step3:
Tag your images and create an image pull secret. We tagged the images for a Nexus repository. If you use the internal registry, you do not need an image pull secret: pull the images from the Docker repository, add a new tag, and push them to the internal registry of OpenShift. All commands for pulling, tagging, and pushing are in the code block below.

docker pull docker.elastic.co/elasticsearch/elasticsearch:7.16.2
docker pull busybox
docker pull docker.elastic.co/kibana/kibana:7.16.2
docker tag docker.elastic.co/elasticsearch/elasticsearch:7.16.2 <your_nexus_repository_address:port>/repository/<your_repository_name>/elasticsearch:7.16.2
docker tag busybox <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
docker tag docker.elastic.co/kibana/kibana:7.16.2 <your_nexus_repository_address:port>/repository/<your_repository_name>/kibana:7.16.2
docker push <your_nexus_repository_address:port>/repository/<your_repository_name>/elasticsearch:7.16.2
docker push <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
docker push <your_nexus_repository_address:port>/repository/<your_repository_name>/kibana:7.16.2
oc create secret docker-registry openshift-image-registry-nexus --docker-server=<your_nexus_repository_address:port> --docker-username=<nexus_user_username> --docker-password=<password> --docker-email=email@email.com.tr

Step4:
Create service resources so the Elasticsearch pods can reach each other and accept connections.

vim elk_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: es01
  namespace: <your_namespace_name>
  labels:
    app: es01
spec:
  ports:
  - name: rest
    port: 9200
    protocol: TCP
  - name: internode
    port: 9300
    protocol: TCP
  #clusterIP: None
  selector:
    app: es01
---
apiVersion: v1
kind: Service
metadata:
  name: es02
  namespace: <your_namespace_name>
  labels:
    app: es02
spec:
  ports:
  - name: rest
    port: 9200
    protocol: TCP
  - name: internode
    port: 9300
    protocol: TCP
  #clusterIP: None
  selector:
    app: es02
oc apply -f elk_svc.yaml
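Once the services exist, you can verify them and, after the pods come up in Step5, confirm that each service has endpoints (standard `oc` commands; adjust the namespace placeholder):

```shell
# The services should each expose ports 9200 and 9300;
# ENDPOINTS stays empty until the matching pods are running
oc get svc es01 es02 -n <your_namespace_name>
oc get endpoints es01 es02 -n <your_namespace_name>
```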

Step5:
Create the deployment file and deploy the two Elasticsearch applications. Do not forget to update the deployment yaml with your namespace, service account, and image pull secret if you use an external image repository.

vim elk.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es01
  namespace: <your_namespace_name>
  labels:
    app: es01
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/instance: elasticsearch-01 
    app.kubernetes.io/version: "7.16.2"
    app.kubernetes.io/component: indexingandsearch
    app.kubernetes.io/part-of: logging
    app.kubernetes.io/managed-by: kubectl
    app.kubernetes.io/created-by: hands
spec:
  replicas: 1
  selector:
    matchLabels:
      app: es01
  template:
    metadata:
      labels:
        app: es01
    spec:
      serviceAccountName: privileged-sa
      containers:
      - name: es01
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/elasticsearch:7.16.2
        resources:
         limits:
           cpu: 1000m
         requests:
           cpu: 100m
        ports:
        - name: req
          containerPort: 9200
          protocol: TCP
        - name: inter-node
          containerPort: 9300
          protocol: TCP
        volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: es01
        env:
        - name: cluster.name
          value: es-k8s-cluster
        - name: node.name
          value: es01
        - name: discovery.seed_hosts
          value: "es01,es02"
        - name: cluster.initial_master_nodes
          value: "es01,es02"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: es01
      - name: increase-vm-max-map
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      volumes:
      - name: es01
        persistentVolumeClaim:
          claimName: es01-pvc
      imagePullSecrets:
      - name: <your_image_registry_secret_name>
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es02
  namespace: <your_namespace_name>
  labels:
    app: es02
    app.kubernetes.io/name: elasticsearch
    app.kubernetes.io/instance: elasticsearch-02
    app.kubernetes.io/version: "7.16.2"
    app.kubernetes.io/component: indexingandsearch
    app.kubernetes.io/part-of: logging
    app.kubernetes.io/managed-by: kubectl
    app.kubernetes.io/created-by: hands
spec:
  replicas: 1
  selector:
    matchLabels:
      app: es02
  template:
    metadata:
      labels:
        app: es02
    spec:
      serviceAccountName: privileged-sa
      containers:
      - name: es02
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/elasticsearch:7.16.2
        resources:
         limits:
           cpu: 1000m
         requests:
           cpu: 100m
        ports:
        - name: req
          containerPort: 9200
          protocol: TCP
        - name: inter-node
          containerPort: 9300
          protocol: TCP
        volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: es02
        env:
        - name: cluster.name
          value: es-k8s-cluster
        - name: node.name
          value: es02
        - name: discovery.seed_hosts
          value: "es01,es02"
        - name: cluster.initial_master_nodes
          value: "es01,es02"
        - name: ES_JAVA_OPTS
          value: "-Xms512m -Xmx512m"
      initContainers:
      - name: fix-permissions
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
        volumeMounts:
        - mountPath: "/usr/share/elasticsearch/data"
          name: es02
      - name: increase-vm-max-map
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      - name: increase-fd-ulimit
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/busybox
        command: ["sh", "-c", "ulimit -n 65536"]
        securityContext:
          privileged: true
      volumes:
      - name: es02
        persistentVolumeClaim:
          claimName: es02-pvc
      imagePullSecrets:
      - name: <your_image_registry_secret_name>
oc create -f elk.yaml
oc get pods
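When both pods are running, you can check that the two nodes actually formed one cluster. This is a sketch that assumes the es01 deployment name from the yaml above and that curl is available in the Elasticsearch image (it is in the official 7.x images):

```shell
# Query cluster health from inside the es01 pod:
# "number_of_nodes" should be 2, and "status" green (or yellow
# while shards are still allocating)
oc exec deployment/es01 -n <your_namespace_name> -- \
  curl -s http://localhost:9200/_cluster/health?pretty
```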

Step6:
Create persistent volume claim for kibana.

vim kibana_pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kibana-pvc
  namespace: <your_namespace_name>
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
oc apply -f kibana_pvc.yaml

Step7:
Create kibana service to access the kibana dashboard and interface.

vim kibana_svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: <your_namespace_name>
  labels:
    app: kibana
spec:
  ports:
  - port: 5601
    protocol: TCP
  selector:
    app: kibana
oc apply -f kibana_svc.yaml

Step8:
Create deployment file to deploy the application.

vim kibana.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: <your_namespace_name>
  labels:
    app: kibana
    app.kubernetes.io/name: kibana
    app.kubernetes.io/instance: kibana-01
    app.kubernetes.io/version: "7.16.2"
    app.kubernetes.io/component: monitoring
    app.kubernetes.io/part-of: logging
    app.kubernetes.io/managed-by: kubectl
    app.kubernetes.io/created-by: hands
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kibana
  template:
    metadata:
      labels:
        app: kibana
    spec:
      containers:
      - name: kibana
        image: <your_nexus_repository_address:port>/repository/<your_repository_name>/kibana:7.16.2
        resources:
         limits:
          cpu: 1000m
         requests:
          cpu: 200m
        ports:
        - name: req
          containerPort: 5601
          protocol: TCP
        env:
        - name: ELASTICSEARCH_HOSTS
          value: http://es01:9200
        volumeMounts:
        - mountPath: "/usr/share/kibana/data"
          name: kibana
      volumes:
      - name: kibana
        persistentVolumeClaim:
          claimName: kibana-pvc
      imagePullSecrets:
      - name: <your_image_registry_secret_name>
oc apply -f kibana.yaml
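Before exposing Kibana, you can confirm the pod started and connected to Elasticsearch (standard `oc` commands; the namespace placeholder is yours):

```shell
# The pod should reach Running; the log tail should show Kibana
# reporting a successful connection rather than repeated retries
oc get pods -l app=kibana -n <your_namespace_name>
oc logs deployment/kibana -n <your_namespace_name> | tail
```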

Step9:
Create a route to access the Kibana dashboard via the OpenShift interface.
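If you prefer the CLI to the web console, the route can also be created with `oc expose` (a sketch; the hostname is assigned by your cluster's router unless you pass one explicitly):

```shell
# Expose the kibana service on its 5601 port, then read back
# the hostname under which the dashboard is reachable
oc expose service kibana --port=5601 -n <your_namespace_name>
oc get route kibana -n <your_namespace_name>
```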

That’s it. Good luck with your work!