Scheduling with NodeSelector in Kubernetes

Pods can be scheduled onto specific worker nodes with the nodeSelector field in the pod spec.

  • List all Kubernetes nodes and their labels (a quick label-selector filter is sketched after the output)
btech@zu-master:~$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
zu-master    Ready    master   12d   v1.13.3
zu-worker1   Ready    <none>   12d   v1.13.3
zu-worker2   Ready    <none>   12d   v1.13.3
zu-worker3   Ready    <none>   12d   v1.13.3
btech@zu-master:~$ kubectl get nodes --show-labels
NAME         STATUS   ROLES    AGE   VERSION   LABELS
zu-master    Ready    master   12d   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=zu-master,node-role.kubernetes.io/master=
zu-worker1   Ready    <none>   12d   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=zu-worker1
zu-worker2   Ready    <none>   12d   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=zu-worker2
zu-worker3   Ready    <none>   12d   v1.13.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=zu-worker3
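
A quick way to answer "which nodes carry a given label?" without scanning the whole table is a label selector (-l). A minimal sketch, using only labels that already exist on this cluster:

# Show just the node whose hostname label is zu-worker2
kubectl get nodes -l kubernetes.io/hostname=zu-worker2

# Show only the nodes that carry the master role label
kubectl get nodes -l node-role.kubernetes.io/master
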
  • Use the kubernetes.io/hostname label to schedule a specific pod onto a specific node:
apiVersion: v1
kind: Pod
metadata:
  name: apache2
spec:
  nodeSelector:
    kubernetes.io/hostname: zu-worker2
  containers:
  - name: apache2
    image: httpd:latest
    ports:
    - containerPort: 80
      protocol: TCP
  • Deploy and verify. The NODE column confirms the pod was placed on zu-worker2 as requested; scheduling happens before (and independently of) the image pull, so even a pod stuck in ErrImagePull, as in the capture below, has already been assigned to its node. What happens when no node matches the selector is sketched after the output.
btech@zu-master:~/node_selector$ kubectl create -f pod.yaml 
pod/apache2 created
btech@zu-master:~/node_selector$ kubectl get pod -o wide
NAME      READY   STATUS         RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
apache2   0/1     ErrImagePull   0          12s    10.244.2.55   zu-worker2   <none>           <none>
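
One caveat: nodeSelector is a hard requirement, not a preference. If no node carried the requested label (say the hostname were mistyped as zu-worker5, a node that does not exist here), the pod would never be scheduled and would sit in Pending, and the scheduler would record a FailedScheduling event. A minimal way to check, assuming the pod name apache2 from above:

# The Events section at the bottom of the output reports a FailedScheduling
# warning when no node satisfies the pod's nodeSelector (wording varies by version)
kubectl describe pod apache2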

Using a Custom Label

If you want to group several nodes under a single label, attach a custom label to them. For example (a note on changing or removing labels follows the capture):

btech@zu-master:~/node_selector$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
zu-master    Ready    master   12d   v1.13.3
zu-worker1   Ready    <none>   12d   v1.13.3
zu-worker2   Ready    <none>   12d   v1.13.3
zu-worker3   Ready    <none>   12d   v1.13.3
btech@zu-master:~/node_selector$ kubectl label nodes zu-worker1 disk=ssd
node/zu-worker1 labeled
btech@zu-master:~/node_selector$ kubectl label nodes zu-worker3 disk=ssd
node/zu-worker3 labeled
btech@zu-master:~/node_selector$ kubectl label nodes zu-worker2 disk=harddisk
node/zu-worker2 labeled
btech@zu-master:~/node_selector$ kubectl get nodes -L disk
NAME         STATUS   ROLES    AGE   VERSION   DISK
zu-master    Ready    master   12d   v1.13.3   
zu-worker1   Ready    <none>   12d   v1.13.3   ssd
zu-worker2   Ready    <none>   12d   v1.13.3   harddisk
zu-worker3   Ready    <none>   12d   v1.13.3   ssd
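
Node labels are not permanent; they can be changed or removed later with the same kubectl label command. A short sketch using the disk key from this example (running these would, of course, change where the Deployment below can be placed):

# Change an existing value; --overwrite is required when the key is already set
kubectl label nodes zu-worker2 disk=ssd --overwrite

# Remove the label entirely by appending a dash to the key
kubectl label nodes zu-worker2 disk-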

Now suppose you want to deploy an application only onto the nodes labeled disk=ssd.

  • Create a Deployment for this example (with 10 replicas):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        disk: ssd
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
  • Verify: all of the nginx-deployment pods land on zu-worker1 or zu-worker3, because only those nodes carry the label disk=ssd (a compact placement check is sketched after the output).
btech@zu-master:~/node_selector$ nano deployment.yaml
btech@zu-master:~/node_selector$ kubectl create -f deployment.yaml 
deployment.apps/nginx-deployment created
btech@zu-master:~/node_selector$ kubectl get pod -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
kubia1                             1/1     Running   0          160m   10.244.2.53   zu-worker2   <none>           <none>
kubia2                             1/1     Running   0          160m   10.244.2.54   zu-worker2   <none>           <none>
kubia3                             1/1     Running   0          160m   10.244.3.52   zu-worker3   <none>           <none>
kubia4                             1/1     Running   0          160m   10.244.1.52   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-bzncf   1/1     Running   0          7s     10.244.1.56   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-g87cg   1/1     Running   0          6s     10.244.3.58   zu-worker3   <none>           <none>
nginx-deployment-b9d8b6fdc-j9pj6   1/1     Running   0          6s     10.244.1.57   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-kj446   1/1     Running   0          7s     10.244.3.55   zu-worker3   <none>           <none>
nginx-deployment-b9d8b6fdc-l2l6g   1/1     Running   0          6s     10.244.3.56   zu-worker3   <none>           <none>
nginx-deployment-b9d8b6fdc-mjhgl   1/1     Running   0          7s     10.244.3.54   zu-worker3   <none>           <none>
nginx-deployment-b9d8b6fdc-p49wg   1/1     Running   0          7s     10.244.1.59   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-pg2vc   1/1     Running   0          6s     10.244.1.58   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-vsqcz   1/1     Running   0          6s     10.244.1.55   zu-worker1   <none>           <none>
nginx-deployment-b9d8b6fdc-wc86c   1/1     Running   0          6s     10.244.3.57   zu-worker3   <none>           <none>
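
As a compact placement check, you can print just the pod-to-node mapping instead of the full wide output. A minimal sketch, assuming the pods still carry the app=nginx label from the Deployment template:

# Print only the pod name and the node it was scheduled onto
kubectl get pods -l app=nginx -o custom-columns=POD:.metadata.name,NODE:.spec.nodeName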
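
To clean up after the exercise, delete the objects and remove the custom labels (a sketch, assuming the names used above):

# Remove the pod and the Deployment created in this walkthrough
kubectl delete pod apache2
kubectl delete deployment nginx-deployment

# Drop the custom disk label from all three workers
kubectl label nodes zu-worker1 zu-worker2 zu-worker3 disk-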