
Friday, December 29, 2023

Hands-on – MetalLB Load Balancer: External Traffic into Kubernetes

To expose an application running inside the Kubernetes cluster, a traffic routing mechanism is required. This mechanism is generally known as a service proxy. In this hands-on tutorial, we will use the MetalLB load balancer, which is widely used in bare-metal Kubernetes environments and supports both L2 and BGP modes.

A pod in Kubernetes is ephemeral in nature, so each time a pod restarts on the same or a different node, Kubernetes assigns it a new IP. Although a NodePort can be reached from outside the Kubernetes cluster, the application connection string would need to change if the pod starts on a different cluster node. To solve this problem a service proxy is required, and this service proxy automatically reroutes external traffic to the appropriate pod.

There are three supported ways of installing MetalLB: using plain Kubernetes manifests, using Kustomize, or using Helm. In this tutorial, we will use the Kubernetes manifests method in our bare metal Kubernetes cluster.

Step#1: Installing MetalLB:

Before installing MetalLB, please review the official documentation for any further requirements. Note that we’ll need to perform all steps on the control plane as the root user.

Apply the MetalLB manifest:
# kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.12/config/manifests/metallb-native.yaml

If a firewall is enabled, open TCP and UDP port 7946 (used by MetalLB's memberlist):
# ufw allow 7946/tcp
# ufw allow 7946/udp
# ufw reload


Verify that MetalLB is up and running:
# kubectl get pods -n metallb-system

MetalLB pods are up and running

Step#2: Create custom resources for MetalLB:

We need to create an IP address pool for the LoadBalancer Services. Please note that multiple IPAddressPool instances can co-exist, and addresses can be defined in CIDR notation or as ranges, for both IPv4 and IPv6.

Create a YAML file named “metallb.yaml” with the following contents. This will create two MetalLB custom resources. You will need to change the IP range to suit your network.

 
# Create IP Address pool
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: nat
  namespace: metallb-system
spec:
  addresses:
    - 192.168.0.70-192.168.0.75
  autoAssign: true

---

# Define as L2 mode
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: empty
  namespace: metallb-system
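
Apply the manifest and verify that both custom resources exist in the metallb-system namespace (the resource names below follow the MetalLB CRD plural forms):
# kubectl apply -f metallb.yaml
# kubectl get ipaddresspools.metallb.io -n metallb-system
# kubectl get l2advertisements.metallb.io -n metallb-system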
 

Step#3: Creating LoadBalancer Type Service:

In our NFS deployment tutorial, we created NodePort Services for external traffic. We can delete those NodePort Services and then create new LoadBalancer-type Services for our pods. Please note that manually assigning an IP is not recommended; it is best to let Kubernetes (MetalLB) assign one automatically to eliminate any possibility of IP conflicts.
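
For example, assuming the Service names from that tutorial (srvsql01-svc and srvsql03-svc), the old NodePort Services could be removed like this:
# kubectl delete svc srvsql01-svc srvsql03-svc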

Make sure that the app selector in each Service definition matches the pod labels.

 
# first Load Balancer Example
apiVersion: v1
kind: Service
metadata:
  name: srvsql01-svc
spec:
  type: LoadBalancer
  selector:
    app: srvsql01
  ports:
    - name: srvsql01
      port: 1433
      targetPort: 1433
      protocol: TCP

---
# Second Load Balancer example
apiVersion: v1
kind: Service
metadata:
  name: srvsql02-svc
spec:
  type: LoadBalancer
  selector:
    app: srvsql02
  ports:
    - name: srvsql02
      port: 2433
      targetPort: 2433
      protocol: TCP
 
---
# Third Load Balancer example
apiVersion: v1
kind: Service
metadata:
  name: srvsql03-svc
spec:
  type: LoadBalancer
  selector:
    app: srvsql03
  ports:
    - name: srvsql03
      port: 3433
      targetPort: 3433
      protocol: TCP
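
Apply the Service definitions and verify that MetalLB assigned an external IP to each Service from the configured address pool:
# kubectl get svc -o wide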
 

MetalLB: Load balancer services
Example#1: A simple deployment with LoadBalancer

Following is a complete example of a simple deployment of a SQL Server pod using a MetalLB LoadBalancer Service:


# Simple deployment of SQL Server 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srvsql02
spec:
  replicas: 1
  strategy:
    type: Recreate  
  selector:
    matchLabels:
      app: srvsql02
  template:
    metadata:
      labels:
        app: srvsql02
    spec:
      terminationGracePeriodSeconds: 0
      hostname: srvsql02
      securityContext:
        fsGroup: 10001
      containers:
      - name: srvsql02
        image: mcr.microsoft.com/mssql/server:2019-latest
        ports:
        - containerPort: 2433
        env:
        - name: MSSQL_SA_PASSWORD
          value: "YourPassowrdHere"
        - name: MSSQL_PID
          value: "XXXXX-KKKKK-NNNNN-KKKKK-YYYYY"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_TCP_PORT
          value: "2433"
        - name: MSSQL_AGENT_ENABLED
          value: "true"  
        resources:
          requests:
            memory: 4Gi
            cpu: '2'
          limits:
            memory: 4Gi
        volumeMounts:
        - name: srvsql02-vol
          mountPath: /var/opt/mssql
          subPath: srvsql02
      volumes:
      - name: srvsql02-vol
        persistentVolumeClaim:
          claimName: nfs-srvsql02-pvc

---
# Load balance service
apiVersion: v1
kind: Service
metadata:
  name: srvsql02-svc
spec:
  type: LoadBalancer
  selector:
    app: srvsql02
  ports:
    - name: srvsql02
      port: 2433
      targetPort: 2433
      protocol: TCP
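
Once MetalLB assigns an external IP to the Service, any machine on the network can connect to SQL Server on that IP and port 2433, for example with the sqlcmd utility (the IP shown here is illustrative, taken from the address pool defined earlier):
# sqlcmd -S 192.168.0.70,2433 -U sa -P 'YourPasswordHere'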

Example#2: A StatefulSet deployment with LoadBalancer

Following is a complete example of a StatefulSet deployment of a SQL Server pod using a MetalLB LoadBalancer Service:

 
# StateFulSet deployment of SQL Server
apiVersion: v1
kind: Service
metadata:
  name: srvsql03-svc
spec:
  type: LoadBalancer
  selector:
    app: srvsql03
  ports:
    - name: srvsql03
      port: 3433
      targetPort: 3433
      protocol: TCP
---
# Create the stateful replica
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: srvsql03
spec:
  replicas: 1
  selector:
    matchLabels:
      app: srvsql03
  serviceName: "srvsql03-svc"    
  template:
    metadata:
      labels:
        app: srvsql03
    spec:
      terminationGracePeriodSeconds: 10
      hostname: srvsql03
      securityContext:
        fsGroup: 10001
      containers:
      - name: srvsql03
        image: mcr.microsoft.com/mssql/server:2022-latest
        ports:
        - containerPort: 3433
        env:
        - name: MSSQL_SA_PASSWORD
          value: "YourPasswordHere"
        - name: MSSQL_PID
          value: "QQQQQ-PPPPP-DDDDD-GGGGG-XXXXX"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_TCP_PORT
          value: "3433"
        - name: MSSQL_AGENT_ENABLED
          value: "true"  
        resources:
          requests:
            memory: 4Gi
            cpu: '2'
          limits:
            memory: 4Gi
        volumeMounts:
        - name: nfs-srvsql03-pvc
          mountPath: /var/opt/mssql
          subPath: srvsql03
  # Dynamic volume claim goes here
  volumeClaimTemplates:
  - metadata:
      name: nfs-srvsql03-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-data"
      resources:
        requests:
          storage: 6Gi
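
Save the manifest (for example as srvsql03.yaml, a file name chosen here only for illustration), apply it, and then verify the StatefulSet, the Service and the dynamically created PVC:
# kubectl apply -f srvsql03.yaml
# kubectl get statefulset srvsql03
# kubectl get svc srvsql03-svc
# kubectl get pvc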


Screenshot #1: Using SSMS to connect to SQL Server through the MetalLB Load Balancer:

Using SSMS: external traffic to Kubernetes using the load balancer
 
References:
Service Proxy:
https://landscape.cncf.io/card-mode?category=service-proxy

MetalLB:
https://metallb.universe.tf/installation/

Sunday, December 24, 2023

Hands-on - Dynamic volume provisioning in Kubernetes: using NFS CSI for stateful application

A stateful application such as a database requires persistent storage to preserve all changes to the storage layer. It also requires that the Kubernetes pod be able to bind to the same volume when it gets rescheduled on the same or on a different node. In Kubernetes, a persistent volume can be created manually or dynamically. Dynamic volume provisioning allows storage volumes to be created on demand and automatically. This mechanism reduces administrative overhead since the storage volumes are managed automatically by a set of predefined rules.

Dynamic Volume Provisioning using NFS CSI - OpenLens dashboard

Using the NFS CSI driver for Kubernetes: A properly configured NFS volume in Kubernetes can satisfy persistent volume requirements and can support a container's moderate read/write workload. A popular NFS CSI driver is “csi-driver-nfs”, which supports dynamic persistent volume creation, volume snapshots, volume cloning and volume expansion (references are at the bottom).

Installing NFS CSI plugins in Kubernetes: Before installing “csi-driver-nfs” in Kubernetes, review the GitHub documentation at https://github.com/kubernetes-csi/csi-driver-nfs.

In this tutorial, we will be using Helm Package Manager to install the NFS CSI driver.

SQL Server as stateful application in Kubernetes

Step#1: Install Helm package manager:

Consult the official Helm Package Manager install process at https://helm.sh/docs/intro/install/

Login or SSH to the control plane node and then execute the following commands as root:

Switch to root:
# sudo -i

Install Helm Package Manager:
# curl https://baltocdn.com/helm/signing.asc | gpg --dearmor | sudo tee /usr/share/keyrings/helm.gpg > /dev/null

# apt-get install apt-transport-https --yes

# echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/helm.gpg] https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list

# apt-get update

# apt-get install helm
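
Verify the Helm installation:
# helm version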

Step#2: Using Helm to install NFS CSI:

# helm repo add csi-driver-nfs https://raw.githubusercontent.com/kubernetes-csi/csi-driver-nfs/master/charts

Search for the latest chart version:
# helm search repo -l csi-driver-nfs

Install the latest version of the NFS CSI driver, or pin the specific version you need:
# helm install csi-driver-nfs csi-driver-nfs/csi-driver-nfs --namespace kube-system --version v4.5.0

Once installed, verify that NFS CSI is running on all nodes:
# kubectl get pod -n kube-system -o wide | grep nfs
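
You can also confirm that the NFS CSI driver object is registered with the cluster; the driver name should match the provisioner (nfs.csi.k8s.io) used in the StorageClass below:
# kubectl get csidrivers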

Step#3: Creating a StorageClass for Dynamic Volume Provisioning:

Once we have installed NFS CSI driver in Kubernetes, the next step is to create:

  1. A Storage Class (SC)
  2. A Persistent Volume Claim (PVC)
  3. A pod which will claim the PVC

To create a storage class, we need to have an NFS share somewhere on the network. If you don’t have one, you’ll need to install and configure an NFS share and then map the NFS client (control plane) root user to the NFS server root user. In this example, we are using a QNAP NFS v4.1 share. The required permissions have been granted to access the share as root from the Kubernetes control plane.

NFS Server IP: 192.168.0.25
NFS Share: kubedata

Create a storage class:

Following is the YAML for the StorageClass object. Save it as "nfs_sc.yaml".

 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-data
provisioner: nfs.csi.k8s.io
parameters:
  server: 192.168.0.25
  share: /kubedata
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
mountOptions:
  - hard
  - nfsvers=4.1
 

Now apply the yaml:
# kubectl apply -f nfs_sc.yaml
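
Verify that the StorageClass was created:
# kubectl get storageclass nfs-data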

Example #1: A Pod using Dynamic Volume Provisioning:

When we use the dynamic volume provisioning method, we don’t need to manually create the Persistent Volume (PV) in advance. Instead, when the Persistent Volume Claim (PVC) is created, the required PV is automatically created and bound to the PVC. When a pod is created referring to the PVC name, the required storage is attached to the pod.

When creating a stateful pod, the StorageClass name must be provided in the storageClassName field of the PVC specification. If we make this storage class the default StorageClass in the Kubernetes cluster, then the storageClassName is not required; a PV will automatically be created from the default StorageClass as per the PVC, and the required storage will be attached to the container.
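
If you want to make nfs-data the default StorageClass, as mentioned above, one way is to set the standard Kubernetes default-class annotation:
# kubectl patch storageclass nfs-data -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'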

Create a Persistent Volume Claim (PVC) yaml. Save the file as "nfs_pvc.yaml":

 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-srvsql01-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: nfs-data
 

Apply the yaml to create the PVC (nfs-srvsql01-pvc):
# kubectl apply -f nfs_pvc.yaml
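
Verify that the PVC is bound and that a PV was provisioned automatically by the NFS CSI driver:
# kubectl get pvc nfs-srvsql01-pvc
# kubectl get pv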

Creating a deployment and claiming the PVC:
In this step, we will create a SQL Server container:
  1. Create a SQL Server deployment yaml file (sql1.yaml)
  2. Create a NodePort Service to connect to the SQL Server instance from the network

The "sql1.yaml" file contains the following definition:

 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: srvsql01
spec:
  replicas: 1
  strategy:
    type: Recreate  
  selector:
    matchLabels:
      app: srvsql01
  template:
    metadata:
      labels:
        app: srvsql01
    spec:
      terminationGracePeriodSeconds: 0
      hostname: srvsql01
      securityContext:
        fsGroup: 10001
      containers:
      - name: srvsql01
        image: mcr.microsoft.com/mssql/server:2022-latest
        ports:
        - containerPort: 1433
        env:
        - name: MSSQL_SA_PASSWORD
          value: "FantasticPassword"
        - name: MSSQL_PID
          value: "QQQQQ-PPPPPP-DDDDD-WWWWW-RRRRR"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_TCP_PORT
          value: "1433"
        - name: MSSQL_AGENT_ENABLED
          value: "true"  
        resources:
          requests:
            memory: 4Gi
            cpu: '2'
          limits:
            memory: 4Gi
        volumeMounts:
        - name: srvsql01-vol
          mountPath: /var/opt/mssql
          subPath: srvsql01
      volumes:
      - name: srvsql01-vol
        persistentVolumeClaim:
          claimName: nfs-srvsql01-pvc

---
apiVersion: v1
kind: Service
metadata:
  name: srvsql01-svc
spec:
  type: NodePort
  selector:
    app: srvsql01
  ports:
    - name: srvsql01
      port: 1433
      nodePort: 31433
      targetPort: 1433
      protocol: TCP
 

Create the deployment:
# kubectl apply -f sql1.yaml
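
Verify that the deployment and pod are running, then connect to the SQL Server instance through the NodePort (replace <node-ip> with the IP of any cluster node; the sa password is the one set in the deployment above):
# kubectl get deployment srvsql01
# kubectl get pods -l app=srvsql01 -o wide
# sqlcmd -S <node-ip>,31433 -U sa -P 'FantasticPassword'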

Example #2: Creating a StatefulSet replica using Dynamic Volume Provisioning.

A stateful deployment of a container is slightly different from a simple deployment. The basic steps are:

  1. Create a service definition
  2. Create a StatefulSet definition with volumeClaimTemplates

In the stateful definition, the critical part is the volumeClaimTemplates section. This is the section where we define the PersistentVolumeClaim. When a StatefulSet needs to create a pod replica, it uses the volumeClaimTemplates definition to create a PVC, and then a PV will automatically be created with the required volume for the pod.

Following is the StatefulSet definition. Save it as "sql2.yaml" and then apply it using kubectl.


# First define the service
apiVersion: v1
kind: Service
metadata:
  name: srvsql03-svc
spec:
  type: NodePort
  selector:
    app: srvsql03
  ports:
    - name: srvsql03
      port: 3433
      nodePort: 31033
      targetPort: 3433
      protocol: TCP
---
# Create the stateful replica
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: srvsql03
spec:
  replicas: 1
  selector:
    matchLabels:
      app: srvsql03
  serviceName: "srvsql03-svc"    
  template:
    metadata:
      labels:
        app: srvsql03
    spec:
      terminationGracePeriodSeconds: 10
      hostname: srvsql03
      securityContext:
        fsGroup: 10001
      containers:
      - name: srvsql03
        image: mcr.microsoft.com/mssql/server:2022-latest
        ports:
        - containerPort: 3433
        env:
        - name: MSSQL_SA_PASSWORD
          value: "YourPasswordHere"
        - name: MSSQL_PID
          value: "ABCDE-XYZXY-ZZZZZ-GGGGG-XZZZZ"
        - name: ACCEPT_EULA
          value: "Y"
        - name: MSSQL_TCP_PORT
          value: "3433"
        - name: MSSQL_AGENT_ENABLED
          value: "true"  
        resources:
          requests:
            memory: 4Gi
            cpu: '2'
          limits:
            memory: 4Gi
        volumeMounts:
        - name: nfs-srvsql03-pvc
          mountPath: /var/opt/mssql
          subPath: srvsql03
  # Dynamic volume claim goes here
  volumeClaimTemplates:
  - metadata:
      name: nfs-srvsql03-pvc
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "nfs-data"
      resources:
        requests:
          storage: 6Gi


Create the stateful replica:
# kubectl apply -f sql2.yaml
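
Verify the StatefulSet and its dynamically provisioned PVC. PVCs created from volumeClaimTemplates follow the <template-name>-<pod-name> pattern, so the claim for the first replica should appear as nfs-srvsql03-pvc-srvsql03-0:
# kubectl get statefulset srvsql03
# kubectl get pvc
# kubectl get pv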

Using NFS CSI in Kubernetes cluster

References:

Dynamic Volume Provisioning:
https://kubernetes.io/docs/concepts/storage/dynamic-provisioning/

StatefulSets:
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/

NFS CSI driver for Kubernetes:
https://github.com/kubernetes-csi/csi-driver-nfs

Kubernetes Container Storage Interface (CSI) Documentation:
https://kubernetes-csi.github.io/docs/drivers.html

Helm Package Manager:
https://helm.sh/docs/intro/install/