Stateful Deployment

❗️

This is legacy Apache Ignite documentation

The new documentation is hosted here: https://ignite.apache.org/docs/latest/

If Ignite is deployed as a memory-centric database with Ignite persistence enabled, then it needs to be deployed as a stateful solution.

Prerequisites

Make sure that you've done the following:
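The exact checklist depends on your environment. As a minimal sketch, the StatefulSet configuration later in this guide assumes that an ignite namespace and an ignite service account already exist (both names are taken from that configuration, not from this step):

```shell
# Create the namespace and service account that the StatefulSet below refers to.
# These names ("ignite" for both) are assumptions based on the rest of this guide.
kubectl create namespace ignite
kubectl create serviceaccount ignite --namespace=ignite
```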

Kubernetes IP Finder

To enable auto-discovery of Apache Ignite nodes in Kubernetes, you need to enable TcpDiscoveryKubernetesIpFinder in IgniteConfiguration. Let's create an example configuration file called example-kube-persistence.xml and define the IP finder configuration as follows:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:util="http://www.springframework.org/schema/util"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd
                           http://www.springframework.org/schema/util
                           http://www.springframework.org/schema/util/spring-util.xsd">
    <bean class="org.apache.ignite.configuration.IgniteConfiguration">
        <property name="workDirectory" value="/persistence/work"/>
        <!-- Enabling Apache Ignite Persistent Store. -->
        <property name="dataStorageConfiguration">
            <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
                <property name="defaultDataRegionConfiguration">
                    <bean class="org.apache.ignite.configuration.DataRegionConfiguration">
                        <property name="persistenceEnabled" value="true"/>
                    </bean>
                </property>
                <!--
                   Sets a path to the root directory where data and indexes are
                   to be persisted. It's assumed the directory is on a dedicated disk.
                -->
                <property name="storagePath" value="/persistence"/>
                <!--
                    Sets a path to the directory where WAL is stored.
                    It's assumed the directory is on a dedicated disk.
                -->
                <property name="walPath" value="/wal"/>
                <!--
                    Sets a path to the directory where WAL archive is stored.
                    It's assumed the directory is on the same drive with the WAL files.
                -->
                <property name="walArchivePath" value="/wal/archive"/>
            </bean>
        </property>

        <!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
        <property name="discoverySpi">
            <bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
                <property name="ipFinder">
                    <!--
                        Enables the Kubernetes IP finder and sets a custom namespace name.
                    -->
                    <bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
                        <property name="namespace" value="ignite"/>
                    </bean>
                </property>
            </bean>
        </property>
    </bean>
</beans>

This configuration enables Ignite persistence, ensuring that the whole data set is always stored on disk.

📘

Kubernetes IP finder

To learn more about the Kubernetes IP finder and auto-discovery of Apache Ignite nodes in a Kubernetes environment, refer to this documentation page.

Now it's time to prepare a Kubernetes StatefulSet configuration for deploying the Ignite pods.

StatefulSet Deployment

The final step is to deploy Ignite pods in Kubernetes in the form of a StatefulSet.

It's suggested to use separate disk drives for the write-ahead log (WAL) files and the database files for better performance. That's why this section provides instructions for two deployment scenarios: one where the WAL and database files are stored on separate drives, and one where they share the same storage.

🚧

StatefulSet Deployment Time

It might take a while for a Kubernetes environment to allocate the requested persistent volumes and, as a result, to deploy all the pods successfully. While the volumes are being assigned, the deployment status of the pods might show a message like this: "pod has unbound PersistentVolumeClaims (repeated 4 times)".

Separate Disk for WAL

To make sure the WAL is stored on a separate disk drive, we need to request a dedicated storage class from Kubernetes. The storage class will vary depending on your Kubernetes environment. In this document, we provide storage class templates for Amazon AWS, Google Compute Engine, and Microsoft Azure.

Use the following template to request a storage class for your WAL:

#Amazon AWS Configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-wal-storage-class  #StorageClass name
  namespace: ignite #Ignite namespace
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2 #Volume type io1, gp2, sc1, st1. Default: gp2
  zones: us-east-1d
#Google Compute Engine Configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-wal-storage-class #StorageClass Name
  namespace: ignite #Ignite namespace
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd #Volume type pd-standard or pd-ssd. Default: pd-standard
  zones: europe-west1-b	
  replication-type: none
#Microsoft Azure Configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-wal-storage-class  #StorageClass name
  namespace: ignite #Ignite namespace
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed

Request the storage by issuing the following command:

#Request storage class
kubectl create -f ignite-wal-storage-class.yaml

Perform a similar operation requesting a dedicated storage class for the database files:

#Amazon AWS Configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-persistence-storage-class  #StorageClass name
  namespace: ignite         #Ignite namespace
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2 #Volume type io1, gp2, sc1, st1. Default: gp2
  zones: us-east-1d
#Google Compute Engine configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-persistence-storage-class  #StorageClass Name
  namespace: ignite         #Ignite namespace
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd #Type pd-standard or pd-ssd. Default: pd-standard
  zones: europe-west1-b	
  replication-type: none
#Microsoft Azure Configuration
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ignite-persistence-storage-class  #StorageClass name
  namespace: ignite #Ignite namespace
provisioner: kubernetes.io/azure-disk
parameters:
  storageaccounttype: Standard_LRS
  kind: managed

Request the storage for the database files by issuing the following command:

#Request storage class
kubectl create -f ignite-persistence-storage-class.yaml

👍

Storage Class Parameter

Feel free to adjust zone, storage type and other parameters of the storage class configurations depending on your needs.

Ensure that both storage classes were created and are available for use:

kubectl get sc

Next, go ahead and deploy the StatefulSet in Kubernetes:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: ignite
  namespace: ignite
spec:
  selector:
    matchLabels:
      app: ignite
  serviceName: ignite
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite
      containers:
      - name: ignite
        image: apacheignite/ignite:2.6.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        - name: CONFIG_URI
          value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence-and-wal.xml
        - name: IGNITE_QUIET
          value: "false"
        - name: JVM_OPTS
          value: "-Djava.net.preferIPv4Stack=true"
        ports:
        - containerPort: 11211 # JDBC port number.
        - containerPort: 47100 # Communication SPI port number.
        - containerPort: 47500 # Discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
        - containerPort: 8080  # REST port number.
        - containerPort: 10900 # Thin clients port number.
        volumeMounts:
        - mountPath: "/wal"
          name: ignite-wal
        - mountPath: "/persistence"
          name: ignite-persistence
  volumeClaimTemplates:
  - metadata:
      name: ignite-persistence
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ignite-persistence-storage-class"
      resources:
        requests:
          storage: "1Gi"
  - metadata:
      name: ignite-wal
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "ignite-wal-storage-class"
      resources:
        requests:
          storage: "1Gi"
# Create the stateful set
kubectl create -f ignite-stateful-set.yaml

Check that Apache Ignite pods are up and running:

kubectl get pods --namespace=ignite
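While waiting for the pods to start (see the note above about unbound PersistentVolumeClaims), it can also help to check whether the persistent volume claims created from the volumeClaimTemplates have been bound. A minimal sketch; the claim names shown in the comment follow the usual template-name-pod-name convention and are illustrative:

```shell
# List the persistent volume claims created by the StatefulSet. One claim is
# created per volumeClaimTemplate per pod (e.g. ignite-wal-ignite-0 and
# ignite-persistence-ignite-0); the STATUS column should eventually read "Bound".
kubectl get pvc --namespace=ignite
```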

Same Storage for the Database and WAL Files

If, for some reason, you prefer to store both the WAL and database files on the same disk drive, use the configuration template below to get your StatefulSet deployed and started:

apiVersion: apps/v1beta2
kind: StatefulSet
metadata:
  name: ignite
  namespace: ignite
spec:
  selector:
    matchLabels:
      app: ignite
  serviceName: ignite
  replicas: 2
  template:
    metadata:
      labels:
        app: ignite
    spec:
      serviceAccountName: ignite
      containers:
      - name: ignite
        image: apacheignite/ignite:2.6.0
        env:
        - name: OPTION_LIBS
          value: ignite-kubernetes,ignite-rest-http
        - name: CONFIG_URI
          value: https://raw.githubusercontent.com/apache/ignite/master/modules/kubernetes/config/example-kube-persistence.xml
        - name: IGNITE_QUIET
          value: "false"
        - name: JVM_OPTS
          value: "-Djava.net.preferIPv4Stack=true"
        ports:
        - containerPort: 11211 # JDBC port number.
        - containerPort: 47100 # Communication SPI port number.
        - containerPort: 47500 # Discovery SPI port number.
        - containerPort: 49112 # JMX port number.
        - containerPort: 10800 # SQL port number.
        - containerPort: 8080  # REST port number.
        - containerPort: 10900 # Thin clients port number.
        volumeMounts:
        - mountPath: "/data/ignite"
          name: ignite-storage
  volumeClaimTemplates:
  - metadata:
      name: ignite-storage
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
# Create the stateful set
kubectl create -f ignite-stateful-set.yaml

As you can see, the configuration defines a couple of environment variables (OPTION_LIBS and CONFIG_URI) that will be processed by a special shell script used by Ignite's Docker image. The full list of the Docker image's configuration parameters is available on the Docker Deployment page.

Check that Ignite pods are up and running:

kubectl get pods --namespace=ignite

Retrieving Ignite Pod Logs

Use the procedure below to retrieve the logs generated by Apache Ignite pods.

Get a list of the running Ignite pods:

kubectl get pods --namespace=ignite

Pick the name of one of the available pods:

NAME       READY     STATUS    RESTARTS   AGE
ignite-0   1/1       Running   0          7m
ignite-1   1/1       Running   0          4m

and get the logs from it:

kubectl logs ignite-0 --namespace=ignite
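If you'd rather stream the log output continuously than take a one-off snapshot, the standard -f flag of kubectl logs can be used:

```shell
# Follow the log output of the ignite-0 pod; press Ctrl+C to stop streaming.
kubectl logs ignite-0 --namespace=ignite -f
```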

Adjusting Ignite Cluster Size

You can adjust the Apache Ignite cluster size on the fly using the standard Kubernetes API. For instance, if you want to scale out the cluster from 2 to 4 nodes, use the command below:

kubectl scale sts ignite --replicas=4 --namespace=ignite

Double-check that the cluster was scaled out successfully:

kubectl get pods --namespace=ignite

The output should show that you now have 4 Ignite pods up and running:

NAME       READY     STATUS    RESTARTS   AGE
ignite-0   1/1       Running   0          21m
ignite-1   1/1       Running   0          18m
ignite-2   1/1       Running   0          12m
ignite-3   1/1       Running   0          9m

Ignite Cluster Activation

Since we're using Ignite native persistence for this deployment, we need to activate the Ignite cluster after it starts. To do that, connect to one of the pods:

kubectl exec -it ignite-0 --namespace=ignite -- /bin/bash

Go to the following folder:

cd /opt/ignite/apache-ignite-fabric/bin/

And activate the cluster by running the following command:

./control.sh --activate
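To confirm that activation succeeded, the same control script can report the current cluster state. This is a sketch assuming the --state flag is available in the control.sh version shipped with your Ignite release:

```shell
# Print the current cluster state; after activation it should report the cluster as active.
./control.sh --state
```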