This tutorial shows you how to run a cloud native Cassandra deployment on Kubernetes. In this example, a custom Cassandra SeedProvider lets Cassandra discover new Cassandra nodes as they join the cluster.
StatefulSets make it easier to deploy stateful applications within a clustered environment. For more information on the features used in this tutorial, see the StatefulSet documentation.
Cassandra on Docker
The Pods in this tutorial use the gcr.io/google-samples/cassandra:v13 image from Google's container registry. The Docker image is based on debian-base and includes OpenJDK 8, with a standard Cassandra installation from the Apache Debian repo. By setting environment variables you can change values that are inserted into cassandra.yaml:
Environment variable | Default value
---|---
`CASSANDRA_CLUSTER_NAME` | `'Test Cluster'`
`CASSANDRA_NUM_TOKENS` | `32`
`CASSANDRA_RPC_ADDRESS` | `0.0.0.0`
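For example, to override the cluster name that the image writes into cassandra.yaml, set the corresponding variable on the Cassandra container. The excerpt below is only an illustrative sketch of the container env stanza; the value K8Demo is an example, not a value required by this tutorial.

# Illustrative excerpt of a container spec: overrides two of the defaults listed above.
containers:
- name: cassandra
  image: gcr.io/google-samples/cassandra:v13
  env:
  - name: CASSANDRA_CLUSTER_NAME
    value: "K8Demo"      # example value; the image default is 'Test Cluster'
  - name: CASSANDRA_NUM_TOKENS
    value: "32"          # same as the image default, shown for completeness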
To complete this tutorial, you should already have a basic familiarity with Pods, Services, and StatefulSets. In addition, you should:
- Install and configure the kubectl command-line tool
- Download cassandra-service.yaml and cassandra-statefulset.yaml
- Have a supported Kubernetes cluster running
Note: Please read the getting started guides if you do not already have a cluster.
Caution: Minikube defaults to 1024MB of memory and 1 CPU. Running Minikube with the default resource configuration results in insufficient resource errors during this tutorial. To avoid these errors, start Minikube with the following settings:
minikube start --memory 5120 --cpus=4
A Kubernetes Service describes a set of Pods that perform the same task.
The following Service
is used for DNS lookups between Cassandra Pods and clients within the Kubernetes cluster.
application/cassandra/cassandra-service.yaml
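The Service is headless (clusterIP: None): instead of a single virtual IP, DNS returns the addresses of the individual Cassandra Pods. A minimal sketch consistent with the port and selector used in this tutorial is shown below; treat the downloaded cassandra-service.yaml as the authoritative version.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None   # headless: DNS lookups resolve to the individual Pod IPs
  ports:
  - port: 9042      # Cassandra CQL native transport port
  selector:
    app: cassandra  # matches the Pods created by the StatefulSet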
Create a Service to track all Cassandra StatefulSet nodes from the cassandra-service.yaml
file:
kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-service.yaml
Get the Cassandra Service.
kubectl get svc cassandra
The response is:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra ClusterIP None <none> 9042/TCP 45s
If anything else is returned, the Service was not created successfully. Read Debug Services for common issues.
The StatefulSet manifest, included below, creates a Cassandra ring that consists of three Pods.
Note: This example uses the default provisioner for Minikube. Please update the following StatefulSet for the cloud you are working with.
application/cassandra/cassandra-statefulset.yaml
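The full manifest is in the downloaded cassandra-statefulset.yaml; the abbreviated sketch below shows the parts that matter for this tutorial. The seed address, grace period, port names, and storage size are illustrative assumptions, and the downloaded file additionally configures storage for the Minikube provisioner (see the note above).

# Abbreviated sketch of the Cassandra StatefulSet; the downloaded manifest is authoritative.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
  labels:
    app: cassandra
spec:
  serviceName: cassandra   # ties the StatefulSet to the headless Service above
  replicas: 3              # the three-Pod ring created in this tutorial
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      terminationGracePeriodSeconds: 1800   # illustrative; the cleanup step reads the real value
      containers:
      - name: cassandra
        image: gcr.io/google-samples/cassandra:v13
        ports:
        - containerPort: 7000   # intra-node communication
          name: intra-node
        - containerPort: 7001   # TLS intra-node communication
          name: tls-intra-node
        - containerPort: 7199   # JMX
          name: jmx
        - containerPort: 9042   # CQL, exposed by the Service
          name: cql
        env:
        - name: CASSANDRA_SEEDS              # consumed by the custom SeedProvider (assumed value)
          value: "cassandra-0.cassandra.default.svc.cluster.local"
        - name: CASSANDRA_CLUSTER_NAME
          value: "K8Demo"
        - name: CASSANDRA_DC
          value: "DC1-K8Demo"                # matches the datacenter shown by nodetool below
        - name: CASSANDRA_RACK
          value: "Rack1-K8Demo"              # matches the rack shown by nodetool below
        volumeMounts:
        - name: cassandra-data
          mountPath: /cassandra_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data
      labels:
        app: cassandra        # lets the cleanup step delete the PVCs by label
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi        # illustrative size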
Create the Cassandra StatefulSet from the cassandra-statefulset.yaml
file:
kubectl create -f https://k8s.io/examples/application/cassandra/cassandra-statefulset.yaml
Get the Cassandra StatefulSet:
kubectl get statefulset cassandra
The response should be:
NAME DESIRED CURRENT AGE
cassandra 3 0 13s
The StatefulSet
resource deploys Pods sequentially.
Get the Pods to see the ordered creation status:
kubectl get pods -l="app=cassandra"
The response should be:
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 1m
cassandra-1 0/1 ContainerCreating 0 8s
It can take several minutes for all three Pods to deploy. Once they are deployed, the same command returns:
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 10m
cassandra-1 1/1 Running 0 9m
cassandra-2 1/1 Running 0 8m
Run the Cassandra nodetool to display the status of the ring.
kubectl exec -it cassandra-0 -- nodetool status
The response should look something like this:
Datacenter: DC1-K8Demo
======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns (effective) Host ID Rack
UN 172.17.0.5 83.57 KiB 32 74.0% e2dd09e6-d9d3-477e-96c5-45094c08db0f Rack1-K8Demo
UN 172.17.0.4 101.04 KiB 32 58.8% f89d6835-3a42-4419-92b3-0e62cae1479c Rack1-K8Demo
UN 172.17.0.6 84.74 KiB 32 67.1% a6a1e8c2-3dc5-4417-b1a0-26507af2aaad Rack1-K8Demo
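Because the cassandra Service is headless, DNS lookups for it resolve to these Pod addresses rather than to a single cluster IP. If you want to see this for yourself, one way (assuming the default namespace and that your cluster can pull a busybox image) is:

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup cassandra.default.svc.cluster.local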
Use kubectl edit
to modify the size of a Cassandra StatefulSet.
Run the following command:
kubectl edit statefulset cassandra
This command opens an editor in your terminal. The line you need to change is the replicas
field. The following sample is an excerpt of the StatefulSet
file:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: StatefulSet
metadata:
  creationTimestamp: 2016-08-13T18:40:58Z
  generation: 1
  labels:
    app: cassandra
  name: cassandra
  namespace: default
  resourceVersion: "323"
  selfLink: /apis/apps/v1/namespaces/default/statefulsets/cassandra
  uid: 7a219483-6185-11e6-a910-42010a8a0fc0
spec:
  replicas: 3
Change the number of replicas to 4, and then save the manifest.
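If you prefer not to open an editor, kubectl scale makes the same change non-interactively; this is an alternative to the editor-based step above:

kubectl scale statefulset cassandra --replicas=4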
The StatefulSet
now contains 4 Pods.
Get the Cassandra StatefulSet to verify:
kubectl get statefulset cassandra
The response should be:
NAME DESIRED CURRENT AGE
cassandra 4 4 36m
Deleting or scaling down a StatefulSet does not delete the volumes associated with the StatefulSet. This behavior is for your safety, because your data is more valuable than automatically purging all related StatefulSet resources.
Warning: Depending on the storage class and reclaim policy, deleting the PersistentVolumeClaims may cause the associated volumes to also be deleted. Never assume you’ll be able to access data if its volume claims are deleted.
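Before deleting anything, you can inspect the claims and the reclaim policy of their bound volumes; the label selector below matches the one used in the cleanup command that follows:

kubectl get pvc -l app=cassandra
kubectl get pv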
Run the following commands (chained together into a single command) to delete everything in the Cassandra StatefulSet:
grace=$(kubectl get po cassandra-0 -o=jsonpath='{.spec.terminationGracePeriodSeconds}') \
&& kubectl delete statefulset -l app=cassandra \
&& echo "Sleeping $grace" \
&& sleep $grace \
&& kubectl delete pvc -l app=cassandra
Run the following command to delete the Cassandra Service.
kubectl delete service -l app=cassandra