
Setting Up K8ssandra

ragsns edited this page Oct 10, 2021 · 7 revisions

1. Setting Up K8ssandra

First things first: Helm is a high-powered package manager for Kubernetes. To use the packages for today's workshop, we first need to add the correct repositories for Helm to pull from.

We will pull charts from https://helm.k8ssandra.io/, where you can also find all the documentation.

✅ Step 1: Add the repository to Helm

helm repo add k8ssandra https://helm.k8ssandra.io/stable
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm repo add k8ssandra https://helm.k8ssandra.io/stable
"k8ssandra" has been added to your repositories      

✅ Step 2: Update the Helm repositories

helm repo update
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm repo update
Hang tight while we grab the latest from your chart repositories...                                                                                              
...Successfully got an update from the "k8ssandra" chart repository                                                                                              
Update Complete. ⎈Happy Helming!

✅ Step 3: Add the Traefik repository (leveraging Ingress)

In Kubernetes, external access to services is most often handled by an Ingress controller. For today's lab, the K8ssandra side of things will use Traefik. Let's install that now.

helm repo add traefik https://helm.traefik.io/traefik
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm repo add traefik https://helm.traefik.io/traefik
"traefik" has been added to your repositories  

✅ Step 4: Update the Helm repositories as before

helm repo update
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm repo update
Hang tight while we grab the latest from your chart repositories...                                                                                              
...Successfully got an update from the "k8ssandra" chart repository                                                                                              
...Successfully got an update from the "traefik" chart repository                                                                                                
Update Complete. ⎈Happy Helming!⎈                                                                                                                                

✅ Step 5: Install Traefik with the following configuration

✅ Step 5a: For Local/Civo installs:

We install a slightly older version of Traefik. This is a workaround, since the latest version doesn't enable the K8ssandra ingress properly.

helm install traefik traefik/traefik -f traefik-local-civo.yaml --version 9.20.1

For local and Civo installs, set up port forwarding as below.

kubectl port-forward $(kubectl get pods --selector "app.kubernetes.io/name=traefik" --output=name) 9000:9000 &
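The trailing `&` runs the port-forward in the background so you get your prompt back. Stopping it later uses ordinary shell job control. The sketch below substitutes a stand-in `sleep` for the kubectl command so it runs anywhere; the pattern is the same:

```shell
# Stand-in for the long-running "kubectl port-forward ... &" command above:
sleep 300 &
PF_PID=$!                            # $! holds the PID of the last backgrounded job
kill "$PF_PID"                       # stop the forward when you are done with it
wait "$PF_PID" 2>/dev/null || true   # reap the job; a non-zero exit is expected after kill
echo "stopped background job $PF_PID"
```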

OR

✅ Step 5b: For DataStax-provided VMs:

helm install traefik traefik/traefik -f traefik.yaml --version 9.20.1
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm install traefik traefik/traefik -f traefik.yaml
NAME: traefik                                                                                                                                                    
LAST DEPLOYED: Tue Nov 17 15:00:53 2020                                                                                                                          
NAMESPACE: default                                                                                                                                               
STATUS: deployed                                                                                                                                                 
REVISION: 1                                                                                                                                                      
TEST SUITE: None    

✅ Step 6: Check Installation

Check installation with the following command.

watch kubectl get pods

To exit watch use Ctrl + C.

When the pod is ready and running, open the Traefik dashboard at http://localhost:9000/dashboard/ (local/Civo) or http://<YOURADDRESS>:9000/dashboard/ (DataStax-provided VMs).

(Screenshot: Traefik dashboard)

✅ Step 7: Use Helm to Install K8ssandra

Let's install K8ssandra itself with a Helm install.

✅ Step 7a: For Local/Civo installs:

# Use this for Local/Civo installs
helm install -f k8ssandra-local-civo.yaml k8ssandra k8ssandra/k8ssandra

OR

✅ Step 7b: For DataStax-provided VMs:

Edit the file accordingly! (Replace <YOURADDRESS> with the URL you are using.) Note that there are three places to adjust for your environment, as you will see below:

grep YOURADDRESS k8ssandra.yaml

The output will be something like below.

    host: repair.<YOURADDRESS>
      externalUrl: http://<YOURADDRESS>:8080/prometheus
        root_url: http://<YOURADDRESS>:8080/grafana

Verify that the placeholder has been replaced everywhere. When it has, the output of the following command

grep datastaxtraining k8ssandra.yaml

should be something like below.

    host: repair.xxxx999999.name.datastaxtraining.com
      externalUrl: http://xxxx999999.name.datastaxtraining.com:8080/prometheus
        root_url: http://xxxx999999.name.datastaxtraining.com:8080/grafana
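Rather than editing the three lines by hand, a single `sed` substitution can do the replacement in one pass. The sketch below first demonstrates the pattern on a throwaway sample file (using the made-up hostname from above); the commented-out line shows the same command applied to the real k8ssandra.yaml. It assumes GNU sed's `-i` in-place flag, as on the workshop VMs:

```shell
MY_ADDRESS="xxxx999999.name.datastaxtraining.com"   # substitute your real hostname here

# Demonstrate the substitution on a sample line first:
printf 'host: repair.<YOURADDRESS>\n' > /tmp/sample.yaml
sed -i "s/<YOURADDRESS>/${MY_ADDRESS}/g" /tmp/sample.yaml
cat /tmp/sample.yaml   # -> host: repair.xxxx999999.name.datastaxtraining.com

# The same one-liner applied to the workshop file:
# sed -i "s/<YOURADDRESS>/${MY_ADDRESS}/g" k8ssandra.yaml
```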

# Use this for DataStax-provided VM installs
# MAKE SURE YOU'VE REPLACED YOURADDRESS
#
helm install -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra
Click to show output 📃
ec2-user@ip-172-31-5-5:~/kubernetes-workshop> helm install -f k8ssandra.yaml k8ssandra k8ssandra/k8ssandra
NAME: k8ssandra
LAST DEPLOYED: Tue Mar  2 20:56:12 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1

✅ Step 8: Verify the pods are running

Verify everything is up and running. Wait until every pod has a Running or Completed status before moving on. This may take up to 3 minutes.

watch kubectl get pods
Click to show output 📃
Every 2.0s: kubectl get pods
NAME                                                READY   STATUS     RESTARTS   AGE                                                                   
k8ssandra-cass-operator-7d5df6d49-kx7gk             1/1     Running    0          83s
k8ssandra-dc1-default-sts-0                         0/2     Pending    0          65s
k8ssandra-dc1-stargate-644f7fd75b-t6dht             0/1     Init:0/1   0          83s
k8ssandra-grafana-5c6d5b8f5f-dzfv7                  2/2     Running    0          83s
k8ssandra-kube-prometheus-operator-85695ffb-vvhmr   1/1     Running    0          83s
k8ssandra-reaper-operator-79fd5b4655-p8jc9          1/1     Running    0          83s
prometheus-k8ssandra-kube-prometheus-prometheus-0   2/2     Running    1          77s
traefik-746ff9cc4c-tpc56                            1/1     Running    0          72m   

To exit watch, use Ctrl + C.

This command shows the pods as they come online; notice the steps as they complete.

Verifying Installation

You can verify the installation with the following command

kubectl get cassandradatacenters
Click to show output 📃
NAME   AGE
dc1    5m41s  

and get more details using the command below.

kubectl describe CassandraDataCenter dc1
Click to show output 📃
Name:         dc1
Namespace:    default
Labels:       app.kubernetes.io/instance=k8ssandra
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=k8ssandra
              app.kubernetes.io/part-of=k8ssandra-k8ssandra-default
              helm.sh/chart=k8ssandra-1.3.1
Annotations:  meta.helm.sh/release-name: k8ssandra
              meta.helm.sh/release-namespace: default
              reaper.cassandra-reaper.io/instance: k8ssandra-reaper
API Version:  cassandra.datastax.com/v1beta1
Kind:         CassandraDatacenter
Metadata:
  Creation Timestamp:  2021-09-29T11:48:58Z
  Finalizers:
    finalizer.cassandra.datastax.com
  Generation:  2
  Managed Fields:
    API Version:  cassandra.datastax.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:meta.helm.sh/release-name:
          f:meta.helm.sh/release-namespace:
          f:reaper.cassandra-reaper.io/instance:
        f:labels:
          .:
          f:app.kubernetes.io/instance:
          f:app.kubernetes.io/managed-by:
          f:app.kubernetes.io/name:
          f:app.kubernetes.io/part-of:
          f:helm.sh/chart:
      f:spec:
        .:
        f:allowMultipleNodesPerWorker:
        f:clusterName:
        f:config:
          .:
          f:cassandra-yaml:
            .:
            f:allocate_tokens_for_local_replication_factor:
            f:authenticator:
            f:authorizer:
            f:credentials_update_interval_in_ms:
            f:credentials_validity_in_ms:
            f:num_tokens:
            f:permissions_update_interval_in_ms:
            f:permissions_validity_in_ms:
            f:role_manager:
            f:roles_update_interval_in_ms:
            f:roles_validity_in_ms:
          f:jvm-server-options:
            .:
            f:additional-jvm-opts:
            f:heap_size_young_generation:
            f:initial_heap_size:
            f:max_heap_size:
        f:configBuilderImage:
        f:dockerImageRunsAsCassandra:
        f:managementApiAuth:
          .:
          f:insecure:
        f:podTemplateSpec:
          .:
          f:spec:
            .:
            f:volumes:
        f:racks:
        f:resources:
          .:
          f:limits:
            .:
            f:memory:
          f:requests:
            .:
            f:memory:
        f:serverImage:
        f:serverType:
        f:serverVersion:
        f:size:
        f:storageConfig:
          .:
          f:cassandraDataVolumeClaimSpec:
            .:
            f:accessModes:
            f:resources:
              .:
              f:requests:
                .:
                f:storage:
            f:storageClassName:
        f:systemLoggerImage:
        f:users:
    Manager:      Go-http-client
    Operation:    Update
    Time:         2021-09-29T11:48:58Z
    API Version:  cassandra.datastax.com/v1beta1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
      f:spec:
        f:additionalServiceConfig:
          .:
          f:additionalSeedService:
          f:allpodsService:
          f:dcService:
          f:nodePortService:
          f:seedService:
        f:configBuilderResources:
        f:podTemplateSpec:
          f:metadata:
            .:
            f:creationTimestamp:
          f:spec:
            f:containers:
            f:initContainers:
        f:resources:
          f:limits:
            f:cpu:
          f:requests:
            f:cpu:
        f:systemLoggerResources:
      f:status:
        .:
        f:cassandraOperatorProgress:
        f:conditions:
        f:lastRollingRestart:
        f:lastServerNodeStarted:
        f:nodeReplacements:
        f:nodeStatuses:
          .:
          f:k8ssandra-dc1-default-sts-0:
            .:
            f:hostID:
        f:observedGeneration:
        f:quietPeriod:
        f:superUserUpserted:
        f:usersUpserted:
    Manager:         operator
    Operation:       Update
    Time:            2021-09-29T11:52:34Z
  Resource Version:  268915
  Self Link:         /apis/cassandra.datastax.com/v1beta1/namespaces/default/cassandradatacenters/dc1
  UID:               f6ed7db2-af10-4f1c-9233-93ad9253ee13
Spec:
  Additional Service Config:
    Additional Seed Service:
    Allpods Service:
    Dc Service:
    Node Port Service:
    Seed Service:
  Allow Multiple Nodes Per Worker:  true
  Cluster Name:                     k8ssandra
  Config:
    Cassandra - Yaml:
      allocate_tokens_for_local_replication_factor:  3
      Authenticator:                                 PasswordAuthenticator
      Authorizer:                                    CassandraAuthorizer
      credentials_update_interval_in_ms:             3600000
      credentials_validity_in_ms:                    3600000
      num_tokens:                                    16
      permissions_update_interval_in_ms:             3600000
      permissions_validity_in_ms:                    3600000
      role_manager:                                  CassandraRoleManager
      roles_update_interval_in_ms:                   3600000
      roles_validity_in_ms:                          3600000
    Jvm - Server - Options:
      Additional - Jvm - Opts:
        -Dcassandra.system_distributed_replication_dc_names=dc1
        -Dcassandra.system_distributed_replication_per_dc=1
      heap_size_young_generation:  1G
      initial_heap_size:           1G
      max_heap_size:               1G
  Config Builder Image:            docker.io/datastax/cass-config-builder:1.0.4
  Config Builder Resources:
  Docker Image Runs As Cassandra:  true
  Management API Auth:
    Insecure:
  Pod Template Spec:
    Metadata:
      Creation Timestamp:  <nil>
    Spec:
      Containers:
        Env:
          Name:   LOCAL_JMX
          Value:  no
        Name:     cassandra
        Resources:
      Init Containers:
        Args:
          -c
          cp -r /etc/cassandra/* /cassandra-base-config/
        Command:
          /bin/sh
        Image:              k8ssandra/cass-management-api:4.0.0-v0.1.28
        Image Pull Policy:  IfNotPresent
        Name:               base-config-init
        Resources:
        Volume Mounts:
          Mount Path:  /cassandra-base-config/
          Name:        cassandra-config
        Name:          server-config-init
        Resources:
        Args:
          /bin/sh
          -c
          echo "$REAPER_JMX_USERNAME $REAPER_JMX_PASSWORD" > /config/jmxremote.password && echo "$SUPERUSER_JMX_USERNAME $SUPERUSER_JMX_PASSWORD" >> /config/jmxremote.password
        Env:
          Name:  REAPER_JMX_USERNAME
          Value From:
            Secret Key Ref:
              Key:   username
              Name:  k8ssandra-reaper-jmx
          Name:      REAPER_JMX_PASSWORD
          Value From:
            Secret Key Ref:
              Key:   password
              Name:  k8ssandra-reaper-jmx
          Name:      SUPERUSER_JMX_USERNAME
          Value From:
            Secret Key Ref:
              Key:   username
              Name:  k8ssandra-superuser
          Name:      SUPERUSER_JMX_PASSWORD
          Value From:
            Secret Key Ref:
              Key:          password
              Name:         k8ssandra-superuser
        Image:              docker.io/busybox:1.33.1
        Image Pull Policy:  IfNotPresent
        Name:               jmx-credentials
        Resources:
        Volume Mounts:
          Mount Path:  /config
          Name:        server-config
      Volumes:
        Empty Dir:
        Name:  cassandra-config
  Racks:
    Name:  default
  Resources:
    Limits:
      Cpu:     1
      Memory:  2Gi
    Requests:
      Cpu:         1
      Memory:      2Gi
  Server Image:    k8ssandra/cass-management-api:4.0.0-v0.1.28
  Server Type:     cassandra
  Server Version:  4.0.0
  Size:            1
  Storage Config:
    Cassandra Data Volume Claim Spec:
      Access Modes:
        ReadWriteOnce
      Resources:
        Requests:
          Storage:         5Gi
      Storage Class Name:  standard
  System Logger Image:     docker.io/k8ssandra/system-logger:9c4c3692
  System Logger Resources:
  Users:
    Secret Name:  k8ssandra-reaper
    Superuser:    true
    Secret Name:  k8ssandra-stargate
    Superuser:    true
Status:
  Cassandra Operator Progress:  Ready
  Conditions:
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    ScalingUp
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    Updating
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    Stopped
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    ReplacingNodes
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    RollingRestart
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    Resuming
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  False
    Type:                    ScalingDown
    Last Transition Time:    2021-09-29T11:52:14Z
    Message:                 
    Reason:                  
    Status:                  True
    Type:                    Valid
    Last Transition Time:    2021-09-29T11:52:33Z
    Message:                 
    Reason:                  
    Status:                  True
    Type:                    Initialized
    Last Transition Time:    2021-09-29T11:52:33Z
    Message:                 
    Reason:                  
    Status:                  True
    Type:                    Ready
  Last Server Node Started:  2021-09-29T11:50:50Z
  Node Statuses:
    k8ssandra-dc1-default-sts-0:
      Host ID:          0298145e-7177-4de1-af8c-f917fad3fea3
  Observed Generation:  2
  Quiet Period:         2021-09-29T11:52:39Z
  Super User Upserted:  2021-09-29T11:52:34Z
  Users Upserted:       2021-09-29T11:52:34Z
Events:
  Type    Reason             Age                   From           Message
  ----    ------             ----                  ----           -------
  Normal  CreatedResource    6m18s                 cass-operator  Created service k8ssandra-dc1-service
  Normal  CreatedResource    6m18s                 cass-operator  Created service k8ssandra-seed-service
  Normal  CreatedResource    6m18s                 cass-operator  Created service k8ssandra-dc1-all-pods-service
  Normal  CreatedResource    6m18s                 cass-operator  Created statefulset k8ssandra-dc1-default-sts
  Normal  ScalingUpRack      6m18s                 cass-operator  Scaling up rack default
  Normal  LabeledPodAsSeed   4m50s                 cass-operator  Labeled pod a seed node k8ssandra-dc1-default-sts-0
  Normal  StartingCassandra  4m45s                 cass-operator  Starting Cassandra for pod k8ssandra-dc1-default-sts-0
  Normal  StartedCassandra   3m22s                 cass-operator  Started Cassandra for pod k8ssandra-dc1-default-sts-0
  Normal  CreatedResource    3m22s                 cass-operator  Created PodDisruptionBudget dc1-pdb
  Normal  UpdatingRack       3m22s                 cass-operator  Updating rack default
  Normal  CreatedUsers       3m1s (x3 over 3m21s)  cass-operator  Created users
  Normal  CreatedSuperuser   3m1s (x3 over 3m21s)  cass-operator  Created superuser 

Verify that it's up and running with the following command

kubectl describe CassandraDataCenter dc1 | grep "Cassandra Operator Progress:" 

which should output "Ready".

Click to show output 📃
  Cassandra Operator Progress:  Ready 
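If you would rather script the wait than eyeball it, an `until` loop over that grep works. The sketch below substitutes a stub `check` function for the real `kubectl describe` pipeline so it runs anywhere; in the cluster you would replace the function body with the command above:

```shell
# Stub standing in for:
#   kubectl describe CassandraDataCenter dc1 | grep "Cassandra Operator Progress:"
check() { echo "  Cassandra Operator Progress:  Ready"; }

# Poll until the datacenter reports Ready:
until check | grep -q "Ready"; do
  echo "not ready yet, waiting..."
  sleep 10
done
echo "datacenter is Ready"
```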

You can get the username and password for the K8ssandra install with the following commands.

kubectl get secret k8ssandra-superuser -o jsonpath="{.data.username}" | base64 --decode ; echo
kubectl get secret k8ssandra-superuser -o jsonpath="{.data.password}" | base64 --decode ; echo
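The secret fields come back base64-encoded, which is why each command pipes through `base64 --decode`. A quick local round trip (with a made-up value) shows what that step undoes:

```shell
# Kubernetes stores secret data base64-encoded; encode a made-up password:
ENCODED=$(printf 'sUp3rS3cret' | base64)
echo "$ENCODED"

# Decoding reverses it, exactly as the kubectl commands above do:
printf '%s' "$ENCODED" | base64 --decode; echo   # prints sUp3rS3cret
```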

Once it's "Ready", you can also run some nodetool commands. nodetool is a power tool used by many Cassandra admins. Use the command shown below, plugging in the password from the output of the commands above.

kubectl exec -it k8ssandra-dc1-default-sts-0 -c cassandra -- nodetool -u k8ssandra-superuser -pw <password> status
Click to show output 📃
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address     Load        Tokens  Owns  Host ID                               Rack   
UN  10.244.1.5  284.53 KiB  16      ?     5a927985-5fa2-4b5e-b20b-b8d64415dd5a  default

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless
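If you later want to script against `nodetool status`, the node lines are easy to pick apart with `awk`: the first column is the state (`UN` = Up/Normal) and the second is the address. The sketch below runs on a line captured from the output above:

```shell
# One node line captured from the nodetool status output above:
LINE='UN  10.244.1.5  284.53 KiB  16      ?     5a927985-5fa2-4b5e-b20b-b8d64415dd5a  default'

# Print the address of every node that is Up/Normal:
printf '%s\n' "$LINE" | awk '$1 == "UN" { print $2 }'   # -> 10.244.1.5
```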

Next Step

Proceed to Step II.