Running Cicada Distributed Tests in Kubernetes

This guide is also available on the Cicada Distributed docsite.

Setting up the cluster

Begin by installing k3d and making sure Kustomize is available to kubectl through the -k flag (recent versions of kubectl bundle it).
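
If k3d isn't installed yet, one common route is the project's install script (a sketch assuming a Unix-like shell; check the k3d documentation for the current instructions):

curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash

Once both are installed, start the k3d cluster: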

k3d cluster create -p "8283:30083@server[0]" -p "8284:30084@server[0]"

This will create a cluster with two node ports exposed on localhost:8283 and localhost:8284. Because these map to ports 30083 and 30084 inside the cluster, we’ll also have to modify the Cicada installation using Kustomize.
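
As an optional sanity check, confirm the cluster is registered and its node is ready (standard k3d and kubectl commands, nothing Cicada-specific):

k3d cluster list
kubectl get nodes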

First, create a directory for the overlay and get the installation YAML into a file:

mkdir cicada-distributed-overlay
cicada-distributed start-cluster --mode=KUBE > cicada-distributed-overlay/cicada.yaml
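
As a quick, optional check with plain grep, you can confirm the generated manifest contains the two services we’re about to patch:

grep -nE "datastore-client|container-service" cicada-distributed-overlay/cicada.yaml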

Next, create a file called cicada-distributed-overlay/patch.yaml with the following contents:

apiVersion: v1
kind: Service
metadata:
  name: cicada-distributed-datastore-client
spec:
  ports:
  - port: 8283
    protocol: TCP
    targetPort: 8283
    nodePort: 30083
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: cicada-distributed-container-service
spec:
  ports:
  - port: 8284
    protocol: TCP
    targetPort: 8284
    nodePort: 30084
  type: NodePort

This will override the datastore-client and container-service services to use NodePorts bound to 30083 and 30084 in the cluster, so we can access them locally through the ports exposed by k3d.

Next, you’ll need to merge the files using Kustomize. To do this, add a file called cicada-distributed-overlay/kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - cicada.yaml
patchesStrategicMerge:
  - patch.yaml
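
Before applying anything, you can preview the merged manifests; this only renders the output and doesn’t touch the cluster:

kubectl kustomize cicada-distributed-overlay

The two patched Service entries should show type NodePort with the node ports from patch.yaml.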

Next, apply the overlay directory using the -k flag of kubectl:

kubectl apply -k cicada-distributed-overlay

This will create all the resources and modify the services for use in k3d.
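
To confirm the override took effect, check the services (names as defined in the patch above) and the pods:

kubectl get svc cicada-distributed-datastore-client cicada-distributed-container-service
kubectl get pods

Both services should report type NodePort, with node ports 30083 and 30084.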

Getting an API into K8s

In a previous article, a simple REST API was developed to demonstrate Cicada Distributed. For this example, you’ll need to add the API image to the k3d cluster and start it using the provided Kube YAML.

The source code for the API and the demo app can be cloned from cicadatesting/cicada-distributed-demos:

git clone https://github.com/cicadatesting/cicada-distributed-demos.git

First, build the API and database migration images and add them to the cluster:

From the cicada-distributed-demos/rest-api/app directory, run:
docker build -t cicadatesting/demo-api-app:local .
docker build -t cicadatesting/demo-api-flyway:local -f flyway.dockerfile .
k3d image import cicadatesting/demo-api-app:local
k3d image import cicadatesting/demo-api-flyway:local
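
If you want to verify that the images actually landed in the cluster, you can list the images on the k3d node (a sketch assuming the default cluster name k3s-default, so the server node container is k3d-k3s-default-server-0):

docker exec k3d-k3s-default-server-0 crictl images | grep demo-api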

Next, install the app using the manifests in kube-app.yaml:

kubectl apply -f kube-app.yaml

This should start the API, database, and a job to install the database schema.
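
To check that everything came up, list the pods and jobs created by the manifest (the exact resource names depend on kube-app.yaml):

kubectl get pods
kubectl get jobs

The API and database pods should reach Running, and the schema migration job should complete before you run the tests.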

Running the tests

Once the example app is running, we can run Cicada tests against it. Navigate to the cicada-distributed-demos/rest-api/integration-tests directory. Since the tests will run in k3d, the test runner image needs to be imported into the cluster. To build it, run:

docker build -t cicadatesting/cicada-distributed-demo-integration-test:local .

Next, import the image with:

k3d image import cicadatesting/cicada-distributed-demo-integration-test:local

Finally, start the test by running:

cicada-distributed run --mode=KUBE --image=cicadatesting/cicada-distributed-demo-integration-test:local

You should see the test spin up and execute the four test scenarios.
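
If you’d like to watch the test runners get scheduled while the run is in progress, keep a pod watch open in a second terminal (standard kubectl, nothing Cicada-specific):

kubectl get pods -w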

(Originally posted on Medium)
