diff --git a/k3d-local/.gitignore b/k3d-local/.gitignore new file mode 100644 index 0000000..636d2cd --- /dev/null +++ b/k3d-local/.gitignore @@ -0,0 +1,5 @@ +ca.crt +ca.key +ca-secret.yaml +ca-issuer.yaml +gooddata-license.env diff --git a/k3d-local/README.md b/k3d-local/README.md new file mode 100644 index 0000000..e09179e --- /dev/null +++ b/k3d-local/README.md @@ -0,0 +1,161 @@ +# Running GoodData.CN in K3D +This script allows you to deploy a 3-node Kubernetes cluster on your local +machine within docker containers, deploy Apache Pulsar and GoodData.CN, +configure cert-manager with a self-signed certificate authority, and set up +the Nginx ingress controller. + +## Requirements + +HW requirements are pretty high; I recommend at least 8 CPU cores and 12 GB +RAM available to Docker. Only the `x86_64` CPU architecture is supported. + +Here is a list of things you need to have installed before running the script: +* [k3d](https://github.com/rancher/k3d/releases/tag/v5.4.0) 5.4.x +* [kubectl](https://kubernetes.io/docs/tasks/tools/) 1.24 or higher +* openssl 1.x +* [Helm](https://helm.sh/docs/intro/install/) 3.x +* [Docker](https://www.docker.com/) +* envsubst (part of gettext) +* (optional but recommended) [crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) + +Environment variables that need to be set: +* `GDCN_LICENSE` containing the GoodData.CN license key + +These services must be running and accessible by the user running the script: +* Docker daemon + +Store your GoodData.CN license key in the environment variable GDCN_LICENSE: +``` +export GDCN_LICENSE="key/......." +``` + +## Configuration +``` + Usage: ./k3d.sh [options] + Options are: + -c - create cluster + -v VERSION - use this version of gooddata-cn helm chart + -f FILE - full path to a yaml with GD.CN helm values +``` + +`-c` should be used to create or recreate the cluster. If you just want to update an +existing cluster, do not use this parameter.
The script will then perform a helm upgrade +of all components within the existing cluster. + +## Usage +``` +./k3d.sh -c +``` +The script pulls all images to a local docker registry (to save network +bandwidth), creates a 3-node Kubernetes cluster, installs apps, and generates a +CA certificate (the certificate is printed to the output, including the steps +describing how to add this certificate to your browser). + +When the script finishes, your kubeconfig is automatically set to your new +k3d cluster context. You can immediately use `kubectl` to control the cluster. + +``` +kubectl get node +NAME STATUS ROLES AGE VERSION +k3d-default-agent-1 Ready 33m v1.26.6+k3s1 +k3d-default-server-0 Ready control-plane,master 33m v1.26.6+k3s1 +k3d-default-agent-0 Ready 33m v1.26.6+k3s1 +``` + +## What next? +Create an Organization resource and apply it to the cluster: + +``` +# contents of org-demo.yaml file +apiVersion: controllers.gooddata.com/v1 +kind: Organization +metadata: + name: demo-org + namespace: gooddata +spec: + id: demo + name: Demo organization + hostname: localhost + adminGroup: adminGroup + adminUser: admin + adminUserToken: "$5$marO0ghT5pAg/CLe$ZiGVePUEPHryyEpSnnd9DmbLs4uNUKWGXjjmGnYT1NA" + tls: + secretName: demo-org-tls + issuerName: ca-issuer +``` + +Explanation of important attributes: +* `metadata.name` - name of the Kubernetes resource +* `metadata.namespace` - where the Organization resource will be created +* `spec.id` - ID of the organization in metadata +* `spec.name` - friendly name of the organization +* `spec.hostname` - FQDN of the organization endpoint.
An Ingress with the same host will be + created by GoodData.CN +* `spec.adminUser` - name of the initial organization administrator account +* `spec.adminUserToken` - SHA256 hash (salted) of the secret part of the bootstrap + token +* `spec.tls.secretName` - name of the k8s Secret where the TLS certificate and key will + be stored by cert-manager +* `spec.tls.issuerName` - name of cert-manager's Issuer or ClusterIssuer +* `spec.tls.issuerType` - type of cert-manager's issuer. Either `Issuer` (default) + or `ClusterIssuer`. + +The easiest way to generate the salted hash from a plaintext secret is: + +``` +docker run -it --rm alpine:latest mkpasswd -m sha256crypt example + +$5$marO0ghT5pAg/CLe$ZiGVePUEPHryyEpSnnd9DmbLs4uNUKWGXjjmGnYT1NA +``` + +Apply the Organization resource: +``` +kubectl apply -f org-demo.yaml +``` + +## Try accessing the organization +**Note**: As the system is using a self-signed CA, you may also want to add its +certificate to your operating system and web browser. + +To access the organization for the first time, create a so-called bootstrap token from +the secret you used to create `spec.adminUserToken` above. We used the word "example" +as the secret, and "admin" as the `spec.adminUser` account name. "bootstrap" is +fixed and cannot be changed. Later on, you can create your own API tokens and +name them as you wish. + +``` +echo -n "admin:bootstrap:example" | base64 +YWRtaW46Ym9vdHN0cmFwOmV4YW1wbGU= +``` + +So `YWRtaW46Ym9vdHN0cmFwOmV4YW1wbGU=` is the secret token you will use for +initial setup of the organization using the JSON:API and REST API.
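The bootstrap token is nothing more than the base64 encoding of `user:bootstrap:secret`, so it can also be generated programmatically — a minimal sketch in Python (the function name is illustrative):

```python
import base64

def bootstrap_token(user: str, secret: str) -> str:
    # The middle component "bootstrap" is fixed; only user and secret vary.
    return base64.b64encode(f"{user}:bootstrap:{secret}".encode()).decode()

print(bootstrap_token("admin", "example"))
# YWRtaW46Ym9vdHN0cmFwOmV4YW1wbGU=
```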
+ +``` +curl -k -H 'Authorization: Bearer YWRtaW46Ym9vdHN0cmFwOmV4YW1wbGU=' \ + https://localhost/api/v1/entities/admin/organizations/demo + +{ + "data": { + "attributes": { + "name": "Demo organization", + "hostname": "localhost", + "oauthClientId": "114f0a8b-2335-4bf4-ba54-37e1ee04afe1" + }, + "id": "demo", + "type": "organization" + }, + "links": { + "self": "https://localhost/api/v1/entities/admin/organizations/demo" + } +} +``` + +## Next steps +Follow the [documentation](https://www.gooddata.com/developers/cloud-native/doc) +to learn more about adding users, configuring data sources and further steps. + +## Cleanup +If you want to wipe the environment, perform these steps: +* delete k3d cluster: `k3d cluster delete default` +* remove registry volume: `docker volume rm registry-data` diff --git a/k3d-local/cert-manager.yaml b/k3d-local/cert-manager.yaml new file mode 100644 index 0000000..9eb2712 --- /dev/null +++ b/k3d-local/cert-manager.yaml @@ -0,0 +1,19 @@ +apiVersion: v1 +kind: Namespace +metadata: + name: cert-manager +--- +apiVersion: helm.cattle.io/v1 +kind: HelmChart +metadata: + name: cert-manager + namespace: kube-system +spec: + repo: https://charts.jetstack.io + chart: cert-manager + version: v1.6.0 + targetNamespace: cert-manager + valuesContent: |- + installCRDs: "true" + extraArgs: + - "--enable-certificate-owner-ref=true" diff --git a/k3d-local/gen_keys.sh b/k3d-local/gen_keys.sh new file mode 100755 index 0000000..b8b18c9 --- /dev/null +++ b/k3d-local/gen_keys.sh @@ -0,0 +1,123 @@ +#!/bin/bash + +# Creates local CA key/cert pair and configures cert-manager +# Issuer or ClusterIssuer (in case the namespace=cert-manager) +# Usage: $0 [namespace] +# Namespace defaults to "gooddata" + +usage() { + cat > /dev/stderr < ca-secret.yaml << EOF +apiVersion: v1 +kind: Secret +type: kubernetes.io/tls +metadata: + name: ca-key-pair + namespace: ${NAMESPACE} +data: + tls.crt: $cert + tls.key: $key +EOF + +# Create Issuer +cat > ca-issuer.yaml << EOF 
+apiVersion: cert-manager.io/v1 +kind: ${CLUSTERWIDE}Issuer +metadata: + name: ca-issuer + namespace: ${NAMESPACE} +spec: + ca: + secretName: ca-key-pair +EOF + +echo "Uploading resources to Kubernetes" +kubectl create -f ca-secret.yaml +kubectl create -f ca-issuer.yaml + +rm ca-issuer.yaml ca-secret.yaml diff --git a/k3d-local/ingress-nginx.yaml b/k3d-local/ingress-nginx.yaml new file mode 100644 index 0000000..42f31ed --- /dev/null +++ b/k3d-local/ingress-nginx.yaml @@ -0,0 +1,18 @@ +apiVersion: helm.cattle.io/v1 +kind: HelmChart +metadata: + name: ingress-nginx + namespace: kube-system +spec: + repo: https://kubernetes.github.io/ingress-nginx + chart: ingress-nginx + version: 4.1.1 + targetNamespace: kube-system + valuesContent: |- + controller: + config: + use-forwarded-headers: "true" + tolerations: + - key: "node-role.kubernetes.io/master" + operator: "Exists" + effect: "NoSchedule" diff --git a/k3d-local/k3d-config.yaml b/k3d-local/k3d-config.yaml new file mode 100644 index 0000000..3916d78 --- /dev/null +++ b/k3d-local/k3d-config.yaml @@ -0,0 +1,56 @@ +apiVersion: k3d.io/v1alpha5 +kind: Simple +metadata: + name: default +servers: 1 +agents: 2 +kubeAPI: + host: "localhost" + hostIP: "0.0.0.0" + hostPort: "6443" +# If not specified, the latest version will be used +# image: rancher/k3s:v1.30.1-k3s1 +network: k3d-default +volumes: +- volume: "${PWD}/cert-manager.yaml:/var/lib/rancher/k3s/server/manifests/cert-manager.yaml" + nodeFilters: + - server:0 +- volume: "${PWD}/ingress-nginx.yaml:/var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml" + nodeFilters: + - server:0 +ports: +- port: "${K3D_HTTPS_PORT}:443" + nodeFilters: + - loadbalancer +- port: "${K3D_HTTP_PORT}:80" + nodeFilters: + - loadbalancer +registries: + create: + name: k3d-registry + host: "0.0.0.0" + hostPort: "${DOCKER_REGISTRY_PORT}" + volumes: + - registry-data:/var/lib/registry + config: | + mirrors: + "k3d-registry:${DOCKER_REGISTRY_PORT}": + endpoint: -
http://k3d-registry:${DOCKER_REGISTRY_PORT} +options: + k3d: + wait: true + timeout: "60s" + k3s: + extraArgs: + - arg: --disable=traefik + nodeFilters: + - server:* + # nodeLabels: + # - label: role=infra + # nodeFilters: + # - agent:* + # - server:* + kubeconfig: + updateDefaultKubeconfig: true + switchCurrentContext: true diff --git a/k3d-local/k3d.sh b/k3d-local/k3d.sh new file mode 100755 index 0000000..61bdba0 --- /dev/null +++ b/k3d-local/k3d.sh @@ -0,0 +1,294 @@ +#!/usr/bin/env bash +# (C) 2020-2024 GoodData Corporation +# +# Create K3D cluster with Pulsar and GoodData.CN Helm charts deployed. +# SW Requirements: +# * kubectl 1.24+ +# * k3d 5.4.x+ +# * helm 3.x +# * dockerd 20.x+ +# * jq +# * envsubst (part of gettext package, also available in Homebrew) +# * (optional but recommended) crane (https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) +# Environment requirements: +# * GDCN_LICENSE containing the GoodData.CN license key +# +# Steps to start local k3d cluster: +# 1. (Optional) prepare GD.CN deployment customization - check section "Customizing GD.CN deployment values" +# 2. run './k3d.sh -c' to deploy the whole cluster (can take ~ 10min, rerun without "-c" if it times out) +# 3. (Optional) Import the generated self-signed CA certificate into your browser and operating system. +# 4. Create an Organization with localhost hostname, create a user in dex and metadata. Follow the GoodData docs. +# 5. Go to https://localhost and use the credentials for the user created in step 4. +## +# Customizing GD.CN deployment values: +# Values customization is available through option '-f FILE' of './k3d.sh'. Below is an example of how to set +# offline feature flag ENABLE_PDM_REMOVAL_DEPRECATION_PHASE: +# 1. create file /tmp/custom-values.yaml with content: +# commonEnv: +# - name: GDC_FEATURES_VALUES_ENABLE_PDM_REMOVAL_DEPRECATION_PHASE +# value: "true" +# 2.
In step (2) of "Steps to start local k3d cluster" run './k3d.sh -c -f /tmp/custom-values.yaml' +# + +set -e +DOCKERHUB_NAMESPACE="gooddata" +PULSAR_VERSION="3.1.2" +PULSAR_CHART_VERSION="3.1.0" +GDCN_VERSION="3.13.0" +CLUSTER_NAME="default" +# set to empty string if you want port 80 +LBPORT="" +# set to empty string if you want port 443 +LBSSLPORT="" +export K3D_HTTP_PORT="${LBPORT:-80}" +export K3D_HTTPS_PORT="${LBSSLPORT:-443}" + +DOCKER_REGISTRY_HOST=localhost +export DOCKER_REGISTRY_PORT=5000 +# Hint: if you want to use your own registry, you can set DOCKER_REGISTRY and REPOSITORY_PREFIX to some +# other compatible registry. Images will be copied to that registry. +DOCKER_REGISTRY="$DOCKER_REGISTRY_HOST:$DOCKER_REGISTRY_PORT" +REPOSITORY_PREFIX="k3d-registry:$DOCKER_REGISTRY_PORT" + +SCRIPT_DIR=$(cd "$(dirname "${BASH_SOURCE[0]}")" &>/dev/null && pwd -P) + +pushd "$SCRIPT_DIR" +K3D_CONFIG_FILE="k3d-config.yaml" +# Fancy colors +C_RED='\033[0;31m' +C_RESET='\033[0m' + +# List of images to be copied from docker hub to local docker registry +declare -A IMAGES=( + ["afm-exec-api"]="$DOCKERHUB_NAMESPACE/afm-exec-api" + ["api-gateway"]="$DOCKERHUB_NAMESPACE/api-gateway" + ["auth-service"]="$DOCKERHUB_NAMESPACE/auth-service" + ["automation"]="$DOCKERHUB_NAMESPACE/automation" + ["calcique"]="$DOCKERHUB_NAMESPACE/calcique" + ["export-controller"]="$DOCKERHUB_NAMESPACE/export-controller" + ["metadata-api"]="$DOCKERHUB_NAMESPACE/metadata-api" + ["result-cache"]="$DOCKERHUB_NAMESPACE/result-cache" + ["scan-model"]="$DOCKERHUB_NAMESPACE/scan-model" + ["sql-executor"]="$DOCKERHUB_NAMESPACE/sql-executor" + ["tabular-exporter"]="$DOCKERHUB_NAMESPACE/tabular-exporter" + ["apidocs"]="$DOCKERHUB_NAMESPACE/apidocs" + ["dex"]="$DOCKERHUB_NAMESPACE/dex" + ["organization-controller"]="$DOCKERHUB_NAMESPACE/organization-controller" + ["tools"]="$DOCKERHUB_NAMESPACE/tools" + # amd64/arm64 + ["analytical-designer"]="$DOCKERHUB_NAMESPACE/analytical-designer" + # amd64/arm64 +
["dashboards"]="$DOCKERHUB_NAMESPACE/dashboards" + # amd64/arm64 + ["home-ui"]="$DOCKERHUB_NAMESPACE/home-ui" + # amd64/arm64 + ["ldm-modeler"]="$DOCKERHUB_NAMESPACE/ldm-modeler" + # amd64/arm64 + ["measure-editor"]="$DOCKERHUB_NAMESPACE/measure-editor" + # amd64/arm64 + ["web-components"]="$DOCKERHUB_NAMESPACE/web-components" + # amd64/arm64 + ["quiver"]="$DOCKERHUB_NAMESPACE/quiver" + # amd64 only + ["visual-exporter-chromium"]="$DOCKERHUB_NAMESPACE/visual-exporter-chromium" + ["visual-exporter-proxy"]="$DOCKERHUB_NAMESPACE/visual-exporter-proxy" + ["visual-exporter-service"]="$DOCKERHUB_NAMESPACE/visual-exporter-service" + ["pdf-stapler-service"]="$DOCKERHUB_NAMESPACE/pdf-stapler-service" +) + +# Play safe and cover all imaginable options +case $(arch) in + x86_64 | amd64) ARCH=amd64 ;; + aarch64 | arm64) ARCH=arm64 ;; + *) + echo -e "${C_RED}⛔ Unsupported architecture: $(arch)${C_RESET}" + exit 1 + ;; +esac + +usage() { + cat >/dev/stderr </dev/null +command -v envsubst >/dev/null || { + echo "⛔ Cannot find 'envsubst' command." + exit 1 +} +command -v crane >/dev/null 2>&1 && { + echo "🎉 Found crane, will use it to copy images to local registry" + HAS_CRANE=1 +} + +echo "🔎 Checking GoodData CN version" +# If this command fails, the provided version does not exist in Docker Hub +docker manifest inspect $DOCKERHUB_NAMESPACE/dex:$GDCN_VERSION >/dev/null || { + echo "⛔ Cannot access images in Docker Hub. Make sure $GDCN_VERSION is valid." + exit 1 +} + +# This is needed to make kubedns work in pods running on the same host +# where the kubedns pod is running; valid for Linux +if [ "$(uname -s)" == "Linux" ]; then + echo "🔎 Checking bridge netfilter policy" + if ! grep -q 1 /proc/sys/net/bridge/bridge-nf-call-iptables; then + echo " Enabling bridge-nf-call-iptables" + echo '1' | sudo tee /proc/sys/net/bridge/bridge-nf-call-iptables + fi + # This is needed for klipper in order to configure NAT table + echo "🔎 Checking if iptables module is present" + if [ !
-d /sys/module/ip_tables ]; then + echo "Loading ip_tables module into the kernel" + sudo modprobe ip_tables || exit 1 + fi +fi + +# Create cluster +if [ "$CREATE_CLUSTER" ]; then + k3d cluster delete $CLUSTER_NAME || : + k3d cluster create -c "$K3D_CONFIG_FILE" +fi + +# Use ext-image files to get the correct image tag. Note this tag refers to the repository +# where the image was built. +echo "Copying images" +for app in "${!IMAGES[@]}"; do + docker_cp "${IMAGES[$app]}:$GDCN_VERSION" "$DOCKER_REGISTRY/${app}:$GDCN_VERSION" & +done + +wait # wait for copying to complete + +if [ "$CREATE_CLUSTER" ]; then + k3d kubeconfig merge default -d + kubectl config use-context k3d-$CLUSTER_NAME + kubectl create ns pulsar + kubectl create ns gooddata +fi + +# Preload pulsar to local registry +# The reason is that it is a huge image and the DockerHub token expires before the image +# is pulled by containerd. Furthermore, it is pulled 3-4 times in parallel. +docker_cp apachepulsar/pulsar:$PULSAR_VERSION $DOCKER_REGISTRY/apachepulsar/pulsar:$PULSAR_VERSION + +kubectl config use-context k3d-$CLUSTER_NAME +kubectl cluster-info + +echo "license=$GDCN_LICENSE" > gooddata-license.env + +if [ "$CREATE_CLUSTER" ]; then + # Wait for cert-manager. Kubectl is completely happy when listing + # resources in a non-existent namespace but it fails when waiting + # on non-existent resources if using a selector. Therefore we must + # poll for the namespace and some specific resource first and then + # we can wait for the deployment to become available. + echo "Waiting for cert-manager to come up" + while ! kubectl get ns cert-manager &>/dev/null; do + echo -n "." + sleep 10 + done + while ! kubectl -n cert-manager get deployment cert-manager &>/dev/null; do + echo -n "." + sleep 10 + done + kubectl -n cert-manager wait deployment --for=condition=available \ + --selector=app.kubernetes.io/instance=cert-manager \ + --timeout=240s + echo "Done" + # This script generates a key/cert pair (if not already available).
# If the pair is already available, it will reuse it. + # Key, certificate, and issuer will be loaded by kustomize + ./gen_keys.sh -k +fi + +# Generate manifests by kustomize, substitute env variables, and apply +# shellcheck disable=SC2016 +kubectl kustomize . | + PULSAR_CHART_VERSION=$PULSAR_CHART_VERSION \ + DOCKER_REGISTRY_PORT=$DOCKER_REGISTRY_PORT \ + PULSAR_VERSION=$PULSAR_VERSION \ + envsubst '$DOCKER_REGISTRY_PORT $PULSAR_CHART_VERSION $PULSAR_VERSION' | + kubectl apply -f - + +if [ -n "$VALUES_FILE" ]; then + echo "Deploying GD.CN with values file $VALUES_FILE" +fi + +# deployVisualExporter=false: visualExporterService is not supported in the k3d deployment now because: +# - chromium is not able to load the dashboard on an internal domain (some /etc/hosts magic has to be done there) +# - the chromium container in the visualExporterChromium service is not multiarch and needs to be built manually for arm64 +helm -n gooddata upgrade --install gooddata-cn \ + --repo https://charts.gooddata.com --version "$GDCN_VERSION" \ + --wait --timeout 10m \ + ${VALUES_FILE:+--values $VALUES_FILE} \ + --set image.repositoryPrefix=$REPOSITORY_PREFIX \ + --set ingress.lbProtocol=https --set ingress.lbPort="$LBSSLPORT" \ + --set replicaCount=1 --set dex.ingress.annotations."cert-manager\.io/issuer"=ca-issuer \ + --set dex.ingress.tls.authSecretName=gooddata-cn-dex-tls \ + --set metadataApi.encryptor.enabled=false \ + --set license.existingSecret=gdcn-license-key gooddata-cn + +cat < + -XX:+UseG1GC + -XX:MaxGCPauseMillis=10 + -XX:+ParallelRefProcEnabled + -XX:+UnlockExperimentalVMOptions + -XX:+DoEscapeAnalysis + -XX:ParallelGCThreads=4 + -XX:ConcGCThreads=4 + -XX:G1NewSizePercent=50 + -XX:+DisableExplicitGC + -XX:-ResizePLAB + -XX:+ExitOnOutOfMemoryError + -XX:+PerfDisableSharedMem + + autorecovery: + podMonitor: + enabled: false + restartPodsOnConfigMapChange: true + configData: + BOOKIE_MEM: > + -Xms64m -Xmx128m -XX:MaxDirectMemorySize=128m + + pulsar_metadata: + image: + repository:
k3d-registry:${DOCKER_REGISTRY_PORT}/apachepulsar/pulsar + + broker: + replicaCount: 2 + podMonitor: + enabled: false + restartPodsOnConfigMapChange: true + configData: + exposeConsumerLevelMetricsInPrometheus: "true" + exposeManagedCursorMetricsInPrometheus: "true" + exposeManagedLedgerMetricsInPrometheus: "true" + exposeProducerLevelMetricsInPrometheus: "true" + exposeTopicLevelMetricsInPrometheus: "true" + managedLedgerDefaultEnsembleSize: "2" + managedLedgerDefaultWriteQuorum: "2" + managedLedgerDefaultAckQuorum: "2" + subscriptionExpirationTimeMinutes: "5" + systemTopicEnabled: "true" + topicLevelPoliciesEnabled: "true" + + proxy: + podMonitor: + enabled: false + + kube-prometheus-stack: + enabled: false diff --git a/k3d-local/unlimited-cpu.yaml b/k3d-local/unlimited-cpu.yaml new file mode 100644 index 0000000..4abd011 --- /dev/null +++ b/k3d-local/unlimited-cpu.yaml @@ -0,0 +1,126 @@ +etcd: + resources: + limits: + cpu: ~ +dex: + resources: + limits: + cpu: ~ +resources: + limits: + cpu: ~ +afmExecApi: + resources: + limits: + cpu: ~ +analyticalDesigner: + resources: + limits: + cpu: ~ +authService: + resources: + limits: + cpu: ~ +dashboards: + resources: + limits: + cpu: ~ +homeUi: + resources: + limits: + cpu: ~ +ldmModeler: + resources: + limits: + cpu: ~ +metadataApi: + resources: + limits: + cpu: ~ +apiGateway: + resources: + limits: + cpu: ~ +calcique: + resources: + limits: + cpu: ~ +measureEditor: + resources: + limits: + cpu: ~ +organizationController: + resources: + limits: + cpu: ~ +resultCache: + resources: + limits: + cpu: ~ +scanModel: + resources: + limits: + cpu: ~ +sqlExecutor: + resources: + limits: + cpu: ~ +webComponents: + resources: + limits: + cpu: ~ +apiDocs: + resources: + limits: + cpu: ~ +tabularExporter: + resources: + limits: + cpu: ~ +exportController: + resources: + limits: + cpu: ~ +visualExporterChromium: + resources: + limits: + cpu: ~ +visualExporterProxy: + resources: + limits: + cpu: ~ +visualExporterService: + resources:
+ limits: + cpu: ~ +pdfStaplerService: + resources: + limits: + cpu: ~ +tools: + resources: + limits: + cpu: ~ +cacheGC: + resources: + limits: + cpu: ~ + +chartTests: + resources: + limits: + cpu: ~ +quiver: + resources: + cache: + limits: + cpu: ~ + datasource: + limits: + cpu: ~ + xtab: + limits: + cpu: ~ + ml: + limits: + cpu: ~
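Every component stanza in `unlimited-cpu.yaml` above repeats the same `resources.limits.cpu: ~` override (YAML `~` is null, which removes the chart's default CPU limit). A throwaway sketch that generates such a values file — the component list here is a small illustrative subset, not the full set used by the chart:

```python
# Emit a Helm values override that nulls the CPU limit for each component.
COMPONENTS = ["etcd", "dex", "afmExecApi", "metadataApi", "calcique"]

def cpu_unlimited(names):
    # One four-line stanza per component, matching the unlimited-cpu.yaml layout.
    stanzas = (f"{name}:\n  resources:\n    limits:\n      cpu: ~" for name in names)
    return "\n".join(stanzas) + "\n"

print(cpu_unlimited(COMPONENTS))
```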