Release 0.2.1
vrabbi committed Feb 10, 2022
1 parent 0ab12d5 commit dabc2f1
Showing 29 changed files with 1,010 additions and 166 deletions.
44 changes: 26 additions & 18 deletions CHANGELOG.md
@@ -1,32 +1,40 @@
# CHANGE LOG FOR THIS REPO
## 0.2.1
* Added GitOps supply chain with service bindings using the service claims struct of the workload YAML
* Added workload example for binding to a RabbitMQ cluster
* Fixed an issue in the GitOps git-writer task where it failed if the folder already existed in Git
* Fixed Service Bindings RBAC to allow working with RabbitMQ (OSS and Tanzu) and Tanzu PostgreSQL operator-based objects
* Added Cert Injection Webhook Package
* Validated support for running on EKS, kind and minikube

## 0.2.0
* Add Service Bindings Package
* Add Service Bindings for MySQL example workload
* Add Service Bindings Supply Chain example
* Make all packages optional (opt out mechanism added)
* Make OOTB Supply chains optional (opt out mechanism added)
* Added Kaniko based example workload
* Standardized the labels for workloads and supply chains
* Added additional docs for getting started
* Enhanced experience for deployment on a Local Docker based environment
* Added script to get the status of the platform

## 0.1.5
* Add Kaniko-based supply chain for building images
* Add workload examples for building images with Kaniko instead of Kpack

## 0.1.4
* Fix issue with Knative on local Docker-based TCE clusters

## 0.1.3
* Tech Debt cleanup

## 0.1.2
* Add support for setting Contour to use a ClusterIP for local clusters

## 0.1.1
* Fix issue with openapiv3 schema for meta package
* Add example workloads for utilization with the supply chains

## 0.1.0
Initial Release
13 changes: 12 additions & 1 deletion INSTALL_VALUES_EXPLANATION.md
@@ -29,6 +29,16 @@ kpack_config:
  builder:
    tag: # The full path where you want the builder created in your registry
```
## CERT INJECTION WEBHOOK
This is needed to support self-signed registries or source control systems with kpack. It allows proxy and CA values to be injected into kpack build pods automatically via a mutating webhook.
The supported values are:
```
cert_injection_webhook:
  ca_cert_data: # BASE64 encoded CA cert data to inject into the pods
  http_proxy: # HTTP Proxy ENV Variable value to inject into the build pods
  https_proxy: # HTTPS Proxy ENV Variable value to inject into the build pods
  no_proxy: # NO Proxy ENV Variable value to inject into the build pods
```
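For example, a filled-in snippet might look like the following (all values below are placeholders — substitute your own CA bundle and proxy endpoints):
```
cert_injection_webhook:
  ca_cert_data: <BASE64 ENCODED CA CERT>              # placeholder: base64 of your CA certificate chain
  http_proxy: http://proxy.example.internal:3128      # placeholder proxy endpoint
  https_proxy: http://proxy.example.internal:3128     # placeholder proxy endpoint
  no_proxy: localhost,127.0.0.1,.svc,.cluster.local   # hosts/domains that should bypass the proxy
```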

## KNATIVE
The configuration options are the same as the TCE Package.
@@ -65,7 +75,7 @@ This package will install 1 to 3 supply chains to help you getting started with
The required values are:
```
ootb_supply_chains:
  disable_specific_supply_chains: # Array of supply chains to not install. Options are: ootb-basic-supply-chain, ootb-gitops-supply-chain, ootb-basic-supply-chain-with-kaniko, and ootb-testing-supply-chain
  disable_specific_supply_chains: # Array of supply chains to not install. Options are: ootb-basic-supply-chain, ootb-gitops-supply-chain, ootb-basic-supply-chain-with-kaniko, ootb-testing-supply-chain and ootb-gitops-supply-chain-with-svc-bindings
  image_prefix: # Prefix for the image creation path. The workload name will be added as the suffix. Should be in the format <REGISTRY>/<PROJECT or USERNAME>/ or <REGISTRY>/<PROJECT or USERNAME>/<SOME STRING>
  gitops:
    configure: # Boolean value of true or false. If set to true, a gitops supply chain will be created. This requires additional inputs, which are found in the gitops.git_writer section below.
@@ -102,3 +112,4 @@ The supported values for this array are:
* **ootb-supply-chains.tap.oss** - Gives an easy way to get started by exposing different ready-made supply chains. If disabled, you will need to manually create a supply chain before you can deploy a workload.
* **tekton.tap.oss** - Used in all but one OOTB supply chain. Should be disabled only if you have pre-installed Tekton.
* **service-bindings.tap.oss** - Used in one OOTB supply chain currently. If you disable this package, binding to backend services will be very complex.
* **cert-injection-webhook.tap.oss** - Used by kpack. If you disable this package, you cannot build images from a source control system with self-signed certs or push to a registry with self-signed certs.
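As an illustration only — the top-level array key is documented earlier in this file and is not visible in this diff, so `exclude_packages` below is a hypothetical placeholder name — opting out of two packages might look like:
```
exclude_packages:            # hypothetical key name; use the array key documented above
- tekton.tap.oss
- service-bindings.tap.oss
```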
42 changes: 35 additions & 7 deletions README.md
@@ -16,14 +16,37 @@ This package repository includes the following packages:
5. **kpack-config.tap.oss** - A package with configuration to setup kpack with Paketo buildpacks
6. **ootb-supply-chains.tap.oss** - A package that includes Supply chains for use in the cluster
7. **tekton.tap.oss** - A package to install Tekton to run pipelines within our supply chains
8. **kpack.tap.oss** - This is the TCE Kpack package simply in the same repo to not have a requirement to install the TCE repo as well
9. **knative-serving.tap.oss** - This is the TCE Knative Serving package simply in the same repo to not have a requirement to install the TCE repo as well
10. **cert-manager.tap.oss** - This is the TCE Cert Manager package simply in the same repo to not have a requirement to install the TCE repo as well
11. **contour.tap.oss** - This is the TCE Contour package simply in the same repo to not have a requirement to install the TCE repo as well
12. **service-bindings.tap.oss** - This is a package that allows simple binding of workloads to backend service using the service bindings project
8. **service-bindings.tap.oss** - This is a package that allows simple binding of workloads to backend services using the Service Bindings project
9. **cert-injection-webhook.tap.oss** - This is a package that allows injection of CA certs into pods via a webhook (used primarily for Kpack) to support registries and source control systems with self-signed certs
10. **kpack.tap.oss** - This is the TCE Kpack package simply in the same repo to not have a requirement to install the TCE repo as well
11. **knative-serving.tap.oss** - This is the TCE Knative Serving package simply in the same repo to not have a requirement to install the TCE repo as well
12. **cert-manager.tap.oss** - This is the TCE Cert Manager package simply in the same repo to not have a requirement to install the TCE repo as well
13. **contour.tap.oss** - This is the TCE Contour package simply in the same repo to not have a requirement to install the TCE repo as well

## Installation instructions
### TCE and TKGm 1.4+ users can skip to step 3 right away
If you are not running on a TKGm 1.4+ or TCE 0.9.1+ cluster, you must install the Tanzu CLI on your machine and install Kapp Controller in your cluster.
1. Install Tanzu CLI - [Full instructions on TCE website](https://tanzucommunityedition.io/docs/latest/cli-installation/)
* Linux
```bash
curl -H "Accept: application/vnd.github.v3.raw" \
-L https://api.github.com/repos/vmware-tanzu/community-edition/contents/hack/get-tce-release.sh | \
bash -s v0.9.1 linux
```
* Mac
```bash
brew install vmware-tanzu/tanzu/tanzu-community-edition
```
* Windows
```bash
choco install tanzu-community-edition
```
2. Install Kapp Controller
```bash
kubectl apply -f https://github.com/vmware-tanzu/carvel-kapp-controller/releases/latest/download/release.yml
```
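Before continuing, you can optionally confirm that kapp-controller is up (this assumes the default `kapp-controller` namespace created by the upstream release manifest):
```bash
kubectl get pods -n kapp-controller
```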

## Installation instructions on TCE and TKGm
#### NOTE: Should work on any Kubernetes platform but has not been tested on other platforms yet and would require installing kapp controller first
#### NOTE: Should work on any Kubernetes platform but has not been tested on all major platforms yet. If you have any issues using it on other platforms, please open an issue.
1. Create the TAP OSS namespace
```bash
kubectl create namespace tap-oss
@@ -46,6 +69,11 @@ kpack:
kpack_config:
  builder:
    tag: <FILL ME IN>
cert_injection_webhook:
  ca_cert_data: <FILL ME IN>
  http_proxy: <FILL ME IN>
  https_proxy: <FILL ME IN>
  no_proxy: <FILL ME IN>
knative:
  domain:
    name: <FILL ME IN>
157 changes: 157 additions & 0 deletions example-workloads/ootb-basic-supply-chain-with-kaniko/go/pipeline.yaml
@@ -0,0 +1,157 @@
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko
spec:
  params:
  - name: IMAGE
    description: Name (reference) of the image to build.
  - name: DOCKERFILE
    description: Path to the Dockerfile to build.
    default: ./Dockerfile
  - name: CONTEXT
    description: The build context used by Kaniko.
    default: ./
  - name: EXTRA_ARGS
    type: array
    default: []
  - name: BUILDER_IMAGE
    description: The image on which builds will run (default is v1.5.1)
    default: gcr.io/kaniko-project/executor:v1.5.1@sha256:c6166717f7fe0b7da44908c986137ecfeab21f31ec3992f6e128fff8a94be8a5
  workspaces:
  - name: source
    description: Holds the context and docker file
  - name: dockerconfig
    description: Includes a docker `config.json`
    optional: true
    mountPath: /kaniko/.docker
  results:
  - name: IMAGE-DIGEST
    description: Digest of the image just built.
  steps:
  - name: build-and-push
    workingDir: $(workspaces.source.path)
    image: $(params.BUILDER_IMAGE)
    args:
    - $(params.EXTRA_ARGS[*])
    - --dockerfile=$(params.DOCKERFILE)
    - --context=$(workspaces.source.path)/$(params.CONTEXT)
    - --destination=$(params.IMAGE)
    - --digest-file=/tekton/results/IMAGE-DIGEST
    securityContext:
      runAsUser: 0
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko-source
spec:
  params:
  - name: blob-url
    type: string
  - name: blob-revision
    type: string
  steps:
  - command:
    - bash
    - -cxe
    - |-
      set -o pipefail
      echo $(params.blob-revision)
      cd $(workspaces.output.path)
      curl -SL $(params.blob-url) | tar xvzf -
    image: ghcr.io/vrabbi/golang:latest
    name: extract-source
    resources: {}
  workspaces:
  - name: output
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: export-image-ref
spec:
  params:
  - name: image-url
    type: string
  - name: image-digest
    type: string
  steps:
  - command:
    - bash
    - -cxe
    - |-
      set -o pipefail
      echo $(params.image-url)@$(params.image-digest) | tr -d '\n' | tee $(results.imageRef.path)
    image: ghcr.io/vrabbi/golang:latest
    name: extract-source
    resources: {}
  workspaces:
  - name: output
  results:
  - name: imageRef
    description: The Image Ref to be used by TAP for future supply chain steps
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  annotations:
  name: kaniko-runner
spec:
  params:
  - description: Flux GitRepository URL source
    name: source-url
    type: string
  - description: Flux GitRepository Revision
    name: source-revision
    type: string
  - description: Image path to be pushed to
    name: image_url
    type: string
  results:
  - description: ""
    name: imageRef
    value: $(tasks.export-image-ref.results.imageRef)
  tasks:
  - name: unpack-source
    params:
    - name: blob-url
      value: $(params.source-url)
    - name: blob-revision
      value: $(params.source-revision)
    taskRef:
      kind: Task
      name: kaniko-source
    workspaces:
    - name: output
      workspace: source-ws
  - name: kaniko
    params:
    - name: IMAGE
      value: $(params.image_url)
    runAfter:
    - unpack-source
    taskRef:
      kind: Task
      name: kaniko
    workspaces:
    - name: source
      workspace: source-ws
  - name: export-image-ref
    params:
    - name: image-url
      value: $(params.image_url)
    - name: image-digest
      value: $(tasks.kaniko.results.IMAGE-DIGEST)
    runAfter:
    - kaniko
    taskRef:
      kind: Task
      name: export-image-ref
    workspaces:
    - name: output
      workspace: source-ws
  workspaces:
  - name: source-ws
  - name: dockerconfig
    optional: true
@@ -0,0 +1,22 @@
# Usage
This example uses the RabbitMQ Cluster Operator and the Service Bindings controller to bind a RabbitMQ instance to your workload.
This is implemented in a GitOps workflow, as this is an optimal way to handle backend service bindings when deploying to multiple clusters.
While this example runs everything in the same cluster, the GitOps approach is used to show that it is possible and is a very powerful approach for workload management.
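For reference, the rendered GitOps output ultimately includes a ServiceBinding that ties the RabbitmqCluster to the running application. The sketch below is illustrative only — the apiVersion, resource names, and the kind of the bound workload are assumptions and will depend on the supply chain output and the installed Service Bindings controller version:
```
apiVersion: servicebinding.io/v1alpha3   # assumption: version depends on the installed controller
kind: ServiceBinding
metadata:
  name: sensors-rmq                      # illustrative name only
spec:
  service:
    apiVersion: rabbitmq.com/v1beta1
    kind: RabbitmqCluster
    name: rabbitmqcluster-sample
  workload:
    apiVersion: apps/v1                  # illustrative; the actual bound object is whatever the supply chain deploys
    kind: Deployment
    name: sensors
```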
# Pre Reqs
1. Install the RabbitMQ Cluster Operator:
```bash
kubectl apply -f https://github.com/rabbitmq/cluster-operator/releases/latest/download/cluster-operator.yml
```
2. Create a RabbitMQ cluster using the manifest in this repo (a quick verification sketch follows these steps)
```bash
kubectl apply -f rabbitmq.yaml
```
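Optionally, verify that the operator and cluster are ready before deploying the workload (this assumes the operator's default `rabbitmq-system` namespace):
```bash
kubectl get pods -n rabbitmq-system
kubectl get rabbitmqclusters.rabbitmq.com
```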

# Installation
1. Deploy the workload
```bash
kubectl apply -f workload.yaml
```
2. When the workload is complete, the generated YAML will be uploaded to your configured Git repo
3. Download the file and apply it to your cluster
4. Watch the magic happen! (One way to monitor progress is sketched below.)
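One way to monitor the workload is to query the Cartographer Workload resource and read its status conditions:
```bash
kubectl get workloads.carto.run sensors -n default
kubectl describe workloads.carto.run sensors -n default
```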
@@ -0,0 +1,7 @@
apiVersion: rabbitmq.com/v1beta1
kind: RabbitmqCluster
metadata:
  name: rabbitmqcluster-sample
spec:
  service:
    type: LoadBalancer
@@ -0,0 +1,22 @@
---
apiVersion: carto.run/v1alpha1
kind: Workload
metadata:
  labels:
    apps.tanzu.vmware.com/gitops: "true"
    apps.tanzu.vmware.com/has-bindings: "true"
    apps.tanzu.vmware.com/workload-type: "web"
  name: sensors
  namespace: default
spec:
  serviceClaims:
  - name: rmq
    ref:
      apiVersion: rabbitmq.com/v1beta1
      kind: RabbitmqCluster
      name: rabbitmqcluster-sample
  source:
    git:
      ref:
        branch: v0.2.0
      url: https://github.com/jhvhs/rabbitmq-sample
