diff --git a/src/reference-designs/patch.diff b/src/reference-designs/patch.diff
deleted file mode 100644
index eb538713..00000000
--- a/src/reference-designs/patch.diff
+++ /dev/null
@@ -1,219 +0,0 @@
-diff --git a/src/reference-designs/tap-architecture-dev-components.md b/src/reference-designs/tap-architecture-dev-components.md
-index 715d5ac..e053ed9 100644
---- a/src/reference-designs/tap-architecture-dev-components.md
-+++ b/src/reference-designs/tap-architecture-dev-components.md
-@@ -27,25 +27,25 @@ Tanzu Application Platform Dev components include the following options:
- * Supply Chain Security Tools (SCST) - Scan
- 
- ## Accelerator
- 
- The Application Accelerator user interface (UI) enables you to discover available accelerators, configure them, and generate new projects to download.
- 
- ### Accelerator Architecture
- 
- ![Accelerator Architecture](img/tap-architecture-planning/accelerator-arch.jpg)
- 
--Application Accelerator allows you to generate new projects from files in Git repositories. An `accelerator.yaml` file in the repository declares input options for the accelerator. Accelerator custom resources (CRs) control which repositories appear in the Application Accelerator UI. The Accelerator controller reconciles the CRs with a Flux2 Source Controller to fetch files from GitHub or GitLab. For more information, see [Tanzu Application Platform Accelerator](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/application-accelerator-about-application-accelerator.html).
-+The Application Accelerator allows you to generate new projects from files in Git repositories. An `accelerator.yaml` file in the repository declares input options for the accelerator. The Accelerator custom resources (CRs) control which repositories appear in the Application Accelerator UI. The Accelerator controller reconciles the CRs with a Flux2 Source Controller to fetch files from GitHub or GitLab. For more information, see [Tanzu Application Platform Accelerator](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/application-accelerator-about-application-accelerator.html).
- 
- ## API Portal
- 
--API portal enables API consumers to find APIs they can use in their own applications. API portal assembles its dashboard and detailed API documentation views by ingesting OpenAPI documentation from the source URLs. For more information, see [Tanzu Application Platform API portal](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/api-portal-install-api-portal.html).
-+The API portal enables API consumers to find APIs that they can use in their own applications. The API portal assembles its dashboard and detailed API documentation views by ingesting OpenAPI documentation from the source URLs. For more information, see [Tanzu Application Platform API portal](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/api-portal-install-api-portal.html).
- 
- ## AppSSO
- 
- The AppSSO conforms to the OIDC standard, and enables the use of external identity providers for user management, registration, and authentication. It supports OIDC providers such as Microsoft Active Directory, Okta, Google, Facebook, and so on.
- 
- ### AppSSO Architecture
- 
- ![AppSSO Architecture](img/tap-architecture-planning/appsso-arch.jpg)
- 
- The following components must be installed on the `run` cluster:
-@@ -99,25 +99,25 @@ spec:
-   - "authorization_code"
-   scopes:
-   - name: "openid"
-   - name: "email"
-   - name: "profile"
-   - name: "roles"
-   - name: "message.read"
- 
- ```
- 
--The settings in `ClientRegistration` contain the redirectURL pointing to a page in the end-user application to be redirected to after successful authentication. The settings here also reference the AuthServer by its pod’s labels on behalf of the end-user application. For more information, see [Tanzu Application Platform AppSSO](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/app-sso-about.html).
-+The settings in `ClientRegistration` contain the redirectURL pointing to a page in the end-user application to be redirected to after the successful authentication. The settings here also reference the AuthServer by its pod’s labels on behalf of the end-user application. For more information, see [Tanzu Application Platform AppSSO](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/app-sso-about.html).
- 
- ## API Auto Registration
- 
--API Auto Registration automates the registration of API specifications defined in a workload’s configuration and makes them accessible in the Tanzu Application Platform GUI without additional steps. An automated workflow, using a supply chain, leverages API Auto Registration to create and manage a Kubernetes Custom Resource (CR) of kind [APIDescriptor](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/api-auto-registration-key-concepts.html). It automatically generates and provides API specifications in OpenAPI, AsyncAPI, GraphQL, or gRPC API formats to the Tanzu Application GUI [API Documentation plugin](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/tap-gui-plugins-api-docs.html).
-+The API Auto Registration automates the registration of API specifications defined in a workload’s configuration, and makes them accessible in the Tanzu Application Platform GUI without additional steps. An automated workflow, using a supply chain, leverages API Auto Registration to create and manage a Kubernetes Custom Resource (CR) of kind [APIDescriptor](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/api-auto-registration-key-concepts.html). It automatically generates and provides API specifications in OpenAPI, AsyncAPI, GraphQL, or gRPC API formats to the Tanzu Application GUI [API Documentation plugin](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/tap-gui-plugins-api-docs.html).
- 
- ### Tanzu Application Platform GUI Automated Workflow
- 
- ![TAP GUI automated workflow](img/tap-architecture-planning/api-auto-regis-workflow.jpg)
- 
- The API Documentation plug-in displays a list of APIs provided by components registered in the Catalog providing an easy way for developers to find APIs in a single location.
- 
- API Auto Registration components are installed by the `run` and `full` profiles.
- 
- ### Recommendations
-@@ -201,15 +201,15 @@ grype:
-   importFromNamespace: metadata-store-secrets
- ```
- 
- ### SCST Dependencies
- 
- * Scan requires the installation of SCST Store on the `view` cluster to send and save the results of source and image scans.
- 
- * The `view` cluster certificate and token must be extracted and set in the `build` profile to enable the scanner components to communicate with the `view` cluster where the results of scans are stored and available for inquiry.
- 
- 
--## CI/CD Pipelines for custom supply chain
-+## CI/CD Pipelines for Custom Supply Chain
- 
- Tanzu Application Platform supports Tekton pipelines using the `tekton-pipelines package` to customize the supply chain. It allows developers to build, test, and deploy across cloud providers and on-premises systems. For more information, see [Tekton documentation](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/tekton-tekton-about.html).
- 
- To learn more about all Tanzu Application Platform components, see [Component documentation](https://docs.vmware.com/en/VMware-Tanzu-Application-Platform/1.7/tap/components.html).
-diff --git a/src/reference-designs/tap-architecture-planning.md b/src/reference-designs/tap-architecture-planning.md
-index dff6930..30cd7fc 100644
---- a/src/reference-designs/tap-architecture-planning.md
-+++ b/src/reference-designs/tap-architecture-planning.md
-@@ -9,31 +9,31 @@ Design decisions enumerated in this document exemplify the main design issues yo
- 
- ## Cluster Layout
- 
- For production deployments, VMware recommends two fully independent instances of Tanzu Application Platform. One instance for operators to conduct their own reliability tests, and the other instance hosts development, test, QA, and production environments isolated by separate clusters.
- 
- | Decision ID | Design Decision | Justification | Implication
- |--- |--- |--- |---
- |TAP-001 | Install using multiple clusters. | Utilizing multiple clusters allows you to separate your workloads and environments while still leveraging combined build infrastructure. | Multiple cluster design requires more installation effort and possibly more maintenance versus a single cluster design.
- |TAP-002 | Create an operator sandbox environment. | An operator sandbox environment allows platform operators to test upgrades and architectural changes before introducing them to production. | An operator sandbox requires additional computer resources.
- |TAP-003 | Utilize a single Build Cluster and multiple Run Clusters | Utilizing a single Build Cluster with multiple Run Clusters creates the correct production environment for the build system vs separating into dev/test/qa/prod build systems. Additionally, a single Build Cluster ensures that the container image does not change between environments. A single Build Cluster is also easier to manage than separate components. | *Changes lower environments are not as separated as having separate build environments.*
--|TAP-004 | Utilize a View Cluster | Utilizing a single View Cluster with multiple Run Clusters creates the correct production perception for the common systems like developer portal, app resource monitoring,appsso etc. | None
-+|TAP-004 | Utilize a View Cluster | Utilizing a single View Cluster with multiple Run Clusters creates the correct production perception for the common systems like developer portal, app resource monitoring, appsso, and so on. | None
- 
- ## Build Cluster Requirements
- 
- The Build Cluster is responsible for taking a developer's source code commits and applying a supply chain that will produce a container image and Kubernetes manifests for deploying on a Run Cluster.
- 
- The Kubernetes Build Cluster will see bursty workloads as each build or series of builds kicks off. The Build Cluster will see very high pod scheduling loads as these events happen. The amount of resources assigned to the Build Cluster will directly correlate to how quickly parallel builds are able to be completed.
- 
- ### Kubernetes Requirements - Build Cluster
- 
--* Supported Kubernetes versions are 1.26,1.27,1.28.
-+* Supported Kubernetes versions are 1.26, 1.27, 1.28.
- * Default storage class.
- * At least 16 GB available memory that is allocatable across clusters, with at least 8 GB per node.
- * 12 vCPUs available across all nodes.
- * 100 GB of disk space available per node.
- * Logging is enabled and targets the desired application logging platform.
- * Monitoring is enabled and targets the desired application observability platform.
- * Container Storage Interface (CSI) Driver is installed.
- 
- 
- ### Recommendations - Build Cluster
-@@ -125,21 +125,21 @@ tap_telemetry:
- ```
- 
- ## Run Cluster Requirements
- 
- The Run Cluster reads the container image and Kubernetes resources created by the Build Cluster and runs them as defined in the `Deliverable` object for each application.
- 
- The Run Cluster's requirements are driven primarily by the applications that it will run. Horizontal and vertical scale is determined based on the type of applications that will be scheduled.
- 
- ### Kubernetes Requirements - Run Cluster
- 
--* Supported Kubernetes versions are 1.26,1.27,1.28.
-+* Supported Kubernetes versions are 1.26, 1.27, 1.28.
- * LoadBalancer for ingress controller (requires 1 external IP address).
- * Default storage class.
- * At least 16 GB available memory that is allocatable across clusters, with at least 16 GB per node.
- * 12 vCPUs available across all nodes.
- * 100 GB of disk space available per node.
- * Logging is enabled and targets the desired application logging platform.
- * Monitoring is enabled and targets the desired application observability platform.
- * Container Storage Interface (CSI) Driver is installed.
- * A subdomain for the host name that you point at the tanzu-shared-ingress service’s external IP address.
- 
-@@ -201,27 +201,27 @@ appliveview_connector:
-     ingressEnabled: true
-     host: appliveview.VIEW-CLUSTER-INGRESS-DOMAIN
- 
- tap_telemetry:
-   customer_entitlement_account_number: "CUSTOMER-ENTITLEMENT-ACCOUNT-NUMBER" # (Optional) Identify data for creating Tanzu Application Platform usage reports.
- 
- ```
- 
- ## View Cluster Requirements
- 
--The View Cluster is designed to run the web applications for Tanzu Application Platform. specifically,Tanzu Application Portal GUI/Tanzu Developer Portal(TDP), and Tanzu API Portal.
-+The View Cluster is designed to run the web applications for Tanzu Application Platform; specifically, Tanzu Application Portal GUI/Tanzu Developer Portal(TDP), and Tanzu API Portal.
- 
- The View Cluster's requirements are driven primarily by the respective applications that it will be running.
- 
- ### Kubernetes Requirements - View Cluster
- 
--* Supported Kubernetes versions are 1.26,1.27,1.28.
-+* Supported Kubernetes versions are 1.26, 1.27, 1.28.
- * LoadBalancer for ingress controller (requires 1 external IP address).
- * Default storage class.
- * At least 16 GB available memory that is allocatable across clusters, with at least 8 GB per node.
- * 8 vCPUs available across all nodes.
- * 100 GB of disk space available per node.
- * Logging is enabled and targets the desired application logging platform.
- * Monitoring is enabled and targets the desired application observability platform.
- * Container Storage Interface (CSI) Driver is installed.
- * A subdomain for the host name that you point at the tanzu-shared-ingress service’s external IP address.
- 
-@@ -301,21 +301,21 @@ tap_telemetry:
- 
- ## Iterate Cluster Requirements
- 
- The Iterate Cluster is for "inner loop" development iteration. Developers connect to the Iterate Cluster via their IDE to rapidly iterate on new software features. The Iterate Cluster operates distinctly from the outer loop infrastructure. Each developer should be given their own namespace within the Iterate Cluster during their platform onboarding.
- 
- ![Iterate Cluster components](img/tap-architecture-planning/iterate-cluster.jpg)
- 
- 
- ### Kubernetes Requirements - Iterate Cluster
- 
--* Supported Kubernetes versions are 1.26,1.27,1.28.
-+* Supported Kubernetes versions are 1.26, 1.27, 1.28.
- * LoadBalancer for ingress controller (2 external IP addresses).
- * Default storage class.
- * At least 16 GB available memory that is allocatable across clusters, with at least 8 GB per node.
- * 12 vCPUs available across all nodes.
- * 100 GB of disk space available per node.
- * Logging is enabled and targets the desired application logging platform.
- * Monitoring is enabled and targets the desired application observability platform.
- * Container Storage Interface (CSI) Driver is installed.
- * A subdomain for the host name that you point at the tanzu-shared-ingress service’s external IP address.
- 
-diff --git a/src/reference-designs/tap-networking.md b/src/reference-designs/tap-networking.md
-index 628269b..4d8fe35 100644
---- a/src/reference-designs/tap-networking.md
-+++ b/src/reference-designs/tap-networking.md
-@@ -12,21 +12,21 @@ The following table describes the networking flow in the Tanzu Application Platf
- |Authentication (Pinniped concierge is an external Component, not part of Tanzu Application Platform.Coexist in same cluster) | View,Build ,Run Cluster | View (Pinniped Supervisor) | 443 | https | Traffic between pinniped supervisor to pinniped concierge and vice versa.
- |API Auto Registration | Run | View | 443 | https | Traffic to create app accelerator from git from internet egress.
- |Contour ingress /Envoy proxy | External Load Balancer | Run,View,Iterate | 443,80 | https(443) , http(80) | Shared ingress for view/run/iterate cluster.
- |Fluxcd source controller | Run,Build | External git/helm repository | 443,80 | https(443) , http(80) | Traffic to pull or push from git repo from internet egress.
--|Supply Chain Security Tools/Metadata| Build (security scan plugin)| View (Gui CVE Dashboard) | 443 | WebSocket | Traffic routes through shared ingress to report the scan results to view gui cve’s dashboard.
-+|Supply Chain Security Tools/Metadata| Build (security scan plug-in)| View (Gui CVE Dashboard) | 443 | WebSocket | Traffic routes through shared ingress to report the scan results to view gui cve’s dashboard.
- |Tanzu Application Platform Gui web| Contour ingress| View (tap-gui package) | 443 | https | Traffic routes through shared ingress for external web url. Contour/envoy proxy access tap-gui service inside cluster.
- |Gui backend| View | Gui backend DB(postgres) | 5432 | tcp | Gui backend DB within the k8s cluster to persist tap gui data (read/write), this traffic remains in-cluster if the database is hosted inside the same cluster.
- |Tanzu Build Service| Build | Third party dependencies repositories | 443 | https | Downloading artifacts necessary to compile applications in different languages (Python, Java, .NET, JavaScript, golang, etc.).
- |Tanzu Build Service| Build | Container images used for compilation of developer’s custom containers repository | 443 | https | Container images from the relocated Tanzu Build Service buildpacks designated container repository.
- |Config Writer (Tekton task in OOTB supply chain)| Build | Container Registry | 443 | https / registry v2 protocol (imgpkg) | Push app configuration to registry for later deployment.
- |Knative Serving (Cloud Native Runtimes)| Run, Iterate | Container Registry | 443 | https / registry v2 protocol (imgpkg) | Resolve tagged images to digests.
- |Namespace Provisioner| Run, Iterate | Container Registry | 443 | https / registry v2 protocol (imgpkg) | Read TAP default per-namespace configuration.
- |Namespace Provisioner| Run, Iterate | Git Repository | 443 ,22 | https(443) , ssh(22) / git fetch protocol | Read user customized additional per-namespace resources from internet egress.
- |Knative Eventing| Run, Iterate | Various endpoints | 443 | https | Eventing sources include cloud providers from internet egress
- |View Gui Runtime Resources plugin| View k8 cluster | Build,Run k8 API server | 6443 | tcp | To access Build, Run clusters k8 resources in tap gui runtime resource plugin
\ No newline at end of file
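
Note for reviewers: the deleted `patch.diff` above shows only the tail of the `ClientRegistration` example (the hunk at `@@ -99,25 +99,25 @@ spec:`), so the resource it edits is easy to misread. The following is a minimal sketch of a complete AppSSO `ClientRegistration` assembled around that fragment — the grant type and scope names come from the patch, while the resource names, namespace, label, and redirect URI are illustrative assumptions, not values taken from the patch:

```yaml
# Hedged sketch of an AppSSO ClientRegistration. Only the grant type and the
# scope list below appear in the deleted patch; every name, the namespace,
# the label selector, and the redirect URI are illustrative assumptions.
apiVersion: sso.apps.tanzu.vmware.com/v1alpha1
kind: ClientRegistration
metadata:
  name: my-client-registration    # assumed name
  namespace: my-apps              # assumed namespace
spec:
  # References the AuthServer by its pod labels, as the surrounding text describes.
  authServerSelector:
    matchLabels:
      name: my-authserver         # assumed label
  # Page in the end-user application to redirect to after successful authentication.
  redirectURIs:
  - "https://my-app.example.com/login/callback"   # assumed URL
  authorizationGrantTypes:
  - "authorization_code"
  scopes:
  - name: "openid"
  - name: "email"
  - name: "profile"
  - name: "roles"
  - name: "message.read"
```

Applying a resource of this kind on the `run` cluster causes AppSSO to issue client credentials that the workload can consume; the exact field names should be checked against the AppSSO documentation for the installed Tanzu Application Platform version.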