diff --git a/docs/architecture.md b/docs/architecture.md index 666fee5e..285414da 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -7,7 +7,8 @@ functions as commands. In the future, an API server may be built from the library as well. Arlon adds CRDs (custom resource definitions) for several custom resources such as ClusterRegistration and Profile. -## Management cluster +## Components +### Management cluster The management cluster is a Kubernetes cluster hosting all the components needed by Arlon, including: @@ -22,7 +23,7 @@ needed by Arlon, including: The user is responsible for supplying the management cluster, and to have access to a kubeconfig granting administrator permissions on the cluster. -## Controller +### Controller The Arlon controller observes and responds to changes in `clusterregistration` custom resources. The Arlon library creates a `clusterregistration` at the @@ -31,7 +32,7 @@ causing the controller to wait for the cluster's kubeconfig to become available, at which point it registers the cluster with ArgoCD to enable manifests described by bundles to be deployed to the cluster. -## Library +### Library The Arlon library is a Go module that contains the functions that communicate with the Management Cluster to manipulate the Arlon state (bundles, profiles, clusterspecs) @@ -39,7 +40,7 @@ and transforms them into git directory structures to drive ArgoCD's gitops engin library is exposed via a CLI utility. In the future, it may also be embodied into a server an exposed via a network API. -## Workspace repository +### Workspace Repository As mentioned earlier, Arlon creates and maintains directory structures in a git repository to drive ArgoCD *sync* operations. @@ -51,7 +52,7 @@ register the workspace registry in ArgoCD before referencing it from Arlon data Starting from release v0.9.0, Arlon now includes two commands to help with managing various git repository URLs. With these commands in place, the `--repo-url` flag in commands requiring a hosted git repository is no longer needed. A more detailed explanation is given in the next [section](#repo-aliases). -### Repo Aliases +#### Repo Aliases A repo(repository) alias allows an Arlon user to register a GitHub repository with ArgoCD and store a local configuration file on their system that can be referenced by the CLI to then determine a repository URL and fetch its credentials when needed. All commands that require a repository, support a `--repo-url` flag also support a `repo-alias` flag to specify an alias instead of an alias, such commands will consider the "default" alias to be used when no `--repo-alias` and no `--repo-url` flags are given. @@ -83,11 +84,11 @@ The structure of the file is as shown: On running `arlon git unregister ALIAS`, it removes that entry from the configuration file. However, it does NOT remove the repository from `argocd`. When the "default" alias is deleted, we also clear the "default" entry from the JSON file. -#### Examples +##### Examples Given below are some examples for registering and unregistering a repository. -##### Registering Repositories +###### Registering Repositories Registering a repository requires the repository link, the GitHub username(`--user`), and a personal access token(`--password`). When the `--password` flag isn't provided at the command line, the CLI will prompt for a password(this is the recommended approach). 
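A short sketch of the recommended interactive flow may help here; it reuses the placeholder repository URL and username from the examples that follow, and the exact prompt text may vary between releases:

```shell
# Register a repository without --password; the CLI prompts for the
# personal access token instead of it being recorded in shell history.
arlon git register https://github.com/GhUser/manifests --user GhUser
```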
@@ -105,7 +106,7 @@ arlon git register https://github.com/GhUser/manifests --user GhUser --password arlon git register https://github.com/GhUser/prod-manifests --user GhUser --alias prod --password $GH_PAT ``` -##### Unregistering Repositories +###### Unregistering Repositories Unregistering an alias only requires a positional argument: the repository alias. diff --git a/docs/concepts.md b/docs/concepts.md index e9374c20..dd6d8d73 100644 --- a/docs/concepts.md +++ b/docs/concepts.md @@ -1,14 +1,20 @@ # Concepts +## Understanding the Fundamentals + +It is important to understand the fundamentals of the underlying technologies that Arlon is built on before you start using it. +We recommend a good understanding of core concepts around [Docker](https://www.docker.com/), containerization, [Kubernetes](https://kubernetes.io/), GitOps and Continuous Delivery. +We also recommend having a basic understanding of application templating technologies such as [Helm](https://helm.sh) and [Kustomize](https://kustomize.io) that are commonly used with Kubernetes. + ## Management cluster -This Kubernetes cluster hosts the following components: +Before you can use Arlon, you need a Kubernetes cluster to host the following components: - ArgoCD - Arlon - Cluster management stacks e.g. Cluster API and/or Crossplane -The Arlon state and controllers reside in the arlon namespace. +The Arlon state and controllers reside in the arlon namespace on this cluster. ## Configuration bundle @@ -17,6 +23,15 @@ produce a set of Kubernetes manifests via a *tool*. This closely follows ArgoCD' definition of *tool types*. Consequently, the list of supported bundle types mirrors ArgoCD's supported set of manifest-producing tools. Each bundle is defined using a Kubernetes ConfigMap resource in the arlon namespace. +Additionally, a bundle can embed the data itself ("static bundle"), or contain a reference +to the data ("dynamic bundle"). A reference can be a URL, GitHub location, or Helm repo location. +The current list of supported bundle types is: + +- manifest_inline: a single manifest yaml file embedded in the resource +- manifest_ref: a reference to a single manifest yaml file +- dir_inline: an embedded tarball that expands to a directory of YAML files +- helm_inline: an embedded Helm chart package +- helm_ref: an external reference to a Helm chart ### Static bundle @@ -41,6 +56,18 @@ by ArgoCD, including plain YAML, Helm and Kustomize. When the user updates a dynamic bundle in git, all clusters consuming that bundle (through a profile specified at cluster creation time) will acquire the change. +### Bundle purpose + +Bundles can specify an optional *purpose* to help classify and organize them. +In the future, Arlon may order bundle installation by purpose (e.g. +install bundles with purpose=*networking* before others), but that is not the +case today. The currently *suggested* purpose values are: + +- networking +- add-on +- data-service +- application + ### Other properties A bundle can also have a comma-separated list of tags, and a description. @@ -51,6 +78,15 @@ Tags can be useful for classifying bundles, for e.g. by type A profile expresses a desired configuration for a Kubernetes cluster. It is just a set of references to bundles (static, dynamic, or a combination). +A profile is composed of: + +- An optional clusterspec. If specified, it allows the profile + to be used to create new clusters.
+ If absent, the profile can only be applied to existing clusters. +- A list of bundles specifying the configuration to apply to the cluster + once it is operational +- An optional list of `values.yaml` settings for any Helm Chart type bundle + in the bundle list A profile can be static or dynamic. ### Static profile @@ -81,13 +117,11 @@ or acquire new bundles in real time. ## Cluster -An Arlon cluster, also known as workload cluster, is a Kubernetes cluster +An Arlon cluster, also called a 'workload cluster', is a Kubernetes cluster that Arlon creates and manages via a git directory structure stored in the workspace repository. -(Under construction) - -## Cluster spec +## Cluster Specification A cluster spec contains desired settings when creating a new cluster. They currently include: @@ -99,6 +133,33 @@ They currently include: - The initial (worker) node count - The Kubernetes version -## Base cluster +## Base Cluster + +NOTE: The 'Base Cluster' is a new and evolved way of specifying cluster configuration. It will become the default mechanism for specifying cluster configuration starting with version 0.10.0 of Arlon. + +A Base Cluster allows you to create workload clusters from a manifest file stored in a git repository. The manifest +typically contains multiple related resources that together define an arbitrarily complex cluster. +If you make subsequent changes to the Base Cluster manifest, workload clusters originally created from it will automatically acquire the changes. +The Base Cluster defines: + +- A predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc. to be deployed in the current namespace +- The specific infrastructure provider to be used (e.g. aws) +- Kubernetes version +- The cluster nodepool type to be used for creating the cluster manifest (e.g. eks, eks-managedmachinepool) + +To learn more about Base Clusters, see [here](./baseclusters.md). + +## Cluster Chart + +The cluster chart is a Helm chart that creates (and optionally applies) the manifests necessary to create a cluster and deploy desired configurations and applications to it as part of cluster creation. The profile's Cluster Specification, bundle list and other settings are used to generate values for the cluster chart, and the chart is deployed as a Helm release into the *arlon* namespace in the management cluster. + +Here is a summary of the kinds of resources generated and deployed by the chart: -To know more about basecluster (Arlon gen2 clusters), read it [here](./baseclusters.md) +- A unique namespace with a name based on the cluster's name. All subsequent + resources below are created inside that namespace. +- The stack-specific resources to create the cluster (e.g. Cluster API resources) +- A ClusterRegistration to automatically register the cluster with ArgoCD +- A GitRepoDir to automatically create a git repo and/or directory to host a copy + of the expanded bundles. Every bundle referenced by the profile is + copied/unpacked into its own subdirectory. +- One ArgoCD Application resource for each bundle. diff --git a/docs/design.md b/docs/design.md deleted file mode 100644 index a3810aa3..00000000 --- a/docs/design.md +++ /dev/null @@ -1,126 +0,0 @@ -# Arlon Design and Concepts - -## Management cluster - -This Kubernetes cluster hosts the following components: - -- ArgoCD -- Arlon -- Cluster management stacks e.g. Cluster API and/or Crossplane - -The Arlon state and controllers reside in the arlon namespace.
- -## Configuration bundle - -A configuration bundle (or just "bundle") is grouping of data files that -produce a set of Kubernetes manifests via a *tool*. This closely follows ArgoCD's -definition of *tool types*. Consequently, the list of supported bundle -types mirrors ArgoCD's supported set of manifest-producing tools. -Each bundle is defined using a Kubernetes ConfigMap resource in the arlo namespace. -Additionally, a bundle can embed the data itself ("static bundle"), or contain a reference -to the data ("dynamic bundle"). A reference can be a URL, GitHub location, or Helm repo location. -The current list of supported bundle types is: - -- manifest_inline: a single manifest yaml file embedded in the resource -- manifest_ref: a reference to a single manifest yaml file -- dir_inline: an embedded tarball that expands to a directory of YAML files -- helm_inline: an embedded Helm chart package -- helm_ref: an external reference to a Helm chart - -### Bundle purpose - -Bundles can specify an optional *purpose* to help classify and organize them. -In the future, Arlon may order bundle installation by purpose order (for e.g. -install bundles with purpose=*networking* before others) but that is not the -case today. The currently *suggested* purpose values are: - -- networking -- add-on -- data-service -- application - -## Profile - -A profile expresses a desired configuration for a Kubernetes cluster. -It is composed of - -- An optional clusterspec. If specified, it allows the profile - to be used to create new clusters. - If absent, the profile can only be applied to existing clusters. -- A list of bundles specifying the configuration to apply onto the cluster - once it is operational -- An optional list of `values.yaml` settings for any Helm Chart type bundle - in the bundle list - -## Cluster - -### Cluster Specification/ Metadata - -A Cluster Specification contains desired settings when creating a new cluster. These settings are the values that define the shape and the configurations of the cluster. - -Currently, there is a difference in the cluster specification for gen1 and gen2 clusters. The main difference in these cluster specifications is that gen2 Cluster Specification allow users to deploy arbitrarily complex clusters using the full Cluster API feature set.This is also closer to the gitops and declarative style of cluster creation and gives users more control over the cluster that they deploy. - -#### gen1 - -A `clusterspec` contains desired settings when creating a new cluster. For gen1 clusters, this Cluster Specification is called [ClusterSpec](https://github.com/arlonproj/arlon/blob/main/docs/concepts.md#cluster-spec). - -Clusterspec currently includes: - -- Stack: the cluster provisioning stack, for e.g. *cluster-api* or *crossplane* -- Provider: the specific cluster management provider under that stack, - if applicable. Example: - for *cluster-api*, the possible values are *eks* and *kubeadm* -- Other settings that specify the "shape" of the cluster, such as the size of - the control plane and the initial number of nodes of the data plane. -- The pod networking technology (under discussion: this may be moved to a - bundle because most if not all CNI providers can be installed as manifests) - -#### gen2 - -for gen2 clusters, the Cluster Specification is called the base cluster, which is described in detail [here](https://github.com/arlonproj/arlon/blob/main/docs/baseclusters.md). 
- -A base cluster manifest consists of: - -- A predefined list of Cluster API objects: Cluster, Machines, Machine Deployments, etc. to be deployed in the current namespace -- The specific infrastructure provider to be used (e.g aws).ß -- Kubernetes version -- Cluster templates/ flavors that need to be used for creating the cluster manifest (e.g eks, eks-managedmachinepool) - -### Cluster Preparation - -Once these cluster specifications are created successfully, the next step is to prepare the cluster for deployment. - -#### gen1 - -Once the clusterspec is created for a gen-1 cluster, there is no need to prepare a workspace repository to create a new cluster. - -#### gen2 - -Once the base cluster manifest is created, the next step is to preare the workspace repository directory in which this base cluster manifest is present. This is explained in detail [here](https://github.com/arlonproj/arlon/blob/main/docs/baseclusters.md#preparation) - -### Cluster Creation - -Now, all the prerequisites for creating a cluster are completed and the cluster can be created/deployed. - -#### Cluster Chart - -The cluster chart is a Helm chart that creates (and optionally applies) the manifests necessary to create a cluster and deploy desired configurations and applications to it as a part of cluster creation, the following resources are created: The profile's Cluster Specification, bundle list and other settings are used to generate values for the cluster chart, and the chart is deployed as a Helm release into the *arlon* namespace in the management cluster. - -Here is a summary of the kinds of resources generated and deployed by the chart: - -- A unique namespace with a name based on the cluster's name. All subsequent - resources below are created inside that namespace. -- The stack-specific resources to create the cluster (for e.g. Cluster API resources) -- A ClusterRegistration to automatically register the cluster with ArgoCD -- A GitRepoDir to automatically create a git repo and/or directory to host a copy - of the expanded bundles. Every bundle referenced by the profile is - copied/unpacked into its own subdirectory. -- One ArgoCD Application resource for each bundle. - -#### gen1 - -Cluster deployment is explained [here](https://github.com/arlonproj/arlon/blob/main/docs/tutorial.md#clusters-gen1) - -#### gen2 - -Base cluster creation is explained [here](https://github.com/arlonproj/arlon/blob/main/docs/baseclusters.md#creation) diff --git a/docs/gen2_Tutorial.md b/docs/gen2_Tutorial.md index 21e477a1..c7714855 100644 --- a/docs/gen2_Tutorial.md +++ b/docs/gen2_Tutorial.md @@ -1,26 +1,19 @@ -# Next-Generation (gen2) Clusters - New in version 0.9.x +# Tutorial -Gen1 clusters are limited in capability by the Helm chart used to deploy the infrastructure resources. -Advanced Cluster API configurations, such as those using multiple MachinePools, each with different -instance types, is not supported. +The following tutorial provides steps to create an AWS EKS cluster using Arlon. Once created, the cluster will be automatically monitored for changes by Arlon. Any changes to the cluster will be auto-corrected to conform to the manifest specified in git. Any updates to the manifest in git will automatically be applied to the cluster. -Gen2 clusters solve this problem by allowing you to create workload clusters from a *base cluster* -that you design and provide in the form of a manifest file stored in a git directory.
The manifest -typically contains multiple related resources that together define an arbitrarily complex cluster. -If you make subsequent changes to the base cluster, workload clusters originally created from it -will automatically acquire the changes. +**NOTE - Base Clusters only support dynamic profiles.** -## Creating Cluster-API cluster manifest +## Pre-requisites -Note: The CAPA version used here is v2.0 and the manifests created here are in accordance with this version. +* The Cluster API AWS Provider (CAPA) version used here is v2.0 and the manifests created here are in accordance with this version. Refer to the [compatibility matrix for Cluster API provider and CAPA versions](https://github.com/kubernetes-sigs/cluster-api-provider-aws#compatibility-with-cluster-api-and-kubernetes-versions) for supported versions. +* Set up the AWS Environment as stated in the [quickstart guide for CAPI](https://cluster-api.sigs.k8s.io/user/quick-start.html) -Refer the [compatibility matrix for Cluster API provider and CAPA versions](https://github.com/kubernetes-sigs/cluster-api-provider-aws#compatibility-with-cluster-api-and-kubernetes-versions) for supported versions. +## 1. Create Cluster-API Cluster Manifest -Before deploying a EKS cluster, make sure to setup the AWS Environment as stated in the [quickstart giude for CAPI](https://cluster-api.sigs.k8s.io/user/quick-start.html) +### Create Manifest for EKS MachineDeployment -### MachineDeployment - -Here is an example of a manifest file that we can use to create a *base cluster*. This manifest file helps in +Here is an example of a manifest file that we can use to create a *Base Cluster*. This manifest file helps in deploying an EKS cluster with 'machine deployment' component from the cluster API (CAPI). This file has been generated by the following command ```shell @@ -113,11 +106,11 @@ spec: template: {} ``` -### AWSManagedMachinePool +### Create Manifest for EKS AWSManagedMachinePool -Initialize the environment for AWSManagedMachinePool as stated [here](https://cluster-api-aws.sigs.k8s.io/topics/machinepools.html#awsmanagedmachinepool) +* Initialize the environment for AWSManagedMachinePool as stated [here](https://cluster-api-aws.sigs.k8s.io/topics/machinepools.html#awsmanagedmachinepool) -Before deploying an EKS cluster, make sure that the MachinePool feature gate is enabled. To do so, run this command: +* Before deploying an EKS cluster, make sure that the MachinePool feature gate is enabled. To do so, run this command: ```shell kubectl describe deployment capa-controller-manager -n capa-system @@ -135,7 +128,7 @@ AutoControllerIdentityCreator=true,BootstrapFormatIgnition=false,ExternalResourc .......... ``` -This manifest file helps in deploying an EKS cluster with 'AWSManagedMachinePool' component from the cluster API (CAPI). This file has been generated by the following command +* The manifest file below helps in deploying an EKS cluster with 'AWSManagedMachinePool' component from the cluster API (CAPI). This file has been generated by the following command: ```shell clusterctl generate cluster awsmanaged-cluster --kubernetes-version v1.22.0 --flavor eks-managedmachinepool > manifest.yaml @@ -205,13 +198,13 @@ metadata: spec: {} ``` -### AWSMachinePool +### Create Manifest for EKS AWSMachinePool An AWSMachinePool corresponds to an AWS AutoScaling Groups, which provides the cloud provider specific resource for orchestrating a group of EC2 machines.
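If the `kubectl describe deployment capa-controller-manager` check above (and the equivalent check below) shows the MachinePool feature gate disabled, one way to enable it is to re-initialize the AWS provider with the corresponding experimental flag turned on. This is only a sketch based on Cluster API's environment-variable convention for experimental features; confirm the exact procedure against the CAPA machine pool documentation linked above:

```shell
# Enable the experimental MachinePool feature, then (re)initialize the AWS provider.
export EXP_MACHINE_POOL=true
clusterctl init --infrastructure aws
```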
-Initialize the environment for AWSMachinePool as stated [here]() +* Initialize the environment for AWSMachinePool as stated [here]() -Before deploying an EKS cluster, make sure that the AWSMachinePool feature gate is enabled. To do so, run this command: +* Before deploying an EKS cluster, make sure that the AWSMachinePool feature gate is enabled. To do so, run this command: ```shell kubectl describe deployment capa-controller-manager -n capa-system @@ -229,7 +222,7 @@ AutoControllerIdentityCreator=true,BootstrapFormatIgnition=false,ExternalResourc .......... ``` -This manifest file helps in deploying an EKS cluster with 'AWSManagedMachinePool' component from the cluster API (CAPI). This file has been generated by the following command +* The below manifest file helps in deploying an EKS cluster with 'AWSManagedMachinePool' component from the cluster API (CAPI). This file has been generated by the following command ```shell clusterctl generate cluster awsmanaged-cluster --kubernetes-version v1.22.0 --flavor eks-machinepool > manifest.yaml @@ -316,7 +309,7 @@ metadata: spec: {} ``` -## gen2 cluster creation using Arlon +## 2. Create Cluster using Arlon This manifest file needs to be pushed to the workspace repository before the manifest directory is prepped and then validated. @@ -324,7 +317,7 @@ Before a manifest directory can be used as a base cluster, it must first be "pre by Arlon. The "prep" phase makes minor changes to the directory and manifest to help Arlon deploy multiple copies of the cluster without naming conflicts. -## manifest directory preparation +### Prepare the Manifest Directory To prepare a git directory to serve as base cluster, use the `basecluster preparegit` command: @@ -338,7 +331,7 @@ arlon basecluster preparegit --repo-path [--repo-revision revi arlon basecluster preparegit --repo-alias prod --repo-path [--repo-revision revision] ``` -## manifest directory validation +### Validate the Manifest Directory Post the successful preparation of the basecluster manifest directory using `basecluster preparegit`, the basecluster manifest directory needs to be validated before the basecluster is created. @@ -354,7 +347,7 @@ arlon basecluster validategit --repo-path [--repo-revision rev arlon basecluster validategit --repo-alias prod --repo-path [--repo-revision revision] ``` -## gen2 cluster creation +### Create Cluster **Note: Base clusters only support dynamic profiles.** @@ -370,7 +363,6 @@ arlon cluster create --cluster-name --repo-path arlon cluster create --cluster-name --repo-alias prod --repo-path [--output-yaml] [--profile ] [--repo-revision ] ``` - ## gen2 cluster creation with overrides We call the concept of constructing various clusters with patches from the same base manifest as cluster overrides. @@ -409,7 +401,7 @@ Runnning the above command will create a cluster named folder in patch repo path Note that the patch file repo url can be different or same from the base manifest repo url acoording to the requirement of the user. A user can use a different repo url for string patch files for the cluster. -## gen2 cluster update +## 3. Update Cluster To update the profiles of a gen2 workload cluster: @@ -426,7 +418,7 @@ to the existing cluster which will create profile app in argocd along with bundl An existing profile can be deleted from the cluster as well using the above command. Executing this command will delete the profile app and all the bundles associated with the profile in argocd. -## gen2 cluster deletion +## 4. 
Delete Cluster To destroy a gen2 workload cluster: diff --git a/docs/installation.md b/docs/installation.md index f961c611..b420477e 100644 --- a/docs/installation.md +++ b/docs/installation.md @@ -1,23 +1,52 @@ # Installation +For a quickstart minimal demonstration setup, follow the instructions to set up a KIND based testbed with Arlon and ArgoCD running [here](https://github.com/arlonproj/arlon/blob/main/testing/README.md). + +Please follow the manual instructions in [this](#customized-setup) section for a customized setup, or refer to the instructions for automated installation [here](#automatic-setup). + +# Pre-requisites + +Make sure that you have: +- A 'Management cluster'. You can use any Kubernetes cluster that you have admin access to. +- The [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/) command line tool installed and in your path +- A valid [kubeconfig](https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/) file (default location is `~/.kube/config`). +- The `KUBECONFIG` environment variable pointing to the right file, with the correct context set +- A hosted Git repository that will be used to store Arlon artifacts, with at least a `README` file present. +- Pre-requisites for supported Cluster API infrastructure providers (AWS and Docker as of now). + +# Automatic Setup + +## 1. Download Arlon CLI + Arlon CLI downloads are provided on GitHub. The CLI is not a self-contained standalone executable though. It is required to point the CLI to a management cluster and set up the Arlon controller in this management cluster. -For a quickstart minimal demonstration setup, follow the instructions to set up a KIND based testbed with Arlon and ArgoCD running [here](https://github.com/arlonproj/arlon/blob/main/testing/README.md). +* Download the CLI for the [latest release](https://github.com/arlonproj/arlon/releases/latest) from GitHub. +Currently, Linux and MacOS operating systems are supported. +* Uncompress the tarball, rename the binary to `arlon`, and add it to your PATH +* Run `arlon verify` to check for prerequisites. +* Run `arlon install` to install any missing prerequisites. -Please follow the manual instructions in [this](#customised-setup) section for a customised setup or refer the instructions for automated installation [here](#automatic-setup). -# Customised Setup +## 2. Set up Arlon -## Management cluster +Arlon CLI provides an `init` command to install "itself" on a management cluster. +This command performs a basic setup of `argocd` (if needed) and the `arlon` controller. +If `argocd` is already installed, it assumes that the `admin` password is the same as the one stored in the `argocd-initial-admin-secret` Secret and that `argocd` resides in the `argocd` namespace. +A similar assumption is made for detecting an existing Arlon installation: the existence of the `arlon` namespace is taken to mean the Arlon controller is already installed. -You can use any Kubernetes cluster that you have admin access to. Ensure: +* To start the installation process, run the following command: +`arlon init -e --username --repoURL --password --examples -y`. +* This installs the Arlon controller and ArgoCD (if not already present). +* The `-e` flag adds example base cluster manifests to the repository specified by `--repoURL`, using the given credentials. To skip the examples, just remove the `-e` flag. +* The `-y` flag enables silent installation, which is useful for scripts. For an interactive installation, exclude the `-y` (or `--no-confirm`) flag.
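As an illustration, a scripted invocation of the command above might look like the following; the username, repository URL, and `GH_PAT` variable are placeholder values rather than output from an actual run:

```shell
# Non-interactive install that also pushes the example base cluster manifests (-e).
# GH_PAT holds a GitHub personal access token with write access to the repository.
arlon init -e --username GhUser --repoURL https://github.com/GhUser/manifests --password "$GH_PAT" -y
```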
-- `kubectl` is in your path -- `KUBECONFIG` is pointing to the right file and the context set properly -## ArgoCD +# Customized Setup + +Use the customized setup if you would like to understand and potentially customize the different steps of the Arlon installation, for example if you'd like to use Arlon with an existing installation of ArgoCD. + +## 1. Install ArgoCD - Follow steps 1-4 of the [ArgoCD installation guide](https://argo-cd.readthedocs.io/en/stable/getting_started/) to install ArgoCD onto your management cluster. After this step, you should be logged in as `admin` and a config file was created at `${HOME}/.config/argocd/config` @@ -51,7 +80,7 @@ kind: ConfigMap the previous step. Save changes. This file will be used to configure the Arlon controller's ArgoCD credentials during the next steps. -## Arlon controller +## 2. Install Arlon controller - Create the arlon namespace: `kubectl create ns arlon` - Create the ArgoCD credentials secret from the temporary config file: @@ -62,7 +91,7 @@ kind: ConfigMap - Deploy the controller: `kubectl apply -f deploy/manifests/` - Ensure the controller eventually enters the Running state: `watch kubectl -n arlon get pod` -## Arlon CLI +## 3. Install Arlon CLI Download the CLI for the [latest release](https://github.com/arlonproj/arlon/releases/latest) from GitHub. Currently, Linux and MacOS operating systems are supported. @@ -81,12 +110,13 @@ The following instructions are to manually build CLI from this code repository. (e.g. `/usr/local/bin`) included in your ${PATH} to the `bin/arlon` binary to make it easy to invoke the command. -## Cluster orchestration API providers +## 4. Install Cluster Orchestration API providers Arlon currently supports Cluster API on AWS cloud. It also has experimental -support for Crossplane on AWS. +support for Crossplane on AWS. +Install one of the following. -### Cluster API +### Install Cluster API Using the [Cluster API Quickstart Guide](https://cluster-api.sigs.k8s.io/user/quick-start.html) as reference, complete these steps: @@ -96,7 +126,7 @@ as reference, complete these steps: In particular, follow instructions for your specific cloud provider (AWS in this example) Ensure `clusterctl init` completes successfully and produces the expected output. -### Crossplane (experimental) +### Install Crossplane (experimental) Using the [Upbound AWS Reference Platform Quickstart Guide](https://github.com/upbound/platform-ref-aws#quick-start) as reference, complete these steps: @@ -115,19 +145,3 @@ cluster can noticeably slow down kubectl, and you may see a warning that looks l I0222 17:31:14.112689 27922 request.go:668] Waited for 1.046146023s due to client-side throttling, not priority and fairness, request: GET:https://AA61XXXXXXXXXXX.gr7.us-west-2.eks.amazonaws.com/apis/servicediscovery.aws.crossplane.io/v1alpha1?timeout=32s ``` -# Automatic Setup - -Arlon CLI provides an `init` command to install "itself" on a management cluster. -This command performs a basic setup of `argocd`(if needed) and `arlon` controller. -If `argocd` is already installed, it assumes that `admin` password is the same as in `argocd-initial-admin-secret` ConfigMap and that `argocd` resides in the `argocd` namespace. -Similar assumptions are made for detecting Arlon installation as well: assuming that the existence of `arlon` namespace means Arlon controller exists. -To install Arlon controller using the init command these pre-requisites need to be met: - -- A valid kubeconfig pointing to the management cluster.
-- A hosted Git repository with at least a `README` file present. -- Pre-requisites for supported CAPI infrastructure providers(AWS and Docker as of now). - -To start the installation process, simply run `arlon init -e --username --repoURL --password --examples -y`. -This installs the controller, argocd(if not already present) `-e` flag adds basecluster manifests to the for using the given credentials. To not add examples, just remove the `-e` flag. -The `-y` flag refers to silent installation, which is useful for scripts. -For an interactive installation, exclude the `-y` or `--no-confirm` flag. diff --git a/mkdocs.yml b/mkdocs.yml index 97731dd2..6fa646a5 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -51,7 +51,6 @@ nav: - Overview: 'README.md' - Concepts: 'concepts.md' - Base clusters: 'baseclusters.md' - - Design: 'design.md' - Architecture: 'architecture.md' - Installation: 'installation.md' - Tutorial: 'gen2_Tutorial.md'