diff --git a/.nojekyll b/.nojekyll new file mode 100644 index 00000000..e69de29b diff --git a/404.html b/404.html new file mode 100644 index 00000000..82ff1517 --- /dev/null +++ b/404.html @@ -0,0 +1,1688 @@ + + + + + + + + + + + + + + + + + + + Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ +

404 - Not found

+ +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/CNAME b/CNAME new file mode 100644 index 00000000..50343c5a --- /dev/null +++ b/CNAME @@ -0,0 +1 @@ +docs.cluster.dev diff --git a/DevOpsDays21/index.html b/DevOpsDays21/index.html new file mode 100644 index 00000000..4a90d6ba --- /dev/null +++ b/DevOpsDays21/index.html @@ -0,0 +1,1848 @@ + + + + + + + + + + + + + + + + + + + + + DevOps Days 2021 - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

DevOps Days 2021

+

Hi Guys, I'm Vova from SHALB!

+

At SHALB we build and support hundreds of infrastructures, so we have experience and lessons learned that we'd like to share.

+

Problems of the modern Cloud Native infrastructures

+

Multiple technologies need to be coupled

+

Infrastructure code for a complete infra combines different technologies: +Terraform, Helm, Docker, Bash, Ansible, Cloud-Init, CI/CD scripts, SQL, GitOps applications, secrets, etc.

+

Each comes with its own DSL: YAML, HCL, Go templates, JSON(net).

+

And each has its own code style: declarative, imperative, interrogative.
+Each diffs changes differently: two-way or three-way merges.
+Even a single tool can use different patching methods, like patchesStrategicMerge and patchesJson6902 in kustomize.

+

So you need to wire all that stuff together to be able to spawn a whole infra in one shot.
+And that one shot needs to be fully automated so it can be GitOps-ed :)!

+

Even a super-powerful tool has its limits

+

That's why:

+
    +
  • Terragrunt, Terraspace and Atlantis exist for Terraform.
  • +
  • Helmfile and Helm Operator exist for Helm.
  • +
  • and Helm exists for K8s YAML :).
  • +
+

It's hard to deal with variables and secrets

+
    +
  1. +

    Variables have to be passed between different technologies, sometimes in unpredictable sequences.
    +For example, you may need to pass the IAM role ARN created by Terraform to the Cert-Manager controller deployed with Helm values.

    +
  2. +
  3. +

    Variables have to be passed across different infrastructures, even ones located in different clouds.
    +Imagine you need to obtain a DNS zone from CloudFlare, then set 'NS' records in AWS Route53, and then grant an External-DNS controller, which is deployed in + on-prem K8s provisioned with Rancher, permission to change this zone in AWS...

    +
  4. +
  5. +

    Secrets need to be secured and shared across different team members and teams.
    +Team members sometimes leave, or accounts get compromised, and you need to completely revoke their access across a set of infras in one shot.

    +
  6. +
  7. +

    Variables should be decoupled from the infrastructure pattern itself and need wise, sane defaults. + If you hardcode variables, the code is hard to reuse.

    +
  8. +
+

Development and Testing

+

You'd like to maximize reuse of existing infrastructure patterns:

+
- Terraform modules
+- Helm Charts
+- K8s Operators
+- Dockerfile's
+
+ +

Pin versions for everything you have in your infra, for example:
+Pin the aws cli and terraform binary versions, along with the Helm and Prometheus operator versions and your private kustomize application.

+

Available solutions

+

To couple their infrastructure with some 'glue', most engineers take one of several approaches:

+
    +
  • Sequential applying in CI/CD, e.g. a Jenkins/GitLab job that deploys infra components one by one.
  • +
  • Custom Bash scripts and Makefiles that pull code from different repos and apply it in a hardcoded sequence.
  • +
  • Some struggle to write everything with one technology: e.g. Pulumi (but you need to know how to code in JS, Go, or .NET), or Terraform (and you fail) :)
  • +
  • Some rely on an existing API architecture (Kubernetes), like Crossplane.
  • +
+

We created our own tool - cluster.dev, or 'cdev'

+

Its capabilities:

+
    +
  • Re-using all existing private and public Terraform modules and Helm charts.
  • +
  • Templating anything with Go-template functions, even Terraform modules in Helm-style templates.
  • +
  • Applying changes to multiple infrastructures in parallel.
  • +
  • Using the same global variables and secrets across different infrastructures, clouds and technologies.
  • +
  • Creating and managing secrets with SOPS or cloud secret storages.
  • +
  • Generating ready-to-use Terraform code.
  • +
+

Short Demo

+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/ROADMAP/index.html b/ROADMAP/index.html new file mode 100644 index 00000000..ed292909 --- /dev/null +++ b/ROADMAP/index.html @@ -0,0 +1,1813 @@ + + + + + + + + + + + + + + + + + + + + + Project Roadmap - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ + + + Skip to content + + +
+
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Project Roadmap

+

v.0.1.x - Basic Scenario

+
    +
  • Create a state storage (AWS S3+Dynamo) for infrastructure resources
  • +
  • Deploy Kubernetes (Minikube) in AWS using the default VPC
  • +
  • Provision Kubernetes with addons: Ingress-Nginx, Load Balancer, Cert-Manager, ExtDNS, ArgoCD
  • +
  • Deploy a sample "WordPress" application to Kubernetes cluster using ArgoCD
  • +
  • Delivered as GitHub Actions and Docker Image
  • +
+

v0.2.x - Bash-based PoC

+
    +
  • Deliver a default DNS sub-zone with cluster creation: + *.username-clustername.cluster.dev
  • +
  • Create a cluster.dev backend to register newly created clusters
  • +
  • Support for GitLab CI Pipelines
  • +
  • ArgoCD sample applications (raw manifests, local helm chart, public helm chart)
  • +
  • Support for DigitalOcean Kubernetes cluster 59
  • +
  • DigitalOcean Domains sub-zones 65
  • +
  • AWS EKS provisioning. Spot and Mixed ASG support.
  • +
  • Support for Operator Lifecycle Manager
  • +
+

v0.3.x - Go-based Beta

+
    +
  • Go-based reconciler
  • +
  • External secrets management with Sops and godaddy/kubernetes-external-secrets
  • +
  • Team and user management with Keycloak
  • +
  • Apps deployment: Kubernetes Dashboard, Grafana and Kibana.
  • +
  • OIDC access to kubeconfig with Keycloak and jetstack/kube-oidc-proxy/ 53
  • +
  • SSO access to ArgoCD and base applications: Kubernetes Dashboard, Grafana, Kibana
  • +
  • OIDC integration with GitHub, GitLab, Google Auth, Okta
  • +
+

v0.4.x

+
    +
  • CLI Installer 54
  • +
  • Add GitHub runner and test GitHub Action Continuous Integration workflow
  • +
  • Argo Workflows for DAG and CI tasks inside Kubernetes cluster
  • +
  • Google Cloud Platform Kubernetes (GKE) support
  • +
  • Custom Terraform modules and reconciliation
  • +
  • Kind provisioner
  • +
+

v0.5.x

+
    +
  • kops provisioner support
  • +
  • k3s provisioner
  • +
  • Cost $$$ estimation during installation
  • +
  • Web user interface design
  • +
+

v0.6.x

+
    +
  • Rancher RKE provisioner support
  • +
  • Multi-cluster support for user management and SSO
  • +
  • Multi-cluster support for ArgoCD
  • +
  • Crossplane integration
  • +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/assets/images/favicon.png b/assets/images/favicon.png new file mode 100644 index 00000000..1cf13b9f Binary files /dev/null and b/assets/images/favicon.png differ diff --git a/assets/javascripts/bundle.aecac24b.min.js b/assets/javascripts/bundle.aecac24b.min.js new file mode 100644 index 00000000..464603d8 --- /dev/null +++ b/assets/javascripts/bundle.aecac24b.min.js @@ -0,0 +1,29 @@ +"use strict";(()=>{var wi=Object.create;var ur=Object.defineProperty;var Si=Object.getOwnPropertyDescriptor;var Ti=Object.getOwnPropertyNames,kt=Object.getOwnPropertySymbols,Oi=Object.getPrototypeOf,dr=Object.prototype.hasOwnProperty,Zr=Object.prototype.propertyIsEnumerable;var Xr=(e,t,r)=>t in e?ur(e,t,{enumerable:!0,configurable:!0,writable:!0,value:r}):e[t]=r,R=(e,t)=>{for(var r in t||(t={}))dr.call(t,r)&&Xr(e,r,t[r]);if(kt)for(var r of kt(t))Zr.call(t,r)&&Xr(e,r,t[r]);return e};var eo=(e,t)=>{var r={};for(var o in e)dr.call(e,o)&&t.indexOf(o)<0&&(r[o]=e[o]);if(e!=null&&kt)for(var o of kt(e))t.indexOf(o)<0&&Zr.call(e,o)&&(r[o]=e[o]);return r};var hr=(e,t)=>()=>(t||e((t={exports:{}}).exports,t),t.exports);var Mi=(e,t,r,o)=>{if(t&&typeof t=="object"||typeof t=="function")for(let n of Ti(t))!dr.call(e,n)&&n!==r&&ur(e,n,{get:()=>t[n],enumerable:!(o=Si(t,n))||o.enumerable});return e};var Ht=(e,t,r)=>(r=e!=null?wi(Oi(e)):{},Mi(t||!e||!e.__esModule?ur(r,"default",{value:e,enumerable:!0}):r,e));var ro=hr((br,to)=>{(function(e,t){typeof br=="object"&&typeof to!="undefined"?t():typeof define=="function"&&define.amd?define(t):t()})(br,function(){"use strict";function e(r){var o=!0,n=!1,i=null,s={text:!0,search:!0,url:!0,tel:!0,email:!0,password:!0,number:!0,date:!0,month:!0,week:!0,time:!0,datetime:!0,"datetime-local":!0};function a(C){return!!(C&&C!==document&&C.nodeName!=="HTML"&&C.nodeName!=="BODY"&&"classList"in C&&"contains"in C.classList)}function c(C){var 
it=C.type,Ne=C.tagName;return!!(Ne==="INPUT"&&s[it]&&!C.readOnly||Ne==="TEXTAREA"&&!C.readOnly||C.isContentEditable)}function p(C){C.classList.contains("focus-visible")||(C.classList.add("focus-visible"),C.setAttribute("data-focus-visible-added",""))}function l(C){C.hasAttribute("data-focus-visible-added")&&(C.classList.remove("focus-visible"),C.removeAttribute("data-focus-visible-added"))}function f(C){C.metaKey||C.altKey||C.ctrlKey||(a(r.activeElement)&&p(r.activeElement),o=!0)}function u(C){o=!1}function d(C){a(C.target)&&(o||c(C.target))&&p(C.target)}function v(C){a(C.target)&&(C.target.classList.contains("focus-visible")||C.target.hasAttribute("data-focus-visible-added"))&&(n=!0,window.clearTimeout(i),i=window.setTimeout(function(){n=!1},100),l(C.target))}function b(C){document.visibilityState==="hidden"&&(n&&(o=!0),z())}function z(){document.addEventListener("mousemove",G),document.addEventListener("mousedown",G),document.addEventListener("mouseup",G),document.addEventListener("pointermove",G),document.addEventListener("pointerdown",G),document.addEventListener("pointerup",G),document.addEventListener("touchmove",G),document.addEventListener("touchstart",G),document.addEventListener("touchend",G)}function K(){document.removeEventListener("mousemove",G),document.removeEventListener("mousedown",G),document.removeEventListener("mouseup",G),document.removeEventListener("pointermove",G),document.removeEventListener("pointerdown",G),document.removeEventListener("pointerup",G),document.removeEventListener("touchmove",G),document.removeEventListener("touchstart",G),document.removeEventListener("touchend",G)}function 
G(C){C.target.nodeName&&C.target.nodeName.toLowerCase()==="html"||(o=!1,K())}document.addEventListener("keydown",f,!0),document.addEventListener("mousedown",u,!0),document.addEventListener("pointerdown",u,!0),document.addEventListener("touchstart",u,!0),document.addEventListener("visibilitychange",b,!0),z(),r.addEventListener("focus",d,!0),r.addEventListener("blur",v,!0),r.nodeType===Node.DOCUMENT_FRAGMENT_NODE&&r.host?r.host.setAttribute("data-js-focus-visible",""):r.nodeType===Node.DOCUMENT_NODE&&(document.documentElement.classList.add("js-focus-visible"),document.documentElement.setAttribute("data-js-focus-visible",""))}if(typeof window!="undefined"&&typeof document!="undefined"){window.applyFocusVisiblePolyfill=e;var t;try{t=new CustomEvent("focus-visible-polyfill-ready")}catch(r){t=document.createEvent("CustomEvent"),t.initCustomEvent("focus-visible-polyfill-ready",!1,!1,{})}window.dispatchEvent(t)}typeof document!="undefined"&&e(document)})});var Vr=hr((Ot,Dr)=>{/*! + * clipboard.js v2.0.11 + * https://clipboardjs.com/ + * + * Licensed MIT © Zeno Rocha + */(function(t,r){typeof Ot=="object"&&typeof Dr=="object"?Dr.exports=r():typeof define=="function"&&define.amd?define([],r):typeof Ot=="object"?Ot.ClipboardJS=r():t.ClipboardJS=r()})(Ot,function(){return function(){var e={686:function(o,n,i){"use strict";i.d(n,{default:function(){return Ei}});var s=i(279),a=i.n(s),c=i(370),p=i.n(c),l=i(817),f=i.n(l);function u(U){try{return document.execCommand(U)}catch(O){return!1}}var d=function(O){var S=f()(O);return u("cut"),S},v=d;function b(U){var O=document.documentElement.getAttribute("dir")==="rtl",S=document.createElement("textarea");S.style.fontSize="12pt",S.style.border="0",S.style.padding="0",S.style.margin="0",S.style.position="absolute",S.style[O?"right":"left"]="-9999px";var $=window.pageYOffset||document.documentElement.scrollTop;return S.style.top="".concat($,"px"),S.setAttribute("readonly",""),S.value=U,S}var z=function(O,S){var 
$=b(O);S.container.appendChild($);var F=f()($);return u("copy"),$.remove(),F},K=function(O){var S=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body},$="";return typeof O=="string"?$=z(O,S):O instanceof HTMLInputElement&&!["text","search","url","tel","password"].includes(O==null?void 0:O.type)?$=z(O.value,S):($=f()(O),u("copy")),$},G=K;function C(U){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?C=function(S){return typeof S}:C=function(S){return S&&typeof Symbol=="function"&&S.constructor===Symbol&&S!==Symbol.prototype?"symbol":typeof S},C(U)}var it=function(){var O=arguments.length>0&&arguments[0]!==void 0?arguments[0]:{},S=O.action,$=S===void 0?"copy":S,F=O.container,Q=O.target,_e=O.text;if($!=="copy"&&$!=="cut")throw new Error('Invalid "action" value, use either "copy" or "cut"');if(Q!==void 0)if(Q&&C(Q)==="object"&&Q.nodeType===1){if($==="copy"&&Q.hasAttribute("disabled"))throw new Error('Invalid "target" attribute. Please use "readonly" instead of "disabled" attribute');if($==="cut"&&(Q.hasAttribute("readonly")||Q.hasAttribute("disabled")))throw new Error(`Invalid "target" attribute. 
You can't cut text from elements with "readonly" or "disabled" attributes`)}else throw new Error('Invalid "target" value, use a valid Element');if(_e)return G(_e,{container:F});if(Q)return $==="cut"?v(Q):G(Q,{container:F})},Ne=it;function Pe(U){"@babel/helpers - typeof";return typeof Symbol=="function"&&typeof Symbol.iterator=="symbol"?Pe=function(S){return typeof S}:Pe=function(S){return S&&typeof Symbol=="function"&&S.constructor===Symbol&&S!==Symbol.prototype?"symbol":typeof S},Pe(U)}function ui(U,O){if(!(U instanceof O))throw new TypeError("Cannot call a class as a function")}function Jr(U,O){for(var S=0;S0&&arguments[0]!==void 0?arguments[0]:{};this.action=typeof F.action=="function"?F.action:this.defaultAction,this.target=typeof F.target=="function"?F.target:this.defaultTarget,this.text=typeof F.text=="function"?F.text:this.defaultText,this.container=Pe(F.container)==="object"?F.container:document.body}},{key:"listenClick",value:function(F){var Q=this;this.listener=p()(F,"click",function(_e){return Q.onClick(_e)})}},{key:"onClick",value:function(F){var Q=F.delegateTarget||F.currentTarget,_e=this.action(Q)||"copy",Ct=Ne({action:_e,container:this.container,target:this.target(Q),text:this.text(Q)});this.emit(Ct?"success":"error",{action:_e,text:Ct,trigger:Q,clearSelection:function(){Q&&Q.focus(),window.getSelection().removeAllRanges()}})}},{key:"defaultAction",value:function(F){return fr("action",F)}},{key:"defaultTarget",value:function(F){var Q=fr("target",F);if(Q)return document.querySelector(Q)}},{key:"defaultText",value:function(F){return fr("text",F)}},{key:"destroy",value:function(){this.listener.destroy()}}],[{key:"copy",value:function(F){var Q=arguments.length>1&&arguments[1]!==void 0?arguments[1]:{container:document.body};return G(F,Q)}},{key:"cut",value:function(F){return v(F)}},{key:"isSupported",value:function(){var F=arguments.length>0&&arguments[0]!==void 0?arguments[0]:["copy","cut"],Q=typeof 
F=="string"?[F]:F,_e=!!document.queryCommandSupported;return Q.forEach(function(Ct){_e=_e&&!!document.queryCommandSupported(Ct)}),_e}}]),S}(a()),Ei=yi},828:function(o){var n=9;if(typeof Element!="undefined"&&!Element.prototype.matches){var i=Element.prototype;i.matches=i.matchesSelector||i.mozMatchesSelector||i.msMatchesSelector||i.oMatchesSelector||i.webkitMatchesSelector}function s(a,c){for(;a&&a.nodeType!==n;){if(typeof a.matches=="function"&&a.matches(c))return a;a=a.parentNode}}o.exports=s},438:function(o,n,i){var s=i(828);function a(l,f,u,d,v){var b=p.apply(this,arguments);return l.addEventListener(u,b,v),{destroy:function(){l.removeEventListener(u,b,v)}}}function c(l,f,u,d,v){return typeof l.addEventListener=="function"?a.apply(null,arguments):typeof u=="function"?a.bind(null,document).apply(null,arguments):(typeof l=="string"&&(l=document.querySelectorAll(l)),Array.prototype.map.call(l,function(b){return a(b,f,u,d,v)}))}function p(l,f,u,d){return function(v){v.delegateTarget=s(v.target,f),v.delegateTarget&&d.call(l,v)}}o.exports=c},879:function(o,n){n.node=function(i){return i!==void 0&&i instanceof HTMLElement&&i.nodeType===1},n.nodeList=function(i){var s=Object.prototype.toString.call(i);return i!==void 0&&(s==="[object NodeList]"||s==="[object HTMLCollection]")&&"length"in i&&(i.length===0||n.node(i[0]))},n.string=function(i){return typeof i=="string"||i instanceof String},n.fn=function(i){var s=Object.prototype.toString.call(i);return s==="[object Function]"}},370:function(o,n,i){var s=i(879),a=i(438);function c(u,d,v){if(!u&&!d&&!v)throw new Error("Missing required arguments");if(!s.string(d))throw new TypeError("Second argument must be a String");if(!s.fn(v))throw new TypeError("Third argument must be a Function");if(s.node(u))return p(u,d,v);if(s.nodeList(u))return l(u,d,v);if(s.string(u))return f(u,d,v);throw new TypeError("First argument must be a String, HTMLElement, HTMLCollection, or NodeList")}function p(u,d,v){return 
u.addEventListener(d,v),{destroy:function(){u.removeEventListener(d,v)}}}function l(u,d,v){return Array.prototype.forEach.call(u,function(b){b.addEventListener(d,v)}),{destroy:function(){Array.prototype.forEach.call(u,function(b){b.removeEventListener(d,v)})}}}function f(u,d,v){return a(document.body,u,d,v)}o.exports=c},817:function(o){function n(i){var s;if(i.nodeName==="SELECT")i.focus(),s=i.value;else if(i.nodeName==="INPUT"||i.nodeName==="TEXTAREA"){var a=i.hasAttribute("readonly");a||i.setAttribute("readonly",""),i.select(),i.setSelectionRange(0,i.value.length),a||i.removeAttribute("readonly"),s=i.value}else{i.hasAttribute("contenteditable")&&i.focus();var c=window.getSelection(),p=document.createRange();p.selectNodeContents(i),c.removeAllRanges(),c.addRange(p),s=c.toString()}return s}o.exports=n},279:function(o){function n(){}n.prototype={on:function(i,s,a){var c=this.e||(this.e={});return(c[i]||(c[i]=[])).push({fn:s,ctx:a}),this},once:function(i,s,a){var c=this;function p(){c.off(i,p),s.apply(a,arguments)}return p._=s,this.on(i,p,a)},emit:function(i){var s=[].slice.call(arguments,1),a=((this.e||(this.e={}))[i]||[]).slice(),c=0,p=a.length;for(c;c{"use strict";/*! 
+ * escape-html + * Copyright(c) 2012-2013 TJ Holowaychuk + * Copyright(c) 2015 Andreas Lubbe + * Copyright(c) 2015 Tiancheng "Timothy" Gu + * MIT Licensed + */var Ha=/["'&<>]/;Un.exports=$a;function $a(e){var t=""+e,r=Ha.exec(t);if(!r)return t;var o,n="",i=0,s=0;for(i=r.index;i0&&i[i.length-1])&&(p[0]===6||p[0]===2)){r=0;continue}if(p[0]===3&&(!i||p[1]>i[0]&&p[1]=e.length&&(e=void 0),{value:e&&e[o++],done:!e}}};throw new TypeError(t?"Object is not iterable.":"Symbol.iterator is not defined.")}function N(e,t){var r=typeof Symbol=="function"&&e[Symbol.iterator];if(!r)return e;var o=r.call(e),n,i=[],s;try{for(;(t===void 0||t-- >0)&&!(n=o.next()).done;)i.push(n.value)}catch(a){s={error:a}}finally{try{n&&!n.done&&(r=o.return)&&r.call(o)}finally{if(s)throw s.error}}return i}function D(e,t,r){if(r||arguments.length===2)for(var o=0,n=t.length,i;o1||a(u,d)})})}function a(u,d){try{c(o[u](d))}catch(v){f(i[0][3],v)}}function c(u){u.value instanceof Ze?Promise.resolve(u.value.v).then(p,l):f(i[0][2],u)}function p(u){a("next",u)}function l(u){a("throw",u)}function f(u,d){u(d),i.shift(),i.length&&a(i[0][0],i[0][1])}}function io(e){if(!Symbol.asyncIterator)throw new TypeError("Symbol.asyncIterator is not defined.");var t=e[Symbol.asyncIterator],r;return t?t.call(e):(e=typeof we=="function"?we(e):e[Symbol.iterator](),r={},o("next"),o("throw"),o("return"),r[Symbol.asyncIterator]=function(){return this},r);function o(i){r[i]=e[i]&&function(s){return new Promise(function(a,c){s=e[i](s),n(a,c,s.done,s.value)})}}function n(i,s,a,c){Promise.resolve(c).then(function(p){i({value:p,done:a})},s)}}function k(e){return typeof e=="function"}function at(e){var t=function(o){Error.call(o),o.stack=new Error().stack},r=e(t);return r.prototype=Object.create(Error.prototype),r.prototype.constructor=r,r}var Rt=at(function(e){return function(r){e(this),this.message=r?r.length+` errors occurred during unsubscription: +`+r.map(function(o,n){return n+1+") "+o.toString()}).join(` + 
`):"",this.name="UnsubscriptionError",this.errors=r}});function De(e,t){if(e){var r=e.indexOf(t);0<=r&&e.splice(r,1)}}var Ie=function(){function e(t){this.initialTeardown=t,this.closed=!1,this._parentage=null,this._finalizers=null}return e.prototype.unsubscribe=function(){var t,r,o,n,i;if(!this.closed){this.closed=!0;var s=this._parentage;if(s)if(this._parentage=null,Array.isArray(s))try{for(var a=we(s),c=a.next();!c.done;c=a.next()){var p=c.value;p.remove(this)}}catch(b){t={error:b}}finally{try{c&&!c.done&&(r=a.return)&&r.call(a)}finally{if(t)throw t.error}}else s.remove(this);var l=this.initialTeardown;if(k(l))try{l()}catch(b){i=b instanceof Rt?b.errors:[b]}var f=this._finalizers;if(f){this._finalizers=null;try{for(var u=we(f),d=u.next();!d.done;d=u.next()){var v=d.value;try{ao(v)}catch(b){i=i!=null?i:[],b instanceof Rt?i=D(D([],N(i)),N(b.errors)):i.push(b)}}}catch(b){o={error:b}}finally{try{d&&!d.done&&(n=u.return)&&n.call(u)}finally{if(o)throw o.error}}}if(i)throw new Rt(i)}},e.prototype.add=function(t){var r;if(t&&t!==this)if(this.closed)ao(t);else{if(t instanceof e){if(t.closed||t._hasParent(this))return;t._addParent(this)}(this._finalizers=(r=this._finalizers)!==null&&r!==void 0?r:[]).push(t)}},e.prototype._hasParent=function(t){var r=this._parentage;return r===t||Array.isArray(r)&&r.includes(t)},e.prototype._addParent=function(t){var r=this._parentage;this._parentage=Array.isArray(r)?(r.push(t),r):r?[r,t]:t},e.prototype._removeParent=function(t){var r=this._parentage;r===t?this._parentage=null:Array.isArray(r)&&De(r,t)},e.prototype.remove=function(t){var r=this._finalizers;r&&De(r,t),t instanceof e&&t._removeParent(this)},e.EMPTY=function(){var t=new e;return t.closed=!0,t}(),e}();var gr=Ie.EMPTY;function Pt(e){return e instanceof Ie||e&&"closed"in e&&k(e.remove)&&k(e.add)&&k(e.unsubscribe)}function ao(e){k(e)?e():e.unsubscribe()}var Ae={onUnhandledError:null,onStoppedNotification:null,Promise:void 
0,useDeprecatedSynchronousErrorHandling:!1,useDeprecatedNextContext:!1};var st={setTimeout:function(e,t){for(var r=[],o=2;o0},enumerable:!1,configurable:!0}),t.prototype._trySubscribe=function(r){return this._throwIfClosed(),e.prototype._trySubscribe.call(this,r)},t.prototype._subscribe=function(r){return this._throwIfClosed(),this._checkFinalizedStatuses(r),this._innerSubscribe(r)},t.prototype._innerSubscribe=function(r){var o=this,n=this,i=n.hasError,s=n.isStopped,a=n.observers;return i||s?gr:(this.currentObservers=null,a.push(r),new Ie(function(){o.currentObservers=null,De(a,r)}))},t.prototype._checkFinalizedStatuses=function(r){var o=this,n=o.hasError,i=o.thrownError,s=o.isStopped;n?r.error(i):s&&r.complete()},t.prototype.asObservable=function(){var r=new P;return r.source=this,r},t.create=function(r,o){return new ho(r,o)},t}(P);var ho=function(e){ie(t,e);function t(r,o){var n=e.call(this)||this;return n.destination=r,n.source=o,n}return t.prototype.next=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.next)===null||n===void 0||n.call(o,r)},t.prototype.error=function(r){var o,n;(n=(o=this.destination)===null||o===void 0?void 0:o.error)===null||n===void 0||n.call(o,r)},t.prototype.complete=function(){var r,o;(o=(r=this.destination)===null||r===void 0?void 0:r.complete)===null||o===void 0||o.call(r)},t.prototype._subscribe=function(r){var o,n;return(n=(o=this.source)===null||o===void 0?void 0:o.subscribe(r))!==null&&n!==void 0?n:gr},t}(x);var yt={now:function(){return(yt.delegate||Date).now()},delegate:void 0};var Et=function(e){ie(t,e);function t(r,o,n){r===void 0&&(r=1/0),o===void 0&&(o=1/0),n===void 0&&(n=yt);var i=e.call(this)||this;return i._bufferSize=r,i._windowTime=o,i._timestampProvider=n,i._buffer=[],i._infiniteTimeWindow=!0,i._infiniteTimeWindow=o===1/0,i._bufferSize=Math.max(1,r),i._windowTime=Math.max(1,o),i}return t.prototype.next=function(r){var 
o=this,n=o.isStopped,i=o._buffer,s=o._infiniteTimeWindow,a=o._timestampProvider,c=o._windowTime;n||(i.push(r),!s&&i.push(a.now()+c)),this._trimBuffer(),e.prototype.next.call(this,r)},t.prototype._subscribe=function(r){this._throwIfClosed(),this._trimBuffer();for(var o=this._innerSubscribe(r),n=this,i=n._infiniteTimeWindow,s=n._buffer,a=s.slice(),c=0;c0?e.prototype.requestAsyncId.call(this,r,o,n):(r.actions.push(this),r._scheduled||(r._scheduled=lt.requestAnimationFrame(function(){return r.flush(void 0)})))},t.prototype.recycleAsyncId=function(r,o,n){var i;if(n===void 0&&(n=0),n!=null?n>0:this.delay>0)return e.prototype.recycleAsyncId.call(this,r,o,n);var s=r.actions;o!=null&&((i=s[s.length-1])===null||i===void 0?void 0:i.id)!==o&&(lt.cancelAnimationFrame(o),r._scheduled=void 0)},t}(jt);var go=function(e){ie(t,e);function t(){return e!==null&&e.apply(this,arguments)||this}return t.prototype.flush=function(r){this._active=!0;var o=this._scheduled;this._scheduled=void 0;var n=this.actions,i;r=r||n.shift();do if(i=r.execute(r.state,r.delay))break;while((r=n[0])&&r.id===o&&n.shift());if(this._active=!1,i){for(;(r=n[0])&&r.id===o&&n.shift();)r.unsubscribe();throw i}},t}(Wt);var Oe=new go(vo);var L=new P(function(e){return e.complete()});function Ut(e){return e&&k(e.schedule)}function Or(e){return e[e.length-1]}function Qe(e){return k(Or(e))?e.pop():void 0}function Me(e){return Ut(Or(e))?e.pop():void 0}function Nt(e,t){return typeof Or(e)=="number"?e.pop():t}var mt=function(e){return e&&typeof e.length=="number"&&typeof e!="function"};function Dt(e){return k(e==null?void 0:e.then)}function Vt(e){return k(e[pt])}function zt(e){return Symbol.asyncIterator&&k(e==null?void 0:e[Symbol.asyncIterator])}function qt(e){return new TypeError("You provided "+(e!==null&&typeof e=="object"?"an invalid object":"'"+e+"'")+" where a stream was expected. 
You can provide an Observable, Promise, ReadableStream, Array, AsyncIterable, or Iterable.")}function Pi(){return typeof Symbol!="function"||!Symbol.iterator?"@@iterator":Symbol.iterator}var Kt=Pi();function Qt(e){return k(e==null?void 0:e[Kt])}function Yt(e){return no(this,arguments,function(){var r,o,n,i;return $t(this,function(s){switch(s.label){case 0:r=e.getReader(),s.label=1;case 1:s.trys.push([1,,9,10]),s.label=2;case 2:return[4,Ze(r.read())];case 3:return o=s.sent(),n=o.value,i=o.done,i?[4,Ze(void 0)]:[3,5];case 4:return[2,s.sent()];case 5:return[4,Ze(n)];case 6:return[4,s.sent()];case 7:return s.sent(),[3,2];case 8:return[3,10];case 9:return r.releaseLock(),[7];case 10:return[2]}})})}function Bt(e){return k(e==null?void 0:e.getReader)}function I(e){if(e instanceof P)return e;if(e!=null){if(Vt(e))return Ii(e);if(mt(e))return Fi(e);if(Dt(e))return ji(e);if(zt(e))return xo(e);if(Qt(e))return Wi(e);if(Bt(e))return Ui(e)}throw qt(e)}function Ii(e){return new P(function(t){var r=e[pt]();if(k(r.subscribe))return r.subscribe(t);throw new TypeError("Provided object does not correctly implement Symbol.observable")})}function Fi(e){return new P(function(t){for(var r=0;r=2;return function(o){return o.pipe(e?M(function(n,i){return e(n,i,o)}):ue,xe(1),r?He(t):Io(function(){return new Jt}))}}function Fo(){for(var e=[],t=0;t=2,!0))}function le(e){e===void 0&&(e={});var t=e.connector,r=t===void 0?function(){return new x}:t,o=e.resetOnError,n=o===void 0?!0:o,i=e.resetOnComplete,s=i===void 0?!0:i,a=e.resetOnRefCountZero,c=a===void 0?!0:a;return function(p){var l,f,u,d=0,v=!1,b=!1,z=function(){f==null||f.unsubscribe(),f=void 0},K=function(){z(),l=u=void 0,v=b=!1},G=function(){var C=l;K(),C==null||C.unsubscribe()};return g(function(C,it){d++,!b&&!v&&z();var Ne=u=u!=null?u:r();it.add(function(){d--,d===0&&!b&&!v&&(f=Hr(G,c))}),Ne.subscribe(it),!l&&d>0&&(l=new tt({next:function(Pe){return 
Ne.next(Pe)},error:function(Pe){b=!0,z(),f=Hr(K,n,Pe),Ne.error(Pe)},complete:function(){v=!0,z(),f=Hr(K,s),Ne.complete()}}),I(C).subscribe(l))})(p)}}function Hr(e,t){for(var r=[],o=2;oe.next(document)),e}function q(e,t=document){return Array.from(t.querySelectorAll(e))}function W(e,t=document){let r=ce(e,t);if(typeof r=="undefined")throw new ReferenceError(`Missing element: expected "${e}" to be present`);return r}function ce(e,t=document){return t.querySelector(e)||void 0}function Re(){return document.activeElement instanceof HTMLElement&&document.activeElement||void 0}var na=_(h(document.body,"focusin"),h(document.body,"focusout")).pipe(ke(1),V(void 0),m(()=>Re()||document.body),J(1));function Zt(e){return na.pipe(m(t=>e.contains(t)),X())}function Je(e){return{x:e.offsetLeft,y:e.offsetTop}}function No(e){return _(h(window,"load"),h(window,"resize")).pipe(Ce(0,Oe),m(()=>Je(e)),V(Je(e)))}function er(e){return{x:e.scrollLeft,y:e.scrollTop}}function dt(e){return _(h(e,"scroll"),h(window,"resize")).pipe(Ce(0,Oe),m(()=>er(e)),V(er(e)))}function Do(e,t){if(typeof t=="string"||typeof t=="number")e.innerHTML+=t.toString();else if(t instanceof Node)e.appendChild(t);else if(Array.isArray(t))for(let r of t)Do(e,r)}function T(e,t,...r){let o=document.createElement(e);if(t)for(let n of Object.keys(t))typeof t[n]!="undefined"&&(typeof t[n]!="boolean"?o.setAttribute(n,t[n]):o.setAttribute(n,""));for(let n of r)Do(o,n);return o}function tr(e){if(e>999){let t=+((e-950)%1e3>99);return`${((e+1e-6)/1e3).toFixed(t)}k`}else return e.toString()}function ht(e){let t=T("script",{src:e});return H(()=>(document.head.appendChild(t),_(h(t,"load"),h(t,"error").pipe(E(()=>Mr(()=>new ReferenceError(`Invalid script: ${e}`))))).pipe(m(()=>{}),A(()=>document.head.removeChild(t)),xe(1))))}var Vo=new x,ia=H(()=>typeof ResizeObserver=="undefined"?ht("https://unpkg.com/resize-observer-polyfill"):j(void 0)).pipe(m(()=>new ResizeObserver(e=>{for(let t of 
e)Vo.next(t)})),E(e=>_(Ve,j(e)).pipe(A(()=>e.disconnect()))),J(1));function he(e){return{width:e.offsetWidth,height:e.offsetHeight}}function ye(e){return ia.pipe(w(t=>t.observe(e)),E(t=>Vo.pipe(M(({target:r})=>r===e),A(()=>t.unobserve(e)),m(()=>he(e)))),V(he(e)))}function bt(e){return{width:e.scrollWidth,height:e.scrollHeight}}function zo(e){let t=e.parentElement;for(;t&&(e.scrollWidth<=t.scrollWidth&&e.scrollHeight<=t.scrollHeight);)t=(e=t).parentElement;return t?e:void 0}var qo=new x,aa=H(()=>j(new IntersectionObserver(e=>{for(let t of e)qo.next(t)},{threshold:0}))).pipe(E(e=>_(Ve,j(e)).pipe(A(()=>e.disconnect()))),J(1));function rr(e){return aa.pipe(w(t=>t.observe(e)),E(t=>qo.pipe(M(({target:r})=>r===e),A(()=>t.unobserve(e)),m(({isIntersecting:r})=>r))))}function Ko(e,t=16){return dt(e).pipe(m(({y:r})=>{let o=he(e),n=bt(e);return r>=n.height-o.height-t}),X())}var or={drawer:W("[data-md-toggle=drawer]"),search:W("[data-md-toggle=search]")};function Qo(e){return or[e].checked}function Ke(e,t){or[e].checked!==t&&or[e].click()}function We(e){let t=or[e];return h(t,"change").pipe(m(()=>t.checked),V(t.checked))}function sa(e,t){switch(e.constructor){case HTMLInputElement:return e.type==="radio"?/^Arrow/.test(t):!0;case HTMLSelectElement:case HTMLTextAreaElement:return!0;default:return e.isContentEditable}}function ca(){return _(h(window,"compositionstart").pipe(m(()=>!0)),h(window,"compositionend").pipe(m(()=>!1))).pipe(V(!1))}function Yo(){let e=h(window,"keydown").pipe(M(t=>!(t.metaKey||t.ctrlKey)),m(t=>({mode:Qo("search")?"search":"global",type:t.key,claim(){t.preventDefault(),t.stopPropagation()}})),M(({mode:t,type:r})=>{if(t==="global"){let o=Re();if(typeof o!="undefined")return!sa(o,r)}return!0}),le());return ca().pipe(E(t=>t?L:e))}function pe(){return new URL(location.href)}function ot(e,t=!1){if(te("navigation.instant")&&!t){let r=T("a",{href:e.href});document.body.appendChild(r),r.click(),r.remove()}else location.href=e.href}function Bo(){return new 
x}function Go(){return location.hash.slice(1)}function nr(e){let t=T("a",{href:e});t.addEventListener("click",r=>r.stopPropagation()),t.click()}function pa(e){return _(h(window,"hashchange"),e).pipe(m(Go),V(Go()),M(t=>t.length>0),J(1))}function Jo(e){return pa(e).pipe(m(t=>ce(`[id="${t}"]`)),M(t=>typeof t!="undefined"))}function Fr(e){let t=matchMedia(e);return Xt(r=>t.addListener(()=>r(t.matches))).pipe(V(t.matches))}function Xo(){let e=matchMedia("print");return _(h(window,"beforeprint").pipe(m(()=>!0)),h(window,"afterprint").pipe(m(()=>!1))).pipe(V(e.matches))}function jr(e,t){return e.pipe(E(r=>r?t():L))}function ir(e,t){return new P(r=>{let o=new XMLHttpRequest;o.open("GET",`${e}`),o.responseType="blob",o.addEventListener("load",()=>{o.status>=200&&o.status<300?(r.next(o.response),r.complete()):r.error(new Error(o.statusText))}),o.addEventListener("error",()=>{r.error(new Error("Network Error"))}),o.addEventListener("abort",()=>{r.error(new Error("Request aborted"))}),typeof(t==null?void 0:t.progress$)!="undefined"&&(o.addEventListener("progress",n=>{t.progress$.next(n.loaded/n.total*100)}),t.progress$.next(5)),o.send()})}function Ue(e,t){return ir(e,t).pipe(E(r=>r.text()),m(r=>JSON.parse(r)),J(1))}function Zo(e,t){let r=new DOMParser;return ir(e,t).pipe(E(o=>o.text()),m(o=>r.parseFromString(o,"text/xml")),J(1))}function en(){return{x:Math.max(0,scrollX),y:Math.max(0,scrollY)}}function tn(){return _(h(window,"scroll",{passive:!0}),h(window,"resize",{passive:!0})).pipe(m(en),V(en()))}function rn(){return{width:innerWidth,height:innerHeight}}function on(){return h(window,"resize",{passive:!0}).pipe(m(rn),V(rn()))}function nn(){return B([tn(),on()]).pipe(m(([e,t])=>({offset:e,size:t})),J(1))}function ar(e,{viewport$:t,header$:r}){let o=t.pipe(ee("size")),n=B([o,r]).pipe(m(()=>Je(e)));return B([r,t,n]).pipe(m(([{height:i},{offset:s,size:a},{x:c,y:p}])=>({offset:{x:s.x-c,y:s.y-p+i},size:a})))}function la(e){return h(e,"message",t=>t.data)}function ma(e){let t=new 
x;return t.subscribe(r=>e.postMessage(r)),t}function an(e,t=new Worker(e)){let r=la(t),o=ma(t),n=new x;n.subscribe(o);let i=o.pipe(Z(),re(!0));return n.pipe(Z(),qe(r.pipe(Y(i))),le())}var fa=W("#__config"),vt=JSON.parse(fa.textContent);vt.base=`${new URL(vt.base,pe())}`;function me(){return vt}function te(e){return vt.features.includes(e)}function be(e,t){return typeof t!="undefined"?vt.translations[e].replace("#",t.toString()):vt.translations[e]}function Ee(e,t=document){return W(`[data-md-component=${e}]`,t)}function oe(e,t=document){return q(`[data-md-component=${e}]`,t)}function ua(e){let t=W(".md-typeset > :first-child",e);return h(t,"click",{once:!0}).pipe(m(()=>W(".md-typeset",e)),m(r=>({hash:__md_hash(r.innerHTML)})))}function sn(e){if(!te("announce.dismiss")||!e.childElementCount)return L;if(!e.hidden){let t=W(".md-typeset",e);__md_hash(t.innerHTML)===__md_get("__announce")&&(e.hidden=!0)}return H(()=>{let t=new x;return t.subscribe(({hash:r})=>{e.hidden=!0,__md_set("__announce",r)}),ua(e).pipe(w(r=>t.next(r)),A(()=>t.complete()),m(r=>R({ref:e},r)))})}function da(e,{target$:t}){return t.pipe(m(r=>({hidden:r!==e})))}function cn(e,t){let r=new x;return r.subscribe(({hidden:o})=>{e.hidden=o}),da(e,t).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))}function ha(e,t){let r=H(()=>B([No(e),dt(t)])).pipe(m(([{x:o,y:n},i])=>{let{width:s,height:a}=he(e);return{x:o-i.x+s/2,y:n-i.y+a/2}}));return Zt(e).pipe(E(o=>r.pipe(m(n=>({active:o,offset:n})),xe(+!o||1/0))))}function pn(e,t,{target$:r}){let[o,n]=Array.from(e.children);return H(()=>{let i=new x,s=i.pipe(Z(),re(!0));return 
i.subscribe({next({offset:a}){e.style.setProperty("--md-tooltip-x",`${a.x}px`),e.style.setProperty("--md-tooltip-y",`${a.y}px`)},complete(){e.style.removeProperty("--md-tooltip-x"),e.style.removeProperty("--md-tooltip-y")}}),rr(e).pipe(Y(s)).subscribe(a=>{e.toggleAttribute("data-md-visible",a)}),_(i.pipe(M(({active:a})=>a)),i.pipe(ke(250),M(({active:a})=>!a))).subscribe({next({active:a}){a?e.prepend(o):o.remove()},complete(){e.prepend(o)}}),i.pipe(Ce(16,Oe)).subscribe(({active:a})=>{o.classList.toggle("md-tooltip--active",a)}),i.pipe(Pr(125,Oe),M(()=>!!e.offsetParent),m(()=>e.offsetParent.getBoundingClientRect()),m(({x:a})=>a)).subscribe({next(a){a?e.style.setProperty("--md-tooltip-0",`${-a}px`):e.style.removeProperty("--md-tooltip-0")},complete(){e.style.removeProperty("--md-tooltip-0")}}),h(n,"click").pipe(Y(s),M(a=>!(a.metaKey||a.ctrlKey))).subscribe(a=>{a.stopPropagation(),a.preventDefault()}),h(n,"mousedown").pipe(Y(s),ne(i)).subscribe(([a,{active:c}])=>{var p;if(a.button!==0||a.metaKey||a.ctrlKey)a.preventDefault();else if(c){a.preventDefault();let l=e.parentElement.closest(".md-annotation");l instanceof HTMLElement?l.focus():(p=Re())==null||p.blur()}}),r.pipe(Y(s),M(a=>a===o),ze(125)).subscribe(()=>e.focus()),ha(e,t).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))})}function Wr(e){return T("div",{class:"md-tooltip",id:e},T("div",{class:"md-tooltip__inner md-typeset"}))}function ln(e,t){if(t=t?`${t}_annotation_${e}`:void 0,t){let r=t?`#${t}`:void 0;return T("aside",{class:"md-annotation",tabIndex:0},Wr(t),T("a",{href:r,class:"md-annotation__index",tabIndex:-1},T("span",{"data-md-annotation-id":e})))}else return T("aside",{class:"md-annotation",tabIndex:0},Wr(t),T("span",{class:"md-annotation__index",tabIndex:-1},T("span",{"data-md-annotation-id":e})))}function mn(e){return T("button",{class:"md-clipboard md-icon",title:be("clipboard.copy"),"data-clipboard-target":`#${e} > code`})}function Ur(e,t){let 
r=t&2,o=t&1,n=Object.keys(e.terms).filter(c=>!e.terms[c]).reduce((c,p)=>[...c,T("del",null,p)," "],[]).slice(0,-1),i=me(),s=new URL(e.location,i.base);te("search.highlight")&&s.searchParams.set("h",Object.entries(e.terms).filter(([,c])=>c).reduce((c,[p])=>`${c} ${p}`.trim(),""));let{tags:a}=me();return T("a",{href:`${s}`,class:"md-search-result__link",tabIndex:-1},T("article",{class:"md-search-result__article md-typeset","data-md-score":e.score.toFixed(2)},r>0&&T("div",{class:"md-search-result__icon md-icon"}),r>0&&T("h1",null,e.title),r<=0&&T("h2",null,e.title),o>0&&e.text.length>0&&e.text,e.tags&&e.tags.map(c=>{let p=a?c in a?`md-tag-icon md-tag--${a[c]}`:"md-tag-icon":"";return T("span",{class:`md-tag ${p}`},c)}),o>0&&n.length>0&&T("p",{class:"md-search-result__terms"},be("search.result.term.missing"),": ",...n)))}function fn(e){let t=e[0].score,r=[...e],o=me(),n=r.findIndex(l=>!`${new URL(l.location,o.base)}`.includes("#")),[i]=r.splice(n,1),s=r.findIndex(l=>l.scoreUr(l,1)),...c.length?[T("details",{class:"md-search-result__more"},T("summary",{tabIndex:-1},T("div",null,c.length>0&&c.length===1?be("search.result.more.one"):be("search.result.more.other",c.length))),...c.map(l=>Ur(l,1)))]:[]];return T("li",{class:"md-search-result__item"},p)}function un(e){return T("ul",{class:"md-source__facts"},Object.entries(e).map(([t,r])=>T("li",{class:`md-source__fact md-source__fact--${t}`},typeof r=="number"?tr(r):r)))}function Nr(e){let t=`tabbed-control tabbed-control--${e}`;return T("div",{class:t,hidden:!0},T("button",{class:"tabbed-button",tabIndex:-1,"aria-hidden":"true"}))}function dn(e){return T("div",{class:"md-typeset__scrollwrap"},T("div",{class:"md-typeset__table"},e))}function ba(e){let t=me(),r=new URL(`../${e.version}/`,t.base);return T("li",{class:"md-version__item"},T("a",{href:`${r}`,class:"md-version__link"},e.title))}function hn(e,t){return 
T("div",{class:"md-version"},T("button",{class:"md-version__current","aria-label":be("select.version")},t.title),T("ul",{class:"md-version__list"},e.map(ba)))}function va(e){return e.tagName==="CODE"?q(".c, .c1, .cm",e):[e]}function ga(e){let t=[];for(let r of va(e)){let o=[],n=document.createNodeIterator(r,NodeFilter.SHOW_TEXT);for(let i=n.nextNode();i;i=n.nextNode())o.push(i);for(let i of o){let s;for(;s=/(\(\d+\))(!)?/.exec(i.textContent);){let[,a,c]=s;if(typeof c=="undefined"){let p=i.splitText(s.index);i=p.splitText(a.length),t.push(p)}else{i.textContent=a,t.push(i);break}}}}return t}function bn(e,t){t.append(...Array.from(e.childNodes))}function sr(e,t,{target$:r,print$:o}){let n=t.closest("[id]"),i=n==null?void 0:n.id,s=new Map;for(let a of ga(t)){let[,c]=a.textContent.match(/\((\d+)\)/);ce(`:scope > li:nth-child(${c})`,e)&&(s.set(c,ln(c,i)),a.replaceWith(s.get(c)))}return s.size===0?L:H(()=>{let a=new x,c=a.pipe(Z(),re(!0)),p=[];for(let[l,f]of s)p.push([W(".md-typeset",f),W(`:scope > li:nth-child(${l})`,e)]);return o.pipe(Y(c)).subscribe(l=>{e.hidden=!l,e.classList.toggle("md-annotation-list",l);for(let[f,u]of p)l?bn(f,u):bn(u,f)}),_(...[...s].map(([,l])=>pn(l,t,{target$:r}))).pipe(A(()=>a.complete()),le())})}function vn(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return vn(t)}}function gn(e,t){return H(()=>{let r=vn(e);return typeof r!="undefined"?sr(r,e,t):L})}var yn=Ht(Vr());var xa=0;function En(e){if(e.nextElementSibling){let t=e.nextElementSibling;if(t.tagName==="OL")return t;if(t.tagName==="P"&&!t.children.length)return En(t)}}function xn(e){return ye(e).pipe(m(({width:t})=>({scrollable:bt(e).width>t})),ee("scrollable"))}function wn(e,t){let{matches:r}=matchMedia("(hover)"),o=H(()=>{let n=new 
x;if(n.subscribe(({scrollable:s})=>{s&&r?e.setAttribute("tabindex","0"):e.removeAttribute("tabindex")}),yn.default.isSupported()&&(e.closest(".copy")||te("content.code.copy")&&!e.closest(".no-copy"))){let s=e.closest("pre");s.id=`__code_${xa++}`,s.insertBefore(mn(s.id),e)}let i=e.closest(".highlight");if(i instanceof HTMLElement){let s=En(i);if(typeof s!="undefined"&&(i.classList.contains("annotate")||te("content.code.annotate"))){let a=sr(s,e,t);return xn(e).pipe(w(c=>n.next(c)),A(()=>n.complete()),m(c=>R({ref:e},c)),qe(ye(i).pipe(m(({width:c,height:p})=>c&&p),X(),E(c=>c?a:L))))}}return xn(e).pipe(w(s=>n.next(s)),A(()=>n.complete()),m(s=>R({ref:e},s)))});return te("content.lazy")?rr(e).pipe(M(n=>n),xe(1),E(()=>o)):o}function ya(e,{target$:t,print$:r}){let o=!0;return _(t.pipe(m(n=>n.closest("details:not([open])")),M(n=>e===n),m(()=>({action:"open",reveal:!0}))),r.pipe(M(n=>n||!o),w(()=>o=e.open),m(n=>({action:n?"open":"close"}))))}function Sn(e,t){return H(()=>{let r=new x;return r.subscribe(({action:o,reveal:n})=>{e.toggleAttribute("open",o==="open"),n&&e.scrollIntoView()}),ya(e,t).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}var Tn=".node circle,.node ellipse,.node path,.node polygon,.node rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}marker{fill:var(--md-mermaid-edge-color)!important}.edgeLabel .label rect{fill:#0000}.label{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.label foreignObject{line-height:normal;overflow:visible}.label div .edgeLabel{color:var(--md-mermaid-label-fg-color)}.edgeLabel,.edgeLabel rect,.label div .edgeLabel{background-color:var(--md-mermaid-label-bg-color)}.edgeLabel,.edgeLabel rect{fill:var(--md-mermaid-label-bg-color);color:var(--md-mermaid-edge-color)}.edgePath .path,.flowchart-link{stroke:var(--md-mermaid-edge-color);stroke-width:.05rem}.edgePath .arrowheadPath{fill:var(--md-mermaid-edge-color);stroke:none}.cluster 
rect{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}.cluster span{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}g #flowchart-circleEnd,g #flowchart-circleStart,g #flowchart-crossEnd,g #flowchart-crossStart,g #flowchart-pointEnd,g #flowchart-pointStart{stroke:none}g.classGroup line,g.classGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.classGroup text{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.classLabel .box{fill:var(--md-mermaid-label-bg-color);background-color:var(--md-mermaid-label-bg-color);opacity:1}.classLabel .label{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node .divider{stroke:var(--md-mermaid-node-fg-color)}.relation{stroke:var(--md-mermaid-edge-color)}.cardinality{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.cardinality text{fill:inherit!important}defs #classDiagram-compositionEnd,defs #classDiagram-compositionStart,defs #classDiagram-dependencyEnd,defs #classDiagram-dependencyStart,defs #classDiagram-extensionEnd,defs #classDiagram-extensionStart{fill:var(--md-mermaid-edge-color)!important;stroke:var(--md-mermaid-edge-color)!important}defs #classDiagram-aggregationEnd,defs #classDiagram-aggregationStart{fill:var(--md-mermaid-label-bg-color)!important;stroke:var(--md-mermaid-edge-color)!important}g.stateGroup rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}g.stateGroup .state-title{fill:var(--md-mermaid-label-fg-color)!important;font-family:var(--md-mermaid-font-family)}g.stateGroup .composit{fill:var(--md-mermaid-label-bg-color)}.nodeLabel{color:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.node circle.state-end,.node 
circle.state-start,.start-state{fill:var(--md-mermaid-edge-color);stroke:none}.end-state-inner,.end-state-outer{fill:var(--md-mermaid-edge-color)}.end-state-inner,.node circle.state-end{stroke:var(--md-mermaid-label-bg-color)}.transition{stroke:var(--md-mermaid-edge-color)}[id^=state-fork] rect,[id^=state-join] rect{fill:var(--md-mermaid-edge-color)!important;stroke:none!important}.statediagram-cluster.statediagram-cluster .inner{fill:var(--md-default-bg-color)}.statediagram-cluster rect{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.statediagram-state rect.divider{fill:var(--md-default-fg-color--lightest);stroke:var(--md-default-fg-color--lighter)}defs #statediagram-barbEnd{stroke:var(--md-mermaid-edge-color)}.attributeBoxEven,.attributeBoxOdd{fill:var(--md-mermaid-node-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityBox{fill:var(--md-mermaid-label-bg-color);stroke:var(--md-mermaid-node-fg-color)}.entityLabel{fill:var(--md-mermaid-label-fg-color);font-family:var(--md-mermaid-font-family)}.relationshipLabelBox{fill:var(--md-mermaid-label-bg-color);fill-opacity:1;background-color:var(--md-mermaid-label-bg-color);opacity:1}.relationshipLabel{fill:var(--md-mermaid-label-fg-color)}.relationshipLine{stroke:var(--md-mermaid-edge-color)}defs #ONE_OR_MORE_END *,defs #ONE_OR_MORE_START *,defs #ONLY_ONE_END *,defs #ONLY_ONE_START *,defs #ZERO_OR_MORE_END *,defs #ZERO_OR_MORE_START *,defs #ZERO_OR_ONE_END *,defs #ZERO_OR_ONE_START *{stroke:var(--md-mermaid-edge-color)!important}defs #ZERO_OR_MORE_END circle,defs #ZERO_OR_MORE_START circle{fill:var(--md-mermaid-label-bg-color)}.actor{fill:var(--md-mermaid-sequence-actor-bg-color);stroke:var(--md-mermaid-sequence-actor-border-color)}text.actor>tspan{fill:var(--md-mermaid-sequence-actor-fg-color);font-family:var(--md-mermaid-font-family)}line{stroke:var(--md-mermaid-sequence-actor-line-color)}.actor-man circle,.actor-man 
line{fill:var(--md-mermaid-sequence-actorman-bg-color);stroke:var(--md-mermaid-sequence-actorman-line-color)}.messageLine0,.messageLine1{stroke:var(--md-mermaid-sequence-message-line-color)}.note{fill:var(--md-mermaid-sequence-note-bg-color);stroke:var(--md-mermaid-sequence-note-border-color)}.loopText,.loopText>tspan,.messageText,.noteText>tspan{stroke:none;font-family:var(--md-mermaid-font-family)!important}.messageText{fill:var(--md-mermaid-sequence-message-fg-color)}.loopText,.loopText>tspan{fill:var(--md-mermaid-sequence-loop-fg-color)}.noteText>tspan{fill:var(--md-mermaid-sequence-note-fg-color)}#arrowhead path{fill:var(--md-mermaid-sequence-message-line-color);stroke:none}.loopLine{fill:var(--md-mermaid-sequence-loop-bg-color);stroke:var(--md-mermaid-sequence-loop-border-color)}.labelBox{fill:var(--md-mermaid-sequence-label-bg-color);stroke:none}.labelText,.labelText>span{fill:var(--md-mermaid-sequence-label-fg-color);font-family:var(--md-mermaid-font-family)}.sequenceNumber{fill:var(--md-mermaid-sequence-number-fg-color)}rect.rect{fill:var(--md-mermaid-sequence-box-bg-color);stroke:none}rect.rect+text.text{fill:var(--md-mermaid-sequence-box-fg-color)}defs #sequencenumber{fill:var(--md-mermaid-sequence-number-bg-color)!important}";var zr,wa=0;function Sa(){return typeof mermaid=="undefined"||mermaid instanceof Element?ht("https://unpkg.com/mermaid@9.4.3/dist/mermaid.min.js"):j(void 0)}function On(e){return e.classList.remove("mermaid"),zr||(zr=Sa().pipe(w(()=>mermaid.initialize({startOnLoad:!1,themeCSS:Tn,sequence:{actorFontSize:"16px",messageFontSize:"16px",noteFontSize:"16px"}})),m(()=>{}),J(1))),zr.subscribe(()=>{e.classList.add("mermaid");let t=`__mermaid_${wa++}`,r=T("div",{class:"mermaid"}),o=e.textContent;mermaid.mermaidAPI.render(t,o,(n,i)=>{let s=r.attachShadow({mode:"closed"});s.innerHTML=n,e.replaceWith(r),i==null||i(s)})}),zr.pipe(m(()=>({ref:e})))}var Mn=T("table");function Ln(e){return e.replaceWith(Mn),Mn.replaceWith(dn(e)),j({ref:e})}function 
Ta(e){let t=q(":scope > input",e),r=t.find(o=>o.checked)||t[0];return _(...t.map(o=>h(o,"change").pipe(m(()=>W(`label[for="${o.id}"]`))))).pipe(V(W(`label[for="${r.id}"]`)),m(o=>({active:o})))}function _n(e,{viewport$:t}){let r=Nr("prev");e.append(r);let o=Nr("next");e.append(o);let n=W(".tabbed-labels",e);return H(()=>{let i=new x,s=i.pipe(Z(),re(!0));return B([i,ye(e)]).pipe(Ce(1,Oe),Y(s)).subscribe({next([{active:a},c]){let p=Je(a),{width:l}=he(a);e.style.setProperty("--md-indicator-x",`${p.x}px`),e.style.setProperty("--md-indicator-width",`${l}px`);let f=er(n);(p.xf.x+c.width)&&n.scrollTo({left:Math.max(0,p.x-16),behavior:"smooth"})},complete(){e.style.removeProperty("--md-indicator-x"),e.style.removeProperty("--md-indicator-width")}}),B([dt(n),ye(n)]).pipe(Y(s)).subscribe(([a,c])=>{let p=bt(n);r.hidden=a.x<16,o.hidden=a.x>p.width-c.width-16}),_(h(r,"click").pipe(m(()=>-1)),h(o,"click").pipe(m(()=>1))).pipe(Y(s)).subscribe(a=>{let{width:c}=he(n);n.scrollBy({left:c*a,behavior:"smooth"})}),te("content.tabs.link")&&i.pipe(je(1),ne(t)).subscribe(([{active:a},{offset:c}])=>{let p=a.innerText.trim();if(a.hasAttribute("data-md-switching"))a.removeAttribute("data-md-switching");else{let l=e.offsetTop-c.y;for(let u of q("[data-tabs]"))for(let d of q(":scope > input",u)){let v=W(`label[for="${d.id}"]`);if(v!==a&&v.innerText.trim()===p){v.setAttribute("data-md-switching",""),d.click();break}}window.scrollTo({top:e.offsetTop-l});let f=__md_get("__tabs")||[];__md_set("__tabs",[...new Set([p,...f])])}}),i.pipe(Y(s)).subscribe(()=>{for(let a of q("audio, video",e))a.pause()}),Ta(e).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))}).pipe(rt(ae))}function An(e,{viewport$:t,target$:r,print$:o}){return _(...q(".annotate:not(.highlight)",e).map(n=>gn(n,{target$:r,print$:o})),...q("pre:not(.mermaid) > 
code",e).map(n=>wn(n,{target$:r,print$:o})),...q("pre.mermaid",e).map(n=>On(n)),...q("table:not([class])",e).map(n=>Ln(n)),...q("details",e).map(n=>Sn(n,{target$:r,print$:o})),...q("[data-tabs]",e).map(n=>_n(n,{viewport$:t})))}function Oa(e,{alert$:t}){return t.pipe(E(r=>_(j(!0),j(!1).pipe(ze(2e3))).pipe(m(o=>({message:r,active:o})))))}function Cn(e,t){let r=W(".md-typeset",e);return H(()=>{let o=new x;return o.subscribe(({message:n,active:i})=>{e.classList.toggle("md-dialog--active",i),r.textContent=n}),Oa(e,t).pipe(w(n=>o.next(n)),A(()=>o.complete()),m(n=>R({ref:e},n)))})}function Ma({viewport$:e}){if(!te("header.autohide"))return j(!1);let t=e.pipe(m(({offset:{y:n}})=>n),Le(2,1),m(([n,i])=>[nMath.abs(i-n.y)>100),m(([,[n]])=>n),X()),o=We("search");return B([e,o]).pipe(m(([{offset:n},i])=>n.y>400&&!i),X(),E(n=>n?r:j(!1)),V(!1))}function kn(e,t){return H(()=>B([ye(e),Ma(t)])).pipe(m(([{height:r},o])=>({height:r,hidden:o})),X((r,o)=>r.height===o.height&&r.hidden===o.hidden),J(1))}function Hn(e,{header$:t,main$:r}){return H(()=>{let o=new x,n=o.pipe(Z(),re(!0));return o.pipe(ee("active"),Ge(t)).subscribe(([{active:i},{hidden:s}])=>{e.classList.toggle("md-header--shadow",i&&!s),e.hidden=s}),r.subscribe(o),t.pipe(Y(n),m(i=>R({ref:e},i)))})}function La(e,{viewport$:t,header$:r}){return ar(e,{viewport$:t,header$:r}).pipe(m(({offset:{y:o}})=>{let{height:n}=he(e);return{active:o>=n}}),ee("active"))}function $n(e,t){return H(()=>{let r=new x;r.subscribe({next({active:n}){e.classList.toggle("md-header__title--active",n)},complete(){e.classList.remove("md-header__title--active")}});let o=ce(".md-content h1");return typeof o=="undefined"?L:La(o,t).pipe(w(n=>r.next(n)),A(()=>r.complete()),m(n=>R({ref:e},n)))})}function Rn(e,{viewport$:t,header$:r}){let o=r.pipe(m(({height:i})=>i),X()),n=o.pipe(E(()=>ye(e).pipe(m(({height:i})=>({top:e.offsetTop,bottom:e.offsetTop+i})),ee("bottom"))));return 
B([o,n,t]).pipe(m(([i,{top:s,bottom:a},{offset:{y:c},size:{height:p}}])=>(p=Math.max(0,p-Math.max(0,s-c,i)-Math.max(0,p+c-a)),{offset:s-i,height:p,active:s-i<=c})),X((i,s)=>i.offset===s.offset&&i.height===s.height&&i.active===s.active))}function _a(e){let t=__md_get("__palette")||{index:e.findIndex(r=>matchMedia(r.getAttribute("data-md-color-media")).matches)};return j(...e).pipe(se(r=>h(r,"change").pipe(m(()=>r))),V(e[Math.max(0,t.index)]),m(r=>({index:e.indexOf(r),color:{scheme:r.getAttribute("data-md-color-scheme"),primary:r.getAttribute("data-md-color-primary"),accent:r.getAttribute("data-md-color-accent")}})),J(1))}function Pn(e){let t=T("meta",{name:"theme-color"});document.head.appendChild(t);let r=T("meta",{name:"color-scheme"});return document.head.appendChild(r),H(()=>{let o=new x;o.subscribe(i=>{document.body.setAttribute("data-md-color-switching","");for(let[s,a]of Object.entries(i.color))document.body.setAttribute(`data-md-color-${s}`,a);for(let s=0;s{let i=Ee("header"),s=window.getComputedStyle(i);return r.content=s.colorScheme,s.backgroundColor.match(/\d+/g).map(a=>(+a).toString(16).padStart(2,"0")).join("")})).subscribe(i=>t.content=`#${i}`),o.pipe(Se(ae)).subscribe(()=>{document.body.removeAttribute("data-md-color-switching")});let n=q("input",e);return _a(n).pipe(w(i=>o.next(i)),A(()=>o.complete()),m(i=>R({ref:e},i)))})}function In(e,{progress$:t}){return H(()=>{let r=new x;return r.subscribe(({value:o})=>{e.style.setProperty("--md-progress-value",`${o}`)}),t.pipe(w(o=>r.next({value:o})),A(()=>r.complete()),m(o=>({ref:e,value:o})))})}var qr=Ht(Vr());function Aa(e){e.setAttribute("data-md-copying","");let t=e.closest("[data-copy]"),r=t?t.getAttribute("data-copy"):e.innerText;return e.removeAttribute("data-md-copying"),r}function Fn({alert$:e}){qr.default.isSupported()&&new P(t=>{new qr.default("[data-clipboard-target], 
[data-clipboard-text]",{text:r=>r.getAttribute("data-clipboard-text")||Aa(W(r.getAttribute("data-clipboard-target")))}).on("success",r=>t.next(r))}).pipe(w(t=>{t.trigger.focus()}),m(()=>be("clipboard.copied"))).subscribe(e)}function Ca(e){if(e.length<2)return[""];let[t,r]=[...e].sort((n,i)=>n.length-i.length).map(n=>n.replace(/[^/]+$/,"")),o=0;if(t===r)o=t.length;else for(;t.charCodeAt(o)===r.charCodeAt(o);)o++;return e.map(n=>n.replace(t.slice(0,o),""))}function cr(e){let t=__md_get("__sitemap",sessionStorage,e);if(t)return j(t);{let r=me();return Zo(new URL("sitemap.xml",e||r.base)).pipe(m(o=>Ca(q("loc",o).map(n=>n.textContent))),de(()=>L),He([]),w(o=>__md_set("__sitemap",o,sessionStorage,e)))}}function jn(e){let t=W("[rel=canonical]",e);t.href=t.href.replace("//localhost:","//127.0.0.1");let r=new Map;for(let o of q(":scope > *",e)){let n=o.outerHTML;for(let i of["href","src"]){let s=o.getAttribute(i);if(s===null)continue;let a=new URL(s,t.href),c=o.cloneNode();c.setAttribute(i,`${a}`),n=c.outerHTML;break}r.set(n,o)}return r}function Wn({location$:e,viewport$:t,progress$:r}){let o=me();if(location.protocol==="file:")return L;let n=cr().pipe(m(l=>l.map(f=>`${new URL(f,o.base)}`))),i=h(document.body,"click").pipe(ne(n),E(([l,f])=>{if(!(l.target instanceof Element))return L;let u=l.target.closest("a");if(u===null)return L;if(u.target||l.metaKey||l.ctrlKey)return L;let d=new URL(u.href);return d.search=d.hash="",f.includes(`${d}`)?(l.preventDefault(),j(new URL(u.href))):L}),le());i.pipe(xe(1)).subscribe(()=>{let l=ce("link[rel=icon]");typeof l!="undefined"&&(l.href=l.href)}),h(window,"beforeunload").subscribe(()=>{history.scrollRestoration="auto"}),i.pipe(ne(t)).subscribe(([l,{offset:f}])=>{history.scrollRestoration="manual",history.replaceState(f,""),history.pushState(null,"",l)}),i.subscribe(e);let s=e.pipe(V(pe()),ee("pathname"),je(1),E(l=>ir(l,{progress$:r}).pipe(de(()=>(ot(l,!0),L))))),a=new DOMParser,c=s.pipe(E(l=>l.text()),E(l=>{let 
f=a.parseFromString(l,"text/html");for(let b of["[data-md-component=announce]","[data-md-component=container]","[data-md-component=header-topic]","[data-md-component=outdated]","[data-md-component=logo]","[data-md-component=skip]",...te("navigation.tabs.sticky")?["[data-md-component=tabs]"]:[]]){let z=ce(b),K=ce(b,f);typeof z!="undefined"&&typeof K!="undefined"&&z.replaceWith(K)}let u=jn(document.head),d=jn(f.head);for(let[b,z]of d)z.getAttribute("rel")==="stylesheet"||z.hasAttribute("src")||(u.has(b)?u.delete(b):document.head.appendChild(z));for(let b of u.values())b.getAttribute("rel")==="stylesheet"||b.hasAttribute("src")||b.remove();let v=Ee("container");return Fe(q("script",v)).pipe(E(b=>{let z=f.createElement("script");if(b.src){for(let K of b.getAttributeNames())z.setAttribute(K,b.getAttribute(K));return b.replaceWith(z),new P(K=>{z.onload=()=>K.complete()})}else return z.textContent=b.textContent,b.replaceWith(z),L}),Z(),re(f))}),le());return h(window,"popstate").pipe(m(pe)).subscribe(e),e.pipe(V(pe()),Le(2,1),M(([l,f])=>l.pathname===f.pathname&&l.hash!==f.hash),m(([,l])=>l)).subscribe(l=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):(history.scrollRestoration="auto",nr(l.hash),history.scrollRestoration="manual")}),e.pipe(Cr(i),V(pe()),Le(2,1),M(([l,f])=>l.pathname===f.pathname&&l.hash===f.hash),m(([,l])=>l)).subscribe(l=>{history.scrollRestoration="auto",nr(l.hash),history.scrollRestoration="manual",history.back()}),c.pipe(ne(e)).subscribe(([,l])=>{var f,u;history.state!==null||!l.hash?window.scrollTo(0,(u=(f=history.state)==null?void 0:f.y)!=null?u:0):nr(l.hash)}),t.pipe(ee("offset"),ke(100)).subscribe(({offset:l})=>{history.replaceState(l,"")}),c}var Dn=Ht(Nn());function Vn(e){let t=e.separator.split("|").map(n=>n.replace(/(\(\?[!=<][^)]+\))/g,"").length===0?"\uFFFD":n).join("|"),r=new RegExp(t,"img"),o=(n,i,s)=>`${i}${s}`;return n=>{n=n.replace(/[\s*+\-:~^]+/g," ").trim();let i=new 
RegExp(`(^|${e.separator}|)(${n.replace(/[|\\{}()[\]^$+*?.-]/g,"\\$&").replace(r,"|")})`,"img");return s=>(0,Dn.default)(s).replace(i,o).replace(/<\/mark>(\s+)]*>/img,"$1")}}function Mt(e){return e.type===1}function pr(e){return e.type===3}function zn(e,t){let r=an(e);return _(j(location.protocol!=="file:"),We("search")).pipe($e(o=>o),E(()=>t)).subscribe(({config:o,docs:n})=>r.next({type:0,data:{config:o,docs:n,options:{suggest:te("search.suggest")}}})),r}function qn({document$:e}){let t=me(),r=Ue(new URL("../versions.json",t.base)).pipe(de(()=>L)),o=r.pipe(m(n=>{let[,i]=t.base.match(/([^/]+)\/?$/);return n.find(({version:s,aliases:a})=>s===i||a.includes(i))||n[0]}));r.pipe(m(n=>new Map(n.map(i=>[`${new URL(`../${i.version}/`,t.base)}`,i]))),E(n=>h(document.body,"click").pipe(M(i=>!i.metaKey&&!i.ctrlKey),ne(o),E(([i,s])=>{if(i.target instanceof Element){let a=i.target.closest("a");if(a&&!a.target&&n.has(a.href)){let c=a.href;return!i.target.closest(".md-version")&&n.get(c)===s?L:(i.preventDefault(),j(c))}}return L}),E(i=>{let{version:s}=n.get(i);return cr(new URL(i)).pipe(m(a=>{let p=pe().href.replace(t.base,"");return a.includes(p.split("#")[0])?new URL(`../${s}/${p}`,t.base):new URL(i)}))})))).subscribe(n=>ot(n,!0)),B([r,o]).subscribe(([n,i])=>{W(".md-header__topic").appendChild(hn(n,i))}),e.pipe(E(()=>o)).subscribe(n=>{var s;let i=__md_get("__outdated",sessionStorage);if(i===null){i=!0;let a=((s=t.version)==null?void 0:s.default)||"latest";Array.isArray(a)||(a=[a]);e:for(let c of a)for(let p of n.aliases)if(new RegExp(c,"i").test(p)){i=!1;break e}__md_set("__outdated",i,sessionStorage)}if(i)for(let a of oe("outdated"))a.hidden=!1})}function Pa(e,{worker$:t}){let{searchParams:r}=pe();r.has("q")&&(Ke("search",!0),e.value=r.get("q"),e.focus(),We("search").pipe($e(i=>!i)).subscribe(()=>{let i=pe();i.searchParams.delete("q"),history.replaceState({},"",`${i}`)}));let o=Zt(e),n=_(t.pipe($e(Mt)),h(e,"keyup"),o).pipe(m(()=>e.value),X());return 
B([n,o]).pipe(m(([i,s])=>({value:i,focus:s})),J(1))}function Kn(e,{worker$:t}){let r=new x,o=r.pipe(Z(),re(!0));B([t.pipe($e(Mt)),r],(i,s)=>s).pipe(ee("value")).subscribe(({value:i})=>t.next({type:2,data:i})),r.pipe(ee("focus")).subscribe(({focus:i})=>{i&&Ke("search",i)}),h(e.form,"reset").pipe(Y(o)).subscribe(()=>e.focus());let n=W("header [for=__search]");return h(n,"click").subscribe(()=>e.focus()),Pa(e,{worker$:t}).pipe(w(i=>r.next(i)),A(()=>r.complete()),m(i=>R({ref:e},i)),J(1))}function Qn(e,{worker$:t,query$:r}){let o=new x,n=Ko(e.parentElement).pipe(M(Boolean)),i=e.parentElement,s=W(":scope > :first-child",e),a=W(":scope > :last-child",e);We("search").subscribe(l=>a.setAttribute("role",l?"list":"presentation")),o.pipe(ne(r),$r(t.pipe($e(Mt)))).subscribe(([{items:l},{value:f}])=>{switch(l.length){case 0:s.textContent=f.length?be("search.result.none"):be("search.result.placeholder");break;case 1:s.textContent=be("search.result.one");break;default:let u=tr(l.length);s.textContent=be("search.result.other",u)}});let c=o.pipe(w(()=>a.innerHTML=""),E(({items:l})=>_(j(...l.slice(0,10)),j(...l.slice(10)).pipe(Le(4),Ir(n),E(([f])=>f)))),m(fn),le());return c.subscribe(l=>a.appendChild(l)),c.pipe(se(l=>{let f=ce("details",l);return typeof f=="undefined"?L:h(f,"toggle").pipe(Y(o),m(()=>f))})).subscribe(l=>{l.open===!1&&l.offsetTop<=i.scrollTop&&i.scrollTo({top:l.offsetTop})}),t.pipe(M(pr),m(({data:l})=>l)).pipe(w(l=>o.next(l)),A(()=>o.complete()),m(l=>R({ref:e},l)))}function Ia(e,{query$:t}){return t.pipe(m(({value:r})=>{let o=pe();return o.hash="",r=r.replace(/\s+/g,"+").replace(/&/g,"%26").replace(/=/g,"%3D"),o.search=`q=${r}`,{url:o}}))}function Yn(e,t){let r=new x,o=r.pipe(Z(),re(!0));return r.subscribe(({url:n})=>{e.setAttribute("data-clipboard-text",e.href),e.href=`${n}`}),h(e,"click").pipe(Y(o)).subscribe(n=>n.preventDefault()),Ia(e,t).pipe(w(n=>r.next(n)),A(()=>r.complete()),m(n=>R({ref:e},n)))}function Bn(e,{worker$:t,keyboard$:r}){let o=new 
x,n=Ee("search-query"),i=_(h(n,"keydown"),h(n,"focus")).pipe(Se(ae),m(()=>n.value),X());return o.pipe(Ge(i),m(([{suggest:a},c])=>{let p=c.split(/([\s-]+)/);if(a!=null&&a.length&&p[p.length-1]){let l=a[a.length-1];l.startsWith(p[p.length-1])&&(p[p.length-1]=l)}else p.length=0;return p})).subscribe(a=>e.innerHTML=a.join("").replace(/\s/g," ")),r.pipe(M(({mode:a})=>a==="search")).subscribe(a=>{switch(a.type){case"ArrowRight":e.innerText.length&&n.selectionStart===n.value.length&&(n.value=e.innerText);break}}),t.pipe(M(pr),m(({data:a})=>a)).pipe(w(a=>o.next(a)),A(()=>o.complete()),m(()=>({ref:e})))}function Gn(e,{index$:t,keyboard$:r}){let o=me();try{let n=zn(o.search,t),i=Ee("search-query",e),s=Ee("search-result",e);h(e,"click").pipe(M(({target:c})=>c instanceof Element&&!!c.closest("a"))).subscribe(()=>Ke("search",!1)),r.pipe(M(({mode:c})=>c==="search")).subscribe(c=>{let p=Re();switch(c.type){case"Enter":if(p===i){let l=new Map;for(let f of q(":first-child [href]",s)){let u=f.firstElementChild;l.set(f,parseFloat(u.getAttribute("data-md-score")))}if(l.size){let[[f]]=[...l].sort(([,u],[,d])=>d-u);f.click()}c.claim()}break;case"Escape":case"Tab":Ke("search",!1),i.blur();break;case"ArrowUp":case"ArrowDown":if(typeof p=="undefined")i.focus();else{let l=[i,...q(":not(details) > [href], summary, details[open] [href]",s)],f=Math.max(0,(Math.max(0,l.indexOf(p))+l.length+(c.type==="ArrowUp"?-1:1))%l.length);l[f].focus()}c.claim();break;default:i!==Re()&&i.focus()}}),r.pipe(M(({mode:c})=>c==="global")).subscribe(c=>{switch(c.type){case"f":case"s":case"/":i.focus(),i.select(),c.claim();break}});let a=Kn(i,{worker$:n});return _(a,Qn(s,{worker$:n,query$:a})).pipe(qe(...oe("search-share",e).map(c=>Yn(c,{query$:a})),...oe("search-suggest",e).map(c=>Bn(c,{worker$:n,keyboard$:r}))))}catch(n){return e.hidden=!0,Ve}}function Jn(e,{index$:t,location$:r}){return B([t,r.pipe(V(pe()),M(o=>!!o.searchParams.get("h")))]).pipe(m(([o,n])=>Vn(o.config)(n.searchParams.get("h"))),m(o=>{var s;let 
n=new Map,i=document.createNodeIterator(e,NodeFilter.SHOW_TEXT);for(let a=i.nextNode();a;a=i.nextNode())if((s=a.parentElement)!=null&&s.offsetHeight){let c=a.textContent,p=o(c);p.length>c.length&&n.set(a,p)}for(let[a,c]of n){let{childNodes:p}=T("span",null,c);a.replaceWith(...Array.from(p))}return{ref:e,nodes:n}}))}function Fa(e,{viewport$:t,main$:r}){let o=e.closest(".md-grid"),n=o.offsetTop-o.parentElement.offsetTop;return B([r,t]).pipe(m(([{offset:i,height:s},{offset:{y:a}}])=>(s=s+Math.min(n,Math.max(0,a-i))-n,{height:s,locked:a>=i+n})),X((i,s)=>i.height===s.height&&i.locked===s.locked))}function Kr(e,o){var n=o,{header$:t}=n,r=eo(n,["header$"]);let i=W(".md-sidebar__scrollwrap",e),{y:s}=Je(i);return H(()=>{let a=new x,c=a.pipe(Z(),re(!0)),p=a.pipe(Ce(0,Oe));return p.pipe(ne(t)).subscribe({next([{height:l},{height:f}]){i.style.height=`${l-2*s}px`,e.style.top=`${f}px`},complete(){i.style.height="",e.style.top=""}}),p.pipe($e()).subscribe(()=>{for(let l of q(".md-nav__link--active[href]",e)){if(!l.clientHeight)continue;let f=l.closest(".md-sidebar__scrollwrap");if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2})}}}),ge(q("label[tabindex]",e)).pipe(se(l=>h(l,"click").pipe(Se(ae),m(()=>l),Y(c)))).subscribe(l=>{let f=W(`[id="${l.htmlFor}"]`);W(`[aria-labelledby="${l.id}"]`).setAttribute("aria-expanded",`${f.checked}`)}),Fa(e,r).pipe(w(l=>a.next(l)),A(()=>a.complete()),m(l=>R({ref:e},l)))})}function Xn(e,t){if(typeof t!="undefined"){let r=`https://api.github.com/repos/${e}/${t}`;return St(Ue(`${r}/releases/latest`).pipe(de(()=>L),m(o=>({version:o.tag_name})),He({})),Ue(r).pipe(de(()=>L),m(o=>({stars:o.stargazers_count,forks:o.forks_count})),He({}))).pipe(m(([o,n])=>R(R({},o),n)))}else{let r=`https://api.github.com/users/${e}`;return Ue(r).pipe(m(o=>({repositories:o.public_repos})),He({}))}}function Zn(e,t){let r=`https://${e}/api/v4/projects/${encodeURIComponent(t)}`;return 
Ue(r).pipe(de(()=>L),m(({star_count:o,forks_count:n})=>({stars:o,forks:n})),He({}))}function ei(e){let t=e.match(/^.+github\.com\/([^/]+)\/?([^/]+)?/i);if(t){let[,r,o]=t;return Xn(r,o)}if(t=e.match(/^.+?([^/]*gitlab[^/]+)\/(.+?)\/?$/i),t){let[,r,o]=t;return Zn(r,o)}return L}var ja;function Wa(e){return ja||(ja=H(()=>{let t=__md_get("__source",sessionStorage);if(t)return j(t);if(oe("consent").length){let o=__md_get("__consent");if(!(o&&o.github))return L}return ei(e.href).pipe(w(o=>__md_set("__source",o,sessionStorage)))}).pipe(de(()=>L),M(t=>Object.keys(t).length>0),m(t=>({facts:t})),J(1)))}function ti(e){let t=W(":scope > :last-child",e);return H(()=>{let r=new x;return r.subscribe(({facts:o})=>{t.appendChild(un(o)),t.classList.add("md-source__repository--active")}),Wa(e).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Ua(e,{viewport$:t,header$:r}){return ye(document.body).pipe(E(()=>ar(e,{header$:r,viewport$:t})),m(({offset:{y:o}})=>({hidden:o>=10})),ee("hidden"))}function ri(e,t){return H(()=>{let r=new x;return r.subscribe({next({hidden:o}){e.hidden=o},complete(){e.hidden=!1}}),(te("navigation.tabs.sticky")?j({hidden:!1}):Ua(e,t)).pipe(w(o=>r.next(o)),A(()=>r.complete()),m(o=>R({ref:e},o)))})}function Na(e,{viewport$:t,header$:r}){let o=new Map,n=q("[href^=\\#]",e);for(let a of n){let c=decodeURIComponent(a.hash.substring(1)),p=ce(`[id="${c}"]`);typeof p!="undefined"&&o.set(a,p)}let i=r.pipe(ee("height"),m(({height:a})=>{let c=Ee("main"),p=W(":scope > :first-child",c);return a+.8*(p.offsetTop-c.offsetTop)}),le());return ye(document.body).pipe(ee("height"),E(a=>H(()=>{let c=[];return j([...o].reduce((p,[l,f])=>{for(;c.length&&o.get(c[c.length-1]).tagName>=f.tagName;)c.pop();let u=f.offsetTop;for(;!u&&f.parentElement;)f=f.parentElement,u=f.offsetTop;let d=f.offsetParent;for(;d;d=d.offsetParent)u+=d.offsetTop;return p.set([...c=[...c,l]].reverse(),u)},new Map))}).pipe(m(c=>new 
Map([...c].sort(([,p],[,l])=>p-l))),Ge(i),E(([c,p])=>t.pipe(kr(([l,f],{offset:{y:u},size:d})=>{let v=u+d.height>=Math.floor(a.height);for(;f.length;){let[,b]=f[0];if(b-p=u&&!v)f=[l.pop(),...f];else break}return[l,f]},[[],[...c]]),X((l,f)=>l[0]===f[0]&&l[1]===f[1])))))).pipe(m(([a,c])=>({prev:a.map(([p])=>p),next:c.map(([p])=>p)})),V({prev:[],next:[]}),Le(2,1),m(([a,c])=>a.prev.length{let i=new x,s=i.pipe(Z(),re(!0));if(i.subscribe(({prev:a,next:c})=>{for(let[p]of c)p.classList.remove("md-nav__link--passed"),p.classList.remove("md-nav__link--active");for(let[p,[l]]of a.entries())l.classList.add("md-nav__link--passed"),l.classList.toggle("md-nav__link--active",p===a.length-1)}),te("toc.follow")){let a=_(t.pipe(ke(1),m(()=>{})),t.pipe(ke(250),m(()=>"smooth")));i.pipe(M(({prev:c})=>c.length>0),Ge(o.pipe(Se(ae))),ne(a)).subscribe(([[{prev:c}],p])=>{let[l]=c[c.length-1];if(l.offsetHeight){let f=zo(l);if(typeof f!="undefined"){let u=l.offsetTop-f.offsetTop,{height:d}=he(f);f.scrollTo({top:u-d/2,behavior:p})}}})}return te("navigation.tracking")&&t.pipe(Y(s),ee("offset"),ke(250),je(1),Y(n.pipe(je(1))),Tt({delay:250}),ne(i)).subscribe(([,{prev:a}])=>{let c=pe(),p=a[a.length-1];if(p&&p.length){let[l]=p,{hash:f}=new URL(l.href);c.hash!==f&&(c.hash=f,history.replaceState({},"",`${c}`))}else c.hash="",history.replaceState({},"",`${c}`)}),Na(e,{viewport$:t,header$:r}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))})}function Da(e,{viewport$:t,main$:r,target$:o}){let n=t.pipe(m(({offset:{y:s}})=>s),Le(2,1),m(([s,a])=>s>a&&a>0),X()),i=r.pipe(m(({active:s})=>s));return B([i,n]).pipe(m(([s,a])=>!(s&&a)),X(),Y(o.pipe(je(1))),re(!0),Tt({delay:250}),m(s=>({hidden:s})))}function ni(e,{viewport$:t,header$:r,main$:o,target$:n}){let i=new x,s=i.pipe(Z(),re(!0));return 
i.subscribe({next({hidden:a}){e.hidden=a,a?(e.setAttribute("tabindex","-1"),e.blur()):e.removeAttribute("tabindex")},complete(){e.style.top="",e.hidden=!0,e.removeAttribute("tabindex")}}),r.pipe(Y(s),ee("height")).subscribe(({height:a})=>{e.style.top=`${a+16}px`}),h(e,"click").subscribe(a=>{a.preventDefault(),window.scrollTo({top:0})}),Da(e,{viewport$:t,main$:o,target$:n}).pipe(w(a=>i.next(a)),A(()=>i.complete()),m(a=>R({ref:e},a)))}function ii({document$:e,tablet$:t}){e.pipe(E(()=>q(".md-toggle--indeterminate")),w(r=>{r.indeterminate=!0,r.checked=!1}),se(r=>h(r,"change").pipe(Rr(()=>r.classList.contains("md-toggle--indeterminate")),m(()=>r))),ne(t)).subscribe(([r,o])=>{r.classList.remove("md-toggle--indeterminate"),o&&(r.checked=!1)})}function Va(){return/(iPad|iPhone|iPod)/.test(navigator.userAgent)}function ai({document$:e}){e.pipe(E(()=>q("[data-md-scrollfix]")),w(t=>t.removeAttribute("data-md-scrollfix")),M(Va),se(t=>h(t,"touchstart").pipe(m(()=>t)))).subscribe(t=>{let r=t.scrollTop;r===0?t.scrollTop=1:r+t.offsetHeight===t.scrollHeight&&(t.scrollTop=r-1)})}function si({viewport$:e,tablet$:t}){B([We("search"),t]).pipe(m(([r,o])=>r&&!o),E(r=>j(r).pipe(ze(r?400:100))),ne(e)).subscribe(([r,{offset:{y:o}}])=>{if(r)document.body.setAttribute("data-md-scrolllock",""),document.body.style.top=`-${o}px`;else{let n=-1*parseInt(document.body.style.top,10);document.body.removeAttribute("data-md-scrolllock"),document.body.style.top="",n&&window.scrollTo(0,n)}})}Object.entries||(Object.entries=function(e){let t=[];for(let r of Object.keys(e))t.push([r,e[r]]);return t});Object.values||(Object.values=function(e){let t=[];for(let r of Object.keys(e))t.push(e[r]);return t});typeof Element!="undefined"&&(Element.prototype.scrollTo||(Element.prototype.scrollTo=function(e,t){typeof e=="object"?(this.scrollLeft=e.left,this.scrollTop=e.top):(this.scrollLeft=e,this.scrollTop=t)}),Element.prototype.replaceWith||(Element.prototype.replaceWith=function(...e){let 
t=this.parentNode;if(t){e.length===0&&t.removeChild(this);for(let r=e.length-1;r>=0;r--){let o=e[r];typeof o=="string"?o=document.createTextNode(o):o.parentNode&&o.parentNode.removeChild(o),r?t.insertBefore(this.previousSibling,o):t.replaceChild(o,this)}}}));function za(){return location.protocol==="file:"?ht(`${new URL("search/search_index.js",Qr.base)}`).pipe(m(()=>__index),J(1)):Ue(new URL("search/search_index.json",Qr.base))}document.documentElement.classList.remove("no-js");document.documentElement.classList.add("js");var nt=Uo(),_t=Bo(),gt=Jo(_t),Yr=Yo(),Te=nn(),lr=Fr("(min-width: 960px)"),pi=Fr("(min-width: 1220px)"),li=Xo(),Qr=me(),mi=document.forms.namedItem("search")?za():Ve,Br=new x;Fn({alert$:Br});var Gr=new x;te("navigation.instant")&&Wn({location$:_t,viewport$:Te,progress$:Gr}).subscribe(nt);var ci;((ci=Qr.version)==null?void 0:ci.provider)==="mike"&&qn({document$:nt});_(_t,gt).pipe(ze(125)).subscribe(()=>{Ke("drawer",!1),Ke("search",!1)});Yr.pipe(M(({mode:e})=>e==="global")).subscribe(e=>{switch(e.type){case"p":case",":let t=ce("link[rel=prev]");typeof t!="undefined"&&ot(t);break;case"n":case".":let r=ce("link[rel=next]");typeof r!="undefined"&&ot(r);break;case"Enter":let o=Re();o instanceof HTMLLabelElement&&o.click()}});ii({document$:nt,tablet$:lr});ai({document$:nt});si({viewport$:Te,tablet$:lr});var 
Xe=kn(Ee("header"),{viewport$:Te}),Lt=nt.pipe(m(()=>Ee("main")),E(e=>Rn(e,{viewport$:Te,header$:Xe})),J(1)),qa=_(...oe("consent").map(e=>cn(e,{target$:gt})),...oe("dialog").map(e=>Cn(e,{alert$:Br})),...oe("header").map(e=>Hn(e,{viewport$:Te,header$:Xe,main$:Lt})),...oe("palette").map(e=>Pn(e)),...oe("progress").map(e=>In(e,{progress$:Gr})),...oe("search").map(e=>Gn(e,{index$:mi,keyboard$:Yr})),...oe("source").map(e=>ti(e))),Ka=H(()=>_(...oe("announce").map(e=>sn(e)),...oe("content").map(e=>An(e,{viewport$:Te,target$:gt,print$:li})),...oe("content").map(e=>te("search.highlight")?Jn(e,{index$:mi,location$:_t}):L),...oe("header-title").map(e=>$n(e,{viewport$:Te,header$:Xe})),...oe("sidebar").map(e=>e.getAttribute("data-md-type")==="navigation"?jr(pi,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt})):jr(lr,()=>Kr(e,{viewport$:Te,header$:Xe,main$:Lt}))),...oe("tabs").map(e=>ri(e,{viewport$:Te,header$:Xe})),...oe("toc").map(e=>oi(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})),...oe("top").map(e=>ni(e,{viewport$:Te,header$:Xe,main$:Lt,target$:gt})))),fi=nt.pipe(E(()=>Ka),qe(qa),J(1));fi.subscribe();window.document$=nt;window.location$=_t;window.target$=gt;window.keyboard$=Yr;window.viewport$=Te;window.tablet$=lr;window.screen$=pi;window.print$=li;window.alert$=Br;window.progress$=Gr;window.component$=fi;})(); +//# sourceMappingURL=bundle.aecac24b.min.js.map + diff --git a/assets/javascripts/bundle.aecac24b.min.js.map b/assets/javascripts/bundle.aecac24b.min.js.map new file mode 100644 index 00000000..b1534de5 --- /dev/null +++ b/assets/javascripts/bundle.aecac24b.min.js.map @@ -0,0 +1,7 @@ +{ + "version": 3, + "sources": ["node_modules/focus-visible/dist/focus-visible.js", "node_modules/clipboard/dist/clipboard.js", "node_modules/escape-html/index.js", "src/templates/assets/javascripts/bundle.ts", "node_modules/rxjs/node_modules/tslib/tslib.es6.js", "node_modules/rxjs/src/internal/util/isFunction.ts", "node_modules/rxjs/src/internal/util/createErrorClass.ts", 
"node_modules/rxjs/src/internal/util/UnsubscriptionError.ts", "node_modules/rxjs/src/internal/util/arrRemove.ts", "node_modules/rxjs/src/internal/Subscription.ts", "node_modules/rxjs/src/internal/config.ts", "node_modules/rxjs/src/internal/scheduler/timeoutProvider.ts", "node_modules/rxjs/src/internal/util/reportUnhandledError.ts", "node_modules/rxjs/src/internal/util/noop.ts", "node_modules/rxjs/src/internal/NotificationFactories.ts", "node_modules/rxjs/src/internal/util/errorContext.ts", "node_modules/rxjs/src/internal/Subscriber.ts", "node_modules/rxjs/src/internal/symbol/observable.ts", "node_modules/rxjs/src/internal/util/identity.ts", "node_modules/rxjs/src/internal/util/pipe.ts", "node_modules/rxjs/src/internal/Observable.ts", "node_modules/rxjs/src/internal/util/lift.ts", "node_modules/rxjs/src/internal/operators/OperatorSubscriber.ts", "node_modules/rxjs/src/internal/scheduler/animationFrameProvider.ts", "node_modules/rxjs/src/internal/util/ObjectUnsubscribedError.ts", "node_modules/rxjs/src/internal/Subject.ts", "node_modules/rxjs/src/internal/scheduler/dateTimestampProvider.ts", "node_modules/rxjs/src/internal/ReplaySubject.ts", "node_modules/rxjs/src/internal/scheduler/Action.ts", "node_modules/rxjs/src/internal/scheduler/intervalProvider.ts", "node_modules/rxjs/src/internal/scheduler/AsyncAction.ts", "node_modules/rxjs/src/internal/Scheduler.ts", "node_modules/rxjs/src/internal/scheduler/AsyncScheduler.ts", "node_modules/rxjs/src/internal/scheduler/async.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameAction.ts", "node_modules/rxjs/src/internal/scheduler/AnimationFrameScheduler.ts", "node_modules/rxjs/src/internal/scheduler/animationFrame.ts", "node_modules/rxjs/src/internal/observable/empty.ts", "node_modules/rxjs/src/internal/util/isScheduler.ts", "node_modules/rxjs/src/internal/util/args.ts", "node_modules/rxjs/src/internal/util/isArrayLike.ts", "node_modules/rxjs/src/internal/util/isPromise.ts", 
"node_modules/rxjs/src/internal/util/isInteropObservable.ts", "node_modules/rxjs/src/internal/util/isAsyncIterable.ts", "node_modules/rxjs/src/internal/util/throwUnobservableError.ts", "node_modules/rxjs/src/internal/symbol/iterator.ts", "node_modules/rxjs/src/internal/util/isIterable.ts", "node_modules/rxjs/src/internal/util/isReadableStreamLike.ts", "node_modules/rxjs/src/internal/observable/innerFrom.ts", "node_modules/rxjs/src/internal/util/executeSchedule.ts", "node_modules/rxjs/src/internal/operators/observeOn.ts", "node_modules/rxjs/src/internal/operators/subscribeOn.ts", "node_modules/rxjs/src/internal/scheduled/scheduleObservable.ts", "node_modules/rxjs/src/internal/scheduled/schedulePromise.ts", "node_modules/rxjs/src/internal/scheduled/scheduleArray.ts", "node_modules/rxjs/src/internal/scheduled/scheduleIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleAsyncIterable.ts", "node_modules/rxjs/src/internal/scheduled/scheduleReadableStreamLike.ts", "node_modules/rxjs/src/internal/scheduled/scheduled.ts", "node_modules/rxjs/src/internal/observable/from.ts", "node_modules/rxjs/src/internal/observable/of.ts", "node_modules/rxjs/src/internal/observable/throwError.ts", "node_modules/rxjs/src/internal/util/EmptyError.ts", "node_modules/rxjs/src/internal/util/isDate.ts", "node_modules/rxjs/src/internal/operators/map.ts", "node_modules/rxjs/src/internal/util/mapOneOrManyArgs.ts", "node_modules/rxjs/src/internal/util/argsArgArrayOrObject.ts", "node_modules/rxjs/src/internal/util/createObject.ts", "node_modules/rxjs/src/internal/observable/combineLatest.ts", "node_modules/rxjs/src/internal/operators/mergeInternals.ts", "node_modules/rxjs/src/internal/operators/mergeMap.ts", "node_modules/rxjs/src/internal/operators/mergeAll.ts", "node_modules/rxjs/src/internal/operators/concatAll.ts", "node_modules/rxjs/src/internal/observable/concat.ts", "node_modules/rxjs/src/internal/observable/defer.ts", "node_modules/rxjs/src/internal/observable/fromEvent.ts", 
"node_modules/rxjs/src/internal/observable/fromEventPattern.ts", "node_modules/rxjs/src/internal/observable/timer.ts", "node_modules/rxjs/src/internal/observable/merge.ts", "node_modules/rxjs/src/internal/observable/never.ts", "node_modules/rxjs/src/internal/util/argsOrArgArray.ts", "node_modules/rxjs/src/internal/operators/filter.ts", "node_modules/rxjs/src/internal/observable/zip.ts", "node_modules/rxjs/src/internal/operators/audit.ts", "node_modules/rxjs/src/internal/operators/auditTime.ts", "node_modules/rxjs/src/internal/operators/bufferCount.ts", "node_modules/rxjs/src/internal/operators/catchError.ts", "node_modules/rxjs/src/internal/operators/scanInternals.ts", "node_modules/rxjs/src/internal/operators/combineLatest.ts", "node_modules/rxjs/src/internal/operators/combineLatestWith.ts", "node_modules/rxjs/src/internal/operators/debounceTime.ts", "node_modules/rxjs/src/internal/operators/defaultIfEmpty.ts", "node_modules/rxjs/src/internal/operators/take.ts", "node_modules/rxjs/src/internal/operators/ignoreElements.ts", "node_modules/rxjs/src/internal/operators/mapTo.ts", "node_modules/rxjs/src/internal/operators/delayWhen.ts", "node_modules/rxjs/src/internal/operators/delay.ts", "node_modules/rxjs/src/internal/operators/distinctUntilChanged.ts", "node_modules/rxjs/src/internal/operators/distinctUntilKeyChanged.ts", "node_modules/rxjs/src/internal/operators/throwIfEmpty.ts", "node_modules/rxjs/src/internal/operators/endWith.ts", "node_modules/rxjs/src/internal/operators/finalize.ts", "node_modules/rxjs/src/internal/operators/first.ts", "node_modules/rxjs/src/internal/operators/merge.ts", "node_modules/rxjs/src/internal/operators/mergeWith.ts", "node_modules/rxjs/src/internal/operators/repeat.ts", "node_modules/rxjs/src/internal/operators/sample.ts", "node_modules/rxjs/src/internal/operators/scan.ts", "node_modules/rxjs/src/internal/operators/share.ts", "node_modules/rxjs/src/internal/operators/shareReplay.ts", "node_modules/rxjs/src/internal/operators/skip.ts", 
"node_modules/rxjs/src/internal/operators/skipUntil.ts", "node_modules/rxjs/src/internal/operators/startWith.ts", "node_modules/rxjs/src/internal/operators/switchMap.ts", "node_modules/rxjs/src/internal/operators/takeUntil.ts", "node_modules/rxjs/src/internal/operators/takeWhile.ts", "node_modules/rxjs/src/internal/operators/tap.ts", "node_modules/rxjs/src/internal/operators/throttle.ts", "node_modules/rxjs/src/internal/operators/throttleTime.ts", "node_modules/rxjs/src/internal/operators/withLatestFrom.ts", "node_modules/rxjs/src/internal/operators/zip.ts", "node_modules/rxjs/src/internal/operators/zipWith.ts", "src/templates/assets/javascripts/browser/document/index.ts", "src/templates/assets/javascripts/browser/element/_/index.ts", "src/templates/assets/javascripts/browser/element/focus/index.ts", "src/templates/assets/javascripts/browser/element/offset/_/index.ts", "src/templates/assets/javascripts/browser/element/offset/content/index.ts", "src/templates/assets/javascripts/utilities/h/index.ts", "src/templates/assets/javascripts/utilities/round/index.ts", "src/templates/assets/javascripts/browser/script/index.ts", "src/templates/assets/javascripts/browser/element/size/_/index.ts", "src/templates/assets/javascripts/browser/element/size/content/index.ts", "src/templates/assets/javascripts/browser/element/visibility/index.ts", "src/templates/assets/javascripts/browser/toggle/index.ts", "src/templates/assets/javascripts/browser/keyboard/index.ts", "src/templates/assets/javascripts/browser/location/_/index.ts", "src/templates/assets/javascripts/browser/location/hash/index.ts", "src/templates/assets/javascripts/browser/media/index.ts", "src/templates/assets/javascripts/browser/request/index.ts", "src/templates/assets/javascripts/browser/viewport/offset/index.ts", "src/templates/assets/javascripts/browser/viewport/size/index.ts", "src/templates/assets/javascripts/browser/viewport/_/index.ts", "src/templates/assets/javascripts/browser/viewport/at/index.ts", 
"src/templates/assets/javascripts/browser/worker/index.ts", "src/templates/assets/javascripts/_/index.ts", "src/templates/assets/javascripts/components/_/index.ts", "src/templates/assets/javascripts/components/announce/index.ts", "src/templates/assets/javascripts/components/consent/index.ts", "src/templates/assets/javascripts/components/content/annotation/_/index.ts", "src/templates/assets/javascripts/templates/tooltip/index.tsx", "src/templates/assets/javascripts/templates/annotation/index.tsx", "src/templates/assets/javascripts/templates/clipboard/index.tsx", "src/templates/assets/javascripts/templates/search/index.tsx", "src/templates/assets/javascripts/templates/source/index.tsx", "src/templates/assets/javascripts/templates/tabbed/index.tsx", "src/templates/assets/javascripts/templates/table/index.tsx", "src/templates/assets/javascripts/templates/version/index.tsx", "src/templates/assets/javascripts/components/content/annotation/list/index.ts", "src/templates/assets/javascripts/components/content/annotation/block/index.ts", "src/templates/assets/javascripts/components/content/code/_/index.ts", "src/templates/assets/javascripts/components/content/details/index.ts", "src/templates/assets/javascripts/components/content/mermaid/index.css", "src/templates/assets/javascripts/components/content/mermaid/index.ts", "src/templates/assets/javascripts/components/content/table/index.ts", "src/templates/assets/javascripts/components/content/tabs/index.ts", "src/templates/assets/javascripts/components/content/_/index.ts", "src/templates/assets/javascripts/components/dialog/index.ts", "src/templates/assets/javascripts/components/header/_/index.ts", "src/templates/assets/javascripts/components/header/title/index.ts", "src/templates/assets/javascripts/components/main/index.ts", "src/templates/assets/javascripts/components/palette/index.ts", "src/templates/assets/javascripts/components/progress/index.ts", "src/templates/assets/javascripts/integrations/clipboard/index.ts", 
"src/templates/assets/javascripts/integrations/sitemap/index.ts", "src/templates/assets/javascripts/integrations/instant/index.ts", "src/templates/assets/javascripts/integrations/search/highlighter/index.ts", "src/templates/assets/javascripts/integrations/search/worker/message/index.ts", "src/templates/assets/javascripts/integrations/search/worker/_/index.ts", "src/templates/assets/javascripts/integrations/version/index.ts", "src/templates/assets/javascripts/components/search/query/index.ts", "src/templates/assets/javascripts/components/search/result/index.ts", "src/templates/assets/javascripts/components/search/share/index.ts", "src/templates/assets/javascripts/components/search/suggest/index.ts", "src/templates/assets/javascripts/components/search/_/index.ts", "src/templates/assets/javascripts/components/search/highlight/index.ts", "src/templates/assets/javascripts/components/sidebar/index.ts", "src/templates/assets/javascripts/components/source/facts/github/index.ts", "src/templates/assets/javascripts/components/source/facts/gitlab/index.ts", "src/templates/assets/javascripts/components/source/facts/_/index.ts", "src/templates/assets/javascripts/components/source/_/index.ts", "src/templates/assets/javascripts/components/tabs/index.ts", "src/templates/assets/javascripts/components/toc/index.ts", "src/templates/assets/javascripts/components/top/index.ts", "src/templates/assets/javascripts/patches/indeterminate/index.ts", "src/templates/assets/javascripts/patches/scrollfix/index.ts", "src/templates/assets/javascripts/patches/scrolllock/index.ts", "src/templates/assets/javascripts/polyfills/index.ts"], + "sourcesContent": ["(function (global, factory) {\n typeof exports === 'object' && typeof module !== 'undefined' ? factory() :\n typeof define === 'function' && define.amd ? 
define(factory) :\n (factory());\n}(this, (function () { 'use strict';\n\n /**\n * Applies the :focus-visible polyfill at the given scope.\n * A scope in this case is either the top-level Document or a Shadow Root.\n *\n * @param {(Document|ShadowRoot)} scope\n * @see https://github.com/WICG/focus-visible\n */\n function applyFocusVisiblePolyfill(scope) {\n var hadKeyboardEvent = true;\n var hadFocusVisibleRecently = false;\n var hadFocusVisibleRecentlyTimeout = null;\n\n var inputTypesAllowlist = {\n text: true,\n search: true,\n url: true,\n tel: true,\n email: true,\n password: true,\n number: true,\n date: true,\n month: true,\n week: true,\n time: true,\n datetime: true,\n 'datetime-local': true\n };\n\n /**\n * Helper function for legacy browsers and iframes which sometimes focus\n * elements like document, body, and non-interactive SVG.\n * @param {Element} el\n */\n function isValidFocusTarget(el) {\n if (\n el &&\n el !== document &&\n el.nodeName !== 'HTML' &&\n el.nodeName !== 'BODY' &&\n 'classList' in el &&\n 'contains' in el.classList\n ) {\n return true;\n }\n return false;\n }\n\n /**\n * Computes whether the given element should automatically trigger the\n * `focus-visible` class being added, i.e. 
whether it should always match\n * `:focus-visible` when focused.\n * @param {Element} el\n * @return {boolean}\n */\n function focusTriggersKeyboardModality(el) {\n var type = el.type;\n var tagName = el.tagName;\n\n if (tagName === 'INPUT' && inputTypesAllowlist[type] && !el.readOnly) {\n return true;\n }\n\n if (tagName === 'TEXTAREA' && !el.readOnly) {\n return true;\n }\n\n if (el.isContentEditable) {\n return true;\n }\n\n return false;\n }\n\n /**\n * Add the `focus-visible` class to the given element if it was not added by\n * the author.\n * @param {Element} el\n */\n function addFocusVisibleClass(el) {\n if (el.classList.contains('focus-visible')) {\n return;\n }\n el.classList.add('focus-visible');\n el.setAttribute('data-focus-visible-added', '');\n }\n\n /**\n * Remove the `focus-visible` class from the given element if it was not\n * originally added by the author.\n * @param {Element} el\n */\n function removeFocusVisibleClass(el) {\n if (!el.hasAttribute('data-focus-visible-added')) {\n return;\n }\n el.classList.remove('focus-visible');\n el.removeAttribute('data-focus-visible-added');\n }\n\n /**\n * If the most recent user interaction was via the keyboard;\n * and the key press did not include a meta, alt/option, or control key;\n * then the modality is keyboard. 
Otherwise, the modality is not keyboard.\n * Apply `focus-visible` to any current active element and keep track\n * of our keyboard modality state with `hadKeyboardEvent`.\n * @param {KeyboardEvent} e\n */\n function onKeyDown(e) {\n if (e.metaKey || e.altKey || e.ctrlKey) {\n return;\n }\n\n if (isValidFocusTarget(scope.activeElement)) {\n addFocusVisibleClass(scope.activeElement);\n }\n\n hadKeyboardEvent = true;\n }\n\n /**\n * If at any point a user clicks with a pointing device, ensure that we change\n * the modality away from keyboard.\n * This avoids the situation where a user presses a key on an already focused\n * element, and then clicks on a different element, focusing it with a\n * pointing device, while we still think we're in keyboard modality.\n * @param {Event} e\n */\n function onPointerDown(e) {\n hadKeyboardEvent = false;\n }\n\n /**\n * On `focus`, add the `focus-visible` class to the target if:\n * - the target received focus as a result of keyboard navigation, or\n * - the event target is an element that will likely require interaction\n * via the keyboard (e.g. 
a text box)\n * @param {Event} e\n */\n function onFocus(e) {\n // Prevent IE from focusing the document or HTML element.\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (hadKeyboardEvent || focusTriggersKeyboardModality(e.target)) {\n addFocusVisibleClass(e.target);\n }\n }\n\n /**\n * On `blur`, remove the `focus-visible` class from the target.\n * @param {Event} e\n */\n function onBlur(e) {\n if (!isValidFocusTarget(e.target)) {\n return;\n }\n\n if (\n e.target.classList.contains('focus-visible') ||\n e.target.hasAttribute('data-focus-visible-added')\n ) {\n // To detect a tab/window switch, we look for a blur event followed\n // rapidly by a visibility change.\n // If we don't see a visibility change within 100ms, it's probably a\n // regular focus change.\n hadFocusVisibleRecently = true;\n window.clearTimeout(hadFocusVisibleRecentlyTimeout);\n hadFocusVisibleRecentlyTimeout = window.setTimeout(function() {\n hadFocusVisibleRecently = false;\n }, 100);\n removeFocusVisibleClass(e.target);\n }\n }\n\n /**\n * If the user changes tabs, keep track of whether or not the previously\n * focused element had .focus-visible.\n * @param {Event} e\n */\n function onVisibilityChange(e) {\n if (document.visibilityState === 'hidden') {\n // If the tab becomes active again, the browser will handle calling focus\n // on the element (Safari actually calls it twice).\n // If this tab change caused a blur on an element with focus-visible,\n // re-apply the class when the user switches back to the tab.\n if (hadFocusVisibleRecently) {\n hadKeyboardEvent = true;\n }\n addInitialPointerMoveListeners();\n }\n }\n\n /**\n * Add a group of listeners to detect usage of any pointing devices.\n * These listeners will be added when the polyfill first loads, and anytime\n * the window is blurred, so that they are active when the window regains\n * focus.\n */\n function addInitialPointerMoveListeners() {\n document.addEventListener('mousemove', onInitialPointerMove);\n 
document.addEventListener('mousedown', onInitialPointerMove);\n document.addEventListener('mouseup', onInitialPointerMove);\n document.addEventListener('pointermove', onInitialPointerMove);\n document.addEventListener('pointerdown', onInitialPointerMove);\n document.addEventListener('pointerup', onInitialPointerMove);\n document.addEventListener('touchmove', onInitialPointerMove);\n document.addEventListener('touchstart', onInitialPointerMove);\n document.addEventListener('touchend', onInitialPointerMove);\n }\n\n function removeInitialPointerMoveListeners() {\n document.removeEventListener('mousemove', onInitialPointerMove);\n document.removeEventListener('mousedown', onInitialPointerMove);\n document.removeEventListener('mouseup', onInitialPointerMove);\n document.removeEventListener('pointermove', onInitialPointerMove);\n document.removeEventListener('pointerdown', onInitialPointerMove);\n document.removeEventListener('pointerup', onInitialPointerMove);\n document.removeEventListener('touchmove', onInitialPointerMove);\n document.removeEventListener('touchstart', onInitialPointerMove);\n document.removeEventListener('touchend', onInitialPointerMove);\n }\n\n /**\n * When the polfyill first loads, assume the user is in keyboard modality.\n * If any event is received from a pointing device (e.g. mouse, pointer,\n * touch), turn off keyboard modality.\n * This accounts for situations where focus enters the page from the URL bar.\n * @param {Event} e\n */\n function onInitialPointerMove(e) {\n // Work around a Safari quirk that fires a mousemove on whenever the\n // window blurs, even if you're tabbing out of the page. \u00AF\\_(\u30C4)_/\u00AF\n if (e.target.nodeName && e.target.nodeName.toLowerCase() === 'html') {\n return;\n }\n\n hadKeyboardEvent = false;\n removeInitialPointerMoveListeners();\n }\n\n // For some kinds of state, we are interested in changes at the global scope\n // only. 
For example, global pointer input, global key presses and global\n // visibility change should affect the state at every scope:\n document.addEventListener('keydown', onKeyDown, true);\n document.addEventListener('mousedown', onPointerDown, true);\n document.addEventListener('pointerdown', onPointerDown, true);\n document.addEventListener('touchstart', onPointerDown, true);\n document.addEventListener('visibilitychange', onVisibilityChange, true);\n\n addInitialPointerMoveListeners();\n\n // For focus and blur, we specifically care about state changes in the local\n // scope. This is because focus / blur events that originate from within a\n // shadow root are not re-dispatched from the host element if it was already\n // the active element in its own scope:\n scope.addEventListener('focus', onFocus, true);\n scope.addEventListener('blur', onBlur, true);\n\n // We detect that a node is a ShadowRoot by ensuring that it is a\n // DocumentFragment and also has a host property. This check covers native\n // implementation and polyfill implementation transparently. If we only cared\n // about the native implementation, we could just check if the scope was\n // an instance of a ShadowRoot.\n if (scope.nodeType === Node.DOCUMENT_FRAGMENT_NODE && scope.host) {\n // Since a ShadowRoot is a special kind of DocumentFragment, it does not\n // have a root element to add a class to. 
So, we add this attribute to the\n // host element instead:\n scope.host.setAttribute('data-js-focus-visible', '');\n } else if (scope.nodeType === Node.DOCUMENT_NODE) {\n document.documentElement.classList.add('js-focus-visible');\n document.documentElement.setAttribute('data-js-focus-visible', '');\n }\n }\n\n // It is important to wrap all references to global window and document in\n // these checks to support server-side rendering use cases\n // @see https://github.com/WICG/focus-visible/issues/199\n if (typeof window !== 'undefined' && typeof document !== 'undefined') {\n // Make the polyfill helper globally available. This can be used as a signal\n // to interested libraries that wish to coordinate with the polyfill for e.g.,\n // applying the polyfill to a shadow root:\n window.applyFocusVisiblePolyfill = applyFocusVisiblePolyfill;\n\n // Notify interested libraries of the polyfill's presence, in case the\n // polyfill was loaded lazily:\n var event;\n\n try {\n event = new CustomEvent('focus-visible-polyfill-ready');\n } catch (error) {\n // IE11 does not support using CustomEvent as a constructor directly:\n event = document.createEvent('CustomEvent');\n event.initCustomEvent('focus-visible-polyfill-ready', false, false, {});\n }\n\n window.dispatchEvent(event);\n }\n\n if (typeof document !== 'undefined') {\n // Apply the polyfill to the global document, so that no JavaScript\n // coordination is required to use the polyfill in the top-level document:\n applyFocusVisiblePolyfill(document);\n }\n\n})));\n", "/*!\n * clipboard.js v2.0.11\n * https://clipboardjs.com/\n *\n * Licensed MIT \u00A9 Zeno Rocha\n */\n(function webpackUniversalModuleDefinition(root, factory) {\n\tif(typeof exports === 'object' && typeof module === 'object')\n\t\tmodule.exports = factory();\n\telse if(typeof define === 'function' && define.amd)\n\t\tdefine([], factory);\n\telse if(typeof exports === 'object')\n\t\texports[\"ClipboardJS\"] = 
factory();\n\telse\n\t\troot[\"ClipboardJS\"] = factory();\n})(this, function() {\nreturn /******/ (function() { // webpackBootstrap\n/******/ \tvar __webpack_modules__ = ({\n\n/***/ 686:\n/***/ (function(__unused_webpack_module, __webpack_exports__, __webpack_require__) {\n\n\"use strict\";\n\n// EXPORTS\n__webpack_require__.d(__webpack_exports__, {\n \"default\": function() { return /* binding */ clipboard; }\n});\n\n// EXTERNAL MODULE: ./node_modules/tiny-emitter/index.js\nvar tiny_emitter = __webpack_require__(279);\nvar tiny_emitter_default = /*#__PURE__*/__webpack_require__.n(tiny_emitter);\n// EXTERNAL MODULE: ./node_modules/good-listener/src/listen.js\nvar listen = __webpack_require__(370);\nvar listen_default = /*#__PURE__*/__webpack_require__.n(listen);\n// EXTERNAL MODULE: ./node_modules/select/src/select.js\nvar src_select = __webpack_require__(817);\nvar select_default = /*#__PURE__*/__webpack_require__.n(src_select);\n;// CONCATENATED MODULE: ./src/common/command.js\n/**\n * Executes a given operation type.\n * @param {String} type\n * @return {Boolean}\n */\nfunction command(type) {\n try {\n return document.execCommand(type);\n } catch (err) {\n return false;\n }\n}\n;// CONCATENATED MODULE: ./src/actions/cut.js\n\n\n/**\n * Cut action wrapper.\n * @param {String|HTMLElement} target\n * @return {String}\n */\n\nvar ClipboardActionCut = function ClipboardActionCut(target) {\n var selectedText = select_default()(target);\n command('cut');\n return selectedText;\n};\n\n/* harmony default export */ var actions_cut = (ClipboardActionCut);\n;// CONCATENATED MODULE: ./src/common/create-fake-element.js\n/**\n * Creates a fake textarea element with a value.\n * @param {String} value\n * @return {HTMLElement}\n */\nfunction createFakeElement(value) {\n var isRTL = document.documentElement.getAttribute('dir') === 'rtl';\n var fakeElement = document.createElement('textarea'); // Prevent zooming on iOS\n\n fakeElement.style.fontSize = '12pt'; // Reset box 
model\n\n fakeElement.style.border = '0';\n fakeElement.style.padding = '0';\n fakeElement.style.margin = '0'; // Move element out of screen horizontally\n\n fakeElement.style.position = 'absolute';\n fakeElement.style[isRTL ? 'right' : 'left'] = '-9999px'; // Move element to the same position vertically\n\n var yPosition = window.pageYOffset || document.documentElement.scrollTop;\n fakeElement.style.top = \"\".concat(yPosition, \"px\");\n fakeElement.setAttribute('readonly', '');\n fakeElement.value = value;\n return fakeElement;\n}\n;// CONCATENATED MODULE: ./src/actions/copy.js\n\n\n\n/**\n * Create fake copy action wrapper using a fake element.\n * @param {String} target\n * @param {Object} options\n * @return {String}\n */\n\nvar fakeCopyAction = function fakeCopyAction(value, options) {\n var fakeElement = createFakeElement(value);\n options.container.appendChild(fakeElement);\n var selectedText = select_default()(fakeElement);\n command('copy');\n fakeElement.remove();\n return selectedText;\n};\n/**\n * Copy action wrapper.\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @return {String}\n */\n\n\nvar ClipboardActionCopy = function ClipboardActionCopy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n var selectedText = '';\n\n if (typeof target === 'string') {\n selectedText = fakeCopyAction(target, options);\n } else if (target instanceof HTMLInputElement && !['text', 'search', 'url', 'tel', 'password'].includes(target === null || target === void 0 ? void 0 : target.type)) {\n // If input type doesn't support `setSelectionRange`. Simulate it. 
https://developer.mozilla.org/en-US/docs/Web/API/HTMLInputElement/setSelectionRange\n selectedText = fakeCopyAction(target.value, options);\n } else {\n selectedText = select_default()(target);\n command('copy');\n }\n\n return selectedText;\n};\n\n/* harmony default export */ var actions_copy = (ClipboardActionCopy);\n;// CONCATENATED MODULE: ./src/actions/default.js\nfunction _typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { _typeof = function _typeof(obj) { return typeof obj; }; } else { _typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? \"symbol\" : typeof obj; }; } return _typeof(obj); }\n\n\n\n/**\n * Inner function which performs selection from either `text` or `target`\n * properties and then executes copy or cut operations.\n * @param {Object} options\n */\n\nvar ClipboardActionDefault = function ClipboardActionDefault() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : {};\n // Defines base properties passed from constructor.\n var _options$action = options.action,\n action = _options$action === void 0 ? 'copy' : _options$action,\n container = options.container,\n target = options.target,\n text = options.text; // Sets the `action` to be performed which can be either 'copy' or 'cut'.\n\n if (action !== 'copy' && action !== 'cut') {\n throw new Error('Invalid \"action\" value, use either \"copy\" or \"cut\"');\n } // Sets the `target` property using an element that will be have its content copied.\n\n\n if (target !== undefined) {\n if (target && _typeof(target) === 'object' && target.nodeType === 1) {\n if (action === 'copy' && target.hasAttribute('disabled')) {\n throw new Error('Invalid \"target\" attribute. 
Please use \"readonly\" instead of \"disabled\" attribute');\n }\n\n if (action === 'cut' && (target.hasAttribute('readonly') || target.hasAttribute('disabled'))) {\n throw new Error('Invalid \"target\" attribute. You can\\'t cut text from elements with \"readonly\" or \"disabled\" attributes');\n }\n } else {\n throw new Error('Invalid \"target\" value, use a valid Element');\n }\n } // Define selection strategy based on `text` property.\n\n\n if (text) {\n return actions_copy(text, {\n container: container\n });\n } // Defines which selection strategy based on `target` property.\n\n\n if (target) {\n return action === 'cut' ? actions_cut(target) : actions_copy(target, {\n container: container\n });\n }\n};\n\n/* harmony default export */ var actions_default = (ClipboardActionDefault);\n;// CONCATENATED MODULE: ./src/clipboard.js\nfunction clipboard_typeof(obj) { \"@babel/helpers - typeof\"; if (typeof Symbol === \"function\" && typeof Symbol.iterator === \"symbol\") { clipboard_typeof = function _typeof(obj) { return typeof obj; }; } else { clipboard_typeof = function _typeof(obj) { return obj && typeof Symbol === \"function\" && obj.constructor === Symbol && obj !== Symbol.prototype ? 
\"symbol\" : typeof obj; }; } return clipboard_typeof(obj); }\n\nfunction _classCallCheck(instance, Constructor) { if (!(instance instanceof Constructor)) { throw new TypeError(\"Cannot call a class as a function\"); } }\n\nfunction _defineProperties(target, props) { for (var i = 0; i < props.length; i++) { var descriptor = props[i]; descriptor.enumerable = descriptor.enumerable || false; descriptor.configurable = true; if (\"value\" in descriptor) descriptor.writable = true; Object.defineProperty(target, descriptor.key, descriptor); } }\n\nfunction _createClass(Constructor, protoProps, staticProps) { if (protoProps) _defineProperties(Constructor.prototype, protoProps); if (staticProps) _defineProperties(Constructor, staticProps); return Constructor; }\n\nfunction _inherits(subClass, superClass) { if (typeof superClass !== \"function\" && superClass !== null) { throw new TypeError(\"Super expression must either be null or a function\"); } subClass.prototype = Object.create(superClass && superClass.prototype, { constructor: { value: subClass, writable: true, configurable: true } }); if (superClass) _setPrototypeOf(subClass, superClass); }\n\nfunction _setPrototypeOf(o, p) { _setPrototypeOf = Object.setPrototypeOf || function _setPrototypeOf(o, p) { o.__proto__ = p; return o; }; return _setPrototypeOf(o, p); }\n\nfunction _createSuper(Derived) { var hasNativeReflectConstruct = _isNativeReflectConstruct(); return function _createSuperInternal() { var Super = _getPrototypeOf(Derived), result; if (hasNativeReflectConstruct) { var NewTarget = _getPrototypeOf(this).constructor; result = Reflect.construct(Super, arguments, NewTarget); } else { result = Super.apply(this, arguments); } return _possibleConstructorReturn(this, result); }; }\n\nfunction _possibleConstructorReturn(self, call) { if (call && (clipboard_typeof(call) === \"object\" || typeof call === \"function\")) { return call; } return _assertThisInitialized(self); }\n\nfunction _assertThisInitialized(self) { if 
(self === void 0) { throw new ReferenceError(\"this hasn't been initialised - super() hasn't been called\"); } return self; }\n\nfunction _isNativeReflectConstruct() { if (typeof Reflect === \"undefined\" || !Reflect.construct) return false; if (Reflect.construct.sham) return false; if (typeof Proxy === \"function\") return true; try { Date.prototype.toString.call(Reflect.construct(Date, [], function () {})); return true; } catch (e) { return false; } }\n\nfunction _getPrototypeOf(o) { _getPrototypeOf = Object.setPrototypeOf ? Object.getPrototypeOf : function _getPrototypeOf(o) { return o.__proto__ || Object.getPrototypeOf(o); }; return _getPrototypeOf(o); }\n\n\n\n\n\n\n/**\n * Helper function to retrieve attribute value.\n * @param {String} suffix\n * @param {Element} element\n */\n\nfunction getAttributeValue(suffix, element) {\n var attribute = \"data-clipboard-\".concat(suffix);\n\n if (!element.hasAttribute(attribute)) {\n return;\n }\n\n return element.getAttribute(attribute);\n}\n/**\n * Base class which takes one or more elements, adds event listeners to them,\n * and instantiates a new `ClipboardAction` on each click.\n */\n\n\nvar Clipboard = /*#__PURE__*/function (_Emitter) {\n _inherits(Clipboard, _Emitter);\n\n var _super = _createSuper(Clipboard);\n\n /**\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n * @param {Object} options\n */\n function Clipboard(trigger, options) {\n var _this;\n\n _classCallCheck(this, Clipboard);\n\n _this = _super.call(this);\n\n _this.resolveOptions(options);\n\n _this.listenClick(trigger);\n\n return _this;\n }\n /**\n * Defines if attributes would be resolved using internal setter functions\n * or custom functions that were passed in the constructor.\n * @param {Object} options\n */\n\n\n _createClass(Clipboard, [{\n key: \"resolveOptions\",\n value: function resolveOptions() {\n var options = arguments.length > 0 && arguments[0] !== undefined ? 
arguments[0] : {};\n this.action = typeof options.action === 'function' ? options.action : this.defaultAction;\n this.target = typeof options.target === 'function' ? options.target : this.defaultTarget;\n this.text = typeof options.text === 'function' ? options.text : this.defaultText;\n this.container = clipboard_typeof(options.container) === 'object' ? options.container : document.body;\n }\n /**\n * Adds a click event listener to the passed trigger.\n * @param {String|HTMLElement|HTMLCollection|NodeList} trigger\n */\n\n }, {\n key: \"listenClick\",\n value: function listenClick(trigger) {\n var _this2 = this;\n\n this.listener = listen_default()(trigger, 'click', function (e) {\n return _this2.onClick(e);\n });\n }\n /**\n * Defines a new `ClipboardAction` on each click event.\n * @param {Event} e\n */\n\n }, {\n key: \"onClick\",\n value: function onClick(e) {\n var trigger = e.delegateTarget || e.currentTarget;\n var action = this.action(trigger) || 'copy';\n var text = actions_default({\n action: action,\n container: this.container,\n target: this.target(trigger),\n text: this.text(trigger)\n }); // Fires an event based on the copy operation result.\n\n this.emit(text ? 
'success' : 'error', {\n action: action,\n text: text,\n trigger: trigger,\n clearSelection: function clearSelection() {\n if (trigger) {\n trigger.focus();\n }\n\n window.getSelection().removeAllRanges();\n }\n });\n }\n /**\n * Default `action` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultAction\",\n value: function defaultAction(trigger) {\n return getAttributeValue('action', trigger);\n }\n /**\n * Default `target` lookup function.\n * @param {Element} trigger\n */\n\n }, {\n key: \"defaultTarget\",\n value: function defaultTarget(trigger) {\n var selector = getAttributeValue('target', trigger);\n\n if (selector) {\n return document.querySelector(selector);\n }\n }\n /**\n * Allow fire programmatically a copy action\n * @param {String|HTMLElement} target\n * @param {Object} options\n * @returns Text copied.\n */\n\n }, {\n key: \"defaultText\",\n\n /**\n * Default `text` lookup function.\n * @param {Element} trigger\n */\n value: function defaultText(trigger) {\n return getAttributeValue('text', trigger);\n }\n /**\n * Destroy lifecycle.\n */\n\n }, {\n key: \"destroy\",\n value: function destroy() {\n this.listener.destroy();\n }\n }], [{\n key: \"copy\",\n value: function copy(target) {\n var options = arguments.length > 1 && arguments[1] !== undefined ? arguments[1] : {\n container: document.body\n };\n return actions_copy(target, options);\n }\n /**\n * Allow fire programmatically a cut action\n * @param {String|HTMLElement} target\n * @returns Text cutted.\n */\n\n }, {\n key: \"cut\",\n value: function cut(target) {\n return actions_cut(target);\n }\n /**\n * Returns the support of the given action, or all actions if no action is\n * given.\n * @param {String} [action]\n */\n\n }, {\n key: \"isSupported\",\n value: function isSupported() {\n var action = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : ['copy', 'cut'];\n var actions = typeof action === 'string' ? 
[action] : action;\n var support = !!document.queryCommandSupported;\n actions.forEach(function (action) {\n support = support && !!document.queryCommandSupported(action);\n });\n return support;\n }\n }]);\n\n return Clipboard;\n}((tiny_emitter_default()));\n\n/* harmony default export */ var clipboard = (Clipboard);\n\n/***/ }),\n\n/***/ 828:\n/***/ (function(module) {\n\nvar DOCUMENT_NODE_TYPE = 9;\n\n/**\n * A polyfill for Element.matches()\n */\nif (typeof Element !== 'undefined' && !Element.prototype.matches) {\n var proto = Element.prototype;\n\n proto.matches = proto.matchesSelector ||\n proto.mozMatchesSelector ||\n proto.msMatchesSelector ||\n proto.oMatchesSelector ||\n proto.webkitMatchesSelector;\n}\n\n/**\n * Finds the closest parent that matches a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @return {Function}\n */\nfunction closest (element, selector) {\n while (element && element.nodeType !== DOCUMENT_NODE_TYPE) {\n if (typeof element.matches === 'function' &&\n element.matches(selector)) {\n return element;\n }\n element = element.parentNode;\n }\n}\n\nmodule.exports = closest;\n\n\n/***/ }),\n\n/***/ 438:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar closest = __webpack_require__(828);\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} useCapture\n * @return {Object}\n */\nfunction _delegate(element, selector, type, callback, useCapture) {\n var listenerFn = listener.apply(this, arguments);\n\n element.addEventListener(type, listenerFn, useCapture);\n\n return {\n destroy: function() {\n element.removeEventListener(type, listenerFn, useCapture);\n }\n }\n}\n\n/**\n * Delegates event to a selector.\n *\n * @param {Element|String|Array} [elements]\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @param {Boolean} 
useCapture\n * @return {Object}\n */\nfunction delegate(elements, selector, type, callback, useCapture) {\n // Handle the regular Element usage\n if (typeof elements.addEventListener === 'function') {\n return _delegate.apply(null, arguments);\n }\n\n // Handle Element-less usage, it defaults to global delegation\n if (typeof type === 'function') {\n // Use `document` as the first parameter, then apply arguments\n // This is a short way to .unshift `arguments` without running into deoptimizations\n return _delegate.bind(null, document).apply(null, arguments);\n }\n\n // Handle Selector-based usage\n if (typeof elements === 'string') {\n elements = document.querySelectorAll(elements);\n }\n\n // Handle Array-like based usage\n return Array.prototype.map.call(elements, function (element) {\n return _delegate(element, selector, type, callback, useCapture);\n });\n}\n\n/**\n * Finds closest match and invokes callback.\n *\n * @param {Element} element\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Function}\n */\nfunction listener(element, selector, type, callback) {\n return function(e) {\n e.delegateTarget = closest(e.target, selector);\n\n if (e.delegateTarget) {\n callback.call(element, e);\n }\n }\n}\n\nmodule.exports = delegate;\n\n\n/***/ }),\n\n/***/ 879:\n/***/ (function(__unused_webpack_module, exports) {\n\n/**\n * Check if argument is a HTML element.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.node = function(value) {\n return value !== undefined\n && value instanceof HTMLElement\n && value.nodeType === 1;\n};\n\n/**\n * Check if argument is a list of HTML elements.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.nodeList = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return value !== undefined\n && (type === '[object NodeList]' || type === '[object HTMLCollection]')\n && ('length' in value)\n && (value.length === 0 || 
exports.node(value[0]));\n};\n\n/**\n * Check if argument is a string.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.string = function(value) {\n return typeof value === 'string'\n || value instanceof String;\n};\n\n/**\n * Check if argument is a function.\n *\n * @param {Object} value\n * @return {Boolean}\n */\nexports.fn = function(value) {\n var type = Object.prototype.toString.call(value);\n\n return type === '[object Function]';\n};\n\n\n/***/ }),\n\n/***/ 370:\n/***/ (function(module, __unused_webpack_exports, __webpack_require__) {\n\nvar is = __webpack_require__(879);\nvar delegate = __webpack_require__(438);\n\n/**\n * Validates all params and calls the right\n * listener function based on its target type.\n *\n * @param {String|HTMLElement|HTMLCollection|NodeList} target\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listen(target, type, callback) {\n if (!target && !type && !callback) {\n throw new Error('Missing required arguments');\n }\n\n if (!is.string(type)) {\n throw new TypeError('Second argument must be a String');\n }\n\n if (!is.fn(callback)) {\n throw new TypeError('Third argument must be a Function');\n }\n\n if (is.node(target)) {\n return listenNode(target, type, callback);\n }\n else if (is.nodeList(target)) {\n return listenNodeList(target, type, callback);\n }\n else if (is.string(target)) {\n return listenSelector(target, type, callback);\n }\n else {\n throw new TypeError('First argument must be a String, HTMLElement, HTMLCollection, or NodeList');\n }\n}\n\n/**\n * Adds an event listener to a HTML element\n * and returns a remove listener function.\n *\n * @param {HTMLElement} node\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNode(node, type, callback) {\n node.addEventListener(type, callback);\n\n return {\n destroy: function() {\n node.removeEventListener(type, callback);\n }\n }\n}\n\n/**\n * Add an event listener 
to a list of HTML elements\n * and returns a remove listener function.\n *\n * @param {NodeList|HTMLCollection} nodeList\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenNodeList(nodeList, type, callback) {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.addEventListener(type, callback);\n });\n\n return {\n destroy: function() {\n Array.prototype.forEach.call(nodeList, function(node) {\n node.removeEventListener(type, callback);\n });\n }\n }\n}\n\n/**\n * Add an event listener to a selector\n * and returns a remove listener function.\n *\n * @param {String} selector\n * @param {String} type\n * @param {Function} callback\n * @return {Object}\n */\nfunction listenSelector(selector, type, callback) {\n return delegate(document.body, selector, type, callback);\n}\n\nmodule.exports = listen;\n\n\n/***/ }),\n\n/***/ 817:\n/***/ (function(module) {\n\nfunction select(element) {\n var selectedText;\n\n if (element.nodeName === 'SELECT') {\n element.focus();\n\n selectedText = element.value;\n }\n else if (element.nodeName === 'INPUT' || element.nodeName === 'TEXTAREA') {\n var isReadOnly = element.hasAttribute('readonly');\n\n if (!isReadOnly) {\n element.setAttribute('readonly', '');\n }\n\n element.select();\n element.setSelectionRange(0, element.value.length);\n\n if (!isReadOnly) {\n element.removeAttribute('readonly');\n }\n\n selectedText = element.value;\n }\n else {\n if (element.hasAttribute('contenteditable')) {\n element.focus();\n }\n\n var selection = window.getSelection();\n var range = document.createRange();\n\n range.selectNodeContents(element);\n selection.removeAllRanges();\n selection.addRange(range);\n\n selectedText = selection.toString();\n }\n\n return selectedText;\n}\n\nmodule.exports = select;\n\n\n/***/ }),\n\n/***/ 279:\n/***/ (function(module) {\n\nfunction E () {\n // Keep this empty so it's easier to inherit from\n // (via https://github.com/lipsmack from 
https://github.com/scottcorgan/tiny-emitter/issues/3)\n}\n\nE.prototype = {\n on: function (name, callback, ctx) {\n var e = this.e || (this.e = {});\n\n (e[name] || (e[name] = [])).push({\n fn: callback,\n ctx: ctx\n });\n\n return this;\n },\n\n once: function (name, callback, ctx) {\n var self = this;\n function listener () {\n self.off(name, listener);\n callback.apply(ctx, arguments);\n };\n\n listener._ = callback\n return this.on(name, listener, ctx);\n },\n\n emit: function (name) {\n var data = [].slice.call(arguments, 1);\n var evtArr = ((this.e || (this.e = {}))[name] || []).slice();\n var i = 0;\n var len = evtArr.length;\n\n for (i; i < len; i++) {\n evtArr[i].fn.apply(evtArr[i].ctx, data);\n }\n\n return this;\n },\n\n off: function (name, callback) {\n var e = this.e || (this.e = {});\n var evts = e[name];\n var liveEvents = [];\n\n if (evts && callback) {\n for (var i = 0, len = evts.length; i < len; i++) {\n if (evts[i].fn !== callback && evts[i].fn._ !== callback)\n liveEvents.push(evts[i]);\n }\n }\n\n // Remove event from queue to prevent memory leak\n // Suggested by https://github.com/lazd\n // Ref: https://github.com/scottcorgan/tiny-emitter/commit/c6ebfaa9bc973b33d110a84a307742b7cf94c953#commitcomment-5024910\n\n (liveEvents.length)\n ? 
e[name] = liveEvents\n : delete e[name];\n\n return this;\n }\n};\n\nmodule.exports = E;\nmodule.exports.TinyEmitter = E;\n\n\n/***/ })\n\n/******/ \t});\n/************************************************************************/\n/******/ \t// The module cache\n/******/ \tvar __webpack_module_cache__ = {};\n/******/ \t\n/******/ \t// The require function\n/******/ \tfunction __webpack_require__(moduleId) {\n/******/ \t\t// Check if module is in cache\n/******/ \t\tif(__webpack_module_cache__[moduleId]) {\n/******/ \t\t\treturn __webpack_module_cache__[moduleId].exports;\n/******/ \t\t}\n/******/ \t\t// Create a new module (and put it into the cache)\n/******/ \t\tvar module = __webpack_module_cache__[moduleId] = {\n/******/ \t\t\t// no module.id needed\n/******/ \t\t\t// no module.loaded needed\n/******/ \t\t\texports: {}\n/******/ \t\t};\n/******/ \t\n/******/ \t\t// Execute the module function\n/******/ \t\t__webpack_modules__[moduleId](module, module.exports, __webpack_require__);\n/******/ \t\n/******/ \t\t// Return the exports of the module\n/******/ \t\treturn module.exports;\n/******/ \t}\n/******/ \t\n/************************************************************************/\n/******/ \t/* webpack/runtime/compat get default export */\n/******/ \t!function() {\n/******/ \t\t// getDefaultExport function for compatibility with non-harmony modules\n/******/ \t\t__webpack_require__.n = function(module) {\n/******/ \t\t\tvar getter = module && module.__esModule ?\n/******/ \t\t\t\tfunction() { return module['default']; } :\n/******/ \t\t\t\tfunction() { return module; };\n/******/ \t\t\t__webpack_require__.d(getter, { a: getter });\n/******/ \t\t\treturn getter;\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/define property getters */\n/******/ \t!function() {\n/******/ \t\t// define getter functions for harmony exports\n/******/ \t\t__webpack_require__.d = function(exports, definition) {\n/******/ \t\t\tfor(var key in definition) 
{\n/******/ \t\t\t\tif(__webpack_require__.o(definition, key) && !__webpack_require__.o(exports, key)) {\n/******/ \t\t\t\t\tObject.defineProperty(exports, key, { enumerable: true, get: definition[key] });\n/******/ \t\t\t\t}\n/******/ \t\t\t}\n/******/ \t\t};\n/******/ \t}();\n/******/ \t\n/******/ \t/* webpack/runtime/hasOwnProperty shorthand */\n/******/ \t!function() {\n/******/ \t\t__webpack_require__.o = function(obj, prop) { return Object.prototype.hasOwnProperty.call(obj, prop); }\n/******/ \t}();\n/******/ \t\n/************************************************************************/\n/******/ \t// module exports must be returned from runtime so entry inlining is disabled\n/******/ \t// startup\n/******/ \t// Load entry module and return exports\n/******/ \treturn __webpack_require__(686);\n/******/ })()\n.default;\n});", "/*!\n * escape-html\n * Copyright(c) 2012-2013 TJ Holowaychuk\n * Copyright(c) 2015 Andreas Lubbe\n * Copyright(c) 2015 Tiancheng \"Timothy\" Gu\n * MIT Licensed\n */\n\n'use strict';\n\n/**\n * Module variables.\n * @private\n */\n\nvar matchHtmlRegExp = /[\"'&<>]/;\n\n/**\n * Module exports.\n * @public\n */\n\nmodule.exports = escapeHtml;\n\n/**\n * Escape special characters in the given string of html.\n *\n * @param {string} string The string to escape for inserting into HTML\n * @return {string}\n * @public\n */\n\nfunction escapeHtml(string) {\n var str = '' + string;\n var match = matchHtmlRegExp.exec(str);\n\n if (!match) {\n return str;\n }\n\n var escape;\n var html = '';\n var index = 0;\n var lastIndex = 0;\n\n for (index = match.index; index < str.length; index++) {\n switch (str.charCodeAt(index)) {\n case 34: // \"\n escape = '"';\n break;\n case 38: // &\n escape = '&';\n break;\n case 39: // '\n escape = ''';\n break;\n case 60: // <\n escape = '<';\n break;\n case 62: // >\n escape = '>';\n break;\n default:\n continue;\n }\n\n if (lastIndex !== index) {\n html += str.substring(lastIndex, index);\n }\n\n lastIndex = 
index + 1;\n html += escape;\n }\n\n return lastIndex !== index\n ? html + str.substring(lastIndex, index)\n : html;\n}\n", "/*\n * Copyright (c) 2016-2023 Martin Donath \n *\n * Permission is hereby granted, free of charge, to any person obtaining a copy\n * of this software and associated documentation files (the \"Software\"), to\n * deal in the Software without restriction, including without limitation the\n * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or\n * sell copies of the Software, and to permit persons to whom the Software is\n * furnished to do so, subject to the following conditions:\n *\n * The above copyright notice and this permission notice shall be included in\n * all copies or substantial portions of the Software.\n *\n * THE SOFTWARE IS PROVIDED \"AS IS\", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR\n * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,\n * FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE\n * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER\n * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING\n * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS\n * IN THE SOFTWARE.\n */\n\nimport \"focus-visible\"\n\nimport {\n EMPTY,\n NEVER,\n Observable,\n Subject,\n defer,\n delay,\n filter,\n map,\n merge,\n mergeWith,\n shareReplay,\n switchMap\n} from \"rxjs\"\n\nimport { configuration, feature } from \"./_\"\nimport {\n at,\n getActiveElement,\n getOptionalElement,\n requestJSON,\n setLocation,\n setToggle,\n watchDocument,\n watchKeyboard,\n watchLocation,\n watchLocationTarget,\n watchMedia,\n watchPrint,\n watchScript,\n watchViewport\n} from \"./browser\"\nimport {\n getComponentElement,\n getComponentElements,\n mountAnnounce,\n mountBackToTop,\n mountConsent,\n mountContent,\n mountDialog,\n mountHeader,\n mountHeaderTitle,\n mountPalette,\n mountProgress,\n mountSearch,\n 
mountSearchHiglight,\n mountSidebar,\n mountSource,\n mountTableOfContents,\n mountTabs,\n watchHeader,\n watchMain\n} from \"./components\"\nimport {\n SearchIndex,\n setupClipboardJS,\n setupInstantNavigation,\n setupVersionSelector\n} from \"./integrations\"\nimport {\n patchIndeterminate,\n patchScrollfix,\n patchScrolllock\n} from \"./patches\"\nimport \"./polyfills\"\n\n/* ----------------------------------------------------------------------------\n * Functions - @todo refactor\n * ------------------------------------------------------------------------- */\n\n/**\n * Fetch search index\n *\n * @returns Search index observable\n */\nfunction fetchSearchIndex(): Observable {\n if (location.protocol === \"file:\") {\n return watchScript(\n `${new URL(\"search/search_index.js\", config.base)}`\n )\n .pipe(\n // @ts-ignore - @todo fix typings\n map(() => __index),\n shareReplay(1)\n )\n } else {\n return requestJSON(\n new URL(\"search/search_index.json\", config.base)\n )\n }\n}\n\n/* ----------------------------------------------------------------------------\n * Application\n * ------------------------------------------------------------------------- */\n\n/* Yay, JavaScript is available */\ndocument.documentElement.classList.remove(\"no-js\")\ndocument.documentElement.classList.add(\"js\")\n\n/* Set up navigation observables and subjects */\nconst document$ = watchDocument()\nconst location$ = watchLocation()\nconst target$ = watchLocationTarget(location$)\nconst keyboard$ = watchKeyboard()\n\n/* Set up media observables */\nconst viewport$ = watchViewport()\nconst tablet$ = watchMedia(\"(min-width: 960px)\")\nconst screen$ = watchMedia(\"(min-width: 1220px)\")\nconst print$ = watchPrint()\n\n/* Retrieve search index, if search is enabled */\nconst config = configuration()\nconst index$ = document.forms.namedItem(\"search\")\n ? 
fetchSearchIndex()\n : NEVER\n\n/* Set up Clipboard.js integration */\nconst alert$ = new Subject()\nsetupClipboardJS({ alert$ })\n\n/* Set up progress indicator */\nconst progress$ = new Subject()\n\n/* Set up instant navigation, if enabled */\nif (feature(\"navigation.instant\"))\n setupInstantNavigation({ location$, viewport$, progress$ })\n .subscribe(document$)\n\n/* Set up version selector */\nif (config.version?.provider === \"mike\")\n setupVersionSelector({ document$ })\n\n/* Always close drawer and search on navigation */\nmerge(location$, target$)\n .pipe(\n delay(125)\n )\n .subscribe(() => {\n setToggle(\"drawer\", false)\n setToggle(\"search\", false)\n })\n\n/* Set up global keyboard handlers */\nkeyboard$\n .pipe(\n filter(({ mode }) => mode === \"global\")\n )\n .subscribe(key => {\n switch (key.type) {\n\n /* Go to previous page */\n case \"p\":\n case \",\":\n const prev = getOptionalElement(\"link[rel=prev]\")\n if (typeof prev !== \"undefined\")\n setLocation(prev)\n break\n\n /* Go to next page */\n case \"n\":\n case \".\":\n const next = getOptionalElement(\"link[rel=next]\")\n if (typeof next !== \"undefined\")\n setLocation(next)\n break\n\n /* Expand navigation, see https://bit.ly/3ZjG5io */\n case \"Enter\":\n const active = getActiveElement()\n if (active instanceof HTMLLabelElement)\n active.click()\n }\n })\n\n/* Set up patches */\npatchIndeterminate({ document$, tablet$ })\npatchScrollfix({ document$ })\npatchScrolllock({ viewport$, tablet$ })\n\n/* Set up header and main area observable */\nconst header$ = watchHeader(getComponentElement(\"header\"), { viewport$ })\nconst main$ = document$\n .pipe(\n map(() => getComponentElement(\"main\")),\n switchMap(el => watchMain(el, { viewport$, header$ })),\n shareReplay(1)\n )\n\n/* Set up control component observables */\nconst control$ = merge(\n\n /* Consent */\n ...getComponentElements(\"consent\")\n .map(el => mountConsent(el, { target$ })),\n\n /* Dialog */\n 
...getComponentElements(\"dialog\")\n .map(el => mountDialog(el, { alert$ })),\n\n /* Header */\n ...getComponentElements(\"header\")\n .map(el => mountHeader(el, { viewport$, header$, main$ })),\n\n /* Color palette */\n ...getComponentElements(\"palette\")\n .map(el => mountPalette(el)),\n\n /* Progress bar */\n ...getComponentElements(\"progress\")\n .map(el => mountProgress(el, { progress$ })),\n\n /* Search */\n ...getComponentElements(\"search\")\n .map(el => mountSearch(el, { index$, keyboard$ })),\n\n /* Repository information */\n ...getComponentElements(\"source\")\n .map(el => mountSource(el))\n)\n\n/* Set up content component observables */\nconst content$ = defer(() => merge(\n\n /* Announcement bar */\n ...getComponentElements(\"announce\")\n .map(el => mountAnnounce(el)),\n\n /* Content */\n ...getComponentElements(\"content\")\n .map(el => mountContent(el, { viewport$, target$, print$ })),\n\n /* Search highlighting */\n ...getComponentElements(\"content\")\n .map(el => feature(\"search.highlight\")\n ? mountSearchHiglight(el, { index$, location$ })\n : EMPTY\n ),\n\n /* Header title */\n ...getComponentElements(\"header-title\")\n .map(el => mountHeaderTitle(el, { viewport$, header$ })),\n\n /* Sidebar */\n ...getComponentElements(\"sidebar\")\n .map(el => el.getAttribute(\"data-md-type\") === \"navigation\"\n ? 
at(screen$, () => mountSidebar(el, { viewport$, header$, main$ }))\n : at(tablet$, () => mountSidebar(el, { viewport$, header$, main$ }))\n ),\n\n /* Navigation tabs */\n ...getComponentElements(\"tabs\")\n .map(el => mountTabs(el, { viewport$, header$ })),\n\n /* Table of contents */\n ...getComponentElements(\"toc\")\n .map(el => mountTableOfContents(el, {\n viewport$, header$, main$, target$\n })),\n\n /* Back-to-top button */\n ...getComponentElements(\"top\")\n .map(el => mountBackToTop(el, { viewport$, header$, main$, target$ }))\n))\n\n/* Set up component observables */\nconst component$ = document$\n .pipe(\n switchMap(() => content$),\n mergeWith(control$),\n shareReplay(1)\n )\n\n/* Subscribe to all components */\ncomponent$.subscribe()\n\n/* ----------------------------------------------------------------------------\n * Exports\n * ------------------------------------------------------------------------- */\n\nwindow.document$ = document$ /* Document observable */\nwindow.location$ = location$ /* Location subject */\nwindow.target$ = target$ /* Location target observable */\nwindow.keyboard$ = keyboard$ /* Keyboard observable */\nwindow.viewport$ = viewport$ /* Viewport observable */\nwindow.tablet$ = tablet$ /* Media tablet observable */\nwindow.screen$ = screen$ /* Media screen observable */\nwindow.print$ = print$ /* Media print observable */\nwindow.alert$ = alert$ /* Alert subject */\nwindow.progress$ = progress$ /* Progress indicator subject */\nwindow.component$ = component$ /* Component observable */\n", "/*! *****************************************************************************\r\nCopyright (c) Microsoft Corporation.\r\n\r\nPermission to use, copy, modify, and/or distribute this software for any\r\npurpose with or without fee is hereby granted.\r\n\r\nTHE SOFTWARE IS PROVIDED \"AS IS\" AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH\r\nREGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY\r\nAND FITNESS. 
IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY SPECIAL, DIRECT,\r\nINDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM\r\nLOSS OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR\r\nOTHER TORTIOUS ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR\r\nPERFORMANCE OF THIS SOFTWARE.\r\n***************************************************************************** */\r\n/* global Reflect, Promise */\r\n\r\nvar extendStatics = function(d, b) {\r\n extendStatics = Object.setPrototypeOf ||\r\n ({ __proto__: [] } instanceof Array && function (d, b) { d.__proto__ = b; }) ||\r\n function (d, b) { for (var p in b) if (Object.prototype.hasOwnProperty.call(b, p)) d[p] = b[p]; };\r\n return extendStatics(d, b);\r\n};\r\n\r\nexport function __extends(d, b) {\r\n if (typeof b !== \"function\" && b !== null)\r\n throw new TypeError(\"Class extends value \" + String(b) + \" is not a constructor or null\");\r\n extendStatics(d, b);\r\n function __() { this.constructor = d; }\r\n d.prototype = b === null ? 
Object.create(b) : (__.prototype = b.prototype, new __());\r\n}\r\n\r\nexport var __assign = function() {\r\n __assign = Object.assign || function __assign(t) {\r\n for (var s, i = 1, n = arguments.length; i < n; i++) {\r\n s = arguments[i];\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p)) t[p] = s[p];\r\n }\r\n return t;\r\n }\r\n return __assign.apply(this, arguments);\r\n}\r\n\r\nexport function __rest(s, e) {\r\n var t = {};\r\n for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p) && e.indexOf(p) < 0)\r\n t[p] = s[p];\r\n if (s != null && typeof Object.getOwnPropertySymbols === \"function\")\r\n for (var i = 0, p = Object.getOwnPropertySymbols(s); i < p.length; i++) {\r\n if (e.indexOf(p[i]) < 0 && Object.prototype.propertyIsEnumerable.call(s, p[i]))\r\n t[p[i]] = s[p[i]];\r\n }\r\n return t;\r\n}\r\n\r\nexport function __decorate(decorators, target, key, desc) {\r\n var c = arguments.length, r = c < 3 ? target : desc === null ? desc = Object.getOwnPropertyDescriptor(target, key) : desc, d;\r\n if (typeof Reflect === \"object\" && typeof Reflect.decorate === \"function\") r = Reflect.decorate(decorators, target, key, desc);\r\n else for (var i = decorators.length - 1; i >= 0; i--) if (d = decorators[i]) r = (c < 3 ? d(r) : c > 3 ? d(target, key, r) : d(target, key)) || r;\r\n return c > 3 && r && Object.defineProperty(target, key, r), r;\r\n}\r\n\r\nexport function __param(paramIndex, decorator) {\r\n return function (target, key) { decorator(target, key, paramIndex); }\r\n}\r\n\r\nexport function __metadata(metadataKey, metadataValue) {\r\n if (typeof Reflect === \"object\" && typeof Reflect.metadata === \"function\") return Reflect.metadata(metadataKey, metadataValue);\r\n}\r\n\r\nexport function __awaiter(thisArg, _arguments, P, generator) {\r\n function adopt(value) { return value instanceof P ? 
value : new P(function (resolve) { resolve(value); }); }\r\n return new (P || (P = Promise))(function (resolve, reject) {\r\n function fulfilled(value) { try { step(generator.next(value)); } catch (e) { reject(e); } }\r\n function rejected(value) { try { step(generator[\"throw\"](value)); } catch (e) { reject(e); } }\r\n function step(result) { result.done ? resolve(result.value) : adopt(result.value).then(fulfilled, rejected); }\r\n step((generator = generator.apply(thisArg, _arguments || [])).next());\r\n });\r\n}\r\n\r\nexport function __generator(thisArg, body) {\r\n var _ = { label: 0, sent: function() { if (t[0] & 1) throw t[1]; return t[1]; }, trys: [], ops: [] }, f, y, t, g;\r\n return g = { next: verb(0), \"throw\": verb(1), \"return\": verb(2) }, typeof Symbol === \"function\" && (g[Symbol.iterator] = function() { return this; }), g;\r\n function verb(n) { return function (v) { return step([n, v]); }; }\r\n function step(op) {\r\n if (f) throw new TypeError(\"Generator is already executing.\");\r\n while (_) try {\r\n if (f = 1, y && (t = op[0] & 2 ? y[\"return\"] : op[0] ? 
y[\"throw\"] || ((t = y[\"return\"]) && t.call(y), 0) : y.next) && !(t = t.call(y, op[1])).done) return t;\r\n if (y = 0, t) op = [op[0] & 2, t.value];\r\n switch (op[0]) {\r\n case 0: case 1: t = op; break;\r\n case 4: _.label++; return { value: op[1], done: false };\r\n case 5: _.label++; y = op[1]; op = [0]; continue;\r\n case 7: op = _.ops.pop(); _.trys.pop(); continue;\r\n default:\r\n if (!(t = _.trys, t = t.length > 0 && t[t.length - 1]) && (op[0] === 6 || op[0] === 2)) { _ = 0; continue; }\r\n if (op[0] === 3 && (!t || (op[1] > t[0] && op[1] < t[3]))) { _.label = op[1]; break; }\r\n if (op[0] === 6 && _.label < t[1]) { _.label = t[1]; t = op; break; }\r\n if (t && _.label < t[2]) { _.label = t[2]; _.ops.push(op); break; }\r\n if (t[2]) _.ops.pop();\r\n _.trys.pop(); continue;\r\n }\r\n op = body.call(thisArg, _);\r\n } catch (e) { op = [6, e]; y = 0; } finally { f = t = 0; }\r\n if (op[0] & 5) throw op[1]; return { value: op[0] ? op[1] : void 0, done: true };\r\n }\r\n}\r\n\r\nexport var __createBinding = Object.create ? (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n Object.defineProperty(o, k2, { enumerable: true, get: function() { return m[k]; } });\r\n}) : (function(o, m, k, k2) {\r\n if (k2 === undefined) k2 = k;\r\n o[k2] = m[k];\r\n});\r\n\r\nexport function __exportStar(m, o) {\r\n for (var p in m) if (p !== \"default\" && !Object.prototype.hasOwnProperty.call(o, p)) __createBinding(o, m, p);\r\n}\r\n\r\nexport function __values(o) {\r\n var s = typeof Symbol === \"function\" && Symbol.iterator, m = s && o[s], i = 0;\r\n if (m) return m.call(o);\r\n if (o && typeof o.length === \"number\") return {\r\n next: function () {\r\n if (o && i >= o.length) o = void 0;\r\n return { value: o && o[i++], done: !o };\r\n }\r\n };\r\n throw new TypeError(s ? 
\"Object is not iterable.\" : \"Symbol.iterator is not defined.\");\r\n}\r\n\r\nexport function __read(o, n) {\r\n var m = typeof Symbol === \"function\" && o[Symbol.iterator];\r\n if (!m) return o;\r\n var i = m.call(o), r, ar = [], e;\r\n try {\r\n while ((n === void 0 || n-- > 0) && !(r = i.next()).done) ar.push(r.value);\r\n }\r\n catch (error) { e = { error: error }; }\r\n finally {\r\n try {\r\n if (r && !r.done && (m = i[\"return\"])) m.call(i);\r\n }\r\n finally { if (e) throw e.error; }\r\n }\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spread() {\r\n for (var ar = [], i = 0; i < arguments.length; i++)\r\n ar = ar.concat(__read(arguments[i]));\r\n return ar;\r\n}\r\n\r\n/** @deprecated */\r\nexport function __spreadArrays() {\r\n for (var s = 0, i = 0, il = arguments.length; i < il; i++) s += arguments[i].length;\r\n for (var r = Array(s), k = 0, i = 0; i < il; i++)\r\n for (var a = arguments[i], j = 0, jl = a.length; j < jl; j++, k++)\r\n r[k] = a[j];\r\n return r;\r\n}\r\n\r\nexport function __spreadArray(to, from, pack) {\r\n if (pack || arguments.length === 2) for (var i = 0, l = from.length, ar; i < l; i++) {\r\n if (ar || !(i in from)) {\r\n if (!ar) ar = Array.prototype.slice.call(from, 0, i);\r\n ar[i] = from[i];\r\n }\r\n }\r\n return to.concat(ar || Array.prototype.slice.call(from));\r\n}\r\n\r\nexport function __await(v) {\r\n return this instanceof __await ? 
(this.v = v, this) : new __await(v);\r\n}\r\n\r\nexport function __asyncGenerator(thisArg, _arguments, generator) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var g = generator.apply(thisArg, _arguments || []), i, q = [];\r\n return i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i;\r\n function verb(n) { if (g[n]) i[n] = function (v) { return new Promise(function (a, b) { q.push([n, v, a, b]) > 1 || resume(n, v); }); }; }\r\n function resume(n, v) { try { step(g[n](v)); } catch (e) { settle(q[0][3], e); } }\r\n function step(r) { r.value instanceof __await ? Promise.resolve(r.value.v).then(fulfill, reject) : settle(q[0][2], r); }\r\n function fulfill(value) { resume(\"next\", value); }\r\n function reject(value) { resume(\"throw\", value); }\r\n function settle(f, v) { if (f(v), q.shift(), q.length) resume(q[0][0], q[0][1]); }\r\n}\r\n\r\nexport function __asyncDelegator(o) {\r\n var i, p;\r\n return i = {}, verb(\"next\"), verb(\"throw\", function (e) { throw e; }), verb(\"return\"), i[Symbol.iterator] = function () { return this; }, i;\r\n function verb(n, f) { i[n] = o[n] ? function (v) { return (p = !p) ? { value: __await(o[n](v)), done: n === \"return\" } : f ? f(v) : v; } : f; }\r\n}\r\n\r\nexport function __asyncValues(o) {\r\n if (!Symbol.asyncIterator) throw new TypeError(\"Symbol.asyncIterator is not defined.\");\r\n var m = o[Symbol.asyncIterator], i;\r\n return m ? m.call(o) : (o = typeof __values === \"function\" ? 
__values(o) : o[Symbol.iterator](), i = {}, verb(\"next\"), verb(\"throw\"), verb(\"return\"), i[Symbol.asyncIterator] = function () { return this; }, i);\r\n function verb(n) { i[n] = o[n] && function (v) { return new Promise(function (resolve, reject) { v = o[n](v), settle(resolve, reject, v.done, v.value); }); }; }\r\n function settle(resolve, reject, d, v) { Promise.resolve(v).then(function(v) { resolve({ value: v, done: d }); }, reject); }\r\n}\r\n\r\nexport function __makeTemplateObject(cooked, raw) {\r\n if (Object.defineProperty) { Object.defineProperty(cooked, \"raw\", { value: raw }); } else { cooked.raw = raw; }\r\n return cooked;\r\n};\r\n\r\nvar __setModuleDefault = Object.create ? (function(o, v) {\r\n Object.defineProperty(o, \"default\", { enumerable: true, value: v });\r\n}) : function(o, v) {\r\n o[\"default\"] = v;\r\n};\r\n\r\nexport function __importStar(mod) {\r\n if (mod && mod.__esModule) return mod;\r\n var result = {};\r\n if (mod != null) for (var k in mod) if (k !== \"default\" && Object.prototype.hasOwnProperty.call(mod, k)) __createBinding(result, mod, k);\r\n __setModuleDefault(result, mod);\r\n return result;\r\n}\r\n\r\nexport function __importDefault(mod) {\r\n return (mod && mod.__esModule) ? mod : { default: mod };\r\n}\r\n\r\nexport function __classPrivateFieldGet(receiver, state, kind, f) {\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a getter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot read private member from an object whose class did not declare it\");\r\n return kind === \"m\" ? f : kind === \"a\" ? f.call(receiver) : f ? 
f.value : state.get(receiver);\r\n}\r\n\r\nexport function __classPrivateFieldSet(receiver, state, value, kind, f) {\r\n if (kind === \"m\") throw new TypeError(\"Private method is not writable\");\r\n if (kind === \"a\" && !f) throw new TypeError(\"Private accessor was defined without a setter\");\r\n if (typeof state === \"function\" ? receiver !== state || !f : !state.has(receiver)) throw new TypeError(\"Cannot write private member to an object whose class did not declare it\");\r\n return (kind === \"a\" ? f.call(receiver, value) : f ? f.value = value : state.set(receiver, value)), value;\r\n}\r\n", "/**\n * Returns true if the object is a function.\n * @param value The value to check\n */\nexport function isFunction(value: any): value is (...args: any[]) => any {\n return typeof value === 'function';\n}\n", "/**\n * Used to create Error subclasses until the community moves away from ES5.\n *\n * This is because compiling from TypeScript down to ES5 has issues with subclassing Errors\n * as well as other built-in types: https://github.com/Microsoft/TypeScript/issues/12123\n *\n * @param createImpl A factory function to create the actual constructor implementation. The returned\n * function should be a named function that calls `_super` internally.\n */\nexport function createErrorClass(createImpl: (_super: any) => any): T {\n const _super = (instance: any) => {\n Error.call(instance);\n instance.stack = new Error().stack;\n };\n\n const ctorFunc = createImpl(_super);\n ctorFunc.prototype = Object.create(Error.prototype);\n ctorFunc.prototype.constructor = ctorFunc;\n return ctorFunc;\n}\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface UnsubscriptionError extends Error {\n readonly errors: any[];\n}\n\nexport interface UnsubscriptionErrorCtor {\n /**\n * @deprecated Internal implementation detail. 
Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (errors: any[]): UnsubscriptionError;\n}\n\n/**\n * An error thrown when one or more errors have occurred during the\n * `unsubscribe` of a {@link Subscription}.\n */\nexport const UnsubscriptionError: UnsubscriptionErrorCtor = createErrorClass(\n (_super) =>\n function UnsubscriptionErrorImpl(this: any, errors: (Error | string)[]) {\n _super(this);\n this.message = errors\n ? `${errors.length} errors occurred during unsubscription:\n${errors.map((err, i) => `${i + 1}) ${err.toString()}`).join('\\n ')}`\n : '';\n this.name = 'UnsubscriptionError';\n this.errors = errors;\n }\n);\n", "/**\n * Removes an item from an array, mutating it.\n * @param arr The array to remove the item from\n * @param item The item to remove\n */\nexport function arrRemove(arr: T[] | undefined | null, item: T) {\n if (arr) {\n const index = arr.indexOf(item);\n 0 <= index && arr.splice(index, 1);\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { UnsubscriptionError } from './util/UnsubscriptionError';\nimport { SubscriptionLike, TeardownLogic, Unsubscribable } from './types';\nimport { arrRemove } from './util/arrRemove';\n\n/**\n * Represents a disposable resource, such as the execution of an Observable. 
A\n * Subscription has one important method, `unsubscribe`, that takes no argument\n * and just disposes the resource held by the subscription.\n *\n * Additionally, subscriptions may be grouped together through the `add()`\n * method, which will attach a child Subscription to the current Subscription.\n * When a Subscription is unsubscribed, all its children (and its grandchildren)\n * will be unsubscribed as well.\n *\n * @class Subscription\n */\nexport class Subscription implements SubscriptionLike {\n /** @nocollapse */\n public static EMPTY = (() => {\n const empty = new Subscription();\n empty.closed = true;\n return empty;\n })();\n\n /**\n * A flag to indicate whether this Subscription has already been unsubscribed.\n */\n public closed = false;\n\n private _parentage: Subscription[] | Subscription | null = null;\n\n /**\n * The list of registered finalizers to execute upon unsubscription. Adding and removing from this\n * list occurs in the {@link #add} and {@link #remove} methods.\n */\n private _finalizers: Exclude[] | null = null;\n\n /**\n * @param initialTeardown A function executed first as part of the finalization\n * process that is kicked off when {@link #unsubscribe} is called.\n */\n constructor(private initialTeardown?: () => void) {}\n\n /**\n * Disposes the resources held by the subscription. 
May, for instance, cancel\n * an ongoing Observable execution or cancel any other type of work that\n * started when the Subscription was created.\n * @return {void}\n */\n unsubscribe(): void {\n let errors: any[] | undefined;\n\n if (!this.closed) {\n this.closed = true;\n\n // Remove this from it's parents.\n const { _parentage } = this;\n if (_parentage) {\n this._parentage = null;\n if (Array.isArray(_parentage)) {\n for (const parent of _parentage) {\n parent.remove(this);\n }\n } else {\n _parentage.remove(this);\n }\n }\n\n const { initialTeardown: initialFinalizer } = this;\n if (isFunction(initialFinalizer)) {\n try {\n initialFinalizer();\n } catch (e) {\n errors = e instanceof UnsubscriptionError ? e.errors : [e];\n }\n }\n\n const { _finalizers } = this;\n if (_finalizers) {\n this._finalizers = null;\n for (const finalizer of _finalizers) {\n try {\n execFinalizer(finalizer);\n } catch (err) {\n errors = errors ?? [];\n if (err instanceof UnsubscriptionError) {\n errors = [...errors, ...err.errors];\n } else {\n errors.push(err);\n }\n }\n }\n }\n\n if (errors) {\n throw new UnsubscriptionError(errors);\n }\n }\n }\n\n /**\n * Adds a finalizer to this subscription, so that finalization will be unsubscribed/called\n * when this subscription is unsubscribed. If this subscription is already {@link #closed},\n * because it has already been unsubscribed, then whatever finalizer is passed to it\n * will automatically be executed (unless the finalizer itself is also a closed subscription).\n *\n * Closed Subscriptions cannot be added as finalizers to any subscription. Adding a closed\n * subscription to a any subscription will result in no operation. (A noop).\n *\n * Adding a subscription to itself, or adding `null` or `undefined` will not perform any\n * operation at all. (A noop).\n *\n * `Subscription` instances that are added to this instance will automatically remove themselves\n * if they are unsubscribed. 
Functions and {@link Unsubscribable} objects that you wish to remove\n * will need to be removed manually with {@link #remove}\n *\n * @param teardown The finalization logic to add to this subscription.\n */\n add(teardown: TeardownLogic): void {\n // Only add the finalizer if it's not undefined\n // and don't add a subscription to itself.\n if (teardown && teardown !== this) {\n if (this.closed) {\n // If this subscription is already closed,\n // execute whatever finalizer is handed to it automatically.\n execFinalizer(teardown);\n } else {\n if (teardown instanceof Subscription) {\n // We don't add closed subscriptions, and we don't add the same subscription\n // twice. Subscription unsubscribe is idempotent.\n if (teardown.closed || teardown._hasParent(this)) {\n return;\n }\n teardown._addParent(this);\n }\n (this._finalizers = this._finalizers ?? []).push(teardown);\n }\n }\n }\n\n /**\n * Checks to see if a this subscription already has a particular parent.\n * This will signal that this subscription has already been added to the parent in question.\n * @param parent the parent to check for\n */\n private _hasParent(parent: Subscription) {\n const { _parentage } = this;\n return _parentage === parent || (Array.isArray(_parentage) && _parentage.includes(parent));\n }\n\n /**\n * Adds a parent to this subscription so it can be removed from the parent if it\n * unsubscribes on it's own.\n *\n * NOTE: THIS ASSUMES THAT {@link _hasParent} HAS ALREADY BEEN CHECKED.\n * @param parent The parent subscription to add\n */\n private _addParent(parent: Subscription) {\n const { _parentage } = this;\n this._parentage = Array.isArray(_parentage) ? (_parentage.push(parent), _parentage) : _parentage ? 
[_parentage, parent] : parent;\n }\n\n /**\n * Called on a child when it is removed via {@link #remove}.\n * @param parent The parent to remove\n */\n private _removeParent(parent: Subscription) {\n const { _parentage } = this;\n if (_parentage === parent) {\n this._parentage = null;\n } else if (Array.isArray(_parentage)) {\n arrRemove(_parentage, parent);\n }\n }\n\n /**\n * Removes a finalizer from this subscription that was previously added with the {@link #add} method.\n *\n * Note that `Subscription` instances, when unsubscribed, will automatically remove themselves\n * from every other `Subscription` they have been added to. This means that using the `remove` method\n * is not a common thing and should be used thoughtfully.\n *\n * If you add the same finalizer instance of a function or an unsubscribable object to a `Subscription` instance\n * more than once, you will need to call `remove` the same number of times to remove all instances.\n *\n * All finalizer instances are removed to free up memory upon unsubscription.\n *\n * @param teardown The finalizer to remove from this subscription\n */\n remove(teardown: Exclude): void {\n const { _finalizers } = this;\n _finalizers && arrRemove(_finalizers, teardown);\n\n if (teardown instanceof Subscription) {\n teardown._removeParent(this);\n }\n }\n}\n\nexport const EMPTY_SUBSCRIPTION = Subscription.EMPTY;\n\nexport function isSubscription(value: any): value is Subscription {\n return (\n value instanceof Subscription ||\n (value && 'closed' in value && isFunction(value.remove) && isFunction(value.add) && isFunction(value.unsubscribe))\n );\n}\n\nfunction execFinalizer(finalizer: Unsubscribable | (() => void)) {\n if (isFunction(finalizer)) {\n finalizer();\n } else {\n finalizer.unsubscribe();\n }\n}\n", "import { Subscriber } from './Subscriber';\nimport { ObservableNotification } from './types';\n\n/**\n * The {@link GlobalConfig} object for RxJS. 
It is used to configure things\n * like how to react on unhandled errors.\n */\nexport const config: GlobalConfig = {\n onUnhandledError: null,\n onStoppedNotification: null,\n Promise: undefined,\n useDeprecatedSynchronousErrorHandling: false,\n useDeprecatedNextContext: false,\n};\n\n/**\n * The global configuration object for RxJS, used to configure things\n * like how to react on unhandled errors. Accessible via {@link config}\n * object.\n */\nexport interface GlobalConfig {\n /**\n * A registration point for unhandled errors from RxJS. These are errors that\n * cannot were not handled by consuming code in the usual subscription path. For\n * example, if you have this configured, and you subscribe to an observable without\n * providing an error handler, errors from that subscription will end up here. This\n * will _always_ be called asynchronously on another job in the runtime. This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onUnhandledError: ((err: any) => void) | null;\n\n /**\n * A registration point for notifications that cannot be sent to subscribers because they\n * have completed, errored or have been explicitly unsubscribed. By default, next, complete\n * and error notifications sent to stopped subscribers are noops. However, sometimes callers\n * might want a different behavior. For example, with sources that attempt to report errors\n * to stopped subscribers, a caller can configure RxJS to throw an unhandled error instead.\n * This will _always_ be called asynchronously on another job in the runtime. 
This is because\n * we do not want errors thrown in this user-configured handler to interfere with the\n * behavior of the library.\n */\n onStoppedNotification: ((notification: ObservableNotification, subscriber: Subscriber) => void) | null;\n\n /**\n * The promise constructor used by default for {@link Observable#toPromise toPromise} and {@link Observable#forEach forEach}\n * methods.\n *\n * @deprecated As of version 8, RxJS will no longer support this sort of injection of a\n * Promise constructor. If you need a Promise implementation other than native promises,\n * please polyfill/patch Promise as you see appropriate. Will be removed in v8.\n */\n Promise?: PromiseConstructorLike;\n\n /**\n * If true, turns on synchronous error rethrowing, which is a deprecated behavior\n * in v6 and higher. This behavior enables bad patterns like wrapping a subscribe\n * call in a try/catch block. It also enables producer interference, a nasty bug\n * where a multicast can be broken for all observers by a downstream consumer with\n * an unhandled error. DO NOT USE THIS FLAG UNLESS IT'S NEEDED TO BUY TIME\n * FOR MIGRATION REASONS.\n *\n * @deprecated As of version 8, RxJS will no longer support synchronous throwing\n * of unhandled errors. All errors will be thrown on a separate call stack to prevent bad\n * behaviors described above. 
Will be removed in v8.\n */\n useDeprecatedSynchronousErrorHandling: boolean;\n\n /**\n * If true, enables an as-of-yet undocumented feature from v5: The ability to access\n * `unsubscribe()` via `this` context in `next` functions created in observers passed\n * to `subscribe`.\n *\n * This is being removed because the performance was severely problematic, and it could also cause\n * issues when types other than POJOs are passed to subscribe as subscribers, as they will likely have\n * their `this` context overwritten.\n *\n * @deprecated As of version 8, RxJS will no longer support altering the\n * context of next functions provided as part of an observer to Subscribe. Instead,\n * you will have access to a subscription or a signal or token that will allow you to do things like\n * unsubscribe and test closed status. Will be removed in v8.\n */\n useDeprecatedNextContext: boolean;\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetTimeoutFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearTimeoutFunction = (handle: TimerHandle) => void;\n\ninterface TimeoutProvider {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n delegate:\n | {\n setTimeout: SetTimeoutFunction;\n clearTimeout: ClearTimeoutFunction;\n }\n | undefined;\n}\n\nexport const timeoutProvider: TimeoutProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setTimeout(handler: () => void, timeout?: number, ...args) {\n const { delegate } = timeoutProvider;\n if (delegate?.setTimeout) {\n return delegate.setTimeout(handler, timeout, ...args);\n }\n return setTimeout(handler, timeout, ...args);\n },\n clearTimeout(handle) {\n const { delegate } = timeoutProvider;\n return (delegate?.clearTimeout || clearTimeout)(handle as any);\n },\n delegate: undefined,\n};\n", "import { config } from '../config';\nimport { 
timeoutProvider } from '../scheduler/timeoutProvider';\n\n/**\n * Handles an error on another job either with the user-configured {@link onUnhandledError},\n * or by throwing it on that new job so it can be picked up by `window.onerror`, `process.on('error')`, etc.\n *\n * This should be called whenever there is an error that is out-of-band with the subscription\n * or when an error hits a terminal boundary of the subscription and no error handler was provided.\n *\n * @param err the error to report\n */\nexport function reportUnhandledError(err: any) {\n timeoutProvider.setTimeout(() => {\n const { onUnhandledError } = config;\n if (onUnhandledError) {\n // Execute the user-configured error handler.\n onUnhandledError(err);\n } else {\n // Throw so it is picked up by the runtime's uncaught error mechanism.\n throw err;\n }\n });\n}\n", "/* tslint:disable:no-empty */\nexport function noop() { }\n", "import { CompleteNotification, NextNotification, ErrorNotification } from './types';\n\n/**\n * A completion object optimized for memory use and created to be the\n * same \"shape\" as other notifications in v8.\n * @internal\n */\nexport const COMPLETE_NOTIFICATION = (() => createNotification('C', undefined, undefined) as CompleteNotification)();\n\n/**\n * Internal use only. Creates an optimized error notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function errorNotification(error: any): ErrorNotification {\n return createNotification('E', undefined, error) as any;\n}\n\n/**\n * Internal use only. 
Creates an optimized next notification that is the same \"shape\"\n * as other notifications.\n * @internal\n */\nexport function nextNotification(value: T) {\n return createNotification('N', value, undefined) as NextNotification;\n}\n\n/**\n * Ensures that all notifications created internally have the same \"shape\" in v8.\n *\n * TODO: This is only exported to support a crazy legacy test in `groupBy`.\n * @internal\n */\nexport function createNotification(kind: 'N' | 'E' | 'C', value: any, error: any) {\n return {\n kind,\n value,\n error,\n };\n}\n", "import { config } from '../config';\n\nlet context: { errorThrown: boolean; error: any } | null = null;\n\n/**\n * Handles dealing with errors for super-gross mode. Creates a context, in which\n * any synchronously thrown errors will be passed to {@link captureError}. Which\n * will record the error such that it will be rethrown after the call back is complete.\n * TODO: Remove in v8\n * @param cb An immediately executed function.\n */\nexport function errorContext(cb: () => void) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n const isRoot = !context;\n if (isRoot) {\n context = { errorThrown: false, error: null };\n }\n cb();\n if (isRoot) {\n const { errorThrown, error } = context!;\n context = null;\n if (errorThrown) {\n throw error;\n }\n }\n } else {\n // This is the general non-deprecated path for everyone that\n // isn't crazy enough to use super-gross mode (useDeprecatedSynchronousErrorHandling)\n cb();\n }\n}\n\n/**\n * Captures errors only in super-gross mode.\n * @param err the error to capture\n */\nexport function captureError(err: any) {\n if (config.useDeprecatedSynchronousErrorHandling && context) {\n context.errorThrown = true;\n context.error = err;\n }\n}\n", "import { isFunction } from './util/isFunction';\nimport { Observer, ObservableNotification } from './types';\nimport { isSubscription, Subscription } from './Subscription';\nimport { config } from './config';\nimport { 
reportUnhandledError } from './util/reportUnhandledError';\nimport { noop } from './util/noop';\nimport { nextNotification, errorNotification, COMPLETE_NOTIFICATION } from './NotificationFactories';\nimport { timeoutProvider } from './scheduler/timeoutProvider';\nimport { captureError } from './util/errorContext';\n\n/**\n * Implements the {@link Observer} interface and extends the\n * {@link Subscription} class. While the {@link Observer} is the public API for\n * consuming the values of an {@link Observable}, all Observers get converted to\n * a Subscriber, in order to provide Subscription-like capabilities such as\n * `unsubscribe`. Subscriber is a common type in RxJS, and crucial for\n * implementing operators, but it is rarely used as a public API.\n *\n * @class Subscriber\n */\nexport class Subscriber extends Subscription implements Observer {\n /**\n * A static factory for a Subscriber, given a (potentially partial) definition\n * of an Observer.\n * @param next The `next` callback of an Observer.\n * @param error The `error` callback of an\n * Observer.\n * @param complete The `complete` callback of an\n * Observer.\n * @return A Subscriber wrapping the (partially defined)\n * Observer represented by the given arguments.\n * @nocollapse\n * @deprecated Do not use. Will be removed in v8. There is no replacement for this\n * method, and there is no reason to be creating instances of `Subscriber` directly.\n * If you have a specific use case, please file an issue.\n */\n static create(next?: (x?: T) => void, error?: (e?: any) => void, complete?: () => void): Subscriber {\n return new SafeSubscriber(next, error, complete);\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n protected isStopped: boolean = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n protected destination: Subscriber | Observer; // this `any` is the escape hatch to erase extra type param (e.g. R)\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * There is no reason to directly create an instance of Subscriber. This type is exported for typings reasons.\n */\n constructor(destination?: Subscriber | Observer) {\n super();\n if (destination) {\n this.destination = destination;\n // Automatically chain subscriptions together here.\n // if destination is a Subscription, then it is a Subscriber.\n if (isSubscription(destination)) {\n destination.add(this);\n }\n } else {\n this.destination = EMPTY_OBSERVER;\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `next` from\n * the Observable, with a value. The Observable may call this method 0 or more\n * times.\n * @param {T} [value] The `next` value.\n * @return {void}\n */\n next(value?: T): void {\n if (this.isStopped) {\n handleStoppedNotification(nextNotification(value), this);\n } else {\n this._next(value!);\n }\n }\n\n /**\n * The {@link Observer} callback to receive notifications of type `error` from\n * the Observable, with an attached `Error`. Notifies the Observer that\n * the Observable has experienced an error condition.\n * @param {any} [err] The `error` exception.\n * @return {void}\n */\n error(err?: any): void {\n if (this.isStopped) {\n handleStoppedNotification(errorNotification(err), this);\n } else {\n this.isStopped = true;\n this._error(err);\n }\n }\n\n /**\n * The {@link Observer} callback to receive a valueless notification of type\n * `complete` from the Observable. 
Notifies the Observer that the Observable\n * has finished sending push-based notifications.\n * @return {void}\n */\n complete(): void {\n if (this.isStopped) {\n handleStoppedNotification(COMPLETE_NOTIFICATION, this);\n } else {\n this.isStopped = true;\n this._complete();\n }\n }\n\n unsubscribe(): void {\n if (!this.closed) {\n this.isStopped = true;\n super.unsubscribe();\n this.destination = null!;\n }\n }\n\n protected _next(value: T): void {\n this.destination.next(value);\n }\n\n protected _error(err: any): void {\n try {\n this.destination.error(err);\n } finally {\n this.unsubscribe();\n }\n }\n\n protected _complete(): void {\n try {\n this.destination.complete();\n } finally {\n this.unsubscribe();\n }\n }\n}\n\n/**\n * This bind is captured here because we want to be able to have\n * compatibility with monoid libraries that tend to use a method named\n * `bind`. In particular, a library called Monio requires this.\n */\nconst _bind = Function.prototype.bind;\n\nfunction bind any>(fn: Fn, thisArg: any): Fn {\n return _bind.call(fn, thisArg);\n}\n\n/**\n * Internal optimization only, DO NOT EXPOSE.\n * @internal\n */\nclass ConsumerObserver implements Observer {\n constructor(private partialObserver: Partial>) {}\n\n next(value: T): void {\n const { partialObserver } = this;\n if (partialObserver.next) {\n try {\n partialObserver.next(value);\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n\n error(err: any): void {\n const { partialObserver } = this;\n if (partialObserver.error) {\n try {\n partialObserver.error(err);\n } catch (error) {\n handleUnhandledError(error);\n }\n } else {\n handleUnhandledError(err);\n }\n }\n\n complete(): void {\n const { partialObserver } = this;\n if (partialObserver.complete) {\n try {\n partialObserver.complete();\n } catch (error) {\n handleUnhandledError(error);\n }\n }\n }\n}\n\nexport class SafeSubscriber extends Subscriber {\n constructor(\n observerOrNext?: Partial> | ((value: T) => void) | 
null,\n error?: ((e?: any) => void) | null,\n complete?: (() => void) | null\n ) {\n super();\n\n let partialObserver: Partial>;\n if (isFunction(observerOrNext) || !observerOrNext) {\n // The first argument is a function, not an observer. The next\n // two arguments *could* be observers, or they could be empty.\n partialObserver = {\n next: (observerOrNext ?? undefined) as (((value: T) => void) | undefined),\n error: error ?? undefined,\n complete: complete ?? undefined,\n };\n } else {\n // The first argument is a partial observer.\n let context: any;\n if (this && config.useDeprecatedNextContext) {\n // This is a deprecated path that made `this.unsubscribe()` available in\n // next handler functions passed to subscribe. This only exists behind a flag\n // now, as it is *very* slow.\n context = Object.create(observerOrNext);\n context.unsubscribe = () => this.unsubscribe();\n partialObserver = {\n next: observerOrNext.next && bind(observerOrNext.next, context),\n error: observerOrNext.error && bind(observerOrNext.error, context),\n complete: observerOrNext.complete && bind(observerOrNext.complete, context),\n };\n } else {\n // The \"normal\" path. 
Just use the partial observer directly.\n partialObserver = observerOrNext;\n }\n }\n\n // Wrap the partial observer to ensure it's a full observer, and\n // make sure proper error handling is accounted for.\n this.destination = new ConsumerObserver(partialObserver);\n }\n}\n\nfunction handleUnhandledError(error: any) {\n if (config.useDeprecatedSynchronousErrorHandling) {\n captureError(error);\n } else {\n // Ideal path, we report this as an unhandled error,\n // which is thrown on a new call stack.\n reportUnhandledError(error);\n }\n}\n\n/**\n * An error handler used when no error handler was supplied\n * to the SafeSubscriber -- meaning no error handler was supplied\n * do the `subscribe` call on our observable.\n * @param err The error to handle\n */\nfunction defaultErrorHandler(err: any) {\n throw err;\n}\n\n/**\n * A handler for notifications that cannot be sent to a stopped subscriber.\n * @param notification The notification being sent\n * @param subscriber The stopped subscriber\n */\nfunction handleStoppedNotification(notification: ObservableNotification, subscriber: Subscriber) {\n const { onStoppedNotification } = config;\n onStoppedNotification && timeoutProvider.setTimeout(() => onStoppedNotification(notification, subscriber));\n}\n\n/**\n * The observer used as a stub for subscriptions where the user did not\n * pass any arguments to `subscribe`. Comes with the default error handling\n * behavior.\n */\nexport const EMPTY_OBSERVER: Readonly> & { closed: true } = {\n closed: true,\n next: noop,\n error: defaultErrorHandler,\n complete: noop,\n};\n", "/**\n * Symbol.observable or a string \"@@observable\". 
Used for interop\n *\n * @deprecated We will no longer be exporting this symbol in upcoming versions of RxJS.\n * Instead polyfill and use Symbol.observable directly *or* use https://www.npmjs.com/package/symbol-observable\n */\nexport const observable: string | symbol = (() => (typeof Symbol === 'function' && Symbol.observable) || '@@observable')();\n", "/**\n * This function takes one parameter and just returns it. Simply put,\n * this is like `(x: T): T => x`.\n *\n * ## Examples\n *\n * This is useful in some cases when using things like `mergeMap`\n *\n * ```ts\n * import { interval, take, map, range, mergeMap, identity } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(5));\n *\n * const result$ = source$.pipe(\n * map(i => range(i)),\n * mergeMap(identity) // same as mergeMap(x => x)\n * );\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * Or when you want to selectively apply an operator\n *\n * ```ts\n * import { interval, take, identity } from 'rxjs';\n *\n * const shouldLimit = () => Math.random() < 0.5;\n *\n * const source$ = interval(1000);\n *\n * const result$ = source$.pipe(shouldLimit() ? 
take(5) : identity);\n *\n * result$.subscribe({\n * next: console.log\n * });\n * ```\n *\n * @param x Any value that is returned by this function\n * @returns The value passed as the first parameter to this function\n */\nexport function identity(x: T): T {\n return x;\n}\n", "import { identity } from './identity';\nimport { UnaryFunction } from '../types';\n\nexport function pipe(): typeof identity;\nexport function pipe(fn1: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction): UnaryFunction;\nexport function pipe(fn1: UnaryFunction, fn2: UnaryFunction, fn3: UnaryFunction): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction\n): UnaryFunction;\nexport function pipe(\n fn1: UnaryFunction,\n fn2: UnaryFunction,\n fn3: UnaryFunction,\n fn4: UnaryFunction,\n fn5: UnaryFunction,\n fn6: UnaryFunction,\n fn7: UnaryFunction,\n fn8: UnaryFunction,\n fn9: UnaryFunction,\n ...fns: UnaryFunction[]\n): 
UnaryFunction;\n\n/**\n * pipe() can be called on one or more functions, each of which can take one argument (\"UnaryFunction\")\n * and uses it to return a value.\n * It returns a function that takes one argument, passes it to the first UnaryFunction, and then\n * passes the result to the next one, passes that result to the next one, and so on. \n */\nexport function pipe(...fns: Array>): UnaryFunction {\n return pipeFromArray(fns);\n}\n\n/** @internal */\nexport function pipeFromArray(fns: Array>): UnaryFunction {\n if (fns.length === 0) {\n return identity as UnaryFunction;\n }\n\n if (fns.length === 1) {\n return fns[0];\n }\n\n return function piped(input: T): R {\n return fns.reduce((prev: any, fn: UnaryFunction) => fn(prev), input as any);\n };\n}\n", "import { Operator } from './Operator';\nimport { SafeSubscriber, Subscriber } from './Subscriber';\nimport { isSubscription, Subscription } from './Subscription';\nimport { TeardownLogic, OperatorFunction, Subscribable, Observer } from './types';\nimport { observable as Symbol_observable } from './symbol/observable';\nimport { pipeFromArray } from './util/pipe';\nimport { config } from './config';\nimport { isFunction } from './util/isFunction';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A representation of any set of values over any amount of time. This is the most basic building block\n * of RxJS.\n *\n * @class Observable\n */\nexport class Observable implements Subscribable {\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n source: Observable | undefined;\n\n /**\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n */\n operator: Operator | undefined;\n\n /**\n * @constructor\n * @param {Function} subscribe the function that is called when the Observable is\n * initially subscribed to. 
This function is given a Subscriber, to which new values\n * can be `next`ed, or an `error` method can be called to raise an error, or\n * `complete` can be called to notify of a successful completion.\n */\n constructor(subscribe?: (this: Observable, subscriber: Subscriber) => TeardownLogic) {\n if (subscribe) {\n this._subscribe = subscribe;\n }\n }\n\n // HACK: Since TypeScript inherits static properties too, we have to\n // fight against TypeScript here so Subject can have a different static create signature\n /**\n * Creates a new Observable by calling the Observable constructor\n * @owner Observable\n * @method create\n * @param {Function} subscribe? the subscriber function to be passed to the Observable constructor\n * @return {Observable} a new observable\n * @nocollapse\n * @deprecated Use `new Observable()` instead. Will be removed in v8.\n */\n static create: (...args: any[]) => any = (subscribe?: (subscriber: Subscriber) => TeardownLogic) => {\n return new Observable(subscribe);\n };\n\n /**\n * Creates a new Observable, with this Observable instance as the source, and the passed\n * operator defined as the new observable's operator.\n * @method lift\n * @param operator the operator defining the operation to take on the observable\n * @return a new observable with the Operator applied\n * @deprecated Internal implementation detail, do not use directly. Will be made internal in v8.\n * If you have implemented an operator using `lift`, it is recommended that you create an\n * operator by simply returning `new Observable()` directly. 
See \"Creating new operators from\n * scratch\" section here: https://rxjs.dev/guide/operators\n */\n lift(operator?: Operator): Observable {\n const observable = new Observable();\n observable.source = this;\n observable.operator = operator;\n return observable;\n }\n\n subscribe(observerOrNext?: Partial> | ((value: T) => void)): Subscription;\n /** @deprecated Instead of passing separate callback arguments, use an observer argument. Signatures taking separate callback arguments will be removed in v8. Details: https://rxjs.dev/deprecations/subscribe-arguments */\n subscribe(next?: ((value: T) => void) | null, error?: ((error: any) => void) | null, complete?: (() => void) | null): Subscription;\n /**\n * Invokes an execution of an Observable and registers Observer handlers for notifications it will emit.\n *\n * Use it when you have all these Observables, but still nothing is happening.\n *\n * `subscribe` is not a regular operator, but a method that calls Observable's internal `subscribe` function. It\n * might be for example a function that you passed to Observable's constructor, but most of the time it is\n * a library implementation, which defines what will be emitted by an Observable, and when it be will emitted. This means\n * that calling `subscribe` is actually the moment when Observable starts its work, not when it is created, as it is often\n * the thought.\n *\n * Apart from starting the execution of an Observable, this method allows you to listen for values\n * that an Observable emits, as well as for when it completes or errors. You can achieve this in two\n * of the following ways.\n *\n * The first way is creating an object that implements {@link Observer} interface. It should have methods\n * defined by that interface, but note that it should be just a regular JavaScript object, which you can create\n * yourself in any way you want (ES6 class, classic function constructor, object literal etc.). 
In particular, do\n * not attempt to use any RxJS implementation details to create Observers - you don't need them. Remember also\n * that your object does not have to implement all methods. If you find yourself creating a method that doesn't\n * do anything, you can simply omit it. Note however, if the `error` method is not provided and an error happens,\n * it will be thrown asynchronously. Errors thrown asynchronously cannot be caught using `try`/`catch`. Instead,\n * use the {@link onUnhandledError} configuration option or use a runtime handler (like `window.onerror` or\n * `process.on('error)`) to be notified of unhandled errors. Because of this, it's recommended that you provide\n * an `error` method to avoid missing thrown errors.\n *\n * The second way is to give up on Observer object altogether and simply provide callback functions in place of its methods.\n * This means you can provide three functions as arguments to `subscribe`, where the first function is equivalent\n * of a `next` method, the second of an `error` method and the third of a `complete` method. Just as in case of an Observer,\n * if you do not need to listen for something, you can omit a function by passing `undefined` or `null`,\n * since `subscribe` recognizes these functions by where they were placed in function call. When it comes\n * to the `error` function, as with an Observer, if not provided, errors emitted by an Observable will be thrown asynchronously.\n *\n * You can, however, subscribe with no parameters at all. This may be the case where you're not interested in terminal events\n * and you also handled emissions internally by using operators (e.g. using `tap`).\n *\n * Whichever style of calling `subscribe` you use, in both cases it returns a Subscription object.\n * This object allows you to call `unsubscribe` on it, which in turn will stop the work that an Observable does and will clean\n * up all resources that an Observable used. 
Note that cancelling a subscription will not call `complete` callback\n * provided to `subscribe` function, which is reserved for a regular completion signal that comes from an Observable.\n *\n * Remember that callbacks provided to `subscribe` are not guaranteed to be called asynchronously.\n * It is an Observable itself that decides when these functions will be called. For example {@link of}\n * by default emits all its values synchronously. Always check documentation for how given Observable\n * will behave when subscribed and if its default behavior can be modified with a `scheduler`.\n *\n * #### Examples\n *\n * Subscribe with an {@link guide/observer Observer}\n *\n * ```ts\n * import { of } from 'rxjs';\n *\n * const sumObserver = {\n * sum: 0,\n * next(value) {\n * console.log('Adding: ' + value);\n * this.sum = this.sum + value;\n * },\n * error() {\n * // We actually could just remove this method,\n * // since we do not really care about errors right now.\n * },\n * complete() {\n * console.log('Sum equals: ' + this.sum);\n * }\n * };\n *\n * of(1, 2, 3) // Synchronously emits 1, 2, 3 and then completes.\n * .subscribe(sumObserver);\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Subscribe with functions ({@link deprecations/subscribe-arguments deprecated})\n *\n * ```ts\n * import { of } from 'rxjs'\n *\n * let sum = 0;\n *\n * of(1, 2, 3).subscribe(\n * value => {\n * console.log('Adding: ' + value);\n * sum = sum + value;\n * },\n * undefined,\n * () => console.log('Sum equals: ' + sum)\n * );\n *\n * // Logs:\n * // 'Adding: 1'\n * // 'Adding: 2'\n * // 'Adding: 3'\n * // 'Sum equals: 6'\n * ```\n *\n * Cancel a subscription\n *\n * ```ts\n * import { interval } from 'rxjs';\n *\n * const subscription = interval(1000).subscribe({\n * next(num) {\n * console.log(num)\n * },\n * complete() {\n * // Will not be called, even when cancelling subscription.\n * console.log('completed!');\n * 
}\n * });\n *\n * setTimeout(() => {\n * subscription.unsubscribe();\n * console.log('unsubscribed!');\n * }, 2500);\n *\n * // Logs:\n * // 0 after 1s\n * // 1 after 2s\n * // 'unsubscribed!' after 2.5s\n * ```\n *\n * @param {Observer|Function} observerOrNext (optional) Either an observer with methods to be called,\n * or the first of three possible handlers, which is the handler for each value emitted from the subscribed\n * Observable.\n * @param {Function} error (optional) A handler for a terminal event resulting from an error. If no error handler is provided,\n * the error will be thrown asynchronously as unhandled.\n * @param {Function} complete (optional) A handler for a terminal event resulting from successful completion.\n * @return {Subscription} a subscription reference to the registered handlers\n * @method subscribe\n */\n subscribe(\n observerOrNext?: Partial> | ((value: T) => void) | null,\n error?: ((error: any) => void) | null,\n complete?: (() => void) | null\n ): Subscription {\n const subscriber = isSubscriber(observerOrNext) ? observerOrNext : new SafeSubscriber(observerOrNext, error, complete);\n\n errorContext(() => {\n const { operator, source } = this;\n subscriber.add(\n operator\n ? // We're dealing with a subscription in the\n // operator chain to one of our lifted operators.\n operator.call(subscriber, source)\n : source\n ? // If `source` has a value, but `operator` does not, something that\n // had intimate knowledge of our API, like our `Subject`, must have\n // set it. 
We're going to just call `_subscribe` directly.\n this._subscribe(subscriber)\n : // In all other cases, we're likely wrapping a user-provided initializer\n // function, so we need to catch errors and handle them appropriately.\n this._trySubscribe(subscriber)\n );\n });\n\n return subscriber;\n }\n\n /** @internal */\n protected _trySubscribe(sink: Subscriber): TeardownLogic {\n try {\n return this._subscribe(sink);\n } catch (err) {\n // We don't need to return anything in this case,\n // because it's just going to try to `add()` to a subscription\n // above.\n sink.error(err);\n }\n }\n\n /**\n * Used as a NON-CANCELLABLE means of subscribing to an observable, for use with\n * APIs that expect promises, like `async/await`. You cannot unsubscribe from this.\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. 
To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * #### Example\n *\n * ```ts\n * import { interval, take } from 'rxjs';\n *\n * const source$ = interval(1000).pipe(take(4));\n *\n * async function getTotal() {\n * let total = 0;\n *\n * await source$.forEach(value => {\n * total += value;\n * console.log('observable -> ' + value);\n * });\n *\n * return total;\n * }\n *\n * getTotal().then(\n * total => console.log('Total: ' + total)\n * );\n *\n * // Expected:\n * // 'observable -> 0'\n * // 'observable -> 1'\n * // 'observable -> 2'\n * // 'observable -> 3'\n * // 'Total: 6'\n * ```\n *\n * @param next a handler for each value emitted by the observable\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n */\n forEach(next: (value: T) => void): Promise;\n\n /**\n * @param next a handler for each value emitted by the observable\n * @param promiseCtor a constructor function used to instantiate the Promise\n * @return a promise that either resolves on observable completion or\n * rejects with the handled error\n * @deprecated Passing a Promise constructor will no longer be available\n * in upcoming versions of RxJS. This is because it adds weight to the library, for very\n * little benefit. If you need this functionality, it is recommended that you either\n * polyfill Promise, or you create an adapter to convert the returned native promise\n * to whatever promise implementation you wanted. 
Will be removed in v8.\n */\n forEach(next: (value: T) => void, promiseCtor: PromiseConstructorLike): Promise;\n\n forEach(next: (value: T) => void, promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n const subscriber = new SafeSubscriber({\n next: (value) => {\n try {\n next(value);\n } catch (err) {\n reject(err);\n subscriber.unsubscribe();\n }\n },\n error: reject,\n complete: resolve,\n });\n this.subscribe(subscriber);\n }) as Promise;\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): TeardownLogic {\n return this.source?.subscribe(subscriber);\n }\n\n /**\n * An interop point defined by the es7-observable spec https://github.com/zenparsing/es-observable\n * @method Symbol.observable\n * @return {Observable} this instance of the observable\n */\n [Symbol_observable]() {\n return this;\n }\n\n /* tslint:disable:max-line-length */\n pipe(): Observable;\n pipe(op1: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction): Observable;\n pipe(op1: OperatorFunction, op2: OperatorFunction, op3: OperatorFunction): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: 
OperatorFunction,\n op8: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction\n ): Observable;\n pipe(\n op1: OperatorFunction,\n op2: OperatorFunction,\n op3: OperatorFunction,\n op4: OperatorFunction,\n op5: OperatorFunction,\n op6: OperatorFunction,\n op7: OperatorFunction,\n op8: OperatorFunction,\n op9: OperatorFunction,\n ...operations: OperatorFunction[]\n ): Observable;\n /* tslint:enable:max-line-length */\n\n /**\n * Used to stitch together functional operators into a chain.\n * @method pipe\n * @return {Observable} the Observable result of all of the operators having\n * been called in the order they were passed in.\n *\n * ## Example\n *\n * ```ts\n * import { interval, filter, map, scan } from 'rxjs';\n *\n * interval(1000)\n * .pipe(\n * filter(x => x % 2 === 0),\n * map(x => x + x),\n * scan((acc, x) => acc + x)\n * )\n * .subscribe(x => console.log(x));\n * ```\n */\n pipe(...operations: OperatorFunction[]): Observable {\n return pipeFromArray(operations)(this);\n }\n\n /* tslint:disable:max-line-length */\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: typeof Promise): Promise;\n /** @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. 
Details: https://rxjs.dev/deprecations/to-promise */\n toPromise(PromiseCtor: PromiseConstructorLike): Promise;\n /* tslint:enable:max-line-length */\n\n /**\n * Subscribe to this Observable and get a Promise resolving on\n * `complete` with the last emission (if any).\n *\n * **WARNING**: Only use this with observables you *know* will complete. If the source\n * observable does not complete, you will end up with a promise that is hung up, and\n * potentially all of the state of an async function hanging out in memory. To avoid\n * this situation, look into adding something like {@link timeout}, {@link take},\n * {@link takeWhile}, or {@link takeUntil} amongst others.\n *\n * @method toPromise\n * @param [promiseCtor] a constructor function used to instantiate\n * the Promise\n * @return A Promise that resolves with the last value emit, or\n * rejects on an error. If there were no emissions, Promise\n * resolves with undefined.\n * @deprecated Replaced with {@link firstValueFrom} and {@link lastValueFrom}. Will be removed in v8. Details: https://rxjs.dev/deprecations/to-promise\n */\n toPromise(promiseCtor?: PromiseConstructorLike): Promise {\n promiseCtor = getPromiseCtor(promiseCtor);\n\n return new promiseCtor((resolve, reject) => {\n let value: T | undefined;\n this.subscribe(\n (x: T) => (value = x),\n (err: any) => reject(err),\n () => resolve(value)\n );\n }) as Promise;\n }\n}\n\n/**\n * Decides between a passed promise constructor from consuming code,\n * A default configured promise constructor, and the native promise\n * constructor and returns it. If nothing can be found, it will throw\n * an error.\n * @param promiseCtor The optional promise constructor to passed by consuming code\n */\nfunction getPromiseCtor(promiseCtor: PromiseConstructorLike | undefined) {\n return promiseCtor ?? config.Promise ?? 
Promise;\n}\n\nfunction isObserver(value: any): value is Observer {\n return value && isFunction(value.next) && isFunction(value.error) && isFunction(value.complete);\n}\n\nfunction isSubscriber(value: any): value is Subscriber {\n return (value && value instanceof Subscriber) || (isObserver(value) && isSubscription(value));\n}\n", "import { Observable } from '../Observable';\nimport { Subscriber } from '../Subscriber';\nimport { OperatorFunction } from '../types';\nimport { isFunction } from './isFunction';\n\n/**\n * Used to determine if an object is an Observable with a lift function.\n */\nexport function hasLift(source: any): source is { lift: InstanceType['lift'] } {\n return isFunction(source?.lift);\n}\n\n/**\n * Creates an `OperatorFunction`. Used to define operators throughout the library in a concise way.\n * @param init The logic to connect the liftedSource to the subscriber at the moment of subscription.\n */\nexport function operate(\n init: (liftedSource: Observable, subscriber: Subscriber) => (() => void) | void\n): OperatorFunction {\n return (source: Observable) => {\n if (hasLift(source)) {\n return source.lift(function (this: Subscriber, liftedSource: Observable) {\n try {\n return init(liftedSource, this);\n } catch (err) {\n this.error(err);\n }\n });\n }\n throw new TypeError('Unable to lift unknown Observable type');\n };\n}\n", "import { Subscriber } from '../Subscriber';\n\n/**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and send to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. 
Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional teardown logic here. This will only be called on teardown if the\n * subscriber itself is not already closed. This is called after all other teardown logic is executed.\n */\nexport function createOperatorSubscriber<T>(\n destination: Subscriber<any>,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n onFinalize?: () => void\n): Subscriber<T> {\n return new OperatorSubscriber(destination, onNext, onComplete, onError, onFinalize);\n}\n\n/**\n * A generic helper for allowing operators to be created with a Subscriber and\n * use closures to capture necessary state from the operator function itself.\n */\nexport class OperatorSubscriber<T> extends Subscriber<T> {\n /**\n * Creates an instance of an `OperatorSubscriber`.\n * @param destination The downstream subscriber.\n * @param onNext Handles next values, only called if this subscriber is not stopped or closed. Any\n * error that occurs in this function is caught and sent to the `error` method of this subscriber.\n * @param onError Handles errors from the subscription, any errors that occur in this handler are caught\n * and sent to the `destination` error handler.\n * @param onComplete Handles completion notification from the subscription. Any errors that occur in\n * this handler are sent to the `destination` error handler.\n * @param onFinalize Additional finalization logic here. This will only be called on finalization if the\n * subscriber itself is not already closed. 
This is called after all other finalization logic is executed.\n * @param shouldUnsubscribe An optional check to see if an unsubscribe call should truly unsubscribe.\n * NOTE: This currently **ONLY** exists to support the strange behavior of {@link groupBy}, where unsubscription\n * to the resulting observable does not actually disconnect from the source if there are active subscriptions\n * to any grouped observable. (DO NOT EXPOSE OR USE EXTERNALLY!!!)\n */\n constructor(\n destination: Subscriber,\n onNext?: (value: T) => void,\n onComplete?: () => void,\n onError?: (err: any) => void,\n private onFinalize?: () => void,\n private shouldUnsubscribe?: () => boolean\n ) {\n // It's important - for performance reasons - that all of this class's\n // members are initialized and that they are always initialized in the same\n // order. This will ensure that all OperatorSubscriber instances have the\n // same hidden class in V8. This, in turn, will help keep the number of\n // hidden classes involved in property accesses within the base class as\n // low as possible. If the number of hidden classes involved exceeds four,\n // the property accesses will become megamorphic and performance penalties\n // will be incurred - i.e. inline caches won't be used.\n //\n // The reasons for ensuring all instances have the same hidden class are\n // further discussed in this blog post from Benedikt Meurer:\n // https://benediktmeurer.de/2018/03/23/impact-of-polymorphism-on-component-based-frameworks-like-react/\n super(destination);\n this._next = onNext\n ? function (this: OperatorSubscriber, value: T) {\n try {\n onNext(value);\n } catch (err) {\n destination.error(err);\n }\n }\n : super._next;\n this._error = onError\n ? 
function (this: OperatorSubscriber, err: any) {\n try {\n onError(err);\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._error;\n this._complete = onComplete\n ? function (this: OperatorSubscriber) {\n try {\n onComplete();\n } catch (err) {\n // Send any errors that occur down stream.\n destination.error(err);\n } finally {\n // Ensure finalization.\n this.unsubscribe();\n }\n }\n : super._complete;\n }\n\n unsubscribe() {\n if (!this.shouldUnsubscribe || this.shouldUnsubscribe()) {\n const { closed } = this;\n super.unsubscribe();\n // Execute additional teardown if we have any and we didn't already do so.\n !closed && this.onFinalize?.();\n }\n }\n}\n", "import { Subscription } from '../Subscription';\n\ninterface AnimationFrameProvider {\n schedule(callback: FrameRequestCallback): Subscription;\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n delegate:\n | {\n requestAnimationFrame: typeof requestAnimationFrame;\n cancelAnimationFrame: typeof cancelAnimationFrame;\n }\n | undefined;\n}\n\nexport const animationFrameProvider: AnimationFrameProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n schedule(callback) {\n let request = requestAnimationFrame;\n let cancel: typeof cancelAnimationFrame | undefined = cancelAnimationFrame;\n const { delegate } = animationFrameProvider;\n if (delegate) {\n request = delegate.requestAnimationFrame;\n cancel = delegate.cancelAnimationFrame;\n }\n const handle = request((timestamp) => {\n // Clear the cancel function. 
The request has been fulfilled, so\n // attempting to cancel the request upon unsubscription would be\n // pointless.\n cancel = undefined;\n callback(timestamp);\n });\n return new Subscription(() => cancel?.(handle));\n },\n requestAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.requestAnimationFrame || requestAnimationFrame)(...args);\n },\n cancelAnimationFrame(...args) {\n const { delegate } = animationFrameProvider;\n return (delegate?.cancelAnimationFrame || cancelAnimationFrame)(...args);\n },\n delegate: undefined,\n};\n", "import { createErrorClass } from './createErrorClass';\n\nexport interface ObjectUnsubscribedError extends Error {}\n\nexport interface ObjectUnsubscribedErrorCtor {\n /**\n * @deprecated Internal implementation detail. Do not construct error instances.\n * Cannot be tagged as internal: https://github.com/ReactiveX/rxjs/issues/6269\n */\n new (): ObjectUnsubscribedError;\n}\n\n/**\n * An error thrown when an action is invalid because the object has been\n * unsubscribed.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n *\n * @class ObjectUnsubscribedError\n */\nexport const ObjectUnsubscribedError: ObjectUnsubscribedErrorCtor = createErrorClass(\n (_super) =>\n function ObjectUnsubscribedErrorImpl(this: any) {\n _super(this);\n this.name = 'ObjectUnsubscribedError';\n this.message = 'object unsubscribed';\n }\n);\n", "import { Operator } from './Operator';\nimport { Observable } from './Observable';\nimport { Subscriber } from './Subscriber';\nimport { Subscription, EMPTY_SUBSCRIPTION } from './Subscription';\nimport { Observer, SubscriptionLike, TeardownLogic } from './types';\nimport { ObjectUnsubscribedError } from './util/ObjectUnsubscribedError';\nimport { arrRemove } from './util/arrRemove';\nimport { errorContext } from './util/errorContext';\n\n/**\n * A Subject is a special type of Observable that allows values to be\n * multicasted to many Observers. 
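The `Subject` source above documents multicasting: one `next()` call fans out to every registered observer, iterating over a snapshot (`currentObservers`) so re-entrant subscribe/unsubscribe does not disturb the current emission. A minimal sketch of that behaviour (a hypothetical toy `MiniSubject`, not RxJS's `Subject`; no error/complete handling):

```typescript
// Toy multicast subject: every registered observer receives every next() value.
// Illustrative sketch only, assuming none of RxJS's error/complete semantics.
type ObserverFn<T> = (value: T) => void;

class MiniSubject<T> {
  private observers: ObserverFn<T>[] = [];

  subscribe(observer: ObserverFn<T>): () => void {
    this.observers.push(observer);
    // Return an unsubscribe function that removes this observer.
    return () => {
      const i = this.observers.indexOf(observer);
      if (i >= 0) this.observers.splice(i, 1);
    };
  }

  next(value: T): void {
    // Emit over a copy, so observers added or removed re-entrantly do not
    // affect the in-flight emission (mirrors the `currentObservers` snapshot).
    for (const observer of this.observers.slice()) observer(value);
  }
}
```

The snapshot-on-emit detail is the same trade-off the real source makes: mutation of the observer list only takes effect on the *next* emission.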
Subjects are like EventEmitters.\n *\n * Every Subject is an Observable and an Observer. You can subscribe to a\n * Subject, and you can call next to feed values as well as error and complete.\n */\nexport class Subject<T> extends Observable<T> implements SubscriptionLike {\n closed = false;\n\n private currentObservers: Observer<T>[] | null = null;\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n observers: Observer<T>[] = [];\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n isStopped = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n hasError = false;\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n thrownError: any = null;\n\n /**\n * Creates a \"subject\" by basically gluing an observer to an observable.\n *\n * @nocollapse\n * @deprecated Recommended you do not use. Will be removed at some point in the future. Plans for replacement still under discussion.\n */\n static create: (...args: any[]) => any = <T>(destination: Observer<T>, source: Observable<T>): AnonymousSubject<T> => {\n return new AnonymousSubject<T>(destination, source);\n };\n\n constructor() {\n // NOTE: This must be here to obscure Observable's constructor.\n super();\n }\n\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. 
*/\n lift(operator: Operator): Observable {\n const subject = new AnonymousSubject(this, this);\n subject.operator = operator as any;\n return subject as any;\n }\n\n /** @internal */\n protected _throwIfClosed() {\n if (this.closed) {\n throw new ObjectUnsubscribedError();\n }\n }\n\n next(value: T) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n if (!this.currentObservers) {\n this.currentObservers = Array.from(this.observers);\n }\n for (const observer of this.currentObservers) {\n observer.next(value);\n }\n }\n });\n }\n\n error(err: any) {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.hasError = this.isStopped = true;\n this.thrownError = err;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.error(err);\n }\n }\n });\n }\n\n complete() {\n errorContext(() => {\n this._throwIfClosed();\n if (!this.isStopped) {\n this.isStopped = true;\n const { observers } = this;\n while (observers.length) {\n observers.shift()!.complete();\n }\n }\n });\n }\n\n unsubscribe() {\n this.isStopped = this.closed = true;\n this.observers = this.currentObservers = null!;\n }\n\n get observed() {\n return this.observers?.length > 0;\n }\n\n /** @internal */\n protected _trySubscribe(subscriber: Subscriber): TeardownLogic {\n this._throwIfClosed();\n return super._trySubscribe(subscriber);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._checkFinalizedStatuses(subscriber);\n return this._innerSubscribe(subscriber);\n }\n\n /** @internal */\n protected _innerSubscribe(subscriber: Subscriber) {\n const { hasError, isStopped, observers } = this;\n if (hasError || isStopped) {\n return EMPTY_SUBSCRIPTION;\n }\n this.currentObservers = null;\n observers.push(subscriber);\n return new Subscription(() => {\n this.currentObservers = null;\n arrRemove(observers, subscriber);\n });\n }\n\n /** @internal */\n protected 
_checkFinalizedStatuses(subscriber: Subscriber) {\n const { hasError, thrownError, isStopped } = this;\n if (hasError) {\n subscriber.error(thrownError);\n } else if (isStopped) {\n subscriber.complete();\n }\n }\n\n /**\n * Creates a new Observable with this Subject as the source. You can do this\n * to create custom Observer-side logic of the Subject and conceal it from\n * code that uses the Observable.\n * @return {Observable} Observable that the Subject casts to\n */\n asObservable(): Observable {\n const observable: any = new Observable();\n observable.source = this;\n return observable;\n }\n}\n\n/**\n * @class AnonymousSubject\n */\nexport class AnonymousSubject extends Subject {\n constructor(\n /** @deprecated Internal implementation detail, do not use directly. Will be made internal in v8. */\n public destination?: Observer,\n source?: Observable\n ) {\n super();\n this.source = source;\n }\n\n next(value: T) {\n this.destination?.next?.(value);\n }\n\n error(err: any) {\n this.destination?.error?.(err);\n }\n\n complete() {\n this.destination?.complete?.();\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n return this.source?.subscribe(subscriber) ?? 
EMPTY_SUBSCRIPTION;\n }\n}\n", "import { TimestampProvider } from '../types';\n\ninterface DateTimestampProvider extends TimestampProvider {\n delegate: TimestampProvider | undefined;\n}\n\nexport const dateTimestampProvider: DateTimestampProvider = {\n now() {\n // Use the variable rather than `this` so that the function can be called\n // without being bound to the provider.\n return (dateTimestampProvider.delegate || Date).now();\n },\n delegate: undefined,\n};\n", "import { Subject } from './Subject';\nimport { TimestampProvider } from './types';\nimport { Subscriber } from './Subscriber';\nimport { Subscription } from './Subscription';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * A variant of {@link Subject} that \"replays\" old values to new subscribers by emitting them when they first subscribe.\n *\n * `ReplaySubject` has an internal buffer that will store a specified number of values that it has observed. Like `Subject`,\n * `ReplaySubject` \"observes\" values by having them passed to its `next` method. When it observes a value, it will store that\n * value for a time determined by the configuration of the `ReplaySubject`, as passed to its constructor.\n *\n * When a new subscriber subscribes to the `ReplaySubject` instance, it will synchronously emit all values in its buffer in\n * a First-In-First-Out (FIFO) manner. The `ReplaySubject` will also complete, if it has observed completion; and it will\n * error if it has observed an error.\n *\n * There are two main configuration items to be concerned with:\n *\n * 1. `bufferSize` - This will determine how many items are stored in the buffer, defaults to infinite.\n * 2. `windowTime` - The amount of time to hold a value in the buffer before removing it from the buffer.\n *\n * Both configurations may exist simultaneously. 
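`ReplaySubject` above stores values and, when a finite `windowTime` is set, their expiry timestamps interleaved in one flat array, then trims by size (two slots per value) and by age. A sketch of just that trimming scheme (a hypothetical `ReplayBuffer` with an injectable clock standing in for `timestampProvider`; not the real class):

```typescript
// Flat buffer of [value, expiry, value, expiry, ...] pairs, as in the source
// above. Illustrative sketch only, not RxJS's ReplaySubject.
class ReplayBuffer<T> {
  private buffer: (T | number)[] = [];

  constructor(
    private bufferSize: number,
    private windowTime: number,
    private now: () => number // injectable clock, like timestampProvider
  ) {}

  push(value: T): void {
    this.buffer.push(value, this.now() + this.windowTime);
    this.trim();
  }

  values(): T[] {
    this.trim();
    const out: T[] = [];
    // Step by 2: even slots hold values, odd slots hold expiry timestamps.
    for (let i = 0; i < this.buffer.length; i += 2) out.push(this.buffer[i] as T);
    return out;
  }

  private trim(): void {
    // Size trim: each stored value occupies two slots.
    const max = 2 * this.bufferSize;
    if (this.buffer.length > max) this.buffer.splice(0, this.buffer.length - max);
    // Age trim: drop leading pairs whose expiry time has passed.
    const t = this.now();
    let last = 0;
    for (let i = 1; i < this.buffer.length && (this.buffer[i] as number) <= t; i += 2) {
      last = i;
    }
    if (last) this.buffer.splice(0, last + 1);
  }
}
```

Interleaving keeps one allocation per subject instead of two parallel arrays; the cost is the `i += 2` stride everywhere, which the real `_trimBuffer` pays in the same way.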
So if you would like to buffer a maximum of 3 values, as long as the values\n * are less than 2 seconds old, you could do so with a `new ReplaySubject(3, 2000)`.\n *\n * ### Differences with BehaviorSubject\n *\n * `BehaviorSubject` is similar to `new ReplaySubject(1)`, with a couple of exceptions:\n *\n * 1. `BehaviorSubject` comes \"primed\" with a single value upon construction.\n * 2. `ReplaySubject` will replay values, even after observing an error, where `BehaviorSubject` will not.\n *\n * @see {@link Subject}\n * @see {@link BehaviorSubject}\n * @see {@link shareReplay}\n */\nexport class ReplaySubject extends Subject {\n private _buffer: (T | number)[] = [];\n private _infiniteTimeWindow = true;\n\n /**\n * @param bufferSize The size of the buffer to replay on subscription\n * @param windowTime The amount of time the buffered items will stay buffered\n * @param timestampProvider An object with a `now()` method that provides the current timestamp. This is used to\n * calculate the amount of time something has been buffered.\n */\n constructor(\n private _bufferSize = Infinity,\n private _windowTime = Infinity,\n private _timestampProvider: TimestampProvider = dateTimestampProvider\n ) {\n super();\n this._infiniteTimeWindow = _windowTime === Infinity;\n this._bufferSize = Math.max(1, _bufferSize);\n this._windowTime = Math.max(1, _windowTime);\n }\n\n next(value: T): void {\n const { isStopped, _buffer, _infiniteTimeWindow, _timestampProvider, _windowTime } = this;\n if (!isStopped) {\n _buffer.push(value);\n !_infiniteTimeWindow && _buffer.push(_timestampProvider.now() + _windowTime);\n }\n this._trimBuffer();\n super.next(value);\n }\n\n /** @internal */\n protected _subscribe(subscriber: Subscriber): Subscription {\n this._throwIfClosed();\n this._trimBuffer();\n\n const subscription = this._innerSubscribe(subscriber);\n\n const { _infiniteTimeWindow, _buffer } = this;\n // We use a copy here, so reentrant code does not mutate our array while we're\n // 
emitting it to a new subscriber.\n const copy = _buffer.slice();\n for (let i = 0; i < copy.length && !subscriber.closed; i += _infiniteTimeWindow ? 1 : 2) {\n subscriber.next(copy[i] as T);\n }\n\n this._checkFinalizedStatuses(subscriber);\n\n return subscription;\n }\n\n private _trimBuffer() {\n const { _bufferSize, _timestampProvider, _buffer, _infiniteTimeWindow } = this;\n // If we don't have an infinite buffer size, and we're over the length,\n // use splice to truncate the old buffer values off. Note that we have to\n // double the size for instances where we're not using an infinite time window\n // because we're storing the values and the timestamps in the same array.\n const adjustedBufferSize = (_infiniteTimeWindow ? 1 : 2) * _bufferSize;\n _bufferSize < Infinity && adjustedBufferSize < _buffer.length && _buffer.splice(0, _buffer.length - adjustedBufferSize);\n\n // Now, if we're not in an infinite time window, remove all values where the time is\n // older than what is allowed.\n if (!_infiniteTimeWindow) {\n const now = _timestampProvider.now();\n let last = 0;\n // Search the array for the first timestamp that isn't expired and\n // truncate the buffer up to that point.\n for (let i = 1; i < _buffer.length && (_buffer[i] as number) <= now; i += 2) {\n last = i;\n }\n last && _buffer.splice(0, last + 1);\n }\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Subscription } from '../Subscription';\nimport { SchedulerAction } from '../types';\n\n/**\n * A unit of work to be executed in a `scheduler`. 
An action is typically\n * created from within a {@link SchedulerLike} and an RxJS user does not need to concern\n * themselves about creating and manipulating an Action.\n *\n * ```ts\n * class Action extends Subscription {\n * new (scheduler: Scheduler, work: (state?: T) => void);\n * schedule(state?: T, delay: number = 0): Subscription;\n * }\n * ```\n *\n * @class Action\n */\nexport class Action extends Subscription {\n constructor(scheduler: Scheduler, work: (this: SchedulerAction, state?: T) => void) {\n super();\n }\n /**\n * Schedules this action on its parent {@link SchedulerLike} for execution. May be passed\n * some context object, `state`. May happen at some point in the future,\n * according to the `delay` parameter, if specified.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler.\n * @return {void}\n */\n public schedule(state?: T, delay: number = 0): Subscription {\n return this;\n }\n}\n", "import type { TimerHandle } from './timerHandle';\ntype SetIntervalFunction = (handler: () => void, timeout?: number, ...args: any[]) => TimerHandle;\ntype ClearIntervalFunction = (handle: TimerHandle) => void;\n\ninterface IntervalProvider {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n delegate:\n | {\n setInterval: SetIntervalFunction;\n clearInterval: ClearIntervalFunction;\n }\n | undefined;\n}\n\nexport const intervalProvider: IntervalProvider = {\n // When accessing the delegate, use the variable rather than `this` so that\n // the functions can be called without being bound to the provider.\n setInterval(handler: () => void, timeout?: number, ...args) {\n const { delegate } = intervalProvider;\n if (delegate?.setInterval) {\n return delegate.setInterval(handler, timeout, ...args);\n }\n return setInterval(handler, timeout, ...args);\n 
},\n clearInterval(handle) {\n const { delegate } = intervalProvider;\n return (delegate?.clearInterval || clearInterval)(handle as any);\n },\n delegate: undefined,\n};\n", "import { Action } from './Action';\nimport { SchedulerAction } from '../types';\nimport { Subscription } from '../Subscription';\nimport { AsyncScheduler } from './AsyncScheduler';\nimport { intervalProvider } from './intervalProvider';\nimport { arrRemove } from '../util/arrRemove';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncAction extends Action {\n public id: TimerHandle | undefined;\n public state?: T;\n // @ts-ignore: Property has no initializer and is not definitely assigned\n public delay: number;\n protected pending: boolean = false;\n\n constructor(protected scheduler: AsyncScheduler, protected work: (this: SchedulerAction, state?: T) => void) {\n super(scheduler, work);\n }\n\n public schedule(state?: T, delay: number = 0): Subscription {\n if (this.closed) {\n return this;\n }\n\n // Always replace the current state with the new state.\n this.state = state;\n\n const id = this.id;\n const scheduler = this.scheduler;\n\n //\n // Important implementation note:\n //\n // Actions only execute once by default, unless rescheduled from within the\n // scheduled callback. This allows us to implement single and repeat\n // actions via the same code path, without adding API surface area, as well\n // as mimic traditional recursion but across asynchronous boundaries.\n //\n // However, JS runtimes and timers distinguish between intervals achieved by\n // serial `setTimeout` calls vs. a single `setInterval` call. An interval of\n // serial `setTimeout` calls can be individually delayed, which delays\n // scheduling the next `setTimeout`, and so on. 
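`intervalProvider` above hides the native timer functions behind a swappable `delegate`, so a test harness can substitute fakes, and it reads the module-level variable rather than `this` so its methods work when detached. The pattern in isolation, as a sketch (a hypothetical `nowProvider` over `Date.now`, chosen so the example stays synchronous):

```typescript
// Provider with an optional delegate. Call sites read the module-level
// variable, not `this`, so the method still works when passed around
// detached. Sketch of the pattern only, not an RxJS export.
interface NowProvider {
  now(): number;
  delegate: { now(): number } | undefined;
}

const nowProvider: NowProvider = {
  now() {
    // Prefer the delegate when installed (e.g. a virtual clock in tests),
    // otherwise fall back to the native Date.now.
    const { delegate } = nowProvider;
    return (delegate?.now ?? Date.now)();
  },
  delegate: undefined,
};
```

This is how the real providers let `TestScheduler` take over time: production code never imports a timer directly, only the provider, and tests install a delegate.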
`setInterval` attempts to\n // guarantee the interval callback will be invoked more precisely to the\n // interval period, regardless of load.\n //\n // Therefore, we use `setInterval` to schedule single and repeat actions.\n // If the action reschedules itself with the same delay, the interval is not\n // canceled. If the action doesn't reschedule, or reschedules with a\n // different delay, the interval will be canceled after scheduled callback\n // execution.\n //\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, delay);\n }\n\n // Set the pending flag indicating that this action has been scheduled, or\n // has recursively rescheduled itself.\n this.pending = true;\n\n this.delay = delay;\n // If this action has already an async Id, don't request a new one.\n this.id = this.id ?? this.requestAsyncId(scheduler, this.id, delay);\n\n return this;\n }\n\n protected requestAsyncId(scheduler: AsyncScheduler, _id?: TimerHandle, delay: number = 0): TimerHandle {\n return intervalProvider.setInterval(scheduler.flush.bind(scheduler, this), delay);\n }\n\n protected recycleAsyncId(_scheduler: AsyncScheduler, id?: TimerHandle, delay: number | null = 0): TimerHandle | undefined {\n // If this action is rescheduled with the same delay time, don't clear the interval id.\n if (delay != null && this.delay === delay && this.pending === false) {\n return id;\n }\n // Otherwise, if the action's delay time is different from the current delay,\n // or the action has been rescheduled before it's executed, clear the interval id\n if (id != null) {\n intervalProvider.clearInterval(id);\n }\n\n return undefined;\n }\n\n /**\n * Immediately executes this action and the `work` it contains.\n * @return {any}\n */\n public execute(state: T, delay: number): any {\n if (this.closed) {\n return new Error('executing a cancelled action');\n }\n\n this.pending = false;\n const error = this._execute(state, delay);\n if (error) {\n return error;\n } else if (this.pending === false 
&& this.id != null) {\n // Dequeue if the action didn't reschedule itself. Don't call\n // unsubscribe(), because the action could reschedule later.\n // For example:\n // ```\n // scheduler.schedule(function doWork(counter) {\n // /* ... I'm a busy worker bee ... */\n // var originalAction = this;\n // /* wait 100ms before rescheduling the action */\n // setTimeout(function () {\n // originalAction.schedule(counter + 1);\n // }, 100);\n // }, 1000);\n // ```\n this.id = this.recycleAsyncId(this.scheduler, this.id, null);\n }\n }\n\n protected _execute(state: T, _delay: number): any {\n let errored: boolean = false;\n let errorValue: any;\n try {\n this.work(state);\n } catch (e) {\n errored = true;\n // HACK: Since code elsewhere is relying on the \"truthiness\" of the\n // return here, we can't have it return \"\" or 0 or false.\n // TODO: Clean this up when we refactor schedulers mid-version-8 or so.\n errorValue = e ? e : new Error('Scheduled action threw falsy error');\n }\n if (errored) {\n this.unsubscribe();\n return errorValue;\n }\n }\n\n unsubscribe() {\n if (!this.closed) {\n const { id, scheduler } = this;\n const { actions } = scheduler;\n\n this.work = this.state = this.scheduler = null!;\n this.pending = false;\n\n arrRemove(actions, this);\n if (id != null) {\n this.id = this.recycleAsyncId(scheduler, id, null);\n }\n\n this.delay = null!;\n super.unsubscribe();\n }\n }\n}\n", "import { Action } from './scheduler/Action';\nimport { Subscription } from './Subscription';\nimport { SchedulerLike, SchedulerAction } from './types';\nimport { dateTimestampProvider } from './scheduler/dateTimestampProvider';\n\n/**\n * An execution context and a data structure to order tasks and schedule their\n * execution. 
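The `Scheduler` documented next is "an execution context ... with a notion of (potentially virtual) time" exposed through `now()`. To show why injectable time matters, here is a hedged sketch of a hypothetical `VirtualClockScheduler` that runs queued work in due-time order without waiting (not RxJS's `Scheduler`/`AsyncScheduler`, which are asynchronous and action-based):

```typescript
// Toy virtual-time scheduler: work is queued with a due time and flush()
// drains it synchronously in due-time order, advancing the virtual clock.
interface Work {
  due: number;
  run: () => void;
}

class VirtualClockScheduler {
  private queue: Work[] = [];
  private time = 0;

  now(): number {
    return this.time; // virtual time, not wall-clock
  }

  schedule(run: () => void, delay = 0): void {
    this.queue.push({ due: this.time + delay, run });
  }

  flush(): void {
    while (this.queue.length) {
      // Re-sort each step so work scheduled *during* a run is ordered too.
      this.queue.sort((a, b) => a.due - b.due);
      const next = this.queue.shift()!;
      this.time = Math.max(this.time, next.due);
      next.run();
    }
  }
}
```

Because `now()` is the scheduler's own clock, "two seconds later" becomes a deterministic queue position rather than a real wait, which is the property RxJS's `TestScheduler` builds on.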
Provides a notion of (potentially virtual) time, through the\n * `now()` getter method.\n *\n * Each unit of work in a Scheduler is called an `Action`.\n *\n * ```ts\n * class Scheduler {\n * now(): number;\n * schedule(work, delay?, state?): Subscription;\n * }\n * ```\n *\n * @class Scheduler\n * @deprecated Scheduler is an internal implementation detail of RxJS, and\n * should not be used directly. Rather, create your own class and implement\n * {@link SchedulerLike}. Will be made internal in v8.\n */\nexport class Scheduler implements SchedulerLike {\n public static now: () => number = dateTimestampProvider.now;\n\n constructor(private schedulerActionCtor: typeof Action, now: () => number = Scheduler.now) {\n this.now = now;\n }\n\n /**\n * A getter method that returns a number representing the current time\n * (at the time this function was called) according to the scheduler's own\n * internal clock.\n * @return {number} A number that represents the current time. May or may not\n * have a relation to wall-clock time. May or may not refer to a time unit\n * (e.g. milliseconds).\n */\n public now: () => number;\n\n /**\n * Schedules a function, `work`, for execution. May happen at some point in\n * the future, according to the `delay` parameter, if specified. 
May be passed\n * some context object, `state`, which will be passed to the `work` function.\n *\n * The given arguments will be processed an stored as an Action object in a\n * queue of actions.\n *\n * @param {function(state: ?T): ?Subscription} work A function representing a\n * task, or some unit of work to be executed by the Scheduler.\n * @param {number} [delay] Time to wait before executing the work, where the\n * time unit is implicit and defined by the Scheduler itself.\n * @param {T} [state] Some contextual data that the `work` function uses when\n * called by the Scheduler.\n * @return {Subscription} A subscription in order to be able to unsubscribe\n * the scheduled work.\n */\n public schedule(work: (this: SchedulerAction, state?: T) => void, delay: number = 0, state?: T): Subscription {\n return new this.schedulerActionCtor(this, work).schedule(state, delay);\n }\n}\n", "import { Scheduler } from '../Scheduler';\nimport { Action } from './Action';\nimport { AsyncAction } from './AsyncAction';\nimport { TimerHandle } from './timerHandle';\n\nexport class AsyncScheduler extends Scheduler {\n public actions: Array> = [];\n /**\n * A flag to indicate whether the Scheduler is currently executing a batch of\n * queued actions.\n * @type {boolean}\n * @internal\n */\n public _active: boolean = false;\n /**\n * An internal ID used to track the latest asynchronous task such as those\n * coming from `setTimeout`, `setInterval`, `requestAnimationFrame`, and\n * others.\n * @type {any}\n * @internal\n */\n public _scheduled: TimerHandle | undefined;\n\n constructor(SchedulerAction: typeof Action, now: () => number = Scheduler.now) {\n super(SchedulerAction, now);\n }\n\n public flush(action: AsyncAction): void {\n const { actions } = this;\n\n if (this._active) {\n actions.push(action);\n return;\n }\n\n let error: any;\n this._active = true;\n\n do {\n if ((error = action.execute(action.state, action.delay))) {\n break;\n }\n } while ((action = 
actions.shift()!)); // exhaust the scheduler queue\n\n this._active = false;\n\n if (error) {\n while ((action = actions.shift()!)) {\n action.unsubscribe();\n }\n throw error;\n }\n }\n}\n", "import { AsyncAction } from './AsyncAction';\nimport { AsyncScheduler } from './AsyncScheduler';\n\n/**\n *\n * Async Scheduler\n *\n * Schedule task as if you used setTimeout(task, duration)\n *\n * `async` scheduler schedules tasks asynchronously, by putting them on the JavaScript\n * event loop queue. It is best used to delay tasks in time or to schedule tasks repeating\n * in intervals.\n *\n * If you just want to \"defer\" task, that is to perform it right after currently\n * executing synchronous code ends (commonly achieved by `setTimeout(deferredTask, 0)`),\n * better choice will be the {@link asapScheduler} scheduler.\n *\n * ## Examples\n * Use async scheduler to delay task\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * const task = () => console.log('it works!');\n *\n * asyncScheduler.schedule(task, 2000);\n *\n * // After 2 seconds logs:\n * // \"it works!\"\n * ```\n *\n * Use async scheduler to repeat task in intervals\n * ```ts\n * import { asyncScheduler } from 'rxjs';\n *\n * function task(state) {\n * console.log(state);\n * this.schedule(state + 1, 1000); // `this` references currently executing Action,\n * // which we reschedule with new state and delay\n * }\n *\n * asyncScheduler.schedule(task, 3000, 0);\n *\n * // Logs:\n * // 0 after 3s\n * // 1 after 4s\n * // 2 after 5s\n * // 3 after 6s\n * ```\n */\n\nexport const asyncScheduler = new AsyncScheduler(AsyncAction);\n\n/**\n * @deprecated Renamed to {@link asyncScheduler}. 

In this article we will use and modify the basic AWS-K3s Cluster.dev template to deploy the Prometheus monitoring stack to a cluster. As a result we will have a K3s cluster on AWS with a set of required controllers (Ingress, cert-manager, Argo CD) and the kube-prometheus stack installed. The code samples are available in the GitHub repository.

+

Requirements

+

OS

+

We need a client host running Ubuntu 20.04 to follow this guide without any customization.

+
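As a quick sanity check, the host's release can be confirmed from /etc/os-release (present on any modern Linux distribution):

```shell
# Print the distribution name and version; this guide assumes Ubuntu 20.04
grep -E '^(NAME|VERSION)=' /etc/os-release
```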

Docker

+

We should install Docker on the client host.

+
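A quick way to check whether Docker is already present before installing it (the docker.io package mentioned in the hint below is one of several install routes — see the official Docker documentation for the full procedure):

```shell
# Report the Docker version if installed, or hint at an install command otherwise
if command -v docker >/dev/null 2>&1; then
  docker --version
else
  echo "Docker not found; install it, e.g.: sudo apt-get install -y docker.io"
fi
```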

AWS account

+
    +
  • +

Log in to an existing AWS account or register a new one.

    +
  • +
  • +

Select an AWS region in which to deploy the cluster.

    +
  • +
  • +

Add a programmatic access key for a new or existing user. Note that it should be an IAM user with administrative permissions.

    +
  • +
  • +

Open a Bash terminal on the client host.

    +
  • +
  • +

Download the example environment file env to set our AWS credentials:

    +
        curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/env > env
    +
    +
  • +
  • +

    Add the programmatic access key to the environment file env:

    +
        editor env
    +
    +
  • +
+
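After editing, the env file (passed to docker run via --env-file) should contain the AWS credential variables, roughly like this — the values are placeholders:

```
AWS_ACCESS_KEY_ID=MYACCESSKEY
AWS_SECRET_ACCESS_KEY=MYSECRETKEY
```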

Create and deploy the project

+

Get example code

+
mkdir -p cdev && mv env cdev/ && cd cdev && chmod 777 ./
+alias cdev='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env clusterdev/cluster.dev:v0.6.3'
+cdev project create https://github.com/shalb/cdev-aws-k3s?ref=v0.3.0
+curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/stack.yaml > stack.yaml
+curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/project.yaml > project.yaml
+curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/monitoring.yaml > monitoring.yaml
+
+

Create S3 bucket to store the project state

+

Go to AWS S3 and create a new bucket. Replace the value of the state_bucket_name key in the project.yaml config file with the name of the created bucket:

+
editor project.yaml
+
+

Customize project settings

+

We shall define all the settings needed for our project in the project.yaml config file, customizing every variable that has an # example comment at the end of the line.

+

Select AWS region

+

We should replace the value of the region key in the project.yaml config file with our region.

+

Set unique cluster name

+

By default we shall use the cluster.dev domain as the root domain for cluster ingresses. We should replace the value of the cluster_name key in the project.yaml config file with a unique string, because the default ingress will use it in the resulting DNS name.

+

These commands may help us generate a random name and check whether it is already in use:

+
CLUSTER_NAME=$(echo "$(tr -dc a-z0-9 </dev/urandom | head -c 5)") 
+dig argocd.${CLUSTER_NAME}.cluster.dev | grep -q "^${CLUSTER_NAME}" || echo "OK to use cluster_name: ${CLUSTER_NAME}"
+
+

If the cluster name is available, we should see the message OK to use cluster_name: ...

+

Set SSH keys

+

We should have SSH access to the cluster nodes. To add an existing SSH key, we should replace the value of the public_key key in the project.yaml config file. If we have no SSH key, we should create one.

+
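If no key exists yet, one can be generated with ssh-keygen (the file path and comment below are illustrative — any key pair works):

```shell
# Generate a passphrase-less ed25519 key pair for cluster node access
mkdir -p ~/.ssh
ssh-keygen -t ed25519 -N "" -C "cdev-k3s" -f ~/.ssh/cdev_k3s
# The contents of the .pub file become the public_key value in project.yaml
cat ~/.ssh/cdev_k3s.pub
```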

Set Argo CD password

+

In our project we shall use Argo CD to deploy our applications to the cluster. To secure Argo CD we should replace the value of the argocd_server_admin_password key in the project.yaml config file with a unique password. The default value is a bcrypt-hashed password string.

+

To hash our custom password we may use an online tool, or hash it with the commands:

+
alias cdev_bash='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env --network=host --entrypoint="" clusterdev/cluster.dev:v0.6.3 bash'
+cdev_bash
+password=$(tr -dc a-zA-Z0-9,._! </dev/urandom | head -c 20)
+apt install -y apache2-utils && htpasswd -bnBC 10 "" ${password} | tr -d ':\n' ; echo ''
+echo "Password: $password"
+exit
+
+

Set Grafana password

+

Now we are going to add a custom password for Grafana. To secure Grafana we should replace the value of the grafana_password key in the project.yaml config file with a unique password. This command may help us generate a random password:

+
echo "$(tr -dc a-zA-Z0-9,._! </dev/urandom | head -c 20)"
+
+

Run Bash in Cluster.dev container

+

To avoid installing all the needed tools directly on the client host, we will run all commands inside the Cluster.dev container. To execute Bash inside the container and proceed with the deployment, run the command:

+
cdev_bash
+
+

Deploy the project

+

Now we should deploy our project to AWS via the cdev command:

+
cdev apply -l debug | tee apply.log
+
+

On successful deployment we should get further instructions on how to access Kubernetes, along with the URLs of the Argo CD and Grafana web UIs. Because of DNS propagation delays we may need to wait some time before those web UIs become accessible. In that case we can forward the needed services to the client host via kubectl:

+
kubectl port-forward svc/argocd-server -n argocd 18080:443  > /dev/null 2>&1 &
+kubectl port-forward svc/monitoring-grafana -n monitoring 28080:80  > /dev/null 2>&1 &
+
+

We may test our forwards via curl:

+
curl 127.0.0.1:18080
+curl 127.0.0.1:28080
+
+

If curl returns no errors, the client host can access these endpoints via any browser.

+

Destroy the project

+

We can delete our cluster with the commands:

+
cdev apply -l debug
+cdev destroy -l debug | tee destroy.log
+
+

Conclusion

+

In this article we have learned how to deploy the Prometheus monitoring stack with the Cluster.dev AWS-K3s template. The resulting stack allows us to monitor workloads in our cluster. We can also reuse the stack as a prepared infrastructure pattern to launch environments for testing monitoring cases before applying them to production.

+ + + + + + + + + + + + + + + + + + + + +
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-aws-k3s/index.html b/examples-aws-k3s/index.html new file mode 100644 index 00000000..2791bb92 --- /dev/null +++ b/examples-aws-k3s/index.html @@ -0,0 +1,2053 @@ + + + + + + + + + + + + + + + + + + + + + + + + + AWS-K3s - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

AWS-K3s

+

Cluster.dev uses stack templates to generate users' projects in a desired cloud. AWS-K3s is a stack template that creates and provisions Kubernetes clusters in AWS cloud by means of the k3s utility.

+

On this page you will find guidance on how to create a K3s cluster on AWS using one of the Cluster.dev prepared samples – the AWS-K3s stack template. Running the example code will create the following resources:

+
    +
  • +

    K3s cluster with addons:

    +
      +
    • +

      cert-manager

      +
    • +
    • +

      ingress-nginx

      +
    • +
    • +

      external-dns

      +
    • +
    • +

      argocd

      +
    • +
    +
  • +
  • +

AWS Key Pair to access the cluster's running instances

    +
  • +
  • +

AWS IAM policy for managing your DNS zone with external-dns

    +
  • +
  • +

    (optional, if you use cluster.dev domain) Route53 zone .cluster.dev

    +
  • +
  • +

(optional, if vpc_id is not set) VPC for the K3s cluster

    +
  • +
+

Prerequisites

+
    +
  1. +

    Terraform version 1.4+

    +
  2. +
  3. +

    AWS account

    +
  4. +
  5. +

    AWS CLI installed

    +
  6. +
  7. +

    kubectl installed

    +
  8. +
  9. +

    Cluster.dev client installed

    +
  10. +
+

Authentication

+

Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:

+
+

Info

+

Please note that you have to use an IAM user with administrative permissions.

+
+
    +
  • +

Environment variables: provide your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, which represent your AWS access key and secret key. You can also use the AWS_DEFAULT_REGION or AWS_REGION environment variable to set the region, if needed. Example usage:

    +
    export AWS_ACCESS_KEY_ID="MYACCESSKEY"
    +export AWS_SECRET_ACCESS_KEY="MYSECRETKEY"
    +export AWS_DEFAULT_REGION="eu-central-1"
    +
    +
  • +
  • +

    Shared Credentials File (recommended): set up an AWS configuration file to specify your credentials.

    +

    Credentials file ~/.aws/credentials example:

    +
    [cluster-dev]
    +aws_access_key_id = MYACCESSKEY
    +aws_secret_access_key = MYSECRETKEY
    +
    +

    Config: ~/.aws/config example:

    +
    [profile cluster-dev]
    +region = eu-central-1
    +
    +

Then export the AWS_PROFILE environment variable.

    +
    export AWS_PROFILE=cluster-dev
    +
    +
  • +
+

Install AWS client

+

If you don't have the AWS CLI installed, refer to the official AWS CLI installation guide, or use the commands from the example:

+
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
+unzip awscliv2.zip
+sudo ./aws/install
+aws s3 ls
+
+

Create S3 bucket

+

Cluster.dev uses an S3 bucket to store states. Create the bucket with the command:

+
aws s3 mb s3://cdev-states
+
+

DNS Zone

+

In the AWS-K3s stack template example you need to define a Route 53 hosted zone. Choose one of the following options:

+
    +
  1. +

    You already have a Route 53 hosted zone.

    +
  2. +
  3. +

    Create a new hosted zone using a Route 53 documentation example.

    +
  4. +
  5. +

    Use "cluster.dev" domain for zone delegation.

    +
  6. +
+

Create project

+
    +
  1. +

    Configure access to AWS and export required variables.

    +
  2. +
  3. +

Create a project directory locally, cd into it, and execute the command:

    +

      cdev project create https://github.com/shalb/cdev-aws-k3s
    +
    +This will create a new empty project.

    +
  4. +
  5. +

    Edit variables in the example's files, if necessary:

    +
      +
    • +

project.yaml - main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See project configuration docs.

      +
    • +
    • +

      backend.yaml - configures backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See backend docs.

      +
    • +
    • +

      stack.yaml - describes stack configuration. See stack docs.

      +
    • +
    +
  6. +
  7. +

Run cdev plan to build the project. In the output you will see the infrastructure that will be created after running cdev apply.

    +
    +

    Note

    +

Prior to running cdev apply make sure to look through the stack.yaml file and replace the commented fields with real values. In case you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

    +
    +
  8. +
  9. +

    Run cdev apply

    +
    +

    Tip

    +

We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

    +
    +
  10. +
  11. +

After cdev apply is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypt-hashed password that you have generated for stack.yaml.

    +
  12. +
  13. +

The output will also display a command for getting the kubeconfig and connecting to your Kubernetes cluster.

    +
  14. +
  15. +

Destroy the cluster and all created resources with the command cdev destroy.

    +
  16. +
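For reference, the backend.yaml mentioned above typically declares an S3 backend along these lines — the field values are illustrative and pull from project.yaml; check the backend docs for the exact schema:

```yaml
# Illustrative backend configuration for storing Cluster.dev and Terraform states in S3
name: aws-backend
kind: backend
provider: s3
spec:
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}
```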
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-develop-stack-template/index.html b/examples-develop-stack-template/index.html new file mode 100644 index 00000000..55b22fd8 --- /dev/null +++ b/examples-develop-stack-template/index.html @@ -0,0 +1,1818 @@ + + + + + + + + + + + + + + + + + + + + + Develop Stack Template - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Develop Stack Template

+

Cluster.dev gives you freedom to modify existing templates or create your own. You can add inputs and outputs to already preset units, take the output of one unit and send it as an input to another, or write new units and add them to a template.

+

In our example we shall use the tmpl-development sample, which creates an S3 bucket and uploads an object to it on AWS. Then we shall modify the project stack template by adding new parameters to the units.

+

Workflow steps

+
    +
  1. +

    Create a project following the steps described in Create Own Project section.

    +
  2. +
  3. +

    To start working with the stack template, cd into the template directory and open the template.yaml file: ./template/template.yaml.

    +

Our sample stack template contains three units. Let's elaborate on each of them and see how we can modify them.

    +
  4. +
  5. +

    The create-bucket unit uses a remote Terraform module to create an S3 bucket on AWS:

    +
    name: create-bucket
    +type: tfmodule
    +providers: *provider_aws
    +source: terraform-aws-modules/s3-bucket/aws
    +version: "2.9.0"
    +inputs:
    +  bucket: {{ .variables.bucket_name }}
    +  force_destroy: true
    +
    +

    We can modify the unit by adding more parameters in inputs. For example, let's add some tags using the insertYAML function:

    +
    name: create-bucket
    +type: tfmodule
    +providers: *provider_aws
    +source: terraform-aws-modules/s3-bucket/aws
    +version: "2.9.0"
    +inputs:
    +  bucket: {{ .variables.bucket_name }}
    +  force_destroy: true
    +  tags: {{ insertYAML .variables.tags }}
    +
    +

    Now we can see the tags in stack.yaml:

    +
    name: cdev-tests-local
    +template: ./template/
    +kind: Stack
    +backend: aws-backend
    +variables:
    +  bucket_name: "tmpl-dev-test"
    +  region: {{ .project.variables.region }}
    +  organization: {{ .project.variables.organization }}
    +  name: "Developer"
    +  tags:
    +    tag1_name: "tag 1 value"
    +    tag2_name: "tag 2 value"
    +
    +

To check the configuration, run the cdev plan --tf-plan command. In the output you can see that Terraform will create a bucket with the defined tags. Run cdev apply -l debug to have the configuration applied.

    +
  6. +
  7. +

The create-s3-object unit uses a local Terraform module to get the bucket ID and save data inside the bucket. The Terraform module is stored in the s3-file directory, main.tf file:

    +
    name: create-s3-object
    +type: tfmodule
    +providers: *provider_aws
    +source: ./s3-file/
    +depends_on: this.create-bucket
    +inputs:
    +  bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
    +  data: |
    +    The data that will be saved in the S3 bucket after being processed by the template engine.
    +    Organization: {{ .variables.organization }}
    +    Name: {{ .variables.name }}
    +
    +

The unit sends two parameters. The bucket_name is retrieved from the create-bucket unit by means of the remoteState function. The data parameter uses templating to retrieve the Organization and Name variables from stack.yaml.

    +

Let's add the bucket_regional_domain_name variable to the data input to obtain the region-specific domain name of the bucket:

    +
    name: create-s3-object
    +type: tfmodule
    +providers: *provider_aws
    +source: ./s3-file/
    +depends_on: this.create-bucket
    +inputs:
    +  bucket_name: {{ remoteState "this.create-bucket.s3_bucket_id" }}
    +  data: |
    +    The data that will be saved in the s3 bucket after being processed by the template engine.
    +    Organization: {{ .variables.organization }}
    +    Name: {{ .variables.name }}
    +    Bucket regional domain name: {{ remoteState "this.create-bucket.s3_bucket_bucket_regional_domain_name" }}
    +
    +

Check the configuration by running the cdev plan command; apply it with cdev apply -l debug.

    +
  8. +
  9. +

The print_outputs unit retrieves data from two other units by means of the remoteState function: the bucket_domain variable from the create-bucket unit and s3_file_info from the create-s3-object unit:

    +
    name: print_outputs
    +type: printer
    +inputs:
    +  bucket_domain: {{ remoteState "this.create-bucket.s3_bucket_bucket_domain_name" }}
    +  s3_file_info: "To get file use: aws s3 cp {{ remoteState "this.create-s3-object.file_s3_url" }} ./my_file && cat my_file"
    +
    +
  10. +
  11. +

    Having finished your work, run cdev destroy to eliminate the created resources.

    +
  12. +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-do-k8s/index.html b/examples-do-k8s/index.html new file mode 100644 index 00000000..602b994f --- /dev/null +++ b/examples-do-k8s/index.html @@ -0,0 +1,1992 @@ + + + + + + + + + + + + + + + + + + + + + + + + + DO-K8s - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

DO-K8s

+

Cluster.dev uses stack templates to generate users' projects in a desired cloud. DO-K8s is a stack template that creates and provisions Kubernetes clusters in the DigitalOcean cloud.

+

On this page you will find guidance on how to create a Kubernetes cluster on DigitalOcean using one of the Cluster.dev prepared samples – the DO-K8s stack template. Running the example code will create the following resources:

+
    +
  • +

    DO Kubernetes cluster with addons:

    +
      +
    • +

      cert-manager

      +
    • +
    • +

      argocd

      +
    • +
    +
  • +
  • +

    (optional, if vpc_id is not set) VPC for Kubernetes cluster

    +
  • +
+

Prerequisites

+
    +
  1. +

    Terraform version 1.4+

    +
  2. +
  3. +

    DigitalOcean account

    +
  4. +
  5. +

    doctl installed

    +
  6. +
  7. +

    Cluster.dev client installed

    +
  8. +
+

Authentication

+

Create an access token for a user.

+
+

Info

+

Make sure to grant the user administrative permissions.

+
+

For details on using a DO Spaces bucket as a backend, see here.

+

DO access configuration

+
    +
  1. +

    Install doctl. For more information, see the official documentation.

    +
    cd ~
    +wget https://github.com/digitalocean/doctl/releases/download/v1.57.0/doctl-1.57.0-linux-amd64.tar.gz
    +tar xf ~/doctl-1.57.0-linux-amd64.tar.gz
    +sudo mv ~/doctl /usr/local/bin
    +
    +
  2. +
  3. +

    Export your DIGITALOCEAN_TOKEN, for details see here.

    +
    export DIGITALOCEAN_TOKEN="MyDIGITALOCEANToken"
    +
    +
  4. +
  5. +

    Export SPACES_ACCESS_KEY_ID and SPACES_SECRET_ACCESS_KEY environment variables, for details see here.

    +
    export SPACES_ACCESS_KEY_ID="dSUGdbJqa6xwJ6Fo8qV2DSksdjh..."
    +export SPACES_SECRET_ACCESS_KEY="TEaKjdj8DSaJl7EnOdsa..."
    +
    +
  6. +
  7. +

Create a Spaces bucket for Terraform states in the chosen region (in the example we used the 'cdev-data' bucket name).

    +
  8. +
  9. +

    Create a domain in DigitalOcean domains service.

    +
  10. +
+
+

Info

+

In the default generated project we used the 'k8s.cluster.dev' zone as an example. Please make sure to change it.

+
+

Create project

+
    +
  1. +

    Configure access to DigitalOcean and export required variables.

    +
  2. +
  3. +

Create a project directory locally, cd into it, and execute the command:

    +

      cdev project create https://github.com/shalb/cdev-do-k8s
    +
    +This will create a new empty project.

    +
  4. +
  5. +

    Edit variables in the example's files, if necessary:

    +
      +
    • +

project.yaml - main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See project configuration docs.

      +
    • +
    • +

      backend.yaml - configures backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See backend docs.

      +
    • +
    • +

      stack.yaml - describes stack configuration. See stack docs.

      +
    • +
    +
  6. +
  7. +

Run cdev plan to build the project. In the output you will see the infrastructure that will be created after running cdev apply.

    +
    +

    Note

    +

Prior to running cdev apply make sure to look through the stack.yaml file and replace the commented fields with real values. In case you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

    +
    +
  8. +
  9. +

    Run cdev apply

    +
    +

    Tip

    +

We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

    +
    +
  10. +
  11. +

After cdev apply is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypt-hashed password that you have generated for stack.yaml.

    +
  12. +
  13. +

The output will also display a command for getting the kubeconfig and connecting to your Kubernetes cluster.

    +
  14. +
  15. +

Destroy the cluster and all created resources with the command cdev destroy.

    +
  16. +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-gcp-gke/index.html b/examples-gcp-gke/index.html new file mode 100644 index 00000000..88514985 --- /dev/null +++ b/examples-gcp-gke/index.html @@ -0,0 +1,1957 @@ + + + + + + + + + + + + + + + + + + + + + + + + + GCP-GKE - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

GCP-GKE

+

Cluster.dev uses stack templates to generate users' projects in a desired cloud. GCP-GKE is a stack template that creates and provisions Kubernetes clusters in GCP cloud by means of Google Kubernetes Engine (GKE).

+

On this page you will find guidance on how to create a GKE cluster on GCP using one of the Cluster.dev prepared samples – the GCP-GKE stack template. Running the example code will create the following resources:

+
    +
  • +

    VPC

    +
  • +
  • +

    GKE Kubernetes cluster with addons:

    +
      +
    • +

      cert-manager

      +
    • +
    • +

      ingress-nginx

      +
    • +
    • +

      external-secrets (with GCP Secret Manager backend)

      +
    • +
    • +

      external-dns

      +
    • +
    • +

      argocd

      +
    • +
    +
  • +
+

Prerequisites

+
    +
  1. Terraform version >= 1.4
  2. +
  3. GCP account and project
  4. +
  5. GCloud CLI installed and configured with your GCP account
  6. +
  7. kubectl installed
  8. +
  9. Cluster.dev client installed
  10. +
  11. Parent Domain
  12. +
+

Before you begin

+
    +
  1. +

    Create or select a Google Cloud project: +

    gcloud projects create cdev-demo
    +gcloud config set project cdev-demo
    +

    +
  2. +
  3. +

    Enable billing for your project.

    +
  4. +
  5. +

    Enable the Google Kubernetes Engine API.

    +
  6. +
  7. +

    Enable Secret Manager: +

    gcloud services enable secretmanager.googleapis.com
    +

    +
  8. +
+

Quick Start

+
    +
  1. Clone example project: +
    git clone https://github.com/shalb/cdev-gcp-gke.git
    +cd examples/
    +
  2. +
  3. Update project.yaml: +
    name: demo-project
    +kind: Project
    +backend: gcs-backend
    +variables:
    +  organization: my-organization
    +  project: cdev-demo
    +  region: us-west1
    +  state_bucket_name: gke-demo-state
    +  state_bucket_prefix: demo
    +
  4. +
  5. Create GCP bucket for Terraform backend: +
    gcloud projects create cdev-demo
    +gcloud config set project cdev-demo
    +gsutil mb gs://gke-demo-state
    +
  6. +
  7. Edit variables in the example's files, if necessary.
  8. +
  9. Run cdev plan
  10. +
  11. Run cdev apply
  12. +
  13. +

    Set up DNS delegation for the subdomain by creating NS records for it in the parent domain. Then run cdev output: +

    cdev output
    +12:58:52 [INFO] Printer: 'cluster.outputs', Output:
    +domain = demo.gcp.cluster.dev.
    +name_server = [
    +  "ns-cloud-d1.googledomains.com.",
    +  "ns-cloud-d2.googledomains.com.",
    +  "ns-cloud-d3.googledomains.com.",
    +  "ns-cloud-d4.googledomains.com."
    +]
    +region = us-west1
    +
    + Add the records from the name_server list.

    +
  14. +
  15. +

    Authorize cdev/Terraform to interact with GCP via SDK: +

    gcloud auth application-default login
    +

    +
  16. +
  17. Connect to GKE cluster: +
    gcloud components install gke-gcloud-auth-plugin
    +gcloud container clusters get-credentials demo-cluster --zone us-west1-a --project cdev-demo
    +
  18. +
  19. Retrieve the ArgoCD admin password and install the ArgoCD CLI: +
    kubectl -n argocd get secret argocd-initial-admin-secret  -o jsonpath="{.data.password}" | base64 -d; echo
    +
  20. +
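The DNS delegation step above can be scripted. A minimal shell sketch (assuming the `cdev output` format shown in the step) that extracts the name servers from the printed list, ready to be added as NS records in the parent zone:

```shell
# Parse the name_server list printed by `cdev output` (format assumed as above)
# into plain hostnames, one per line.
out='name_server = [
  "ns-cloud-d1.googledomains.com.",
  "ns-cloud-d2.googledomains.com."
]'
echo "$out" | grep -o '"[^"]*"' | tr -d '"'
```

In a real setup you would pipe the output of `cdev output` itself through the same filter.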
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-modify-aws-eks/index.html b/examples-modify-aws-eks/index.html new file mode 100644 index 00000000..af57d069 --- /dev/null +++ b/examples-modify-aws-eks/index.html @@ -0,0 +1,1967 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Modify AWS-EKS - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Modify AWS-EKS

+

The code and the text were prepared by Orest Kapko, a DevOps engineer at SHALB.

+

In this article we shall customize the basic AWS-EKS Cluster.dev template in order to add some features.

+

Workflow steps

+
    +
  1. +

    Go to the GitHub page via the AWS-EKS link and download the stack template.

    +
  2. +
  3. +

    If you are not planning to use some preset addons, edit aws-eks.yaml to exclude them. In our case, it was cert-manager, cert-manager-issuer, ingress-nginx, argocd, and argocd_apps.

    +
  4. +
  5. +

    In order to dynamically retrieve the AWS account ID parameter, we have added a data block to our stack template:

    +
      - name: data
    +    type: tfmodule
    +    providers: *provider_aws
    +    depends_on: this.eks
    +    source: ./terraform-submodules/data/
    +
    +
    {{ remoteState "this.data.account_id" }}
    +
    +

    The block is also used in eks_auth ConfigMap and expands its functionality with groups of users:

    +
      apiVersion: v1
    +  data:
    +    mapAccounts: |
    +      []
    +    mapRoles: |
    +      - "groups":
    +        - "system:bootstrappers"
    +        - "system:nodes"
    +        "rolearn": "{{ remoteState "this.eks.worker_iam_role_arn" }}"
    +        "username": "system:node:{{ "{{EC2PrivateDNSName}}" }}"
    +    - "groups":
    +      - "system:masters"
    +      "rolearn": "arn:aws:iam::{{ remoteState "this.data.account_id" }}:role/OrganizationAccountAccessRole"
    +      "username": "general-role"
    +    mapUsers: |
    +      - "groups":
    +        - "system:masters"
    +        "userarn": "arn:aws:iam::{{ remoteState "this.data.account_id" }}:user/jenkins-eks"
    +        "username": "jenkins-eks"
    +  kind: ConfigMap
    +  metadata:
    +    name: aws-auth
    +    namespace: kube-system
    +
    +

    The data block configuration in main.tf: data "aws_caller_identity" "current" {}

    +

    In output.tf:

    +

    output "account_id" {
    +  value = data.aws_caller_identity.current.account_id
    +}

    +
  6. +
  7. +

    As it was decided to use the Traefik Ingress controller instead of the basic Nginx one, we spun up two load balancers (the first an internet-facing ALB for public ingresses, the second an internal ALB for private ingresses) and the security groups necessary for their work, and described them in the albs unit. The unit configuration within the template:

    +
    {{- if .variables.ingressControllerEnabled }}
    +- name: albs
    +  type: tfmodule
    +  providers: *provider_aws
    +  source: ./terraform-submodules/albs/
    +  inputs:
    +    main_domain: {{ .variables.alb_main_domain }}
    +    main_external_domain: {{ .variables.alb_main_external_domain }}
    +    main_vpc: {{ .variables.vpc_id }}
    +    acm_external_certificate_arn: {{ .variables.alb_acm_external_certificate_arn }}
    +    private_subnets: {{ insertYAML .variables.private_subnets }}
    +    public_subnets: {{ insertYAML .variables.public_subnets }}
    +    environment: {{ .name }}
    +{{- end }}
    +
    +
  8. +
  9. +

    We have also created a dedicated unit for testing Ingress through Route 53 records:

    +
    data "aws_route53_zone" "existing" {
    +  name         = var.domain
    +  private_zone = var.private_zone
    +}
    +module "records" {
    +  source  = "terraform-aws-modules/route53/aws//modules/records"
    +  version = "~> 2.0"
    +  zone_id      = data.aws_route53_zone.existing.zone_id
    +  private_zone = var.private_zone
    +  records = [
    +    {
    +      name    = "test-ingress-eks"
    +      type    = "A"
    +      alias   = {
    +        name    = var.private_lb_dns_name
    +        zone_id = var.private_lb_zone_id
    +        evaluate_target_health = false
    +      }
    +    },
    +    {
    +      name    = "test-ingress-2-eks"
    +      type    = "A"
    +      alias   = {
    +        name    = var.private_lb_dns_name
    +        zone_id = var.private_lb_zone_id
    +        evaluate_target_health = false
    +      }
    +    }
    +  ]
    +}
    +
    +

    The unit configuration within the template:

    +
     {{- if .variables.ingressControllerRoute53Enabled }}
    + - name: route53_records
    +   type: tfmodule
    +   providers: *provider_aws
    +   source: ./terraform-submodules/route53_records/
    +   inputs:
    +     private_zone: {{ .variables.private_zone }}
    +     domain: {{ .variables.domain }}
    +     private_lb_dns_name: {{ remoteState "this.albs.eks_ingress_lb_dns_name" }}
    +     public_lb_dns_name: {{ remoteState "this.albs.eks_public_lb_dns_name" }}
    +     private_lb_zone_id: {{ remoteState "this.albs.eks_ingress_lb_zone_id" }}
    +{{- end }}
    +
    +
  10. +
  11. +

    Also, to map service accounts to AWS IAM roles we have created a separate template for IRSA. Example configuration for a cluster autoscaler:

    +
      kind: StackTemplate
    +  name: aws-eks
    +  units:
    +    {{- if .variables.cluster_autoscaler_irsa.enabled }}
    +    - name: iam_assumable_role_autoscaling_autoscaler
    +      type: tfmodule
    +      source: "terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc"
    +      version: "~> 3.0"
    +      providers: *provider_aws
    +      inputs:
    +        role_name: "eks-autoscaling-autoscaler-{{ .variables.cluster_name }}"
    +        create_role: true
    +        role_policy_arns:
    +          - {{ remoteState "this.iam_policy_autoscaling_autoscaler.arn" }}
    +        oidc_fully_qualified_subjects: {{ insertYAML .variables.cluster_autoscaler_irsa.subjects }}
    +        provider_url: {{ .variables.provider_url }}
    +    - name: iam_policy_autoscaling_autoscaler
    +      type: tfmodule
    +      source: "terraform-aws-modules/iam/aws//modules/iam-policy"
    +      version: "~> 3.0"
    +      providers: *provider_aws
    +      inputs:
    +        name: AllowAutoScalingAccessforClusterAutoScaler-{{ .variables.cluster_name }}
    +        policy: {{ insertYAML .variables.cluster_autoscaler_irsa.policy }}
    +    {{- end }}
    +
    +
  12. +
+
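As a side note, the `remoteState` substitution used throughout the steps above can be pictured with a simplified stand-in (the `account_id` value and the `sed` call are illustrative only; cdev performs this with Go templating internally):

```shell
# Hypothetical illustration: at render time the data unit's account_id output
# replaces the remoteState expression in the template.
account_id="123456789012"   # example value returned by aws_caller_identity
template='arn:aws:iam::{{ account_id }}:user/jenkins-eks'
echo "$template" | sed "s/{{ account_id }}/$account_id/"
```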

In our example we have modified the prepared AWS-EKS stack template by adding a customized data block and excluding some addons.

+

We have also changed the template's structure by placing the Examples directory into a separate repository, in order to decouple the abstract template from its implementation for concrete setups. This enabled us to use the template via Git and mark the template's version with Git tags.

+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/examples-overview/index.html b/examples-overview/index.html new file mode 100644 index 00000000..ec6d95d7 --- /dev/null +++ b/examples-overview/index.html @@ -0,0 +1,1792 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Overview

+

In the Examples section you will find ready-to-use Cluster.dev samples that will help you bootstrap cloud infrastructures. Running the sample code will get you a provisioned Kubernetes cluster with add-ons in the cloud. The available options include:

+ +

You will also find examples on how to customize the existing templates in order to expand their functionality:

+ +

Also, please check our Medium blog.

+ + + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/generators-overview/index.html b/generators-overview/index.html new file mode 100644 index 00000000..dcc5a32f --- /dev/null +++ b/generators-overview/index.html @@ -0,0 +1,1845 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Generators - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Overview

+

Generators are part of the Cluster.dev functionality. They enable users to create parts of infrastructure just by filling in stack variables in script dialogues, with no infrastructure coding required. This simplifies the creation of new stacks for developers who may lack Ops skills, and could be useful for quick infrastructure deployment from ready-made parts (units).

+

Generators create a project from a preset profile - a set of data predefined as a project, with variables for a stack template. Each template may have a profile for the generator, which is stored in the .cdev-metadata/generator directory.

+

How it works

+

The generator creates backend.yaml, project.yaml, and infra.yaml by populating the files with user-entered values. The stack variables it prompts for are listed in config.yaml under options:

+
  options:
+    - name: name
+      description: Project name
+      regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,32}$"
+      default: "demo-project"
+    - name: organization
+      description: Organization name
+      regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,64}$"
+      default: "my-organization"
+    - name: region
+      description: DigitalOcean region
+      regex: "^[a-zA-Z][a-zA-Z_0-9\\-]{0,32}$"
+      default: "ams3"
+    - name: domain
+      description: DigitalOcean DNS zone domain name
+      regex: "^[a-zA-Z0-9][a-zA-Z0-9-\\.]{1,61}[a-zA-Z0-9]\\.[a-zA-Z]{2,}$"
+      default: "cluster.dev"
+    - name: bucket_name
+      description: DigitalOcean spaces bucket name for states
+      regex: "^[a-zA-Z][a-zA-Z0-9\\-]{0,64}$"
+      default: "cdev-state"
+
+

In options you can define default parameters and add other variables to the generator's list. The variables included by default are project name, organization name, region, domain and bucket name.
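For instance, the regex checks can be reproduced with `grep -E` (the `validate` helper below is hypothetical, shown only to illustrate how an entered value is matched against an option's regex):

```shell
# Validate user input against an option regex, as the generator does.
validate() { echo "$2" | grep -Eq "$1" && echo valid || echo invalid; }
validate '^[a-zA-Z][a-zA-Z_0-9-]{0,32}$' 'demo-project'   # prints: valid
validate '^[a-zA-Z][a-zA-Z_0-9-]{0,32}$' '1bad-name'      # prints: invalid
```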

+

In config.yaml you can also define a help message text.

+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-cdev-aws/index.html b/get-started-cdev-aws/index.html new file mode 100644 index 00000000..19fcac93 --- /dev/null +++ b/get-started-cdev-aws/index.html @@ -0,0 +1,2301 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Quick Start on AWS - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Getting Started with Cluster.dev on AWS

+

This guide will walk you through the steps to deploy your first project with Cluster.dev on AWS.

+
                          +-------------------------+
+                          | Project.yaml            |
+                          |  - region               |
+                          +------------+------------+
+                                       |
+                                       |
+                          +------------v------------+
+                          | Stack.yaml              |
+                          |  - bucket_name          |
+                          |  - region               |
+                          |  - content              |
+                          +------------+------------+
+                                       |
+                                       |
++--------------------------------------v-----------------------------------------+
+| StackTemplate: s3-website                                                      |
+|                                                                                |
+|  +---------------------+     +-------------------------+     +--------------+  |
+|  | bucket              |     | web-page-object         |     | outputs      |  |
+|  | type: tfmodule      |     | type: tfmodule          |     | type: printer|  |
+|  | inputs:             |     | inputs:                 |     | outputs:     |  |
+|  |  bucket_name        |     | bucket (from bucket ID) |     | websiteUrl   |  |
+|  |  region             |     | content                 |     +--------------+  |
+|  |  website settings   |     |                         |             |         |
+|  +---------------------+     +-----------^-------------+             |         |
+|        |                          | bucket ID                        |         |
+|        |                          | via remoteState                  |         |
++--------|--------------------------|----------------------------------|---------+
+         |                          |                                  |
+         v                          v                                  v
+   AWS S3 Bucket              AWS S3 Object (index.html)       WebsiteUrl Output
+
+

Prerequisites

+

Ensure the following are installed and set up:

+ +
terraform --version
+
+
    +
  • AWS CLI:
  • +
+
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
+unzip awscliv2.zip
+sudo ./aws/install
+aws --version
+
+
    +
  • Cluster.dev client:
  • +
+
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
+cdev --version
+
+

Authentication

+

Choose one of the two methods below:

+
    +
  1. +

    Shared Credentials File (recommended):

    +
      +
    • +

      Populate ~/.aws/credentials:

      +
      [cluster-dev]
      +aws_access_key_id = YOUR_AWS_ACCESS_KEY
      +aws_secret_access_key = YOUR_AWS_SECRET_KEY
      +
      +
    • +
    • +

      Configure ~/.aws/config:

      +
      [profile cluster-dev]
      +region = eu-central-1
      +
      +
    • +
    • +

      Set the AWS profile:

      +
      export AWS_PROFILE=cluster-dev
      +
      +
    • +
    +
  2. +
  3. +

    Environment Variables:

    +
  4. +
+
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY"
+export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
+export AWS_DEFAULT_REGION="eu-central-1"
+
+
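A quick sanity check (a hypothetical snippet, not part of the official workflow) to confirm all three variables are set before running cdev; placeholder values are exported here just for the demo:

```shell
# Export placeholders (replace with real credentials), then verify each is set.
export AWS_ACCESS_KEY_ID="YOUR_AWS_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_AWS_SECRET_KEY"
export AWS_DEFAULT_REGION="eu-central-1"
for v in AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION; do
  eval val=\$$v
  [ -n "$val" ] && echo "$v is set" || echo "$v is MISSING"
done
```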

Creating an S3 Bucket for Storing State

+
aws s3 mb s3://cdev-states
+
+

Setting Up Your Project

+

Project Configuration (project.yaml)

+
    +
  • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
  • +
  • It points to aws-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the S3 bucket specified in backend.yaml.
  • +
  • Project-level variables are defined here and can be referenced in other configurations.
  • +
+
cat <<EOF > project.yaml
+name: dev
+kind: Project
+backend: aws-backend
+variables:
+  organization: cluster.dev
+  region: eu-central-1
+  state_bucket_name: cdev-states
+EOF
+
+

Backend Configuration (backend.yaml)

+

This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. Given the backend type as S3, it's clear that AWS is the chosen cloud provider.

+
cat <<EOF > backend.yaml
+name: aws-backend
+kind: Backend
+provider: s3
+spec:
+  bucket: {{ .project.variables.state_bucket_name }}
+  region: {{ .project.variables.region }}
+EOF
+
+

Stack Configuration (stack.yaml)

+
    +
  • This represents a distinct set of infrastructure resources to be provisioned.
  • +
  • It references a local template (in this case, the previously provided stack template) to know what resources to create.
  • +
  • Variables specified in this file will be passed to the Terraform modules called in the template.
  • +
  • The content variable here is especially useful; it dynamically populates the content of an S3 bucket object (a webpage in this case) using the local index.html file.
  • +
+
cat <<EOF > stack.yaml
+name: s3-website
+template: ./template/
+kind: Stack
+backend: aws-backend
+variables:
+  bucket_name: "tmpl-dev-test"
+  region: {{ .project.variables.region }}
+  content: |
+    {{- readFile "./files/index.html" | nindent 4 }}
+EOF
+
+
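The `readFile "./files/index.html" | nindent 4` pipeline above reads the file and indents every line by four spaces so the HTML nests correctly under the `content: |` YAML block (nindent also prepends a newline). A rough shell equivalent of the indentation step, using a throwaway sample file:

```shell
# Indent every line of a sample index.html by four spaces, as nindent 4 would.
printf '<h1>Hello</h1>\n<p>from S3</p>\n' > /tmp/index.html
sed 's/^/    /' /tmp/index.html
```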

Stack Template (template.yaml)

+

The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

+
mkdir template
+cat <<EOF > template/template.yaml
+_p: &provider_aws
+- aws:
+    region: {{ .variables.region }}
+
+name: s3-website
+kind: StackTemplate
+units:
+  -
+    name: bucket
+    type: tfmodule
+    providers: *provider_aws
+    source: terraform-aws-modules/s3-bucket/aws
+    inputs:
+      bucket: {{ .variables.bucket_name }}
+      force_destroy: true
+      acl: "public-read"
+      control_object_ownership: true
+      object_ownership: "BucketOwnerPreferred"
+      attach_public_policy: true
+      block_public_acls: false
+      block_public_policy: false
+      ignore_public_acls: false
+      restrict_public_buckets: false
+      website:
+        index_document: "index.html"
+        error_document: "error.html"
+  -
+    name: web-page-object
+    type: tfmodule
+    providers: *provider_aws
+    source: "terraform-aws-modules/s3-bucket/aws//modules/object"
+    version: "3.15.1"
+    inputs:
+      bucket: {{ remoteState "this.bucket.s3_bucket_id" }}
+      key: "index.html"
+      acl: "public-read"
+      content_type: "text/html"
+      content: |
+        {{- .variables.content | nindent 8 }}
+
+  -
+    name: outputs
+    type: printer
+    depends_on: this.web-page-object
+    outputs:
+      websiteUrl: http://{{ .variables.bucket_name }}.s3-website.{{ .variables.region }}.amazonaws.com/
+EOF
+
+
+ Click to expand explanation of the Stack Template + +

1. Provider Definition (_p)


+ +This section employs a YAML anchor, pre-setting the cloud provider and region for the resources in the stack. For this example, AWS is the designated provider, and the region is dynamically passed from the variables: + +
_p: &provider_aws
+- aws:
+    region: {{ .variables.region }}
+
+ +

2. Units


+ +The units section is where the real action is. Each unit is a self-contained "piece" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource.
+ +  + +
Bucket Unit

+ +This unit is utilizing the `terraform-aws-modules/s3-bucket/aws` module to provision an S3 bucket. Inputs for the module, such as the bucket name, are populated using variables passed into the Stack. + +
name: bucket
+type: tfmodule
+providers: *provider_aws
+source: terraform-aws-modules/s3-bucket/aws
+inputs:
+  bucket: {{ .variables.bucket_name }}
+  ...
+
+ +
Web-page Object Unit

+ +After the bucket is created, this unit takes on the responsibility of creating a web-page object inside it. This is done using a sub-module from the S3 bucket module specifically designed for object creation. A notable feature is the remoteState function, which dynamically pulls the ID of the S3 bucket created by the previous unit: + +
name: web-page-object
+type: tfmodule
+providers: *provider_aws
+source: "terraform-aws-modules/s3-bucket/aws//modules/object"
+inputs:
+  bucket: {{ remoteState "this.bucket.s3_bucket_id" }}
+  ...
+
+ +
Outputs Unit

+ +Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the website URL of the hosted S3 website. + +
name: outputs
+type: printer
+depends_on: this.web-page-object
+outputs:
+  websiteUrl: http://{{ .variables.bucket_name }}.s3-website.{{ .variables.region }}.amazonaws.com/
+
+ +

3. Variables and Data Flow


+ +The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables. +
+ +
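To make the data flow concrete, here is how the printer unit's websiteUrl template resolves once the example stack variables are substituted (plain shell interpolation standing in for the Go templating):

```shell
# Substitute the stack variables into the websiteUrl template.
bucket_name="tmpl-dev-test"
region="eu-central-1"
echo "http://${bucket_name}.s3-website.${region}.amazonaws.com/"
# prints: http://tmpl-dev-test.s3-website.eu-central-1.amazonaws.com/
```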

Sample Website File (files/index.html)

+
mkdir files
+cat <<EOF > files/index.html
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+    <title>Cdev Demo Website Home Page</title>
+</head>
+<body>
+  <h1>Welcome to my website</h1>
+  <p>Now hosted on Amazon S3!</p>
+  <h2>See you!</h2>
+</body>
+</html>
+EOF
+
+

Deploying with Cluster.dev

+
    +
  • +

    Plan the deployment:

    +
    cdev plan
    +
    +
  • +
  • +

    Apply the changes:

    +
    cdev apply
    +
    +
  • +
+

Example Screen Cast

+

+

Clean up

+

To remove all created resources, run the command:

+
cdev destroy
+
+

More Examples

+

In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

+ + + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-cdev-azure/index.html b/get-started-cdev-azure/index.html new file mode 100644 index 00000000..5c713005 --- /dev/null +++ b/get-started-cdev-azure/index.html @@ -0,0 +1,2291 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Quick Start on Azure - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Getting Started with Cluster.dev on Azure Cloud

+

This guide will walk you through the steps to deploy your first project with Cluster.dev on Azure Cloud.

+
                          +-------------------------+
+                          | Project.yaml            |
+                          |  - location             |
+                          +------------+------------+
+                                       |
+                                       |
+                          +------------v------------+
+                          | Stack.yaml              |
+                          |  - storage_account_name |
+                          |  - location             |
+                          |  - file_content         |
+                          +------------+------------+
+                                       |
+                                       |
++--------------------------------------v----------------------------------------+
+| StackTemplate: azure-static-website                                           |
+|                                                                               |
+|  +---------------------+     +---------------------+     +-----------------+  |
+|  | resource-group      |     | storage-account     |     | web-page-blob   |  |
+|  | type: tfmodule      |     | type: tfmodule      |     | type: tfmodule  |  |
+|  | inputs:             |     | inputs:             |     | inputs:         |  |
+|  |  location           |     | storage_account_name|     |  file_content   |  |
+|  |  resource_group_name|     |                     |     |                 |  |
+|  +---------------------+     +----------^----------+     +--------^--------+  |
+|        |                       | resource-group           | storage-account   |
+|        |                       | name & location          | name              |
+|        |                       | via remoteState          | via remoteState   |
++--------|-----------------------|--------------------------|-------------------+
+         |                       |                          |
+         v                       v                          v
+Azure Resource Group    Azure Storage Account      Azure Blob (in $web container)
+                                 |
+                                 v
+                       Printer: Static WebsiteUrl
+
+

Prerequisites

+

Ensure the following are installed and set up:

+ +
terraform --version
+
+
    +
  • Azure CLI:
  • +
+
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+az --version
+
+
    +
  • Cluster.dev client:
  • +
+
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
+cdev --version
+
+

Authentication

+

Before using the Azure CLI, you'll need to authenticate:

+
 az login --use-device-code
+
+

Follow the prompt to sign in.

+

Creating an Azure Blob Storage for Storing State

+

First, create a resource group:

+
az group create --name cdevResourceGroup --location EastUS
+
+

Then, create a storage account:

+
az storage account create --name cdevstates --resource-group cdevResourceGroup --location EastUS --sku Standard_LRS
+
+

Then, create a storage container:

+
az storage container create --name tfstate --account-name cdevstates
+
+

Setting Up Your Project

+

Project Configuration (project.yaml)

+
    +
  • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
  • +
  • It points to default as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored locally.
  • +
  • Project-level variables are defined here and can be referenced in other configurations.
  • +
+
cat <<EOF > project.yaml
+name: dev
+kind: Project
+backend: default
+variables:
+  organization: cluster.dev
+  location: eastus
+  state_storage_account_name: cdevstates
+EOF
+
+

Backend Configuration (backend.yaml)

+

This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. Given the backend type as azurerm, it's clear that Azure is the chosen cloud provider.

+
cat <<EOF > backend.yaml
+name: azure-backend
+kind: Backend
+provider: azurerm
+spec:
+  resource_group_name: cdevResourceGroup
+  storage_account_name: {{ .project.variables.state_storage_account_name }}
+  container_name: tfstate
+EOF
+
+

Stack Configuration (stack.yaml)

+
    +
  • This represents a distinct set of infrastructure resources to be provisioned.
  • +
  • It references a local template (in this case, the previously provided stack template) to know what resources to create.
  • +
  • Variables specified in this file will be passed to the Terraform modules called in the template.
  • +
  • The file_content variable here is especially useful; it dynamically populates the content of an Azure Storage blob (a webpage in this case) using the local index.html file.
  • +
+
cat <<EOF > stack.yaml
+name: az-blob-website
+template: ./template/
+kind: Stack
+backend: azure-backend
+variables:
+  storage_account_name: "tmpldevtest"
+  resource_group_name: "demo-resource-group"
+  location: {{ .project.variables.location }}
+  file_content: |
+    {{- readFile "./files/index.html" | nindent 4 }}
+EOF
+
+

Stack Template (template.yaml)

+

The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

+
mkdir template
+cat <<EOF > template/template.yaml
+_p: &provider_azurerm
+- azurerm:
+    features:
+      resource_group:
+        prevent_deletion_if_contains_resources: false
+
+_globals: &global_settings
+  default_region: "region1"
+  regions:
+    region1: {{ .variables.location }}
+  prefixes: ["dev"]
+  random_length: 4
+  passthrough: false
+  use_slug: false
+  inherit_tags: false
+
+_version: &module_version 5.7.5
+
+name: azure-static-website
+kind: StackTemplate
+units:
+  -
+    name: resource-group
+    type: tfmodule
+    providers: *provider_azurerm
+    source: aztfmod/caf/azurerm//modules/resource_group
+    version: *module_version
+    inputs:
+      global_settings: *global_settings
+      resource_group_name: {{ .variables.resource_group_name }}
+      settings:
+        region: "region1"
+  -
+    name: storage-account
+    type: tfmodule
+    providers: *provider_azurerm
+    source: aztfmod/caf/azurerm//modules/storage_account
+    version: *module_version
+    inputs:
+      base_tags: false
+      global_settings: *global_settings
+      client_config:
+        key: demo
+      resource_group:
+        name: {{ remoteState "this.resource-group.name" }}
+        location: {{ remoteState "this.resource-group.location" }}
+      storage_account:
+        name: {{ .variables.storage_account_name }}
+        account_kind: "StorageV2"
+        account_tier: "Standard"
+        static_website:
+          index_document: "index.html"
+          error_404_document: "error.html"
+      var_folder_path: "./"
+  -
+    name: web-page-blob
+    type: tfmodule
+    providers: *provider_azurerm
+    source: aztfmod/caf/azurerm//modules/storage_account/blob
+    version: *module_version
+    inputs:
+      settings:
+        name: "index.html"
+        content_type: "text/html"
+        source_content: |
+          {{- .variables.file_content | nindent 12 }}
+      storage_account_name: {{ remoteState "this.storage-account.name" }}
+      storage_container_name: "$web"
+      var_folder_path: "./"
+  -
+    name: outputs
+    type: printer
+    depends_on: this.web-page-blob
+    outputs:
+      websiteUrl: https://{{ remoteState "this.storage-account.primary_web_host" }}
+EOF
+
+
+ Click to expand explanation of the Stack Template + +

1. Provider Definition (_p)


+ +This section uses a YAML anchor to define the cloud provider for the resources in the stack. For this case, Azure is the chosen provider, with resource-group deletion protection disabled so the demo resources can be cleaned up easily: + +
_p: &provider_azurerm
+- azurerm:
+    features:
+      resource_group:
+        prevent_deletion_if_contains_resources: false
+
+ +

2. Units


+ +The units section is where the real action is. Each unit is a self-contained "piece" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource.
+ +  + +
Storage Account Unit

+ +This unit leverages the `aztfmod/caf/azurerm//modules/storage_account` module to provision an Azure Blob Storage account. Inputs for the module, such as the storage account name, are filled using variables passed into the Stack. + +
name: storage-account
+type: tfmodule
+providers: *provider_azurerm
+source: aztfmod/caf/azurerm//modules/storage_account
+inputs:
+  storage_account:
+    name: {{ .variables.storage_account_name }}
+  ...
+
+ +
Web-page Object Unit

+ +Upon creating the storage account, this unit takes the role of establishing a web-page object inside it. This action is carried out using a sub-module from the storage account module specifically designed for blob creation. A standout feature is the remoteState function, which dynamically extracts the name of the Azure Storage account produced by the preceding unit: + +
name: web-page-blob
+type: tfmodule
+providers: *provider_azurerm
+source: aztfmod/caf/azurerm//modules/storage_account/blob
+inputs:
+  storage_account_name: {{ remoteState "this.storage-account.name" }}
+  ...
+
+ +
Outputs Unit

+ +Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the website URL of the hosted Azure website. + +
name: outputs
+type: printer
+depends_on: this.web-page-blob
+outputs:
+  websiteUrl: https://{{ remoteState "this.storage-account.primary_web_host" }}
+
+ +

3. Variables and Data Flow


+ +The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables. +
+ +

Sample Website File (files/index.html)

+
mkdir files
+cat <<EOF > files/index.html
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+    <title>Cdev Demo Website Home Page</title>
+</head>
+<body>
+  <h1>Welcome to my website</h1>
+  <p>Now hosted on Azure!</p>
+  <h2>See you!</h2>
+</body>
+</html>
+EOF
+
+

Deploying with Cluster.dev

+
    +
  • +

    Plan the deployment:

    +
    cdev plan
    +
    +
  • +
  • +

    Apply the changes:

    +
    cdev apply
    +
    +
  • +
+

Example Screen Cast

+

+

Clean up

+

To remove the created resources, run the command:

+
cdev destroy
+
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-cdev-gcp/index.html b/get-started-cdev-gcp/index.html new file mode 100644 index 00000000..422768f5 --- /dev/null +++ b/get-started-cdev-gcp/index.html @@ -0,0 +1,2355 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Quick Start on GCP - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Getting Started with Cluster.dev on Google Cloud

+

This guide will walk you through the steps to deploy your first project with Cluster.dev on Google Cloud.

+
                          +---------------------------------+
+                          | Project.yaml                    |
+                          |  - project_name                 |
+                          |  - google_project_id            |
+                          |  - google_cloud_region          |
+                          |  - google_cloud_bucket_location |
+                          +------------+--------------------+
+                                       |
+                                       |
+                          +------------v------------+
+                          | Stack.yaml              |
+                          |  - web_page_content     |
+                          +------------+------------+
+                                       |
+                                       |
++--------------------------------------v-----------------------------------------------------------------+
+| StackTemplate: gcs-static-website                                                                      |
+|                                                                                                        |
+|  +---------------------+     +---------------------+     +-----------------+    +-----------------+    |
+|  | cloud-storage       |     | cloud-bucket-object |     | cloud-url-map   |    | cloud-lb        |    |
+|  | type: tfmodule      |     | type: tfmodule      |     | type: tfmodule  |    | type: tfmodule  |    |
+|  | inputs:             |     | inputs:             |     | inputs:         |    | inputs:         |    |
+|  |  names              |     |   bucket_name       |     |  name           |    |  name           |    |
+|  |  randomize_suffix   |     |   object_name       |     |  bucket_name    |    |  project        |    |
+|  |  project_id         |     |   object_content    |     +--------^--------+    |  url_map        |    |
+|  |  location           |     +----------^----------+      |                     +--------^--------+    |
+|  +---------------------+       |                          |                       |                    |
+|        |                       | cloud-storage            | cloud-storage         | cloud-url-map      |
+|        |                       | bucket name              | bucket name           | url_map            |
+|        |                       | via remoteState          | via remoteState       | via remoteState    |
++--------|-----------------------|--------------------------|--------------------------------------------+
+         |                       |                          |                       |
+         v                       v                          v                       v
+  Storage Bucket             Storage Bucket Object     Url Map & Bucket Backend   Load Balancer
+                                 |
+                                 v
+                       Printer: Static WebsiteUrl
+
+

Prerequisites

+

Ensure the following are installed and set up:

+ +
terraform --version
+
+ +
gcloud --version
+
+
    +
  • Cluster.dev client:
  • +
+
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
+cdev --version
+
+

Authentication

+

Before using the Google Cloud CLI, you'll need to authenticate:

+
gcloud auth login
+
+

Authorize cdev/Terraform to interact with GCP via the SDK (Application Default Credentials)

+
gcloud auth application-default login
+
+

Creating a Storage Bucket for Storing State

+
gsutil mb gs://cdevstates
+
+

Setting Up Your Project

+
+

Tip

+
+

You can clone the example files from the repo:

+
git clone https://github.com/shalb/cdev-examples
+cd cdev-examples/gcp/gcs-website/
+
+

Project Configuration (project.yaml)

+
    +
  • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
  • +
  • It points to gcs-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the Google Cloud Storage bucket specified in backend.yaml.
  • +
  • Project-level variables are defined here and can be referenced in other configurations.
  • +
+
cat <<EOF > project.yaml
+name: dev
+kind: Project
+backend: gcs-backend
+variables:
+  project_name: dev-test
+  google_project_id: cdev-demo
+  google_cloud_region: us-west1
+  google_cloud_bucket_location: EU
+  google_bucket_name: cdevstates
+  google_bucket_prefix: dev
+EOF
+
+

Backend Configuration (backend.yaml)

+

This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. In this example the backend type is GCS (Google Cloud Storage).

+
cat <<EOF > backend.yaml
+name: gcs-backend
+kind: Backend
+provider: gcs
+spec:
+  project: {{ .project.variables.google_project_id }}
+  bucket: {{ .project.variables.google_bucket_name }}
+  prefix: {{ .project.variables.google_bucket_prefix }}
+EOF
+
+

Stack Configuration (stack.yaml)

+
    +
  • This represents a distinct set of infrastructure resources to be provisioned.
  • +
  • It references a local template (in this case, the previously provided stack template) to know what resources to create.
  • +
  • Variables specified in this file will be passed to the Terraform modules called in the template.
  • +
  • The web_page_content variable here is especially useful; it dynamically populates the content of a Google Storage bucket object (a web page in this case) using the local index.html file.
  • +
+
cat <<EOF > stack.yaml
+name: cloud-storage
+template: ./template/
+kind: Stack
+backend: gcs-backend
+variables:
+  project_name: {{ .project.variables.project_name }}
+  google_cloud_region: {{ .project.variables.google_cloud_region }}
+  google_cloud_bucket_location: {{ .project.variables.google_cloud_bucket_location }}
+  google_project_id: {{ .project.variables.google_project_id }}
+  web_page_content: |
+    {{- readFile "./files/index.html" | nindent 4 }}
+EOF
+
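The `nindent 4` filter re-indents the file content by four spaces so the multi-line HTML nests correctly under the YAML key. A rough shell analogue of what the template engine does here (illustration only; cdev performs this via Go templating):

```shell
# Prefix every line of the content with four spaces, as `nindent 4` would.
printf '<html>\n<body>hi</body>\n</html>\n' | sed 's/^/    /'
```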
+

Stack Template (template.yaml)

+

The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

+
mkdir template
+cat <<EOF > template/template.yaml
+_p: &provider_gcp
+- google:
+    project: {{ .variables.google_project_id }}
+    region: {{ .variables.google_cloud_region }}
+
+name: gcs-static-website
+kind: StackTemplate
+units:
+  -
+    name: cloud-storage
+    type: tfmodule
+    providers: *provider_gcp
+    source: "github.com/terraform-google-modules/terraform-google-cloud-storage.git?ref=v4.0.1"
+    inputs:
+      names:
+        - {{ .variables.project_name }}
+      randomize_suffix: true
+      project_id: {{ .variables.google_project_id }}
+      location: {{ .variables.google_cloud_bucket_location }}
+      set_viewer_roles: true
+      viewers:
+        - allUsers
+      website:
+        main_page_suffix: "index.html"
+        not_found_page: "index.html"
+  -
+    name: cloud-bucket-object
+    type: tfmodule
+    providers: *provider_gcp
+    depends_on: this.cloud-storage
+    source: "bootlabstech/cloud-storage-bucket-object/google"
+    version: "1.0.1"
+    inputs:
+      bucket_name: {{ remoteState "this.cloud-storage.name" }}
+      object_name: "index.html"
+      object_content: |
+        {{- .variables.web_page_content | nindent 8 }}
+  -
+    name: cloud-url-map
+    type: tfmodule
+    providers: *provider_gcp
+    depends_on: this.cloud-storage
+    source: "github.com/shalb/terraform-gcs-bucket-backend.git?ref=0.0.1"
+    inputs:
+      name: {{ .variables.project_name }}
+      bucket_name: {{ remoteState "this.cloud-storage.name" }}
+  -
+    name: cloud-lb
+    type: tfmodule
+    providers: *provider_gcp
+    depends_on: this.cloud-url-map
+    source: "GoogleCloudPlatform/lb-http/google"
+    version: "9.2.0"
+    inputs:
+      name: {{ .variables.project_name }}
+      project: {{ .variables.google_project_id }}
+      url_map: {{ remoteState "this.cloud-url-map.url_map_self_link" }}
+      create_url_map: false
+      ssl: false
+      backends:
+        default:
+          protocol: "HTTP"
+          port: 80
+          port_name: "http"
+          timeout_sec: 10
+          enable_cdn: false
+          groups: [] 
+          health_check:
+            request_path: "/"
+            port: 80
+          log_config:
+            enable: true
+            sample_rate: 1.0
+          iap_config:
+            enable: false
+  -
+    name: outputs
+    type: printer
+    depends_on: this.cloud-storage
+    outputs:
+      websiteUrl: http://{{ remoteState "this.cloud-lb.external_ip" }}
+EOF
+
+
+ Click to expand explanation of the Stack Template + +

1. Provider Definition (_p)


+ +This section uses a YAML anchor, defining the cloud provider, project, and region for the resources in the stack. For this case, Google Cloud is the chosen provider, and the project and region are dynamically retrieved from the variables: + +
_p: &provider_gcp
+- google:
+    project: {{ .variables.google_project_id }}
+    region: {{ .variables.google_cloud_region }}
+
+ +

2. Units


+ +The units section is where the real action is. Each unit is a self-contained "piece" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource.
+ +  + +
Cloud Storage Unit

+ +This unit leverages the `github.com/terraform-google-modules/terraform-google-cloud-storage` module to provision a Google Storage bucket. Inputs for the module, such as the bucket name and project, are filled using variables passed into the Stack. + +
name: cloud-storage
+type: tfmodule
+providers: *provider_gcp
+source: "github.com/terraform-google-modules/terraform-google-cloud-storage.git?ref=v4.0.1"
+inputs:
+  names:
+    - {{ .variables.project_name }}
+  randomize_suffix: true
+  project_id: {{ .variables.google_project_id }}
+  location: {{ .variables.google_cloud_bucket_location }}
+  set_viewer_roles: true
+  viewers:
+    - allUsers
+  website:
+    main_page_suffix: "index.html"
+    not_found_page: "index.html"
+
+ +
Cloud Bucket Object Unit

+ +Upon creating the storage bucket, this unit takes the role of establishing a web-page object inside it. This action is carried out using a storage bucket object module specifically designed for object creation. A standout feature is the remoteState function, which dynamically extracts the name of the storage bucket produced by the preceding unit: + +
name: cloud-bucket-object
+type: tfmodule
+providers: *provider_gcp
+depends_on: this.cloud-storage
+source: "bootlabstech/cloud-storage-bucket-object/google"
+version: "1.0.1"
+inputs:
+  bucket_name: {{ remoteState "this.cloud-storage.name" }}
+  object_name: "index.html"
+  object_content: |
+    {{- .variables.web_page_content | nindent 8 }}
+
+ +
Cloud URL Map Unit

+ +This unit creates a google_compute_url_map and a google_compute_backend_bucket to supply to the cloud-lb unit. A standout feature is the remoteState function, which dynamically extracts the name of the storage bucket produced by the cloud-storage unit: + +
name: cloud-url-map
+type: tfmodule
+providers: *provider_gcp
+depends_on: this.cloud-storage
+source: "github.com/shalb/terraform-gcs-bucket-backend.git?ref=0.0.1"
+inputs:
+  name: {{ .variables.project_name }}
+  bucket_name: {{ remoteState "this.cloud-storage.name" }}
+
+ +
Cloud Load Balancer Unit

+ +This unit creates a Google HTTP load balancer. A standout feature is the remoteState function, which dynamically extracts the URL map URI produced by the cloud-url-map unit: + +
name: cloud-lb
+type: tfmodule
+providers: *provider_gcp
+depends_on: this.cloud-url-map
+source: "GoogleCloudPlatform/lb-http/google"
+version: "9.2.0"
+inputs:
+  name: {{ .variables.project_name }}
+  project: {{ .variables.google_project_id }}
+  url_map: {{ remoteState "this.cloud-url-map.url_map_self_link" }}
+  create_url_map: false
+  ssl: false
+  backends:
+    default:
+      protocol: "HTTP"
+      port: 80
+      port_name: "http"
+      timeout_sec: 10
+      enable_cdn: false
+      groups: [] 
+      health_check:
+        request_path: "/"
+        port: 80
+      log_config:
+        enable: true
+        sample_rate: 1.0
+      iap_config:
+        enable: false
+
+ +
Outputs Unit

+ +Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the URL of the website exposed by the load balancer. + +
name: outputs
+type: printer
+depends_on: this.cloud-storage
+outputs:
+  websiteUrl: http://{{ remoteState "this.cloud-lb.external_ip" }}
+
+ +

3. Variables and Data Flow


+ +The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables. +
+ +
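The addresses passed to remoteState follow a `<stack>.<unit>.<output>` convention, where `this` refers to the current stack. A small sketch of the convention (illustration only, not cdev's implementation):

```shell
# Split a remoteState address into its three parts with POSIX parameter expansion.
addr="this.cloud-storage.name"
stack=${addr%%.*}    # "this"          -> the current stack
rest=${addr#*.}
unit=${rest%%.*}     # "cloud-storage" -> the unit name
output=${rest#*.}    # "name"          -> the unit's output
echo "$stack / $unit / $output"
```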

Sample Website File (files/index.html)

+
mkdir files
+cat <<EOF > files/index.html
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+    <title>Cdev Demo Website Home Page</title>
+</head>
+<body>
+  <h1>Welcome to my website</h1>
+  <p>Now hosted on GCS!</p>
+  <h2>See you!</h2>
+</body>
+</html>
+EOF
+
+

Deploying with Cluster.dev

+
    +
  • +

    Plan the deployment:

    +
    cdev plan
    +
    +
  • +
  • +

    Apply the changes:

    +
    cdev apply
    +
    +
  • +
+

Clean up

+

To remove the created resources, run the command:

+
cdev destroy
+
+

More Examples

+

In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

+ + + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-cdev-helm/index.html b/get-started-cdev-helm/index.html new file mode 100644 index 00000000..20b60051 --- /dev/null +++ b/get-started-cdev-helm/index.html @@ -0,0 +1,2309 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Quick Start with Kubernetes - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + + + + + +
+
+ + + + + + + +

Getting Started with Kubernetes and Helm

+

This guide will walk you through the steps to deploy a WordPress application along with a MySQL database on a Kubernetes cluster using StackTemplates with Helm units.

+
                          +-------------------------+
+                          | Stack.yaml              |
+                          |  - domain               |
+                          |  - kubeconfig_path      |
+                          +------------+------------+
+                                       |
+                                       |
++--------------------------------------v---------------------------------+
+| StackTemplate: wordpress                                               |
+|                                                                        |
+|  +---------------------+               +---------------------+         |
+|  | mysql-wp-pass-user  |-------------->| mysql-wordpress     |         |
+|  | type: tfmodule      |               | type: helm          |         |
+|  | output:             |               | inputs:             |         |
+|  |  generated password |               |  kubeconfig         |         |
+|  |                     |               |  values (from mysql.yaml)     |
+|  +---------------------+               +----------|----------+         |
+|                                                   |                    |
+|                                                   v                    |
+|                                           MySQL Deployment             |
+|                                                   |                    |
+|  +---------------------+               +----------|----------+         |
+|  | wp-pass             |-------------->| wordpress           |         |
+|  | type: tfmodule      |               | type: helm          |         |
+|  | output:             |               | inputs:             |         |
+|  |  generated password |               |  kubeconfig         |         |
+|  |                     |               |  values (from wordpress.yaml) |
+|  +---------------------+               +----------|----------+         |
+|                                                   |                    |
+|                                                   v                    |
+|                                           WordPress Deployment         |
+|                                                                        |
+|  +---------------------+                                               |
+|  | outputs             |                                               |
+|  | type: printer       |                                               |
+|  | outputs:            |                                               |
+|  |  wordpress_url      |                                               |
+|  +---------------------+                                               |
+|            |                                                           |
++------------|-----------------------------------------------------------+
+             |
+             v
+      wordpress_url Output
+
+

Prerequisites

+
    +
  1. A running Kubernetes cluster.
  2. +
  3. Your domain name (for this tutorial, we'll use example.com as a placeholder).
  4. +
  5. The kubeconfig file for your Kubernetes cluster.
  6. +
+
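cdev reads the kubeconfig path from a stack variable (set below), but for ad-hoc checks with kubectl or helm it is convenient to export the standard variable first (the path below is a placeholder — use your own):

```shell
# KUBECONFIG is the standard variable read by kubectl and helm.
export KUBECONFIG="$HOME/.kube/config"
echo "$KUBECONFIG"
```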

Setting Up Your Project

+
+

Tip

+
+

You can clone the example files from the repo: +

git clone https://github.com/shalb/cdev-examples
+cd cdev-examples/helm/wordpress/
+

+

Project Configuration (project.yaml)

+
    +
  • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
  • +
  • It points to aws-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the S3 bucket specified in backend.yaml.
  • +
  • Project-level variables are defined here and can be referenced in other configurations.
  • +
+
cat <<EOF > project.yaml
+name: wordpress-demo
+kind: Project
+backend: aws-backend
+variables:
+  region: eu-central-1
+  state_bucket_name: cdev-states
+EOF
+
+

Backend Configuration (backend.yaml)

+

This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. In this example AWS S3 is used, but you can choose any other supported provider.

+
cat <<EOF > backend.yaml
+name: aws-backend
+kind: Backend
+provider: s3
+spec:
+  bucket: {{ .project.variables.state_bucket_name }}
+  region: {{ .project.variables.region }}
+EOF
+
+

Setting Up the Stack File (stack.yaml)

+
    +
  • This represents a high-level configuration of the infrastructure pattern.
  • +
  • It references a local template to know what resources to create.
  • +
  • Variables specified in this file will be passed to the Terraform modules and Helm charts called in the template.
  • +
+

Replace placeholders in stack.yaml with your actual kubeconfig path and domain.

+
cat <<EOF > stack.yaml
+name: wordpress
+template: "./template/"
+kind: Stack
+backend: aws-backend
+cliVersion: ">= 0.7.14"
+variables:
+  kubeconfig_path: "/data/home/voa/projects/cdev-aws-eks/examples/kubeconfig" # Change to your path
+  domain: demo.cluster.dev # Change to your domain
+EOF
+
+
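If you prefer editing a file in place rather than re-running the heredoc, a substitution like the following works (the file path and domain here are placeholders for illustration):

```shell
# Swap the placeholder domain for your own in a copy of the stack file.
printf 'domain: demo.cluster.dev\n' > /tmp/stack-snippet.yaml
sed -i 's/demo\.cluster\.dev/example.com/' /tmp/stack-snippet.yaml
cat /tmp/stack-snippet.yaml
```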

Stack Template (template.yaml)

+

The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

+
mkdir template
+cat <<EOF > template/template.yaml
+kind: StackTemplate
+name: wordpress
+cliVersion: ">= 0.7.15"
+units:
+## Generate Passwords with Terraform for MySQL and Wordpress
+  -
+    name: mysql-wp-pass-user
+    type: tfmodule
+    source: github.com/romanprog/terraform-password?ref=0.0.1
+    inputs:
+      length: 12
+      special: false
+  -
+    name: wp-pass
+    type: tfmodule
+    source: github.com/romanprog/terraform-password?ref=0.0.1
+    inputs:
+      length: 12
+      special: false
+## Install MySQL and Wordpress with Helm
+  -
+    name: mysql-wordpress
+    type: helm
+    kubeconfig: {{ .variables.kubeconfig_path }}
+    source:
+      repository: "oci://registry-1.docker.io/bitnamicharts"
+      chart: "mysql"
+      version: "9.9.1"
+    additional_options:
+      namespace: "wordpress"
+      create_namespace: true
+    values:
+      - file: ./files/mysql.yaml
+  -
+    name: wordpress
+    type: helm
+    depends_on: this.mysql-wordpress
+    kubeconfig: {{ .variables.kubeconfig_path }}
+    source:
+      repository: "oci://registry-1.docker.io/bitnamicharts"
+      chart: "wordpress"
+      version: "16.1.2"
+    additional_options:
+      namespace: "wordpress"
+      create_namespace: true
+    values:
+      - file: ./files/wordpress.yaml
+
+  - name: outputs
+    type: printer
+    depends_on: this.wordpress
+    outputs:
+      wordpress_url: https://wordpress.{{ .variables.domain }}/admin/
+      wordpress_user: user
+      wordpress_password: {{ remoteState "this.wp-pass.result" }}
+EOF
+
+

As you can see, the StackTemplate contains Helm units that take their inputs from values.yaml files, where you can use outputs from other types of units (like tfmodule) or even from other stacks. Let's create those values files for MySQL and WordPress:

+
mkdir files
+cat <<EOF > files/mysql.yaml
+fullNameOverride: mysql-wordpress
+auth:
+  rootPassword: {{ remoteState "this.mysql-wp-pass-user.result" }}
+  username: user
+  password: {{ remoteState "this.mysql-wp-pass-user.result" }}
+EOF
+
+
cat <<EOF > files/wordpress.yaml
+containerSecurityContext:
+  enabled: false
+mariadb:
+  enabled: false
+externalDatabase:
+  port: 3306
+  user: user
+  password: {{ remoteState "this.mysql-wp-pass-user.result" }}
+  database: my_database
+wordpressPassword: {{ remoteState "this.wp-pass.result" }}
+allowOverrideNone: false
+ingress:
+  enabled: true
+  ingressClassName: "nginx"
+  pathType: Prefix
+  hostname: wordpress.{{ .variables.domain }}
+  path: /
+  tls: true
+  annotations:
+    cert-manager.io/cluster-issuer: "letsencrypt-prod"
+EOF
+
+
+ Click to expand explanation of the Stack Template + +

1. Units

+ +The units section is a list of infrastructure components that are provisioned sequentially. Each unit has a type, which indicates whether it's a Terraform module (`tfmodule`), a Helm chart (`helm`), or simply outputs (`printer`). + +
Password Generation Units
+ +There are two password generation units which use the Terraform module `github.com/romanprog/terraform-password` to generate random passwords. + +
name: mysql-wp-pass-user
+type: tfmodule
+source: github.com/romanprog/terraform-password?ref=0.0.1
+inputs:
+  length: 12
+  special: false
+
+ +These units will create passwords with a length of 12 characters without special characters. The outputs of these units (the generated passwords) are used in subsequent units. + +
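For intuition, the unit's output resembles what this local sketch produces — 12 alphanumeric characters with no specials (an analogue for illustration only, not the module's actual code):

```shell
# Generate a 12-character alphanumeric password, matching the unit's inputs
# (length: 12, special: false).
pass=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 12)
echo "$pass"
```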
MySQL Helm Chart Unit
+ +This unit installs the MySQL chart from the `bitnamicharts` Helm repository. + +
name: mysql-wordpress
+type: helm
+kubeconfig: {{ .variables.kubeconfig_path }}
+source:
+  repository: "oci://registry-1.docker.io/bitnamicharts"
+  chart: "mysql"
+  version: "9.9.1"
+
+ +The `kubeconfig` field uses a variable to point to the Kubeconfig file, enabling Helm to deploy to the correct Kubernetes cluster. + +
WordPress Helm Chart Unit
+ +This unit installs the WordPress chart from the same Helm repository as MySQL. It depends on the `mysql-wordpress` unit, ensuring MySQL is installed first. + +
name: wordpress
+type: helm
+depends_on: this.mysql-wordpress
+
+ +Both Helm units utilize external YAML files (`mysql.yaml` and `wordpress.yaml`) to populate values for the Helm charts. These values files leverage the `remoteState` function to fetch passwords generated by the Terraform modules. + +
Outputs Unit
+ +This unit outputs the URL to access the WordPress site. + +
name: outputs
+type: printer
+depends_on: this.wordpress
+outputs:
+  wordpress_url: https://wordpress.{{ .variables.domain }}/admin/
+
+ +It waits for the WordPress Helm unit to complete (`depends_on: this.wordpress`) and then provides the URL. + +

2. Variables and Data Flow

+ +In this stack template: + +The `.variables` placeholders, like `{{ .variables.kubeconfig_path }}` and `{{ .variables.domain }}`, fetch values from the stack's variables. +The `remoteState` function, such as `{{ remoteState "this.wp-pass.result" }}`, fetches the outputs from previous units. For example, it retrieves the randomly generated password for WordPress. +These mechanisms ensure dynamic configurations based on real-time resource states and user-defined variables. They enable values generated in one unit (e.g., a password from a Terraform module) to be utilized in a subsequent unit (e.g., a Helm deployment). + +

3. Additional File (`mysql.yaml` and `wordpress.yaml`) Explanation

+ +Both files serve as value configurations for their respective Helm charts. +`mysql.yaml` sets overrides for the MySQL deployment, specifically the authentication details. +`wordpress.yaml` customizes the WordPress deployment, such as its database settings, ingress configuration, and password. + +Both files leverage the `remoteState` function to pull in passwords generated by the Terraform password modules. + +In summary, this stack template and its additional files define a robust deployment that sets up a WordPress application with its database, all while dynamically creating and injecting passwords. It showcases the synergy between Terraform for infrastructure provisioning and Helm for Kubernetes-based application deployments. + +
+ +

Deploying WordPress and MySQL with Cluster.dev

+

1. Planning the Deployment

+
cdev plan
+
+

2. Applying the StackTemplate

+
cdev apply
+
+

Upon executing these commands, WordPress and MySQL will be deployed on your Kubernetes cluster using Cluster.dev.

+

Example Screen Cast

+

+

Clean up

+

To remove the created resources, run the command:

+
cdev destroy
+
+

Conclusion

+

StackTemplates provide a modular approach to deploying applications on Kubernetes. With Helm and StackTemplates, you can efficiently maintain, scale, and manage your deployments. This guide walked you through deploying WordPress and MySQL seamlessly on a Kubernetes cluster using these tools.

+

More Examples

+

In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

+ + + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-create-project/index.html b/get-started-create-project/index.html new file mode 100644 index 00000000..5abafba7 --- /dev/null +++ b/get-started-create-project/index.html @@ -0,0 +1,1755 @@ + + + + + + + + + + + + + + + + + + + + + Create New Project - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Create New Project

+

Quick start

+

In this example we will use the tmpl-development sample to create a new project in the AWS cloud.

+
    +
  1. +

    Install the Cluster.dev client.

    +
  2. +
  3. +

    Create a project directory, cd into it and generate a project with the command:

    +

    cdev project create https://github.com/shalb/cluster.dev tmpl-development

    +
  4. +
  5. +

    Export environment variables via an AWS profile.

    +
  6. +
  7. +

    Run cdev plan to build the project and see the infrastructure that will be created.

    +
  8. +
  9. +

    Run cdev apply to deploy the stack.

    +
  10. +
+

Workflow diagram

+

The diagram below describes the steps of creating a new project without generators.

+

create new project diagram

+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/get-started-overview/index.html b/get-started-overview/index.html new file mode 100644 index 00000000..405aa67e --- /dev/null +++ b/get-started-overview/index.html @@ -0,0 +1,1829 @@ + + + + + + + + + + + + + + + + + + + + + Cluster.dev Examples Overview - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+ +
+
+ + + +
+
+ + + + + + + +

Cluster.dev Examples Overview

+

Working with Terraform Modules

+

Example of how to create static website hosting on different clouds.

+ + + + + + + + + + + + + + + + + + + + + + + + + +
Cloud ProviderSample LinkTechnology Image
AWSQuick Start on AWSAWS Logo Terraform Logo
AzureQuick Start on AzureAzure Logo Terraform Logo
GCPQuick Start on GCPGCP Logo Terraform Logo
+

Kubernetes Deployment with Helm Charts

+

Example of how to deploy an application to Kubernetes with Helm and Terraform.

+ + + + + + + + + + + + + + + +
DescriptionSample LinkTechnology Image
Terraform Kubernetes HelmQuick Start with KubernetesKubernetes Logo Helm Logo
+

Bootstrapping Kubernetes in Different Clouds

+

Create fully featured Kubernetes clusters with required addons.

+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Cloud ProviderKubernetes TypeSample LinkTechnology Image
AWSEKSAWS-EKSAWS Logo Kubernetes Logo
AWSK3sAWS-K3sAWS Logo K3s Logo
GCPGKEGCP-GKEGCP Logo Kubernetes Logo
AWSK3s + PrometheusAWS-K3s PrometheusAWS Logo K3s Logo Prometheus Logo
DOK8sDO-K8sDO Logo Kubernetes Logo
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/google-cloud-provider/index.html b/google-cloud-provider/index.html new file mode 100644 index 00000000..0db725b3 --- /dev/null +++ b/google-cloud-provider/index.html @@ -0,0 +1,1728 @@ + + + + + + + + + + + + + + + + + + + + + Deploying to GCE - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+ +
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/how-does-cdev-work/index.html b/how-does-cdev-work/index.html new file mode 100644 index 00000000..5024676e --- /dev/null +++ b/how-does-cdev-work/index.html @@ -0,0 +1,1947 @@ + + + + + + + + + + + + + + + + + + + + + + + + + How Does It Work? - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

How Does It Work?

+

With Cluster.dev you create or download a predefined stack template, set the variables, then render and deploy a whole stack.

+

Capabilities:

+
    +
  • Re-using all existing Terraform private and public modules and Helm Charts.
  • +
  • Applying parallel changes in multiple infrastructures concurrently.
  • +
  • Using the same global variables and secrets across different infrastructures, clouds, and technologies.
  • +
  • Templating anything with the Go-template function, even Terraform modules in Helm style templates.
  • +
  • Creating and managing secrets with SOPS or cloud secret storages.
  • +
  • Generating ready-to-use Terraform code.
  • +
+

Basic diagram

+

cdev diagram

+

Templating

+

Templating is one of the key features that underlie the powerful capabilities of Cluster.dev. Similar to Helm, the cdev templating is based on Go template language and uses Sprig and some other extra functions to expose objects to the templates.

+

Cluster.dev has two levels of templating, one that involves template rendering on a project level and one on a stack template level. For more information please refer to the Templating section.

+
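For illustration, stack-template-level templating uses Helm-style Go template syntax; in this sketch the unit and variable names are hypothetical:

```yaml
# Hypothetical stack template fragment; unit and variable names are illustrative.
units:
  - name: app-namespace
    type: kubernetes
    # Stack variables are exposed to the template, Helm-style:
    namespace: {{ .variables.namespace }}
  {{- if .variables.install_monitoring }}
  # This unit is rendered only when the stack sets install_monitoring: true
  - name: monitoring
    type: helm
  {{- end }}
```

Conditional blocks like this let one template serve several environments with different variable sets.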

How to use Cluster.dev

+

Cluster.dev is a powerful framework that can be operated in several modes.

+

Create your own stack template

+

In this mode you can create your own stack templates. Having your own template enables you to launch or copy environments (like dev/stage/prod) from the same template. You'll be able to develop and propagate changes together with your team members, using just Git. Operating Cluster.dev in developer mode requires some prerequisites, the most important being an understanding of Terraform and how to work with its modules. Knowledge of go-template syntax or Helm is advisable but not mandatory.

+

Deploy infrastructures from existing stack templates

+

This mode, also known as user mode, gives you the ability to launch ready-to-use infrastructures from prepared stack templates by just adding your cloud credentials and setting variables (such as name, zones, number of instances, etc.). +You don't need to know background tooling like Terraform or Helm, it's as simple as downloading a sample and launching commands. Here are the steps:

+
    +
  • Install Cluster.dev binary
  • +
  • Choose and download a stack template
  • +
  • Set cloud credentials
  • +
  • Define variables for the stack template
  • +
  • Run Cluster.dev and get a cloud infrastructure
  • +
+
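The steps above mostly boil down to editing a small stack file before running Cluster.dev; a minimal hypothetical sketch (all field values are illustrative):

```yaml
# stack.yaml - hypothetical values for a downloaded stack template.
name: my-infra
kind: Stack
template: ./template/
variables:
  region: eu-central-1
  instance_count: 2
```

With credentials exported, `cdev plan` and `cdev apply` then render the template with these variables and deploy it.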

Workflow

+

Let's assume you are starting a new infrastructure project. Let's see what your workflow would look like.

+
    +
  1. +

    Define what kind of infrastructure pattern you need to achieve.

    +

    a. What Terraform modules it would include (for example: I need to have VPC, Subnet definitions, IAM's and Roles).

    +

    b. Whether you need to apply any Bash scripts before and after the module, or inside as pre/post-hooks.

    +

    c. If you are using Kubernetes, check what controllers would be deployed and how (by Helm chart or K8s manifests).

    +
  2. +
  3. +

    Check if there is any similar sample template that already exists.

    +
  4. +
  5. +

    Clone the stack template locally and modify it if needed.

    +
  6. +
  7. +

    Apply it.

    +
  8. +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/howto-tf-versions/index.html b/howto-tf-versions/index.html new file mode 100644 index 00000000..02fe1a36 --- /dev/null +++ b/howto-tf-versions/index.html @@ -0,0 +1,1772 @@ + + + + + + + + + + + + + + + + + + + + + + + Use Different Terraform versions - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Use Different Terraform Versions

+

By default, Cluster.dev runs the version of Terraform that is installed on the local machine. If you need to switch between versions, use a third-party utility such as Terraform Switcher.

+

Example of tfswitch usage:

+

tfswitch 0.15.5
+
+cdev apply
+
+This will tell Cluster.dev to use Terraform v0.15.5.

+

Use CDEV_TF_BINARY variable to indicate which Terraform binary to use.

+
+

Info

+

The variable is recommended for debugging and template development only.

+
+

You can pin it in project.yaml:

+
    name: dev
+    kind: Project
+    backend: aws-backend
+    variables:
+      organization: cluster-dev
+      region: eu-central-1
+      state_bucket_name: cluster-dev-gha-tests
+    exports:
+      CDEV_TF_BINARY: "terraform_14"
+
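For a one-off run, the same variable can also be exported in the shell before invoking Cluster.dev (the binary name `terraform_14` here is an assumption — use whatever Terraform binary is on your `$PATH`):

```shell
# Hypothetical binary name; any Terraform binary on $PATH works.
export CDEV_TF_BINARY="terraform_14"
echo "cdev will use: $CDEV_TF_BINARY"
```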
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/images/cdev-base-diagram-shema1.png b/images/cdev-base-diagram-shema1.png new file mode 100644 index 00000000..5f74f032 Binary files /dev/null and b/images/cdev-base-diagram-shema1.png differ diff --git a/images/cdev-base-diagram-shema1_red.png b/images/cdev-base-diagram-shema1_red.png new file mode 100644 index 00000000..7991ba85 Binary files /dev/null and b/images/cdev-base-diagram-shema1_red.png differ diff --git a/images/cdev-base-diagram.png b/images/cdev-base-diagram.png new file mode 100644 index 00000000..5e8497fa Binary files /dev/null and b/images/cdev-base-diagram.png differ diff --git a/images/cdev-module-banner.png b/images/cdev-module-banner.png new file mode 100644 index 00000000..4a0da0d0 Binary files /dev/null and b/images/cdev-module-banner.png differ diff --git a/images/cdev-template-example.png b/images/cdev-template-example.png new file mode 100644 index 00000000..d43d5866 Binary files /dev/null and b/images/cdev-template-example.png differ diff --git a/images/cdev-template-shema2.png b/images/cdev-template-shema2.png new file mode 100644 index 00000000..301008f8 Binary files /dev/null and b/images/cdev-template-shema2.png differ diff --git a/images/cdev-unit-example.png b/images/cdev-unit-example.png new file mode 100644 index 00000000..7789ba0d Binary files /dev/null and b/images/cdev-unit-example.png differ diff --git a/images/cdev-unit-shema4.png b/images/cdev-unit-shema4.png new file mode 100644 index 00000000..8de7198b Binary files /dev/null and b/images/cdev-unit-shema4.png differ diff --git a/images/cluster-dev-logo-site.png b/images/cluster-dev-logo-site.png new file mode 100644 index 00000000..7c1f9dd9 Binary files /dev/null and b/images/cluster-dev-logo-site.png differ diff --git a/images/create-project-diagram-shema5.png b/images/create-project-diagram-shema5.png new file mode 100644 index 00000000..55702a55 Binary files /dev/null and 
b/images/create-project-diagram-shema5.png differ diff --git a/images/create-project-diagram.png b/images/create-project-diagram.png new file mode 100644 index 00000000..de76b2c5 Binary files /dev/null and b/images/create-project-diagram.png differ diff --git a/images/demo.gif b/images/demo.gif new file mode 100644 index 00000000..5b9172b9 Binary files /dev/null and b/images/demo.gif differ diff --git a/images/favicon.png b/images/favicon.png new file mode 100644 index 00000000..7988a70d Binary files /dev/null and b/images/favicon.png differ diff --git a/images/gh-secrets.png b/images/gh-secrets.png new file mode 100644 index 00000000..2eec255d Binary files /dev/null and b/images/gh-secrets.png differ diff --git a/images/gha_get_credentials.png b/images/gha_get_credentials.png new file mode 100644 index 00000000..b85a0544 Binary files /dev/null and b/images/gha_get_credentials.png differ diff --git a/images/go_executor_diagram.png b/images/go_executor_diagram.png new file mode 100644 index 00000000..f981565b Binary files /dev/null and b/images/go_executor_diagram.png differ diff --git a/images/templating-shema6.png b/images/templating-shema6.png new file mode 100644 index 00000000..45feb106 Binary files /dev/null and b/images/templating-shema6.png differ diff --git a/images/templating.png b/images/templating.png new file mode 100644 index 00000000..f421ce55 Binary files /dev/null and b/images/templating.png differ diff --git a/index.html b/index.html new file mode 100644 index 00000000..06a237ee --- /dev/null +++ b/index.html @@ -0,0 +1,1850 @@ + + + + + + + + + + + + + + + + + + + + + + + What Is Cluster.dev? - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

What is Cluster.dev?

+

Cluster.dev is an open-source tool designed to manage cloud native infrastructures with simple declarative manifests - stack templates. It allows you to describe an entire infrastructure and deploy it with a single tool.

+

Stack templates can be based on Terraform modules, Kubernetes manifests, Shell scripts, Helm charts and Argo CD/Flux applications, OPA policies, etc. Cluster.dev brings those components together so that you can deploy, test and distribute a whole set of components with pinned versions.

+

When do I need Cluster.dev?

+
    +
  1. If you have a common infrastructure pattern that contains multiple components stuck together. This could be a bunch of TF-modules, or a set of K8s add-ons where you need to re-use this pattern inside your projects.
  2. +
  3. If you develop an infrastructure platform that you ship to other teams and they need to launch new infrastructures from your template.
  4. +
  5. If you build a complex infrastructure that contains different technologies and you need to perform integration testing to confirm the components' interoperability. Once done, you can then promote the changes to next environments.
  6. +
  7. If you are a software vendor and need to deliver infrastructure deployment along with your software.
  8. +
+

Base concept diagrams

+

Stack templates are composed of units - Lego-like building blocks responsible for passing variables to a particular technology.

+

cdev unit example diagram

+

Templates define infrastructure patterns or even the whole platform.

+

cdev template example diagram

+

Features

+
    +
  • Common variables, secrets, and templating for different technologies.
  • +
  • Same GitOps Development experience for Terraform, Shell, Kubernetes.
  • +
  • Can be used with any cloud, on-premises or hybrid scenarios.
  • +
  • Encourage teams to follow technology best practices.
  • +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/installation-upgrade/index.html b/installation-upgrade/index.html new file mode 100644 index 00000000..8817014f --- /dev/null +++ b/installation-upgrade/index.html @@ -0,0 +1,1979 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Installation and Upgrade - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
+ +
+ + + + +
+ + +
+ +
+ + + + + + + + + +
+
+ + + +
+
+
+ + + + + + + +
+
+
+ + + +
+
+
+ + + +
+
+
+ + + +
+
+ + + + + + + +

Installation and Upgrade

+

Prerequisites

+

To start using Cluster.dev, please make sure that you meet the following prerequisites.

+

Supported operating systems:

+
    +
  • +

    Linux amd64

    +
  • +
  • +

    Darwin amd64

    +
  • +
+

Required software installed:

+
    +
  • +

    Git console client

    +
  • +
  • +

    Terraform

    +
  • +
+

Terraform

+

The Cluster.dev client uses the Terraform binary. The required Terraform version is 1.4 or higher. Refer to the Terraform installation instructions to install Terraform.

+

Install From Script

+
+

Tip

+

This is the easiest way to have the Cluster.dev client installed. For other options see the Install From Sources section.

+
+

Cluster.dev has an installer script that takes the latest version of Cluster.dev client and installs it for you locally.

+

Fetch the script and execute it locally with the command:

+
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
+
+

Install From Sources

+

Download from release

+

Each stable version of Cluster.dev has a binary that can be downloaded and installed manually. The documentation is suitable for v0.4.0 or higher of the Cluster.dev client.

+

Installation example for Linux amd64:

+
    +
  1. +

    Download your desired version from the releases page.

    +
  2. +
  3. +

    Unpack it.

    +
  4. +
  5. +

    Find the Cluster.dev binary in the unpacked directory.

    +
  6. +
  7. +

    Move the binary to the bin folder (/usr/local/bin/).

    +
  8. +
+

Building from source

+

Go version 1.16 or higher is required - see Golang installation instructions.

+

To build the Cluster.dev client from source:

+
    +
  1. +

    Clone the Cluster.dev Git repo:

    +
    git clone https://github.com/shalb/cluster.dev/
    +
    +
  2. +
  3. +

    Build the binary:

    +
    cd cluster.dev/ && make
    +
    +
  4. +
  5. +

    Check Cluster.dev and move the binary to the bin folder:

    +
    ./bin/cdev --help
    +mv ./bin/cdev /usr/local/bin/
    +
    +
  6. +
+ + + + + + + + +
+
+ + +
+ +
+ + + + +
+
+
+
+ + + + + + + + + + \ No newline at end of file diff --git a/search/search_index.json b/search/search_index.json new file mode 100644 index 00000000..3b4f979d --- /dev/null +++ b/search/search_index.json @@ -0,0 +1 @@ +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"What is Cluster.dev?","text":"

Cluster.dev is an open-source tool designed to manage cloud native infrastructures with simple declarative manifests - stack templates. It allows you to describe an entire infrastructure and deploy it with a single tool.

Stack templates can be based on Terraform modules, Kubernetes manifests, Shell scripts, Helm charts and Argo CD/Flux applications, OPA policies, etc. Cluster.dev brings those components together so that you can deploy, test and distribute a whole set of components with pinned versions.

"},{"location":"#when-do-i-need-clusterdev","title":"When do I need Cluster.dev?","text":"
  1. If you have a common infrastructure pattern that contains multiple components stuck together. This could be a bunch of TF-modules, or a set of K8s add-ons where you need to re-use this pattern inside your projects.
  2. If you develop an infrastructure platform that you ship to other teams and they need to launch new infrastructures from your template.
  3. If you build a complex infrastructure that contains different technologies and you need to perform integration testing to confirm the components' interoperability. Once done, you can then promote the changes to next environments.
  4. If you are a software vendor and need to deliver infrastructure deployment along with your software.
"},{"location":"#base-concept-diagrams","title":"Base concept diagrams","text":"

Stack templates are composed of units - Lego-like building blocks responsible for passing variables to a particular technology.

Templates define infrastructure patterns or even the whole platform.

"},{"location":"#features","title":"Features","text":"
  • Common variables, secrets, and templating for different technologies.
  • Same GitOps Development experience for Terraform, Shell, Kubernetes.
  • Can be used with any cloud, on-premises or hybrid scenarios.
  • Encourage teams to follow technology best practices.
"},{"location":"DevOpsDays21/","title":"DevOps Days 2021","text":"

Hi Guys, I'm Vova from SHALB!

At SHALB we build and support hundreds of infrastructures, so we have some outcomes and experience that we'd like to share.

"},{"location":"DevOpsDays21/#problems-of-the-modern-cloud-native-infrastructures","title":"Problems of the modern Cloud Native infrastructures","text":""},{"location":"DevOpsDays21/#multiple-technologies-needs-to-be-coupled","title":"Multiple technologies needs to be coupled","text":"

Infrastructure code for a complete infrastructure contains different technologies: Terraform, Helm, Docker, Bash, Ansible, Cloud-Init, CI/CD scripts, SQL, GitOps applications, Secrets, etc.

With a bunch of specific DSLs: yaml, hcl, go-template, json(net).

And each with its own code style: declarative, imperative, interrogative. With different diff'ing: two- or three-way merges. And even different patching within one tool, like patchesStrategicMerge and patchesJson6902 in kustomize.

So you need to couple all that stuff together to be able to spawn a whole infrastructure in one shot. And that one shot needs to be fully automated, so it can be GitOps-ed :)!

"},{"location":"DevOpsDays21/#even-super-powerful-tool-has-own-limits","title":"Even super-powerful tool has own limits","text":"

So that's why:

  • Terragrunt, Terraspace and Atlantis exist for Terraform.
  • Helmfile, Helm Operator exist for Helm.
  • and Helm exists for K8s yaml :).
"},{"location":"DevOpsDays21/#its-hard-to-deal-with-variables-and-secrets","title":"Its hard to deal with variables and secrets","text":"
  1. Variables should be passed between different technologies in sometimes unpredictable sequences. For example, you need to pass the IAM role ARN created by Terraform to a Cert-Manager controller deployed with Helm values.

  2. Variables should be passed across different infrastructures, even ones located on different clouds. Imagine you need to obtain a DNS zone from CloudFlare, then set 'NS' records in AWS Route53, and then grant an External-DNS controller, deployed in on-prem K8s provisioned with Rancher, permission to change this zone in AWS...

  3. Secrets need to be secured and shared across different team members and teams. Team members sometimes leave, or accounts could be compromised, and you need to completely revoke their access across a set of infrastructures in one shot.

  4. Variables should be decoupled from the infrastructure pattern itself and need wise, sane defaults. If you hardcode variables, it's hard to reuse such code.

"},{"location":"DevOpsDays21/#development-and-testing","title":"Development and Testing","text":"

You'd like to maximize reuse of existing infrastructure patterns:

- Terraform modules\n- Helm Charts\n- K8s Operators\n- Dockerfile's\n

Pin versions for everything you have in your infrastructure, for example: the aws cli and terraform binary versions, along with the Helm version, the Prometheus operator version, and your private kustomize application.

"},{"location":"DevOpsDays21/#available-solutions","title":"Available solutions","text":"

So to couple their infrastructure with some 'glue', most engineers have several options:

  • Sequential applying in CI/CD, e.g. a Jenkins/GitLab job that deploys infrastructure components one by one.
  • Their own bash scripts and Makefiles that pull code from different repos and apply it in a hardcoded sequence.
  • Some of them struggle to write everything with one technology: e.g. Pulumi (but you need to know how to code in JS, Go, .NET) or Terraform (and you fail) :)
  • Some of them rely on an existing API architecture (Kubernetes), like Crossplane.
"},{"location":"DevOpsDays21/#we-create-own-tool-clusterdev-or-cdev","title":"We create own tool - cluster.dev or 'cdev'","text":"

Its capabilities:

  • Re-using all existing Terraform private and public modules and Helm Charts.
  • Templating anything with Go-template functions, even Terraform modules in Helm-style templates.
  • Applying parallel changes in multiple infrastructures concurrently.
  • Using the same global variables and secrets across different infrastructures, clouds and technologies.
  • Create and manage secrets with Sops or cloud secret storages.
  • Generate ready-to-use Terraform code.
"},{"location":"DevOpsDays21/#short-demo","title":"Short Demo","text":""},{"location":"ROADMAP/","title":"Project Roadmap","text":""},{"location":"ROADMAP/#v01x-basic-scenario","title":"v.0.1.x - Basic Scenario","text":"
  • Create a state storage (AWS S3+Dynamo) for infrastructure resources
  • Deploy a Kubernetes(Minikube) in AWS using default VPC
  • Provision Kubernetes with addons: Ingress-Nginx, Load Balancer, Cert-Manager, ExtDNS, ArgoCD
  • Deploy a sample \"WordPress\" application to Kubernetes cluster using ArgoCD
  • Delivered as GitHub Actions and Docker Image
"},{"location":"ROADMAP/#v02x-bash-based-poc","title":"v0.2.x - Bash-based PoC","text":"
  • Deliver a default DNS sub-zone with cluster creation: *.username-clustername.cluster.dev
  • Create a cluster.dev backend to register newly created clusters
  • Support for GitLab CI Pipelines
  • ArgoCD sample applications (raw manifests, local helm chart, public helm chart)
  • Support for DigitalOcean Kubernetes cluster 59
  • DigitalOcean Domains sub-zones 65
  • AWS EKS provisioning. Spot and Mixed ASG support.
  • Support for Operator Lifecycle Manager
"},{"location":"ROADMAP/#v03x-go-based-beta","title":"v0.3.x - Go-based Beta","text":"
  • Go-based reconciler
  • External secrets management with Sops and godaddy/kubernetes-external-secrets
  • Team and user management with Keycloak
  • Apps deployment: Kubernetes Dashboard, Grafana and Kibana.
  • OIDC access to kubeconfig with Keycloak and jetstack/kube-oidc-proxy/ 53
  • SSO access to ArgoCD and base applications: Kubernetes Dashboard, Grafana, Kibana
  • OIDC integration with GitHub, GitLab, Google Auth, Okta
"},{"location":"ROADMAP/#v04x","title":"v0.4.x","text":"
  • CLI Installer 54
  • Add GitHub runner and test GitHub Action Continuous Integration workflow
  • Argo Workflows for DAG and CI tasks inside Kubernetes cluster
  • Google Cloud Platform Kubernetes (GKE) support
  • Custom Terraform modules and reconciliation
  • Kind provisioner
"},{"location":"ROADMAP/#v05x","title":"v0.5.x","text":"
  • kops provisioner support
  • k3s provisioner
  • Cost $$$ estimation during installation
  • Web user interface design
"},{"location":"ROADMAP/#v06x","title":"v0.6.x","text":"
  • Rancher RKE provisioner support
  • Multi-cluster support for user management and SSO
  • Multi-cluster support for ArgoCD
  • Crossplane integration
"},{"location":"azure-cloud-provider/","title":"Deploying to Azure","text":"

Work on setting up access to Azure is in progress, examples are coming soon!

"},{"location":"azure-cloud-provider/#authentication","title":"Authentication","text":"

See Terraform Azure provider documentation.

"},{"location":"cdev-vs-helmfile/","title":"Cluster.dev vs. Helmfile: Managing Kubernetes Helm Charts","text":"

Kubernetes, with its dynamic and versatile nature, requires efficient tools to manage its deployments. Two tools that have gained significant attention for this purpose are Cluster.dev and Helmfile. Both are designed to manage Kubernetes Helm charts but with varying focuses and features. This article offers a comparative analysis of Cluster.dev and Helmfile, spotlighting their respective strengths.

"},{"location":"cdev-vs-helmfile/#1-introduction","title":"1. Introduction","text":"

Cluster.dev:

  • A versatile tool designed for managing cloud-native infrastructures with declarative manifests, known as stack templates.
  • Integrates with various technologies, including Terraform modules, Kubernetes manifests, Helm charts, and more.
  • Promotes a unified approach to deploying, testing, and distributing infrastructure components.

Helmfile:

  • A declarative specification for deploying and synchronizing Helm charts.
  • Provides automation and workflow tooling around the Helm tool, making it easier to deploy and manage Helm charts across several clusters or environments.
"},{"location":"cdev-vs-helmfile/#2-core-features-abilities","title":"2. Core Features & Abilities","text":""},{"location":"cdev-vs-helmfile/#declarative-manifests","title":"Declarative Manifests","text":"
  • Cluster.dev: Uses stack templates, allowing integration with various technologies. This versatility makes Cluster.dev suitable for describing and deploying an entire infrastructure.

  • Helmfile: Uses a specific declarative structure for Helm charts. Helmfile's helmfile.yaml describes the desired state of Helm releases, promoting consistent deployments across environments.

"},{"location":"cdev-vs-helmfile/#integration-and-flexibility","title":"Integration and Flexibility","text":"
  • Cluster.dev: Supports a wide array of technologies beyond Helm, such as Terraform and Kubernetes manifests. This broad scope makes it suitable for diverse cloud-native projects.

  • Helmfile: Exclusively focuses on Helm, providing tailored utilities, commands, and functions that enhance the Helm experience.

"},{"location":"cdev-vs-helmfile/#configuration-management","title":"Configuration Management","text":"
  • Cluster.dev: Uses stack templates to handle configurations, integrating them with the respective technology modules. For Helm, it provides a dedicated \"helm\" unit type.

  • Helmfile: Employs helmfile.yaml, where users can specify Helm chart details, dependencies, repositories, and values. Helmfile also supports templating and layering of values, providing powerful configuration management.

"},{"location":"cdev-vs-helmfile/#workflow-and-automation","title":"Workflow and Automation","text":"
  • Cluster.dev: Offers a GitOps Development experience across different technologies, ensuring consistent deployment practices.

  • Helmfile: Provides a suite of commands (apply, sync, diff, etc.) tailored for Helm workflows, making it easy to manage Helm releases in an automated manner.

"},{"location":"cdev-vs-helmfile/#values-and-templating","title":"Values and Templating","text":"
  • Cluster.dev: Supports values templating for Helm units, and offers functions like remoteState and insertYAML for dynamic inputs.

  • Helmfile: Robustly supports values templating, with features like environment-specific value files and Go templating. It allows for dynamic generation of values based on the environment or external commands.

"},{"location":"cdev-vs-helmfile/#3-ideal-use-cases","title":"3. Ideal Use Cases","text":"

Cluster.dev:

  • Large-scale cloud-native projects integrating various technologies.
  • Unified deployment and management of multi-technology stacks.
  • Organizations aiming for a consistent GitOps approach across their stack.

Helmfile:

  • Projects heavily reliant on Helm for deployments.
  • Organizations needing advanced configuration management for Helm charts.
  • Scenarios requiring repetitive and consistent Helm chart deployments across various clusters or environments.
"},{"location":"cdev-vs-helmfile/#4-conclusion","title":"4. Conclusion","text":"

Cluster.dev and Helmfile, while both capable of managing Helm charts, cater to different spectrums of the Kubernetes deployment landscape. Cluster.dev aims for a holistic approach to cloud-native infrastructure management, integrating various technologies. Helmfile, on the other hand, delves deep into Helm's ecosystem, offering advanced tooling for Helm chart management.

Your choice between the two should depend on the specifics of your infrastructure needs, the technologies you're predominantly using, and your desired management granularity.

Note: Always consider evaluating the tools in your specific context, and it may even be beneficial to use them in tandem if they fit the project's requirements.

"},{"location":"cdev-vs-pulumi/","title":"Cluster.dev vs. Pulumi and Crossplane","text":"

Pulumi and Crossplane are modern alternatives to Terraform.

These are great tools and we admire alternative views on infrastructure management.

What makes Cluster.dev different is its purpose and limitations. Tools like Pulumi, Crossplane, and Terraform are aimed to manage clouds - creating new instances or clusters, cloud resources like databases, and others. While Cluster.dev is designed to manage the whole infrastructure, including those tools as units. That means you can run Terraform, then run Pulumi, or Bash, or Ansible with variables received from Terraform, and then run Crossplane or something else. Cluster.dev is created to connect and manage all infrastructure tools.

With infrastructure tools, users are often restricted to one tool with its specific language or DSL, whereas Cluster.dev allows a limitless number of units and workflow combinations between tools.

For now, Cluster.dev has major support for Terraform only, mostly because we want to provide the best experience for the majority of users. Moreover, Terraform is a de-facto industry standard and already has a lot of modules created by the community. To read more on the subject please refer to Cluster.dev vs. Terraform section.

If you or your company would like to use Pulumi or Crossplane with Cluster.dev, please feel free to contact us.

"},{"location":"cdev-vs-terraform/","title":"Cluster.dev vs. Terraform","text":"

Terraform is a great and popular tool for creating infrastructures. Having been around for more than five years, it supports an impressive number of providers and resources.

Cluster.dev loves Terraform (and even supports export to the plain Terraform code). Still, Terraform lacks a robust relation system, fast plans, automatic reconciliation, and configuration templates.

Cluster.dev, on the other hand, is a managing software that uses Terraform alongside other infrastructure tools as building blocks.

As a higher abstraction, Cluster.dev fixes all listed problems: builds a single source of truth, and combines and orchestrates different infrastructure tools under the same roof.

Let's dig more into the problems that Cluster.dev solves.

"},{"location":"cdev-vs-terraform/#internal-relation","title":"Internal relation","text":"

As Terraform has pretty complex rendering logic, it affects the relations between its pieces. For example, you cannot define a provider for, let's say, Kubernetes or Helm in the same codebase that creates the Kubernetes cluster itself. This forces users to resort to internal hacks or employ a custom wrapper to split the work into two separate deploys, a problem we solved with Cluster.dev.

Another problem with internal relations concerns the huge execution plans that Terraform creates for massive projects. Users who tried to avoid this issue by splitting code into small Terraform repos faced weak \"remote state\" relations and limited reconciliation: it was not possible to trigger a dependent module when the output of the module it relied upon changed.

On the contrary, Cluster.dev allows you to trigger only the necessary parts, as it is a GitOps-first tool.

"},{"location":"cdev-vs-terraform/#templating","title":"Templating","text":"

The second limitation of Terraform is templating: Terraform doesn\u2019t support templating of the tf files it uses, which forces users into hacks that further tangle their Terraform code. Cluster.dev, in contrast, uses templating that allows you to include, let\u2019s say, a Jenkins Terraform module with custom inputs for the dev environment and exclude it for staging and production, all in the same codebase.
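For illustration, a hedged sketch of such a conditional (the unit name, module source, and the environment variable are hypothetical):

```yaml
# Hypothetical template fragment: the Jenkins unit is rendered
# only when the stack variable "environment" equals "dev".
{{- if eq .variables.environment "dev" }}
  - name: jenkins
    type: tfmodule
    source: ./jenkins-module/
    inputs:
      instance_type: {{ .variables.jenkins_instance_type }}
{{- end }}
```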

"},{"location":"cdev-vs-terraform/#third-party","title":"Third Party","text":"

Terraform allows for executing Bash or Ansible. However, it doesn't contain many instruments to control where and how these external tools will be run.

Cluster.dev, as a cloud-native manager, gives all tools the same level of support and integration.
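As a hedged sketch (the unit name, dependency, and playbook path are hypothetical), an external tool can be described as a first-class unit whose execution order and inputs Cluster.dev controls:

```yaml
# Hypothetical shell unit: runs an Ansible playbook only after
# the cluster unit it depends on has been applied.
- name: configure-nodes
  type: shell
  depends_on: this.create-cluster
  apply:
    commands:
      - ansible-playbook -i inventory.ini site.yml
```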

"},{"location":"cdev-vs-terragrunt/","title":"Cluster.dev vs. Terragrunt: A Comparative Analysis","text":"

Both Cluster.dev and Terragrunt have been increasingly popular tools within the DevOps community, particularly among those working with Terraform. However, each tool brings its unique offerings to the table. This article dives deep into a comparison of these tools to provide a clear understanding of their capabilities and respective strengths.

"},{"location":"cdev-vs-terragrunt/#1-introduction","title":"1. Introduction","text":"

Cluster.dev

  • A comprehensive tool designed for managing cloud-native infrastructures using declarative manifests called stack templates.
  • Integrates with various components such as Terraform modules, Kubernetes manifests, Shell scripts, Helm charts, Argo CD/Flux applications, and OPA policies.
  • Provides a unified approach to deploy, test, and distribute components.

Terragrunt

  • An extension for Terraform designed to provide additional utilities to manage Terraform modules.
  • Helps in keeping Terraform configurations DRY (Don\u2019t Repeat Yourself), ensuring modularity and reuse across multiple environments.
  • Offers a layered approach to configuration, simplifying the management of Terraform deployments.
"},{"location":"cdev-vs-terragrunt/#2-core-features-abilities","title":"2. Core Features & Abilities","text":""},{"location":"cdev-vs-terragrunt/#configuration-management","title":"Configuration Management","text":"
  • Cluster.dev: Uses stack templates for configuration. Templates can integrate with various technologies like Terraform, Kubernetes, and Helm. A single template can describe and deploy an entire infrastructure.
  • Terragrunt: Primarily deals with Terraform configurations. Enables reuse and modularity of configurations by linking to Terraform modules and managing inputs/outputs between them.
"},{"location":"cdev-vs-terragrunt/#flexibility-integration","title":"Flexibility & Integration","text":"
  • Cluster.dev: Highly flexible, supporting a multitude of components from Terraform modules to Kubernetes manifests. Its design promotes integrating diverse cloud-native technologies.
  • Terragrunt: Primarily focuses on Terraform. While it offers great utility functions for Terraform, its integration capabilities are confined to Terraform's ecosystem.
"},{"location":"cdev-vs-terragrunt/#workflow-management","title":"Workflow Management","text":"
  • Cluster.dev: Aims for a consistent GitOps Development experience across multiple technologies.
  • Terragrunt: Facilitates workflows within Terraform, such as ensuring consistent remote state management and modular Terraform deployments.
"},{"location":"cdev-vs-terragrunt/#versioning-source-management","title":"Versioning & Source Management","text":"
  • Cluster.dev: Allows pinning versions for components and supports specifying module versions directly within the stack templates.
  • Terragrunt: Uses a version reference for Terraform modules, making it easier to manage and switch between different versions of modules.
"},{"location":"cdev-vs-terragrunt/#special-features","title":"Special Features","text":"
  • Cluster.dev: Provides templating for different technologies, can be used in any cloud or on-premises scenarios, and promotes technology best practices.
  • Terragrunt: Provides utilities like automatic retries, locking, and helper scripts for advanced scenarios in Terraform.
"},{"location":"cdev-vs-terragrunt/#3-when-to-use-which","title":"3. When to Use Which?","text":"

Cluster.dev is ideal for:

  • Managing infrastructures that integrate multiple cloud-native technologies.
  • Projects that need unified deployment, testing, and distribution.
  • Environments that require a consistent GitOps development experience across technologies.

Terragrunt shines when:

  • You're working exclusively or primarily with Terraform.
  • Needing to maintain configurations DRY and modular across multiple environments.
  • Complex Terraform projects that require additional utilities like locking, retries, and advanced configuration management.
"},{"location":"cdev-vs-terragrunt/#4-conclusion","title":"4. Conclusion","text":"

While both Cluster.dev and Terragrunt cater to infrastructure as code and Terraform enthusiasts, their ideal use cases differ. Cluster.dev provides a more holistic approach to cloud-native infrastructure management, incorporating a range of technologies. In contrast, Terragrunt focuses on enhancing the Terraform experience.

Your choice between Cluster.dev and Terragrunt should be influenced by your specific project requirements, the technologies you're using, and the level of integration you desire.

Remember, the choice of tool often depends on the specifics of the project, organizational practices, and individual preferences. Always evaluate tools in the context of your needs.

"},{"location":"cli-commands/","title":"CLI Commands","text":""},{"location":"cli-commands/#general","title":"General","text":"
  • apply Deploy or update an infrastructure according to project configuration.

  • build Build cache dirs for all units in the current project.

  • destroy Destroy an infrastructure deployed by the current project.

  • cdev Refer to Cluster.dev docs for details.

  • help Get help about any command.

  • output Display project outputs.

  • plan Show changes that will be applied in the current project.

"},{"location":"cli-commands/#project","title":"Project","text":"
  • project Manage projects.

  • project info Show detailed information about the current project, such as the number of units and their types, the number of stacks, etc.

  • project create Generate a new project from generator-template in the current directory. The directory should not contain yaml or yml files.

"},{"location":"cli-commands/#secret","title":"Secret","text":"
  • secret Manage secrets.

  • secret ls List secrets in the current project.

  • secret edit [secret_name] Create a new secret or edit the existing one.

  • secret create Generate a new secret in the current directory. The directory must contain the project.

"},{"location":"cli-commands/#state","title":"State","text":"
  • state State operations.

  • state unlock Unlock state forcibly.

  • state pull Download the remote state.

  • state update Update the state of the current project to the current cdev version. Make sure that the state of the project is consistent (run cdev apply with the old version before updating).

"},{"location":"cli-options/","title":"CLI Options","text":""},{"location":"cli-options/#global-flags","title":"Global flags","text":"
  • --cache Use previously cached build directory.

  • -l, --log-level string Set the logging level ('debug'|'info'|'warn'|'error'|'fatal') (default \"info\").

  • --parallelism int Max number of parallel threads for unit applying (default: 3).

"},{"location":"cli-options/#apply-flags","title":"Apply flags","text":"
  • --force Skip interactive approval.

  • -h, --help Help for apply.

  • --ignore-state Apply even if the state has not changed.

"},{"location":"cli-options/#create-flags","title":"Create flags","text":"
  • -h, --help Help for create.

  • --interactive Use interactive mode for project generation.

  • --list-templates Show all available templates for project generator.

"},{"location":"cli-options/#destroy-flags","title":"Destroy flags","text":"
  • --force Skip interactive approval.

  • -h, --help Help for destroy.

  • --ignore-state Destroy current configuration and ignore state.

"},{"location":"cluster-state/","title":"Cluster State","text":"

Cluster.dev state is a data set that describes the current actual state of an infrastructure. Cluster.dev uses the state to map real-world resources to your configuration, keep track of infrastructure changes, and store dependencies between units.

Cluster.dev operates both with cdev and Terraform states. The cdev state is an abstraction atop the Terraform state that saves time during state validation. For more information on Terraform state, refer to the official documentation.

The cdev and Terraform states can be stored locally or remotely; the storage location is defined in the backend. By default, Cluster.dev uses the local backend to store the cluster state unless a remote storage location is specified in project.yaml:

  name: dev\n  kind: Project\n  backend: aws-backend\n  variables:\n    organization: cluster.dev\n    region: eu-central-1\n    state_bucket_name: test-tmpl-dev\n
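The aws-backend referenced above is declared in a separate backend.yaml; a minimal sketch reusing the project variables (see the backend docs for the full set of options):

```yaml
name: aws-backend
kind: Backend
provider: s3
spec:
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}
```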

The state is created during the unit applying stage. When you make changes to a project, Cluster.dev builds a new state from the current project configuration, compares the two configurations (the actual and the desired one), and shows the difference between them, i.e. the units to be modified, applied, or destroyed. Executing cdev apply deploys the changes and updates the state.

Deleting the cdev state is discouraged; however, unlike with the Terraform state, it is not critical, because Cluster.dev units are Terraform-based and have their own states. If deleted, the cdev state will be recreated with the next cdev apply.

To work with the cdev state, use dedicated commands. Manual editing of the state file is highly undesirable.

"},{"location":"env-variables/","title":"Environment Variables","text":"
  • CDEV_TF_BINARY Indicates which Terraform binary to use. Recommended usage: for debugging during template development.
"},{"location":"examples-aws-eks/","title":"AWS-EKS","text":"

Cluster.dev uses stack templates to generate users' projects in a desired cloud. AWS-EKS is a stack template that creates and provisions Kubernetes clusters in AWS cloud by means of Amazon Elastic Kubernetes Service (EKS).

On this page you will find guidance on how to create an EKS cluster on AWS using one of the Cluster.dev prepared samples \u2013 the AWS-EKS stack template. Running the example code will have the following resources created:

  • EKS cluster with addons:

    • cert-manager

    • ingress-nginx

    • external-dns

    • argocd

  • AWS IAM roles for EKS IRSA cert-manager and external-dns

  • (optional, if you use cluster.dev domain) Route53 zone .cluster.dev

  • (optional, if vpc_id is not set) VPC for EKS cluster

  • "},{"location":"examples-aws-eks/#prerequisites","title":"Prerequisites","text":"
    1. Terraform version 1.4+

    2. AWS account

    3. AWS CLI installed

    4. kubectl installed

    5. Cluster.dev client installed

    "},{"location":"examples-aws-eks/#authentication","title":"Authentication","text":"

    Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:

    Info

    Please note that you have to use an IAM user with administrative permissions granted.

    • Environment variables: provide your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the environment variables that represent your AWS Access Key and AWS Secret Key. You can also use the AWS_DEFAULT_REGION or AWS_REGION environment variable to set region, if needed. Example usage:

      export AWS_ACCESS_KEY_ID=\"MYACCESSKEY\"\nexport AWS_SECRET_ACCESS_KEY=\"MYSECRETKEY\"\nexport AWS_DEFAULT_REGION=\"eu-central-1\"\n
    • Shared Credentials File (recommended): set up an AWS configuration file to specify your credentials.

      Credentials file ~/.aws/credentials example:

      [cluster-dev]\naws_access_key_id = MYACCESSKEY\naws_secret_access_key = MYSECRETKEY\n

      Config: ~/.aws/config example:

      [profile cluster-dev]\nregion = eu-central-1\n

      Then export AWS_PROFILE environment variable.

      export AWS_PROFILE=cluster-dev\n
    "},{"location":"examples-aws-eks/#install-aws-client","title":"Install AWS client","text":"

    If you don't have the AWS CLI installed, refer to AWS CLI official installation guide, or use commands from the example:

    curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\naws s3 ls\n
    "},{"location":"examples-aws-eks/#create-s3-bucket","title":"Create S3 bucket","text":"

    Cluster.dev uses S3 bucket for storing states. Create the bucket with the command:

    aws s3 mb s3://cdev-states\n
    "},{"location":"examples-aws-eks/#dns-zone","title":"DNS Zone","text":"

    In AWS-EKS stack template example you need to define a Route 53 hosted zone. Options:

    1. You already have a Route 53 hosted zone.

    2. Create a new hosted zone using a Route 53 documentation example.

    3. Use \"cluster.dev\" domain for zone delegation.

    "},{"location":"examples-aws-eks/#create-project","title":"Create project","text":"
    1. Configure access to AWS and export required variables.

    2. Create locally a project directory, cd into it and execute the command:

        cdev project create https://github.com/shalb/cdev-aws-eks\n
      This will create a new empty project.

    3. Edit variables in the example's files, if necessary:

      • project.yaml - main project config. Sets common global variables for current project such as organization, region, state bucket name etc. See project configuration docs.

      • backend.yaml - configures backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See backend docs.

      • stack.yaml - describes stack configuration. See stack docs.

    4. Run cdev plan to build the project. In the output you will see an infrastructure that is going to be created after running cdev apply.

      Note

      Prior to running cdev apply make sure to look through the stack.yaml file and replace the commented fields with real values. In case you would like to use existing VPC and subnets, uncomment preset options and set correct VPC ID and subnets' IDs. If you leave them as is, Cluster.dev will have VPC and subnets created for you.

    5. Run cdev apply

      Tip

      We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

    6. After cdev apply is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the \"admin\" login and the bcrypted password that you have generated for the stack.yaml.

    7. The output will also display a command for getting kubeconfig to connect to your Kubernetes cluster.

    8. Destroy the cluster and all created resources with the command cdev destroy
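For reference, the stack.yaml edited in step 3 might look like the following sketch; the variable names and template URL here are illustrative, so consult the file shipped with the example for the authoritative list:

```yaml
# Hypothetical stack.yaml sketch for the AWS-EKS example.
name: eks-demo
template: https://github.com/shalb/cdev-aws-eks//templates?ref=main
kind: Stack
backend: aws-backend
variables:
  region: {{ .project.variables.region }}
  organization: {{ .project.variables.organization }}
  domain: cluster.dev
  instance_type: "t3.medium"
```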

    "},{"location":"examples-aws-k3s-prometheus/","title":"AWS-K3s Prometheus","text":"

    The code, text, and screencast were prepared by Oleksii Kurinnyi, a monitoring engineer at SHALB.

    "},{"location":"examples-aws-k3s-prometheus/#goal","title":"Goal","text":"

    In this article we will use and modify the basic AWS-K3s Cluster.dev template to deploy the Prometheus monitoring stack to a cluster. As a result we will have a K3s cluster on AWS with a set of required controllers (Ingress, cert-manager, Argo CD) and installed kube-prometheus stack. The code samples are available in the GitHub repository.

    "},{"location":"examples-aws-k3s-prometheus/#requirements","title":"Requirements","text":""},{"location":"examples-aws-k3s-prometheus/#os","title":"OS","text":"

    We need a client host running Ubuntu 20.04 to follow this guide without any customization.

    "},{"location":"examples-aws-k3s-prometheus/#docker","title":"Docker","text":"

    We should install Docker on the client host.

    "},{"location":"examples-aws-k3s-prometheus/#aws-account","title":"AWS account","text":"
    • Log in to an existing AWS account or register a new one.

    • Select the AWS region where the cluster will be deployed.

    • Add a programmatic access key for a new or existing user. Note that it should be an IAM user with administrative permissions granted.

    • Open bash terminal on the client host.

    • Get an example environment file env to set our AWS credentials:

          curl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/env > env\n
    • Add the programmatic access key to the environment file env:

          editor env\n
    "},{"location":"examples-aws-k3s-prometheus/#create-and-deploy-the-project","title":"Create and deploy the project","text":""},{"location":"examples-aws-k3s-prometheus/#get-example-code","title":"Get example code","text":"
    mkdir -p cdev && mv env cdev/ && cd cdev && chmod 777 ./\nalias cdev='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env clusterdev/cluster.dev:v0.6.3'\ncdev project create https://github.com/shalb/cdev-aws-k3s?ref=v0.3.0\ncurl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/stack.yaml > stack.yaml\ncurl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/project.yaml > project.yaml\ncurl https://raw.githubusercontent.com/shalb/monitoring-examples/main/cdev/monitoring-cluster-blog/monitoring.yaml > monitoring.yaml\n
    "},{"location":"examples-aws-k3s-prometheus/#create-s3-bucket-to-store-the-project-state","title":"Create S3 bucket to store the project state","text":"

    Go to AWS S3 and create a new bucket. Replace the value of the state_bucket_name key in the project.yaml config file with the name of the created bucket:

    editor project.yaml\n
    "},{"location":"examples-aws-k3s-prometheus/#customize-project-settings","title":"Customize project settings","text":"

    All the settings needed for our project are defined in the project.yaml config file. We should customize every variable that has an # example comment at the end of the line.

    "},{"location":"examples-aws-k3s-prometheus/#select-aws-region","title":"Select AWS region","text":"

    We should replace the value of the region key in project.yaml with our region.

    "},{"location":"examples-aws-k3s-prometheus/#set-unique-cluster-name","title":"Set unique cluster name","text":"

    By default we shall use the cluster.dev domain as a root domain for cluster ingresses. We should replace the value of the cluster_name key with a unique string in project.yaml, because the default ingress will use it in the resulting DNS name.

    This command may help us to generate a random name and check whether it is in use:

    CLUSTER_NAME=$(echo \"$(tr -dc a-z0-9 </dev/urandom | head -c 5)\") \ndig argocd.${CLUSTER_NAME}.cluster.dev | grep -q \"^${CLUSTER_NAME}\" || echo \"OK to use cluster_name: ${CLUSTER_NAME}\"\n

    If the cluster name is available we should see the message OK to use cluster_name: ...

    "},{"location":"examples-aws-k3s-prometheus/#set-ssh-keys","title":"Set SSH keys","text":"

    We should have access to cluster nodes via SSH. To add an existing SSH key, we should replace the value of the public_key key in project.yaml. If we have no SSH key, we should create one.

    "},{"location":"examples-aws-k3s-prometheus/#set-argo-cd-password","title":"Set Argo CD password","text":"

    In our project we shall use Argo CD to deploy our applications to the cluster. To secure Argo CD, we should replace the value of the argocd_server_admin_password key with a unique password in project.yaml. The default value is a bcrypted password string.

    To encrypt our custom password we may use an online tool, or encrypt it with the command:

    alias cdev_bash='docker run -it -v $(pwd):/workspace/cluster-dev --env-file=env --network=host --entrypoint=\"\" clusterdev/cluster.dev:v0.6.3 bash'\ncdev_bash\npassword=$(tr -dc a-zA-Z0-9,._! </dev/urandom | head -c 20)\napt install -y apache2-utils && htpasswd -bnBC 10 \"\" ${password} | tr -d ':\\n' ; echo ''\necho \"Password: $password\"\nexit\n
    "},{"location":"examples-aws-k3s-prometheus/#set-grafana-password","title":"Set Grafana password","text":"

    Now we are going to add a custom password for Grafana. To secure Grafana, we should replace the value of the grafana_password key with a unique password in project.yaml. This command may help us generate a random password:

    echo \"$(tr -dc a-zA-Z0-9,._! </dev/urandom | head -c 20)\"\n
    "},{"location":"examples-aws-k3s-prometheus/#run-bash-in-clusterdev-container","title":"Run Bash in Cluster.dev container","text":"

    To avoid installation of all needed tools directly to the client host, we will run all commands inside the Cluster.dev container. In order to execute Bash inside the Cluster.dev container and proceed to deploy, run the command:

    cdev_bash\n
    "},{"location":"examples-aws-k3s-prometheus/#deploy-the-project","title":"Deploy the project","text":"

    Now we should deploy our project to AWS via the cdev command:

    cdev apply -l debug | tee apply.log\n

    In case of successful deployment we should get further instructions on how to access Kubernetes, along with the URLs of the Argo CD and Grafana web UIs. Sometimes, because of DNS propagation delays, we need to wait a while before those web UIs become accessible. In that case we can forward all needed services to the client host via kubectl:

    kubectl port-forward svc/argocd-server -n argocd 18080:443  > /dev/null 2>&1 &\nkubectl port-forward svc/monitoring-grafana -n monitoring 28080:80  > /dev/null 2>&1 &\n

    We may test our forwards via curl:

    curl 127.0.0.1:18080\ncurl 127.0.0.1:28080\n

    If we see no errors from curl, the client host should be able to access these endpoints from any browser.

    "},{"location":"examples-aws-k3s-prometheus/#destroy-the-project","title":"Destroy the project","text":"

    We can delete our cluster with the command:

    cdev apply -l debug\ncdev destroy -l debug | tee destroy.log\n
    "},{"location":"examples-aws-k3s-prometheus/#conclusion","title":"Conclusion","text":"

    Within this article we have learnt how to deploy the Prometheus monitoring stack with the Cluster.dev AWS-K3s template. The resulting stack allows us to monitor workloads in our cluster. We can also reuse the stack as a prepared infrastructure pattern to launch environments for testing monitoring cases, before applying them to production.

    "},{"location":"examples-aws-k3s/","title":"AWS-K3s","text":"

    Cluster.dev uses stack templates to generate users' projects in a desired cloud. AWS-K3s is a stack template that creates and provisions Kubernetes clusters in AWS cloud by means of the k3s utility.

    On this page you will find guidance on how to create a K3s cluster on AWS using one of the Cluster.dev prepared samples \u2013 the AWS-K3s stack template. Running the example code will have the following resources created:

    • K3s cluster with addons:

      • cert-manager

      • ingress-nginx

      • external-dns

      • argocd

    • AWS Key Pair to access the cluster running instances

    • AWS IAM Policy for managing your DNS zone by external-dns

    • (optional, if you use cluster.dev domain) Route53 zone .cluster.dev

    • (optional, if vpc_id is not set) VPC for K3s cluster

    • "},{"location":"examples-aws-k3s/#prerequisites","title":"Prerequisites","text":"
      1. Terraform version 1.4+

      2. AWS account

      3. AWS CLI installed

      4. kubectl installed

      5. Cluster.dev client installed

      "},{"location":"examples-aws-k3s/#authentication","title":"Authentication","text":"

      Cluster.dev requires cloud credentials to manage and provision resources. You can configure access to AWS in two ways:

      Info

      Please note that you have to use an IAM user with administrative permissions granted.

      • Environment variables: provide your credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY, the environment variables that represent your AWS Access Key and AWS Secret Key. You can also use the AWS_DEFAULT_REGION or AWS_REGION environment variable to set region, if needed. Example usage:

        export AWS_ACCESS_KEY_ID=\"MYACCESSKEY\"\nexport AWS_SECRET_ACCESS_KEY=\"MYSECRETKEY\"\nexport AWS_DEFAULT_REGION=\"eu-central-1\"\n
      • Shared Credentials File (recommended): set up an AWS configuration file to specify your credentials.

        Credentials file ~/.aws/credentials example:

        [cluster-dev]\naws_access_key_id = MYACCESSKEY\naws_secret_access_key = MYSECRETKEY\n

        Config: ~/.aws/config example:

        [profile cluster-dev]\nregion = eu-central-1\n

        Then export AWS_PROFILE environment variable.

        export AWS_PROFILE=cluster-dev\n
      "},{"location":"examples-aws-k3s/#install-aws-client","title":"Install AWS client","text":"

      If you don't have the AWS CLI installed, refer to AWS CLI official installation guide, or use commands from the example:

      curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\naws s3 ls\n
      "},{"location":"examples-aws-k3s/#create-s3-bucket","title":"Create S3 bucket","text":"

      Cluster.dev uses S3 bucket for storing states. Create the bucket with the command:

      aws s3 mb s3://cdev-states\n
      "},{"location":"examples-aws-k3s/#dns-zone","title":"DNS Zone","text":"

      In AWS-K3s stack template example you need to define a Route 53 hosted zone. Options:

      1. You already have a Route 53 hosted zone.

      2. Create a new hosted zone using a Route 53 documentation example.

      3. Use \"cluster.dev\" domain for zone delegation.

      "},{"location":"examples-aws-k3s/#create-project","title":"Create project","text":"
      1. Configure access to AWS and export required variables.

      2. Create locally a project directory, cd into it and execute the command:

          cdev project create https://github.com/shalb/cdev-aws-k3s\n
        This will create a new empty project.

      3. Edit variables in the example's files, if necessary:

        • project.yaml - main project config. Sets common global variables for current project such as organization, region, state bucket name etc. See project configuration docs.

        • backend.yaml - configures backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See backend docs.

        • stack.yaml - describes stack configuration. See stack docs.

      4. Run cdev plan to build the project. In the output you will see an infrastructure that is going to be created after running cdev apply.

        Note

        Prior to running cdev apply make sure to look through the stack.yaml file and replace the commented fields with real values. In case you would like to use existing VPC and subnets, uncomment preset options and set correct VPC ID and subnets' IDs. If you leave them as is, Cluster.dev will have VPC and subnets created for you.

      5. Run cdev apply

        Tip

        We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

      6. After cdev apply is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the \"admin\" login and the bcrypted password that you have generated for the stack.yaml.

      7. The output will also display a command for getting kubeconfig to connect to your Kubernetes cluster.

      8. Destroy the cluster and all created resources with the command cdev destroy

      "},{"location":"examples-develop-stack-template/","title":"Develop Stack Template","text":"

      Cluster.dev gives you the freedom to modify existing templates or create your own. You can add inputs and outputs to already preset units, take the output of one unit and send it as an input to another, or write new units and add them to a template.

      In our example we shall use the tmpl-development sample to create a K3s cluster on AWS. Then we shall modify the project stack template by adding new parameters to the units.
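Before stepping through the units, it helps to recall the overall shape of a template.yaml; a trimmed sketch (the provider anchor shape may differ slightly in the actual sample):

```yaml
# Trimmed template.yaml skeleton: a provider anchor shared by
# units, followed by the list of units discussed below.
_p: &provider_aws
- aws:
    region: {{ .variables.region }}

kind: StackTemplate
name: tmpl-development
units:
  - name: create-bucket
    type: tfmodule
    providers: *provider_aws
    # inputs omitted, see step 3
```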

      "},{"location":"examples-develop-stack-template/#workflow-steps","title":"Workflow steps","text":"
      1. Create a project following the steps described in Create Own Project section.

      2. To start working with the stack template, cd into the template directory and open the template.yaml file: ./template/template.yaml.

        Our sample stack template contains 3 units. Now, let's elaborate on each of them and see how we can modify them.

      3. The create-bucket unit uses a remote Terraform module to create an S3 bucket on AWS:

        name: create-bucket\ntype: tfmodule\nproviders: *provider_aws\nsource: terraform-aws-modules/s3-bucket/aws\nversion: \"2.9.0\"\ninputs:\n  bucket: {{ .variables.bucket_name }}\n  force_destroy: true\n

        We can modify the unit by adding more parameters in inputs. For example, let's add some tags using the insertYAML function:

        name: create-bucket\ntype: tfmodule\nproviders: *provider_aws\nsource: terraform-aws-modules/s3-bucket/aws\nversion: \"2.9.0\"\ninputs:\n  bucket: {{ .variables.bucket_name }}\n  force_destroy: true\n  tags: {{ insertYAML .variables.tags }}\n

        Now we can see the tags in stack.yaml:

        name: cdev-tests-local\ntemplate: ./template/\nkind: Stack\nbackend: aws-backend\nvariables:\n  bucket_name: \"tmpl-dev-test\"\n  region: {{ .project.variables.region }}\n  organization: {{ .project.variables.organization }}\n  name: \"Developer\"\n  tags:\n    tag1_name: \"tag 1 value\"\n    tag2_name: \"tag 2 value\"\n

        To check the configuration, run cdev plan --tf-plan command. In the output you can see that Terraform will create a bucket with the defined tags. Run cdev apply -l debug to have the configuration applied.

4. The create-s3-object unit uses a local Terraform module to get the bucket ID and save data inside the bucket. The Terraform module is stored in the s3-file directory, in the main.tf file:

        name: create-s3-object\ntype: tfmodule\nproviders: *provider_aws\nsource: ./s3-file/\ndepends_on: this.create-bucket\ninputs:\n  bucket_name: {{ remoteState \"this.create-bucket.s3_bucket_id\" }}\n  data: |\n    The data that will be saved in the S3 bucket after being processed by the template engine.\n    Organization: {{ .variables.organization }}\n    Name: {{ .variables.name }}\n

  The unit passes 2 parameters. The bucket_name is retrieved from the create-bucket unit by means of the remoteState function. The data parameter uses templating to retrieve the Organization and Name variables from stack.yaml.

  Let's add the bucket_regional_domain_name variable to the data input to obtain the region-specific domain name of the bucket:

        name: create-s3-object\ntype: tfmodule\nproviders: *provider_aws\nsource: ./s3-file/\ndepends_on: this.create-bucket\ninputs:\n  bucket_name: {{ remoteState \"this.create-bucket.s3_bucket_id\" }}\n  data: |\n    The data that will be saved in the s3 bucket after being processed by the template engine.\n    Organization: {{ .variables.organization }}\n    Name: {{ .variables.name }}\n    Bucket regional domain name: {{ remoteState \"this.create-bucket.s3_bucket_bucket_regional_domain_name\" }}\n

  Check the configuration by running the cdev plan command; apply it with cdev apply -l debug.

5. The print_outputs unit retrieves data from two other units by means of the remoteState function: the bucket_domain variable from the create-bucket unit and s3_file_info from the create-s3-object unit:

        name: print_outputs\ntype: printer\ninputs:\n  bucket_domain: {{ remoteState \"this.create-bucket.s3_bucket_bucket_domain_name\" }}\n  s3_file_info: \"To get file use: aws s3 cp {{ remoteState \"this.create-s3-object.file_s3_url\" }} ./my_file && cat my_file\"\n
      6. Having finished your work, run cdev destroy to eliminate the created resources.
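For reference, the full command sequence used in this workflow, collected in one place (the commands are exactly those shown in the steps above):

```shell
# Build the project and preview the Terraform plan for the modified units
cdev plan --tf-plan

# Apply the configuration with debug logging enabled
cdev apply -l debug

# When finished, destroy all created resources
cdev destroy
```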

      "},{"location":"examples-do-k8s/","title":"DO-K8s","text":"

      Cluster.dev uses stack templates to generate users' projects in a desired cloud. DO-K8s is a stack template that creates and provisions Kubernetes clusters in the DigitalOcean cloud.

      On this page you will find guidance on how to create a Kubernetes cluster on DigitalOcean using one of the Cluster.dev prepared samples \u2013 the DO-K8s stack template. Running the example code will have the following resources created:

      • DO Kubernetes cluster with addons:

        • cert-manager

        • argocd

      • (optional, if vpc_id is not set) VPC for Kubernetes cluster

      "},{"location":"examples-do-k8s/#prerequisites","title":"Prerequisites","text":"
      1. Terraform version 1.4+

      2. DigitalOcean account

      3. doctl installed

      4. Cluster.dev client installed

      "},{"location":"examples-do-k8s/#authentication","title":"Authentication","text":"

      Create an access token for a user.

      Info

Make sure to grant the user administrative permissions.

      For details on using DO spaces bucket as a backend, see here.

      "},{"location":"examples-do-k8s/#do-access-configuration","title":"DO access configuration","text":"
      1. Install doctl. For more information, see the official documentation.

        cd ~\nwget https://github.com/digitalocean/doctl/releases/download/v1.57.0/doctl-1.57.0-linux-amd64.tar.gz\ntar xf ~/doctl-1.57.0-linux-amd64.tar.gz\nsudo mv ~/doctl /usr/local/bin\n
      2. Export your DIGITALOCEAN_TOKEN, for details see here.

        export DIGITALOCEAN_TOKEN=\"MyDIGITALOCEANToken\"\n
      3. Export SPACES_ACCESS_KEY_ID and SPACES_SECRET_ACCESS_KEY environment variables, for details see here.

        export SPACES_ACCESS_KEY_ID=\"dSUGdbJqa6xwJ6Fo8qV2DSksdjh...\"\nexport SPACES_SECRET_ACCESS_KEY=\"TEaKjdj8DSaJl7EnOdsa...\"\n
4. Create a Spaces bucket for Terraform states in the chosen region (in the example we use the 'cdev-data' bucket name).

      5. Create a domain in DigitalOcean domains service.

      Info

The project generated by default uses the 'k8s.cluster.dev' zone as an example. Please make sure to change it.
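Step 4 above does not show a command. Since DO Spaces is S3-compatible, the bucket can be created, for example, with the AWS CLI pointed at the Spaces endpoint (a sketch: the 'ams3' region endpoint and the reuse of the SPACES_* keys are assumptions, adjust them to your setup):

```shell
# DO Spaces is S3-compatible, so the AWS CLI can manage Spaces buckets.
# Assumption: the 'ams3' region endpoint; change it to your chosen region.
export AWS_ACCESS_KEY_ID="$SPACES_ACCESS_KEY_ID"
export AWS_SECRET_ACCESS_KEY="$SPACES_SECRET_ACCESS_KEY"
aws s3 mb s3://cdev-data --endpoint-url https://ams3.digitaloceanspaces.com
```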

      "},{"location":"examples-do-k8s/#create-project","title":"Create project","text":"
      1. Configure access to DigitalOcean and export required variables.

2. Create a project directory locally, cd into it, and execute the command:

          cdev project create https://github.com/shalb/cdev-do-k8s\n
        This will create a new empty project.

      3. Edit variables in the example's files, if necessary:

  • project.yaml - the main project config. Sets common global variables for the current project, such as organization, region, state bucket name, etc. See project configuration docs.

        • backend.yaml - configures backend for Cluster.dev states (including Terraform states). Uses variables from project.yaml. See backend docs.

        • stack.yaml - describes stack configuration. See stack docs.

4. Run cdev plan to build the project. In the output you will see the infrastructure that is going to be created after running cdev apply.

        Note

  Prior to running cdev apply make sure to look through the stack.yaml file and replace the commented fields with real values. In case you would like to use an existing VPC and subnets, uncomment the preset options and set the correct VPC ID and subnet IDs. If you leave them as is, Cluster.dev will create a VPC and subnets for you.

      5. Run cdev apply

        Tip

  We highly recommend running cdev apply in debug mode so that you can see the Cluster.dev logs in the output: cdev apply -l debug

6. After cdev apply is successfully executed, in the output you will see the ArgoCD URL of your cluster. Sign in to the console to check whether ArgoCD is up and running and the stack template has been deployed correctly. To sign in, use the "admin" login and the bcrypted password that you have generated for stack.yaml.

7. The output will also display a command for retrieving the kubeconfig and connecting to your Kubernetes cluster.

      8. Destroy the cluster and all created resources with the command cdev destroy
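The bcrypted password mentioned in step 6 can be generated, for instance, with the htpasswd utility from apache2-utils (an assumption; any bcrypt generator will do):

```shell
# Generate a bcrypt hash for the ArgoCD admin password
# (-B = bcrypt, -C 10 = cost factor, -n = print to stdout, -b = take password from args).
htpasswd -nbBC 10 "" "MySecurePassword" | tr -d ':\n'
```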

      "},{"location":"examples-gcp-gke/","title":"GCP-GKE","text":"

      Cluster.dev uses stack templates to generate users' projects in a desired cloud. GCP-GKE is a stack template that creates and provisions Kubernetes clusters in GCP cloud by means of Google Kubernetes Engine (GKE).

      On this page you will find guidance on how to create a GKE cluster on GCP using one of the Cluster.dev prepared samples \u2013 the GCP-GKE stack template. Running the example code will have the following resources created:

      • VPC

      • GKE Kubernetes cluster with addons:

        • cert-manager

        • ingress-nginx

        • external-secrets (with GCP Secret Manager backend)

        • external-dns

        • argocd

      "},{"location":"examples-gcp-gke/#prerequisites","title":"Prerequisites","text":"
      1. Terraform version >= 1.4
      2. GCP account and project
      3. GCloud CLI installed and configured with your GCP account
      4. kubectl installed
      5. Cluster.dev client installed
      6. Parent Domain
      "},{"location":"examples-gcp-gke/#before-you-begin","title":"Before you begin","text":"
      1. Create or select a Google Cloud project:

        gcloud projects create cdev-demo\ngcloud config set project cdev-demo\n

      2. Enable billing for your project.

      3. Enable the Google Kubernetes Engine API.

      4. Enable Secret Manager:

        gcloud services enable secretmanager.googleapis.com\n

      "},{"location":"examples-gcp-gke/#quick-start","title":"Quick Start","text":"
1. Clone the example project:
        git clone https://github.com/shalb/cdev-gcp-gke.git\ncd examples/\n
      2. Update project.yaml:
        name: demo-project\nkind: Project\nbackend: gcs-backend\nvariables:\n  organization: my-organization\n  project: cdev-demo\n  region: us-west1\n  state_bucket_name: gke-demo-state\n  state_bucket_prefix: demo\n
      3. Create GCP bucket for Terraform backend:
        gcloud projects create cdev-demo\ngcloud config set project cdev-demo\ngsutil mb gs://gke-demo-state\n
      4. Edit variables in the example's files, if necessary.
      5. Run cdev plan
      6. Run cdev apply
7. Set up DNS delegation for the subdomain by creating NS records for it in the parent domain. Run cdev output:

        cdev output\n12:58:52 [INFO] Printer: 'cluster.outputs', Output:\ndomain = demo.gcp.cluster.dev.\nname_server = [\n  \"ns-cloud-d1.googledomains.com.\",\n  \"ns-cloud-d2.googledomains.com.\",\n  \"ns-cloud-d3.googledomains.com.\",\n  \"ns-cloud-d4.googledomains.com.\"\n]\nregion = us-west1\n
  Add the records from the name_server list to the parent domain.

      8. Authorize cdev/Terraform to interact with GCP via SDK:

        gcloud auth application-default login\n

      9. Connect to GKE cluster:
        gcloud components install gke-gcloud-auth-plugin\ngcloud container clusters get-credentials demo-cluster --zone us-west1-a --project cdev-demo\n
10. Retrieve the ArgoCD admin password and, if needed, install the ArgoCD CLI:
        kubectl -n argocd get secret argocd-initial-admin-secret  -o jsonpath=\"{.data.password}\" | base64 -d; echo\n
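The DNS delegation from step 7 can be scripted with gcloud, assuming the parent domain is also hosted in Cloud DNS (the zone name 'parent-zone' below is a hypothetical placeholder; the name servers are taken from the sample cdev output above):

```shell
# Create NS records for the subdomain in the parent Cloud DNS zone.
# 'parent-zone' is a placeholder for your parent zone's name.
gcloud dns record-sets create "demo.gcp.cluster.dev." \
  --zone="parent-zone" \
  --type="NS" \
  --ttl="300" \
  --rrdatas="ns-cloud-d1.googledomains.com.,ns-cloud-d2.googledomains.com.,ns-cloud-d3.googledomains.com.,ns-cloud-d4.googledomains.com."
```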
      "},{"location":"examples-modify-aws-eks/","title":"Modify AWS-EKS","text":"

The code and text were prepared by Orest Kapko, a DevOps engineer at SHALB.

      In this article we shall customize the basic AWS-EKS Cluster.dev template in order to add some features.

      "},{"location":"examples-modify-aws-eks/#workflow-steps","title":"Workflow steps","text":"
      1. Go to the GitHub page via the AWS-EKS link and download the stack template.

2. If you are not planning to use some of the preset addons, edit aws-eks.yaml to exclude them. In our case, these were cert-manager, cert-manager-issuer, ingress-nginx, argocd, and argocd_apps.

      3. In order to dynamically retrieve the AWS account ID parameter, we have added a data block to our stack template:

          - name: data\n    type: tfmodule\n    providers: *provider_aws\n    depends_on: this.eks\n    source: ./terraform-submodules/data/\n
        {{ remoteState \"this.data.account_id\" }}\n

  The block is also used in the eks_auth ConfigMap and expands its functionality with groups of users:

          apiVersion: v1\n  data:\n    mapAccounts: |\n      []\n    mapRoles: |\n      - \"groups\":\n        - \"system:bootstrappers\"\n        - \"system:nodes\"\n        \"rolearn\": \"{{ remoteState \"this.eks.worker_iam_role_arn\" }}\"\n        \"username\": \"system:node:{{ \"{{EC2PrivateDNSName}}\" }}\"\n    - \"groups\":\n      - \"system:masters\"\n      \"rolearn\": \"arn:aws:iam::{{ remoteState \"this.data.account_id\" }}:role/OrganizationAccountAccessRole\"\n      \"username\": \"general-role\"\n    mapUsers: |\n      - \"groups\":\n        - \"system:masters\"\n        \"userarn\": \"arn:aws:iam::{{ remoteState \"this.data.account_id\" }}:user/jenkins-eks\"\n        \"username\": \"jenkins-eks\"\n  kind: ConfigMap\n  metadata:\n    name: aws-auth\n    namespace: kube-system\n

        The data block configuration in main.tf: data \"aws_caller_identity\" \"current\" {}

        In output.tf:

  output \"account_id\" { value = data.aws_caller_identity.current.account_id }

4. As it was decided to use the Traefik Ingress controller instead of the basic Nginx one, we spun up two load balancers (an internet-facing ALB for public ingresses and an internal ALB for private ingresses) and the security groups necessary for their operation, and described them in the albs unit. The unit configuration within the template:

        {{- if .variables.ingressControllerEnabled }}\n- name: albs\n  type: tfmodule\n  providers: *provider_aws\n  source: ./terraform-submodules/albs/\n  inputs:\n    main_domain: {{ .variables.alb_main_domain }}\n    main_external_domain: {{ .variables.alb_main_external_domain }}\n    main_vpc: {{ .variables.vpc_id }}\n    acm_external_certificate_arn: {{ .variables.alb_acm_external_certificate_arn }}\n    private_subnets: {{ insertYAML .variables.private_subnets }}\n    public_subnets: {{ insertYAML .variables.public_subnets }}\n    environment: {{ .name }}\n{{- end }}\n
5. We have also created a dedicated unit for testing Ingress through Route 53 records:

        data \"aws_route53_zone\" \"existing\" {\n  name         = var.domain\n  private_zone = var.private_zone\n}\nmodule \"records\" {\n  source  = \"terraform-aws-modules/route53/aws//modules/records\"\n  version = \"~> 2.0\"\n  zone_id      = data.aws_route53_zone.existing.zone_id\n  private_zone = var.private_zone\n  records = [\n    {\n      name    = \"test-ingress-eks\"\n      type    = \"A\"\n      alias   = {\n        name    = var.private_lb_dns_name\n        zone_id = var.private_lb_zone_id\n        evaluate_target_health = false\n      }\n    },\n    {\n      name    = \"test-ingress-2-eks\"\n      type    = \"A\"\n      alias   = {\n        name    = var.private_lb_dns_name\n        zone_id = var.private_lb_zone_id\n        evaluate_target_health = false\n      }\n    }\n  ]\n}\n

        The unit configuration within the template:

         {{- if .variables.ingressControllerRoute53Enabled }}\n - name: route53_records\n   type: tfmodule\n   providers: *provider_aws\n   source: ./terraform-submodules/route53_records/\n   inputs:\n     private_zone: {{ .variables.private_zone }}\n     domain: {{ .variables.domain }}\n     private_lb_dns_name: {{ remoteState \"this.albs.eks_ingress_lb_dns_name\" }}\n     public_lb_dns_name: {{ remoteState \"this.albs.eks_public_lb_dns_name\" }}\n     private_lb_zone_id: {{ remoteState \"this.albs.eks_ingress_lb_zone_id\" }}\n{{- end }}\n
6. Also, to map service accounts to AWS IAM roles, we have created a separate template for IRSA (IAM Roles for Service Accounts). Example configuration for a cluster autoscaler:

          kind: StackTemplate\n  name: aws-eks\n  units:\n    {{- if .variables.cluster_autoscaler_irsa.enabled }}\n    - name: iam_assumable_role_autoscaling_autoscaler\n      type: tfmodule\n      source: \"terraform-aws-modules/iam/aws//modules/iam-assumable-role-with-oidc\"\n      version: \"~> 3.0\"\n      providers: *provider_aws\n      inputs:\n        role_name: \"eks-autoscaling-autoscaler-{{ .variables.cluster_name }}\"\n        create_role: true\n        role_policy_arns:\n          - {{ remoteState \"this.iam_policy_autoscaling_autoscaler.arn\" }}\n        oidc_fully_qualified_subjects: {{ insertYAML .variables.cluster_autoscaler_irsa.subjects }}\n        provider_url: {{ .variables.provider_url }}\n    - name: iam_policy_autoscaling_autoscaler\n      type: tfmodule\n      source: \"terraform-aws-modules/iam/aws//modules/iam-policy\"\n      version: \"~> 3.0\"\n      providers: *provider_aws\n      inputs:\n        name: AllowAutoScalingAccessforClusterAutoScaler-{{ .variables.cluster_name }}\n        policy: {{ insertYAML .variables.cluster_autoscaler_irsa.policy }}\n    {{- end }}\n

      In our example we have modified the prepared AWS-EKS stack template by adding a customized data block and excluding some addons.

      We have also changed the template's structure by placing the Examples directory into a separate repository, in order to decouple the abstract template from its implementation for concrete setups. This enabled us to use the template via Git and mark the template's version with Git tags.

      "},{"location":"examples-overview/","title":"Overview","text":"

      In the Examples section you will find ready-to-use Cluster.dev samples that will help you bootstrap cloud infrastructures. Running the sample code will get you a provisioned Kubernetes cluster with add-ons in the cloud. The available options include:

      • EKS cluster in AWS

      • K3s cluster in AWS

      • GKE cluster in GCP

      • Kubernetes cluster in DigitalOcean

      • Dedicated monitoring cluster in AWS

      You will also find examples on how to customize the existing templates in order to expand their functionality:

      • Modify AWS-EKS template

Also, please check our Medium blog:

      • GitOps for Terraform and Helm with Cluster.dev
      • Building UI for DevOps operations with Cluster.dev and Streamlit
      "},{"location":"generators-overview/","title":"Overview","text":"

Generators are part of the Cluster.dev functionality. They enable users to create parts of infrastructure just by filling in stack variables in script dialogues, with no infrastructure coding required. This simplifies the creation of new stacks for developers who may lack Ops skills, and can be useful for quick infrastructure deployment from ready-made parts (units).

Generators create a project from a preset profile: a set of data predefined as a project, with variables for the stack template. Each template may have a generator profile, which is stored in the .cdev-metadata/generator directory.

      "},{"location":"generators-overview/#how-it-works","title":"How it works","text":"

The generator creates backend.yaml, project.yaml, and infra.yaml by populating the files with user-entered values. The requested stack variables are listed in config.yaml under options:

        options:\n    - name: name\n      description: Project name\n      regex: \"^[a-zA-Z][a-zA-Z_0-9\\\\-]{0,32}$\"\n      default: \"demo-project\"\n    - name: organization\n      description: Organization name\n      regex: \"^[a-zA-Z][a-zA-Z_0-9\\\\-]{0,64}$\"\n      default: \"my-organization\"\n    - name: region\n      description: DigitalOcean region\n      regex: \"^[a-zA-Z][a-zA-Z_0-9\\\\-]{0,32}$\"\n      default: \"ams3\"\n    - name: domain\n      description: DigitalOcean DNS zone domain name\n      regex: \"^[a-zA-Z0-9][a-zA-Z0-9-\\\\.]{1,61}[a-zA-Z0-9]\\\\.[a-zA-Z]{2,}$\"\n      default: \"cluster.dev\"\n    - name: bucket_name\n      description: DigitalOcean spaces bucket name for states\n      regex: \"^[a-zA-Z][a-zA-Z0-9\\\\-]{0,64}$\"\n      default: \"cdev-state\"\n

      In options you can define default parameters and add other variables to the generator's list. The variables included by default are project name, organization name, region, domain and bucket name.

      In config.yaml you can also define a help message text.

      "},{"location":"get-started-cdev-aws/","title":"Getting Started with Cluster.dev on AWS","text":"

      This guide will walk you through the steps to deploy your first project with Cluster.dev on AWS.

                          +-------------------------+\n                          | Project.yaml            |\n                          |  - region               |\n                          +------------+------------+\n                                       |\n                                       |\n                          +------------v------------+\n                          | Stack.yaml              |\n                          |  - bucket_name          |\n                          |  - region               |\n                          |  - content              |\n                          +------------+------------+\n                                       |\n                                       |\n+--------------------------------------v-----------------------------------------+\n| StackTemplate: s3-website                                                      |\n|                                                                                |\n|  +---------------------+     +-------------------------+     +--------------+  |\n|  | bucket              |     | web-page-object         |     | outputs      |  |\n|  | type: tfmodule      |     | type: tfmodule          |     | type: printer|  |\n|  | inputs:             |     | inputs:                 |     | outputs:     |  |\n|  |  bucket_name        |     | bucket (from bucket ID) |     | websiteUrl   |  |\n|  |  region             |     | content                 |     +--------------+  |\n|  |  website settings   |     |                         |             |         |\n|  +---------------------+     +-----------^-------------+             |         |\n|        |                          | bucket ID                        |         |\n|        |                          | via remoteState                  |         |\n+--------|--------------------------|----------------------------------|---------+\n         |                          |                                  |\n         v                          v                                  v\n   AWS S3 Bucket              AWS S3 Object (index.html)       WebsiteUrl Output\n
      "},{"location":"get-started-cdev-aws/#prerequisites","title":"Prerequisites","text":"

      Ensure the following are installed and set up:

      • Terraform: Version 1.4 or above. Install Terraform.
      terraform --version\n
      • AWS CLI:
      curl \"https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip\" -o \"awscliv2.zip\"\nunzip awscliv2.zip\nsudo ./aws/install\naws --version\n
      • Cluster.dev client:
      curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh\ncdev --version\n
      "},{"location":"get-started-cdev-aws/#authentication","title":"Authentication","text":"

      Choose one of the two methods below:

      1. Shared Credentials File (recommended):

        • Populate ~/.aws/credentials:

          [cluster-dev]\naws_access_key_id = YOUR_AWS_ACCESS_KEY\naws_secret_access_key = YOUR_AWS_SECRET_KEY\n
        • Configure ~/.aws/config:

          [profile cluster-dev]\nregion = eu-central-1\n
        • Set the AWS profile:

          export AWS_PROFILE=cluster-dev\n
      2. Environment Variables:

      export AWS_ACCESS_KEY_ID=\"YOUR_AWS_ACCESS_KEY\"\nexport AWS_SECRET_ACCESS_KEY=\"YOUR_AWS_SECRET_KEY\"\nexport AWS_DEFAULT_REGION=\"eu-central-1\"\n
      "},{"location":"get-started-cdev-aws/#creating-an-s3-bucket-for-storing-state","title":"Creating an S3 Bucket for Storing State","text":"
      aws s3 mb s3://cdev-states\n
      "},{"location":"get-started-cdev-aws/#setting-up-your-project","title":"Setting Up Your Project","text":""},{"location":"get-started-cdev-aws/#project-configuration-projectyaml","title":"Project Configuration (project.yaml)","text":"
      • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
      • It points to aws-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the S3 bucket specified in backend.yaml.
      • Project-level variables are defined here and can be referenced in other configurations.
      cat <<EOF > project.yaml\nname: dev\nkind: Project\nbackend: aws-backend\nvariables:\n  organization: cluster.dev\n  region: eu-central-1\n  state_bucket_name: cdev-states\nEOF\n
      "},{"location":"get-started-cdev-aws/#backend-configuration-backendyaml","title":"Backend Configuration (backend.yaml)","text":"

      This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. Given the backend type as S3, it's clear that AWS is the chosen cloud provider.

      cat <<EOF > backend.yaml\nname: aws-backend\nkind: Backend\nprovider: s3\nspec:\n  bucket: {{ .project.variables.state_bucket_name }}\n  region: {{ .project.variables.region }}\nEOF\n
      "},{"location":"get-started-cdev-aws/#stack-configuration-stackyaml","title":"Stack Configuration (stack.yaml)","text":"
      • This represents a distinct set of infrastructure resources to be provisioned.
      • It references a local template (in this case, the previously provided stack template) to know what resources to create.
      • Variables specified in this file will be passed to the Terraform modules called in the template.
      • The content variable here is especially useful; it dynamically populates the content of an S3 bucket object (a webpage in this case) using the local index.html file.
      cat <<EOF > stack.yaml\nname: s3-website\ntemplate: ./template/\nkind: Stack\nbackend: aws-backend\nvariables:\n  bucket_name: \"tmpl-dev-test\"\n  region: {{ .project.variables.region }}\n  content: |\n    {{- readFile \"./files/index.html\" | nindent 4 }}\nEOF\n
      "},{"location":"get-started-cdev-aws/#stack-template-templateyaml","title":"Stack Template (template.yaml)","text":"

      The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

      mkdir template\ncat <<EOF > template/template.yaml\n_p: &provider_aws\n- aws:\n    region: {{ .variables.region }}\n\nname: s3-website\nkind: StackTemplate\nunits:\n  -\n    name: bucket\n    type: tfmodule\n    providers: *provider_aws\n    source: terraform-aws-modules/s3-bucket/aws\n    inputs:\n      bucket: {{ .variables.bucket_name }}\n      force_destroy: true\n      acl: \"public-read\"\n      control_object_ownership: true\n      object_ownership: \"BucketOwnerPreferred\"\n      attach_public_policy: true\n      block_public_acls: false\n      block_public_policy: false\n      ignore_public_acls: false\n      restrict_public_buckets: false\n      website:\n        index_document: \"index.html\"\n        error_document: \"error.html\"\n  -\n    name: web-page-object\n    type: tfmodule\n    providers: *provider_aws\n    source: \"terraform-aws-modules/s3-bucket/aws//modules/object\"\n    version: \"3.15.1\"\n    inputs:\n      bucket: {{ remoteState \"this.bucket.s3_bucket_id\" }}\n      key: \"index.html\"\n      acl: \"public-read\"\n      content_type: \"text/html\"\n      content: |\n        {{- .variables.content | nindent 8 }}\n\n  -\n    name: outputs\n    type: printer\n    depends_on: this.web-page-object\n    outputs:\n      websiteUrl: http://{{ .variables.bucket_name }}.s3-website.{{ .variables.region }}.amazonaws.com/\nEOF\n
      Click to expand explanation of the Stack Template 1. Provider Definition (_p) This section employs a YAML anchor, pre-setting the cloud provider and region for the resources in the stack. For this example, AWS is the designated provider, and the region is dynamically passed from the variables:
      _p: &provider_aws\n- aws:\n    region: {{ .variables.region }}\n
      2. Units The units section is where the real action is. Each unit is a self-contained \"piece\" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource. Bucket Unit This unit is utilizing the `terraform-aws-modules/s3-bucket/aws` module to provision an S3 bucket. Inputs for the module, such as the bucket name, are populated using variables passed into the Stack.
      name: bucket\ntype: tfmodule\nproviders: *provider_aws\nsource: terraform-aws-modules/s3-bucket/aws\ninputs:\n  bucket: {{ .variables.bucket_name }}\n  ...\n
      Web-page Object Unit After the bucket is created, this unit takes on the responsibility of creating a web-page object inside it. This is done using a sub-module from the S3 bucket module specifically designed for object creation. A notable feature is the remoteState function, which dynamically pulls the ID of the S3 bucket created by the previous unit:
      name: web-page-object\ntype: tfmodule\nproviders: *provider_aws\nsource: \"terraform-aws-modules/s3-bucket/aws//modules/object\"\ninputs:\n  bucket: {{ remoteState \"this.bucket.s3_bucket_id\" }}\n  ...\n
      Outputs Unit Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the website URL of the hosted S3 website.
      name: outputs\ntype: printer\ndepends_on: this.web-page-object\noutputs:\n  websiteUrl: http://{{ .variables.bucket_name }}.s3-website.{{ .variables.region }}.amazonaws.com/\n
      3. Variables and Data Flow The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables."},{"location":"get-started-cdev-aws/#sample-website-file-filesindexhtml","title":"Sample Website File (files/index.html)","text":"
      mkdir files\ncat <<EOF > files/index.html\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n    <title>Cdev Demo Website Home Page</title>\n</head>\n<body>\n  <h1>Welcome to my website</h1>\n  <p>Now hosted on Amazon S3!</p>\n  <h2>See you!</h2>\n</body>\n</html>\nEOF\n
      "},{"location":"get-started-cdev-aws/#deploying-with-clusterdev","title":"Deploying with Cluster.dev","text":"
      • Plan the deployment:

        cdev plan\n
      • Apply the changes:

        cdev apply\n
      "},{"location":"get-started-cdev-aws/#example-screen-cast","title":"Example Screen Cast","text":""},{"location":"get-started-cdev-aws/#clean-up","title":"Clean up","text":"

To remove the created resources, run the command:

      cdev destroy\n
      "},{"location":"get-started-cdev-aws/#more-examples","title":"More Examples","text":"

      In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

      • EKS cluster in AWS
      • Modify AWS-EKS
      • K3s cluster in AWS
      • AWS-K3s Prometheus
      "},{"location":"get-started-cdev-azure/","title":"Getting Started with Cluster.dev on Azure Cloud","text":"

      This guide will walk you through the steps to deploy your first project with Cluster.dev on Azure Cloud.

                          +-------------------------+\n                          | Project.yaml            |\n                          |  - location             |\n                          +------------+------------+\n                                       |\n                                       |\n                          +------------v------------+\n                          | Stack.yaml              |\n                          |  - storage_account_name |\n                          |  - location             |\n                          |  - file_content         |\n                          +------------+------------+\n                                       |\n                                       |\n+--------------------------------------v----------------------------------------+\n| StackTemplate: azure-static-website                                           |\n|                                                                               |\n|  +---------------------+     +---------------------+     +-----------------+  |\n|  | resource-group      |     | storage-account     |     | web-page-blob   |  |\n|  | type: tfmodule      |     | type: tfmodule      |     | type: tfmodule  |  |\n|  | inputs:             |     | inputs:             |     | inputs:         |  |\n|  |  location           |     | storage_account_name|     |  file_content   |  |\n|  |  resource_group_name|     |                     |     |                 |  |\n|  +---------------------+     +----------^----------+     +--------^--------+  |\n|        |                       | resource-group           | storage-account   |\n|        |                       | name & location          | name              |\n|        |                       | via remoteState          | via remoteState   |\n+--------|-----------------------|--------------------------|-------------------+\n         |                       |                          |\n         v                       v                          v\nAzure Resource Group    Azure Storage Account      Azure Blob (in $web container)\n                                 |\n                                 v\n                       Printer: Static WebsiteUrl\n
      "},{"location":"get-started-cdev-azure/#prerequisites","title":"Prerequisites","text":"

      Ensure the following are installed and set up:

      • Terraform: Version 1.4 or above. Install Terraform.
      terraform --version\n
      • Azure CLI:
      curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash\naz --version\n
      • Cluster.dev client:
      curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh\ncdev --version\n
      "},{"location":"get-started-cdev-azure/#authentication","title":"Authentication","text":"

      Before using the Azure CLI, you'll need to authenticate:

       az login --use-device-code\n

      Follow the prompt to sign in.

      "},{"location":"get-started-cdev-azure/#creating-an-azure-blob-storage-for-storing-state","title":"Creating an Azure Blob Storage for Storing State","text":"

      First, create a resource group:

      az group create --name cdevResourceGroup --location EastUS\n

      Then, create a storage account:

      az storage account create --name cdevstates --resource-group cdevResourceGroup --location EastUS --sku Standard_LRS\n

      Then, create a storage container:

      az storage container create --name tfstate --account-name cdevstates\n
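Storage account names are globally unique and restricted to 3-24 lowercase letters and digits, so `az storage account create` fails on an invalid name. A minimal pre-flight sketch (the `valid_sa_name` helper is illustrative, not part of Cluster.dev or the Azure CLI):

```shell
# Hypothetical helper, not part of Cluster.dev: validate an Azure storage
# account name before calling `az storage account create`.
# Rules: 3-24 characters, lowercase letters and digits only.
valid_sa_name() {
  case "$1" in
    *[!a-z0-9]*) return 1 ;;   # reject anything but lowercase letters/digits
  esac
  [ "${#1}" -ge 3 ] && [ "${#1}" -le 24 ]
}

valid_sa_name "cdevstates" && echo "name looks valid"
valid_sa_name "Cdev_States" || echo "name rejected"
```

Running the check locally avoids a round-trip to Azure just to discover a naming mistake.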
      "},{"location":"get-started-cdev-azure/#setting-up-your-project","title":"Setting Up Your Project","text":""},{"location":"get-started-cdev-azure/#project-configuration-projectyaml","title":"Project Configuration (project.yaml)","text":"
      • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
      • It points to default as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored locally.
      • Project-level variables are defined here and can be referenced in other configurations.
      cat <<EOF > project.yaml\nname: dev\nkind: Project\nbackend: default\nvariables:\n  organization: cluster.dev\n  location: eastus\n  state_storage_account_name: cdevstates\nEOF\n
      "},{"location":"get-started-cdev-azure/#backend-configuration-backendyaml","title":"Backend Configuration (backend.yaml)","text":"

This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. Given the backend type azurerm, the states will be stored in the Azure Blob Storage container created above.

      cat <<EOF > backend.yaml\nname: azure-backend\nkind: Backend\nprovider: azurerm\nspec:\n  resource_group_name: cdevResourceGroup\n  storage_account_name: {{ .project.variables.state_storage_account_name }}\n  container_name: tfstate\nEOF\n
      "},{"location":"get-started-cdev-azure/#stack-configuration-stackyaml","title":"Stack Configuration (stack.yaml)","text":"
      • This represents a distinct set of infrastructure resources to be provisioned.
      • It references a local template (in this case, the previously provided stack template) to know what resources to create.
      • Variables specified in this file will be passed to the Terraform modules called in the template.
      • The file_content variable here is especially useful; it dynamically populates the content of an Azure Blob object (a webpage in this case) using the local index.html file.
      cat <<EOF > stack.yaml\nname: az-blob-website\ntemplate: ./template/\nkind: Stack\nbackend: azure-backend\nvariables:\n  storage_account_name: \"tmpldevtest\"\n  resource_group_name: \"demo-resource-group\"\n  location: {{ .project.variables.location }}\n  file_content: |\n    {{- readFile \"./files/index.html\" | nindent 4 }}\nEOF\n
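The `readFile ... | nindent 4` pipeline in stack.yaml reads the local file and indents every line by four spaces so the multi-line HTML stays valid YAML under `file_content: |`. A rough shell illustration of the indentation step (assuming Sprig-style nindent semantics, which also prepend a newline):

```shell
# Illustrative only: approximate what `nindent 4` does to the file content,
# i.e. prefix every line with four spaces.
printf '<h1>Hello</h1>\n<p>from Azure</p>\n' | sed 's/^/    /'
```

Each input line comes out prefixed with four spaces, which is exactly the indentation the block scalar in stack.yaml expects.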
      "},{"location":"get-started-cdev-azure/#stack-template-templateyaml","title":"Stack Template (template.yaml)","text":"

      The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

      mkdir template\ncat <<EOF > template/template.yaml\n_p: &provider_azurerm\n- azurerm:\n    features:\n      resource_group:\n        prevent_deletion_if_contains_resources: false\n\n_globals: &global_settings\n  default_region: \"region1\"\n  regions:\n    region1: {{ .variables.location }}\n  prefixes: [\"dev\"]\n  random_length: 4\n  passthrough: false\n  use_slug: false\n  inherit_tags: false\n\n_version: &module_version 5.7.5\n\nname: azure-static-website\nkind: StackTemplate\nunits:\n  -\n    name: resource-group\n    type: tfmodule\n    providers: *provider_azurerm\n    source: aztfmod/caf/azurerm//modules/resource_group\n    version: *module_version\n    inputs:\n      global_settings: *global_settings\n      resource_group_name: {{ .variables.resource_group_name }}\n      settings:\n        region: \"region1\"\n  -\n    name: storage-account\n    type: tfmodule\n    providers: *provider_azurerm\n    source: aztfmod/caf/azurerm//modules/storage_account\n    version: *module_version\n    inputs:\n      base_tags: false\n      global_settings: *global_settings\n      client_config:\n        key: demo\n      resource_group:\n        name: {{ remoteState \"this.resource-group.name\" }}\n        location: {{ remoteState \"this.resource-group.location\" }}\n      storage_account:\n        name: {{ .variables.storage_account_name }}\n        account_kind: \"StorageV2\"\n        account_tier: \"Standard\"\n        static_website:\n          index_document: \"index.html\"\n          error_404_document: \"error.html\"\n      var_folder_path: \"./\"\n  -\n    name: web-page-blob\n    type: tfmodule\n    providers: *provider_azurerm\n    source: aztfmod/caf/azurerm//modules/storage_account/blob\n    version: *module_version\n    inputs:\n      settings:\n        name: \"index.html\"\n        content_type: \"text/html\"\n        source_content: |\n          {{- .variables.file_content | nindent 12 }}\n      storage_account_name: {{ remoteState 
\"this.storage-account.name\" }}\n      storage_container_name: \"$web\"\n      var_folder_path: \"./\"\n  -\n    name: outputs\n    type: printer\n    depends_on: this.web-page-blob\n    outputs:\n      websiteUrl: https://{{ remoteState \"this.storage-account.primary_web_host\" }}\nEOF\n
      Click to expand explanation of the Stack Template 1. Provider Definition (_p) This section uses a YAML anchor, defining the cloud provider settings shared by all resources in the stack. For this case, Azure is the chosen provider; the location itself is retrieved from the variables via the _globals anchor:
      _p: &provider_azurerm\n- azurerm:\n    features:\n      resource_group:\n        prevent_deletion_if_contains_resources: false\n
      2. Units The units section is where the real action is. Each unit is a self-contained \"piece\" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource. Storage Account Unit This unit leverages the `aztfmod/caf/azurerm//modules/storage_account` module to provision an Azure Blob Storage account. Inputs for the module, such as the storage account name, are filled using variables passed into the Stack.
      name: storage-account\ntype: tfmodule\nproviders: *provider_azurerm\nsource: aztfmod/caf/azurerm//modules/storage_account\ninputs:\n  name: {{ .variables.storage_account_name }}\n  ...\n
      Web-page Object Unit Upon creating the storage account, this unit takes the role of establishing a web-page object inside it. This action is carried out using a sub-module from the storage account module specifically designed for blob creation. A standout feature is the remoteState function, which dynamically extracts the name of the Azure Storage account produced by the preceding unit:
      name: web-page-blob\ntype: tfmodule\nproviders: *provider_azurerm\nsource: aztfmod/caf/azurerm//modules/storage_account/blob\ninputs:\n  storage_account_name: {{ remoteState \"this.storage-account.name\" }}\n  ...\n
      Outputs Unit Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the website URL of the hosted Azure website.
      name: outputs\ntype: printer\ndepends_on: this.web-page-blob\noutputs:\n  websiteUrl: https://{{ remoteState \"this.storage-account.primary_web_host\" }}\n
      3. Variables and Data Flow The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables."},{"location":"get-started-cdev-azure/#sample-website-file-filesindexhtml","title":"Sample Website File (files/index.html)","text":"
      mkdir files\ncat <<EOF > files/index.html\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n    <title>Cdev Demo Website Home Page</title>\n</head>\n<body>\n  <h1>Welcome to my website</h1>\n  <p>Now hosted on Azure!</p>\n  <h2>See you!</h2>\n</body>\n</html>\nEOF\n
      "},{"location":"get-started-cdev-azure/#deploying-with-clusterdev","title":"Deploying with Cluster.dev","text":"
      • Plan the deployment:

        cdev plan\n
      • Apply the changes:

        cdev apply\n
      "},{"location":"get-started-cdev-azure/#example-screen-cast","title":"Example Screen Cast","text":""},{"location":"get-started-cdev-azure/#clean-up","title":"Clean up","text":"

      To remove the created resources, run the command:

      cdev destroy\n
      "},{"location":"get-started-cdev-gcp/","title":"Getting Started with Cluster.dev on Google Cloud","text":"

      This guide will walk you through the steps to deploy your first project with Cluster.dev on Google Cloud.

                                +---------------------------------+\n                          | Project.yaml                    |\n                          |  - project_name                 |\n                          |  - google_project_id            |\n                          |  - google_cloud_region          |\n                          |  - google_cloud_bucket_location |\n                          +------------+--------------------+\n                                       |\n                                       |\n                          +------------v------------+\n                          | Stack.yaml              |\n                          |  - web_page_content     |\n                          +------------+------------+\n                                       |\n                                       |\n+--------------------------------------v-----------------------------------------------------------------+\n| StackTemplate: gcs-static-website                                                                      |\n|                                                                                                        |\n|  +---------------------+     +---------------------+     +-----------------+    +-----------------+    |\n|  | cloud-storage       |     | cloud-bucket-object |     | cloud-url-map   |    | cloud-lb        |    |\n|  | type: tfmodule      |     | type: tfmodule      |     | type: tfmodule  |    | type: tfmodule  |    |\n|  | inputs:             |     | inputs:             |     | inputs:         |    | inputs:         |    |\n|  |  names              |     |   bucket_name       |     |  name           |    |  name           |    |\n|  |  randomize_suffix   |     |   object_name       |     |  bucket_name    |    |  project        |    |\n|  |  project_id         |     |   object_content    |     +--------^--------+    |  url_map        |    |\n|  |  location           |     +----------^----------+      |                     
+--------^--------+    |\n|  +---------------------+       |                          |                       |                    |\n|        |                       | cloud-storage            | cloud-storage         | cloud-url-map      |\n|        |                       | bucket name              | bucket name           | url_map            |\n|        |                       | via remoteState          | via remoteState       | via remoteState    |\n+--------|-----------------------|--------------------------|--------------------------------------------+\n         |                       |                          |                       |\n         v                       v                          v                       v\n  Storage Bucket             Storage Bucket Object     Url Map & Bucket Backend   Load Balancer\n                                 |\n                                 v\n                       Printer: Static WebsiteUrl\n
      "},{"location":"get-started-cdev-gcp/#prerequisites","title":"Prerequisites","text":"

      Ensure the following are installed and set up:

      • Terraform: Version 1.4 or above. Install Terraform.
      terraform --version\n
      • Google Cloud CLI: Install Google Cloud CLI.
      gcloud --version\n
      • Cluster.dev client:
      curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh\ncdev --version\n
      "},{"location":"get-started-cdev-gcp/#authentication","title":"Authentication","text":"

      Before using the Google Cloud CLI, you'll need to authenticate:

      gcloud auth login\n

      Authorize cdev/terraform to interact with GCP via Application Default Credentials:

      gcloud auth application-default login\n
      "},{"location":"get-started-cdev-gcp/#creating-an-storage-bucket-for-storing-state","title":"Creating an Storage Bucket for Storing State","text":"
      gsutil mb gs://cdevstates\n
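gsutil rejects invalid bucket names, which (simplified) must be 3-63 characters of lowercase letters, digits, dashes, underscores and dots, starting and ending alphanumeric. A local sanity check (the `valid_bucket_name` helper is illustrative, not part of Cluster.dev or gsutil):

```shell
# Hypothetical helper, not part of Cluster.dev: sanity-check a GCS bucket
# name before running `gsutil mb`. Simplified naming rules only.
valid_bucket_name() {
  case "$1" in
    *[!a-z0-9._-]*) return 1 ;;     # disallowed character present
    [a-z0-9]*[a-z0-9]) ;;           # starts and ends alphanumeric
    *) return 1 ;;
  esac
  [ "${#1}" -ge 3 ] && [ "${#1}" -le 63 ]
}

valid_bucket_name "cdevstates" && echo "bucket name looks valid"
```

Note that bucket names are also globally unique, so a valid name can still be taken; this check only catches local formatting mistakes.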
      "},{"location":"get-started-cdev-gcp/#setting-up-your-project","title":"Setting Up Your Project","text":"

      Tip

      You can clone example files from repo:

      git clone https://github.com/shalb/cdev-examples\ncd cdev-examples/gcp/gcs-website/\n
      "},{"location":"get-started-cdev-gcp/#project-configuration-projectyaml","title":"Project Configuration (project.yaml)","text":"
      • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
      • It points to gcs-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the Google Cloud Storage bucket specified in backend.yaml.
      • Project-level variables are defined here and can be referenced in other configurations.
      cat <<EOF > project.yaml\nname: dev\nkind: Project\nbackend: gcs-backend\nvariables:\n  project_name: dev-test\n  google_project_id: cdev-demo\n  google_cloud_region: us-west1\n  google_cloud_bucket_location: EU\n  google_bucket_name: cdevstates\n  google_bucket_prefix: dev\nEOF\n
      "},{"location":"get-started-cdev-gcp/#backend-configuration-backendyaml","title":"Backend Configuration (backend.yaml)","text":"

      This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. Given the backend type gcs, the states will be stored in the Google Cloud Storage bucket created above.

      cat <<EOF > backend.yaml\nname: gcs-backend\nkind: Backend\nprovider: gcs\nspec:\n  project: {{ .project.variables.google_project_id }}\n  bucket: {{ .project.variables.google_bucket_name }}\n  prefix: {{ .project.variables.google_bucket_prefix }}\nEOF\n
      "},{"location":"get-started-cdev-gcp/#stack-configuration-stackyaml","title":"Stack Configuration (stack.yaml)","text":"
      • This represents a distinct set of infrastructure resources to be provisioned.
      • It references a local template (in this case, the previously provided stack template) to know what resources to create.
      • Variables specified in this file will be passed to the Terraform modules called in the template.
      • The web_page_content variable here is especially useful; it dynamically populates the content of a Google Cloud Storage bucket object (a webpage in this case) using the local index.html file.
      cat <<EOF > stack.yaml\nname: cloud-storage\ntemplate: ./template/\nkind: Stack\nbackend: gcs-backend\nvariables:\n  project_name: {{ .project.variables.project_name }}\n  google_cloud_region: {{ .project.variables.google_cloud_region }}\n  google_cloud_bucket_location: {{ .project.variables.google_cloud_bucket_location }}\n  google_project_id: {{ .project.variables.google_project_id }}\n  web_page_content: |\n    {{- readFile \"./files/index.html\" | nindent 4 }}\nEOF\n
      "},{"location":"get-started-cdev-gcp/#stack-template-templateyaml","title":"Stack Template (template.yaml)","text":"

      The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

      mkdir template\ncat <<EOF > template/template.yaml\n_p: &provider_gcp\n- google:\n    project: {{ .variables.google_project_id }}\n    region: {{ .variables.google_cloud_region }}\n\nname: gcs-static-website\nkind: StackTemplate\nunits:\n  -\n    name: cloud-storage\n    type: tfmodule\n    providers: *provider_gcp\n    source: \"github.com/terraform-google-modules/terraform-google-cloud-storage.git?ref=v4.0.1\"\n    inputs:\n      names:\n        - {{ .variables.project_name }}\n      randomize_suffix: true\n      project_id: {{ .variables.google_project_id }}\n      location: {{ .variables.google_cloud_bucket_location }}\n      set_viewer_roles: true\n      viewers:\n        - allUsers\n      website:\n        main_page_suffix: \"index.html\"\n        not_found_page: \"index.html\"\n  -\n    name: cloud-bucket-object\n    type: tfmodule\n    providers: *provider_gcp\n    depends_on: this.cloud-storage\n    source: \"bootlabstech/cloud-storage-bucket-object/google\"\n    version: \"1.0.1\"\n    inputs:\n      bucket_name: {{ remoteState \"this.cloud-storage.name\" }}\n      object_name: \"index.html\"\n      object_content: |\n        {{- .variables.web_page_content | nindent 8 }}\n  -\n    name: cloud-url-map\n    type: tfmodule\n    providers: *provider_gcp\n    depends_on: this.cloud-storage\n    source: \"github.com/shalb/terraform-gcs-bucket-backend.git?ref=0.0.1\"\n    inputs:\n      name: {{ .variables.project_name }}\n      bucket_name: {{ remoteState \"this.cloud-storage.name\" }}\n  -\n    name: cloud-lb\n    type: tfmodule\n    providers: *provider_gcp\n    depends_on: this.cloud-url-map\n    source: \"GoogleCloudPlatform/lb-http/google\"\n    version: \"9.2.0\"\n    inputs:\n      name: {{ .variables.project_name }}\n      project: {{ .variables.google_project_id }}\n      url_map: {{ remoteState \"this.cloud-url-map.url_map_self_link\" }}\n      create_url_map: false\n      ssl: false\n      backends:\n        default:\n          protocol: 
\"HTTP\"\n          port: 80\n          port_name: \"http\"\n          timeout_sec: 10\n          enable_cdn: false\n          groups: [] \n          health_check:\n            request_path: \"/\"\n            port: 80\n          log_config:\n            enable: true\n            sample_rate: 1.0\n          iap_config:\n            enable: false\n  -\n    name: outputs\n    type: printer\n    depends_on: this.cloud-storage\n    outputs:\n      websiteUrl: http://{{ remoteState \"this.cloud-lb.external_ip\" }}\nEOF\n
      Click to expand explanation of the Stack Template 1. Provider Definition (_p) This section uses a YAML anchor, defining the cloud provider settings for the resources in the stack. For this case, Google Cloud is the chosen provider, and the project and region are dynamically retrieved from the variables:
      _p: &provider_gcp\n- google:\n    project: {{ .variables.google_project_id }}\n    region: {{ .variables.google_cloud_region }}\n
      2. Units The units section is where the real action is. Each unit is a self-contained \"piece\" of infrastructure, typically associated with a particular Terraform module or a direct cloud resource. Cloud Storage Unit This unit leverages the `github.com/terraform-google-modules/terraform-google-cloud-storage` module to provision a Google Cloud Storage bucket. Inputs for the module, such as the bucket name and project, are filled using variables passed into the Stack.
      name: cloud-storage\ntype: tfmodule\nproviders: *provider_gcp\nsource: \"github.com/terraform-google-modules/terraform-google-cloud-storage.git?ref=v4.0.1\"\ninputs:\n  names:\n    - {{ .variables.project_name }}\n  randomize_suffix: true\n  project_id: {{ .variables.google_project_id }}\n  location: {{ .variables.google_cloud_bucket_location }}\n  set_viewer_roles: true\n  viewers:\n    - allUsers\n  website:\n    main_page_suffix: \"index.html\"\n    not_found_page: \"index.html\"\n
      Cloud Bucket Object Unit Upon creating the storage bucket, this unit establishes a web-page object inside it, using a storage bucket object module specifically designed for object creation. A standout feature is the remoteState function, which dynamically extracts the name of the storage bucket produced by the preceding unit:
      name: cloud-bucket-object\ntype: tfmodule\nproviders: *provider_gcp\ndepends_on: this.cloud-storage\nsource: \"bootlabstech/cloud-storage-bucket-object/google\"\nversion: \"1.0.1\"\ninputs:\n  bucket_name: {{ remoteState \"this.cloud-storage.name\" }}\n  object_name: \"index.html\"\n  object_content: |\n    {{- .variables.web_page_content | nindent 8 }}\n
      Cloud URL Map Unit This unit creates a google_compute_url_map and a google_compute_backend_bucket to supply to the cloud-lb unit. A standout feature is the remoteState function, which dynamically extracts the name of the storage bucket produced by the cloud-storage unit:
      name: cloud-url-map\ntype: tfmodule\nproviders: *provider_gcp\ndepends_on: this.cloud-storage\nsource: \"github.com/shalb/terraform-gcs-bucket-backend.git?ref=0.0.1\"\ninputs:\n  name: {{ .variables.project_name }}\n  bucket_name: {{ remoteState \"this.cloud-storage.name\" }}\n
      Cloud Load Balancer Unit This unit creates a Google HTTP load balancer. A standout feature is the remoteState function, which dynamically extracts the URL map URI produced by the cloud-url-map unit:
      name: cloud-lb\ntype: tfmodule\nproviders: *provider_gcp\ndepends_on: this.cloud-url-map\nsource: \"GoogleCloudPlatform/lb-http/google\"\nversion: \"9.2.0\"\ninputs:\n  name: {{ .variables.project_name }}\n  project: {{ .variables.google_project_id }}\n  url_map: {{ remoteState \"this.cloud-url-map.url_map_self_link\" }}\n  create_url_map: false\n  ssl: false\n  backends:\n    default:\n      protocol: \"HTTP\"\n      port: 80\n      port_name: \"http\"\n      timeout_sec: 10\n      enable_cdn: false\n      groups: [] \n      health_check:\n        request_path: \"/\"\n        port: 80\n      log_config:\n        enable: true\n        sample_rate: 1.0\n      iap_config:\n        enable: false\n
      Outputs Unit Lastly, this unit is designed to provide outputs, allowing users to view certain results of the Stack execution. For this template, it provides the website URL of the hosted website exposed by the load balancer.
      name: outputs\ntype: printer\ndepends_on: this.cloud-storage\noutputs:\n  websiteUrl: http://{{ remoteState \"this.cloud-lb.external_ip\" }}\n
      3. Variables and Data Flow The Stack Template is adept at harnessing variables, not just from the Stack (e.g., `stack.yaml`), but also from other resources via the remoteState function. This facilitates a seamless flow of data between resources and units, enabling dynamic infrastructure creation based on real-time cloud resource states and user-defined variables."},{"location":"get-started-cdev-gcp/#sample-website-file-filesindexhtml","title":"Sample Website File (files/index.html)","text":"
      mkdir files\ncat <<EOF > files/index.html\n<html xmlns=\"http://www.w3.org/1999/xhtml\">\n<head>\n    <title>Cdev Demo Website Home Page</title>\n</head>\n<body>\n  <h1>Welcome to my website</h1>\n  <p>Now hosted on GCS!</p>\n  <h2>See you!</h2>\n</body>\n</html>\nEOF\n
      "},{"location":"get-started-cdev-gcp/#deploying-with-clusterdev","title":"Deploying with Cluster.dev","text":"
      • Plan the deployment:

        cdev plan\n
      • Apply the changes:

        cdev apply\n
      "},{"location":"get-started-cdev-gcp/#clean-up","title":"Clean up","text":"

      To remove the created resources, run the command:

      cdev destroy\n
      "},{"location":"get-started-cdev-gcp/#more-examples","title":"More Examples","text":"

      In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

      • More Advanced example with GKE
      "},{"location":"get-started-cdev-helm/","title":"Getting Started with Kubernetes and Helm","text":"

      This guide will walk you through the steps to deploy a WordPress application along with a MySQL database on a Kubernetes cluster using StackTemplates with Helm units.

                                +-------------------------+\n                          | Stack.yaml              |\n                          |  - domain               |\n                          |  - kubeconfig_path      |\n                          +------------+------------+\n                                       |\n                                       |\n+--------------------------------------v---------------------------------+\n| StackTemplate: wordpress                                               |\n|                                                                        |\n|  +---------------------+               +---------------------+         |\n|  | mysql-wp-pass-user  |-------------->| mysql-wordpress     |         |\n|  | type: tfmodule      |               | type: helm          |         |\n|  | output:             |               | inputs:             |         |\n|  |  generated password |               |  kubeconfig         |         |\n|  |                     |               |  values (from mysql.yaml)     |\n|  +---------------------+               +----------|----------+         |\n|                                                   |                    |\n|                                                   v                    |\n|                                           MySQL Deployment             |\n|                                                   |                    |\n|  +---------------------+               +----------|----------+         |\n|  | wp-pass             |-------------->| wordpress           |         |\n|  | type: tfmodule      |               | type: helm          |         |\n|  | output:             |               | inputs:             |         |\n|  |  generated password |               |  kubeconfig         |         |\n|  |                     |               |  values (from wordpress.yaml) |\n|  +---------------------+               +----------|----------+         |\n|                                      
             |                    |\n|                                                   v                    |\n|                                           WordPress Deployment         |\n|                                                                        |\n|  +---------------------+                                               |\n|  | outputs             |                                               |\n|  | type: printer       |                                               |\n|  | outputs:            |                                               |\n|  |  wordpress_url      |                                               |\n|  +---------------------+                                               |\n|            |                                                           |\n+------------|-----------------------------------------------------------+\n             |\n             v\n      wordpress_url Output\n
      "},{"location":"get-started-cdev-helm/#prerequisites","title":"Prerequisites","text":"
      1. A running Kubernetes cluster.
      2. Your domain name (for this tutorial, we'll use demo.cluster.dev as a placeholder).
      3. The kubeconfig file for your Kubernetes cluster.
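Before running cdev, it is worth confirming that the kubeconfig path you will reference in stack.yaml actually exists. A minimal sketch (the `check_kubeconfig` helper is illustrative, not part of Cluster.dev):

```shell
# Illustrative pre-flight check, not part of Cluster.dev: verify that the
# kubeconfig file referenced in stack.yaml exists before running `cdev apply`.
check_kubeconfig() {
  if [ -f "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1" >&2
    return 1
  fi
}

check_kubeconfig "$HOME/.kube/config" || true  # adjust to your kubeconfig path
```

A missing kubeconfig otherwise surfaces only midway through an apply, after the Terraform units have already started.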
      "},{"location":"get-started-cdev-helm/#setting-up-your-project","title":"Setting Up Your Project","text":"

      Tip

      You can clone example files from repo:

      git clone https://github.com/shalb/cdev-examples\ncd cdev-examples/helm/wordpress/\n

      "},{"location":"get-started-cdev-helm/#project-configuration-projectyaml","title":"Project Configuration (project.yaml)","text":"
      • Defines the overarching project settings. All subsequent stack configurations will inherit and can override these settings.
      • It points to aws-backend as the backend, meaning that the Cluster.dev state for resources defined in this project will be stored in the S3 bucket specified in backend.yaml.
      • Project-level variables are defined here and can be referenced in other configurations.
      cat <<EOF > project.yaml\nname: wordpress-demo\nkind: Project\nbackend: aws-backend\nvariables:\n  region: eu-central-1\n  state_bucket_name: cdev-states\nEOF\n
      "},{"location":"get-started-cdev-helm/#backend-configuration-backendyaml","title":"Backend Configuration (backend.yaml)","text":"

      This specifies where Cluster.dev will store its own state and the Terraform states for any infrastructure it provisions or manages. In this example the AWS s3 is used, but you can choose any other provider.

      cat <<EOF > backend.yaml\nname: aws-backend\nkind: Backend\nprovider: s3\nspec:\n  bucket: {{ .project.variables.state_bucket_name }}\n  region: {{ .project.variables.region }}\nEOF\n
      "},{"location":"get-started-cdev-helm/#setting-up-the-stack-file-stackyaml","title":"Setting Up the Stack File (stack.yaml)","text":"
      • This represents a high-level configuration of the infrastructure pattern.
      • It references a local template to know what resources to create.
      • Variables specified in this file will be passed to the Terraform modules and Helm charts called in the template.

      Replace placeholders in stack.yaml with your actual kubeconfig path and domain.

      cat <<EOF > stack.yaml\nname: wordpress\ntemplate: \"./template/\"\nkind: Stack\nbackend: aws-backend\ncliVersion: \">= 0.7.14\"\nvariables:\n  kubeconfig_path: \"/data/home/voa/projects/cdev-aws-eks/examples/kubeconfig\" # Change to your path\n  domain: demo.cluster.dev # Change to your domain\nEOF\n
      "},{"location":"get-started-cdev-helm/#stack-template-templateyaml","title":"Stack Template (template.yaml)","text":"

      The StackTemplate serves as a pivotal object within Cluster.dev. It lays out the actual infrastructure components you intend to provision using Terraform modules and resources. Essentially, it determines how your cloud resources should be laid out and interconnected.

      mkdir template\ncat <<EOF > template/template.yaml\nkind: StackTemplate\nname: wordpress\ncliVersion: \">= 0.7.15\"\nunits:\n## Generate Passwords with Terraform for MySQL and Wordpress\n  -\n    name: mysql-wp-pass-user\n    type: tfmodule\n    source: github.com/romanprog/terraform-password?ref=0.0.1\n    inputs:\n      length: 12\n      special: false\n  -\n    name: wp-pass\n    type: tfmodule\n    source: github.com/romanprog/terraform-password?ref=0.0.1\n    inputs:\n      length: 12\n      special: false\n## Install MySQL and Wordpress with Helm\n  -\n    name: mysql-wordpress\n    type: helm\n    kubeconfig: {{ .variables.kubeconfig_path }}\n    source:\n      repository: \"oci://registry-1.docker.io/bitnamicharts\"\n      chart: \"mysql\"\n      version: \"9.9.1\"\n    additional_options:\n      namespace: \"wordpress\"\n      create_namespace: true\n    values:\n      - file: ./files/mysql.yaml\n  -\n    name: wordpress\n    type: helm\n    depends_on: this.mysql-wordpress\n    kubeconfig: {{ .variables.kubeconfig_path }}\n    source:\n      repository: \"oci://registry-1.docker.io/bitnamicharts\"\n      chart: \"wordpress\"\n      version: \"16.1.2\"\n    additional_options:\n      namespace: \"wordpress\"\n      create_namespace: true\n    values:\n      - file: ./files/wordpress.yaml\n\n  - name: outputs\n    type: printer\n    depends_on: this.wordpress\n    outputs:\n      wordpress_url: https://wordpress.{{ .variables.domain }}/admin/\n      wordpress_user: user\n      wordpress_password: {{ remoteState \"this.wp-pass.result\" }}\nEOF\n

      As you can see, the StackTemplate contains Helm units that take inputs from values files, where you can use outputs from other unit types (like tfmodule) or even from other stacks. Let's create those values files for MySQL and WordPress:

      mkdir files\ncat <<EOF > files/mysql.yaml\nfullNameOverride: mysql-wordpress\nauth:\n  rootPassword: {{ remoteState \"this.mysql-wp-pass-user.result\" }}\n  username: user\n  password: {{ remoteState \"this.mysql-wp-pass-user.result\" }}\nEOF\n
      cat <<EOF > files/wordpress.yaml\ncontainerSecurityContext:\n  enabled: false\nmariadb:\n  enabled: false\nexternalDatabase:\n  port: 3306\n  user: user\n  password: {{ remoteState \"this.mysql-wp-pass-user.result\" }}\n  database: my_database\nwordpressPassword: {{ remoteState \"this.wp-pass.result\" }}\nallowOverrideNone: false\ningress:\n  enabled: true\n  ingressClassName: \"nginx\"\n  pathType: Prefix\n  hostname: wordpress.{{ .variables.domain }}\n  path: /\n  tls: true\n  annotations:\n    cert-manager.io/cluster-issuer: \"letsencrypt-prod\"\nEOF\n
      Click to expand explanation of the Stack Template 1. Units The units section is a list of infrastructure components that are provisioned sequentially. Each unit has a type, which indicates whether it's a Terraform module (`tfmodule`), a Helm chart (`helm`), or simply outputs (`printer`). Password Generation Units There are two password generation units which use the Terraform module `github.com/romanprog/terraform-password` to generate random passwords.
      name: mysql-wp-pass-user\ntype: tfmodule\nsource: github.com/romanprog/terraform-password?ref=0.0.1\ninputs:\n  length: 12\n  special: false\n
      These units will create passwords with a length of 12 characters without special characters. The outputs of these units (the generated passwords) are used in subsequent units. MySQL Helm Chart Unit This unit installs the MySQL chart from the `bitnamicharts` Helm repository.
      name: mysql-wordpress\ntype: helm\nkubeconfig: {{ .variables.kubeconfig_path }}\nsource:\n  repository: \"oci://registry-1.docker.io/bitnamicharts\"\n  chart: \"mysql\"\n  version: \"9.9.1\"\n
      The `kubeconfig` field uses a variable to point to the Kubeconfig file, enabling Helm to deploy to the correct Kubernetes cluster. WordPress Helm Chart Unit This unit installs the WordPress chart from the same Helm repository as MySQL. It depends on the `mysql-wordpress` unit, ensuring MySQL is installed first.
      name: wordpress\ntype: helm\ndepends_on: this.mysql-wordpress\n
      Both Helm units utilize external YAML files (`mysql.yaml` and `wordpress.yaml`) to populate values for the Helm charts. These values files leverage the `remoteState` function to fetch passwords generated by the Terraform modules. Outputs Unit This unit outputs the URL to access the WordPress site.
      name: outputs\ntype: printer\ndepends_on: this.wordpress\noutputs:\n  wordpress_url: https://wordpress.{{ .variables.domain }}/admin/\n
      It waits for the WordPress Helm unit to complete (`depends_on: this.wordpress`) and then provides the URL. 2. Variables and Data Flow In this stack template: The `.variables` placeholders, like `{{ .variables.kubeconfig_path }}` and `{{ .variables.domain }}`, fetch values from the stack's variables. The `remoteState` function, such as `{{ remoteState \"this.wp-pass.result\" }}`, fetches the outputs from previous units. For example, it retrieves the randomly generated password for WordPress. These mechanisms ensure dynamic configurations based on real-time resource states and user-defined variables. They enable values generated in one unit (e.g., a password from a Terraform module) to be utilized in a subsequent unit (e.g., a Helm deployment). 3. Additional File (`mysql.yaml` and `wordpress.yaml`) Explanation Both files serve as value configurations for their respective Helm charts. `mysql.yaml` sets overrides for the MySQL deployment, specifically the authentication details. `wordpress.yaml` customizes the WordPress deployment, such as its database settings, ingress configuration, and password. Both files leverage the `remoteState` function to pull in passwords generated by the Terraform password modules. In summary, this stack template and its additional files define a robust deployment that sets up a WordPress application with its database, all while dynamically creating and injecting passwords. It showcases the synergy between Terraform for infrastructure provisioning and Helm for Kubernetes-based application deployments."},{"location":"get-started-cdev-helm/#deploying-wordpress-and-mysql-with-clusterdev","title":"Deploying WordPress and MySQL with cluster.dev","text":""},{"location":"get-started-cdev-helm/#1-planning-the-deployment","title":"1. Planning the Deployment","text":"
      cdev plan\n
      "},{"location":"get-started-cdev-helm/#2-applying-the-stacktemplate","title":"2. Applying the StackTemplate","text":"
      cdev apply\n

      Upon executing these commands, WordPress and MySQL will be deployed on your Kubernetes cluster using cluster.dev.

      "},{"location":"get-started-cdev-helm/#example-screen-cast","title":"Example Screen Cast","text":""},{"location":"get-started-cdev-helm/#clean-up","title":"Clean up","text":"

      To remove the created resources from the cluster, run the command:

      cdev destroy\n
      "},{"location":"get-started-cdev-helm/#conclusion","title":"Conclusion","text":"

      StackTemplates provide a modular approach to deploying applications on Kubernetes. With Helm and StackTemplates, you can efficiently maintain, scale, and manage your deployments. This guide walked you through deploying WordPress and MySQL seamlessly on a Kubernetes cluster using these tools.

      "},{"location":"get-started-cdev-helm/#more-examples","title":"More Examples","text":"

      In the Examples section you will find ready-to-use advanced Cluster.dev samples that will help you bootstrap more complex cloud infrastructures with Helm and Terraform compositions:

      • Install an EKS cluster with WordPress as a separate stack in one project
      • Install a sample application by templating multiple Kubernetes manifests
      • A repo with other examples
      "},{"location":"get-started-create-project/","title":"Create New Project","text":""},{"location":"get-started-create-project/#quick-start","title":"Quick start","text":"

      In our example we will use the tmpl-development sample to create a new project in the AWS cloud.

      1. Install the Cluster.dev client.

      2. Create a project directory, cd into it and generate a project with the command:

        cdev project create https://github.com/shalb/cluster.dev tmpl-development

      3. Export environmental variables via an AWS profile.

      4. Run cdev plan to build the project and see the infrastructure that will be created.

      5. Run cdev apply to deploy the stack.

      "},{"location":"get-started-create-project/#workflow-diagram","title":"Workflow diagram","text":"

      The diagram below describes the steps of creating a new project without generators.

      "},{"location":"get-started-overview/","title":"Cluster.dev Examples Overview","text":""},{"location":"get-started-overview/#working-with-terraform-modules","title":"Working with Terraform Modules","text":"

      Example of how to create static website hosting on different clouds.

      • AWS: Quick Start on AWS
      • Azure: Quick Start on Azure
      • GCP: Quick Start on GCP"},{"location":"get-started-overview/#kubernetes-deployment-with-helm-charts","title":"Kubernetes Deployment with Helm Charts","text":"

      Example of how to deploy application with Helm and Terraform to Kubernetes.

      • Terraform, Kubernetes, Helm: Quick Start with Kubernetes"},{"location":"get-started-overview/#bootstrapping-kubernetes-in-different-clouds","title":"Bootstrapping Kubernetes in Different Clouds","text":"

      Create fully featured Kubernetes clusters with required addons.

      • AWS EKS: AWS-EKS
      • AWS K3s: AWS-K3s
      • GCP GKE: GCP-GKE
      • AWS K3s + Prometheus: AWS-K3s Prometheus
      • DO K8s: DO-K8s"},{"location":"google-cloud-provider/","title":"Deploying to GCE","text":"

      Work on setting up access to Google Cloud is in progress; examples are coming soon!

      "},{"location":"google-cloud-provider/#authentication","title":"Authentication","text":"

      See Terraform Google cloud provider documentation.

      "},{"location":"how-does-cdev-work/","title":"How Does It Work?","text":"

      With Cluster.dev you create or download a predefined stack template, set the variables, then render and deploy a whole stack.

      Capabilities:

      • Re-using all existing Terraform private and public modules and Helm Charts.
      • Applying parallel changes in multiple infrastructures concurrently.
      • Using the same global variables and secrets across different infrastructures, clouds, and technologies.
      • Templating anything with Go template functions, even Terraform modules in Helm-style templates.
      • Creating and managing secrets with SOPS or cloud secret storages.
      • Generating ready-to-use Terraform code.
      "},{"location":"how-does-cdev-work/#basic-diagram","title":"Basic diagram","text":""},{"location":"how-does-cdev-work/#templating","title":"Templating","text":"

      Templating is one of the key features that underlie the powerful capabilities of Cluster.dev. Similar to Helm, cdev templating is based on the Go template language and uses Sprig as well as some extra functions to expose objects to the templates.

      Cluster.dev has two levels of templating, one that involves template rendering on a project level and one on a stack template level. For more information please refer to the Templating section.

      "},{"location":"how-does-cdev-work/#how-to-use-clusterdev","title":"How to use Cluster.dev","text":"

      Cluster.dev is a powerful framework that can be operated in several modes.

      "},{"location":"how-does-cdev-work/#create-your-own-stack-template","title":"Create your own stack template","text":"

      In this mode you can create your own stack templates. Having your own template enables you to launch or copy environments (like dev/stage/prod) with the same template. You'll be able to develop and propagate changes together with your team members, just using Git. Operating Cluster.dev in the developer mode requires some prerequisites, the most important being an understanding of Terraform and how to work with its modules. Knowledge of go-template syntax or Helm is advisable but not mandatory.

      "},{"location":"how-does-cdev-work/#deploy-infrastructures-from-existing-stack-templates","title":"Deploy infrastructures from existing stack templates","text":"

      This mode, also known as user mode, gives you the ability to launch ready-to-use infrastructures from prepared stack templates by just adding your cloud credentials and setting variables (such as name, zones, number of instances, etc.). You don't need to know the underlying tooling like Terraform or Helm; it's as simple as downloading a sample and running commands. Here are the steps:

      • Install Cluster.dev binary
      • Choose and download a stack template
      • Set cloud credentials
      • Define variables for the stack template
      • Run Cluster.dev and get a cloud infrastructure
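      Put together, the steps above map to roughly the following session (the template URL and profile name are illustrative; see the quick-start section for a concrete walkthrough):

```
# 1. Install the Cluster.dev binary
curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh
# 2. Download a stack template sample
cdev project create https://github.com/shalb/cluster.dev tmpl-development
# 3. Set cloud credentials (AWS profile name is an example)
export AWS_PROFILE=cluster-dev
# 4. Define variables for the stack template by editing stack.yaml
# 5. Plan and apply
cdev plan
cdev apply
```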
      "},{"location":"how-does-cdev-work/#workflow","title":"Workflow","text":"

      Let's assume you are starting a new infrastructure project and see what your workflow would look like.

      1. Define what kind of infrastructure pattern you need to achieve.

        a. What Terraform modules it would include (for example: I need to have VPC, Subnet definitions, IAM's and Roles).

        b. Whether you need to apply any Bash scripts before and after the module, or inside as pre/post-hooks.

        c. If you are using Kubernetes, check what controllers would be deployed and how (by Helm chart or K8s manifests).

      2. Check if there is any similar sample template that already exists.

      3. Clone the stack template locally and modify it if needed.

      4. Apply it.

      "},{"location":"howto-tf-versions/","title":"Use Different Terraform Versions","text":"

      By default, Cluster.dev runs the version of Terraform that is installed on the local machine. If you need to switch between versions, use a third-party utility such as Terraform Switcher.

      Example of tfswitch usage:

      tfswitch 0.15.5\n\ncdev apply\n
      This will tell Cluster.dev to use Terraform v0.15.5.

      Use CDEV_TF_BINARY variable to indicate which Terraform binary to use.
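      For example, you can point the variable at a specific binary in your shell before running cdev (the binary name here is just an illustration):

```shell
# Tell Cluster.dev which Terraform binary to run (the name is an example).
export CDEV_TF_BINARY="terraform_14"
```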

      Info

      The variable is recommended for debugging and template development only.

      You can pin it in project.yaml:

          name: dev\n    kind: Project\n    backend: aws-backend\n    variables:\n      organization: cluster-dev\n      region: eu-central-1\n      state_bucket_name: cluster-dev-gha-tests\n    exports:\n      CDEV_TF_BINARY: \"terraform_14\"\n
      "},{"location":"installation-upgrade/","title":"Installation and Upgrade","text":""},{"location":"installation-upgrade/#prerequisites","title":"Prerequisites","text":"

      To start using Cluster.dev please make sure that you comply with the following preconditions.

      Supported operating systems:

      • Linux amd64

      • Darwin amd64

      Required software installed:

      • Git console client

      • Terraform

      "},{"location":"installation-upgrade/#terraform","title":"Terraform","text":"

      The Cluster.dev client uses the Terraform binary. The required Terraform version is 1.4 or higher. Refer to the Terraform installation instructions to install Terraform.

      "},{"location":"installation-upgrade/#install-from-script","title":"Install From Script","text":"

      Tip

      This is the easiest way to have the Cluster.dev client installed. For other options see the Install From Sources section.

      Cluster.dev has an installer script that fetches the latest version of the Cluster.dev client and installs it locally.

      Fetch the script and execute it locally with the command:

      curl -fsSL https://raw.githubusercontent.com/shalb/cluster.dev/master/scripts/get_cdev.sh | sh\n
      "},{"location":"installation-upgrade/#install-from-sources","title":"Install From Sources","text":""},{"location":"installation-upgrade/#download-from-release","title":"Download from release","text":"

      Each stable version of Cluster.dev has a binary that can be downloaded and installed manually. The documentation is suitable for v0.4.0 or higher of the Cluster.dev client.

      Installation example for Linux amd64:

      1. Download your desired version from the releases page.

      2. Unpack it.

      3. Find the Cluster.dev binary in the unpacked directory.

      4. Move the binary to the bin folder (/usr/local/bin/).

      "},{"location":"installation-upgrade/#building-from-source","title":"Building from source","text":"

      Go version 1.16 or higher is required - see Golang installation instructions.

      To build the Cluster.dev client from source:

      1. Clone the Cluster.dev Git repo:

        git clone https://github.com/shalb/cluster.dev/\n
      2. Build the binary:

        cd cluster.dev/ && make\n
      3. Check Cluster.dev and move the binary to the bin folder:

        ./bin/cdev --help\nmv ./bin/cdev /usr/local/bin/\n
      "},{"location":"stack-templates-functions/","title":"Functions","text":"

      You can use basic Go template language and Sprig functions to modify the text of a stack template.

      Additionally, you can use some enhanced functions that are listed below. These functions are integrated with the yaml syntax and can't be used everywhere.

      "},{"location":"stack-templates-functions/#insertyaml","title":"insertYAML","text":"

      Allows for passing a yaml block as a value in the target yaml template.

      Argument: data to pass, any value or reference to a block.

      Allowed use: only as a full yaml value, in unit inputs. Example:

      Source yaml:

        values:\n    node_groups:\n      - name: ng1\n        min_size: 1\n        max_size: 5\n      - name: ng2\n        max_size: 2\n        type: spot\n

      Target yaml template:

        units:\n    - name: k3s\n      type: tfmodule\n      node_groups: {{ insertYAML .values.node_groups }}\n

      Rendered stack template:

        units:\n    - name: k3s\n      type: tfmodule\n      node_groups:\n        - name: ng1\n          min_size: 1\n          max_size: 5\n        - name: ng2\n          max_size: 2\n          type: spot\n
      "},{"location":"stack-templates-functions/#remotestate","title":"remoteState","text":"

      Allows for passing data across units and stacks; can also be used in pre/post hooks.

      Argument: string, a path to the remote state consisting of 3 parts separated by dots: \"stack_name.unit_name.output_name\". Since the name of the stack is unknown inside the stack template, you can use \"this\" instead: \"this.unit_name.output_name\".

      Allowed use:

      • all units types: in inputs;

      • all units types: in units pre/post hooks;

      • in Kubernetes modules: in Kubernetes manifests.

      "},{"location":"stack-templates-functions/#cidrsubnet","title":"cidrSubnet","text":"

      Calculates a subnet address within a given IP network address prefix. Works the same as the Terraform cidrsubnet function. Example:

      Source:

        {{ cidrSubnet \"172.16.0.0/12\" 4 2 }}\n

      Rendered:

        172.18.0.0/16\n
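      The same arithmetic can be reproduced with Python's standard ipaddress module (an illustrative sketch, not part of Cluster.dev; the helper name is made up):

```python
import ipaddress

def cidr_subnet(prefix: str, newbits: int, netnum: int) -> str:
    # Mimic Terraform's cidrsubnet(): extend the prefix length by
    # `newbits` bits and pick the `netnum`-th subnet of that size.
    network = ipaddress.ip_network(prefix)
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

# "172.16.0.0/12" with 4 extra bits yields /16 subnets; index 2 is 172.18.0.0/16.
print(cidr_subnet("172.16.0.0/12", 4, 2))
```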

      "},{"location":"stack-templates-list/","title":"Stack Templates List","text":"

      Currently there are 3 types of stack templates available:

      • AWS-K3s
      • AWS-EKS
      • DO-K8s

      For more information on the templates please refer to the Examples section.

      "},{"location":"stack-templates-overview/","title":"Overview","text":""},{"location":"stack-templates-overview/#description","title":"Description","text":"

      A stack template is a yaml file that tells Cluster.dev which units to run and how. It is a core Cluster.dev resource that underpins its flexibility. Stack templates use the Go template language, allowing you to customise and select the units you want to run.

      The stack template's config files are stored within the stack template directory, which can be located either locally or in a Git repo. Cluster.dev reads all ./*.yaml files from the directory (non-recursively), renders the stack template with the project's data, parses the yaml files, and loads units - the most primitive elements of a stack template.

      A stack template represents a yaml structure with an array of different invocation units. Common view:

      units:\n  - unit1\n  - unit2\n  - unit3\n  ...\n

      Stack templates can utilize all kinds of Go templates and Sprig functions (similar to Helm). Along with that it is enhanced with functions like insertYAML that could pass yaml blocks directly.
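      As a minimal sketch (the unit name, output name, and variable are illustrative), a stack template that renders a stack variable could look like:

```yaml
kind: StackTemplate
name: minimal
units:
  - name: outputs
    type: printer
    outputs:
      greeting: "hello from {{ .variables.domain }}"
```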

      "},{"location":"structure-backend/","title":"Backends","text":"

      File: searching in ./*.yaml. Optional.

      Backend is an object that describes backend storage for Terraform and Cluster.dev states. A backend can be local or remote, depending on where it stores its state.

      You can use any options of Terraform backends in the remote backend configuration. The options will be mapped to a generated Terraform backend and converted as is.

      "},{"location":"structure-backend/#local-backend","title":"Local backend","text":"

      Local backend stores the cluster state on a local file system in the .cluster.dev/states/cdev-state.json file. Cluster.dev will use the local backend by default unless the remote backend is specified in the project.yaml.

      Example configuration:

      name: my-fs\nkind: backend\nprovider: local\nspec:\n  path: /home/cluster.dev/states/\n

      A path should be absolute or relative to the directory where cdev is running. An absolute path must begin with /, and a relative with ./ or ../.

      "},{"location":"structure-backend/#remote-backend","title":"Remote backend","text":"

      Remote backend uses remote cloud services to store the cluster state, making it accessible for team work.

      "},{"location":"structure-backend/#s3","title":"s3","text":"

      Stores the cluster state in AWS S3 bucket. The S3 backend supports all options of Terraform S3 backend.

      name: aws-backend\nkind: backend\nprovider: s3\nspec:\n  bucket: cdev-states\n  region: {{ .project.variables.region }}\n
      "},{"location":"structure-backend/#azurerm","title":"azurerm","text":"

      Stores the cluster state in Microsoft Azure cloud. The azurerm backend supports all options of Terraform azurerm backend.

      name: azurerm-b\nkind: backend\nprovider: azurerm\nspec:\n  resource_group_name: \"StorageAccount-ResourceGroup\"\n  storage_account_name: \"example\"\n  container_name: \"cdev-states\"\n
      "},{"location":"structure-backend/#gcs","title":"gcs","text":"

      Stores the cluster state in Google Cloud service. The gcs backend supports all options of Terraform gcs backend.

      name: gcs-b\nkind: backend\nprovider: gcs\nspec:\n  bucket: cdev-states\n  prefix: pref\n
      "},{"location":"structure-backend/#digital-ocean-spaces-and-minio","title":"Digital Ocean Spaces and MinIO","text":"

      To use DO Spaces or MinIO object storage as a backend, use the s3 backend provider with additional options. See details:

      • DO Spaces
      • MinIO

      DO Spaces example:

      name: do-backend\nkind: Backend\nprovider: s3\nspec:\n  bucket: cdev-state\n  region: main\n  access_key: \"<SPACES_SECRET_KEY>\" # Optional, it's better to use environment variable 'export SPACES_SECRET_KEY=\"key\"'\n  secret_key: \"<SPACES_ACCESS_TOKEN>\" # Optional, it's better to use environment variable 'export SPACES_ACCESS_TOKEN=\"token\"'\n  endpoint: \"https://sgp1.digitaloceanspaces.com\"\n  skip_credentials_validation: true\n  skip_region_validation: true\n  skip_metadata_api_check: true\n

      MinIO example:

      name: minio-backend\nkind: Backend\nprovider: s3\nspec:\n  bucket: cdev-state\n  region: main\n  access_key: \"minioadmin\"\n  secret_key: \"minioadmin\"\n  endpoint: http://127.0.0.1:9000\n  skip_credentials_validation: true\n  skip_region_validation: true\n  skip_metadata_api_check: true\n  force_path_style: true\n
      "},{"location":"structure-overview/","title":"Overview","text":""},{"location":"structure-overview/#main-objects","title":"Main objects","text":"

      Unit \u2013 a block that executes Terraform modules, Helm charts, Kubernetes manifests, Terraform code, or Bash scripts. A unit can source input variables from configuration (stacks) and from the outputs of other units, and can produce outputs to be used by other units.

      Stack template \u2013 a set of units linked together into one infrastructure pattern (describing a whole infrastructure). You can think of it as a complex Helm chart or a compound Terraform module.

      Stack \u2013 a set of variables that is applied to a stack template (like values.yaml in Helm or a tfvars file in Terraform). It is used to configure the resulting infrastructure.

      Project \u2013 a high-level metaobject that can arrange multiple stacks and keep global variables. An infrastructure can consist of multiple stacks, while a project acts as an umbrella object for these stacks.

      "},{"location":"structure-overview/#helper-objects","title":"Helper objects","text":"

      Backend \u2013 describes the location where Cluster.dev hosts its own state and can also store Terraform unit states.

      Secret \u2013 an object that contains sensitive data such as a password, a token, or a key. It is used to pass secret values to tools that lack proper support for secret engines.

      "},{"location":"structure-project/","title":"Project","text":"

      A project is a storage for variables related to all stacks. It is a high-level abstraction used to store and reconcile different stacks, and to pass values across them.

      File: project.yaml. Optional. Represents a set of configuration options for the whole project. Contains global project variables that can be used in other configuration objects, such as backend or stack (except for secrets). Note that the project.yaml file is not rendered with the template and you cannot use template units in it.

      Example of project.yaml:

      name: my_project\nkind: project\nbackend: aws-backend\nvariables:\n  organization: shalb\n  region: eu-central-1\n  state_bucket_name: cdev-states\nexports:\n  AWS_PROFILE: cluster-dev  \n
      • name: project name. Required.

      • kind: object kind. Must be set as project. Required.

      • backend: name of the backend that will be used to store the Cluster.dev state of the current project. Optional.

      • variables: a set of data in yaml format that can be referenced in other configuration objects. For the example above, the link to the organization name will look like this: {{ .project.variables.organization }}.

      • exports: list of environment variables that will be exported while working with the project. Optional.

      "},{"location":"structure-secrets/","title":"Secrets","text":"

      A secret is an object that contains sensitive data such as a password, a token, or a key. It is used to pass secret values to tools that lack proper support for secret engines.

      Cluster.dev allows for two ways of working with secrets.

      "},{"location":"structure-secrets/#sops-secrets","title":"SOPS secrets","text":"

      See SOPS installation instructions in official repo.

      Secrets are encoded/decoded with the SOPS utility that supports AWS KMS, GCP KMS, Azure Key Vault and PGP keys. How to use:

      1. Use Cluster.dev console client to create a new secret from scratch:

        cdev secret create\n
      2. Use interactive menu to create a secret.

      3. Edit the secret and set secret data in encrypted_data: section.

      4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).

      "},{"location":"structure-secrets/#aws-secrets-manager","title":"AWS Secrets Manager","text":"

      The Cluster.dev client can use AWS Secrets Manager as a secret storage. How to use:

      1. Create a new secret in AWS Secrets Manager using AWS CLI or web console. Both raw and JSON data formats are supported.

      2. Use Cluster.dev console client to create a new secret from scratch:

        cdev secret create\n
      3. Answer the questions. For the Name of secret in AWS Secrets Manager enter the name of the AWS secret created above.

      4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).

      To list and edit any secret, use the commands:

      cdev secret ls\n

      and

      cdev secret edit secret_name\n
      "},{"location":"structure-secrets/#secrets-reference","title":"Secrets reference","text":"

      You can refer to a secret data in stack files with {{ .secrets.secret_name.secret_key }} syntax.

      For example, we have a secret in AWS Secrets Manager and want to refer to the secret in our stack.yaml:

      name: my-aws-secret\nkind: Secret\ndriver: aws_secretmanager\nspec: \n    region: eu-central-1\n    aws_secret_name: pass\n

      In order to do this, we need to define the secret as {{ .secrets.my-aws-secret.some-key }} in the stack.yaml:

      name: my-stack\ntemplate: https://<template.git.url>\nkind: Stack\nvariables:\n  region: eu-central-1\n  name: my-test-stack\n  password: {{ .secrets.my-aws-secret.some-key }}\n
      "},{"location":"structure-stack/","title":"Stack","text":"

      Stack is a yaml file that tells Cluster.dev which template to use and what variables to apply to this template. Usually, users have multiple stacks that reflect their environments or tenants, and point to the same template with different variables.

      File: searching in ./*.yaml. At least one is required. A stack object (kind: stack) contains a reference to a stack template, variables to render the template, and a backend for states.

      Example of stack.yaml:

      # Define stack itself\nname: k3s-infra\ntemplate: \"./templates/\"\nkind: stack\nbackend: aws-backend\nvariables:\n  bucket: {{ .project.variables.state_bucket_name }} # Using project variables.\n  region: {{ .project.variables.region }}\n  organization: {{ .project.variables.organization }}\n  domain: cluster.dev\n  instance_type: \"t3.medium\"\n  vpc_id: \"vpc-5ecf1234\"\n
      • name: stack name. Required.

      • kind: object kind. stack. Required.

      • backend: name of the backend that will be used to store the states of this stack. Optional.

      • variables: data set for the stack template rendering. See variables.

      • template: either a path to a local directory containing the stack template's configuration files, or a remote Git repository as the stack template source. For more details on stack templates please refer to the Stack Template section. A local path must begin with / for an absolute path, or with ./ or ../ for a relative path. For a Git source, use this format: <GIT_URL>//<PATH_TO_TEMPLATE_DIR>?ref=<BRANCH_OR_TAG>:

        • <GIT_URL> - required. Standard Git repo url. See details on official Git page.
        • <PATH_TO_TEMPLATE_DIR> - optional, use it if the stack template's configuration is not in repo root.
        • <BRANCH_OR_TAG> - optional. Git branch or tag.
      "},{"location":"structure-stack/#examples","title":"Examples","text":"
      template: /path/to/dir # absolute local path\ntemplate: ./template/ # relative local path\ntemplate: ../../template/ # relative local path\ntemplate: https://github.com/shalb/cdev-k8s # https Git url\ntemplate: https://github.com/shalb/cdev-k8s//some/dir/ # subdirectory\ntemplate: https://github.com/shalb/cdev-k8s//some/dir/?ref=branch-name # branch\ntemplate: https://github.com/shalb/cdev-k8s?ref=v1.1.1 # tag\ntemplate: git@github.com:shalb/cdev-k8s.git # ssh Git url\ntemplate: git@github.com:shalb/cdev-k8s.git//some/dir/ # subdirectory\ntemplate: git@github.com:shalb/cdev-k8s.git//some/dir/?ref=branch-name # branch\ntemplate: git@github.com:shalb/cdev-k8s.git?ref=v1.1.1 # tag\n
      "},{"location":"style-guide/","title":"Style guide","text":"

      For a better experience, we recommend using VS Code - we have a list of recommended extensions that prevent many common errors, improve code and save time.

      We use .editorconfig. It fixes basic mistakes on every file saving.

      Please make sure to install pre-commit-terraform with all its dependencies. It checks all changed files when you run git commit for more complex problems and tries to fix them for you.

      "},{"location":"style-guide/#bash","title":"Bash","text":"

      Firstly, please install shellcheck to have vscode-shellcheck extension working properly.

      We use Google Style Guide.

      "},{"location":"style-guide/#terraform","title":"Terraform","text":"

      We use Terraform Best Practices.com code style and conceptions.

      "},{"location":"style-guide/#autogenerated-documentation","title":"Autogenerated Documentation","text":"

      To initialize the module documentation successfully, create a README.md containing:

      <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->\n\n<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->\n

      These markers are needed for the terraform-docs hook, which rewrites everything between them on every .tf file change.

      Then run pre-commit run --all-files or make some changes in any .tf file in the same dir (for ex. variable \"name\" { -> variable \"name\"{).

      "},{"location":"templating/","title":"Templating","text":""},{"location":"templating/#levels-of-templating","title":"Levels of templating","text":"

Cluster.dev uses two-level templating, applied at the project level and at the stack template level.

On the first level, Cluster.dev reads project.yaml and the files with secrets. Then it uses variables from these files to populate and render the files of the current project \u2013 stacks and backends.

On the second level, data from the stack object (the outcome of the first stage) is used to render the stack template files.

      The templating process could be described as follows:

      1. Reading data from project.yaml and secrets.

      2. Using the data to render all yaml files within the project directory.

      3. Reading data from stack.yaml and backend.yaml (the files rendered in p.#2) \u2013 first-level templating.

      4. Downloading specified stack templates.

      5. Rendering the stack templates from data contained in the corresponding stack.yaml files (p.#3) \u2013 second-level templating.

      6. Reading units from the stack templates.

      7. Executing the project.
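The steps above can be sketched with a minimal example (file contents and variable names are illustrative): a global variable from project.yaml is rendered into stack.yaml on the first level, and the resulting stack variable is rendered into the stack template on the second level.

```yaml
# project.yaml (source of first-level data)
name: demo
kind: Project
variables:
  region: eu-central-1
---
# stack.yaml (rendered on the first level, source for the second level)
name: demo-stack
kind: Stack
template: ./template/
variables:
  region: {{ .project.variables.region }}
---
# stack template fragment (rendered on the second level)
units:
  - name: vpc
    type: tfmodule
    inputs:
      region: {{ .variables.region }}
```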

      "},{"location":"units-helm/","title":"Helm Unit","text":"

      Describes Terraform Helm provider invocation.

      In the example below we use helm unit to deploy Argo CD to a Kubernetes cluster:

      units:\n  - name: argocd\n    type: helm\n    source:\n      repository: \"https://argoproj.github.io/argo-helm\"\n      chart: \"argo-cd\"\n      version: \"2.11.0\"\n    pre_hook:\n      command: *getKubeconfig\n      on_destroy: true\n    kubeconfig: /home/john/kubeconfig\n    additional_options:\n      namespace: \"argocd\"\n      create_namespace: true\n    values:\n      - file: ./argo/values.yaml\n        apply_template: true\n    inputs:\n      global.image.tag: v1.8.3\n

In addition to the common options, the following are available:

      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

      • source - map, required. This block describes Helm chart source.

      • chart, repository, version - correspond to options with the same name from helm_release resource. See chart, repository and version.

• kubeconfig - string, required. Path to the kubeconfig file, relative to the directory where the unit is executed.

• provider_version - string, optional. Version of the Terraform helm provider to use. Default: latest. See terraform helm provider.

      • additional_options - map of any, optional. Corresponds to Terraform helm_release resource options. Will be passed as is.

      • values - array, optional. List of values files in raw yaml to be passed to Helm. Values will be merged, in order, as Helm does with multiple -f options.

        • file - string, required. Path to the values file.

        • apply_template - bool, optional. Defines whether a template should be applied to the values file. By default is set to true.

• inputs - map of any, optional. A map that represents Terraform helm_release sets. This block allows using the functions remoteState and insertYAML. For example:

  inputs:\n    global.image.tag: v1.8.3\n    service.type: LoadBalancer\n\nCorresponds to:\n\n  set {\n    name = "global.image.tag"\n    value = "v1.8.3"\n  }\n  set {\n    name = "service.type"\n    value = "LoadBalancer"\n  }\n
      "},{"location":"units-k8s-manifest/","title":"K8s-manifest Unit","text":"

      Applies Kubernetes resources from manifests.

      Example:

      - name: kubectl-test2\n  type: k8s-manifest\n  namespace: default\n  create_namespaces: true\n  path: ./manifests/\n  apply_template: true\n  kubeconfig: {{ output \"this.kubeconfig.kubeconfig_path\" }}\n  kubectl_opts: \"--wait=true\"\n
      "},{"location":"units-k8s-manifest/#options","title":"Options","text":"
      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

      • namespace - optional. Corresponds to kubectl -n.

      • create_namespaces - bool, optional. By default is false. If set to true, cdev will create namespaces required for the unit (i.e. the namespaces listed in manifests and the one specified within the namespace option), in case they don't exist.

• path - required, string. Indicates the resources to be applied: a file (for a file path), a directory applied recursively (for a directory path), or a URL. In the case of a URL, the unit downloads the resources from the link and then applies them.

        Example of URL path:

        - name: kubectl-test2\n  type: k8s-manifest\n  namespace: default\n  path: https://git.io/vPieo\n  kubeconfig: {{ output \"this.kubeconfig.kubeconfig_path\" }}\n
      • apply_template - bool. By default is set to true. See Templating usage below.

      • kubeconfig - optional. Specifies the path to a kubeconfig file. See How to get kubeconfig subsection below.

      • kubectl_opts - optional. Lists additional arguments of the kubectl command.

      "},{"location":"units-k8s-manifest/#templating-usage","title":"Templating usage","text":"

As manifests are part of a stack template, they also support templating. Enabling the apply_template option lets you use templating in all Kubernetes manifests located under the specified path.

      "},{"location":"units-k8s-manifest/#how-to-get-kubeconfig","title":"How to get kubeconfig","text":"

      There are several ways to get a kubeconfig from a cluster and pass it to the units that require it (for example, helm, K8s-manifest). The recommended way is to use the shell unit with the option force_apply. Here is an example of such unit:

      - name: kubeconfig\n  type: shell\n  force_apply: true\n  depends_on: this.k3s\n  apply:\n    commands:\n      - aws s3 cp s3://{{ .variables.bucket }}/{{ .variables.cluster_name }}/kubeconfig /tmp/kubeconfig_{{ .variables.cluster_name }}\n      - echo \"kubeconfig_base64=$(cat /tmp/kubeconfig_{{ .variables.cluster_name }} | base64 -w 0)\"\n      - echo \"kubeconfig_path=/tmp/kubeconfig_{{ .variables.cluster_name }}\"\n  outputs:\n    type: separator\n    separator: \"=\"\n

In the example above, the k3s unit (the one referred to) will deploy a Kubernetes cluster in AWS and place a kubeconfig file in an S3 bucket. The kubeconfig unit will then download the kubeconfig from the bucket and place it in the /tmp directory.

      The kubeconfig can then be passed as an output to other units:

      - name: cert-manager-issuer\n  type: k8s-manifest\n  path: ./cert-manager/issuer.yaml\n  kubeconfig: {{ output \"this.kubeconfig.kubeconfig_path\" }}\n

An alternative (but not recommended) way is to create a yaml anchor in a stack template that holds the required command:

      _: &getKubeconfig \"rm -f ../kubeconfig_{{ .name }}; aws eks --region {{ .variables.region }} update-kubeconfig --name {{ .name }} --kubeconfig ../kubeconfig_{{ .name }}\"\n

      and execute it with a pre-hook in each unit:

      - name: cert-manager-issuer\n  type: kubernetes\n  source: ./cert-manager/\n  provider_version: \"0.6.0\"\n  config_path: ../kubeconfig_{{ .name }}\n  depends_on: this.cert-manager\n  pre_hook:\n    command: *getKubeconfig\n    on_destroy: true\n    on_plan: true\n
      "},{"location":"units-kubernetes/","title":"Kubernetes Unit","text":"

      Info

      This unit is deprecated and will be removed soon. Please use the k8s-manifest unit instead.

      Describes Terraform kubernetes provider invocation.

      Example:

      units:\n  - name: argocd_apps\n    type: kubernetes\n    provider_version: \"0.2.1\"\n    source: ./argocd-apps/app1.yaml\n    kubeconfig: ../kubeconfig\n    depends_on: this.argocd\n
      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

• source - string, required. Path to a Kubernetes manifest that will be converted into a representation of the kubernetes-alpha provider. The source file will be rendered with the stack template and also allows using the functions remoteState and insertYAML.

• kubeconfig - string, required. Path to the kubeconfig file, relative to the directory where the unit is executed.

• provider_version - string, optional. Version of the Terraform kubernetes-alpha provider to use. Default: latest. See terraform kubernetes-alpha provider.
      "},{"location":"units-overview/","title":"Overview","text":""},{"location":"units-overview/#description","title":"Description","text":"

Units are the building blocks that stack templates are made of. A unit could be anything \u2014 a Terraform module, a Helm chart you want to install, or a Bash script that you want to run. Units can be remote or stored in the same repo with other Cluster.dev code. Units may contain references to other files that are required for their work. These files should be located inside the current directory (within the stack template's context). As some of the files will also be rendered with the project's data, you can use Go templates in them.

      Tip

      You can pass variables across units within the stack template by using outputs or remoteState.

      All units described below have a common format and common fields. Base example:

        - name: k3s\n    type: tfmodule\n    depends_on:\n      - this.unit1_name\n      - this.unit2_name\n#   depends_on: this.unit1_name # is allowed to use string for single, or list for multiple dependencies\n    pre_hook:\n      command: \"echo pre_hook\"\n      # script: \"./scripts/hook.sh\"\n      on_apply: true\n      on_destroy: false\n      on_plan: false\n    post_hook:\n      # command: \"echo post_hook\"\n      script: \"./scripts/hook.sh\"\n      on_apply: true\n      on_destroy: false\n      on_plan: false\n
      • name - unit name. Required.

      • type - unit type. One of: shell, tfmodule, helm, kubernetes, printer.

• depends_on - string or list of strings. One or multiple unit dependencies in the format \"stack_name.unit_name\". Since the name of the stack is unknown inside the stack template, you can use \"this\" instead: \"this.unit_name\".

      • pre_hook and post_hook blocks: See the description in Shell unit.

      "},{"location":"units-passing-variables/","title":"Units passing variables","text":"

      Note

When passing outputs across units within one stack template, use \"this\" instead of the stack name: {{ output \"this.unit_name.output\" }}.

      Example of passing variables across units in the stack template:

      name: s3-static-web\nkind: StackTemplate\nunits:\n  - name: s3-web\n    type: tfmodule\n    source: \"terraform-aws-modules/s3-bucket/aws\"\n    providers:\n    - aws:\n        region: {{ .variables.region }}\n    inputs:\n      bucket: {{ .variables.name }}\n      force_destroy: true\n      acl: \"public-read\"\n  - name: outputs\n    type: printer\n    outputs:\n      bucket_name: {{ remoteState \"this.s3-web.s3_bucket_website_endpoint\" }}\n      name: {{ .variables.name }}\n
      "},{"location":"units-printer/","title":"Printer Unit","text":"

      This unit exposes outputs that can be used in other units and stacks.

      Tip

If the unit is named outputs, all of its outputs will be displayed when running cdev apply or cdev output.

      Example:

      units:\n  - name: outputs\n    type: printer\n    outputs:\n      bucket_name: \"Endpoint: {{ remoteState \"this.s3-web.s3_bucket_website_endpoint\" }}\"\n      name: {{ .variables.name }}\n
• outputs - any, required. A map that represents data to be printed in the log. The block allows using the functions remoteState and insertYAML.

      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

      "},{"location":"units-shell/","title":"Shell Unit","text":"

      Executes Shell commands and scripts.

Example of a shell unit that creates an index.html file with a greeting message and uploads the file to an S3 bucket. The bucket name is passed as a variable:

      units:\n  - name: upload-web\n    type: shell\n    apply:\n      commands:\n        - aws s3 cp ./index.html s3://{{ .variables.name }}/index.html\n    create_files:\n    - file: ./index.html\n      content: |\n        <h1> Hello from {{ .variables.organization }} </h1>\n        This page was created automatically by cdev tool.\n

      Complete reference example:

      units:\n  - name: my-tf-code\n    type: shell\n    env: \n      AWS_PROFILE: {{ .variables.aws_profile }}\n      TF_VAR_region: {{ .project.region }}\n    create_files:\n      - file: ./terraform.tfvars\n        content: |\n{{- range $key, $value := .variables.tfvars }}\n        $key = \"$value\" \n{{- end}}\n    work_dir: ~/env/prod/\n    apply: \n      commands:\n        - terraform apply -var-file terraform.tfvars {{ range $key, $value := .variables.vars_list }} -var=\"$key=$value\"{{ end }}\n    plan:\n      commands:\n        - terraform plan\n    destroy:\n      commands:\n        - terraform destroy\n        - rm ./.terraform\n    outputs: # how to get outputs\n      type: json (regexp, separator)\n      regexp_key: \"regexp\"\n      regexp_value: \"regexp\"\n      separator: \"=\"\n      command: terraform output -json\n    create_files:\n        - file: ./my_text_file.txt\n          mode: 0644\n          content: \"some text\"\n        - file: ./my_text_file2.txt\n          content: \"some text 2\"\n
      "},{"location":"units-shell/#options","title":"Options","text":"
      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

• env - map, optional. Environment variables to export before executing the commands of this unit. Variables defined in the shell unit take priority over variables defined in the project (the exports option) and will override them.

      • work_dir - string, required. The working directory within which the code of the unit will be executed.

      • apply - optional, map. Describes commands to be executed when running cdev apply.

        • init - optional. Describes commands to be executed prior to running cdev apply.

        • commands - list of strings, required. The list of commands to be executed when running cdev apply.

      • plan - optional, map. Describes commands to be executed when running cdev plan.

        • init - optional. Describes commands to be executed prior to running cdev plan.

        • commands - list of strings, required. The list of commands to be executed when running cdev plan.

      • destroy - optional, map. Describes commands to be executed when running cdev destroy.

        • init - optional. Describes commands to be executed prior to running cdev destroy.

        • commands - list of strings, required. The list of commands to be executed when running cdev destroy.

      • outputs - optional, map. Describes how to get outputs from a command.

  • type - string, required. The format used to parse the output. One of 3 options: JSON, regexp, separator. Depending on the type specified, further options differ.

  • JSON - if the type is defined as JSON, the output is parsed as key-value JSON. This type requires no other options.

  • regexp - if the type is defined as regexp, an additional required option regexp is introduced: a regular expression that defines how to parse each line of the unit output. Example:

          outputs: # how to get outputs\n  type: regexp\n  regexp: \"^(.*)=(.*)$\"\n  command: | \n  echo \"key1=val1\\nkey2=val2\"\n
  • separator - if the type is defined as separator, an additional option separator (string) is introduced: a symbol that divides each line into two parts, the key and the value.

          outputs: # how to get outputs\n  type: separator\n  separator: \"=\"\n  command: |\n  echo \"key1=val1\\nkey2=val2\"\n
  • command - string, optional. The command to take the outputs from. It is used regardless of the type option. If the command is not defined, cdev takes the outputs from the apply command.
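As an illustration of how separator-type parsing works (a sketch, not cdev's actual implementation), each output line is split on the first occurrence of the separator:

```python
def parse_outputs(text: str, separator: str = "=") -> dict:
    """Split each output line on the first separator into a key/value pair."""
    result = {}
    for line in text.splitlines():
        if separator in line:
            key, _, value = line.partition(separator)
            result[key.strip()] = value.strip()
    return result

print(parse_outputs("key1=val1\nkey2=val2"))  # {'key1': 'val1', 'key2': 'val2'}
```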

• create_files - list of files, optional. The list of files that are saved in the state so that changes to them are tracked.

      • pre_hook and post_hook blocks: describe the shell commands to be executed before and after the unit, respectively. The commands will be executed in the same context as the actions of the unit. Environment variables are common to the shell commands, the pre_hook and post_hook scripts, and the unit execution. You can export a variable in the pre_hook and it will be available in the post_hook or in the unit.

  • command - string. A shell command in text format, executed as bash -c \"command\". Can be used if the \"script\" option is not set. One of command or script is required.

  • script - string. Path to a shell script file, relative to the template directory. Can be used if the \"command\" option is not set. One of command or script is required.

  • on_apply - bool, optional. Enables/disables the hook when the unit is applied. Default: \"true\".

  • on_destroy - bool, optional. Enables/disables the hook when the unit is destroyed. Default: \"false\".

  • on_plan - bool, optional. Enables/disables the hook when the unit plan is executed. Default: \"false\".

      "},{"location":"units-terraform/","title":"Tfmodule Unit","text":"

      Describes direct invocation of Terraform modules.

      In the example below we use the tfmodule unit to create an S3 bucket for hosting a static web page. The tfmodule unit applies a dedicated Terraform module.

      units:\n  - name: s3-web\n    type: tfmodule\n    version: \"2.77.0\"\n    source: \"terraform-aws-modules/s3-bucket/aws\"\n    providers:\n    - aws:\n        region: {{ .variables.region }}\n    inputs:\n      bucket: {{ .variables.name }}\n      force_destroy: true\n      acl: \"public-read\"\n      website:\n        index_document: \"index.html\"\n        error_document: \"index.html\"\n      attach_policy: true\n      policy: |\n        {\n            \"Version\": \"2012-10-17\",\n            \"Statement\": [\n                {\n                    \"Sid\": \"PublicReadGetObject\",\n                    \"Effect\": \"Allow\",\n                    \"Principal\": \"*\",\n                    \"Action\": \"s3:GetObject\",\n                    \"Resource\": \"arn:aws:s3:::{{ .variables.name }}/*\"\n                }\n            ]\n        }\n

In addition to the common options, the following are available:

• source - string, required. Terraform module source. Local folders are not allowed in source!

      • version - string, optional. Module version.

• inputs - map of any, required. A map that corresponds to the input variables defined by the module. This block allows using the functions remoteState and insertYAML.

      • force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

      "},{"location":"variables/","title":"Variables","text":"

A stack configuration contains variables that will be applied to a stack template (similar to values.yaml in Helm or a tfvars file in Terraform). The variables from stack.yaml are passed to the stack template files to render them.

      Example of stack.yaml with variables region, name, organization:

      name: k3s-demo\ntemplate: https://github.com/shalb/cdev-s3-web\nkind: Stack\nvariables:\n  region: eu-central-1\n  name: web-static-page\n  organization: Cluster.dev\n

      The values of the variables are passed to a stack template to configure the resulting infrastructure.

      "},{"location":"variables/#passing-variables-across-stacks","title":"Passing variables across stacks","text":"

Cluster.dev allows passing variable values across different stacks within one project. This can be done in two ways:

      • using the output of one stack as an input for another stack: {{ output \"stack_name.unit_name.output\" }}

      Example of passing outputs between stacks:

      name: s3-web-page\ntemplate: ../web-page/\nkind: Stack\nvariables:\n  region: eu-central-1\n  name: web-static-page\n  organization: Shalb\n
      name: health-check\ntemplate: ../health-check/\nkind: Stack\nvariables:\n  url: {{ output \"s3-web-page.outputs.url\" }}\n
      • using remoteState with a syntax: {{ remoteState \"stack_name.unit_name.output\" }}
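For example, the same cross-stack value could be consumed via remoteState (the stack, unit, and output names here are illustrative):

```yaml
name: health-check
template: ../health-check/
kind: Stack
variables:
  # illustrative reference: stack "s3-web-page", unit "s3-web", output "s3_bucket_website_endpoint"
  url: {{ remoteState "s3-web-page.s3-web.s3_bucket_website_endpoint" }}
```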
      "},{"location":"variables/#global-variables","title":"Global variables","text":"

The variables defined on the project level are called global. Global variables are listed in the project.yaml \u2013 a configuration file that defines the parameters and settings for the whole project. From the project.yaml the variable values can be applied to all stack and backend objects within this project.

      Example of the project.yaml file that contains variables organization and region:

      name: demo\nkind: Project\nvariables:\n  organization: shalb\n  region: eu-central-1\n

To refer to a variable in stack and backend files, use the {{ .project.variables.KEY_NAME }} syntax, where project.variables is the path that corresponds to the structure of variables in the project.yaml. KEY_NAME stands for the variable name defined in the project.yaml and will be replaced by its value.

      Example of the stack.yaml file that contains reference to the project variables organization and region:

      name: eks-demo\ntemplate: https://github.com/shalb/cdev-aws-eks?ref=v0.2.0\nkind: Stack\nbackend: aws-backend\nvariables:\n  region: {{ .project.variables.region }}\n  organization: {{ .project.variables.organization }}\n  domain: cluster.dev\n  instance_type: \"t3.medium\"\n  eks_version: \"1.20\"\n

      Example of the rendered stack.yaml:

      name: eks-demo\ntemplate: https://github.com/shalb/cdev-aws-eks?ref=v0.2.0\nkind: Stack\nbackend: aws-backend\nvariables:\n  region: eu-central-1\n  organization: shalb\n  domain: cluster.dev\n  instance_type: \"t3.medium\"\n  eks_version: \"1.20\"\n
      "}]} \ No newline at end of file diff --git a/sitemap.xml b/sitemap.xml new file mode 100644 index 00000000..1fa04e3b --- /dev/null +++ b/sitemap.xml @@ -0,0 +1,253 @@ + + + + https://docs.cluster.dev/ + 2023-10-31 + daily + + + https://docs.cluster.dev/DevOpsDays21/ + 2023-10-31 + daily + + + https://docs.cluster.dev/ROADMAP/ + 2023-10-31 + daily + + + https://docs.cluster.dev/azure-cloud-provider/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cdev-vs-helmfile/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cdev-vs-pulumi/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cdev-vs-terraform/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cdev-vs-terragrunt/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cli-commands/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cli-options/ + 2023-10-31 + daily + + + https://docs.cluster.dev/cluster-state/ + 2023-10-31 + daily + + + https://docs.cluster.dev/env-variables/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-aws-eks/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-aws-k3s-prometheus/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-aws-k3s/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-develop-stack-template/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-do-k8s/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-gcp-gke/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-modify-aws-eks/ + 2023-10-31 + daily + + + https://docs.cluster.dev/examples-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/generators-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-cdev-aws/ + 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-cdev-azure/ + 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-cdev-gcp/ + 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-cdev-helm/ + 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-create-project/ 
+ 2023-10-31 + daily + + + https://docs.cluster.dev/get-started-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/google-cloud-provider/ + 2023-10-31 + daily + + + https://docs.cluster.dev/how-does-cdev-work/ + 2023-10-31 + daily + + + https://docs.cluster.dev/howto-tf-versions/ + 2023-10-31 + daily + + + https://docs.cluster.dev/installation-upgrade/ + 2023-10-31 + daily + + + https://docs.cluster.dev/stack-templates-functions/ + 2023-10-31 + daily + + + https://docs.cluster.dev/stack-templates-list/ + 2023-10-31 + daily + + + https://docs.cluster.dev/stack-templates-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/structure-backend/ + 2023-10-31 + daily + + + https://docs.cluster.dev/structure-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/structure-project/ + 2023-10-31 + daily + + + https://docs.cluster.dev/structure-secrets/ + 2023-10-31 + daily + + + https://docs.cluster.dev/structure-stack/ + 2023-10-31 + daily + + + https://docs.cluster.dev/style-guide/ + 2023-10-31 + daily + + + https://docs.cluster.dev/templating/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-helm/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-k8s-manifest/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-kubernetes/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-overview/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-passing-variables/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-printer/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-shell/ + 2023-10-31 + daily + + + https://docs.cluster.dev/units-terraform/ + 2023-10-31 + daily + + + https://docs.cluster.dev/variables/ + 2023-10-31 + daily + + \ No newline at end of file diff --git a/sitemap.xml.gz b/sitemap.xml.gz new file mode 100644 index 00000000..62a8cb23 Binary files /dev/null and b/sitemap.xml.gz differ diff --git a/stack-templates-functions/index.html b/stack-templates-functions/index.html new file mode 100644 
index 00000000..7fc50ba4 --- /dev/null +++ b/stack-templates-functions/index.html @@ -0,0 +1,1902 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Functions - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Functions

      +

      You can use basic Go template language and Sprig functions to modify the text of a stack template.

      +

      Additionally, you can use some enhanced functions that are listed below. These functions are integrated with the yaml syntax and can't be used everywhere.

      +

      insertYAML

      +

Allows passing a yaml block as a value into the target yaml template.

      +

Argument: the data to pass, i.e. any value or a reference to a block.

      +

      Allowed use: only as full yaml value, in unit inputs. Example:

      +

      Source yaml:

      +
        values:
      +    node_groups:
      +      - name: ng1
      +        min_size: 1
      +        max_size: 5
      +      - name: ng2
      +        max_size: 2
      +        type: spot
      +
      +

      Target yaml template:

      +
        units:
      +    - name: k3s
      +      type: tfmodule
      +      node_groups: {{ insertYAML .values.node_groups }}
      +
      +

      Rendered stack template:

      +
        units:
      +    - name: k3s
      +      type: tfmodule
      +      node_groups:
      +        - name: ng1
      +          min_size: 1
      +          max_size: 5
      +        - name: ng2
      +          max_size: 2
      +          type: spot
      +
      +

      remoteState

      +

Allows passing data across units and stacks; can be used in pre/post hooks.

      +

Argument: string, a path to remote state consisting of 3 parts separated by dots: "stack_name.unit_name.output_name". Since the name of the stack is unknown inside the stack template, you can use "this" instead: "this.unit_name.output_name".

      +

      Allowed use:

      +
        +
      • +

        all units types: in inputs;

        +
      • +
      • +

        all units types: in units pre/post hooks;

        +
      • +
      • +

        in Kubernetes modules: in Kubernetes manifests.

        +
      • +
      +

      cidrSubnet

      +

Calculates a subnet address within a given IP network address prefix. Works the same as the Terraform function. Example:

      +

      Source: +

        {{ cidrSubnet "172.16.0.0/12" 4 2 }}
      +

      +

      Rendered: +

        172.18.0.0/16
      +
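To sanity-check this result outside of Cluster.dev, the same computation can be reproduced with Python's standard library (a cross-check sketch, not part of cdev):

```python
import ipaddress

def cidr_subnet(prefix: str, newbits: int, netnum: int) -> str:
    """Mimics Terraform's cidrsubnet(prefix, newbits, netnum)."""
    network = ipaddress.ip_network(prefix)
    # Extend the prefix by `newbits` bits and take subnet number `netnum`.
    return str(list(network.subnets(prefixlen_diff=newbits))[netnum])

print(cidr_subnet("172.16.0.0/12", 4, 2))  # 172.18.0.0/16
```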

      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/stack-templates-list/index.html b/stack-templates-list/index.html new file mode 100644 index 00000000..6222afbb --- /dev/null +++ b/stack-templates-list/index.html @@ -0,0 +1,1717 @@ + + + + + + + + + + + + + + + + + + + + + Stack Templates List - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Stack Templates List

      +

Currently, three types of stack templates are available:

      + +

      For more information on the templates please refer to the Examples section.

      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/stack-templates-overview/index.html b/stack-templates-overview/index.html new file mode 100644 index 00000000..f06d0750 --- /dev/null +++ b/stack-templates-overview/index.html @@ -0,0 +1,1829 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Overview

      +

      cdev template diagram

      +

      Description

      +

A stack template is a yaml file that tells Cluster.dev which units to run and how. It is a core Cluster.dev resource that accounts for its flexibility. Stack templates use the Go template language to allow you to customise and select the units you want to run.

      +

The stack template's config files are stored within the stack template directory, which can be located either locally or in a Git repo. Cluster.dev reads all ./*.yaml files from the directory (non-recursively), renders the stack template with the project's data, parses the yaml, and loads units - the most primitive elements of a stack template.

      +

      A stack template represents a yaml structure with an array of different invocation units. Common view:

      +
      units:
      +  - unit1
      +  - unit2
      +  - unit3
      +  ...
      +
      +

Stack templates can utilize all kinds of Go templates and Sprig functions (similar to Helm). They are also enhanced with functions like insertYAML that can pass yaml blocks directly.
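As a sketch of Sprig usage inside a stack template (the variable names and default value here are assumptions, not part of any real template):

```yaml
units:
  - name: web-{{ .variables.env | default "dev" }}  # Sprig "default" fills a missing variable
    type: tfmodule
    source: "terraform-aws-modules/s3-bucket/aws"
    inputs:
      bucket: {{ .variables.name | lower | quote }}  # Sprig "lower" and "quote"
```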

      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/structure-backend/index.html b/structure-backend/index.html new file mode 100644 index 00000000..a9c78c0c --- /dev/null +++ b/structure-backend/index.html @@ -0,0 +1,1977 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Backends - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Backends

      +

File: searched for in ./*.yaml files. Optional.

      +

Backend is an object that describes backend storage for Terraform and Cluster.dev states. A backend could be local or remote, depending on where it stores the state.

      +

You can use any options of Terraform backends in the remote backend configuration. The options will be mapped to a generated Terraform backend and passed through as is.

      +

      Local backend

      +

      Local backend stores the cluster state on a local file system in the .cluster.dev/states/cdev-state.json file. Cluster.dev will use the local backend by default unless the remote backend is specified in the project.yaml.

      +

      Example configuration:

      +
      name: my-fs
      +kind: backend
      +provider: local
      +spec:
      +  path: /home/cluster.dev/states/
      +
      +

The path should be absolute or relative to the directory where cdev is running. An absolute path must begin with /, and a relative path with ./ or ../.
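For instance, the same backend with a relative path (the directory name is illustrative):

```yaml
name: my-fs-relative
kind: backend
provider: local
spec:
  path: ./states/
```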

      +

      Remote backend

      +

Remote backend uses remote cloud services to store the cluster state, making it accessible for teamwork.

      +

      s3

      +

Stores the cluster state in an AWS S3 bucket. The s3 backend supports all options of the Terraform S3 backend.

      +
      name: aws-backend
      +kind: backend
      +provider: s3
      +spec:
      +  bucket: cdev-states
      +  region: {{ .project.variables.region }}
      +
      +

      azurerm

      +

      Stores the cluster state in Microsoft Azure cloud. The azurerm backend supports all options of Terraform azurerm backend.

      +
      name: azurerm-b
      +kind: backend
      +provider: azurerm
      +spec:
      +  resource_group_name: "StorageAccount-ResourceGroup"
      +  storage_account_name: "example"
      +  container_name: "cdev-states"
      +
      +

      gcs

      +

Stores the cluster state in Google Cloud Storage. The gcs backend supports all options of the Terraform gcs backend.

      +
      name: gcs-b
      +kind: backend
      +provider: gcs
      +spec:
      +  bucket: cdev-states
      +  prefix: pref
      +
      +

      Digital Ocean Spaces and MinIO

      +

To use DO Spaces or MinIO object storage as a backend, use the s3 backend provider with additional options. See details:

      + +

      DO Spaces example:

      +
      name: do-backend
      +kind: Backend
      +provider: s3
      +spec:
      +  bucket: cdev-state
      +  region: main
+  access_key: "<SPACES_ACCESS_TOKEN>" # Optional, it's better to use the environment variable 'export SPACES_ACCESS_TOKEN="token"'
+  secret_key: "<SPACES_SECRET_KEY>" # Optional, it's better to use the environment variable 'export SPACES_SECRET_KEY="key"'
      +  endpoint: "https://sgp1.digitaloceanspaces.com"
      +  skip_credentials_validation: true
      +  skip_region_validation: true
      +  skip_metadata_api_check: true
      +
      +

      MinIO example:

      +
      name: minio-backend
      +kind: Backend
      +provider: s3
      +spec:
      +  bucket: cdev-state
      +  region: main
      +  access_key: "minioadmin"
      +  secret_key: "minioadmin"
      +  endpoint: http://127.0.0.1:9000
      +  skip_credentials_validation: true
      +  skip_region_validation: true
      +  skip_metadata_api_check: true
      +  force_path_style: true
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/structure-overview/index.html b/structure-overview/index.html new file mode 100644 index 00000000..ff2363f2 --- /dev/null +++ b/structure-overview/index.html @@ -0,0 +1,1839 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Overview

      +

      Main objects

      +

Unit – a block that executes Terraform modules, Helm charts, Kubernetes manifests, Terraform code, or Bash scripts. A unit can source input variables from configuration (stacks) and from the outputs of other units, and can produce outputs that can be used by other units.

      +

Stack template – a set of units linked together into one infrastructure pattern (describes the whole infrastructure). You can think of it as a complex Helm chart or a composite Terraform module.

      +

Stack – a set of variables that will be applied to a stack template (like values.yaml in Helm or a tfvars file in Terraform). It is used to configure the resulting infrastructure.

      +

Project – a high-level metaobject that can arrange multiple stacks and keep global variables. An infrastructure can consist of multiple stacks, while a project acts like an umbrella object for these stacks.

      +

      Helper objects

      +

Backend – describes the location where Cluster.dev hosts its own state and could also store Terraform unit states.

      +

Secret – an object that contains sensitive data such as a password, a token, or a key. It is used to pass secret values to the tools that don't have proper support for secret engines.

      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/structure-project/index.html b/structure-project/index.html new file mode 100644 index 00000000..e7af08e6 --- /dev/null +++ b/structure-project/index.html @@ -0,0 +1,1796 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Project - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Project

      +

Project is a storage for variables related to all stacks. It is a high-level abstraction used to store and reconcile different stacks, and to pass values across them.

      +

File: project.yaml. Optional. Represents a set of configuration options for the whole project. Contains global project variables that can be used in other configuration objects, such as backend or stack (except secrets). Note that the project.yaml file itself is not rendered with the template, so you cannot use templating expressions in it.

      +

      Example of project.yaml:

      +
      name: my_project
      +kind: project
      +backend: aws-backend
      +variables:
      +  organization: shalb
      +  region: eu-central-1
      +  state_bucket_name: cdev-states
      +exports:
      +  AWS_PROFILE: cluster-dev  
      +
      +
• name: project name. Required.

• kind: object kind. Must be set as project. Required.

• backend: name of the backend that will be used to store the Cluster.dev state of the current project. Optional.

• variables: a set of data in yaml format that can be referenced in other configuration objects. For the example above, a reference to the organization name will look like this: {{ .project.variables.organization }}.

• exports: a list of environment variables that will be exported while working with the project. Optional.
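For illustration, a stack file could consume the project variables defined above (the stack name and template path are hypothetical):

```yaml
name: demo-stack
kind: stack
backend: aws-backend
template: ./templates/
variables:
  organization: {{ .project.variables.organization }}
  region: {{ .project.variables.region }}
  bucket: {{ .project.variables.state_bucket_name }}
```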
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/structure-secrets/index.html b/structure-secrets/index.html new file mode 100644 index 00000000..e8f70af3 --- /dev/null +++ b/structure-secrets/index.html @@ -0,0 +1,1909 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Secrets - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Secrets

      +

Secret is an object that contains sensitive data such as a password, a token, or a key. It is used to pass secret values to the tools that don't have proper support for secret engines.

      +

      Cluster.dev allows for two ways of working with secrets.

      +

      SOPS secrets

      +

      See SOPS installation instructions in official repo.

      +

Secrets are encrypted/decrypted with the SOPS utility that supports AWS KMS, GCP KMS, Azure Key Vault and PGP keys. How to use:

      +
1. Use the Cluster.dev console client to create a new secret from scratch:

   cdev secret create

2. Use the interactive menu to create a secret.

3. Edit the secret and set the secret data in the encrypted_data: section.

4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).
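A generated SOPS secret file might look roughly like this. This is a sketch only: the driver name and field layout are assumptions, so check the file that cdev secret create actually generates:

```yaml
# Illustrative sketch of a SOPS-driven secret file.
name: my-sops-secret
kind: secret
driver: sops
spec:
  encrypted_data:
    password: ENC[AES256_GCM,data:...,tag:...]  # value encrypted by SOPS
```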

      AWS Secrets Manager

      +

The Cluster.dev client can use AWS Secrets Manager as secret storage. How to use:

      +
1. Create a new secret in AWS Secrets Manager using AWS CLI or web console. Both raw and JSON data formats are supported.

2. Use the Cluster.dev console client to create a new secret from scratch:

   cdev secret create

3. Answer the questions. For the Name of secret in AWS Secrets Manager, enter the name of the AWS secret created above.

4. Use references to the secret data in a stack template (you can find the examples in the generated secret file).
      +

      To list and edit any secret, use the commands:

      +
      cdev secret ls
      +
      +

      and

      +
      cdev secret edit secret_name
      +
      +

      Secrets reference

      +

You can refer to secret data in stack files with the {{ .secrets.secret_name.secret_key }} syntax.

      +

      For example, we have a secret in AWS Secrets Manager and want to refer to the secret in our stack.yaml:

      +
      name: my-aws-secret
      +kind: Secret
      +driver: aws_secretmanager
      +spec: 
      +    region: eu-central-1
      +    aws_secret_name: pass
      +
      +

In order to do this, we reference the secret as {{ .secrets.my-aws-secret.some-key }} in the stack.yaml:

      +
      name: my-stack
      +template: https://<template.git.url>
      +kind: Stack
      +variables:
      +  region: eu-central-1
      +  name: my-test-stack
      +  password: {{ .secrets.my-aws-secret.some-key }}
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/structure-stack/index.html b/structure-stack/index.html new file mode 100644 index 00000000..50a6b5ed --- /dev/null +++ b/structure-stack/index.html @@ -0,0 +1,1869 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Stack - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Stack

      +

Stack is a yaml file that tells Cluster.dev which template to use and which variables to apply to this template. Usually, users have multiple stacks that reflect their environments or tenants and point to the same template with different variables.

      +

File: searched for in ./*.yaml files. At least one is required. The stack object (kind: stack) contains a reference to a stack template, variables to render the template and a backend for states.

      +

      Example of stack.yaml:

      +
      # Define stack itself
      +name: k3s-infra
      +template: "./templates/"
      +kind: stack
      +backend: aws-backend
      +variables:
      +  bucket: {{ .project.variables.state_bucket_name }} # Using project variables.
      +  region: {{ .project.variables.region }}
      +  organization: {{ .project.variables.organization }}
      +  domain: cluster.dev
      +  instance_type: "t3.medium"
      +  vpc_id: "vpc-5ecf1234"
      +
      +
• name: stack name. Required.

• kind: object kind. Must be set as stack. Required.

• backend: name of the backend that will be used to store the states of this stack. Optional.

• variables: data set for the stack template rendering. See variables.

• template: either a path to a local directory containing the stack template's configuration files, or a remote Git repository as the stack template source. For more details on stack templates, please refer to the Stack Template section. A local path must begin with / for an absolute path, or with ./ or ../ for a relative path. For a Git source, use this format: <GIT_URL>//<PATH_TO_TEMPLATE_DIR>?ref=<BRANCH_OR_TAG>:

  • <GIT_URL> - required. Standard Git repo url. See details on the official Git page.

  • <PATH_TO_TEMPLATE_DIR> - optional, use it if the stack template's configuration is not in the repo root.

  • <BRANCH_OR_TAG> - optional. Git branch or tag.
      +

      Examples

      +
      template: /path/to/dir # absolute local path
      +template: ./template/ # relative local path
      +template: ../../template/ # relative local path
      +template: https://github.com/shalb/cdev-k8s # https Git url
      +template: https://github.com/shalb/cdev-k8s//some/dir/ # subdirectory
      +template: https://github.com/shalb/cdev-k8s//some/dir/?ref=branch-name # branch
      +template: https://github.com/shalb/cdev-k8s?ref=v1.1.1 # tag
      +template: git@github.com:shalb/cdev-k8s.git # ssh Git url
      +template: git@github.com:shalb/cdev-k8s.git//some/dir/ # subdirectory
      +template: git@github.com:shalb/cdev-k8s.git//some/dir/?ref=branch-name # branch
      +template: git@github.com:shalb/cdev-k8s.git?ref=v1.1.1 # tag
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/style-guide/index.html b/style-guide/index.html new file mode 100644 index 00000000..328dafbb --- /dev/null +++ b/style-guide/index.html @@ -0,0 +1,1761 @@ + + + + + + + + + + + + + + + + + + + + + Style guide - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Style guide

      +

For a better experience, we recommend using VS Code - we have a list of recommended extensions that prevent many common errors, improve code and save time.

      +

We use .editorconfig. It fixes basic mistakes on every file save.

      +

Please make sure to install pre-commit-terraform with all its dependencies. When you run git commit, it checks all changed files for more complex problems and tries to fix them for you.

      +

      Bash

      +

      Firstly, please install shellcheck to have vscode-shellcheck extension working properly.

      +

      We use Google Style Guide.

      +

      Terraform

      +

We use the Terraform Best Practices code style and concepts.

      +

      Autogenerated Documentation

      +

For successful module documentation initialization, you need to create a README.md containing:

      +
      <!-- BEGINNING OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
      +
      +<!-- END OF PRE-COMMIT-TERRAFORM DOCS HOOK -->
      +
      +

It is needed for the terraform-docs hooks. The hook rewrites everything between these markers on every .tf file change.

      +

      Then run pre-commit run --all-files or make some changes in any .tf file in the same dir (for ex. variable "name" { -> variable "name"{).

      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/templating/index.html b/templating/index.html new file mode 100644 index 00000000..fe8f7a1f --- /dev/null +++ b/templating/index.html @@ -0,0 +1,1846 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Templating - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Templating

      +

      Levels of templating

      +

Cluster.dev has two-level templating, applied at the project level and at the stack template level.

      +

      +

      On the first level Cluster.dev reads a project.yaml and files with secrets. Then it uses variables from these files to populate and render files from the current project – stacks and backends.

      +

On the second level, data from the stack object (an outcome of the first stage) is used to render stack template files.

      +

      The templating process could be described as follows:

      +
1. Reading data from project.yaml and secrets.

2. Using the data to render all yaml files within the project directory.

3. Reading data from stack.yaml and backend.yaml (the files rendered in step 2) – first-level templating.

4. Downloading the specified stack templates.

5. Rendering the stack templates with the data contained in the corresponding stack.yaml files (step 3) – second-level templating.

6. Reading units from the stack templates.

7. Executing the project.
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/theme-overrides/partials/footer.html b/theme-overrides/partials/footer.html new file mode 100644 index 00000000..8df844b5 --- /dev/null +++ b/theme-overrides/partials/footer.html @@ -0,0 +1,56 @@ +{#- + This file was automatically generated - do not edit +-#} +{% import "partials/language.html" as lang with context %} + diff --git a/theme-overrides/partials/integrations/analytics/gtm.html b/theme-overrides/partials/integrations/analytics/gtm.html new file mode 100644 index 00000000..ad2340a1 --- /dev/null +++ b/theme-overrides/partials/integrations/analytics/gtm.html @@ -0,0 +1 @@ + diff --git a/units-helm/index.html b/units-helm/index.html new file mode 100644 index 00000000..3e0475d9 --- /dev/null +++ b/units-helm/index.html @@ -0,0 +1,1839 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Helm - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Helm Unit

      +

Describes a Terraform Helm provider invocation.

      +

      In the example below we use helm unit to deploy Argo CD to a Kubernetes cluster:

      +
      units:
      +  - name: argocd
      +    type: helm
      +    source:
      +      repository: "https://argoproj.github.io/argo-helm"
      +      chart: "argo-cd"
      +      version: "2.11.0"
      +    pre_hook:
      +      command: *getKubeconfig
      +      on_destroy: true
      +    kubeconfig: /home/john/kubeconfig
      +    additional_options:
      +      namespace: "argocd"
      +      create_namespace: true
      +    values:
      +      - file: ./argo/values.yaml
      +        apply_template: true
      +    inputs:
      +      global.image.tag: v1.8.3
      +
      +

In addition to the common options, the following are available:

      +
• force_apply - bool, optional. false by default. If set to true, the unit will be applied when any dependent unit is planned to be changed.

• source - map, required. This block describes the Helm chart source.

• chart, repository, version - correspond to the options with the same names from the helm_release resource. See chart, repository and version.

• kubeconfig - string, required. Path to the kubeconfig file, relative to the directory where the unit was executed.

• provider_version - string, optional. Version of the Terraform helm provider to use. Default - latest. See terraform helm provider.

• additional_options - map of any, optional. Corresponds to the Terraform helm_release resource options. Will be passed as is.

• values - array, optional. List of values files in raw yaml to be passed to Helm. Values will be merged, in order, as Helm does with multiple -f options.

  • file - string, required. Path to the values file.

  • apply_template - bool, optional. Defines whether a template should be applied to the values file. true by default.

• inputs - map of any, optional. A map that represents Terraform helm_release sets. This block allows using the remoteState and insertYAML functions. For example:

  ```yaml
  inputs:
    global.image.tag: v1.8.3
    service.type: LoadBalancer
  ```

  Corresponds to:

  ```hcl
  set {
    name  = "global.image.tag"
    value = "v1.8.3"
  }
  set {
    name  = "service.type"
    value = "LoadBalancer"
  }
  ```
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-k8s-manifest/index.html b/units-k8s-manifest/index.html new file mode 100644 index 00000000..df81b767 --- /dev/null +++ b/units-k8s-manifest/index.html @@ -0,0 +1,1926 @@ + + + + + + + + + + + + + + + + + + + + + + + + + K8s-manifest - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      K8s-manifest Unit

      +

      Applies Kubernetes resources from manifests.

      +

      Example:

      +
      - name: kubectl-test2
      +  type: k8s-manifest
      +  namespace: default
      +  create_namespaces: true
      +  path: ./manifests/
      +  apply_template: true
      +  kubeconfig: {{ output "this.kubeconfig.kubeconfig_path" }}
      +  kubectl_opts: "--wait=true"
      +
      +

      Options

      +
• force_apply - bool, optional. false by default. If set to true, the unit will be applied when any dependent unit is planned to be changed.

• namespace - optional. Corresponds to kubectl -n.

• create_namespaces - bool, optional. false by default. If set to true, cdev will create the namespaces required for the unit (i.e. the namespaces listed in manifests and the one specified within the namespace option), in case they don't exist.

• path - string, required. Indicates the resources to be applied: a file (in case of a file path), a directory recursively (in case of a directory path) or a URL. In case of a URL path, the unit will download the resources by the link and then apply them.

  Example of a URL path:

  ```yaml
  - name: kubectl-test2
    type: k8s-manifest
    namespace: default
    path: https://git.io/vPieo
    kubeconfig: {{ output "this.kubeconfig.kubeconfig_path" }}
  ```

• apply_template - bool. true by default. See Templating usage below.

• kubeconfig - optional. Specifies the path to a kubeconfig file. See the How to get kubeconfig subsection below.

• kubectl_opts - optional. Lists additional arguments of the kubectl command.

      Templating usage

      +

As manifests are part of a stack template, they also support templating options. Specifying the apply_template option enables you to use templating in all Kubernetes manifests located within the specified path.
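For example, with apply_template enabled, a manifest under the unit's path can reference stack variables. The manifest content and variable names here are illustrative:

```yaml
# Illustrative templated manifest; variable names are made up for the example.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: {{ .variables.app_namespace }}
data:
  region: {{ .variables.region }}
```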

      +

      How to get kubeconfig

      +

There are several ways to get a kubeconfig from a cluster and pass it to the units that require it (for example, helm or k8s-manifest). The recommended way is to use the shell unit with the force_apply option. Here is an example of such a unit:

      +
      - name: kubeconfig
      +  type: shell
      +  force_apply: true
      +  depends_on: this.k3s
      +  apply:
      +    commands:
      +      - aws s3 cp s3://{{ .variables.bucket }}/{{ .variables.cluster_name }}/kubeconfig /tmp/kubeconfig_{{ .variables.cluster_name }}
      +      - echo "kubeconfig_base64=$(cat /tmp/kubeconfig_{{ .variables.cluster_name }} | base64 -w 0)"
      +      - echo "kubeconfig_path=/tmp/kubeconfig_{{ .variables.cluster_name }}"
      +  outputs:
      +    type: separator
      +    separator: "="
      +
      +

In the example above, the k3s unit (the one referred to) will deploy a Kubernetes cluster in AWS and place a kubeconfig file in an S3 bucket. The kubeconfig unit will download the kubeconfig from the storage and place it within the /tmp directory.

      +

      The kubeconfig can then be passed as an output to other units:

      +
      - name: cert-manager-issuer
      +  type: k8s-manifest
      +  path: ./cert-manager/issuer.yaml
      +  kubeconfig: {{ output "this.kubeconfig.kubeconfig_path" }}
      +
      +

An alternative (but not recommended) way is to create a yaml anchor in a stack template that would hold the required set of commands:

      +
      _: &getKubeconfig "rm -f ../kubeconfig_{{ .name }}; aws eks --region {{ .variables.region }} update-kubeconfig --name {{ .name }} --kubeconfig ../kubeconfig_{{ .name }}"
      +
      +

      and execute it with a pre-hook in each unit:

      +
      - name: cert-manager-issuer
      +  type: kubernetes
      +  source: ./cert-manager/
      +  provider_version: "0.6.0"
      +  config_path: ../kubeconfig_{{ .name }}
      +  depends_on: this.cert-manager
      +  pre_hook:
      +    command: *getKubeconfig
      +    on_destroy: true
      +    on_plan: true
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-kubernetes/index.html b/units-kubernetes/index.html new file mode 100644 index 00000000..1ed9ec0f --- /dev/null +++ b/units-kubernetes/index.html @@ -0,0 +1,1736 @@ + + + + + + + + + + + + + + + + + + + + + Kubernetes Unit - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Kubernetes Unit

      +
      +

      Info

      +

      This unit is deprecated and will be removed soon. Please use the k8s-manifest unit instead.

      +
      +

Describes a Terraform kubernetes provider invocation.

      +

      Example:

      +
      units:
      +  - name: argocd_apps
      +    type: kubernetes
      +    provider_version: "0.2.1"
      +    source: ./argocd-apps/app1.yaml
      +    kubeconfig: ../kubeconfig
      +    depends_on: this.argocd
      +
      +
• force_apply - bool, optional. false by default. If set to true, the unit will be applied when any dependent unit is planned to be changed.

• source - string, required. Path to the Kubernetes manifest that will be converted into a representation of the kubernetes-alpha provider. The source file will be rendered with the stack template and also allows using the remoteState and insertYAML functions.

• kubeconfig - string, required. Path to the kubeconfig file, relative to the directory where the unit was executed.

• provider_version - string, optional. Version of the Terraform kubernetes-alpha provider to use. Default - latest. See terraform kubernetes-alpha provider.
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-overview/index.html b/units-overview/index.html new file mode 100644 index 00000000..63dd6a09 --- /dev/null +++ b/units-overview/index.html @@ -0,0 +1,1858 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Overview - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Overview

      +

      cdev unit diagram

      +

      Description

      +

      Units are the building blocks that stack templates are made of. A unit could be anything: a Terraform module, a Helm chart you want to install, or a Bash script that you want to run. Units can be remote or stored in the same repo as other Cluster.dev code. Units may contain references to other files that are required for their work. These files should be located inside the current directory (within the stack template's context). As some of the files will also be rendered with the project's data, you can use Go templates in them.

      +
      +

      Tip

      +

      You can pass variables across units within the stack template by using outputs or remoteState.

      +
      +
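As a minimal illustration of the tip above (unit and output names here are hypothetical), a value produced by one unit can be consumed by another within the same template via remoteState:

```yaml
units:
  - name: bucket
    type: tfmodule
    source: "terraform-aws-modules/s3-bucket/aws"
    inputs:
      bucket: my-bucket
  - name: outputs
    type: printer
    outputs:
      bucket_id: {{ remoteState "this.bucket.s3_bucket_id" }}
```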

      All units described below have a common format and common fields. Base example:

      +
        - name: k3s
      +    type: tfmodule
      +    depends_on:
      +      - this.unit1_name
      +      - this.unit2_name
      +#   depends_on: this.unit1_name # a string is allowed for a single dependency, a list for multiple dependencies
      +    pre_hook:
      +      command: "echo pre_hook"
      +      # script: "./scripts/hook.sh"
      +      on_apply: true
      +      on_destroy: false
      +      on_plan: false
      +    post_hook:
      +      # command: "echo post_hook"
      +      script: "./scripts/hook.sh"
      +      on_apply: true
      +      on_destroy: false
      +      on_plan: false
      +
      +
        +
      • +

        name - unit name. Required.

        +
      • +
      • +

        type - unit type. One of: shell, tfmodule, helm, kubernetes, printer.

        +
      • +
      • +

  depends_on - string or list of strings. One or multiple unit dependencies in the format "stack_name.unit_name". Since the name of the stack is unknown inside the stack template, you can use "this" instead: "this.unit_name".

        +
      • +
      • +

        pre_hook and post_hook blocks: See the description in Shell unit.

        +
      • +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-passing-variables/index.html b/units-passing-variables/index.html new file mode 100644 index 00000000..a3379369 --- /dev/null +++ b/units-passing-variables/index.html @@ -0,0 +1,1728 @@ + + + + + + + + + + + + + + + + + + + + + Units passing variables - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Units passing variables

      + +
      +

      Note

      +

      When passing outputs across units within one stack template, use "this" instead of the stack name: {{ output "this.unit_name.output" }}.

      +
      +

      Example of passing variables across units in the stack template:

      +
      name: s3-static-web
      +kind: StackTemplate
      +units:
      +  - name: s3-web
      +    type: tfmodule
      +    source: "terraform-aws-modules/s3-bucket/aws"
      +    providers:
      +    - aws:
      +        region: {{ .variables.region }}
      +    inputs:
      +      bucket: {{ .variables.name }}
      +      force_destroy: true
      +      acl: "public-read"
      +  - name: outputs
      +    type: printer
      +    outputs:
      +      bucket_name: {{ remoteState "this.s3-web.s3_bucket_website_endpoint" }}
      +      name: {{ .variables.name }}
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-printer/index.html b/units-printer/index.html new file mode 100644 index 00000000..4989a69f --- /dev/null +++ b/units-printer/index.html @@ -0,0 +1,1786 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Printer - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Printer Unit

      +

      This unit exposes outputs that can be used in other units and stacks.

      +
      +

      Tip

      +

      If the unit is named outputs, all of its outputs will be displayed when running cdev apply or cdev output.

      +
      +

      Example:

      +
      units:
      +  - name: outputs
      +    type: printer
      +    outputs:
      +      bucket_name: "Endpoint: {{ remoteState "this.s3-web.s3_bucket_website_endpoint" }}"
      +      name: {{ .variables.name }}
      +
      +
        +
      • +

  outputs - any, required. A map that represents the data to be printed in the log. The block allows using the remoteState and insertYAML functions.

        +
      • +
      • +

  force_apply - bool, optional. Defaults to false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

        +
      • +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-shell/index.html b/units-shell/index.html new file mode 100644 index 00000000..b3ecced8 --- /dev/null +++ b/units-shell/index.html @@ -0,0 +1,1966 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Shell - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Shell Unit

      +

      Executes Shell commands and scripts.

      +

      Example of a shell unit that creates an index.html file with a greeting message and uploads the file to an S3 bucket. The bucket name is passed as a variable:

      +
      units:
      +  - name: upload-web
      +    type: shell
      +    apply:
      +      commands:
      +        - aws s3 cp ./index.html s3://{{ .variables.name }}/index.html
      +    create_files:
      +    - file: ./index.html
      +      content: |
      +        <h1> Hello from {{ .variables.organization }} </h1>
      +        This page was created automatically by cdev tool.
      +
      +

      Complete reference example:

      +
      units:
      +  - name: my-tf-code
      +    type: shell
      +    env: 
      +      AWS_PROFILE: {{ .variables.aws_profile }}
      +      TF_VAR_region: {{ .project.region }}
      +    create_files:
      +      - file: ./terraform.tfvars
      +        content: |
      +{{- range $key, $value := .variables.tfvars }}
      +        {{ $key }} = "{{ $value }}"
      +{{- end}}
      +    work_dir: ~/env/prod/
      +    apply: 
      +      commands:
      +        - terraform apply -var-file terraform.tfvars {{ range $key, $value := .variables.vars_list }} -var="{{ $key }}={{ $value }}"{{ end }}
      +    plan:
      +      commands:
      +        - terraform plan
      +    destroy:
      +      commands:
      +        - terraform destroy
      +        - rm ./.terraform
      +    outputs: # how to get outputs
      +      type: json # one of: json, regexp, separator
      +      regexp_key: "regexp"
      +      regexp_value: "regexp"
      +      separator: "="
      +      command: terraform output -json
      +    create_files:
      +        - file: ./my_text_file.txt
      +          mode: 0644
      +          content: "some text"
      +        - file: ./my_text_file2.txt
      +          content: "some text 2"
      +
      +

      Options

      +
        +
      • +

        force_apply - bool, optional. By default is false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

        +
      • +
      • +

  env - map, optional. The map of environment variables that will be exported before executing the commands of this unit. Variables defined in the shell unit take priority over variables defined in the project (the exports option) and will override them.

        +
      • +
      • +

        work_dir - string, required. The working directory within which the code of the unit will be executed.

        +
      • +
      • +

        apply - optional, map. Describes commands to be executed when running cdev apply.

        +
          +
        • +

          init - optional. Describes commands to be executed prior to running cdev apply.

          +
        • +
        • +

          commands - list of strings, required. The list of commands to be executed when running cdev apply.

          +
        • +
        +
      • +
      • +

        plan - optional, map. Describes commands to be executed when running cdev plan.

        +
          +
        • +

          init - optional. Describes commands to be executed prior to running cdev plan.

          +
        • +
        • +

          commands - list of strings, required. The list of commands to be executed when running cdev plan.

          +
        • +
        +
      • +
      • +

        destroy - optional, map. Describes commands to be executed when running cdev destroy.

        +
          +
        • +

          init - optional. Describes commands to be executed prior to running cdev destroy.

          +
        • +
        • +

          commands - list of strings, required. The list of commands to be executed when running cdev destroy.

          +
        • +
        +
      • +
      • +

        outputs - optional, map. Describes how to get outputs from a command.

        +
          +
        • +

    type - string, required. The format in which to parse the command output. One of: json, regexp, separator. Further options differ according to the type specified.

          +
        • +
        • +

    json - if the type is defined as json, the output will be parsed as key-value JSON. With this type, no other options are required.

          +
        • +
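For completeness alongside the regexp and separator examples, a json outputs block could look like this sketch (the command is illustrative):

```yaml
outputs: # how to get outputs
  type: json
  command: terraform output -json
```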
        • +

    regexp - if the type is defined as regexp, this introduces an additional required option regexp: a regular expression that defines how to parse each line of the unit output. Example:

          +
          outputs: # how to get outputs
          +  type: regexp
          +  regexp: "^(.*)=(.*)$"
          +  command: | 
          +    echo "key1=val1\nkey2=val2"
          +
          +
        • +
        • +

    separator - if the type is defined as separator, this introduces an additional option separator (string): a symbol that defines how a line is divided into two parts, the key and the value.

          +

          outputs: # how to get outputs
          +  type: separator
          +  separator: "="
          +  command: |
          +    echo "key1=val1\nkey2=val2"
          +
    command - string, optional. The command to take the outputs from. Used regardless of the type option. If the command is not defined, cdev takes the outputs from the apply command.

          +
        • +
        +
      • +
      • +

  create_files - list of files, optional. The list of files to be created by the unit. The files are saved in the state so that changes to them can be tracked.

        +
      • +
      • +

        pre_hook and post_hook blocks: describe the shell commands to be executed before and after the unit, respectively. The commands will be executed in the same context as the actions of the unit. Environment variables are common to the shell commands, the pre_hook and post_hook scripts, and the unit execution. You can export a variable in the pre_hook and it will be available in the post_hook or in the unit.

        +
          +
        • +

    command - string. Shell command in text format, executed as bash -c "command". Can be used if the script option is not used. One of command or script is required.

          +
        • +
        • +

    script - string. Path to a shell script file, relative to the template directory. Can be used if the command option is not used. One of command or script is required.

          +
        • +
        • +

    on_apply - bool, optional. Enables/disables the hook when the unit is applied. Default: true.

          +
        • +
        • +

    on_destroy - bool, optional. Enables/disables the hook when the unit is destroyed. Default: false.

          +
        • +
        • +

    on_plan - bool, optional. Enables/disables the hook when the unit plan is executed. Default: false.

          +
        • +
        +
      • +
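Since the hooks share the unit's execution environment, a variable exported in pre_hook is visible to the unit commands and to post_hook. A hedged sketch (unit and script names are illustrative):

```yaml
units:
  - name: deploy
    type: shell
    pre_hook:
      command: export DEPLOY_TS=$(date +%s)
      on_apply: true
    apply:
      commands:
        - echo "deploy started at $DEPLOY_TS"
    post_hook:
      script: ./scripts/notify.sh   # can also read $DEPLOY_TS
      on_apply: true
      on_destroy: false
```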
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/units-terraform/index.html b/units-terraform/index.html new file mode 100644 index 00000000..73c7cd35 --- /dev/null +++ b/units-terraform/index.html @@ -0,0 +1,1812 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Tfmodule - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Tfmodule Unit

      +

      Describes direct invocation of Terraform modules.

      +

      In the example below we use the tfmodule unit to create an S3 bucket for hosting a static web page. The tfmodule unit applies a dedicated Terraform module.

      +
      units:
      +  - name: s3-web
      +    type: tfmodule
      +    version: "2.77.0"
      +    source: "terraform-aws-modules/s3-bucket/aws"
      +    providers:
      +    - aws:
      +        region: {{ .variables.region }}
      +    inputs:
      +      bucket: {{ .variables.name }}
      +      force_destroy: true
      +      acl: "public-read"
      +      website:
      +        index_document: "index.html"
      +        error_document: "index.html"
      +      attach_policy: true
      +      policy: |
      +        {
      +            "Version": "2012-10-17",
      +            "Statement": [
      +                {
      +                    "Sid": "PublicReadGetObject",
      +                    "Effect": "Allow",
      +                    "Principal": "*",
      +                    "Action": "s3:GetObject",
      +                    "Resource": "arn:aws:s3:::{{ .variables.name }}/*"
      +                }
      +            ]
      +        }
      +
      +

      In addition to common options the following are available:

      +
        +
      • +

  source - string, required. Terraform module source. Local folders are not allowed in source.

        +
      • +
      • +

        version - string, optional. Module version.

        +
      • +
      • +

  inputs - map of any, required. A map that corresponds to the input variables defined by the module. The block allows using the remoteState and insertYAML functions.

        +
      • +
      • +

  force_apply - bool, optional. Defaults to false. If set to true, the unit will be applied when any dependent unit is planned to be changed.

        +
      • +
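For instance, the inputs block can pull a value from another unit via remoteState. In this sketch the vpc unit and its public_subnet_id output are hypothetical:

```yaml
units:
  - name: web-server
    type: tfmodule
    source: "terraform-aws-modules/ec2-instance/aws"
    inputs:
      subnet_id: {{ remoteState "this.vpc.public_subnet_id" }}
      instance_type: "t3.micro"
```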
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file diff --git a/variables/index.html b/variables/index.html new file mode 100644 index 00000000..0c44aea3 --- /dev/null +++ b/variables/index.html @@ -0,0 +1,1897 @@ + + + + + + + + + + + + + + + + + + + + + + + + + Variables - Cluster.dev + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
      + +
      + + + + +
      + + +
      + +
      + + + + + + + + + +
      +
      + + + +
      +
      +
      + + + + + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      +
      + + + +
      +
      + + + + + + + +

      Variables

      +

      Stack configuration contains variables that will be applied to a stack template (similar to values.yaml in Helm or a tfvars file in Terraform). The variables from stack.yaml are passed to the stack template files to render them.

      +

      Example of stack.yaml with variables region, name, organization:

      +
      name: k3s-demo
      +template: https://github.com/shalb/cdev-s3-web
      +kind: Stack
      +variables:
      +  region: eu-central-1
      +  name: web-static-page
      +  organization: Cluster.dev
      +
      +

      The values of the variables are passed to a stack template to configure the resulting infrastructure.

      +

      Passing variables across stacks

      +

      Cluster.dev allows passing variable values across different stacks within one project. This can be done in two ways:

      +
        +
      • using the output of one stack as an input for another stack: {{ output "stack_name.unit_name.output" }}
      • +
      +

      Example of passing outputs between stacks:

      +
      name: s3-web-page
      +template: ../web-page/
      +kind: Stack
      +variables:
      +  region: eu-central-1
      +  name: web-static-page
      +  organization: Shalb
      +
      +
      name: health-check
      +template: ../health-check/
      +kind: Stack
      +variables:
      +  url: {{ output "s3-web-page.outputs.url" }}
      +
      +
        +
      • using remoteState with a syntax: {{ remoteState "stack_name.unit_name.output" }}
      • +
      +
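For the remoteState form, the reference typically appears inside a stack template rather than a stack file; a hedged sketch (the stack, unit, and output names are assumed from the s3-web example above):

```yaml
units:
  - name: web-url
    type: printer
    outputs:
      url: {{ remoteState "s3-web-page.s3-web.s3_bucket_website_endpoint" }}
```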

      Global variables

      +

      The variables defined on a project level are called global. Global variables are listed in project.yaml, a configuration file that defines the parameters and settings for the whole project. From project.yaml the variable values can be applied to all stack and backend objects within the project.

      +

      Example of the project.yaml file that contains variables organization and region:

      +
      name: demo
      +kind: Project
      +variables:
      +  organization: shalb
      +  region: eu-central-1
      +
      +

      To refer to a variable in stack and backend files, use the {{ .project.variables.KEY_NAME }} syntax, where project.variables is the path that corresponds to the structure of variables in project.yaml. KEY_NAME stands for the variable name defined in project.yaml and will be replaced by its value.

      +

      Example of the stack.yaml file that contains reference to the project variables organization and region:

      +
      name: eks-demo
      +template: https://github.com/shalb/cdev-aws-eks?ref=v0.2.0
      +kind: Stack
      +backend: aws-backend
      +variables:
      +  region: {{ .project.variables.region }}
      +  organization: {{ .project.variables.organization }}
      +  domain: cluster.dev
      +  instance_type: "t3.medium"
      +  eks_version: "1.20"
      +
      +

      Example of the rendered stack.yaml:

      +
      name: eks-demo
      +template: https://github.com/shalb/cdev-aws-eks?ref=v0.2.0
      +kind: Stack
      +backend: aws-backend
      +variables:
      +  region: eu-central-1
      +  organization: shalb
      +  domain: cluster.dev
      +  instance_type: "t3.medium"
      +  eks_version: "1.20"
      +
      + + + + + + + + +
      +
      + + +
      + +
      + + + + +
      +
      +
      +
      + + + + + + + + + + \ No newline at end of file

Goal