Allow setting tolerations and nodeSelector for Operator #606
Following the installation guide in this repo, we need to apply a single Kubernetes YAML file from the releases. However, installing that way removes any possibility of customising the Operator Deployment's configuration. Can we have a more intuitive way to install the Operator? Maybe split it out into individual files?

Comments
Based on your comment, I would recommend you download the single manifest file, add your customizations, maintain it in git, and then deploy your customized manifest from your git environment.
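For illustration, a minimal sketch of that suggestion in the same Terraform style used later in this thread — the manifest path and the resource names here are assumptions — applying a customized copy of the release manifest that is committed to git:

# Assumed layout: the downloaded, customized release manifest is committed
# to git alongside this configuration as ./manifests/falcon-operator.yaml
data "kubectl_file_documents" "falcon_operator_custom" {
  content = file("${path.module}/manifests/falcon-operator.yaml")
}

resource "kubectl_manifest" "falcon_operator_custom" {
  for_each  = data.kubectl_file_documents.falcon_operator_custom.manifests
  yaml_body = each.value
}

Upgrades then become explicit: download the new release manifest, re-apply the customizations, and review the diff in git before deploying.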
That works, but then users cannot benefit from upstream updates; we would have to maintain the file ourselves and diff it against every new release. For reference, in Terraform we have a workaround: filter the specific Deployment out of the released manifest, then re-create it with tolerations, a nodeSelector, and modified CPU/memory requests and limits:

data "http" "falcon_operator_manifest" {
  url = "https://github.com/crowdstrike/falcon-operator/releases/latest/download/falcon-operator.yaml"
}

data "kubectl_file_documents" "falcon_operator_documents" {
  content = data.http.falcon_operator_manifest.response_body
}

locals {
  # Key under which kubectl_file_documents exposes the controller-manager Deployment
  controller_manager_key = "/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager"

  # Decode the upstream Deployment once instead of repeating the lookup below
  controller_manager = yamldecode(
    lookup(
      data.kubectl_file_documents.falcon_operator_documents.manifests,
      local.controller_manager_key,
      "{}"
    )
  )
}

# Apply every document from the release manifest except the controller-manager
# Deployment, which is re-created with our customizations below
resource "kubectl_manifest" "falcon_operator" {
  for_each = {
    for key, manifest in data.kubectl_file_documents.falcon_operator_documents.manifests :
    key => manifest
    if !(yamldecode(manifest)["metadata"]["name"] == "falcon-operator-controller-manager" &&
      yamldecode(manifest)["kind"] == "Deployment")
  }
  yaml_body = each.value
}
resource "kubectl_manifest" "falcon_operator_controller_manager" {
yaml_body = yamlencode(
merge(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
),
{
"spec" = {
"replicas" = lookup(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"],
"replicas",
0
)
"selector" = lookup(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"],
"selector",
{}
)
"template" = {
"metadata" = lookup(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"]["template"],
"metadata",
{}
)
"spec" = merge(
lookup(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"]["template"],
"spec",
{} # Default to empty object
),
{
"tolerations" = concat(
try(
lookup(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"]["template"]["spec"],
"tolerations",
[]
),
[] # Fallback to empty list if the key is missing or invalid
),
[
{
key = "<key>"
operator = "Equal"
value = "<value>"
effect = "NoSchedule"
}
]
),
"nodeSelector" = {
"<key>" = "<value>"
},
"containers" = [
merge(
yamldecode(
lookup(
data.kubectl_file_documents.falcon_operator_documents.manifests,
"/apis/apps/v1/namespaces/falcon-operator/deployments/falcon-operator-controller-manager",
"{}"
)
)["spec"]["template"]["spec"]["containers"][0],
{
"resources" = {
"requests" = {
"cpu" = "500m"
"memory" = "256Mi"
}
"limits" = {
"cpu" = "500m"
"memory" = "256Mi"
}
}
}
)
]
}
)
}
}
}
)
)
depends_on = [
kubectl_manifest.falcon_operator
]
} But it'd be better if we refrain from this type of logic, in case there are unexpected changes in the future |
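For anyone copying the snippet: it relies on the hashicorp/http and gavinbunney/kubectl providers (the kubectl_file_documents data source and the kubectl_manifest resource come from the latter). A minimal required_providers block, with version constraints left to the reader:

terraform {
  required_providers {
    # Provides the `http` data source used to fetch the release manifest
    http = {
      source = "hashicorp/http"
    }
    # Provides the `kubectl_file_documents` data source and `kubectl_manifest` resource
    kubectl = {
      source = "gavinbunney/kubectl"
    }
  }
}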
It is actually a security and IT operations best practice to use GitOps with Kubernetes: store the manifests in git and review changes and updates there. Otherwise, the way you are currently doing it in Terraform is how you would do it.