Huggingface model deployer support #157

Open. Wants to merge 8 commits into base: develop.
2 changes: 1 addition & 1 deletion docs/book/stacks/aws.md
@@ -16,7 +16,7 @@ as can a list of components that are coming soon.
| Experiment Tracker | mlflow |
| Orchestrator | kubeflow, kubernetes, sagemaker, skypilot, tekton |
| MLOps Platform | zenml |
| Model Deployer | seldon |
| Model Deployer | seldon, huggingface |
| Step Operator | sagemaker |

## Coming Soon!
2 changes: 1 addition & 1 deletion docs/book/stacks/gcp.md
@@ -22,7 +22,7 @@ and cannot be created without one.
| Experiment Tracker | mlflow |
| Orchestrator | kubeflow, kubernetes, skypilot, tekton, vertex |
| MLOps Platform | zenml |
| Model Deployer | seldon |
| Model Deployer | seldon, huggingface |
| Step Operator | vertex |

## Coming Soon!
2 changes: 1 addition & 1 deletion docs/book/stacks/k3d.md
@@ -16,7 +16,7 @@ as can a list of components that are coming soon.
| Experiment Tracker | mlflow |
| Orchestrator | kubeflow, kubernetes, sagemaker, tekton |
| MLOps Platform | zenml |
| Model Deployer | seldon |
| Model Deployer | seldon, huggingface |

## Coming Soon!

2 changes: 1 addition & 1 deletion src/mlstacks/constants.py
@@ -38,7 +38,7 @@
"vertex",
],
"mlops_platform": ["zenml"],
"model_deployer": ["seldon"],
"model_deployer": ["seldon","huggingface"],
Contributor suggested change (add the missing space after the comma):
"model_deployer": ["seldon","huggingface"],
"model_deployer": ["seldon", "huggingface"],

"step_operator": ["sagemaker", "vertex"],
}
ALLOWED_COMPONENT_TYPES: Dict[str, Dict[str, List[str]]] = {
2 changes: 2 additions & 0 deletions src/mlstacks/enums.py
@@ -31,6 +31,7 @@ class ComponentTypeEnum(str, Enum):
FEATURE_STORE = "feature_store"
ANNOTATOR = "annotator"
IMAGE_BUILDER = "image_builder"

Contributor suggested change: remove the extra blank line added here.

class ComponentFlavorEnum(str, Enum):
@@ -50,6 +51,7 @@ class ComponentFlavorEnum(str, Enum):
VERTEX = "vertex"
ZENML = "zenml"
DEFAULT = "default"
HUGGINGFACE = "huggingface"
Contributor: Probably insert this in alphabetical order...



class DeploymentMethodEnum(str, Enum):
2 changes: 1 addition & 1 deletion src/mlstacks/terraform/aws-modular/eks.tf
@@ -1,7 +1,7 @@
# eks module to create a cluster
# newer versions of it had some error so going with v17.23.0 for now
locals {
enable_eks = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
enable_eks = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes || var.enable_model_deployer_seldon || var.enable_model_deployer_huggingface || var.enable_experiment_tracker_mlflow ||
var.enable_zenml)
}

2 changes: 1 addition & 1 deletion src/mlstacks/terraform/aws-modular/istio.tf
@@ -1,7 +1,7 @@
module "istio" {
source = "../modules/istio-module"

count = (var.enable_model_deployer_seldon) ? 1 : 0
count = (var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon) ? 1 : 0
Contributor: I think this change is unneeded. We don't need Istio for the HF deployer.


depends_on = [
aws_eks_cluster.cluster,
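A minimal sketch of the revert the reviewer is suggesting, under the assumption that the Hugging Face deployer talks to the hosted Inference Endpoints service and so needs no in-cluster ingress; the remaining module arguments would stay as they are in the existing file:

# istio.tf (sketch): gate Istio on Seldon only; the Hugging Face deployer is
# assumed not to require an ingress, per the review comment above.
module "istio" {
  source = "../modules/istio-module"

  count = var.enable_model_deployer_seldon ? 1 : 0
}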
8 changes: 8 additions & 0 deletions src/mlstacks/terraform/aws-modular/locals.tf
@@ -64,6 +64,14 @@ locals {
namespace = "seldon-system"
workloads_namespace = "zenml-workloads-seldon"
service_account_name = "seldon"
}

huggingface = {
version = "4.41.3"
name = "huggingface"
namespace = "huggingface-system"
workloads_namespace = "zenml-workloads-huggingface"
service_account_name = "huggingface"
Contributor: I think we don't need this?

}

zenml = {
7 changes: 7 additions & 0 deletions src/mlstacks/terraform/aws-modular/output_file.tf
@@ -76,6 +76,13 @@ resource "local_file" "stack_file" {
flavor: seldon
name: eks_seldon_model_deployer
configuration: {"kubernetes_context": "${aws_eks_cluster.cluster[0].arn}", "kubernetes_namespace": "${local.seldon.workloads_namespace}", "base_url": "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}"}}
%{endif}
%{if var.enable_model_deployer_huggingface}
model_deployer:
id: ${uuid()}
flavor: huggingface
name: eks_huggingface_model_deployer
configuration: {"kubernetes_context": "${aws_eks_cluster.cluster[0].arn}", "kubernetes_namespace": "${local.huggingface.workloads_namespace}", "base_url": "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}"}}
Comment on lines +84 to +85
Contributor: This needs updating as we're no longer using EKS for this etc.
Author: @strickvl Thank you for the review! I understood everything else, but for this part could you please provide more guidance or resources on how to do it?

%{endif}
ADD
filename = "./aws_modular_stack_${replace(substr(timestamp(), 0, 16), ":", "_")}.yaml"
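Regarding the question above about rewiring this entry away from EKS: one possible shape, sketched under the assumption that the Hugging Face deployer targets the hosted Inference Endpoints service directly rather than the cluster. The configuration keys ("token", "namespace") and the variables var.huggingface_token / var.huggingface_namespace are assumptions here, to be checked against the ZenML Hugging Face model deployer documentation, not something this PR defines:

%{if var.enable_model_deployer_huggingface}
  model_deployer:
    id: ${uuid()}
    flavor: huggingface
    name: huggingface_model_deployer
    configuration: {"token": "${var.huggingface_token}", "namespace": "${var.huggingface_namespace}"}
%{endif}

The kubernetes_context and base_url fields would then drop out entirely, along with the Istio and locals entries the other comments point at.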
24 changes: 20 additions & 4 deletions src/mlstacks/terraform/aws-modular/outputs.tf
Contributor: Same theme here... we're not using a cluster any more for the HF deployment, so no need to include all that stuff here.

@@ -84,19 +84,25 @@ output "experiment_tracker_configuration" {
}) : ""
}

# if huggingface is enabled, set the model deployer outputs to the huggingface values
# if seldon is enabled, set the model deployer outputs to the seldon values
# otherwise, set the model deployer outputs to empty strings
output "model_deployer_id" {
value = var.enable_model_deployer_seldon ? uuid() : ""
value = var.enable_model_deployer_huggingface ? uuid() : var.enable_model_deployer_seldon ? uuid() : ""
}
output "model_deployer_flavor" {
value = var.enable_model_deployer_seldon ? "seldon" : ""
value = var.enable_model_deployer_huggingface ? "huggingface" : var.enable_model_deployer_seldon ? "seldon" : ""
}
output "model_deployer_name" {
value = var.enable_model_deployer_seldon ? "eks_seldon_model_deployer_${random_string.unique.result}" : ""
value = var.enable_model_deployer_huggingface ? "eks_huggingface_model_deployer_${random_string.unique.result}" : var.enable_model_deployer_seldon ? "eks_seldon_model_deployer_${random_string.unique.result}" : ""
}
output "model_deployer_configuration" {
value = var.enable_model_deployer_seldon ? jsonencode({
value = var.enable_model_deployer_huggingface ? jsonencode({
kubernetes_context = "${aws_eks_cluster.cluster[0].arn}"
kubernetes_namespace = local.huggingface.workloads_namespace
base_url = "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}"
}) :
var.enable_model_deployer_seldon ? jsonencode({
kubernetes_context = "${aws_eks_cluster.cluster[0].arn}"
kubernetes_namespace = local.seldon.workloads_namespace
base_url = "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}"
@@ -162,6 +168,16 @@ output "seldon-base-url" {
value = var.enable_model_deployer_seldon ? "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}" : null
}

# output for huggingface model deployer
output "huggingface-workload-namespace" {
value = var.enable_model_deployer_huggingface ? local.huggingface.workloads_namespace : null
description = "The namespace created for hosting your Huggingface workloads"
}

output "huggingface-base-url" {
value = var.enable_model_deployer_huggingface ? "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}" : null
}

# output the name of the stack YAML file created
output "stack-yaml-path" {
value = local_file.stack_file.filename
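Picking up the reviewer's point above about dropping the cluster wiring, a sketch of how the Hugging Face branch of these outputs could look, assuming the same hypothetical var.huggingface_token / var.huggingface_namespace inputs as in the stack-file sketch; nothing cluster- or Istio-related would remain in that branch:

output "model_deployer_configuration" {
  # Hugging Face branch targets hosted Inference Endpoints, so no cluster
  # references; the "token"/"namespace" keys and both variables are
  # assumptions, not part of this PR.
  value = var.enable_model_deployer_huggingface ? jsonencode({
    token     = var.huggingface_token
    namespace = var.huggingface_namespace
  }) : var.enable_model_deployer_seldon ? jsonencode({
    kubernetes_context   = aws_eks_cluster.cluster[0].arn
    kubernetes_namespace = local.seldon.workloads_namespace
    base_url             = "http://${module.istio[0].ingress-hostname}:${module.istio[0].ingress-port}"
  }) : ""
}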
4 changes: 4 additions & 0 deletions src/mlstacks/terraform/aws-modular/variables.tf
@@ -35,6 +35,10 @@ variable "enable_model_deployer_seldon" {
description = "Enable Seldon deployment"
default = false
}
variable "enable_model_deployer_huggingface" {
description = "Enable Huggingface deployment"
default = false
}
variable "enable_step_operator_sagemaker" {
description = "Enable SageMaker as step operator"
default = false
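If the stack file and outputs are rewired as sketched above, variables.tf would presumably also need credential inputs along these lines; the names and defaults are hypothetical:

variable "huggingface_token" {
  description = "Hugging Face Hub token used by the model deployer (hypothetical, see review discussion)"
  type        = string
  default     = ""
  sensitive   = true
}

variable "huggingface_namespace" {
  description = "Hugging Face user or organization to deploy Inference Endpoints under (hypothetical)"
  type        = string
  default     = ""
}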
16 changes: 8 additions & 8 deletions src/mlstacks/terraform/gcp-modular/gke.tf
@@ -2,7 +2,7 @@ data "google_client_config" "default" {}
# module "gke" {
# count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton
# || var.enable_orchestrator_kubernetes ||
# var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
# var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
# var.enable_zenml)? 1: 0

# depends_on = [
@@ -64,7 +64,7 @@ data "google_client_config" "default" {}
# }
locals {
enable_gke = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml)
}

@@ -79,7 +79,7 @@ data "external" "get_cluster" {

resource "google_container_cluster" "gke" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0

name = "${local.prefix}-${local.gke.cluster_name}"
@@ -117,7 +117,7 @@ resource "google_container_cluster" "gke" {
# service account for GKE nodes
resource "google_service_account" "gke-service-account" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
account_id = "${local.prefix}-${local.gke.service_account_name}"
project = var.project_id
@@ -136,7 +136,7 @@ resource "google_project_iam_binding" "container-registry" {

resource "google_project_iam_binding" "secret-manager" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
project = var.project_id
role = "roles/secretmanager.admin"
@@ -148,7 +148,7 @@ resource "google_project_iam_binding" "secret-manager" {

resource "google_project_iam_binding" "cloudsql" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
project = var.project_id
role = "roles/cloudsql.admin"
@@ -160,7 +160,7 @@ resource "google_project_iam_binding" "cloudsql" {

resource "google_project_iam_binding" "storageadmin" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
project = var.project_id
role = "roles/storage.admin"
@@ -172,7 +172,7 @@ resource "google_project_iam_binding" "storageadmin" {

resource "google_project_iam_binding" "vertex-ai-user" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
project = var.project_id
role = "roles/aiplatform.user"
2 changes: 1 addition & 1 deletion src/mlstacks/terraform/gcp-modular/istio.tf
@@ -1,7 +1,7 @@
module "istio" {
source = "../modules/istio-module"

count = (var.enable_model_deployer_seldon) ? 1 : 0
count = (var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon) ? 1 : 0

depends_on = [
google_container_cluster.gke,
6 changes: 3 additions & 3 deletions src/mlstacks/terraform/gcp-modular/kubernetes.tf
@@ -15,7 +15,7 @@ provider "kubectl" {
# the namespace where zenml will run kubernetes orchestrator workloads
resource "kubernetes_namespace" "k8s-workloads" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
metadata {
name = local.gke.workloads_namespace
@@ -28,7 +28,7 @@ resource "kubernetes_namespace" "k8s-workloads" {
# tie the kubernetes workloads SA to the GKE service account
resource "null_resource" "k8s-sa-workload-access" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
provisioner "local-exec" {
command = "kubectl -n ${kubernetes_namespace.k8s-workloads[0].metadata[0].name} annotate serviceaccount default iam.gke.io/gcp-service-account=${google_service_account.gke-service-account[0].email} --overwrite=true"
@@ -44,7 +44,7 @@ resource "null_resource" "k8s-sa-workload-access" {
# Vertex AI resources, which are needed for ZenML pipelines
resource "google_service_account_iam_member" "k8s-workload-access" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
service_account_id = google_service_account.gke-service-account[0].name
role = "roles/iam.workloadIdentityUser"
8 changes: 8 additions & 0 deletions src/mlstacks/terraform/gcp-modular/locals.tf
@@ -76,6 +76,14 @@ locals {
service_account_name = "seldon"
}

huggingface = {
version = "4.41.3"
name = "huggingface"
namespace = "huggingface-system"
workloads_namespace = "zenml-workloads-huggingface"
service_account_name = "huggingface"
}

zenml = {
version = ""
database_ssl_ca = ""
7 changes: 7 additions & 0 deletions src/mlstacks/terraform/gcp-modular/output_file.tf
@@ -91,6 +91,13 @@ resource "local_file" "stack_file" {
flavor: seldon
name: gke_seldon
configuration: {"kubernetes_context": "gke_${local.prefix}-${local.gke.cluster_name}", "kubernetes_namespace": "${local.seldon.workloads_namespace}", "base_url": "http://${module.istio[0].ingress-ip-address}:${module.istio[0].ingress-port}"}
%{endif}
%{if var.enable_model_deployer_huggingface}
model_deployer:
id: ${uuid()}
flavor: huggingface
name: gke_huggingface_model_deployer
configuration: {"kubernetes_context": "gke_${local.prefix}-${local.gke.cluster_name}", "kubernetes_namespace": "${local.huggingface.workloads_namespace}", "base_url": "http://${module.istio[0].ingress-ip-address}:${module.istio[0].ingress-port}"}
%{endif}
ADD
filename = "./gcp_modular_stack_${replace(substr(timestamp(), 0, 16), ":", "_")}.yaml"
14 changes: 10 additions & 4 deletions src/mlstacks/terraform/gcp-modular/outputs.tf
@@ -100,19 +100,25 @@ output "experiment_tracker_configuration" {
}) : ""
}

# if huggingface is enabled, set the model deployer outputs to the huggingface values
# if seldon is enabled, set the model deployer outputs to the seldon values
# otherwise, set the model deployer outputs to empty strings
output "model_deployer_id" {
value = var.enable_model_deployer_seldon ? uuid() : ""
value = var.enable_model_deployer_huggingface ? uuid() : var.enable_model_deployer_seldon ? uuid() : ""
}
output "model_deployer_flavor" {
value = var.enable_model_deployer_seldon ? "seldon" : ""
value = var.enable_model_deployer_huggingface ? "huggingface" : var.enable_model_deployer_seldon ? "seldon" : ""
}
output "model_deployer_name" {
value = var.enable_model_deployer_seldon ? "gke_seldon_model_deployer_${random_string.unique.result}" : ""
value = var.enable_model_deployer_huggingface ? "gke_huggingface_model_deployer_${random_string.unique.result}" : var.enable_model_deployer_seldon ? "gke_seldon_model_deployer_${random_string.unique.result}" : ""
}
output "model_deployer_configuration" {
value = var.enable_model_deployer_seldon ? jsonencode({
value = var.enable_model_deployer_huggingface ? jsonencode({
kubernetes_context = "gke_${var.project_id}_${var.region}_${local.prefix}-${local.gke.cluster_name}"
kubernetes_namespace = local.huggingface.workloads_namespace
base_url = "http://${module.istio[0].ingress-ip-address}:${module.istio[0].ingress-port}"
}) :
var.enable_model_deployer_seldon ? jsonencode({
kubernetes_context = "gke_${var.project_id}_${var.region}_${local.prefix}-${local.gke.cluster_name}"
kubernetes_namespace = local.seldon.workloads_namespace
base_url = "http://${module.istio[0].ingress-ip-address}:${module.istio[0].ingress-port}"
4 changes: 4 additions & 0 deletions src/mlstacks/terraform/gcp-modular/variables.tf
@@ -31,6 +31,10 @@ variable "enable_model_deployer_seldon" {
description = "Enable Seldon deployment"
default = false
}
variable "enable_model_deployer_huggingface" {
description = "Enable Huggingface deployment"
default = false
}
variable "enable_step_operator_vertex" {
description = "Enable VertexAI Step Operator"
default = false
2 changes: 1 addition & 1 deletion src/mlstacks/terraform/gcp-modular/vpc.tf
@@ -1,6 +1,6 @@
module "vpc" {
count = (var.enable_orchestrator_kubeflow || var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow ||
var.enable_zenml) ? 1 : 0
source = "terraform-google-modules/network/google"
version = "~> 4.0"
8 changes: 4 additions & 4 deletions src/mlstacks/terraform/k3d-modular/helm.tf
@@ -3,15 +3,15 @@ provider "helm" {
kubernetes {
host = (var.enable_container_registry || var.enable_orchestrator_kubeflow ||
var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.host : ""
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.host : ""
client_certificate = (var.enable_container_registry || var.enable_orchestrator_kubeflow ||
var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.client_certificate : ""
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.client_certificate : ""
client_key = (var.enable_container_registry || var.enable_orchestrator_kubeflow ||
var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.client_key : ""
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.client_key : ""
cluster_ca_certificate = (var.enable_container_registry || var.enable_orchestrator_kubeflow ||
var.enable_orchestrator_tekton || var.enable_orchestrator_kubernetes ||
var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.cluster_ca_certificate : ""
var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon || var.enable_experiment_tracker_mlflow || var.enable_artifact_store || var.enable_zenml) ? k3d_cluster.zenml-cluster[0].credentials.0.cluster_ca_certificate : ""
}
}
2 changes: 1 addition & 1 deletion src/mlstacks/terraform/k3d-modular/istio.tf
@@ -1,7 +1,7 @@
module "istio" {
source = "../modules/istio-module"

count = (var.enable_model_deployer_seldon) ? 1 : 0
count = (var.enable_model_deployer_huggingface || var.enable_model_deployer_seldon) ? 1 : 0

depends_on = [
k3d_cluster.zenml-cluster,