This repository has been archived by the owner on Jun 24, 2020. It is now read-only.
I previously dropped our option to configure where the Knative manifest is loaded from; it used to be here: serving-operator/pkg/controller/knativeserving/knativeserving_controller.go (line 48 in ac0b677). The reasons have since been forgotten, and I am again leaning toward bringing a similar configuration option back to the operator.
We used to load this argument when launching the operator itself. I am considering adding it as a field in the CRD instead, because then it can be configured while the operator is running. If it is only configurable when starting the operator, it can be set just once; changing it would require re-launching the operator.
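For illustration, here is a minimal sketch of what such a CRD field could look like. The field name manifestLocation and the URL are hypothetical, chosen only for this example; they are not part of the current KnativeServing API.

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  # Hypothetical field: URL or file path of the (possibly customized) Knative Serving
  # release manifest the operator should install instead of the one bundled in its image.
  manifestLocation: https://example.com/custom/serving.yaml
```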
Problem
Each version of the operator installs one specific version of the Knative software, and its image bundles only one specific version of the released Knative manifest. However, different cloud platforms may customize the Knative manifest. How can we use this operator to install a customized manifest without shipping a new image?
Persona:
Human operator, administrator.
Exit Criteria
Pick a specific cloud platform (Google, OpenShift, IBM Cloud, etc.) whose manifest is at the same version as the open-source one but differs in content or configuration, and verify that the operator can load that manifest from a different location rather than the one bundled in the image.
Time Estimate (optional):
1 developer for 7 days.
Additional context (optional)
Add any other context about the feature request here.
I'm not sure we should bring this back. Why would a user specify a different release manifest? That sounds like a backdoor to bypass options the operator doesn't expose today.
Can you describe the specific use-case this would solve for you?
@markusthoemmes I also discussed with @jcrossley3 what we can do to deal with the platform-specific configurations.
Internally, customization is applied to the released YAML of Knative, like "a custom set of configMaps and possibly resource definitions - e.g. modify the resource limits". I don't yet have a full list of everything, so it is difficult for me to say how the operator could improve to meet these needs.
The operator CR for Serving is now able to configure the following (a sketch exercising these options follows the list):
Pulling images from private registries, with private (image pull) secrets.
Overwriting any of the ConfigMaps, by keys and values.
Letting the deployment named controller trust registries with self-signed certificates.
Overriding the ingress gateway.
Overriding the local gateway.
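For concreteness, here is a hedged sketch of a KnativeServing CR exercising the options above. Field names follow the serving-operator's v1alpha1 spec as I understand it and may differ between releases; all values (registry host, secret and ConfigMap names, gateway selectors) are placeholders.

```yaml
apiVersion: operator.knative.dev/v1alpha1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
spec:
  # Pull images from a private registry, using an image pull secret.
  registry:
    default: registry.example.com/knative/${NAME}:v0.13.0
    imagePullSecrets:
      - name: regcred
  # Override keys and values in any Knative Serving ConfigMap (here: config-defaults).
  config:
    defaults:
      revision-timeout-seconds: "300"
  # Let the controller deployment trust registries with self-signed certificates.
  controller-custom-certs:
    name: custom-certs
    type: ConfigMap
  # Override the ingress gateway and the cluster-local gateway.
  knative-ingress-gateway:
    selector:
      custom-gateway: ingress
  cluster-local-gateway:
    selector:
      custom-gateway: cluster-local
```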
However, I am not sure how the operator can change resource limits. This is just one example; I am still working on the full list.
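To illustrate the kind of customization meant by "modify the resource limits" (this is plain Kubernetes Deployment YAML shipped in a downstream manifest, not something the operator CR is known to expose today), an excerpt of a patched Serving controller Deployment might look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: controller
  namespace: knative-serving
spec:
  template:
    spec:
      containers:
        - name: controller
          # Resource requests/limits customized away from the upstream release manifest.
          resources:
            requests:
              cpu: 100m
              memory: 100Mi
            limits:
              cpu: "1"
              memory: 1Gi
```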
Bringing back the URL/filename option is the bottom line, but I would like to see other, better ways once I have a more complete list.