---
lastupdated: "2019-06-12"
keywords: kubernetes, iks, nginx, ingress controller
subcollection: containers
---

{:new_window: target="_blank"}
{:shortdesc: .shortdesc}
{:screen: .screen}
{:pre: .pre}
{:table: .aria-labeledby="caption"}
{:codeblock: .codeblock}
{:tip: .tip}
{:note: .note}
{:important: .important}
{:deprecated: .deprecated}
{:download: .download}
{:preview: .preview}
# Exposing apps with Ingress
{: #ingress}
Expose multiple apps in your Kubernetes cluster by creating Ingress resources that are managed by the IBM-provided application load balancer in {{site.data.keyword.containerlong}}. {:shortdesc}
## Sample Ingress resource YAMLs
{: #sample_ingress}
Use these sample YAML files to quickly get started with specifying your Ingress resource. {: shortdesc}
### Ingress resource to publicly expose an app
Have you already completed the following?
- Deploy app
- Create app service
- Select domain name and TLS secret
You can use the following deployment YAML to create an Ingress resource:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingressresource
spec:
  tls:
  - hosts:
    - <domain>
    secretName: <tls_secret_name>
  rules:
  - host: <domain>
    http:
      paths:
      - path: /<app1_path>
        backend:
          serviceName: <app1_service>
          servicePort: 80
      - path: /<app2_path>
        backend:
          serviceName: <app2_service>
          servicePort: 80
```
{: codeblock}
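To try the sample, save it to a file and apply it to your cluster. The filename here is illustrative, and the `-n` flag is needed only if your app services are in a non-default namespace:

```
kubectl apply -f myingressresource.yaml [-n <namespace>]
```
{: pre}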
### Ingress resource to privately expose an app
Have you already completed the following?
- Enable private ALB
- Deploy app
- Create app service
- Register custom domain name and TLS secret
You can use the following deployment YAML to create an Ingress resource:
```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingressresource
  annotations:
    ingress.bluemix.net/ALB-ID: "<private_ALB_ID_1>;<private_ALB_ID_2>"
spec:
  tls:
  - hosts:
    - <domain>
    secretName: <tls_secret_name>
  rules:
  - host: <domain>
    http:
      paths:
      - path: /<app1_path>
        backend:
          serviceName: <app1_service>
          servicePort: 80
      - path: /<app2_path>
        backend:
          serviceName: <app2_service>
          servicePort: 80
```
{: codeblock}
## About Ingress
{: #planning}
Ingress is a Kubernetes service that balances network traffic workloads in your cluster by forwarding public or private requests to your apps. You can use Ingress to expose multiple app services to the public or to a private network by using a unique public or private route. {:shortdesc}
### Ingress components
{: #ingress_components}
Ingress consists of three components:
- Ingress resource
- To expose an app by using Ingress, you must create a Kubernetes service for your app and register this service with Ingress by defining an Ingress resource. The Ingress resource is a Kubernetes resource that defines the rules for how to route incoming requests for apps. The Ingress resource also specifies the path to your app services, which are appended to the public route to form a unique app URL such as `mycluster.us-south.containers.appdomain.cloud/myapp1`.
As of 24 May 2018, the Ingress subdomain format changed for new clusters. The region or zone name included in the new subdomain format is generated based on the zone where the cluster was created. If you have pipeline dependencies on consistent app domain names, you can use your own custom domain instead of the IBM-provided Ingress subdomain.
  - All clusters created after 24 May 2018 are assigned a subdomain in the new format, `<cluster_name>.<region_or_zone>.containers.appdomain.cloud`.
  - Single-zone clusters created before 24 May 2018 continue to use the assigned subdomain in the old format, `<cluster_name>.<region>.containers.mybluemix.net`.
  - If you change a single-zone cluster created before 24 May 2018 to multizone by [adding a zone to the cluster](/docs/containers?topic=containers-add_workers#add_zone) for the first time, the cluster continues to use the assigned subdomain in the old format, `<cluster_name>.<region>.containers.mybluemix.net`, and is also assigned a subdomain in the new format, `<cluster_name>.<region_or_zone>.containers.appdomain.cloud`. Either subdomain can be used.
- Application load balancer (ALB)
- The application load balancer (ALB) is an external load balancer that listens for incoming HTTP, HTTPS, or TCP service requests. The ALB then forwards requests to the appropriate app pod according to the rules defined in the Ingress resource. When you create a standard cluster, {{site.data.keyword.containerlong_notm}} automatically creates a highly available ALB for your cluster and assigns a unique public route to it. The public route is linked to a portable public IP address that is provisioned into your IBM Cloud infrastructure (SoftLayer) account during cluster creation. A default private ALB is also automatically created, but is not automatically enabled.
**Multizone clusters**: When you add a zone to your cluster, a portable public subnet is added, and a new public ALB is automatically created and enabled on the subnet in that zone. All default public ALBs in your cluster share one public route but have different IP addresses. A default private ALB is also automatically created in each zone, but is not automatically enabled.
- Multizone load balancer (MZLB)
**Multizone clusters**: Whenever you create a multizone cluster or [add a zone to a single zone cluster](/docs/containers?topic=containers-add_workers#add_zone), a Cloudflare multizone load balancer (MZLB) is automatically created and deployed so that 1 MZLB exists for each region. The MZLB puts the IP addresses of your ALBs behind the same subdomain and enables health checks on these IP addresses to determine whether they are available. For example, if you have worker nodes in 3 zones in the US-East region, the subdomain `yourcluster.us-east.containers.appdomain.cloud` has 3 ALB IP addresses. The MZLB health checks the public ALB IP in each zone of a region and keeps the DNS lookup results updated based on these health checks. For example, if your ALBs have the IP addresses `1.1.1.1`, `2.2.2.2`, and `3.3.3.3`, a normal DNS lookup of your Ingress subdomain returns all 3 IPs, 1 of which the client accesses at random. If the ALB with IP address `3.3.3.3` becomes unavailable for any reason, such as a zone failure, the health check for that zone fails, the MZLB removes the failed IP from the subdomain, and the DNS lookup returns only the healthy `1.1.1.1` and `2.2.2.2` ALB IPs. The subdomain has a 30-second time to live (TTL), so after 30 seconds, new client connections are directed only to the remaining healthy ALB IPs.
In rare cases, some DNS resolvers or client apps might continue to use the unhealthy ALB IP after the 30-second TTL. These client apps might experience a longer load time until the client app abandons the `3.3.3.3` IP and tries to connect to `1.1.1.1` or `2.2.2.2`. Depending on the client browser or client app settings, the delay can range from a few seconds to a full TCP timeout.
The MZLB load balances for public ALBs that use the IBM-provided Ingress subdomain only. If you use only private ALBs, you must manually check the health of the ALBs and update DNS lookup results. If you use public ALBs that use a custom domain, you can include the ALBs in MZLB load balancing by creating a CNAME in your DNS entry to forward requests from your custom domain to the IBM-provided Ingress subdomain for your cluster.
If you use Calico pre-DNAT network policies to block all incoming traffic to Ingress services, you must also whitelist Cloudflare's IPv4 IPs that are used to check the health of your ALBs. For steps on how to create a Calico pre-DNAT policy to whitelist these IPs, see Lesson 3 of the Calico network policy tutorial.
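As a reference point, a pre-DNAT allow policy generally takes the shape of the following sketch. This is not the tutorial's exact policy: the health-check source CIDR and the worker selector are placeholders that you must replace with the values from the tutorial.

```yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: allow-health-checks
spec:
  applyOnForward: true
  preDNAT: true
  order: 900
  # Placeholder selector for the public interfaces of worker nodes
  selector: ibm.role == 'worker_public'
  ingress:
  - action: Allow
    protocol: TCP
    source:
      nets:
      # Placeholder: substitute the documented health-check IPv4 ranges
      - <healthcheck_CIDR>
    destination:
      ports:
      - 80
      - 443
```
{: codeblock}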
### IP addresses for ALBs
{: #ips}
When you create a standard cluster, {{site.data.keyword.containerlong_notm}} automatically provisions a portable public subnet and a portable private subnet. By default, the cluster automatically uses:
- 1 portable public IP address from the portable public subnet for the default public Ingress ALB.
- 1 portable private IP address from the portable private subnet for the default private Ingress ALB. {: shortdesc}
If you have a multizone cluster, a default public ALB and a default private ALB are automatically created in each zone. The IP addresses of your default public ALBs are all behind the same IBM-provided subdomain for your cluster.
Portable public and private IP addresses are static floating IPs and do not change when a worker node is removed. If a worker node is removed, a Keepalived daemon that constantly monitors the IP automatically reschedules the ALB pods that were on that worker to another worker node in that zone. The rescheduled ALB pods retain the same static IP address. For the life of the cluster, the ALB IP address in each zone does not change. If you remove a zone from a cluster, the ALB IP address for that zone is removed.
To see the IPs assigned to your ALBs, you can run the following command.
```
ibmcloud ks albs --cluster <cluster_name_or_id>
```
{: pre}
For more information about what happens to ALB IPs in the event of a zone failure, see the definition for the multizone load balancer component.
### How Ingress routes traffic in a single-zone cluster
{: #architecture-single}
The following diagram shows how Ingress directs communication from the internet to an app in a single-zone cluster:
1. A user sends a request to your app by accessing your app's URL. This URL is the public URL for your exposed app appended with the Ingress resource path, such as `mycluster.us-south.containers.appdomain.cloud/myapp`.
2. A DNS system service resolves the subdomain in the URL to the portable public IP address of the load balancer that exposes the ALB in your cluster.
3. Based on the resolved IP address, the client sends the request to the load balancer service that exposes the ALB.
4. The load balancer service routes the request to the ALB.
5. The ALB checks whether a routing rule for the `myapp` path exists in the cluster. If a matching rule is found, the request is forwarded according to the rules that you defined in the Ingress resource to the pod where the app is deployed. The source IP address of the packet is changed to the public IP address of the worker node where the app pod is running. If multiple app instances are deployed in the cluster, the ALB load balances the requests between the app pods.
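A quick way to observe steps 2 through 5 from a client is to resolve the subdomain and then send a request. The domain and path here reuse the example from step 1:

```
nslookup mycluster.us-south.containers.appdomain.cloud
curl -i https://mycluster.us-south.containers.appdomain.cloud/myapp
```
{: pre}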
### How Ingress routes traffic in a multizone cluster
{: #architecture-multi}
The following diagram shows how Ingress directs communication from the internet to an app in a multizone cluster:
1. A user sends a request to your app by accessing your app's URL. This URL is the public URL for your exposed app appended with the Ingress resource path, such as `mycluster.us-south.containers.appdomain.cloud/myapp`.
2. A DNS system service, which acts as the global load balancer, resolves the subdomain in the URL to an available IP address that was reported as healthy by the MZLB. The MZLB continuously checks the portable public IP addresses of the load balancer services that expose public ALBs in each zone in your cluster. The IP addresses are resolved in a round-robin cycle, which ensures that requests are equally load balanced among the healthy ALBs in various zones.
3. The client sends the request to the IP address of the load balancer service that exposes an ALB.
4. The load balancer service routes the request to the ALB.
5. The ALB checks whether a routing rule for the `myapp` path exists in the cluster. If a matching rule is found, the request is forwarded according to the rules that you defined in the Ingress resource to the pod where the app is deployed. The source IP address of the packet is changed to the public IP address of the worker node where the app pod is running. If multiple app instances are deployed in the cluster, the ALB load balances the requests between app pods across all zones.
## Prerequisites
{: #config_prereqs}
Before you get started with Ingress, review the following prerequisites. {:shortdesc}
Prerequisites for all Ingress configurations:
- Ingress is available for standard clusters only and requires at least two worker nodes per zone to ensure high availability and that periodic updates are applied. If you have only one worker node in a zone, the ALB cannot receive automatic updates. Automatic updates reload ALB pods, and ALB pods have anti-affinity rules that allow only one pod to be scheduled to each worker node for high availability. Because there is only one ALB pod on one worker, the pod is not restarted so that traffic is not interrupted. The ALB pod is updated to the latest version only when you delete the old pod manually so that the new, updated pod can be scheduled (see the sketch after this list).
- Setting up Ingress requires the following {{site.data.keyword.cloud_notm}} IAM roles:
  - **Administrator** platform role for the cluster
  - **Manager** service role in all namespaces
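If you have only one worker node in a zone and need to trigger the manual ALB pod update described in the first prerequisite, the general shape of the commands is the following sketch; the pod name is a placeholder that you get from the first command's output:

```
kubectl get pods -n kube-system | grep alb
kubectl delete pod <ALB_pod_name> -n kube-system
```
{: pre}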
Prerequisites for using Ingress in multizone clusters:
- If you restrict network traffic to edge worker nodes, at least 2 edge worker nodes must be enabled in each zone for high availability of Ingress pods. Create an edge node worker pool that spans all the zones in your cluster and has at least 2 worker nodes per zone.
- If you have multiple VLANs for a cluster, multiple subnets on the same VLAN, or a multizone cluster, you must enable a Virtual Router Function (VRF) for your IBM Cloud infrastructure (SoftLayer) account so your worker nodes can communicate with each other on the private network. To enable VRF, contact your IBM Cloud infrastructure (SoftLayer) account representative. If you cannot or do not want to enable VRF, enable VLAN spanning. To perform this action, you need the **Network > Manage Network VLAN Spanning** infrastructure permission, or you can request the account owner to enable it. To check whether VLAN spanning is already enabled, use the `ibmcloud ks vlan-spanning-get --region <region>` command.
- If a zone fails, you might see intermittent failures in requests to the Ingress ALB in that zone.
## Planning networking for single or multiple namespaces
{: #multiple_namespaces}
One Ingress resource is required per namespace where you have apps that you want to expose. {:shortdesc}
### All apps are in one namespace
{: #one-ns}
If the apps in your cluster are all in the same namespace, one Ingress resource is required to define routing rules for the apps that are exposed there. For example, if you have `app1` and `app2` exposed by services in a development namespace, you can create an Ingress resource in the namespace. The resource specifies `domain.net` as the host and registers the paths that each app listens on with `domain.net`.
{: shortdesc}
### Apps are in multiple namespaces
{: #multi-ns}
If the apps in your cluster are in different namespaces, you must create one resource per namespace to define rules for the apps that are exposed there. {: shortdesc}
However, you can define a host name in only one resource. You cannot define the same host name in multiple resources. To register multiple Ingress resources with the same host name, you must use a wildcard domain. When a wildcard domain such as `*.domain.net` is registered, multiple subdomains can all resolve to the same host. Then, you can create an Ingress resource in each namespace and specify a different subdomain in each Ingress resource.
For example, consider the following scenario:
- You have two versions of the same app, `app1` and `app3`, for testing purposes.
- You deploy the apps in two different namespaces within the same cluster: `app1` into the development namespace, and `app3` into the staging namespace.
To use the same cluster ALB to manage traffic to these apps, you create the following services and resources:
- A Kubernetes service in the development namespace to expose `app1`.
- An Ingress resource in the development namespace that specifies the host as `dev.domain.net`.
- A Kubernetes service in the staging namespace to expose `app3`.
- An Ingress resource in the staging namespace that specifies the host as `stage.domain.net`.
Now, both URLs resolve to the same domain and are thus both serviced by the same ALB. However, because the resource in the staging namespace is registered with the `stage` subdomain, the Ingress ALB correctly routes requests from the `stage.domain.net/app3` URL to only `app3`.
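To make the scenario concrete, the two Ingress resources could look like the following sketch. The resource names, service names, and paths are illustrative placeholders:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: development
spec:
  rules:
  - host: dev.domain.net
    http:
      paths:
      - path: /app1
        backend:
          serviceName: app1-service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: stage-ingress
  namespace: staging
spec:
  rules:
  - host: stage.domain.net
    http:
      paths:
      - path: /app3
        backend:
          serviceName: app3-service
          servicePort: 80
```
{: codeblock}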
{: #wildcard_tls}
The IBM-provided Ingress subdomain wildcard, `*.<cluster_name>.<region>.containers.appdomain.cloud`, is registered by default for your cluster. The IBM-provided TLS certificate is a wildcard certificate and can be used for the wildcard subdomain. If you want to use a custom domain, you must register the custom domain as a wildcard domain such as `*.custom_domain.net`. To use TLS, you must get a wildcard certificate.
{: note}
### Multiple domains within one namespace
{: #multi-domains}
Within an individual namespace, you can use one domain to access all the apps in the namespace. If you want to use different domains for the apps within an individual namespace, use a wildcard domain. When a wildcard domain such as `*.mycluster.us-south.containers.appdomain.cloud` is registered, multiple subdomains all resolve to the same host. Then, you can use one resource to specify multiple subdomain hosts within that resource. Alternatively, you can create multiple Ingress resources in the namespace and specify a different subdomain in each Ingress resource.
{: shortdesc}
The IBM-provided Ingress subdomain wildcard, `*.<cluster_name>.<region>.containers.appdomain.cloud`, is registered by default for your cluster. The IBM-provided TLS certificate is a wildcard certificate and can be used for the wildcard subdomain. If you want to use a custom domain, you must register the custom domain as a wildcard domain such as `*.custom_domain.net`. To use TLS, you must get a wildcard certificate.
{: note}
## Exposing apps that are inside your cluster to the public
{: #ingress_expose_public}
Expose apps that are inside your cluster to the public by using the public Ingress ALB. {:shortdesc}
Before you begin:
- Review the Ingress prerequisites.
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
### Step 1: Deploy apps and create app services
{: #public_inside_1}
Start by deploying your apps and creating Kubernetes services to expose them. {: shortdesc}
1. Deploy your app to the cluster. Ensure that you add a label to your deployment in the metadata section of your configuration file, such as `app: code`. This label is needed to identify all pods where your app is running so that the pods can be included in the Ingress load balancing.

2. Create a Kubernetes service for each app that you want to expose. Your app must be exposed by a Kubernetes service to be included by the cluster ALB in the Ingress load balancing.
    1. Open your preferred editor and create a service configuration file that is named, for example, `myappservice.yaml`.
    2. Define a service for the app that the ALB will expose.
        ```yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: myappservice
        spec:
          selector:
            <selector_key>: <selector_value>
          ports:
          - protocol: TCP
            port: 8080
        ```
        {: codeblock}
    3. Save your changes.
    4. Create the service in your cluster. If apps are deployed in multiple namespaces in your cluster, ensure that the service deploys into the same namespace as the app that you want to expose.
        ```
        kubectl apply -f myappservice.yaml [-n <namespace>]
        ```
        {: pre}
    5. Repeat these steps for every app that you want to expose.
### Step 2: Select an app domain
{: #public_inside_2}
When you configure the public ALB, you choose the domain that your apps will be accessible through. {: shortdesc}
You can use the IBM-provided domain, such as `mycluster-12345.us-south.containers.appdomain.cloud/myapp`, to access your app from the internet. To use a custom domain instead, you can set up a CNAME record to map your custom domain to the IBM-provided domain, or set up an A record with your DNS provider by using the ALB's public IP address.
To use the IBM-provided Ingress domain:
Get the IBM-provided domain. Replace `<cluster_name_or_ID>` with the name of the cluster where the app is deployed.

```
ibmcloud ks cluster-get --cluster <cluster_name_or_ID> | grep Ingress
```
{: pre}
Example output:
```
Ingress Subdomain:      mycluster-12345.us-south.containers.appdomain.cloud
Ingress Secret:         <tls_secret>
```
{: screen}
To use a custom domain:
1. Create a custom domain. To register your custom domain, work with your Domain Name Service (DNS) provider or {{site.data.keyword.cloud_notm}} DNS.
    - If the apps that you want Ingress to expose are in different namespaces in one cluster, register the custom domain as a wildcard domain, such as `*.custom_domain.net`.

2. Configure your domain to route incoming network traffic to the IBM-provided ALB. Choose between these options:
    - Define an alias for your custom domain by specifying the IBM-provided domain as a Canonical Name record (CNAME). To find the IBM-provided Ingress domain, run `ibmcloud ks cluster-get --cluster <cluster_name>` and look for the **Ingress subdomain** field. Using a CNAME is preferred because IBM provides automatic health checks on the IBM subdomain and removes any failing IPs from the DNS response.
    - Map your custom domain to the portable public IP address of the IBM-provided ALB by adding the IP address as an A record. To find the portable public IP address of the ALB, run `ibmcloud ks alb-get --albID <public_alb_ID>`.
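In BIND-style zone file notation, the two options look roughly like the following sketch; the record names and the IP address are placeholders:

```
; Option 1: alias the custom domain to the IBM-provided Ingress subdomain
www.custom_domain.net.  IN  CNAME  mycluster-12345.us-south.containers.appdomain.cloud.
; Option 2: point the custom domain at the ALB's portable public IP
www.custom_domain.net.  IN  A      169.xx.xx.xx
```
{: codeblock}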
### Step 3: Select TLS termination
{: #public_inside_3}
After you choose the app domain, you choose whether to use TLS termination. {: shortdesc}
The ALB load balances HTTP network traffic to the apps in your cluster. To also load balance incoming HTTPS connections, you can configure the ALB to decrypt the network traffic and forward the decrypted request to the apps that are exposed in your cluster.
- If you use the IBM-provided Ingress subdomain, you can use the IBM-provided TLS certificate. IBM-provided TLS certificates are signed by LetsEncrypt and are fully managed by IBM. The certificates expire every 90 days and are automatically renewed 37 days before they expire. For information about wildcard TLS certification, see this note.
- If you use a custom domain, you can use your own TLS certificate to manage TLS termination. The ALB first checks for a secret in the namespace that the app is in, then in `default`, and finally in `ibm-cert-store`. If you have apps in one namespace only, you can import or create a TLS secret for the certificate in that same namespace. If you have apps in multiple namespaces, import or create a TLS secret for the certificate in the `default` namespace so that the ALB can access and use the certificate in every namespace. In the Ingress resources that you define for each namespace, specify the name of the secret that is in the `default` namespace. For information about wildcard TLS certification, see this note. Note: TLS certificates that contain pre-shared keys (TLS-PSK) are not supported.
If you use the IBM-provided Ingress domain:
Get the IBM-provided TLS secret for your cluster.
```
ibmcloud ks cluster-get --cluster <cluster_name_or_ID> | grep Ingress
```
{: pre}
Example output:
```
Ingress Subdomain:      mycluster-12345.us-south.containers.appdomain.cloud
Ingress Secret:         <tls_secret>
```
{: screen}
If you use a custom domain:
If a TLS certificate is stored in {{site.data.keyword.cloudcerts_long_notm}} that you want to use, you can import its associated secret into your cluster by running the following command:
```
ibmcloud ks alb-cert-deploy --secret-name <secret_name> --cluster <cluster_name_or_ID> --cert-crn <certificate_crn>
```
{: pre}
Make sure that you do not create the secret with the same name as the IBM-provided Ingress secret. You can get the name of the IBM-provided Ingress secret by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID> | grep Ingress`.
{: note}
When you import a certificate with this command, the certificate secret is created in a namespace called `ibm-cert-store`. A reference to this secret is then created in the `default` namespace, which any Ingress resource in any namespace can access. When the ALB is processing requests, it follows this reference to pick up and use the certificate secret from the `ibm-cert-store` namespace.
If you do not have a TLS certificate ready, follow these steps:
1. Generate a certificate authority (CA) cert and key from your certificate provider. If you have your own domain, purchase an official TLS certificate for your domain. Make sure the CN is different for each certificate.

2. Convert the cert and key into base-64.
    1. Encode the cert and key into base-64 and save the base-64 encoded value in a new file.
        ```
        openssl base64 -in tls.key -out tls.key.base64
        ```
        {: pre}
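        The sample command shows only the key. Assuming your certificate file is named `tls.crt` (a placeholder), the same command encodes the cert:
        ```
        openssl base64 -in tls.crt -out tls.crt.base64
        ```
        {: pre}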
    2. View the base-64 encoded value for your cert and key.
        ```
        cat tls.key.base64
        ```
        {: pre}

3. Create a secret YAML file by using the cert and key.
    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: ssl-my-test
    type: Opaque
    data:
      tls.crt: <client_certificate>
      tls.key: <client_key>
    ```
    {: codeblock}

4. Create the certificate as a Kubernetes secret.
    ```
    kubectl create -f ssl-my-test
    ```
    {: pre}

    Make sure that you do not create the secret with the same name as the IBM-provided Ingress secret. You can get the name of the IBM-provided Ingress secret by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID> | grep Ingress`.
    {: note}
### Step 4: Create the Ingress resource
{: #public_inside_4}
Ingress resources define the routing rules that the ALB uses to route traffic to your app service. {: shortdesc}
If your cluster has multiple namespaces where apps are exposed, one Ingress resource is required per namespace. However, each namespace must use a different host. You must register a wildcard domain and specify a different subdomain in each resource. For more information, see Planning networking for single or multiple namespaces. {: note}
1. Open your preferred editor and create an Ingress configuration file that is named, for example, `myingressresource.yaml`.

2. Define an Ingress resource in your configuration file that uses the IBM-provided domain or your custom domain to route incoming network traffic to the services that you created earlier.

    Example YAML that does not use TLS:
    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myingressresource
    spec:
      rules:
      - host: <domain>
        http:
          paths:
          - path: /<app1_path>
            backend:
              serviceName: <app1_service>
              servicePort: 80
          - path: /<app2_path>
            backend:
              serviceName: <app2_service>
              servicePort: 80
    ```
    {: codeblock}

    Example YAML that uses TLS:
    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myingressresource
    spec:
      tls:
      - hosts:
        - <domain>
        secretName: <tls_secret_name>
      rules:
      - host: <domain>
        http:
          paths:
          - path: /<app1_path>
            backend:
              serviceName: <app1_service>
              servicePort: 80
          - path: /<app2_path>
            backend:
              serviceName: <app2_service>
              servicePort: 80
    ```
    {: codeblock}
3. Create the Ingress resource for your cluster. Ensure that the resource deploys into the same namespace as the app services that you specified in the resource.
    ```
    kubectl apply -f myingressresource.yaml -n <namespace>
    ```
    {: pre}

4. Verify that the Ingress resource was created successfully.
    ```
    kubectl describe ingress myingressresource
    ```
    {: pre}

    If messages in the events describe an error in your resource configuration, change the values in your resource file and reapply the file for the resource.
Your Ingress resource is created in the same namespace as your app services. Your apps in this namespace are registered with the cluster's Ingress ALB.
### Step 5: Access your app
{: #public_inside_5}
In a web browser, enter the URL of the app service to access. {: shortdesc}
```
https://<domain>/<app1_path>
```
{: codeblock}

If you exposed multiple apps, access those apps by changing the path that is appended to the URL.

```
https://<domain>/<app2_path>
```
{: codeblock}

If you use a wildcard domain to expose apps in different namespaces, access those apps with their own subdomains.

```
http://<subdomain1>.<domain>/<app1_path>
```
{: codeblock}

```
http://<subdomain2>.<domain>/<app2_path>
```
{: codeblock}
Having trouble connecting to your app through Ingress? Try debugging Ingress. {: tip}
## Exposing apps that are outside your cluster to the public
{: #external_endpoint}
Expose apps that are outside your cluster to the public by including them in public Ingress ALB load balancing. Incoming public requests on the IBM-provided or your custom domain are forwarded automatically to the external app. {: shortdesc}
Before you begin:
- Review the Ingress prerequisites.
- Ensure that the external app that you want to include into the cluster load balancing can be accessed by using a public IP address.
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster.
To expose apps that are outside your cluster to the public:
1. Create a Kubernetes service for your cluster that will forward incoming requests to an external endpoint that you will create.
    1. Open your preferred editor and create a service configuration file that is named, for example, `myexternalservice.yaml`.
    2. Define a service for the app that the ALB will expose.
        ```yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: myexternalservice
        spec:
          ports:
          - protocol: TCP
            port: 8080
        ```
        {: codeblock}
    3. Save your changes.
    4. Create the Kubernetes service for your cluster.
        ```
        kubectl apply -f myexternalservice.yaml
        ```
        {: pre}
2. Configure a Kubernetes endpoint that defines the external location of the app that you want to include into the cluster load balancing.
    1. Open your preferred editor and create an endpoint configuration file that is named, for example, `myexternalendpoint.yaml`.
    2. Define your external endpoint. Include all public IP addresses and ports that you can use to access your external app.
        ```yaml
        kind: Endpoints
        apiVersion: v1
        metadata:
          name: myexternalendpoint
        subsets:
        - addresses:
          - ip: <external_IP1>
          - ip: <external_IP2>
          ports:
          - port: <external_port>
        ```
        {: codeblock}
    3. Save your changes.
    4. Create the Kubernetes endpoint for your cluster.
        ```
        kubectl apply -f myexternalendpoint.yaml
        ```
        {: pre}
3. Continue with the steps in "Exposing apps that are inside your cluster to the public", [Step 2: Select an app domain](#public_inside_2).
## Exposing apps to a private network
{: #ingress_expose_private}
Expose apps to a private network by using the private Ingress ALB. {:shortdesc}
To use a private ALB, you must first enable the private ALB. Because private VLAN-only clusters are not assigned an IBM-provided Ingress subdomain, no Ingress secret is created during cluster setup. To expose your apps to the private network, you must register your ALB with a custom domain and, optionally, import your own TLS certificate.
Before you begin:
- Review the Ingress prerequisites.
- Review the options for planning private access to apps when worker nodes are connected to a public and a private VLAN, or to a private VLAN only.
- If your worker nodes are connected to a private VLAN only, you must configure a DNS service that is available on the private network.
### Step 1: Deploy apps and create app services
{: #private_1}
Start by deploying your apps and creating Kubernetes services to expose them. {: shortdesc}
1. Deploy your app to the cluster. Ensure that you add a label to your deployment in the metadata section of your configuration file, such as `app: code`. This label is needed to identify all pods where your app is running so that the pods can be included in the Ingress load balancing.

2. Create a Kubernetes service for each app that you want to expose. Your app must be exposed by a Kubernetes service to be included by the cluster ALB in the Ingress load balancing.
    1. Open your preferred editor and create a service configuration file that is named, for example, `myappservice.yaml`.
    2. Define a service for the app that the ALB will expose.
        ```yaml
        apiVersion: v1
        kind: Service
        metadata:
          name: myappservice
        spec:
          selector:
            <selector_key>: <selector_value>
          ports:
          - protocol: TCP
            port: 8080
        ```
        {: codeblock}
    3. Save your changes.
    4. Create the service in your cluster. If apps are deployed in multiple namespaces in your cluster, ensure that the service deploys into the same namespace as the app that you want to expose.
        ```
        kubectl apply -f myappservice.yaml [-n <namespace>]
        ```
        {: pre}
    5. Repeat these steps for every app that you want to expose.
### Step 2: Enable the default private ALB
{: #private_ingress}
When you create a standard cluster, an IBM-provided private application load balancer (ALB) is created in each zone where you have worker nodes, and is assigned a portable private IP address and a private route. However, the default private ALB in each zone is not automatically enabled. To use the default private ALB to load balance private network traffic to your apps, you must first enable it with either the IBM-provided portable private IP address or your own portable private IP address.
{:shortdesc}
If you used the `--no-subnet` flag when you created the cluster, you must add a portable private subnet or a user-managed subnet before you can enable the private ALB. For more information, see Requesting more subnets for your cluster.
{: note}
**To enable a default private ALB by using the pre-assigned, IBM-provided portable private IP address:**

1. Get the ID of the default private ALB that you want to enable. Replace `<cluster_name>` with the name of the cluster where the app that you want to expose is deployed.
    ```
    ibmcloud ks albs --cluster <cluster_name>
    ```
    {: pre}

    The **Status** field for private ALBs is **disabled**.
    ```
    ALB ID                                            Enabled   Status     Type      ALB IP           Zone    Build                          ALB VLAN ID
    private-crdf253b6025d64944ab99ed63bb4567b6-alb1   false     disabled   private   -                dal10   ingress:411/ingress-auth:315   2234947
    public-crdf253b6025d64944ab99ed63bb4567b6-alb1    true      enabled    public    169.xx.xxx.xxx   dal10   ingress:411/ingress-auth:315   2234945
    ```
    {: screen}

    In multizone clusters, the numbered suffix on the ALB ID indicates the order in which the ALB was added.
    - For example, the `-alb1` suffix on the ALB `private-cr6d779503319d419aa3b4ab171d12c3b8-alb1` indicates that it was the first default private ALB that was created. It exists in the zone where you created the cluster. In the example above, the cluster was created in `dal10`.
    - The `-alb2` suffix on the ALB `private-crb2f60e9735254ac8b20b9c1e38b649a5-alb2` indicates that it was the second default private ALB that was created. It exists in the second zone that you added to your cluster. In the example above, the second zone is `dal12`.

2. Enable the private ALB. Replace `<private_ALB_ID>` with the ID for the private ALB from the output in the previous step.
    ```
    ibmcloud ks alb-configure --albID <private_ALB_ID> --enable
    ```
    {: pre}

3. **Multizone clusters**: For high availability, repeat the above steps for the private ALB in each zone.
**To enable the private ALB by using your own portable private IP address:**
1. Configure the user-managed subnet of your chosen IP address to route traffic on the private VLAN of your cluster.
    ```
    ibmcloud ks cluster-user-subnet-add --cluster <cluster_name> --subnet-cidr <subnet_CIDR> --private-vlan <private_VLAN>
    ```
    {: pre}

2. List the available ALBs in your cluster to get the ID of the private ALB.
    ```
    ibmcloud ks albs --cluster <cluster_name>
    ```
    {: pre}

    The **Status** field for the private ALB is **disabled**.
    ```
    ALB ID                                            Enabled   Status     Type      ALB IP           Zone    Build                          ALB VLAN ID
    private-crdf253b6025d64944ab99ed63bb4567b6-alb1   false     disabled   private   -                dal10   ingress:411/ingress-auth:315   2234947
    public-crdf253b6025d64944ab99ed63bb4567b6-alb1    true      enabled    public    169.xx.xxx.xxx   dal10   ingress:411/ingress-auth:315   2234945
    ```
    {: screen}

3. Enable the private ALB. Replace `<private_ALB_ID>` with the ID for the private ALB from the output in the previous step and `<user_IP>` with the IP address from your user-managed subnet that you want to use.
    ```
    ibmcloud ks alb-configure --albID <private_ALB_ID> --enable --user-ip <user_IP>
    ```
    {: pre}

4. **Multizone clusters**: For high availability, repeat the above steps for the private ALB in each zone.
### Step 3: Map your custom domain to the private ALBs
{: #private_3}
Private VLAN-only clusters are not assigned an IBM-provided Ingress subdomain. When you configure the private ALB, expose your apps by using a custom domain. {: shortdesc}
**Private VLAN-only clusters**:
1. If your worker nodes are connected to a private VLAN only, you must configure your own DNS service that is available on your private network.
2. Create a custom domain through your DNS provider. If the apps that you want Ingress to expose are in different namespaces in one cluster, register the custom domain as a wildcard domain, such as `*.custom_domain.net`.
3. Using your private DNS service, map your custom domain to the portable private IP addresses of the ALBs by adding the IP addresses as A records. To find the portable private IP addresses of the ALBs, run `ibmcloud ks alb-get --albID <private_alb_ID>` for each ALB.
**Private and public VLAN clusters**:
1. Create a custom domain. To register your custom domain, work with your Domain Name Service (DNS) provider or {{site.data.keyword.cloud_notm}} DNS.
    - If the apps that you want Ingress to expose are in different namespaces in one cluster, register the custom domain as a wildcard domain, such as `*.custom_domain.net`.
2. Map your custom domain to the portable private IP addresses of the ALBs by adding the IP addresses as A records. To find the portable private IP addresses of the ALBs, run `ibmcloud ks alb-get --albID <private_alb_ID>` for each ALB.
### Step 4: Select TLS termination
{: #private_4}
After you map your custom domain, choose whether to use TLS termination. {: shortdesc}
The ALB load balances HTTP network traffic to the apps in your cluster. To also load balance incoming HTTPS connections, you can configure the ALB to decrypt the network traffic and forward the decrypted request to the apps that are exposed in your cluster.
Because private VLAN-only clusters are not assigned an IBM-provided Ingress domain, no Ingress secret is created during the cluster setup. You can use your own TLS certificate to manage TLS termination. The ALB first checks for a secret in the namespace that the app is in, then in `default`, and finally in `ibm-cert-store`. If you have apps in one namespace only, you can import or create a TLS secret for the certificate in that same namespace. If you have apps in multiple namespaces, import or create a TLS secret for the certificate in the `default` namespace so that the ALB can access and use the certificate in every namespace. In the Ingress resources that you define for each namespace, specify the name of the secret that is in the `default` namespace. For information about wildcard TLS certification, see this note. Note: TLS certificates that contain pre-shared keys (TLS-PSK) are not supported.
If a TLS certificate is stored in {{site.data.keyword.cloudcerts_long_notm}} that you want to use, you can import its associated secret into your cluster by running the following command:
```
ibmcloud ks alb-cert-deploy --secret-name <secret_name> --cluster <cluster_name_or_ID> --cert-crn <certificate_crn>
```
{: pre}
When you import a certificate with this command, the certificate secret is created in a namespace called `ibm-cert-store`. A reference to this secret is then created in the `default` namespace, which any Ingress resource in any namespace can access. When the ALB is processing requests, it follows this reference to pick up and use the certificate secret from the `ibm-cert-store` namespace.
### Step 5: Create the Ingress resource
{: #private_5}
Ingress resources define the routing rules that the ALB uses to route traffic to your app service. {: shortdesc}
If your cluster has multiple namespaces where apps are exposed, one Ingress resource is required per namespace. However, each namespace must use a different host. You must register a wildcard domain and specify a different subdomain in each resource. For more information, see Planning networking for single or multiple namespaces. {: note}
1. Open your preferred editor and create an Ingress configuration file that is named, for example, `myingressresource.yaml`.

2. Define an Ingress resource in your configuration file that uses your custom domain to route incoming network traffic to the services that you created earlier.

    Example YAML that does not use TLS:
    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myingressresource
      annotations:
        ingress.bluemix.net/ALB-ID: "<private_ALB_ID_1>;<private_ALB_ID_2>"
    spec:
      rules:
      - host: <domain>
        http:
          paths:
          - path: /<app1_path>
            backend:
              serviceName: <app1_service>
              servicePort: 80
          - path: /<app2_path>
            backend:
              serviceName: <app2_service>
              servicePort: 80
    ```
    {: codeblock}

    Example YAML that uses TLS:
    ```yaml
    apiVersion: extensions/v1beta1
    kind: Ingress
    metadata:
      name: myingressresource
      annotations:
        ingress.bluemix.net/ALB-ID: "<private_ALB_ID_1>;<private_ALB_ID_2>"
    spec:
      tls:
      - hosts:
        - <domain>
        secretName: <tls_secret_name>
      rules:
      - host: <domain>
        http:
          paths:
          - path: /<app1_path>
            backend:
              serviceName: <app1_service>
              servicePort: 80
          - path: /<app2_path>
            backend:
              serviceName: <app2_service>
              servicePort: 80
    ```
    {: codeblock}
3. Create the Ingress resource for your cluster. Ensure that the resource deploys into the same namespace as the app services that you specified in the resource.
    ```
    kubectl apply -f myingressresource.yaml -n <namespace>
    ```
    {: pre}

4. Verify that the Ingress resource was created successfully.
    ```
    kubectl describe ingress myingressresource
    ```
    {: pre}

    If messages in the events describe an error in your resource configuration, change the values in your resource file and reapply the file for the resource.
Your Ingress resource is created in the same namespace as your app services. Your apps in this namespace are registered with the cluster's Ingress ALB.
### Step 6: Access your app
{: #private_6}
1. Before you can access your app, make sure that you can access a DNS service.
    - **Public and private VLAN**: To use the default external DNS provider, you must configure edge nodes with public access and configure a Virtual Router Appliance.
    - **Private VLAN only**: You must configure a DNS service that is available on the private network.

2. From within your private network firewall, enter the URL of the app service in a web browser.
    ```
    https://<domain>/<app1_path>
    ```
    {: codeblock}

    If you exposed multiple apps, access those apps by changing the path that is appended to the URL.
    ```
    https://<domain>/<app2_path>
    ```
    {: codeblock}

    If you use a wildcard domain to expose apps in different namespaces, access those apps with their own subdomains.
    ```
    http://<subdomain1>.<domain>/<app1_path>
    ```
    {: codeblock}

    ```
    http://<subdomain2>.<domain>/<app2_path>
    ```
    {: codeblock}
For a comprehensive tutorial on how to secure microservice-to-microservice communication across your clusters by using the private ALB with TLS, check out this blog post. {: tip}
## Customizing Ingress routing with annotations
{: #annotations}
To add capabilities to your Ingress application load balancer (ALB), you can add IBM-specific annotations as metadata in an Ingress resource. {: shortdesc}
Get started with some of the most commonly used annotations.
- `redirect-to-https`: Convert insecure HTTP client requests to HTTPS.
- `rewrite-path`: Route incoming network traffic to a different path that your back-end app listens on.
- `ssl-services`: Use TLS to encrypt traffic to your upstream apps that require HTTPS.
- `appid-auth`: Use {{site.data.keyword.appid_full_notm}} to authenticate with your application.
- `client-max-body-size`: Set the maximum size of the body that the client can send as part of a request.
Only annotations in the format `ingress.bluemix.net/<annotation>` are supported. For the full list of supported annotations, see Customizing Ingress with annotations. Annotations in the format `ingress.kubernetes.io/<annotation>`, `kubernetes.io/<annotation>`, and `nginx.ingress.kubernetes.io/<annotation>` are not supported.
{: note}
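For example, to redirect insecure HTTP requests to HTTPS, you add the annotation to the `metadata.annotations` section of your Ingress resource. The following minimal sketch reuses the resource from the earlier examples:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: myingressresource
  annotations:
    ingress.bluemix.net/redirect-to-https: "True"
spec:
  rules:
  - host: <domain>
    http:
      paths:
      - path: /<app1_path>
        backend:
          serviceName: <app1_service>
          servicePort: 80
```
{: codeblock}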
## Opening ports in the Ingress ALB
{: #opening_ingress_ports}

By default, only ports 80 and 443 are exposed in the Ingress ALB. To expose other ports, you can edit the `ibm-cloud-provider-ingress-cm` configmap resource.
{:shortdesc}
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
    ```
    kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
    ```
    {: pre}

2. Add a `data` section and specify public ports `80`, `443`, and any other ports that you want to expose, separated by a semicolon (;).

    By default, ports 80 and 443 are open. If you want to keep 80 and 443 open, you must also include them in addition to any other ports you specify in the `public-ports` field. Any port that is not specified is closed. If you enabled a private ALB, you must also specify any ports that you want to keep open in the `private-ports` field.
    {: important}

    ```yaml
    apiVersion: v1
    data:
      public-ports: "80;443;<port3>"
      private-ports: "80;443;<port4>"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
    {: codeblock}
    Example that keeps ports `80`, `443`, and `9443` open:
    ```yaml
    apiVersion: v1
    data:
      public-ports: "80;443;9443"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
{: screen}
3. Save the configuration file.

4. Verify that the configmap changes were applied.
    ```
    kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
    ```
    {: pre}

5. Optional:
    - Access an app via a non-standard TCP port that you opened by using the `tcp-ports` annotation (see the sketch after this list).
    - Change the default ports for HTTP (port 80) and HTTPS (port 443) network traffic to a port that you opened by using the `custom-port` annotation.
For more information about configmap resources, see the Kubernetes documentation.
## Preserving the source IP address
{: #preserve_source_ip}
By default, the source IP address of the client request is not preserved. When a client request to your app is sent to your cluster, the request is routed to a pod for the load balancer service that exposes the ALB. If no app pod exists on the same worker node as the load balancer service pod, the load balancer forwards the request to an app pod on a different worker node. The source IP address of the packet is changed to the public IP address of the worker node where the app pod is running.
{: shortdesc}
To preserve the original source IP address of the client request, you can enable source IP preservation. Preserving the client's IP is useful, for example, when app servers have to apply security and access-control policies.
If you disable an ALB, any source IP changes you make to the load balancer service exposing the ALB are lost. When you re-enable the ALB, you must enable source IP again. {: note}
To enable source IP preservation, edit the load balancer service that exposes an Ingress ALB:
1. Enable source IP preservation for a single ALB or for all the ALBs in your cluster.
    - To set up source IP preservation for a single ALB:
        1. Get the ID of the ALB for which you want to enable source IP preservation. The ALB services have a format similar to `public-cr18e61e63c6e94b658596ca93d087eed9-alb1` for a public ALB or `private-cr18e61e63c6e94b658596ca93d087eed9-alb1` for a private ALB.
            ```
            kubectl get svc -n kube-system | grep alb
            ```
            {: pre}
        2. Open the YAML for the load balancer service that exposes the ALB.
            ```
            kubectl edit svc <ALB_ID> -n kube-system
            ```
            {: pre}
        3. Under `spec`, change the value of `externalTrafficPolicy` from `Cluster` to `Local`.
        4. Save and close the configuration file. The output is similar to the following:
            ```
            service "public-cr18e61e63c6e94b658596ca93d087eed9-alb1" edited
            ```
            {: screen}
    - To set up source IP preservation for all public ALBs in your cluster, run the following command:
        ```
        kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'; done
        ```
        {: pre}

        Example output:
        ```
        "public-cr18e61e63c6e94b658596ca93d087eed9-alb1", "public-cr17e61e63c6e94b658596ca92d087eed9-alb2" patched
        ```
        {: screen}
    - To set up source IP preservation for all private ALBs in your cluster, run the following command:
        ```
        kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^private" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Local"}}'; done
        ```
        {: pre}

        Example output:
        ```
        "private-cr18e61e63c6e94b658596ca93d087eed9-alb1", "private-cr17e61e63c6e94b658596ca92d087eed9-alb2" patched
        ```
        {: screen}
2. Verify that the source IP is being preserved in your ALB pod logs.
    1. Get the ID of a pod for the ALB that you modified.
        ```
        kubectl get pods -n kube-system | grep alb
        ```
        {: pre}
    2. Open the logs for that ALB pod. Verify that the IP address for the `client` field is the client request IP address instead of the load balancer service IP address.
        ```
        kubectl logs <ALB_pod_ID> nginx-ingress -n kube-system
        ```
        {: pre}

3. Now, when you look up the headers for the requests that are sent to your back-end app, you can see the client IP address in the `x-forwarded-for` header.

4. If you no longer want to preserve the source IP, you can revert the changes that you made to the service.
    - To revert source IP preservation for your public ALBs:
        ```
        kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^public" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'; done
        ```
        {: pre}
    - To revert source IP preservation for your private ALBs:
        ```
        kubectl get svc -n kube-system | grep alb | awk '{print $1}' | grep "^private" | while read alb; do kubectl patch svc $alb -n kube-system -p '{"spec":{"externalTrafficPolicy":"Cluster"}}'; done
        ```
        {: pre}
## Configuring SSL protocols and ciphers at the HTTP level
{: #ssl_protocols_ciphers}

Enable SSL protocols and ciphers at the global HTTP level by editing the `ibm-cloud-provider-ingress-cm` configmap.
{:shortdesc}
To comply with the PCI Security Standards Council mandate, the Ingress service disables TLS 1.0 and 1.1 by default with the upcoming version update of the Ingress ALB pods on 23 January 2019. The update rolls out automatically to all {{site.data.keyword.containerlong_notm}} clusters that have not opted out from automatic ALB updates. If the clients that connect to your apps support TLS 1.2, no action is required. If you still have legacy clients that require TLS 1.0 or 1.1 support, you must manually enable the required TLS versions. You can override the default setting to use TLS 1.1 or 1.0 protocols by following the steps in this section. For more information about how to see the TLS versions that your clients use to access your apps, see this {{site.data.keyword.cloud_notm}} blog post. {: important}
When you specify the enabled protocols for all hosts, the TLSv1.1 and TLSv1.2 parameters (supported since NGINX versions 1.1.13 and 1.0.12) work only when OpenSSL 1.0.1 or higher is used. The TLSv1.3 parameter (supported since NGINX version 1.13.0) works only when OpenSSL 1.1.1 built with TLSv1.3 support is used. {: note}
To edit the configmap to enable SSL protocols and ciphers:
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
    ```
    kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
    ```
    {: pre}

2. Add the SSL protocols and ciphers. Format ciphers according to the OpenSSL library cipher list format.
    ```yaml
    apiVersion: v1
    data:
      ssl-protocols: "TLSv1 TLSv1.1 TLSv1.2"
      ssl-ciphers: "HIGH:!aNULL:!MD5"
    kind: ConfigMap
    metadata:
      name: ibm-cloud-provider-ingress-cm
      namespace: kube-system
    ```
{: codeblock}
3. Save the configuration file.

4. Verify that the configmap changes were applied.
    ```
    kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
    ```
{: pre}
## Tuning Ingress performance
{: #perf_tuning}
To optimize performance of your Ingress ALBs, you can change the default settings according to your needs. {: shortdesc}
### Scaling up ALBs
{: #scale_albs}
When you create a standard cluster, one public and one private ALB are created in each zone where you have worker nodes. Each ALB can handle 32,768 connections per second. If you must process more than 32,768 connections per second, you can scale up your ALBs by creating more ALBs.
{: shortdesc}
For example, if you have worker nodes in `dal10`, a default public ALB exists in `dal10`. This default public ALB is deployed as two pods on two worker nodes in that zone. To handle more connections per second, you can create a second public ALB in `dal10`. This ALB is also deployed as two pods on two worker nodes in `dal10`. All public ALBs in your cluster share the same IBM-assigned Ingress subdomain, so the IP address of the new ALB is automatically added to your Ingress subdomain. You do not need to change your Ingress resource files.
You can also scale up your ALBs across more zones. When you create a multizone cluster, a default public ALB is created in each zone where you have worker nodes. However, default public ALBs are created in only up to 3 zones. If, for example, you later remove one of these original 3 zones and add workers in a different zone, a default public ALB is not created in that new zone. You can manually create an ALB to process connections in that new zone. {: tip}
1. In each zone where you have worker nodes, create an ALB.
    ```
    ibmcloud ks alb-create --cluster <cluster_name_or_ID> --type <public_or_private> --zone <zone> --vlan <VLAN_ID> [--user-ip <IP_address>]
    ```
{: pre}
2. Verify that the ALBs that you created in each zone have a **Status** of `enabled` and that an ALB IP address is assigned.
    ```
    ibmcloud ks albs --cluster <cluster_name_or_ID>
    ```
    {: pre}

    Example output for a cluster in which new public ALBs with the IDs `public-crdf253b6025d64944ab99ed63bb4567b6-alb3` and `public-crdf253b6025d64944ab99ed63bb4567b6-alb4` are created in `dal10` and `dal12`:
    ```
    ALB ID                                            Enabled   Status     Type      ALB IP          Zone    Build                          ALB VLAN ID
    private-crdf253b6025d64944ab99ed63bb4567b6-alb1   false     disabled   private   -               dal12   ingress:411/ingress-auth:315   2294021
    private-crdf253b6025d64944ab99ed63bb4567b6-alb2   false     disabled   private   -               dal10   ingress:411/ingress-auth:315   2234947
    public-crdf253b6025d64944ab99ed63bb4567b6-alb1    true      enabled    public    169.48.228.78   dal12   ingress:411/ingress-auth:315   2294019
    public-crdf253b6025d64944ab99ed63bb4567b6-alb2    true      enabled    public    169.46.17.6     dal10   ingress:411/ingress-auth:315   2234945
    public-crdf253b6025d64944ab99ed63bb4567b6-alb3    true      enabled    public    169.49.28.09    dal12   ingress:411/ingress-auth:315   2294019
    public-crdf253b6025d64944ab99ed63bb4567b6-alb4    true      enabled    public    169.50.35.62    dal10   ingress:411/ingress-auth:315   2234945
    ```
{: screen}
3. Optional, for public ALBs: Using the IPs of the new ALBs, verify that the new ALBs are listed under the IBM-provided Ingress subdomain for your cluster. You can find this subdomain by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID>`.
    ```
    nslookup <Ingress_subdomain>
    ```
    {: pre}

    Example output:
    ```
    Non-authoritative answer:
    Name:      mycluster.us-south.containers.appdomain.cloud
    Addresses: 169.48.228.78
               169.46.17.6
               169.49.28.09
               169.50.35.62
    ```
{: screen}
4. If you later decide to scale down your ALBs, you can disable an ALB. For example, you might want to disable an ALB to use less compute resources on your worker nodes. The ALB is disabled and does not route traffic in your cluster. You can re-enable an ALB at any time by running `ibmcloud ks alb-configure --albID <ALB_ID> --enable`.
    ```
    ibmcloud ks alb-configure --albID <ALB_ID> --disable
    ```
    {: pre}
### Enabling log buffering and flush timeouts
{: #access-log}

By default, the Ingress ALB logs each request as it arrives. If you have an environment that is heavily used, logging each request as it arrives can greatly increase disk I/O utilization. To avoid continuous disk I/O, you can enable log buffering and flush timeouts for the ALB by editing the `ibm-cloud-provider-ingress-cm` Ingress configmap. When buffering is enabled, instead of performing a separate write operation for each log entry, the ALB buffers a series of entries and writes them to the file together in a single operation.
{: shortdesc}
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
    ```
    kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
    ```
    {: pre}

2. Edit the configmap.
    1. Enable log buffering by adding the `access-log-buffering` field and setting it to `"true"`.
    2. Set the threshold for when the ALB writes buffer contents to the log.
        - Time interval: Add the `flush-interval` field and set it to how often the ALB writes to the log. For example, if the default value of `5m` is used, the ALB writes buffer contents to the log once every 5 minutes.
        - Buffer size: Add the `buffer-size` field and set it to how much log memory can be held in the buffer before the ALB writes the buffer contents to the log. For example, if the default value of `100KB` is used, the ALB writes buffer contents to the log every time the buffer reaches 100 KB of log content.
        - Time interval or buffer size: When both `flush-interval` and `buffer-size` are set, the ALB writes buffer contents to the log based on whichever threshold parameter is met first.

    ```yaml
    apiVersion: v1
    kind: ConfigMap
    data:
      access-log-buffering: "true"
      flush-interval: "5m"
      buffer-size: "100KB"
    metadata:
      name: ibm-cloud-provider-ingress-cm
      ...
    ```
{: codeblock}
3. Save the configuration file.

4. Verify that your ALB is configured with the access log changes.
    ```
    kubectl logs -n kube-system <ALB_ID> -c nginx-ingress
    ```
{: pre}
### Changing the number or duration of keepalive connections
{: #keepalive_time}
Keepalive connections can have a major impact on performance by reducing the CPU and network usage that is needed to open and close connections. To optimize the performance of your ALBs, you can change the maximum number of keepalive connections between the ALB and the client and how long the keepalive connections can last. {: shortdesc}
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
   kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
   {: pre}
2. Change the values of `keep-alive-requests` and `keep-alive`.
   - `keep-alive-requests`: The number of keepalive client connections that can stay open to the Ingress ALB. The default is `4096`.
   - `keep-alive`: The timeout, in seconds, during which the keepalive client connection stays open to the Ingress ALB. The default is `8s`.

   apiVersion: v1
   data:
     keep-alive-requests: "4096"
     keep-alive: "8s"
   kind: ConfigMap
   metadata:
     name: ibm-cloud-provider-ingress-cm
     namespace: kube-system
   {: codeblock}
3. Save the configuration file.
4. Verify that the configmap changes were applied.
   kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
   {: pre}
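To confirm how the values are rendered into the ALB's NGINX configuration, you can check the generated configuration inside an ALB pod. This sketch assumes the standard NGINX configuration path of `/etc/nginx/nginx.conf`; the configmap fields correspond to the NGINX `keepalive_timeout` and `keepalive_requests` directives.

kubectl exec -n kube-system <ALB_pod> -c nginx-ingress -- grep -E 'keepalive_(timeout|requests)' /etc/nginx/nginx.conf
{: pre}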
## Changing the number of pending connections in the backlog
{: #backlog}
You can decrease the default backlog setting for how many pending connections can wait in the server queue. {: shortdesc}
In the `ibm-cloud-provider-ingress-cm` Ingress configmap, the `backlog` field sets the maximum number of pending connections that can wait in the server queue. By default, `backlog` is set to `32768`. You can override the default by editing the Ingress configmap.
1. Edit the configuration file for the `ibm-cloud-provider-ingress-cm` configmap resource.
   kubectl edit cm ibm-cloud-provider-ingress-cm -n kube-system
   {: pre}
2. Change the value of `backlog` from `32768` to a lower value. The value must be equal to or less than `32768`.

   apiVersion: v1
   data:
     backlog: "32768"
   kind: ConfigMap
   metadata:
     name: ibm-cloud-provider-ingress-cm
     namespace: kube-system
   {: codeblock}
3. Save the configuration file.
4. Verify that the configmap changes were applied.
   kubectl get cm ibm-cloud-provider-ingress-cm -n kube-system -o yaml
   {: pre}
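As with the other configmap fields, you can also set `backlog` non-interactively. In this sketch, `16384` is an arbitrary example value; any value equal to or less than `32768` works.

kubectl patch cm ibm-cloud-provider-ingress-cm -n kube-system --type merge -p '{"data":{"backlog":"16384"}}'
{: pre}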
## Tuning kernel performance
{: #ingress_kernel}
To optimize performance of your Ingress ALBs, you can also change the Linux kernel `sysctl` parameters on worker nodes. Worker nodes are automatically provisioned with optimized kernel tuning, so change these settings only if you have specific performance optimization requirements.
{: shortdesc}
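For example, to review two connection-related parameters that are commonly involved in this kind of tuning, you can run `sysctl` in a shell on the worker node. The parameter names here are illustrative only, not tuning recommendations.

sysctl net.core.somaxconn net.ipv4.tcp_max_syn_backlog
{: pre}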
## Moving ALBs across VLANs
{: #migrate-alb-vlan}
When you change your worker node VLAN connections, the worker nodes are connected to the new VLAN and assigned new public or private IP addresses. However, ALBs cannot automatically migrate to the new VLAN because they are assigned a stable, portable public or private IP address from a subnet that belongs to the old VLAN. When your worker nodes and ALBs are connected to different VLANs, the ALBs cannot forward incoming network traffic to the app pods on your worker nodes. To move your ALBs to a different VLAN, you must create an ALB on the new VLAN and disable the ALB on the old VLAN. {: shortdesc}
Note that all public ALBs in your cluster share the same IBM-assigned Ingress subdomain. When you create new ALBs, you do not need to change your Ingress resource files.
1. Get the new public or private VLAN that you changed your worker node connections to in each zone.
   1. List the details for a worker in a zone.
      ibmcloud ks worker-get --cluster <cluster_name_or_ID> --worker <worker_id>
      {: pre}
   2. In the output, note the ID for the public or the private VLAN.
      - To create public ALBs, note the public VLAN ID.
      - To create private ALBs, note the private VLAN ID.
   3. Repeat the previous two steps for a worker in each zone so that you have the IDs for the new public or private VLAN in each zone.
2. In each zone, create an ALB on the new VLAN.
   ibmcloud ks alb-create --cluster <cluster_name_or_ID> --type <public_or_private> --zone <zone> --vlan <VLAN_ID> [--user-ip <IP_address>]
   {: pre}
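   For example, to create the two public ALBs on the new VLANs that appear in the sample output in the next step, assuming your cluster is named `mycluster`:
   ibmcloud ks alb-create --cluster mycluster --type public --zone dal12 --vlan 2294030
   {: pre}
   ibmcloud ks alb-create --cluster mycluster --type public --zone dal10 --vlan 2234940
   {: pre}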
3. Verify that the ALBs that you created on the new VLANs in each zone have a Status of `enabled` and that an ALB IP address is assigned.
   ibmcloud ks albs --cluster <cluster_name_or_ID>
   {: pre}
   Example output for a cluster in which new public ALBs are created on VLAN `2294030` in `dal12` and `2234940` in `dal10`:
   ALB ID                                            Enabled   Status     Type      ALB IP          Zone    Build                          ALB VLAN ID
   private-crdf253b6025d64944ab99ed63bb4567b6-alb1   false     disabled   private   -               dal12   ingress:411/ingress-auth:315   2294021
   private-crdf253b6025d64944ab99ed63bb4567b6-alb2   false     disabled   private   -               dal10   ingress:411/ingress-auth:315   2234947
   public-crdf253b6025d64944ab99ed63bb4567b6-alb1    true      enabled    public    169.48.228.78   dal12   ingress:411/ingress-auth:315   2294019
   public-crdf253b6025d64944ab99ed63bb4567b6-alb2    true      enabled    public    169.46.17.6     dal10   ingress:411/ingress-auth:315   2234945
   public-crdf253b6025d64944ab99ed63bb4567b6-alb3    true      enabled    public    169.49.28.09    dal12   ingress:411/ingress-auth:315   2294030
   public-crdf253b6025d64944ab99ed63bb4567b6-alb4    true      enabled    public    169.50.35.62    dal10   ingress:411/ingress-auth:315   2234940
   {: screen}
4. Disable each ALB that is connected to the old VLANs.
   ibmcloud ks alb-configure --albID <old_ALB_ID> --disable
   {: pre}
5. Verify that each ALB that is connected to the old VLANs has a Status of `disabled`. Only the ALBs that are connected to the new VLANs receive incoming network traffic and communicate with your app pods.
   ibmcloud ks albs --cluster <cluster_name_or_ID>
   {: pre}
   Example output for a cluster in which the default public ALBs on VLAN `2294019` in `dal12` and `2234945` in `dal10` are disabled:
   ALB ID                                            Enabled   Status     Type      ALB IP          Zone    Build                          ALB VLAN ID
   private-crdf253b6025d64944ab99ed63bb4567b6-alb1   false     disabled   private   -               dal12   ingress:411/ingress-auth:315   2294021
   private-crdf253b6025d64944ab99ed63bb4567b6-alb2   false     disabled   private   -               dal10   ingress:411/ingress-auth:315   2234947
   public-crdf253b6025d64944ab99ed63bb4567b6-alb1    false     disabled   public    169.48.228.78   dal12   ingress:411/ingress-auth:315   2294019
   public-crdf253b6025d64944ab99ed63bb4567b6-alb2    false     disabled   public    169.46.17.6     dal10   ingress:411/ingress-auth:315   2234945
   public-crdf253b6025d64944ab99ed63bb4567b6-alb3    true      enabled    public    169.49.28.09    dal12   ingress:411/ingress-auth:315   2294030
   public-crdf253b6025d64944ab99ed63bb4567b6-alb4    true      enabled    public    169.50.35.62    dal10   ingress:411/ingress-auth:315   2234940
   {: screen}
6. Optional for public ALBs: Verify that the IPs of the new ALBs are listed under the IBM-provided Ingress subdomain for your cluster. You can find this subdomain by running `ibmcloud ks cluster-get --cluster <cluster_name_or_ID>`.
   nslookup <Ingress_subdomain>
   {: pre}
   Example output:
   Non-authoritative answer:
   Name:      mycluster.us-south.containers.appdomain.cloud
   Addresses: 169.49.28.09
              169.50.35.62
   {: screen}
## Bringing your own Ingress controller
{: #user_managed}
Bring your own Ingress controller to run on {{site.data.keyword.cloud_notm}} and leverage an IBM-provided host name and TLS certificate. {: shortdesc}
The IBM-provided Ingress application load balancers (ALBs) are based on NGINX controllers that you can configure by using custom {{site.data.keyword.cloud_notm}} annotations. Depending on what your app requires, you might want to configure your own custom Ingress controller. When you bring your own Ingress controller instead of using the IBM-provided Ingress ALB, you are responsible for supplying the controller image, maintaining the controller, updating the controller, and applying any security-related updates to keep your Ingress controller free from vulnerabilities. Note: Bringing your own Ingress controller is supported only for providing public external access to your apps and is not supported for providing private external access.
You have two options for bringing your own Ingress controller:
- Create a network load balancer (NLB) to expose your custom Ingress controller deployment, and then create a host name for the NLB IP address. {{site.data.keyword.cloud_notm}} provides the host name and takes care of generating and maintaining a wildcard SSL certificate for the host name for you. For more information about IBM-provided NLB DNS host names, see Registering an NLB host name.
- Disable the IBM-provided ALB deployment and use the load balancer service that exposed the ALB and the DNS registration for the IBM-provided Ingress subdomain. This option permits you to leverage the Ingress subdomain and TLS certificate that are already assigned to your cluster.
### Creating an NLB and host name to expose your custom Ingress controller
{: #user_managed_nlb}
Create a network load balancer (NLB) to expose your custom Ingress controller deployment, and then create a host name for the NLB IP address. {: shortdesc}
1. Get the configuration file for your Ingress controller ready. For example, you can use the cloud-generic NGINX community Ingress controller. If you use the community controller, edit the `kustomization.yaml` file by following these steps; a sketch of the edited fields follows.
   1. Replace `namespace: ingress-nginx` with `namespace: kube-system`.
   2. In the `commonLabels` section, replace the `app.kubernetes.io/name: ingress-nginx` and `app.kubernetes.io/part-of: ingress-nginx` labels with one `app: ingress-nginx` label.
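   The relevant fields of the edited `kustomization.yaml` might then look like the following sketch; the rest of the upstream file is unchanged.

   namespace: kube-system
   commonLabels:
     app: ingress-nginx
   {: codeblock}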
Deploy your own Ingress controller. For example, to use the cloud-generic NGINX community Ingress controller, run the following command.
kubectl apply --kustomize . -n kube-system
{: pre}
3. Define a load balancer to expose your custom Ingress deployment. The service is created in the `kube-system` namespace so that it matches the commands in the following steps.

   apiVersion: v1
   kind: Service
   metadata:
     name: my-lb-svc
     namespace: kube-system
   spec:
     type: LoadBalancer
     selector:
       app: ingress-nginx
     ports:
     - protocol: TCP
       port: 8080
     externalTrafficPolicy: Local
   {: codeblock}
4. Create the service in your cluster.
   kubectl apply -f my-lb-svc.yaml
   {: pre}
5. Get the EXTERNAL-IP address for the load balancer.
   kubectl get svc my-lb-svc -n kube-system
   {: pre}
   In the following example output, the EXTERNAL-IP is `168.1.1.1`.
   NAME        TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
   my-lb-svc   LoadBalancer   172.21.xxx.xxx   168.1.1.1     80:30104/TCP   2m
   {: screen}
6. Register the load balancer IP address by creating a DNS host name.
   ibmcloud ks nlb-dns-create --cluster <cluster_name_or_id> --ip <LB_IP>
   {: pre}
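   For example, using the sample EXTERNAL-IP from step 5 and a cluster that is named `mycluster`:
   ibmcloud ks nlb-dns-create --cluster mycluster --ip 168.1.1.1
   {: pre}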
7. Verify that the host name is created.
   ibmcloud ks nlb-dnss --cluster <cluster_name_or_id>
   {: pre}
   Example output:
   Hostname                                                                              IP(s)           Health Monitor   SSL Cert Status   SSL Cert Secret Name
   mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud   ["168.1.1.1"]   None             created           <certificate>
   {: screen}
8. Optional: Enable health checks on the host name by creating a health monitor.
9. Deploy any other resources that are required by your custom Ingress controller, such as the configmap.
10. Create Ingress resources for your apps. You can use the Kubernetes documentation to create an Ingress resource file and use annotations.
    If you continue to use IBM-provided ALBs concurrently with your custom Ingress controller in one cluster, you can create separate Ingress resources for your ALBs and custom controller. In the [Ingress resource that you create to apply to the IBM ALBs only](/docs/containers?topic=containers-ingress#ingress_expose_public), add the annotation `kubernetes.io/ingress.class: "iks-nginx"`.
11. Access your app by using the load balancer host name that you found in step 7 and the path that your app listens on, as specified in your Ingress resource file.
    https://<load_balancer_host_name>/<app_path>
    {: codeblock}
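    For example, with `curl`, using the sample host name from step 7 (substitute your own host name and app path):
    curl -i https://mycluster-a1b2cdef345678g9hi012j3kl4567890-0001.us-south.containers.appdomain.cloud/<app_path>
    {: pre}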
### Using the IBM-provided Ingress subdomain with your custom Ingress controller
{: #user_managed_alb}
Disable the IBM-provided ALB deployment and use the load balancer service that exposed the ALB and the DNS registration for the IBM-provided Ingress subdomain. {: shortdesc}
1. Get the ID of the default public ALB. The public ALB ID has a format similar to `public-cr18e61e63c6e94b658596ca93d087eed9-alb1`.
   kubectl get svc -n kube-system | grep alb
   {: pre}
2. Disable the default public ALB. The `--disable-deployment` flag disables the IBM-provided ALB deployment, but doesn't remove the DNS registration for the IBM-provided Ingress subdomain or the load balancer service that is used to expose the Ingress controller.
   ibmcloud ks alb-configure --albID <ALB_ID> --disable-deployment
   {: pre}
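   For example, using the sample ALB ID from step 1:
   ibmcloud ks alb-configure --albID public-cr18e61e63c6e94b658596ca93d087eed9-alb1 --disable-deployment
   {: pre}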
3. Get the configuration file for your Ingress controller ready. For example, you can use the cloud-generic NGINX community Ingress controller. If you use the community controller, edit the `kustomization.yaml` file by following these steps.
   1. Replace `namespace: ingress-nginx` with `namespace: kube-system`.
   2. In the `commonLabels` section, replace the `app.kubernetes.io/name: ingress-nginx` and `app.kubernetes.io/part-of: ingress-nginx` labels with one `app: ingress-nginx` label.
   3. In the `SERVICE_NAME` variable, replace `name: ingress-nginx` with `name: <ALB_ID>`. For example, the ALB ID from step 1 might look like `name: public-cr18e61e63c6e94b658596ca93d087eed9-alb1`.
Deploy your own Ingress controller. For example, to use the cloud-generic NGINX community Ingress controller, run the following command. Important: To continue to use the load balancer service exposing the controller and the IBM-provided Ingress subdomain, your controller must be deployed in the
kube-system
namespace.kubectl apply --kustomize . -n kube-system
{: pre}
5. Get the label on your custom Ingress deployment.
   kubectl get deploy <ingress-controller-name> -n kube-system --show-labels
   {: pre}
   In the following example output, the label value is `ingress-nginx`:
   NAME                       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE   LABELS
   nginx-ingress-controller   1         1         1            1           1m    app=ingress-nginx
   {: screen}
6. Using the ALB ID that you got in step 1, open the load balancer service that exposes the IBM ALB.
   kubectl edit svc <ALB_ID> -n kube-system
   {: pre}
7. Update the load balancer service to point to your custom Ingress deployment. Under `spec/selector`, remove the ALB ID from the `app` label and add the label for your own Ingress controller that you got in step 5.

   apiVersion: v1
   kind: Service
   metadata:
     ...
   spec:
     clusterIP: 172.21.xxx.xxx
     externalTrafficPolicy: Cluster
     loadBalancerIP: 169.xx.xxx.xxx
     ports:
     - name: http
       nodePort: 31070
       port: 80
       protocol: TCP
       targetPort: 80
     - name: https
       nodePort: 31854
       port: 443
       protocol: TCP
       targetPort: 443
     selector:
       app: <custom_controller_label>
     ...
   {: codeblock}
   - Optional: By default, the load balancer service allows traffic on ports 80 and 443. If your custom Ingress controller requires a different set of ports, add those ports to the `ports` section.
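   If you prefer to script this change instead of using the interactive editor, the selector swap can also be done with `kubectl patch`. This is a sketch; substitute the ALB ID from step 1 and the label value from step 5.
   kubectl patch svc <ALB_ID> -n kube-system --type merge -p '{"spec":{"selector":{"app":"<custom_controller_label>"}}}'
   {: pre}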
8. Save and close the configuration file. The output is similar to the following:
   service "public-cr18e61e63c6e94b658596ca93d087eed9-alb1" edited
   {: screen}
9. Verify that the ALB `Selector` now points to your controller.
   kubectl describe svc <ALB_ID> -n kube-system
   {: pre}
   Example output:
   Name:                     public-cre58bff97659a4f41bc927362d5a8ee7a-alb1
   Namespace:                kube-system
   Labels:                   app=public-cre58bff97659a4f41bc927362d5a8ee7a-alb1
   Annotations:              service.kubernetes.io/ibm-ingress-controller-public=169.xx.xxx.xxx
                             service.kubernetes.io/ibm-load-balancer-cloud-provider-zone=wdc07
   Selector:                 app=ingress-nginx
   Type:                     LoadBalancer
   IP:                       172.21.xxx.xxx
   IP:                       169.xx.xxx.xxx
   LoadBalancer Ingress:     169.xx.xxx.xxx
   Port:                     port-443  443/TCP
   TargetPort:               443/TCP
   NodePort:                 port-443  30087/TCP
   Endpoints:                172.30.xxx.xxx:443
   Port:                     port-80  80/TCP
   TargetPort:               80/TCP
   NodePort:                 port-80  31865/TCP
   Endpoints:                172.30.xxx.xxx:80
   Session Affinity:         None
   External Traffic Policy:  Cluster
   Events:                   <none>
   {: screen}
10. Deploy any other resources that are required by your custom Ingress controller, such as the configmap.
11. If you have a multizone cluster, repeat these steps for each ALB.
12. If you have a multizone cluster, you must configure a health check. The Cloudflare DNS health check endpoint, `albhealth.<clustername>.<region>.containers.appdomain.cloud`, expects a `200` response with a body of `healthy`. If no health check is set up to return `200` and `healthy`, the health check removes the ALB IP addresses from the DNS pool. You can either edit the existing health check resource or create your own.
    - To edit the existing health check resource:
      1. Open the `alb-health` resource.
         kubectl edit ingress alb-health --namespace kube-system
         {: pre}
      2. In the `metadata.annotations` section, change the `ingress.bluemix.net/server-snippets` annotation name to the annotation that your controller supports. For example, you might use the `nginx.ingress.kubernetes.io/server-snippet` annotation. Do not change the content of the server snippet.
      3. Save and close the file. Your changes are automatically applied.
    - To create your own health check resource, ensure that the following snippet is returned to Cloudflare:
      { return 200 'healthy'; add_header Content-Type text/plain; }
      {: codeblock}
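      To spot-check the response yourself, you can query the health check endpoint directly and confirm that it returns `200` with a body of `healthy`:
      curl -i http://albhealth.<clustername>.<region>.containers.appdomain.cloud
      {: pre}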
13. Create Ingress resources for your apps by following the steps in Exposing apps that are inside your cluster to the public.
Your apps are now exposed by your custom Ingress controller. To restore the IBM-provided ALB deployment, re-enable the ALB. The ALB is redeployed, and the load balancer service is automatically reconfigured to point to the ALB.
ibmcloud ks alb-configure --albID <ALB_ID> --enable
{: pre}