[ko] Translate docs/reference/glossary/contributor.md #116

Open · wants to merge 2 commits into base: dev-1.13-ko.1
41 changes: 20 additions & 21 deletions config.toml
@@ -63,10 +63,10 @@ time_format_blog = "Monday, January 02, 2006"
description = "Production-Grade Container Orchestration"
showedit = true

latest = "v1.12"
latest = "v1.13"

fullversion = "v1.12.0"
version = "v1.12"
fullversion = "v1.13.0"
version = "v1.13"
githubbranch = "master"
docsbranch = "master"
deprecated = false
@@ -76,10 +76,10 @@ githubWebsiteRepo = "github.com/kubernetes/website"
githubWebsiteRaw = "raw.githubusercontent.com/kubernetes/website"

[[params.versions]]
fullversion = "v1.12.0"
version = "v1.12"
githubbranch = "v1.12.0"
docsbranch = "release-1.12"
fullversion = "v1.13.0"
version = "v1.13"
githubbranch = "v1.13.0"
docsbranch = "release-1.13"
url = "https://kubernetes.io"

[params.pushAssets]
@@ -94,34 +94,33 @@ js = [
]

[[params.versions]]
fullversion = "v1.11.3"
fullversion = "v1.12.3"
version = "v1.12"
githubbranch = "v1.12.3"
docsbranch = "release-1.12"
url = "https://v1-12.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.11.5"
version = "v1.11"
githubbranch = "v1.11.3"
githubbranch = "v1.11.5"
docsbranch = "release-1.11"
url = "https://v1-11.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.10.3"
fullversion = "v1.10.11"
version = "v1.10"
githubbranch = "v1.10.3"
githubbranch = "v1.10.11"
docsbranch = "release-1.10"
url = "https://v1-10.docs.kubernetes.io"

[[params.versions]]
fullversion = "v1.9.7"
fullversion = "v1.9.11"
version = "v1.9"
githubbranch = "v1.9.7"
githubbranch = "v1.9.11"
docsbranch = "release-1.9"
url = "https://v1-9.docs.kubernetes.io"

-[[params.versions]]
-fullversion = "v1.8.4"
-version = "v1.8"
-githubbranch = "v1.8.4"
-docsbranch = "release-1.8"
-url = "https://v1-8.docs.kubernetes.io"
-

# Language definitions.

[languages]
21 changes: 18 additions & 3 deletions content/en/docs/concepts/architecture/nodes.md
@@ -158,6 +158,20 @@ to be unreachable. (The default timeouts are 40s to start reporting
ConditionUnknown and 5m after that to start evicting pods.) The node controller
checks the state of each node every `--node-monitor-period` seconds.

+In versions of Kubernetes prior to 1.13, NodeStatus is the heartbeat from the
+node. Starting from Kubernetes 1.13, the node lease feature is introduced as an
+alpha feature (feature gate `NodeLease`,
+[KEP-0009](https://github.com/kubernetes/community/blob/master/keps/sig-node/0009-node-heartbeat.md)).
+When the node lease feature is enabled, each node has an associated `Lease`
+object in the `kube-node-lease` namespace that is renewed by the node
+periodically, and both NodeStatus and the node lease are treated as heartbeats
+from the node. Node leases are renewed frequently, while NodeStatus is reported
+from node to master only when there is some change or enough time has passed
+(the default is 1 minute, which is longer than the default timeout of 40
+seconds for unreachable nodes). Since a node lease is much more lightweight
+than NodeStatus, this feature makes node heartbeats significantly cheaper from
+both scalability and performance perspectives.

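For illustration, a node's heartbeat `Lease` object looks roughly like the following sketch (the node name and timestamp are examples; `coordination.k8s.io/v1beta1` is the API group/version used while the feature is alpha):

```yaml
apiVersion: coordination.k8s.io/v1beta1
kind: Lease
metadata:
  name: node-1                # matches the Node's name
  namespace: kube-node-lease
spec:
  holderIdentity: node-1      # identity of the kubelet renewing this lease
  leaseDurationSeconds: 40
  renewTime: "2018-12-10T09:50:14.000000Z"   # bumped on every renewal
```
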
In Kubernetes 1.4, we updated the logic of the node controller to better handle
cases when a large number of nodes have problems with reaching the master
(e.g. because the master has networking problem). Starting with 1.4, the node
@@ -212,11 +226,12 @@ For self-registration, the kubelet is started with the following options:
- `--register-node` - Automatically register with the API server.
- `--register-with-taints` - Register the node with the given list of taints (comma separated `<key>=<value>:<effect>`). No-op if `register-node` is false.
- `--node-ip` - IP address of the node.
-- `--node-labels` - Labels to add when registering the node in the cluster.
+- `--node-labels` - Labels to add when registering the node in the cluster (see label restrictions enforced by the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) in 1.13+).
- `--node-status-update-frequency` - Specifies how often kubelet posts node status to master.

-Currently, any kubelet is authorized to create/modify any node resource, but in practice it only creates/modifies
-its own. (In the future, we plan to only allow a kubelet to modify its own node resource.)
+When the [Node authorization mode](/docs/reference/access-authn-authz/node/) and
+[NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction) are enabled,
+kubelets are only authorized to create/modify their own Node resource.

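A minimal sketch of a self-registering kubelet invocation using these options (all flag values are illustrative, not defaults):

```shell
kubelet --register-node=true \
  --register-with-taints="dedicated=gpu:NoSchedule" \
  --node-ip=10.0.0.5 \
  --node-labels="gpu=true,storage=ssd" \
  --node-status-update-frequency=10s
```
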
#### Manual Node Administration

36 changes: 19 additions & 17 deletions content/en/docs/concepts/cluster-administration/cloud-providers.md
@@ -17,30 +17,32 @@ kubeadm has configuration options to specify configuration information for cloud
in-tree cloud provider can be configured using kubeadm as shown below:

```yaml
-apiVersion: kubeadm.k8s.io/v1alpha3
+apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
nodeRegistration:
  kubeletExtraArgs:
    cloud-provider: "openstack"
    cloud-config: "/etc/kubernetes/cloud.conf"
---
+apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
-apiVersion: kubeadm.k8s.io/v1alpha3
-kubernetesVersion: v1.12.0
-apiServerExtraArgs:
-  cloud-provider: "openstack"
-  cloud-config: "/etc/kubernetes/cloud.conf"
-apiServerExtraVolumes:
-- name: cloud
-  hostPath: "/etc/kubernetes/cloud.conf"
-  mountPath: "/etc/kubernetes/cloud.conf"
-controllerManagerExtraArgs:
-  cloud-provider: "openstack"
-  cloud-config: "/etc/kubernetes/cloud.conf"
-controllerManagerExtraVolumes:
-- name: cloud
-  hostPath: "/etc/kubernetes/cloud.conf"
-  mountPath: "/etc/kubernetes/cloud.conf"
+kubernetesVersion: v1.13.0
+apiServer:
+  extraArgs:
+    cloud-provider: "openstack"
+    cloud-config: "/etc/kubernetes/cloud.conf"
+  extraVolumes:
+  - name: cloud
+    hostPath: "/etc/kubernetes/cloud.conf"
+    mountPath: "/etc/kubernetes/cloud.conf"
+controllerManager:
+  extraArgs:
+    cloud-provider: "openstack"
+    cloud-config: "/etc/kubernetes/cloud.conf"
+  extraVolumes:
+  - name: cloud
+    hostPath: "/etc/kubernetes/cloud.conf"
+    mountPath: "/etc/kubernetes/cloud.conf"
```

The in-tree cloud providers typically need both `--cloud-provider` and `--cloud-config` specified in the command lines
15 changes: 15 additions & 0 deletions content/en/docs/concepts/configuration/assign-pod-node.md
@@ -92,6 +92,21 @@ For example, the value of `kubernetes.io/hostname` may be the same as the Node n
and a different value in other environments.
{{< /note >}}

+## Node isolation/restriction
+
+Adding labels to Node objects allows targeting pods to specific nodes or groups of nodes.
+This can be used to ensure specific pods only run on nodes with certain isolation, security, or regulatory properties.
+When using labels for this purpose, choosing label keys that cannot be modified by the kubelet process on the node is strongly recommended.
+This prevents a compromised node from using its kubelet credential to set those labels on its own Node object
+and influencing the scheduler to schedule workloads onto the compromised node.
+
+The `NodeRestriction` admission plugin prevents kubelets from setting or modifying labels with a `node-restriction.kubernetes.io/` prefix.
+To make use of that label prefix for node isolation:
+
+1. Ensure you are using the [Node authorizer](/docs/reference/access-authn-authz/node/) and have enabled the [NodeRestriction admission plugin](/docs/reference/access-authn-authz/admission-controllers/#noderestriction).
+2. Add labels under the `node-restriction.kubernetes.io/` prefix to your Node objects, and use those labels in your node selectors.
+   For example, `example.com.node-restriction.kubernetes.io/fips=true` or `example.com.node-restriction.kubernetes.io/pci-dss=true`.

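A minimal sketch of step 2, assuming a node named `node-1` (the label is applied with administrator credentials, since kubelets cannot set `node-restriction.kubernetes.io/` labels themselves):

```shell
kubectl label node node-1 example.com.node-restriction.kubernetes.io/fips=true
```

A hypothetical pod that must run on FIPS-enabled nodes can then target that label through its node selector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: fips-app   # hypothetical name
spec:
  nodeSelector:
    example.com.node-restriction.kubernetes.io/fips: "true"
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1   # placeholder image
```
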
## Affinity and anti-affinity

`nodeSelector` provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity
14 changes: 6 additions & 8 deletions content/en/docs/concepts/configuration/taint-and-toleration.md
Original file line number Diff line number Diff line change
@@ -223,9 +223,7 @@ certain condition is true. The following taints are built in:
as unusable. After a controller from the cloud-controller-manager initializes
this node, the kubelet removes this taint.

-When the `TaintBasedEvictions` alpha feature is enabled (you can do this by
-including `TaintBasedEvictions=true` in `--feature-gates` for Kubernetes controller manager,
-such as `--feature-gates=FooBar=true,TaintBasedEvictions=true`), the taints are automatically
+In version 1.13, the `TaintBasedEvictions` feature is promoted to beta and enabled
+by default; hence these taints are automatically
added by the NodeController (or kubelet) and the normal logic for evicting pods from nodes
based on the Ready NodeCondition is disabled.

@@ -236,7 +234,7 @@ in a rate-limited way. This prevents massive pod evictions in scenarios such
as the master becoming partitioned from the nodes.
{{< /note >}}

-This alpha feature, in combination with `tolerationSeconds`, allows a pod
+This beta feature, in combination with `tolerationSeconds`, allows a pod
to specify how long it should stay bound to a node that has one or both of these problems.

For example, an application with a lot of local state might want to stay
@@ -246,7 +244,7 @@ The toleration the pod would use in that case would look like

```yaml
tolerations:
- key: "node.alpha.kubernetes.io/unreachable"
- key: "node.kubernetes.io/unreachable"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
@@ -257,9 +255,9 @@ Note that Kubernetes automatically adds a toleration for
unless the pod configuration provided
by the user already has a toleration for `node.kubernetes.io/not-ready`.
Likewise it adds a toleration for
-`node.alpha.kubernetes.io/unreachable` with `tolerationSeconds=300`
+`node.kubernetes.io/unreachable` with `tolerationSeconds=300`
unless the pod configuration provided
-by the user already has a toleration for `node.alpha.kubernetes.io/unreachable`.
+by the user already has a toleration for `node.kubernetes.io/unreachable`.

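For reference, these automatically added tolerations appear in the pod spec as follows (a sketch reflecting the defaults described above):

```yaml
tolerations:
- key: "node.kubernetes.io/not-ready"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
- key: "node.kubernetes.io/unreachable"
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 300
```
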
These automatically-added tolerations ensure that
the default pod behavior of remaining bound for 5 minutes after one of these
@@ -270,7 +268,7 @@ admission controller](https://git.k8s.io/kubernetes/plugin/pkg/admission/default
[DaemonSet](/docs/concepts/workloads/controllers/daemonset/) pods are created with
`NoExecute` tolerations for the following taints with no `tolerationSeconds`:

-* `node.alpha.kubernetes.io/unreachable`
+* `node.kubernetes.io/unreachable`
* `node.kubernetes.io/not-ready`

This ensures that DaemonSet pods are never evicted due to these problems,
@@ -136,6 +136,36 @@ a Kubernetes release with a newer device plugin API version, upgrade your device
to support both versions before upgrading these nodes to
ensure the continuous functioning of the device allocations during the upgrade.

+## Monitoring Device Plugin Resources
+
+In order to monitor resources provided by device plugins, monitoring agents need to be able to
+discover the set of devices that are in-use on the node and obtain metadata to describe which
+container the metric should be associated with. Prometheus metrics exposed by device monitoring
+agents should follow the
+[Kubernetes Instrumentation Guidelines](https://github.com/kubernetes/community/blob/master/contributors/devel/instrumentation.md),
+which require identifying containers using `pod`, `namespace`, and `container` Prometheus labels.
+The kubelet provides a gRPC service to enable discovery of in-use devices, and to provide metadata
+for these devices:
+
+```gRPC
+// PodResources is a service provided by the kubelet that provides information about the
+// node resources consumed by pods and containers on the node
+service PodResources {
+    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
+}
+```
+
+The gRPC service is served over a unix socket at `/var/lib/kubelet/pod-resources/kubelet.sock`.
+Monitoring agents for device plugin resources can be deployed as a daemon, or as a DaemonSet.
+The canonical directory `/var/lib/kubelet/pod-resources` requires privileged access, so monitoring
+agents must run in a privileged security context. If a device monitoring agent is running as a
+DaemonSet, `/var/lib/kubelet/pod-resources` must be mounted as a
+[Volume](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#volume-v1-core)
+in the plugin's
+[PodSpec](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/#podspec-v1-core).
+
+Support for the "PodResources service" is still in alpha.

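A sketch of deploying such an agent as a DaemonSet (the agent name and image are hypothetical; the essential parts are the privileged security context and the `hostPath` mount of the pod-resources directory):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: device-monitoring-agent   # hypothetical agent
spec:
  selector:
    matchLabels:
      name: device-monitoring-agent
  template:
    metadata:
      labels:
        name: device-monitoring-agent
    spec:
      containers:
      - name: agent
        image: example.com/device-monitoring-agent:0.1   # hypothetical image
        securityContext:
          privileged: true   # required to reach the kubelet's pod-resources socket
        volumeMounts:
        - name: pod-resources
          mountPath: /var/lib/kubelet/pod-resources
      volumes:
      - name: pod-resources
        hostPath:
          path: /var/lib/kubelet/pod-resources
```
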
## Examples

For examples of device plugin implementations, see:
@@ -9,7 +9,8 @@ Kubernetes objects can be created, updated, and deleted by storing multiple
object configuration files in a directory and using `kubectl apply` to
recursively create and update those objects as needed. This method
retains writes made to live objects without merging the changes
-back into the object configuration files.
+back into the object configuration files. `kubectl diff` also gives you a
+preview of what changes `apply` will make.
{{% /capture %}}

{{% capture body %}}
@@ -67,6 +68,14 @@ Here's an example of an object configuration file:

{{< codenew file="application/simple_deployment.yaml" >}}

+Run `kubectl diff` to print the object that will be created:
+```shell
+kubectl diff -f https://k8s.io/examples/application/simple_deployment.yaml
+```
+{{< note >}}
+**Note:** `diff` uses [server-side dry-run](/docs/reference/using-api/api-concepts/#dry-run), which needs to be enabled on `kube-apiserver`.
+{{< /note >}}

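If server-side dry-run is disabled in your cluster, `kubectl diff` will fail; a sketch of enabling it via the `DryRun` feature gate (flag placement depends on how `kube-apiserver` is launched, e.g. a static pod manifest or systemd unit):

```shell
# Sketch: add the DryRun feature gate to the existing kube-apiserver flags
kube-apiserver --feature-gates=DryRun=true
```
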
Create the object using `kubectl apply`:

```shell
@@ -130,6 +139,7 @@ if those objects already exist. This approach accomplishes the following:
2. Clears fields removed from the configuration file in the live configuration.

```shell
+kubectl diff -f <directory>/
kubectl apply -f <directory>/
```

@@ -262,6 +272,7 @@ Update the `simple_deployment.yaml` configuration file to change the image from
Apply the changes made to the configuration file:

```shell
+kubectl diff -f https://k8s.io/examples/application/update_deployment.yaml
kubectl apply -f https://k8s.io/examples/application/update_deployment.yaml
```

@@ -977,5 +988,3 @@ template:
- [Kubectl Command Reference](/docs/reference/generated/kubectl/kubectl/)
- [Kubernetes API Reference](/docs/reference/generated/kubernetes-api/{{< param "version" >}}/)
{{% /capture %}}


@@ -144,16 +144,19 @@ API operation to replace the entire object configuration.

### Examples

-Process all object configuration files in the `configs` directory, and
-create or patch the live objects:
+Process all object configuration files in the `configs` directory, and create or
+patch the live objects. You can first `diff` to see what changes are going to be
+made, and then apply:

```sh
+kubectl diff -f configs/
kubectl apply -f configs/
```

Recursively process directories:

```sh
+kubectl diff -R -f configs/
kubectl apply -R -f configs/
```

@@ -181,5 +184,3 @@ Disadvantages compared to imperative object configuration:
{{< comment >}}
{{< /comment >}}
{{% /capture %}}


13 changes: 12 additions & 1 deletion content/en/docs/concepts/storage/persistent-volumes.md
@@ -192,6 +192,7 @@ the following types of volumes:
* Azure File
* Azure Disk
* Portworx
+* FlexVolumes

You can only expand a PVC if its storage class's `allowVolumeExpansion` field is set to true.

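For example, a StorageClass that permits expansion might look like this sketch (the class name is hypothetical; AWS EBS is chosen arbitrarily as the provisioner):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: resizable-gp2   # hypothetical class name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
allowVolumeExpansion: true   # permits expansion of PVCs using this class
```
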
@@ -227,16 +228,25 @@ kubectl describe pvc <pvc_name>

If the `PersistentVolumeClaim` has the status `FileSystemResizePending`, it is safe to recreate the pod using the PersistentVolumeClaim.

-#### Resizing an in-use PersistentVolumeClaim
+FlexVolumes allow resize if the driver sets the `RequiresFSResize` capability to true.
+The FlexVolume can be resized on pod restart.

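As a sketch, a driver advertises this capability in its `init` call (the driver name and path here are hypothetical, and the exact JSON depends on the driver; the capability key mirrors the `RequiresFSResize` field):

```shell
# Hypothetical FlexVolume driver; the kubelet calls "init" when the driver is probed
$ /usr/libexec/kubernetes/kubelet-plugins/volume/exec/example~resizefs/resizefs init
{"status": "Success", "capabilities": {"requiresFSResize": true}}
```
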
{{< feature-state for_k8s_version="v1.11" state="alpha" >}}

+#### Resizing an in-use PersistentVolumeClaim

Expanding in-use PVCs is an alpha feature. To use it, enable the `ExpandInUsePersistentVolumes` feature gate.
In this case, you don't need to delete and recreate a Pod or deployment that is using an existing PVC.
Any in-use PVC automatically becomes available to its Pod as soon as its file system has been expanded.
This feature has no effect on PVCs that are not in use by a Pod or deployment. You must create a Pod which
uses the PVC before the expansion can complete.

+Expanding in-use PVCs for FlexVolumes was added in release 1.13. To enable this feature, use the `ExpandInUsePersistentVolumes` and `ExpandPersistentVolumes` feature gates. The `ExpandPersistentVolumes` feature gate is already enabled by default. If `ExpandInUsePersistentVolumes` is set, FlexVolumes can be resized online without a pod restart.

+{{< note >}}
+**Note:** FlexVolume resize is possible only when the underlying driver supports resize.
+{{< /note >}}

{{< note >}}
Expanding EBS volumes is a time consuming operation. Also, there is a per-volume quota of one modification every 6 hours.
{{< /note >}}
@@ -553,6 +563,7 @@ applicable.
* iSCSI
* Local volume
* RBD (Ceph Block Device)
+* VsphereVolume (alpha)

{{< note >}}
Only FC and iSCSI volumes supported raw block volumes in Kubernetes 1.9.