From a996371bcf23d3fd8c248ac454b6b78b82508090 Mon Sep 17 00:00:00 2001
From: crazyshrut <145760185+crazyshrut@users.noreply.github.com>
Date: Tue, 20 Aug 2024 04:41:30 +0530
Subject: [PATCH] Fix typo in docs: "dpeloyment" -> "deployment" in index.md

---
 docs/docs/index.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/docs/index.md b/docs/docs/index.md
index 6b69171..5c81be6 100644
--- a/docs/docs/index.md
+++ b/docs/docs/index.md
@@ -39,7 +39,7 @@ The following sections go over the features.
 * Scheduler will **pin containers** to specific cores on a NUMA node so as to stop containers from stepping on each other’s toes during peak hours and allow them to fully utilise the multi-level caches associated with the allocated CPU cores. Some balance is anyways gained by enabling hyper-threading on the executor nodes. This should be sufficient to provide a significant boost to the application performance.
 * Allows for **specialised nodes** in the cluster. For example, there might be nodes with GPU available. We would want to run ML models that can utilise such hardware rather than allocate generic service containers on such nodes. To this end, the scheduler supports tagging and allows for containers to be explicitly mapped to tagged nodes.
 * Allows for different **placement policies** to provide some flexibility to users in where they want to place their container nodes. This sometimes helps developers deploy specific apps to specific nodes where they might have been granted special privileges to perform deeper than usual investigations of running service containers (for example, take heap-dumps to specific mounted volumes etc).
-* Allows for **configuration injection** at container startup. Such configuration can be stream in as part of the dpeloyment specification, mounted in from executor hosts or fetched via API calls by the controllers or executors.
+* Allows for **configuration injection** at container startup. Such configuration can be streamed in as part of the deployment specification, mounted in from executor hosts or fetched via API calls by the controllers or executors.
 * Provides provisions to allow for extension of the scheduler to implement different scheduling algorithms in the code later on.
 * Sometimes, NUMA localization and cpu pinning are overkill for clusters that don't need to extract the last bit of performance. For example, testing/staging clusters. To this end, drove supports the following features:
 * Allows turning off NUMA and core pinning at executor level