
[DPE-4196] Plugin Management Refactor #435

Open
wants to merge 83 commits into
base: 2/edge
Choose a base branch
from

Conversation

@phvalguima (Contributor) commented Sep 10, 2024

The main goal is to: (1) separate API call (/_cluster) between plugin-specific calls and cluster settings; (2) curb the requirements for restart for any plugin changes; (3) get a faster response time using cached entries wherever possible; (4) add new use-case with secret handling within the plugin logic itself; and (5) define models so we can standardize the plugin data exchanged between the objects and via relation.

Dev experience

The idea is to make it easier to add management for separate plugins without impacting the rest of the code. A dev willing to add a new plugin must decide: do we need to manage a relation, or just config options on the charm?
If not, then we can add a config-only plugin.
If yes, then we will need a new object to encapsulate the plugin handling: the "DataProvider".

The config-only plugin

These are plugins configured via config options. In this case, it is only necessary to add a new OpenSearchPlugin child class that manages the config options to be added to or removed from the cluster.

For example, opensearch-knn receives the config from the charm and returns the options to be set in the opensearch.yml.
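As an illustration, a config-only plugin could look like the sketch below. The class and option names (OpenSearchPluginConfig, plugin_opensearch_knn, knn.plugin.enabled) mirror the charm's vocabulary but are simplified guesses, not the actual API.

```python
# Hedged sketch of a config-only plugin; names are illustrative, not the
# charm's actual API.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class OpenSearchPluginConfig:
    """Options to add to, or remove from, opensearch.yml."""

    config_entries: Dict[str, str] = field(default_factory=dict)
    entries_to_del: List[str] = field(default_factory=list)


class OpenSearchKnn:
    """Config-only plugin: maps charm config onto opensearch.yml entries."""

    def __init__(self, charm_config: Dict[str, bool]):
        self.charm_config = charm_config

    def config(self) -> OpenSearchPluginConfig:
        if self.charm_config.get("plugin_opensearch_knn"):
            # Enabled: the plugin manager writes this entry to opensearch.yml.
            return OpenSearchPluginConfig(
                config_entries={"knn.plugin.enabled": "true"}
            )
        # Disabled: request removal of the entry instead.
        return OpenSearchPluginConfig(entries_to_del=["knn.plugin.enabled"])
```

The plugin manager would call config() and merge the result into the cluster configuration; no relation manager is needed.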

The relation-based plugins

These plugins are more elaborate, as they have to process events specific to the given plugin. We must also consider the case of large deployments, where data may come via a dedicated relation or the peer-cluster relation.

These plugins should be managed by a separate entity, named the relation manager. Defining a common structure for the relation manager is outside the scope of this PR.

For example, repository-s3 and OpenSearch backup.

New Plugin Manager Infra

Now, the plugin manager is able to manage plugins that depend on config options, API calls and secrets. Whenever adding a new plugin, we should consider:

opensearch_plugins.py: this plugin should have a representation that is consumable by plugin_manager; it should be composed of all the configurations and keys to be added to or removed from the cluster's main configuration
opensearch_plugin_manager.py: add the new plugin to the plugin dict; the manager must be able to instantiate this new plugin
opensearch_{plugin-name}.py: if the plugin is managed by a given relation, this lib will implement the relation manager and interface with OpenSearch's plugin-specific APIs
models.py: add any relation data model to this lib
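For models.py, a relation data model could be sketched as below. The PR itself uses pydantic models with dash-aliased fields (e.g. Field(alias="access-key")); this stdlib approximation only illustrates the alias mapping and is not the real model.

```python
# Stdlib approximation of a relation data model; the PR uses pydantic with
# Field(alias=...), so treat this as an illustration only.
from dataclasses import dataclass
from typing import Optional


@dataclass
class S3RelDataCredentials:
    access_key: Optional[str] = None
    secret_key: Optional[str] = None

    @classmethod
    def from_relation(cls, databag: dict) -> "S3RelDataCredentials":
        # Relation databags use dashed keys; model fields use underscores.
        return cls(
            access_key=databag.get("access-key"),
            secret_key=databag.get("secret-key"),
        )
```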

Using the new plugin data provider

While specific classes take care of the plugin's API calls (e.g. the /_snapshot API for the backup plugin is handled by the OpenSearchBackup class), the data provider facilitates the exchange of relation data between the specific class and the plugin_manager itself. This way, the plugin manager can apply any cluster-wide configurations that are needed for that plugin.

We need a class to deal with relation specifics, as some plugins may expect different relations depending on their deployment description, e.g. OpenSearchBackupPlugin. The OpenSearchPluginDataProvider encapsulates that logic away from the main plugin classes.
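A minimal sketch of the idea, assuming the provider's only job is to resolve which databag the plugin consumes (class and attribute names are illustrative, not the PR's API):

```python
# Sketch: the provider hides whether data arrives over the direct relation or
# the peer-cluster relation, so the plugin class itself stays "dumb".
from abc import ABC, abstractmethod


class PluginDataProvider(ABC):
    """Resolves relation data for a plugin, hiding deployment details."""

    @abstractmethod
    def relation_data(self) -> dict:
        """Return the databag the plugin should consume."""


class BackupDataProvider(PluginDataProvider):
    def __init__(self, s3_databag: dict, peer_cluster_databag: dict, is_main: bool):
        self.s3_databag = s3_databag
        self.peer_cluster_databag = peer_cluster_databag
        self.is_main = is_main

    def relation_data(self) -> dict:
        # The main orchestrator reads the s3 relation directly; other clusters
        # in a large deployment get the data via the peer-cluster relation.
        return self.s3_databag if self.is_main else self.peer_cluster_databag
```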

Secret Management

Each plugin that handles specific secrets must implement the secret-management logic in its own operation. The goal is to avoid filling the opensearch_secrets.py methods with ifs for each plugin case, and to separate and isolate each plugin's code.
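The shape of that isolation could look like the sketch below, where each plugin syncs only its own keys (class name and key names are hypothetical, not the charm's API):

```python
# Illustrative only: the plugin owns its secret keys, so opensearch_secrets.py
# needs no per-plugin if-branches.
class BackupPluginSecrets:
    """Plugin-local secret handling for the backup plugin (hypothetical)."""

    KEYS = ("s3-access-key", "s3-secret-key")

    def __init__(self, keystore: dict):
        self.keystore = keystore

    def on_secret_changed(self, secret_content: dict) -> bool:
        """Sync only this plugin's keys into the keystore; report if changed."""
        changed = False
        for key in self.KEYS:
            value = secret_content.get(key)
            if value is not None and self.keystore.get(key) != value:
                self.keystore[key] = value
                changed = True
        return changed
```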

Remove unneeded restarts and add caching

We ensure that any configuration changes that come from plugin management are applied via API before being persisted to config files. If the API responds with a 200 status, then we only write the new value to the configuration and finish, without needing a restart.

In case the service is down and the API is not available, we can assume the service will eventually be started back up. In this case, it suffices to write the config entries to the files and leave it to the next start to pick them up.
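The decision flow above can be sketched as follows (the function and its callback parameters are illustrative; the real logic lives in the plugin manager):

```python
# Sketch of the restart-avoidance flow: apply via /_cluster/settings when the
# service is up, otherwise just persist and let the next start pick it up.
from typing import Callable, Dict, Optional


def apply_cluster_settings(
    settings: Dict[str, Optional[str]],
    api_put: Callable[[Dict[str, Optional[str]]], int],
    write_config: Callable[[Dict[str, Optional[str]]], None],
    service_running: bool,
) -> bool:
    """Apply settings; return True if a restart request is still required."""
    if not service_running:
        # Service is down: persist only; the next start picks the entries up.
        write_config(settings)
        return False
    if api_put(settings) == 200:
        # API accepted the change: persist and finish without a restart.
        write_config(settings)
        return False
    # API refused the change: persist and request a restart to apply it.
    write_config(settings)
    return True
```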

This task is going to be divided into three parts:

  1. Address low-hanging fruit where we reduce the number of restarts and add caching support
  2. Three main actions: (i) merge {add,delete}_plugin together, along with their equivalents in the OpenSearchPluginConfig class; (ii) receive one big dictionary where a key: None means we simply want to delete that entry; and (iii) have the main OpenSearchKeystore observe secret changes and update its values accordingly
  3. Restore unit tests: these are going to be commented out while Parts 1 and 2 happen, given this part of the code was covered with extensive testing

The current implementation of plugin_manager.run waits for the cluster to be started before processing its config changes. We relax this demand and allow for the case where the cluster is not yet ready, so we can modify the configuration without issuing a restart request.

#252 is closed with the OpenSearchPluginRelationsHandler interface. It allows plugins to define how they will handle their relation(s). The opensearch_backup module extends this interface and defines a checker to process either small- or large-deployment details.

Other relevant changes:

  • Rename the method check_plugin_manager_ready to check_plugin_manager_ready_for_api
  • Any plugin that needs to manage things via API call should check the health of the cluster using check_plugin_manager_ready_for_api
  • Move opensearch_distro.version to load the workload_version file we already have, instead of making an API call: this is twofold, (1) it removes the dependency on a ready cluster and (2) it keeps this method in sync with recent changes to the upgrades logic
  • Waive the need to load the default settings if this particular unit is powered down: this makes sense, since at this moment we can make any config changes, as we will eventually power the unit back up later
  • If /_cluster/settings is available: apply the configs via API and do not add a restart request
  • On the config-changed handler, the upgrade_in_progress check takes precedence and will continuously defer the config-changed event until the upgrade is finished, before calling the plugin manager
  • Create an OpenSearchKeystoreNotReadyYetError: it identifies that the keystore has not been initialized yet across the cluster and hence we cannot manage any plugins that use it; however, we always apply the opensearch.yml changes from that plugin
  • Add cached_property wherever it makes sense, plus logic to clean the cache if there were any relevant changes to its content
  • This still frees config_changed to just call plugin_manager.run() before everything is set, as the run() method changes hard configuration only
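The cached_property bullet above can be sketched like this (a toy stand-in for the real manager; names are illustrative):

```python
# Sketch of caching with explicit invalidation: the expensive read is cached
# via functools.cached_property and dropped only when its content changed.
from functools import cached_property


class PluginManager:
    def __init__(self):
        self._config = {"knn.plugin.enabled": "true"}
        self.loads = 0  # counts expensive reads, for illustration

    @cached_property
    def cluster_config(self) -> dict:
        # Expensive read (file parse / API call); cached after first access.
        self.loads += 1
        return dict(self._config)

    def _clean_cache_if_needed(self, new_config: dict) -> None:
        # Drop the cached value only when the content actually changed.
        if new_config != self._config:
            self._config = dict(new_config)
            self.__dict__.pop("cluster_config", None)
```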

Closes #252, #280, #244

HealthColors.GREEN,
HealthColors.IGNORE,
]:
event.defer()
phvalguima (author):
This defer brings no real benefit, as another update-status will forcefully happen.

Reviewer:
What if the next event is a non-update-status-related event?

phvalguima (author):
Well, there are a few answers here:

  1. Update-status intervals can be set arbitrarily long, or even disabled; hence, we should never do anything here other than keeping the status
  2. Over a one-year span, update-status will outpace any other type of event; if we start deferring them, we may have multiple update-status events happening at once
  3. update-status sets a time threshold: if our hooks take too long, then we accumulate update-status events. If our update-status is taking too long (e.g. because of deferred previous update-status events), then we will have an endless loop

What is the benefit this deferral brings?

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def patch_wait_fixed() -> Callable:
phvalguima (author):
Speeds up tenacity by replacing the wait with a smaller range.
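The helper's body is elided in this view; presumably it patches tenacity's wait interval. A toy equivalent of the idea against a hand-rolled retry loop (everything here is illustrative, not the PR's code):

```python
# Toy version of the idea: shrink the wait between retries so tests run fast.
import time

WAIT_SECONDS = 30.0  # "production" wait between retries


def retry_call(fn, attempts=3):
    """Call fn, retrying on failure with WAIT_SECONDS between attempts."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(WAIT_SECONDS)


def patch_wait_fixed(wait: float = 0.0) -> None:
    """Replace the retry wait with a much smaller one, as the PR helper does."""
    global WAIT_SECONDS
    WAIT_SECONDS = wait
```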

if self.charm.unit.is_leader():
self.charm.status.clear(BackupSetupFailed, app=True)
self.charm.status.set(BlockedStatus(BackupSetupFailed))
self.charm.status.set(BlockedStatus(BackupSetupFailed), app=True)
phvalguima (author):
Following conversation on: https://chat.canonical.com/canonical/pl/oqwsfdejufgepettcqzobnw7xh

It is neither guaranteed nor wanted by the entire team to support an "all units in a given status -> app status updated automatically" behavior.

@@ -34,46 +34,25 @@ class OpenSearchKeystoreError(OpenSearchError):
"""Exception thrown when an opensearch keystore is invalid."""


class OpenSearchKeystoreNotReadyYetError(OpenSearchKeystoreError):
"""Exception thrown when the keystore is not ready yet."""
phvalguima (author):
This error is thrown when we try to reach out to the API to reload the keys and it fails.

@phvalguima phvalguima marked this pull request as ready for review September 14, 2024 11:49
github-actions bot and others added 6 commits September 26, 2024 07:18
Currently, we are having a lot of time outs in CA rotation testing.
Breaking between small and large deployments and having parallel runners
will help with that overall duration.
…python-3-12' into DPE-4196-improve-plugin-manager
@Mehdi-Bendriss Mehdi-Bendriss requested a review from zmraul October 31, 2024 16:07
@zmraul zmraul left a comment


Huge amount of work on this PR. Nice!

I've been running this branch locally and didn't find issues so far. Left some comments.

Comment on lines +454 to +460
class OpenSearchPluginDataProvider:
"""Implements the data provider for any charm-related data access.

Plugins may have one or more relations tied to them. This abstract class
enables different modules to implement a class that can specify which
relations should plugin manager listen to.
"""
Reviewer:
question: Does this class include large deployment relation?

phvalguima (author):
Not directly, but that is the idea. This class exists to abstract the access to the databags away from basic plugins. The plugin should be "dumb", i.e. just very basic dataclasses. This class provides the plugins with any relation info.

@@ -452,66 +503,141 @@ def name(self) -> str:
return "opensearch-knn"


class OpenSearchPluginBackupDataProvider(OpenSearchPluginDataProvider):
Reviewer:
nit: I would personally remove most of the OpenSearch prefix on these classes, since they are somewhat redundant and create noise when parsing or searching the file.

for event in [
charm.on[PeerClusterRelationName].relation_joined,
charm.on[PeerClusterRelationName].relation_changed,
charm.on[PeerClusterRelationName].relation_departed,
Reviewer:
question: Is relation_departed needed?

phvalguima (author):
Yes. Reason: if you have s3 keys in the opensearch-keystore but you do not have backup configured, then the application breaks. Therefore, we need to know when the relation is gone.

except OpenSearchHttpError as e:
return e.response_body if e.response_body else None
return result if isinstance(result, dict) else None

def _is_restore_in_progress(self) -> bool:
Reviewer:
nit: should this one be public as well? same as is_backup_in_progress

Comment on lines 1094 to 1098
elif charm.opensearch_peer_cm.deployment_desc().typ == DeploymentType.MAIN_ORCHESTRATOR:
# Using the deployment_desc() method instead of is_provider()
# In both cases: (1) small deployments or (2) large deployments where this cluster is the
# main orchestrator, we want to instantiate the OpenSearchBackup class.
return OpenSearchBackup(charm)
Reviewer:
question: shouldn't this one also return OpenSearchBackup when type is DeploymentType.FAILOVER_ORCHESTRATOR?

raise OpenSearchPluginMissingConfigError(
"Plugin {} missing: {}".format(
"Plugin {} missing credentials".format(
Reviewer:
This message is not informative enough. When integrating with s3, you can have both access_key and secret_key set on the s3-integrator and still get this message. After setting those, and adding e.g. bucket, you get the more informative message from below that shows the missing fields.

and self._charm.health.get()
in [HealthColors.GREEN, HealthColors.YELLOW, HealthColors.IGNORE]
)
def is_ready_for_api(self) -> bool:
Reviewer:
More of a design opinion here, but these checks should probably be centralized at some point. They are invoked from outside this file, which means that it is a relevant check for other components.

phvalguima (author):
@zmraul can you give more precise references? AFAIU, other places are asking if opensearch is healthy; I am asking if it is responsive.

except OpenSearchCmdError as e:
if "not found" in str(e):
logger.info(f"Plugin {plugin.name} to be deleted, not found. Continuing...")
return False
raise OpenSearchPluginRemoveError(plugin.name)
return True

def _clean_cache_if_needed(self):
Reviewer:
nit: Not sure why this is needed. Clearing the cache on self.plugins will lead to ConfigExposedPlugins being read again, which is static, so no benefit to clearing. self.plugins is being used as a read_only property as far as I can see.

For _installed_plugins I would make that a @property instead, and just evaluate every time.


def run(self) -> bool:
"""Runs a check on each plugin: install, execute config changes or remove.

This method should be called at config-changed event. Returns if needed restart.
"""
is_manager_ready = True
Reviewer:
question: Shouldn't this method be gated by is_ready_for_api? It is calling API commands on apply and it's called from base-charm.

phvalguima (author):
Maybe this is more a naming issue. The idea here was more is_keystore_ready.

phvalguima (author), Dec 7, 2024:
Changed the name, check if that makes more sense.


@Mehdi-Bendriss Mehdi-Bendriss left a comment


Thanks Pedro. I left some comments.
It seems to me that we are making the plugin components too smart, as opposed to the previous implementation, which I believe was clearer and had better separation of concerns.
The large-deployments workflow needs to be carefully analyzed.

lib/charms/opensearch/v0/models.py (comment resolved)
protocol: Optional[str] = None
storage_class: Optional[str] = Field(alias="storage-class")
tls_ca_chain: Optional[str] = Field(alias="tls-ca-chain")
credentials: S3RelDataCredentials = Field(alias=S3_CREDENTIALS, default=S3RelDataCredentials())
Reviewer:
can you explain the default here?

phvalguima (author):
Missing configs from the s3-integrator

@@ -220,48 +226,50 @@ def __init__(self, charm: "OpenSearchBaseCharm", relation_name: str = PeerCluste
]:
self.framework.observe(event, self._on_s3_relation_action)

def _on_secret_changed(self, event: EventBase) -> None:
Reviewer:
should be marked as abstract

@abstractmethod
def _on_s3_relation_broken(self, event: EventBase) -> None:
"""Defers the s3 relation broken events."""
raise NotImplementedError
Reviewer:
a pass or ... would be better here

Comment on lines +265 to +271
# Defaults to True if we have a failure, to avoid any actions due to
# intermittent connection issues.
logger.warning(
"_is_restore_in_progress: failed to get indices status"
" - assuming restore is in progress"
)
return True
Reviewer:
I'm not sure I understand this path. Why return True and assume a restore is in progress when it may not be, instead of letting it crash?

Comment on lines +562 to +569
MANDATORY_CONFS = [
"bucket",
"endpoint",
"region",
"base_path",
"protocol",
"credentials",
]
Reviewer:
similar comment on duplication, pydantic should already perform the required validation.
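The reviewer's point, sketched with a stdlib stand-in (pydantic would report missing required fields at validation time; the class and field names here are illustrative):

```python
# Required-field validation living in the model itself, instead of a parallel
# MANDATORY_CONFS list duplicated elsewhere.
from dataclasses import dataclass, fields


@dataclass
class S3Config:
    bucket: str
    endpoint: str
    region: str

    @classmethod
    def parse(cls, data: dict) -> "S3Config":
        missing = [f.name for f in fields(cls) if f.name not in data]
        if missing:
            # One source of truth: the dataclass fields define what is required.
            raise ValueError(f"missing configs: {missing}")
        return cls(**{f.name: data[f.name] for f in fields(cls)})
```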

"protocol",
"credentials",
]
DATA_PROVIDER = OpenSearchPluginBackupDataProvider
Reviewer:
why a class variable?

phvalguima (author):
They are very tightly coupled. The idea is to have each subclass define its own provider type. To simplify it, I set it as a class attribute, so the upstream classes creating the plugin do not need to worry about creating two objects instead of one.

"""

MODEL = S3RelData
Reviewer:
why a class variable?

Comment on lines +656 to 661
self.charm.secrets.put_object(
Scope.APP,
S3_CREDENTIALS,
S3RelDataCredentials().to_dict(by_alias=True),
)

Reviewer:
can you explain why?

phvalguima (author):
This is the case where we are missing s3 information; we create an empty object to signal that.

)
)

if self.dp.is_main_orchestrator:
Reviewer:
this is too smart for the plugins, which are supposed to be dumb. The heavy lifting should be on the data provider, which should only inject into the plugin what it needs.

phvalguima (author):
Yeah, I agree. But the thing here is that I need some "factory" method that differentiates between MAIN and OTHERS...

There were several changes on the config-changed logic and made this
rebase rather complex. I am making a PR to the main refactor branch so
we can look at it more carefully before having it all together.
@phvalguima phvalguima changed the base branch from main to 2/edge December 13, 2024 16:01
- If OpenSearch is throttling, this is an alert that optimizations are
necessary like scaling the number of nodes or changing queries and
indexing patterns

Successfully merging this pull request may close these issues.

[RFE] Extend plugin manager
5 participants