diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/create-a-minimal-kubernetes-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/create-a-minimal-kubernetes-charm.md index c595a6184..69f15c1e2 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/create-a-minimal-kubernetes-charm.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/create-a-minimal-kubernetes-charm.md @@ -15,7 +15,7 @@ If you are familiar with Juju, as we assume here, you'll know that, to start us As you already know from your knowledge of Juju, when you deploy a Kubernetes charm, the following things happen: -1. The Juju controller provisions a pod with two containers, one for the Juju unit agent and the charm itself and one container for each application workload container that is specified in the `containers` field of a file in the charm that is called `charmcraft.yaml`. +1. The Juju controller provisions a pod with at least two containers: one for the Juju unit agent and the charm itself, plus one container for each application workload container specified in the `containers` field of the charm's `charmcraft.yaml` file. 1. The same Juju controller injects Pebble -- a lightweight, API-driven process supervisor -- into each workload container and overrides the container entrypoint so that Pebble starts when the container is ready. 1. When the Kubernetes API reports that a workload container is ready, the Juju controller informs the charm that the instance of Pebble in that container is ready. At that point, the charm knows that it can start communicating with Pebble. 1. Typically, at this point the charm will make calls to Pebble so that Pebble can configure and start the workload and begin operations. @@ -59,12 +59,12 @@ title: | demo-fastapi-k8s description: | This is a demo charm built on top of a small Python FastAPI server. - This charm could be related to PostgreSQL charm and COS Lite bundle (Canonical Observability Stack). + This charm can be related to the PostgreSQL charm and the COS Lite bundle (Canonical Observability Stack).
summary: | FastAPI Demo charm for Kubernetes ``` -Second, add an environment constraint assuming the latest major Juju version and a Kubernetes-type cloud: +Second, add an environment constraint assuming the Juju version with the desired features and a Kubernetes-type cloud: ```text assumes: @@ -425,7 +425,4 @@ For the full code see: [01_create_minimal_charm](https://github.com/canonical/ju For a comparative view of the code before and after our edits see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/main...01_create_minimal_charm) - - >**See next: {ref}`Make your charm configurable `** - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-operational-tasks-via-actions.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-operational-tasks-via-actions.md index 313e64482..bef6ef0d6 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-operational-tasks-via-actions.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-operational-tasks-via-actions.md @@ -3,19 +3,17 @@ > {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Expose operational tasks via actions > -> **See previous: {ref}`Preserve your charm's data `** - +> **See previous: {ref}`Integrate your charm with PostgreSQL `** ````{important} This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: - ```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s -git checkout 05_preserve_charm_data -git checkout -b 06_create_actions +git checkout 03_integrate_with_psql +git checkout -b 04_create_actions ``` ```` @@ -28,7 +26,6 @@ This can be done by adding an `actions` section in your `charmcraft.yaml` file a In this part of the tutorial we will follow this process to add an action that will allow a charm user to view the current database access points and, if set, also the username and the password. - ## Define the actions Open the `charmcraft.yaml` file and add to it a block defining an action, as below. As you can see, the action is called `get-db-info` and it is intended to help the user access database authentication information. The action has a single parameter, `show-password`; if set to `True`, it will show the username and the password. @@ -44,7 +41,6 @@ actions: default: False ``` - ## Define the action event handlers Open the `src/charm.py` file. @@ -52,37 +48,21 @@ Open the `src/charm.py` file. In the charm `__init__` method, add an action event observer, as below. As you can see, the name of the event consists of the name defined in the `charmcraft.yaml` file (`get-db-info`) and the word `action`. ```python -# events on custom actions that are run via 'juju run-action' +# Events on charm actions that are run via 'juju run' framework.observe(self.on.get_db_info_action, self._on_get_db_info_action) ``` - - - - Now, define the action event handler, as below: First, read the value of the parameter defined in the `charmcraft.yaml` file (`show-password`). Then, use the `fetch_postgres_relation_data` method (that we defined in a previous chapter) to read the contents of the database relation data and, if the parameter value read earlier is `True`, add the username and password to the output. Finally, use `event.set_results` to attach the results to the event that has called the action; this will print the output to the terminal. 
If we are not able to get the data (for example, if the charm has not yet been integrated with the postgresql-k8s application) then we use the `fail` method of the event to let the user know. - - ```python def _on_get_db_info_action(self, event: ops.ActionEvent) -> None: """This method is called when "get_db_info" action is called. It shows information about database access points by calling the `fetch_postgres_relation_data` method and creates an output dictionary containing the host, port, if show_password is True, then include username, and password of the database. - If PSQL charm is not integrated, the output is set to "No database connected". + If the postgresql charm is not integrated, the output is set to "No database connected". Learn more about actions at https://juju.is/docs/sdk/actions """ @@ -117,11 +97,10 @@ juju refresh \ demo-server-image=ghcr.io/canonical/api_demo_server:1.0.1 ``` - Next, test that the basic action invocation works: ```text -juju run demo-api-charm/0 get-db-info --wait 1m +juju run demo-api-charm/0 get-db-info ``` It might take a few seconds, but soon you should see an output similar to the one below, showing the database host and port: @@ -138,7 +117,7 @@ db-port: "5432" Now, test that the action parameter (`show-password`) works as well by setting it to `True`: ```text -juju run demo-api-charm/0 get-db-info show-password=True --wait 1m +juju run demo-api-charm/0 get-db-info show-password=True ``` The output should now include the username and the password: @@ -153,14 +132,12 @@ db-port: "5432" db-username: relation_id_4 ``` - Congratulations, you now know how to expose operational tasks via actions! ## Review the final code -For the full code see: [06_create_actions](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/06_create_actions) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/05_preserve_charm_data...06_create_actions) +For the full code see: [04_create_actions](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/04_create_actions) +For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/03_integrate_with_psql...04_create_actions) > **See next: {ref}`Observe your charm with COS Lite `** diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-the-version-of-the-application-behind-your-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-the-version-of-the-application-behind-your-charm.md deleted file mode 100644 index a8cc788a3..000000000 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/expose-the-version-of-the-application-behind-your-charm.md +++ /dev/null @@ -1,118 +0,0 @@ -(expose-the-version-of-the-application-behind-your-charm)= -# Expose the version of the application behind your charm - -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Expose the version of the application behind your charm -> -> **See previous: {ref}`Make your charm configurable `** - -````{important} - -This document is part of a series, and we recommend you follow it in sequence. 
However, you can also jump straight in by checking out the code from the previous branches: - -``` -git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git -cd juju-sdk-tutorial-k8s -git checkout 02_make_your_charm_configurable -git checkout -b 03_set_workload_version -``` - -```` - -In this chapter of the tutorial you will learn how to expose the version of the application (workload) run by the charm -- something that a charm user might find it useful to know. - - -## Define functions to collect the workload application version and set it in the charm - -As a first step we need to add two helper functions that will send an HTTP request to our application to get its version. If the container is available, we can send a request using the `requests` Python library and then add class methods to parse the JSON output to get a version string, as shown below: - -- Import the `requests` Python library: - -```python -import requests -``` - -- Add the following class methods: - - -```python -@property -def version(self) -> str: - """Reports the current workload (FastAPI app) version.""" - try: - if self.container.get_services(self.pebble_service_name): - return self._request_version() - # Catching Exception is not ideal, but we don't care much for the error here, and just - # default to setting a blank version since there isn't much the admin can do! - except Exception as e: - logger.warning("unable to get version from API: %s", str(e), exc_info=True) - return "" - -def _request_version(self) -> str: - """Helper for fetching the version from the running workload using the API.""" - resp = requests.get(f"http://localhost:{self.config['server-port']}/version", timeout=10) - return resp.json()["version"] -``` - -Next, we need to update the `_update_layer_and_restart` method to set our workload version. Insert the following lines before setting `ActiveStatus`: - -```python -# Add workload version in Juju status. -self.unit.set_workload_version(self.version) -``` - -## Declare Python dependencies - - -Since we've added a third party Python dependency into our project, we need to list it in `requirements.txt`. Edit the file to add the following line: - -``` -requests~=2.28 -``` - -Next time you run `charmcraft` it will fetch this new dependency into the charm package. - - -## Validate your charm - -We've exposed the workload version behind our charm. Let's test that it's working! - -First, repack and refresh your charm: - -```text -charmcraft pack -juju refresh \ - --path="./demo-api-charm_ubuntu-22.04-amd64.charm" \ - demo-api-charm --force-units --resource \ - demo-server-image=ghcr.io/canonical/api_demo_server:1.0.1 -``` - -Our charm should fetch the application version and forward it to `juju`. 
Run `juju status` to check: - -```text -juju status -``` - -Indeed, the version of our workload is now displayed -- see the App block, the Version column: - -```text -Model Controller Cloud/Region Version SLA Timestamp -charm-model tutorial-controller microk8s/localhost 3.0.0 unsupported 12:37:27+01:00 - -App Version Status Scale Charm Channel Rev Address Exposed Message -demo-api-charm 1.0.1 active 1 demo-api-charm 0 10.152.183.233 no - -Unit Workload Agent Address Ports Message -demo-api-charm/0* active idle 10.1.157.75 -``` - -## Review the final code - - -For the full code see: [03_set_workload_version](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/03_set_workload_version) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/02_make_your_charm_configurable...03_set_workload_version) - - -> **See next: {ref}`Integrate your charm with PostgreSQL `** - - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/index.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/index.md index e44dfb485..e1b4dd364 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/index.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/index.md @@ -27,7 +27,6 @@ The application that we will charm in this tutorial is based on the Python Fast - Familiarity with the Python programming language, Object-Oriented Programming, event handlers. - Understanding of Kubernetes fundamentals. - **What you'll do:** @@ -56,20 +51,14 @@ set-up-your-development-environment study-your-application create-a-minimal-kubernetes-charm make-your-charm-configurable -expose-the-version-of-the-application-behind-your-charm integrate-your-charm-with-postgresql -preserve-your-charms-data expose-operational-tasks-via-actions observe-your-charm-with-cos-lite write-unit-tests-for-your-charm -write-scenario-tests-for-your-charm write-integration-tests-for-your-charm -open-a-kubernetes-port-in-your-charm publish-your-charm-on-charmhub ``` - - (tutorial-kubernetes-next-steps)= ## Next steps @@ -80,5 +69,3 @@ By the end of this tutorial you will have built a machine charm and evolved it i | "How do I...?" | {ref}`how-to-guides` | | "What is...?" | {ref}`reference` | | "Why...?", "So what?" | {ref}`explanation` | - - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/integrate-your-charm-with-postgresql.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/integrate-your-charm-with-postgresql.md index e6e711488..18715bd2c 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/integrate-your-charm-with-postgresql.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/integrate-your-charm-with-postgresql.md @@ -1,28 +1,31 @@ (integrate-your-charm-with-postgresql)= # Integrate your charm with PostgreSQL + + > {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Integrate your charm with PostgreSQL > -> **See previous: {ref}`Expose the version of the application behind your charm `** +> **See previous: {ref}`Make your charm configurable `** ````{important} This document is part of a series, and we recommend you follow it in sequence. 
However, you can also jump straight in by checking out the code from the previous branches: - ```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s -git checkout 03_set_workload_version -git checkout -b 04_integrate_with_psql +git checkout 02_make_your_charm_configurable +git checkout -b 03_integrate_with_psql ``` ```` A charm often requires or supports relations to other charms. For example, to make our application fully functional we need to connect it to the PostgreSQL database. In this chapter of the tutorial we will update our charm so that it can be integrated with the existing [PostgreSQL charm](https://charmhub.io/postgresql-k8s?channel=14/stable). - - ## Fetch the required database interface charm libraries Navigate to your charm directory and fetch the [data_interfaces](https://charmhub.io/data-platform-libs/libraries/data_interfaces) charm library from Charmhub: @@ -43,13 +46,8 @@ lib Well done, you've got everything you need to set up a database relation! - ## Define the charm relation interface - - Now, time to define the charm relation interface. First, find out the name of the interface that PostgreSQL offers for other charms to connect to it. According to the [documentation of the PostgreSQL charm](https://charmhub.io/postgresql-k8s?channel=14/stable), the interface is called `postgresql_client`. @@ -65,12 +63,11 @@ requires: That will tell `juju` that our charm can be integrated with charms that provide the same `postgresql_client` interface, for example, the official PostgreSQL charm. - Import the database interface libraries and define database event handlers -We now need to implement the logic that wires our application to a database. When a relation between our application and the data platform is formed, the provider side (i.e., the data platform) will create a database for us and it will provide us with all the information we need to connect to it over the relation -- e.g., username, password, host, port, etc. On our side, we nevertheless still need to set the relevant environment variables to point to the database and restart the service. +We now need to implement the logic that wires our application to a database. When a relation between our application and the data platform is formed, the provider side (that is: the data platform) will create a database for us and it will provide us with all the information we need to connect to it over the relation -- for example, username, password, host, port, and so on. On our side, we nevertheless still need to set the relevant environment variables to point to the database and restart the service. -To do so, we need to update our charm “src/charm.py” to do all of the following: +To do so, we need to update our charm `src/charm.py` to do all of the following: * Import the `DataRequires` class from the interface library; this class represents the relation data exchanged in the client-server communication. * Define the event handlers that will be called during the relation lifecycle. @@ -78,7 +75,6 @@ To do so, we need to update our charm “src/charm.py” to do all of the follow ### Import the database interface libraries - First, at the top of the file, import the database interfaces library: ```python @@ -96,15 +92,14 @@ You might have noticed that despite the charm library being placed in the `lib/c ```python from charms.data_platform_libs ... ``` + and not ```python from lib.charms.data_platform_libs... 
``` -The former is not resolvable by default but everything works fine when the charm is deployed. Why? Because the `dispatch` script in the packed charm sets the `PYTHONPATH` environment variable to include the `lib` directory when it executes your `src/charm.py` code. This tells python it can check the `lib` directory when looking for modules and packages at import time. - - +The former is not resolvable by default but everything works fine when the charm is deployed. Why? Because the `dispatch` script in the packed charm sets the `PYTHONPATH` environment variable to include the `lib` directory when it executes your `src/charm.py` code. This tells Python it can check the `lib` directory when looking for modules and packages at import time. If you're experiencing issues with your IDE or just trying to run the `charm.py` file on your own, make sure to set/update `PYTHONPATH` to include `lib` directory as well. @@ -117,7 +112,6 @@ export PYTHONPATH=lib:$PYTHONPATH ```` - ### Add relation event observers Next, in the `__init__` method, define a new instance of the 'DatabaseRequires' class. This is required to set the right permissions scope for the PostgreSQL charm. It will create a new user with a password and a database with the required name (below, `names_db`), and limit the user permissions to only this particular database (that is, below, `names_db`). @@ -143,7 +137,7 @@ framework.observe(self.database.on.endpoints_changed, self._on_database_created) Now we need to extract the database authentication data and endpoints information. We can do that by adding a `fetch_postgres_relation_data` method to our charm class. Inside this method, we first retrieve relation data from the PostgreSQL using the `fetch_relation_data` method of the `database` object. We then log the retrieved data for debugging purposes. Next we process any non-empty data to extract endpoint information, the username, and the password and return this process data as a dictionary. Finally, we ensure that, if no data is retrieved, we return an empty dictionary, so that the caller knows that the database is not yet ready. ```python -def fetch_postgres_relation_data(self) -> Dict[str, str]: +def fetch_postgres_relation_data(self) -> dict[str, str]: """Fetch postgres relation data. This function retrieves relation data from a postgres database using @@ -169,20 +163,9 @@ def fetch_postgres_relation_data(self) -> Dict[str, str]: return {} ``` -Since `ops` supports Python 3.8, this tutorial used type annotations compatible with 3.8. If you're following along with this chapter, you'll need to import the following from the `typing` module: -```python -from typing import Dict, Optional -``` -```{important} - -The version of Python that your charm will use is determined in your `charmcraft.yaml`. In this case, we've specified Ubuntu 22.04, which means the charm will actually be running on Python 3.10, so we could have used some more recent Python features, like using the builtin `dict` instead of `Dict`, and the `|` operator for unions, allowing us to write (e.g.) `str | None` instead of `Optional[str]`. This will likely be updated in a future version of this tutorial. - -``` - - ### Share the authentication information with your application -Our application consumes database authentication information in the form of environment variables. Let's update the Pebble service definition with an `environment` key and let's set this key to a dynamic value -- the class property `self.app_environment`. 
Your `_pebble_layer` property should look as below: +Our application consumes database authentication information in the form of environment variables. Let's update the Pebble service definition with an `environment` key and let's set this key to a dynamic value -- the class property `self.app_environment`. Your `_pebble_layer` property should look as below: ```python @property @@ -216,7 +199,7 @@ Now, let's define this property such that, every time it is called, it dynamical ```python @property -def app_environment(self) -> Dict[str, Optional[str]]: +def app_environment(self) -> dict[str, str | None]: """This property method creates a dictionary containing environment variables for the application. It retrieves the database authentication data by calling the `fetch_postgres_relation_data` method and uses it to populate the dictionary. @@ -247,7 +230,6 @@ The diagram below illustrates the workflow for the case where the database integ ![Integrate your charm with PostgreSQL](../../resources/integrate_your_charm_with_postgresql.png) - ## Update the unit status to reflect the integration state Now that the charm is getting more complex, there are many more cases where the unit status needs to be set. It's often convenient to do this in a more declarative fashion, which is where the collect-status event can be used. @@ -290,14 +272,13 @@ We also want to clean up the code to remove the places where we're setting the s ```python if port == 22: # The collect-status handler will set the status to blocked. - logger.debug('Invalid port number, 22 is reserved for SSH;) + logger.debug('Invalid port number: 22 is reserved for SSH') ``` And remove the `self.unit.status = WaitingStatus` line from `_update_layer_and_restart` (similarly replacing it with a logging line if you prefer). ## Validate your charm - Time to check the results! First, repack and refresh your charm: @@ -324,7 +305,6 @@ juju integrate postgresql-k8s demo-api-charm > Read more: [Integration](https://juju.is/docs/olm/integration), [`juju integrate`](https://juju.is/docs/olm/juju-integrate) - Finally, run: ```text @@ -364,7 +344,6 @@ curl -X 'POST' \ ```{important} If you changed the `server-port` config value in the previous section, don't forget to change it back to 8000 before doing this! - ``` Second, let's try to read something from the database by running: @@ -383,8 +362,8 @@ Congratulations, your integration with PostgreSQL is functional! 
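Before moving on, it can help to picture the other side of this contract. The charm's only job here is to inject the `DEMO_SERVER_DB_*` environment variables through the Pebble layer; the FastAPI demo server ships its own database handling, so the snippet below is only an illustrative sketch (not the demo server's actual code) of how a workload might assemble a connection string from those variables:

```python
import os


def build_db_uri() -> str | None:
    """Assemble a PostgreSQL URI from the variables the charm sets in the Pebble layer."""
    host = os.getenv('DEMO_SERVER_DB_HOST')
    port = os.getenv('DEMO_SERVER_DB_PORT')
    user = os.getenv('DEMO_SERVER_DB_USER')
    password = os.getenv('DEMO_SERVER_DB_PASSWORD')
    if not all((host, port, user, password)):
        # The database relation is not established yet, so the charm has not
        # injected any credentials.
        return None
    # `names_db` is the database name the charm requests from PostgreSQL.
    return f'postgresql://{user}:{password}@{host}:{port}/names_db'
```

Returning `None` rather than raising mirrors how the charm itself treats an empty relation data dictionary as "database not ready yet".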
## Review the final code -For the full code see: [04_integrate_with_psql](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/04_integrate_with_psql) +For the full code see: [03_integrate_with_psql](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/03_integrate_with_psql) -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/03_set_workload_version...04_integrate_with_psql) +For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/02_make_your_charm_configurable...03_integrate_with_psql) -> **See next: {ref}`Preserve your charm's data `** +> **See next: {ref}`Expose your charm's operational tasks via actions `** diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/make-your-charm-configurable.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/make-your-charm-configurable.md index 87ceadb39..507e4e08a 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/make-your-charm-configurable.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/make-your-charm-configurable.md @@ -5,12 +5,11 @@ > > **See previous: {ref}`Create a minimal Kubernetes charm `** - ````{important} This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: -```bash +```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s git checkout 01_create_minimal_charm @@ -27,18 +26,12 @@ This can be done by defining a charm configuration in a file called `charmcraft. In this part of the tutorial you will update your charm to make it possible for a charm user to change the port on which the workload application is available. - - ## Define the configuration options - To begin with, let's define the options that will be available for configuration. In the `charmcraft.yaml` file you created earlier, define a configuration option, as below. The name of your configurable option is going to be `server-port`. The `default` value is `8000` -- this is the value you're trying to allow a charm user to configure. - - ```yaml config: options: @@ -50,12 +43,6 @@ config: ## Define the configuration event handlers - - Open your `src/charm.py` file. In the `__init__` function, add an observer for the `config_changed` event and pair it with an `_on_config_changed` handler: @@ -64,24 +51,23 @@ In the `__init__` function, add an observer for the `config_changed` event and p framework.observe(self.on.config_changed, self._on_config_changed) ``` -Now, define the handler, as below. First, read the `self.config` attribute to get the new value of the setting. Then, validate that this value is allowed (or block the charm otherwise). Next, let's log the value to the logger. Finally, since configuring something like a port affects the way we call our workload application, we also need to update our pebble configuration, which we will do via a newly created method `_update_layer_and_restart` that we will define shortly. +Now, define the handler, as below. First, read the `self.config` attribute to get the new value of the setting. Then, validate that this value is allowed (or block the charm otherwise). Next, let's log the value to the logger.
Finally, since configuring something like a port affects the way we call our workload application, we also need to update our Pebble configuration, which we will do via a newly created method `_update_layer_and_restart` that we will define shortly. ```python def _on_config_changed(self, event: ops.ConfigChangedEvent) -> None: - port = self.config['server-port'] # see charmcraft.yaml + port = self.config['server-port'] # See charmcraft.yaml if port == 22: self.unit.status = ops.BlockedStatus('invalid port number, 22 is reserved for SSH') return - logger.debug("New application port is requested: %s", port) + logger.debug('New application port is requested: %s', port) self._update_layer_and_restart() ``` ```{caution} A charm does not know which configuration option has been changed. Thus, make sure to validate all the values. This is especially important since multiple values can be changed in one call. - ``` In the `__init__` function, add a new attribute to define a container object for your workload: @@ -105,23 +91,21 @@ def _update_layer_and_restart(self) -> None: https://canonical-pebble.readthedocs-hosted.com/en/latest/reference/layers """ - # Learn more about statuses in the SDK docs: - # https://juju.is/docs/sdk/status - self.unit.status = ops.MaintenanceStatus('Assembling Pebble layers') - try: - # Get the current pebble layer config - services = self.container.get_plan().to_dict().get('services', {}) - if services != self._pebble_layer.to_dict().get('services', {}): - # Changes were made, add the new layer - self.container.add_layer('fastapi_demo', self._pebble_layer, combine=True) - logger.info("Added updated layer 'fastapi_demo' to Pebble plan") - - self.container.restart(self.pebble_service_name) - logger.info(f"Restarted '{self.pebble_service_name}' service") - - self.unit.status = ops.ActiveStatus() - except ops.pebble.APIError: - self.unit.status = ops.MaintenanceStatus('Waiting for Pebble in workload container') + # Learn more about statuses in the SDK docs: + # https://juju.is/docs/sdk/status + self.unit.status = ops.MaintenanceStatus('Assembling Pebble layers') + try: + self.container.add_layer('fastapi_demo', self._pebble_layer, combine=True) + logger.info("Added updated layer 'fastapi_demo' to Pebble plan") + + # Tell Pebble to incorporate the changes, including restarting the + # service if required. + self.container.replan() + logger.info(f"Replanned with '{self.pebble_service_name}' service") + + self.unit.status = ops.ActiveStatus() + except ops.pebble.APIError: + self.unit.status = ops.MaintenanceStatus('Waiting for Pebble in workload container') ``` Now, crucially, update the `_pebble_layer` property to make the layer definition dynamic, as shown below. This will replace the static port `8000` with `f"--port={self.config['server-port']}"`. @@ -148,7 +132,7 @@ def _on_demo_server_pebble_ready(self, event: ops.PebbleReadyEvent) -> None: First, repack and refresh your charm: -``` +```text charmcraft pack juju refresh \ --path="./demo-api-charm_ubuntu-22.04-amd64.charm" \ @@ -156,7 +140,6 @@ juju refresh \ demo-server-image=ghcr.io/canonical/api_demo_server:1.0.1 ``` - Now, check the available configuration options: ```text @@ -197,15 +180,12 @@ Unit Workload Agent Address Ports Message demo-api-charm/0* blocked idle 10.1.157.74 invalid port number, 22 is reserved for SSH ``` - Congratulations, you now know how to make your charm configurable! 
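One optional extra while you are experimenting: if you want to drop back to the default port after trying the blocked state, Juju can reset the option for you. A quick way (assuming the `server-port` option defined above and the same `demo-api-charm` application name) is:

```text
juju config demo-api-charm --reset server-port
```

Resetting the option fires `config-changed` again, so the handler you just wrote replans the Pebble layer and the service returns to port 8000.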
- ## Review the final code For the full code see: [02_make_your_charm_configurable](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/02_make_your_charm_configurable) For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/01_create_minimal_charm...02_make_your_charm_configurable) - -> **See next: {ref}`Expose the version of the application behind your charm `** +> **See next: {ref}`Integrate your charm with PostgreSQL `** diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/observe-your-charm-with-cos-lite.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/observe-your-charm-with-cos-lite.md index 90f8345ca..232c9b542 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/observe-your-charm-with-cos-lite.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/observe-your-charm-with-cos-lite.md @@ -10,12 +10,11 @@ This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: - ```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s -git checkout 06_create_actions -git checkout -b 07_cos_integration +git checkout 04_create_actions +git checkout -b 05_cos_integration ``` ```` @@ -26,14 +25,12 @@ Our application is prepared for that -- as you might recall, it uses [`starlette In the charming universe, what you would do is deploy the existing [Canonical Observability Stack (COS) lite bundle](https://charmhub.io/cos-lite) -- a convenient collection of charms that includes all of [Prometheus](https://charmhub.io/prometheus-k8s), [Loki](https://charmhub.io/loki-k8s), and [Grafana](https://charmhub.io/grafana-k8s) -- and then integrate your charm with Prometheus to collect real-time application metrics; with Loki to collect application logs; and with Grafana to create dashboards and visualise collected data. - In this part of the tutorial we will follow this process to collect various metrics and logs about your application and visualise them on a dashboard. ## Integrate with Prometheus Follow the steps below to make your charm capable of integrating with the existing [Prometheus](https://charmhub.io/prometheus-k8s) charm. This will enable your charm user to collect real-time metrics about your application. - ### Fetch the Prometheus interface libraries Ensure you're in your Multipass Ubuntu VM, in your charm project directory. @@ -63,7 +60,9 @@ lib └── prometheus_scrape.py ``` -Note: When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.prometheus_k8s.v0.prometheus_scrape`. +```{note} +When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.prometheus_k8s.v0.prometheus_scrape`. +``` ### Define the Prometheus relation interface @@ -81,11 +80,11 @@ In your `src/charm.py` file, do the following: First, at the top of the file, import the `prometheus_scrape` library: -```text +```python from charms.prometheus_k8s.v0.prometheus_scrape import MetricsEndpointProvider ``` -Now, in your charm's `__init__` method, initialise the `MetricsEndpointProvider` instance with the desired scrape target, as below. 
Note that this uses the relation name that you specified earlier in the `charmcraft.yaml` file. Also, reflecting the fact that you've made your charm's port configurable (see previous chapter {ref}`Make the charm configurable `), the target job is set to be consumed from config. The URL path is not included because it is predictable (defaults to /metrics), so the Prometheus library uses it automatically. The last line, which sets the `refresh_event` to the `config_change` event, ensures that the Prometheus charm will change its scraping target every time someone changes the port configuration. Overall, this code will allow your application to be scraped by Prometheus once they've been integrated. +Now, in your charm's `__init__` method, initialise the `MetricsEndpointProvider` instance with the desired scrape target, as below. Note that this uses the relation name that you specified earlier in the `charmcraft.yaml` file. Also, reflecting the fact that you've made your charm's port configurable (see previous chapter {ref}`Make the charm configurable `), the target job is set to be consumed from config. The URL path is not included because it is predictable (defaults to /metrics), so the Prometheus library uses it automatically. The last line, which sets the `refresh_event` to the `config_change` event, ensures that the Prometheus charm will change its scraping target every time someone changes the port configuration. Overall, this code will allow your application to be scraped by Prometheus once they've been integrated. ```python self._prometheus_scraping = MetricsEndpointProvider( @@ -96,13 +95,6 @@ self._prometheus_scraping = MetricsEndpointProvider( ) ``` - - Congratulations, your charm is ready to be integrated with Prometheus! ## Integrate with Loki @@ -111,13 +103,9 @@ Follow the steps below to make your charm capable of integrating with the existi ### Fetch the Loki interface libraries - - Ensure you're in your Multipass Ubuntu VM, in your charm folder. -Then, satisfy the interface library requirements of the Loki charm by fetching the {ref}``loki_push_api` <7814md>` library: +Then, satisfy the interface library requirements of the Loki charm by fetching the [loki_push_api](https://charmhub.io/loki-k8s/libraries/loki_push_api) library: ```text ubuntu@charm-dev:~/fastapi-demo$ charmcraft fetch-lib charms.loki_k8s.v0.loki_push_api @@ -133,15 +121,18 @@ lib │ └── loki_push_api.py ``` -Note: The `loki_push_api` library also depends on the [`juju_topology`](https://charmhub.io/observability-libs/libraries/juju_topology) library, but you have already fetched it above for Prometheus. +```{note} +The `loki_push_api` library also depends on the [`juju_topology`](https://charmhub.io/observability-libs/libraries/juju_topology) library, but you have already fetched it above for Prometheus. +``` -Note: When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.loki_k8s.v0.loki_push_api`. +```{note} +When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.loki_k8s.v0.loki_push_api`. +``` ### Define the Loki relation interface In your `charmcraft.yaml` file, beneath your existing `requires` endpoint, add another `requires` endpoint with relation name `log-proxy` and interface name `loki_push_api`. 
This declares that your charm can optionally make use of services from other charms over the `loki_push_api` interface. In short, that your charm is open to integrations with, for example, the official Loki charm. (Note: `log-proxy` is the default relation name recommended by the `loki_push_api` interface library.) - ```yaml requires: database: @@ -152,7 +143,6 @@ requires: limit: 1 ``` - ## Import the Loki interface libraries and set up the Loki API In your `src/charm.py` file, do the following: @@ -163,7 +153,7 @@ First, import the `loki_push_api` lib: from charms.loki_k8s.v0.loki_push_api import LogProxyConsumer ``` -Then, in your charm's `__init__` method, initialise the `LogProxyConsumer` instance with the defined log files, as shown below. The `log-proxy` relation name comes from the `charmcraft.yaml` file and the`demo_server.log` file is the file where the application dumps logs. Overall this code ensures that your application can push logs to Loki (or any other charms that implement the `loki_push_api`). +Then, in your charm's `__init__` method, initialise the `LogProxyConsumer` instance with the defined log files, as shown below. The `log-proxy` relation name comes from the `charmcraft.yaml` file and the`demo_server.log` file is the file where the application writes logs. Overall this code ensures that your application can push logs to Loki (or any other charms that implement the `loki_push_api`). ```python self._logging = LogProxyConsumer( @@ -181,7 +171,7 @@ Follow the steps below to make your charm capable of integrating with the existi Ensure you're in your Multipass Ubuntu VM, in your charm folder. -Then, satisfy the interface requirement of the Grafana charm by fetching the [grafana_dashboard](https://charmhub.io/grafana-k8s/libraries/grafana_dashboard) library: +Then, satisfy the interface requirement of the Grafana charm by fetching the [grafana_dashboard](https://charmhub.io/grafana-k8s/libraries/grafana_dashboard) library: ```text ubuntu@charm-dev:~/fastapi-demo$ charmcraft fetch-lib charms.grafana_k8s.v0.grafana_dashboard @@ -197,13 +187,17 @@ lib │ └── grafana_dashboard.py ``` -Note: When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.grafana_k8s.v0.grafana_dashboard`. +```{note} +The `grafana_dashboard` library also depends on the [`juju_topology`](https://charmhub.io/observability-libs/libraries/juju_topology) library, but you have already fetched it above for Prometheus. +``` -Note: The `grafana_dashboard` library also depends on the [`juju_topology`](https://charmhub.io/observability-libs/libraries/juju_topology) library, but you have already fetched it above for Prometheus. +```{note} +When you rebuild your charm with `charmcraft pack`, Charmcraft will copy the contents of the top `lib` directory to the project root. Thus, to import this library in your code, use just `charms.grafana_k8s.v0.grafana_dashboard`. +``` ### Define the Grafana relation interface -In your `charmcraft.yaml` file, add another `provides` endpoint with relation name `grafana-dashboard` and interface name `grafana_dashboard`, as below. This declares that your charm can offer services to other charms over the `grafana-dashboard` interface. In short, that your charm is open to integrations with, for example, the official Grafana charm. (Note: Here `grafana-dashboard` endpoint is the default relation name recommended by the `grafana_dashboard` library.) 
+In your `charmcraft.yaml` file, add another `provides` endpoint with relation name `grafana-dashboard` and interface name `grafana_dashboard`, as below. This declares that your charm can offer services to other charms over the `grafana-dashboard` interface. In short, that your charm is open to integrations with, for example, the official Grafana charm. (Note: Here `grafana-dashboard` endpoint is the default relation name recommended by the `grafana_dashboard` library.) ```yaml provides: @@ -230,8 +224,14 @@ Now, in your charm's `__init__` method, initialise the `GrafanaDashboardProvider self._grafana_dashboards = GrafanaDashboardProvider(self, relation_name="grafana-dashboard") ``` + + Now, in your `src` directory, create a subdirectory called `grafana_dashboards` and, in this directory, create a file called `FastAPI-Monitoring.json.tmpl` with the following content: -[FastAPI-Monitoring.json.tmpl|attachment](https://discourse.charmhub.io/uploads/short-url/6hSGcAA6n20qyStzLFeekgkzqCc.tmpl) (7.7 KB) . Once your charm has been integrated with Grafana, the `GrafanaDashboardProvider` you defined just before will take this file as well as any other files defined in this directory and put them into a Grafana files tree to be read by Grafana. +[FastAPI-Monitoring.json.tmpl|attachment](https://github.com/canonical/juju-sdk-tutorial-k8s/raw/refs/heads/05_cos_integration/src/grafana_dashboards/FastAPI-Monitoring.json.tmpl). Once your charm has been integrated with Grafana, the `GrafanaDashboardProvider` you defined just before will take this file as well as any other files defined in this directory and put them into a Grafana files tree to be read by Grafana. ```{important} @@ -269,7 +269,6 @@ juju refresh \ Next, test your charm's ability to integrate with Prometheus, Loki, and Grafana by following the steps below. - ### Deploy COS Lite Create a Juju model called `cos-lite` and, to this model, deploy the Canonical Observability Stack bundle [`cos-lite`](https://charmhub.io/topics/canonical-observability-stack), as below. This will deploy all the COS applications (`alertmanager`, `catalogue`, `grafana`, `loki`, `prometheus`, `traefik`), already suitably integrated with one another. Note that these also include the applications that you've been working to make your charm integrate with -- Prometheus, Loki, and Grafana. @@ -281,12 +280,9 @@ juju deploy cos-lite --trust ```{important} - **Why put COS Lite in a separate model?** Because (1) it is always a good idea to separate logically unrelated applications in different models and (2) this way you can observe applications across all your models. PS In a production-grade scenario you would actually even want to put your COS Lite in a separate *cloud* (i.e., Kubernetes cluster). This is recommended, for example, to ensure proper hardware resource allocation. - ``` - ### Expose the application integration endpoints Once all the COS Lite applications are deployed and settled down (you can monitor this by using `juju status --watch 2s`), expose the integration points you are interested in for your charm -- `loki:logging`, `grafana-dashboard`, and `metrics-endpoint` -- as below. @@ -330,10 +326,8 @@ juju integrate demo-api-charm admin/cos-lite.prometheus ```{important} The power of Grafana lies in the way it allows you to visualise metrics on a dashboard. Thus, in the general case you will want to open the Grafana Web UI in a web browser. However, you are now working in a headless VM that does not have any user interface. 
This means that you will need to open Grafana in a web browser on your host machine. To do this, you will need to add IP routes to the Kubernetes (MicroK8s) network inside of our VM. You can skip this step if you have decided to follow this tutorial directly on your host machine. - ``` - First, run: ```text @@ -360,7 +354,6 @@ From this output, from the `Address` column, retrieve the IP address for each ap ```{caution} Do not mix up Apps and Units -- Units represent Kubernetes pods while Apps represent Kubernetes Services. Note: The charm should be programmed to support Services. - ``` Now open a terminal on your host machine and run: @@ -448,24 +441,19 @@ In a while you should see the following data appearing on the dashboard: 2. Percent of failed requests per 2 minutes time frame. In your case this will be a ratio of all the requests and the requests submitted to the `/error` path (i.e., the ones that cause the Internal Server Error). 3. Logs from your application that were collected by Loki and forwarded to Grafana. Here you can see some INFO level logs and ERROR logs with traceback from Python when you were calling the `/error` path. - ![Observe your charm with COS Lite](../../resources/observe_your_charm_with_cos_lite.png) - ```{important} If you are interested in the Prometheus metrics produced by your application that were used to build these dashboards you can run following command in your VM: `curl :8000/metrics` Also, you can reach Prometheus in your web browser (similar to Grafana) at `http://:9090/graph` . - ``` - ## Review the final code +For the full code see: [05_cos_integration](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/05_cos_integration) -For the full code see: [07_cos_integration](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/07_cos_integration) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/06_create_actions...07_cos_integration) +For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/04_create_actions...05_cos_integration) > **See next: {ref}`Write units tests for your charm `** diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/open-a-kubernetes-port-in-your-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/open-a-kubernetes-port-in-your-charm.md deleted file mode 100644 index a0588e693..000000000 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/open-a-kubernetes-port-in-your-charm.md +++ /dev/null @@ -1,359 +0,0 @@ -(open-a-kubernetes-port-in-your-charm)= -# Open a Kubernetes port in your charm - -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Open a Kubernetes port in your charm -> -> **See previous: {ref}`Write integration tests for your charm `** - -````{important} - -This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: - -```bash -git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git -cd juju-sdk-tutorial-k8s -git checkout 10_integration_testing -git checkout -b 11_open_port_k8s_service -``` - -```` - -A deployed charm should be consistently accessible via a stable URL on a cloud. - -However, our charm is currently accessible only at the IP pod address and, if the pod gets recycled, the IP address will change as well. 
- -> See earlier chapter: {ref}`Make your charm configurable ` - -In Kubernetes you can make a service permanently reachable under a stable URL on the cluster by exposing a service port via the `ClusterIP`. In Juju 3.1+, you can take advantage of this by using the `Unit.set_ports()` method. - -> Read more: [ClusterIP](https://kubernetes.io/docs/concepts/services-networking/service/#type-clusterip) - -In this chapter of the tutorial you will extend the existing `server-port` configuration option to use Juju `open-port` functionality to expose a Kubernetes service port. Building on your experience from the previous testing chapters, you will also write tests to check that the new feature you've added works as intended. - - -## Add a Kubernetes service port to your charm - -In your `src/charm.py` file, do all of the following: - -In the `_on_config_changed` method, add a new method: - -```python -self._handle_ports() -``` - -Then, in the definition of the `FastAPIDemoCharm` class, define the method: - -```python -def _handle_ports(self) -> None: - port = cast(int, self.config['server-port']) - self.unit.set_ports(port) -``` - -> See more: [`ops.Unit.set_ports`](https://ops.readthedocs.io/en/latest/#ops.Unit.set_ports) - - -## Test the new feature - - -### Write a unit test - - -```{important} - -**If you've skipped straight to this chapter:**
Note that it builds on the earlier unit testing chapter. To catch up, see: {ref}`Write unit tests for your charm `. - -``` - -Let's write a unit test to verify that the port is opened. Open `tests/unit/test_charm.py` and add the following test function to the file. - -```python -@pytest.mark.parametrize( - 'port,expected_status', - [ - (22, ops.BlockedStatus('Invalid port number, 22 is reserved for SSH')), - (1234, ops.BlockedStatus('Waiting for database relation')), - ], -) -def test_port_configuration( - monkeypatch, harness: ops.testing.Harness[FastAPIDemoCharm], port, expected_status -): - # Given - monkeypatch.setattr(FastAPIDemoCharm, 'version', '1.0.1') - harness.container_pebble_ready('demo-server') - # When - harness.update_config({'server-port': port}) - harness.evaluate_status() - currently_opened_ports = harness.model.unit.opened_ports() - port_numbers = {port.port for port in currently_opened_ports} - server_port_config = harness.model.config.get('server-port') - unit_status = harness.model.unit.status - # Then - if port == 22: - assert server_port_config not in port_numbers - else: - assert server_port_config in port_numbers - assert unit_status == expected_status -``` - -```{important} - -**Tests parametrisation**
Note that we used the `parametrize` decorator to run a single test against multiple sets of arguments. Adding a new test case, like making sure that the error message is informative given a negative or too big port number, would be as simple as extending the list in the decorator call. -See [How to parametrize fixtures and test functions](https://docs.pytest.org/en/8.0.x/how-to/parametrize.html). - -``` - -Time to run the tests! - -In your Multipass Ubuntu VM shell, run the unit test: - -``` -ubuntu@charm-dev:~/fastapi-demo$ tox -re unit -``` - -If successful, you should get an output similar to the one below: - -```bash -$ tox -re unit -unit: remove tox env folder /home/ubuntu/fastapi-demo/.tox/unit -unit: install_deps> python -I -m pip install cosl 'coverage[toml]' pytest -r /home/ubuntu/fastapi-demo/requirements.txt -unit: commands[0]> coverage run --source=/home/ubuntu/fastapi-demo/src -m pytest --tb native -v -s /home/ubuntu/fastapi-demo/tests/unit -========================================= test session starts ========================================= -platform linux -- Python 3.10.13, pytest-8.0.2, pluggy-1.4.0-- /home/ubuntu/fastapi-demo/.tox/unit/bin/python -cachedir: .tox/unit/.pytest_cache -rootdir: /home/ubuntu/fastapi-demo -collected 3 items - -tests/unit/test_charm.py::test_pebble_layer PASSED -tests/unit/test_charm.py::test_port_configuration[22-expected_status0] PASSED -tests/unit/test_charm.py::test_port_configuration[1234-expected_status1] PASSED - -========================================== 3 passed in 0.21s ========================================== -unit: commands[1]> coverage report -Name Stmts Miss Cover ----------------------------------- -src/charm.py 122 43 65% ----------------------------------- -TOTAL 122 43 65% - unit: OK (6.00=setup[5.43]+cmd[0.49,0.09] seconds) - congratulations :) (6.04 seconds) -``` - -### Write a scenario test - -Let's also write a scenario test! Add this test to your `tests/scenario/test_charm.py` file: - -```python -def test_open_port(monkeypatch: MonkeyPatch): - monkeypatch.setattr('charm.LogProxyConsumer', Mock()) - monkeypatch.setattr('charm.MetricsEndpointProvider', Mock()) - monkeypatch.setattr('charm.GrafanaDashboardProvider', Mock()) - - # Use scenario.Context to declare what charm we are testing. 
- ctx = scenario.Context( - FastAPIDemoCharm, - meta={ - 'name': 'demo-api-charm', - 'containers': {'demo-server': {}}, - 'peers': {'fastapi-peer': {'interface': 'fastapi_demo_peers'}}, - 'requires': { - 'database': { - 'interface': 'postgresql_client', - } - }, - }, - config={ - 'options': { - 'server-port': { - 'default': 8000, - } - } - }, - actions={ - 'get-db-info': {'params': {'show-password': {'default': False, 'type': 'boolean'}}} - }, - ) - state_in = scenario.State( - leader=True, - relations=[ - scenario.Relation( - endpoint='database', - interface='postgresql_client', - remote_app_name='postgresql-k8s', - local_unit_data={}, - remote_app_data={ - 'endpoints': '127.0.0.1:5432', - 'username': 'foo', - 'password': 'bar', - }, - ), - scenario.PeerRelation( - endpoint='fastapi-peer', - peers_data={'unit_stats': {'started_counter': '0'}}, - ), - ], - containers=[ - scenario.Container(name='demo-server', can_connect=True), - ], - ) - state1 = ctx.run('config_changed', state_in) - assert len(state1.opened_ports) == 1 - assert state1.opened_ports[0].port == 8000 - assert state1.opened_ports[0].protocol == 'tcp' -``` - -In your Multipass Ubuntu VM shell, run your scenario test as below: - -```bash -ubuntu@charm-dev:~/fastapi-demo$ tox -re scenario -``` - -If successful, this should yield: - -```bash -scenario: remove tox env folder /home/ubuntu/fastapi-demo/.tox/scenario -scenario: install_deps> python -I -m pip install cosl 'coverage[toml]' ops-scenario pytest -r /home/ubuntu/fastapi-demo/requirements.txt -scenario: commands[0]> coverage run --source=/home/ubuntu/fastapi-demo/src -m pytest --tb native -v -s /home/ubuntu/fastapi-demo/tests/scenario -========================================= test session starts ========================================= -platform linux -- Python 3.10.13, pytest-8.0.2, pluggy-1.4.0 -- /home/ubuntu/fastapi-demo/.tox/scenario/bin/python -cachedir: .tox/scenario/.pytest_cache -rootdir: /home/ubuntu/fastapi-demo -collected 2 items - -tests/scenario/test_charm.py::test_get_db_info_action PASSED -tests/scenario/test_charm.py::test_open_port PASSED - -========================================== 2 passed in 0.31s ========================================== -scenario: commands[1]> coverage report -Name Stmts Miss Cover ----------------------------------- -src/charm.py 122 22 82% ----------------------------------- -TOTAL 122 22 82% - scenario: OK (6.66=setup[5.98]+cmd[0.59,0.09] seconds) - congratulations :) (6.69 seconds) -``` - -### Write an integration test - -In your `tests/integration` directory, create a `helpers.py` file with the following contents: - -```python -import socket -from pytest_operator.plugin import OpsTest - - -async def get_address(ops_test: OpsTest, app_name: str, unit_num: int = 0) -> str: - """Get the address for a the k8s service for an app.""" - status = await ops_test.model.get_status() - k8s_service_address = status['applications'][app_name].public_address - return k8s_service_address - - -def is_port_open(host: str, port: int) -> bool: - """check if a port is opened in a particular host""" - try: - with socket.create_connection((host, port), timeout=5): - return True # If connection succeeds, the port is open - except (ConnectionRefusedError, TimeoutError): - return False # If connection fails, the port is closed -``` - -In your existing `tests/integration/test_charm.py` file, import the methods defined in `helpers.py`: - -```python -from helpers import is_port_open, get_address -``` - -Now add the test case that will cover open ports: - 
-```python -@pytest.mark.abort_on_fail -async def test_open_ports(ops_test: OpsTest): - """Verify that setting the server-port in charm's config correctly adjust k8s service - - Assert blocked status in case of port 22 and active status for others - """ - app = ops_test.model.applications.get('demo-api-charm') - - # Get the k8s service address of the app - address = await get_address(ops_test=ops_test, app_name=APP_NAME) - # Validate that initial port is opened - assert is_port_open(address, 8000) - - # Set Port to 22 and validate app going to blocked status with port not opened - await app.set_config({'server-port': '22'}) - (await ops_test.model.wait_for_idle(apps=[APP_NAME], status='blocked', timeout=120),) - assert not is_port_open(address, 22) - - # Set Port to 6789 "Dummy port" and validate app going to active status with port opened - await app.set_config({'server-port': '6789'}) - (await ops_test.model.wait_for_idle(apps=[APP_NAME], status='active', timeout=120),) - assert is_port_open(address, 6789) -``` -In your Multipass Ubuntu VM shell, run the test as below: - -```bash -ubuntu@charm-dev:~/fastapi-demo$ tox -re integration -``` - -This test will take longer as a new model needs to be created. If successful, it should yield something similar to the output below: - -```bash -==================================== 3 passed in 234.15s (0:03:54) ==================================== - integration: OK (254.77=setup[19.55]+cmd[235.22] seconds) - congratulations :) (254.80 seconds) -``` - -## Validate your charm - -Congratulations, you've added a new feature to your charm, and also written tests to ensure that it will work properly. Time to give this feature a test drive! - -In your Multipass VM, repack and refresh your charm as below: - -```bash -ubuntu@charm-dev:~/fastapi-demo$ charmcraft pack -juju refresh \ - --path="./demo-api-charm_ubuntu-22.04-amd64.charm" \ - demo-api-charm --force-units --resource \ - demo-server-image=ghcr.io/canonical/api_demo_server:1.0.1 -``` - -Watch your charm deployment status change until deployment settles down: - -``` -juju status --watch 1s -``` - -Use `kubectl` to list the available services and verify that `demo-api-charm` service exposes the `ClusterIP` on the expected port: - - -```bash -$ kubectl get services -n charm-model -NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE -modeloperator ClusterIP 10.152.183.231 17071/TCP 34m -demo-api-charm-endpoints ClusterIP None 19m -demo-api-charm ClusterIP 10.152.183.92 65535/TCP,8000/TCP 19m -postgresql-k8s-endpoints ClusterIP None 18m -postgresql-k8s ClusterIP 10.152.183.162 5432/TCP,8008/TCP 18m -postgresql-k8s-primary ClusterIP 10.152.183.109 8008/TCP,5432/TCP 18m -postgresql-k8s-replicas ClusterIP 10.152.183.29 8008/TCP,5432/TCP 18m -patroni-postgresql-k8s-config ClusterIP None 17m -``` - -Finally, `curl` the `ClusterIP` to verify that the `version` endpoint responds on the expected port: - -```bash -$ curl 10.152.183.92:8000/version -{"version":"1.0.1"} -``` - -Congratulations, your service now exposes an external port that is independent of any pod / node restarts! 
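For reference, the sketch below condenses the kind of `config-changed` handler that this validation exercises: it reads `server-port` from the charm config, refuses port 22 (which is reserved for SSH access to the unit -- hence the blocked status asserted in the integration test above), and declares the port to Juju so that the Kubernetes Service gets updated. This is a sketch only, assuming ops 2.4 or later (which provides `Unit.set_ports`) and the `_update_layer_and_restart` helper from earlier chapters; the exact code in the `11_open_port_k8s_service` branch may differ in details such as the status message.

```python
# Sketch only: a condensed version of the config-changed handler that the
# validation above exercises. It lives in src/charm.py, inside the
# FastAPIDemoCharm class; the tutorial branch may differ in detail.
def _on_config_changed(self, event: ops.ConfigChangedEvent) -> None:
    port = int(self.config['server-port'])  # defined in the charm's config options
    if port == 22:
        # Port 22 is reserved for SSH access to the unit, so refuse it;
        # this is why the integration test expects a blocked status for 22.
        self.unit.status = ops.BlockedStatus('invalid port number, 22 is reserved for SSH')
        return
    # Declare the open port to Juju; on Kubernetes this updates the
    # application Service that `kubectl get services` listed above.
    self.unit.set_ports(port)
    self._update_layer_and_restart(event)
```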
- -## Review the final code - -For the full code see: [11_open_port_k8s_service](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/11_open_port_k8s_service) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/10_integration_testing...11_open_port_k8s_service) - -> **See next: {ref}`Publish your charm on Charmhub `** - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/preserve-your-charms-data.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/preserve-your-charms-data.md deleted file mode 100644 index 49dc7a924..000000000 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/preserve-your-charms-data.md +++ /dev/null @@ -1,179 +0,0 @@ -(preserve-your-charms-data)= -# Preserve your charm's data - -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Preserve your charm's data -> -> **See previous: {ref}`Integrate your charm with PostgreSQL `** - - -````{important} - -This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: - -```bash -git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git -cd juju-sdk-tutorial-k8s -git checkout 04_integrate_with_psql -git checkout -b 05_preserve_charm_data -``` - -```` - - -Charms are stateless applications. That is, they are reinitialised for every event and do not retain information from previous executions. This means that, if an accident occurs and the Kubernetes pod dies, you will also lose any information you may have collected. - -In many cases that is not a problem. However, there are situations where it may be necessary to maintain information from previous runs and to retain the state of the application. As a charm author you should thus know how to preserve state. - -There are a few strategies you can adopt here: - -First, you can use an Ops construct called `Stored State`. With this strategy you can store data on the local unit (at least, so long as your `main` function doesn't set `use_juju_for_storage` to `True`). However, if your Kubernetes pod dies, your unit also dies, and thus also the data. For this reason this strategy is generally not recommended. - -> Read more: [`ops.StoredState`](https://ops.readthedocs.io/en/latest/#ops.StoredState), {ref}`StoredState: Uses, Limitations ` - -Second, you can make use of the Juju notion of 'peer relations' and 'data bags' and set up a peer relation data bag. This will help you store the information in the Juju's database backend. - - - - -Third, when you have confidential data, you can use Juju secrets (from Juju 3.1 onwards). - - - - -In this chapter we will adopt the second strategy, that is, we will store charm data in a peer relation databag. (We will explore the third strategy in a different scenario in the next chapter.) We will illustrate this strategy with an artificial example where we save the counter of how many times the application pod has been restarted. - -## Define a peer relation - -The first thing you need to do is define a peer relation. 
Update the `charmcraft.yaml` file to add a `peers` block before the `requires` block, as below (where `fastapi-peer` is a custom name for the peer relation and `fastapi_demo_peers` is a custom name for the peer relation interface): - -```yaml -peers: - fastapi-peer: - interface: fastapi_demo_peers -``` - - - -## Set and get data from the peer relation databag - -Now, you need a way to set and get data from the peer relation databag. For that you need to update the `src/charm.py` file as follows: - -First, define some helper methods that will allow you to read and write from the peer relation databag: - -```python -@property -def peers(self) -> Optional[ops.Relation]: - """Fetch the peer relation.""" - return self.model.get_relation(PEER_NAME) - -def set_peer_data(self, key: str, data: JSONData) -> None: - """Put information into the peer data bucket instead of `StoredState`.""" - peers = cast(ops.Relation, self.peers) - peers.data[self.app][key] = json.dumps(data) - -def get_peer_data(self, key: str) -> Dict[str, JSONData]: - """Retrieve information from the peer data bucket instead of `StoredState`.""" - if not self.peers: - return {} - data = self.peers.data[self.app].get(key, '') - if not data: - return {} - return json.loads(data) -``` - -This block uses the built-in `json` module of Python, so you need to import that as well. You also need to define a global variable called `PEER_NAME = "fastapi-peer"`, to match the name of the peer relation defined in `charmcraft.yaml` file. We'll also need to import some additional types from `typing`, and define a type alias for JSON data. Update your imports to include the following: - -```python -import json -from typing import Dict, List, Optional, Union, cast -``` -Then define our global and type alias as follows: - -```python -PEER_NAME = 'fastapi-peer' - -JSONData = Union[ - Dict[str, 'JSONData'], - List['JSONData'], - str, - int, - float, - bool, - None, -] -``` - -Next, you need to add a method that updates a counter for the number of times a Kubernetes pod has been started. Let's make it retrieve the current count of pod starts from the 'unit_stats' peer relation data, increment the count, and then update the 'unit_stats' data with the new count, as below: - -```python -def _count(self, event: ops.StartEvent) -> None: - """This function updates a counter for the number of times a K8s pod has been started. - - It retrieves the current count of pod starts from the 'unit_stats' peer relation data, - increments the count, and then updates the 'unit_stats' data with the new count. - """ - unit_stats = self.get_peer_data('unit_stats') - counter = cast(str, unit_stats.get('started_counter', '0')) - self.set_peer_data('unit_stats', {'started_counter': int(counter) + 1}) -``` - -Finally, you need to call this method and update the peer relation data every time the pod is started. 
For that, define another event observer in the `__init__` method, as below: - -```python -framework.observe(self.on.start, self._count) -``` - -## Validate your charm - -First, repack and refresh your charm: - -```bash -charmcraft pack -juju refresh \ - --path="./demo-api-charm_ubuntu-22.04-amd64.charm" \ - demo-api-charm --force-units --resource \ - demo-server-image=ghcr.io/canonical/api_demo_server:1.0.1 -``` - - -Next, run `juju status` to make sure the application is refreshed and started, then investigate the relation data as below: - -```bash -juju show-unit demo-api-charm/0 -``` - -The output should include the following lines related to our peer relation: - -```bash - relation-info: - - relation-id: 25 - endpoint: fastapi-peer - related-endpoint: fastapi-peer - application-data: - unit_stats: '{"started_counter": 1}' -``` - -Now, simulate a Kubernetes pod crash by deleting the charm pod: - -```bash -microk8s kubectl --namespace=charm-model delete pod demo-api-charm-0 -``` - -Finally, check the peer relation again. You should see that the `started_counter` has been incremented by one. Good job, you've preserved your application data across restarts! - -## Review the final code - - -For the full code see: [05_preserve_charm_data](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/05_preserve_charm_data) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/04_integrate_with_psql...05_preserve_charm_data) - - -> **See next: {ref}`Expose your charm's operational tasks via actions `** - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/publish-your-charm-on-charmhub.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/publish-your-charm-on-charmhub.md index 7d49aafe6..12148250b 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/publish-your-charm-on-charmhub.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/publish-your-charm-on-charmhub.md @@ -1,7 +1,7 @@ (publish-your-charm-on-charmhub)= # Publish your charm on Charmhub -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Pushing your charm to charmhub +> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Pushing your charm to Charmhub > > **See previous: {ref}`Open a Kubernetes port in your charm `** @@ -9,7 +9,7 @@ This document is part of a series, and we recommend you follow it in sequence. 
However, you can also jump straight in by checking out the code from the previous branches: -```bash +```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s git checkout 11_open_port_k8s_service diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-integration-tests-for-your-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-integration-tests-for-your-charm.md index 0540e77c7..8e0d99adc 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-integration-tests-for-your-charm.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-integration-tests-for-your-charm.md @@ -3,17 +3,17 @@ > {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Write integration tests for your charm > -> **See previous: {ref}`Write scenario tests for your charm `** +> **See previous: {ref}`Write unit tests for your charm `** ````{important} This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: -```bash +```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s -git checkout 09_scenario_test -git checkout -b 10_integration_testing +git checkout 06_unit_testing +git checkout -b 07_integration_testing ``` ```` @@ -50,16 +50,15 @@ commands = ## Prepare your test directory Create a `tests/integration` directory: -```bash -mkdir ~/fastapi-demo/tests/integration +```text +mkdir ~/fastapi-demo/tests/integration ``` ## Write and run a pack-and-deploy integration test Let's begin with the simplest possible integration test, a [smoke test](https://en.wikipedia.org/wiki/Smoke_testing_(software)). This test will build and deploy the charm and verify that the installation hooks finish without any error. - In your `tests/integration` directory, create a file `test_charm.py` and add the following test case: ```python @@ -130,33 +129,28 @@ async def test_database_integration(ops_test: OpsTest): apps=[APP_NAME], status='active', raise_on_blocked=False, timeout=120 ) ``` - ```{important} But if you run the one and then the other (as separate `pytest ...` invocations, then two separate models will be created unless you pass `--model=some-existing-model` to inform pytest-operator to use a model you provide. - ``` In your Multipass Ubuntu VM, run the test again: - -```bash +```text ubuntu@charm-dev:~/fastapi-demo$ tox -e integration - ``` The test may again take some time to run. ```{tip} -**Pro tip:** To make things faster, use the `--model=` to inform `pytest-operator` to use the model it has created for the first test. Otherwise, charmers often have a way to cache their pack or deploy results; an example is https://github.com/canonical/spellbook . - +**Pro tip:** To make things faster, use the `--model=` to inform `pytest-operator` to use the model it has created for the first test. Otherwise, charmers often have a way to cache their pack or deploy results; an example is https://github.com/canonical/spellbook . ``` When it's done, the output should show two passing tests: -```bash +```text ... 
demo-api-charm/0 [idle] waiting: Waiting for database relation INFO juju.model:model.py:2759 Waiting for model: @@ -195,10 +189,8 @@ Congratulations, with this integration test you have verified that your charms r ## Review the final code -For the full code see: [10_integration_testing](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/09_scenario_test) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/09_scenario_test...10_integration_testing) - -> **See next: {ref}`Open a Kubernetes port in your charm `** +For the full code see: [07_integration_testing](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/07_integration_testing) +For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/06_unit_testing...07_integration_testing) +> **See next: {ref}`Publish your charm on Charmhub `** diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-scenario-tests-for-your-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-scenario-tests-for-your-charm.md deleted file mode 100644 index c401c4e16..000000000 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-scenario-tests-for-your-charm.md +++ /dev/null @@ -1,204 +0,0 @@ -(write-scenario-tests-for-your-charm)= -# Write scenario tests for your charm - -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Write scenario tests for your charm -> -> **See previous: {ref}`Write unit tests for your charm `** - -````{important} - -This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: - -``` -git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git -cd juju-sdk-tutorial-k8s -git checkout 08_unit_testing -git checkout -b 09_scenario_testing -``` - -```` - -In the previous chapter we checked the basic functionality of our charm by writing unit tests. - -However, there is one more type of test to cover, namely: state transition tests. - -In the charming world the current recommendation is to write state transition tests with the 'scenario' model popularised by the {ref}``ops-scenario` ` library. - -```{note} - Scenario is a state-transition testing SDK for operator framework charms. -``` - -In this chapter you will write a scenario test to check that the `get_db_info` action that you defined in an earlier chapter behaves as expected. - - -## Prepare your test environment - -Install `ops-scenario`: - -```bash -pip install ops-scenario -``` -In your project root's existing `tox.ini` file, add the following: - -``` -... - -[testenv:scenario] -description = Run scenario tests -deps = - pytest - cosl - ops-scenario ~= 7.0 - coverage[toml] - -r {tox_root}/requirements.txt -commands = - coverage run --source={[vars]src_path} \ - -m pytest \ - --tb native \ - -v \ - -s \ - {posargs} \ - {[vars]tests_path}/scenario - coverage report -``` - -And adjust the `env_list` so that the Scenario tests will run with a plain `tox` command: - -``` -env_list = unit, scenario -``` - -## Prepare your test directory - -By convention, scenario tests are kept in a separate directory, `tests/scenario`. 
Create it as below: - -``` -mkdir -p tests/scenario -cd tests/scenario -``` - - -## Write your scenario test - -In your `tests/scenario` directory, create a new file `test_charm.py` and add the test below. This test will check the behaviour of the `get_db_info` action that you set up in a previous chapter. It will first set up the test context by setting the appropriate metadata, then define the input state, then run the action and, finally, check if the results match the expected values. - -```python -from unittest.mock import Mock - -import scenario -from pytest import MonkeyPatch - -from charm import FastAPIDemoCharm - - -def test_get_db_info_action(monkeypatch: MonkeyPatch): - monkeypatch.setattr('charm.LogProxyConsumer', Mock()) - monkeypatch.setattr('charm.MetricsEndpointProvider', Mock()) - monkeypatch.setattr('charm.GrafanaDashboardProvider', Mock()) - - # Use scenario.Context to declare what charm we are testing. - # Note that Scenario will automatically pick up the metadata from - # your charmcraft.yaml file, so you typically could just do - # `ctx = scenario.Context(FastAPIDemoCharm)` here, but the full - # version is included here as an example. - ctx = scenario.Context( - FastAPIDemoCharm, - meta={ - 'name': 'demo-api-charm', - 'containers': {'demo-server': {}}, - 'peers': {'fastapi-peer': {'interface': 'fastapi_demo_peers'}}, - 'requires': { - 'database': { - 'interface': 'postgresql_client', - } - }, - }, - config={ - 'options': { - 'server-port': { - 'default': 8000, - } - } - }, - actions={ - 'get-db-info': {'params': {'show-password': {'default': False, 'type': 'boolean'}}} - }, - ) - - # Declare the input state. - state_in = scenario.State( - leader=True, - relations={ - scenario.Relation( - endpoint='database', - interface='postgresql_client', - remote_app_name='postgresql-k8s', - local_unit_data={}, - remote_app_data={ - 'endpoints': '127.0.0.1:5432', - 'username': 'foo', - 'password': 'bar', - }, - ), - }, - containers={ - scenario.Container('demo-server', can_connect=True), - }, - ) - - # Run the action with the defined state and collect the output. 
- ctx.run(ctx.on.action('get-db-info', params={'show-password': True}), state_in) - - assert ctx.action_results == { - 'db-host': '127.0.0.1', - 'db-port': '5432', - 'db-username': 'foo', - 'db-password': 'bar', - } -``` - - -## Run the test - -In your Multipass Ubuntu VM shell, run your scenario test as below: - -```bash -ubuntu@charm-dev:~/juju-sdk-tutorial-k8s$ tox -e scenario -``` - -You should get an output similar to the one below: - -```bash -scenario: commands[0]> coverage run --source=/home/tameyer/code/juju-sdk-tutorial-k8s/src -m pytest --tb native -v -s /home/tameyer/code/juju-sdk-tutorial-k8s/tests/scenario -======================================= test session starts ======================================== -platform linux -- Python 3.11.9, pytest-8.3.3, pluggy-1.5.0 -- /home/tameyer/code/juju-sdk-tutorial-k8s/.tox/scenario/bin/python -cachedir: .tox/scenario/.pytest_cache -rootdir: /home/tameyer/code/juju-sdk-tutorial-k8s -plugins: anyio-4.6.0 -collected 1 item - -tests/scenario/test_charm.py::test_get_db_info_action PASSED - -======================================== 1 passed in 0.19s ========================================= -scenario: commands[1]> coverage report -Name Stmts Miss Cover ----------------------------------- -src/charm.py 129 57 56% ----------------------------------- -TOTAL 129 57 56% - scenario: OK (6.89=setup[6.39]+cmd[0.44,0.06] seconds) - congratulations :) (6.94 seconds) -``` - -Congratulations, you have written your first scenario test! - -## Review the final code - - -For the full code see: [09_scenario_testing](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/09_scenario_test) - -For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/08_unit_testing...09_scenario_test) - -> **See next: {ref}`Write integration tests for your charm `** - - diff --git a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-unit-tests-for-your-charm.md b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-unit-tests-for-your-charm.md index 2f9ff9cc4..968c61e3c 100644 --- a/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-unit-tests-for-your-charm.md +++ b/docs/tutorial/from-zero-to-hero-write-your-first-kubernetes-charm/write-unit-tests-for-your-charm.md @@ -1,7 +1,7 @@ (write-unit-tests-for-your-charm)= # Write unit tests for your charm -> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Write a unit test for your charm +> {ref}`From Zero to Hero: Write your first Kubernetes charm ` > Write unit tests for your charm > > **See previous: {ref}`Observe your charm with COS Lite `** @@ -9,23 +9,32 @@ This document is part of a series, and we recommend you follow it in sequence. However, you can also jump straight in by checking out the code from the previous branches: -```bash +```text git clone https://github.com/canonical/juju-sdk-tutorial-k8s.git cd juju-sdk-tutorial-k8s -git checkout 07_cos_integration -git checkout -b 08_unit_testing +git checkout 05_cos_integration +git checkout -b 06_unit_testing ``` ```` When you're writing a charm, you will want to ensure that it will behave reliably as intended. -For example, that the various components -- relation data, pebble services, or configuration files -- all behave as expected in response to an event. +For example, that the various components -- relation data, Pebble services, or configuration files -- all behave as expected in response to an event. 
-You can ensure all this by writing a rich battery of units tests. In the context of a charm we recommended using [`pytest`](https://pytest.org/) (but [`unittest`](https://docs.python.org/3/library/unittest.html) can also be used) and especially the operator framework's built-in testing library -- [`ops.testing.Harness`](https://ops.readthedocs.io/en/latest/harness.html#module-ops.testing). We will be using the Python testing tool [`tox`](https://tox.wiki/en/4.14.2/index.html) to automate our testing and set up our testing environment. +You can ensure all this by writing a rich battery of unit tests. In the context of a charm we recommended using [`pytest`](https://pytest.org/) (but [`unittest`](https://docs.python.org/3/library/unittest.html) can also be used) and especially `ops` built-in testing library -- [`ops.testing`](https://ops.readthedocs.io/en/latest/reference/ops-testing.html). We will be using the Python testing tool [`tox`](https://tox.wiki/en/4.14.2/index.html) to automate our testing and set up our testing environment. -In this chapter you will write a simple unit test to check that your workload container is initialised correctly. + + +In this chapter you will write a scenario test to check that the `get_db_info` action that you defined in an earlier chapter behaves as expected. ## Prepare your test environment @@ -58,6 +67,7 @@ deps = pytest cosl coverage[toml] + ops[testing] -r {tox_root}/requirements.txt commands = coverage run --source={[vars]src_path} \ @@ -71,130 +81,136 @@ commands = ``` > Read more: [`tox.ini`](https://tox.wiki/en/latest/config.html#tox-ini) - ## Prepare your test directory In your project root, create a `tests/unit` directory: -```bash +```text mkdir -p tests/unit ``` -### Write your unit test +## Write your test -In your `tests/unit` directory, create a file called `test_charm.py`. - -In this file, do all of the following: - -First, add the necessary imports: +In your `tests/unit` directory, create a new file `test_charm.py` and add the test below. This test will check the behaviour of the `get_db_info` action that you set up in a previous chapter. It will first set up the test context by setting the appropriate metadata, then define the input state, then run the action and, finally, check if the results match the expected values. ```python -import ops -import ops.testing -import pytest +from unittest.mock import Mock +from pytest import MonkeyPatch -from charm import FastAPIDemoCharm -``` - -Then, add a test [fixture](https://docs.pytest.org/en/7.1.x/how-to/fixtures.html) that sets up the testing harness and makes sure that it will be cleaned up after each test: +from ops import testing -```python -@pytest.fixture -def harness(): - harness = ops.testing.Harness(FastAPIDemoCharm) - harness.begin() - yield harness - harness.cleanup() - -``` +from charm import FastAPIDemoCharm -Finally, add a first test case as a function, as below. As you can see, this test case is used to verify that the deployment of the `fastapi-service` within the `demo-server` container is configured correctly and that the service is started and running as expected when the container is marked as `pebble-ready`. It also checks that the unit's status is set to active without any error messages. Note that we mock some methods of the charm because they do external calls that are not represented in the state of this unit test. 
-```python -def test_pebble_layer( - monkeypatch: pytest.MonkeyPatch, harness: ops.testing.Harness[FastAPIDemoCharm] -): - monkeypatch.setattr(FastAPIDemoCharm, 'version', '1.0.0') - # Expected plan after Pebble ready with default config - expected_plan = { - 'services': { - 'fastapi-service': { - 'override': 'replace', - 'summary': 'fastapi demo', - 'command': 'uvicorn api_demo_server.app:app --host=0.0.0.0 --port=8000', - 'startup': 'enabled', - # Since the environment is empty, Layer.to_dict() will not - # include it. +def test_get_db_info_action(monkeypatch: MonkeyPatch): + monkeypatch.setattr('charm.LogProxyConsumer', Mock()) + monkeypatch.setattr('charm.MetricsEndpointProvider', Mock()) + monkeypatch.setattr('charm.GrafanaDashboardProvider', Mock()) + + # Use testing.Context to declare what charm we are testing. + # Note that the test framework will automatically pick up the metadata from + # your charmcraft.yaml file, so you typically could just do + # `ctx = testing.Context(FastAPIDemoCharm)` here, but the full + # version is included here as an example. + ctx = testing.Context( + FastAPIDemoCharm, + meta={ + 'name': 'demo-api-charm', + 'containers': {'demo-server': {}}, + 'peers': {'fastapi-peer': {'interface': 'fastapi_demo_peers'}}, + 'requires': { + 'database': { + 'interface': 'postgresql_client', + } + }, + }, + config={ + 'options': { + 'server-port': { + 'default': 8000, + } } - } + }, + actions={ + 'get-db-info': {'params': {'show-password': {'default': False, 'type': 'boolean'}}} + }, + ) + + # Declare the input state. + state_in = testing.State( + leader=True, + relations={ + testing.Relation( + endpoint='database', + interface='postgresql_client', + remote_app_name='postgresql-k8s', + local_unit_data={}, + remote_app_data={ + 'endpoints': '127.0.0.1:5432', + 'username': 'foo', + 'password': 'bar', + }, + ), + }, + containers={ + testing.Container('demo-server', can_connect=True), + }, + ) + + # Run the action with the defined state and collect the output. 
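+    # `ctx.on.action(...)` builds the action event to dispatch; whatever the
+    # handler passes to `event.set_results` is captured in `ctx.action_results`.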
+ ctx.run(ctx.on.action('get-db-info', params={'show-password': True}), state_in) + + assert ctx.action_results == { + 'db-host': '127.0.0.1', + 'db-port': '5432', + 'db-username': 'foo', + 'db-password': 'bar', } - - # Simulate the container coming up and emission of pebble-ready event - harness.container_pebble_ready('demo-server') - harness.evaluate_status() - - # Get the plan now we've run PebbleReady - updated_plan = harness.get_container_pebble_plan('demo-server').to_dict() - service = harness.model.unit.get_container('demo-server').get_service('fastapi-service') - # Check that we have the plan we expected: - assert updated_plan == expected_plan - # Check the service was started: - assert service.is_running() - # Ensure we set a BlockedStatus with appropriate message: - assert isinstance(harness.model.unit.status, ops.BlockedStatus) - assert 'Waiting for database' in harness.model.unit.status.message ``` - -> Read more: [`ops.testing`](https://ops.readthedocs.io/en/latest/harness.html#module-ops.testing) - ## Run the test -In your Multipass Ubuntu VM shell, run your unit test as below: +In your Multipass Ubuntu VM shell, run your test as below: -```bash -ubuntu@charm-dev:~/fastapi-demo$ tox -e unit +```text +ubuntu@charm-dev:~/juju-sdk-tutorial-k8s$ tox -e unit ``` You should get an output similar to the one below: -```bash -unit: commands[0]> coverage run --source=/home/ubuntu/fastapi-demo/src -m pytest --tb native -v -s /home/ubuntu/fastapi-demo/tests/unit -=============================================================================================================================================================================== test session starts =============================================================================================================================================================================== -platform linux -- Python 3.10.13, pytest-8.0.2, pluggy-1.4.0 -- /home/ubuntu/fastapi-demo/.tox/unit/bin/python +```text +unit: commands[0]> coverage run --source=/home/ubuntu/code/juju-sdk-tutorial-k8s/src -m pytest --tb native -v -s /home/ubuntu/code/juju-sdk-tutorial-k8s/tests/unit +======================================= test session starts ======================================== +platform linux -- Python 3.11.9, pytest-8.3.3, pluggy-1.5.0 -- /home/ubuntu/code/juju-sdk-tutorial-k8s/.tox/unit/bin/python cachedir: .tox/unit/.pytest_cache -rootdir: /home/ubuntu/fastapi-demo -collected 1 item +rootdir: /home/ubuntu/code/juju-sdk-tutorial-k8s +plugins: anyio-4.6.0 +collected 1 item -tests/unit/test_charm.py::test_pebble_layer PASSED +tests/unit/test_charm.py::test_get_db_info_action PASSED -================================================================================================================================================================================ 1 passed in 0.30s ================================================================================================================================================================================ +======================================== 1 passed in 0.19s ========================================= unit: commands[1]> coverage report Name Stmts Miss Cover ---------------------------------- -src/charm.py 118 49 58% +src/charm.py 129 57 56% ---------------------------------- -TOTAL 118 49 58% - unit: OK (0.99=setup[0.04]+cmd[0.78,0.16] seconds) - congratulations :) (1.02 seconds) +TOTAL 129 57 56% + unit: OK (6.89=setup[6.39]+cmd[0.44,0.06] seconds) + congratulations :) (6.94 seconds) ``` -Congratulations, you 
have now successfully implemented your first unit test!
+Congratulations, you have written your first unit test!

```{caution}
-As you can see in the output, the current tests cover 58% of the charm code. In a real-life scenario make sure to cover much more!
-
+As you can see in the output, the current tests cover 56% of the charm code. In a real-life scenario make sure to cover much more!
```

## Review the final code

-For the full code see: [08_unit_testing](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/08_unit_testing)
-
-For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/07_cos_integration...08_unit_testing)
-
-> **See next: {ref}`Write scenario tests for your charm `**
-
-
+For the full code see: [06_unit_testing](https://github.com/canonical/juju-sdk-tutorial-k8s/tree/06_unit_testing)
+
+For a comparative view of the code before and after this doc see: [Comparison](https://github.com/canonical/juju-sdk-tutorial-k8s/compare/05_cos_integration...06_unit_testing)
+
+> **See next: {ref}`Write integration tests for your charm `**
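If you would like to take the test a little further, the same pieces can be reused to check that credentials stay hidden when `show-password` is left off. The test below is an optional, illustrative sketch rather than part of the tutorial branches: it reuses the imports already present in `tests/unit/test_charm.py`, the test name is made up for this example, and the expected results assume that the handler only adds `db-username` and `db-password` when the flag is set to true.

```python
# Optional, illustrative extension (not in the tutorial branches): run the
# action with show-password=False and check that no credentials are returned.
def test_get_db_info_action_hides_password(monkeypatch: MonkeyPatch):
    monkeypatch.setattr('charm.LogProxyConsumer', Mock())
    monkeypatch.setattr('charm.MetricsEndpointProvider', Mock())
    monkeypatch.setattr('charm.GrafanaDashboardProvider', Mock())

    # Let the test framework pick up the metadata from charmcraft.yaml.
    ctx = testing.Context(FastAPIDemoCharm)
    state_in = testing.State(
        leader=True,
        relations={
            testing.Relation(
                endpoint='database',
                interface='postgresql_client',
                remote_app_name='postgresql-k8s',
                remote_app_data={
                    'endpoints': '127.0.0.1:5432',
                    'username': 'foo',
                    'password': 'bar',
                },
            ),
        },
        containers={testing.Container('demo-server', can_connect=True)},
    )

    ctx.run(ctx.on.action('get-db-info', params={'show-password': False}), state_in)

    # Assumes the handler only includes the username and password when
    # show-password is true.
    assert ctx.action_results == {'db-host': '127.0.0.1', 'db-port': '5432'}
```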