Every configuration directory corresponds to a unique identity. You should maintain separate directories for each person and each microservice.
Generate a certificate for a personal identity using,
certified init 'First Last' --email [email protected] \
--config $HOME/etc/certified
Generate a certificate for a server or microservice using,
certified init --org 'My Company' --division 'My Org Unit' \
--domain my-api.org \
--host '*.my-api.org' --host 'localhost' \
--email '[email protected]' \
--config $VIRTUAL_ENV/etc/certified
Note these are stored in different places because they represent different entities. Services need at least one `--host` defined that matches the URL the client will connect to.
You can check your certificate contents using
openssl x509 -text -noout -in $VIRTUAL_ENV/etc/certified/id.crt
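In particular, you can list just the hostnames the certificate will match (requires OpenSSL 1.1.1 or newer); the `--host` values passed to `certified init` should show up there as DNS entries:

openssl x509 -noout -ext subjectAltName -in $VIRTUAL_ENV/etc/certified/id.crt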
To successfully connect to a service, the service must be able to authenticate your identity. It does this by checking that your certificate has been issued by a principal that it trusts.
To configure your service to trust you as a principal, use:
cp $HOME/etc/certified/CA.crt \
$VIRTUAL_ENV/etc/certified/trusted_clients/$USER.crt
According to the configuration specification, this sets up the server to be able to talk to all entities whose certificates you sign. Note that your personal identity has already been signed by you.
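If you want to double-check that signing relationship, standard OpenSSL tooling can verify it (an optional sanity check; the paths below assume the personal configuration created earlier, which holds both `CA.crt` and `id.crt`):

# confirm that id.crt chains back to CA.crt
openssl verify -CAfile $HOME/etc/certified/CA.crt \
    $HOME/etc/certified/id.crt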
There are two methods to allow a client (person or microservice) to talk to your server.
Both require that the client set up your server as a trusted service:
certified add-service anapi $VIRTUAL_ENV/etc/certified/CA.crt \
--config $HOME/etc/certified
When that user wants to access the microservice at `$VIRTUAL_ENV`, they can now do so using `message https://anapi:<port>/path`.
Technical Note: There should be nothing wrong with adding the server's `id.crt` instead of `CA.crt` as the service certificate. However, SSL fails to validate this with the error:

ConnectError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1007)'))
This is because TLS is over-complicated and puts too much trust at the top. We should really be doing our own certificate chain validation instead of using the TLS model.
- Method 1: direct client addition

  Copy `CA.crt` (as in the example above) to the server's `trusted_clients` directory:

  cp $HOME/etc/certified/CA.crt \
     $VIRTUAL_ENV/etc/certified/trusted_clients/$USER.crt

  This allows all identities signed by `CA.crt` to authenticate to your server.
- Method 2: introduction

  An "authorizor" can introduce someone else to your server by signing their identity:

  certified introduce /home/other_user/etc/certified/id.crt \
      --scope user \
      --config $VIRTUAL_ENV/etc/certified \
      >/home/other_user/anapi.json

  Note: `--scope` is ignored at present. Of course, UNIX permissions don't allow doing this directly, but the basic idea is the same. Both the other user's `id.crt` file and your returned signature (JSON file) are public documents, and can be exchanged in the open -- for example by email or by posting to GitHub.

  The other_user needs to do two things to use this introduction. First, they import it into their certificate list:

  certified add-intro /home/other_user/anapi.json \
      --config /home/other_user/etc/certified

  Second, they create a yaml file describing the service which requires it (using `add-service`, as mentioned above and sketched below).
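  For example, the other user might register the service with a command like the following (the service name `anapi` mirrors the earlier example, and `/path/to/anapi/CA.crt` stands in for whatever copy of the server's `CA.crt` they were given):

  certified add-service anapi /path/to/anapi/CA.crt \
      --config /home/other_user/etc/certified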
Technical Note: the service trusts itself as an authorizor by default because `CA.crt` is copied to the service's `trusted_clients` directory on creation. Other authorizors can be added by placing their `CA.crt` into `$VIRTUAL_ENV/etc/certified/known_clients` under any name (`<name>.crt`). Your organization should provide an authorizor that you can use.
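For instance, installing an organizational authorizor might look like this (the source path and the destination name `my-org.crt` are placeholders -- any `<name>.crt` works):

cp /path/to/org-authorizor/CA.crt \
   $VIRTUAL_ENV/etc/certified/known_clients/my-org.crt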
Technical explanation: the user accesses a "known service" using the combination of:

- Your `$VIRTUAL_ENV/etc/certified/id.crt` (cacert / trust root)
- The `id/authorizor.crt` (certificate chain you provide them)
- Their `id.key` (private key)
All three ingredients are used in a TLS socket handshake to
mutually authenticate the client and server to one another.
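To get a feel for what that handshake involves, you can drive it by hand with `openssl s_client` (a sketch only -- it assumes the client's config directory is `$HOME/etc/certified` and that the server from the earlier examples listens on my-api.org:8000):

# trust roots via -CApath, client chain via -cert, client key via -key
openssl s_client -connect my-api.org:8000 \
    -CApath $HOME/etc/certified/trusted_servers \
    -cert $HOME/etc/certified/id/authorizor.crt \
    -key $HOME/etc/certified/id.key

The same three files appear again in the curl commands below.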
To specify custom authorizors for a microservice, see the examples generated by `certified add-service`.
The user accesses a general service (no `known_servers/name.yaml` file) using:

- Their `known_servers/*.crt` (cacert / trust roots)
- Their `id.crt` (self-signed certificate chain from the user's `CA.crt`)
- Their `id.key` (private key)

These 3 ingredients (with the exception of using `known_clients/*.crt` as trust roots) are also the ones used by the server to authenticate clients.
HTTPS already includes support for custom server authentication and providing the server with your client certificate.
To use it with the `curl` tool (where `$cfg` is your certified config directory, e.g. `$HOME/etc/certified`), the commands are:
curl --capath $cfg/trusted_servers \
--cert $cfg/id/authorizor.crt --key $cfg/id.key \
-H "Accept: application/json" \
https://my-api.org:8000
curl --capath $cfg/trusted_servers \
--cert $cfg/id/authorizor.crt --key $cfg/id.key \
-H "Accept: application/json" \
-H "Content-Type: application/json" \
-X POST --data '{"message":"hello"}' \
https://my-api.org:8000/notes
The certified package makes this much easier using the `message` utility:
message https://my-api.org:8000/notes
message https://my-api.org:8000/notes '{"message":"hello"}'
You can also access servers programmatically using the `Certified.Client` context. This context is an `httpx.Client` that bakes in the appropriate client and server certificates so that both sides can mutually authenticate one another.
An example:
from certified import Certified
cert = Certified()
with cert.Client("https://my-api.org:8000") as api:
    r = api.get("/")
    assert r.status_code == 200, "Read error!"
    print(r.json())
    r = api.post("/notes", json={"message": "hello"})
    assert r.status_code == 200, "Post error!"
    print(r.json())
To run an API server, create an ASGI webserver application (e.g. using `app = FastAPI()` inside `my_api/server.py`), and then start it with:
certified serve [options] my_api.server:app
This uses uvicorn internally and is equivalent to running:
uvicorn --ssl-keyfile server.key --ssl-certfile server.pem \
--ssl-cert-reqs 2 --ssl-ca-certs ca_root.pem \
--host <ip_from_config> --port <port_from_config> \
my_api.server:app
where `--ssl-cert-reqs 2` is the magic argument needed to ensure clients authenticate with TLS, and the other arguments are created by PEM-encoding data from your server's `certified.json` config file.
We actually implement this internally with uvicorn's programmatic API.
import asyncio
from certified import Certified
cert = Certified()
asyncio.run(cert.serve("my_api.server:app",
"https://127.0.0.1:5000"))
# ... calls uvicorn's python API
Certified serve runs your application through uvicorn, which provides some basic logging. However, rich information about the client address, certificate common name, response time for each API call, etc. is not provided.
The standard way to add rich logs with FastAPI is to create middleware that gathers details from the Request and Response objects. certified provides a middleware that creates rich JSON logs. You can enable it in your applications using,
import logging
_logger = logging.getLogger(__name__)
from fastapi import FastAPI
app = FastAPI()
try:
    from certified.formatter import log_request
    app.middleware("http")(log_request)
except ImportError:
    pass
As a bonus, these logs can be sent to Loki using a configuration option:
certified serve --loki loki.json module:app
The loki.json file should contain the URL for your loki server endpoint, as well as the user and password to use for basic authentication.
{ "url": "https://logs-prod-00x.grafana.net/loki/api/v1/push",
"user": "1111",
"passwd": "long-b64-bassword"
}
For additional information on loki, see its setup documentation.