The Sphinx DevOps platform is a complex piece of software that requires some level of expertise to run in production. We provide the Docker Compose file and documentation on running Sphinx locally to give you a good starting point to build your own production-ready system using this project. We are not going to provide an exact guide on running Sphinx in production, but we will provide a set of basic recommendations to help you understand what is involved and get you on the right track.
- Running the microservices
- Running the database
- Running the website
- Environment variables
- How to configure real S3 buckets
- Adding support for new networks
The Sphinx backend includes a set of microservices which you will need to host:
- Executor: Coordinates deployment execution
- Relayer: Handles submitting transactions
- Artifact Generator: Handles generating artifacts and uploading them to S3
- Contract Verifier: Handles verifying contracts on Etherscan and Blockscout
All of these microservices are intended to run in any container hosting service. The easiest way to host them is to take the existing Docker Compose file we've provided for running Sphinx locally and use it to run the system in production. This option likely requires the fewest modifications to this repo to achieve a production system. However, note that we have not tested this, so you will likely need to make at least some adjustments.
There are a variety of other options for container hosting. For example, when we ran Sphinx in production we used AWS Fargate and deployed the system with Terraform. However, that is a complex and expensive setup designed to handle a larger number of users, so we don't recommend it.
The Sphinx backend uses a Postgres database, which we connect to using Prisma and GraphQL. There are many different options for hosted Postgres databases, including Neon, which we used when running Sphinx in production. The Docker Compose file includes a Postgres database, so if you choose to deploy to production using it, you will have a self-hosted Postgres instance by default.
However, we recommend using a managed Postgres provider such as Neon, Supabase, or Vercel. These options require less maintenance and make it harder to accidentally wipe your database as part of an upgrade or infrastructure migration. You can use any database provider you want; you'll just need to configure the `POSTGRES_PRISMA_URL` environment variable to point to the correct database instance. You'll also need to set up the Prisma schema and seed the database with some data. For more information on that, we recommend looking at the `services/init` service, which is used in the Docker Compose file for this purpose.
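For reference, here is a minimal sketch of what that initialization step might look like, using the standard Prisma CLI commands. This is illustrative only; the actual logic lives in `services/init`.

```ts
// init.ts -- a minimal sketch of a database initialization step.
// Illustrative only; the real logic lives in the services/init service.
import { execSync } from 'child_process'

if (!process.env.POSTGRES_PRISMA_URL) {
  throw new Error('POSTGRES_PRISMA_URL must point at your Postgres instance')
}

// Apply the committed Prisma migrations to the database. `prisma migrate
// deploy` is the standard non-interactive command for this.
execSync('npx prisma migrate deploy', { stdio: 'inherit' })

// Seed the database. `prisma db seed` runs whatever seed script the package
// configures; check services/init for how Sphinx actually seeds its data.
execSync('npx prisma db seed', { stdio: 'inherit' })
```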
Prisma can sometimes cause issues with different Postgres providers. We highly recommend using Neon and referencing this guide during setup.
The final piece of the Sphinx platform is the website. The Sphinx website is a NextJS application that includes both a frontend UI and a backend API (which is mostly GraphQL but also includes a few HTTP endpoints which the Sphinx plugin interacts with). For simplicity, we recommend hosting the Sphinx UI using Vercel which is what we did when running Sphinx in production. We recommend referencing this guide on using Vercel with a yarn workspaces monorepo.
Sphinx uses S3 buckets to store a couple of different types of deployment artifacts:
- Deployment Config Files: These are files used internally by Sphinx to execute deployments. They store all of the information related to deployments including contract source code, follow-up transactions, the entire Sphinx Merkle tree, etc.
- Deployment Artifact Files: These are the user-facing deployment artifacts generated by the `artifact-generator` microservice.
When running Sphinx locally and with Docker Compose, we use LocalStack to store these files instead of actual S3 buckets. If you choose to deploy Sphinx in production using the provided Docker Compose file, then this will be the default setup. However, if you want long-term retention of these files you will likely want to use actual S3 buckets.
Note that when running Sphinx locally, files stored in LocalStack will not be persisted if LocalStack is stopped.
The Sphinx platform requires a variety of secrets, many of which are not particularly high risk. For simplicity, we recommend taking the entire reference `.env.example` file and making all of the variables available to all of the Sphinx microservices and the website. Note that this recommendation assumes you are using Vercel to host the Sphinx website (Vercel does not expose environment variables on the frontend unless they are prefixed with `NEXT_PUBLIC_`). If you are using a different frontend host, make sure you understand how its environment variable system works.
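Whatever host you use, it can help for each service to fail fast at startup when a variable it needs is missing. A minimal sketch, using a couple of variable names that appear elsewhere in this guide (take the authoritative list from `.env.example`):

```ts
// Fail fast at startup if required environment variables are missing.
// This list is illustrative; the authoritative list is in .env.example.
const required = ['POSTGRES_PRISMA_URL', 'LOCAL_S3']

const missing = required.filter((name) => process.env[name] === undefined)
if (missing.length > 0) {
  throw new Error(`Missing environment variables: ${missing.join(', ')}`)
}
```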
There is one variable that is rather sensitive: `SPHINX_RELAYER__PRIVATE_KEYS`, which is the funded private key used to execute deployments. We recommend making sure this variable is only set in the relayer microservice and nowhere else.
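Since the variable name is plural, it presumably can hold more than one key. Here is a minimal sketch of what loading it inside the relayer might look like, assuming a comma-separated format (verify against the relayer's actual parsing before relying on this):

```ts
import { Wallet } from 'ethers'

// SPHINX_RELAYER__PRIVATE_KEYS should be set on the relayer and nowhere else.
// The comma-separated format below is an assumption; check the relayer code.
const raw = process.env.SPHINX_RELAYER__PRIVATE_KEYS
if (!raw) {
  throw new Error('SPHINX_RELAYER__PRIVATE_KEYS is not set')
}

const wallets = raw.split(',').map((key) => new Wallet(key.trim()))
console.log(`Loaded ${wallets.length} relayer wallet(s)`)
```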
To use real S3 buckets, you'll need to run through a few steps:
1. Create two S3 buckets in AWS. They should be named `sphinx-artifacts` and `sphinx-compiler-configs`, and they should be in the `us-east-1` region.
2. Create an access key with permissions to both read and write from those buckets.
3. Create a new folder in Infisical called `/Vercel`.
4. Create two new secrets in the `/Vercel` folder in Infisical: `AWS_ACCESS_KEY_ID`, which is the ID of the access key you created in step 2, and `AWS_SECRET_ACCESS_KEY`, which is the secret key for that access key.
5. In each Sphinx service, set a new environment variable `LOCAL_S3=false` (see the client sketch below). If you are using the provided Docker Compose file, look for this environment variable in that file and update it.
Both of these variables will be fetched from Infisical in production, so you just need to make sure each service has access to Infisical, which should be the case if you followed the recommended environment variable setup.
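To make the `LOCAL_S3` switch concrete, here is a sketch of how a service might construct its S3 client, assuming the AWS SDK v3 and LocalStack's default edge endpoint on port 4566 (the platform's actual client construction may differ):

```ts
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

// When LOCAL_S3 is anything other than 'false', target LocalStack instead of
// real S3. Port 4566 is LocalStack's default edge endpoint.
const useLocalStack = process.env.LOCAL_S3 !== 'false'

const s3 = new S3Client({
  region: 'us-east-1',
  ...(useLocalStack
    ? {
        endpoint: 'http://localhost:4566',
        forcePathStyle: true, // LocalStack requires path-style bucket addressing
        credentials: { accessKeyId: 'test', secretAccessKey: 'test' },
      }
    : {}),
})

// Example: write a user-facing artifact into the sphinx-artifacts bucket.
async function uploadExample(): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: 'sphinx-artifacts',
      Key: 'example/artifact.json',
      Body: JSON.stringify({ example: true }),
    })
  )
}

uploadExample().catch(console.error)
```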
`sphinx-compiler-configs` stores configuration files used for actual deployments. These files include things like the contract source code and compiler artifacts. `sphinx-artifacts` stores the user-facing deployment artifacts, which can be retrieved using the `sphinx artifacts` command.
If you are running into problems with this, or anything else related to secrets, we recommend looking in `./services/utilities/secrets`, which contains all of the code related to fetching secrets from Infisical.
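As a rough illustration of the pattern that code implements, here is a hypothetical environment-first lookup with an Infisical fallback. `fetchFromInfisical` is a stand-in for the actual client call in that directory:

```ts
// Hypothetical stand-in for the project's actual Infisical client call,
// which lives in ./services/utilities/secrets.
async function fetchFromInfisical(name: string): Promise<string | undefined> {
  // The real Infisical API/SDK call is omitted in this sketch.
  return undefined
}

async function getSecret(name: string): Promise<string> {
  // Prefer a locally set environment variable when one exists.
  const local = process.env[name]
  if (local !== undefined) {
    return local
  }
  // Otherwise fall back to Infisical.
  const remote = await fetchFromInfisical(name)
  if (remote === undefined) {
    throw new Error(`Secret ${name} not found locally or in Infisical`)
  }
  return remote
}
```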
There are a few steps you'll have to go through to add support for new networks.
1. Update the `SPHINX_NETWORKS` array in the contracts package of the Sphinx monorepo to include the new network (see the hypothetical sketch after this list).
2. Add the required environment variables for the network to Infisical.
3. Deploy the Sphinx contracts onto the network by going to the core package and running `npx hardhat deploy-system --network <new network name>`.
4. Create a new package release with the changes to the contracts package.
5. Bump the contracts package version in the Sphinx backend microservices.
6. Build new containers with the updated contracts package, push them into production, and update your running containers to use the new version.
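For orientation, here is a purely hypothetical sketch of the kind of entry step 1 adds; the actual shape of `SPHINX_NETWORKS` is defined in the contracts package and will differ:

```ts
// Purely hypothetical illustration: the real SPHINX_NETWORKS entries live in
// the contracts package and their exact fields will differ from this sketch.
const newNetworkEntry = {
  name: 'examplechain', // hypothetical network name used with --network
  chainId: 424242, // hypothetical chain ID
  rpcUrlEnvVar: 'EXAMPLECHAIN_RPC_URL', // env var you'd add to Infisical in step 2
}
```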
You will also want to thoroughly check the documentation of the network creators to see if there are any incompatibilities, especially anything that might meaningfully affect transaction gas estimates or prevent the CREATE2 opcode from being EVM-equivalent.