Cirrus CI exposes a GraphQL API for integrators through the https://api.cirrus-ci.com/graphql endpoint. Please check the Cirrus CI GraphQL Schema for a full list of available types and methods, or check the built-in interactive GraphQL Explorer. Here is an example of how to get a build for a particular SHA of a given repository:
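A minimal sketch of such a query from Python, using only the standard library. The `searchBuilds` query and its arguments follow the published schema as we understand it, but verify the exact shape in the GraphQL Explorer before relying on it; the owner, repository and SHA values below are placeholders.

```python
import json
import urllib.request

# GraphQL query for builds matching a SHA; verify field names in the Explorer.
QUERY = """
query BuildBySHA($owner: String!, $name: String!, $sha: String!) {
  searchBuilds(repositoryOwner: $owner, repositoryName: $name, SHA: $sha) {
    id
    status
  }
}
"""

def build_request(owner, name, sha, token=None):
    """Construct the HTTP request for the GraphQL endpoint without sending it."""
    payload = json.dumps({
        "query": QUERY,
        "variables": {"owner": owner, "name": name, "sha": sha},
    }).encode()
    headers = {"Content-Type": "application/json"}
    if token:  # the Authorization header can be omitted for public read access
        headers["Authorization"] = "Bearer " + token
    return urllib.request.Request(
        "https://api.cirrus-ci.com/graphql", data=payload, headers=headers)

# To actually send it:
#   response = urllib.request.urlopen(build_request("my-org", "my-repo", "deadbeef"))
```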
In order for a tool to access the Cirrus CI API, an organization admin should generate an access token through the Cirrus CI settings page for the corresponding organization. Here is a direct link to the settings page: https://cirrus-ci.com/settings/github/<ORGANIZATION>. An access token allows full write and read access to both public and private repositories of your organization on Cirrus CI: it can create new builds and perform any other GraphQL mutation. On the same settings page it is also possible to generate scoped access tokens that allow only read or write operations on a subset of repositories.

Note that if you only need read access to public repositories of your organization, you can skip this step and omit the Authorization header.

Once an access token is generated and securely stored, it can be used to authorize API requests by setting the Authorization header to Bearer $TOKEN.

User API Token Permission Scope

It is also possible to generate API tokens for personal accounts, but they will be scoped only to the personal public and private repositories of that particular user. Such a token cannot access private repositories of an organization, even if the user has access to them.
It is possible to subscribe to updates of builds and tasks. If a WebHook URL is configured on the Cirrus CI settings page for an organization, Cirrus CI will try to POST a webhook event payload to this URL.

The POST request will contain an X-Cirrus-Event header specifying whether the update was made to a build or a task. The event payload itself is pretty basic:
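For illustration only, a delivery might look roughly like this; the exact field set is defined by the GraphQL fragments in the schema, and the names below are illustrative assumptions rather than a guaranteed contract:

```json
{
  "action": "updated",
  "data": {
    "build": {
      "id": "1234567890",
      "status": "EXECUTING"
    }
  }
}
```

Check the payload of a real delivery against your endpoint before writing parsing code.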
In addition to updates to builds and tasks, Cirrus CI will also send audit_event events to the configured WebHook URL. The action for these audit events is always "create", and the data field will contain the following GraphQL fragment for the particular audit event:
Imagine you've been given a https://example.com/webhook endpoint by your administrator, and for some reason there's no easy way to change that. This kind of URL is easily discoverable on the internet, and an attacker can take advantage of this by sending requests to it, thus pretending to be Cirrus CI.

To avoid such situations, set a secret token in the repository settings, and then validate the X-Cirrus-Signature header for each WebHook request.

Once configured, the secret token and the request's body are fed into the HMAC algorithm to generate the X-Cirrus-Signature for each request coming from Cirrus CI.

Missing X-Cirrus-Signature header

When a secret token is configured in the repository settings, all WebHook requests will contain an X-Cirrus-Signature header. Make sure to assert both the presence of the X-Cirrus-Signature header and the correctness of its value in your validation code.

Using HMAC is pretty straightforward in many languages; here's an example of how to validate the X-Cirrus-Signature using Python's hmac module:
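A sketch of such a validator. It assumes the signature is a hex-encoded HMAC-SHA-256 digest of the raw request body keyed with the secret token; verify the digest algorithm against your actual webhook deliveries.

```python
import hashlib
import hmac

def is_valid_signature(secret_token, request_body, signature_header):
    """Validate an X-Cirrus-Signature header value against the raw body."""
    # Reject requests that are missing the header entirely
    if not signature_header:
        return False
    expected = hmac.new(secret_token, request_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid leaking timing information
    return hmac.compare_digest(expected, signature_header)
```

Note the use of `hmac.compare_digest` instead of `==`: a naive string comparison returns early on the first mismatching character, which can leak information through response timing.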
“Wait what!? Yet another CI? Gosh…” one can say after seeing the title. Honestly, at Cirrus Labs we had the same thoughts and we tried to talk ourselves out of building yet another CI. But let us explain why we think there is a need for a better CI and how Cirrus CI is better.
There are continuous integration systems that have been in development for 10+ years. They are super flexible and can be configured for almost any workflow. But this flexibility and long history bring some fundamental problems:
+
+
+
- It’s so easy to mess up because they are complicated.
- Which plugins to install and which to uninstall?
- How to configure builds?
- How to configure auto-scalable agent pools (machines that execute builds)?
- How to update agent pools so as not to affect builds in flight, while making sure old release branches can still be executed?

Basically, there should be someone very knowledgeable in your organization to properly configure and maintain CI.

There are also some modern CI-as-a-service systems, created in the last 6 years, which are not as flexible but do a great job of making continuous integration as simple as possible. They also share some common inconveniences:

- No pay-as-you-go pricing. Usually users pay for how many jobs they can execute in parallel, which means users need to plan and pay for the maximum load they’ll ever have, or face queuing issues otherwise. This is not a suitable pricing model for the era of cloud computing.
- A focus mostly on containers, to which many businesses have not yet migrated their legacy projects.
- Poor environment flexibility. Usually it’s not possible to specify precisely which VM image or Docker container to run and how many resources it can have. This means that code is most likely tested in an environment very different from the production environment.

Because of all the problems and inconveniences described above, we decided to build Cirrus CI with three simple principles in mind:
Every architecture decision, every building block should be self-contained, well abstracted, intuitive and easily replaceable in the future. Think about it as Lego bricks: every single piece is simple, but together they can form a more complex element, which in turn forms the final object.

Since every building block is simple, self-contained and replaceable, it can also be very efficient. Optimizing small parts of the system independently is much easier than optimizing the whole system at once.

Users shouldn’t have to guess what is happening. What you write and configure is what you get. Things can seem magical, but there should be no magic and no guessing for a user.

Cirrus CI has all the features a modern CI system should have, and we won’t focus on them right now. Please check the documentation for more details.

The interesting part is how builds are executed. A typical CI system has agents that wait for builds and execute them. Cirrus CI, on the other hand, delegates execution to a computing service of your choice. For example, Cirrus CI can connect to a Kubernetes cluster and schedule a task there, or use Google Compute Engine APIs to schedule a task on a newly created virtual machine. There is no need to configure and maintain agents; Cirrus CI manages and orchestrates everything. A customer pays the cloud provider directly, and only for the resources used to run CI builds and store build artifacts.
When Cirrus CI was announced a few months ago Docker support was already pretty sophisticated. It was possible to use any existing Docker container image as an environment to run CI tasks in. But even though Docker is so popular nowadays and there are hundreds of thousands of containers created by community members, in some cases it’s still pretty hard to find a container that has everything installed for your builds. Just remember how many times you’ve seen apt-get install in CI scripts! Every such apt-get install is just a waste of time. Everything should be prebuilt into a container image! And now with Cirrus CI it’s easier than ever before!
Now there is no need to build and push custom containers so they can be used as an environment to run CI tasks in. Cirrus CI can do it for you! Just specify the path to a Dockerfile via the dockerfile field in your container declaration in .cirrus.yml like this:
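A minimal sketch; the Dockerfile path and the script are placeholders for your project:

```yaml
task:
  container:
    dockerfile: ci/Dockerfile  # path relative to the repository root
  test_script: ./run-tests.sh
```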
Cirrus CI will build a container and cache the resulting image based on Dockerfile’s content. On the next build, Cirrus CI will check if a container was already built, and if so, Cirrus CI will instantly start a CI task using the cached image.
+
Under the hood, for every Dockerfile that needs to be built, Cirrus CI will create a Docker Build task as a dependency. You will see such build_docker_image_HASH tasks in the UI:
Before, only container based builds were available for free to Open Source projects via Cirrus Cloud Clusters. We are thrilled to introduce docker_builder tasks that are executed in a VM with Docker preinstalled. Now, Open Source projects can easily build and publish Docker images by adding docker_builder tasks in their CI pipelines. Here is an example of how Docker Builder can be used to push an image to Docker Hub once there is a release tag created:
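A sketch of such a task, assuming credentials are stored as encrypted variables and using a hypothetical myorg/myimage image; double-check the exact field names against the documentation:

```yaml
docker_builder:
  only_if: $CIRRUS_TAG != ''  # run only when a release tag is created
  env:
    DOCKER_USERNAME: ENCRYPTED[...]
    DOCKER_PASSWORD: ENCRYPTED[...]
  build_script: docker build --tag myorg/myimage:$CIRRUS_TAG .
  login_script: docker login --username $DOCKER_USERNAME --password $DOCKER_PASSWORD
  push_script: docker push myorg/myimage:$CIRRUS_TAG
```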
Cirrus CI already had great Linux and Windows support. The only missing platform was macOS and there was a good reason for that.
+
TLDR: Please check the documentation for instructions on how to configure macOS builds on Cirrus CI. There is a little bit of history and motivation below.

Traditionally Linux has the best tooling. There are cloud providers that can give you a Linux VM almost instantly via an API request. Containers were pioneered on Linux. Nowadays Windows tools are catching up. The same cloud providers now have Windows VMs. Windows containers are rapidly evolving and already heavily used in production.
+
The macOS world is not that bright. Apple is not investing in making macOS anything more than a desktop OS. Only thanks to independent companies can engineers improve their lives.

For example, Veertu brings a container-like feel to managing macOS VMs with their Anka Virtualization technology. Anka VMs are fast! Their Instant Start technology allows starting VMs in less than a second for on-demand workloads, which is ideal for CI. Anka Controller and Anka Registry bring a Docker-like feel to managing and orchestrating macOS VMs.

MacStadium is the best provider of Apple Mac infrastructure. They have reliable and fast networks and hardware in their data centers. Recently they partnered with Veertu to offer hosted Anka on a MacStadium private cloud. Finally, a solution that provides modern orchestration for macOS VMs.
+
Today we are happy to announce support for Anka Build Cloud on Cirrus CI. Open Source projects can try **macOS builds free of charge**. To try the power of Anka Virtualization on your OSS projects, simply add the following to your .cirrus.yml configuration file:
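A minimal sketch; the image name and the build command are assumptions, so check the documentation for the images currently available on the shared cluster:

```yaml
osx_instance:
  image: high-sierra-base  # placeholder; pick an available Anka image

task:
  script: xcodebuild -scheme MyScheme build  # MyScheme is a placeholder
```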
Private organizations with more serious workloads can use a separate Anka Build Cloud. Simply sign up for an Anka cloud with MacStadium and configure it as described in the documentation. Having a dedicated Anka Build Cloud for your organization has many benefits:

- Security. The infrastructure is not shared. No need to think about bugs in the macOS kernel or virtualization that could potentially give escalated access to VMs running on the same host as your CI builds.
- Flexibility. By creating custom Anka VMs with all tools pre-installed, you can drastically improve CI build times.
- Scalability. The folks at MacStadium specialize in helping you figure out your initial setup. Start small and grow your cloud as needed.

Follow us on Twitter and if you have any questions don’t hesitate to ask.
Core principle of Continuous Integration systems is obsolete¶
+
This blog post will briefly go through the history of CI systems and describe how a role-model CI system works nowadays. After describing the core principles of CI systems, we’ll take a look at how the extremely fast evolution of cloud and virtualization technologies has made it possible to change these principles, especially the concept of CI agents.
+
+
+
The idea of a Continuous Integration (CI) system was first described in the early 90s, but the first big win from a CI system was restraining the “integration hell” of the Windows XP release in 2001. Around that time a few CI systems were created, but only Jenkins lives on to the current day.

These first CI systems were pretty simple. They consisted of several servers, AKA agents, constantly polling a single master server for work they can do. Once the master responds with a job for a particular agent, the agent simply executes commands and streams results back to the master. Simple as that!
+
Over the years, some CI best practices were established in order to achieve consistent builds on the agents:

- Reproducible Environment. Each agent should have an identical environment with the same versions of build tools and compilers for executing scripts. Traditionally there were pools of agents with the same environment, for example, a pool of agents with Java 8 and a pool of agents with Java 10 installed. Lately, Docker has become very popular for this purpose: there can be a single pool of agents with Docker pre-installed, so an agent can execute scripts inside a Docker container instead of just shelling out the command.
- Clean Environment. There should be no artifacts from previous builds present when a new build executes on the agent. Such artifacts result in unpredictable behaviour of the agents.

Another recent improvement to the classic CI architecture is using cloud providers to host auto-scalable pools of CI agents. Modern clouds allow spinning up new VMs on demand by just calling APIs. There is no need to pre-allocate agents for the maximum load; agents can be scaled up and down pretty easily. This is a huge saver not only of compute resources but also of engineering time. Engineers don’t need to wait for available agents for their builds anymore!

Nowadays a role-model CI system consists of a multi-master setup and an auto-scalable pool of CI agents with Docker pre-installed somewhere in the cloud. Sounds pretty good, right?
+
+
But as you can see, the core principle of a CI system hasn’t changed in almost 20 years!

What if I told you that the idea of a CI agent pool is obsolete? Why is there a need for CI agents in the first place? A CI agent solves one simple problem: quickly getting an environment ready to execute a CI build.

Technological progress in recent years has redefined many expectations. For example, nowadays most cloud providers can start a VM in under a minute. There is no need to pre-allocate resources; a modern cloud charges for seconds of compute time. There are separate systems like Kubernetes whose purpose is to quickly and efficiently allocate and manage containers. There is no need to do the same job by maintaining CI agent pools! One can simply use the APIs of computing services to allocate resources once they are needed to execute new CI builds.

For example, to run a build of a web application using Node.js, a CI system can simply use the Kubernetes API to start a node:latest container and use it for the CI build.

Such a CI system can also leverage multiple computing services within a cloud, and even use several clouds for different CI needs.

Yes, it works! At Cirrus Labs we built Cirrus CI using precisely this idea. Cirrus CI leverages a variety of modern computing services to run CI builds. Cirrus CI simply uses the APIs of computing services to allocate resources once they are needed to execute new CI builds; there is no need to maintain a CI agent pool.
+
Cirrus CI already supports Google Cloud, Azure and Anka Build Cloud, which allows running Linux, Windows and macOS workloads. Cirrus CI is the only CI-as-a-service system that supports all of these platforms together.

The idea of just using the APIs of computing services not only made it easy to support a variety of platforms, but also allowed us to bring a **new pricing model**. Cirrus CI allows you to bring your own cloud: simply connect part of your cloud to Cirrus CI and pay for your CI within your current cloud bill. Cirrus CI charges a small fee for orchestrating CI builds of private repositories, which is billed through your already existing GitHub payment.

We highly encourage you to try out Cirrus CI. It’s free for Open Source projects and very easy to set up! Also, there is a 14-day free trial for private repositories.
+
Follow us on Twitter and if you have any questions don’t hesitate to ask.
Cirrus CI from day one was built around leveraging modern cloud computing services as backends for executing CI workloads. It allows teams to own their CI infrastructure while avoiding the pains of configuring and managing CI agents. After all, the idea of traditional CI agent pools is obsolete.

Cirrus CI initially launched with only Linux and Windows support through the Google Cloud integration. Shortly after, Cirrus CI started supporting Azure, which enabled more sophisticated Windows Containers support, and finally, the Anka integration allowed adding the much anticipated macOS support.
+
Today Cirrus CI starts supporting AWS services, which brings even more flexibility for integrating Cirrus CI into your existing infrastructure.

Cirrus CI supports EC2 for scheduling VM-based CI tasks and EKS for container-based ones. Cirrus CI will store CI logs and artifacts in S3. Please check the documentation for more details.
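Roughly, an EC2-backed task might look like the sketch below; the field names should be double-checked against the documentation, and the AMI ID, instance type and script are placeholders:

```yaml
task:
  ec2_instance:
    image: ami-0123456789abcdef0  # placeholder AMI ID
    type: t2.micro
    region: us-east-1
  build_script: ./build.sh
```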
+
We highly encourage you to try out Cirrus CI. It’s free for Open Source projects and very easy to set up! Also, there is a 14-day free trial for private repositories.
+
Follow us on Twitter and if you have any questions don’t hesitate to ask.
While working on new functionality or fixing an issue, it’s crucial to get CI feedback as soon as possible. Fast CI builds are important, but it’s also important how fast one can find the reason for a failing build. The usual flow requires opening a separate page for the failing CI build and scrolling through all the logs to finally find a relevant error message. How inefficient!
+
Today Cirrus CI starts supporting GitHub Annotations to provide inline feedback right where you review your code. No need to switch context anymore!
+
+
+
+
This became possible as a result of recently added features like execution behaviour and artifacts. Now each artifact can specify a format so it can be parsed into annotations. Here is an example of a .cirrus.yml file which saves and annotates JUnit reports of a Gradle build:
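A sketch assuming a standard Gradle test-results layout; the artifact name, container image tag and glob pattern are illustrative and worth checking against the documentation:

```yaml
container:
  image: gradle:jdk11  # illustrative image tag

check_task:
  check_script: gradle check
  always:
    junit_result_artifacts:
      path: "**/test-results/**/*.xml"
      format: junit
      type: text/xml
```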
Currently Cirrus CI can only parse JUnit XML, but many tools already use this format. Please let us know what kind of formats Cirrus CI should support next! The annotation parser is also open source, and contributions are highly appreciated! 😉
+
We highly encourage everyone to try Cirrus CI. It’s free for public repositories and all organizations get 200 CPU hours worth of compute credits to try it on private repositories.
+
As always don’t hesitate to ping support or ask any questions on Twitter.
Cirrus CLI — CI-agnostic tool for running Dockerized tasks¶
+
Most Continuous Integration vendors try to lock you in, not only by providing some unique features that were attractive in the first place, but also by making you write hundreds of lines of YAML configuration unique to that particular CI, or by making you configure all your scripts in the UI. No wonder it’s always a pain to migrate to another CI, and it’s hard to justify the effort! There are so many things to rewrite from one YAML format into another.
+
Today we are happy to announce Cirrus CLI — an open source tool to run isolated tasks in any environment with Docker installed. Use one configuration format for running your CI builds the same way locally on your laptop or remotely in any CI. Read below to learn more about our motivation and technical details or jump right to the GitHub repository and try Cirrus CLI for yourself!
When Cirrus Labs was created in 2017, the CI market was kind of stagnating. The most popular CIs on GitHub had not been innovating for years, and it looked like cloud computing technologies were sprinting while CIs were resting on their laurels. Out of this frustration, Cirrus CI was created with a focus on leveraging modern clouds and being as efficient as possible, using a completely new concept for architecting CI systems. Many things have happened since then, and the CI market is not stagnating nowadays! There is a new wave of specialized CIs launched with a focus on fixing the CI problem only for one particular niche: only Android or iOS apps, only a specific framework like Laravel, only Go applications, etc.

Since launching Cirrus CI, we have heard only positive feedback from users about the Cirrus configuration format: it’s concise, there is no magic happening, and at the same time it’s easy for humans to understand, even though it’s still YAML (check the What’s Next section to learn about the upcoming alternative configuration format). Here is an example of a .cirrus.yml configuration file for a Go project:
+
```yaml
task:
  env:
    matrix:
      VERSION: 1.15
      VERSION: 1.14
  name: Tests (Go $VERSION)
  container:
    image: golang:$VERSION # official Go Docker image
  modules_cache:
    folder: $GOPATH/pkg/mod
    fingerprint_script: cat go.sum
  get_script: go get ./...
  build_script: go build ./...
  test_script: go test ./...
```
+
+
With Cirrus CLI we want to liberate the Cirrus configuration format from the requirement to use Cirrus CI. Many people are OK with their current CI setup, and it’s simply not reasonable to put so much effort into migrating to Cirrus CI just to benefit from some unique features.

With Cirrus CLI it takes very little effort to start using and benefiting from the Cirrus configuration format to run your CI builds:
+
+
+
- All Cirrus tasks are executed in isolated Docker containers, which will make your CI more stable and easier to upgrade.
- Run the same tasks locally on your work machine the same way CI runs them, to debug issues. Never hear “Works on my machine!” excuses again.
- Easily integrate **remote caching** with your current infrastructure.
- Benefit from a huge number of existing examples, and read more in the What’s Next section down below about an upcoming alternative configuration format via Starlark.
Traditionally, a CI agent executes builds from the “outside” by SSH-ing into a VM or a container to execute scripts and save logs. Unlike a traditional CI design, the Cirrus Agent that executes tasks runs “inside”. This way the agent has no clue where it’s executed: in a cloud, in a Kubernetes cluster, in a macOS VM, or locally in a Docker container. The agent simply executes steps (downloads/uploads caches, runs scripts, etc.) and streams back logs and execution results using a gRPC API.
+
Cirrus CLI simply implements the same gRPC API as Cirrus CI but for local usage:
+
+
+
- Instead of supporting many compute services, the CLI only uses the locally available Docker to run containers.
- Instead of storing logs in blob storage and streaming live logs via WebSockets, the CLI just outputs them to the console.
- Instead of storing caches in cloud storage, the CLI stores caches on disk (there is also an option to use an HTTP cache).

There is also no need for the CLI to do the dozens of other things Cirrus CI does: updating the GitHub UI, collecting and analyzing build metrics, running tens of thousands of tasks simultaneously, checking user permissions, health-checking VMs and containers, supporting different cloud APIs, etc.
+
This simple initial design of unidirectional communication of the agent through a gRPC API allowed us to decouple and bring execution of Cirrus tasks to developer machines and practically any environment where Docker is installed.
There are many exciting things planned for both Cirrus CLI and Cirrus CI, but one of the most groundbreaking will be support for a new configuration format via Starlark! Starlark is a scripting language designed to be embedded in a larger application, with a simple syntax that is basically a subset of Python. Starlark is pretty popular among modern build systems like Bazel for user-defined behaviors, because Starlark is fast, very restrictive and deterministic, which makes it ideal for caching and other optimizations and leaves very little room for users to shoot themselves in the foot.

YAML is the standard for CI configurations, but unfortunately YAML is pretty limiting at the same time. Each CI vendor tries to add its own syntactic sugar for doing matrix builds, having if statements, making dynamic inclusion/exclusion of some scripts, etc. At some point the CI configuration gets out of hand, and people find themselves doing imperative programming in a declarative language like YAML! Why try to program in a language that is not suitable for that!?
+
Enough words, let’s check an example of configuring a Go project via Starlark!
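A sketch of what such a Starlark configuration might look like. The module path and the helper names, including detect_task, are illustrative, and the task dictionary mirrors the YAML example above:

```python
# .cirrus.star -- Starlark, a Python-like configuration language
load("github.com/myorg/cirrus-templates", "detect_task")  # hypothetical template repo

def main(ctx):
    # Programmatically build the list of tasks: a static test task, plus
    # a lint task that detect_task() adds only when a golangci-lint
    # configuration file exists in the repository.
    tasks = [
        {
            "name": "Tests (Go 1.15)",
            "container": {"image": "golang:1.15"},
            "test_script": "go test ./...",
        },
    ]
    tasks += detect_task(ctx)
    return tasks
```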
With a real programming language, it is possible to do things that were not possible in YAML with any amount of syntactic sugar. There is logic in the detect_task method that checks if there is a configuration file for golangci-lint in the repository and auto-magically configures a linting task. This external loading will allow creating reusable templates for all teams across a company.

There is no CI build that hasn’t flaked at least once. You can imagine writing a failure handler in Starlark for your tasks that checks logs for common transient failures specific to your CI process and automatically retries tasks without the need for a human eye, and even sends a Slack message with the flake details for additional investigation later on.
+
We are very excited about possibilities that template sharing will enable and what teams will do with it!
+
We encourage everyone to try out Cirrus CLI. You can run it locally or integrate it with any CI. A list of tested CI configurations can be found here.
+
And please send us feedback either on GitHub or on Twitter!
Announcing public beta of Cirrus CI Persistent Workers¶
+
Cirrus CI pioneered the idea of directly using compute services instead of requiring users to manage their own infrastructure, configure servers for running CI jobs, perform upgrades, etc. Instead, Cirrus CI just uses the APIs of cloud providers to create virtual machines or containers on demand. This fundamental design difference has multiple benefits compared to more traditional CIs:
+
+
+
+
- Ephemeral environment. Each Cirrus CI task starts in a fresh VM or a container without any state left by previous tasks.
- Infrastructure as code. All VM versions and container tags are specified in the .cirrus.yml configuration file in your Git repository. Cirrus tasks for any revision in the past can be identically reproduced at any point in the future, using the exact versions of VMs or container tags specified in .cirrus.yml at that particular revision. Just imagine how difficult it is to do a security release for a 6-month-old version if your CI environment changes independently.
- Predictability and cost efficiency. Cirrus CI uses the elasticity of modern clouds and creates VMs and containers on demand, only when they are needed for executing Cirrus tasks, and deletes them right after. Immediately scale from zero to hundreds or thousands of parallel Cirrus tasks without needing to over-provision infrastructure or constantly monitor whether your team has reached the maximum parallelism of your current CI plan.

For some use cases the traditional CI setup is still useful, since not everything is available in the cloud. For example, Apple is releasing new ARM-based products, and there is simply no virtualization available yet for the new hardware. Another use case is testing the hardware itself; not everyone is working on websites and mobile apps, after all! For such use cases it makes sense to go with a traditional CI setup: install a binary on the hardware which will constantly poll for new tasks and execute them one after another.

This is precisely what Persistent Workers for Cirrus CI are: a simple way to run Cirrus tasks beyond the cloud! Run Cirrus CI on any hardware, including the new Apple Silicon, any other ARM hardware, or even things like IBM Z!
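Conceptually, once the worker binary is registered on your hardware, a task targets it by labels. A minimal sketch, where the label names and values are examples rather than required fields:

```yaml
task:
  persistent_worker:
    labels:
      os: darwin
      arch: arm64  # route the task to a matching registered worker
  script: make test
```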
+
+
Please follow the documentation in order to configure your first persistent worker, and please report any issues or ask questions either on Twitter or through GitHub issues.
New macOS task execution architecture for Cirrus CI¶
+
We are happy to announce that macOS tasks on Cirrus CI Cloud have switched to a new virtualization technology, as well as a new overall orchestration architecture. This switch should be unnoticeable for end users, except that tasks should become much faster, since each macos_instance of the Cirrus CI Cloud offering will now utilize a full Mac Mini with 12 virtual CPUs and 24 GB of RAM.
+
+
+
The new architecture is built on top of the recently announced Persistent Workers functionality and can be easily replicated by any Cirrus CI user with on-premise Mac hardware, or on any other CI by using Cirrus CLI.

We know from experience that continuous integration for macOS is the hardest, and how little information there is about the topic on the internet! Below we share how the new simplified architecture looks and how to replicate it.

The Cirrus CI architecture is very simple. There is the Cirrus Agent (a self-contained binary written in Go) whose job is to simply execute scripts, download/upload caches, parse test reports and stream progress via a gRPC API. Both Cirrus CI Cloud and Cirrus CLI implement the same gRPC API, so the agent binary doesn’t even know in which environment it’s being executed.
+
+
+
Cirrus CLI was initially intended to be a local executor of Cirrus tasks in Docker containers only. Cirrus CLI simply parses the Cirrus configuration file and then uses the Docker daemon API to start/stop containers to execute the parsed tasks. Note that Cirrus CLI doesn’t require Cirrus CI Cloud and can be used with any other CI. Once this functionality was ironed out and well tested, it was easy to add an option to use Parallels virtualization instead of Docker containers to execute tasks in.

Before that, Cirrus used the Anka cloud, which required a complex setup of Controller/Registry services that orchestrated the execution of Anka VMs on the hosts.

With Persistent Workers we were able not only to dogfood Cirrus CI’s own functionality but also to cut out the Anka Controller middleman, which was contributing to the “created to execution” metric of macOS tasks. Now macOS tasks will be scheduled even faster! Here is how simple the current architecture looks:
+
+
As you can see, this new architecture is not rocket science; in fact, it is quite traditional. The key here is that Cirrus CLI can isolate task execution in a Parallels VM. Under the hood, the following configuration is used:
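A sketch of such a Parallels isolation configuration; the VM image name and the credentials below are placeholders, and the exact field names should be verified against the Persistent Workers documentation:

```yaml
persistent_worker:
  isolation:
    parallels:
      image: big-sur-xcode  # placeholder VM image name
      user: admin           # placeholder guest credentials
      password: admin
      platform: darwin
```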
This configuration can be easily executed locally or in any other CI via Cirrus CLI.
+
We are very excited about the new architecture and the opportunity to dogfood the Persistent Workers functionality at scale! Please let us know how the new architecture works for your projects (especially since there are 3x more CPU resources and better network performance), and send us feedback either on GitHub or on Twitter!
Introducing Cirrus Terminal: a simple way to get SSH-like access to your tasks¶
+
+
Imagine dealing with a failing task that only reproduces in CI, or a task with an environment that is simply too cumbersome to bootstrap locally.

For a long time, the classic debugging approach worked just fine: make an attempt to blindly fix the issue or add debugging instructions, then re-run. Got it working or found a clue? Cool. No? Do it once again!
+
Then Cirrus CLI appeared. It allows you to replicate the CI environment locally, but complex cases like custom VMs or other architectures are not covered due to platform limitations.
+
Anyway, both methods require some additional tinkering to gain access to an interactive session on the host where the task runs (i.e. something similar to docker exec -it container-ID).

Luckily, no more! With the recent Cirrus Terminal integration, it’s now possible to have this one click away on Cirrus Cloud!
+
+
+
Simply click “Re-run with Terminal Access” on a task you want to gain access to:
+
+
Then, shortly after the agent is started on the instance, you’ll see the console:
+
+
Voila! Perhaps you could get away with zero CI configuration changes this time?
When you “Re-run with Terminal Access”, the agent running on the started instance registers itself on the Cirrus Terminal server and publishes its session credentials along with the task identification to the Cirrus Cloud.
+
When a task is opened in the web UI, it continuously monitors the task metadata, looking for the published Cirrus Terminal credentials; once they are found, it renders a terminal and connects it to the Cirrus Terminal server.
+
The terminal sessions opened in the web UI are not shared, but you can open as many as you need!
Cirrus Terminal is an opt-in feature: we understand that not everyone needs it, and this reduces the potential attack surface.
+
Cirrus Terminal talks to its consumers over HTTPS (using either gRPC or gRPC-Web).
+
Cirrus Terminal currently does not provide end-to-end encryption, meaning that both guest and host trust the Cirrus Terminal server with their terminal I/O. Unfortunately, having E2E encryption would make some promising features like SSH access impossible to implement (see the Future section below) due to the way the SSH protocol works.
Cirrus Terminal is designed with the SSH protocol in mind, so it’s technically possible to provide access over SSH in the future. Imagine typing:
+
ssh task-id@terminal.cirrus-ci.com
+
+
…in the comfort of your own terminal!
+
This time we’ve introduced Cirrus Terminal, a feature that helps you spend less time debugging and more time writing great software! And it’s open-source too!
+
Have you already tried it and how do you like it? Perhaps you have some questions? Don’t hesitate to send us your feedback either on GitHub or on Twitter!
Isolating network between Tart’s macOS virtual machines¶
+
Some time has passed since Cirrus Labs released Tart, an open-source tool to manage and run macOS virtual machines on Apple silicon. As Tart matured, we started using it for Cirrus CI’s macOS VM instances to replace other proprietary solutions.
+
+
However, there are some roadblocks that prevent us from scaling and running more than one VM on a single host. To understand them, consider the networking options that Virtualization.Framework provides:
bridged — places VMs into the same broadcast domain as one of the network interfaces on the host, so that the VMs can receive IP addresses from, for example, a corporate DHCP server available on the LAN
+
+
+
NAT — places VMs into a separate broadcast domain (which includes the host, but not the LAN) and configures a DHCP server on the host itself
+
+
+
file handle — exposes all of the I/O done by the VM as send(2) and recv(2) calls on a file descriptor that we provide to the Virtualization.Framework
+
+
+
Tart currently uses the NAT option by default. It’s simple and gets the job done for most use cases.
+
However, NAT and bridged modes are incompatible with multiple tenants, because they make no attempt to prevent ARP spoofing and other rogue VM manipulations. Any VM controlled by an attacker can divert traffic destined for another VM by simply answering ARP requests with its own MAC address.
In our case, we are dealing with a virtualization framework that is only starting to take shape, so it looks like we have to come up with a solution ourselves.
We first tried to work around the missing isolation by creating a daemon that would inject VM-specific rules into the PF firewall, but this approach turned out to be racy by design: you have to constantly catch up with the actions of the macOS InternetSharing daemon, and this is a poor model in terms of security.
+
A sounder approach, then, would be to force all the networking to flow through our daemon using VZFileHandleNetworkDeviceAttachment and then somehow filter the packets and emit them from the host’s TCP/IP stack.
+
To achieve this, we could’ve used a utun device and configured NAT ourselves, but all the little details, like interacting with the PF firewall, tweaking sysctls and re-evaluating the routing table in the presence of the non-cooperative InternetSharing daemon (which can overwrite things at any point in time), seemed to exhibit the same racy behavior as above.
+
Significant progress happened when we discovered the vmnet framework. With it, we can create an interface and pipe packets to and from it; it has the same NAT functionality as Virtualization.Framework, but on a lower level, which removes the need for the utun device and manual NAT configuration completely.
+
The only remaining issue was how to parse the packets, as there were no Swift libraries that could do that at the time of writing, which brings us to Softnet.
Softnet, unlike Tart, is written in Rust. This complicates things a bit, because we now have to do IPC with the Tart process; however, this drawback is fully compensated by the sheer number of libraries in the Rust ecosystem.
+
We were able to quickly develop a packet filter with DHCP snooping functionality, which works similarly to the automatic IP address detection in libvirt’s network filters.
+
Once started with Softnet, a VM can only communicate with a DHCP server. Once a DHCP server assigns the VM an address, we remember it and allow only traffic from that address. Softnet does not modify any packets, but only drops them when they don’t match the learned VM’s IP.
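In simplified Python, the filtering state machine looks roughly like this — a sketch of the idea rather than Softnet’s actual Rust implementation:

```python
class DhcpSnoopingFilter:
    """Allow a guest VM to talk only to a DHCP server until it is assigned
    an address, then allow only traffic sourced from that learned address."""

    def __init__(self):
        self.learned_ip = None  # address the DHCP server leased to the VM

    def allow_outbound(self, src_ip, is_dhcp):
        # Before a lease is learned, only DHCP traffic may leave the VM.
        if self.learned_ip is None:
            return is_dhcp
        # Afterwards, drop anything not sourced from the learned address --
        # packets are never modified, only dropped on a mismatch.
        return src_ip == self.learned_ip

    def on_dhcp_ack(self, assigned_ip):
        # Snoop the DHCPACK from the server and remember the lease.
        self.learned_ip = assigned_ip


vm = DhcpSnoopingFilter()
assert vm.allow_outbound("0.0.0.0", is_dhcp=True)         # DHCPDISCOVER passes
assert not vm.allow_outbound("10.0.0.66", is_dhcp=False)  # spoofed IP dropped
vm.on_dhcp_ack("10.0.0.5")
assert vm.allow_outbound("10.0.0.5", is_dhcp=False)       # leased IP passes
assert not vm.allow_outbound("10.0.0.66", is_dhcp=False)  # still dropped
```

This is what defeats ARP/IP spoofing by a rogue VM: a packet claiming any address other than the snooped lease never leaves the host.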
+
Finally, Softnet already ships with Tart (when installed via Homebrew) and can be enabled with the --with-softnet command-line flag when starting a VM:
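For example (the VM name here is illustrative):

```shell
tart run --with-softnet ventura-base
```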
Implementing a user-space packet filter involves some overhead, but it seems to be the only option available at the moment.
+
Next, we are looking forward to rolling out Softnet isolation to production, which will double the capacity of parallel macOS VMs that Cirrus CI can run.
+
Stay tuned and don’t hesitate to send us your feedback either on GitHub or Twitter!
Apple Silicon is the inevitable future. Apple has no plans to release any new x86 hardware. In addition, many people have reported huge performance improvements after switching their builds to Apple Silicon.
There are no excuses not to switch to Apple Silicon, unless your CI doesn't support it yet.
+
That's why we are happy to announce Cirrus Runners -- managed Apple Silicon infrastructure for your existing CI.
Cirrus Runners are powered by the same infrastructure we've built over the years running macOS tasks as part of Cirrus CI.
We believe we have the most advanced and scalable tech out there for running macOS CI. We even created and open-sourced our own virtualization technology for Apple Silicon!
We are starting with GitHub Actions support first. Just install the Cirrus Runners App
and configure your subscription for as many runners as your organization needs. Then change runs-on in your workflow to use any of the images we support and manage:
+
name: Test Suite
jobs:
  test:
    runs-on: ghcr.io/cirruslabs/macos-ventura-xcode:latest
+
+
Each GitHub Actions job will be executed in a single-use virtual machine to ensure reproducibility and security of your workflows.
While workflows are executing, you'll see Cirrus on-demand runners on your organization's settings page at https://github.com/organizations/<ORGANIZATION>/settings/actions/runners.
Each Cirrus Runner has 4 M1 cores, compared to GitHub's own macOS Intel runners with just 3 cores.
On average you should expect double the performance of your actions after the switch.
+
There is no limit on the number of minutes for your workflows. Each Cirrus Runner costs $150 a month and you can utilize it 24x7.
For comparison, fully utilizing a slower Intel runner provided by GitHub will cost you roughly $3456 a month -- more than 20 times as expensive.
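The comparison math is easy to sanity-check (assuming GitHub's list price of $0.08 per minute for its hosted Intel macOS runners, which is the published rate at the time of writing):

```python
minutes_per_month = 60 * 24 * 30   # 43,200 minutes in a 30-day month

# GitHub-hosted macOS runners are billed per minute; 8 cents/minute is
# the list price for the 3-core Intel macOS runner (assumption).
github_rate_cents = 8
github_cost = github_rate_cents * minutes_per_month // 100  # dollars
print(github_cost)                  # 3456

cirrus_runner_cost = 150            # flat dollars per month, unlimited minutes
print(github_cost // cirrus_runner_cost)  # 23 -> "more than 20 times"
```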
+
We recommend purchasing several Cirrus Runners depending on your team size, so you can run actions in parallel.
Note that you can change your subscription at any time via this page.
Mobile CI, and managing Apple hardware in particular, is very difficult. We've spent years trying different approaches and polishing our setup,
and now we are happy to share it beyond Cirrus CI.
+
Have you already switched to Apple Silicon and how do you like it? Don’t hesitate to send us your feedback either on Twitter or via email!
TLDR: Intel-based Big Sur and High Sierra instances will stop working on January 1st 2023. Please migrate to M1-based Monterey and Ventura instances.
Below we'll provide some history and motivation for this decision.
We've been running macOS instances for almost 5 years now. We evaluated all the existing solutions and even successfully
operated two of them on the Intel platform before creating our own virtualization toolset for Apple Silicon called Tart.
We are switching the managed-by-us macOS instances to run exclusively in Tart virtual machines starting January 1st 2023.
We started back in 2018 by adopting a then-new virtualization technology called Anka. It worked fairly well for us to some extent.
We hit the first scaling issues pretty quickly when we reached around a dozen Mac Minis in our fleet. Anka Registry was simply bounded by the I/O of the single
server it was deployed to. You can't distribute huge 50+ GB templates to dozens of hosts simultaneously from a single server!
+
We had to implement some extra Ansible magic that distributed these templates via scp in O(log n) rounds, where n is the number of Mac Minis in one data center.
The magic pulled a new template from the Anka Registry to a single host; then subsequent hosts, instead of pulling from the registry, used scp to copy
from hosts that already had the template, and so on. That unblocked our growth and we continued using Anka.
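The fan-out scheme is easy to model (a sketch, not the actual Ansible code): in each round, every host that already has the template seeds one host that doesn't, so coverage doubles until the whole data center is served.

```python
import math

def distribution_rounds(n_hosts):
    """Count scp fan-out rounds: each round, every host that already has
    the template copies it to one host that doesn't, doubling coverage."""
    have = 1      # one host pulls the template from the Anka Registry first
    rounds = 0
    while have < n_hosts:
        have = min(have * 2, n_hosts)
        rounds += 1
    return rounds

# 64 Mac Minis are covered in 6 scp rounds instead of 64 registry pulls
print(distribution_rounds(64))                      # 6
assert distribution_rounds(64) == math.ceil(math.log2(64))
```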
+
Then, at the end of 2019 and in early 2020, there were a bunch of transient issues with Anka's networking layer. Sometimes hosts were just losing
internet connection, and all subsequent Anka VMs were unable to run anything until the host was restarted. We spent countless hours with the Veertu folks
trying to debug this transient but very annoying issue, with no luck. In the end we had to implement some workarounds and detection on our end.
At this point we started thinking of a way to replace Anka Controller, so we could potentially switch the virtualization layer as well.
+
With that in mind we started working on Cirrus CLI -- a CI-agnostic tool that can run "tasks" locally in containers or VMs.
Throughout 2020, we switched from an Anka cluster managed by MacStadium to a self-managed installation. We deployed
Anka Registry and Anka Controller on Google Cloud and got Mac Minis evenly distributed between two MacMiniVault data centers for redundancy.
We perfected our Ansible cookbooks and got very comfortable with rolling updates, so we didn't have downtime. We also prepared
Packer templates to automate the creation of virtual machines.
+
In parallel, Cirrus CLI matured: it was able to run tasks in Docker containers. It was time to find a replacement for Anka.
We had two criteria in mind: cost-efficiency and network stability. After some research we ended up with Parallels.
Network performance was better, and the starting time for VMs was a little slower but still very fast. And the price! Anka's
license cost us more than we paid for the hardware we rented to run it! Parallels was just $10/month/host.
+
Long story short, we added the necessary features to Cirrus CLI to run tasks in Parallels VMs and used the same Packer templates
to rebuild all the virtual machines. And in early 2021 we made the switch!
In the meantime, Apple Silicon was taking off. It was clear Apple was very serious about the transition and a full switch away from Intel processors.
But at the time none of the virtualization solutions supported Apple Silicon. It was a new stack with new challenges.
+
Thankfully, at the end of 2021, with the macOS Monterey release, Apple themselves released Virtualization.Framework, so companies like
Veertu and Parallels no longer needed to re-invent the wheel and reverse engineer all the things about macOS.
+
By February 2022 we were getting more and more requests to support M1 workloads in our CI, but none of the virtualization
solutions had adopted Virtualization.Framework, except for Anka 3.0. A switch back was off the table: Anka's pricing stayed
the same even though there is now little "know-how" left, because Apple liberated this knowledge with Virtualization.Framework.
+
We decided to give it a try and build our own virtualization solution. A couple of months later we open-sourced Tart and
a couple of other tools to help everyone with automation needs on Apple Silicon. One unique feature of Tart is integration with
OCI-compatible container registries to push/pull virtual machines from them. It simplifies the distribution of huge virtual machines
to hundreds of Mac Minis, because cloud container registries are super scalable.
+
We also added another fleet of M1 Mac Minis and offered M1 macOS virtual machines as part of Cirrus CI, which also includes a free tier for open-source projects.
Apple no longer sells Intel-based hardware, and a full transition is just a matter of time. For us, continuing to manage
a second generation of infrastructure is becoming a burden. We are fully committing to supporting Apple Silicon and have decided
to sunset our Intel-based offering starting January 1st 2023.
+
Please migrate your Big Sur and High Sierra macos_instances to Monterey or Ventura. Refer to the documentation for more details.
+
Have any questions? Still need to test on Intel? Don’t hesitate to send us your feedback either on Twitter or via email!
Unfortunately, the day has come. As a self-bootstrapped company, Cirrus Labs can no longer provide unlimited free usage of Cirrus CI
for public repositories. We have to put a limit on the amount of compute resources that can be consumed for free each month
by organizations and users. Starting September 1st 2023, there will be an upper monthly limit on free usage equal to 50 compute credits
(which equals a little over 16,000 CPU-minutes for Linux tasks).
+
The reason for the change is that we want to continue being a profitable business and keep Cirrus CI running,
+but unfortunately we haven’t found a better solution for a couple of ongoing issues described below.
+
+
+
Crypto miners are still active. One after another, many CI vendors have restricted free usage for this single reason.
Lately, only Cirrus CI and GitHub Actions have been allowing unlimited usage, and we are very proud of the effort we have put into battling the abuse.
We tried everything from clever firewalls and traffic analysis to some basic machine learning on factors like the similarity of config files and CPU usage patterns.
This effort got us a silver medal in the race of providing unlimited free usage for as long as we could. Congrats to GitHub Actions on getting the gold
and remaining the only CI with free unlimited usage for public repositories.
+
The Cirrus CI usage pattern is in many cases not optimal. This is just an observation we made during the decision-making process.
It appears that free Cirrus CI tasks have only 30-40% CPU utilization on average. On the other hand,
paid tasks that use compute credits have an average CPU utilization of 80%. We randomly picked a handful of tasks with low CPU utilization
and discovered that many people just requested the maximum possible resources that Cirrus CI allows "just because they could".
Frequently we saw tasks requesting 8 CPUs and 24GB of memory while in reality using only a single CPU core.
In addition to introducing the limits, we will also lower the prices for the existing compute resources. Starting August 1st,
we are lowering the existing prices for macOS and Windows instances by 60%, and for Linux and FreeBSD instances by 40%.
First of all, this change does not affect the majority of users. You can check your current monthly usage on your settings page.
Starting from August, once an account reaches the expected limit, a warning message will be displayed on all tasks.
There are a couple of options to avoid reaching the compute limit:
+
+
Improve CPU utilization of CI tasks. Cirrus CI collects CPU charts that can indicate if a particular task is not fully utilizing resources.
+
Bring your own compute. We recommend GKE Autopilot for container-based tasks and Google Cloud's Compute Engine overall for the best stability, performance and cost-efficiency.
+
Use compute credits. Cirrus CI does per-second billing for compute resources only and doesn’t have any hidden fees like ingress/egress traffic.
+
Migrate part of the workloads to GitHub Actions or another CI provider to balance the load. For example, keep Arm workloads on Cirrus CI and move the rest elsewhere. Cirrus CLI conveniently allows running tasks defined in the Cirrus configuration format on any other CI.
We are committed to continuing to provide the best CI possible for our customers and the OSS community. We anticipate that this change will positively impact the experience of Cirrus CI overall:
it will allow us to remove a few existing abuse-detection mechanisms that are currently slowing down task scheduling and execution. But of course such changes will be upsetting for a few,
and we hope for understanding. If you have any questions or concerns, please feel free to email us at support@cirruslabs.org.
Help us spread the word about Cirrus CI! As a non-traditional startup with no VC money to spare, we are always optimizing the costs of operating Cirrus CI for us and our users.
The innovative idea of bringing your own compute via direct integration with the APIs of cloud providers allows Cirrus CI users to have the most cost-efficient and scalable CI by design.
Compute resources are created and used on demand and there is no such thing as an "idle worker": Cirrus CI scales to 0 when there are no tasks to execute and can instantly scale to hundreds
and thousands of tasks executing in a matter of minutes.
This blog post will briefly go through the history of CI systems and describe how a role-model CI system works nowadays. After describing the core principles of CI systems, we’ll take a look at how the extremely fast evolution of cloud and virtualization technologies has allowed these principles to change, especially the concept of CI agents.
Cirrus CI already had great Linux and Windows support. The only missing platform was macOS, and there was a good reason for that.

TLDR: Please check the documentation for instructions on how to configure macOS builds on Cirrus CI. There is a little bit of history and motivation below.
When Cirrus CI was announced a few months ago Docker support was already pretty sophisticated. It was possible to use any existing Docker container image as an environment to run CI tasks in. But even though Docker is so popular nowadays and there are hundreds of thousands of containers created by community members, in some cases it’s still pretty hard to find a container that has everything installed for your builds. Just remember how many times you’ve seen apt-get install in CI scripts! Every such apt-get install is just a waste of time. Everything should be prebuilt into a container image! And now with Cirrus CI it’s easier than ever before!
“Wait what!? Yet another CI? Gosh…” one can say after seeing the title. Honestly, at Cirrus Labs we had the same thoughts and we tried to talk ourselves out of building yet another CI. But let us explain why we think there is a need for a better CI and how Cirrus CI is better.
While working on new functionality or fixing an issue, it’s crucial to get CI feedback as soon as possible. Fast CI builds are important, but it’s also important how fast one can find the reason for a failing build. The usual flow requires opening a separate page for the failing CI build and scrolling through all the logs to finally find a relevant error message. How inefficient!

Today Cirrus CI starts supporting GitHub Annotations to provide inline feedback right where you review your code. No need to switch context anymore!
Cirrus CI from day one was built around leveraging modern cloud computing services as backends for executing CI workloads. It allows teams to own the CI infrastructure while avoiding the pains of configuring and managing CI agents. The idea of traditional CI agent pools is simply obsolete.
Cirrus CI pioneered the idea of directly using compute services instead of requiring users to manage their own infrastructure, configure servers for running CI jobs, perform upgrades, etc. Instead, Cirrus CI just uses the APIs of cloud providers to create virtual machines or containers on demand. This fundamental design difference has multiple benefits compared to more traditional CIs:
Most Continuous Integration vendors try to lock you in, not only by providing some unique features that were attractive in the first place, but also by making you write hundreds of lines of YAML configuration unique to this particular CI or by making you configure all your scripts in the UI. No wonder it’s always a pain to migrate to another CI and hard to justify the effort! There are so many things to rewrite from one YAML format into another.

Today we are happy to announce Cirrus CLI — an open-source tool to run isolated tasks in any environment with Docker installed. Use one configuration format for running your CI builds the same way locally on your laptop or remotely in any CI. Read below to learn more about our motivation and technical details, or jump right to the GitHub repository and try Cirrus CLI for yourself!
We are happy to announce that macOS tasks on Cirrus CI Cloud have switched to a new virtualization technology, as well as a new overall orchestration architecture. This switch should be unnoticeable for end users, except that tasks should become much faster, since each macos_instance of the Cirrus CI Cloud offering will now utilize a full Mac Mini with 12 virtual CPUs and 24 GB of RAM.
TLDR Intel-based Big Sur and High Sierra instances will stop working on January 1st 2023. Please migrate to M1-based Monterey and Ventura instances.
+Below we'll provide some history and motivation for this decision.
We've been running macOS instances for almost 5 years now. We evaluated all the existing solutions and even successfully
+operated two of them on Intel platform before creating our own virtualization toolset for Apple Silicon called Tart.
+We are switching managed-by-us macOS instances to exclusively running in Tart virtual machines starting January 1st 2023.
Apple Silicon is the inevitable future. Apple has no plans to release any x86 hardware anymore. In addition, many people reported huge performance improvements after switching their builds to Apple Silicon.
+There are no excuses not to switch to Apple Silicon except if your CI is not supporting it yet.
+
In this case, we are happy to announce Cirrus Runners -- managed Apple Silicon infrastructure for your existing CI.
+Cirrus Runners are powered by the same infrastructure we've built other the years running macOS tasks as part of Cirrus CI.
+We believe we have the most advanced and scalable tech out there for running macOS CI. We even created and open-sourced our own virtualization technology for Apple Silicon!
Some time has passed since Cirrus Labs released Tart, an open-source tool to manage and run macOS virtual machines on Apple silicon. As Tart matured, we started using it for Cirrus CI’s macOS VM instances to replace other proprietary solutions.
+
+
However, there are some roadblocks that prevent us from scaling and running more than one VM on a single host:
Unfortunately the day has come. As a self-bootstrapped company Cirrus Labs can no longer provide unlimited usage of Cirrus CI
+for public repositories for free. We have to put a limit on the amount of compute resources that can be consumed for free each month
+by organizations and users. Starting September 1st 2023, there will be an upper monthly limit on free usage equal to 50 compute credits
+(which is equal to a little over 16,000 CPU-minutes for Linux tasks).
+
The reason for the change is that we want to continue being a profitable business and keep Cirrus CI running,
+but unfortunately we haven’t found a better solution for a couple of ongoing issues described below.
Unfortunately the day has come. As a self-bootstrapped company Cirrus Labs can no longer provide unlimited usage of Cirrus CI
+for public repositories for free. We have to put a limit on the amount of compute resources that can be consumed for free each month
+by organizations and users. Starting September 1st 2023, there will be an upper monthly limit on free usage equal to 50 compute credits
+(which is equal to a little over 16,000 CPU-minutes for Linux tasks).
+
The reason for the change is that we want to continue being a profitable business and keep Cirrus CI running,
+but unfortunately we haven’t found a better solution for a couple of ongoing issues described below.
TLDR Intel-based Big Sur and High Sierra instances will stop working on January 1st 2023. Please migrate to M1-based Monterey and Ventura instances.
+Below we'll provide some history and motivation for this decision.
We've been running macOS instances for almost 5 years now. We evaluated all the existing solutions and even successfully
+operated two of them on Intel platform before creating our own virtualization toolset for Apple Silicon called Tart.
+We are switching managed-by-us macOS instances to exclusively running in Tart virtual machines starting January 1st 2023.
Apple Silicon is the inevitable future. Apple has no plans to release any x86 hardware anymore. In addition, many people reported huge performance improvements after switching their builds to Apple Silicon.
+There are no excuses not to switch to Apple Silicon except if your CI is not supporting it yet.
+
In this case, we are happy to announce Cirrus Runners -- managed Apple Silicon infrastructure for your existing CI.
+Cirrus Runners are powered by the same infrastructure we've built other the years running macOS tasks as part of Cirrus CI.
+We believe we have the most advanced and scalable tech out there for running macOS CI. We even created and open-sourced our own virtualization technology for Apple Silicon!
Some time has passed since Cirrus Labs released Tart, an open-source tool to manage and run macOS virtual machines on Apple silicon. As Tart matured, we started using it for Cirrus CI’s macOS VM instances to replace other proprietary solutions.
+
+
However, there are some roadblocks that prevent us from scaling and running more than one VM on a single host:
Imagine dealing with a failing task that only reproduces in CI or a task with an environment that is is simply too cumbersome to bootstrap locally.
+
For a long time, the classic debugging approach worked just fine: do an attempt to blindly fix the issue or add debugging instructions and re-run. Got it working or found a clue? Cool. No? Do it once again!
+
Then Cirrus CLI appeared. It allows you to replicate the CI environment locally, but complex cases like custom VMs or other architectures are not covered due to platform limitations.
+
Anyway, both methods require some additional tinkering to gain access to the interactive session on the host where the task runs (i.e. something similar to docker exec -it container-ID).
+
Luckily no more! With the recent Cirrus Terminal integration, it’s now possible to have this one click away on Cirrus Cloud!
We are happy to announce that the macOS tasks on Cirrus CI Cloud have switched to a new virtualization technology as well as overall architecture of the orchestration. This switch should be unnoticeable for the end users except that the tasks should become much faster since now each macos_instance of the Cirrus CI Cloud offering will utilize a full Mac Mini with 12 virtual CPUs and 24G of RAM.
Cirrus CI pioneered an idea of directly using compute services instead of requiring users to manage their own infrastructure, configuring servers for running CI jobs, performing upgrades, etc. Instead, Cirrus CI just uses APIs of cloud providers to create virtual machines or containers on demand. This fundamental design difference has multiple benefits comparing to more traditional CIs:
Most Continuous Integration vendors try to lock you not only by providing some unique features that were attractive in the first place but also by making you write hundreds of lines of YAML configuration unique to this particular CI or by making you configure all your scripts in the UI. No wonder it’s always a pain to migrate to another CI and it’s hard to justify the effort! There are so many things to rewrite from one YAML format into another YAML format.
+
Today we are happy to announce Cirrus CLI — an open source tool to run isolated tasks in any environment with Docker installed. Use one configuration format for running your CI builds the same way locally on your laptop or remotely in any CI. Read below to learn more about our motivation and technical details or jump right to the GitHub repository and try Cirrus CLI for yourself!
While working on a new functionality or fixing an issue it’s crucial to get CI feedback as soon as possible. Fast CI builds are important but it’s also important how fast one can find a reason of a failing build. Usual flow requires to open a separate page for the failing CI build and scroll through all the logs to finally find a relevant error message. How inefficient!
+
Today Cirrus CI starts supporting GitHub Annotations to provide inline feedback right where you review your code. No need to switch context anymore!
Cirrus CI from the day one was build around leveraging modern cloud computing services as backends for executing CI workloads. It allows teams to own the CI infrastructure and at the same time to not have pains of configuring and managing CI agents. Anyways the idea of traditional CI agent pools is obsolete.
This blog post will briefly go through the history of CI systems and will describe how a role-model CI system works nowadays. After describing core principles of CI systems, we’ll take a look at how extremely fast evolution of cloud and virtualization technologies allowed to change these principles and especially concept of CI agents.
Cirrus CI already had great Linux and Windows support. The only missing platform was macOS and there was a good reason for that.
+
TLDR: Please check documentation for just instructions on how to configure macOS builds on Cirrus CI. The is a little bit of history and motivation below.
When Cirrus CI was announced a few months ago Docker support was already pretty sophisticated. It was possible to use any existing Docker container image as an environment to run CI tasks in. But even though Docker is so popular nowadays and there are hundreds of thousands of containers created by community members, in some cases it’s still pretty hard to find a container that has everything installed for your builds. Just remember how many times you’ve seen apt-get install in CI scripts! Every such apt-get install is just a waste of time. Everything should be prebuilt into a container image! And now with Cirrus CI it’s easier than ever before!
“Wait what!? Yet another CI? Gosh…” one can say after seeing the title. Honestly, at Cirrus Labs we had the same thoughts and we tried to talk ourselves out of building yet another CI. But let us explain why we think there is a need for a better CI and how Cirrus CI is better.
Cirrus CI from the day one was build around leveraging modern cloud computing services as backends for executing CI workloads. It allows teams to own the CI infrastructure and at the same time to not have pains of configuring and managing CI agents. Anyways the idea of traditional CI agent pools is obsolete.
We are happy to announce that macOS tasks on Cirrus CI Cloud have switched to a new virtualization technology, as well as an overall new orchestration architecture. This switch should be unnoticeable to end users, except that tasks should become much faster, since each macos_instance of the Cirrus CI Cloud offering will now utilize a full Mac Mini with 12 virtual CPUs and 24G of RAM.
Cirrus CI pioneered the idea of directly using compute services instead of requiring users to manage their own infrastructure, configure servers for running CI jobs, perform upgrades, etc. Instead, Cirrus CI just uses APIs of cloud providers to create virtual machines or containers on demand. This fundamental design difference has multiple benefits compared to more traditional CIs:
Most Continuous Integration vendors try to lock you in, not only by providing some unique features that were attractive in the first place, but also by making you write hundreds of lines of YAML configuration unique to this particular CI, or by making you configure all your scripts in the UI. No wonder it's always a pain to migrate to another CI and it's hard to justify the effort! There are so many things to rewrite from one YAML format into another YAML format.

Today we are happy to announce Cirrus CLI — an open source tool to run isolated tasks in any environment with Docker installed. Use one configuration format for running your CI builds the same way locally on your laptop or remotely in any CI. Read below to learn more about our motivation and technical details, or jump right to the GitHub repository and try Cirrus CLI for yourself!
While working on new functionality or fixing an issue, it's crucial to get CI feedback as soon as possible. Fast CI builds are important, but it's also important how fast one can find the reason for a failing build. The usual flow requires opening a separate page for the failing CI build and scrolling through all the logs to finally find a relevant error message. How inefficient!

Today Cirrus CI starts supporting GitHub Annotations to provide inline feedback right where you review your code. No need to switch context anymore!
TLDR: Intel-based Big Sur and High Sierra instances will stop working on January 1st 2023. Please migrate to M1-based Monterey and Ventura instances. Below we'll provide some history and motivation for this decision.
We've been running macOS instances for almost 5 years now. We evaluated all the existing solutions and even successfully operated two of them on the Intel platform before creating our own virtualization toolset for Apple Silicon called Tart. Starting January 1st 2023, we are switching managed-by-us macOS instances to exclusively running in Tart virtual machines.
Apple Silicon is the inevitable future. Apple has no plans to release any x86 hardware anymore. In addition, many people reported huge performance improvements after switching their builds to Apple Silicon. There are no excuses not to switch to Apple Silicon, except if your CI doesn't support it yet.
+
With that in mind, we are happy to announce Cirrus Runners -- managed Apple Silicon infrastructure for your existing CI. Cirrus Runners are powered by the same infrastructure we've built over the years running macOS tasks as part of Cirrus CI. We believe we have the most advanced and scalable tech out there for running macOS CI. We even created and open-sourced our own virtualization technology for Apple Silicon!
Some time has passed since Cirrus Labs released Tart, an open-source tool to manage and run macOS virtual machines on Apple silicon. As Tart matured, we started using it for Cirrus CI's macOS VM instances to replace other proprietary solutions.

However, there are some roadblocks that prevent us from scaling and running more than one VM on a single host:
Imagine dealing with a failing task that only reproduces in CI, or a task with an environment that is simply too cumbersome to bootstrap locally.

For a long time, the classic debugging approach worked just fine: attempt to blindly fix the issue or add debugging instructions, and re-run. Got it working or found a clue? Cool. No? Do it once again!

Then Cirrus CLI appeared. It allows you to replicate the CI environment locally, but complex cases like custom VMs or other architectures are not covered due to platform limitations.

Anyway, both methods require some additional tinkering to gain access to an interactive session on the host where the task runs (i.e. something similar to docker exec -it container-ID).

Luckily, no more! With the recent Cirrus Terminal integration, it's now possible to have this one click away on Cirrus Cloud!
Unfortunately the day has come. As a self-bootstrapped company, Cirrus Labs can no longer provide unlimited usage of Cirrus CI for public repositories for free. We have to put a limit on the amount of compute resources that can be consumed for free each month by organizations and users. Starting September 1st 2023, there will be an upper monthly limit on free usage equal to 50 compute credits (which is equal to a little over 16,000 CPU-minutes for Linux tasks).
+
The reason for the change is that we want to continue being a profitable business and keep Cirrus CI running, but unfortunately we haven't found a better solution for a couple of ongoing issues described below.
Cirrus CI has a set of Docker images ready for Android development. If these images are not the right fit for your project, you can always use any custom Docker image with Cirrus CI. For those images, the .cirrus.yml configuration file can look like:
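A minimal sketch of such a configuration (the image tag and Gradle task names are illustrative; substitute any image with the Android SDK pre-installed):

```yaml
container:
  image: ghcr.io/cirruslabs/android-sdk:34  # hypothetical tag; use any Android SDK image
  cpu: 4
  memory: 8G

check_task:
  build_script: ./gradlew assembleDebug
  test_script: ./gradlew test
```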
The Cirrus CI annotator supports providing inline reports on PRs and can parse Android Lint reports. Here is an example of an Android Lint task that you can add to your .cirrus.yml:
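A sketch of such a task (the image tag and report path are assumptions; adjust to your project's layout):

```yaml
lint_task:
  container:
    image: ghcr.io/cirruslabs/android-sdk:34  # hypothetical tag
  lint_script: ./gradlew lintDebug
  always:
    android_lint_artifacts:
      path: "**/reports/lint-results-debug.xml"
      format: android-lint
```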
The best way to test Go projects is by using official Go Docker images. Here is an example of what .cirrus.yml can look like for a project using Go Modules:
container:
  image: golang:latest

test_task:
  modules_cache:
    fingerprint_script: cat go.sum
    folder: $GOPATH/pkg/mod
  get_script: go get ./...
  build_script: go build ./...
  test_script: go test ./...

The same configuration works on Arm with an arm_container:

arm_container:
  image: golang:latest

test_task:
  modules_cache:
    fingerprint_script: cat go.sum
    folder: $GOPATH/pkg/mod
  get_script: go get ./...
  build_script: go build ./...
  test_script: go test ./...
We highly recommend configuring some sort of linting for your Go project. One of the options is GolangCI Lint. The Cirrus CI annotator supports providing inline reports on PRs and can parse GolangCI Lint reports. Here is an example of a GolangCI Lint task that you can add to your .cirrus.yml:
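A sketch of such a task, assuming the official golangci/golangci-lint image and a JSON report saved as an artifact in golangci format:

```yaml
lint_task:
  container:
    image: golangci/golangci-lint:latest
  # write the JSON report so the annotator can parse it even when linting fails
  run_script: golangci-lint run --out-format json > lint-report.json
  always:
    report_artifacts:
      path: lint-report.json
      type: text/json
      format: golangci
```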
We recommend using the official Gradle Docker containers since they have Gradle-specific configurations already set up. For example, standard Java containers don't have a pre-configured user and as a result don't have the HOME environment variable present, which makes Gradle complain.
To preserve caches between Gradle runs, add a cache instruction as shown below. The trick here is to clean up the ~/.gradle/caches folder at the very end of a build. Gradle creates some unique nondeterministic files in the ~/.gradle/caches folder on every run, which makes Cirrus CI re-upload the cache every time. Cleaning them up avoids that, and you get faster builds!
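A sketch of such a cache instruction (the cleanup paths are modeled on the nondeterministic files described above; verify them against your Gradle version):

```yaml
task:
  gradle_cache:
    folder: ~/.gradle/caches
  check_script: ./gradlew check --build-cache
  # Gradle writes unique nondeterministic files on every run; delete them
  # so the cached folder stays unchanged and is not re-uploaded every time.
  cleanup_script:
    - rm -rf ~/.gradle/caches/$GRADLE_VERSION/
    - rm -rf ~/.gradle/caches/transforms-1
    - rm -rf ~/.gradle/caches/journal-1
```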
If your project uses a buildSrc directory, the build cache configuration should also be applied to buildSrc/settings.gradle.
+
To do this, put the build cache configuration above into a separate gradle/buildCacheSettings.gradle file, then apply it to both your settings.gradle and buildSrc/settings.gradle.
Please make sure you are running Gradle commands with the --build-cache flag or have org.gradle.caching enabled in the gradle.properties file. Here is an example of a gradle.properties file that we use internally for all Gradle projects:
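A sketch of such a gradle.properties (these flags are common choices for CI caching, not necessarily the exact internal file):

```properties
# enable the Gradle build cache for every invocation
org.gradle.caching=true
# run decoupled projects in parallel
org.gradle.parallel=true
# keep the daemon warm between script steps
org.gradle.daemon=true
```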
Here is a .cirrus.yml that parses and uploads JUnit reports at the end of the build:
+
junit_test_task:
  junit_script: <replace this comment with instructions to run the test suites>
  always:
    junit_result_artifacts:
      path: "**/test-results/**.xml"
      format: junit
      type: text/xml
+
If it is running on a pull request, annotations will also be displayed in-line.
The Additional Containers feature makes it super simple to run the same Docker MySQL image as you might be running in production for your application. A running instance of the latest GA version of MySQL can be obtained with the following six lines in your .cirrus.yml:
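A sketch of those lines (an empty root password is a CI-only convenience, never for production):

```yaml
task:
  additional_containers:
    - name: mysql
      image: mysql:latest
      port: 3306
      env:
        MYSQL_ROOT_PASSWORD: ""  # CI-only; never use an empty password in production
```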
Yarn 2 (also known as Yarn Berry) has a different package cache location (.yarn/cache). To run tests, it would look like this:
container:
  image: node:latest

test_task:
  yarn_cache:
    folder: .yarn/cache
    fingerprint_script: cat yarn.lock
  install_script:
    - yarn set version berry
    - yarn install
  test_script: yarn run test

The same configuration works on Arm with an arm_container:

arm_container:
  image: node:latest

test_task:
  yarn_cache:
    folder: .yarn/cache
    fingerprint_script: cat yarn.lock
  install_script:
    - yarn set version berry
    - yarn install
  test_script: yarn run test
ESLint reports are supported by Cirrus CI Annotations. This way you can see all the linting issues without leaving the pull request you are reviewing! You'll need to generate an ESLint report file (for example, eslint.json) in one of your task's scripts, then save it as an artifact in eslint format:
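A sketch of such a task (script and report names are assumptions):

```yaml
lint_task:
  container:
    image: node:latest
  install_script: npm ci
  # `|| true` keeps the task going on lint failures so the report still gets uploaded
  lint_script: npx eslint . --format json --output-file eslint.json || true
  always:
    eslint_report_artifacts:
      path: eslint.json
      format: eslint
```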
Official Python Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed packages based on the contents of requirements.txt and runs pytest:
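A sketch of such a configuration (the cache folder and fingerprint are common conventions, not verbatim from the page):

```yaml
container:
  image: python:latest

test_task:
  pip_cache:
    folder: ~/.cache/pip
    fingerprint_script: echo $PYTHON_VERSION && cat requirements.txt
    populate_script: pip install -r requirements.txt
  install_script: pip install -r requirements.txt
  test_script: pytest
```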
Python unittest reports are supported by Cirrus CI Annotations. This way you can see which tests are failing without leaving the pull request you are reviewing! Here is an example of a .cirrus.yml that produces and stores unittest reports:
unittest_task:
  container:
    image: python:slim
  install_dependencies_script: |
    pip3 install unittest_xml_reporting
  # replace 'tests' with the module, unittest.TestCase,
  # or unittest.TestSuite that the tests are in
  run_tests_script: python3 -m xmlrunner tests
  always:
    upload_results_artifacts:
      path: ./*.xml
      format: junit
      type: text/xml

The same configuration works on Arm with an arm_container:

unittest_task:
  arm_container:
    image: python:slim
  install_dependencies_script: |
    pip3 install unittest_xml_reporting
  # replace 'tests' with the module, unittest.TestCase,
  # or unittest.TestSuite that the tests are in
  run_tests_script: python3 -m xmlrunner tests
  always:
    upload_results_artifacts:
      path: ./*.xml
      format: junit
      type: text/xml
Now you should get annotations for your test results.
Qodana by JetBrains is a code quality monitoring tool that identifies and suggests fixes for bugs, security vulnerabilities, duplications, and imperfections. It brings all the smart features you love in the JetBrains IDEs.

Here is an example of a .cirrus.yml configuration file which will save Qodana's report as an artifact, parse it, and report the results as annotations:
Cirrus CI doesn't provide built-in functionality to upload artifacts to a GitHub release, but this functionality can be added via a script. For a release, Cirrus CI will provide the CIRRUS_RELEASE environment variable along with the CIRRUS_TAG environment variable. CIRRUS_RELEASE indicates the release id, which can be used to upload assets.

For security reasons, Cirrus CI only requires write access to the Check API and doesn't require write access to repository contents. That's why you need to create a personal access token with full access to the repo scope. Once an access token is created, please create an encrypted variable from it and save it to .cirrus.yml:
+
env:
  GITHUB_TOKEN: ENCRYPTED[qwerty]
+
Now you can use a script to upload your assets:
+
#!/usr/bin/env bash

if [[ "$CIRRUS_RELEASE" == "" ]]; then
  echo "Not a release. No need to deploy!"
  exit 0
fi

if [[ "$GITHUB_TOKEN" == "" ]]; then
  echo "Please provide GitHub access token via GITHUB_TOKEN environment variable!"
  exit 1
fi

file_content_type="application/octet-stream"
files_to_upload=(
  # relative paths of assets to upload
)

for fpath in "${files_to_upload[@]}"  # expand the whole array, not just its first element
do
  echo "Uploading $fpath..."
  name=$(basename "$fpath")
  url_to_upload="https://uploads.github.com/repos/$CIRRUS_REPO_FULL_NAME/releases/$CIRRUS_RELEASE/assets?name=$name"
  curl -X POST \
    --data-binary "@$fpath" \
    --header "Authorization: token $GITHUB_TOKEN" \
    --header "Content-Type: $file_content_type" \
    "$url_to_upload"
done
+
Official Ruby Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed gems based on the Ruby version and the contents of Gemfile.lock, and runs rspec:
When you are not committing Gemfile.lock (in Ruby gem repositories, for example), you can run bundle install (or bundle update) in install_script instead of populate_script in bundle_cache. The Cirrus agent is clever enough to re-upload the cache entry only if the cached folder has changed during task execution. Here is an example of a .cirrus.yml that always runs bundle install:
container:
  image: ruby:latest

rspec_task:
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile
      - cat *.gemspec
  install_script: bundle install # or `update` for the freshest bundle
  rspec_script: bundle exec rspec

The same configuration works on Arm with an arm_container:

arm_container:
  image: ruby:latest

rspec_task:
  bundle_cache:
    folder: /usr/local/bundle
    fingerprint_script:
      - echo $RUBY_VERSION
      - cat Gemfile
      - cat *.gemspec
  install_script: bundle install # or `update` for the freshest bundle
  rspec_script: bundle exec rspec
Official Rust Docker images can be used for builds. Here is a basic example of a .cirrus.yml that caches crates in $CARGO_HOME based on the contents of Cargo.lock:
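A sketch of such a configuration (the cache folder follows the standard $CARGO_HOME layout; the cleanup step is an assumption explained in the surrounding text):

```yaml
container:
  image: rust:latest

test_task:
  registry_cache:
    folder: $CARGO_HOME/registry
    fingerprint_script: cat Cargo.lock
  build_script: cargo build
  test_script: cargo test
  # the registry index changes rapidly; drop it so the cache stays valid
  before_cache_script: rm -rf $CARGO_HOME/registry/index
```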
Please note the before_cache_script that removes the registry index from the cache before uploading it at the end of a successful task. The registry index changes very rapidly, which would invalidate the cache. before_cache_script deletes the index and leaves only the required crates for caching.
It is possible to use nightly builds of Rust via the official rustlang/rust:nightly container. Here is an example of a .cirrus.yml that runs tests against the latest stable and nightly versions of Rust:
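One way to sketch this is a matrix over the two images (marking nightly with allow_failures is an optional choice, not required):

```yaml
task:
  matrix:
    - container:
        image: rust:latest
    - allow_failures: true  # nightly breakage should not fail the build
      container:
        image: rustlang/rust:nightly
  build_script: cargo build
  test_script: cargo test
```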
Vanilla FreeBSD VMs don't set some environment variables required by Cargo for effective caching. Pointing the HOME environment variable at an arbitrary location should fix caching:
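A sketch of the fix (the image family and /tmp location are illustrative):

```yaml
freebsd_instance:
  image_family: freebsd-14-0  # any supported FreeBSD image

test_task:
  env:
    HOME: /tmp  # any writable location works; Cargo just needs HOME to be set
  test_script: cargo test
```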
XCLogParser is a CLI tool that parses Xcode and xcodebuild's logs (xcactivitylog files) and produces reports in different formats.

Here is an example of a .cirrus.yml configuration file which will save XCLogParser's flat JSON report as an artifact, parse it, and report the results as annotations:
Cirrus CI has the following limitations on how many CPUs a single user can run for free on Cirrus Cloud Clusters for public repositories, per platform:

16.0 CPUs for the Linux platform (Containers or VMs).

16.0 CPUs for the Arm Linux platform (Containers).

8.0 CPUs for the Windows platform (Containers or VMs).

8.0 CPUs for FreeBSD VMs.

4.0 CPUs for macOS VMs (1 VM).

Note that a single task can't request more than 8 CPUs (except macOS VMs, which are not configurable).
+
+
Monthly CPU Minutes Limit
+
Additionally, there is an upper monthly limit on free usage equal to 50 compute credits (which is equal to 10,000 CPU-minutes for Linux tasks, or 500 minutes for macOS tasks, which always use 4 CPUs).
+
+
If you are using Cirrus CI with your private personal repositories under the $10/month plan, you'll have twice the limits:

32.0 CPUs for the Linux platform (Containers or VMs).

16.0 CPUs for the Windows platform (Containers or VMs).

16.0 CPUs for FreeBSD VMs.

8.0 CPUs for macOS VMs (2 VMs).
+
+
There are no limits on how many VMs or Containers you can run in parallel if you bring your own infrastructure
+or use Compute Credits for either private or public repositories.
+
+
Cache and Logs Redundancy
+
By default, Cirrus CI persists caches and logs for 90 days. If you bring your own compute services, this period can be configured directly in your cloud provider's console.
The free tier of Cirrus CI is intended for public OSS projects to run tests and other validations continuously. If your repository is configured to use Cirrus CI in a questionable way just to exploit Cirrus CI infrastructure, your repository might be blocked.
+
Here are a few examples of such questionable activities we've seen so far:
+
+
Using Cirrus CI as a powerhouse for arbitrary CPU-intensive calculations (including crypto mining).

Using Cirrus CI to download a pirated movie, re-encode it, upload it as a Cirrus artifact, and distribute it.

Using Cirrus CI's distributed infrastructure to emulate user activity on a variety of websites to trick advertisers.
Instances running on Cirrus Cloud Clusters use dynamic IPs by default. It's possible to request a static 35.222.255.190 IP for all the "managed-by-us" instance types except macOS VMs via the use_static_ip field. Here is an example of a Linux Docker container with a static IP:
+
task:
  name: Test IP
  container:
    image: cirrusci/wget:latest
  use_static_ip: true
  script: wget -qO- ifconfig.co
+
It means that Cirrus CI hasn't heard from the agent for quite some time. In 99.999% of cases this happens for one of two reasons:
+
+
+
Your task was executing on a Cirrus Cloud Cluster. Cirrus Cloud Clusters are backed by Google Cloud's Spot VMs for cost efficiency reasons, and Google Cloud preempted the VM your task was executing on. Cirrus CI tries to minimize the possibility of such cases by constantly rotating VMs before Google Cloud preempts them, but there is still a chance of such an inconvenience.
+
+
+
Your CI task used too much memory, which led to a crash of the VM or container.
+
+
+
Agent process on a persistent worker exited unexpectedly!
+
This means that either an agent process or a VM with an agent process exited before reporting the last instruction of a task.
It means that Cirrus CI has made a successful API call to a computing service to allocate resources, but the requested resource wasn't created.

If this happened for an OSS project, please contact support immediately. Otherwise, check your cloud console first and then contact support if it's still not clear what happened.
Spot VMs can be preempted, which requires rescheduling: tasks that were executing on these VMs are automatically restarted. This is a rare event, since the autoscaler is constantly rotating instances, but preemption still happens occasionally. All automatic re-runs and stateful tasks using compute credits are always executed on regular VMs.
By default, Cirrus CI has an execution limit of 60 minutes for each task. However, this default timeout duration can be changed by using the timeout_in field in the .cirrus.yml configuration file:
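For example, to raise the limit for a long-running task (the 90m value and script name are illustrative):

```yaml
task:
  timeout_in: 90m  # override the default 60-minute limit
  build_script: ./run_long_build.sh  # hypothetical script
```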
It means that Cirrus CI has made a successful API call to a computing service to start a container, but unfortunately the container runtime or the corresponding computing service had an internal error.
Cirrus CI itself doesn't provide any discounts, except for the Cirrus Cloud Cluster, which is free for open source projects. But since Cirrus CI delegates execution of builds to different computing services, discounts from your cloud provider will be applied to Cirrus CI builds.
Use compute credits to run as many parallel tasks as you want and pay only for the CPU time used by these tasks. Another approach is to bring your own infrastructure and pay your cloud provider directly within your current billing.
Cirrus CI leverages the elasticity of modern clouds to always have resources available to process your builds. Engineers should never wait for builds to start.
It is possible to run FreeBSD Virtual Machines on the FreeBSD Cloud Cluster the same way one can run Linux containers. To accomplish this, use freebsd_instance in your .cirrus.yml:
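A minimal sketch (the image family and script are illustrative; supported families are listed just below):

```yaml
freebsd_instance:
  image_family: freebsd-14-0

task:
  install_script: pkg install -y bash
  script: uname -a
```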
Any of the official FreeBSD VMs on Google Cloud Platform are supported. Here are a few of them, which are self-explanatory:
+
+
freebsd-15-0-snap (15.0-SNAP)
+
freebsd-14-0 (14.0-RELEASE)
+
freebsd-13-2 (13.2-RELEASE)
+
+
It's also possible to specify a concrete version of an image by name via the image_name field. To get a full list of available images, please run the following gcloud command:
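The command might look like this (the project name is the one used for the FreeBSD images hosted on GCP; verify it against the current FreeBSD release engineering docs):

```shell
gcloud compute images list --project freebsd-org-cloud-dev --no-standard-images
```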
Any build starts with a change pushed to GitHub. Since Cirrus CI is a GitHub Application, a webhook event will be triggered by GitHub. From the webhook event, Cirrus CI will parse the Git branch and the SHA for the change. Based on this information, a new build will be created.

After build creation, Cirrus CI will use GitHub's APIs to download the content of the .cirrus.yml file for the SHA. Cirrus CI will evaluate it and create the corresponding tasks.

These tasks (defined in the .cirrus.yml file) will be dispatched within Cirrus CI to different services responsible for scheduling on a supported computing service. Cirrus CI's scheduling service will use the appropriate APIs to create and manage a VM instance or a Docker container on the particular computing service. The scheduling service will also configure a start-up script that downloads the Cirrus CI agent, configures it to send logs back, and starts it. The Cirrus CI agent is a self-contained executable written in Go, which means it can be executed anywhere.

Cirrus CI's agent will request the commands to execute for a particular task and will stream back logs, caches, artifacts and exit codes of the commands upon execution. Once the task finishes, the scheduling service will clean up the used VM or container.

This is a diagram of how Cirrus CI schedules a task on Google Cloud Platform. The blue arrows represent API calls and the green arrows represent unidirectional communication between an agent inside a VM or a container and Cirrus CI. Other chores, such as health checking of the agent and GitHub status reporting, happen in real time as a task is running.
Cirrus CI supports many different compute services when you bring your own infrastructure, but internally at Cirrus Labs we use Google Cloud Platform for running all managed-by-us instances except macos_instance. Things like Docker Builder and freebsd_instance are basically syntactic sugar for launching Compute Engine instances from a particular limited set of images.

With compute_engine_instance, it is possible to use any publicly available image for running your Cirrus tasks. Such instances are particularly useful when you can't use Docker containers, for example, when you need to test things against newer versions of the Linux kernel than the Docker host has.

Here is an example of using a compute_engine_instance to run a VM with KVM available:
+
compute_engine_instance:
  image_project: cirrus-images # GCP project.
  image: family/docker-kvm # family or a full image name.
  platform: linux
  architecture: arm64 # optional. By default, amd64 is assumed.
  cpu: 4 # optional. Defaults to 2 CPUs.
  memory: 16G # optional. Defaults to 4G.
  disk: 100 # optional. By default, uses the smallest disk size required by the image.
  nested_virtualization: true # optional. Whether to enable Intel VT-x. Defaults to false.
+
+
Nested Virtualization License

Make sure that your source image already has the necessary license. Otherwise, nested virtualization won't work.
We recommend using Packer for building your custom images. As an example, please take a look at our Packer templates used for building the Docker Builder VM image.
"Docker Builder" tasks are a way to build and publish Docker images to Docker registries of your choice, using a VM as the build environment. In essence, a docker_builder is basically a task that is executed in a VM with pre-installed Docker. A docker_builder can be defined the same way as a task:
Leveraging features such as Task Dependencies, Conditional Execution
and Encrypted Variables with a Docker Builder can help build relatively
complex pipelines. It can also be used to execute builds which need special privileges.
+
In the example below, a docker_builder will only be executed on tag creation, once both test and lint
tasks have finished successfully:
The Docker Builder VM has QEMU pre-installed and is able to execute multi-arch builds via buildx.
Add the following setup_script to enable buildx and then use docker buildx build instead of the regular docker build:
Under the hood, a simple integration with Google Compute Engine
is used, and docker_builder is basically syntactic sugar for the following compute_engine_instance configuration:
Docker has the --cache-from flag which allows using a previously built image as a cache source. This way only changed
+layers will be rebuilt which can drastically improve performance of the build_script. Here is a snippet that uses
+the --cache-from flag:
+
# pull an image if available
docker pull myrepo/foo:latest || true
docker build --cache-from myrepo/foo:latest \
  --tag myrepo/foo:$CIRRUS_TAG \
  --tag myrepo/foo:latest .
+
With Docker Builder there is no need to build and push custom containers just so they can be used as an environment to run CI tasks in.
Cirrus CI can do it for you! Just declare a path to a Dockerfile with the dockerfile field for your container or arm_container
declarations in your .cirrus.yml like this:
Cirrus CI will build a container and cache the resulting image based on the Dockerfile's content. On the next build,
Cirrus CI will check if a container was already built, and if so, Cirrus CI will instantly start a CI task using the cached image.
+
Under the hood, for every Dockerfile that is needed to be built, Cirrus CI will create a Docker Builder task as a dependency.
+You will see such build_docker_image_HASH tasks in the UI.
+
+Danger of using COPY and ADD instructions
+
Cirrus only includes files directly added or copied into a container image in the cache key, but it does not recursively
walk the contents of folders that are being included into the image. This means that for a public repository a potential bad actor
can create a PR with malicious scripts included into a container, wait for it to be cached, and then reset the PR so it looks harmless.
To use dockerfile with gke_container you first need to create a VM with Docker installed within your GCP project.
This image will be used to build Docker images for caching. Once this image is available, for example under the
name MY_DOCKER_VM, you can use it like this:
If your builder image is stored in another project you can also specify it by using the builder_image_project field.
By default, Cirrus CI assumes the builder image is stored within the same project as the GKE cluster.
+
+
+Using with private EKS clusters
+
To use dockerfile with eks_container you need three things:
+
+
Either create an AMI with Docker installed or use an existing one like the ECS-optimized AMI. For example, MY_DOCKER_AMI.
+
Create a role which has the AmazonEC2ContainerRegistryFullAccess policy attached. For example, cirrus-builder.
+
Create a cirrus-cache repository in your Elastic Container Registry and make sure the user that aws_credentials are associated with has ecr:DescribeImages access to it.
+
+
Once all of the above requirements are met you can configure eks_container like this:
+
eks_container:
  region: us-east-2
  cluster_name: my-company-arm-cluster
  dockerfile: .ci/Dockerfile
  builder_image: MY_DOCKER_AMI
  builder_role: cirrus-builder  # role for builder instance profile
  builder_instance_type: c7g.xlarge  # should match the architecture below
  builder_subnet_ids:  # optional, list of subnets from your default VPC to randomly choose from for scheduling the instance
    - ...
  builder_subnet_filters:  # optional, map of filters to use for DescribeSubnets API call. Note to make sure Cirrus is given `ec2:DescribeSubnets`
    - name: tag:Name
      values:
        - subnet1
        - subnet2
  architecture: arm64  # default is amd64
+
+
This will make Cirrus CI check whether the cirrus-cache repository in the us-east-2 region contains a precached image
for .ci/Dockerfile of this repository.
Besides the ability to build docker images using a dedicated docker_builder task which runs on VMs, it is also possible to run docker builds on Kubernetes.
+To do so we are leveraging the additional_containers and docker-in-docker functionality.
+
Currently Cirrus CI supports running builds on these Kubernetes distributions:
- complex builds are potentially faster than docker-in-docker
- safer due to better isolation between builds

Kubernetes

- much faster start: creating a new container usually takes a few seconds vs. creating a VM, which usually takes about a minute on GCP and even longer on AWS.
- ability to use an image with your custom tools (e.g. containing Skaffold) to invoke docker instead of using a fixed VM image.
This is a full example of how to build a docker image on GKE using docker and push it to GCR.
While not required, the script section in this example also includes some best-practice cache optimizations.
+
+
AWS EKS support
+
While the steps below are specifically written for and tested with GKE (Google Kubernetes Engine), they should work equally well on AWS EKS.
+
+
docker_build_task:
  gke_container:  # for AWS, replace this with `eks_container`
    image: docker:latest  # This image can be any custom image. The only hard requirement is that it needs to have `docker-cli` installed.
    cluster_name: cirrus-ci-cluster  # your gke cluster name
    zone: us-central1-b  # zone of the cluster
    namespace: cirrus-ci  # namespace to use
    cpu: 1
    memory: 1500Mb
    additional_containers:
      - name: dockerdaemon
        privileged: true  # docker-in-docker needs to run in privileged mode
        cpu: 4
        memory: 3500Mb
        image: docker:dind
        port: 2375
        env:
          DOCKER_DRIVER: overlay2  # this speeds up the build
          DOCKER_TLS_CERTDIR: ""  # disable TLS to preserve the old behavior
  env:
    DOCKER_HOST: tcp://localhost:2375  # required so that docker CLI commands connect to the "additional container" instead of `docker.sock`
    GOOGLE_CREDENTIALS: ENCRYPTED[qwerty239abc]  # JSON key for a GCP service account with the `roles/storage.admin` role on the `artifacts.<your_gcp_project>.appspot.com` bucket as described at https://cloud.google.com/container-registry/docs/access-control. Only required if you want to pull/push to GCR. For Docker Hub you need different credentials.
  login_script:
    echo $GOOGLE_CREDENTIALS | docker login -u _json_key --password-stdin https://gcr.io
  build_script:
    - docker pull gcr.io/my-project/my-app:$CIRRUS_LAST_GREEN_CHANGE || true
    - docker build
      --cache-from=gcr.io/my-project/my-app:$CIRRUS_LAST_GREEN_CHANGE
      -t gcr.io/my-project/my-app:$CIRRUS_CHANGE_IN_REPO
      .
  push_script:
    - docker push gcr.io/my-project/my-app:$CIRRUS_CHANGE_IN_REPO
+
Since the additional_container needs to run in privileged mode, the isolation between the Docker build and the host is somewhat limited, so ideally you should create a separate cluster for Cirrus CI builds.
If this is a concern you can also try out Kaniko or Makisu to run builds in unprivileged containers.
Docker Pipe is a way to execute each instruction in its own Docker container
while persisting the working directory between the containers. For example, you can build your application in
one container, run some lint tools in other containers and finally deploy your app via CLI from yet another container.
+
No need to create huge containers with every single tool pre-installed!
+
A pipe can be defined the same way as a task with the only difference that instructions
+should be grouped under the steps field defining a Docker image for each step to be executed in. Here is an example of how
+we build and validate links for the Cirrus CI documentation that you are reading right now:
+
pipe:
  name: Build Site and Validate Links
  steps:
    - image: squidfunk/mkdocs-material:latest
      build_script: mkdocs build
    - image: raviqqe/liche:latest  # links validation tool in a separate container
      validate_script: /liche --document-root=site --recursive site/
+
+
The amount of CPU and memory a pipe has access to can be configured with the resources field:
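For instance, a minimal sketch of such a configuration (the name, image and resource values here are illustrative):

```yaml
pipe:
  name: Build
  resources:
    cpu: 4      # CPUs shared by all steps of the pipe
    memory: 12G # memory shared by all steps of the pipe
  steps:
    - image: alpine:latest
      build_script: echo "building..."
```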
Cirrus CI supports container and arm_container instances in order to run your CI workloads on amd64 and arm64
+platforms respectively. Cirrus CI uses Kubernetes clusters running in different clouds that are the most suitable for
+running each platform:
+
+
For container instances Cirrus CI uses a GKE cluster of compute-optimized instances running in Google Cloud.
+
For arm_container instances Cirrus CI uses an EKS cluster of Graviton2 instances running in AWS.
+
+
Cirrus Cloud Clusters are configured the same way as anyone can configure a private Kubernetes cluster for their own
+repository. Cirrus CI supports connecting managed Kubernetes clusters from most of the cloud providers. Please check out
+all the supported computing services Cirrus CI can integrate with.
+
By default, a container is given 2 CPUs and 4 GB of memory, but it can be configured in .cirrus.yml:
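For example, a task that requests more resources might look like this (the image name and values are illustrative):

```yaml
container:
  image: node:latest
  cpu: 4     # request 4 CPUs instead of the default 2
  memory: 8G # request 8 GB of memory instead of the default 4
```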
Containers on the Cirrus Cloud Cluster can use a maximum of 8.0 CPUs and up to 32 GB of memory. The memory limit is tied
to the number of CPUs requested: for each CPU you can't get more than 4 GB of memory.

Tasks using Compute Credits have higher limits and can use up to 28.0 CPUs and 112 GB of memory respectively.
+
+Using in-memory disks
+
Some I/O-intensive tasks may benefit from using a tmpfs disk mounted as the working directory. Set the use_in_memory_disk flag
to enable an in-memory disk for a container:
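A minimal sketch of such a declaration (the image and script are illustrative):

```yaml
task:
  name: I/O heavy tests
  container:
    image: golang:latest
    use_in_memory_disk: true  # mount the working directory on tmpfs
  test_script: go test ./...
```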
It is possible to run containers with KVM enabled. Some types of CI tasks can benefit tremendously
from native virtualization. For example, Android-related tasks can benefit from running hardware-accelerated
emulators instead of software-emulated ARM emulators.
+
In order to enable the KVM module for your containers, add kvm: true to your container declaration. Here is an
example of a task that runs hardware-accelerated Android emulators:
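A sketch of what such a declaration might look like (the image name and emulator scripts are hypothetical placeholders; adapt them to your Android tooling):

```yaml
task:
  name: Android emulator tests
  container:
    image: my-android-image:latest  # hypothetical image with Android SDK and an emulator pre-installed
    kvm: true  # expose /dev/kvm to the container
    cpu: 4
    memory: 16G
  start_emulator_background_script:
    $ANDROID_HOME/emulator/emulator -avd test -no-window -gpu swiftshader_indirect
  test_script: ./gradlew connectedCheck
```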
Because of the additional virtualization layer, it takes about a minute to acquire the necessary resources to start such tasks.
KVM-enabled containers are backed by dedicated VMs, which restricts the amount of CPU resources that can be used.
The value of cpu must be 1 or an even integer. Values like 0.5 or 3 are not supported for KVM-enabled containers.
It is possible to use private Docker registries with Cirrus CI to pull containers. To provide access to a private registry
of your choice, you'll need to obtain a JSON Docker config file for your registry and create an encrypted variable
for Cirrus CI to use.
+
+Using Kubernetes secrets with private clusters
+
If you don't see auth for your registry, it means your Docker installation is using a credentials store. In this case
you can manually create the auth entry using a Base64-encoded string of your username and your PAT (Personal Access Token).
Here's how to generate it:
+
echo $USERNAME:$PAT | base64
+
+
Create an encrypted variable from the Docker config and put it in .cirrus.yml:
+
registry_config: ENCRYPTED[...]
+
+
Now Cirrus CI will be able to pull images from Oracle Container Registry:
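For example, a task can now reference a private image directly (the registry path and image name below are hypothetical placeholders):

```yaml
registry_config: ENCRYPTED[...]

task:
  container:
    image: container-registry.oracle.com/myproject/myimage:latest  # hypothetical private image
  test_script: ./run-tests.sh
```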
It is possible to run M1 macOS Virtual Machines (similar to how one can run Linux containers) on the Cirrus Cloud macOS Cluster.
Use macos_instance in your .cirrus.yml files:
+
macos_instance:
  image: ghcr.io/cirruslabs/macos-sonoma-base:latest

task:
  script: echo "Hello World from macOS!"
+
Please refer to the macos-image-templates repository on how the images were built and
+don't hesitate to create issues if current images are missing something.
Cirrus CI itself doesn't have a built-in mechanism to send notifications but, since Cirrus CI follows best practices of
integrating with GitHub, it's possible to configure a GitHub Action that will send any kind of notification.
+
Here is a full list of curated Cirrus Actions for GitHub including ones to send notifications: cirrus-actions.
It's possible to leverage GitHub Actions' own email notification mechanism to send emails about Cirrus CI failures.
To enable it, add the following .github/workflows/email.yml workflow file:
Cirrus CI pioneered the idea of directly using compute services
instead of requiring users to manage their own infrastructure, configure servers for running CI jobs, perform upgrades, etc.
Instead, Cirrus CI just uses APIs of cloud providers to create virtual machines or containers on demand. This fundamental
design difference has multiple benefits compared to more traditional CIs:
+
+
Ephemeral environment. Each Cirrus CI task starts in a fresh VM or a container without any state left by previous tasks.
+
Infrastructure as code. All VM versions and container tags are specified in .cirrus.yml configuration file in your Git repository.
 For any revision in the past, Cirrus tasks can be identically reproduced at any point in time in the future using the exact versions of VMs or container tags specified in .cirrus.yml at that particular revision. Just imagine how difficult it is to do a security release for a 6-month-old version if your CI environment changes independently.
+
Predictability and cost efficiency. Cirrus CI uses the elasticity of modern clouds and creates VMs and containers on demand
 only when they are needed for executing Cirrus tasks and deletes them right after. Immediately scale from 0 to hundreds or
 thousands of parallel Cirrus tasks without a need to over-provision infrastructure or constantly monitor whether your team has reached the maximum parallelism of your current CI plan.
For some use cases the traditional CI setup is still useful, since not everything is available in the cloud.
For example, testing hardware itself or some third-party devices that can be attached with wires.
For such use cases it makes sense to go with a traditional CI setup: install a binary on the hardware which will constantly poll for new tasks
and execute them one after another.
+
This is precisely what Persistent Workers for Cirrus CI are: a simple way to run Cirrus tasks beyond cloud!
First, create a persistent worker pool for your personal account or a GitHub organization (https://cirrus-ci.com/settings/github/<ORGANIZATION>):
+
+
Once a persistent worker pool is created, copy the registration token of the pool and follow the Cirrus CLI guide
to configure a host that will be a persistent worker.
+
Once configured, target task execution on a worker by using persistent_worker instance and matching by workers' labels:
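For example (the label names and values here are illustrative; use whatever labels your workers were registered with):

```yaml
task:
  persistent_worker:
    labels:
      os: darwin
      arch: arm64
  script: echo "running on a persistent worker!"
```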
By default, a persistent worker spawns all tasks on the same host machine it runs on.

However, using the isolation field, a persistent worker can utilize a VM or a container engine to increase the separation between tasks and to unlock the ability to use different operating systems.
To use this isolation type, install Tart on the persistent worker's host machine.
+
Here's an example of a configuration that will run the task inside of a fresh macOS virtual machine created from a remote ghcr.io/cirruslabs/macos-ventura-base:latest VM image:
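A sketch of such an isolation configuration (the user and password values are placeholders for whatever credentials the VM image is configured with):

```yaml
persistent_worker:
  isolation:
    tart:
      image: ghcr.io/cirruslabs/macos-ventura-base:latest
      user: admin      # placeholder: user configured in the VM image
      password: admin  # placeholder: password for that user
```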
Once the VM spins up, the persistent worker will connect to the VM's IP address over SSH using the user and password credentials and run the latest agent version.
Most commonly, Cirrus tasks are declared in a .cirrus.yml file in YAML format as documented in the Writing Tasks guide.
+
YAML, as a language, is great for declaring simple to moderate configurations, but sometimes just using a declarative language is not enough.
+One might need some conditional execution or an easy way to generate multiple similar tasks. Most continuous integration services solve this problem
by introducing a special domain-specific language (DSL) into the existing YAML. In the case of Cirrus CI, we have the only_if keyword
for conditional execution and the matrix modification for generating similar tasks.
These options are mostly hacks to work around the declarative nature of YAML, where in reality an imperative language
would be a better fit. This is why Cirrus CI allows tasks to be configured in Starlark in addition to YAML.
+
Starlark is a procedural programming language similar to Python that originated in the Bazel build tool. It is ideal
for embedding within systems that want to safely allow user-defined logic. There are a few key differences that made us
choose Starlark over common alternatives like JavaScript/TypeScript or WebAssembly:
+
+
Starlark doesn't require compilation. There's no need to introduce a full-blown compile and deploy process for a few dozen lines of logic.
+
Starlark scripts can be executed instantly on any platform. There is a Starlark interpreter written in Go which integrates nicely with the Cirrus CLI and Cirrus CI infrastructure.
+
Starlark has built-in functionality for loading external modules which is ideal for config sharing. See module loading for details.
With module loading you can re-use other people's code to avoid wasting time writing tasks from scratch.
+For example, with the official task helpers the example above can be refactored to:
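As an illustration, a refactored configuration using the cirrus-modules helpers might look roughly like this (the helper names and signatures below are assumptions; check them against the module's own documentation):

```python
# .cirrus.star -- a sketch using hypothetical helper signatures
load("github.com/cirrus-modules/helpers", "task", "container", "script")

def main(ctx):
    return [
        task(
            name="test",
            instance=container("node:latest"),
            instructions=[script("test", "yarn install", "yarn test")],
        ),
    ]
```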
The generated YAML is then appended to .cirrus.yml (if any) before the combined config is passed to the final YAML parser.
+
With Starlark, it's possible to generate parts of the configuration dynamically based on some external conditions:
+
+
Parsing files inside the repository to pick up some common settings (for example, parse package.json to see if it contains a lint script and generate a linting task).
Different events will trigger execution of different top-level functions in the .cirrus.star file. These functions have reserved names
and will be called with different arguments depending on the event which triggered the execution.
main() is called once a Cirrus CI build is triggered in order to generate additional configuration that will be appended to .cirrus.yml before parsing.
+
The main function can return a single object or a list of objects, which will be automatically serialized into YAML. If plain text is returned,
it will be appended to .cirrus.yml as is.
+
Note that the .cirrus.yml configuration file is optional and the whole build can be generated via evaluation of the .cirrus.star file.
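For example, a main() can generate a task as a plain dictionary mirroring what you would otherwise write in .cirrus.yml (a minimal sketch; the image and script are illustrative):

```python
# .cirrus.star
def main(ctx):
    # This structure is serialized to YAML and appended to .cirrus.yml
    return [
        {
            "task": {
                "container": {"image": "python:slim"},
                "test_script": "python -m pytest",
            },
        },
    ]
```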
It's also possible to execute Starlark scripts on updates to the current build or any of the tasks within the build.
+Think of it as WebHooks running within Cirrus that don't require any infrastructure on your end.
+
Expected names of Starlark hook functions in .cirrus.star are on_build_<STATUS> or on_task_<STATUS> respectively.
Please refer to the Cirrus CI GraphQL Schema for a
full list of existing statuses, but most commonly on_build_failed/on_build_completed and on_task_failed/on_task_completed
are used. These functions should expect a single context argument passed by the Cirrus Cloud. At the moment a hook's context only contains
a single field, payload, containing the same payload as a webhook.
+
One caveat of Starlark hook execution is the CIRRUS_TOKEN environment variable that contains a token to access the Cirrus API.
The scope of CIRRUS_TOKEN is restricted to the build associated with that particular hook invocation and allows, for example,
automatically re-running tasks. Here is an example of a Starlark hook that automatically re-runs a failed task in case a particular
transient issue is found in its logs:
+
# load some helpers from an external module
load("github.com/cirrus-modules/graphql", "rerun_task_if_issue_in_logs")

def on_task_failed(ctx):
    if "Test" not in ctx.payload.data.task.name:
        return
    if ctx.payload.data.task.automaticReRun:
        print("Task is already an automatic re-run! Won't even try to re-run it...")
        return
    rerun_task_if_issue_in_logs(ctx.payload.data.task.id, "Time out")
+
You can also specify an exact commit hash instead of a branch name to prevent accidental changes.
+
+
Loading private modules
+
If your organization has a private repository called cirrus-modules with Cirrus CI installed, then this repository
will be available for loading within the repositories of your organization.
+
+
To load .star files from repositories other than GitHub, add a .git suffix at the end of the repository name, for example:
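For example, a load from a non-GitHub host might look like this (the repository path and symbol name are hypothetical):

```python
# note the `.git` suffix after the repository name
load("gitlab.com/my-org/my-modules.git/lib.star", "my_helper")
```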
While not technically a builtin, is_test is a bool
that allows Starlark code to determine whether it's running in a test environment via the Cirrus CLI. This can be useful for limiting test complexity,
e.g. by not making a real HTTP request and mocking/skipping it instead. Read more about module testing in a separate guide in the Cirrus CLI repository.
changes_include() is a Starlark alternative to the changesInclude() function commonly found in the YAML configuration files.
+
It takes at least one string with a pattern and returns a bool that represents whether any of the specified patterns matched any of the affected files in the running context.
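A sketch of how it might be used inside main() (the patterns and generated task are illustrative):

```python
# .cirrus.star
def main(ctx):
    # Only generate a docs task when documentation files were touched
    if changes_include("docs/*", "mkdocs.yml"):
        return [{
            "task": {
                "container": {"image": "python:slim"},
                "build_script": "mkdocs build",
            },
        }]
    return []
```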
changes_include_only() is a Starlark alternative to the changesIncludeOnly() function commonly found in the YAML configuration files.
+
It takes at least one string with a pattern and returns a bool that represents whether any of the specified patterns matched all the affected files in the running context.
The cirrus.zipfile module provides methods to read Zip archives.

Instantiate a ZipFile object with the zipfile.ZipFile(data) function call, then call the namelist() and open(filename) methods to retrieve information about the archive contents.
load("cirrus", "fs", "zipfile")

def is_java_archive(path):
    # Read Zip archive contents from the filesystem
    archive_contents = fs.read(path)
    if archive_contents == None:
        return False

    # Open Zip archive and a file inside of it
    zf = zipfile.ZipFile(archive_contents)
    manifest = zf.open("META-INF/MANIFEST.MF")

    # Does the manifest contain the expected version?
    if "Manifest-Version: 1.0" in manifest.read():
        return True

    return False
+
At the moment Cirrus CI only supports repositories hosted on GitHub. This guide will walk you through the installation process.
If you are interested in support for other code hosting platforms, please fill out this form
to help us prioritize the support and be notified once it is available.
Choose a plan for your personal account or for an organization you have admin rights for.
+
+
GitHub Apps can be installed on all repositories or on a repository-by-repository basis for granular access control. For
example, Cirrus CI can be installed only on public repositories and will only have access to those public repositories.
In contrast, classic OAuth Apps don't have such restrictions.
+
+
+
Change Repository Access
+
You can always revisit Cirrus CI's repository access settings on your installation page.
Once Cirrus CI is installed for a particular repository, you must add either a .cirrus.yml configuration file or a .cirrus.star script to the root of the repository.
The .cirrus.yml file defines tasks that will be executed for every build of the repository.
+
For a Node.js project, your .cirrus.yml could look like:
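A minimal sketch for such a Node.js project (the script names assume a standard package.json with a test script):

```yaml
container:
  image: node:latest

task:
  name: Test
  install_script: npm install
  test_script: npm test
```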
If your repository happens to have a Dockerfile in the root, Cirrus CI will attempt to build it even without
a corresponding .cirrus.yml configuration file.
+
+
You will see all your Cirrus CI builds on cirrus-ci.com once signed in.
+
+
GitHub status checks for each task will appear on GitHub as well.
+
+
Newly created PRs will also get Cirrus CI's status checks.
+
+
+
Examples
+
Don't forget to check examples page for ready-to-copy examples of some .cirrus.yml
+configuration files for different languages and build systems.
All builds created by your account can be viewed on the Cirrus CI Web App after signing in with
your GitHub account:
+
+
After clicking on Sign In you'll be redirected to GitHub in order to authorize access:
+
+
+
Note about Act on your behalf
+
Cirrus CI only asks for several kinds of permissions that you can see on your installation page.
+These permissions are read-only except for write access to checks and commit statuses in order for Cirrus CI to
+be able to report task statuses via checks or commit statuses.
+
There is a long thread discussing this weird "Act on your behalf" wording here
on GitHub's own community forum.
If you initially chose to allow Cirrus CI to access all of your repositories, all you need to do is push a .cirrus.yml to start
building your repository on Cirrus CI.
+
If you only allowed Cirrus CI to access certain repositories, then add your new repository to
+the list of repositories Cirrus CI has access to via this page,
+then push a .cirrus.yml to start building on Cirrus CI.
When a user triggers a build on Cirrus CI by either pushing a change to a repository, creating a PR or a release,
Cirrus CI will associate the corresponding user's permissions with the build and tasks within that build. Those permissions
are exposed to tasks via the CIRRUS_USER_PERMISSIONS environment variable and are mapped to GitHub's collaborator permissions
of the user for the given repository. Only tasks with write and admin permissions will get decrypted values of the
encrypted variables.
+
When working with the Cirrus GraphQL API, either directly or indirectly through the Cirrus CI Web UI, permissions play a key role.
Not only does one need read permission to view builds and tasks of a private repository, but in order to perform any GraphQL mutation
one will need at least write permission, with a few exceptions:
+
+
admin permission is required for deleting a repository via RepositoryDeleteMutation.
+
admin permission is required for creating API access tokens via GenerateNewOwnerAccessTokenMutation and GenerateNewScopedAccessTokenMutation.
+
+
Note that for public repositories the none collaborator permission is mapped to read in order to give public view access to anyone.
For every task Cirrus CI starts a new Virtual Machine or a new Docker Container on a given compute service.
+Using a new VM or a new Docker Container each time for running tasks has many benefits:
+
+
Atomic changes to the environment where tasks are executed. Everything about a task is configured in the .cirrus.yml file, including
 the VM image version and Docker container image version. After committing changes to .cirrus.yml, not only will new tasks use the new environment,
 but outdated branches will also continue using the old configuration.
+
Reproducibility. A fresh environment guarantees no corrupted artifacts or caches are left over from previous tasks.
+
Cost efficiency. Most compute services offer per-second pricing, which makes them ideal for use with Cirrus CI.
 Also, each task for a repository can define the ideal amount of CPUs and memory specific to the nature of the task. No need to manage
 pools of similar VMs or try to fit workloads within the limits of a given continuous integration system.
+
+
To be fair there are of course some disadvantages of starting a new VM or a container for every task:
+
+
Virtual Machine Startup Speed. Starting a VM can take from a few dozen seconds to a minute or two, depending on the cloud provider and
 the particular VM image. Starting a container, on the other hand, takes just a few hundred milliseconds! But even a minute
 on average for starting up VMs is not a big inconvenience in exchange for a more stable, reliable and reproducible CI.
+
Cold local caches for every task execution. Many tools tend to store caches like downloaded dependencies locally
 to avoid downloading them again in the future. Since Cirrus CI always uses fresh VMs and containers, such local caches will always
 be empty. The performance implications of empty local caches can be avoided by using Cirrus CI features like the
 built-in caching mechanism. Some tools like Gradle can
 even take advantage of the built-in HTTP cache!
+
+
Please check the list of currently supported cloud compute services below. In case you have your own hardware, please
+take a look at Persistent Workers, which allow connecting anything to Cirrus CI.
Cirrus CI can schedule tasks on several Google Cloud Compute services. In order to interact with Google Cloud APIs
+Cirrus CI needs permissions. Creating a service account
+is a common way to safely give granular access to parts of Google Cloud Projects.
+
+
Isolation
+
We recommend creating a separate Google Cloud project for running CI builds to make sure tests are
isolated from production data. Having a separate project will also show how much money is spent on CI and how
efficient Cirrus CI is.
+
+
Once you have a Google Cloud project for Cirrus CI please create a service account by running the following command:
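A sketch of such a command, assuming a service account named cirrus-ci and a PROJECT_ID environment variable (adjust the names to your setup):

```shell
# create a dedicated service account for Cirrus CI in your project
gcloud iam service-accounts create cirrus-ci \
    --project="${PROJECT_ID}" \
    --display-name="Cirrus CI"
```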
Depending on the compute service, Cirrus CI will need different roles
assigned to the service account. But Cirrus CI will always need permissions to refresh its token, generate pre-signed URLs (for artifact upload/download to work) and view monitoring:
By default Cirrus CI will store logs and caches for 90 days, but this can be changed by manually configuring a
lifecycle rule for the Google Cloud Storage bucket that Cirrus CI is
using.
Finally, create an encrypted variable from the contents of the
service-account-credentials.json file and add it to the top of the .cirrus.yml file:
+
gcp_credentials: ENCRYPTED[qwerty239abc]
+
+
Now Cirrus CI can store logs and caches in Google Cloud Storage for tasks scheduled on either GCE or GKE. Please check
the following sections
for additional instructions about Compute Engine or Kubernetes Engine.
+
+
Supported Regions
+
Cirrus CI currently supports the following GCP regions: us-central1, us-east1, us-east4, us-west1, us-west2,
europe-west1, europe-west2, europe-west3 and europe-west4.
+
Please contact support if you are interested in support for other regions.
By configuring Cirrus CI as an identity provider, Cirrus CI will be able to acquire temporary access tokens on-demand
+for each task.
+Please read Google Cloud documentation to learn more
+about security and other benefits of using a workload identity provider.
+
Now let's set up Cirrus CI as a workload identity provider:
+
+
+
First, let's make sure the IAM Credentials API is enabled:
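That can be done with a single gcloud invocation (assuming PROJECT_ID is already exported):

```shell
# enable the IAM Service Account Credentials API for the project
gcloud services enable iamcredentials.googleapis.com \
    --project="${PROJECT_ID}"
```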
export WORKLOAD_IDENTITY_POOL_ID="..." # value from above
+
+
+
+
Create a Workload Identity Provider in that pool:
+
# TODO(developer): Update this value to your GitHub organization.
export OWNER="organization" # e.g. "cirruslabs"

gcloud iam workload-identity-pools providers create-oidc "cirrus-oidc" \
  --project="${PROJECT_ID}" \
  --location="global" \
  --workload-identity-pool="ci-pool" \
  --display-name="Cirrus CI" \
  --attribute-mapping="google.subject=assertion.aud,attribute.owner=assertion.owner,attribute.actor=assertion.repository,attribute.actor_visibility=assertion.repository_visibility,attribute.pr=assertion.pr" \
  --attribute-condition="attribute.owner == '$OWNER'" \
  --issuer-uri="https://oidc.cirrus-ci.com"
+
+
The attribute mappings map claims in the Cirrus CI JWT to assertions
you can make about the request (like the repository name or repository visibility).
In the example above, the --attribute-condition flag asserts that the provider can be used with any repository of your organization.
You can restrict access further with attributes like repository, repository_visibility and pr.
Use this value as the workload_identity_provider value in your Cirrus configuration file:
+
gcp_credentials:
  # TODO(developer): replace PROJECT_NUMBER and PROJECT_ID with the actual values
  workload_identity_provider: projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/ci-pool/providers/cirrus-oidc
  service_account: cirrus-ci@${PROJECT_ID}.iam.gserviceaccount.com
+
In order to schedule tasks on Google Compute Engine, the service account that Cirrus CI operates through should have the necessary
role assigned. This can be done by running a gcloud command:
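A sketch of such a binding, assuming roles/compute.admin is the role required for creating and managing instances (verify the exact role against Cirrus CI's documentation):

```shell
# grant the Cirrus CI service account permission to manage Compute Engine instances
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
    --member="serviceAccount:cirrus-ci@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/compute.admin"
```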
It's also possible to specify a concrete image name instead of the periodically rolling image family. Use the image_name field
+instead of image_family:
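For example (the project and image names are illustrative):

```yaml
gce_instance:
  image_project: my-project
  image_name: my-custom-image-2024-01-01  # a concrete image instead of a rolling family
```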
Building an immutable VM image with all necessary software pre-configured is a known best practice with many benefits.
It makes sure the environment where a task is executed is always the same, and that no time is wasted on useless work like
installing a package over and over again for every single task.
+
There are many ways how one can create a custom image for Google Compute Engine. Please refer to the official documentation.
+At Cirrus Labs we are using Packer to automate building such
+images. An example of how we use it can be found in our public GitHub repository.
Google Compute Engine supports Windows images, and Cirrus CI can take full advantage of them by explicitly specifying
the platform of an image like this:
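A sketch of such a configuration; the image project and family below refer to Google's public Windows images and may need adjusting for your setup:

```yaml
gce_instance:
  image_project: windows-cloud
  image_family: windows-2019-core
  platform: windows
```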
Google Compute Engine supports FreeBSD images, and Cirrus CI can take full advantage of them by explicitly specifying
the platform of an image like this:
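A sketch of such a configuration; the image family below is an assumption, check the FreeBSD project's published GCE images for current names:

```yaml
gce_instance:
  image_project: freebsd-org-cloud-dev
  image_family: freebsd-13-2 # hypothetical family name, check available FreeBSD images
  platform: freebsd
```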
It is possible to run a container directly on a Compute Engine VM with pre-installed Docker. Use the gce_container field
+to specify a VM image and a Docker container to execute on the VM (gce_container extends gce_instance definition
+with a few additional fields):
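A minimal sketch, assuming a custom VM image with Docker pre-installed (all names are placeholders):

```yaml
gce_container:
  image_project: my-project # hypothetical project hosting the VM image
  image_name: my-docker-enabled-image # hypothetical VM image with Docker pre-installed
  container: golang:latest # Docker container to execute on the VM
```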
Note that gce_container always runs containers in privileged mode.
+
If your VM image has Nested Virtualization Enabled,
it's possible to use KVM from the container by specifying the enable_nested_virtualization flag. Here is an example of
using a KVM-enabled container to run a hardware-accelerated Android emulator:
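A sketch of such a configuration; the image and container names are hypothetical placeholders:

```yaml
task:
  gce_container:
    image_project: my-project # hypothetical
    image_name: image-with-nested-virtualization # hypothetical VM image with nested virtualization enabled
    container: my-android-emulator-image # hypothetical container with the Android SDK and an AVD
    enable_nested_virtualization: true
  accel_check_script: emulator -accel-check # verify KVM acceleration is available
```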
By default Cirrus CI creates Google Compute instances without any scopes,
so an instance can't access Google Cloud Storage, for example. But sometimes it can be useful to grant some permissions to an
instance by using the scopes key of gce_instance. For example, if a particular task builds Docker images and then pushes
them to Container Registry, its configuration file can look something like:
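A sketch of such a configuration; the image fields and registry path are placeholders, and `storage-rw` is the GCE scope alias granting read/write access to Cloud Storage (which backs Container Registry):

```yaml
gcp_credentials: ENCRYPTED[qwerty239abc]

task:
  gce_instance:
    image_project: my-project # hypothetical
    image_family: docker-builder # hypothetical image with Docker pre-installed
    scopes:
      - storage-rw # allows pushing to Container Registry
  build_script: docker build --tag gcr.io/my-project/my-app:latest .
  push_script: docker push gcr.io/my-project/my-app:latest
```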
Cirrus CI can schedule spot instances with all the price
benefits and stability risks. Sometimes the risk of an instance being preempted at any time can be tolerated; for example,
gce_instance can be configured to schedule spot instances for non-master branches like this:
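A sketch of such a conditional configuration (the image fields are placeholders; the expression follows the boolean expression syntax used elsewhere in Cirrus CI configs):

```yaml
gce_instance:
  image_project: my-project # hypothetical
  image_family: my-image-family # hypothetical
  spot: $CIRRUS_BRANCH != 'master' # spot instances for all branches except master
```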
Scheduling tasks on Compute Engine has one big disadvantage: waiting for an instance to
start, which usually takes around a minute. One minute is not that long, but it can't compete with the hundreds of milliseconds
it takes a GKE container cluster to start a container.
+
To start scheduling tasks on a container cluster we first need to create one using gcloud. Here is a recommended configuration
+of a cluster that is very similar to what is used for the managed container instances. We recommend creating a cluster with two node pools:
+
+
default-pool with a single node and no autoscaling for system pods required by Kubernetes.
+
workers-pool that will use Compute-Optimized instances
 and SSD storage for better performance. This pool will also be able to scale down to 0 when there are no tasks to run.
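A sketch of creating such a cluster with gcloud; all flag values here are assumptions illustrating the two-pool layout, so tune the zone, machine type and autoscaling limits to your workload:

```shell
# default-pool: a single node for Kubernetes system pods
gcloud container clusters create cirrus-ci-cluster \
  --zone us-central1-a \
  --num-nodes "1"

# workers-pool: compute-optimized nodes with SSD storage that can scale down to 0
gcloud container node-pools create workers-pool \
  --cluster cirrus-ci-cluster \
  --zone us-central1-a \
  --machine-type "c2-standard-30" \
  --disk-type "pd-ssd" \
  --enable-autoscaling --min-nodes "0" --max-nodes "8" \
  --num-nodes "1"
```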
Done! Now, after creating the cirrus-ci-cluster cluster and configuring gcp_credentials, tasks can be scheduled on the
newly created cluster like this:
+
```yaml
gcp_credentials: ENCRYPTED[qwerty239abc]

gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  location: us-central1-a # cluster zone, or region for multi-zone clusters
  namespace: default # Kubernetes namespace to create pods in
  cpu: 6
  memory: 24GB
  nodeSelectorTerms: # optional
    - matchExpressions:
        - key: cloud.google.com/gke-spot
          operator: In
          values:
            - "true"
```
+
+
+
Using in-memory disk
+
By default Cirrus CI mounts an emptyDir into
the /tmp path to protect the pod from unnecessary eviction by the autoscaler. It is possible to switch the emptyDir's medium to
in-memory tmpfs storage instead of the default one by setting the use_in_memory_disk field of gke_container to true,
or to any other expression that uses environment variables.
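For example (cluster fields are placeholders reusing the cluster created above):

```yaml
gke_container:
  image: node:latest
  cluster_name: cirrus-ci-cluster
  location: us-central1-a
  use_in_memory_disk: true # mount /tmp as tmpfs instead of the default medium
```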
+
+
+
Running privileged containers
+
You can run privileged containers on your private GKE cluster by setting the privileged field of gke_container to true,
or to any other expression that uses environment variables. The privileged field is also available for any additional container.
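For example (cluster fields are placeholders):

```yaml
gke_container:
  image: docker:latest
  cluster_name: cirrus-ci-cluster
  location: us-central1-a
  privileged: true # e.g. for Docker-in-Docker style workloads
```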
There are two options to provide access to your infrastructure: via a traditional IAM user or
+via a more flexible and secure Identity Provider.
+
+Permissions
+
A user or a role that Cirrus CI will be using for orchestrating tasks on AWS should at least have access to S3 in order to store
+logs and cache artifacts. Here is a list of actions that Cirrus CI requires to store logs and artifacts:
Creating an IAM user for programmatic access
is a common way to safely give granular access to parts of your AWS account.
+
Once you've created a user for Cirrus CI, you'll need to provide the key id and the access key itself. In order to do so,
please create an encrypted variable with the following content:
By configuring Cirrus CI as an identity provider, Cirrus CI will be able to acquire temporary access tokens on-demand
+for each task. Please read AWS documentation to learn more
+about security and other benefits of using a workload identity provider.
+
Now let's set up Cirrus CI as a workload identity provider. Here is a CloudFormation template that can configure Cirrus CI
as an OpenID Connect Identity Provider. Please be extra careful and review this template; specifically, pay attention to
the condition that asserts the claims CIRRUS_OIDC_TOKEN has.
This example template only checks that CIRRUS_OIDC_TOKEN comes from any repository under your organization.
 If you are planning to use AWS compute services only for private repositories, you should change this condition to:
Additionally, if you are planning to access production services from within your CI tasks, please create a separate
+role with even stricter asserts for additional security. The same CIRRUS_OIDC_TOKEN can be used to acquire tokens for multiple roles.
+
+
+
The output of running the template will be a role that can be used as aws_credentials in your .cirrus.yml configuration:
+
```yaml
aws_credentials:
  role_arn: arn:aws:iam::123456789:role/CirrusCI-Role-Something-Something
  role_session_name: cirrus # an identifier for the assumed role session
  region: us-east-2 # region to use for calling the STS
```
+
+
Note that you'll need to add permissions required for Cirrus to that role.
Now tasks can be scheduled on EC2 by configuring ec2_instance like this:
+
```yaml
task:
  ec2_instance:
    image: ami-0a047931e1d42fdb3
    type: t2.micro
    region: us-east-1
    subnet_ids: # optional, list of subnets from your default VPC to randomly choose from for scheduling the instance
      - ...
    subnet_filters: # optional, map of filters to use for DescribeSubnets API call. Note to make sure Cirrus is given `ec2:DescribeSubnets`
      - name: tag:Name
        values:
          - subnet1
          - subnet2
    architecture: arm64 # defaults to amd64
    spot: true # defaults to false
    block_device_mappings: # empty by default
      - device_name: /dev/sdg
        ebs:
          volume_size: 100 # to increase the size of the root volume
      - device_name: /dev/sda1
        virtual_name: ephemeral0 # to add an ephemeral disk for supported instances
      - device_name: /dev/sdj
        ebs:
          snapshot_id: snap-xxxxxxxx
  script: ./run-ci.sh
```
+
The value for the image field of ec2_instance can be just the image id in the format ami-*,
but there are two more convenient options where Cirrus will do the image id resolution for you:
to figure out the AMI right before scheduling the instance (Cirrus will pick the freshest AMI from the list, based on creation date).
Please make sure the IAM user or role has the ec2:DescribeImages permission.
Please follow the instructions on how to create an EKS cluster
and add worker nodes to it. And don't forget to
add the necessary permissions for the IAM user or OIDC role that Cirrus CI is using:
To verify that Cirrus CI will be able to communicate with your cluster, please make sure that, when you are locally logged in
as the user that Cirrus CI acts as, you can successfully run the following commands and see your worker nodes up and running:
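For example (a sketch; the cluster name is a placeholder, and `aws eks update-kubeconfig` is the usual way to point kubectl at an EKS cluster):

```shell
aws eks update-kubeconfig --name cirrus-ci-cluster # hypothetical cluster name
kubectl get nodes # worker nodes should be listed in the Ready state
```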
If you have an issue with accessing your EKS cluster via kubectl, most likely you did not create the cluster
+with the user that Cirrus CI is using. The easiest way to do so is to create the cluster through AWS CLI with the
+following command:
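A sketch of such a call; the role ARN and VPC parameters are placeholders you'll need to fill in from your environment:

```shell
aws eks create-cluster \
  --name cirrus-ci-cluster \
  --role-arn "arn:aws:iam::123456789:role/eks-service-role" \
  --resources-vpc-config "subnetIds=subnet-aaa,subnet-bbb,securityGroupIds=sg-ccc"
```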
Please add the AmazonS3FullAccess policy to the role used for creation of EKS workers (the same role you put in aws-auth-cm.yaml
when enabling worker nodes to join the cluster).
+
+
+Greedy instances
+
Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.
Cirrus CI can schedule tasks on several Azure services. In order to interact with Azure APIs
+Cirrus CI needs permissions. First, please choose a subscription you want to use for scheduling CI tasks.
+Navigate to the Subscriptions blade within the Azure Portal
and save the $SUBSCRIPTION_ID that we'll use below for setting up a service principal.
+
Creating a service principal
+is a common way to safely give granular access to parts of Azure:
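The usual way is a single az command, scoping the principal to the subscription saved above (the Contributor role is an assumption; a narrower role works too):

```shell
az ad sp create-for-rbac \
  --role="Contributor" \
  --scopes="/subscriptions/${SUBSCRIPTION_ID}"
```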
Azure Container Instances (ACI) is an ideal
candidate for running modern CI workloads. ACI makes it possible to run Linux and Windows containers without thinking about
the underlying infrastructure.
+
Once azure_credentials is configured as described above, tasks can be scheduled on ACI by configuring aci_instance like this:
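A sketch, assuming a resource_group field pointing at a pre-created resource group (the group name is a placeholder):

```yaml
azure_credentials: ENCRYPTED[qwerty239abc]

task:
  aci_instance:
    image: cirrusci/windowsservercore:2019
    resource_group: cirrus-ci # hypothetical resource group to create container instances in
  script: dir
```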
Linux-based images are usually pretty small and don't require much tweaking. For Windows containers, ACI recommends
following a few basic tips
in order to reduce startup time.
Cirrus CI can schedule tasks on several Oracle Cloud services. In order to interact with OCI APIs, Cirrus CI needs permissions.
Please create a user that Cirrus CI will act on behalf of:
+
```shell
oci iam user create --name cirrus --description "Cirrus CI Orchestrator"
```
+
+
Please configure the cirrus user to be able to access storage, launch instances and have access to Kubernetes clusters.
+The easiest way is to add cirrus user to Administrators group, but it's not as secure as a granular access configuration.
+
By default, for every repository you'll start using Cirrus CI with, Cirrus will create a bucket with 90 days lifetime policy.
+In order to allow Cirrus to configure lifecycle policies please add the following policy as described in the documentation.
+Here is an example of the policy for us-ashburn-1 region:
+
```
Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy
```
+
+
Once you've created and configured the cirrus user, you'll need to provide its API key. Once you generate an API key, you should
get a *.pem file with the private key that will be used by Cirrus CI.
+
Normally your config file for local use looks like this:
+
```ini
[DEFAULT]
user=ocid1.user.oc1..XXX
fingerprint=11:22:...:99
tenancy=ocid1.tenancy.oc1..YYY
region=us-ashburn-1
key_file=<path to your *.pem private keyfile>
```
+
+
For Cirrus to use these credentials, you'll need to convert them to a different format:
+
```
<user value>
<fingerprint value>
<tenancy value>
<region value>
<content of your *.pem private keyfile>
```
+
+
This way you'll be able to create a single encrypted variable with the contents
+of the Cirrus specific credentials above.
Please create a Kubernetes cluster and make sure Kubernetes API Public Endpoint is enabled for the cluster so Cirrus
+can access it. Then copy cluster id which can be used in configuring oke_container:
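A sketch of such a configuration; the field names mirror gke_container and the cluster id is a hypothetical placeholder, so verify both against the configuration reference:

```yaml
oke_container:
  cluster_id: ocid1.cluster.oc1... # hypothetical truncated cluster OCID
  image: golang:latest
  cpu: 4
  memory: 16G
```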
Customizing clone behavior is as simple as overriding clone_script. For example, here is an override that uses a pre-installed
Git client (if your build environment has one) to do a shallow clone of a single branch:
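A sketch of such an override, assuming git is available in the image; the CIRRUS_* variables are provided by Cirrus CI, and PR builds need extra handling that is omitted here:

```yaml
task:
  container:
    image: alpine/git:latest # any image with a git client works
  clone_script: |
    git clone --depth 1 --branch "$CIRRUS_BRANCH" \
      "https://github.com/$CIRRUS_REPO_FULL_NAME.git" "$CIRRUS_WORKING_DIR"
    # may require a deeper fetch if the commit isn't the branch tip
    git -C "$CIRRUS_WORKING_DIR" checkout "$CIRRUS_CHANGE_IN_REPO"
  script: git log -1
```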
Using go-git made it possible not to require a pre-installed Git in the execution environment. For example,
most alpine-based containers don't have Git pre-installed. Thanks to go-git you can even use distroless
containers with Cirrus CI, which don't even have an operating system.
You can use YAML aliases to share configuration options between
+multiple tasks. For example, here is a 2-task build which only runs for "master", PRs and tags, and installs some
+framework:
+
```yaml
# Define a node anywhere in the YAML file to create an alias. Make sure the name doesn't clash with an existing keyword.
regular_task_template: &REGULAR_TASK_TEMPLATE
  only_if: $CIRRUS_BRANCH == 'master' || $CIRRUS_TAG != '' || $CIRRUS_PR != ''
  env:
    FRAMEWORK_PATH: "${HOME}/framework"
  install_framework_script: curl https://example.com/framework.tar | tar -C "${FRAMEWORK_PATH}" -x

task:
  # This operator will insert REGULAR_TASK_TEMPLATE at this point in the task node.
  <<: *REGULAR_TASK_TEMPLATE
  name: linux
  container:
    image: alpine:latest
  test_script: ls "${FRAMEWORK_PATH}"

task:
  <<: *REGULAR_TASK_TEMPLATE
  name: osx
  macos_instance:
    image: catalina-xcode
  test_script: ls -w "${FRAMEWORK_PATH}"
```
+
If you'd like your YAML file to fit on your screen and some commands are just too long, you can split them across multiple
lines. YAML supports a variety of options for that; for example, here's how you can split
ENCRYPTED values:
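One option is YAML's double-quoted scalar, where a trailing backslash escapes the line break so the lines are joined without inserting whitespace (the encrypted value here is the docs' placeholder):

```yaml
env:
  SECRET: "ENCRYPTED\
    [qwerty239abc]"
```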
Even though most of the time you can configure environment variables via env, there are cases when a variable value is only obtained when the task is already running.
+
Normally you'd use export for that, but since each script instruction is executed in a separate shell, the exported variables won't propagate to the next instruction.
+
However, there's a simple solution: just write your variables in a KEY=VALUE format to the file referenced by the CIRRUS_ENV environment variable.
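For example (a sketch; the variable name and value are hypothetical):

```yaml
task:
  container:
    image: alpine:latest
  version_script: echo "BUILD_VERSION=1.2.3" >> $CIRRUS_ENV
  print_script: echo $BUILD_VERSION # set by the previous instruction via CIRRUS_ENV
```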
It is possible to run Windows Containers the same way one can run Linux containers on the Cirrus Cloud Windows Cluster.
To use Windows, add windows_container instead of container in your .cirrus.yml file:
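For example:

```yaml
task:
  windows_container:
    image: cirrusci/windowsservercore:2019
  script: dir
```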
Cirrus CI assumes that the container image's host OS is Windows Server 2019.
+Cirrus CI used to support 1709 and 1803 versions, but they are deprecated as of April 2021.
By default the Cirrus CI agent executes scripts using cmd.exe. It is possible to override the default shell executor by providing
the CIRRUS_SHELL environment variable:
+
```yaml
env:
  CIRRUS_SHELL: powershell
```
+
+
It is also possible to use PowerShell scripts inline inside of a script instruction by prefixing it with ps:
+
```yaml
windows_task:
  script:
    - ps: Get-Location
```
+
+
ps: COMMAND is just syntactic sugar that transforms the command into:
Some software installed with Chocolatey updates the PATH environment variable in system settings and suggests using refreshenv to pull those changes into the current environment.
Unfortunately, using refreshenv will overwrite any environment variables set in the Cirrus CI configuration with system-configured defaults.
We advise making the necessary changes using env and environment instead of using the refreshenv command in scripts.
All cirrusci/* Windows containers like cirrusci/windowsservercore:2016 have Chocolatey pre-installed.
+Chocolatey is a package manager for Windows which supports unattended installs of software, useful on headless machines.
A task defines a sequence of instructions to execute and an execution environment
+to execute these instructions in. Let's see a line-by-line example of a .cirrus.yml configuration file first:
+
+
+
+
```yaml
test_task:
  container:
    image: openjdk:latest
  test_script: ./gradlew test
```
+
+
+
+
```yaml
test_task:
  arm_container:
    image: openjdk:latest
  test_script: ./gradlew test
```
+
+
+
+
+
The example above defines a single task that will be scheduled and executed on the Linux Cluster using the openjdk:latest Docker image.
+Only one user-defined script instruction to run ./gradlew test will be executed. Not that complex, right?
+
Please read the topics below if you want better understand what's going on in a more complex .cirrus.yml configuration file, such as this:
To name a task, one can use the name field; the foo_task syntax is just syntactic sugar. A separate name
field is very useful when you want a rich task name:
+
```yaml
task:
  name: Tests (macOS)
  ...
```
+
+
Note: instructions within a task can only be named via a prefix (e.g. test_script).
+
+
+
Visual Task Creation for Beginners
+
If you are just getting started and prefer a more visual way of creating tasks, there
+is a third-party Cirrus CI Configuration Builder for generating YAML config that might be helpful.
A script instruction executes commands via shell on Unix or batch on Windows. A script instruction can be named by
adding a name as a prefix, for example test_script or my_very_specific_build_step_script. Naming script instructions
helps gather more granular information about task execution. Cirrus CI will use it in the future to auto-detect performance
regressions.
+
Script commands can be specified as a single string value or a list of string values in a .cirrus.yml configuration file
+like in the example below:
+
```yaml
check_task:
  compile_script: gradle --parallel classes testClasses
  check_script:
    - echo "Here comes more than one script!"
    - printenv
    - gradle check
```
+
+
Note: Each script instruction is executed in a newly created process, therefore environment variables are not preserved between them.
+
+Execution on Windows
+
When executed on Windows via batch, the Cirrus Agent will wrap each line of the script in a call so it's possible to
fail fast when the first line exits with a non-zero exit code.
+
To avoid this "syntactic sugar" just create a script file and execute it.
A background_script instruction is exactly the same as a script instruction, but Cirrus CI won't wait for the script to finish
and will continue execution of further instructions.

Background scripts can be useful when something needs to be executed in the background, for example a database or
some emulators. Traditionally the same effect is achieved by appending & to a command, like $: command &. The problem
is that logs from the command will be mixed into the regular logs of the following commands. With background scripts,
not only are logs properly saved and displayed, but the command itself is also properly killed at the end of the task.
+
Here is an example of how background_script instruction can be used to run an android emulator:
+
```yaml
android_test_task:
  start_emulator_background_script: emulator -avd test -no-audio -no-window
  wait_for_emulator_to_boot_script: adb wait-for-device
  test_script: gradle test
```
+
A cache instruction allows you to persist a folder and reuse it during the next execution of the task. A cache instruction can be named the same way as a script instruction.
+
Here is an example:
+
+
+
+
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    reupload_on_changes: false # since there is a fingerprint script
    fingerprint_script:
      - echo $CIRRUS_OS
      - node --version
      - cat package-lock.json
    populate_script:
      - npm install
  test_script: npm run test
```
+
+
+
+
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    reupload_on_changes: false # since there is a fingerprint script
    fingerprint_script:
      - echo $CIRRUS_OS
      - node --version
      - cat package-lock.json
    populate_script:
      - npm install
  test_script: npm run test
```
+
+
+
+
+
Either a folder or a folders field (with a list of folder paths) is required; it tells the agent which folder paths to cache.
+
Folder paths should generally be relative to the working directory (e.g. node_modules), except when only a single folder is specified; in this case, it can also be an absolute path (/usr/local/bundle).
+
Folder paths can contain a "glob" pattern to cache multiple files/folders within a working directory (e.g. **/node_modules will cache every node_modules folder within the working directory).
+
fingerprint_script and fingerprint_key are optional fields that can specify either:

- a script (fingerprint_script), the output of which will be hashed and used as a key for the given cache
- a fixed string (fingerprint_key) that will be used as the key directly

These two fields are mutually exclusive. By default, the task name is used as the fingerprint value.
+
After the last script instruction for the task succeeds, Cirrus CI will calculate checksum of the cached folder (note that it's unrelated to fingerprint_script or fingerprint_key fields) and re-upload the cache if it finds any changes.
+To avoid a time-costly re-upload, remove volatile files from the cache (for example, in the last script instruction of a task).
+
populate_script is an optional field that can specify a script that will be executed to populate the cache.
populate_script should create the folder if it doesn't exist before the cache instruction runs.
If your dependencies are updated often, please pay attention to fingerprint_script and make sure it produces different outputs for different versions of your dependencies (ideally, just print the locked versions of dependencies).
+
reupload_on_changes is an optional field that can specify whether Cirrus Agent should check if the
contents of the cached folder have changed during task execution and re-upload the cache entry in case of any changes.
If the reupload_on_changes option is not set explicitly, then it defaults to false if fingerprint_script or fingerprint_key is present, and to true otherwise.
Cirrus Agent will detect additions, deletions and modifications of any files under the specified folder. All of the detected changes will be
logged under the Upload '$CACHE_NAME' cache instruction for easier debugging of cache invalidations.
+
That means the only difference between the examples above and below is that yarn install will always be executed in the
example below, whereas in the example above it runs only when yarn.lock changes.
+
+
+
+
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
  install_script: yarn install
  test_script: yarn run test
```
+
+
+
+
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
  install_script: yarn install
  test_script: yarn run test
```
+
+
+
+
+
+
Caching for Pull Requests
+
Tasks for PRs upload caches to a separate caching namespace to not interfere with caches used by other tasks.
+But such PR tasks can read all caches even from the main caching namespace for a repository.
+
+
+
Scope of cached artifacts
+
Cache artifacts are shared between tasks, so two caches with the same name on e.g. Linux containers and macOS VMs will share the same set of files.
+This may introduce binary incompatibility between caches. To avoid that, add echo $CIRRUS_OS into fingerprint_script or use $CIRRUS_OS in fingerprint_key, which will distinguish caches based on OS.
Normally caches are uploaded at the end of the task execution. However, you can override the default behavior and upload them earlier.
+
To do this, use the upload_caches instruction, which uploads a list of caches passed to it once executed:
+
+
+
+
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
  upload_caches:
    - node_modules
  install_script: yarn install
  test_script: yarn run test
  pip_cache:
    folder: ~/.cache/pip
```
+
+
+
+
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
  upload_caches:
    - node_modules
  install_script: yarn install
  test_script: yarn run test
  pip_cache:
    folder: ~/.cache/pip
```
+
+
+
+
+
Note that pip cache won't be uploaded in this example: using upload_caches disables the default behavior where all caches are automatically uploaded at the end of the task, so if you want to upload pip cache too, you'll have to either:
+
+
extend the list of uploaded caches in the first upload_caches instruction
+
insert a second upload_caches instruction that specifically targets pip cache
An artifacts instruction allows you to store files and expose them in the UI for downloading later. An artifacts instruction
can be named the same way as a script instruction and has only one required field, path, which accepts a glob pattern
of files relative to $CIRRUS_WORKING_DIR to store. Right now, only storing files under the $CIRRUS_WORKING_DIR folder as artifacts is supported, with a total size limit of 1G for a free task and no limit on your own infrastructure.
+
In the example below, the Build and Test task produces two artifacts: the binaries artifact with all executables built during a
successful task completion, and the junit artifact with all test reports regardless of the final task status (you can learn
more about that in the next section describing execution behavior).
+
```yaml
build_and_test_task:
  # instructions to build and test
  binaries_artifacts:
    path: "build/*"
  always:
    junit_artifacts:
      path: "**/test-results/**.xml"
      format: junit
```
+
It is possible to refer to the latest artifacts directly (artifacts of the latest successful build).
+Use the following link format to download the latest artifact of a particular task:
+
```
https://api.cirrus-ci.com/v1/artifact/github/<USER OR ORGANIZATION>/<REPOSITORY>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>/<PATH>
```
+
+
It is possible to also download an archive of all files within an artifact with the following link:
+
```
https://api.cirrus-ci.com/v1/artifact/github/<USER OR ORGANIZATION>/<REPOSITORY>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>.zip
```
+
+
By default, Cirrus looks up the latest successful build of the default branch for the repository but the branch name
+can be customized via ?branch=<BRANCH> query parameter.
Note that if several tasks are uploading artifacts with the same name then the ZIP archive from the above link will
+contain merged content of all artifacts. It's also possible to refer to an artifact of a particular task within a build
+by name:
+
```
https://api.cirrus-ci.com/v1/artifact/build/<CIRRUS_BUILD_ID>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>.zip
```
+
+
It is also possible to download artifacts given a task id directly:
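Assuming the link scheme mirrors the build-scoped one above, a task-scoped link would look like (verify against the API reference):

```
https://api.cirrus-ci.com/v1/artifact/task/<CIRRUS_TASK_ID>/<ARTIFACTS_NAME>.zip
```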
It's also possible to download a particular file of an artifact and not the whole archive by using <ARTIFACTS_NAME>/<PATH>
+instead of <ARTIFACTS_NAME>.zip.
By default, Cirrus CI will try to guess the mimetype of files in artifacts by looking at their extensions. In case artifacts
don't have extensions, it's possible to explicitly set the Content-Type via the type field:
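For example (the path is a hypothetical placeholder for extension-less binaries):

```yaml
build_task:
  binaries_artifacts:
    path: "build/my-binary-*" # hypothetical extension-less files
    type: application/octet-stream
```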
Cirrus CI supports parsing artifacts in order to extract information that can be presented in the UI for a better user experience.
+Use the format field of an artifact instruction to specify artifact's format (mimetypes):
A file instruction allows you to create a file from either an environment variable or directly from the configuration file. It is especially useful for situations when the
execution environment doesn't have a proper shell to use echo ... >> ... syntax, for example within scratch Docker containers.
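A sketch of the instruction; the field names (path, variable_name) and the image are assumptions to double-check against the configuration reference:

```yaml
task:
  container:
    image: gcr.io/distroless/static # no shell available here
  credentials_file:
    path: credentials.json # file to create, relative to the working directory
    variable_name: CREDENTIALS # environment variable holding the file's content
```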
By default, Cirrus CI executes instructions one after another and stops the overall task execution on the first failure.
+Sometimes there might be situations when some scripts should always be executed or some debug information needs to be saved
+on a failure. For such situations the always and on_failure keywords can be used to group instructions.
+
```yaml
task:
  test_script: ./run_tests.sh
  on_failure:
    debug_script: ./print_additional_debug_info.sh
    cleanup_script: ./cleanup.sh # failure here will not trigger `on_failure` instruction above
  always:
    test_reports_script: ./print_test_reports.sh
```
+
+
In the example above, the print_additional_debug_info.sh script will be executed only on failures of test_script to output some additional
debug information. print_test_reports.sh, on the other hand, will be executed on both successful and failed runs to
print test reports (test reports are always useful!).
+
Sometimes, a complex task might exceed the pre-defined timeout, and it might not be clear why. In this case, the on_timeout execution behavior, which has an extra time budget of 5 minutes might be useful:
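For example (a sketch; the timeout value and debug script are hypothetical):

```yaml
task:
  timeout_in: 90m
  test_script: ./run_tests.sh
  on_timeout:
    stacks_script: ./dump_thread_stacks.sh # hypothetical debug script, runs within the extra 5-minute budget
```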
Environment variables may also be set at the root level of .cirrus.yml. In that case, they will be merged with each task's
+individual environment variables, but the task level variables always take precedence. For example:
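A sketch of such a merge (variable names and values are hypothetical):

```yaml
env:
  GRADLE_OPTS: -Dorg.gradle.daemon=false # shared by all tasks

task:
  env:
    GRADLE_OPTS: -Dorg.gradle.jvmargs=-Xmx4g # task-level value takes precedence for this task
  build_script: ./gradlew build
```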
| Name | Description |
| --- | --- |
| CIRRUS_PR | PR number if the current build was triggered by a PR. For example, `239`. |
| CIRRUS_PR_DRAFT | `true` if the current build was triggered by a Draft PR. |
| CIRRUS_PR_TITLE | Title of the corresponding PR, if any. |
| CIRRUS_PR_BODY | Body of the corresponding PR, if any. |
| CIRRUS_PR_LABELS | Comma-separated list of the PR's labels if the current build was triggered by a PR. |
| CIRRUS_TAG | Tag name if the current build was triggered by a new tag. For example, `v1.0`. |
| CIRRUS_OIDC_TOKEN | OpenID Connect Token issued by https://oidc.cirrus-ci.com with audience set to `https://cirrus-ci.com/github/$CIRRUS_REPO_OWNER` (can be changed via `$CIRRUS_OIDC_TOKEN_AUDIENCE`). Please refer to a dedicated section below for in-depth details. |
And some environment variables can be set to control behavior of the Cirrus CI Agent:

| Name | Default Value | Description |
| --- | --- | --- |
| CIRRUS_AGENT_VERSION | not set | Cirrus Agent version to use. If not set, the latest release. |
| CIRRUS_AGENT_EXPOSE_SCRIPTS_OUTPUTS | not set | If set, instructs Cirrus Agent to stream script outputs to the console as well as to the Cirrus API. Useful in case your Kubernetes cluster has logging collection enabled. |
| CIRRUS_CLONE_DEPTH | `0`, which results in a full clone of a single branch | Clone depth. |
| CIRRUS_CLONE_SUBMODULES | `false` | Set to `true` to clone submodules recursively. |
| CIRRUS_LOG_TIMESTAMP | `false` | Instructs Cirrus Agent to prepend a timestamp to each line of logs. |
| CIRRUS_OIDC_TOKEN_AUDIENCE | not set | Allows overriding the `aud` claim of `CIRRUS_OIDC_TOKEN`. |
| CIRRUS_SHELL | `sh` on Linux/macOS/FreeBSD and `cmd.exe` on Windows | Shell that Cirrus CI uses to execute scripts. Set to `direct` to execute each script directly without wrapping the commands in a shell script. |
| CIRRUS_VOLUME | `/tmp` | Defines a path for a temporary volume to be mounted into instances running in a Kubernetes cluster. This volume is mounted into all additional containers and is persisted between steps of a pipe. |
| CIRRUS_WORKING_DIR | `cirrus-ci-build` folder inside of the system's temporary folder | Working directory where Cirrus CI executes builds. |
| CIRRUS_ESCAPING_PROCESSES | not set | Set this variable to prevent the agent from terminating the processes spawned in each non-background instruction after that instruction ends. By default, the agent tries its best to garbage-collect these processes and their standard input/output streams. It's generally better to use a Background Script Instruction instead of this variable to achieve the same effect. |
| CIRRUS_WINDOWS_ERROR_MODE | not set | Set this value to force all processes spawned by the agent to call the equivalent of `SetErrorMode()` with the provided value (for example, `0x8001`) before beginning their execution. |
| CIRRUS_VAULT_URL | not set | Address of the Vault server expressed as a URL and port (for example, `https://vault.example.com:8200/`), see HashiCorp Vault Support. |
OpenID Connect is a very powerful mechanism that allows two independent systems to establish trust without sharing any secrets.
At the core of OpenID Connect is a simple JWT token signed by a trusted party (in our case, Cirrus CI). The
second system can then be configured to trust such CIRRUS_OIDC_TOKENs signed by Cirrus CI. For examples, please check
Vault Integration, Google Cloud Integration
and AWS Integration.
+
Once such an external system receives a request authenticated with CIRRUS_OIDC_TOKEN, it can verify the signature of the token
via publicly available keys. Then it can extract claims from the token
to make the necessary assertions. Properly configuring assertions on such claims is crucial for a secure integration with OIDC.
Let's take a closer look at the claims that are available through the payload of a CIRRUS_OIDC_TOKEN:
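For example, a task like the following decodes and prints the token's payload (a sketch; `base64 -d` may need padding handling depending on the token length):

```yaml
task:
  container:
    image: alpine:latest
  print_claims_script: echo $CIRRUS_OIDC_TOKEN | cut -d '.' -f 2 | base64 -d
```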
The above task will print out payload of a CIRRUS_OIDC_TOKEN that contains claims from the configuration
+that can be used for assertions.
+
```json
{
  // Reserved Claims https://openid.net/specs/draft-jones-json-web-token-07.html#rfc.section.4.1
  "iss": "https://oidc.cirrus-ci.com",
  "aud": "https://cirrus-ci.com/github/cirruslabs", // can be changed via $CIRRUS_OIDC_TOKEN_AUDIENCE
  "sub": "repo:github:cirruslabs/cirrus-ci-docs",
  "nbf": ...,
  "exp": ...,
  "iat": ...,
  "jti": "...",
  // Cirrus Added Claims
  "platform": "github", // Currently only GitHub is supported but more platforms will be added in the future
  "owner": "cirruslabs", // Unique organization or username on the platform
  "owner_id": "29414678", // Internal ID of the organization or user on the platform
  "repository": "cirrus-ci-docs", // Repository name
  "repository_visibility": "public", // either public or private
  "repository_id": "5730634941071360", // Internal Cirrus CI ID of the repository
  "build_id": "1234567890", // Internal Cirrus CI ID of the build. Same as $CIRRUS_BUILD_ID
  "branch": "fkorotkov-patch-2", // Git branch name. Same as $CIRRUS_BRANCH
  "change_in_repo": "e6e989d4792a678b697a9f17a787761bfefb52d0", // Git commit SHA. Same as $CIRRUS_CHANGE_IN_REPO
  "pr": "123", // Pull request number if a build was triggered by a PR. Same as $CIRRUS_PR
  "pr_draft": "false", // Whether the pull request is a draft. Same as $CIRRUS_PR_DRAFT
  "pr_labels": "", // Comma-separated list of labels of the pull request. Same as $CIRRUS_PR_LABELS
  "tag": "1.0.0", // Git tag name if a build was triggered by a tag creation. Same as $CIRRUS_TAG
  "task_id": "987654321", // Internal Cirrus CI ID of the task. Same as $CIRRUS_TASK_ID
  "task_name": "main", // Name of the task. Same as $CIRRUS_TASK_NAME
  "task_name_alias": "main", // Optional name alias of the task. Same as $CIRRUS_TASK_NAME_ALIAS
  "user_collaborator": "true", // Whether the user is a collaborator of the repository. Same as $CIRRUS_USER_COLLABORATOR
  "user_permission": "admin" // Permission level of the user in the repository. Same as $CIRRUS_USER_PERMISSION
}
```
+
+
Please use the above claims to configure assertions in your external system. For example, you can assert that only tokens
+for specific branches can retrieve secrets for deploying to production.
It is possible to add encrypted variables to a .cirrus.yml file. These variables are decrypted only in builds for commits and pull requests that are made by users with write permission or approved by them.
+
In order to encrypt a variable, go to the repository's settings page by clicking the settings icon
on the repository's main page (for example, https://cirrus-ci.com/github/my-organization/my-repository) and follow the instructions.
+
+
Warning
+
Only users with WRITE permissions can add encrypted variables to a repository.
+
+
An encrypted variable will be presented in a form like ENCRYPTED[qwerty239abc], which can be safely committed to the .cirrus.yml file:
Cirrus CI encrypts variables with a unique per-repository 256-bit encryption key, so forks and even repositories within
the same organization cannot re-use them. qwerty239abc from the example above is NOT the content of your encrypted
variable, it's just an internal ID. No one can brute-force your secrets from such an ID. In addition, Cirrus CI doesn't know
the relation between an encrypted variable and the repository for which the encrypted variable was created.
+
+Organization Level Encrypted Variables
+
Sometimes there might be secrets that are used in almost all repositories of an organization, for example, credentials
for a compute service where tasks will be executed. In order to create such a sharable
encrypted variable, go to the organization's settings page by clicking the settings icon
on the organization's main page (for example, https://cirrus-ci.com/github/my-organization) and follow the instructions
in the Organization Level Encrypted Variables section.
+
+
+Encrypted Variable for Cloud Credentials
+
In case you use an integration with one of the supported computing services, the encrypted variable
that stores the credentials Cirrus uses to communicate with the computing service won't be decrypted when used
in environment variables. These credentials have too many permissions for most cases;
please create separate credentials with the minimum permissions needed for your specific case.
+
gcp_credentials: SECURED[!qwerty]

env:
  CREDENTIALS: SECURED[!qwerty] # won't be decrypted in any case
+
+
+
+Skipping Task in Forked Repository
+
In a forked repository, decryption of the variable fails, which causes any task depending on it to fail as well.
To avoid this by default, make the sensitive task conditional:
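A sketch of such a conditional task, assuming the original repository is owned by `my-organization` (the task name, owner, and script are placeholders):

```yaml
sensitive_task:
  # only create this task for the original repository, not for forks
  only_if: $CIRRUS_REPO_OWNER == 'my-organization'
  env:
    SECRET: ENCRYPTED[qwerty239abc]
  script: ./deploy.sh
```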
The owner of a forked repository can re-enable the task, if they have the required sensitive data, by encrypting
the variable themselves and editing both the encrypted variable and the repo-owner condition
in the .cirrus.yml file.
In addition to using Cirrus CI for managing secrets, it is possible to retrieve secrets from HashiCorp Vault.
+
You will need to configure a JWT authentication method and point it to the Cirrus CI's OIDC discovery URL: https://oidc.cirrus-ci.com.
+
This ensures that the cryptographic JWT token (CIRRUS_OIDC_TOKEN) that each Cirrus CI task gets assigned will be verified by your Vault installation.
+
From the Cirrus CI's side, use the CIRRUS_VAULT_URL environment variable to point Cirrus Agent at your vault and configure other Vault-specific variables, if needed. Note that it's not required for CIRRUS_VAULT_URL to be publicly available since Cirrus CI can orchestrate tasks on your infrastructure. Only Cirrus Agent executing a task from within an execution environment needs access to your Vault.
+
Once done, you will be able to use the VAULT[path/to/secret selector] syntax to retrieve a version 2 secret, for example:
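A sketch of what this might look like in `.cirrus.yml` (the secret path `secret/data/github` and the field `token` are hypothetical):

```yaml
task:
  env:
    # retrieves the "token" field of the KV v2 secret stored at secret/github
    GITHUB_TOKEN: VAULT[secret/data/github data.token]
  script: ./release.sh
```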
The path is exactly the one you are familiar with from invoking the Vault CLI (e.g. vault read ...), and the selector is simply a dot-delimited list of fields to query in the output.
+
+
Caching of Vault secrets
+
Note that all VAULT[...] invocations cache the retrieved secrets on a per-path basis by default. Caching happens within a single task execution and is not shared between several tasks using the same secret.
+
To disable caching, use VAULT_NOCACHE[...] instead of VAULT[...].
+
+
+
Mixing of VAULT[...] and VAULT_NOCACHE[...] on the same path
+
Using both VAULT[...] and VAULT_NOCACHE[...] on the same path is not recommended because the order in which these invocations are processed is not deterministic.
It is possible to configure invocations of recurring builds via the well-known Cron expressions. Cron builds can be
configured on a repository's settings page (not in .cirrus.yml).

It's possible to configure several cron builds with unique names, which will be available via the CIRRUS_CRON environment variable.
Each cron build should specify a branch to trigger new builds for and a cron expression compatible with Quartz. You can use
this generator to generate/validate your expressions.
+
Note: Cron Builds are timed with the UTC timezone.
Sometimes it's useful to run the same task against different software versions, or to run different batches of tests based
on an environment variable. For cases like these, the matrix modifier comes in very handy. The matrix
keyword can only be used inside of a particular task to create multiple tasks based on the original one. Each new task will be created
from the original task by replacing the whole matrix YAML node with each of the matrix's children separately.

Let's check an example of a .cirrus.yml:
+
+
+
+
test_task:
  container:
    matrix:
      - image: node:latest
      - image: node:lts
  test_script: yarn run test
+
+
+
+
test_task:
  arm_container:
    matrix:
      - image: node:latest
      - image: node:lts
  test_script: yarn run test
+
+
+
+
+
Which will be expanded into:
+
+
+
+
test_task:
  container:
    image: node:latest
  test_script: yarn run test

test_task:
  container:
    image: node:lts
  test_script: yarn run test
+
+
+
+
test_task:
  arm_container:
    image: node:latest
  test_script: yarn run test

test_task:
  arm_container:
    image: node:lts
  test_script: yarn run test
+
+
+
+
+
+
Tip
+
The matrix modifier can be used multiple times within a task.
+
+
The matrix modification makes it easy to create some pretty complex testing scenarios like this:
Sometimes it might be very handy to execute some tasks only after the successful execution of other tasks. For such cases
it is possible to specify the names of tasks that a particular task depends on. Use the depends_on keyword to define dependencies:
+
+
+
+
container:
  image: node:latest

lint_task:
  script: yarn run lint

test_task:
  script: yarn run test

publish_task:
  depends_on:
    - test
    - lint
  script: yarn run publish
+
+
+
+
arm_container:
  image: node:latest

lint_task:
  script: yarn run lint

test_task:
  script: yarn run test

publish_task:
  depends_on:
    - test
    - lint
  script: yarn run publish
+
+
+
+
+
+Task Names and Aliases
+
It is possible to specify the task's name via the name field. The lint_task syntax is syntactic sugar that will be
expanded into:
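Something like the following (a sketch of the expanded form; the script is carried over from the example above):

```yaml
task:
  name: lint
  script: yarn run lint
```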
Complex task names make it difficult to list and maintain all such task names in your depends_on field. In order to
make this simpler, you can use the alias field to give several tasks a short simplified name to use in depends_on.
+
Here is a modified version of an example above that leverages the alias field:
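A sketch of what such a modified configuration might look like (the concrete lint task names and scripts are illustrative):

```yaml
container:
  image: node:latest

lint_flow_task:
  alias: lint
  script: yarn run lint-flow

lint_css_task:
  alias: lint
  script: yarn run lint-css

test_task:
  script: yarn run test

publish_task:
  depends_on:
    - test
    - lint
  script: yarn run publish
```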
Some tasks are meant to be created only if a certain condition is met. And some tasks can be skipped in some cases.
+Cirrus CI supports the only_if and skip keywords in order to provide such flexibility:
+
+
+
+
+
The only_if keyword controls whether or not a task will be created. For example, you may want to publish only changes
+ committed to the master branch.
+
publish_task:
  only_if: $CIRRUS_BRANCH == 'master'
  script: yarn run publish
+
+
+
+
The skip keyword allows skipping the execution of a task and marking it as successful. For example, you may want to skip linting
if no source files have changed since the last successful run.
+
lint_task:
  skip: "!changesInclude('.cirrus.yml', '**.{js,ts}')"
  script: yarn run lint
+
+
+
+
+
+
+
+
Skip CI Completely
+
Just include [skip ci] or [skip cirrus] in the first line or last line of your commit message in order to skip CI execution for a commit completely.
+
If you push multiple commits at the same time, only the last commit message will be checked for [skip ci] or [ci skip].
+
If you open a PR, the PR title will be checked for [skip ci] or [ci skip] instead of the last commit message on the PR branch.
Currently, only basic operators like ==, !=, =~, !=~, &&, || and unary ! are supported in only_if and skip expressions.
Environment variables can also be used as usual.

Note that the =~ operator can match against multiline values (dotall mode) and therefore looks for an exact occurrence of the regular expression,
so don't forget to use .* around your term to match it at any position (for example, $CIRRUS_CHANGE_TITLE =~ '.*\[docs\].*').
Currently, two functions are supported in the only_if and skip expressions:

changesInclude function allows checking which files were changed

changesIncludeOnly is a stricter version of changesInclude, i.e. it won't evaluate to true if there are changed files other than the ones covered by the patterns
+
+
These two functions behave differently for PR builds and regular builds:
+
+
For PR builds, functions check the list of files affected by the PR.
+
For regular builds, the CIRRUS_LAST_GREEN_CHANGE environment variable
 will be used to determine the list of affected files between CIRRUS_LAST_GREEN_CHANGE and CIRRUS_CHANGE_IN_REPO.
 In case CIRRUS_LAST_GREEN_CHANGE is not available (either it's a new branch or there were no passing builds before),
 the list of files affected by the commit associated with the CIRRUS_CHANGE_IN_REPO environment variable will be used instead.
+
+
changesInclude function can be very useful for skipping some tasks when no changes to sources have been made since the
+last successful Cirrus CI build.
+
lint_task:
  skip: "!changesInclude('.cirrus.yml', '**.{js,ts}')"
  script: yarn run lint
+
+
changesIncludeOnly function can be used to skip running a heavyweight task if only documentation was changed, for example:
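A sketch under the assumption that documentation lives under `docs/`:

```yaml
test_task:
  # skip the heavyweight test task when only documentation files changed
  skip: "changesIncludeOnly('docs/**')"
  script: yarn run test
```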
Cirrus CI can automatically cancel tasks in case of new pushes to the same branch. By default, Cirrus CI auto-cancels
all tasks for non-default branches (for most repositories, master is the default branch) but this behavior can be changed by specifying
the auto_cancellation field:
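For example, a sketch that enables auto-cancellation on every branch, including the default one (the script is illustrative):

```yaml
task:
  # cancel outdated tasks on any branch when new commits are pushed
  auto_cancellation: true
  script: yarn run test
```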
It's possible to tell Cirrus CI that a certain task is stateful, and Cirrus CI will use a slightly different scheduling algorithm
to minimize the chances of such tasks being interrupted. Stateful tasks are intended to use a low CPU count.
Scheduling times for such stateful tasks might be a bit longer than usual, especially for tasks with high CPU requirements.

By default, Cirrus CI marks a task as stateful if its name contains one of the following terms: deploy, push, publish,
upload or release. Alternatively, you can explicitly mark a task as stateful via the stateful field:
+
task:
  name: Propagate to Production
  stateful: true
  ...
+
Sometimes tasks can play the role of sanity checks. For example, a task can check that your library works with the latest nightly
version of some dependency package. It would be great to be notified about such failures, but it's not necessary to fail the
whole build when one occurs. Cirrus CI has the allow_failures keyword, which makes a task not affect the overall status of a build.
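A sketch of such a nightly sanity check (the task name, dependency name, and scripts are illustrative):

```yaml
nightly_check_task:
  # a failure here is reported but does not fail the whole build
  allow_failures: true
  install_script: yarn add some-dependency@nightly
  test_script: yarn run test
```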
By default, a Cirrus CI task is automatically triggered when all of its dependency tasks
have finished successfully. Sometimes, though, it can be very handy to trigger some tasks manually, for example, to perform a
deployment to staging for manual testing once all automated checks have succeeded. In order to change the default behavior,
please use the trigger_type field like this:
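A sketch, assuming a hypothetical staging deployment script:

```yaml
deploy_staging_task:
  depends_on:
    - test
  # waits for a manual trigger in the UI instead of starting automatically
  trigger_type: manual
  script: ./deploy.sh staging
```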
Some CI tasks perform external operations that must be executed one at a time. For example, parallel deploys
to the same environment are usually a bad idea. In order to restrict parallel execution of a certain task within a repository,
you can use execution_lock to specify a task's lock key: a unique string that will be used to make sure that any tasks with the same execution_lock string
are executed one at a time. Here is an example of how to make sure deployments
on a specific branch cannot run in parallel:
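A sketch of such a lock key built from built-in environment variables (the key format and script are illustrative):

```yaml
deploy_task:
  # tasks sharing this lock key run one at a time, per repository and branch
  execution_lock: $CIRRUS_REPO_FULL_NAME-$CIRRUS_BRANCH-deploy
  script: ./deploy.sh
```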
Similar to manual tasks, Cirrus CI can pause the execution of tasks until a corresponding PR gets labeled.
This can be particularly useful when you'd like to do an initial review before running all unit and integration
tests on every supported platform. Use the required_pr_labels field to specify
a list of labels a PR is required to have in order to trigger a task. Here is a simple example of a .cirrus.yml config
that automatically runs a linting tool but requires the initial-review label to be present in order to run tests:
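A sketch of that config (the yarn scripts are illustrative):

```yaml
lint_task:
  script: yarn run lint

test_task:
  # only runs once the PR carries the initial-review label
  required_pr_labels: initial-review
  script: yarn run test
```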
For most cases, the regular caching mechanism, where Cirrus CI caches a folder, is more than enough. But modern build systems
like Gradle, Bazel and Pants can take
advantage of remote caching. Remote caching is when a build system uploads and downloads intermediate results of a build
execution while the build itself is still executing.

Cirrus CI agent starts a local caching server and exposes it via the CIRRUS_HTTP_CACHE_HOST environment variable. The caching server
supports GET, POST, HEAD and DELETE requests to upload, download, check the presence of, and delete artifacts.
+
+
Info
+
If port 12321 is available CIRRUS_HTTP_CACHE_HOST will be equal to localhost:12321.
Sometimes one container is not enough to run a CI build. For example, your application might use a MySQL database
as its storage. In this case, you most likely want a MySQL instance running for your tests.

One option here is to pre-install MySQL and use a background_script to start it. This
approach has some inconveniences, like the need to pre-install MySQL by building a custom Docker container.

For such use cases, Cirrus CI allows running additional containers in parallel with the main container that executes a task.
Each additional container is defined under the additional_containers keyword in .cirrus.yml. Each additional container
should have a unique name and specify at least a container image.

Normally, you would also specify a port (or ports, if there are many) to instruct Cirrus CI to configure the networking between the containers and wait for the ports to be available before running the task.
Additional containers do not inherit environment variables because they are started before the main task receives its environment variables.

In the example below, we use an official MySQL Docker image that exposes
the standard MySQL port (3306). Tests will be able to access the MySQL instance via localhost:3306.
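A sketch of that configuration (the MYSQL_ALLOW_EMPTY_PASSWORD setting is illustrative; check the image's documentation for the exact variables it supports):

```yaml
container:
  image: node:latest
  additional_containers:
    - name: mysql
      image: mysql:latest
      port: 3306
      env:
        # allow connecting as root without a password (test-only setup)
        MYSQL_ALLOW_EMPTY_PASSWORD: "yes"

test_task:
  script: yarn run test
```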
Additional containers can be very handy in many scenarios. Please check the Cirrus CI catalog of examples for more details.
+
+Default Resources
+
By default, each additional container will get 0.5 CPU and 512Mi of memory. These values can be configured as usual
+via cpu and memory fields.
+
+
+Port Mapping
+
It's also possible to map ports of additional containers by using <HOST_PORT>:<CONTAINER_PORT> format for the port field.
+For example, port: 80:8080 will map port 8080 of the container to be available on local port 80 within a task.
+
Note: don't use port mapping unless absolutely necessary. A perfect use case is when you have several additional containers
+which start the service on the same port and there's no easy way to change that. Port mapping limits
+the number of places the container can be scheduled and will affect how fast such tasks are scheduled.
+
To specify multiple mappings use the ports field, instead of the port:
+
ports:
  - 8080
  - 3306
+
+
+
+Overriding Default Command
+
It's also possible to override the default CMD of an additional container via command field:
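A sketch, using Redis as an example (the command shown is illustrative):

```yaml
additional_containers:
  - name: redis
    image: redis:latest
    port: 6379
    # override the image's default CMD to enable append-only persistence
    command: redis-server --appendonly yes
```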
Cirrus CI provides a way to embed a badge that can represent status of your builds into a ReadMe file or a website.
+
For example, this is a badge for cirruslabs/cirrus-ci-web repository that contains Cirrus CI's front end:
+
In order to embed such a check into a "read-me" file or your website, just use a URL to a badge that looks like this:
+
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg
+
+
If you want a badge for a particular branch, use the ?branch=<BRANCH NAME> query parameter (at the end of the URL) like this:
+
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?branch=<BRANCH NAME>
+
+
By default, Cirrus picks the latest build in a final state for the repository or a particular branch if branch parameter is specified. It's also possible to explicitly set a concrete build to use with ?buildId=<BUILD ID> query parameter.
+
If you want a badge for a particular task within the latest finished build, use the ?task=<TASK NAME> query parameter (at the end of the URL) like this:
+
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?task=tests
+
+
You can even pick a specific script instruction within the task with an additional script=<SCRIPT NAME> parameter:
+
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?task=build&script=lint
+
Here is how Cirrus CI's badge can be embedded in a Markdown file:
+
[![Build Status](https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg)](https://cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>)
+
Make your development cycle fast, cost efficient, and secure
+
Cirrus CI is arguably the most technological and flexible Continuous Integration service there is.
+
+ Start on an infrastructure managed by us and enjoy per-second billing
+ with no concurrency limits!
+
+
+ Grow your product without worrying about infrastructure maintenance and updates. When it's the right time,
+ configure and bring your own infrastructure.
+
+
Build your Open Source projects on Linux, Windows, macOS and FreeBSD for free, including ARM architecture!
Cirrus Labs Inc will collect certain non-personally identifying information about you as you use our sites. We may use
this data to better understand our users. We may also publish this data, but the data will be about a large group of users,
not individuals.
+
We will also ask you to provide personal information, but you'll always be able to opt out. If you give us personal
+information, we won't do anything evil with it.
+
We can also use cookies, but you can choose not to store these.
+
That's the basic idea, but you must read through the entire Privacy Policy below and agree with all the details
+before you use any of our sites.
This document is based upon the Automattic Privacy Policy and is licensed under
+Creative Commons Attribution Share-Alike License 2.5. Basically,
+this means you can use it verbatim or edited, but you must release new versions under the same license and
+you have to credit Automattic somewhere (like this!). Automattic is not connected with and does not sponsor or endorse
+Cirrus Labs Inc or its use of the work.
+
Cirrus Labs Inc ("Cirrus Labs") makes available services including our web sites (https://cirrus-ci.org/), our blog, our API,
and any other software, sites, and services offered by Cirrus Labs Inc in connection with any of those (taken together, the "Service").
It is Cirrus Labs Inc's policy to respect your privacy regarding any information we may collect while operating our websites.
Like most website operators, Cirrus Labs Inc collects non-personally-identifying information of the sort that web browsers and
+servers typically make available, such as the browser type, language preference, referring site, and the date and time of each visitor request.
+Cirrus Labs Inc's purpose in collecting non-personally identifying information is to better understand how Cirrus Labs Inc's
+visitors use its website. From time to time, Cirrus Labs Inc may release non-personally-identifying information in the aggregate,
+e.g., by publishing a report on trends in the usage of its website.
+
Cirrus Labs Inc also collects potentially personally-identifying information like Internet Protocol (IP) addresses.
+Cirrus Labs Inc does not use such information to identify its visitors, however, and does not disclose such information,
+other than under the same circumstances that it uses and discloses personally-identifying information, as described below.
+We may also collect and use IP addresses to block users who violated our Terms of Service.
Certain visitors to Cirrus Labs Inc's websites choose to interact with Cirrus Labs Inc in ways that require
+Cirrus Labs Inc to gather personally-identifying information. The amount and type of information that Cirrus Labs Inc gathers
+depends on the nature of the interaction. Cirrus Labs Inc collects such information only insofar as is necessary or
+appropriate to fulfill the purpose of the visitor's interaction with Cirrus Labs Inc. Cirrus Labs Inc does not disclose
+personally-identifying information other than as described below. And visitors can always refuse to supply personally-identifying information,
+with the caveat that it may prevent them from engaging in certain Service-related activities.
+
Additionally, some interactions, such as posting a comment, may ask for optional personal information. For instance,
when posting a comment, a user may provide a website that will be displayed along with the user's name when the comment is displayed.
Supplying such personal information is completely optional and is only displayed for the benefit and convenience of the user.
Cirrus Labs Inc may collect statistics about the behavior of visitors to the Service. For instance, Cirrus Labs Inc
may monitor the most popular parts of https://cirrus-ci.org/. Cirrus Labs Inc may display this information publicly or
provide it to others. However, Cirrus Labs Inc does not disclose personally-identifying information other than as described below.
+
Protection of Certain Personally-Identifying Information
+
Cirrus Labs Inc discloses potentially personally-identifying and personally-identifying information only to those of its employees,
+contractors and affiliated organizations that (i) need to know that information in order to process it on Cirrus Labs Inc's behalf
+or to provide services available at Cirrus Labs Inc's websites, and (ii) that have agreed not to disclose it to others.
+Some of those employees, contractors and affiliated organizations may be located outside of your home country; by using the Service,
+you consent to the transfer of such information to them. Cirrus Labs Inc will not rent or sell potentially personally-identifying and
+personally-identifying information to anyone. Other than to its employees, contractors and affiliated organizations, as described above,
+Cirrus Labs Inc discloses potentially personally-identifying and personally-identifying information only when required to do so by law,
+or when Cirrus Labs Inc believes in good faith that disclosure is reasonably necessary to protect the property or rights of Cirrus Labs Inc,
+third parties or the public at large. If you are a registered user of the Service and have supplied your email address, Cirrus Labs Inc may
+occasionally send you an email to tell you about new features, solicit your feedback, or just keep you up to date with what's going on with
+Cirrus Labs Inc and our products. We primarily use our website and blog to communicate this type of information, so we expect to keep
+this type of email to a minimum. If you send us a request (for example via a support email or via one of our feedback mechanisms),
+we reserve the right to publish it in order to help us clarify or respond to your request or to help us support other users.
+Cirrus Labs Inc takes all measures reasonably necessary to protect against the unauthorized access, use, alteration or
+destruction of potentially personally-identifying and personally-identifying information.
A cookie is a string of information that a website stores on a visitor's computer, and that the visitor's browser provides
+to the Service each time the visitor returns. Cirrus Labs Inc uses cookies to help Cirrus Labs Inc identify and track visitors,
+their usage of Cirrus Labs Inc Service, and their Service access preferences. Cirrus Labs Inc visitors who do not wish to have
+cookies placed on their computers should set their browsers to refuse cookies before using Cirrus Labs Inc's websites, with
+the drawback that certain features of Cirrus Labs Inc's websites may not function properly without the aid of cookies.
Cirrus Labs Inc uses third party vendors and hosting partners to provide the necessary hardware, software, networking,
+storage, and related technology required to run the Service. You understand that although you retain full rights to your data,
+it may be stored on third party storage and transmitted through third party networks.
Although most changes are likely to be minor, Cirrus Labs Inc may change its Privacy Policy from time to time,
+and in Cirrus Labs Inc's sole discretion. Cirrus Labs Inc encourages visitors to frequently check this page for any changes
+to its Privacy Policy. Your continued use of this site after any change in this Privacy Policy will constitute your
+acceptance of such change.
Cirrus Labs Inc ("Cirrus Labs") operates the Cirrus CI service, which we hope you use. If you use it, please use it responsibly.
+If you don't, we'll have to terminate your subscription.
+
For paid plans, you'll be charged on a monthly basis. You can cancel anytime, but there are no refunds.
+
You own the source code that you provide to Cirrus CI and you're responsible for keeping it safe.
+
The Terms of Service, the Cirrus CI Service, and our prices can change at any time. We'll warn you 30 days in advance
+of any price changes. We'll try to warn you about major changes to the Terms of Service or Cirrus CI, but we make no guarantees.
+
That's the basic idea, but you must read through the entire Terms of Service below and agree with all the details before
+you use any of our websites or services (whether or not you have signed up).
This document is an adaptation of the Code Climate Terms of Service, which is
an adaptation of the original Heroku Terms of Service, which is in turn an adaptation of the
Google App Engine Terms of Service. The original work has been modified
with permission under the Creative Commons Attribution 3.0 License.
Neither Code Climate, Inc., nor Heroku, Inc., nor Google, Inc. is connected with, and they do not sponsor or endorse,
Cirrus CI or its use of the work.
+
You're welcome to adapt and use this document for your own needs. If you make an improvement, we'd appreciate it if
+you would let us know so we can consider improving our own document.
Your use of the Cirrus CI Service is governed by this agreement (the "Terms"). The "Service" means the services Cirrus CI
makes available, including our web sites (https://cirrus-ci.org/, https://cirrus-ci.com/), our blog, our API, and any other software, sites,
and services offered by Cirrus Labs in connection with any of those.
+
"Customer Source Code" means any source code you directly or indirectly submit to Cirrus CI for the purpose of using the Service.
+"Content" means all content generated by Cirrus CI on your behalf (including metric data) and does not include Customer Source Code.
+
In order to use the Service, You (the "Customer", "You", or "Your") must first agree to the Terms. You understand and agree
+that Cirrus Labs will treat Your use of the Service as acceptance of the Terms from that point onwards.
+
Cirrus Labs may make changes to the Terms from time to time. You may reject the changes by terminating Your subscription.
+You understand and agree that if You use the Service after the date on which the Terms have changed, Cirrus Labs will treat
+Your use as acceptance of the updated Terms.
+
If you have any questions about the Terms, please contact us.
You may not use the Service if You are a person barred from receiving the Service under the laws of the United States
+or other countries, including the country in which You are resident or from which You use the Service.
+
You may not use the service unless you are over the age of 13.
+
You must be a human. Sign ups via automated methods are not permitted.
You must provide accurate and complete registration information any time You register to use the Service.
+
You are responsible for the security of Your passwords and for any use of Your user.
+
Your use of the Service must comply with all applicable laws, regulations and ordinances.
+
You agree to not engage in any activity that interferes with or disrupts the Service.
+
Cirrus Labs reserves the right to enforce quotas and usage limits (to any resources, including the API) at its sole discretion,
+with or without notice, which may result in Cirrus Labs disabling or throttling your usage of the Service for any amount of time.
+
Shared users are forbidden unless you pay for all Git commit authors from the last 30 days.
The Service shall be subject to the privacy policy for the Service available at Privacy Policy, which is hereby
expressly incorporated into the Terms of Service by reference. You agree to the use of Your data in accordance with Cirrus CI's privacy policies.
Cirrus Labs may change its fees and payment policies for the Service by notifying You at least thirty (30) days before the beginning of the billing cycle in which such change will take effect.
You agree that Cirrus Labs, in its sole discretion and for any or no reason, may terminate or suspend Your subscription. You agree that any termination of Your access to the Service may be without prior notice, and You agree that Cirrus CI will not be liable to You or any third party for such termination.
Cirrus Labs claims no ownership or control over any Customer Source Code. You retain copyright and any other rights You
+already hold in the Customer Source Code and You are responsible for protecting those rights, as appropriate.
+
You agree to assume full responsibility for configuring the Service to allow appropriate access to any Customer Source Code provided to the Service.
+
You understand that private projects will display Customer Source Code to You and any collaborators that you designate for that project.
+
You retain sole responsibility for any collaborators or third-party services that you allow to view Customer Source Code and entrust them at your own risk.
+
Cirrus Labs is not responsible if you fail to configure, or misconfigure, your project and inadvertently allow unauthorized parties to view any Customer Source Code.
You may choose to or we may invite You to submit comments or ideas about the Service, including but not limited to ideas
+about improving the Service or our products ("Ideas"). By submitting any Idea, You agree that Your disclosure is unsolicited
+and without restriction and will not place Cirrus Labs under any fiduciary or other obligation, and that we are free to
+use the Idea without any additional compensation to You, and/or to disclose the Idea on a non-confidential basis or otherwise to anyone.
You acknowledge and agree that the Service may change from time to time without prior notice to You.
+
Changes include, without limitation, changes to fee and payment policies, security patches, added or removed functionality, and other enhancements or restrictions.
+
Cirrus Labs shall not be liable to you or to any third party for any modification, price change, suspension or discontinuance of the Service.
The Service may include hyperlinks to other websites or content or resources or email content. You acknowledge and
+agree that Cirrus Labs is not responsible for the availability of any such external sites or resources, and does not
+endorse any advertising, products or other materials on or available from such web sites or resources.
All of the content available on or through the Service, including without limitation, text, photographs, graphics, logos,
+trade/service marks, and/or audiovisual content, but expressly excluding Customer Source Code, is owned and/or controlled
+by Cirrus Labs, or other licensors or Service users and is protected, as applicable, by copyright, trademark, trade dress,
+patent, and trade secret laws, other proprietary rights, and international treaties. You acknowledge that the Service and
+any underlying technology or software used in connection with the Service contain our proprietary information.
+
Subject to and conditioned upon your compliance with these Terms of Service, we grant to you a personal, worldwide,
+royalty-free, non-assignable and non-exclusive license to use the software provided to You by Cirrus Labs as part of
+the Service as provided to You by Cirrus Labs. This license is for the sole purpose of enabling You to use and enjoy
+the benefit of the Service as provided by Cirrus Labs, in the manner permitted by the Terms.
+
You may not (and You may not permit anyone else to): (a) copy, modify, create a derivative work of, reverse engineer,
+decompile or otherwise attempt to extract the source code of the Service or any part thereof, unless this is expressly
+permitted or required by law, or unless You have been specifically told that You may do so by Cirrus Labs, in writing
+(e.g., through an open source software license); or (b) attempt to disable or circumvent any security mechanisms used by the Service.
+
Open source software licenses for components of the Service released under an open source license constitute separate written agreements.
+To the limited extent that the open source software licenses expressly supersede these Terms of Service, the open source licenses
+govern Your agreement with Cirrus Labs for the use of the components of the Service released under an open source license.
+
You may not use the Service in any manner that could damage, disable, overburden or impair our servers or networks, or
+interfere with any other users' use or enjoyment of the Service.
+
You may not attempt to gain unauthorized access to any of the Service, member accounts, or computer systems or networks,
+through hacking, password mining or any other means.
+
Without limiting anything else contained herein, you agree that you shall not (and you agree not to allow any third party to):
+
+
remove any notices of copyright, trademark or other proprietary rights contained in/on or accessible through the Service
+or in any content or other material obtained via the Service;
+
use any robot, spider, website search/retrieval application, or other automated device, process or means to access,
+retrieve or index any portion of the Service;
+
reformat or frame any portion of the web pages that are part of the Service;
+
use the Service for commercial purposes not permitted under these Terms;
+
create users by automated means or under false or fraudulent pretenses;
+
attempt to defeat any security or verification measure relating to the Service;
+
provide or use tracking or monitoring functionality in connection with the Service, including, without limitation,
+to identify other users’ actions or activities;
+
impersonate or attempt to impersonate Cirrus Labs or any employee, contractor or associate of Cirrus Labs, or any other
+person or entity; or collect or store personal data about other users in connection with the prohibited activities described in this paragraph.
Cirrus Labs respects the intellectual property of others and requires that our users do the same. It is our policy to
+terminate the membership of repeat infringers. If you believe that material or content residing on or accessible through
+the Service infringes a copyright, please send a notice of copyright infringement containing the following information
+to the Designated Copyright Agent listed below:
+
+
identification of the copyrighted work claimed to have been infringed, or, if multiple copyrighted works are covered
+by a single notification, a representative list of such works;
+
identification of the claimed infringing material and information reasonably sufficient to permit us to locate
+the material on the Cirrus CI Service (providing the URL(s) of the claimed infringing material satisfies this requirement);
+
information reasonably sufficient to permit us to contact you, such as an address, telephone number, and an email address;
+
a statement by you that you have a good faith belief that the disputed use is not authorized by the copyright owner, its agent, or the law;
+
a statement by you, made under penalty of perjury, that the above information in your notification is accurate and that
+you are the copyright owner or are authorized to act on the copyright owner's behalf; and
+
your physical or electronic signature.
+
+
Our Designated Copyright Agent for notification of claimed infringement can be reached by email at: hello@cirruslabs.org.
+
The Service may contain advertisements and/or links to other websites (“Third Party Sites”). Cirrus Labs does not endorse,
+sanction or verify the accuracy or ownership of the information contained in/on any Third Party Site or any products or
+services advertised on Third Party Sites. If you decide to leave the Site and navigate to Third Party Sites, or install
+any software or download content from any such Third Party Sites, you do so at your own risk. Once you access a Third Party Site
+through a link on our Site, you may no longer be protected by these Terms of Service and you may be subject to the terms
+and conditions of such Third Party Site. You should review the applicable policies, including privacy and data gathering practices,
+of any Third Party Site to which you navigate from the Site, or relating to any software you use or install from a Third Party Site.
+Concerns regarding a Third Party Site should be directed to the Third Party Site itself. Cirrus CI bears no responsibility for
+any action associated with any Third Party Site.
IF YOU ACCESS THE SERVICE, YOU DO SO AT YOUR OWN RISK. WE PROVIDE THE SERVICE “AS IS”, “WITH ALL FAULTS” AND “AS AVAILABLE.”
+WE MAKE NO EXPRESS OR IMPLIED WARRANTIES OR GUARANTEES ABOUT THE SERVICE. TO THE MAXIMUM EXTENT PERMITTED BY LAW, WE HEREBY
+DISCLAIM ALL SUCH WARRANTIES, INCLUDING ALL STATUTORY WARRANTIES, WITH RESPECT TO THE SERVICE, INCLUDING WITHOUT LIMITATION
+ANY WARRANTIES THAT THE SERVICE IS MERCHANTABLE, OF SATISFACTORY QUALITY, ACCURATE, FIT FOR A PARTICULAR PURPOSE OR NEED,
+OR NON-INFRINGING. WE DO NOT GUARANTEE THAT THE RESULTS THAT MAY BE OBTAINED FROM THE USE OF THE SERVICE WILL BE EFFECTIVE,
+RELIABLE OR ACCURATE OR WILL MEET YOUR REQUIREMENTS. WE DO NOT GUARANTEE THAT YOU WILL BE ABLE TO ACCESS OR USE THE SERVICE
+(EITHER DIRECTLY OR THROUGH THIRD-PARTY NETWORKS) AT TIMES OR LOCATIONS OF YOUR CHOOSING. WE ARE NOT RESPONSIBLE FOR THE ACCURACY,
+RELIABILITY, TIMELINESS OR COMPLETENESS OF INFORMATION PROVIDED BY ANY OTHER USERS OF THE SERVICE OR ANY OTHER DATA OR
+INFORMATION PROVIDED OR RECEIVED THROUGH THE SERVICE. EXCEPT AS EXPRESSLY SET FORTH HEREIN, CIRRUS LABS MAKES NO WARRANTIES
+ABOUT THE INFORMATION SYSTEMS, SOFTWARE AND FUNCTIONS MADE ACCESSIBLE BY OR THROUGH THE SERVICE OR ANY SECURITY ASSOCIATED
+WITH THE TRANSMISSION OF SENSITIVE INFORMATION. CIRRUS LABS DOES NOT WARRANT THAT THE SERVICE WILL OPERATE ERROR-FREE,
+THAT ERRORS IN THE SERVICE WILL BE FIXED, THAT LOSS OF DATA WILL NOT OCCUR, OR THAT THE SERVICE OR SOFTWARE ARE FREE OF
+COMPUTER VIRUSES, CONTAMINANTS OR OTHER HARMFUL ITEMS. UNDER NO CIRCUMSTANCES WILL CIRRUS LABS, ANY OF OUR AFFILIATES,
+DISTRIBUTORS, PARTNERS, LICENSORS, AND/OR ANY OF OUR OR THEIR DIRECTORS, OFFICERS, EMPLOYEES, CONSULTANTS, AGENTS, OR
+OTHER REPRESENTATIVES BE LIABLE FOR ANY LOSS OR DAMAGE CAUSED BY YOUR RELIANCE ON INFORMATION OBTAINED THROUGH THE SERVICE.
YOUR SOLE AND EXCLUSIVE REMEDY FOR ANY DISPUTE WITH US IS THE CANCELLATION OF YOUR REGISTRATION. IN NO EVENT SHALL OUR
+TOTAL CUMULATIVE LIABILITY TO YOU FOR ANY AND ALL CLAIMS RELATING TO OR ARISING OUT OF YOUR USE OF THE SERVICE,
+REGARDLESS OF THE FORM OF ACTION, EXCEED THE GREATER OF: (A) THE TOTAL AMOUNT OF FEES, IF ANY, THAT YOU PAID TO UTILIZE
+THE SERVICE OR (B) ONE HUNDRED DOLLARS ($100). IN NO EVENT SHALL WE BE LIABLE TO YOU (OR TO ANY THIRD PARTY CLAIMING
+UNDER OR THROUGH YOU) FOR ANY DIRECT, INDIRECT, SPECIAL, INCIDENTAL, CONSEQUENTIAL, PUNITIVE OR EXEMPLARY DAMAGES OR
+ANY BODILY INJURY, EMOTIONAL DISTRESS, DEATH OR ANY OTHER DAMAGES ARISING FROM YOUR USE OF OR INABILITY TO USE THE SERVICE,
+WHETHER ON-LINE OR OFF-LINE, OR OTHERWISE IN CONNECTION WITH THE SERVICE. THESE EXCLUSIONS APPLY TO ANY CLAIMS FOR LOST PROFITS,
+LOST DATA, LOSS OF GOODWILL OR BUSINESS REPUTATION, COST OF PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES, WORK STOPPAGE,
+COMPUTER FAILURE OR MALFUNCTION, ANY OTHER COMMERCIAL DAMAGES OR LOSSES, OR ANY PERSONAL INJURY OR PROPERTY DAMAGES,
+EVEN IF WE KNEW OR SHOULD HAVE KNOWN OF THE POSSIBILITY OF SUCH DAMAGES. BECAUSE SOME STATES OR JURISDICTIONS DO NOT ALLOW
+THE EXCLUSION OR THE LIMITATION OF LIABILITY FOR CONSEQUENTIAL OR INCIDENTAL DAMAGES, IN SUCH STATES OR JURISDICTIONS,
+OUR LIABILITY SHALL BE LIMITED TO THE EXTENT PERMITTED BY LAW. IF YOU ARE A CALIFORNIA RESIDENT, YOU WAIVE YOUR RIGHTS
+WITH RESPECT TO CALIFORNIA CIVIL CODE SECTION 1542, WHICH SAYS "A GENERAL RELEASE DOES NOT EXTEND TO CLAIMS WHICH THE
+CREDITOR DOES NOT KNOW OR SUSPECT TO EXIST IN HIS FAVOR AT THE TIME OF EXECUTING THE RELEASE, WHICH, IF KNOWN BY HIM,
+MUST HAVE MATERIALLY AFFECTED HIS SETTLEMENT WITH THE DEBTOR."
The Terms of Service shall be deemed to have been entered into and shall be construed and enforced in accordance with
+the laws of the State of New York as applied to contracts made and performed entirely within New York, without giving
+effect to any conflicts of law statutes. Any controversy, dispute or claim arising out of or related to the
+Terms of Service or the Service shall be settled by final and binding arbitration to be conducted by an arbitration
+tribunal in the State of New York and the County of New York, pursuant to the rules of the American Arbitration Association.
+Any and all disputes that you may have with Cirrus Labs shall be resolved individually, without resort to any form of class action.
The Terms constitute the whole legal agreement between You and Cirrus Labs and govern Your use of the Service and
+completely replace any prior agreements between You and Cirrus Labs in relation to the Service.
+
If any part of the Terms of Service is held invalid or unenforceable, that portion shall be construed in a manner
+consistent with applicable law to reflect, as nearly as possible, the original intentions of the parties, and
+the remaining portions shall remain in full force and effect.
+
The failure of Cirrus Labs to exercise or enforce any right or provision of the Terms of Service shall not constitute
+a waiver of such right or provision. The failure of either party to exercise in any respect any right provided for herein
+shall not be deemed a waiver of any further rights hereunder.
+
You agree that if Cirrus Labs does not exercise or enforce any legal right or remedy which is contained in the Terms
+(or which Cirrus Labs has the benefit of under any applicable law), this will not be taken to be a formal waiver of
+Cirrus Labs' rights and that those rights or remedies will still be available to Cirrus Labs.
+
Cirrus Labs shall not be liable for failing or delaying performance of its obligations resulting from any condition
+beyond its reasonable control, including but not limited to, governmental action, acts of terrorism, earthquake, fire,
+flood or other acts of God, labor conditions, power failures, and Internet disturbances.
+
We may assign this contract at any time to any parent, subsidiary, or any affiliated company, or as part of the sale to,
+merger with, or other transfer of our company to another entity.
+
This page was last updated on 02/03/2019.
\ No newline at end of file
diff --git a/plugins/social/templates/default.yml b/plugins/social/templates/default.yml
new file mode 100644
index 00000000..2d803a98
--- /dev/null
+++ b/plugins/social/templates/default.yml
@@ -0,0 +1,231 @@
+# Copyright (c) 2016-2023 Martin Donath
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+# -----------------------------------------------------------------------------
+# Configuration
+# -----------------------------------------------------------------------------
+
+# Definitions
+definitions:
+
+ # Background image
+ - &background_image >-
+ {{ layout.background_image | x }}
+
+ # Background color (default: indigo)
+ - &background_color >-
+ {%- if layout.background_color -%}
+ {{ layout.background_color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ef5552",
+ "pink": "#e92063",
+ "purple": "#ab47bd",
+ "deep-purple": "#7e56c2",
+ "indigo": "#4051b5",
+ "blue": "#2094f3",
+ "light-blue": "#02a6f2",
+ "cyan": "#00bdd6",
+ "teal": "#009485",
+ "green": "#4cae4f",
+ "light-green": "#8bc34b",
+ "lime": "#cbdc38",
+ "yellow": "#ffec3d",
+ "amber": "#ffc105",
+ "orange": "#ffa724",
+ "deep-orange": "#ff6e42",
+ "brown": "#795649",
+ "grey": "#757575",
+ "blue-grey": "#546d78",
+ "black": "#000000",
+ "white": "#ffffff"
+ }[primary] or "#4051b5" }}
+ {%- endif -%}
+
+ # Text color (default: white)
+ - &color >-
+ {%- if layout.color -%}
+ {{ layout.color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ffffff",
+ "pink": "#ffffff",
+ "purple": "#ffffff",
+ "deep-purple": "#ffffff",
+ "indigo": "#ffffff",
+ "blue": "#ffffff",
+ "light-blue": "#ffffff",
+ "cyan": "#ffffff",
+ "teal": "#ffffff",
+ "green": "#ffffff",
+ "light-green": "#ffffff",
+ "lime": "#000000",
+ "yellow": "#000000",
+ "amber": "#000000",
+ "orange": "#000000",
+ "deep-orange": "#ffffff",
+ "brown": "#ffffff",
+ "grey": "#ffffff",
+ "blue-grey": "#ffffff",
+ "black": "#ffffff",
+ "white": "#000000"
+ }[primary] or "#ffffff" }}
+ {%- endif -%}
+
+ # Font family (default: Roboto)
+ - &font_family >-
+ {%- if layout.font_family -%}
+ {{ layout.font_family }}
+ {%- elif config.theme.font != false -%}
+ {{ config.theme.font.get("text", "Roboto") }}
+ {%- else -%}
+ Roboto
+ {%- endif -%}
+
+ # Site name
+ - &site_name >-
+ {{ config.site_name }}
+
+ # Page title
+ - &page_title >-
+ {%- if layout.title -%}
+ {{ layout.title }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page title with site name
+ - &page_title_with_site_name >-
+ {%- if not page.is_homepage -%}
+ {{ page.meta.get("title", page.title) }} - {{ config.site_name }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page description
+ - &page_description >-
+ {%- if layout.description -%}
+ {{ layout.description }}
+ {%- else -%}
+ {{ page.meta.get("description", config.site_description) | x }}
+ {%- endif -%}
+
+ # Logo
+ - &logo >-
+ {%- if layout.logo -%}
+ {{ layout.logo }}
+ {%- elif config.theme.logo -%}
+ {{ config.docs_dir }}/{{ config.theme.logo }}
+ {%- endif -%}
+
+ # Logo (icon)
+ - &logo_icon >-
+ {{ config.theme.icon.logo | x }}
+
+# Meta tags
+tags:
+
+ # Open Graph
+ og:type: website
+ og:title: *page_title_with_site_name
+ og:description: *page_description
+ og:image: "{{ image.url }}"
+ og:image:type: "{{ image.type }}"
+ og:image:width: "{{ image.width }}"
+ og:image:height: "{{ image.height }}"
+ og:url: "{{ page.canonical_url }}"
+
+ # Twitter
+ twitter:card: summary_large_image
+  twitter:title: *page_title_with_site_name
+ twitter:description: *page_description
+ twitter:image: "{{ image.url }}"
+
+# -----------------------------------------------------------------------------
+# Specification
+# -----------------------------------------------------------------------------
+
+# Card size and layers
+size: { width: 1200, height: 630 }
+layers:
+
+ # Background
+ - background:
+ image: *background_image
+ color: *background_color
+
+ # Logo
+ - size: { width: 144, height: 144 }
+ offset: { x: 992, y: 64 }
+ background:
+ image: *logo
+ icon:
+ value: *logo_icon
+ color: *color
+
+ # Site name
+ - size: { width: 832, height: 42 }
+ offset: { x: 64, y: 64 }
+ typography:
+ content: *site_name
+ color: *color
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page title
+ - size: { width: 832, height: 310 }
+    offset: { x: 64, y: 160 }
+ typography:
+ content: *page_title
+ align: start
+ color: *color
+ line:
+ amount: 3
+ height: 1.25
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page description
+ - size: { width: 832, height: 64 }
+ offset: { x: 64, y: 512 }
+ typography:
+ content: *page_description
+ align: start
+ color: *color
+ line:
+ amount: 2
+ height: 1.5
+ font:
+ family: *font_family
+ style: Regular
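This template reads its customizable values from the `layout.*` namespace (`layout.background_color`, `layout.font_family`, and so on), falling back to the theme palette when they are unset. Assuming the social plugin's `cards_layout` and `cards_layout_options` settings (available in Material for MkDocs Insiders), a site's `mkdocs.yml` could select a layout and override those values roughly like this — a hedged sketch, not a definitive configuration:

```yaml
# mkdocs.yml (sketch): select a card layout and feed the layout.* values
# that the template above consumes via its Jinja conditionals.
plugins:
  - social:
      cards_layout: default          # template file to render (default.yml)
      cards_layout_options:
        background_color: "#4051b5"  # becomes layout.background_color
        font_family: Roboto          # becomes layout.font_family
```

Options left out of `cards_layout_options` fall through to the template's `{%- else -%}` branches, which derive colors from `config.theme.palette`.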
diff --git a/plugins/social/templates/default/accent.yml b/plugins/social/templates/default/accent.yml
new file mode 100644
index 00000000..dde03b53
--- /dev/null
+++ b/plugins/social/templates/default/accent.yml
@@ -0,0 +1,221 @@
+# Copyright (c) 2016-2023 Martin Donath
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+# -----------------------------------------------------------------------------
+# Configuration
+# -----------------------------------------------------------------------------
+
+# Definitions
+definitions:
+
+ # Background image
+ - &background_image >-
+ {{ layout.background_image | x }}
+
+ # Background color (default: indigo)
+ - &background_color >-
+ {%- if layout.background_color -%}
+ {{ layout.background_color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set accent = palette.get("accent", "indigo") -%}
+ {%- set accent = accent.replace(" ", "-") -%}
+ {{ {
+ "red": "#ff1a47",
+ "pink": "#f50056",
+ "purple": "#df41fb",
+ "deep-purple": "#7c4dff",
+ "indigo": "#526cfe",
+ "blue": "#4287ff",
+ "light-blue": "#0091eb",
+ "cyan": "#00bad6",
+ "teal": "#00bda4",
+ "green": "#00c753",
+ "light-green": "#63de17",
+ "lime": "#b0eb00",
+ "yellow": "#ffd500",
+ "amber": "#ffaa00",
+ "orange": "#ff9100",
+ "deep-orange": "#ff6e42"
+ }[accent] or "#4051b5" }}
+ {%- endif -%}
+
+ # Text color (default: white)
+ - &color >-
+ {%- if layout.color -%}
+ {{ layout.color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set accent = palette.get("accent", "indigo") -%}
+ {%- set accent = accent.replace(" ", "-") -%}
+ {{ {
+ "red": "#ffffff",
+ "pink": "#ffffff",
+ "purple": "#ffffff",
+ "deep-purple": "#ffffff",
+ "indigo": "#ffffff",
+ "blue": "#ffffff",
+ "light-blue": "#ffffff",
+ "cyan": "#ffffff",
+ "teal": "#ffffff",
+ "green": "#ffffff",
+ "light-green": "#ffffff",
+ "lime": "#000000",
+ "yellow": "#000000",
+ "amber": "#000000",
+ "orange": "#000000",
+ "deep-orange": "#ffffff"
+ }[accent] or "#ffffff" }}
+ {%- endif -%}
+
+ # Font family (default: Roboto)
+ - &font_family >-
+ {%- if layout.font_family -%}
+ {{ layout.font_family }}
+ {%- elif config.theme.font != false -%}
+ {{ config.theme.font.get("text", "Roboto") }}
+ {%- else -%}
+ Roboto
+ {%- endif -%}
+
+ # Site name
+ - &site_name >-
+ {{ config.site_name }}
+
+ # Page title
+ - &page_title >-
+ {%- if layout.title -%}
+ {{ layout.title }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page title with site name
+ - &page_title_with_site_name >-
+ {%- if not page.is_homepage -%}
+ {{ page.meta.get("title", page.title) }} - {{ config.site_name }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page description
+ - &page_description >-
+ {%- if layout.description -%}
+ {{ layout.description }}
+ {%- else -%}
+ {{ page.meta.get("description", config.site_description) | x }}
+ {%- endif -%}
+
+ # Logo
+ - &logo >-
+ {%- if layout.logo -%}
+ {{ layout.logo }}
+ {%- elif config.theme.logo -%}
+ {{ config.docs_dir }}/{{ config.theme.logo }}
+ {%- endif -%}
+
+ # Logo (icon)
+ - &logo_icon >-
+ {{ config.theme.icon.logo | x }}
+
+# Meta tags
+tags:
+
+ # Open Graph
+ og:type: website
+ og:title: *page_title_with_site_name
+ og:description: *page_description
+ og:image: "{{ image.url }}"
+ og:image:type: "{{ image.type }}"
+ og:image:width: "{{ image.width }}"
+ og:image:height: "{{ image.height }}"
+ og:url: "{{ page.canonical_url }}"
+
+ # Twitter
+ twitter:card: summary_large_image
+  twitter:title: *page_title_with_site_name
+ twitter:description: *page_description
+ twitter:image: "{{ image.url }}"
+
+# -----------------------------------------------------------------------------
+# Specification
+# -----------------------------------------------------------------------------
+
+# Card size and layers
+size: { width: 1200, height: 630 }
+layers:
+
+ # Background
+ - background:
+ image: *background_image
+ color: *background_color
+
+ # Logo
+ - size: { width: 144, height: 144 }
+ offset: { x: 992, y: 64 }
+ background:
+ image: *logo
+ icon:
+ value: *logo_icon
+ color: *color
+
+ # Site name
+ - size: { width: 832, height: 42 }
+ offset: { x: 64, y: 64 }
+ typography:
+ content: *site_name
+ color: *color
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page title
+ - size: { width: 832, height: 310 }
+    offset: { x: 64, y: 160 }
+ typography:
+ content: *page_title
+ align: start
+ color: *color
+ line:
+ amount: 3
+ height: 1.25
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page description
+ - size: { width: 832, height: 64 }
+ offset: { x: 64, y: 512 }
+ typography:
+ content: *page_description
+ align: start
+ color: *color
+ line:
+ amount: 2
+ height: 1.5
+ font:
+ family: *font_family
+ style: Regular
diff --git a/plugins/social/templates/default/invert.yml b/plugins/social/templates/default/invert.yml
new file mode 100644
index 00000000..7ea36443
--- /dev/null
+++ b/plugins/social/templates/default/invert.yml
@@ -0,0 +1,231 @@
+# Copyright (c) 2016-2023 Martin Donath
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+# -----------------------------------------------------------------------------
+# Configuration
+# -----------------------------------------------------------------------------
+
+# Definitions
+definitions:
+
+ # Background image
+ - &background_image >-
+ {{ layout.background_image | x }}
+
+ # Background color (default: white)
+ - &background_color >-
+ {%- if layout.background_color -%}
+ {{ layout.background_color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ffffff",
+ "pink": "#ffffff",
+ "purple": "#ffffff",
+ "deep-purple": "#ffffff",
+ "indigo": "#ffffff",
+ "blue": "#ffffff",
+ "light-blue": "#ffffff",
+ "cyan": "#ffffff",
+ "teal": "#ffffff",
+ "green": "#ffffff",
+ "light-green": "#ffffff",
+ "lime": "#000000",
+ "yellow": "#000000",
+ "amber": "#000000",
+ "orange": "#000000",
+ "deep-orange": "#ffffff",
+ "brown": "#ffffff",
+ "grey": "#ffffff",
+ "blue-grey": "#ffffff",
+ "black": "#ffffff",
+ "white": "#000000"
+ }[primary] or "#ffffff" }}
+ {%- endif -%}
+
+ # Text color (default: indigo)
+ - &color >-
+ {%- if layout.color -%}
+ {{ layout.color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ef5552",
+ "pink": "#e92063",
+ "purple": "#ab47bd",
+ "deep-purple": "#7e56c2",
+ "indigo": "#4051b5",
+ "blue": "#2094f3",
+ "light-blue": "#02a6f2",
+ "cyan": "#00bdd6",
+ "teal": "#009485",
+ "green": "#4cae4f",
+ "light-green": "#8bc34b",
+ "lime": "#cbdc38",
+ "yellow": "#ffec3d",
+ "amber": "#ffc105",
+ "orange": "#ffa724",
+ "deep-orange": "#ff6e42",
+ "brown": "#795649",
+ "grey": "#757575",
+ "blue-grey": "#546d78",
+ "black": "#000000",
+ "white": "#ffffff"
+ }[primary] or "#4051b5" }}
+ {%- endif -%}
+
+ # Font family (default: Roboto)
+ - &font_family >-
+ {%- if layout.font_family -%}
+ {{ layout.font_family }}
+ {%- elif config.theme.font != false -%}
+ {{ config.theme.font.get("text", "Roboto") }}
+ {%- else -%}
+ Roboto
+ {%- endif -%}
+
+ # Site name
+ - &site_name >-
+ {{ config.site_name }}
+
+ # Page title
+ - &page_title >-
+ {%- if layout.title -%}
+ {{ layout.title }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page title with site name
+ - &page_title_with_site_name >-
+ {%- if not page.is_homepage -%}
+ {{ page.meta.get("title", page.title) }} - {{ config.site_name }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page description
+ - &page_description >-
+ {%- if layout.description -%}
+ {{ layout.description }}
+ {%- else -%}
+ {{ page.meta.get("description", config.site_description) | x }}
+ {%- endif -%}
+
+ # Logo
+ - &logo >-
+ {%- if layout.logo -%}
+ {{ layout.logo }}
+ {%- elif config.theme.logo -%}
+ {{ config.docs_dir }}/{{ config.theme.logo }}
+ {%- endif -%}
+
+ # Logo (icon)
+ - &logo_icon >-
+ {{ config.theme.icon.logo | x }}
+
+# Meta tags
+tags:
+
+ # Open Graph
+ og:type: website
+ og:title: *page_title_with_site_name
+ og:description: *page_description
+ og:image: "{{ image.url }}"
+ og:image:type: "{{ image.type }}"
+ og:image:width: "{{ image.width }}"
+ og:image:height: "{{ image.height }}"
+ og:url: "{{ page.canonical_url }}"
+
+ # Twitter
+ twitter:card: summary_large_image
+  twitter:title: *page_title_with_site_name
+ twitter:description: *page_description
+ twitter:image: "{{ image.url }}"
+
+# -----------------------------------------------------------------------------
+# Specification
+# -----------------------------------------------------------------------------
+
+# Card size and layers
+size: { width: 1200, height: 630 }
+layers:
+
+ # Background
+ - background:
+ image: *background_image
+ color: *background_color
+
+ # Logo
+ - size: { width: 144, height: 144 }
+ offset: { x: 992, y: 64 }
+ background:
+ image: *logo
+ icon:
+ value: *logo_icon
+ color: *color
+
+ # Site name
+ - size: { width: 832, height: 42 }
+ offset: { x: 64, y: 64 }
+ typography:
+ content: *site_name
+ color: *color
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page title
+ - size: { width: 832, height: 310 }
+ offset: { x: 62, y: 160 }
+ typography:
+ content: *page_title
+ align: start
+ color: *color
+ line:
+ amount: 3
+ height: 1.25
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page description
+ - size: { width: 832, height: 64 }
+ offset: { x: 64, y: 512 }
+ typography:
+ content: *page_description
+ align: start
+ color: *color
+ line:
+ amount: 2
+ height: 1.5
+ font:
+ family: *font_family
+ style: Regular
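The `accent` and `invert` variants above differ from `default.yml` only in which palette key seeds the background and text colors. For layouts beyond these built-in variants, the plugin also supports pointing at a custom template directory (again an Insiders feature; the directory and file names below are illustrative assumptions):

```yaml
# mkdocs.yml (sketch): use a project-local layout instead of a bundled one.
plugins:
  - social:
      cards_layout_dir: layouts   # directory containing custom *.yml layouts
      cards_layout: my-card       # renders layouts/my-card.yml
```

A custom layout file follows the same shape as the templates in this diff: a `definitions` list of Jinja anchors, a `tags` mapping for the generated meta tags, and a `size`/`layers` specification for the rendered card.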
diff --git a/plugins/social/templates/default/only/image.yml b/plugins/social/templates/default/only/image.yml
new file mode 100644
index 00000000..ee10b9c2
--- /dev/null
+++ b/plugins/social/templates/default/only/image.yml
@@ -0,0 +1,77 @@
+# Copyright (c) 2016-2023 Martin Donath
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+# -----------------------------------------------------------------------------
+# Configuration
+# -----------------------------------------------------------------------------
+
+# Definitions
+definitions:
+
+ # Background image
+ - &background_image >-
+ {{ layout.background_image }}
+
+ # Page title with site name
+ - &page_title_with_site_name >-
+ {%- if not page.is_homepage -%}
+ {{ page.meta.get("title", page.title) }} - {{ config.site_name }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page description
+ - &page_description >-
+ {%- if layout.description -%}
+ {{ layout.description }}
+ {%- else -%}
+ {{ page.meta.get("description", config.site_description) | x }}
+ {%- endif -%}
+
+# Meta tags
+tags:
+
+ # Open Graph
+ og:type: website
+ og:title: *page_title_with_site_name
+ og:description: *page_description
+ og:image: "{{ image.url }}"
+ og:image:type: "{{ image.type }}"
+ og:image:width: "{{ image.width }}"
+ og:image:height: "{{ image.height }}"
+ og:url: "{{ page.canonical_url }}"
+
+ # Twitter
+ twitter:card: summary_large_image
+ twitter:title: *page_title_with_site_name
+ twitter:description: *page_description
+ twitter:image: "{{ image.url }}"
+
+# -----------------------------------------------------------------------------
+# Specification
+# -----------------------------------------------------------------------------
+
+# Card size and layers
+size: { width: 1200, height: 630 }
+layers:
+
+ # Background
+ - background:
+ image: *background_image
diff --git a/plugins/social/templates/default/variant.yml b/plugins/social/templates/default/variant.yml
new file mode 100644
index 00000000..9f505372
--- /dev/null
+++ b/plugins/social/templates/default/variant.yml
@@ -0,0 +1,242 @@
+# Copyright (c) 2016-2023 Martin Donath
+
+# Permission is hereby granted, free of charge, to any person obtaining a copy
+# of this software and associated documentation files (the "Software"), to
+# deal in the Software without restriction, including without limitation the
+# rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+# sell copies of the Software, and to permit persons to whom the Software is
+# furnished to do so, subject to the following conditions:
+
+# The above copyright notice and this permission notice shall be included in
+# all copies or substantial portions of the Software.
+
+# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+# FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE
+# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+# FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS
+# IN THE SOFTWARE.
+
+# -----------------------------------------------------------------------------
+# Configuration
+# -----------------------------------------------------------------------------
+
+# Definitions
+definitions:
+
+ # Background image
+ - &background_image >-
+ {{ layout.background_image | x }}
+
+ # Background color (default: indigo)
+ - &background_color >-
+ {%- if layout.background_color -%}
+ {{ layout.background_color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ef5552",
+ "pink": "#e92063",
+ "purple": "#ab47bd",
+ "deep-purple": "#7e56c2",
+ "indigo": "#4051b5",
+ "blue": "#2094f3",
+ "light-blue": "#02a6f2",
+ "cyan": "#00bdd6",
+ "teal": "#009485",
+ "green": "#4cae4f",
+ "light-green": "#8bc34b",
+ "lime": "#cbdc38",
+ "yellow": "#ffec3d",
+ "amber": "#ffc105",
+ "orange": "#ffa724",
+ "deep-orange": "#ff6e42",
+ "brown": "#795649",
+ "grey": "#757575",
+ "blue-grey": "#546d78",
+ "black": "#000000",
+ "white": "#ffffff"
+ }[primary] or "#4051b5" }}
+ {%- endif -%}
+
+ # Text color (default: white)
+ - &color >-
+ {%- if layout.color -%}
+ {{ layout.color }}
+ {%- else -%}
+ {%- set palette = config.theme.palette or {} -%}
+ {%- if not palette is mapping -%}
+ {%- set palette = palette | first -%}
+ {%- endif -%}
+ {%- set primary = palette.get("primary", "indigo") -%}
+ {%- set primary = primary.replace(" ", "-") -%}
+ {{ {
+ "red": "#ffffff",
+ "pink": "#ffffff",
+ "purple": "#ffffff",
+ "deep-purple": "#ffffff",
+ "indigo": "#ffffff",
+ "blue": "#ffffff",
+ "light-blue": "#ffffff",
+ "cyan": "#ffffff",
+ "teal": "#ffffff",
+ "green": "#ffffff",
+ "light-green": "#ffffff",
+ "lime": "#000000",
+ "yellow": "#000000",
+ "amber": "#000000",
+ "orange": "#000000",
+ "deep-orange": "#ffffff",
+ "brown": "#ffffff",
+ "grey": "#ffffff",
+ "blue-grey": "#ffffff",
+ "black": "#ffffff",
+ "white": "#000000"
+ }[primary] or "#ffffff" }}
+ {%- endif -%}
+
+ # Font family (default: Roboto)
+ - &font_family >-
+ {%- if layout.font_family -%}
+ {{ layout.font_family }}
+ {%- elif config.theme.font != false -%}
+ {{ config.theme.font.get("text", "Roboto") }}
+ {%- else -%}
+ Roboto
+ {%- endif -%}
+
+ # Site name
+ - &site_name >-
+ {{ config.site_name }}
+
+ # Page title
+ - &page_title >-
+ {%- if layout.title -%}
+ {{ layout.title }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page title with site name
+ - &page_title_with_site_name >-
+ {%- if not page.is_homepage -%}
+ {{ page.meta.get("title", page.title) }} - {{ config.site_name }}
+ {%- else -%}
+ {{ page.meta.get("title", page.title) }}
+ {%- endif -%}
+
+ # Page description
+ - &page_description >-
+ {%- if layout.description -%}
+ {{ layout.description }}
+ {%- else -%}
+ {{ page.meta.get("description", config.site_description) | x }}
+ {%- endif -%}
+
+ # Page icon
+ - &page_icon >-
+ {{ page.meta.icon | x }}
+
+ # Logo
+ - &logo >-
+ {%- if layout.logo -%}
+ {{ layout.logo }}
+ {%- elif config.theme.logo -%}
+ {{ config.docs_dir }}/{{ config.theme.logo }}
+ {%- endif -%}
+
+ # Logo (icon)
+ - &logo_icon >-
+ {{ config.theme.icon.logo | x }}
+
+# Meta tags
+tags:
+
+ # Open Graph
+ og:type: website
+ og:title: *page_title_with_site_name
+ og:description: *page_description
+ og:image: "{{ image.url }}"
+ og:image:type: "{{ image.type }}"
+ og:image:width: "{{ image.width }}"
+ og:image:height: "{{ image.height }}"
+ og:url: "{{ page.canonical_url }}"
+
+ # Twitter
+ twitter:card: summary_large_image
+ twitter:title: *page_title_with_site_name
+ twitter:description: *page_description
+ twitter:image: "{{ image.url }}"
+
+# -----------------------------------------------------------------------------
+# Specification
+# -----------------------------------------------------------------------------
+
+# Card size and layers
+size: { width: 1200, height: 630 }
+layers:
+
+ # Background
+ - background:
+ image: *background_image
+ color: *background_color
+
+ # Page icon
+ - size: { width: 630, height: 630 }
+ offset: { x: 800, y: 0 }
+ icon:
+ value: *page_icon
+ color: "#00000033"
+
+ # Logo
+ - size: { width: 64, height: 64 }
+ offset: { x: 64, y: 64 }
+ background:
+ image: *logo
+ icon:
+ value: *logo_icon
+ color: *color
+
+ # Site name
+ - size: { width: 768, height: 42 }
+ offset: { x: 160, y: 74 }
+ typography:
+ content: *site_name
+ color: *color
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page title
+ - size: { width: 864, height: 256 }
+ offset: { x: 64, y: 192 }
+ typography:
+ content: *page_title
+ align: start
+ color: *color
+ line:
+ amount: 3
+ height: 1.25
+ font:
+ family: *font_family
+ style: Bold
+
+ # Page description
+ - size: { width: 864, height: 64 }
+ offset: { x: 64, y: 512 }
+ typography:
+ content: *page_description
+ align: start
+ color: *color
+ line:
+ amount: 2
+ height: 1.5
+ font:
+ family: *font_family
+ style: Regular
diff --git a/pricing/index.html b/pricing/index.html
new file mode 100644
index 00000000..fc1d3ea0
--- /dev/null
+++ b/pricing/index.html
@@ -0,0 +1,2292 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+ Pricing - Cirrus CI
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
Sometimes configuring your own compute services isn't worth it: they take time and effort to maintain. For such cases, there is a way to use the Cirrus Cloud Clusters for your organization.
+
1 compute credit can be bought for 1 US dollar. Here is how much 1000 minutes of CPU time will cost for different platforms:
+
+
1000 minutes of 1 virtual CPU for Linux platform for 3 compute credits
+
1000 minutes of 1 virtual CPU for FreeBSD platform for 3 compute credits
+
1000 minutes of 1 virtual CPU for Windows platform for 4 compute credits
+
1000 minutes of 1 Apple Silicon CPU for 15 compute credits
+
+
All tasks using compute credits are charged on a per-second basis. A 2-CPU Linux task takes 5 minutes? Pay 3 cents.
+
Note: orchestration costs are included in compute credits and there is no need to purchase additional seats on your organization's plan.
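The per-second billing above follows directly from the listed rates. Here is a minimal sketch of the arithmetic (the helper function is illustrative, not an official API; rates are taken from the list above):

```python
# Rates from the pricing list above: compute credits per 1000 CPU-minutes.
CREDITS_PER_1000_CPU_MINUTES = {
    "linux": 3.0,
    "freebsd": 3.0,
    "windows": 4.0,
    "macos": 15.0,  # per Apple Silicon CPU
}

def task_cost_in_credits(platform: str, cpus: float, seconds: float) -> float:
    """Cost of a task billed per second at the listed per-CPU rates."""
    rate_per_cpu_second = CREDITS_PER_1000_CPU_MINUTES[platform] / (1000 * 60)
    return cpus * seconds * rate_per_cpu_second

# A 2-CPU Linux task running for 5 minutes:
cost = task_cost_in_credits("linux", cpus=2, seconds=5 * 60)
print(f"{cost:.2f} credits")  # → 0.03 credits, i.e. about 3 cents
```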
+
+
Works for OSS projects
+
Compute credits can be used for commercial OSS projects to avoid concurrency limits.
+Note that only collaborators of the project will be able to use the organization's compute credits.
+
+
Benefits of this approach:
+
+
Use the same pre-configured infrastructure that we fine-tune and constantly upgrade/improve.
+
No need to configure anything. Let Cirrus CI's team manage and upgrade infrastructure for you.
+
Per-second billing with no additional monthly fees for storage and traffic.
+
Cost efficient for small to medium teams.
+
+
Cons of this approach:
+
+
No support for exotic use cases like GPUs, SSDs and 100+ core machines.
Compute credits can be used with any of the following instance types: container, arm_container, windows_container and macos_instance.
+No additional configuration is needed.
+
+
+
+
task:
+  container:
+    image: node:latest
+  ...
+
+
+
+
task:
+  arm_container:
+    image: node:latest
+  ...
+
+
+
+
+
+
Using compute credits for public or personal private repositories
+
If you are willing to boost Cirrus CI for public or your personal private repositories, you need to explicitly mark a task to use compute credits
+with the use_compute_credits field.
+
Here is an example of how to enable compute credits for internal and external collaborators of a public repository:
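A minimal sketch of such a task (the condition is an assumption for illustration; CIRRUS_USER_COLLABORATOR is taken to be 'true' for internal and external collaborators of the repository):

```yaml
task:
  # Assumed variable: CIRRUS_USER_COLLABORATOR is 'true' for collaborators.
  use_compute_credits: $CIRRUS_USER_COLLABORATOR == 'true'
  script: yarn run test
```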
Here is another example of how to enable compute credits for the master branch of a personal private project to make sure
+all of the master builds are executed as fast as possible by skipping free usage limits:
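A minimal sketch of such a configuration (assuming CIRRUS_BRANCH holds the branch name of the current build):

```yaml
task:
  # Assumed variable: CIRRUS_BRANCH holds the current branch name.
  use_compute_credits: $CIRRUS_BRANCH == 'master'
  script: yarn run test
```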
A seat is a user that initiates CI builds by pushing commits and/or creating pull requests in a private repository.
+It can be a real person or a bot. If you are using Cron Builds or creating builds through Cirrus's API,
+that will be counted as an additional seat (like a bot).
+
For example, if there are 10 people in your GitHub Organization and only 5 of them are working on private repositories
+where Cirrus CI is configured, the remaining 5 people are not counted as seats, given that they aren't pushing to the private repository.
+Let's say Dependabot is also configured for these private repositories.
+
In that case there are 5 + 1 = 6 seats you need to purchase a Cirrus CI plan for.
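The counting rule above can be sketched as follows (the helper is purely illustrative, not part of any Cirrus CI API):

```python
def billable_seats(initiating_users: set, bots: set) -> int:
    """Each human or bot that initiates builds in private repos is one seat."""
    # A union avoids double-counting anyone who appears in both sets.
    return len(initiating_users | bots)

# 5 people push to private repos; Dependabot also creates builds there:
print(billable_seats({"alice", "bob", "carol", "dan", "eve"}, {"dependabot"}))  # → 6
```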
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/robots.txt b/robots.txt
new file mode 100644
index 00000000..ee36893e
--- /dev/null
+++ b/robots.txt
@@ -0,0 +1,4 @@
+User-agent: *
+Allow: /
+Disallow:
+Sitemap: https://cirrus-ci.org/sitemap.xml
diff --git a/search/search_index.json b/search/search_index.json
new file mode 100644
index 00000000..1ce97879
--- /dev/null
+++ b/search/search_index.json
@@ -0,0 +1 @@
+{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"],"fields":{"title":{"boost":1000.0},"text":{"boost":1.0},"tags":{"boost":1000000.0}}},"docs":[{"location":"api/","title":"Cirrus CI API","text":"
Cirrus CI exposes GraphQL API for integrators to use through https://api.cirrus-ci.com/graphql endpoint. Please check Cirrus CI GraphQL Schema for a full list of available types and methods. Or check built-in interactive GraphQL Explorer. Here is an example of how to get a build for a particular SHA of a given repository:
In order for a tool to access Cirrus CI API, an organization admin should generate an access token through Cirrus CI settings page for a corresponding organization. Here is a direct link to the settings page: https://cirrus-ci.com/settings/github/<ORGANIZATION>. An access token will allow full write and read access to both public and private repositories of your organization on Cirrus CI: it will be possible to create new builds and perform any other GraphQL mutations. It is also possible to generate scoped access tokens for a subset of repositories to perform read or write operations on the same settings page.
Note that if you only need read access to public repositories of your organization you can skip this step and don't provide Authorization header.
Once an access token is generated and securely stored, it can be used to authorize API requests by setting Authorization header to Bearer $TOKEN.
User API Token Permission Scope
It is also possible to generate API tokens for personal accounts but they will be scoped only to access personal public and private repositories of a particular user. It won't be possible to access private repositories of an organization, even if the user has access to them.
"},{"location":"api/#webhooks","title":"WebHooks","text":""},{"location":"api/#builds-and-tasks-webhooks","title":"Builds and Tasks WebHooks","text":"
It is possible to subscribe for updates of builds and tasks. If a WebHook URL is configured on Cirrus CI Settings page for an organization, Cirrus CI will try to POST a webhook event payload to this URL.
POST request will contain X-Cirrus-Event header to specify if the update was made to a build or a task. The event payload itself is pretty basic:
{\n\"action\": \"created\" | \"updated\",\n\"old_status\": \"FAILED\", # optional field for \"updated\" action in case a task or a build transitioned from one status to another\n\"data\": ...\n}\n
data field will be populated by executing the following GraphQL query:
In addition to updates to builds and tasks, Cirrus CI will also send audit_event events to the configured WebHook URL. action for these audit events will be always \"create\" and the data field will contain the following GraphQL fragment for the particular audit event:
fragment AuditEventWebhookPayload on AuditEventType {\nid\ntype\ntimestamp\ndata\nactor {\nid\n}\nrepository {\nid\nowner\nname\nisPrivate\n}\n}\n
Here is an example of an audit event for when a user re-ran a task with an attached Terminal within your organization:
Imagine you've been given a https://example.com/webhook endpoint by your administrator, and for some reason there's no easy way to change that. This kind of URL is easily discoverable on the internet, and an attacker can take advantage of this by sending requests to this URL, thus pretending to be the Cirrus CI.
To avoid such situations, set the secret token in the repository settings, and then validate the X-Cirrus-Signature for each WebHook request.
Once configured, the secret token and the request's body are fed into the HMAC algorithm to generate the X-Cirrus-Signature for each request coming from the Cirrus CI.
Missing X-Cirrus-Signature header
When secret token is configured in the repository settings, all WebHook requests will contain the X-Cirrus-Signature-Header. Make sure to assert the presence of X-Cirrus-Signature-Header header and correctness of its value in your validation code.
Using HMAC is pretty straightforward in many languages, here's an example of how to validate the X-Cirrus-Signature using Python's hmac module:
Cirrus CI has a set of Docker images ready for Android development. If these images are not the right fit for your project you can always use any custom Docker image with Cirrus CI. For those images .cirrus.yml configuration file can look like:
The Cirrus CI annotator supports providing inline reports on PRs and can parse Android Lint reports. Here is an example of an Android Lint task that you can add to your .cirrus.yml:
Our Docker images with Flutter and Dart SDK pre-installed have special *-web tags with Chromium pre-installed. You can use these tags to run Flutter Web
First define a new chromium platform in your dart_test.yaml:
The best way to test Go projects is by using official Go Docker images. Here is an example of how .cirrus.yml can look like for a project using Go Modules:
amd64arm64
container:\nimage: golang:latest\ntest_task:\nmodules_cache:\nfingerprint_script: cat go.sum\nfolder: $GOPATH/pkg/mod\nget_script: go get ./...\nbuild_script: go build ./...\ntest_script: go test ./...\n
arm_container:\nimage: golang:latest\ntest_task:\nmodules_cache:\nfingerprint_script: cat go.sum\nfolder: $GOPATH/pkg/mod\nget_script: go get ./...\nbuild_script: go build ./...\ntest_script: go test ./...\n
We highly recommend to configure some sort of linting for your Go project. One of the options is GolangCI Lint. The Cirrus CI annotator supports providing inline reports on PRs and can parse GolangCI Lint reports. Here is an example of a GolangCI Lint task that you can add to your .cirrus.yml:
We recommend use of the official Gradle Docker containers since they have Gradle specific configurations already set up. For example, standard Java containers don't have a pre-configured user and as a result don't have HOME environment variable presented which makes Gradle complain.
To preserve caches between Gradle runs, add a cache instruction as shown below. The trick here is to clean up ~/.gradle/caches folder in the very end of a build. Gradle creates some unique nondeterministic files in ~/.gradle/caches folder on every run which makes Cirrus CI re-upload the cache every time. This way, you get faster builds!
If your project uses a buildSrc directory, the build cache configuration should also be applied to buildSrc/settings.gradle.
To do this, put the build cache configuration above into a separate gradle/buildCacheSettings.gradle file, then apply it to both your settings.gradle and buildSrc/settings.gradle.
In settings.gradle:
apply from: new File(settingsDir, 'gradle/buildCacheSettings.gradle')\n
In buildSrc/settings.gradle:
apply from: new File(settingsDir, '../gradle/buildCacheSettings.gradle')\n
Please make sure you are running Gradle commands with --build-cache flag or have org.gradle.caching enabled in gradle.properties file. Here is an example of a gradle.properties file that we use internally for all Gradle projects:
Here is a .cirrus.yml that, parses and uploads JUnit reports at the end of the build:
junit_test_task:\njunit_script: <replace this comment with instructions to run the test suites>\nalways:\njunit_result_artifacts:\npath: \"**/test-results/**.xml\"\nformat: junit\ntype: text/xml\n
If it is running on a pull request, annotations will also be displayed in-line.
The Additional Containers feature makes it super simple to run the same Docker MySQL image as you might be running in production for your application. Getting a running instance of the latest GA version of MySQL can used with the following six lines in your .cirrus.yml:
Yarn 2 (also known as Yarn Berry), has a different package cache location (.yarn/cache). To run tests, it would look like this:
amd64arm64
container:\nimage: node:latest\ntest_task:\nyarn_cache:\nfolder: .yarn/cache\nfingerprint_script: cat yarn.lock\ninstall_script:\n- yarn set version berry\n- yarn install\ntest_script: yarn run test\n
arm_container:\nimage: node:latest\ntest_task:\nyarn_cache:\nfolder: .yarn/cache\nfingerprint_script: cat yarn.lock\ninstall_script:\n- yarn set version berry\n- yarn install\ntest_script: yarn run test\n
ESLint reports are supported by Cirrus CI Annotations. This way you can see all the linting issues without leaving the pull request you are reviewing! You'll need to generate an ESLint report file (for example, eslint.json) in one of your task's scripts. Then save it as an artifact in eslint format:
Official Python Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed packages based on contents of requirements.txt and runs pytest:
Python Unittest reports are supported by Cirrus CI Annotations. This way you can see what tests are failing without leaving the pull request you are reviewing! Here is an example of a .cirrus.yml that produces and stores Unittest reports:
amd64arm64
unittest_task:\ncontainer:\nimage: python:slim\ninstall_dependencies_script: |\npip3 install unittest_xml_reporting\nrun_tests_script: python3 -m xmlrunner tests\n# replace 'tests' with the module,\n# unittest.TestCase, or unittest.TestSuite\n# that the tests are in\nalways:\nupload_results_artifacts:\npath: ./*.xml\nformat: junit\ntype: text/xml\n
unittest_task:\narm_container:\nimage: python:slim\ninstall_dependencies_script: |\npip3 install unittest_xml_reporting\nrun_tests_script: python3 -m xmlrunner tests\n# replace 'tests' with the module,\n# unittest.TestCase, or unittest.TestSuite\n# that the tests are in\nalways:\nupload_results_artifacts:\npath: ./*.xml\nformat: junit\ntype: text/xml\n
Now you should get annotations for your test results.
Qodana by JetBrains is a code quality monitoring tool that identifies and suggests fixes for bugs, security vulnerabilities, duplications, and imperfections. It brings all the smart features you love in the JetBrains IDEs.
Here is an example of .cirrus.yml configuration file which will save Qodana's report as an artifact, will parse it and report as annotations:
Cirrus CI doesn't provide a built-in functionality to upload artifacts on a GitHub release but this functionality can be added via a script. For a release, Cirrus CI will provide CIRRUS_RELEASE environment variable along with CIRRUS_TAG environment variable. CIRRUS_RELEASE indicates release id which can be used to upload assets.
Cirrus CI only requires write access to Check API and doesn't require write access to repository contents because of security reasons. That's why you need to create a personal access token with full access to repo scope. Once an access token is created, please create an encrypted variable from it and save it to .cirrus.yml:
env:\nGITHUB_TOKEN: ENCRYPTED[qwerty]\n
Now you can use a script to upload your assets:
#!/usr/bin/env bash\nif [[ \"$CIRRUS_RELEASE\" == \"\" ]]; then\necho \"Not a release. No need to deploy!\"\nexit 0\nfi\nif [[ \"$GITHUB_TOKEN\" == \"\" ]]; then\necho \"Please provide GitHub access token via GITHUB_TOKEN environment variable!\"\nexit 1\nfi\nfile_content_type=\"application/octet-stream\"\nfiles_to_upload=(\n# relative paths of assets to upload\n)\nfor fpath in $files_to_upload\ndo\necho \"Uploading $fpath...\"\nname=$(basename \"$fpath\")\nurl_to_upload=\"https://uploads.github.com/repos/$CIRRUS_REPO_FULL_NAME/releases/$CIRRUS_RELEASE/assets?name=$name\"\ncurl -X POST \\\n--data-binary @$fpath \\\n--header \"Authorization: token $GITHUB_TOKEN\" \\\n--header \"Content-Type: $file_content_type\" \\\n$url_to_upload\ndone\n
Official Ruby Docker images can be used for builds. Here is an example of a .cirrus.yml that caches installed gems based on Ruby version, contents of Gemfile.lock, and runs rspec:
When you are not committing Gemfile.lock (in Ruby gems repositories, for example) you can run bundle install (or bundle update) in install_script instead of populate_script in bundle_cache. Cirrus Agent is clever enough to re-upload cache entry only if cached folder has been changed during task execution. Here is an example of a .cirrus.yml that always runs bundle install:
amd64arm64
container:\nimage: ruby:latest\nrspec_task:\nbundle_cache:\nfolder: /usr/local/bundle\nfingerprint_script:\n- echo $RUBY_VERSION\n- cat Gemfile\n- cat *.gemspec\ninstall_script: bundle install # or `update` for the freshest bundle\nrspec_script: bundle exec rspec\n
arm_container:\nimage: ruby:latest\nrspec_task:\nbundle_cache:\nfolder: /usr/local/bundle\nfingerprint_script:\n- echo $RUBY_VERSION\n- cat Gemfile\n- cat *.gemspec\ninstall_script: bundle install # or `update` for the freshest bundle\nrspec_script: bundle exec rspec\n
Test Parallelization
It's super easy to add intelligent test splitting by using Knapsack Pro and matrix modification. After setting up Knapsack Pro gem, you can add sharding like this:
Official Rust Docker images can be used for builds. Here is a basic example of .cirrus.yml that caches crates in $CARGO_HOME based on contents of Cargo.lock:
Please note the before_cache_script that removes the registry index from the cache before uploading it at the end of a successful task. The registry index changes very rapidly, making the cache invalid. before_cache_script deletes the index and leaves only the required crates for caching.
It is possible to use nightly builds of Rust via an official rustlang/rust:nightly container. Here is an example of a .cirrus.yml to run tests against the latest stable and nightly versions of Rust:
Vanilla FreeBSD VMs don't set some environment variables required by Cargo for effective caching. Specifying the HOME environment variable to some arbitrary location should fix caching:
XCLogParser is a CLI tool that parses Xcode and xcodebuild's logs (xcactivitylog files) and produces reports in different formats.
Here is an example of .cirrus.yml configuration file which will save XCLogParser's flat JSON report as an artifact, will parse it and report as annotations:
"},{"location":"faq/","title":"Frequently Asked Questions","text":""},{"location":"faq/#what-are-the-ip-addresses-of-cirrus-ci","title":"What are the IP addresses of Cirrus CI?","text":"
Cirrus CI control plane uses three IP addresses:
34.117.12.6 - IP address of the Cirrus CI API and all *.cirrus-ci.com domains.
34.27.109.83 - IP address for egress connections when evaluating Starlark configuration files.
34.28.114.255 - IP addresses for egress connections that Cirrus CI uses to access APIs, deliver webhook events, etc.
"},{"location":"faq/#is-cirrus-ci-a-delivery-platform","title":"Is Cirrus CI a delivery platform?","text":"
Cirrus CI is not positioned as a delivery platform but can be used as one for many general use cases by having Dependencies between tasks and using Conditional Task Execution or Manual Tasks:
lint_task:\nscript: yarn run lint\ntest_task:\nscript: yarn run test\npublish_task:\nonly_if: $BRANCH == 'master'\ntrigger_type: manual\ndepends_on: - test\n- lint\nscript: yarn run publish\n
"},{"location":"faq/#are-there-any-limits","title":"Are there any limits?","text":"
Cirrus CI has the following limitations on how many CPUs for different platforms a single user can run on Cirrus Cloud Clusters for public repositories for free:
16.0 CPUs for Linux platform (Containers or VMs).
16.0 CPUs for Arm Linux platform (Containers).
8.0 CPUs for Windows platform (Containers or VMs)
8.0 CPUs for FreeBSD VMs.
4.0 CPUs macOS VM (1 VM).
Note that a single task can't request more than 8 CPUs (except macOS VMs which are not configurable).
Monthly CPU Minutes Limit
Additionally there is an upper monthly limit on free usage equal to 50 compute credits (which is equal to 10,000 CPU-minutes for Linux tasks or 500 minutes for macOS tasks which always use 4 CPUs).
If you are using Cirrus CI with your private personal repositories under the $10/month plan you'll have twice the limits:
32.0 CPUs for Linux platform (Containers or VMs).
16.0 CPUs for Windows platform (Containers or VMs)
16.0 CPUs for FreeBSD VMs.
8.0 CPUs macOS VM (2 VMs).
There are no limits on how many VMs or Containers you can run in parallel if you bring your own infrastructure or use Compute Credits for either private or public repositories.
Cache and Logs Redundancy
By default Cirrus CI persists caches and logs for 90 days. If you bring your own compute services this period can be configured directly in your cloud provider's console.
"},{"location":"faq/#repository-is-blocked","title":"Repository is blocked","text":"
Free tier of Cirrus CI is intended for public OSS projects to run tests and other validations continuously. If your repository is configured to use Cirrus CI in a questionable way to just exploit Cirrus CI infrastructure, your repository might be blocked.
Here are a few examples of such questionable activities we've seen so far:
Use Cirrus CI as a powerhouse for arbitrary CPU-intensive calculations (including crypto mining).
Use Cirrus CI to download a pirated movie, re-encode it, upload as a Cirrus artifact and distribute it.
Use Cirrus CI distributed infrastructure to emulate user activity on a variety of websites to trick advertisers.
"},{"location":"faq/#ip-addresses-of-cirrus-cloud-clusters","title":"IP Addresses of Cirrus Cloud Clusters","text":"
Instances running on Cirrus Cloud Clusters are using dynamic IPs by default. It's possible to request a static 35.222.255.190 IP for all the \"managed-by-us\" instance types except macOS VMs via use_static_ip field. Here is an example of a Linux Docker container with a static IP:
task:\nname: Test IP\ncontainer:\nimage: cirrusci/wget:latest\nuse_static_ip: true\nscript: wget -qO- ifconfig.co\n
It means that Cirrus CI hasn't heard from the agent for quite some time. In 99.999% of the cases it happens because of one of two reasons:
Your task was executing on a Cirrus Cloud Cluster. Cirrus Cloud Cluster is backed by Google Cloud's Spot VMs for cost efficiency reasons and Google Cloud preempted back a VM your task was executing on. Cirrus CI is trying to minimize possibility of such cases by constantly rotating VMs before Google Cloud preempts them, but there is still chance of such inconvenience.
Your CI task used too much memory which led to a crash of a VM or a container.
"},{"location":"faq/#agent-process-on-a-persistent-worker-exited-unexpectedly","title":"Agent process on a persistent worker exited unexpectedly!","text":"
This means that either an agent process or a VM with an agent process exited before reporting the last instruction of a task.
If it's happening for a macos_instance then please contact support.
"},{"location":"faq/#instance-failed-to-start","title":"Instance failed to start!","text":"
It means that Cirrus CI has made a successful API call to a computing service to allocate resources. But a requested resource wasn't created.
If it happened for an OSS project, please contact support immediately. Otherwise check your cloud console first and then contact support if it's still not clear what happened.
Cirrus CI tries to be as efficient as possible and heavily uses spot VMs to run the majority of workloads. This drastically lowers Cirrus CI's infrastructure bill and makes it possible to provide the best pricing model with per-second billing and very generous limits for OSS projects, but it comes with a rare edge case...
Spot VMs can be preempted, which requires rescheduling and automatically restarting the tasks that were executing on them. This is a rare event, since the autoscaler constantly rotates instances, but preemption still happens occasionally. All automatic re-runs and stateful tasks using compute credits are always executed on regular VMs.
By default, Cirrus CI has an execution limit of 60 minutes for each task. However, this default timeout duration can be changed by using the timeout_in field in the .cirrus.yml configuration file:
task:\ntimeout_in: 90m\n...\n
Maximum timeout
There is a hard limit of 2 hours for free tasks. Use compute credits or compute service integration to avoid the limit.
It means that Cirrus CI has made a successful API call to a computing service to start a container, but unfortunately the container runtime or the corresponding computing service had an internal error.
Cirrus CI itself doesn't provide any discounts except Cirrus Cloud Cluster which is free for open source projects. But since Cirrus CI delegates execution of builds to different computing services, it means that discounts from your cloud provider will be applied to Cirrus CI builds.
"},{"location":"features/","title":"Features","text":""},{"location":"features/#free-for-open-source","title":"Free for Open Source","text":"
To support the Open Source community, Cirrus CI provides Linux, Windows, macOS and FreeBSD services free of charge up to a cap of 50 compute credits a month to OSS projects.
Here is a list of all instance types available for free for Open Source Projects:
| Instance Type | Managed by | Description |
| --- | --- | --- |
| container | us | Linux Docker Container |
| arm_container | us | Linux Arm Docker Container |
| windows_container | us | Windows Docker Container |
| docker_builder | us | Full-fledged VM pre-configured for running Docker |
| macos_instance | us | macOS Virtual Machines |
| freebsd_instance | us | FreeBSD Virtual Machines |
| compute_engine_instance | us | Full-fledged custom VM |
| persistent_worker | you | Use any host on any platform and architecture |
"},{"location":"features/#per-second-billing","title":"Per-second billing","text":"
Use compute credits to run as many parallel tasks as you want and pay only for CPU time used by these tasks. Another approach is to bring your own infrastructure and pay directly to your cloud provider within your current billing.
"},{"location":"features/#no-concurrency-limit-no-queues","title":"No concurrency limit. No queues","text":"
Cirrus CI leverages elasticity of the modern clouds to always have available resources to process your builds. Engineers should never wait for builds to start.
"},{"location":"features/#bring-your-own-infrastructure","title":"Bring Your Own Infrastructure","text":"
Cirrus CI supports bringing your own infrastructure (BYO) for full control over security and for easy integration with your current workflow.
Cirrus CI is free for Open Source projects with some limitations. For private projects, Cirrus CI has a couple of options depending on your needs:
For private personal repositories there is a very affordable $10 a month plan with access to Cirrus Cloud Clusters for Linux, Windows and macOS workloads.
Buy compute credits to access managed and pre-configured Cirrus Cloud Clusters for Linux, FreeBSD, Windows, and macOS workloads.
Configure access to your own infrastructure and pay $10/seat/month.
Here is a comparison table of available Cirrus CI plans:
Person:

- Free public repositories:
    - Free access to Cirrus Cloud Clusters for public repositories
    - Bring your own infrastructure for public repositories
    - Configure persistent workers for public repositories
- Private personal repository:
    - Access to community clusters for public and private repositories
    - Bring your own infrastructure for public and private repositories
    - Configure persistent workers for public and private repositories
- Private organization repositories: Not Applicable

Organization:

- Free public repositories:
    - Free access to Cirrus Cloud Clusters for public repositories
    - Use compute credits to access community clusters for private repositories and/or to avoid the limits on public repositories
    - Bring your own infrastructure for public repositories
    - Configure persistent workers for public repositories
- Private personal repository: Not Applicable
- Private organization repositories:
    - Free access to community clusters for public repositories
    - Use compute credits to access community clusters for private repositories and/or to avoid the limits on public repositories
    - Bring your own infrastructure for public and private repositories
    - Configure persistent workers for public and private repositories
Sometimes configuring your own compute services isn't worth it. It takes time and effort to maintain them. For such cases there is a way to use the Cirrus Cloud Clusters for your organization.
1 compute credit can be bought for 1 US dollar. Here is how much 1000 minutes of CPU time will cost for different platforms:
1000 minutes of 1 virtual CPU for Linux platform for 3 compute credits
1000 minutes of 1 virtual CPU for FreeBSD platform for 3 compute credits
1000 minutes of 1 virtual CPU for Windows platform for 4 compute credits
1000 minutes of 1 Apple Silicon CPU for 15 compute credits
All tasks using compute credits are charged on a per-second basis. A 2-CPU Linux task takes 5 minutes? Pay 3 cents.
Note: orchestration costs are included in compute credits and there is no need to purchase additional seats on your organization's plan.
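As a quick sanity check of the rates above, the cost of a task can be estimated from its CPU count, duration, and the per-platform rate (a sketch; the variable names are illustrative):

```shell
# Estimate the cost of a 2-CPU Linux task that runs for 5 minutes.
# Linux rate: 3 compute credits per 1000 minutes of 1 virtual CPU.
cpus=2
minutes=5
rate_per_1000_cpu_minutes=3
# 1 compute credit costs 1 US dollar, i.e. 100 cents.
cost_cents=$(( cpus * minutes * rate_per_1000_cpu_minutes * 100 / 1000 ))
echo "${cost_cents} cents"  # → 3 cents
```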
Works for OSS projects
Compute credits can be used for commercial OSS projects to avoid concurrency limits. Note that only collaborators of the project will be able to use the organization's compute credits.
Benefits of this approach:
Use the same pre-configured infrastructure that we fine tune and constantly upgrade/improve.
No need to configure anything. Let Cirrus CI's team manage and upgrade infrastructure for you.
Per-second billing with no additional monthly fees for storage and traffic.
Cost efficient for small to medium teams.
Cons of this approach:
No support for exotic use cases like GPUs, SSDs and 100+ cores machines.
Compute credits can be used with any of the following instance types: container, windows_container and macos_instance. No additional configuration needed.
amd64arm64
task:\ncontainer:\nimage: node:latest\n...\n
task:\narm_container:\nimage: node:latest\n...\n
Using compute credits for public or personal private repositories
If you are willing to boost Cirrus CI for public or your personal private repositories, you need to explicitly mark a task to use compute credits with the use_compute_credits field.
Here is an example of how to enable compute credits for internal and external collaborators of a public repository:
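One way this can look (a sketch; use_compute_credits accepts a boolean expression over environment variables such as CIRRUS_USER_COLLABORATOR):

```yaml
task:
  use_compute_credits: $CIRRUS_USER_COLLABORATOR == 'true'
```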
Here is another example of how to enable compute credits for the master branch of a personal private project to make sure all master builds are executed as fast as possible by skipping free usage limits:
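A sketch of such a branch-scoped configuration, using the CIRRUS_BRANCH environment variable:

```yaml
task:
  use_compute_credits: $CIRRUS_BRANCH == 'master'
```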
Configure and connect one or more compute services and/or persistent workers to Cirrus CI for orchestrating CI workloads on them. It's free for your public repositories and costs $10/seat/month to use with private repositories unless your organization has Priority Support Subscription.
Benefits of this approach:
Full control of underlying infrastructure. Use any type of VMs and containers with any amount of CPUs and memory.
More secure. Setup any firewall and access rules.
Pay for CI within your existing cloud and GitHub bills.
Cons of this approach:
Need to configure and connect one or several compute services.
Might not be worth the effort for a small team.
Need to pay $10/seat/month plan.
What is a seat?
A seat is a user that initiates CI builds by pushing commits and/or creating pull requests in a private repository. It can be a real person or a bot. If you are using Cron Builds or creating builds through Cirrus's API it will be counted as an additional seat (like a bot).
For example, if there are 10 people in your GitHub Organization and only 5 of them are working on private repositories where Cirrus CI is configured, the remaining 5 people are not counted as seats, given that they aren't pushing to the private repository. Let's say Dependabot is also configured for these private repositories.
In that case there are 5 + 1 = 6 seats you need to purchase Cirrus CI plan for.
"},{"location":"security/","title":"Security Policy","text":""},{"location":"security/#reporting-a-vulnerability","title":"Reporting a Vulnerability","text":"
If you find a security vulnerability in the Cirrus CI platform (the backend, web interface, etc.), please follow the steps below.
Do NOT comment about the vulnerability publicly.
Please email hello@cirruslabs.org with the following format:
Subject: Platform Security Risk\n\nHOW TO EXPLOIT\n\nGive exact details so our team can replicate it.\n\nOTHER INFORMATION\n\nIf anything else needs to be said, put it here.\n
Please be patient. You will get an email back soon.
The best way to ask general questions about particular use cases is to email our support team at support+ci@cirruslabs.org. Our support team tries its best to respond ASAP, but there is no guarantee on response time unless your organization enrolls in Priority Support.
If you have a feature request or noticed a lack of documentation, please feel free to create a GitHub issue. Our support team will answer it by replying to the issue or by updating the documentation.
In addition to the general support, we provide a Priority Support option with guaranteed response times. But most importantly, we'll be doing regular check-ins to make sure the roadmap for Cirrus CI and other services/software under the cirruslabs organization is aligned with your company's needs. You'll be helping to shape the future of software developed by Cirrus Labs!
| Severity | Support Impact | First Response Time | SLA Hours | How to Submit |
| --- | --- | --- | --- | --- |
| 1 | Emergency (service is unavailable or completely unusable) | 30 minutes | 24x7 | Please use the urgent email address |
| 2 | Highly Degraded (important features unavailable or extremely slow; no acceptable workaround) | 4 hours | 24x5 | Please use the priority email address |
| 3 | Medium Impact | 8 hours | 24x5 | Please use the priority email address |
| 4 | Low Impact | 24 hours | 24x5 | Please use the regular support email address |

Make sure to send the email from your corporate email.
24x5 means the period of time from 9AM on Monday till 5PM on Friday in the EST timezone.
Support Impact Definitions
Severity 1 - Cirrus CI or other services is unavailable or completely unusable. An urgent issue can be filed and our On-Call Support Engineer will respond within 30 minutes. Example: Cirrus CI showing 502 errors for all users.
Severity 2 - Cirrus CI or other services is Highly Degraded. Significant Business Impact. Important Cirrus CI features are unavailable or extremely slowed, with no acceptable workaround.
Severity 3 - Something is preventing normal service operation. Some Business Impact. Important features of Cirrus CI or other services are unavailable or somewhat slowed, but a workaround is available. Cirrus CI use has a minor loss of operational functionality.
Severity 4 - Questions or Clarifications around features or documentation. Minimal or no Business Impact. Information, an enhancement, or documentation clarification is requested, but there is no impact on the operation of Cirrus CI or other services/software.
How to submit a priority or an urgent issue
Once your organization signs the Priority Support Subscription contract, members of your organization will get access to separate support emails specified in your subscription contract.
"},{"location":"support/#priority-support-pricing","title":"Priority Support Pricing","text":"
As a company grows, its engineering team tends to accumulate knowledge of operating and working with Cirrus CI and other services/software provided by Cirrus Labs, so less effort is needed on our side to support each new seat. On the other hand, Cirrus CI allows bringing your own infrastructure, which increases the complexity of support. As a result, we reflected these challenges in a tiered pricing model based on seat amount and the type of infrastructure used:
| Seat Amount | Only managed-by-us instance types | Bring your own infrastructure |
| --- | --- | --- |
| 20-100 | $60/seat/month | $100/seat/month |
| 101-300 | $45/seat/month | $75/seat/month |
| 301-500 | $30/seat/month | $50/seat/month |
| 500+ | $15/seat/month | $25/seat/month |
Note that Priority Support Subscription requires a purchase of a minimum of 20 seats even if some of them will be unused.
What is a seat?
A seat is a user that initiates CI builds by pushing commits and/or creating pull requests in a private repository. It can be a real person or a bot. If you are using Cron Builds or creating builds through Cirrus's API it will be counted as an additional seat (like a bot).
If you'd like to get priority support for your public repositories, then the amount of seats will be equal to the amount of members in your organization.
"},{"location":"support/#how-to-purchase-priority-support-subscription","title":"How to purchase Priority Support Subscription","text":"
Please email sales@cirruslabs.org, so we can get a support contract in addition to the TOC. The contract will contain a special priority email address for your organization and other helpful information. The sales team will also schedule a check-in meeting to make sure your engineering team is set up for success and the Cirrus Labs roadmap aligns with your needs.
It is possible to run FreeBSD Virtual Machines the same way one can run Linux containers on the FreeBSD Cloud Cluster. To accomplish this, use freebsd_instance in your .cirrus.yml:
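Such a configuration might look like this (a sketch; the image family is taken from the list below, and the scripts are illustrative):

```yaml
freebsd_instance:
  image_family: freebsd-14-0

task:
  install_script: pkg install -y go
  script: go test ./...
```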
Under the hood, a basic integration with Google Compute Engine is used and freebsd_instance is syntactic sugar for the following compute_engine_instance configuration:
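A rough equivalent, assuming the freebsd-org-cloud-dev project that hosts the official FreeBSD images (a sketch, not the exact internal configuration):

```yaml
compute_engine_instance:
  image_project: freebsd-org-cloud-dev  # GCP project hosting official FreeBSD images
  image: family/freebsd-14-0
  platform: freebsd
```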
"},{"location":"guide/FreeBSD/#list-of-available-image-families","title":"List of available image families","text":"
Any of the official FreeBSD VMs on Google Cloud Platform are supported. Here are a few of them, which are self-explanatory:
freebsd-15-0-snap (15.0-SNAP)
freebsd-14-0 (14.0-RELEASE)
freebsd-13-2 (13.2-RELEASE)
It's also possible to specify a concrete version of an image by name via the image_name field. To get a full list of available images, please run the following gcloud command:
gcloud compute images list --project freebsd-org-cloud-dev --no-standard-images\n
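Pinning a concrete image by name might then look like this (the image name below is hypothetical; pick a real one from the gcloud listing):

```yaml
freebsd_instance:
  image_name: freebsd-14-0-release-amd64  # hypothetical concrete image name
```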
"},{"location":"guide/build-life/","title":"Life of a Build","text":"
Any build starts with a change pushed to GitHub. Since Cirrus CI is a GitHub Application, a webhook event will be triggered by GitHub. From the webhook event, Cirrus CI will parse a Git branch and the SHA for the change. Based on said information, a new build will be created.
After build creation, Cirrus CI will use GitHub's APIs to download the content of the .cirrus.yml file for the SHA. Cirrus CI will evaluate it and create the corresponding tasks.
These tasks (defined in the .cirrus.yml file) will be dispatched within Cirrus CI to different services responsible for scheduling on a supported computing service. Cirrus CI's scheduling service will use the appropriate APIs to create and manage a VM instance or a Docker container on the particular computing service. The scheduling service will also configure a start-up script that downloads the Cirrus CI agent, configures it to send logs back, and starts it. The Cirrus CI agent is a self-contained executable written in Go, which means it can be executed anywhere.
Cirrus CI's agent will request commands to execute for a particular task and will stream back logs, caches, artifacts and exit codes of the commands upon execution. Once the task finishes, the scheduling service will clean up the used VM or container.
This is a diagram of how Cirrus CI schedules a task on Google Cloud Platform. The blue arrows represent API calls and the green arrows represent unidirectional communication between an agent inside a VM or a container and Cirrus CI. Other chores such as health checking of the agent and GitHub status reporting happen in real time as a task is running.
Cirrus CI supports many different compute services when you bring your own infrastructure, but internally at Cirrus Labs we use Google Cloud Platform for running all managed-by-us instances except macos_instance. In fact, things like Docker Builder and freebsd_instance are basically syntactic sugar for launching Compute Engine instances from a particular limited set of images.
With compute_engine_instance it is possible to use any publicly available image for running your Cirrus tasks in. Such instances are particularly useful when you can't use Docker containers, for example, when you need to test things against newer versions of the Linux kernel than the Docker host has.
Here is an example of using a compute_engine_instance to run a VM with KVM available:
compute_engine_instance:\nimage_project: cirrus-images # GCP project.\nimage: family/docker-kvm # family or a full image name.\nplatform: linux\narchitecture: arm64 # optional. By default, amd64 is assumed.\ncpu: 4 # optional. Defaults to 2 CPUs.\nmemory: 16G # optional. Defaults to 4G.\ndisk: 100 # optional. By default, uses the smallest disk size required by the image.\nnested_virtualization: true # optional. Whether to enable Intel VT-x. Defaults to false.\n
Nested Virtualization License
Make sure that your source image already has a necessary license. Otherwise, nested virtualization won't work.
"},{"location":"guide/custom-vms/#building-custom-image-for-compute-engine","title":"Building custom image for Compute Engine","text":"
We recommend using Packer for building your custom images. As an example, please take a look at our Packer templates used for building the Docker Builder VM image.
After building your image, please make sure the image is publicly available:
"},{"location":"guide/docker-builder-vm/","title":"Docker Builder on VM","text":""},{"location":"guide/docker-builder-vm/#docker-builder-vm","title":"Docker Builder VM","text":"
\"Docker Builder\" tasks are a way to build and publish Docker Images to Docker Registries of your choice using a VM as build environment. In essence, a docker_builder is basically a task that is executed in a VM with pre-installed Docker. A docker_builder can be defined the same way as a task:
Leveraging features such as Task Dependencies, Conditional Execution and Encrypted Variables with a Docker Builder can help build relatively complex pipelines. It can also be used to execute builds which need special privileges.
In the example below, a docker_builder will only be executed on tag creation, once both test and lint tasks have finished successfully:
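Sketched with only_if and depends_on (the task names and image are illustrative):

```yaml
docker_builder:
  only_if: $CIRRUS_TAG != ''
  depends_on:
    - test
    - lint
  build_script: docker build --tag myrepo/foo:$CIRRUS_TAG .
```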
Docker Builder VM has QEMU pre-installed and is able to execute multi-arch builds via buildx. Add the following setup_script to enable buildx and then use docker buildx build instead of the regular docker build:
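A sketch of such a setup_script (the builder name is arbitrary; the target platforms are an example):

```yaml
docker_builder:
  setup_script:
    - docker buildx create --name multibuilder
    - docker buildx use multibuilder
  build_script: docker buildx build --platform linux/amd64,linux/arm64 --tag myrepo/foo:latest .
```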
For your convenience, a Docker Builder VM has some common packages pre-installed:
AWS CLI
Docker Compose
OpenJDK
Python
Ruby with Bundler
"},{"location":"guide/docker-builder-vm/#under-the-hood","title":"Under the hood","text":"
Under the hood, a simple integration with Google Compute Engine is used and docker_builder is basically syntactic sugar for the following compute_engine_instance configuration:
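Roughly equivalent to the following (a sketch assuming the cirrus-images project and a docker-builder image family; the resource values are illustrative):

```yaml
compute_engine_instance:
  image_project: cirrus-images      # GCP project with Cirrus-managed images
  image: family/docker-builder      # assumed image family with Docker pre-installed
  platform: linux
  cpu: 4
  memory: 16G
```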
Docker has the --cache-from flag which allows using a previously built image as a cache source. This way only changed layers will be rebuilt which can drastically improve performance of the build_script. Here is a snippet that uses the --cache-from flag:
# pull an image if available\ndocker pull myrepo/foo:latest || true\ndocker build --cache-from myrepo/foo:latest \\\n--tag myrepo/foo:$CIRRUS_TAG \\\n--tag myrepo/foo:latest .\n
"},{"location":"guide/docker-builder-vm/#dockerfile-as-a-ci-environment","title":"Dockerfile as a CI environment","text":"
With Docker Builder there is no need to build and push custom containers so they can be used as an environment to run CI tasks in. Cirrus CI can do it for you! Just declare a path to a Dockerfile with the dockerfile field for your container or arm_container declarations in your .cirrus.yml like this:
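For example (the Dockerfile path and the script are illustrative):

```yaml
task:
  container:
    dockerfile: ci/Dockerfile  # path relative to the repository root
  test_script: make test
```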
Cirrus CI will build a container and cache the resulting image based on Dockerfile\u2019s content. On the next build, Cirrus CI will check if a container was already built, and if so, Cirrus CI will instantly start a CI task using the cached image.
Under the hood, for every Dockerfile that is needed to be built, Cirrus CI will create a Docker Builder task as a dependency. You will see such build_docker_image_HASH tasks in the UI.
Danger of using COPY and ADD instructions
Cirrus only includes files directly added or copied into a container image in the cache key. Cirrus does not recursively walk the contents of folders that are included in the image. This means that for a public repository a potential bad actor can create a PR with malicious scripts included in a container, wait for it to be cached, and then reset the PR so it looks harmless.
Please try to only COPY files by full path, e.g.:
FROM python:3\nCOPY requirements.txt /tmp/\nRUN pip install --requirement /tmp/requirements.txt\n
Using with private GKE clusters
To use dockerfile with gke_container you first need to create a VM with Docker installed within your GCP project. This image will be used to perform building of Docker images for caching. Once this image is available, for example, by MY_DOCKER_VM name, you can use it like this:
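A sketch with placeholder cluster details (the builder_image_name field and all names below are assumptions for illustration):

```yaml
task:
  gke_container:
    dockerfile: ci/Dockerfile
    builder_image_name: MY_DOCKER_VM  # VM image with Docker installed in your GCP project
    cluster_name: my-gke-cluster      # placeholder
    zone: us-central1-b               # placeholder
    namespace: default                # placeholder
```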
Please make sure your builder image has gcloud configured as a credential helper.
If your builder image is stored in another project you can also specify it by using builder_image_project field. By default, Cirrus CI assumes builder image is stored within the same project as the GKE cluster.
Using with private EKS clusters
To use dockerfile with eks_container you need three things:
Either create an AMI with Docker installed or use one like ECS-optimized AMIa. For example, MY_DOCKER_AMI.
Create a role which has AmazonEC2ContainerRegistryFullAccess policy. For example, cirrus-builder.
Create a cirrus-cache repository in your Elastic Container Registry and make sure the user that aws_credentials are associated with has ecr:DescribeImages access to it.
Once all of the above requirements are met, you can configure eks_container like this:
eks_container:\nregion: us-east-2\ncluster_name: my-company-arm-cluster\ndockerfile: .ci/Dockerfile\nbuilder_image: MY_DOCKER_AMI\nbuilder_role: cirrus-builder # role for builder instance profile\nbuilder_instance_type: c7g.xlarge # should match the architecture below\nbuilder_subnet_ids: # optional, list of subnets from your default VPC to randomly choose from for scheduling the instance\n- ...\nbuilder_subnet_filters: # optional, map of filters to use for DescribeSubnets API call. Note to make sure Cirrus is given `ec2:DescribeSubnets` \n- name: tag:Name\nvalues:\n- subnet1\n- subnet2\narchitecture: arm64 # default is amd64\n
This will make Cirrus CI check whether the cirrus-cache repository in the us-east-2 region contains a precached image for the .ci/Dockerfile of this repository.
Docker builders also support building Windows Docker containers - use the platform and os_version fields:
docker_builder:\nplatform: windows\n...\n
"},{"location":"guide/docker-builds-on-kubernetes/","title":"Docker Builds on GKE","text":""},{"location":"guide/docker-builds-on-kubernetes/#docker-builds-on-kubernetes","title":"Docker Builds on Kubernetes","text":"
Besides the ability to build docker images using a dedicated docker_builder task which runs on VMs, it is also possible to run docker builds on Kubernetes. To do so we are leveraging the additional_containers and docker-in-docker functionality.
Currently Cirrus CI supports running builds on these Kubernetes distributions:
Google Kubernetes Engine (GKE)
AWS Elastic Kubernetes Service (EKS)
For Generic Kubernetes Support follow this issue.
"},{"location":"guide/docker-builds-on-kubernetes/#comparison-of-docker-builds-on-vms-vs-kubernetes","title":"Comparison of docker builds on VMs vs Kubernetes","text":"
VMs
complex builds are potentially faster than docker-in-docker
safer due to better isolation between builds
Kubernetes
much faster start - creating a new container usually takes a few seconds vs creating a VM, which usually takes about a minute on GCP and even longer on AWS.
ability to use an image with your custom tools (e.g. containing Skaffold) to invoke docker, instead of using a fixed VM image.
This is a full example of how to build a docker image on GKE using docker and push it to GCR. While not required, the script section in this example also has some best-practice cache optimizations.
AWS EKS support
While the steps below are specifically written for and tested with GKE (Google Kubernetes Engine), they should work equally well on AWS EKS.
docker_build_task:\ngke_container: # for AWS, replace this with `eks_container`\nimage: docker:latest # This image can be any custom image. The only hard requirement is that it needs to have `docker-cli` installed.\ncluster_name: cirrus-ci-cluster # your gke cluster name\nzone: us-central1-b # zone of the cluster\nnamespace: cirrus-ci # namespace to use\ncpu: 1\nmemory: 1500Mb\nadditional_containers:\n- name: dockerdaemon\nprivileged: true # docker-in-docker needs to run in privileged mode\ncpu: 4\nmemory: 3500Mb\nimage: docker:dind\nport: 2375\nenv:\nDOCKER_DRIVER: overlay2 # this speeds up the build\nDOCKER_TLS_CERTDIR: \"\" # disable TLS to preserve the old behavior\nenv:\nDOCKER_HOST: tcp://localhost:2375 # this is required so that docker cli commands connect to the \"additional container\" instead of `docker.sock`.\nGOOGLE_CREDENTIALS: ENCRYPTED[qwerty239abc] # this should contain the json key for a gcp service account with the `roles/storage.admin` role on the `artifacts.<your_gcp_project>.appspot.com` bucket as described here https://cloud.google.com/container-registry/docs/access-control. This is only required if you want to pull / push to gcr. If you use dockerhub you need to use different credentials.\nlogin_script:\necho $GOOGLE_CREDENTIALS | docker login -u _json_key --password-stdin https://gcr.io\nbuild_script:\n- docker pull gcr.io/my-project/my-app:$CIRRUS_LAST_GREEN_CHANGE || true\n- docker build\n--cache-from=gcr.io/my-project/my-app:$CIRRUS_LAST_GREEN_CHANGE\n-t gcr.io/my-project/my-app:$CIRRUS_CHANGE_IN_REPO .\npush_script:\n- docker push gcr.io/my-project/my-app:$CIRRUS_CHANGE_IN_REPO
Since the additional_container needs to run in privileged mode, the isolation between the Docker build and the host is somewhat limited; ideally you should create a separate cluster for Cirrus CI builds. If this is a concern, you can also try out Kaniko or Makisu to run builds in unprivileged containers.
Docker Pipe is a way to execute each instruction in its own Docker container while persisting the working directory between the containers. For example, you can build your application in one container, run some lint tools in another container and finally deploy your app via CLI from yet another container.
No need to create huge containers with every single tool pre-installed!
A pipe can be defined the same way as a task with the only difference that instructions should be grouped under the steps field defining a Docker image for each step to be executed in. Here is an example of how we build and validate links for the Cirrus CI documentation that you are reading right now:
pipe:\nname: Build Site and Validate Links\nsteps:\n- image: squidfunk/mkdocs-material:latest\nbuild_script: mkdocs build\n- image: raviqqe/liche:latest # links validation tool in a separate container\nvalidate_script: /liche --document-root=site --recursive site/\n
The amount of CPU and memory that a pipe has access to can be configured with the resources field:
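For example (the values are illustrative):

```yaml
pipe:
  resources:
    cpu: 4
    memory: 12G
```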
Cirrus CI supports container and arm_container instances in order to run your CI workloads on amd64 and arm64 platforms respectively. Cirrus CI uses Kubernetes clusters running in different clouds that are the most suitable for running each platform:
For container instances Cirrus CI uses a GKE cluster of compute-optimized instances running in Google Cloud.
For arm_container instances Cirrus CI uses an EKS cluster of Graviton2 instances running in AWS.
Cirrus Cloud Clusters are configured the same way as anyone can configure a private Kubernetes cluster for their own repository. Cirrus CI supports connecting managed Kubernetes clusters from most of the cloud providers. Please check out all the supported computing services Cirrus CI can integrate with.
By default, a container is given 2 CPUs and 4 GB of memory, but it can be configured in .cirrus.yml:
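For example, a sketch requesting more resources (the image and script are illustrative):

```yaml
task:
  container:
    image: node:latest
    cpu: 4
    memory: 8G
  test_script: node --version
```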
Containers on the Cirrus Cloud Cluster can use a maximum of 8.0 CPUs and up to 32 GB of memory. The memory limit is tied to the amount of CPUs requested: for each CPU you can't get more than 4G of memory.
Tasks using Compute Credits have higher limits and can use up to 28.0 CPUs and 112G of memory respectively.
Using in-memory disks
Some I/O intensive tasks may benefit from using a tmpfs disk mounted as a working directory. Set use_in_memory_disk flag to enable in-memory disk for a container:
amd64arm64
task:\nname: Much I/O\ncontainer:\nimage: alpine:latest\nuse_in_memory_disk: true\n
task:\nname: Much I/O\narm_container:\nimage: alpine:latest\nuse_in_memory_disk: true\n
Note: any files you write including cloned repository will count against your task's memory limit.
Privileged Access
If you need to run privileged docker containers, take a look at the docker builder.
Greedy instances
Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.
It is possible to run containers with KVM enabled. Some types of CI tasks can tremendously benefit from native virtualization. For example, Android-related tasks can benefit from running hardware-accelerated emulators instead of software-emulated ARM emulators.
In order to enable the KVM module for your containers, add kvm: true to your container declaration. Here is an example of a task that runs hardware accelerated Android emulators:
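One way such a task can look (a sketch; the image is assumed to contain the Android SDK and emulator tooling, and the script is illustrative):

```yaml
task:
  name: Hardware Accelerated Emulator
  container:
    image: ghcr.io/cirruslabs/android-sdk:latest  # assumed image with Android SDK tools
    kvm: true
    cpu: 4        # must be 1 or an even integer for KVM-enabled containers
    memory: 16G
  accel_check_script: ls /dev/kvm  # verifies the KVM device is exposed to the container
```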
Because of the additional virtualization layer, it takes about a minute to acquire the necessary resources to start such tasks. KVM-enabled containers are backed by dedicated VMs which restrict the amount of CPU resources that can be used. The value of cpu must be 1 or an even integer; values like 0.5 or 3 are not supported for KVM-enabled containers.
"},{"location":"guide/linux/#working-with-private-registries","title":"Working with Private Registries","text":"
It is possible to use private Docker registries with Cirrus CI to pull containers. To provide access to a private registry of your choice, you'll need to obtain a JSON Docker config file for your registry and create an encrypted variable for Cirrus CI to use.
Using Kubernetes secrets with private clusters
Alternatively, if you are using Cirrus CI with your private Kubernetes cluster, you can create a kubernetes.io/dockerconfigjson secret and just use its name for registry_config:
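Sketched with placeholder names (my-registry-secret is assumed to be a kubernetes.io/dockerconfigjson secret in the task's namespace; the image and cluster names are illustrative):

```yaml
task:
  gke_container:
    image: registry.example.com/my-team/app:latest  # placeholder private image
    cluster_name: my-private-cluster                # placeholder
    zone: us-central1-b
    registry_config: my-registry-secret
```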
If you don't see auth for your registry, it means your Docker installation is using a credentials store. In this case you can manually auth using a Base64 encoded string of your username and your PAT (Personal Access Token). Here's how to generate that:
echo $USERNAME:$PAT | base64\n
Create an encrypted variable from the Docker config and put in .cirrus.yml:
registry_config: ENCRYPTED[...]\n
Now Cirrus CI will be able to pull images from Oracle Container Registry:
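For instance, a task can then reference an image from the private registry directly (the image path below is a placeholder for an image in your registry):

```yaml
task:
  container:
    image: container-registry.oracle.com/os/oraclelinux:latest  # placeholder image path
  script: cat /etc/os-release
```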
It is possible to run M1 macOS Virtual Machines (like how one can run Linux containers) on the Cirrus Cloud macOS Cluster. Use macos_instance in your .cirrus.yml files:
macos_instance:\nimage: ghcr.io/cirruslabs/macos-sonoma-base:latest\ntask:\nscript: echo \"Hello World from macOS!\"\n
Cirrus CI is using Tart virtualization for running macOS Virtual Machines on Apple Silicon. Cirrus CI Cloud only allows images managed and regularly updated by us, whereas with Cirrus CLI you can run any Tart VM on your own infrastructure.
Please refer to the macos-image-templates repository on how the images were built and don't hesitate to create issues if current images are missing something.
Underlying Orchestration Technology
Under the hood, Cirrus CI uses its own Persistent Workers. See more details in our blog post.
Cirrus CI itself doesn't have a built-in mechanism to send notifications but, since Cirrus CI follows best practices of integrating with GitHub, it's possible to configure a GitHub Action that will send any kind of notification.
Here is a full list of curated Cirrus Actions for GitHub including ones to send notifications: cirrus-actions.
It's possible to leverage GitHub Actions' own email notification mechanism to send emails about Cirrus CI failures. To enable it, add the following .github/workflows/email.yml workflow file:
```yaml
on:
  check_suite:
    type: ['completed']

name: Email about Cirrus CI failures

jobs:
  continue:
    name: After Cirrus CI Failure
    if: >-
      github.event.check_suite.app.name == 'Cirrus CI'
      && github.event.check_suite.conclusion != 'success'
      && github.event.check_suite.conclusion != 'cancelled'
      && github.event.check_suite.conclusion != 'skipped'
      && github.event.check_suite.conclusion != 'neutral'
    runs-on: ubuntu-latest
    steps:
      - uses: octokit/request-action@v2.x
        id: get_failed_check_run
        with:
          route: GET /repos/${{ github.repository }}/check-suites/${{ github.event.check_suite.id }}/check-runs?status=completed
          mediaType: '{"previews": ["antiope"]}'
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      - run: |
          echo "Cirrus CI ${{ github.event.check_suite.conclusion }} on ${{ github.event.check_suite.head_branch }} branch!"
          echo "SHA ${{ github.event.check_suite.head_sha }}"
          echo $MESSAGE
          echo "##[error]See $CHECK_RUN_URL for details" && false
        env:
          CHECK_RUN_URL: ${{ fromJson(steps.get_failed_check_run.outputs.data).check_runs[0].html_url }}
```
Cirrus CI pioneered the idea of directly using compute services instead of requiring users to manage their own infrastructure, configure servers for running CI jobs, perform upgrades, etc. Instead, Cirrus CI just uses the APIs of cloud providers to create virtual machines or containers on demand. This fundamental design difference has multiple benefits compared to more traditional CIs:
Ephemeral environment. Each Cirrus CI task starts in a fresh VM or a container without any state left by previous tasks.
Infrastructure as code. All VM versions and container tags are specified in .cirrus.yml configuration file in your Git repository. For any revision in the past Cirrus tasks can be identically reproduced at any point in time in the future using the exact versions of VMs or container tags specified in .cirrus.yml at the particular revision. Just imagine how difficult it is to do a security release for a 6 months old version if your CI environment independently changes.
Predictability and cost efficiency. Cirrus CI uses elasticity of modern clouds and creates VMs and containers on demand only when they are needed for executing Cirrus tasks and deletes them right after. Immediately scale from 0 to hundreds or thousands of parallel Cirrus tasks without a need to over provision infrastructure or constantly monitor if your team has reached maximum parallelism of your current CI plan.
"},{"location":"guide/persistent-workers/#what-is-a-persistent-worker","title":"What is a Persistent Worker","text":"
For some use cases the traditional CI setup is still useful, since not everything is available in the cloud. For example, testing hardware itself, or some third-party devices that can be attached with wires. For such use cases it makes sense to go with a traditional CI setup: install a binary on the hardware which will constantly poll for new tasks and execute them one after another.
This is precisely what Persistent Workers for Cirrus CI are: a simple way to run Cirrus tasks beyond cloud!
First, create a persistent workers pool for your personal account or a GitHub organization (https://cirrus-ci.com/settings/github/<ORGANIZATION>):
Once a persistent worker pool is created, copy the registration token of the pool and follow the Cirrus CLI guide to configure a host that will act as a persistent worker.
Once configured, target task execution on a worker by using persistent_worker instance and matching by workers' labels:
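A sketch of targeting a worker by its labels (the label names and script are illustrative):

```yaml
task:
  persistent_worker:
    labels:
      os: darwin
      arch: arm64
  script: echo "running on my own hardware"
```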
By default, a persistent worker spawns all tasks on the same host machine it runs on.
However, using the isolation field, a persistent worker can utilize a VM or a container engine to increase the separation between tasks and to unlock the ability to use different operating systems.
To use this isolation type, install Tart on the persistent worker's host machine.
Here's an example of a configuration that will run the task inside of a fresh macOS virtual machine created from a remote ghcr.io/cirruslabs/macos-ventura-base:latest VM image:
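A worker-side configuration sketch for this setup (the admin/admin credentials are the defaults used by cirruslabs images; adjust for your own images):

```yaml
persistent_worker:
  isolation:
    tart:
      image: ghcr.io/cirruslabs/macos-ventura-base:latest
      user: admin
      password: admin
```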
Once the VM spins up, the persistent worker will connect to the VM's IP address over SSH using the user and password credentials and run the latest agent version.
"},{"location":"guide/programming-tasks/","title":"Programming Tasks in Starlark","text":""},{"location":"guide/programming-tasks/#introduction-into-starlark","title":"Introduction into Starlark","text":"
Most commonly, Cirrus tasks are declared in a .cirrus.yml file in YAML format as documented in the Writing Tasks guide.
YAML, as a language, is great for declaring simple to moderate configurations, but sometimes just using a declarative language is not enough. One might need some conditional execution or an easy way to generate multiple similar tasks. Most continuous integration services solve this problem by introducing a special domain specific language (DSL) into the existing YAML. In the case of Cirrus CI, we have the only_if keyword for conditional execution and the matrix modification for generating similar tasks. These options are mostly hacks to work around the declarative nature of YAML, where in reality an imperative language would be a better fit. This is why Cirrus CI allows tasks to be configured in Starlark in addition to YAML.
Starlark is a procedural programming language, similar to Python, that originated in the Bazel build tool and is ideal for embedding within systems that want to safely allow user-defined logic. There are a few key differences that made us choose Starlark over common alternatives like JavaScript/TypeScript or WebAssembly:
Starlark doesn't require compilation. There's no need to introduce a full-blown compile and deploy process for a few dozen lines of logic.
Starlark scripts can be executed instantly on any platform. There is a Starlark interpreter written in Go which integrates nicely with the Cirrus CLI and Cirrus CI infrastructure.
Starlark has built-in functionality for loading external modules which is ideal for config sharing. See module loading for details.
With module loading you can re-use other people's code to avoid wasting time writing tasks from scratch. For example, with the official task helpers the example above can be refactored to:
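With the helpers module, the refactored version would look roughly like this (a sketch in Starlark; the exact helper names should be checked against the cirrus-modules documentation):

```python
# Starlark: load task-building helpers from the official module
load("github.com/cirrus-modules/helpers", "task", "container", "script")

def main(ctx):
    return [
        task(
            instance=container("python:latest"),
            instructions=[script("test", "pytest")],
        ),
    ]
```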
Then the generated YAML is appended to .cirrus.yml (if any) before passing the combined config into the final YAML parser.
With Starlark, it's possible to generate parts of the configuration dynamically based on some external conditions:
Parsing files inside the repository to pick up some common settings (for example, parse package.json to see if it contains a lint script and generate a linting task).
Making an HTTP request to check the previous build status.
See a video tutorial on how to create a custom Cirrus module:
Different events will trigger execution of different top-level functions in the .cirrus.star file. These functions reserve certain names and will be called with different arguments depending on the event which triggered the execution.
main() is called once a Cirrus CI build is triggered in order to generate additional configuration that will be appended to .cirrus.yml before parsing.
The main function can return a single object or a list of objects which will be automatically serialized into YAML. In case of returning plain text, it will be appended to .cirrus.yml as is.
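For example, a main() that generates a single task as a YAML-serializable object might look like this sketch (the task shape mirrors regular .cirrus.yml configuration):

```python
# Starlark entrypoint, evaluated when a build is triggered
def main(ctx):
    return [
        {
            "task": {
                "container": {"image": "debian:latest"},
                "script": "make test",
            }
        }
    ]
```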
Note that .cirrus.yml configuration file is optional and the whole build can be generated via evaluation of .cirrus.star file.
It's also possible to execute Starlark scripts on updates to the current build or any of the tasks within the build. Think of it as WebHooks running within Cirrus that don't require any infrastructure on your end.
Expected names of Starlark Hook functions in .cirrus.star are on_build_<STATUS> or on_task_<STATUS> respectively. Please refer to the Cirrus CI GraphQL Schema for a full list of existing statuses, but most commonly on_build_failed/on_build_completed and on_task_failed/on_task_completed are used. These functions should expect a single context argument passed by the Cirrus Cloud. At the moment the hook's context only contains a single field, payload, containing the same payload as a webhook.
One caveat of Starlark Hooks execution is the CIRRUS_TOKEN environment variable that contains a token to access the Cirrus API. The scope of CIRRUS_TOKEN is restricted to the build associated with that particular hook invocation and allows, for example, automatically re-running tasks. Here is an example of a Starlark Hook that automatically re-runs a failed task in case a particular transient issue is found in its logs:
```python
# load some helpers from an external module
load("github.com/cirrus-modules/graphql", "rerun_task_if_issue_in_logs")

def on_task_failed(ctx):
    if "Test" not in ctx.payload.data.task.name:
        return
    if ctx.payload.data.task.automaticReRun:
        print("Task is already an automatic re-run! Won't even try to re-run it...")
        return
    rerun_task_if_issue_in_logs(ctx.payload.data.task.id, "Time out")
```
You can also specify an exact commit hash instead of the branch name to prevent accidental changes.
Loading private modules
If your organization has a private repository called cirrus-modules with Cirrus CI installed, then this repository will be available for loading within repositories of your organization.
To load .star files from repositories other than GitHub, add a .git suffix at the end of the repository name, for example:
```python
load("gitlab.com/fictional/repository.git/validator.star", "validate")
                                     ^^^^ note the suffix
```
While not technically a built-in, is_test is a bool that allows Starlark code to determine whether it's running in a test environment via Cirrus CLI. This can be useful for limiting test complexity, e.g. by not making a real HTTP request and mocking/skipping it instead. Read more about module testing in a separate guide in the Cirrus CLI repository.
changes_include() is a Starlark alternative to the changesInclude() function commonly found in the YAML configuration files.
It takes at least one string with a pattern and returns a bool that represents whether any of the specified patterns matched any of the affected files in the running context.
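A sketch of using it from a main() entrypoint (the generated task shape and patterns are illustrative):

```python
# Starlark, not plain Python: `load` is a Starlark builtin
load("cirrus", "changes_include")

def main(ctx):
    # only generate the Docker task when related files changed
    if changes_include("Dockerfile", "src/**"):
        return [{"docker_builder": {"build_script": "docker build ."}}]
    return []
```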
changes_include_only() is a Starlark alternative to the changesIncludeOnly() function commonly found in the YAML configuration files.
It takes at least one string with a pattern and returns a bool that represents whether any of the specified patterns matched all the affected files in the running context.
Currently supported contexts:
main() entrypoint
Example:
```python
load("cirrus", "changes_include_only")

def main(ctx):
    # skip if only documentation changed
    if changes_include_only("doc/*"):
        return []
    # ...
```
cirrus.zipfile module provides methods to read Zip archives.
You instantiate a ZipFile object using zipfile.ZipFile(data) function call and then call namelist() and open(filename) methods to retrieve information about archive contents.
Refer to the starlib's documentation for more details.
Example:
```python
load("cirrus", "fs", "zipfile")

def is_java_archive(path):
    # Read Zip archive contents from the filesystem
    archive_contents = fs.read(path)
    if archive_contents == None:
        return False

    # Open Zip archive and a file inside of it
    zf = zipfile.ZipFile(archive_contents)
    manifest = zf.open("META-INF/MANIFEST.MF")

    # Does the manifest contain the expected version?
    if "Manifest-Version: 1.0" in manifest.read():
        return True

    return False
```
At the moment Cirrus CI only supports repositories hosted on GitHub. This guide will walk you through the installation process. If you are interested in support for other code hosting platforms, please fill out this form to help us prioritize the support and notify you once it is available.
Start by configuring the Cirrus CI application from GitHub Marketplace.
Choose a plan for your personal account or for an organization you have admin rights for.
GitHub Apps can be installed on all repositories or on a repository-by-repository basis for granular access control. For example, Cirrus CI can be installed only on public repositories and will only have access to those public repositories. In contrast, classic OAuth Apps don't have such restrictions.
Change Repository Access
You can always revisit Cirrus CI's repository access settings on your installation page.
Once Cirrus CI is installed for a particular repository, you must add either a .cirrus.yml configuration file or a .cirrus.star script to the root of the repository. The .cirrus.yml defines tasks that will be executed for every build for the repository.
For a Node.js project, your .cirrus.yml could look like:
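A minimal Node.js configuration could be sketched as (the task name and scripts are illustrative):

```yaml
container:
  image: node:latest

task:
  name: Tests
  install_script: npm install
  test_script: npm test
```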
That's all! After pushing a .cirrus.yml a build with all the tasks defined in the .cirrus.yml file will be created.
Note: Please check the full guide on configuring Cirrus Tasks and/or check a list of available examples.
Zero-config Docker Builds
If your repository happens to have a Dockerfile in the root, Cirrus CI will attempt to build it even without a corresponding .cirrus.yml configuration file.
You will see all your Cirrus CI builds on cirrus-ci.com once signed in.
GitHub status checks for each task will appear on GitHub as well.
Newly created PRs will also get Cirrus CI's status checks.
Examples
Don't forget to check examples page for ready-to-copy examples of some .cirrus.yml configuration files for different languages and build systems.
Life of a build
Please check a high level overview of what's happening under the hood when a change is pushed, and this guide to learn more about how to write tasks.
"},{"location":"guide/quick-start/#authorization-on-cirrus-ci-web-app","title":"Authorization on Cirrus CI Web App","text":"
All builds created by your account can be viewed on Cirrus CI Web App after signing in with your GitHub Account:
After clicking on Sign In you'll be redirected to GitHub in order to authorize access:
Note about Act on your behalf
Cirrus CI only asks for several kinds of permissions that you can see on your installation page. These permissions are read-only except for write access to checks and commit statuses in order for Cirrus CI to be able to report task statuses via checks or commit statuses.
There is a long thread discussing this weird "Act on your behalf" wording here on GitHub's own community forum.
"},{"location":"guide/quick-start/#enabling-new-repositories-after-installation","title":"Enabling New Repositories after Installation","text":"
If you choose initially to allow Cirrus CI to access all of your repositories, all you need to do is push a .cirrus.yml to start building your repository on Cirrus CI.
If you only allowed Cirrus CI to access certain repositories, then add your new repository to the list of repositories Cirrus CI has access to via this page, then push a .cirrus.yml to start building on Cirrus CI.
"},{"location":"guide/quick-start/#permission-model-for-github-repositories","title":"Permission Model for GitHub Repositories","text":"
When a user triggers a build on Cirrus CI by either pushing a change to a repository, creating a PR, or creating a release, Cirrus CI will associate the corresponding user's permissions with the build and the tasks within that build. Those permissions are exposed to tasks via the CIRRUS_USER_PERMISSIONS environment variable and are mapped to GitHub's collaborator permissions of the user for the given repository. Only tasks with write and admin permissions will get decrypted values of the encrypted variables.
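These permissions can also drive conditional execution; a sketch (the script name is hypothetical):

```yaml
task:
  name: Deploy
  only_if: $CIRRUS_USER_PERMISSIONS == 'admin' || $CIRRUS_USER_PERMISSIONS == 'write'
  deploy_script: ./deploy.sh # hypothetical script
```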
When working with the Cirrus GraphQL API, either directly or indirectly through the Cirrus CI Web UI, permissions play a key role. Not only does one need read permission to view builds and tasks of a private repository, but in order to perform any GraphQL mutation one will need at least write permission, with a few exceptions:
admin permission is required for deleting a repository via RepositoryDeleteMutation.
admin permission is required for creating API access tokens via GenerateNewOwnerAccessTokenMutation and GenerateNewScopedAccessTokenMutation.
Note that for public repositories the none collaborator permission is mapped to read in order to give public view access to anyone.
For every task Cirrus CI starts a new Virtual Machine or a new Docker Container on a given compute service. Using a new VM or a new Docker Container each time for running tasks has many benefits:
Atomic changes to the environment where tasks are executed. Everything about a task is configured in the .cirrus.yml file, including VM image version and Docker container image version. After committing changes to .cirrus.yml, not only will new tasks use the new environment, but outdated branches will also continue using the old configuration. For any revision in the past, Cirrus tasks can be identically reproduced at any point in time in the future using the exact versions of VMs or container tags specified in .cirrus.yml at that particular revision.
Reproducibility. Fresh environment guarantees no corrupted artifacts or caches are presented from the previous tasks.
Cost efficiency. Most compute services offer per-second pricing, which makes them ideal for use with Cirrus CI. Also, each task for a repository can define the ideal amount of CPUs and memory specific to the nature of the task. No need to manage pools of similar VMs or try to fit workloads within the limits of a given Continuous Integration system.
To be fair there are of course some disadvantages of starting a new VM or a container for every task:
Virtual Machine Startup Speed. Starting a VM can take from a few dozen seconds to a minute or two, depending on the cloud provider and the particular VM image. Starting a container, on the other hand, just takes a few hundred milliseconds! But even a minute on average for starting up VMs is a small price to pay for a more stable, reliable and reproducible CI.
Cold local caches for every task execution. Many tools tend to store caches like downloaded dependencies locally to avoid downloading them again in the future. Since Cirrus CI always uses fresh VMs and containers, such local caches will always be empty. The performance implications of empty local caches can be avoided by using Cirrus CI features like the built-in caching mechanism. Some tools like Gradle can even take advantage of the built-in HTTP cache!
Please check the list of currently supported cloud compute services below. In case you have your own hardware, please take a look at Persistent Workers, which allow connecting anything to Cirrus CI.
Cirrus CI can schedule tasks on several Google Cloud Compute services. In order to interact with Google Cloud APIs Cirrus CI needs permissions. Creating a service account is a common way to safely give granular access to parts of Google Cloud Projects.
Isolation
We recommend creating a separate Google Cloud project for running CI builds to make sure tests are isolated from production data. Having a separate project will also show how much money is spent on CI and how efficient Cirrus CI is.
Once you have a Google Cloud project for Cirrus CI please create a service account by running the following command:
```shell
gcloud iam service-accounts create cirrus-ci \
  --project $PROJECT_ID
```
Depending on the compute service, Cirrus CI will need different roles assigned to the service account. But Cirrus CI will always need permissions to refresh its token, generate pre-signed URLs (for the artifacts upload/download to work) and be able to view monitoring:
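The role bindings typically take the following shape (these two roles are an assumption on our part; double-check the exact roles against the current documentation):

```shell
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/iam.serviceAccountTokenCreator

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/monitoring.viewer
```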
By default Cirrus CI will store logs and caches for 90 days, but this can be changed by manually configuring a lifecycle rule for the Google Cloud Storage bucket that Cirrus CI is using.
A private key can be created by running the following command:
```shell
gcloud iam service-accounts keys create service-account-credentials.json \
  --iam-account cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com
```
Finally, create an encrypted variable from the contents of the service-account-credentials.json file and add it to the top of the .cirrus.yml file:
```yaml
gcp_credentials: ENCRYPTED[qwerty239abc]
```
Now Cirrus CI can store logs and caches in Google Cloud Storage for tasks scheduled on either GCE or GKE. Please check the following sections for additional instructions about Compute Engine or Kubernetes Engine.
Supported Regions
Cirrus CI currently supports the following GCP regions: us-central1, us-east1, us-east4, us-west1, us-west2, europe-west1, europe-west2, europe-west3 and europe-west4.
Please contact support if you are interested in support for other regions.
By configuring Cirrus CI as an identity provider, Cirrus CI will be able to acquire temporary access tokens on-demand for each task. Please read Google Cloud documentation to learn more about security and other benefits of using a workload identity provider.
Now let's set up Cirrus CI as a workload identity provider:
First, let's make sure the IAM Credentials API is enabled:
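Assuming gcloud is configured for your project, enabling the API looks like:

```shell
gcloud services enable iamcredentials.googleapis.com \
  --project="${PROJECT_ID}"
```

Then create a Workload Identity Pool: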
```shell
gcloud iam workload-identity-pools create "ci-pool" \
  --project="${PROJECT_ID}" \
  --location="global" \
  --display-name="Continuous Integration"
```
Get the full ID of the Workload Identity Pool:
```shell
gcloud iam workload-identity-pools describe "ci-pool" \
  --project="${PROJECT_ID}" \
  --location="global" \
  --format="value(name)"
```
Save this value as an environment variable:
```shell
export WORKLOAD_IDENTITY_POOL_ID="..." # value from above
```
Create a Workload Identity Provider in that pool:
```shell
# TODO(developer): Update this value to your GitHub organization.
export OWNER="organization" # e.g. "cirruslabs"

gcloud iam workload-identity-pools providers create-oidc "cirrus-oidc" \
  --project="${PROJECT_ID}" \
  --location="global" \
  --workload-identity-pool="ci-pool" \
  --display-name="Cirrus CI" \
  --attribute-mapping="google.subject=assertion.aud,attribute.owner=assertion.owner,attribute.actor=assertion.repository,attribute.actor_visibility=assertion.repository_visibility,attribute.pr=assertion.pr" \
  --attribute-condition="attribute.owner == '$OWNER'" \
  --issuer-uri="https://oidc.cirrus-ci.com"
```
The attribute mappings map claims in the Cirrus CI JWT to assertions you can make about the request (like the repository name or repository visibility). In the example above, the --attribute-condition flag asserts that the provider can be used with any repository of your organization. You can restrict the access further with attributes like repository, repository_visibility and pr.
If not yet created, create a Service Account that Cirrus CI will impersonate to manage compute resources and assign it the required roles.
Allow authentications from the Workload Identity Provider originating from your organization to impersonate the Service Account created above:
```shell
gcloud iam service-accounts add-iam-policy-binding "cirrus-ci@${PROJECT_ID}.iam.gserviceaccount.com" \
  --project="${PROJECT_ID}" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/${WORKLOAD_IDENTITY_POOL_ID}/attribute.owner/${OWNER}"
```
Extract the Workload Identity Provider resource name:
```shell
gcloud iam workload-identity-pools providers describe "cirrus-oidc" \
  --project="${PROJECT_ID}" \
  --location="global" \
  --workload-identity-pool="ci-pool" \
  --format="value(name)"
```
Use this value as the workload_identity_provider value in your Cirrus configuration file:
```yaml
gcp_credentials:
  # todo(developer): replace PROJECT_NUMBER and PROJECT_ID with the actual values
  workload_identity_provider: projects/${PROJECT_NUMBER}/locations/global/workloadIdentityPools/ci-pool/providers/cirrus-oidc
  service_account: cirrus-ci@${PROJECT_ID}.iam.gserviceaccount.com
```
In order to schedule tasks on Google Compute Engine, the service account that Cirrus CI operates via should have the necessary role assigned. It can be done by running a gcloud command:
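Granting the Compute Admin role typically looks like this (verify the exact role against the current documentation):

```shell
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member serviceAccount:cirrus-ci@$PROJECT_ID.iam.gserviceaccount.com \
  --role roles/compute.admin
```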
It's also possible to specify a concrete image name instead of the periodically rolling image family. Use the image_name field instead of image_family:
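A sketch of such a configuration (the project and image names are illustrative):

```yaml
gce_instance:
  image_project: my-gcp-project # illustrative project name
  image_name: my-custom-image-20240101 # a concrete image instead of image_family
  cpu: 4
  memory: 8G
```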
"},{"location":"guide/supported-computing-services/#custom-vm-images","title":"Custom VM images","text":"
Building an immutable VM image with all necessary software pre-configured is a known best practice with many benefits. It makes sure the environment where a task is executed is always the same, and that no time is spent on useless work like installing a package over and over again for every single task.
There are many ways to create a custom image for Google Compute Engine. Please refer to the official documentation. At Cirrus Labs we use Packer to automate building such images. An example of how we use it can be found in our public GitHub repository.
Google Compute Engine supports Windows images, and Cirrus CI can take full advantage of it by just explicitly specifying the platform of an image like this:
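A sketch using a public Windows image family (the family name is illustrative; check the windows-cloud project for current families):

```yaml
gce_instance:
  image_project: windows-cloud
  image_family: windows-2019
  platform: windows

task:
  script: dir
```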
Google Compute Engine supports FreeBSD images, and Cirrus CI can take full advantage of it by just explicitly specifying the platform of an image like this:
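A sketch using a public FreeBSD image family (the family name is illustrative; check the freebsd-org-cloud-dev project for current families):

```yaml
gce_instance:
  image_project: freebsd-org-cloud-dev
  image_family: freebsd-13-2 # illustrative family name
  platform: freebsd

task:
  script: uname -a
```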
"},{"location":"guide/supported-computing-services/#docker-containers-on-dedicated-vms","title":"Docker Containers on Dedicated VMs","text":"
It is possible to run a container directly on a Compute Engine VM with pre-installed Docker. Use the gce_container field to specify a VM image and a Docker container to execute on the VM (gce_container extends gce_instance definition with a few additional fields):
Note that gce_container always runs containers in privileged mode.
If your VM image has Nested Virtualization Enabled it's possible to use KVM from the container by specifying enable_nested_virtualization flag. Here is an example of using KVM-enabled container to run a hardware accelerated Android emulator:
By default Cirrus CI will create Google Compute instances without any scopes, so an instance can't access Google Cloud Storage, for example. But sometimes it can be useful to give some permissions to an instance by using the scopes key of gce_instance. For example, if a particular task builds Docker images and then pushes them to Container Registry, its configuration file can look something like:
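A sketch of such a task (the image names and scripts are illustrative; the scopes key is the one described above):

```yaml
task:
  gce_instance:
    image_project: my-gcp-project # illustrative
    image_family: docker-builder # illustrative image with Docker pre-installed
    scopes:
      - cloud-platform
  build_script: docker build --tag gcr.io/$PROJECT_ID/my-app:latest .
  push_script: docker push gcr.io/$PROJECT_ID/my-app:latest
```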
Cirrus CI can schedule spot instances with all their price benefits and stability risks. Sometimes the risk of an instance being preempted at any time can be tolerated. For example, gce_instance can be configured to schedule spot instances for non-master branches like this:
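A sketch of such a configuration, assuming the spot field accepts a boolean expression over environment variables like other Cirrus CI fields do:

```yaml
gce_instance:
  image_project: my-gcp-project # illustrative
  image_family: my-ci-image # illustrative
  spot: $CIRRUS_BRANCH != 'master'
```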
Scheduling tasks on Compute Engine has one big disadvantage: waiting for an instance to start, which usually takes around a minute. One minute is not that long, but it can't compete with the hundreds of milliseconds it takes a container cluster on GKE to start a container.
To start scheduling tasks on a container cluster we first need to create one using gcloud. Here is a recommended configuration of a cluster that is very similar to what is used for the managed container instances. We recommend creating a cluster with two node pools:
default-pool with a single node and no autoscaling for system pods required by Kubernetes.
workers-pool that will use Compute-Optimized instances and SSD storage for better performance. This pool also will be able to scale to 0 when there are no tasks to run.
Done! Now after creating cirrus-ci-cluster cluster and having gcp_credentials configured tasks can be scheduled on the newly created cluster like this:
```yaml
gcp_credentials: ENCRYPTED[qwerty239abc]

gke_container:
  image: gradle:jdk8
  cluster_name: cirrus-ci-cluster
  location: us-central1-a # cluster zone or region for multi-zone clusters
  namespace: default # Kubernetes namespace to create pods in
  cpu: 6
  memory: 24GB
  nodeSelectorTerms: # optional
    - matchExpressions:
        - key: cloud.google.com/gke-spot
          operator: In
          values:
            - "true"
```
Using in-memory disk
By default Cirrus CI mounts an emptyDir into the /tmp path to protect the pod from unnecessary eviction by the autoscaler. It is possible to switch the emptyDir's medium to in-memory tmpfs storage instead of the default one by setting the use_in_memory_disk field of gke_container to true or any other expression that uses environment variables.
Running privileged containers
You can run privileged containers on your private GKE cluster by setting the privileged field of gke_container to true or any other expression that uses environment variables. The privileged field is also available for any additional container.
There are two options to provide access to your infrastructure: via a traditional IAM user or via a more flexible and secure Identity Provider.
Permissions
A user or a role that Cirrus CI will be using for orchestrating tasks on AWS should at least have access to S3 in order to store logs and cache artifacts. Here is a list of actions that Cirrus CI requires to store logs and artifacts:
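A minimal IAM policy statement typically covers S3 actions along these lines (this particular set is an assumption; verify against the current documentation and scope Resource to the Cirrus bucket where possible):

```json
{
  "Effect": "Allow",
  "Action": [
    "s3:CreateBucket",
    "s3:GetBucketLocation",
    "s3:ListBucket",
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": "*"
}
```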
"},{"location":"guide/supported-computing-services/#iam-user-credentials","title":"IAM user credentials","text":"
Creating an IAM user for programmatic access is a common way to safely give granular access to parts of your AWS account.
Once you've created a user for Cirrus CI, you'll need to provide the key id and the access key itself. In order to do so, please create an encrypted variable with the following content:
"},{"location":"guide/supported-computing-services/#cirrus-as-an-openid-connect-identity-provider","title":"Cirrus as an OpenID Connect Identity Provider","text":"
By configuring Cirrus CI as an identity provider, Cirrus CI will be able to acquire temporary access tokens on-demand for each task. Please read AWS documentation to learn more about security and other benefits of using a workload identity provider.
Now let's set up Cirrus CI as a workload identity provider. Here is a CloudFormation template that can configure Cirrus CI as an OpenID Connect Identity Provider. Please be extra careful and review this template; specifically, pay attention to the condition that asserts the claims CIRRUS_OIDC_TOKEN has.
This example template only checks that CIRRUS_OIDC_TOKEN comes from any repository under your organization. If you are planning to use AWS compute services only for private repositories, you should change this condition to:
Additionally, if you are planning to access production services from within your CI tasks, please create a separate role with even stricter asserts for additional security. The same CIRRUS_OIDC_TOKEN can be used to acquire tokens for multiple roles.
The output of running the template will be a role that can be used as aws_credentials in your .cirrus.yml configuration:
```yaml
aws_credentials:
  role_arn: arn:aws:iam::123456789:role/CirrusCI-Role-Something-Something
  role_session_name: cirrus # an identifier for the assumed role session
  region: us-east-2 # region to use for calling the STS
```
Note that you'll need to add permissions required for Cirrus to that role.
Now tasks can be scheduled on EC2 by configuring ec2_instance like this:
```yaml
task:
  ec2_instance:
    image: ami-0a047931e1d42fdb3
    type: t2.micro
    region: us-east-1
    subnet_ids: # optional, list of subnets from your default VPC to randomly choose from for scheduling the instance
      - ...
    subnet_filters: # optional, map of filters to use for DescribeSubnets API call. Note to make sure Cirrus is given `ec2:DescribeSubnets`
      - name: tag:Name
        values:
          - subnet1
          - subnet2
    architecture: arm64 # defaults to amd64
    spot: true # defaults to false
    block_device_mappings: # empty by default
      - device_name: /dev/sdg
        ebs:
          volume_size: 100 # to increase the size of the root volume
      - device_name: /dev/sda1
        virtual_name: ephemeral0 # to add an ephemeral disk for supported instances
      - device_name: /dev/sdj
        ebs:
          snapshot_id: snap-xxxxxxxx
  script: ./run-ci.sh
```
The value for the image field of ec2_instance can be just the image id in the ami-* format, but there are two more convenient options where Cirrus will do the image id resolution for you:
"},{"location":"guide/supported-computing-services/#aws-system-manager","title":"AWS System Manager","text":"
to figure out the AMI right before scheduling the instance (Cirrus will pick the freshest AMI from the list based on creation date). Please make sure the IAM user or role has the ec2:DescribeImages permission.
Please follow the instructions on how to create an EKS cluster and add worker nodes to it. And don't forget to add the necessary permissions for the IAM user or OIDC role that Cirrus CI is using:
To verify that Cirrus CI will be able to communicate with your cluster, please make sure that, when you are locally logged in as the user that Cirrus CI acts as, you can successfully run the following commands and see your worker nodes up and running:
If you have an issue with accessing your EKS cluster via kubectl, most likely you did not create the cluster with the user that Cirrus CI is using. The easiest way to do so is to create the cluster through AWS CLI with the following command:
Please add the AmazonS3FullAccess policy to the role used for creation of EKS workers (the same role you put in aws-auth-cm.yaml when enabling worker nodes to join the cluster).
Greedy instances
Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.
Cirrus CI can schedule tasks on several Azure services. In order to interact with Azure APIs, Cirrus CI needs permissions. First, please choose a subscription you want to use for scheduling CI tasks. Navigate to the Subscriptions blade within the Azure Portal and save the $SUBSCRIPTION_ID that we'll use below for setting up a service principal.
Creating a service principal is a common way to safely give granular access to parts of Azure:
```shell
az ad sp create-for-rbac --name CirrusCI --sdk-auth \
  --scopes "/subscriptions/$SUBSCRIPTION_ID"
```
The command above will create a new service principal and print something like:
Azure Container Instances (ACI) is an ideal candidate for running modern CI workloads. ACI lets you simply run Linux and Windows containers without thinking about the underlying infrastructure.
Once azure_credentials is configured as described above, tasks can be scheduled on ACI by configuring aci_instance like this:
Linux-based images are usually pretty small and don't require much tweaking. For Windows containers, ACI recommends following a few basic tips in order to reduce startup time.
Cirrus CI can schedule tasks on several Oracle Cloud services. In order to interact with OCI APIs, Cirrus CI needs permissions. Please create a user on whose behalf Cirrus CI will act:
```shell
oci iam user create --name cirrus --description "Cirrus CI Orchestrator"
```
Please configure the cirrus user to be able to access storage, launch instances and access Kubernetes clusters. The easiest way is to add the cirrus user to the Administrators group, but this is less secure than a granular access configuration.
By default, for every repository you start using Cirrus CI with, Cirrus will create a bucket with a 90-day lifecycle policy. In order to allow Cirrus to configure lifecycle policies, please add the following policy as described in the documentation. Here is an example of the policy for the us-ashburn-1 region:
```
Allow service objectstorage-us-ashburn-1 to manage object-family in tenancy
```
Once you have created and configured the cirrus user, you'll need to provide its API key. Once you generate an API key, you should get a *.pem file with the private key that will be used by Cirrus CI.
Normally your config file for local use looks like this:
```ini
[DEFAULT]
user=ocid1.user.oc1..XXX
fingerprint=11:22:...:99
tenancy=ocid1.tenancy.oc1..YYY
region=us-ashburn-1
key_file=<path to your *.pem private keyfile>
```
For Cirrus, you'll need to use a different format:
```
<user value>
<fingerprint value>
<tenancy value>
<region value>
<content of your *.pem private keyfile>
```
This way you'll be able to create a single encrypted variable with the contents of the Cirrus specific credentials above.
Please create a Kubernetes cluster and make sure the Kubernetes API Public Endpoint is enabled for the cluster so Cirrus can access it. Then copy the cluster id, which can be used when configuring oke_container:
The cluster can utilize Oracle's Ampere A1 Arm instances in order to run arm64 CI workloads!
Greedy instances
Greedy instances can potentially use more CPU resources if available. Please check this blog post for more details.
"},{"location":"guide/tips-and-tricks/","title":"Configuration Tips and Tricks","text":""},{"location":"guide/tips-and-tricks/#custom-clone-command","title":"Custom Clone Command","text":"
By default, Cirrus CI uses a Git client implemented purely in Go to perform a clone of a single branch with full Git history. It is possible to control clone depth via CIRRUS_CLONE_DEPTH environment variable.
Customizing clone behavior is as simple as overriding clone_script. For example, here is an override that uses a pre-installed Git client (if your build environment has one) to do a shallow clone of a single branch:
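A minimal sketch of such an override, assuming git is available in the build environment (the token and all variables below are provided by Cirrus CI at runtime):

```yaml
task:
  # Shallow clone of a single branch using a pre-installed git client.
  clone_script: |
    if [ -z "$CIRRUS_PR" ]; then
      git clone --depth 1 --branch "$CIRRUS_BRANCH" \
        https://x-access-token:${CIRRUS_REPO_CLONE_TOKEN}@github.com/${CIRRUS_REPO_FULL_NAME}.git \
        "$CIRRUS_WORKING_DIR"
    else
      git clone --depth 1 \
        https://x-access-token:${CIRRUS_REPO_CLONE_TOKEN}@github.com/${CIRRUS_REPO_FULL_NAME}.git \
        "$CIRRUS_WORKING_DIR"
      # for PR builds, fetch the PR head before resetting to the expected SHA
      git -C "$CIRRUS_WORKING_DIR" fetch origin "pull/$CIRRUS_PR/head"
    fi
    git -C "$CIRRUS_WORKING_DIR" reset --hard "$CIRRUS_CHANGE_IN_REPO"
```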
Using go-git made it possible not to require a pre-installed Git in the execution environment. For example, most alpine-based containers don't have Git pre-installed. Thanks to go-git you can even use distroless containers with Cirrus CI, which don't even have an operating system.
"},{"location":"guide/tips-and-tricks/#sharing-configuration-between-tasks","title":"Sharing configuration between tasks","text":"
You can use YAML aliases to share configuration options between multiple tasks. For example, here is a 2-task build which only runs for \"master\", PRs and tags, and installs some framework:
```yaml
# Define a node anywhere in the YAML file to create an alias. Make sure the name doesn't clash with an existing keyword.
regular_task_template: &REGULAR_TASK_TEMPLATE
  only_if: $CIRRUS_BRANCH == 'master' || $CIRRUS_TAG != '' || $CIRRUS_PR != ''
  env:
    FRAMEWORK_PATH: "${HOME}/framework"
  install_framework_script: curl https://example.com/framework.tar | tar -C "${FRAMEWORK_PATH}" -x

task:
  # This operator will insert REGULAR_TASK_TEMPLATE at this point in the task node.
  <<: *REGULAR_TASK_TEMPLATE
  name: linux
  container:
    image: alpine:latest
  test_script: ls "${FRAMEWORK_PATH}"

task:
  <<: *REGULAR_TASK_TEMPLATE
  name: osx
  macos_instance:
    image: catalina-xcode
  test_script: ls -w "${FRAMEWORK_PATH}"
```
"},{"location":"guide/tips-and-tricks/#long-lines-in-configuration-file","title":"Long lines in configuration file","text":"
If you like your YAML file to fit on your screen, and some commands are just too long, you can split them across multiple lines. YAML supports a variety of options to do that, for example here's how you can split ENCRYPTED values:
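For example, a long command can be split with a YAML folded block scalar (`>-`), which joins the lines with single spaces before execution; the command and flags below are illustrative:

```yaml
task:
  # the folded scalar below is executed as a single line:
  # ./gradlew test --no-daemon --stacktrace --console=plain
  test_script: >-
    ./gradlew test
    --no-daemon
    --stacktrace
    --console=plain
```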
"},{"location":"guide/tips-and-tricks/#setting-environment-variables-from-scripts","title":"Setting environment variables from scripts","text":"
Even though most of the time you can configure environment variables via env, there are cases when a variable's value can only be obtained when the task is already running.
Normally you'd use export for that, but since each script instruction is executed in a separate shell, the exported variables won't propagate to the next instruction.
However, there's a simple solution: just write your variables in a KEY=VALUE format to the file referenced by the CIRRUS_ENV environment variable.
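A minimal sketch (the image, variable names and scripts are illustrative): a value computed in one instruction becomes a regular environment variable in later ones.

```yaml
task:
  container:
    image: alpine:latest
  compute_script:
    # KEY=VALUE lines appended to the file that $CIRRUS_ENV points to
    # become environment variables for all subsequent instructions
    - echo "BUILD_STAMP=$(date +%Y%m%d)" >> $CIRRUS_ENV
  use_script: echo "stamp is $BUILD_STAMP"
```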
It is possible to run Windows containers on the Cirrus Cloud Windows Cluster, just like one can run Linux containers. To use Windows, add windows_container instead of container in your .cirrus.yml file:
Cirrus CI assumes that the container image's host OS is Windows Server 2019. Cirrus CI used to support 1709 and 1803 versions, but they are deprecated as of April 2021.
By default, the Cirrus CI agent executes scripts using cmd.exe. It is possible to override the default shell executor by providing the CIRRUS_SHELL environment variable:
```yaml
env:
  CIRRUS_SHELL: powershell
```
It is also possible to use PowerShell scripts inline inside of a script instruction by prefixing it with ps:
```yaml
windows_task:
  script:
    - ps: Get-Location
```
ps: COMMAND is just syntactic sugar which transforms it to:
Some software installed with Chocolatey will update the PATH environment variable in the system settings and suggest using refreshenv to pull those changes into the current environment. Unfortunately, refreshenv will overwrite any environment variables set in the Cirrus CI configuration with the system-configured defaults. We advise making the necessary changes via env and environment instead of using the refreshenv command in scripts.
All cirrusci/* Windows containers like cirrusci/windowsservercore:2016 have Chocolatey pre-installed. Chocolatey is a package manager for Windows which supports unattended installs of software, useful on headless machines.
A task defines a sequence of instructions to execute and an execution environment to execute these instructions in. Let's see a line-by-line example of a .cirrus.yml configuration file first:
The example above defines a single task that will be scheduled and executed on the Linux Cluster using the openjdk:latest Docker image. Only one user-defined script instruction to run ./gradlew test will be executed. Not that complex, right?
Please read the topics below if you want to better understand what's going on in a more complex .cirrus.yml configuration file, such as this:
Use any Docker image from public or private registries
Use cache instruction to persist folders based on an arbitrary fingerprint_script.
Use matrix modification to produce many similar tasks.
See what kind of files were changed and skip tasks that are not applicable. See changesInclude and changesIncludeOnly documentation for details.
Use nested matrix modification to produce even more tasks.
Completely exclude tasks from execution graph by any custom condition.
Task Naming
To name a task, one can use the name field; the foo_task syntax is just syntactic sugar. A separate name field is very useful when you want a rich task name:
```yaml
task:
  name: Tests (macOS)
  ...
```
Note: instructions within a task can only be named via a prefix (e.g. test_script).
Visual Task Creation for Beginners
If you are just getting started and prefer a more visual way of creating tasks, there is a third-party Cirrus CI Configuration Builder for generating YAML config that might be helpful.
In order to specify where to execute a particular task you can choose from a variety of options by defining one of the following fields for a task:
| Field Name | Managed by | Description |
|---|---|---|
| container | us | Linux Docker Container |
| arm_container | us | Linux Arm Docker Container |
| windows_container | us | Windows Docker Container |
| macos_instance | us | macOS Virtual Machines |
| freebsd_instance | us | FreeBSD Virtual Machines |
| compute_engine_instance | us | Full-fledged custom VM |
| persistent_worker | you | Use any host on any platform and architecture |
| gce_instance | you | Linux, Windows and FreeBSD Virtual Machines in your GCP project |
| gke_container | you | Linux Docker Containers on private GKE cluster |
| ec2_instance | you | Linux Virtual Machines in your AWS |
| eks_instance | you | Linux Docker Containers on private EKS cluster |
| azure_container_instance | you | Linux and Windows Docker Container on Azure |
| oke_instance | you | Linux x86 and Arm Containers on Oracle Cloud |

Supported Instructions
Each task is essentially a collection of instructions that are executed sequentially. The following instructions are supported:
script instruction to execute a script.
background_script instruction to execute a script in the background.
cache instruction to persist files between task runs.
artifacts instruction to store and expose files created via a task.
file instruction to create a file from an environment variable.
A script instruction executes commands via shell on Unix or batch on Windows. A script instruction can be named by adding a prefix, for example test_script or my_very_specific_build_step_script. Naming script instructions helps gather more granular information about task execution. Cirrus CI will use it in the future to auto-detect performance regressions.
Script commands can be specified as a single string value or a list of string values in a .cirrus.yml configuration file like in the example below:
```yaml
check_task:
  compile_script: gradle --parallel classes testClasses
  check_script:
    - echo "Here comes more than one script!"
    - printenv
    - gradle check
```
Note: Each script instruction is executed in a newly created process, therefore environment variables are not preserved between them.
Execution on Windows
When executed on Windows via batch, the Cirrus Agent will wrap each line of the script in a call so it's possible to fail fast upon the first line exiting with a non-zero exit code.
To avoid this \"syntactic sugar\" just create a script file and execute it.
A background_script instruction is exactly the same as a script instruction, except that Cirrus CI won't wait for the script to finish and will continue executing further instructions.
Background scripts can be useful when something needs to be executed in the background, for example a database or some emulators. Traditionally the same effect is achieved by appending & to a command, as in $: command &. The problem is that the logs from command will be mixed into the regular logs of the following commands. With background scripts, not only are the logs properly saved and displayed, but the command itself is also properly killed at the end of the task.
Here is an example of how a background_script instruction can be used to run an Android emulator:
```yaml
android_test_task:
  start_emulator_background_script: emulator -avd test -no-audio -no-window
  wait_for_emulator_to_boot_script: adb wait-for-device
  test_script: gradle test
```
A cache instruction allows you to persist a folder and reuse it during the next execution of the task. A cache instruction can be named the same way as a script instruction.
Here is an example:
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    reupload_on_changes: false # since there is a fingerprint script
    fingerprint_script:
      - echo $CIRRUS_OS
      - node --version
      - cat package-lock.json
    populate_script:
      - npm install
  test_script: npm run test
```
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    reupload_on_changes: false # since there is a fingerprint script
    fingerprint_script:
      - echo $CIRRUS_OS
      - node --version
      - cat package-lock.json
    populate_script:
      - npm install
  test_script: npm run test
```
Either a folder or a folders field (with a list of folder paths) is required; it tells the agent which folder paths to cache.
Folder paths should generally be relative to the working directory (e.g. node_modules), except when only a single folder is specified; in this case it can also be an absolute path (/usr/local/bundle).
Folder paths can contain a \"glob\" pattern to cache multiple files/folders within a working directory (e.g. **/node_modules will cache every node_modules folder within the working directory).
A fingerprint_script and fingerprint_key are optional fields that can specify either:
a script, the output of which will be hashed and used as a key for the given cache:
These two fields are mutually exclusive. By default the task name is used as a fingerprint value.
After the last script instruction for the task succeeds, Cirrus CI will calculate checksum of the cached folder (note that it's unrelated to fingerprint_script or fingerprint_key fields) and re-upload the cache if it finds any changes. To avoid a time-costly re-upload, remove volatile files from the cache (for example, in the last script instruction of a task).
populate_script is an optional field that can specify a script that will be executed to populate the cache. populate_script should create the folder if it doesn't exist before the cache instruction. If your dependencies are updated often, please pay attention to fingerprint_script and make sure it will produce different outputs for different versions of your dependency (ideally just print locked versions of dependencies).
reupload_on_changes is an optional field that specifies whether the Cirrus Agent should check if the contents of the cached folder have changed during task execution and re-upload the cache entry in case of any changes. If the reupload_on_changes option is not set explicitly, it defaults to false if fingerprint_script or fingerprint_key is present and true otherwise. The Cirrus Agent will detect additions, deletions and modifications of any files under the specified folder. All detected changes will be logged under the Upload '$CACHE_NAME' cache instruction for easier debugging of cache invalidations.
That means the only difference between the example above and the one below is that yarn install will always be executed in the example below, whereas in the example above it runs only when yarn.lock changes.
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
  install_script: yarn install
  test_script: yarn run test
```
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
    fingerprint_script: cat yarn.lock
  install_script: yarn install
  test_script: yarn run test
```
Caching for Pull Requests
Tasks for PRs upload caches to a separate caching namespace to not interfere with caches used by other tasks. But such PR tasks can read all caches even from the main caching namespace for a repository.
Scope of cached artifacts
Cache artifacts are shared between tasks, so two caches with the same name on e.g. Linux containers and macOS VMs will share the same set of files. This may introduce binary incompatibility between caches. To avoid that, add echo $CIRRUS_OS into fingerprint_script or use $CIRRUS_OS in fingerprint_key, which will distinguish caches based on OS.
Normally caches are uploaded at the end of the task execution. However, you can override the default behavior and upload them earlier.
To do this, use the upload_caches instruction, which uploads a list of caches passed to it once executed:
```yaml
test_task:
  container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
  upload_caches:
    - node_modules
  install_script: yarn install
  test_script: yarn run test
  pip_cache:
    folder: ~/.cache/pip
```
```yaml
test_task:
  arm_container:
    image: node:latest
  node_modules_cache:
    folder: node_modules
  upload_caches:
    - node_modules
  install_script: yarn install
  test_script: yarn run test
  pip_cache:
    folder: ~/.cache/pip
```
Note that pip cache won't be uploaded in this example: using upload_caches disables the default behavior where all caches are automatically uploaded at the end of the task, so if you want to upload pip cache too, you'll have to either:
extend the list of uploaded caches in the first upload_caches instruction
insert a second upload_caches instruction that specifically targets pip cache
An artifacts instruction allows you to store files and expose them in the UI for downloading later. An artifacts instruction can be named the same way as a script instruction and has only one required field, path, which accepts a glob pattern of files relative to $CIRRUS_WORKING_DIR to store. Right now, only storing files under the $CIRRUS_WORKING_DIR folder as artifacts is supported, with a total size limit of 1G for a free task and no limit on your own infrastructure.
In the example below, the Build and Test task produces two artifacts: binaries artifacts with all executables built during a successful task completion, and junit artifacts with all test reports regardless of the final task status (you can learn more about this in the next section describing execution behavior).
```yaml
build_and_test_task:
  # instructions to build and test
  binaries_artifacts:
    path: "build/*"
  always:
    junit_artifacts:
      path: "**/test-results/**.xml"
      format: junit
```
URLs to the artifacts

Latest build artifacts
It is possible to refer to the latest artifacts directly (artifacts of the latest successful build). Use the following link format to download the latest artifact of a particular task:
```
https://api.cirrus-ci.com/v1/artifact/github/<USER OR ORGANIZATION>/<REPOSITORY>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>/<PATH>
```
It is possible to also download an archive of all files within an artifact with the following link:
```
https://api.cirrus-ci.com/v1/artifact/github/<USER OR ORGANIZATION>/<REPOSITORY>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>.zip
```
By default, Cirrus looks up the latest successful build of the default branch for the repository but the branch name can be customized via ?branch=<BRANCH> query parameter.
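For example, to fetch the latest artifact archive built on a hypothetical branch named feature instead of the default branch, append the query parameter like so (placeholders as above):

```
https://api.cirrus-ci.com/v1/artifact/github/<USER OR ORGANIZATION>/<REPOSITORY>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>.zip?branch=feature
```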
Note that if several tasks are uploading artifacts with the same name then the ZIP archive from the above link will contain merged content of all artifacts. It's also possible to refer to an artifact of a particular task within a build by name:
```
https://api.cirrus-ci.com/v1/artifact/build/<CIRRUS_BUILD_ID>/<TASK NAME OR ALIAS>/<ARTIFACTS_NAME>.zip
```
It is also possible to download artifacts given a task id directly:
It's also possible to download a particular file of an artifact and not the whole archive by using <ARTIFACTS_NAME>/<PATH> instead of <ARTIFACTS_NAME>.zip.
By default, Cirrus CI will try to guess the mimetype of files in artifacts by looking at their extensions. In cases where artifacts don't have extensions, it's possible to explicitly set the Content-Type via the type field:
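A small sketch (the artifact name and file are illustrative) of an artifact without a file extension that declares its Content-Type explicitly:

```yaml
task:
  # hypothetical extensionless file whose mimetype can't be guessed
  changelog_artifacts:
    path: "CHANGELOG"
    type: text/plain
```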
Cirrus CI supports parsing artifacts in order to extract information that can be presented in the UI for a better user experience. Use the format field of an artifact instruction to specify artifact's format (mimetypes):
A file instruction allows you to create a file from either an environment variable or directly from the configuration file. It is especially useful in situations when the execution environment doesn't have a proper shell to use the echo ... >> ... syntax, for example within scratch Docker containers.
Here is an example of how to populate Docker config from an encrypted environment variable:
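A sketch of what that can look like, assuming an encrypted variable (named DOCKER_CONFIG_JSON here for illustration) holds the Docker config contents; the ENCRYPTED[...] placeholder stands in for a real encrypted value:

```yaml
task:
  env:
    # hypothetical encrypted variable containing the Docker config JSON
    DOCKER_CONFIG_JSON: ENCRYPTED[...]
  docker_config_file:
    path: /root/.docker/config.json
    variable_name: DOCKER_CONFIG_JSON
```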
You can also populate a file directly from the .cirrus.yml configuration file:
```yaml
task:
  git_config_file:
    path: /root/.gitconfig
    from_contents: |
      [user]
        name = John Doe
        email = john@example.com
```
"},{"location":"guide/writing-tasks/#execution-behavior-of-instructions","title":"Execution Behavior of Instructions","text":"
By default, Cirrus CI executes instructions one after another and stops the overall task execution on the first failure. Sometimes there might be situations when some scripts should always be executed or some debug information needs to be saved on a failure. For such situations the always and on_failure keywords can be used to group instructions.
```yaml
task:
  test_script: ./run_tests.sh
  on_failure:
    debug_script: ./print_additional_debug_info.sh
    cleanup_script: ./cleanup.sh # failure here will not trigger `on_failure` instruction above
  always:
    test_reports_script: ./print_test_reports.sh
```
In the example above, the print_additional_debug_info.sh script will be executed only on failures of test_script to output some additional debug information. print_test_reports.sh, on the other hand, will be executed on both successful and failed runs to print test reports (test reports are always useful!).
Sometimes a complex task might exceed the pre-defined timeout, and it might not be clear why. In this case, the on_timeout execution behavior, which has an extra time budget of 5 minutes, might be useful:
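A minimal sketch (the scripts are illustrative; timeout_in sets the task's regular timeout):

```yaml
task:
  timeout_in: 60m
  build_script: ./build.sh
  on_timeout:
    # runs with the extra 5-minute budget when the task times out,
    # e.g. to capture diagnostics explaining what got stuck
    stacktraces_script: ./dump_thread_stacktraces.sh
```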
Environment variables may also be set at the root level of .cirrus.yml. In that case, they will be merged with each task's individual environment variables, with the task-level variables always taking precedence. For example:
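A minimal sketch of such a merge (the paths are illustrative): the root-level PATH is overridden by the task-level one.

```yaml
env:
  PATH: /sdk/bin:${PATH}

task:
  container:
    image: alpine:latest
  env:
    PATH: /opt/bin:${PATH}
  path_script: echo $PATH
```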
Will output /opt/bin:/usr/local/bin:/usr/bin or similar, but will not include /sdk/bin because this root level setting is ignored.
Also some default environment variables are pre-defined:
| Name | Value / Description |
|---|---|
| CI | true |
| CIRRUS_CI | true |
| CI_NODE_INDEX | Index of the current task within CI_NODE_TOTAL tasks |
| CI_NODE_TOTAL | Total amount of unique tasks for a given CIRRUS_BUILD_ID build |
| CONTINUOUS_INTEGRATION | true |
| CIRRUS_API_CREATED | true if the current build was created through the API. |
| CIRRUS_BASE_BRANCH | Base branch name if current build was triggered by a PR. For example master |
| CIRRUS_BASE_SHA | Base SHA if current build was triggered by a PR |
| CIRRUS_BRANCH | Branch name. For example my-feature |
| CIRRUS_BUILD_ID | Unique build ID |
| CIRRUS_CHANGE_IN_REPO | Git SHA |
| CIRRUS_CHANGE_MESSAGE | Commit message or PR title and description, depending on trigger event (Non-PRs or PRs respectively). |
| CIRRUS_CHANGE_TITLE | First line of CIRRUS_CHANGE_MESSAGE |
| CIRRUS_CPU | Amount of CPUs requested by the task. CIRRUS_CPU value is an integer, rounded up for tasks that requested a non-integer amount of CPUs. |
| CIRRUS_CRON | Cron Build name configured in the repository settings if this build was triggered by Cron. For example, nightly. |
| CIRRUS_DEFAULT_BRANCH | Default repository branch name. For example master |
| CIRRUS_DOCKER_CONTEXT | Docker build's context directory to use for Dockerfile as a CI environment. Defaults to project's root directory. |
| CIRRUS_LAST_GREEN_BUILD_ID | The build id of the last successful build on the same branch at the time of the current build creation. |
| CIRRUS_LAST_GREEN_CHANGE | SHA corresponding to CIRRUS_LAST_GREEN_BUILD_ID (used in changesInclude and changesIncludeOnly functions). |
| CIRRUS_PR | PR number if current build was triggered by a PR. For example 239. |
| CIRRUS_PR_DRAFT | true if current build was triggered by a Draft PR. |
| CIRRUS_PR_TITLE | Title of a corresponding PR if any. |
| CIRRUS_PR_BODY | Body of a corresponding PR if any. |
| CIRRUS_PR_LABELS | Comma separated list of PR's labels if current build was triggered by a PR. |
| CIRRUS_TAG | Tag name if current build was triggered by a new tag. For example v1.0 |
| CIRRUS_OIDC_TOKEN | OpenID Connect Token issued by https://oidc.cirrus-ci.com with audience set to https://cirrus-ci.com/github/$CIRRUS_REPO_OWNER (can be changed via $CIRRUS_OIDC_TOKEN_AUDIENCE). Please refer to a dedicated section below for in-depth details. |
| CIRRUS_OS, OS | Host OS. Either linux, windows or darwin. |
| CIRRUS_TASK_NAME | Task name |
| CIRRUS_TASK_NAME_ALIAS | Task name alias if any. |
| CIRRUS_TASK_ID | Unique task ID |
| CIRRUS_RELEASE | GitHub Release id if current tag was created for a release. Handy for uploading release assets. |
| CIRRUS_REPO_CLONE_TOKEN | Temporary GitHub access token to perform a clone. |
| CIRRUS_REPO_NAME | Repository name. For example my-project |
| CIRRUS_REPO_OWNER | Repository owner (an organization or a user). For example my-organization |
| CIRRUS_REPO_FULL_NAME | Repository full name/slug. For example my-organization/my-project |
| CIRRUS_REPO_CLONE_URL | URL used for cloning. For example https://github.com/my-organization/my-project.git |
| CIRRUS_USER_COLLABORATOR | true if the user who initialized the build is already a contributor to the repository. false otherwise. |
| CIRRUS_USER_PERMISSION | admin, write, read or none. |
| CIRRUS_HTTP_CACHE_HOST | Host and port number on which the local HTTP cache can be accessed. |
| GITHUB_CHECK_SUITE_ID | Monotonically increasing id of a corresponding GitHub Check Suite which caused the Cirrus CI build. |
| CIRRUS_ENV | Path to a file; by writing to it you can set task-wide environment variables. |
| CIRRUS_ENV_SENSITIVE | Set to true to mask all variable values written to the CIRRUS_ENV file in the console output. |

Behavioral Environment Variables
And some environment variables can be set to control behavior of the Cirrus CI Agent:
| Name | Default Value | Description |
|---|---|---|
| CIRRUS_AGENT_VERSION | not set | Cirrus Agent version to use. If not set, the latest release. |
| CIRRUS_AGENT_EXPOSE_SCRIPTS_OUTPUTS | not set | If set, instructs Cirrus Agent to stream scripts outputs to the console as well as Cirrus API. Useful in case your Kubernetes cluster has logging collection enabled. |
| CIRRUS_CLONE_DEPTH | 0, which results in a full clone of a single branch | Clone depth. |
| CIRRUS_CLONE_SUBMODULES | false | Set to true to clone submodules recursively. |
| CIRRUS_LOG_TIMESTAMP | false | Instructs Cirrus Agent to prepend a timestamp to each line of logs. |
| CIRRUS_OIDC_TOKEN_AUDIENCE | not set | Allows overriding the aud claim for CIRRUS_OIDC_TOKEN. |
| CIRRUS_SHELL | sh on Linux/macOS/FreeBSD and cmd.exe on Windows | Shell that Cirrus CI uses to execute scripts. Set to direct to execute each script directly without wrapping the commands in a shell script. |
| CIRRUS_VOLUME | /tmp | Defines a path for a temporary volume to be mounted into instances running in a Kubernetes cluster. This volume is mounted into all additional containers and is persisted between steps of a pipe. |
| CIRRUS_WORKING_DIR | cirrus-ci-build folder inside the system's temporary folder | Working directory where Cirrus CI executes builds. |
| CIRRUS_ESCAPING_PROCESSES | not set | Set this variable to prevent the agent from terminating the processes spawned in each non-background instruction after that instruction ends. By default, the agent tries its best to garbage collect these processes and their standard input/output streams. It's generally better to use a Background Script Instruction instead of this variable to achieve the same effect. |
| CIRRUS_WINDOWS_ERROR_MODE | not set | Set this value to force all processes spawned by the agent to call the equivalent of SetErrorMode() with the provided value (for example, 0x8001) before beginning their execution. |
| CIRRUS_VAULT_URL | not set | Address of the Vault server expressed as a URL and port (for example, https://vault.example.com:8200/), see HashiCorp Vault Support. |
| CIRRUS_VAULT_NAMESPACE | not set | A Vault Enterprise Namespace to use when authenticating and reading secrets from Vault. |
| CIRRUS_VAULT_AUTH_PATH | jwt | Alternative auth method mount point, in case it was mounted to a non-default path. |
| CIRRUS_VAULT_ROLE | not set | |

Internals of OpenID Connect tokens
OpenID Connect is a very powerful mechanism that allows two independent systems to establish trust without sharing any secrets. At the core of OpenID Connect is a simple JWT token that is signed by a trusted party (in our case, Cirrus CI). The second system can then be configured to trust such CIRRUS_OIDC_TOKENs signed by Cirrus CI. For examples, please check Vault Integration, Google Cloud Integration and AWS Integration.
Once such external system receives a request authenticated with CIRRUS_OIDC_TOKEN it can verify the signature of the token via publicly available keys. Then it can extract claims from the token to make necessary assertions. Properly configuring assertions of such claims is crucial for secure integration with OIDC. Let's take a closer look at claims that are available through a payload of a CIRRUS_OIDC_TOKEN:
The above task will print out the payload of a CIRRUS_OIDC_TOKEN, which contains claims from the configuration that can be used for assertions.
```json
{
  // Reserved Claims https://openid.net/specs/draft-jones-json-web-token-07.html#rfc.section.4.1
  "iss": "https://oidc.cirrus-ci.com",
  "aud": "https://cirrus-ci.com/github/cirruslabs", // can be changed via $CIRRUS_OIDC_TOKEN_AUDIENCE
  "sub": "repo:github:cirruslabs/cirrus-ci-docs",
  "nbf": ...,
  "exp": ...,
  "iat": ...,
  "jti": "...",
  // Cirrus Added Claims
  "platform": "github", // Currently only GitHub is supported but more platforms will be added in the future
  "owner": "cirruslabs", // Unique organization or username on the platform
  "owner_id": "29414678", // Internal ID of the organization or user on the platform
  "repository": "cirrus-ci-docs", // Repository name
  "repository_visibility": "public", // either public or private
  "repository_id": "5730634941071360", // Internal Cirrus CI ID of the repository
  "build_id": "1234567890", // Internal Cirrus CI ID of the build. Same as $CIRRUS_BUILD_ID
  "branch": "fkorotkov-patch-2", // Git branch name. Same as $CIRRUS_BRANCH
  "change_in_repo": "e6e989d4792a678b697a9f17a787761bfefb52d0", // Git commit SHA. Same as $CIRRUS_CHANGE_IN_REPO
  "pr": "123", // Pull request number if a build was triggered by a PR. Same as $CIRRUS_PR
  "pr_draft": "false", // Whether the pull request is a draft. Same as $CIRRUS_PR_DRAFT
  "pr_labels": "", // Comma-separated list of labels of the pull request. Same as $CIRRUS_PR_LABELS
  "tag": "1.0.0", // Git tag name if a build was triggered by a tag creation. Same as $CIRRUS_TAG
  "task_id": "987654321", // Internal Cirrus CI ID of the task. Same as $CIRRUS_TASK_ID
  "task_name": "main", // Name of the task. Same as $CIRRUS_TASK_NAME
  "task_name_alias": "main", // Optional name alias of the task. Same as $CIRRUS_TASK_NAME_ALIAS
  "user_collaborator": "true", // Whether the user is a collaborator of the repository. Same as $CIRRUS_USER_COLLABORATOR
  "user_permission": "admin" // Permission level of the user in the repository. Same as $CIRRUS_USER_PERMISSION
}
```
Please use the above claims to configure assertions in your external system. For example, you can assert that only tokens for specific branches can retrieve secrets for deploying to production.
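As an illustration, a Vault JWT role that only issues production policies for tokens from a specific repository and branch could be sketched like this (a hypothetical example; the audience, claim values, and policy name are placeholders, not part of the official docs):

```json
{
  "role_type": "jwt",
  "bound_audiences": ["https://cirrus-ci.com/github/my-organization"],
  "bound_claims": {
    "repository": "my-repository",
    "branch": "master"
  },
  "user_claim": "sub",
  "token_policies": ["production-deploy"]
}
```

With such a role, a CIRRUS_OIDC_TOKEN minted for any other branch or repository would fail the `bound_claims` check and never receive the `production-deploy` policy.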
It is possible to add encrypted variables to a .cirrus.yml file. These variables are decrypted only in builds for commits and pull requests that are made by users with write permission or approved by them.
In order to encrypt a variable, go to the repository's settings page by clicking the settings icon on the repository's main page (for example, https://cirrus-ci.com/github/my-organization/my-repository) and follow the instructions.
Warning
Only users with WRITE permissions can add encrypted variables to a repository.
An encrypted variable will be presented in a form like ENCRYPTED[qwerty239abc], which can be safely committed to the .cirrus.yml file:
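For example, a hypothetical deployment secret might be used like this (the variable name, ID, and script are placeholders):

```yaml
env:
  DEPLOY_TOKEN: ENCRYPTED[qwerty239abc]

deploy_script: ./deploy.sh "$DEPLOY_TOKEN"
```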
Cirrus CI encrypts variables with a unique per-repository 256-bit encryption key, so forks and even repositories within the same organization cannot reuse them. qwerty239abc from the example above is NOT the content of your encrypted variable, it's just an internal ID. No one can brute-force your secrets from such an ID. In addition, Cirrus CI doesn't know the relation between an encrypted variable and the repository for which the encrypted variable was created.
Organization Level Encrypted Variables
Sometimes there might be secrets that are used in almost all repositories of an organization. For example, credentials to a compute service where tasks will be executed. In order to create such a sharable encrypted variable, go to the organization's settings page by clicking the settings icon on the organization's main page (for example, https://cirrus-ci.com/github/my-organization) and follow the instructions in the Organization Level Encrypted Variables section.
Encrypted Variable for Cloud Credentials
In case you use an integration with one of the supported computing services, the encrypted variable used to store the credentials that Cirrus uses to communicate with the computing service won't be decrypted when used in environment variables. These credentials have too many permissions for most cases; please create separate credentials with the minimum permissions needed for your specific case.
```yaml
gcp_credentials: SECURED[!qwerty]

env:
  CREDENTIALS: SECURED[!qwerty] # won't be decrypted in any case
```
Skipping Task in Forked Repository
In a forked repository the decryption of a variable fails, which causes the failure of any task depending on it. To avoid this by default, make the sensitive task conditional:
The owner of the forked repository can re-enable the task, if they have the required sensitive data, by encrypting the variable themselves and editing both the encrypted variable and the repo-owner condition in the .cirrus.yml file.
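A minimal sketch of such a guarded task (the condition, variable, and script are illustrative; adjust them to your repository):

```yaml
deploy_task:
  # only run where the encrypted variable can actually be decrypted
  only_if: $CIRRUS_REPO_OWNER == 'my-organization'
  env:
    DEPLOY_TOKEN: ENCRYPTED[qwerty239abc]
  script: ./deploy.sh
```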
In addition to using Cirrus CI for managing secrets, it is possible to retrieve secrets from HashiCorp Vault.
You will need to configure a JWT authentication method and point it to the Cirrus CI's OIDC discovery URL: https://oidc.cirrus-ci.com.
This ensures that the cryptographic JWT token (CIRRUS_OIDC_TOKEN) that each Cirrus CI task gets assigned will be verified by your Vault installation.
From the Cirrus CI's side, use the CIRRUS_VAULT_URL environment variable to point Cirrus Agent at your vault and configure other Vault-specific variables, if needed. Note that it's not required for CIRRUS_VAULT_URL to be publicly available since Cirrus CI can orchestrate tasks on your infrastructure. Only Cirrus Agent executing a task from within an execution environment needs access to your Vault.
Once done, you will be able to use the VAULT[path/to/secret selector] syntax to retrieve a version 2 secret, for example:
The path is exactly the one you are familiar with from invoking the Vault CLI (e.g. vault read ...), and the selector is simply a dot-delimited list of fields to query in the output.
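For instance, assuming a KV version 2 secret stored at secret/data/github with an api_token field (the path, field name, and selector layout are illustrative — check your secret's actual output structure):

```yaml
env:
  GITHUB_API_TOKEN: VAULT[secret/data/github data.api_token]
```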
Caching of Vault secrets
Note that all VAULT[...] invocations cache the retrieved secrets on a per-path basis by default. Caching happens within a single task execution and is not shared between several tasks using the same secret.
To disable caching, use VAULT_NOCACHE[...] instead of VAULT[...].
Mixing of VAULT[...] and VAULT_NOCACHE[...] on the same path
Using both VAULT[...] and VAULT_NOCACHE[...] on the same path is not recommended because the order in which these invocations are processed is not deterministic.
It is possible to configure invocations of re-occurring builds via the well-known Cron expressions. Cron builds can be configured on a repository's settings page (not in .cirrus.yml).
It's possible to configure several cron builds with unique names, which will be available via the CIRRUS_CRON environment variable. Each cron build should specify a branch to trigger new builds for and a cron expression compatible with Quartz. You can use this generator to generate/validate your expressions.
Note: Cron Builds are timed with the UTC timezone.
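Since the cron build's name is exposed via CIRRUS_CRON, a task can be restricted to a particular cron build. A sketch, assuming a cron build named nightly was configured in the settings (name and script are placeholders):

```yaml
nightly_test_task:
  only_if: $CIRRUS_CRON == 'nightly'
  script: ./run-extended-tests.sh
```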
Sometimes it's useful to run the same task against different software versions. Or run different batches of tests based on an environment variable. For cases like these, the matrix modifier comes in very handy. It's possible to use the matrix keyword only inside of a particular task to create multiple tasks based on the original one. Each new task is created from the original task by replacing the whole matrix YAML node with each of the matrix's children separately.
Let's check an example of a .cirrus.yml:
amd64:

```yaml
test_task:
  container:
    matrix:
      - image: node:latest
      - image: node:lts
  test_script: yarn run test
```

arm64:

```yaml
test_task:
  arm_container:
    matrix:
      - image: node:latest
      - image: node:lts
  test_script: yarn run test
```
Which will be expanded into:
amd64:

```yaml
test_task:
  container:
    image: node:latest
  test_script: yarn run test

test_task:
  container:
    image: node:lts
  test_script: yarn run test
```

arm64:

```yaml
test_task:
  arm_container:
    image: node:latest
  test_script: yarn run test

test_task:
  arm_container:
    image: node:lts
  test_script: yarn run test
```
Tip
The matrix modifier can be used multiple times within a task.
The matrix modification makes it easy to create some pretty complex testing scenarios like this:
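A sketch of such a scenario, combining a container matrix with an environment matrix (the images and commands are illustrative) — this single task definition expands into four tasks, one per image/command combination:

```yaml
task:
  container:
    matrix:
      - image: node:latest
      - image: node:lts
  env:
    matrix:
      COMMAND: test
      COMMAND: lint
  main_script: yarn run $COMMAND
```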
Sometimes it might be very handy to execute some tasks only after the successful execution of other tasks. For such cases it is possible to specify the names of the tasks that a particular task depends on. Use the depends_on keyword to define dependencies:
amd64:

```yaml
container:
  image: node:latest

lint_task:
  script: yarn run lint

test_task:
  script: yarn run test

publish_task:
  depends_on:
    - test
    - lint
  script: yarn run publish
```

arm64:

```yaml
arm_container:
  image: node:latest

lint_task:
  script: yarn run lint

test_task:
  script: yarn run test

publish_task:
  depends_on:
    - test
    - lint
  script: yarn run publish
```
Task Names and Aliases
It is possible to specify the task's name via the name field. The lint_task syntax is syntactic sugar that will be expanded into:
```yaml
task:
  name: lint
  ...
```
Names can also be pretty complex:
```yaml
task:
  name: Test Shard $TESTS_SPLIT
  env:
    matrix:
      TESTS_SPLIT: 1/3
      TESTS_SPLIT: 2/3
      TESTS_SPLIT: 3/3
  tests_script: ./.ci/tests.sh

deploy_task:
  only_if: $CIRRUS_BRANCH == 'master'
  depends_on:
    - Test Shard 1/3
    - Test Shard 2/3
    - Test Shard 3/3
  script: ./.ci/deploy.sh
  ...
```
Complex task names make it difficult to list and maintain all such task names in your depends_on field. To make it simpler, you can use the alias field to provide a short simplified name for several tasks to use in depends_on.
Here is a modified version of an example above that leverages the alias field:
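Based on the snippet above, it could look roughly like this (a sketch reconstructed from the earlier example, not necessarily the exact original):

```yaml
task:
  name: Test Shard $TESTS_SPLIT
  alias: test   # all three shards share this alias
  env:
    matrix:
      TESTS_SPLIT: 1/3
      TESTS_SPLIT: 2/3
      TESTS_SPLIT: 3/3
  tests_script: ./.ci/tests.sh

deploy_task:
  only_if: $CIRRUS_BRANCH == 'master'
  depends_on:
    - test   # replaces listing every "Test Shard N/3" name
  script: ./.ci/deploy.sh
```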
Some tasks are meant to be created only if a certain condition is met. And some tasks can be skipped in some cases. Cirrus CI supports the only_if and skip keywords in order to provide such flexibility:
The only_if keyword controls whether or not a task will be created. For example, you may want to publish only changes committed to the master branch.
```yaml
publish_task:
  only_if: $CIRRUS_BRANCH == 'master'
  script: yarn run publish
```
The skip keyword allows you to skip execution of a task and mark it as successful. For example, you may want to skip linting if no source files have changed since the last successful run.
```yaml
lint_task:
  skip: "!changesInclude('.cirrus.yml', '**.{js,ts}')"
  script: yarn run lint
```
Skip CI Completely
Just include [skip ci] or [skip cirrus] in the first line or last line of your commit message in order to skip CI execution for a commit completely.
If you push multiple commits at the same time, only the last commit message will be checked for [skip ci] or [ci skip].
If you open a PR, the PR title will be checked for [skip ci] or [ci skip] instead of the last commit message on the PR branch.
Currently only basic operators like ==, !=, =~, !=~, && , || and the unary ! are supported in only_if and skip expressions. Environment variables can also be used as usual.
Note that the =~ operator matches against multiline values (dotall mode) and looks for an exact occurrence of the regular expression, so don't forget to use .* around your term to match it at any position (for example, $CIRRUS_CHANGE_TITLE =~ '.*\[docs\].*').
Currently two functions are supported in the only_if and skip expressions:
The changesInclude function allows you to check which files were changed
changesIncludeOnly is a stricter version of changesInclude, i.e. it won't evaluate to true if there are changed files other than the ones covered by the patterns
These two functions behave differently for PR builds and regular builds:
For PR builds, functions check the list of files affected by the PR.
For regular builds, the CIRRUS_LAST_GREEN_CHANGE environment variable will be used to determine the list of files affected between CIRRUS_LAST_GREEN_CHANGE and CIRRUS_CHANGE_IN_REPO. In case CIRRUS_LAST_GREEN_CHANGE is not available (either it's a new branch or there were no passing builds before), the list of files affected by the commit associated with the CIRRUS_CHANGE_IN_REPO environment variable will be used instead.
The changesInclude function can be very useful for skipping some tasks when no changes to sources have been made since the last successful Cirrus CI build.
```yaml
lint_task:
  skip: "!changesInclude('.cirrus.yml', '**.{js,ts}')"
  script: yarn run lint
```
changesIncludeOnly function can be used to skip running a heavyweight task if only documentation was changed, for example:
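A minimal sketch of such a check (the task name, patterns, and script are illustrative):

```yaml
build_task:
  # skip the heavyweight build when only documentation changed
  skip: "changesIncludeOnly('docs/**', '*.md')"
  script: ./build.sh
```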
Auto-Cancellation of Tasks
Cirrus CI can automatically cancel tasks in case of new pushes to the same branch. By default, Cirrus CI auto-cancels all tasks for non-default branches (for most repositories the default is the master branch), but this behavior can be changed by specifying the auto_cancellation field:
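For example, to opt a task out of auto-cancellation (a minimal sketch; the script is a placeholder):

```yaml
task:
  auto_cancellation: false
  script: ./run-tests.sh
```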
It's possible to tell Cirrus CI that a certain task is stateful, and Cirrus CI will use a slightly different scheduling algorithm to minimize the chances of such tasks being interrupted. Stateful tasks are intended to use a low CPU count. Scheduling times for stateful tasks might be a bit longer than usual, especially for tasks with high CPU requirements.
By default, Cirrus CI marks a task as stateful if its name contains one of the following terms: deploy, push, publish, upload or release. You can also explicitly mark a task as stateful via the stateful field:
```yaml
task:
  name: Propagate to Production
  stateful: true
  ...
```
Sometimes tasks can play the role of sanity checks. For example, a task can check that your library is working with the latest nightly version of some dependency package. It would be great to be notified about such failures, but it's not necessary to fail the whole build when one occurs. Cirrus CI has the allow_failures keyword, which makes a task not affect the overall status of a build.
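A minimal sketch (the task name and script are illustrative):

```yaml
nightly_dependency_check_task:
  allow_failures: true
  script: ./check-nightly-deps.sh
```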
By default, a Cirrus CI task is automatically triggered when all its dependency tasks have finished successfully. Sometimes, though, it can be very handy to trigger some tasks manually, for example, to perform a deployment to staging for manual testing once all automated checks have succeeded. In order to change the default behavior, use the trigger_type field like this:
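For example, a manually triggered staging deployment might look like this (names and scripts are illustrative):

```yaml
deploy_staging_task:
  trigger_type: manual
  depends_on:
    - test
  script: ./deploy-staging.sh
```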
Some CI tasks perform external operations that must be executed one at a time. For example, parallel deploys to the same environment are usually a bad idea. In order to restrict parallel execution of a certain task within a repository, you can use execution_lock to specify a task's lock key, a unique string that will be used to make sure that any tasks with the same execution_lock string are executed one at a time. Here is an example of how to make sure deployments on a specific branch cannot run in parallel:
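A sketch of such a lock (the key composition and script are illustrative):

```yaml
deploy_task:
  only_if: $CIRRUS_BRANCH == 'master'
  # tasks sharing this lock key run one at a time
  execution_lock: $CIRRUS_REPO_FULL_NAME-$CIRRUS_BRANCH
  script: ./deploy.sh
```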
Similar to manual tasks, Cirrus CI can pause execution of tasks until the corresponding PR gets labeled. This can be particularly useful when you'd like to do an initial review before running all unit and integration tests on every supported platform. Use the required_pr_labels field to specify a list of labels a PR must have in order to trigger a task. Here is a simple example of a .cirrus.yml config that automatically runs a linting tool but requires the initial-review label to be present in order to run tests:
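A sketch of such a config (the label name and scripts are illustrative):

```yaml
lint_task:
  script: yarn run lint

test_task:
  required_pr_labels: initial-review
  script: yarn run test
```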
For most cases the regular caching mechanism, where Cirrus CI caches a folder, is more than enough. But modern build systems like Gradle, Bazel and Pants can take advantage of remote caching. Remote caching is when a build system uploads and downloads intermediate results of a build execution while the build itself is still executing.
The Cirrus CI agent starts a local caching server and exposes it via the CIRRUS_HTTP_CACHE_HOST environment variable. The caching server supports GET, POST, HEAD and DELETE requests to upload, download, check the presence of and delete artifacts.
Info
If port 12321 is available CIRRUS_HTTP_CACHE_HOST will be equal to localhost:12321.
For example running the following command:
```bash
curl -s -X POST --data-binary @myfolder.tar.gz http://$CIRRUS_HTTP_CACHE_HOST/name-key
```
...has the same effect as the following caching instruction:
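A sketch of what such a caching instruction could look like (the cache name and folder are illustrative; the exact equivalence depends on how your cache key is computed):

```yaml
myfolder_cache:
  folder: myfolder
```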
Sometimes one container is not enough to run a CI build. For example, your application might use a MySQL database as storage. In this case you most likely want a MySQL instance running for your tests.
One option here is to pre-install MySQL and use a background_script to start it. This approach has some inconveniences like the need to pre-install MySQL by building a custom Docker container.
For such use cases Cirrus CI allows you to run additional containers in parallel with the main container that executes a task. Each additional container is defined under the additional_containers keyword in .cirrus.yml. Each additional container should have a unique name and specify at least a container image.
Normally, you would also specify a port (or ports, if there are several) to instruct Cirrus CI to configure networking between the containers and to wait for the ports to be available before running the task. Additional containers do not inherit environment variables because they are started before the main task receives its environment variables.
In the example below we use an official MySQL Docker image that exposes the standard MySQL port (3306). Tests will be able to access MySQL instance via localhost:3306.
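Based on that description, the example could look roughly like this (a sketch; the main image, resource values, and MySQL credentials are illustrative):

```yaml
container:
  image: node:latest
  additional_containers:
    - name: mysql
      image: mysql:latest
      port: 3306
      env:
        MYSQL_ROOT_PASSWORD: root

test_script: yarn run test
```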
Additional containers can be very handy in many scenarios. Please check the Cirrus CI catalog of examples for more details.
Default Resources
By default, each additional container will get 0.5 CPU and 512Mi of memory. These values can be configured as usual via cpu and memory fields.
Port Mapping
It's also possible to map ports of additional containers by using <HOST_PORT>:<CONTAINER_PORT> format for the port field. For example, port: 80:8080 will map port 8080 of the container to be available on local port 80 within a task.
Note: don't use port mapping unless absolutely necessary. A legitimate use case is when you have several additional containers that start a service on the same port and there's no easy way to change that. Port mapping limits the number of places a container can be scheduled and will affect how fast such tasks are scheduled.
To specify multiple mappings use the ports field, instead of the port:
```yaml
ports:
  - 8080
  - 3306
```
Overriding Default Command
It's also possible to override the default CMD of an additional container via the command field:
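A sketch (the container and command shown are illustrative):

```yaml
additional_containers:
  - name: redis
    image: redis:latest
    port: 6379
    command: redis-server --appendonly yes
```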
Cirrus CI provides a way to embed a badge that can represent status of your builds into a ReadMe file or a website.
For example, this is a badge for cirruslabs/cirrus-ci-web repository that contains Cirrus CI's front end:
In order to embed such a badge into a "read-me" file or your website, just use a badge URL that looks like this:
```
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg
```
If you want a badge for a particular branch, use the ?branch=<BRANCH NAME> query parameter (at the end of the URL) like this:
```
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?branch=<BRANCH NAME>
```
By default, Cirrus picks the latest build in a final state for the repository, or for a particular branch if the branch parameter is specified. It's also possible to explicitly set a concrete build to use with the ?buildId=<BUILD ID> query parameter.
If you want a badge for a particular task within the latest finished build, use the ?task=<TASK NAME> query parameter (at the end of the URL) like this:
```
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?task=tests
```
You can even pick a specific script instruction within the task with an additional script=<SCRIPT NAME> parameter:
```
https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg?task=build&script=lint
```
Badges in Markdown
Here is how Cirrus CI's badge can be embedded in a Markdown file:
```markdown
[![Build Status](https://api.cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>.svg)](https://cirrus-ci.com/github/<USER OR ORGANIZATION>/<REPOSITORY>)
```
```
Subject: Platform Security Risk

HOW TO EXPLOIT

Give exact details so our team can replicate it.

OTHER INFORMATION

If anything else needs to be said, put it here.
```
Please be patient. You will get an email back soon.
The best way to ask general questions about particular use cases is to email our support team at support+ci@cirruslabs.org. Our support team tries its best to respond ASAP, but there is no guarantee on response time unless your organization enrolls in Priority Support.

If you have a feature request or have noticed a lack of documentation, please feel free to create a GitHub issue. Our support team will answer it by replying to the issue or by updating the documentation.
In addition to the general support, we provide a Priority Support option with guaranteed response times. Most importantly, we'll be doing regular check-ins to make sure the roadmap for Cirrus CI and other services/software under the cirruslabs organization is aligned with your company's needs. You'll be helping to shape the future of software developed by Cirrus Labs!
| Severity | Support Impact | First Response Time SLA | Hours | How to Submit |
|----------|----------------|-------------------------|-------|---------------|
| 1 | Emergency (service is unavailable or completely unusable). | 30 minutes | 24x7 | Please use urgent email address. |
| 2 | Highly Degraded (important features unavailable or extremely slow; no acceptable workaround). | 4 hours | 24x5 | Please use priority email address. |
| 3 | Medium Impact. | 8 hours | 24x5 | Please use priority email address. |
| 4 | Low Impact. | 24 hours | 24x5 | Please use regular support email address. Make sure to send the email from your corporate email. |

24x5 means the period of time from 9AM on Monday till 5PM on Friday in the EST timezone.

Support Impact Definitions
- Severity 1 - Cirrus CI or other services are unavailable or completely unusable. An urgent issue can be filed and our On-Call Support Engineer will respond within 30 minutes. Example: Cirrus CI showing 502 errors for all users.
- Severity 2 - Cirrus CI or other services are Highly Degraded. Significant Business Impact. Important Cirrus CI features are unavailable or extremely slowed, with no acceptable workaround.
- Severity 3 - Something is preventing normal service operation. Some Business Impact. Important features of Cirrus CI or other services are unavailable or somewhat slowed, but a workaround is available. Cirrus CI use has a minor loss of operational functionality.
- Severity 4 - Questions or Clarifications around features or documentation. Minimal or no Business Impact. Information, an enhancement, or documentation clarification is requested, but there is no impact on the operation of Cirrus CI or other services/software.
As a company grows, its engineering teams tend to accumulate knowledge of operating and working with Cirrus CI and other services/software provided by Cirrus Labs, so less effort is needed on our side to support each new seat. On the other hand, Cirrus CI allows you to bring your own infrastructure, which increases the complexity of support. We reflected these challenges in a tiered pricing model based on the number of seats and the type of infrastructure used:
Note that the Priority Support Subscription requires a purchase of a minimum of 20 seats, even if some of them will be unused.

What is a seat?

A seat is a user that initiates CI builds by pushing commits and/or creating pull requests in a private repository. It can be a real person or a bot. If you are using Cron Builds or creating builds through Cirrus's API, that will be counted as an additional seat (like a bot).

If you'd like to get priority support for your public repositories, then the number of seats will be equal to the number of members in your organization.
Please email sales@cirruslabs.org so we can get a support contract in addition to the TOC. The contract will contain a special priority email address for your organization and other helpful information. The sales team will also schedule a check-in meeting to make sure your engineering team is set up for success and the Cirrus Labs roadmap aligns with your needs.