From f69167e824f54b3e351e21d39173d33c7194e1c6 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Fri, 11 Oct 2024 06:13:45 +0000 Subject: [PATCH] Deployed c88db3ee to dev with MkDocs 1.5.3 and mike 2.0.0 --- dev/__pycache__/docs_hooks.cpython-312.pyc | Bin 928 -> 928 bytes dev/search/search_index.json | 2 +- dev/sitemap.xml | 70 ++++++++++----------- dev/sitemap.xml.gz | Bin 592 -> 592 bytes dev/tutorials/python/index.html | 14 ++--- 5 files changed, 43 insertions(+), 43 deletions(-) diff --git a/dev/__pycache__/docs_hooks.cpython-312.pyc b/dev/__pycache__/docs_hooks.cpython-312.pyc index 542d92b0f93b098b8a45d98a837ca846c541360c..f1206ca78822cb129095d15c6d7a45e14b9a375f 100644 GIT binary patch delta 20 acmZ3$zJQ(kG%qg~0}yZ?;@HSNl^FmqK?HCB delta 20 acmZ3$zJQ(kG%qg~0}vQ1v2Ns^$_xN4m;?6! diff --git a/dev/search/search_index.json b/dev/search/search_index.json index bde37f95e..d6af82b46 100644 --- a/dev/search/search_index.json +++ b/dev/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Getting Started","text":"

Pixi is a package management tool for developers. It allows developers to install libraries and applications in a reproducible way. Pixi works cross-platform on Windows, macOS, and Linux.

"},{"location":"#installation","title":"Installation","text":"

To install pixi, you can run the following command in your terminal:

Linux & macOSWindows
curl -fsSL https://pixi.sh/install.sh | bash\n

The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to ~/.pixi/bin. If this directory does not already exist, the script will create it.

The script will also update your ~/.bash_profile to include ~/.pixi/bin in your PATH, allowing you to invoke the pixi command from anywhere.

PowerShell:

iwr -useb https://pixi.sh/install.ps1 | iex\n
winget:
winget install prefix-dev.pixi\n
The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to LocalAppData/pixi/bin. If this directory does not already exist, the script will create it.

The command will also automatically add LocalAppData/pixi/bin to your path allowing you to invoke pixi from anywhere.

Tip

You might need to restart your terminal or source your shell for the changes to take effect.

You can find more options for the installation script here.

"},{"location":"#autocompletion","title":"Autocompletion","text":"

To get autocompletion follow the instructions for your shell. Afterwards, restart the shell or source the shell config file.

"},{"location":"#bash-default-on-most-linux-systems","title":"Bash (default on most Linux systems)","text":"
echo 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\n
"},{"location":"#zsh-default-on-macos","title":"Zsh (default on macOS)","text":"
echo 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\n
"},{"location":"#powershell-pre-installed-on-all-windows-systems","title":"PowerShell (pre-installed on all Windows systems)","text":"
Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n

Failure because no profile file exists

Make sure your profile file exists; if it doesn't, create it with:

New-Item -Path $PROFILE -ItemType File -Force\n

"},{"location":"#fish","title":"Fish","text":"
echo 'pixi completion --shell fish | source' > ~/.config/fish/completions/pixi.fish\n
"},{"location":"#nushell","title":"Nushell","text":"

Add the following to the end of your Nushell env file (find it by running $nu.env-path in Nushell):

mkdir ~/.cache/pixi\npixi completion --shell nushell | save -f ~/.cache/pixi/completions.nu\n

And add the following to the end of your Nushell configuration (find it by running $nu.config-path):

use ~/.cache/pixi/completions.nu *\n
"},{"location":"#elvish","title":"Elvish","text":"
echo 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n
"},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"

Although we recommend installing pixi through the above method, we also provide additional installation methods.

"},{"location":"#homebrew","title":"Homebrew","text":"

Pixi is available via homebrew. To install pixi via homebrew simply run:

brew install pixi\n
"},{"location":"#windows-installer","title":"Windows installer","text":"

We provide an msi installer on our GitHub releases page. The installer will download pixi and add it to the path.

"},{"location":"#install-from-source","title":"Install from source","text":"

pixi is 100% written in Rust, and therefore it can be installed, built and tested with cargo. To start using pixi from a source build run:

cargo install --locked --git https://github.com/prefix-dev/pixi.git pixi\n

We don't publish to crates.io anymore, so you need to install it from the repository. The reason for this is that we depend on some unpublished crates, which prevents us from publishing to crates.io.

or when you want to make changes use:

cargo build\ncargo test\n

If you have any issues building because of the dependency on rattler, check out its compile steps.

"},{"location":"#installer-script-options","title":"Installer script options","text":"Linux & macOSWindows

The installation script has several options that can be manipulated through environment variables.

| Variable | Description | Default Value |
| --- | --- | --- |
| PIXI_VERSION | The version of pixi to install; can be used to up- or downgrade. | latest |
| PIXI_HOME | The location of the binary folder. | $HOME/.pixi |
| PIXI_ARCH | The architecture the pixi version was built for. | uname -m |
| PIXI_NO_PATH_UPDATE | If set, the $PATH will not be updated to add pixi to it. | |
| TMP_DIR | The temporary directory the script uses to download to and unpack the binary from. | /tmp |

For example, on Apple Silicon, you can force the installation of the x86 version:

curl -fsSL https://pixi.sh/install.sh | PIXI_ARCH=x86_64 bash\n
Or set the version
curl -fsSL https://pixi.sh/install.sh | PIXI_VERSION=v0.18.0 bash\n

The installation script has several options that can be manipulated through environment variables.

| Variable | Environment variable | Description | Default Value |
| --- | --- | --- | --- |
| PixiVersion | PIXI_VERSION | The version of pixi to install; can be used to up- or downgrade. | latest |
| PixiHome | PIXI_HOME | The location of the installation. | $Env:USERPROFILE\\.pixi |
| NoPathUpdate | | If set, the $PATH will not be updated to add pixi to it. | |

For example, set the version using:

iwr -useb https://pixi.sh/install.ps1 | iex -Args \"-PixiVersion v0.18.0\"\n
"},{"location":"#update","title":"Update","text":"

Updating is as simple as installing: rerunning the installation script gets you the latest version. You can also use the built-in command:

pixi self-update\n
Or get a specific pixi version using:
pixi self-update --version x.y.z\n

Note

If you've used a package manager like brew, mamba, conda, paru, etc. to install pixi, it's preferable to use that tool's built-in update mechanism, e.g. brew upgrade pixi.

"},{"location":"#uninstall","title":"Uninstall","text":"

To uninstall pixi from your system, simply remove the binary.

Linux & macOSWindows
rm ~/.pixi/bin/pixi\n
$PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n

After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi directory and remove the directory from your path.

"},{"location":"Community/","title":"Community","text":"

When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:

[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n

Customize your badge

To further customize the look and feel of your badge, you can add &style=<custom-style> at the end of the URL. See the documentation on shields.io for more info.

"},{"location":"Community/#built-using-pixi","title":"Built using Pixi","text":" "},{"location":"FAQ/","title":"Frequently asked questions","text":""},{"location":"FAQ/#what-is-the-difference-with-conda-mamba-poetry-pip","title":"What is the difference with conda, mamba, poetry, pip","text":"
| Tool | Installs python | Builds packages | Runs predefined tasks | Has lock files builtin | Fast | Use without python |
| --- | --- | --- | --- | --- | --- | --- |
| Conda | \u2705 | \u274c | \u274c | \u274c | \u274c | \u274c |
| Mamba | \u2705 | \u274c | \u274c | \u274c | \u2705 | \u2705 |
| Pip | \u274c | \u2705 | \u274c | \u274c | \u274c | \u274c |
| Pixi | \u2705 | \ud83d\udea7 | \u2705 | \u2705 | \u2705 | \u2705 |
| Poetry | \u274c | \u2705 | \u274c | \u2705 | \u274c | \u274c |
"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi","text":"

Starting with the name prefix, we iterated until we had a name that was easy to pronounce, spell, and remember, and that wasn't already used by a CLI tool, unlike px, pex, pax, etc. We think it sparks curiosity and fun; if you don't agree, we're sorry, but you can always alias it to whatever you like.

Linux & macOSWindows
alias not_pixi=\"pixi\"\n

PowerShell:

New-Alias -Name not_pixi -Value pixi\n

"},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build","text":"

TL;DR: It's coming, we promise!

pixi build is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool, which we're creating with rattler-build and which will be used as a library in pixi.

"},{"location":"basic_usage/","title":"Basic usage","text":"

Ensure you've got pixi set up. If running pixi doesn't show the help, see getting started.

pixi\n

Initialize a new project and navigate to the project directory.

pixi init pixi-hello-world\ncd pixi-hello-world\n

Add the dependencies you would like to use.

pixi add python\n

Create a file named hello_world.py in the directory and paste the following code into the file.

hello_world.py
def hello():\n    print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n    hello()\n

Run the code inside the environment.

pixi run python hello_world.py\n

You can also put this run command in a task.

pixi task add hello python hello_world.py\n

After adding the task, you can run the task using its name.

pixi run hello\n

Use the shell command to activate the environment and start a new shell in there.

pixi shell\npython\nexit()\n

You've just learned the basic features of pixi:

  1. initializing a project
  2. adding a dependency.
  3. adding a task, and executing it.
  4. running a program.

Feel free to play around with what you just learned like adding more tasks, dependencies or code.

Happy coding!

"},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"

Use pixi to install tools on your machine.

Some notable examples:

# Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
"},{"location":"basic_usage/#using-the-no-activation-option","title":"Using the --no-activation option","text":"

When installing packages globally, you can use the --no-activation option to prevent the insertion of environment activation code into the installed executable scripts. This means that when you run the installed executable, it won't modify the PATH or CONDA_PREFIX environment variables beforehand.

Example:

# Install a package without inserting activation code\npixi global install ruff --no-activation\n

This option can be useful in scenarios where you want more control over the environment activation or if you're using the installed executables in contexts where automatic activation might interfere with other processes.

"},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"

You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.

- uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n

See the GitHub Actions documentation for more details.

"},{"location":"vision/","title":"Vision","text":"

We created pixi because we want to have a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.

"},{"location":"vision/#pixi-values","title":"Pixi values","text":"

We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:

  1. Fast. We want to have a fast package manager, that is able to solve the environment in a few seconds.
  2. User Friendly. We want a package manager that puts user friendliness front and center, providing easy, accessible, and intuitive commands that follow the principle of least surprise.
  3. Isolated Environment. We want to have isolated environments, that are reproducible and easy to share. Ideally, it should run on all common platforms. The Conda packaging system provides an excellent base for this.
  4. Single Tool. We want to integrate most common uses when working on a development project with Pixi, so it should support at least dependency management, command management, building and uploading packages. You should not need to reach for another external tool for this.
  5. Fun. It should be fun to use pixi and not cause frustrations, you should not need to think about it a lot and it should generally just get out of your way.
"},{"location":"vision/#conda","title":"Conda","text":"

We are building on top of the conda packaging ecosystem; this means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community-maintained, very open to contributions, widely used in data science, scientific computing, robotics, and other fields, and has a proven track record.

"},{"location":"vision/#target-languages","title":"Target languages","text":"

Essentially, we are language agnostic: we target any language that can be installed with conda, including C++, Python, Rust, Zig, etc. But we do believe the Python ecosystem can benefit from a good package manager that is based on conda, so we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects, e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system-level packages.

"},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"

You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.

Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n  <HOST>  The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n      --token <TOKEN>              The token to use (for authentication with prefix.dev)\n      --username <USERNAME>        The username to use (for basic HTTP authentication)\n      --password <PASSWORD>        The password to use (for basic HTTP authentication)\n      --conda-token <CONDA_TOKEN>  The token to use on anaconda.org / quetz authentication\n  -v, --verbose...                 More output per occurrence\n  -q, --quiet...                   Less output per occurrence\n  -h, --help                       Print help\n

The different options are \"token\", \"conda-token\" and \"username + password\".

The token variant implements a standard \"Bearer Token\" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authorization: Bearer <TOKEN>.

The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/....

The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/.... This authentication method can be configured quite easily with an NGINX or Apache reverse proxy and is thus commonly used in self-hosted systems.
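To make the three options concrete, here is a minimal sketch of the request shapes they produce. This is illustrative only, not pixi's implementation, and the host, token, and path values are made up:

```python
# Illustrative request shapes for the three auth methods; the
# host/token/path values are hypothetical, not pixi internals.

def bearer_header(token):
    # "token": sent with every request as a standard Authorization header.
    return {"Authorization": f"Bearer {token}"}

def conda_token_url(host, token, path):
    # "conda-token": the token is embedded in the URL path after /t/.
    return f"https://{host}/t/{token}/{path}"

def basic_auth_url(host, user, password):
    # "username + password": Basic HTTP auth, the equivalent of
    # putting user:password in front of the host.
    return f"https://{user}:{password}@{host}/"

print(bearer_header("SECRET")["Authorization"])  # Bearer SECRET
print(conda_token_url("conda.anaconda.org", "SECRET", "conda-forge/linux-64/repodata.json"))
print(basic_auth_url("myserver.com", "user", "password"))
```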

"},{"location":"advanced/authentication/#examples","title":"Examples","text":"

Login to prefix.dev:

pixi auth login prefix.dev --token pfx_jj8WDzvnuTHEGdAhwRZMC1Ag8gSto8\n

Login to anaconda.org:

pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n

Login to a basic HTTP secured server:

pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"

The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.

On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access program that comes pre-installed on macOS. Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

On Linux, one can use GNOME Keyring (or just Keyring) to access credentials that are securely stored by libsecret. Searching for rattler should list all the credentials stored by pixi and other rattler-based programs.

"},{"location":"advanced/authentication/#fallback-storage","title":"Fallback storage","text":"

If you run on a server with none of the aforementioned keychains available, then pixi falls back to store the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json and contains the credentials.

"},{"location":"advanced/authentication/#override-the-authentication-storage","title":"Override the authentication storage","text":"

You can use the RATTLER_AUTH_FILE environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.

E.g.

export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n

The JSON file should follow this format:

{\n    \"*.prefix.dev\": {\n        \"BearerToken\": \"your_token\"\n    },\n    \"otherhost.com\": {\n        \"BasicHTTP\": {\n            \"username\": \"your_username\",\n            \"password\": \"your_password\"\n        }\n    },\n    \"conda.anaconda.org\": {\n        \"CondaToken\": \"your_token\"\n    }\n}\n

Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev also matches repo.prefix.dev).
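The wildcard behavior can be sketched with glob-style matching. This is illustrative only; pixi's real matcher lives in rattler and may differ in edge cases:

```python
# Glob-style host matching, illustrating the wildcard note above.
# This mirrors the documented behavior, not rattler's exact code.
from fnmatch import fnmatch

def host_matches(pattern, host):
    return fnmatch(host, pattern)

print(host_matches("*.prefix.dev", "repo.prefix.dev"))          # True
print(host_matches("conda.anaconda.org", "conda.anaconda.org")) # True
```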

Lastly, you can set the authentication override file in the global configuration file.

"},{"location":"advanced/authentication/#pypi-authentication","title":"PyPI authentication","text":"

Currently, we support the following methods for authenticating against PyPI:

  1. keyring authentication.
  2. .netrc file authentication.

We want to add more methods in the future, so if you have a specific method you would like to see, please let us know.

"},{"location":"advanced/authentication/#keyring-authentication","title":"Keyring authentication","text":"

Currently, pixi supports the uv method of authentication through the Python keyring library. To enable this, use the CLI flag --pypi-keyring-provider, which can be set to either subprocess (activated) or disabled.

# From an existing pixi project\npixi install --pypi-keyring-provider subprocess\n

This option can also be set in the global configuration file under pypi-config.

"},{"location":"advanced/authentication/#installing-keyring","title":"Installing keyring","text":"

To install keyring you can use pixi global install:

Either use:

pixi global install keyring\n
GCP and other backends

The downside of this method is that, because you cannot inject into a pixi global environment just yet, installing different keyring backends is not possible; only the default keyring backend can be used. Give the issue a \ud83d\udc4d up if you would like to see inject as a feature.

Or alternatively, you can install keyring using pipx:

# Install pipx if you haven't already\npixi global install pipx\npipx install keyring\n\n# For Google Artifact Registry, also install and initialize its keyring backend.\n# Inject this into the pipx environment\npipx inject keyring keyrings.google-artifactregistry-auth --index-url https://pypi.org/simple\ngcloud auth login\n
"},{"location":"advanced/authentication/#using-keyring-with-basic-auth","title":"Using keyring with Basic Auth","text":"

Use keyring to store your credentials, e.g.:

keyring set https://my-index/simple your_username\n# prompt will appear for your password\n
"},{"location":"advanced/authentication/#configuration","title":"Configuration","text":"

Make sure to include username@ in the URL of the registry. An example of this would be:

[pypi-options]\nindex-url = \"https://username@custom-registry.com/simple\"\n
"},{"location":"advanced/authentication/#gcp","title":"GCP","text":"

For Google Artifact Registry, you can use the Google Cloud SDK to authenticate. Make sure to have run gcloud auth login before using pixi. Another thing to note is that you need to add oauth2accesstoken to the URL of the registry. An example of this would be:

"},{"location":"advanced/authentication/#configuration_1","title":"Configuration","text":"
# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://oauth2accesstoken@<location>-python.pkg.dev/<project>/<repository>/simple\"]\n

Note

Include the /simple at the end, and replace <location>, <project>, and <repository> with your own values.

To find this URL more easily, you can use the gcloud command:

gcloud artifacts print-settings python --project=<project> --repository=<repository> --location=<location>\n
"},{"location":"advanced/authentication/#azure-devops","title":"Azure DevOps","text":"

Similarly for Azure DevOps, you can use the Azure keyring backend for authentication. The backend, along with installation instructions, can be found at keyring.artifacts.

After following the instructions and making sure that keyring works correctly, you can use the following configuration:

"},{"location":"advanced/authentication/#configuration_2","title":"Configuration","text":"

# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://VssSessionToken@pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/pypi/simple/\"]\n
This should allow for getting packages from the Azure DevOps artifact registry.

"},{"location":"advanced/authentication/#installing-your-environment","title":"Installing your environment","text":"

To actually install, either configure your global config or use the flag:

pixi install --pypi-keyring-provider subprocess\n

"},{"location":"advanced/authentication/#netrc-file","title":".netrc file","text":"

pixi allows you to access private registries securely by authenticating with credentials stored in a .netrc file.

In the .netrc file, you store authentication details like this:

machine registry-name\nlogin admin\npassword admin\n
For more details, you can access the .netrc docs.
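To see what a client reads from the entry above, here is a small sketch using Python's standard netrc parser; the machine name and credentials are the made-up values from the example:

```python
# Parse the example .netrc entry with the stdlib netrc module to show
# what a client would see. The entry content is the example above.
import netrc
import os
import tempfile

def read_netrc_entry(content, machine):
    # Write the entry to a temp file and parse it.
    with tempfile.NamedTemporaryFile("w", suffix=".netrc", delete=False) as f:
        f.write(content)
        path = f.name
    try:
        login, _account, password = netrc.netrc(path).authenticators(machine)
        return login, password
    finally:
        os.unlink(path)

entry = "machine registry-name\nlogin admin\npassword admin\n"
print(read_netrc_entry(entry, "registry-name"))  # ('admin', 'admin')
```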

"},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"

All logic that decides which dependencies can be installed from which channel is encoded in the instructions we give the solver.

The actual code for this is in the rattler_solve crate. This might, however, be hard to read, so this document continues with simplified flow charts.

"},{"location":"advanced/channel_priority/#channel-specific-dependencies","title":"Channel specific dependencies","text":"

When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.

[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex example, the solver will understand that the package is only available in my-channel and will not look for it in conda-forge.

The flowchart of the logic that excludes all other channels:

flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Channel Specific Dependency?}\n    C -->|Yes| D[Exclude All Other Channels for This Package]\n    C -->|No| E{Any Other Dependencies?}\n    E -->|Yes| B\n    E -->|No| F[End]\n    D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"

Channel priority is dictated by the order in the project.channels array, where the first channel is the highest priority. For instance:

[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge, the solver will not look for it in my-channel and your-channel, because pixi tells the solver they are excluded. If the package is not found in conda-forge, the solver will look for it in my-channel; if it is found there, pixi tells the solver to exclude your-channel for this package. This diagram explains the logic:
flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Loop Over Channels}\n    C --> D{Package in This Channel?}\n    D -->|No| C\n    D -->|Yes| E{\"This the first channel\n     for this package?\"}\n    E -->|Yes| F[Include Package in Candidates]\n    E -->|No| G[Exclude Package from Candidates]\n    F --> H{Any Other Channels?}\n    G --> H\n    H -->|Yes| C\n    H -->|No| I{Any Other Dependencies?}\n    I -->|No| J[End]\n    I -->|Yes| B

This method ensures the solver only adds a package to the candidates if it's found in the highest priority channel available. If you have 10 channels and the package is found in the 5th channel it will exclude the next 5 channels from the candidates if they also contain the package.
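That rule can be sketched in a few lines. This is a simplified illustration of the described behavior, not rattler_solve's actual code, and the channel names and package availability are hypothetical:

```python
# Simplified sketch of channel priority: a package is taken as a
# candidate only from the first (highest-priority) channel that has it.
def candidate_channel(channels, availability, package):
    for channel in channels:  # ordered highest priority first
        if package in availability.get(channel, ()):
            return channel  # every later channel is excluded for this package
    return None

# Hypothetical setup: 10 channels, "foo" available from the 5th onward.
channels = [f"channel-{i}" for i in range(1, 11)]
availability = {f"channel-{i}": {"foo"} for i in range(5, 11)}
print(candidate_channel(channels, availability, "foo"))  # channel-5
```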

"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"

A common use case is to use pytorch with nvidia drivers, while also needing the conda-forge channel for the main dependencies.

[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0 channel, which is actually only the cuda package.

Then it will get all packages from the nvidia channel, which is a little more, and some packages overlap between the nvidia and conda-forge channels, like the cuda-cudart package, which will now only be retrieved from the nvidia channel because of the priority logic.

Then it will get the packages from the conda-forge channel, which is the main channel for the dependencies.

But the user only wants the pytorch packages from the pytorch channel, which is why pytorch is added last and the dependencies are added as channel specific dependencies.

We don't define the pytorch channel before conda-forge because we want to get as much as possible from conda-forge, as the pytorch channel does not always ship the best versions of all packages.

For example, it also ships the ffmpeg package, but only an old version that doesn't work with newer pytorch versions. This would break the installation if we skipped the conda-forge channel for ffmpeg through the priority logic.

"},{"location":"advanced/channel_priority/#force-a-specific-channel-priority","title":"Force a specific channel priority","text":"

If you want to force a specific priority for a channel, you can use the priority (int) key in the channel definition. The higher the number, the higher the priority. Unspecified priorities are set to 0, but the index in the array still counts: among channels with equal priority, the first in the list has the highest priority.

This priority definition is mostly important for multiple environments with different channel priorities, as by default feature channels are prepended to the project channels.

[project]\nname = \"test_channel_priority\"\nplatforms = [\"linux-64\", \"osx-64\", \"win-64\", \"osx-arm64\"]\nchannels = [\"conda-forge\"]\n\n[feature.a]\nchannels = [\"nvidia\"]\n\n[feature.b]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = 1}]\n\n[feature.c]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = -1}]\n\n[environments]\na = [\"a\"]\nb = [\"b\"]\nc = [\"c\"]\n
This example creates 4 environments, a, b, c, and the default environment, which will have the following channel order:

| Environment | Resulting channel order |
| --- | --- |
| default | conda-forge |
| a | nvidia, conda-forge |
| b | nvidia, pytorch, conda-forge |
| c | pytorch, conda-forge, nvidia |

Check priority result with pixi info

Using pixi info you can check the priority of the channels in the environment.

pixi info\nEnvironments\n------------\n       Environment: default\n          Features: default\n          Channels: conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: a\n          Features: a, default\n          Channels: nvidia, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: b\n          Features: b, default\n          Channels: nvidia, pytorch, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: c\n          Features: c, default\n          Channels: pytorch, conda-forge, nvidia\nDependency count: 0\nTarget platforms: linux-64\n
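The resulting orders can be reproduced with a short sketch, under the stated assumption that feature channels are prepended to project channels and then stably sorted by explicit priority (higher first, unspecified = 0). This is illustrative, not pixi's actual code:

```python
# Sketch of channel ordering: feature channels prepended to project
# channels, then stably sorted by explicit priority (default 0).
def channel_order(feature_channels, project_channels):
    combined = list(feature_channels) + list(project_channels)
    with_priority = [(c, 0) if isinstance(c, str) else c for c in combined]
    with_priority.sort(key=lambda item: -item[1])  # stable: ties keep order
    return [name for name, _ in with_priority]

project = ["conda-forge"]
print(channel_order(["nvidia"], project))                  # feature a
print(channel_order(["pytorch", ("nvidia", 1)], project))  # feature b
print(channel_order(["pytorch", ("nvidia", -1)], project)) # feature c
```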

"},{"location":"advanced/explain_info_command/","title":"Info command","text":"

pixi info prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json format using the --json flag, which can be useful for programmatically reading it.

Running pixi info in the pixi repo
\u279c pixi info\n      Pixi version: 0.13.0\n          Platform: linux-64\n  Virtual packages: __unix=0=0\n                  : __linux=6.5.12=0\n                  : __glibc=2.36=0\n                  : __cuda=12.3=0\n                  : __archspec=1=x86_64\n         Cache dir: /home/user/.cache/rattler/cache\n      Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n           Version: 0.13.0\n     Manifest file: /home/user/development/pixi/pixi.toml\n      Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n          Features: default\n          Channels: conda-forge\n  Dependency count: 10\n      Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n  Target platforms: linux-64, osx-arm64, win-64, osx-64\n             Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"

The first part of the info output is information that is always available and tells you what pixi can read on your machine.

"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"

This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.

"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"

The virtual packages that pixi can find on your machine.

In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that get installed; instead, they are used during the solve step to determine whether a package can be installed on the machine. A simple example: when a package requires CUDA drivers to be present on the host machine, it can express this by depending on the __cuda virtual package. In that case, if pixi cannot find the __cuda virtual package on your machine, the installation will fail.
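On the project side, such a requirement is declared through the system-requirements table in the manifest. A minimal sketch (the version bound here is only illustrative):

```toml
# pixi.toml — the solver will only accept hosts that provide
# the __cuda virtual package at version 12 or higher
[system-requirements]
cuda = "12"
```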

"},{"location":"advanced/explain_info_command/#cache-dir","title":"Cache dir","text":"

The directory where pixi stores its cache. Check out the cache documentation for more information.

"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"

Check the authentication documentation

"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"

[requires --extended]

The size of the previously mentioned \"Cache dir\" in Mebibytes.

"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"

Everything below Project is info about the project you're currently in. This info is only available if your path has a manifest file.

"},{"location":"advanced/explain_info_command/#manifest-file","title":"Manifest file","text":"

The path to the manifest file that describes the project.

"},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"

The last time the lock file was updated, either manually or by pixi itself.

"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"

The environment info is shown per defined environment. If you don't have any environments defined, this will only show the default environment.

"},{"location":"advanced/explain_info_command/#features","title":"Features","text":"

This lists which features are enabled in the environment. For the default environment, this is only default.

"},{"location":"advanced/explain_info_command/#channels","title":"Channels","text":"

The list of channels used in this environment.

"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"

The number of dependencies defined for this environment (not the number of installed dependencies).

"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"

The list of dependencies defined for this environment.

"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"

The platforms the project has defined.

"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"

We created prefix-dev/setup-pixi to facilitate using pixi in CI.

"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"
- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    pixi-version: v0.32.1\n    cache: true\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n

Pin your action versions

Since pixi is not yet stable, the API of this action may change between minor versions. Please pin this action to a specific version (i.e., prefix-dev/setup-pixi@v0.8.0) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.

Put the following in your .github/dependabot.yml file to enable Dependabot for your GitHub Actions:

.github/dependabot.yml
version: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: monthly # (1)!\n    groups:\n      dependencies:\n        patterns:\n          - \"*\"\n
  1. or daily, weekly
"},{"location":"advanced/github_actions/#features","title":"Features","text":"

To see all available input arguments, see the action.yml file in setup-pixi. The most important features are described below.

"},{"location":"advanced/github_actions/#caching","title":"Caching","text":"

The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock file is present. It will then use the pixi.lock file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache input argument.

Customize your cache key

If you need to customize your cache-key, you can use the cache-key input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>.
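For example, with an illustrative pixi- prefix:

```yaml
- uses: prefix-dev/setup-pixi@v0.8.0
  with:
    cache: true
    cache-key: pixi-  # full cache key becomes pixi-<conda-arch>-<hash>
```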

Only save caches on main

To avoid hitting the 10 GB cache size limit too quickly, you might want to restrict when the cache is saved. This can be done by setting the cache-write argument.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    cache: true\n    cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"

With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments input argument. This will install all environments that are specified and cache them.

[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"

The following example will install the py311 and py312 environments in different jobs.

test:\n  runs-on: ubuntu-latest\n  strategy:\n    matrix:\n      environment: [py311, py312]\n  steps:\n  - uses: actions/checkout@v4\n  - uses: prefix-dev/setup-pixi@v0.8.0\n    with:\n      environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"

The following example will install both the py311 and the py312 environment on the runner.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    environments: >- # (1)!\n      py311\n      py312\n- run: |\n  pixi run -e py311 test\n  pixi run -e py312 test\n
  1. separated by spaces, equivalent to

    environments: py311 py312\n

Caching behavior if you don't specify environments

If you don't specify any environment, the default environment will be installed and cached, even if you use other environments.

"},{"location":"advanced/github_actions/#authentication","title":"Authentication","text":"

There are currently three ways to authenticate with pixi: a token, a username and password, or a conda-token. Each is described below.

For more information, see Authentication.

Handle secrets with care

Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }} syntax. These secrets will always be masked in the logs.

"},{"location":"advanced/github_actions/#token","title":"Token","text":"

Specify the token using the auth-token input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"

Specify the username and password using the auth-username and auth-password input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments, for example with Artifactory.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: custom-artifactory.com\n    auth-username: ${{ secrets.PIXI_USERNAME }}\n    auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"

Specify the conda-token using the conda-token input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel) is used at anaconda.org or with quetz instances.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: anaconda.org # (1)!\n    conda-token: ${{ secrets.CONDA_TOKEN }}\n
  1. or my-quetz-instance.com
"},{"location":"advanced/github_actions/#custom-shell-wrapper","title":"Custom shell wrapper","text":"

setup-pixi allows you to run commands inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}. This can be useful if you want to run several commands inside of the pixi environment without prefixing each one with pixi run.

- run: | # (1)!\n    python --version\n    pip install --no-deps -e .\n  shell: pixi run bash -e {0}\n
  1. everything here will be run inside of the pixi environment

You can even run Python scripts like this:

- run: | # (1)!\n    import my_package\n    print(\"Hello world!\")\n  shell: pixi run python {0}\n
  1. everything here will be run inside of the pixi environment

If you want to use PowerShell, you need to specify -Command as well.

- run: | # (1)!\n    python --version | Select-String \"3.11\"\n  shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
  1. everything here will be run inside of the pixi environment

How does it work under the hood?

Under the hood, the shell: xyz {0} option is implemented by creating a temporary script file and calling xyz with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0} directly but instead have to use shell: pixi run bash {0}. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell in the documentation. See the official documentation and ADR 0277 for more information about how the shell: input works in GitHub Actions.

"},{"location":"advanced/github_actions/#one-off-shell-wrapper-using-pixi-exec","title":"One-off shell wrapper using pixi exec","text":"

With pixi exec, you can also run a one-off command inside a temporary pixi environment.

- run: | # (1)!\n    zstd --version\n  shell: pixi exec --spec zstd -- bash -e {0}\n
  1. everything here will be run inside of the temporary pixi environment
- run: | # (1)!\n    import ruamel.yaml\n    # ...\n  shell: pixi exec --spec python=3.11.* --spec ruamel.yaml -- python {0}\n
  1. everything here will be run inside of the temporary pixi environment

See here for more information about pixi exec.

"},{"location":"advanced/github_actions/#environment-activation","title":"Environment activation","text":"

Instead of using a custom shell wrapper, you can also make all pixi-installed binaries available to subsequent steps by \"activating\" the installed environment in the currently running job. To this end, setup-pixi adds all environment variables set when executing pixi run to $GITHUB_ENV and, similarly, adds all path modifications to $GITHUB_PATH. As a result, all installed binaries can be accessed without having to call pixi run.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    activate-environment: true\n

If you are installing multiple environments, you will need to specify the name of the environment that you want to be activated.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    environments: >-\n      py311\n      py312\n    activate-environment: py311\n

Activating an environment may be more useful than using a custom shell wrapper as it allows non-shell based steps to access binaries on the path. However, be aware that this option augments the environment of your job.

"},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen and --locked","text":"

You can specify whether setup-pixi should run pixi install --frozen or pixi install --locked depending on the frozen or the locked input argument. See the official documentation for more information about the --frozen and --locked flags.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    locked: true\n    # or\n    frozen: true\n

If you don't specify anything, the default behavior is to run pixi install --locked if a pixi.lock file is present and pixi install otherwise.

"},{"location":"advanced/github_actions/#debugging","title":"Debugging","text":"

There are two types of debug logging that you can enable.

"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"

The first one is the debug logging of the action itself. This can be enabled by re-running the action in debug mode:

Debug logging documentation

For more information about debug logging in GitHub Actions, see the official documentation.

"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"

The second type is the debug logging of the pixi executable. This can be specified by setting the log-level input.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    log-level: vvv # (1)!\n
  1. One of q, default, v, vv, or vvv.

If nothing is specified, log-level will default to default or vv, depending on whether debug logging is enabled for the action.

"},{"location":"advanced/github_actions/#self-hosted-runners","title":"Self-hosted runners","text":"

On self-hosted runners, files may persist between jobs. This can lead to problems or secrets leaking between job runs. To avoid this, you can use the post-cleanup input to specify the post-cleanup behavior of the action (i.e., what happens after all your commands have been executed).

If you set post-cleanup to true, the action will delete the following files:

If nothing is specified, post-cleanup will default to true.

On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi to do this.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    post-cleanup: true\n    pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
  1. ${{ runner.temp }}\\Scripts\\pixi.exe on Windows

You can also use a preinstalled local version of pixi on the runner by not setting any of the pixi-version, pixi-url or pixi-bin-path inputs. This action will then try to find a local version of pixi in the runner's PATH.
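A sketch of a step that relies on such a preinstalled pixi:

```yaml
# none of pixi-version, pixi-url or pixi-bin-path is set,
# so the action looks for pixi on the runner's PATH
- uses: prefix-dev/setup-pixi@v0.8.0
  with:
    cache: true
```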

"},{"location":"advanced/github_actions/#using-the-pyprojecttoml-as-a-manifest-file-for-pixi","title":"Using the pyproject.toml as a manifest file for pixi.","text":"

setup-pixi will automatically pick up the pyproject.toml if it contains a [tool.pixi.project] section and no pixi.toml is present. This can be overridden by setting the manifest-path input argument.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    manifest-path: pyproject.toml\n
"},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"

If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi repository.

"},{"location":"advanced/production_deployment/","title":"Bringing pixi to production","text":"

You can bring pixi projects into production either by containerizing them with tools like Docker or by using quantco/pixi-pack.

@pavelzw from QuantCo wrote a blog post about bringing pixi to production. You can read it here.

"},{"location":"advanced/production_deployment/#docker","title":"Docker","text":"

We provide a simple Docker image at pixi-docker that contains the pixi executable on top of different base images.

The images are available on ghcr.io/prefix-dev/pixi.

There are different tags for different base images available:

All tags

For all tags, take a look at the build script.

"},{"location":"advanced/production_deployment/#example-usage","title":"Example usage","text":"

The following example uses the pixi Docker image as a base image for a multi-stage build. It also makes use of pixi shell-hook so that the production container does not rely on pixi being installed.

More examples

For more examples, take a look at pavelzw/pixi-docker-example.

FROM ghcr.io/prefix-dev/pixi:0.32.1 AS build\n\n# copy source code, pixi.toml and pixi.lock to the container\nWORKDIR /app\nCOPY . .\n# install dependencies to `/app/.pixi/envs/prod`\n# use `--locked` to ensure the lockfile is up to date with pixi.toml\nRUN pixi install --locked -e prod\n# create the shell-hook bash script to activate the environment\nRUN pixi shell-hook -e prod -s bash > /shell-hook\nRUN echo \"#!/bin/bash\" > /app/entrypoint.sh\nRUN cat /shell-hook >> /app/entrypoint.sh\n# extend the shell-hook script to run the command passed to the container\nRUN echo 'exec \"$@\"' >> /app/entrypoint.sh\n\nFROM ubuntu:24.04 AS production\nWORKDIR /app\n# only copy the production environment into prod container\n# please note that the \"prefix\" (path) needs to stay the same as in the build container\nCOPY --from=build /app/.pixi/envs/prod /app/.pixi/envs/prod\nCOPY --from=build --chmod=0755 /app/entrypoint.sh /app/entrypoint.sh\n# copy your project code into the container as well\nCOPY ./my_project /app/my_project\n\nEXPOSE 8000\nENTRYPOINT [ \"/app/entrypoint.sh\" ]\n# run your app inside the pixi environment\nCMD [ \"uvicorn\", \"my_project:app\", \"--host\", \"0.0.0.0\" ]\n
"},{"location":"advanced/production_deployment/#pixi-pack","title":"pixi-pack","text":"

pixi-pack is a simple tool that takes a pixi environment and packs it into a compressed archive that can be shipped to the target machine.

It can be installed via

pixi global install pixi-pack\n

Or by downloading our pre-built binaries from the releases page.

Instead of installing pixi-pack globally, you can also use pixi exec to run pixi-pack in a temporary environment:

pixi exec pixi-pack pack\npixi exec pixi-pack unpack environment.tar\n

You can pack an environment with

pixi-pack pack --manifest-file pixi.toml --environment prod --platform linux-64\n

This will create an environment.tar file that contains all conda packages required to create the environment.

# environment.tar\n| pixi-pack.json\n| environment.yml\n| channel\n|    \u251c\u2500\u2500 noarch\n|    |    \u251c\u2500\u2500 tzdata-2024a-h0c530f3_0.conda\n|    |    \u251c\u2500\u2500 ...\n|    |    \u2514\u2500\u2500 repodata.json\n|    \u2514\u2500\u2500 linux-64\n|         \u251c\u2500\u2500 ca-certificates-2024.2.2-hbcca054_0.conda\n|         \u251c\u2500\u2500 ...\n|         \u2514\u2500\u2500 repodata.json\n
"},{"location":"advanced/production_deployment/#unpacking-an-environment","title":"Unpacking an environment","text":"

With pixi-pack unpack environment.tar, you can unpack the environment on your target system. This will create a new conda environment in ./env that contains all packages specified in your pixi.toml. It also creates an activate.sh (or activate.bat on Windows) file that lets you activate the environment without needing to have conda or micromamba installed.
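On a Unix target, the flow looks roughly like this (a sketch; it assumes the activation script is written to the current directory next to ./env):

```shell
# recreate the environment in ./env and generate an activation script
pixi-pack unpack environment.tar
# activate the environment without conda or micromamba installed
source ./activate.sh
```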

"},{"location":"advanced/production_deployment/#cross-platform-packs","title":"Cross-platform packs","text":"

Since pixi-pack just downloads the .conda and .tar.bz2 files from the conda repositories, you can trivially create packs for different platforms.

pixi-pack pack --platform win-64\n

You can only unpack a pack on a system that has the same platform as the pack was created for.

"},{"location":"advanced/production_deployment/#inject-additional-packages","title":"Inject additional packages","text":"

You can inject additional packages into the environment that are not specified in pixi.lock by using the --inject flag:

pixi-pack pack --inject local-package-1.0.0-hbefa133_0.conda --manifest-file pixi.toml\n

This can be particularly useful if you build the project itself and want to include the built package in the environment but still want to use pixi.lock from the project.

"},{"location":"advanced/production_deployment/#unpacking-without-pixi-pack","title":"Unpacking without pixi-pack","text":"

If you don't have pixi-pack available on your target system, you can still install the environment if you have conda or micromamba available. Just unarchive the environment.tar, then you have a local channel on your system where all necessary packages are available. Next to this local channel, you will find an environment.yml file that contains the environment specification. You can then install the environment using conda or micromamba:

tar -xvf environment.tar\nmicromamba create -p ./env --file environment.yml\n# or\nconda env create -p ./env --file environment.yml\n

The environment.yml and repodata.json files exist only for this use case; pixi-pack unpack does not use them.

"},{"location":"advanced/pyproject_toml/","title":"pyproject.toml in pixi","text":"

We support the use of the pyproject.toml as our manifest file in pixi. This allows you to keep all configuration in one file. The pyproject.toml file is a standard for Python projects. We don't advise using the pyproject.toml file for anything other than Python projects; the pixi.toml is better suited for other types of projects.

"},{"location":"advanced/pyproject_toml/#initial-setup-of-the-pyprojecttoml-file","title":"Initial setup of the pyproject.toml file","text":"

When you already have a pyproject.toml file in your project, you can run pixi init in that folder. Pixi will automatically

If you do not have an existing pyproject.toml file, you can run pixi init --format pyproject in your project folder. In that case, pixi will create a pyproject.toml manifest from scratch with some sane defaults.

"},{"location":"advanced/pyproject_toml/#python-dependency","title":"Python dependency","text":"

The pyproject.toml file supports the requires-python field. Pixi understands that field and automatically adds the version to the dependencies.

This is an example of a pyproject.toml file with the requires-python field, which will be used as the python dependency:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n

Which is equivalent to:

equivalent pixi.toml
[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[dependencies]\npython = \">=3.9\"\n
"},{"location":"advanced/pyproject_toml/#dependency-section","title":"Dependency section","text":"

The pyproject.toml file supports the dependencies field. Pixi understands that field and automatically adds the dependencies to the project as [pypi-dependencies].

This is an example of a pyproject.toml file with the dependencies field:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n

Which is equivalent to:

equivalent pixi.toml
[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[pypi-dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n\n[dependencies]\npython = \">=3.9\"\n

You can overwrite these with conda dependencies by adding them to the dependencies field:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n

This would result in the conda dependencies being installed and the pypi dependencies being ignored, as pixi prefers conda dependencies over pypi dependencies.

"},{"location":"advanced/pyproject_toml/#optional-dependencies","title":"Optional dependencies","text":"

If your python project includes groups of optional dependencies, pixi will automatically interpret them as pixi features of the same name with the associated pypi-dependencies.

You can add them to pixi environments manually, or use pixi init to set up the project, which will create one environment per feature. Self-references to other groups of optional dependencies are also handled.

For instance, imagine you have a project folder with a pyproject.toml file similar to:

[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n

Running pixi init in that project folder will transform the pyproject.toml file into:

[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # if executed on linux\n\n[tool.pixi.environments]\ndefault = {features = [], solve-group = \"default\"}\ntest = {features = [\"test\"], solve-group = \"default\"}\nall = {features = [\"all\", \"test\"], solve-group = \"default\"}\n

In this example, pixi will create three environments: default, test, and all.

All environments will be solved together, as indicated by the common solve-group, and added to the lock file. You can edit the [tool.pixi.environments] section manually to adapt it to your use case (e.g. if you do not need a particular environment).

"},{"location":"advanced/pyproject_toml/#example","title":"Example","text":"

As the pyproject.toml file supports the full pixi spec (with [tool.pixi] prepended), an example would look like this:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n    \"ruff\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\ncompilers = \"*\"\ncmake = \"*\"\n\n[tool.pixi.tasks]\nstart = \"python my_project/main.py\"\nlint = \"ruff lint\"\n\n[tool.pixi.system-requirements]\ncuda = \"11.0\"\n\n[tool.pixi.feature.test.dependencies]\npytest = \"*\"\n\n[tool.pixi.feature.test.tasks]\ntest = \"pytest\"\n\n[tool.pixi.environments]\ntest = [\"test\"]\n
"},{"location":"advanced/pyproject_toml/#build-system-section","title":"Build-system section","text":"

The pyproject.toml file normally contains a [build-system] section. Pixi will use this section to build and install the project if it is added as a pypi path dependency.

If the pyproject.toml file does not contain any [build-system] section, pixi will fall back to uv's default, which is equivalent to the below:

pyproject.toml
[build-system]\nrequires = [\"setuptools >= 40.8.0\"]\nbuild-backend = \"setuptools.build_meta:__legacy__\"\n

Including a [build-system] section is highly recommended. If you are not sure of the build-backend you want to use, including the [build-system] section below in your pyproject.toml is a good starting point. pixi init --format pyproject defaults to hatchling. The advantages of hatchling over setuptools are outlined on its website.

pyproject.toml
[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n
"},{"location":"advanced/updates_github_actions/","title":"Update lockfiles with GitHub Actions","text":"

You can leverage GitHub Actions in combination with pavelzw/pixi-diff-to-markdown to automatically update your lockfiles similar to dependabot or renovate in other ecosystems.

Dependabot/Renovate support for pixi

You can track native Dependabot support for pixi in dependabot/dependabot-core #2227 and for Renovate in renovatebot/renovate #2213.

"},{"location":"advanced/updates_github_actions/#how-to-use","title":"How to use","text":"

To get started, create a new GitHub Actions workflow file in your repository.

.github/workflows/update-lockfiles.yml
name: Update lockfiles\n\npermissions: # (1)!\n  contents: write\n  pull-requests: write\n\non:\n  workflow_dispatch:\n  schedule:\n    - cron: 0 5 1 * * # (2)!\n\njobs:\n  pixi-update:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Set up pixi\n        uses: prefix-dev/setup-pixi@v0.8.1\n        with:\n          run-install: false\n      - name: Update lockfiles\n        run: |\n          set -o pipefail\n          pixi update --json | pixi exec pixi-diff-to-markdown >> diff.md\n      - name: Create pull request\n        uses: peter-evans/create-pull-request@v6\n        with:\n          token: ${{ secrets.GITHUB_TOKEN }}\n          commit-message: Update pixi lockfile\n          title: Update pixi lockfile\n          body-path: diff.md\n          branch: update-pixi\n          base: main\n          labels: pixi\n          delete-branch: true\n          add-paths: pixi.lock\n
  1. Needed for peter-evans/create-pull-request
  2. Runs at 05:00, on day 1 of the month

In order for this workflow to work, you need to set \"Allow GitHub Actions to create and approve pull requests\" to true in your repository settings (in \"Actions\" -> \"General\").

Tip

If you don't have any pypi-dependencies, you can use pixi update --json --no-install to speed up diff generation.

"},{"location":"advanced/updates_github_actions/#triggering-ci-in-automated-prs","title":"Triggering CI in automated PRs","text":"

In order to prevent accidental recursive GitHub Workflow runs, GitHub decided not to trigger any workflows on automated PRs when using the default GITHUB_TOKEN. There are a couple of ways to work around this limitation. You can find excellent documentation for this in peter-evans/create-pull-request, see here.
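One common workaround is to create the pull request with a personal access token instead of the default GITHUB_TOKEN, so that the resulting PR does trigger workflows. The secrets.PAT name below is an assumption; use whatever secret name you configured:

```yaml
- name: Create pull request
  uses: peter-evans/create-pull-request@v6
  with:
    token: ${{ secrets.PAT }}  # a PAT instead of the default GITHUB_TOKEN
```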

"},{"location":"advanced/updates_github_actions/#customizing-the-summary","title":"Customizing the summary","text":"

You can customize the summary by either using command-line-arguments of pixi-diff-to-markdown or by specifying the configuration in pixi.toml under [tool.pixi-diff-to-markdown]. See the pixi-diff-to-markdown documentation or run pixi-diff-to-markdown --help for more information.

"},{"location":"advanced/updates_github_actions/#using-reusable-workflows","title":"Using reusable workflows","text":"

If you want to use the same workflow in multiple repositories in your GitHub organization, you can create a reusable workflow. You can find more information in the GitHub documentation.

"},{"location":"design_proposals/pixi_global_manifest/","title":"Pixi Global Manifest","text":"

Feedback wanted

This document is work in progress, and community feedback is greatly appreciated. Please share your thoughts at our GitHub discussion.

"},{"location":"design_proposals/pixi_global_manifest/#motivation","title":"Motivation","text":"

pixi global is currently limited to imperatively managing CLI packages. The next iteration of this feature should fulfill the following needs:

"},{"location":"design_proposals/pixi_global_manifest/#design-considerations","title":"Design Considerations","text":"

There are a few things we wanted to keep in mind in the design:

  1. User-friendliness: Pixi is a user focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
  2. Keep it simple: The CLI should be all you strictly need to interact with global environments.
  3. Unsurprising: Simple commands should behave similarly to those of traditional package managers.
  4. Human Readable: Any file created by this feature should be human-readable and modifiable.
"},{"location":"design_proposals/pixi_global_manifest/#manifest","title":"Manifest","text":"

The global environments and exposed executables will be managed by a human-readable manifest. This manifest will stick to conventions set by pixi.toml where possible. Among other things, it will be written in the TOML format, be named pixi-global.toml and be placed at ~/.pixi/manifests/pixi-global.toml. The motivation for the location is discussed further below.

pixi-global.toml
# The name of the environment is `python`\n[envs.python]\nchannels = [\"conda-forge\"]\n# optional, defaults to your current OS\nplatform = \"osx-64\"\n# It will expose python, python3 and python3.11, but not pip\n[envs.python.dependencies]\npython = \"3.11.*\"\npip = \"*\"\n\n[envs.python.exposed]\npython = \"python\"\npython3 = \"python3\"\n\"python3.11\" = \"python3.11\"\n\n# The name of the environment is `python3-10`\n[envs.python3-10]\nchannels = [\"https://fast.prefix.dev/conda-forge\"]\n# It will expose python3.10\n[envs.python3-10.dependencies]\npython = \"3.10.*\"\n\n[envs.python3-10.exposed]\n\"python3.10\" = \"python\"\n
"},{"location":"design_proposals/pixi_global_manifest/#cli","title":"CLI","text":"

Install one or more packages PACKAGE and expose their executables. If --environment has been given, all packages will be installed in the same environment. --expose can be given if --environment is given as well, or if only a single PACKAGE will be installed. The syntax for MAPPING is exposed_name=executable_name, for example python3.10=python. --platform sets the platform of the environment to PLATFORM. Multiple channels can be specified by using --channel multiple times. By default, if no channel is provided, the default-channels key in the pixi configuration is used, which in turn defaults to \"conda-forge\".

pixi global install [--expose MAPPING] [--environment ENV] [--platform PLATFORM] [--no-activation] [--channel CHANNEL]... PACKAGE...\n

Remove environments ENV.

pixi global uninstall <ENV>...\n

Update PACKAGE if --package is given. If not, all packages in environments ENV will be updated. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each added binary individually. --assume-yes will assume yes as the answer for every question that would otherwise be asked interactively.

pixi global update [--package PACKAGE] [--assume-yes] <ENV>...\n

Updates all packages in all environments. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each added binary individually. --assume-yes will assume yes as the answer for every question that would otherwise be asked interactively.

pixi global update-all [--assume-yes]\n

Add one or more packages PACKAGE to an existing environment ENV. If environment ENV does not exist, it will return with an error. Without --expose no binary will be exposed. If you don't specify a spec like python=3.8.*, the spec will be unconstrained with *. The syntax for MAPPING is exposed_name=executable_name, for example python3.10=python.

pixi global add --environment ENV [--expose MAPPING] <PACKAGE>...\n

Remove package PACKAGE from environment ENV. If it was the last package, the whole environment will be removed and that information will be printed to the console. If this leads to executables being removed, it will offer to remove the mappings. If the user declines, the remove process will stop.

pixi global remove --environment ENV PACKAGE\n

Add one or more MAPPING entries for environment ENV, describing which executables are exposed. The syntax for MAPPING is exposed_name=executable_name, for example python3.10=python.

pixi global expose add --environment ENV <MAPPING>...\n

Remove one or more exposed BINARY entries from environment ENV.

pixi global expose remove --environment ENV <BINARY>...\n

Ensure that the environments on the machine reflect the state in the manifest. The manifest is the single source of truth. Only if there is no manifest will the data from existing environments be used to create one. pixi global sync is implied by most other pixi global commands.

pixi global sync\n

List all environments, their specs and exposed executables.

pixi global list\n

Set the channels CHANNEL for a certain environment ENV in the pixi global manifest.

pixi global channel set --environment ENV <CHANNEL>...\n

Set the platform PLATFORM for a certain environment ENV in the pixi global manifest.

pixi global platform set --environment ENV PLATFORM\n

"},{"location":"design_proposals/pixi_global_manifest/#simple-workflow","title":"Simple workflow","text":"

Create environment python, install package python=3.10.* and expose all executables of that package

pixi global install python=3.10.*\n

Update all packages in environment python

pixi global update python\n

Remove environment python

pixi global uninstall python\n

Create environments python and pip, install the corresponding packages and expose all executables of those packages

pixi global install python pip\n

Remove environments python and pip

pixi global uninstall python pip\n

Create environment python-pip, install python and pip in the same environment and expose all executables of these packages

pixi global install --environment python-pip python pip\n

"},{"location":"design_proposals/pixi_global_manifest/#adding-dependencies","title":"Adding dependencies","text":"

Create environment python, install package python and expose all executables of that package. Then add package hypercorn to environment python without exposing its executables.

pixi global install python\npixi global add --environment python hypercorn\n

Update package cryptography (a dependency of hypercorn) to 43.0.0 in environment python

pixi global update --environment python cryptography=43.0.0\n

Then remove hypercorn again.

pixi global remove --environment python hypercorn\n

"},{"location":"design_proposals/pixi_global_manifest/#specifying-which-executables-to-expose","title":"Specifying which executables to expose","text":"

Make a new environment python3-10 with package python=3.10 and expose the python executable as python3.10.

pixi global install --environment python3-10 --expose \"python3.10=python\" python=3.10\n

Now python3.10 is available.

Run the following in order to expose python from environment python3-10 as python3-10 instead.

pixi global expose remove --environment python3-10 python3.10\npixi global expose add --environment python3-10 \"python3-10=python\"\n

Now python3-10 is available, but python3.10 isn't anymore.

"},{"location":"design_proposals/pixi_global_manifest/#syncing","title":"Syncing","text":"

Most pixi global subcommands imply a pixi global sync.

On a clean machine, running the following creates the manifest and ~/.pixi/envs/python.

pixi global install python\n

Deleting ~/.pixi/envs and syncing should add environment python again, as described in the manifest

rm -r ~/.pixi/envs\npixi global sync\n

If there's no manifest, but existing environments, pixi will create a manifest that matches your current environments. It is to be decided whether the user should be asked if they want an empty manifest instead, or if it should always import the data from the environments.

rm <manifest>\npixi global sync\n

If we remove the python environment from the manifest, running pixi global sync will also remove the ~/.pixi/envs/python environment from the file system.

vim <manifest>\npixi global sync\n

"},{"location":"design_proposals/pixi_global_manifest/#open-questions","title":"Open Questions","text":""},{"location":"design_proposals/pixi_global_manifest/#should-we-version-the-manifest","title":"Should we version the manifest?","text":"

Something like:

[manifest]\nversion = 1\n

We still have to figure out which existing programs do something similar and how they benefit from it.

"},{"location":"design_proposals/pixi_global_manifest/#multiple-manifests","title":"Multiple manifests","text":"

We could go for one default manifest, but also parse other manifests in the same directory. The only requirement to be parsed as a manifest is a .toml extension. In order to modify those with the CLI, one would have to add a --manifest option to select the correct one.

It is unclear whether the first implementation already needs to support this. At the very least we should put the manifest into its own folder, like ~/.pixi/global/manifests/pixi-global.toml.

"},{"location":"design_proposals/pixi_global_manifest/#discovery-via-config-key","title":"Discovery via config key","text":"

In order to make it easier to manage manifests in version control, we could allow setting the manifest path via a key in the pixi configuration.

config.toml
global_manifests = \"/path/to/your/manifests\"\n
"},{"location":"design_proposals/pixi_global_manifest/#no-activation","title":"No activation","text":"

The current pixi global install features --no-activation. When this flag is set, CONDA_PREFIX and PATH will not be set when running the exposed executable. This is useful when installing Python package managers or shells.

Assuming that this needs to be set per mapping, one way to expose this functionality would be to allow the following:

[envs.pip.exposed]\npip = { executable=\"pip\", activation=false }\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":"

The cpp-sdl example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/cpp-sdl\n

Run the start command

pixi run start\n

Using the depends-on feature, you only needed to run the start task, but under the hood it is running the following tasks.

# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
"},{"location":"examples/opencv/","title":"Opencv example","text":"

The opencv example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"

Run the start command to start the face detection algorithm.

pixi run start\n

The screen that starts should look like this:

Check out webcame_capture.py to see how we detect a face.

"},{"location":"examples/opencv/#camera-calibration","title":"Camera Calibration","text":"

Next to face recognition, a camera calibration example is also included.

You'll need a checkerboard for this to work. Print this:

Then run

pixi run calibrate\n

To take a picture for calibration, press SPACE. Do this approximately 10 times with the chessboard in view of the camera.

After that, press ESC, which will start the calibration.

When the calibration is done, the camera will be used again to find the distance to the checkerboard.

"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"

The nav2 example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/ros2-nav2\n

Run the start command

pixi run start\n
"},{"location":"features/advanced_tasks/","title":"Advanced tasks","text":"

When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.

Here are some quick examples

pixi.toml
[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n    \"cmake\",\n    # Use the cross-platform Ninja generator\n    \"-G\",\n    \"Ninja\",\n    # The source is in the root directory\n    \"-S\",\n    \".\",\n    # We wanna build in the .build directory\n    \"-B\",\n    \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends-on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"features/advanced_tasks/#depends-on","title":"Depends on","text":"

Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.

An obvious example is compiling before running an application.

Check out our cpp_sdl example for a running example. In that package we have some tasks that depend on each other, so we can ensure that when you run pixi run start everything is set up as expected.

pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n

This results in the following lines being added to the pixi.toml:

pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends-on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends-on = [\"build\"] }\n
pixi run start\n

The tasks will be executed after each other:

If one of the commands fails (exits with a non-zero code), it will stop and the next one will not be started.

With this logic, you can also create aliases, as you don't have to specify any command in a task.

pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n

This results in the following pixi.toml:

pixi.toml
[tasks]\nfmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends-on = [\"fmt\", \"lint\"] }\n

Now run both tools with one command.

pixi run style\n
"},{"location":"features/advanced_tasks/#working-directory","title":"Working directory","text":"

Pixi tasks support the definition of a working directory.

\"cwd\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml file is located.

Consider a pixi project structured as follows:

\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n    \u2514\u2500\u2500 bar.py\n

To add a task to run the bar.py file, use:

pixi task add bar \"python bar.py\" --cwd scripts\n

This will add the following line to the manifest file:

pixi.toml
[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"features/advanced_tasks/#caching","title":"Caching","text":"

When you specify inputs and/or outputs to a task, pixi will reuse the result of the task.

For the cache, pixi checks that the following are true:

If all of these conditions are met, pixi will not run the task again and instead use the existing result.

Inputs and outputs can be specified as globs, which will be expanded to all matching files.

pixi.toml
[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n

Note: if you want to debug the globs you can use the --verbose flag to see which files are selected.

# shows info logs of all files that were selected by the globs\npixi run -v start\n
"},{"location":"features/advanced_tasks/#environment-variables","title":"Environment variables","text":"

You can set environment variables for a task. These are seen as \"default\" values for the variables as you can overwrite them from the shell.

pixi.toml

[tasks]\necho = { cmd = \"echo $ARGUMENT\", env = { ARGUMENT = \"hello\" } }\n
If you run pixi run echo it will output hello. When you set the environment variable ARGUMENT before running the task, it will use that value instead.

ARGUMENT=world pixi run echo\n\u2728 Pixi task (echo in default): echo $ARGUMENT\nworld\n

These variables are not shared between tasks, so you need to define them for every task you want to use them in.
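The \"default value\" behavior described above can be mimicked in plain POSIX sh with parameter defaults; this is only an analogy to illustrate the semantics, not how pixi implements it:

```shell
# The task's env map acts like a default: a value set by the caller
# wins, otherwise the fallback "hello" is used.
ARGUMENT="${ARGUMENT:-hello}"
echo "$ARGUMENT"   # prints hello unless ARGUMENT was set beforehand
```

Invoking this snippet with ARGUMENT=world already set in the environment prints world instead, matching the pixi behavior shown above.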

Extend instead of overwrite

If you use the same environment variable in the value as in the key of the map, you will extend the variable rather than overwrite it. For example, extending PATH:

pixi.toml

[tasks]\necho = { cmd = \"echo $PATH\", env = { PATH = \"/tmp/path:$PATH\" } }\n
This will output /tmp/path:/usr/bin:/bin instead of the original /usr/bin:/bin.

"},{"location":"features/advanced_tasks/#clean-environment","title":"Clean environment","text":"

You can make sure the environment of a task is \"pixi only\". Here pixi will only include the minimal required environment variables for your platform to run the command in. The environment will contain all variables set by the conda environment, like \"CONDA_PREFIX\". It will, however, also include some default values from the shell, like: \"DISPLAY\", \"LC_ALL\", \"LC_TIME\", \"LC_NUMERIC\", \"LC_MEASUREMENT\", \"SHELL\", \"USER\", \"USERNAME\", \"LOGNAME\", \"HOME\", \"HOSTNAME\", \"TMPDIR\", \"XPC_SERVICE_NAME\", \"XPC_FLAGS\"
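As a rough analogy for what \"pixi only\" means, plain POSIX env -i also launches a command with a stripped environment (pixi's allow-list behavior is more nuanced than this):

```shell
# env -i clears the inherited environment before running the command;
# with nothing re-added, `env` then lists no variables at all.
env -i env
```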

[tasks]\nclean_command = { cmd = \"python run_in_isolated_env.py\", clean-env = true}\n
This setting can also be set from the command line with pixi run --clean-env TASK_NAME.

clean-env not supported on Windows

On Windows it's hard to create a \"clean environment\", as conda-forge doesn't ship Windows compilers and Windows needs a lot of base variables. This makes the feature not worth implementing, as the number of edge cases would render it unusable.

"},{"location":"features/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"

To support the different operating systems (Windows, macOS and Linux), pixi integrates a shell that can run on all of them: deno_task_shell. The task shell is a limited implementation of a Bourne shell interface.

"},{"location":"features/advanced_tasks/#built-in-commands","title":"Built-in commands","text":"

Besides running actual executables like ./myprogram, cmake or python, the shell has some built-in commands.

"},{"location":"features/advanced_tasks/#syntax","title":"Syntax","text":"

More info in deno_task_shell documentation.

"},{"location":"features/environment/","title":"Environments","text":"

Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.

"},{"location":"features/environment/#structure","title":"Structure","text":"

A pixi environment is located in the .pixi/envs directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.

If you look at the .pixi/envs directory, you will see a directory for each environment; default is the one that is normally used. If you specify a custom environment, the name you specified will be used.

.pixi\n\u2514\u2500\u2500 envs\n    \u251c\u2500\u2500 cuda\n    \u2502   \u251c\u2500\u2500 bin\n    \u2502   \u251c\u2500\u2500 conda-meta\n    \u2502   \u251c\u2500\u2500 etc\n    \u2502   \u251c\u2500\u2500 include\n    \u2502   \u251c\u2500\u2500 lib\n    \u2502   ...\n    \u2514\u2500\u2500 default\n        \u251c\u2500\u2500 bin\n        \u251c\u2500\u2500 conda-meta\n        \u251c\u2500\u2500 etc\n        \u251c\u2500\u2500 include\n        \u251c\u2500\u2500 lib\n        ...\n

These directories are conda environments, and you can use them as such, but you cannot manually edit them; changes should always go through the pixi.toml. Pixi will always make sure the environment is in sync with the pixi.lock file. If this is not the case, then all the commands that use the environment will automatically update the environment, e.g. pixi run, pixi shell.

"},{"location":"features/environment/#cleaning-up","title":"Cleaning up","text":"

If you want to clean up the environments, you can simply delete the .pixi/envs directory, and pixi will recreate the environments when needed.

# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"features/environment/#activation","title":"Activation","text":"

An environment is nothing more than a set of files that are installed into a certain location, which somewhat mimics a global system install. You need to activate the environment to use it. In the simplest sense, that means adding the bin directory of the environment to the PATH variable. But there is more to it in a conda environment, as it also sets some environment variables.
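A minimal sketch of that PATH-only activation in a POSIX shell (the path assumes the default environment of the current project):

```shell
# Prepend the environment's bin directory so its executables win lookup.
# This alone is incomplete: it sets no CONDA_PREFIX and runs none of the
# per-package activation scripts.
export PATH="$PWD/.pixi/envs/default/bin:$PATH"
```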

To do the activation we have multiple options:

Where the run command is special as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.

Using pixi shell-hook, you would get the following output:

export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n

It sets the PATH and some more environment variables. But more importantly, it also runs activation scripts that are provided by the installed packages. An example of this is the libglib_activate.sh script. Thus, just adding the bin directory to the PATH is not enough.

"},{"location":"features/environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate-like activation","text":"

If you prefer to use the traditional conda activate-like activation, you could use the pixi shell-hook command.

$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n

Warning

It is not encouraged to use the traditional conda activate-like activation, as deactivating the environment is not really possible. Use pixi shell instead.

"},{"location":"features/environment/#using-pixi-with-direnv","title":"Using pixi with direnv","text":"Installing direnv

Of course you can use pixi to install direnv globally. We recommend running

pixi global install direnv

to install the latest version of direnv on your computer.

This allows you to use pixi in combination with direnv. Enter the following into your .envrc file:

.envrc
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
  1. This ensures that every time your pixi.lock changes, direnv invokes the shell-hook again.
  2. This installs if needed, and activates the environment. direnv ensures that the environment is deactivated when you leave the directory.
$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"features/environment/#environment-variables","title":"Environment variables","text":"

The following environment variables are set by pixi, when using the pixi run, pixi shell, or pixi shell-hook command:

Note

Even though the variables are environment variables, they cannot be overridden. E.g. you cannot change the root of the project by setting PIXI_PROJECT_ROOT in the environment.

"},{"location":"features/environment/#solving-environments","title":"Solving environments","text":"

When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml and will put the output of the solve step into the pixi.lock file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:

Pixi solves both the conda and PyPI dependencies, where the PyPI dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler and uv libraries, which control the heavy lifting of the solving process, executed by our custom SAT solver: resolvo. Resolvo is able to solve multiple ecosystems, like conda and PyPI. It implements the lazy solving process for PyPI packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.

For the [pypi-dependencies], uv implements sdist building to retrieve the metadata of the packages, and wheel building to install the packages. For this building step, pixi requires python to be installed first via the (conda) [dependencies] section of the pixi.toml file. This will always be slower than the pure conda solves, so for the best pixi experience you should stay within the [dependencies] section of the pixi.toml file.

"},{"location":"features/environment/#caching","title":"Caching","text":"

Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools.

Normally the location would be the following platform-specific default cache folder:

This location is configurable by setting the PIXI_CACHE_DIR or RATTLER_CACHE_DIR environment variable.
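For example, redirecting the cache could look like this (the target directory is an arbitrary example):

```shell
# Use a custom cache directory for pixi downloads (example path).
export PIXI_CACHE_DIR="$HOME/.cache/my-pixi-cache"
```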

When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.

The cache contains multiple folders concerning different caches from within pixi.

"},{"location":"features/lockfile/","title":"The pixi.lock lock file","text":"

A lock file is the protector of the environments, and pixi is the key to unlock it.

"},{"location":"features/lockfile/#what-is-a-lock-file","title":"What is a lock file?","text":"

A lock file locks the environment in a specific state. Within pixi a lock file is a description of the packages in an environment. The lock file contains two definitions:
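The overall shape, with the environments on one side and the package definitions on the other, can be sketched as follows (heavily abbreviated and illustrative; a real lock file records exact URLs and hashes for every package):

```yaml
version: 4
environments:
  default:
    channels:
    - url: https://conda.anaconda.org/conda-forge/
    packages:
      linux-64:
      - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.11.0-...
packages:
- kind: conda
  name: python
  version: 3.11.0
```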

"},{"location":"features/lockfile/#why-a-lock-file","title":"Why a lock file","text":"

Pixi uses the lock file for the following reasons:

This gives you (and your collaborators) a way to really reproduce the environment they are working in. Using tools such as docker suddenly becomes much less necessary.

"},{"location":"features/lockfile/#when-is-a-lock-file-generated","title":"When is a lock file generated?","text":"

A lock file is generated when you install a package. More specifically, a lock file is generated from the solve step of the installation process. The solve will return a list of packages that are to be installed, and the lock file will be generated from this list. This diagram tries to explain the process:

graph TD\n    A[Install] --> B[Solve]\n    B --> C[Generate and write lock file]\n    C --> D[Install Packages]
"},{"location":"features/lockfile/#how-to-use-a-lock-file","title":"How to use a lock file","text":"

Do not edit the lock file

A lock file is a machine-only file and should not be edited by hand.

That said, the pixi.lock is human-readable, so it's easy to track the changes in the environment. We recommend you track the lock file in git or other version control systems. This will ensure that the environment is always reproducible and that you can always revert back to a working state, in case something goes wrong. The pixi.lock and the manifest file pixi.toml/pyproject.toml should always be in sync.

Running the following commands will check and automatically update the lock file if you changed any dependencies:

All the commands that support the interaction with the lock file also include some lock file usage options:

Syncing the lock file with the manifest file

The lock file is always matched with the whole configuration in the manifest file. This means that if you change the manifest file, the lock file will be updated.

flowchart TD\n    C[manifest] --> A[lockfile] --> B[environment]

"},{"location":"features/lockfile/#lockfile-satisfiability","title":"Lockfile satisfiability","text":"

The lock file is a description of the environment, and it should always be satisfiable. Satisfiable means that the given manifest file and the created environment are in sync with the lockfile. If the lock file is not satisfiable, pixi will generate a new lock file automatically.

Steps to check if the lock file is satisfiable:

If you want more details, check out the actual code, as this description is a simplification.

"},{"location":"features/lockfile/#the-version-of-the-lock-file","title":"The version of the lock file","text":"

The lock file has a version number; this ensures that the lock file is compatible with the local version of pixi.

version: 4\n

Pixi is backward compatible with the lock file, but not forward compatible. This means that you can use an older lock file with a newer version of pixi, but not the other way around.

"},{"location":"features/lockfile/#your-lock-file-is-big","title":"Your lock file is big","text":"

The lock file can grow quite large, especially if you have a lot of packages installed. This is because the lock file contains all the information about the packages.

  1. We try to keep the lock file as small as possible.
  2. It's always smaller than a docker image.
  3. Downloading the lock file is always faster than downloading the incorrect packages.
"},{"location":"features/lockfile/#you-dont-need-a-lock-file-because","title":"You don't need a lock file because...","text":"

If you cannot think of a case where you would benefit from a fast reproducible environment, then you don't need a lock file.

But take note of the following:

"},{"location":"features/lockfile/#removing-the-lock-file","title":"Removing the lock file","text":"

If you want to remove the lock file, you can simply delete it.

rm pixi.lock\n

This will remove the lock file, and the next time you run a command that requires the lock file, it will be generated again.

Note

This does remove the locked state of the environment, and the environment will be updated to the latest version of the packages.

"},{"location":"features/multi_environment/","title":"Multi Environment Support","text":""},{"location":"features/multi_environment/#motivating-example","title":"Motivating Example","text":"

There are multiple scenarios where multiple environments are useful.

This prepares pixi for use in large projects with multiple use-cases, multiple developers and different CI needs.

"},{"location":"features/multi_environment/#design-considerations","title":"Design Considerations","text":"

There are a few things we wanted to keep in mind in the design:

  1. User-friendliness: Pixi is a user-focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
  2. Keep it simple: Not understanding the multiple environments feature shouldn't limit a user's ability to use pixi. The feature should be \"invisible\" to non-multi-env use-cases.
  3. No automatic combinatorics: To keep the dependency resolution process manageable, the solution should avoid a combinatorial explosion of dependency sets. Environments are therefore user-defined rather than automatically inferred from a matrix of features.
  4. Single environment Activation: The design should allow only one environment to be active at any given time, simplifying the resolution process and preventing conflicts.
  5. Fixed lock files: It's crucial to preserve fixed lock files for consistency and predictability. Solutions must ensure reliability not just for authors but also for end-users, particularly at the time of lock file creation.
"},{"location":"features/multi_environment/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"

Introduce environment sets into the pixi.toml; these describe environments based on features. Introduce features into the pixi.toml that can describe parts of environments. As an environment goes beyond just dependencies, the features should be described with the following fields:

Default features
[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature
[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature
[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}] # Results in:  [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Define tasks as defaults of an environment
[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environment test test`\n

The environment definition should contain the following fields:

Creating environments from features
[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
Testing a production environment with additional dependencies
[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
Creating environments without including the default feature
[dependencies]\npython = \"*\"\nnumpy = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom environment which only has the `lint` feature (numpy isn't part of that env).\nlint = {features = [\"lint\"], no-default-feature = true}\n
"},{"location":"features/multi_environment/#lock-file-structure","title":"Lock File Structure","text":"

Within the pixi.lock file, a package may now include an additional environments field, specifying the environments to which it belongs. To avoid duplication, a package's environments field may contain multiple environments, keeping the lock file minimal in size.

- platform: linux-64\n  name: pre-commit\n  version: 3.3.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n  ...:\n- platform: linux-64\n  name: python\n  version: 3.9.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n    - py39\n    - default\n  ...:\n
"},{"location":"features/multi_environment/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"

Users can manually activate the desired environment via command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user the cli would look like this:

Default behavior
\u279c pixi run python\n# Runs python in the `default` environment\n
Activating a specific environment
\u279c pixi run -e test pytest\n\u279c pixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment
\u279c pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environment
\u279c pixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n
"},{"location":"features/multi_environment/#ambiguous-environment-selection","title":"Ambiguous Environment Selection","text":"

It's possible to define tasks in multiple environments; in this case, the user should be prompted to select the environment.

Here is a simple example of a task-only manifest:

pixi.toml

[project]\nname = \"test_ambiguous_env\"\nchannels = []\nplatforms = [\"linux-64\", \"win-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ndefault = \"echo Default\"\nambi = \"echo Ambi::Default\"\n[feature.test.tasks]\ntest = \"echo Test\"\nambi = \"echo Ambi::Test\"\n\n[feature.dev.tasks]\ndev = \"echo Dev\"\nambi = \"echo Ambi::Dev\"\n\n[environments]\ndefault = [\"test\", \"dev\"]\ntest = [\"test\"]\ndev = [\"dev\"]\n
Trying to run the ambi task will prompt the user to select the environment, as it is available in all environments.

Interactive selection of environments if task is in multiple environments
\u279c pixi run ambi\n? The task 'ambi' can be run in multiple environments.\n\nPlease select an environment to run the task in: \u203a\n\u276f default # selecting default\n  test\n  dev\n\n\u2728 Pixi task (ambi in default): echo Ambi::Test\nAmbi::Test\n

As you can see, it runs the task defined in feature.test.tasks even though the default environment was selected. This happens because the ambi task from the test feature overwrites the one defined under [tasks] within the default environment, so the tasks.default version is no longer reachable from any environment.

Some other results running in this example:

\u279c pixi run --environment test ambi\n\u2728 Pixi task (ambi in test): echo Ambi::Test\nAmbi::Test\n\n\u279c pixi run --environment dev ambi\n\u2728 Pixi task (ambi in dev): echo Ambi::Dev\nAmbi::Dev\n\n# dev is run in the default environment\n\u279c pixi run dev\n\u2728 Pixi task (dev in default): echo Dev\nDev\n\n# dev is run in the dev environment\n\u279c pixi run -e dev dev\n\u2728 Pixi task (dev in dev): echo Dev\nDev\n

"},{"location":"features/multi_environment/#important-links","title":"Important links","text":""},{"location":"features/multi_environment/#real-world-example-use-cases","title":"Real world example use cases","text":"Polarify test setup

In polarify they want to test multiple Python versions combined with multiple versions of polars. This is currently done using a matrix in GitHub Actions, which can be replaced by multiple environments.

pixi.toml
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.yml
jobs:\n  tests-per-env:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        environment: [py311, py312]\n    steps:\n    - uses: actions/checkout@v4\n      - uses: prefix-dev/setup-pixi@v0.5.1\n        with:\n          environments: ${{ matrix.environment }}\n      - name: Run tasks\n        run: |\n          pixi run --environment ${{ matrix.environment }} test\n  tests-with-multiple-envs:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - uses: prefix-dev/setup-pixi@v0.5.1\n      with:\n       environments: pl017 pl018\n    - run: |\n        pixi run -e pl017 test\n        pixi run -e pl018 test\n
Test vs Production example

This is an example of a project that has a test feature and prod environment. The prod environment is a production environment that contains the run dependencies. The test feature is a set of dependencies and tasks that we want to put on top of the previously solved prod environment. This is a common use case where we want to test the production environment with additional dependencies.

pixi.toml

[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In ci you would run the following commands:
pixi run postinstall-e && pixi run test\n
Locally you would run the following command:
pixi run postinstall-e && pixi run dev\n

Then in a Dockerfile you would run the following command: Dockerfile

FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n

Multiple machines from one project

This is an example of an ML project that should be executable on machines that support cuda or mlx. It should also be executable on machines that support neither; we use the cpu feature for this.

pixi.toml
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n# MLX is only available on macOS >=13.5 (>14.0 is recommended)\nsystem-requirements = {macos = \"13.5\"}\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.16.0,<0.17.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machine
pixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlx
pixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlx
pixi run train-model\n
"},{"location":"features/multi_platform_configuration/","title":"Multi platform config","text":"

Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.

Here is an example manifest file that highlights some of the features:

pixi.toml
[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
pyproject.toml
[tool.pixi.project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tool.pixi.dependencies]\npython = \">=3.8\"\n\n[tool.pixi.target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"~=3.7.0\"\n\n\n[tool.pixi.activation]\nscripts = [\"setup.sh\"]\n\n[tool.pixi.target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"features/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"

The project.platforms defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lock file.
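As a minimal sketch (the project name and channels here are placeholders, not from the original example), the platform list lives in the project table of the manifest:

```toml
[project]
name = "my-project"          # placeholder name
channels = ["conda-forge"]
# pixi solves and locks dependencies separately for each platform listed here
platforms = ["linux-64", "osx-arm64", "win-64"]
```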

Running pixi install on a platform that is not configured will warn the user that it is not setup for that platform:

\u276f pixi install\n  \u00d7 the project is not configured for your current platform\n   \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n   \u00b7             \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n   \u00b7                             \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n   \u2570\u2500\u2500\u2500\u2500\n  help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"features/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"

With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms then pixi will throw an error.
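For example, a manifest like the following sketch would make pixi throw an error, because win-64 appears in a target specifier but not in project.platforms (the names are illustrative):

```toml
[project]
name = "example"             # illustrative
channels = ["conda-forge"]
platforms = ["linux-64"]     # win-64 is not listed here...

[target.win-64.dependencies] # ...so this target triggers an error
msmpi = "*"
```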

"},{"location":"features/multi_platform_configuration/#dependencies","title":"Dependencies","text":"

It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.

pixi.toml
[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n

In the above example, we specify that we depend on msmpi only on Windows, and that we specifically want python 3.8 when installing on Windows. This overwrites the generic set of dependencies for win-64 without touching any of the other platforms.

You can use pixi's cli to add these dependencies to the manifest file.

pixi add --platform win-64 posix\n

This also works for the host and build dependencies.

pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n

This results in the following:

pixi.toml
[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"features/multi_platform_configuration/#activation","title":"Activation","text":"

Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your project. Generated activation scripts are often in this category: the default scripts on Unix are bash scripts, while on Windows they are bat files.

To deal with this, you can define your activation scripts using the target definition.

pixi.toml

[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64, it will only execute the target scripts, not the scripts specified in the default activation.scripts.

"},{"location":"features/system_requirements/","title":"System Requirements in pixi","text":"

System requirements define the minimal system specifications necessary during dependency resolution for a project. For instance, specifying a Unix system with a particular minimal libc version ensures that dependencies are compatible with the project's environment.

System specifications are closely related to virtual packages, allowing for flexible and accurate dependency management.

"},{"location":"features/system_requirements/#default-system-requirements","title":"Default System Requirements","text":"

The following configurations outline the default minimal system requirements for different operating systems:

LinuxWindowsosx-64osx-arm64
# Default system requirements for Linux\n[system-requirements]\nlinux = \"4.18\"\nlibc = { family = \"glibc\", version = \"2.28\" }\n

Windows currently has no minimal system requirements defined. If your project requires specific Windows configurations, you should define them accordingly.

# Default system requirements for macOS\n[system-requirements]\nmacos = \"13.0\"\n
# Default system requirements for macOS ARM64\n[system-requirements]\nmacos = \"13.0\"\n
"},{"location":"features/system_requirements/#customizing-system-requirements","title":"Customizing System Requirements","text":"

You only need to define system requirements if your project necessitates a different set from the defaults. This is common when installing environments on older or newer versions of operating systems.

"},{"location":"features/system_requirements/#adjusting-for-older-systems","title":"Adjusting for Older Systems","text":"

If you're encountering an error like:

\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '4.18' but the system has version '4.12.14'\n

This indicates that the project's system requirements are higher than your current system's specifications. To resolve this, you can lower the system requirements in your project's configuration:

[system-requirements]\nlinux = \"4.12.14\"\n

This adjustment informs the dependency resolver to accommodate the older system version.

"},{"location":"features/system_requirements/#using-cuda-in-pixi","title":"Using CUDA in pixi","text":"

To utilize CUDA in your project, you must specify the desired CUDA version in the system-requirements table. This ensures that CUDA is recognized and appropriately locked into the lock file if necessary.

Example Configuration

[system-requirements]\ncuda = \"12\"  # Replace \"12\" with the specific CUDA version you intend to use\n
"},{"location":"features/system_requirements/#setting-system-requirements-environment-specific","title":"Setting System Requirements environment specific","text":"

This can be set per feature in the manifest file.

[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[environments]\ncuda = [\"cuda\"]\n
"},{"location":"features/system_requirements/#available-override-options","title":"Available Override Options","text":"

In certain scenarios, you might need to override the system requirements detected on your machine. This can be particularly useful when working on systems that do not meet the project's default requirements.

You can override virtual packages by setting the following environment variables:
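As an illustration, assuming the conda-style override variables (such as CONDA_OVERRIDE_CUDA) are honored, an override can be passed inline when invoking pixi:

```shell
# Pretend the machine provides CUDA 12.0 during resolution,
# even if no matching driver is detected locally
CONDA_OVERRIDE_CUDA=12.0 pixi install
```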

"},{"location":"features/system_requirements/#additional-resources","title":"Additional Resources","text":"

For more detailed information on managing virtual packages and overriding system requirements, refer to the Conda Documentation.

"},{"location":"ide_integration/devcontainer/","title":"Use pixi inside of a devcontainer","text":"

VSCode Devcontainers are a popular tool to develop on a project with a consistent environment. They are also used in GitHub Codespaces which makes it a great way to develop on a project without having to install anything on your local machine.

To use pixi inside of a devcontainer, follow these steps:

Create a new directory .devcontainer in the root of your project. Then, create the following two files in the .devcontainer directory:

.devcontainer/Dockerfile
FROM mcr.microsoft.com/devcontainers/base:jammy\n\nARG PIXI_VERSION=v0.32.1\n\nRUN curl -L -o /usr/local/bin/pixi -fsSL --compressed \"https://github.com/prefix-dev/pixi/releases/download/${PIXI_VERSION}/pixi-$(uname -m)-unknown-linux-musl\" \\\n    && chmod +x /usr/local/bin/pixi \\\n    && pixi info\n\n# set some user and workdir settings to work nicely with vscode\nUSER vscode\nWORKDIR /home/vscode\n\nRUN echo 'eval \"$(pixi completion -s bash)\"' >> /home/vscode/.bashrc\n
.devcontainer/devcontainer.json
{\n    \"name\": \"my-project\",\n    \"build\": {\n      \"dockerfile\": \"Dockerfile\",\n      \"context\": \"..\",\n    },\n    \"customizations\": {\n      \"vscode\": {\n        \"settings\": {},\n        \"extensions\": [\"ms-python.python\", \"charliermarsh.ruff\", \"GitHub.copilot\"]\n      }\n    },\n    \"features\": {\n      \"ghcr.io/devcontainers/features/docker-in-docker:2\": {}\n    },\n    \"mounts\": [\"source=${localWorkspaceFolderBasename}-pixi,target=${containerWorkspaceFolder}/.pixi,type=volume\"],\n    \"postCreateCommand\": \"sudo chown vscode .pixi && pixi install\"\n}\n

Put .pixi in a mount

In the above example, we mount the .pixi directory into a volume. This is needed since the .pixi directory shouldn't live on a case-insensitive filesystem (the default on macOS and Windows) but instead in its own volume. Some conda packages (for example ncurses-feedstock#73) contain files that differ only in case, which leads to errors on case-insensitive filesystems.

"},{"location":"ide_integration/devcontainer/#secrets","title":"Secrets","text":"

If you want to authenticate to a private conda channel, you can add secrets to your devcontainer.

.devcontainer/devcontainer.json
{\n    \"build\": \"Dockerfile\",\n    \"context\": \"..\",\n    \"options\": [\n        \"--secret\",\n        \"id=prefix_dev_token,env=PREFIX_DEV_TOKEN\",\n    ],\n    // ...\n}\n
.devcontainer/Dockerfile
# ...\nRUN --mount=type=secret,id=prefix_dev_token,uid=1000 \\\n    test -s /run/secrets/prefix_dev_token \\\n    && pixi auth login --token \"$(cat /run/secrets/prefix_dev_token)\" https://repo.prefix.dev\n

These secrets need to be present either as an environment variable when starting the devcontainer locally or in your GitHub Codespaces settings under Secrets.
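For local testing, a sketch of providing the secret as an environment variable when bringing the container up with the devcontainer CLI (the token value is a placeholder):

```shell
# Assumes the devcontainer CLI is installed; the token is a placeholder
export PREFIX_DEV_TOKEN="pfx_xxxxxxxx"
devcontainer up --workspace-folder .
```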

"},{"location":"ide_integration/jupyterlab/","title":"JupyterLab Integration","text":""},{"location":"ide_integration/jupyterlab/#basic-usage","title":"Basic usage","text":"

Using JupyterLab with pixi is very simple. You can just create a new pixi project and add the jupyterlab package to it. The full example is provided under the following GitHub link.

pixi init\npixi add jupyterlab\n

This will create a new pixi project and add the jupyterlab package to it. You can then start JupyterLab using the following command:

pixi run jupyter lab\n

If you want to add more \"kernels\" to JupyterLab, you can simply add them to your current project \u2013 as well as any dependencies from the scientific stack you might need.

pixi add bash_kernel ipywidgets matplotlib numpy pandas  # ...\n
"},{"location":"ide_integration/jupyterlab/#what-kernels-are-available","title":"What kernels are available?","text":"

You can easily install more \"kernels\" for JupyterLab. The conda-forge repository has a number of interesting additional kernels - not just Python!

"},{"location":"ide_integration/jupyterlab/#advanced-usage","title":"Advanced usage","text":"

If you want to have only one instance of JupyterLab running but still want per-directory Pixi environments, you can use one of the kernels provided by the pixi-kernel package.

"},{"location":"ide_integration/jupyterlab/#configuring-jupyterlab","title":"Configuring JupyterLab","text":"

To get started, create a Pixi project, add jupyterlab and pixi-kernel and then start JupyterLab:

pixi init\npixi add jupyterlab pixi-kernel\npixi run jupyter lab\n

This will start JupyterLab and open it in your browser.

pixi-kernel searches for a manifest file, either pixi.toml or pyproject.toml, in the same directory as your notebook or in any parent directory. When it finds one, it uses the environment specified in the manifest file to start the kernel and run your notebooks.

"},{"location":"ide_integration/jupyterlab/#binder","title":"Binder","text":"

If you just want to check a JupyterLab environment running in the cloud using pixi-kernel, you can visit Binder.

"},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"

You can use PyCharm with pixi environments by using the conda shim provided by the pixi-pycharm package.

"},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"

To get started, add pixi-pycharm to your pixi project.

pixi add pixi-pycharm\n

This will ensure that the conda shim is installed in your project's environment.

With pixi-pycharm installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda file (on Windows: conda.bat), which is located in .pixi/envs/default/libexec. You can get the path using the following command:

Linux & macOSWindows
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
pixi run 'echo $CONDA_PREFIX\\\\libexec\\\\conda.bat'\n

This is an executable that tricks PyCharm into thinking it's the proper conda executable. Under the hood it redirects all calls to the corresponding pixi equivalent.

Use the conda shim from this pixi project

Please make sure that this is the conda shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.

Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.

PyCharm should now be able to show you the installed packages as well.

You can now run your programs and tests as usual.

Mark .pixi as excluded

In order for PyCharm not to get confused about the .pixi directory, please mark it as excluded.

Also, when using a remote interpreter, you should exclude the .pixi directory on the remote machine. Instead, you should run pixi install on the remote machine and select the conda shim from there.

"},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"

If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by selecting Use existing environment in the Add Python Interpreter dialog.

You can then specify the corresponding environment in the bottom right corner of the PyCharm window.

"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"

When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also come up that you have multiple environments with the same name.

It is recommended to rename the environments to something unique.

"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"

Logs are written to ~/.cache/pixi-pycharm.log. You can use them to debug problems. Please attach the logs when filing a bug report.

"},{"location":"ide_integration/r_studio/","title":"Developing R scripts in RStudio","text":"

You can use pixi to manage your R dependencies. The conda-forge channel contains a wide range of R packages that can be installed using pixi.

"},{"location":"ide_integration/r_studio/#installing-r-packages","title":"Installing R packages","text":"

R packages are usually prefixed with r- in the conda-forge channel. To install an R package, you can use the following command:

pixi add r-<package-name>\n# for example\npixi add r-ggplot2\n
"},{"location":"ide_integration/r_studio/#using-r-packages-in-rstudio","title":"Using R packages in RStudio","text":"

To use the R packages installed by pixi in RStudio, you need to run rstudio from an activated environment. This can be achieved by running RStudio from pixi shell or from a task in the pixi.toml file.

"},{"location":"ide_integration/r_studio/#full-example","title":"Full example","text":"

The full example can be found here: RStudio example. Here is an example of a pixi.toml file that sets up an RStudio task:

[project]\nname = \"r\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[target.linux.tasks]\nrstudio = \"rstudio\"\n\n[target.osx.tasks]\nrstudio = \"open -a rstudio\"\n# or alternatively with the full path:\n# rstudio = \"/Applications/RStudio.app/Contents/MacOS/RStudio\"\n\n[dependencies]\nr = \">=4.3,<5\"\nr-ggplot2 = \">=3.5.0,<3.6\"\n

Once RStudio has loaded, you can execute the following R code that uses the ggplot2 package:

# Load the ggplot2 package\nlibrary(ggplot2)\n\n# Load the built-in 'mtcars' dataset\ndata <- mtcars\n\n# Create a scatterplot of 'mpg' vs 'wt'\nggplot(data, aes(x = wt, y = mpg)) +\n  geom_point() +\n  labs(x = \"Weight (1000 lbs)\", y = \"Miles per Gallon\") +\n  ggtitle(\"Fuel Efficiency vs. Weight\")\n

Note

This example assumes that you have installed RStudio system-wide. We are working on updating RStudio as well as the R interpreter builds on Windows for maximum compatibility with pixi.

"},{"location":"reference/cli/","title":"Commands","text":""},{"location":"reference/cli/#global-options","title":"Global options","text":""},{"location":"reference/cli/#init","title":"init","text":"

This command is used to create a new project. It initializes a pixi.toml file and also prepares a .gitignore to prevent the environment from being added to git.

It also supports the pyproject.toml file: if a pyproject.toml file exists in the directory where you run pixi init, the pixi data is appended to it instead of creating a new pixi.toml file.

"},{"location":"reference/cli/#arguments","title":"Arguments","text":"
  1. [PATH]: Where to place the project (defaults to current path) [default: .]
"},{"location":"reference/cli/#options","title":"Options","text":"

Importing an environment.yml

When importing an environment, the pixi.toml will be created with the dependencies from the environment file. The pixi.lock will be created when you install the environment. git+ URLs are not supported as dependencies for pip packages, and for the defaults channel we use main, r and msys2 as the default channels.

pixi init myproject\npixi init ~/myproject\npixi init  # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\npixi init --format pyproject\npixi init --format pixi\n
"},{"location":"reference/cli/#add","title":"add","text":"

Adds dependencies to the manifest file. It will only add the package if its version constraint works with the rest of the dependencies in the project. More info on multi-platform configuration.

If the project manifest is a pyproject.toml, adding a pypi dependency will add it to the native pyproject project.dependencies array, or to the native project.optional-dependencies table if a feature is specified:

These dependencies will be read by pixi as if they had been added to the pixi pypi-dependencies tables of the default or a named feature.
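To illustrate, here is a minimal sketch of how a pyproject.toml might look after adding a default and a feature-scoped pypi dependency (the project name is hypothetical; package names are taken from the examples below, and the exact formatting pixi writes may differ):

```toml
[project]
name = \"myproject\"  # hypothetical project name
dependencies = [
    # added by: pixi add --pypi requests[security]
    \"requests[security]\",
]

[project.optional-dependencies]
lint = [
    # added by: pixi add --pypi boltons>=24.0.0 --feature lint
    \"boltons>=24.0.0\",
]
```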

"},{"location":"reference/cli/#arguments_1","title":"Arguments","text":"
  1. [SPECS]: The package(s) to add, space separated. The version constraint is optional.
"},{"location":"reference/cli/#options_1","title":"Options","text":"
pixi add numpy # (1)!\npixi add numpy pandas \"pytorch>=1.8\" # (2)!\npixi add \"numpy>=1.22,<1.24\" # (3)!\npixi add --manifest-path ~/myproject/pixi.toml numpy # (4)!\npixi add --host \"python>=3.9.0\" # (5)!\npixi add --build cmake # (6)!\npixi add --platform osx-64 clang # (7)!\npixi add --no-install numpy # (8)!\npixi add --no-lockfile-update numpy # (9)!\npixi add --feature featurex numpy # (10)!\n\n# Add a pypi dependency\npixi add --pypi requests[security] # (11)!\npixi add --pypi Django==5.1rc1 # (12)!\npixi add --pypi \"boltons>=24.0.0\" --feature lint # (13)!\npixi add --pypi \"boltons @ https://files.pythonhosted.org/packages/46/35/e50d4a115f93e2a3fbf52438435bb2efcf14c11d4fcd6bdcd77a6fc399c9/boltons-24.0.0-py3-none-any.whl\" # (14)!\npixi add --pypi \"exchangelib @ git+https://github.com/ecederstrand/exchangelib\" # (15)!\npixi add --pypi \"project @ file:///absolute/path/to/project\" # (16)!\npixi add --pypi \"project@file:///absolute/path/to/project\" --editable # (17)!\n
  1. This will add the numpy package to the project with the latest version available for the solved environment.
  2. This will add multiple packages to the project solving them all together.
  3. This will add the numpy package with the version constraint.
  4. This will add the numpy package to the project of the manifest file at the given path.
  5. This will add the python package as a host dependency. There is currently no different behavior for host dependencies.
  6. This will add the cmake package as a build dependency. There is currently no different behavior for build dependencies.
  7. This will add the clang package only for the osx-64 platform.
  8. This will add the numpy package to the manifest and lockfile, without installing it in an environment.
  9. This will add the numpy package to the manifest without updating the lockfile or installing it in the environment.
  10. This will add the numpy package in the feature featurex.
  11. This will add the requests package as pypi dependency with the security extra.
  12. This will add the pre-release version of Django to the project as a pypi dependency.
  13. This will add the boltons package in the feature lint as pypi dependency.
  14. This will add the boltons package with the given url as pypi dependency.
  15. This will add the exchangelib package with the given git url as pypi dependency.
  16. This will add the project package with the given file url as pypi dependency.
  17. This will add the project package with the given file url as an editable package as pypi dependency.

Tip

If you want to use a non default pinning strategy, you can set it using pixi's configuration.

pixi config set pinning-strategy no-pin --global\n
The default is semver, which pins dependencies to the latest major version, or to the minor version for v0 releases.
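As an illustrative sketch (the version numbers and the v0 package name are hypothetical, not from this document), the semver strategy turns added packages into constraints like:

```toml
[dependencies]
# semver pinning: pinned up to the next major version
numpy = \">=1.26,<2\"
# for v0 packages the minor version is pinned instead
some-v0-package = \">=0.4,<0.5\"  # hypothetical package name
```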

"},{"location":"reference/cli/#install","title":"install","text":"

Installs an environment based on the manifest file. If there is no pixi.lock file or it is not up-to-date with the manifest file, it will (re-)generate the lock file.

pixi install only installs one environment at a time, if you have multiple environments you can select the right one with the --environment flag. If you don't provide an environment, the default environment will be installed.

Running pixi install is not required before running other commands, as all commands interacting with the environment will first run the install command if the environment is not ready, to make sure you always run in a correct state. E.g. pixi run, pixi shell, pixi shell-hook, pixi add, pixi remove, to name a few.

"},{"location":"reference/cli/#options_2","title":"Options","text":"
pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\npixi install --environment lint\npixi install -e lint\n
"},{"location":"reference/cli/#update","title":"update","text":"

The update command checks if there are newer versions of the dependencies and updates the pixi.lock file and environments accordingly. It will only update the lock file if the dependencies in the manifest file are still compatible with the new versions.

"},{"location":"reference/cli/#arguments_2","title":"Arguments","text":"
  1. [PACKAGES]... The packages to update, space separated. If no packages are provided, all packages will be updated.
"},{"location":"reference/cli/#options_3","title":"Options","text":"
pixi update numpy\npixi update numpy pandas\npixi update --manifest-path ~/myproject/pixi.toml numpy\npixi update --environment lint python\npixi update -e lint -e schema -e docs pre-commit\npixi update --platform osx-arm64 mlx\npixi update -p linux-64 -p osx-64 numpy\npixi update --dry-run\npixi update --no-install boto3\n
"},{"location":"reference/cli/#run","title":"run","text":"

The run command first checks if the environment is ready to use. If you didn't run pixi install, the run command will do that for you. The custom tasks defined in the manifest file are also available through the run command.

You cannot run pixi run source setup.bash, as source is not available in the deno_task_shell commands and is not an executable.

"},{"location":"reference/cli/#arguments_3","title":"Arguments","text":"
  1. [TASK]... The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.
"},{"location":"reference/cli/#options_4","title":"Options","text":"

Info

In pixi, the deno_task_shell is the underlying runner of the run command. Check out their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.

Cross environment tasks

If you're using the depends-on feature of the tasks, the tasks will be run in the order you specified them. The depends-on can be used cross environment, e.g. you have this pixi.toml:

pixi.toml
[tasks]\nstart = { cmd = \"python start.py\", depends-on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n

Then you're able to run build from the build environment and start from the default environment, by only calling:

pixi run start\n

"},{"location":"reference/cli/#exec","title":"exec","text":"

Runs a command in a temporary environment disconnected from any project. This can be useful to quickly test out a certain package or version.

Temporary environments are cached. If the same command is run again, the same environment will be reused.

Cleaning temporary environments

Currently, temporary environments can only be cleaned up manually. Environments for pixi exec are stored under cached-envs-v0/ in the cache directory. Run pixi info to find the cache directory.

"},{"location":"reference/cli/#arguments_4","title":"Arguments","text":"
  1. <COMMAND>: The command to run.
"},{"location":"reference/cli/#options_5","title":"Options:","text":"
pixi exec python\n\n# Add a constraint to the python version\npixi exec -s python=3.9 python\n\n# Run ipython and include the py-rattler package in the environment\npixi exec -s ipython -s py-rattler ipython\n\n# Force reinstall to recreate the environment and get the latest package versions\npixi exec --force-reinstall -s ipython -s py-rattler ipython\n
"},{"location":"reference/cli/#remove","title":"remove","text":"

Removes dependencies from the manifest file.

If the project manifest is a pyproject.toml, removing a pypi dependency with the --pypi flag will remove it from either the native pyproject project.dependencies array or the native project.optional-dependencies table (if a feature is specified), or from the pixi pypi-dependencies tables of the default or a named feature (if a feature is specified).

"},{"location":"reference/cli/#arguments_5","title":"Arguments","text":"
  1. <DEPS>...: List of dependencies you wish to remove from the project.
"},{"location":"reference/cli/#options_6","title":"Options","text":"
pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\npixi remove --no-install numpy\n
"},{"location":"reference/cli/#task","title":"task","text":"

If you want to make a shorthand for a specific command, you can add a task for it.

"},{"location":"reference/cli/#options_7","title":"Options","text":""},{"location":"reference/cli/#task-add","title":"task add","text":"

Add a task to the manifest file; use --depends-on to add tasks you want to run before this task, e.g. build before an execute task.

"},{"location":"reference/cli/#arguments_6","title":"Arguments","text":"
  1. <NAME>: The name of the task.
  2. <COMMAND>: The command to run. This can be more than one word.

Info

If you are using $ for env variables, they will be resolved before being added to the task. If you want to use $ in the task, you need to escape it with a \, e.g. echo \$HOME.

"},{"location":"reference/cli/#options_8","title":"Options","text":"
pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\npixi task add publish-pypi \"hatch publish --yes --repo main\" --feature build --env HATCH_CONFIG=config/hatch.toml --description \"Publish the package to pypi\"\n

This adds the following to the manifest file:

[tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends-on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n\n[feature.build.tasks]\npublish-pypi = { cmd = \"hatch publish --yes --repo main\", env = { HATCH_CONFIG = \"config/hatch.toml\" }, description = \"Publish the package to pypi\" }\n

Which you can then run with the run command:

pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
"},{"location":"reference/cli/#task-remove","title":"task remove","text":"

Remove a task from the manifest file.

"},{"location":"reference/cli/#arguments_7","title":"Arguments","text":""},{"location":"reference/cli/#options_9","title":"Options","text":"
pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
"},{"location":"reference/cli/#task-alias","title":"task alias","text":"

Create an alias for a task.

"},{"location":"reference/cli/#arguments_8","title":"Arguments","text":"
  1. <ALIAS>: The alias name
  2. <DEPENDS_ON>: The names of the tasks you want to execute on this alias, order counts, first one runs first.
"},{"location":"reference/cli/#options_10","title":"Options","text":"
pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
"},{"location":"reference/cli/#task-list","title":"task list","text":"

List all tasks in the project.

"},{"location":"reference/cli/#options_11","title":"Options","text":"
pixi task list\npixi task list --environment cuda\npixi task list --summary\n
"},{"location":"reference/cli/#list","title":"list","text":"

List the project's packages. Highlighted packages are explicit dependencies.

"},{"location":"reference/cli/#options_12","title":"Options","text":"
pixi list\npixi list --json-pretty\npixi list --explicit\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n

Output will look like this, where python will be green as it is the package that was explicitly added to the manifest file:

\u279c pixi list\n Package           Version     Build               Size       Kind   Source\n _libgcc_mutex     0.1         conda_forge         2.5 KiB    conda  _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex     4.5         2_gnu               23.1 KiB   conda  _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2             1.0.8       hd590300_5          248.3 KiB  conda  bzip2-1.0.8-hd590300_5.conda\n ca-certificates   2023.11.17  hbcca054_0          150.5 KiB  conda  ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64  2.40        h41732ed_0          688.2 KiB  conda  ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat          2.5.0       hcb278e6_1          76.2 KiB   conda  libexpat-2.5.0-hcb278e6_1.conda\n libffi            3.4.2       h7f98852_5          56.9 KiB   conda  libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng         13.2.0      h807b86a_4          755.7 KiB  conda  libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp           13.2.0      h807b86a_4          412.2 KiB  conda  libgomp-13.2.0-h807b86a_4.conda\n libnsl            2.0.1       hd590300_0          32.6 KiB   conda  libnsl-2.0.1-hd590300_0.conda\n libsqlite         3.44.2      h2797004_0          826 KiB    conda  libsqlite-3.44.2-h2797004_0.conda\n libuuid           2.38.1      h0b41bf4_0          32.8 KiB   conda  libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt         4.4.36      hd590300_1          98 KiB     conda  libxcrypt-4.4.36-hd590300_1.conda\n libzlib           1.2.13      hd590300_5          60.1 KiB   conda  libzlib-1.2.13-hd590300_5.conda\n ncurses           6.4         h59595ed_2          863.7 KiB  conda  ncurses-6.4-h59595ed_2.conda\n openssl           3.2.0       hd590300_1          2.7 MiB    conda  openssl-3.2.0-hd590300_1.conda\n python            3.12.1      hab00c5b_1_cpython  30.8 MiB   conda  python-3.12.1-hab00c5b_1_cpython.conda\n readline          8.2         h8228510_1          274.9 KiB  conda  readline-8.2-h8228510_1.conda\n tk                8.6.13      
noxft_h4845f30_101  3.2 MiB    conda  tk-8.6.13-noxft_h4845f30_101.conda\n tzdata            2023d       h0c530f3_0          116.8 KiB  conda  tzdata-2023d-h0c530f3_0.conda\n xz                5.2.6       h166bdaf_0          408.6 KiB  conda  xz-5.2.6-h166bdaf_0.tar.bz2\n
"},{"location":"reference/cli/#tree","title":"tree","text":"

Display the project's packages in a tree. Highlighted packages are those specified in the manifest.

The package tree can also be inverted (-i), to see which packages require a specific dependency.

"},{"location":"reference/cli/#arguments_9","title":"Arguments","text":""},{"location":"reference/cli/#options_13","title":"Options","text":"
pixi tree\npixi tree pre-commit\npixi tree -i yaml\npixi tree --environment docs\npixi tree --platform win-64\n

Warning

Use -v to show which pypi packages are not yet parsed correctly. The extras and markers parsing is still under development.

Output will look like this, where direct packages in the manifest file will be green. Once a package has been displayed once, the tree won't continue to recurse through its dependencies (compare the first time python appears, vs the rest), and it will instead be marked with a star (*).

Version numbers are colored by the package type, yellow for Conda packages and blue for PyPI.

\u279c pixi tree\n\u251c\u2500\u2500 pre-commit v3.3.3\n\u2502   \u251c\u2500\u2500 cfgv v3.3.1\n\u2502   \u2502   \u2514\u2500\u2500 python v3.12.2\n\u2502   \u2502       \u251c\u2500\u2500 bzip2 v1.0.8\n\u2502   \u2502       \u251c\u2500\u2500 libexpat v2.6.2\n\u2502   \u2502       \u251c\u2500\u2500 libffi v3.4.2\n\u2502   \u2502       \u251c\u2500\u2500 libsqlite v3.45.2\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13\n\u2502   \u2502       \u251c\u2500\u2500 libzlib v1.2.13 (*)\n\u2502   \u2502       \u251c\u2500\u2500 ncurses v6.4.20240210\n\u2502   \u2502       \u251c\u2500\u2500 openssl v3.2.1\n\u2502   \u2502       \u251c\u2500\u2500 readline v8.2\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 ncurses v6.4.20240210 (*)\n\u2502   \u2502       \u251c\u2500\u2500 tk v8.6.13\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13 (*)\n\u2502   \u2502       \u2514\u2500\u2500 xz v5.2.6\n\u2502   \u251c\u2500\u2500 identify v2.5.35\n\u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n...\n\u2514\u2500\u2500 tbump v6.9.0\n...\n    \u2514\u2500\u2500 tomlkit v0.12.4\n        \u2514\u2500\u2500 python v3.12.2 (*)\n

A regex pattern can be specified to filter the tree to just those that show a specific direct, or transitive dependency:

\u279c pixi tree pre-commit\n\u2514\u2500\u2500 pre-commit v3.3.3\n    \u251c\u2500\u2500 virtualenv v20.25.1\n    \u2502   \u251c\u2500\u2500 filelock v3.13.1\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2\n    \u2502   \u2502       \u251c\u2500\u2500 libexpat v2.6.2\n    \u2502   \u2502       \u251c\u2500\u2500 readline v8.2\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 ncurses v6.4.20240210\n    \u2502   \u2502       \u251c\u2500\u2500 libsqlite v3.45.2\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13\n    \u2502   \u2502       \u251c\u2500\u2500 bzip2 v1.0.8\n    \u2502   \u2502       \u251c\u2500\u2500 libzlib v1.2.13 (*)\n    \u2502   \u2502       \u251c\u2500\u2500 libffi v3.4.2\n    \u2502   \u2502       \u251c\u2500\u2500 tk v8.6.13\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13 (*)\n    \u2502   \u2502       \u251c\u2500\u2500 xz v5.2.6\n    \u2502   \u2502       \u251c\u2500\u2500 ncurses v6.4.20240210 (*)\n    \u2502   \u2502       \u2514\u2500\u2500 openssl v3.2.1\n    \u2502   \u251c\u2500\u2500 platformdirs v4.2.0\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u2502   \u251c\u2500\u2500 distlib v0.3.8\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u251c\u2500\u2500 pyyaml v6.0.1\n...\n

Additionally, the tree can be inverted, and it can show which packages depend on a regex pattern. The packages specified in the manifest will also be highlighted (in this case cffconvert and pre-commit would be).

\u279c pixi tree -i yaml\n\nruamel.yaml v0.18.6\n\u251c\u2500\u2500 pykwalify v1.8.0\n\u2502   \u2514\u2500\u2500 cffconvert v2.0.0\n\u2514\u2500\u2500 cffconvert v2.0.0\n\npyyaml v6.0.1\n\u2514\u2500\u2500 pre-commit v3.3.3\n\nruamel.yaml.clib v0.2.8\n\u2514\u2500\u2500 ruamel.yaml v0.18.6\n    \u251c\u2500\u2500 pykwalify v1.8.0\n    \u2502   \u2514\u2500\u2500 cffconvert v2.0.0\n    \u2514\u2500\u2500 cffconvert v2.0.0\n\nyaml v0.2.5\n\u2514\u2500\u2500 pyyaml v6.0.1\n    \u2514\u2500\u2500 pre-commit v3.3.3\n
"},{"location":"reference/cli/#shell","title":"shell","text":"

This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit.

"},{"location":"reference/cli/#options_14","title":"Options","text":"
pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
"},{"location":"reference/cli/#shell-hook","title":"shell-hook","text":"

This command prints the activation script of an environment.

"},{"location":"reference/cli/#options_15","title":"Options","text":"
pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\npixi shell-hook --json\n

An example use case is when you want to get rid of the pixi executable in a Docker container.

pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n
"},{"location":"reference/cli/#search","title":"search","text":"

Search for a package; the output will list the latest version of the package.

"},{"location":"reference/cli/#arguments_10","title":"Arguments","text":"
  1. <PACKAGE>: Name of the package to search for; it's possible to use wildcards (*).
"},{"location":"reference/cli/#options_16","title":"Options","text":"
pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
"},{"location":"reference/cli/#self-update","title":"self-update","text":"

Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi), pixi won't update, to prevent breaking the current installation (Homebrew, etc.). This behaviour can be overridden with the --force flag.

"},{"location":"reference/cli/#options_17","title":"Options","text":"
pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
"},{"location":"reference/cli/#info","title":"info","text":"

Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.

"},{"location":"reference/cli/#options_18","title":"Options","text":"
pixi info\npixi info --json --extended\n
"},{"location":"reference/cli/#clean","title":"clean","text":"

Clean the parts of your system which are touched by pixi. Defaults to cleaning the environments and task cache. Use the cache subcommand to clean the cache.

"},{"location":"reference/cli/#options_19","title":"Options","text":"
pixi clean\n
"},{"location":"reference/cli/#clean-cache","title":"clean cache","text":"

Clean the pixi cache on your system.

"},{"location":"reference/cli/#options_20","title":"Options","text":"
pixi clean cache # clean all pixi caches\npixi clean cache --pypi # clean only the pypi cache\npixi clean cache --conda # clean only the conda cache\npixi clean cache --yes # skip the confirmation prompt\n
"},{"location":"reference/cli/#upload","title":"upload","text":"

Upload a package to a prefix.dev channel.

"},{"location":"reference/cli/#arguments_11","title":"Arguments","text":"
  1. <HOST>: The host + channel to upload to.
  2. <PACKAGE_FILE>: The package file to upload.
pixi upload https://prefix.dev/api/v1/upload/my_channel my_package.conda\n
"},{"location":"reference/cli/#auth","title":"auth","text":"

This command is used to authenticate the user's access to remote hosts such as prefix.dev or anaconda.org for private channels.

"},{"location":"reference/cli/#auth-login","title":"auth login","text":"

Store authentication information for given host.

Tip

The host is the real hostname, not a channel.

"},{"location":"reference/cli/#arguments_12","title":"Arguments","text":"
  1. <HOST>: The host to authenticate with.
"},{"location":"reference/cli/#options_21","title":"Options","text":"
pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaAndHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --username john --password xxxxxx\n
"},{"location":"reference/cli/#auth-logout","title":"auth logout","text":"

Remove authentication information for a given host.

"},{"location":"reference/cli/#arguments_13","title":"Arguments","text":"
  1. <HOST>: The host to remove authentication information for.
pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
"},{"location":"reference/cli/#config","title":"config","text":"

Use this command to manage the configuration.

"},{"location":"reference/cli/#options_22","title":"Options","text":"

Check out the pixi configuration for more information about the locations.

"},{"location":"reference/cli/#config-edit","title":"config edit","text":"

Edit the configuration file in the default editor.

pixi config edit --system\npixi config edit --local\npixi config edit -g\n
"},{"location":"reference/cli/#config-list","title":"config list","text":"

List the configuration

"},{"location":"reference/cli/#arguments_14","title":"Arguments","text":"
  1. [KEY]: The key to list the value of. (all if not provided)
"},{"location":"reference/cli/#options_23","title":"Options","text":"
pixi config list default-channels\npixi config list --json\npixi config list --system\npixi config list -g\n
"},{"location":"reference/cli/#config-prepend","title":"config prepend","text":"

Prepend a value to a list configuration key.

"},{"location":"reference/cli/#arguments_15","title":"Arguments","text":"
  1. <KEY>: The key to prepend the value to.
  2. <VALUE>: The value to prepend.
pixi config prepend default-channels conda-forge\n
"},{"location":"reference/cli/#config-append","title":"config append","text":"

Append a value to a list configuration key.

"},{"location":"reference/cli/#arguments_16","title":"Arguments","text":"
  1. <KEY>: The key to append the value to.
  2. <VALUE>: The value to append.
pixi config append default-channels robostack\npixi config append default-channels bioconda --global\n
"},{"location":"reference/cli/#config-set","title":"config set","text":"

Set a configuration key to a value.

"},{"location":"reference/cli/#arguments_17","title":"Arguments","text":"
  1. <KEY>: The key to set the value of.
  2. [VALUE]: The value to set. (if not provided, the key will be removed)
pixi config set default-channels '[\"conda-forge\", \"bioconda\"]'\npixi config set --global mirrors '{\"https://conda.anaconda.org/\": [\"https://prefix.dev/conda-forge\"]}'\npixi config set repodata-config.disable-zstd true --system\npixi config set --global detached-environments \"/opt/pixi/envs\"\npixi config set detached-environments false\n
"},{"location":"reference/cli/#config-unset","title":"config unset","text":"

Unset a configuration key.

"},{"location":"reference/cli/#arguments_18","title":"Arguments","text":"
  1. <KEY>: The key to unset.
pixi config unset default-channels\npixi config unset --global mirrors\npixi config unset repodata-config.disable-zstd --system\n
"},{"location":"reference/cli/#global","title":"global","text":"

Global is the main entry point for the part of pixi that executes at the global (system) level.

Tip

Binaries and environments installed globally are stored in ~/.pixi by default; this can be changed by setting the PIXI_HOME environment variable.
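For example, a minimal sketch of relocating the global install location (the /opt/pixi path is a hypothetical choice):

```shell
# Assumption: pixi reads PIXI_HOME to decide where global binaries/environments live
export PIXI_HOME=/opt/pixi
# Make the relocated bin directory discoverable
export PATH=$PIXI_HOME/bin:$PATH
```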

"},{"location":"reference/cli/#global-install","title":"global install","text":"

This command installs package(s) into its own environment and adds the binary to PATH, allowing you to access it anywhere on your system without activating the environment.

"},{"location":"reference/cli/#arguments_19","title":"Arguments","text":"

1. <PACKAGE>: The package(s) to install; this can also include a version constraint.

"},{"location":"reference/cli/#options_24","title":"Options","text":"
pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n\n# Install for a specific platform, only useful on osx-arm64\npixi global install --platform osx-64 ruff\n\n# Install without inserting activation code into the executable script\npixi global install ruff --no-activation\n

Tip

Running osx-64 on Apple Silicon will install the Intel binary but run it using Rosetta

pixi global install --platform osx-64 ruff\n

After using global install, you can use the package you installed anywhere on your system.

"},{"location":"reference/cli/#global-list","title":"global list","text":"

This command shows the currently installed global environments, including the binaries that come with them. A globally installed package/environment can contain multiple binaries, which will be listed in the command output. Here is an example of a few installed packages:

> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n|   \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n|   \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n|   \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n|   \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n    \u2514\u2500 exec: uv\n
"},{"location":"reference/cli/#global-upgrade","title":"global upgrade","text":"

This command upgrades a globally installed package (to the latest version by default).

"},{"location":"reference/cli/#arguments_20","title":"Arguments","text":"
  1. <PACKAGE>: The package to upgrade.
"},{"location":"reference/cli/#options_25","title":"Options","text":"
pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"reference/cli/#global-upgrade-all","title":"global upgrade-all","text":"

This command upgrades all globally installed packages to their latest version.

"},{"location":"reference/cli/#options_26","title":"Options","text":"
pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
"},{"location":"reference/cli/#global-remove","title":"global remove","text":"

Removes a package previously installed into a globally accessible location via pixi global install

Use pixi global list to find the package name that belongs to the tool you want to remove.

"},{"location":"reference/cli/#arguments_21","title":"Arguments","text":"
  1. <PACKAGE>: The package(s) to remove.
pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"reference/cli/#project","title":"project","text":"

This subcommand allows you to modify the project configuration through the command line interface.

"},{"location":"reference/cli/#options_27","title":"Options","text":""},{"location":"reference/cli/#project-channel-add","title":"project channel add","text":"

Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lock file, and the environment is reinstalled.

"},{"location":"reference/cli/#arguments_22","title":"Arguments","text":"
  1. <CHANNEL>: The channels to add, name or URL.
"},{"location":"reference/cli/#options_28","title":"Options","text":"
pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"reference/cli/#project-channel-list","title":"project channel list","text":"

List the channels in the manifest file

"},{"location":"reference/cli/#options_29","title":"Options","text":"
$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"reference/cli/#project-channel-remove","title":"project channel remove","text":"

Remove channels from the channel list in the manifest file.

"},{"location":"reference/cli/#arguments_23","title":"Arguments","text":"
  1. <CHANNEL>...: The channels to remove, name(s) or URL(s).
"},{"location":"reference/cli/#options_30","title":"Options","text":"
pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"reference/cli/#project-description-get","title":"project description get","text":"

Get the project description.

$ pixi project description get\nPackage management made easy!\n
"},{"location":"reference/cli/#project-description-set","title":"project description set","text":"

Set the project description.

"},{"location":"reference/cli/#arguments_24","title":"Arguments","text":"
  1. <DESCRIPTION>: The description to set.
pixi project description set \"my new description\"\n
"},{"location":"reference/cli/#project-environment-add","title":"project environment add","text":"

Add an environment to the manifest file.

"},{"location":"reference/cli/#arguments_25","title":"Arguments","text":"
  1. <NAME>: The name of the environment to add.
"},{"location":"reference/cli/#options_31","title":"Options","text":"
pixi project environment add env1 --feature feature1 --feature feature2\npixi project environment add env2 -f feature1 --solve-group test\npixi project environment add env3 -f feature1 --no-default-feature\npixi project environment add env3 -f feature1 --force\n
"},{"location":"reference/cli/#project-environment-remove","title":"project environment remove","text":"

Remove an environment from the manifest file.

"},{"location":"reference/cli/#arguments_26","title":"Arguments","text":"
  1. <NAME>: The name of the environment to remove.
pixi project environment remove env1\n
"},{"location":"reference/cli/#project-environment-list","title":"project environment list","text":"

List the environments in the manifest file.

pixi project environment list\n
"},{"location":"reference/cli/#project-export-conda_environment","title":"project export conda_environment","text":"

Exports a conda environment.yml file. The file can be used to create a conda environment using conda/mamba:

pixi project export conda-environment environment.yml\nmamba create --name <env> --file environment.yml\n
"},{"location":"reference/cli/#arguments_27","title":"Arguments","text":"
  1. <OUTPUT_PATH>: Optional path to render environment.yml to. Otherwise it will be printed to standard out.
"},{"location":"reference/cli/#options_32","title":"Options","text":"
pixi project export conda-environment --environment lint\npixi project export conda-environment --platform linux-64 environment.linux-64.yml\n
"},{"location":"reference/cli/#project-export-conda_explicit_spec","title":"project export conda_explicit_spec","text":"

Render a platform-specific conda explicit specification file for an environment. The file can then be used to create a conda environment using conda/mamba:

mamba create --name <env> --file <explicit spec file>\n

As the explicit specification file format does not support pypi-dependencies, use the --ignore-pypi-errors option to ignore those dependencies.

"},{"location":"reference/cli/#arguments_28","title":"Arguments","text":"
  1. <OUTPUT_DIR>: Output directory for rendered explicit environment spec files.
"},{"location":"reference/cli/#options_33","title":"Options","text":"
pixi project export conda_explicit_spec output\npixi project export conda_explicit_spec -e default -e test -p linux-64 output\n
"},{"location":"reference/cli/#project-platform-add","title":"project platform add","text":"

Add platform(s) to the manifest file and update the lock file.

"},{"location":"reference/cli/#arguments_29","title":"Arguments","text":"
  1. <PLATFORM>...: The platforms to add.
"},{"location":"reference/cli/#options_34","title":"Options","text":"
pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"reference/cli/#project-platform-list","title":"project platform list","text":"

List the platforms in the manifest file.

$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"reference/cli/#project-platform-remove","title":"project platform remove","text":"

Remove platform(s) from the manifest file and update the lock file.

"},{"location":"reference/cli/#arguments_30","title":"Arguments","text":"
  1. <PLATFORM>...: The platforms to remove.
"},{"location":"reference/cli/#options_35","title":"Options","text":"
pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"reference/cli/#project-version-get","title":"project version get","text":"

Get the project version.

$ pixi project version get\n0.11.0\n
"},{"location":"reference/cli/#project-version-set","title":"project version set","text":"

Set the project version.

"},{"location":"reference/cli/#arguments_31","title":"Arguments","text":"
  1. <VERSION>: The version to set.
pixi project version set \"0.13.0\"\n
"},{"location":"reference/cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}","text":"

Bump the project version to {MAJOR|MINOR|PATCH}.

pixi project version major\npixi project version minor\npixi project version patch\n
  1. An up-to-date lock file means that the dependencies in the lock file are allowed by the dependencies in the manifest file. For example:

    • a manifest with python = \">= 3.11\" is up-to-date with a name: python, version: 3.11.0 in the pixi.lock.
    • a manifest with python = \">= 3.12\" is not up-to-date with a name: python, version: 3.11.0 in the pixi.lock.

    Being up-to-date does not mean that the lock file holds the latest version available on the channel for the given dependency.
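The footnote's rule can be sketched for the simple `>= X` case shown above (a hedged illustration only — real conda version ordering handles far more than dotted integers):

```python
def _ver(v: str) -> tuple[int, ...]:
    """Parse a plain dotted numeric version into a comparable tuple."""
    return tuple(int(p) for p in v.split("."))

def is_up_to_date(manifest_min: str, locked_version: str) -> bool:
    """A `>= X` manifest constraint is satisfied when the locked
    version is at least X; the lock file is then 'up-to-date'."""
    return _ver(locked_version) >= _ver(manifest_min)

print(is_up_to_date("3.11", "3.11.0"))  # python = ">= 3.11" vs locked 3.11.0 -> True
print(is_up_to_date("3.12", "3.11.0"))  # python = ">= 3.12" vs locked 3.11.0 -> False
```

Note that both results are independent of whether a newer python exists on the channel, which is exactly the last point of the footnote.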

"},{"location":"reference/pixi_configuration/","title":"The configuration of pixi itself","text":"

Apart from the project-specific configuration, pixi supports configuration options that are not required for the project to work but are local to the machine. The configuration is loaded in the following order:

Linux

1. /etc/pixi/config.toml: System-wide configuration
2. $XDG_CONFIG_HOME/pixi/config.toml: XDG compliant user-specific configuration
3. $HOME/.config/pixi/config.toml: User-specific configuration
4. $PIXI_HOME/config.toml: Global configuration in the user home directory. PIXI_HOME defaults to ~/.pixi
5. your_project/.pixi/config.toml: Project-specific configuration
6. Command line arguments (--tls-no-verify, --change-ps1=false, etc.): Configuration via command line arguments

macOS

1. /etc/pixi/config.toml: System-wide configuration
2. $XDG_CONFIG_HOME/pixi/config.toml: XDG compliant user-specific configuration
3. $HOME/Library/Application Support/pixi/config.toml: User-specific configuration
4. $PIXI_HOME/config.toml: Global configuration in the user home directory. PIXI_HOME defaults to ~/.pixi
5. your_project/.pixi/config.toml: Project-specific configuration
6. Command line arguments (--tls-no-verify, --change-ps1=false, etc.): Configuration via command line arguments

Windows

1. C:\ProgramData\pixi\config.toml: System-wide configuration
2. %APPDATA%\pixi\config.toml: User-specific configuration
3. $PIXI_HOME\config.toml: Global configuration in the user home directory. PIXI_HOME defaults to %USERPROFILE%/.pixi
4. your_project\.pixi\config.toml: Project-specific configuration
5. Command line arguments (--tls-no-verify, --change-ps1=false, etc.): Configuration via command line arguments

Note

The highest priority wins. If a configuration file is found in a higher priority location, the values from the configuration read from lower priority locations are overwritten.

Note

To find the locations where pixi looks for configuration files, run pixi with -vv.
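The priority rule above amounts to a per-key merge where higher-priority sources overwrite lower-priority ones. A minimal sketch of that merge (a simplification, not pixi's actual loading code):

```python
def merge_configs(configs_low_to_high):
    """Merge config dicts read from lowest to highest priority.

    A value from a higher-priority source overwrites the same key from a
    lower-priority source; keys set only at lower priority are kept.
    """
    merged = {}
    for cfg in configs_low_to_high:
        merged.update(cfg)
    return merged

# The system config sets two options; the project config overrides one of them.
system = {"tls-no-verify": False, "default-channels": ["conda-forge"]}
project = {"tls-no-verify": True}
print(merge_configs([system, project]))
# {'tls-no-verify': True, 'default-channels': ['conda-forge']}
```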

"},{"location":"reference/pixi_configuration/#reference","title":"Reference","text":"Casing In Configuration

In pixi 0.20.1 and older, the global configuration used snake_case; we've changed to kebab-case for consistency with the rest of the configuration. The old snake_case spellings are still supported for those older configuration options.

The following reference describes all available configuration options.

"},{"location":"reference/pixi_configuration/#default-channels","title":"default-channels","text":"

The default channels to select when running pixi init or pixi global install. This defaults to only conda-forge. config.toml

default-channels = [\"conda-forge\"]\n

Note

The default-channels are only used when initializing a new project. Once initialized the channels are used from the project manifest.

"},{"location":"reference/pixi_configuration/#change-ps1","title":"change-ps1","text":"

When set to false, the (pixi) prefix in the shell prompt is removed. This applies to the pixi shell subcommand. You can override this from the CLI with --change-ps1.

config.toml
change-ps1 = true\n
"},{"location":"reference/pixi_configuration/#tls-no-verify","title":"tls-no-verify","text":"

When set to true, the TLS certificates are not verified.

Warning

This is a security risk and should only be used for testing purposes or internal networks.

You can override this from the CLI with --tls-no-verify.

config.toml
tls-no-verify = false\n
"},{"location":"reference/pixi_configuration/#authentication-override-file","title":"authentication-override-file","text":"

Override from where the authentication information is loaded. Usually, we try to use the keyring to load authentication data from, and only use a JSON file as a fallback. This option allows you to force the use of a JSON file. Read more in the authentication section. config.toml

authentication-override-file = \"/path/to/your/override.json\"\n

"},{"location":"reference/pixi_configuration/#detached-environments","title":"detached-environments","text":"

The directory where pixi stores the project environments, which would normally be placed in the .pixi/envs folder in a project's root. It doesn't affect the environments built for pixi global. The location of environments created by pixi global install can be controlled using the PIXI_HOME environment variable.

Warning

We recommend against using this because any environment created for a project is no longer placed in the same folder as the project. This creates a disconnect between the project and its environments and manual cleanup of the environments is required when deleting the project.

However, in some cases this option can still be very useful.

This field accepts two types of input: a boolean or a path.

config.toml

detached-environments = true\n
or: config.toml
detached-environments = \"/opt/pixi/envs\"\n

The environments will be stored in the cache directory when this option is true. When you specify a custom path the environments will be stored in that directory.

The resulting directory structure will look like this: config.toml

detached-environments = \"/opt/pixi/envs\"\n
/opt/pixi/envs\n\u251c\u2500\u2500 pixi-6837172896226367631\n\u2502   \u2514\u2500\u2500 envs\n\u2514\u2500\u2500 NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH\n    \u251c\u2500\u2500 envs # the runnable environments\n    \u2514\u2500\u2500 solve-group-envs # If there are solve groups\n
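The per-project directory name combines the project name with a hash of its original path, which keeps identically named projects from colliding. A sketch of such a naming scheme (the hash algorithm and truncation here are assumptions for illustration, not pixi's actual implementation):

```python
import hashlib

def detached_env_dir(project_name: str, project_path: str) -> str:
    """Sketch of a NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH directory name.

    Hashing the original project path makes the name stable for a given
    project while distinguishing projects that share a name.
    """
    digest = hashlib.sha256(project_path.encode()).hexdigest()[:16]
    return f"{project_name}-{digest}"

print(detached_env_dir("my-project", "/home/user/my-project"))
```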

"},{"location":"reference/pixi_configuration/#pinning-strategy","title":"pinning-strategy","text":"

The strategy to use for pinning dependencies when running pixi add. The default is semver, but other strategies can be configured:

config.toml
pinning-strategy = \"no-pin\"\n
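As an illustration, a pinning strategy maps the resolved version to the constraint written into the manifest. The sketch below covers three strategies; the exact constraint strings are approximations, and exact-version is assumed from pixi's option names rather than confirmed by this page:

```python
def pin_constraint(version: str, strategy: str = "semver") -> str:
    """Sketch: turn a resolved version into the constraint `pixi add` writes."""
    if strategy == "no-pin":
        return "*"  # no constraint at all
    if strategy == "exact-version":
        return f"=={version}"
    # default "semver": allow everything up to the next major version
    major = int(version.split(".")[0])
    return f">={version},<{major + 1}"

print(pin_constraint("1.2.3"))                    # >=1.2.3,<2
print(pin_constraint("1.2.3", "exact-version"))   # ==1.2.3
print(pin_constraint("1.2.3", "no-pin"))          # *
```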
"},{"location":"reference/pixi_configuration/#mirrors","title":"mirrors","text":"

Configuration for conda channel-mirrors, more info below.

config.toml
[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n    \"https://conda.anaconda.org/bioconda\",\n    # OCI registries are also supported\n    \"oci://ghcr.io/channel-mirrors/bioconda\",\n    \"https://prefix.dev/bioconda\",\n]\n
"},{"location":"reference/pixi_configuration/#repodata-config","title":"repodata-config","text":"

Configuration for repodata fetching. config.toml

[repodata-config]\n# disable fetching of jlap, bz2 or zstd repodata files.\n# This should only be used for specific old versions of artifactory and other non-compliant\n# servers.\ndisable-jlap = true  # don't try to download repodata.jlap\ndisable-bzip2 = true # don't try to download repodata.json.bz2\ndisable-zstd = true  # don't try to download repodata.json.zst\n

"},{"location":"reference/pixi_configuration/#pypi-config","title":"pypi-config","text":"

To set up defaults for the usage of PyPI registries, you can use the following configuration options:

config.toml
[pypi-config]\n# Main index url\nindex-url = \"https://pypi.org/simple\"\n# list of additional urls\nextra-index-urls = [\"https://pypi.org/simple2\"]\n# can be \"subprocess\" or \"disabled\"\nkeyring-provider = \"subprocess\"\n

index-url and extra-index-urls are not globals

Unlike pip, these settings, with the exception of keyring-provider, will only modify the pixi.toml/pyproject.toml file and are not interpreted globally when not present in the manifest. This is because we want to keep the manifest file as complete and reproducible as possible.

"},{"location":"reference/pixi_configuration/#mirror-configuration","title":"Mirror configuration","text":"

You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors section of the configuration file and replace the original URL with the mirror URL.

To also include the original URL, you have to repeat it in the list of mirrors.

The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
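Conceptually, mirror resolution is a prefix rewrite on the request URL. A minimal sketch (not pixi's actual code; the longest-prefix rule is an assumption so that a specific channel mirror beats a host-wide one):

```python
def mirror_urls(url: str, mirrors: dict[str, list[str]]) -> list[str]:
    """Rewrite a channel URL into the ordered list of mirror URLs to try."""
    # Pick the most specific configured prefix that matches the URL.
    best = max((k for k in mirrors if url.startswith(k)), key=len, default=None)
    if best is None:
        return [url]  # no mirror configured; use the original URL
    return [m.rstrip("/") + url[len(best):] for m in mirrors[best]]

mirrors = {"https://conda.anaconda.org/conda-forge": ["https://prefix.dev/conda-forge"]}
print(mirror_urls("https://conda.anaconda.org/conda-forge/linux-64/repodata.json", mirrors))
# ['https://prefix.dev/conda-forge/linux-64/repodata.json']
```

Because the mirrors are tried in list order, the repodata fetch shown here would go to the first mirror, matching the prioritization described above.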

You can also specify mirrors for an entire \"host\", e.g.

config.toml
[mirrors]\n\"https://conda.anaconda.org\" = [\n    \"https://prefix.dev/\"\n]\n

This will forward all requests for channels on anaconda.org to prefix.dev. Channels that are not currently mirrored on prefix.dev will fail in the above example.

"},{"location":"reference/pixi_configuration/#oci-mirrors","title":"OCI Mirrors","text":"

You can also specify mirrors on the OCI registry. There is a public mirror on the GitHub container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:

config.toml
[mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n

The GHCR mirror also contains bioconda packages. You can search the available packages on GitHub.

"},{"location":"reference/project_configuration/","title":"Configuration","text":"

The pixi.toml is the pixi project configuration file, also known as the project manifest.

A toml file is structured in different tables. This document will explain the usage of the different tables. For more technical documentation check pixi on crates.io.

Tip

We also support the pyproject.toml file. It has the same structure as the pixi.toml file, except that you need to prefix the table names with tool.pixi. For example, the [project] table becomes [tool.pixi.project]. There are also some small extras available in the pyproject.toml file; check out the pyproject.toml documentation for more information.

"},{"location":"reference/project_configuration/#the-project-table","title":"The project table","text":"

The minimally required information in the project table is:

[project]\nchannels = [\"conda-forge\"]\nname = \"project-name\"\nplatforms = [\"linux-64\"]\n
"},{"location":"reference/project_configuration/#name","title":"name","text":"

The name of the project.

name = \"project-name\"\n
"},{"location":"reference/project_configuration/#channels","title":"channels","text":"

This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org you only need to use the name of the channel directly.

channels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n

Channels situated on the file system are also supported with absolute file paths:

channels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n

To access private or public channels on prefix.dev or Quetz use the url including the hostname:

channels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"reference/project_configuration/#platforms","title":"platforms","text":"

Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lock file (pixi.lock).

platforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n

The available platforms are listed here: link

Special macOS behavior

macOS has two platforms: osx-64 for Intel Macs and osx-arm64 for Apple Silicon Macs. To support both, include both in your platforms list. Fallback: If osx-arm64 can't resolve, use osx-64. Running osx-64 on Apple Silicon uses Rosetta for Intel binaries.

"},{"location":"reference/project_configuration/#version-optional","title":"version (optional)","text":"

The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.

version = \"1.2.3\"\n
"},{"location":"reference/project_configuration/#authors-optional","title":"authors (optional)","text":"

This is a list of authors of the project.

authors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"reference/project_configuration/#description-optional","title":"description (optional)","text":"

This should contain a short description of the project.

description = \"A simple description\"\n
"},{"location":"reference/project_configuration/#license-optional","title":"license (optional)","text":"

The license as a valid SPDX string (e.g. MIT AND Apache-2.0)

license = \"MIT\"\n
"},{"location":"reference/project_configuration/#license-file-optional","title":"license-file (optional)","text":"

Relative path to the license file.

license-file = \"LICENSE.md\"\n
"},{"location":"reference/project_configuration/#readme-optional","title":"readme (optional)","text":"

Relative path to the README file.

readme = \"README.md\"\n
"},{"location":"reference/project_configuration/#homepage-optional","title":"homepage (optional)","text":"

URL of the project homepage.

homepage = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#repository-optional","title":"repository (optional)","text":"

URL of the project source repository.

repository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"reference/project_configuration/#documentation-optional","title":"documentation (optional)","text":"

URL of the project documentation.

documentation = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#conda-pypi-map-optional","title":"conda-pypi-map (optional)","text":"

Mapping of a channel name or URL to the location of a mapping file, which can be a URL or a path. The mapping file should be a JSON object of the form conda_name: pypi_package_name. Example:

local/robostack_mapping.json
{\n  \"jupyter-ros\": \"my-name-from-mapping\",\n  \"boltons\": \"boltons-pypi\"\n}\n

If conda-forge is not present in conda-pypi-map pixi will use prefix.dev mapping for it.

conda-pypi-map = { \"conda-forge\" = \"https://example.com/mapping\", \"https://repo.prefix.dev/robostack\" = \"local/robostack_mapping.json\"}\n
"},{"location":"reference/project_configuration/#channel-priority-optional","title":"channel-priority (optional)","text":"

This is the setting for the priority of the channels in the solver step.

The options are strict (the default) and disabled.

channel-priority = \"disabled\"\n

channel-priority = \"disabled\" is a security risk

Disabling channel priority may lead to unpredictable dependency resolutions. This is a possible security risk as it may lead to packages being installed from unexpected channels. It's advisable to maintain the default strict setting and order channels thoughtfully. If necessary, specify a channel directly for a dependency.

[project]\n# Putting conda-forge first solves most issues\nchannels = [\"conda-forge\", \"channel-name\"]\n[dependencies]\npackage = {version = \"*\", channel = \"channel-name\"}\n

"},{"location":"reference/project_configuration/#the-tasks-table","title":"The tasks table","text":"

Tasks are a way to automate certain custom commands in your project. For example, a lint or format step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run and are executed using the deno_task_shell.

[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends-on=\"simple\"}\nalias = { depends-on=[\"depending\"]}\ndownload = { cmd=\"curl -o file.txt https://example.com/file.txt\" , outputs=[\"file.txt\"]}\nbuild = { cmd=\"npm build\", cwd=\"frontend\", inputs=[\"frontend/package.json\", \"frontend/*.js\"]}\nrun = { cmd=\"python run.py $ARGUMENT\", env={ ARGUMENT=\"value\" }}\nformat = { cmd=\"black $INIT_CWD\" } # runs black where you run pixi run format\nclean-env = { cmd = \"python isolated.py\", clean-env = true} # Only on Unix!\n

You can modify this table using pixi task.

Note

Specify different tasks for different platforms using the target table

Info

If you want to hide a task from showing up with pixi task list or pixi info, you can prefix the name with _. For example, if you want to hide depending, you can rename it to _depending.

"},{"location":"reference/project_configuration/#the-system-requirements-table","title":"The system-requirements table","text":"

The system requirements are used to define minimal system specifications used during dependency resolution.

For example, we can define a unix system with a specific minimal libc version.

[system-requirements]\nlibc = \"2.28\"\n
or make the project depend on a specific version of cuda:
[system-requirements]\ncuda = \"12\"\n

More information about the available options can be found in the system requirements documentation.

"},{"location":"reference/project_configuration/#the-pypi-options-table","title":"The pypi-options table","text":"

The pypi-options table is used to define options that are specific to PyPI registries. These options can be specified either at the root level, which will add it to the default options feature, or on feature level, which will create a union of these options when the features are included in the environment.

The options that can be defined are explained in the sections below. Most of them are taken directly, or with slight modifications, from the uv settings. If any option you need is missing, feel free to create an issue requesting it.

"},{"location":"reference/project_configuration/#alternative-registries","title":"Alternative registries","text":"

Strict Index Priority

Unlike pip, because we make use of uv, we have a strict index priority. This means that the first index on which a package can be found is used. The order is determined by the order in the TOML file, and the extra-index-urls are preferred over the index-url. Read more about this in the uv docs.

Often you might want to use an alternative or extra index for your project. This can be done by adding the pypi-options table to your pixi.toml file; the following options are available:

An example:

[pypi-options]\nindex-url = \"https://pypi.org/simple\"\nextra-index-urls = [\"https://example.com/simple\"]\nfind-links = [{path = './links'}]\n

There are some examples in the pixi repository, that make use of this feature.

Authentication Methods

To read about existing authentication methods for private registries, please check the PyPI Authentication section.

"},{"location":"reference/project_configuration/#no-build-isolation","title":"No Build Isolation","text":"

Even though build isolation is a good default, you can choose not to isolate the build for a certain package name. This allows the build to access the pixi environment, which is convenient if you want to use torch or something similar in your build process.

[dependencies]\npytorch = \"2.4.0\"\n\n[pypi-options]\nno-build-isolation = [\"detectron2\"]\n\n[pypi-dependencies]\ndetectron2 = { git = \"https://github.com/facebookresearch/detectron2.git\", rev = \"5b72c27ae39f99db75d43f18fd1312e1ea934e60\"}\n

Conda dependencies define the build environment

To use no-build-isolation effectively, use conda dependencies to define the build environment. These are installed before the PyPI dependencies are resolved, so they are available during the build process. In the example above, adding torch as a PyPI dependency would be ineffective, as it would not yet be installed during the PyPI resolution phase.

"},{"location":"reference/project_configuration/#index-strategy","title":"Index Strategy","text":"

The strategy to use when resolving against multiple index URLs. Description modified from the uv documentation:

By default, uv, and thus pixi, will stop at the first index on which a given package is available and limit resolution to the packages present on that first index (first-match). This prevents dependency confusion attacks, whereby an attacker can upload a malicious package under the same name to a secondary index.

One index strategy per environment

Only one index-strategy can be defined per environment or solve-group; otherwise, an error will be shown.
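The first-match behavior described above can be sketched as follows (hypothetical data structures for illustration, not uv's API):

```python
def pick_index(package: str, indexes: list[dict[str, list[str]]]) -> list[str]:
    """Sketch of the 'first-match' index strategy.

    `indexes` maps package name -> available versions, one dict per index,
    in configured order. Resolution is limited to the first index that knows
    the package at all, even if a later index has newer versions -- this is
    what blocks dependency confusion.
    """
    for index in indexes:
        if package in index:
            return index[package]
    return []

primary = {"requests": ["2.31.0"]}
extra = {"requests": ["99.0.0"], "internal-tool": ["1.0.0"]}
print(pick_index("requests", [primary, extra]))       # ['2.31.0']
print(pick_index("internal-tool", [primary, extra]))  # ['1.0.0']
```

The malicious "99.0.0" on the secondary index is never considered, because requests is already present on the first index.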

"},{"location":"reference/project_configuration/#possible-values","title":"Possible values:","text":"

PyPI only

The index-strategy only changes PyPI package resolution and not conda package resolution.

"},{"location":"reference/project_configuration/#the-dependencies-tables","title":"The dependencies table(s)","text":"

This section defines what dependencies you would like to use for your project.

There are multiple dependencies tables. The default is [dependencies], which are dependencies that are shared across platforms.

Dependencies are defined using a VersionSpec. A VersionSpec combines a Version with an optional operator.

Some examples are:

# Use this exact package version\npackage0 = "1.2.3"\n# Use 1.2.3 up to (but not including) 1.3.0\npackage1 = "~=1.2.3"\n# Use greater than 1.2 and lower than or equal to 1.4\npackage2 = ">1.2,<=1.4"\n# Greater than or equal to 1.2.3, or lower than 1.0.0 (exclusive)\npackage3 = ">=1.2.3|<1.0.0"\n
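To make the comparison operators concrete, here is a hedged sketch that evaluates a comma-separated (AND) constraint against a plain numeric version. Real conda version ordering and the `|` (OR) operator are more involved; this covers only the simple cases above:

```python
import operator

# Longest operator symbols must be checked first so ">=" is not read as ">".
OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, "<": operator.lt}

def _ver(v: str) -> tuple[int, ...]:
    return tuple(int(p) for p in v.split("."))

def matches(version: str, spec: str) -> bool:
    """Check a version against a comma-separated AND of constraints."""
    for part in spec.split(","):
        for sym in (">=", "<=", "==", ">", "<"):
            if part.startswith(sym):
                if not OPS[sym](_ver(version), _ver(part[len(sym):])):
                    return False
                break
    return True

print(matches("1.3.0", ">1.2,<=1.4"))  # True
print(matches("1.5.0", ">1.2,<=1.4"))  # False
```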

Dependencies can also be defined as a mapping, using a matchspec:

package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n

Tip

Dependencies can easily be added using the pixi add command. Running add for an existing dependency will replace it with the newest version it can use.

Note

To specify different dependencies for different platforms use the target table

"},{"location":"reference/project_configuration/#dependencies","title":"dependencies","text":"

Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table if you use anything other than conda-forge. Even if a dependency defines its own channel, that channel should be added to the project.channels list.

[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"reference/project_configuration/#pypi-dependencies","title":"pypi-dependencies","text":"Details regarding the PyPI integration

We use uv, which is a new fast pip replacement written in Rust.

We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This prevents uv from installing these dependencies itself and ensures it uses the exact versions of these packages in the resolution. This is unique among conda-based package managers, which usually just call pip in a subprocess.

The uv resolution is included in the lock file directly.

Pixi directly supports depending on PyPI packages; the PyPA calls a distributed package a 'distribution'. There are Source and Binary distributions, both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.

Important considerations

"},{"location":"reference/project_configuration/#version-specification","title":"Version specification:","text":"

These dependencies don't follow the conda matchspec specification. The version is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:

[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\nfastapi = \"*\"  # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\npandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\n\n# git dependencies\n# With ssh\nflask = { git = \"ssh://git@github.com/pallets/flask\" }\n# With https and a specific revision\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\n# TODO: will support later -> branch = '' or tag = '' to specify a branch or tag\n\n# You can also directly add a source dependency from a path; tip: keep this relative to the root of the project.\nminimal-project = { path = \"./minimal-project\", editable = true}\n\n# You can also use a direct url to either a `.tar.gz`, `.zip`, or `.whl` file\nclick = { url = \"https://github.com/pallets/click/releases/download/8.1.7/click-8.1.7-py3-none-any.whl\" }\n\n# You can also just use the default git repo; it will check out the default branch\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\n
"},{"location":"reference/project_configuration/#full-specification","title":"Full specification","text":"

The full specification of the PyPI dependencies that pixi supports can be split into the following fields:

"},{"location":"reference/project_configuration/#extras","title":"extras","text":"

A list of extras to install with the package, e.g. [\"dataframe\", \"sql\"]. The extras field works with all other version specifiers, as it is an addition to the version specifier.

pandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\npytest = { git = \"URL\", extras = [\"dev\"]}\nblack = { url = \"URL\", extras = [\"cli\"]}\nminimal-project = { path = \"./minimal-project\", editable = true, extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#version","title":"version","text":"

The version of the package to install, e.g. \">=1.0.0\" or \"*\", which stands for any version (the wildcard is pixi-specific). version is our default field, so using no inline table ({}) will default to this field.

py-rattler = \"*\"\nruff = \"~=1.0.0\"\npytest = {version = \"*\", extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#git","title":"git","text":"

A git repository to install from. This supports both https:// and ssh:// urls.

Use git in combination with rev or subdirectory:

# Note don't forget the `ssh://` or `https://` prefix!\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\npy-rattler = { git = \"ssh://git@github.com/mamba-org/rattler.git\", subdirectory = \"py-rattler\" }\n
"},{"location":"reference/project_configuration/#path","title":"path","text":"

A local path to install from, e.g. path = \"./path/to/package\". We advise keeping path dependencies inside the project and using relative paths.

Set editable = true to install in editable mode; this is highly recommended, as the package is hard to reinstall if you're not using editable mode.

minimal-project = { path = \"./minimal-project\", editable = true}\n
"},{"location":"reference/project_configuration/#url","title":"url","text":"

A URL to install a wheel or sdist from directly.

pandas = {url = \"https://files.pythonhosted.org/packages/3d/59/2afa81b9fb300c90531803c0fd43ff4548074fa3e8d0f747ef63b3b5e77a/pandas-2.2.1.tar.gz\"}\n
Did you know you can use: add --pypi?

Use the --pypi flag with the add command to quickly add PyPI packages from the CLI, e.g. pixi add --pypi flask.

This does not support all the features of the pypi-dependencies table yet.

"},{"location":"reference/project_configuration/#source-dependencies-sdist","title":"Source dependencies (sdist)","text":"

The Source Distribution Format (sdist for short) is a source-based format that a package can provide alongside the binary wheel format. Because these distributions need to be built, they need a Python executable to do so. This is why Python needs to be present in the conda environment. Sdists often depend on system packages to be built, especially when compiling C/C++-based Python bindings. Think, for example, of Python SDL2 bindings depending on the C library SDL2. To help build these dependencies, we activate the conda environment that includes these PyPI dependencies before resolving. This way, when a source distribution depends on gcc, for example, it's used from the conda environment instead of the system.
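For example, one way to make build tooling available when an sdist needs to compile native extensions is to add a compiler as a conda dependency; this is a sketch only, and the package names (cxx-compiler, pysdl2) are illustrative assumptions:

```toml
[dependencies]
python = ">=3.9"
# Compiler taken from the conda environment when sdists are built (illustrative)
cxx-compiler = "*"

[pypi-dependencies]
# Hypothetical sdist-only package that compiles C/C++ bindings at install time
pysdl2 = "*"
```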

"},{"location":"reference/project_configuration/#host-dependencies","title":"host-dependencies","text":"

This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.

[host-dependencies]\npython = \"~=3.10.3\"\n

Typical examples of host dependencies are:

"},{"location":"reference/project_configuration/#build-dependencies","title":"build-dependencies","text":"

This table contains dependencies that are needed to build the project. Unlike dependencies and host-dependencies, these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.

[build-dependencies]\ncmake = \"~=3.24\"\n

Typical examples of build dependencies are:

Info

The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.

For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64 and your host platform is linux-64.
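Putting the example above into manifest form, a cross-compilation setup might be sketched like this (versions are illustrative):

```toml
# Building on osx-arm64 (build platform) for linux-64 (host platform)
[project]
platforms = ["linux-64"]

[build-dependencies]
# Runs on the build machine (osx-arm64)
cmake = "~=3.24"

[host-dependencies]
# Installed for the target machine (linux-64)
python = "~=3.10"
```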

"},{"location":"reference/project_configuration/#the-activation-table","title":"The activation table","text":"

The activation table is used for specialized activation operations that need to be run when the environment is activated.

There are two types of activation operations a user can modify in the manifest: scripts and env.

These activation operations will be run before the pixi run and pixi shell commands.

Note

The activation operations are run by the system shell interpreter, as they run before an environment is available. This means they run as cmd.exe on Windows and bash on Linux and macOS (Unix). Only .sh, .bash and .bat files are supported.

The environment variables are set in the shell that runs the activation script, so take note when using e.g. $ (bash) or % (cmd.exe).

If you have scripts or environment variables per platform, use the target table.

[activation]\nscripts = [\"env_setup.sh\"]\nenv = { ENV_VAR = \"value\" }\n\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n\n[target.linux-64.activation.env]\nENV_VAR = \"linux-value\"\n\n# You can also reference existing environment variables, but this has\n# to be done separately for unix-like operating systems and Windows\n[target.unix.activation.env]\nENV_VAR = \"$OTHER_ENV_VAR/unix-value\"\n\n[target.win.activation.env]\nENV_VAR = \"%OTHER_ENV_VAR%\\\\windows-value\"\n
"},{"location":"reference/project_configuration/#the-target-table","title":"The target table","text":"

The target table allows for platform-specific configuration, letting you define different sets of tasks or dependencies per platform.

The target table is currently implemented for the following sub-tables:

The target table is defined using [target.PLATFORM.SUB-TABLE], e.g. [target.linux-64.dependencies].

The platform can be any of:

The sub-table can be any of those specified above.

To make this a bit clearer, let's look at the example below. Currently, pixi combines top-level tables like dependencies with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, cmake is used for all targets, but on osx-64 or osx-arm64 a different version of python will be selected.

[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n

Here are some more examples:

[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
"},{"location":"reference/project_configuration/#the-feature-and-environments-tables","title":"The feature and environments tables","text":"

The feature table allows you to define features that can be used to create different [environments]. The [environments] table allows you to define different environments. The design is explained in this design document.

Simplest example
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n

This will create an environment called test that has pytest installed.

"},{"location":"reference/project_configuration/#the-feature-table","title":"The feature table","text":"

The feature table allows you to define the following fields per feature.

These tables are all also available without the feature prefix; when used that way, we call them the default feature. default is a protected name you cannot use for your own feature.
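As a quick sketch, the top-level tables below implicitly belong to the default feature, while the prefixed table defines a named feature (names and versions illustrative):

```toml
# Part of the implicit "default" feature
[dependencies]
python = "3.11.*"

# A named feature; note that [feature.default.*] would be rejected,
# since "default" is a protected feature name.
[feature.test.dependencies]
pytest = "*"
```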

Cuda feature table example
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\n# Results in:  [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\nchannels = [\"nvidia\"]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Cuda feature table example but written as separate tables
[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"reference/project_configuration/#the-environments-table","title":"The environments table","text":"

The [environments] table allows you to define environments that are created using the features defined in the [feature] tables.

The environments table is defined using the following fields:

Full environments table specification

[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = {features = [\"lint\"], no-default-feature = true}\n
As shown in the example above, in the simplest of cases, it is possible to define an environment only by listing its features:

Simplest example
[environments]\ntest = [\"test\"]\n

is equivalent to

Simplest example expanded
[environments]\ntest = {features = [\"test\"]}\n

When an environment comprises several features (including the default feature): - The activation and tasks of the environment are the union of the activation and tasks of all its features. - The dependencies and pypi-dependencies of the environment are the union of those of all its features. This means that if several features define a requirement for the same package, the requirements will be combined; beware of conflicting requirements across features added to the same environment. - The system-requirements of the environment are the union of the system-requirements of all its features; if multiple features specify a requirement for the same system package, the highest version is chosen. - The channels of the environment are the union of the channels of all its features; channel priorities can be specified in each feature to ensure channels are considered in the right order in the environment. - The platforms of the environment are the intersection of the platforms of all its features. Be aware that the platforms supported by a feature (including the default feature) are considered to be the platforms defined at the project level (unless overridden in the feature). This means it is usually a good idea to set the project platforms to all platforms the project can support across its environments.
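As a sketch of how combined requirements work, two features constraining the same package effectively yield the intersection of their version ranges (feature and package names are illustrative):

```toml
[feature.py-min.dependencies]
numpy = ">=1.21"

[feature.py-max.dependencies]
numpy = "<2"

[environments]
# The solved numpy requirement is effectively ">=1.21,<2"
combined = ["py-min", "py-max"]
```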

"},{"location":"reference/project_configuration/#global-configuration","title":"Global configuration","text":"

The global configuration options are documented in the global configuration section.

"},{"location":"switching_from/conda/","title":"Transitioning from conda or mamba to pixi","text":"

Welcome to the guide designed to ease your transition from conda or mamba to pixi. This document compares key commands and concepts between these tools, highlighting pixi's unique approach to managing environments and packages. With pixi, you'll experience a project-based workflow, enhancing your development process, and allowing for easy sharing of your work.

"},{"location":"switching_from/conda/#why-pixi","title":"Why Pixi?","text":"

Pixi builds upon the foundation of the conda ecosystem, introducing a project-centric approach rather than focusing solely on environments. This shift towards projects offers a more organized and efficient way to manage dependencies and run code, tailored to modern development practices.

"},{"location":"switching_from/conda/#key-differences-at-a-glance","title":"Key Differences at a Glance","text":"Task Conda/Mamba Pixi Installation Requires an installer Download and add to path (See installation) Creating an Environment conda create -n myenv -c conda-forge python=3.8 pixi init myenv followed by pixi add python=3.8 Activating an Environment conda activate myenv pixi shell within the project directory Deactivating an Environment conda deactivate exit from the pixi shell Running a Task conda run -n myenv python my_program.py pixi run python my_program.py (See run) Installing a Package conda install numpy pixi add numpy Uninstalling a Package conda remove numpy pixi remove numpy

No base environment

Conda has a base environment, which is the default environment when you start a new shell. Pixi does not have a base environment and requires you to install the tools you need either in a project or globally. Running pixi global install bat will install bat in a global environment, which is not the same as the base environment in conda.

Activating pixi environment in the current shell

For some advanced use-cases, you can activate the environment in the current shell. This uses pixi shell-hook, which prints the activation script that can be used to activate the environment in the current shell without pixi itself.

~/myenv > eval \"$(pixi shell-hook)\"\n

"},{"location":"switching_from/conda/#environment-vs-project","title":"Environment vs Project","text":"

Conda and mamba focus on managing environments, while pixi emphasizes projects. In pixi, a project is a folder containing a manifest (pixi.toml/pyproject.toml) file that describes the project, a pixi.lock lock file that describes the exact dependencies, and a .pixi folder that contains the environment.

This project-centric approach allows for easy sharing and collaboration, as the project folder contains all the necessary information to recreate the environment. It manages more than one environment for more than one platform in a single project, and allows for easy switching between them. (See multiple environments)
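A minimal manifest for such a project folder might look like this (all values illustrative):

```toml
[project]
name = "myenv"
channels = ["conda-forge"]
platforms = ["linux-64", "osx-arm64"]

[dependencies]
python = "3.11.*"
```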

"},{"location":"switching_from/conda/#global-environments","title":"Global environments","text":"

conda installs all environments in one global location. When this is important to you for filesystem reasons, you can use the detached-environments feature of pixi.

pixi config set detached-environments true\n# or a specific location\npixi config set detached-environments /path/to/envs\n
This doesn't allow you to activate the environments using pixi shell -n, but it will make the environments install into the same folder.

pixi does have the pixi global command to install tools on your machine. (See global.) This is not a replacement for conda, but works like pipx and condax: it creates a single isolated environment for the given requirement and installs the binaries into the global path.

pixi global install bat\nbat pixi.toml\n

Never install pip with pixi global

Installations with pixi global get their own isolated environment. Installing pip with pixi global will create a new isolated environment with its own pip binary. Using that pip binary will install packages into the pip environment, making them unreachable from anywhere, as you can't activate it.

"},{"location":"switching_from/conda/#automated-switching","title":"Automated switching","text":"

With pixi you can import environment.yml files into a pixi project. (See import)

pixi init --import environment.yml\n
This will create a new project with the dependencies from the environment.yml file.

Exporting your environment

If you are working with Conda users or systems, you can export your environment to an environment.yml file to share it.

pixi project export conda\n
Additionally, you can export a conda explicit specification.

"},{"location":"switching_from/conda/#troubleshooting","title":"Troubleshooting","text":"

Encountering issues? Here are solutions to some common problems when you are used to the conda workflow:

"},{"location":"switching_from/poetry/","title":"Transitioning from poetry to pixi","text":"

Welcome to the guide designed to ease your transition from poetry to pixi. This document compares key commands and concepts between these tools, highlighting pixi's unique approach to managing environments and packages. With pixi, you'll experience a project-based workflow similar to poetry while including the conda ecosystem and allowing for easy sharing of your work.

"},{"location":"switching_from/poetry/#why-pixi","title":"Why Pixi?","text":"

In the Python ecosystem, poetry is most likely the closest tool to pixi in terms of project management. On top of the PyPI ecosystem, pixi adds the power of the conda ecosystem, allowing for more flexible and powerful environment management.

"},{"location":"switching_from/poetry/#quick-look-at-the-differences","title":"Quick look at the differences","text":"Task Poetry Pixi Creating an Environment poetry new myenv pixi init myenv Running a Task poetry run which python pixi run which python pixi uses a built-in cross platform shell for run where poetry uses your shell. Installing a Package poetry add numpy pixi add numpy adds the conda variant. pixi add --pypi numpy adds the PyPI variant. Uninstalling a Package poetry remove numpy pixi remove numpy removes the conda variant. pixi remove --pypi numpy removes the PyPI variant. Building a package poetry build We've yet to implement package building and publishing Publishing a package poetry publish We've yet to implement package building and publishing Reading the pyproject.toml [tool.poetry] [tool.pixi] Defining dependencies [tool.poetry.dependencies] [tool.pixi.dependencies] for conda, [tool.pixi.pypi-dependencies] or [project.dependencies] for PyPI dependencies Dependency definition - numpy = \"^1.2.3\"- numpy = \"~1.2.3\"- numpy = \"*\" - numpy = \">=1.2.3 <2.0.0\"- numpy = \">=1.2.3 <1.3.0\"- numpy = \"*\" Lock file poetry.lock pixi.lock Environment directory ~/.cache/pypoetry/virtualenvs/myenv ./.pixi Defaults to the project folder, move this using the detached-environments"},{"location":"switching_from/poetry/#support-both-poetry-and-pixi-in-my-project","title":"Support both poetry and pixi in my project","text":"

You can let users use poetry and pixi in the same project; they will not touch each other's parts of the configuration or system. It's best to duplicate the dependencies, essentially making an exact copy of tool.poetry.dependencies into tool.pixi.pypi-dependencies. Make sure that python is only defined in tool.pixi.dependencies and not in tool.pixi.pypi-dependencies.
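A sketch of a pyproject.toml supporting both tools, with the poetry dependency duplicated into the pixi PyPI table (names and versions illustrative):

```toml
[tool.poetry.dependencies]
python = "^3.11"
numpy = ">=1.26,<2"

[tool.pixi.dependencies]
# python lives ONLY here for pixi, never in tool.pixi.pypi-dependencies
python = "3.11.*"

[tool.pixi.pypi-dependencies]
# Exact copy of the poetry dependency
numpy = ">=1.26,<2"
```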

Mixing pixi and poetry

It's possible to use poetry in pixi environments, but this is advised against. Pixi supports PyPI dependencies in a different way than poetry does, and mixing them can lead to unexpected behavior. As you can only use one package manager at a time, it's best to stick to one.

If using poetry on top of a pixi project, you'll always need to install the poetry environment after the pixi environment. Let pixi handle the Python and poetry installation.

"},{"location":"tutorials/python/","title":"Tutorial: Doing Python development with Pixi","text":"

In this tutorial, we will show you how to create a simple Python project with pixi, demonstrating some of the features pixi provides that are currently not part of pdm, poetry, etc.

"},{"location":"tutorials/python/#why-is-this-useful","title":"Why is this useful?","text":"

Pixi builds upon the conda ecosystem, which allows you to create a Python environment with all the dependencies you need. This is especially useful when you are working with multiple Python interpreters and bindings to C and C++ libraries. For example, GDAL from PyPI does not have binary C dependencies, but the conda package does. On the other hand, some packages are only available through PyPI, which pixi can also install for you. Best of both worlds; let's give it a go!

"},{"location":"tutorials/python/#pixitoml-and-pyprojecttoml","title":"pixi.toml and pyproject.toml","text":"

We support two manifest formats: pyproject.toml and pixi.toml. In this tutorial, we will use the pyproject.toml format because it is the most common format for Python projects.

"},{"location":"tutorials/python/#lets-get-started","title":"Let's get started","text":"

Let's start out by making a directory and creating a new pyproject.toml file.

pixi init pixi-py --format pyproject\n

This gives you the following pyproject.toml:

[project]\nname = \"pixi-py\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [{name = \"Tim de Jager\", email = \"tim@prefix.dev\"}]\nrequires-python = \">= 3.11\"\ndependencies = []\n\n[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\"]\n\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n\n[tool.pixi.tasks]\n

Let's add the Python project to the tree:

Linux & macOSWindows
cd pixi-py # move into the project directory\nmkdir pixi_py\ntouch pixi_py/__init__.py\n
cd pixi-py\nmkdir pixi_py\ntype nul > pixi_py\\__init__.py\n

We now have the following directory structure:

.\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u2514\u2500\u2500 pyproject.toml\n

We've used a flat-layout here but pixi supports both flat- and src-layouts.

"},{"location":"tutorials/python/#whats-in-the-pyprojecttoml","title":"What's in the pyproject.toml?","text":"

Okay, so let's have a look at which sections have been added and how we can modify the pyproject.toml.

These first entries were added to the pyproject.toml file:

# Main pixi entry\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\n# This is your machine platform by default\nplatforms = [\"osx-arm64\"]\n

The channels and platforms are added to the [tool.pixi.project] section. Channels like conda-forge manage packages similar to PyPI but allow for different packages across languages. The keyword platforms determines what platform the project supports.

The pixi_py package itself is added as an editable dependency. This means that the package is installed in editable mode, so you can make changes to the package and see the changes reflected in the environment, without having to re-install the environment.

# Editable installs\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n

In pixi, unlike other package managers, this is explicitly stated in the pyproject.toml file. The main reason is so that you can choose which environments this package should be included in.

"},{"location":"tutorials/python/#managing-both-conda-and-pypi-dependencies-in-pixi","title":"Managing both conda and PyPI dependencies in pixi","text":"

Our projects usually depend on other packages.

$ pixi add black\nAdded black\n

This will result in the following addition to the pyproject.toml:

# Dependencies\n[tool.pixi.dependencies]\nblack = \">=24.4.2,<24.5\"\n

But we can also be strict about the version that should be used with pixi add black=24, resulting in

[tool.pixi.dependencies]\nblack = \"24.*\"\n

Now, let's add some optional dependencies:

pixi add --pypi --feature test pytest\n

Which results in the following fields added to the pyproject.toml:

[project.optional-dependencies]\ntest = [\"pytest\"]\n

After we have added the optional dependencies to the pyproject.toml, pixi automatically creates a feature, which can contain a collection of dependencies, tasks, channels, and more.
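In pixi.toml terms, the effect of the optional-dependencies entry above is roughly equivalent to this feature definition (a sketch, not the literal file pixi writes):

```toml
# A feature named after the optional-dependency group "test"
[feature.test.pypi-dependencies]
pytest = "*"
```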

Sometimes there are packages that aren't available on conda channels but are published on PyPI. We can add these as well, which pixi will solve together with the default dependencies.

$ pixi add black --pypi\nAdded black\nAdded these as pypi-dependencies.\n

which results in the addition to the dependencies key in the pyproject.toml

dependencies = [\"black\"]\n

When using pypi-dependencies, you can make use of the optional-dependencies that other packages make available. For example, black makes the cli extra available, which can be added with the --pypi flag:

$ pixi add black[cli] --pypi\nAdded black[cli]\nAdded these as pypi-dependencies.\n

which updates the dependencies entry to

dependencies = [\"black[cli]\"]\n
Optional dependencies in pixi.toml

This tutorial focuses on the use of the pyproject.toml, but in case you're curious, the pixi.toml would contain the following entry after the installation of a PyPI package including an optional dependency:

[pypi-dependencies]\nblack = { version = \"*\", extras = [\"cli\"] }\n

"},{"location":"tutorials/python/#installation-pixi-install","title":"Installation: pixi install","text":"

Now let's install the project with pixi install:

$ pixi install\n\u2714 Project in /path/to/pixi-py is ready to use!\n

We now have a new directory called .pixi in the project root. This directory contains the environment that was created when we ran pixi install. The environment is a conda environment that contains the dependencies that we specified in the pyproject.toml file. We can also install the test environment with pixi install -e test. We can use these environments for executing code.

We also have a new file called pixi.lock in the project root. This file contains the exact versions of the dependencies that were installed in the environment across platforms.

"},{"location":"tutorials/python/#whats-in-the-environment","title":"What's in the environment?","text":"

Using pixi list, you can see what's in the environment; this is essentially a nicer view of the lock file:

$ pixi list\nPackage          Version       Build               Size       Kind   Source\nbzip2            1.0.8         h93a5062_5          119.5 KiB  conda  bzip2-1.0.8-h93a5062_5.conda\nblack            24.4.2                            3.8 MiB    pypi   black-24.4.2-cp312-cp312-win_amd64.http.whl\nca-certificates  2024.2.2      hf0a4a13_0          152.1 KiB  conda  ca-certificates-2024.2.2-hf0a4a13_0.conda\nlibexpat         2.6.2         hebf3989_0          62.2 KiB   conda  libexpat-2.6.2-hebf3989_0.conda\nlibffi           3.4.2         h3422bc3_5          38.1 KiB   conda  libffi-3.4.2-h3422bc3_5.tar.bz2\nlibsqlite        3.45.2        h091b4b1_0          806 KiB    conda  libsqlite-3.45.2-h091b4b1_0.conda\nlibzlib          1.2.13        h53f4e23_5          47 KiB     conda  libzlib-1.2.13-h53f4e23_5.conda\nncurses          6.4.20240210  h078ce10_0          801 KiB    conda  ncurses-6.4.20240210-h078ce10_0.conda\nopenssl          3.2.1         h0d3ecfb_1          2.7 MiB    conda  openssl-3.2.1-h0d3ecfb_1.conda\npython           3.12.3        h4a7b5fc_0_cpython  12.6 MiB   conda  python-3.12.3-h4a7b5fc_0_cpython.conda\nreadline         8.2           h92ec313_1          244.5 KiB  conda  readline-8.2-h92ec313_1.conda\ntk               8.6.13        h5083fa2_1          3 MiB      conda  tk-8.6.13-h5083fa2_1.conda\ntzdata           2024a         h0c530f3_0          117 KiB    conda  tzdata-2024a-h0c530f3_0.conda\npixi-py          0.1.0                                        pypi   . (editable)\nxz               5.2.6         h57fd34a_0          230.2 KiB  conda  xz-5.2.6-h57fd34a_0.tar.bz2\n

Python

The Python interpreter is also installed in the environment, because the interpreter version is read from the requires-python field in the pyproject.toml file and used to determine which Python version to install. This way, pixi automatically manages/bootstraps the Python interpreter for you, so no more brew, apt, or other system install steps.

Here, you can see the different conda and PyPI packages listed. As you can see, the pixi-py package that we are working on is installed in editable mode. Every environment in pixi is isolated but reuses files that are hard-linked from a central cache directory. This means you can have multiple environments with the same packages while the individual files are stored only once on disk.

We can create the default and test environments based on our own test feature from the optional dependencies:

pixi project environment add default --solve-group default\npixi project environment add test --feature test --solve-group default\n

Which results in:

# Environments\n[tool.pixi.environments]\ndefault = { solve-group = \"default\" }\ntest = { features = [\"test\"], solve-group = \"default\" }\n
Solve Groups

Solve groups are a way to group dependencies together. This is useful when you have multiple environments that share the same dependencies. For example, maybe pytest is a dependency that influences the dependencies of the default environment. By putting these in the same solve group, you ensure that the versions in test and default are exactly the same.

The default environment is created when you run pixi install. The test environment is created from the optional dependencies in the pyproject.toml file. You can execute commands in this environment with e.g. pixi run -e test python

"},{"location":"tutorials/python/#getting-code-to-run","title":"Getting code to run","text":"

Let's add some code to the pixi-py package. We will add a new function to the pixi_py/__init__.py file:

from rich import print\n\ndef hello():\n    return \"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\"\n\ndef say_hello():\n    print(*hello())\n

Now add the rich dependency from PyPI using: pixi add --pypi rich.

Let's see if this works by running:

pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
Slow?

This might be slow (around 2 minutes) the first time because pixi installs the project, but it will be near-instant the second time.

Pixi runs the self-installed Python interpreter, imports the pixi_py package, which is installed in editable mode, and calls the say_hello function that we just added. And it works! Cool!

"},{"location":"tutorials/python/#testing-this-code","title":"Testing this code","text":"

Okay, so let's add a test for this function by creating a tests/test_me.py file in the root of the project.

Giving us the following project structure:

.\n\u251c\u2500\u2500 pixi.lock\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u251c\u2500\u2500 __init__.py\n\u251c\u2500\u2500 pyproject.toml\n\u2514\u2500\u2500 tests/test_me.py\n
from pixi_py import hello\n\ndef test_pixi_py():\n    assert hello() == (\"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\")\n

Let's add an easy task for running the tests.

$ pixi task add --feature test test \"pytest\"\n\u2714 Added task `test`: pytest .\n

Pixi has a task system to make it easy to run commands, similar to npm scripts or what you would specify in a Justfile.

Pixi tasks

Tasks are a powerful pixi feature that run in a cross-platform shell. They support caching, dependencies and more. Read more about tasks in the tasks section.
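As an illustration of those features, a task table in pyproject.toml might look like this (the task names and commands here are hypothetical, not part of this tutorial's project):

```toml
[tool.pixi.tasks]
# inputs gives simple caching: the task re-runs only when files under src/ change
fmt = { cmd = "ruff format .", inputs = ["src"] }
# depends-on chains tasks: `pixi run test` runs `fmt` first
test = { cmd = "pytest .", depends-on = ["fmt"] }
```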

$ pixi r test\n\u2728 Pixi task (test): pytest .\n================================================================================================= test session starts =================================================================================================\nplatform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0\nrootdir: /private/tmp/pixi-py\nconfigfile: pyproject.toml\ncollected 1 item\n\ntest_me.py .                                                                                                                                                                                                    [100%]\n\n================================================================================================== 1 passed in 0.00s =================================================================================================\n

Neat! It seems to be working!

"},{"location":"tutorials/python/#test-vs-default-environment","title":"Test vs Default environment","text":"

It is interesting to compare the output of the two environments:

pixi list -e test\n# v.s. default environment\npixi list\n

The test environment has:

Package          Version       Build               Size       Kind   Source\n...\npytest           8.1.1                             1.1 MiB    pypi   pytest-8.1.1-py3-none-any.whl\n...\n

But the default environment is missing this package. This way, you can fine-tune your environments to contain only the packages that are needed for that environment. E.g. you could have a dev environment with pytest and ruff installed, but omit these from the prod environment. There is a docker example that shows how to set up a minimal prod environment and copy from there.
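A hypothetical dev/prod split along those lines could look like this in pyproject.toml (the environment and feature names are illustrative):

```toml
[project.optional-dependencies]
dev = ["pytest", "ruff"]

[tool.pixi.environments]
# prod gets only the core dependencies; dev adds the tooling on top
prod = { solve-group = "default" }
dev = { features = ["dev"], solve-group = "default" }
```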

"},{"location":"tutorials/python/#replacing-pypi-packages-with-conda-packages","title":"Replacing PyPI packages with conda packages","text":"

Lastly, pixi allows PyPI packages to depend on conda packages. Let's confirm this with pixi list:

$ pixi list\nPackage          Version       Build               Size       Kind   Source\n...\npygments         2.17.2                            4.1 MiB    pypi   pygments-2.17.2-py3-none-any.http.whl\n...\n

Let's explicitly add pygments, a dependency of the rich package, to the pyproject.toml file:

pixi add pygments\n

This will add the following to the pyproject.toml file:

[tool.pixi.dependencies]\npygments = \">=2.17.2,<2.18\"\n

We can now see that the pygments package is installed as a conda package:

$ pixi list\nPackage          Version       Build               Size       Kind   Source\n...\npygments         2.17.2        pyhd8ed1ab_0        840.3 KiB  conda  pygments-2.17.2-pyhd8ed1ab_0.conda\n

This way, PyPI dependencies and conda dependencies can be mixed and matched to seamlessly interoperate.
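After this change, the two kinds of dependencies live side by side in pyproject.toml, roughly like this (a trimmed sketch; the version constraint is the one from above):

```toml
[project]
# PyPI dependencies, resolved from PyPI
dependencies = ["rich"]

[tool.pixi.dependencies]
# conda dependencies, resolved from conda channels; this replaces the PyPI pygments
pygments = ">=2.17.2,<2.18"
```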

$  pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n

And it still works!

"},{"location":"tutorials/python/#conclusion","title":"Conclusion","text":"

In this tutorial, you've seen how easy it is to use a pyproject.toml to manage your pixi dependencies and environments. We have also explored how to use PyPI and conda dependencies seamlessly together in the same project and install optional dependencies to manage Python packages.

Hopefully, this provides a flexible and powerful way to manage your Python projects and a fertile base for further exploration with Pixi.

Thanks for reading! Happy Coding \ud83d\ude80

Any questions? Feel free to reach out or share this tutorial on X, join our Discord, send us an e-mail or follow our GitHub.

"},{"location":"tutorials/ros2/","title":"Tutorial: Develop a ROS 2 package with pixi","text":"

In this tutorial, we will show you how to develop a ROS 2 package using pixi. The tutorial is written to be executed from top to bottom; skipping steps may result in errors.

The audience for this tutorial is developers who are familiar with ROS 2 and are interested in trying pixi for their development workflow.

"},{"location":"tutorials/ros2/#prerequisites","title":"Prerequisites","text":"

If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.

"},{"location":"tutorials/ros2/#create-a-pixi-project","title":"Create a pixi project.","text":"
pixi init my_ros2_project -c robostack-staging -c conda-forge\ncd my_ros2_project\n

It should have created a directory structure like this:

my_ros2_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n

The pixi.toml file is the manifest file for your project. It should look like this:

pixi.toml
[project]\nname = \"my_ros2_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"robostack-staging\", \"conda-forge\"]\n# Your project can support multiple platforms, the current platform will be automatically added.\nplatforms = [\"linux-64\"]\n\n[tasks]\n\n[dependencies]\n

The channels you added to the init command are repositories of packages; you can search them through our prefix.dev website. The platforms are the systems you want to support. In pixi you can support multiple platforms, but you have to define which ones, so pixi can check that your dependencies are available for those platforms. For the rest of the fields, you can fill them in as you see fit.
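For example, extending the generated manifest to support more systems is just a matter of listing them (the extra platforms below are illustrative):

```toml
[project]
channels = ["robostack-staging", "conda-forge"]
# pixi will check that every dependency is available for each listed platform
platforms = ["linux-64", "osx-arm64", "win-64"]
```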

"},{"location":"tutorials/ros2/#add-ros-2-dependencies","title":"Add ROS 2 dependencies","text":"

To use a pixi project you don't need any dependencies on your system; all the dependencies you need should be added through pixi, so other users can use your project without any issues.

Let's start with the turtlesim example:

pixi add ros-humble-desktop ros-humble-turtlesim\n

This will add the ros-humble-desktop and ros-humble-turtlesim packages to your manifest. Depending on your internet speed, this might take a minute, as it will also install ROS in your project folder (.pixi).

Now run the turtlesim example.

pixi run ros2 run turtlesim turtlesim_node\n

Or use the shell command to start an activated environment in your terminal.

pixi shell\nros2 run turtlesim turtlesim_node\n

Congratulations, you have ROS 2 running on your machine with pixi!

Some more fun with the turtle

To control the turtle, you can run the following command in a new terminal:

cd my_ros2_project\npixi run ros2 run turtlesim turtle_teleop_key\n

Now you can control the turtle with the arrow keys on your keyboard.

"},{"location":"tutorials/ros2/#add-a-custom-python-node","title":"Add a custom Python node","text":"

As ROS works with custom nodes, let's add a custom node to our project.

pixi run ros2 pkg create --build-type ament_python --destination-directory src --node-name my_node my_package\n

To build the package we need some more dependencies:

pixi add colcon-common-extensions \"setuptools<=58.2.0\"\n

Add the created initialization script for the ROS workspace to your manifest file.

Then run the build command:

pixi run colcon build\n

This will create a sourceable script in the install folder. You can source this script through an activation script to use your custom node. Normally this would be the script you add to your .bashrc, but now you tell pixi to use it.

Linux & macOSWindows pixi.toml
[activation]\nscripts = [\"install/setup.sh\"]\n
pixi.toml
[activation]\nscripts = [\"install/setup.bat\"]\n
Multi platform support

You can add multiple activation scripts for different platforms, so you can support multiple platforms with one project. Use the following example to add support for both Linux and Windows, using the target syntax.

[project]\nplatforms = [\"linux-64\", \"win-64\"]\n\n[activation]\nscripts = [\"install/setup.sh\"]\n[target.win-64.activation]\nscripts = [\"install/setup.bat\"]\n

Now you can run your custom node with the following command:

pixi run ros2 run my_package my_node\n
"},{"location":"tutorials/ros2/#simplify-the-user-experience","title":"Simplify the user experience","text":"

Pixi has a feature called tasks that allows you to define a task in your manifest file and run it with a simple command. Let's add tasks to run the turtlesim example and the custom node.

pixi task add sim \"ros2 run turtlesim turtlesim_node\"\npixi task add build \"colcon build --symlink-install\"\npixi task add hello \"ros2 run my_package my_node\"\n

Now you can run these tasks by simply running:

pixi run sim\npixi run build\npixi run hello\n
Advanced task usage

Tasks are a powerful feature in pixi.

[tasks]\nsim = \"ros2 run turtlesim turtlesim_node\"\nbuild = {cmd = \"colcon build --symlink-install\", inputs = [\"src\"]}\nhello = { cmd = \"ros2 run my_package my_node\", depends-on = [\"build\"] }\n
"},{"location":"tutorials/ros2/#build-a-c-node","title":"Build a C++ node","text":"

To build a C++ node, you need to add ament_cmake and some other build dependencies to your manifest file:

pixi add ros-humble-ament-cmake-auto compilers pkg-config cmake ninja\n

Now you can create a C++ node with the following command:

pixi run ros2 pkg create --build-type ament_cmake --destination-directory src --node-name my_cpp_node my_cpp_package\n

Now you can build it again and run it with the following commands:

# Passing arguments to the build command to build with Ninja, add them to the manifest if you want to default to ninja.\npixi run build --cmake-args -G Ninja\npixi run ros2 run my_cpp_package my_cpp_node\n
Tip

Add the cpp task to the manifest file to simplify the user experience.

pixi task add hello-cpp \"ros2 run my_cpp_package my_cpp_node\"\n
"},{"location":"tutorials/ros2/#conclusion","title":"Conclusion","text":"

In this tutorial, we showed you how to create a Python & CMake ROS 2 project using pixi. We also showed you how to add dependencies to your project using pixi, and how to run your project using pixi run. This way you can make sure that your project is reproducible on all machines that have pixi installed.

"},{"location":"tutorials/ros2/#show-off-your-work","title":"Show Off Your Work!","text":"

Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!

"},{"location":"tutorials/ros2/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"tutorials/ros2/#what-happens-with-rosdep","title":"What happens with rosdep?","text":"

Currently, we don't support rosdep in a pixi environment, so you'll have to add the packages using pixi add. rosdep would call conda install, which isn't supported in a pixi environment.

"},{"location":"tutorials/rust/","title":"Tutorial: Develop a Rust package using pixi","text":"

In this tutorial, we will show you how to develop a Rust package using pixi. The tutorial is written to be executed from top to bottom; skipping steps may result in errors.

The audience for this tutorial is developers who are familiar with Rust and cargo and are interested in trying pixi for their development workflow. The benefit of pixi in a Rust workflow is that you lock both Rust and the C/system dependencies your project might be using; e.g. tokio users will almost certainly use openssl.

If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.

"},{"location":"tutorials/rust/#prerequisites","title":"Prerequisites","text":""},{"location":"tutorials/rust/#create-a-pixi-project","title":"Create a pixi project.","text":"
pixi init my_rust_project\ncd my_rust_project\n

It should have created a directory structure like this:

my_rust_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n

The pixi.toml file is the manifest file for your project. It should look like this:

pixi.toml
[project]\nname = \"my_rust_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # (1)!\n\n[tasks]\n\n[dependencies]\n
  1. The platforms is set to your system's platform by default. You can change it to any platform you want to support. e.g. [\"linux-64\", \"osx-64\", \"osx-arm64\", \"win-64\"].
"},{"location":"tutorials/rust/#add-rust-dependencies","title":"Add Rust dependencies","text":"

To use a pixi project you don't need any dependencies on your system; all the dependencies you need should be added through pixi, so other users can use your project without any issues.

pixi add rust\n

This will add the rust package, which includes the Rust toolchain and cargo, to your pixi.toml file under [dependencies].
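The resulting entry looks roughly like this (pixi pins a concrete version range for you; the range below is illustrative, not the one you will necessarily get):

```toml
[dependencies]
rust = ">=1.77.0,<1.78"
```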

"},{"location":"tutorials/rust/#add-a-cargo-project","title":"Add a cargo project","text":"

Now that you have Rust installed, you can create a cargo project in your pixi project.

pixi run cargo init\n

pixi run is pixi's way to run commands in the pixi environment; it makes sure that the environment is set up correctly for the command to run. It runs its own cross-platform shell; if you want more information, check out the tasks documentation. You can also activate the environment in your own shell by running pixi shell; after that you don't need pixi run ... anymore.

Now we can build a cargo project using pixi.

pixi run cargo build\n
To simplify the build process, you can add a build task to your pixi.toml file using the following command:
pixi task add build \"cargo build\"\n
This creates the following field in the pixi.toml file: pixi.toml
[tasks]\nbuild = \"cargo build\"\n

And now you can build your project using:

pixi run build\n

You can also run your project using:

pixi run cargo run\n
You can simplify this with a task again:
pixi task add start \"cargo run\"\n

So you should get the following output:

pixi run start\nHello, world!\n

Congratulations, you have a Rust project running on your machine with pixi!

"},{"location":"tutorials/rust/#next-steps-why-is-this-useful-when-there-is-rustup","title":"Next steps, why is this useful when there is rustup?","text":"

Cargo is not a binary package manager, but a source-based package manager. This means that you need the Rust compiler installed on your system to use it, and possibly other dependencies that are not included in the cargo package manager. For example, you might need to install openssl or libssl-dev on your system to build a package. This is the case for pixi as well, but pixi will install these dependencies in your project folder, so you don't have to worry about them.

Add the following dependency to your cargo project:

pixi run cargo add git2\n

If your system is not preconfigured to build C code and does not have the libssl-dev package installed, you will not be able to build the project:

pixi run build\n...\nCould not find directory of OpenSSL installation, and this `-sys` crate cannot\nproceed without this knowledge. If OpenSSL is installed and this crate had\ntrouble finding it,  you can set the `OPENSSL_DIR` environment variable for the\ncompilation process.\n\nMake sure you also have the development packages of openssl installed.\nFor example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.\n\nIf you're in a situation where you think the directory *should* be found\nautomatically, please open a bug at https://github.com/sfackler/rust-openssl\nand include information about your system as well as this message.\n\n$HOST = x86_64-unknown-linux-gnu\n$TARGET = x86_64-unknown-linux-gnu\nopenssl-sys = 0.9.102\n\n\nIt looks like you're compiling on Linux and also targeting Linux. Currently this\nrequires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`\ncould not be found. If you have OpenSSL installed you can likely fix this by\ninstalling `pkg-config`.\n...\n
You can fix this by adding the necessary dependencies for building git2 with pixi:
pixi add openssl pkg-config compilers\n

Now you should be able to build your project again:

pixi run build\n...\n   Compiling git2 v0.18.3\n   Compiling my_rust_project v0.1.0 (/my_rust_project)\n    Finished dev [unoptimized + debuginfo] target(s) in 7.44s\n     Running `target/debug/my_rust_project`\n

"},{"location":"tutorials/rust/#extra-add-more-tasks","title":"Extra: Add more tasks","text":"

You can add more tasks to your pixi.toml file to simplify your workflow.

For example, you can add a test task to run your tests:

pixi task add test \"cargo test\"\n

And you can add a clean task to clean your project:

pixi task add clean \"cargo clean\"\n

You can add a formatting task to your project:

pixi task add fmt \"cargo fmt\"\n

You can chain tasks to run multiple commands using the depends-on field.

pixi task add lint \"cargo clippy\" --depends-on fmt\n
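This should add roughly the following entry to pixi.toml, matching the depends-on syntax shown earlier:

```toml
[tasks]
lint = { cmd = "cargo clippy", depends-on = ["fmt"] }
```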

"},{"location":"tutorials/rust/#conclusion","title":"Conclusion","text":"

In this tutorial, we showed you how to create a Rust project using pixi. We also showed you how to add dependencies to your project using pixi. This way you can make sure that your project is reproducible on any system that has pixi installed.

"},{"location":"tutorials/rust/#show-off-your-work","title":"Show Off Your Work!","text":"

Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!

"},{"location":"CHANGELOG/","title":"Changelog","text":"

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

"},{"location":"CHANGELOG/#0321-2024-10-08","title":"[0.32.1] - 2024-10-08","text":""},{"location":"CHANGELOG/#fixes","title":"Fixes","text":""},{"location":"CHANGELOG/#documentation","title":"Documentation","text":""},{"location":"CHANGELOG/#0320-2024-10-08","title":"[0.32.0] - 2024-10-08","text":""},{"location":"CHANGELOG/#highlights","title":"\u2728 Highlights","text":"

The biggest fix in this PR is the move to the latest rattler, which came with some major bug fixes for macOS and Rust 1.81 compatibility.

"},{"location":"CHANGELOG/#changed","title":"Changed","text":""},{"location":"CHANGELOG/#fixed","title":"Fixed","text":""},{"location":"CHANGELOG/#0310-2024-10-03","title":"[0.31.0] - 2024-10-03","text":""},{"location":"CHANGELOG/#highlights_1","title":"\u2728 Highlights","text":"

Thanks to our maintainer @baszamstra! He sped up the resolver for all cases we could think of in #2162. Check the resulting solve times for the environments in our test set:

"},{"location":"CHANGELOG/#added","title":"Added","text":""},{"location":"CHANGELOG/#changed_1","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_1","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_1","title":"Fixed","text":""},{"location":"CHANGELOG/#performance","title":"Performance","text":""},{"location":"CHANGELOG/#new-contributors","title":"New Contributors","text":""},{"location":"CHANGELOG/#0300-2024-09-19","title":"[0.30.0] - 2024-09-19","text":""},{"location":"CHANGELOG/#highlights_2","title":"\u2728 Highlights","text":"

I want to thank @synapticarbors and @abkfenris for starting the work on pixi project export. Pixi now supports the export of a conda environment.yml file and a conda explicit specification file. This is a great addition to the project and will help users share their projects with non-pixi users.

"},{"location":"CHANGELOG/#added_1","title":"Added","text":""},{"location":"CHANGELOG/#changed_2","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_2","title":"Documentation","text":""},{"location":"CHANGELOG/#testing","title":"Testing","text":""},{"location":"CHANGELOG/#fixed_2","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_1","title":"New Contributors","text":""},{"location":"CHANGELOG/#0290-2024-09-04","title":"[0.29.0] - 2024-09-04","text":""},{"location":"CHANGELOG/#highlights_3","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_2","title":"Added","text":" "},{"location":"CHANGELOG/#changed_3","title":"Changed","text":" "},{"location":"CHANGELOG/#fixed_3","title":"Fixed","text":" "},{"location":"CHANGELOG/#refactor_1","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_2","title":"New Contributors","text":""},{"location":"CHANGELOG/#0282-2024-08-28","title":"[0.28.2] - 2024-08-28","text":""},{"location":"CHANGELOG/#changed_4","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_3","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_4","title":"Fixed","text":""},{"location":"CHANGELOG/#0281-2024-08-26","title":"[0.28.1] - 2024-08-26","text":""},{"location":"CHANGELOG/#changed_5","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_4","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_5","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_3","title":"New Contributors","text":""},{"location":"CHANGELOG/#0280-2024-08-22","title":"[0.28.0] - 2024-08-22","text":""},{"location":"CHANGELOG/#highlights_4","title":"\u2728 
Highlights","text":""},{"location":"CHANGELOG/#added_3","title":"Added","text":""},{"location":"CHANGELOG/#changed_6","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_5","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_6","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_2","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_4","title":"New Contributors","text":""},{"location":"CHANGELOG/#0271-2024-08-09","title":"[0.27.1] - 2024-08-09","text":""},{"location":"CHANGELOG/#documentation_6","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_7","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_3","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_5","title":"New Contributors","text":""},{"location":"CHANGELOG/#0270-2024-08-07","title":"[0.27.0] - 2024-08-07","text":""},{"location":"CHANGELOG/#highlights_5","title":"\u2728 Highlights","text":"

This release contains a lot of refactoring and improvements to the codebase, in preparation for future features and improvements. Along with that, we've fixed a ton of bugs. To make sure we're not breaking anything, we've added a lot of tests and CI checks. But let us know if you find any issues!

As a reminder, you can update pixi using pixi self-update and move to a specific version, including backwards, with pixi self-update --version 0.27.0.

"},{"location":"CHANGELOG/#added_4","title":"Added","text":""},{"location":"CHANGELOG/#changed_7","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_7","title":"Documentation","text":""},{"location":"CHANGELOG/#testing_1","title":"Testing","text":""},{"location":"CHANGELOG/#fixed_8","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_4","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_6","title":"New Contributors","text":""},{"location":"CHANGELOG/#0261-2024-07-22","title":"[0.26.1] - 2024-07-22","text":""},{"location":"CHANGELOG/#fixed_9","title":"Fixed","text":""},{"location":"CHANGELOG/#0260-2024-07-19","title":"[0.26.0] - 2024-07-19","text":""},{"location":"CHANGELOG/#highlights_6","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_5","title":"Added","text":""},{"location":"CHANGELOG/#changed_8","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_8","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_10","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_5","title":"Refactor","text":""},{"location":"CHANGELOG/#removed","title":"Removed","text":""},{"location":"CHANGELOG/#new-contributors_7","title":"New Contributors","text":""},{"location":"CHANGELOG/#0250-2024-07-05","title":"[0.25.0] - 2024-07-05","text":""},{"location":"CHANGELOG/#highlights_7","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#changed_9","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_9","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_11","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_6","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_8","title":"New Contributors","text":""},{"location":"CHANGELOG/#0242-2024-06-14","title":"[0.24.2] - 
2024-06-14","text":""},{"location":"CHANGELOG/#documentation_10","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_12","title":"Fixed","text":""},{"location":"CHANGELOG/#0241-2024-06-12","title":"[0.24.1] - 2024-06-12","text":""},{"location":"CHANGELOG/#fixed_13","title":"Fixed","text":""},{"location":"CHANGELOG/#0240-2024-06-12","title":"[0.24.0] - 2024-06-12","text":""},{"location":"CHANGELOG/#highlights_8","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_6","title":"Added","text":""},{"location":"CHANGELOG/#changed_10","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_11","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_14","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_9","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0230-2024-05-27","title":"[0.23.0] - 2024-05-27","text":""},{"location":"CHANGELOG/#highlights_9","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_7","title":"Added","text":""},{"location":"CHANGELOG/#changed_11","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_12","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_15","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_7","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_10","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0220-2024-05-13","title":"[0.22.0] - 2024-05-13","text":""},{"location":"CHANGELOG/#highlights_10","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_8","title":"Added","text":""},{"location":"CHANGELOG/#documentation_13","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_16","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_8","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_11","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0211-2024-05-07","title":"[0.21.1] - 2024-05-07","text":""},{"location":"CHANGELOG/#fixed_17","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0210-2024-05-06","title":"[0.21.0] - 2024-05-06","text":""},{"location":"CHANGELOG/#highlights_11","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_9","title":"Added","text":""},{"location":"CHANGELOG/#changed_12","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_14","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_18","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_9","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_12","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0201-2024-04-26","title":"[0.20.1] - 2024-04-26","text":""},{"location":"CHANGELOG/#highlights_12","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#fixed_19","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_13","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0200-2024-04-19","title":"[0.20.0] - 2024-04-19","text":""},{"location":"CHANGELOG/#highlights_13","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_10","title":"Added","text":""},{"location":"CHANGELOG/#changed_13","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_15","title":"Documentation","text":" "},{"location":"CHANGELOG/#fixed_20","title":"Fixed","text":""},{"location":"CHANGELOG/#breaking","title":"BREAKING","text":""},{"location":"CHANGELOG/#new-contributors_14","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0191-2024-04-11","title":"[0.19.1] - 2024-04-11","text":""},{"location":"CHANGELOG/#highlights_14","title":"\u2728 Highlights","text":"

This fixes the issue where pixi would generate broken environments/lockfiles when a mapping for a brand-new version of a package is missing.

"},{"location":"CHANGELOG/#changed_14","title":"Changed","text":"

Full commit history

"},{"location":"CHANGELOG/#0190-2024-04-10","title":"[0.19.0] - 2024-04-10","text":""},{"location":"CHANGELOG/#highlights_15","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_11","title":"Added","text":""},{"location":"CHANGELOG/#changed_15","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_16","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_21","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_15","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0180-2024-04-02","title":"[0.18.0] - 2024-04-02","text":""},{"location":"CHANGELOG/#highlights_16","title":"\u2728 Highlights","text":"

[!TIP] These new features are part of the ongoing effort to make pixi more flexible, powerful, and comfortable for Python users. They are still in progress, so expect more improvements on these features soon; please report any issues you encounter and follow our next releases!

"},{"location":"CHANGELOG/#added_12","title":"Added","text":""},{"location":"CHANGELOG/#changed_16","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_17","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_22","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_16","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0171-2024-03-21","title":"[0.17.1] - 2024-03-21","text":""},{"location":"CHANGELOG/#highlights_17","title":"\u2728 Highlights","text":"

A quick bug-fix release for pixi list.

"},{"location":"CHANGELOG/#documentation_18","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_23","title":"Fixed","text":""},{"location":"CHANGELOG/#0170-2024-03-19","title":"[0.17.0] - 2024-03-19","text":""},{"location":"CHANGELOG/#highlights_18","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_13","title":"Added","text":""},{"location":"CHANGELOG/#changed_17","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_19","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_24","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_17","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0161-2024-03-11","title":"[0.16.1] - 2024-03-11","text":""},{"location":"CHANGELOG/#fixed_25","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0160-2024-03-09","title":"[0.16.0] - 2024-03-09","text":""},{"location":"CHANGELOG/#highlights_19","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":""},{"location":"CHANGELOG/#changed_18","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_26","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_18","title":"New Contributors","text":"

Full Commit history

"},{"location":"CHANGELOG/#0152-2024-02-29","title":"[0.15.2] - 2024-02-29","text":""},{"location":"CHANGELOG/#changed_19","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_27","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_19","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0151-2024-02-26","title":"[0.15.1] - 2024-02-26","text":""},{"location":"CHANGELOG/#added_15","title":"Added","text":""},{"location":"CHANGELOG/#changed_20","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_28","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0150-2024-02-23","title":"[0.15.0] - 2024-02-23","text":""},{"location":"CHANGELOG/#highlights_20","title":"\u2728 Highlights","text":"

[!WARNING] This version's build failed; use v0.15.1 instead.

"},{"location":"CHANGELOG/#added_16","title":"Added","text":""},{"location":"CHANGELOG/#fixed_29","title":"Fixed","text":""},{"location":"CHANGELOG/#other","title":"Other","text":"

Full commit history

"},{"location":"CHANGELOG/#0140-2024-02-15","title":"[0.14.0] - 2024-02-15","text":""},{"location":"CHANGELOG/#highlights_21","title":"\u2728 Highlights","text":"

Now, solve-groups can be used in [environments] to ensure dependency alignment across different environments without simultaneous installation. This feature is particularly beneficial for managing identical dependencies in test and production environments. Example configuration:

[environments]\ntest = { features = [\"prod\", \"test\"], solve-groups = [\"group1\"] }\nprod = { features = [\"prod\"], solve-groups = [\"group1\"] }\n
This setup simplifies managing dependencies that must be consistent across test and production.

"},{"location":"CHANGELOG/#added_17","title":"Added","text":""},{"location":"CHANGELOG/#changed_21","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_30","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_20","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0130-2024-02-01","title":"[0.13.0] - 2024-02-01","text":""},{"location":"CHANGELOG/#highlights_22","title":"\u2728 Highlights","text":"

This release is packed with features! The major ones are: - We added support for multiple environments. :tada: Check out the documentation - We added support for sdist installation, which greatly increases the number of packages that can be installed from PyPI. :rocket:

[!IMPORTANT]

Renaming of PIXI_PACKAGE_* variables:

PIXI_PACKAGE_ROOT -> PIXI_PROJECT_ROOT\nPIXI_PACKAGE_NAME ->  PIXI_PROJECT_NAME\nPIXI_PACKAGE_MANIFEST -> PIXI_PROJECT_MANIFEST\nPIXI_PACKAGE_VERSION -> PIXI_PROJECT_VERSION\nPIXI_PACKAGE_PLATFORMS -> PIXI_ENVIRONMENT_PLATFORMS\n
Check documentation here: https://pixi.sh/environment/
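For scripts that need to keep working across this rename, one possible approach (the path value below is purely illustrative; pixi sets these variables for you inside tasks) is to fall back to the old variable when the new one is unset:

```shell
# Prefer the new variable name; fall back to the old one on pre-0.13.0 pixi.
# The value here is hypothetical -- pixi normally sets it for you.
PIXI_PACKAGE_ROOT="/path/to/project"
PROJECT_ROOT="${PIXI_PROJECT_ROOT:-$PIXI_PACKAGE_ROOT}"
echo "$PROJECT_ROOT"
```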

[!IMPORTANT]

The .pixi/env/ folder has been moved to accommodate multiple environments. If you only have one environment it is now named .pixi/envs/default.

"},{"location":"CHANGELOG/#added_18","title":"Added","text":" "},{"location":"CHANGELOG/#changed_22","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_31","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_21","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0120-2024-01-15","title":"[0.12.0] - 2024-01-15","text":""},{"location":"CHANGELOG/#highlights_23","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_19","title":"Added","text":""},{"location":"CHANGELOG/#changed_23","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_32","title":"Fixed","text":""},{"location":"CHANGELOG/#removed_1","title":"Removed","text":""},{"location":"CHANGELOG/#documentation_20","title":"Documentation","text":""},{"location":"CHANGELOG/#new-contributors_22","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.11.0...v0.12.0

"},{"location":"CHANGELOG/#0111-2024-01-06","title":"[0.11.1] - 2024-01-06","text":""},{"location":"CHANGELOG/#fixed_33","title":"Fixed","text":""},{"location":"CHANGELOG/#0110-2024-01-05","title":"[0.11.0] - 2024-01-05","text":""},{"location":"CHANGELOG/#highlights_24","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_20","title":"Added","text":""},{"location":"CHANGELOG/#changed_24","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_34","title":"Fixed","text":""},{"location":"CHANGELOG/#documentation_21","title":"Documentation","text":""},{"location":"CHANGELOG/#new-contributors_23","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.10.0...v0.11.0

"},{"location":"CHANGELOG/#0100-2023-12-8","title":"[0.10.0] - 2023-12-8","text":""},{"location":"CHANGELOG/#highlights_25","title":"Highlights","text":""},{"location":"CHANGELOG/#added_21","title":"Added","text":""},{"location":"CHANGELOG/#fixed_35","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous_1","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_24","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.1...v0.10.0

"},{"location":"CHANGELOG/#091-2023-11-29","title":"[0.9.1] - 2023-11-29","text":""},{"location":"CHANGELOG/#highlights_26","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_36","title":"Fixed","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.0...v0.9.1

"},{"location":"CHANGELOG/#090-2023-11-28","title":"[0.9.0] - 2023-11-28","text":""},{"location":"CHANGELOG/#highlights_27","title":"Highlights","text":""},{"location":"CHANGELOG/#added_22","title":"Added","text":""},{"location":"CHANGELOG/#fixed_37","title":"Fixed","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.8.0...v0.9.0

"},{"location":"CHANGELOG/#080-2023-11-27","title":"[0.8.0] - 2023-11-27","text":""},{"location":"CHANGELOG/#highlights_28","title":"Highlights","text":"

[!NOTE] [pypi-dependencies] support is still incomplete; missing functionality is listed here: https://github.com/orgs/prefix-dev/projects/6. Our intent is not to have 100% feature parity with pip; our goal is that you only need pixi for both conda and PyPI packages alike.

"},{"location":"CHANGELOG/#added_23","title":"Added","text":""},{"location":"CHANGELOG/#fixed_38","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous_2","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_25","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.7.0...v0.8.0

"},{"location":"CHANGELOG/#070-2023-11-14","title":"[0.7.0] - 2023-11-14","text":""},{"location":"CHANGELOG/#highlights_29","title":"Highlights","text":""},{"location":"CHANGELOG/#added_24","title":"Added","text":""},{"location":"CHANGELOG/#changed_25","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_39","title":"Fixed","text":""},{"location":"CHANGELOG/#docs","title":"Docs","text":""},{"location":"CHANGELOG/#new-contributors_26","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.6.0...v0.7.0

"},{"location":"CHANGELOG/#060-2023-10-17","title":"[0.6.0] - 2023-10-17","text":""},{"location":"CHANGELOG/#highlights_30","title":"Highlights","text":"

This release fixes some bugs and adds the --cwd option to the tasks.

"},{"location":"CHANGELOG/#fixed_40","title":"Fixed","text":""},{"location":"CHANGELOG/#changed_26","title":"Changed","text":""},{"location":"CHANGELOG/#added_25","title":"Added","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.5.0...v0.6.0

"},{"location":"CHANGELOG/#050-2023-10-03","title":"[0.5.0] - 2023-10-03","text":""},{"location":"CHANGELOG/#highlights_31","title":"Highlights","text":"

We rebuilt pixi shell, fixing an issue where your rc file would override the environment activation.

"},{"location":"CHANGELOG/#fixed_41","title":"Fixed","text":""},{"location":"CHANGELOG/#added_26","title":"Added","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.4.0...v0.5.0

"},{"location":"CHANGELOG/#040-2023-09-22","title":"[0.4.0] - 2023-09-22","text":""},{"location":"CHANGELOG/#highlights_32","title":"Highlights","text":"

This release adds the start of a new CLI command, pixi project, which will allow users to interact with the project configuration from the command line.

"},{"location":"CHANGELOG/#fixed_42","title":"Fixed","text":""},{"location":"CHANGELOG/#added_27","title":"Added","text":""},{"location":"CHANGELOG/#new-contributors_27","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.3.0...v0.4.0

"},{"location":"CHANGELOG/#030-2023-09-11","title":"[0.3.0] - 2023-09-11","text":""},{"location":"CHANGELOG/#highlights_33","title":"Highlights","text":"

This release fixes a lot of issues encountered by the community, as well as some awesome community contributions like the addition of pixi global list and pixi global remove.

"},{"location":"CHANGELOG/#fixed_43","title":"Fixed","text":""},{"location":"CHANGELOG/#added_28","title":"Added","text":""},{"location":"CHANGELOG/#changed_27","title":"Changed","text":""},{"location":"CHANGELOG/#020-2023-08-22","title":"[0.2.0] - 2023-08-22","text":""},{"location":"CHANGELOG/#highlights_34","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_44","title":"Fixed","text":""},{"location":"CHANGELOG/#added_29","title":"Added","text":""},{"location":"CHANGELOG/#010-2023-08-11","title":"[0.1.0] - 2023-08-11","text":"

As this is our first Semantic Versioning release, we move from the prototype phase to the development phase, as semver describes. A 0.x release can be anything from a new major feature to a breaking change, while 0.0.x releases will be bugfixes or small improvements.

"},{"location":"CHANGELOG/#highlights_35","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_45","title":"Fixed","text":""},{"location":"CHANGELOG/#008-2023-08-01","title":"[0.0.8] - 2023-08-01","text":""},{"location":"CHANGELOG/#highlights_36","title":"Highlights","text":""},{"location":"CHANGELOG/#added_30","title":"Added","text":""},{"location":"CHANGELOG/#fixed_46","title":"Fixed","text":""},{"location":"CHANGELOG/#changed_28","title":"Changed","text":""},{"location":"CHANGELOG/#007-2023-07-11","title":"[0.0.7] - 2023-07-11","text":""},{"location":"CHANGELOG/#highlights_37","title":"Highlights","text":""},{"location":"CHANGELOG/#breaking-changes","title":"BREAKING CHANGES","text":""},{"location":"CHANGELOG/#added_31","title":"Added","text":""},{"location":"CHANGELOG/#fixed_47","title":"Fixed","text":""},{"location":"CHANGELOG/#006-2023-06-30","title":"[0.0.6] - 2023-06-30","text":""},{"location":"CHANGELOG/#highlights_38","title":"Highlights","text":"

Improving reliability is important to us, so we added an integration testing framework; we can now test as close as possible to the CLI level using cargo.

"},{"location":"CHANGELOG/#added_32","title":"Added","text":""},{"location":"CHANGELOG/#fixed_48","title":"Fixed","text":""},{"location":"CHANGELOG/#005-2023-06-26","title":"[0.0.5] - 2023-06-26","text":"

Fixing Windows installer build in CI. (#145)

"},{"location":"CHANGELOG/#004-2023-06-26","title":"[0.0.4] - 2023-06-26","text":""},{"location":"CHANGELOG/#highlights_39","title":"Highlights","text":"

A new command, auth, which can be used to authenticate with the host of the package channels. A new command, shell, which can be used to start a shell in the pixi environment of a project. A refactor of the install command: it has been renamed to global install, and install now installs a pixi project if you run it in the project directory. Platform-specific dependencies using [target.linux-64.dependencies] instead of [dependencies] in the pixi.toml.

Lots and lots of fixes and improvements to make things easier for the user, where bumping to the new version of rattler helped a lot.

"},{"location":"CHANGELOG/#added_33","title":"Added","text":""},{"location":"CHANGELOG/#changed_29","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_49","title":"Fixed","text":" "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Getting Started","text":"

Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Use pixi cross-platform, on Windows, macOS, and Linux.

"},{"location":"#installation","title":"Installation","text":"

To install pixi you can run the following command in your terminal:

Linux & macOSWindows
curl -fsSL https://pixi.sh/install.sh | bash\n

The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to ~/.pixi/bin. If this directory does not already exist, the script will create it.

The script will also update your ~/.bash_profile to include ~/.pixi/bin in your PATH, allowing you to invoke the pixi command from anywhere.

PowerShell:

iwr -useb https://pixi.sh/install.ps1 | iex\n
winget:
winget install prefix-dev.pixi\n
The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to LocalAppData/pixi/bin. If this directory does not already exist, the script will create it.

The command will also automatically add LocalAppData/pixi/bin to your PATH, allowing you to invoke pixi from anywhere.

Tip

You might need to restart your terminal or source your shell configuration for the changes to take effect.

You can find more options for the installation script here.

"},{"location":"#autocompletion","title":"Autocompletion","text":"

To get autocompletion follow the instructions for your shell. Afterwards, restart the shell or source the shell config file.

"},{"location":"#bash-default-on-most-linux-systems","title":"Bash (default on most Linux systems)","text":"
echo 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\n
"},{"location":"#zsh-default-on-macos","title":"Zsh (default on macOS)","text":"
echo 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\n
"},{"location":"#powershell-pre-installed-on-all-windows-systems","title":"PowerShell (pre-installed on all Windows systems)","text":"
Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n

Failure because no profile file exists

Make sure your profile file exists, otherwise create it with:

New-Item -Path $PROFILE -ItemType File -Force\n

"},{"location":"#fish","title":"Fish","text":"
echo 'pixi completion --shell fish | source' > ~/.config/fish/completions/pixi.fish\n
"},{"location":"#nushell","title":"Nushell","text":"

Add the following to the end of your Nushell env file (find it by running $nu.env-path in Nushell):

mkdir ~/.cache/pixi\npixi completion --shell nushell | save -f ~/.cache/pixi/completions.nu\n

And add the following to the end of your Nushell configuration (find it by running $nu.config-path):

use ~/.cache/pixi/completions.nu *\n
"},{"location":"#elvish","title":"Elvish","text":"
echo 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n
"},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"

Although we recommend installing pixi through the above method, we also provide additional installation methods.

"},{"location":"#homebrew","title":"Homebrew","text":"

Pixi is available via Homebrew. To install pixi via Homebrew, simply run:

brew install pixi\n
"},{"location":"#windows-installer","title":"Windows installer","text":"

We provide an MSI installer on our GitHub releases page. The installer will download pixi and add it to the PATH.

"},{"location":"#install-from-source","title":"Install from source","text":"

pixi is 100% written in Rust and can therefore be installed, built, and tested with cargo. To start using pixi from a source build, run:

cargo install --locked --git https://github.com/prefix-dev/pixi.git pixi\n

We don't publish to crates.io anymore, so you need to install it from the repository. The reason is that we depend on some unpublished crates, which prevents us from publishing to crates.io.

Or, when you want to make changes, use:

cargo build\ncargo test\n

If you have any issues building because of the dependency on rattler, check out its compile steps.

"},{"location":"#installer-script-options","title":"Installer script options","text":"Linux & macOSWindows

The installation script has several options that can be manipulated through environment variables.

| Variable | Description | Default Value |
|---|---|---|
| PIXI_VERSION | The version of pixi getting installed; can be used to up- or downgrade. | latest |
| PIXI_HOME | The location of the binary folder. | $HOME/.pixi |
| PIXI_ARCH | The architecture the pixi version was built for. | uname -m |
| PIXI_NO_PATH_UPDATE | If set, the $PATH will not be updated to add pixi to it. | |
| TMP_DIR | The temporary directory the script uses to download to and unpack the binary from. | /tmp |

For example, on Apple Silicon, you can force the installation of the x86 version:

curl -fsSL https://pixi.sh/install.sh | PIXI_ARCH=x86_64 bash\n
Or set the version
curl -fsSL https://pixi.sh/install.sh | PIXI_VERSION=v0.18.0 bash\n

The installation script has several options that can be manipulated through environment variables.

| Variable | Environment variable | Description | Default Value |
|---|---|---|---|
| PixiVersion | PIXI_VERSION | The version of pixi getting installed; can be used to up- or downgrade. | latest |
| PixiHome | PIXI_HOME | The location of the installation. | $Env:USERPROFILE\.pixi |
| NoPathUpdate | | If set, the $PATH will not be updated to add pixi to it. | |

For example, set the version using:

iwr -useb https://pixi.sh/install.ps1 | iex -Args \"-PixiVersion v0.18.0\"\n
"},{"location":"#update","title":"Update","text":"

Updating is as simple as installing: rerunning the installation script gets you the latest version.

pixi self-update\n
Or get a specific pixi version using:
pixi self-update --version x.y.z\n

Note

If you've used a package manager like brew, mamba, conda, paru, etc. to install pixi, it's preferable to use that tool's built-in update mechanism, e.g. brew upgrade pixi.

"},{"location":"#uninstall","title":"Uninstall","text":"

To uninstall pixi from your system, simply remove the binary.

Linux & macOSWindows
rm ~/.pixi/bin/pixi\n
$PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n

After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi directory and remove the directory from your path.
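The PATH cleanup can be scripted as well. This is a hedged sketch, not an official uninstaller: remove_from_path is a hypothetical helper that drops one directory from a colon-separated PATH-like string.

```shell
# Remove one directory from a colon-separated PATH-like string.
remove_from_path() {
  # $1: directory to drop, $2: the PATH-like string
  echo "$2" | tr ':' '\n' | grep -Fxv "$1" | paste -s -d ':' -
}

# Example with an illustrative PATH value:
remove_from_path "$HOME/.pixi/bin" "/usr/bin:$HOME/.pixi/bin:/bin"
```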

"},{"location":"Community/","title":"Community","text":"

When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:

[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n

Customize your badge

To further customize the look and feel of your badge, you can add &style=<custom-style> at the end of the URL. See the documentation on shields.io for more info.
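For instance, appending one of the shields.io styles (flat-square is shown here only as an example) produces the customized badge URL:

```shell
# Base badge URL from the snippet above; append a shields.io style parameter.
BASE_URL="https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json"
echo "${BASE_URL}&style=flat-square"
```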

"},{"location":"Community/#built-using-pixi","title":"Built using Pixi","text":" "},{"location":"FAQ/","title":"Frequently asked questions","text":""},{"location":"FAQ/#what-is-the-difference-with-conda-mamba-poetry-pip","title":"What is the difference with conda, mamba, poetry, pip","text":"Tool Installs python Builds packages Runs predefined tasks Has lock files builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi","text":"

Starting with the name prefix, we iterated until we had a name that was easy to pronounce, spell, and remember. There also wasn't a CLI tool using that name yet, unlike px, pex, pax, etc. We think it sparks curiosity and fun; if you don't agree, we're sorry, but you can always alias it to whatever you like.

Linux & macOSWindows
alias not_pixi=\"pixi\"\n

PowerShell:

New-Alias -Name not_pixi -Value pixi\n

"},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build","text":"

TL;DR: It's coming, we promise!

pixi build is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool, which we're creating with rattler-build and which will be used as a library in pixi.

"},{"location":"basic_usage/","title":"Basic usage","text":"

Ensure you've got pixi set up: running pixi should show the help. If it doesn't, see the getting started section.

pixi\n

Initialize a new project and navigate to the project directory.

pixi init pixi-hello-world\ncd pixi-hello-world\n

Add the dependencies you would like to use.

pixi add python\n

Create a file named hello_world.py in the directory and paste the following code into the file.

hello_world.py
def hello():\n    print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n    hello()\n

Run the code inside the environment.

pixi run python hello_world.py\n

You can also put this run command in a task.

pixi task add hello python hello_world.py\n

After adding the task, you can run the task using its name.

pixi run hello\n

Use the shell command to activate the environment and start a new shell in there.

pixi shell\npython\nexit()\n

You've just learned the basic features of pixi:

  1. initializing a project
  2. adding a dependency
  3. adding a task, and executing it
  4. running a program

Feel free to play around with what you just learned like adding more tasks, dependencies or code.

Happy coding!

"},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"

Use pixi to install tools on your machine.

Some notable examples:

# Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
"},{"location":"basic_usage/#using-the-no-activation-option","title":"Using the --no-activation option","text":"

When installing packages globally, you can use the --no-activation option to prevent the insertion of environment activation code into the installed executable scripts. This means that when you run the installed executable, it won't modify the PATH or CONDA_PREFIX environment variables beforehand.

Example:

# Install a package without inserting activation code\npixi global install ruff --no-activation\n

This option can be useful in scenarios where you want more control over the environment activation or if you're using the installed executables in contexts where automatic activation might interfere with other processes.

"},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"

You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.

- uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n

See the GitHub Actions documentation for more details.

"},{"location":"vision/","title":"Vision","text":"

We created pixi because we want a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.

"},{"location":"vision/#pixi-values","title":"Pixi values","text":"

We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:

  1. Fast. We want a fast package manager that is able to solve the environment in a few seconds.
  2. User Friendly. We want a package manager that puts user friendliness at the forefront, providing easy, accessible, and intuitive commands that have the element of least surprise.
  3. Isolated Environment. We want isolated environments that are reproducible and easy to share. Ideally, they should run on all common platforms. The Conda packaging system provides an excellent base for this.
  4. Single Tool. We want to integrate the most common needs of a development project into pixi, so it should support at least dependency management, command management, and building and uploading packages. You should not need to reach for another external tool for this.
  5. Fun. It should be fun to use pixi and it should not cause frustration; you should not need to think about it much, and it should generally just get out of your way.
"},{"location":"vision/#conda","title":"Conda","text":"

We are building on top of the conda packaging ecosystem; this means a huge number of packages are available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community-maintained and very open to contributions. It is widely used in data science, scientific computing, robotics, and other fields, and has a proven track record.

"},{"location":"vision/#target-languages","title":"Target languages","text":"

Essentially, we are language agnostic: we target any language that can be installed with conda, including C++, Python, Rust, Zig, etc. But we do believe the Python ecosystem can benefit from a good package manager based on conda, so we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects, e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system-level packages.

"},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"

You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.

Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n  <HOST>  The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n      --token <TOKEN>              The token to use (for authentication with prefix.dev)\n      --username <USERNAME>        The username to use (for basic HTTP authentication)\n      --password <PASSWORD>        The password to use (for basic HTTP authentication)\n      --conda-token <CONDA_TOKEN>  The token to use on anaconda.org / quetz authentication\n  -v, --verbose...                 More output per occurrence\n  -q, --quiet...                   Less output per occurrence\n  -h, --help                       Print help\n

The different options are \"token\", \"conda-token\" and \"username + password\".

The token variant implements standard \"Bearer Token\" authentication, as used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authorization: Bearer <TOKEN>.

The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/....

The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/.... This authentication method can be configured quite easily with a reverse proxy such as NGINX or Apache and is thus commonly used in self-hosted systems.
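To make the three options concrete, here is a sketch of what each method actually sends on the wire; the token and credentials below are made-up examples, not real secrets:

```shell
# Hypothetical credentials, for illustration only.
TOKEN="pfx_exampletoken"
USER="user"
PASS="password"

# Bearer token: sent as an extra header on every request.
echo "Authorization: Bearer ${TOKEN}"

# Conda token: embedded in the request URL instead of a header.
echo "https://conda.anaconda.org/t/${TOKEN}/conda-forge/linux-64/"

# Basic HTTP auth: base64("user:password") in an Authorization header.
CREDS=$(printf '%s:%s' "$USER" "$PASS" | base64)
echo "Authorization: Basic ${CREDS}"
```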

"},{"location":"advanced/authentication/#examples","title":"Examples","text":"

Login to prefix.dev:

pixi auth login prefix.dev --token pfx_jj8WDzvnuTHEGdAhwRZMC1Ag8gSto8\n

Login to anaconda.org:

pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n

Login to a basic HTTP secured server:

pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"

The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.

On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access program that comes pre-installed on macOS. Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

On Linux, one can use GNOME Keyring (or just Keyring) to access credentials that are securely stored by libsecret. Searching for rattler should list all the credentials stored by pixi and other rattler-based programs.

"},{"location":"advanced/authentication/#fallback-storage","title":"Fallback storage","text":"

If you run on a server with none of the aforementioned keychains available, then pixi falls back to store the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json and contains the credentials.

"},{"location":"advanced/authentication/#override-the-authentication-storage","title":"Override the authentication storage","text":"

You can use the RATTLER_AUTH_FILE environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.

E.g.

export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n

The JSON should follow the following format:

{\n    \"*.prefix.dev\": {\n        \"BearerToken\": \"your_token\"\n    },\n    \"otherhost.com\": {\n        \"BasicHTTP\": {\n            \"username\": \"your_username\",\n            \"password\": \"your_password\"\n        }\n    },\n    \"conda.anaconda.org\": {\n        \"CondaToken\": \"your_token\"\n    }\n}\n

Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev also matches repo.prefix.dev).

Lastly, you can set the authentication override file in the global configuration file.
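For instance, this could look like the following in pixi's global configuration file (the authentication-override-file key name is an assumption based on pixi's global configuration options; verify it against your pixi version):

authentication-override-file = \"/path/to/your/credentials.json\"\n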

"},{"location":"advanced/authentication/#pypi-authentication","title":"PyPI authentication","text":"

Currently, we support the following methods for authenticating against PyPI:

  1. keyring authentication.
  2. .netrc file authentication.

We want to add more methods in the future, so if you have a specific method you would like to see, please let us know.

"},{"location":"advanced/authentication/#keyring-authentication","title":"Keyring authentication","text":"

Currently, pixi supports the uv method of authentication through the Python keyring library. To enable this, use the CLI flag --pypi-keyring-provider, which can be set to either subprocess (activated) or disabled.

# From an existing pixi project\npixi install --pypi-keyring-provider subprocess\n

This option can also be set in the global configuration file under pypi-config.
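As a sketch, assuming the pypi-config table and keyring-provider key names from pixi's global configuration, this could look like:

[pypi-config]\nkeyring-provider = \"subprocess\"\n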

"},{"location":"advanced/authentication/#installing-keyring","title":"Installing keyring","text":"

To install keyring you can use pixi global install:

Either use:

pixi global install keyring\n
GCP and other backends

The downside of this method is that, because you cannot yet inject into a pixi global environment, installing different keyring backends is not possible; only the default keyring backend can be used. Give the issue a \ud83d\udc4d if you would like to see inject as a feature.

Or alternatively, you can install keyring using pipx:

# Install pipx if you haven't already\npixi global install pipx\npipx install keyring\n\n# For Google Artifact Registry, also install and initialize its keyring backend.\n# Inject this into the pipx environment\npipx inject keyring keyrings.google-artifactregistry-auth --index-url https://pypi.org/simple\ngcloud auth login\n
"},{"location":"advanced/authentication/#using-keyring-with-basic-auth","title":"Using keyring with Basic Auth","text":"

Use keyring to store your credentials, e.g.:

keyring set https://my-index/simple your_username\n# prompt will appear for your password\n
"},{"location":"advanced/authentication/#configuration","title":"Configuration","text":"

Make sure to include username@ in the URL of the registry. An example of this would be:

[pypi-options]\nindex-url = \"https://username@custom-registry.com/simple\"\n
"},{"location":"advanced/authentication/#gcp","title":"GCP","text":"

For Google Artifact Registry, you can use the Google Cloud SDK to authenticate. Make sure to have run gcloud auth login before using pixi. Note that you also need to add oauth2accesstoken as the user in the URL of the registry. An example of this would be:

"},{"location":"advanced/authentication/#configuration_1","title":"Configuration","text":"
# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://oauth2accesstoken@<location>-python.pkg.dev/<project>/<repository>/simple\"]\n

Note

Include the /simple at the end, and replace <location>, <project>, and <repository> with your own location, project, and repository.

To find this URL more easily, you can use the gcloud command:

gcloud artifacts print-settings python --project=<project> --repository=<repository> --location=<location>\n
"},{"location":"advanced/authentication/#azure-devops","title":"Azure DevOps","text":"

Similarly for Azure DevOps, you can use the Azure keyring backend for authentication. The backend, along with installation instructions can be found at keyring.artifacts.

After following the instructions and making sure that keyring works correctly, you can use the following configuration:

"},{"location":"advanced/authentication/#configuration_2","title":"Configuration","text":"

# rest of the pixi.toml\n#\n# Adds the following options to the default feature\n[pypi-options]\nextra-index-urls = [\"https://VssSessionToken@pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/pypi/simple/\"]\n
This should allow you to get packages from the Azure DevOps artifact registry.

"},{"location":"advanced/authentication/#installing-your-environment","title":"Installing your environment","text":"

To actually install, either configure your global config or use the flag:

pixi install --pypi-keyring-provider subprocess\n

"},{"location":"advanced/authentication/#netrc-file","title":".netrc file","text":"

pixi allows you to access private registries securely by authenticating with credentials stored in a .netrc file.

In the .netrc file, you store authentication details like this:

machine registry-name\nlogin admin\npassword admin\n
For more details, you can access the .netrc docs.

"},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"

All logic deciding which dependencies can be installed from which channel comes down to the instructions we give the solver.

The actual code for this lives in the rattler_solve crate, which can however be hard to read. Therefore, this document continues with simplified flow charts.

"},{"location":"advanced/channel_priority/#channel-specific-dependencies","title":"Channel specific dependencies","text":"

When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.

[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex example, the solver will understand that the package is only available in my-channel and will not look for it in conda-forge.

The flowchart of the logic that excludes all other channels:

flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Channel Specific Dependency?}\n    C -->|Yes| D[Exclude All Other Channels for This Package]\n    C -->|No| E{Any Other Dependencies?}\n    E -->|Yes| B\n    E -->|No| F[End]\n    D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"

Channel priority is dictated by the order in the project.channels array, where the first channel is the highest priority. For instance:

[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge, the solver will not look for it in my-channel and your-channel, because they are excluded for this package. If the package is not found in conda-forge, the solver will look for it in my-channel; if it is found there, your-channel will be excluded for this package. This diagram explains the logic:
flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Loop Over Channels}\n    C --> D{Package in This Channel?}\n    D -->|No| C\n    D -->|Yes| E{\"This the first channel\n     for this package?\"}\n    E -->|Yes| F[Include Package in Candidates]\n    E -->|No| G[Exclude Package from Candidates]\n    F --> H{Any Other Channels?}\n    G --> H\n    H -->|Yes| C\n    H -->|No| I{Any Other Dependencies?}\n    I -->|No| J[End]\n    I -->|Yes| B

This method ensures the solver only adds a package to the candidates if it's found in the highest priority channel available. If you have 10 channels and the package is found in the 5th channel it will exclude the next 5 channels from the candidates if they also contain the package.

"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"

A common use case is to use pytorch with nvidia drivers, while also needing the conda-forge channel for the main dependencies.

[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0 channel, which is actually only the cuda package.

Then it will get all packages from the nvidia channel, which contains a few more packages, some of which overlap between the nvidia and conda-forge channels, like the cuda-cudart package, which will now only be retrieved from the nvidia channel because of the priority logic.

Then it will get the packages from the conda-forge channel, which is the main channel for the dependencies.

But the user only wants the pytorch packages from the pytorch channel, which is why pytorch is added last and the dependencies are added as channel specific dependencies.

We don't define the pytorch channel before conda-forge because we want to get as much as possible from conda-forge, as the pytorch channel does not always ship the best versions of all packages.

For example, it also ships the ffmpeg package, but only an old version that doesn't work with newer pytorch versions, thus breaking the installation if the priority logic were to skip the conda-forge channel for ffmpeg.

"},{"location":"advanced/channel_priority/#force-a-specific-channel-priority","title":"Force a specific channel priority","text":"

If you want to force a specific priority for a channel, you can use the priority (int) key in the channel definition. The higher the number, the higher the priority. Unspecified priorities are set to 0, but the index in the array still counts as a tiebreaker, where the first channel in the list has the highest priority.

This priority definition is mostly important for multiple environments with different channel priorities, as by default feature channels are prepended to the project channels.

[project]\nname = \"test_channel_priority\"\nplatforms = [\"linux-64\", \"osx-64\", \"win-64\", \"osx-arm64\"]\nchannels = [\"conda-forge\"]\n\n[feature.a]\nchannels = [\"nvidia\"]\n\n[feature.b]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = 1}]\n\n[feature.c]\nchannels = [ \"pytorch\", {channel = \"nvidia\", priority = -1}]\n\n[environments]\na = [\"a\"]\nb = [\"b\"]\nc = [\"c\"]\n
This example creates four environments: a, b, c, and the default environment, which will have the following channel order:

Environment and resulting channel order: default: conda-forge; a: nvidia, conda-forge; b: nvidia, pytorch, conda-forge; c: pytorch, conda-forge, nvidia. Check priority result with pixi info

Using pixi info you can check the priority of the channels in the environment.

pixi info\nEnvironments\n------------\n       Environment: default\n          Features: default\n          Channels: conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: a\n          Features: a, default\n          Channels: nvidia, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: b\n          Features: b, default\n          Channels: nvidia, pytorch, conda-forge\nDependency count: 0\nTarget platforms: linux-64\n\n       Environment: c\n          Features: c, default\n          Channels: pytorch, conda-forge, nvidia\nDependency count: 0\nTarget platforms: linux-64\n

"},{"location":"advanced/explain_info_command/","title":"Info command","text":"

pixi info prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json format using the --json flag, which can be useful for programmatically reading it.

Running pixi info in the pixi repo
\u279c pixi info\n      Pixi version: 0.13.0\n          Platform: linux-64\n  Virtual packages: __unix=0=0\n                  : __linux=6.5.12=0\n                  : __glibc=2.36=0\n                  : __cuda=12.3=0\n                  : __archspec=1=x86_64\n         Cache dir: /home/user/.cache/rattler/cache\n      Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n           Version: 0.13.0\n     Manifest file: /home/user/development/pixi/pixi.toml\n      Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n          Features: default\n          Channels: conda-forge\n  Dependency count: 10\n      Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n  Target platforms: linux-64, osx-arm64, win-64, osx-64\n             Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"

The first part of the info output is information that is always available and tells you what pixi can read on your machine.

"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"

This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.

"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"

The virtual packages that pixi can find on your machine.

In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are used in the solve step to check whether a package can be installed on the machine. A simple example: when a package depends on CUDA drivers being present on the host machine, it can do so by depending on the __cuda virtual package. In that case, if pixi cannot find the __cuda virtual package on your machine, the installation will fail.

"},{"location":"advanced/explain_info_command/#cache-dir","title":"Cache dir","text":"

The directory where pixi stores its cache. Check out the cache documentation for more information.

"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"

Check the authentication documentation

"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"

[requires --extended]

The size of the previously mentioned \"Cache dir\" in Mebibytes.

"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"

Everything below Project is info about the project you're currently in. This info is only available if your path has a manifest file.

"},{"location":"advanced/explain_info_command/#manifest-file","title":"Manifest file","text":"

The path to the manifest file that describes the project.

"},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"

The last time the lock file was updated, either manually or by pixi itself.

"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"

The environment info defined per environment. If you don't have any environments defined, this will only show the default environment.

"},{"location":"advanced/explain_info_command/#features","title":"Features","text":"

This lists which features are enabled in the environment. For the default environment, this is only default.

"},{"location":"advanced/explain_info_command/#channels","title":"Channels","text":"

The list of channels used in this environment.

"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"

The number of dependencies defined for this environment (not the number of installed dependencies).

"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"

The list of dependencies defined for this environment.

"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"

The platforms the project has defined.

"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"

We created prefix-dev/setup-pixi to facilitate using pixi in CI.

"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"
- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    pixi-version: v0.32.1\n    cache: true\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n

Pin your action versions

Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.8.0) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.

Put the following in your .github/dependabot.yml file to enable Dependabot for your GitHub Actions:

.github/dependabot.yml
version: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: monthly # (1)!\n    groups:\n      dependencies:\n        patterns:\n          - \"*\"\n
  1. or daily, weekly
"},{"location":"advanced/github_actions/#features","title":"Features","text":"

To see all available input arguments, see the action.yml file in setup-pixi. The most important features are described below.

"},{"location":"advanced/github_actions/#caching","title":"Caching","text":"

The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock file is present. It will then use the pixi.lock file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache input argument.

Customize your cache key

If you need to customize your cache-key, you can use the cache-key input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>.
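For example, a sketch of a customized cache key (pixi-cache- is just an illustrative prefix, not a required value):

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    cache: true\n    cache-key: pixi-cache-\n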

Only save caches on main

To avoid hitting the 10 GB cache size limit too quickly, you might want to restrict when the cache is saved. This can be done by setting the cache-write argument.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    cache: true\n    cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"

With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments input argument. This will install all environments that are specified and cache them.

[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"

The following example will install the py311 and py312 environments in different jobs.

test:\n  runs-on: ubuntu-latest\n  strategy:\n    matrix:\n      environment: [py311, py312]\n  steps:\n  - uses: actions/checkout@v4\n  - uses: prefix-dev/setup-pixi@v0.8.0\n    with:\n      environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"

The following example will install both the py311 and the py312 environment on the runner.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    environments: >- # (1)!\n      py311\n      py312\n- run: |\n  pixi run -e py311 test\n  pixi run -e py312 test\n
  1. separated by spaces, equivalent to

    environments: py311 py312\n

Caching behavior if you don't specify environments

If you don't specify any environment, the default environment will be installed and cached, even if you use other environments.

"},{"location":"advanced/github_actions/#authentication","title":"Authentication","text":"

There are currently three ways to authenticate with pixi:

For more information, see Authentication.

Handle secrets with care

Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }} syntax. These secrets will always be masked in the logs.

"},{"location":"advanced/github_actions/#token","title":"Token","text":"

Specify the token using the auth-token input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"

Specify the username and password using the auth-username and auth-password input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments, with Artifactory for example.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: custom-artifactory.com\n    auth-username: ${{ secrets.PIXI_USERNAME }}\n    auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"

Specify the conda-token using the conda-token input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel) is used at anaconda.org or with quetz instances.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    auth-host: anaconda.org # (1)!\n    conda-token: ${{ secrets.CONDA_TOKEN }}\n
  1. or my-quetz-instance.com
"},{"location":"advanced/github_actions/#custom-shell-wrapper","title":"Custom shell wrapper","text":"

setup-pixi allows you to run commands inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}. This can be useful if you want to run commands inside of the pixi environment, but don't want to use the pixi run command for each command.

- run: | # (1)!\n    python --version\n    pip install --no-deps -e .\n  shell: pixi run bash -e {0}\n
  1. everything here will be run inside of the pixi environment

You can even run Python scripts like this:

- run: | # (1)!\n    import my_package\n    print(\"Hello world!\")\n  shell: pixi run python {0}\n
  1. everything here will be run inside of the pixi environment

If you want to use PowerShell, you need to specify -Command as well.

- run: | # (1)!\n    python --version | Select-String \"3.11\"\n  shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
  1. everything here will be run inside of the pixi environment

How does it work under the hood?

Under the hood, the shell: xyz {0} option is implemented by creating a temporary script file and calling xyz with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0} directly but instead have to use shell: pixi run bash {0}. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell in the documentation. See the official documentation and ADR 0277 for more information about how the shell: input works in GitHub Actions.

"},{"location":"advanced/github_actions/#one-off-shell-wrapper-using-pixi-exec","title":"One-off shell wrapper using pixi exec","text":"

With pixi exec, you can also run a one-off command inside a temporary pixi environment.

- run: | # (1)!\n    zstd --version\n  shell: pixi exec --spec zstd -- bash -e {0}\n
  1. everything here will be run inside of the temporary pixi environment
- run: | # (1)!\n    import ruamel.yaml\n    # ...\n  shell: pixi exec --spec python=3.11.* --spec ruamel.yaml -- python {0}\n
  1. everything here will be run inside of the temporary pixi environment

See here for more information about pixi exec.

"},{"location":"advanced/github_actions/#environment-activation","title":"Environment activation","text":"

Instead of using a custom shell wrapper, you can also make all pixi-installed binaries available to subsequent steps by \"activating\" the installed environment in the currently running job. To this end, setup-pixi adds all environment variables set when executing pixi run to $GITHUB_ENV and, similarly, adds all path modifications to $GITHUB_PATH. As a result, all installed binaries can be accessed without having to call pixi run.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    activate-environment: true\n

If you are installing multiple environments, you will need to specify the name of the environment that you want to be activated.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    environments: >-\n      py311\n      py312\n    activate-environment: py311\n

Activating an environment may be more useful than using a custom shell wrapper as it allows non-shell based steps to access binaries on the path. However, be aware that this option augments the environment of your job.

"},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen and --locked","text":"

You can specify whether setup-pixi should run pixi install --frozen or pixi install --locked depending on the frozen or the locked input argument. See the official documentation for more information about the --frozen and --locked flags.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    locked: true\n    # or\n    frozen: true\n

If you don't specify anything, the default behavior is to run pixi install --locked if a pixi.lock file is present and pixi install otherwise.

"},{"location":"advanced/github_actions/#debugging","title":"Debugging","text":"

There are two types of debug logging that you can enable.

"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"

The first one is the debug logging of the action itself. This can be enabled by re-running the action in debug mode:

Debug logging documentation

For more information about debug logging in GitHub Actions, see the official documentation.

"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"

The second type is the debug logging of the pixi executable. This can be specified by setting the log-level input.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    log-level: vvv # (1)!\n
  1. One of q, default, v, vv, or vvv.

If nothing is specified, log-level will default to default or vv depending on whether debug logging is enabled for the action.

"},{"location":"advanced/github_actions/#self-hosted-runners","title":"Self-hosted runners","text":"

On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).

If you set post-cleanup to true, the action will delete the following files:

If nothing is specified, post-cleanup will default to true.

On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi to do this.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    post-cleanup: true\n    pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
  1. ${{ runner.temp }}\\Scripts\\pixi.exe on Windows

You can also use a preinstalled local version of pixi on the runner by not setting any of the pixi-version, pixi-url or pixi-bin-path inputs. This action will then try to find a local version of pixi in the runner's PATH.

"},{"location":"advanced/github_actions/#using-the-pyprojecttoml-as-a-manifest-file-for-pixi","title":"Using the pyproject.toml as a manifest file for pixi.","text":"

setup-pixi will automatically pick up the pyproject.toml if it contains a [tool.pixi.project] section and no pixi.toml. This can be overwritten by setting the manifest-path input argument.

- uses: prefix-dev/setup-pixi@v0.8.0\n  with:\n    manifest-path: pyproject.toml\n
"},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"

If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi repository.

"},{"location":"advanced/production_deployment/","title":"Bringing pixi to production","text":"

You can bring pixi projects into production either by containerizing them using tools like Docker or by using quantco/pixi-pack.

@pavelzw from QuantCo wrote a blog post about bringing pixi to production. You can read it here.

"},{"location":"advanced/production_deployment/#docker","title":"Docker","text":"

We provide a simple Docker image at pixi-docker that contains the pixi executable on top of different base images.

The images are available on ghcr.io/prefix-dev/pixi.

There are different tags for different base images available:

All tags

For all tags, take a look at the build script.

"},{"location":"advanced/production_deployment/#example-usage","title":"Example usage","text":"

The following example uses the pixi docker image as a base image for a multi-stage build. It also makes use of pixi shell-hook to not rely on pixi being installed in the production container.

More examples

For more examples, take a look at pavelzw/pixi-docker-example.

FROM ghcr.io/prefix-dev/pixi:0.32.1 AS build\n\n# copy source code, pixi.toml and pixi.lock to the container\nWORKDIR /app\nCOPY . .\n# install dependencies to `/app/.pixi/envs/prod`\n# use `--locked` to ensure the lockfile is up to date with pixi.toml\nRUN pixi install --locked -e prod\n# create the shell-hook bash script to activate the environment\nRUN pixi shell-hook -e prod -s bash > /shell-hook\nRUN echo \"#!/bin/bash\" > /app/entrypoint.sh\nRUN cat /shell-hook >> /app/entrypoint.sh\n# extend the shell-hook script to run the command passed to the container\nRUN echo 'exec \"$@\"' >> /app/entrypoint.sh\n\nFROM ubuntu:24.04 AS production\nWORKDIR /app\n# only copy the production environment into prod container\n# please note that the \"prefix\" (path) needs to stay the same as in the build container\nCOPY --from=build /app/.pixi/envs/prod /app/.pixi/envs/prod\nCOPY --from=build --chmod=0755 /app/entrypoint.sh /app/entrypoint.sh\n# copy your project code into the container as well\nCOPY ./my_project /app/my_project\n\nEXPOSE 8000\nENTRYPOINT [ \"/app/entrypoint.sh\" ]\n# run your app inside the pixi environment\nCMD [ \"uvicorn\", \"my_project:app\", \"--host\", \"0.0.0.0\" ]\n
"},{"location":"advanced/production_deployment/#pixi-pack","title":"pixi-pack","text":"

pixi-pack is a simple tool that takes a pixi environment and packs it into a compressed archive that can be shipped to the target machine.

It can be installed via

pixi global install pixi-pack\n

Or by downloading our pre-built binaries from the releases page.

Instead of installing pixi-pack globally, you can also use pixi exec to run pixi-pack in a temporary environment:

pixi exec pixi-pack pack\npixi exec pixi-pack unpack environment.tar\n

You can pack an environment with

pixi-pack pack --manifest-file pixi.toml --environment prod --platform linux-64\n

This will create an environment.tar file that contains all conda packages required to create the environment.

# environment.tar\n| pixi-pack.json\n| environment.yml\n| channel\n|    \u251c\u2500\u2500 noarch\n|    |    \u251c\u2500\u2500 tzdata-2024a-h0c530f3_0.conda\n|    |    \u251c\u2500\u2500 ...\n|    |    \u2514\u2500\u2500 repodata.json\n|    \u2514\u2500\u2500 linux-64\n|         \u251c\u2500\u2500 ca-certificates-2024.2.2-hbcca054_0.conda\n|         \u251c\u2500\u2500 ...\n|         \u2514\u2500\u2500 repodata.json\n
"},{"location":"advanced/production_deployment/#unpacking-an-environment","title":"Unpacking an environment","text":"

With pixi-pack unpack environment.tar, you can unpack the environment on your target system. This will create a new conda environment in ./env that contains all packages specified in your pixi.toml. It also creates an activate.sh (or activate.bat on Windows) file that lets you activate the environment without needing to have conda or micromamba installed.

"},{"location":"advanced/production_deployment/#cross-platform-packs","title":"Cross-platform packs","text":"

Since pixi-pack just downloads the .conda and .tar.bz2 files from the conda repositories, you can trivially create packs for different platforms.

pixi-pack pack --platform win-64\n

You can only unpack a pack on a system that has the same platform as the pack was created for.

"},{"location":"advanced/production_deployment/#inject-additional-packages","title":"Inject additional packages","text":"

You can inject additional packages into the environment that are not specified in pixi.lock by using the --inject flag:

pixi-pack pack --inject local-package-1.0.0-hbefa133_0.conda --manifest-pack pixi.toml\n

This can be particularly useful if you build the project itself and want to include the built package in the environment but still want to use pixi.lock from the project.

"},{"location":"advanced/production_deployment/#unpacking-without-pixi-pack","title":"Unpacking without pixi-pack","text":"

If you don't have pixi-pack available on your target system, you can still install the environment if you have conda or micromamba available. Just unarchive the environment.tar, and you will have a local channel on your system where all necessary packages are available. Next to this local channel, you will find an environment.yml file that contains the environment specification. You can then install the environment using conda or micromamba:

tar -xvf environment.tar\nmicromamba create -p ./env --file environment.yml\n# or\nconda env create -p ./env --file environment.yml\n

The environment.yml and repodata.json files exist only for this use case; pixi-pack unpack does not use them.

"},{"location":"advanced/pyproject_toml/","title":"pyproject.toml in pixi","text":"

We support using pyproject.toml as the manifest file in pixi. This allows the user to keep all configuration in one file. The pyproject.toml file is a standard for Python projects. We don't advise using pyproject.toml for anything other than Python projects; pixi.toml is better suited for other types of projects.

"},{"location":"advanced/pyproject_toml/#initial-setup-of-the-pyprojecttoml-file","title":"Initial setup of the pyproject.toml file","text":"

When you already have a pyproject.toml file in your project, you can run pixi init in that folder. Pixi will automatically add the required pixi configuration to the file.

If you do not have an existing pyproject.toml file, you can run pixi init --format pyproject in your project folder. In that case, pixi will create a pyproject.toml manifest from scratch with some sane defaults.

"},{"location":"advanced/pyproject_toml/#python-dependency","title":"Python dependency","text":"

The pyproject.toml file supports the requires-python field. Pixi understands that field and automatically adds the version to the dependencies.

This is an example of a pyproject.toml file with the requires-python field, which will be used as the python dependency:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n

Which is equivalent to:

equivalent pixi.toml
[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[dependencies]\npython = \">=3.9\"\n
"},{"location":"advanced/pyproject_toml/#dependency-section","title":"Dependency section","text":"

The pyproject.toml file supports the dependencies field. Pixi understands that field and automatically adds the dependencies to the project as [pypi-dependencies].

This is an example of a pyproject.toml file with the dependencies field:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n

Which is equivalent to:

equivalent pixi.toml
[project]\nname = \"my_project\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[pypi-dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n\n[dependencies]\npython = \">=3.9\"\n

You can override these with conda dependencies by adding them to the [tool.pixi.dependencies] table:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\nnumpy = \"*\"\npandas = \"*\"\nmatplotlib = \"*\"\n

This results in the conda dependencies being installed and the pypi dependencies being ignored, as pixi prefers conda dependencies over pypi dependencies.

"},{"location":"advanced/pyproject_toml/#optional-dependencies","title":"Optional dependencies","text":"

If your python project includes groups of optional dependencies, pixi will automatically interpret them as pixi features of the same name with the associated pypi-dependencies.

You can add them to pixi environments manually, or use pixi init to set up the project, which will create one environment per feature. Self-references to other groups of optional dependencies are also handled.

For instance, imagine you have a project folder with a pyproject.toml file similar to:

[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n

Running pixi init in that project folder will transform the pyproject.toml file into:

[project]\nname = \"my_project\"\ndependencies = [\"package1\"]\n\n[project.optional-dependencies]\ntest = [\"pytest\"]\nall = [\"package2\",\"my_project[test]\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # if executed on linux\n\n[tool.pixi.environments]\ndefault = {features = [], solve-group = \"default\"}\ntest = {features = [\"test\"], solve-group = \"default\"}\nall = {features = [\"all\", \"test\"], solve-group = \"default\"}\n

In this example, pixi will create three environments: default, test, and all.

All environments will be solved together, as indicated by the common solve-group, and added to the lock file. You can edit the [tool.pixi.environments] section manually to adapt it to your use case (e.g. if you do not need a particular environment).
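The self-reference handling mentioned above ("my_project[test]" pulling in the test group's packages) can be sketched roughly as follows; this is an illustrative model with made-up names, not pixi's actual resolution code:

```python
# Illustrative sketch (not pixi's implementation): expand optional-dependency
# groups, resolving self-references such as "my_project[test]" into the
# packages of the referenced group.
def expand_group(groups: dict, project: str, name: str) -> list:
    packages = []
    for dep in groups[name]:
        if dep.startswith(f"{project}["):          # self-reference, e.g. my_project[test]
            referenced = dep[len(project) + 1:-1]  # extract the group name "test"
            packages += expand_group(groups, project, referenced)
        else:
            packages.append(dep)
    return packages

groups = {"test": ["pytest"], "all": ["package2", "my_project[test]"]}
print(expand_group(groups, "my_project", "all"))  # ['package2', 'pytest']
```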

"},{"location":"advanced/pyproject_toml/#example","title":"Example","text":"

As the pyproject.toml file supports the full pixi spec with [tool.pixi] prepended, an example would look like this:

pyproject.toml
[project]\nname = \"my_project\"\nrequires-python = \">=3.9\"\ndependencies = [\n    \"numpy\",\n    \"pandas\",\n    \"matplotlib\",\n    \"ruff\",\n]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tool.pixi.dependencies]\ncompilers = \"*\"\ncmake = \"*\"\n\n[tool.pixi.tasks]\nstart = \"python my_project/main.py\"\nlint = \"ruff lint\"\n\n[tool.pixi.system-requirements]\ncuda = \"11.0\"\n\n[tool.pixi.feature.test.dependencies]\npytest = \"*\"\n\n[tool.pixi.feature.test.tasks]\ntest = \"pytest\"\n\n[tool.pixi.environments]\ntest = [\"test\"]\n
"},{"location":"advanced/pyproject_toml/#build-system-section","title":"Build-system section","text":"

The pyproject.toml file normally contains a [build-system] section. Pixi will use this section to build and install the project if it is added as a pypi path dependency.

If the pyproject.toml file does not contain any [build-system] section, pixi will fall back to uv's default, which is equivalent to the below:

pyproject.toml
[build-system]\nrequires = [\"setuptools >= 40.8.0\"]\nbuild-backend = \"setuptools.build_meta:__legacy__\"\n

Including a [build-system] section is highly recommended. If you are not sure of the build-backend you want to use, including the [build-system] section below in your pyproject.toml is a good starting point. pixi init --format pyproject defaults to hatchling. The advantages of hatchling over setuptools are outlined on its website.

pyproject.toml
[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n
"},{"location":"advanced/updates_github_actions/","title":"Update lockfiles with GitHub Actions","text":"

You can leverage GitHub Actions in combination with pavelzw/pixi-diff-to-markdown to automatically update your lockfiles similar to dependabot or renovate in other ecosystems.

Dependabot/Renovate support for pixi

You can track native Dependabot support for pixi in dependabot/dependabot-core #2227 and for Renovate in renovatebot/renovate #2213.

"},{"location":"advanced/updates_github_actions/#how-to-use","title":"How to use","text":"

To get started, create a new GitHub Actions workflow file in your repository.

.github/workflows/update-lockfiles.yml
name: Update lockfiles\n\npermissions: # (1)!\n  contents: write\n  pull-requests: write\n\non:\n  workflow_dispatch:\n  schedule:\n    - cron: 0 5 1 * * # (2)!\n\njobs:\n  pixi-update:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v4\n      - name: Set up pixi\n        uses: prefix-dev/setup-pixi@v0.8.1\n        with:\n          run-install: false\n      - name: Update lockfiles\n        run: |\n          set -o pipefail\n          pixi update --json | pixi exec pixi-diff-to-markdown >> diff.md\n      - name: Create pull request\n        uses: peter-evans/create-pull-request@v6\n        with:\n          token: ${{ secrets.GITHUB_TOKEN }}\n          commit-message: Update pixi lockfile\n          title: Update pixi lockfile\n          body-path: diff.md\n          branch: update-pixi\n          base: main\n          labels: pixi\n          delete-branch: true\n          add-paths: pixi.lock\n
  1. Needed for peter-evans/create-pull-request
  2. Runs at 05:00, on day 1 of the month

In order for this workflow to work, you need to set \"Allow GitHub Actions to create and approve pull requests\" to true in your repository settings (in \"Actions\" -> \"General\").

Tip

If you don't have any pypi-dependencies, you can use pixi update --json --no-install to speed up diff generation.

"},{"location":"advanced/updates_github_actions/#triggering-ci-in-automated-prs","title":"Triggering CI in automated PRs","text":"

In order to prevent accidental recursive GitHub Workflow runs, GitHub decided to not trigger any workflows on automated PRs when using the default GITHUB_TOKEN. There are a couple of ways to work around this limitation. You can find excellent documentation for this in peter-evans/create-pull-request, see here.

"},{"location":"advanced/updates_github_actions/#customizing-the-summary","title":"Customizing the summary","text":"

You can customize the summary by either using command-line arguments of pixi-diff-to-markdown or by specifying the configuration in pixi.toml under [tool.pixi-diff-to-markdown]. See the pixi-diff-to-markdown documentation or run pixi-diff-to-markdown --help for more information.

"},{"location":"advanced/updates_github_actions/#using-reusable-workflows","title":"Using reusable workflows","text":"

If you want to use the same workflow in multiple repositories in your GitHub organization, you can create a reusable workflow. You can find more information in the GitHub documentation.

"},{"location":"design_proposals/pixi_global_manifest/","title":"Pixi Global Manifest","text":"

Feedback wanted

This document is work in progress, and community feedback is greatly appreciated. Please share your thoughts at our GitHub discussion.

"},{"location":"design_proposals/pixi_global_manifest/#motivation","title":"Motivation","text":"

pixi global is currently limited to imperatively managing CLI packages. The next iteration of this feature should fulfill the following needs:

"},{"location":"design_proposals/pixi_global_manifest/#design-considerations","title":"Design Considerations","text":"

There are a few things we wanted to keep in mind in the design:

  1. User-friendliness: Pixi is a user-focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
  2. Keep it simple: The CLI should be all you strictly need to interact with global environments.
  3. Unsurprising: Simple commands should behave similarly to traditional package managers.
  4. Human Readable: Any file created by this feature should be human-readable and modifiable.
"},{"location":"design_proposals/pixi_global_manifest/#manifest","title":"Manifest","text":"

The global environments and exposed executables will be managed by a human-readable manifest. This manifest will stick to conventions set by pixi.toml where possible. Among other things, it will be written in the TOML format, be named pixi-global.toml, and be placed at ~/.pixi/manifests/pixi-global.toml. The motivation for the location is discussed further below.

pixi-global.toml
# The name of the environment is `python`\n[envs.python]\nchannels = [\"conda-forge\"]\n# optional, defaults to your current OS\nplatform = \"osx-64\"\n# It will expose python, python3 and python3.11, but not pip\n[envs.python.dependencies]\npython = \"3.11.*\"\npip = \"*\"\n\n[envs.python.exposed]\npython = \"python\"\npython3 = \"python3\"\n\"python3.11\" = \"python3.11\"\n\n# The name of the environment is `python3-10`\n[envs.python3-10]\nchannels = [\"https://fast.prefix.dev/conda-forge\"]\n# It will expose python3.10\n[envs.python3-10.dependencies]\npython = \"3.10.*\"\n\n[envs.python3-10.exposed]\n\"python3.10\" = \"python\"\n
"},{"location":"design_proposals/pixi_global_manifest/#cli","title":"CLI","text":"

Install one or more packages PACKAGE and expose their executables. If --environment has been given, all packages will be installed in the same environment. --expose can be given if --environment is given as well, or if only a single PACKAGE will be installed. The syntax for MAPPING is exposed_name=executable_name, for example python3.10=python. --platform sets the platform of the environment to PLATFORM. Multiple channels can be specified by using --channel multiple times. By default, if no channel is provided, the default-channels key in the pixi configuration is used, which in turn defaults to \"conda-forge\".

pixi global install [--expose MAPPING] [--environment ENV] [--platform PLATFORM] [--no-activation] [--channel CHANNEL]... PACKAGE...\n
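For illustration, the MAPPING syntax could be parsed like this (a sketch with a made-up helper name, not pixi's actual parser):

```python
# Illustrative sketch: parse the MAPPING syntax exposed_name=executable_name
# used by --expose, e.g. "python3.10=python".
def parse_mapping(mapping: str) -> tuple[str, str]:
    exposed_name, sep, executable_name = mapping.partition("=")
    if not sep or not exposed_name or not executable_name:
        raise ValueError(f"invalid mapping: {mapping!r}")
    return exposed_name, executable_name

print(parse_mapping("python3.10=python"))  # ('python3.10', 'python')
```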

Remove environments ENV.

pixi global uninstall <ENV>...\n

Update PACKAGE if --package is given. If not, all packages in environments ENV will be updated. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each new binary individually. --assume-yes will answer yes to every question that would otherwise be asked interactively.

pixi global update [--package PACKAGE] [--assume-yes] <ENV>...\n

Updates all packages in all environments. If the update leads to executables being removed, it will offer to remove the mappings. If the user declines, the update process will stop. If the update leads to executables being added, it will offer to expose each new binary individually. --assume-yes will answer yes to every question that would otherwise be asked interactively.

pixi global update-all [--assume-yes]\n

Add one or more packages PACKAGE into an existing environment ENV. If environment ENV does not exist, it will return with an error. Without --expose, no binary will be exposed. If you don't specify a spec like python=3.8.*, the spec will be unconstrained (*). The syntax for MAPPING is exposed_name=executable_name, for example python3.10=python.

pixi global add --environment ENV [--expose MAPPING] <PACKAGE>...\n

Remove package PACKAGE from environment ENV. If it was the last package, remove the whole environment and print that information in the console. If this leads to executables being removed, it will offer to remove the mappings. If the user declines, the removal process will stop.

pixi global remove --environment ENV PACKAGE\n

Add one or more MAPPING for environment ENV which describe which executables are exposed. The syntax for MAPPING is exposed_name=executable_name, so for example python3.10=python.

pixi global expose add --environment ENV  <MAPPING>...\n

Remove one or more exposed BINARY from environment ENV.

pixi global expose remove --environment ENV <BINARY>...\n

Ensure that the environments on the machine reflect the state in the manifest. The manifest is the single source of truth. Only if there's no manifest, will the data from existing environments be used to create a manifest. pixi global sync is implied by most other pixi global commands.

pixi global sync\n

List all environments, their specs, and exposed executables.

pixi global list\n

Set the channels CHANNEL for a certain environment ENV in the pixi global manifest.

pixi global channel set --environment ENV <CHANNEL>...\n

Set the platform PLATFORM for a certain environment ENV in the pixi global manifest.

pixi global platform set --environment ENV PLATFORM\n

"},{"location":"design_proposals/pixi_global_manifest/#simple-workflow","title":"Simple workflow","text":"

Create environment python, install package python=3.10.* and expose all executables of that package

pixi global install python=3.10.*\n

Update all packages in environment python

pixi global update python\n

Remove environment python

pixi global uninstall python\n

Create environments python and pip, install the corresponding packages, and expose all executables of those packages.

pixi global install python pip\n

Remove environments python and pip

pixi global uninstall python pip\n

Create environment python-pip, install python and pip in the same environment, and expose all executables of both packages.

pixi global install --environment python-pip python pip\n

"},{"location":"design_proposals/pixi_global_manifest/#adding-dependencies","title":"Adding dependencies","text":"

Create environment python, install package python, and expose all executables of that package. Then add package hypercorn to environment python without exposing its executables.

pixi global install python\npixi global add --environment python hypercorn\n

Update package cryptography (a dependency of hypercorn) to 43.0.0 in environment python

pixi global update --package cryptography=43.0.0 python\n

Then remove hypercorn again.

pixi global remove --environment python hypercorn\n

"},{"location":"design_proposals/pixi_global_manifest/#specifying-which-executables-to-expose","title":"Specifying which executables to expose","text":"

Make a new environment python3-10 with package python=3.10 and expose the python executable as python3.10.

pixi global install --environment python3-10 --expose \"python3.10=python\" python=3.10\n

Now python3.10 is available.

Run the following in order to expose python from environment python3-10 as python3-10 instead.

pixi global expose remove --environment python3-10 python3.10\npixi global expose add --environment python3-10 \"python3-10=python\"\n

Now python3-10 is available, but python3.10 isn't anymore.

"},{"location":"design_proposals/pixi_global_manifest/#syncing","title":"Syncing","text":"

Most pixi global subcommands imply a pixi global sync.

Starting from a clean machine, running the following creates the manifest and ~/.pixi/envs/python.

pixi global install python\n

Deleting ~/.pixi/envs and syncing should add environment python again, as described in the manifest.

rm -rf ~/.pixi/envs\npixi global sync\n

If there's no manifest but there are existing environments, pixi will create a manifest that matches your current environments. It is still to be decided whether the user should be asked if they want an empty manifest instead, or whether the data should always be imported from the environments.

rm <manifest>\npixi global sync\n

If we remove the python environment from the manifest, running pixi global sync will also remove the ~/.pixi/envs/python environment from the file system.

vim <manifest>\npixi global sync\n

"},{"location":"design_proposals/pixi_global_manifest/#open-questions","title":"Open Questions","text":""},{"location":"design_proposals/pixi_global_manifest/#should-we-version-the-manifest","title":"Should we version the manifest?","text":"

Something like:

[manifest]\nversion = 1\n

We still have to figure out which existing programs do something similar and how they benefit from it.

"},{"location":"design_proposals/pixi_global_manifest/#multiple-manifests","title":"Multiple manifests","text":"

We could go for one default manifest, but also parse other manifests in the same directory. The only requirement to be parsed as a manifest is a .toml extension. In order to modify those with the CLI, one would have to add an option --manifest to select the correct one.

It is unclear whether the first implementation already needs to support this. At the very least, we should put the manifest into its own folder, like ~/.pixi/global/manifests/pixi-global.toml.

"},{"location":"design_proposals/pixi_global_manifest/#discovery-via-config-key","title":"Discovery via config key","text":"

In order to make it easier to manage manifests in version control, we could allow setting the manifest path via a key in the pixi configuration.

config.toml
global_manifests = \"/path/to/your/manifests\"\n
"},{"location":"design_proposals/pixi_global_manifest/#no-activation","title":"No activation","text":"

The current pixi global install features --no-activation. When this flag is set, CONDA_PREFIX and PATH will not be set when running the exposed executable. This is useful when installing Python package managers or shells.

Assuming that this needs to be set per mapping, one way to expose this functionality would be to allow the following:

[envs.pip.exposed]\npip = { executable=\"pip\", activation=false }\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":"

The cpp-sdl example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/cpp-sdl\n

Run the start command

pixi run start\n

Using the depends-on feature, you only need to run the start task, but under the hood it runs the following tasks:

# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
"},{"location":"examples/opencv/","title":"Opencv example","text":"

The opencv example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"

Run the start command to start the face detection algorithm.

pixi run start\n

The screen that starts should look like this:

Check out webcame_capture.py to see how we detect a face.

"},{"location":"examples/opencv/#camera-calibration","title":"Camera Calibration","text":"

Next to face recognition, a camera calibration example is also included.

You'll need a checkerboard for this to work. Print this:

Then run

pixi run calibrate\n

To take a picture for calibration, press SPACE. Do this approximately 10 times with the checkerboard in view of the camera.

After that, press ESC, which will start the calibration.

When the calibration is done, the camera will be used again to find the distance to the checkerboard.

"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"

The nav2 example is located in the pixi repository.

git clone https://github.com/prefix-dev/pixi.git\n

Move to the example folder

cd pixi/examples/ros2-nav2\n

Run the start command

pixi run start\n
"},{"location":"features/advanced_tasks/","title":"Advanced tasks","text":"

When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.

Here are some quick examples

pixi.toml
[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n    \"cmake\",\n    # Use the cross-platform Ninja generator\n    \"-G\",\n    \"Ninja\",\n    # The source is in the root directory\n    \"-S\",\n    \".\",\n    # We wanna build in the .build directory\n    \"-B\",\n    \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends-on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"features/advanced_tasks/#depends-on","title":"Depends on","text":"

Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.

An obvious example is compiling before running an application.

Check out our cpp_sdl example for a running example. In that package, some tasks depend on each other, so we can ensure that when you run pixi run start, everything is set up as expected.

pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n

Results in the following lines added to the pixi.toml

pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends-on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends-on = [\"build\"] }\n
pixi run start\n

The tasks will be executed one after the other:

If one of the commands fails (exits with a non-zero code), execution stops and the next task will not be started.

With this logic, you can also create aliases as you don't have to specify any command in a task.

pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n

Results in the following pixi.toml.

pixi.toml
fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends-on = [\"fmt\", \"lint\"] }\n

Now run both tools with one command.

pixi run style\n
"},{"location":"features/advanced_tasks/#working-directory","title":"Working directory","text":"

Pixi tasks support the definition of a working directory.

cwd\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml file is located.

Consider a pixi project structured as follows:

\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n    \u2514\u2500\u2500 bar.py\n

To add a task to run the bar.py file, use:

pixi task add bar \"python bar.py\" --cwd scripts\n

This will add the following line to the manifest file:

pixi.toml
[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"features/advanced_tasks/#caching","title":"Caching","text":"

When you specify inputs and/or outputs to a task, pixi will reuse the result of the task.

For the cache, pixi checks that the following are true:

If all of these conditions are met, pixi will not run the task again and instead use the existing result.

Inputs and outputs can be specified as globs, which will be expanded to all matching files.

pixi.toml
[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n

Note: if you want to debug the globs you can use the --verbose flag to see which files are selected.

# shows info logs of all files that were selected by the globs\npixi run -v start\n
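Conceptually, this kind of input-based caching can be sketched as hashing the task command plus the contents of all glob-matched input files, and rerunning only when the fingerprint changes (an illustrative model, not pixi's actual cache implementation):

```python
# Illustrative sketch of glob-based task caching (not pixi's real mechanism):
# fingerprint the command plus the contents of all matched input files.
import glob
import hashlib

def task_fingerprint(cmd: str, inputs: list[str]) -> str:
    h = hashlib.sha256(cmd.encode())
    for pattern in inputs:
        # Sort so the fingerprint is independent of filesystem ordering.
        for path in sorted(glob.glob(pattern, recursive=True)):
            h.update(path.encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()

# The task is rerun only when the fingerprint differs from the stored one.
fingerprint = task_fingerprint("python main.py", ["*.py"])
```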
"},{"location":"features/advanced_tasks/#environment-variables","title":"Environment variables","text":"

You can set environment variables for a task. These are seen as \"default\" values for the variables as you can overwrite them from the shell.

pixi.toml

[tasks]\necho = { cmd = \"echo $ARGUMENT\", env = { ARGUMENT = \"hello\" } }\n
If you run pixi run echo it will output hello. When you set the environment variable ARGUMENT before running the task, it will use that value instead.

ARGUMENT=world pixi run echo\n\u2728 Pixi task (echo in default): echo $ARGUMENT\nworld\n

These variables are not shared between tasks, so you need to define them for every task in which you want to use them.
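The \"default value\" behaviour described above amounts to letting the calling shell override the values from the manifest. A minimal sketch (the helper name is made up):

```python
# Illustrative sketch: env entries from pixi.toml act as defaults, and a
# variable already set in the calling shell takes precedence.
def resolve_env(task_env: dict[str, str], shell_env: dict[str, str]) -> dict[str, str]:
    return {key: shell_env.get(key, default) for key, default in task_env.items()}

print(resolve_env({"ARGUMENT": "hello"}, {}))                     # {'ARGUMENT': 'hello'}
print(resolve_env({"ARGUMENT": "hello"}, {"ARGUMENT": "world"}))  # {'ARGUMENT': 'world'}
```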

Extend instead of overwrite

If you reference the variable itself in its value, the existing value is substituted in, so you extend the variable rather than overwrite it. For example, extending PATH: pixi.toml

[tasks]\necho = { cmd = \"echo $PATH\", env = { PATH = \"/tmp/path:$PATH\" } }\n
This will output /tmp/path:/usr/bin:/bin instead of the original /usr/bin:/bin.

"},{"location":"features/advanced_tasks/#clean-environment","title":"Clean environment","text":"

You can make sure the environment of a task is \"pixi only\". In that case, pixi will only include the minimal environment variables required for your platform to run the command in. The environment will contain all variables set by the conda environment, like \"CONDA_PREFIX\". It will, however, also include some default values from the shell, like: \"DISPLAY\", \"LC_ALL\", \"LC_TIME\", \"LC_NUMERIC\", \"LC_MEASUREMENT\", \"SHELL\", \"USER\", \"USERNAME\", \"LOGNAME\", \"HOME\", \"HOSTNAME\", \"TMPDIR\", \"XPC_SERVICE_NAME\", \"XPC_FLAGS\".

[tasks]\nclean_command = { cmd = \"python run_in_isolated_env.py\", clean-env = true}\n
This setting can also be set from the command line with pixi run --clean-env TASK_NAME.

clean-env not supported on Windows

On Windows it's hard to create a \"clean environment\", as conda-forge doesn't ship Windows compilers and Windows needs a lot of base variables. This makes the feature not worth implementing, as the number of edge cases would make it unusable.

"},{"location":"features/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"

To support the different operating systems (Windows, macOS, and Linux), pixi integrates a shell that can run on all of them: deno_task_shell. The task shell is a limited implementation of a Bourne shell interface.

"},{"location":"features/advanced_tasks/#built-in-commands","title":"Built-in commands","text":"

Besides running actual executables like ./myprogram, cmake, or python, the shell has some built-in commands.

"},{"location":"features/advanced_tasks/#syntax","title":"Syntax","text":"

More info in deno_task_shell documentation.

"},{"location":"features/environment/","title":"Environments","text":"

Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.

"},{"location":"features/environment/#structure","title":"Structure","text":"

A pixi environment is located in the .pixi/envs directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.

If you look at the .pixi/envs directory, you will see a directory for each environment. The default environment is the one that is normally used; if you specify a custom environment, the name you specified will be used.

.pixi\n\u2514\u2500\u2500 envs\n    \u251c\u2500\u2500 cuda\n    \u2502   \u251c\u2500\u2500 bin\n    \u2502   \u251c\u2500\u2500 conda-meta\n    \u2502   \u251c\u2500\u2500 etc\n    \u2502   \u251c\u2500\u2500 include\n    \u2502   \u251c\u2500\u2500 lib\n    \u2502   ...\n    \u2514\u2500\u2500 default\n        \u251c\u2500\u2500 bin\n        \u251c\u2500\u2500 conda-meta\n        \u251c\u2500\u2500 etc\n        \u251c\u2500\u2500 include\n        \u251c\u2500\u2500 lib\n        ...\n

These directories are conda environments, and you can use them as such, but you cannot manually edit them; changes should always go through the pixi.toml. Pixi will always make sure the environment is in sync with the pixi.lock file. If this is not the case, all the commands that use the environment will automatically update it, e.g. pixi run and pixi shell.

"},{"location":"features/environment/#cleaning-up","title":"Cleaning up","text":"

If you want to clean up the environments, you can simply delete the .pixi/envs directory, and pixi will recreate the environments when needed.

# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"features/environment/#activation","title":"Activation","text":"

An environment is nothing more than a set of files installed into a certain location that somewhat mimics a global system install. You need to activate the environment to use it. In the simplest sense, that means adding the bin directory of the environment to the PATH variable. But there is more to it in a conda environment, as it also sets some environment variables.
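The "simplest sense" described above can be sketched in a few lines. This is an illustration only, not pixi's implementation; real activation additionally sets conda variables and sources package-provided activation scripts, as shown later in this section.

```python
# Minimal sketch of activation in the simplest sense:
# prepend the environment's bin directory to PATH.
import os

def activate_path(env_dir: str, path: str) -> str:
    """Return a PATH string with the environment's bin directory prepended."""
    return os.path.join(env_dir, "bin") + os.pathsep + path
```

For example, activate_path("/proj/.pixi/envs/default", os.environ["PATH"]) would make executables from that environment win over system ones during lookup.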

To do the activation we have multiple options:

The run command is special, as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.

Using pixi shell-hook, you would get output like the following:

export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n

It sets PATH and some more environment variables. More importantly, it also runs activation scripts provided by the installed packages. An example of this is the libglib_activate.sh script. Thus, just adding the bin directory to the PATH is not enough.

"},{"location":"features/environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate-like activation","text":"

If you prefer to use the traditional conda activate-like activation, you could use the pixi shell-hook command.

$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n

Warning

Using the traditional conda activate-like activation is discouraged, as deactivating the environment is not really possible. Use pixi shell instead.

"},{"location":"features/environment/#using-pixi-with-direnv","title":"Using pixi with direnv","text":"Installing direnv

Of course, you can use pixi to install direnv globally. We recommend running

pixi global install direnv

to install the latest version of direnv on your computer.

This allows you to use pixi in combination with direnv. Enter the following into your .envrc file:

.envrc
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
  1. This ensures that every time your pixi.lock changes, direnv invokes the shell-hook again.
  2. This installs if needed, and activates the environment. direnv ensures that the environment is deactivated when you leave the directory.
$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"features/environment/#environment-variables","title":"Environment variables","text":"

The following environment variables are set by pixi, when using the pixi run, pixi shell, or pixi shell-hook command:

Note

Even though these are environment variables, they cannot be overridden. E.g. you cannot change the root of the project by setting PIXI_PROJECT_ROOT in the environment.

"},{"location":"features/environment/#solving-environments","title":"Solving environments","text":"

When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml and will put the output of the solve step into the pixi.lock file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:

Pixi solves both the conda and PyPI dependencies, where the PyPI dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. The solvers are split between the rattler and uv libraries, which do the heavy lifting of the solving process, executed by our custom SAT solver: resolvo. resolvo is able to solve multiple ecosystems, like conda and PyPI. It implements a lazy solving process for PyPI packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.

For the [pypi-dependencies], uv implements sdist building to retrieve the metadata of the packages, and wheel building to install them. For this build step, pixi requires python to be installed first via the (conda) [dependencies] section of the pixi.toml file. This will always be slower than pure conda solves, so for the best pixi experience you should stay within the [dependencies] section of the pixi.toml file.

"},{"location":"features/environment/#caching","title":"Caching","text":"

Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools.

Normally the location would be the following platform-specific default cache folder:

This location is configurable by setting the PIXI_CACHE_DIR or RATTLER_CACHE_DIR environment variable.
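As a sketch of that override (the cache path here is illustrative; pick any writable directory), you could set the variable in your shell profile before invoking pixi:

```shell
# Point pixi at a custom cache location; subsequent pixi commands
# will download and reuse packages there. Path is an example only.
export PIXI_CACHE_DIR="$HOME/.cache/custom-pixi"
echo "pixi cache dir: $PIXI_CACHE_DIR"
```

All pixi projects and globally installed tools on the machine would then share this cache directory.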

When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.

The cache contains multiple folders concerning different caches from within pixi.

"},{"location":"features/lockfile/","title":"The pixi.lock lock file","text":"

A lock file is the protector of the environments, and pixi is the key to unlock it.

"},{"location":"features/lockfile/#what-is-a-lock-file","title":"What is a lock file?","text":"

A lock file locks the environment in a specific state. Within pixi a lock file is a description of the packages in an environment. The lock file contains two definitions:

"},{"location":"features/lockfile/#why-a-lock-file","title":"Why a lock file","text":"

Pixi uses the lock file for the following reasons:

This gives you and your collaborators a way to really reproduce the environment you are working in. Using tools such as docker suddenly becomes much less necessary.

"},{"location":"features/lockfile/#when-is-a-lock-file-generated","title":"When is a lock file generated?","text":"

A lock file is generated when you install a package. More specifically, a lock file is generated from the solve step of the installation process. The solve will return a list of packages that are to be installed, and the lock file will be generated from this list. This diagram tries to explain the process:

graph TD\n    A[Install] --> B[Solve]\n    B --> C[Generate and write lock file]\n    C --> D[Install Packages]
"},{"location":"features/lockfile/#how-to-use-a-lock-file","title":"How to use a lock file","text":"

Do not edit the lock file

A lock file is a machine-only file and should not be edited by hand.

That said, the pixi.lock is human-readable, so it's easy to track the changes in the environment. We recommend you track the lock file in git or other version control systems. This will ensure that the environment is always reproducible and that you can always revert back to a working state, in case something goes wrong. The pixi.lock and the manifest file pixi.toml/pyproject.toml should always be in sync.

Running the following commands will check and automatically update the lock file if you changed any dependencies:

All the commands that support the interaction with the lock file also include some lock file usage options:

Syncing the lock file with the manifest file

The lock file is always matched with the whole configuration in the manifest file. This means that if you change the manifest file, the lock file will be updated.

flowchart TD\n    C[manifest] --> A[lockfile] --> B[environment]

"},{"location":"features/lockfile/#lockfile-satisfiability","title":"Lockfile satisfiability","text":"

The lock file is a description of the environment, and it should always be satisfiable. Satisfiable means that the given manifest file and the created environment are in sync with the lockfile. If the lock file is not satisfiable, pixi will generate a new lock file automatically.

Steps to check if the lock file is satisfiable:

If you want more details, check out the actual code, as this is a simplification of it.
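As the docs note, the real check lives in pixi's code; the following is only an illustrative simplification, assuming we model each environment's requirements and locked packages as sets of names. The lock file is satisfiable when every environment in the manifest has a locked package set covering its requirements.

```python
# Illustrative sketch only, not pixi's actual satisfiability algorithm.
def lockfile_satisfiable(manifest_envs: dict, locked_envs: dict) -> bool:
    """Return True if every manifest environment's requirements are
    covered by the corresponding locked package set."""
    for env, requirements in manifest_envs.items():
        locked = locked_envs.get(env)
        if locked is None:
            # Environment missing from the lock file: re-solve needed.
            return False
        if not requirements.issubset(locked):
            # A requirement is not present in the locked set.
            return False
    return True
```

When this check fails, pixi generates a new lock file automatically, as described above.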

"},{"location":"features/lockfile/#the-version-of-the-lock-file","title":"The version of the lock file","text":"

The lock file has a version number to ensure that it is compatible with the local version of pixi.

version: 4\n

Pixi is backward compatible with the lock file, but not forward compatible. This means that you can use an older lock file with a newer version of pixi, but not the other way around.

"},{"location":"features/lockfile/#your-lock-file-is-big","title":"Your lock file is big","text":"

The lock file can grow quite large, especially if you have a lot of packages installed. This is because the lock file contains all the information about the packages.

  1. We try to keep the lock file as small as possible.
  2. It's always smaller than a docker image.
  3. Downloading the lock file is always faster than downloading the incorrect packages.
"},{"location":"features/lockfile/#you-dont-need-a-lock-file-because","title":"You don't need a lock file because...","text":"

If you can not think of a case where you would benefit from a fast reproducible environment, then you don't need a lock file.

But take note of the following:

"},{"location":"features/lockfile/#removing-the-lock-file","title":"Removing the lock file","text":"

If you want to remove the lock file, you can simply delete it.

rm pixi.lock\n

This will remove the lock file, and the next time you run a command that requires the lock file, it will be generated again.

Note

This does remove the locked state of the environment, and the environment will be updated to the latest version of the packages.

"},{"location":"features/multi_environment/","title":"Multi Environment Support","text":""},{"location":"features/multi_environment/#motivating-example","title":"Motivating Example","text":"

There are multiple scenarios where multiple environments are useful.

This prepares pixi for use in large projects with multiple use-cases, multiple developers and different CI needs.

"},{"location":"features/multi_environment/#design-considerations","title":"Design Considerations","text":"

There are a few things we wanted to keep in mind in the design:

  1. User-friendliness: Pixi is a user-focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
  2. Keep it simple: Not understanding the multiple environments feature shouldn't limit a user to use pixi. The feature should be \"invisible\" to the non-multi env use-cases.
  3. No Automatic Combinatorial: To ensure the dependency resolution process remains manageable, the solution should avoid a combinatorial explosion of dependency sets. This is achieved by making the environments user-defined rather than automatically inferred by testing a matrix of the features.
  4. Single environment Activation: The design should allow only one environment to be active at any given time, simplifying the resolution process and preventing conflicts.
  5. Fixed lock files: It's crucial to preserve fixed lock files for consistency and predictability. Solutions must ensure reliability not just for authors but also for end-users, particularly at the time of lock file creation.
"},{"location":"features/multi_environment/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"

Introduce environment sets into the pixi.toml that describe environments based on features. Introduce features into the pixi.toml that can describe parts of environments. As an environment goes beyond just dependencies, the features should be described with the following fields:

Default features
[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature
[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature
[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}] # Results in:  [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Define tasks as defaults of an environment
[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environment test test`\n

The environment definition should contain the following fields:

Creating environments from features
[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
Testing a production environment with additional dependencies
[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
Creating environments without including the default feature
[dependencies]\npython = \"*\"\nnumpy = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom environment which only has the `lint` feature (numpy isn't part of that env).\nlint = {features = [\"lint\"], no-default-feature = true}\n
"},{"location":"features/multi_environment/#lock-file-structure","title":"lock file Structure","text":"

Within the pixi.lock file, a package may now include an additional environments field, specifying the environments to which it belongs. To avoid duplication, a package's environments field may contain multiple environments, keeping the lock file minimal in size.

- platform: linux-64\n  name: pre-commit\n  version: 3.3.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n  ...:\n- platform: linux-64\n  name: python\n  version: 3.9.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n    - py39\n    - default\n  ...:\n
"},{"location":"features/multi_environment/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"

Users can manually activate the desired environment via the command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user, the CLI looks like this:

Default behavior
\u279c pixi run python\n# Runs python in the `default` environment\n
Activating a specific environment
\u279c pixi run -e test pytest\n\u279c pixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment
\u279c pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environment
\u279c pixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n
"},{"location":"features/multi_environment/#ambiguous-environment-selection","title":"Ambiguous Environment Selection","text":"

It's possible to define tasks in multiple environments, in this case the user should be prompted to select the environment.

Here is a simple example of a task only manifest:

pixi.toml

[project]\nname = \"test_ambiguous_env\"\nchannels = []\nplatforms = [\"linux-64\", \"win-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ndefault = \"echo Default\"\nambi = \"echo Ambi::Default\"\n[feature.test.tasks]\ntest = \"echo Test\"\nambi = \"echo Ambi::Test\"\n\n[feature.dev.tasks]\ndev = \"echo Dev\"\nambi = \"echo Ambi::Dev\"\n\n[environments]\ndefault = [\"test\", \"dev\"]\ntest = [\"test\"]\ndev = [\"dev\"]\n
Trying to run the ambi task will prompt the user to select the environment, as it is available in all environments.

Interactive selection of environments if task is in multiple environments
\u279c pixi run ambi\n? The task 'ambi' can be run in multiple environments.\n\nPlease select an environment to run the task in: \u203a\n\u276f default # selecting default\n  test\n  dev\n\n\u2728 Pixi task (ambi in default): echo Ambi::Test\nAmbi::Test\n

As you can see, it runs the task defined in the test feature even though it runs in the default environment. This happens because the ambi task from [tasks] is overwritten by the one from the test feature in the default environment, so the tasks.default version is now unreachable from any environment.

Some other results running in this example:

\u279c pixi run --environment test ambi\n\u2728 Pixi task (ambi in test): echo Ambi::Test\nAmbi::Test\n\n\u279c pixi run --environment dev ambi\n\u2728 Pixi task (ambi in dev): echo Ambi::Dev\nAmbi::Dev\n\n# dev is run in the default environment\n\u279c pixi run dev\n\u2728 Pixi task (dev in default): echo Dev\nDev\n\n# dev is run in the dev environment\n\u279c pixi run -e dev dev\n\u2728 Pixi task (dev in dev): echo Dev\nDev\n

"},{"location":"features/multi_environment/#important-links","title":"Important links","text":""},{"location":"features/multi_environment/#real-world-example-use-cases","title":"Real world example use cases","text":"Polarify test setup

In polarify they want to test multiple Python versions combined with multiple versions of polars. This is currently done using a matrix in GitHub Actions, which can be replaced by using multiple environments.

pixi.toml
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.yml
jobs:\n  tests-per-env:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        environment: [py311, py312]\n    steps:\n    - uses: actions/checkout@v4\n      - uses: prefix-dev/setup-pixi@v0.5.1\n        with:\n          environments: ${{ matrix.environment }}\n      - name: Run tasks\n        run: |\n          pixi run --environment ${{ matrix.environment }} test\n  tests-with-multiple-envs:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - uses: prefix-dev/setup-pixi@v0.5.1\n      with:\n       environments: pl017 pl018\n    - run: |\n        pixi run -e pl017 test\n        pixi run -e pl018 test\n
Test vs Production example

This is an example of a project that has a test feature and prod environment. The prod environment is a production environment that contains the run dependencies. The test feature is a set of dependencies and tasks that we want to put on top of the previously solved prod environment. This is a common use case where we want to test the production environment with additional dependencies.

pixi.toml

[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI, you would run the following commands:
pixi run postinstall-e && pixi run test\n
Locally you would run the following command:
pixi run postinstall-e && pixi run dev\n

Then in a Dockerfile, you would run the following command:

Dockerfile

FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n

Multiple machines from one project

This is an example of an ML project that should be executable on machines that support cuda or mlx, as well as on machines that support neither; for the latter we use the cpu feature.

pixi.toml
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = -1}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n# MLX is only available on macOS >=13.5 (>14.0 is recommended)\nsystem-requirements = {macos = \"13.5\"}\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.16.0,<0.17.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machine
pixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlx
pixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlx
pixi run train-model\n
"},{"location":"features/multi_platform_configuration/","title":"Multi platform config","text":"

Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.

Here is an example manifest file that highlights some of the features:

pixi.toml
[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
pyproject.toml
[tool.pixi.project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tool.pixi.dependencies]\npython = \">=3.8\"\n\n[tool.pixi.target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"~=3.7.0\"\n\n\n[tool.pixi.activation]\nscripts = [\"setup.sh\"]\n\n[tool.pixi.target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"features/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"

The project.platforms defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lock file.

Running pixi install on a platform that is not configured will warn the user that it is not set up for that platform:

\u276f pixi install\n  \u00d7 the project is not configured for your current platform\n   \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n   \u00b7             \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n   \u00b7                             \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n   \u2570\u2500\u2500\u2500\u2500\n  help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"features/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"

With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms then pixi will throw an error.

"},{"location":"features/multi_platform_configuration/#dependencies","title":"Dependencies","text":"

It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.

pixi.toml
[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n

In the above example, we specify that we depend on msmpi only on Windows. We also specifically want python 3.8 when installing on Windows. This will overwrite the dependencies from the generic set of dependencies, and will not touch any of the other platforms.

You can use pixi's cli to add these dependencies to the manifest file.

pixi add --platform win-64 posix\n

This also works for the host and build dependencies.

pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n

This results in the following:

pixi.toml
[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"features/multi_platform_configuration/#activation","title":"Activation","text":"

Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your projects. Generated activation scripts are often in this category: the default scripts on Unix are bash, and on Windows they are bat.

To deal with this, you can define your activation scripts using the target definition.

pixi.toml

[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64 it will only execute the target scripts, not the scripts specified in the default activation.scripts.

"},{"location":"features/system_requirements/","title":"System Requirements in pixi","text":"

System requirements define the minimal system specifications necessary during dependency resolution for a project. For instance, specifying a Unix system with a particular minimal libc version ensures that dependencies are compatible with the project's environment.

System specifications are closely related to virtual packages, allowing for flexible and accurate dependency management.
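Concretely, each system-requirements key is matched against a conda virtual package during solving (a rough sketch; the __-prefixed names follow the conda virtual-package convention):

```toml
[system-requirements]
linux = "4.18"                                   # matched against __linux
libc = { family = "glibc", version = "2.28" }    # matched against __glibc
cuda = "12"                                      # matched against __cuda
```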

"},{"location":"features/system_requirements/#default-system-requirements","title":"Default System Requirements","text":"

The following configurations outline the default minimal system requirements for different operating systems:

LinuxWindowsosx-64osx-arm64
# Default system requirements for Linux\n[system-requirements]\nlinux = \"4.18\"\nlibc = { family = \"glibc\", version = \"2.28\" }\n

Windows currently has no minimal system requirements defined. If your project requires specific Windows configurations, you should define them accordingly.

# Default system requirements for macOS\n[system-requirements]\nmacos = \"13.0\"\n
# Default system requirements for macOS ARM64\n[system-requirements]\nmacos = \"13.0\"\n
"},{"location":"features/system_requirements/#customizing-system-requirements","title":"Customizing System Requirements","text":"

You only need to define system requirements if your project necessitates a different set from the defaults. This is common when installing environments on older or newer versions of operating systems.

"},{"location":"features/system_requirements/#adjusting-for-older-systems","title":"Adjusting for Older Systems","text":"

If you're encountering an error like:

\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '4.18' but the system has version '4.12.14'\n

This indicates that the project's system requirements are higher than your current system's specifications. To resolve this, you can lower the system requirements in your project's configuration:

[system-requirements]\nlinux = \"4.12.14\"\n

This adjustment informs the dependency resolver to accommodate the older system version.

"},{"location":"features/system_requirements/#using-cuda-in-pixi","title":"Using CUDA in pixi","text":"

To utilize CUDA in your project, you must specify the desired CUDA version in the system-requirements table. This ensures that CUDA is recognized and appropriately locked into the lock file if necessary.

Example Configuration

[system-requirements]\ncuda = \"12\"  # Replace \"12\" with the specific CUDA version you intend to use\n
"},{"location":"features/system_requirements/#setting-system-requirements-environment-specific","title":"Setting System Requirements environment specific","text":"

This can be set per feature in the manifest file.

[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[environments]\ncuda = [\"cuda\"]\n
"},{"location":"features/system_requirements/#available-override-options","title":"Available Override Options","text":"

In certain scenarios, you might need to override the system requirements detected on your machine. This can be particularly useful when working on systems that do not meet the project's default requirements.

You can override virtual packages by setting the following environment variables:
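For example, the conda-style CONDA_OVERRIDE_* variables can pretend a capability is present during resolution (the version value here is arbitrary):

```shell
# Pretend the machine provides CUDA 12.4 while pixi resolves dependencies.
# CONDA_OVERRIDE_CUDA follows the conda virtual-package override convention.
export CONDA_OVERRIDE_CUDA=12.4
echo "overriding __cuda with version $CONDA_OVERRIDE_CUDA"
# pixi install   # would now solve as if __cuda 12.4 were present
```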

"},{"location":"features/system_requirements/#additional-resources","title":"Additional Resources","text":"

For more detailed information on managing virtual packages and overriding system requirements, refer to the Conda Documentation.

"},{"location":"ide_integration/devcontainer/","title":"Use pixi inside of a devcontainer","text":"

VSCode Devcontainers are a popular tool to develop on a project with a consistent environment. They are also used in GitHub Codespaces which makes it a great way to develop on a project without having to install anything on your local machine.

To use pixi inside of a devcontainer, follow these steps:

Create a new directory .devcontainer in the root of your project. Then, create the following two files in the .devcontainer directory:

.devcontainer/Dockerfile
FROM mcr.microsoft.com/devcontainers/base:jammy\n\nARG PIXI_VERSION=v0.32.1\n\nRUN curl -L -o /usr/local/bin/pixi -fsSL --compressed \"https://github.com/prefix-dev/pixi/releases/download/${PIXI_VERSION}/pixi-$(uname -m)-unknown-linux-musl\" \\\n    && chmod +x /usr/local/bin/pixi \\\n    && pixi info\n\n# set some user and workdir settings to work nicely with vscode\nUSER vscode\nWORKDIR /home/vscode\n\nRUN echo 'eval \"$(pixi completion -s bash)\"' >> /home/vscode/.bashrc\n
.devcontainer/devcontainer.json
{\n    \"name\": \"my-project\",\n    \"build\": {\n      \"dockerfile\": \"Dockerfile\",\n      \"context\": \"..\",\n    },\n    \"customizations\": {\n      \"vscode\": {\n        \"settings\": {},\n        \"extensions\": [\"ms-python.python\", \"charliermarsh.ruff\", \"GitHub.copilot\"]\n      }\n    },\n    \"features\": {\n      \"ghcr.io/devcontainers/features/docker-in-docker:2\": {}\n    },\n    \"mounts\": [\"source=${localWorkspaceFolderBasename}-pixi,target=${containerWorkspaceFolder}/.pixi,type=volume\"],\n    \"postCreateCommand\": \"sudo chown vscode .pixi && pixi install\"\n}\n

Put .pixi in a mount

In the above example, we mount the .pixi directory into a volume. This is needed since the .pixi directory shouldn't be on a case-insensitive filesystem (the default on macOS and Windows) but instead in its own volume. There are some conda packages (for example ncurses-feedstock#73) that contain files differing only in case, which leads to errors on case-insensitive filesystems.

"},{"location":"ide_integration/devcontainer/#secrets","title":"Secrets","text":"

If you want to authenticate to a private conda channel, you can add secrets to your devcontainer.

.devcontainer/devcontainer.json
{\n    \"build\": \"Dockerfile\",\n    \"context\": \"..\",\n    \"options\": [\n        \"--secret\",\n        \"id=prefix_dev_token,env=PREFIX_DEV_TOKEN\",\n    ],\n    // ...\n}\n
.devcontainer/Dockerfile
# ...\nRUN --mount=type=secret,id=prefix_dev_token,uid=1000 \\\n    test -s /run/secrets/prefix_dev_token \\\n    && pixi auth login --token \"$(cat /run/secrets/prefix_dev_token)\" https://repo.prefix.dev\n

These secrets need to be present either as an environment variable when starting the devcontainer locally or in your GitHub Codespaces settings under Secrets.

"},{"location":"ide_integration/jupyterlab/","title":"JupyterLab Integration","text":""},{"location":"ide_integration/jupyterlab/#basic-usage","title":"Basic usage","text":"

Using JupyterLab with pixi is very simple. You can just create a new pixi project and add the jupyterlab package to it. The full example is provided under the following Github link.

pixi init\npixi add jupyterlab\n

This will create a new pixi project and add the jupyterlab package to it. You can then start JupyterLab using the following command:

pixi run jupyter lab\n

If you want to add more \"kernels\" to JupyterLab, you can simply add them to your current project \u2013 as well as any dependencies from the scientific stack you might need.

pixi add bash_kernel ipywidgets matplotlib numpy pandas  # ...\n
"},{"location":"ide_integration/jupyterlab/#what-kernels-are-available","title":"What kernels are available?","text":"

You can easily install more \"kernels\" for JupyterLab. The conda-forge repository has a number of interesting additional kernels - not just Python!

"},{"location":"ide_integration/jupyterlab/#advanced-usage","title":"Advanced usage","text":"

If you want to have only one instance of JupyterLab running but still want per-directory Pixi environments, you can use one of the kernels provided by the pixi-kernel package.

"},{"location":"ide_integration/jupyterlab/#configuring-jupyterlab","title":"Configuring JupyterLab","text":"

To get started, create a Pixi project, add jupyterlab and pixi-kernel and then start JupyterLab:

pixi init\npixi add jupyterlab pixi-kernel\npixi run jupyter lab\n

This will start JupyterLab and open it in your browser.

pixi-kernel searches for a manifest file, either pixi.toml or pyproject.toml, in the same directory as your notebook or in any parent directory. When it finds one, it uses the environment specified in the manifest file to start the kernel and run your notebooks.

"},{"location":"ide_integration/jupyterlab/#binder","title":"Binder","text":"

If you just want to check a JupyterLab environment running in the cloud using pixi-kernel, you can visit Binder.

"},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"

You can use PyCharm with pixi environments by using the conda shim provided by the pixi-pycharm package.

"},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"

To get started, add pixi-pycharm to your pixi project.

pixi add pixi-pycharm\n

This will ensure that the conda shim is installed in your project's environment.

Having pixi-pycharm installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda file (on Windows: conda.bat) which is located in .pixi/envs/default/libexec. You can get the path using the following command:

Linux & macOSWindows
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
pixi run 'echo $CONDA_PREFIX\\\\libexec\\\\conda.bat'\n

This is an executable that tricks PyCharm into thinking it's the proper conda executable. Under the hood it redirects all calls to the corresponding pixi equivalent.

Use the conda shim from this pixi project

Please make sure that this is the conda shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.

Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.

PyCharm should now be able to show you the installed packages as well.

You can now run your programs and tests as usual.

Mark .pixi as excluded

In order for PyCharm to not get confused about the .pixi directory, please mark it as excluded.

Also, when using a remote interpreter, you should exclude the .pixi directory on the remote machine. Instead, you should run pixi install on the remote machine and select the conda shim from there.

"},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"

If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.

You can then specify the corresponding environment in the bottom right corner of the PyCharm window.

"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"

When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also come up that you have multiple environments with the same name.

It is recommended to rename the environments to something unique.

"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"

Logs are written to ~/.cache/pixi-pycharm.log. You can use them to debug problems. Please attach the logs when filing a bug report.

"},{"location":"ide_integration/r_studio/","title":"Developing R scripts in RStudio","text":"

You can use pixi to manage your R dependencies. The conda-forge channel contains a wide range of R packages that can be installed using pixi.

"},{"location":"ide_integration/r_studio/#installing-r-packages","title":"Installing R packages","text":"

R packages are usually prefixed with r- in the conda-forge channel. To install an R package, you can use the following command:

pixi add r-<package-name>\n# for example\npixi add r-ggplot2\n
"},{"location":"ide_integration/r_studio/#using-r-packages-in-rstudio","title":"Using R packages in RStudio","text":"

To use the R packages installed by pixi in RStudio, you need to run rstudio from an activated environment. This can be achieved by running RStudio from pixi shell or from a task in the pixi.toml file.

"},{"location":"ide_integration/r_studio/#full-example","title":"Full example","text":"

The full example can be found here: RStudio example. Here is an example of a pixi.toml file that sets up an RStudio task:

[project]\nname = \"r\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[target.linux.tasks]\nrstudio = \"rstudio\"\n\n[target.osx.tasks]\nrstudio = \"open -a rstudio\"\n# or alternatively with the full path:\n# rstudio = \"/Applications/RStudio.app/Contents/MacOS/RStudio\"\n\n[dependencies]\nr = \">=4.3,<5\"\nr-ggplot2 = \">=3.5.0,<3.6\"\n

Once RStudio has loaded, you can execute the following R code that uses the ggplot2 package:

# Load the ggplot2 package\nlibrary(ggplot2)\n\n# Load the built-in 'mtcars' dataset\ndata <- mtcars\n\n# Create a scatterplot of 'mpg' vs 'wt'\nggplot(data, aes(x = wt, y = mpg)) +\n  geom_point() +\n  labs(x = \"Weight (1000 lbs)\", y = \"Miles per Gallon\") +\n  ggtitle(\"Fuel Efficiency vs. Weight\")\n

Note

This example assumes that you have installed RStudio system-wide. We are working on updating RStudio as well as the R interpreter builds on Windows for maximum compatibility with pixi.

"},{"location":"reference/cli/","title":"Commands","text":""},{"location":"reference/cli/#global-options","title":"Global options","text":""},{"location":"reference/cli/#init","title":"init","text":"

This command is used to create a new project. It initializes a pixi.toml file and also prepares a .gitignore to prevent the environment from being added to git.

It also supports the pyproject.toml file: if you have a pyproject.toml file in the directory where you run pixi init, it appends the pixi data to the pyproject.toml instead of creating a new pixi.toml file.

"},{"location":"reference/cli/#arguments","title":"Arguments","text":"
  1. [PATH]: Where to place the project (defaults to current path) [default: .]
"},{"location":"reference/cli/#options","title":"Options","text":"

Importing an environment.yml

When importing an environment, the pixi.toml will be created with the dependencies from the environment file. The pixi.lock will be created when you install the environment. We don't support git+ URLs as dependencies for pip packages, and for the defaults channel we use main, r and msys2 as the default channels.

pixi init myproject\npixi init ~/myproject\npixi init  # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\npixi init --format pyproject\npixi init --format pixi\n
"},{"location":"reference/cli/#add","title":"add","text":"

Adds dependencies to the manifest file. A package is only added if its version constraint is compatible with the rest of the dependencies in the project. More info on multi-platform configuration.

If the project manifest is a pyproject.toml, adding a pypi dependency will add it to the native pyproject project.dependencies array, or to the native project.optional-dependencies table if a feature is specified:

These dependencies will be read by pixi as if they had been added to the pixi pypi-dependencies tables of the default or a named feature.
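As an illustration (the project name and feature are hypothetical), running pixi add --pypi requests and pixi add --pypi "boltons>=24.0.0" --feature lint against a pyproject.toml manifest would end up roughly as:

```toml
[project]
name = "my-project"
dependencies = ["requests"]    # pixi add --pypi requests

[project.optional-dependencies]
lint = ["boltons>=24.0.0"]     # pixi add --pypi "boltons>=24.0.0" --feature lint
```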

"},{"location":"reference/cli/#arguments_1","title":"Arguments","text":"
  1. [SPECS]: The package(s) to add, space separated. The version constraint is optional.
"},{"location":"reference/cli/#options_1","title":"Options","text":"
pixi add numpy # (1)!\npixi add numpy pandas \"pytorch>=1.8\" # (2)!\npixi add \"numpy>=1.22,<1.24\" # (3)!\npixi add --manifest-path ~/myproject/pixi.toml numpy # (4)!\npixi add --host \"python>=3.9.0\" # (5)!\npixi add --build cmake # (6)!\npixi add --platform osx-64 clang # (7)!\npixi add --no-install numpy # (8)!\npixi add --no-lockfile-update numpy # (9)!\npixi add --feature featurex numpy # (10)!\n\n# Add a pypi dependency\npixi add --pypi requests[security] # (11)!\npixi add --pypi Django==5.1rc1 # (12)!\npixi add --pypi \"boltons>=24.0.0\" --feature lint # (13)!\npixi add --pypi \"boltons @ https://files.pythonhosted.org/packages/46/35/e50d4a115f93e2a3fbf52438435bb2efcf14c11d4fcd6bdcd77a6fc399c9/boltons-24.0.0-py3-none-any.whl\" # (14)!\npixi add --pypi \"exchangelib @ git+https://github.com/ecederstrand/exchangelib\" # (15)!\npixi add --pypi \"project @ file:///absolute/path/to/project\" # (16)!\npixi add --pypi \"project@file:///absolute/path/to/project\" --editable # (17)!\n
  1. This will add the numpy package to the project with the latest available for the solved environment.
  2. This will add multiple packages to the project solving them all together.
  3. This will add the numpy package with the version constraint.
  4. This will add the numpy package to the project of the manifest file at the given path.
  5. This will add the python package as a host dependency. There is currently no different behavior for host dependencies.
  6. This will add the cmake package as a build dependency. There is currently no different behavior for build dependencies.
  7. This will add the clang package only for the osx-64 platform.
  8. This will add the numpy package to the manifest and lockfile, without installing it in an environment.
  9. This will add the numpy package to the manifest without updating the lockfile or installing it in the environment.
  10. This will add the numpy package in the feature featurex.
  11. This will add the requests package as pypi dependency with the security extra.
  12. This will add the pre-release version of Django to the project as a pypi dependency.
  13. This will add the boltons package in the feature lint as pypi dependency.
  14. This will add the boltons package with the given url as pypi dependency.
  15. This will add the exchangelib package with the given git url as pypi dependency.
  16. This will add the project package with the given file url as pypi dependency.
  17. This will add the project package with the given file url as an editable package as pypi dependency.

Tip

If you want to use a non default pinning strategy, you can set it using pixi's configuration.

pixi config set pinning-strategy no-pin --global\n
The default is semver, which pins dependencies below the next major version, or below the next minor version for v0 packages.
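A sketch of what that means in practice (the package names and solved versions are hypothetical):

```toml
# With the default semver strategy, if the solver picked numpy 1.26.4 and
# a v0 package at 0.4.2, the constraints written to the manifest would look like:
[dependencies]
numpy = ">=1.26.4,<2"     # pinned below the next major version
tinylib = ">=0.4.2,<0.5"  # v0 packages: pinned below the next minor version
```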

"},{"location":"reference/cli/#install","title":"install","text":"

Installs an environment based on the manifest file. If there is no pixi.lock file or it is not up-to-date with the manifest file, it will (re-)generate the lock file.

pixi install only installs one environment at a time, if you have multiple environments you can select the right one with the --environment flag. If you don't provide an environment, the default environment will be installed.

Running pixi install is not required before running other commands, as every command that interacts with the environment will first run the install command if the environment is not ready, ensuring you always run in a correct state. E.g. pixi run, pixi shell, pixi shell-hook, pixi add, pixi remove, to name a few.

"},{"location":"reference/cli/#options_2","title":"Options","text":"
pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\npixi install --environment lint\npixi install -e lint\n
"},{"location":"reference/cli/#update","title":"update","text":"

The update command checks if there are newer versions of the dependencies and updates the pixi.lock file and environments accordingly. It will only update the lock file if the dependencies in the manifest file are still compatible with the new versions.

"},{"location":"reference/cli/#arguments_2","title":"Arguments","text":"
  1. [PACKAGES]... The packages to update, space separated. If no packages are provided, all packages will be updated.
"},{"location":"reference/cli/#options_3","title":"Options","text":"
pixi update numpy\npixi update numpy pandas\npixi update --manifest-path ~/myproject/pixi.toml numpy\npixi update --environment lint python\npixi update -e lint -e schema -e docs pre-commit\npixi update --platform osx-arm64 mlx\npixi update -p linux-64 -p osx-64 numpy\npixi update --dry-run\npixi update --no-install boto3\n
"},{"location":"reference/cli/#run","title":"run","text":"

The run command first checks if the environment is ready to use. If you didn't run pixi install, the run command will do that for you. The custom tasks defined in the manifest file are also available through the run command.

You cannot run pixi run source setup.bash, as source is not available in the deno_task_shell commands and is not an executable.

"},{"location":"reference/cli/#arguments_3","title":"Arguments","text":"
  1. [TASK]... The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.
"},{"location":"reference/cli/#options_4","title":"Options","text":"

Info

In pixi the deno_task_shell is the underlying runner of the run command. Checkout their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.

Cross environment tasks

If you're using the depends-on feature of the tasks, the tasks will be run in the order you specified them. The depends-on can be used cross environment, e.g. you have this pixi.toml:

pixi.toml
[tasks]\nstart = { cmd = \"python start.py\", depends-on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n

Then you're able to run the build from the build environment and start from the default environment by only calling:

pixi run start\n

"},{"location":"reference/cli/#exec","title":"exec","text":"

Runs a command in a temporary environment disconnected from any project. This can be useful to quickly test out a certain package or version.

Temporary environments are cached. If the same command is run again, the same environment will be reused.

Cleaning temporary environments

Currently, temporary environments can only be cleaned up manually. Environments for pixi exec are stored under cached-envs-v0/ in the cache directory. Run pixi info to find the cache directory.

"},{"location":"reference/cli/#arguments_4","title":"Arguments","text":"
  1. <COMMAND>: The command to run.
"},{"location":"reference/cli/#options_5","title":"Options:","text":"
pixi exec python\n\n# Add a constraint to the python version\npixi exec -s python=3.9 python\n\n# Run ipython and include the py-rattler package in the environment\npixi exec -s ipython -s py-rattler ipython\n\n# Force reinstall to recreate the environment and get the latest package versions\npixi exec --force-reinstall -s ipython -s py-rattler ipython\n
"},{"location":"reference/cli/#remove","title":"remove","text":"

Removes dependencies from the manifest file.

If the project manifest is a pyproject.toml, removing a pypi dependency with the --pypi flag will remove it from either: the native pyproject project.dependencies array, or the native project.optional-dependencies table (if a feature is specified); or the pixi pypi-dependencies tables of the default or a named feature (if a feature is specified).

"},{"location":"reference/cli/#arguments_5","title":"Arguments","text":"
  1. <DEPS>...: List of dependencies you wish to remove from the project.
"},{"location":"reference/cli/#options_6","title":"Options","text":"
pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\npixi remove --no-install numpy\n
"},{"location":"reference/cli/#task","title":"task","text":"

If you want to make a shorthand for a specific command you can add a task for it.

"},{"location":"reference/cli/#options_7","title":"Options","text":""},{"location":"reference/cli/#task-add","title":"task add","text":"

Add a task to the manifest file. Use --depends-on to add tasks you want to run before this task, e.g. build before an execute task.

"},{"location":"reference/cli/#arguments_6","title":"Arguments","text":"
  1. <NAME>: The name of the task.
  2. <COMMAND>: The command to run. This can be more than one word.

Info

If you are using $ for env variables they will be resolved before adding them to the task. If you want to use $ in the task you need to escape it with a \\, e.g. echo \\$HOME.
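To see the shell-expansion difference directly (using $HOME as in the note above):

```shell
# Unescaped: the current shell expands $HOME before pixi ever sees the command,
# so the already-expanded path would be stored in the task.
echo "echo $HOME"
# Escaped: the literal text $HOME is stored and expands when the task runs.
echo "echo \$HOME"
```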

"},{"location":"reference/cli/#options_8","title":"Options","text":"
pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\npixi task add publish-pypi \"hatch publish --yes --repo main\" --feature build --env HATCH_CONFIG=config/hatch.toml --description \"Publish the package to pypi\"\n

This adds the following to the manifest file:

[tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends-on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n\n[feature.build.tasks]\npublish-pypi = { cmd = \"hatch publish --yes --repo main\", env = { HATCH_CONFIG = \"config/hatch.toml\" }, description = \"Publish the package to pypi\" }\n

Which you can then run with the run command:

pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
"},{"location":"reference/cli/#task-remove","title":"task remove","text":"

Remove the task from the manifest file

"},{"location":"reference/cli/#arguments_7","title":"Arguments","text":""},{"location":"reference/cli/#options_9","title":"Options","text":"
pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
"},{"location":"reference/cli/#task-alias","title":"task alias","text":"

Create an alias for a task.

"},{"location":"reference/cli/#arguments_8","title":"Arguments","text":"
  1. <ALIAS>: The alias name
  2. <DEPENDS_ON>: The names of the tasks you want to execute on this alias, order counts, first one runs first.
"},{"location":"reference/cli/#options_10","title":"Options","text":"
pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
"},{"location":"reference/cli/#task-list","title":"task list","text":"

List all tasks in the project.

"},{"location":"reference/cli/#options_11","title":"Options","text":"
pixi task list\npixi task list --environment cuda\npixi task list --summary\n
"},{"location":"reference/cli/#list","title":"list","text":"

List the project's packages. Highlighted packages are explicit dependencies.

"},{"location":"reference/cli/#options_12","title":"Options","text":"
pixi list\npixi list --json-pretty\npixi list --explicit\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n

Output will look like this, where python will be green as it is the package that was explicitly added to the manifest file:

\u279c pixi list\n Package           Version     Build               Size       Kind   Source\n _libgcc_mutex     0.1         conda_forge         2.5 KiB    conda  _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex     4.5         2_gnu               23.1 KiB   conda  _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2             1.0.8       hd590300_5          248.3 KiB  conda  bzip2-1.0.8-hd590300_5.conda\n ca-certificates   2023.11.17  hbcca054_0          150.5 KiB  conda  ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64  2.40        h41732ed_0          688.2 KiB  conda  ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat          2.5.0       hcb278e6_1          76.2 KiB   conda  libexpat-2.5.0-hcb278e6_1.conda\n libffi            3.4.2       h7f98852_5          56.9 KiB   conda  libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng         13.2.0      h807b86a_4          755.7 KiB  conda  libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp           13.2.0      h807b86a_4          412.2 KiB  conda  libgomp-13.2.0-h807b86a_4.conda\n libnsl            2.0.1       hd590300_0          32.6 KiB   conda  libnsl-2.0.1-hd590300_0.conda\n libsqlite         3.44.2      h2797004_0          826 KiB    conda  libsqlite-3.44.2-h2797004_0.conda\n libuuid           2.38.1      h0b41bf4_0          32.8 KiB   conda  libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt         4.4.36      hd590300_1          98 KiB     conda  libxcrypt-4.4.36-hd590300_1.conda\n libzlib           1.2.13      hd590300_5          60.1 KiB   conda  libzlib-1.2.13-hd590300_5.conda\n ncurses           6.4         h59595ed_2          863.7 KiB  conda  ncurses-6.4-h59595ed_2.conda\n openssl           3.2.0       hd590300_1          2.7 MiB    conda  openssl-3.2.0-hd590300_1.conda\n python            3.12.1      hab00c5b_1_cpython  30.8 MiB   conda  python-3.12.1-hab00c5b_1_cpython.conda\n readline          8.2         h8228510_1          274.9 KiB  conda  readline-8.2-h8228510_1.conda\n tk                8.6.13      
noxft_h4845f30_101  3.2 MiB    conda  tk-8.6.13-noxft_h4845f30_101.conda\n tzdata            2023d       h0c530f3_0          116.8 KiB  conda  tzdata-2023d-h0c530f3_0.conda\n xz                5.2.6       h166bdaf_0          408.6 KiB  conda  xz-5.2.6-h166bdaf_0.tar.bz2\n
"},{"location":"reference/cli/#tree","title":"tree","text":"

Display the project's packages in a tree. Highlighted packages are those specified in the manifest.

The package tree can also be inverted (-i), to see which packages require a specific dependency.

"},{"location":"reference/cli/#arguments_9","title":"Arguments","text":""},{"location":"reference/cli/#options_13","title":"Options","text":"
pixi tree\npixi tree pre-commit\npixi tree -i yaml\npixi tree --environment docs\npixi tree --platform win-64\n

Warning

Use -v to show which pypi packages are not yet parsed correctly. The extras and markers parsing is still under development.

Output will look like this, where direct packages in the manifest file will be green. Once a package has been displayed once, the tree won't continue to recurse through its dependencies (compare the first time python appears, vs the rest), and it will instead be marked with a star (*).

Version numbers are colored by the package type, yellow for Conda packages and blue for PyPI.

\u279c pixi tree\n\u251c\u2500\u2500 pre-commit v3.3.3\n\u2502   \u251c\u2500\u2500 cfgv v3.3.1\n\u2502   \u2502   \u2514\u2500\u2500 python v3.12.2\n\u2502   \u2502       \u251c\u2500\u2500 bzip2 v1.0.8\n\u2502   \u2502       \u251c\u2500\u2500 libexpat v2.6.2\n\u2502   \u2502       \u251c\u2500\u2500 libffi v3.4.2\n\u2502   \u2502       \u251c\u2500\u2500 libsqlite v3.45.2\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13\n\u2502   \u2502       \u251c\u2500\u2500 libzlib v1.2.13 (*)\n\u2502   \u2502       \u251c\u2500\u2500 ncurses v6.4.20240210\n\u2502   \u2502       \u251c\u2500\u2500 openssl v3.2.1\n\u2502   \u2502       \u251c\u2500\u2500 readline v8.2\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 ncurses v6.4.20240210 (*)\n\u2502   \u2502       \u251c\u2500\u2500 tk v8.6.13\n\u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13 (*)\n\u2502   \u2502       \u2514\u2500\u2500 xz v5.2.6\n\u2502   \u251c\u2500\u2500 identify v2.5.35\n\u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n...\n\u2514\u2500\u2500 tbump v6.9.0\n...\n    \u2514\u2500\u2500 tomlkit v0.12.4\n        \u2514\u2500\u2500 python v3.12.2 (*)\n

A regex pattern can be specified to filter the tree to just the packages that have a specific direct or transitive dependency:

\u279c pixi tree pre-commit\n\u2514\u2500\u2500 pre-commit v3.3.3\n    \u251c\u2500\u2500 virtualenv v20.25.1\n    \u2502   \u251c\u2500\u2500 filelock v3.13.1\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2\n    \u2502   \u2502       \u251c\u2500\u2500 libexpat v2.6.2\n    \u2502   \u2502       \u251c\u2500\u2500 readline v8.2\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 ncurses v6.4.20240210\n    \u2502   \u2502       \u251c\u2500\u2500 libsqlite v3.45.2\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13\n    \u2502   \u2502       \u251c\u2500\u2500 bzip2 v1.0.8\n    \u2502   \u2502       \u251c\u2500\u2500 libzlib v1.2.13 (*)\n    \u2502   \u2502       \u251c\u2500\u2500 libffi v3.4.2\n    \u2502   \u2502       \u251c\u2500\u2500 tk v8.6.13\n    \u2502   \u2502       \u2502   \u2514\u2500\u2500 libzlib v1.2.13 (*)\n    \u2502   \u2502       \u251c\u2500\u2500 xz v5.2.6\n    \u2502   \u2502       \u251c\u2500\u2500 ncurses v6.4.20240210 (*)\n    \u2502   \u2502       \u2514\u2500\u2500 openssl v3.2.1\n    \u2502   \u251c\u2500\u2500 platformdirs v4.2.0\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u2502   \u251c\u2500\u2500 distlib v0.3.8\n    \u2502   \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u2502   \u2514\u2500\u2500 python v3.12.2 (*)\n    \u251c\u2500\u2500 pyyaml v6.0.1\n...\n

Additionally, the tree can be inverted to show which packages depend on the packages matching a regex pattern. The packages specified in the manifest are also highlighted (in this case cffconvert and pre-commit would be).

\u279c pixi tree -i yaml\n\nruamel.yaml v0.18.6\n\u251c\u2500\u2500 pykwalify v1.8.0\n\u2502   \u2514\u2500\u2500 cffconvert v2.0.0\n\u2514\u2500\u2500 cffconvert v2.0.0\n\npyyaml v6.0.1\n\u2514\u2500\u2500 pre-commit v3.3.3\n\nruamel.yaml.clib v0.2.8\n\u2514\u2500\u2500 ruamel.yaml v0.18.6\n    \u251c\u2500\u2500 pykwalify v1.8.0\n    \u2502   \u2514\u2500\u2500 cffconvert v2.0.0\n    \u2514\u2500\u2500 cffconvert v2.0.0\n\nyaml v0.2.5\n\u2514\u2500\u2500 pyyaml v6.0.1\n    \u2514\u2500\u2500 pre-commit v3.3.3\n
"},{"location":"reference/cli/#shell","title":"shell","text":"

This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit.

"},{"location":"reference/cli/#options_14","title":"Options","text":"
pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
"},{"location":"reference/cli/#shell-hook","title":"shell-hook","text":"

This command prints the activation script of an environment.

"},{"location":"reference/cli/#options_15","title":"Options","text":"
pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\npixi shell-hook --json\n

An example use case is getting rid of the pixi executable in a Docker container:

pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n
"},{"location":"reference/cli/#search","title":"search","text":"

Search for a package; the output lists the latest version of the package.

"},{"location":"reference/cli/#arguments_10","title":"Arguments","text":"
  1. <PACKAGE>: Name of the package to search for; wildcards (*) are supported.
"},{"location":"reference/cli/#options_16","title":"Options","text":"
pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
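Assuming the wildcard syntax behaves like a shell glob (an assumption for this sketch, not a statement about pixi's matcher), Python's fnmatch shows which package names a pattern such as py* would select; the package list here is invented for illustration:

```python
from fnmatch import fnmatch

# hypothetical channel contents, purely for illustration
packages = ["pixi", "pytest", "numpy", "pyyaml"]

# glob-style matching: "py*" selects names beginning with "py"
matches = [name for name in packages if fnmatch(name, "py*")]
print(matches)  # ['pytest', 'pyyaml']
```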
"},{"location":"reference/cli/#self-update","title":"self-update","text":"

Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi), pixi won't update, to prevent breaking the current installation (Homebrew, etc.). This behaviour can be overridden with the --force flag.

"},{"location":"reference/cli/#options_17","title":"Options","text":"
pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
"},{"location":"reference/cli/#info","title":"info","text":"

Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.

"},{"location":"reference/cli/#options_18","title":"Options","text":"
pixi info\npixi info --json --extended\n
"},{"location":"reference/cli/#clean","title":"clean","text":"

Clean the parts of your system which are touched by pixi. Defaults to cleaning the environments and task cache. Use the cache subcommand to clean the cache.

"},{"location":"reference/cli/#options_19","title":"Options","text":"
pixi clean\n
"},{"location":"reference/cli/#clean-cache","title":"clean cache","text":"

Clean the pixi cache on your system.

"},{"location":"reference/cli/#options_20","title":"Options","text":"
pixi clean cache # clean all pixi caches\npixi clean cache --pypi # clean only the pypi cache\npixi clean cache --conda # clean only the conda cache\npixi clean cache --yes # skip the confirmation prompt\n
"},{"location":"reference/cli/#upload","title":"upload","text":"

Upload a package to a prefix.dev channel

"},{"location":"reference/cli/#arguments_11","title":"Arguments","text":"
  1. <HOST>: The host + channel to upload to.
  2. <PACKAGE_FILE>: The package file to upload.
pixi upload https://prefix.dev/api/v1/upload/my_channel my_package.conda\n
"},{"location":"reference/cli/#auth","title":"auth","text":"

This command is used to authenticate the user's access to remote hosts such as prefix.dev or anaconda.org for private channels.

"},{"location":"reference/cli/#auth-login","title":"auth login","text":"

Store authentication information for given host.

Tip

The host is the real hostname, not a channel.

"},{"location":"reference/cli/#arguments_12","title":"Arguments","text":"
  1. <HOST>: The host to authenticate with.
"},{"location":"reference/cli/#options_21","title":"Options","text":"
pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaAndHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --username john --password xxxxxx\n
"},{"location":"reference/cli/#auth-logout","title":"auth logout","text":"

Remove authentication information for a given host.

"},{"location":"reference/cli/#arguments_13","title":"Arguments","text":"
  1. <HOST>: The host to authenticate with.
pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
"},{"location":"reference/cli/#config","title":"config","text":"

Use this command to manage the configuration.

"},{"location":"reference/cli/#options_22","title":"Options","text":"

Check out the pixi configuration for more information about the locations.

"},{"location":"reference/cli/#config-edit","title":"config edit","text":"

Edit the configuration file in the default editor.

pixi config edit --system\npixi config edit --local\npixi config edit -g\n
"},{"location":"reference/cli/#config-list","title":"config list","text":"

List the configuration

"},{"location":"reference/cli/#arguments_14","title":"Arguments","text":"
  1. [KEY]: The key to list the value of. (all if not provided)
"},{"location":"reference/cli/#options_23","title":"Options","text":"
pixi config list default-channels\npixi config list --json\npixi config list --system\npixi config list -g\n
"},{"location":"reference/cli/#config-prepend","title":"config prepend","text":"

Prepend a value to a list configuration key.

"},{"location":"reference/cli/#arguments_15","title":"Arguments","text":"
  1. <KEY>: The key to prepend the value to.
  2. <VALUE>: The value to prepend.
pixi config prepend default-channels conda-forge\n
"},{"location":"reference/cli/#config-append","title":"config append","text":"

Append a value to a list configuration key.

"},{"location":"reference/cli/#arguments_16","title":"Arguments","text":"
  1. <KEY>: The key to append the value to.
  2. <VALUE>: The value to append.
pixi config append default-channels robostack\npixi config append default-channels bioconda --global\n
"},{"location":"reference/cli/#config-set","title":"config set","text":"

Set a configuration key to a value.

"},{"location":"reference/cli/#arguments_17","title":"Arguments","text":"
  1. <KEY>: The key to set the value of.
  2. [VALUE]: The value to set. (if not provided, the key will be removed)
pixi config set default-channels '[\"conda-forge\", \"bioconda\"]'\npixi config set --global mirrors '{\"https://conda.anaconda.org/\": [\"https://prefix.dev/conda-forge\"]}'\npixi config set repodata-config.disable-zstd true --system\npixi config set --global detached-environments \"/opt/pixi/envs\"\npixi config set detached-environments false\n
"},{"location":"reference/cli/#config-unset","title":"config unset","text":"

Unset a configuration key.

"},{"location":"reference/cli/#arguments_18","title":"Arguments","text":"
  1. <KEY>: The key to unset.
pixi config unset default-channels\npixi config unset --global mirrors\npixi config unset repodata-config.disable-zstd --system\n
"},{"location":"reference/cli/#global","title":"global","text":"

global is the main entry point for the part of pixi that operates at the global (system) level.

Tip

Binaries and environments installed globally are stored in ~/.pixi by default; this can be changed by setting the PIXI_HOME environment variable.

"},{"location":"reference/cli/#global-install","title":"global install","text":"

This command installs package(s) into their own environment and adds their binaries to PATH, allowing you to access them anywhere on your system without activating the environment.

"},{"location":"reference/cli/#arguments_19","title":"Arguments","text":"

  1. <PACKAGE>: The package(s) to install; this can also include a version constraint.

"},{"location":"reference/cli/#options_24","title":"Options","text":"
pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n\n# Install for a specific platform, only useful on osx-arm64\npixi global install --platform osx-64 ruff\n\n# Install without inserting activation code into the executable script\npixi global install ruff --no-activation\n

Tip

Running osx-64 on Apple Silicon will install the Intel binary but run it using Rosetta:

pixi global install --platform osx-64 ruff\n

After using global install, you can use the package you installed anywhere on your system.

"},{"location":"reference/cli/#global-list","title":"global list","text":"

This command shows the currently installed global environments, including the binaries that come with them. A globally installed package/environment can contain multiple binaries, which are listed in the command output. Here is an example of a few installed packages:

> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n|   \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n|   \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n|   \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n|   \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n    \u2514\u2500 exec: uv\n
"},{"location":"reference/cli/#global-upgrade","title":"global upgrade","text":"

This command upgrades a globally installed package (to the latest version by default).

"},{"location":"reference/cli/#arguments_20","title":"Arguments","text":"
  1. <PACKAGE>: The package to upgrade.
"},{"location":"reference/cli/#options_25","title":"Options","text":"
pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"reference/cli/#global-upgrade-all","title":"global upgrade-all","text":"

This command upgrades all globally installed packages to their latest version.

"},{"location":"reference/cli/#options_26","title":"Options","text":"
pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
"},{"location":"reference/cli/#global-remove","title":"global remove","text":"

Removes a package previously installed into a globally accessible location via pixi global install

Use pixi global list to find out the package name that belongs to the tool you want to remove.

"},{"location":"reference/cli/#arguments_21","title":"Arguments","text":"
  1. <PACKAGE>: The package(s) to remove.
pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"reference/cli/#project","title":"project","text":"

This subcommand allows you to modify the project configuration through the command line interface.

"},{"location":"reference/cli/#options_27","title":"Options","text":""},{"location":"reference/cli/#project-channel-add","title":"project channel add","text":"

Add channels to the channel list in the project configuration. When you add channels, they are tested for existence, added to the lock file, and the environment is reinstalled.

"},{"location":"reference/cli/#arguments_22","title":"Arguments","text":"
  1. <CHANNEL>: The channels to add, name or URL.
"},{"location":"reference/cli/#options_28","title":"Options","text":"
pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"reference/cli/#project-channel-list","title":"project channel list","text":"

List the channels in the manifest file

"},{"location":"reference/cli/#options_29","title":"Options","text":"
$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"reference/cli/#project-channel-remove","title":"project channel remove","text":"

Remove channels from the manifest file.

"},{"location":"reference/cli/#arguments_23","title":"Arguments","text":"
  1. <CHANNEL>...: The channels to remove, name(s) or URL(s).
"},{"location":"reference/cli/#options_30","title":"Options","text":"
pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"reference/cli/#project-description-get","title":"project description get","text":"

Get the project description.

$ pixi project description get\nPackage management made easy!\n
"},{"location":"reference/cli/#project-description-set","title":"project description set","text":"

Set the project description.

"},{"location":"reference/cli/#arguments_24","title":"Arguments","text":"
  1. <DESCRIPTION>: The description to set.
pixi project description set \"my new description\"\n
"},{"location":"reference/cli/#project-environment-add","title":"project environment add","text":"

Add an environment to the manifest file.

"},{"location":"reference/cli/#arguments_25","title":"Arguments","text":"
  1. <NAME>: The name of the environment to add.
"},{"location":"reference/cli/#options_31","title":"Options","text":"
pixi project environment add env1 --feature feature1 --feature feature2\npixi project environment add env2 -f feature1 --solve-group test\npixi project environment add env3 -f feature1 --no-default-feature\npixi project environment add env3 -f feature1 --force\n
"},{"location":"reference/cli/#project-environment-remove","title":"project environment remove","text":"

Remove an environment from the manifest file.

"},{"location":"reference/cli/#arguments_26","title":"Arguments","text":"
  1. <NAME>: The name of the environment to remove.
pixi project environment remove env1\n
"},{"location":"reference/cli/#project-environment-list","title":"project environment list","text":"

List the environments in the manifest file.

pixi project environment list\n
"},{"location":"reference/cli/#project-export-conda_environment","title":"project export conda_environment","text":"

Exports a conda environment.yml file. The file can be used to create a conda environment using conda/mamba:

pixi project export conda-environment environment.yml\nmamba create --name <env> --file environment.yml\n
"},{"location":"reference/cli/#arguments_27","title":"Arguments","text":"
  1. <OUTPUT_PATH>: Optional path to render environment.yml to. Otherwise it will be printed to standard out.
"},{"location":"reference/cli/#options_32","title":"Options","text":"
pixi project export conda-environment --environment lint\npixi project export conda-environment --platform linux-64 environment.linux-64.yml\n
"},{"location":"reference/cli/#project-export-conda_explicit_spec","title":"project export conda_explicit_spec","text":"

Render a platform-specific conda explicit specification file for an environment. The file can then be used to create a conda environment using conda/mamba:

mamba create --name <env> --file <explicit spec file>\n

As the explicit specification file format does not support pypi-dependencies, use the --ignore-pypi-errors option to ignore those dependencies.

"},{"location":"reference/cli/#arguments_28","title":"Arguments","text":"
  1. <OUTPUT_DIR>: Output directory for rendered explicit environment spec files.
"},{"location":"reference/cli/#options_33","title":"Options","text":"
pixi project export conda_explicit_spec output\npixi project export conda_explicit_spec -e default -e test -p linux-64 output\n
"},{"location":"reference/cli/#project-platform-add","title":"project platform add","text":"

Adds platform(s) to the manifest file and updates the lock file.

"},{"location":"reference/cli/#arguments_29","title":"Arguments","text":"
  1. <PLATFORM>...: The platforms to add.
"},{"location":"reference/cli/#options_34","title":"Options","text":"
pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"reference/cli/#project-platform-list","title":"project platform list","text":"

List the platforms in the manifest file.

$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"reference/cli/#project-platform-remove","title":"project platform remove","text":"

Removes platform(s) from the manifest file and updates the lock file.

"},{"location":"reference/cli/#arguments_30","title":"Arguments","text":"
  1. <PLATFORM>...: The platforms to remove.
"},{"location":"reference/cli/#options_35","title":"Options","text":"
pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"reference/cli/#project-version-get","title":"project version get","text":"

Get the project version.

$ pixi project version get\n0.11.0\n
"},{"location":"reference/cli/#project-version-set","title":"project version set","text":"

Set the project version.

"},{"location":"reference/cli/#arguments_31","title":"Arguments","text":"
  1. <VERSION>: The version to set.
pixi project version set \"0.13.0\"\n
"},{"location":"reference/cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}","text":"

Bump the project version to {MAJOR|MINOR|PATCH}.

pixi project version major\npixi project version minor\npixi project version patch\n
  1. An up-to-date lock file means that the dependencies in the lock file are allowed by the dependencies in the manifest file. For example:

    • a manifest with python = \">= 3.11\" is up-to-date with a name: python, version: 3.11.0 in the pixi.lock.
    • a manifest with python = \">= 3.12\" is not up-to-date with a name: python, version: 3.11.0 in the pixi.lock.

    Being up-to-date does not mean that the lock file holds the latest version available on the channel for the given dependency.
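The up-to-date rule in this footnote can be sketched in a few lines. This is a conceptual illustration, not pixi's actual solver; it handles only the simplified ">=" constraint form used in the example above.

```python
# Sketch of the "up-to-date" check: the locked version must satisfy the
# manifest constraint. Only ">=" constraints are handled here.

def parse_version(v: str) -> tuple[int, ...]:
    # "3.11.0" -> (3, 11, 0); tuples compare component-wise
    return tuple(int(part) for part in v.split("."))

def is_up_to_date(constraint: str, locked: str) -> bool:
    op, _, bound = constraint.partition(" ")
    if op != ">=":
        raise ValueError("this sketch only handles '>=' constraints")
    return parse_version(locked) >= parse_version(bound)

# python = ">= 3.11" is up-to-date with a locked python 3.11.0 ...
print(is_up_to_date(">= 3.11", "3.11.0"))  # True
# ... while python = ">= 3.12" is not.
print(is_up_to_date(">= 3.12", "3.11.0"))  # False
```

Note that a True result only means the locked version is allowed by the constraint, not that it is the newest version available on the channel.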

"},{"location":"reference/pixi_configuration/","title":"The configuration of pixi itself","text":"

Apart from the project-specific configuration, pixi supports configuration options that are not required for the project to work but are local to the machine. The configuration is loaded in the following order:

Linux:
  1. /etc/pixi/config.toml: System-wide configuration
  2. $XDG_CONFIG_HOME/pixi/config.toml: XDG-compliant user-specific configuration
  3. $HOME/.config/pixi/config.toml: User-specific configuration
  4. $PIXI_HOME/config.toml: Global configuration in the user home directory (PIXI_HOME defaults to ~/.pixi)
  5. your_project/.pixi/config.toml: Project-specific configuration
  6. Command line arguments (--tls-no-verify, --change-ps1=false, etc.)

macOS:
  1. /etc/pixi/config.toml: System-wide configuration
  2. $XDG_CONFIG_HOME/pixi/config.toml: XDG-compliant user-specific configuration
  3. $HOME/Library/Application Support/pixi/config.toml: User-specific configuration
  4. $PIXI_HOME/config.toml: Global configuration in the user home directory (PIXI_HOME defaults to ~/.pixi)
  5. your_project/.pixi/config.toml: Project-specific configuration
  6. Command line arguments (--tls-no-verify, --change-ps1=false, etc.)

Windows:
  1. C:\ProgramData\pixi\config.toml: System-wide configuration
  2. %APPDATA%\pixi\config.toml: User-specific configuration
  3. $PIXI_HOME\config.toml: Global configuration in the user home directory (PIXI_HOME defaults to %USERPROFILE%/.pixi)
  4. your_project\.pixi\config.toml: Project-specific configuration
  5. Command line arguments (--tls-no-verify, --change-ps1=false, etc.)

Note

The highest priority wins. If a configuration file is found in a higher priority location, the values from the configuration read from lower priority locations are overwritten.
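The priority behaviour described in this note can be sketched as a layered merge, read from lowest to highest priority. The option values below are invented for illustration; only the merge order reflects the behaviour described above.

```python
# Sketch of layered configuration: each higher-priority layer
# overwrites keys already set by lower-priority layers.

def merge_configs(layers: list[dict]) -> dict:
    merged: dict = {}
    for layer in layers:  # ordered lowest -> highest priority
        merged.update(layer)
    return merged

# hypothetical file contents, purely for illustration
system_cfg = {"tls-no-verify": False, "change-ps1": True}
user_cfg = {"change-ps1": False}
project_cfg = {"default-channels": ["conda-forge"]}

cfg = merge_configs([system_cfg, user_cfg, project_cfg])
print(cfg["change-ps1"])  # False: the user-level value wins over system
```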

Note

To find the locations where pixi looks for configuration files, run pixi with -vv.

"},{"location":"reference/pixi_configuration/#reference","title":"Reference","text":"Casing In Configuration

In pixi 0.20.1 and older, the global configuration used snake_case; this has been changed to kebab-case for consistency with the rest of the configuration. The old snake_case spelling is still supported for those older configuration options.

The following reference describes all available configuration options.

"},{"location":"reference/pixi_configuration/#default-channels","title":"default-channels","text":"

The default channels to select when running pixi init or pixi global install. This defaults to only conda-forge. config.toml

default-channels = [\"conda-forge\"]\n

Note

The default-channels are only used when initializing a new project. Once initialized the channels are used from the project manifest.

"},{"location":"reference/pixi_configuration/#change-ps1","title":"change-ps1","text":"

When set to false, the (pixi) prefix in the shell prompt is removed. This applies to the pixi shell subcommand. You can override this from the CLI with --change-ps1.

config.toml
change-ps1 = true\n
"},{"location":"reference/pixi_configuration/#tls-no-verify","title":"tls-no-verify","text":"

When set to true, the TLS certificates are not verified.

Warning

This is a security risk and should only be used for testing purposes or internal networks.

You can override this from the CLI with --tls-no-verify.

config.toml
tls-no-verify = false\n
"},{"location":"reference/pixi_configuration/#authentication-override-file","title":"authentication-override-file","text":"

Override from where the authentication information is loaded. Usually, we try to load authentication data from the keyring, using a JSON file only as a fallback. This option allows you to force the use of a JSON file. Read more in the authentication section. config.toml

authentication-override-file = \"/path/to/your/override.json\"\n

"},{"location":"reference/pixi_configuration/#detached-environments","title":"detached-environments","text":"

The directory where pixi stores the project environments, which would normally be placed in the .pixi/envs folder in a project's root. It doesn't affect the environments built for pixi global. The location of environments created for a pixi global installation can be controlled using the PIXI_HOME environment variable.

Warning

We recommend against using this because any environment created for a project is no longer placed in the same folder as the project. This creates a disconnect between the project and its environments, and manual cleanup of the environments is required when deleting the project.

However, in some cases this option can still be very useful.

This field can consist of two types of input: a boolean or a custom path.

config.toml

detached-environments = true\n
or: config.toml
detached-environments = \"/opt/pixi/envs\"\n

The environments will be stored in the cache directory when this option is true. When you specify a custom path the environments will be stored in that directory.

The resulting directory structure will look like this: config.toml

detached-environments = \"/opt/pixi/envs\"\n
/opt/pixi/envs\n\u251c\u2500\u2500 pixi-6837172896226367631\n\u2502   \u2514\u2500\u2500 envs\n\u2514\u2500\u2500 NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH\n    \u251c\u2500\u2500 envs # the runnable environments\n    \u2514\u2500\u2500 solve-group-envs # If there are solve groups\n
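The layout above can be illustrated with a small sketch. The hash function here (a truncated sha1 of the project path) is a hypothetical stand-in; pixi's actual hashing scheme is not documented in this section, so treat this as an illustration of the NAME_OF_PROJECT-HASH_OF_ORIGINAL_PATH pattern only.

```python
import hashlib

def detached_env_dir(base: str, project_name: str, project_path: str) -> str:
    # HYPOTHETICAL: sha1-of-path stands in for pixi's real hash of the
    # original project location; only the folder pattern is illustrated.
    digest = hashlib.sha1(project_path.encode()).hexdigest()[:16]
    return f"{base}/{project_name}-{digest}/envs"

path = detached_env_dir("/opt/pixi/envs", "myproject", "/home/user/myproject")
print(path)
```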

"},{"location":"reference/pixi_configuration/#pinning-strategy","title":"pinning-strategy","text":"

The strategy to use for pinning dependencies when running pixi add. The default is semver, but other strategies can be set, for example:

config.toml
pinning-strategy = \"no-pin\"\n
"},{"location":"reference/pixi_configuration/#mirrors","title":"mirrors","text":"

Configuration for conda channel mirrors; more info below.

config.toml
[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n    \"https://conda.anaconda.org/bioconda\",\n    # OCI registries are also supported\n    \"oci://ghcr.io/channel-mirrors/bioconda\",\n    \"https://prefix.dev/bioconda\",\n]\n
"},{"location":"reference/pixi_configuration/#repodata-config","title":"repodata-config","text":"

Configuration for repodata fetching. config.toml

[repodata-config]\n# disable fetching of jlap, bz2 or zstd repodata files.\n# This should only be used for specific old versions of artifactory and other non-compliant\n# servers.\ndisable-jlap = true  # don't try to download repodata.jlap\ndisable-bzip2 = true # don't try to download repodata.json.bz2\ndisable-zstd = true  # don't try to download repodata.json.zst\n

"},{"location":"reference/pixi_configuration/#pypi-config","title":"pypi-config","text":"

To set up defaults for the usage of PyPI registries, you can use the following configuration options:

config.toml
[pypi-config]\n# Main index url\nindex-url = \"https://pypi.org/simple\"\n# list of additional urls\nextra-index-urls = [\"https://pypi.org/simple2\"]\n# can be \"subprocess\" or \"disabled\"\nkeyring-provider = \"subprocess\"\n

index-url and extra-index-urls are not globals

Unlike pip, these settings (with the exception of keyring-provider) only modify the pixi.toml/pyproject.toml file and are not globally interpreted when not present in the manifest. This is because we want to keep the manifest file as complete and reproducible as possible.

"},{"location":"reference/pixi_configuration/#mirror-configuration","title":"Mirror configuration","text":"

You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors section of the configuration file and replace the original URL with the mirror URL.

To also include the original URL, you have to repeat it in the list of mirrors.

The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
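As a conceptual sketch (not pixi's implementation), the mirror lookup can be thought of as a prefix substitution on package URLs, trying each configured mirror in order:

```python
# Sketch: the original channel URL prefix is matched against the keys of
# the [mirrors] table, and each mirror in the list is tried in order.

MIRRORS = {
    "https://conda.anaconda.org/conda-forge": ["https://prefix.dev/conda-forge"],
}

def candidate_urls(package_url: str) -> list[str]:
    for original, mirrors in MIRRORS.items():
        if package_url.startswith(original):
            suffix = package_url[len(original):]
            return [mirror + suffix for mirror in mirrors]
    return [package_url]  # no mirror configured: use the original URL

print(candidate_urls(
    "https://conda.anaconda.org/conda-forge/noarch/repodata.json"
))
```

Because the first mirror in the list is tried first for repodata, the ordering of the list is what expresses trust and priority.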

You can also specify mirrors for an entire \"host\", e.g.

config.toml
[mirrors]\n\"https://conda.anaconda.org\" = [\n    \"https://prefix.dev/\"\n]\n

This will forward all requests for channels on anaconda.org to prefix.dev. Channels that are not currently mirrored on prefix.dev will fail in the above example.

"},{"location":"reference/pixi_configuration/#oci-mirrors","title":"OCI Mirrors","text":"

You can also specify mirrors on an OCI registry. There is a public mirror on the GitHub container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:

config.toml
[mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n

The GHCR mirror also contains bioconda packages. You can search the available packages on GitHub.

"},{"location":"reference/project_configuration/","title":"Configuration","text":"

The pixi.toml is the pixi project configuration file, also known as the project manifest.

A TOML file is structured into different tables. This document will explain the usage of the different tables. For more technical documentation check pixi on crates.io.

Tip

We also support the pyproject.toml file. It has the same structure as the pixi.toml file, except that you need to prefix the tables with tool.pixi instead of just the table name. For example, the [project] table becomes [tool.pixi.project]. There are also some small extras available in the pyproject.toml file; check out the pyproject.toml documentation for more information.

"},{"location":"reference/project_configuration/#the-project-table","title":"The project table","text":"

The minimally required information in the project table is:

[project]\nchannels = [\"conda-forge\"]\nname = \"project-name\"\nplatforms = [\"linux-64\"]\n
"},{"location":"reference/project_configuration/#name","title":"name","text":"

The name of the project.

name = \"project-name\"\n
"},{"location":"reference/project_configuration/#channels","title":"channels","text":"

This is a list that defines the channels used to fetch packages from. If you want to use channels hosted on anaconda.org, you only need to use the name of the channel directly.

channels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n

Channels situated on the file system are also supported with absolute file paths:

channels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n

To access private or public channels on prefix.dev or Quetz use the url including the hostname:

channels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"reference/project_configuration/#platforms","title":"platforms","text":"

Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lock file (pixi.lock).

platforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n

The available platforms are listed here: link

Special macOS behavior

macOS has two platforms: osx-64 for Intel Macs and osx-arm64 for Apple Silicon Macs. To support both, include both in your platforms list. Fallback: If osx-arm64 can't resolve, use osx-64. Running osx-64 on Apple Silicon uses Rosetta for Intel binaries.

"},{"location":"reference/project_configuration/#version-optional","title":"version (optional)","text":"

The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation for an explanation of what is allowed in a Version Spec.

version = \"1.2.3\"\n
"},{"location":"reference/project_configuration/#authors-optional","title":"authors (optional)","text":"

This is a list of authors of the project.

authors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"reference/project_configuration/#description-optional","title":"description (optional)","text":"

This should contain a short description of the project.

description = \"A simple description\"\n
"},{"location":"reference/project_configuration/#license-optional","title":"license (optional)","text":"

The license as a valid SPDX string (e.g. MIT AND Apache-2.0)

license = \"MIT\"\n
"},{"location":"reference/project_configuration/#license-file-optional","title":"license-file (optional)","text":"

Relative path to the license file.

license-file = \"LICENSE.md\"\n
"},{"location":"reference/project_configuration/#readme-optional","title":"readme (optional)","text":"

Relative path to the README file.

readme = \"README.md\"\n
"},{"location":"reference/project_configuration/#homepage-optional","title":"homepage (optional)","text":"

URL of the project homepage.

homepage = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#repository-optional","title":"repository (optional)","text":"

URL of the project source repository.

repository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"reference/project_configuration/#documentation-optional","title":"documentation (optional)","text":"

URL of the project documentation.

documentation = \"https://pixi.sh\"\n
"},{"location":"reference/project_configuration/#conda-pypi-map-optional","title":"conda-pypi-map (optional)","text":"

Mapping of a channel name or URL to the location of a mapping file, which can be a URL or a path. The mapping should be a JSON document of the form conda_name: pypi_package_name. Example:

local/robostack_mapping.json
{\n  \"jupyter-ros\": \"my-name-from-mapping\",\n  \"boltons\": \"boltons-pypi\"\n}\n

If conda-forge is not present in conda-pypi-map, pixi will use the prefix.dev mapping for it.

conda-pypi-map = { \"conda-forge\" = \"https://example.com/mapping\", \"https://repo.prefix.dev/robostack\" = \"local/robostack_mapping.json\"}\n
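The mapping file itself is plain JSON, so translating a conda name to its PyPI counterpart is just a dictionary lookup. A sketch using the example mapping above (this is not pixi's internal code):

```python
import json

# Contents of the local/robostack_mapping.json example above.
mapping = json.loads('{"jupyter-ros": "my-name-from-mapping", "boltons": "boltons-pypi"}')

def to_pypi_name(conda_name: str) -> str:
    # Fall back to the conda name when no mapping entry exists.
    return mapping.get(conda_name, conda_name)

print(to_pypi_name("boltons"))  # boltons-pypi
print(to_pypi_name("numpy"))    # numpy (no entry, name unchanged)
```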
"},{"location":"reference/project_configuration/#channel-priority-optional","title":"channel-priority (optional)","text":"

This is the setting for the priority of the channels in the solver step.

Options:

channel-priority = \"disabled\"\n

channel-priority = \"disabled\" is a security risk

Disabling channel priority may lead to unpredictable dependency resolutions. This is a possible security risk as it may lead to packages being installed from unexpected channels. It's advisable to maintain the default strict setting and order channels thoughtfully. If necessary, specify a channel directly for a dependency.

[project]\n# Putting conda-forge first solves most issues\nchannels = [\"conda-forge\", \"channel-name\"]\n[dependencies]\npackage = {version = \"*\", channel = \"channel-name\"}\n

"},{"location":"reference/project_configuration/#the-tasks-table","title":"The tasks table","text":"

Tasks are a way to automate certain custom commands in your project. For example, a lint or format step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run and are executed using the deno_task_shell.

[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends-on=\"simple\"}\nalias = { depends-on=[\"depending\"]}\ndownload = { cmd=\"curl -o file.txt https://example.com/file.txt\" , outputs=[\"file.txt\"]}\nbuild = { cmd=\"npm build\", cwd=\"frontend\", inputs=[\"frontend/package.json\", \"frontend/*.js\"]}\nrun = { cmd=\"python run.py $ARGUMENT\", env={ ARGUMENT=\"value\" }}\nformat = { cmd=\"black $INIT_CWD\" } # runs black where you run pixi run format\nclean-env = { cmd = \"python isolated.py\", clean-env = true} # Only on Unix!\n

You can modify this table using pixi task.
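The depends-on entries above form a small task graph: pixi runs a task's dependencies before the task itself. The resulting order can be sketched as a depth-first walk (an illustration only, not pixi's actual scheduler):

```python
# Task graph from the [tasks] example above: name -> tasks it depends on.
tasks = {
    "simple": [],
    "depending": ["simple"],
    "alias": ["depending"],
}

def run_order(name, seen=None):
    """Return the order in which tasks execute: dependencies first."""
    seen = seen if seen is not None else []
    for dep in tasks[name]:
        run_order(dep, seen)
    if name not in seen:
        seen.append(name)
    return seen

print(run_order("alias"))  # ['simple', 'depending', 'alias']
```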

Note

Specify different tasks for different platforms using the target table

Info

If you want to hide a task from showing up with pixi task list or pixi info, you can prefix the name with _. For example, if you want to hide depending, you can rename it to _depending.

"},{"location":"reference/project_configuration/#the-system-requirements-table","title":"The system-requirements table","text":"

The system requirements are used to define minimal system specifications used during dependency resolution.

For example, we can define a unix system with a specific minimal libc version.

[system-requirements]\nlibc = \"2.28\"\n
or make the project depend on a specific version of cuda:
[system-requirements]\ncuda = \"12\"\n

The options are:

More information in the system requirements documentation.

"},{"location":"reference/project_configuration/#the-pypi-options-table","title":"The pypi-options table","text":"

The pypi-options table is used to define options that are specific to PyPI registries. These options can be specified either at the root level, which adds them to the default options feature, or at the feature level, which creates a union of these options when the features are included in the environment.

The options that can be defined are:

These options are explained in the sections below. Most of them are taken directly, or with slight modifications, from the uv settings. If an option you need is missing, feel free to create an issue requesting it.

"},{"location":"reference/project_configuration/#alternative-registries","title":"Alternative registries","text":"

Strict Index Priority

Unlike pip, because we make use of uv, we have a strict index priority. This means that the first index on which a package can be found is used, and resolution is limited to the candidates present on that index. The order is determined by the order in the TOML file, where the extra-index-urls are preferred over the index-url. Read more about this in the uv docs.

Often you might want to use an alternative or extra index for your project. This can be done by adding the pypi-options table to your pixi.toml file. The following options are available:

An example:

[pypi-options]\nindex-url = \"https://pypi.org/simple\"\nextra-index-urls = [\"https://example.com/simple\"]\nfind-links = [{path = './links'}]\n

There are some examples in the pixi repository that make use of this feature.

Authentication Methods

To read about existing authentication methods for private registries, please check the PyPI Authentication section.

"},{"location":"reference/project_configuration/#no-build-isolation","title":"No Build Isolation","text":"

Even though build isolation is a good default, one can choose not to isolate the build for a certain package name, which allows the build to access the pixi environment. This is convenient if you want to use torch or something similar in your build process.

[dependencies]\npytorch = \"2.4.0\"\n\n[pypi-options]\nno-build-isolation = [\"detectron2\"]\n\n[pypi-dependencies]\ndetectron2 = { git = \"https://github.com/facebookresearch/detectron2.git\", rev = \"5b72c27ae39f99db75d43f18fd1312e1ea934e60\"}\n

Conda dependencies define the build environment

To use no-build-isolation effectively, use conda dependencies to define the build environment. These are installed before the PyPI dependencies are resolved, so they are available during the build process. In the example above, adding torch as a PyPI dependency would be ineffective, as it would not yet be installed during the PyPI resolution phase.

"},{"location":"reference/project_configuration/#index-strategy","title":"Index Strategy","text":"

The strategy to use when resolving against multiple index URLs. Description modified from the uv documentation:

By default, uv, and thus pixi, will stop at the first index on which a given package is available, and limit resolution to the candidates present on that first index (first-match). This prevents dependency-confusion attacks, whereby an attacker uploads a malicious package under the same name to a secondary index.
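The first-match behavior can be sketched with a toy model of two indexes (this is not uv's implementation; index names and packages are made up): once a package is found on a higher-priority index, lower-priority indexes are never consulted for it.

```python
# Two simulated package indexes, in priority order (toy model of first-match).
indexes = [
    {"name": "internal", "packages": {"mylib": ["1.0", "1.1"]}},
    {"name": "public", "packages": {"mylib": ["2.0"], "requests": ["2.31.0"]}},
]

def candidates(package):
    """Return versions from the FIRST index that carries the package."""
    for index in indexes:
        if package in index["packages"]:
            return index["name"], index["packages"][package]
    raise LookupError(package)

# 'mylib' resolves only against the internal index, even though the
# public index carries a (possibly malicious) 2.0 under the same name.
print(candidates("mylib"))     # ('internal', ['1.0', '1.1'])
print(candidates("requests"))  # ('public', ['2.31.0'])
```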

One index strategy per environment

Only one index-strategy can be defined per environment or solve-group, otherwise, an error will be shown.

"},{"location":"reference/project_configuration/#possible-values","title":"Possible values:","text":"

PyPI only

The index-strategy only changes PyPI package resolution and not conda package resolution.

"},{"location":"reference/project_configuration/#the-dependencies-tables","title":"The dependencies table(s)","text":"

This section defines what dependencies you would like to use for your project.

There are multiple dependencies tables. The default is [dependencies], which are dependencies that are shared across platforms.

Dependencies are defined using a VersionSpec. A VersionSpec combines a Version with an optional operator.

Some examples are:

# Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to (not including) 1.3.0\npackage1 = \"~=1.2.3\"\n# Use greater than 1.2, up to and including 1.4\npackage2 = \">1.2,<=1.4\"\n# Greater than or equal to 1.2.3, or lower than 1.0.0\npackage3 = \">=1.2.3|<1.0.0\"\n
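To make the semantics concrete, here is a deliberately simplified matcher for the subset of specifiers shown above (comparison operators, `,` as AND, `|` as OR). The real conda VersionSpec also handles ~=, wildcards, build strings and more, so treat this purely as an illustration:

```python
def parse_version(v):
    # Simplification: only dotted numeric versions like "1.2.3".
    return tuple(int(part) for part in v.split("."))

OPS = {
    ">=": lambda a, b: a >= b,
    "<=": lambda a, b: a <= b,
    "==": lambda a, b: a == b,
    ">":  lambda a, b: a > b,
    "<":  lambda a, b: a < b,
}

def matches(spec, version):
    """True if `version` satisfies `spec`: `|` is OR, `,` is AND."""
    v = parse_version(version)
    for alternative in spec.split("|"):        # any alternative may match
        ok = True
        for clause in alternative.split(","):  # all clauses must match
            clause = clause.strip()
            for op in (">=", "<=", "==", ">", "<"):
                if clause.startswith(op):
                    ok = ok and OPS[op](v, parse_version(clause[len(op):]))
                    break
            else:
                ok = ok and v == parse_version(clause)  # bare version: exact
        if ok:
            return True
    return False

print(matches(">1.2,<=1.4", "1.3"))      # True
print(matches(">=1.2.3|<1.0.0", "0.9"))  # True
```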

Dependencies can also be defined as a mapping, in which case a matchspec is used:

package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n

Tip

Dependencies can easily be added using the pixi add command. Running add for an existing dependency will replace it with the newest version it can use.

Note

To specify different dependencies for different platforms use the target table

"},{"location":"reference/project_configuration/#dependencies","title":"dependencies","text":"

Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table if you use anything other than conda-forge. Even if a dependency defines its own channel, that channel should still be added to the project.channels list.

[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"reference/project_configuration/#pypi-dependencies","title":"pypi-dependencies","text":"Details regarding the PyPI integration

We use uv, which is a new fast pip replacement written in Rust.

We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This disallows uv from installing these dependencies itself and ensures it uses the exact versions of these packages in the resolution. This is unique amongst conda-based package managers, which usually just call pip from a subprocess.

The uv resolution is included in the lock file directly.

Pixi directly supports depending on PyPI packages; the PyPA calls a distributed package a 'distribution'. There are source and binary distributions, both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.

Important considerations

"},{"location":"reference/project_configuration/#version-specification","title":"Version specification:","text":"

These dependencies don't follow the conda matchspec specification. The version is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:

[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\nfastapi = \"*\"  # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\npandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\n\n# git dependencies\n# With ssh\nflask = { git = \"ssh://git@github.com/pallets/flask\" }\n# With https and a specific revision\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\n# TODO: will support later -> branch = '' or tag = '' to specify a branch or tag\n\n# You can also directly add a source dependency from a path, tip: keep this relative to the root of the project.\nminimal-project = { path = \"./minimal-project\", editable = true}\n\n# You can also use a direct url, to either a `.tar.gz` or `.zip`, or a `.whl` file\nclick = { url = \"https://github.com/pallets/click/releases/download/8.1.7/click-8.1.7-py3-none-any.whl\" }\n\n# You can also use just the default git repo; it will check out the default branch\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\n
"},{"location":"reference/project_configuration/#full-specification","title":"Full specification","text":"

The full specification of PyPI dependencies that pixi supports can be split into the following fields:

"},{"location":"reference/project_configuration/#extras","title":"extras","text":"

A list of extras to install with the package, e.g. [\"dataframe\", \"sql\"]. The extras field works with all other version specifiers, as it is an addition to the version specifier.

pandas = { version = \">=1.0.0\", extras = [\"dataframe\", \"sql\"]}\npytest = { git = \"URL\", extras = [\"dev\"]}\nblack = { url = \"URL\", extras = [\"cli\"]}\nminimal-project = { path = \"./minimal-project\", editable = true, extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#version","title":"version","text":"

The version of the package to install, e.g. \">=1.0.0\", or *, which stands for any version (this is pixi-specific). version is the default field, so using no inline table ({}) will default to this field.

py-rattler = \"*\"\nruff = \"~=1.0.0\"\npytest = {version = \"*\", extras = [\"dev\"]}\n
"},{"location":"reference/project_configuration/#git","title":"git","text":"

A git repository to install from. This supports both https:// and ssh:// urls.

Use git in combination with rev or subdirectory:

# Note don't forget the `ssh://` or `https://` prefix!\npytest = { git = \"https://github.com/pytest-dev/pytest.git\"}\nrequests = { git = \"https://github.com/psf/requests.git\", rev = \"0106aced5faa299e6ede89d1230bd6784f2c3660\" }\npy-rattler = { git = \"ssh://git@github.com/mamba-org/rattler.git\", subdirectory = \"py-rattler\" }\n
"},{"location":"reference/project_configuration/#path","title":"path","text":"

A local path to install from, e.g. path = \"./path/to/package\". We advise keeping your path projects inside the project and using a relative path.

Set editable to true to install in editable mode. This is highly recommended, as it is hard to reinstall the package if you're not using editable mode. E.g. editable = true

minimal-project = { path = \"./minimal-project\", editable = true}\n
"},{"location":"reference/project_configuration/#url","title":"url","text":"

A URL to install a wheel or sdist directly from.

pandas = {url = \"https://files.pythonhosted.org/packages/3d/59/2afa81b9fb300c90531803c0fd43ff4548074fa3e8d0f747ef63b3b5e77a/pandas-2.2.1.tar.gz\"}\n
Did you know you can use: add --pypi?

Use the --pypi flag with the add command to quickly add PyPI packages from the CLI. E.g. pixi add --pypi flask

This does not support all the features of the pypi-dependencies table yet.

"},{"location":"reference/project_configuration/#source-dependencies-sdist","title":"Source dependencies (sdist)","text":"

The Source Distribution Format is a source-based format (sdist for short) that a package can provide alongside the binary wheel format. Because these distributions need to be built, they need a Python executable to do so. This is why Python needs to be present in the conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++ based Python bindings. Think, for example, of Python SDL2 bindings depending on the C library SDL2. To help build these dependencies, we activate the conda environment that includes these PyPI dependencies before resolving. This way, when a source distribution depends on gcc, for example, it is used from the conda environment instead of the system.

"},{"location":"reference/project_configuration/#host-dependencies","title":"host-dependencies","text":"

This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.

[host-dependencies]\npython = \"~=3.10.3\"\n

Typical examples of host dependencies are:

"},{"location":"reference/project_configuration/#build-dependencies","title":"build-dependencies","text":"

This table contains dependencies that are needed to build the project. Unlike dependencies and host-dependencies, these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.

[build-dependencies]\ncmake = \"~=3.24\"\n

Typical examples of build dependencies are:

Info

The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.

For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64 and your host platform is linux-64.

"},{"location":"reference/project_configuration/#the-activation-table","title":"The activation table","text":"

The activation table is used for specialized activation operations that need to be run when the environment is activated.

There are two types of activation operations a user can modify in the manifest:

These activation operations will be run before the pixi run and pixi shell commands.

Note

The activation operations are run by the system shell interpreter, as they run before an environment is available. This means that they run as cmd.exe on Windows and bash on Linux and macOS (Unix). Only .sh, .bash and .bat files are supported.

And the environment variables are set in the shell that is running the activation script, thus take note when using e.g. $ or %.

If you have scripts or environment variables per platform, use the target table.

[activation]\nscripts = [\"env_setup.sh\"]\nenv = { ENV_VAR = \"value\" }\n\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n\n[target.linux-64.activation.env]\nENV_VAR = \"linux-value\"\n\n# You can also reference existing environment variables, but this has\n# to be done separately for unix-like operating systems and Windows\n[target.unix.activation.env]\nENV_VAR = \"$OTHER_ENV_VAR/unix-value\"\n\n[target.win.activation.env]\nENV_VAR = \"%OTHER_ENV_VAR%\\\\windows-value\"\n
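For reference, an activation script such as the env_setup.sh referenced above might look like the following (hypothetical contents; since the script is sourced by the system shell before the environment exists, it is safest to stick to plain variable exports):

```shell
#!/bin/bash
# env_setup.sh -- hypothetical activation script referenced in [activation].
# Sourced by the system shell (bash on unix) before `pixi run` / `pixi shell`,
# so keep it to plain exports and avoid interactive commands.
export PROJECT_DATA_DIR="${PWD}/data"
export ENV_VAR="value"
```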
"},{"location":"reference/project_configuration/#the-target-table","title":"The target table","text":"

The target table allows for platform-specific configuration, letting you define different sets of tasks or dependencies per platform.

The target table is currently implemented for the following sub-tables:

The target table is defined using [target.PLATFORM.SUB-TABLE]. E.g [target.linux-64.dependencies]

The platform can be any of:

The sub-table can be any of the specified above.

To make this a bit clearer, let's look at the example below. Currently, pixi combines the top-level tables like dependencies with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, cmake is used for all targets, but on osx-64 or osx-arm64 a different version of python will be selected.

[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n

Here are some more examples:

[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
"},{"location":"reference/project_configuration/#the-feature-and-environments-tables","title":"The feature and environments tables","text":"

The feature table allows you to define features that can be used to create different [environments]. The [environments] table allows you to define different environments. The design is explained in this design document.

Simplest example
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n

This will create an environment called test that has pytest installed.

"},{"location":"reference/project_configuration/#the-feature-table","title":"The feature table","text":"

The feature table allows you to define the following fields per feature.

These tables are also all available without the feature prefix. When used without the prefix, we call them the default feature. This is a protected name you cannot use for your own feature.

Cuda feature table example
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\n# Results in:  [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\nchannels = [\"nvidia\"]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Cuda feature table example but written as separate tables
[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"reference/project_configuration/#the-environments-table","title":"The environments table","text":"

The [environments] table allows you to define environments that are created using the features defined in the [feature] tables.

The environments table is defined using the following fields:

Full environments table specification

[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = {features = [\"lint\"], no-default-feature = true}\n
As shown in the example above, in the simplest of cases, it is possible to define an environment only by listing its features:

Simplest example
[environments]\ntest = [\"test\"]\n

is equivalent to

Simplest example expanded
[environments]\ntest = {features = [\"test\"]}\n

When an environment comprises several features (including the default feature):

- The activation and tasks of the environment are the union of the activation and tasks of all its features.
- The dependencies and pypi-dependencies of the environment are the union of the dependencies and pypi-dependencies of all its features. This means that if several features define a requirement for the same package, both requirements will be combined. Beware of conflicting requirements across features added to the same environment.
- The system-requirements of the environment is the union of the system-requirements of all its features. If multiple features specify a requirement for the same system package, the highest version is chosen.
- The channels of the environment is the union of the channels of all its features. Channel priorities can be specified in each feature, to ensure channels are considered in the right order in the environment.
- The platforms of the environment is the intersection of the platforms of all its features. Be aware that the platforms supported by a feature (including the default feature) will be considered as the platforms defined at project level (unless overridden in the feature). This means that it is usually a good idea to set the project platforms to all platforms it can support across its environments.
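The combination rules can be sketched in a few lines of Python (illustrative only; feature names and versions are made up, and pixi's real merging also handles channel priorities, system-requirement versions, and conflict detection):

```python
# Two features, as they might appear in [feature.X] tables (toy data).
default = {"dependencies": {"python": "3.11"},
           "platforms": {"linux-64", "osx-arm64", "win-64"}}
cuda = {"dependencies": {"cudatoolkit": "12"},
        "platforms": {"linux-64", "win-64"}}

def combine(*features):
    deps = {}
    for f in features:
        deps.update(f["dependencies"])  # dependencies: union of all features
    # platforms: intersection of all features
    platforms = set.intersection(*(f["platforms"] for f in features))
    return {"dependencies": deps, "platforms": platforms}

env = combine(default, cuda)
print(sorted(env["dependencies"]))  # ['cudatoolkit', 'python']
print(sorted(env["platforms"]))     # ['linux-64', 'win-64']
```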

"},{"location":"reference/project_configuration/#global-configuration","title":"Global configuration","text":"

The global configuration options are documented in the global configuration section.

"},{"location":"switching_from/conda/","title":"Transitioning from the conda or mamba to pixi","text":"

Welcome to the guide designed to ease your transition from conda or mamba to pixi. This document compares key commands and concepts between these tools, highlighting pixi's unique approach to managing environments and packages. With pixi, you'll experience a project-based workflow, enhancing your development process, and allowing for easy sharing of your work.

"},{"location":"switching_from/conda/#why-pixi","title":"Why Pixi?","text":"

Pixi builds upon the foundation of the conda ecosystem, introducing a project-centric approach rather than focusing solely on environments. This shift towards projects offers a more organized and efficient way to manage dependencies and run code, tailored to modern development practices.

"},{"location":"switching_from/conda/#key-differences-at-a-glance","title":"Key Differences at a Glance","text":"

| Task | Conda/Mamba | Pixi |
| --- | --- | --- |
| Installation | Requires an installer | Download and add to path (See installation) |
| Creating an Environment | conda create -n myenv -c conda-forge python=3.8 | pixi init myenv followed by pixi add python=3.8 |
| Activating an Environment | conda activate myenv | pixi shell within the project directory |
| Deactivating an Environment | conda deactivate | exit from the pixi shell |
| Running a Task | conda run -n myenv python my_program.py | pixi run python my_program.py (See run) |
| Installing a Package | conda install numpy | pixi add numpy |
| Uninstalling a Package | conda remove numpy | pixi remove numpy |

No base environment

Conda has a base environment, which is the default environment when you start a new shell. Pixi does not have a base environment; it requires you to install the tools you need in the project or globally. Using pixi global install bat will install bat in a global environment, which is not the same as the base environment in conda.

Activating pixi environment in the current shell

For some advanced use-cases, you can activate the environment in the current shell. This uses pixi shell-hook, which prints the activation script that can be used to activate the environment in the current shell without pixi itself.

~/myenv > eval \"$(pixi shell-hook)\"\n

"},{"location":"switching_from/conda/#environment-vs-project","title":"Environment vs Project","text":"

Conda and mamba focus on managing environments, while pixi emphasizes projects. In pixi, a project is a folder containing a manifest (pixi.toml/pyproject.toml) file that describes the project, a pixi.lock lock file that describes the exact dependencies, and a .pixi folder that contains the environment.

This project-centric approach allows for easy sharing and collaboration, as the project folder contains all the necessary information to recreate the environment. It manages more than one environment for more than one platform in a single project, and allows for easy switching between them. (See multiple environments)

"},{"location":"switching_from/conda/#global-environments","title":"Global environments","text":"

conda installs all environments in one global location. If this is important to you for filesystem reasons, you can use the detached-environments feature of pixi.

pixi config set detached-environments true\n# or a specific location\npixi config set detached-environments /path/to/envs\n
This doesn't allow you to activate the environments using pixi shell -n but it will make the installation of the environments go to the same folder.

pixi does have the pixi global command to install tools on your machine. (See global) This is not a replacement for conda but works the same as pipx and condax: it creates a single isolated environment for the given requirement and installs the binaries into the global path.

pixi global install bat\nbat pixi.toml\n

Never install pip with pixi global

Installations with pixi global get their own isolated environment. Installing pip with pixi global will create a new isolated environment with its own pip binary. Using that pip binary will install packages into that isolated environment, making them unreachable from anywhere, as you can't activate it.

"},{"location":"switching_from/conda/#automated-switching","title":"Automated switching","text":"

With pixi you can import environment.yml files into a pixi project. (See import)

pixi init --import environment.yml\n
This will create a new project with the dependencies from the environment.yml file.

Exporting your environment

If you are working with conda users or systems, you can export your environment to an environment.yml file to share it.

pixi project export conda\n
Additionally you can export a conda explicit specification.

"},{"location":"switching_from/conda/#troubleshooting","title":"Troubleshooting","text":"

Encountering issues? Here are solutions to some common problems you may hit when you are used to the conda workflow:

"},{"location":"switching_from/poetry/","title":"Transitioning from poetry to pixi","text":"

Welcome to the guide designed to ease your transition from poetry to pixi. This document compares key commands and concepts between these tools, highlighting pixi's unique approach to managing environments and packages. With pixi, you'll experience a project-based workflow similar to poetry while including the conda ecosystem and allowing for easy sharing of your work.

"},{"location":"switching_from/poetry/#why-pixi","title":"Why Pixi?","text":"

Poetry is most likely the closest tool to pixi in terms of project management in the Python ecosystem. On top of the PyPI ecosystem, pixi adds the power of the conda ecosystem, allowing for more flexible and powerful environment management.

"},{"location":"switching_from/poetry/#quick-look-at-the-differences","title":"Quick look at the differences","text":"

| Task | Poetry | Pixi |
| --- | --- | --- |
| Creating an Environment | poetry new myenv | pixi init myenv |
| Running a Task | poetry run which python | pixi run which python; pixi uses a built-in cross-platform shell for run where poetry uses your shell. |
| Installing a Package | poetry add numpy | pixi add numpy adds the conda variant. pixi add --pypi numpy adds the PyPI variant. |
| Uninstalling a Package | poetry remove numpy | pixi remove numpy removes the conda variant. pixi remove --pypi numpy removes the PyPI variant. |
| Building a package | poetry build | We've yet to implement package building and publishing |
| Publishing a package | poetry publish | We've yet to implement package building and publishing |
| Reading the pyproject.toml | [tool.poetry] | [tool.pixi] |
| Defining dependencies | [tool.poetry.dependencies] | [tool.pixi.dependencies] for conda, [tool.pixi.pypi-dependencies] or [project.dependencies] for PyPI dependencies |
| Dependency definition | numpy = \"^1.2.3\"; numpy = \"~1.2.3\"; numpy = \"*\" | numpy = \">=1.2.3 <2.0.0\"; numpy = \">=1.2.3 <1.3.0\"; numpy = \"*\" |
| Lock file | poetry.lock | pixi.lock |
| Environment directory | ~/.cache/pypoetry/virtualenvs/myenv | ./.pixi; defaults to the project folder, move this using the detached-environments |

"},{"location":"switching_from/poetry/#support-both-poetry-and-pixi-in-my-project","title":"Support both poetry and pixi in my project","text":"

You can allow users to use both poetry and pixi in the same project; they will not touch each other's parts of the configuration or system. It's best to duplicate the dependencies, essentially making an exact copy of tool.poetry.dependencies into tool.pixi.pypi-dependencies. Make sure that python is only defined in tool.pixi.dependencies and not in tool.pixi.pypi-dependencies.
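A minimal sketch of what that duplication could look like (the package names and version specifiers are illustrative assumptions, not taken from this guide):

```toml
# Poetry's view of the dependencies
[tool.poetry.dependencies]
python = "^3.11"
numpy = ">=1.26,<2"

# The same PyPI dependencies duplicated for pixi,
# with python moved to the conda section instead.
[tool.pixi.dependencies]
python = ">=3.11"

[tool.pixi.pypi-dependencies]
numpy = ">=1.26,<2"
```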

Mixing pixi and poetry

It's possible to use poetry in pixi environments, but this is not recommended. Pixi supports PyPI dependencies in a different way than poetry does, and mixing them can lead to unexpected behavior. As you can only use one package manager at a time, it's best to stick to one.

If using poetry on top of a pixi project, you'll always need to install the poetry environment after the pixi environment, and let pixi handle the Python and poetry installation.

"},{"location":"tutorials/python/","title":"Tutorial: Doing Python development with Pixi","text":"

In this tutorial, we will show you how to create a simple Python project with pixi and demonstrate some of the features that pixi provides that are currently not part of pdm, poetry, etc.

"},{"location":"tutorials/python/#why-is-this-useful","title":"Why is this useful?","text":"

Pixi builds upon the conda ecosystem, which allows you to create a Python environment with all the dependencies you need. This is especially useful when you are working with multiple Python interpreters and bindings to C and C++ libraries. For example, GDAL from PyPI does not have binary C dependencies, but the conda package does. On the other hand, some packages are only available through PyPI, which pixi can also install for you. The best of both worlds, so let's give it a go!

"},{"location":"tutorials/python/#pixitoml-and-pyprojecttoml","title":"pixi.toml and pyproject.toml","text":"

We support two manifest formats: pyproject.toml and pixi.toml. In this tutorial, we will use the pyproject.toml format because it is the most common format for Python projects.

"},{"location":"tutorials/python/#lets-get-started","title":"Let's get started","text":"

Let's start out by making a directory and creating a new pyproject.toml file.

pixi init pixi-py --format pyproject\n

This gives you the following pyproject.toml:

[project]\nname = \"pixi-py\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [{name = \"Tim de Jager\", email = \"tim@prefix.dev\"}]\nrequires-python = \">= 3.11\"\ndependencies = []\n\n[build-system]\nbuild-backend = \"hatchling.build\"\nrequires = [\"hatchling\"]\n\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\"]\n\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n\n[tool.pixi.tasks]\n

Let's add the Python project to the tree:

Linux & macOSWindows
cd pixi-py # move into the project directory\nmkdir pixi_py\ntouch pixi_py/__init__.py\n
cd pixi-py\nmkdir pixi_py\ntype nul > pixi_py\\__init__.py\n

We now have the following directory structure:

.\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 __init__.py\n\u2514\u2500\u2500 pyproject.toml\n

We've used a flat layout here, but pixi supports both flat and src layouts.
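For reference, the equivalent src layout (not what the command above generates) would look like this:

```
.
├── src
│   └── pixi_py
│       └── __init__.py
└── pyproject.toml
```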

"},{"location":"tutorials/python/#whats-in-the-pyprojecttoml","title":"What's in the pyproject.toml?","text":"

Okay, so let's have a look at what sections have been added and how we can modify the pyproject.toml.

These first entries were added to the pyproject.toml file:

# Main pixi entry\n[tool.pixi.project]\nchannels = [\"conda-forge\"]\n# This is your machine platform by default\nplatforms = [\"osx-arm64\"]\n

The channels and platforms are added to the [tool.pixi.project] section. Channels like conda-forge manage packages similarly to PyPI but allow for packages across different languages. The platforms keyword determines which platforms the project supports.

The pixi_py package itself is added as an editable dependency. This means that the package is installed in editable mode, so you can make changes to the package and see the changes reflected in the environment, without having to re-install the environment.

# Editable installs\n[tool.pixi.pypi-dependencies]\npixi-py = { path = \".\", editable = true }\n

In pixi, unlike other package managers, this is explicitly stated in the pyproject.toml file. The main reason is that this lets you choose which environments this package should be included in.

"},{"location":"tutorials/python/#managing-both-conda-and-pypi-dependencies-in-pixi","title":"Managing both conda and PyPI dependencies in pixi","text":"

Our projects usually depend on other packages.

$ pixi add black\nAdded black\n

This will result in the following addition to the pyproject.toml:

# Dependencies\n[tool.pixi.dependencies]\nblack = \">=24.4.2,<24.5\"\n

But we can also be strict about the version that should be used with pixi add black=24, resulting in

[tool.pixi.dependencies]\nblack = \"24.*\"\n

Now, let's add some optional dependencies:

pixi add --pypi --feature test pytest\n

Which results in the following fields added to the pyproject.toml:

[project.optional-dependencies]\ntest = [\"pytest\"]\n

After we have added the optional dependencies to the pyproject.toml, pixi automatically creates a feature, which can contain a collection of dependencies, tasks, channels, and more.

Sometimes there are packages that aren't available on conda channels but are published on PyPI. We can add these as well, which pixi will solve together with the default dependencies.

$ pixi add black --pypi\nAdded black\nAdded these as pypi-dependencies.\n

which results in the addition to the dependencies key in the pyproject.toml

dependencies = [\"black\"]\n

When using pypi-dependencies, you can make use of the optional-dependencies (extras) that other packages make available. For example, black provides a cli extra, which can be added with the --pypi flag:

$ pixi add black[cli] --pypi\nAdded black[cli]\nAdded these as pypi-dependencies.\n

which updates the dependencies entry to

dependencies = [\"black[cli]\"]\n
Optional dependencies in pixi.toml

This tutorial focuses on the use of the pyproject.toml, but in case you're curious, the pixi.toml would contain the following entry after the installation of a PyPI package including an optional dependency:

[pypi-dependencies]\nblack = { version = \"*\", extras = [\"cli\"] }\n

"},{"location":"tutorials/python/#installation-pixi-install","title":"Installation: pixi install","text":"

Now let's install the project with pixi install:

$ pixi install\n\u2714 Project in /path/to/pixi-py is ready to use!\n

We now have a new directory called .pixi in the project root. This directory contains the environment that was created when we ran pixi install. The environment is a conda environment that contains the dependencies that we specified in the pyproject.toml file. We can also install the test environment with pixi install -e test. We can use these environments for executing code.

We also have a new file called pixi.lock in the project root. This file contains the exact versions of the dependencies that were installed in the environment across platforms.

"},{"location":"tutorials/python/#whats-in-the-environment","title":"What's in the environment?","text":"

Using pixi list, you can see what's in the environment; this is essentially a nicer view of the lock file:

$ pixi list\nPackage          Version       Build               Size       Kind   Source\nbzip2            1.0.8         h93a5062_5          119.5 KiB  conda  bzip2-1.0.8-h93a5062_5.conda\nblack            24.4.2                            3.8 MiB    pypi   black-24.4.2-cp312-cp312-win_amd64.http.whl\nca-certificates  2024.2.2      hf0a4a13_0          152.1 KiB  conda  ca-certificates-2024.2.2-hf0a4a13_0.conda\nlibexpat         2.6.2         hebf3989_0          62.2 KiB   conda  libexpat-2.6.2-hebf3989_0.conda\nlibffi           3.4.2         h3422bc3_5          38.1 KiB   conda  libffi-3.4.2-h3422bc3_5.tar.bz2\nlibsqlite        3.45.2        h091b4b1_0          806 KiB    conda  libsqlite-3.45.2-h091b4b1_0.conda\nlibzlib          1.2.13        h53f4e23_5          47 KiB     conda  libzlib-1.2.13-h53f4e23_5.conda\nncurses          6.4.20240210  h078ce10_0          801 KiB    conda  ncurses-6.4.20240210-h078ce10_0.conda\nopenssl          3.2.1         h0d3ecfb_1          2.7 MiB    conda  openssl-3.2.1-h0d3ecfb_1.conda\npython           3.12.3        h4a7b5fc_0_cpython  12.6 MiB   conda  python-3.12.3-h4a7b5fc_0_cpython.conda\nreadline         8.2           h92ec313_1          244.5 KiB  conda  readline-8.2-h92ec313_1.conda\ntk               8.6.13        h5083fa2_1          3 MiB      conda  tk-8.6.13-h5083fa2_1.conda\ntzdata           2024a         h0c530f3_0          117 KiB    conda  tzdata-2024a-h0c530f3_0.conda\npixi-py          0.1.0                                        pypi   . (editable)\nxz               5.2.6         h57fd34a_0          230.2 KiB  conda  xz-5.2.6-h57fd34a_0.tar.bz2\n

Python

The Python interpreter is also installed in the environment. The interpreter version is read from the requires-python field in the pyproject.toml file and determines which Python version to install. This way, pixi automatically manages/bootstraps the Python interpreter for you, so no more brew, apt, or other system install steps.

Here, you can see the different conda and PyPI packages listed. As you can see, the pixi-py package that we are working on is installed in editable mode. Every environment in pixi is isolated but reuses files that are hard-linked from a central cache directory. This means that you can have multiple environments with the same packages but only have the individual files stored once on disk.
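The hard-link behavior can be illustrated with a small, self-contained Python sketch. This demonstrates the filesystem mechanism pixi relies on, not pixi's actual cache code; the file names are made up:

```python
import os
import tempfile

# Simulate a "central cache" file and an "environment" entry that
# hard-links to it instead of copying the bytes.
cache_dir = tempfile.mkdtemp()
cached_file = os.path.join(cache_dir, "numpy-core.so")  # hypothetical name
with open(cached_file, "wb") as f:
    f.write(b"\x00" * 1024)

env_file = os.path.join(cache_dir, "env-numpy-core.so")
os.link(cached_file, env_file)  # hard link: same inode, no extra data on disk

# Both paths point at the same underlying file...
assert os.stat(cached_file).st_ino == os.stat(env_file).st_ino
# ...and the link count reflects the two directory entries.
print(os.stat(cached_file).st_nlink)  # 2
```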

We can create the default and test environments based on our own test feature from the optional dependencies:

pixi project environment add default --solve-group default\npixi project environment add test --feature test --solve-group default\n

Which results in:

# Environments\n[tool.pixi.environments]\ndefault = { solve-group = \"default\" }\ntest = { features = [\"test\"], solve-group = \"default\" }\n
Solve Groups

Solve groups are a way to group dependencies together. This is useful when you have multiple environments that share the same dependencies. For example, maybe pytest is a dependency that influences the dependencies of the default environment. By putting these in the same solve group, you ensure that the versions in test and default are exactly the same.

The default environment is created when you run pixi install. The test environment is created from the optional dependencies in the pyproject.toml file. You can execute commands in this environment with e.g. pixi run -e test python.

"},{"location":"tutorials/python/#getting-code-to-run","title":"Getting code to run","text":"

Let's add some code to the pixi-py package. We will add a new function to the pixi_py/__init__.py file:

from rich import print\n\ndef hello():\n    return \"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\"\n\ndef say_hello():\n    print(*hello())\n

Now add the rich dependency from PyPI using: pixi add --pypi rich.

Let's see if this works by running:

pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n
Slow?

This might be slow (around 2 minutes) the first time because pixi installs the project, but it will be near-instant the second time.

Pixi runs its self-installed Python interpreter. Then, we are importing the pixi_py package, which is installed in editable mode. The code calls the say_hello function that we just added. And it works! Cool!

"},{"location":"tutorials/python/#testing-this-code","title":"Testing this code","text":"

Okay, so let's add a test for this function. Let's add a tests/test_me.py file in the root of the project.

Giving us the following project structure:

.\n\u251c\u2500\u2500 pixi.lock\n\u251c\u2500\u2500 pixi_py\n\u2502\u00a0\u00a0 \u2514\u2500\u2500 __init__.py\n\u251c\u2500\u2500 pyproject.toml\n\u2514\u2500\u2500 tests/test_me.py\n
from pixi_py import hello\n\ndef test_pixi_py():\n    assert hello() == (\"Hello, [bold magenta]World[/bold magenta]!\", \":vampire:\")\n

Let's add an easy task for running the tests.

$ pixi task add --feature test test \"pytest\"\n\u2714 Added task `test`: pytest .\n

Pixi has a task system to make it easy to run commands, similar to npm scripts or something you would specify in a Justfile.
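As a sketch, the pixi task add command above should leave an entry like the following in the pyproject.toml (the exact table name under the test feature is an assumption, not shown in this tutorial):

```toml
[tool.pixi.feature.test.tasks]
test = "pytest"
```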

Pixi tasks

Tasks are a powerful pixi feature that run in a cross-platform shell. You can do caching, define dependencies between tasks, and more. Read more about tasks in the tasks section.

$ pixi r test\n\u2728 Pixi task (test): pytest .\n================================================================================================= test session starts =================================================================================================\nplatform darwin -- Python 3.12.2, pytest-8.1.1, pluggy-1.4.0\nrootdir: /private/tmp/pixi-py\nconfigfile: pyproject.toml\ncollected 1 item\n\ntest_me.py .                                                                                                                                                                                                    [100%]\n\n================================================================================================== 1 passed in 0.00s =================================================================================================\n

Neat! It seems to be working!

"},{"location":"tutorials/python/#test-vs-default-environment","title":"Test vs Default environment","text":"

Let's compare the output of the test and default environments...

pixi list -e test\n# vs. default environment\npixi list\n

We see that the test environment has:

package          version       build               size       kind   source\n...\npytest           8.1.1                             1.1 mib    pypi   pytest-8.1.1-py3-none-any.whl\n...\n

However, the default environment is missing this package. This way, you can fine-tune your environments to only have the packages that are needed for that environment. For example, you could have a dev environment that has pytest and ruff installed, but omit these from the prod environment. There is a docker example that shows how to set up a minimal prod environment and copy from there.

"},{"location":"tutorials/python/#replacing-pypi-packages-with-conda-packages","title":"Replacing PyPI packages with conda packages","text":"

Lastly, pixi allows PyPI packages to depend on conda packages. Let's confirm this with pixi list:

$ pixi list\nPackage          Version       Build               Size       Kind   Source\n...\npygments         2.17.2                            4.1 MiB    pypi   pygments-2.17.2-py3-none-any.http.whl\n...\n

Let's explicitly add pygments, a dependency of the rich package, to the pyproject.toml file.

pixi add pygments\n

This will add the following to the pyproject.toml file:

[tool.pixi.dependencies]\npygments = \">=2.17.2,<2.18\"\n

We can now see that the pygments package is installed as a conda package.

$ pixi list\nPackage          Version       Build               Size       Kind   Source\n...\npygments         2.17.2        pyhd8ed1ab_0        840.3 KiB  conda  pygments-2.17.2-pyhd8ed1ab_0.conda\n

This way, PyPI dependencies and conda dependencies can be mixed and matched to seamlessly interoperate.

$  pixi r python -c \"import pixi_py; pixi_py.say_hello()\"\nHello, World! \ud83e\udddb\n

And it still works!

"},{"location":"tutorials/python/#conclusion","title":"Conclusion","text":"

In this tutorial, you've seen how easy it is to use a pyproject.toml to manage your pixi dependencies and environments. We have also explored how to use PyPI and conda dependencies seamlessly together in the same project and install optional dependencies to manage Python packages.

Hopefully, this provides a flexible and powerful way to manage your Python projects and a fertile base for further exploration with Pixi.

Thanks for reading! Happy Coding \ud83d\ude80

Any questions? Feel free to reach out or share this tutorial on X, join our Discord, send us an e-mail or follow our GitHub.

"},{"location":"tutorials/ros2/","title":"Tutorial: Develop a ROS 2 package with pixi","text":"

In this tutorial, we will show you how to develop a ROS 2 package using pixi. The tutorial is written to be executed from top to bottom; skipping steps might result in errors.

The audience for this tutorial is developers who are familiar with ROS 2 and are interested in trying pixi for their development workflow.

"},{"location":"tutorials/ros2/#prerequisites","title":"Prerequisites","text":"

If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.

"},{"location":"tutorials/ros2/#create-a-pixi-project","title":"Create a pixi project.","text":"
pixi init my_ros2_project -c robostack-staging -c conda-forge\ncd my_ros2_project\n

It should have created a directory structure like this:

my_ros2_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n

The pixi.toml file is the manifest file for your project. It should look like this:

pixi.toml
[project]\nname = \"my_ros2_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"robostack-staging\", \"conda-forge\"]\n# Your project can support multiple platforms, the current platform will be automatically added.\nplatforms = [\"linux-64\"]\n\n[tasks]\n\n[dependencies]\n

The channels you added to the init command are repositories of packages; you can search these repositories through our prefix.dev website. The platforms are the systems you want to support. In pixi you can support multiple platforms, but you have to define which ones, so that pixi can check whether your dependencies are available for them. For the rest of the fields, you can fill them in as you see fit.

"},{"location":"tutorials/ros2/#add-ros-2-dependencies","title":"Add ROS 2 dependencies","text":"

To use a pixi project you don't need any dependencies on your system; all the dependencies you need should be added through pixi, so other users can use your project without any issues.

Let's start with the turtlesim example

pixi add ros-humble-desktop ros-humble-turtlesim\n

This will add the ros-humble-desktop and ros-humble-turtlesim packages to your manifest. Depending on your internet speed this might take a minute, as it will also install ROS in your project folder (.pixi).

Now run the turtlesim example.

pixi run ros2 run turtlesim turtlesim_node\n

Or use the shell command to start an activated environment in your terminal.

pixi shell\nros2 run turtlesim turtlesim_node\n

Congratulations, you have ROS 2 running on your machine with pixi!

Some more fun with the turtle

To control the turtle you can run the following command in a new terminal

cd my_ros2_project\npixi run ros2 run turtlesim turtle_teleop_key\n

Now you can control the turtle with the arrow keys on your keyboard.

"},{"location":"tutorials/ros2/#add-a-custom-python-node","title":"Add a custom Python node","text":"

As ROS works with custom nodes, let's add a custom node to our project.

pixi run ros2 pkg create --build-type ament_python --destination-directory src --node-name my_node my_package\n

To build the package we need some more dependencies:

pixi add colcon-common-extensions \"setuptools<=58.2.0\"\n

Add the created initialization script for the ros workspace to your manifest file.

Then run the build command

pixi run colcon build\n

This will create a sourceable script in the install folder; you can source this script through an activation script to use your custom node. Normally this would be the script you add to your .bashrc, but now you tell pixi to use it.

Linux & macOSWindows pixi.toml
[activation]\nscripts = [\"install/setup.sh\"]\n
pixi.toml
[activation]\nscripts = [\"install/setup.bat\"]\n
Multi platform support

You can add multiple activation scripts for different platforms, so you can support multiple platforms with one project. Use the following example to add support for both Linux and Windows, using the target syntax.

[project]\nplatforms = [\"linux-64\", \"win-64\"]\n\n[activation]\nscripts = [\"install/setup.sh\"]\n[target.win-64.activation]\nscripts = [\"install/setup.bat\"]\n

Now you can run your custom node with the following command

pixi run ros2 run my_package my_node\n
"},{"location":"tutorials/ros2/#simplify-the-user-experience","title":"Simplify the user experience","text":"

In pixi we have a feature called tasks; it allows you to define a task in your manifest file and run it with a simple command. Let's add tasks to run the turtlesim example and the custom node.

pixi task add sim \"ros2 run turtlesim turtlesim_node\"\npixi task add build \"colcon build --symlink-install\"\npixi task add hello \"ros2 run my_package my_node\"\n

Now you can run these tasks by simply running

pixi run sim\npixi run build\npixi run hello\n
Advanced task usage

Tasks are a powerful feature in pixi.

[tasks]\nsim = \"ros2 run turtlesim turtlesim_node\"\nbuild = {cmd = \"colcon build --symlink-install\", inputs = [\"src\"]}\nhello = { cmd = \"ros2 run my_package my_node\", depends-on = [\"build\"] }\n
"},{"location":"tutorials/ros2/#build-a-c-node","title":"Build a C++ node","text":"

To build a C++ node, you need to add ament_cmake and some other build dependencies to your manifest file.

pixi add ros-humble-ament-cmake-auto compilers pkg-config cmake ninja\n

Now you can create a C++ node with the following command

pixi run ros2 pkg create --build-type ament_cmake --destination-directory src --node-name my_cpp_node my_cpp_package\n

Now you can build it again and run it with the following commands

# Passing arguments to the build command to build with Ninja, add them to the manifest if you want to default to ninja.\npixi run build --cmake-args -G Ninja\npixi run ros2 run my_cpp_package my_cpp_node\n
Tip

Add the cpp task to the manifest file to simplify the user experience.

pixi task add hello-cpp \"ros2 run my_cpp_package my_cpp_node\"\n
"},{"location":"tutorials/ros2/#conclusion","title":"Conclusion","text":"

In this tutorial, we showed you how to create a Python & CMake ROS 2 project using pixi. We also showed you how to add dependencies to your project using pixi, and how to run your project using pixi run. This way you can make sure that your project is reproducible on all your machines that have pixi installed.

"},{"location":"tutorials/ros2/#show-off-your-work","title":"Show Off Your Work!","text":"

Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!

"},{"location":"tutorials/ros2/#frequently-asked-questions","title":"Frequently asked questions","text":""},{"location":"tutorials/ros2/#what-happens-with-rosdep","title":"What happens with rosdep?","text":"

Currently, we don't support rosdep in a pixi environment, so you'll have to add the packages using pixi add. rosdep will call conda install which isn't supported in a pixi environment.

"},{"location":"tutorials/rust/","title":"Tutorial: Develop a Rust package using pixi","text":"

In this tutorial, we will show you how to develop a Rust package using pixi. The tutorial is written to be executed from top to bottom; skipping steps might result in errors.

The audience for this tutorial is developers who are familiar with Rust and cargo and are interested in trying pixi for their development workflow. The benefit within a Rust workflow is that you lock both Rust and the C/system dependencies your project might be using. E.g. tokio users will almost certainly use openssl.

If you're new to pixi, you can check out the basic usage guide. This will teach you the basics of a pixi project within 3 minutes.

"},{"location":"tutorials/rust/#prerequisites","title":"Prerequisites","text":""},{"location":"tutorials/rust/#create-a-pixi-project","title":"Create a pixi project.","text":"
pixi init my_rust_project\ncd my_rust_project\n

It should have created a directory structure like this:

my_rust_project\n\u251c\u2500\u2500 .gitattributes\n\u251c\u2500\u2500 .gitignore\n\u2514\u2500\u2500 pixi.toml\n

The pixi.toml file is the manifest file for your project. It should look like this:

pixi.toml
[project]\nname = \"my_rust_project\"\nversion = \"0.1.0\"\ndescription = \"Add a short description here\"\nauthors = [\"User Name <user.name@email.url>\"]\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"] # (1)!\n\n[tasks]\n\n[dependencies]\n
  1. platforms is set to your system's platform by default. You can change it to any platforms you want to support, e.g. [\"linux-64\", \"osx-64\", \"osx-arm64\", \"win-64\"].
"},{"location":"tutorials/rust/#add-rust-dependencies","title":"Add Rust dependencies","text":"

To use a pixi project you don't need any dependencies on your system; all the dependencies you need should be added through pixi, so other users can use your project without any issues.

pixi add rust\n

This will add the rust package to your pixi.toml file under [dependencies], which includes the Rust toolchain and cargo.

"},{"location":"tutorials/rust/#add-a-cargo-project","title":"Add a cargo project","text":"

Now that you have rust installed, you can create a cargo project in your pixi project.

pixi run cargo init\n

pixi run is pixi's way to run commands in the pixi environment; it will make sure that the environment is set up correctly for the command. It runs in its own cross-platform shell; if you want more information, check out the tasks documentation. You can also activate the environment in your own shell by running pixi shell; after that, you don't need pixi run ... anymore.

Now we can build a cargo project using pixi.

pixi run cargo build\n
To simplify the build process, you can add a build task to your pixi.toml file using the following command:
pixi task add build \"cargo build\"\n
Which creates this field in the pixi.toml file: pixi.toml
[tasks]\nbuild = \"cargo build\"\n

And now you can build your project using:

pixi run build\n

You can also run your project using:

pixi run cargo run\n
Which you can simplify with a task again.
pixi task add start \"cargo run\"\n

So you should get the following output:

pixi run start\nHello, world!\n

Congratulations, you have a Rust project running on your machine with pixi!

"},{"location":"tutorials/rust/#next-steps-why-is-this-useful-when-there-is-rustup","title":"Next steps, why is this useful when there is rustup?","text":"

Cargo is not a binary package manager, but a source-based package manager. This means that you need to have the Rust compiler installed on your system to use it. And possibly other dependencies that are not included in the cargo package manager. For example, you might need to install openssl or libssl-dev on your system to build a package. This is the case for pixi as well, but pixi will install these dependencies in your project folder, so you don't have to worry about them.

Add the following dependency to your cargo project:

pixi run cargo add git2\n

If your system is not preconfigured to build C code and doesn't have the libssl-dev package installed, you will not be able to build the project:

pixi run build\n...\nCould not find directory of OpenSSL installation, and this `-sys` crate cannot\nproceed without this knowledge. If OpenSSL is installed and this crate had\ntrouble finding it,  you can set the `OPENSSL_DIR` environment variable for the\ncompilation process.\n\nMake sure you also have the development packages of openssl installed.\nFor example, `libssl-dev` on Ubuntu or `openssl-devel` on Fedora.\n\nIf you're in a situation where you think the directory *should* be found\nautomatically, please open a bug at https://github.com/sfackler/rust-openssl\nand include information about your system as well as this message.\n\n$HOST = x86_64-unknown-linux-gnu\n$TARGET = x86_64-unknown-linux-gnu\nopenssl-sys = 0.9.102\n\n\nIt looks like you're compiling on Linux and also targeting Linux. Currently this\nrequires the `pkg-config` utility to find OpenSSL but unfortunately `pkg-config`\ncould not be found. If you have OpenSSL installed you can likely fix this by\ninstalling `pkg-config`.\n...\n
You can fix this by adding the necessary dependencies for building git2 with pixi:
pixi add openssl pkg-config compilers\n

Now you should be able to build your project again:

pixi run build\n...\n   Compiling git2 v0.18.3\n   Compiling my_rust_project v0.1.0 (/my_rust_project)\n    Finished dev [unoptimized + debuginfo] target(s) in 7.44s\n     Running `target/debug/my_rust_project`\n

"},{"location":"tutorials/rust/#extra-add-more-tasks","title":"Extra: Add more tasks","text":"

You can add more tasks to your pixi.toml file to simplify your workflow.

For example, you can add a test task to run your tests:

pixi task add test \"cargo test\"\n

And you can add a clean task to clean your project:

pixi task add clean \"cargo clean\"\n

You can add a formatting task to your project:

pixi task add fmt \"cargo fmt\"\n

You can extend these tasks to run multiple commands with the use of the depends-on field.

pixi task add lint \"cargo clippy\" --depends-on fmt\n

"},{"location":"tutorials/rust/#conclusion","title":"Conclusion","text":"

In this tutorial, we showed you how to create a Rust project using pixi. We also showed you how to add dependencies to your project using pixi. This way you can make sure that your project is reproducible on any system that has pixi installed.

"},{"location":"tutorials/rust/#show-off-your-work","title":"Show Off Your Work!","text":"

Finished with your project? We'd love to see what you've created! Share your work on social media using the hashtag #pixi and tag us @prefix_dev. Let's inspire the community together!

"},{"location":"CHANGELOG/","title":"Changelog","text":"

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

"},{"location":"CHANGELOG/#0321-2024-10-08","title":"[0.32.1] - 2024-10-08","text":""},{"location":"CHANGELOG/#fixes","title":"Fixes","text":""},{"location":"CHANGELOG/#documentation","title":"Documentation","text":""},{"location":"CHANGELOG/#0320-2024-10-08","title":"[0.32.0] - 2024-10-08","text":""},{"location":"CHANGELOG/#highlights","title":"\u2728 Highlights","text":"

The biggest fix in this PR is the move to the latest rattler as it came with some major bug fixes for macOS and Rust 1.81 compatibility.

"},{"location":"CHANGELOG/#changed","title":"Changed","text":""},{"location":"CHANGELOG/#fixed","title":"Fixed","text":""},{"location":"CHANGELOG/#0310-2024-10-03","title":"[0.31.0] - 2024-10-03","text":""},{"location":"CHANGELOG/#highlights_1","title":"\u2728 Highlights","text":"

Thanks to our maintainer @baszamstra! He sped up the resolver for all cases we could think of in #2162. Check the resulting times it takes to solve the environments in our test set:

"},{"location":"CHANGELOG/#added","title":"Added","text":""},{"location":"CHANGELOG/#changed_1","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_1","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_1","title":"Fixed","text":""},{"location":"CHANGELOG/#performance","title":"Performance","text":""},{"location":"CHANGELOG/#new-contributors","title":"New Contributors","text":""},{"location":"CHANGELOG/#0300-2024-09-19","title":"[0.30.0] - 2024-09-19","text":""},{"location":"CHANGELOG/#highlights_2","title":"\u2728 Highlights","text":"

I want to thank @synapticarbors and @abkfenris for starting the work on pixi project export. Pixi now supports exporting a conda environment.yml file and a conda explicit specification file. This is a great addition to the project and will help users share their projects with non-pixi users.
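As a hedged sketch of how this export feature is driven from the command line (the exact subcommand and argument names below are assumptions based on these release notes, not verified against this version), guarded so the script is a no-op when pixi is not on PATH:

```shell
# Sketch of the 0.30.0 export feature; subcommand names are assumptions
# taken from the release notes above.
if command -v pixi >/dev/null 2>&1; then
  # Write a conda environment.yml for the default environment.
  pixi project export conda-environment > environment.yml
  # Write a conda explicit specification file into the current directory.
  pixi project export conda-explicit-spec .
else
  echo "pixi not installed; skipping export sketch"
fi
```

The guard keeps the sketch harmless on machines without pixi installed.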

"},{"location":"CHANGELOG/#added_1","title":"Added","text":""},{"location":"CHANGELOG/#changed_2","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_2","title":"Documentation","text":""},{"location":"CHANGELOG/#testing","title":"Testing","text":""},{"location":"CHANGELOG/#fixed_2","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_1","title":"New Contributors","text":""},{"location":"CHANGELOG/#0290-2024-09-04","title":"[0.29.0] - 2024-09-04","text":""},{"location":"CHANGELOG/#highlights_3","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_2","title":"Added","text":" "},{"location":"CHANGELOG/#changed_3","title":"Changed","text":" "},{"location":"CHANGELOG/#fixed_3","title":"Fixed","text":" "},{"location":"CHANGELOG/#refactor_1","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_2","title":"New Contributors","text":""},{"location":"CHANGELOG/#0282-2024-08-28","title":"[0.28.2] - 2024-08-28","text":""},{"location":"CHANGELOG/#changed_4","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_3","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_4","title":"Fixed","text":""},{"location":"CHANGELOG/#0281-2024-08-26","title":"[0.28.1] - 2024-08-26","text":""},{"location":"CHANGELOG/#changed_5","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_4","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_5","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_3","title":"New Contributors","text":""},{"location":"CHANGELOG/#0280-2024-08-22","title":"[0.28.0] - 2024-08-22","text":""},{"location":"CHANGELOG/#highlights_4","title":"\u2728 
Highlights","text":""},{"location":"CHANGELOG/#added_3","title":"Added","text":""},{"location":"CHANGELOG/#changed_6","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_5","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_6","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_2","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_4","title":"New Contributors","text":""},{"location":"CHANGELOG/#0271-2024-08-09","title":"[0.27.1] - 2024-08-09","text":""},{"location":"CHANGELOG/#documentation_6","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_7","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_3","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_5","title":"New Contributors","text":""},{"location":"CHANGELOG/#0270-2024-08-07","title":"[0.27.0] - 2024-08-07","text":""},{"location":"CHANGELOG/#highlights_5","title":"\u2728 Highlights","text":"

This release contains a lot of refactoring and improvements to the codebase, in preparation for future features. Along with that, we've fixed a ton of bugs. To make sure we're not breaking anything, we've added a lot of tests and CI checks, but let us know if you find any issues!

As a reminder, you can update pixi using pixi self-update and move to a specific version, including backwards, with pixi self-update --version 0.27.0.
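The update commands mentioned above look like the following (a sketch; the version-changing lines are commented out because they mutate your install, and the whole block is guarded so it does nothing when pixi is absent):

```shell
# Update pixi, or pin to a specific version (even an older one).
if command -v pixi >/dev/null 2>&1; then
  pixi self-update --help >/dev/null   # confirm the subcommand is available
  # pixi self-update                   # update to the latest version
  # pixi self-update --version 0.27.0  # move to a specific version, including backwards
else
  echo "pixi not installed; skipping self-update sketch"
fi
```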

"},{"location":"CHANGELOG/#added_4","title":"Added","text":""},{"location":"CHANGELOG/#changed_7","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_7","title":"Documentation","text":""},{"location":"CHANGELOG/#testing_1","title":"Testing","text":""},{"location":"CHANGELOG/#fixed_8","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_4","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_6","title":"New Contributors","text":""},{"location":"CHANGELOG/#0261-2024-07-22","title":"[0.26.1] - 2024-07-22","text":""},{"location":"CHANGELOG/#fixed_9","title":"Fixed","text":""},{"location":"CHANGELOG/#0260-2024-07-19","title":"[0.26.0] - 2024-07-19","text":""},{"location":"CHANGELOG/#highlights_6","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_5","title":"Added","text":""},{"location":"CHANGELOG/#changed_8","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_8","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_10","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_5","title":"Refactor","text":""},{"location":"CHANGELOG/#removed","title":"Removed","text":""},{"location":"CHANGELOG/#new-contributors_7","title":"New Contributors","text":""},{"location":"CHANGELOG/#0250-2024-07-05","title":"[0.25.0] - 2024-07-05","text":""},{"location":"CHANGELOG/#highlights_7","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#changed_9","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_9","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_11","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_6","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_8","title":"New Contributors","text":""},{"location":"CHANGELOG/#0242-2024-06-14","title":"[0.24.2] - 
2024-06-14","text":""},{"location":"CHANGELOG/#documentation_10","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_12","title":"Fixed","text":""},{"location":"CHANGELOG/#0241-2024-06-12","title":"[0.24.1] - 2024-06-12","text":""},{"location":"CHANGELOG/#fixed_13","title":"Fixed","text":""},{"location":"CHANGELOG/#0240-2024-06-12","title":"[0.24.0] - 2024-06-12","text":""},{"location":"CHANGELOG/#highlights_8","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_6","title":"Added","text":""},{"location":"CHANGELOG/#changed_10","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_11","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_14","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_9","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0230-2024-05-27","title":"[0.23.0] - 2024-05-27","text":""},{"location":"CHANGELOG/#highlights_9","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_7","title":"Added","text":""},{"location":"CHANGELOG/#changed_11","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_12","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_15","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_7","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_10","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0220-2024-05-13","title":"[0.22.0] - 2024-05-13","text":""},{"location":"CHANGELOG/#highlights_10","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_8","title":"Added","text":""},{"location":"CHANGELOG/#documentation_13","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_16","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_8","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_11","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0211-2024-05-07","title":"[0.21.1] - 2024-05-07","text":""},{"location":"CHANGELOG/#fixed_17","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0210-2024-05-06","title":"[0.21.0] - 2024-05-06","text":""},{"location":"CHANGELOG/#highlights_11","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_9","title":"Added","text":""},{"location":"CHANGELOG/#changed_12","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_14","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_18","title":"Fixed","text":""},{"location":"CHANGELOG/#refactor_9","title":"Refactor","text":""},{"location":"CHANGELOG/#new-contributors_12","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0201-2024-04-26","title":"[0.20.1] - 2024-04-26","text":""},{"location":"CHANGELOG/#highlights_12","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#fixed_19","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_13","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0200-2024-04-19","title":"[0.20.0] - 2024-04-19","text":""},{"location":"CHANGELOG/#highlights_13","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_10","title":"Added","text":""},{"location":"CHANGELOG/#changed_13","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_15","title":"Documentation","text":" "},{"location":"CHANGELOG/#fixed_20","title":"Fixed","text":""},{"location":"CHANGELOG/#breaking","title":"BREAKING","text":""},{"location":"CHANGELOG/#new-contributors_14","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0191-2024-04-11","title":"[0.19.1] - 2024-04-11","text":""},{"location":"CHANGELOG/#highlights_14","title":"\u2728 Highlights","text":"

This fixes the issue where pixi would generate broken environments/lockfiles when a mapping for a brand-new version of a package is missing.

"},{"location":"CHANGELOG/#changed_14","title":"Changed","text":"

Full commit history

"},{"location":"CHANGELOG/#0190-2024-04-10","title":"[0.19.0] - 2024-04-10","text":""},{"location":"CHANGELOG/#highlights_15","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_11","title":"Added","text":""},{"location":"CHANGELOG/#changed_15","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_16","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_21","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_15","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0180-2024-04-02","title":"[0.18.0] - 2024-04-02","text":""},{"location":"CHANGELOG/#highlights_16","title":"\u2728 Highlights","text":"

[!TIP] These new features are part of the ongoing effort to make pixi more flexible, powerful, and comfortable for Python users. They are still in progress, so expect more improvements soon; please report any issues you encounter and follow our next releases!

"},{"location":"CHANGELOG/#added_12","title":"Added","text":""},{"location":"CHANGELOG/#changed_16","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_17","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_22","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_16","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0171-2024-03-21","title":"[0.17.1] - 2024-03-21","text":""},{"location":"CHANGELOG/#highlights_17","title":"\u2728 Highlights","text":"

A quick bug-fix release for pixi list.

"},{"location":"CHANGELOG/#documentation_18","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_23","title":"Fixed","text":""},{"location":"CHANGELOG/#0170-2024-03-19","title":"[0.17.0] - 2024-03-19","text":""},{"location":"CHANGELOG/#highlights_18","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_13","title":"Added","text":""},{"location":"CHANGELOG/#changed_17","title":"Changed","text":""},{"location":"CHANGELOG/#documentation_19","title":"Documentation","text":""},{"location":"CHANGELOG/#fixed_24","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_17","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0161-2024-03-11","title":"[0.16.1] - 2024-03-11","text":""},{"location":"CHANGELOG/#fixed_25","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0160-2024-03-09","title":"[0.16.0] - 2024-03-09","text":""},{"location":"CHANGELOG/#highlights_19","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_14","title":"Added","text":""},{"location":"CHANGELOG/#changed_18","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_26","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_18","title":"New Contributors","text":"

Full Commit history

"},{"location":"CHANGELOG/#0152-2024-02-29","title":"[0.15.2] - 2024-02-29","text":""},{"location":"CHANGELOG/#changed_19","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_27","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_19","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0151-2024-02-26","title":"[0.15.1] - 2024-02-26","text":""},{"location":"CHANGELOG/#added_15","title":"Added","text":""},{"location":"CHANGELOG/#changed_20","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_28","title":"Fixed","text":"

Full commit history

"},{"location":"CHANGELOG/#0150-2024-02-23","title":"[0.15.0] - 2024-02-23","text":""},{"location":"CHANGELOG/#highlights_20","title":"\u2728 Highlights","text":"

[!WARNING] This version's build failed; use v0.15.1 instead.

"},{"location":"CHANGELOG/#added_16","title":"Added","text":""},{"location":"CHANGELOG/#fixed_29","title":"Fixed","text":""},{"location":"CHANGELOG/#other","title":"Other","text":"

Full commit history

"},{"location":"CHANGELOG/#0140-2024-02-15","title":"[0.14.0] - 2024-02-15","text":""},{"location":"CHANGELOG/#highlights_21","title":"\u2728 Highlights","text":"

Now, solve-groups can be used in [environments] to ensure dependency alignment across different environments without simultaneous installation. This feature is particularly beneficial for managing identical dependencies in test and production environments. Example configuration:

[environments]\ntest = { features = [\"prod\", \"test\"], solve-groups = [\"group1\"] }\nprod = { features = [\"prod\"], solve-groups = [\"group1\"] }\n
This setup simplifies managing dependencies that must be consistent across test and production.

"},{"location":"CHANGELOG/#added_17","title":"Added","text":""},{"location":"CHANGELOG/#changed_21","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_30","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_20","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0130-2024-02-01","title":"[0.13.0] - 2024-02-01","text":""},{"location":"CHANGELOG/#highlights_22","title":"\u2728 Highlights","text":"

This release is pretty crazy in the number of features! The major ones are: - We added support for multiple environments. :tada: Check out the documentation - We added support for sdist installation, which greatly increases the number of packages that can be installed from PyPI. :rocket:

[!IMPORTANT]

Renaming of PIXI_PACKAGE_* variables:

PIXI_PACKAGE_ROOT -> PIXI_PROJECT_ROOT\nPIXI_PACKAGE_NAME ->  PIXI_PROJECT_NAME\nPIXI_PACKAGE_MANIFEST -> PIXI_PROJECT_MANIFEST\nPIXI_PACKAGE_VERSION -> PIXI_PROJECT_VERSION\nPIXI_PACKAGE_PLATFORMS -> PIXI_ENVIRONMENT_PLATFORMS\n
Check documentation here: https://pixi.sh/environment/
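As a small illustration of the renaming above, a task script that previously read `PIXI_PACKAGE_ROOT` would now read `PIXI_PROJECT_ROOT` (sketch; these variables are only populated inside a pixi-activated environment, so the fallbacks below print `<unset>` elsewhere):

```shell
# Inside a pixi task or activated environment, the renamed variables are set;
# outside one, the :- fallbacks make this safe to run anywhere.
printf 'root: %s\nname: %s\n' \
  "${PIXI_PROJECT_ROOT:-<unset>}" \
  "${PIXI_PROJECT_NAME:-<unset>}"
```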

[!IMPORTANT]

The .pixi/env/ folder has been moved to accommodate multiple environments. If you only have one environment it is now named .pixi/envs/default.

"},{"location":"CHANGELOG/#added_18","title":"Added","text":" "},{"location":"CHANGELOG/#changed_22","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_31","title":"Fixed","text":""},{"location":"CHANGELOG/#new-contributors_21","title":"New Contributors","text":"

Full commit history

"},{"location":"CHANGELOG/#0120-2024-01-15","title":"[0.12.0] - 2024-01-15","text":""},{"location":"CHANGELOG/#highlights_23","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_19","title":"Added","text":""},{"location":"CHANGELOG/#changed_23","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_32","title":"Fixed","text":""},{"location":"CHANGELOG/#removed_1","title":"Removed","text":""},{"location":"CHANGELOG/#documentation_20","title":"Documentation","text":""},{"location":"CHANGELOG/#new-contributors_22","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.11.0...v0.12.0

"},{"location":"CHANGELOG/#0111-2024-01-06","title":"[0.11.1] - 2024-01-06","text":""},{"location":"CHANGELOG/#fixed_33","title":"Fixed","text":""},{"location":"CHANGELOG/#0110-2024-01-05","title":"[0.11.0] - 2024-01-05","text":""},{"location":"CHANGELOG/#highlights_24","title":"\u2728 Highlights","text":""},{"location":"CHANGELOG/#added_20","title":"Added","text":""},{"location":"CHANGELOG/#changed_24","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_34","title":"Fixed","text":""},{"location":"CHANGELOG/#documentation_21","title":"Documentation","text":""},{"location":"CHANGELOG/#new-contributors_23","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.10.0...v0.11.0

"},{"location":"CHANGELOG/#0100-2023-12-8","title":"[0.10.0] - 2023-12-8","text":""},{"location":"CHANGELOG/#highlights_25","title":"Highlights","text":""},{"location":"CHANGELOG/#added_21","title":"Added","text":""},{"location":"CHANGELOG/#fixed_35","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous_1","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_24","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.1...v0.10.0

"},{"location":"CHANGELOG/#091-2023-11-29","title":"[0.9.1] - 2023-11-29","text":""},{"location":"CHANGELOG/#highlights_26","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_36","title":"Fixed","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.9.0...v0.9.1

"},{"location":"CHANGELOG/#090-2023-11-28","title":"[0.9.0] - 2023-11-28","text":""},{"location":"CHANGELOG/#highlights_27","title":"Highlights","text":""},{"location":"CHANGELOG/#added_22","title":"Added","text":""},{"location":"CHANGELOG/#fixed_37","title":"Fixed","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.8.0...v0.9.0

"},{"location":"CHANGELOG/#080-2023-11-27","title":"[0.8.0] - 2023-11-27","text":""},{"location":"CHANGELOG/#highlights_28","title":"Highlights","text":"

[!NOTE] [pypi-dependencies] support is still incomplete; missing functionality is listed here: https://github.com/orgs/prefix-dev/projects/6. Our intent is not to reach 100% feature parity with pip; our goal is that you only need pixi for conda and PyPI packages alike.

"},{"location":"CHANGELOG/#added_23","title":"Added","text":""},{"location":"CHANGELOG/#fixed_38","title":"Fixed","text":""},{"location":"CHANGELOG/#miscellaneous_2","title":"Miscellaneous","text":""},{"location":"CHANGELOG/#new-contributors_25","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.7.0...v0.8.0

"},{"location":"CHANGELOG/#070-2023-11-14","title":"[0.7.0] - 2023-11-14","text":""},{"location":"CHANGELOG/#highlights_29","title":"Highlights","text":""},{"location":"CHANGELOG/#added_24","title":"Added","text":""},{"location":"CHANGELOG/#changed_25","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_39","title":"Fixed","text":""},{"location":"CHANGELOG/#docs","title":"Docs","text":""},{"location":"CHANGELOG/#new-contributors_26","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.6.0...v0.7.0

"},{"location":"CHANGELOG/#060-2023-10-17","title":"[0.6.0] - 2023-10-17","text":""},{"location":"CHANGELOG/#highlights_30","title":"Highlights","text":"

This release fixes some bugs and adds the --cwd option to tasks.

"},{"location":"CHANGELOG/#fixed_40","title":"Fixed","text":""},{"location":"CHANGELOG/#changed_26","title":"Changed","text":""},{"location":"CHANGELOG/#added_25","title":"Added","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.5.0...v0.6.0

"},{"location":"CHANGELOG/#050-2023-10-03","title":"[0.5.0] - 2023-10-03","text":""},{"location":"CHANGELOG/#highlights_31","title":"Highlights","text":"

We rebuilt pixi shell, fixing the issue where your rc file would override the environment activation.

"},{"location":"CHANGELOG/#fixed_41","title":"Fixed","text":""},{"location":"CHANGELOG/#added_26","title":"Added","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.4.0...v0.5.0

"},{"location":"CHANGELOG/#040-2023-09-22","title":"[0.4.0] - 2023-09-22","text":""},{"location":"CHANGELOG/#highlights_32","title":"Highlights","text":"

This release adds the start of a new CLI command, pixi project, which will allow users to interact with the project configuration from the command line.

"},{"location":"CHANGELOG/#fixed_42","title":"Fixed","text":""},{"location":"CHANGELOG/#added_27","title":"Added","text":""},{"location":"CHANGELOG/#new-contributors_27","title":"New Contributors","text":"

Full Changelog: https://github.com/prefix-dev/pixi/compare/v0.3.0...v0.4.0

"},{"location":"CHANGELOG/#030-2023-09-11","title":"[0.3.0] - 2023-09-11","text":""},{"location":"CHANGELOG/#highlights_33","title":"Highlights","text":"

This release fixes a lot of issues encountered by the community and includes some awesome community contributions, like the addition of pixi global list and pixi global remove.

"},{"location":"CHANGELOG/#fixed_43","title":"Fixed","text":""},{"location":"CHANGELOG/#added_28","title":"Added","text":""},{"location":"CHANGELOG/#changed_27","title":"Changed","text":""},{"location":"CHANGELOG/#020-2023-08-22","title":"[0.2.0] - 2023-08-22","text":""},{"location":"CHANGELOG/#highlights_34","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_44","title":"Fixed","text":""},{"location":"CHANGELOG/#added_29","title":"Added","text":""},{"location":"CHANGELOG/#010-2023-08-11","title":"[0.1.0] - 2023-08-11","text":"

As this is our first Semantic Versioning release, we'll move from the prototype phase to the developing phase, as semver describes. A 0.x release could be anything from a new major feature to a breaking change, while 0.0.x releases will be bugfixes or small improvements.

"},{"location":"CHANGELOG/#highlights_35","title":"Highlights","text":""},{"location":"CHANGELOG/#fixed_45","title":"Fixed","text":""},{"location":"CHANGELOG/#008-2023-08-01","title":"[0.0.8] - 2023-08-01","text":""},{"location":"CHANGELOG/#highlights_36","title":"Highlights","text":""},{"location":"CHANGELOG/#added_30","title":"Added","text":""},{"location":"CHANGELOG/#fixed_46","title":"Fixed","text":""},{"location":"CHANGELOG/#changed_28","title":"Changed","text":""},{"location":"CHANGELOG/#007-2023-07-11","title":"[0.0.7] - 2023-07-11","text":""},{"location":"CHANGELOG/#highlights_37","title":"Highlights","text":""},{"location":"CHANGELOG/#breaking-changes","title":"BREAKING CHANGES","text":""},{"location":"CHANGELOG/#added_31","title":"Added","text":""},{"location":"CHANGELOG/#fixed_47","title":"Fixed","text":""},{"location":"CHANGELOG/#006-2023-06-30","title":"[0.0.6] - 2023-06-30","text":""},{"location":"CHANGELOG/#highlights_38","title":"Highlights","text":"

Improving reliability is important to us, so we added an integration testing framework; we can now test as close as possible to the CLI level using cargo.

"},{"location":"CHANGELOG/#added_32","title":"Added","text":""},{"location":"CHANGELOG/#fixed_48","title":"Fixed","text":""},{"location":"CHANGELOG/#005-2023-06-26","title":"[0.0.5] - 2023-06-26","text":"

Fixed the Windows installer build in CI. (#145)

"},{"location":"CHANGELOG/#004-2023-06-26","title":"[0.0.4] - 2023-06-26","text":""},{"location":"CHANGELOG/#highlights_39","title":"Highlights","text":"

A new command, auth, which can be used to authenticate against the host of the package channels. A new command, shell, which can be used to start a shell in the pixi environment of a project. A refactor of the install command, which is changed to global install; the install command now installs a pixi project if you run it in the project directory. Platform-specific dependencies using [target.linux-64.dependencies] instead of [dependencies] in the pixi.toml
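The commands listed above can be exercised roughly as follows (a sketch; only the subcommand names come from the notes, and the block is guarded so it does nothing when pixi is absent):

```shell
# Probe the subcommands introduced in this release without changing any state.
if command -v pixi >/dev/null 2>&1; then
  pixi auth --help >/dev/null            # authenticate against a channel host
  pixi shell --help >/dev/null           # start a shell in the project environment
  pixi global install --help >/dev/null  # install a tool into a global environment
else
  echo "pixi not installed; skipping command sketch"
fi
```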

Lots and lots of fixes and improvements to make it easier for the user, where bumping to the new version of rattler helped a lot.

"},{"location":"CHANGELOG/#added_33","title":"Added","text":""},{"location":"CHANGELOG/#changed_29","title":"Changed","text":""},{"location":"CHANGELOG/#fixed_49","title":"Fixed","text":" "}]} \ No newline at end of file diff --git a/dev/sitemap.xml b/dev/sitemap.xml index 476ebd628..36f1d98c0 100644 --- a/dev/sitemap.xml +++ b/dev/sitemap.xml @@ -2,177 +2,177 @@ https://prefix-dev.github.io/pixi/dev/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/Community/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/FAQ/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/basic_usage/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/vision/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/authentication/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/channel_priority/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/explain_info_command/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/github_actions/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/production_deployment/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/pyproject_toml/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/advanced/updates_github_actions/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/design_proposals/pixi_global_manifest/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/examples/cpp-sdl/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/examples/opencv/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/examples/ros2-nav2/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/features/advanced_tasks/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/features/environment/ - 2024-10-08 + 2024-10-11 
daily https://prefix-dev.github.io/pixi/dev/features/lockfile/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/features/multi_environment/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/features/multi_platform_configuration/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/features/system_requirements/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/ide_integration/devcontainer/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/ide_integration/jupyterlab/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/ide_integration/pycharm/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/ide_integration/r_studio/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/reference/cli/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/reference/pixi_configuration/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/reference/project_configuration/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/switching_from/conda/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/switching_from/poetry/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/tutorials/python/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/tutorials/ros2/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/tutorials/rust/ - 2024-10-08 + 2024-10-11 daily https://prefix-dev.github.io/pixi/dev/CHANGELOG/ - 2024-10-08 + 2024-10-11 daily \ No newline at end of file diff --git a/dev/sitemap.xml.gz b/dev/sitemap.xml.gz index 74528c92a6922d50103434a4258ae35fad99570c..7fedfb64da60ffe3a97aea8e6dde256e202eaf3b 100644 GIT binary patch delta 576 zcmV-G0>Ayx1keNrABzYG7{UmV2OWP&u){W_DT-q44#S24+gVU7+GZ@dfBtx0yq`bp?zTBO0-lMo zr~7%!96u^`I2@LQ44sLz45?96wempfB2=s8hxzVqCTlcszB{e03)OD4LSla=CaCea zu+T3X$Rv{WH)M6YkUKBf?qpp&<$I4 
zBRMmvE7ZT^J%z5z9(kBL==0gPo%_45Z7784HAsm$DLKAmiw?qZ=3Y#%ak_tbryUI&d^ybU><#aY5D< z!yBS>n0+z0DbQx&O~08saSeuZ65>DEsoBsQ6@v}F#}s@CW&6r3N8W#GbWjX%mmQ-n z(bZ>i>8rR7sgthC*VyD#L`Qr52pUL6V=dM*CK&xc`TRr+(#JA17gvi+e9zL}LMn~@ zHOGM=IkPW~JPz`j)Rl&vG;y|~F>!9$gd}YVsLEkEu&2JYs7iRlewlZQ!lwgr)uL~7 zokFLC-I~IfV*pN*|CKPpnM1^6lmSgPtw~ Oy!#DlSWYV$82|v#Y!~zZ delta 576 zcmV-G0>Ayx1keNrABzYGJ0b;<2OWQDutPVbDT-q44#S24+gVU7+GZWEM06<6#Yz)1nJGgp>wky5@GQ7^V{WWJ_BC`i@v#^|N8m7xSKz0Z#FqO0-lMo zr~7%!96u?w-|v@%44sLz45?96wempfEL5xI$NBbVCTlcsx;w6o3)ObCLSla=CaCeS zu+T3X=i34U*_K{ssF zmE^>vE>QoD_Y}G=d*os2pwEv#rjEEXgq6-@8kjn2k7L42ozmFe_zJ8tnOpFTmEn@G zN9!J(jtN65AjdA?aEQ*J*XV!iperd6#@k}xPkY)_W2{gy*f9m0kIdRabYbWu;>)2Q zg#Q&P)*L!l40evzFpw@QW-JgkUdl>{fsCW$j&5Ao8K)&zu?Cua=)l2r(E+I{#syhd z3~z|iVfMw~raTq)xgjUt^O~5gqO2BWNHQjkQ?Mm|*n(3#l~r z*Bl3iBa06Oyzgpel#uz#jY7qAKAP`+43e3ZM4KRg1pS zbqbvlc54b_jsZAL{#P&xXATLEv)RDy^aJCacbpdrqRrEvbf3OFetY@+_4~{J2R&X0 Oc>4$84U=9O82|uiOC1{k diff --git a/dev/tutorials/python/index.html b/dev/tutorials/python/index.html index eebce9bdb..90ae924e6 100644 --- a/dev/tutorials/python/index.html +++ b/dev/tutorials/python/index.html @@ -1830,12 +1830,12 @@

Let's get started
.
 ├── pixi_py
-   ├── __init__.py
+   └── __init__.py
 └── pyproject.toml
 

We've used a flat-layout here but pixi supports both flat- and src-layouts.

What's in the pyproject.toml?#

-

Okay, so let's have a look at what's sections have been added and how we can modify the pyproject.toml.

+

Okay, so let's have a look at what sections have been added and how we can modify the pyproject.toml.

These first entries were added to the pyproject.toml file:

# Main pixi entry
 [tool.pixi.project]
@@ -1995,7 +1995,7 @@ 

Testing this code
.
 ├── pixi.lock
 ├── pixi_py
-   ├── __init__.py
+   └── __init__.py
 ├── pyproject.toml
 └── tests/test_me.py
 

@@ -2030,18 +2030,18 @@

Testing this codeTest vs Default environment#

-

The interesting thing is if we compare the output of the two environments.

+

Let's compare the output of the test and default environments...

pixi list -e test
-# v.s. default environment
+# vs. default environment
 pixi list
 
-

Is that the test environment has:

+

We see that the test environment has:

package          version       build               size       kind   source
 ...
 pytest           8.1.1                             1.1 mib    pypi   pytest-8.1.1-py3-none-any.whl
 ...
 
-

But the default environment is missing this package. +

However, the default environment is missing this package. This way, you can fine-tune your environments to only have the packages that are needed for that environment. E.g. you could also have a dev environment that has pytest and ruff installed, but omit these from the prod environment. There is a docker example that shows how to set up a minimal prod environment and copy from there.