Global configuration in pixi#
Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong in the project file). The configuration is loaded in the following order:
1. the global configuration folder: ~/.config/pixi/config.toml on Linux, dependent on XDG_CONFIG_HOME
2. the home folder: ~/.pixi/config.toml (or $PIXI_HOME/config.toml if the PIXI_HOME environment variable is set)
3. the project folder: $PIXI_PROJECT/.pixi/config.toml
4. command-line arguments (--tls-no-verify, --change-ps1=false etc.)
Note
To find the locations where pixi looks for configuration files, run pixi with -v or --verbose.
Reference#
The following reference describes all available configuration options.
# file as fallback. This option allows you to force the use of a JSON file.
# Read more in the authentication section.
authentication_override_file = "/path/to/your/override.json"

# configuration for conda channel-mirrors
[mirrors]
# redirect all requests for conda-forge to the prefix.dev mirror
"https://conda.anaconda.org/conda-forge" = [
  "https://prefix.dev/conda-forge"
]

# redirect all requests for bioconda to one of the three listed mirrors
# Note: for repodata we try the first mirror first.
"https://conda.anaconda.org/bioconda" = [
  "https://conda.anaconda.org/bioconda",
  # OCI registries are also supported
  "oci://ghcr.io/channel-mirrors/bioconda",
  "https://prefix.dev/bioconda",
]
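As an illustration, the override file referenced above is a JSON document keyed by host. The exact schema is documented in the authentication section; the shape below (the `BearerToken` key and the host name) is an assumption, shown only to give an idea:

```json
{
  "repo.prefix.dev": {
    "BearerToken": "pfx_xxxxxxxxxxxx"
  }
}
```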
Mirror configuration#
You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors section of the configuration file and replace the original URL with the mirror URL.
To also include the original URL, you have to repeat it in the list of mirrors.
The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
You can also specify mirrors for an entire "host". This will forward all requests for channels on anaconda.org to prefix.dev; channels that are not currently mirrored on prefix.dev will fail in such a setup.
OCI Mirrors#
You can also specify mirrors on an OCI registry. There is a public mirror on the GitHub container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:
[mirrors]
"https://conda.anaconda.org/conda-forge" = [
  "oci://ghcr.io/channel-mirrors/conda-forge"
]
The GHCR mirror also contains bioconda packages. You can search the available packages on GitHub.
Installation#
Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Use pixi cross-platform, on Windows, Mac and Linux.
To install pixi you can run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash
The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to ~/.pixi/bin. If this directory does not already exist, the script will create it.
The script will also update your ~/.bash_profile to include ~/.pixi/bin in your PATH, allowing you to invoke the pixi command from anywhere.
Windows (PowerShell):
iwr -useb https://pixi.sh/install.ps1 | iex
Or with winget:
winget install prefix-dev.pixi
The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to LocalAppData/pixi/bin. If this directory does not already exist, the script will create it.
The command will also automatically add LocalAppData/pixi/bin to your path, allowing you to invoke pixi from anywhere.
Tip
You might need to restart your terminal or source your shell for the changes to take effect.
Autocompletion#
To get autocompletion run:
Linux & macOS:
# Pick your shell (use `echo $SHELL` to find the shell you are using.):
echo 'eval "$(pixi completion --shell bash)"' >> ~/.bashrc
echo 'eval "$(pixi completion --shell zsh)"' >> ~/.zshrc
echo 'pixi completion --shell fish | source' >> ~/.config/fish/config.fish
echo 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv
PowerShell:
Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'
Failure because no profile file exists
Make sure your profile file exists, otherwise create it with:
New-Item -Path $PROFILE -ItemType File -Force
And then restart the shell or source the shell config file.
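The `eval "$(pixi completion --shell bash)"` lines above work because `eval` executes whatever shell code the inner command prints. The mechanism can be seen with a plain stand-in generator (no pixi required; `generator` is a hypothetical function used only for illustration):

```shell
#!/bin/sh
# a command that prints shell code, like `pixi completion` does
generator() { printf 'GREETING=hello\n'; }

# eval runs the printed code in the current shell
eval "$(generator)"
echo "$GREETING"
```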
Alternative installation methods#
Although we recommend installing pixi through the above method, we also provide additional installation methods.
Homebrew#
Pixi is available via homebrew. To install pixi via homebrew simply run:
brew install pixi
Windows installer#
We provide an msi installer on our GitHub releases page. The installer will download pixi and add it to the path.
Install from source#
pixi is 100% written in Rust, and therefore it can be installed, built and tested with cargo. To start using pixi from a source build run:
cargo install --locked --git https://github.com/prefix-dev/pixi.git
or when you want to make changes use:
cargo build
cargo test
If you have any issues building because of the dependency on rattler
check out its compile steps.
Update#
Updating is as simple as installing: rerunning the installation script gets you the latest version.
Linux & macOS:
curl -fsSL https://pixi.sh/install.sh | bash
Or get a specific pixi version using:
export PIXI_VERSION=vX.Y.Z && curl -fsSL https://pixi.sh/install.sh | bash
Windows (PowerShell):
iwr -useb https://pixi.sh/install.ps1 | iex
Or get a specific pixi version using:
$Env:PIXI_VERSION="vX.Y.Z"; iwr -useb https://pixi.sh/install.ps1 | iex
Note
If you used a package manager like brew, mamba, conda, or paru to install pixi, use their built-in update mechanism, e.g. brew upgrade pixi.
Uninstall#
To uninstall pixi from your system, simply remove the binary.
Linux & macOS:
rm ~/.pixi/bin/pixi
Windows (PowerShell):
$PIXI_BIN = "$Env:LocalAppData\pixi\bin\pixi"; Remove-Item -Path $PIXI_BIN
After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi
directory and remove the directory from your path.
When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)
Customize your badge
To further customize the look and feel of your badge, you can add &style=<custom-style>
at the end of the URL. See the documentation on shields.io for more info.
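For example, with shields.io's flat-square style appended (the style name comes from the shields.io docs; any of their documented styles works the same way):

```markdown
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json&style=flat-square)](https://pixi.sh)
```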
What is the difference with conda, mamba, poetry, pip#

| Tool | Installs python | Builds packages | Runs predefined tasks | Has lockfiles builtin | Fast | Use without python |
|---|---|---|---|---|---|---|
| Conda | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
| Mamba | ✅ | ❌ | ❌ | ❌ | ✅ | ✅ |
| Pip | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
| Pixi | ✅ | 🚧 | ✅ | ✅ | ✅ | ✅ |
| Poetry | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |

Why the name pixi#
Starting with the name prefix
we iterated until we had a name that was easy to pronounce, spell and remember. There also wasn't a cli tool yet using that name. Unlike px
, pex
, pax
, etc. We think it sparks curiosity and fun; if you don't agree, I'm sorry, but you can always alias it to whatever you like.
alias not_pixi="pixi"
PowerShell:
New-Alias -Name not_pixi -Value pixi
Where is pixi build#
TL;DR: It's coming, we promise!
pixi build
is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool which we're creating with rattler-build
which will be used as a library in pixi.
Basic usage#
Ensure you've got pixi
set up. If running pixi
doesn't show the help, see the getting started section.
pixi
Initialize a new project and navigate to the project directory.
pixi init pixi-hello-world
cd pixi-hello-world
Add the dependencies you would like to use.
pixi add python
Create a file named hello_world.py
in the directory and paste the following code into the file.
def hello():
    print("Hello World, to the new revolution in package management.")

if __name__ == "__main__":
    hello()
Run the code inside the environment.
pixi run python hello_world.py
You can also put this run command in a task.
pixi task add hello python hello_world.py
After adding the task, you can run the task using its name.
pixi run hello
Use the shell
command to activate the environment and start a new shell in there.
pixi shell
python
exit
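After the steps above, the pixi.toml in the project will look roughly like this sketch (the Python constraint and platform are illustrative; pixi fills in values for your system):

```toml
[project]
name = "pixi-hello-world"
version = "0.1.0"
channels = ["conda-forge"]
platforms = ["linux-64"]

[tasks]
hello = "python hello_world.py"

[dependencies]
python = "*"
```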
You've just learned the basic features of pixi:
Feel free to play around with what you just learned like adding more tasks, dependencies or code.
Happy coding!
Use pixi as a global installation tool#
Use pixi to install tools on your machine.
Some notable examples:
# Awesome cross shell prompt, huge tip when using pixi!
pixi global install starship

# Want to try a different shell?
pixi global install fish

# Install other prefix.dev tools
pixi global install rattler-build

# Install a linter you want to use in multiple projects.
pixi global install ruff
Use pixi in GitHub Actions#
You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.
- uses: prefix-dev/setup-pixi@v0.5.1
- run: pixi run cowpy "Thanks for using pixi"
See the GitHub Actions documentation for more details.
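A complete minimal workflow around those two steps could be sketched as follows (the workflow name and the `test` task are assumptions; replace `pixi run test` with a task defined in your pixi.toml):

```yaml
name: CI
on: push

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: prefix-dev/setup-pixi@v0.5.1
      # run any pixi task in the cached environment
      - run: pixi run test
```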
Commands#
Global options#
--verbose (-v|vv|vvv): Increase the verbosity of the output messages; -v, -vv and -vvv increase the level of verbosity respectively.
--help (-h): Shows help information; use -h to get the short version of the help.
--version (-V): Shows the version of pixi that is used.
--quiet (-q): Decreases the amount of output.
--color <COLOR>: Whether the log needs to be colored [env: PIXI_COLOR=] [default: auto] [possible values: always, never, auto]. Pixi also honors the FORCE_COLOR and NO_COLOR environment variables. They both take precedence over --color and PIXI_COLOR.
init#
This command is used to create a new project. It initializes a pixi.toml
file and also prepares a .gitignore
to prevent the environment from being added to git
.
[PATH]
: Where to place the project (defaults to current path) [default: .]--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that the project supports. (Allowed to be used more than once)--import <ENV_FILE> (-i)
: Import an existing conda environment file, e.g. environment.yml
.Importing an environment.yml
When importing an environment, the pixi.toml
will be created with the dependencies from the environment file. The pixi.lock
will be created when you install the environment. We don't support git+
URLs as dependencies for pip packages. For the defaults
channel we use main
, r
and msys2
as the default channels.
pixi init myproject
pixi init ~/myproject
pixi init # Initializes directly in the current directory.
pixi init --channel conda-forge --channel bioconda myproject
pixi init --platform osx-64 --platform linux-64 myproject
pixi init --import environment.yml
add#
Adds dependencies to the pixi.toml
. It will only add the package if its version constraint works with the rest of the dependencies in the project. More info on multi-platform configuration.
<SPECS>
: The package(s) to add, space separated. The version constraint is optional.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package. Parses dependencies as PEP508 requirements, supporting extras and versions. See configuration for details.--no-install
: Don't install the package to the environment, only add the package to the lock-file.--no-lockfile-update
: Don't update the lock-file, implies the --no-install
flag.--platform <PLATFORM> (-p)
: The platform for which the dependency should be added. (Allowed to be used more than once)--feature <FEATURE> (-f)
: The feature for which the dependency should be added.
pixi add numpy
pixi add numpy pandas "pytorch>=1.8"
pixi add "numpy>=1.22,<1.24"
pixi add --manifest-path ~/myproject/pixi.toml numpy
pixi add --host "python>=3.9.0"
pixi add --build cmake
pixi add --pypi requests[security]
pixi add --platform osx-64 --build clang
pixi add --no-install numpy
pixi add --no-lockfile-update numpy
pixi add --feature featurex numpy
install#
Installs all dependencies specified in the lockfile pixi.lock
, which gets generated on pixi add
or when you manually change the pixi.toml
file and run pixi install
.
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.
pixi install
pixi install --manifest-path ~/myproject/pixi.toml
pixi install --frozen
pixi install --locked
run#
The run
command first checks if the environment is ready to use. If you didn't run pixi install
, the run command will do that for you. The custom tasks defined in the pixi.toml
are also available through the run command.
You cannot run pixi run source setup.bash
as source
is not available in the deno_task_shell
commands and is not an executable.
[TASK]...
The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to run the task in; if none is provided the default environment will be used, or a selector will be given to select the right environment.
pixi run python
pixi run cowpy "Hey pixi user"
pixi run --manifest-path ~/myproject/pixi.toml python
pixi run --frozen python
pixi run --locked python
# If you have specified a custom task in the pixi.toml you can run it with run as well
pixi run build
# Extra arguments will be passed to the tasks command.
pixi run task argument1 argument2

# If you have multiple environments you can select the right one with the --environment flag.
pixi run --environment cuda python
Info
In pixi
the deno_task_shell
is the underlying runner of the run command. Check out their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.
Cross environment tasks
If you're using the depends_on
feature of the tasks
, the tasks will be run in the order you specified them. The depends_on
can be used cross environment, e.g. you have this pixi.toml
:
[tasks]
start = { cmd = "python start.py", depends_on = ["build"] }

[feature.build.tasks]
build = "cargo build"
[feature.build.dependencies]
rust = ">=1.74"

[environments]
build = ["build"]
Then you're able to run the build
from the build
environment and start
from the default environment. By only calling:
pixi run start\n
remove#
Removes dependencies from the pixi.toml
.
<DEPS>...
: List of dependencies you wish to remove from the project.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package.--platform <PLATFORM> (-p)
: The platform from which the dependency should be removed.--feature <FEATURE> (-f)
: The feature from which the dependency should be removed.
pixi remove numpy
pixi remove numpy pandas pytorch
pixi remove --manifest-path ~/myproject/pixi.toml numpy
pixi remove --host python
pixi remove --build cmake
pixi remove --pypi requests
pixi remove --platform osx-64 --build clang
pixi remove --feature featurex clang
pixi remove --feature featurex --platform osx-64 clang
pixi remove --feature featurex --platform osx-64 --build clang
task#
If you want to make a shorthand for a specific command you can add a task for it.
Options#
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.
task add#
Add a task to the pixi.toml
, use --depends-on
to add tasks you want to run before this task, e.g. build before an execute task.
<NAME>
: The name of the task.<COMMAND>
: The command to run. This can be more than one word.Info
If you are using $
for env variables, they will be resolved before adding them to the task. If you want to use $
in the task you need to escape it with a \\
, e.g. echo \\$HOME
.
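The escaping rule can be checked with plain shell, independent of pixi: an unescaped `$VAR` inside double quotes is expanded immediately, while `\$VAR` keeps the literal text for later.

```shell
#!/bin/sh
GREETING=hello
now="$GREETING"      # expanded immediately by the current shell
later="\$GREETING"   # backslash keeps the literal string '$GREETING'
echo "$now"
echo "$later"
```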
--platform <PLATFORM> (-p)
: the platform for which this task should be added.--feature <FEATURE> (-f)
: the feature for which the task is added; if none is provided the task is added to the default tasks.--depends-on <DEPENDS_ON>
: the task it depends on to be run before the one you're adding.--cwd <CWD>
: the working directory for the task relative to the root of the project.
pixi task add cow cowpy "Hello User"
pixi task add tls ls --cwd tests
pixi task add test cargo t --depends-on build
pixi task add build-osx "METAL=1 cargo build" --platform osx-64
pixi task add train python train.py --feature cuda
This adds the following to the pixi.toml
:
[tasks]
cow = "cowpy \"Hello User\""
tls = { cmd = "ls", cwd = "tests" }
test = { cmd = "cargo t", depends_on = ["build"] }

[target.osx-64.tasks]
build-osx = "METAL=1 cargo build"

[feature.cuda.tasks]
train = "python train.py"
Which you can then run with the run
command:
pixi run cow
# Extra arguments will be passed to the tasks command.
pixi run test --test test1
task remove#
Remove the task from the pixi.toml
<NAMES>
: The names of the tasks, space separated.--platform <PLATFORM> (-p)
: the platform for which this task is removed.--feature <FEATURE> (-f)
: the feature for which the task is removed.
pixi task remove cow
pixi task remove --platform linux-64 test
pixi task remove --feature cuda task
task alias#
Create an alias for a task.
Arguments#
<ALIAS>
: The alias name<DEPENDS_ON>
: The names of the tasks you want to execute on this alias, order counts, first one runs first.--platform <PLATFORM> (-p)
: the platform for which this alias is created.
pixi task alias test-all test-py test-cpp test-rust
pixi task alias --platform linux-64 test test-linux
pixi task alias moo cow
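Conceptually an alias is a task that only depends on other tasks. For the first example above, the stored manifest entry can be sketched like this (the exact shape pixi writes is an assumption):

```toml
[tasks]
test-all = { depends_on = ["test-py", "test-cpp", "test-rust"] }
```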
task list#
List all tasks in the project.
Options#
--environment
(-e
): the environment's tasks list; if none is provided the default tasks will be listed.--summary
(-s
): the output gets formatted to be machine parsable. (Used in the autocompletion of pixi run
).
pixi task list
pixi task list --environment cuda
pixi task list --summary
list#
List project's packages. Highlighted packages are explicit dependencies.
Options#
--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform.--json
: Whether to output in json format.--json-pretty
: Whether to output in pretty json format.--sort-by <SORT_BY>
: Sorting strategy [default: name] [possible values: size, name, type]--manifest-path <MANIFEST_PATH>
: The path to pixi.toml
, by default it searches for one in the parent directories.--environment
(-e
): The environment's packages to list; if none is provided the default environment's packages will be listed.--frozen
: Install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)
pixi list
pixi list --json-pretty
pixi list --sort-by size
pixi list --platform win-64
pixi list --environment cuda
pixi list --frozen
pixi list --locked
pixi list --no-install
Output will look like this, where python
will be green as it is the package that was explicitly added to the pixi.toml
:
➜ pixi list
Package Version Build Size Kind Source
_libgcc_mutex 0.1 conda_forge 2.5 KiB conda _libgcc_mutex-0.1-conda_forge.tar.bz2
_openmp_mutex 4.5 2_gnu 23.1 KiB conda _openmp_mutex-4.5-2_gnu.tar.bz2
bzip2 1.0.8 hd590300_5 248.3 KiB conda bzip2-1.0.8-hd590300_5.conda
ca-certificates 2023.11.17 hbcca054_0 150.5 KiB conda ca-certificates-2023.11.17-hbcca054_0.conda
ld_impl_linux-64 2.40 h41732ed_0 688.2 KiB conda ld_impl_linux-64-2.40-h41732ed_0.conda
libexpat 2.5.0 hcb278e6_1 76.2 KiB conda libexpat-2.5.0-hcb278e6_1.conda
libffi 3.4.2 h7f98852_5 56.9 KiB conda libffi-3.4.2-h7f98852_5.tar.bz2
libgcc-ng 13.2.0 h807b86a_4 755.7 KiB conda libgcc-ng-13.2.0-h807b86a_4.conda
libgomp 13.2.0 h807b86a_4 412.2 KiB conda libgomp-13.2.0-h807b86a_4.conda
libnsl 2.0.1 hd590300_0 32.6 KiB conda libnsl-2.0.1-hd590300_0.conda
libsqlite 3.44.2 h2797004_0 826 KiB conda libsqlite-3.44.2-h2797004_0.conda
libuuid 2.38.1 h0b41bf4_0 32.8 KiB conda libuuid-2.38.1-h0b41bf4_0.conda
libxcrypt 4.4.36 hd590300_1 98 KiB conda libxcrypt-4.4.36-hd590300_1.conda
libzlib 1.2.13 hd590300_5 60.1 KiB conda libzlib-1.2.13-hd590300_5.conda
ncurses 6.4 h59595ed_2 863.7 KiB conda ncurses-6.4-h59595ed_2.conda
openssl 3.2.0 hd590300_1 2.7 MiB conda openssl-3.2.0-hd590300_1.conda
python 3.12.1 hab00c5b_1_cpython 30.8 MiB conda python-3.12.1-hab00c5b_1_cpython.conda
readline 8.2 h8228510_1 274.9 KiB conda readline-8.2-h8228510_1.conda
tk 8.6.13 noxft_h4845f30_101 3.2 MiB conda tk-8.6.13-noxft_h4845f30_101.conda
tzdata 2023d h0c530f3_0 116.8 KiB conda tzdata-2023d-h0c530f3_0.conda
xz 5.2.6 h166bdaf_0 408.6 KiB conda xz-5.2.6-h166bdaf_0.tar.bz2
shell#
This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit
.
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate the shell in; if none is provided the default environment will be used, or a selector will be given to select the right environment.
pixi shell
exit
pixi shell --manifest-path ~/myproject/pixi.toml
exit
pixi shell --frozen
exit
pixi shell --locked
exit
pixi shell --environment cuda
exit
shell-hook#
This command prints the activation script of an environment.
Options#
--shell <SHELL> (-s)
: The shell for which the activation script should be printed. Defaults to the current shell. Currently supported variants: [bash
, zsh
, xonsh
, cmd
, powershell
, fish
, nushell
]--manifest-path
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate; if none is provided the default environment will be used, or a selector will be given to select the right environment.
pixi shell-hook
pixi shell-hook --shell bash
pixi shell-hook --shell zsh
pixi shell-hook -s powershell
pixi shell-hook --manifest-path ~/myproject/pixi.toml
pixi shell-hook --frozen
pixi shell-hook --locked
pixi shell-hook --environment cuda
Example use-case, when you want to get rid of the pixi
executable in a Docker container.
pixi shell-hook --shell bash > /etc/profile.d/pixi.sh
rm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.
search#
Search a package; the output will list the latest version of the package.
Arguments#
<PACKAGE>
: Name of package to search, it's possible to use wildcards (*
).--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--limit <LIMIT> (-l)
: optionally limit the number of search results--platform <PLATFORM> (-p)
: specify a platform that you want to search for. (default: current platform)
pixi search pixi
pixi search --limit 30 "py*"
# search in a different channel and for a specific platform
pixi search -c robostack --platform linux-64 "plotjuggler*"
self-update#
Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi
), pixi won't update to prevent breaking the current installation (Homebrew, etc.). The behaviour can be overridden with the --force
flag.
--version <VERSION>
: The desired version (to downgrade or upgrade to). Update to the latest version if not specified.--force
: Force the update even if the pixi binary is not found in the default location.
pixi self-update
pixi self-update --version 0.13.0
pixi self-update --force
info#
Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.
Options#
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--extended
: extend the information with more slow queries to the system, like directory sizes.--json
: Get a machine-readable version of the information as output.
pixi info
pixi info --json --extended
upload#
Upload a package to a prefix.dev channel.
Arguments#
<HOST>
: The host + channel to upload to.<PACKAGE_FILE>
: The package file to upload.
pixi upload repo.prefix.dev/my_channel my_package.conda
auth#
This command is used to authenticate the user's access to remote hosts such as prefix.dev
or anaconda.org
for private channels.
auth login#
Store authentication information for a given host.
Tip
The host is a real hostname, not a channel.
Arguments#
<HOST>
: The host to authenticate with.--token <TOKEN>
: The token to use for authentication with prefix.dev.--username <USERNAME>
: The username to use for basic HTTP authentication.--password <PASSWORD>
: The password to use for basic HTTP authentication.--conda-token <CONDA_TOKEN>
: The token to use on anaconda.org
/ quetz
authentication.
pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaNdHANx0qHjq7f2iD
pixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP
pixi auth login https://myquetz.server --user john --password xxxxxx
auth logout#
Remove authentication information for a given host.
Arguments#
<HOST>
: The host to authenticate with.
pixi auth logout <HOST>
pixi auth logout repo.prefix.dev
pixi auth logout anaconda.org
global#
Global is the main entry point for the part of pixi that executes on the global (system) level.
Tip
Binaries and environments installed globally are stored in ~/.pixi
by default; this can be changed by setting the PIXI_HOME
environment variable.
global install#
This command installs package(s) into its own environment and adds the binary to PATH
, allowing you to access it anywhere on your system without activating the environment.
<PACKAGE>
: The package(s) to install, this can also be a version constraint.
--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)
pixi global install ruff
# multiple packages can be installed at once
pixi global install starship rattler-build
# specify the channel(s)
pixi global install --channel conda-forge --channel bioconda trackplot
# Or in a more concise form
pixi global install -c conda-forge -c bioconda trackplot

# Support full conda matchspec
pixi global install python=3.9.*
pixi global install "python [version='3.11.0', build_number=1]"
pixi global install "python [version='3.11.0', build=he550d4f_1_cpython]"
pixi global install python=3.11.0=h10a6764_1_cpython
After using global install, you can use the package you installed anywhere on your system.
"},{"location":"cli/#global-list","title":"global list
","text":"This command shows the current installed global environments including what binaries come with it. A global installed package/environment can possibly contain multiple binaries and they will be listed out in the command output. Here is an example of a few installed packages:
> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n| \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n| \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n| \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n| \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n \u2514\u2500 exec: uv\n
"},{"location":"cli/#global-upgrade","title":"global upgrade
","text":"This command upgrades a globally installed package (to the latest version by default).
"},{"location":"cli/#arguments_12","title":"Arguments","text":"<PACKAGE>
: The package to upgrade.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"cli/#global-upgrade-all","title":"global upgrade-all
","text":"This command upgrades all globally installed packages to their latest version.
"},{"location":"cli/#options_19","title":"Options","text":"--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda\n
"},{"location":"cli/#global-remove","title":"global remove
","text":"Removes a package previously installed into a globally accessible location via pixi global install
Use pixi global info
to find out the package name that belongs to the tool you want to remove.
<PACKAGE>
: The package(s) to remove.pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"cli/#project","title":"project
","text":"This subcommand allows you to modify the project configuration through the command line interface.
"},{"location":"cli/#options_20","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.project channel add
","text":"Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lockfile and the environment is reinstalled.
"},{"location":"cli/#arguments_14","title":"Arguments","text":"<CHANNEL>
: The channels to add, name or URL.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is added.pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"cli/#project-channel-list","title":"project channel list
","text":"List the channels in the project file
"},{"location":"cli/#options_22","title":"Options","text":"urls
: show the urls of the channels instead of the names.$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"cli/#project-channel-remove","title":"project channel remove
","text":"List the channels in the project file
"},{"location":"cli/#arguments_15","title":"Arguments","text":"<CHANNEL>...
: The channels to remove, name(s) or URL(s).--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is removed.pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"cli/#project-description-get","title":"project description get
","text":"Get the project description.
$ pixi project description get\nPackage management made easy!\n
"},{"location":"cli/#project-description-set","title":"project description set
","text":"Set the project description.
"},{"location":"cli/#arguments_16","title":"Arguments","text":"<DESCRIPTION>
: The description to set.pixi project description set \"my new description\"\n
"},{"location":"cli/#project-platform-add","title":"project platform add
","text":"Adds a platform(s) to the project file and updates the lockfile.
"},{"location":"cli/#arguments_17","title":"Arguments","text":"<PLATFORM>...
: The platforms to add.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be added.pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"cli/#project-platform-list","title":"project platform list
","text":"List the platforms in the project file.
$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"cli/#project-platform-remove","title":"project platform remove
","text":"Remove platform(s) from the project file and updates the lockfile.
"},{"location":"cli/#arguments_18","title":"Arguments","text":"<PLATFORM>...
: The platforms to remove.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be removed.pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"cli/#project-version-get","title":"project version get
","text":"Get the project version.
$ pixi project version get\n0.11.0\n
"},{"location":"cli/#project-version-set","title":"project version set
","text":"Set the project version.
"},{"location":"cli/#arguments_19","title":"Arguments","text":"<VERSION>
: The version to set.pixi project version set \"0.13.0\"\n
"},{"location":"cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}
","text":"Bump the project version to {MAJOR|MINOR|PATCH}.
pixi project version major\npixi project version minor\npixi project version patch\n
An up-to-date lockfile means that the dependencies in the lockfile are allowed by the dependencies in the manifest file. For example
pixi.toml
with python = \">= 3.11\"
is up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.pixi.toml
with python = \">= 3.12\"
is not up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.Being up-to-date does not mean that the lockfile holds the latest version available on the channel for the given dependency.
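The two cases above can be sketched as a manifest fragment (hypothetical project, with python 3.11.0 in the lockfile):

```toml
# pixi.toml — up-to-date with "name: python, version: 3.11.0" in pixi.lock,
# because the locked 3.11.0 satisfies the constraint:
[dependencies]
python = ">=3.11"

# A manifest requiring python = ">=3.12" would NOT be up-to-date with that
# same locked version, and pixi would re-solve the environment.
```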
The pixi.toml
is the pixi project configuration file, also known as the project manifest.
A toml
file is structured in different tables. This document will explain the usage of the different tables. For more technical documentation check crates.io.
project
table","text":"The minimally required information in the project
table is:
[project]\nname = \"project-name\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n
"},{"location":"configuration/#name","title":"name
","text":"The name of the project.
[project]\nname = \"project-name\"\n
"},{"location":"configuration/#channels","title":"channels
","text":"This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org
you only need to use the name of the channel directly.
[project]\nchannels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n
Channels situated on the file system are also supported with absolute file paths:
[project]\nchannels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n
To access private or public channels on prefix.dev or Quetz use the url including the hostname:
[project]\nchannels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"configuration/#platforms","title":"platforms
","text":"Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lockfile (pixi.lock
).
[project]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
The available platforms are listed here: link"},{"location":"configuration/#version-optional","title":"version
(optional)","text":"The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.
[project]\nversion = \"1.2.3\"\n
"},{"location":"configuration/#authors-optional","title":"authors
(optional)","text":"This is a list of authors of the project.
[project]\nauthors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"configuration/#description-optional","title":"description
(optional)","text":"This should contain a short description of the project.
[project]\ndescription = \"A simple description\"\n
"},{"location":"configuration/#license-optional","title":"license
(optional)","text":"The license as a valid SPDX string (e.g. MIT AND Apache-2.0)
[project]\nlicense = \"MIT\"\n
"},{"location":"configuration/#license-file-optional","title":"license-file
(optional)","text":"Relative path to the license file.
[project]\nlicense-file = \"LICENSE.md\"\n
"},{"location":"configuration/#readme-optional","title":"readme
(optional)","text":"Relative path to the README file.
[project]\nreadme = \"README.md\"\n
"},{"location":"configuration/#homepage-optional","title":"homepage
(optional)","text":"URL of the project homepage.
[project]\nhomepage = \"https://pixi.sh\"\n
"},{"location":"configuration/#repository-optional","title":"repository
(optional)","text":"URL of the project source repository.
[project]\nrepository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"configuration/#documentation-optional","title":"documentation
(optional)","text":"URL of the project documentation.
[project]\ndocumentation = \"https://pixi.sh\"\n
"},{"location":"configuration/#the-tasks-table","title":"The tasks
table","text":"Tasks are a way to automate certain custom commands in your project. For example, a lint
or format
step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run
and are executed using the deno_task_shell
.
[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends_on=\"simple\"}\nalias = { depends_on=[\"depending\"]}\n
You can modify this table using pixi task
. Note
Specify different tasks for different platforms using the target table
"},{"location":"configuration/#the-system-requirements-table","title":"Thesystem-requirements
table","text":"The system requirements are used to define minimal system specifications used during dependency resolution. For example, we can define a unix system with a specific minimal libc version. This will be the minimal system specification for the project. System specifications are directly related to the virtual packages.
Currently, the specified defaults are the same as conda-lock's implementation:
LinuxWindowsOsxOsx-arm64 default system requirements for linux[system-requirements]\nlinux = \"5.10\"\nlibc = { family=\"glibc\", version=\"2.17\" }\n
default system requirements for windows[system-requirements]\n
default system requirements for osx[system-requirements]\nmacos = \"10.15\"\n
default system requirements for osx-arm64[system-requirements]\nmacos = \"11.0\"\n
Only if a project requires a different set should you define them.
For example, when installing environments on old versions of linux, you may encounter the following error:
\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '5.10' but the system has version '4.12.14'\n
This suggests that the system requirements for the project should be lowered. To fix this, add the following table to your configuration: [system-requirements]\nlinux = \"4.12.14\"\n
"},{"location":"configuration/#using-cuda-in-pixi","title":"Using Cuda in pixi","text":"If you want to use cuda
in your project you need to add the following to your system-requirements
table:
[system-requirements]\ncuda = \"11\" # or any other version of cuda you want to use\n
This informs the solver that cuda is going to be available, so it can lock it into the lockfile if needed."},{"location":"configuration/#the-dependencies-tables","title":"The dependencies
table(s)","text":"This section defines what dependencies you would like to use for your project.
There are multiple dependencies tables. The default is [dependencies]
, which are dependencies that are shared across platforms.
Dependencies are defined using a VersionSpec. A VersionSpec
combines a Version with an optional operator.
Some examples are:
# Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to (but not including) 1.3.0\npackage1 = \"~=1.2.3\"\n# Use greater than 1.2, up to and including 1.4\npackage2 = \">1.2,<=1.4\"\n# Greater than or equal to 1.2.3, or lower than 1.0.0 (exclusive)\npackage3 = \">=1.2.3|<1.0.0\"\n
Dependencies can also be defined as a mapping that uses a matchspec:
package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n
Tip
The dependencies can be easily added using the pixi add
command line. Running add
for an existing dependency will replace it with the newest version it can use.
Note
To specify different dependencies for different platforms use the target table
"},{"location":"configuration/#dependencies","title":"dependencies
","text":"Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table should you use anything different than conda-forge
. Even if the dependency defines a channel that channel should be added to the project.channels
list.
[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"configuration/#pypi-dependencies-beta-feature","title":"pypi-dependencies
(Beta feature)","text":"Details regarding the PyPI integration We use uv
, which is a new fast pip replacement written in Rust.
We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This prevents uv from installing these dependencies itself and ensures it uses the exact versions of these packages in the resolution. This is unique amongst conda-based package managers, which usually just call pip from a subprocess.
The uv resolution is included in the lock file directly.
Pixi directly supports depending on PyPI packages, the PyPA calls a distributed package a 'distribution'. There are Source and Binary distributions both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.
Important considerations
dependencies
table where possible.git
dependencies (git+https://github.com/package-org/package.git
) - Source dependencies - Private PyPI repositoriesThese dependencies don't follow the conda matchspec specification. The version
is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version
is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:
[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\npytest = \"*\" # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\nrequests = {version = \">= 2.8.1, ==2.8.*\", extras=[\"security\", \"tests\"]}\n
Did you know you can use: add --pypi
? Use the --pypi
flag with the add
command to quickly add PyPI packages from the CLI. E.g pixi add --pypi flask
The Source Distribution Format is a source-based format (sdist for short) that a package can publish alongside the binary wheel format. Because these distributions need to be built, they need a python executable to do so. This is why python needs to be present in a conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++ based python bindings. Think for example of Python SDL2 bindings depending on the C library: SDL2. To help build these dependencies we activate the conda environment that includes these pypi dependencies before resolving. This way, when a source distribution depends on gcc
for example, it's used from the conda environment instead of the system.
host-dependencies
","text":"This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.
[host-dependencies]\npython = \"~=3.10.3\"\n
Typical examples of host dependencies are:
python
here and an R package would list mro-base
or r-base
.openssl
, rapidjson
, or xtensor
.build-dependencies
","text":"This table contains dependencies that are needed to build the project. Different from dependencies
and host-dependencies
these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.
[build-dependencies]\ncmake = \"~=3.24\"\n
Typical examples of build dependencies are:
cmake
is invoked on the build machine to generate additional code- or project-files which are then included in the compilation process.Info
The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.
For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64
and your host platform is linux-64
.
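The osx-arm64 → linux-64 scenario above can be sketched in a manifest like this (the package choices are illustrative, not prescriptive):

```toml
# Build platform: osx-arm64 (the machine running the build)
# Host platform:  linux-64  (the machine the result will run on)

[build-dependencies]
# Installed for the build machine's architecture (osx-arm64):
cmake = "~=3.24"

[host-dependencies]
# Installed for the target machine's architecture (linux-64):
openssl = "*"
```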
activation
table","text":"If you want to run an activation script inside the environment when either doing a pixi run
or pixi shell
these can be defined here. The scripts defined in this table will be sourced when the environment is activated using pixi run
or pixi shell
Note
The activation scripts are run by the system shell interpreter as they run before an environment is available. This means that it runs as cmd.exe
on windows and bash
on linux and osx (Unix). Only .sh
, .bash
and .bat
files are supported.
If you have scripts per platform use the target table.
[activation]\nscripts = [\"env_setup.sh\"]\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n
"},{"location":"configuration/#the-target-table","title":"The target
table","text":"The target table is a table that allows for platform specific configuration. Allowing you to make different sets of tasks or dependencies per platform.
The target table is currently implemented for the following sub-tables:
activation
dependencies
tasks
The target table is defined using [target.PLATFORM.SUB-TABLE]
. E.g [target.linux-64.dependencies]
The platform can be any of:
win
, osx
, linux
or unix
(unix
matches linux
and osx
)linux-64
, osx-arm64
The sub-table can be any of those specified above.
To make it a bit more clear, let's look at an example below. Currently, pixi combines the top level tables like dependencies
with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, we have cmake
being used for all targets but on osx-64
or osx-arm64
a different version of python will be selected.
[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n
Here are some more examples:
[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
"},{"location":"configuration/#the-feature-and-environments-tables","title":"The feature
and environments
tables","text":"The feature
table allows you to define features that can be used to create different [environments]
. The [environments]
table allows you to define different environments. The design is explained in this design document.
Simplest example
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
This will create an environment called test
that has pytest
installed."},{"location":"configuration/#the-feature-table","title":"The feature
table","text":"The feature
table allows you to define the following fields per feature.
dependencies
: Same as the dependencies.pypi-dependencies
: Same as the pypi-dependencies.system-requirements
: Same as the system-requirements.activation
: Same as the activation.platforms
: Same as the platforms. When adding features together, the intersection of the platforms is taken. Be aware that the default
feature is always implied, thus this must contain all platforms the project can support.channels
: Same as the channels. Adding the priority
field to a channel allows concatenation of channels instead of overwriting.target
: Same as the target.tasks
: Same as the tasks.These tables are all also available without the feature
prefix. When those are used we call them the default
feature. This is a protected name you cannot use for your own feature.
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nchannels = [\"nvidia\"] # Results in: [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Full feature table but written as separate tables[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"configuration/#the-environments-table","title":"The environments
table","text":"The environments
table allows you to define environments that are created using the features defined in the feature
tables.
Important
default
is always implied when creating environments. If you don't want to use the default
feature you can keep all the non-feature tables empty.
The environments table is defined using the following fields:
features: Vec<Feature>
: The features that are included in the environment set, which is also the default field in the environments.solve-group: String
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies, for instance when testing a production environment with additional test dependencies. These dependencies will then be the same version in all environments that have the same solve group, but the different environments contain different subsets of the solve-group's dependency set.[environments]\ntest = [\"test\"]\n
Full environments table specification[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = [\"lint\"]\n
"},{"location":"configuration/#global-configuration","title":"Global configuration","text":"The global configuration options are documented in the global configuration section.
"},{"location":"environment/","title":"Environments","text":"Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.
"},{"location":"environment/#structure","title":"Structure","text":"A pixi environment is located in the .pixi/envs
directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.
If you look at the .pixi/envs
directory, you will see a directory for each environment, the default
being the one that is normally used; if you specify a custom environment, the name you specified will be used.
.pixi\n\u2514\u2500\u2500 envs\n \u251c\u2500\u2500 cuda\n \u2502 \u251c\u2500\u2500 bin\n \u2502 \u251c\u2500\u2500 conda-meta\n \u2502 \u251c\u2500\u2500 etc\n \u2502 \u251c\u2500\u2500 include\n \u2502 \u251c\u2500\u2500 lib\n \u2502 ...\n \u2514\u2500\u2500 default\n \u251c\u2500\u2500 bin\n \u251c\u2500\u2500 conda-meta\n \u251c\u2500\u2500 etc\n \u251c\u2500\u2500 include\n \u251c\u2500\u2500 lib\n ...\n
These directories are conda environments, and you can use them as such, but you should not manually edit them; changes should always go through the pixi.toml
. Pixi will always make sure the environment is in sync with the pixi.lock
file. If this is not the case then all the commands that use the environment will automatically update the environment, e.g. pixi run
, pixi shell
.
If you want to clean up the environments, you can simply delete the .pixi/envs
directory, and pixi will recreate the environments when needed.
# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"environment/#activation","title":"Activation","text":"An environment is nothing more than a set of files that are installed into a certain location, that somewhat mimics a global system install. You need to activate the environment to use it. In the most simple sense that mean adding the bin
directory of the environment to the PATH
variable. But there is more to it in a conda environment, as it also sets some environment variables.
To do the activation we have multiple options: - Use the pixi shell
command to open a shell with the environment activated. - Use the pixi shell-hook
command to print the command to activate the environment in your current shell. - Use the pixi run
command to run a command in the environment.
The run
command is special, as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.
Using the pixi shell-hook
in a pixi project, you would get output similar to the following:
export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
It sets the PATH
and some more environment variables. But more importantly, it also runs activation scripts that are provided by the installed packages. An example of this would be the libglib_activate.sh
script. Thus, just adding the bin
directory to the PATH
is not enough."},{"location":"environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate
-like activation","text":"If you prefer to use the traditional conda activate
-like activation, you could use the pixi shell-hook
command.
$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n
Warning
It is not encouraged to use the traditional conda activate
-like activation, as deactivating the environment is not really possible. Use pixi shell
instead.
pixi
with direnv
","text":"This allows you to use pixi
in combination with direnv
. Enter the following into your .envrc
file:
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
pixi.lock
changes, direnv
invokes the shell-hook again.direnv
ensures that the environment is deactivated when you leave the directory.$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"environment/#environment-variables","title":"Environment variables","text":"The following environment variables are set by pixi, when using the pixi run
, pixi shell
, or pixi shell-hook
command:
PIXI_PROJECT_ROOT
: The root directory of the project.PIXI_PROJECT_NAME
: The name of the project.PIXI_PROJECT_MANIFEST
: The path to the manifest file (pixi.toml
).PIXI_PROJECT_VERSION
: The version of the project.PIXI_PROMPT
: The prompt to use in the shell, also used by pixi shell
itself.PIXI_ENVIRONMENT_NAME
: The name of the environment, defaults to default
.PIXI_ENVIRONMENT_PLATFORMS
: The platforms supported by the environment.CONDA_PREFIX
: The path to the environment. (Used by multiple tools that already understand conda environments)CONDA_DEFAULT_ENV
: The name of the environment. (Used by multiple tools that already understand conda environments)PATH
: We prepend the bin
directory of the environment to the PATH
variable, so you can use the tools installed in the environment directly.Note
Even though these are environment variables, they cannot be overridden. E.g. you cannot change the root of the project by setting PIXI_PROJECT_ROOT
in the environment.
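These variables are also available inside tasks. A minimal sketch (the task name is hypothetical):

```toml
[tasks]
# Hypothetical task: print some of the variables listed above from inside the environment
show-env = "echo Project: $PIXI_PROJECT_NAME, environment: $PIXI_ENVIRONMENT_NAME"
```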
When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock
file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml
and will put the output of the solve step into the pixi.lock
file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:
Pixi solves both the conda
and PyPI
dependencies, where the PyPI
dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler
and rip
libraries, which control the heavy lifting of the solving process, executed by our custom SAT solver: resolvo
. resolvo
is able to solve multiple ecosystems like conda
and PyPI
. It implements the lazy solving process for PyPI
packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda
way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.
For the [pypi-dependencies]
, rip
implements sdist
building to retrieve the metadata of the packages, and wheel
building to install the packages. For this building step, pixi
requires python
to be installed first, via the (conda) [dependencies]
section of the pixi.toml
file. This will always be slower than the pure conda solves. So for the best pixi experience you should stay within the [dependencies]
section of the pixi.toml
file.
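Following that advice, a manifest keeps as much as possible in the conda section. A sketch (package names are illustrative):

```toml
[dependencies]
# Conda packages solve fastest; python is also required for any PyPI builds
python = "3.11.*"
numpy = "*"

[pypi-dependencies]
# Hypothetical package that is only available on PyPI
some-pypi-only-package = "*"
```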
Pixi caches the packages used in the environment. So if you have multiple projects that use the same packages, pixi will only download the packages once.
The cache is located in the ~/.cache/rattler/cache
directory by default. This location is configurable by setting the PIXI_CACHE_DIR
or RATTLER_CACHE_DIR
environment variable.
When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.
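The cache location override can be sketched as follows (the path is an example, not a recommendation):

```shell
# Point pixi (and rattler) at a custom cache directory for this shell session
export PIXI_CACHE_DIR="$HOME/pixi-cache"
echo "$PIXI_CACHE_DIR"
```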
"},{"location":"vision/","title":"Vision","text":"We created pixi
because we want to have a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo
have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.
We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:
We are building on top of the conda packaging ecosystem, which means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community-maintained, very open to contributions, widely used in data science, scientific computing, robotics and other fields, and has a proven track record.
"},{"location":"vision/#target-languages","title":"Target languages","text":"Essentially, we are language agnostics, we are targeting any language that can be installed with conda. Including: C++, Python, Rust, Zig etc. But we do believe the python ecosystem can benefit from a good package manager that is based on conda. So we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system level packages.
"},{"location":"advanced/advanced_tasks/","title":"Advanced tasks","text":"When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.
Here are some quick examples:
pixi.toml[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n \"cmake\",\n # Use the cross-platform Ninja generator\n \"-G\",\n \"Ninja\",\n # The source is in the root directory\n \"-S\",\n \".\",\n # We wanna build in the .build directory\n \"-B\",\n \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends_on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"advanced/advanced_tasks/#depends-on","title":"Depends on","text":"Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.
An obvious example is compiling before running an application.
Check out our cpp_sdl
example for a running example. In that package we have some tasks that depend on each other, so we can ensure that when you run pixi run start
everything is set up as expected.
pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n
Results in the following lines added to the pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends_on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends_on = [\"build\"] }\n
pixi run start\n
The tasks will be executed after each other:
configure
because it has no dependencies.build
as it only depends on configure
.start
as all its dependencies have run. If one of the commands fails (exits with a non-zero code), execution stops and the next one will not be started.
With this logic, you can also create aliases, since a task does not have to specify any command.
pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n
Results in the following pixi.toml
.
fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends_on = [\"fmt\", \"lint\"] }\n
Now run both tools with one command.
pixi run style\n
"},{"location":"advanced/advanced_tasks/#working-directory","title":"Working directory","text":"Pixi tasks support the definition of a working directory.
cwd
\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml
file is located.
Consider a pixi project structured as follows:
\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n \u2514\u2500\u2500 bar.py\n
To add a task to run the bar.py
file, use:
pixi task add bar \"python bar.py\" --cwd scripts\n
This will add the following line to pixi.toml
: pixi.toml
[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"advanced/advanced_tasks/#caching","title":"Caching","text":"When you specify inputs
and/or outputs
to a task, pixi will reuse the result of the task.
For the cache, pixi checks that the following are true: the specified input files are unchanged since the last run, and the specified output files exist.
If all of these conditions are met, pixi will not run the task again and instead use the existing result.
Inputs and outputs can be specified as globs, which will be expanded to all matching files.
pixi.toml[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n
Note: if you want to debug the globs you can use the --verbose
flag to see which files are selected.
# shows info logs of all files that were selected by the globs\npixi run -v start\n
"},{"location":"advanced/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"To support the different OS's (Windows, OSX and Linux), pixi integrates a shell that can run on all of them. This is deno_task_shell
. The task shell is a limited implementation of a bourne-shell interface.
Besides running actual executables like ./myprogram
, cmake
or python
the shell has some built-in commands.
cp
: Copies files.mv
: Moves files.rm
: Remove files or directories. Ex: rm -rf [FILE]...
- Commonly used to recursively delete files or directories.mkdir
: Makes directories. Ex. mkdir -p DIRECTORY...
- Commonly used to make a directory and all its parents with no error if it exists.pwd
: Prints the name of the current/working directory.sleep
: Delays for a specified amount of time. Ex. sleep 1
to sleep for 1 second, sleep 0.5
to sleep for half a second, or sleep 1m
to sleep a minuteecho
: Displays a line of text.cat
: Concatenates files and outputs them on stdout. When no arguments are provided, it reads and outputs stdin.exit
: Causes the shell to exit.unset
: Unsets environment variables.xargs
: Builds arguments from stdin and executes a command.&&
or ||
to separate two commands. - &&
: if the command before &&
succeeds continue with the next command. - ||
: if the command before ||
fails continue with the next command.;
to run two commands without checking if the first command failed or succeeded.export ENV_VAR=value
- Use env variable using: $ENV_VAR
- unset env variable using unset ENV_VAR
VAR=value
- use them: VAR=value && echo $VAR
|
: echo Hello | python receiving_app.py
- |&
: use this to also get the stderr as input.$()
to use the output of a command as input for another command. - python main.py $(git rev-parse HEAD)
!
before any command will negate the exit code from 1 to 0 or vice versa.>
to redirect the stdout to a file. - echo hello > file.txt
will put hello
in file.txt
and overwrite existing text. - python main.py 2> file.txt
will put the stderr
output in file.txt
. - python main.py &> file.txt
will put the stderr
and stdout
in file.txt
. - echo hello >> file.txt
will append hello
to the existing file.txt
.*
to expand all options. - echo *.py
will echo all filenames that end with .py
- echo **/*.py
will echo all filenames that end with .py
in this directory and all descendant directories. - echo data[0-9].csv
will echo all filenames that have a single number after data
and before .csv
More info in deno_task_shell
documentation.
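The operators listed above mirror POSIX shell behavior; a quick demonstration using an ordinary shell (deno_task_shell itself ships with pixi):

```shell
false || echo "ran because the previous command failed"
true && echo "ran because the previous command succeeded"
tmp="$(mktemp)"
echo hello > "$tmp"    # > overwrites the file
echo world >> "$tmp"   # >> appends to it
cat "$tmp"
rm "$tmp"
```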
You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.
Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n <HOST> The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n --token <TOKEN> The token to use (for authentication with prefix.dev)\n --username <USERNAME> The username to use (for basic HTTP authentication)\n --password <PASSWORD> The password to use (for basic HTTP authentication)\n --conda-token <CONDA_TOKEN> The token to use on anaconda.org / quetz authentication\n -v, --verbose... More output per occurrence\n -q, --quiet... Less output per occurrence\n -h, --help Print help\n
The different options are \"token\", \"conda-token\" and \"username + password\".
The token variant implements a standard \"Bearer Token\" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authorization: Bearer <TOKEN>
.
The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/...
.
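That URL scheme can be illustrated with a tiny helper (illustrative only, not part of pixi's API):

```python
def conda_token_url(host: str, token: str, path: str) -> str:
    # The token is inserted as a "/t/<TOKEN>/" path segment, as described above
    return f"https://{host}/t/{token}/{path}"

print(conda_token_url("conda.anaconda.org", "xy-123", "conda-forge/linux-64/repodata.json"))
```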
The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/...
. This authentication method can be configured quite easily with a reverse proxy such as NGINX or Apache and is thus commonly used in self-hosted systems.
Login to prefix.dev:
pixi auth login prefix.dev --token pfx_jj8WDzvnuTEHGdAhwRZMC1Ag8gSto8\n
Login to anaconda.org:
pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n
Login to a basic HTTP secured server:
pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.
On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access
program that comes pre-installed on macOS. Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On Linux, one can use GNOME Keyring
(or just Keyring) to access credentials that are securely stored by libsecret
. Searching for rattler
should list all the credentials stored by pixi and other rattler-based programs.
If you run on a server with none of the aforementioned keychains available, then pixi falls back to store the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json
and contains the credentials.
You can use the RATTLER_AUTH_FILE
environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.
E.g.
export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n
The JSON should follow the following format:
{\n \"*.prefix.dev\": {\n \"BearerToken\": \"your_token\"\n },\n \"otherhost.com\": {\n \"BasicHttp\": {\n \"username\": \"your_username\",\n \"password\": \"your_password\"\n }\n },\n \"conda.anaconda.org\": {\n \"CondaToken\": \"your_token\"\n }\n}\n
Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev
also matches repo.prefix.dev
).
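The host-matching rule can be sketched like this (illustrative only, not rattler's actual implementation):

```python
import fnmatch
import json

def find_credentials(host: str, creds: dict):
    # Exact host match first, then wildcard entries such as "*.prefix.dev"
    if host in creds:
        return creds[host]
    for pattern, value in creds.items():
        if "*" in pattern and fnmatch.fnmatch(host, pattern):
            return value
    return None

creds = json.loads('{"*.prefix.dev": {"BearerToken": "your_token"}}')
print(find_credentials("repo.prefix.dev", creds))
```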
Lastly you can set the authentication override file in the global configuration file.
"},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"All logic regarding the decision which dependencies can be installed from which channel is done by the instruction we give the solver.
The actual code regarding this is in the rattler_solve
crate. This might however be hard to read. Therefore, this document will continue with simplified flow charts.
When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.
[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackgex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex
example, the solver will understand that the package is only available in my-channel
and will not look for it in conda-forge
. The flowchart of the logic that excludes all other channels:
flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Channel Specific Dependency?}\n C -->|Yes| D[Exclude All Other Channels for This Package]\n C -->|No| E{Any Other Dependencies?}\n E -->|Yes| B\n E -->|No| F[End]\n D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"Channel priority is dictated by the order in the project.channels
array, where the first channel is the highest priority. For instance:
[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge
the solver will not look for it in my-channel
and your-channel
, because it tells the solver they are excluded. If the package is not found in conda-forge
the solver will look for it in my-channel
and if it is found there it will tell the solver to exclude your-channel
for this package. This diagram explains the logic: flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Loop Over Channels}\n C --> D{Package in This Channel?}\n D -->|No| C\n D -->|Yes| E{\"This the first channel\n for this package?\"}\n E -->|Yes| F[Include Package in Candidates]\n E -->|No| G[Exclude Package from Candidates]\n F --> H{Any Other Channels?}\n G --> H\n H -->|Yes| C\n H -->|No| I{Any Other Dependencies?}\n I -->|No| J[End]\n I -->|Yes| B
This method ensures the solver only adds a package to the candidates if it's found in the highest priority channel available. If you have 10 channels and the package is found in the 5th channel it will exclude the next 5 channels from the candidates if they also contain the package.
"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"A common use case is to use pytorch
with nvidia
drivers, while also needing the conda-forge
channel for the main dependencies.
[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0
channel, which is actually only the cuda
package. Then it will get all packages from the nvidia
channel, which is a little more and some packages overlap the nvidia
and conda-forge
channel. Like the cuda-cudart
package, which will now only be retrieved from the nvidia
channel because of the priority logic.
Then it will get the packages from the conda-forge
channel, which is the main channel for the dependencies.
But the user only wants the pytorch packages from the pytorch
channel, which is why pytorch
is added last and the dependencies are added as channel specific dependencies.
We don't define the pytorch
channel before conda-forge
because we want to get as much as possible from the conda-forge
as the pytorch channel is not always shipping the best versions of all packages.
For example, it also ships the ffmpeg
package, but only an old version which doesn't work with the newer pytorch versions. Thus breaking the installation if we would skip the conda-forge
channel for ffmpeg
with the priority logic.
pixi info
prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json
format using the --json
flag, which can be useful for programmatically reading it.
\u279c pixi info\n Pixi version: 0.13.0\n Platform: linux-64\n Virtual packages: __unix=0=0\n : __linux=6.5.12=0\n : __glibc=2.36=0\n : __cuda=12.3=0\n : __archspec=1=x86_64\n Cache dir: /home/user/.cache/rattler/cache\n Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n Version: 0.13.0\n Manifest file: /home/user/development/pixi/pixi.toml\n Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n Features: default\n Channels: conda-forge\n Dependency count: 10\n Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n Target platforms: linux-64, osx-arm64, win-64, osx-64\n Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"The first part of the info output is information that is always available and tells you what pixi can read on your machine.
"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.
"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"The virtual packages that pixi can find on your machine.
In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are being used in the solve step to find if a package can be installed on the machine. A simple example: When a package depends on Cuda drivers being present on the host machine it can do that by depending on the __cuda
virtual package. In that case, if pixi cannot find the __cuda
virtual package on your machine the installation will fail.
Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools. Normally the locations would be:
Platform Value Linux$XDG_CACHE_HOME/rattler
or $HOME
/.cache/rattler macOS $HOME
/Library/Caches/rattler Windows {FOLDERID_LocalAppData}/rattler
When your system is filling up you can easily remove this folder. It will re-download everything it needs the next time you install a project.
"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"Check the authentication documentation
"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"[requires --extended
]
The size of the previously mentioned \"Cache dir\" in Mebibytes.
"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"Everything below Project
is info about the project you're currently in. This info is only available if your path has a manifest file (pixi.toml
).
The path to the manifest file that describes the project. For now, this can only be pixi.toml
.
The last time the lockfile was updated, either manually or by pixi itself.
"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"The environment info defined per environment. If you don't have any environments defined, this will only show the default
environment.
This lists which features are enabled in the environment. For the default this is only default
The list of channels used in this environment.
"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"The amount of dependencies defined that are defined for this environment (not the amount of installed dependencies).
"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"The list of dependencies defined for this environment.
"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"The platforms the project has defined.
"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"We created prefix-dev/setup-pixi to facilitate using pixi in CI.
"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n pixi-version: v0.16.1\n cache: true\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n
Pin your action versions
Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.5.1
) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.
Put the following in your .github/dependabot.yml
file to enable Dependabot for your GitHub Actions:
version: 2\nupdates:\n - package-ecosystem: github-actions\n directory: /\n schedule:\n interval: monthly # (1)!\n groups:\n dependencies:\n patterns:\n - \"*\"\n
daily
, weekly
To see all available input arguments, see the action.yml
file in setup-pixi
. The most important features are described below.
The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock
file is present. It will then use the pixi.lock
file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache
input argument.
Customize your cache key
If you need to customize your cache-key, you can use the cache-key
input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>
.
Only save caches on main
In order to not exceed the 10 GB cache size limit as fast, you might want to restrict when the cache is saved. This can be done by setting the cache-write
argument.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n cache: true\n cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments
input argument. This will install all environments that are specified and cache them.
[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"The following example will install the py311
and py312
environments in different jobs.
test:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"The following example will install both the py311
and the py312
environment on the runner.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: >- # (1)!\n py311\n py312\n- run: |\n pixi run -e py311 test\n pixi run -e py312 test\n
separated by spaces, equivalent to
environments: py311 py312\n
Caching behavior if you don't specify environments
If you don't specify any environment, the default
environment will be installed and cached, even if you use other environments.
There are currently three ways to authenticate with pixi:
For more information, see Authentication.
Handle secrets with care
Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }}
syntax. These secrets will always be masked in the logs.
Specify the token using the auth-token
input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"Specify the username and password using the auth-username
and auth-password
input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments with artifactory for example.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: custom-artifactory.com\n auth-username: ${{ secrets.PIXI_USERNAME }}\n auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"Specify the conda-token using the conda-token
input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel
) is used at anaconda.org or with quetz instances.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: anaconda.org # (1)!\n conda-token: ${{ secrets.CONDA_TOKEN }}\n
setup-pixi
allows you to run command inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}
. This can be useful if you want to run commands inside of the pixi environment, but don't want to use the pixi run
command for each command.
- run: | # (1)!\n python --version\n pip install -e --no-deps .\n shell: pixi run bash -e {0}\n
You can even run Python scripts like this:
- run: | # (1)!\n import my_package\n print(\"Hello world!\")\n shell: pixi run python {0}\n
If you want to use PowerShell, you need to specify -Command
as well.
- run: | # (1)!\n python --version | Select-String \"3.11\"\n shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
How does it work under the hood?
Under the hood, the shell: xyz {0}
option is implemented by creating a temporary script file and calling xyz
with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0}
directly but instead have to use shell: pixi run bash {0}
. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell
in the documentation. See the official documentation and ADR 0277 for more information about how the shell:
input works in GitHub Actions.
--frozen
and --locked
","text":"You can specify whether setup-pixi
should run pixi install --frozen
or pixi install --locked
depending on the frozen
or the locked
input argument. See the official documentation for more information about the --frozen
and --locked
flags.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n locked: true\n # or\n frozen: true\n
If you don't specify anything, the default behavior is to run pixi install --locked
if a pixi.lock
file is present and pixi install
otherwise.
There are two types of debug logging that you can enable.
"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"The first one is the debug logging of the action itself. This can be enabled by for the action by re-running the action in debug mode:
Debug logging documentation
For more information about debug logging in GitHub Actions, see the official documentation.
"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"The second type is the debug logging of the pixi executable. This can be specified by setting the log-level
input.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n log-level: vvv # (1)!\n
q
, default
, v
, vv
, or vvv
.If nothing is specified, log-level
will default to default
or vv
depending on whether debug logging is enabled for the action.
On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup
input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).
If you set post-cleanup
to true
, the action will delete the following files:
.pixi
environment~/.rattler
If nothing is specified, post-cleanup
will default to true
.
On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi
to do this.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n post-cleanup: true\n pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
${{ runner.temp }}\\Scripts\\pixi.exe
on Windows
If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi
repository.
Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong into the project file). The configuration is loaded in the following order:
~/.config/pixi/config.toml
on Linux, dependent on XDG_CONFIG_HOME)~/.pixi/config.toml
(or $PIXI_HOME/config.toml
if the PIXI_HOME
environment variable is set)$PIXI_PROJECT/.pixi/config.toml
--tls-no-verify
, --change-ps1=false
etc.)
Note
To find the locations where pixi
looks for configuration files, run pixi
with -v
or --verbose
.
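The load order above can be sketched as a key-wise merge, assuming that sources loaded later (project config, then CLI flags) override earlier ones — the ordering suggests this, but the text does not state it explicitly:

```python
def merge_config(sources: list[dict]) -> dict:
    # Apply sources in load order; a later source overrides keys
    # already set by an earlier one.
    merged: dict = {}
    for source in sources:
        merged.update(source)
    return merged

global_cfg = {"change_ps1": True, "tls_no_verify": False}
project_cfg = {"tls_no_verify": True}
cli_flags = {"change_ps1": False}
cfg = merge_config([global_cfg, project_cfg, cli_flags])
```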
The following reference describes all available configuration options.
# The default channels to select when running `pixi init` or `pixi global install`.\n# This defaults to only conda-forge.\ndefault_channels = [\"conda-forge\"]\n\n# When set to false, the `(pixi)` prefix in the shell prompt is removed.\n# This applies to the `pixi shell` subcommand.\n# You can override this from the CLI with `--change-ps1`.\nchange_ps1 = true\n\n# When set to true, the TLS certificates are not verified. Note that this is a\n# security risk and should only be used for testing purposes or internal networks.\n# You can override this from the CLI with `--tls-no-verify`.\ntls_no_verify = false\n\n# Override from where the authentication information is loaded.\n# Usually we try to use the keyring to load authentication data from, and only use a JSON\n# file as fallback. This option allows you to force the use of a JSON file.\n# Read more in the authentication section.\nauthentication_override_file = \"/path/to/your/override.json\"\n
"},{"location":"advanced/multi_platform_configuration/","title":"Multi platform config","text":"Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.
Here is an example pixi.toml
that highlights some of the features:
[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"advanced/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"The project.platforms
defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lockfile.
Running pixi install
on a platform that is not configured will warn the user that it is not set up for that platform:
\u276f pixi install\n \u00d7 the project is not configured for your current platform\n \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n \u00b7 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u00b7 \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n \u2570\u2500\u2500\u2500\u2500\n help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"advanced/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms
then pixi will throw an error.
It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.
pixi.toml[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n
In the above example, we specify that we depend on msmpi
only on Windows. We also specifically want python
on 3.8
when installing on Windows. This will overwrite the dependencies from the generic set of dependencies. This will not touch any of the other platforms.
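The override behavior can be sketched as a dictionary merge (a simplified illustration; pixi's real resolver works on match specs, not plain dicts):

```python
def resolve_dependencies(generic: dict, target: dict) -> dict:
    # Target-specific entries overwrite matching generic entries;
    # everything else carries over unchanged.
    return {**generic, **target}

deps = resolve_dependencies(
    {"python": ">=3.8"},               # [dependencies]
    {"python": "3.8", "msmpi": "*"},   # [target.win-64.dependencies]
)
```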
You can use pixi's CLI to add these dependencies to the pixi.toml
pixi add --platform win-64 posix\n
This also works for the host
and build
dependencies.
pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n
This results in the following:
pixi.toml[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"advanced/multi_platform_configuration/#activation","title":"Activation","text":"Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your projects. Generated activation scripts are often in this category, default scripts in unix are bash
and for windows they are bat
To deal with this, you can define your activation scripts using the target definition.
pixi.toml
[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64
it will only execute the target scripts, not the scripts specified in the default activation.scripts
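The replace-not-merge behavior can be sketched like this (an illustrative sketch; the function name is hypothetical):

```python
def activation_scripts(platform, default_scripts, target_scripts):
    # A matching target replaces the default script list entirely;
    # the two lists are not merged.
    return target_scripts.get(platform, default_scripts)

scripts = activation_scripts(
    "win-64",
    ["setup.sh", "local_setup.bash"],
    {"win-64": ["setup.bat", "local_setup.bat"]},
)
```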
"},{"location":"design_proposals/multi_environment_proposal/","title":"Proposal Design: Multi Environment Support","text":""},{"location":"design_proposals/multi_environment_proposal/#objective","title":"Objective","text":"The aim is to introduce an environment set mechanism in the pixi
package manager. This mechanism will enable clear, conflict-free management of dependencies tailored to specific environments, while also maintaining the integrity of fixed lockfiles.
There are multiple scenarios where multiple environments are useful.
py39
and py310
or polars 0.12
and 0.13
.lint
or docs
.dev
.prod
and test-prod
where test-prod
is a strict superset of prod
.cuda
environment and a cpu
environment.
This prepares pixi
for use in large projects with multiple use-cases, multiple developers and different CI needs.
Important
This is a proposal, not a final design. The proposal is open for discussion and will be updated based on the feedback.
"},{"location":"design_proposals/multi_environment_proposal/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"Introduce environment sets into the pixi.toml
this describes environments based on feature
's. Introduce features into the pixi.toml
that can describe parts of environments. As an environment goes beyond just dependencies
the features
should be described including the following fields:
dependencies
: The conda package dependenciespypi-dependencies
: The pypi package dependenciessystem-requirements
: The system requirements of the environmentactivation
: The activation information for the environmentplatforms
: The platforms the environment can be run on.channels
: The channels used to create the environment. Adding the priority
field to the channels to allow concatenation of channels instead of overwriting.target
: All the above features but also separated by targets.tasks
: Feature specific tasks, tasks in one environment are selected as default tasks for the environment.[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}] # Results in: [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Define tasks as defaults of an environment[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environments test test`\n
The environment definition should contain the following fields:
features: Vec<Feature>
: The features that are included in the environment set, which is also the default field in the environments.solve-group: String
: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies. For instance when testing a production environment with additional test dependencies.[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
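The implicit-default rule shown in the comments above can be sketched as follows (an illustrative sketch; the function name is hypothetical):

```python
def environment_features(features: list[str]) -> list[str]:
    # The `default` feature is implicitly appended to every
    # environment unless it is already listed explicitly.
    if "default" in features:
        return features
    return features + ["default"]
```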
Testing a production environment with additional dependencies[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
Creating environments without a default environment[dependencies]\n# Keep empty or undefined to create an empty environment.\n\n[feature.base.dependencies]\npython = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom default\ndefault = [\"base\"]\n# Create a custom environment which only has the `lint` feature as the default feature is empty.\nlint = [\"lint\"]\n
"},{"location":"design_proposals/multi_environment_proposal/#lockfile-structure","title":"Lockfile Structure","text":"Within the pixi.lock
file, a package may now include an additional environments
field, specifying the environments to which it belongs. To avoid duplication, the package's environments
field may contain multiple environments so the lockfile is of minimal size.
- platform: linux-64\n name: pre-commit\n version: 3.3.3\n category: main\n environments:\n - dev\n - test\n - lint\n ...:\n- platform: linux-64\n name: python\n version: 3.9.3\n category: main\n environments:\n - dev\n - test\n - lint\n - py39\n - default\n ...:\n
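The deduplication idea can be sketched by inverting an environment-to-packages mapping (a simplified illustration; the real lockfile entries carry much more metadata):

```python
def lock_entries(env_packages: dict) -> dict:
    # Invert an environment -> packages mapping so each package is
    # stored once, with the list of environments it belongs to.
    by_package: dict = {}
    for env, packages in env_packages.items():
        for pkg in packages:
            by_package.setdefault(pkg, []).append(env)
    return by_package

entries = lock_entries({
    "dev": ["python", "pre-commit"],
    "py39": ["python"],
})
```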
"},{"location":"design_proposals/multi_environment_proposal/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"Users can manually activate the desired environment via command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user the cli would look like this:
Default behaviorpixi run python\n# Runs python in the `default` environment\n
Activating a specific environmentpixi run -e test pytest\npixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment
pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environmentpixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n
Interactive selection of environments if task is in multiple environments# In the scenario where test is a task in multiple environments, interactive selection should be used.\npixi run test\n# Which env?\n# 1. test\n# 2. test39\n
"},{"location":"design_proposals/multi_environment_proposal/#important-links","title":"Important links","text":"In polarify
they want to test multiple versions combined with multiple versions of polars. This is currently done by using a matrix in GitHub actions. This can be replaced by using multiple environments.
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.ymljobs:\n tests-per-env:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n - name: Run tasks\n run: |\n pixi run --environment ${{ matrix.environment }} test\n tests-with-multiple-envs:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: pl017 pl018\n - run: |\n pixi run -e pl017 test\n pixi run -e pl018 test\n
Test vs Production example
This is an example of a project that has a test
feature and prod
environment. The prod
environment is a production environment that contains the run dependencies. The test
feature is a set of dependencies and tasks that we want to put on top of the previously solved prod
environment. This is a common use case where we want to test the production environment with additional dependencies.
pixi.toml
[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI you would run the following commands: pixi run postinstall-e && pixi run test\n
Locally you would run the following command: pixi run postinstall-e && pixi run dev\n
Then in a Dockerfile you would run the following command: Dockerfile
FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n
Multiple machines from one project This is an example for an ML project that should be executable on a machine that supports cuda
and mlx
. It should also be executable on machines that don't support cuda
or mlx
, we use the cpu
feature for this. pixi.toml
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.5.0,<0.6.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machinepixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlxpixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlxpixi run train-model\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":" The cpp-sdl
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/cpp-sdl\n
Run the start
command
pixi run start\n
Using the depends_on
feature, you only need to run the start
task, but under the hood it is running the following tasks.
# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
"},{"location":"examples/opencv/","title":"Opencv example","text":"The opencv
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"Run the start
command to start the face detection algorithm.
pixi run start\n
The screen that starts should look like this:
Check out the webcame_capture.py
to see how we detect a face.
Next to face recognition, a camera calibration example is also included.
You'll need a checkerboard for this to work. Print this:
Then run
pixi run calibrate\n
To take a picture for calibration, press SPACE
. Do this approximately 10 times with the chessboard in view of the camera.
After that, press ESC
, which will start the calibration.
When the calibration is done, the camera will be used again to find the distance to the checkerboard.
"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"The nav2
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/ros2-nav2\n
Run the start
command
pixi run start\n
"},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"You can use PyCharm with pixi environments by using the conda
shim provided by the pixi-pycharm package.
Windows support
Windows is currently not supported, see pavelzw/pixi-pycharm #5. Only Linux and macOS are supported.
"},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"To get started, add pixi-pycharm
to your pixi project.
pixi add pixi-pycharm\n
This will ensure that the conda shim is installed in your project's environment.
could not determine any available versions for pixi-pycharm on win-64
If you get the error could not determine any available versions for pixi-pycharm on win-64
when running pixi add pixi-pycharm
(even when you're not on Windows), this is because the package is not available on Windows and pixi tries to solve the environment for all platforms. If you still want to use it in your pixi project (and are on Linux/macOS), you can add the following to your pixi.toml
:
[target.unix.dependencies]\npixi-pycharm = \"*\"\n
This will tell pixi to only use this dependency on Unix platforms.
Having pixi-pycharm
installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda
file in your pixi environment. You can get the path using the following command:
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
This is an executable that tricks PyCharm into thinking it's the proper conda
executable. Under the hood it redirects all calls to the corresponding pixi
equivalent.
Use the conda shim from this pixi project
Please make sure that this is the conda
shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.
Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.
PyCharm should now be able to show you the installed packages as well.
You can now run your programs and tests as usual.
"},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"
If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.
You can then specify the corresponding environment in the bottom right corner of the PyCharm window.
"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"
When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also come up that you have multiple environments with the same name.
It is recommended to rename the environments to something unique.
"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"Logs are written to ~/.cache/pixi-pycharm.log
. You can use them to debug problems. Please attach the logs when filing a bug report.
Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Pixi works cross-platform on Windows, macOS, and Linux.
"},{"location":"#installation","title":"Installation","text":"To install pixi
you can run the following command in your terminal:
curl -fsSL https://pixi.sh/install.sh | bash\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to ~/.pixi/bin
. If this directory does not already exist, the script will create it.
The script will also update your ~/.bash_profile
to include ~/.pixi/bin
in your PATH, allowing you to invoke the pixi
command from anywhere.
PowerShell
:
iwr -useb https://pixi.sh/install.ps1 | iex\n
winget
: winget install prefix-dev.pixi\n
The above invocation will automatically download the latest version of pixi
, extract it, and move the pixi
binary to LocalAppData/pixi/bin
. If this directory does not already exist, the script will create it. The command will also automatically add LocalAppData/pixi/bin
to your path allowing you to invoke pixi
from anywhere.
Tip
You might need to restart your terminal or source your shell for the changes to take effect.
"},{"location":"#autocompletion","title":"Autocompletion","text":"To get autocompletion run:
Linux & macOSWindows# Pick your shell (use `echo $SHELL` to find the shell you are using.):\necho 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\necho 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\necho 'pixi completion --shell fish | source' >> ~/.config/fish/config.fish\necho 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n
PowerShell:
Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n
Failure because no profile file exists
Make sure your profile file exists, otherwise create it with:
New-Item -Path $PROFILE -ItemType File -Force\n
And then restart the shell or source the shell config file.
"},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"Although we recommend installing pixi through the above method we also provide additional installation methods.
"},{"location":"#homebrew","title":"Homebrew","text":"Pixi is available via homebrew. To install pixi via homebrew simply run:
brew install pixi\n
"},{"location":"#windows-installer","title":"Windows installer","text":"We provide an msi
installer on our GitHub releases page. The installer will download pixi and add it to the path.
pixi is 100% written in Rust, and therefore it can be installed, built, and tested with cargo. To start using pixi from a source build, run:
cargo install --locked --git https://github.com/prefix-dev/pixi.git\n
or when you want to make changes use:
cargo build\ncargo test\n
If you have any issues building because of the dependency on rattler
, check out its compile steps.
Updating is as simple as installing: rerunning the installation script gets you the latest version.
Linux & macOSWindowscurl -fsSL https://pixi.sh/install.sh | bash\n
Or get a specific pixi version using: export PIXI_VERSION=vX.Y.Z && curl -fsSL https://pixi.sh/install.sh | bash\n
PowerShell:
iwr -useb https://pixi.sh/install.ps1 | iex\n
Or get a specific pixi version using: PowerShell: $Env:PIXI_VERSION=\"vX.Y.Z\"; iwr -useb https://pixi.sh/install.ps1 | iex\n
Note
If you used a package manager like brew
, mamba
, conda
, or paru
to install pixi
, use its built-in update mechanism, e.g. brew upgrade pixi
.
To uninstall pixi from your system, simply remove the binary.
Linux & macOSWindowsrm ~/.pixi/bin/pixi\n
$PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n
After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi
directory and remove the directory from your path.
When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n
Customize your badge
To further customize the look and feel of your badge, you can add &style=<custom-style>
at the end of the URL. See the documentation on shields.io for more info.
scipy
port using xtensor
conda
, mamba
, poetry
, pip
","text":"Tool Installs python Builds packages Runs predefined tasks Has lockfiles builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi
","text":"Starting with the name prefix
we iterated until we had a name that was easy to pronounce, spell, and remember. There also wasn't a CLI tool yet using that name. Unlike px
, pex
, pax
, etc. We think it sparks curiosity and fun, if you don't agree, I'm sorry, but you can always alias it to whatever you like.
alias not_pixi=\"pixi\"\n
PowerShell:
New-Alias -Name not_pixi -Value pixi\n
"},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build
","text":"TL;DR: It's coming we promise!
pixi build
is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool which we're creating with rattler-build
which will be used as a library in pixi.
Ensure you've got pixi
set up. If running pixi
doesn't show the help, see the getting started guide.
pixi\n
Initialize a new project and navigate to the project directory.
pixi init pixi-hello-world\ncd pixi-hello-world\n
Add the dependencies you would like to use.
pixi add python\n
Create a file named hello_world.py
in the directory and paste the following code into the file.
def hello():\n print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n hello()\n
Run the code inside the environment.
pixi run python hello_world.py\n
You can also put this run command in a task.
pixi task add hello python hello_world.py\n
After adding the task, you can run the task using its name.
pixi run hello\n
Use the shell
command to activate the environment and start a new shell in there.
pixi shell\npython\nexit\n
You've just learned the basic features of pixi:
Feel free to play around with what you just learned like adding more tasks, dependencies or code.
Happy coding!
"},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"Use pixi to install tools on your machine.
Some notable examples:
# Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
"},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.
- uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n
See the GitHub Actions for more details.
"},{"location":"cli/","title":"Commands","text":""},{"location":"cli/#global-options","title":"Global options","text":"--verbose (-v|vv|vvv)
Increase the verbosity of the output messages; -v, -vv and -vvv increase the verbosity level respectively.--help (-h)
Shows help information, use -h
to get the short version of the help.--version (-V)
: shows the version of pixi that is used.--quiet (-q)
: Decreases the amount of output.--color <COLOR>
: Whether the log needs to be colored [env: PIXI_COLOR=
] [default: auto
] [possible values: always, never, auto]. Pixi also honors the FORCE_COLOR
and NO_COLOR
environment variables. They both take precedence over --color
and PIXI_COLOR
.init
","text":"This command is used to create a new project. It initializes a pixi.toml
file and also prepares a .gitignore
to prevent the environment from being added to git
.
[PATH]
: Where to place the project (defaults to current path) [default: .]--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--platform <PLATFORM> (-p)
: specify a platform that the project supports. (Allowed to be used more than once)--import <ENV_FILE> (-i)
: Import an existing conda environment file, e.g. environment.yml
.Importing an environment.yml
When importing an environment, the pixi.toml
will be created with the dependencies from the environment file. The pixi.lock
will be created when you install the environment. We don't support git+
URLs as dependencies for pip packages. For the defaults
channel we use main
, r
and msys2
as the default channels.
pixi init myproject\npixi init ~/myproject\npixi init # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\n
"},{"location":"cli/#add","title":"add
","text":"Adds dependencies to the pixi.toml
. It will only add the package if its version constraint works with the rest of the dependencies in the project. More info on multi-platform configuration.
<SPECS>
: The package(s) to add, space separated. The version constraint is optional.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package. Parses dependencies as PEP508 requirements, supporting extras and versions. See configuration for details.--no-install
: Don't install the package to the environment, only add the package to the lock-file.--no-lockfile-update
: Don't update the lock-file, implies the --no-install
flag.--platform <PLATFORM> (-p)
: The platform for which the dependency should be added. (Allowed to be used more than once)--feature <FEATURE> (-f)
: The feature for which the dependency should be added.pixi add numpy\npixi add numpy pandas \"pytorch>=1.8\"\npixi add \"numpy>=1.22,<1.24\"\npixi add --manifest-path ~/myproject/pixi.toml numpy\npixi add --host \"python>=3.9.0\"\npixi add --build cmake\npixi add --pypi requests[security]\npixi add --platform osx-64 --build clang\npixi add --no-install numpy\npixi add --no-lockfile-update numpy\npixi add --feature featurex numpy\n
"},{"location":"cli/#install","title":"install
","text":"Installs all dependencies specified in the lockfile pixi.lock
. Which gets generated on pixi add
or when you manually change the pixi.toml
file and run pixi install
.
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\n
"},{"location":"cli/#run","title":"run
","text":"The run
command first checks if the environment is ready to use. If you haven't run pixi install
yet, the run command will do that for you. The custom tasks defined in the pixi.toml
are also available through the run command.
You cannot run pixi run source setup.bash
as source
is not available in the deno_task_shell
commands and is not an executable.
[TASK]...
The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to run the task in, if none are provided the default environment will be used or a selector will be given to select the right environment.pixi run python\npixi run cowpy \"Hey pixi user\"\npixi run --manifest-path ~/myproject/pixi.toml python\npixi run --frozen python\npixi run --locked python\n# If you have specified a custom task in the pixi.toml you can run it with run as well\npixi run build\n# Extra arguments will be passed to the tasks command.\npixi run task argument1 argument2\n\n# If you have multiple environments you can select the right one with the --environment flag.\npixi run --environment cuda python\n
Info
In pixi
the deno_task_shell
is the underlying runner of the run command. Check out their documentation for the syntax and available commands. This is done so that run commands work across all platforms.
Cross environment tasks
If you're using the depends_on
feature of the tasks
, the tasks will be run in the order in which you specified them. The depends_on
can be used across environments, e.g. when you have this pixi.toml
:
[tasks]\nstart = { cmd = \"python start.py\", depends_on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n
Then you're able to run the build
from the build
environment and start
from the default environment by only calling:
pixi run start\n
"},{"location":"cli/#remove","title":"remove
","text":"Removes dependencies from the pixi.toml
.
<DEPS>...
: List of dependencies you wish to remove from the project.--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--host
: Specifies a host dependency, important for building a package.--build
: Specifies a build dependency, important for building a package.--pypi
: Specifies a PyPI dependency, not a conda package.--platform <PLATFORM> (-p)
: The platform from which the dependency should be removed.--feature <FEATURE> (-f)
: The feature from which the dependency should be removed.pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\n
"},{"location":"cli/#task","title":"task
","text":"If you want to make a shorthand for a specific command, you can add a task for it.
"},{"location":"cli/#options_5","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.task add
","text":"Add a task to the pixi.toml
, use --depends-on
to add tasks you want to run before this task, e.g. build before an execute task.
<NAME>
: The name of the task.<COMMAND>
: The command to run. This can be more than one word.Info
If you are using $
for env variables, they will be resolved before being added to the task. If you want to use $
in the task you need to escape it with a \\
, e.g. echo \\$HOME
.
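The same expansion rule can be seen with plain shell quoting (a minimal sketch, independent of pixi):

```shell
# With an unescaped $, the shell expands $HOME immediately,
# so the stored string already contains the expanded path.
expanded="echo $HOME"
# With an escaped \$, the literal text `$HOME` is stored instead,
# and is only expanded when the command is eventually run.
literal="echo \$HOME"
echo "$expanded"
echo "$literal"
```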
--platform <PLATFORM> (-p)
: the platform for which this task should be added.--feature <FEATURE> (-f)
: the feature for which the task is added; if none is provided, it is added to the default tasks.--depends-on <DEPENDS_ON>
: the task that should run before the one you're adding.--cwd <CWD>
: the working directory for the task relative to the root of the project.pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\n
This adds the following to the pixi.toml
:
[tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends_on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n
Which you can then run with the run
command:
pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
"},{"location":"cli/#task-remove","title":"task remove
","text":"Remove the task from the pixi.toml
<NAMES>
: The names of the tasks, space separated.--platform <PLATFORM> (-p)
: the platform for which this task is removed.--feature <FEATURE> (-f)
: the feature for which the task is removed.pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
"},{"location":"cli/#task-alias","title":"task alias
","text":"Create an alias for a task.
"},{"location":"cli/#arguments_6","title":"Arguments","text":"<ALIAS>
: The alias name<DEPENDS_ON>
: The names of the tasks you want to execute on this alias, order counts, first one runs first.--platform <PLATFORM> (-p)
: the platform for which this alias is created.pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
"},{"location":"cli/#task-list","title":"task list
","text":"List all tasks in the project.
"},{"location":"cli/#options_9","title":"Options","text":"--environment
(-e
): the environment whose tasks to list; if none is provided the default tasks will be listed.--summary
(-s
): format the output to be machine parsable. (Used in the autocompletion of pixi run
).pixi task list\npixi task list --environment cuda\npixi task list --summary\n
"},{"location":"cli/#list","title":"list
","text":"List project's packages. Highlighted packages are explicit dependencies.
"},{"location":"cli/#options_10","title":"Options","text":"--platform <PLATFORM> (-p)
: The platform to list packages for. Defaults to the current platform.--json
: Whether to output in json format.--json-pretty
: Whether to output in pretty json format.--sort-by <SORT_BY>
: Sorting strategy [default: name] [possible values: size, name, type]--manifest-path <MANIFEST_PATH>
: The path to pixi.toml
, by default it searches for one in the parent directories.--environment
(-e
): The environment whose packages to list; if none is provided the default environment's packages will be listed.--frozen
: Install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: Only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--no-install
: Don't install the environment for pypi solving, only update the lock-file if it can solve without installing. (Implied by --frozen
and --locked
)pixi list\npixi list --json-pretty\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n
Output will look like this, where python
will be green as it is the package that was explicitly added to the pixi.toml
: \u279c pixi list\n Package Version Build Size Kind Source\n _libgcc_mutex 0.1 conda_forge 2.5 KiB conda _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex 4.5 2_gnu 23.1 KiB conda _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2 1.0.8 hd590300_5 248.3 KiB conda bzip2-1.0.8-hd590300_5.conda\n ca-certificates 2023.11.17 hbcca054_0 150.5 KiB conda ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64 2.40 h41732ed_0 688.2 KiB conda ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat 2.5.0 hcb278e6_1 76.2 KiB conda libexpat-2.5.0-hcb278e6_1.conda\n libffi 3.4.2 h7f98852_5 56.9 KiB conda libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng 13.2.0 h807b86a_4 755.7 KiB conda libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp 13.2.0 h807b86a_4 412.2 KiB conda libgomp-13.2.0-h807b86a_4.conda\n libnsl 2.0.1 hd590300_0 32.6 KiB conda libnsl-2.0.1-hd590300_0.conda\n libsqlite 3.44.2 h2797004_0 826 KiB conda libsqlite-3.44.2-h2797004_0.conda\n libuuid 2.38.1 h0b41bf4_0 32.8 KiB conda libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt 4.4.36 hd590300_1 98 KiB conda libxcrypt-4.4.36-hd590300_1.conda\n libzlib 1.2.13 hd590300_5 60.1 KiB conda libzlib-1.2.13-hd590300_5.conda\n ncurses 6.4 h59595ed_2 863.7 KiB conda ncurses-6.4-h59595ed_2.conda\n openssl 3.2.0 hd590300_1 2.7 MiB conda openssl-3.2.0-hd590300_1.conda\n python 3.12.1 hab00c5b_1_cpython 30.8 MiB conda python-3.12.1-hab00c5b_1_cpython.conda\n readline 8.2 h8228510_1 274.9 KiB conda readline-8.2-h8228510_1.conda\n tk 8.6.13 noxft_h4845f30_101 3.2 MiB conda tk-8.6.13-noxft_h4845f30_101.conda\n tzdata 2023d h0c530f3_0 116.8 KiB conda tzdata-2023d-h0c530f3_0.conda\n xz 5.2.6 h166bdaf_0 408.6 KiB conda xz-5.2.6-h166bdaf_0.tar.bz2\n
"},{"location":"cli/#shell","title":"shell
","text":"This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit
.
--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate the shell in, if none are provided the default environment will be used or a selector will be given to select the right environment.pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
"},{"location":"cli/#shell-hook","title":"shell-hook
","text":"This command prints the activation script of an environment.
"},{"location":"cli/#options_12","title":"Options","text":"--shell <SHELL> (-s)
: The shell for which the activation script should be printed. Defaults to the current shell. Currently supported variants: [bash
, zsh
, xonsh
, cmd
, powershell
, fish
, nushell
]--manifest-path
: the path to pixi.toml
, by default it searches for one in the parent directories.--frozen
: install the environment as defined in the lockfile, without checking the status of the lockfile. It can also be controlled by the PIXI_FROZEN
environment variable (example: PIXI_FROZEN=true
).--locked
: only install if the pixi.lock
is up-to-date with the pixi.toml
1. It can also be controlled by the PIXI_LOCKED
environment variable (example: PIXI_LOCKED=true
). Conflicts with --frozen
.--environment <ENVIRONMENT> (-e)
: The environment to activate, if none are provided the default environment will be used or a selector will be given to select the right environment.pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\n
Example use-case, when you want to get rid of the pixi
executable in a Docker container. pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n
"},{"location":"cli/#search","title":"search
","text":"Search for a package; the output will list the latest version of the package.
"},{"location":"cli/#arguments_7","title":"Arguments","text":"<PACKAGE>
: Name of package to search, it's possible to use wildcards (*
).--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)--limit <LIMIT> (-l)
: optionally limit the number of search results--platform <PLATFORM> (-p)
: specify a platform that you want to search for. (default: current platform)pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
"},{"location":"cli/#self-update","title":"self-update
","text":"Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi
), pixi won't update to prevent breaking the current installation (Homebrew, etc.). The behaviour can be overridden with the --force
flag
--version <VERSION>
: The desired version (to downgrade or upgrade to). Update to the latest version if not specified.--force
: Force the update even if the pixi binary is not found in the default location.pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
"},{"location":"cli/#info","title":"info
","text":"Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.
"},{"location":"cli/#options_15","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.--extended
: extend the information with more slow queries to the system, like directory sizes.--json
: Get a machine-readable version of the information as output.pixi info\npixi info --json --extended\n
"},{"location":"cli/#upload","title":"upload
","text":"Upload a package to a prefix.dev channel
"},{"location":"cli/#arguments_8","title":"Arguments","text":"<HOST>
: The host + channel to upload to.<PACKAGE_FILE>
: The package file to upload.pixi upload repo.prefix.dev/my_channel my_package.conda\n
"},{"location":"cli/#auth","title":"auth
","text":"This command is used to authenticate the user's access to remote hosts such as prefix.dev
or anaconda.org
for private channels.
auth login
","text":"Store authentication information for given host.
Tip
The host is the actual hostname, not a channel.
"},{"location":"cli/#arguments_9","title":"Arguments","text":"<HOST>
: The host to authenticate with.--token <TOKEN>
: The token to use for authentication with prefix.dev.--username <USERNAME>
: The username to use for basic HTTP authentication.--password <PASSWORD>
: The password to use for basic HTTP authentication.--conda-token <CONDA_TOKEN>
: The token to use on anaconda.org
/ quetz
authentication.pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaNdHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --user john --password xxxxxx\n
"},{"location":"cli/#auth-logout","title":"auth logout
","text":"Remove authentication information for a given host.
"},{"location":"cli/#arguments_10","title":"Arguments","text":"<HOST>
: The host to authenticate with.pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
"},{"location":"cli/#global","title":"global
","text":"Global is the main entry point for the part of pixi that executes on the global (system) level.
Tip
Binaries and environments installed globally are stored in ~/.pixi
by default, this can be changed by setting the PIXI_HOME
environment variable.
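For example, to relocate that directory (the path below is purely illustrative):

```shell
# Move pixi's global install location by setting PIXI_HOME.
export PIXI_HOME="$HOME/.local/share/pixi"
# Globally installed binaries then land in $PIXI_HOME/bin,
# so that directory needs to be on PATH as well.
export PATH="$PIXI_HOME/bin:$PATH"
```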
global install
","text":"This command installs package(s) into its own environment and adds the binary to PATH
, allowing you to access it anywhere on your system without activating the environment.
1.<PACKAGE>
: The package(s) to install, this can also be a version constraint.
--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. (Allowed to be used more than once)pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n
After using global install, you can use the package you installed anywhere on your system.
"},{"location":"cli/#global-list","title":"global list
","text":"This command shows the currently installed global environments, including the binaries that come with them. A globally installed package/environment can contain multiple binaries, and they will all be listed in the command output. Here is an example of a few installed packages:
> pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n| \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n| \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n| \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n| \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n \u2514\u2500 exec: uv\n
"},{"location":"cli/#global-upgrade","title":"global upgrade
","text":"This command upgrades a globally installed package (to the latest version by default).
"},{"location":"cli/#arguments_12","title":"Arguments","text":"<PACKAGE>
: The package to upgrade.--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
"},{"location":"cli/#global-upgrade-all","title":"global upgrade-all
","text":"This command upgrades all globally installed packages to their latest version.
"},{"location":"cli/#options_19","title":"Options","text":"--channel <CHANNEL> (-c)
: specify a channel that the project uses. Defaults to conda-forge
. Note that the channel the package was installed from will always be used for the upgrade. (Allowed to be used more than once)pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
"},{"location":"cli/#global-remove","title":"global remove
","text":"Removes a package previously installed into a globally accessible location via pixi global install
Use pixi global info
to find out what the package name is that belongs to the tool you want to remove.
<PACKAGE>
: The package(s) to remove.pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
"},{"location":"cli/#project","title":"project
","text":"This subcommand allows you to modify the project configuration through the command line interface.
"},{"location":"cli/#options_20","title":"Options","text":"--manifest-path <MANIFEST_PATH>
: the path to pixi.toml
, by default it searches for one in the parent directories.project channel add
","text":"Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lockfile and the environment is reinstalled.
"},{"location":"cli/#arguments_14","title":"Arguments","text":"<CHANNEL>
: The channels to add, name or URL.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is added.pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
"},{"location":"cli/#project-channel-list","title":"project channel list
","text":"List the channels in the project file
"},{"location":"cli/#options_22","title":"Options","text":"urls
: show the urls of the channels instead of the names.$ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
"},{"location":"cli/#project-channel-remove","title":"project channel remove
","text":"List the channels in the project file
"},{"location":"cli/#arguments_15","title":"Arguments","text":"<CHANNEL>...
: The channels to remove, name(s) or URL(s).--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the channel is removed.pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
"},{"location":"cli/#project-description-get","title":"project description get
","text":"Get the project description.
$ pixi project description get\nPackage management made easy!\n
"},{"location":"cli/#project-description-set","title":"project description set
","text":"Set the project description.
"},{"location":"cli/#arguments_16","title":"Arguments","text":"<DESCRIPTION>
: The description to set.pixi project description set \"my new description\"\n
"},{"location":"cli/#project-platform-add","title":"project platform add
","text":"Adds a platform(s) to the project file and updates the lockfile.
"},{"location":"cli/#arguments_17","title":"Arguments","text":"<PLATFORM>...
: The platforms to add.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be added.pixi project platform add win-64\npixi project platform add --feature test win-64\n
"},{"location":"cli/#project-platform-list","title":"project platform list
","text":"List the platforms in the project file.
$ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
"},{"location":"cli/#project-platform-remove","title":"project platform remove
","text":"Remove platform(s) from the project file and updates the lockfile.
"},{"location":"cli/#arguments_18","title":"Arguments","text":"<PLATFORM>...
: The platforms to remove.--no-install
: do not update the environment, only add changed packages to the lock-file.--feature <FEATURE> (-f)
: The feature for which the platform will be removed.pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
"},{"location":"cli/#project-version-get","title":"project version get
","text":"Get the project version.
$ pixi project version get\n0.11.0\n
"},{"location":"cli/#project-version-set","title":"project version set
","text":"Set the project version.
"},{"location":"cli/#arguments_19","title":"Arguments","text":"<VERSION>
: The version to set.pixi project version set \"0.13.0\"\n
"},{"location":"cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}
","text":"Bump the project version to {MAJOR|MINOR|PATCH}.
pixi project version major\npixi project version minor\npixi project version patch\n
An up-to-date lockfile means that the dependencies in the lockfile are allowed by the dependencies in the manifest file. For example
pixi.toml
with python = \">= 3.11\"
is up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.pixi.toml
with python = \">= 3.12\"
is not up-to-date with a name: python, version: 3.11.0
in the pixi.lock
.Being up-to-date does not mean that the lockfile holds the latest version available on the channel for the given dependency.
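Spelled out as a manifest snippet, the rule above looks like this (illustrative):

```toml
# pixi.toml
[dependencies]
python = ">= 3.11"
# A pixi.lock entry of `name: python, version: 3.11.0` satisfies this
# constraint, so the lockfile is up-to-date. Against `python = ">= 3.12"`
# the same locked version would not, and the lockfile would be stale.
```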
The pixi.toml
is the pixi project configuration file, also known as the project manifest.
A toml
file is structured into different tables. This document explains the usage of the different tables. For more technical documentation, check crates.io.
project
table","text":"The minimally required information in the project
table is:
[project]\nname = \"project-name\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n
"},{"location":"configuration/#name","title":"name
","text":"The name of the project.
[project]\nname = \"project-name\"\n
"},{"location":"configuration/#channels","title":"channels
","text":"This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org
you only need to use the name of the channel directly.
[project]\nchannels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n
Channels situated on the file system are also supported with absolute file paths:
[project]\nchannels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n
To access private or public channels on prefix.dev or Quetz use the url including the hostname:
[project]\nchannels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n
"},{"location":"configuration/#platforms","title":"platforms
","text":"Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lockfile (pixi.lock
).
[project]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
The available platforms are listed here: link"},{"location":"configuration/#version-optional","title":"version
(optional)","text":"The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.
[project]\nversion = \"1.2.3\"\n
"},{"location":"configuration/#authors-optional","title":"authors
(optional)","text":"This is a list of authors of the project.
[project]\nauthors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n
"},{"location":"configuration/#description-optional","title":"description
(optional)","text":"This should contain a short description of the project.
[project]\ndescription = \"A simple description\"\n
"},{"location":"configuration/#license-optional","title":"license
(optional)","text":"The license as a valid SPDX string (e.g. MIT AND Apache-2.0)
[project]\nlicense = \"MIT\"\n
"},{"location":"configuration/#license-file-optional","title":"license-file
(optional)","text":"Relative path to the license file.
[project]\nlicense-file = \"LICENSE.md\"\n
"},{"location":"configuration/#readme-optional","title":"readme
(optional)","text":"Relative path to the README file.
[project]\nreadme = \"README.md\"\n
"},{"location":"configuration/#homepage-optional","title":"homepage
(optional)","text":"URL of the project homepage.
[project]\nhomepage = \"https://pixi.sh\"\n
"},{"location":"configuration/#repository-optional","title":"repository
(optional)","text":"URL of the project source repository.
[project]\nrepository = \"https://github.com/prefix-dev/pixi\"\n
"},{"location":"configuration/#documentation-optional","title":"documentation
(optional)","text":"URL of the project documentation.
[project]\ndocumentation = \"https://pixi.sh\"\n
"},{"location":"configuration/#the-tasks-table","title":"The tasks
table","text":"Tasks are a way to automate certain custom commands in your project. For example, a lint
or format
step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run
and are executed using the deno_task_shell
.
[tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends_on=\"simple\"}\nalias = { depends_on=[\"depending\"]}\n
You can modify this table using pixi task
. Note
Specify different tasks for different platforms using the target table
"},{"location":"configuration/#the-system-requirements-table","title":"Thesystem-requirements
table","text":"The system requirements are used to define minimal system specifications used during dependency resolution. For example, we can define a unix system with a specific minimal libc version. This will be the minimal system specification for the project. System specifications are directly related to the virtual packages.
Currently, the specified defaults are the same as conda-lock's implementation:
default system requirements for linux[system-requirements]\nlinux = \"5.10\"\nlibc = { family=\"glibc\", version=\"2.17\" }\n
default system requirements for windows[system-requirements]\n
default system requirements for osx[system-requirements]\nmacos = \"10.15\"\n
default system requirements for osx-arm64[system-requirements]\nmacos = \"11.0\"\n
Only if a project requires a different set should you define them.
For example, when installing environments on old versions of linux, you may encounter the following error:
\u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '5.10' but the system has version '4.12.14'\n
This suggests that the system requirements for the project should be lowered. To fix this, add the following table to your configuration: [system-requirements]\nlinux = \"4.12.14\"\n
"},{"location":"configuration/#using-cuda-in-pixi","title":"Using Cuda in pixi","text":"If you want to use cuda
in your project you need to add the following to your system-requirements
table:
[system-requirements]\ncuda = \"11\" # or any other version of cuda you want to use\n
This informs the solver that cuda is going to be available, so it can lock it into the lockfile if needed."},{"location":"configuration/#the-dependencies-tables","title":"The dependencies
table(s)","text":"This section defines what dependencies you would like to use for your project.
There are multiple dependencies tables. The default is [dependencies]
, which are dependencies that are shared across platforms.
Dependencies are defined using a VersionSpec. A VersionSpec
combines a Version with an optional operator.
Some examples are:
# Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to (but not including) 1.3.0\npackage1 = \"~=1.2.3\"\n# Use a version larger than 1.2 and lower than or equal to 1.4\npackage2 = \">1.2,<=1.4\"\n# Use a version greater than or equal to 1.2.3, or lower than 1.0.0\npackage3 = \">=1.2.3|<1.0.0\"\n
Dependencies can also be defined as a mapping, using a matchspec:
package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n
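=1.2.3\", channel=\"conda-forge\" }">
A VersionSpec like the ones above can be read mechanically: `,` separates AND constraints and `|` separates OR groups. The following is a rough illustration only, not pixi's actual parser (the real conda version grammar also handles epochs, `~=`, build strings, and more):

```python
# Minimal sketch of evaluating a VersionSpec such as ">1.2,<=1.4".
# Illustration only: plain numeric tuples, no conda-specific semantics.
import operator

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq,
       ">": operator.gt, "<": operator.lt}

def parse(v):
    return tuple(int(p) for p in v.split("."))

def matches(spec, version):
    v = parse(version)
    # "|" separates OR groups; "," separates AND constraints within a group.
    for group in spec.split("|"):
        ok = True
        for constraint in group.split(","):
            for sym in (">=", "<=", "==", ">", "<"):
                if constraint.startswith(sym):
                    ok = ok and OPS[sym](v, parse(constraint[len(sym):]))
                    break
        if ok:
            return True
    return False

print(matches(">1.2,<=1.4", "1.3"))      # True
print(matches(">=1.2.3|<1.0.0", "0.9"))  # True
```

The `parse` and `matches` helpers are hypothetical names introduced here for illustration.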
Tip
The dependencies can be easily added using the pixi add
command line. Running add
for an existing dependency will replace it with the newest version it can use.
Note
To specify different dependencies for different platforms use the target table
"},{"location":"configuration/#dependencies","title":"dependencies
","text":"Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table should you use anything different than conda-forge
. Even if the dependency defines a channel that channel should be added to the project.channels
list.
[dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
"},{"location":"configuration/#pypi-dependencies-beta-feature","title":"pypi-dependencies
(Beta feature)","text":"Details regarding the PyPI integration We use uv
, which is a new fast pip replacement written in Rust.
We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This prevents uv from installing these dependencies itself and ensures it uses the exact versions of these packages during resolution. This is unique amongst conda-based package managers, which usually just call pip from a subprocess.
The uv resolution is included in the lock file directly.
Pixi directly supports depending on PyPI packages; the PyPA calls a distributed package a 'distribution'. There are Source and Binary distributions, both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.
Important considerations
dependencies
table where possible.git
dependencies (git+https://github.com/package-org/package.git
) - Source dependencies - Private PyPI repositoriesThese dependencies don't follow the conda matchspec specification. The version
is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version
is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:
[dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\npytest = \"*\" # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\nrequests = {version = \">= 2.8.1, ==2.8.*\", extras=[\"security\", \"tests\"]}\n
Did you know you can use: add --pypi
? Use the --pypi
flag with the add
command to quickly add PyPI packages from the CLI. E.g. pixi add --pypi flask
The Source Distribution Format is a source-based format (sdist for short) that a package can include alongside the binary wheel format. Because these distributions need to be built, they need a python executable to do this. This is why python needs to be present in a conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++ based python bindings. Think for example of Python SDL2 bindings depending on the C library: SDL2. To help build these dependencies we activate the conda environment that includes these pypi dependencies before resolving. This way when a source distribution depends on gcc
for example, it's used from the conda environment instead of the system.
host-dependencies
","text":"This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.
[host-dependencies]\npython = \"~=3.10.3\"\n
Typical examples of host dependencies are:
python
here and an R package would list mro-base
or r-base
.openssl
, rapidjson
, or xtensor
.build-dependencies
","text":"This table contains dependencies that are needed to build the project. Different from dependencies
and host-dependencies
these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.
[build-dependencies]\ncmake = \"~=3.24\"\n
Typical examples of build dependencies are:
cmake
is invoked on the build machine to generate additional code- or project-files which are then included in the compilation process.Info
The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.
For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64
and your host platform is linux-64
.
activation
table","text":"If you want to run an activation script inside the environment when either doing a pixi run
or pixi shell
these can be defined here. The scripts defined in this table will be sourced when the environment is activated using pixi run
or pixi shell
Note
The activation scripts are run by the system shell interpreter as they run before an environment is available. This means that it runs as cmd.exe
on windows and bash
on linux and osx (Unix). Only .sh
, .bash
and .bat
files are supported.
If you have scripts per platform use the target table.
[activation]\nscripts = [\"env_setup.sh\"]\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n
"},{"location":"configuration/#the-target-table","title":"The target
table","text":"The target table is a table that allows for platform specific configuration. Allowing you to make different sets of tasks or dependencies per platform.
The target table is currently implemented for the following sub-tables:
activation
dependencies
tasks
The target table is defined using [target.PLATFORM.SUB-TABLE]
. E.g. [target.linux-64.dependencies]
The platform can be any of:
win
, osx
, linux
or unix
(unix
matches linux
and osx
)linux-64
, osx-arm64
The sub-table can be any of those specified above.
To make it a bit more clear, let's look at an example below. Currently, pixi combines the top level tables like dependencies
with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, we have cmake
being used for all targets but on osx-64
or osx-arm64
a different version of python will be selected.
[dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n
Here are some more examples:
[target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n
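Conceptually, combining a top-level table with a matching target table is a merge in which target entries add to or overwrite the shared ones. A minimal sketch of that idea (illustrative only; the real merge logic lives inside pixi, and the names below are hypothetical):

```python
# Shared [dependencies] and a platform-specific [target.osx.dependencies]
# table, modeled as plain dicts.
dependencies = {"cmake": "3.26.4", "python": "3.10"}
target_osx = {"python": "3.11"}

def resolve(base, target):
    merged = dict(base)    # start from the shared dependencies
    merged.update(target)  # target entries add or overwrite
    return merged

print(resolve(dependencies, target_osx))
```

On an osx platform the python requirement becomes "3.11" while cmake keeps its shared spec; on every other platform the shared table is used unchanged.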
"},{"location":"configuration/#the-feature-and-environments-tables","title":"The feature
and environments
tables","text":"The feature
table allows you to define features that can be used to create different [environments]
. The [environments]
table allows you to define different environments. The design is explained in this design document.
Simplest example
[feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
This will create an environment called test
that has pytest
installed."},{"location":"configuration/#the-feature-table","title":"The feature
table","text":"The feature
table allows you to define the following fields per feature.
dependencies
: Same as the dependencies.pypi-dependencies
: Same as the pypi-dependencies.system-requirements
: Same as the system-requirements.activation
: Same as the activation.platforms
: Same as the platforms. When adding features together the intersection of the platforms is taken. Be aware that the default
feature is always implied, so this must contain all platforms the project can support.channels
: Same as the channels. Adding the priority
field to the channels allows concatenation of channels instead of overwriting them.target
: Same as the target.tasks
: Same as the tasks.These tables are all also available without the feature
prefix. When those are used we call them the default
feature. This is a protected name you cannot use for your own feature.
[feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nchannels = [\"nvidia\"] # Results in: [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Full feature table but written as separate tables[feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
"},{"location":"configuration/#the-environments-table","title":"The environments
table","text":"The environments
table allows you to define environments that are created using the features defined in the feature
tables.
Important
default
is always implied when creating environments. If you don't want to use the default
feature, you can keep all the non-feature tables empty.
The environments table is defined using the following fields:
features: Vec<Feature>
: The features that are included in the environment set, which is also the default field in the environments.solve-group: String
: The solve group is used to group environments together at the solve stage. This is useful for environments that need the same dependencies but extend them with additional ones, for instance when testing a production environment with extra test dependencies. Shared dependencies will then have the same version in all environments of the same solve group, while each environment contains a different subset of the solve group's dependency set.[environments]\ntest = [\"test\"]\n
Full environments table specification[environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = [\"lint\"]\n
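The solve-group idea from the table above can be sketched as a grouping step: environments that share a solve group are solved together, so their shared packages get identical versions. The data shapes below are hypothetical, for illustration only:

```python
# Group environments by their solve group; environments without one
# are solved on their own (keyed by their own name here).
environments = {
    "test": {"features": ["test"], "solve-group": "test"},
    "prod": {"features": ["prod"], "solve-group": "test"},
    "lint": {"features": ["lint"], "solve-group": None},
}

groups = {}
for name, env in environments.items():
    groups.setdefault(env["solve-group"] or name, []).append(name)

print(groups)  # test and prod share one solve group; lint is solved alone
```

In this sketch `test` and `prod` would be solved together, so a package present in both gets the same locked version in each.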
"},{"location":"configuration/#global-configuration","title":"Global configuration","text":"The global configuration options are documented in the global configuration section.
"},{"location":"environment/","title":"Environments","text":"Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.
"},{"location":"environment/#structure","title":"Structure","text":"A pixi environment is located in the .pixi/envs
directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.
If you look at the .pixi/envs
directory, you will see a directory for each environment, the default
being the one that is normally used; if you specify a custom environment, the name you specified will be used.
.pixi\n\u2514\u2500\u2500 envs\n \u251c\u2500\u2500 cuda\n \u2502 \u251c\u2500\u2500 bin\n \u2502 \u251c\u2500\u2500 conda-meta\n \u2502 \u251c\u2500\u2500 etc\n \u2502 \u251c\u2500\u2500 include\n \u2502 \u251c\u2500\u2500 lib\n \u2502 ...\n \u2514\u2500\u2500 default\n \u251c\u2500\u2500 bin\n \u251c\u2500\u2500 conda-meta\n \u251c\u2500\u2500 etc\n \u251c\u2500\u2500 include\n \u251c\u2500\u2500 lib\n ...\n
These directories are conda environments, and you can use them as such, but you cannot manually edit them; changes should always go through the pixi.toml
. Pixi will always make sure the environment is in sync with the pixi.lock
file. If this is not the case then all the commands that use the environment will automatically update the environment, e.g. pixi run
, pixi shell
.
If you want to clean up the environments, you can simply delete the .pixi/envs
directory, and pixi will recreate the environments when needed.
# either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n
"},{"location":"environment/#activation","title":"Activation","text":"An environment is nothing more than a set of files that are installed into a certain location, that somewhat mimics a global system install. You need to activate the environment to use it. In the most simple sense that mean adding the bin
directory of the environment to the PATH
variable. But there is more to it in a conda environment, as it also sets some environment variables.
To do the activation we have multiple options: - Use the pixi shell
command to open a shell with the environment activated. - Use the pixi shell-hook
command to print the command to activate the environment in your current shell. - Use the pixi run
command to run a command in the environment.
Where the run
command is special as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.
Using the pixi shell-hook
in a pixi project, you would get output similar to the following:
export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
It sets the PATH
and some more environment variables. But more importantly, it also runs activation scripts that are provided by the installed packages. An example of this would be the libglib_activate.sh
script. Thus, just adding the bin
directory to the PATH
is not enough."},{"location":"environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate
-like activation","text":"If you prefer to use the traditional conda activate
-like activation, you could use the pixi shell-hook
command.
$ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n
Warning
It is not encouraged to use the traditional conda activate
-like activation, as there is no reliable way to deactivate the environment afterwards. Use pixi shell
instead.
pixi
with direnv
","text":"This allows you to use pixi
in combination with direnv
. Enter the following into your .envrc
file:
watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
pixi.lock
changes, direnv
invokes the shell-hook again.direnv
ensures that the environment is deactivated when you leave the directory.$ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
"},{"location":"environment/#environment-variables","title":"Environment variables","text":"The following environment variables are set by pixi, when using the pixi run
, pixi shell
, or pixi shell-hook
command:
PIXI_PROJECT_ROOT
: The root directory of the project.PIXI_PROJECT_NAME
: The name of the project.PIXI_PROJECT_MANIFEST
: The path to the manifest file (pixi.toml
).PIXI_PROJECT_VERSION
: The version of the project.PIXI_PROMPT
: The prompt to use in the shell, also used by pixi shell
itself.PIXI_ENVIRONMENT_NAME
: The name of the environment, defaults to default
.PIXI_ENVIRONMENT_PLATFORMS
: The platforms of the environment, as a comma-separated list.CONDA_PREFIX
: The path to the environment. (Used by multiple tools that already understand conda environments)CONDA_DEFAULT_ENV
: The name of the environment. (Used by multiple tools that already understand conda environments)PATH
: We prepend the bin
directory of the environment to the PATH
variable, so you can use the tools installed in the environment directly.Note
Even though the variables are environment variables these cannot be overridden. E.g. you can not change the root of the project by setting PIXI_PROJECT_ROOT
in the environment.
When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock
file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml
and will put the output of the solve step into the pixi.lock
file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:
Pixi solves both the conda
and PyPI
dependencies, where the PyPI
dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler
and rip
library, these control the heavy lifting of the solving process, which is executed by our custom SAT solver: resolvo
. Resolvo
is able to solve multiple ecosystem like conda
and PyPI
. It implements the lazy solving process for PyPI
packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda
way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.
For the [pypi-dependencies]
, rip
implements sdist
building to retrieve the metadata of the packages, and wheel
building to install the packages. For this building step, pixi
requires you to first install python
in the (conda)[dependencies]
section of the pixi.toml
file. This will always be slower than the pure conda solves. So for the best pixi experience you should stay within the [dependencies]
section of the pixi.toml
file.
Pixi caches the packages used in the environment. So if you have multiple projects that use the same packages, pixi will only download the packages once.
The cache is located in the ~/.cache/rattler/cache
directory by default. This location is configurable by setting the PIXI_CACHE_DIR
or RATTLER_CACHE_DIR
environment variable.
When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.
"},{"location":"vision/","title":"Vision","text":"We created pixi
because we want to have a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo
have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.
We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:
We are building on top of the conda packaging ecosystem, this means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community maintained and very open to contributions. It is widely used in data science and scientific computing, robotics and other fields. And has a proven track record.
"},{"location":"vision/#target-languages","title":"Target languages","text":"Essentially, we are language agnostics, we are targeting any language that can be installed with conda. Including: C++, Python, Rust, Zig etc. But we do believe the python ecosystem can benefit from a good package manager that is based on conda. So we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system level packages.
"},{"location":"advanced/advanced_tasks/","title":"Advanced tasks","text":"When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.
Here are some quick examples
pixi.toml[tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n \"cmake\",\n # Use the cross-platform Ninja generator\n \"-G\",\n \"Ninja\",\n # The source is in the root directory\n \"-S\",\n \".\",\n # We want to build in the .build directory\n \"-B\",\n \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends_on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
"},{"location":"advanced/advanced_tasks/#depends-on","title":"Depends on","text":"Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.
An obvious example is compiling before running an application.
Checkout our cpp_sdl
example for a running example. In that package we have some tasks that depend on each other, so we can assure that when you run pixi run start
everything is set up as expected.
pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n
Results in the following lines added to the pixi.toml
[tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends_on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends_on = [\"build\"] }\n
pixi run start\n
The tasks will be executed after each other:
configure
because it has no dependencies.build
as it only depends on configure
.start
as all its dependencies have run.If one of the commands fails (i.e. exits with a non-zero code), execution stops and the next task will not be started.
With this logic, you can also create aliases as you don't have to specify any command in a task.
pixi task add fmt ruff\npixi task add lint pylint\n
pixi task alias style fmt lint\n
Results in the following pixi.toml
.
fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends_on = [\"fmt\", \"lint\"] }\n
Now run both tools with one command.
pixi run style\n
"},{"location":"advanced/advanced_tasks/#working-directory","title":"Working directory","text":"Pixi tasks support the definition of a working directory.
cwd
\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml
file is located.
Consider a pixi project structured as follows:
\u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n \u2514\u2500\u2500 bar.py\n
To add a task to run the bar.py
file, use:
pixi task add bar \"python bar.py\" --cwd scripts\n
This will add the following line to pixi.toml
: pixi.toml
[tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n
"},{"location":"advanced/advanced_tasks/#caching","title":"Caching","text":"When you specify inputs
and/or outputs
to a task, pixi will reuse the result of the task.
For the cache, pixi checks that the following are true:
If all of these conditions are met, pixi will not run the task again and instead use the existing result.
Inputs and outputs can be specified as globs, which will be expanded to all matching files.
pixi.toml[tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n
Note: if you want to debug the globs you can use the --verbose
flag to see which files are selected.
# shows info logs of all files that were selected by the globs\npixi run -v start\n
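The caching check described above boils down to fingerprinting the files matched by the input globs and skipping the task when the fingerprint is unchanged. A simplified stdlib sketch of that idea (pixi's real implementation differs in detail; `input_fingerprint` is a hypothetical helper):

```python
# Hash the content of all files matched by the input globs; a task
# would only need to rerun when this fingerprint changes.
import glob
import hashlib

def input_fingerprint(patterns):
    digest = hashlib.sha256()
    for pattern in patterns:
        # Sort for a deterministic fingerprint regardless of glob order.
        for path in sorted(glob.glob(pattern, recursive=True)):
            with open(path, "rb") as f:
                digest.update(path.encode())
                digest.update(f.read())
    return digest.hexdigest()

before = input_fingerprint(["*.py"])
after = input_fingerprint(["*.py"])
print(before == after)  # True: nothing changed between the two calls
```

The same mechanism explains the outputs side: if the declared output files already exist and the input fingerprint matches the cached one, the command can be skipped.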
"},{"location":"advanced/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"To support the different OS's (Windows, OSX and Linux), pixi integrates a shell that can run on all of them. This is deno_task_shell
. The task shell is a limited implementation of a Bourne shell interface.
Next to running actual executables like ./myprogram
, cmake
or python
the shell has some built-in commands.
cp
: Copies files.mv
: Moves files.rm
: Remove files or directories. Ex: rm -rf [FILE]...
- Commonly used to recursively delete files or directories.mkdir
: Makes directories. Ex. mkdir -p DIRECTORY...
- Commonly used to make a directory and all its parents with no error if it exists.pwd
: Prints the name of the current/working directory.sleep
: Delays for a specified amount of time. Ex. sleep 1
to sleep for 1 second, sleep 0.5
to sleep for half a second, or sleep 1m
to sleep a minuteecho
: Displays a line of text.cat
: Concatenates files and outputs them on stdout. When no arguments are provided, it reads and outputs stdin.exit
: Causes the shell to exit.unset
: Unsets environment variables.xargs
: Builds arguments from stdin and executes a command.&&
or ||
to separate two commands. - &&
: if the command before &&
succeeds continue with the next command. - ||
: if the command before ||
fails continue with the next command.;
to run two commands without checking if the first command failed or succeeded.export ENV_VAR=value
- Use env variable using: $ENV_VAR
- unset env variable using unset ENV_VAR
VAR=value
- use them: VAR=value && echo $VAR
|
: echo Hello | python receiving_app.py
- |&
: use this to also get the stderr as input.$()
to use the output of a command as input for another command. - python main.py $(git rev-parse HEAD)
!
before any command will negate the exit code from 1 to 0 or visa-versa.>
to redirect the stdout to a file. - echo hello > file.txt
will put hello
in file.txt
and overwrite existing text. - python main.py 2> file.txt
will put the stderr
output in file.txt
. - python main.py &> file.txt
will put the stderr
and stdout
in file.txt
. - echo hello > file.txt
will append hello
to the existing file.txt
.*
to expand all options. - echo *.py
will echo all filenames that end with .py
- echo **/*.py
will echo all filenames that end with .py
in this directory and all descendant directories. - echo data[0-9].csv
will echo all filenames that have a single number after data
and before .csv
More info in deno_task_shell
documentation.
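For illustration, here is a small task written in this syntax (a sketch using standard POSIX-style commands that deno_task_shell also understands; the directory and file names are made up):

```shell
# Make a directory (no error if it exists), write a file (overwriting),
# append a second line, then print the result.
mkdir -p build && echo "hello" > build/out.txt
echo "world" >> build/out.txt
cat build/out.txt
```

The same command line could be used verbatim as a pixi task cmd on Windows, macOS and Linux, since the shell is the same on all three.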
You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.
Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n <HOST> The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n --token <TOKEN> The token to use (for authentication with prefix.dev)\n --username <USERNAME> The username to use (for basic HTTP authentication)\n --password <PASSWORD> The password to use (for basic HTTP authentication)\n --conda-token <CONDA_TOKEN> The token to use on anaconda.org / quetz authentication\n -v, --verbose... More output per occurrence\n -q, --quiet... Less output per occurrence\n -h, --help Print help\n
The different options are \"token\", \"conda-token\" and \"username + password\".
The token variant implements a standard \"Bearer Token\" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authentication: Bearer <TOKEN>
.
The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/...
.
The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/.... This authentication method can be configured quite easily with a reverse proxy such as NGINX or Apache and is thus commonly used in self-hosted systems.
Login to prefix.dev:
pixi auth login prefix.dev --token pfx_jj8WDzvnuTEHGdAhwRZMC1Ag8gSto8\n
Login to anaconda.org:
pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n
Login to a basic HTTP secured server:
pixi auth login myserver.com --username user --password password\n
"},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.
On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access
program that comes pre-installed on macOS. Searching for rattler
(the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).
On Linux, one can use GNOME Keyring
(or just Keyring) to access credentials that are securely stored by libsecret
. Searching for rattler
should list all the credentials stored by pixi and other rattler-based programs.
If you run on a server with none of the aforementioned keychains available, then pixi falls back to storing the credentials in an insecure JSON file. This JSON file is located at ~/.rattler/credentials.json
and contains the credentials.
You can use the RATTLER_AUTH_FILE
environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.
E.g.
export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n
The JSON should follow the following format:
{\n \"*.prefix.dev\": {\n \"BearerToken\": \"your_token\"\n },\n \"otherhost.com\": {\n \"BasicHttp\": {\n \"username\": \"your_username\",\n \"password\": \"your_password\"\n }\n },\n \"conda.anaconda.org\": {\n \"CondaToken\": \"your_token\"\n }\n}\n
Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev
also matches repo.prefix.dev
).
Lastly you can set the authentication override file in the global configuration file.
"},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"All logic regarding the decision which dependencies can be installed from which channel is done by the instruction we give the solver.
The actual code regarding this is in the rattler_solve
crate. This might however be hard to read. Therefore, this document will continue with simplified flow charts.
When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.
[project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
In the packagex
example, the solver will understand that the package is only available in my-channel
and will not look for it in conda-forge
. The flowchart of the logic that excludes all other channels:
flowchart TD\n A[Start] --> B[Given a Dependency]\n B --> C{Channel Specific Dependency?}\n C -->|Yes| D[Exclude All Other Channels for This Package]\n C -->|No| E{Any Other Dependencies?}\n E -->|Yes| B\n E -->|No| F[End]\n D --> E
"},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"Channel priority is dictated by the order in the project.channels
array, where the first channel is the highest priority. For instance:
[project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
If the package is found in conda-forge, the solver will not look for it in my-channel and your-channel, because they are excluded for this package. If the package is not found in conda-forge, the solver will look for it in my-channel; if it is found there, your-channel is excluded for this package. This diagram explains the logic: flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Loop Over Channels}\n    C --> D{Package in This Channel?}\n    D -->|No| C\n    D -->|Yes| E{\"Is this the first channel\n    for this package?\"}\n    E -->|Yes| F[Include Package in Candidates]\n    E -->|No| G[Exclude Package from Candidates]\n    F --> H{Any Other Channels?}\n    G --> H\n    H -->|Yes| C\n    H -->|No| I{Any Other Dependencies?}\n    I -->|No| J[End]\n    I -->|Yes| B
This method ensures the solver only adds a package to the candidates if it's found in the highest priority channel available. If you have 10 channels and the package is found in the 5th channel it will exclude the next 5 channels from the candidates if they also contain the package.
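The priority logic can be sketched in a few lines of shell (an illustration only, not pixi's actual implementation; the channel names and package availability are made up):

```shell
# Channels in priority order, highest priority first.
channels="conda-forge my-channel your-channel"
# Pretend the package is only available in these channels:
available="my-channel your-channel"

# The first channel that contains the package wins; all later
# channels are excluded as candidates for this package.
selected=""
for ch in $channels; do
  case " $available " in
    *" $ch "*) selected="$ch"; break ;;
  esac
done
echo "selected channel: $selected"
```

Here conda-forge does not carry the package, so my-channel (the next-highest priority) is selected and your-channel is never considered.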
"},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"A common use case is to use pytorch
with nvidia
drivers, while also needing the conda-forge
channel for the main dependencies.
[project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
What this will do is get as much as possible from the nvidia/label/cuda-11.8.0 channel, which is actually only the cuda package. Then it will get all packages from the nvidia channel, which is a little more; some packages overlap between the nvidia and conda-forge channels, like the cuda-cudart package, which will now only be retrieved from the nvidia channel because of the priority logic.
Then it will get the packages from the conda-forge
channel, which is the main channel for the dependencies.
But the user only wants the pytorch packages from the pytorch
channel, which is why pytorch
is added last and the dependencies are added as channel specific dependencies.
We don't define the pytorch channel before conda-forge because we want to get as much as possible from the conda-forge channel, as the pytorch channel does not always ship the best versions of all packages. For example, it also ships the ffmpeg package, but only an old version which doesn't work with newer pytorch versions. This would break the installation if we skipped the conda-forge channel for ffmpeg with the priority logic.
pixi info
prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json
format using the --json
flag, which can be useful for programmatically reading it.
\u279c pixi info\n Pixi version: 0.13.0\n Platform: linux-64\n Virtual packages: __unix=0=0\n : __linux=6.5.12=0\n : __glibc=2.36=0\n : __cuda=12.3=0\n : __archspec=1=x86_64\n Cache dir: /home/user/.cache/rattler/cache\n Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n Version: 0.13.0\n Manifest file: /home/user/development/pixi/pixi.toml\n Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n Features: default\n Channels: conda-forge\n Dependency count: 10\n Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n Target platforms: linux-64, osx-arm64, win-64, osx-64\n Tasks: docs, test-all, test, build, lint, install, build-docs\n
"},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"The first part of the info output is information that is always available and tells you what pixi can read on your machine.
"},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.
"},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"The virtual packages that pixi can find on your machine.
In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are being used in the solve step to find if a package can be installed on the machine. A simple example: When a package depends on Cuda drivers being present on the host machine it can do that by depending on the __cuda
virtual package. In that case, if pixi cannot find the __cuda
virtual package on your machine the installation will fail.
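For example (a hypothetical pixi.toml fragment; a system-requirements table with this shape appears later in this documentation's multi-environment proposal), the CUDA driver requirement could be expressed like this:

```toml
# Hypothetical fragment: require CUDA 12 drivers on the host machine.
# At solve time this maps onto the __cuda virtual package.
[system-requirements]
cuda = "12"
```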
Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools. Normally the locations would be:
- Linux: $XDG_CACHE_HOME/rattler or $HOME/.cache/rattler
- macOS: $HOME/Library/Caches/rattler
- Windows: {FOLDERID_LocalAppData}/rattler
When your system is filling up, you can safely remove this folder; pixi will re-download everything it needs the next time you install a project.
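For example, on Linux you could locate (and, if needed, clear) the cache like this; a sketch assuming the default location listed above:

```shell
# Default rattler cache location on Linux.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/rattler"
echo "cache dir: $CACHE_DIR"
# Inspect its size, then remove it; pixi re-downloads on the next install.
# du -sh "$CACHE_DIR"
# rm -rf "$CACHE_DIR"
```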
"},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"Check the authentication documentation
"},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"[requires --extended
]
The size of the previously mentioned \"Cache dir\" in Mebibytes.
"},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"Everything below Project
is info about the project you're currently in. This info is only available if your path has a manifest file (pixi.toml
).
The path to the manifest file that describes the project. For now, this can only be pixi.toml
.
The last time the lockfile was updated, either manually or by pixi itself.
"},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"The environment info defined per environment. If you don't have any environments defined, this will only show the default
environment.
This lists which features are enabled in the environment. For the default environment this is only default.
The list of channels used in this environment.
"},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"The amount of dependencies defined that are defined for this environment (not the amount of installed dependencies).
"},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"The list of dependencies defined for this environment.
"},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"The platforms the project has defined.
"},{"location":"advanced/github_actions/","title":"GitHub Action","text":"We created prefix-dev/setup-pixi to facilitate using pixi in CI.
"},{"location":"advanced/github_actions/#usage","title":"Usage","text":"- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n pixi-version: v0.16.1\n cache: true\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n
Pin your action versions
Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.5.1
) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.
Put the following in your .github/dependabot.yml
file to enable Dependabot for your GitHub Actions:
version: 2\nupdates:\n - package-ecosystem: github-actions\n directory: /\n schedule:\n interval: monthly # (1)!\n groups:\n dependencies:\n patterns:\n - \"*\"\n
The interval can also be daily or weekly.
To see all available input arguments, see the action.yml
file in setup-pixi
. The most important features are described below.
The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock
file is present. It will then use the pixi.lock
file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache
input argument.
Customize your cache key
If you need to customize your cache-key, you can use the cache-key
input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>
.
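A sketch of what that could look like (the prefix my-pixi-cache- is made up):

```yaml
- uses: prefix-dev/setup-pixi@v0.5.1
  with:
    cache: true
    # Resulting full key: my-pixi-cache-<conda-arch>-<hash>
    cache-key: my-pixi-cache-
```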
Only save caches on main
In order to not exceed the 10 GB cache size limit as fast, you might want to restrict when the cache is saved. This can be done by setting the cache-write
argument.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n cache: true\n cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
"},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments
input argument. This will install all environments that are specified and cache them.
[project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
"},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"The following example will install the py311
and py312
environments in different jobs.
test:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n
"},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"The following example will install both the py311
and the py312
environment on the runner.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: >- # (1)!\n py311\n py312\n- run: |\n pixi run -e py311 test\n pixi run -e py312 test\n
separated by spaces, equivalent to
environments: py311 py312\n
Caching behavior if you don't specify environments
If you don't specify any environment, the default
environment will be installed and cached, even if you use other environments.
There are currently three ways to authenticate with pixi:
For more information, see Authentication.
Handle secrets with care
Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }}
syntax. These secrets will always be masked in the logs.
Specify the token using the auth-token
input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: prefix.dev\n auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
"},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"Specify the username and password using the auth-username
and auth-password
input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments with artifactory for example.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: custom-artifactory.com\n auth-username: ${{ secrets.PIXI_USERNAME }}\n auth-password: ${{ secrets.PIXI_PASSWORD }}\n
"},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"Specify the conda-token using the conda-token
input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel
) is used at anaconda.org or with quetz instances.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n auth-host: anaconda.org # (1)!\n conda-token: ${{ secrets.CONDA_TOKEN }}\n
setup-pixi
allows you to run commands inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}
. This can be useful if you want to run commands inside of the pixi environment, but don't want to use the pixi run
command for each command.
- run: | # (1)!\n python --version\n pip install -e --no-deps .\n shell: pixi run bash -e {0}\n
You can even run Python scripts like this:
- run: | # (1)!\n import my_package\n print(\"Hello world!\")\n shell: pixi run python {0}\n
If you want to use PowerShell, you need to specify -Command
as well.
- run: | # (1)!\n python --version | Select-String \"3.11\"\n shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
How does it work under the hood?
Under the hood, the shell: xyz {0}
option is implemented by creating a temporary script file and calling xyz
with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0}
directly but instead have to use shell: pixi run bash {0}
. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell
in the documentation. See the official documentation and ADR 0277 for more information about how the shell:
input works in GitHub Actions.
--frozen
and --locked
","text":"You can specify whether setup-pixi
should run pixi install --frozen
or pixi install --locked
depending on the frozen
or the locked
input argument. See the official documentation for more information about the --frozen
and --locked
flags.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n locked: true\n # or\n frozen: true\n
If you don't specify anything, the default behavior is to run pixi install --locked
if a pixi.lock
file is present and pixi install
otherwise.
There are two types of debug logging that you can enable.
"},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"The first one is the debug logging of the action itself. This can be enabled by for the action by re-running the action in debug mode:
Debug logging documentation
For more information about debug logging in GitHub Actions, see the official documentation.
"},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"The second type is the debug logging of the pixi executable. This can be specified by setting the log-level
input.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n log-level: vvv # (1)!\n
q
, default
, v
, vv
, or vvv
.If nothing is specified, log-level
will default to default
or vv
depending on if debug logging is enabled for the action.
On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup
input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).
If you set post-cleanup
to true
, the action will delete the following files:
.pixi
environment~/.rattler
If nothing is specified, post-cleanup
will default to true
.
On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi
to do this.
- uses: prefix-dev/setup-pixi@v0.5.1\n with:\n post-cleanup: true\n pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
${{ runner.temp }}\\Scripts\\pixi.exe
on WindowsIf you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi
repository.
Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong into the project file). The configuration is loaded in the following order:
~/.config/pixi/config.toml
on Linux, dependent on XDG_CONFIG_HOME)~/.pixi/config.toml
(or $PIXI_HOME/config.toml
if the PIXI_HOME
environment variable is set)$PIXI_PROJECT/.pixi/config.toml
--tls-no-verify
, --change-ps1=false
etc.)Note
To find the locations where pixi
looks for configuration files, run pixi
with -v
or --verbose
.
The following reference describes all available configuration options.
# The default channels to select when running `pixi init` or `pixi global install`.\n# This defaults to only conda-forge.\ndefault_channels = [\"conda-forge\"]\n\n# When set to false, the `(pixi)` prefix in the shell prompt is removed.\n# This applies to the `pixi shell` subcommand.\n# You can override this from the CLI with `--change-ps1`.\nchange_ps1 = true\n\n# When set to true, the TLS certificates are not verified. Note that this is a\n# security risk and should only be used for testing purposes or internal networks.\n# You can override this from the CLI with `--tls-no-verify`.\ntls_no_verify = false\n\n# Override from where the authentication information is loaded.\n# Usually we try to use the keyring to load authentication data from, and only use a JSON\n# file as fallback. This option allows you to force the use of a JSON file.\n# Read more in the authentication section.\nauthentication_override_file = \"/path/to/your/override.json\"\n\n# configuration for conda channel-mirrors\n[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n \"https://conda.anaconda.org/bioconda\",\n # OCI registries are also supported\n \"oci://ghcr.io/channel-mirrors/bioconda\",\n \"https://prefix.dev/bioconda\",\n]\n
"},{"location":"advanced/global_configuration/#mirror-configuration","title":"Mirror configuration","text":"You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors
section of the configuration file and replace the original URL with the mirror URL.
To also include the original URL, you have to repeat it in the list of mirrors.
The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
You can also specify mirrors for an entire \"host\", e.g.
[mirrors]\n\"https://conda.anaconda.org\" = [\n \"https://prefix.dev/\"\n]\n
This will forward all requests to channels on anaconda.org to prefix.dev. Channels that are not currently mirrored on prefix.dev will fail in the above example.
"},{"location":"advanced/global_configuration/#oci-mirrors","title":"OCI Mirrors","text":"You can also specify mirrors on the OCI registry. There is a public mirror on the Github container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:
[mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n
The GHCR mirror also contains bioconda
packages. You can search the available packages on GitHub.
Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.
Here is an example pixi.toml
that highlights some of the features:
[project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
"},{"location":"advanced/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"The project.platforms
defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lockfile.
Running pixi install
on a platform that is not configured will warn the user that it is not set up for that platform:
\u276f pixi install\n \u00d7 the project is not configured for your current platform\n \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n \u00b7 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n \u00b7 \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n \u2570\u2500\u2500\u2500\u2500\n help: The project needs to be configured to support your platform (linux-64).\n
"},{"location":"advanced/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms
then pixi will throw an error.
It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.
pixi.toml[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n
In the above example, we specify that we depend on msmpi only on Windows. We also specifically want python 3.8 when installing on Windows. This will overwrite the dependencies from the generic set of dependencies and will not touch any of the other platforms.
You can use pixi's CLI to add these dependencies to the pixi.toml
pixi add --platform win-64 posix\n
This also works for the host
and build
dependencies.
pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n
Which results in this.
pixi.toml[target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
"},{"location":"advanced/multi_platform_configuration/#activation","title":"Activation","text":"Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your projects. Generated activation scripts are often in this category, default scripts in unix are bash
and for windows they are bat
To deal with this, you can define your activation scripts using the target definition.
pixi.toml
[activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64
it will only execute the target scripts, not the scripts specified in the default activation.scripts.
"},{"location":"design_proposals/multi_environment_proposal/","title":"Proposal Design: Multi Environment Support","text":""},{"location":"design_proposals/multi_environment_proposal/#objective","title":"Objective","text":"The aim is to introduce an environment set mechanism in the pixi
package manager. This mechanism will enable clear, conflict-free management of dependencies tailored to specific environments, while also maintaining the integrity of fixed lockfiles.
There are multiple scenarios where multiple environments are useful:
- Testing multiple package versions, e.g. py39 and py310, or polars 0.12 and 0.13.
- Smaller single-tool environments, e.g. lint or docs.
- Large developer environments that combine all the smaller environments, e.g. dev.
- Strict supersets of environments, e.g. prod and test-prod, where test-prod is a strict superset of prod.
- Multiple machines from one project, e.g. a cuda environment and a cpu environment.
This prepares pixi for use in large projects with multiple use-cases, multiple developers and different CI needs.
Important
This is a proposal, not a final design. The proposal is open for discussion and will be updated based on the feedback.
"},{"location":"design_proposals/multi_environment_proposal/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"Introduce environment sets into the pixi.toml
this describes environments based on feature
's. Introduce features into the pixi.toml
that can describe parts of environments. As an environment goes beyond just dependencies
the features
should be described including the following fields:
- dependencies: The conda package dependencies
- pypi-dependencies: The pypi package dependencies
- system-requirements: The system requirements of the environment
- activation: The activation information for the environment
- platforms: The platforms the environment can be run on.
- channels: The channels used to create the environment. Adding the priority field to the channels allows concatenation of channels instead of overwriting.
- target: All the above features, but also separated by targets.
- tasks: Feature-specific tasks; tasks in one environment are selected as default tasks for the environment.
[dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
Different dependencies per feature[feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
Full set of environment modification in one feature[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}] # Results in: [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
Define tasks as defaults of an environment[feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environments test test`\n
The environment definition should contain the following fields:

- features: Vec<Feature>: The features included in the environment set; this is also the default field in the environments table.
- solve-group: String: The solve group is used to group environments together at the solve stage. This is useful for environments that need to have the same dependencies but might extend them with additional dependencies, for instance when testing a production environment with additional test dependencies.

[environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
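The implicit inclusion of the default feature can be modeled as a union of per-feature dependency tables. A minimal Python sketch (feature names and version specs taken from the examples above; the merge function is illustrative, not pixi's actual resolver):

```python
# Illustrative model of how an environment's dependency set is assembled
# from its listed features plus the implicit `default` feature.
features = {
    "default": {"python": "*"},
    "py39": {"python": "~=3.9.0"},
    "test": {"pytest": "*"},
}

def resolve_dependencies(feature_names):
    """Union the dependency tables, letting explicit features
    override the implicit `default` feature."""
    deps = {}
    for name in ["default", *feature_names]:
        deps.update(features[name])
    return deps

# test39 = ["test", "py39"] implicitly also includes "default"
print(resolve_dependencies(["test", "py39"]))
# {'python': '~=3.9.0', 'pytest': '*'}
```

Here py39's stricter python spec replaces the default feature's wildcard, while test's pytest is simply added.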
Testing a production environment with additional dependencies[environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
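The effect of solve-group can be modeled as a single shared solve whose result every environment in the group reads from. A hypothetical Python sketch (package versions invented for illustration; this is not pixi's internal data model):

```python
# Illustrative model: environments in the same solve group share one solve,
# so packages they have in common get identical pins.
solve_groups = {"prod-group": ["prod", "test_prod"]}
# One solved package set per group, produced by solving all features together.
solved = {"prod-group": {"python": "3.9.18", "pytest": "7.4.0"}}

def pinned_versions(environment, requested):
    """Look up the environment's solve group and read pins from its solve."""
    group = next(g for g, envs in solve_groups.items() if environment in envs)
    return {pkg: solved[group][pkg] for pkg in requested}

# `python` resolves to the same version in `prod` and `test_prod`.
print(pinned_versions("prod", ["python"]))
print(pinned_versions("test_prod", ["python", "pytest"]))
```

This is why the tested environment is guaranteed to exercise the same dependency versions that ship to production.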
Creating environments without a default environment[dependencies]\n# Keep empty or undefined to create an empty environment.\n\n[feature.base.dependencies]\npython = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom default\ndefault = [\"base\"]\n# Create a custom environment which only has the `lint` feature as the default feature is empty.\nlint = [\"lint\"]\n
"},{"location":"design_proposals/multi_environment_proposal/#lockfile-structure","title":"Lockfile Structure","text":"Within the pixi.lock
file, a package may now include an additional environments field, specifying the environments to which it belongs. To avoid duplication, a package's environments field may list multiple environments, keeping the lockfile minimal in size.
- platform: linux-64\n name: pre-commit\n version: 3.3.3\n category: main\n environments:\n - dev\n - test\n - lint\n ...:\n- platform: linux-64\n name: python\n version: 3.9.3\n category: main\n environments:\n - dev\n - test\n - lint\n - py39\n - default\n ...:\n
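Installing a given environment then amounts to filtering the lockfile's packages by this environments field. A small Python sketch mirroring the records above (structure simplified for illustration):

```python
# Simplified lockfile records mirroring the YAML above.
packages = [
    {"name": "pre-commit", "version": "3.3.3",
     "environments": ["dev", "test", "lint"]},
    {"name": "python", "version": "3.9.3",
     "environments": ["dev", "test", "lint", "py39", "default"]},
]

def packages_for(environment):
    """Select the packages that belong to the given environment."""
    return [p["name"] for p in packages if environment in p["environments"]]

print(packages_for("lint"))  # both records list `lint`
print(packages_for("py39"))  # only `python` lists `py39`
```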
"},{"location":"design_proposals/multi_environment_proposal/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"Users can manually activate the desired environment via command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user, the CLI would look like this:
Default behaviorpixi run python\n# Runs python in the `default` environment\n
Activating a specific environmentpixi run -e test pytest\npixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n
Activating a shell in an environment
pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
Running any command in an environmentpixi run -e test any_command\n# Runs any_command in the `test` environment, which doesn't need to be predefined as a task.\n
Interactive selection of environments if task is in multiple environments# In the scenario where test is a task in multiple environments, interactive selection should be used.\npixi run test\n# Which env?\n# 1. test\n# 2. test39\n
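The ambiguity check behind this prompt can be sketched as a lookup of which environments define the task as a default task (environment and task names taken from the examples above; the lookup itself is illustrative):

```python
# Which environments define which default tasks (illustrative data).
environment_tasks = {
    "test": ["test"],
    "test39": ["test"],
    "lint": ["lint"],
}

def candidate_environments(task):
    """Environments in which `task` is defined as a default task.
    More than one candidate means an interactive picker is needed."""
    return [env for env, tasks in environment_tasks.items() if task in tasks]

print(candidate_environments("test"))  # ambiguous: prompt the user
print(candidate_environments("lint"))  # unambiguous: run directly
```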
"},{"location":"design_proposals/multi_environment_proposal/#important-links","title":"Important links","text":"In polarify
they want to test multiple Python versions combined with multiple versions of polars. This is currently done using a matrix in GitHub Actions, which can be replaced by multiple environments.
[project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
.github/workflows/test.ymljobs:\n tests-per-env:\n runs-on: ubuntu-latest\n strategy:\n matrix:\n environment: [py311, py312]\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: ${{ matrix.environment }}\n - name: Run tasks\n run: |\n pixi run --environment ${{ matrix.environment }} test\n tests-with-multiple-envs:\n runs-on: ubuntu-latest\n steps:\n - uses: actions/checkout@v4\n - uses: prefix-dev/setup-pixi@v0.5.1\n with:\n environments: pl017 pl018\n - run: |\n pixi run -e pl017 test\n pixi run -e pl018 test\n
Test vs Production example This is an example of a project that has a test
feature and prod
environment. The prod
environment is a production environment that contains the run dependencies. The test
feature is a set of dependencies and tasks that we want to put on top of the previously solved prod
environment. This is a common use case where we want to test the production environment with additional dependencies.
pixi.toml
[project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI, you would run the following commands: pixi run postinstall-e && pixi run test\n
Locally you would run the following command: pixi run postinstall-e && pixi run dev\n
Then in a Dockerfile you would run the following command: Dockerfile
FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n
Multiple machines from one project This is an example for an ML project that should be executable on a machine that supports cuda
and mlx
. It should also be executable on machines that don't support cuda or mlx; for those, we use the cpu feature. pixi.toml
[project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.5.0,<0.6.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n
Running the project on a cuda machinepixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
Running the project with mlxpixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
Running the project on a machine without cuda or mlxpixi run train-model\n
"},{"location":"examples/cpp-sdl/","title":"SDL example","text":" The cpp-sdl
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/cpp-sdl\n
Run the start
command
pixi run start\n
Using the depends_on
feature you only need to run the start task, but under the hood it also runs the following tasks:
# Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
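In pixi.toml, this chaining is declared with depends_on on the task definitions. A sketch of what the relevant section might look like (the task commands and binary name here are illustrative, not copied from the example project):

```toml
[tasks]
# Configure the CMake project into a build directory
configure = "cmake -G Ninja -S . -B .build"
# Build the executable; runs `configure` first if needed
build = { cmd = "ninja -C .build", depends_on = ["configure"] }
# Start the built executable after building
start = { cmd = ".build/bin/sdl_example", depends_on = ["build"] }
```

Running pixi run start walks the dependency chain and executes configure and build before start.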
"},{"location":"examples/opencv/","title":"Opencv example","text":"The opencv
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/opencv\n
"},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"Run the start
command to start the face detection algorithm.
pixi run start\n
The screen that starts should look like this:
Check out the webcam_capture.py
to see how we detect a face.
Besides face recognition, a camera calibration example is also included.
You'll need a checkerboard for this to work. Print this:
Then run
pixi run calibrate\n
To take a picture for calibration, press SPACE. Do this approximately 10 times with the checkerboard in view of the camera. After that, press ESC, which will start the calibration.
When the calibration is done, the camera will be used again to find the distance to the checkerboard.
"},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"The nav2
example is located in the pixi repository.
git clone https://github.com/prefix-dev/pixi.git\n
Move to the example folder
cd pixi/examples/ros2-nav2\n
Run the start
command
pixi run start\n
"},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"You can use PyCharm with pixi environments by using the conda
shim provided by the pixi-pycharm package.
Windows support
Windows is currently not supported, see pavelzw/pixi-pycharm #5. Only Linux and macOS are supported.
"},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"To get started, add pixi-pycharm
to your pixi project.
pixi add pixi-pycharm\n
This will ensure that the conda shim is installed in your project's environment.
could not determine any available versions for pixi-pycharm on win-64
If you get the error could not determine any available versions for pixi-pycharm on win-64
when running pixi add pixi-pycharm
(even when you're not on Windows), this is because the package is not available on Windows and pixi tries to solve the environment for all platforms. If you still want to use it in your pixi project (and are on Linux/macOS), you can add the following to your pixi.toml
:
[target.unix.dependencies]\npixi-pycharm = \"*\"\n
This will tell pixi to only use this dependency on unix platforms.
Having pixi-pycharm
installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda
file in your pixi environment. You can get the path using the following command:
pixi run 'echo $CONDA_PREFIX/libexec/conda'\n
This is an executable that tricks PyCharm into thinking it's the proper conda
executable. Under the hood it redirects all calls to the corresponding pixi
equivalent.
Use the conda shim from this pixi project
Please make sure that this is the conda
shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.
Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.
PyCharm should now be able to show you the installed packages as well.
You can now run your programs and tests as usual.
"},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"
If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.
You can then specify the corresponding environment in the bottom right corner of the PyCharm window.
"},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"
When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also come up that you have multiple environments with the same name.
It is recommended to rename the environments to something unique.
"},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"Logs are written to ~/.cache/pixi-pycharm.log
. You can use them to debug problems. Please attach the logs when filing a bug report.