From f8aae5e58cff2986c6e3b89119427737a59c10f3 Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Tue, 19 Mar 2024 16:01:31 +0000 Subject: [PATCH] Deployed ec4ff23 to dev with MkDocs 1.5.3 and mike 2.0.0 --- dev/advanced/global_configuration/index.html | 107 ++++++++++++++++++- dev/search/search_index.json | 2 +- dev/sitemap.xml.gz | Bin 425 -> 425 bytes 3 files changed, 103 insertions(+), 6 deletions(-) diff --git a/dev/advanced/global_configuration/index.html b/dev/advanced/global_configuration/index.html index 8c19a1909..c12279ca2 100644 --- a/dev/advanced/global_configuration/index.html +++ b/dev/advanced/global_configuration/index.html @@ -701,6 +701,30 @@ + + +
  • + + + Mirror configuration + + + + +
  • @@ -970,6 +994,30 @@ + + +
  • + + + Mirror configuration + + + + +
  • @@ -996,17 +1044,21 @@

    Global configuration in pixi#

    -

    Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong into the project file). -The configuration is loaded in the following order:

    +

    Pixi supports some global configuration options, as well as project-scoped +configuration (that does not belong into the project file). The configuration is +loaded in the following order:

      -
    1. Global configuration folder (e.g. ~/.config/pixi/config.toml on Linux, dependent on XDG_CONFIG_HOME)
    2. -
    3. Global .pixi folder: ~/.pixi/config.toml (or $PIXI_HOME/config.toml if the PIXI_HOME environment variable is set)
    4. +
    5. Global configuration folder (e.g. ~/.config/pixi/config.toml on Linux, + dependent on XDG_CONFIG_HOME)
    6. +
    7. Global .pixi folder: ~/.pixi/config.toml (or $PIXI_HOME/config.toml if + the PIXI_HOME environment variable is set)
    8. Project-local .pixi folder: $PIXI_PROJECT/.pixi/config.toml
    9. Command line arguments (--tls-no-verify, --change-ps1=false etc.)
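Putting these options together, a global config.toml might look like the following sketch (key names are assumed to follow the snake_case style of the reference below; check them against your pixi version):

```toml
# ~/.config/pixi/config.toml -- illustrative example, keys assumed
tls_no_verify = false   # corresponds to the --tls-no-verify flag
change_ps1 = true       # corresponds to --change-ps1
```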

    Note

    -

    To find the locations where pixi looks for configuration files, run pixi with -v or --verbose.

    +

    To find the locations where pixi looks for configuration files, run +pixi with -v or --verbose.

    Reference#

    The following reference describes all available configuration options.

    @@ -1029,7 +1081,52 @@

    Reference# file as fallback. This option allows you to force the use of a JSON file. # Read more in the authentication section. authentication_override_file = "/path/to/your/override.json" + +# configuration for conda channel-mirrors +[mirrors] +# redirect all requests for conda-forge to the prefix.dev mirror +"https://conda.anaconda.org/conda-forge" = [ + "https://prefix.dev/conda-forge" +] + +# redirect all requests for bioconda to one of the three listed mirrors +# Note: for repodata we try the first mirror first. +"https://conda.anaconda.org/bioconda" = [ + "https://conda.anaconda.org/bioconda", + # OCI registries are also supported + "oci://ghcr.io/channel-mirrors/bioconda", + "https://prefix.dev/bioconda", +] + +

    Mirror configuration#

    +

    You can configure mirrors for conda channels. We expect that mirrors are exact +copies of the original channel. The implementation will look for the mirror key +(a URL) in the mirrors section of the configuration file and replace the +original URL with the mirror URL.

    +

    To also include the original URL, you have to repeat it in the list of mirrors.

    +

    The mirrors are prioritized based on the order of the list. We attempt to fetch +the repodata (the most important file) from the first mirror in the list. The +repodata contains all the SHA256 hashes of the individual packages, so it is +important to get this file from a trusted source.
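As an illustration of why the repodata source matters, checking a downloaded package against the SHA256 hash recorded in repodata boils down to a comparison like this (hypothetical helper, not pixi's actual code):

```python
import hashlib

def verify_package(data: bytes, expected_sha256: str) -> bool:
    """Compare downloaded package bytes against the hash from repodata."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical payload; in practice this is the downloaded package archive.
payload = b"example-package-contents"
digest = hashlib.sha256(payload).hexdigest()
print(verify_package(payload, digest))  # → True
```

If the repodata itself came from an untrusted mirror, an attacker could swap both a package and its recorded hash, which is why the repodata is fetched from the first (most trusted) mirror in the list.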

    +

    You can also specify mirrors for an entire "host", e.g.

    +
    [mirrors]
    +"https://conda.anaconda.org" = [
    +    "https://prefix.dev/"
    +]
    +
    +

This will forward all requests for channels on anaconda.org to prefix.dev. +Channels that are not currently mirrored on prefix.dev will fail in the above example.
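A minimal sketch of the substitution logic described above (assumed behavior, not pixi's actual implementation): an exact channel key takes precedence over a host-wide key, and each configured mirror is tried in list order:

```python
# Hypothetical mirror map following the examples above.
MIRRORS = {
    "https://conda.anaconda.org/conda-forge": ["https://prefix.dev/conda-forge"],
    "https://conda.anaconda.org": ["https://prefix.dev/"],
}

def candidate_urls(url: str) -> list:
    """Rewrite a channel URL to its mirror candidates, in priority order."""
    matches = [key for key in MIRRORS if url.startswith(key)]
    if not matches:
        return [url]  # no mirror configured: fall back to the original URL
    key = max(matches, key=len)  # the most specific (longest) prefix wins
    return [m.rstrip("/") + url[len(key):] for m in MIRRORS[key]]

print(candidate_urls("https://conda.anaconda.org/conda-forge/linux-64/repodata.json"))
# → ['https://prefix.dev/conda-forge/linux-64/repodata.json']
```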

    +

    OCI Mirrors#

    +

You can also specify mirrors on the OCI registry. There is a public mirror on +the GitHub container registry (ghcr.io) that is maintained by the conda-forge +team. You can use it like this:

    +
    [mirrors]
    +"https://conda.anaconda.org/conda-forge" = [
    +    "oci://ghcr.io/channel-mirrors/conda-forge"
    +]
     
    +

The GHCR mirror also contains bioconda packages. You can search the available +packages on GitHub.

    diff --git a/dev/search/search_index.json b/dev/search/search_index.json index dc51faea2..42dc491d0 100644 --- a/dev/search/search_index.json +++ b/dev/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Getting Started","text":"

Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Pixi works cross-platform on Windows, macOS, and Linux.

    "},{"location":"#installation","title":"Installation","text":"

    To install pixi you can run the following command in your terminal:

    Linux & macOSWindows
    curl -fsSL https://pixi.sh/install.sh | bash\n

    The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to ~/.pixi/bin. If this directory does not already exist, the script will create it.

    The script will also update your ~/.bash_profile to include ~/.pixi/bin in your PATH, allowing you to invoke the pixi command from anywhere.

    PowerShell:

    iwr -useb https://pixi.sh/install.ps1 | iex\n
    winget:
    winget install prefix-dev.pixi\n
    The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to LocalAppData/pixi/bin. If this directory does not already exist, the script will create it.

The command will also automatically add LocalAppData/pixi/bin to your PATH, allowing you to invoke pixi from anywhere.

    Tip

    You might need to restart your terminal or source your shell for the changes to take effect.

    "},{"location":"#autocompletion","title":"Autocompletion","text":"

    To get autocompletion run:

    Linux & macOSWindows
    # Pick your shell (use `echo $SHELL` to find the shell you are using.):\necho 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\necho 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\necho 'pixi completion --shell fish | source' >> ~/.config/fish/config.fish\necho 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n

    PowerShell:

    Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n

    Failure because no profile file exists

    Make sure your profile file exists, otherwise create it with:

    New-Item -Path $PROFILE -ItemType File -Force\n

    And then restart the shell or source the shell config file.

    "},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"

Although we recommend installing pixi through the above method, we also provide additional installation methods.

    "},{"location":"#homebrew","title":"Homebrew","text":"

Pixi is available via Homebrew. To install pixi via Homebrew, simply run:

    brew install pixi\n
    "},{"location":"#windows-installer","title":"Windows installer","text":"

    We provide an msi installer on our GitHub releases page. The installer will download pixi and add it to the path.

    "},{"location":"#install-from-source","title":"Install from source","text":"

    pixi is 100% written in Rust, and therefore it can be installed, built and tested with cargo. To start using pixi from a source build run:

    cargo install --locked --git https://github.com/prefix-dev/pixi.git\n

    or when you want to make changes use:

    cargo build\ncargo test\n

If you have any issues building because of the dependency on rattler, check out its compile steps.

    "},{"location":"#update","title":"Update","text":"

Updating is as simple as installing: rerunning the installation script gets you the latest version.

    Linux & macOSWindows

    curl -fsSL https://pixi.sh/install.sh | bash\n
    Or get a specific pixi version using:
    export PIXI_VERSION=vX.Y.Z && curl -fsSL https://pixi.sh/install.sh | bash\n

    PowerShell:

    iwr -useb https://pixi.sh/install.ps1 | iex\n
    Or get a specific pixi version using: PowerShell:
    $Env:PIXI_VERSION=\"vX.Y.Z\"; iwr -useb https://pixi.sh/install.ps1 | iex\n

    Note

If you used a package manager like brew, mamba, conda, or paru to install pixi, use its built-in update mechanism, e.g. brew upgrade pixi.

    "},{"location":"#uninstall","title":"Uninstall","text":"

    To uninstall pixi from your system, simply remove the binary.

    Linux & macOSWindows
    rm ~/.pixi/bin/pixi\n
    $PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n

    After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi directory and remove the directory from your path.

    "},{"location":"Community/","title":"Community","text":"

    When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:

    [![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n

    Customize your badge

    To further customize the look and feel of your badge, you can add &style=<custom-style> at the end of the URL. See the documentation on shields.io for more info.

    "},{"location":"Community/#built-using-pixi","title":"Built using Pixi","text":" "},{"location":"FAQ/","title":"Frequently asked questions","text":""},{"location":"FAQ/#what-is-the-difference-with-conda-mamba-poetry-pip","title":"What is the difference with conda, mamba, poetry, pip","text":"Tool Installs python Builds packages Runs predefined tasks Has lockfiles builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi","text":"

Starting with the name prefix, we iterated until we had a name that was easy to pronounce, spell, and remember, and that no CLI tool was using yet (unlike px, pex, pax, etc.). We think it sparks curiosity and fun; if you don't agree, I'm sorry, but you can always alias it to whatever you like.

    Linux & macOSWindows
    alias not_pixi=\"pixi\"\n

    PowerShell:

    New-Alias -Name not_pixi -Value pixi\n

    "},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build","text":"

    TL;DR: It's coming we promise!

pixi build is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool, which we're creating with rattler-build; it will be used as a library in pixi.

    "},{"location":"basic_usage/","title":"Basic usage","text":"

Ensure you've got pixi set up. If running pixi doesn't show the help, see getting started.

    pixi\n

    Initialize a new project and navigate to the project directory.

    pixi init pixi-hello-world\ncd pixi-hello-world\n

    Add the dependencies you would like to use.

    pixi add python\n

    Create a file named hello_world.py in the directory and paste the following code into the file.

    hello_world.py
    def hello():\n    print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n    hello()\n

    Run the code inside the environment.

    pixi run python hello_world.py\n

    You can also put this run command in a task.

    pixi task add hello python hello_world.py\n

    After adding the task, you can run the task using its name.

    pixi run hello\n

    Use the shell command to activate the environment and start a new shell in there.

    pixi shell\npython\nexit\n

    You've just learned the basic features of pixi:

    1. initializing a project
    2. adding a dependency.
    3. adding a task, and executing it.
    4. running a program.

    Feel free to play around with what you just learned like adding more tasks, dependencies or code.

    Happy coding!

    "},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"

    Use pixi to install tools on your machine.

    Some notable examples:

    # Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
    "},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"

    You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.

    - uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n

    See the GitHub Actions for more details.

    "},{"location":"cli/","title":"Commands","text":""},{"location":"cli/#global-options","title":"Global options","text":""},{"location":"cli/#init","title":"init","text":"

    This command is used to create a new project. It initializes a pixi.toml file and also prepares a .gitignore to prevent the environment from being added to git.

    "},{"location":"cli/#arguments","title":"Arguments","text":"
    1. [PATH]: Where to place the project (defaults to current path) [default: .]
    "},{"location":"cli/#options","title":"Options","text":"

    Importing an environment.yml

When importing an environment, the pixi.toml will be created with the dependencies from the environment file. The pixi.lock will be created when you install the environment. We don't support git+ URLs as dependencies for pip packages, and for the defaults channel we use main, r, and msys2 as the default channels.
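For reference, a minimal environment.yml of the kind that can be imported might look like this (hypothetical contents):

```yaml
name: myproject
channels:
  - conda-forge
dependencies:
  - python=3.11
  - numpy
  - pip
  - pip:
      - requests
```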

    pixi init myproject\npixi init ~/myproject\npixi init  # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\n
    "},{"location":"cli/#add","title":"add","text":"

Adds dependencies to the pixi.toml. It will only add the package if its version constraint is compatible with the rest of the dependencies in the project. More info on multi-platform configuration.

    "},{"location":"cli/#arguments_1","title":"Arguments","text":"
    1. <SPECS>: The package(s) to add, space separated. The version constraint is optional.
    "},{"location":"cli/#options_1","title":"Options","text":"
    pixi add numpy\npixi add numpy pandas \"pytorch>=1.8\"\npixi add \"numpy>=1.22,<1.24\"\npixi add --manifest-path ~/myproject/pixi.toml numpy\npixi add --host \"python>=3.9.0\"\npixi add --build cmake\npixi add --pypi requests[security]\npixi add --platform osx-64 --build clang\npixi add --no-install numpy\npixi add --no-lockfile-update numpy\npixi add --feature featurex numpy\n
    "},{"location":"cli/#install","title":"install","text":"

Installs all dependencies specified in the lockfile pixi.lock, which is generated on pixi add or when you manually change the pixi.toml file and run pixi install.

    "},{"location":"cli/#options_2","title":"Options","text":"
    pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\n
    "},{"location":"cli/#run","title":"run","text":"

The run command first checks if the environment is ready to use. If you haven't run pixi install, the run command will do that for you. The custom tasks defined in the pixi.toml are also available through the run command.

You cannot run pixi run source setup.bash, as source is not one of the deno_task_shell commands and is not an executable.

    "},{"location":"cli/#arguments_2","title":"Arguments","text":"
1. [TASK]... The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.
    "},{"location":"cli/#options_3","title":"Options","text":"
    pixi run python\npixi run cowpy \"Hey pixi user\"\npixi run --manifest-path ~/myproject/pixi.toml python\npixi run --frozen python\npixi run --locked python\n# If you have specified a custom task in the pixi.toml you can run it with run as well\npixi run build\n# Extra arguments will be passed to the tasks command.\npixi run task argument1 argument2\n\n# If you have multiple environments you can select the right one with the --environment flag.\npixi run --environment cuda python\n

    Info

    In pixi the deno_task_shell is the underlying runner of the run command. Checkout their documentation for the syntax and available commands. This is done so that the run commands can be run across all platforms.

    Cross environment tasks

    If you're using the depends_on feature of the tasks, the tasks will be run in the order you specified them. The depends_on can be used cross environment, e.g. you have this pixi.toml:

    pixi.toml
    [tasks]\nstart = { cmd = \"python start.py\", depends_on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n

    Then you're able to run the build from the build environment and start from the default environment. By only calling:

    pixi run start\n

    "},{"location":"cli/#remove","title":"remove","text":"

    Removes dependencies from the pixi.toml.

    "},{"location":"cli/#arguments_3","title":"Arguments","text":"
    1. <DEPS>...: List of dependencies you wish to remove from the project.
    "},{"location":"cli/#options_4","title":"Options","text":"
    pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\n
    "},{"location":"cli/#task","title":"task","text":"

    If you want to make a shorthand for a specific command you can add a task for it.

    "},{"location":"cli/#options_5","title":"Options","text":""},{"location":"cli/#task-add","title":"task add","text":"

Add a task to the pixi.toml. Use --depends-on to add tasks you want to run before this task, e.g. build before an execute task.

    "},{"location":"cli/#arguments_4","title":"Arguments","text":"
    1. <NAME>: The name of the task.
    2. <COMMAND>: The command to run. This can be more than one word.

    Info

If you are using $ for env variables, they will be resolved before being added to the task. If you want to use $ in the task, you need to escape it with a \, e.g. echo \$HOME.

    "},{"location":"cli/#options_6","title":"Options","text":"
    pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\n

    This adds the following to the pixi.toml:

    [tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends_on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n

    Which you can then run with the run command:

    pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
    "},{"location":"cli/#task-remove","title":"task remove","text":"

    Remove the task from the pixi.toml

    "},{"location":"cli/#arguments_5","title":"Arguments","text":""},{"location":"cli/#options_7","title":"Options","text":"
    pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
    "},{"location":"cli/#task-alias","title":"task alias","text":"

    Create an alias for a task.

    "},{"location":"cli/#arguments_6","title":"Arguments","text":"
    1. <ALIAS>: The alias name
    2. <DEPENDS_ON>: The names of the tasks you want to execute on this alias, order counts, first one runs first.
    "},{"location":"cli/#options_8","title":"Options","text":"
    pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
    "},{"location":"cli/#task-list","title":"task list","text":"

    List all tasks in the project.

    "},{"location":"cli/#options_9","title":"Options","text":"
    pixi task list\npixi task list --environment cuda\npixi task list --summary\n
    "},{"location":"cli/#list","title":"list","text":"

    List project's packages. Highlighted packages are explicit dependencies.

    "},{"location":"cli/#options_10","title":"Options","text":"

    ```shell\npixi list\npixi list --json-pretty\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n
    Output will look like this, where python will be green as it is the package that was explicitly added to the pixi.toml:

    \u279c pixi list\n Package           Version     Build               Size       Kind   Source\n _libgcc_mutex     0.1         conda_forge         2.5 KiB    conda  _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex     4.5         2_gnu               23.1 KiB   conda  _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2             1.0.8       hd590300_5          248.3 KiB  conda  bzip2-1.0.8-hd590300_5.conda\n ca-certificates   2023.11.17  hbcca054_0          150.5 KiB  conda  ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64  2.40        h41732ed_0          688.2 KiB  conda  ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat          2.5.0       hcb278e6_1          76.2 KiB   conda  libexpat-2.5.0-hcb278e6_1.conda\n libffi            3.4.2       h7f98852_5          56.9 KiB   conda  libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng         13.2.0      h807b86a_4          755.7 KiB  conda  libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp           13.2.0      h807b86a_4          412.2 KiB  conda  libgomp-13.2.0-h807b86a_4.conda\n libnsl            2.0.1       hd590300_0          32.6 KiB   conda  libnsl-2.0.1-hd590300_0.conda\n libsqlite         3.44.2      h2797004_0          826 KiB    conda  libsqlite-3.44.2-h2797004_0.conda\n libuuid           2.38.1      h0b41bf4_0          32.8 KiB   conda  libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt         4.4.36      hd590300_1          98 KiB     conda  libxcrypt-4.4.36-hd590300_1.conda\n libzlib           1.2.13      hd590300_5          60.1 KiB   conda  libzlib-1.2.13-hd590300_5.conda\n ncurses           6.4         h59595ed_2          863.7 KiB  conda  ncurses-6.4-h59595ed_2.conda\n openssl           3.2.0       hd590300_1          2.7 MiB    conda  openssl-3.2.0-hd590300_1.conda\n python            3.12.1      hab00c5b_1_cpython  30.8 MiB   conda  python-3.12.1-hab00c5b_1_cpython.conda\n readline          8.2         h8228510_1          274.9 KiB  conda  readline-8.2-h8228510_1.conda\n tk                8.6.13      
noxft_h4845f30_101  3.2 MiB    conda  tk-8.6.13-noxft_h4845f30_101.conda\n tzdata            2023d       h0c530f3_0          116.8 KiB  conda  tzdata-2023d-h0c530f3_0.conda\n xz                5.2.6       h166bdaf_0          408.6 KiB  conda  xz-5.2.6-h166bdaf_0.tar.bz2\n
    "},{"location":"cli/#shell","title":"shell","text":"

    This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit.

    "},{"location":"cli/#options_11","title":"Options","text":"
    pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
    "},{"location":"cli/#shell-hook","title":"shell-hook","text":"

    This command prints the activation script of an environment.

    "},{"location":"cli/#options_12","title":"Options","text":"

    pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\n
    Example use-case, when you want to get rid of the pixi executable in a Docker container.
    pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n

    "},{"location":"cli/#search","title":"search","text":"

Search for a package; the output will list the latest version of the package.

    "},{"location":"cli/#arguments_7","title":"Arguments","text":"
    1. <PACKAGE>: Name of package to search, it's possible to use wildcards (*).
    "},{"location":"cli/#options_13","title":"Options","text":"
    pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
    "},{"location":"cli/#self-update","title":"self-update","text":"

Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi), pixi won't update, to prevent breaking the current installation (Homebrew, etc.). The behaviour can be overridden with the --force flag.

    "},{"location":"cli/#options_14","title":"Options","text":"
    pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
    "},{"location":"cli/#info","title":"info","text":"

    Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.

    "},{"location":"cli/#options_15","title":"Options","text":"
    pixi info\npixi info --json --extended\n
    "},{"location":"cli/#upload","title":"upload","text":"

    Upload a package to a prefix.dev channel

    "},{"location":"cli/#arguments_8","title":"Arguments","text":"
    1. <HOST>: The host + channel to upload to.
    2. <PACKAGE_FILE>: The package file to upload.
    pixi upload repo.prefix.dev/my_channel my_package.conda\n
    "},{"location":"cli/#auth","title":"auth","text":"

    This command is used to authenticate the user's access to remote hosts such as prefix.dev or anaconda.org for private channels.

    "},{"location":"cli/#auth-login","title":"auth login","text":"

    Store authentication information for given host.

    Tip

The host is the actual hostname, not a channel.

    "},{"location":"cli/#arguments_9","title":"Arguments","text":"
    1. <HOST>: The host to authenticate with.
    "},{"location":"cli/#options_16","title":"Options","text":"
    pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaNdHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --user john --password xxxxxx\n
    "},{"location":"cli/#auth-logout","title":"auth logout","text":"

    Remove authentication information for a given host.

    "},{"location":"cli/#arguments_10","title":"Arguments","text":"
    1. <HOST>: The host to authenticate with.
    pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
    "},{"location":"cli/#global","title":"global","text":"

Global is the main entry point for the part of pixi that executes on the global (system) level.

    Tip

Binaries and environments installed globally are stored in ~/.pixi by default; this can be changed by setting the PIXI_HOME environment variable.

    "},{"location":"cli/#global-install","title":"global install","text":"

    This command installs package(s) into its own environment and adds the binary to PATH, allowing you to access it anywhere on your system without activating the environment.

    "},{"location":"cli/#arguments_11","title":"Arguments","text":"

1. <PACKAGE>: The package(s) to install; this can also include a version constraint.

    "},{"location":"cli/#options_17","title":"Options","text":"
    pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n

    After using global install, you can use the package you installed anywhere on your system.

    "},{"location":"cli/#global-list","title":"global list","text":"

This command shows the currently installed global environments, including the binaries that come with them. A globally installed package/environment can contain multiple binaries, and they will all be listed in the command output. Here is an example of a few installed packages:

    > pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n|   \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n|   \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n|   \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n|   \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n    \u2514\u2500 exec: uv\n
    "},{"location":"cli/#global-upgrade","title":"global upgrade","text":"

    This command upgrades a globally installed package (to the latest version by default).

    "},{"location":"cli/#arguments_12","title":"Arguments","text":"
    1. <PACKAGE>: The package to upgrade.
    "},{"location":"cli/#options_18","title":"Options","text":"
    pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
    "},{"location":"cli/#global-upgrade-all","title":"global upgrade-all","text":"

    This command upgrades all globally installed packages to their latest version.

    "},{"location":"cli/#options_19","title":"Options","text":"
    pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
    "},{"location":"cli/#global-remove","title":"global remove","text":"

Removes a package previously installed into a globally accessible location via pixi global install.

Use pixi global info to find out the package name that belongs to the tool you want to remove.

    "},{"location":"cli/#arguments_13","title":"Arguments","text":"
    1. <PACKAGE>: The package(s) to remove.
    pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
    "},{"location":"cli/#project","title":"project","text":"

    This subcommand allows you to modify the project configuration through the command line interface.

    "},{"location":"cli/#options_20","title":"Options","text":""},{"location":"cli/#project-channel-add","title":"project channel add","text":"

    Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lockfile and the environment is reinstalled.

    "},{"location":"cli/#arguments_14","title":"Arguments","text":"
    1. <CHANNEL>: The channels to add, name or URL.
    "},{"location":"cli/#options_21","title":"Options","text":"
    pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
    "},{"location":"cli/#project-channel-list","title":"project channel list","text":"

    List the channels in the project file.

    "},{"location":"cli/#options_22","title":"Options","text":"
    $ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
    "},{"location":"cli/#project-channel-remove","title":"project channel remove","text":"

    Remove channels from the channel list in the project configuration.

    "},{"location":"cli/#arguments_15","title":"Arguments","text":"
    1. <CHANNEL>...: The channels to remove, name(s) or URL(s).
    "},{"location":"cli/#options_23","title":"Options","text":"
    pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
    "},{"location":"cli/#project-description-get","title":"project description get","text":"

    Get the project description.

    $ pixi project description get\nPackage management made easy!\n
    "},{"location":"cli/#project-description-set","title":"project description set","text":"

    Set the project description.

    "},{"location":"cli/#arguments_16","title":"Arguments","text":"
    1. <DESCRIPTION>: The description to set.
    pixi project description set \"my new description\"\n
    "},{"location":"cli/#project-platform-add","title":"project platform add","text":"

    Adds platform(s) to the project file and updates the lockfile.

    "},{"location":"cli/#arguments_17","title":"Arguments","text":"
    1. <PLATFORM>...: The platforms to add.
    "},{"location":"cli/#options_24","title":"Options","text":"
    pixi project platform add win-64\npixi project platform add --feature test win-64\n
    "},{"location":"cli/#project-platform-list","title":"project platform list","text":"

    List the platforms in the project file.

    $ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
    "},{"location":"cli/#project-platform-remove","title":"project platform remove","text":"

    Removes platform(s) from the project file and updates the lockfile.

    "},{"location":"cli/#arguments_18","title":"Arguments","text":"
    1. <PLATFORM>...: The platforms to remove.
    "},{"location":"cli/#options_25","title":"Options","text":"
    pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
    "},{"location":"cli/#project-version-get","title":"project version get","text":"

    Get the project version.

    $ pixi project version get\n0.11.0\n
    "},{"location":"cli/#project-version-set","title":"project version set","text":"

    Set the project version.

    "},{"location":"cli/#arguments_19","title":"Arguments","text":"
    1. <VERSION>: The version to set.
    pixi project version set \"0.13.0\"\n
    "},{"location":"cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}","text":"

    Bump the project version to {MAJOR|MINOR|PATCH}.

    pixi project version major\npixi project version minor\npixi project version patch\n
    1. An up-to-date lockfile means that the dependencies in the lockfile are allowed by the dependencies in the manifest file. For example:

      • a pixi.toml with python = \">= 3.11\" is up-to-date with a name: python, version: 3.11.0 in the pixi.lock.
      • a pixi.toml with python = \">= 3.12\" is not up-to-date with a name: python, version: 3.11.0 in the pixi.lock.

      Being up-to-date does not mean that the lockfile holds the latest version available on the channel for the given dependency.\u00a0\u21a9\u21a9\u21a9\u21a9\u21a9

    "},{"location":"configuration/","title":"Configuration","text":"

    The pixi.toml is the pixi project configuration file, also known as the project manifest.

    A toml file is structured in different tables. This document will explain the usage of the different tables. For more technical documentation check crates.io.

    "},{"location":"configuration/#the-project-table","title":"The project table","text":"

    The minimally required information in the project table is:

    [project]\nname = \"project-name\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n

    "},{"location":"configuration/#name","title":"name","text":"

    The name of the project.

    [project]\nname = \"project-name\"\n

    "},{"location":"configuration/#channels","title":"channels","text":"

    This is a list that defines the channels used to fetch packages from. If you want to use channels hosted on anaconda.org you only need to use the name of the channel directly.

    [project]\nchannels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n

    Channels situated on the file system are also supported with absolute file paths:

    [project]\nchannels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n

    To access private or public channels on prefix.dev or Quetz, use the URL including the hostname:

    [project]\nchannels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n

    "},{"location":"configuration/#platforms","title":"platforms","text":"

    Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lockfile (pixi.lock).

    [project]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
    The available platforms are listed here: link

    "},{"location":"configuration/#version-optional","title":"version (optional)","text":"

    The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.

    [project]\nversion = \"1.2.3\"\n

    "},{"location":"configuration/#authors-optional","title":"authors (optional)","text":"

    This is a list of authors of the project.

    [project]\nauthors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n

    "},{"location":"configuration/#description-optional","title":"description (optional)","text":"

    This should contain a short description of the project.

    [project]\ndescription = \"A simple description\"\n

    "},{"location":"configuration/#license-optional","title":"license (optional)","text":"

    The license as a valid SPDX string (e.g. MIT AND Apache-2.0).

    [project]\nlicense = \"MIT\"\n

    "},{"location":"configuration/#license-file-optional","title":"license-file (optional)","text":"

    Relative path to the license file.

    [project]\nlicense-file = \"LICENSE.md\"\n

    "},{"location":"configuration/#readme-optional","title":"readme (optional)","text":"

    Relative path to the README file.

    [project]\nreadme = \"README.md\"\n

    "},{"location":"configuration/#homepage-optional","title":"homepage (optional)","text":"

    URL of the project homepage.

    [project]\nhomepage = \"https://pixi.sh\"\n

    "},{"location":"configuration/#repository-optional","title":"repository (optional)","text":"

    URL of the project source repository.

    [project]\nrepository = \"https://github.com/prefix-dev/pixi\"\n

    "},{"location":"configuration/#documentation-optional","title":"documentation (optional)","text":"

    URL of the project documentation.

    [project]\ndocumentation = \"https://pixi.sh\"\n

    "},{"location":"configuration/#the-tasks-table","title":"The tasks table","text":"

    Tasks are a way to automate certain custom commands in your project. For example, a lint or format step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run and are executed using the deno_task_shell.

    [tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends_on=\"simple\"}\nalias = { depends_on=[\"depending\"]}\n
    You can modify this table using pixi task.

    Note

    Specify different tasks for different platforms using the target table

    "},{"location":"configuration/#the-system-requirements-table","title":"The system-requirements table","text":"

    The system requirements are used to define minimal system specifications used during dependency resolution. For example, we can define a unix system with a specific minimal libc version. This will be the minimal system specification for the project. System specifications are directly related to the virtual packages.

    Currently, the specified defaults are the same as conda-lock's implementation:

    default system requirements for linux
    [system-requirements]\nlinux = \"5.10\"\nlibc = { family=\"glibc\", version=\"2.17\" }\n
    default system requirements for windows
    [system-requirements]\n
    default system requirements for osx
    [system-requirements]\nmacos = \"10.15\"\n
    default system requirements for osx-arm64
    [system-requirements]\nmacos = \"11.0\"\n

    Only if a project requires a different set should you define them.

    For example, when installing environments on old versions of linux, you may encounter the following error:

    \u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '5.10' but the system has version '4.12.14'\n
    This suggests that the system requirements for the project should be lowered. To fix this, add the following table to your configuration:
    [system-requirements]\nlinux = \"4.12.14\"\n

    "},{"location":"configuration/#using-cuda-in-pixi","title":"Using Cuda in pixi","text":"

    If you want to use cuda in your project you need to add the following to your system-requirements table:

    [system-requirements]\ncuda = \"11\" # or any other version of cuda you want to use\n
    This informs the solver that cuda is going to be available, so it can lock it into the lockfile if needed.
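    For instance, a manifest that opts into CUDA might look like the sketch below (the project name and pins are illustrative assumptions, not taken from the docs above):

    ```toml
    # Illustrative sketch: tell the solver CUDA 12 is available on the system.
    [project]
    name = "gpu-project"  # hypothetical name
    channels = ["nvidia", "conda-forge"]
    platforms = ["linux-64"]

    [system-requirements]
    cuda = "12"

    [dependencies]
    # With the system requirement in place, the solver may lock
    # CUDA-enabled builds of packages like this (illustrative).
    pytorch = "*"
    ```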

    "},{"location":"configuration/#the-dependencies-tables","title":"The dependencies table(s)","text":"

    This section defines what dependencies you would like to use for your project.

    There are multiple dependencies tables. The default is [dependencies], which are dependencies that are shared across platforms.

    Dependencies are defined using a VersionSpec. A VersionSpec combines a Version with an optional operator.

    Some examples are:

    # Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to 1.3.0\npackage1 = \"~=1.2.3\"\n# Use a version larger than 1.2 and lower than or equal to 1.4\npackage2 = \">1.2,<=1.4\"\n# Use a version larger than or equal to 1.2.3, or lower than 1.0.0\npackage3 = \">=1.2.3|<1.0.0\"\n

    Dependencies can also be defined as a mapping, in which case a matchspec is used:

    package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n

    Tip

    The dependencies can be easily added using the pixi add command line. Running add for an existing dependency will replace it with the newest version it can use.

    Note

    To specify different dependencies for different platforms use the target table

    "},{"location":"configuration/#dependencies","title":"dependencies","text":"

    Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table should you use anything other than conda-forge. Even if a dependency defines a channel, that channel should still be added to the project.channels list.

    [dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
    "},{"location":"configuration/#pypi-dependencies-beta-feature","title":"pypi-dependencies (Beta feature)","text":"Details regarding the PyPI integration

    We use uv, which is a new fast pip replacement written in Rust.

    We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This disallows uv from installing these dependencies itself, and ensures it uses the exact version of these packages in the resolution. This is unique amongst conda based package managers, which usually just call pip from a subprocess.

    The uv resolution is included in the lock file directly.

    Pixi directly supports depending on PyPI packages; the PyPA calls a distributed package a 'distribution'. There are source and binary distributions, both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.

    Important considerations

    "},{"location":"configuration/#pep404-version-specification","title":"PEP 440 Version specification:","text":"

    These dependencies don't follow the conda matchspec specification. The version is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:

    [dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\npytest = \"*\"  # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\nrequests = {version = \">= 2.8.1, ==2.8.*\", extras=[\"security\", \"tests\"]}\n
    Did you know you can use: add --pypi?

    Use the --pypi flag with the add command to quickly add PyPI packages from the CLI. E.g. pixi add --pypi flask

    "},{"location":"configuration/#source-dependencies","title":"Source dependencies","text":"

    The Source Distribution Format is a source-based format (sdist for short) that a package can include alongside the binary wheel format. Because these distributions need to be built, they need a python executable to do so. This is why python needs to be present in the conda environment. Sdists usually also depend on system packages to be built, especially when compiling C/C++ based python bindings. Think, for example, of Python SDL2 bindings depending on the C library SDL2. To help build these dependencies we activate the conda environment that includes these pypi dependencies before resolving. This way, when a source distribution depends on gcc for example, it is used from the conda environment instead of the system.

    "},{"location":"configuration/#host-dependencies","title":"host-dependencies","text":"

    This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.

    [host-dependencies]\npython = \"~=3.10.3\"\n

    Typical examples of host dependencies are:

    "},{"location":"configuration/#build-dependencies","title":"build-dependencies","text":"

    This table contains dependencies that are needed to build the project. Different from dependencies and host-dependencies, these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.

    [build-dependencies]\ncmake = \"~=3.24\"\n

    Typical examples of build dependencies are:

    Info

    The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.

    For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64 and your host platform is linux-64.

    "},{"location":"configuration/#the-activation-table","title":"The activation table","text":"

    If you want to run an activation script inside the environment when doing a pixi run or pixi shell, it can be defined here. The scripts defined in this table will be sourced when the environment is activated using pixi run or pixi shell.

    Note

    The activation scripts are run by the system shell interpreter as they run before an environment is available. This means that it runs as cmd.exe on windows and bash on linux and osx (Unix). Only .sh, .bash and .bat files are supported.

    If you have scripts per platform use the target table.

    [activation]\nscripts = [\"env_setup.sh\"]\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n
    "},{"location":"configuration/#the-target-table","title":"The target table","text":"

    The target table allows for platform-specific configuration, letting you define different sets of tasks or dependencies per platform.

    The target table is currently implemented for the following sub-tables:

    The target table is defined using [target.PLATFORM.SUB-TABLE]. E.g. [target.linux-64.dependencies]

    The platform can be any of:

    The sub-table can be any of the specified above.

    To make this a bit clearer, let's look at the example below. Currently, pixi combines the top-level tables like dependencies with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, cmake is used for all targets, but on osx-64 or osx-arm64 a different version of python will be selected.

    [dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n

    Here are some more examples:

    [target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n

    "},{"location":"configuration/#the-feature-and-environments-tables","title":"The feature and environments tables","text":"

    The feature table allows you to define features that can be used to create different [environments]. The [environments] table allows you to define different environments. The design is explained in this design document.

    Simplest example

    [feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
    This will create an environment called test that has pytest installed.

    "},{"location":"configuration/#the-feature-table","title":"The feature table","text":"

    The feature table allows you to define the following fields per feature.

    These tables are also all available without the feature prefix. When they are used that way, we call them the default feature. This is a protected name that you cannot use for your own feature.
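    As a sketch of that split (the package names and task here are illustrative assumptions, not from the docs above), the prefix-less tables belong to the default feature, while the prefixed table defines a named feature:

    ```toml
    # These prefix-less tables define the default feature.
    [dependencies]
    numpy = "*"  # illustrative package

    [tasks]
    start = "python main.py"  # illustrative task

    # This table defines a separate feature named `lint`.
    [feature.lint.dependencies]
    ruff = "*"  # illustrative package
    ```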

    Full feature table specification
    [feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nchannels = [\"nvidia\"] # Results in:  [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
    Full feature table but written as separate tables
    [feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
    "},{"location":"configuration/#the-environments-table","title":"The environments table","text":"

    The environments table allows you to define environments that are created using the features defined in the feature tables.

    Important

    default is always implied when creating environments. If you don't want to use the default feature you can keep all the non-feature tables empty.
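    A sketch of that setup (feature names and version pins are illustrative): with no top-level dependency tables, the default feature contributes nothing, so each environment contains only its named features.

    ```toml
    # No top-level [dependencies]: the default feature stays empty.
    [feature.py311.dependencies]
    python = "3.11.*"  # illustrative pin

    [feature.py312.dependencies]
    python = "3.12.*"  # illustrative pin

    [environments]
    py311 = ["py311"]
    py312 = ["py312"]
    ```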

    The environments table is defined using the following fields:

    Simplest example
    [environments]\ntest = [\"test\"]\n
    Full environments table specification
    [environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = [\"lint\"]\n
    "},{"location":"configuration/#global-configuration","title":"Global configuration","text":"

    The global configuration options are documented in the global configuration section.

    "},{"location":"environment/","title":"Environments","text":"

    Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.

    "},{"location":"environment/#structure","title":"Structure","text":"

    A pixi environment is located in the .pixi/envs directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.

    If you look at the .pixi/envs directory, you will see a directory for each environment: default is the one that is normally used, and if you specify a custom environment, a directory with the name you specified will be used.

    .pixi\n\u2514\u2500\u2500 envs\n    \u251c\u2500\u2500 cuda\n    \u2502   \u251c\u2500\u2500 bin\n    \u2502   \u251c\u2500\u2500 conda-meta\n    \u2502   \u251c\u2500\u2500 etc\n    \u2502   \u251c\u2500\u2500 include\n    \u2502   \u251c\u2500\u2500 lib\n    \u2502   ...\n    \u2514\u2500\u2500 default\n        \u251c\u2500\u2500 bin\n        \u251c\u2500\u2500 conda-meta\n        \u251c\u2500\u2500 etc\n        \u251c\u2500\u2500 include\n        \u251c\u2500\u2500 lib\n        ...\n

    These directories are conda environments, and you can use them as such, but you should not manually edit them; changes should always go through the pixi.toml. Pixi will always make sure the environment is in sync with the pixi.lock file. If this is not the case, then all the commands that use the environment will automatically update the environment, e.g. pixi run, pixi shell.

    "},{"location":"environment/#cleaning-up","title":"Cleaning up","text":"

    If you want to clean up the environments, you can simply delete the .pixi/envs directory, and pixi will recreate the environments when needed.

    # either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n

    "},{"location":"environment/#activation","title":"Activation","text":"

    An environment is nothing more than a set of files that are installed into a certain location, somewhat mimicking a global system install. You need to activate the environment to use it. In the simplest sense, that means adding the bin directory of the environment to the PATH variable. But there is more to it for a conda environment, as activation also sets some environment variables.

    To do the activation we have multiple options: use the pixi shell command to open a shell with the environment activated; use the pixi shell-hook command to print the commands that activate the environment in your current shell; or use the pixi run command to run a command in the environment.

    The run command is special, as it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.

    Running pixi shell-hook would give you output like the following:

    export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
    It sets the PATH and some more environment variables. But more importantly, it also runs activation scripts that are provided by the installed packages. An example of this is the libglib_activate.sh script. Thus, just adding the bin directory to the PATH is not enough.

    "},{"location":"environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate-like activation","text":"

    If you prefer to use the traditional conda activate-like activation, you could use the pixi shell-hook command.

    $ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n

    Warning

    It is not encouraged to use the traditional conda activate-like activation, as deactivating the environment is not really possible. Use pixi shell instead.

    "},{"location":"environment/#using-pixi-with-direnv","title":"Using pixi with direnv","text":"

    This allows you to use pixi in combination with direnv. Enter the following into your .envrc file:

    .envrc
    watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
    1. This ensures that every time your pixi.lock changes, direnv invokes the shell-hook again.
    2. This installs if needed, and activates the environment. direnv ensures that the environment is deactivated when you leave the directory.
    $ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
    "},{"location":"environment/#environment-variables","title":"Environment variables","text":"

    The following environment variables are set by pixi, when using the pixi run, pixi shell, or pixi shell-hook command:

    Note

    Even though these are environment variables, they cannot be overridden. E.g. you cannot change the root of the project by setting PIXI_PROJECT_ROOT in the environment.

    "},{"location":"environment/#solving-environments","title":"Solving environments","text":"

    When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml and will put the output of the solve step into the pixi.lock file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:

    Pixi solves both the conda and PyPI dependencies, where the PyPI dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler and rip libraries, which control the heavy lifting of the solving process, executed by our custom SAT solver: resolvo. resolvo is able to solve multiple ecosystems, like conda and PyPI. It implements the lazy solving process for PyPI packages, which means that it only downloads the metadata of the packages that are needed to solve the environment. It also supports the conda way of solving, which means that it downloads the metadata of all the packages at once and then solves in one go.

    For the [pypi-dependencies], rip implements sdist building to retrieve the metadata of the packages, and wheel building to install the packages. For this building step, pixi requires python to first be installed via the (conda) [dependencies] section of the pixi.toml file. This will always be slower than pure conda solves, so for the best pixi experience you should stay within the [dependencies] section of the pixi.toml file where possible.
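    A minimal sketch of that recommendation (the pins and package names are illustrative): python comes from the conda [dependencies] table so pypi resolution and builds can run, and conda packages are preferred where available.

    ```toml
    [dependencies]
    # python must be a conda dependency so sdist builds and wheel installs can run.
    python = ">=3.9"
    # Prefer conda packages when they exist on your channels...
    numpy = "*"

    [pypi-dependencies]
    # ...and reserve [pypi-dependencies] for packages only on PyPI.
    some-pypi-only-package = "*"  # hypothetical package name
    ```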

    "},{"location":"environment/#caching","title":"Caching","text":"

    Pixi caches the packages used in the environment. So if you have multiple projects that use the same packages, pixi will only download the packages once.

    The cache is located in the ~/.cache/rattler/cache directory by default. This location is configurable by setting the PIXI_CACHE_DIR or RATTLER_CACHE_DIR environment variable.

    When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.

    "},{"location":"vision/","title":"Vision","text":"

    We created pixi because we want to have a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think that the user experience can be improved a lot. Modern package managers like cargo have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.

    "},{"location":"vision/#pixi-values","title":"Pixi values","text":"

    We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:

    1. Fast. We want to have a fast package manager, that is able to solve the environment in a few seconds.
    2. User Friendly. We want to have a package manager that puts user friendliness on the front line, providing easy, accessible and intuitive commands that have the element of least surprise.
    3. Isolated Environment. We want to have isolated environments, that are reproducible and easy to share. Ideally, it should run on all common platforms. The Conda packaging system provides an excellent base for this.
    4. Single Tool. We want to integrate most common uses when working on a development project with Pixi, so it should support at least dependency management, command management, building and uploading packages. You should not need to reach to another external tool for this.
    5. Fun. It should be fun to use pixi and not cause frustrations, you should not need to think about it a lot and it should generally just get out of your way.
    "},{"location":"vision/#conda","title":"Conda","text":"

    We are building on top of the conda packaging ecosystem; this means that we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community maintained and very open to contributions. It is widely used in data science, scientific computing, robotics and other fields, and has a proven track record.

    "},{"location":"vision/#target-languages","title":"Target languages","text":"

    Essentially, we are language agnostic; we are targeting any language that can be installed with conda, including C++, Python, Rust, Zig etc. But we do believe the Python ecosystem can benefit from a good package manager that is based on conda, so we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when using it for multi-language projects, e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system-level packages.

    "},{"location":"advanced/advanced_tasks/","title":"Advanced tasks","text":"

    When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.

    Here are some quick examples:

    pixi.toml
    [tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n    \"cmake\",\n    # Use the cross-platform Ninja generator\n    \"-G\",\n    \"Ninja\",\n    # The source is in the root directory\n    \"-S\",\n    \".\",\n    # We wanna build in the .build directory\n    \"-B\",\n    \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends_on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
    "},{"location":"advanced/advanced_tasks/#depends-on","title":"Depends on","text":"

    Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.

    An obvious example is compiling before running an application.

    Check out our cpp_sdl example for a running example. In that package we have some tasks that depend on each other, so we can ensure that when you run pixi run start everything is set up as expected.

    pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n

    Results in the following lines added to the pixi.toml

    pixi.toml
    [tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends_on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends_on = [\"build\"] }\n
    pixi run start\n

    The tasks will be executed one after another:

    If one of the commands fails (exits with a non-zero code), execution stops and the following tasks will not be started.
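    As a sketch (task names and commands here are hypothetical), a failing dependency stops the rest of the pipeline:

```toml
[tasks]
# If pytest exits with a non-zero code, `deploy` never runs.
test = "pytest"
deploy = { cmd = "python deploy.py", depends_on = ["test"] }
```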

    With this logic, you can also create aliases as you don't have to specify any command in a task.

    pixi task add fmt ruff\npixi task add lint pylint\n
    pixi task alias style fmt lint\n

    Results in the following pixi.toml.

    pixi.toml
    fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends_on = [\"fmt\", \"lint\"] }\n

    Now run both tools with one command.

    pixi run style\n
    "},{"location":"advanced/advanced_tasks/#working-directory","title":"Working directory","text":"

    Pixi tasks support the definition of a working directory.

    cwd\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml file is located.

    Consider a pixi project structured as follows:

    \u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n    \u2514\u2500\u2500 bar.py\n

    To add a task to run the bar.py file, use:

    pixi task add bar \"python bar.py\" --cwd scripts\n

    This will add the following line to pixi.toml:

    pixi.toml

    [tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n

    "},{"location":"advanced/advanced_tasks/#caching","title":"Caching","text":"

    When you specify inputs and/or outputs to a task, pixi will reuse the result of the task.

    For the cache, pixi checks that the following are true:

    1. None of the input files have changed since the last run.
    2. All of the specified output files exist.

    If all of these conditions are met, pixi will not run the task again and instead use the existing result.

    Inputs and outputs can be specified as globs, which will be expanded to all matching files.

    pixi.toml
    [tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n

    Note: if you want to debug the globs you can use the --verbose flag to see which files are selected.

    # shows info logs of all files that were selected by the globs\npixi run -v start\n
    "},{"location":"advanced/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"

    To support the different operating systems (Windows, macOS and Linux), pixi integrates a shell that can run on all of them: deno_task_shell. The task shell is a limited implementation of a Bourne shell interface.

    "},{"location":"advanced/advanced_tasks/#built-in-commands","title":"Built-in commands","text":"

    Next to running actual executables like ./myprogram, cmake or python, the shell has some built-in commands.

    "},{"location":"advanced/advanced_tasks/#syntax","title":"Syntax","text":"

    More info in deno_task_shell documentation.

    "},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"

    You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.

    Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n  <HOST>  The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n      --token <TOKEN>              The token to use (for authentication with prefix.dev)\n      --username <USERNAME>        The username to use (for basic HTTP authentication)\n      --password <PASSWORD>        The password to use (for basic HTTP authentication)\n      --conda-token <CONDA_TOKEN>  The token to use on anaconda.org / quetz authentication\n  -v, --verbose...                 More output per occurrence\n  -q, --quiet...                   Less output per occurrence\n  -h, --help                       Print help\n

    The different options are \"token\", \"conda-token\" and \"username + password\".

    The token variant implements a standard "Bearer Token" authentication as is used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authorization: Bearer <TOKEN>.

    The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/....

    The last option, username & password, is used for "Basic HTTP Authentication". This is the equivalent of adding http://user:password@myserver.com/.... This authentication method can be configured quite easily with a reverse proxy such as NGINX or Apache and is thus commonly used in self-hosted systems.
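    As a sketch of how such a reverse-proxy setup commonly looks (the location, htpasswd path and upstream address are assumptions, not part of pixi):

```nginx
location / {
    # htpasswd file created with e.g. `htpasswd -c /etc/nginx/.htpasswd user`
    auth_basic "Private channel";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://127.0.0.1:8000;
}
```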

    "},{"location":"advanced/authentication/#examples","title":"Examples","text":"

    Login to prefix.dev:

    pixi auth login prefix.dev --token pfx_jj8WDzvnuTEHGdAhwRZMC1Ag8gSto8\n

    Login to anaconda.org:

    pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n

    Login to a basic HTTP secured server:

    pixi auth login myserver.com --username user --password password\n
    "},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"

    The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.

    On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

    On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access program that comes pre-installed on macOS. Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

    On Linux, one can use GNOME Keyring (or just Keyring) to access credentials that are securely stored by libsecret. Searching for rattler should list all the credentials stored by pixi and other rattler-based programs.

    "},{"location":"advanced/authentication/#fallback-storage","title":"Fallback storage","text":"

    If you run on a server with none of the aforementioned keychains available, pixi falls back to storing the credentials in an insecure JSON file, located at ~/.rattler/credentials.json.

    "},{"location":"advanced/authentication/#override-the-authentication-storage","title":"Override the authentication storage","text":"

    You can use the RATTLER_AUTH_FILE environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.

    E.g.

    export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n

    The JSON should follow the following format:

    {\n    \"*.prefix.dev\": {\n        \"BearerToken\": \"your_token\"\n    },\n    \"otherhost.com\": {\n        \"BasicHttp\": {\n            \"username\": \"your_username\",\n            \"password\": \"your_password\"\n        }\n    },\n    \"conda.anaconda.org\": {\n        \"CondaToken\": \"your_token\"\n    }\n}\n

    Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev also matches repo.prefix.dev).
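    To illustrate the wildcard behavior, glob-style matching on the host works roughly like this (a simplified sketch, not rattler's actual implementation):

```python
from fnmatch import fnmatch

def host_matches(pattern: str, host: str) -> bool:
    # "*.prefix.dev" matches any subdomain of prefix.dev,
    # while a plain host like "otherhost.com" only matches itself.
    return fnmatch(host, pattern)

print(host_matches("*.prefix.dev", "repo.prefix.dev"))  # True
print(host_matches("otherhost.com", "otherhost.com"))   # True
```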

    Lastly, you can set the authentication override file in the global configuration file.

    "},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"

    All logic regarding the decision of which dependencies can be installed from which channel is determined by the instructions we give the solver.

    The actual code for this lives in the rattler_solve crate. It might, however, be hard to read; therefore, this document continues with simplified flow charts.

    "},{"location":"advanced/channel_priority/#channel-specific-dependencies","title":"Channel specific dependencies","text":"

    When a user defines a channel per dependency, the solver needs to know the other channels are unusable for this dependency.

    [project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
    In the packagex example, the solver will understand that the package is only available in my-channel and will not look for it in conda-forge.

    The flowchart of the logic that excludes all other channels:

    flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Channel Specific Dependency?}\n    C -->|Yes| D[Exclude All Other Channels for This Package]\n    C -->|No| E{Any Other Dependencies?}\n    E -->|Yes| B\n    E -->|No| F[End]\n    D --> E
    "},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"

    Channel priority is dictated by the order in the project.channels array, where the first channel is the highest priority. For instance:

    [project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
    If the package is found in conda-forge, the solver will not look for it in my-channel and your-channel, because it is told they are excluded. If the package is not found in conda-forge, the solver will look for it in my-channel; if it is found there, your-channel is excluded for this package. This diagram explains the logic:
    flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Loop Over Channels}\n    C --> D{Package in This Channel?}\n    D -->|No| C\n    D -->|Yes| E{\"This the first channel\n     for this package?\"}\n    E -->|Yes| F[Include Package in Candidates]\n    E -->|No| G[Exclude Package from Candidates]\n    F --> H{Any Other Channels?}\n    G --> H\n    H -->|Yes| C\n    H -->|No| I{Any Other Dependencies?}\n    I -->|No| J[End]\n    I -->|Yes| B

    This method ensures the solver only adds a package to the candidates if it's found in the highest-priority channel available. If you have 10 channels and the package is found in the 5th channel, the remaining 5 channels are excluded from the candidates if they also contain the package.
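    The selection described above can be sketched as follows (a simplified model of the priority rule, not the actual rattler_solve code):

```python
def candidate_channel(channels, packages_per_channel, name):
    """Return the highest-priority channel that ships `name`, or None.

    `channels` is ordered highest priority first; once the package is
    found, every lower-priority channel is excluded for that package.
    """
    for channel in channels:
        if name in packages_per_channel.get(channel, ()):
            return channel
    return None

channels = ["conda-forge", "my-channel", "your-channel"]
available = {
    "my-channel": {"packagex"},
    "your-channel": {"packagex"},
}
# packagex is absent from conda-forge, so it comes from my-channel;
# your-channel is excluded even though it also ships the package.
print(candidate_channel(channels, available, "packagex"))  # my-channel
```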

    "},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"

    A common use case is to use pytorch with nvidia drivers, while also needing the conda-forge channel for the main dependencies.

    [project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
    What this will do is get as much as possible from the nvidia/label/cuda-11.8.0 channel, which is actually only the cuda package.

    Then it will get all packages from the nvidia channel, which contains a little more, and some packages overlap between the nvidia and conda-forge channels, like the cuda-cudart package, which will now only be retrieved from the nvidia channel because of the priority logic.

    Then it will get the packages from the conda-forge channel, which is the main channel for the dependencies.

    But the user only wants the pytorch packages from the pytorch channel, which is why pytorch is added last and the dependencies are added as channel specific dependencies.

    We don't define the pytorch channel before conda-forge because we want to get as much as possible from conda-forge, as the pytorch channel does not always ship the best versions of all packages.

    For example, it also ships the ffmpeg package, but only an old version which doesn't work with the newer pytorch versions. This would break the installation if we skipped the conda-forge channel for ffmpeg via the priority logic.

    "},{"location":"advanced/explain_info_command/","title":"Info command","text":"

    pixi info prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in json format using the --json flag, which can be useful for programmatically reading it.

    Running pixi info in the pixi repo
    \u279c pixi info\n      Pixi version: 0.13.0\n          Platform: linux-64\n  Virtual packages: __unix=0=0\n                  : __linux=6.5.12=0\n                  : __glibc=2.36=0\n                  : __cuda=12.3=0\n                  : __archspec=1=x86_64\n         Cache dir: /home/user/.cache/rattler/cache\n      Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n           Version: 0.13.0\n     Manifest file: /home/user/development/pixi/pixi.toml\n      Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n          Features: default\n          Channels: conda-forge\n  Dependency count: 10\n      Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n  Target platforms: linux-64, osx-arm64, win-64, osx-64\n             Tasks: docs, test-all, test, build, lint, install, build-docs\n
    "},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"

    The first part of the info output is information that is always available and tells you what pixi can read on your machine.

    "},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"

    This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.

    "},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"

    The virtual packages that pixi can find on your machine.

    In the Conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that are going to be installed, but rather are used in the solve step to find out if a package can be installed on the machine. A simple example: when a package depends on CUDA drivers being present on the host machine, it can express that by depending on the __cuda virtual package. In that case, if pixi cannot find the __cuda virtual package on your machine the installation will fail.

    "},{"location":"advanced/explain_info_command/#cache-dir","title":"Cache dir","text":"

    Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools. Normally the locations would be:

    Linux: $XDG_CACHE_HOME/rattler or $HOME/.cache/rattler
    macOS: $HOME/Library/Caches/rattler
    Windows: {FOLDERID_LocalAppData}/rattler

    When your system is filling up you can easily remove this folder. It will re-download everything it needs the next time you install a project.

    "},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"

    Check the authentication documentation

    "},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"

    [requires --extended]

    The size of the previously mentioned \"Cache dir\" in Mebibytes.

    "},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"

    Everything below Project is info about the project you're currently in. This info is only available if your path has a manifest file (pixi.toml).

    "},{"location":"advanced/explain_info_command/#manifest-file","title":"Manifest file","text":"

    The path to the manifest file that describes the project. For now, this can only be pixi.toml.

    "},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"

    The last time the lockfile was updated, either manually or by pixi itself.

    "},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"

    The environment info defined per environment. If you don't have any environments defined, this will only show the default environment.

    "},{"location":"advanced/explain_info_command/#features","title":"Features","text":"

    This lists which features are enabled in the environment. For the default environment, this is only default.

    "},{"location":"advanced/explain_info_command/#channels","title":"Channels","text":"

    The list of channels used in this environment.

    "},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"

    The number of dependencies defined for this environment (not the number of installed packages).

    "},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"

    The list of dependencies defined for this environment.

    "},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"

    The platforms the project has defined.

    "},{"location":"advanced/github_actions/","title":"GitHub Action","text":"

    We created prefix-dev/setup-pixi to facilitate using pixi in CI.

    "},{"location":"advanced/github_actions/#usage","title":"Usage","text":"
    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    pixi-version: v0.16.1\n    cache: true\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n

    Pin your action versions

    Since pixi is not yet stable, the API of this action may change between minor versions. Please pin the versions of this action to a specific version (i.e., prefix-dev/setup-pixi@v0.5.1) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.

    Put the following in your .github/dependabot.yml file to enable Dependabot for your GitHub Actions:

    .github/dependabot.yml
    version: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: monthly # (1)!\n    groups:\n      dependencies:\n        patterns:\n          - \"*\"\n
    1. or daily, weekly
    "},{"location":"advanced/github_actions/#features","title":"Features","text":"

    To see all available input arguments, see the action.yml file in setup-pixi. The most important features are described below.

    "},{"location":"advanced/github_actions/#caching","title":"Caching","text":"

    The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock file is present. It will then use the pixi.lock file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache input argument.

    Customize your cache key

    If you need to customize your cache-key, you can use the cache-key input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>.

    Only save caches on main

    In order not to exceed the 10 GB cache size limit too quickly, you might want to restrict when the cache is saved. This can be done by setting the cache-write argument.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    cache: true\n    cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
    "},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"

    With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments input argument. This will install all environments that are specified and cache them.

    [project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
    "},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"

    The following example will install the py311 and py312 environments in different jobs.

    test:\n  runs-on: ubuntu-latest\n  strategy:\n    matrix:\n      environment: [py311, py312]\n  steps:\n  - uses: actions/checkout@v4\n  - uses: prefix-dev/setup-pixi@v0.5.1\n    with:\n      environments: ${{ matrix.environment }}\n
    "},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"

    The following example will install both the py311 and the py312 environment on the runner.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    environments: >- # (1)!\n      py311\n      py312\n- run: |\n  pixi run -e py311 test\n  pixi run -e py312 test\n
    1. separated by spaces, equivalent to

      environments: py311 py312\n

    Caching behavior if you don't specify environments

    If you don't specify any environment, the default environment will be installed and cached, even if you use other environments.

    "},{"location":"advanced/github_actions/#authentication","title":"Authentication","text":"

    There are currently three ways to authenticate with pixi: a token, a username and password combination, or a conda-token.

    For more information, see Authentication.

    Handle secrets with care

    Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }} syntax. These secrets will always be masked in the logs.

    "},{"location":"advanced/github_actions/#token","title":"Token","text":"

    Specify the token using the auth-token input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
    "},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"

    Specify the username and password using the auth-username and auth-password input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments, with Artifactory for example.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: custom-artifactory.com\n    auth-username: ${{ secrets.PIXI_USERNAME }}\n    auth-password: ${{ secrets.PIXI_PASSWORD }}\n
    "},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"

    Specify the conda-token using the conda-token input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel) is used at anaconda.org or with quetz instances.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: anaconda.org # (1)!\n    conda-token: ${{ secrets.CONDA_TOKEN }}\n
    1. or my-quetz-instance.com
    "},{"location":"advanced/github_actions/#custom-shell-wrapper","title":"Custom shell wrapper","text":"

    setup-pixi allows you to run commands inside of the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}. This can be useful if you want to run commands inside of the pixi environment, but don't want to use the pixi run command for each command.

    - run: | # (1)!\n    python --version\n    pip install -e . --no-deps\n  shell: pixi run bash -e {0}\n
    1. everything here will be run inside of the pixi environment

    You can even run Python scripts like this:

    - run: | # (1)!\n    import my_package\n    print(\"Hello world!\")\n  shell: pixi run python {0}\n
    1. everything here will be run inside of the pixi environment

    If you want to use PowerShell, you need to specify -Command as well.

    - run: | # (1)!\n    python --version | Select-String \"3.11\"\n  shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
    1. everything here will be run inside of the pixi environment

    How does it work under the hood?

    Under the hood, the shell: xyz {0} option is implemented by creating a temporary script file and calling xyz with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0} directly but instead have to use shell: pixi run bash {0}. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell in the documentation. See the official documentation and ADR 0277 for more information about how the shell: input works in GitHub Actions.

    "},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen and --locked","text":"

    You can specify whether setup-pixi should run pixi install --frozen or pixi install --locked depending on the frozen or the locked input argument. See the official documentation for more information about the --frozen and --locked flags.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    locked: true\n    # or\n    frozen: true\n

    If you don't specify anything, the default behavior is to run pixi install --locked if a pixi.lock file is present and pixi install otherwise.

    "},{"location":"advanced/github_actions/#debugging","title":"Debugging","text":"

    There are two types of debug logging that you can enable.

    "},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"

    The first one is the debug logging of the action itself. This can be enabled by re-running the action in debug mode:

    Debug logging documentation

    For more information about debug logging in GitHub Actions, see the official documentation.

    "},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"

    The second type is the debug logging of the pixi executable. This can be specified by setting the log-level input.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    log-level: vvv # (1)!\n
    1. One of q, default, v, vv, or vvv.

    If nothing is specified, log-level will default to default or vv depending on whether debug logging is enabled for the action.

    "},{"location":"advanced/github_actions/#self-hosted-runners","title":"Self-hosted runners","text":"

    On self-hosted runners, it may happen that some files are persisted between jobs. This can lead to problems or secrets getting leaked between job runs. To avoid this, you can use the post-cleanup input to specify the post cleanup behavior of the action (i.e., what happens after all your commands have been executed).

    If you set post-cleanup to true, the action will delete the following files:

    If nothing is specified, post-cleanup will default to true.

    On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi to do this.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    post-cleanup: true\n    pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
    1. ${{ runner.temp }}\\Scripts\\pixi.exe on Windows
    "},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"

    If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi repository.

    "},{"location":"advanced/global_configuration/","title":"Global configuration in pixi","text":"

    Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong into the project file). The configuration is loaded in the following order:

    1. Global configuration folder (e.g. ~/.config/pixi/config.toml on Linux, dependent on XDG_CONFIG_HOME)
    2. Global .pixi folder: ~/.pixi/config.toml (or $PIXI_HOME/config.toml if the PIXI_HOME environment variable is set)
    3. Project-local .pixi folder: $PIXI_PROJECT/.pixi/config.toml
    4. Command line arguments (--tls-no-verify, --change-ps1=false etc.)

    Note

    To find the locations where pixi looks for configuration files, run pixi with -v or --verbose.

    "},{"location":"advanced/global_configuration/#reference","title":"Reference","text":"

    The following reference describes all available configuration options.

    # The default channels to select when running `pixi init` or `pixi global install`.\n# This defaults to only conda-forge.\ndefault_channels = [\"conda-forge\"]\n\n# When set to false, the `(pixi)` prefix in the shell prompt is removed.\n# This applies to the `pixi shell` subcommand.\n# You can override this from the CLI with `--change-ps1`.\nchange_ps1 = true\n\n# When set to true, the TLS certificates are not verified. Note that this is a\n# security risk and should only be used for testing purposes or internal networks.\n# You can override this from the CLI with `--tls-no-verify`.\ntls_no_verify = false\n\n# Override from where the authentication information is loaded.\n# Usually we try to use the keyring to load authentication data from, and only use a JSON\n# file as fallback. This option allows you to force the use of a JSON file.\n# Read more in the authentication section.\nauthentication_override_file = \"/path/to/your/override.json\"\n
    "},{"location":"advanced/multi_platform_configuration/","title":"Multi platform config","text":"

    Pixi's vision includes being supported on all major platforms. Sometimes that needs some extra configuration to work well. On this page, you will learn what you can configure to align better with the platform you are making your application for.

    Here is an example pixi.toml that highlights some of the features:

    pixi.toml
    [project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
    "},{"location":"advanced/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"

    The project.platforms defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lockfile.

Running pixi install on a platform that is not configured will warn the user that the project is not set up for that platform:

    \u276f pixi install\n  \u00d7 the project is not configured for your current platform\n   \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n   \u00b7             \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n   \u00b7                             \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n   \u2570\u2500\u2500\u2500\u2500\n  help: The project needs to be configured to support your platform (linux-64).\n
    "},{"location":"advanced/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"

With the target specifier, you can overwrite the original configuration specifically for a single platform. If a target specifier references a platform that is not listed in project.platforms, pixi will throw an error.

    "},{"location":"advanced/multi_platform_configuration/#dependencies","title":"Dependencies","text":"

    It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.

    pixi.toml
    [dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n

In the above example, we specify that we depend on msmpi only on Windows. We also specifically want python 3.8 when installing on Windows. This will overwrite the dependencies from the generic set of dependencies. This will not touch any of the other platforms.

You can use pixi's CLI to add these dependencies to the pixi.toml

    pixi add --platform win-64 posix\n

    This also works for the host and build dependencies.

    pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n

This results in the following.

    pixi.toml
    [target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
    "},{"location":"advanced/multi_platform_configuration/#activation","title":"Activation","text":"

Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your project. Generated activation scripts often fall into this category: on Unix the default scripts are bash scripts, while on Windows they are bat files.

    To deal with this, you can define your activation scripts using the target definition.

    pixi.toml

    [activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
When this project is run on win-64, it will only execute the target scripts, not the scripts specified in the default activation.scripts

    "},{"location":"design_proposals/multi_environment_proposal/","title":"Proposal Design: Multi Environment Support","text":""},{"location":"design_proposals/multi_environment_proposal/#objective","title":"Objective","text":"

    The aim is to introduce an environment set mechanism in the pixi package manager. This mechanism will enable clear, conflict-free management of dependencies tailored to specific environments, while also maintaining the integrity of fixed lockfiles.

    "},{"location":"design_proposals/multi_environment_proposal/#motivating-example","title":"Motivating Example","text":"

    There are multiple scenarios where multiple environments are useful.

    This prepares pixi for use in large projects with multiple use-cases, multiple developers and different CI needs.

    "},{"location":"design_proposals/multi_environment_proposal/#design-considerations","title":"Design Considerations","text":"
1. User-friendliness: Pixi is a user-focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
2. Keep it simple: Not understanding the multiple environments feature shouldn't prevent a user from using pixi. The feature should be \"invisible\" to the non-multi-env use-cases.
3. No Automatic Combinatorics: To keep the dependency resolution process manageable, the solution should avoid a combinatorial explosion of dependency sets; environments are user-defined rather than automatically inferred from a matrix of features.
    4. Single environment Activation: The design should allow only one environment to be active at any given time, simplifying the resolution process and preventing conflicts.
    5. Fixed Lockfiles: It's crucial to preserve fixed lockfiles for consistency and predictability. Solutions must ensure reliability not just for authors but also for end-users, particularly at the time of lockfile creation.
    "},{"location":"design_proposals/multi_environment_proposal/#proposed-solution","title":"Proposed Solution","text":"

    Important

    This is a proposal, not a final design. The proposal is open for discussion and will be updated based on the feedback.

    "},{"location":"design_proposals/multi_environment_proposal/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"

Introduce environment sets into the pixi.toml; these describe environments based on features. Introduce features into the pixi.toml that can describe parts of environments. As an environment goes beyond just dependencies, a feature can include the following fields:

    Default features
    [dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
    Different dependencies per feature
    [feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
    Full set of environment modification in one feature
    [feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwrite, so the default channels are still used.\n# Using the priority the concatenation is controlled, default is 0, the default channels are used last.\n# Highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}] # Results in:  [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
    Define tasks as defaults of an environment
    [feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environments test test`\n

    The environment definition should contain the following fields:

    Creating environments from features
    [environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
    Testing a production environment with additional dependencies
    [environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
    Creating environments without a default environment
    [dependencies]\n# Keep empty or undefined to create an empty environment.\n\n[feature.base.dependencies]\npython = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom default\ndefault = [\"base\"]\n# Create a custom environment which only has the `lint` feature as the default feature is empty.\nlint = [\"lint\"]\n
    "},{"location":"design_proposals/multi_environment_proposal/#lockfile-structure","title":"Lockfile Structure","text":"

Within the pixi.lock file, a package may now include an additional environments field, specifying the environments to which it belongs. To avoid duplication, a package's environments field may contain multiple environments, keeping the lockfile minimal in size.

    - platform: linux-64\n  name: pre-commit\n  version: 3.3.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n  ...:\n- platform: linux-64\n  name: python\n  version: 3.9.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n    - py39\n    - default\n  ...:\n

    "},{"location":"design_proposals/multi_environment_proposal/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"

Users can manually activate the desired environment via the command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user, the CLI would look like this:

    Default behavior
    pixi run python\n# Runs python in the `default` environment\n
Activating a specific environment
    pixi run -e test pytest\npixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n

    Activating a shell in an environment

    pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
    Running any command in an environment
    pixi run -e test any_command\n# Runs any_command in the `test` environment which doesn't require to be predefined as a task.\n

    Interactive selection of environments if task is in multiple environments
    # In the scenario where test is a task in multiple environments, interactive selection should be used.\npixi run test\n# Which env?\n# 1. test\n# 2. test39\n
    "},{"location":"design_proposals/multi_environment_proposal/#important-links","title":"Important links","text":""},{"location":"design_proposals/multi_environment_proposal/#real-world-example-use-cases","title":"Real world example use cases","text":"Polarify test setup

In polarify they want to test multiple Python versions combined with multiple versions of polars. This is currently done using a matrix in GitHub Actions, which can be replaced by multiple environments.

    pixi.toml
    [project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
    .github/workflows/test.yml
    jobs:\n  tests-per-env:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        environment: [py311, py312]\n    steps:\n    - uses: actions/checkout@v4\n      - uses: prefix-dev/setup-pixi@v0.5.1\n        with:\n          environments: ${{ matrix.environment }}\n      - name: Run tasks\n        run: |\n          pixi run --environment ${{ matrix.environment }} test\n  tests-with-multiple-envs:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - uses: prefix-dev/setup-pixi@v0.5.1\n      with:\n       environments: pl017 pl018\n    - run: |\n        pixi run -e pl017 test\n        pixi run -e pl018 test\n
    Test vs Production example

This is an example of a project that has a test feature and a prod environment. The prod environment is a production environment that contains the run dependencies. The test feature is a set of dependencies and tasks that we want to put on top of the previously solved prod environment. This is a common use case where we want to test the production environment with additional dependencies.

    pixi.toml

    [project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI you would run the following commands:
    pixi run postinstall-e && pixi run test\n
    Locally you would run the following command:
    pixi run postinstall-e && pixi run dev\n

    Then in a Dockerfile you would run the following command: Dockerfile

    FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n

    Multiple machines from one project

This is an example of an ML project that should be executable on machines that support cuda or mlx, as well as on machines that support neither; the cpu feature covers the latter case. pixi.toml

    [project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.5.0,<0.6.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n

    Running the project on a cuda machine
    pixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
    Running the project with mlx
    pixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
    Running the project on a machine without cuda or mlx
    pixi run train-model\n
    "},{"location":"examples/cpp-sdl/","title":"SDL example","text":"

    The cpp-sdl example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/cpp-sdl\n

    Run the start command

    pixi run start\n

Using the depends_on feature, you only needed to run the start task, but under the hood it is running the following tasks.

    # Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
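A task chain like this can be declared in pixi.toml with depends_on. The sketch below uses the task names from this example, but the commands shown are illustrative assumptions; the real commands live in the example's pixi.toml:

```toml
# Sketch only: configure/build/start mirror the example, commands are illustrative.
[tasks]
configure = "cmake -G Ninja -S . -B .build"
build = { cmd = "ninja -C .build", depends_on = ["configure"] }
start = { cmd = ".build/bin/sdl_example", depends_on = ["build"] }
```

With this, `pixi run start` first runs its dependencies in order before executing its own command.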
    "},{"location":"examples/opencv/","title":"Opencv example","text":"

    The opencv example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/opencv\n
    "},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"

    Run the start command to start the face detection algorithm.

    pixi run start\n

    The screen that starts should look like this:

Check out webcame_capture.py to see how we detect a face.

    "},{"location":"examples/opencv/#camera-calibration","title":"Camera Calibration","text":"

    Next to face recognition, a camera calibration example is also included.

    You'll need a checkerboard for this to work. Print this:

    Then run

    pixi run calibrate\n

To take a picture for calibration, press SPACE. Do this approximately 10 times with the chessboard in view of the camera.

After that, press ESC, which will start the calibration.

    When the calibration is done, the camera will be used again to find the distance to the checkerboard.

    "},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"

    The nav2 example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/ros2-nav2\n

    Run the start command

    pixi run start\n
    "},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"

    You can use PyCharm with pixi environments by using the conda shim provided by the pixi-pycharm package.

    Windows support

    Windows is currently not supported, see pavelzw/pixi-pycharm #5. Only Linux and macOS are supported.

    "},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"

    To get started, add pixi-pycharm to your pixi project.

    pixi add pixi-pycharm\n

    This will ensure that the conda shim is installed in your project's environment.

    could not determine any available versions for pixi-pycharm on win-64

    If you get the error could not determine any available versions for pixi-pycharm on win-64 when running pixi add pixi-pycharm (even when you're not on Windows), this is because the package is not available on Windows and pixi tries to solve the environment for all platforms. If you still want to use it in your pixi project (and are on Linux/macOS), you can add the following to your pixi.toml:

    [target.unix.dependencies]\npixi-pycharm = \"*\"\n

    This will tell pixi to only use this dependency on unix platforms.

    Having pixi-pycharm installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda file in your pixi environment. You can get the path using the following command:

    pixi run 'echo $CONDA_PREFIX/libexec/conda'\n

    This is an executable that tricks PyCharm into thinking it's the proper conda executable. Under the hood it redirects all calls to the corresponding pixi equivalent.

    Use the conda shim from this pixi project

    Please make sure that this is the conda shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.

    Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.

    PyCharm should now be able to show you the installed packages as well.

    You can now run your programs and tests as usual.

    "},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"

If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.

    You can then specify the corresponding environment in the bottom right corner of the PyCharm window.

    "},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"

When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also happen that you have multiple environments with the same name.

    It is recommended to rename the environments to something unique.

    "},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"

    Logs are written to ~/.cache/pixi-pycharm.log. You can use them to debug problems. Please attach the logs when filing a bug report.

    "}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Getting Started","text":"

Pixi is a package management tool for developers. It allows the developer to install libraries and applications in a reproducible way. Use pixi cross-platform: on Windows, macOS, and Linux.

    "},{"location":"#installation","title":"Installation","text":"

    To install pixi you can run the following command in your terminal:

    Linux & macOSWindows
    curl -fsSL https://pixi.sh/install.sh | bash\n

    The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to ~/.pixi/bin. If this directory does not already exist, the script will create it.

    The script will also update your ~/.bash_profile to include ~/.pixi/bin in your PATH, allowing you to invoke the pixi command from anywhere.
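If your shell's startup file was not updated automatically (for example, because you use a shell other than bash), you can add pixi to your PATH yourself. This sketch assumes the default install location ~/.pixi/bin:

```shell
# Manually prepend pixi's default install directory to PATH.
# Add this line to your shell's startup file (e.g. ~/.zshrc) to make it permanent.
export PATH="$HOME/.pixi/bin:$PATH"
```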

    PowerShell:

    iwr -useb https://pixi.sh/install.ps1 | iex\n
    winget:
    winget install prefix-dev.pixi\n
    The above invocation will automatically download the latest version of pixi, extract it, and move the pixi binary to LocalAppData/pixi/bin. If this directory does not already exist, the script will create it.

    The command will also automatically add LocalAppData/pixi/bin to your path allowing you to invoke pixi from anywhere.

    Tip

You might need to restart your terminal or source your shell configuration file for the changes to take effect.

    "},{"location":"#autocompletion","title":"Autocompletion","text":"

    To get autocompletion run:

    Linux & macOSWindows
    # Pick your shell (use `echo $SHELL` to find the shell you are using.):\necho 'eval \"$(pixi completion --shell bash)\"' >> ~/.bashrc\necho 'eval \"$(pixi completion --shell zsh)\"' >> ~/.zshrc\necho 'pixi completion --shell fish | source' >> ~/.config/fish/config.fish\necho 'eval (pixi completion --shell elvish | slurp)' >> ~/.elvish/rc.elv\n

    PowerShell:

    Add-Content -Path $PROFILE -Value '(& pixi completion --shell powershell) | Out-String | Invoke-Expression'\n

    Failure because no profile file exists

    Make sure your profile file exists, otherwise create it with:

    New-Item -Path $PROFILE -ItemType File -Force\n

    And then restart the shell or source the shell config file.

    "},{"location":"#alternative-installation-methods","title":"Alternative installation methods","text":"

Although we recommend installing pixi through the above method, we also provide additional installation methods.

    "},{"location":"#homebrew","title":"Homebrew","text":"

    Pixi is available via homebrew. To install pixi via homebrew simply run:

    brew install pixi\n
    "},{"location":"#windows-installer","title":"Windows installer","text":"

    We provide an msi installer on our GitHub releases page. The installer will download pixi and add it to the path.

    "},{"location":"#install-from-source","title":"Install from source","text":"

    pixi is 100% written in Rust, and therefore it can be installed, built and tested with cargo. To start using pixi from a source build run:

    cargo install --locked --git https://github.com/prefix-dev/pixi.git\n

    or when you want to make changes use:

    cargo build\ncargo test\n

If you have any issues building because of the dependency on rattler, check out its compile steps.

    "},{"location":"#update","title":"Update","text":"

Updating is as simple as installing: rerunning the installation script gets you the latest version.

    Linux & macOSWindows

    curl -fsSL https://pixi.sh/install.sh | bash\n
    Or get a specific pixi version using:
    export PIXI_VERSION=vX.Y.Z && curl -fsSL https://pixi.sh/install.sh | bash\n

    PowerShell:

    iwr -useb https://pixi.sh/install.ps1 | iex\n
    Or get a specific pixi version using: PowerShell:
    $Env:PIXI_VERSION=\"vX.Y.Z\"; iwr -useb https://pixi.sh/install.ps1 | iex\n

    Note

If you used a package manager like brew, mamba, conda, or paru to install pixi, use its built-in update mechanism, e.g. brew upgrade pixi.

    "},{"location":"#uninstall","title":"Uninstall","text":"

    To uninstall pixi from your system, simply remove the binary.

    Linux & macOSWindows
    rm ~/.pixi/bin/pixi\n
    $PIXI_BIN = \"$Env:LocalAppData\\pixi\\bin\\pixi\"; Remove-Item -Path $PIXI_BIN\n

    After this command, you can still use the tools you installed with pixi. To remove these as well, just remove the whole ~/.pixi directory and remove the directory from your path.

    "},{"location":"Community/","title":"Community","text":"

    When you want to show your users and contributors that they can use pixi in your repo, you can use the following badge:

    [![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json)](https://pixi.sh)\n

    Customize your badge

    To further customize the look and feel of your badge, you can add &style=<custom-style> at the end of the URL. See the documentation on shields.io for more info.
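For example, appending &style=flat-square (one of the styles shields.io supports) to the badge URL shown above gives:

```
[![Pixi Badge](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/prefix-dev/pixi/main/assets/badge/v0.json&style=flat-square)](https://pixi.sh)
```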

    "},{"location":"Community/#built-using-pixi","title":"Built using Pixi","text":" "},{"location":"FAQ/","title":"Frequently asked questions","text":""},{"location":"FAQ/#what-is-the-difference-with-conda-mamba-poetry-pip","title":"What is the difference with conda, mamba, poetry, pip","text":"Tool Installs python Builds packages Runs predefined tasks Has lockfiles builtin Fast Use without python Conda \u2705 \u274c \u274c \u274c \u274c \u274c Mamba \u2705 \u274c \u274c \u274c \u2705 \u2705 Pip \u274c \u2705 \u274c \u274c \u274c \u274c Pixi \u2705 \ud83d\udea7 \u2705 \u2705 \u2705 \u2705 Poetry \u274c \u2705 \u274c \u2705 \u274c \u274c"},{"location":"FAQ/#why-the-name-pixi","title":"Why the name pixi","text":"

Starting with the name prefix, we iterated until we had a name that was easy to pronounce, spell and remember, and that wasn't already used by a CLI tool, unlike px, pex, pax, etc. We think it sparks curiosity and fun; if you don't agree, I'm sorry, but you can always alias it to whatever you like.

    Linux & macOSWindows
    alias not_pixi=\"pixi\"\n

    PowerShell:

    New-Alias -Name not_pixi -Value pixi\n

    "},{"location":"FAQ/#where-is-pixi-build","title":"Where is pixi build","text":"

    TL;DR: It's coming we promise!

    pixi build is going to be the subcommand that can generate a conda package out of a pixi project. This requires a solid build tool which we're creating with rattler-build which will be used as a library in pixi.

    "},{"location":"basic_usage/","title":"Basic usage","text":"

    Ensure you've got pixi set up. If running pixi doesn't show the help, see the getting started guide.

    pixi\n

    Initialize a new project and navigate to the project directory.

    pixi init pixi-hello-world\ncd pixi-hello-world\n

    Add the dependencies you would like to use.

    pixi add python\n

    Create a file named hello_world.py in the directory and paste the following code into the file.

    hello_world.py
    def hello():\n    print(\"Hello World, to the new revolution in package management.\")\n\nif __name__ == \"__main__\":\n    hello()\n

    Run the code inside the environment.

    pixi run python hello_world.py\n

    You can also put this run command in a task.

    pixi task add hello python hello_world.py\n

    After adding the task, you can run the task using its name.

    pixi run hello\n

    Use the shell command to activate the environment and start a new shell in there.

    pixi shell\npython\nexit\n

    You've just learned the basic features of pixi:

    1. initializing a project
    2. adding a dependency
    3. adding a task, and executing it
    4. running a program

    Feel free to play around with what you just learned like adding more tasks, dependencies or code.

    Happy coding!
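For reference, after the steps above the generated pixi.toml will look roughly like this (the channels, platforms, and pinned python version depend on your system and pixi version):

```toml
[project]
name = "pixi-hello-world"
channels = ["conda-forge"]
platforms = ["linux-64"]

[tasks]
hello = "python hello_world.py"

[dependencies]
python = ">=3.12"  # pixi add pins the version it resolved; yours may differ
```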

    "},{"location":"basic_usage/#use-pixi-as-a-global-installation-tool","title":"Use pixi as a global installation tool","text":"

    Use pixi to install tools on your machine.

    Some notable examples:

    # Awesome cross shell prompt, huge tip when using pixi!\npixi global install starship\n\n# Want to try a different shell?\npixi global install fish\n\n# Install other prefix.dev tools\npixi global install rattler-build\n\n# Install a linter you want to use in multiple projects.\npixi global install ruff\n
    "},{"location":"basic_usage/#use-pixi-in-github-actions","title":"Use pixi in GitHub Actions","text":"

    You can use pixi in GitHub Actions to install dependencies and run commands. It supports automatic caching of your environments.

    - uses: prefix-dev/setup-pixi@v0.5.1\n- run: pixi run cowpy \"Thanks for using pixi\"\n

    See the GitHub Actions for more details.

    "},{"location":"cli/","title":"Commands","text":""},{"location":"cli/#global-options","title":"Global options","text":""},{"location":"cli/#init","title":"init","text":"

    This command is used to create a new project. It initializes a pixi.toml file and also prepares a .gitignore to prevent the environment from being added to git.

    "},{"location":"cli/#arguments","title":"Arguments","text":"
    1. [PATH]: Where to place the project (defaults to current path) [default: .]
    "},{"location":"cli/#options","title":"Options","text":"

    Importing an environment.yml

    When importing an environment, the pixi.toml will be created with the dependencies from the environment file. The pixi.lock will be created when you install the environment. We don't support git+ URLs as dependencies for pip packages, and for the defaults channel we use main, r and msys2 as the default channels.

    pixi init myproject\npixi init ~/myproject\npixi init  # Initializes directly in the current directory.\npixi init --channel conda-forge --channel bioconda myproject\npixi init --platform osx-64 --platform linux-64 myproject\npixi init --import environment.yml\n
    "},{"location":"cli/#add","title":"add","text":"

    Adds dependencies to the pixi.toml. It will only add the package if its version constraint works with the rest of the dependencies in the project. More info on multi-platform configuration.

    "},{"location":"cli/#arguments_1","title":"Arguments","text":"
    1. <SPECS>: The package(s) to add, space separated. The version constraint is optional.
    "},{"location":"cli/#options_1","title":"Options","text":"
    pixi add numpy\npixi add numpy pandas \"pytorch>=1.8\"\npixi add \"numpy>=1.22,<1.24\"\npixi add --manifest-path ~/myproject/pixi.toml numpy\npixi add --host \"python>=3.9.0\"\npixi add --build cmake\npixi add --pypi requests[security]\npixi add --platform osx-64 --build clang\npixi add --no-install numpy\npixi add --no-lockfile-update numpy\npixi add --feature featurex numpy\n
    "},{"location":"cli/#install","title":"install","text":"

    Installs all dependencies specified in the lockfile pixi.lock, which is generated on pixi add or when you manually change the pixi.toml file and run pixi install.

    "},{"location":"cli/#options_2","title":"Options","text":"
    pixi install\npixi install --manifest-path ~/myproject/pixi.toml\npixi install --frozen\npixi install --locked\n
    "},{"location":"cli/#run","title":"run","text":"

    The run command first checks if the environment is ready to use. If you haven't run pixi install, the run command will do that for you. The custom tasks defined in the pixi.toml are also available through the run command.

    You cannot run pixi run source setup.bash, as source is not among the deno_task_shell commands and is not an executable.

    "},{"location":"cli/#arguments_2","title":"Arguments","text":"
    1. [TASK]... The task you want to run in the project's environment; this can also be a normal command. All arguments after the task will be passed to the task.
    "},{"location":"cli/#options_3","title":"Options","text":"
    pixi run python\npixi run cowpy \"Hey pixi user\"\npixi run --manifest-path ~/myproject/pixi.toml python\npixi run --frozen python\npixi run --locked python\n# If you have specified a custom task in the pixi.toml you can run it with run as well\npixi run build\n# Extra arguments will be passed to the tasks command.\npixi run task argument1 argument2\n\n# If you have multiple environments you can select the right one with the --environment flag.\npixi run --environment cuda python\n

    Info

    In pixi, the deno_task_shell is the underlying runner of the run command. Check out their documentation for the syntax and available commands. This is done so that run commands work across all platforms.

    Cross environment tasks

    If you're using the depends_on feature of the tasks, the tasks will be run in the order you specified them. depends_on can be used across environments, e.g. if you have this pixi.toml:

    pixi.toml
    [tasks]\nstart = { cmd = \"python start.py\", depends_on = [\"build\"] }\n\n[feature.build.tasks]\nbuild = \"cargo build\"\n[feature.build.dependencies]\nrust = \">=1.74\"\n\n[environments]\nbuild = [\"build\"]\n

    Then you're able to run the build from the build environment and start from the default environment by simply calling:

    pixi run start\n
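Conceptually, resolving depends_on means running each task's dependencies before the task itself. The sketch below (a hypothetical helper, not pixi's real implementation) mirrors the pixi.toml above:

```python
# Hypothetical sketch of depends_on ordering: dependencies run first.
tasks = {
    "start": {"cmd": "python start.py", "depends_on": ["build"]},
    "build": {"cmd": "cargo build", "depends_on": []},
}

def run_order(name, seen=None):
    """Return the commands to execute, dependencies first."""
    seen = set() if seen is None else seen
    order = []
    for dep in tasks[name]["depends_on"]:
        if dep not in seen:
            seen.add(dep)
            order.extend(run_order(dep, seen))
    order.append(tasks[name]["cmd"])
    return order

print(run_order("start"))  # ['cargo build', 'python start.py']
```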

    "},{"location":"cli/#remove","title":"remove","text":"

    Removes dependencies from the pixi.toml.

    "},{"location":"cli/#arguments_3","title":"Arguments","text":"
    1. <DEPS>...: List of dependencies you wish to remove from the project.
    "},{"location":"cli/#options_4","title":"Options","text":"
    pixi remove numpy\npixi remove numpy pandas pytorch\npixi remove --manifest-path ~/myproject/pixi.toml numpy\npixi remove --host python\npixi remove --build cmake\npixi remove --pypi requests\npixi remove --platform osx-64 --build clang\npixi remove --feature featurex clang\npixi remove --feature featurex --platform osx-64 clang\npixi remove --feature featurex --platform osx-64 --build clang\n
    "},{"location":"cli/#task","title":"task","text":"

    If you want to make a shorthand for a specific command you can add a task for it.

    "},{"location":"cli/#options_5","title":"Options","text":""},{"location":"cli/#task-add","title":"task add","text":"

    Add a task to the pixi.toml, use --depends-on to add tasks you want to run before this task, e.g. build before an execute task.

    "},{"location":"cli/#arguments_4","title":"Arguments","text":"
    1. <NAME>: The name of the task.
    2. <COMMAND>: The command to run. This can be more than one word.

    Info

    If you are using $ for environment variables, they will be resolved before being added to the task. If you want to use $ in the task you need to escape it with a \, e.g. echo \$HOME.
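For example, with a hypothetical task named show-home:

```
pixi task add show-home "echo \$HOME"
```

This stores show-home = "echo $HOME" in the pixi.toml, so $HOME is expanded when the task runs rather than when the task is added.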

    "},{"location":"cli/#options_6","title":"Options","text":"
    pixi task add cow cowpy \"Hello User\"\npixi task add tls ls --cwd tests\npixi task add test cargo t --depends-on build\npixi task add build-osx \"METAL=1 cargo build\" --platform osx-64\npixi task add train python train.py --feature cuda\n

    This adds the following to the pixi.toml:

    [tasks]\ncow = \"cowpy \\\"Hello User\\\"\"\ntls = { cmd = \"ls\", cwd = \"tests\" }\ntest = { cmd = \"cargo t\", depends_on = [\"build\"] }\n\n[target.osx-64.tasks]\nbuild-osx = \"METAL=1 cargo build\"\n\n[feature.cuda.tasks]\ntrain = \"python train.py\"\n

    Which you can then run with the run command:

    pixi run cow\n# Extra arguments will be passed to the tasks command.\npixi run test --test test1\n
    "},{"location":"cli/#task-remove","title":"task remove","text":"

    Remove the task from the pixi.toml

    "},{"location":"cli/#arguments_5","title":"Arguments","text":""},{"location":"cli/#options_7","title":"Options","text":"
    pixi task remove cow\npixi task remove --platform linux-64 test\npixi task remove --feature cuda task\n
    "},{"location":"cli/#task-alias","title":"task alias","text":"

    Create an alias for a task.

    "},{"location":"cli/#arguments_6","title":"Arguments","text":"
    1. <ALIAS>: The alias name
    2. <DEPENDS_ON>: The names of the tasks you want to execute on this alias, order counts, first one runs first.
    "},{"location":"cli/#options_8","title":"Options","text":"
    pixi task alias test-all test-py test-cpp test-rust\npixi task alias --platform linux-64 test test-linux\npixi task alias moo cow\n
    "},{"location":"cli/#task-list","title":"task list","text":"

    List all tasks in the project.

    "},{"location":"cli/#options_9","title":"Options","text":"
    pixi task list\npixi task list --environment cuda\npixi task list --summary\n
    "},{"location":"cli/#list","title":"list","text":"

    List the project's packages. Highlighted packages are explicit dependencies.

    "},{"location":"cli/#options_10","title":"Options","text":"

    pixi list\npixi list --json-pretty\npixi list --sort-by size\npixi list --platform win-64\npixi list --environment cuda\npixi list --frozen\npixi list --locked\npixi list --no-install\n
    Output will look like this, where python will be green as it is the package that was explicitly added to the pixi.toml:

    \u279c pixi list\n Package           Version     Build               Size       Kind   Source\n _libgcc_mutex     0.1         conda_forge         2.5 KiB    conda  _libgcc_mutex-0.1-conda_forge.tar.bz2\n _openmp_mutex     4.5         2_gnu               23.1 KiB   conda  _openmp_mutex-4.5-2_gnu.tar.bz2\n bzip2             1.0.8       hd590300_5          248.3 KiB  conda  bzip2-1.0.8-hd590300_5.conda\n ca-certificates   2023.11.17  hbcca054_0          150.5 KiB  conda  ca-certificates-2023.11.17-hbcca054_0.conda\n ld_impl_linux-64  2.40        h41732ed_0          688.2 KiB  conda  ld_impl_linux-64-2.40-h41732ed_0.conda\n libexpat          2.5.0       hcb278e6_1          76.2 KiB   conda  libexpat-2.5.0-hcb278e6_1.conda\n libffi            3.4.2       h7f98852_5          56.9 KiB   conda  libffi-3.4.2-h7f98852_5.tar.bz2\n libgcc-ng         13.2.0      h807b86a_4          755.7 KiB  conda  libgcc-ng-13.2.0-h807b86a_4.conda\n libgomp           13.2.0      h807b86a_4          412.2 KiB  conda  libgomp-13.2.0-h807b86a_4.conda\n libnsl            2.0.1       hd590300_0          32.6 KiB   conda  libnsl-2.0.1-hd590300_0.conda\n libsqlite         3.44.2      h2797004_0          826 KiB    conda  libsqlite-3.44.2-h2797004_0.conda\n libuuid           2.38.1      h0b41bf4_0          32.8 KiB   conda  libuuid-2.38.1-h0b41bf4_0.conda\n libxcrypt         4.4.36      hd590300_1          98 KiB     conda  libxcrypt-4.4.36-hd590300_1.conda\n libzlib           1.2.13      hd590300_5          60.1 KiB   conda  libzlib-1.2.13-hd590300_5.conda\n ncurses           6.4         h59595ed_2          863.7 KiB  conda  ncurses-6.4-h59595ed_2.conda\n openssl           3.2.0       hd590300_1          2.7 MiB    conda  openssl-3.2.0-hd590300_1.conda\n python            3.12.1      hab00c5b_1_cpython  30.8 MiB   conda  python-3.12.1-hab00c5b_1_cpython.conda\n readline          8.2         h8228510_1          274.9 KiB  conda  readline-8.2-h8228510_1.conda\n tk                8.6.13      
noxft_h4845f30_101  3.2 MiB    conda  tk-8.6.13-noxft_h4845f30_101.conda\n tzdata            2023d       h0c530f3_0          116.8 KiB  conda  tzdata-2023d-h0c530f3_0.conda\n xz                5.2.6       h166bdaf_0          408.6 KiB  conda  xz-5.2.6-h166bdaf_0.tar.bz2\n
    "},{"location":"cli/#shell","title":"shell","text":"

    This command starts a new shell in the project's environment. To exit the pixi shell, simply run exit.

    "},{"location":"cli/#options_11","title":"Options","text":"
    pixi shell\nexit\npixi shell --manifest-path ~/myproject/pixi.toml\nexit\npixi shell --frozen\nexit\npixi shell --locked\nexit\npixi shell --environment cuda\nexit\n
    "},{"location":"cli/#shell-hook","title":"shell-hook","text":"

    This command prints the activation script of an environment.

    "},{"location":"cli/#options_12","title":"Options","text":"

    pixi shell-hook\npixi shell-hook --shell bash\npixi shell-hook --shell zsh\npixi shell-hook -s powershell\npixi shell-hook --manifest-path ~/myproject/pixi.toml\npixi shell-hook --frozen\npixi shell-hook --locked\npixi shell-hook --environment cuda\n
    Example use-case: when you want to get rid of the pixi executable in a Docker container.
    pixi shell-hook --shell bash > /etc/profile.d/pixi.sh\nrm ~/.pixi/bin/pixi # Now the environment will be activated without the need for the pixi executable.\n

    "},{"location":"cli/#search","title":"search","text":"

    Search for a package; the output lists the latest version of the package.

    "},{"location":"cli/#arguments_7","title":"Arguments","text":"
    1. <PACKAGE>: Name of package to search, it's possible to use wildcards (*).
    "},{"location":"cli/#options_13","title":"Options","text":"
    pixi search pixi\npixi search --limit 30 \"py*\"\n# search in a different channel and for a specific platform\npixi search -c robostack --platform linux-64 \"plotjuggler*\"\n
    "},{"location":"cli/#self-update","title":"self-update","text":"

    Update pixi to the latest version or a specific version. If the pixi binary is not found in the default location (e.g. ~/.pixi/bin/pixi), pixi won't update, to prevent breaking the current installation (Homebrew, etc.). This behaviour can be overridden with the --force flag.

    "},{"location":"cli/#options_14","title":"Options","text":"
    pixi self-update\npixi self-update --version 0.13.0\npixi self-update --force\n
    "},{"location":"cli/#info","title":"info","text":"

    Shows helpful information about the pixi installation, cache directories, disk usage, and more. More information here.

    "},{"location":"cli/#options_15","title":"Options","text":"
    pixi info\npixi info --json --extended\n
    "},{"location":"cli/#upload","title":"upload","text":"

    Upload a package to a prefix.dev channel.

    "},{"location":"cli/#arguments_8","title":"Arguments","text":"
    1. <HOST>: The host + channel to upload to.
    2. <PACKAGE_FILE>: The package file to upload.
    pixi upload repo.prefix.dev/my_channel my_package.conda\n
    "},{"location":"cli/#auth","title":"auth","text":"

    This command is used to authenticate the user's access to remote hosts such as prefix.dev or anaconda.org for private channels.

    "},{"location":"cli/#auth-login","title":"auth login","text":"

    Store authentication information for given host.

    Tip

    The host is the real hostname, not a channel.

    "},{"location":"cli/#arguments_9","title":"Arguments","text":"
    1. <HOST>: The host to authenticate with.
    "},{"location":"cli/#options_16","title":"Options","text":"
    pixi auth login repo.prefix.dev --token pfx_JQEV-m_2bdz-D8NSyRSaNdHANx0qHjq7f2iD\npixi auth login anaconda.org --conda-token ABCDEFGHIJKLMNOP\npixi auth login https://myquetz.server --user john --password xxxxxx\n
    "},{"location":"cli/#auth-logout","title":"auth logout","text":"

    Remove authentication information for a given host.

    "},{"location":"cli/#arguments_10","title":"Arguments","text":"
    1. <HOST>: The host to authenticate with.
    pixi auth logout <HOST>\npixi auth logout repo.prefix.dev\npixi auth logout anaconda.org\n
    "},{"location":"cli/#global","title":"global","text":"

    Global is the main entry point for the part of pixi that executes on the global (system) level.

    Tip

    Binaries and environments installed globally are stored in ~/.pixi by default, this can be changed by setting the PIXI_HOME environment variable.

    "},{"location":"cli/#global-install","title":"global install","text":"

    This command installs package(s) into its own environment and adds the binary to PATH, allowing you to access it anywhere on your system without activating the environment.

    "},{"location":"cli/#arguments_11","title":"Arguments","text":"

    1. <PACKAGE>: The package(s) to install; this can also include a version constraint.

    "},{"location":"cli/#options_17","title":"Options","text":"
    pixi global install ruff\n# multiple packages can be installed at once\npixi global install starship rattler-build\n# specify the channel(s)\npixi global install --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global install -c conda-forge -c bioconda trackplot\n\n# Support full conda matchspec\npixi global install python=3.9.*\npixi global install \"python [version='3.11.0', build_number=1]\"\npixi global install \"python [version='3.11.0', build=he550d4f_1_cpython]\"\npixi global install python=3.11.0=h10a6764_1_cpython\n

    After using global install, you can use the package you installed anywhere on your system.

    "},{"location":"cli/#global-list","title":"global list","text":"

    This command shows the currently installed global environments, including the binaries that come with them. A globally installed package/environment can contain multiple binaries, which will all be listed in the command output. Here is an example of a few installed packages:

    > pixi global list\nGlobal install location: /home/hanabi/.pixi\n\u251c\u2500\u2500 bat 0.24.0\n|   \u2514\u2500 exec: bat\n\u251c\u2500\u2500 conda-smithy 3.31.1\n|   \u2514\u2500 exec: feedstocks, conda-smithy\n\u251c\u2500\u2500 rattler-build 0.13.0\n|   \u2514\u2500 exec: rattler-build\n\u251c\u2500\u2500 ripgrep 14.1.0\n|   \u2514\u2500 exec: rg\n\u2514\u2500\u2500 uv 0.1.17\n    \u2514\u2500 exec: uv\n
    "},{"location":"cli/#global-upgrade","title":"global upgrade","text":"

    This command upgrades a globally installed package (to the latest version by default).

    "},{"location":"cli/#arguments_12","title":"Arguments","text":"
    1. <PACKAGE>: The package to upgrade.
    "},{"location":"cli/#options_18","title":"Options","text":"
    pixi global upgrade ruff\npixi global upgrade --channel conda-forge --channel bioconda trackplot\n# Or in a more concise form\npixi global upgrade -c conda-forge -c bioconda trackplot\n\n# Conda matchspec is supported\n# You can specify the version to upgrade to when you don't want the latest version\n# or you can even use it to downgrade a globally installed package\npixi global upgrade python=3.10\n
    "},{"location":"cli/#global-upgrade-all","title":"global upgrade-all","text":"

    This command upgrades all globally installed packages to their latest version.

    "},{"location":"cli/#options_19","title":"Options","text":"
    pixi global upgrade-all\npixi global upgrade-all --channel conda-forge --channel bioconda\n# Or in a more concise form\npixi global upgrade-all -c conda-forge -c bioconda trackplot\n
    "},{"location":"cli/#global-remove","title":"global remove","text":"

    Removes a package previously installed into a globally accessible location via pixi global install.

    Use pixi global info to find out what the package name is that belongs to the tool you want to remove.

    "},{"location":"cli/#arguments_13","title":"Arguments","text":"
    1. <PACKAGE>: The package(s) to remove.
    pixi global remove pre-commit\n\n# multiple packages can be removed at once\npixi global remove pre-commit starship\n
    "},{"location":"cli/#project","title":"project","text":"

    This subcommand allows you to modify the project configuration through the command line interface.

    "},{"location":"cli/#options_20","title":"Options","text":""},{"location":"cli/#project-channel-add","title":"project channel add","text":"

    Add channels to the channel list in the project configuration. When you add channels, the channels are tested for existence, added to the lockfile and the environment is reinstalled.

    "},{"location":"cli/#arguments_14","title":"Arguments","text":"
    1. <CHANNEL>: The channels to add, name or URL.
    "},{"location":"cli/#options_21","title":"Options","text":"
    pixi project channel add robostack\npixi project channel add bioconda conda-forge robostack\npixi project channel add file:///home/user/local_channel\npixi project channel add https://repo.prefix.dev/conda-forge\npixi project channel add --no-install robostack\npixi project channel add --feature cuda nvidia\n
    "},{"location":"cli/#project-channel-list","title":"project channel list","text":"

    List the channels in the project file

    "},{"location":"cli/#options_22","title":"Options","text":"
    $ pixi project channel list\nEnvironment: default\n- conda-forge\n\n$ pixi project channel list --urls\nEnvironment: default\n- https://conda.anaconda.org/conda-forge/\n
    "},{"location":"cli/#project-channel-remove","title":"project channel remove","text":"

    Remove channels from the project file.

    "},{"location":"cli/#arguments_15","title":"Arguments","text":"
    1. <CHANNEL>...: The channels to remove, name(s) or URL(s).
    "},{"location":"cli/#options_23","title":"Options","text":"
    pixi project channel remove conda-forge\npixi project channel remove https://conda.anaconda.org/conda-forge/\npixi project channel remove --no-install conda-forge\npixi project channel remove --feature cuda nvidia\n
    "},{"location":"cli/#project-description-get","title":"project description get","text":"

    Get the project description.

    $ pixi project description get\nPackage management made easy!\n
    "},{"location":"cli/#project-description-set","title":"project description set","text":"

    Set the project description.

    "},{"location":"cli/#arguments_16","title":"Arguments","text":"
    1. <DESCRIPTION>: The description to set.
    pixi project description set \"my new description\"\n
    "},{"location":"cli/#project-platform-add","title":"project platform add","text":"

    Adds platform(s) to the project file and updates the lockfile.

    "},{"location":"cli/#arguments_17","title":"Arguments","text":"
    1. <PLATFORM>...: The platforms to add.
    "},{"location":"cli/#options_24","title":"Options","text":"
    pixi project platform add win-64\npixi project platform add --feature test win-64\n
    "},{"location":"cli/#project-platform-list","title":"project platform list","text":"

    List the platforms in the project file.

    $ pixi project platform list\nosx-64\nlinux-64\nwin-64\nosx-arm64\n
    "},{"location":"cli/#project-platform-remove","title":"project platform remove","text":"

    Removes platform(s) from the project file and updates the lockfile.

    "},{"location":"cli/#arguments_18","title":"Arguments","text":"
    1. <PLATFORM>...: The platforms to remove.
    "},{"location":"cli/#options_25","title":"Options","text":"
    pixi project platform remove win-64\npixi project platform remove --feature test win-64\n
    "},{"location":"cli/#project-version-get","title":"project version get","text":"

    Get the project version.

    $ pixi project version get\n0.11.0\n
    "},{"location":"cli/#project-version-set","title":"project version set","text":"

    Set the project version.

    "},{"location":"cli/#arguments_19","title":"Arguments","text":"
    1. <VERSION>: The version to set.
    pixi project version set \"0.13.0\"\n
    "},{"location":"cli/#project-version-majorminorpatch","title":"project version {major|minor|patch}","text":"

    Bump the project version to {MAJOR|MINOR|PATCH}.

    pixi project version major\npixi project version minor\npixi project version patch\n
    1. An up-to-date lockfile means that the dependencies in the lockfile are allowed by the dependencies in the manifest file. For example

      • a pixi.toml with python = \">= 3.11\" is up-to-date with a name: python, version: 3.11.0 in the pixi.lock.
      • a pixi.toml with python = \">= 3.12\" is not up-to-date with a name: python, version: 3.11.0 in the pixi.lock.

      Being up-to-date does not mean that the lockfile holds the latest version available on the channel for the given dependency.

    "},{"location":"configuration/","title":"Configuration","text":"

    The pixi.toml is the pixi project configuration file, also known as the project manifest.

    A TOML file is structured into different tables. This document explains the usage of the different tables. For more technical documentation, check crates.io.

    "},{"location":"configuration/#the-project-table","title":"The project table","text":"

    The minimally required information in the project table is:

    [project]\nname = \"project-name\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n

    "},{"location":"configuration/#name","title":"name","text":"

    The name of the project.

    [project]\nname = \"project-name\"\n

    "},{"location":"configuration/#channels","title":"channels","text":"

    This is a list that defines the channels used to fetch the packages from. If you want to use channels hosted on anaconda.org you only need to use the name of the channel directly.

    [project]\nchannels = [\"conda-forge\", \"robostack\", \"bioconda\", \"nvidia\", \"pytorch\"]\n

    Channels situated on the file system are also supported with absolute file paths:

    [project]\nchannels = [\"conda-forge\", \"file:///home/user/staged-recipes/build_artifacts\"]\n

    To access private or public channels on prefix.dev or Quetz use the url including the hostname:

    [project]\nchannels = [\"conda-forge\", \"https://repo.prefix.dev/channel-name\"]\n

    "},{"location":"configuration/#platforms","title":"platforms","text":"

    Defines the list of platforms that the project supports. Pixi solves the dependencies for all these platforms and puts them in the lockfile (pixi.lock).

    [project]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n
    The available platforms are listed here: link

    "},{"location":"configuration/#version-optional","title":"version (optional)","text":"

    The version of the project. This should be a valid version based on the conda Version Spec. See the version documentation, for an explanation of what is allowed in a Version Spec.

    [project]\nversion = \"1.2.3\"\n

    "},{"location":"configuration/#authors-optional","title":"authors (optional)","text":"

    This is a list of authors of the project.

    [project]\nauthors = [\"John Doe <j.doe@prefix.dev>\", \"Marie Curie <mss1867@gmail.com>\"]\n

    "},{"location":"configuration/#description-optional","title":"description (optional)","text":"

    This should contain a short description of the project.

    [project]\ndescription = \"A simple description\"\n

    "},{"location":"configuration/#license-optional","title":"license (optional)","text":"

    The license as a valid SPDX string (e.g. MIT AND Apache-2.0)

    [project]\nlicense = \"MIT\"\n

    "},{"location":"configuration/#license-file-optional","title":"license-file (optional)","text":"

    Relative path to the license file.

    [project]\nlicense-file = \"LICENSE.md\"\n

    "},{"location":"configuration/#readme-optional","title":"readme (optional)","text":"

    Relative path to the README file.

    [project]\nreadme = \"README.md\"\n

    "},{"location":"configuration/#homepage-optional","title":"homepage (optional)","text":"

    URL of the project homepage.

    [project]\nhomepage = \"https://pixi.sh\"\n

    "},{"location":"configuration/#repository-optional","title":"repository (optional)","text":"

    URL of the project source repository.

    [project]\nrepository = \"https://github.com/prefix-dev/pixi\"\n

    "},{"location":"configuration/#documentation-optional","title":"documentation (optional)","text":"

    URL of the project documentation.

    [project]\ndocumentation = \"https://pixi.sh\"\n

    "},{"location":"configuration/#the-tasks-table","title":"The tasks table","text":"

    Tasks are a way to automate certain custom commands in your project. For example, a lint or format step. Tasks in a pixi project are essentially cross-platform shell commands, with a unified syntax across platforms. For more in-depth information, check the Advanced tasks documentation. Pixi's tasks are run in a pixi environment using pixi run and are executed using the deno_task_shell.

    [tasks]\nsimple = \"echo This is a simple task\"\ncmd = { cmd=\"echo Same as a simple task but now more verbose\"}\ndepending = { cmd=\"echo run after simple\", depends_on=\"simple\"}\nalias = { depends_on=[\"depending\"]}\n
    You can modify this table using pixi task.

    Note

    Specify different tasks for different platforms using the target table

    "},{"location":"configuration/#the-system-requirements-table","title":"The system-requirements table","text":"

    The system requirements are used to define minimal system specifications used during dependency resolution. For example, we can define a unix system with a specific minimal libc version. This will be the minimal system specification for the project. System specifications are directly related to the virtual packages.

    Currently, the specified defaults are the same as conda-lock's implementation:

    LinuxWindowsOsxOsx-arm64 default system requirements for linux
    [system-requirements]\nlinux = \"5.10\"\nlibc = { family=\"glibc\", version=\"2.17\" }\n
    default system requirements for windows
    [system-requirements]\n
    default system requirements for osx
    [system-requirements]\nmacos = \"10.15\"\n
    default system requirements for osx-arm64
    [system-requirements]\nmacos = \"11.0\"\n

    Only if a project requires a different set should you define them.

    For example, when installing environments on old versions of Linux, you may encounter the following error:

    \u00d7 The current system has a mismatching virtual package. The project requires '__linux' to be at least version '5.10' but the system has version '4.12.14'\n
    This suggests that the system requirements for the project should be lowered. To fix this, add the following table to your configuration:
    [system-requirements]\nlinux = \"4.12.14\"\n

    "},{"location":"configuration/#using-cuda-in-pixi","title":"Using Cuda in pixi","text":"

    If you want to use cuda in your project you need to add the following to your system-requirements table:

    [system-requirements]\ncuda = \"11\" # or any other version of cuda you want to use\n
    This informs the solver that CUDA will be available, so it can lock it into the lockfile if needed.

    "},{"location":"configuration/#the-dependencies-tables","title":"The dependencies table(s)","text":"

    This section defines what dependencies you would like to use for your project.

    There are multiple dependencies tables. The default is [dependencies], which are dependencies that are shared across platforms.

    Dependencies are defined using a VersionSpec. A VersionSpec combines a Version with an optional operator.

    Some examples are:

    # Use this exact package version\npackage0 = \"1.2.3\"\n# Use 1.2.3 up to 1.3.0\npackage1 = \"~=1.2.3\"\n# Use greater than 1.2 and lower than or equal to 1.4\npackage2 = \">1.2,<=1.4\"\n# Greater than or equal to 1.2.3, or lower than 1.0.0 (exclusive)\npackage3 = \">=1.2.3|<1.0.0\"\n

    Dependencies can also be defined as a mapping, using a match spec:

    package0 = { version = \">=1.2.3\", channel=\"conda-forge\" }\npackage1 = { version = \">=1.2.3\", build=\"py34_0\" }\n

    Tip

    Dependencies can easily be added using the pixi add command. Running add for an existing dependency will replace it with the newest version it can use.

    Note

    To specify different dependencies for different platforms use the target table

    "},{"location":"configuration/#dependencies","title":"dependencies","text":"

    Add any conda package dependency that you want to install into the environment. Don't forget to add the channel to the project table if you use anything other than conda-forge. Even if a dependency defines its own channel, that channel should still be added to the project.channels list.

    [dependencies]\npython = \">3.9,<=3.11\"\nrust = \"1.72\"\npytorch-cpu = { version = \"~=1.1\", channel = \"pytorch\" }\n
    "},{"location":"configuration/#pypi-dependencies-beta-feature","title":"pypi-dependencies (Beta feature)","text":"Details regarding the PyPI integration

    We use uv, which is a new fast pip replacement written in Rust.

    We integrate uv as a library, so we use the uv resolver, to which we pass the conda packages as 'locked'. This prevents uv from installing these dependencies itself and ensures it uses the exact versions of these packages during resolution. This is unique amongst conda-based package managers, which usually just call pip from a subprocess.

    The uv resolution is included in the lock file directly.

    Pixi directly supports depending on PyPI packages; the PyPA calls a distributed package a 'distribution'. There are source and binary distributions, both of which are supported by pixi. These distributions are installed into the environment after the conda environment has been resolved and installed. PyPI packages are not indexed on prefix.dev but can be viewed on pypi.org.

    Important considerations

    "},{"location":"configuration/#pep404-version-specification","title":"PEP404 Version specification:","text":"

    These dependencies don't follow the conda matchspec specification. The version is a string specification of the version according to PEP 440/PyPA. Additionally, a list of extras can be included, which are essentially optional dependencies. Note that this version is distinct from the conda MatchSpec type. See the example below to see how this is used in practice:

    [dependencies]\n# When using pypi-dependencies, python is needed to resolve pypi dependencies\n# make sure to include this\npython = \">=3.6\"\n\n[pypi-dependencies]\npytest = \"*\"  # This means any version (the wildcard `*` is a pixi addition, not part of the specification)\npre-commit = \"~=3.5.0\" # This is a single version specifier\n# Using the toml map allows the user to add `extras`\nrequests = {version = \">= 2.8.1, ==2.8.*\", extras=[\"security\", \"tests\"]}\n
    Did you know you can use: add --pypi?

    Use the --pypi flag with the add command to quickly add PyPI packages from the CLI. E.g. pixi add --pypi flask

    "},{"location":"configuration/#source-dependencies","title":"Source dependencies","text":"

    The Source Distribution Format (sdist for short) is a source-based format that a package can publish alongside the binary wheel format. Because these distributions need to be built, they need a Python executable to do so. This is why Python needs to be present in the conda environment. Sdists usually depend on system packages to be built, especially when compiling C/C++-based Python bindings. Think, for example, of Python SDL2 bindings depending on the C library SDL2. To help build these dependencies, we activate the conda environment that includes these PyPI dependencies before resolving. This way, when a source distribution depends on gcc, for example, it is used from the conda environment instead of the system.

    "},{"location":"configuration/#host-dependencies","title":"host-dependencies","text":"

    This table contains dependencies that are needed to build your project but which should not be included when your project is installed as part of another project. In other words, these dependencies are available during the build but are no longer available when your project is installed. Dependencies listed in this table are installed for the architecture of the target machine.

    [host-dependencies]\npython = \"~=3.10.3\"\n

    Typical examples of host dependencies are:

    "},{"location":"configuration/#build-dependencies","title":"build-dependencies","text":"

    This table contains dependencies that are needed to build the project. Different from dependencies and host-dependencies these packages are installed for the architecture of the build machine. This enables cross-compiling from one machine architecture to another.

    [build-dependencies]\ncmake = \"~=3.24\"\n

    Typical examples of build dependencies are:

    Info

    The build target refers to the machine that will execute the build. Programs and libraries installed by these dependencies will be executed on the build machine.

    For example, if you compile on a MacBook with an Apple Silicon chip but target Linux x86_64 then your build platform is osx-arm64 and your host platform is linux-64.

    "},{"location":"configuration/#the-activation-table","title":"The activation table","text":"

    If you want to run an activation script inside the environment when doing a pixi run or pixi shell, it can be defined here. The scripts defined in this table will be sourced when the environment is activated using pixi run or pixi shell.

    Note

    The activation scripts are run by the system shell interpreter, as they run before an environment is available. This means they run as cmd.exe on Windows and bash on Linux and macOS (Unix). Only .sh, .bash and .bat files are supported.

    If you have scripts per platform use the target table.

    [activation]\nscripts = [\"env_setup.sh\"]\n# To support windows platforms as well add the following\n[target.win-64.activation]\nscripts = [\"env_setup.bat\"]\n
    "},{"location":"configuration/#the-target-table","title":"The target table","text":"

    The target table allows for platform-specific configuration, letting you define different sets of tasks or dependencies per platform.

    The target table is currently implemented for the following sub-tables:

    The target table is defined using [target.PLATFORM.SUB-TABLE], e.g. [target.linux-64.dependencies].

    The platform can be any of:

    The sub-table can be any of those specified above.

    To make it a bit clearer, let's look at the example below. Currently, pixi combines the top-level tables like dependencies with the target-specific ones into a single set, which, in the case of dependencies, can both add and overwrite dependencies. In the example below, cmake is used for all targets, but on osx-64 or osx-arm64 a different version of python will be selected.

    [dependencies]\ncmake = \"3.26.4\"\npython = \"3.10\"\n\n[target.osx.dependencies]\npython = \"3.11\"\n

    Here are some more examples:

    [target.win-64.activation]\nscripts = [\"setup.bat\"]\n\n[target.win-64.dependencies]\nmsmpi = \"~=10.1.1\"\n\n[target.win-64.build-dependencies]\nvs2022_win-64 = \"19.36.32532\"\n\n[target.win-64.tasks]\ntmp = \"echo $TEMP\"\n\n[target.osx-64.dependencies]\nclang = \">=16.0.6\"\n

    "},{"location":"configuration/#the-feature-and-environments-tables","title":"The feature and environments tables","text":"

    The feature table allows you to define features that can be used to create different [environments]. The [environments] table allows you to define different environments. The design is explained in this design document.

    Simplest example

    [feature.test.dependencies]\npytest = \"*\"\n\n[environments]\ntest = [\"test\"]\n
    This will create an environment called test that has pytest installed.

    "},{"location":"configuration/#the-feature-table","title":"The feature table","text":"

    The feature table allows you to define the following fields per feature.

    These tables are all also available without the feature prefix; when used that way, we call them the default feature. This is a protected name you cannot use for your own feature.

    Full feature table specification
    [feature.cuda]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nchannels = [\"nvidia\"] # Results in:  [\"nvidia\", \"conda-forge\"] when the default is `conda-forge`\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"==1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nsystem-requirements = {cuda = \"12\"}\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
    Full feature table but written as separate tables
    [feature.cuda.activation]\nscripts = [\"cuda_activation.sh\"]\n\n[feature.cuda.dependencies]\ncuda = \"x.y.z\"\ncudnn = \"12.0\"\n\n[feature.cuda.pypi-dependencies]\ntorch = \"==1.9.0\"\n\n[feature.cuda.system-requirements]\ncuda = \"12\"\n\n[feature.cuda.tasks]\nwarmup = \"python warmup.py\"\n\n[feature.cuda.target.osx-arm64.dependencies]\nmlx = \"x.y.z\"\n\n# Channels and Platforms are not available as separate tables as they are implemented as lists\n[feature.cuda]\nchannels = [\"nvidia\"]\nplatforms = [\"linux-64\", \"osx-arm64\"]\n
    "},{"location":"configuration/#the-environments-table","title":"The environments table","text":"

    The environments table allows you to define environments that are created using the features defined in the feature tables.

    Important

    default is always implied when creating environments. If you don't want to use the default feature, you can keep all the non-feature tables empty.

    The environments table is defined using the following fields:

    Simplest example
    [environments]\ntest = [\"test\"]\n
    Full environments table specification
    [environments]\ntest = {features = [\"test\"], solve-group = \"test\"}\nprod = {features = [\"prod\"], solve-group = \"test\"}\nlint = [\"lint\"]\n
    "},{"location":"configuration/#global-configuration","title":"Global configuration","text":"

    The global configuration options are documented in the global configuration section.
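    As a rough sketch (the key names here are assumptions based on pixi's documented options; consult the global configuration section for the authoritative list), a global config.toml might look like:

    ```toml
    # Illustrative ~/.config/pixi/config.toml; key names assumed from the
    # global configuration docs -- verify against that section.
    default-channels = ["conda-forge"]
    tls-no-verify = false
    ```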

    "},{"location":"environment/","title":"Environments","text":"

    Pixi is a tool to manage virtual environments. This document explains what an environment looks like and how to use it.

    "},{"location":"environment/#structure","title":"Structure","text":"

    A pixi environment is located in the .pixi/envs directory of the project. This location is not configurable as it is a specific design decision to keep the environments in the project directory. This keeps your machine and your project clean and isolated from each other, and makes it easy to clean up after a project is done.

    If you look at the .pixi/envs directory, you will see a directory for each environment: default is the one that is normally used, and any custom environment you specify gets a directory under the name you gave it.

    .pixi\n\u2514\u2500\u2500 envs\n    \u251c\u2500\u2500 cuda\n    \u2502   \u251c\u2500\u2500 bin\n    \u2502   \u251c\u2500\u2500 conda-meta\n    \u2502   \u251c\u2500\u2500 etc\n    \u2502   \u251c\u2500\u2500 include\n    \u2502   \u251c\u2500\u2500 lib\n    \u2502   ...\n    \u2514\u2500\u2500 default\n        \u251c\u2500\u2500 bin\n        \u251c\u2500\u2500 conda-meta\n        \u251c\u2500\u2500 etc\n        \u251c\u2500\u2500 include\n        \u251c\u2500\u2500 lib\n        ...\n

    These directories are conda environments, and you can use them as such, but you cannot manually edit them; changes should always go through the pixi.toml. Pixi will always make sure the environment is in sync with the pixi.lock file. If this is not the case, all commands that use the environment will automatically update it, e.g. pixi run and pixi shell.

    "},{"location":"environment/#cleaning-up","title":"Cleaning up","text":"

    If you want to clean up the environments, you can simply delete the .pixi/envs directory, and pixi will recreate the environments when needed.

    # either:\nrm -rf .pixi/envs\n\n# or per environment:\nrm -rf .pixi/envs/default\nrm -rf .pixi/envs/cuda\n

    "},{"location":"environment/#activation","title":"Activation","text":"

    An environment is nothing more than a set of files installed into a certain location that somewhat mimics a global system install. You need to activate the environment to use it. In the simplest sense, that means adding the bin directory of the environment to the PATH variable. But there is more to it in a conda environment, as it also sets some environment variables.

    To do the activation we have multiple options:

    - Use the pixi shell command to open a shell with the environment activated.
    - Use the pixi shell-hook command to print the command to activate the environment in your current shell.
    - Use the pixi run command to run a command in the environment.

    The run command is special: it runs its own cross-platform shell and has the ability to run tasks. More information about tasks can be found in the tasks documentation.

    Using pixi shell-hook, you would get output like the following:

    export PATH=\"/home/user/development/pixi/.pixi/envs/default/bin:/home/user/.local/bin:/home/user/bin:/usr/local/bin:/usr/local/sbin:/usr/bin:/home/user/.pixi/bin\"\nexport CONDA_PREFIX=\"/home/user/development/pixi/.pixi/envs/default\"\nexport PIXI_PROJECT_NAME=\"pixi\"\nexport PIXI_PROJECT_ROOT=\"/home/user/development/pixi\"\nexport PIXI_PROJECT_VERSION=\"0.12.0\"\nexport PIXI_PROJECT_MANIFEST=\"/home/user/development/pixi/pixi.toml\"\nexport CONDA_DEFAULT_ENV=\"pixi\"\nexport PIXI_ENVIRONMENT_PLATFORMS=\"osx-64,linux-64,win-64,osx-arm64\"\nexport PIXI_ENVIRONMENT_NAME=\"default\"\nexport PIXI_PROMPT=\"(pixi) \"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-binutils_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gcc_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gfortran_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/activate-gxx_linux-64.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/libglib_activate.sh\"\n. \"/home/user/development/pixi/.pixi/envs/default/etc/conda/activate.d/rust.sh\"\n
    It sets PATH and some other environment variables. But more importantly, it also runs activation scripts provided by the installed packages. An example of this is the libglib_activate.sh script. Thus, just adding the bin directory to PATH is not enough.

    "},{"location":"environment/#traditional-conda-activate-like-activation","title":"Traditional conda activate-like activation","text":"

    If you prefer to use the traditional conda activate-like activation, you could use the pixi shell-hook command.

    $ which python\npython not found\n$ eval \"$(pixi shell-hook)\"\n$ (default) which python\n/path/to/project/.pixi/envs/default/bin/python\n

    Warning

    It is not encouraged to use the traditional conda activate-like activation, as deactivating the environment is not really possible. Use pixi shell instead.

    "},{"location":"environment/#using-pixi-with-direnv","title":"Using pixi with direnv","text":"

    This allows you to use pixi in combination with direnv. Enter the following into your .envrc file:

    .envrc
    watch_file pixi.lock # (1)!\neval \"$(pixi shell-hook)\" # (2)!\n
    1. This ensures that every time your pixi.lock changes, direnv invokes the shell-hook again.
    2. This installs if needed, and activates the environment. direnv ensures that the environment is deactivated when you leave the directory.
    $ cd my-project\ndirenv: error /my-project/.envrc is blocked. Run `direnv allow` to approve its content\n$ direnv allow\ndirenv: loading /my-project/.envrc\n\u2714 Project in /my-project is ready to use!\ndirenv: export +CONDA_DEFAULT_ENV +CONDA_PREFIX +PIXI_ENVIRONMENT_NAME +PIXI_ENVIRONMENT_PLATFORMS +PIXI_PROJECT_MANIFEST +PIXI_PROJECT_NAME +PIXI_PROJECT_ROOT +PIXI_PROJECT_VERSION +PIXI_PROMPT ~PATH\n$ which python\n/my-project/.pixi/envs/default/bin/python\n$ cd ..\ndirenv: unloading\n$ which python\npython not found\n
    "},{"location":"environment/#environment-variables","title":"Environment variables","text":"

    The following environment variables are set by pixi, when using the pixi run, pixi shell, or pixi shell-hook command:

    Note

    Even though these are environment variables, they cannot be overridden. E.g. you cannot change the root of the project by setting PIXI_PROJECT_ROOT in the environment.

    "},{"location":"environment/#solving-environments","title":"Solving environments","text":"

    When you run a command that uses the environment, pixi will check if the environment is in sync with the pixi.lock file. If it is not, pixi will solve the environment and update it. This means that pixi will retrieve the best set of packages for the dependency requirements that you specified in the pixi.toml and will put the output of the solve step into the pixi.lock file. Solving is a mathematical problem and can take some time, but we take pride in the way we solve environments, and we are confident that we can solve your environment in a reasonable time. If you want to learn more about the solving process, you can read these:

    Pixi solves both the conda and PyPI dependencies, where the PyPI dependencies use the conda packages as a base, so you can be sure that the packages are compatible with each other. These solvers are split between the rattler and rip libraries, which handle the heavy lifting of the solving process, executed by our custom SAT solver: resolvo. resolvo is able to solve multiple ecosystems, like conda and PyPI. It implements a lazy solving process for PyPI packages, which means it only downloads the metadata of the packages needed to solve the environment. It also supports the conda way of solving, which downloads the metadata of all packages at once and then solves in one go.

    For the [pypi-dependencies], rip implements sdist building to retrieve the metadata of the packages, and wheel building to install them. For this building step, pixi requires python to first be installed via the conda [dependencies] section of the pixi.toml file. This will always be slower than pure conda solves, so for the best pixi experience you should stay within the [dependencies] section of the pixi.toml file.

    "},{"location":"environment/#caching","title":"Caching","text":"

    Pixi caches the packages used in the environment. So if you have multiple projects that use the same packages, pixi will only download the packages once.

    The cache is located in the ~/.cache/rattler/cache directory by default. This location is configurable by setting the PIXI_CACHE_DIR or RATTLER_CACHE_DIR environment variable.
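    For example, to relocate the cache for the current shell session (the path below is illustrative, not a recommendation):

    ```shell
    # Redirect pixi's package cache to a custom location (illustrative path).
    export PIXI_CACHE_DIR="$HOME/.cache/my-pixi-cache"
    # Subsequent pixi commands in this shell will use the new location.
    echo "cache dir: $PIXI_CACHE_DIR"
    ```
    
    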

    When you want to clean the cache, you can simply delete the cache directory, and pixi will re-create the cache when needed.

    "},{"location":"vision/","title":"Vision","text":"

    We created pixi because we want a cargo/npm/yarn-like package management experience for conda. We really love what the conda packaging ecosystem achieves, but we think the user experience can be improved a lot. Modern package managers like cargo have shown us how great a package manager can be. We want to bring that experience to the conda ecosystem.

    "},{"location":"vision/#pixi-values","title":"Pixi values","text":"

    We want to make pixi a great experience for everyone, so we have a few values that we want to uphold:

    1. Fast. We want a fast package manager that is able to solve the environment in a few seconds.
    2. User Friendly. We want a package manager that puts user friendliness on the front line, providing easy, accessible and intuitive commands that follow the principle of least surprise.
    3. Isolated Environment. We want isolated environments that are reproducible and easy to share. Ideally, they should run on all common platforms. The conda packaging system provides an excellent base for this.
    4. Single Tool. We want to integrate the most common needs when working on a development project into pixi, so it should support at least dependency management, command management, building and uploading packages. You should not need to reach for another external tool for this.
    5. Fun. It should be fun to use pixi and not cause frustration; you should not need to think about it a lot, and it should generally just get out of your way.
    "},{"location":"vision/#conda","title":"Conda","text":"

    We are building on top of the conda packaging ecosystem; this means we have a huge number of packages available for different platforms on conda-forge. We believe the conda packaging ecosystem provides a solid base to manage your dependencies. Conda-forge is community-maintained, very open to contributions, widely used in data science, scientific computing, robotics and other fields, and has a proven track record.

    "},{"location":"vision/#target-languages","title":"Target languages","text":"

    Essentially, we are language agnostic: we target any language that can be installed with conda, including C++, Python, Rust, Zig, etc. But we do believe the Python ecosystem can benefit from a good package manager based on conda, so we are trying to provide an alternative to existing solutions there. We also think we can provide a good solution for C++ projects, as there are a lot of libraries available on conda-forge today. Pixi also truly shines when used for multi-language projects, e.g. a mix of C++ and Python, because we provide a nice way to build everything up to and including system-level packages.

    "},{"location":"advanced/advanced_tasks/","title":"Advanced tasks","text":"

    When building a package, you often have to do more than just run the code. Steps like formatting, linting, compiling, testing, benchmarking, etc. are often part of a project. With pixi tasks, this should become much easier to do.

    Here are some quick examples

    pixi.toml
    [tasks]\n# Commands as lists so you can also add documentation in between.\nconfigure = { cmd = [\n    \"cmake\",\n    # Use the cross-platform Ninja generator\n    \"-G\",\n    \"Ninja\",\n    # The source is in the root directory\n    \"-S\",\n    \".\",\n    # We wanna build in the .build directory\n    \"-B\",\n    \".build\",\n] }\n\n# Depend on other tasks\nbuild = { cmd = [\"ninja\", \"-C\", \".build\"], depends_on = [\"configure\"] }\n\n# Using environment variables\nrun = \"python main.py $PIXI_PROJECT_ROOT\"\nset = \"export VAR=hello && echo $VAR\"\n\n# Cross platform file operations\ncopy = \"cp pixi.toml pixi_backup.toml\"\nclean = \"rm pixi_backup.toml\"\nmove = \"mv pixi.toml backup.toml\"\n
    "},{"location":"advanced/advanced_tasks/#depends-on","title":"Depends on","text":"

    Just like packages can depend on other packages, our tasks can depend on other tasks. This allows for complete pipelines to be run with a single command.

    An obvious example is compiling before running an application.

    Check out our cpp_sdl example for a running example. In that package we have some tasks that depend on each other, so we can ensure that when you run pixi run start everything is set up as expected.

    pixi task add configure \"cmake -G Ninja -S . -B .build\"\npixi task add build \"ninja -C .build\" --depends-on configure\npixi task add start \".build/bin/sdl_example\" --depends-on build\n

    Results in the following lines added to the pixi.toml

    pixi.toml
    [tasks]\n# Configures CMake\nconfigure = \"cmake -G Ninja -S . -B .build\"\n# Build the executable but make sure CMake is configured first.\nbuild = { cmd = \"ninja -C .build\", depends_on = [\"configure\"] }\n# Start the built executable\nstart = { cmd = \".build/bin/sdl_example\", depends_on = [\"build\"] }\n
    pixi run start\n

    The tasks will be executed after each other:

    If one of the commands fails (exits with a non-zero code), it will stop and the next one will not be started.

    With this logic, you can also create aliases as you don't have to specify any command in a task.

    pixi task add fmt ruff\npixi task add lint pylint\n
    pixi task alias style fmt lint\n

    Results in the following pixi.toml.

    pixi.toml
    fmt = \"ruff\"\nlint = \"pylint\"\nstyle = { depends_on = [\"fmt\", \"lint\"] }\n

    Now run both tools with one command.

    pixi run style\n
    "},{"location":"advanced/advanced_tasks/#working-directory","title":"Working directory","text":"

    Pixi tasks support the definition of a working directory.

    \"cwd\" stands for Current Working Directory. The directory is relative to the pixi package root, where the pixi.toml file is located.

    Consider a pixi project structured as follows:

    \u251c\u2500\u2500 pixi.toml\n\u2514\u2500\u2500 scripts\n    \u2514\u2500\u2500 bar.py\n

    To add a task to run the bar.py file, use:

    pixi task add bar \"python bar.py\" --cwd scripts\n

    This will add the following line to pixi.toml:

    pixi.toml

    [tasks]\nbar = { cmd = \"python bar.py\", cwd = \"scripts\" }\n

    "},{"location":"advanced/advanced_tasks/#caching","title":"Caching","text":"

    When you specify inputs and/or outputs to a task, pixi will reuse the result of the task.

    For the cache, pixi checks that the following are true:

    If all of these conditions are met, pixi will not run the task again and instead use the existing result.

    Inputs and outputs can be specified as globs, which will be expanded to all matching files.

    pixi.toml
    [tasks]\n# This task will only run if the `main.py` file has changed.\nrun = { cmd = \"python main.py\", inputs = [\"main.py\"] }\n\n# This task will remember the result of the `curl` command and not run it again if the file `data.csv` already exists.\ndownload_data = { cmd = \"curl -o data.csv https://example.com/data.csv\", outputs = [\"data.csv\"] }\n\n# This task will only run if the `src` directory has changed and will remember the result of the `make` command.\nbuild = { cmd = \"make\", inputs = [\"src/*.cpp\", \"include/*.hpp\"], outputs = [\"build/app.exe\"] }\n

    Note: if you want to debug the globs you can use the --verbose flag to see which files are selected.

    # shows info logs of all files that were selected by the globs\npixi run -v start\n
    "},{"location":"advanced/advanced_tasks/#our-task-runner-deno_task_shell","title":"Our task runner: deno_task_shell","text":"

    To support the different operating systems (Windows, macOS and Linux), pixi integrates a shell that can run on all of them: deno_task_shell. The task shell is a limited implementation of a Bourne shell interface.

    "},{"location":"advanced/advanced_tasks/#built-in-commands","title":"Built-in commands","text":"

    Next to running actual executables like ./myprogram, cmake or python, the shell has some built-in commands.

    "},{"location":"advanced/advanced_tasks/#syntax","title":"Syntax","text":"

    More info in deno_task_shell documentation.
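    As an illustration (the task names below are made up), deno_task_shell supports familiar Bourne-shell constructs such as pipelines, boolean chaining with && and ||, and environment variables, so cross-platform tasks can be written like:

    ```toml
    # Hypothetical tasks demonstrating deno_task_shell syntax; see the
    # deno_task_shell documentation for the full feature list.
    [tasks]
    # Boolean chaining: run the second command only if the first succeeds
    check = "cmake --version && echo cmake is available"
    # Pipelines work cross-platform, including on Windows
    count-names = "cat pixi.toml | grep -c name"
    # Setting an environment variable (pattern taken from the example above)
    greet = "export NAME=pixi && echo Hello $NAME"
    ```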

    "},{"location":"advanced/authentication/","title":"Authenticate pixi with a server","text":"

    You can authenticate pixi with a server like prefix.dev, a private quetz instance or anaconda.org. Different servers use different authentication methods. In this documentation page, we detail how you can authenticate against the different servers and where the authentication information is stored.

    Usage: pixi auth login [OPTIONS] <HOST>\n\nArguments:\n  <HOST>  The host to authenticate with (e.g. repo.prefix.dev)\n\nOptions:\n      --token <TOKEN>              The token to use (for authentication with prefix.dev)\n      --username <USERNAME>        The username to use (for basic HTTP authentication)\n      --password <PASSWORD>        The password to use (for basic HTTP authentication)\n      --conda-token <CONDA_TOKEN>  The token to use on anaconda.org / quetz authentication\n  -v, --verbose...                 More output per occurrence\n  -q, --quiet...                   Less output per occurrence\n  -h, --help                       Print help\n

    The different options are \"token\", \"conda-token\" and \"username + password\".

    The token variant implements standard \"Bearer Token\" authentication, as used on the prefix.dev platform. A Bearer Token is sent with every request as an additional header of the form Authorization: Bearer <TOKEN>.

    The conda-token option is used on anaconda.org and can be used with a quetz server. With this option, the token is sent as part of the URL following this scheme: conda.anaconda.org/t/<TOKEN>/conda-forge/linux-64/....

    The last option, username & password, is used for \"Basic HTTP Authentication\". This is the equivalent of adding http://user:password@myserver.com/.... This authentication method can be configured quite easily with an NGINX or Apache reverse proxy and is thus commonly used in self-hosted systems.
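    To make the three methods concrete, here is a sketch of what each option translates to on the wire (every token, URL path and credential below is a placeholder, not a real value):

    ```shell
    TOKEN="xy-example-token"  # placeholder, not a real token

    # 1. token (Bearer): sent as an HTTP header with every request
    echo "Authorization: Bearer ${TOKEN}"

    # 2. conda-token: embedded in the request URL
    echo "https://conda.anaconda.org/t/${TOKEN}/conda-forge/linux-64/repodata.json"

    # 3. username + password (Basic HTTP): credentials in the URL
    echo "https://user:password@myserver.com/channel/"
    ```
    
    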

    "},{"location":"advanced/authentication/#examples","title":"Examples","text":"

    Login to prefix.dev:

    pixi auth login prefix.dev --token pfx_jj8WDzvnuTEHGdAhwRZMC1Ag8gSto8\n

    Login to anaconda.org:

    pixi auth login anaconda.org --conda-token xy-72b914cc-c105-4ec7-a969-ab21d23480ed\n

    Login to a basic HTTP secured server:

    pixi auth login myserver.com --username user --password password\n
    "},{"location":"advanced/authentication/#where-does-pixi-store-the-authentication-information","title":"Where does pixi store the authentication information?","text":"

    The storage location for the authentication information is system-dependent. By default, pixi tries to use the keychain to store this sensitive information securely on your machine.

    On Windows, the credentials are stored in the \"credentials manager\". Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

    On macOS, the passwords are stored in the keychain. To access the password, you can use the Keychain Access program that comes pre-installed on macOS. Searching for rattler (the underlying library pixi uses) you should find any credentials stored by pixi (or other rattler-based programs).

    On Linux, one can use GNOME Keyring (or just Keyring) to access credentials that are securely stored by libsecret. Searching for rattler should list all the credentials stored by pixi and other rattler-based programs.

    "},{"location":"advanced/authentication/#fallback-storage","title":"Fallback storage","text":"

    If you run on a server with none of the aforementioned keychains available, pixi falls back to storing the credentials in an insecure JSON file located at ~/.rattler/credentials.json.

    "},{"location":"advanced/authentication/#override-the-authentication-storage","title":"Override the authentication storage","text":"

    You can use the RATTLER_AUTH_FILE environment variable to override the default location of the credentials file. When this environment variable is set, it provides the only source of authentication data that is used by pixi.

    E.g.

    export RATTLER_AUTH_FILE=$HOME/credentials.json\n# You can also specify the file in the command line\npixi global install --auth-file $HOME/credentials.json ...\n

    The JSON should follow the following format:

    {\n    \"*.prefix.dev\": {\n        \"BearerToken\": \"your_token\"\n    },\n    \"otherhost.com\": {\n        \"BasicHttp\": {\n            \"username\": \"your_username\",\n            \"password\": \"your_password\"\n        }\n    },\n    \"conda.anaconda.org\": {\n        \"CondaToken\": \"your_token\"\n    }\n}\n

    Note: if you use a wildcard in the host, any subdomain will match (e.g. *.prefix.dev also matches repo.prefix.dev).
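    The wildcard behavior can be illustrated with a tiny matcher. This is a hypothetical sketch of the documented matching rule, not rattler's actual host-matching code:

    ```python
    def host_matches(pattern: str, host: str) -> bool:
        """Hypothetical sketch of wildcard credential-host matching."""
        if pattern.startswith("*."):
            # "*.prefix.dev" matches any subdomain, e.g. "repo.prefix.dev"
            return host.endswith(pattern[1:])
        return host == pattern

    print(host_matches("*.prefix.dev", "repo.prefix.dev"))  # True
    print(host_matches("*.prefix.dev", "otherhost.com"))    # False
    ```
    
    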

    Lastly, you can set the authentication override file in the global configuration file.

    "},{"location":"advanced/channel_priority/","title":"Channel Logic","text":"

    All logic regarding which dependencies can be installed from which channel is determined by the instructions we give the solver.

    The actual code for this lives in the rattler_solve crate, which can however be hard to read. Therefore, this document continues with simplified flowcharts.

    "},{"location":"advanced/channel_priority/#channel-specific-dependencies","title":"Channel specific dependencies","text":"

    When a user defines a channel per dependency, the solver needs to know that the other channels are unusable for this dependency.

    [project]\nchannels = [\"conda-forge\", \"my-channel\"]\n\n[dependencies]\npackagex = { version = \"*\", channel = \"my-channel\" }\n
    In the packagex example, the solver will understand that the package is only available in my-channel and will not look for it in conda-forge.

    The flowchart of the logic that excludes all other channels:

    flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Channel Specific Dependency?}\n    C -->|Yes| D[Exclude All Other Channels for This Package]\n    C -->|No| E{Any Other Dependencies?}\n    E -->|Yes| B\n    E -->|No| F[End]\n    D --> E
    "},{"location":"advanced/channel_priority/#channel-priority","title":"Channel priority","text":"

    Channel priority is dictated by the order in the project.channels array, where the first channel is the highest priority. For instance:

    [project]\nchannels = [\"conda-forge\", \"my-channel\", \"your-channel\"]\n
    If the package is found in conda-forge, the solver will not look for it in my-channel and your-channel, because the priority logic tells the solver they are excluded. If the package is not found in conda-forge, the solver will look for it in my-channel; if it is found there, the solver will exclude your-channel for this package. This diagram explains the logic:
    flowchart TD\n    A[Start] --> B[Given a Dependency]\n    B --> C{Loop Over Channels}\n    C --> D{Package in This Channel?}\n    D -->|No| C\n    D -->|Yes| E{\"This the first channel\n     for this package?\"}\n    E -->|Yes| F[Include Package in Candidates]\n    E -->|No| G[Exclude Package from Candidates]\n    F --> H{Any Other Channels?}\n    G --> H\n    H -->|Yes| C\n    H -->|No| I{Any Other Dependencies?}\n    I -->|No| J[End]\n    I -->|Yes| B

    This method ensures the solver only adds a package to the candidates if it's found in the highest-priority channel available. If you have 10 channels and the package is found in the 5th channel, the remaining 5 channels are excluded from the candidates if they also contain the package.
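    The strict-priority selection described above can be sketched as: for each dependency, keep candidates only from the first (highest-priority) channel that contains the package. This is an illustrative sketch, not pixi's actual solver code (which lives in rattler_solve):

    ```python
    def candidate_channel(channels, availability):
        """Return, per package, the single channel its candidates may come from.

        channels:     list ordered from highest to lowest priority
        availability: dict mapping package name -> set of channels containing it
        """
        chosen = {}
        for package, found_in in availability.items():
            for channel in channels:  # highest priority first
                if channel in found_in:
                    chosen[package] = channel  # all other channels are excluded
                    break
        return chosen

    channels = ["conda-forge", "my-channel", "your-channel"]
    availability = {
        "python": {"conda-forge", "my-channel"},
        "packagex": {"my-channel", "your-channel"},
    }
    print(candidate_channel(channels, availability))
    # {'python': 'conda-forge', 'packagex': 'my-channel'}
    ```
    
    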

    "},{"location":"advanced/channel_priority/#use-case-pytorch-and-nvidia-with-conda-forge","title":"Use case: pytorch and nvidia with conda-forge","text":"

    A common use case is to use pytorch with nvidia drivers, while also needing the conda-forge channel for the main dependencies.

    [project]\nchannels = [\"nvidia/label/cuda-11.8.0\", \"nvidia\", \"conda-forge\", \"pytorch\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\ncuda = {version = \"*\", channel=\"nvidia/label/cuda-11.8.0\"}\npytorch = {version = \"2.0.1.*\", channel=\"pytorch\"}\ntorchvision = {version = \"0.15.2.*\", channel=\"pytorch\"}\npytorch-cuda = {version = \"11.8.*\", channel=\"pytorch\"}\npython = \"3.10.*\"\n
    This will get as much as possible from the nvidia/label/cuda-11.8.0 channel, which in practice is only the cuda package.

    Then it will get all packages from the nvidia channel, which is a little more, and some packages overlap between the nvidia and conda-forge channels. One example is the cuda-cudart package, which will now only be retrieved from the nvidia channel because of the priority logic.

    Then it will get the packages from the conda-forge channel, which is the main channel for the dependencies.

    But the user only wants the pytorch packages from the pytorch channel, which is why pytorch is added last and the dependencies are added as channel specific dependencies.

    We don't define the pytorch channel before conda-forge because we want to get as much as possible from conda-forge, as the pytorch channel does not always ship the best versions of all packages.

    For example, it also ships the ffmpeg package, but only an old version that doesn't work with newer pytorch versions. The installation would therefore break if the priority logic skipped the conda-forge channel for ffmpeg.

    "},{"location":"advanced/explain_info_command/","title":"Info command","text":"

    pixi info prints out useful information to debug a situation or to get an overview of your machine/project. This information can also be retrieved in JSON format using the --json flag, which is useful for reading it programmatically.

    Running pixi info in the pixi repo
    \u279c pixi info\n      Pixi version: 0.13.0\n          Platform: linux-64\n  Virtual packages: __unix=0=0\n                  : __linux=6.5.12=0\n                  : __glibc=2.36=0\n                  : __cuda=12.3=0\n                  : __archspec=1=x86_64\n         Cache dir: /home/user/.cache/rattler/cache\n      Auth storage: /home/user/.rattler/credentials.json\n\nProject\n------------\n           Version: 0.13.0\n     Manifest file: /home/user/development/pixi/pixi.toml\n      Last updated: 25-01-2024 10:29:08\n\nEnvironments\n------------\ndefault\n          Features: default\n          Channels: conda-forge\n  Dependency count: 10\n      Dependencies: pre-commit, rust, openssl, pkg-config, git, mkdocs, mkdocs-material, pillow, cairosvg, compilers\n  Target platforms: linux-64, osx-arm64, win-64, osx-64\n             Tasks: docs, test-all, test, build, lint, install, build-docs\n
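    Output like the above can also be consumed from a script via pixi info --json. A minimal sketch, where the platform field name is an assumption based on the example output and may not match the real JSON schema exactly:

    ```python
    import json

    def platform_from_info(raw: str) -> str:
        """Extract the platform from `pixi info --json` output (field name assumed)."""
        return json.loads(raw)["platform"]

    # Illustrative payload; in practice the string would come from
    # running `pixi info --json` (e.g. via subprocess).
    sample = '{"platform": "linux-64", "version": "0.13.0"}'
    print(platform_from_info(sample))  # linux-64
    ```
    
    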
    "},{"location":"advanced/explain_info_command/#global-info","title":"Global info","text":"

    The first part of the info output is information that is always available and tells you what pixi can read on your machine.

    "},{"location":"advanced/explain_info_command/#platform","title":"Platform","text":"

    This defines the platform you're currently on according to pixi. If this is incorrect, please file an issue on the pixi repo.

    "},{"location":"advanced/explain_info_command/#virtual-packages","title":"Virtual packages","text":"

    The virtual packages that pixi can find on your machine.

    In the conda ecosystem, you can depend on virtual packages. These packages aren't real dependencies that get installed; instead, they are used during the solve step to determine whether a package can be installed on the machine. A simple example: when a package depends on CUDA drivers being present on the host machine, it can express that by depending on the __cuda virtual package. In that case, if pixi cannot find the __cuda virtual package on your machine, the installation will fail.
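    For instance, a project can express a requirement on the __cuda virtual package through a system-requirements entry, mirroring the feature syntax shown later in this document:

    ```toml
    [system-requirements]
    # Requires the host to provide the __cuda virtual package at version 12
    cuda = "12"
    ```
    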

    "},{"location":"advanced/explain_info_command/#cache-dir","title":"Cache dir","text":"

    Pixi caches all previously downloaded packages in a cache folder. This cache folder is shared between all pixi projects and globally installed tools. Normally the locations would be:

    Linux: $XDG_CACHE_HOME/rattler or $HOME/.cache/rattler. macOS: $HOME/Library/Caches/rattler. Windows: {FOLDERID_LocalAppData}/rattler.

    When your system is running out of disk space, you can safely remove this folder. Everything needed will be re-downloaded the next time you install a project.
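    The per-platform defaults listed above can be resolved with a small helper. A sketch of the documented locations, not pixi's actual lookup code:

    ```python
    import os
    import sys

    def default_cache_dir() -> str:
        """Mirror the documented default rattler cache locations (illustrative)."""
        home = os.path.expanduser("~")
        if sys.platform.startswith("linux"):
            base = os.environ.get("XDG_CACHE_HOME", os.path.join(home, ".cache"))
            return os.path.join(base, "rattler")
        if sys.platform == "darwin":
            return os.path.join(home, "Library", "Caches", "rattler")
        if sys.platform == "win32":
            # FOLDERID_LocalAppData is typically exposed as %LOCALAPPDATA%
            return os.path.join(os.environ.get("LOCALAPPDATA", home), "rattler")
        raise NotImplementedError(sys.platform)

    print(default_cache_dir())
    ```
    
    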

    "},{"location":"advanced/explain_info_command/#auth-storage","title":"Auth storage","text":"

    Check the authentication documentation

    "},{"location":"advanced/explain_info_command/#cache-size","title":"Cache size","text":"

    [requires --extended]

    The size of the previously mentioned \"Cache dir\" in Mebibytes.

    "},{"location":"advanced/explain_info_command/#project-info","title":"Project info","text":"

    Everything below Project is information about the project you're currently in. This info is only available if the current path contains a manifest file (pixi.toml).

    "},{"location":"advanced/explain_info_command/#manifest-file","title":"Manifest file","text":"

    The path to the manifest file that describes the project. For now, this can only be pixi.toml.

    "},{"location":"advanced/explain_info_command/#last-updated","title":"Last updated","text":"

    The last time the lockfile was updated, either manually or by pixi itself.

    "},{"location":"advanced/explain_info_command/#environment-info","title":"Environment info","text":"

    The environment info defined per environment. If you don't have any environments defined, this will only show the default environment.

    "},{"location":"advanced/explain_info_command/#features","title":"Features","text":"

    This lists which features are enabled in the environment. For the default environment, this is only default.

    "},{"location":"advanced/explain_info_command/#channels","title":"Channels","text":"

    The list of channels used in this environment.

    "},{"location":"advanced/explain_info_command/#dependency-count","title":"Dependency count","text":"

    The number of dependencies defined for this environment (not the number of installed dependencies).

    "},{"location":"advanced/explain_info_command/#dependencies","title":"Dependencies","text":"

    The list of dependencies defined for this environment.

    "},{"location":"advanced/explain_info_command/#target-platforms","title":"Target platforms","text":"

    The platforms the project has defined.

    "},{"location":"advanced/github_actions/","title":"GitHub Action","text":"

    We created prefix-dev/setup-pixi to facilitate using pixi in CI.

    "},{"location":"advanced/github_actions/#usage","title":"Usage","text":"
    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    pixi-version: v0.16.1\n    cache: true\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n- run: pixi run test\n

    Pin your action versions

    Since pixi is not yet stable, the API of this action may change between minor versions. Please pin this action to a specific version (e.g., prefix-dev/setup-pixi@v0.5.1) to avoid breaking changes. You can automatically update the version of this action by using Dependabot.

    Put the following in your .github/dependabot.yml file to enable Dependabot for your GitHub Actions:

    .github/dependabot.yml
    version: 2\nupdates:\n  - package-ecosystem: github-actions\n    directory: /\n    schedule:\n      interval: monthly # (1)!\n    groups:\n      dependencies:\n        patterns:\n          - \"*\"\n
    1. or daily, weekly
    "},{"location":"advanced/github_actions/#features","title":"Features","text":"

    To see all available input arguments, see the action.yml file in setup-pixi. The most important features are described below.

    "},{"location":"advanced/github_actions/#caching","title":"Caching","text":"

    The action supports caching of the pixi environment. By default, caching is enabled if a pixi.lock file is present. It will then use the pixi.lock file to generate a hash of the environment and cache it. If the cache is hit, the action will skip the installation and use the cached environment. You can specify the behavior by setting the cache input argument.
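    The hashing step described above can be pictured as a stable digest over the pixi.lock contents. A sketch using sha256; the action's real key derivation is an internal detail and may differ:

    ```python
    import hashlib

    def lockfile_hash(lock_contents: bytes) -> str:
        """Illustrative: a stable digest over pixi.lock contents can drive cache hits."""
        return hashlib.sha256(lock_contents).hexdigest()

    digest = lockfile_hash(b"version: 4\npackages: []\n")
    print(len(digest))  # 64 hex characters
    ```
    
    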

    Customize your cache key

    If you need to customize your cache-key, you can use the cache-key input argument. This will be the prefix of the cache key. The full cache key will be <cache-key><conda-arch>-<hash>.
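    Following the documented format, a prefix such as pixi- would yield a full key like pixi-linux-64-&lt;hash&gt; on a linux-64 runner. A sketch of the input (the cache-key value here is arbitrary):

    ```yaml
    - uses: prefix-dev/setup-pixi@v0.5.1
      with:
        cache: true
        cache-key: pixi-  # full key becomes pixi-<conda-arch>-<hash>
    ```
    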

    Only save caches on main

    To avoid hitting the 10 GB cache size limit too quickly, you might want to restrict when the cache is saved. This can be done by setting the cache-write argument.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    cache: true\n    cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}\n
    "},{"location":"advanced/github_actions/#multiple-environments","title":"Multiple environments","text":"

    With pixi, you can create multiple environments for different requirements. You can also specify which environment(s) you want to install by setting the environments input argument. This will install all environments that are specified and cache them.

    [project]\nname = \"my-package\"\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\"]\n\n[dependencies]\npython = \">=3.11\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n\n[environments]\npy311 = [\"py311\"]\npy312 = [\"py312\"]\n
    "},{"location":"advanced/github_actions/#multiple-environments-using-a-matrix","title":"Multiple environments using a matrix","text":"

    The following example will install the py311 and py312 environments in different jobs.

    test:\n  runs-on: ubuntu-latest\n  strategy:\n    matrix:\n      environment: [py311, py312]\n  steps:\n  - uses: actions/checkout@v4\n  - uses: prefix-dev/setup-pixi@v0.5.1\n    with:\n      environments: ${{ matrix.environment }}\n
    "},{"location":"advanced/github_actions/#install-multiple-environments-in-one-job","title":"Install multiple environments in one job","text":"

    The following example will install both the py311 and the py312 environment on the runner.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    environments: >- # (1)!\n      py311\n      py312\n- run: |\n  pixi run -e py311 test\n  pixi run -e py312 test\n
    1. separated by spaces, equivalent to

      environments: py311 py312\n

    Caching behavior if you don't specify environments

    If you don't specify any environment, the default environment will be installed and cached, even if you use other environments.

    "},{"location":"advanced/github_actions/#authentication","title":"Authentication","text":"

    There are currently three ways to authenticate with pixi:

    For more information, see Authentication.

    Handle secrets with care

    Please only store sensitive information using GitHub secrets. Do not store them in your repository. When your sensitive information is stored in a GitHub secret, you can access it using the ${{ secrets.SECRET_NAME }} syntax. These secrets will always be masked in the logs.

    "},{"location":"advanced/github_actions/#token","title":"Token","text":"

    Specify the token using the auth-token input argument. This form of authentication (bearer token in the request headers) is mainly used at prefix.dev.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: prefix.dev\n    auth-token: ${{ secrets.PREFIX_DEV_TOKEN }}\n
    "},{"location":"advanced/github_actions/#username-and-password","title":"Username and password","text":"

    Specify the username and password using the auth-username and auth-password input arguments. This form of authentication (HTTP Basic Auth) is used in some enterprise environments, with Artifactory for example.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: custom-artifactory.com\n    auth-username: ${{ secrets.PIXI_USERNAME }}\n    auth-password: ${{ secrets.PIXI_PASSWORD }}\n
    "},{"location":"advanced/github_actions/#conda-token","title":"Conda-token","text":"

    Specify the conda-token using the conda-token input argument. This form of authentication (token is encoded in URL: https://my-quetz-instance.com/t/<token>/get/custom-channel) is used at anaconda.org or with quetz instances.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    auth-host: anaconda.org # (1)!\n    conda-token: ${{ secrets.CONDA_TOKEN }}\n
    1. or my-quetz-instance.com
    "},{"location":"advanced/github_actions/#custom-shell-wrapper","title":"Custom shell wrapper","text":"

    setup-pixi allows you to run commands inside the pixi environment by specifying a custom shell wrapper with shell: pixi run bash -e {0}. This is useful if you want to run commands inside the pixi environment without prefixing each one with pixi run.

    - run: | # (1)!\n    python --version\n    pip install --no-deps -e .\n  shell: pixi run bash -e {0}\n
    1. everything here will be run inside of the pixi environment

    You can even run Python scripts like this:

    - run: | # (1)!\n    import my_package\n    print(\"Hello world!\")\n  shell: pixi run python {0}\n
    1. everything here will be run inside of the pixi environment

    If you want to use PowerShell, you need to specify -Command as well.

    - run: | # (1)!\n    python --version | Select-String \"3.11\"\n  shell: pixi run pwsh -Command {0} # pwsh works on all platforms\n
    1. everything here will be run inside of the pixi environment

    How does it work under the hood?

    Under the hood, the shell: xyz {0} option is implemented by creating a temporary script file and calling xyz with that script file as an argument. This file does not have the executable bit set, so you cannot use shell: pixi run {0} directly but instead have to use shell: pixi run bash {0}. There are some custom shells provided by GitHub that have slightly different behavior, see jobs.<job_id>.steps[*].shell in the documentation. See the official documentation and ADR 0277 for more information about how the shell: input works in GitHub Actions.
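    The mechanism described above can be reproduced locally: the step body is written to a temporary file and the configured command is invoked with that file as its last argument. A minimal sketch using plain bash (instead of pixi run bash) to show why the non-executable script file needs an interpreter:

    ```shell
    # Simulate how `shell: <cmd> {0}` works: the step body is written to a
    # temporary script file (without the executable bit) and passed to <cmd>.
    script=$(mktemp)
    cat > "$script" <<'EOF'
    echo "hello from the step body"
    EOF
    # Because the file is not executable, it must be run via an interpreter:
    bash "$script"
    rm -f "$script"
    ```
    
    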

    "},{"location":"advanced/github_actions/#-frozen-and-locked","title":"--frozen and --locked","text":"

    You can specify whether setup-pixi should run pixi install --frozen or pixi install --locked depending on the frozen or the locked input argument. See the official documentation for more information about the --frozen and --locked flags.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    locked: true\n    # or\n    frozen: true\n

    If you don't specify anything, the default behavior is to run pixi install --locked if a pixi.lock file is present and pixi install otherwise.

    "},{"location":"advanced/github_actions/#debugging","title":"Debugging","text":"

    There are two types of debug logging that you can enable.

    "},{"location":"advanced/github_actions/#debug-logging-of-the-action","title":"Debug logging of the action","text":"

    The first one is the debug logging of the action itself. This can be enabled by re-running the action in debug mode:

    Debug logging documentation

    For more information about debug logging in GitHub Actions, see the official documentation.

    "},{"location":"advanced/github_actions/#debug-logging-of-pixi","title":"Debug logging of pixi","text":"

    The second type is the debug logging of the pixi executable. This can be specified by setting the log-level input.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    log-level: vvv # (1)!\n
    1. One of q, default, v, vv, or vvv.

    If nothing is specified, log-level will default to default or vv, depending on whether debug logging is enabled for the action.

    "},{"location":"advanced/github_actions/#self-hosted-runners","title":"Self-hosted runners","text":"

    On self-hosted runners, it may happen that some files persist between jobs. This can lead to problems, or even to secrets being leaked between job runs. To avoid this, you can use the post-cleanup input to specify the post-cleanup behavior of the action (i.e., what happens after all your commands have been executed).

    If you set post-cleanup to true, the action will delete the following files:

    If nothing is specified, post-cleanup will default to true.

    On self-hosted runners, you also might want to alter the default pixi install location to a temporary location. You can use pixi-bin-path: ${{ runner.temp }}/bin/pixi to do this.

    - uses: prefix-dev/setup-pixi@v0.5.1\n  with:\n    post-cleanup: true\n    pixi-bin-path: ${{ runner.temp }}/bin/pixi # (1)!\n
    1. ${{ runner.temp }}\\Scripts\\pixi.exe on Windows
    "},{"location":"advanced/github_actions/#more-examples","title":"More examples","text":"

    If you want to see more examples, you can take a look at the GitHub Workflows of the setup-pixi repository.

    "},{"location":"advanced/global_configuration/","title":"Global configuration in pixi","text":"

    Pixi supports some global configuration options, as well as project-scoped configuration (that does not belong into the project file). The configuration is loaded in the following order:

    1. Global configuration folder (e.g. ~/.config/pixi/config.toml on Linux, dependent on XDG_CONFIG_HOME)
    2. Global .pixi folder: ~/.pixi/config.toml (or $PIXI_HOME/config.toml if the PIXI_HOME environment variable is set)
    3. Project-local .pixi folder: $PIXI_PROJECT/.pixi/config.toml
    4. Command line arguments (--tls-no-verify, --change-ps1=false etc.)

    Note

    To find the locations where pixi looks for configuration files, run pixi with -v or --verbose.

    "},{"location":"advanced/global_configuration/#reference","title":"Reference","text":"

    The following reference describes all available configuration options.

    # The default channels to select when running `pixi init` or `pixi global install`.\n# This defaults to only conda-forge.\ndefault_channels = [\"conda-forge\"]\n\n# When set to false, the `(pixi)` prefix in the shell prompt is removed.\n# This applies to the `pixi shell` subcommand.\n# You can override this from the CLI with `--change-ps1`.\nchange_ps1 = true\n\n# When set to true, the TLS certificates are not verified. Note that this is a\n# security risk and should only be used for testing purposes or internal networks.\n# You can override this from the CLI with `--tls-no-verify`.\ntls_no_verify = false\n\n# Override from where the authentication information is loaded.\n# Usually we try to use the keyring to load authentication data from, and only use a JSON\n# file as fallback. This option allows you to force the use of a JSON file.\n# Read more in the authentication section.\nauthentication_override_file = \"/path/to/your/override.json\"\n\n# configuration for conda channel-mirrors\n[mirrors]\n# redirect all requests for conda-forge to the prefix.dev mirror\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"https://prefix.dev/conda-forge\"\n]\n\n# redirect all requests for bioconda to one of the three listed mirrors\n# Note: for repodata we try the first mirror first.\n\"https://conda.anaconda.org/bioconda\" = [\n    \"https://conda.anaconda.org/bioconda\",\n    # OCI registries are also supported\n    \"oci://ghcr.io/channel-mirrors/bioconda\",\n    \"https://prefix.dev/bioconda\",\n]\n
    "},{"location":"advanced/global_configuration/#mirror-configuration","title":"Mirror configuration","text":"

    You can configure mirrors for conda channels. We expect that mirrors are exact copies of the original channel. The implementation will look for the mirror key (a URL) in the mirrors section of the configuration file and replace the original URL with the mirror URL.

    To also include the original URL, you have to repeat it in the list of mirrors.

    The mirrors are prioritized based on the order of the list. We attempt to fetch the repodata (the most important file) from the first mirror in the list. The repodata contains all the SHA256 hashes of the individual packages, so it is important to get this file from a trusted source.
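    The URL replacement described above amounts to a prefix rewrite against the mirror keys. A sketch of the idea, not rattler's actual implementation:

    ```python
    def apply_mirrors(url, mirrors):
        """Rewrite a channel URL using the longest matching mirror key (illustrative)."""
        best = None
        for key in mirrors:
            if url.startswith(key) and (best is None or len(key) > len(best)):
                best = key
        if best is None:
            return [url]  # no mirror configured for this URL
        # Try each mirror in list order; repodata is fetched from the first one.
        return [mirror.rstrip("/") + url[len(best):] for mirror in mirrors[best]]

    mirrors = {
        "https://conda.anaconda.org/conda-forge": ["https://prefix.dev/conda-forge"],
    }
    print(apply_mirrors(
        "https://conda.anaconda.org/conda-forge/linux-64/repodata.json", mirrors
    ))
    ```
    
    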

    You can also specify mirrors for an entire \"host\", e.g.

    [mirrors]\n\"https://conda.anaconda.org\" = [\n    \"https://prefix.dev/\"\n]\n

    This will forward all requests for channels on anaconda.org to prefix.dev. In the above example, channels that are not currently mirrored on prefix.dev will fail.

    "},{"location":"advanced/global_configuration/#oci-mirrors","title":"OCI Mirrors","text":"

    You can also specify mirrors on an OCI registry. There is a public mirror on the GitHub container registry (ghcr.io) that is maintained by the conda-forge team. You can use it like this:

    [mirrors]\n\"https://conda.anaconda.org/conda-forge\" = [\n    \"oci://ghcr.io/channel-mirrors/conda-forge\"\n]\n

    The GHCR mirror also contains bioconda packages. You can search the available packages on GitHub.

    "},{"location":"advanced/multi_platform_configuration/","title":"Multi platform config","text":"

    Pixi's vision includes being supported on all major platforms. Sometimes that requires extra configuration to work well. On this page, you will learn what you can configure to better align with the platform you are targeting.

    Here is an example pixi.toml that highlights some of the features:

    pixi.toml
    [project]\n# Default project info....\n# A list of platforms you are supporting with your package.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\n# Overwrite the needed python version only on win-64\npython = \"3.7\"\n\n\n[activation]\nscripts = [\"setup.sh\"]\n\n[target.win-64.activation]\n# Overwrite activation scripts only for windows\nscripts = [\"setup.bat\"]\n
    "},{"location":"advanced/multi_platform_configuration/#platform-definition","title":"Platform definition","text":"

    The project.platforms defines which platforms your project supports. When multiple platforms are defined, pixi determines which dependencies to install for each platform individually. All of this is stored in a lockfile.

    Running pixi install on a platform that is not configured will warn the user that the project is not set up for that platform:

    \u276f pixi install\n  \u00d7 the project is not configured for your current platform\n   \u256d\u2500[pixi.toml:6:1]\n 6 \u2502 channels = [\"conda-forge\"]\n 7 \u2502 platforms = [\"osx-64\", \"osx-arm64\", \"win-64\"]\n   \u00b7             \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u252c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\n   \u00b7                             \u2570\u2500\u2500 add 'linux-64' here\n 8 \u2502\n   \u2570\u2500\u2500\u2500\u2500\n  help: The project needs to be configured to support your platform (linux-64).\n
    "},{"location":"advanced/multi_platform_configuration/#target-specifier","title":"Target specifier","text":"

    With the target specifier, you can overwrite the original configuration specifically for a single platform. If you are targeting a specific platform in your target specifier that was not specified in your project.platforms then pixi will throw an error.

    "},{"location":"advanced/multi_platform_configuration/#dependencies","title":"Dependencies","text":"

    It might happen that you want to install a certain dependency only on a specific platform, or you might want to use a different version on different platforms.

    pixi.toml
    [dependencies]\npython = \">=3.8\"\n\n[target.win-64.dependencies]\nmsmpi = \"*\"\npython = \"3.8\"\n

    In the above example, we specify that we depend on msmpi only on Windows. We also specifically want python pinned to 3.8 when installing on Windows. This overwrites the dependencies from the generic set of dependencies, without touching any of the other platforms.

    You can use pixi's CLI to add these dependencies to the pixi.toml:

    pixi add --platform win-64 posix\n

    This also works for the host and build dependencies.

    pixi add --host --platform win-64 posix\npixi add --build --platform osx-64 clang\n

    This results in the following.

    pixi.toml
    [target.win-64.host-dependencies]\nposix = \"1.0.0.*\"\n\n[target.osx-64.build-dependencies]\nclang = \"16.0.6.*\"\n
    "},{"location":"advanced/multi_platform_configuration/#activation","title":"Activation","text":"

    Pixi's vision is to enable completely cross-platform projects, but you often need to run tools that are not built by your project. Generated activation scripts often fall into this category: the default scripts on Unix are bash scripts, while on Windows they are bat files.

    To deal with this, you can define your activation scripts using the target definition.

    pixi.toml

    [activation]\nscripts = [\"setup.sh\", \"local_setup.bash\"]\n\n[target.win-64.activation]\nscripts = [\"setup.bat\", \"local_setup.bat\"]\n
    When this project is run on win-64, it will only execute the target scripts, not the scripts specified in the default activation.scripts.

    "},{"location":"design_proposals/multi_environment_proposal/","title":"Proposal Design: Multi Environment Support","text":""},{"location":"design_proposals/multi_environment_proposal/#objective","title":"Objective","text":"

    The aim is to introduce an environment set mechanism in the pixi package manager. This mechanism will enable clear, conflict-free management of dependencies tailored to specific environments, while also maintaining the integrity of fixed lockfiles.

    "},{"location":"design_proposals/multi_environment_proposal/#motivating-example","title":"Motivating Example","text":"

    There are multiple scenarios where multiple environments are useful.

    This prepares pixi for use in large projects with multiple use-cases, multiple developers and different CI needs.

    "},{"location":"design_proposals/multi_environment_proposal/#design-considerations","title":"Design Considerations","text":"
    1. User-friendliness: Pixi is a user-focused tool that goes beyond developers. The feature should have good error reporting and helpful documentation from the start.
    2. Keep it simple: Not understanding the multiple environments feature shouldn't prevent a user from using pixi. The feature should be \"invisible\" to non-multi-env use-cases.
    3. No Automatic Combinatorials: To keep the dependency resolution process manageable, the solution should avoid a combinatorial explosion of dependency sets. Environments are user-defined rather than automatically inferred by testing a matrix of the features.
    4. Single environment Activation: The design should allow only one environment to be active at any given time, simplifying the resolution process and preventing conflicts.
    5. Fixed Lockfiles: It's crucial to preserve fixed lockfiles for consistency and predictability. Solutions must ensure reliability not just for authors but also for end-users, particularly at the time of lockfile creation.
    "},{"location":"design_proposals/multi_environment_proposal/#proposed-solution","title":"Proposed Solution","text":"

    Important

    This is a proposal, not a final design. The proposal is open for discussion and will be updated based on the feedback.

    "},{"location":"design_proposals/multi_environment_proposal/#feature-environment-set-definitions","title":"Feature & Environment Set Definitions","text":"

    Introduce environment sets into the pixi.toml: these describe environments based on features. Introduce features into the pixi.toml that can describe parts of environments. As an environment goes beyond just dependencies, the features should be described with the following fields:

    Default features
    [dependencies] # short for [feature.default.dependencies]\npython = \"*\"\nnumpy = \"==2.3\"\n\n[pypi-dependencies] # short for [feature.default.pypi-dependencies]\npandas = \"*\"\n\n[system-requirements] # short for [feature.default.system-requirements]\nlibc = \"2.33\"\n\n[activation] # short for [feature.default.activation]\nscripts = [\"activate.sh\"]\n
    Different dependencies per feature
    [feature.py39.dependencies]\npython = \"~=3.9.0\"\n[feature.py310.dependencies]\npython = \"~=3.10.0\"\n[feature.test.dependencies]\npytest = \"*\"\n
    Full set of environment modification in one feature
[feature.cuda]\ndependencies = {cuda = \"x.y.z\", cudnn = \"12.0\"}\npypi-dependencies = {torch = \"1.9.0\"}\nplatforms = [\"linux-64\", \"osx-arm64\"]\nactivation = {scripts = [\"cuda_activation.sh\"]}\nsystem-requirements = {cuda = \"12\"}\n# Channels concatenate using a priority instead of overwriting, so the default channels are still used.\n# The priority controls the concatenation; the default priority is 0, and the default channels are used last.\n# The highest priority comes first.\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}] # Results in: [\"nvidia\", \"conda-forge\", \"pytorch\"] when the default is `conda-forge`\ntasks = { warmup = \"python warmup.py\" }\ntarget.osx-arm64 = {dependencies = {mlx = \"x.y.z\"}}\n
    Define tasks as defaults of an environment
    [feature.test.tasks]\ntest = \"pytest\"\n\n[environments]\ntest = [\"test\"]\n\n# `pixi run test` == `pixi run --environments test test`\n

    The environment definition should contain the following fields:

    Creating environments from features
    [environments]\n# implicit: default = [\"default\"]\ndefault = [\"py39\"] # implicit: default = [\"py39\", \"default\"]\npy310 = [\"py310\"] # implicit: py310 = [\"py310\", \"default\"]\ntest = [\"test\"] # implicit: test = [\"test\", \"default\"]\ntest39 = [\"test\", \"py39\"] # implicit: test39 = [\"test\", \"py39\", \"default\"]\n
    Testing a production environment with additional dependencies
    [environments]\n# Creating a `prod` environment which is the minimal set of dependencies used for production.\nprod = {features = [\"py39\"], solve-group = \"prod\"}\n# Creating a `test_prod` environment which is the `prod` environment plus the `test` feature.\ntest_prod = {features = [\"py39\", \"test\"], solve-group = \"prod\"}\n# Using the `solve-group` to solve the `prod` and `test_prod` environments together\n# Which makes sure the tested environment has the same version of the dependencies as the production environment.\n
    Creating environments without a default environment
    [dependencies]\n# Keep empty or undefined to create an empty environment.\n\n[feature.base.dependencies]\npython = \"*\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n\n[environments]\n# Create a custom default\ndefault = [\"base\"]\n# Create a custom environment which only has the `lint` feature as the default feature is empty.\nlint = [\"lint\"]\n
    "},{"location":"design_proposals/multi_environment_proposal/#lockfile-structure","title":"Lockfile Structure","text":"

Within the pixi.lock file, a package may now include an additional environments field, specifying the environments to which it belongs. To avoid duplication, a package's environments field may list multiple environments, keeping the lockfile minimal in size.

    - platform: linux-64\n  name: pre-commit\n  version: 3.3.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n  ...:\n- platform: linux-64\n  name: python\n  version: 3.9.3\n  category: main\n  environments:\n    - dev\n    - test\n    - lint\n    - py39\n    - default\n  ...:\n
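The deduplication described above can be sketched as a simple inversion: instead of one locked entry per (package, environment) pair, each package record carries every environment that uses it. The data below mirrors the example entries and is illustrative only, not the real lockfile format:

```python
# Illustrative sketch of the lockfile deduplication: start from the set of
# packages each environment resolves to, then invert it so every package is
# stored once, tagged with all environments that contain it.
per_env = {
    "dev": ["pre-commit", "python"],
    "test": ["pre-commit", "python"],
    "lint": ["pre-commit", "python"],
    "py39": ["python"],
    "default": ["python"],
}

# Invert: package -> list of environments (what the lockfile stores).
packages = {}
for env, pkgs in per_env.items():
    for pkg in pkgs:
        packages.setdefault(pkg, []).append(env)

print(sorted(packages["pre-commit"]))  # ['dev', 'lint', 'test']
print(sorted(packages["python"]))      # ['default', 'dev', 'lint', 'py39', 'test']
```

Each package appears once with its `environments` list, however many environments share it.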

    "},{"location":"design_proposals/multi_environment_proposal/#user-interface-environment-activation","title":"User Interface Environment Activation","text":"

Users can manually activate the desired environment via the command line or configuration. This approach guarantees a conflict-free environment by allowing only one feature set to be active at a time. For the user, the CLI would look like this:

    Default behavior
    pixi run python\n# Runs python in the `default` environment\n
Activating a specific environment
    pixi run -e test pytest\npixi run --environment test pytest\n# Runs `pytest` in the `test` environment\n

    Activating a shell in an environment

    pixi shell -e cuda\npixi shell --environment cuda\n# Starts a shell in the `cuda` environment\n
    Running any command in an environment
pixi run -e test any_command\n# Runs any_command in the `test` environment; it doesn't have to be predefined as a task.\n

    Interactive selection of environments if task is in multiple environments
    # In the scenario where test is a task in multiple environments, interactive selection should be used.\npixi run test\n# Which env?\n# 1. test\n# 2. test39\n
    "},{"location":"design_proposals/multi_environment_proposal/#important-links","title":"Important links","text":""},{"location":"design_proposals/multi_environment_proposal/#real-world-example-use-cases","title":"Real world example use cases","text":"Polarify test setup

In polarify they want to test multiple Python versions combined with multiple versions of polars. This is currently done using a matrix in GitHub Actions, which can be replaced by multiple environments.

    pixi.toml
    [project]\nname = \"polarify\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"linux-64\", \"osx-arm64\", \"osx-64\", \"win-64\"]\n\n[tasks]\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\n\n[dependencies]\npython = \">=3.9\"\npip = \"*\"\npolars = \">=0.14.24,<0.21\"\n\n[feature.py39.dependencies]\npython = \"3.9.*\"\n[feature.py310.dependencies]\npython = \"3.10.*\"\n[feature.py311.dependencies]\npython = \"3.11.*\"\n[feature.py312.dependencies]\npython = \"3.12.*\"\n[feature.pl017.dependencies]\npolars = \"0.17.*\"\n[feature.pl018.dependencies]\npolars = \"0.18.*\"\n[feature.pl019.dependencies]\npolars = \"0.19.*\"\n[feature.pl020.dependencies]\npolars = \"0.20.*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-emoji = \"*\"\nhypothesis = \"*\"\n[feature.test.tasks]\ntest = \"pytest\"\n\n[feature.lint.dependencies]\npre-commit = \"*\"\n[feature.lint.tasks]\nlint = \"pre-commit run --all\"\n\n[environments]\npl017 = [\"pl017\", \"py39\", \"test\"]\npl018 = [\"pl018\", \"py39\", \"test\"]\npl019 = [\"pl019\", \"py39\", \"test\"]\npl020 = [\"pl020\", \"py39\", \"test\"]\npy39 = [\"py39\", \"test\"]\npy310 = [\"py310\", \"test\"]\npy311 = [\"py311\", \"test\"]\npy312 = [\"py312\", \"test\"]\n
    .github/workflows/test.yml
    jobs:\n  tests-per-env:\n    runs-on: ubuntu-latest\n    strategy:\n      matrix:\n        environment: [py311, py312]\n    steps:\n    - uses: actions/checkout@v4\n      - uses: prefix-dev/setup-pixi@v0.5.1\n        with:\n          environments: ${{ matrix.environment }}\n      - name: Run tasks\n        run: |\n          pixi run --environment ${{ matrix.environment }} test\n  tests-with-multiple-envs:\n    runs-on: ubuntu-latest\n    steps:\n    - uses: actions/checkout@v4\n    - uses: prefix-dev/setup-pixi@v0.5.1\n      with:\n       environments: pl017 pl018\n    - run: |\n        pixi run -e pl017 test\n        pixi run -e pl018 test\n
    Test vs Production example

This is an example of a project that has a test feature and a prod environment. The prod environment is a production environment that contains the run dependencies. The test feature is a set of dependencies and tasks that we want to put on top of the previously solved prod environment. This is a common use case where we want to test the production environment with additional dependencies.

    pixi.toml

    [project]\nname = \"my-app\"\n# ...\nchannels = [\"conda-forge\"]\nplatforms = [\"osx-arm64\", \"linux-64\"]\n\n[tasks]\npostinstall-e = \"pip install --no-build-isolation --no-deps --disable-pip-version-check -e .\"\npostinstall = \"pip install --no-build-isolation --no-deps --disable-pip-version-check .\"\ndev = \"uvicorn my_app.app:main --reload\"\nserve = \"uvicorn my_app.app:main\"\n\n[dependencies]\npython = \">=3.12\"\npip = \"*\"\npydantic = \">=2\"\nfastapi = \">=0.105.0\"\nsqlalchemy = \">=2,<3\"\nuvicorn = \"*\"\naiofiles = \"*\"\n\n[feature.test.dependencies]\npytest = \"*\"\npytest-md = \"*\"\npytest-asyncio = \"*\"\n[feature.test.tasks]\ntest = \"pytest --md=report.md\"\n\n[environments]\n# both default and prod will have exactly the same dependency versions when they share a dependency\ndefault = {features = [\"test\"], solve-group = \"prod-group\"}\nprod = {features = [], solve-group = \"prod-group\"}\n
In CI, you would run the following commands:
    pixi run postinstall-e && pixi run test\n
    Locally you would run the following command:
    pixi run postinstall-e && pixi run dev\n

Then, in a Dockerfile, you would run the following command:

Dockerfile

    FROM ghcr.io/prefix-dev/pixi:latest # this doesn't exist yet\nWORKDIR /app\nCOPY . .\nRUN pixi run --environment prod postinstall\nEXPOSE 8080\nCMD [\"/usr/local/bin/pixi\", \"run\", \"--environment\", \"prod\", \"serve\"]\n

    Multiple machines from one project

This is an example of an ML project that should be executable on machines that support cuda or mlx. It should also be executable on machines that support neither; we use the cpu feature for this.

pixi.toml

    [project]\nname = \"my-ml-project\"\ndescription = \"A project that does ML stuff\"\nauthors = [\"Your Name <your.name@gmail.com>\"]\nchannels = [\"conda-forge\", \"pytorch\"]\n# All platforms that are supported by the project as the features will take the intersection of the platforms defined there.\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[tasks]\ntrain-model = \"python train.py\"\nevaluate-model = \"python test.py\"\n\n[dependencies]\npython = \"3.11.*\"\npytorch = {version = \">=2.0.1\", channel = \"pytorch\"}\ntorchvision = {version = \">=0.15\", channel = \"pytorch\"}\npolars = \">=0.20,<0.21\"\nmatplotlib-base = \">=3.8.2,<3.9\"\nipykernel = \">=6.28.0,<6.29\"\n\n[feature.cuda]\nplatforms = [\"win-64\", \"linux-64\"]\nchannels = [\"nvidia\", {channel = \"pytorch\", priority = \"-1\"}]\nsystem-requirements = {cuda = \"12.1\"}\n\n[feature.cuda.tasks]\ntrain-model = \"python train.py --cuda\"\nevaluate-model = \"python test.py --cuda\"\n\n[feature.cuda.dependencies]\npytorch-cuda = {version = \"12.1.*\", channel = \"pytorch\"}\n\n[feature.mlx]\nplatforms = [\"osx-arm64\"]\n\n[feature.mlx.tasks]\ntrain-model = \"python train.py --mlx\"\nevaluate-model = \"python test.py --mlx\"\n\n[feature.mlx.dependencies]\nmlx = \">=0.5.0,<0.6.0\"\n\n[feature.cpu]\nplatforms = [\"win-64\", \"linux-64\", \"osx-64\", \"osx-arm64\"]\n\n[environments]\ncuda = [\"cuda\"]\nmlx = [\"mlx\"]\ndefault = [\"cpu\"]\n

    Running the project on a cuda machine
    pixi run train-model --environment cuda\n# will execute `python train.py --cuda`\n# fails if not on linux-64 or win-64 with cuda 12.1\n
    Running the project with mlx
    pixi run train-model --environment mlx\n# will execute `python train.py --mlx`\n# fails if not on osx-arm64\n
    Running the project on a machine without cuda or mlx
    pixi run train-model\n
    "},{"location":"examples/cpp-sdl/","title":"SDL example","text":"

    The cpp-sdl example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/cpp-sdl\n

    Run the start command

    pixi run start\n

Using the depends_on feature, you only need to run the start task, but under the hood it runs the following tasks.

    # Configure the CMake project\npixi run configure\n\n# Build the executable\npixi run build\n\n# Start the build executable\npixi run start\n
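In pixi.toml, such a chain could be wired up roughly like this (a sketch; the exact commands are assumptions, only the depends_on links between the tasks matter):

```toml
[tasks]
# Hypothetical commands for illustration; the real example invokes cmake/ninja.
configure = "cmake -G Ninja -S . -B .build"
build = { cmd = "ninja -C .build", depends_on = ["configure"] }
start = { cmd = ".build/bin/sdl_example", depends_on = ["build"] }
```

Running `pixi run start` then executes configure and build first, in dependency order.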
    "},{"location":"examples/opencv/","title":"Opencv example","text":"

    The opencv example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/opencv\n
    "},{"location":"examples/opencv/#face-detection","title":"Face detection","text":"

    Run the start command to start the face detection algorithm.

    pixi run start\n

    The screen that starts should look like this:

Check out the webcam_capture.py to see how we detect a face.

    "},{"location":"examples/opencv/#camera-calibration","title":"Camera Calibration","text":"

    Next to face recognition, a camera calibration example is also included.

    You'll need a checkerboard for this to work. Print this:

    Then run

    pixi run calibrate\n

To take a picture for calibration, press SPACE. Do this approximately 10 times with the checkerboard in view of the camera.

After that, press ESC, which will start the calibration.

    When the calibration is done, the camera will be used again to find the distance to the checkerboard.

    "},{"location":"examples/ros2-nav2/","title":"Navigation 2 example","text":"

    The nav2 example is located in the pixi repository.

    git clone https://github.com/prefix-dev/pixi.git\n

    Move to the example folder

    cd pixi/examples/ros2-nav2\n

    Run the start command

    pixi run start\n
    "},{"location":"ide_integration/pycharm/","title":"PyCharm Integration","text":"

    You can use PyCharm with pixi environments by using the conda shim provided by the pixi-pycharm package.

    Windows support

    Windows is currently not supported, see pavelzw/pixi-pycharm #5. Only Linux and macOS are supported.

    "},{"location":"ide_integration/pycharm/#how-to-use","title":"How to use","text":"

    To get started, add pixi-pycharm to your pixi project.

    pixi add pixi-pycharm\n

    This will ensure that the conda shim is installed in your project's environment.

    could not determine any available versions for pixi-pycharm on win-64

    If you get the error could not determine any available versions for pixi-pycharm on win-64 when running pixi add pixi-pycharm (even when you're not on Windows), this is because the package is not available on Windows and pixi tries to solve the environment for all platforms. If you still want to use it in your pixi project (and are on Linux/macOS), you can add the following to your pixi.toml:

    [target.unix.dependencies]\npixi-pycharm = \"*\"\n

    This will tell pixi to only use this dependency on unix platforms.
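Equivalently, you could scope the dependency to specific platforms instead of the unix group; a sketch using pixi's per-platform targets (the platform list here is an example — use the ones your project actually targets):

```toml
[target.linux-64.dependencies]
pixi-pycharm = "*"

[target.osx-arm64.dependencies]
pixi-pycharm = "*"
```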

With pixi-pycharm installed, you can now configure PyCharm to use your pixi environments. Go to the Add Python Interpreter dialog (bottom right corner of the PyCharm window) and select Conda Environment. Set Conda Executable to the full path of the conda file in your pixi environment. You can get the path using the following command:

    pixi run 'echo $CONDA_PREFIX/libexec/conda'\n

    This is an executable that tricks PyCharm into thinking it's the proper conda executable. Under the hood it redirects all calls to the corresponding pixi equivalent.

    Use the conda shim from this pixi project

    Please make sure that this is the conda shim from this pixi project and not another one. If you use multiple pixi projects, you might have to adjust the path accordingly as PyCharm remembers the path to the conda executable.

    Having selected the environment, PyCharm will now use the Python interpreter from your pixi environment.

    PyCharm should now be able to show you the installed packages as well.

    You can now run your programs and tests as usual.

    "},{"location":"ide_integration/pycharm/#multiple-environments","title":"Multiple environments","text":"

If your project uses multiple environments to test different Python versions or dependencies, you can add multiple environments to PyCharm by specifying Use existing environment in the Add Python Interpreter dialog.

    You can then specify the corresponding environment in the bottom right corner of the PyCharm window.

    "},{"location":"ide_integration/pycharm/#multiple-pixi-projects","title":"Multiple pixi projects","text":"

When using multiple pixi projects, remember to select the correct Conda Executable for each project as mentioned above. It might also happen that you have multiple environments with the same name.

    It is recommended to rename the environments to something unique.

    "},{"location":"ide_integration/pycharm/#debugging","title":"Debugging","text":"

    Logs are written to ~/.cache/pixi-pycharm.log. You can use them to debug problems. Please attach the logs when filing a bug report.

"}]} \ No newline at end of file