Commit
update readme for 2.3.100 (#212) (#214)
Chao1Han committed Sep 9, 2024
1 parent 2aa8a83 commit fd9e031
Showing 1 changed file with 9 additions and 7 deletions.
16 changes: 9 additions & 7 deletions README.md
@@ -29,13 +29,14 @@ The table below shows which functions are available for use with CPU / Intel dGPU
| `barrier` | ✓ | ✓ |


-## Pytorch API Align
+## PyTorch API Align

-We recommend using Anaconda as Python package management system. The followings are the corresponding branches (tags) of `oneccl_bindings_for_pytorch` and supported Pytorch.
+We recommend using Anaconda as the Python package management system. The following are the corresponding branches (tags) of `oneccl_bindings_for_pytorch` and the supported PyTorch versions.

| `torch` | `oneccl_bindings_for_pytorch` |
| :-------------------------------------------------------------: | :-----------------------------------------------------------------------: |
| `master` | `master` |
+| [v2.3.1](https://github.com/pytorch/pytorch/tree/v2.3.1) | [ccl_torch2.3.100](https://github.com/intel/torch-ccl/tree/ccl_torch2.3.100+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.400](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.400+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.300](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.300+xpu) |
| [v2.1.0](https://github.com/pytorch/pytorch/tree/v2.1.0) | [ccl_torch2.1.200](https://github.com/intel/torch-ccl/tree/ccl_torch2.1.200+xpu) |
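
The pairing rule in the table above (torch 2.3.x with binding 2.3.xxx, torch 2.1.x with binding 2.1.xxx) can be sketched as a minimal compatibility check. The helper below is illustrative only; the function name and parsing are assumptions, not part of the project:

```python
# Illustrative sketch: major.minor of torch must match major.minor of
# oneccl_bindings_for_pytorch, per the compatibility table above.
def compatible(torch_version: str, binding_version: str) -> bool:
    """True when the major.minor components of both versions agree."""
    t = torch_version.lstrip("v").split(".")[:2]
    b = binding_version.split(".")[:2]
    return t == b

print(compatible("v2.3.1", "2.3.100"))  # → True
print(compatible("v2.1.0", "2.3.100"))  # → False
```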
@@ -58,19 +59,19 @@ The usage details can be found in the README of corresponding branch.

- Python 3.8 or later and a C++17 compiler

-- PyTorch v2.1.0
+- PyTorch v2.3.1

## Build Option List

The following build options are supported in Intel® oneCCL Bindings for PyTorch*.

| Build Option | Default Value | Description |
| :---------------------------------- | :------------- | :-------------------------------------------------------------------------------------------------- |
-| COMPUTE_BACKEND | | Set oneCCL `COMPUTE_BACKEND`, set to `dpcpp` and use DPC++ compiler to enable support for Intel XPU |
+| COMPUTE_BACKEND | N/A | Set oneCCL `COMPUTE_BACKEND`; set it to `dpcpp` to build with the DPC++ compiler and enable Intel XPU support |
| USE_SYSTEM_ONECCL | OFF | Use oneCCL library in system |
| CCL_PACKAGE_NAME | oneccl-bind-pt | Set wheel name |
| ONECCL_BINDINGS_FOR_PYTORCH_BACKEND | cpu | Set backend |
-| CCL_SHA_VERSION | False | Add git head sha version to be wheel name |
+| CCL_SHA_VERSION | False | Append the git HEAD SHA to the wheel name |
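
Combining the options above, one way to sketch the resulting build invocation (in Python, for illustration only; `setup.py bdist_wheel` is an assumption about the build entry point, and the command is printed rather than run):

```python
# Sketch: compose a source-build command from the build options in the
# table above. The setup.py step itself requires the DPC++ toolchain.
env = {
    "COMPUTE_BACKEND": "dpcpp",           # DPC++ compiler, enables Intel XPU
    "USE_SYSTEM_ONECCL": "OFF",           # build the bundled oneCCL
    "CCL_PACKAGE_NAME": "oneccl-bind-pt", # default wheel name
}
cmd = " ".join(f"{k}={v}" for k, v in env.items()) + " python setup.py bdist_wheel"
print(cmd)
```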

## Launch Option List

@@ -92,7 +93,7 @@ The following launch options are supported in Intel® oneCCL Bindings for PyTorch*.

```bash
git clone https://github.com/intel/torch-ccl.git && cd torch-ccl
-git checkout ccl_torch2.1.400+xpu
+git checkout ccl_torch2.3.100+xpu
git submodule sync
git submodule update --init --recursive
```
@@ -116,6 +117,7 @@ Wheel files are available for the following Python versions.

| Extension Version | Python 3.6 | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 |
| :---------------: | :--------: | :--------: | :--------: | :--------: | :---------: | :---------: |
+| 2.3.100 | | | ✓ | ✓ | ✓ | ✓ |
| 2.1.400 | | | ✓ | ✓ | ✓ | ✓ |
| 2.1.300 | | | ✓ | ✓ | ✓ | ✓ |
| 2.1.200 | | | ✓ | ✓ | ✓ | ✓ |
@@ -128,7 +130,7 @@ Wheel files are available for the following Python versions.
| 1.10.0 | ✓ | ✓ | ✓ | ✓ | | |

```bash
-python -m pip install oneccl_bind_pt==2.1.400 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
+python -m pip install oneccl_bind_pt==2.3.100 --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/
```

**Note:** If you encounter connection issues, set a proxy or switch the URL to https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/.
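
Both index URLs appear in the note above; a tiny helper for switching between them might be sketched as follows (the function and its flag are hypothetical, not part of the project):

```python
# Hypothetical helper: pick the wheel index URL to pass to pip, falling
# back to the mirror from the note above when the default is unreachable.
DEFAULT_INDEX = "https://pytorch-extension.intel.com/release-whl/stable/xpu/us/"
MIRROR_INDEX = "https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/"

def index_url(use_mirror: bool = False) -> str:
    """Return the --extra-index-url value for pip."""
    return MIRROR_INDEX if use_mirror else DEFAULT_INDEX

print(index_url())      # default (us) index
print(index_url(True))  # cn mirror
```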
