
Clean up -jit suffix in feature flags and modules #2705

Open · wants to merge 2 commits into main
Conversation

@laggui (Member) commented Jan 15, 2025

Checklist

  • Confirmed that the run-checks all script has been executed.

Changes

Feature flags:

  • "cuda-jit" -> "cuda"
  • "hip-jit" -> "hip"
  • "wgpu-spirv" -> "vulkan" (for burn and burn-core, which enables burn-wgpu/spirv)

Modules:

  • burn::backend::cuda_jit -> burn::backend::cuda
  • burn::backend::hip_jit -> burn::backend::hip

Backends:

  • CudaJit -> Cuda
  • HipJit -> Hip
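For a downstream crate that selects backends via Cargo features, the renames above imply a migration along these lines (a sketch; the exact feature names available depend on the released burn version):

```
[dependencies]
# Before this PR (hypothetical downstream Cargo.toml):
# burn = { version = "*", features = ["cuda-jit", "hip-jit", "wgpu-spirv"] }

# After this PR, the -jit suffix is dropped and wgpu-spirv becomes vulkan:
burn = { version = "*", features = ["cuda", "hip", "vulkan"] }
```

Imports follow the module renames accordingly, e.g. burn::backend::cuda_jit::Cuda becomes burn::backend::cuda::Cuda.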

@syl20bnr (Member) left a comment

Nice!
Tested on the AMD runner.

You can update the commented tests in burn-hip/src/lib.rs to match the CUDA ones. I did not execute them all; I just verified that there were no compilation issues. Keep them commented.

mod tests {
    use burn_jit::JitBackend;

    pub type TestRuntime = cubecl::hip::HipRuntime;
    pub use half::f16;

    burn_jit::testgen_all!([f16, f32], [i8, i16, i32, i64], [u8, u32]);
}

codecov bot commented Jan 15, 2025

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 83.20%. Comparing base (f630b3b) to head (5e63c00).

Additional details and impacted files
@@           Coverage Diff           @@
##             main    #2705   +/-   ##
=======================================
  Coverage   83.20%   83.20%           
=======================================
  Files         819      819           
  Lines      106814   106814           
=======================================
  Hits        88870    88870           
  Misses      17944    17944           


@nathanielsimard (Member) left a comment

I think we should create a type alias for vulkan.

pub type Vulkan = Wgpu<Compiler = Spirv>;

I would also keep a spirv feature flag in burn_wgpu, so that users can customize the compiler used for Wgpu without going through the type alias.

In addition, I don't think the Wgpu backend should have a default compiler. We could create a WebGPU type alias, like Vulkan, with Wgsl as its default compiler.

So, we can have a hierarchy of types, where we don't compromise on flexibility but make it clearer for Burn users.

New feature flags:

vulkan = ["burn_wgpu", "spirv"]
webgpu = ["burn_wgpu", "burn_wgpu/wgsl"]
wgpu = ["burn_wgpu"]

We have to make it clear that the Wgpu backend is decoupled from the compiler; users can create their own feature flags to choose the compiler, for instance to switch it depending on the platform. We could (and probably should) create a composed compiler that becomes the default Wgpu compiler, where each compiler feature flag adds a compiler option to be selected at runtime. We can't do that just yet, though, so users won't get the best performance by default for now, and it isn't clear how to implement it at the moment.

Also, having the option to activate just a single compiler will always be useful to reduce binary size and ease deployment.
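The alias hierarchy proposed above can be sketched with plain Rust default-free generics and type aliases. The names below (Compiler, compiler_name, the stand-in Wgsl and Spirv types) are illustrative, not Burn's actual API:

```rust
use std::marker::PhantomData;

// Stand-in compiler types; in Burn these would be the real WGSL and SPIR-V compilers.
struct Wgsl;
struct Spirv;

trait Compiler {
    const NAME: &'static str;
}
impl Compiler for Wgsl {
    const NAME: &'static str = "wgsl";
}
impl Compiler for Spirv {
    const NAME: &'static str = "spirv";
}

// A backend generic over its compiler, with no implicit default:
// the choice is made explicit through the aliases below.
struct Wgpu<C: Compiler> {
    _compiler: PhantomData<C>,
}

impl<C: Compiler> Wgpu<C> {
    fn compiler_name() -> &'static str {
        C::NAME
    }
}

// The proposed user-facing aliases: same backend type, different compiler.
type Vulkan = Wgpu<Spirv>;
type WebGpu = Wgpu<Wgsl>;

fn main() {
    println!("{}", Vulkan::compiler_name()); // prints "spirv"
    println!("{}", WebGpu::compiler_name()); // prints "wgsl"
}
```

This keeps full flexibility (any Compiler can still be plugged into Wgpu) while giving Burn users clearer, named entry points.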

3 participants