Commit

chore: Drop OpenCL, and refactor Cargo.toml for workspace use (#1046)
* chore: Drop OpenCL, and refactor Cargo.toml for workspace use

- Refined the GitHub GPU CI workflow by removing the OpenCL Rust tests and keeping only the CUDA tests
- Moved package settings to the workspace level across the repository, applied in `lurk-macros`, `lurk-metrics`, and the root `Cargo.toml` (see the sketch after the change summary)
- Removed the `pasta-msm` and `grumpkin-msm` dependencies from `Cargo.toml`
- Reordered the features in `Cargo.toml`, removing `opencl` and trimming some dependencies from `portable` for better maintainability
- Cleaned up a minor formatting issue in `lurk-macros/src/lib.rs`

* chore: bump rust msrv & allow CI to find it
huitseeker authored Jan 12, 2024
1 parent 29ea314 commit 9f8231e
Showing 5 changed files with 37 additions and 63 deletions.
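For readers unfamiliar with Cargo workspace inheritance, the refactor below uses the standard pattern: shared package metadata is declared once under `[workspace.package]` in the root manifest, and each member crate opts in per key with `<key>.workspace = true`. A minimal sketch of that pattern, abridged from the diffs that follow (only a subset of the keys is shown):

```toml
# Root Cargo.toml — declare the shared metadata once for the whole workspace.
[workspace]
resolver = "2"
members = ["lurk-macros", "lurk-metrics"]

[workspace.package]
edition = "2021"
license = "MIT OR Apache-2.0"
repository = "https://github.com/lurk-lab/lurk-rs"
rust-version = "1.72"

# lurk-metrics/Cargo.toml — crate-specific keys stay local;
# shared keys are inherited field by field from [workspace.package].
[package]
name = "lurk-metrics"
version = "0.2.0"
description = "Metrics Sink for lurk"
edition.workspace = true
license.workspace = true
repository.workspace = true
rust-version.workspace = true
```

Crate-specific keys such as `name`, `version`, and `description` are still written out in each member; only keys present in `[workspace.package]` can be inherited.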
39 changes: 2 additions & 37 deletions .github/workflows/gpu-ci.yml
@@ -1,4 +1,4 @@
# Runs the test suite on a self-hosted GPU machine with CUDA and OpenCL enabled
# Runs the test suite on a self-hosted GPU machine with CUDA enabled
name: GPU tests

on:
@@ -66,39 +66,4 @@ jobs:
env:
EC_GPU_FRAMEWORK: cuda
run: |
cargo nextest run --profile ci --cargo-profile dev-ci --features cuda
opencl:
name: Rust tests on OpenCL
if: github.event_name != 'pull_request' || github.event.action == 'enqueued'
runs-on: [self-hosted, gpu-ci]
env:
NVIDIA_VISIBLE_DEVICES: all
NVIDIA_DRIVER_CAPABILITITES: compute,utility
steps:
- uses: actions/checkout@v4
with:
submodules: recursive
- uses: dtolnay/rust-toolchain@stable
- uses: taiki-e/install-action@nextest
- uses: Swatinem/rust-cache@v2
# Check we have access to the machine's Nvidia drivers
- run: nvidia-smi
# The `compute`/`sm` number corresponds to the Nvidia GPU architecture
# In this case, the self-hosted machine uses the Ampere architecture, but we want this to be configurable
# See https://arnon.dk/matching-sm-architectures-arch-and-gencode-for-various-nvidia-cards/
- name: Set env for CUDA compute
run: echo "CUDA_ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader | sed 's/\.//g')" >> $GITHUB_ENV
- name: set env for EC_GPU
run: echo 'EC_GPU_CUDA_NVCC_ARGS=--fatbin --gpu-architecture=sm_${{ env.CUDA_ARCH }} --generate-code=arch=compute_${{ env.CUDA_ARCH }},code=sm_${{ env.CUDA_ARCH }}' >> $GITHUB_ENV
- run: echo "${{ env.EC_GPU_CUDA_NVCC_ARGS}}"
# Check that CUDA is installed with a driver-compatible version
# This must also be compatible with the GPU architecture, see above link
- run: nvcc --version
# Check that we can access the OpenCL headers
- run: clinfo
- name: OpenCL tests
env:
EC_GPU_FRAMEWORK: opencl
run: |
cargo nextest run --profile ci --cargo-profile dev-ci --features cuda,opencl
cargo nextest run --profile ci --cargo-profile dev-ci --features cuda
40 changes: 23 additions & 17 deletions Cargo.toml
@@ -1,13 +1,13 @@
[package]
name = "lurk"
version = "0.3.1"
authors = ["Lurk Lab Engineering <[email protected]>"]
license = "MIT OR Apache-2.0"
description = "Turing-Complete Zero Knowledge"
edition = "2021"
repository = "https://github.com/lurk-lab/lurk-rs"
rust-version = "1.71.1"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
edition.workspace = true
repository.workspace = true
authors.workspace = true
homepage.workspace = true
license.workspace = true
rust-version = "1.72" # allows msrv verify to work in CI

[dependencies]
ahash = "0.8.6"
@@ -74,7 +74,6 @@ halo2curves = { version = "0.6.0", features = ["bits", "derive_serde"] }

[target.'cfg(not(target_arch = "wasm32"))'.dependencies]
memmap = { version = "0.5.10", package = "memmap2" }
pasta-msm = { workspace = true }
proptest = { workspace = true }
proptest-derive = { workspace = true }
rand = "0.8.5"
@@ -88,14 +87,6 @@ home = "0.5.5"
getrandom = { version = "0.2", features = ["js"] }
rustyline = { version = "13.0", features = ["derive"], default-features = false }

[features]
default = []
opencl = ["neptune/opencl", "nova/opencl"]
cuda = ["neptune/cuda", "nova/cuda"]
# compile without ISA extensions
portable = ["pasta-msm/portable", "nova/portable"]
flamegraph = ["pprof/flamegraph", "pprof/criterion"]

[dev-dependencies]
assert_cmd = "2.0.12"
cfg-if = "1.0.0"
@@ -113,6 +104,13 @@ tracing-test = "0.2"
[build-dependencies]
vergen = { version = "8", features = ["build", "git", "gitcl"] }

[features]
default = []
cuda = ["neptune/cuda", "nova/cuda"]
# compile without ISA extensions
portable = ["nova/portable"]
flamegraph = ["pprof/flamegraph", "pprof/criterion"]

[workspace]
resolver = "2"
members = ["lurk-macros", "lurk-metrics"]
@@ -133,8 +131,6 @@ nova = { git = "https://github.com/lurk-lab/arecibo", branch = "dev", package =
once_cell = "1.18.0"
pairing = { version = "0.23" }
pasta_curves = { git = "https://github.com/lurk-lab/pasta_curves", branch = "dev" }
pasta-msm = { git = "https://github.com/lurk-lab/pasta-msm", branch = "dev" }
grumpkin-msm = { git = "https://github.com/lurk-lab/grumpkin-msm", branch = "dev" }
proptest = "1.2.0"
proptest-derive = "0.3"
rand = "0.8"
@@ -147,6 +143,16 @@ tracing = "0.1.37"
tracing-texray = "0.2.0"
tracing-subscriber = "0.3.17"

# All workspace members should inherit these keys
# for package declarations.
[workspace.package]
authors = ["Lurk Lab Engineering <[email protected]>"]
edition = "2021"
homepage = "https://lurk-lang.org/"
license = "MIT OR Apache-2.0"
repository = "https://github.com/lurk-lab/lurk-rs"
rust-version = "1.72"

[[bin]]
name = "lurk"
path = "src/main.rs"
10 changes: 6 additions & 4 deletions lurk-macros/Cargo.toml
@@ -1,11 +1,13 @@
[package]
name = "lurk-macros"
version = "0.2.0"
authors = ["porcuquine <[email protected]>"]
license = "MIT OR Apache-2.0"
description = "Custom derives for `lurk`"
edition = "2021"
repository = "https://github.com/lurk-lab/lurk-rs"
edition.workspace = true
repository.workspace = true
authors.workspace = true
homepage.workspace = true
license.workspace = true
rust-version.workspace = true

[lib]
proc-macro = true
1 change: 0 additions & 1 deletion lurk-macros/src/lib.rs
@@ -11,7 +11,6 @@
//!
//! Although severely limited in the expressions it can represent, and still lacking quasiquoting,
//! the `lurk` macro allows embedding Lurk code in Rust source. See tests for examples.

use proc_macro::TokenStream;
use proc_macro2::Span;
use quote::{quote, ToTokens};
10 changes: 6 additions & 4 deletions lurk-metrics/Cargo.toml
@@ -1,11 +1,13 @@
[package]
name = "lurk-metrics"
authors = ["Lurk Lab <[email protected]>"]
version = "0.2.0"
edition = "2021"
license = "MIT OR Apache-2.0"
description = "Metrics Sink for lurk"
repository = "https://github.com/lurk-lab/lurk-rs"
edition.workspace = true
repository.workspace = true
authors.workspace = true
homepage.workspace = true
license.workspace = true
rust-version.workspace = true

[dependencies]
metrics = { workspace = true }

1 comment on commit 9f8231e

@github-actions (Contributor)

Benchmarks

Overview

This benchmark report shows the Fibonacci GPU benchmark, run on the following machine:
NVIDIA L4
Intel(R) Xeon(R) CPU @ 2.20GHz
125.78 GB RAM
Workflow run: https://github.com/lurk-lab/lurk-rs/actions/runs/7506793175

Benchmark Results

LEM Fibonacci Prove - rc = 100

|         | fib-ref=29ea314f590390d9a27bbcd8125adfabd318dcac | fib-ref=9f8231ec309611f76a4f5301df2e7f87464b1564 |
|---------|--------------------------------------------------|--------------------------------------------------|
| num-100 | 1.74 s (✅ 1.00x)                                 | 1.75 s (✅ 1.01x slower)                          |
| num-200 | 3.36 s (✅ 1.00x)                                 | 3.37 s (✅ 1.00x slower)                          |

LEM Fibonacci Prove - rc = 600

|         | fib-ref=29ea314f590390d9a27bbcd8125adfabd318dcac | fib-ref=9f8231ec309611f76a4f5301df2e7f87464b1564 |
|---------|--------------------------------------------------|--------------------------------------------------|
| num-100 | 2.03 s (✅ 1.00x)                                 | 2.03 s (✅ 1.00x slower)                          |
| num-200 | 3.39 s (✅ 1.00x)                                 | 3.40 s (✅ 1.00x slower)                          |
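(The factor in parentheses appears to be the ratio of each time to the fastest entry in its row, e.g. 1.75 s / 1.74 s ≈ 1.01, reported as "1.01x slower".)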

Made with criterion-table
