Commit

Merge pull request #1638 from LLNL/v2024.02.2-RC
V2024.02.2 RC merge to main
rhornung67 authored May 6, 2024
2 parents 3ada095 + b33995f commit 593f756
Showing 74 changed files with 5,284 additions and 2,033 deletions.
6 changes: 4 additions & 2 deletions .gitlab-ci.yml
@@ -75,7 +75,7 @@ stages:
include:
- local: '.gitlab/custom-jobs-and-variables.yml'
- project: 'radiuss/radiuss-shared-ci'
- ref: 'v2023.12.3'
+ ref: 'v2024.04.0'
file: 'pipelines/${CI_MACHINE}.yml'
- artifact: '${CI_MACHINE}-jobs.yml'
job: 'generate-job-lists'
@@ -100,9 +100,11 @@ trigger-rajaperf:
strategy: depend

include:
+ - project: 'lc-templates/id_tokens'
+   file: 'id_tokens.yml'
# [Optional] checks preliminary to running the actual CI test
- project: 'radiuss/radiuss-shared-ci'
- ref: 'v2023.12.3'
+ ref: 'v2024.04.0'
file: 'utilities/preliminary-ignore-draft-pr.yml'
# pipelines subscribed by the project
- local: '.gitlab/subscribed-pipelines.yml'
16 changes: 8 additions & 8 deletions .gitlab/custom-jobs-and-variables.yml
@@ -19,17 +19,17 @@ variables:
# Note: We repeat the reservation, necessary when jobs are manually re-triggered.
RUBY_JOB_ALLOC: "--reservation=ci --nodes=1"
# Project specific variants for ruby
- PROJECT_RUBY_VARIANTS: "~shared +openmp +tests"
+ PROJECT_RUBY_VARIANTS: "~shared +openmp +vectorization +tests"
# Project specific deps for ruby
PROJECT_RUBY_DEPS: ""

# Poodle
# Arguments for top level allocation
- POODLE_SHARED_ALLOC: "--exclusive --time=60 --nodes=1"
+ POODLE_SHARED_ALLOC: "--exclusive --time=120 --nodes=1"
# Arguments for job level allocation
POODLE_JOB_ALLOC: "--nodes=1"
# Project specific variants for poodle
- PROJECT_POODLE_VARIANTS: "~shared +openmp +tests"
+ PROJECT_POODLE_VARIANTS: "~shared +openmp +vectorization +tests"
# Project specific deps for poodle
PROJECT_POODLE_DEPS: ""

@@ -39,26 +39,26 @@ variables:
# Arguments for job level allocation
CORONA_JOB_ALLOC: "--nodes=1 --begin-time=+5s"
# Project specific variants for corona
- PROJECT_CORONA_VARIANTS: "~shared ~openmp +tests"
+ PROJECT_CORONA_VARIANTS: "~shared ~openmp +vectorization +tests"
# Project specific deps for corona
PROJECT_CORONA_DEPS: "^blt@develop "

# Tioga
# Arguments for top level allocation
- TIOGA_SHARED_ALLOC: "--exclusive --time-limit=60m --nodes=1 -o per-resource.count=2"
+ TIOGA_SHARED_ALLOC: "--exclusive --queue=pci --time-limit=60m --nodes=1 -o per-resource.count=2"
# Arguments for job level allocation
TIOGA_JOB_ALLOC: "--nodes=1 --begin-time=+5s"
# Project specific variants for corona
- PROJECT_TIOGA_VARIANTS: "~shared ~openmp +tests"
+ PROJECT_TIOGA_VARIANTS: "~shared +openmp +vectorization +tests"
# Project specific deps for corona
PROJECT_TIOGA_DEPS: "^blt@develop "

# Lassen and Butte use a different job scheduler (spectrum lsf) that does not
# allow pre-allocation the same way slurm does.
# Arguments for job level allocation
- LASSEN_JOB_ALLOC: "1 -W 30 -q pci"
+ LASSEN_JOB_ALLOC: "1 -W 40 -q pci"
# Project specific variants for lassen
- PROJECT_LASSEN_VARIANTS: "~shared +openmp +tests cuda_arch=70"
+ PROJECT_LASSEN_VARIANTS: "~shared +openmp +vectorization +tests cuda_arch=70"
# Project specific deps for lassen
PROJECT_LASSEN_DEPS: "^blt@develop "

2 changes: 1 addition & 1 deletion CMakeLists.txt
@@ -16,7 +16,7 @@ include(CMakeDependentOption)
# Set version number
set(RAJA_VERSION_MAJOR 2024)
set(RAJA_VERSION_MINOR 02)
- set(RAJA_VERSION_PATCHLEVEL 1)
+ set(RAJA_VERSION_PATCHLEVEL 2)

if (RAJA_LOADED AND (NOT RAJA_LOADED STREQUAL "${RAJA_VERSION_MAJOR}.${RAJA_VERSION_MINOR}.${RAJA_VERSION_PATCHLEVEL}"))
message(FATAL_ERROR "You are mixing RAJA versions. Loaded is ${RAJA_LOADED}, expected ${RAJA_VERSION_MAJOR}.${RAJA_VERSION_MINOR}.${RAJA_VERSION_PATCHLEVEL}")
33 changes: 33 additions & 0 deletions RELEASE_NOTES.md
@@ -20,6 +20,39 @@ Notable changes include:
* Bug fixes/improvements:


Version 2024.02.2 -- Release date 2024-05-08
============================================

This release contains a bugfix and new execution policies that improve
performance for GPU kernels with reductions.

Notable changes include:

* New features / API changes:
* New GPU execution policies for CUDA and HIP added which provide
improved performance for GPU kernels with reductions. Please see the
RAJA User Guide for more information. Short summary:
* Option added to change max grid size in policies that use the
occupancy calculator.
* Policies added to run with max occupancy, with a fraction of the
max occupancy, and with a "concretizer", which allows a user to
determine how to run based on what the occupancy calculator
determines about a kernel.
* Additional options to tune kernels containing reductions, such as
* an option to initialize data on host for reductions that use
atomic operations
* an option to avoid device scope memory fences
* Changed the SYCL thread index ordering in RAJA::launch to follow
the SYCL "row-major" convention. Please see the RAJA User Guide
for more information.

* Build changes/improvements:
* NONE.

* Bug fixes/improvements:
* Fixed issue in bump-style allocator used internally in RAJA::launch.


Version 2024.02.1 -- Release date 2024-04-03
============================================

21 changes: 21 additions & 0 deletions docs/Licenses/rocprim-license.txt
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2017-2024 Advanced Micro Devices, Inc. All rights reserved.

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
2 changes: 1 addition & 1 deletion docs/conf.py
@@ -88,7 +88,7 @@
# The short X.Y version.
version = u'2024.02'
# The full version, including alpha/beta/rc tags.
- release = u'2024.02.1'
+ release = u'2024.02.2'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
23 changes: 23 additions & 0 deletions docs/sphinx/user_guide/cook_book.rst
@@ -0,0 +1,23 @@
.. ##
.. ## Copyright (c) 2016-24, Lawrence Livermore National Security, LLC
.. ## and RAJA project contributors. See the RAJA/LICENSE file
.. ## for details.
.. ##
.. ## SPDX-License-Identifier: (BSD-3-Clause)
.. ##
.. _cook-book-label:

************************
RAJA Cook Book
************************

The following sections show common use case patterns and the recommended
RAJA features and policies to use with them. They are intended to provide
users with complete usage examples beyond what can be found in other parts
of the RAJA User Guide. In particular, the examples and discussion provide
guidance on RAJA execution policy selection to improve the performance of
user application codes.

.. toctree::
:maxdepth: 2

cook_book/reduction

110 changes: 110 additions & 0 deletions docs/sphinx/user_guide/cook_book/reduction.rst
@@ -0,0 +1,110 @@
.. ##
.. ## Copyright (c) 2016-24, Lawrence Livermore National Security, LLC
.. ## and other RAJA project contributors. See the RAJA/LICENSE file
.. ## for details.
.. ##
.. ## SPDX-License-Identifier: (BSD-3-Clause)
.. ##
.. _cook-book-reductions-label:

=======================
Cooking with Reductions
=======================

Please see the following section for an overview discussion of RAJA reductions:

* :ref:`feat-reductions-label`.


----------------------------
Reductions with RAJA::forall
----------------------------

Here is the setup for a simple reduction example::

const int N = 1000;

int vec[N];

for (int i = 0; i < N; ++i) {

vec[i] = 1;

}

Here a simple sum reduction is performed in a for loop::

int vsum = 0;

// Run a kernel using the reduction objects
for (int i = 0; i < N; ++i) {

vsum += vec[i];

}

These operations yield the following value:

* ``vsum == 1000``

RAJA uses policy types to specify how an operation is implemented.

The forall *execution policy* specifies how the loop is run by the
``RAJA::forall`` method. The following discussion includes examples of
several other RAJA execution policies that could be applied. For example,
``RAJA::seq_exec`` runs a C-style for loop sequentially on a CPU, while
``RAJA::cuda_exec_with_reduce<256>`` runs the loop as a CUDA GPU kernel
with 256 threads per block and other CUDA kernel launch parameters, such
as the number of blocks, chosen for good performance with reducers.::

using exec_policy = RAJA::seq_exec;
// using exec_policy = RAJA::omp_parallel_for_exec;
// using exec_policy = RAJA::omp_target_parallel_for_exec<256>;
// using exec_policy = RAJA::cuda_exec_with_reduce<256>;
// using exec_policy = RAJA::hip_exec_with_reduce<256>;
// using exec_policy = RAJA::sycl_exec<256>;

The reduction policy specifies how the reduction is done and must match the
execution policy. For example, ``RAJA::seq_reduce`` does a sequential
reduction and can only be used with sequential execution policies. The
``RAJA::cuda_reduce_atomic`` policy uses atomics, if possible with the given
data type, and can only be used with CUDA execution policies. The same
pattern holds for other RAJA back-ends, such as HIP and OpenMP. Here are
example RAJA reduction policies whose names indicate which execution
policies they work with::

using reduce_policy = RAJA::seq_reduce;
// using reduce_policy = RAJA::omp_reduce;
// using reduce_policy = RAJA::omp_target_reduce;
// using reduce_policy = RAJA::cuda_reduce_atomic;
// using reduce_policy = RAJA::hip_reduce_atomic;
// using reduce_policy = RAJA::sycl_reduce;


Here a simple sum reduction is performed using RAJA::

RAJA::ReduceSum<reduce_policy, int> vsum(0);

RAJA::forall<exec_policy>( RAJA::RangeSegment(0, N),
[=](RAJA::Index_type i) {

vsum += vec[i];

});

These operations yield the following value:

* ``vsum.get() == 1000``


Another option for the execution policy when using the CUDA or HIP
back-ends is the corresponding base policy, which has a boolean parameter
to choose between the general-use ``cuda/hip_exec`` policy and the
``cuda/hip_exec_with_reduce`` policy::

// static constexpr bool with_reduce = ...;
// using exec_policy = RAJA::cuda_exec_base<with_reduce, 256>;
// using exec_policy = RAJA::hip_exec_base<with_reduce, 256>;

Similarly, another option for the reduction policy when using the CUDA or
HIP back-ends is the corresponding base policy, which has a boolean
parameter to choose between the atomic ``cuda/hip_reduce_atomic`` policy
and the non-atomic ``cuda/hip_reduce`` policy::

// static constexpr bool with_atomic = ...;
// using reduce_policy = RAJA::cuda_reduce_base<with_atomic>;
// using reduce_policy = RAJA::hip_reduce_base<with_atomic>;
