forked from ROCm/ROCm.github.io
params.json
{
"name": "ROCm, A New Era in Open GPU Computing",
"tagline": "Platform for GPU Enabled HPC and UltraScale Computing ",
"body": "####Welcome to the ROCm Platform\r\n\r\nWe are excited to present to you the ROCm the first Open Source HPC/Ultrascale class GPU computing platform which is also programming language independent. We are bringing the UNIX philosophy of choice, minimalist, modular software development to GPU Computing. The new ROCm foundation lets you choose or even develop your tools and language runtime needed to develop your application.\r\n\r\nROCm built for scale; it supports multi-GPU computing in server node and out of server node communication via RDMA. We also focused on simplifying the stack were RDMA Peer Sync support is built directly into the driver. \r\n\r\nROCm has a rich system runtime with the key features needed to support large scale application, compiler, and language runtime development:\r\n\r\n* At the core is the Heterogeneous System Architecture (\"HSA\") Runtime API\r\n * Multi-GPU Coarse-grain Shared Virtual Memory \r\n * Process Concurrency & Preemption\r\n * Large Memory Allocations \r\n * HSA Signals and Atomics\r\n * User Mode Queues and DMA\r\n* Standardized loader and Code Object Format\r\n * Dynamics and Offline Compilation Support\r\n* Peer to Peer Multi-GPU with RDMA Support\r\n* Profiler Trace and Event Collection API \r\n* Systems Management API and Tools\r\n\r\nWe are also delivering a rich open source llvm based compiler foundation with native GCN ISA code generation. This foundation will all you to develop commercial quality development tools and a framework to explore GPU computing language development. \r\n\r\n* LLVM Compiler Foundation \r\n * LLVM Compiler with GCN Native Compilation \r\n * Supports GCN Assembler and Disassembler\r\n * Fully Upstream in the LLVM source repository \r\n \r\n \r\nBuilding on this is rich system runtime is a set of GPU Enabled Programing Languages to allow to focus on your application idea in familiar application development environment. 
The ROCm platform initially is supporting C, C++ and Python based GPU enabled programming solutions. \r\n\r\nHeterogeneous Compute Compiler (HCC) supports C11 & C++ 11/14 with llvm code generation backend for AMD x64 and GCN ISA. The HCC Compiler has three language persona. \r\n\r\n* Single Source C++ 11/14 with Parallel Standard Template Library (STL)\r\n* HIP is C++ GPU Kernel Language with C-Style Language Runtime that to ease conversion of CUDA applications into portable C++ code. \r\n* OpenMP 3.1 for CPU based programs that can integrate HIP or C++ 11/14 based GPU acceleration \r\n\r\nOpenCL 1.2+ Compiler and Language Runtime release will be made available in the Fall, which will support the new native GCN ISA compiler and also integrated into the new capabilities in the ROCm system runtime. \r\n\r\nWith the explosion of use of Python in application programming and Data Analytics. We are bringing Continuum Analytics Anaconda with Numba Acceleration to ROCm. Numba helps you accelerate array-oriented and math-heavy Python codes. Kernels compiled with Numba also have direct access to NumPy arrays. \r\n\r\nTo give you deeper visibility into ROCm platform are developing the fundamental profiling and debugging tools( GDB Debugger, ROCm Profiler). \r\n\r\nTo accelerate the speed at which you can build your application, we are bringing a comprehensive set of math libraries ( BLAS, FFT, Sparse, RNG) and programming frameworks( RAJA, Kokkos, CHARM++, HPX) to ROCm as well. \r\n\r\nThe frontiers of where you take ROCm is just beginning, we look forward working with you to improve the platform to help you drive the exploration of your application domains. We know we have opened the door to unique heterogeneous computing applications and a new opportunity to explore what possible with heterogeneous computing. \r\n",
"note": "Don't delete this file! It's used internally to help with page regeneration."
}