NVIDIA AI Just Released cuda-oxide: An Experimental Rust-to-CUDA Compiler Backend that Compiles SIMT GPU Kernels Directly to PTX
NVIDIA AI researchers recently released cuda-oxide, an experimental compiler that lets developers write CUDA SIMT (Single Instruction, Multiple Threads) GPU kernels in standard Rust. The project compiles Rust directly to PTX (Parallel Thread Execution) — the assembly-like intermediate representation that CUDA uses to target NVIDIA GPUs — without requiring domain-specific languages, foreign function interface bindings, or C/C++ code.

Writing GPU kernels today typically means writing C++ against the CUDA programming model directly, or relying on Python-level abstractions like Triton that generate CUDA under the hood. The Rust GPU ecosystem already has projects attempting to bridge this gap:

- Rust-GPU targets SPIR-V for Vulkan/graphics compute.
- rust-cuda uses a rustc codegen backend targeting NVVM IR.
- CubeCL uses an embedded DSL with a JIT runtime that cross-compiles to CUDA/ROCm/WGPU.
- std::offload uses LLVM's implicit offload path.

cuda-oxide occupies a specific position in this space. Its stated design center is "bringing CUDA into Rust" — kernel authoring, device intrinsics, the SIMT execution model, and the CUDA programming model expressed natively in safe Rust — closer in spirit to writing a `__global__` function in C++ than to writing a generic Rust function that happens to run on a GPU. By contrast, its closest neighbor, rust-cuda, focuses on "bringing Rust to NVIDIA GPUs": Rust ergonomics like `async`/`.await`, parts of the standard library running on-device, and a Rust-first programming model that abstracts over CUDA concepts. The NVlabs team notes it has been coordinating with the rust-cuda maintainers and considers the two projects complementary.

At the core of cuda-oxide is a custom rustc codegen backend — the layer in the Rust compiler responsible for generating machine code.
Instead of emitting native CPU code, the rustc-codegen-cuda crate intercepts the compiler at the `CodegenBackend::codegen_crate()` entry point and runs a separate pipeline for device code:

Rust source → rustc frontend → rustc_public (Stable MIR) → dialect-mir → mem2reg → dialect-llvm → LLVM IR (.ll) → PTX (.ptx)

Some important elements of this pipeline:

- Why rustc_public? The raw internal MIR representation in rustc changes between nightly versions with no stability guarantees. cuda-oxide uses rustc_public — also known as Stable MIR — Rust's official versioned, stable API over the compiler's internals. This lets the backend read MIR without breaking on every nightly update.
- What is Pliron? The middle stages use Pliron, an MLIR-like IR framework written entirely in Rust. Choosing Pliron over upstream MLIR means the entire compiler builds with cargo — no C++ toolchain, no CMake, no TableGen. cuda-oxide defines three custom Pliron dialects: dialect-mir (modeling Rust MIR semantics — places, projections, rvalues, terminators), dialect-llvm (modeling LLVM IR with textual .ll export), and dialect-nvvm (NVIDIA GPU intrinsics such as thread indexing, barriers, and TMA).
- What does llc do? After the dialect-llvm printer serializes the IR into a textual .ll file, the external `llc` binary (the LLVM static compiler with the NVPTX backend) compiles it to PTX assembly. This is the one stage outside pure Rust. The resulting .ptx file is written next to the host binary — for example, target/debug/vecadd.ptx — and loaded by the CUDA driver at runtime.

You can observe each stage with `cargo oxide pipeline vecadd`, which prints the full trace from Rust MIR through each dialect down to the PTX output.

Host and device code live in the same .rs source file. `cargo oxide` sets `-Z codegen-backend=librustc_codegen_cuda.so`, which routes code generation through cuda-oxide's backend.
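For reference, the llc stage amounts to ordinary LLVM tooling. The following is an illustrative sketch of an equivalent manual invocation, assuming an `llc` build that includes the NVPTX backend; the flags are standard llc/NVPTX options and the `sm_90` target is an arbitrary example, not cuda-oxide's actual driver command:

```
# Compile the textual LLVM IR emitted by dialect-llvm into PTX assembly.
# -mcpu selects the SM architecture; sm_90 here is illustrative.
llc -march=nvptx64 -mcpu=sm_90 vecadd.ll -o vecadd.ptx
```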
The backend then scans compiled code for monomorphized functions whose names carry the reserved `cuda_oxide_kernel_<hash>_<name>` prefix — the namespace that the `#[kernel]` proc macro creates. Functions matching that prefix go through the cuda-oxide pipeline to produce PTX; all other host code is delegated to rustc's standard LLVM backend. The result of a single `cargo oxide build` is a host binary plus a `.ptx` file.

cargo oxide run vecadd
cargo oxide debug vecadd --tui   # debug with cuda-gdb

Device code from library dependencies is compiled lazily: the backend reads their Stable MIR from .rlib metadata on demand, only compiling functions a kernel actually calls.

cuda-oxide supports a meaningful subset of Rust in GPU kernel functions, marked with the `#[kernel]` attribute macro. This includes:

- Generic functions with monomorphization — `fn scale<T: Copy>(...)` is compiled to a concrete PTX kernel per type used at the call site.
- Closures with captures — closures passed from the host are scalarized and passed as PTX kernel parameters automatically.
- User-defined structs and enums — standard Rust data structures work inside kernels.
- Pattern matching — `match`, `if let`, and related constructs work in device code.
- Full GPU intrinsics — the cuda-device crate provides wrappers for thread indexing, warp operations (`shfl_sync`, `ballot_sync`, etc.), shared memory, barriers, TMA (Tensor Memory Accelerator), Thread Block Clusters, and scoped atomics (6 types × 3 scopes × 5 orderings).

One important GPU-specific compiler detail: rustc's JumpThreading MIR optimization — which duplicates function calls into both branches of an if-statement — is disabled for device code in cuda-oxide. On CPUs this is a safe optimization, but on GPUs it breaks barrier semantics: all threads in a block must converge at the same `bar.sync` instruction, and duplicating it across branches violates that requirement.
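Two of the mechanisms above — per-type monomorphization and prefix-based kernel selection — can be illustrated in plain host-side Rust, with no GPU or cuda-oxide toolchain required. The hash segments in the symbol names below are made up for illustration; only the `cuda_oxide_kernel_` prefix scheme comes from the project's documentation:

```rust
use std::any::type_name;

// Generic "kernel-like" function: each distinct `T` at a call site yields a
// separate monomorphized instance (in cuda-oxide, a separate PTX kernel).
fn scale<T: Copy + std::ops::Mul<Output = T>>(x: T, k: T) -> T {
    x * k
}

// Report which concrete instantiation a call site selected.
fn instance_of<T>(_: &T) -> &'static str {
    type_name::<T>()
}

// Mimic the backend's symbol scan: names carrying the reserved prefix go
// through the PTX pipeline; everything else stays with the host LLVM backend.
fn device_symbols<'a>(symbols: &[&'a str]) -> Vec<&'a str> {
    symbols
        .iter()
        .copied()
        .filter(|s| s.starts_with("cuda_oxide_kernel_"))
        .collect()
}

fn main() {
    // One generic fn, two concrete instantiations.
    assert_eq!(scale(2.0f32, 3.0), 6.0);
    assert_eq!(scale(4i64, 5), 20);
    println!("{} and {}", instance_of(&1.0f32), instance_of(&1i64));

    // Hypothetical post-monomorphization symbol table.
    let symbols = [
        "cuda_oxide_kernel_a1b2_vecadd", // device: goes to the PTX pipeline
        "cuda_oxide_kernel_c3d4_scale",  // device: goes to the PTX pipeline
        "main",                          // host: standard LLVM backend
    ];
    println!("{:?}", device_symbols(&symbols));
}
```

This is only an analogy for the symbol-routing step; the real backend reads Stable MIR rather than string tables, but the selection criterion is the same name prefix.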
Additionally, sync primitives are marked `convergent` in the emitted LLVM IR so that LLVM's optimization passes cannot move or duplicate them across control flow.

Key takeaways:

- cuda-oxide is a custom rustc codegen backend from NVlabs that compiles `#[kernel]`-annotated Rust functions to PTX through a Rust → rustc_public Stable MIR → Pliron IR → LLVM IR → PTX pipeline, all buildable with cargo.
- Host and device code coexist in a single .rs file, compiled with one `cargo oxide build` command; the output is a host binary plus a `.ptx` file placed next to it.
- The safety model has three documented tiers: Tier 1 (race-free by construction via `DisjointSlice<T>` + `ThreadIndex`), Tier 2 (scoped `unsafe` for shared memory, warp intrinsics, atomics), and Tier 3 (raw hardware intrinsics for TMA, WGMMA, tcgen05).
- `index_2d(stride)` is documented as currently unsound in the 0.x release.
- The gemm_sol example hits 868 TFLOPS on the B200 (58% of cuBLAS SoL) using a multi-phase GEMM pipeline with CLC and `cta_group::2`.
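To make the tier model concrete, here is a pseudocode-style sketch of a Tier 1 kernel. Only `#[kernel]`, `DisjointSlice<T>`, and `ThreadIndex` are named in the project's documentation; the import path and the `get_mut` accessor are hypothetical placeholders, not the crate's confirmed API:

```rust
use cuda_device::prelude::*; // hypothetical import path

#[kernel]
fn scale_in_place(xs: DisjointSlice<f32>, t: ThreadIndex) {
    // Tier 1: DisjointSlice + ThreadIndex hand each thread a non-overlapping
    // element, so this write is race-free with no `unsafe`.
    if let Some(x) = xs.get_mut(t) { // `get_mut` is an assumed accessor
        *x *= 2.0;
    }
    // Tier 2 (shared memory, warp intrinsics, scoped atomics) and Tier 3
    // (TMA, WGMMA, tcgen05) would require scoped `unsafe` blocks here.
}
```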