---
license: mit
dataset_info:
features:
- name: filename
dtype: string
- name: cuda_source
dtype: string
- name: cuda_host
dtype: string
- name: cuda_device
dtype: string
- name: hip_source
dtype: string
- name: hip_host
dtype: string
- name: hip_device
dtype: string
splits:
- name: train
num_bytes: 18979794237
num_examples: 70694
- name: stack
num_bytes: 6087813411
num_examples: 24170
- name: synth
num_bytes: 11766271412
num_examples: 40591
- name: bench
num_bytes: 3676152
num_examples: 40
download_size: 10789629544
dataset_size: 36837555212
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: stack
path: data/stack-*
- split: synth
path: data/synth-*
- split: bench
path: data/bench-*
---
# 💻 CASS: CUDA–AMD Assembly and Source Mapping
[CASS](https://huggingface.co/datasets/MBZUAI/CASS) is the **first large-scale dataset** for cross-architecture GPU transpilation, providing semantically aligned CUDA–HIP source pairs and their corresponding host/device assemblies for **NVIDIA (SASS)** and **AMD (RDNA3)** platforms. It enables research in:
* 🔁 Source-to-source translation (CUDA ↔ HIP)
* ⚙️ Assembly-level translation (SASS ↔ RDNA3)
* 🧠 LLM-guided GPU code transpilation
---
## 📚 Dataset Structure
Each sample contains the following fields:
| Field | Description |
| ------------- | ------------------------------------------ |
| `filename` | Sample ID or file name |
| `cuda_source` | Original CUDA source code |
| `cuda_host` | Compiled x86 host-side assembly from CUDA |
| `cuda_device` | Compiled SASS (NVIDIA GPU) device assembly |
| `hip_source` | Transpiled HIP source code (via HIPIFY) |
| `hip_host` | Compiled x86 host-side assembly from HIP |
| `hip_device` | Compiled RDNA3 (AMD GPU) device assembly |
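As a quick orientation, the schema above can be sketched with an illustrative in-memory record (the values below are placeholders, not real dataset content):

```python
# Illustrative sample mirroring the CASS per-sample schema.
# All values are placeholder strings, not actual dataset content.
sample = {
    "filename": "vector_add.cu",
    "cuda_source": "__global__ void add(float *a, float *b, float *c) { /* ... */ }",
    "cuda_host": "<x86 host-side assembly compiled from the CUDA source>",
    "cuda_device": "<SASS device assembly for the NVIDIA GPU>",
    "hip_source": "__global__ void add(float *a, float *b, float *c) { /* ... */ }",
    "hip_host": "<x86 host-side assembly compiled from the HIP source>",
    "hip_device": "<RDNA3 device assembly for the AMD GPU>",
}

# A CUDA -> HIP source-translation pair is simply the two source fields:
pair = (sample["cuda_source"], sample["hip_source"])
```

The same pairing works at the assembly level (`cuda_device` vs. `hip_device`) for SASS ↔ RDNA3 translation tasks.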
---
## 🔀 Dataset Splits
| Split | Description | # Examples |
| ------- | ----------------------------------------- | ---------- |
| `train` | Union of `synth`, `stack`, and OpenCL-derived samples | 70,694 |
| `synth` | LLM-synthesized CUDA programs | 40,591 |
| `stack` | Scraped and filtered CUDA from StackV2 | 24,170 |
| `bench` | 40 curated eval tasks from 16 GPU domains | 40 |
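Note that `train` is larger than `synth` and `stack` combined; assuming the three sources are disjoint, the remainder is the OpenCL-derived portion (a quick sanity check on the table above, not an official figure):

```python
# Split sizes from the table above
synth, stack, train = 40_591, 24_170, 70_694

# Assuming the three sources are disjoint, the OpenCL-derived portion
# accounts for the examples not covered by the synth and stack splits.
opencl = train - synth - stack
print(opencl)  # 5933
```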
---
## 📦 How to Load
```python
from datasets import load_dataset

# 🧠 Load the full dataset (default config with all splits)
cass = load_dataset("MBZUAI/cass", name="default")

# Access specific splits
train_data = cass["train"]  # union of stack, synth, and OpenCL-derived samples
stack_data = cass["stack"]
synth_data = cass["synth"]
bench_data = cass["bench"]
```
---
## 📈 Benchmark and Evaluation
The `bench` split contains 40 curated evaluation samples spanning 16 GPU domains, including:
* 🧪 Physics Simulation
* 📊 Data Structures
* 📸 Image Processing
* 🧮 Linear Algebra
All samples have been manually verified for semantic equivalence across CUDA and HIP and come with executable device/host binaries.
---
## 📄 License
Released under the **MIT license**.
---
## 🔗 Useful Links
* 🤗 Hugging Face Collection: [CASS on Hugging Face](https://huggingface.co/collections/MBZUAI/cass-6825b5bf7414503cf16f87b2)
* 📂 Code & Tools: [GitHub Repository](https://github.com/GustavoStahl/CASS)
* 📄 Paper: [CASS on arXiv](https://arxiv.org/abs/2505.16968)