ROCm AMD GPU Setup on Arch Linux

A quick‑start guide that works on Arch Linux for installing ROCm, configuring the environment, and getting PyTorch to see the GPU.
Feel free to copy this into your own notes or a public repo.

TL;DR

  1. sudo pacman -S hip-runtime-amd rocminfo
  2. Set the ROCm env‑vars (shown below).
  3. Symlink /usr/share/libdrm/amdgpu.ids to /opt/amdgpu/share/libdrm/amdgpu.ids.
  4. Install the ROCm PyTorch wheel.
  5. Verify with torch.cuda.is_available().

Links to referenced resources can be found at the end of this document.

1. Install ROCm runtime & tooling

sudo pacman -S --needed hip-runtime-amd rocminfo
  • hip-runtime-amd – the HIP runtime for AMD GPUs (the core ROCm runtime libraries).
  • rocminfo – diagnostics that list the devices and their capabilities.

Arch Wiki: https://wiki.archlinux.org/title/General-purpose_computing_on_graphics_processing_units#ROCm

2. Set up environment variables

Add the following block to your ~/.bashrc, ~/.zshrc, or whatever shell init file you use.
Adjust the AMDGPU_TARGETS, PYTORCH_ROCM_ARCH, and HSA_OVERRIDE_GFX_VERSION values to match your card’s gfx code (e.g. gfx1100 for an RX 7900 XTX).
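
If you’re not sure which gfx code your card reports, rocminfo (installed in step 1) will list it. For example, on a machine with an RX 7900 XTX plus an integrated GPU:

rocminfo | grep -o 'gfx[0-9a-f]*' | sort -u
# gfx1036
# gfx1100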

# ROCm is installed under /opt/rocm
export ROCM_PATH=/opt/rocm
export ROCM_HOME=/opt/rocm

# Which GPU(s) should ROCm/hip see?
export HIP_VISIBLE_DEVICES=0
export ROCR_VISIBLE_DEVICES=0
export TRITON_USE_ROCM=1

# Architecture overrides (replace gfx1100 with your card’s gfx code)
export AMDGPU_TARGETS="gfx1100"
export HCC_AMDGPU_TARGET="gfx1100"
export PYTORCH_ROCM_ARCH="gfx1100"
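# HSA_OVERRIDE_GFX_VERSION is the same gfx code in dotted major.minor.stepping
# form (gfx1100 → 11.0.0, gfx1030 → 10.3.0)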
export HSA_OVERRIDE_GFX_VERSION="11.0.0"

# Optional: disable hipBLASLt if you hit BLAS errors
export USE_HIPBLASLT=0
export TORCH_BLAS_PREFER_HIPBLASLT=0

# PyTorch ROCm memory configuration
export PYTORCH_ALLOC_CONF="expandable_segments:False,garbage_collection_threshold:0.8"

# Add ROCm libs to the search paths
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH:+${LD_LIBRARY_PATH}:}/opt/rocm/lib
export PATH=${PATH:+${PATH}:}/opt/rocm/bin:/opt/rocm/lib

Reload the shell:

source ~/.zshrc   # or ~/.bashrc
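
To confirm the variables took effect in the new shell:

env | grep -E 'ROCM|HIP|HSA|AMDGPU|PYTORCH'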

3. Verify the ROCm installation

rocm_agent_enumerator

You should see one line per GPU, e.g.

gfx1100   # RX 7900 XTX
gfx1036   # integrated GPU (if present)

If you see warnings or nothing appears, double‑check that ROCm’s packages are installed correctly.
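
If the list is empty, a common culprit is device permissions: ROCm talks to /dev/kfd and the /dev/dri render nodes, which typically require membership in the render and video groups. To check, and to add yourself if needed (log out and back in afterwards):

groups | grep -E 'render|video'
sudo usermod -aG render,video "$USER"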

4. Locate the AMDGPU ID table

Arch ships the amdgpu.ids file in the standard libdrm location:

/usr/share/libdrm/amdgpu.ids

If you’d like a quick search utility:

sudo pacman -S plocate   # mlocate was replaced by plocate in the official repos
sudo updatedb
locate amdgpu.ids   # should show the path above

The amdgpu.ids file maps PCI device IDs to product names; libdrm (and through it the ROCm runtime) uses it to identify the GPU.
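
Alternatively, pacman can report which package owns the file without installing anything extra:

pacman -Qo /usr/share/libdrm/amdgpu.ids
# → /usr/share/libdrm/amdgpu.ids is owned by libdrm <version>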

5. Make the file visible to PyTorch

PyTorch’s ROCm wheel was built on a system where amdgpu.ids lives under /opt/amdgpu/share/libdrm.
Create that directory and symlink to the real file:

sudo mkdir -p /opt/amdgpu/share/libdrm
sudo ln -s /usr/share/libdrm/amdgpu.ids /opt/amdgpu/share/libdrm/amdgpu.ids

Verify:

file /opt/amdgpu/share/libdrm/amdgpu.ids
# → /opt/amdgpu/share/libdrm/amdgpu.ids: symbolic link to /usr/share/libdrm/amdgpu.ids

6. Create a Python virtual environment and install PyTorch

python -m venv .venv
source .venv/bin/activate
pip install -U pip

Install the ROCm‑enabled PyTorch wheel (replace rocm6.4 with the latest stable release that matches your ROCm version):

pip install numpy torch --index-url https://download.pytorch.org/whl/rocm6.4

numpy isn’t strictly required by torch itself, but PyTorch’s NumPy interop expects it, so it is pulled in explicitly here.
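
To confirm the ROCm build (rather than a CPU or CUDA wheel) was installed, check that torch.version.hip is set; it is None on non‑ROCm builds:

python -c "import torch; print(torch.__version__, torch.version.hip)"
# → the version string should end in +rocm6.4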

7. Test the setup

python -c "import torch; print('CUDA available:', torch.cuda.is_available())"
python -c "import torch; print('Arch list:', torch.cuda.get_arch_list())"

You should see:

CUDA available: True
Arch list: ['gfx900', 'gfx906', 'gfx908', 'gfx90a', 'gfx942', 'gfx1030', 'gfx1100', 'gfx1101', 'gfx1102', 'gfx1200', 'gfx1201']

No errors or warnings should appear.
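
For a stronger check, run a small computation on the card; if the kernel executes and returns finite values, the stack works end‑to‑end. A minimal smoke test, assuming device 0 as set via HIP_VISIBLE_DEVICES above:

python - <<'EOF'
import torch

device = torch.device("cuda")  # ROCm builds expose the GPU through the CUDA API
x = torch.randn(1024, 1024, device=device)
y = x @ x                      # matrix multiply executed on the GPU
torch.cuda.synchronize()       # block until the kernel finishes
print("Device:", torch.cuda.get_device_name(0))
print("Result finite:", torch.isfinite(y).all().item())
EOF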

deactivate   # exit the virtual environment when finished

8. Summary

  1. Install hip-runtime-amd + rocminfo.
  2. Add ROCm env‑vars (arch‑specific).
  3. Verify device visibility with rocm_agent_enumerator.
  4. Symlink /usr/share/libdrm/amdgpu.ids to /opt/amdgpu/share/libdrm/amdgpu.ids.
  5. Create a venv & install the ROCm PyTorch wheel.
  6. Test with torch.cuda.is_available().

Related Issues & Resources

  • Arch Wiki – GPGPU on ROCm: https://wiki.archlinux.org/title/General-purpose_computing_on_graphics_processing_units#ROCm
  • PyTorch ROCm wheels: https://download.pytorch.org/whl/rocm6.4

Happy Hacking!
