This contains notes about the AMD QDMA Subsystem for PCI Express, based upon trying to write a VFIO based poll-mode driver for QDMA.
Have UltraScale+ and Versal AI Edge devices to try with QDMA, which support the soft QDMA. Don't have devices with the CPM4 or CPM5 hard QDMA.
- QDMA Subsystem for PCI Express Product Guide (PG302) gives the supported product families as "AMD UltraScale+™ , AMD Spartan™ UltraScale+™". For driver details https://github.com/Xilinx/dma_ip_drivers. The register reference file is https://download.amd.com/docnav/documents/ip_attachments/qdma-v5-1-register-map.zip.
- Versal Adaptive SoC DMA and Bridge Subsystem for PCI Express Product Guide (PG344) mentions both the XDMA and QDMA subsystems. The supported product family is "AMD Versal™ adaptive SoC". The register map reference file is https://download.amd.com/docnav/documents/ip_attachments/qdma-v5-1-register-map.zip. The Supported S/W Driver is "QDMA Subsystem: x86 Linux Kernel and x86 Linux DPDK drivers for endpoint designs"
- Versal Adaptive SoC CPM Mode for PCI Express Product Guide (PG346). Covers CPM4 and CPM5. Drivers can be found at https://github.com/Xilinx/dma_ip_drivers.
- Versal Adaptive SoC CPM DMA and Bridge Mode for PCI Express Product Guide (PG347). Drivers can be found at https://github.com/Xilinx/dma_ip_drivers. The register map reference file is https://www.xilinx.com/support/documents/ip_documentation/versal_cips/v3_0/pg347-versal-cpm-dma-v3-0-register-map.zip
When Vivado 2025.2 was used to run the QDMA block automation for an xcve2302-sfva784-1LP-e-S, the QDMA pcie_cfg_external_msix_without_msi_if was connected to an external interface.
The documentation doesn't appear to describe the signals on pcie_cfg_external_msix_without_msi_if, nor what they are supposed to be connected to or used for. The block design validated OK when this interface was deleted.
Searching /opt/Xilinx/2025.2/Vivado/data, the following two files contain pcie_cfg_external_msix_without_msi_if:
- /opt/Xilinx/2025.2/Vivado/data/ip/xilinx/qdma_v5_1/component.xml
- /opt/Xilinx/2025.2/Vivado/data/rsb/design_assist/block/qdma/bd.tcl
From looking at the above files it isn't clear what the interface is used for. In the QDMA configuration MSI-X is disabled. The enumerated endpoint isn't reporting Message Signaled Interrupts capabilities.
When Vivado 2025.2 was used to create a QDMA design for a Virtex UltraScale+ device, there was no pcie_cfg_external_msix_without_msi_if.
From searching the generated files in the design, the interface seems to be related to the MSI Interrupt Interface in the Versal Adaptive SoC Integrated Block for PCIe. "Related" is used since not all the connections have been traced.
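As a cross-check of the observation that the enumerated endpoint isn't reporting Message Signaled Interrupts capabilities, the PCI capability list can be walked from config space. A minimal sketch, assuming the endpoint is accessed via its sysfs config file (the device address is taken from the Versal example output later in these notes) and using the capability IDs 0x05 (MSI) and 0x11 (MSI-X) from the PCI specification:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

int main (void)
{
    /* 0000:01:00.0 is one of the endpoint functions from the Versal example
     * output later in these notes. Reading the full config space through
     * sysfs may require root; unprivileged reads can return zeros beyond
     * the first 64 bytes, which would hide the capability list. */
    const char *config_path = "/sys/bus/pci/devices/0000:01:00.0/config";
    uint8_t cfg[256] = {0};
    FILE *config = fopen (config_path, "rb");

    if ((config == NULL) || (fread (cfg, 1, sizeof (cfg), config) != sizeof (cfg)))
    {
        fprintf (stderr, "Failed to read %s\n", config_path);
        return 1;
    }
    fclose (config);

    bool has_msi = false;
    bool has_msix = false;

    /* Status register bit 4 indicates a capability list is present. */
    if (cfg[0x06] & 0x10)
    {
        uint8_t cap_ptr = cfg[0x34] & 0xfc;
        int visited = 0;

        while ((cap_ptr != 0) && (visited++ < 48))
        {
            has_msi |= (cfg[cap_ptr] == 0x05);
            has_msix |= (cfg[cap_ptr] == 0x11);
            cap_ptr = cfg[cap_ptr + 1] & 0xfc;
        }
    }

    printf ("MSI capability  : %s\n", has_msi ? "present" : "absent");
    printf ("MSI-X capability: %s\n", has_msix ? "present" : "absent");

    return 0;
}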
Looked for existing Linux drivers.
The QDMA documentation points at the GitHub repository https://github.com/Xilinx/dma_ip_drivers, which also has XDMA support.
For QDMA there is support in different sub-directories for:
- DPDK
- linux-kernel
- windows
The linux-kernel directory builds two modules, qdma-pf.ko and qdma-vf.ko. There is common source code used for the physical and virtual function modules, with some conditional compilation on __QDMA_VF__ - defined when building qdma-vf.ko and undefined when building qdma-pf.ko.
linux-kernel/driver/src/pci_ids.h has different vendor and device IDs for the physical and virtual function modules. The device IDs depend upon the PCIe generation, lane width and function number.
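As an illustration, a VFIO based driver can key off those IDs when deciding which devices to open. A minimal sketch which matches the Xilinx vendor ID 0x10ee against the two physical function device IDs seen in the example outputs later in these notes; a complete table would need to be built from pci_ids.h for the PCIe generation, lane width and function number combinations of interest:

#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

#define PCI_VENDOR_ID_XILINX 0x10ee

/* Physical function device IDs observed in the example designs in these
 * notes. pci_ids.h in dma_ip_drivers has the full set of IDs. */
static const uint16_t qdma_pf_device_ids[] =
{
    0xb144, /* device 0000:01:00.1 in the Versal VD100_qdma_ddr4 design */
    0x903f  /* device 0000:31:00.0 in the Virtex UltraScale+ U200_qdma_ram design */
};

static bool is_known_qdma_pf (const uint16_t vendor_id, const uint16_t device_id)
{
    if (vendor_id != PCI_VENDOR_ID_XILINX)
    {
        return false;
    }

    for (size_t index = 0; index < (sizeof (qdma_pf_device_ids) / sizeof (qdma_pf_device_ids[0])); index++)
    {
        if (device_id == qdma_pf_device_ids[index])
        {
            return true;
        }
    }

    return false;
}

int main (void)
{
    printf ("10ee:b144 recognised : %s\n", is_known_qdma_pf (0x10ee, 0xb144) ? "yes" : "no");
    printf ("8086:0000 recognised : %s\n", is_known_qdma_pf (0x8086, 0x0000) ? "yes" : "no");

    return 0;
}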
The qdma_access directory has sub-directories for:
- eqdma_cpm5_access
- eqdma_soft_access
- qdma_cpm4_access
- qdma_soft_access
The qdma_hw_access_init function in qdma_access_common.c sets up pointers to eqdma_cpm5_*, eqdma_*, qdma_cpm4_* or qdma_* functions based upon the identified device. is_vf is passed as a constant input, with the value depending upon whether it is called from the qdma-pf.ko or qdma-vf.ko module.
qdma_hw_access_init can call one of the following "get version" functions:
- eqdma_get_version: always called for a physical function. Called for a virtual function when QDMA_GLBL2_VF_UNIQUE_ID_MASK doesn't contain the QDMA identifier and EQDMA_CPM5_VF_GT_256Q_SUPPORTED is undefined.
- eqdma_cpm5_get_version: called for a virtual function when QDMA_GLBL2_VF_UNIQUE_ID_MASK doesn't contain the QDMA identifier and EQDMA_CPM5_VF_GT_256Q_SUPPORTED is defined.
- qdma_get_version: called for a virtual function when QDMA_GLBL2_VF_UNIQUE_ID_MASK contains the QDMA identifier.
The 3 "get version" functions only differ in the register address they use to obtain the version information for a virtual function.
mm_channel_max is set to:
- 1 for qdma_soft_access and eqdma_soft_access
- 2 for qdma_cpm4_access and eqdma_cpm5_access
After copying the logic from qdma_hw_access_init into a VFIO driver, on projects built with Vivado 2025.2 a Versal AI Edge reported:
$ identify_pcie_fpga_design/display_identified_pcie_fpga_designs
Opening device 0000:01:00.1 (10ee:b144) with IOMMU group 12
Enabled bus master for 0000:01:00.1
Enabled bus master for 0000:01:00.3
Enabled bus master for 0000:01:00.0
Enabled bus master for 0000:01:00.2
Design VD100_qdma_ddr4:
PCI device 0000:01:00.1 rev 00 IOMMU group 12
QDMA bar 0 memory base offset 0x800000000 size 0x100000000
rtl_version : RTL Base
vivado_release : vivado 2020.2
ip_type : EQDMA5.0 Soft IP
device_type : Soft IP
Number of PFs supported : 4
Total number of queues supported : 512
MM channels : 1
FLR Present : no
ST enabled : no
MM enabled : yes
Mailbox enabled : no
MM completion enabled : no
Debug Mode enabled : no
Desc Engine Mode : Internal only mode
Design VD100_qdma_ddr4:
PCI device 0000:01:00.3 rev 00 IOMMU group 12
QDMA bar 0 memory base offset 0x800000000 size 0x100000000
rtl_version : RTL Base
vivado_release : vivado 2020.2
ip_type : EQDMA5.0 Soft IP
device_type : Soft IP
Number of PFs supported : 4
Total number of queues supported : 512
MM channels : 1
FLR Present : no
ST enabled : no
MM enabled : yes
Mailbox enabled : no
MM completion enabled : no
Debug Mode enabled : no
Desc Engine Mode : Internal only mode
Design VD100_qdma_ddr4:
PCI device 0000:01:00.0 rev 00 IOMMU group 12
QDMA bar 0 memory base offset 0x800000000 size 0x100000000
rtl_version : RTL Base
vivado_release : vivado 2020.2
ip_type : EQDMA5.0 Soft IP
device_type : Soft IP
Number of PFs supported : 4
Total number of queues supported : 512
MM channels : 1
FLR Present : no
ST enabled : no
MM enabled : yes
Mailbox enabled : no
MM completion enabled : no
Debug Mode enabled : no
Desc Engine Mode : Internal only mode
Design VD100_qdma_ddr4:
PCI device 0000:01:00.2 rev 00 IOMMU group 12
QDMA bar 0 memory base offset 0x800000000 size 0x100000000
rtl_version : RTL Base
vivado_release : vivado 2020.2
ip_type : EQDMA5.0 Soft IP
device_type : Soft IP
Number of PFs supported : 4
Total number of queues supported : 512
MM channels : 1
FLR Present : no
ST enabled : no
MM enabled : yes
Mailbox enabled : no
MM completion enabled : no
Debug Mode enabled : no
Desc Engine Mode : Internal only mode
And a Virtex UltraScale+ reported:
> identify_pcie_fpga_design/display_identified_pcie_fpga_designs
Opening device 0000:31:00.0 (10ee:903f) with IOMMU group 81
Enabled bus master for 0000:31:00.0
Design U200_qdma_ram:
PCI device 0000:31:00.0 rev 00 IOMMU group 81 physical slot 2-2
QDMA bar 0 memory base offset 0x0 size 0x800000
rtl_version : RTL Base
vivado_release : vivado 2020.2
ip_type : EQDMA5.0 Soft IP
device_type : Soft IP
Number of PFs supported : 1
Total number of queues supported : 512
MM channels : 1
FLR Present : no
ST enabled : no
MM enabled : yes
Mailbox enabled : no
MM completion enabled : no
Debug Mode enabled : no
Desc Engine Mode : Internal only mode
User access build timestamp : 08B48C5E - 01/01/2026 08:49:30
I.e. both report the same version information. The ip_type is EQDMA_SOFT_IP, which means the qdma-pf.ko module would use the eqdma_soft_access functions for those designs.
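For reference, a minimal sketch of how the VFIO poll-mode driver can map QDMA bar 0 before reading the identification registers. The IOMMU group number and device address are from the Versal example output above (all devices in the group need to be bound to vfio-pci first); the register offset is left as a placeholder to be filled in from the register map reference file, and error handling is mostly omitted:

#include <stdio.h>
#include <stdint.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

int main (void)
{
    /* Group 12 and device 0000:01:00.0 are from the Versal example above. */
    int container = open ("/dev/vfio/vfio", O_RDWR);
    int group = open ("/dev/vfio/12", O_RDWR);

    ioctl (group, VFIO_GROUP_SET_CONTAINER, &container);
    ioctl (container, VFIO_SET_IOMMU, VFIO_TYPE1_IOMMU);

    int device = ioctl (group, VFIO_GROUP_GET_DEVICE_FD, "0000:01:00.0");

    /* Find the size and file offset of bar 0, which is the QDMA bar in
     * these designs, and map it for register access. */
    struct vfio_region_info region =
    {
        .argsz = sizeof (region),
        .index = VFIO_PCI_BAR0_REGION_INDEX
    };
    ioctl (device, VFIO_DEVICE_GET_REGION_INFO, &region);

    uint32_t *bar0 = mmap (NULL, region.size, PROT_READ | PROT_WRITE,
                           MAP_SHARED, device, region.offset);
    if (bar0 == MAP_FAILED)
    {
        fprintf (stderr, "mmap of bar 0 failed\n");
        return 1;
    }

    /* Placeholder register offset - the actual offsets of the GLBL2
     * identification / version registers need to be taken from the
     * qdma-v5-1-register-map.zip reference file. */
    const uint32_t reg_offset = 0x0;
    printf ("bar 0 size 0x%llx, register at offset 0x%x = 0x%08x\n",
            (unsigned long long) region.size, reg_offset,
            bar0[reg_offset / sizeof (uint32_t)]);

    munmap (bar0, region.size);
    close (device);
    close (group);
    close (container);
    return 0;
}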
The vivado_release is only used to make the following runtime selections:
qdma_mbox_is_irq_available has:

    /* MBOX is available in all QDMA Soft Devices for vivado release >
     * 2019.1 */
    if (((xdev->version_info.device_type == QDMA_DEVICE_SOFT) &&
         (xdev->version_info.vivado_release >= QDMA_VIVADO_2019_1)))
            return true;
qdma_queue_start has:

    if ((xdev->version_info.ip_type == EQDMA_SOFT_IP) &&
        (xdev->version_info.vivado_release >= QDMA_VIVADO_2020_2)) {
            if (xdev->dev_cap.desc_eng_mode == QDMA_DESC_ENG_BYPASS_ONLY) {
                    pr_err("Err: Bypass Only Design is not supported\n");
                    snprintf(buf, buflen,
                            "%s Bypass Only Design is not supported\n",
                            descq->conf.name);
                    unlock_descq(descq);
                    return -EINVAL;
            }

            if (descq->conf.desc_bypass) {
                    if (xdev->dev_cap.desc_eng_mode == QDMA_DESC_ENG_INTERNAL_ONLY) {
                            pr_err("Err: Bypass mode not supported in Internal Mode only design\n");
                            snprintf(buf, buflen,
                                    "%s Bypass mode not supported in Internal Mode only design\n",
                                    descq->conf.name);
                            unlock_descq(descq);
                            return -EINVAL;
                    }
            }
    }
With an EQDMA soft IP reporting "vivado 2020.2", the above runtime paths will be enabled.
The 6.12.0-124.21.1.el10_1.x86_64 kernel supplied with AlmaLinux 10.1 doesn't have the AMD_QDMA module configured to be built:
$ egrep "(XDMA|QDMA)" /boot/config-$(uname -r)
# CONFIG_XILINX_XDMA is not set
# CONFIG_AMD_QDMA is not set
The drivers/dma/amd/qdma source is simpler than that in dma_ip_drivers since:
- Only handles AXI4-MM transfers, i.e. no support for streams.
- Only handles physical functions.
- Doesn't handle multiple IP versions.
dev_get_platdata is called to get the max_mm_channels (number of MM channels) and irq_index. On x86_64 it isn't clear where the platform data is obtained from.
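Presumably some parent driver has to create the platform device and supply that platform data. A minimal sketch of what such a registration might look like, assuming the struct qdma_platdata layout from include/linux/platform_data/amd_qdma.h and a platform device name of "amd-qdma"; both are assumptions that would need checking against the kernel sources in use:

// SPDX-License-Identifier: GPL-2.0
/*
 * Sketch of registering a platform device carrying the platform data that
 * dev_get_platdata() in drivers/dma/amd/qdma would retrieve. The device
 * name, lack of resources and field values are assumptions for
 * illustration only.
 */
#include <linux/module.h>
#include <linux/platform_device.h>
#include <linux/platform_data/amd_qdma.h>

static struct platform_device *qdma_pdev;

static int __init qdma_platdata_example_init(void)
{
        struct qdma_platdata pdata = {
                .max_mm_channels = 1,   /* matches "MM channels : 1" above */
                .irq_index = 0,         /* hypothetical value */
        };

        /* In a real design the parent (e.g. a PCIe endpoint driver) would
         * also pass the register space and IRQ resources of the QDMA IP. */
        qdma_pdev = platform_device_register_data(NULL, "amd-qdma",
                                                  PLATFORM_DEVID_AUTO,
                                                  &pdata, sizeof(pdata));
        return PTR_ERR_OR_ZERO(qdma_pdev);
}

static void __exit qdma_platdata_example_exit(void)
{
        platform_device_unregister(qdma_pdev);
}

module_init(qdma_platdata_example_init);
module_exit(qdma_platdata_example_exit);
MODULE_LICENSE("GPL");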