This document describes the investigation of a UDP packet loss / corruption issue in a virtualized networking environment.
While iperf is used as a controlled and repeatable test case, the issue is not limited to iperf.
Similar behavior has been observed in other UDP-based applications. iperf is used solely to demonstrate and quantify the issue.
- UDP packets sent by the server (this system) are:
  - Dropped
  - Not received
  - Received malformed
- The issue:
  - Is directional (server → client)
  - Increases with throughput
  - Is reproducible
- TCP traffic is not affected
- The issue occurs under multiple conditions, not limited to iperf testing
- Tool: `iperf` (UDP mode; TCP mode used only to confirm no other issues are present on the link)
- Server: This environment (tested via LXC container, Docker container, and dedicated VM to eliminate container-specific issues)
- Clients: External and internal test hosts
| Parameter | Values |
|---|---|
| Direction | Normal, Reverse (-R) |
| Bandwidth | 1M, 5M, 10M, 15M (--bitrate) |
| Protocol | UDP (--udp) |

| Bandwidth | Normal | Reverse (-R) |
|---|---|---|
| 1M | OK | Mostly OK |
| 5M | OK | Mostly OK |
| 10M | OK | Some loss |
| 15M | OK | Consistent and significant loss |
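The sweep above can be reproduced with a small script. This is a sketch: the server address `192.168.100.10` is a placeholder for the actual iperf server, and each command is only printed so the sweep can be reviewed first (drop the leading `echo` to actually run it).

```shell
#!/bin/sh
# Sketch of the UDP test sweep; SERVER is a placeholder, adjust to your setup.
SERVER=192.168.100.10

for RATE in 1M 5M 10M 15M; do
    for DIR in normal reverse; do
        if [ "$DIR" = "reverse" ]; then
            # -R makes the server send, i.e. the failing direction
            echo iperf3 -c "$SERVER" --udp --bitrate "$RATE" -R
        else
            echo iperf3 -c "$SERVER" --udp --bitrate "$RATE"
        fi
    done
done
```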
The network consists of an FTTH fiber connection, an ONT, a Proxmox virtualization host, and an OpenWrt router VM with multiple NICs, including PCIe passthrough for the WAN and an LACP LAN aggregation. The server runs inside LXC, Docker, or a dedicated VM; clients connect either via the WAN (Client A) or via the LAN across subnets (Client B). Bridges (vmbr0/vmbr1) provide container/VM connectivity.
- 4 physical NICs (PCIe passthrough)
  - `eth2` → WAN
  - Remaining 3 NICs → aggregated into `LACP0` for LAN
- `eth0` → virtio device, part of the `br-lan` bridge
- `br-lan` bridge composed of `eth0` + `LACP0`
- `LACP0` VLAN-aware but no VLANs configured
- Subnets & firewall zones:
  - `br-lan` → 192.168.1.0/24 → `lan` zone
  - `public` → 192.168.100.0/24 → `public` zone
  - `wan` → ISP-assigned → `wan` zone
- Cross-subnet traffic requires explicit firewall rules and may traverse DNAT when moving between zones
- `vmbr0` → Linux bridge, no IP
- `vmbr1` → Linux bridge, no IP
- LXC / Docker containers or dedicated VMs can attach to either bridge
- Bridges act purely at L2
- PVE host physical NIC also connected to main switch (for some tests)
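For reference, an IP-less L2 bridge of this kind is typically declared on the PVE host as below. This is an illustrative `/etc/network/interfaces` stanza, not the actual host configuration:

```shell
# /etc/network/interfaces (PVE host) -- illustrative stanza, not the real config
auto vmbr1
iface vmbr1 inet manual
    bridge-ports none     # no physical port: pure L2 bridge for VMs/containers
    bridge-stp off
    bridge-fd 0
```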
| Client | Connection | Notes |
|---|---|---|
| Client A | External (WAN) | Cross-ISP testing over FTTH or 4G |
| Client B | Internal LAN | Connected to main switch; traffic crosses subnets and zones, traversing firewall and DNAT rules |
```
        +--------------+
        |   Client A   |
        |  (WAN/ISP)   |
        +--------------+
               |
             Fiber
               |
             [ONT]
               |
         Ethernet RJ45
               |
           eth2 (WAN)
               |
+--------------------------+
|         PVE Host         |
|--------------------------|
|  vmbr0 (bridge)          |
|  vmbr1 (bridge)          |
|   |                      |
|   +--> LXC Container     |
|   |     UDP Server       |
|   |     192.168.100.X    |
|   +--> Docker / VM test  |  (alternative to LXC)
|--------------------------|
|    OpenWrt Router VM     |
|--------------------------|
| eth2 -> WAN              |  (NIC PCIe passthrough)
|                          |
| eth0 (virtio) -> br-lan  |  (mapped to vmbr0)
| lacp0 -> eth3-5          |  (LACP uses 3 NICs in PCIe passthrough)
| br-lan -> eth0 + LACP0   |
| [LAN] 192.168.1.0/24     |
|                          |
| eth1 (virtio) -> public  |  (mapped to vmbr1)
| [PUBLIC] 192.168.100.0/24|
|                          |
| Firewall / NAT / Zones   |
+--------------------------+
   | (lacp0)
   +--------------------> Switch -> Client B
   | (eth0)
   +--------------------> vmbr0 -> Server VM/Container -> Client B
```
| Client | Network Location | ISP / Access Type | Result |
|---|---|---|---|
| Client A | External (WAN) | TIM – FTTH | ❌ Packet loss present |
| Client A | External (WAN) | TIM – 4G | ❌ Packet loss present |
| Client A | External (WAN) | Vodafone – 4G | ❌ Packet loss present |
| Client A | External (WAN) | Wind3 – 4G | ✅ No packet loss |
| Client B | Internal LAN | N/A | ✅ No packet loss (traffic crosses subnets, firewall, and DNAT rules) |
| Server Location | Network Attachment | Result |
|---|---|---|
| LXC container | vmbr1 (virtio) | ❌ Issue present |
| Docker container on PVE host | vmbr1 (virtio) | ❌ Issue present |
| Dedicated VM | vmbr1 (virtio) | ❌ Issue present |
| PVE host (bare metal) | Physical NIC to main switch | ✅ No issue |
| OpenWrt router VM | Local router interface | ✅ No issue |
```
Client A (Internet)
        |
        v
     WAN (ISP)
        |
        v
eth2 (PCIe passthrough, WAN)
        |
        v
OpenWrt Firewall / NAT
        |
        v
OpenWrt eth1 (virtio) -> public subnet 192.168.100.0/24
        |
        v
vmbr1 (PVE bridge)
        |
        v
Server (LXC / Docker / dedicated VM)
```
Packet loss is observed on this path when the OpenWrt VM uses virtio NICs, regardless of whether the server is attached to vmbr1 or vmbr0.
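To localize where packets disappear along this path, the same flow can be captured at successive hops and the traces compared for sequence gaps or truncated frames. The commands below are a sketch: they assume iperf3's default port 5201 and the interface names from this setup, and require root on the respective hosts.

```shell
# On the PVE host: does the traffic reach the bridge toward the server?
tcpdump -ni vmbr1 udp port 5201

# Inside the OpenWrt VM: does it leave the virtio interface intact?
tcpdump -ni eth1 udp port 5201

# On the server: compare what actually arrives
tcpdump -ni eth0 udp port 5201
```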
```
Client A (Internet)
        |
        v
     WAN (ISP)
        |
        v
eth2 (PCIe passthrough, WAN)
        |
        v
OpenWrt Firewall / NAT
        |
        v
OpenWrt br-lan (bridge) -> lan subnet 192.168.1.0/24
        |
        v
     lacp0
        |
        v
     Switch
        |
        v
     Server
```
The diagrams above show the connection direction from client to server; with iperf's -R option the test traffic flows in the opposite direction (server → client).
Internal-only traffic does not exhibit packet loss, including when the server sits behind vmbr1 and the client behind vmbr0, crossing subnets and firewall zones.
- All failing cases involve Internet-based clients; purely internal paths work reliably.
- The server may be connected to either `vmbr1` or `vmbr0`; the choice of bridge does not change the outcome.
- Packet loss is observed only when traffic enters from the WAN and is forwarded by the OpenWrt VM using virtio NICs.
- The issue remains directional (server → client) and rate-dependent, affecting high-throughput UDP.
- Proxmox bridges (`vmbr0`/`vmbr1`) function correctly and are not a source of packet loss.
- LXC, Docker, and dedicated VM server runtimes are ruled out.
- If the iperf server is connected via the LACP path through the physical switch (i.e. not using vmbr interfaces), Internet-sourced traffic works correctly.
- During testing the server was also run on the OpenWrt VM itself, bound to both the `lan` and `public` subnets; no issue was observed.
- Traffic from `LACP0` (or from a VM/container attached to `vmbr0`) to the server, traversing routing/firewall logic and exiting via `vmbr1`, also works correctly.
- Internet ingress does not show any significant packet loss.
- The failure therefore requires the combination of Internet egress and virtio forwarding toward a vmbr-attached server.
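One known class of problems in this area involves virtio offloads (GSO/GRO/checksum offload) interacting badly with high-rate UDP forwarding. As a narrower diagnostic than swapping the NIC model, offloads can be disabled on the virtio interfaces inside the OpenWrt VM. This is a suggestion derived from the findings above, not a step from the original investigation:

```shell
# Inside the OpenWrt VM: disable offloads on the virtio NICs (eth0/eth1)
# to test whether the loss is offload-related. Diagnostic only.
ethtool -K eth0 gso off gro off tso off rx off tx off
ethtool -K eth1 gso off gro off tso off rx off tx off
```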
- Replacing OpenWrt VM virtio NICs with Intel E1000e fully mitigates the issue for all tested paths.
- This strongly points to a defect or limitation in virtio-based forwarding inside OpenWrt under high UDP load, not in the host bridges or container stack.
- The problem manifests only for Internet-destined traffic.
- Server-side attachment to `vmbr0` or `vmbr1` does not influence the issue.
- Internal-only routing and firewalling paths are stable.
- The issue is eliminated by switching OpenWrt VM NICs from virtio to Intel E1000e.
- Most likely root cause: virtio NIC handling in OpenWrt when sending high-rate UDP traffic from virtual bridges to WAN.
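On the Proxmox side, the mitigation amounts to changing the NIC model of the OpenWrt VM from virtio to E1000e. Assuming a hypothetical VMID of 101 and the bridge assignments from this setup, the change would look like:

```shell
# Replace virtio with the Intel E1000e model on both router NICs.
# VMID 101 and the bridge assignments are assumptions; verify with `qm config 101`.
qm set 101 --net0 e1000e,bridge=vmbr0
qm set 101 --net1 e1000e,bridge=vmbr1
```

The VM must be power-cycled (not just rebooted from inside) for the new NIC model to take effect.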