@RamLavi
Last active July 10, 2025
primary UDN interface on VMs example (passt binding plugin)
# Overview
The following are example manifests for two scenarios of communication between VMs connected via a primary UDN:
1. VMs under the same namespace
2. VMs under different namespaces
## Prerequisites
- A CNV 4.17 cluster
- The `oc` CLI tool
# Configuring primary-UDN on a CNV cluster
1. Enable the OCP feature gates by turning on the `TechPreviewNoUpgrade` feature set:
```bash
oc patch FeatureGate cluster --type=json -p '[{"op": "add", "path": "/spec/featureSet", "value": "TechPreviewNoUpgrade"}]'
```
2. Set the HCO feature gates needed for CNV to deploy the primary-UDN components:
```bash
oc patch hco -n openshift-cnv kubevirt-hyperconverged --type=json -p='[{"op":"replace","path":"/spec/featureGates/primaryUserDefinedNetworkBinding","value":true},{"op":"replace","path":"/spec/featureGates/deployKubevirtIpamController","value":true}]'
```
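After this patch, the HCO `spec.featureGates` stanza should show both gates enabled. An illustrative fragment (field names taken from the patch above):

```yaml
# Illustrative HCO spec fragment after the patch above
spec:
  featureGates:
    primaryUserDefinedNetworkBinding: true
    deployKubevirtIpamController: true
```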
2.1. Check that the HCO feature gates are enabled:
```bash
oc get hco kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.featureGates
```
2.2. Check that the kubevirt CR has the correct binding feature gate:
```bash
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.configuration.developerConfiguration.featureGates | grep NetworkBindingPlugins
```
2.3. Check that the passt network binding is correctly installed on the kubevirt CR:
```bash
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.configuration.network.binding
```
2.4. Check the primary-udn net-attach-def:
```bash
oc get net-attach-def primary-udn-kubevirt-binding -n default -oyaml
```
# Demo scenario #1: two VMs in the same namespace
1. Create a new namespace where the VMs will reside
```bash
oc create ns blue-ns
```
2. Apply the user NAD to allow OVN-K to plumb the pods in the selected namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-blue
  namespace: blue-ns
spec:
  config: |2
    {
      "cniVersion": "0.3.0",
      "name": "tenantblue",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "blue-ns/access-tenant-blue",
      "role": "primary"
    }
EOF
```
3. Create two VMs in the selected namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-a
  namespace: blue-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-b
  namespace: blue-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
EOF
```
4. From the guest console, ping the opposite VM:
```bash
virtctl console <vm-a>
# <login>
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-a IP
virtctl console <vm-b>
# <login>
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-b IP
virtctl console <vm-a>
ping <vmb-ip>
```
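The `ip`/`awk`/`cut` pipeline used above can be tried locally against a sample line of `ip -o -4 addr list eth0` output (the address below is hypothetical, picked from the 10.100.0.0/16 UDN subnet):

```shell
# Sample one-line output of "ip -o -4 addr list eth0" (hypothetical address)
line='2: eth0    inet 10.100.0.5/16 brd 10.100.255.255 scope global dynamic eth0'
# Field 4 is "ADDR/PREFIX"; cut strips the prefix length
printf '%s\n' "$line" | awk '{print $4}' | cut -d/ -f1
```

This prints `10.100.0.5`, the address to ping from the other VM.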
# Demo scenario #2: two VMs in different namespaces
1. Create two new namespaces where the VMs will reside:
```bash
oc create ns red-ns
oc create ns yellow-ns
```
2. Apply the user NADs to allow OVN-K to plumb the pods in the selected namespaces:
```bash
cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-red
  namespace: red-ns
spec:
  config: |2
    {
      "cniVersion": "0.3.0",
      "name": "orange-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.101.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "red-ns/access-tenant-red",
      "role": "primary"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-yellow
  namespace: yellow-ns
spec:
  config: |2
    {
      "cniVersion": "0.3.0",
      "name": "orange-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.101.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "yellow-ns/access-tenant-yellow",
      "role": "primary"
    }
EOF
```
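Note that what joins these two NADs into one logical network is the shared `"name"` field (`orange-network`); the namespaces and `netAttachDefName` values differ. A minimal local sketch of that invariant, using hypothetical sample config strings and a simple sed extraction:

```shell
# The "name" inside config is what joins NADs across namespaces into one network
red_cfg='{"cniVersion": "0.3.0", "name": "orange-network", "netAttachDefName": "red-ns/access-tenant-red"}'
yellow_cfg='{"cniVersion": "0.3.0", "name": "orange-network", "netAttachDefName": "yellow-ns/access-tenant-yellow"}'
name_of() { printf '%s' "$1" | sed -n 's/.*"name": "\([^"]*\)".*/\1/p'; }
if [ "$(name_of "$red_cfg")" = "$(name_of "$yellow_cfg")" ]; then
  echo "same logical network: $(name_of "$red_cfg")"
fi
```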
3. Create two VMs, one in each namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-a
  namespace: red-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-b
  namespace: yellow-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
EOF
```
4. From the guest console, ping the opposite VM:
```bash
virtctl console <vm-a> -n red-ns
# <login>
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-a IP
virtctl console <vm-b> -n yellow-ns
# <login>
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-b IP
virtctl console <vm-a> -n red-ns
ping <vmb-ip>
```
@oshoval

oshoval commented Aug 21, 2024

I think it's worth removing `"allowPersistentIPs": "true"` at this stage;
we don't fully support it yet for UDN, as Miguel explained.

@RamLavi

RamLavi commented Aug 21, 2024

> I think that it worth removing "allowPersistentIPs": "true", at this stage we don't fully support it yet for UDN as Miguel explained

What's not supported, exactly?

@oshoval

oshoval commented Aug 21, 2024

The Network Selection Element is not populated for UDN, hence OVN basically won't respect the IPAM feature for UDN.
Having a half-baked config is prone to bugs.

@RamLavi

RamLavi commented Aug 21, 2024

I see then. Removing

@oshoval

oshoval commented Aug 22, 2024

For upstream we have a different namespace:

```bash
kubectl patch hco -n kubevirt-hyperconverged kubevirt-hyperconverged --type=json -p='[{"op":"replace","path":"/spec/featureGates/primaryUserDefinedNetworkBinding","value":true},{"op":"replace","path":"/spec/featureGates/deployKubevirtIpamController","value":true}]'
```

@maiqueb

maiqueb commented Aug 29, 2024

I would appreciate a 3rd scenario, featuring a network spanning multiple namespaces. Workloads in separate namespaces would be able to communicate.

The NADs to be used would be:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: shared-net
  namespace: yellow
spec:
  config: |2

    {
            "cniVersion": "0.3.0",
            "name": "green-network",
            "type": "ovn-k8s-cni-overlay",
            "topology":"layer2",
            "subnets": "10.128.0.0/16",
            "mtu": 1400,
            "netAttachDefName": "yellow/shared-net",
            "role": "primary"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: shared-net
  namespace: blue
spec:
  config: |2

    {
            "cniVersion": "0.3.0",
            "name": "green-network",
            "type": "ovn-k8s-cni-overlay",
            "topology":"layer2",
            "subnets": "10.128.0.0/16",
            "mtu": 1400,
            "netAttachDefName": "blue/shared-net",
            "role": "primary"
    }
```

@RamLavi

RamLavi commented Aug 29, 2024

> I would appreciate a 3rd scenario, feature a network spanning across multiple namespaces. Workloads on separate namespaces would be able to communicate.
>
> The NADs to be used would be: […]

Hmm, isn't that what we do in scenario #2 (two VMs in different namespaces)?
My NADs may be wrong, btw; they'd need to be like what you suggested.

@maiqueb

maiqueb commented Aug 29, 2024

> my NADs may be wrong btw, they need to be like what you suggested

If so, your NADs are wrong. They are using different network names, which essentially connects them to different networks. Those will not be able to communicate.

@oshoval

oshoval commented Aug 29, 2024

What about the following case:
one NAD in the default namespace,
two namespaces with one VM in each, both using that NAD.
Unless the VM and the NAD must be in the same namespace for UDN (i.e. even the default namespace is forbidden).

@RamLavi

RamLavi commented Sep 1, 2024

> > my NADs may be wrong btw, they need to be like what you suggested
>
> If so, your NADs are wrong. They are using different network names, which essentially connects them to different networks. Those will not be able to communicate.

They are indeed wrong. Will fix. Thanks for the heads-up :)

[[UPDATE]] fixed

@RamLavi

RamLavi commented Sep 1, 2024

> what about the following case: one NAD on default network
>
> two NS, one VM on each, both use that NAD unless the VM and the NAD must be on the same NS for UDN (i.e even default NS is forbidden)

I think each namespace needs to have its own NAD. Like Miguel said, they connect to the same network by virtue of the NADs' config name being the same.

@oshoval

oshoval commented Sep 2, 2024

Those are two different things:
the namespace where the NAD resides versus the underlying network.
The question is whether we need to support a NAD in the default namespace while the VM is in a custom namespace.
Some other KubeVirt features support this, while some don't.

EDIT: on the other hand, it might indeed not be supported, because the namespace acts as an abstraction layer.

@oshoval

oshoval commented Sep 3, 2024

https://gist.github.com/RamLavi/91a25858a56f87e47039ceef99df662b#file-demo-scenario-2-L41

should be green-network, right? Not network-green, as on line 22.

@RamLavi

RamLavi commented Sep 3, 2024

> https://gist.github.com/RamLavi/91a25858a56f87e47039ceef99df662b#file-demo-scenario-2-L41
>
> should be green-network right ? not network-green as line 22

It does. Good catch, thanks!

@oshoval

oshoval commented Sep 16, 2024

If you want, you can now add `"allowPersistentIPs": true`, as support for it was recently added for UDN.

@oshoval

oshoval commented Nov 5, 2024

Maybe we should have an example that uses UserDefinedNetwork, because different teams might look at this gist, and UserDefinedNetwork is more formal.

@oshoval

oshoval commented May 18, 2025

For passt, since it was removed from OVN:

```bash
cat <<EOF | run_kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: primary-udn-kubevirt-binding
  namespace: default
spec:
  config: '{
  "cniVersion": "1.0.0",
  "name": "primary-udn-kubevirt-binding",
  "plugins": [
    {
      "type": "kubevirt-passt-binding"
    }
  ]
}'
EOF
```

@oshoval

oshoval commented May 18, 2025

Need

```yaml
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
```

on the namespace where the primary NAD is created (blue-ns etc.).
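For example, the namespace from scenario #1 could be created with the label already in place. A sketch (the label key is taken from the comment above; applying it with `oc label` would work equally well):

```yaml
# Namespace manifest carrying the label required for a primary UDN NAD
apiVersion: v1
kind: Namespace
metadata:
  name: blue-ns
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
```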
