# Overview
The following are example manifests for two scenarios of communication between VMs connected via a primary UDN (user-defined network):
1. VMs in the same namespace
2. VMs in different namespaces
## Prerequisites
- CNV 4.17 cluster.
- oc tool
# Configuring primary-UDN on a CNV cluster
1. Enable the OCP feature gates needed in order to turn on the TechPreviewNoUpgrade flag.
```
oc patch FeatureGate cluster --type=json -p '[{"op": "add", "path": "/spec/featureSet", "value": "TechPreviewNoUpgrade"}]'
```
2. Set the HCO feature gates needed in order for CNV to deploy the primary-UDN components.
```bash
oc patch hco -n openshift-cnv kubevirt-hyperconverged --type=json -p='[{"op":"replace","path":"/spec/featureGates/primaryUserDefinedNetworkBinding","value":true},{"op":"replace","path":"/spec/featureGates/deployKubevirtIpamController","value":true}]'
```
2.1. Check that the HCO feature gates are enabled:
```
oc get hco kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.featureGates
```
2.2. Check that the kubevirt CR has the correct binding feature gate:
```
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.configuration.developerConfiguration.featureGates | grep NetworkBindingPlugins
```
2.3. Check that the passt network binding is correctly installed on the kubevirt CR:
```
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -ojson | jq .spec.configuration.network.binding
```
2.4. Check the primary-udn net-attach-def:
```
oc get net-attach-def primary-udn-kubevirt-binding -n default -oyaml
```
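The verification steps above (2.1 through 2.4) can be combined into a single pass. This is a convenience sketch, assuming `jq` is installed and the resource names match the commands above:

```shell
# Quick verification pass for the primary-UDN prerequisites.
# Assumes the resource names used above (openshift-cnv, kubevirt-hyperconverged).
set -e

echo "== HCO feature gates =="
oc get hco kubevirt-hyperconverged -n openshift-cnv -o json | jq .spec.featureGates

echo "== KubeVirt NetworkBindingPlugins feature gate =="
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o json \
  | jq '.spec.configuration.developerConfiguration.featureGates | index("NetworkBindingPlugins")'

echo "== passt network binding =="
oc get kv kubevirt-kubevirt-hyperconverged -n openshift-cnv -o json \
  | jq .spec.configuration.network.binding

echo "== primary-udn net-attach-def =="
oc get net-attach-def primary-udn-kubevirt-binding -n default
```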
# Demo scenario #1: 2 VMs in the same namespace
1. Create a new namespace where the VMs will reside:
```bash
oc create ns blue-ns
```
2. Apply the user NAD to allow OVN-Kubernetes to plumb the pods in the selected namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-blue
  namespace: blue-ns
spec:
  config: |
    {
      "cniVersion": "0.3.0",
      "name": "tenantblue",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.100.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "blue-ns/access-tenant-blue",
      "role": "primary"
    }
EOF
```
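Before creating the VMs, it may be worth confirming that the NAD was stored with the primary role. A hedged one-liner (assuming `jq` is installed; `spec.config` is a JSON string, so it is parsed after extraction):

```shell
# Should print "primary", matching the role set in the manifest above.
oc get net-attach-def access-tenant-blue -n blue-ns -o jsonpath='{.spec.config}' | jq -r .role
```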
3. Create two VMs in the selected namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-a
  namespace: blue-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-b
  namespace: blue-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
EOF
```
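`oc apply` returns immediately, before the guests have booted. A sketch for blocking until both are up, assuming the Ready condition reported on the VirtualMachineInstance status:

```shell
# Wait for both guests to report Ready before opening a console.
oc wait vmi/vm-a vmi/vm-b -n blue-ns --for=condition=Ready --timeout=5m
```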
4. Ping the opposite VM from the guest:
```
virtctl console <vm-a>
<login>
...
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-a IP
virtctl console <vm-b>
<login>
...
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-b IP
virtctl console <vm-a>
ping <vmb-ip>
```
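As a shortcut, the addresses can often be read from the VMI status instead of logging into each console. Treat this as a convenience sketch; IP reporting depends on the guest agent in the image and on the interface ordering:

```shell
# Read the first interface's IP from each VMI's status.
VMA_IP=$(oc get vmi vm-a -n blue-ns -o jsonpath='{.status.interfaces[0].ipAddress}')
VMB_IP=$(oc get vmi vm-b -n blue-ns -o jsonpath='{.status.interfaces[0].ipAddress}')
echo "vm-a: $VMA_IP  vm-b: $VMB_IP"
```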
# Demo scenario #2: 2 VMs in different namespaces
1. Create the new namespaces where the VMs will reside:
```bash
oc create ns red-ns
oc create ns yellow-ns
```
2. Apply the user NADs to allow OVN-Kubernetes to plumb the pods in the selected namespaces. Note that both NADs use the same network name (`orange-network`), which is what connects them to the same underlying network:
```bash
cat <<EOF | oc apply -f -
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-red
  namespace: red-ns
spec:
  config: |
    {
      "cniVersion": "0.3.0",
      "name": "orange-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.101.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "red-ns/access-tenant-red",
      "role": "primary"
    }
---
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: access-tenant-yellow
  namespace: yellow-ns
spec:
  config: |
    {
      "cniVersion": "0.3.0",
      "name": "orange-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "subnets": "10.101.0.0/16",
      "mtu": 1400,
      "netAttachDefName": "yellow-ns/access-tenant-yellow",
      "role": "primary"
    }
EOF
```
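Since cross-namespace connectivity relies on both NADs referencing the same underlying network, a quick sanity check (assuming `jq`) that both configs carry the same network name:

```shell
# Both commands should print the same network name (orange-network);
# differing names would place the VMs on two disjoint layer2 networks.
oc get net-attach-def access-tenant-red -n red-ns -o jsonpath='{.spec.config}' | jq -r .name
oc get net-attach-def access-tenant-yellow -n yellow-ns -o jsonpath='{.spec.config}' | jq -r .name
```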
3. Create two VMs, one in each namespace:
```bash
cat <<EOF | oc apply -f -
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-a
  namespace: red-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
---
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-b
  namespace: yellow-ns
spec:
  running: true
  template:
    spec:
      domain:
        devices:
          disks:
          - disk:
              bus: virtio
            name: containerdisk
          - disk:
              bus: virtio
            name: cloudinitdisk
          interfaces:
          - name: passtnet
            binding:
              name: passt
          rng: {}
        resources:
          requests:
            memory: 2048M
      networks:
      - name: passtnet
        pod: {}
      terminationGracePeriodSeconds: 0
      volumes:
      - containerDisk:
          image: quay.io/kubevirt/fedora-with-test-tooling-container-disk:v1.1.0
        name: containerdisk
      - cloudInitNoCloud:
          networkData: |
            version: 2
            ethernets:
              eth0:
                dhcp4: true
        name: cloudinitdisk
EOF
```
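Before probing connectivity, it can help to block until both guests are up. The Ready condition name is an assumption based on KubeVirt's VMI status conditions:

```shell
# Wait for each guest in its own namespace.
oc wait vmi/vm-a -n red-ns --for=condition=Ready --timeout=5m
oc wait vmi/vm-b -n yellow-ns --for=condition=Ready --timeout=5m
```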
4. Ping the opposite VM from the guest:
```
virtctl console <vm-a> -n red-ns
<login>
...
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-a IP
virtctl console <vm-b> -n yellow-ns
<login>
...
ip -o -4 addr list eth0 | awk '{print $4}' | cut -d/ -f1  # vm-b IP
virtctl console <vm-a> -n red-ns
ping <vmb-ip>
```
> my NADs may be wrong btw, they need to be like what you suggested.

If so, your NADs are wrong. They are using different network names, which essentially connects them to different networks. Those will not be able to communicate.

---

What about the following case: one NAD on the default network, two namespaces with one VM in each, both using that NAD? Unless the VM and the NAD must be in the same namespace for UDN (i.e. even the default namespace is forbidden).

---

> If so, your NADs are wrong. They are using different network names, which essentially connects them to different networks. Those will not be able to communicate.

They indeed are wrong, will fix. Thanks for the heads up :)

[[UPDATE]] fixed

---

> what about the following case: one NAD on the default network, two namespaces, one VM in each, both use that NAD

I think that each namespace needs to have a NAD. Like Miguel said, they connect to the same network by the fact that the NADs' config name is the same.
Those are two different things: the namespace where the NAD is versus the underlying network. The question is whether we need to support a NAD in the default namespace while the VM is in a custom namespace. Some other features of kubevirt support it, while some don't.

EDIT - on the other hand, it might indeed not be supported, because the namespace acts as an abstraction layer.
> https://gist.github.com/RamLavi/91a25858a56f87e47039ceef99df662b#file-demo-scenario-2-L41
> should be green-network, right? not network-green, as in line 22

It does. Good catch, thanks!
If you want, you can now add `"allowPersistentIPs": true`, as support for it was recently added for UDN.

Maybe we should have one example that uses UserDefinedNetwork, because different teams might look at this gist and UserDefinedNetwork is more formal.

For passt, since it was removed from OVN:
```bash
cat <<EOF | run_kubectl apply -f -
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: primary-udn-kubevirt-binding
  namespace: default
spec:
  config: '{
    "cniVersion": "1.0.0",
    "name": "primary-udn-kubevirt-binding",
    "plugins": [
      {
        "type": "kubevirt-passt-binding"
      }
    ]
  }'
EOF
```
Need
```yaml
labels:
  k8s.ovn.org/primary-user-defined-network: ""
```
on the namespace where the primary NAD is created (blue-ns etc.)
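One way to apply that label is to create the namespace from a manifest that carries it. This is a hedged sketch: depending on the OVN-Kubernetes version, the label may need to be present already at namespace creation time, so creating it declaratively is the safer route:

```shell
# Create the namespace with the primary-UDN label already set,
# instead of `oc create ns` followed by a later `oc label`.
cat <<EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: blue-ns
  labels:
    k8s.ovn.org/primary-user-defined-network: ""
EOF
```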
Hmm, isn't that what we do in scenario #2, 2 VMs in different namespaces?