@Fiooodooor
Last active August 11, 2025 07:30
Related Grok conversations:
- https://x.com/i/grok/share/u7qUed0FwgrMDIA7nMJl11iLL
- Coverity: https://x.com/i/grok/share/eZ60X6fI3c7i6ZUutW0eeDAm4
- NetBox: https://x.com/i/grok/share/kBw2S30rCgA3mZ5VMKtESNdVS
- https://x.com/i/grok/share/UZUEebJmIIqhEFdSZDUZYukDe
- "Configuring HCI Harvester for VLAN 1076": https://x.com/i/grok/share/motnb7lGG6VLPnop4m9ctsxVi
- "Ansible Script for PXE Boot via BMC LAN": https://x.com/i/grok/share/Z4Cxy97XDEjGbWMAnJEptkSln
- "Ansible Script for PXE Boot via BMC": https://x.com/i/grok/share/8dU03Skr2MCECoGw7SJsscXsC
- "Harvester Cluster Networking and Hugepages Setup": https://x.com/i/grok/share/uWufwtXjrgOg11yc3Q64qk9ke
I) I have a Harvester cluster with RKE2 up and running, accessible at 10.123.235.200/22. I also have standalone Rancher deployed, as well as the Argo stack.
Everything runs on Intel Xeon CPUs with VT-d enabled, and each node has Intel E810 network cards with SR-IOV support enabled and ready to use.
I want to deploy a 3-node ephemeral Kubernetes cluster on virtual machines provisioned on demand, using custom builds of DPDK and the ice driver for the host's E810 network interfaces, in a way that lets me use them as virtual NICs and/or VF drivers inside the VMs.
I need the best known method to do this in an automated way, for example using Ansible, Helm, or just Argo. The purpose is to test Intel ice drivers, on top of which MTL and MCM from https://www.github.com/OpenVisualCloud are run.
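As a sketch of the host-side automation this asks for, the Ansible tasks below create SR-IOV VFs on an E810 port and bind one to vfio-pci for passthrough. The interface name, VF count, and PCI address are illustrative assumptions, not values from this setup:

```yaml
# Hypothetical Ansible tasks; interface name (ens785f0), VF count,
# and VF PCI address (0000:18:01.0) are assumptions to adjust per node.
- name: Create SR-IOV VFs on the E810 PF
  ansible.builtin.shell: echo 4 > /sys/class/net/ens785f0/device/sriov_numvfs
  args:
    creates: /sys/class/net/ens785f0/device/virtfn0

- name: Ensure the vfio-pci module is loaded
  community.general.modprobe:
    name: vfio-pci
    state: present

- name: Bind a VF to vfio-pci for VM passthrough
  ansible.builtin.shell: |
    echo vfio-pci > /sys/bus/pci/devices/0000:18:01.0/driver_override
    echo 0000:18:01.0 > /sys/bus/pci/drivers_probe
```

In a Harvester context, the bound VFs would then be exposed to VMs through KubeVirt host devices or PCI passthrough rather than attached directly.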
# RKE2 configuration optimized for PCI passthrough, hugepages, NUMA, CPU topology, and VM performance
write-kubeconfig-mode: "0644"
tls-cipher-suites: "TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256"
kube-apiserver-arg:
- "feature-gates=DevicePlugins=true,HugePageStorage=true,TopologyManager=true"
- "enable-admission-plugins=NodeRestriction,PodNodeSelector,PodSecurity,DefaultStorageClass,ServiceAccount"
kubelet-arg:
- "feature-gates=DevicePlugins=true,HugePageStorage=true,TopologyManager=true,CPUCFSQuotaPeriod=50ms"
- "cpu-manager-policy=static"
- "topology-manager-policy=best-effort"
- "topology-manager-scope=pod"
- "kube-reserved=cpu=500m,memory=512Mi,ephemeral-storage=1Gi"
- "system-reserved=cpu=500m,memory=512Mi,ephemeral-storage=1Gi"
- "eviction-hard=memory.available<5%,nodefs.available<10%"
- "eviction-soft=memory.available<10%,nodefs.available<15%"
- "eviction-soft-grace-period=memory.available=1m30s,nodefs.available=1m30s"
- "max-pods=100"
- "allowed-unsafe-sysctls=kernel.shm*,kernel.msg*,kernel.sem,fs.mqueue.*,net.*"
- "container-runtime-endpoint=unix:///run/k3s/containerd/containerd.sock"
node-label:
- "feature.node.kubernetes.io/pci-passthrough=enabled"
- "feature.node.kubernetes.io/hugepages=enabled"
- "feature.node.kubernetes.io/numa=enabled"
node-taint:
- "node-role.kubernetes.io/worker=:NoSchedule" # Optional, adjust based on node roles
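To exercise the static CPU manager, hugepages, and topology settings above, a guaranteed-QoS pod requesting an SR-IOV VF resource could look like the sketch below. The resource name `intel.com/intel_sriov_netdevice` and the network attachment name `sriov-net` follow common SR-IOV device plugin conventions but are assumptions; the actual names depend on the plugin's ConfigMap and NetworkAttachmentDefinition:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-test
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-net   # NetworkAttachmentDefinition name (assumed)
spec:
  containers:
  - name: dpdk
    image: dpdk-test:latest                  # placeholder image
    resources:
      requests:
        cpu: "4"                             # integer CPUs + requests==limits => exclusive cores
        memory: 2Gi
        hugepages-2Mi: 1Gi
        intel.com/intel_sriov_netdevice: "1" # assumed device plugin resource name
      limits:
        cpu: "4"
        memory: 2Gi
        hugepages-2Mi: 1Gi
        intel.com/intel_sriov_netdevice: "1"
    volumeMounts:
    - mountPath: /dev/hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
```

With `topology-manager-scope=pod` set as above, the CPU cores, hugepages, and VF should be aligned on the same NUMA node when possible.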
II) Reiterate the above, but focus on the NIC part, utilizing the Intel Ethernet Operator and/or the SR-IOV Network Device Plugin for Kubernetes.
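For the operator-based approach, the SR-IOV Network Operator's `SriovNetworkNodePolicy` CR is the usual entry point for declaring VF pools. A hedged sketch for E810 VFs bound to vfio-pci follows; the PF name, namespace, and device ID are assumptions to verify against the actual hardware:

```yaml
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: e810-vfio
  namespace: sriov-network-operator     # operator namespace (assumed)
spec:
  resourceName: intel_sriov_dpdk        # exposed as intel.com/intel_sriov_dpdk
  nodeSelector:
    feature.node.kubernetes.io/pci-passthrough: "enabled"
  numVfs: 8
  nicSelector:
    vendor: "8086"
    deviceID: "1593"                    # E810 variant; adjust to your SKU
    pfNames: ["ens785f0"]               # assumed PF name
  deviceType: vfio-pci                  # use "netdevice" for kernel-ice VFs instead
```

The operator then handles VF creation, driver binding, and device-plugin registration on matching nodes, replacing the manual sysfs steps.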