@trozet
Last active January 22, 2026 16:58
OVN EVPN

EVPN Support in OVN — Design & Architecture

Table of Contents

  1. Executive Summary
  2. High‑Level Architecture & Design
  3. EVPN Implementation Components
    • Data Model / Schema Options
    • Northd / Logical Flow Changes
    • ovn‑controller EVPN Modules
    • Physical Data‑Plane Integration
  4. Configuration & Usage
    • Enabling EVPN L2
    • L3‑VNI / VRF Support (Current State)
    • Open_vSwitch External‑IDs
    • Kernel / BGP Preconditions
  5. Data Plane Behavior
  6. Limitations & Future Work
  7. References
  8. Architecture Diagrams (Mermaid)

1. Executive Summary

OVN (Open Virtual Network) added support for EVPN primarily to enable integration of OVN logical switches into an external EVPN fabric under BGP control. This allows logical switch MAC/IP reachability to be exchanged with external routers and remote VTEPs, leveraging the EVPN control plane with VXLAN encapsulation.

  • EVPN L2 support: OVN uses a VNI per logical switch to export/learn MAC/IP entries to/from an EVPN BGP fabric.
  • Experimental configuration: EVPN support is gated by other_config:dynamic-routing-vni and related options on logical switches.
  • Emerging L3 / VRF integration: OVN introduces dynamic-routing-vrf-id on logical routers to integrate with host routing tables/VRFs, enabling BGP EVPN representation of L3 VRFs.

2. High‑Level Architecture & Design

OVN’s EVPN integration extends the existing Dynamic Routing framework, leveraging:

  • ovn‑northd: enriches SBDB with EVPN metadata and logical flows,
  • ovn‑controller: maintains EVPN state, learns remote endpoints, and programs OVS flows,
  • Kernel + FRR: handles the EVPN control plane (BGP), populates Linux bridges / FDB tables.

Architecture Overview

```
Logical Config (NBDB)
     |
northd translates
     v
Southbound DB (SBDB) with EVPN-VNI metadata
     |
ovn-controller per chassis
     |   \
     v    \--- Netlink neighbor/FDB events <--> Linux BR/VRF (via FRR/BGP EVPN)
  OpenFlow
     |
Open vSwitch (br-int + VXLAN tunnels)
     |
Fabric EVPN (VXLAN) via remote VTEPs / PEs
```

Key concepts:

  • EVPN L2 domain: Each Logical_Switch with a valid VNI participates in MAC/IP exchange with the EVPN fabric.
  • EVPN L3 / VRF: Logical routers can be associated with a VRF table ID which ties OVN routing tables into a host VRF and corresponding EVPN L3 VNI representation.

3. EVPN Implementation Components

3.1 Data Model & Schema Options

Logical_Switch.other_config EVPN Options

The following other_config keys control EVPN behavior on a logical switch:

  • dynamic-routing-vni: Integer VNI for EVPN L2 domain.
  • dynamic-routing-fdb-prefer-local: Prefer SBDB FDB vs local EVPN cache.
  • dynamic-routing-arp-prefer-local: Prefer SB neighbor/ARP vs EVPN cache.
  • dynamic-routing-redistribute: Controls whether to advertise local FDB (and IP) into the EVPN.
  • dynamic-routing-bridge-ifname, dynamic-routing-vxlan-ifname, dynamic-routing-advertise-ifname: Device name hints for EVPN integration.

These options are experimental and may be revised.
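Taken together, a switch configuration combining these keys might look like the sketch below. All names are hypothetical: `ls-evpn` is a placeholder switch, and `br100`/`vxlan100` are host device names an operator (or FRR) would manage.

```
# Hypothetical example: EVPN L2 for switch "ls-evpn" with VNI 100.
# br100/vxlan100 are placeholder host device names.
ovn-nbctl set Logical_Switch ls-evpn \
  other_config:dynamic-routing-vni=100 \
  other_config:dynamic-routing-redistribute=fdb \
  other_config:dynamic-routing-bridge-ifname=br100 \
  other_config:dynamic-routing-vxlan-ifname=vxlan100
```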

Logical_Router.dynamic-routing-vrf-id

  • This option lets a logical router specify a Linux routing table ID corresponding to a host VRF, tying OVN L3 routing into the host's VRF context (critical for EVPN L3 operation).

3.2 ovn‑northd / Logical Flow Changes

ovn‑northd propagates:

  • Which logical switches have an EVPN VNI,
  • Logical flows that carry VNI context and defer FDB lookup to either SBDB or the local EVPN cache,
  • Optional flows for ARP/ND preference logic.

This makes the OVN logical pipeline EVPN-aware at the SBDB level, which ovn-controller then consumes.
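The SBDB side can be inspected with standard tooling; a sketch, assuming a switch named `ls-evpn`:

```
# List the logical flows northd produced for one datapath;
# EVPN-aware stages should appear in the pipeline output.
ovn-sbctl lflow-list ls-evpn
# List SB datapath bindings and their metadata.
ovn-sbctl list Datapath_Binding
```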


3.3 ovn‑controller EVPN Modules

The core EVPN functionality lives in ovn‑controller modules:

EVPN Binding & FDB

  • Tracks remote VTEPs and remote EVPN FDB entries in memory,
  • Associates EVPN FDB entries (MAC/IP/VNI) with particular logical datapaths,
  • Responds to Linux neighbor/FDB events (usually injected by FRR via Netlink).

Modules include:

  • evpn-binding.c, evpn-binding.h
  • evpn-fdb.c, evpn-fdb.h

Neighbor Exchange & Host Interface Monitor

These modules watch and translate kernel neighbor/FDB notifications into ovn‑controller state.

Relevant files:

  • neighbor-exchange.c, neighbor-exchange-netlink.c
  • host-if-monitor.c

Physical Flows

physical.c installs OVS flows:

  • To steer traffic destined to EVPN endpoints out of VXLAN tunnels,
  • From VXLAN back into the local logical datapath.
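A quick way to observe these flows on a chassis is to filter the br-int flow table for tunnel metadata. This is a sketch; exact match fields and actions vary by OVN version:

```
# Dump br-int flows and keep tunnel-related matches/actions
# (tunnel ID carries the VNI, tunnel destination the remote VTEP).
ovs-ofctl dump-flows br-int | grep -i 'tun'
```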

4. Configuration & Usage

4.1 Enabling EVPN L2 on a Logical Switch

```
ovn-nbctl set Logical_Switch ls-evpn \
  other_config:dynamic-routing-vni=100 \
  other_config:dynamic-routing-redistribute=fdb
```

  • Assigns VNI=100 for EVPN L2.
  • With redistribute=fdb, local MACs/IPs are advertised.

4.2 Configuring L3‑VNI / VRF (Host Integration)

OVN does not currently define a full EVPN “L3 VNI” in the NB for logical routers, but you can map a logical router to a host VRF:

```
ovn-nbctl set Logical_Router lr-evpn \
  options:dynamic-routing-vrf-id=<table_id>
```

This tells ovn-controller which Linux routing table / VRF to use for injecting routes and correlating with external BGP/EVPN.
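On the host side, the matching VRF device can be created with iproute2. A sketch, where table ID 1005 and the name `vrf-evpn` are arbitrary examples:

```
# Bind a VRF device to routing table 1005 and bring it up.
ip link add vrf-evpn type vrf table 1005
ip link set vrf-evpn up
# Routes scoped to the VRF live in table 1005:
ip route show table 1005
```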


4.3 OVS External‑IDs for EVPN Tunnels

Set on the Open_vSwitch database:

```
ovs-vsctl set Open_vSwitch . \
  external-ids:ovn-evpn-vxlan-ports="4789" \
  external-ids:ovn-evpn-local-ip="192.0.2.10"
```

  • ovn-evpn-vxlan-ports: VXLAN UDP ports used for EVPN traffic.
  • ovn-evpn-local-ip: Local source IP for EVPN traffic.

4.4 Route Import (Netlink → OVN Logical Router) for BGP/EVPN

Recent dynamic routing enhancements in OVN support route import from the host kernel's routing tables into OVN logical routers.

Mechanism:

  • ovn-controller subscribes to Netlink route events (IPv4/IPv6) for relevant routing tables.
  • When a route belongs to a routing table matching a logical router's dynamic-routing-vrf-id, it may be:
    • Converted into an OVN Logical_Router_Static_Route, and/or
    • Considered for advertisement by BGP/EVPN via the host BGP daemon (e.g., FRR).

Benefits:

  • Enables automatic advertisement of host/service prefixes into the EVPN fabric.
  • Supports dynamic failover and ECMP when combined with FRR BGP.
  • Removes the need for manual static route injection between OVN and the host.

This capability is a foundational piece for EVPN L3 (Type-5) prefix distribution and VRF-based L3VNI topologies.
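Route import activity can be observed from the host side; a sketch, where table 1005 is an arbitrary example matching a router's dynamic-routing-vrf-id:

```
# Stream kernel route add/del events as ovn-controller and FRR see them.
ip monitor route
# Snapshot the VRF table that OVN correlates with.
ip route show table 1005
```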

4.5 Kernel & BGP Prerequisites

To integrate OVN with a BGP EVPN fabric:

  1. Run an FRR/BGP daemon on the host (optionally in a VRF) to peer with external routers.
  2. Create Linux bridge / VXLAN devices per VNI to anchor FDBs and carry EVPN control-plane traffic.
  3. Configure a host VRF whose table ID matches the logical router's dynamic-routing-vrf-id, so the host routing table aligns with EVPN.

OVN doesn’t provision these — CMS or operators must configure them.
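A minimal FRR sketch of the BGP daemon piece, where the AS number, neighbor address, and VNI policy are placeholders rather than a recommended design:

```
router bgp 65000
 neighbor 192.0.2.1 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 192.0.2.1 activate
  advertise-all-vni
 exit-address-family
```

With advertise-all-vni, FRR originates EVPN routes for every VNI it discovers from local kernel VXLAN devices, so the bridge/VXLAN devices from step 2 must exist first.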


5. Data Plane Behavior

Local VM to Remote EVPN Endpoint

  • OVN logical pipeline identifies VNI + destination MAC,
  • OVS Flow forwards via VXLAN to remote VTEP based on EVPN FDB.
  • Kernel + BGP populate EVPN FDB entries.

Remote EVPN to Local VM

  • VXLAN traffic arrives on host,
  • OVS decapsulates, uses physical flows to deliver to local logical port.

6. Limitations & Future Work

  • EVPN support is experimental and may change.
  • Future work: support for ECMP routes across multiple uplinks.

7. References

  1. OVN NB schema EVPN options (manpage excerpts).
  2. OVN v25.09 release notes with EVPN options.
  3. EVPN/UDN background for L2/L3 contexts.
  4. Example EVPN L3VNI integration via OVN BGP agent (OpenStack context).

8. Architecture Diagrams (Mermaid)

8.1 High-Level Control & Data Plane

```mermaid
flowchart LR
  subgraph ControlPlane[Control Plane]
    NBDB[(NBDB)]
    northd[northd]
    SBDB[(SBDB)]
    ovnCtrl[ovn-controller]
    FRR(BGP/EVPN Daemon)
    Kernel((Linux Kernel RIB/FDB))
  end

  subgraph DataPlane[Data Plane]
    OVS[Open vSwitch]
    Fabric[BGP EVPN Fabric]
  end

  NBDB --> northd --> SBDB --> ovnCtrl

  %% Netlink relationships
  FRR -- "Netlink (routes,FDB,neighbors)" --> Kernel
  Kernel -- "Netlink (routes,FDB,neighbors)" --> ovnCtrl
  ovnCtrl -- "Netlink (route add/del)" --> Kernel
  Kernel -- "RIB sync" --> FRR

  %% Data plane connections
  ovnCtrl --> OVS
  OVS <---> Fabric
  FRR <---> Fabric
```

8.2 MAC-VRF (EVPN L2) — VM→VM in Same MAC-VRF

MAC-VRF Data Flow (Forwarding Plane)

```mermaid
sequenceDiagram
    participant VM1
    participant OVS1 as OVS/Hypervisor1
    participant FAB as EVPN Fabric (L2 VNI)
    participant OVS2 as OVS/Hypervisor2
    participant VM2

    VM1->>OVS1: ARP Request (Broadcast)
    OVS1->>FAB: VXLAN BUM Flood (VNI)
    FAB->>OVS2: Deliver BUM
    OVS2->>VM2: ARP Request
    VM2-->>OVS2: ARP Reply (Unicast)
    OVS2-->>FAB: VXLAN Unicast (VNI)
    FAB-->>OVS1: Deliver Unicast
    OVS1-->>VM1: ARP Reply
```

MAC-VRF Control Plane (EVPN Type-2)

```mermaid
sequenceDiagram
    participant VM
    participant K as Kernel (FDB/ARP)
    participant FRR
    participant FAB as EVPN Fabric

    VM->>K: ARP/ND Populates MAC/IP
    K->>FRR: Netlink FDB/Neighbor Event
    FRR->>FAB: Advertise EVPN Type-2 (MAC+IP)
    FAB-->>FRR: Remote Type-2 Updates
    FRR-->>K: Install Remote MAC/IP (FDB/Neighbor)
```

8.3 IP-VRF (EVPN L3)

IP-VRF Data Flow (Forwarding Plane)

```mermaid
sequenceDiagram
    participant VM as VM/Pod (Local)
    participant OVS as OVS + OVN Pipelines
    participant LR as OVN Logical Router
    participant K as Kernel VRF (RIB, table=vrf-id)
    participant FRR as FRR (BGP EVPN)
    participant FAB as EVPN Fabric (L3 VNI/IP-VRF)
    participant FRR2 as Remote FRR
    participant K2 as Remote Kernel VRF
    participant VM2 as Remote VM/Server

    %% Local egress
    VM->>OVS: IP packet to remote prefix
    OVS->>LR: Logical routing lookup
    LR->>OVS: Forward to external/VRF-facing port
    OVS->>K: Send packet into VRF (table=vrf-id)

    %% VRF routing on local node
    K->>FRR: Lookup next-hop (using Type-5 routes learned via EVPN)
    FRR->>FAB: Encapsulate/forward to remote VTEP (underlay)

    %% Fabric transit
    FAB->>FRR2: Deliver packet to remote PE (VTEP)

    %% Remote VRF routing
    FRR2->>K2: Decapsulate, route in remote VRF
    K2->>VM2: Deliver packet via local L2 (or local OVN LR/bridge)

    %% Optional return traffic
    VM2-->>K2: Response packet
    K2-->>FRR2: Route via VRF
    FRR2-->>FAB: Send back over EVPN
    FAB-->>FRR: Deliver to local PE
    FRR-->>K: Route in local VRF
    K-->>OVS: Send into OVN external port
    OVS-->>VM: Deliver to source VM/Pod
```

IP-VRF Control Plane (EVPN Type-5)

```mermaid
sequenceDiagram
    participant OVN as OVN Logical Router
    participant Ctrl as ovn-controller
    participant K as Kernel VRF (RIB)
    participant FRR
    participant FAB as EVPN Fabric (L3 VNI)

    OVN->>Ctrl: Logical Route (prefix)
    Ctrl->>K: Netlink Route Add (table=vrf-id)
    K->>FRR: RIB Sync (via Zebra)
    FRR->>FAB: Advertise EVPN Type-5 (IP Prefix)
    FAB-->>FRR: Remote Type-5 Updates
    FRR-->>K: Install Remote Prefix (Kernel Route)
    K-->>Ctrl: Netlink Route Event (remote)
    Ctrl-->>OVN: Optional Route Import (policy)
```