<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Sd-Wan on dantas.io</title><link>https://dantas.io/tags/sd-wan/</link><description>Recent content in Sd-Wan on dantas.io</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Mon, 13 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://dantas.io/tags/sd-wan/index.xml" rel="self" type="application/rss+xml"/><item><title>Architectural Blueprint - Enterprise Data Center Interconnection with Google Cloud via Cisco Catalyst 8000V</title><link>https://dantas.io/p/architectural-blueprint-enterprise-data-center-interconnection-with-google-cloud-via-cisco-catalyst-8000v/</link><pubDate>Mon, 13 Apr 2026 00:00:00 +0000</pubDate><guid>https://dantas.io/p/architectural-blueprint-enterprise-data-center-interconnection-with-google-cloud-via-cisco-catalyst-8000v/</guid><description>&lt;h1 id="architectural-blueprint-enterprise-data-center-interconnection-with-google-cloud-via-cisco-catalyst-8000v"&gt;Architectural Blueprint: Enterprise Data Center Interconnection with Google Cloud via Cisco Catalyst 8000V
&lt;/h1&gt;&lt;p&gt;&lt;strong&gt;Audience:&lt;/strong&gt; Principal Network Architects, Cloud Platform Engineers, CTO/CIO Office&lt;br&gt;
&lt;strong&gt;Version:&lt;/strong&gt; 1.0&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="business-context"&gt;Business Context
&lt;/h2&gt;&lt;p&gt;The enterprise hybrid cloud is not a transitional state; it is the permanent operating model for any organization carrying more than a decade of accumulated infrastructure investment. The notion that workloads will cleanly &amp;ldquo;lift and shift&amp;rdquo; into a public cloud provider has been empirically refuted by migration programs at scale. Gartner (2023) projected that through 2027, more than 50% of enterprises will use industry cloud platforms to accelerate their business initiatives, yet the on-premises footprint — particularly for latency-sensitive transaction processing, regulated data residency workloads, and legacy mainframe-adjacent applications — will persist indefinitely. The architectural challenge, therefore, is not elimination of the data center but the construction of a high-fidelity, operationally unified network fabric that spans both domains.&lt;/p&gt;
&lt;p&gt;For enterprises that have standardized on Cisco&amp;rsquo;s routing and SD-WAN ecosystem — whether classic IOS-XE DMVPN fabrics or the Viptela-based SD-WAN architecture (Cisco Systems, 2023a) — the imperative is clear: extend the existing control plane and policy framework into Google Cloud Platform (GCP) without forking the operational model into two disconnected toolchains. The Cisco Catalyst 8000V Edge Software (C8000V), running as a compute-optimized virtual machine instance on GCP Compute Engine, serves as the architectural bridge that preserves investment in EIGRP/OSPF/BGP routing policy, Cisco SD-WAN overlay orchestration via vManage, and advanced traffic engineering capabilities (NBAR2, PBR, application-aware routing) while integrating natively with GCP&amp;rsquo;s Software-Defined Network control plane through the Network Connectivity Center (NCC) (Google Cloud, 2024a).&lt;/p&gt;
&lt;p&gt;The business case is not theoretical. Organizations operating Cisco SD-WAN fabrics with 200+ branch sites face a concrete problem: cloud-destined traffic from those branches is backhauled through the data center, traversing an increasingly congested WAN link, only to egress through a single internet breakout point toward GCP. Deploying C8000V instances as SD-WAN edge nodes inside GCP VPCs enables direct branch-to-cloud connectivity via the SD-WAN overlay, eliminating the backhaul penalty entirely and reducing end-to-end application latency by 40–60% for SaaS and cloud-native workloads (Cisco Systems, 2023b).&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="problem-statement-the-layer-2-illusion"&gt;Problem Statement: The Layer 2 Illusion
&lt;/h2&gt;&lt;p&gt;Before any architecture can be selected, a fundamental misconception must be confronted head-on: &lt;strong&gt;you cannot stretch a Layer 2 broadcast domain into a native GCP VPC.&lt;/strong&gt; This is not a limitation that can be engineered around with creative VLAN tagging or OTV. It is a hard constraint imposed by the design of GCP&amp;rsquo;s software-defined network architecture.&lt;/p&gt;
&lt;h3 id="why-layer-2-extension-fails-on-gcp"&gt;Why Layer 2 Extension Fails on GCP
&lt;/h3&gt;&lt;p&gt;Google Cloud&amp;rsquo;s VPC network is a &lt;strong&gt;pure Layer 3 Software-Defined Network&lt;/strong&gt; built on the Andromeda virtualization stack (Dalton et al., 2018). Andromeda operates as a distributed network virtualization layer that programs forwarding rules directly into the hypervisor&amp;rsquo;s virtual switch. Every VM&amp;rsquo;s vNIC is connected to a virtual switch that performs L3 forwarding — there is no learning of MAC addresses, no flooding, no Spanning Tree Protocol participation. The implications are absolute:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;802.1Q VLAN tags are silently stripped.&lt;/strong&gt; A VM transmitting a tagged frame will have the tag removed by the Andromeda dataplane before the packet reaches the VPC fabric. There is no configuration knob to change this behavior (Google Cloud, 2024b).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;BUM traffic (Broadcast, Unknown Unicast, Multicast) is dropped.&lt;/strong&gt; ARP requests do not flood; instead, Andromeda intercepts ARP and responds with a proxy ARP mechanism backed by the VPC&amp;rsquo;s metadata-driven IP-to-MAC mapping. Gratuitous ARP, which many legacy clustering solutions (e.g., Windows NLB, F5 LTM active-standby failover) depend on for VIP migration, does not propagate (Google Cloud, 2024b).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Multicast is unsupported at the VPC layer.&lt;/strong&gt; OSPF adjacencies using 224.0.0.5/6, EIGRP hellos on 224.0.0.10, VRRP, and HSRP — all of which rely on IP multicast — cannot form natively between GCP VMs using standard multicast group addresses. Routing protocol adjacencies must use &lt;strong&gt;unicast&lt;/strong&gt; peering (Google Cloud, 2024b).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;This means that technologies designed to stretch L2 domains — VXLAN with flood-and-learn, OTV, LISP in L2 mode — are architecturally incompatible with native GCP VPC networking. Any design that assumes L2 adjacency between on-premises hosts and GCP VMs is building on a false premise.&lt;/p&gt;
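&lt;p&gt;In practice, this means any routing adjacency formed directly across the VPC fabric (rather than inside a GRE tunnel, which carries multicast as encapsulated unicast) must be explicitly configured for unicast hellos. A minimal IOS-XE sketch with hypothetical addresses, forcing OSPF into non-broadcast mode with a statically defined neighbor:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;interface GigabitEthernet1
 ip ospf network non-broadcast
!
router ospf 10
 network 10.10.1.0 0.0.0.255 area 0
 ! hellos are sent as unicast to the listed neighbor, not to 224.0.0.5
 neighbor 10.10.1.5
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;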
&lt;h3 id="the-only-exception-gcve"&gt;The Only Exception: GCVE
&lt;/h3&gt;&lt;p&gt;The sole environment within Google Cloud that provides genuine Layer 2 semantics is &lt;strong&gt;Google Cloud VMware Engine (GCVE)&lt;/strong&gt;, which runs VMware NSX-T on bare-metal nodes, creating an isolated L2/L3 overlay network outside the Andromeda fabric. This is a valid option (discussed below as Option C), but it carries a fundamentally different cost and operational model.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="architecture-options"&gt;Architecture Options
&lt;/h2&gt;&lt;p&gt;Three architecturally sound approaches exist for establishing hybrid connectivity between on-premises Cisco-centric data centers and GCP workloads. Each occupies a different position on the spectrum of cloud-native alignment versus operational continuity with existing network toolchains.&lt;/p&gt;
&lt;h3 id="option-a-native-gcp-ha-vpn-with-cloud-router-bgp"&gt;Option A: Native GCP HA VPN with Cloud Router BGP
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; Two GCP HA VPN gateways, each with two interfaces, establishing four IPsec tunnels to on-premises VPN concentrators (e.g., Cisco ASA, Cisco ISR/CSR). Dynamic routing is provided via eBGP sessions between the on-premises router and GCP Cloud Router, which programs learned routes into the VPC via the Andromeda control plane (Google Cloud, 2024c).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you gain:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fully managed VPN infrastructure; no VM lifecycle management.&lt;/li&gt;
&lt;li&gt;99.99% SLA when configured with the prescribed four-tunnel HA topology.&lt;/li&gt;
&lt;li&gt;Route exchange via Cloud Router&amp;rsquo;s native eBGP implementation, using a private ASN of your choice.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What you lose:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;No visibility into tunnel-level telemetry beyond basic GCP metrics (no NBAR2, no per-application flow analysis).&lt;/li&gt;
&lt;li&gt;No advanced traffic engineering: no PBR, no DMVPN spoke-to-spoke direct tunnels, no application-aware routing.&lt;/li&gt;
&lt;li&gt;BGP is the only supported routing protocol. Enterprises running pure EIGRP fabrics must either redistribute (introducing administrative distance conflicts and potential routing loops) or re-architect their on-premises control plane.&lt;/li&gt;
&lt;li&gt;Maximum of 3 Gbps per tunnel, with an aggregate cap per HA VPN gateway (Google Cloud, 2024c).&lt;/li&gt;
&lt;/ul&gt;
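&lt;p&gt;For reference, the Option A control plane reduces to a handful of &lt;code&gt;gcloud&lt;/code&gt; commands. The following sketch creates the Cloud Router and one of the four tunnels with its eBGP session; resource names, region, ASNs, and the peer address (203.0.113.10) are placeholders, not prescriptions:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;# Cloud Router that holds the eBGP sessions (private ASN)
gcloud compute routers create cr-hybrid --network=transit-vpc \
    --region=us-central1 --asn=65002

# HA VPN gateway plus a resource representing the on-prem peer
gcloud compute vpn-gateways create ha-vpn-gw --network=transit-vpc --region=us-central1
gcloud compute external-vpn-gateways create onprem-gw --interfaces=0=203.0.113.10

# One of the four tunnels (repeat per gateway interface / peer interface)
gcloud compute vpn-tunnels create tun-0 --region=us-central1 \
    --vpn-gateway=ha-vpn-gw --interface=0 --peer-external-gateway=onprem-gw \
    --peer-external-gateway-interface=0 --ike-version=2 \
    --shared-secret=CHANGE_ME --router=cr-hybrid

# Attach the tunnel to the router and bring up the eBGP peer
gcloud compute routers add-interface cr-hybrid --interface-name=if-tun-0 \
    --vpn-tunnel=tun-0 --region=us-central1
gcloud compute routers add-bgp-peer cr-hybrid --peer-name=onprem-0 \
    --interface=if-tun-0 --peer-asn=65010 --region=us-central1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;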
&lt;h3 id="option-b-cisco-catalyst-8000v--layer-3-greipsec-overlay"&gt;Option B: Cisco Catalyst 8000V — Layer 3 GRE/IPsec Overlay
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; One or more C8000V instances deployed as Compute Engine VMs within a dedicated &amp;ldquo;transit&amp;rdquo; VPC. The C8000V establishes GRE-over-IPsec tunnels (or native IPsec with VTI) back to on-premises Cisco routers or SD-WAN edge devices. The C8000V runs a full IOS-XE routing stack, participating in the enterprise&amp;rsquo;s existing IGP/EGP domain. Routes learned from on-premises are injected into the GCP VPC via NCC Router Appliance peering with Cloud Router over eBGP (Google Cloud, 2024a; Cisco Systems, 2024).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you gain:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Full IOS-XE feature set: DMVPN (NHRP + mGRE + IPsec), EIGRP, OSPF, MP-BGP with VRF-Lite, PBR, IP SLA, NBAR2/AVC for application visibility, BFD for sub-second failover detection.&lt;/li&gt;
&lt;li&gt;SD-WAN overlay integration: the C8000V can register as a vEdge/cEdge node in vManage, extending the SD-WAN fabric into GCP with centralized policy orchestration, application-aware routing, and SLA-based path selection across multiple WAN transports (Cisco Systems, 2023a).&lt;/li&gt;
&lt;li&gt;Unified operational model: the same NOC team, the same monitoring toolchain (ThousandEyes, vManage, DNA Center), the same change management procedures.&lt;/li&gt;
&lt;li&gt;VRF segmentation within GCP: multiple routing tables on a single C8000V, mapped to different VPCs via multiple vNICs, enabling multi-tenancy without deploying separate appliance instances per tenant.&lt;/li&gt;
&lt;/ul&gt;
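&lt;p&gt;Because Andromeda strips 802.1Q tags, the VRF-to-VPC mapping cannot use dot1q subinterfaces; each VRF is bound to a whole vNIC, one per attached VPC. A minimal IOS-XE sketch (the VRF name and interface numbering are illustrative):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;vrf definition TENANT-A
 address-family ipv4
 exit-address-family
!
! GigabitEthernet2 = second vNIC, attached to Tenant A's VPC subnet
interface GigabitEthernet2
 vrf forwarding TENANT-A
 ip address dhcp
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;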
&lt;p&gt;&lt;strong&gt;What you lose:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;VM lifecycle management: patching IOS-XE, right-sizing the Compute Engine instance (minimum &lt;code&gt;n2-standard-4&lt;/code&gt; for production throughput; &lt;code&gt;n2-standard-8&lt;/code&gt; recommended for &amp;gt;2 Gbps encrypted throughput), monitoring CPU/memory utilization.&lt;/li&gt;
&lt;li&gt;Throughput ceiling bounded by the VM&amp;rsquo;s vNIC bandwidth cap (up to 32 Gbps on &lt;code&gt;n2-standard-32&lt;/code&gt;, but IPsec encryption overhead reduces effective throughput by 30–50% depending on packet size and cipher suite) (Google Cloud, 2024d).&lt;/li&gt;
&lt;li&gt;Complexity of the NCC integration (detailed under the Final Recommendation below).&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="option-c-google-cloud-vmware-engine-gcve-with-vmware-hcx"&gt;Option C: Google Cloud VMware Engine (GCVE) with VMware HCX
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Architecture:&lt;/strong&gt; A GCVE private cloud deployed in a GCP region, running vSphere/vSAN/NSX-T on dedicated bare-metal nodes. VMware HCX provides L2 extension (Network Extension), vMotion (live migration), and bulk migration (HCX Replication Assisted vMotion) between on-premises vSphere and GCVE. The NSX-T overlay provides full L2/L3 network virtualization with microsegmentation (Google Cloud, 2024e).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;What you gain:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;True Layer 2 extension: VLAN-backed port groups on-premises can be stretched to GCVE segments, preserving IP addresses, MAC addresses, and broadcast domain membership.&lt;/li&gt;
&lt;li&gt;Workload mobility without re-IP: VMs can vMotion between on-premises and cloud with zero downtime and no IP address change.&lt;/li&gt;
&lt;li&gt;NSX-T distributed firewall for east-west microsegmentation.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;&lt;strong&gt;What you lose:&lt;/strong&gt;&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Cost:&lt;/strong&gt; GCVE private clouds require a minimum three-node cluster of bare-metal hosts. The entry-level configuration (3x &lt;code&gt;ve1-standard-72&lt;/code&gt; nodes) carries a committed monthly spend that dwarfs the cost of a pair of C8000V instances by an order of magnitude.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Operational divergence:&lt;/strong&gt; GCVE introduces a parallel network control plane (NSX-T) alongside the existing Cisco fabric, creating a bifurcated operational model that requires NSX-T expertise that most Cisco-centric teams do not possess.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Blast radius:&lt;/strong&gt; L2 extension via HCX Network Extension carries the risk of broadcast storm propagation from on-premises into the GCVE segment. A misbehaving NIC in the on-premises VLAN can saturate the HCX tunnel and degrade GCVE workloads.&lt;/li&gt;
&lt;/ul&gt;
&lt;hr&gt;
&lt;h2 id="trade-off-analysis"&gt;Trade-Off Analysis
&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Dimension&lt;/th&gt;
 &lt;th&gt;Option A: GCP HA VPN&lt;/th&gt;
 &lt;th&gt;Option B: C8000V (GRE/IPsec)&lt;/th&gt;
 &lt;th&gt;Option C: GCVE + HCX&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Latency (overlay overhead)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Low (native IPsec, no GRE header)&lt;/td&gt;
 &lt;td&gt;Medium (GRE + IPsec adds 66–70 bytes per packet; TCP MSS clamping required)&lt;/td&gt;
 &lt;td&gt;Low-Medium (HCX WAN optimization reduces effective latency for bulk transfers)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Throughput ceiling&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;3 Gbps/tunnel; limited aggregate&lt;/td&gt;
 &lt;td&gt;VM-bound; 4–10 Gbps realistic with &lt;code&gt;n2-standard-8&lt;/code&gt; and AES-NI&lt;/td&gt;
 &lt;td&gt;Dedicated bare-metal; 25 Gbps per host NIC&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Monthly cost (production HA)&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;~$150–300/month (tunnels + egress)&lt;/td&gt;
 &lt;td&gt;~$800–2,000/month (2x C8000V VMs + BYOL/paygo licensing + egress)&lt;/td&gt;
 &lt;td&gt;~$15,000–40,000+/month (3-node minimum GCVE cluster)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Operational complexity&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Low (managed service)&lt;/td&gt;
 &lt;td&gt;Medium-High (IOS-XE lifecycle, NCC integration, HA design)&lt;/td&gt;
 &lt;td&gt;High (vSphere + NSX-T + HCX operational burden)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Control plane richness&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;BGP only&lt;/td&gt;
 &lt;td&gt;Full IOS-XE: EIGRP, OSPF, MP-BGP, DMVPN, PBR, NBAR2, SD-WAN&lt;/td&gt;
 &lt;td&gt;NSX-T + BGP (Cloud Router peering via GCVE edge)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Unified Cisco management&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;No (GCP-native console only)&lt;/td&gt;
 &lt;td&gt;Yes (vManage, DNA Center, ThousandEyes)&lt;/td&gt;
 &lt;td&gt;No (VMware vCenter/NSX Manager)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;L2 extension capability&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;No&lt;/td&gt;
 &lt;td&gt;No (L3 only; by design)&lt;/td&gt;
 &lt;td&gt;Yes (HCX Network Extension)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;&lt;strong&gt;Multi-tenancy / VRF&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Limited (one Cloud Router per VPC)&lt;/td&gt;
 &lt;td&gt;Yes (VRF-Lite with one vNIC per VRF)&lt;/td&gt;
 &lt;td&gt;Yes (NSX-T T1 gateways per tenant)&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;p&gt;The trade-off matrix reveals a clear pattern: &lt;strong&gt;Option B occupies the optimal position for Cisco-centric enterprises that need advanced traffic engineering, unified management, and cost efficiency without the L2 extension requirement.&lt;/strong&gt; Option A is appropriate for organizations with simple BGP-based routing needs and no investment in Cisco SD-WAN. Option C is justified only when L2 extension and vMotion-based workload mobility are non-negotiable requirements — a scenario that typically applies to the first 12–18 months of a migration program before applications are re-platformed.&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="final-recommendation-option-b--cisco-catalyst-8000v-with-ncc-integration"&gt;Final Recommendation: Option B — Cisco Catalyst 8000V with NCC Integration
&lt;/h2&gt;&lt;p&gt;For enterprises operating Cisco routing and SD-WAN infrastructure, the C8000V deployed on GCP Compute Engine, integrated with the Network Connectivity Center (NCC), is the architecturally sound and operationally pragmatic choice.&lt;/p&gt;
&lt;h3 id="data-plane-architecture"&gt;Data Plane Architecture
&lt;/h3&gt;&lt;p&gt;The data plane consists of &lt;strong&gt;GRE tunnels encapsulated within IPsec transport mode&lt;/strong&gt; (or, preferably, IPsec tunnel mode with VTI interfaces for simplified QoS and routing configuration). The encapsulation stack, from outer to inner:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-gdscript3" data-lang="gdscript3"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="n"&gt;Outer&lt;/span&gt; &lt;span class="ne"&gt;IP&lt;/span&gt; &lt;span class="n"&gt;Header&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="n"&gt;ESP&lt;/span&gt; &lt;span class="n"&gt;Header&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="n"&gt;GRE&lt;/span&gt; &lt;span class="n"&gt;Header&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="n"&gt;Inner&lt;/span&gt; &lt;span class="ne"&gt;IP&lt;/span&gt; &lt;span class="n"&gt;Header&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt; &lt;span class="n"&gt;Payload&lt;/span&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="mi"&gt;22&lt;/span&gt;&lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="mi"&gt;4&lt;/span&gt;&lt;span class="o"&gt;-&lt;/span&gt;&lt;span class="mi"&gt;8&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="mi"&gt;20&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;This encapsulation adds 66–70 bytes of overhead per packet. For a standard 1500-byte MTU on the GCP VPC (configurable up to 8896 bytes for intra-VPC traffic), the effective Maximum Segment Size (MSS) for TCP traffic traversing the tunnel must be clamped:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;ip tcp adjust-mss 1360
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;On the Tunnel interface:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;span class="lnt"&gt;6
&lt;/span&gt;&lt;span class="lnt"&gt;7
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;interface Tunnel100
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ip mtu 1400
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; ip tcp adjust-mss 1360
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tunnel source GigabitEthernet1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tunnel destination &amp;lt;on-prem-peer-public-ip&amp;gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tunnel mode ipsec ipv4
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; tunnel protection ipsec profile IPSEC_PROFILE
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;For SD-WAN overlay integration, the C8000V registers with vManage as a cEdge device, and the IPsec tunnels to on-premises WAN edge nodes are established and orchestrated via the SD-WAN control plane (vBond, vSmart). This eliminates the need for manual tunnel configuration and enables centralized policy-driven path selection (Cisco Systems, 2023a).&lt;/p&gt;
&lt;h3 id="control-plane-architecture--the-ncc-imperative"&gt;Control Plane Architecture — The NCC Imperative
&lt;/h3&gt;&lt;p&gt;Here is the critical integration point that separates a functional deployment from a production-grade architecture: &lt;strong&gt;routes learned by the C8000V from on-premises must be programmatically injected into the GCP VPC routing table.&lt;/strong&gt; The C8000V, as a user-space VM, has no native mechanism to program Andromeda&amp;rsquo;s forwarding tables. Static routes in the GCP console pointing to the C8000V&amp;rsquo;s vNIC are fragile, non-scalable, and operationally unacceptable for any environment with more than a handful of prefixes.&lt;/p&gt;
&lt;p&gt;The solution is the &lt;strong&gt;Network Connectivity Center (NCC) Router Appliance&lt;/strong&gt; integration (Google Cloud, 2024a):&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Register the C8000V as an NCC Router Appliance spoke.&lt;/strong&gt; This is performed via the GCP Console or &lt;code&gt;gcloud&lt;/code&gt; CLI, associating the C8000V&amp;rsquo;s Compute Engine instance and its internal vNIC IP with an NCC Hub.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Establish eBGP peering between the C8000V and the Cloud Router.&lt;/strong&gt; The Cloud Router, which is the NCC Hub&amp;rsquo;s route reflector and Andromeda control plane ingestion point, peers with the C8000V over an internal eBGP session. The Cloud Router is assigned a private ASN (65002 in the configuration below), and the C8000V uses its own private ASN (65001).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;router bgp 65001
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; bgp router-id 10.10.1.2
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; bgp log-neighbor-changes
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; neighbor 10.10.1.1 remote-as 65002
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; neighbor 10.10.1.1 description GCP-CLOUD-ROUTER
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; !
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; address-family ipv4 unicast
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; network 172.16.0.0 mask 255.255.0.0
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; neighbor 10.10.1.1 activate
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; exit-address-family
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Cloud Router propagates learned routes into the VPC.&lt;/strong&gt; Once the Cloud Router receives prefixes from the C8000V via eBGP, it programs those routes as &lt;strong&gt;dynamic custom routes&lt;/strong&gt; in the VPC routing table via the Andromeda control plane. These routes are then visible to all VMs in the VPC (or in peered VPCs if custom route export is enabled) (Google Cloud, 2024a).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Bidirectional route exchange.&lt;/strong&gt; The Cloud Router also advertises the VPC&amp;rsquo;s subnet routes back to the C8000V, which then redistributes them into the on-premises IGP (EIGRP, OSPF) or SD-WAN overlay.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;&lt;strong&gt;Critical NCC constraint:&lt;/strong&gt; the eBGP session between the C8000V and Cloud Router must use &lt;strong&gt;RFC 1918 addresses on the same subnet.&lt;/strong&gt; The C8000V&amp;rsquo;s internal vNIC IP and the Cloud Router&amp;rsquo;s peering IP must be in the same VPC subnet. Additionally, each BGP peer on the Cloud Router must have &lt;code&gt;--peer-ip-address&lt;/code&gt; set to the corresponding C8000V&amp;rsquo;s internal IP (Google Cloud, 2024a).&lt;/p&gt;
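&lt;p&gt;The steps above can be sketched with &lt;code&gt;gcloud&lt;/code&gt; as follows. Hub, spoke, router, and instance names are placeholders; the IPs and ASNs follow the example configuration (Cloud Router 10.10.1.1 / ASN 65002, C8000V 10.10.1.2 / ASN 65001):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;# 1. NCC hub, with the C8000V registered as a Router Appliance spoke
gcloud network-connectivity hubs create ncc-hub
gcloud network-connectivity spokes linked-router-appliances create c8kv-spoke \
    --hub=ncc-hub --region=us-central1 \
    --router-appliance=instance=projects/PROJECT/zones/us-central1-a/instances/c8kv-a,ip=10.10.1.2

# 2. Cloud Router interface in the transit subnet, then the eBGP peer
gcloud compute routers add-interface cr-hybrid --interface-name=ra-if-0 \
    --subnetwork=transit-subnet --ip-address=10.10.1.1 --region=us-central1
gcloud compute routers add-bgp-peer cr-hybrid --peer-name=c8kv-a \
    --interface=ra-if-0 --peer-asn=65001 --peer-ip-address=10.10.1.2 \
    --instance=c8kv-a --instance-zone=us-central1-a --region=us-central1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;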
&lt;h3 id="topology-summary"&gt;Topology Summary
&lt;/h3&gt;&lt;pre class="mermaid" style="visibility:hidden"&gt;---
config:
 layout: dagre
 theme: base
 themeVariables:
   lineColor: "#555555"
   edgeLabelBackground: "#ffffff"
   tertiaryTextColor: "#333333"
title: C8000V + NCC Hybrid Connectivity — Production HA Topology
---
graph TB
 subgraph ON_PREM["🏢 On-Premises Data Center"]
 CORE["Core Router&lt;br/&gt;(Nexus / ASR)"]
 SDWAN["SD-WAN Edge - cEdge&lt;br/&gt;or VPN Headend"]
 CORE &lt;--&gt;|"EIGRP / OSPF / BGP"| SDWAN
 end

 SDWAN &lt;--&gt;|"IPsec + GRE Tunnels&lt;br/&gt;or SD-WAN Overlay"| C8A
 SDWAN &lt;--&gt;|"IPsec + GRE Tunnels&lt;br/&gt;or SD-WAN Overlay"| C8B

 subgraph GCP["☁️ Google Cloud Platform"]

 subgraph TVPC["Transit VPC"]
 C8A["C8000V-a&lt;br/&gt;Zone-a&lt;br/&gt;ASN 65001"]
 C8B["C8000V-b&lt;br/&gt;Zone-b&lt;br/&gt;ASN 65001"]
 CR["Cloud Router&lt;br/&gt;NCC Hub&lt;br/&gt;ASN 65002"]
 ILB["Internal Passthrough NLB&lt;br/&gt;next-hop for on-prem&lt;br/&gt;routes"]
 ROUTES["VPC Route Table&lt;br/&gt;dynamic custom routes"]

 C8A &lt;--&gt;|"eBGP peer"| CR
 C8B &lt;--&gt;|"eBGP peer"| CR
 C8A --- ILB
 C8B --- ILB
 CR --&gt;|"Injects routes into&lt;br/&gt;Andromeda SDN"| ROUTES
 end

 subgraph WVPC["Workload VPC"]
 APPS["App VMs · GKE&lt;br/&gt;Cloud SQL · GCS"]
 end

 ROUTES --&gt;|"VPC Peering&lt;br/&gt;custom route export"| APPS
 end

 style ON_PREM fill:#f1f3f4,stroke:#e94560,color:#333
 style GCP fill:#f9fafb,stroke:#16213e,color:#333
 style TVPC fill:#e1f5fe,stroke:#1b1b2f,color:#333
 style WVPC fill:#e8f5e9,stroke:#0f4c75,color:#333
 style C8A fill:#ffcdd2,stroke:#333,color:#333
 style C8B fill:#ffcdd2,stroke:#333,color:#333
 style CR fill:#b3e5fc,stroke:#333,color:#333
 style ILB fill:#ffe0b2,stroke:#333,color:#333
 style CORE fill:#e0e0e0,stroke:#333,color:#333
 style SDWAN fill:#e0e0e0,stroke:#333,color:#333
 style APPS fill:#b2dfdb,stroke:#333,color:#333
 style ROUTES fill:#bbdefb,stroke:#333,color:#333&lt;/pre&gt;&lt;hr&gt;
&lt;h2 id="risks-and-mitigations"&gt;Risks and Mitigations
&lt;/h2&gt;&lt;h3 id="risk-1-single-point-of-failure"&gt;Risk 1: Single Point of Failure
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; A single C8000V instance in one GCP zone represents an unacceptable SPOF. Zone-level maintenance events, live migration failures, or IOS-XE process crashes will sever hybrid connectivity.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Deploy &lt;strong&gt;two C8000V instances in separate GCP zones&lt;/strong&gt; (e.g., &lt;code&gt;us-central1-a&lt;/code&gt; and &lt;code&gt;us-central1-b&lt;/code&gt;) within the same transit VPC. Both instances peer with the Cloud Router via eBGP, advertising the same on-premises prefixes. Traffic from the workload VPC toward on-premises destinations is directed to the C8000V pair via a &lt;strong&gt;GCP Internal Passthrough Network Load Balancer (ILB)&lt;/strong&gt; configured as the next-hop for on-premises routes.&lt;/p&gt;
&lt;p&gt;The ILB performs health checking (TCP or HTTP probe against the C8000V management interface or a custom health endpoint) and removes a failed instance from the forwarding pool within seconds. On the C8000V side, BFD (Bidirectional Forwarding Detection) with sub-second timers ensures rapid eBGP session teardown, causing the Cloud Router to withdraw routes from the failed instance and converge on the surviving peer (Google Cloud, 2024f).&lt;/p&gt;
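&lt;p&gt;The GCP side of this construct is the ILB and a static route pointing at it. A hedged &lt;code&gt;gcloud&lt;/code&gt; sketch (names, port, and ranges are placeholders; the health check assumes a TCP port the C8000V answers on, and instance-group backends for each zone must be added separately):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;# Health check and backend service fronting the two C8000V instances
gcloud compute health-checks create tcp c8kv-hc --port=22 --region=us-central1
gcloud compute backend-services create c8kv-be --load-balancing-scheme=internal \
    --protocol=TCP --health-checks=c8kv-hc --health-checks-region=us-central1 \
    --region=us-central1

# Forwarding rule (the ILB VIP), then a route using it as next hop
gcloud compute forwarding-rules create c8kv-ilb --load-balancing-scheme=internal \
    --network=transit-vpc --subnet=transit-subnet --ip-protocol=TCP \
    --ports=all --backend-service=c8kv-be --region=us-central1
gcloud compute routes create to-onprem --network=transit-vpc \
    --destination-range=172.16.0.0/16 --next-hop-ilb=c8kv-ilb \
    --next-hop-ilb-region=us-central1
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;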
&lt;p&gt;&lt;strong&gt;IOS-XE BFD configuration for fast eBGP failover:&lt;/strong&gt;&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt;1
&lt;/span&gt;&lt;span class="lnt"&gt;2
&lt;/span&gt;&lt;span class="lnt"&gt;3
&lt;/span&gt;&lt;span class="lnt"&gt;4
&lt;/span&gt;&lt;span class="lnt"&gt;5
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;router bgp 65001
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; neighbor 10.10.1.1 fall-over bfd
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;!
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;interface GigabitEthernet1
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; bfd interval 300 min_rx 300 multiplier 3
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id="risk-2-mtu--fragmentation-induced-performance-degradation"&gt;Risk 2: MTU / Fragmentation-Induced Performance Degradation
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; GRE + IPsec encapsulation reduces the effective MTU. Applications sending 1500-byte frames will trigger IP fragmentation at the C8000V, causing packet reordering, increased latency, and throughput collapse — particularly devastating for high-throughput database replication (e.g., Oracle Data Guard, SQL Server Always On) and NFS/SMB file transfers.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt;&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;TCP MSS clamping&lt;/strong&gt; on all tunnel interfaces: &lt;code&gt;ip tcp adjust-mss 1360&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Path MTU Discovery (PMTUD):&lt;/strong&gt; Ensure ICMP &amp;ldquo;Fragmentation Needed&amp;rdquo; (Type 3, Code 4) messages are not blocked by any firewall in the path. This is a common failure mode in enterprises with overly aggressive ICMP filtering.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Tunnel MTU configuration:&lt;/strong&gt; Set &lt;code&gt;ip mtu 1400&lt;/code&gt; on tunnel interfaces.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;GCP VPC MTU:&lt;/strong&gt; Consider configuring the VPC MTU to 1460 (GCP default) or higher if using Jumbo Frames for intra-VPC traffic, but always account for the encapsulation overhead on the tunnel path (Google Cloud, 2024b).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;DF-bit handling:&lt;/strong&gt; On the C8000V, configure &lt;code&gt;tunnel path-mtu-discovery&lt;/code&gt; so the DF bit is copied into the outer GRE header and the tunnel MTU is adjusted dynamically in response to ICMP unreachables.&lt;/li&gt;
&lt;/ol&gt;
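&lt;p&gt;Mitigations 1, 3, and 5 converge on the tunnel interface itself. A minimal IOS-XE sketch (the interface number is illustrative; the MSS of 1360 follows from the 1400-byte IP MTU minus 20 bytes of IP and 20 bytes of TCP header):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;interface Tunnel100
 ip mtu 1400
 ip tcp adjust-mss 1360
 tunnel path-mtu-discovery
!
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;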
&lt;h3 id="risk-3-crypto-performance-bottleneck"&gt;Risk 3: Crypto Performance Bottleneck
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; IPsec encryption/decryption is CPU-intensive. Under-provisioned C8000V instances will hit CPU saturation at moderate throughput levels, causing packet drops and tunnel instability.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Deploy C8000V on &lt;code&gt;n2-standard-8&lt;/code&gt; or larger instance types that expose AES-NI hardware acceleration to the guest OS. IOS-XE automatically leverages AES-NI when available, providing 5–10x improvement in IPsec throughput compared to software-only crypto. Validate with &lt;code&gt;show crypto engine accelerator statistics&lt;/code&gt; (Cisco Systems, 2024). Monitor CPU utilization via &lt;code&gt;show processes cpu sorted&lt;/code&gt; and GCP Cloud Monitoring; establish alerting thresholds at 70% sustained utilization.&lt;/p&gt;
&lt;h3 id="risk-4-route-table-explosion-and-cloud-router-limits"&gt;Risk 4: Route Table Explosion and Cloud Router Limits
&lt;/h3&gt;&lt;p&gt;&lt;strong&gt;Scenario:&lt;/strong&gt; Large enterprise networks may advertise thousands of prefixes from on-premises. Cloud Router has documented limits on the number of learned routes per BGP session and per VPC (Google Cloud, 2024c).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Mitigation:&lt;/strong&gt; Implement aggressive route summarization on the C8000V before advertising to Cloud Router. Use &lt;code&gt;aggregate-address&lt;/code&gt; in BGP to summarize /24s and /25s into /16 or /8 supernets where topologically appropriate. Monitor Cloud Router route counts via &lt;code&gt;gcloud compute routers get-status&lt;/code&gt; and set alerting on approach to documented limits.&lt;/p&gt;
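&lt;p&gt;As a sketch of the summarization approach (the prefix and ASN are illustrative), &lt;code&gt;summary-only&lt;/code&gt; suppresses the component routes so that only the aggregate is advertised to the Cloud Router peer:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;router bgp 65001
 address-family ipv4
  aggregate-address 10.20.0.0 255.255.0.0 summary-only
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;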
&lt;hr&gt;
&lt;h2 id="real-world-constraints-and-organizational-considerations"&gt;Real-World Constraints and Organizational Considerations
&lt;/h2&gt;&lt;h3 id="legacy-technical-debt-the-re-ip-problem"&gt;Legacy Technical Debt: The Re-IP Problem
&lt;/h3&gt;&lt;p&gt;The single most common blocker to hybrid cloud network modernization is not a technology limitation — it is &lt;strong&gt;hardcoded IP addresses embedded in application configurations, database connection strings, firewall rules, load balancer VIPs, and DNS records that have not been updated in years.&lt;/strong&gt; Changing an application&amp;rsquo;s IP address in a legacy enterprise is not a network task; it is a cross-functional program requiring application owner sign-off, change advisory board approval, regression testing, and often a maintenance window.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Pragmatic approach:&lt;/strong&gt; Do not attempt to re-IP applications as part of the initial hybrid connectivity deployment. Instead, design the C8000V overlay to preserve existing IP addressing by advertising the on-premises subnets into GCP with their original CIDR blocks. Cloud-resident applications that need to reach on-premises services will route through the C8000V tunnel transparently. Re-IP efforts should be a separate, application-driven workstream with its own timeline and governance.&lt;/p&gt;
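&lt;p&gt;Preserving the original addressing is purely a routing exercise: the existing on-premises blocks are originated into BGP unchanged and reach GCP over the Cloud Router session. An illustrative sketch (prefixes and ASN are examples, not a recommendation):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;router bgp 65001
 address-family ipv4
  network 10.50.0.0 mask 255.255.0.0
  network 172.16.0.0 mask 255.240.0.0
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;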
&lt;h3 id="organizational-silos-network-engineers-vs-cloud-platform-engineers"&gt;Organizational Silos: Network Engineers vs. Cloud Platform Engineers
&lt;/h3&gt;&lt;p&gt;In most enterprises, the team that manages Cisco routers and SD-WAN infrastructure is not the same team that manages GCP projects, IAM policies, and Terraform modules. The C8000V deployment sits squarely at the intersection of these two domains, and ownership ambiguity will cause operational failures.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Recommendation:&lt;/strong&gt; Establish a &lt;strong&gt;Hybrid Network Ops&lt;/strong&gt; function — either as a dedicated team or a formal RACI matrix — with clear ownership boundaries:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Network team&lt;/strong&gt; owns: IOS-XE configuration, IPsec/GRE tunnel health, routing policy, SD-WAN orchestration, C8000V OS patching.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Cloud platform team&lt;/strong&gt; owns: GCP Compute Engine instance lifecycle, VPC network design, Cloud Router / NCC configuration, ILB health checks, IAM permissions, GCP firewall rules.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Shared responsibility:&lt;/strong&gt; Capacity planning, throughput monitoring, incident response for connectivity failures.&lt;/li&gt;
&lt;/ul&gt;
&lt;h3 id="infrastructure-as-code"&gt;Infrastructure as Code
&lt;/h3&gt;&lt;p&gt;The C8000V deployment, NCC configuration, Cloud Router peering, VPC setup, and firewall rules must be codified in &lt;strong&gt;Terraform&lt;/strong&gt; (or Pulumi/OpenTofu). Manual console-click deployments are categorically unacceptable for production hybrid connectivity infrastructure. The Terraform Google provider supports NCC Hub/Spoke resources (&lt;code&gt;google_network_connectivity_hub&lt;/code&gt;, &lt;code&gt;google_network_connectivity_spoke&lt;/code&gt;), and the C8000V&amp;rsquo;s IOS-XE configuration can be bootstrapped via Compute Engine metadata startup scripts or day-2 managed via Cisco NSO / Ansible (HashiCorp, 2024).&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt;10
&lt;/span&gt;&lt;span class="lnt"&gt;11
&lt;/span&gt;&lt;span class="lnt"&gt;12
&lt;/span&gt;&lt;span class="lnt"&gt;13
&lt;/span&gt;&lt;span class="lnt"&gt;14
&lt;/span&gt;&lt;span class="lnt"&gt;15
&lt;/span&gt;&lt;span class="lnt"&gt;16
&lt;/span&gt;&lt;span class="lnt"&gt;17
&lt;/span&gt;&lt;span class="lnt"&gt;18
&lt;/span&gt;&lt;span class="lnt"&gt;19
&lt;/span&gt;&lt;span class="lnt"&gt;20
&lt;/span&gt;&lt;span class="lnt"&gt;21
&lt;/span&gt;&lt;span class="lnt"&gt;22
&lt;/span&gt;&lt;span class="lnt"&gt;23
&lt;/span&gt;&lt;span class="lnt"&gt;24
&lt;/span&gt;&lt;span class="lnt"&gt;25
&lt;/span&gt;&lt;span class="lnt"&gt;26
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-markdown" data-lang="markdown"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;A fully deployable reference implementation of this architecture is available as an open-source Terraform module:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;&amp;gt; &lt;/span&gt;&lt;span class="ge"&gt;📦 **[terraform-c8000v-gcp](https://github.com/ronaldonascimentodantas/terraform-c8000v-gcp)** 
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;&amp;gt; &lt;/span&gt;&lt;span class="ge"&gt;Production-grade Terraform modules for C8000V deployment on GCP with NCC integration, HA ILB, GitHub Actions CI, and Checkov security validation.
&lt;/span&gt;&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;The module follows this structure:
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;.
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── modules/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ ├── transit-vpc/ # VPC, subnets, firewall, peering
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ ├── c8000v/ # Compute instances + bootstrap
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ ├── ncc/ # NCC Hub, spokes, Cloud Router, BGP
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── ilb/ # Internal LB + health checks
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── environments/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ ├── dev/ # Dev tfvars + backend
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── prod/ # Prod tfvars + backend
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── scripts/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── c8000v_bootstrap.tpl # IOS-XE day-0 config template
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── docs/
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;│ └── architecture.md
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── main.tf # Root module composition
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── variables.tf # Root input variables
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── outputs.tf # Root outputs
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── versions.tf # Provider + Terraform constraints
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;├── backend.tf # GCS remote state
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;└── .github/workflows/ci.yml
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;h3 id="licensing"&gt;Licensing
&lt;/h3&gt;&lt;p&gt;The C8000V on GCP supports two licensing models: &lt;strong&gt;BYOL (Bring Your Own License)&lt;/strong&gt; via Cisco Smart Licensing and &lt;strong&gt;PAYG (Pay-As-You-Go)&lt;/strong&gt; via the GCP Marketplace listing. For enterprises with existing Cisco Enterprise Agreements (EA), BYOL is almost always more cost-effective. Ensure the Smart Licensing satellite or direct cloud connectivity is available from the C8000V&amp;rsquo;s management interface; a licensing failure will restrict the C8000V to a throughput-limited &amp;ldquo;evaluation&amp;rdquo; mode after 90 days (Cisco Systems, 2024).&lt;/p&gt;
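&lt;p&gt;For BYOL, the Smart Licensing transport must be reachable before the evaluation window expires. A hedged IOS-XE 17.x sketch for direct cloud reporting (exact syntax varies by release; verify afterwards with &lt;code&gt;show license status&lt;/code&gt;):&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-fallback" data-lang="fallback"&gt;license smart transport smart
license smart url default
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;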
&lt;hr&gt;
&lt;h2 id="conclusion"&gt;Conclusion
&lt;/h2&gt;&lt;div class="video-wrapper"&gt;
 &lt;iframe loading="lazy" 
 src="https://www.youtube.com/embed/qkcS6vwk_bA" 
 allowfullscreen 
 title="YouTube Video"
 &gt;
 &lt;/iframe&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;div class="video-wrapper"&gt;
 &lt;iframe loading="lazy" 
 src="https://www.youtube.com/embed/uewa9qOoEPU" 
 allowfullscreen 
 title="YouTube Video"
 &gt;
 &lt;/iframe&gt;
&lt;/div&gt;

&lt;hr&gt;
&lt;iframe width="100%" height="166" scrolling="no" frameborder="no" allow="autoplay"
 src="https://w.soundcloud.com/player/?url=https%3A//api.soundcloud.com/tracks/2302176284&amp;color=%23ff5500&amp;auto_play=false&amp;hide_related=false&amp;show_comments=true&amp;show_user=true&amp;show_reposts=false&amp;show_teaser=true"&gt;
&lt;/iframe&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src="https://dantas.io/p/architectural-blueprint-enterprise-data-center-interconnection-with-google-cloud-via-cisco-catalyst-8000v/enterprise-datacenter-interconnect-google-cloud-cisco-c8000v-banner-conclusion.png"
	width="2752"
	height="1536"
	loading="lazy"
	alt="Generated by Notebook LM"
	class="gallery-image"
	data-flex-grow="179"
	data-flex-basis="430px"
&gt;&lt;/p&gt;
&lt;p&gt;The C8000V with GCP Network Connectivity Center suits enterprises already invested in Cisco routing and SD-WAN, enabling hybrid cloud connectivity without splitting operational governance. Key benefits include eliminating branch-to-cloud backhaul, 40–60% latency reduction, and unified visibility through vManage, DNA Center, and ThousandEyes — all while working within GCP&amp;rsquo;s Layer 3 (Andromeda) constraints without the cost of GCVE or the limitations of native HA VPN.&lt;/p&gt;
&lt;p&gt;Successful production deployment hinges on redundancy (dual instances with ILB failover), AES-NI crypto acceleration, proper MTU/MSS handling, and route-aggregation discipline. Operational success also depends on Terraform-based infrastructure-as-code, clear RACI boundaries between network and cloud teams, and pragmatic management of technical debt such as hardcoded IPs.&lt;/p&gt;
&lt;blockquote class="alert alert-tip"&gt;
 &lt;div class="alert-header"&gt;
 &lt;span class="alert-icon"&gt;💡&lt;/span&gt;
 &lt;span class="alert-title"&gt;Tip&lt;/span&gt;
 &lt;/div&gt;
 &lt;div class="alert-body"&gt;
 &lt;p&gt;The hybrid cloud operating model is permanent. The network architecture must reflect that permanence.&lt;/p&gt;
 &lt;/div&gt;
 &lt;/blockquote&gt;
&lt;hr&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;Cisco Systems. (2023a). &lt;em&gt;Cisco SD-WAN design guide&lt;/em&gt;. Cisco Validated Design. &lt;a class="link" href="https://www.cisco.com/c/en/us/td/docs/solutions/CVD/SDWAN/cisco-sdwan-design-guide.html" target="_blank" rel="noopener"
 &gt;https://www.cisco.com/c/en/us/td/docs/solutions/CVD/SDWAN/cisco-sdwan-design-guide.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Cisco Systems. (2023b). &lt;em&gt;Cisco SD-WAN cloud onramp for IaaS architecture guide&lt;/em&gt;. &lt;a class="link" href="https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-iaas.html" target="_blank" rel="noopener"
 &gt;https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-iaas.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Cisco Systems. (2024). &lt;em&gt;Cisco Catalyst 8000V Edge Software deployment guide for Google Cloud Platform&lt;/em&gt;. &lt;a class="link" href="https://www.cisco.com/c/en/us/td/docs/routers/C8000V/Configuration/c8000v-installation-configuration-guide.html" target="_blank" rel="noopener"
 &gt;https://www.cisco.com/c/en/us/td/docs/routers/C8000V/Configuration/c8000v-installation-configuration-guide.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Dalton, M., Schultz, D., Agarwal, A., Arbel, Y., Bhatia, A., Gupta, S., Kumar, R., Li, H., McMullen, B., Patil, R., Poutievski, L., &amp;amp; Vahdat, A. (2018). Andromeda: Performance, isolation, and velocity at scale in cloud network virtualization. &lt;em&gt;Proceedings of the 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI &amp;lsquo;18)&lt;/em&gt;, 373–387. &lt;a class="link" href="https://www.usenix.org/conference/nsdi18/presentation/dalton" target="_blank" rel="noopener"
 &gt;https://www.usenix.org/conference/nsdi18/presentation/dalton&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Gartner. (2023). &lt;em&gt;Top strategic technology trends for 2024&lt;/em&gt;. Gartner, Inc. &lt;a class="link" href="https://www.gartner.com/en/articles/gartner-top-10-strategic-technology-trends-for-2024" target="_blank" rel="noopener"
 &gt;https://www.gartner.com/en/articles/gartner-top-10-strategic-technology-trends-for-2024&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024a). &lt;em&gt;Network Connectivity Center overview&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/overview" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/network-connectivity/docs/network-connectivity-center/concepts/overview&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024b). &lt;em&gt;VPC network overview&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/vpc/docs/vpc" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/vpc/docs/vpc&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024c). &lt;em&gt;Cloud VPN overview and quotas&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024d). &lt;em&gt;Compute Engine machine types and network bandwidth&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/compute/docs/machine-types" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/compute/docs/machine-types&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024e). &lt;em&gt;Google Cloud VMware Engine overview&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/vmware-engine/docs/overview" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/vmware-engine/docs/overview&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Google Cloud. (2024f). &lt;em&gt;Internal passthrough Network Load Balancer overview&lt;/em&gt;. Google Cloud Documentation. &lt;a class="link" href="https://cloud.google.com/load-balancing/docs/internal" target="_blank" rel="noopener"
 &gt;https://cloud.google.com/load-balancing/docs/internal&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;HashiCorp. (2024). &lt;em&gt;Google Cloud provider: Network Connectivity Center resources&lt;/em&gt;. Terraform Registry. &lt;a class="link" href="https://registry.terraform.io/providers/hashicorp/google/latest/docs" target="_blank" rel="noopener"
 &gt;https://registry.terraform.io/providers/hashicorp/google/latest/docs&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Dantas, R. N. (2024). &lt;em&gt;terraform-c8000v-gcp: Production Terraform modules for Cisco C8000V hybrid connectivity on Google Cloud Platform&lt;/em&gt; [Open-source software]. GitHub. &lt;a class="link" href="https://github.com/ronaldonascimentodantas/terraform-c8000v-gcp" target="_blank" rel="noopener"
 &gt;https://github.com/ronaldonascimentodantas/terraform-c8000v-gcp&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>