<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>AI-Infrastructure on dantas.io</title><link>https://dantas.io/tags/ai-infrastructure/</link><description>Recent content in AI-Infrastructure on dantas.io</description><generator>Hugo -- gohugo.io</generator><language>en</language><lastBuildDate>Tue, 21 Apr 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://dantas.io/tags/ai-infrastructure/index.xml" rel="self" type="application/rss+xml"/><item><title>ECS vs. EKS in 2026: The Container Orchestration Decision Every AWS Architect Eventually Gets Wrong</title><link>https://dantas.io/p/ecs-vs-eks-container-orchestration-decision/</link><pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate><guid>https://dantas.io/p/ecs-vs-eks-container-orchestration-decision/</guid><description>&lt;h2 id="the-container-orchestration-divide-that-aws-created--and-wont-resolve"&gt;The Container Orchestration Divide That AWS Created — and Won&amp;rsquo;t Resolve
&lt;/h2&gt;&lt;p&gt;AWS continues to operate two production container orchestrators in 2026, and the persistence of that duality is itself the most useful piece of architectural intelligence available to a Platform Engineering team. Amazon ECS is not on a deprecation path. Amazon EKS is not subsuming it. The two services share Fargate, share the IAM control surface, share the VPC fabric — and diverge on every dimension that matters once a workload moves past hello-world.&lt;/p&gt;
&lt;p&gt;The market data explains why. The Cloud Native Computing Foundation&amp;rsquo;s 2024 Annual Survey reports that Kubernetes production deployment reached 80% in 2024, up from 66% in 2023, with 93% of surveyed organizations either using, piloting, or evaluating Kubernetes (Cloud Native Computing Foundation [CNCF], 2025). A follow-up CNCF survey published in early 2026 found that 82% of IT organizations now run Kubernetes clusters in production environments, with 98% using some form of cloud-native technology (CNCF, 2026). Gartner forecasts that more than 95% of global organizations will run containerized applications in production by 2029, up from fewer than 50% in 2023 (Bauman &amp;amp; Chandrasekaran, 2024). Kubernetes won the orchestration war years ago. ECS&amp;rsquo;s continued investment is the answer to a different question — what AWS thinks should happen for the customer who does not need a CRD, an admission webhook, or a custom scheduler, and who would prefer not to pay for the operational surface area that comes with them.&lt;/p&gt;
&lt;p&gt;Read AWS&amp;rsquo;s investment posture and the boundary becomes clear. ECS gets new integrations with the AWS-native control plane: Service Connect replacing App Mesh (Amazon Web Services [AWS], 2024a), Bedrock AgentCore as a serverless target for agent containers (AWS, n.d.-a), VPC Lattice as a multi-VPC fabric. EKS gets the Kubernetes ecosystem: Auto Mode with Karpenter as the native node provisioner (AWS, 2024b), a 99.99% Service Level Agreement on Provisioned Control Plane (AWS, 2026), Pod Identity replacing IAM Roles for Service Accounts, native AWS Neuron SDK integration for Trainium (AWS, n.d.-b). Two platforms. Two trajectories. One decision that compounds.&lt;/p&gt;
&lt;h2 id="amazon-ecs-in-2026-what-it-is-and-what-it-has-become"&gt;Amazon ECS in 2026: What It Is and What It Has Become
&lt;/h2&gt;&lt;p&gt;ECS is a task scheduler with a fixed mental model and a deliberately narrow surface area. The unit of work is the task definition, which declares one or more container definitions, an &lt;code&gt;awsvpc&lt;/code&gt; network configuration, a CPU/memory pair drawn from a fixed allowed-combinations list when running on Fargate, an execution role for image-pull and log-write permissions, and a task role for application-level AWS API access. The unit of long-running orchestration is the service, which maintains a desired count of tasks, performs rolling updates, and integrates with Application Load Balancer or Network Load Balancer target groups for ingress.&lt;/p&gt;
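&lt;p&gt;A minimal Fargate task definition in Terraform makes that shape concrete. This is an illustrative sketch, not a drop-in module: the family name, image URI, and role references are placeholders, and the CPU/memory pair must come from Fargate&amp;rsquo;s allowed-combinations list.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;#34;aws_ecs_task_definition&amp;#34; &amp;#34;api&amp;#34; {
  family                   = &amp;#34;api&amp;#34;
  requires_compatibilities = [&amp;#34;FARGATE&amp;#34;]
  network_mode             = &amp;#34;awsvpc&amp;#34;
  cpu                      = 512  # valid Fargate pairing: 512 CPU units with 1024 MiB
  memory                   = 1024
  execution_role_arn       = aws_iam_role.execution.arn # image pull, log write
  task_role_arn            = aws_iam_role.api_task.arn  # application AWS API access

  container_definitions = jsonencode([{
    name         = &amp;#34;api&amp;#34;
    image        = &amp;#34;123456789012.dkr.ecr.us-east-1.amazonaws.com/api:latest&amp;#34;
    essential    = true
    portMappings = [{ containerPort = 8080, protocol = &amp;#34;tcp&amp;#34; }]
  }])
}
&lt;/code&gt;&lt;/pre&gt;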
&lt;p&gt;What this model gets right is operational footprint. There is no control plane to upgrade, because ECS does not expose one. There are no CRDs to install, no admission webhooks to debug, no node groups to size, no Helm chart version skew. A team with prior AWS Console familiarity can have a production-grade ECS service running on Fargate within an afternoon — the conceptual surface area maps onto AWS primitives a CloudOps engineer already knows.&lt;/p&gt;
&lt;p&gt;ECS Service Connect, launched at re:Invent 2022 and now the recommended path forward for service-to-service communication on ECS, is the operational replacement for AWS App Mesh. AWS announced the discontinuation of App Mesh effective September 30, 2026, with new customer onboarding closed since September 24, 2024 (AWS, 2024a). Service Connect deploys an AWS-managed Envoy sidecar per task, registers endpoints in AWS Cloud Map automatically, and provides built-in health checks, outlier detection, and retry behavior — without the App Mesh control-plane abstractions that proved too complex for the value they delivered.&lt;/p&gt;
&lt;p&gt;ECS Anywhere extends the task scheduler to on-premises hardware, allowing the same task definitions to run on customer-managed compute through an SSM-based agent. ECS integrates with Amazon Bedrock AgentCore Runtime as a deployment target for containerized AI agents &amp;mdash; AgentCore packages the agent code into an OCI image, pushes it to ECR, and runs it in a managed serverless runtime that mirrors the Lambda model rather than running as a long-lived ECS service (AWS, n.d.-a). For teams that have already standardized on ECS as their containerization layer, AgentCore provides a Bedrock-native escape hatch for the specific case of agentic workloads without forcing a Kubernetes migration.&lt;/p&gt;
&lt;p&gt;The operational ceiling is precise and worth naming explicitly. ECS does not support custom schedulers. There is no analog to a Kubernetes admission webhook, no extension point for policy-as-code engines such as Kyverno or OPA Gatekeeper that operate on the Kubernetes API. The Helm ecosystem does not exist for ECS — every operator pattern that the Kubernetes community has encoded as a chart (cert-manager, External Secrets Operator, Argo CD, Crossplane) has either no ECS equivalent or requires custom Lambda glue. Multi-tenancy is implemented at the ECS cluster boundary, not the namespace boundary, which means tenant isolation in ECS at scale produces a proliferation of clusters rather than a single cluster with NetworkPolicy enforcement. None of this is a weakness in the abstract. All of it becomes a weakness the moment a platform team&amp;rsquo;s roadmap requires any of those capabilities.&lt;/p&gt;
&lt;h2 id="amazon-eks-in-2026-what-aws-finally-got-right"&gt;Amazon EKS in 2026: What AWS Finally Got Right
&lt;/h2&gt;&lt;p&gt;Three releases reshape what EKS is in 2026, and a senior architect should treat them as the actual product baseline rather than the marketing brochure.&lt;/p&gt;
&lt;p&gt;The first is &lt;strong&gt;EKS Auto Mode&lt;/strong&gt;, generally available since December 2024 (AWS, 2024b). Auto Mode shifts AWS responsibility past the control plane and into the data plane: Karpenter as the in-tree node provisioner, NVIDIA GPU support, EBS CSI for block storage, network policy enforcement, AWS Load Balancer Controller, and CoreDNS are all managed as core capabilities rather than as add-ons the customer installs and patches. AWS provisions the EC2 instances under its own management identity, applies OS patches on a rolling basis, treats node AMIs as immutable with read-only root filesystems and SELinux mandatory access control enforcement (AWS, n.d.-c). The cost premium runs roughly 12% on top of the underlying EC2 spend (AWS, n.d.-d). Auto Mode is available on any EKS cluster running Kubernetes 1.29 and above, in every commercial region.&lt;/p&gt;
&lt;p&gt;The second is the &lt;strong&gt;99.99% Service Level Agreement on Provisioned Control Plane&lt;/strong&gt;, announced March 2026, up from the standard control plane&amp;rsquo;s 99.95% (AWS, 2026). Provisioned Control Plane gives the customer a pre-warmed control plane sized to a specific scaling tier — 4XL, and now 8XL — designed for sustained API server request rates that overwhelm the burst-managed default. The 8XL tier is explicitly positioned for ultra-scale AI/ML training, HPC, and large-scale data processing workloads. The 99.99% SLA is measured in 1-minute intervals, which is a stricter granularity than the 5-minute windows typically used by managed Kubernetes competitors.&lt;/p&gt;
&lt;p&gt;The third is &lt;strong&gt;Pod Identity&lt;/strong&gt;, launched at re:Invent 2023 and now the preferred IAM credential delivery mechanism for new EKS workloads (AWS, 2023). Pod Identity replaces the operational complexity of IAM Roles for Service Accounts (IRSA), which required per-cluster OIDC providers, trust-policy edits each time a role moved between clusters, and produced trust-policy size limits at scale. Pod Identity collapses the model: install the Pod Identity Agent as an EKS add-on (a DaemonSet), associate an IAM role to a Kubernetes service account through the EKS API, done. No OIDC plumbing, no trust-policy churn during blue/green cluster swaps. IRSA continues to work and AWS still supports it for cross-distribution use cases such as EKS Anywhere or self-managed clusters; for new EKS-in-the-cloud workloads, Pod Identity is the correct default.&lt;/p&gt;
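&lt;p&gt;The two-resource Terraform shape shows how much plumbing disappears relative to IRSA. Cluster and role names here are placeholders, and the IAM role&amp;rsquo;s trust policy must allow &lt;code&gt;pods.eks.amazonaws.com&lt;/code&gt; to assume it.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;# The Pod Identity Agent ships as a managed EKS add-on (runs as a DaemonSet)
resource &amp;#34;aws_eks_addon&amp;#34; &amp;#34;pod_identity&amp;#34; {
  cluster_name = aws_eks_cluster.platform.name
  addon_name   = &amp;#34;eks-pod-identity-agent&amp;#34;
}

# One association binds an IAM role to a service account: no OIDC provider,
# no trust-policy edits when the workload moves between clusters
resource &amp;#34;aws_eks_pod_identity_association&amp;#34; &amp;#34;api&amp;#34; {
  cluster_name    = aws_eks_cluster.platform.name
  namespace       = &amp;#34;default&amp;#34;
  service_account = &amp;#34;api&amp;#34;
  role_arn        = aws_iam_role.api_pod.arn
}
&lt;/code&gt;&lt;/pre&gt;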
&lt;p&gt;The current Kubernetes version baseline on EKS standard support is 1.33, 1.34, and 1.35 (AWS, n.d.-e). Pod Security Admission, which replaced the deprecated PodSecurityPolicy, is enabled as an admission controller across all 1.33 and 1.34 platform versions (AWS, n.d.-f). EKS standard support runs 14 months from the release date, with extended support adding another 12 months at $0.60 per cluster-hour versus the standard $0.10 (AWS, 2024c). A cluster left on an extended-support version pays a 6× control-plane premium.&lt;/p&gt;
&lt;p&gt;What Auto Mode does &lt;strong&gt;not&lt;/strong&gt; abstract is worth stating bluntly, because the marketing language obscures it. VPC CNI configuration — prefix delegation, security group attachment per pod, custom networking — remains the customer&amp;rsquo;s responsibility. Control plane logging configuration (audit, authenticator, controllerManager, scheduler) does not enable itself. IAM boundary design — the trust relationships between Pod Identity, the cluster role, and the node IAM role — is the customer&amp;rsquo;s job. Auto Mode reduces the day-2 surface area; it does not eliminate the day-0 architectural decisions that determine whether the cluster will scale to 10,000 pods or break at 1,000.&lt;/p&gt;
&lt;h2 id="ai-workload-readiness-the-dimension-that-changes-the-calculus-in-2026"&gt;AI Workload Readiness: The Dimension That Changes the Calculus in 2026
&lt;/h2&gt;&lt;p&gt;AI infrastructure is the dimension that decides this article. A 2026 Platform Engineering team that ignores AI workload readiness when picking between ECS and EKS is making the decision against a workload portfolio that no longer exists. Forty-eight percent of organizations have not yet deployed AI/ML workloads on Kubernetes, but among early adopters the use cases are batch jobs, model experimentation, real-time model inference, and data pre-processing — every one of them container-orchestrated (CNCF, 2025).&lt;/p&gt;
&lt;p&gt;GPU scheduling, distributed training topology, and LLM inference serving are three distinct architectural problems. They each require different scheduler behavior, different network fabric, and different runtime support. A single &amp;ldquo;AI support&amp;rdquo; checkbox does not exist.&lt;/p&gt;
&lt;h3 id="ecs-for-ai-workloads"&gt;ECS for AI Workloads
&lt;/h3&gt;&lt;p&gt;ECS supports GPU instances on the EC2 launch type via the NVIDIA Container Toolkit, with the standard task placement constraint mechanism used to target G4dn, G5, P4d, or P5 instances. The task definition declares &lt;code&gt;resourceRequirements&lt;/code&gt; with type &lt;code&gt;GPU&lt;/code&gt; and a count, the ECS agent on the instance binds the requested GPU device into the container at launch, and the workload runs. This is sufficient for single-instance inference serving, GPU-accelerated rendering, or any workload that fits within one EC2 host&amp;rsquo;s GPU complement.&lt;/p&gt;
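&lt;p&gt;As a Terraform sketch, a single-GPU task definition looks like the following; the family, image, and memory values are illustrative, and the placement constraint targets G5 instances by attribute.&lt;/p&gt;
&lt;pre&gt;&lt;code class="language-hcl"&gt;resource &amp;#34;aws_ecs_task_definition&amp;#34; &amp;#34;inference&amp;#34; {
  family                   = &amp;#34;inference&amp;#34;
  requires_compatibilities = [&amp;#34;EC2&amp;#34;] # GPU reservation requires the EC2 launch type
  network_mode             = &amp;#34;awsvpc&amp;#34;

  placement_constraints {
    type       = &amp;#34;memberOf&amp;#34;
    expression = &amp;#34;attribute:ecs.instance-type =~ g5.*&amp;#34;
  }

  container_definitions = jsonencode([{
    name      = &amp;#34;inference&amp;#34;
    image     = &amp;#34;123456789012.dkr.ecr.us-east-1.amazonaws.com/inference:latest&amp;#34;
    essential = true
    memory    = 14336

    # The ECS agent binds one GPU device into the container at launch
    resourceRequirements = [{ type = &amp;#34;GPU&amp;#34;, value = &amp;#34;1&amp;#34; }]
  }])
}
&lt;/code&gt;&lt;/pre&gt;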
&lt;p&gt;For AI use cases that map to the AWS-managed control plane rather than to GPU compute directly, ECS is a credible deployment target. Amazon Bedrock for serverless LLM inference is consumed via API calls from any ECS task on Fargate; the model serving infrastructure is AWS-managed and never touches the customer&amp;rsquo;s compute. Bedrock AgentCore Runtime accepts container images and runs them on a managed microVM substrate that the ECS team does not operate (AWS, n.d.-a). For an organization whose AI strategy is &amp;ldquo;call Bedrock from microservices,&amp;rdquo; ECS imposes no architectural penalty.&lt;/p&gt;
&lt;p&gt;The boundary is multi-instance distributed training. ECS schedules at the task boundary, and a task is bound to a single host. There is no native primitive for multi-node tightly coupled training &amp;mdash; no equivalent to the Volcano gang scheduler or the MPI Operator, no first-class concept of a job that spans hosts and coordinates through MPI AllReduce or NCCL collectives over an EFA fabric. Workarounds exist (multiple coordinated ECS tasks plus an external orchestrator) but they are not the supported pattern. ECS also lacks a native bin-packing scheduler that understands GPU as a constrained resource; placement strategies are coarse-grained relative to what Karpenter does on EKS. For inference fan-out across a fleet of GPU instances, ECS works. For training a transformer that does not fit in a single host&amp;rsquo;s HBM, ECS does not.&lt;/p&gt;
&lt;h3 id="eks-for-ai-workloads"&gt;EKS for AI Workloads
&lt;/h3&gt;&lt;p&gt;EKS is the AWS-native path for serious AI infrastructure and it is not close. The reasons are technical, not promotional.&lt;/p&gt;
&lt;p&gt;GPU bin-packing on EKS goes through Karpenter&amp;rsquo;s NodeClass and NodePool primitives, with GPU-aware instance type selection across G5, G6, P4d, P5, Inferentia2 (Inf2), and Trainium (Trn1, Trn2) families (Karpenter, n.d.). EKS Auto Mode ships with a GPU-optimized NodePool that automatically launches the appropriate Bottlerocket Accelerated AMI when a workload requests a GPU resource. The NVIDIA Device Plugin DaemonSet is managed as part of Auto Mode rather than as a customer-installed component. Fractional GPU sharing for inference serving is supported through NVIDIA time-slicing or MIG (Multi-Instance GPU) on supported hardware, both of which are configured through the device plugin&amp;rsquo;s ConfigMap — there is no equivalent on ECS.&lt;/p&gt;
&lt;p&gt;Distributed training is where the EKS lead becomes structural. The Volcano gang scheduler provides batch job semantics with all-or-nothing scheduling for tightly coupled training jobs that cannot tolerate partial readiness. The MPI Operator provides Kubernetes-native lifecycle management for AllReduce patterns underlying frameworks such as PyTorch FSDP and DeepSpeed. The Kubeflow Training Operator handles PyTorchJob, TFJob, and other framework-specific job types. EFA networking integrates through the AWS EFA Kubernetes Device Plugin, which exposes the Elastic Fabric Adapter as a schedulable resource and enables RDMA-class collectives between pods on Trn2 or P5 instances. None of these components have ECS analogs.&lt;/p&gt;
&lt;p&gt;The AWS Neuron SDK, the toolchain for compiling and running models on Trainium and Inferentia, integrates natively with EKS, ECS, SageMaker, ParallelCluster, and Batch (AWS, n.d.-b). Trainium2-powered Trn2 instances became generally available in December 2024, delivering 20.8 FP8 petaflops per instance across 16 Trainium2 chips with 1.5 TB of HBM3 and 3.2 Tbps of EFAv3 networking — and Trn2 UltraServers connect 64 chips across four instances via NeuronLink for 83.2 FP8 petaflops in a single logical node (AWS, n.d.-g). AWS positions Trn2 as offering 30–40% better price-performance than current-generation GPU-based EC2 instances (Amazon, 2024). Neuron technically runs on ECS, but the actual operator patterns published by AWS, the reference architectures for PyTorch + NeuronX distributed training, and the supported integrations with SageMaker HyperPod and EKS Hybrid Nodes are written for Kubernetes — not for ECS.&lt;/p&gt;
&lt;p&gt;LLM inference serving at scale follows the same pattern. The vLLM, NVIDIA Triton Inference Server, and Hugging Face TGI deployment patterns published by AWS Solutions Architects assume Kubernetes primitives — Deployments, HorizontalPodAutoscalers wired to Prometheus metrics, KEDA scalers reacting to queue depth, Karpenter NodePools provisioning Inf2 nodes on demand. Building this on ECS is possible only by re-implementing the autoscaling and lifecycle behavior in custom Lambda glue.&lt;/p&gt;
&lt;h3 id="the-ai-decision-line"&gt;The AI Decision Line
&lt;/h3&gt;&lt;p&gt;If the organization is building or operating AI/ML infrastructure beyond simple Bedrock API calls from a stateless microservice, ECS is architecturally disqualified. The boundary is precise: any workload that requires multi-node distributed training, GPU bin-packing across heterogeneous instance families, fractional GPU sharing for inference serving, or first-class operator-pattern lifecycle management for ML jobs (PyTorchJob, MPIJob, RayJob) cannot be built on ECS in 2026 without significant custom orchestration code that re-implements what the Kubernetes ecosystem provides natively. The decision is not about whether ECS could theoretically be made to work — anything is theoretically possible — but about whether a platform team should accept that engineering debt for a class of workloads where the supported AWS-published patterns assume Kubernetes.&lt;/p&gt;
&lt;h2 id="head-to-head-seven-decision-dimensions"&gt;Head-to-Head: Seven Decision Dimensions
&lt;/h2&gt;&lt;h3 id="operational-complexity--time-to-first-deployment"&gt;Operational Complexity &amp;amp; Time-to-First-Deployment
&lt;/h3&gt;&lt;p&gt;ECS wins decisively for the first 90 days of any new platform engineering effort. A team with no prior orchestration experience can ship a production ECS Fargate service — task definition, service, ALB target group, CloudWatch logging, IAM task role — in a single sprint, because the conceptual model is &amp;ldquo;AWS resources connected through familiar AWS primitives.&amp;rdquo; EKS Auto Mode has narrowed the gap considerably by managing the node lifecycle, but the team still has to internalize Kubernetes primitives (Deployment, Service, Ingress, ConfigMap, Secret, ServiceAccount, NetworkPolicy) and Helm before reaching the same delivery confidence.&lt;/p&gt;
&lt;p&gt;The inflection point is roughly 50 services or 5 platform engineers — whichever arrives first. Past that scale, ECS&amp;rsquo;s lack of a CRD model forces every cross-cutting concern (secrets rotation, service mesh policy, certificate lifecycle, GitOps reconciliation) into bespoke Lambda functions, Step Functions workflows, or homegrown CLI tooling. EKS handles each of those through an existing operator pattern, with reusable source code, prior-art documentation, and SRE muscle memory across the industry. The same characteristic that makes ECS faster on day 1 makes it slower on day 365.&lt;/p&gt;
&lt;h3 id="networking-model"&gt;Networking Model
&lt;/h3&gt;&lt;p&gt;Both platforms run on &lt;code&gt;awsvpc&lt;/code&gt; semantics: each task or pod gets its own ENI and routes through the VPC fabric. The implementation diverges at scale. ECS attaches one ENI per task. EKS, through the AWS VPC CNI plugin with prefix delegation enabled, attaches one ENI per node and assigns IP addresses from &lt;code&gt;/28&lt;/code&gt; IPv4 prefixes carved out of the subnet — increasing pod density per node by roughly 16× and substantially reducing ENI consumption. On a &lt;code&gt;m6i.large&lt;/code&gt; with 3 ENIs and 10 secondary IPs per ENI, the default CNI behavior tops out at 29 pods; with prefix delegation enabled, the same instance handles up to 110 pods limited by the kubelet &lt;code&gt;--max-pods&lt;/code&gt; setting.&lt;/p&gt;
&lt;p&gt;This matters operationally at the 1,000-task threshold. An ECS cluster running 1,000 Fargate tasks consumes 1,000 ENIs, and ENIs draw against the per-region service quota (default 5,000) and against subnet IP exhaustion. An EKS cluster running 1,000 pods on prefix-delegated CNI consumes ENIs roughly proportional to node count — perhaps 50–100 ENIs across the node fleet — leaving headroom in both quota and subnet space. Service-to-service communication on ECS goes through Service Connect&amp;rsquo;s per-task Envoy sidecar; on EKS, CoreDNS handles service discovery and the AWS Load Balancer Controller manages NLB and ALB provisioning from Service and Ingress resources. EKS&amp;rsquo;s networking model is the one designed for high pod density. ECS&amp;rsquo;s model is designed for service-per-task isolation at moderate scale.&lt;/p&gt;
&lt;h3 id="security-posture-out-of-the-box"&gt;Security Posture Out of the Box
&lt;/h3&gt;&lt;p&gt;EKS leads here, but only after the right add-ons are configured. ECS provides task IAM roles, Secrets Manager and SSM Parameter Store injection at task launch, and per-task ENIs that enable security group enforcement at the workload boundary — a strong starting baseline that requires almost no custom configuration to meet a CIS Benchmark equivalent.&lt;/p&gt;
&lt;p&gt;EKS delivers a deeper security surface but requires explicit configuration. Pod Identity replaces IRSA&amp;rsquo;s OIDC complexity and reduces the attack surface (no in-pod credentials, no projected tokens to manage). Pod Security Admission, enabled by default on 1.33+ platform versions, enforces baseline, restricted, or privileged profiles at the namespace level (AWS, n.d.-f). NetworkPolicy enforcement requires either Calico, Cilium, or the AWS-native VPC CNI Network Policy engine to be active. The External Secrets Operator integrates Secrets Manager and SSM into the Kubernetes Secret API. Audited against the CIS Kubernetes Benchmark (Center for Internet Security [CIS], n.d.), a default-configuration EKS cluster passes more controls than a default-configuration ECS cluster only because the Kubernetes benchmark itself encodes more controls — meaning EKS has both a higher ceiling and a higher floor for security investment.&lt;/p&gt;
&lt;h3 id="terraform-support-maturity"&gt;Terraform Support Maturity
&lt;/h3&gt;&lt;p&gt;Both platforms have mature &lt;code&gt;terraform-provider-aws&lt;/code&gt; support under the &lt;code&gt;~&amp;gt; 5.0&lt;/code&gt; constraint. The complexity profiles differ.&lt;/p&gt;
&lt;p&gt;A production ECS Fargate service with Service Connect, a task role, two container definitions (application plus an OpenTelemetry Collector sidecar), and CloudWatch log routing:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt; 10
&lt;/span&gt;&lt;span class="lnt"&gt; 11
&lt;/span&gt;&lt;span class="lnt"&gt; 12
&lt;/span&gt;&lt;span class="lnt"&gt; 13
&lt;/span&gt;&lt;span class="lnt"&gt; 14
&lt;/span&gt;&lt;span class="lnt"&gt; 15
&lt;/span&gt;&lt;span class="lnt"&gt; 16
&lt;/span&gt;&lt;span class="lnt"&gt; 17
&lt;/span&gt;&lt;span class="lnt"&gt; 18
&lt;/span&gt;&lt;span class="lnt"&gt; 19
&lt;/span&gt;&lt;span class="lnt"&gt; 20
&lt;/span&gt;&lt;span class="lnt"&gt; 21
&lt;/span&gt;&lt;span class="lnt"&gt; 22
&lt;/span&gt;&lt;span class="lnt"&gt; 23
&lt;/span&gt;&lt;span class="lnt"&gt; 24
&lt;/span&gt;&lt;span class="lnt"&gt; 25
&lt;/span&gt;&lt;span class="lnt"&gt; 26
&lt;/span&gt;&lt;span class="lnt"&gt; 27
&lt;/span&gt;&lt;span class="lnt"&gt; 28
&lt;/span&gt;&lt;span class="lnt"&gt; 29
&lt;/span&gt;&lt;span class="lnt"&gt; 30
&lt;/span&gt;&lt;span class="lnt"&gt; 31
&lt;/span&gt;&lt;span class="lnt"&gt; 32
&lt;/span&gt;&lt;span class="lnt"&gt; 33
&lt;/span&gt;&lt;span class="lnt"&gt; 34
&lt;/span&gt;&lt;span class="lnt"&gt; 35
&lt;/span&gt;&lt;span class="lnt"&gt; 36
&lt;/span&gt;&lt;span class="lnt"&gt; 37
&lt;/span&gt;&lt;span class="lnt"&gt; 38
&lt;/span&gt;&lt;span class="lnt"&gt; 39
&lt;/span&gt;&lt;span class="lnt"&gt; 40
&lt;/span&gt;&lt;span class="lnt"&gt; 41
&lt;/span&gt;&lt;span class="lnt"&gt; 42
&lt;/span&gt;&lt;span class="lnt"&gt; 43
&lt;/span&gt;&lt;span class="lnt"&gt; 44
&lt;/span&gt;&lt;span class="lnt"&gt; 45
&lt;/span&gt;&lt;span class="lnt"&gt; 46
&lt;/span&gt;&lt;span class="lnt"&gt; 47
&lt;/span&gt;&lt;span class="lnt"&gt; 48
&lt;/span&gt;&lt;span class="lnt"&gt; 49
&lt;/span&gt;&lt;span class="lnt"&gt; 50
&lt;/span&gt;&lt;span class="lnt"&gt; 51
&lt;/span&gt;&lt;span class="lnt"&gt; 52
&lt;/span&gt;&lt;span class="lnt"&gt; 53
&lt;/span&gt;&lt;span class="lnt"&gt; 54
&lt;/span&gt;&lt;span class="lnt"&gt; 55
&lt;/span&gt;&lt;span class="lnt"&gt; 56
&lt;/span&gt;&lt;span class="lnt"&gt; 57
&lt;/span&gt;&lt;span class="lnt"&gt; 58
&lt;/span&gt;&lt;span class="lnt"&gt; 59
&lt;/span&gt;&lt;span class="lnt"&gt; 60
&lt;/span&gt;&lt;span class="lnt"&gt; 61
&lt;/span&gt;&lt;span class="lnt"&gt; 62
&lt;/span&gt;&lt;span class="lnt"&gt; 63
&lt;/span&gt;&lt;span class="lnt"&gt; 64
&lt;/span&gt;&lt;span class="lnt"&gt; 65
&lt;/span&gt;&lt;span class="lnt"&gt; 66
&lt;/span&gt;&lt;span class="lnt"&gt; 67
&lt;/span&gt;&lt;span class="lnt"&gt; 68
&lt;/span&gt;&lt;span class="lnt"&gt; 69
&lt;/span&gt;&lt;span class="lnt"&gt; 70
&lt;/span&gt;&lt;span class="lnt"&gt; 71
&lt;/span&gt;&lt;span class="lnt"&gt; 72
&lt;/span&gt;&lt;span class="lnt"&gt; 73
&lt;/span&gt;&lt;span class="lnt"&gt; 74
&lt;/span&gt;&lt;span class="lnt"&gt; 75
&lt;/span&gt;&lt;span class="lnt"&gt; 76
&lt;/span&gt;&lt;span class="lnt"&gt; 77
&lt;/span&gt;&lt;span class="lnt"&gt; 78
&lt;/span&gt;&lt;span class="lnt"&gt; 79
&lt;/span&gt;&lt;span class="lnt"&gt; 80
&lt;/span&gt;&lt;span class="lnt"&gt; 81
&lt;/span&gt;&lt;span class="lnt"&gt; 82
&lt;/span&gt;&lt;span class="lnt"&gt; 83
&lt;/span&gt;&lt;span class="lnt"&gt; 84
&lt;/span&gt;&lt;span class="lnt"&gt; 85
&lt;/span&gt;&lt;span class="lnt"&gt; 86
&lt;/span&gt;&lt;span class="lnt"&gt; 87
&lt;/span&gt;&lt;span class="lnt"&gt; 88
&lt;/span&gt;&lt;span class="lnt"&gt; 89
&lt;/span&gt;&lt;span class="lnt"&gt; 90
&lt;/span&gt;&lt;span class="lnt"&gt; 91
&lt;/span&gt;&lt;span class="lnt"&gt; 92
&lt;/span&gt;&lt;span class="lnt"&gt; 93
&lt;/span&gt;&lt;span class="lnt"&gt; 94
&lt;/span&gt;&lt;span class="lnt"&gt; 95
&lt;/span&gt;&lt;span class="lnt"&gt; 96
&lt;/span&gt;&lt;span class="lnt"&gt; 97
&lt;/span&gt;&lt;span class="lnt"&gt; 98
&lt;/span&gt;&lt;span class="lnt"&gt; 99
&lt;/span&gt;&lt;span class="lnt"&gt;100
&lt;/span&gt;&lt;span class="lnt"&gt;101
&lt;/span&gt;&lt;span class="lnt"&gt;102
&lt;/span&gt;&lt;span class="lnt"&gt;103
&lt;/span&gt;&lt;span class="lnt"&gt;104
&lt;/span&gt;&lt;span class="lnt"&gt;105
&lt;/span&gt;&lt;span class="lnt"&gt;106
&lt;/span&gt;&lt;span class="lnt"&gt;107
&lt;/span&gt;&lt;span class="lnt"&gt;108
&lt;/span&gt;&lt;span class="lnt"&gt;109
&lt;/span&gt;&lt;span class="lnt"&gt;110
&lt;/span&gt;&lt;span class="lnt"&gt;111
&lt;/span&gt;&lt;span class="lnt"&gt;112
&lt;/span&gt;&lt;span class="lnt"&gt;113
&lt;/span&gt;&lt;span class="lnt"&gt;114
&lt;/span&gt;&lt;span class="lnt"&gt;115
&lt;/span&gt;&lt;span class="lnt"&gt;116
&lt;/span&gt;&lt;span class="lnt"&gt;117
&lt;/span&gt;&lt;span class="lnt"&gt;118
&lt;/span&gt;&lt;span class="lnt"&gt;119
&lt;/span&gt;&lt;span class="lnt"&gt;120
&lt;/span&gt;&lt;span class="lnt"&gt;121
&lt;/span&gt;&lt;span class="lnt"&gt;122
&lt;/span&gt;&lt;span class="lnt"&gt;123
&lt;/span&gt;&lt;span class="lnt"&gt;124
&lt;/span&gt;&lt;span class="lnt"&gt;125
&lt;/span&gt;&lt;span class="lnt"&gt;126
&lt;/span&gt;&lt;span class="lnt"&gt;127
&lt;/span&gt;&lt;span class="lnt"&gt;128
&lt;/span&gt;&lt;span class="lnt"&gt;129
&lt;/span&gt;&lt;span class="lnt"&gt;130
&lt;/span&gt;&lt;span class="lnt"&gt;131
&lt;/span&gt;&lt;span class="lnt"&gt;132
&lt;/span&gt;&lt;span class="lnt"&gt;133
&lt;/span&gt;&lt;span class="lnt"&gt;134
&lt;/span&gt;&lt;span class="lnt"&gt;135
&lt;/span&gt;&lt;span class="lnt"&gt;136
&lt;/span&gt;&lt;span class="lnt"&gt;137
&lt;/span&gt;&lt;span class="lnt"&gt;138
&lt;/span&gt;&lt;span class="lnt"&gt;139
&lt;/span&gt;&lt;span class="lnt"&gt;140
&lt;/span&gt;&lt;span class="lnt"&gt;141
&lt;/span&gt;&lt;span class="lnt"&gt;142
&lt;/span&gt;&lt;span class="lnt"&gt;143
&lt;/span&gt;&lt;span class="lnt"&gt;144
&lt;/span&gt;&lt;span class="lnt"&gt;145
&lt;/span&gt;&lt;span class="lnt"&gt;146
&lt;/span&gt;&lt;span class="lnt"&gt;147
&lt;/span&gt;&lt;span class="lnt"&gt;148
&lt;/span&gt;&lt;span class="lnt"&gt;149
&lt;/span&gt;&lt;span class="lnt"&gt;150
&lt;/span&gt;&lt;span class="lnt"&gt;151
&lt;/span&gt;&lt;span class="lnt"&gt;152
&lt;/span&gt;&lt;span class="lnt"&gt;153
&lt;/span&gt;&lt;span class="lnt"&gt;154
&lt;/span&gt;&lt;span class="lnt"&gt;155
&lt;/span&gt;&lt;span class="lnt"&gt;156
&lt;/span&gt;&lt;span class="lnt"&gt;157
&lt;/span&gt;&lt;span class="lnt"&gt;158
&lt;/span&gt;&lt;span class="lnt"&gt;159
&lt;/span&gt;&lt;span class="lnt"&gt;160
&lt;/span&gt;&lt;span class="lnt"&gt;161
&lt;/span&gt;&lt;span class="lnt"&gt;162
&lt;/span&gt;&lt;span class="lnt"&gt;163
&lt;/span&gt;&lt;span class="lnt"&gt;164
&lt;/span&gt;&lt;span class="lnt"&gt;165
&lt;/span&gt;&lt;span class="lnt"&gt;166
&lt;/span&gt;&lt;span class="lnt"&gt;167
&lt;/span&gt;&lt;span class="lnt"&gt;168
&lt;/span&gt;&lt;span class="lnt"&gt;169
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-hcl" data-lang="hcl"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_ecs_cluster&amp;#34; &amp;#34;platform&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;platform&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;setting&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;containerInsights&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;enhanced&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_service_discovery_http_namespace&amp;#34; &amp;#34;platform&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;platform.internal&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; description&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;Service Connect namespace for the platform cluster&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_cloudwatch_log_group&amp;#34; &amp;#34;api&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;/ecs/api-service&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; retention_in_days&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_iam_role&amp;#34; &amp;#34;api_task&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;ecs-api-task&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; assume_role_policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;2012-10-17&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Effect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;Allow&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;sts:AssumeRole&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Principal&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { Service&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;ecs-tasks.amazonaws.com&amp;#34;&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Condition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; ArnLike&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;aws:SourceArn&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;arn:aws:ecs:${var.region}:${var.account_id}:*&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_iam_role_policy&amp;#34; &amp;#34;api_task&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;id&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; policy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;2012-10-17&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Statement&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Effect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;Allow&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;s3:GetObject&amp;#34;, &amp;#34;s3:PutObject&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Resource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;${aws_s3_bucket.api_data.arn}/*&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Effect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;Allow&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Action&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;secretsmanager:GetSecretValue&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; Resource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_secretsmanager_secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api_db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_ecs_task_definition&amp;#34; &amp;#34;api&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; family&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; network_mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;awsvpc&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; requires_compatibilities&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;FARGATE&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; cpu&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;1024&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; memory&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;2048&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; task_role_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api_task&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; execution_role_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;task_execution&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;runtime_platform&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; operating_system_family&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;LINUX&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; cpu_architecture&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;ARM64&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; container_definitions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;${aws_ecr_repository.api.repository_url}:${var.api_image_tag}&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; essential&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; portMappings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; containerPort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; protocol&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;tcp&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api-http&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; appProtocol&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;http&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; environment&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;OTEL_EXPORTER_OTLP_ENDPOINT&amp;#34;, value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;http://localhost:4317&amp;#34;&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; secrets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;DATABASE_URL&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; valueFrom&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_secretsmanager_secret&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api_db&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; logConfiguration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; logDriver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;awslogs&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-group&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-region&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;region&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-stream-prefix&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; healthCheck&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; command&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;CMD-SHELL&amp;#34;, &amp;#34;curl -f http://localhost:8080/healthz || exit 1&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; interval&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;15&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; timeout&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; retries&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; startPeriod&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;30&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;otel-collector&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; image&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;public.ecr.aws/aws-observability/aws-otel-collector:v0.40.0&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; essential&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; portMappings&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { containerPort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; 4317, protocol&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;tcp&amp;#34;&lt;/span&gt; }&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { containerPort&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; 4318, protocol&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;tcp&amp;#34;&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; logConfiguration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; logDriver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;awslogs&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-group&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-region&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;region&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-stream-prefix&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;otel&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;])&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_ecs_service&amp;#34; &amp;#34;api&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; cluster&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_ecs_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;id&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; task_definition&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_ecs_task_definition&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; desired_count&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; launch_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;FARGATE&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; propagate_tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;SERVICE&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;network_configuration&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; subnets&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;private_subnet_ids&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; security_groups&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; assign_public_ip&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;service_connect_configuration&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; namespace&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_service_discovery_http_namespace&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;service&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; port_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api-http&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; discovery_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;client_alias&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; port&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; dns_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;log_configuration&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; log_driver&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;awslogs&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; options&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-group&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_cloudwatch_log_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;api&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-region&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;region&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; &amp;#34;awslogs-stream-prefix&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;service-connect&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;deployment_circuit_breaker&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; enable&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; rollback&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;The EKS counterpart: a production cluster with an Auto Mode bootstrap, the Pod Identity add-on, the VPC CNI add-on with prefix delegation enabled, and a custom Karpenter NodePool with a GPU NodeClass for inference workloads:&lt;/p&gt;
&lt;div class="highlight"&gt;&lt;div class="chroma"&gt;
&lt;table class="lntable"&gt;&lt;tr&gt;&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code&gt;&lt;span class="lnt"&gt; 1
&lt;/span&gt;&lt;span class="lnt"&gt; 2
&lt;/span&gt;&lt;span class="lnt"&gt; 3
&lt;/span&gt;&lt;span class="lnt"&gt; 4
&lt;/span&gt;&lt;span class="lnt"&gt; 5
&lt;/span&gt;&lt;span class="lnt"&gt; 6
&lt;/span&gt;&lt;span class="lnt"&gt; 7
&lt;/span&gt;&lt;span class="lnt"&gt; 8
&lt;/span&gt;&lt;span class="lnt"&gt; 9
&lt;/span&gt;&lt;span class="lnt"&gt; 10
&lt;/span&gt;&lt;span class="lnt"&gt; 11
&lt;/span&gt;&lt;span class="lnt"&gt; 12
&lt;/span&gt;&lt;span class="lnt"&gt; 13
&lt;/span&gt;&lt;span class="lnt"&gt; 14
&lt;/span&gt;&lt;span class="lnt"&gt; 15
&lt;/span&gt;&lt;span class="lnt"&gt; 16
&lt;/span&gt;&lt;span class="lnt"&gt; 17
&lt;/span&gt;&lt;span class="lnt"&gt; 18
&lt;/span&gt;&lt;span class="lnt"&gt; 19
&lt;/span&gt;&lt;span class="lnt"&gt; 20
&lt;/span&gt;&lt;span class="lnt"&gt; 21
&lt;/span&gt;&lt;span class="lnt"&gt; 22
&lt;/span&gt;&lt;span class="lnt"&gt; 23
&lt;/span&gt;&lt;span class="lnt"&gt; 24
&lt;/span&gt;&lt;span class="lnt"&gt; 25
&lt;/span&gt;&lt;span class="lnt"&gt; 26
&lt;/span&gt;&lt;span class="lnt"&gt; 27
&lt;/span&gt;&lt;span class="lnt"&gt; 28
&lt;/span&gt;&lt;span class="lnt"&gt; 29
&lt;/span&gt;&lt;span class="lnt"&gt; 30
&lt;/span&gt;&lt;span class="lnt"&gt; 31
&lt;/span&gt;&lt;span class="lnt"&gt; 32
&lt;/span&gt;&lt;span class="lnt"&gt; 33
&lt;/span&gt;&lt;span class="lnt"&gt; 34
&lt;/span&gt;&lt;span class="lnt"&gt; 35
&lt;/span&gt;&lt;span class="lnt"&gt; 36
&lt;/span&gt;&lt;span class="lnt"&gt; 37
&lt;/span&gt;&lt;span class="lnt"&gt; 38
&lt;/span&gt;&lt;span class="lnt"&gt; 39
&lt;/span&gt;&lt;span class="lnt"&gt; 40
&lt;/span&gt;&lt;span class="lnt"&gt; 41
&lt;/span&gt;&lt;span class="lnt"&gt; 42
&lt;/span&gt;&lt;span class="lnt"&gt; 43
&lt;/span&gt;&lt;span class="lnt"&gt; 44
&lt;/span&gt;&lt;span class="lnt"&gt; 45
&lt;/span&gt;&lt;span class="lnt"&gt; 46
&lt;/span&gt;&lt;span class="lnt"&gt; 47
&lt;/span&gt;&lt;span class="lnt"&gt; 48
&lt;/span&gt;&lt;span class="lnt"&gt; 49
&lt;/span&gt;&lt;span class="lnt"&gt; 50
&lt;/span&gt;&lt;span class="lnt"&gt; 51
&lt;/span&gt;&lt;span class="lnt"&gt; 52
&lt;/span&gt;&lt;span class="lnt"&gt; 53
&lt;/span&gt;&lt;span class="lnt"&gt; 54
&lt;/span&gt;&lt;span class="lnt"&gt; 55
&lt;/span&gt;&lt;span class="lnt"&gt; 56
&lt;/span&gt;&lt;span class="lnt"&gt; 57
&lt;/span&gt;&lt;span class="lnt"&gt; 58
&lt;/span&gt;&lt;span class="lnt"&gt; 59
&lt;/span&gt;&lt;span class="lnt"&gt; 60
&lt;/span&gt;&lt;span class="lnt"&gt; 61
&lt;/span&gt;&lt;span class="lnt"&gt; 62
&lt;/span&gt;&lt;span class="lnt"&gt; 63
&lt;/span&gt;&lt;span class="lnt"&gt; 64
&lt;/span&gt;&lt;span class="lnt"&gt; 65
&lt;/span&gt;&lt;span class="lnt"&gt; 66
&lt;/span&gt;&lt;span class="lnt"&gt; 67
&lt;/span&gt;&lt;span class="lnt"&gt; 68
&lt;/span&gt;&lt;span class="lnt"&gt; 69
&lt;/span&gt;&lt;span class="lnt"&gt; 70
&lt;/span&gt;&lt;span class="lnt"&gt; 71
&lt;/span&gt;&lt;span class="lnt"&gt; 72
&lt;/span&gt;&lt;span class="lnt"&gt; 73
&lt;/span&gt;&lt;span class="lnt"&gt; 74
&lt;/span&gt;&lt;span class="lnt"&gt; 75
&lt;/span&gt;&lt;span class="lnt"&gt; 76
&lt;/span&gt;&lt;span class="lnt"&gt; 77
&lt;/span&gt;&lt;span class="lnt"&gt; 78
&lt;/span&gt;&lt;span class="lnt"&gt; 79
&lt;/span&gt;&lt;span class="lnt"&gt; 80
&lt;/span&gt;&lt;span class="lnt"&gt; 81
&lt;/span&gt;&lt;span class="lnt"&gt; 82
&lt;/span&gt;&lt;span class="lnt"&gt; 83
&lt;/span&gt;&lt;span class="lnt"&gt; 84
&lt;/span&gt;&lt;span class="lnt"&gt; 85
&lt;/span&gt;&lt;span class="lnt"&gt; 86
&lt;/span&gt;&lt;span class="lnt"&gt; 87
&lt;/span&gt;&lt;span class="lnt"&gt; 88
&lt;/span&gt;&lt;span class="lnt"&gt; 89
&lt;/span&gt;&lt;span class="lnt"&gt; 90
&lt;/span&gt;&lt;span class="lnt"&gt; 91
&lt;/span&gt;&lt;span class="lnt"&gt; 92
&lt;/span&gt;&lt;span class="lnt"&gt; 93
&lt;/span&gt;&lt;span class="lnt"&gt; 94
&lt;/span&gt;&lt;span class="lnt"&gt; 95
&lt;/span&gt;&lt;span class="lnt"&gt; 96
&lt;/span&gt;&lt;span class="lnt"&gt; 97
&lt;/span&gt;&lt;span class="lnt"&gt; 98
&lt;/span&gt;&lt;span class="lnt"&gt; 99
&lt;/span&gt;&lt;span class="lnt"&gt;100
&lt;/span&gt;&lt;span class="lnt"&gt;101
&lt;/span&gt;&lt;span class="lnt"&gt;102
&lt;/span&gt;&lt;span class="lnt"&gt;103
&lt;/span&gt;&lt;span class="lnt"&gt;104
&lt;/span&gt;&lt;span class="lnt"&gt;105
&lt;/span&gt;&lt;span class="lnt"&gt;106
&lt;/span&gt;&lt;span class="lnt"&gt;107
&lt;/span&gt;&lt;span class="lnt"&gt;108
&lt;/span&gt;&lt;span class="lnt"&gt;109
&lt;/span&gt;&lt;span class="lnt"&gt;110
&lt;/span&gt;&lt;span class="lnt"&gt;111
&lt;/span&gt;&lt;span class="lnt"&gt;112
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;
&lt;td class="lntd"&gt;
&lt;pre tabindex="0" class="chroma"&gt;&lt;code class="language-hcl" data-lang="hcl"&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_eks_cluster&amp;#34; &amp;#34;platform&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;platform&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;1.34&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; role_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;vpc_config&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; subnet_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;private_subnet_ids&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; endpoint_private_access&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; endpoint_public_access&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; security_group_ids&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="k"&gt;aws_security_group&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;id&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;access_config&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; authentication_mode&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;API&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; bootstrap_cluster_creator_admin_permissions&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;false&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;compute_config&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; node_pools&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;general-purpose&amp;#34;, &amp;#34;system&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; node_role_arn&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;arn&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;kubernetes_network_config&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; elastic_load_balancing { enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; ip_family&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;ipv4&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;storage_config&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; block_storage { enabled&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="kt"&gt;true&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="k"&gt;upgrade_policy&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; support_type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;STANDARD&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; enabled_cluster_log_types&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="s2"&gt;&amp;#34;api&amp;#34;, &amp;#34;audit&amp;#34;, &amp;#34;authenticator&amp;#34;, &amp;#34;controllerManager&amp;#34;, &amp;#34;scheduler&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_eks_addon&amp;#34; &amp;#34;pod_identity&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; cluster_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; addon_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;eks-pod-identity-agent&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; addon_version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;v1.3.10-eksbuild.2&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; resolve_conflicts_on_update&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;OVERWRITE&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;aws_eks_addon&amp;#34; &amp;#34;vpc_cni&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; cluster_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; addon_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;vpc-cni&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; addon_version&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;v1.19.2-eksbuild.1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; resolve_conflicts_on_update&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;OVERWRITE&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; configuration_values&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;jsonencode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;{
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; env&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; ENABLE_PREFIX_DELEGATION&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; WARM_PREFIX_TARGET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; ENABLE_POD_ENI&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;true&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;kubernetes_manifest&amp;#34; &amp;#34;gpu_nodeclass&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; manifest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; apiVersion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;eks.amazonaws.com/v1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; kind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;NodeClass&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;gpu-inference&amp;#34;&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; spec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; role&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_iam_role&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;node&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; subnetSelectorTerms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; [{ tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { &amp;#34;karpenter.sh/discovery&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt; } }&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; securityGroupSelectorTerms&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; [{ tags&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { &amp;#34;karpenter.sh/discovery&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;aws_eks_cluster&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;platform&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="k"&gt;name&lt;/span&gt; } }&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; ephemeralStorage&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; size&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;200Gi&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; iops&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;3000&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; throughput&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;125&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;kubernetes_manifest&amp;#34; &amp;#34;gpu_nodepool&amp;#34;&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; manifest&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; apiVersion&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;karpenter.sh/v1&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; kind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;NodePool&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;gpu-inference&amp;#34;&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; spec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; metadata&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { labels&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { workload&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;inference&amp;#34;&lt;/span&gt; } }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; spec&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; nodeClassRef&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; group&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;eks.amazonaws.com&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; kind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;NodeClass&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;gpu-inference&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; requirements&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;karpenter.k8s.aws/instance-family&amp;#34;, operator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;In&amp;#34;, values&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;g5&amp;#34;, &amp;#34;g6&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; }&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;karpenter.k8s.aws/instance-gpu-count&amp;#34;, operator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;In&amp;#34;, values&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;1&amp;#34;, &amp;#34;4&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; }&lt;span class="p"&gt;,&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; { key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;karpenter.sh/capacity-type&amp;#34;, operator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;In&amp;#34;, values&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;&amp;#34;spot&amp;#34;, &amp;#34;on-demand&amp;#34;&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; &lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; taints&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; [{ key&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;nvidia.com/gpu&amp;#34;, value&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; &amp;#34;true&amp;#34;, effect&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;NoSchedule&amp;#34;&lt;/span&gt; }&lt;span class="p"&gt;]&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; expireAfter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;720h&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; limits&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt; { &amp;#34;nvidia.com/gpu&amp;#34;&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="m"&gt;64&lt;/span&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; disruption&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; {
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; consolidationPolicy&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;WhenEmptyOrUnderutilized&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;&lt;span class="n"&gt; consolidateAfter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;&amp;#34;30s&amp;#34;&lt;/span&gt;
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt; }
&lt;/span&gt;&lt;/span&gt;&lt;span class="line"&gt;&lt;span class="cl"&gt;}
&lt;/span&gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/td&gt;&lt;/tr&gt;&lt;/table&gt;
&lt;/div&gt;
&lt;/div&gt;&lt;p&gt;Both modules are complete enough to validate with &lt;code&gt;terraform plan&lt;/code&gt; against AWS provider 5.x and Kubernetes provider 2.x. The EKS module is roughly twice the line count, which understates the difference — the implicit prerequisites (VPC tagging for subnet discovery, IAM trust policies for the cluster and node roles, OIDC provider creation for IRSA fallback if needed) push a complete EKS Terraform footprint to several hundred lines before the first workload is deployed.&lt;/p&gt;
&lt;h3 id="ecosystem--tooling-depth"&gt;Ecosystem &amp;amp; Tooling Depth
&lt;/h3&gt;&lt;p&gt;ECS has the AWS Console, the CLI, and a small set of integrations: Service Connect, ECS Anywhere, AgentCore, and the now-deprecated App Mesh. The Terraform modules from &lt;code&gt;terraform-aws-modules/ecs&lt;/code&gt; and the CloudFormation &lt;code&gt;AWS::ECS::*&lt;/code&gt; resources cover the surface area. Beyond AWS-native tooling, the third-party ecosystem is sparse.&lt;/p&gt;
&lt;p&gt;EKS exposes the entire CNCF landscape. Argo CD for GitOps reconciliation. Crossplane for AWS resource provisioning from Kubernetes manifests. Kyverno or OPA Gatekeeper for admission-time policy enforcement. cert-manager for ACME certificate automation. External Secrets Operator for secrets injection. KEDA for event-driven autoscaling. Volcano for batch scheduling. Karpenter for node provisioning. Cilium or Calico for advanced NetworkPolicy and eBPF-based observability. Prometheus, Grafana, Loki, Tempo, OpenTelemetry for the observability stack. None of this is hypothetical — every one is a Kubernetes operator or CRD that runs in EKS today, with active maintainers and production references.&lt;/p&gt;
&lt;p&gt;The honest assessment is that this depth matters only at a certain platform team maturity. A 5-person team running 20 services does not need Crossplane. A 30-person platform team running 500 services across 50 product squads does, and on ECS that team will end up reinventing each of these capabilities as custom AWS Lambda glue. Choose the ecosystem you will need at year three, not the one that minimizes your day-30 onboarding.&lt;/p&gt;
&lt;h3 id="observability-stack"&gt;Observability Stack
&lt;/h3&gt;&lt;p&gt;The ECS observability baseline is Container Insights with Enhanced Observability (now generally available with per-container metrics) plus FireLens for log routing through Fluent Bit to CloudWatch, OpenSearch, or third-party destinations. Adequate for a small fleet, expensive at scale because every metric series flows through CloudWatch with its standard cardinality and ingestion pricing.&lt;/p&gt;
&lt;p&gt;The EKS observability stack is structurally different. AWS Managed Service for Prometheus handles high-cardinality time series at a fraction of the per-metric cost of CloudWatch metrics. CloudWatch Container Insights for EKS provides the cluster-level dashboards. The OpenTelemetry Operator provisions OTLP collection pipelines as Kubernetes resources. Loki handles log aggregation at lower cost than CloudWatch Logs ingestion for high-volume application logs. Tempo handles distributed tracing. None of this is mandatory, but for a 200-service microservices architecture the ECS-native CloudWatch-only approach hits cost ceilings — and per-metric dimension limits — that are difficult to engineer around without significant custom work. EKS&amp;rsquo;s Prometheus-compatible stack is production-grade for that scale; the ECS stack requires significant FireLens routing and CloudWatch log group sharding to approximate the same outcome.&lt;/p&gt;
&lt;h3 id="total-cost-of-ownership-three-workload-profiles"&gt;Total Cost of Ownership: Three Workload Profiles
&lt;/h3&gt;&lt;p&gt;All pricing assumes us-east-1 On-Demand as the baseline, drawn from AWS&amp;rsquo;s published pricing pages (AWS, n.d.-h, n.d.-i). Fargate Linux/x86 in us-east-1 is $0.04048 per vCPU-hour and $0.004445 per GB-hour, with 20 GB of ephemeral storage included and additional storage at $0.000111 per GB-hour. The EKS standard control plane is $0.10 per cluster-hour. EKS Auto Mode adds an approximately 12% management fee on top of the EC2 instance cost (AWS, n.d.-d).&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Profile A: 20-service microservices backend, steady-state 50 tasks/pods, 1 vCPU + 2 GB each, 24/7.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;ECS Fargate:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;vCPU: 50 × 1 × $0.04048 × 730 = $1,477.52/month&lt;/li&gt;
&lt;li&gt;Memory: 50 × 2 × $0.004445 × 730 = $324.49/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: $1,802.01/month&lt;/strong&gt; plus ALB/data transfer.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;EKS Fargate profile (same workload, same Fargate rates):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Control plane: $0.10 × 730 = $73.00/month&lt;/li&gt;
&lt;li&gt;Compute: $1,802.01/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: $1,875.01/month&lt;/strong&gt;, a $73 monthly delta for the EKS control plane.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;EKS EC2 managed node group running the same workload on three &lt;code&gt;m6i.xlarge&lt;/code&gt; On-Demand instances (4 vCPU, 16 GB) at $0.192/hour. Note the sizing assumption: 12 vCPUs of node capacity carry 50 nominal 1-vCPU pods only if CPU requests are set to observed utilization rather than to the Fargate task size, roughly a 4:1 overcommit:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Control plane: $73.00/month&lt;/li&gt;
&lt;li&gt;Compute: 3 × $0.192 × 730 = $420.48/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: $493.48/month&lt;/strong&gt; — 73% cheaper than ECS Fargate at steady state, before any Reserved Instance discount.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The break-even point: a 1-year No Upfront Reserved Instance for &lt;code&gt;m6i.xlarge&lt;/code&gt; reduces the per-hour cost to roughly $0.122, dropping monthly compute to $267/month. Profile A makes the same architectural argument that has held for years: at steady-state utilization above approximately 60%, EKS on EC2 with RIs or Savings Plans defeats Fargate by a wide margin. Fargate&amp;rsquo;s premium is justified only when avoided idle time dominates.&lt;/p&gt;
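&lt;p&gt;The Profile A figures are simple enough to check mechanically. A short Python sketch, using only the rates quoted above (the constant names are illustrative, not an AWS pricing API):&lt;/p&gt;

```python
# Sanity-check of the Profile A monthly figures. Rates are the us-east-1
# numbers quoted in the text; the three-node EC2 sizing is the article's
# overcommit assumption, not a derived capacity plan.
HOURS = 730                 # billing hours per month
FARGATE_VCPU_HR = 0.04048   # Fargate, per vCPU-hour
FARGATE_GB_HR = 0.004445    # Fargate, per GB-hour
EKS_CP_HR = 0.10            # EKS control plane, per cluster-hour
M6I_XLARGE_HR = 0.192       # m6i.xlarge On-Demand (about 0.122 on a 1-yr RI)

pods, vcpu, mem_gb = 50, 1, 2

fargate = pods * (vcpu * FARGATE_VCPU_HR + mem_gb * FARGATE_GB_HR) * HOURS
eks_fargate = fargate + EKS_CP_HR * HOURS
eks_ec2 = (3 * M6I_XLARGE_HR + EKS_CP_HR) * HOURS

print(f"ECS Fargate : ${fargate:,.0f}/mo")      # about $1,802
print(f"EKS Fargate : ${eks_fargate:,.0f}/mo")  # about $1,875
print(f"EKS on EC2  : ${eks_ec2:,.0f}/mo")      # about $493
```

&lt;p&gt;Swapping &lt;code&gt;M6I_XLARGE_HR&lt;/code&gt; for the Reserved Instance rate makes the break-even sensitivity trivial to explore.&lt;/p&gt;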
&lt;p&gt;&lt;strong&gt;Profile B: Event-driven batch, bursting 0 → 500 tasks/pods in under 60 seconds, 1 vCPU + 2 GB each, average 12 minutes per invocation, 1M invocations/month.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;ECS Fargate Spot (Linux/x86, up to 70% off On-Demand):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;vCPU: 1,000,000 × 1 × $0.04048 × 0.30 × (12/60) = $2,428.80/month&lt;/li&gt;
&lt;li&gt;Memory: 1,000,000 × 2 × $0.004445 × 0.30 × (12/60) = $533.40/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: ≈$2,962.20/month&lt;/strong&gt; with 1-minute minimum billing absorbed in the average.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;EKS with Karpenter Spot NodeClass on &lt;code&gt;c6i.large&lt;/code&gt; (2 vCPU, 4 GB) at $0.085/hour On-Demand, ~70% Spot discount = $0.026/hour, packing 2 pods per node:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Effective per-pod compute: $0.013/hour&lt;/li&gt;
&lt;li&gt;1M invocations × (12/60) hours × $0.013 = $2,600/month&lt;/li&gt;
&lt;li&gt;Control plane: $73/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: ≈$2,673/month&lt;/strong&gt; — 10% cheaper than Fargate Spot with significantly faster cold start (Karpenter&amp;rsquo;s just-in-time node provisioning typically completes in 30–45 seconds versus Fargate Spot&amp;rsquo;s 60–90 seconds for first-task launch on cold capacity).&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;For pure burst behavior with 100% utilization during the burst window, EKS on Karpenter Spot defeats Fargate Spot on both cost and latency. The Fargate Spot advantage is operational simplicity — no nodes to manage, no Karpenter consolidation policy to tune.&lt;/p&gt;
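Both Profile B totals can be reproduced directly from the stated assumptions. A sketch in Python; the 70% Spot discount and the 2-pods-per-node packing are the assumptions from the bullet lists above, not guaranteed Spot market behavior:

```python
# Reproduce the Profile B monthly totals from this section's assumptions.

INVOCATIONS = 1_000_000
HOURS_PER_INVOCATION = 12 / 60   # 12-minute average runtime
SPOT_DISCOUNT = 0.30             # pay 30% of On-Demand (~70% off)

# ECS Fargate Spot: billed per vCPU-hour and per GB-hour consumed.
fargate_spot = INVOCATIONS * HOURS_PER_INVOCATION * SPOT_DISCOUNT * (
    1 * 0.04048      # 1 vCPU at the On-Demand vCPU rate
    + 2 * 0.004445   # 2 GB at the On-Demand GB rate
)

# EKS + Karpenter Spot: c6i.large at ~$0.026/h Spot, 2 pods per node,
# which the section rounds to $0.013 per pod-hour.
eks_karpenter = INVOCATIONS * HOURS_PER_INVOCATION * 0.013 + 73  # + control plane

print(round(fargate_spot, 2))   # 2962.2
print(round(eks_karpenter, 2))  # 2673.0
```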
&lt;p&gt;&lt;strong&gt;Profile C: AI inference serving, 10× &lt;code&gt;g5.4xlarge&lt;/code&gt; (1× A10G GPU, 16 vCPU, 64 GB, $1.624/hour) sustained, 200 ms p99 latency SLA.&lt;/strong&gt;&lt;/p&gt;
&lt;p&gt;ECS GPU task scheduling, one task per instance (single-tenant GPU):&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Compute: 10 × $1.624 × 730 = $11,855.20/month&lt;/li&gt;
&lt;li&gt;GPU utilization at single-tenant ≈ 35% (typical for inference workloads with bursty traffic)&lt;/li&gt;
&lt;li&gt;Effective monthly cost normalized to useful work: $11,855.20 / 0.35 = &lt;strong&gt;$33,872/month equivalent at full utilization&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;EKS with NVIDIA time-slicing (4 replicas per A10G via the device plugin), Karpenter consolidation:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Same base compute: $11,855.20/month&lt;/li&gt;
&lt;li&gt;Effective GPU utilization ≈ 70% with time-slicing absorbing inference latency variance&lt;/li&gt;
&lt;li&gt;Effective monthly cost normalized to useful work: $11,855.20 / 0.70 = &lt;strong&gt;$16,936/month equivalent at full utilization&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;Control plane: $73/month&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Total: $11,928.20/month&lt;/strong&gt; to deliver roughly twice the inference throughput per dollar.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;The GPU utilization differential is the cost story for AI workloads. ECS schedules whole GPUs to whole tasks. EKS, through the NVIDIA device plugin&amp;rsquo;s time-slicing or MIG modes, schedules fractional GPU access to multiple pods sharing the same physical accelerator. For inference workloads where p99 latency tolerates some queueing, that fractional sharing roughly doubles useful throughput per GPU dollar. This is the largest single TCO delta in this article and it does not narrow over time — it widens as inference traffic grows.&lt;/p&gt;
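The utilization arithmetic driving that delta fits in a few lines. A sketch in Python; the 35% and 70% utilization figures are the estimates used in Profile C above, not measured values, and real time-slicing gains depend on traffic shape and latency tolerance:

```python
# Effective cost of useful GPU inference work under two scheduling models.

GPU_HOURLY = 1.624       # g5.4xlarge On-Demand, as quoted in Profile C
FLEET = 10
HOURS_PER_MONTH = 730

base = FLEET * GPU_HOURLY * HOURS_PER_MONTH   # raw monthly compute spend

def cost_per_useful_capacity(utilization: float) -> float:
    """Monthly spend normalized to fully utilized GPU capacity.
    Lower utilization means each unit of useful work costs more."""
    return base / utilization

ecs_single_tenant = cost_per_useful_capacity(0.35)  # whole GPU per task
eks_time_sliced = cost_per_useful_capacity(0.70)    # 4 replicas per A10G

print(round(base, 2))                                 # 11855.2
print(round(ecs_single_tenant))                       # 33872
print(round(eks_time_sliced))                         # 16936
print(round(ecs_single_tenant / eks_time_sliced, 1))  # 2.0
```

The final ratio is just the utilization ratio: doubling useful utilization halves the cost of useful work, regardless of the fleet size.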
&lt;h2 id="migration-path-ecs-to-eks-and-when-to-resist-it"&gt;Migration Path: ECS to EKS (and When to Resist It)
&lt;/h2&gt;&lt;p&gt;Three signals indicate ECS has become a ceiling for the organization. First, AI workloads beyond Bedrock API calls are on the roadmap — see Section 4.3. Second, the platform team is scaling past 10 engineers and is starting to reinvent CRD-equivalents in custom Lambda. Third, GitOps as a delivery model is becoming a hard requirement and the team is uncomfortable building it on ECS without Argo CD or Flux.&lt;/p&gt;
&lt;p&gt;Two signals indicate ECS is the right permanent home. First, the workload portfolio is dominated by stateless API backends with no ML roadmap and no plans for one. Second, the team is under 10 engineers and the cost of operating EKS — even Auto Mode — exceeds the value the Kubernetes ecosystem delivers to that team.&lt;/p&gt;
&lt;p&gt;When migration is the answer, the supported sequence is the strangler fig pattern at the service level, not a big-bang cluster cutover. Stand up EKS Auto Mode in parallel with the existing ECS environment. Pick one non-critical service. Rebuild it as a Kubernetes Deployment with Pod Identity, deploy it to EKS, route a percentage of production traffic through Route 53 weighted records or an external load balancer that fronts both. Increase the EKS percentage as confidence grows. Repeat per service. The cost of running both environments during the migration is real but bounded; the cost of a failed big-bang cutover is unbounded. There is no shortcut that justifies skipping this discipline.&lt;/p&gt;
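The weighted-traffic ramp described above is worth encoding as data rather than tribal knowledge. A minimal sketch in Python; the stage percentages are illustrative, not a prescription, and each stage should be held until EKS error rates and latency match the ECS baseline:

```python
# Strangler-fig traffic ramp: Route 53 weighted records split traffic
# between the existing ECS service and its EKS replacement.

# Illustrative ramp; real stage sizes depend on your error budget.
RAMP_PERCENT_EKS = [5, 25, 50, 100]

def ramp_stages(percentages: list[int]) -> list[tuple[int, int]]:
    """Return (ecs_weight, eks_weight) pairs for each migration stage.
    Route 53 routes traffic in proportion to weight / sum(weights)."""
    return [(100 - p, p) for p in percentages]

for ecs_w, eks_w in ramp_stages(RAMP_PERCENT_EKS):
    print(f"ECS weight {ecs_w:3d} | EKS weight {eks_w:3d}")
# ECS weight  95 | EKS weight   5
# ...
# ECS weight   0 | EKS weight 100
```

Feeding each pair into the two weighted record sets (same name, different set identifiers) is the entire cutover mechanism; rollback is a weight change, not a redeploy.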
&lt;p&gt;The single most common migration mistake is treating EKS as &amp;ldquo;ECS with more YAML.&amp;rdquo; It is not. The IAM model (Pod Identity vs. task roles), the networking model (CNI + NetworkPolicy vs. awsvpc + security groups), the deployment model (Deployment + ReplicaSet vs. ECS Service), the service discovery model (CoreDNS + Service vs. Service Connect + Cloud Map), and the secrets model (External Secrets Operator vs. inline &lt;code&gt;secrets&lt;/code&gt; in task definitions) are different at every layer. Plan for one quarter of platform team focus on the pattern translation before declaring the migration complete.&lt;/p&gt;
&lt;h2 id="the-decision-matrix"&gt;The Decision Matrix
&lt;/h2&gt;&lt;table&gt;
 &lt;thead&gt;
 &lt;tr&gt;
 &lt;th&gt;Organizational archetype&lt;/th&gt;
 &lt;th&gt;Recommended platform&lt;/th&gt;
 &lt;th&gt;Rationale&lt;/th&gt;
 &lt;/tr&gt;
 &lt;/thead&gt;
 &lt;tbody&gt;
 &lt;tr&gt;
 &lt;td&gt;Startup, &amp;lt; 10 engineers, no AI roadmap&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;ECS Fargate&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Operational simplicity wins until the team has the headcount to operate Kubernetes patterns.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Scale-up with credible ML roadmap&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;EKS Auto Mode&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Building on ECS now means migrating later — pay the Kubernetes tax once.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Enterprise microservices platform, &amp;gt; 100 services&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;EKS Auto Mode&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;CRD ecosystem and operator patterns dominate the day-2 cost equation.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;AI/ML platform team&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;EKS&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;Distributed training, GPU bin-packing, and Neuron SDK integration require Kubernetes primitives.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Multi-region regulated workload&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;EKS Provisioned Control Plane&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;99.99% SLA, audit logging maturity, and policy-as-code via Kyverno/Gatekeeper meet compliance bars.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;tr&gt;
 &lt;td&gt;Hybrid on-premises + cloud&lt;/td&gt;
 &lt;td&gt;&lt;strong&gt;EKS (with Hybrid Nodes) or ECS Anywhere&lt;/strong&gt;&lt;/td&gt;
 &lt;td&gt;EKS Hybrid Nodes for Kubernetes-native hybrid; ECS Anywhere for AWS-native task scheduling on-prem.&lt;/td&gt;
 &lt;/tr&gt;
 &lt;/tbody&gt;
&lt;/table&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src="https://dantas.io/p/ecs-vs-eks-container-orchestration-decision/ecs-vs-eks-container-orchestration-decision-info1.png"
	width="2752"
	height="1536"
	loading="lazy"
	
		alt="Generated by Notebook LM"
	
 
	
		class="gallery-image" 
		data-flex-grow="179"
		data-flex-basis="430px"
	
&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="conclusion"&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;For the median Platform Engineering team in 2026 — one running a mix of microservices, planning at minimum exploratory AI/ML investment, and building toward GitOps and policy-as-code as table-stakes capabilities — the answer is EKS Auto Mode. The 12% Auto Mode premium is small relative to the engineering cost of either reinventing the Kubernetes ecosystem on ECS or migrating later under deadline pressure.&lt;/p&gt;
&lt;p&gt;ECS remains the correct choice in 2026 under one specific condition: the workload is a small fleet of stateless services with no AI/ML roadmap, operated by a team that does not have the headcount or the inclination to absorb the Kubernetes operating model, and where the projected scale will not pass the 50-service / 5-engineer inflection point during the planning horizon. That is not a legacy hedge — it is a legitimate architectural choice for a real and common context.&lt;/p&gt;
&lt;p&gt;ECS is the wrong choice when any of the following is true: distributed AI training is on the 18-month roadmap; the platform team is scaling past 10 engineers and has started building bespoke equivalents of Kyverno or Argo CD in Lambda; or compliance requirements are pushing the organization toward policy-as-code admission control and immutable audit pipelines. In any of those cases, choosing ECS is an architectural error that compounds. The migration cost compounds with it.&lt;/p&gt;
&lt;p&gt;AWS&amp;rsquo;s investment signals are clear. ECS will continue to receive integrations with AWS-native control planes — Bedrock, AgentCore, VPC Lattice, Service Connect — and will remain the correct tool for the AWS-native simple-services use case. EKS will continue to absorb the operational surface area of running Kubernetes on AWS, with Auto Mode, Pod Identity, the 99.99% SLA on Provisioned Control Plane, and native Neuron SDK integration as the trajectory markers. The two services are not converging. They are specializing. Choose the platform whose specialization matches your workload portfolio at year three, not at month three.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src="https://dantas.io/p/ecs-vs-eks-container-orchestration-decision/ecs-vs-eks-container-orchestration-decision-info2.png"
	width="864"
	height="1821"
	loading="lazy"
	
		alt="Generated by Notebook LM"
	
 
	
		class="gallery-image" 
		data-flex-grow="47"
		data-flex-basis="113px"
	
&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;Amazon. (2024, December 3). &lt;em&gt;AWS Trainium2 instances now generally available&lt;/em&gt;. Amazon press release. &lt;a class="link" href="https://press.aboutamazon.com/2024/12/aws-trainium2-instances-now-generally-available" target="_blank" rel="noopener"
 &gt;https://press.aboutamazon.com/2024/12/aws-trainium2-instances-now-generally-available&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (2023, December 14). &lt;em&gt;Amazon EKS Pod Identity: A new way for applications on EKS to obtain IAM credentials&lt;/em&gt;. AWS Containers Blog. &lt;a class="link" href="https://aws.amazon.com/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/blogs/containers/amazon-eks-pod-identity-a-new-way-for-applications-on-eks-to-obtain-iam-credentials/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (2024a, September 24). &lt;em&gt;Migrating from AWS App Mesh to Amazon ECS Service Connect&lt;/em&gt;. AWS Containers Blog. &lt;a class="link" href="https://aws.amazon.com/blogs/containers/migrating-from-aws-app-mesh-to-amazon-ecs-service-connect/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/blogs/containers/migrating-from-aws-app-mesh-to-amazon-ecs-service-connect/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (2024b, December 4). &lt;em&gt;Streamline Kubernetes cluster management with new Amazon EKS Auto Mode&lt;/em&gt;. AWS News Blog. &lt;a class="link" href="https://aws.amazon.com/blogs/aws/streamline-kubernetes-cluster-management-with-new-amazon-eks-auto-mode/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/blogs/aws/streamline-kubernetes-cluster-management-with-new-amazon-eks-auto-mode/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (2024c, March 11). &lt;em&gt;Amazon EKS extended support for Kubernetes versions pricing&lt;/em&gt;. AWS Containers Blog. &lt;a class="link" href="https://aws.amazon.com/blogs/containers/amazon-eks-extended-support-for-kubernetes-versions-pricing/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/blogs/containers/amazon-eks-extended-support-for-kubernetes-versions-pricing/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (2026, March 20). &lt;em&gt;Amazon EKS announces 99.99% Service Level Agreement and new 8XL scaling tier for Provisioned Control Plane clusters&lt;/em&gt;. AWS What&amp;rsquo;s New. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-eks-announces-sla-8xl-scaling-tier/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/about-aws/whats-new/2026/03/amazon-eks-announces-sla-8xl-scaling-tier/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-a). &lt;em&gt;Amazon Bedrock AgentCore&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/bedrock/agentcore/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/bedrock/agentcore/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-b). &lt;em&gt;AI accelerator — AWS Trainium&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/ai/machine-learning/trainium/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/ai/machine-learning/trainium/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-c). &lt;em&gt;Automate cluster infrastructure with EKS Auto Mode&lt;/em&gt;. Amazon EKS User Guide. Retrieved April 20, 2026, from &lt;a class="link" href="https://docs.aws.amazon.com/eks/latest/userguide/automode.html" target="_blank" rel="noopener"
 &gt;https://docs.aws.amazon.com/eks/latest/userguide/automode.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-d). &lt;em&gt;Amazon EKS pricing&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/eks/pricing/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/eks/pricing/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-e). &lt;em&gt;Understand the Kubernetes version lifecycle on EKS&lt;/em&gt;. Amazon EKS User Guide. Retrieved April 20, 2026, from &lt;a class="link" href="https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html" target="_blank" rel="noopener"
 &gt;https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-f). &lt;em&gt;View Amazon EKS platform versions for each Kubernetes version&lt;/em&gt;. Amazon EKS User Guide. Retrieved April 20, 2026, from &lt;a class="link" href="https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html" target="_blank" rel="noopener"
 &gt;https://docs.aws.amazon.com/eks/latest/userguide/platform-versions.html&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-g). &lt;em&gt;Gen AI compute instance — Amazon EC2 Trn2 instances&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/ec2/instance-types/trn2/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/ec2/instance-types/trn2/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-h). &lt;em&gt;AWS Fargate pricing&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/fargate/pricing/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/fargate/pricing/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Amazon Web Services. (n.d.-i). &lt;em&gt;Amazon EC2 On-Demand instance pricing&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://aws.amazon.com/ec2/pricing/on-demand/" target="_blank" rel="noopener"
 &gt;https://aws.amazon.com/ec2/pricing/on-demand/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Bauman, S., &amp;amp; Chandrasekaran, A. (2024, April 18). &lt;em&gt;How to run containers and Kubernetes in production&lt;/em&gt;. Gartner Research. &lt;a class="link" href="https://www.gartner.com/en/documents/5361263" target="_blank" rel="noopener"
 &gt;https://www.gartner.com/en/documents/5361263&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Center for Internet Security. (n.d.). &lt;em&gt;CIS Kubernetes benchmark&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://www.cisecurity.org/benchmark/kubernetes" target="_blank" rel="noopener"
 &gt;https://www.cisecurity.org/benchmark/kubernetes&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Cloud Native Computing Foundation. (2025, April 1). &lt;em&gt;Cloud native 2024: Approaching a decade of code, cloud, and change&lt;/em&gt; (CNCF Annual Survey 2024). &lt;a class="link" href="https://www.cncf.io/reports/cncf-annual-survey-2024/" target="_blank" rel="noopener"
 &gt;https://www.cncf.io/reports/cncf-annual-survey-2024/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Cloud Native Computing Foundation. (2026, January). &lt;em&gt;CNCF survey: Widespread adoption of Kubernetes clusters in production&lt;/em&gt;. &lt;a class="link" href="https://cloudnativenow.com/features/cncf-survey-surfaces-widespread-adoption-of-kubernetes-clusters/" target="_blank" rel="noopener"
 &gt;https://cloudnativenow.com/features/cncf-survey-surfaces-widespread-adoption-of-kubernetes-clusters/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Karpenter. (n.d.). &lt;em&gt;Karpenter documentation&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://karpenter.sh/" target="_blank" rel="noopener"
 &gt;https://karpenter.sh/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Kubernetes Project. (n.d.). &lt;em&gt;Kubernetes releases&lt;/em&gt;. Retrieved April 20, 2026, from &lt;a class="link" href="https://kubernetes.io/releases/" target="_blank" rel="noopener"
 &gt;https://kubernetes.io/releases/&lt;/a&gt;&lt;/p&gt;</description></item><item><title>NVIDIA Ising and the Quantum-GPU Data Center: What Enterprise Architects Need to Know Now</title><link>https://dantas.io/p/nvidia-ising-and-the-quantum-gpu-data-center-what-enterprise-architects-need-to-know-now/</link><pubDate>Tue, 21 Apr 2026 00:00:00 +0000</pubDate><guid>https://dantas.io/p/nvidia-ising-and-the-quantum-gpu-data-center-what-enterprise-architects-need-to-know-now/</guid><description>&lt;h2 id="introduction"&gt;Introduction
&lt;/h2&gt;&lt;p&gt;&amp;ldquo;With Ising, AI becomes the control plane — the operating system of quantum machines — transforming fragile qubits to scalable and reliable quantum-GPU systems.&amp;rdquo; That is Jensen Huang announcing NVIDIA Ising on April 14, 2026 (NVIDIA, 2026a). Read it again, and strip out the word &lt;em&gt;quantum&lt;/em&gt;. What is left is a description of the last fifteen years of enterprise infrastructure: a control plane, an operating system, an abstraction layer sitting on top of hardware the operators would rather not touch directly.&lt;/p&gt;
&lt;p&gt;This article is not about quantum computing. It is about what NVIDIA&amp;rsquo;s announcement signals for the people who design, provision, and operate data centers — the architects who care about rack power, interconnect roadmaps, and which middleware stack their HPC team will ask them to support in 2029. The quantum physics underneath Ising is not the interesting part for that audience. The pattern is. NVIDIA is positioning AI as the operational layer for a class of hardware most enterprises do not yet own, and it is doing so using the same open-source, vendor-neutral playbook that made CUDA unavoidable a decade ago.&lt;/p&gt;
&lt;h2 id="what-nvidia-ising-actually-is"&gt;What NVIDIA Ising Actually Is
&lt;/h2&gt;&lt;p&gt;Ising is a family of open-source AI models for two specific quantum engineering problems: processor calibration and error-correction decoding (NVIDIA, 2026a). The family has two members.&lt;/p&gt;
&lt;p&gt;Ising Calibration is a 35-billion-parameter vision-language model fine-tuned to read experimental measurements off a quantum processing unit and infer the tuning adjustments the hardware needs (NVIDIA Developer, 2026). Paired with an agent, it reduces calibration cycles from days to hours. On the QCalEval benchmark NVIDIA introduced alongside the release, the 35B model outperforms Gemini 3.1 Pro, Claude Opus 4.6, and GPT 5.4 on quantum calibration tasks (NVIDIA Developer, 2026; NVIDIA Research, 2026a).&lt;/p&gt;
&lt;p&gt;Ising Decoding is a pair of 3D convolutional neural networks — 0.9M and 1.8M parameters, tuned respectively for speed and accuracy — that perform pre-decoding for surface-code quantum error correction (NVIDIA Research, 2026b). NVIDIA benchmarks it against pyMatching, the open-source decoder that most of the research community currently deploys, and reports 2.5x faster inference and 3x higher accuracy while requiring an order of magnitude less training data (NVIDIA, 2026a).&lt;/p&gt;
&lt;p&gt;Both models are released under Apache 2.0 on HuggingFace, GitHub, and build.nvidia.com. Both integrate with CUDA-Q, NVIDIA&amp;rsquo;s hybrid classical-quantum programming platform, and with NVQLink, the QPU-GPU interconnect NVIDIA introduced in late 2025 (NVIDIA Developer, 2026). Early adopters named in the announcement include Fermilab, Harvard&amp;rsquo;s Paulson School, Lawrence Berkeley&amp;rsquo;s Advanced Quantum Testbed, the UK National Physical Laboratory, and quantum vendors IQM and Infleqtion (NVIDIA, 2026a). This is a platform play, not a science experiment.&lt;/p&gt;
&lt;h2 id="the-pattern-ai-as-operational-layer"&gt;The Pattern: AI as Operational Layer
&lt;/h2&gt;&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart TB
 subgraph era1["2012: Networking"]
 direction TB
 hw1["Vendor ASIC
 Cisco, Juniper, Arista"]
 cp1["SDN Controller
 OpenFlow"]
 wl1["Network Workloads"]
 hw1 -.-&gt;|open interface| cp1
 cp1 --&gt; wl1
 end

 subgraph era2["2016: Compute"]
 direction TB
 hw2["Bare Metal / Hypervisor
 VMware, Xen, KVM"]
 cp2["Kubernetes
 Declarative API"]
 wl2["Containerized Workloads"]
 hw2 -.-&gt;|open interface| cp2
 cp2 --&gt; wl2
 end

 subgraph era3["2026: Quantum"]
 direction TB
 hw3["QPU
 Superconducting, Trapped-Ion,
 Neutral-Atom, Photonic"]
 cp3["CUDA-Q + Ising
 NVQLink Interconnect"]
 wl3["Hybrid Quantum-Classical Workloads"]
 hw3 -.-&gt;|open interface| cp3
 cp3 --&gt; wl3
 end

 era1 ==&gt; era2
 era2 ==&gt; era3

 classDef hardware fill:#1e293b,stroke:#0a1628,color:#f8fafc,stroke-width:2px
 classDef controlplane fill:#0a1628,stroke:#0a1628,color:#f8fafc,stroke-width:3px
 classDef workload fill:#faf7f2,stroke:#334155,color:#0a1628,stroke-width:1px

 class hw1,hw2,hw3 hardware
 class cp1,cp2,cp3 controlplane
 class wl1,wl2,wl3 workload&lt;/pre&gt;&lt;p&gt;Anyone who sat through the SDN transition from 2012 onward recognizes what is happening here. Networking hardware did not get smarter. What changed was the location of the control logic. Forwarding tables used to be programmed in hardware by a closed, vendor-specific CLI. Then OpenFlow, and later the broader software-defined networking movement, pulled that logic into a software control plane that ran on commodity compute and spoke to the data plane through an open protocol. The ASIC did not disappear. It stopped being the integration point.&lt;/p&gt;
&lt;p&gt;Kubernetes did the same thing to compute. Bare metal did not get more flexible. The scheduling, placement, and lifecycle decisions moved off the hypervisor and onto a declarative API, and every infrastructure decision above that API started being negotiated in YAML instead of in ticket queues. Enterprise architects who internalized that shift in 2016 and 2017 spent the next five years in demand. The ones who dismissed it as researcher toys spent the same five years explaining why their VMware estate could not do what the new hires expected.&lt;/p&gt;
&lt;p&gt;NVIDIA is now executing the same move against quantum hardware. Qubits are fragile, noisy, and vendor-specific — superconducting, trapped-ion, neutral-atom, photonic. Every modality has its own calibration procedure and its own error profile. The traditional response to that heterogeneity would be a vertically integrated stack per vendor. NVIDIA is proposing the opposite: a horizontal control layer — CUDA-Q for orchestration, Ising for the ML-driven operational tasks, NVQLink for the physical interconnect — that treats the QPU as a pluggable accelerator behind a standardized software boundary (NVIDIA, 2026b).&lt;/p&gt;
&lt;p&gt;This is the same bet that made CUDA the default for GPU compute even when AMD hardware was competitive on specs. Own the abstraction layer, make it open enough to be adopted, and the hardware underneath becomes interchangeable. Huang&amp;rsquo;s &amp;ldquo;operating system of quantum machines&amp;rdquo; is not a metaphor. It is a product strategy. For infrastructure architects, the lesson from the last two transitions is uncomfortable and consistent: the abstraction layer wins, and the teams that learn it first set the architectural vocabulary for everyone else.&lt;/p&gt;
&lt;h2 id="infrastructure-implications"&gt;Infrastructure Implications
&lt;/h2&gt;&lt;pre class="mermaid" style="visibility:hidden"&gt;flowchart TB
 subgraph app["Application Layer"]
 workload["Enterprise Workload
 optimization, simulation, ML"]
 end

 subgraph orchestration["Orchestration Layer - Software"]
 cudaq["CUDA-Q
 Hybrid job scheduling
 Quantum-classical programming"]
 ising_cal["Ising Calibration
 35B VLM
 QPU tuning automation"]
 ising_dec["Ising Decoding
 3D CNN
 Real-time error correction"]
 end

 subgraph interconnect["Interconnect Layer"]
 nvqlink["NVQLink
 QPU to GPU low-latency bus
 Microsecond-scale"]
 end

 subgraph hardware["Physical Layer"]
 qpu["QPU
 cryogenic, vendor-specific"]
 gpu["Classical GPU
 error-correction compute,
 AI inference"]
 end

 workload --&gt; cudaq
 cudaq --&gt; ising_cal
 cudaq --&gt; ising_dec
 ising_cal --&gt; nvqlink
 ising_dec --&gt; nvqlink
 nvqlink &lt;--&gt; qpu
 nvqlink &lt;--&gt; gpu

 classDef appclass fill:#faf7f2,stroke:#334155,color:#0a1628,stroke-width:2px
 classDef orchclass fill:#0a1628,stroke:#0a1628,color:#f8fafc,stroke-width:2px
 classDef interclass fill:#1e293b,stroke:#0a1628,color:#f8fafc,stroke-width:3px
 classDef hwclass fill:#334155,stroke:#0a1628,color:#f8fafc,stroke-width:2px

 class workload appclass
 class cudaq,ising_cal,ising_dec orchclass
 class nvqlink interclass
 class qpu,gpu hwclass&lt;/pre&gt;&lt;p&gt;Hybrid quantum-GPU racks are no longer a research slide. NVQLink exists as a low-latency interconnect specifically because error correction has to run on a classical accelerator faster than decoherence accumulates on the QPU — microseconds, not milliseconds (NVIDIA Developer, 2026). That is a physical-layer requirement. It implies co-located GPUs and QPUs sharing a rack or adjacent racks, with cooling profiles that combine cryogenic dilution refrigerators (millikelvin for superconducting qubits) and standard liquid-cooled GPU density. Power, floor loading, EMI isolation, and cable path engineering all move. This is not a 2030 problem. Early adopter sites are building these rooms now.&lt;/p&gt;
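The microseconds-not-milliseconds constraint can be made concrete with a back-of-the-envelope budget. Every number below is an illustrative assumption for a superconducting surface-code system, not an NVQLink or vendor specification:

```python
# Back-of-envelope decode-latency budget for real-time quantum error
# correction. All figures are illustrative assumptions, not specs.

SYNDROME_ROUND_US = 1.0   # assumed: one syndrome-measurement round per µs
ROUNDS_PER_DECODE = 25    # assumed: decode a batch of d rounds (distance 25)

def decode_budget_us(round_us: float, rounds: int) -> float:
    """Wall-clock budget per decode: the decoder must finish before the
    next batch of syndrome data arrives, or backlog grows without bound."""
    return round_us * rounds

budget = decode_budget_us(SYNDROME_ROUND_US, ROUNDS_PER_DECODE)
# The budget must cover QPU-to-GPU transfer, inference, and the result's
# return trip. This is why a microsecond-scale interconnect is a hard
# physical requirement: a single 1 ms network hop would overrun a 25 µs
# budget 40x over before the decoder ran at all.
print(budget)  # 25.0
```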
&lt;p&gt;NVQLink is worth tracking the same way InfiniBand was worth tracking in 2008. It may not be the standard that wins, but it is the standard with the largest vendor pushing it, and its adoption curve will tell you which quantum vendors are playing in the NVIDIA ecosystem versus building their own closed stacks. For procurement and roadmap planning, that signal matters more than the qubit count in any given press release.&lt;/p&gt;
&lt;p&gt;CUDA-Q is the middleware layer to learn. Not because every architect needs to write quantum kernels, but because CUDA-Q is where the orchestration model for hybrid jobs is being defined — how a workload schedules across classical GPUs, QPUs, and the AI models that sit between them. The parallel to learning Kubernetes primitives in 2017 is exact. Engineers who understood pods and services before they became interview table stakes had an unreasonable career advantage. CUDA-Q documentation is free; the time to read it is now.&lt;/p&gt;
&lt;p&gt;The open-source release matters for a reason that tends to get lost in the coverage. Apache 2.0 licensing on both Ising models means enterprises can retrain them on proprietary QPU telemetry without surrendering the data to a vendor cloud (NVIDIA, 2026a). For regulated industries — pharma, defense, finance — this is the difference between a quantum roadmap that is viable and one that dies in legal review. It is also the answer to the reflexive concern about NVIDIA lock-in: the models are open, the weights are open, the training framework is open. What NVIDIA owns is the interconnect and the orchestration layer, which is exactly where it has always made its money.&lt;/p&gt;
&lt;h2 id="what-enterprise-architects-should-actually-do-now"&gt;What Enterprise Architects Should Actually Do Now
&lt;/h2&gt;&lt;p&gt;The actionable list is short and deliberately unglamorous.&lt;/p&gt;
&lt;p&gt;Read the CUDA-Q documentation and the Ising Calibration model card on HuggingFace (NVIDIA, 2026b; NVIDIA Developer, 2026). Not to implement anything. To calibrate your own mental model of where the abstraction boundaries are being drawn. A two-hour reading session will put you ahead of 95% of your peers.&lt;/p&gt;
&lt;p&gt;Track NVQLink adoption announcements across the quantum vendor landscape — IQM, Infleqtion, Quantinuum, PsiQuantum, IonQ. The ones that integrate are joining an ecosystem with gravitational pull. The ones that do not are making a different bet that may or may not pay off.&lt;/p&gt;
&lt;p&gt;Start an internal conversation with your HPC, research, or advanced-engineering team about quantum readiness. Not a budget request. Not a vendor evaluation. An awareness conversation. The question is: &lt;em&gt;if one of our research workloads became viable on a hybrid quantum-classical system in three years, what would our data center need to change?&lt;/em&gt; The answer will expose whatever gaps exist in cooling, interconnect, and skill coverage, and those gaps take years to close.&lt;/p&gt;
&lt;p&gt;What not to do: do not panic-buy quantum roadmap consulting, do not commit capex to qubit-count milestones that have no connection to your workload, and do not let a vendor sell you a &amp;ldquo;quantum-ready&amp;rdquo; anything. The QED-C 2026 industry report projects the quantum computing market at roughly $3 billion by 2028 — real, growing, but still two orders of magnitude below enterprise AI infrastructure spend (QED-C, 2026). This is a watch-and-learn phase, not a procurement phase.&lt;/p&gt;
&lt;h2 id="conclusion"&gt;Conclusion
&lt;/h2&gt;&lt;p&gt;NVIDIA Ising is not a quantum computing announcement dressed up as an AI announcement. It is an infrastructure announcement about where the control plane of a future hardware class is being built, and who is building it. The pattern is one enterprise architects have lived through twice already — in networking and in compute — and the lesson from both is that the abstraction layer, once it becomes open enough to adopt, decides the shape of the ecosystem. The qubits will do what qubits do. The interesting architectural question is what sits between them and your workload, and NVIDIA has just told you what it thinks the answer is.&lt;/p&gt;
&lt;hr&gt;
&lt;p&gt;&lt;img src="https://dantas.io/p/nvidia-ising-and-the-quantum-gpu-data-center-what-enterprise-architects-need-to-know-now/nvidia-ising-conclusion.png"
	width="912"
	height="642"
	loading="lazy"
	
		alt="Three infrastructure transitions — the abstraction layer always wins"
	
 
	
		class="gallery-image" 
		data-flex-grow="142"
		data-flex-basis="340px"
	
&gt;&lt;/p&gt;
&lt;hr&gt;
&lt;h2 id="references"&gt;References
&lt;/h2&gt;&lt;p&gt;NVIDIA. (2026a, April 14). &lt;em&gt;NVIDIA launches Ising, the world&amp;rsquo;s first open AI models to accelerate the path to useful quantum computers&lt;/em&gt; [Press release]. &lt;a class="link" href="https://nvidianews.nvidia.com/news/nvidia-launches-ising-the-worlds-first-open-ai-models-to-accelerate-the-path-to-useful-quantum-computers" target="_blank" rel="noopener"
 &gt;https://nvidianews.nvidia.com/news/nvidia-launches-ising-the-worlds-first-open-ai-models-to-accelerate-the-path-to-useful-quantum-computers&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NVIDIA. (2026b). &lt;em&gt;Open AI models for quantum computing: NVIDIA Ising&lt;/em&gt;. NVIDIA Developer. &lt;a class="link" href="https://developer.nvidia.com/ising" target="_blank" rel="noopener"
 &gt;https://developer.nvidia.com/ising&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NVIDIA Developer. (2026, April 14). &lt;em&gt;NVIDIA Ising introduces AI-powered workflows to build fault-tolerant quantum systems&lt;/em&gt;. NVIDIA Technical Blog. &lt;a class="link" href="https://developer.nvidia.com/blog/nvidia-ising-introduces-ai-powered-workflows-to-build-fault-tolerant-quantum-systems/" target="_blank" rel="noopener"
 &gt;https://developer.nvidia.com/blog/nvidia-ising-introduces-ai-powered-workflows-to-build-fault-tolerant-quantum-systems/&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NVIDIA Research. (2026a, April). &lt;em&gt;QCalEval: Benchmarking vision-language models on quantum calibration plot interpretation&lt;/em&gt;. &lt;a class="link" href="https://research.nvidia.com/publication/2026-04_qcaleval-benchmarking-vision-language-models-quantum-calibration-plot" target="_blank" rel="noopener"
 &gt;https://research.nvidia.com/publication/2026-04_qcaleval-benchmarking-vision-language-models-quantum-calibration-plot&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NVIDIA Research. (2026b, April). &lt;em&gt;Fast AI-based pre-decoders for surface codes&lt;/em&gt;. &lt;a class="link" href="https://research.nvidia.com/publication/2026-04_fast-ai-based-pre-decoders-surface-codes" target="_blank" rel="noopener"
 &gt;https://research.nvidia.com/publication/2026-04_fast-ai-based-pre-decoders-surface-codes&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;NVIDIA. (2026c). &lt;em&gt;Ising-Calibration-1-35B-A3B&lt;/em&gt; [Model card]. HuggingFace. &lt;a class="link" href="https://huggingface.co/nvidia/Ising-Calibration-1-35B-A3B" target="_blank" rel="noopener"
 &gt;https://huggingface.co/nvidia/Ising-Calibration-1-35B-A3B&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Quantum Economic Development Consortium. (2026, April 14). &lt;em&gt;State of the global quantum industry 2026&lt;/em&gt;. &lt;a class="link" href="https://quantumconsortium.org/publication/2026-state-of-the-global-quantum-industry-report/" target="_blank" rel="noopener"
 &gt;https://quantumconsortium.org/publication/2026-state-of-the-global-quantum-industry-report/&lt;/a&gt;&lt;/p&gt;</description></item></channel></rss>