
Architecting Network Traffic: GKE vs. Cloud Run

A deep dive into the architectural differences of routing internal traffic, SSL termination, and service-to-service communication between Kubernetes and Serverless environments.

The GKE Setup: NLB to Istio Ingress

In our GKE environment, traffic first hits a GCP Network Load Balancer (Layer 4) which forwards raw TCP traffic. SSL termination happens inside the cluster at the Istio Ingress Gateway. Istio then acts as a Layer 7 proxy, utilizing VirtualService rules to route decrypted traffic to internal Kubernetes Services.
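As a sketch, the in-cluster routing described above can be expressed as an Istio `Gateway` (terminating TLS) paired with a `VirtualService`. The gateway name, host, and certificate secret below are illustrative; only the `dify-api-svc.dify` service name comes from our actual configuration:

```yaml
# Gateway: terminates SSL/TLS at the Istio Ingress Gateway (Layer 7 entry point).
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: internal-gateway        # illustrative name
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE            # terminate TLS here; upstream traffic is plain HTTP
        credentialName: internal-tls-cert   # illustrative secret name
      hosts:
        - "*.mylab.local"       # illustrative host pattern
---
# VirtualService: Layer 7 routing of decrypted traffic to a Kubernetes Service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dify-api
  namespace: dify
spec:
  hosts:
    - "api.mylab.local"         # illustrative host
  gateways:
    - istio-system/internal-gateway
  http:
    - match:
        - uri:
            prefix: /v1
      route:
        - destination:
            host: dify-api-svc.dify.svc.cluster.local
            port:
              number: 80
```

Because TLS is terminated at the gateway, everything after this hop travels as plain HTTP inside the mesh.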

The Cloud Run Setup: ALB and Serverless NEGs

For Cloud Run, we use a Regional Internal Application Load Balancer (Layer 7). SSL termination happens at the GCP Target HTTPS Proxy before routing. The URL Map inspects the host and path, forwarding traffic to a Backend Service, which points to a Serverless Network Endpoint Group (SNEG) connected to the Cloud Run instance.
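To make the host- and path-based routing concrete, here is a minimal URL Map sketch in the YAML form accepted by `gcloud compute url-maps import`. The map name, backend service name, and region are hypothetical; the host is the one used in our environment:

```yaml
# Regional URL Map sketch: inspects host and path, then forwards to the
# Backend Service that fronts the Serverless NEG. Names/region are assumptions.
name: aip-internal-url-map
defaultService: regions/us-central1/backendServices/kb-backend
hostRules:
  - hosts:
      - kb.aip-cloudrun.mylab.local
    pathMatcher: kb-paths
pathMatchers:
  - name: kb-paths
    defaultService: regions/us-central1/backendServices/kb-backend
    pathRules:
      - paths:
          - /api/v2/*
        service: regions/us-central1/backendServices/kb-backend
```

Note that TLS is already terminated at the Target HTTPS Proxy by the time the URL Map runs, so these rules match on the decrypted request.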

GKE vs Cloud Run Traffic Flow
```mermaid
flowchart TD
    subgraph GKE["GKE Architecture"]
        ClientA -->|HTTPS| NLB[GCP L4 NLB]
        NLB -->|HTTPS| Istio[Istio Ingress Gateway<br/>SSL Termination]
        Istio -->|HTTP| K8sSvc[K8s Service]
    end
    subgraph CloudRun["Cloud Run Architecture"]
        ClientB -->|HTTPS| ALB[GCP L7 ALB]
        ALB -->|SSL Termination| URLMap[URL Map]
        URLMap -->|HTTP| SNEG[Serverless NEG]
        SNEG --> CR[Cloud Run]
    end
```
Internal Service Communication

In GKE, internal services communicate directly, bypassing the ingress entirely, using Kubernetes DNS (e.g., service.namespace.svc.cluster.local). In contrast, Cloud Run typically employs a hub-and-spoke ("star") pattern: services reach each other by routing requests back through the centralized Load Balancer using fully qualified internal domain names.

```yaml
# GKE internal communication bypasses Istio
- name: DIFY_API_URL
  value: "http://dify-api-svc.dify.svc.cluster.local/v1"

# Cloud Run communication routes via ALB
- name: KB_URL
  value: "https://kb.aip-cloudrun.mylab.local/api/v2"
```
