Architecting Network Traffic: GKE vs. Cloud Run
A deep dive into the architectural differences of routing internal traffic, SSL termination, and service-to-service communication between Kubernetes and Serverless environments.
In our GKE environment, traffic first hits a GCP Network Load Balancer (Layer 4), which forwards raw TCP traffic into the cluster. SSL termination happens inside the cluster at the Istio Ingress Gateway. Istio then acts as a Layer 7 proxy, using VirtualService rules to route the decrypted traffic to internal Kubernetes Services.
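A minimal sketch of that in-cluster setup, with hypothetical resource names (the gateway name, namespace, external hostname, and TLS secret are illustrative assumptions; only the backend Service address comes from our actual config):

```yaml
# Hypothetical Gateway terminating TLS at the Istio ingress gateway
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: dify-gateway        # illustrative name
  namespace: dify
spec:
  selector:
    istio: ingressgateway   # binds to the Istio ingress gateway pods
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE                 # SSL terminated here, inside the cluster
        credentialName: dify-tls-cert  # assumed TLS secret name
      hosts:
        - "dify.mylab.local"         # assumed external hostname
---
# VirtualService routes the now-decrypted traffic at Layer 7
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dify-routes
  namespace: dify
spec:
  hosts:
    - "dify.mylab.local"
  gateways:
    - dify-gateway
  http:
    - match:
        - uri:
            prefix: /v1
      route:
        - destination:
            host: dify-api-svc.dify.svc.cluster.local  # internal Kubernetes Service
            port:
              number: 80
```

The point to notice is that the L4 load balancer never sees HTTP: host- and path-based decisions only become possible once Istio has terminated TLS.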
For Cloud Run, we use a Regional Internal Application Load Balancer (Layer 7). SSL termination happens at the GCP Target HTTPS Proxy before any routing decision is made. The URL Map inspects the host and path and forwards traffic to a Backend Service, which points to a serverless Network Endpoint Group (serverless NEG) fronting the Cloud Run service.
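A sketch of the corresponding URL map, in the YAML form accepted by `gcloud compute url-maps import` (the project ID, region, and backend service name are illustrative assumptions; the hostname matches the internal domain we route through):

```yaml
# Hypothetical URL map for the internal ALB; TLS has already been
# terminated at the Target HTTPS Proxy sitting in front of this map.
name: cloudrun-internal-map
defaultService: projects/my-project/regions/us-central1/backendServices/kb-backend
hostRules:
  - hosts:
      - "kb.aip-cloudrun.mylab.local"
    pathMatcher: kb-paths
pathMatchers:
  - name: kb-paths
    # kb-backend is assumed to point at a serverless NEG for the Cloud Run service
    defaultService: projects/my-project/regions/us-central1/backendServices/kb-backend
    pathRules:
      - paths:
          - /api/v2/*
        service: projects/my-project/regions/us-central1/backendServices/kb-backend
```

Unlike the GKE case, every routing hop here is a managed GCP resource: there is no in-cluster proxy to operate, but also no escape hatch below Layer 7.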
In GKE, internal services communicate directly over Kubernetes DNS (e.g., service.namespace.svc.cluster.local), bypassing the ingress entirely. In contrast, Cloud Run typically uses a hub-and-spoke ("star") topology: services call each other by routing requests back through the centralized Load Balancer using internal fully qualified domain names.
# GKE internal communication bypasses Istio
- name: DIFY_API_URL
  value: "http://dify-api-svc.dify.svc.cluster.local/v1"
# Cloud Run communication routes via ALB
- name: KB_URL
  value: "https://kb.aip-cloudrun.mylab.local/api/v2"