Last year, I wrote a tutorial on deploying a serverless application to all Google Cloud regions and routing users to the closest region using a load balancer with an anycast IP. (I’ve since moved that article into our documentation.) It was a 13+ step tutorial to get it working.

Naturally, I scratched the itch and released a Terraform module that makes this much easier. I posted it on Twitter, and it got over 200 likes, so I decided to write about how it works. In this article I’ll show you how you can automate deploying to “all regions” using this new Terraform module. This repository contains all the code and examples I’ll talk about here.

Several of our customers and apps are already using this Terraform module in production to deploy to all regions.

Why deploy everywhere?

Long story short: round-trip latencies. Assume you live in Singapore and are trying to connect to your application running in an Iowa datacenter. Establishing a TCP connection and completing a TLS handshake takes about 3 full round-trips between these points. If your round-trip latency to the server is 200 ms, your users will spend 600 ms just connecting to your application before they receive any data. For most applications, that’s not acceptable.

Enter global anycast IPs

As I explained in my earlier article, Google Cloud’s HTTPS Load Balancer works with anycast public IPs that are advertised globally and routed to the nearest Google datacenter.

If a user from Singapore connects to your app, they are routed to the nearest datacenter or edge PoP location (likely ⪅30ms round-trip latency), so the whole handshake completes in 100 ms or so. After TLS is terminated at the edge, your traffic flows inside Google’s global network (still encrypted) and is routed to the nearest Cloud Run service you deployed behind the load balancer.
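For context, such a global anycast IP is an ordinary global address resource in Terraform (a sketch; the resource name here is hypothetical, and the load balancer module shown later can also reserve an IP for you):

```hcl
# Reserve a global static external IP. On Google Cloud, global external
# addresses are anycast: the same IP is announced from Google's edge
# locations worldwide.
resource "google_compute_global_address" "default" {
  name = "global-lb-ip" # hypothetical name
}
```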

How to do this in Terraform?

To make this easier in Terraform, I worked on extending our lb-http module to support serverless network endpoint groups (NEGs).

First, we need a data source to retrieve all Cloud Run regions as a list:

data "google_cloud_run_locations" "default" { }
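The data source returns a plain list of region names (like "us-central1"). If you don’t want to deploy truly everywhere, you can filter that list with a for expression before feeding it to for_each (a sketch; the startswith function requires Terraform 1.3 or newer):

```hcl
locals {
  # Only European regions, as an example; swap the filter for your own criteria.
  locations = [
    for l in data.google_cloud_run_locations.default.locations :
    l if startswith(l, "europe-")
  ]
}
```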

Then, we deploy the Cloud Run service to every region using the for_each construct in HCL:

# deploy Cloud Run service to every region
resource "google_cloud_run_service" "default" {
  for_each = toset(data.google_cloud_run_locations.default.locations)

  name     = "${var.name}--${each.value}" # var.name: service name prefix (assumed variable)
  location = each.value
  project  = var.project_id

  template {
    spec {
      containers {
        image = var.image
      }
    }
  }
}
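For completeness, the input variables referenced in these snippets can be declared as follows (project_id and image appear verbatim in the snippets; var.name is my assumption for the service name prefix):

```hcl
variable "project_id" {
  description = "Project to deploy into"
  type        = string
}

variable "name" {
  description = "Name prefix for the Cloud Run services (assumed variable)"
  type        = string
}

variable "image" {
  description = "Container image to deploy, e.g. gcr.io/PROJECT/app"
  type        = string
}
```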

Optionally, we make the service publicly accessible (assuming it’s a public website or API) by letting allUsers invoke the service via IAM. Note that we’re using for_each again.

resource "google_cloud_run_service_iam_member" "default" {
  for_each = toset(data.google_cloud_run_locations.default.locations)

  location = google_cloud_run_service.default[each.key].location
  project  = google_cloud_run_service.default[each.key].project
  service  = google_cloud_run_service.default[each.key].name
  role     = "roles/run.invoker"
  member   = "allUsers"
}
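If your service shouldn’t be open to the whole internet, the same IAM resource shape can grant run.invoker to a narrower principal instead, for example any authenticated Google identity (a sketch):

```hcl
# Same shape as above, but only authenticated callers may invoke.
resource "google_cloud_run_service_iam_member" "authenticated" {
  for_each = toset(data.google_cloud_run_locations.default.locations)

  location = google_cloud_run_service.default[each.key].location
  project  = google_cloud_run_service.default[each.key].project
  service  = google_cloud_run_service.default[each.key].name
  role     = "roles/run.invoker"
  member   = "allAuthenticatedUsers"
}
```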

Then, we need to create a serverless network endpoint group (NEG) in each region so we can add the Cloud Run services as backends to the load balancer. Note that we’re using for_each here as well.

resource "google_compute_region_network_endpoint_group" "default" {
  for_each = toset(data.google_cloud_run_locations.default.locations)

  provider              = google-beta
  name                  = "${var.name}--neg--${each.key}" # var.name: name prefix (assumed variable)
  network_endpoint_type = "SERVERLESS"
  region                = google_cloud_run_service.default[each.key].location
  cloud_run {
    service = google_cloud_run_service.default[each.key].name
  }
}

Finally, create the load balancer using our lb-http module’s serverless_negs submodule. (I omitted some details here, but you can find a working example in the repo.) This is where you configure your domain name, IP address, CDN settings, TLS certificate (automatic or bring-your-own), etc. for the load balancer. We use a for expression here to add the NEGs we created above as backends.

module "lb-http" {
  source  = "GoogleCloudPlatform/lb-http/google//modules/serverless_negs"
  version = "~> 4.5"

  project = var.project_id
  name    = var.name # name prefix (assumed variable)

  # ...
  backends = {
    # ...
    default = {
      # ...
      groups = [
        for neg in google_compute_region_network_endpoint_group.default :
        {
          group = neg.id
        }
      ]
      # ...
    }
  }
}
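Once applied, you’ll want the load balancer’s anycast IP, for example to point your DNS records at it. The lb-http module surfaces it as an output (output name per the module’s documentation; verify it against the version you pin):

```hcl
output "load_balancer_ip" {
  value = module.lb-http.external_ip
}
```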

That’s it! Even with the parts omitted, in less than 100 lines of Terraform you can configure an automated way to deploy a containerized app to all regions and serve it behind a global load balancer (with a domain name and TLS certificate).

Every time we roll out new regions, all you need to do is re-apply this configuration, and it will pick up the newly added regions from the API.

Right now, when I deploy this configuration to all 20 regions, it creates about 68 resources across various APIs, and the resulting Cloud Run services look like this (and thanks to terraform destroy, it cleanly removes them all):

$ gcloud run services list
SERVICE                               REGION
zoneprinter--asia-east2               asia-east2
zoneprinter--asia-northeast1          asia-northeast1
zoneprinter--asia-northeast2          asia-northeast2
zoneprinter--asia-northeast3          asia-northeast3
zoneprinter--asia-south1              asia-south1
zoneprinter--asia-southeast1          asia-southeast1
zoneprinter--asia-southeast2          asia-southeast2
zoneprinter--australia-southeast1     australia-southeast1
zoneprinter--europe-north1            europe-north1
zoneprinter--europe-west1             europe-west1
zoneprinter--europe-west2             europe-west2
zoneprinter--europe-west3             europe-west3
zoneprinter--europe-west4             europe-west4
zoneprinter--europe-west6             europe-west6
zoneprinter--northamerica-northeast1  northamerica-northeast1
zoneprinter--southamerica-east1       southamerica-east1
zoneprinter--us-central1              us-central1
zoneprinter--us-east1                 us-east1
zoneprinter--us-east4                 us-east4
zoneprinter--us-west1                 us-west1

Hopefully, this is useful to you. Let me know on Twitter if you end up using it. Make sure to read the Terraform module documentation to see what else you can tune, and fork the example repo if you want to give this a try.