The cheapest Google Kubernetes Engine cluster you can create is a 3-node cluster of f1-micro VMs (shared CPU, 0.6 GB RAM each), which costs about $11/month today. But each node will be left with only ~50 MB of free memory, which is not enough to run much of anything.

Baseline

You can’t create fewer than 3 instances when you use f1-micro.

But you can create a 1-node cluster if you specify the next smallest machine type, n1-standard-1 (3.75 GB RAM); however, a single n1-standard-1 is more expensive than three f1-micro nodes.
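For comparison, that single-node alternative would look roughly like this (the cluster name here is illustrative):

gcloud container clusters create single \
    --num-nodes 1 \
    --machine-type n1-standard-1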

gcloud container clusters create minimalist \
    --num-nodes 3 \
    --machine-type f1-micro

Running free -m on these nodes shows about 50 MB of free memory.
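You can check this yourself by running free on one of the nodes over SSH (the node name is illustrative; list yours with gcloud compute instances list):

gcloud compute ssh gke-minimal-default-pool-95260d28-7sgw \
    --command "free -m"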

At this point, the pods running in the kube-system namespace will be:

$ kubectl get pods --namespace kube-system
NAME                                                READY     STATUS
event-exporter-v0.1.7-1642279337-m61kx              2/2       Running
fluentd-gcp-v2.0.9-744bc                            2/2       Running
fluentd-gcp-v2.0.9-mf356                            2/2       Running
fluentd-gcp-v2.0.9-t6nvb                            2/2       Running
heapster-v1.4.3-761904113-xp2xp                     3/3       Running
kube-dns-3468831164-cxl6v                           3/3       Running
kube-dns-3468831164-m1c5v                           3/3       Running
kube-dns-autoscaler-244676396-c3df7                 1/1       Running
kube-proxy-gke-minimal-default-pool-95260d28-7sgw   1/1       Running
kube-proxy-gke-minimal-default-pool-95260d28-g2m2   1/1       Running
kube-proxy-gke-minimal-default-pool-95260d28-m116   1/1       Running
kubernetes-dashboard-1265873680-k5hjq               1/1       Running
l7-default-backend-3623108927-5rq4t                 1/1       Running
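To see how much memory these system pods actually reserve on a given node, you can describe one of the nodes from the listing above and look at the Allocated resources section of the output:

kubectl describe node gke-minimal-default-pool-95260d28-7sgw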

Disabling Add-ons and Monitoring

  1. Opt out of monitoring/logging: if you do not need your cluster and container logs, metrics, and events collected and uploaded to Google Stackdriver, pass these flags:

     --no-enable-cloud-logging \
     --no-enable-cloud-monitoring
    
  2. Opt out of add-ons: these add-ons will be installed on your cluster unless you explicitly disable them:

    • KubernetesDashboard: the web UI for Kubernetes. The GKE UI in Cloud Console is pretty good, so you probably don’t need this.
    • HttpLoadBalancing: L7 load balancing (Ingress) controller. Disable if you don’t use Ingress.
    • HorizontalPodAutoscaling: metrics-based pod scaling. Disable if you don’t use the HPA feature.

Result:

gcloud container clusters create minimalist \
    --num-nodes 3 \
    --machine-type f1-micro \
    --no-enable-cloud-logging \
    --no-enable-cloud-monitoring \
    --disable-addons=KubernetesDashboard,HttpLoadBalancing,HorizontalPodAutoscaling
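If you want to double-check what ended up enabled, the addonsConfig section of the cluster description reflects these flags (you may need to pass --zone):

gcloud container clusters describe minimalist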

After this, you’re down to these pods:

NAME                                                   READY     STATUS
kube-dns-3468831164-5pcgp                              3/3       Running
kube-dns-3468831164-t6v46                              3/3       Running
kube-dns-autoscaler-244676396-qwt07                    1/1       Running
kube-proxy-gke-minimalist-default-pool-8d74ed1d-3m7f   1/1       Running
kube-proxy-gke-minimalist-default-pool-8d74ed1d-7z82   1/1       Running
kube-proxy-gke-minimalist-default-pool-8d74ed1d-jp6l   1/1       Running

Disabling kube-dns

kube-dns provides DNS resolution for:

  1. pods and services in the cluster
  2. external domain names
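In-cluster names follow the usual <service>.<namespace>.svc.cluster.local pattern. For example, a Service named web in the default namespace (an illustrative name) resolves like this from any pod:

nslookup web.default.svc.cluster.local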

If your pods do not need to resolve each other within the cluster or resolve domain names outside your cluster, you can disable kube-dns:

kubectl scale -n kube-system --replicas=0 deploy/kube-dns-autoscaler
kubectl scale -n kube-system --replicas=0 deploy/kube-dns
kubectl delete -n kube-system service kube-dns
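Scaling down kube-dns-autoscaler first matters: if it were left running, it would scale kube-dns right back up. You can confirm both deployments are at zero replicas afterwards:

kubectl get deploy -n kube-system kube-dns kube-dns-autoscaler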

Even after disabling kube-dns, Pods will have an /etc/resolv.conf that points to the kube-dns Service you just deleted. This is because GKE configures the kubelet with the kube-dns Service IP hardcoded in its --cluster-dns flag.
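You can see this by printing resolv.conf from any running pod (the pod name is illustrative):

kubectl exec my-pod -- cat /etc/resolv.conf

The nameserver line will still list the old kube-dns ClusterIP, so lookups from pods that rely on it will time out.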

So if your pods connect to external domain names, you can still specify nameservers in your Pod spec (documentation).

For example, using Google Public DNS:

spec:
  dnsPolicy: "None"   # don't inherit the (now deleted) cluster DNS
  dnsConfig:
    nameservers:
      - 8.8.8.8
      - 8.8.4.4
  containers:
  - "..."

Result

Now each node has ~75 MB of free memory, compared to ~50 MB at the beginning.

I don’t think we saved much.