Autoscale NGINX and PHP-FPM independently on Google Kubernetes Engine

Let’s say I have a stateless PHP app that needs to run 24/7 and automatically scale up to perform well and scale down to stay cost-effective. A perfect use case for Kubernetes, right? (Let’s be honest here: you would use Kubernetes anyway, because all the cool kids do it these days.)

Cool then, let’s start the naive way:

Solution 1:

FROM debian
RUN apt-get update && apt-get -y install nginx php-fpm
 

I dare you!

Ok ok, let’s just forget about this and start a bit better, shall we?

Solution 2:

Separate nginx and php-fpm into two images.

FROM nginx:1.13
COPY src/ /code/
COPY site.conf /etc/nginx/conf.d/default.conf

 

server {
    index index.php index.html;
    server_name localhost;
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /code/;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass localhost:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
 
Notice the fastcgi_pass and try_files directives.

Then build the php image

FROM php:7.1-fpm
COPY src/ /code/

Now make a simple Kubernetes deployment with two containers, nginx and php; they communicate over the pod’s shared network via TCP port 9000 (fastcgi_pass localhost:9000).
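A minimal sketch of such a deployment; the image names, registry path, and resource figures here are assumptions, not values from a real project:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: gcr.io/my-project/my-nginx:1.0   # hypothetical image name
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 100m
            memory: 64Mi
      - name: php
        image: gcr.io/my-project/my-php:1.0     # hypothetical image name
        ports:
        - containerPort: 9000                   # the port nginx's fastcgi_pass targets
        resources:
          requests:
            cpu: 250m
            memory: 128Mi
```

Both containers live in the same pod, so nginx reaches php-fpm simply as localhost:9000.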

Great, now we can scale horizontally and vertically with ease: just set the number of replicas and create a Horizontal Pod Autoscaler. Don’t forget to set the containers’ CPU/memory requests and limits.
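A Horizontal Pod Autoscaler for the deployment above could look like this (a sketch; the deployment name and thresholds are assumptions):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web            # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70   # scale out above 70% average CPU
```

The HPA needs the CPU requests from the previous step to compute utilization, which is why setting the quotas is not optional here.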

But wait. What if my PHP app is very CPU-intensive at peaks and I need to run a lot of php replicas, while nginx, on the other hand, just handles a couple of requests, and two or three replicas are more than enough to satisfy the HA needs?

Problem 1:

Horizontal scaling scales both php and nginx together, even if only one of them needs to be scaled out.

Problem 2:

What is more, I need to build two Docker images containing the same application code. That just doesn’t seem right. Deployments/rollbacks need to change both image tags at the same time, otherwise we end up in an undefined state. And god kills a kitten.

Solution 3:

Nginx doesn’t know PHP. There is no need to have src/ in the nginx image. Let’s use the standard nginx image and put the source code in the php image only.

We had one pod with both nginx and php; now we want each of them to have its own pod. Make two Kubernetes deployments, plus a Kubernetes service for the php pods.

FROM nginx:1.13
COPY site.conf /etc/nginx/conf.d/default.conf

Since the PHP scripts are no longer in the nginx image, delete the try_files directive. Next, change fastcgi_pass to the Kubernetes service’s local DNS name (I named the service php):

#try_files $uri =404;
fastcgi_pass php:9000;

The Kubernetes service called php load-balances TCP traffic internally between the pods labeled php. A local DNS record, php, is created automatically for that internal load balancer, so nginx now sends its requests to the balanced group of pods.
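The php deployment and its service could be sketched like this (the image name, label, and replica count are assumptions; only the service name php matters, since it becomes the DNS name nginx resolves):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: php
spec:
  replicas: 3
  selector:
    matchLabels:
      app: php
  template:
    metadata:
      labels:
        app: php
    spec:
      containers:
      - name: php
        image: gcr.io/my-project/my-php:1.0   # hypothetical image name
        ports:
        - containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
  name: php        # this name is what fastcgi_pass php:9000 resolves
spec:
  selector:
    app: php       # routes to every pod carrying this label
  ports:
  - port: 9000
    targetPort: 9000
```

The nginx deployment stays a plain two-or-three-replica deployment of the slimmed-down image; it no longer needs to know anything about the php pods beyond the service name.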

Now we can scale nginx and php-fpm independently and the app code is in php image only.

 

Check the php pods’ logs (kubectl logs) to watch requests being load-balanced across the replicas.


Solution 4:

Is there a better solution for GKE?

Is it a bad idea to get rid of the try_files in nginx.conf? Should I even use nginx at all? (let’s assume I’m serving the static assets from a different source)

How does this work? Why am I getting nginx’s IP address with this —

<?php echo $_SERVER['SERVER_ADDR']; ?>
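One likely explanation: the stock fastcgi_params file that site.conf includes forwards nginx’s own connection variables to PHP, so SERVER_ADDR reflects the nginx worker that accepted the request rather than the php pod. An excerpt of the relevant lines from nginx’s default fastcgi_params:

```
fastcgi_param  SERVER_ADDR        $server_addr;
fastcgi_param  SERVER_PORT        $server_port;
fastcgi_param  REMOTE_ADDR        $remote_addr;
```

$server_addr is the address of the interface on which nginx accepted the connection, which would explain seeing nginx’s IP inside PHP.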

Feel free to discuss this with me, I’m having a hard time finding anything useful for running php on GKE. Would be nice to get hands on some best practices! Drop me a message at mab@revolgy.com.

FAQs

Q1: What is the first improved approach described for running an Nginx and PHP application on Kubernetes?

The first approach is to create a single Kubernetes deployment that runs one pod. This pod contains two separate containers: one for Nginx and one for PHP-FPM. The two containers communicate with each other over the pod’s shared network, with Nginx using fastcgi_pass localhost:9000 to send requests to the PHP container.

Q2: What are the two main problems with running Nginx and PHP as separate containers within the same pod?

There are two main problems with this setup:

  • Inefficient Scaling: Horizontal scaling affects the entire pod, meaning both the Nginx and PHP containers are scaled up together, even if only the PHP application is under heavy load and needs more replicas.
  • Code Duplication: The application’s source code needs to be built into both the Nginx and the PHP Docker images, which is poor practice and makes deployments and rollbacks riskier.

Q3: How can you architect the application on Kubernetes to allow the Nginx and PHP components to scale independently?

To enable independent scaling, you should separate Nginx and PHP-FPM into their own pods. This is achieved by creating two separate Kubernetes deployments — one for Nginx and one for PHP — and then creating a Kubernetes service for the PHP pods.

Q4: When Nginx and PHP are in separate deployments, how does Nginx send requests to the PHP application?

Nginx sends requests to the Kubernetes service that sits in front of the PHP pods. This service load-balances TCP traffic internally between all the pods in the PHP deployment. A local DNS record (e.g., php) is automatically created for this internal load balancer, so the fastcgi_pass directive in the Nginx configuration can be set to php:9000.

Q5: What modifications are required for the Nginx image and its configuration in this improved setup?

Two key modifications are needed:

  • The Nginx Docker image no longer needs the application source code copied into it; a standard Nginx image can be used.
  • The Nginx configuration file (site.conf) must be updated. The try_files directive should be removed (as Nginx no longer has access to the PHP files), and the fastcgi_pass directive must be changed from localhost:9000 to point to the name of the PHP Kubernetes service (e.g., php:9000).