In this post, I show how to add response caching with nginx, assuming you have zero ability to change the application code.

Problem

Let’s create simple application using FastAPI:

import time

from fastapi import FastAPI

app = FastAPI()

def __get_latest_items():
    time.sleep(2)
    return ["post1", "post2"]


@app.get("/")
def get_latest_items():
    return __get_latest_items()

This application has a single endpoint, and it’s pretty slow: it takes 2 seconds to gather some items (e.g. the latest articles).

The good thing is that those items don’t refresh often, so it’s safe to say we would get the same response over and over for at least a couple of minutes, or even hours.

Solution

In such a situation we can cache the response instead of querying the service every time a user requests the data.
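Conceptually, what nginx will do for us is a simple time-to-live (TTL) cache. A minimal Python sketch of the idea (the `TTLCache` class and names here are illustrative, not part of the app or of nginx itself):

```python
import time

class TTLCache:
    """Keep a computed value for `ttl` seconds, then recompute it."""

    def __init__(self, ttl: float):
        self.ttl = ttl
        self.value = None
        self.expires_at = 0.0  # monotonic timestamp when the value goes stale

    def get(self, compute):
        now = time.monotonic()
        if now >= self.expires_at:
            # cache miss or expired entry: call the slow function once
            self.value = compute()
            self.expires_at = now + self.ttl
        return self.value

calls = 0

def slow_items():
    global calls
    calls += 1
    return ["post1", "post2"]

cache = TTLCache(ttl=60)
cache.get(slow_items)
cache.get(slow_items)  # served from cache; slow_items runs only once
```

Within the TTL window every caller gets the stored value, so the slow function runs at most once per minute no matter how many requests arrive. This is exactly the trade-off we accept below with nginx: responses may be up to one minute stale.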

How to do so?

Build test stand

First, we create the application container:

FROM python:3.9

WORKDIR /app

COPY app/requirements.txt ./
RUN pip install -r requirements.txt

COPY app/ ./

ENTRYPOINT []
CMD ["uvicorn", "main:app", "--port", "8085"]

Then, let’s create a couple of nginx configs:

Nginx, regular:

server {
  listen 8080;
  server_name _;

  root /usr/share/nginx/html;
  index index.html;

  rewrite_log on;
  access_log /dev/stdout;
  error_log stderr;

  location / {
    proxy_pass http://127.0.0.1:8085/;
  }
}

Nginx, with 1-minute cache:

proxy_cache_path /var/cache/nginx/sandbox-app keys_zone=sandbox-app:1m max_size=10m;

server {
  listen 8081;
  server_name _;

  root /usr/share/nginx/html;
  index index.html;

  rewrite_log on;
  access_log /dev/stdout;
  error_log stderr;

  location / {
    proxy_pass http://127.0.0.1:8085/;

    proxy_cache sandbox-app;
    proxy_cache_key $scheme$request_method$host$request_uri;
    proxy_cache_valid 200 1m;
  }
}
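To see with your own eyes whether a response came from the cache, nginx can expose the built-in `$upstream_cache_status` variable as a response header. This addition is optional and not part of the original config; the `location` block would look like this:

```nginx
location / {
  proxy_pass http://127.0.0.1:8085/;

  proxy_cache sandbox-app;
  proxy_cache_key $scheme$request_method$host$request_uri;
  proxy_cache_valid 200 1m;

  # expose cache status (MISS, HIT, EXPIRED, ...) to the client
  add_header X-Cache-Status $upstream_cache_status;
}
```

With this in place, `curl -si 127.0.0.1:8081 | grep -i x-cache-status` should report `MISS` on the first request and `HIT` on subsequent ones, until the minute expires.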

Now, build the nginx container:

FROM nginx:1.19

RUN rm -f /etc/nginx/conf.d/default.conf

COPY ./nginx /etc/nginx/conf.d

And spin everything up:

docker build . -f nginx.Dockerfile -t local/nginx_cache:nginx
docker run --rm -d --name sandbox-nginx --network host local/nginx_cache:nginx

docker build . -f app.Dockerfile -t local/nginx_cache:app
docker run --rm -d --name sandbox-app --network host local/nginx_cache:app

Tests

We’re ready to test how it works:

Cache disabled

First, hit the service with no cache enabled:

$ siege -c1 -t5s 127.0.0.1:8080
...
Transactions:		           2 hits
Availability:		      100.00 %
Transaction rate:	        0.41 trans/sec
Successful transactions:           2
Longest transaction:	        2.01
Shortest transaction:	        2.00

We have 2 completed requests over 5 seconds, which is expected: every request hits the slow backend and takes 2 seconds.

Cache enabled

Then we query the cached service:

$ siege -c1 -t5s 127.0.0.1:8081
...
Transactions:		       16556 hits
Availability:		      100.00 %
Transaction rate:	     3687.31 trans/sec
Successful transactions:       16556
Longest transaction:	        2.00
Shortest transaction:	        0.00

The first request was fair and took 2 seconds, but all transactions that followed it were lightning fast, taking 0.00 seconds (thanks to localhost and the keep-alive feature).

Thus we were able to process 16556 requests, roughly 8,000 times more than the non-cached service handled.

Have a good day!