Whitelisting an API With a Static IP Using Nginx

Published
7 min read

The Problem

We integrate with SuperControl, a property management API. Their security model is the old-school kind: they don't issue API keys you carry in a header. Instead, you give them a list of IP addresses, and they only accept requests coming from those IPs. Anything else gets dropped at the edge.

That works perfectly if your backend is one server with one IP. It falls apart the moment you deploy on Vercel.

Vercel runs your code on serverless functions. Every invocation can come from a different machine in a different data center, with a different outbound IP. The whole pool changes over time — they add new edges, retire old ones, scale up during traffic spikes. There's no published list of stable IPs to whitelist, and even if there were, it would be huge and changing.

So I had two options. Either ask SuperControl to whitelist "the internet" (not going to happen), or put something in the middle that has one IP that never changes.

The Managed Alternatives (and Why I Didn't Use Them)

This problem is common enough that every major platform has a paid product for it. The question is whether the price tag matches the integration.

  • Vercel Static IPs — $100/month per project on the Pro plan, plus regional Private Data Transfer fees on top. Available on Pro and Enterprise; not on Hobby.

  • AWS NAT Gateway — $0.045/hour (about $32.85/month) per gateway, plus $0.045/GB processed, plus standard data transfer out. A single-AZ setup at low traffic lands around $35–40/month; multi-AZ production setups easily run $100+ before data charges.

  • Third-party proxies like QuotaGuard — around $19/month for entry tiers. Cheaper than the cloud-native options, but it's still a subscription to a black box.

All of them solve the problem. Justifiable at scale, when the cost is a rounding error against engineering hours saved. Overkill for a single integration where the upstream API itself doesn't justify enterprise spend.

The DIY answer: a $5/month VPS from any provider (Hetzner, DigitalOcean, Vultr) with a permanent IP and Nginx installed. Same outcome, a fraction of the cost. The trade-off is that you're now responsible for keeping the box patched and the cert renewed — Certbot's auto-renewal handles the second part; weekly apt update && apt upgrade handles the first.

For a single low-volume integration with one upstream, the trade is worth it. The moment I'm running ten of these or doing high-volume traffic, I'd reconsider.

Quick Detour: What's a Reverse Proxy? What's Nginx?

Skip this section if you already know.

A reverse proxy is a server that sits in front of another server and forwards traffic to it. The client thinks it's talking to the proxy; the proxy actually talks to the real backend on the client's behalf. "Reverse" because a normal proxy hides the client from the server (think VPN), while a reverse proxy hides the server from the client. Reverse proxies are how big sites do load balancing, caching, SSL termination, rate limiting, and — in our case — IP consolidation.

Nginx (pronounced "engine-x") is the software you run on a server to make it a reverse proxy. It's one of the most-used web servers on the internet, free, fast, and configured through plain text files. You describe what should happen to incoming requests, reload it, and it does that. No code, just rules.
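
As a sketch, the smallest useful reverse-proxy config is just a few of those rules (hostname and port here are placeholders, not the real setup):

```nginx
# Minimal reverse proxy sketch: every request this server receives
# is forwarded unchanged to a backend process.
server {
    listen 80;
    server_name proxy.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
    }
}
```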

Put together: I'm running Nginx on a VPS, configured so that requests hitting my proxy domain get forwarded to SuperControl's API.

The Solution

App on Vercel (dynamic IPs)
            ↓
Nginx on VPS (one static IP — whitelisted with SuperControl)
            ↓
SuperControl API

The VPS costs a few dollars a month and has a fixed IP. I gave that one IP to SuperControl. Every API call from the app now hits the VPS first, the VPS forwards it upstream, and SuperControl sees the same trusted address every time.

The catch: that subdomain is now publicly reachable. If I left it open, anyone who guessed the URL could use my VPS as a free, pre-authorized gateway into SuperControl's API. So the proxy needs its own auth layer — a shared secret in a custom header that the app sends and the VPS verifies before forwarding anything.
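
One way to mint that shared secret, assuming openssl is on the box (the header name matches the config that follows):

```shell
# Mint a high-entropy shared secret for the x-proxy-auth-key header.
# 32 random bytes, hex-encoded -> a 64-character string.
SECRET=$(openssl rand -hex 32)
echo "$SECRET"
```

Put the same value in the Nginx config and in the app's environment variables; rotating it is a one-line change on each side.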

The Config

server {
    listen 443 ssl;
    server_name proxy.example.com;

    # Certbot-managed SSL certs here

    location = /health {
        return 200 "ok\n";
    }

    location / {
        if ($http_x_proxy_auth_key != "REPLACE_WITH_SECRET") {
            return 401;
        }

        proxy_pass https://api.upstream.example.com;
        proxy_ssl_server_name on;

        proxy_set_header Host api.upstream.example.com;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header x-proxy-auth-key "";
    }
}

server {
    listen 80;
    server_name proxy.example.com;
    return 301 https://$host$request_uri;
}

Forty lines doing six things:

  1. HTTPS termination with a Certbot/Let's Encrypt cert, and HTTP→HTTPS redirect for anything that knocks on port 80.

  2. /health as an exact-match endpoint for uptime monitors — no auth, no logging noise, just 200 ok.

  3. Auth check on every other path: if the x-proxy-auth-key header doesn't match the shared secret, return 401 and forward nothing.

  4. proxy_pass forwards the request to SuperControl, preserving the path, query string, method, and body.

  5. Host header rewrite so the upstream sees its own domain instead of our proxy subdomain — most APIs reject requests with the wrong Host.

  6. Strip the secret before forwarding. The auth key was for us; SuperControl has no business seeing it.

Three Things Worth Knowing

The header-to-variable rule. Nginx exposes every incoming header as a variable. The conversion: lowercase, replace hyphens with underscores, prefix with $http_. So x-proxy-auth-key becomes $http_x_proxy_auth_key, Authorization becomes $http_authorization, Content-Type becomes $http_content_type. Once you know the rule, every header is readable inside the config.
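
The rule is easy to sanity-check; here it is as a shell one-liner (the header names are just examples):

```shell
# The same transformation Nginx applies to header names:
# lowercase, hyphens to underscores, then a $http_ prefix.
to_nginx_var() {
  printf '$http_%s' "$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr '-' '_')"
}

to_nginx_var "x-proxy-auth-key"; echo   # $http_x_proxy_auth_key
to_nginx_var "Content-Type"; echo       # $http_content_type
```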

location = /path vs location /path. The = makes it an exact match — /health and nothing else. Without =, it's a prefix match, so / catches everything underneath it. This isn't just style: exact matches are checked first and short-circuit the rest of the routing, which is why they're the right choice for things like health checks.
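
A sketch of the difference (paths are placeholders):

```nginx
# Exact match: only /health itself, checked before any prefix match.
location = /health {
    return 200 "ok\n";
}

# Prefix match: /health/db, /healthcheck, anything starting with /health.
location /health {
    return 404;
}
```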

proxy_ssl_server_name on is the gotcha that ate an hour of my time. SuperControl, like most modern APIs, uses SNI to pick which certificate to present during the TLS handshake. Without this directive, Nginx opens the connection without telling the upstream which hostname it's asking for, and the handshake fails with errors that look like generic 502s. The fix is one line; finding it isn't.
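
For reference, these are the upstream-TLS directives in play (the trust-store path shown is the Debian/Ubuntu default; adjust per distro):

```nginx
proxy_pass https://api.upstream.example.com;

# Send SNI during the upstream TLS handshake so the right cert is presented.
proxy_ssl_server_name on;

# Optional hardening: also verify the upstream's certificate chain
# instead of accepting whatever it presents.
proxy_ssl_verify on;
proxy_ssl_trusted_certificate /etc/ssl/certs/ca-certificates.crt;
```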

The Test Loop

sudo nginx -t                          # validate config syntax
sudo systemctl reload nginx            # apply if valid

# Health check — should always return 200
curl -i https://proxy.example.com/health

# Authed call — should return whatever the upstream returns
curl -i https://proxy.example.com/endpoint \
  -H "x-proxy-auth-key: REPLACE_WITH_SECRET"

# No header — should return 401, never touch the upstream
curl -i https://proxy.example.com/endpoint

nginx -t is the most important habit. It validates the whole file before you reload — catches misplaced braces, missing semicolons, and unreachable blocks before they take down a live config. Never reload without it.

Takeaway

Vercel and friends abstract away IPs, DNS, SSL, and routing so well that you can forget the network layer exists — until a third-party constraint forces you back into it. Forty lines of Nginx and a $5 VPS turned an unsolvable problem into a solved one. The skill isn't memorizing the syntax; it's recognizing when the abstraction has run out and being willing to drop a layer down to fix it.
