
Overview
Learn how to set up a reverse proxy with HAProxy on a VPS to hide your real origin IP, improve performance, and protect your apps from third parties.
What is a reverse proxy?
A reverse proxy sits between clients and your servers. It receives incoming requests, decides where to send them,
and returns responses—while keeping your origin servers hidden from the public internet. It can also load balance
across multiple backends, add security headers, rate-limit abusive clients, and centralize TLS (HTTPS) termination.
Think of it as a smart bouncer: it directs people to the right rooms but keeps the backstage private.
In short, a reverse proxy is the quiet workhorse that keeps everything running smoothly while your origin stays private.
Reverse Proxy with HAProxy: How it works
HAProxy is a powerful L4/L7 proxy and load balancer. Client requests hit HAProxy first, where:
- TLS (HTTPS) can be terminated.
- Headers like X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Host are added.
- Traffic is routed to backends by hostname, path, or custom rules.
- Health checks, automatic failover, rate limits, compression, light caching, WebSockets, and gRPC pass-through are available.
- Detailed logs and a live stats page provide observability.
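Since these headers may carry a chain of addresses once requests pass through multiple hops, your app should treat X-Forwarded-For as a comma-separated list. A minimal shell sketch of pulling out the original client (the values are illustrative):

```shell
# X-Forwarded-For may carry a chain ("client, proxy1, proxy2"); the
# left-most entry is the original client -- trust it only when the
# header was set by your own proxy.
xff="203.0.113.7, 10.0.0.10"   # illustrative value
client_ip=${xff%%,*}           # drop everything after the first comma
echo "$client_ip"              # → 203.0.113.7
```

Real apps usually do this in the application or logging layer rather than in the shell, but the parsing rule is the same.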
Bottom line: HAProxy simplifies your architecture, boosts security and performance, and makes scaling straightforward.
Pros and Cons of HAProxy
Pros (why HAProxy shines)
- High performance & low overhead (event-driven, multi-threaded).
- L4 + L7 smarts (TCP/SNI passthrough or full HTTP routing/rewrites).
- Robust load balancing & health checks (round-robin, leastconn, hashing; active checks, failover).
- Security features (TLS termination, HSTS, ACLs, rate limiting via stick-tables, IP allow/deny).
- Observability (rich logs, live stats socket/page; Prometheus exporters available).
- Reliability (graceful, near-zero-downtime reloads; battle-tested).
- Small footprint (runs virtually anywhere: Linux/BSD/containers).
Cons (trade-offs)
- Learning curve (powerful but verbose configuration).
- Cert automation not built-in (pair with Certbot/lego or Data Plane API).
- Manual service discovery by default (dynamic backends need templates/API).
- Limited built-in caching/static serving (use CDN/Varnish/Nginx if needed).
- No native community WAF (use a separate WAF or HAProxy Enterprise).
- Complex rewrites can get wordy.
- Windows support limited (best on Linux/BSD).
What you’ll need
- A VPS/public server for HAProxy (the reverse proxy).
- Your origin server (e.g., 10.0.0.10:8080).
- A domain (e.g., example.com) with DNS A/AAAA records pointing to the HAProxy server’s public IP.
Privacy tip: To truly hide your origin IP, ensure the origin is not publicly reachable, firewall it to accept traffic only from the HAProxy server, and avoid DNS records that reveal the origin.
Step 1 — Install HAProxy
Ubuntu/Debian
sudo apt update
sudo apt install -y haproxy
RHEL/Alma/Rocky
sudo dnf install -y haproxy
Step 2 — Get a TLS certificate (Let’s Encrypt)
We’ll use Certbot to obtain a certificate, then bundle it into the single PEM file HAProxy expects.
Install certbot and get the cert (one-time; do this before HAProxy binds :80, since standalone mode needs port 80 free)
# Ubuntu/Debian
sudo apt install -y certbot
sudo certbot certonly --standalone -d example.com --agree-tos -m you@example.com --non-interactive
Create HAProxy’s PEM bundle (fullchain + privkey)
sudo mkdir -p /etc/haproxy/certs
sudo bash -c 'cat /etc/letsencrypt/live/example.com/fullchain.pem \
/etc/letsencrypt/live/example.com/privkey.pem \
> /etc/haproxy/certs/example.com.pem'
sudo chmod 600 /etc/haproxy/certs/example.com.pem
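To sanity-check the bundle layout (HAProxy expects the certificate chain first, then the key, in one PEM file), you can inspect it with openssl. The sketch below rehearses the same cat-based bundling on a throwaway self-signed certificate so it is safe to run anywhere; substitute your real /etc/letsencrypt/live/example.com/ files in practice.

```shell
# Illustrative only: build a bundle from a throwaway self-signed cert
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=example.com" \
  -keyout "$dir/privkey.pem" -out "$dir/fullchain.pem" 2>/dev/null
cat "$dir/fullchain.pem" "$dir/privkey.pem" > "$dir/example.com.pem"
# HAProxy reads the first PEM block as the certificate, so this must parse:
openssl x509 -in "$dir/example.com.pem" -noout -subject
rm -rf "$dir"
```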
Auto-rebundle & reload HAProxy on renewals
sudo bash -c 'cat >/etc/letsencrypt/renewal-hooks/deploy/haproxy.sh' <<'EOF'
#!/usr/bin/env bash
cat /etc/letsencrypt/live/example.com/fullchain.pem \
/etc/letsencrypt/live/example.com/privkey.pem \
> /etc/haproxy/certs/example.com.pem
systemctl reload haproxy
EOF
sudo chmod +x /etc/letsencrypt/renewal-hooks/deploy/haproxy.sh
Step 3 — Minimal, production-ready HAProxy config (HTTPS + redirect)
Replace example.com and your backend IP/port where noted.
# /etc/haproxy/haproxy.cfg
global
    log /dev/log local0
    maxconn 50000
    daemon

defaults
    log global
    mode http
    option httplog
    timeout connect 5s
    timeout client 60s
    timeout server 60s
    http-reuse safe

# Frontend: listen on 80/443, redirect to HTTPS, route ACME and app traffic
frontend fe_https
    bind :80
    bind :443 ssl crt /etc/haproxy/certs/example.com.pem alpn h2,http/1.1

    # Force HTTPS
    http-request redirect scheme https unless { ssl_fc }

    # Basic security header
    http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" if { ssl_fc }

    # Preserve client info for your app
    option forwardfor header X-Forwarded-For
    http-request set-header X-Forwarded-Proto https if { ssl_fc }
    http-request set-header X-Forwarded-Host %[req.hdr(host)]

    # Simple rate cap: 100 requests / 10s per IP
    stick-table type ip size 100k expire 10m store http_req_rate(10s)
    http-request track-sc0 src
    acl too_fast sc0_http_req_rate gt 100
    http-request deny status 429 if too_fast

    # Route ACME HTTP-01 challenges to local certbot (used during renewals)
    acl acme path_beg /.well-known/acme-challenge/
    use_backend be_acme if acme

    # Route your domain to the origin backend
    acl host_example hdr(host) -i example.com
    use_backend be_app if host_example
    default_backend be_app

# Backend: your origin server
backend be_app
    balance leastconn
    option httpchk GET /health
    http-check expect status 200
    server app1 10.0.0.10:8080 check

# Backend to serve ACME challenges (certbot standalone hook)
backend be_acme
    server local 127.0.0.1:8081
Why this works
- HAProxy terminates TLS on :443 and redirects :80 → HTTPS.
- Regular traffic goes to your origin at 10.0.0.10:8080.
- Only /.well-known/acme-challenge/* is routed to a tiny local webserver that Certbot runs during renewals.
Step 4 — Start, reload & validate
# Validate config
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
# Enable and start
sudo systemctl enable --now haproxy
# Reload after edits/renewals
sudo systemctl reload haproxy
Step 5 — Hands-off renewals
Let Certbot briefly bind to :8081 while HAProxy keeps :80/:443 open:
# Typically handled by systemd timer; safe to run manually for testing
sudo certbot renew --deploy-hook "/etc/letsencrypt/renewal-hooks/deploy/haproxy.sh" \
--http-01-port 8081 --pre-hook "systemctl start haproxy" --post-hook "systemctl start haproxy"
During renewal, Certbot answers the challenge on port 8081; HAProxy already routes that path to 127.0.0.1:8081.
Variations (pick what you need)
A) Multiple origins by hostname
# Add in frontend:
acl host_api hdr(host) -i api.example.com
use_backend be_api if host_api

# Define an API backend:
backend be_api
    balance roundrobin
    option httpchk GET /healthz
    server api1 10.0.0.21:9000 check
    server api2 10.0.0.22:9000 check
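Routing can also key on the URL path instead of (or in addition to) the hostname. A hedged sketch, assuming your API is served under the /api/ prefix (adjust to your app):

```
# Add in frontend: send /api/* to be_api, everything else falls through
acl is_api path_beg /api/
use_backend be_api if is_api
```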
B) TLS passthrough (origin handles TLS/mTLS)
Use TCP mode with SNI routing. No header rewrites or L7 features here.
frontend fe_tcp
    mode tcp
    bind :443
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend be_tls_app if { req_ssl_sni -i example.com }

backend be_tls_app
    mode tcp
    server app_tls 10.0.0.10:443 check
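One trade-off of passthrough: without L7 access, the origin only sees the HAProxy server’s IP. If your origin supports the PROXY protocol (HAProxy, Nginx, and others do), you can still forward the client address at L4. A sketch, assuming you also enable the matching accept-proxy/proxy_protocol option on the origin listener:

```
backend be_tls_app
    mode tcp
    # send-proxy-v2 prefixes each connection with the real client address;
    # the origin MUST be configured to expect the PROXY protocol, or TLS breaks
    server app_tls 10.0.0.10:443 check send-proxy-v2
```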
C) Minimal HTTP-only reverse proxy (no TLS)
For internal/testing only—use HTTPS for production.
global
    log /dev/log local0

defaults
    mode http
    log global
    option httplog
    timeout connect 5s
    timeout client 60s
    timeout server 60s

frontend public_http
    bind :80
    option forwardfor
    default_backend app

backend app
    server app1 10.0.0.10:8080 check
Quick checks & troubleshooting
# DNS should point to HAProxy
dig +short example.com
# HTTP should redirect to HTTPS (301)
curl -I http://example.com
# HTTPS should serve content
curl -I https://example.com
# See headers the app receives (in your app logs):
# X-Forwarded-For, X-Forwarded-Proto, X-Forwarded-Host
Firewall tips:
- Lock down your origin so it only accepts traffic from the HAProxy server (e.g., with ufw, firewalld, or cloud security groups).
- Optionally block direct public access to the origin IP at your provider level.
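As a concrete sketch with ufw on the origin server, assuming 203.0.113.10 stands in for the HAProxy server’s public IP and the app listens on 8080 (both are illustrative values):

```
# Run on the ORIGIN: allow only the HAProxy host to reach the app port
sudo ufw default deny incoming
sudo ufw allow OpenSSH                                      # keep admin access
sudo ufw allow from 203.0.113.10 to any port 8080 proto tcp
sudo ufw enable
```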
Final notes
- Keep timeouts reasonable for your workloads (WebSockets/gRPC may need higher).
- Expose a /health endpoint in your app for httpchk.
- Plan for zero-downtime deployments: drain a server (mark it disabled) during deploys, then re-enable it.
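Draining can also be done at runtime through HAProxy’s admin socket, without editing the config. This sketch assumes you add stats socket /run/haproxy/admin.sock mode 660 level admin to the global section (the config above does not include it) and that socat is installed:

```
# Take app1 out of rotation, deploy, then bring it back
echo "disable server be_app/app1" | sudo socat stdio /run/haproxy/admin.sock
# ...deploy...
echo "enable server be_app/app1" | sudo socat stdio /run/haproxy/admin.sock
```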
Important Notice:
If you are unsure how to configure the server correctly, we strongly recommend hiring a professional to complete the configuration. It is essential that all settings are accurate, including checking firewall ports to confirm nothing is blocked.
It is important to have at least a basic understanding of firewalls and Linux commands to navigate the configuration process effectively.
Please note that we are not responsible for any damages or issues that may arise from the configuration process. All information provided here is for technical knowledge and learning purposes only.
Thank you for your understanding.