How in-depth are you on AWS Network Load Balancers and Reverse-Proxy? — Part 1

3 min read · Oct 31, 2025

Image generated by: my imagination and my BF (Copilot ❤)

If you ask random people about ELB on AWS, you’ll likely get these kinds of answers:

1° ALB is for Application

2° CLB is not used anymore

3° NLB is used when you have high network traffic, or when you just need to work with the TCP/UDP protocols.

4° What is GWLB?

Well, we can’t say that’s wrong; all of these make sense. The problem is not knowing HOW the ELB works, but HOW to configure these balancers.

Many people face issues like rejected connections, connection resets, timeouts, or the dreaded 502 errors in their environments. Some simply ignore them, thinking “it’s normal,” while others end up working late nights, increasing CPU and memory resources (hoping it will help), scaling servers, changing instance types, and spending money on solutions without fully understanding the root cause.

In these posts, I’ll talk a little bit about Network Load Balancers working with NGINX, and how you can improve efficiency without sacrificing your resources (and money).

PLEASE READ BOTH POSTS TO UNDERSTAND THE ENTIRE PROCESS.

Enjoy it.

NGINX side

Do you know what keepalive is?

By default, NGINX establishes a new connection to the upstream server for every incoming request. This behavior can become problematic under heavy load for several reasons:

  • Even if multiple requests originate from the same client, each one initiates a separate connection to the upstream server, using a different local port.
  • Once a request is completed, the local port enters the TIME_WAIT state (typically 60 seconds on Linux, and up to roughly 120 seconds on other systems), during which it cannot be reused.
  • As traffic increases, more local ports are consumed, and under high load the system can quickly exhaust the available ephemeral ports (at most ~64,000, and often far fewer with default OS port ranges).
  • Constantly opening and closing connections is computationally expensive and inefficient.

To address this, the keepalive directive allows NGINX to reuse existing connections to upstream servers. This leads to:

  • Reduced consumption of local ports.
  • Reuse of connections instead of repeatedly establishing new ones.
  • Improved overall performance and scalability.

Syntax: keepalive connections;
Default: —
Context: upstream

This directive appeared in version 1.1.4.
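
To make this concrete, here is a minimal sketch, assuming a hypothetical upstream called “backend” (the server address is a placeholder). Note that, per the NGINX docs, upstream keepalive only takes effect for proxied HTTP traffic if you also force HTTP/1.1 to the upstream and clear the Connection header:

upstream backend {
    server 10.0.0.10:8080;   # placeholder backend address
    keepalive 32;            # cache up to 32 idle connections per worker process
}

server {
    listen 80;

    location / {
        proxy_pass http://backend;
        proxy_http_version 1.1;          # keepalive to the upstream requires HTTP/1.1
        proxy_set_header Connection "";  # strip "Connection: close" inherited from the client
    }
}

Keep in mind that the keepalive value is the number of idle connections cached per worker process, not a hard cap on the total number of connections NGINX will open to the upstream.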

A word of caution, however: in some scenarios, enabling keepalive can be overkill. Persistent connections consume server resources, and under resource pressure NGINX may abruptly close connections, potentially causing unexpected behavior.

Do you know what keepalive_timeout is? — http/server side

The first parameter sets a timeout during which an idle keep-alive connection to the client will stay open on the server side. The optional second parameter sets the value advertised to clients in the “Keep-Alive: timeout=time” response header.

Syntax: keepalive_timeout timeout [header_timeout];
Default: keepalive_timeout 75s;
Context: http, server, location
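
For illustration, a small sketch of the client-side tuning (the values here are illustrative, not recommendations):

http {
    # Close idle client connections after 65s, and advertise
    # "Keep-Alive: timeout=60" to clients via the second parameter.
    keepalive_timeout 65s 60s;
}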

Do you know what keepalive_timeout is? — upstream side

This directive defines the timeout during which an idle keepalive connection to an upstream server will stay open.

IT IS VERY IMPORTANT TO PAY ATTENTION HERE! Your upstream (app) will have its own idle-timeout settings; you need to keep these timeouts in sync between your app and NGINX.

Of course, this directive also needs to be kept in sync with the idle_timeout and keepalive_timeout (server/http) directives.

Part 2: How in-depth are you on AWS Network Load Balancers and Reverse-Proxy? — Part 2 | by Oliveira_Gustavo01 | Medium

Syntax: keepalive_timeout timeout;
Default: keepalive_timeout 60s;
Context: upstream

This directive appeared in version 1.15.3.
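
Putting it together, a minimal sketch, assuming (hypothetically) that your app’s own idle timeout is 60s: keep NGINX’s upstream keepalive_timeout below it, so NGINX closes idle connections first and never tries to reuse a connection the app has already torn down (a classic source of resets and 502s):

upstream backend {
    server 10.0.0.10:8080;   # placeholder backend address
    keepalive 32;
    keepalive_timeout 50s;   # below the app's assumed 60s idle timeout,
                             # so NGINX closes idle connections first
}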
