How to Efficiently Monitor NGINX: Tips, Tools, Metrics

By Staff Contributor on April 3, 2023

Have you ever needed to quickly set up a web server? Or needed to distribute load across many nodes? In these situations, the answer is often NGINX. NGINX can perform both functions: it’s primarily a web server, but it can also work as a load balancer, an HTTP cache, or even an email proxy.

Since NGINX’s features put it on the front line of our applications, we need to make sure it works as expected, responding quickly to web requests and raising as few errors as possible.

Let’s walk through some of the features of NGINX, metrics to observe, and tools to ensure NGINX is healthy and working fine.

NGINX Capabilities Overview

As briefly mentioned, NGINX has four main use cases: web server, load balancer, HTTP cache, and mail proxy. Each of these is enabled by setting the right configuration in a text file. Let’s dig a bit deeper into each function.

NGINX Main Use Cases

Web Server
Using NGINX as a web server means defining what URLs it will handle and how to process requests to those addresses. This is done by configuring virtual servers, each one with its own processing rules. Some of those rules may include returning a file, rewriting the request for further processing, or returning specific HTTP codes.
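As a sketch, a minimal virtual server illustrating these three kinds of rules might look like this (the domain, paths, and endpoint names are placeholders, not part of any standard setup):

```nginx
# Illustrative virtual server: serve static files, rewrite one path,
# and return a fixed HTTP code for another.
server {
    listen 80;
    server_name example.com;

    root /var/www/example;          # serve files from this directory

    location /old-docs/ {
        # Rewrite legacy URLs to the new path with a permanent (301) redirect
        rewrite ^/old-docs/(.*)$ /docs/$1 permanent;
    }

    location /health {
        return 200 "OK\n";          # respond with a specific HTTP code and body
    }
}
```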

Whenever there’s a new connection, the operating system places it in the socket’s listen queue until NGINX accepts it. Under normal circumstances, this queue is small or empty. However, under heavy traffic, it can grow very large.

You can use this command to review queue size:

netstat -Lan

The above command displays network statistics: the -L parameter shows listen queue sizes, -a includes all sockets, and -n displays numerical addresses. (The -L flag is specific to BSD-based systems; on Linux, ss -ltn reports the equivalent Recv-Q/Send-Q columns for listening sockets.)

The output is similar to this:

Current listen queue sizes (qlen/incqlen/maxqlen)

Listen      Local Address
0/0/128     *.12345
10/0/128    *.80
0/0/128     *.8080

For example, on port 80, there are 10 connections in the queue (unaccepted connections), while the maximum is 128. On a busy site, the queue can fill up to the maximum, at which point new clients receive connection errors. You can raise this limit with the backlog parameter of the listen directive (along with the corresponding OS limit, such as net.core.somaxconn on Linux) based on your load.

Load Balancing
Load balancing is a technique to distribute traffic among different application nodes to improve application performance and make the app fault tolerant.

NGINX can balance HTTP traffic as well as the TCP and UDP network protocols. The methods it uses to distribute load include round robin, least connections, hash, and random. It also supports health checks to monitor node status: open source NGINX performs passive checks, marking a server as failed after a configurable number of errors (max_fails) within a time window (fail_timeout) and adding it back once it recovers, while active health checks with custom conditions are available in NGINX Plus.
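A sketch of an upstream group using the least-connections method with passive health-check thresholds (the server hostnames are placeholders):

```nginx
upstream backend {
    least_conn;                      # send each request to the server with the fewest active connections
    server app1.internal:8080 max_fails=3 fail_timeout=30s;
    server app2.internal:8080 max_fails=3 fail_timeout=30s;
    server app3.internal:8080 backup;    # only used when the other servers are unavailable
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;   # forward requests to the upstream group
    }
}
```

With max_fails=3 and fail_timeout=30s, a server that fails three times within 30 seconds is considered unavailable for the next 30 seconds, then tried again.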

HTTP Cache
When NGINX is configured for caching, it saves responses in a disk cache and serves them to subsequent requests for the same content without contacting the upstream server. You can configure the cache size and how long cached data stays valid.
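For illustration, a basic caching setup might look like the following (the cache path, zone name, and the backend upstream are placeholders):

```nginx
# Store up to 1 GB of cached responses on disk; entries unused for 60 minutes are evicted.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=app_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_cache app_cache;
        proxy_cache_valid 200 302 10m;   # keep successful responses for 10 minutes
        proxy_cache_valid 404      1m;   # keep not-found responses only briefly
        proxy_pass http://backend;       # upstream defined elsewhere in the configuration
    }
}
```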

Mail Proxy
With its proxy functionality, NGINX can proxy IMAP, POP3, and SMTP to the proper server, making it a single endpoint for all email clients.
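A minimal mail proxy sketch follows; the auth_http directive points to an HTTP service (address shown here is illustrative) that authenticates each client and tells NGINX which backend mail server to use:

```nginx
mail {
    server_name mail.example.com;
    auth_http   localhost:9000/auth;   # authentication service returning the backend per user

    server {
        listen   143;
        protocol imap;
    }
    server {
        listen   110;
        protocol pop3;
    }
    server {
        listen   25;
        protocol smtp;
    }
}
```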

Metrics to Keep an Eye On

As explained in the previous section, NGINX serves multiple purposes. But no matter how you use it, the metrics you should observe are the same.

Requests Flow

When a client makes a request to NGINX, the client makes a connection to the server. Once the connection is established, the server reads the information provided by the client, processes the request, and provides a response. In this process, we can identify the following metrics: connections waiting to be accepted, accepted connections, processed connections, response codes (success or error), and processing time. These numbers provide an overview of the server’s health status.

Errors

There are different error types, including timeouts, processing, authentication, and application errors. Each of them will most likely have a different root cause. Tracking these errors will allow us to identify if the server configuration needs to be updated, if the server needs more resources, or if the application behind the server isn’t working the way it should.

Tools for Monitoring NGINX

To identify the errors described in the previous section, you need a way to gather that information.

Logs

NGINX, like many applications, can write error messages to log files. However, this isn’t the only option. Since you can integrate it with syslog, you can send logs over the network to a different log server, where you can merge all your log entries to perform more advanced analysis.

Take this configuration for example:

error_log syslog:server=unix:/var/log/nginx.sock debug;
access_log syslog:server=[2001:db8::1]:1234,facility=local7,tag=nginx,severity=info;

In the first line, NGINX sends error messages at the debug level to a UNIX socket. In the second line, NGINX sends access logs to a log server at IPv6 address 2001:db8::1 on port 1234. The additional parameters have the following functions:

  • facility: to indicate the type of program sending the message
  • tag: a custom tag added to the syslog messages
  • severity: message severity level

stub_status Module

This is the main module exposing metrics about NGINX. It is included in the prebuilt packages for most supported platforms; however, if you build NGINX from source, you need to include it explicitly with the --with-http_stub_status_module configure flag. Afterward, you have to enable it in your server configuration.

This is an example configuration:

server {
  listen 127.0.0.1:80;
  server_name 127.0.0.1;

  location /nginx_status {
    stub_status;
  }
}

NGINX will listen on interface 127.0.0.1, port 80, and the server name is 127.0.0.1. When NGINX receives a request for /nginx_status, the module will display the following information:

Active connections: 291
server accepts handled requests
16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106

Output description:

  • Active connections: the current number of active client connections, including those in the Waiting state
  • Accepts: the total number of accepted client connections
  • Handled: the total number of handled connections; normally equal to accepts unless a resource limit was reached
  • Requests: the total number of client requests (a single connection can carry multiple requests)
  • Reading: connections where NGINX is reading the request header
  • Writing: connections where NGINX is writing the response back to the client
  • Waiting: idle client connections waiting for a request
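Collecting these numbers over time is easier if you parse them programmatically. Here is a minimal sketch in Python (the field layout matches the sample output above; in practice you would fetch the text from your /nginx_status endpoint rather than a hardcoded string):

```python
import re

def parse_stub_status(text: str) -> dict:
    """Parse the plain-text output of NGINX's stub_status module into a dict."""
    metrics = {}
    m = re.search(r"Active connections:\s+(\d+)", text)
    metrics["active"] = int(m.group(1))
    # The line after "server accepts handled requests" holds three counters.
    m = re.search(r"server accepts handled requests\s+(\d+)\s+(\d+)\s+(\d+)", text)
    metrics["accepts"], metrics["handled"], metrics["requests"] = map(int, m.groups())
    m = re.search(r"Reading:\s+(\d+)\s+Writing:\s+(\d+)\s+Waiting:\s+(\d+)", text)
    metrics["reading"], metrics["writing"], metrics["waiting"] = map(int, m.groups())
    return metrics

sample = """Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106
"""
print(parse_stub_status(sample)["requests"])   # -> 31070465
```

Feeding the parsed values into a time-series store lets you graph request rates and spot when handled drops below accepts, which indicates connections being dropped due to resource limits.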

Other Tools

In the last section, we covered the default tools available with an NGINX installation. Even so, those are somewhat basic and need extra effort to take full advantage of them. Other modules, extensions, and plugins make it easier to view the information NGINX provides.

We suggest you consider the following characteristics when selecting an additional tool:

  • Syslog integration for remote monitoring:
    This lets you send logs wherever your log server is located, whether on-premises, in a different data center, or in the cloud.
  • Cloud-based tools:
    Cloud-based logging offloads infrastructure management so you can focus on making sure your application delivers the expected outcome.
  • Log aggregation from multiple applications:
    NGINX isn’t the only tool you monitor, and maintaining multiple logging platforms isn’t efficient.
  • Analytics and reports:
    The ability to configure reports and analyze data lets you be proactive, identifying trends and catching issues before they materialize.
  • Alerts:
    Alerts help you respond quickly when errors occur.

Summary

In this article, we’ve shown you some options to include in your NGINX configuration to get a better view of your server’s health. Still, the default options require additional work to become fully functional, so if you need an enterprise-level solution, we recommend a cloud solution like SolarWinds Observability SaaS (formerly known as SolarWinds Observability). Select the options that best fit your needs.

This post was written by Juan Pablo Macias Gonzalez. Juan is a computer systems engineer with experience in back end, front end, databases, and systems administration.
