Chapter 2. Setting up and configuring NGINX

NGINX is a high-performance, modular server that you can use, for example, as a:

  • Web server
  • Reverse proxy
  • Load balancer

This section describes how to set up and configure NGINX in these scenarios.

2.1. Installing and preparing NGINX

Red Hat uses Application Streams to provide different versions of NGINX. You can do the following:

  • Select a stream and install NGINX
  • Open the required ports in the firewall
  • Enable and start the nginx service

Using the default configuration, NGINX runs as a web server on port 80 and provides content from the /usr/share/nginx/html/ directory.

Prerequisites

  • RHEL 8 is installed.
  • The host is subscribed to the Red Hat Customer Portal.
  • The firewalld service is enabled and started.

Procedure

  1. Display the available NGINX module streams:

    # yum module list nginx
    Red Hat Enterprise Linux 8 for x86_64 - AppStream (RPMs)
    Name        Stream        Profiles        Summary
    nginx       1.14 [d]      common [d]      nginx webserver
    nginx       1.16          common [d]      nginx webserver
    ...
    
    Hint: [d]efault, [e]nabled, [x]disabled, [i]nstalled
  2. If you want to install a different stream than the default, select the stream:

    # yum module enable nginx:stream_version
  3. Install the nginx package:

    # yum install nginx
  4. Open the ports on which NGINX should provide its service in the firewall. For example, to open the default ports for HTTP (port 80) and HTTPS (port 443) in firewalld, enter:

    # firewall-cmd --permanent --add-port={80/tcp,443/tcp}
    # firewall-cmd --reload
  5. Enable the nginx service to start automatically when the system boots:

    # systemctl enable nginx
  6. Optionally, start the nginx service:

    # systemctl start nginx

    If you do not want to use the default configuration, skip this step, and configure NGINX accordingly before you start the service.

Verification steps

  1. Use the yum utility to verify that the nginx package is installed:

    # yum list installed nginx
    Installed Packages
    nginx.x86_64    1:1.14.1-9.module+el8.0.0+4108+af250afe    @rhel-8-for-x86_64-appstream-rpms
  2. Ensure that the ports on which NGINX should provide its service are open in firewalld:

    # firewall-cmd --list-ports
    80/tcp 443/tcp
  3. Verify that the nginx service is enabled:

    # systemctl is-enabled nginx
    enabled
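  4. Optionally, if the nginx service is running with the default configuration, request the default page locally, for example with the curl utility:

    # curl http://localhost

    If the default index page is present in /usr/share/nginx/html/, the command returns its HTML content.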

2.2. Configuring NGINX as a web server that provides different content for different domains

By default, NGINX acts as a web server that provides the same content to clients for all domain names associated with the IP addresses of the server. This procedure explains how to configure NGINX:

  • To serve requests to the example.com domain with content from the /var/www/example.com/ directory
  • To serve requests to the example.net domain with content from the /var/www/example.net/ directory
  • To serve all other requests, for example, to the IP address of the server or to other domains associated with the IP address of the server, with content from the /usr/share/nginx/html/ directory

Prerequisites

  • NGINX is installed
  • Clients and the web server resolve the example.com and example.net domains to the IP address of the web server.

    Note that you must manually add these entries to your DNS server.
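    For a quick test without changing DNS, you can instead add entries to the /etc/hosts file on the clients and on the web server. The IP address 192.0.2.1 is only an example; replace it with the IP address of your web server:

    192.0.2.1    example.com
    192.0.2.1    example.net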

Procedure

  1. Edit the /etc/nginx/nginx.conf file:

    1. By default, the /etc/nginx/nginx.conf file already contains a catch-all configuration. If you have deleted this part from the configuration, re-add the following server block to the http block in the /etc/nginx/nginx.conf file:

      server {
          listen       80 default_server;
          listen       [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      }

      These settings configure the following:

      • The listen directives define which IP addresses and ports the service listens on. In this case, NGINX listens on port 80 on all IPv4 and all IPv6 addresses. The default_server parameter indicates that NGINX uses this server block as the default for requests matching the IP addresses and ports.
      • The server_name parameter defines the host names for which this server block is responsible. Setting server_name to _ configures NGINX to accept any host name for this server block.
      • The root directive sets the path to the web content for this server block.
    2. Append a similar server block for the example.com domain to the http block:

      server {
          server_name  example.com;
          root         /var/www/example.com/;
          access_log   /var/log/nginx/example.com/access.log;
          error_log    /var/log/nginx/example.com/error.log;
      }
      • The access_log directive defines a separate access log file for this domain.
      • The error_log directive defines a separate error log file for this domain.
    3. Append a similar server block for the example.net domain to the http block:

      server {
          server_name  example.net;
          root         /var/www/example.net/;
          access_log   /var/log/nginx/example.net/access.log;
          error_log    /var/log/nginx/example.net/error.log;
      }
  2. Create the root directories for both domains:

    # mkdir -p /var/www/example.com/
    # mkdir -p /var/www/example.net/
  3. Set the httpd_sys_content_t context on both root directories:

    # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.com(/.*)?"
    # restorecon -Rv /var/www/example.com/
    # semanage fcontext -a -t httpd_sys_content_t "/var/www/example.net(/.\*)?"
    # restorecon -Rv /var/www/example.net/

    These commands set the httpd_sys_content_t context on the /var/www/example.com/ and /var/www/example.net/ directories.

    Note that you must install the policycoreutils-python-utils package to run the semanage commands.
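    If the package is not yet installed, you can install it first:

    # yum install policycoreutils-python-utils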

  4. Create the log directories for both domains:

    # mkdir /var/log/nginx/example.com/
    # mkdir /var/log/nginx/example.net/
  5. Restart the nginx service:

    # systemctl restart nginx
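    If the restart fails, you can check the configuration for syntax errors:

    # nginx -t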

Verification steps

  1. Create a different example file in each virtual host’s document root:

    # echo "Content for example.com" > /var/www/example.com/index.html
    # echo "Content for example.net" > /var/www/example.net/index.html
    # echo "Catch All content" > /usr/share/nginx/html/index.html
  2. Use a browser and connect to http://example.com. The web server shows the example content from the /var/www/example.com/index.html file.
  3. Use a browser and connect to http://example.net. The web server shows the example content from the /var/www/example.net/index.html file.
  4. Use a browser and connect to http://IP_address_of_the_server. The web server shows the example content from the /usr/share/nginx/html/index.html file.
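  5. Optionally, if no browser is available, perform the same checks with the curl utility. The --resolve option maps a domain to the IP address of the web server (192.0.2.1 is only an example) without requiring DNS:

    # curl --resolve example.com:80:192.0.2.1 http://example.com/
    Content for example.com
    # curl --resolve example.net:80:192.0.2.1 http://example.net/
    Content for example.net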

2.3. Adding TLS encryption to an NGINX web server

You can enable TLS encryption on an NGINX web server for the example.com domain.

Prerequisites

  • NGINX is installed.
  • The private key is stored in the /etc/pki/tls/private/example.com.key file.

    For details about creating a private key and certificate signing request (CSR), as well as how to request a certificate from a certificate authority (CA), see your CA’s documentation.

  • The TLS certificate is stored in the /etc/pki/tls/certs/example.com.crt file. If you use a different path, adapt the corresponding steps of the procedure.
  • The CA certificate has been appended to the TLS certificate file of the server.
  • Clients and the web server resolve the host name of the server to the IP address of the web server.
  • Port 443 is open in the local firewall.

Procedure

  1. Edit the /etc/nginx/nginx.conf file, and add the following server block to the http block in the configuration:

    server {
        listen              443 ssl;
        server_name         example.com;
        root                /usr/share/nginx/html;
        ssl_certificate     /etc/pki/tls/certs/example.com.crt;
        ssl_certificate_key /etc/pki/tls/private/example.com.key;
    }
  2. For security reasons, restrict access to the private key file to the root user only:

    # chown root:root /etc/pki/tls/private/example.com.key
    # chmod 600 /etc/pki/tls/private/example.com.key
    Warning

    If the private key was accessed by unauthorized users, revoke the certificate, create a new private key, and request a new certificate. Otherwise, the TLS connection is no longer secure.
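    You can confirm the resulting ownership and permissions of the key file:

    # ls -l /etc/pki/tls/private/example.com.key
    -rw-------. 1 root root ... /etc/pki/tls/private/example.com.key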

  3. Restart the nginx service:

    # systemctl restart nginx

Verification steps

  • Use a browser and connect to https://example.com
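  • Alternatively, if no browser is available, you can inspect the TLS handshake and the certificate that the server presents, for example with the openssl utility:

    # openssl s_client -connect example.com:443 -servername example.com < /dev/null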

2.4. Configuring NGINX as a reverse proxy for HTTP traffic

You can configure the NGINX web server to act as a reverse proxy for HTTP traffic. For example, you can use this functionality to forward requests to a specific subdirectory on a remote server. From the client perspective, the client loads the content from the host it accesses. However, NGINX loads the actual content from the remote server and forwards it to the client.

This procedure explains how to forward traffic to the /example directory on the web server to the URL https://example.com.

Prerequisites

  • NGINX is installed.

Procedure

  1. Edit the /etc/nginx/nginx.conf file and add the following settings to the server block that should provide the reverse proxy:

    location /example {
        proxy_pass https://example.com;
    }

    The location block defines that NGINX passes all requests in the /example directory to https://example.com.

  2. Set the httpd_can_network_connect SELinux boolean parameter to 1 so that SELinux allows NGINX to forward traffic:

    # setsebool -P httpd_can_network_connect 1
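    You can verify the value of the boolean:

    # getsebool httpd_can_network_connect
    httpd_can_network_connect --> on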
  3. Restart the nginx service:

    # systemctl restart nginx

Verification steps

  • Use a browser and connect to http://host_name/example. The web server shows the content of https://example.com.

2.5. Configuring NGINX as an HTTP load balancer

You can use the NGINX reverse proxy feature to load-balance traffic. This procedure describes how to configure NGINX as an HTTP load balancer that sends requests to different servers, based on which of them has the least number of active connections. If both of these servers are unavailable, the configuration also defines a third host as a fallback.

Prerequisites

  • NGINX is installed.

Procedure

  1. Edit the /etc/nginx/nginx.conf file and add the following settings:

    http {
        upstream backend {
            least_conn;
            server server1.example.com;
            server server2.example.com;
            server server3.example.com backup;
        }
    
        server {
            location / {
                proxy_pass http://backend;
            }
        }
    }

    The least_conn directive in the host group named backend defines that NGINX sends requests to server1.example.com or server2.example.com, depending on which host has the least number of active connections. NGINX uses server3.example.com only as a backup if the other two hosts are unavailable.

    With the proxy_pass directive set to http://backend, NGINX acts as a reverse proxy and uses the backend host group to distribute requests based on the settings of this group.

    Instead of the least_conn load balancing method, you can specify one of the following; for an example that uses the ip_hash method, see the sketch after this procedure:

    • No method to use round robin and distribute requests evenly across servers.
    • ip_hash to send requests from one client address to the same server based on a hash calculated from the first three octets of the IPv4 address or the whole IPv6 address of the client.
    • hash to determine the server based on a user-defined key, which can be a string, a variable, or a combination of both. The optional consistent parameter configures NGINX to use consistent hashing when it distributes requests based on the user-defined key.
    • random to send requests to a randomly selected server.
  2. Restart the nginx service:

    # systemctl restart nginx
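For example, the following is a minimal sketch of the same configuration that uses the ip_hash method instead of least_conn. The host names are the same placeholder servers as above; note that the backup parameter cannot be combined with ip_hash, so the backup server is omitted:

    http {
        upstream backend {
            # Requests from the same client IP address always go to the same server.
            ip_hash;
            server server1.example.com;
            server server2.example.com;
        }

        server {
            location / {
                proxy_pass http://backend;
            }
        }
    }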

2.6. Additional resources