
2024/06/27 05:54:33 · technology

NGINX is an open source, high-performance web server that accelerates content and application delivery, enhances security, and improves scalability. One of the most common use cases for NGINX is content caching, which is one of the most effective ways to improve website performance.


You can use NGINX to speed up local origin servers by configuring it to cache responses from upstream servers, or you can create edge servers for content delivery networks (CDNs). NGINX powers some of the largest CDNs.

When configured as a cache, NGINX will:

  • Cache static and dynamic content.
  • Improve dynamic content performance through microcaching.
  • Serve stale content while revalidating in the background for better performance.
  • Override or set the Cache-Control header.

In this article, you will learn how to configure NGINX as a content cache in Linux to make your web server run as efficiently as possible.

This guide assumes that NGINX is already installed on your Linux server; the configuration is introduced below.

Caching static content on Nginx

Static content is website content that remains the same (does not change) across pages. Examples of static content include files such as images, videos, documents, CSS files, and JavaScript files.

If your website uses a lot of static content, you can optimize its performance by enabling client-side caching, where the browser stores copies of static content for faster access. The example configuration below is a good starting point; just replace www.example.com with your website's URL and adjust the paths appropriately.

server {
    # substitute your web server's URL for www.example.com
    server_name www.example.com;
    root /var/www/example.com/htdocs;
    index index.php;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        include fastcgi_params;
        # substitute the socket, or address and port, of your WordPress server
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        #fastcgi_pass 127.0.0.1:9000;
    }

    location ~* \.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        expires max;
        log_not_found off;
        access_log off;
    }
}

Caching dynamic content on Nginx

NGINX uses a disk-based persistent cache located somewhere on the local file system. Therefore, first create a local disk directory to store the cached content.

# mkdir -p /var/cache/nginx

Next, set the appropriate permissions on the cache directory. It should be owned by the NGINX user ( nginx ) and group ( nginx ) as follows:

# chown nginx:nginx /var/cache/nginx

Now proceed to the sections below to learn how to enable dynamic content caching on Nginx.

Enable FastCGI cache in NGINX

FastCGI (or FCGI) is a widely used protocol for connecting interactive applications such as PHP with web servers such as NGINX. It is an extension of CGI (Common Gateway Interface). The main advantage of FCGI is that it manages multiple CGI requests in a single process. Without it, the web server would have to open a new process (which has to be controlled, handle a request, and then be shut down) for every client request.

To process PHP scripts in LEMP stack deployments, NGINX uses FPM (FastCGI Process Manager), or PHP-FPM, a popular alternative PHP FastCGI implementation. Once the PHP-FPM process is running, NGINX is configured to proxy requests to it for processing. Therefore, NGINX can also be configured to cache responses from the PHP-FPM backend application server.
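As a minimal sketch of that handoff (the socket path is an assumption — match it to the listen setting of your PHP-FPM pool):

```nginx
# Pass .php requests to the PHP-FPM backend for processing
location ~ \.php$ {
    include fastcgi_params;
    # assumed PHP-FPM socket; could also be an address:port pair
    fastcgi_pass unix:/var/run/php5-fpm.sock;
}
```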

In NGINX, the FastCGI content cache is declared using the fastcgi_cache_path directive in the top-level http{} context of the NGINX configuration. You can also add fastcgi_cache_key to define the cache key (request identifier).

Additionally, to read the upstream cache status, add an add_header directive exposing it in the http{} context – this is useful for debugging purposes.

Assuming that your site's server block configuration file is located at /etc/nginx/conf.d/testapp.conf or /etc/nginx/sites-available/testapp.conf (under Ubuntu and its derivatives), open the edit file and add the following lines at the top of the file.

fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=CACHEZONE:10m inactive=60m max_size=40m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";
add_header X-Cache $upstream_cache_status;


The fastcgi_cache_path directive takes a number of parameters:

  • /var/cache/nginx – the path to the local disk directory of the cache.
  • levels - Defines the cache hierarchy level, which sets up a two-level directory hierarchy under /var/cache/nginx.
  • keys_zone (name:size) – Enables the creation of a shared memory zone where all active keys and information about data (meta) are stored. Note that storing the key in memory speeds up the checking process, making it easier for NGINX to determine whether it is a MISS or a HIT without having to check the state on disk.
  • inactive – Specifies the amount of time after which cached data that has not been accessed is removed from the cache, regardless of its freshness. In our example configuration, a value of 60m means that files not accessed for 60 minutes will be removed from the cache.
  • max_size – Specifies the maximum size of the cache. There are more parameters you can use here (read the NGINX documentation for more information).

The variables in the fastcgi_cache_key directive are described below.

NGINX uses them to compute the key (identifier) of a request. Importantly, to send a cached response to the client, the request must have the same key as a cached response.

  • $scheme – Request scheme, HTTP or HTTPS.
  • $request_method – Request method, usually “GET” or “POST”.
  • $host - This can be the hostname in the request line, or the hostname in the "Host" request header field, or the server name that matches the request, in order of preference.
  • $request_uri – Represents the complete original request URI (with parameters).
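As an illustration (the hostname and URI are hypothetical), for a request GET https://www.example.com/index.php?page=2, the key template above evaluates as follows:

```nginx
# $scheme = "https", $request_method = "GET",
# $host = "www.example.com", $request_uri = "/index.php?page=2"
# so the computed key string is:
#   "httpsGETwww.example.com/index.php?page=2"
# NGINX stores an MD5 hash of this string and uses it to name the
# cached file on disk under the levels=1:2 directory hierarchy.
fastcgi_cache_key "$scheme$request_method$host$request_uri";
```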

Additionally, the add_header directive sets an X-Cache header in the response whose value comes from the $upstream_cache_status variable – typically HIT, MISS, or BYPASS, or any other supported value.

Next, in the location block that passes PHP requests to PHP-FPM, use the fastcgi_cache directive to activate the cache you just defined above.

You can also set the caching time for different responses using the fastcgi_cache_valid directive, as shown below.

fastcgi_cache CACHEZONE;
fastcgi_cache_valid 60m;


If, as in our example, only the caching time is specified, then only 200, 301, and 302 responses are cached. But you can also specify the response codes explicitly, or use any (for any response code):

fastcgi_cache CACHEZONE;
fastcgi_cache_valid 200 301 302 60m;
fastcgi_cache_valid 404 10m;

OR

fastcgi_cache CACHEZONE;
fastcgi_cache_valid any 10m;

Adjusting FastCGI cache performance on Nginx

To set the minimum number of times a request with the same key must be made before the response is cached, include the fastcgi_cache_min_uses directive in the http{}, server{}, or location{} context.

fastcgi_cache_min_uses 3;


To revalidate expired cache entries using conditional requests with the "If-Modified-Since" and "If-None-Match" header fields, add the fastcgi_cache_revalidate directive in the http{}, server{}, or location{} context.

fastcgi_cache_revalidate on; 


You can also use the fastcgi_cache_use_stale directive in the location block to instruct NGINX to deliver cached content when the origin server or FCGI server is down.

This example configuration means that when NGINX receives an error, a timeout, or any of the specified errors from the upstream server, and has a stale version of the requested file in the cache, it delivers the stale file.

fastcgi_cache_use_stale error timeout http_500;


Another useful directive for fine-tuning FCGI cache performance is fastcgi_cache_background_update, which works in combination with the updating parameter of the fastcgi_cache_use_stale directive. When set to on, it instructs NGINX to serve stale content when clients request a file that has expired or is being updated from the upstream server.

fastcgi_cache_background_update on;
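Putting the two together – a sketch, assuming the use_stale directive from the previous step – background updates only take effect when the updating parameter is also present on fastcgi_cache_use_stale:

```nginx
# Serve stale content on upstream errors, timeouts, 500s,
# and while a background refresh is in progress
fastcgi_cache_use_stale error timeout updating http_500;
# Refresh expired entries in the background instead of
# making a client wait for the upstream response
fastcgi_cache_background_update on;
```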


The fastcgi_cache_lock directive is also useful for fine-tuning cache performance. If multiple clients request content that is not in the cache, NGINX forwards only the first request to the upstream server, caches the response, and then serves the other client requests from the cache.

fastcgi_cache_lock on;


After making all the above changes in the NGINX configuration file, save and close it. Then check the configuration for syntax errors before restarting the NGINX service.

# nginx -t
# systemctl restart nginx


Next, test whether the cache is working properly by trying to access your web application or site using the following curl command (the first request should indicate a MISS, but subsequent requests should indicate a HIT, as shown in the screenshot).

# curl -I http://testapp.tecmint.com



Here is another screenshot showing NGINX serving stale data.


Add exceptions to bypass caching

Using the fastcgi_cache_bypass directive, you can specify conditions under which NGINX will not send cached responses to the client. To make NGINX not cache responses from the upstream server at all, use fastcgi_no_cache.

For example, suppose you want POST requests and URLs with query strings to always go to PHP. First, declare the conditions using if statements, as shown below.

set $skip_cache 0;

if ($request_method = POST) {
    set $skip_cache 1;
}

if ($query_string != "") {
    set $skip_cache 1;
}

Then activate the above exception in the location block that passes PHP requests to PHP-FPM, using the fastcgi_cache_bypass and fastcgi_no_cache directives.

fastcgi_cache_bypass $skip_cache;
fastcgi_no_cache $skip_cache;

There are many other parts of a site for which you may not want to enable content caching. Below is an example NGINX configuration for improving the performance of a WordPress site, taken from the nginx.com blog.

To use it, make changes (to the domain, paths, filenames, etc.) to reflect what exists in your environment.

fastcgi_cache_path /var/run/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m;
fastcgi_cache_key "$scheme$request_method$host$request_uri";

server {
    server_name example.com www.example.com;
    root /var/www/example.com;
    index index.php;

    access_log /var/log/nginx/example.com.access.log;
    error_log /var/log/nginx/example.com.error.log;

    set $skip_cache 0;

    # POST requests and URLs with a query string should always go to PHP
    if ($request_method = POST) {
        set $skip_cache 1;
    }
    if ($query_string != "") {
        set $skip_cache 1;
    }

    # Don't cache URIs containing the following segments
    if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") {
        set $skip_cache 1;
    }

    # Don't use the cache for logged-in users or recent commenters
    if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") {
        set $skip_cache 1;
    }

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        try_files $uri /index.php;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_cache_bypass $skip_cache;
        fastcgi_no_cache $skip_cache;
        fastcgi_cache WORDPRESS;
        fastcgi_cache_valid 60m;
    }

    location ~ /purge(/.*) {
        fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1";
    }

    location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ {
        access_log off;
        log_not_found off;
        expires max;
    }

    location = /robots.txt {
        access_log off;
        log_not_found off;
    }

    location ~ /\. {
        deny all;
        access_log off;
        log_not_found off;
    }
}

Enable proxy caching in NGINX

NGINX also supports caching responses from other proxy servers (defined by the proxy_pass directive). In this test case, we are using NGINX as a reverse proxy for a Node.js web application, so we will enable NGINX as a cache for the Node.js application. All configuration directives used here have meanings similar to the FastCGI directives in the previous sections, so we will not explain them again.
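For context, here is a minimal sketch of the reverse-proxy setup that the caching directives will be layered on (the hostname and the port 3000 backend address are assumptions; adjust them to your application):

```nginx
# NGINX listens for clients and relays requests to the Node.js app
server {
    listen 80;
    server_name app.example.com;   # hypothetical hostname

    location / {
        # assumed Node.js app address; adjust to your backend
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
```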

To enable caching of responses from a proxy server, the proxy_cache_path directive needs to be included in the top-level http{} context. To specify how requests are cached, you can also add the proxy_cache_key directive as shown below.

proxy_cache_path /var/cache/nginx/app1 keys_zone=PROXYCACHE:100m inactive=60m max_size=500m;
proxy_cache_key "$scheme$request_method$host$request_uri";
add_header X-Cache-Status $upstream_cache_status;
proxy_cache_min_uses 3;

Next, activate caching in the location directive.

location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_cache PROXYCACHE;
    proxy_cache_valid 200 302 10m;
    proxy_cache_valid 404 1m;
}

To define conditions under which NGINX does not send cached content, and does not cache responses from the upstream server at all, include the proxy_cache_bypass and proxy_no_cache directives.

proxy_cache_bypass $cookie_nocache $arg_nocache $arg_comment;
proxy_no_cache $http_pragma $http_authorization;

Fine-tuning proxy cache performance

The following directives can be used to fine-tune the performance of the proxy cache. They have the same meanings as their FastCGI counterparts.

proxy_cache_min_uses 3;
proxy_cache_revalidate on;
proxy_cache_use_stale error timeout updating http_500;
proxy_cache_background_update on;
proxy_cache_lock on;

For more information and caching configuration directives, see the documentation for the two main modules, ngx_http_fastcgi_module and ngx_http_proxy_module.
