
What is Nginx Web Server?

Nginx is a high-performance, open-source web server and reverse proxy server. It excels at serving static content, load balancing, and handling many concurrent connections efficiently. Nginx is often used as a front-end proxy that routes requests to backend servers such as Node.js or PHP, making it a critical component for optimizing web application performance and security.

Why might someone choose Nginx over other web server software?

Main reasons:

  1. Nginx is known for its performance and efficiency. It can handle a large number of concurrent connections with a small memory footprint, which makes it a good choice for high-traffic websites.
  2. Nginx is highly scalable. It can be configured to handle large amounts of traffic and scaled up or down as needed.
  3. Nginx is versatile. It can be used as a reverse proxy, load balancer, and HTTP cache, which makes it suitable for a wide range of applications.
  4. Nginx is easy to configure and maintain. Its configuration files are simple and straightforward, which makes it manageable even for users who are not experienced with web server software.

Overall, Nginx is a popular choice for web server software because of its performance, efficiency, scalability, versatility, and ease of use.

Step#1 : How to install Nginx on Ubuntu 20.04 LTS/22.04 LTS?

To install Nginx, run the following commands:

ubuntu@RushiInfotech:~$ sudo apt update
ubuntu@RushiInfotech:~$ sudo apt install nginx -y

Step#2 : Checking your Nginx Web Server

At the end of the installation process, Ubuntu starts Nginx automatically, so the web server should already be up and running.

To check whether the service is active, run:

ubuntu@RushiInfotech:~$ sudo service nginx status
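
If Nginx is active, the output will include a line similar to Active: active (running). The equivalent check with systemctl is:

ubuntu@RushiInfotech:~$ sudo systemctl status nginx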

After installing it, you already have everything you need to serve the default page.

Point your browser at your server’s IP address. You should see the default Nginx welcome page:

If you see this page, you have successfully installed Nginx on your web server.
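
You can also verify from the command line on the server itself; a curl request to localhost should return an HTTP 200 response with a Server: nginx header:

ubuntu@RushiInfotech:~$ curl -I http://localhost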

Step#3 : Creating our own website on Nginx Server

The default page is located in /var/www/html/. You can place your static pages there, or use a virtual host and serve them from another location.

A virtual host is a method of hosting multiple domain names on the same server.

Let’s create a simple HTML page in /var/www/tutorial.devopshint.info/ (the directory name can be anything you want) by creating an index.html file in that location.

ubuntu@RushiInfotech:~$ cd /var/www
ubuntu@RushiInfotech:~$ sudo mkdir tutorial.devopshint.info
ubuntu@RushiInfotech:~$ cd tutorial.devopshint.info
ubuntu@RushiInfotech:~$ sudo "${EDITOR:-vi}" index.html

Paste the following into the index.html file:

<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <title>Hello, Nginx!</title>
</head>
<body>
    <h1>Hello, Nginx!</h1>
    <p>We have just configured our Nginx web server on Ubuntu Server!</p>
</body>
</html>

Save the file. In the next step we are going to set up a virtual host so that Nginx serves pages from this location.

Step#4 : Setting up a Server Block (Virtual Host) on Nginx Server

To set up a virtual host, we need to create a file in the /etc/nginx/sites-enabled/ directory.

For this tutorial, we will keep the site on the standard port 80, which matches the configuration below. You can change the port if you would like to.

ubuntu@RushiInfotech:~$ cd /etc/nginx/sites-enabled
ubuntu@RushiInfotech:~$ sudo "${EDITOR:-vi}" tutorial.devopshint.info
server {
       listen 80;
       listen [::]:80;

       server_name tutorial.devopshint.info;

       root /var/www/tutorial.devopshint.info;
       index index.html;

       location / {
               try_files $uri $uri/ =404;
       }
}

root is the directory where we placed our .html file. index specifies the file to serve when visiting the root directory of the site. server_name can be anything you want for now, because we haven’t pointed it at a real domain yet.

Step#5 : Add a DNS record on GoDaddy

Step#6 : Activating the virtual host and testing the results

To make our site work, simply restart the Nginx service.

ubuntu@RushiInfotech:~$ sudo service nginx restart

Let’s check if everything works as it should. Open the newly created site in a web browser. Remember that we used port 80.

Congratulations! Everything works as it should. We have just configured Nginx web server.

Host Multiple Websites On Nginx Server With Single IP Address

What are Nginx Server Blocks?

Nginx Server Blocks, also known as Virtual Hosts, are a feature that allows a single Nginx server to host multiple websites or applications with different configurations. Each Server Block is defined in Nginx configuration files and specifies how requests to specific domains or subdomains should be handled, making it possible to serve multiple sites from the same server. Server Blocks are essential for efficient web hosting and traffic routing.

Step#1 : Create Document Root Directories

You will create two document root directories, one for each domain.

ubuntu@RushiInfotech:~$ sudo mkdir -p /var/www/tutorial1.devopshint.info/html
ubuntu@RushiInfotech:~$ sudo mkdir -p /var/www/tutorial2.devopshint.info/html

Requests for the first domain will be served from /var/www/tutorial1.devopshint.info/html.

Requests for the second domain will be served from /var/www/tutorial2.devopshint.info/html.

Use the $USER environment variable to assign ownership to the account you are currently signed in as (make sure you’re not logged in as root). This will allow us to easily create or edit content in these directories:

ubuntu@RushiInfotech:/var/www$ sudo chown -R $USER:$USER /var/www/tutorial1.devopshint.info/html

ubuntu@RushiInfotech:/var/www$ sudo chown -R $USER:$USER /var/www/tutorial2.devopshint.info/html

The permissions of our web roots should be correct already if you have not modified your umask value, but we can make sure by typing:

ubuntu@RushiInfotech:/var/www$ sudo chmod -R 755 /var/www
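
You can confirm the ownership and permissions with ls; the owner and group should now match your login user:

ubuntu@RushiInfotech:/var/www$ ls -ld /var/www/tutorial1.devopshint.info/html /var/www/tutorial2.devopshint.info/html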

Our directory structure is now configured and we can move on.

Step#2 : Adjust the Firewall

If you are installing Nginx on your local system, allow traffic on port 80 in ufw; if you are installing on a cloud instance, allow the port in the cloud security group instead.

ubuntu@RushiInfotech:/$ sudo ufw allow 80/tcp

OR

ubuntu@RushiInfotech:/$ sudo ufw app list

You will see a list of application profiles:

Output
Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH

To allow traffic on port 80, enable this by typing:

ubuntu@RushiInfotech:/$ sudo ufw allow 'Nginx HTTP'

You can verify the change by typing:

ubuntu@RushiInfotech:/$ sudo ufw status

The output will indicate which HTTP traffic is allowed:

Output
Status: active
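
If the rule was added, the listing will include entries similar to the following (illustrative; your rules may differ):

To                         Action      From
--                         ------      ----
Nginx HTTP                 ALLOW       Anywhere
Nginx HTTP (v6)            ALLOW       Anywhere (v6)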

Step#3 : Creating index.html files for both sites

Now that we have our directory structure set up, let’s create a default page for each of our sites so that we will have something to display.

Create an index.html file for your first domain:

ubuntu@RushiInfotech:/$ sudo vim /var/www/tutorial1.devopshint.info/html/index.html

Paste the following into /var/www/tutorial1.devopshint.info/html/index.html:

<html>
    <head>
        <title>Welcome to tutorial1.com!</title>
    </head>
    <body>
        <h1>Success! The tutorial1.com server block is working!</h1>
    </body>
</html>

Now create an index.html file for your second domain:

ubuntu@RushiInfotech:/$ sudo vim /var/www/tutorial2.devopshint.info/html/index.html

Paste the following into /var/www/tutorial2.devopshint.info/html/index.html:

<html>
    <head>
        <title>Welcome to tutorial2.com!</title>
    </head>
    <body>
        <h1>Success! The tutorial2.com server block is working!</h1>
    </body>
</html>

Step#4 : Create Server Blocks

Now that we have the content we wish to serve, we need to create the server blocks that will tell Nginx how to do this.

Creating the First Server Block File:-

We will now create our first server block config file:

ubuntu@RushiInfotech:/$ sudo vim /etc/nginx/sites-available/tutorial1.devopshint.info
server {
       listen 80;
       listen [::]:80;

       root /var/www/tutorial1.devopshint.info/html;
       index index.html index.htm;

       server_name tutorial1.devopshint.info;

       location / {
               try_files $uri $uri/ =404;
       }
}

Creating the Second Server Block File:-

Now that we have our initial server block configuration, we can use that as a basis for our second file.

ubuntu@RushiInfotech:/$ sudo vim /etc/nginx/sites-available/tutorial2.devopshint.info
server {
       listen 80;
       listen [::]:80;

       root /var/www/tutorial2.devopshint.info/html;
       index index.html index.htm;

       server_name tutorial2.devopshint.info;

       location / {
               try_files $uri $uri/ =404;
       }
}

Step#5 : Enabling your Server Blocks and Restarting Nginx

Now that we have our server block files, we need to enable them. We can do this by creating symbolic links from these files to the sites-enabled directory, which Nginx reads from during startup.

We can create these links by typing:

ubuntu@RushiInfotech:/$ sudo ln -s /etc/nginx/sites-available/tutorial1.devopshint.info /etc/nginx/sites-enabled/

ubuntu@RushiInfotech:/$ sudo ln -s /etc/nginx/sites-available/tutorial2.devopshint.info /etc/nginx/sites-enabled/

You should now confirm whether all configurations are in order.

ubuntu@RushiInfotech:/$ sudo nginx -t
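
If the configuration is valid, the output will be:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful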

Step#6 : Restart the Nginx service and verify that it is running

ubuntu@RushiInfotech:/$ sudo systemctl restart nginx
ubuntu@RushiInfotech:/$ sudo systemctl status nginx

Step#7 : Test the Nginx server blocks by browsing your domain names in your favorite browser

Add an A record with your domain provider pointing to your VM’s IP address so that you can access your server blocks using the domain names; an example follows.
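
For example, the two A records at your DNS provider might look like this (illustrative values; use your own VM IP):

Type: A    Name: tutorial1    Value: <your-VM-public-IP>    TTL: 600
Type: A    Name: tutorial2    Value: <your-VM-public-IP>    TTL: 600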

Creating the DNS record for our first website:-

Creating the DNS record for our second website:-

Step#8 : Testing Your Results on browser

Now that you are all set up, you should test that your server blocks are functioning correctly. You can do that by visiting the domains in your web browser:

http://tutorial1.devopshint.info/

You should see a page that looks like this:

If you visit your second domain name, you should see a slightly different site:

http://tutorial2.devopshint.info/

If both of these sites work, you have successfully configured two independent server blocks with Nginx.

Nginx Configuration files and Directories

Below are the important Nginx configuration files and directories you should know.

Default Nginx web content

  • /var/www/html: When Nginx is installed, the /var/www/html directory is created by default, and the default page you see in the browser is served from this directory.

Nginx Server configuration files

  • /etc/nginx: the Nginx configuration directory. All of the Nginx configuration files reside here.
  • /etc/nginx/nginx.conf: the global Nginx configuration file. Changes made here apply to everything Nginx serves.
  • /etc/nginx/sites-available/: holds the virtual host (server block) files. To enable a website, you create a symbolic link inside the /etc/nginx/sites-enabled directory pointing to the actual vhost file in /etc/nginx/sites-available. The nginx.conf file includes the contents of the sites-enabled directory, which determines which virtual hosts are active.
  • /etc/nginx/sites-enabled/: contains symlinks to the site configuration files located in /etc/nginx/sites-available/.
  • /etc/nginx/snippets: holds reusable configuration fragments that can be included from other Nginx configuration files, as shown below.
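
For example, you can put shared directives into a snippet file and pull it into any server block with the include directive (the snippet name below is purely illustrative):

# /etc/nginx/snippets/common-headers.conf  (hypothetical snippet)
add_header X-Frame-Options SAMEORIGIN;

# inside a server block in /etc/nginx/sites-available/
include snippets/common-headers.conf;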

Basic Nginx Troubleshooting

Troubleshooting Nginx on Ubuntu involves identifying and resolving common issues that may arise when configuring or running the web server. Here are some basic steps you can follow for Nginx troubleshooting on Ubuntu:

Check Nginx Service Status:

Verify if Nginx is running by using the following command:

ubuntu@RushiInfotech:~$ sudo service nginx status

If Nginx is not running, you can start it with:

ubuntu@RushiInfotech:~$ sudo service nginx start

Syntax Check:

Ensure the Nginx configuration files contain no syntax errors by running:

ubuntu@RushiInfotech:~$ sudo nginx -t

Alternatively, you can check and start the service with systemctl:

ubuntu@RushiInfotech:~$ sudo systemctl status nginx

If Nginx is not running, you can start it with:

ubuntu@RushiInfotech:~$ sudo systemctl start nginx

Log Files:

Examine Nginx log files for error messages:

ubuntu@RushiInfotech:~$ sudo tail -f /var/log/nginx/error.log 

Access logs can also be helpful for diagnosing issues:

ubuntu@RushiInfotech:~$ sudo tail -f /var/log/nginx/access.log 

Firewall and Ports:

Ensure that your firewall allows traffic on the necessary ports (usually 80 and 443 for HTTP and HTTPS, respectively):

ubuntu@RushiInfotech:~$ sudo ufw allow 'Nginx Full'

Check Website Configuration:

Verify the configuration of your website or server blocks in /etc/nginx/sites-available/ to ensure it’s correctly set up.

Reload or Restart Nginx:

After making changes to the Nginx configuration, reload the configuration to apply changes without stopping the server:

ubuntu@RushiInfotech:~$ sudo service nginx reload

If reloading doesn’t work, you can restart Nginx:

ubuntu@RushiInfotech:~$ sudo service nginx restart

Check for Port Conflicts:

Make sure there are no other services running on ports 80 or 443 that could conflict with Nginx.

Test from a Browser:

Try accessing your website from a web browser and check for any issues or error messages.

NGINX Logs

Logs in Nginx

By default, NGINX writes its events to two types of logs – the error log and the access log. In most popular Linux distributions such as Ubuntu, CentOS, or Debian, both the access and error log can be found in /var/log/nginx, assuming you have enabled them in the core NGINX configuration file. Let us find out more about the NGINX access log and error log, and how to enable them if you have not done so already.

What is NGINX access log?

The access log records details about every HTTP request made to the Nginx server. This log includes information like the client’s IP address, the timestamp of the request, the requested URL, HTTP status code, and more.

Here are some important NGINX access log fields you should be aware of:

  • remote_addr: The IP address of the client that requested the resource
  • http_user_agent: The user agent in use that sent the request
  • time_local: The local date and time when the request was received, including the server’s time zone offset
  • request: What resource was requested by the client (an API path or any file)
  • status: The status code of the response
  • body_bytes_sent: The size of the response in bytes
  • request_time: The total time spent processing the request
  • remote_user: Information about the user making the request
  • http_referer: The URL of the page that referred the request (the Referer header)
  • gzip_ratio: The compression ratio of gzip, if gzip is enabled
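
Most of these fields appear in NGINX’s predefined combined log format, which the default access log uses. A typical access log line looks something like this (values are illustrative):

log_format combined '$remote_addr - $remote_user [$time_local] '
                    '"$request" $status $body_bytes_sent '
                    '"$http_referer" "$http_user_agent"';

203.0.113.5 - - [10/Oct/2023:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "Mozilla/5.0"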

What is NGINX error log?

The error log records information about issues, errors, and problems that occur within the Nginx server. This includes errors related to configuration, server errors (e.g., 500 Internal Server Error), and other unexpected events.

NGINX Error Log Levels

NGINX has eight log levels for different degrees of severity and verbosity:

  1. emerg: These are the emergency logs. They mean that the system is unusable.
  2. alert: An immediate action is required.
  3. crit: A critical condition occurred.
  4. error: An error or failure occurred while processing a request.
  5. warn: There was an unexpected event, or something needs to be fixed, but NGINX fulfilled the request as expected.
  6. notice: Something normal, but important, has happened, and it needs to be noted.
  7. info: These are messages that give you information about the process.
  8. debug: These are messages that help with debugging and troubleshooting. They are generally not enabled unless needed because they create a lot of noise.
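
The minimum severity that gets written to the error log is set with the error_log directive; for example, to log warnings and anything more severe:

error_log /var/log/nginx/error.log warn;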

7 Tips to Optimize Nginx Performance

Tip#1: Keep Your Nginx Updated

Keeping your Nginx updated is one of the most straightforward ways to boost its performance. The Nginx team regularly releases updates that include performance improvements, new features, and security patches. By keeping your Nginx updated, you can take advantage of these enhancements and ensure that your server is protected against known security vulnerabilities.

First, update the package list to ensure you have the latest version information. Open your terminal and type:

ubuntu@RushiInfotech:~$ sudo apt update

Once the package list is updated, you can upgrade Nginx by typing:

ubuntu@RushiInfotech:~$ sudo apt install --only-upgrade nginx

Tip#2: Enable Gzip Compression

Gzip compression is a method of compressing files for faster network transfers. It is particularly effective for improving the performance of a website because it reduces the size of HTML, CSS, and JavaScript files. This can significantly speed up data transfer, especially for clients with slow network connections.

To enable Gzip compression in Nginx, you need to modify the Nginx configuration file. Here’s how you can do it:

Open the Nginx configuration file in a text editor. The default location of the configuration file is /etc/nginx/nginx.conf. You can open it with the nano text editor by typing:

ubuntu@RushiInfotech:~$ sudo nano /etc/nginx/nginx.conf

In the http block, add the following lines to enable Gzip compression:

gzip on;
gzip_vary on;
gzip_proxied any;
gzip_comp_level 6;
gzip_buffers 16 8k;
gzip_http_version 1.1;
gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

These lines do the following:

  • gzip on; enables Gzip compression.
  • gzip_vary on; adds the Vary: Accept-Encoding header so proxies cache both gzipped and regular versions of a resource.
  • gzip_proxied any; compresses responses for all proxied requests.
  • gzip_comp_level 6; sets the compression level (1 is fastest, 9 compresses the most).
  • gzip_buffers 16 8k; sets the number and size of the buffers used for compression.
  • gzip_http_version 1.1; compresses responses only for HTTP/1.1 (and later) requests.
  • gzip_types lists the additional MIME types to compress (text/html is always compressed when gzip is on).

Save and close the file. If you’re using nano, you can do this by pressing Ctrl + X, then Y, then Enter.

Test the configuration to make sure there are no syntax errors:

ubuntu@RushiInfotech:~$ sudo nginx -t

If the test is successful, reload the Nginx configuration:

ubuntu@RushiInfotech:~$ sudo systemctl reload nginx

Now, Gzip compression is enabled on your Nginx server. This should help to reduce the size of the data that Nginx sends to clients, speeding up your website or application.
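
You can check that compression is working with curl: request a page with an Accept-Encoding: gzip header and look for Content-Encoding: gzip in the response headers (localhost is used here as an example target):

ubuntu@RushiInfotech:~$ curl -s -H "Accept-Encoding: gzip" -D - -o /dev/null http://localhost/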

Tip#3: Enable the Open File Cache

Caching is a technique that stores data in a temporary storage area (cache) so that future requests for that data can be served faster. Nginx can cache responses from your application servers and serve them to clients, which can significantly reduce the load on your application servers and speed up response times.

Virtually everything is a file in Linux, and when you use open_file_cache, file descriptors and metadata for frequently accessed files are cached by the server. Serving static HTML files with the open file cache improves NGINX performance, because this information is kept in memory for a specified period of time.

To start caching, enter this into the http area:

http {
    open_file_cache max=1024 inactive=10s;
    open_file_cache_valid 60s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;
}

Tip#4: Configure Caching Properly

Here’s how you can configure caching in Nginx:

Open the Nginx configuration file in a text editor. The default location of the configuration file is /etc/nginx/nginx.conf. You can open it with the nano text editor by typing:

ubuntu@RushiInfotech:~$ sudo nano /etc/nginx/nginx.conf

In the http block, add the following lines to set up a cache:

proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m use_temp_path=off;

These lines do the following:

  • proxy_cache_path /var/cache/nginx sets the path on your file system where the cache will be stored.
  • levels=1:2 sets the levels parameter to define the hierarchy levels of a cache.
  • keys_zone=my_cache:10m creates a shared memory zone named my_cache that will store the cache keys and metadata, such as usage times. A 10 MB zone can hold roughly 80,000 keys (about 8,000 keys per megabyte).
  • max_size=1g sets the maximum size of the cache.
  • inactive=60m sets how long an item can remain in the cache without being accessed.
  • use_temp_path=off tells Nginx to write cached files directly to the cache directory instead of staging them in a temporary area first, which avoids unnecessary copying.

In the server block, add the following lines to enable caching:

location / {
    proxy_cache my_cache;
    proxy_pass http://test.devopshint.info;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
}

These lines do the following:

  • proxy_cache my_cache; enables caching and specifies the shared memory zone.
  • proxy_pass http://test.devopshint.info; sets the protocol and address of the proxied backend server.
  • proxy_cache_valid 200 302 60m; sets the cache time for 200 and 302 responses to 60 minutes.
  • proxy_cache_valid 404 1m; sets the cache time for 404 responses to 1 minute.

Save and close the file. If you’re using nano, you can do this by pressing Ctrl + X, then Y, then Enter.

Test the configuration to make sure there are no syntax errors:

ubuntu@RushiInfotech:~$ sudo nginx -t

If the test is successful, reload the Nginx configuration:

ubuntu@RushiInfotech:~$ sudo systemctl reload nginx

Now, caching is properly configured on your Nginx server. This should help to reduce the load on your application servers and speed up response times.
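
To confirm that responses are actually being cached, a common technique (an optional addition, not part of the configuration above) is to expose the cache status in a response header via the $upstream_cache_status variable and inspect it with curl:

location / {
    proxy_cache my_cache;
    proxy_pass http://test.devopshint.info;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404 1m;
    add_header X-Cache-Status $upstream_cache_status;   # MISS on the first request, HIT once cached
}

ubuntu@RushiInfotech:~$ curl -I http://test.devopshint.info/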

Tip#5: Optimize Worker Processes and Connections

Nginx uses worker processes to handle client requests. Each worker can handle a limited number of connections, and the total capacity of the server is determined by the number of workers and the number of connections each worker can handle. Optimizing these settings can have a significant impact on Nginx’s performance.

Here’s how you can optimize worker processes and connections in Nginx:

Open the Nginx configuration file in a text editor. The default location of the configuration file is /etc/nginx/nginx.conf. You can open it with the nano text editor by typing:

ubuntu@RushiInfotech:~$ sudo nano /etc/nginx/nginx.conf

Look for the worker_processes directive. This directive sets the number of worker processes. The optimal value depends on many factors, including the number of CPU cores, the number of hard disk drives, and the load pattern. As a starting point, you can set worker_processes to the number of CPU cores. If this directive is not present, add it at the top level of the file, in the main context (outside the events and http blocks):

worker_processes auto;

The auto value will automatically set the number of worker processes to the number of CPU cores.

Look for the worker_connections directive. This directive sets the maximum number of simultaneous connections that can be opened by a worker process. A good starting point is 1024, but you can increase this value if you expect a high number of simultaneous connections. If this directive is not present, add it inside the events block:

worker_connections 1024;

Save and close the file. If you’re using nano, you can do this by pressing Ctrl + X, then Y, then Enter.

and then test the configuration.
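
Putting the two directives together, the relevant part of /etc/nginx/nginx.conf looks roughly like this (a sketch; your file will contain other directives as well):

worker_processes auto;          # main context: one worker per CPU core

events {
    worker_connections 1024;    # events context: max connections per worker
}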

Tip#6: Change the size of the Buffers

Nginx buffer sizes also matter for performance tuning. If the buffers are too small, Nginx has to write to a temporary file, causing constant heavy disk I/O. To prevent this, set the buffer sizes appropriately.

The following are the parameters that need to be adjusted inside the /etc/nginx/nginx.conf file for optimum performance:

http {
    client_body_buffer_size 10K;
    client_header_buffer_size 1k;
    client_max_body_size 8m;
    large_client_header_buffers 4 4k;
}
  1. client_body_buffer_size – Sets the buffer size for reading the client request body. If the request body is larger than the buffer, the whole body or only part of it is written to a temporary file.
  2. client_header_buffer_size – Sets the buffer size for reading the client request header.
  3. client_max_body_size – Sets the maximum allowed size of the client request body, as specified in the “Content-Length” request header field. If the size of a request exceeds the configured value, the 413 (Request Entity Too Large) error is returned to the client.
  4. large_client_header_buffers – Sets the maximum number and size of buffers for large client headers. A request line cannot exceed the size of one buffer, or the 414 (Request-URI Too Large) error is returned to the client.

With the values above, Nginx should perform well, but for further optimization you can tweak the values and test the performance.

Tip#7: Reducing Timeouts

Tuning timeouts can also improve Nginx performance considerably. In addition, keepalive connections reduce the CPU and network overhead of repeatedly opening and closing connections.

The following are the parameters that need to be adjusted inside the /etc/nginx/nginx.conf file for optimum performance:

http {
    client_body_timeout 12;
    client_header_timeout 12;
    send_timeout 10;
}
  • client_header_timeout – Defines a timeout for reading the client request header. If a client does not transmit the entire header within this time, the request is terminated with the 408 (Request Time-out) error.
  • client_body_timeout  – Defines a timeout for reading the client request body. The timeout is set only for a period between two successive read operations, not for the transmission of the whole request body. If a client does not transmit anything within this time, the request is terminated with the 408 (Request Time-out) error.
  • send_timeout – Sets a timeout for transmitting a response to the client. If the client does not receive anything from the server within this time, the connection is closed.
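
Since keepalive connections are mentioned above, you can also tune how long idle keepalive connections stay open and how many requests each may serve. These directives go in the http block; the values below are illustrative, not recommendations:

http {
    keepalive_timeout 15;       # close idle keepalive connections after 15 seconds
    keepalive_requests 100;     # maximum requests served over one keepalive connection
}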

Secure Nginx with Let’s Encrypt

Let’s Encrypt is a Certificate Authority (CA) that provides an easy way to obtain and install free TLS/SSL certificates, thereby enabling encrypted HTTPS on web servers. It simplifies the process by providing a software client, Certbot, that attempts to automate most (if not all) of the required steps. Currently, the entire process of obtaining and installing a certificate is fully automated on both Apache and Nginx.

Step#1 : Installing Certbot

The first step to using Let’s Encrypt to obtain an SSL certificate is to install the Certbot software on your server.

Install Certbot and its Nginx plugin with apt:

ubuntu@RushiInfotech:~$ sudo apt install certbot python3-certbot-nginx

Certbot is now ready to use, but in order for it to automatically configure SSL for Nginx, we need to verify some of Nginx’s configuration.

Step#2 : Confirming Nginx Configuration

Certbot needs to be able to find the correct server block in your Nginx configuration for it to be able to automatically configure SSL. Specifically, it does this by looking for a server_name directive that matches the domain you request a certificate for.

If you followed the server block set up step in the Nginx installation tutorial, you should have a server block for your domain at /etc/nginx/sites-available/test.devopshint.info with the server_name directive already set appropriately.

To check, open the configuration file for your domain using nano or your favorite text editor:

ubuntu@RushiInfotech:~$ sudo vim /etc/nginx/sites-available/test.devopshint.info

Step#3 : Adjust the Firewall

If you are installing Nginx on your local system, allow traffic on port 80 in ufw; if you are installing on a cloud instance, allow the port in the cloud security group instead.

ubuntu@RushiInfotech:/$ sudo ufw allow 80/tcp

OR

ubuntu@RushiInfotech:/$ sudo ufw app list

You will see a list of application profiles:

Output
Available applications:
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
  OpenSSH

To allow traffic on port 80, enable this by typing:

ubuntu@RushiInfotech:/$ sudo ufw allow 'Nginx HTTP'

You can verify the change by typing:

ubuntu@RushiInfotech:/$ sudo ufw status

The output will indicate which HTTP traffic is allowed:

Output
Status: active

Step#4 : Obtaining an SSL Certificate

Certbot provides a variety of ways to obtain SSL certificates through plugins. The Nginx plugin will take care of reconfiguring Nginx and reloading the config whenever necessary. To use this plugin, type the following:

ubuntu@RushiInfotech:/$ sudo certbot --nginx -d test.devopshint.info

This runs certbot with the --nginx plugin, using -d to specify the domain name we’d like the certificate to be valid for (pass -d multiple times to cover additional names).

If this is your first time running certbot, you will be prompted to enter an email address and agree to the terms of service. After doing so, certbot will communicate with the Let’s Encrypt server, then run a challenge to verify that you control the domain you’re requesting a certificate for.

If that’s successful, certbot will ask how you’d like to configure your HTTPS settings.

Output
Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel):

Select your choice then hit ENTER. The configuration will be updated, and Nginx will reload to pick up the new settings. certbot will wrap up with a message telling you the process was successful and where your certificates are stored:

Output
IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/test.devopshint.info/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/test.devopshint.info/privkey.pem
   Your cert will expire on 2023-10-18. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Your certificates are downloaded, installed, and loaded. Try reloading your website using https:// and notice your browser’s security indicator. It should indicate that the site is properly secured, usually with a lock icon. If you test your server using the SSL Labs Server Test, it will get an A grade.

Step#5 : Verifying Certbot Auto-Renewal

Let’s Encrypt’s certificates are only valid for ninety days. This is to encourage users to automate their certificate renewal process. The certbot package we installed takes care of this for us by adding a systemd timer that will run twice a day and automatically renew any certificate that’s within thirty days of expiration.

You can query the status of the timer with systemctl:

ubuntu@RushiInfotech:/$ sudo systemctl status certbot.timer

To test the renewal process, you can do a dry run with certbot:

ubuntu@RushiInfotech:/$ sudo certbot renew --dry-run

If you see no errors, you’re all set. When necessary, Certbot will renew your certificates and reload Nginx to pick up the changes. If the automated renewal process ever fails, Let’s Encrypt will send a message to the email you specified, warning you when your certificate is about to expire.

Deploy Node.js Application for Production on Nginx Server on Ubuntu 20.04/22.04 LTS

What is Nodejs ?

Node.js: Node.js is a runtime environment that allows developers to execute JavaScript on the server-side. It’s known for its non-blocking, event-driven architecture, making it highly efficient for building scalable and real-time applications. Node.js is commonly used for web servers, APIs, and applications that require high concurrency and low latency.

What is PM2?

PM2 is an advanced daemon process manager that will help us manage and keep our application online. We will use it to keep our server up and running 24/7.

Step#1 – Install Nodejs on Ubuntu 20.04 LTS

Let’s add the Node.js repository from NodeSource. NodeSource is a company that provides enterprise-grade Node support and maintains a repository containing the latest versions of Node.js.

ubuntu@RushiInfotech:~$ curl -sL https://deb.nodesource.com/setup_14.x | sudo -E bash -

Let’s install Node.js now (the NodeSource nodejs package already includes npm, so a separate npm install is not needed):

ubuntu@RushiInfotech:~$ sudo apt-get install -y nodejs

Check the installation of node and npm using the following command:

ubuntu@RushiInfotech:~$ node --version && npm --version

You will see the versions of nodejs and npm.

Step#2 – Creating a sample nodejs App

Let’s create a sample app and paste some basic code into it.

ubuntu@RushiInfotech:~$ sudo vi app.js

Paste the following code inside it:

const express = require('express')
const app = express()
const port = 3000

app.get('/', (req, res) => {
  res.send('Hello World!')
})

app.listen(port, () => {
  console.log(`Example app listening at http://localhost:${port}`)
})

Let’s now install Express so that we can run this app server:

ubuntu@RushiInfotech:~$ sudo npm install express

Run the application:

ubuntu@RushiInfotech:~$ sudo node app.js

You should now be able to see the Hello World page when you visit http://<server-ip>:3000

Step#3 – Using pm2 as a process manager

Let’s install and use pm2 as a process manager. Install pm2 using the commands below:

ubuntu@RushiInfotech:~$ sudo npm i pm2 -g

Start the application using the following command:

ubuntu@RushiInfotech:~$ pm2 start app.js
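
A few other standard pm2 subcommands are useful here (optional, shown for completeness):

ubuntu@RushiInfotech:~$ pm2 status      # list processes managed by pm2
ubuntu@RushiInfotech:~$ pm2 logs app    # stream logs for the app process
ubuntu@RushiInfotech:~$ pm2 startup     # generate a boot startup script
ubuntu@RushiInfotech:~$ pm2 save        # save the current process list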

Step#4 – Configuring Nginx as a reverse proxy

What is reverse proxy?

A reverse proxy is a server or software that stands between client requests and backend servers, routing requests to the appropriate server. It enhances security, load balancing, and performance optimization.

Now let’s configure Nginx as a reverse proxy. This will help us get the security features from Nginx. Also, we can serve static content using Nginx.

Let’s install Nginx using the following command:

ubuntu@RushiInfotech:~$ sudo apt install nginx

Let’s create a conf file for our Nodejs app using the command below:

ubuntu@RushiInfotech:~$ sudo vi /etc/nginx/sites-available/nodeApp

Copy the following content into this file:

server {
  server_name 18.144.31.107;

      location / {
        proxy_pass http://localhost:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Activate this configuration using the command below:

ubuntu@RushiInfotech:~$ sudo ln -s /etc/nginx/sites-available/nodeApp /etc/nginx/sites-enabled
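
Before browsing to the app, test the configuration and reload Nginx so the new server block takes effect (the same commands used earlier in this guide):

ubuntu@RushiInfotech:~$ sudo nginx -t
ubuntu@RushiInfotech:~$ sudo systemctl reload nginx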

Visit http://your-ip/ and your application should work fine. Happy coding!

Step#5 – Access Nodejs App on Browser

Conclusion:

This Nginx tutorial for Ubuntu 20.04/22.04 LTS provides a concise yet comprehensive guide to installing, configuring, and optimizing Nginx, empowering users to host websites and applications securely and efficiently.

Reference:

For reference, visit the official Nginx documentation.

For any queries, please contact us at Rushi Infotech.
