Saturday, December 28, 2024

Master IAM Authentication: Lambda, API Gateway, and Signature V4 in Action

Authentication and Authorization are two very important pillars of web security. Authentication identifies the user: it verifies that the user is who they claim to be. Authorization determines what the user can do, that is, what access they have.

Here, I am going to discuss Authorization. In AWS, authorization can be implemented in three ways:

  1. IAM Auth
  2. Lambda authorizer
  3. Token authorizer
In this post, I am mainly discussing IAM auth. You can go through my YouTube video below to get a detailed idea.

My Youtube Video Link: https://youtu.be/QKouBNm_-BA


I have created a YouTube video on IAM auth. In it, I show Lambda function creation, API creation, route creation, attaching routes to their corresponding Lambda functions, associating IAM auth with the routes, user group creation, user creation, policy creation, and passing a Signature Version 4 token in Postman to test the authorization of these routes. Below are the steps:

STEPS:

  1. Create two lambda functions for handling two routes
  2. In API Gateway - Create two routes like /helloadmin and /hellouser with GET method
  3. Attach both routes with corresponding lambda functions
  4. Attach each route with IAM auth for authorization
  5. In IAM - Create two groups say Admin and Client
  6. Create two users named admin_user and client_user under the corresponding groups
  7. Create an inline policy for each group:
            The admin_user should be able to access only /helloadmin, NOT /hellouser
            The client_user should be able to access only /hellouser, NOT /helloadmin

  8. Add the Authorization header in Postman [Signature V4]

            a) Find the access key, secret key, and session token of both users
            b) Pass these values in the Authorization tab in Postman

  9. After testing, delete all policies, users, groups, API Gateway APIs, and Lambda functions

Please go through the above video, where I have shown each step in detail. It is very informative.







Sunday, November 3, 2024

PM2 vs Kubernetes: Do You Really Need Both for Your Node.js App?

Today, I thought to discuss with you whether we really need to use PM2 in a Node.js application along with Kubernetes. You can also go through the recorded video below on the same topic on my YouTube channel:

Video link: https://youtu.be/dcJ5oBCEZ8I?si=vxo0xtqdIjbHz0LY


PM2 provides features like
  • Process management and monitoring
  • Automatic restart on failure
  • Load balancing
  • Graceful shutdown
  • Real-time metrics and logging
All these features are also provided by Kubernetes, along with many more. So, we can say that Kubernetes is a kind of superset of PM2.

Now, it can be said that we can use PM2 inside a particular pod to get the above-mentioned features before Kubernetes steps in. That is true, but isn't it a kind of feature redundancy?

Again, it can be said that PM2 can be used to start multiple processes in a pod to perform CPU-intensive work. But here again, we can use the thread pool mechanism in Node.js to handle it.

So, on the whole, I am not able to find real value in using PM2 in a Node.js application along with Kubernetes. But yes, in the absence of Kubernetes, PM2 is a really important and lightweight process manager. This is my opinion. What are your thoughts? Please mention them in the comment box.

Tuesday, October 1, 2024

Master PM2: Ultimate Guide to Node.js Process Management & Optimization

Process management is a very important aspect of any application configuration. There are several process management libraries; PM2 is one that is widely used. Here, I am showing how to use it on a Windows system, along with its different commands.

IMPORTANT NOTE: You can get full details in my YouTube video below. Here is the link: https://youtu.be/mwGV0thGBfI?si=mC7hdFg--qmaA-XT


Benefits of Process Management libraries:

  • Automatic Restart: If your application crashes, PM2 automatically restarts it, ensuring minimal downtime.
  • Load balancing: PM2 can run your application in cluster mode, leveraging multi-core processors for better performance and scalability.
  • Process monitoring: It tracks resource consumption (CPU, memory) and provides insights through monitoring tools.
  • Log Management: PM2 aggregates logs from multiple instances of your application, providing centralized logging and an easy way to debug issues. It also allows you to view real-time logs and can be configured to send logs to external log services.
  • Performance Optimization: PM2 allows you to run multiple instances of your application (in cluster mode), distributing the load across multiple CPU cores, improving application throughput.
  • Environment Management: PM2 supports different environments (development, production) and allows you to define different configurations for each environment.
  • Memory Management: PM2 provides options to set memory limits for your applications. When memory consumption exceeds the limit, it will automatically restart the app, preventing memory leaks from causing issues.
  • Real-time Monitoring Dashboard: PM2 offers a real-time web dashboard to monitor application status, CPU, memory, and other metrics. This helps in identifying performance bottlenecks or issues in real time.
PM2 Installation: We can install PM2 globally using the npm command below:
                    npm install pm2@latest -g

Start an application using PM2: We can use the command below to start our Node.js application named Test1
 pm2 start C:\Personal\Jitendra\workspace\youtube\youtubeCode\app.js --name Test1


Similarly, we can start another Node.js application using PM2 on a different port:
pm2 start C:\Personal\Jitendra\workspace\youtube\youtubeCode2\app.js --name Test2

List all applications: We can list all the applications, along with their different instances, running under PM2.

pm2 list


Monitor applications: We can use the "pm2 monit" command to open a monitoring window showing the real-time metrics of the applications running under PM2



Watch logs: We can use the command below to get the real-time logs of our applications running under PM2:

                                pm2 logs


Stop applications: We can use the commands below to stop applications running under PM2
                                pm2 stop Test1 - stops only the application named Test1
                                pm2 stop all - stops all applications running under PM2


Passing environment variables: We can pass secure data to our application at startup time in PM2 through environment variables, using a command like the one below:

set PORT=3005 && pm2 start C:\Personal\Jitendra\workspace\youtube\youtubeCode\app.js --name Test1

Here, I am passing the PORT value as an environment variable at startup time.

Ecosystem: We can add multiple options to the start command for different purposes, like instance handling, log handling, and load balancing, and thus the start command becomes very complex. To solve this problem, we can use an ecosystem config file. We can automatically generate a file named "ecosystem.config.js" using the command "pm2 ecosystem" and then modify the newly created file as per the application's requirements:

module.exports = {
  apps : [{
    name: 'Test2',
    script: './app.js',
    env: {
      NODE_ENV: 'development',
      PORT: 3006
    },
    watch: '.'
  }],

  deploy : {
    production : {
      user : 'SSH_USERNAME',
      host : 'SSH_HOSTMACHINE',
      ref  : 'origin/master',
      repo : 'GIT_REPOSITORY',
      path : 'DESTINATION_PATH',
      'pre-deploy-local': '',
      'post-deploy' : 'npm install && pm2 reload ecosystem.config.js --env production',
      'pre-setup': ''
    }
  }
};


In this file, we are passing the port and NODE_ENV as environment variables. Similarly, we can pass other variables and config parameters.
To start the application using ecosystem.config.js, we use the command "pm2 start ecosystem.config.js"


Similarly, if we want to run another application using an ecosystem.config.js file, we would have to create a separate ecosystem.config.js file and run the command "pm2 start ecosystem.config.js" against that file.

But what if we have 100 microservices? We would then have to create 100 such ecosystem.config.js files, and that becomes unmanageable.

To solve this issue, we can create a single ecosystem.config.js file, define all the required settings for each microservice in it, and when we start it through PM2, all the microservices will start. An example of such an ecosystem.config.js file is as follows:

module.exports = {
    apps: [
        {
            name: 'Test1',
            script: '../youtube/youtubeCode/app.js',
            instances: 2,
            max_memory_restart: "300M",
            // Logging
            out_file: "./test1.out.log",
            error_file: "./test1.error.log",
            merge_logs: true,
            log_date_format: "DD-MM HH:mm:ss Z",
            log_type: "json",
            watch: true,
            ignore_watch: [
                "../youtube/youtubeCode/node_modules",
                "../youtube/youtubeCode/package.json"
            ],
            // Env Specific Config          
            env: {
                NODE_ENV: 'development',
                PORT: 3005
            }
        },
        {
            name: 'Test2',
            script: '../youtube/youtubeCode2/app.js',
            instances: "1",
            exec_mode: "cluster",
            out_file: "./test2.out.log",
            error_file: "./test2.error.log",
            watch: true,
            ignore_watch: [
                "../youtube/youtubeCode2/node_modules",
                "../youtube/youtubeCode2/package.json"
            ],
            env: {
                NODE_ENV: 'development',
                PORT: 3006
            }
        }
    ],

    deploy: {
        production: {
            user: 'SSH_USERNAME',
            host: 'SSH_HOSTMACHINE',
            ref: 'origin/master',
            repo: 'GIT_REPOSITORY',
            path: 'DESTINATION_PATH',
            'pre-deploy-local': '',
            'post-deploy': 'npm install && pm2 reload ecosystem.config.js --env production',
            'pre-setup': ''
        }
    }
};


Please ping me in the comment section if you have any questions or suggestions.


Friday, June 14, 2024

Install & configure NGINX on Windows | Steps of NGINX setup on Windows

Whenever we search for the installation and configuration of NGINX on the Windows operating system, we generally find examples for Linux or Ubuntu machines. So, here I am going to discuss the installation and configuration of NGINX on a Windows machine. Along with it, I will also discuss how multiple applications (running on different ports on localhost) can be configured behind NGINX.

I have also created a YouTube video on this topic. You can go through the video to get a better idea.

NOTE: Link of the video: https://youtu.be/h-NWXxyrkJ0?si=hr-_PZ29B_vbSfkC


STEPS of NGINX Installation:

1) Open the NGINX download page at the URL below and download the zip file of the latest stable version of NGINX.
URL: https://nginx.org/en/download.html



2) Unzip the file and extract the folder wherever you want to keep your NGINX folder


3) Open CMD in Administrator mode and move to the NGINX folder. In my case it is "C:\nginx-1.27.0". Then run the command "start nginx". Open a browser, type "http://localhost:80", and press Enter. If the NGINX welcome page appears in the browser, NGINX is running properly.




We can run a few more NGINX commands as per our needs:

                                nginx -s stop - fast shutdown
                                nginx -s quit - graceful shutdown
                                nginx -s reload - reload the configuration file
                                nginx -t - test the configuration file


Steps of multiple application configuration on NGINX

Suppose we have two Node.js applications running on our localhost.
  1. localhost:3000/helloworld
  2. localhost:3002/
Here, if we open these two applications in a browser, we get responses. Now, we want to configure both applications behind NGINX. After updating the configuration, we have to start both applications separately and then reload the NGINX server. After this, we can access both applications through NGINX on localhost on our Windows machine.

Steps:

1) Open the NGINX configuration file, nginx.conf, in Notepad or any editor:


2) Add the following configuration code inside the http block in the nginx.conf file

upstream test_app {
        server 127.0.0.1:3000;
    }

    upstream dev_app {
        server 127.0.0.1:3002;
    }

3) Add the following code in the same nginx.conf file inside the server block under http

location /test/ {
            proxy_pass http://test_app/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /dev/ {
            proxy_pass http://dev_app/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

Below is the complete nginx.conf file.

#user  nobody;
worker_processes  1;

#error_log  logs/error.log;
#error_log  logs/error.log  notice;
#error_log  logs/error.log  info;

#pid        logs/nginx.pid;


events {
    worker_connections  1024;
}
   
http {
    include       mime.types;
    default_type  application/octet-stream;

    #log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
    #                  '$status $body_bytes_sent "$http_referer" '
    #                  '"$http_user_agent" "$http_x_forwarded_for"';

    #access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  65;

    #gzip  on;
    upstream test_app {
        server 127.0.0.1:3000;
    }

    upstream dev_app {
        server 127.0.0.1:3002;
    }
    server {
        listen       80;
        server_name  localhost;

        #charset koi8-r;

        #access_log  logs/host.access.log  main;

        location / {
            root   html;
            index  index.html index.htm;
        }
       
        location /test/ {
            proxy_pass http://test_app/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        location /dev/ {
            proxy_pass http://dev_app/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }

        #error_page  404              /404.html;

        # redirect server error pages to the static page /50x.html
        #
        error_page   500 502 503 504  /50x.html;
        location = /50x.html {
            root   html;
        }

        # proxy the PHP scripts to Apache listening on 127.0.0.1:80
        #
        #location ~ \.php$ {
        #    proxy_pass   http://127.0.0.1;
        #}

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        #
        #location ~ \.php$ {
        #    root           html;
        #    fastcgi_pass   127.0.0.1:9000;
        #    fastcgi_index  index.php;
        #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
        #    include        fastcgi_params;
        #}

        # deny access to .htaccess files, if Apache's document root
        # concurs with nginx's one
        #
        #location ~ /\.ht {
        #    deny  all;
        #}
    }


    # another virtual host using mix of IP-, name-, and port-based configuration
    #
    #server {
    #    listen       8000;
    #    listen       somename:8080;
    #    server_name  somename  alias  another.alias;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}


    # HTTPS server
    #
    #server {
    #    listen       443 ssl;
    #    server_name  localhost;

    #    ssl_certificate      cert.pem;
    #    ssl_certificate_key  cert.key;

    #    ssl_session_cache    shared:SSL:1m;
    #    ssl_session_timeout  5m;

    #    ssl_ciphers  HIGH:!aNULL:!MD5;
    #    ssl_prefer_server_ciphers  on;

    #    location / {
    #        root   html;
    #        index  index.html index.htm;
    #    }
    #}

}



4) Test the changes to the nginx.conf file with the command "nginx -t" in CMD. If the command reports success, all is good; otherwise there is a syntax error that should be rectified before starting the NGINX server


5) Reload the NGINX server configuration with the command "nginx -s reload"
  

6) Now, access both applications in the browser with URLs like:
  1. Use http://localhost/test/helloworld instead of localhost:3000/helloworld
  2. Use http://localhost/dev/ instead of localhost:3002/





Tuesday, May 28, 2024

The Truth About Node.js: Single-Threaded or Multi-Threaded?

Node.js is often described as single-threaded, which can be somewhat misleading. This description primarily refers to the JavaScript execution model. Node.js uses an event-driven, non-blocking I/O model, which allows it to handle many concurrent connections efficiently with a single main thread. However, this doesn't mean that Node.js can't take advantage of multiple threads. Under the hood, Node.js uses a thread pool for certain operations, and with the introduction of worker threads, Node.js can execute JavaScript code in multiple threads.

NOTE: I have described this very clearly in my YouTube video below. The link of the video is https://youtu.be/8IDW3OQ_blQ?si=XCmM3x45nt1FRgj-


Understanding the Single-Threaded Nature

Node.js runs JavaScript code in a single thread. This means that all JavaScript code execution happens in a single call stack, and operations are executed one at a time. This is efficient for I/O-bound tasks, where the main thread can offload tasks like file I/O or network requests to the system's underlying non-blocking capabilities and continue executing other JavaScript code.

Below is the example code where I have created two routes:

const express = require('express');
const app = express();
const port = 3002;

app.get('/nonblocking', (req, res) => {
  res.send("This is non blocking api");
});
app.get('/blocking', (req, res) => {
    for(let i=0; i<99999999999999; i++) {    
    }
   res.send("This is blocking api");
});

app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});


Here, the route '/nonblocking' is a very simple and light route where I just return a string in the response; it takes only a few milliseconds. But in the second route, '/blocking', I traverse a very big loop that takes a long time to finish. Now, if any user calls this route, the main thread gets blocked processing the request and is not available to other requests, so even '/nonblocking' calls have to wait until the '/blocking' request is processed completely. To solve this issue, I have used a worker thread in the code below.

Example of Worker Threads

App.js: Here, I am creating a thread:

const express = require('express');
const app = express();
const port = 3002;

app.get('/nonblocking', (req, res) => {
  res.send("This is non blocking api");
});
app.get('/blocking', (req, res) => {
  const { Worker } = require('worker_threads');
  const worker = new Worker('./worker.js', { workerData: 999999 });
  worker.on('message', (data) => {
    res.send("This is blocking api with data = " + data);
  });
  worker.on('error', (error) => {
    console.log(error);
  });
  worker.on('exit', (code) => {
    if (code !== 0)
      // there is no promise to reject here, so just log the abnormal exit
      console.error(new Error(`Worker stopped with exit code ${code}`));
  });
});

app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});


Here, the line const worker = new Worker('./worker.js',  {workerData: 999999} );

creates a new thread. This worker emits events like message, error, and exit. We should listen to these events to process the data accordingly.

worker.js: Here, we define the logic of the task that this thread must perform; once the task is completed, it publishes the resulting data via parentPort.postMessage(data);

const { parentPort, workerData } = require('worker_threads');

function countTo(number) {
    let count=0;
    for(let i=0; i<number; i++) {
        count = count + i;
    }
    return count;
}

const result = countTo(workerData);

parentPort.postMessage(result);

Pros and Cons of Using Worker Threads

Pros:

  • Improved Performance for CPU-bound Tasks: Worker threads can significantly improve the performance of CPU-bound tasks by offloading them to separate threads.
  • Parallel Execution: Allows for true parallel execution of JavaScript code, which can lead to more efficient use of multi-core processors.
  • Enhanced Application Responsiveness: By handling heavy computations in worker threads, the main thread remains responsive to I/O operations and user interactions.

Cons:

  • Complexity: Introducing multi-threading can add complexity to the codebase, making it harder to understand and maintain.
  • Shared Memory Concerns: While worker threads can share memory using SharedArrayBuffer, managing shared memory can be tricky and prone to errors.
  • Overhead: Creating and managing worker threads introduces some overhead, which might not be justified for small or simple tasks.

When to Use Worker Threads in Node.js

Worker threads are particularly useful in the following scenarios:
  • CPU-bound Tasks: Any task that requires significant CPU time, such as complex calculations, image processing, or data parsing, can benefit from being executed in worker threads.
  • Parallel Processing: Tasks that can be divided into smaller, independent sub-tasks and executed in parallel will see performance gains.
  • Offloading Intensive Tasks: Tasks that could block the event loop, such as large file processing, can be offloaded to worker threads to keep the main thread responsive.
  • Real-time Applications: Applications that require real-time processing and cannot afford to have the main thread blocked will benefit from using worker threads.

Conclusion

Node.js is fundamentally single-threaded for JavaScript execution, but it can utilize multiple threads for certain operations, and, with the introduction of worker threads, it can even run JavaScript in parallel. This dual capability allows developers to handle both I/O-bound and CPU-bound tasks efficiently. By understanding when and how to use worker threads, developers can optimize their Node.js applications for better performance and responsiveness.





Sunday, May 26, 2024

Deploying Your Node.js App on EC2 (Amazon Web Services) with PM2 and NGINX

In this blog, I am going to discuss how to deploy your Node.js and Express application on an AWS EC2 instance. Along with deploying the application, I am also setting up the NGINX server and the PM2 process manager. I have shown all these steps clearly in this YouTube video. Please go through the video to get the complete and clear idea.

NOTE: The URL of this video: https://youtu.be/z0GkpPYwl8o


Steps:

1) Create your free trial AWS login
2) Commit your code to GitHub
3) Create EC2 instance:        
  • Create an EC2 instance, basically a t2.micro Ubuntu instance
  • Connect to your EC2 instance
  • Run sudo apt update to install any available updates
4) Install Node.js version 18:
  • sudo apt install nodejs (note: the default Ubuntu package may be older than version 18; use the NodeSource setup script if you need version 18 specifically)
  • node -v
  • npm -v
5) Verify GIT and clone your project:
  • git --version
  • git clone YOUR_PROJECT_REPOSITORY_URL
6) Project configuration:
  • Go inside your project folder: cd youtubeCode
  • Run the command "npm install" to create the node_modules folder
  • Verify the node_modules folder with the command "ls"
7) Install the Process Manager (PM2) library globally
  • sudo npm install -g pm2
  • Then go to your project folder
  • pm2 start app.js
  • pm2 logs
  • pm2 reload all
8) Now you can visit your API in the browser: http://IP:Port/routeName
9) Install nginx and configure:
  • sudo apt install nginx
  • sudo vim /etc/nginx/sites-available/default and then add the below configs in this default file
server_name 54.206.158.160;
location / {
proxy_pass http://localhost:3001;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_cache_bypass $http_upgrade;
}
  • Press Esc and type :wq! to write and quit
  • Check the NGINX config with the command "sudo nginx -t". If it returns success, your NGINX server is configured properly and you can run your application behind NGINX.
10) Start your NGINX server: sudo systemctl start nginx
11) If any changes are made to the NGINX configuration, you have to restart your NGINX server with a command like sudo systemctl restart nginx
12) If all steps are being followed properly, then till here, your application will be deployed and running successfully on NGINX server using PM2 library on AWS EC2 instance and now you can access your API just by using your EC2 instance's public IP address. No need to provide the port number. By default NGINX is running on PORT 80 and security group of EC2 instance in AWS is already having inbound rule to allow the traffic from 80 port.