Execute Redis Commands with Node.js without a Driver

We have previously written about executing Redis commands from Node.js.

Below is a code snippet in which we connect to Redis and execute commands from Node.js without using any driver.

Steps taken –

1> First, we started Redis on our machine.

2> We wrote a bare-minimum socket program to connect to the Redis host and port (127.0.0.1 and 6379). Below is the connection-related code –

client.connect method…

3> In the connect callback, we write the Redis commands (some examples are SET, GET, ECHO and PING).

The most important part here is Redis's EXEC command, which executes all the commands queued in a transaction (i.e. commands queued after MULTI).
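
As a hedged side note (not part of the snippet below), the transactional variant of the same commands over the raw socket would look like this, with Redis acknowledging each queued command with +QUEUED:

// Hedged sketch: sending the commands as a Redis MULTI/EXEC transaction.
// MULTI starts queueing, each following command is acknowledged with +QUEUED,
// and EXEC executes the whole queue and returns all replies together.
client.write('MULTI\r\n');
client.write('SET samplekey "Hello"\r\n');
client.write('GET samplekey\r\n');
client.write('EXEC\r\n');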

4> We registered callbacks for the 'data' and 'close' events of the socket in Node.js.

In the 'data' event callback, we receive the response from the Redis server.

5> The 'close' event callback notifies us when the socket is closed.

This work is only for exploration purposes. We will create more structured classes for Redis data handling in the future and cover the important parts in upcoming posts.

Below is the code.

 
var net = require('net');

var HOST = '127.0.0.1';
var PORT = 6379;

var client = new net.Socket();
client.connect(PORT, HOST, function() {

    console.log('CONNECTED TO: ' + HOST + ':' + PORT);
    // Send message to socket after client connection
    client.write('SET samplekey "Hello"\r\n');
    client.write('GET samplekey\r\n');
    client.write('ECHO "HELLO WORLD"\r\n');
    client.write('PING\r\n');
    client.write('EXEC\r\n');

});

// Add a 'data' event handler for the client socket
// data is what the server sent to this socket
client.on('data', function(data) {

    console.log('DATA: ' + data);
    // Close the client socket completely
    client.destroy();

});

// Add a 'close' event handler for the client socket
client.on('close', function(hadError) {
    // 'close' passes a boolean flag indicating whether the socket had a transmission error
    if (hadError)
        console.log('Connection closed due to a transmission error');
    else
        console.log('Connection closed');
});
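
For reference, Redis replies arrive in the RESP wire format: '+' for simple strings (e.g. +OK, +PONG), '-' for errors, ':' for integers and '$' for bulk-string length headers, each terminated by \r\n. Below is a minimal, hedged sketch of splitting the raw 'data' buffer into such replies; the helper name parseRespLines is our own and not part of any library.

// Minimal RESP reply splitter - exploration only, not a full protocol parser.
// It labels single-line reply headers; bulk-string payloads appear as plain lines.
function parseRespLines(buffer) {
    return buffer.toString().split('\r\n').filter(Boolean).map(function (line) {
        switch (line[0]) {
            case '+': return { type: 'simple',  value: line.slice(1) };         // e.g. +OK, +PONG
            case '-': return { type: 'error',   value: line.slice(1) };         // e.g. -ERR ...
            case ':': return { type: 'integer', value: Number(line.slice(1)) };
            case '$': return { type: 'bulk-length', value: Number(line.slice(1)) };
            default:  return { type: 'payload', value: line };                  // bulk string body
        }
    });
}

// Usage inside the 'data' handler above:
// console.log(parseRespLines(data));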

If you find this article helpful, you can connect with us on Google+ and Twitter


Use of process.nextTick in Node.js

This post is a quick walkthrough of Node.js's process.nextTick method.

From our reading around the web, we found that a code snippet wrapped in process.nextTick executes on the next turn of the Node.js event loop, before any I/O-bound work is processed.

We also found that process.nextTick guarantees that the code inside it executes asynchronously, i.e. the callback never runs before the current call stack has completed.
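
A minimal illustration of that ordering (a tiny snippet of our own, using nothing beyond core Node.js):

console.log('first');                                     // runs immediately
process.nextTick(function () { console.log('third'); });  // deferred until the current stack unwinds
console.log('second');                                    // still runs before the deferred callback

// Output: first, second, third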

We have tried to show this behaviour with the examples below.

ProcessAsync.js

 
function insertUserAsync(name, cb) {
    var username = name;
    process.nextTick(function() {
        if (username == 'admin') {
            return cb(null, 'Admin user insert');
        } else {
            return cb(null, 'General user insert');
        }
    });
}

function allwork()
{
	console.log('Async msg start');
	insertUserAsync('admin',function(err,content){
		console.log('admin user call and content -'+content);
	});
	insertUserAsync('user',function(err,content){
		console.log('general user call and content -'+content);
	});
	console.log('Async msg end');

}

allwork();

Output –
Below is the output of the above code. We can see that 'msg start' and 'msg end' print first, and only then does the snippet inside process.nextTick execute. So the call is truly asynchronous in nature.

 
Async msg start
Async msg end
admin user call and content -Admin user insert
general user call and content -General user insert

According to the Node.js documentation, an API should be either 100% synchronous or 100% asynchronous. In the code below, the callback is invoked synchronously, so a function written this way may behave synchronously or asynchronously depending on how it is implemented – exactly what the documentation warns against. Since the main Node.js event loop always runs on a single thread, code with synchronous (blocking) behaviour is bad practice. To warn the reader, we have created a code situation that is not asynchronous and can therefore block the main event loop, for example if it performed a long-running I/O operation.

ProcessMayBeSync.js

 
function insertUserSync(name, cb) {
    var username = name;
    if (username == 'admin') {
        return cb(null, 'Admin user insert');
    } else {
        return cb(null, 'General user insert');
    }
}

function allwork()
{
	console.log('sync msg start');
	insertUserSync('admin',function(err,content){
		console.log('sync admin user call and content -'+content);
	});
	insertUserSync('user',function(err,content){
		console.log('sync general user call and content -'+content);
	});
	console.log('sync msg end');

}

allwork();

Output –

In the output below, we can see that the code executes synchronously, and so it would block the main Node.js event loop while running.

 
sync msg start
sync admin user call and content -Admin user insert
sync general user call and content -General user insert
sync msg end

If you find this article helpful, you can connect with us on Google+ and Twitter

Implement a Caching Server with Express.js custom middleware

In this article, we explore the capability of Express.js custom middleware. We have made a minimal cache-server implementation as Express.js middleware, which serves as a cache for GET web requests. Below is the main server code with inline documentation.

Server.js

 

  var express = require('express');
  var app = express();
  var cache = require('./cacheServer'); // the Cache Server middleware
  var cached = []; // array in which cached responses are kept in memory
  var pathname = '';
  var url = require('url');
  var cacheHelper = require('./cacheHelper'); // cache method utility

  app.use(cache(cached)); // injection of the middleware

  app.get('/', function (req, res) {
    cacheHelper.setCache(req, res, cached, "Hello World"); // serve the request and save it in the cache
    res.send('Hello World!');
  });

  app.get('/about', function (req, res) {
    cacheHelper.setCache(req, res, cached, "Hello about"); // serve the request and save it in the cache
    res.send('Hello about!');
  });

  var server = app.listen(3000, function () {
    var host = server.address().address;
    var port = server.address().port;
    console.log('Application is running at http://%s:%s', host, port);
  });

Below is the main implementation of the cache server. Here the request is first checked against the in-memory cache object. If cached content is found, it is served from the cache; otherwise the request is forwarded to the normal request-handling chain of the server.

cacheServer.js


  var cacheHelper = require('./cacheHelper.js');

  // the middleware function
  module.exports = function(cached) {

      return function(req, res, next) {
          var resource = cacheHelper.getCache(req, res, cached); // get the content from cache
          if (req.method == 'GET') { // cache is implemented only for the 'GET' method
              if (typeof resource !== 'undefined') {
                  res.end(resource); // serve from cache
              } else {
                  next(); // forward to fresh request processing
              }
          } else {
              next(); // forward to fresh request processing
          }
      };

  };

Below is a helper module for getting objects from the cache and setting objects into it.

cacheHelper.js

 
  var url = require('url');
  var pathname = '';

  function getCache(req, res, cached) {
      pathname = url.parse(req.url).pathname;
      return cached[pathname];
  }

  function setCache(req, res, cached, content) {
      pathname = url.parse(req.url).pathname;
      cached[pathname] = 'cached response' + ' from ' + pathname + ' ' + content;
  }

  exports.getCache = getCache;
  exports.setCache = setCache;
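
To see the middleware at work, the small hedged test script below (our own, using only core Node.js) fires a few GET requests at the running server; the first hit on each path is served by the route handler and stored, and subsequent hits are answered from the cache array.

// testCache.js - assumes Server.js above is already listening on port 3000.
var http = require('http');

function fetch(path) {
    http.get({ host: '127.0.0.1', port: 3000, path: path }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { console.log(path + ' -> ' + body); });
    });
}

fetch('/');                                       // first request: handled by the route, then cached
setTimeout(function () { fetch('/'); }, 500);     // second request: served from the cache
setTimeout(function () { fetch('/about'); }, 1000);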

We expect the reader to be familiar with Node.js and Express.js coding.

If you find this article helpful, you can connect with us on Google+ and Twitter

Nginx Caching Functionality

In this post we will explore some of the caching capabilities of nginx.

Previously, we explored some functionality of Nginx as a load balancer. Below are the articles –

1> Configure node.js application server with nginx
2> Configuring Load Balancer with Nginx and Node.js
3> Nginx as a Load Balancer – some details as we explored

We expect the reader's knowledge of web applications to be at an intermediate level, at a point where he/she needs to work on application performance tuning.

Now, some discussion about Nginx caching capabilities. First, the diagram –

Web Caching through Nginx

  • Nginx can serve static files efficiently without sending any request to web / application server.
  • Nginx can work as a cache server on top of web/application servers.

Nginx proxies requests to web/application servers (via HTTP, FastCGI, etc. – though we have used HTTP only). Serving static files directly from Nginx increases application performance, while dynamic requests are still passed to the application servers. Nginx can also act as a load balancer and a caching server at the same time.

In a caching server, static requests as well as many HTTP GET and HEAD requests can be cached, depending on the application.

Some functionality of the Cache Server:

  • Send the HTTP request to the application server if the request is not to be cached or the cache time has expired
  • Serve responses to HTTP requests from the cache or from the application server, as needed

Now, some example configuration for serving static files, taken from our previous example –

server {
      …
      location ~ ^/(images/|img) {
           root /nodeapps/nodeexpress4mongoangular;
           access_log off;
           expires max;
      }
}

Here static files are served from the /nodeapps/nodeexpress4mongoangular path: any request whose URL starts with /images/ or /img is handled by this location block, and the served files are expected to live in the corresponding images or img folder under the root configured in the Nginx configuration file.

Here are some variations of configuration –

  location ~* \.(json|xml)$ {
           expires -1;
  }

Here no caching is done, as expires is set to -1.

  location ~* \.(jpg|jpeg|png)$ {
           expires 1M;
           add_header Cache-Control "public";
  }

Here caching is for 1 month. add_header Cache-Control "public" means any system (browser, proxy, CDN) may cache those resources.

  proxy_cache_path /tmp/nginx keys_zone=cache_zone:20m inactive=120m;

Here proxy_cache_path sets the location on disk where the cached files are stored.
keys_zone names the shared memory zone (cache_zone in our case), which is referred to in further configuration, and 20m allocates 20 megabytes of shared memory for the cache keys and metadata.
inactive=120m means that if a cached item is not requested again within 120 minutes of being served, it is removed from the cache.

  proxy_cache_key "$scheme$request_method$host$request_uri";

The above defines the cache key, that is, how a request URL is mapped to its cached content.

  location / {
      proxy_cache cache_zone;
      add_header X-Proxy-Cache $upstream_cache_status;
      …
  }

Here proxy_cache refers to the zone name declared in keys_zone above (cache_zone in our case); this directive needs to be configured when Nginx is to be used as a cache server.
add_header X-Proxy-Cache $upstream_cache_status is a useful header that can be included to check whether the resources configured for caching are actually being served from the cache.

So the above are the main configurations we have used in our Nginx setup. Currently we are working on serving dynamic requests with proper caching, which we will cover in our next post(s).
Reference : A very useful article about nginx caching is here.
If you find this article helpful, you can connect with us on Google+ and Twitter

Haproxy as Load Balancer – some details as we have explored

We had written about load balancing with nginx in our previous article.

Now in this article we will try to discuss some of the load balancing schemes in haproxy and their configuration in web applications.

The HAProxy server can be used to distribute web requests across different web/application servers.

The 2 main sections in an HAProxy configuration are –

  • frontend – where HAProxy listens for connections
  • backend – where HAProxy sends incoming connections

The Load Balancing Schemes –

Round Robin – HAProxy sends web requests to the different servers in the order they are defined in the configuration file.

Example Configuration  –

backend servers
mode http
balance roundrobin
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server server1 127.0.0.1:3000 check
server server2 127.0.0.1:3001 check

In the above, only the server entries need to be inserted in the backend section under a given name – in our case it is servers. We attached the servers here –

    server server1 127.0.0.1:3000 check
    server server2 127.0.0.1:3001 check

We can set a weight for any server: if we put weight 2 on server1 (e.g. server server1 127.0.0.1:3000 weight 2 check), then out of every 3 requests, 2 will go to server1 and the rest to server2.

When HAProxy is used for sticky sessions (i.e. all requests from a client are to be sent to the same server), we need to add a

cookie COOKIE-NAME prefix

directive to the backend, and a cookie value to every server entry that should participate in session stickiness. HAProxy then uses this identifier to route all subsequent requests from the same client to the same server. It will look like –

    cookie SRV_ID prefix
    server server1 127.0.0.1:3000 cookie server1 check

leastconn -

Here the server with the lowest number of active connections receives the next connection.

Example Configuration  –

backend servers
mode http
balance leastconn
option forwardfor
http-request set-header X-Forwarded-Port %[dst_port]
http-request add-header X-Forwarded-Proto https if { ssl_fc }
option httpchk HEAD / HTTP/1.1\r\nHost:localhost
server server1 127.0.0.1:3000 check
server server2 127.0.0.1:3001 check

HAProxy has a web stats interface, which runs on port 1936 in our configuration, where we can view the status of the web/app servers that are being load balanced. For our setup the web console looks like –

(Screenshot: HAProxy stats web console)

In the frontend section, the HAProxy front-end information is shown, and in the servers section, server1 and server2 are shown as running. We have just started to explore HAProxy as a load balancer; we will post more details here as we learn more.

For more details about those load balancing settings, we should refer to haproxy documentation.

Reference : Load Balancing with Haproxy

If you find this article helpful, you can connect with us on Google+ and Twitter for other updates.

Configuring Load Balancer with Haproxy and Node.js

Load balancing is needed to distribute work across multiple resources – in computer science terminology, these resources can be pieces of software or hardware, each running the same worker process and waiting to serve incoming requests. So in a web application, we need load balancing to spread user requests across servers. The load balancer can be arranged in various physical architectures. In this article we have simulated a load balancing architecture with an HAProxy server and Node.js application servers.

We expect the reader to have an intermediate level of knowledge of web technology.

Below is a picture in which the load balancing scenario is drawn.

Load Balancer

Now, to simulate this load balancing setup, we have taken one HAProxy server and spawned 2 Node.js servers on the same computer.

Some commands for haproxy –

1> To install : sudo apt-get install haproxy

2> To start : sudo service haproxy start

3> To stop : sudo service haproxy stop

4> To restart : sudo service haproxy restart

5> To reload configuration : sudo service haproxy reload

To change the HAProxy configuration, we edit the /etc/haproxy/haproxy.cfg file.

 

First, the main configurations in the haproxy server for load balancing (haproxy.cfg)-

....
    frontend localnodes
      bind *:80
      mode http
      default_backend servers

    backend servers
      mode http
      balance roundrobin
      option forwardfor
      http-request set-header X-Forwarded-Port %[dst_port]
      http-request add-header X-Forwarded-Proto https if { ssl_fc }
      option httpchk HEAD / HTTP/1.1\r\nHost:localhost
      server server1 127.0.0.1:3000 check
      server server2 127.0.0.1:3001 check

Here we have two Node.js server processes up on ports 3000 and 3001, both doing the same work. In the backend section, we have added these two servers.

In the above configuration, the 2 relevant lines are –

server server1 127.0.0.1:3000 check
server server2 127.0.0.1:3001 check

The scheme for load balancing is written in the following line –

balance roundrobin

Also, in the frontend part we have configured the default backend as –

default_backend servers

which refers to the name given in the backend section.

Now, the Node.js server setup –


var http = require('http');
var morgan = require('morgan'); // HTTP request logger middleware (not used in this minimal example)

var server1 = http.createServer(function (req, res) {
  console.log("Request for:  " + req.url + "-- port 3000 ");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node.js\n');
}).listen(3000, "127.0.0.1");

var server2 = http.createServer(function (req, res) {
  console.log("Request for:  " + req.url + "-- port 3001 ");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node.js\n');
}).listen(3001, "127.0.0.1");

server1.once('listening', function() {
  console.log('Server running at http://127.0.0.1:3000/');
});

server2.once('listening', function() {
  console.log('Server running at http://127.0.0.1:3001/');
});

When we hit http://127.0.0.1 in the browser, if everything goes well,
we will see the following output in the browser –


  Hello Node.js

Behind the scenes, the requests arrive on ports 3000 and 3001 one after another, which is the round robin load balancing scheme of the HAProxy server.
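
To observe the alternation without a browser, the small hedged script below (our own, core Node.js only) fires a few requests at the load balancer; watching the two Node.js consoles should show the ports alternating.

// testRoundRobin.js - assumes the HAProxy frontend above is listening on port 80.
var http = require('http');

for (var i = 0; i < 6; i++) {
    http.get({ host: '127.0.0.1', port: 80, path: '/' }, function (res) {
        var body = '';
        res.on('data', function (chunk) { body += chunk; });
        res.on('end', function () { console.log('Response: ' + body.trim()); });
    });
}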

We will discuss different schemes of haproxy load balancing in our next article.

If you find this article helpful, you can connect with us on Google+ and Twitter for other updates.

Nginx as a Load Balancer – some details as we explored

A complex web application will have a load balancer in front, handling incoming web requests. A picture of a typical web application deployment environment is drawn below –

Load Balancer

We had written about round-robin load balancing of nginx with node.js in our previous article.

Now in this article we will try to discuss some of the load balancing schemes in nginx and their configuration in web applications.

The Nginx server can be used to distribute web requests across different web/application servers.

The Load Balancing Schemes –

Round Robin – Nginx sends web requests to the different servers in the order they are defined in the Nginx configuration file.

Example Configuration  –

http{

    upstream sampleapp {
        server <<dns entry or IP Address(optional with port)>>;
        server <<another dns entry or IP Address(optional with port)>>;
        }
    ....
    server{
       listen 80;
       ...
       location / {
          proxy_pass http://sampleapp;
       }  
  }
}

In the above, only the DNS entries (or IP addresses) need to be inserted in the upstream section under a given name – in our case it is sampleapp. The same name is then used in the proxy_pass directive.
 

Least Connections – the web request is sent to the server that has the fewest active connections, i.e. the least load.

Example Configuration  –

 http{

    upstream sampleapp {
        least_conn;
        server <<dns entry or IP Address(optional with port)>>;
        server <<another dns entry or IP Address(optional with port)>>;
        }
    ....
    server{
       listen 80;
       ...
       location / {
          proxy_pass http://sampleapp;
       }  
  }
}

In the above, the only line to add in the upstream section is least_conn. Everything else is the same as the previous example.

IP Hash – with the above 2 methods, subsequent web requests from a client can be sent to different servers, so session handling becomes complex; only DB-based (or otherwise shared) session persistence works. To overcome this, we can use the ip_hash scheme, in which subsequent web requests from the same client are sent to the same server.

Example Configuration  –

 http{

    upstream sampleapp {
        ip_hash;
        server <<dns entry or IP Address(optional with port)>>;
        server <<another dns entry or IP Address(optional with port)>>;
        }
    ....
    server{
       listen 80;
       ...
       location / {
          proxy_pass http://sampleapp;
       }  
  }
}

In the above, the only line to add in the upstream section is ip_hash. Everything else is the same as the first example.

Weighted Load Balancing – we can configure Nginx to send more web requests to the more powerful servers and fewer requests to the servers with fewer resources. A weight is defined for the more powerful server.

Example Configuration  –

 http{

    upstream sampleapp {
        server <<dns entry or IP Address(optional with port)>> weight=2;
        server <<another dns entry or IP Address(optional with port)>>;
        }
    ....
    server{
       listen 80;
       ...
       location / {
          proxy_pass http://sampleapp;
       }  
  }
}

In the above, weight=2 is set for one server. This means that out of every 3 requests, 2 will go to the first server and 1 to the second. The weight parameter can also be combined with the ip_hash scheme.

For more details about those load balancing settings, we should refer to nginx documentation.

Reference : Load Balancing with Nginx

We will discuss about nginx caching in our next article/s.

If you find this article helpful, you can connect with us on Google+ and Twitter for other updates.

Configuring Load Balancer with Nginx and Node.js

Load balancing is needed to distribute work across multiple resources – in computer science terminology, these resources can be pieces of software or hardware, each running the same worker process and waiting to serve incoming requests. So in a web application, we need load balancing to spread user requests across servers. The load balancer can be arranged in various physical architectures. In this article we have simulated a load balancing architecture with an Nginx proxy server and Node.js application servers. If the reader wants to know about the integration of Node.js and Nginx, he/she can go through the previous article.

We expect the reader to have an intermediate level of knowledge of web technology.

Below is a picture in which the load balancing scenario is drawn.

Load Balancer

Now, to simulate this load balancing setup, we have taken one Nginx server and spawned 2 Node.js servers on the same computer.

First, the main configurations in the nginx server –

....
    upstream sample {
        server 127.0.0.1:3000;
        server 127.0.0.1:3001;
        keepalive 64;
    }

    server {
        listen 80;
        ....
        server_name 127.0.0.1;

        ...

        location / {
            proxy_redirect off;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            proxy_set_header X-NginX-Proxy true;
            proxy_set_header Connection "";
            proxy_http_version 1.1;
            proxy_pass http://sample;
        }
    }

Here we have two Node.js server processes up on ports 3000 and 3001, both doing the same work. In the upstream section, we have added these two servers. We have also proxied our requests via proxy_pass http://sample.

Now, the Node.js server setup –

var http = require('http');
var morgan = require('morgan'); // HTTP request logger middleware (not used in this minimal example)

var server1 = http.createServer(function (req, res) {
  console.log("Request for:  " + req.url + "-- port 3000 ");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node.js\n');
}).listen(3000, "127.0.0.1");

var server2 = http.createServer(function (req, res) {
  console.log("Request for:  " + req.url + "-- port 3001 ");
  res.writeHead(200, {'Content-Type': 'text/plain'});
  res.end('Hello Node.js\n');
}).listen(3001, "127.0.0.1");

server1.once('listening', function() {
  console.log('Server running at http://127.0.0.1:3000/');
});

server2.once('listening', function() {
  console.log('Server running at http://127.0.0.1:3001/');
});

When we hit http://127.0.0.1 in the browser,
we will see the following output in the console –


  Server running at http://127.0.0.1:3000/
  Server running at http://127.0.0.1:3001/
  Request for:  /-- port 3001 
  Request for:  /favicon.ico-- port 3000 
  Request for:  /favicon.ico-- port 3001 
  Request for:  /-- port 3000 
  Request for:  /favicon.ico-- port 3001 
  Request for:  /favicon.ico-- port 3000 
  Request for:  /-- port 3001 
  Request for:  /favicon.ico-- port 3000 
  Request for:  /favicon.ico-- port 3001 
  Request for:  /-- port 3000 
  Request for:  /favicon.ico-- port 3001 
  Request for:  /favicon.ico-- port 3000 

In the above, we see the requests arriving on ports 3000 and 3001 one after another, which is the round robin load balancing scheme of the Nginx proxy server.

We will discuss different schemes of nginx load balancing in our next article.

If you find this article helpful, you can connect with us on Google+ and Twitter for other updates.

Configure node.js application server with nginx

Generally, we want any web application to run as fast as possible.

To make this as achievable as possible, our solution approach has to cover both the web front end and the back end.

For the front end, there are multiple optimization elements, like –

A> Load Balancing

B> Caching

C> Serving static resources from a server that is different from the application server(s) serving dynamic requests.

and other optimization parameters

The scope of this article is to show a way to integrate Nginx and a Node.js server to accomplish most of the above.

Here is a diagram representing Nginx in one of its roles.

Nginx on top of Application Server

We have integrated our previous Node/Express.js based application with Nginx so that requests are proxied through the Nginx server.

So, to install Nginx on Ubuntu, we simply need 2 commands –

 
sudo apt-get update

sudo apt-get -f install nginx

To start the service – sudo service nginx start

To stop the service – sudo service nginx stop

To restart the service – sudo service nginx restart

Now, once Nginx is up and running (check in the browser by typing http://127.0.0.1), we apply the following configuration code from the gist, which we added to nginx.conf and then reloaded the server. The nginx.conf file can be found in the /etc/nginx/ folder for a default Nginx installation on Ubuntu 14.04 LTS.

Here is the Github gist –


The important parts for integrating the Node.js application are discussed below. In the configuration file –

 
upstream sample {
    # the Node.js server is running here
    server 127.0.0.1:8080;  # the server to which requests are to be proxied
    ...
}

Now, in the server section, some important snippet lines –

 
server {
    listen 80;        # the port on which the Nginx server should listen
    listen 443 ssl;   # for secure socket layer (SSL)

    server_name 127.0.0.1;  # name of the server; a DNS name can be provided here

    # static files such as images are served from the physical file path
    # and not from the main application server
    location ~ ^/(images/|img) {
        root /nodeapps/nodeexpress4mongoangular;
        access_log off;
        expires max;
    }

    location / {
        ...
        proxy_pass http://sample;
        # here the proxy is set up with our previous upstream section
    }
}

After setting this up, we need to reload the Nginx server and make sure our Node server is up and running.
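
For completeness, here is a minimal hedged sketch of the kind of Node/Express.js application that could sit behind this configuration on port 8080; our actual application is the one referenced earlier, so this stand-in is only illustrative.

// app.js - minimal stand-in for the Express.js application proxied by Nginx.
var express = require('express');
var app = express();

app.get('/', function (req, res) {
    res.send('Hello from the Node.js application behind Nginx');
});

app.listen(8080, '127.0.0.1', function () {
    console.log('Application server listening at http://127.0.0.1:8080');
});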

In our next articles, we will discuss Nginx load balancing, HAProxy load balancing and Nginx cache server configuration.

If you find this article helpful, you can connect with us on Google+ and Twitter