A detailed walkthrough of the core functionality of the Node.js cluster module, through its source code

As is well known, JavaScript code in Node.js runs on a single thread, which makes it fragile: one uncaught exception and the entire application goes down. In many scenarios, especially web applications, this is unacceptable. The usual solution is to use Node.js's built-in cluster module to start multiple application instances in a master-worker pattern. But while enjoying the convenience of the cluster module, many people have wondered:

  1. My application code clearly calls app.listen(port). How can multiple instances forked by the cluster module all run this code without triggering a port-conflict error?
  2. How does the master pass received requests to the workers for processing and response?
Let's find the answers in lib/cluster.js of the Node.js source tree.

To answer these questions, let's start with worker process initialization. When forking a worker process, the master attaches an environment variable NODE_UNIQUE_ID to it, an auto-incrementing number:

```js
// lib/cluster.js
// ...
function createWorkerProcess(id, env) {
  // ...
  workerEnv.NODE_UNIQUE_ID = '' + id;
  // ...
  return fork(cluster.settings.exec, cluster.settings.args, {
    env: workerEnv,
    silent: cluster.settings.silent,
    execArgv: execArgv,
    gid: cluster.settings.gid,
    uid: cluster.settings.uid
  });
}
```

When Node.js initializes, it uses this environment variable to decide whether the current process was forked by the cluster module. If so, it runs the workerInit() function to set up the environment; otherwise it runs masterInit().
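As a quick illustration of that branch, here is a tiny sketch (the helper name isForkedWorker is invented for this example; the real check lives inside lib/cluster.js):

```javascript
// Hypothetical helper mirroring the startup check described above:
// a process forked by the cluster module carries NODE_UNIQUE_ID in its
// environment, while a directly launched (master) process does not.
function isForkedWorker(env) {
  return 'NODE_UNIQUE_ID' in env;
}

// A cluster-forked worker would see something like { NODE_UNIQUE_ID: '1' }.
console.log(isForkedWorker({ NODE_UNIQUE_ID: '1' }));  // true
console.log(isForkedWorker({}));                       // false
```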
In the workerInit() function, the cluster._getServer method is defined. This method is called inside the listen function used by every net.Server instance:
```js
// lib/net.js
// ...
function listen(self, address, port, addressType, backlog, fd, exclusive) {
  exclusive = !!exclusive;

  if (!cluster) cluster = require('cluster');

  if (cluster.isMaster || exclusive) {
    self._listen2(address, port, addressType, backlog, fd);
    return;
  }

  cluster._getServer(self, {
    address: address,
    port: port,
    addressType: addressType,
    fd: fd,
    flags: 0
  }, cb);

  function cb(err, handle) {
    // ...
    self._handle = handle;
    self._listen2(address, port, addressType, backlog, fd);
  }
}
```

As you may have guessed, the answer lies in this cluster._getServer function. It mainly does two things:

  1. Register the worker with the master process. If this worker is the first one to listen on the given port/descriptor, the master creates an internal TCP server that takes over the responsibility of listening on that port/descriptor, and then records the worker.
  2. Hack the net.Server instance in the worker process so that it no longer carries the listening responsibility itself.

For the first point: since the master receives requests and then distributes them to workers according to some load-balancing rule (round-robin on non-Windows platforms), this logic is encapsulated in the RoundRobinHandle class. The internal TCP server is initialized in its constructor:

```js
// lib/cluster.js
// ...
function RoundRobinHandle(key, address, port, addressType, backlog, fd) {
  // ...
  this.handles = [];
  this.handle = null;
  this.server = net.createServer(assert.fail);

  if (fd >= 0)
    this.server.listen({ fd: fd });
  else if (port >= 0)
    this.server.listen(port, address);
  else
    this.server.listen(address);  // UNIX socket path.
  // ...
}
```
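To make the round-robin rule concrete, here is a small self-contained sketch of the idea (the class and method names are invented for illustration; the real RoundRobinHandle also waits for the worker to acknowledge a handoff before considering it free again):

```javascript
// Simplified round-robin distributor, modeled loosely on RoundRobinHandle.
class RoundRobin {
  constructor() {
    this.free = [];     // workers currently available
    this.pending = [];  // connection handles waiting for a worker
  }
  addWorker(worker) {
    this.free.push(worker);
    this.dispatch();
  }
  distribute(handle) {
    this.pending.push(handle);
    this.dispatch();
  }
  dispatch() {
    while (this.free.length > 0 && this.pending.length > 0) {
      const worker = this.free.shift();
      const handle = this.pending.shift();
      worker.handled.push(handle);  // stands in for sending 'newconn'
      this.free.push(worker);       // simplification: immediately free again
    }
  }
}

const w1 = { id: 1, handled: [] };
const w2 = { id: 2, handled: [] };
const rr = new RoundRobin();
rr.addWorker(w1);
rr.addWorker(w2);
['c1', 'c2', 'c3'].forEach((conn) => rr.distribute(conn));
console.log(w1.handled, w2.handled);  // [ 'c1', 'c3' ] [ 'c2' ]
```

Connections alternate between the two workers, which is the behavior the master's internal TCP server provides on non-Windows platforms.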
For the second point: since the listen method of a net.Server instance ultimately calls the listen method of its _handle property to perform the actual listening, the handle is replaced in the code:

```js
// lib/cluster.js
// ...
function rr(message, cb) {
  // ...
  // The listen function here no longer performs any listening action
  function listen(backlog) {
    return 0;
  }

  function close() {
    // ...
  }

  function ref() {}
  function unref() {}

  var handle = {
    close: close,
    listen: listen,
    ref: ref,
    unref: unref,
  };
  // ...
  handles[key] = handle;
  cb(0, handle);  // The handle passed to this cb will be assigned to _handle
}

// lib/net.js
// ...
function listen(self, address, port, addressType, backlog, fd, exclusive) {
  // ...
  if (cluster.isMaster || exclusive) {
    self._listen2(address, port, addressType, backlog, fd);
    return;
  }

  cluster._getServer(self, {
    address: address,
    port: port,
    addressType: addressType,
    fd: fd,
    flags: 0
  }, cb);

  function cb(err, handle) {
    // ...
    self._handle = handle;  // Only replaced in the worker environment
    // ...
  }
}
```
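The effect of this swap can be illustrated with a tiny standalone sketch (MiniServer and the handle objects here are invented for the example; they only mimic the delegation pattern of net.Server):

```javascript
// net.Server delegates the actual bind to its _handle; swap the handle
// and the server happily "listens" without ever touching the port.
class MiniServer {
  constructor(handle) {
    this._handle = handle;
  }
  listen(port) {
    return this._handle.listen(port);  // 0 means success
  }
}

const realHandle = {
  boundPort: null,
  listen(port) { this.boundPort = port; return 0; }  // pretend to bind
};
const fakeHandle = {
  listen(port) { return 0; }  // report success, bind nothing
};

const masterServer = new MiniServer(realHandle);  // keeps the real handle
const workerServer = new MiniServer(fakeHandle);  // got the stub from rr()
masterServer.listen(8000);
workerServer.listen(8000);  // no conflict: nothing is actually bound
console.log(realHandle.boundPort);  // 8000
```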

At this point, the first question is suddenly clear. To summarize:

The port is listened on only once, by the internal TCP server in the master process. No "port already in use" error appears because, in the worker processes, the method that would actually perform the listening has been stubbed out by the cluster module.
Question 2

With question 1 solved, question 2 becomes much easier. From question 1 we learned that the listening is done by the internal TCP server created in the master process, so taking over its handling of incoming connections is the natural next step. The cluster module does this by listening for the connection event of the internal TCP server. In the listener, it selects a worker and sends it a newconn internal message (internal messages carry a cmd: 'NODE_CLUSTER' property) together with the client handle (i.e., the second argument of the connection event handler). The relevant code is as follows:
```js
// lib/cluster.js
// ...
function RoundRobinHandle(key, address, port, addressType, backlog, fd) {
  // ...
  this.server = net.createServer(assert.fail);
  // ...
  var self = this;
  this.server.once('listening', function() {
    // ...
    self.handle.onconnection = self.distribute.bind(self);
  });
}

RoundRobinHandle.prototype.distribute = function(err, handle) {
  this.handles.push(handle);
  var worker = this.free.shift();
  if (worker) this.handoff(worker);
};

RoundRobinHandle.prototype.handoff = function(worker) {
  // ...
  var handle = this.handles.shift();
  // ...
  var message = { act: 'newconn', key: this.key };
  var self = this;
  sendHelper(worker.process, message, handle, function(reply) {
    // ...
  });
};
```
After the worker process receives the newconn internal message, it takes the handle passed along with it, invokes the actual business logic, and responds:

```js
// lib/cluster.js
// ...
// This method is called by src/node.js when Node.js initializes
cluster._setupWorker = function() {
  // ...
  process.on('internalMessage', internal(worker, onmessage));
  // ...
  function onmessage(message, handle) {
    if (message.act === 'newconn')
      onconnection(message, handle);
    // ...
  }
};

function onconnection(message, handle) {
  // ...
  var accepted = server !== undefined;
  // ...
  if (accepted) server.onconnection(0, handle);
}
```
With question 2 also resolved, let's summarize:

All requests first pass through the internal TCP server.

In the request-handling logic of the internal TCP server, a worker process is picked in a load-balanced way and sent a newconn internal message, with the client handle attached.

The worker process receives the internal message, creates a net.Socket instance from the client handle, executes the specific business logic, and responds.
Besides the functionality described above, the cluster module in Node.js also provides a rich API for communication between the master and worker processes, and supplies different default behaviors for different operating system platforms. This article traced only a single functional path; if you have spare time, I recommend reading the complete implementation of the cluster module.
