How do I get data from multiple sockets in one node to another socket in another node (socket.io-redis)?

Basically, my problem is shown in the image below at (4): I'm trying to retrieve information that is held by sockets in a different node. How do I retrieve data from sockets in a different node?

The figure itself gives you the answer. If you look closely:
client 1 requests info from all the other clients;
clients 2, 3 and 4 all connect to node 2, both inside and outside of the frame;
the originally requested data can then be delivered by node 1 and node 2 to client 1 and to clients 2, 3 and 4 respectively.

Found the answer
// sending to individual socketid (private message)
socket.to(socketid).emit('hey', 'I just met you');
The one thing I'm curious about is whether this will work between sockets placed in different nodes; I'm guessing that it will.
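For what it's worth, the Redis adapter does make this work across nodes: every emit addressed to a socket id is published on a Redis channel, and each node delivers it to any matching socket it holds locally. A rough conceptual sketch of that relay, with a plain in-memory `Bus` standing in for Redis pub/sub (all class and method names here are illustrative, not the socket.io API):

```python
# Conceptual sketch: how an adapter relays emits between nodes.
# A plain Python class stands in for Redis pub/sub; names are illustrative.

class Bus:
    """Stand-in for Redis pub/sub: fan a message out to all nodes."""
    def __init__(self):
        self.nodes = []

    def publish(self, message):
        for node in self.nodes:
            node.on_bus_message(message)

class Node:
    """One server process holding some locally connected sockets."""
    def __init__(self, bus):
        self.bus = bus
        self.local_sockets = {}   # socket id -> messages received
        bus.nodes.append(self)

    def connect(self, socket_id):
        self.local_sockets[socket_id] = []

    def emit_to(self, socket_id, event, data):
        # Publish to every node; only the node holding the socket delivers.
        self.bus.publish({"target": socket_id, "event": event, "data": data})

    def on_bus_message(self, message):
        inbox = self.local_sockets.get(message["target"])
        if inbox is not None:
            inbox.append((message["event"], message["data"]))

bus = Bus()
node1, node2 = Node(bus), Node(bus)
node1.connect("client1")
node2.connect("client2")

# client1 lives on node1, client2 on node2 - the emit still arrives.
node1.emit_to("client2", "hey", "I just met you")
print(node2.local_sockets["client2"])  # [('hey', 'I just met you')]
```

The point of the sketch is that the sending node never needs a direct reference to the remote socket; it only needs the shared channel.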

Related

Is it good to send a new RabbitMQ message from a consumer to the same queue?

I do mailings via RabbitMQ: I send a mailing list from the main application; the consumer reads it and sends it out.
A broadcast may consist of different messages, which must be sent in the correct order.
In fact, a mailing list is a list of messages: [message_1, message_2, message_3, message_4]
Some of the messages may already have been sent when, at some point, the third-party service stops accepting requests.
I will describe the consumer's process:
1. Take the message containing the broadcast out of the queue.
2. Send part 1, then part 2.
3. An error occurs; parts 3 and 4 remain to be sent.
4. Acknowledge the original message from the queue.
5. Put a new message at the beginning of the same queue: [message_3, message_4].
Question 1: Is it good to send a new message (from consumer) created from parts of an old one to the same queue?
Question 2: Is it a good solution?
Are there any other solutions?
The sequence you posted loses a message if the handler process crashes between steps 4 and 5, so you have to switch the order of steps 4 and 5. But as soon as you do that, you have to deal with duplicated messages: if for some reason (like a bug) the ack fails for a large percentage of messages, you can end up with the same broadcast repeated multiple times in the queue.

So if you want to avoid duplicated messages, you have to use some external persistence to perform deduplication. Also, RabbitMQ doesn't guarantee that messages are delivered in the same order, so you can end up in a situation where two messages for the same address are delivered out of order. Deduplication should therefore be at the level of individual parts, not entire messages.
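As a sketch of what part-level deduplication could look like, assuming some external persistent set keyed by (broadcast id, part index) — the names below (`send_part`, `deliver`, the in-memory `sent` set) are illustrative stand-ins, not a RabbitMQ API:

```python
# Sketch: part-level deduplication for at-least-once delivery.
# "sent" stands in for external persistence (e.g. a Redis set).

sent = set()  # persisted set of (broadcast_id, part_index) already sent

def send_part(broadcast_id, part_index, payload):
    """Send one part unless it was already sent in an earlier attempt."""
    key = (broadcast_id, part_index)
    if key in sent:
        return False          # duplicate delivery - skip
    deliver(payload)          # call the third-party service here
    sent.add(key)             # record only after a successful send
    return True

delivered = []
def deliver(payload):
    delivered.append(payload)

# First attempt sends parts 1-2, then the consumer crashes and the
# whole broadcast is redelivered; parts 1-2 are skipped the second time.
for i, part in enumerate(["part1", "part2"]):
    send_part("b1", i, part)
for i, part in enumerate(["part1", "part2", "part3", "part4"]):
    send_part("b1", i, part)

print(delivered)  # ['part1', 'part2', 'part3', 'part4']
```

Because the dedup key includes the part index, a redelivered broadcast resends only the parts that never made it out, regardless of ack ordering.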
Question 2: Is it a good solution? Are there any other solutions?
Consider using an orchestrator like temporal.io which eliminates most of the consistency problems I described.

Predefining route for message to take on OMNeT++

The following image of the network successfully sends messages around in random direction. It is a basic generic network with no specific protocols or connection types.
Now, I want to be able to simply program the route that the message takes from source node to destination node and everything in between. Say for example I want the message to start from London and be sent to SouthBank, then Manchester, then Preston, and then arrive and be deleted at MiltonKeynes.
The route would then be:
London --> SouthBank --> Manchester --> Preston --> MiltonKeynes
How would I implement this? The OMNeT++ tictoc tutorials (specifically part 4.4: https://docs.omnetpp.org/tutorials/tictoc/part4/) only explain how to make a message arrive at a predefined node, but the message still travels in random directions in between.
This is called source routing: you explicitly fill in the routing info at the source node. It is pretty easy to implement. Add variable-size routing info to the packet you are sending, something like a stack, and push the names of the cities on the route one by one. At each node, pop the first element from the stack in the packet and route the packet towards the city it specifies. All nodes should use this algorithm; when the stack is empty, the packet has arrived.
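A minimal sketch of that stack-based forwarding, with plain Python dicts standing in for OMNeT++ packets (names like `make_packet` and `forward` are illustrative, not OMNeT++ API):

```python
# Sketch of source routing: the packet carries its remaining route,
# and each node pops the next hop off the front.

def make_packet(route, payload):
    return {"route": list(route), "payload": payload}

def forward(node_name, packet, log):
    """Called when `packet` arrives at node `node_name`.
    Returns the next hop, or None if this node is the destination."""
    log.append(node_name)
    if not packet["route"]:
        return None                    # arrived: consume/delete the packet
    return packet["route"].pop(0)      # next city to route towards

# The source node (London) fills in the whole route up front.
packet = make_packet(["SouthBank", "Manchester", "Preston", "MiltonKeynes"],
                     "hello")

log = []
node = "London"
while node is not None:
    node = forward(node, packet, log)

print(log)  # ['London', 'SouthBank', 'Manchester', 'Preston', 'MiltonKeynes']
```

In OMNeT++ terms, the route list would live in a custom message definition (a `.msg` file with a string array or stack field), and each node's `handleMessage` would pop the next hop and send the packet out on the gate leading towards it.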

Managing Players on a World Server

I am currently developing the server part of a game (MMORPG) and I am stuck on a point that seems to me quite important: how to manage the packets received by the clients and their logic?
Let me explain: I know how to accept a connection from a client and how to store that client's socket, but I don't know how to handle the packets it will send later and apply the modifications on the server (all asynchronously).
I had thought of 2 solutions:
1) As soon as the server detects a client connection, it creates a thread for the client. So there is 1 thread per client that will handle the packets of a single client. But in this case, the more clients there are, the more CPU will be used, right?
2) As soon as the server detects a new client, it stores it in a list. A thread loops over the client list and checks whether the current client has sent a packet. If so, it handles it. But this solution also poses a problem: how do I handle that packet? Create a new thread specifically for it? Then I come back to the starting point: too many packets will overload the machine.
A friend offered me a third solution: a mixture of both, where each thread would take care of up to NB_MAX_CLIENT clients.
I would like to know if there are other ways of doing that.
I'm on Windows. I develop with Visual Studio in C++ and I use Winsock.
Thanks in advance, and sorry for my bad English.
As soon as the server detects a client connection, it creates a thread for the client. So there is 1 thread per client that will handle the packets of a single client. But in this case, the more clients there are, the more CPU will be used, right?
This is fairly common unless you are running out of RAM from the stacks each thread requires (OS threads typically need one OS stack per thread). The other issue is too many context switches, which might make you consider otherwise.
Avoiding the thread-per-client model is genuinely difficult, because you lose the ability to keep per-client state on the stack: you have no idea which thread will handle the next packet, so everything has to pivot off a shared data structure.
As soon as the server detects a new client, it stores it in a list. A thread will loop on the client list and see if the current client is sending a packet. If so, it manages it. But this solution also poses a problem: how to manage this packet?
Typically you set up a producer/consumer arrangement of threads for this: one producer receives each packet and pushes it onto a queue, which is then consumed by some number of worker threads that handle each item.
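A minimal sketch of such a producer/consumer setup (shown in Python for brevity, though the question is about C++/Winsock; the same shape applies in C++ with a mutex-protected queue and a condition variable):

```python
# Sketch: one producer enqueues incoming packets, a small pool of
# worker threads consumes and handles them.
import queue
import threading

packet_queue = queue.Queue()
handled = []
handled_lock = threading.Lock()

def worker():
    while True:
        packet = packet_queue.get()
        if packet is None:            # sentinel: shut this worker down
            packet_queue.task_done()
            return
        with handled_lock:
            handled.append(packet)    # apply game logic here
        packet_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

# The "producer" (the network reader) just enqueues whatever arrives.
for i in range(100):
    packet_queue.put(("move", i))

packet_queue.join()                   # wait until every packet is handled
for _ in workers:
    packet_queue.put(None)            # stop the workers
for w in workers:
    w.join()

print(len(handled))  # 100
```

The key property is that the number of threads is fixed regardless of how many clients or packets there are; load shows up as queue depth, not as thread count.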
Honestly, doing this correctly requires a ton of work (one example of it was a major piece of technology that Netflix developed), so you should probably avoid it to keep things simple.
Especially since RAM is cheap: reaching 1 MB of stack per thread requires a level of concurrency at which other problems will knock you over long before your dedicated thread stacks do. (Similarly, by the time context switches become your biggest issue you are pretty far along, unless you are doing something wrong that is unrelated to this discussion.)

Serialized connection using Socket Programming

I am trying to make a loop of communicating nodes passing very low level messages and I was wondering whether socket programming would be a good fit for my purpose. I will explain what I intend to do below:
Consider three nodes A, B, C.
Node A generates some data and sends it to B.
Node B receives this data, does some computation and sends it to C.
Node C receives this data, does some computation and sends it back to A.
To make it work, I was thinking of having all nodes as both client and servers.
Client A ----> Server B [After Computation] Client B ---> Server C [After Computation] Client C---> Server A
My question is that would this work? Or is there a major flaw in my thought process?
Thank you all :)
I'm afraid Stack Overflow is not a good place for such a question; this is a "direct question -> direct answer" site. But here are some of my thoughts:
It is a weird architecture, I must say. It could work this way, but how do you want to run a client and a server on the same node? Whether they are 2 threads, 2 processes, or even 2 applications, you will have to deal with communication between them.
You can also try to do peer-to-peer communication through UDP, but that's not going to be easier.
Consider an alternative: have a server at node B offering the service "some computation 1" and a server at node C offering "some computation 2", and then a client at node A which first queries server B with the initial data and, after receiving the response, queries server C with the returned data.
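For what it's worth, the ring itself does work. Here is a minimal single-process sketch using socket pairs in Python (real nodes would use TCP connections between hosts, and the `+B`/`+C` transforms stand in for the actual computations):

```python
# Sketch of the A -> B -> C -> A loop using socket pairs in one process;
# each "node" reads from its inbound socket, computes, and writes to
# its outbound one.
import socket
import threading

# Three links forming the ring: A->B, B->C, C->A.
a_out, b_in = socket.socketpair()
b_out, c_in = socket.socketpair()
c_out, a_in = socket.socketpair()

def node(inbound, outbound, compute):
    data = inbound.recv(1024)
    outbound.sendall(compute(data))

# B and C each run as their own "server": receive, compute, forward.
tb = threading.Thread(target=node, args=(b_in, b_out, lambda d: d + b"+B"))
tc = threading.Thread(target=node, args=(c_in, c_out, lambda d: d + b"+C"))
tb.start()
tc.start()

a_out.sendall(b"A")          # node A generates the data...
result = a_in.recv(1024)     # ...and eventually receives it back
tb.join()
tc.join()

print(result)  # b'A+B+C'
```

On each real node, the "server" part (listening socket) and "client" part (outbound connection to the next node) are just two sockets owned by the same process, which sidesteps most of the inter-process communication concern.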

Separating messages in a simple TCP echo server using Winsock DLL

Please consider a simple echo server using TCP and the Winsock DLL. The client application sends messages from multiple threads. The recv call on the server sometimes returns with multiple messages stored in the passed buffer. At this point, there's no chance for the server to know, whether this is one huge message or multiple small messages.
I've read that one could use setsockopt in combination with the TCP_NODELAY option. But apart from MSDN stating that this option is implemented for backward compatibility only, it doesn't even change the behavior described above.
Of course, I could introduce some kind of delimiter at the end of each message and split the messages on the server side. But I don't think that's the way one should do it. So, what is the right way?
Firstly, TCP_NODELAY was never going to help here. TCP is a byte-stream protocol, and any given connection only maintains the byte ordering, not the boundaries of any given send/write. It's inherently broken to rely on multiple threads that don't use any synchronisation to keep the messages they want to send together on the stream.

For example, say thread 1 wants to send the two-byte message "AB" and thread 2 wants to send "XY". Say thread 1 starts first but the output buffer only has room for one byte: send will enqueue "A" and let thread 1 know it has only sent one byte (so it should loop and retry, preferably after waiting for notification that the output queue has more space). Then thread 2 might get some or all of "XY" into the queue before thread 1 can get "B" in. These sorts of problems become more severe on slower connections and on slow, loaded machines (e.g. a low-powered phone that's playing video and multitasking while your app runs over 3G).
The ways to ensure the logical messages stay together over TCP include:
have a single sending thread that picks up messages sequentially from a shared queue (a mutex might be used to let the threads enqueue messages)
hold a lock (mutex) so each thread's sends can loop uninterrupted until a complete message is sent (this wouldn't suit some apps because any of the threads could be held up for quite a while doing comms work)
use a separate TCP connection per thread
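Independently of the sending side, the usual way to keep logical messages intact on a byte stream is length-prefix framing: prepend each message with a fixed-size length field, and on the receiving side only peel off complete messages, keeping any partial tail for the next recv. A minimal sketch in Python (the Winsock version is the same idea, accumulating `recv` results into a growing buffer):

```python
# Sketch: length-prefix framing so logical messages survive a TCP
# byte stream. Each message is preceded by a 4-byte big-endian length.
import struct

def frame(message: bytes) -> bytes:
    return struct.pack("!I", len(message)) + message

def unframe(buffer: bytes):
    """Split a receive buffer into complete messages plus any leftover
    partial message (which should be kept for the next recv)."""
    messages = []
    while len(buffer) >= 4:
        (length,) = struct.unpack("!I", buffer[:4])
        if len(buffer) < 4 + length:
            break                      # message not fully received yet
        messages.append(buffer[4:4 + length])
        buffer = buffer[4 + length:]
    return messages, buffer

# Two messages arrive in one recv, plus the start of a third:
stream = frame(b"hello") + frame(b"world") + frame(b"partial")[:6]
messages, leftover = unframe(stream)
print(messages)   # [b'hello', b'world']
print(leftover)   # the incomplete third frame, kept for the next recv
```

Unlike a delimiter, a length prefix needs no escaping of message bytes, and it tells the receiver exactly how much more data to wait for.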