I am writing a client/server program, and my idea is to keep the server as simple as possible. The server will need to complete tasks such as "move this file", "delete a file", or "run this complex algorithm on the columns in a file and send back the results".
I have created an abstract class, called aTodo, that will be sent between the server and client.
It has one method, DoClass(), that is to be run by the server.
Currently my server listens and waits for a connection. When it receives a connection, it creates an object of type aTodo via deserialization. The server then runs the DoClass function on that object, serializes the object, and sends it back to the client.
Here is the code for reference:
protocolBaseServer pBase(newSockFd);    //create the socket
std::unique_ptr<aTodo> DoThis;          //create the object variable
DoThis=protocolCom::Read<aTodo>(pBase); //read the stream into the variable
DoThis->DoClass();                      //call the DoClass function on the object
protocolCom::Write(DoThis,pBase);       //write back to the client
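For context, the aTodo hierarchy described above might look roughly like this. The class and method names mirror the question; the concrete task and the Result() accessor are assumptions for illustration:

```cpp
#include <memory>
#include <string>

// Hypothetical sketch of the aTodo base class the server runs.
class aTodo {
public:
    virtual ~aTodo() = default;
    virtual void DoClass() = 0;             // executed on the server
    virtual std::string Result() const = 0; // read back by the client
};

// One concrete task: list files in a directory on the server.
class ListFilesTodo : public aTodo {
public:
    explicit ListFilesTodo(std::string dir) : dir_(std::move(dir)) {}
    void DoClass() override {
        // Real code would enumerate dir_; here we just record that we ran.
        result_ = "listing of " + dir_;
    }
    std::string Result() const override { return result_; }
private:
    std::string dir_;
    std::string result_;
};
```

The serialization layer would need to know every concrete subclass, which is part of the coupling problem discussed below.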
Is this a good way to program a server? It is VERY simple on the server.
Another way I was thinking was to create a delegate class that would be serialized and sent back and forth. The delegate would have a DoDelegate method, and the user could put ANY function into the delegate's DoDelegate method. This would in effect allow the server to run any method in a class, rather than just the single DoClass method I have the server run now.
Thanks
It's a good way to dream...
How would you serialize the generic delegate? That's not something C++ facilitates...
Also, at this point your client must contain all the logic that the server implements, because you're actually sending server classes via serialization.
I think you need to separate the command objects and the handlers that implement the corresponding actions.
Also, I would suggest that the server queues the commands received and a separate worker processes the commands from the queue.
The common way to approach this is using a framework for asynchronous socket io, such as ACE or Asio.
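The command/handler separation suggested above could be sketched like this. Commands are plain data (the only thing serialized); handlers live only on the server. The names here are illustrative, not from any framework:

```cpp
#include <functional>
#include <map>
#include <string>

// A command is pure data: what to do and what to do it to.
struct Command {
    std::string name;  // e.g. "move", "delete"
    std::string arg;   // e.g. a file path
};

// The server registers one handler per command name and dispatches
// deserialized commands to them; the client never sees this code.
class CommandDispatcher {
public:
    using Handler = std::function<std::string(const Command&)>;

    void Register(const std::string& name, Handler h) {
        handlers_[name] = std::move(h);
    }

    std::string Dispatch(const Command& c) {
        auto it = handlers_.find(c.name);
        return it != handlers_.end() ? it->second(c) : "unknown command";
    }

private:
    std::map<std::string, Handler> handlers_;
};
```

A worker thread could then pop Command objects off a queue and feed them to Dispatch, keeping socket I/O and command execution separate.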
Related
I have a service written in C++ running in the background, and a client written in Python that actually calls C++ functions (using pybind11) which talk to that service. In my client example I am creating 2 processes. After the fork, the client in the new child process is able to send requests via gRPC but does not receive the answer message back.
I read there are problems with gRPC and forking in Python, but I am not sure how this can be avoided. Is creating a new stub for each object in each child process supposed to work?
The flow:
I make a request from the main process, getting an object from the server via pybind + gRPC.
Then I fork 2 processes and send each one the object returned by the previous service call.
In a child process, I make another request using that object; the request is sent and the answer is created in the service, but I never receive it in the client.
I want to extend an existing application with a simple REST server and a dedicated UDP client for streaming data to a UDP server.
The REST server should manage the UDP client based on the API requests
and have access to classes in the hosting application.
I want to create a new class in the hosting application so I can have access to the data.
I saw Poco::Net::HTTPServer and it seems like a good candidate. I want to start and stop the HTTPServer from that class's methods.
Thing is, this class should know of the API requests types so that it can react accordingly.
Is it a good practice to inherit HTTPServer and extend it with the above behavior (that way I may have simple access to the requests)?
Or should it only be a member of the main class?
I also thought of POCO events/notifications, but I am not sure whether that is a good idea, or how to subscribe my new class to events from the short-lived HTTPRequestHandler objects, or to events from the HTTPRequestHandlerFactory object (which HTTPServer owns upon creation).
P.S.
The new REST server and UDP socket should serve a single connection. Also, the expected frequency of API requests is very low.
Any help or alternative ideas are highly appreciated!
Is it a good practice to inherit HTTPServer and extend it with the above behavior (that way I may have simple access to the requests)?
I don't recommend that. Create a standalone class, hold a reference to it in the request handler factory, pass the reference to each request handler, and do the required work at the request handling time.
As for using notifications: from the description, it sounds like simply calling your new object's methods back from the request handlers would do the job. But having a NotificationQueue in the new class and processing notifications in a separate thread may also make sense.
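The queue-plus-worker-thread idea can be illustrated with the standard library alone (a stand-in for Poco::NotificationQueue, not its actual API): request handlers push messages, and the owning class pops them in its own thread.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

// Minimal thread-safe queue: handlers call Push from the HTTP server's
// threads, the new class calls Pop in its own worker thread.
class NotifyQueue {
public:
    void Push(std::string msg) {
        {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(msg));
        }
        cv_.notify_one();
    }

    // Blocks until a message is available.
    std::string Pop() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        std::string msg = std::move(q_.front());
        q_.pop();
        return msg;
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<std::string> q_;
};
```

This keeps the short-lived HTTPRequestHandler objects decoupled from the class that manages the UDP client: they only need a reference to the queue.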
It seems that with Django Channels, there is no persistent state between events on the websocket. Even within the same websocket connection, you cannot preserve anything between calls to receive() on a class-based consumer. If it can't be serialized into the channel_session, it can't be stored.
I assumed that the class based consumer would be persisted for the duration of the web socket connection.
What I'm trying to build is a simple terminal emulator, where a shell session would be created when the websocket connects. Read data would be passed as input to the shell and the shell's output would be passed out the websocket.
I cannot find a way to persist anything between calls to receive(). It seems like they took all the bad things about HTTP and brought them over to websockets. With each call to connect(), receive(), and disconnect(), the whole Consumer class is re-instantiated.
So am I missing something obvious? Can I make another thread and have it read from a Group?
Edit: The answers to this can be found in the comments below. You can hack around it. Channels 3.0 will not instantiate the Consumers on every receive call.
The new version of Channels does not have this limitation. Consumers stay in memory for the duration of the websocket request.
I am writing some code for a client and server using C++.
I am using boost serialize to serialize an object on the client side and then send it to a server.
The server then deserializes the stream and recreates the object.
The server then calls the run function of the object.
The server then serializes the object and sends it back to the client.
The reason I am doing the above is for instance the client needs to know what files are in the /home/dataIncoming folder on the server.
This is just one simple example
My question is that the objects being serialized seem to need the same code on the server AND the client; otherwise, how would the server know how to deserialize the object it was sent?
Thus, if I change the server code, I need to make sure I also get the code over to the client program.
How do programmers easily solve this problem of duplicate code on the client and the server?
Or is there some way I could serialize so this duplicate code did not need to exist on the client and the server?
Or do I simply send a protocol stream over to the server and have the server read that stream to reconstruct the commands to be run and the info to be sent back?
It seems easiest to just construct an object on the client side, have it run on the server, then send it back to the client with the results.
Thanks for all your ideas!
For generic client server operations:
You compile your code into a library, and use the library on both the client and server.
Your client and server both need to understand the data format. There is no way around this.
You could have a more abstract layer that allows the client to submit the format of new data types, but this abstract layer itself would need to be a data format understood by both the client and server.
It's unclear to me how your example differs (if at all) from the generic case.
The client and the server have to share data (and the data format), but not the code.
It sounds like your object encapsulates three things:
The data
Some client code
Some server code
A lot of object oriented programming encourages putting data and the code that operates on it into the same object, but that's not always the best solution. Why not separate those things out?
Put your data into a data object and serialize that. The server would have something that operated on that data (presumably some object, not necessarily though) and the client would have something that operated on that data. So, I think you just need to reconsider how your programs are structured.
The architecture you describe also has another downside. If you want to modify the behavior of the server, you have to also update the client. If these concerns were separate, this wouldn't be a problem. The only reason you would have to update both the client and the server is if the data format changes.
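The data/code split described above could look like this. Only the plain struct crosses the wire; the server-side logic is compiled only into the server binary. The tiny text format and all names here are illustrative, not Boost.Serialization:

```cpp
#include <sstream>
#include <string>

// The shared part: pure data, known to both client and server.
struct FileRequest {
    std::string path;
    std::string listing;  // filled in by the server
};

std::string Serialize(const FileRequest& r) {
    std::ostringstream os;
    os << r.path << '\n' << r.listing;
    return os.str();
}

FileRequest Deserialize(const std::string& wire) {
    FileRequest r;
    std::istringstream is(wire);
    std::getline(is, r.path);
    std::getline(is, r.listing);
    return r;
}

// Server-only logic: changing this never requires touching the client.
void HandleOnServer(FileRequest& r) {
    r.listing = "files in " + r.path;  // real code would read the directory
}
```

With this split, the only thing the client and server must agree on is the FileRequest format, as the answer says.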
Usually the encoding and decoding are taken care of by the serialization library that you use (Google protobuf, nanopb, Boost, etc.).
However, since the classes/structures being encoded from and decoded into are essentially the same, you need the same information on both ends (the server and the client).
A good idea is to build this functional module into a .dll or .so which, when changed, can be updated on both the server and the client without recompiling them.
I have used C++ & Winsock to create both server and client applications. After connecting to the server, the client displays an animation sequence. The server machine can handle multiple client connections and keeps a count of the total number of clients connected.
Currently, the client animation sequence begins as soon as the client connects. What I want to happen is this: when 2 clients have connected to the server, the server sends a message to client 1 telling it to call its Render() function, then, at a later time, sends another message to client 2 which calls the same Render() function.
Just looking for some help as to the best way to achieve this.
Thanks
You can't send function calls (in any direct meaning of the word), since functions live within a single process space and can't (easily) be sent across a socket connection.
What you can do is send a message which the client will act on and call the desired function.
Depending on what protocols you are using, this could be as simple as the server sending a single byte (e.g. 'R' for render, or something) to each client's TCP connection, and the client code would know to call Render() when it receives that byte. More complex protocols might encode the same information more elaborately, of course.
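The client side of that one-byte protocol is just a dispatch on the received byte. A sketch, where 'R' for Render() follows the example above and the other byte and return strings are made up for illustration:

```cpp
#include <string>

// Maps a command byte read from the TCP connection to a local action.
// A real client would call Render() here instead of returning a string.
std::string HandleCommandByte(char cmd) {
    switch (cmd) {
        case 'R': return "Render() called";  // start the animation sequence
        case 'Q': return "shutting down";    // hypothetical extra command
        default:  return "unknown command";  // ignore or log bad bytes
    }
}
```

The client's receive loop would read one byte at a time from the socket and pass it to this function; the server decides when to write the 'R' byte to each client.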
In interpreted languages (e.g. Java or Python) it is possible to send actual code across the socket connection, in the form of Java .class files, Python source text, etc. But it's almost always a bad idea to do so, as an attacker could exploit the mechanism to send malware to the client and have the client execute it.