How to signal a class that an HTTPServer request was received? - c++

I want to extend an existing application with a simple REST server and a dedicated UDP client for streaming data to a UDP server.
The REST server should manage the UDP client based on the API requests
and have access to classes in the hosting application.
I want to create a new class in the hosting application so I can have access to the data.
I saw Poco::Net::HTTPServer and it seems like a good candidate. I want to start and stop the HTTPServer from that class's methods.
The thing is, this class should know about the API request types so that it can react accordingly.
Is it a good practice to inherit HTTPServer and extend it with the above behavior (that way I may have simple access to the requests)?
Or should it only be a member of the main class?
I also thought of POCO events/notifications, but I'm not sure whether that is a good idea, or how to subscribe my new class to events from the various short-lived objects (of class HTTPRequestHandler), or to events from the HTTPRequestHandlerFactory object (which HTTPServer takes ownership of upon creation).
P.S.
The new REST server and UDP socket should serve a single connection. Also, the expected frequency of API requests is very low.
Any help or alternative ideas are highly appreciated!

Is it a good practice to inherit HTTPServer and extend it with the above behavior (that way I may have simple access to the requests)?
I don't recommend that. Create a standalone class, hold a reference to it in the request handler factory, pass the reference on to each request handler, and do the required work at request handling time.
As for using notifications: from the description it sounds like simply calling your new object's methods back from the request handlers would do the job. But having a NotificationQueue in the new class and processing the notifications in a separate thread might also make sense.
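A minimal sketch of that layout, assuming Poco's HTTPRequestHandler/HTTPRequestHandlerFactory interfaces (the StreamController class, the URI paths and the handler names are hypothetical):

#include <Poco/Net/HTTPRequestHandler.h>
#include <Poco/Net/HTTPRequestHandlerFactory.h>
#include <Poco/Net/HTTPServerRequest.h>
#include <Poco/Net/HTTPServerResponse.h>

// Owns the UDP client and exposes the operations the REST API needs.
class StreamController
{
public:
    void startStreaming() { /* open the UDP socket and start sending */ }
    void stopStreaming()  { /* stop sending and close the socket */ }
};

// Short-lived handler: does its work by calling back into the controller.
class ApiRequestHandler : public Poco::Net::HTTPRequestHandler
{
public:
    explicit ApiRequestHandler(StreamController& controller) : _controller(controller) {}

    void handleRequest(Poco::Net::HTTPServerRequest& request,
                       Poco::Net::HTTPServerResponse& response) override
    {
        if (request.getURI() == "/start")
            _controller.startStreaming();
        else if (request.getURI() == "/stop")
            _controller.stopStreaming();

        response.setStatus(Poco::Net::HTTPResponse::HTTP_OK);
        response.send() << "ok";
    }

private:
    StreamController& _controller;
};

// The factory holds the reference and hands it to every handler it creates.
class ApiRequestHandlerFactory : public Poco::Net::HTTPRequestHandlerFactory
{
public:
    explicit ApiRequestHandlerFactory(StreamController& controller) : _controller(controller) {}

    Poco::Net::HTTPRequestHandler* createRequestHandler(const Poco::Net::HTTPServerRequest&) override
    {
        return new ApiRequestHandler(_controller); // HTTPServer deletes the handler after use
    }

private:
    StreamController& _controller;
};

The hosting class can then own both the StreamController and the Poco::Net::HTTPServer (constructed with this factory and a ServerSocket) and start or stop the REST interface through the server's start() and stop() methods.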

Related

gRPC callback vs streaming in C++

I'm writing an application where a client will connect to a server and subscribe to data updates. The client tells the server what data items it is interested in, and then subscribes using a method with a streaming response. This works well.
However, there are also non-data related notifications that the client should know about. I'm not sure about the best way to handle those. I've thought of:
Adding another method to the existing service. This would be just like the data subscription but would be used for event subscription. The client could then subscribe to both types of updates. I'm not sure what the best practice is for the number of methods in a service, or for mixing responsibilities in a service.
Exposing a second service from the server with a streaming method for event notifications. This would make the client use multiple connections to get its data - and use another TCP port. The event notifications would be rare (maybe just a few during the lifetime of the connection), so I'm not sure if that is important to consider. Again, I'm not sure about best practices for the number of services exposed by a server.
This one seems unorthodox, but another method might be to pass connection info (IP address and port) from the client to the server during the client's connection sequence. The server could then use that to connect to the client as a way to send event notifications. So the client and server would each have to implement both client and server roles.
Any advice on ways to manage this? It seems like a problem that would already have been solved - but it also appears that the C++ implementation of gRPC lags a bit behind some of the other languages, which offer a few more options.
Oh - and I'm doing this on Windows.
Thanks
I've come up with another alternative that seems to fit the ProtoBuf style better than the others. I've created ProtoBuf message types for each of the data/event/etc notifications that the server should send, and enclosed each of them inside a common 'notification' message that uses the 'oneof' type. This provides a way to have a single streaming method response that can accommodate any type of notification. It looks like this:
message NotificationData
{
    oneof oneof_notification_type
    {
        DataUpdate item_data_update = 1;
        EventUpdate event_data_update = 2;
        WriteResponse write_response_update = 3;
    }
}

service Items
{
    ...
    rpc Subscribe (SubscribeRequest) returns (stream NotificationData) {}
    ...
}
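On the client side, the stream can then be consumed by switching on the oneof case. A minimal C++ sketch, assuming the classes generated from the proto above (the generated header name and the handling code are placeholders):

#include <memory>
#include <grpcpp/grpcpp.h>
#include "items.grpc.pb.h"  // hypothetical name of the generated header

void ReceiveNotifications(Items::Stub& stub, const SubscribeRequest& request)
{
    grpc::ClientContext context;
    std::unique_ptr<grpc::ClientReader<NotificationData>> reader(
        stub.Subscribe(&context, request));

    NotificationData notification;
    while (reader->Read(&notification))
    {
        // Dispatch on whichever member of the oneof is set.
        switch (notification.oneof_notification_type_case())
        {
        case NotificationData::kItemDataUpdate:
            // ... handle notification.item_data_update() ...
            break;
        case NotificationData::kEventDataUpdate:
            // ... handle notification.event_data_update() ...
            break;
        case NotificationData::kWriteResponseUpdate:
            // ... handle notification.write_response_update() ...
            break;
        default:
            break;
        }
    }

    grpc::Status status = reader->Finish();
    if (!status.ok())
    {
        // ... report the stream error ...
    }
}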
Any comments or concerns about this usage?
Thanks

Is there any way to build an interactive terminal using Django Channels with its current limitations?

It seems that with Django Channels there is no persistent state between websocket events. Even within the same websocket connection, you cannot preserve anything between calls to receive() on a class-based consumer. If it can't be serialized into the channel_session, it can't be stored.
I assumed that the class-based consumer would be persisted for the duration of the websocket connection.
What I'm trying to build is a simple terminal emulator, where a shell session would be created when the websocket connects. Data read from the websocket would be passed as input to the shell, and the shell's output would be sent back out over the websocket.
I cannot find a way to persist anything between calls to receive(). It seems like they took all the bad things about HTTP and brought them over to websockets. With each call to connect(), receive(), and disconnect(), the whole consumer class is re-instantiated.
So am I missing something obvious? Can I make another thread and have it read from a Group?
Edit: The answers to this can be found in the comments below. You can hack around it, and Channels 3.0 will not re-instantiate the consumer on every receive() call.
The new version of Channels does not have this limitation: consumers stay in memory for the duration of the websocket connection.

Client Server API interaction

I am writing a client-server program, and my idea is to keep the server as simple as possible. The server will need to complete tasks such as "move this file", "delete a file", or "run this complex algorithm on the columns in a file and send back the results".
I have created an abstract class called aTodo that will be sent between server and client.
There is one method called DoClass() in this abstract class that is to be run by the server.
Currently my server listens and waits for a connection. When it receives a connection, it creates an object of type aTodo via deserialization. The server then runs the DoClass() function on that object, serializes the object, and sends it back to the client.
Here is the code for reference:
protocolBaseServer pBase(newSockFd);       // wrap the accepted socket
std::unique_ptr<aTodo> DoThis;             // will hold the deserialized task object
DoThis = protocolCom::Read<aTodo>(pBase);  // deserialize the task from the stream
DoThis->DoClass();                         // run the task's DoClass() on the server
protocolCom::Write(DoThis, pBase);         // serialize the result and send it back to the client
Is this a good way to program a server? It is VERY simple on the server.
Another way I was thinking of was to create a delegate class that would be serialized and sent back and forth. The delegate would have a DoDelegate method, and the user could put ANY function into the delegate's DoDelegate method. This would in effect allow the server to run any method in a class, rather than just the single DoClass method I have the server run now.
Thanks
It's good to dream...
How would you serialize the generic delegate? That's not something that C++ facilitates...
Also, at this point your client must contain all the logic that the server implements, because you're actually sending server-side classes via serialization.
I think you need to separate the command objects and the handlers that implement the corresponding actions.
Also, I would suggest that the server queues the commands received and a separate worker processes the commands from the queue.
The common way to approach this is to use a framework for asynchronous socket I/O, such as ACE or Asio.
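A minimal sketch of that command/handler separation (the Command, Result and CommandDispatcher names are all hypothetical):

#include <functional>
#include <map>
#include <string>
#include <utility>

// Plain data describing what to do; this is what gets serialized over the wire.
struct Command
{
    std::string name;                         // e.g. "moveFile"
    std::map<std::string, std::string> args;  // e.g. {"from": ..., "to": ...}
};

struct Result
{
    bool ok = false;
    std::string message;
};

// Server-side registry mapping command names to the code that executes them.
class CommandDispatcher
{
public:
    using Handler = std::function<Result(const Command&)>;

    void registerHandler(const std::string& name, Handler handler)
    {
        _handlers[name] = std::move(handler);
    }

    Result dispatch(const Command& command) const
    {
        auto it = _handlers.find(command.name);
        if (it == _handlers.end())
            return {false, "unknown command: " + command.name};
        return it->second(command);
    }

private:
    std::map<std::string, Handler> _handlers;
};

The server loop would then deserialize a Command, push it onto a queue, and have a worker thread pop it, call dispatch(), and serialize the Result back to the client, so no executable logic ever travels over the wire.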

Send data from server with Java EE 6 to client

Problem
We have a client-server application; the server side is Glassfish 3.1.2. This app has many users, as well as many modules (e.g. View Transactions, View Banks, etc.). There are some long-running processes invoked by the client which run on the server. So far we have not found a nice solution for showing the user what is going on on the server side. We want the users to get updated messages from the server at a given frequency. What would you suggest using?
What we have done/tried
We (independently) used an approach with a Singleton bean and a Map of client IDs similar to this, and it works, of course. But then on the server side every method doSomething(Object... vars) must be converted to doSomething(Object... vars, String clientID), or whatever type the ID is. The client pulls data from the server, say, once per second. I would like to avoid adding facades between server and client.
I was also thinking about JAX-WS or JAX-RS, but I'm not deeply familiar with these technologies and am not sure what they can do.
Sockets
I should note that on the server side we have only Stateless beans (there is a reason for that), which is why I did not mention using a Stateful bean (which I think would otherwise be a very good candidate).
Regards, Oleg
WebSocket could be a suitable choice: it allows the server to send unsolicited data to clients without strong coupling. You just have to store a client ID to map client connections to running tasks, so you can push updates to the right connection.
The client ID/socket connection mapping can be maintained in a singleton bean using an in-memory structure (e.g. a hash map), or in a permanent datastore for scalability or if you need a more robust solution.
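Conceptually, the mapping boils down to something like the following sketch (written in C++ purely for illustration; in the actual application it would live in a singleton bean and use the container's WebSocket session type, and all names here are hypothetical):

#include <map>
#include <memory>
#include <mutex>
#include <string>

// Stand-in for whatever object lets you push data to a connected client.
struct ClientConnection
{
    void push(const std::string& message) { /* write the update to the underlying session */ }
};

class ConnectionRegistry
{
public:
    void add(const std::string& clientId, std::shared_ptr<ClientConnection> conn)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _connections[clientId] = std::move(conn);
    }

    void remove(const std::string& clientId)
    {
        std::lock_guard<std::mutex> lock(_mutex);
        _connections.erase(clientId);
    }

    // Called by a long-running task to report progress to its client.
    void notify(const std::string& clientId, const std::string& message)
    {
        std::shared_ptr<ClientConnection> conn;
        {
            std::lock_guard<std::mutex> lock(_mutex);
            auto it = _connections.find(clientId);
            if (it == _connections.end())
                return;
            conn = it->second;
        }
        conn->push(message);
    }

private:
    std::mutex _mutex;
    std::map<std::string, std::shared_ptr<ClientConnection>> _connections;
};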
Some useful links to better understand WebSocket technology are this and this.

Communication between client class library and web service / web service and server class library

I'm wondering what others do / what the best practice is for communicating between layers. This question relates to communication between layers 2-3 and 3-4 below.
Our basic architecture (in order) is as follows:
1. UI
2. Front End Business Classes
3. Web Services
4. Back End Business Classes
5. DAL
The web services are just a façade that adds logging and authentication in front of the back-end class libraries.
As such, the web service is passed a request object that includes the parameters required by the web method along with the user credential (the credential, for example, is stored in a base class since we always need to pass it to the web service), and it responds with a response object (which carries things such as a status and, if the call failed, a message, along with the object required). Both request and response use a custom generic class or interface where only one result is returned; otherwise a dedicated class needs to be created.
Sometimes it makes sense to do this for the response object at layer 4 (though we don't use a request object unless a lot of parameters need to be passed), in which case we just have an adapter class in layer 3 which returns this to the client. For consistency I have considered doing this all the time, though I think it may be overkill.
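In outline, the request/response wrappers described above look something like this (a minimal, language-agnostic sketch written here in C++; all names are hypothetical):

#include <optional>
#include <string>

struct UserCredential
{
    std::string userName;
    std::string token;
};

// Base class for all requests: the credential always travels with the call.
struct RequestBase
{
    UserCredential credential;
};

// Generic request used when a web method needs only one payload object.
template <typename TPayload>
struct Request : RequestBase
{
    TPayload payload;
};

// Generic response used when only one result is returned;
// otherwise a dedicated response class is created.
template <typename TResult>
struct Response
{
    bool succeeded = false;
    std::string message;            // populated when the call fails
    std::optional<TResult> result;  // the object required by the caller
};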
So, to reiterate the question: what are the best practices for communicating between layers? Do people use the method outlined above (it works well for us), and should layers 3-4 implement a similar method to layers 2-3?
Possible considerations:
Currently everything is coded in-house by a team of developers; some client code may be outsourced in the future.
Future web services will be WCF-based (not sure if that affects the design other than coding to interfaces, which I would prefer anyway).
We use .NET.
For the sake of completeness:
It seems like a good idea to have the request/response objects in the class library; that way, if you want to change the web service to WCF, there is less work to do.