I am trying to store a list of connected QWebSockets in a QHash for subscription purposes.
A client connects and then sends the WebSocket server a feature they would like to subscribe to. Each feature has a QHash that subscribing clients get added to. When the feature has new information, it loops through the clients in the QHash and sends each of them a message.
That seems to work fine, but if a client disconnects and the feature then tries to send them a message, my application segfaults. On a client disconnect I try to remove the client from the QHash, but the QHash::remove function always returns 0.
The way I'm storing WebSocket clients is as a hash keyed by a pointer to the QWebSocket object:
QHash<QWebSocket *, QString> diagnosticSubscriptions;
When a client subscribes to the diagnostics feature in my textMessageReceived function I run:
QWebSocket *pClient = qobject_cast<QWebSocket *>(sender());
diagnosticSubscriptions.insert(pClient, callback);
Then when the user disconnects I try to clean up the QHash:
QWebSocket *pClient = qobject_cast<QWebSocket *>(sender());
diagnosticSubscriptions.remove(pClient);
pClient->deleteLater();
If I print the return value of remove(), it is 0, so the client isn't being removed.
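For context, here is a minimal sketch of the subscription flow described above. The server setup, signal wiring, class name, and slot names are assumptions (they are not shown above); only the QHash insert/remove calls match the snippets:

#include <QtWebSockets/QWebSocketServer>
#include <QtWebSockets/QWebSocket>
#include <QHostAddress>
#include <QHash>
#include <QString>

class SubscriptionServer : public QObject
{
    Q_OBJECT
public:
    explicit SubscriptionServer(quint16 port, QObject *parent = nullptr)
        : QObject(parent)
        , m_server(QStringLiteral("demo"), QWebSocketServer::NonSecureMode, this)
    {
        if (m_server.listen(QHostAddress::Any, port)) {
            connect(&m_server, &QWebSocketServer::newConnection,
                    this, &SubscriptionServer::onNewConnection);
        }
    }

    // Called by the diagnostics feature whenever it has new data.
    void broadcastDiagnostics(const QString &message)
    {
        for (auto it = m_diagnosticSubscriptions.cbegin();
             it != m_diagnosticSubscriptions.cend(); ++it) {
            it.key()->sendTextMessage(message);
        }
    }

private slots:
    void onNewConnection()
    {
        QWebSocket *pClient = m_server.nextPendingConnection();
        connect(pClient, &QWebSocket::textMessageReceived,
                this, &SubscriptionServer::onTextMessageReceived);
        connect(pClient, &QWebSocket::disconnected,
                this, &SubscriptionServer::onDisconnected);
    }

    void onTextMessageReceived(const QString &callback)
    {
        // For this sketch, any text message is treated as a subscription request.
        QWebSocket *pClient = qobject_cast<QWebSocket *>(sender());
        if (pClient)
            m_diagnosticSubscriptions.insert(pClient, callback);
    }

    void onDisconnected()
    {
        QWebSocket *pClient = qobject_cast<QWebSocket *>(sender());
        if (pClient) {
            m_diagnosticSubscriptions.remove(pClient);
            pClient->deleteLater();
        }
    }

private:
    QWebSocketServer m_server;
    QHash<QWebSocket *, QString> m_diagnosticSubscriptions;
};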
Is there a better way of storing QWebSockets?
Can ZeroMQ Publisher/Subscriber sockets be configured so that a newly-connected client always receives the last published message (if any)?
What am I trying to do: my message is a kind of system state, so a new one deprecates the previous one. All clients have to have the current state. This works for already-connected clients (subscribers), but a new subscriber has to wait for the next state update that triggers a new message. Can I configure the Pub/Sub model to send the state to the client immediately after connection, or do I have to use a different model?
There is an example in the ZMQ guide called Last Value Caching. The idea is to put a proxy in between that caches the last message for each topic and forwards it to new subscribers. It uses an XPUB instead of a PUB socket so it can react to new subscriptions.
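A rough sketch of such a proxy, using the plain libzmq C API. The endpoints are placeholders, and messages are simplified to a single frame whose topic is the first space-separated word (the guide's version uses separate topic and body frames):

#include <zmq.h>
#include <map>
#include <string>

int main()
{
    void *ctx = zmq_ctx_new();
    void *frontend = zmq_socket(ctx, ZMQ_XSUB);   // publishers connect here
    void *backend  = zmq_socket(ctx, ZMQ_XPUB);   // subscribers connect here
    zmq_bind(frontend, "tcp://*:5557");           // placeholder endpoints
    zmq_bind(backend,  "tcp://*:5558");

    // Subscribe to every topic on the publisher side.
    zmq_send(frontend, "\x01", 1, 0);

    std::map<std::string, std::string> cache;     // topic -> last message

    while (true) {
        zmq_pollitem_t items[] = {
            { frontend, 0, ZMQ_POLLIN, 0 },
            { backend,  0, ZMQ_POLLIN, 0 },
        };
        if (zmq_poll(items, 2, 1000) == -1)
            break;  // interrupted

        if (items[0].revents & ZMQ_POLLIN) {
            // New message from a publisher: cache it by topic and forward it.
            char buf[1024];
            int size = zmq_recv(frontend, buf, sizeof(buf), 0);
            if (size > 0 && size <= (int)sizeof(buf)) {
                std::string msg(buf, size);
                std::string topic = msg.substr(0, msg.find(' '));
                cache[topic] = msg;
                zmq_send(backend, msg.data(), msg.size(), 0);
            }
        }
        if (items[1].revents & ZMQ_POLLIN) {
            // XPUB delivers subscriptions as messages: first byte 1 = subscribe,
            // followed by the topic. Replay the cached value for that topic.
            char sub[256];
            int size = zmq_recv(backend, sub, sizeof(sub), 0);
            if (size > 0 && size <= (int)sizeof(sub) && sub[0] == 1) {
                std::string topic(sub + 1, size - 1);
                auto it = cache.find(topic);
                if (it != cache.end())
                    zmq_send(backend, it->second.data(), it->second.size(), 0);
            }
        }
    }

    zmq_close(frontend);
    zmq_close(backend);
    zmq_ctx_destroy(ctx);
    return 0;
}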
I have an async gRPC server for Windows written in C++. I’d like to detect the loss of connection to a client – whether a network connection is lost, or the client crashes, etc. I see references to the keepalive channel arguments, and I’ve tried various combinations of those settings, such as:
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIME_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_TIMEOUT_MS, 10000);
builder.AddChannelArgument(GRPC_ARG_KEEPALIVE_PERMIT_WITHOUT_CALLS, 1);
builder.AddChannelArgument(GRPC_ARG_HTTP2_MIN_RECV_PING_INTERVAL_WITHOUT_DATA_MS, 9000);
builder.AddChannelArgument(GRPC_ARG_HTTP2_BDP_PROBE, 1);
I've done some testing with a streaming RPC method. If I kill the client process and then try to send data to the client, the lost connection is detected. I don't actually even have to send data; I can set an Alarm object to trigger immediately, and that causes the call handler to be cancelled. However, if I don't try to send data (or set an alarm) after killing the client process, then there's no notification or callback that I've been able to find or enable. I must not have a complete understanding. So:
How does the detection of a lost connection manifest itself for the server? Is there a callback method, or notification of some type? My server doesn’t receive any errors; the completion queue’s ‘Next()’ method never returns, etc.
Does this detection work for both unary (call/response) and streaming methods?
Does the server detection of a lost connection work whether or not the client has implemented lost connection / keepalive logic?
Is there some method besides the keepalive channel arguments that is preferred?
Thanks - any help is appreciated.
You can use ServerContext::AsyncNotifyWhenDone() to get a notification when the request has been cancelled.
https://grpc.github.io/grpc/cpp/classgrpc__impl_1_1_server_context_base.html#a0f1289f31257e6dbef57bc901bd7b5f2
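As a rough sketch of how this can be wired into an async server: register a distinct tag with AsyncNotifyWhenDone() before requesting the RPC, and when that tag later comes back from the completion queue, check IsCancelled(). The service, method, and message types below (MyService, StreamData, StreamRequest, StreamReply) are placeholders for your generated code, not real gRPC symbols:

#include <grpcpp/grpcpp.h>
#include "my_service.grpc.pb.h"   // placeholder: your generated service header

// One handler object per in-flight call (simplified; the usual state machine is omitted).
class CallHandler
{
public:
    CallHandler(MyService::AsyncService *service, grpc::ServerCompletionQueue *cq)
        : service_(service), cq_(cq), writer_(&ctx_)
    {
        // Must be registered before the RPC is requested.
        ctx_.AsyncNotifyWhenDone(&done_tag_);
        service_->RequestStreamData(&ctx_, &request_, &writer_, cq_, cq_, &request_tag_);
    }

    // Called from the completion-queue loop when one of this handler's
    // tags comes back from cq_->Next().
    void OnEvent(void *tag, bool ok)
    {
        (void)ok;  // ignored in this sketch
        if (tag == &done_tag_) {
            // Fires when the call finishes for any reason; IsCancelled()
            // distinguishes a cancelled/lost call from a normal completion.
            if (ctx_.IsCancelled()) {
                // Client went away; clean up this handler here.
            }
            return;
        }
        // ... normal handling of the request/write tags goes here ...
    }

private:
    MyService::AsyncService *service_;
    grpc::ServerCompletionQueue *cq_;
    grpc::ServerContext ctx_;
    StreamRequest request_;
    grpc::ServerAsyncWriter<StreamReply> writer_;
    int request_tag_ = 0;   // any distinct addresses work as tags
    int done_tag_ = 1;
};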
I have created a simple C++ TCP Server application.
A client connects and receives back, as a simple echo, everything it sends to the server. There is no purpose at all except for me to test the communication.
So far so good. My next task is to decide how to notify the server that a specific event has started.
Some event examples:
Player wrote a message: the server accepts the data sent from the client, recognizes that it's a chat message, and sends data back to all connected clients saying there is a new message. The clients recognize that a new message is incoming.
Player is casting a spell.
Player has died.
There are many more examples, but you get the main idea.
I was thinking of sending all the data in JSON format, where every message contains an identifier like:
0x01 is the message event.
0x02 is the casting-spell event.
0x03 is the player-dead event.
Once the identifier is sent, the server can recognize which event the client is asking about or informing it of, and can apply the needed logic.
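For illustration, a minimal sketch of dispatching on such an identifier. The nlohmann/json library, the "event" field name, and the commented-out handler calls are assumptions made for this sketch, not part of any established protocol:

#include <iostream>
#include <string>
#include <nlohmann/json.hpp>   // assumption: using nlohmann/json for parsing

// Event identifiers as described above.
enum EventType : int {
    ChatMessage  = 0x01,
    CastingSpell = 0x02,
    PlayerDied   = 0x03,
};

// Hypothetical dispatch: the server reads the identifier and applies
// the matching logic.
void handleClientMessage(const std::string &raw)
{
    const nlohmann::json msg = nlohmann::json::parse(raw);
    const int event = msg.at("event").get<int>();

    switch (event) {
    case ChatMessage:
        // broadcastChat(msg.at("text").get<std::string>());
        break;
    case CastingSpell:
        // handleSpell(msg.at("spellId").get<int>());
        break;
    case PlayerDied:
        // handleDeath(msg.at("playerId").get<int>());
        break;
    default:
        std::cerr << "unknown event " << event << "\n";
        break;
    }
}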
My question is: isn't there a better approach to identifying which event the server is being notified of?
I am searching for a better approach before I take this road.
You can take a look at the standard ISO 8583 message format. It's a financial message format, but every message has a processing code that determines what action should be taken for each incoming message.
I have a web proxy that starts a TCP listener socket that accepts connections from clients. The listener accepts connections via:
clientConnection, clientAddress = listenerSocket.accept()
and then a new thread handles the client connection from there.
To mock a client connection, I am using telnet to connect to the proxy and issue commands. The proxy needs to receive data from telnet and I need to make sure that I receive all of it. To achieve this, I am doing the following:
while True:
    requestBytes = clientConnection.recv(1024)
    if not requestBytes:
        break
    requestBuffer += requestBytes
The proxy then decodes the bytes and does some things with them that take a little bit of time, and then it has to send a response back to the same client. However, when using the above code, the connection with clientConnection gets closed long before I can process the bytes and respond.
Here's what I don't understand, when I use the following instead:
while True:
    requestBytes = clientConnection.recv(1024)
    requestBuffer += requestBytes
    break
It works just fine and the clientConnection remains intact. This obviously has a problem if I receive more than 1024 bytes, but the clientConnection does not get closed.
More specifically, the error occurs after I have a response to send to the client and call:
clientConnection.sendall(response)
clientConnection.shutdown(1)
clientConnection.close()
The line clientConnection.shutdown(1) throws the error:
[Errno 107] Transport endpoint is not connected
which is confusing because it was somehow still able to call sendall on the previous line. Note that I did not actually receive anything on the client side.
I am sure that the connection is not getting closed elsewhere in the code. What exactly is happening here and what is the best way to do something like recvall and keep the clientConnection open?
I have a function that gives recommendations to users. This function needs to do a lot of calculation to start, but after starting it uses the already-calculated matrix held in memory. After that, any further calculation "fills" the object in memory for continuous learning.
My intention is to use this function for website users, but the response needs to come from the same "object" in memory, and requests need to be handled sequentially because it is not thread safe.
What is the best way to get this working? My first idea was to use SignalR, so the user doesn't have to wait for the response, plus a queue to send the requests to the object. But how can SignalR receive the response for this specific request?
The entire flow is:
A user enters a page.
A JavaScript call will hit a service with the user ID and the current page.
The server will queue the ID and page.
The service will calculate the results for each request in the queue and send responses.
The server will "receive" the response and send it back to the client.
The main problem is that I don't see a way for the server to receive the response and send it back to the client once it is complete, without having to keep looping over queues.
Thanks!
If you are going to use SignalR, I would suggest using a hub method to accept these potentially long running requests from the client. By doing so it should be obvious "how the signalr can receive the response for this specific request".
You should be able to queue your calculations from inside your hub method where you will have access to the caller's connection id (via the Context.ConnectionId property).
If you can await the results of your queued operation inside of the hub method you queue from, you can then simply return the result from your hub method and SignalR will flow the result back to the calling JavaScript. You can also use Clients.Caller.... to send the result back.
If you go this route I suggest you use async/await instead of blocking request threads waiting for your long-running calculations to complete.
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server
If you can't process your calculation results from the same method you queued the calculation from, you still have options. Just be sure to queue the caller's connection id and a request id along with the calculation to be processed.
Then, you can process the results of all your calculations from outside of your hub using GlobalHost.ConnectionManager.GetHubContext:
private IHubContext _context = GlobalHost.ConnectionManager.GetHubContext<MyHub>();

// Call ProcessResults whenever results are ready to send back to the client
public void ProcessResults(string connectionId, uint requestId, MyResult result)
{
    // Presumably there's JS code mapping request ids to results
    // if you can have multiple ongoing requests per client
    _context.Clients.Client(connectionId).receiveResult(requestId, result);
}
http://www.asp.net/signalr/overview/signalr-20/hubs-api/hubs-api-guide-server#callfromoutsidehub