What is the difference between different idle timeout settings in Shiny Server?

In the shiny-server configuration:

app_idle_timeout: Defines the amount of time (in seconds) an R process with no active connections should remain open. After the last connection disconnects from an R process, this timer will start and, after the specified number of seconds, if no new connections have been created, the R process will be killed.

http_keepalive_timeout: Defines how long a keepalive connection will sit between HTTP requests/responses before it is closed.

What is the difference between the two?
If the user doesn't move the mouse, or press any key, but the app is working in the background, is it considered an HTTP request?
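For context, here is a sketch of where the two directives could sit in /etc/shiny-server/shiny-server.conf; the values are illustrative (not the defaults) and the exact placement is my assumption:

    # Close keepalive HTTP connections that have been idle for 45 seconds.
    http_keepalive_timeout 45;

    server {
      listen 3838;

      location / {
        site_dir /srv/shiny-server;
        log_dir /var/log/shiny-server;

        # Kill an idle R process 600 seconds after its last connection closes.
        app_idle_timeout 600;
      }
    }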

Related

Concept feedback for a multi-client server in C++

I am currently writing a server application that should instruct multiple clients.
I am unsure about the concept I have designed and would like to receive feedback on it.
There are several identical clients that record and process sensor data. The results are also sent to the server, so that the server can react if necessary and send new parameters to the client.
The client should keep working after the connection has been lost and try to reconnect in the meantime. Data produced while there is no connection does not have to be transmitted later.
My concept is as follows:
The client logs on to the server.
The client requests an initialization -> server ok.
The client requests parameter A -> server sends parameter
The client requests parameter B -> server sends parameter
...
The client requests parameter Z -> server sends parameter
The client sends initialization finished -> server says ok
Endless loop:
    Server queries measured value X -> client sends measured value
    Server sends parameter Y -> client says ok.
So first the client is the master and asks for the initialization parameters it needs, then the server and client swap roles and the server becomes the master.
Should the connection break, the client should reconnect to the server, but it would then start with the step
The client sends initialization finished -> server says ok
so that the initialization is skipped.
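To make the state-machine idea concrete, here is a minimal C++ sketch of the client-side states implied by the sequence above (the names are invented for illustration); on a reconnect the client would enter at InitFinished instead of RequestInit, so the parameter download is skipped:

    // Sketch only: state names are invented for illustration.
    enum class ClientState {
        Connect,            // log on to the server
        RequestInit,        // request initialization -> wait for the server's ok
        RequestParameters,  // request parameters A..Z one by one
        InitFinished,       // report "initialization finished" -> wait for ok
        Operate             // endless loop: answer measured-value queries,
                            // accept new parameters from the server
    };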
Requesting a parameter works as follows:
Infinite loop:
    Send(command)
    Timeout = 1 second
    Receive
    if (!Timeout)
        break
So I send the command and wait a little; if no answer comes, I send the command again. This is shown here in abbreviated form. I wrote it in C++ and I use several state machines. The state machines naturally also catch errors when the connection is interrupted and jump back to the initialization state...
Since this is a multi-client application, I find it a little difficult; it already works with a single client. I have a Client class in which a state machine and a socket are stored, and each instance runs in a separate thread.
My problem now is: if the connection is lost, how can I reattach a new connection (from an old client) to its existing instance (state machine)? I would do this via some ID comparison, so the client sends its ID first of all (maybe also its MAC address?).
I currently keep the connections to all clients open at all times. Is that state of the art? Or should you send a command, wait for the answer, close the connection again, and then reconnect when necessary?
Many Thanks
Once the TCP connection is established, each side can send data.
You just have to get the write -> read sequence correct.
This can easily be implemented using non-blocking socket I/O.
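For illustration, a minimal POSIX-socket sketch of the send / wait-one-second / retry idea from the question, using select() for the receive timeout (the helper name and buffer size are arbitrary and error handling is trimmed):

    #include <string>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    // Hypothetical helper: send `command`, then wait up to one second for a reply.
    // Returns false on timeout or error so the caller can resend the command.
    bool sendAndAwaitReply(int sock, const std::string& command, std::string& reply) {
        send(sock, command.data(), command.size(), 0);

        fd_set readfds;
        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);
        timeval timeout{1, 0};  // 1 second, 0 microseconds

        int ready = select(sock + 1, &readfds, nullptr, nullptr, &timeout);
        if (ready <= 0)         // 0 = timeout, -1 = error
            return false;

        char buf[1024];
        ssize_t n = recv(sock, buf, sizeof(buf), 0);
        if (n <= 0)             // 0 = peer closed, -1 = error
            return false;
        reply.assign(buf, static_cast<size_t>(n));
        return true;
    }

    // Usage, mirroring the pseudocode above:
    //   std::string reply;
    //   while (!sendAndAwaitReply(sock, "REQUEST PARAM A\n", reply)) { /* resend */ }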
"My problem now is, if the connection is lost, how can I establish a
new connection (from an old client) to its instance (state machine). i
would do this over some id comparison. so that the client sends his id
first of all. (maybe also mac address ???)"
One solution is that each client has a UUID. The client must tell the server its ID every time it connects, and the server can keep a map of UUID vs. client socket connection.
If a client is lost, the server can delete the mapping. Both server and client can detect a lost connection, so that should not be a problem.
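A rough sketch of how that mapping could look on the server side (class and member names are assumptions); here the socket entry is dropped on disconnect but the per-client state is kept, so a reconnecting client with a known UUID resumes with its existing state machine:

    #include <mutex>
    #include <string>
    #include <unordered_map>

    struct ClientSession {       // per-client state machine plus the current socket
        int socket = -1;         // -1 while the client is disconnected
        // StateMachine state;   // whatever per-client state the application keeps
    };

    class SessionRegistry {
    public:
        // Called after a client announces its UUID on (re)connect: reuse the
        // existing session if the UUID is known, otherwise create a new one.
        ClientSession& attach(const std::string& uuid, int socket) {
            std::lock_guard<std::mutex> lock(mutex_);
            ClientSession& session = sessions_[uuid];  // inserts if missing
            session.socket = socket;                   // replaces a dead socket
            return session;
        }

        // Called when the server detects a lost connection; the session itself
        // is kept so the client can resume after reconnecting.
        void detach(const std::string& uuid) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = sessions_.find(uuid);
            if (it != sessions_.end())
                it->second.socket = -1;
        }

    private:
        std::mutex mutex_;
        std::unordered_map<std::string, ClientSession> sessions_;
    };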

Detect half-open websockets with PING/PONG

I'm using Jetty 9.2.24 as a WebSocket server. I want to detect half-open connections so that no further messages are sent over such a connection and they are buffered instead.
I know PING/PONG frames are used for this, so I tried sending PINGs periodically and set a low maxIdleTimeout. I modified my client to NOT return a PONG, to see whether Jetty would regard this as a failed connection, since the RFC 6455 spec dictates that the remote endpoint MUST respond with a PONG. Apparently Jetty does not detect missing PONGs, or I am doing something wrong.
What is the best way to proceed? Should I implement the PING/PONG timeout myself by explicitly receiving all PONG messages and detecting a timeout? I would think this would be the responsibility of the underlying WebSocket framework.
Note that Jetty 9.2.x is EOL (End of Life); you should consider upgrading.
Setting Max Idle Timeout and then causing the connection to not be idle by sending ping/pong isn't ideal.
The spec says that when you receive a PING you must send a PONG, and Jetty indeed does that.
It does not say that receiving a PONG, not receiving a PONG, or receiving an unsolicited PONG carries any particular meaning or triggers any behavior, the way you think it should.
Jetty 9.4 websocket will only keep a half-open connection open long enough to complete the current message (no matter how many frames it takes) then respond to the CLOSE frame it received (that caused the half-open connection). So half-open is only for the duration of the active message, then CLOSED. If no message is active, then the CLOSE happens immediately.
On Jetty 9.4 you can also add a WebSocketFrameListener and respond accordingly based on the frames received (e.g. make the server end the conversation immediately, either via a CLOSE frame or a harsh disconnect).

Redis PUBSUB connection issue after idle period

I am using nelikelov/redisclient version 0.5.0, and my code is the same as the PUBSUB example provided with the library. My application subscribes to a channel and receives messages.
What I am facing is that every Monday the application is unable to receive messages from Redis.
Is there any timeout that I should handle in case the connection remains idle over the weekend? Should I configure something extra in my application or in Redis to avoid this?
I'm not familiar with the client you're using, but Redis itself doesn't close idle connections (PubSub or not) by default and keeps them alive. You can verify that your Redis server is configured to maintain idle connections and keep them alive by examining the values of the timeout and tcp-keepalive directives (0 and 300 by default, respectively).
Other than the above and given the periodical aspect of the disconnects, I'd investigate the network settings of the client application server.
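For reference, those are the stock settings in redis.conf, so checking them on the server in question is a quick sanity test:

    # redis.conf defaults relevant here:
    # a timeout of 0 means idle client connections are never closed
    timeout 0
    # send TCP keepalive probes after 300 seconds of inactivity
    tcp-keepalive 300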

Are there any known Linux inetd/"wait"-capable web servers with graceful idle shutdown?

I would like to start a web server on-demand as an inetd "tcp/wait" service which shuts itself down after a programmable period of inactivity.
Many web servers already support inetd "tcp/nowait" mode, but this mode has the disadvantage that a new process needs to be forked for every new connection. It is therefore slower and more resource-intensive than running a dedicated server daemon.
A web server supporting inetd's "tcp/wait" would only be launched by inetd for the first request, then serve any number of requests using the same server instance until no requests occurred for some period of idle time, in which case the server instance automatically terminates and lets inetd start it again once the next period of activity starts.
Such a tcp/wait inetd web server should have approximately the same efficiency as a dedicated (i.e. permanently running) web server during times of activity. However, it will automatically shut down in times of inactivity, saving system resources.
Irregular, "anti-demand"-driven shutdowns will also clean up any memory leaks in the web server and possibly in associated FastCGI services (which would terminate together with the web server).
I know that it is already possible to use systemd's socket activation in combination with lighttpd's -i option to implement what I want.
However, I want a solution that also works without systemd, depending on nothing more than a running Internet superserver, no matter how the latter has been started (inetd/xinetd started by sysvinit, runit, manually, or systemd's socket activation replacing inetd/xinetd).
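For illustration, this is roughly how such a wait-mode service would be declared under xinetd; the service name, port, user, and server path are placeholders, and the launched server receives the already-listening socket, so it must accept() connections itself and exit on its own idle timeout:

    # /etc/xinetd.d/ondemand-web (sketch; names, port and path are placeholders)
    # "wait = yes" makes this a tcp/wait service: xinetd hands over the listening
    # socket instead of forking one server process per connection.
    service ondemand-web
    {
        type        = UNLISTED
        port        = 8080
        socket_type = stream
        protocol    = tcp
        wait        = yes
        user        = www-data
        server      = /usr/local/bin/ondemand-webserver
    }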

How to tell libcurl to close idle connections after a period of time?

Is there a way to tell curl to close connections once they become idle for a certain period of time?
Idle connections remain in the connection cache until:
    the connection is reused,
    the connection is killed because the cache needs the space, or
    the cache is shut down and killed.
libcurl only reuses a connection for the first N seconds after it was put into the pool. The default is 118 seconds, and applications can change that limit with CURLOPT_MAXAGE_CONN.
Connections older than the configured maximum age might remain in the pool longer, because the age is only checked on certain occasions, but they will never be considered for reuse once they are too old.
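A minimal sketch of setting that option (this assumes libcurl 7.65.0 or newer, where CURLOPT_MAXAGE_CONN was added; the URL and the 30-second value are arbitrary):

    #include <curl/curl.h>

    int main() {
        curl_global_init(CURL_GLOBAL_DEFAULT);
        CURL* curl = curl_easy_init();
        if (curl) {
            curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/");
            // Refuse to reuse pooled connections older than 30 seconds
            // (the default is 118 seconds).
            curl_easy_setopt(curl, CURLOPT_MAXAGE_CONN, 30L);
            curl_easy_perform(curl);  // first request populates the connection pool
            curl_easy_perform(curl);  // reuses the connection only while it is young enough
            curl_easy_cleanup(curl);
        }
        curl_global_cleanup();
        return 0;
    }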