Shiny Websocket Idle Timeout

I'm running a shiny app using the command:
shiny::runApp("./app", 8080, FALSE, "0.0.0.0")
The app works fine, but when it runs behind a load balancer the websocket connection gets dropped after 60 seconds because neither the client (web browser) nor the server sends any data. This is due to an unchangeable timeout in the load balancer. Is there some way to get Shiny to send a heartbeat message over the websocket every n seconds? Does anyone have an idea how to solve this?
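For illustration of the heartbeat pattern itself, independent of Shiny, here is a minimal sketch using Python's third-party websockets package (an assumption for demonstration; a Shiny app would need the equivalent in R or in injected JavaScript). The 30-second interval is chosen to stay under the 60-second balancer limit:

import asyncio
import websockets  # third-party: pip install websockets

async def handler(ws):  # single-argument handler style (websockets >= 11 assumed)
    async def heartbeat():
        while True:
            await asyncio.sleep(30)     # stay under the 60 s idle limit
            await ws.send("heartbeat")  # any traffic resets the balancer's timer
    hb = asyncio.create_task(heartbeat())
    try:
        async for message in ws:
            pass  # normal application traffic would be handled here
    finally:
        hb.cancel()

async def main():
    async with websockets.serve(handler, "0.0.0.0", 8080):
        await asyncio.Future()  # run forever

asyncio.run(main())

Note that the websockets library also sends protocol-level pings every 20 seconds by default (ping_interval), which some balancers already count as traffic; the explicit application-level message above is the belt-and-braces version of the same idea.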

Related

Idle webservices connection disconnected after 5 minutes

I have a C# application that connects to a JAX-WS webservices endpoint hosted on an embedded tomcat server.
Something is disconnecting the client after exactly 5 minutes of inactivity and I cannot for the life of me work out what's doing it.
I've set server.tomcat.connection-timeout=-1 and server.tomcat.keepAliveTimeout=-1.
session-timeout in web.xml is set to 30 (minutes) and I've checked that the same value is being passed to HttpSession#setMaxInactiveInterval(int).
Nothing is written to the server logs even at TRACE level when a client connection is closed.
I can remotely debug the application server, but I'm not sure where to start digging.
After getting the client and server running locally I found the culprit: the keep-alive connection wasn't being closed by the Tomcat server, it was being closed by a reverse proxy.
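If the reverse proxy happens to be nginx (an assumption; the question doesn't name the proxy), the idle window is governed by directives like these, with illustrative values chosen to exceed the 5-minute cutoff:

# nginx sketch: let idle proxied connections survive longer than 5 minutes
location /api/ {
    proxy_pass http://backend;    # hypothetical upstream name
    proxy_read_timeout 600s;      # how long nginx waits for upstream traffic
    proxy_send_timeout 600s;
    keepalive_timeout  600s;      # idle client keep-alive connections
}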

Best way to run TCP server alongside Django to gather IoT data

I have a Django app running on Elastic Beanstalk in AWS. Within my Django application I'd like to gather IoT data coming in via TCP/IP. Currently, I open the socket and switch it to listening inside a view function. This leads to problems: the socket closes or stops listening. Furthermore, the socket needs to listen on the port continuously even though data does not arrive continuously.
What is a more elegant way to solve this? Is there a Django extension that moves the socket and its listening loop out of a view and into a background task? E.g., check the port every 60 seconds and create an object when data comes in?
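One common pattern (a sketch, not Elastic Beanstalk-specific): run the listener as a custom Django management command in its own long-lived process, e.g. an EB worker or a supervisord-managed process, and write rows through the ORM. The app and model names below (myapp, Reading) are hypothetical:

import socketserver
from django.core.management.base import BaseCommand
from myapp.models import Reading   # hypothetical model storing raw payloads

class IoTHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Assumes one newline-terminated reading per line; adjust to your framing.
        for line in self.rfile:
            Reading.objects.create(payload=line.decode(errors="replace").strip())

class Command(BaseCommand):
    help = "Listen for IoT data on TCP port 9000 (illustrative port)"

    def handle(self, *args, **options):
        with socketserver.ThreadingTCPServer(("0.0.0.0", 9000), IoTHandler) as srv:
            srv.serve_forever()   # blocks forever; run it outside the web tier

Started with python manage.py listen_iot (assuming the file lives at myapp/management/commands/listen_iot.py), this keeps the socket open permanently without tying it to a request/response cycle.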

Why does my boost async TCP server's connection accept handler stop working after some time?

I have created a simple TCP server which accepts connections and requests and does some processing. I referred to the following example and it is working fine. I send data to the connected client continuously and it is printed on the client side. However, after a period of around 20-25 minutes, the client stops receiving any data. Also, after such an incident the server still shows as running, but when I connect my client to the server again, the server's connection accept handler doesn't get invoked, even though I am able to telnet to the server's port and the client is able to connect. Any idea what the problem might be?

Redis PUBSUB connection issue after idle period

I am using nelikelov/redisclient version 0.5.0 with code that is the same as the PUBSUB example provided in the library. My application subscribes to a channel and receives messages.
What I am facing is that every Monday the application is unable to receive messages from Redis.
Is there any timeout that I should handle in case the connection remains idle over the weekend? Should I configure something extra in my application or in Redis to avoid this?
I'm not familiar with the client you're using, but Redis itself doesn't close idle connections (PubSub or not) by default and keeps them alive. You can verify that your Redis server is configured to maintain idle connections and keep them alive by examining the values of the timeout and tcp-keepalive directives (0 and 300 by default, respectively).
Other than the above and given the periodical aspect of the disconnects, I'd investigate the network settings of the client application server.
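To make that check concrete, here is a sketch using redis-py (an assumption; the asker's C++ client would issue the same CONFIG GET commands directly):

import redis

r = redis.Redis(host="localhost", port=6379)
print(r.config_get("timeout"))         # {'timeout': '0'} -> idle clients are never dropped
print(r.config_get("tcp-keepalive"))   # {'tcp-keepalive': '300'} is the default

# redis-py can also ping the connection during idle periods, which guards
# against middleboxes that silently drop long-idle TCP sessions:
quiet = redis.Redis(host="localhost", port=6379, health_check_interval=30)
p = quiet.pubsub()
p.subscribe("my-channel")              # hypothetical channel name
for message in p.listen():
    print(message)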

Django on Apache - Prevent 504 Gateway Timeout

I have a Django server running on Apache via mod_wsgi. I have a massive background task, triggered via an API call, that searches emails and generally takes a few hours to complete.
In order to facilitate debugging, since exceptions and everything else happen in the background, I created an API call that runs the task blocking. So the browser actually blocks for those hours and receives the results.
On localhost this is fine. However, in the real Apache environment, after about 30 minutes I get a 504 Gateway Timeout error.
How do I change the settings so that Apache allows - just in this debug phase - for the HTTP request to block for a few hours without returning a 504 Gateway Timeout?
I'm assuming this can be changed in the Apache configuration.
You should not be running long-running tasks within Apache processes, nor even waiting for them. Use a background task queueing system such as Celery to run them. Have the web request return as soon as the job is queued, and implement some sort of polling mechanism as necessary to see whether the job is complete and the results can be obtained.
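A minimal sketch of that queue-and-poll pattern, assuming Celery with a Redis broker (the names tasks and search_emails and the broker URLs are illustrative):

from celery import Celery

app = Celery("tasks",
             broker="redis://localhost:6379/0",
             backend="redis://localhost:6379/1")

@app.task
def search_emails(query):
    # The hours-long work runs in a Celery worker, not in an Apache process.
    ...
    return {"matches": 0}   # placeholder result shape

# In the Django view: enqueue and return immediately, then let the client poll.
#     result = search_emails.delay(query)   # returns an AsyncResult at once
#     app.AsyncResult(result.id).ready()    # polled later by a status endpoint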
Also, are you sure the 504 isn't coming from some front-end proxy (explicit or transparent) or load balancer? There is no default timeout in Apache that is 30 minutes.
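That said, if the debug-only window really does need to be widened at the Apache layer, the relevant knobs would look roughly like this (values and the daemon-process name myapp are illustrative, and whatever proxy sits in front has its own timeout too):

# Debug only: allow very long-running requests (values in seconds).
Timeout 14400                  # core Apache I/O timeout
ProxyTimeout 14400             # only relevant if mod_proxy fronts the app
WSGIDaemonProcess myapp socket-timeout=14400 request-timeout=14400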