I'm using Redisson for caching. Is there any way to make Redisson fail silently if my Redis instance is offline?
Since I'm using Redis only for caching, it would be fine to ignore failures for a while and just log them.
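I don't know of a single Redisson switch for this, but the usual pattern is to wrap every cache access in a guard that logs connection failures and falls back to a default instead of propagating the error. A minimal sketch of that pattern, written in Python for brevity (the `client` object and the broad `except` are illustrative; with redis-py you would narrow the catch to `redis.exceptions.ConnectionError`, and in Java you would catch Redisson's connection exceptions the same way):

```python
import logging

logger = logging.getLogger("cache")

def cache_get(client, key, default=None):
    """Read from the cache, but fall back to `default` and log a warning
    if the cache backend is unreachable (fail silent)."""
    try:
        return client.get(key)
    except Exception as exc:  # illustrative; narrow to the client's connection error type
        logger.warning("cache unavailable, falling back: %s", exc)
        return default
```

The application then treats a cache miss and a cache outage identically, which is exactly the behaviour you want when Redis is purely a cache.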
I have a working Django application that is running locally using an sqlite3 database without problem. However, when I change the Django database settings to use my external AWS RDS database all my pages start taking upwards of 40 seconds to load. I have checked my AWS metrics and my instance is not even close to being fully utilized. When I make a request to a view with no database read/write operations I also get the same problem. My activity monitor shows my local CPU spiking with each request. It shows a process named 'WindowsServer' using most of the CPU during each request.
I am aware that more latency is expected when using a remote database, but I don't think it should result in 40-second page lags. What other problems could be causing this behaviour?
AWS database monitoring
Local machine
Your computer is connecting to a server in Amazon over the public internet, and that is where the latency comes from. Production application servers should be located in the same place as the database servers (or at least have a very good connection, so that latency is as low as possible).
--edit--
We need more details. Who is your ISP? What are your connection properties: uplink, downlink? What are your ping times to the AWS servers?
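One Django-specific thing worth checking: by default, Django opens a new database connection on every request, and over a WAN that handshake alone adds noticeable latency on each page load (as do N+1 query patterns, which multiply the round-trip cost). Setting `CONN_MAX_AGE` enables persistent connections. A sketch, assuming PostgreSQL on RDS (the host and credentials shown are placeholders):

```python
# settings.py -- keep database connections open between requests.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "HOST": "mydb.example.rds.amazonaws.com",  # placeholder RDS endpoint
        "NAME": "mydb",
        "USER": "myuser",
        "PASSWORD": "changeme",
        "CONN_MAX_AGE": 600,  # reuse each connection for up to 10 minutes
    }
}
```

This will not explain a 40-second lag by itself, but it removes one per-request round trip and makes the remaining latency easier to measure.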
I am using Redis for both caching and queuing in a Flask application. However, I observe that my cache object also contains worker-job-related entries, which break my cache-fetching code.
Is there any way I can use Redis to serve both purposes together?
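A common approach is to keep the two uses apart, either with separate logical databases (e.g. `redis.Redis(db=0)` for the cache and `db=1` for the queue in redis-py) or with distinct key prefixes so that cache code never touches queue entries. A sketch of the prefix convention (the prefix names are illustrative, not from any specific library):

```python
# Illustrative key-namespacing helpers; the prefixes are arbitrary conventions.
CACHE_PREFIX = "cache:"
QUEUE_PREFIX = "queue:"

def cache_key(name: str) -> str:
    """Build a namespaced cache key, e.g. 'cache:user:42'."""
    return CACHE_PREFIX + name

def queue_key(name: str) -> str:
    """Build a namespaced queue key, e.g. 'queue:job:7'."""
    return QUEUE_PREFIX + name

def is_cache_key(key: str) -> bool:
    """Filter for SCAN results so cache code skips worker-job entries."""
    return key.startswith(CACHE_PREFIX)
```

Using two separate clients with different `db` numbers isolates the keyspaces completely, which is simpler if your queue library lets you configure the database it connects to.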
I downloaded cxf-client-1.5.4 from the Maven repository; it does not contain any configuration for fail-over features such as a retry strategy. However, during heavy load and slowness in production, I observed in the logs what appears to be retrying, and I suspect retries are happening.
Is there a retry strategy configured in the CXF plug-in for Grails? If so, how can I stop it?
Stress testing and applying different measures in the application confirmed that there is no retry strategy in Grails cxf-client-1.5.4.
I'm using kube-aws to set up a production Kubernetes cluster on AWS. Now I'm running into an issue I am unable to recreate in my dev environment.
When a pod is running heavy computation, which in my case happens in bursts, the liveness and readiness probes stop responding. In my local environment, where I run Docker Compose, they work as expected.
The probes are simple handlers that return HTTP 204 No Content, implemented with Go's standard library.
Has anyone seen this before?
EDIT:
I'd be happy to provide more information, but I am uncertain what I should provide as there is a tremendous amount of configuration and code involved. Primarily, I'm looking for help to troubleshoot and where to look to try to locate the actual issue.
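One direction to investigate: if the container has a CPU limit set, a computation burst can exhaust its CFS quota and get the whole container throttled, which starves the probe handler and makes Kubernetes mark the pod unhealthy even though it is merely busy. That would not reproduce under Docker Compose, where no such limit applies. A sketch of probe and resource settings that tolerate bursts (the path, port, and values are illustrative):

```yaml
livenessProbe:
  httpGet:
    path: /healthz      # illustrative path
    port: 8080
  periodSeconds: 15
  timeoutSeconds: 5     # default is 1s, which is very tight under CPU bursts
  failureThreshold: 3
resources:
  requests:
    cpu: "500m"
  limits:
    cpu: "2"            # or omit the CPU limit entirely to avoid CFS throttling
```

Comparing `kubectl describe pod` output during a burst (look for probe timeout events) against your CPU limit is a quick way to confirm or rule this out.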
We are in the process of migrating from Heroku to AWS, and I am noticing the Sidekiq stats resetting for no apparent reason.
This is happening in several different applications that are connected to the same Redis instance, each with its own namespace set in initializers/sidekiq.rb.
The stats reset across all of the Sidekiq counters at the same time. It seems like perhaps we are momentarily dropping the Redis connection, but that is just wild conjecture and at any rate I'm not sure how to mitigate it.
Is this a common problem? Is there a setting I can tweak?
Someone is running the FLUSHDB or FLUSHALL command and clearing out the data in Redis. Perhaps one of the apps does it when it starts.
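To confirm this, you can watch the commands arriving at the server with `redis-cli MONITOR` while the reset happens. To prevent it, Redis lets you rename (or disable, by renaming to an empty string) dangerous commands in the server configuration:

```
# redis.conf -- disable destructive commands so no client can wipe the instance
rename-command FLUSHALL ""
rename-command FLUSHDB ""
```

After this change, any app that tries to flush on boot will get an "unknown command" error, which also pinpoints the culprit in its logs.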