Performance of a Redis cluster with ElastiCache - amazon-web-services

I am running performance tests against Redis with cluster mode enabled (AWS ElastiCache, parameter group default.redis6.x.cluster.on), and monitoring it with Datadog.
For the test we built an application that just does a SET and a GET on Redis. We spawn threads that call this app, and through those threads we make around 100 calls simultaneously, so Redis also receives around 100 concurrent operations (on a key such as 'abc').
What we observed is that with only 1 to 10 threads (up to 10 simultaneous calls), every Redis call takes 1-5 milliseconds, but with 100 or more simultaneous calls, every call takes 300-500 ms.
I would like to know if there is a way to establish the expected average performance in this scenario: is 300-500 ms normal? What would be a good lead for analysing the performance of Redis?
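For illustration, a test along these lines could reproduce the measurement (a sketch assuming Python with the redis-py cluster client; the endpoint, key and thread counts are placeholders, not the actual test code):
# Sketch: measure per-call latency against a cluster-mode-enabled endpoint
# under increasing concurrency. The endpoint and key are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from redis.cluster import RedisCluster  # redis-py >= 4.x

r = RedisCluster(host="my-cluster.xxxxxx.clustercfg.euw1.cache.amazonaws.com",
                 port=6379, ssl=True)

def one_call(_):
    start = time.perf_counter()
    r.set("abc", "value")
    r.get("abc")
    return (time.perf_counter() - start) * 1000  # latency in milliseconds

for concurrency in (10, 100):
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = sorted(pool.map(one_call, range(concurrency)))
    print(concurrency, "concurrent calls: p50 =",
          round(latencies[len(latencies) // 2], 2), "ms, max =",
          round(latencies[-1], 2), "ms")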
Thank you

Related

Running background processes in Google Cloud Run

I have a lightweight server that runs cron jobs at a given time. As I understand Google Cloud Run only processes incoming requests and then becomes idle after a short time if there is no other request to process. Hence, it is not advisable to deploy that cron service to Cloud Run.
Out of curiosity, I deployed the following server that starts up and then prints a log every hour.
const express = require('express');
const app = express();
setInterval(() => console.log('ping!'), 1000 * 60 * 60);
app.listen(process.env.PORT, () => {
console.log('server listening');
})
I deployed it with a minimum and maximum instance count of 1. It has not received any requests, and when I checked back the next day it was still printing the log precisely every hour. Was this a coincidence, or can I use this setup in production?
If you set the minimum instances to 1 and "CPU always on" to true, then yes, you can perform compute-intensive background processing without CPU throttling (in your hello-world case, the few CPU % allowed to an idle instance is enough even without the CPU-always-on option).
BUT, and the but is very important, you will pay for one Cloud Run instance that is always up. In addition, if you receive requests, the service can scale up and have more than one instance running. Does it make sense to have several instances running the same cron schedule? (Unless you set the maximum instances to 1.)
In the end, the best pattern is to host the scheduling outside, on Cloud Scheduler, and have it call your service to perform the task. It's serverless, you can handle several tasks in parallel, and it's scalable.
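For example, the pattern could look roughly like this (a sketch in Python/Flask rather than the Express of the question; the route name and job body are illustrative). Cloud Scheduler is then configured to send an hourly HTTP request to the endpoint, and the service can scale to zero in between:
# Sketch of a Cloud Run service whose "cron" work is triggered by Cloud Scheduler
# instead of an in-process setInterval. The route name and logic are placeholders.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/tasks/hourly", methods=["POST"])
def hourly_task():
    # Do the scheduled work here; the instance only needs CPU while it is
    # handling this request, so no always-on instance is required.
    print("ping!")
    return ("", 204)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))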
From my understanding, no.
From the documentation here, Google indicates that the CPU of idle instances is throttled to nearly zero. I suppose this means very simple operations can still be performed (e.g. logging a string every hour). I guess you could test it more extensively by doing some more complex operations and evaluating their processing time.
Either way, I would not count on it in a production environment. There is no guarantee that a CPU "throttled to nearly zero" will be able to complete the operations you need within a reasonable time.

How can I optimise requests/second under peak load for Django, uWSGI and Kubernetes

We have an application that experiences some pretty short, sharp spikes - generally about 15-20 mins long with a peak of 150-250 requests/second, but roughly an average of 50-100 requests/second over that time. p50 response times are around 70 ms (whereas p90 is around 450 ms).
The application mostly just serves models from a database/memcached cluster, but also sometimes makes requests to third-party APIs (tracking, Stripe, etc.).
This is a Django application running under uWSGI on Kubernetes.
I'll spare you the full uwsgi/kube settings, but the TLDR:
# uwsgi
master = true
listen = 128 # Limited by Kubernetes
workers = 2 # Limited by CPU cores (2)
threads = 1
# Of course much more detail here that I can load test...but will leave it there to keep the question simple
# kube
Pods: 5-7 (horizontal autoscaling)
If we assume a 150 ms average response time, I'd roughly calculate a total capacity of 93 requests/second - somewhat short of our peak. In our logs we often see "uWSGI listen queue of socket ... full" messages, which makes sense.
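For what it's worth, the back-of-the-envelope calculation behind that 93 requests/second figure looks like this (a sketch; the numbers come straight from the settings above):
# Rough steady-state capacity: concurrent request slots / average response time.
workers_per_pod = 2      # uwsgi workers
threads_per_worker = 1   # uwsgi threads
pods = 7                 # upper end of the horizontal autoscaler
avg_response_s = 0.150   # assumed average response time

slots = workers_per_pod * threads_per_worker * pods  # 14 concurrent requests
capacity_rps = slots / avg_response_s                # ~93 requests/second
print(round(capacity_rps, 1))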
My question is...what are our options here to handle this spike? Limitations:
It seems the 128 listen queue limit is determined by the kernel, and the kube docs suggest it's unsafe to increase it.
Our Kube nodes have 2 cores. The general advice seems to be to set your number of workers to 2 * cores (possibly + 1), so we're pretty much at our limit here. Increasing to 3 doesn't seem to have much impact.
Multiple threads in Django can apparently cause weird bugs
Is our only option to keep scaling this horizontally at the kubernetes level? Aside from making our queries/caching as efficient as possible of course.

How to improve web-service api throughput?

I'm new to creating web services, so I'd like to know what I'm missing performance-wise (assuming I'm missing something).
I've built a simple Flask app. Nothing fancy; it just reads from the DB and responds with the result.
uWSGI is used for the WSGI layer. I've run multiple tests and set processes=2 and threads=5 based on performance monitoring.
processes = 2
threads = 5
enable-threads = True
AWS ALB is used as the load balancer. The uWSGI/Flask app is dockerized and launched in ECS (3 containers, 1 vCPU each).
For each DB hit, the Flask app takes 1-1.5 s to get the data. There is no other lag on the app side. I know the query can be optimised, but assuming the request processing time stays at 1-1.5 s, can the throughput be increased?
The throughput I'm seeing is ~60 requests per second. I feel it's too low. Is there any way to increase the throughput with the same infra?
Am I missing something here, or is the throughput reasonable given that the DB hit takes 1.5 s?
Note: it's synchronous.
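A similar back-of-the-envelope ceiling for a synchronous setup like this, using the figures above (a sketch; if the observed ~60 requests/second is above it, the average request presumably spends less than the full 1-1.5 s in the DB):
# Rough throughput ceiling for a synchronous WSGI setup:
# concurrent request slots / time each request holds a slot.
processes = 2          # uwsgi processes per container
threads = 5            # uwsgi threads per process
containers = 3         # ECS tasks (1 vCPU each)
request_time_s = 1.5   # assumed worst-case DB hit

slots = processes * threads * containers   # 30 concurrent requests
ceiling_rps = slots / request_time_s       # ~20 requests/second at 1.5 s per request
print(ceiling_rps)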

Counting number of requests per second generated by JMeter client

This is how the application setup goes:
2 c4.8xlarge instances
10 m4.4xlarge JMeter clients generating load, each using 70 threads.
While conducting a load test on a simple GET request (a 685-byte page), I came across the issue of reduced throughput after some time of the test run. A throughput of about 18,000 requests/sec is reached with 700 threads, remains at that level for 40 minutes and then drops. The thread count remains 700 throughout the test. I have executed tests with different load patterns, but the results have been the same.
The application response time stays considerably low throughout the test.
According to the ELB monitor, there is a reduction in the number of requests (and I suppose hence the lower throughput).
No errors are encountered during the test run. I also set a connect timeout on the HTTP request, but still no errors.
I discussed this issue with AWS support at length, and according to them I am not hitting any network limit during test execution.
Given that the number of threads remains constant during the test run, what are these threads doing? Is there a metric I can check to find out the number of requests generated (not hits/sec) by a JMeter client instance?
Testplan - http://justpaste.it/qyb0
Try adding the following Test Elements:
HTTP Cache Manager
and especially the DNS Cache Manager, as it might be that all your threads are hitting only one c4.8xlarge instance while the other one sits idle. See The DNS Cache Manager: The Right Way To Test Load Balanced Apps article for an explanation and details.
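As for counting the requests actually generated by each client, one option (a sketch, assuming each JMeter instance writes its own CSV/JTL results file with the default timeStamp column in epoch milliseconds) is to bucket the samples per second offline:
# Sketch: count samples per second from a JMeter CSV/JTL results file.
# The file name is a placeholder; column names assume the default CSV output.
import csv
from collections import Counter

per_second = Counter()
with open("client1-results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        per_second[int(row["timeStamp"]) // 1000] += 1  # epoch millis -> seconds

print("total samples:", sum(per_second.values()))
print("peak requests/second:", max(per_second.values()))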

Auto scaling batch jobs on Elastic Beanstalk

I am trying to set up scalable background image processing using Elastic Beanstalk.
My setup is the following:
Application server (running on Elastic Beanstalk) receives a file, puts it on S3 and sends a request to process it over SQS.
Worker server (also running on Elastic Beanstalk) polls the SQS queue, takes the request, loads the original image from S3, processes it into 10 different variants and stores them back on S3.
These upload events are happening at a rate of about 1-2 batches per day, 20-40 pics each batch, at unpredictable times.
Problem:
I am currently using one micro instance for the worker. Generating one variant of a picture can take anywhere from 3 seconds to 25-30 (it seems the first ones are done in 3, but then the micro instance slows down; I think this is due to its burstable, ~2-second workload design). Anyway, when I upload 30 pictures, that means the job takes 30 pics * 10 variants each * 30 seconds = 2.5 hours to process??!?!
Obviously this is unacceptable. I tried using a "small" instance instead; the performance is consistent there, but at about 5 seconds per variant it's still 30 * 10 * 5 s = 25 minutes per batch. Still not really acceptable.
What is the best way to attack this problem which will get fastest results and will be price efficient at the same time?
Solutions I can think of:
Rely on Beanstalk auto-scaling. I've tried that, setting up auto-scaling based on CPU utilization. That seems very slow to react and unreliable. I've tried setting the measurement period to 1 minute and the breach duration to 1 minute, with thresholds of 70% to scale up and 30% to scale down, in increments of 1. It takes the system a while to scale up and then a while to scale down; I can probably fine-tune it, but it still feels weird. Ideally I would like to get a faster machine than micro (small, medium?) to use for these spikes of work, but with Beanstalk that means I need to run at least one all the time, and since the system is idle most of the time that doesn't make any sense price-wise.
Abandon Beanstalk for the worker, implement my own monitor of the SQS queue running on a micro, and let it fire up a larger machine (or a group of larger machines) when there are enough pending messages in the queue, then terminate them the moment the queue is detected to be idle. That seems like a lot of work, unless there is a ready-made solution out there. In any case, I lose the Beanstalk benefits of deploying code through git, managing environments, etc.
I don't like either of these two solutions.
Is there any other nice approach I am missing?
Thanks
CPU utilization on a micro instance is probably not the best metric to use for autoscaling in this case.
Length of the SQS queue would probably be the better metric to use, and the one that makes the most natural sense.
Needless to say, if you can budget for a bigger baseline machine, everything would run that much faster.
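For example, a scale-up trigger on queue depth might be wired up roughly like this (a sketch with boto3; the queue name, threshold and scaling-policy ARN are placeholders, and a matching scale-down alarm would be defined the same way):
# Sketch: CloudWatch alarm on SQS backlog that fires an existing scale-up policy.
# Queue name, threshold and policy ARN below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")
cloudwatch.put_metric_alarm(
    AlarmName="image-worker-backlog-high",
    Namespace="AWS/SQS",
    MetricName="ApproximateNumberOfMessagesVisible",
    Dimensions=[{"Name": "QueueName", "Value": "image-processing-queue"}],
    Statistic="Sum",
    Period=60,
    EvaluationPeriods=1,
    Threshold=10,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:autoscaling:...:scalingPolicy:...:policyName/scale-up"],
)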