Performance issues with Weave networking on a Kubernetes cluster - Django

I created a Kubernetes (v1.6.1) cluster on AWS with one master and two slave nodes, then spun up a MySQL instance using Helm and deployed a simple Django web app that queries the latest five rows from the database and displays them. For my web service I specify 'type: LoadBalancer', which creates an ELB on AWS.
If I use 'weave' networking and scale my web app to at least two replicas, I begin experiencing inconsistent response times - most of the time they are reasonable (around 0.1-0.2 s), but 20-40% of requests take significantly longer (3-5 s, sometimes even more than 15 s). However, if I switch to 'flannel' networking, everything works fast, even with 20-30 replicas of the web app. All machines have enough resources, so that's not the problem.
I tried debugging to find out what's causing the delay, and the best explanation I have is that AWS ELB doesn't work well with 'weave'. Has anyone experienced similar issues? What could be the problem? Please let me know if I should provide some relevant information.
P.S. I'm new to using Kubernetes.
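For reference, the view behind the web service is essentially the following minimal sketch (the model and field names are placeholders, not the exact code):

    # views.py - hypothetical model/field names; the real app differs only in details
    from django.http import JsonResponse
    from .models import Entry  # assumed model with a `created_at` timestamp field

    def latest_entries(request):
        # Fetch the latest five rows and return them as JSON
        rows = Entry.objects.order_by('-created_at').values('id', 'text')[:5]
        return JsonResponse({'entries': list(rows)})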

Related

Optimizing latency between application server (EC2) and RDS

Here's how the story goes.
We started transforming a monolithic, single-machine e-commerce application (Apache/PHP) to cloud infrastructure. Obviously, the application and the database (MySQL) were on the same machine.
We decided to move to AWS. As the first step of the transformation, we decided to split the database and the application, hosting the application on a c4.xlarge machine and the database on RDS Aurora MySQL on a db.r5.large instance, with default options.
This setup performed well; database performance in particular improved considerably.
Unfortunately, when traffic spiked, we started experiencing long response times. It looked like RDS, although really fast at executing queries, wasn't returning results over the network to the EC2 machine fast enough.
That was our conclusion after an in-depth analysis of the setup, including Apache/MySQL/PHP tuning parameters. The delayed response time was definitely due to the network latency between the EC2 and RDS/Aurora machines, both of which are in the same region.
Before adding additional resources (e.g., ElastiCache), we'd first like to look into any default configuration we can play around with to solve this problem.
What do you think we missed there?
One of the biggest strengths of the cloud is scalability, and you should always design your application to utilise it. It sounds like your RDS instance is getting choked by the number of requests rather than by the processing time of the queries. So rather go with several small instances behind load balancing than one big one doing all the work. With load balancing you also get away from a single point of failure, because you can have replicas of your database, and they can even be placed in different AZs.
Here is a blog post you can read on the topic:
https://aws.amazon.com/blogs/database/scaling-your-amazon-rds-instance-vertically-and-horizontally/
Good luck on your AWS journey.
The best answer to your question is to use read replicas, but remember that only read requests can be sent to read replicas, so you would need to design your application that way (see the sketch below).
Also, for some cost savings, you could try Aurora Serverless.
One more option is to pass traffic between EC2 and RDS over a private network rather than over the public internet; connecting EC2 to RDS over the public internet is one of the mistakes that might be happening here.
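The question's app is PHP, but to illustrate the read/write split in Django terms (matching the framework used elsewhere on this page), here is a rough sketch of a database router; the endpoints, credentials, and module path are placeholders:

    # settings.py (excerpt) - hostnames and credentials below are placeholders
    DATABASES = {
        'default': {                       # writer endpoint: all writes go here
            'ENGINE': 'django.db.backends.mysql',
            'HOST': 'mycluster.cluster-xxxx.us-east-1.rds.amazonaws.com',
            'NAME': 'shop', 'USER': 'app', 'PASSWORD': 'secret',
        },
        'replica': {                       # reader endpoint: serves read traffic
            'ENGINE': 'django.db.backends.mysql',
            'HOST': 'mycluster.cluster-ro-xxxx.us-east-1.rds.amazonaws.com',
            'NAME': 'shop', 'USER': 'app', 'PASSWORD': 'secret',
        },
    }
    DATABASE_ROUTERS = ['myproject.routers.ReadReplicaRouter']

    # routers.py
    class ReadReplicaRouter:
        """Send reads to the replica alias and writes to the primary."""
        def db_for_read(self, model, **hints):
            return 'replica'
        def db_for_write(self, model, **hints):
            return 'default'
        def allow_relation(self, obj1, obj2, **hints):
            return True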

Conceptual question regarding scaling REST API

I have a general question regarding how APIs are scaled. I have a basic RESTful API powered by Django REST framework; the backend uses RDS for database management. Right now I'm deploying my Django application to a DigitalOcean droplet, but I'm thinking of switching over to EC2 or potentially EKS. Is my understanding correct that I can effectively point my application to the RDS endpoint and spin up several EC2 instances with the same Django application fronted by an ELB? Would this take care of incoming traffic and the scalability of the Django application?
This isn't exactly a coding question so I'm not sure if this is the best stackexchange site to ask this.
My two cents here:
I've been using Lambda to serve Django and Flask APIs for quite some time now, and it works great. You don't need to worry about scalability at all, unless there is a chance that your API would receive more than 10,000 requests per second (very unlikely in most scenarios). It will be way cheaper than EKS, even cheaper than EC2. I have an app with 400k active users served by an API running on Lambda, and I've never paid more than $25 for invocations.
You can use Zappa (which is exclusively for Python; I recommend it) or the Serverless Framework; they will take care of most of the heavy lifting and make deployment very easy.
But keep in mind that Lambda is not very good for long-running tasks, like cron jobs. If you have crons that take a while to execute, your invocations can get a little expensive if you run them often (Lambdas can run for up to 15 minutes, but those 15 minutes will be much more expensive than EC2). Also, the API Gateway in front of the Lambda function has a 30-second timeout, so your requests must be processed before that. If you think your requests will take longer, you will need to handle them asynchronously (see the sketch below). I think it is a very small price to pay for a full service without having to worry about the infrastructure.
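To make the asynchronous part concrete, here is a rough sketch using Zappa's task decorator as I understand it (the import path has changed across Zappa versions, so verify it against the version you deploy; the function names are placeholders):

    # views.py - offload slow work so the HTTP response stays under the 30 s limit
    from django.http import JsonResponse
    from zappa.asynchronous import task  # older Zappa releases exposed this as zappa.async

    @task
    def generate_report(report_id):
        # Runs in its own Lambda invocation, up to the 15-minute Lambda limit
        ...

    def start_report(request, report_id):
        generate_report(report_id)  # returns immediately; the work happens in a separate invocation
        return JsonResponse({'status': 'queued', 'report_id': report_id}, status=202)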
You are right, but don't think only about EC2 and EKS; you can also look into ECS and Fargate. An ELB distributes traffic across the compute resources inside a target group, and for EC2 that can be an Auto Scaling group. Also, with RDS you can scale out read replicas to handle more read traffic independently of the master node.
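To make the "point the application at the RDS endpoint" part concrete, here is a minimal Django settings sketch under the assumption that every EC2 instance runs the same code behind the ELB (hostnames, credentials, and environment variable names are placeholders):

    # settings.py (excerpt) - identical on every instance behind the ELB
    import os

    DATABASES = {
        'default': {
            'ENGINE': 'django.db.backends.postgresql',
            'HOST': os.environ.get('RDS_HOSTNAME', 'mydb.xxxx.us-east-1.rds.amazonaws.com'),
            'PORT': os.environ.get('RDS_PORT', '5432'),
            'NAME': os.environ.get('RDS_DB_NAME', 'app'),
            'USER': os.environ.get('RDS_USERNAME', 'app'),
            'PASSWORD': os.environ.get('RDS_PASSWORD', ''),
            'CONN_MAX_AGE': 60,  # keep DB connections open between requests
        }
    }

    # The domain that points at the ELB (or the ELB DNS name itself) must be allowed
    ALLOWED_HOSTS = ['api.example.com', '.elb.amazonaws.com']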

AWS EC2 Immediate Scaling Up?

I have a web service running on several EC2 boxes. Based on the Cloudwatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?
Not by using Auto Scaling, at least not instantly in the way you describe. You could make it much faster, however, by baking your own modified AMI that already contains the JAR and the latest OS patches. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
Alternatively, you can manage it yourself by keeping servers switched off and starting them with custom Lambda functions triggered by CloudWatch alarms (a rough sketch follows below). But since stopped servers aren't exactly free either, I would recommend against that for cost reasons.
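A rough sketch of what such a Lambda could look like, assuming the standby instances carry a tag the function can filter on (the tag name and the alarm wiring are assumptions):

    # lambda_function.py - invoked by a CloudWatch alarm action (e.g. via SNS)
    import boto3

    ec2 = boto3.client('ec2')

    def lambda_handler(event, context):
        # Find stopped instances tagged as warm standby for this service
        reservations = ec2.describe_instances(
            Filters=[
                {'Name': 'tag:Role', 'Values': ['warm-standby']},
                {'Name': 'instance-state-name', 'Values': ['stopped']},
            ]
        )['Reservations']
        instance_ids = [i['InstanceId'] for r in reservations for i in r['Instances']]

        if instance_ids:
            ec2.start_instances(InstanceIds=instance_ids)  # boot the standby servers
        return {'started': instance_ids}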
Before you venture into the journey of auto scaling your infrastructure and spend the time and effort, perhaps you should do a little bit of analysis of the traffic pattern day over day, week over week and month over month and see whether it's even necessary. Try answering some of these questions.
What was the highest traffic your app has ever handled? How did the servers fare under that traffic? How was the user response time?
When does your traffic ramp up or hit its peak? Some apps get traffic during business hours while others get it in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts are averaging 20% CPU. If the requests triple to 3k requests/min, do you see around 60-70% average CPU? That is a good indication that your app's usage is fairly predictable and can scale linearly by adding more hosts. But if you've never seen traffic burst like that, there is no point in over-provisioning.
Unless you have a Zynga-like application where you can see a large amount of traffic at once, better understanding your traffic pattern and throwing in an additional host as insurance could be helpful. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one solution would be to containerize your application with Docker or create your own AMI like others have suggested; still, it will take a few minutes to boot them up. The next option is to keep hosts on standby and add them to your load balancers using scripts (or Lambda functions) that watch the metrics you define (I'm assuming your app is running behind load balancers); a companion sketch follows after this answer.
Good luck.
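For the standby-host idea, here is a companion sketch that registers already-running instances with an ALB/NLB target group when the watcher decides more capacity is needed (the ARN and instance IDs are placeholders):

    # Register pre-launched standby instances with a target group
    import boto3

    elbv2 = boto3.client('elbv2')

    TARGET_GROUP_ARN = 'arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123'
    STANDBY_INSTANCE_IDS = ['i-0abc123def4567890']  # placeholder IDs

    def lambda_handler(event, context):
        elbv2.register_targets(
            TargetGroupArn=TARGET_GROUP_ARN,
            Targets=[{'Id': iid} for iid in STANDBY_INSTANCE_IDS],
        )
        return {'registered': STANDBY_INSTANCE_IDS}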

High-availability for Restcomm

I'm planning a high-availability setup with autoscaling for RestComm and have some general doubts about the best way to plan it.
This is what I have now:
Restcomm instance using Amazon ECS (docker), so we can launch more instances very easily.
All of them share the Amazon RDS database.
Workspace is shared and persisted between instances.
To move to the next step, I have some questions:
Amazon's load balancer isn't an option because it doesn't support UDP, so I'm considering the Telestax LB; is that correct? Is it possible to deploy it using Docker?
Move the Restcomm MS outside of the Restcomm Docker image so it can scale independently. Restcomm provides env variables to specify the MS, so I would have an LB and several MS behind it. Correct?
How much RAM does a Restcomm instance need, and how many concurrent sessions does it support? How can we find out how many concurrent sessions there are in real time, programmatically?
Is there an "automatic scaling" mechanism implemented in RestComm? More info would be appreciated. Ubuntu Juju isn't an option for me.
We are considering Graylog2 or Logstash for log management. Any insight here? How do you install the agent in the Docker images?
The only documentation I found was this very good document: https://docs.google.com/document/d/13xlaioF065pDnQUoZgfIpi6Noh0qHfAZ7U6afcPd2Y0/edit
Is there any other doc?
Thanks in advance!
Very good questions:
Yes. See, https://hub.docker.com/r/restcomm/load-balancer/
You would have one LB (better to have two in active-passive mode to avoid a single point of failure) with X Restcomm instances behind it, speaking to Z Media Servers behind them.
It depends on the complexity of the application on top, but here are some numbers: https://github.com/RestComm/Restcomm-Connect/wiki/Load-Testing-on-Docker
Not yet. You can potentially use Mesos or Kubernetes if Juju is not an option. We have a set of open issues for Kubernetes right now, but Mesos should be working.
You can check https://hub.docker.com/r/restcomm/graylog-restcomm/ ; it contains a Docker image preloaded with everything needed to poll a Restcomm server for gathering metrics.

Is Amazon ElastiCache Redis an effective caching solution or not?

As you may have noticed, Amazon has announced a new feature for its ElastiCache product: support for Redis.
We are currently using one EC2 instance for our Redis (just queuing for now) and we've decided to use Redis for other upcoming features such as commenting system, discussion, real-time messaging, real-time user tracking and analytics, etc.
We don't mind running more and bigger EC2 instances, but should we invest in ElastiCache (Redis) and move to it from the beginning, given that we haven't started yet, or is it too soon to see results, benchmarks, and downsides? Or is it even limited in some respects compared to running your own Redis on your own instances?
Update 1:
Let me detail what we are going to do with Redis. Probably queuing, as we have been doing with Resque. I'm not sure whether ElastiCache lets us do any Pub/Sub, but if it does we would like to do that as well. And of course atomic and high-level operations.
Update 2:
There is a new video by the Senior Product Manager of Amazon ElastiCache, posted a week ago, recorded during the AWS re:Invent conference. Because it is new, he talks about Redis too!
http://www.youtube.com/watch?v=odMmdPBV8hM
I would say that if Redis is an effective caching solution for you, then ElastiCache will work for you - you're simply paying AWS to manage the back end and plumbing for you. Performance may be marginally slower - you have to do a DNS lookup for requests, vs. having Redis running in a VPC where you can access a private IP address directly - but even accessing it from an EC2 instance should resolve the public DNS name to the internal private IP. And of course you can launch your ElastiCache node in your VPC.
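On the Update 1 questions: from the client's point of view the ElastiCache endpoint behaves like any other Redis host, so queuing, Pub/Sub, and atomic operations are used the same way (whether a specific command is available depends on the Redis version ElastiCache runs, so check their docs). A quick sketch with redis-py, where the endpoint is a placeholder:

    # The ElastiCache node is addressed by its DNS endpoint, like any self-hosted Redis
    import redis

    r = redis.Redis(host='mycache.xxxxxx.0001.use1.cache.amazonaws.com', port=6379)

    r.incr('pageviews')                                       # atomic counter
    r.lpush('resque:queue:default', '{"class": "SendEmail"}') # queue-style usage

    pubsub = r.pubsub()
    pubsub.subscribe('comments')                # Pub/Sub works over the same endpoint
    r.publish('comments', 'new comment on post 42')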
There are some complications when running a memcached cluster - you will need to use the Amazon client to make sure your code connects to the correct node - but I do not believe, as of Dec 2013, that this is needed for Redis.
If you're implementing a queue on top of Redis, have you looked at SQS to see if it will work for you?