GKE vs Cloud Run - google-cloud-platform

I have Python Flask APIs deployed on Cloud Run. Autoscaling, CPU, and concurrency are all configurable in Cloud Run.
The problem now is the actual load testing, with around 40k concurrent users hitting the APIs continuously.
Can Cloud Run handle these huge volumes, or should we port our app to GKE?
What factors decide Cloud Run vs GKE?

Cloud Run is designed to handle exactly what you're talking about. It's very performant and scalable. You can also set things like concurrency per container/service, which can be handy where payloads might be larger.
Where you would use GKE is when you need to customise your platform, perform man-in-the-middle traffic inspection, deal with environment complexity, or handle potentially long-running compute. You might find this in large enterprises or highly regulated environments. Kubernetes is almost like a private cloud, though: it's very complex, has its own way of working, and requires ongoing maintenance.
This is obviously opinionated, but if you can't think of a reason why you need Kubernetes/GKE specifically, Cloud Run wins for an API.
To provide more detail, though, see Cloud Run Limits and Quotas.
The particularly interesting limit is the 1000 container instances, but note that it can be increased on request.
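For the load test itself, here is a minimal sketch using Locust, one Python option alongside the Gatling/JMeter tools mentioned further down (the /api/ping endpoint and the host URL are hypothetical placeholders). Ramping to 40k users typically needs Locust's distributed master/worker mode across several machines:

    from locust import HttpUser, between, task

    class ApiUser(HttpUser):
        # Each simulated user waits 1-3 seconds between requests.
        wait_time = between(1, 3)

        @task
        def ping(self):
            # Hypothetical endpoint; replace with your real Flask routes.
            self.client.get("/api/ping")

    # Run (single machine shown; use --master/--worker to reach 40k users):
    #   locust -f locustfile.py --host https://your-service-abc123-uc.a.run.app \
    #          --users 40000 --spawn-rate 200 --headless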

Related

Increase resources on a Compute Engine VM without shutting services down

To automatically manage Cloud resources and meet the needs of my infrastructure, I need to increase VM resources, but this is possible only while the machine status is TERMINATED.
The problem is that I have applications on the VMs that must not stop running. Do you have any suggestions about how I could proceed, i.e. increase my machines' resources without interrupting their services (database, web, etc.)?
The purpose of this is to automate my whole infrastructure, to ensure its quality of service even when I'm not monitoring it myself.
I suggest taking a look at Managed Instance Groups. They offer some of the characteristics you need: high availability, scalability, and the ability to add the instance group to a load balancer.
According to the official Google documentation about MIGs:
Make your workloads scalable and highly available by taking advantage of automated MIG services, including: autoscaling, autohealing, regional (multiple zone) deployment, and automatic updating.
Regarding the automation of the services you want, I suggest generally using fully managed services. You can check a summary of GCP services and inspect whether they fit your demands. A sketch of wiring an autoscaler to a MIG follows.
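As an illustration, a minimal sketch of attaching an autoscaler to an existing MIG with the google-cloud-compute Python client (the project, zone, and MIG names are placeholders; double-check field names against the current client library):

    from google.cloud import compute_v1

    def add_autoscaler(project: str, zone: str, mig_name: str) -> None:
        # Scale the MIG between 2 and 10 instances, targeting 60% CPU.
        autoscaler = compute_v1.Autoscaler(
            name=f"{mig_name}-autoscaler",
            target=f"projects/{project}/zones/{zone}/instanceGroupManagers/{mig_name}",
            autoscaling_policy=compute_v1.AutoscalingPolicy(
                min_num_replicas=2,
                max_num_replicas=10,
                cpu_utilization=compute_v1.AutoscalingPolicyCpuUtilization(
                    utilization_target=0.6
                ),
            ),
        )
        compute_v1.AutoscalersClient().insert(
            project=project, zone=zone, autoscaler_resource=autoscaler
        )

    add_autoscaler("my-project", "us-central1-a", "my-mig")  # placeholder names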

What is best way to performance and load test AWS cloud applications

I want to keep this question generic, with the expected answer in terms of best practices/approaches/guidelines.
We need to know the best way to performance test and load test AWS cloud based applications.
What we have tried:
We used Gatling and JMeter to execute our performance tests. These frameworks are pretty useful for testing our functionality and for benchmarking our applications' latency and request rate.
Problem:
Performance benchmarks and limits of AWS managed services like Lambda and DDB are already specified by AWS, e.g. Lambda concurrency behavior, DDB autoscaling under load, etc. AWS also provides high availability and guaranteed performance for managed services.
Is it really worth executing expensive performance and load test jobs against AWS managed services?
How do we ensure that we are testing our application and not actually testing AWS limits, which are already known?
What is the best practice and approach to performance testing cloud-based applications?
Any suggestions will help tremendously.
Thanks,
It depends on how you use the AWS infrastructure.
If you don't use Auto Scaling, you can treat the AWS cloud-based application as a "normal" application, just deployed not on your company premises but somewhere at Amazon.
If you use Auto Scaling, you might want to come up with some form of scalability testing. AWS instances can scale up in order to adapt to the increased load, but your application must be able to scale as well. So you can test:
how fast the scale-up and scale-down processes are, i.e. if you rapidly increase the load, whether AWS/your application react fast enough or there is a performance drop
the scalability factor. For example, with 100 virtual users you get 50 requests per second. Ideally, with 200 virtual users you should get 100 requests per second, and with 300 virtual users 150 requests per second, with response time remaining the same. Normally the situation differs from this ideal, so it is good to know your scalability factor (see the sketch after this list).
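A quick back-of-the-envelope sketch of computing that scalability factor (the throughput numbers are the hypothetical ones from the example above):

    # Measured throughput at each virtual-user level (hypothetical numbers).
    measurements = {100: 50.0, 200: 92.0, 300: 120.0}  # users -> requests/sec

    base_users, base_rps = 100, measurements[100]
    for users, rps in sorted(measurements.items()):
        ideal = base_rps * users / base_users  # linear scaling from the baseline
        factor = rps / ideal                   # 1.0 = perfectly linear scaling
        print(f"{users} users: {rps:.0f} rps, scalability factor {factor:.2f}")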
Also, if your application is behind an AWS ELB, you will need to add the DNS Cache Manager to your JMeter test plan, otherwise you will be hitting only one backend instance while the others sit idle.

Architecture Questions to Autoscale Moodle on Google Cloud Platform

We're setting up a Moodle for our LMS and we're designing it to autoscale.
Here are the current stack specifications:
-Moodle Application (App + Data) baked into an image and launched into a Managed Instance Group
-Cloud SQL for database (MySQL 5.7 connected through Cloud SQL Proxy)
-Cloud Load Balancer - HTTPS load balancing with the managed instance group as backend + session affinity turned on
Questions:
Do I still need Redis/Memcached for my session? Or is the load balancer session affinity enough?
I'm thinking of using Cloud Filestore for the data folder. Is this recommended vs running another Compute Engine instance?
I'm more concerned about the session cache and content cache for future user growth. What would you recommend adding into the mix? Any advice on CI/CD would also be helpful.
So, I can't properly answer these questions without more information about your use case. Anyway, here's my best :)
How bad do you consider forcing some users to re-login when a machine is taken down from the managed instance group? Related to this, how spiky do you foresee your traffic being? How many users can a machine serve before the autoscaler kicks in and machines are added or removed to/from the pool (i.e., how dynamic do you think your app will need to be)? By answering these questions you should get an idea. Also, why not use Datastore/Firestore for user sessions? The few tens of milliseconds of latency shouldn't compromise the snappy feel of your app. A sketch of that idea follows.
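Moodle's session handling is PHP, but to sketch the Firestore idea in Python terms (the "sessions" collection and the field names here are made up for illustration):

    from google.cloud import firestore

    db = firestore.Client()

    def save_session(session_id: str, data: dict) -> None:
        # One document per session; survives any single VM being scaled away.
        db.collection("sessions").document(session_id).set(data)

    def load_session(session_id: str):
        snap = db.collection("sessions").document(session_id).get()
        return snap.to_dict() if snap.exists else None

    save_session("abc123", {"user_id": 42, "logged_in": True})  # example values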
Cloud Filestore uses NFS, and you might hit some of the NFS idiosyncrasies. Are you OK with dealing with that? Also, what latency is acceptable? How big are the blobs of data you will be saving? If they are small enough, you are very latency-sensitive, and you want atomicity in the read/write operations, you can go for Cloud Bigtable. If latency is not that critical, Google Cloud Storage can do it for you, but you also lose atomicity.
Google Cloud CDN seems like what you want, granted that you can set up the headers correctly. It is a managed service, so it has all the goodies without you lifting a finger, and it's cheap compared to serving stuff from your application/Google Cloud Storage/...
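Cloud CDN caches responses based on their headers, so the main work is emitting Cache-Control correctly. A minimal sketch in Flask terms (Moodle itself is PHP, but the header logic is identical; the max-age value is arbitrary):

    from flask import Flask, send_from_directory

    app = Flask(__name__)

    @app.route("/static/<path:filename>")
    def static_content(filename):
        resp = send_from_directory("static", filename)
        # "public" plus a max-age makes the response cacheable by Cloud CDN.
        resp.headers["Cache-Control"] = "public, max-age=3600"
        return resp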
Cloud Build seems the easy option for CI/CD, unless you want to support more advanced stuff that is not yet supported.
Please provide more details so I can edit and focus my answer.
There is a study on the autoscaling: using Redis Memorystore showed large network bandwidth from the cache server compared to a Compute Engine instance with Redis installed.
moodle autoscaling on google cloud platform
Regarding the Moodle data folder, it shows that a Compute Engine instance serving NFS should have enough performance compared to Filestore, which is much more expensive, as the speed also depends on the disk size.
I used this topology for the implementation:
Autoscale Topology Moodle on GCP

Which aws instance type is optimal to improve spark shuffle performance?

For my spark application I'm trying to determine whether I should be using 10 r3.8xlarge or 40 r3.2xlarge. I'm mostly concerned with shuffle performance of the application.
If I go with r3.8xlarge I will need to configure 4 worker instances per machine to keep the JVM size down. The worker instances will likely contend with each other for network and disk I/O if they are on the same machine. If I go with 40 r3.2xlarge I will be able to allocate a single worker instance per box, allowing each worker instance to have its own dedicated network and disk I/O.
Since shuffle performance is heavily impacted by disk and network throughput, it seems like going with 40 r3.2xlarge would be the better configuration between the two. Is my analysis correct? Are there other tradeoffs that I'm not taking into account? Does Spark bypass the network transfer and read straight from local disk if worker instances are on the same machine?
It seems you have the answer already: going with 40 r3.2xlarge would be the better configuration between the two. A sketch of the executor sizing involved follows.
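For reference, the packing-vs-spreading tradeoff mostly comes down to executor sizing. A PySpark sketch of roughly equivalent settings for the two shapes (the values are illustrative, not tuned; assumes a cluster manager such as YARN or standalone):

    from pyspark import SparkConf
    from pyspark.sql import SparkSession

    # 40x r3.2xlarge: one executor per box, using most of its resources.
    conf = (
        SparkConf()
        .set("spark.executor.instances", "40")
        .set("spark.executor.cores", "8")     # r3.2xlarge has 8 vCPUs
        .set("spark.executor.memory", "48g")  # headroom out of 61 GiB RAM
        # On 10x r3.8xlarge you'd run 4 executors of this size per box
        # instead: same totals, but they share the box's NIC and disks.
    )

    spark = SparkSession.builder.config(conf=conf).getOrCreate()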
I recommend you go through the AWS Well-Architected Framework.
General Design Principles
The Well-Architected Framework identifies a set of general design principles to facilitate good design in the cloud:
Stop guessing your capacity needs: Eliminate guessing your infrastructure capacity needs. When you make a capacity decision before you deploy a system, you might end up sitting on expensive idle resources or dealing with the performance implications of limited capacity. With cloud computing, these problems can go away. You can use as much or as little capacity as you need, and scale up and down automatically.
Test systems at production scale: In a traditional, non-cloud environment, it is usually cost-prohibitive to create a duplicate environment solely for testing. Consequently, most test environments are not tested at live levels of production demand. In the cloud, you can create a duplicate environment on demand, complete your testing, and then decommission the resources. Because you only pay for the test environment when it is running, you can simulate your live environment for a fraction of the cost of testing on premises.
refer:
AWS Well-Architected Framework

EC2 Architecture design for Website

I have a site that I will be launching soon. Not entirely sure how heavy the traffic will get.
I am using Django+Nginx+Gunicorn+Mysql. There will be support for SSL/HTTPS.
As a starting point, I am thinking of having two micro instances balanced by Elastic Load Balancing.
The MySQL database will be on one of the instances. If traffic gets heavy, I might move static files to a CDN. The micro instances serve as front-end servers responsible only for churning out HTML/JSON and serving static files. Static files are mainly CSS/JS and several images (not many). I foresee the database being read-heavy with fewer writes.
Questions:
Assuming the traffic rises to 100k page views per day, will the 2 micro instances suffice?
Do I have to move the database to a separate instance? And what instance type would be good?
What if the traffic is only 1k page views per day?
How many gunicorn processes should run on a micro instance?
In general, what type of metrics will help me determine what kind and how many instances I would need? What is the methodology to decide what kind of architecture I would need?
Thanks a lot!
Completely dependent on how dynamic the site is planning to be. Do users generate content for the service, or is it mostly static? If the former, you'll get a lot from putting stuff like avatars and images into S3 and putting that behind CloudFront (a sketch below). The same goes for your static files... keeping your servers stateless will allow you to scale with ease.
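To make the stateless-assets point concrete, a minimal boto3 sketch of pushing an uploaded avatar to S3 instead of local disk (the bucket name and key scheme are made up):

    import boto3

    s3 = boto3.client("s3")

    def save_avatar(user_id: int, fileobj) -> str:
        # Hypothetical bucket/key; serve it via CloudFront in front of S3.
        key = f"avatars/{user_id}.png"
        s3.upload_fileobj(fileobj, "my-site-assets", key)
        return key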
At 100k page views a day you will definitely struggle with just micros... you really should only use those in a development environment; they aren't meant to handle stuff like serving users. I'd use, at a minimum, a single small instance in front of a Load Balancer. It may sound strange, but you will be able to throw in another instance when things get busy without having to mess with Route 53 or potentially having your site fail. The stateless stuff is quite important now, as user-generated assets may otherwise only be referenceable from one instance and not the other.
At 1k page views I'd still use a small for web serving and another small for MySQL. You can look into RDS, which is great if you're doing this solo: forget about needing to upgrade versions and stuff like maintenance, backups, etc.
You will also be able to spin up read replicas for peaks with one click, and the AWS CLI can help automate those tasks (a boto3 sketch is below). Cron jobs will make it a cinch if you're time-stressed; otherwise OpsWorks, CloudFormation, and Auto Scaling will all help with the above.
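Spinning up a read replica can be scripted with boto3 as well (both instance identifiers are placeholders):

    import boto3

    rds = boto3.client("rds")

    # Create a read replica of the primary for read-heavy peaks.
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="mysite-db-replica-1",      # placeholder name
        SourceDBInstanceIdentifier="mysite-db-primary",  # placeholder name
        DBInstanceClass="db.t3.small",
    )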
Oh, and just as a comparison: an application server of mine running Apache and PHP with APC to serve our users starts to struggle at about 80 concurrent users. It runs on a small EC2 instance with a small RDS (which sits at about 15% while the application server is going downhill).
Probably not. Micro instances are not designed for heavy production loads. They use a burstable CPU profile: they can run at 2 ECU for a couple of minutes, and then they get locked to 0.1-0.2 ECU. I tend to like c1.medium, but small may be enough for you.
Maybe, as long as they are spread out during the day and not all in a short window.
1-2 per core, and micro only has 1 core (see the gunicorn config sketch after this list).
Every application is different. The best thing to do is run your own benchmarks using tools like ab (Apache Bench).
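A gunicorn.conf.py sketch following the 1-2 workers per core rule (gunicorn loads it with gunicorn -c gunicorn.conf.py myproject.wsgi; the bind address assumes Nginx proxies locally):

    # gunicorn.conf.py
    import multiprocessing

    # 1-2 workers per core; a micro has a single core, so this yields 2.
    workers = multiprocessing.cpu_count() * 2
    bind = "127.0.0.1:8000"  # Nginx proxies requests to this address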
Following the AWS best practices architecture diagram is always a good start.
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_web_01.pdf
I strongly advise you to store all your files on Amazon S3 and use Route 53 DNS (or any other DNS if you want) in front of it to distribute the files, because later on, if you decide to use the CloudFront CDN, it will be very easy to change. And just to mention: using CloudFront as a CDN will increase your cost only a little bit, not a huge amount.
It doesn't matter the scenario: if we're talking about production, you should definitely go for separate instances, at least one EC2 for the web and one EC2/RDS for the database.
If you are a geek and like to get into the nitty-gritty details, create your own infrastructure, with or without an automation tool (Puppet, Chef). Or, if you just want to collect the profit or have scarce resources to take care of everything, you should try Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html)
Anyway, whether you create your own infrastructure or choose Elastic Beanstalk, always execute stress tests to get a better overview of your capacity planning needs. After you choose your initial environment, stress it using Apache Bench, Siege, or whatever other tool you like.
Hope this helps.
I would suggest using small instances instead of micro, as micro instances often stop responding under heavy load and then require a stop-start. Use S3 for static files, which helps with faster loading, and have a look at CloudFront.
The instance's region also matters for serving requests: if you target a specific region, create the instance in that region.
Create the database on a new instance and attach an EBS volume to that instance. Automate a backup script to copy the database files and store them on EBS to avoid any issues (a sketch below). The instance selected here can use provisioned IOPS for faster processing over standard storage. AWS services provide a lot of flexibility, but you need scripts running to scale the servers up and down at the right times.
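A cron-able backup sketch along those lines (the database name and mount path are placeholders; assumes the EBS volume is mounted at /mnt/ebs-backups and mysqldump reads credentials from ~/.my.cnf):

    import datetime
    import subprocess

    def backup_mysql() -> None:
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        dest = f"/mnt/ebs-backups/mydb-{stamp}.sql"  # EBS mount (placeholder)
        # Dump the database; credentials come from ~/.my.cnf, not the CLI.
        with open(dest, "w") as out:
            subprocess.run(["mysqldump", "mydb"], stdout=out, check=True)

    if __name__ == "__main__":
        backup_mysql()  # e.g. run daily from cron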
Spot instances can also help in the future, as they come cheap in case you are scaling up.