Reduce Cloud Run on GKE costs - google-cloud-platform

It would be great if I could have answers to the following questions on Google Cloud Run.
If I create a cluster with resources upwards of 1 vCPU, will those extra vCPUs be utilized by my Cloud Run service, or is it always capped at 1 vCPU irrespective of my cluster configuration? In the docs here, this line has me confused: "Cloud Run allocates 1 vCPU per container instance, and this cannot be changed." I know this holds for managed Cloud Run, but does it also hold for Cloud Run on GKE?
If the resources specified for the cluster actually get utilized (say, I create a node pool of 2 n1-standard-4 nodes with 15 GB memory each), then why am I asked to choose memory again when creating/deploying to Cloud Run on GKE? What is its significance?
The 'Memory allocated' dropdown:
If Cloud Run autoscales from 0 to N according to traffic, why can't I set the number of nodes in my cluster to 0 (I tried and started seeing error messages about unscheduled pods)?
I followed the docs on custom domain mapping and set it up. Can I restrict which requests a container instance handles by the domain name or IP they come from (even if that is only set up artificially by specifying a Host header, as in the Cloud Run docs)?
curl -v -H "Host: hello.default.example.com" YOUR-IP
So that I don't incur charges if I get HTTP requests from anywhere but my verified domain?
Any help will be very much appreciated. Thank you.

1: The Cloud Run managed platform always allocates 1 vCPU per revision. On GKE that is also the default, but only on GKE can you override it with the --cpu param:
https://cloud.google.com/sdk/gcloud/reference/beta/run/deploy#--cpu
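For example, a minimal deploy could look like this, assuming the beta gcloud run surface for GKE; the service, image, cluster name and location are placeholders:
gcloud beta run deploy my-service \
  --image gcr.io/PROJECT_ID/my-image \
  --platform gke \
  --cluster my-cluster --cluster-location us-central1-a \
  --cpu 2 \
  --memory 2Gi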
2: Can you clarify exactly what is being asked, and while performing which operation?
3: Cloud Run is built on top of Kubernetes thanks to Knative. Cloud Run is in charge of scaling pods up and down based on traffic, while Kubernetes is in charge of scaling pods and nodes based on CPU and memory usage; the mechanisms aren't the same. Moreover, node scaling is "slow" and can't keep up with spiky traffic. Finally, something has to run on your cluster to listen for incoming requests and serve/scale your pods correctly, and that component has to run on a cluster with more than 0 nodes.
4: Cloud Run doesn't allow you to configure this, and I don't think Knative can either. But you can deploy an ESP (Extensible Service Proxy) in front to route requests to a specific Cloud Run service. That way you split the traffic upstream and address it to different services, so each one scales independently. Each service can have its own max scale and concurrency params, and ESP can implement rate limiting.
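As a rough sketch of giving each split-off service its own scaling profile (service and cluster names are placeholders, and I'm assuming the --concurrency and --max-instances flags, which map to Knative autoscaling settings, are available for the GKE platform):
gcloud beta run services update internal-api \
  --platform gke --cluster my-cluster --cluster-location us-central1-a \
  --concurrency 50 --max-instances 5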


AWS Fargate pods

Hi, I am new to Fargate and confused about its cost calculation.
How is the 'Average duration' calculated and charged? Is it calculated and charged only for the time between a request arriving and the response being returned, or are pods continually running and charged 24*7*365?
Also, does Fargate fetch the image from ECR every time a request arrives?
Does Fargate cost anything even when there are no requests and nothing is being processed?
What is the correct way of calculating the 'Average duration' section?
This can make a huge difference in cost.
You can learn more details from AWS Fargate Pricing and from the AWS Pricing Calculator. When you read the details in the first link I mentioned, you will find the explanation for duration, and there are three examples in the link.
How is the 'Average duration' calculated and charged? Is it calculated and charged only for the time between a request arriving and the response being returned, or are pods continually running and charged 24*7*365?
Fargate is not a request-based service. Fargate runs your pod for the entire time you ask it to run that pod. It doesn't deploy pods when a request comes in; the pods are running 24/7 (or for as long as you have them configured to run).
Fargate is "serverless" in the sense that you don't have to manage the EC2 server the container(s) are running on yourself, Amazon manages the EC2 server for you.
Also, does Fargate fetch the image from ECR every time a request arrives?
Fargate pulls from ECR when a pod is deployed. It has to be deployed and running already in order to accept requests. It does not deploy a pod when a request comes in like you are suggesting.
Does Fargate cost anything even when there are no requests and nothing is being processed?
Fargate charges for the amount of RAM and CPU you have allocated to your pod, regardless of whether it is actively processing requests or not. Fargate does not care about the number of requests. You could even use Fargate for things like back-end processing services that don't accept requests at all.
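If you want to see exactly what size you are paying for, you can read the CPU and memory reserved in the task definition (the task definition name here is hypothetical):
aws ecs describe-task-definition \
  --task-definition my-task \
  --query 'taskDefinition.[cpu,memory]'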
If you want an AWS service that only runs (and charges) when a request comes in, then you would have to use AWS Lambda.
You could also look at AWS App Runner, which is in kind of a middle ground between Lambda and Fargate. It works like Fargate, but it suspends your containers when requests aren't coming in, in order to save some money on the CPU charges.

Cloud Run Web Hosting limitation

I'm considering Cloud Run for web hosting rather than a complex Compute Engine setup.
I just want to make an API with Node.js. I heard that automatic load balancing is also available. If so, is there any problem with concurrent traffic of 1 million people without any configuration? (The database server is somewhere else, something serverless like CockroachDB.)
Or do I have to configure various complicated settings like with AWS EC2 or GCE?
For that kind of traffic, the out-of-the-box configuration has to be fine-tuned.
Firstly, the concurrency parameter on Cloud Run: this parameter indicates how many concurrent requests can be handled per instance. The default value is 80, and you can set it up to 1000 concurrent requests per instance.
Of course, if you handle 1000 concurrent requests per instance (or close to it), you will probably need more CPU and memory. You can also play with those parameters.
You also have to change the max instances limit. By default, you are limited to 1000 instances.
If you set 1000 concurrent requests and 1000 instances, you can handle 1 million concurrent requests.
However, you don't have much margin; an instance handling 1000 concurrent requests can struggle even with the maximum CPU and memory.
You can request more than 1000 instances with a quota increase request.
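As a rough sketch (not a tuned configuration), a deploy that raises both limits on the fully managed platform might look like this; the service name, image, and region are placeholders:
gcloud run deploy my-api \
  --image gcr.io/PROJECT_ID/my-api \
  --platform managed --region us-central1 \
  --concurrency 1000 \
  --max-instances 1000 \
  --cpu 2 --memory 2Gi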
You can also optimize things differently, especially if your 1 million users aren't all in the same country/Google Cloud region. If so, you can deploy an HTTPS load balancer in front of your Cloud Run service and deploy the service in all the regions where your users are. (The Cloud Run services deployed in different regions must have the same name.)
That way, it's not only one service that has to absorb 1 million users, but several, in different regions. In addition, the HTTPS load balancer routes each request to the closest region, so you optimize latency and reduce egress/cross-region traffic.
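A minimal sketch of wiring one region's service into the load balancer, assuming the serverless NEG integration and that the global backend service, URL map, proxy and forwarding rule already exist (all names are placeholders; repeat per region):
# Serverless NEG pointing at the Cloud Run service deployed in this region
gcloud compute network-endpoint-groups create my-api-neg-us \
  --region us-central1 \
  --network-endpoint-type serverless \
  --cloud-run-service my-api
# Attach the NEG to the global backend service behind the HTTPS load balancer
gcloud compute backend-services add-backend my-api-backend \
  --global \
  --network-endpoint-group my-api-neg-us \
  --network-endpoint-group-region us-central1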

Cluster nodes only used by internal pods

We are using GKE with Anthos to host our apps. Our default node pool is set to autoscale, but I noticed that out of 5 running pods, only 2 are hosting our actual services.
All the others are running internal services like this:
The issue with that is that there's not enough room left for running our own services. I guess these are vital for the cluster, otherwise the cluster would autoscale and the nodes would get removed.
What would be the best approach to solve this issue? I thought of upgrading the nodes' machine type to allow more resources per node, giving more room within them and thus fewer running nodes, but I wanted to make sure I was not simply missing something about how GKE works.
I've been digging into this for quite some time now, but it seems that would be my only option.
GKE itself requires several add-on resources which are deployed as part of your cluster. You can fine-tune the resource usage of some of the GKE add-ons for smaller clusters. Additionally, each Anthos capability you enable typically deploys a set of controllers as well. GKE and Anthos try to minimize the compute resources used by these services/controllers, but you do need to account for them when calculating the right size(s) for your nodes. A good rule of thumb is to assume that system services/controllers will use ~1 vCPU when using GKE/Anthos (it's typically lower than that, but it makes things easier). So if your workloads all request >= 1 vCPU, you'll likely need to use nodes that have a minimum of 4 vCPUs. You'll also want to enable the cluster autoscaler for your node pools if you don't want to pre-provision everything.
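To see what is actually reserving capacity on a given node before resizing, something like the following should show the system/Anthos pods and their requests (NODE_NAME is a placeholder):
# Requested vs. allocatable CPU/memory on the node
kubectl describe node NODE_NAME | grep -A 10 "Allocated resources"
# Pods scheduled on that node, including kube-system and Anthos controllers
kubectl get pods --all-namespaces --field-selector spec.nodeName=NODE_NAME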
A better option would be to use node auto-provisioning: in this case you don't need to create/manage your own node pools, as GKE will automatically add/remove nodes and node pools based on the resources requested by your deployments.
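A sketch of enabling it on an existing cluster (the cluster name and the CPU/memory bounds are just example values):
gcloud container clusters update my-cluster \
  --enable-autoprovisioning \
  --min-cpu 1 --max-cpu 32 \
  --min-memory 1 --max-memory 128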

how to get budget estimate for copilot?

When I select "scheduled job" while initiating resources, how is the process handled internally?
Can I verify the container in ECS? I guess it will use batch jobs for this option.
# copilot init
Note: It's best to run this command in the root of your Git repository.
Welcome to the Copilot CLI! We're going to walk you through some questions
to help you get set up with an application on ECS. An application is a collection of
containerized services that operate together.
Which workload type best represents your architecture? [Use arrows to move, type to filter, ? for more help]
> Load Balanced Web Service
Backend Service
Scheduled Job
What will be the charges if I select backend service or scheduled job?
Copilot uses Fargate containers under the hood; therefore, your charges for a backend service are based on the number of containers you have running and the CPU/memory size of those containers. The minimum container size is 0.25 vCPU and 512 MB of reserved memory.
For other service types, your pricing depends on a few more things.
Load Balanced Web Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container)
- Application Load Balancer (about $20/month, depending on traffic)
Backend Service
- Fargate containers, based on size and number (~$9/month for the smallest possible container)
Scheduled Job
- Fargate containers, based on size, number, and invocation frequency and duration (i.e., you only pay for the minutes you use)
- State Machine transitions: the first 4,000 transitions in a month are free, which corresponds to an invocation frequency of about once every 21 minutes, assuming there are no retry transitions. Transitions beyond that limit are billed at a low rate.
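For what it's worth, here is roughly where the ~$9/month figure for the smallest container comes from, using the us-east-1 Fargate rates I'm aware of at the time of writing (rates vary by region, so treat this as an approximation):
# 0.25 vCPU * $0.04048 per vCPU-hour * 730 hours ≈ $7.39
# 0.5 GB    * $0.004445 per GB-hour  * 730 hours ≈ $1.62
# total                                          ≈ $9.01 per month
echo "0.25*0.04048*730 + 0.5*0.004445*730" | bc -l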
Other notes
All Copilot-deployed resources are grouped with a number of resource tags. You can use those tags to understand billing activity, and even add your own tags via the --resource-tags flag in copilot svc deploy or copilot job deploy.
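For example, adding your own tags at deploy time might look like this (the service/environment names and tag keys are made up, and I'm assuming the flag takes comma-separated key=value pairs):
copilot svc deploy --name api --env test --resource-tags team=payments,cost-center=1234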
The tags we use to logically group resources are the following:
copilot-application: name of the application this resource belongs to
copilot-environment: name of the environment this resource belongs to
copilot-service: name of the service or job this resource belongs to
The copilot-service tag is used for both jobs and services for legacy reasons.
Copilot refers to these entities as common cloud architectures [1].
I could not find an official document which outlines how these architectures are composed in detail. I guess it might be an implementation detail from the creators' perspective when you look at point one of the AWS Copilot CLI charter [2]:
Users think in terms of architecture, not of infrastructure. Developers creating a new microservice shouldn't have to specify VPCs, load balancer settings, or complex pipeline configuration. They may not know anything about other AWS services. They should be able to specify what "kind" of application it is and how it fits into their overall architecture; the infrastructure should be generated from that.
I have to agree that more sophisticated users always ask themselves what the costs of a specific architecture will look like, and I completely endorse the idea of having a dedicated Copilot command such as copilot estimate costs service-xy which could be executed before creating the service.
There is some high-level documentation on the architecture types Load Balanced Web Service and Backend Service. [3]
It mentions the command copilot svc show in conjunction with the --resources flag [4]:
You can also provide an optional --resources flag to see all AWS resources associated with your service.
I think this gives you the ability to estimate costs right after bringing the services up and running.
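For example (the service name is a placeholder):
copilot svc show --name api --resources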
A somewhat more complicated approach, which I frequently apply to understand complex constructs in the AWS CDK, is to look at the source code. For example, you could open the corresponding Go file for the Load Balanced Web Service architecture: [5]. Digging into the code, you'll notice that they make it pretty clear that they are using Fargate containers instead of EC2 instances.
That is also what they tell us in the high-level service docs [4]:
You can select a Load Balanced Web Service and Copilot will provision an application load balancer, security groups, an ECS Service and run your service on Fargate.
More on Fargate: [6]
Btw, there is a really interesting comment in the issue section which outlines why they decided against supporting EC2 in the first place [7]:
What features have the team currently explicitly decided against?
EC2 comes to mind. I think we could have built a really nice experience ontop of EC2 instances - but I think there's a difference between building an "abstraction" around ECS / Fargate and building an "illusion" around ECS / EC2. What I mean by that is that if we created the illusion of a fully hands off EC2 experience, customers might be surprised that they are expected to be in charge of patching and security maintenance of those instances. This isn't something that Copilot, a CLI can really automate for people (realistically). We're still trying to figure out a good way to expose EC2 to folks - but we definitely see Fargate as the future.

JMeter load test with 30K users on AWS

My scenario is mentioned below, please provide the solution.
I need to run 17 HTTP REST APIs for 30K users.
I will create 6 AWS instances (slaves) for running the 30K users (6 instances * 5,000 users).
Each AWS instance (slave) needs to handle 5K users.
I will create 1 AWS instance (master) for controlling the 6 AWS slaves.
1) For the master AWS instance, what instance type and storage do I need to use?
2) For each slave AWS instance, what instance type and storage do I need to use?
3) The main objective is for a single AWS instance to handle 5,000 (5K) users; what instance type and storage do I need for this? This also needs to be solved at low cost (pricing).
The answer is: I don't know. This is something you need to find out yourself, because how many users you will be able to simulate on this or that AWS instance depends on the nature of your test, what it is doing, response sizes, the number of post-processors/assertions, etc.
So I would recommend the following approach:
First of all, make sure you are following the recommendations from 9 Easy Solutions for a JMeter Load Test "Out of Memory" Failure.
Start with a single AWS server, e.g. a t2.large, and a single virtual user. Gradually increase the load while monitoring the AWS health metrics (CPU, RAM, disk, etc.) using either Amazon CloudWatch or the JMeter PerfMon Plugin. Once one of the monitored metrics runs short (e.g. CPU usage exceeds 90%), stop your test and note the number of virtual users at that point (you can use the Active Threads Over Time listener for this).
Depending on the outcome, either switch to another instance type (e.g. compute optimized if you lack CPU, or memory optimized if you lack RAM) or go for a higher-spec instance of the same tier (e.g. t2.xlarge).
Once you know the number of users you can simulate on a single host, you should be able to extrapolate it to the other hosts.
The JMeter master host doesn't need to be as powerful as the slave machines; just make sure it has enough memory to handle the incoming results.
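A minimal sketch of the distributed run itself, assuming JMeter is installed on all hosts; the test plan name and slave IPs are placeholders:
# On each slave, start the JMeter server component
jmeter-server
# On the master, run the test plan in non-GUI mode against all six slaves and collect results
jmeter -n -t test-plan.jmx -R 10.0.0.1,10.0.0.2,10.0.0.3,10.0.0.4,10.0.0.5,10.0.0.6 -l results.jtl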