I submitted a GCP Dataflow pipeline to receive data from GCP Pub/Sub, parse it, and store it in GCP Datastore. It seems to work perfectly.
After 21 days, I found the cost was $144.54 and the worked time was 2,094.72 hours. That means it is charged every second after I submitted it, even when it does not receive (or process) any data from Pub/Sub.
Is this behavior normal? Or did I set the wrong parameters?
I thought CPU time would only be counted when data is received.
Is there any way to reduce the cost with the same working model (receive from Pub/Sub and store to Datastore)?
The Cloud Dataflow service usage is billed in per-second increments, on a per-job basis. I guess your job used 4 n1-standard-1 workers, i.e. 4 vCPUs, giving an estimated 2,000 vCPU hours of resource usage. Therefore, this behavior is normal. To reduce the cost, you can either use autoscaling to specify the maximum number of workers, or use the pipeline options to override the resource settings allocated to each worker. Depending on your needs, you could also consider Cloud Functions, which costs less, but keep its limits in mind.
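For instance, here is a minimal sketch of such pipeline options using the Apache Beam Python SDK; the project, region, worker cap and machine type are illustrative values, not recommendations:

```python
# Sketch: capping worker count and machine size for a Dataflow job.
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',          # hypothetical project ID
    '--region=us-central1',
    '--autoscaling_algorithm=THROUGHPUT_BASED',
    '--max_num_workers=2',           # cap autoscaling at 2 workers
    '--machine_type=n1-standard-1',  # smallest standard machine type
])
```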
Hope it helps.
Related
I have a question about the FlexRS type, which I came across while looking at Dataflow on Google Cloud Platform. :)
As far as I know, Dataflow supports batch and streaming, but I was curious to learn that there is also a FlexRS type.
Beyond the fact that it is cheaper than a typical job, I found FlexRS difficult to understand from the documentation alone.
Can I ask you to explain FlexRS?
Thank you.
Dataflow is a fully managed batch and streaming analytics service that minimises latency and processing time through autoscaling.
Regarding FlexRS for Dataflow, you can use it in batch processing pipelines which are not time-critical, such as daily or weekly jobs that can be completed within a certain time window. Normally, Dataflow uses both preemptible and regular workers to execute your job. It takes into account the availability of preemptible VMs, and you are charged according to the documentation.
On the other hand, FlexRS offers a discounted rate for CPU and memory pricing for batch processing. It can delay your Dataflow batch job within a 6-hour window to identify the best point in time to start the job, based on the availability of resources. When enabled, FlexRS selects preemptible VMs for 90% of workers in the worker pool by default.
Therefore, FlexRS is used only for non time-critical batch workloads.
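If you want to try it, FlexRS is enabled through a single pipeline option. A minimal sketch with the Apache Beam Python SDK (project and region are placeholders):

```python
# Sketch: submitting a batch job with FlexRS (delayed, discounted scheduling).
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions([
    '--runner=DataflowRunner',
    '--project=my-project',          # hypothetical project ID
    '--region=us-central1',
    '--flexrs_goal=COST_OPTIMIZED',  # opt in to FlexRS pricing
])
```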
I want to set up Enhanced Monitoring on one of our RDS instances, but I am not able to calculate the cost it will incur every month.
I checked the AWS doc at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_Monitoring.OS.html and it says the cost depends on several factors. One of them is logs, which are free for up to 5 GB per month under the free tier (the free tier only covers the first year, and those 5 GB will not apply to older AWS accounts, if I am right). The remaining three factors also seem to be related to writing logs.
Please help me with how I can calculate the cost incurred purely by enabling Enhanced Monitoring on an AWS RDS instance.
--Junaid.
RDS's Enhanced Monitoring cost is just CloudWatch cost.
One of the biggest parts of CloudWatch cost is the total amount of logs you write, in bytes, which is about $0.50/GB (it varies across regions).
Back to the question: you can approximate the cost by simply enabling Enhanced Monitoring; I suggest starting with one-minute granularity. After a few hours, some logs will appear in CloudWatch Logs, and you can get the total amount of data ingested and estimate from there.
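As a rough sketch of that estimate (assuming boto3, and that Enhanced Monitoring writes to the standard RDSOSMetrics log group; stored bytes are only a proxy for ingested bytes, so treat the result as an approximation):

```python
# Sketch: rough Enhanced Monitoring cost estimate from CloudWatch Logs.
import boto3

logs = boto3.client('logs')
resp = logs.describe_log_groups(logGroupNamePrefix='RDSOSMetrics')

for group in resp['logGroups']:
    gb = group.get('storedBytes', 0) / (1024 ** 3)
    print(f"{group['logGroupName']}: ~{gb:.3f} GB stored, "
          f"~${gb * 0.50:.2f} at $0.50/GB ingested")
```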
Personally, logging at a 1-minute interval for a single RDS DB costs me close to $0.00.
I'm using Cloud Dataflow streaming pipelines to insert events received from Pub/Sub into a BigQuery dataset. I need a few of them to keep each job simple and easy to maintain.
My concern is the overall cost. The volume of data is not very high, and during a few periods of the day there isn't any data (no messages on Pub/Sub).
I would like Dataflow to scale to 0 workers until a new message is received, but it seems that the minimum number of workers is 1.
So the minimum price for each job per day would be 24 vCPU hours, i.e. at least $50 a month per job (without the discount for monthly usage).
I plan to run and drain my jobs via the API a few times per day to avoid one full-time worker, but this does not seem like the right approach for a managed service like Dataflow.
Is there something I missed?
Dataflow can't scale to 0 workers, but your alternatives would be to use Cron or Cloud Functions to create a Dataflow streaming job whenever an event triggers it; for stopping the Dataflow job by itself, you can read the answers to this question.
You can find an example here for both cases (Cron and Cloud Functions); note that Cloud Functions is no longer in Alpha release, and since July it has been in General Availability.
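As a rough sketch of the Cloud Functions side, assuming you have a Dataflow template already staged in GCS and the function runs with a service account that can launch jobs (project ID, template path and job name are placeholders):

```python
# Sketch: a Cloud Function that launches a Dataflow job from a template.
from googleapiclient.discovery import build

def launch_dataflow(event, context):
    # Uses the runtime's default credentials.
    service = build('dataflow', 'v1b3')
    service.projects().templates().launch(
        projectId='my-project',                          # hypothetical
        gcsPath='gs://my-bucket/templates/my-template',  # staged template
        body={'jobName': 'triggered-job'},
    ).execute()
```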
A streaming Dataflow job must always have at least one worker. If the volume of data is very low, perhaps batch jobs fit the use case better: using a scheduler or cron, you can periodically start a batch job to drain the topic, and this will save on cost.
AWS Lambda seems nice for running stress tests.
I understand that it should be able to scale up to 1,000 instances, and that you are charged per 0.1 s rather than per hour, which is handy for short stress tests. On the other hand, automatic scaling gives you even less control over costs than EC2. I understand that Amazon doesn't allow explicit budgets, since a budget could bring down a website in its moment of fame. However, for development, having an explicit budget would be nice.
Is there a workaround, or are there best practices, for managing the cost of AWS Lambda services during development? (For example, reducing the maximum time per request.)
Yes, every AWS Lambda function has a setting for defining maximum duration. The default is a few seconds, but this can be expanded to 5 minutes.
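For instance, a minimal sketch with boto3 that caps a function's maximum duration (the function name is a placeholder):

```python
# Sketch: capping a Lambda function's maximum run time to limit cost.
import boto3

lam = boto3.client('lambda')
lam.update_function_configuration(
    FunctionName='my-stress-test-fn',  # hypothetical function name
    Timeout=10,                        # seconds per invocation
)
```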
AWS also has the ability to define Budgets and Forecasts so that you can set a budget per service, per AZ, per region, etc. You can then receive notifications at intervals such as 50%, 80% and 100% of budget.
You can also create Billing Alarms to be notified when expenditure passes a threshold.
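A minimal sketch of such a billing alarm with boto3 (billing metrics are only published in us-east-1; the SNS topic ARN is a placeholder):

```python
# Sketch: a CloudWatch billing alarm that fires past a dollar threshold.
import boto3

cw = boto3.client('cloudwatch', region_name='us-east-1')
cw.put_metric_alarm(
    AlarmName='monthly-spend-over-50-usd',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                  # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=50.0,
    ComparisonOperator='GreaterThanThreshold',
    # Hypothetical SNS topic for notifications:
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts'],
)
```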
AWS Lambda comes with a monthly free usage tier that includes 3 million seconds of time (at 128MB of memory).
It is unlikely that you will experience high bills with AWS Lambda if it is being used for its correct purpose, which is running many small functions (rather than for long-running purposes, for which EC2 is better).
My application relies heavily on AWS services, and I am looking for an optimal solution based on them. The web application triggers a scheduled job (assume it repeats indefinitely) which requires a certain amount of resources to be performed. A single run of the task normally takes at most 1 minute.
The current idea is to pass jobs via SQS and spawn workers on EC2 instances depending on the queue size. (This part is more or less clear.)
But I am struggling to find a proper solution for actually triggering the jobs at certain intervals. Assume we are dealing with 10,000 jobs. Having a scheduler run 10k cron jobs at the same time (the job itself is quite simple: just passing the job description via SQS) seems like a crazy idea. So the actual question is: how do I autoscale the scheduler itself (given scenarios where the scheduler is restarted, a new instance is created, etc.)?
Or is the scheduler redundant as an app, and is it wiser to rely on AWS Lambda functions (or other services providing scheduling)? The problem with using Lambda functions is their limits, and the 128 MB of memory provided by a single function is actually too much (20 MB seems like more than enough).
Alternatively, the worker itself can wait for a certain amount of time and notify the scheduler that it should trigger the job one more time. Let's say the frequency is 1 hour:
1. Scheduler sends job to worker 1
2. Worker 1 performs the job and after one hour sends it back to Scheduler
3. Scheduler sends the job again
The issue here, however, is the possibility that the worker will get scaled in.
Bottom line: I am trying to achieve a lightweight scheduler which would not require autoscaling and would serve as a hub with the sole purpose of transmitting job descriptions. And it certainly should not get throttled on service restart.
Lambda is perfect for this. You have a lot of short-running processes (~1 minute), and Lambda is for short processes (up to five minutes nowadays). It is very important to know that CPU speed is coupled linearly to RAM: a 1 GB Lambda function is equivalent to a t2.micro instance, if I recall correctly, and 1.5 GB of RAM means 1.5x the CPU speed. The cost of these functions is so low that you can just execute this. A 128 MB function has 1/8 the CPU speed of a micro instance, so I do not actually recommend using those.
As a queueing mechanism you can use S3 (yes you read that right). Create a bucket and let the Lambda worker trigger when an object is created. When you want to schedule a job, put a file inside the bucket. Lambda starts and processes it immediately.
Now you have to respect some limits. This way you can only have 100 workers at the same time (the default limit on concurrently active Lambda instances), but you can ask AWS to increase this.
The costs are as follows:
$0.005 per 1,000 PUT requests, so $5 per million job requests (this is more expensive than SQS).
The Lambda runtime. Assuming normal t2.micro CPU speed (1 GB RAM), this costs $0.0001 per job (60 seconds; the first 300,000 seconds are free = 5,000 jobs).
The Lambda requests. $0.20 per million triggers (first million is free)
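Plugging those figures into a quick back-of-the-envelope estimate (these are the numbers quoted above, not current list prices, and the free tiers are ignored):

```python
# Back-of-the-envelope monthly estimate using the figures quoted above.
jobs_per_month = 100_000

s3_puts  = jobs_per_month / 1_000_000 * 5.00   # $5 per million PUT requests
runtime  = jobs_per_month * 0.0001             # per 60 s job at 1 GB RAM
triggers = jobs_per_month / 1_000_000 * 0.20   # $0.20 per million triggers

print(f"S3 PUTs:  ${s3_puts:,.2f}")
print(f"Runtime:  ${runtime:,.2f}")
print(f"Triggers: ${triggers:,.2f}")
print(f"Total:    ${s3_puts + runtime + triggers:,.2f}")
```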
This setup does not require any servers on your part. It cannot go down (unless AWS itself does).
(Don't forget to delete the job from S3 when you're done.)
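A minimal sketch of such a worker, assuming the standard S3 event notification format (the processing logic is a hypothetical placeholder):

```python
# Sketch: a Lambda worker triggered by S3 object creation.
import boto3

s3 = boto3.client('s3')

def process(job_bytes):
    # Hypothetical placeholder for the actual job logic.
    print('processing', len(job_bytes), 'bytes')

def handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = record['s3']['object']['key']

        # Fetch the job description that was dropped into the bucket.
        job = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
        process(job)

        # Delete the object so the "queue" stays clean.
        s3.delete_object(Bucket=bucket, Key=key)
```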