How does Google bill for Compute Engine? - google-cloud-platform

I am currently running the free trial with $300 credit.
There's one instance present in the console. Does Google bill for 'running' the instance or for 'connecting' to it via SSH?

Google charges for Compute Engine instances by the time they are running (started), according to their CPU and RAM; there are additional charges for disks and network. There are discounts for long-running instances and for commitments. Pricing information is available at https://cloud.google.com/compute/pricing
You can start and stop instances at any time, and it depends on your workload... for your use case of compiling things you may use preemptible instances, which are much cheaper: https://cloud.google.com/preemptible-vms/
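For example, assuming an instance named instance-1 in zone us-central1-a (both names are placeholders), stopping and restarting it around your workload might look like this minimal sketch:

# Stop the instance so vCPU/RAM are no longer billed
# (attached persistent disks are still billed while stopped).
gcloud compute instances stop instance-1 --zone=us-central1-a

# Start it again when you need it.
gcloud compute instances start instance-1 --zone=us-central1-a

# Create a preemptible instance for batch work such as compiling
# (the machine type and name here are examples, not recommendations).
gcloud compute instances create build-worker \
    --zone=us-central1-a \
    --machine-type=e2-standard-4 \
    --preemptible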

Related

AWS EC2 On-Demand Pricing

I'm new to AWS EC2, and I wanted to deploy a web server in it. However I'm concerned about the price because the app will only be used for a few hours per day and I saw in the AWS Calculator that there's a Utilization per month as part of the billing computation https://calculator.aws/#/createCalculator/EC2.
What does the Utilization mean? Let's say I have a running EC2 instance. How do I reduce the charges?
Does it depend on the amount of times the server APIs are invoked in the app? So for the hours that the APIs are NOT being invoked, I won't get charged?
Or
Will it keep on charging me as long as the EC2 instance is running so I should shut it down during idle hours to save up on costs?
Amazon EC2 is charged at an hourly price. The price varies by the Instance Type and the Operating System. Basically, machines with more memory and more CPUs are more expensive, and Windows is more expensive than Linux. There is also a charge for Data Transfer, which is traffic that goes out to the Internet.
If you have a small application, an alternative would be to use Amazon Lightsail, which offers a simple monthly price for both the computer and the traffic.
I've added my responses to each of your questions:
What does the Utilization mean? Let's say I have a running EC2 instance. How do I reduce the charges? - You will be charged for the time that the EC2 instance is running. Start with a t2.micro under a Free Tier account; you are allowed to run it for 750 hours a month!
Does it depend on the amount of times the server APIs are invoked in the app? So for the hours that the APIs are NOT being invoked, I won't get charged? - No, for EC2 it's the runtime that is billed, not the API calls.
Will it keep on charging me as long as the EC2 instance is running so I should shut it down during idle hours to save up on costs? - Yes, shut it down. I would also set up billing alarms to get an alert once my bill crosses a certain threshold (see the sketch below).
As long as the servers are up and running, you will be charged for them. So yes, you should shut them down during idle hours if you want to save costs.
If you just want to try it out for a simple REST API server, you can create a new account for the 12-month free tier, which basically entitles you to the smallest server running 24/7 (750 hours/month).
I've used this server for one of my smaller projects, and it was enough to serve about 100 users in total, with at most about 10 people coming and going per day. Had no problem with it.

Google Cloud Platform free tier limits for Compute Engine

In GCP, you are not notified when a virtual machine with resources higher than the free tier limit is created. An error message with the following pattern appears in the notifications. So, what are the maximum allowed resources for a Google Cloud Platform virtual machine in the free tier?
Create VM instance "instance-2" and its boot disk "instance-2"
Quota 'C2_CPUS' exceeded. Limit: 0.0 in region asia-south1.
As written in the documentation:
Compute Engine
1 non-preemptible e2-micro VM instance per month in one of the following US regions:
Oregon: us-west1
Iowa: us-central1
South Carolina: us-east1
30 GB-months HDD.
5 GB-month snapshot storage in the following regions:
Oregon: us-west1
Iowa: us-central1
South Carolina: us-east1
Taiwan: asia-east1
Belgium: europe-west1
1 GB network egress from North America to all region destinations (excluding China and Australia) per month
Your Free Tier e2-micro instance limit is by time, not by instance. Each month, eligible use of all of your e2-micro instances is free until you have used a number of hours equal to the total hours in the current month. Usage calculations are combined across the supported regions.
Google Cloud Free Tier does not include external IP addresses.
Compute Engine offers discounts for sustained use of virtual machines. Your Free Tier use doesn't factor into sustained use.
GPUs and TPUs are not included in the Free Tier offer. You are always charged for GPUs and TPUs that you add to VM instances.
NB: This is subject to change; check the link for up-to-date information.
Step-by-Step guide to create a free instance:
Create instance
Now go create the instance at https://console.cloud.google.com/compute/instancesAdd
Region: us-east1 or one of the regions indicated in the documentation.
Select General Purpose -> E2 -> e2-micro. You will see "Your first 744 hours of e2-micro instance usage are free this month".
Select Boot disk -> public image -> Ubuntu -> 20.04 LTS -> boot disk type: Standard persistent disk (HDD) -> size 30 GB (or as per the documentation).
Allow HTTP and HTTPS traffic (or don't check the boxes if you don't intend to use ports 80 and 443).
Click on Create
You can check "view billing report" to make sure you did it right.
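If you prefer the command line, a roughly equivalent gcloud command might look like the following sketch (the instance name and zone are placeholders; double-check the machine type, region and disk size against the current free tier documentation):

gcloud compute instances create free-instance \
    --zone=us-east1-b \
    --machine-type=e2-micro \
    --image-family=ubuntu-2004-lts \
    --image-project=ubuntu-os-cloud \
    --boot-disk-type=pd-standard \
    --boot-disk-size=30GB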
You can find more information in the Google Cloud Free Tier documentation:
The Google Cloud Free Tier has two parts:
A 3-month (previously 12-month) free trial with $300 of credit to use with any Google Cloud services.
Always Free, which provides limited access to many common Google Cloud resources, free of charge.
In the section 12-month, $300 free trial you can find the Program coverage details:
Your free trial credit applies to all Google Cloud resources, with the following exceptions:
You can't have more than 8 cores (or virtual CPUs) running at the same time.
You can't add GPUs to your VM instances.
You can't request a quota increase. For an overview of Compute Engine quotas, see Resource quotas.
You can't create VM instances that are based on Windows Server images.
You must upgrade your account to perform any of the actions in the preceding list.
In addition, have a look at the End of the free trial:
The free trial ends when you use all of your credit, or after 12 months, whichever happens first. At that time, the following conditions apply:
You must upgrade to a paid account to continue using Google Cloud.
All resources you created during the trial are stopped.
Any data you stored in Compute Engine is lost.
Your account enters a 30-day grace period, during which you can recover resources and data you stored in any Google Cloud services during the trial period.
You might receive a message stating that your account has been canceled, which only indicates that your account has been suspended to prevent charges.
and at the Recovering data section:
Caution: There is no automated way to recover data that you used on VM instances you created with Compute Engine. You must manually export any data that you want to keep from your Compute Engine VM instances before the trial period ends.
I recommend upgrading your account before the free trial ends.
After the free trial period ends, you just have to register a credit card to continue using their services if/when you accrue charges. If you set it up right, it might charge you $0.02 every now and then. I set up my first one with WordPress, and at first I would get charged $0.02/month, but once I updated the software and the config it rarely charges me. P.S. I started getting hack attempts pretty quickly.

Does cost of EC2 on AWS increase at the same rate as user count?

I'm getting ready to launch a mobile app that I have hosted on AWS with an EC2 instance. ($0.0464 per On Demand Linux t2.medium Instance Hour).
This past month I was charged $112 for the EC2 usage, but only had a handful of internal users testing the private version of the app. It's a fairly simple app, not anything that should require a lot of computing power.
So what I'm wondering is if 10 users and dev team costs $112/mo, what happens if I get 1,000 users, or 10k users? Would the cost increase 100x, 1000x? I can't imagine getting auto-billed for $112,000 for a month of service with a small user base like 10k users.
Thanks for any help and guidance, I don't know much about AWS.
Here are the details of my billing for last month:
The billing page shows 2,219 hours of t2.medium during this billing month.
That is the equivalent of about 92 days, so it is likely that 3 instances were running for the full month.
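As a rough sanity check on those numbers (both taken from the question; EBS and data transfer charges are not included in this estimate):

# 2219 instance-hours at the t2.medium on-demand rate of $0.0464/hour
echo "2219 * 0.0464" | bc   # ~ $102.96 of the $112 bill
echo "2219 / 24" | bc       # ~ 92 instance-days, i.e. roughly 3 instances all month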
Amazon EC2 is charged while the instance is in the Running state. If you are not using an instance, you can Stop it. The attached disks (EBS) will still be charged, but there will be no charge for the instance itself.
The charge is not based on the number of users, nor on how 'busy' the instance is. It is simply charged while the instance is 'running'. This is because computing resources (CPU, RAM) are exclusively assigned to the instance, so nobody else can use them.
Bottom line: Stop instances that you don't need. Use the smallest instance type for your use-case to reduce costs.
If you were not aware of the charges involved, you can contact AWS Customer Service and request a refund.
FYI, the T2 and T3 families are great for workloads that occasionally 'burst' but then have low-usage periods, but they are not great for sustained workloads. See: Burstable performance instances - Amazon Elastic Compute Cloud
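If you want to see exactly what is running and stop anything you don't need, here is a sketch with the AWS CLI (assuming it is configured; the instance ID is a placeholder):

# List running instances with their type and launch time
aws ec2 describe-instances \
  --filters "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].[InstanceId,InstanceType,LaunchTime]" \
  --output table

# Stop an instance you don't need (attached EBS volumes are still billed while stopped)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0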

Why am I being charged for N1 Predefined Instance Ram?

I'm new at GCP and I confess that I don't understand all of the billing.
I'm being charged twice for my instance as you can see in the following image
First for my Instance Core, okay, but then also for an Instance Ram. I did my research and discovered that this can be charged when I use custom RAM on my instance.
The following screenshot shows how to find out whether I'm using more vCPUs than the predefined options.
As you can see, I'm only using 1 reserved vCPU.
These are the predefined options of n1-standard-1.
Is this charge correct? If so, is there a way to prevent it using n1-standard-1? How?
I am following up regarding your concern about how your instances are being charged. You can verify the pricing for predefined vCPUs and memory. This is the actual price of the service and is more reliable than the Pricing Calculator, which only gives you an estimate.
VM instance pricing is at this link. It shows the cost for the N1 standard predefined machine types. The vCPUs and memory of each machine type are billed at their individual predefined vCPU and memory prices, which is why a single n1-standard-1 instance shows up as two line items: one for the core and one for the RAM.
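To confirm what a predefined machine type actually includes, you can inspect it with gcloud (the zone below is just an example):

# Shows guestCpus and memoryMb for n1-standard-1; the vCPU and
# the memory appear as separate line items on the bill.
gcloud compute machine-types describe n1-standard-1 --zone=us-central1-a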
I suggest that you check the complete pricing matrix for all of our services so you'll have an idea of the actual charges for your projects, and you can choose a different instance type or amount of memory than the predefined n1-standard-1.
If you are interested in discounts, this document explains 3 types of discounts: sustained use discounts, committed use discounts, and discounts for preemptible VM instances.
Also the Google Cloud Free Tier gives you free resources to learn about Google Cloud services by trying them on your own.

Using Google Container Engine with GCP free tier

Is it possible to use Google Container Engine with Google Cloud free tier?
(I mean the "Always Free" usage limit, not the $300 free credit)
The docs for GKE says:
The basic cluster is free but each node is charged at standard Compute Engine pricing
But Compute Engine also has a free instance. Is it possible to use them together?
Unfortunately, this is no longer a correct answer, as GKE no longer (as of December 2020, if not earlier) supports f1-micro instances for node pools because they do not have sufficient memory (as alluded to in my original answer below, where enabling Stackdriver would make the cluster unstable). Therefore, it is not possible to run a GKE cluster fully within the free tier.
Previously, this was possible. See explanation below.
Yes, you can use GKE with the free tier. GKE only charges for the underlying compute engine resources, which are directly billed by compute engine. (Note that after June 6, 2020, the free tier only includes one free GKE zonal cluster -- not an unlimited number of clusters).
GKE will likely require you to run 3 free f1-micro instances concurrently to get the cluster to a minimum size, but as long as the cluster is in one of the free regions and the total usage in a month is under the total number of hours per month it will still be free (that is, you can run 3 f1-micros for a bit under 250 hours and still be in the free tier). Make sure to shut off your instances when you are not using them. See more at https://cloud.google.com/free/docs/gcp-free-tier#always-free-usage-limits (especially the notes about the limit being time rather than instance count).
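As a quick check on the hours math (744 is the number of hours in a 31-day month, matching the Always Free allowance quoted earlier; the node count comes from the paragraph above):

# 3 f1-micro nodes sharing one monthly allowance of 744 instance-hours
echo "744 / 3" | bc   # 248 hours per node, i.e. a bit under 250 hours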
You may also want to ensure that the persistent disks are not kept around while the cluster isn't running, as the free tier only allows 30 GB-months of standard disk (e.g. three 10 GB disks) over the course of the month.
If you happen to run over the usage, you will only be charged for the usage beyond the free tier.
Of course, this all assumes that f1-micro instances are suitable to your use case. They are quite limited, and once GKE is on them, they have almost nothing remaining in terms of RAM: https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-architecture#memory_cpu
Finally, it has been my experience that setting up Stackdriver support when you create the cluster, if you only have micro instances, can cause the cluster to struggle greatly -- the Stackdriver monitoring alone (or with even minimal other applications) starts to cause the nodes to be throttled and time out.
For now it's not possible to create a Kubernetes cluster with one f1-micro. It requires a minimum of 3 f1-micro instances:
ERROR: (gcloud.container.clusters.create) ResponseError: code=400,
message=Clusters of f1-micro instances must contain at least 3 nodes.
Please make the cluster larger or use a different machine type
This is how I made mine. I created a cluster named 'free-cluster' which runs 2 nodes. These nodes are in 'us-west1-a', as the free tier only allows the us-east, us-west and us-central regions. Also, the VM instance type should be 'f1-micro', as that is the freebie they give; the rest are paid.
As is pointed out, GCP does force us to create 3 nodes, and there is no option to change this in the dashboard. But after that you can just go to the nodes and "cordon" and "drain" them so they will not consume the free compute fast (see the sketch below). You can leave just one node for the free tier; this, however, makes less sense, as you will not take advantage of load balancing, self-healing and the other features that are the reason we use Kubernetes clusters in the first place. For me, I am good testing on 2 nodes, as I only need to pay for that 1 cheap monthly f1-micro for my hobby and learning. Make sure to go to Google Compute Engine in the dashboard and open 'Instance groups' in the sidebar; you will find the VM instances in that cluster, which you can delete by selecting them and clicking the "Delete Instance" button.
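A minimal sketch of cordoning and draining a node with kubectl (the node name below is a placeholder; list your own nodes first):

# List the nodes in the cluster
kubectl get nodes

# Mark the node unschedulable, then evict the pods running on it
kubectl cordon gke-free-cluster-default-pool-12345678-abcd
kubectl drain gke-free-cluster-default-pool-12345678-abcd --ignore-daemonsets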
There is no way to get a free GKE cluster on GCP, but you can get a very cheap one by following the instructions at https://github.com/Neutrollized/free-tier-gke.
Using a combination of GKE's free management tier and a low-cost machine type, the cost estimate is less than $5 per month.
More details on what is available as part of the free tier can be found here: https://cloud.google.com/free.
tl;dr
gcloud container clusters create cheap-cluster \
--zone us-west1-a \
--node-locations us-west1-a \
--machine-type=e2-small \
--max-nodes=1 \
--num-nodes=1
As I understand it, Google allows 1 f1-micro instance to be used for free even after the 12-month free period (note that, per the free tier documentation quoted above, the Always Free instance type is now an e2-micro).