GCP's Extend Memory in AWS

In GCP there is a "Custom machine type" option that lets you select a custom number of CPUs and amount of RAM. There is also an "Extend Memory" checkbox, and with it Google allows increasing the RAM to more than 300 GB per core.
In AWS there are memory-optimized instance types that let you start instances with 8 GB per core.
Is there any way to do this in AWS? I need more than 8 GB per core; it doesn't have to be as much RAM as Google gives, but I need more than 8 GB, e.g. 14 GB per core.

AWS supports the instance types that are listed on the EC2 Instance Types page, and only in those configurations.
The closest thing in AWS is Fargate, the container runtime, where you pick a CPU/RAM combination that fits your container – but it's nothing like what you can do in GCP (and the max RAM per core is even lower than what you can get in EC2).
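If you want to see which of the listed EC2 instance types come closest to a given memory-per-vCPU ratio, you can query the instance-type catalog instead of reading the page by hand. Here is a minimal sketch with boto3; the 14 GiB/vCPU threshold is just the figure from the question, and credentials/region are assumed to be configured:

```python
import boto3

# List EC2 instance types offering at least ~14 GiB of memory per vCPU.
# Assumes AWS credentials and a default region are already configured.
ec2 = boto3.client("ec2")

matches = []
paginator = ec2.get_paginator("describe_instance_types")
for page in paginator.paginate():
    for it in page["InstanceTypes"]:
        vcpus = it["VCpuInfo"]["DefaultVCpus"]
        mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
        if mem_gib / vcpus >= 14:
            matches.append((it["InstanceType"], vcpus, mem_gib))

for name, vcpus, mem_gib in sorted(matches, key=lambda m: m[2] / m[1]):
    print(f"{name}: {vcpus} vCPUs, {mem_gib:.0f} GiB ({mem_gib / vcpus:.1f} GiB/vCPU)")
```

You still have to pick one of the returned shapes as-is; there is no way to dial in an arbitrary RAM-per-core value the way GCP's custom machine types allow.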

Related

GCP machines with a 1:2 ratio of CPU/RAM

I was wondering if there are any machines on GCP that have twice as much RAM as vCPUs? I checked the list they provide (https://cloud.google.com/compute/docs/machine-types) but I don't see anything with a ratio like 4 vCPU / 8 GB RAM; the available 2 vCPU / 8 GB or 8 vCPU / 8 GB options are just a waste of resources for me at the moment.
There are no such pre-defined instance types, but you can easily create a Custom machine type with the desired amount of RAM and number of CPUs.
Just pick the Custom machine type option while creating the instance and configure it as needed.
Here is an example of such a configuration in the Google Cloud Console web UI:
Also, please consider checking the respective docs to better understand the capabilities and restrictions of the custom machine types.
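If you would rather script this than click through the Console, the same thing can be done by specifying a custom-CPUS-MEMORY_MB machine type via the gcloud CLI or the Compute Engine API. A rough sketch with the google-cloud-compute Python client; the project, zone, VM name, image, and the 4 vCPU / 8 GB shape are placeholders:

```python
from google.cloud import compute_v1

# Create a VM with a custom machine type: 4 vCPUs and 8 GB (8192 MB) of RAM.
# Roughly equivalent CLI: gcloud compute instances create my-vm --custom-cpu=4 --custom-memory=8GB
project, zone, name = "my-project", "us-central1-a", "my-vm"  # placeholders

boot_disk = compute_v1.AttachedDisk(
    boot=True,
    auto_delete=True,
    initialize_params=compute_v1.AttachedDiskInitializeParams(
        source_image="projects/debian-cloud/global/images/family/debian-12",
        disk_size_gb=10,
    ),
)
instance = compute_v1.Instance(
    name=name,
    machine_type=f"zones/{zone}/machineTypes/custom-4-8192",  # custom-<vCPUs>-<memory MB>
    disks=[boot_disk],
    network_interfaces=[compute_v1.NetworkInterface(network="global/networks/default")],
)

compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
).result()  # wait for the create operation to finish
```

Note that custom machine types have per-family limits on memory per vCPU (and memory must be a multiple of 256 MB), so check the docs linked above before choosing a shape.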

Why am I being charged for N1 Predefined Instance Ram?

I'm new to GCP and I confess I don't understand all of the billing.
I'm being charged twice for my instance, as you can see in the following image.
First for my Instance Core, which is fine, but then also for Instance Ram. I did some research and found that this can happen when I use custom RAM on my instance.
The following screenshot shows how to find out whether I'm using more vCPUs than the pre-defined options allow.
As you can see, I'm only using 1 reserved vCPU.
That matches the pre-defined configuration of n1-standard-1.
Is this charge correct? If so, is there a way to prevent it while using n1-standard-1? How?
I am following up on your concern about how your instances are being charged. You can verify the pricing for predefined vCPUs and memory; that is the actual price of the service and is more reliable than the Pricing Calculator, which only gives you an estimate.
VM instance pricing is in this link. It shows the cost of the N1 standard predefined machine types. The vCPUs and memory of each machine type are billed at their individual predefined vCPU and memory prices, so even a predefined instance such as n1-standard-1 appears on the bill as two line items: one for cores and one for RAM.
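To make the split concrete, here is roughly how an n1-standard-1 hour breaks down, using example us-central1 on-demand rates; check the pricing page for the current figures in your region:

```python
# Rough breakdown of an n1-standard-1 bill: cores and RAM are separate SKUs.
# Prices below are example us-central1 on-demand rates and may have changed.
VCPU_PRICE_PER_HOUR = 0.031611   # N1 predefined vCPU, $/vCPU-hour
RAM_PRICE_PER_HOUR = 0.004237    # N1 predefined memory, $/GB-hour

vcpus, ram_gb = 1, 3.75          # n1-standard-1 shape
core_cost = vcpus * VCPU_PRICE_PER_HOUR
ram_cost = ram_gb * RAM_PRICE_PER_HOUR
print(f"core: ${core_cost:.4f}/h, ram: ${ram_cost:.4f}/h, total: ${core_cost + ram_cost:.4f}/h")
# -> core: $0.0316/h, ram: $0.0159/h, total: $0.0475/h (two line items, one instance)
```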
I suggest you check the complete pricing matrix for all of our services so you'll have an idea of what the actual charges will be for your projects, and so you can choose a different instance type or memory configuration than the pre-defined n1-standard-1.
If you are interested in discounts, this document explains three types of discounts: sustained use discounts, committed use discounts, and discounts for preemptible VM instances.
Also the Google Cloud Free Tier gives you free resources to learn about Google Cloud services by trying them on your own.

What is the number of cores in the aws.data.highio.i3 Elastic Cloud instance given for a 14-day trial period?

I want to make some performance calculations, so I need to know the number of cores that this aws.data.highio.i3 instance deployed by Elastic Cloud on AWS has. I know it has 4 GB of RAM, so if anyone can help me with the number of cores that would be very helpful.
I am working with Elasticsearch deployed on Elastic Cloud, and my use case requires roughly 40 million writes a day, so I would also appreciate suggestions for machines that fit this use case and are I/O optimized as well.
The instance used by Elastic Cloud for aws.data.highio.i3 in the background is i3.8xlarge, see here. That means it has 32 virtual CPUs or 16 cores, see here.
But you don't own the instance in Elastic Cloud; from the reference hardware page:
Host machines are shared between deployments, but containerization and guaranteed resource assignment for each deployment prevent a noisy neighbor effect.
Each ES process runs on a large multi-tenant server with resources carved out using cgroups, and ES scales the thread pool sizing automatically. You can see the number of times the CPU was throttled by the cgroups if you go to Stack Monitoring -> Advanced and scroll down to the Cgroup CPU Performance and Cgroup CFS Stats graphs.
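If you prefer checking this programmatically rather than in the Stack Monitoring UI, the same cgroup counters are exposed in the nodes stats API under os.cgroup. A small sketch; the endpoint URL and credentials are placeholders for your own deployment:

```python
import requests

# Read cgroup CPU throttling counters from the Elasticsearch nodes stats API.
# ES_URL and the credentials are placeholders for your Elastic Cloud deployment.
ES_URL = "https://my-deployment.es.us-east-1.aws.found.io:9243"
resp = requests.get(f"{ES_URL}/_nodes/stats/os", auth=("elastic", "<password>"))
resp.raise_for_status()

for node_id, node in resp.json()["nodes"].items():
    cpu_stat = node.get("os", {}).get("cgroup", {}).get("cpu", {}).get("stat", {})
    throttled = cpu_stat.get("number_of_times_throttled", 0)
    throttled_s = cpu_stat.get("time_throttled_nanos", 0) / 1e9
    print(f'{node["name"]}: throttled {throttled} times, {throttled_s:.1f} s total')
```

A steadily climbing throttle count means you are regularly hitting the CPU slice carved out for your deployment.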
That being said, if you need full CPU availability all the time, you are better off with the AWS Elasticsearch service or hosting your own cluster.

Running Map Reduce on a data set of around 10 GB on AWS

I want to store around 10 GB of data on AWS services and use map reduce to process the data.
Is using EC2 the best option? I want to use the free tier, and it specifies a maximum of 613 MB (of memory) for free EC2 usage, which does not satisfy my requirement. I am doing a hobby project and my expenses are limited.
The free tier FAQ also talks about AWS EBS with 30 GB of free storage. Can I use MapReduce on EBS too, since AFAIK EMR only runs on EC2?
Does anyone know of any other alternatives I could use for this?
Try the AWS Simple Monthly Calculator, located at http://calculator.s3.amazonaws.com/calc5.html#s=EMR, to get a feel for how much your project will cost on AWS.
The recommended workflow for EMR is to store your data in an S3 bucket. So in the calculator, click S3 on the left and enter 10 GB in the form. The price for S3 storage is about $0.10 per GB per month, so 10 GB costs about $1.00/month.
Then click Amazon Elastic MapReduce on the left. The form lets you select the predicted number of instances, the expected usage in hours per week or per month, and the instance type your project needs. For example, a project that needs 20 hrs/week on 1 Small EC2 instance is estimated at around $6.00. Micro instances do not seem to be offered with EMR.
Therefore, if you think you can get by with a Small instance and you plan to use it infrequently, your expenses might be under $10 per month.
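Pulling those figures together as a quick sanity check (these are the same approximate numbers used above; actual prices vary by region and change over time):

```python
# Back-of-the-envelope monthly estimate for a small EMR hobby project,
# reusing the approximate figures from the calculator walkthrough above.
s3_gb = 10
s3_price_per_gb_month = 0.10          # ~$0.10 per GB-month
emr_small_estimate = 6.00             # calculator estimate for ~20 hrs/week on 1 Small instance

s3_cost = s3_gb * s3_price_per_gb_month          # ≈ $1.00/month
total = s3_cost + emr_small_estimate
print(f"S3 ≈ ${s3_cost:.2f}/mo, EMR ≈ ${emr_small_estimate:.2f}/mo, total ≈ ${total:.2f}/mo")
```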
To reduce expenses even further, you could use spot instances rather than standard instances, as explained here: http://aws.amazon.com/ec2/spot-instances/#7.

Computing power of AWS Elastic Beanstalk instances

I have a CPU-intensive application that I'm considering hosting on 1+ AWS Elastic Beanstalk instances. If at all possible, I'd like to throttle it so that I don't go over the "free" utilization of the instances.
So I need to figure out what kind of hardware/virtualized hardware the Beanstalk instances are running on, and compare that to the maximum CPU utilization of the free versions.
So for instance, if each Beanstalk instance is running on, say, a 2 GHz CPU, and my app performs a specific "supercalc" operation that takes 50 million CPU operations, but the free version of the instance only allows me to utilize 100 billion operations per day, then I am limited to 100 billion / 50 million = 2,000 "supercalcs" per day on a free instance. And since a 2 GHz CPU can do 2 billion / 50 million = 40 supercalcs per second, my app instance could only run flat out for 2,000 / 40 = 50 seconds before I've "maxed out" the free CPU utilization on the Beanstalk instance.
This is probably not a great example, but hopefully illustrates what I'm trying to achieve. I need to figure out how much I need to throttle my app, or how long my app could run before I max out the Beanstalk CPU utilization, and it really comes down to how beefy the AWS Beanstalk machines are. Thanks in advance!
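Just to spell that hypothetical out as arithmetic (all of these numbers are the made-up figures from the example above, not real AWS limits):

```python
# Hypothetical figures from the example above, not real AWS limits.
free_ops_per_day = 100e9          # imagined "free" operation budget
ops_per_supercalc = 50e6
cpu_ops_per_second = 2e9          # a 2 GHz core doing one operation per cycle

supercalcs_per_day = free_ops_per_day / ops_per_supercalc       # 2,000
seconds_to_max_out = free_ops_per_day / cpu_ops_per_second      # 50 seconds
print(supercalcs_per_day, seconds_to_max_out)
```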
Amazon EC2 instances aren't based on a "CPU utilisation" billing system (I think Google App Engine is?) - EC2 instance billing is based on the amount of time the machine is "on", regardless of what it is doing. See the Amazon EC2 Pricing for the amount it costs to run the different instance sizes in different regions.
There is a special case which is the "Micro" instance - this provides the ability to have short bursts of higher CPU usage than the "small" instance at a lower cost, but if you overuse it you get throttled back for a period (which you don't with a Small). This isn't the same as having an operation limit though, and the price remains the same whether you're throttled or not.
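Since billing is time-based rather than operation-based, the practical way to decide how much to throttle (and to spot Micro-instance throttling) is to watch the instance's CPU metrics in CloudWatch. A minimal sketch with boto3; the instance ID is a placeholder for whatever instance Beanstalk launched for you:

```python
import boto3
from datetime import datetime, timedelta, timezone

# Average and peak CPUUtilization over the last 24 hours for one EC2 instance
# (e.g. the instance behind your Elastic Beanstalk environment).
# "i-0123456789abcdef0" is a placeholder instance ID.
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,                 # one datapoint per hour
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f'avg {point["Average"]:.1f}%', f'max {point["Maximum"]:.1f}%')
```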
Also note that with Elastic Beanstalk you're also paying for the Elastic Load Balancer, any storage and bandwidth, and any database service you are using.
Given all that though - AWS does have a Free Tier - however this is only for the first 12 months of a new account. The Free Tier will cover the cost of a micro EC2 instance, Elastic Loadbalancer, RDS database and other ancillary services - see the link for more info.