Increasing compute power temporarily on AWS - amazon-web-services

I have an Amazon EC2 Micro instance running using EBS storage. This more than meets my needs 99.9% of the time; however, I need to perform a very intensive database operation as a one-off, which kills the Micro instance.
Is there a simple way to restart the exact same instance with a lot more power for a temporary period, and then revert to the Micro instance when I'm done? This seemed more than possible under Amazon's cloud-based model, but it doesn't appear to be simply a matter of shutting down and restarting with more power, as I first thought it might be.

If you are running the database operation manually, you can just create an image of the server, launch a Small or High-CPU instance from that same image, run the database operation, then create a new image and launch it again as a Micro instance. You can also automate this process by writing scripts against the AWS APIs.
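If you wanted to script that flow, a minimal boto3 sketch might look like this (the instance id, image name, and larger instance type are placeholders, not values from the question):

    import boto3

    ec2 = boto3.client('ec2')

    # Snapshot the current server as an AMI.
    image = ec2.create_image(InstanceId='i-0123456789abcdef0', Name='db-op-image')
    ec2.get_waiter('image_available').wait(ImageIds=[image['ImageId']])

    # Launch a more powerful instance from the same image for the heavy job.
    ec2.run_instances(ImageId=image['ImageId'], InstanceType='c1.medium',
                      MinCount=1, MaxCount=1)
    # ...run the database operation, re-image, and relaunch as a Micro when done.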

If you're using an EBS-backed AMI, you don't have to create a new image and relaunch it. Just stop the machine and issue a single EC2 API command to change the instance type:
ec2-modify-instance-attribute --instance-type <instance_type> <instance_id>
Keep in mind that not all instance types work for every AMI. The applicable instance types depend on the machine itself and the kernel. You can find a list of available instance types here: http://aws.amazon.com/ec2/instance-types/
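If you would rather script the resize, the same stop/modify/start cycle in boto3 might look like this (a minimal sketch; the instance id and target type are placeholders, and the instance must be EBS-backed):

    import boto3

    ec2 = boto3.client('ec2')
    instance_id = 'i-0123456789abcdef0'

    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter('instance_stopped').wait(InstanceIds=[instance_id])

    # The boto3 equivalent of ec2-modify-instance-attribute --instance-type.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={'Value': 'm1.small'})

    ec2.start_instances(InstanceIds=[instance_id])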

Related

How to take a backup of EC2 instance in AWS and move to a low cost alternative?

We have an EC2 instance running in AWS. It holds our ML algorithms and data, and we have also hosted a web-based interface on that machine.
There are now no new developments happening on that EC2 instance. We would like to terminate our AWS subscription for a short period of time (for cost reduction and to explore new cloud services). Most importantly, we want to be in a position where we can purchase a new EC2 instance with a fresh AWS subscription, use the backup which we take now, and resume all operations (web backend, SMS services for our app hosted in AWS, etc.).
What is the best way to do it? Is temporary termination of AWS subscription advisable?
There is no concept of an "AWS Subscription". AWS is charged on-demand, which means you only pay when you use resources.
If you temporarily do not want the Amazon EC2 instance, you could:
Stop the instance, which is like turning off the power. You will not be charged for the instance, but you will still pay for the disk storage attached to it. You can simply Start the instance again when you wish to use it; you are only charged while the instance is running.
Alternatively, create an image of the instance, then terminate it. This will create an Amazon Machine Image (AMI), which contains a copy of the disks. You can then launch a new Amazon EC2 instance from the AMI when you wish to use it again. This is a lower-cost option than simply stopping the instance, but it takes more effort to stop and start.
It is quite common for companies to stop Amazon EC2 instances at night or over the weekend to reduce costs while they are not needed.
EDIT: I thought of a third option and tested it, but it is not worth it: it would involve creating an image from the EC2 instance, converting that image to a VM image, and storing the VM image in S3. There may be some advantages to this, but I do not see them.
I think you have two options, both of them very reasonably priced.
If you can separate the data from the operating system, your best option is to use an S3 bucket as a file system within the EC2 instance. Your EC2 instance would use this bucket to store all your "ML algorithms and data" and possibly even your "web-based interface". Whenever you decide that you no longer need the processing capacity of the EC2 instance, you would unmount the S3 bucket file system from the instance and terminate it. After you configure an appropriate lifecycle rule for the S3 bucket, it would transition to Glacier, or even Glacier Deep Archive [you must consider the different options for long-term storage]. In the future, whenever you want to work with your data again, you would move your data from Glacier back to S3, create a new EC2 instance, install your applications, mount your S3 bucket as a file system, and you would have access to all your data. I think this is your least expensive option and the one with the shortest recovery time objective. To implement it, look at my answer to this question; everything you need to use an S3 bucket as a regular folder inside the EC2 instance is there.
The second option provides an integrated solution, meaning the operating system and the data stay together, and allows you to restore everything as it was the day you stopped processing your data. It's made up of the following cycle:
Shut down your EC2 instance and make a note of all the specs [you will need them further down].
Export your instance to a virtual image, VMDK for example, and store it in your S3 bucket. Something like this:
    aws ec2 create-instance-export-task --instance-id i-0d54b0682aa3998a0 --target-environment vmware --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=sm-vm-backup,S3Prefix=vms
Configure an appropriate lifecycle rule for the S3 bucket so that the export transitions to Glacier, or even Glacier Deep Archive (see the sketch after these steps).
Terminate the EC2 instance.
In the future you will need to implement the inverse, so you will first need to restore the archived S3 object [make sure you can live with the time AWS needs to do this].
Import the virtual image as an EC2 AMI, something like this [this is not complete - you will need some more of the options that you saved above]:
    aws ec2 import-image --disk-containers Format=ova,UserBucket="{S3Bucket=sm-vm-backup,S3Key=vmsexport-i-0a1c382e740f8b0ee.ova}"
Create an EC2 instance based on the image and you're back in business.
Obviously you should do some trial runs, and even automate the entire process if it is something that will be done frequently. I have a feeling, based on what you said, that the first option is the better one, provided you can easily reinstall whatever applications you use.
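For step 3 (the lifecycle rule), a minimal boto3 sketch might look like this; the bucket name and prefix match the export example above, while the 30-day delay is an arbitrary assumption:

    import boto3

    s3 = boto3.client('s3')
    s3.put_bucket_lifecycle_configuration(
        Bucket='sm-vm-backup',
        LifecycleConfiguration={'Rules': [{
            'ID': 'archive-vm-exports',
            'Filter': {'Prefix': 'vms'},
            'Status': 'Enabled',
            # Use 'GLACIER' instead if Deep Archive restores are too slow for you.
            'Transitions': [{'Days': 30, 'StorageClass': 'DEEP_ARCHIVE'}],
        }]},
    )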
I'm assuming that you launched an EC2 instance from a base Amazon Machine Image and then added your own software and models to it, as opposed to launching an EC2 instance from an AWS Marketplace offering.
The simplest thing to do is to create an Amazon Machine Image (AMI) from your running EC2 instance. That will capture the current state of the instance and persist it in your AWS account. Then you can terminate the instance. Later, when you want to recreate it, launch a new instance, selecting the saved AMI instead of a standard AMI.
An alternative is to avoid the need to capture machine state at all, by using standard DevOps practices to revision-control everything you need to recreate the state of a running machine.
Note that there are costs associated with an AMI, though they are minimal ($0.05 per GB-month of data stored, for example).
I had contacted AWS customer care regarding this issue. Given below is the response I received. Please add your comments on which option might be good for me.
Note: I acknowledge the AWS customer care team for their help.
I understand that you require some information on cost saving for your Instance since you will not be utilizing the service for a while.
To assist you with this I would recommend checking out the Instance Stop/Start link here:
==> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/Stop_Start.html
When you stop an Instance, you do not lose any data & you are not charged for the resources any further. However please keep in mind that you will still be charged for any EBS Storage Volumes attached to the stopped Instance(s).
I also recommend checking out the below links on how you can reduce your costs.
==> https://aws.amazon.com/premiumsupport/knowledge-center/reduce-aws-bill/
==> https://aws.amazon.com/blogs/compute/10-things-you-can-do-today-to-reduce-aws-costs/
That being said, please note that as I am in the billing department, for the best assistance with the various plans you will require the assistance of our Sales Team.
The Sales Team will be able to assist with ways to save while maintaining your configurations. You will be able to reach the Sales Team here:
==> https://aws.amazon.com/websites/contact-us/
Once you have completed the details in the link, a member of the team will be in touch with you at their soonest.

What does the deploy command do in AWS SageMaker?

I am curious to know what the model.deploy command actually does in the background when run in an AWS SageMaker notebook.
For example:
predictor = sagemaker_model.deploy(initial_instance_count=9,instance_type='ml.c5.xlarge')
Also, what is happening in the background during SageMaker endpoint autoscaling? It is taking too long, almost 10 minutes, to launch new instances, by which time most of the requests get dropped or go unprocessed, and I am also getting connection timeouts while load testing through JMeter. Is there any way to boot up faster, or a golden-AMI kind of thing in SageMaker?
Are there any other means by which this issue can be solved?
The docs mention what the deploy method does: https://sagemaker.readthedocs.io/en/stable/model.html#sagemaker.model.Model.deploy
You could also take a look at the source code here: https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/model.py#L377
Essentially the deploy method hosts your model on a SageMaker Endpoint, launching the number of instances using the instance type that you specify. You can then invoke your model using a predictor: https://sagemaker.readthedocs.io/en/stable/predictors.html
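Putting that together, the whole deploy-invoke-teardown cycle is roughly this (a sketch assuming sagemaker_model is an already-built sagemaker.model.Model and payload matches your model's expected input format):

    # Host the model on a SageMaker endpoint (this is the slow, ~10-minute step).
    predictor = sagemaker_model.deploy(initial_instance_count=1,
                                       instance_type='ml.c5.xlarge')

    result = predictor.predict(payload)   # synchronous HTTPS call to the endpoint

    predictor.delete_endpoint()           # stop paying when you are done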
For autoscaling, you may want to consider lowering your threshold for scaling out so that the additional instances start to be launched earlier. This page offers some good advice on how to determine the RPS your endpoint can handle. Specifically, you may want to have a lower SAFETY_FACTOR to ensure new instances are provisioned in time to handle your expected traffic.
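As a sketch of what that configuration looks like with boto3's Application Auto Scaling API (the endpoint name, variant name, and target value below are placeholder assumptions; derive the target from your own load tests):

    import boto3

    client = boto3.client('application-autoscaling')
    resource_id = 'endpoint/my-endpoint/variant/AllTraffic'

    client.register_scalable_target(
        ServiceNamespace='sagemaker',
        ResourceId=resource_id,
        ScalableDimension='sagemaker:variant:DesiredInstanceCount',
        MinCapacity=1,
        MaxCapacity=9,
    )

    client.put_scaling_policy(
        PolicyName='scale-out-early',
        ServiceNamespace='sagemaker',
        ResourceId=resource_id,
        ScalableDimension='sagemaker:variant:DesiredInstanceCount',
        PolicyType='TargetTrackingScaling',
        TargetTrackingScalingPolicyConfiguration={
            # A lower target (i.e. a lower SAFETY_FACTOR) leaves more headroom,
            # so new instances start launching before requests get dropped.
            'TargetValue': 50.0,
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'SageMakerVariantInvocationsPerInstance',
            },
            'ScaleOutCooldown': 60,
            'ScaleInCooldown': 300,
        },
    )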

Using Amazon G2 Instance to advance dedicated server for GPU tasks

I recently learned about using Amazon's G2 instances to perform calculations for game development (e.g. lightmass calculation) from this tutorial: http://gofreak.tumblr.com/post/87018936015/unreal-engine-4-developing-in-the-cloud
I was wondering if it would be possible to get a Spot instance of a g2.2xlarge, install all the required tools (Visual Studio, the engine itself, etc.), use the instance for GPU tasks, and then save the instance (or just the storage used by the engine, where all the files are, such as an EBS volume, which I have been unable to claim since I cannot find an option to create one) on my server.
So, my questions are:
Can I store files on my Windows server and use these files in a spot instance when required?
Is it possible to save a "clone" of the spot instance on my server (or on Amazon, depending on price) to be re-used with another spot instance?
As long as you stop the instance instead of terminating it, all the files within the instance will be saved, regardless of the instance type. While the instance is stopped you will still be paying for storage, but that is a tiny fraction of the cost compared to actually running the instance.
To "save a clone" You can do a storage snapshot or create your own AMI image, so data you created can be reused.
You might also consider automating the provisioning of the instance, so it would automatically install and configure all the software required within the instance, so you can repeatedly recreate to instance configuration you need without storing images/snapshots.
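A minimal boto3 sketch of that last approach (the AMI id and user-data script are placeholders; on a Windows AMI the script goes inside <powershell> tags):

    import boto3

    ec2 = boto3.client('ec2')

    user_data = """<powershell>
    # install/configure the engine and tools here
    </powershell>"""

    ec2.run_instances(
        ImageId='ami-xxxxxxxx',        # placeholder: a Windows base AMI
        InstanceType='g2.2xlarge',
        MinCount=1, MaxCount=1,
        UserData=user_data,
        InstanceMarketOptions={'MarketType': 'spot'},  # ask for a Spot instance
    )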

Is it possible to auto scale with amazon web services, with ever changing AMI's?

Curious if this is possible:
We have a web application that at MOST times works just fine with our single small instance. However, when multiple customers run intense queries simultaneously (we are a cloud scheduling service), our instance bogs down to near 80% CPU load and becomes pretty unresponsive.
Is there a way to have AWS fire up another small instance (or a few), quickly, only for the times it is operating under this intense load? BUT, the real question is: how does this work when we have very frequent programming updates to our application? Do we have to manually create a new image every time we upload a code change?
Thanks
You should never be running anything important on a single EC2 instance. Instances can--and do--go offline randomly. Always use an autoscaling (AS) group that spans multiple availability zones. An AS group will automatically bring new instances online when you hit a certain trigger (in your case, CPU utilization). And then it will scale down the instances when traffic subsides. Autoscaling is the heart and soul of AWS and if you're not using it, you might as well be using a cheaper (and more durable) VPS host.
No, you don't want to be creating a new AMI for each code release. Ideally you should use a base AMI (like one of Amazon's official ones) and have it auto-provision at boot. You can use the "user data" field when you launch an AMI to bootstrap this process. It can be anything from a simple bash script that pulls from your Git repo to something as sophisticated as Puppet or Chef.
The only time I create custom AMIs is when the provisioning process just takes too long. However, that can almost always be solved by storing the needed files in S3.
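A minimal boto3 sketch of that setup (the group/configuration names, AMI id, availability zones, and bootstrap script are all placeholders):

    import boto3

    autoscaling = boto3.client('autoscaling')

    # Launch configuration from a stock AMI; user data bootstraps the app at boot.
    autoscaling.create_launch_configuration(
        LaunchConfigurationName='web-lc',
        ImageId='ami-xxxxxxxx',
        InstanceType='t2.small',
        UserData='#!/bin/bash\ngit clone https://example.com/app.git /srv/app\n',
    )

    # Span multiple availability zones so a single failure does not take you down.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName='web-asg',
        LaunchConfigurationName='web-lc',
        MinSize=1, MaxSize=4,
        AvailabilityZones=['us-east-1a', 'us-east-1b'],
    )

    # Add instances when average CPU rises; scale-in happens automatically too.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName='web-asg',
        PolicyName='cpu-target',
        PolicyType='TargetTrackingScaling',
        TargetTrackingConfiguration={
            'PredefinedMetricSpecification': {
                'PredefinedMetricType': 'ASGAverageCPUUtilization'},
            'TargetValue': 60.0,
        },
    )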

Is it possible to do a temporary upgrade of an AWS micro instance to test what would be ok?

I have a free Micro instance on AWS, and quite often my CPU is throttled, making it very hard to use.
I want to know if there is any way to test a bigger instance so I can see which one would be OK.
Side questions:
Can I go back to the free micro if I want?
Can I limit the cost of the testing, or estimate it in advance? I don't want to end up with a surprise bill as a result of the testing.
You can of course launch a new instance of a larger size, run your tests, then terminate the instance. It will not affect your running Micro instance in any way at all.
AWS publishes their pricing data, so you can either calculate the cost manually or use the cost calculator: http://calculator.s3.amazonaws.com/calc5.html
There is no way to "cap" your AWS spend.
Mike Ryan's answer is correct as such, but there might be a better way to achieve your goal, because it is possible to upgrade your Amazon EC2 t1.micro instance in place. This process (and the few constraints) are summarized in Eric Hammond's article Moving an EC2 Instance to a Larger (or Smaller) Instance Type:
When you discover that the entry level t1.micro instance size is simply not cutting it for your growing application needs, you may want to try upgrading it to a larger instance type, perhaps an m1.small or even a c1.medium.
Instead of starting a new instance and having to configure it from scratch, you may be able to simply resize the existing instance by asking Amazon to move it to better hardware for you. Of course, since this is AWS, you don't have to actually talk to anybody—just type a few commands and the job is done automatically.
Eric describes how to achieve this via the command line, but the same can be done via the AWS Management Console as well if you prefer; the instance menu features a corresponding command, Change Instance Type (only enabled when the instance is stopped).
Alternatively, you might also want to get acquainted with how easy it is to duplicate an EBS-backed EC2 instance by means of an Amazon Machine Image (AMI), which allows you to start any number of exact duplicates of your current instance. This process is outlined in Creating Amazon EBS-Backed AMIs Using the Console, for example.