I recently learned about using Amazon's G2 instances to perform calculations for game development (e.g. lightmass calculation) according to this tutorial: http://gofreak.tumblr.com/post/87018936015/unreal-engine-4-developing-in-the-cloud
I was wondering if it would be possible to get a Spot Instance of a g2.2xlarge, install all the required tools (Visual Studio, the engine itself, etc.), use the instance for GPU tasks, and then save the instance, or at least the storage holding the engine and all its files (for example an EBS volume, though I cannot find an option to create one), on my server.
So, my questions are:
Can I store files on my Windows server and use these files in a spot instance when required?
Is it possible to save a "clone" of the spot instance on my server (or on Amazon, depending on price) to be re-used with another spot instance?
As long as you stop the instance instead of terminating it, all the files within the instance will be preserved, regardless of the instance type. While the instance is stopped you still pay for the storage, but that is a tiny fraction of the cost of actually running the instance.
To "save a clone" You can do a storage snapshot or create your own AMI image, so data you created can be reused.
You might also consider automating the provisioning of the instance, so that all the required software is installed and configured automatically at launch; that way you can repeatedly recreate the configuration you need without storing images or snapshots.
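A rough sketch of such a launch (the AMI ID and script name are placeholders; on a Windows instance the user-data script would be PowerShell wrapped in <powershell> tags):

aws ec2 run-instances --image-id ami-0123456789abcdef0 \
    --instance-type g2.2xlarge \
    --instance-market-options MarketType=spot \
    --user-data file://install-tools.ps1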
t2 and t4g instances are not compatible, so a t2 cannot be converted directly to a t4g just by changing the instance type. Is there any way to migrate a t2 instance to a t4g instance without setting everything up again from scratch?
Any instance type suffixed with g runs on an AWS Graviton chip; these use the Arm architecture, whereas your t2 runs on Intel (x86).
The base Amazon Machine Image (AMI) for a Graviton-based instance must be built for the Arm architecture for these instance types to launch, so there is no quick way to migrate the data.
From the system point of view, you will need to ensure that the packages you are currently using are also available for Arm; if they are not, you will need to look into alternatives. There might also be changes needed in your own software to make it compatible.
If you have not already, it is worth looking into a configuration-management tool such as Ansible, Chef, or Puppet, so that a new instance can be configured quickly and repeatably.
For any files that exist on disk, either use a shared NFS mount between the old and new instance (using the EFS service), or upload them to S3 and copy them down to your new instance once it has launched.
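A minimal sketch of the S3 route (the bucket name and paths are placeholders):

# On the old x86 instance: push the files up
aws s3 sync /srv/myapp s3://my-migration-bucket/myapp
# On the new Graviton instance: pull them back down
aws s3 sync s3://my-migration-bucket/myapp /srv/myapp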
Additional information on transitioning to Graviton can be found in the AWS Graviton getting started guide.
I have a very basic idea of servers. So far I have only worked with a few Ubuntu VPS servers, which I can easily maintain: install a database, upload my code, and run my projects. To store static data like images and video I use the local SSD storage of my server.
Now I have some projects where using AWS is required. In the beginning I thought it would be very similar to my normal Ubuntu-based VPS server, but as I started researching and reading articles, as well as AWS's own docs, I found it has many more features for servers, and at the same time it is a little complicated for a beginner. I would be really glad if someone would give their time and answer these questions, to clear up the concept of AWS for me and people like me.
My plan is to use one EC2 instance to run my project, but I see many experts suggest using Elastic Beanstalk and creating the EC2 instance inside that, when I could run my project directly on EC2 without Elastic Beanstalk's help. So why is it better, and what else does Elastic Beanstalk provide?
When I check the pricing of EC2 (On-Demand > Linux/Unix), it lists ECU as "Variable". What does that mean, and where does ECU come into play?
The pricing page lists Instance Storage (GB) as "EBS only". Does that mean my server comes with no storage and I must buy it separately? On my previous VPS server I used to get some storage included with the server. Storage is needed to install software like MySQL, Redis, or Python, and to upload my code or a few static images.
Like storage, do I also need to buy another instance for a database? For example, if I want to use PostgreSQL as my database, do I need to buy AWS RDS, or can I install it inside my Linux system?
Lastly, what are the main differences between my normal VPS Linux server and an AWS EC2 Linux server?
Thanks in advance for giving time :)
Let me try to answer your questions inline.
My plan is to use one EC2 instance to run my project, but I see many experts suggest using Elastic Beanstalk and creating the EC2 instance inside that, when I could run my project directly on EC2 without Elastic Beanstalk's help. So why is it better, and what else does Elastic Beanstalk provide?
If you are planning to use a single server and a database, going with EC2 and RDS would be straightforward. However, if you want autoscaling (automatically increasing the number of servers when load increases and returning to one server afterwards), load balancing, and DevOps support, you have to set those up yourself, which requires deeper knowledge of the AWS platform. AWS Elastic Beanstalk does all of this for you automatically: you select the technology of your application and simply upload the code.
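As a rough sketch of that workflow with the EB command-line interface (the application and environment names are placeholders, and the exact platform string depends on your stack):

eb init my-app --platform python-3.8 --region us-east-1
eb create my-app-env   # provisions EC2 instances, a load balancer and an autoscaling group
eb deploy              # packages and uploads the current version of your code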
When I check the pricing of EC2 (On-Demand > Linux/Unix), it lists ECU as "Variable". What does that mean, and where does ECU come into play?
ECU is simply a rough figure for comparing the processing power of EC2 instance classes that have different levels of performance.
The pricing page lists Instance Storage (GB) as "EBS only". Does that mean my server comes with no storage and I must buy it separately? On my previous VPS server I used to get some storage included with the server. Storage is needed to install software like MySQL, Redis, or Python, and to upload my code or a few static images.
EBS storage is reliable storage (with internal redundancy) that lasts beyond your instance's lifetime. That means you can upgrade the EC2 class, install software, or store files, and everything remains on the EBS volume unless you delete it.
Since you are basically paying for the GBs, you can also create another EBS volume for static files and mount it to the EC2 instance if you want.
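A minimal sketch of that (the IDs, zone, and device name are placeholders; the new device must then be formatted and mounted from inside the instance):

aws ec2 create-volume --size 20 --volume-type gp2 --availability-zone us-east-1a
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id i-0123456789abcdef0 --device /dev/sdf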
Like storage, do I also need to buy another instance for a database? For example, if I want to use PostgreSQL as my database, do I need to buy AWS RDS, or can I install it inside my Linux system?
It's not mandatory, but it is recommended, since you can then use a smaller instance for the web server and another one for the DB. It's up to you: the cost would be roughly similar whether you run two small EC2 instances as web and DB servers (or use RDS), or a single medium-size EC2 instance hosting both the DB and the web server.
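If you go the RDS route, a minimal sketch of creating a small managed PostgreSQL instance looks like this (the identifier, instance class, and credentials are placeholders):

aws rds create-db-instance \
    --db-instance-identifier mydb \
    --db-instance-class db.t3.micro \
    --engine postgres \
    --allocated-storage 20 \
    --master-username dbadmin \
    --master-user-password 'change-me'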
Lastly, what are the main differences between my normal VPS Linux server and an AWS EC2 Linux server?
You get more options in terms of the underlying hardware, since AWS provides many different configurations. In addition, EC2 instances can take advantage of the AWS ecosystem (networking, security, load balancing, and so on) for solution architectures that are better optimized for reliability, security, and performance.
Q1) Beanstalk is a management application. AWS has several: CloudFormation, OpsWorks. Third party vendors have their own: Chef, Ansible, Terraform, etc. I really like Beanstalk and how it makes deploying code very easy for small sites (one command). I can scale up or scale down with a button push. I also use CloudFormation every day for just about everything.
Q2) ECU is an EC2 Compute Unit, a figure AWS uses to compare one instance type with another. How does that translate to physical CPUs? Nobody outside AWS knows, as AWS does not publish its absolute meaning. Its only use is to compare EC2 instances.
Q3) When you launch an EC2 instance, you will need storage. This is an additional cost (around $0.10 per GB per month). You specify the size and type of storage (there are several types). There are also Instance Store volumes. Stay away from these unless you really understand how to use them (they do not survive a stop, so all data is lost). There are good use cases for Instance Store (AI, big data, image processing), but a website is not one of them.
Q4) If your EC2 instance is big enough (2 GB of memory and larger), you can install PostgreSQL, MySQL, etc. on it. Otherwise, AWS has a number of database options: DynamoDB, RDS, Aurora, etc.
Q5) Difficult to answer as each vendor offers its own set of features. EC2 instances are virtual machines. You have control over the raw power of that VM. Most VPS servers have management interfaces that EC2 does not. Usually EC2 is more expensive than VPS servers.
Watch a couple of AWS videos on YouTube. This will help you to understand AWS and why it is so successful in the cloud. Linux Academy, A Cloud Guru, etc. have very good training courses on AWS.
AWS Essentials: EC2 Basics
If you have further questions, open a new Stack Overflow question for each one. You will seldom get answers to long multi-question posts.
Curious if this is possible:
We have a web application that, most of the time, works just fine on our single small instance. However, when several customers run intense queries simultaneously (we are a cloud scheduling service), our instance bogs down to near 80% CPU load and becomes fairly unresponsive.
Is there a way to have AWS fire up another small instance (or a few), quickly, only while it is operating under this intense load? But the real question is: how does this work when we push very frequent programming updates to our application? Do we have to create a new image manually every time we upload a code change?
Thanks
You should never be running anything important on a single EC2 instance. Instances can--and do--go offline randomly. Always use an autoscaling (AS) group that spans multiple availability zones. An AS group will automatically bring new instances online when you hit a certain trigger (in your case, CPU utilization). And then it will scale down the instances when traffic subsides. Autoscaling is the heart and soul of AWS and if you're not using it, you might as well be using a cheaper (and more durable) VPS host.
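A rough sketch of such a setup with the AWS CLI, assuming a launch template for your web server already exists (all names and subnet IDs are placeholders):

aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name web-asg \
    --launch-template LaunchTemplateName=web-template \
    --min-size 1 --max-size 4 \
    --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"

aws autoscaling put-scaling-policy \
    --auto-scaling-group-name web-asg \
    --policy-name cpu-target \
    --policy-type TargetTrackingScaling \
    --target-tracking-configuration '{"PredefinedMetricSpecification":{"PredefinedMetricType":"ASGAverageCPUUtilization"},"TargetValue":60.0}'

The target-tracking policy adds instances whenever average CPU across the group climbs above 60% and removes them when load falls back.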
No, you don't want to create a new AMI for each code release. Ideally you should use a base AMI (like one of Amazon's official ones) and have it auto-provision itself at boot. You can use the "user data" field when you launch the instance to bootstrap this process. It can be as simple as a bash script that pulls from your Git repo, or as sophisticated as Puppet or Chef.
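A minimal user-data sketch along those lines (the repository URL is hypothetical, and an Amazon Linux 2 base AMI is assumed):

#!/bin/bash
# Runs once at first boot when passed as EC2 user data
yum -y update
yum -y install git httpd
git clone https://github.com/example/myapp.git /var/www/html
systemctl enable --now httpd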
The only time I create custom AMIs is when the provisioning process just takes too long. However, that can almost always be solved by storing the needed files in S3.
I have a free micro instance on AWS, and quite often my CPU is throttled, which makes it very hard to use.
I want to know if there is any way to test a bigger instance so I can see which one would be enough.
Side questions:
Can I go back to the free micro if I want?
Can I limit the cost of the testing, or estimate it beforehand? I don't want to end up with a surprise bill as a result of the testing.
You can of course launch a new instance of a larger size, run your tests, then terminate the instance. It will not affect your running micro instance in any way at all.
AWS publishes their pricing data, so you can either calculate the cost manually or use the cost calculator: http://calculator.s3.amazonaws.com/calc5.html
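For a rough worked example: if the larger instance you want to try costs about $0.10 per hour on demand, a four-hour test costs roughly 4 x $0.10 = $0.40, plus a few cents for its EBS volume. Rates vary by region and change over time, so check the current price list before you start.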
There is no way to "cap" your AWS spend.
Mike Ryan's answer is correct as such, but there might be a better way to achieve your goal, because it is possible to upgrade your Amazon EC2 t1.micro instance in place. The process (and its few constraints) is summarized in Eric Hammond's article Moving an EC2 Instance to a Larger (or Smaller) Instance Type:
When you discover that the entry level t1.micro instance size is simply not cutting it for your growing application needs, you may want to try upgrading it to a larger instance type, perhaps an m1.small or even a c1.medium.

Instead of starting a new instance and having to configure it from scratch, you may be able to simply resize the existing instance by asking Amazon to move it to better hardware for you. Of course, since this is AWS, you don’t have to actually talk to anybody—just type a few commands and the job is done automatically.
Eric describes how to achieve this via the command line, but the same can be done via the AWS Management Console as well if you prefer; the instance menu features a corresponding command, Change Instance Type (only enabled while the instance is stopped).
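For reference, the same in-place resize with the modern AWS CLI looks roughly like this (the instance ID and target type are placeholders):

aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m1.small\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0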
Alternatively, you might also want to get acquainted with how easy it is to duplicate an EBS-backed EC2 instance by means of an Amazon Machine Image (AMI), which allows you to start any number of exact duplicates of your current instance. This process is outlined in Creating Amazon EBS-Backed AMIs Using the Console, for example.
I have an Amazon EC2 Micro instance running using EBS storage. This more than meets my needs 99.9% of the time, however I need to perform a very intensive database operation as a once off which kills the Micro instance.
Is there a simple way to restart the exact same instance with much more power for a temporary period, and then revert to the Micro instance when I'm done? This seemed more than possible under the cloud-based model Amazon uses, but it doesn't appear to be simply a matter of shutting down and restarting with more power, as I first thought it might be.
If you are running the database operation manually, you can create an image of the server, launch a small or high-CPU instance from the same image, run the database operation, then create another image and launch it again as a micro instance. You can also automate this process with scripts that call the AWS APIs.
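A rough sketch of that flow with the modern AWS CLI (all IDs are placeholders):

# Capture the current micro instance as an image
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "db-job-base"
# Launch a beefier instance from that image and run the heavy operation on it
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type c1.medium
# When the job is done, terminate the big instance
aws ec2 terminate-instances --instance-ids i-0fedcba9876543210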
In case you're using an EBS-backed AMI you don't have to create a new image and launch it. Just stop the machine and issue a simple EC2 API command to change the instance type:
ec2-modify-instance-attribute <instance_id> --instance-type <instance_type>
Keep in mind that not all instance types work for every AMI. The applicable instance types depend on the machine itself and the kernel. You can find a list of available instance types here: http://aws.amazon.com/ec2/instance-types/