I have an EC2 instance up and running (Linux). I've made some installations I'd like to completely undo. What would be the best/simplest way to get back to a clean instance?
Start a new instance, running through your installation and configuration steps.
You can do this without terminating the old instance. This lets you look at configuration on the old instance in case you forgot how you set things up. It also lets you copy data or other files from the old instance to the new instance.
Once you're completely happy with the new instance, terminate the old instance.
This approach of starting the new before destroying the old didn't make sense back in the days of physical servers, but EC2 gives us new ways to think about things.
Also: Document and/or script your installation and configuration steps so you can easily reproduce them in the future on new instances. Think about separating your data onto a second EBS volume so it can be easily moved to a new instance.
Test your setup scripts and docs repeatedly on brand-new instances until they work just right.
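For the data side, if you keep it on a second EBS volume, moving it to the new instance is two CLI calls plus a mount; a rough sketch (volume/instance IDs and device names are placeholders):

    # Detach the data volume from the old instance
    aws ec2 detach-volume --volume-id vol-0123456789abcdef0
    # Attach it to the new instance as a secondary device
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
        --instance-id i-0fedcba9876543210 --device /dev/sdf
    # On the new instance, mount it (the kernel may rename it, e.g. /dev/xvdf)
    sudo mount /dev/xvdf /data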
Destroy it and create a new one using the same AMI, kernel, and any user data script that you passed to the original instance creation. Back up any of your own data to S3 first.
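Roughly, with the AWS CLI (the AMI ID, instance type, key name, bucket, and setup.sh below are placeholders for whatever you originally used):

    # Back up your own data to S3 first
    aws s3 sync /data s3://my-backup-bucket/data/
    # Launch a fresh instance from the same AMI with the same user data
    aws ec2 run-instances --image-id ami-0123456789abcdef0 \
        --instance-type t3.micro --key-name my-key \
        --user-data file://setup.sh
    # Terminate the old instance once the new one checks out
    aws ec2 terminate-instances --instance-ids i-0fedcba9876543210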
I am using Google Cloud Platform and want to update the OS version of my VM instance.
I am using Ubuntu 18.04 LTS and want to upgrade it to Ubuntu 20.04 LTS while keeping all the data.
One suggestion I got was to create a whole new VM instance with the new OS that I want, but in that case I would lose the data.
Another way that came to my knowledge is using a snapshot:
Create a snapshot from the disk.
Create a new instance with the new OS version.
Add your old disk to your new instance.
But still, after doing this I didn't have any luck: I got the new version but not the data of the old machine.
So is there any way to achieve this?
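One detail worth checking against the steps above: attaching the old disk only makes it visible to the new VM; the old files won't show up until the disk is also mounted. A minimal, hedged sketch (instance, disk, zone, and device names are placeholders):

    # Attach the old disk to the new instance
    gcloud compute instances attach-disk new-vm --disk old-disk --zone us-central1-a
    # On the new VM: identify the device, then mount it read-only to inspect
    lsblk
    sudo mkdir -p /mnt/olddisk
    sudo mount -o ro /dev/sdb1 /mnt/olddisk   # device/partition may differ; check lsblk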
I can't comment due to insufficient points.
Is there a reason you don't want to use the command line to upgrade?
Does this link https://ubuntu.com/blog/how-to-upgrade-from-ubuntu-18-04-lts-to-20-04-lts-today help?
And this: https://www.cyberciti.biz/faq/upgrade-ubuntu-18-04-to-20-04-lts-using-command-line/
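For reference, the in-place upgrade described in those articles boils down to a few commands run on the VM itself (take a snapshot first as a safety net):

    sudo apt update && sudo apt upgrade -y     # bring 18.04 fully up to date
    sudo apt install update-manager-core       # provides do-release-upgrade
    sudo do-release-upgrade                    # interactive upgrade to 20.04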
I need to move more than 50 compute instances from one Google Cloud project to another, and I was wondering if there's some tool that can take care of this.
Ideally, the needed steps could be the following (I'm omitting regions and zones for the sake of simplicity):
Get all instances in source project
For each instance get machine sizing and the list of attached disks
For each disk, create a disk image
Create a new instance with the same machine sizing in the target project, using the first disk image as the source
Attach the remaining disk images to the new instance (in the same order they were created)
I've been checking both Terraform and Ansible, but I have the feeling that neither of them supports creating disk images, meaning that I could only use them for the last two steps.
I'd like to avoid writing a shell script because it doesn't seem like a robust option, but I can't find tools that can help me do the whole process either.
Just as a side note, I'm doing this because I need to change the subnet for all my machines, and it seems you can't do that on already-created machines; you need to clone them to change the network.
There is no tool from GCP to migrate instances from one project to another.
I was able to find, however, an Ansible module to create images.
In Ansible, you can specify the "source_disk" when creating a "gcp_compute_image", as mentioned here.
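If it helps, the image-creation step can also be done with a single gcloud call per disk, and the resulting image can then be used from the target project; the names and zone below are placeholders:

    # Create an image from a disk in the source project
    gcloud compute images create web-1-image \
        --project source-project \
        --source-disk web-1-boot --source-disk-zone us-central1-a
    # Use it from the target project (requires compute.imageUser on the source)
    gcloud compute instances create web-1-copy \
        --project target-project --zone us-central1-a \
        --image web-1-image --image-project source-project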
Let's say I've created an AMI from one of my EC2 instances. Now, I can add this manually to the LB or let the Auto Scaling group do it for me (based on the conditions I've provided). Up to this point everything is fine.
Now, let's say my developers have added new functionality and I pull the new code onto the existing instances. Note that the AMI is not updated at this point and still has the old code. My question is how I should handle this situation so that when the Auto Scaling group creates a new instance from my AMI, it comes up with the latest code.
Two ways come into my mind, please let me know if you have any other solutions:
a) keep AMIs updated all the time; meaning that whenever there's a pull-request, the old AMI should be removed (deleted) and replaced with the new one.
b) have a start-up script (cloud-init) on AMIs that will pull the latest code from the repository on initial launch (by storing the repository credentials on the instance and pulling the code directly from git)
Which one of these methods is better? And if both are not good, then what's the best practice to achieve this goal?
Given that (almost) anything can be automated using the AWS API, it again comes down to the specific use case at hand.
At the outset, people would recommend having a base AMI with the necessary packages installed and configured, plus an init script that downloads the source code so it is always the latest. The very important factor to account for here is the time taken to check out or pull the code, configure the instance, and make it ready to put to work. If that time period is very long, then it would be a bad idea to use that strategy for auto-scaling: the warm-up time, combined with Auto Scaling and CloudWatch's statistics, could result in a different reality than you expect [maybe / maybe not, but the probability is not zero]. This is when you might consider baking a new AMI frequently. That would let you minimize the time instances take to prepare themselves for the war against the traffic.
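For reference, baking a new AMI can itself be automated with one AWS CLI call (the instance ID and image name below are placeholders):

    # Bake a fresh AMI from the instance that already has the latest code
    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "app-$(date +%Y%m%d%H%M)" --no-reboot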
I would recommend measuring and seeing which approach is convenient and cost-effective. It costs real money to pull down an instance and relaunch it from an AMI; however, that's the trade-off you need to make.
I have answered a little open-endedly, because the question is also a little open-ended.
People have started using Chef, Ansible, and Puppet, which perform configuration management. These tools add a different level of automation altogether; you may want to explore that option as well. A similar approach is using Docker or other containers.
a) keep AMIs updated all the time; meaning that whenever there's a pull-request, the old AMI should be removed (deleted) and replaced with the new one.
You shouldn't store your source code in the AMI. That introduces a maintenance nightmare and issues with autoscaling as you have identified.
b) have a start-up script (cloud-init) on AMIs that will pull the latest code from the repository on initial launch (by storing the repository credentials on the instance and pulling the code directly from git)
Which one of these methods is better? And if both are not good, then what's the best practice to achieve this goal?
Your second item, downloading the source on server startup, is the correct way to go about this.
Other options would be to use AWS CodeDeploy or some other deployment service to deploy updates. A deployment service could also be used to push updates to existing instances while still allowing new instances to download the latest code automatically at startup.
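A minimal sketch of such a startup (user data) script, assuming a read-only deploy token is available to the instance; the repository URL and paths are placeholders:

    #!/bin/bash
    # User data script: runs via cloud-init on first boot
    set -e
    # DEPLOY_TOKEN is a read-only token fetched from wherever you keep
    # secrets (e.g. SSM Parameter Store); repo URL is a placeholder.
    git clone --depth 1 "https://${DEPLOY_TOKEN}@github.com/example/app.git" /opt/app
    # Hand off to the application's own start script
    /opt/app/bin/start.sh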
Is it possible? The official manual contains this:
Deterministic instance templates
Use custom images or disk snapshots rather than startup scripts.
But there's no more info on how I can do this. Maybe someone has already done that? Thanks in advance.
It's possible. Here's the process:
Create your Snapshot
Create Disk from your Snapshot
Create an Image from Disk
Create an Instance Template from Image
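A hedged gcloud sketch of those four steps (the disk, snapshot, image, and template names plus the zone are placeholders):

    gcloud compute disks snapshot my-vm-disk --snapshot-names my-snap --zone us-central1-a
    gcloud compute disks create my-image-disk --source-snapshot my-snap --zone us-central1-a
    gcloud compute images create my-image --source-disk my-image-disk --source-disk-zone us-central1-a
    gcloud compute instance-templates create my-template --image my-image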
So my solution is the following:
I created an instance.
Installed the required services on that instance.
Created the image from the disk using the steps mentioned in this link.
Created a new template with that image.
Startup scripts run whenever an instance boots up or restarts, so the only way I found to run one just once is the same one you have tried, i.e. deleting it from the metadata. In my setup, when using startup scripts, I don't reboot the instances; I rather delete them and create new ones if required.
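An alternative to deleting the metadata is to make the script itself a no-op after the first run, using a marker file; a small sketch:

    #!/bin/bash
    # Startup script that only does its work on the very first boot
    MARKER=/var/lib/startup-done
    if [ -f "$MARKER" ]; then
        exit 0   # already ran once; skip on reboots/restarts
    fi
    # ... one-time setup goes here ...
    touch "$MARKER"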
I don't know whether it is a new feature, but you can simply replicate an instance by:
Creating a snapshot
When the snapshot is ready, click on it (it'll show the details)
In the breadcrumbs bar, click on Create Instance.
It'll take a minute or so and the new instance will start up, and that's all!
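The same replication can be done from the CLI if you prefer; a hedged gcloud sketch (snapshot, disk, instance, and zone names are placeholders):

    # Recreate a boot disk from the snapshot, then boot an instance from it
    gcloud compute disks create clone-disk --source-snapshot my-snap --zone us-central1-a
    gcloud compute instances create clone-vm --zone us-central1-a \
        --disk name=clone-disk,boot=yes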
Yes, I've heard all the stories about EC2 instances being unreliable and how you need to proactively prepare for that. I've also heard stories from others about how they have never had a problem, and their instances just run and run.
Today I had a strange thing happen. I've had a Linux instance running for a couple of months, as I've been preparing to launch an e-commerce site. I've been periodically taking snapshots. I have my images on S3. I have my code in a private github repo. All things considered, I've been doing a fairly good job of protecting myself against failure. Ironically, it was while I was doing even more in this regard today that I experienced something really strange.
Since I have these snapshots, I had assumed that the best thing to do if I needed to quickly spin up a new instance (whether due to a failed instance that wouldn't come back up, or if I just needed additional capacity) would be to take a snapshot and make a volume out of it, then make an image out of that volume, and then launch a new instance using that image.
For whatever reason, every time I've tried that lately, the new instance had a kernel panic during boot, so I decided to try a different approach. I right-clicked on my RUNNING INSTANCE, and chose "Create Image." That seemed like a reasonable shortcut. Then I went to that image and launched an instance.
At almost exactly the same time, my original instance rebooted. I didn't even see it happen. I only know it did from the system log. Is this just a wild coincidence? Or did I commit a silly mistake and accidentally screw up my instance?
Fortunately, I'm just getting this new thing off the ground, so the bit of downtime didn't kill me, and I was able to very quickly get things going again. But either I totally do not understand the "Create Image" feature from the instance list, or I got really unlucky today.
"Create image" takes the following actions:
Stop EC2 instance
Snapshot EBS volume
Start EC2 instance
Register EBS snapshot as an AMI
So, yes, this would look like a reboot because it is like a reboot.
Here's an article I wrote on the difference between stop/start and simple reboot: http://alestic.com/2011/09/ec2-reboot-stop-start
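For reference, the same operation from the AWS CLI; the --no-reboot flag skips the stop/start, at the risk of a snapshot that isn't filesystem-consistent (the instance ID and image names are placeholders):

    # Default: stops the instance, snapshots the volume, restarts it,
    # and registers the snapshot as an AMI
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-ami"
    # With --no-reboot the instance keeps running, but in-flight writes
    # may not make it into the snapshot
    aws ec2 create-image --instance-id i-0123456789abcdef0 --name "my-ami-2" --no-reboot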
Your problem sounds a lot like my problem. After some searching this page helped me: http://www.raleche.com/node/138
"The problem turned out to be the kernel. Both when creating the AMI and the instance I selected default for the kernel image.
To resolve the problem, I recreated the AMI using the same kernel image as the original instance."