Why does my EC2 instance keep stopping? - amazon-web-services

I can't keep my EC2 instance in a running state. Whenever I start the instance, it always changes back to the "Stopped" state. I have tried several of the approaches mentioned in the official page here:
Stopped and restarted the instance - didn't work.
Created an image of the current instance and launched a new instance from that image - didn't work.
Finally, made a snapshot of the current volume of the stopped instance, then created a new volume from that snapshot. Launched a new instance from a new AMI and stopped it. Then detached the root volume of the new instance, attached the newly created volume as the root volume, and started the instance - didn't work either.
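For reference, the root-volume swap I attempted was roughly this (instance/volume IDs are placeholders, and the root device name depends on the AMI, often /dev/sda1 or /dev/xvda):
aws ec2 stop-instances --instance-ids i-NEWINSTANCE
# detach the new instance's original root volume
aws ec2 detach-volume --volume-id vol-NEWROOT
# attach the volume created from the snapshot as the root device
aws ec2 attach-volume --volume-id vol-FROMSNAPSHOT --instance-id i-NEWINSTANCE --device /dev/sda1
aws ec2 start-instances --instance-ids i-NEWINSTANCE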
UPDATE: When I run the command below to check the reason for the issue
aws ec2 describe-instances --instance-id MYINSTANCE --output json
it responds with the following:
"StateReason": {
"Code": "Client.InstanceInitiatedShutdown",
"Message": "Client.InstanceInitiatedShutdown: Instance initiated shutdown"
},
What am I missing here?

You can try the below:
Create another fresh instance and make sure it is working fine.
Remove the storage from this fresh instance and attach it to the affected instance.
Start your affected instance, which now has the new storage (make sure to remove the old storage first).
If it works, then the issue is in the storage volume only.
If the above does not work, a hardware issue is also possible. You can try creating the same instance from the same image in a different region to see whether it works there.
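If it helps, a rough CLI sketch of the storage swap described above (IDs are placeholders; the device name must match your root device, often /dev/sda1):
# stop both instances so the root volumes can be moved
aws ec2 stop-instances --instance-ids i-FRESH i-AFFECTED
aws ec2 detach-volume --volume-id vol-FRESHROOT
aws ec2 detach-volume --volume-id vol-AFFECTEDROOT
# attach the known-good volume as the affected instance's root device
aws ec2 attach-volume --volume-id vol-FRESHROOT --instance-id i-AFFECTED --device /dev/sda1
aws ec2 start-instances --instance-ids i-AFFECTED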

Related

Does creating an AMI on AWS require the reboot of the original machine?

When creating an AMI image from an existing EC2 instance, does it require a restart of the existing instance?
I made a copy of the instance, and subsequently the server went down because a process monitor was turned off, which led to downtime. I can't remember whether that was because I rebooted the system (I can't remember if I rebooted it) or because I made a copy of the image.
There's an option to enable No reboot when creating the AMI.
When creating an AMI image from an existing EC2 instance, does it require the restart of the existing instance
To answer this: yes, by default AWS stops the instance while the AMI is being created, to ensure data integrity.
The doc says: "Amazon EC2 powers down the instance before creating the AMI to ensure that everything on the instance is stopped and in a consistent state during the creation process."
You can override this behaviour by enabling No reboot while creating the AMI.
No reboot – This option is not selected by default. Amazon EC2 shuts down the instance, takes snapshots of any attached volumes, creates and registers the AMI, and then reboots the instance. Select No reboot to avoid having your instance shut down.
Refer to point 6 of https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-ebs.html.
Also, whenever you are unsure about anything, just follow the docs for that service; it will be defined in one way or another.
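For example, the CLI equivalent is the --no-reboot flag on create-image (the instance ID and image name here are placeholders):
# create an AMI without shutting down the source instance
aws ec2 create-image --instance-id i-1234567890abcdef0 --name "my-backup-ami" --no-reboot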

EC2 - New Instance Vs (remount) EBS-backed instance

I have an Ubuntu 14.04 EC2 instance running with an EBS volume.
I regularly take snapshots.
I launched a new Ubuntu 16.04 instance.
I detached the root volume.
I created an EBS volume from the snapshot above.
I re-attached the volume.
I can see all the data, and my servers seem to work on the new instance (e.g. Mongo, app servers, etc.).
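For reference, the swap was roughly the following (IDs and the availability zone are placeholders; the new volume must be created in the same AZ as the instance):
aws ec2 create-volume --snapshot-id snap-MYSNAPSHOT --availability-zone us-east-1a
aws ec2 stop-instances --instance-ids i-NEW1604
aws ec2 detach-volume --volume-id vol-ORIGINALROOT
aws ec2 attach-volume --volume-id vol-FROMSNAPSHOT --instance-id i-NEW1604 --device /dev/sda1
aws ec2 start-instances --instance-ids i-NEW1604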
My question is (other than app data):
What are the differences between the new instance and the instance launched via an existing EBS?
Is the existing-EBS-launched instance supposed to work like the old instance without any changes, out-of-the-box?
What are the differences between the new instance and the instance launched via an existing EBS?
Answer - First of all, understand what EBS is: in simple language, it is a block storage volume for use with an Amazon EC2 instance.
So whenever you launch a new instance via an existing EBS volume, all the stuff/any manual changes on the disk which you made previously will automatically be reflected in your new instance, as you are using the same disk (block storage). It's just that when you want some kind of modification, such as changing the key pair, you detach the volume, make the modifications, and attach the volume (disk) again, as in the rough sketch below.
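A rough sketch of that key-pair workflow, run on a helper instance with the old root volume attached as a secondary device (the device name, mount point, and user name are assumptions; they vary by AMI):
sudo mkdir -p /mnt/oldroot
sudo mount /dev/xvdf1 /mnt/oldroot
# append the new public key for the default user (assumed here to be ubuntu)
cat new-key.pub | sudo tee -a /mnt/oldroot/home/ubuntu/.ssh/authorized_keys
sudo umount /mnt/oldroot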
Is the existing-EBS-launched instance supposed to work like the old instance without any changes, out-of-the-box?
Answer - Yes, an instance launched from an existing EBS volume works like the old instance; it just depends on what modifications you give the new instance. Suppose while launching you changed the instance type or key pair, or attached a different security group: all those changes will be reflected, while all the manual work done on the disk remains the same.

An unknown AWS EC2 instance running which recreates even after termination

I am running an Amazon AWS ECS cluster which creates one single EC2 instance. I made sure that it was 1 instance when I created the ECS cluster.
My issue is that I have another instance running in EC2, and Amazon has sent me an email that I am using double the free quota and will be charged.
But I am not sure where this second EC2 instance is coming from.
I have terminated it many times, but it is recreated. When I terminate it, this is the prompt I receive, which advises me that it was created from EBS, but there is no app in EBS:
On an EBS-backed instance, the default action is for the root EBS volume to be deleted when the instance is terminated.
Storage on any local drives will be lost.
The name of the instance is:
ECS Instance - amazon-ecs-cli-setup-ecs-cricketscorer
Please help.
Check whether you have any Auto Scaling Groups that you do not recognize; the instance is most probably being recreated by one of them. (Judging by the instance name, it is likely the Auto Scaling Group that the ECS CLI setup created for your cluster: terminating the instance just causes the group to launch a replacement.) If not, change your account password and deactivate/delete your existing access keys.
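To list your Auto Scaling Groups from the CLI, something like:
aws autoscaling describe-auto-scaling-groups --query "AutoScalingGroups[].{Name:AutoScalingGroupName,Desired:DesiredCapacity}" --output table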

What is EC2 previous-instance-id?

admin@ip-172-34-40-199:/var/lib/cloud/data> cat previous-instance-id
i-08070b6e274c5abc6
admin@ip-172-34-40-199:/var/lib/cloud/data> cat instance-id
i-0d865c5d95798349b
My understanding is that instance IDs are supposed to be stable; I've found no reference to them changing. The instance this output came from was launched just moments ago.
The AMI that was the source for these instances was generated from a different instance.
I.e., i-08070b6e274c5abc6 was used to generate ami-deadbeef, then i-0d865c5d95798349b was started from ami-deadbeef. All instances started from that AMI will share the same previous-instance-id.
Instance IDs are stable within the lifetime of an instance. However, you can move a volume from one instance to another, or turn a volume into an AMI and launch it as a new instance.
Cloud-init keeps track of the previous instance ID (probably from when this image was originally created) to know whether it should run the first-boot and other tasks that are run once per instance.
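Roughly, cloud-init's per-instance check amounts to something like this (a simplified sketch; IMDSv1 shown for brevity):
# compare the cached ID with the one reported by instance metadata
cached=$(cat /var/lib/cloud/data/instance-id)
current=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
[ "$cached" != "$current" ] && echo "new instance: run per-instance tasks again"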

EBS Volume Being Read as Root Device

One of my instances was preventing me from logging in, so I stopped it, detached its volume, spun up a new instance, and attached the old volume. The problem is, the old volume is being treated as the root device. Because of this, I still cannot log in to the new instance (although I can if I don't attach the old volume).
Is there any way to fix this issue using the AWS Management Console?
It seems like you have attached your old volume at "/dev/sda1". Detach your old volume and attach it at "/dev/sdf".
This is caused by the filesystems on each volume having the same label (the value returned by e.g. e2label /dev/xvda1). The server correctly starts booting from the first volume, and then the bootloader there sees the second volume with the label it anticipates for the root volume, and continues booting with the second volume as root. This is an OS-level setting, not visible to the AWS infrastructure.
Workaround: don't attach the second volume until after the instance has booted. EBS volumes can be attached to an instance at any time -- they don't have to be present when the instance is started. After unmounting, they can also be detached at any time with the instance still running.
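A rough sketch of checking and fixing the duplicate label, with the old volume attached as /dev/xvdf (device names and the new label value are assumptions; Ubuntu cloud images typically label the root filesystem cloudimg-rootfs):
# compare the labels of the two root filesystems
sudo e2label /dev/xvda1
sudo e2label /dev/xvdf1
# give the old volume a different label so the bootloader can no longer mistake it for root
sudo e2label /dev/xvdf1 oldroot
Note that if you later want to boot from the old volume again, its /etc/fstab and GRUB configuration may still reference the old label.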
To resolve this, I had to make a snapshot of the old volume first. I then created a new AMI using that snapshot, including the old volume as extra storage so that it is explicitly defined not to be treated as a root device.
I then created a new instance using that AMI and was finally able to log in to it. From there, I just mounted the volume.