EC2 AutoScaling User Data not running - amazon-web-services

I'm trying to add user data to my Auto Scaling group on AWS.
When I set up my launch configuration through the AWS web console, I entered the following user data:
#!/bin/bash
echo $RANDOM > /home/ubuntu/clusterID
I had to base64-encode it, which I did with base64encode.org. The result:
IyEvYmluL2Jhc2gNCmVjaG8gJFJBTkRPTSA+IC9ob21lL3VidW50dS9jbHVzdGVySUQ=
When the ec2 instance launches I see the following error:
2015-02-24 07:50:08,754 - __init__.py[WARNING]: Unhandled non-multipart userdata starting 'IyEvYmluL2Jhc2gNCmVjaG8g...'
Any ideas what I'm doing wrong?

Is your /home or /home/ubuntu a separate partition? If so, check whether the filesystem is mounted before the command executes.
I faced a similar issue about a year and a half ago, and it was caused by the same mistake.

OK, it seems the data passed as user data does not have to be base64-encoded.
You can pass the user data as it is, and the AWS CLI will encode it before passing it to the EC2 instance.
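For example, with the AWS CLI you can hand over the script as a plain text file and let the CLI do the encoding. A minimal sketch, where the AMI ID and file name are placeholders (a similar --user-data flag exists on aws autoscaling create-launch-configuration):
# userdata.sh contains the plain, un-encoded script from the question;
# the CLI base64-encodes the file contents before calling the API.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --user-data file://userdata.sh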

Related

Get AWS Availability zone info inside a machine

Is there an AWS API I can hit to get the availability zone of the machine the code is currently running on? I checked whether such an environment variable is set automatically by AWS, but couldn't find one. Any help would be appreciated.
You can get the current AZ of the instance using Instance metadata.
For example:
AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo ${AZ}
which will output something like:
us-east-1e
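If the instance enforces IMDSv2, the same lookup needs a session token first. A minimal sketch (the TTL value is arbitrary):
# Request a session token, then pass it with the metadata query.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
AZ=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/availability-zone)
echo ${AZ}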

Adding to authorized_keys via AWS EC2 Instance User Data

Recently I made the silly mistake of clearing the contents of my user's ~/.ssh/authorized_keys file on my AWS instance. As such, I can no longer ssh onto the instance.
I realised I could add these keys back via the EC2 instance user data. However, so far I have had no luck with this. I stopped my instance, added the following to the user data, and started it again:
#!/bin/bash
> /home/myUser/.ssh/authorized_keys
echo "ssh-rsa aaa/bbb/ccc/ddd/etc== mykeypair" >> /home/myUser/.ssh/authorized_keys
chown myUser:myUser /home/myUser/.ssh/authorized_keys
chmod 600 /home/myUser/.ssh/authorized_keys
This should empty the file, add the public key, and ensure the correct ownership and permissions are set on the file.
However my private key is still being rejected.
I know the keys are correct, so it must be something to do with my instance user data. I have also tried prepending 'sudo' to all the commands.
Try using cloud-init directives instead of a shell script:
#cloud-config
cloud_final_modules:
  - [users-groups, always]
users:
  - name: example_user
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - ssh-rsa AAAAB3Nz<your public key>...
The default behavior is to execute once per instance. However, these instructions add the key on every reboot or restart of the instance. If the user data is removed, the default functionality is restored. These instructions are for use on all OS distributions that support cloud-init directives.
https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
From the official docs:
By default, user data and cloud-init directives only run during the first boot cycle when you launch an instance. However, AWS Marketplace vendors and owners of third-party AMIs may have made their own customizations for how and when scripts run.
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
So modifying the user data after you shut down your instance will not help in most cases.
Solution: you can detach your EBS volume, attach it to an EC2 instance you can connect to, mount the volume, fix authorized_keys, then reattach the volume to the affected instance.
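A rough sketch of that recovery flow with the AWS CLI and a rescue instance (all IDs, device names and paths below are placeholders; the device the volume shows up as inside the rescue instance may differ, e.g. /dev/xvdf1 or /dev/nvme1n1p1):
# From your workstation: move the root volume to an instance you can reach.
aws ec2 stop-instances --instance-ids i-0aaaaaaaaaaaaaaaaa
aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb
aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbbb \
  --instance-id i-0ccccccccccccccccc --device /dev/sdf

# On the rescue instance: mount the volume and restore the key file.
sudo mkdir -p /mnt/rescue
sudo mount /dev/xvdf1 /mnt/rescue
echo "ssh-rsa AAAA... mykeypair" | sudo tee /mnt/rescue/home/myUser/.ssh/authorized_keys
sudo chmod 600 /mnt/rescue/home/myUser/.ssh/authorized_keys
sudo umount /mnt/rescue

# Then detach the volume again and reattach it to the affected instance,
# using the device name it had there (often /dev/sda1 or /dev/xvda).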

can't find /var/log/cloud-init-output.log on redhat ec2 instance

I usually work on an Amazon Linux EC2 instance and check /var/log/cloud-init-output.log to see if my CloudFormation user data script is working. I can't find cloud-init-output.log on a Red Hat EC2 instance, and I'm not sure where to check the logs or how to make sure my user data script ran properly.
Josh answered this here: https://stackoverflow.com/a/50258755/5775568
TL;DR: Run this command from your EC2 instance to see your logs:
sudo grep cloud-init /var/log/messages
For posterity: I also needed to take this approach to see user-data logs on my CentOS 7 EC2 instance.
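On systemd-based images such as RHEL and CentOS you can also read the cloud-init stages from the journal. A sketch, assuming the standard cloud-init unit names and log location are present on the AMI:
# Full cloud-init log (written by default on most cloud-init installs):
sudo less /var/log/cloud-init.log
# Output of the stage that runs user-data scripts:
sudo journalctl -u cloud-final --no-pager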

Can't login to docker with aws

This is an extension of my last question, given that I've decided to deploy a Docker container onto a ton of EC2 instances. I've set up a repository and a user with full rights, and I added the correct keys to my AWS CLI configuration. When I try to run the docker login command that comes out of running "aws ecr get-login", it gives me a "failed with status: 403 forbidden" error. I have absolutely no clue what's going on, and I've spent the past 2 days trying to fix this error... Any ideas?
I would suggest checking the security group of the EC2 instance.
To allow access via SSH you have to apply the following settings to the Security Group of the EC2 instance:
[screenshot: Security Groups settings]
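For reference, a sketch of the login flow mentioned in the question (the region and account ID are placeholders; newer AWS CLI versions replace get-login with get-login-password):
# AWS CLI v1: prints a docker login command, which the $( ) executes.
$(aws ecr get-login --no-include-email --region us-east-1)

# AWS CLI v2 equivalent:
aws ecr get-login-password --region us-east-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com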

IAM Role + Boto3 + Docker container

As far as I know, boto3 will try to load credentials from the instance metadata service.
If I am running this code inside an EC2 instance, I expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. These special variables are provided only to the process with PID 1. The script specified in the Dockerfile ENTRYPOINT gets PID 1.
There are many networking modes, and the details might differ for modes other than the one shown here. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you run printenv as PID 1, you will see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
It also gets tricky to debug, since after SSH'ing into the container you are using a PID other than 1, meaning that services that need to get credentials might fail to do so if you run them manually.
ECS task metadata endpoint documentation
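As a quick sanity check from inside the container, you can query the credentials endpoint directly with the relative URI the agent injected (a sketch; as noted above, the variable is only present in PID 1's environment):
# Returns a JSON document with AccessKeyId, SecretAccessKey, Token and Expiration.
curl -s "http://169.254.170.2${AWS_CONTAINER_CREDENTIALS_RELATIVE_URI}"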
Find the .aws folder at ~/.aws on your machine and copy it to the Docker container's /root folder.
.aws contains the files that hold your AWS access key and secret key.
You can easily copy it into a running container from your local machine with:
docker cp ~/.aws <container_id>:/root
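An alternative to copying the folder in is to bind-mount it read-only when starting the container, so the credential files stay on the host (the image name below is a placeholder):
# Mount the host's ~/.aws into the container where the SDK looks for it.
docker run -v ~/.aws:/root/.aws:ro my-image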