Why my same ".s3cfg" file is not working on different machines - amazon-web-services

I have a server (an EC2 instance) where I got this error:
$ sh ami-backup.sh
----------------------------------
Thu Sep 24 10:37:47 UTC 2015
----------------------------------
Unable to locate credentials. You can configure credentials by running "aws configure".
The same script worked on my local machine, so I copied ".s3cfg" to that server, but it still gives the same error: "Unable to locate credentials".
On my local machine:
ashish@ashishk:~$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxx | awk '{ print $8 }' | sort -n | grep "i-"
i-127fb8df
i-1effb6d3
i-29efe0e4
i-29fd04e4
i-d5888618
On my server (EC2 instance) with the same ".s3cfg":
$ aws ec2 describe-instances --filters Name=vpc-id,Values=vpc-xxx | awk '{ print $8 }' | sort -n | grep "i-" > /tmp/instanceid.txt
Unable to locate credentials. You can configure credentials by running "aws configure".
Why my same ".s3cfg" file is not working on different machines ! Please let me know what is wrong here Or if I copy ".s3cfg" from machine to another machine will it work or i have to run "aws configure" & configure on new machine also ?

The AWS CLI does not read ".s3cfg"; that file belongs to s3cmd. The CLI looks for credentials in ~/.aws/credentials (or in environment variables, or an instance IAM role). So on your server, run aws configure to set your AWS Access Key ID, Secret Access Key and the other settings before you run your command.
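If you prefer not to run it interactively, a minimal sketch of the alternative is to copy the AWS CLI's own config instead of ".s3cfg" (the hostname and user below are placeholders):
# Run interactively on the server; it writes ~/.aws/credentials and ~/.aws/config
aws configure
# Or copy the working AWS CLI config over from your local machine (placeholder host/user)
scp -r ~/.aws ubuntu@your-server:~/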

Related

get secret from aws in systemd service file

I have a systemd service on Ubuntu and I want to retrieve the secret in the Environment section rather than hardcoding it, like this:
Environment="PASSWORD=`aws secretsmanager get-secret-value --secret-id aws_pwds | jq -r '.SecretString | fromjson | .test_pwd'`"
The command works in the shell, but the service fails to get it. Any suggestions why?
I also tried using $().
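For context, systemd treats Environment= values as literal strings and never runs backticks or $(), which is why the command only works in a shell. A minimal sketch of one workaround, assuming a hypothetical wrapper script and that the real service binary lives at /path/to/your/app:
#!/bin/bash
# /usr/local/bin/start-myservice.sh (hypothetical wrapper; mark it executable)
set -euo pipefail
# Resolve the secret at start time, reusing the command from the question
export PASSWORD=$(aws secretsmanager get-secret-value --secret-id aws_pwds | jq -r '.SecretString | fromjson | .test_pwd')
exec /path/to/your/app
And in the unit file, call the wrapper instead of using Environment=:
[Service]
ExecStart=/usr/local/bin/start-myservice.sh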

AWS EC2 user data docker system prune before start ecs task

I have followed the code below from AWS to start an ECS task when the EC2 instance launches. This works great.
However, my containers only run for a few minutes (max ten); once finished, the EC2 instance is shut down using a CloudWatch rule.
The problem I am finding is that because the instances shut down straight after the task is finished, the automatic clean-up of the Docker containers doesn't happen, so the EC2 instance fills up and other tasks fail. I have tried lowering the time between clean-ups, but it can still be a bit flaky.
My next idea was to add docker system prune -a -f to the user data of the EC2 instance, but it doesn't seem to get run. I think it's because I am putting it in the wrong part of the user data; I have searched through the docs but can't find anything to help.
Question: where can I put the docker prune command in the user data to ensure that it is run at each launch?
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Specify the cluster that the container instance should register into
cluster=your_cluster_name
# Write the cluster configuration variable to the ecs.config file
# (add any other configuration variables here also)
echo ECS_CLUSTER=$cluster >> /etc/ecs/ecs.config
# Install the AWS CLI and the jq JSON parser
yum install -y aws-cli jq
--==BOUNDARY==
Content-Type: text/upstart-job; charset="us-ascii"
#upstart-job
description "Amazon EC2 Container Service (start task on instance boot)"
author "Amazon Web Services"
start on started ecs
script
exec 2>>/var/log/ecs/ecs-start-task.log
set -x
until curl -s http://localhost:51678/v1/metadata
do
sleep 1
done
# Grab the container instance ARN and AWS region from instance metadata
instance_arn=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F/ '{print $NF}' )
cluster=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .Cluster' | awk -F/ '{print $NF}' )
region=$(curl -s http://localhost:51678/v1/metadata | jq -r '. | .ContainerInstanceArn' | awk -F: '{print $4}')
# Specify the task definition to run at launch
task_definition=my_task_def
# Run the AWS CLI start-task command to start your task on this container instance
aws ecs start-task --cluster $cluster --task-definition $task_definition --container-instances $instance_arn --started-by $instance_arn --region $region
end script
--==BOUNDARY==--
I hadn't considered terminating and then creating a new instance.
I currently use CloudFormation to create the EC2 instance.
What's the best workflow for terminating an EC2 instance after the task definition has completed, and then creating a new one on a schedule and registering it to the ECS cluster?
A CloudWatch scheduled rule that starts a Lambda which creates the EC2 instance and then registers it to the cluster?
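For the prune itself, one hedged option is an extra text/x-shellscript part in the same multipart user data. Note that by default user data runs only on the instance's first boot, so if the instances are stopped and started rather than terminated, a recurring cron entry (sketched below; schedule and log path are placeholders) is safer than a one-off prune command:
--==BOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"
#!/bin/bash
# Hypothetical addition: prune stopped containers and unused images every hour
# so the short-lived tasks cannot fill the disk between clean-up cycles
echo '0 * * * * root docker system prune -a -f >> /var/log/docker-prune.log 2>&1' > /etc/cron.d/docker-prune
--==BOUNDARY==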

How to automate EC2 instance startup and ssh connect

At the moment I connect with the following steps manually:
Open EC2-Instance web
Under Actions -> Instance State click Start
Look at Connect tab
Manually copy the ssh command e.g.:
ssh -i "mykey.pem" ubuntu#ec2-13-112-241-333.ap-northeast-1.compute.amazonaws.com
What's the best practice to streamline these steps from the command line on my local computer, so that I can just use one command?
An approach with awscli would be
# Start the instance
aws ec2 start-instances --instance-ids i-xxxxxxxxxxx
status=0
# Wait for the instance until the 2/2 checks are passed
while [ "$status" -lt 2 ]
do
status=`aws ec2 describe-instance-status --instance-ids i-xxxxxxxxxxx --filters Name="instance-status.reachability,Values=passed" | grep '"Status": "passed"' | wc -l`
# sleep between polls so we don't hammer the API
sleep 10
done
# Associate an Elastic IP if you already have one allocated (skip if not required)
aws ec2 associate-address --instance-id i-xxxxxxxxxxx --public-ip elastic_ip
# Get the Public DNS (if the instance has only a private IP, grep "PrivateIpAddress" instead)
public_dns=`aws ec2 describe-instances --instance-ids i-xxxxxxxxxxx | grep "PublicDnsName" | head -1 | awk -F: '{print $2}' | sed 's/\ "//g;s/",//g'`
ssh -i key.pem username@$public_dns
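As a side note, recent versions of the AWS CLI ship a built-in waiter, so the polling loop above can be replaced; a minimal sketch under that assumption:
# Blocks until the instance passes both 2/2 status checks
aws ec2 wait instance-status-ok --instance-ids i-xxxxxxxxxxx
# Fetch the public DNS name with --query instead of grep/awk
public_dns=$(aws ec2 describe-instances --instance-ids i-xxxxxxxxxxx --query 'Reservations[].Instances[].PublicDnsName' --output text)
ssh -i key.pem username@$public_dns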

Amazon AWS - Upload files to multiple instances under single LB

I need to upload updated files to multiple EC2 instances which sit under a single LB. My problem is that I missed some EC2 instances and it broke my webpage.
Is there any tool available to upload multiple files to multiple EC2 Windows servers in a single click?
I will update my files weekly or sometimes daily. I looked at Elastic Beanstalk, AWS CodeDeploy and Amazon EFS, but they are hard to use. Can anyone please help?
I suggest using AWS S3 and the AWS CLI. What you can do is install the AWS CLI on all the EC2 instances and create a bucket in AWS S3.
Then start a cron job on each EC2 instance with the syntax below.
aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder
So what will happen is that when you upload new files to the S3 bucket, they will automatically sync to all the EC2 instances behind your load balancer. S3 also becomes the central place where you upload and delete files.
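On Linux instances the cron entry could look like the sketch below (bucket, path and schedule are placeholders); on the Windows servers mentioned in the question, the equivalent would be a Task Scheduler job running the same aws s3 sync command.
# /etc/cron.d/s3-sync: pull the bucket contents every 5 minutes
*/5 * * * * root aws s3 sync s3://bucket-name/folder-on-bucket /path/to/local/folder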
You could leverage the AWS CLI; you could run something like:
aws elb describe-load-balancers --load-balancer-name <name_of_your_lb> --query LoadBalancerDescriptions[].Instances --output text |\
xargs -I {} aws ec2 describe-instances --instance-id {} --query Reservations[].Instances[].PublicIpAddress --output text |\
xargs -I {} scp <name_of_your_file> <your_username>@{}:/some/remote/directory
basically it goes like this:
find out all the ec2 instances connected to your Load Balancer
for each of the EC2 instances, find out the PublicIpAddress (presumably they have one, since you can connect to them through scp)
run the scp command to copy the file somewhere on the EC2 server
you can also copy a folder (with scp -r) if you need to push many files; it might be easier
Amazon Elastic File System (EFS) would probably now be the easiest option: you create your file system and mount it on all the EC2 instances attached to the Load Balancer, and when you transfer files to the EFS they become available on all the EC2 instances where it is mounted
(the setup to create the EFS and mount it on your EC2 instances only has to be done once).
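If you go the EFS route, a minimal sketch of that one-time mount on a Linux instance (filesystem ID, region and mount point are placeholders; EFS is mounted over NFS, so an NFS client must be installed):
# Create a mount point and mount the file system
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
# Optional: add an /etc/fstab entry so the mount survives reboots
echo 'fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,_netdev 0 0' | sudo tee -a /etc/fstab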
Create a script containing some robocopy commands and run it when you want to update the files on your servers. Something like this:
robocopy Source Destination1 files
robocopy Source Destination2 files
You will also need to share the folder you want to copy to with the user on your machine.
I had an application load balancer (ALB), so I had to build on @FredricHenri's answer.
EC2_PUBLIC_IPS=$(aws elbv2 --profile mfa describe-load-balancers --names c360-infra-everest-dev-lb --query 'LoadBalancers[].LoadBalancerArn' --output text \
  | xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-groups --load-balancer-arn {} --query 'TargetGroups[].TargetGroupArn' --output text \
  | xargs -n 1 -I {} aws elbv2 --profile mfa describe-target-health --target-group-arn {} --query 'TargetHealthDescriptions[*].Target.Id' --output text \
  | xargs -n 1 -I {} aws ec2 --profile mfa describe-instances --instance-id {} --query 'Reservations[].Instances[].PublicIpAddress' --output text)
echo $EC2_PUBLIC_IPS
echo ${EC2_PUBLIC_IPS} | xargs -n 1 -I {} scp -i ${EC2_SSH_KEY_FILE} ../swateek.txt ubuntu@{}:/home/ubuntu/
Points to Note
I have used an AWS profile called "mfa"; this is optional.
The other environment variable, EC2_SSH_KEY_FILE, is the path to the .pem file used to access the EC2 instance.

How can I start all AWS EC2 instances in Ansible

I have found a script for starting/stopping a dynamically created ec2 instance, but how do I start any instances in my inventory?
It seems you are talking about scripting, not an SDK, so there are two tools that can do the job.
1 AWS CLI tools
Download the AWS CLI tool and set the API keys in $HOME/.aws/credentials.
List all instances in region us-east-1.
Confirm which instances you are targeting.
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --region us-east-1 --output text
2 Amazon EC2 Command Line Interface Tools
Download it and set it up per the instructions.
List all instances in region us-east-1.
You should get the same output as way #1.
ec2-describe-instances --region us-east-1 | awk '/INSTANCE/{print $2}'
With the instance ID list, you can use your command to start them one by one.
For example, if the instance IDs are saved in the file instance.list:
while read instance
do
echo "Starting instance $instance ..."
ec2-start-instances "$instance"
done < instance.list
BMW gave you an excellent start, but you can summarise it like this:
1) First get the IDs of all the instances and save them into a file:
aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --region us-east-1 --output text >> id.txt
2) Then simply run this command to start all the instances
for id in $(cat id.txt); do echo "starting the following instance $id"; aws ec2 start-instances --instance-ids $id --region us-east-1; done
Please change the region as needed; I am assuming that you have installed and set up the AWS CLI tools properly. Thanks.
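For completeness, a hedged one-liner that combines both steps (region is a placeholder; it starts every stopped instance the credentials can see, and the inner command must return at least one ID):
aws ec2 start-instances --region us-east-1 --instance-ids $(aws ec2 describe-instances --region us-east-1 --filters Name=instance-state-name,Values=stopped --query 'Reservations[].Instances[].InstanceId' --output text)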