Calling AWS cli from user-data file - amazon-web-services

I have a user-data script file that runs when launching an EC2 instance from an AMI image.
The script uses the AWS CLI, but I get "aws: command not found".
The AWS CLI is installed as part of the AMI (I can use it once the instance is up), but for some reason the script cannot find it.
Am I missing something? Is there any chance the user-data script runs before the image is loaded (I find that hard to believe)?
Maybe the PATH environment variable is not set at that point?
Thanks,

any chance that the user-data script runs before the image is loaded
No, certainly not. It is a service on that image that runs the script.
Maybe the path env variable is not set at this point
This is most likely the issue. The script runs as root, not ec2-user, and does not have the PATH you may have configured in your ec2-user account. What happens if you try specifying /usr/bin/aws instead of just aws?
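One way to make that robust, as a sketch: probe a few likely install locations before falling back to a plain PATH lookup. The helper name find_bin and the candidate paths are my own assumptions, not part of the answer above.

```shell
#!/bin/bash
# find_bin: print the first command name or absolute path that resolves to
# an executable, trying each argument in order; non-zero exit if none match.
find_bin() {
  for candidate in "$@"; do
    if command -v "$candidate" >/dev/null 2>&1; then
      command -v "$candidate"
      return 0
    fi
  done
  return 1
}

# In user data, try the explicit locations first, then plain "aws":
AWS_BIN="$(find_bin /usr/bin/aws /usr/local/bin/aws aws || true)"
echo "using: ${AWS_BIN:-not found}"
```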

You can install the AWS CLI and set up environment variables with your credentials. For example, in the user-data script you can write something like:
#!/bin/bash
apt-get install -y awscli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/
In case you are using a CentOS-based AMI, replace the apt-get line with yum; the package is called aws-cli instead of awscli.
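The distro check can also be done inside the script itself. A sketch, assuming the package names per family are as stated above (awscli on apt systems, aws-cli on yum systems); the helper names are my own:

```shell
#!/bin/bash
# pkg_manager: detect which package manager this image family provides.
pkg_manager() {
  if command -v apt-get >/dev/null 2>&1; then echo apt
  elif command -v yum >/dev/null 2>&1; then echo yum
  else echo none
  fi
}

# awscli_install_cmd: print the matching install command for this host.
awscli_install_cmd() {
  case "$(pkg_manager)" in
    apt) echo "apt-get install -y awscli" ;;
    yum) echo "yum install -y aws-cli" ;;
    *)   return 1 ;;
  esac
}
```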

Related

sam build botocore.exceptions.NoCredentialsError: Unable to locate credentials

I have been trying to deploy my machine learning model with SAM for a couple of days and I am getting this error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
I have also made sure that my AWS config is fine;
the "aws s3 ls" command works fine for me. Any help will be useful, thanks in advance.
I've read through this issue, which seems to have been addressed in v1.53: SAM Accelerate issue
Reading that seemed to imply it might be worth trying
sam deploy --guided --profile mark
--profile mark is the new part and mark is just the name of the profile.
I'm using v1.53 but still have to pass in the profile to avoid the problem you're having (and I was having), so they may not have fixed the issue as fully as intended, but at least --profile seems to solve it for me.
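An alternative to passing --profile on every command, as a sketch: export AWS_PROFILE once, since both the SAM CLI and the AWS CLI read it from the environment (the profile name mark is taken from the answer above):

```shell
#!/bin/bash
# Export the profile once; SAM and the AWS CLI both honor AWS_PROFILE.
export AWS_PROFILE=mark

# Then the deploy no longer needs an explicit --profile flag:
#   sam deploy --guided
```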
If you are using Linux, this error can be caused by a misalignment between a docker root installation and user-level AWS credentials.
Amazon documentation recommends adding credentials using the aws configure command without sudo. However, when you install docker on Linux, it requires a root-level installation. This ultimately results in the user being forced to use sudo for the SAM CLI build and deploy commands, which leads to the error.
There are two different solutions that will fix the issue:
Allow non-root users to manage Docker. If you use this method, you will not need to use sudo for your SAM CLI commands. This fix can be accomplished with the following commands:
sudo groupadd docker
sudo usermod -aG docker $USER
Then log out and back in (or run newgrp docker) for the group change to take effect.
OR
Use sudo aws configure to add AWS credentials to root. This fix requires you to continue using sudo for your SAM CLI commands.

aws ec2 - aws command not found when running user data

My aim is to launch an instance such that a start-up script is triggered on boot to download some configuration files stored in AWS S3. Therefore, in the start-up script, I set the S3 bucket details and then trigger a config.sh in which aws s3 sync does the actual download. However, the aws command does not work - it is not found when the script executes.
User data
I have the following user data when launching an EC2 instance:
#!/bin/bash
# Set command from https://stackoverflow.com/a/34206311/919480
set -e -x
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh
The AWS CLI was installed with pip as described in the documentation.
Observation
I think the user-data script runs under the root user ID. That is why I have /home/ubuntu/ in the user data: $HOME did not resolve to /home/ubuntu/. In fact, the first command in config.sh is mkdir /home/ubuntu/csv, which creates a directory owned by root!
So, would it be right to conclude that the user data runs under the root user ID?
Resolution
Should I use REST API to download?
Scripts entered as user data are executed as the root user, so do not use the sudo command in the script.
See: Running Commands on Your Linux Instance at Launch
One solution is to set the PATH env variable to include AWS CLI (and add any other required path) before executing AWS CLI.
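For the PATH route, a sketch of the user data, assuming the CLI landed in /home/ubuntu/.local/bin (the default location for a pip install --user done as the ubuntu account):

```shell
#!/bin/bash
# User data runs as root with a minimal PATH; append the directory where a
# non-sudo pip install placed the AWS CLI for the ubuntu user.
export PATH="$PATH:/home/ubuntu/.local/bin"

case ":$PATH:" in
  *":/home/ubuntu/.local/bin:"*) echo "PATH updated" ;;
esac

# The download step can then use the bare command name:
#   aws s3 sync "s3://$S3_PREFIX" /home/ubuntu/csv
```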
Solution
Given that the AWS CLI was installed with pip without sudo, the CLI is not available to root. Therefore, to run as the ubuntu user, I used the following user-data script:
#!/bin/bash
su ubuntu -c '$HOME/app/sh/config.sh default'
In config.sh, the argument default is used to build the full S3 URI before invoking the CLI. However, the invocation succeeded only with the full path $HOME/.local/bin/aws, despite the fact that aws can be accessed from a normal login shell.
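config.sh itself is not shown in the question, but the URI-building step it describes might look like this sketch (the helper name and the bucket/prefix value are assumptions):

```shell
#!/bin/bash
# build_s3_uri: join the exported bucket/prefix with the environment argument.
build_s3_uri() {
  printf 's3://%s/%s\n' "$1" "$2"
}

S3_PREFIX='bucket-name/folder-name'   # exported by the user-data script
S3_URI="$(build_s3_uri "$S3_PREFIX" "${1:-default}")"
echo "$S3_URI"

# The actual download then uses the full CLI path, as noted above:
#   /home/ubuntu/.local/bin/aws s3 sync "$S3_URI" /home/ubuntu/csv
```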

boto3 python package installed but aws/config does not exist

I installed the AWS CLI and Boto3 but can't find the shared credentials file in which to put my user access key.
I tried installing on Windows 7 and on Ubuntu and got the same issue: I simply can't find the shared credentials file in the default location ~/.aws/credentials.
I was using the official Boto3 guide.
Thanks,
In fact, you can create the ~/.aws folder yourself, then manually enter and save the configuration into ~/.aws/credentials and ~/.aws/config.
If that is troublesome, after you install awscli using apt, you can run aws configure to create those files through the shell interface.
sudo apt install awscli
aws configure
The aws configure command will ask you for your access key and create ~/.aws/credentials and ~/.aws/config.
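For reference, a sketch of what those two files end up containing, mirroring what aws configure writes; the key values are dummies and the region/output choices are placeholders:

```shell
#!/bin/bash
# Create the shared credentials and config files by hand.
mkdir -p "$HOME/.aws"

cat > "$HOME/.aws/credentials" <<'EOF'
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
EOF

cat > "$HOME/.aws/config" <<'EOF'
[default]
region = us-east-1
output = json
EOF
```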

How to install dynamically python in amazon aws spot instance?

I have a Python script to be run on an Amazon AMI spot instance.
I am wondering whether I can deploy, via a Python script or a remote script:
1) The AMI spot instance.
2) Lubuntu, Anaconda, and additional Python conda packages, dynamically,
on the AMI spot instance through a script.
Do I need to use Docker to have everything packaged in advance?
There is the StarCluster package in Python; I am not sure whether it can be used to launch
a spot instance.
The easiest way to get started with bootstrapping EC2 instances at launch is to add a custom user data script. If you start the instance user data with #! and the path to the interpreter you want to use, it gets executed at boot time and can perform any customization you want.
Example:
#!/bin/bash
yum update -y
Further documentation: User Data and Shell Scripts
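Applied to this question, the user data could bootstrap Miniconda and the conda packages at launch. This is a sketch of a provisioning script, not a tested recipe: the installer URL is Anaconda's published latest-x86_64 link, and the install prefix and package list are assumptions.

```shell
#!/bin/bash
# Bootstrap Miniconda plus conda packages on an Amazon Linux spot instance.
yum update -y
curl -sL https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
  -o /tmp/miniconda.sh
bash /tmp/miniconda.sh -b -p /opt/miniconda

# Install the additional conda packages the script needs:
/opt/miniconda/bin/conda install -y numpy pandas
```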

Command cannot use as root in Amazon Linux in EC2

I'm using AWS EC2 (Amazon Linux), and I'm in trouble.
I installed git flow, and I can use the git flow init command as ec2-user.
But I cannot use git flow init as the root user.
I don't understand why.
If you need to run it as root, use sudo:
/usr/bin/sudo /usr/bin/git flow init
But from your description, the problem is something else.
Anyway, let me know if it helps.
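One likely explanation (my assumption, not stated in the answer): git flow was installed into a directory that is on ec2-user's PATH but not on root's. sudo resets the environment by default, but you can pass the caller's PATH through explicitly:

```shell
#!/bin/bash
# sudo normally scrubs the environment; `env "PATH=$PATH"` hands the
# caller's PATH to the command run as root:
#   sudo env "PATH=$PATH" git flow init -d

# The same mechanism demonstrated without sudo: the child process sees
# the extended PATH.
CHILD_PATH="$(env "PATH=$PATH:/custom/bin" sh -c 'printf %s "$PATH"')"
echo "$CHILD_PATH"
```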