My aim is to launch an EC2 instance so that a start-up script runs on boot and downloads some configuration files stored in AWS S3. In the start-up script I set the S3 bucket details and then trigger a config.sh, where aws s3 sync does the actual download. However, the aws command does not work: it is not found.
User data
I have the following user data when launching an EC2 instance:
#!/bin/bash
# Set command from https://stackoverflow.com/a/34206311/919480
set -e -x
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh
The AWS CLI was installed with pip as described in the documentation.
Observation
I think the user data script is run under the root user ID. That is why I use /home/ubuntu/ in the user data: $HOME did not resolve to /home/ubuntu/. In fact, the first command in config.sh is mkdir /home/ubuntu/csv, which creates a directory owned by root!
So, is it right to conclude that the user data runs under the root user ID?
Resolution
Should I use the REST API to download instead?
Scripts entered as user data are executed as the root user, so do not use the sudo command in the script.
See: Running Commands on Your Linux Instance at Launch
One solution is to set the PATH environment variable to include the AWS CLI's install location (and any other required paths) before invoking the CLI.
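For example, assuming the CLI was installed for the ubuntu user with pip install --user (which places the executable in /home/ubuntu/.local/bin, as noted below), the user data could extend PATH before calling the script. A sketch, not the exact script from the question:
#!/bin/bash
set -e -x
# Assumption: pip --user install location for the ubuntu user.
export PATH=$PATH:/home/ubuntu/.local/bin
export S3_PREFIX='bucket-name/folder-name'
/home/ubuntu/app/sh/config.sh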
Solution
Given that the AWS CLI was installed with pip without sudo, the CLI is not available to root. Therefore, to run as the ubuntu user, I used the following user data script:
#!/bin/bash
su ubuntu -c '$HOME/app/sh/config.sh default'
In config.sh, the argument default is used to build the full S3 URI before invoking the CLI. However, the invocation succeeded only with the full path $HOME/.local/bin/aws, even though plain aws works in a normal login shell.
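For reference, a minimal sketch of what config.sh might look like under these assumptions (the script itself is not shown here; the csv directory, the default argument, and the full CLI path are taken from the observations above, and the prefix mapping is hypothetical):
#!/bin/bash
set -e
# "default" selects the S3 prefix; the exact mapping is an assumption.
if [ "$1" = "default" ]; then
  S3_URI="s3://bucket-name/folder-name"
fi
mkdir -p "$HOME/csv"
# The full path is needed because the pip --user install location
# is not on PATH in this execution context.
"$HOME/.local/bin/aws" s3 sync "$S3_URI" "$HOME/csv"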
Related
Good afternoon, can you cast your wise eyes and brains on this issue. I'm supposed to set up a scheduled upload using the AWS CLI so that a file I created earlier gets sent to S3 every hour via a cron job, to a bucket I also created earlier. I followed this guide and created the script file below, where the local folder path is where my file is sitting and bucket-name is the bucket I created (it was created via the CLI with the basic command; I have not added any permissions or policies). I saved this as script.sh, and since creating it I have received two mails to my ec2-user with the main error saying /bin/sh: /home/ec2-user/script.sh: Permission denied
Can someone advise on where I am making a boo-boo? Is S3 not fully configured? Is it incorrect naming and/or placement of the files in the directories on the server? The cron job that is supposed to send the first file to S3 is 0 */1 * * * /path-to-script-file.script.sh
Thank you very much for the help, I would really like to get this right and understand what is happening here.
Script file contains:
#!/bin/bash
aws s3 cp /local-folder-path/ s3://bucket-name
Make sure you're running the script as ec2-user, and don't forget to chmod 755 PATH/TO/YOUR/SCRIPT.
Also, you should run the cron job under ec2-user:
sudo -u ec2-user crontab -e
And try re-running the script.
Finally, make sure that your EC2 instance has the right role to write to S3.
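Putting those steps together (the script path here is the one from the error message in the question; the schedule is the one you already have):
# Make the script executable; the "Permission denied" error usually means this is missing.
chmod 755 /home/ec2-user/script.sh
# Edit ec2-user's crontab rather than root's.
sudo -u ec2-user crontab -e
# Crontab entry to run the upload every hour:
0 */1 * * * /home/ec2-user/script.sh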
I have installed the latest versions of the AWS CLI v2 and Docker, ran aws configure, and entered my access key and secret key. I have also verified that the AWS config file is correct and shows the right region and output format. My credentials in AWS are admin. I keep getting the following error:
Unable to locate credentials. You can configure credentials by running "aws configure".
Error: Cannot perform an interactive login from a non TTY device
Even though I have already run aws configure. I am running the commands prefixed with sudo as well. Any thoughts?! Thank you for your time!
The aws configure command was being run as the local user, whereas the ecr command was being run as sudo.
If you run commands as sudo it will not have access to your local users config, it will instead default to the root users.
Instead ensure all commands are run as the same user.
If you want the CLI to use an AWS config file that is not in the executing user's default location, you can also specify its location via the AWS_CONFIG_FILE environment variable.
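For example (a sketch, not from the original posts; the region, account ID, and registry host are placeholders), running both the configuration and the ECR/Docker login as the same non-root user avoids the mismatch:
# Let the non-root user run docker without sudo, so the CLI reads that user's ~/.aws/credentials.
sudo usermod -aG docker $USER   # log out and back in for this to take effect
# AWS CLI v2 ECR login; the region and registry below are placeholders.
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com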
I had a Jenkins 2.46 installation running on an EC2 box, associated with an IAM role through an instance profile.
Jenkins was able to do various tasks requiring AWS credentials (e.g. use Terraform, upload files to S3, access CodeCommit git repos) using just the instance profile role (no access keys or secret keys were stored on the instance).
After upgrading to Jenkins 2.89, this is no longer the case: every task requiring authentication with AWS fails with a 403 error.
However, running a command in a shell on the instance as the jenkins user still works fine (e.g. sudo -u jenkins /usr/bin/aws s3 ls s3://my-bucket/ lists the bucket files; running the same command from Jenkins' Script Console yields a 403).
I read the release notes of every version from 2.46 to 2.89 but I did not find anything relevant.
Jenkins was installed and updated through yum, the aws cli was installed using the bundled installer provided by AWS.
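One way to narrow this down (a diagnostic sketch, not part of the original post) is to compare the identity the CLI resolves as the jenkins user with what the instance metadata service is serving:
# Should return the instance-profile role when run on the instance.
sudo -u jenkins /usr/bin/aws sts get-caller-identity
# Confirm the metadata service still exposes the role's temporary credentials.
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/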
I would like to run some code whenever my AWS EC2 instance starts. The code should pull data down from Amazon S3, do some work on the data, and then copy the data back to S3. I created a makefile to do this, which works fine if I call it while I'm logged into the instance. I then placed a script in /etc/rc.local (this script runs every time the instance starts) that calls the makefile. This script successfully runs on instance startup. The problem I'm having is that when the makefile is called from the startup script it does not pull data from or copy data to S3. I read here that setting your access keys solves this problem on a Windows server, but that did not work for me. It looks like the code just stops when it tries to call any aws commands, because in the log file the output is always only the first line of code from the makefile. Below is what my log file says:
aws s3 sync s3:<s3 bucket to get data from> <location to save data to>
Here is the relevant code from my makefile:
### Download all data
get_data:
	aws s3 sync s3:<s3 bucket to get data from> <location to save data to>

### Copy data back to s3
copy_data_to_s3:
	aws s3 sync <location of data to copy to s3> s3:<s3 bucket data is copied to>
Here is my script in /etc/rc.local:
#!/bin/bash
#
# rc.local
#
make -f <location of makefile>/Makefile > <location to save log file>/log.txt
exit 0
Any help would be appreciated.
When you configure the AWS command line, it stores the credentials and region in ~/.aws/.... But when you execute your command on startup, from rc.local, it's not running as you.
The problem you're seeing is the AWS CLI failing to find any credentials.
So, you have a couple of options:
Option 1: (Preferred)
Don't configure locally-stored AWS credentials. Instead, launch your EC2 instance with an IAM role. When you do this, no credentials need to be stored on your instance. The AWS CLI will simply "find" the credentials from the IAM role.
Option 2: (May work)
From rc.local, run your scripts under your account. This way, the stored credentials may be found.
The more secure way to do what you want to do is using Option 1.
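For completeness, a sketch of Option 2, assuming the instance's login user is ec2-user (adjust to whichever user ran aws configure) and keeping the placeholders from the question:
#!/bin/bash
#
# rc.local
#
# Run the makefile as the user whose ~/.aws/ credentials exist, so the CLI can find them.
su - ec2-user -c 'make -f <location of makefile>/Makefile > <location to save log file>/log.txt 2>&1'
exit 0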
I have a user-data script that runs when launching an EC2 instance from an AMI image.
The script uses the AWS CLI, but I get "aws: command not found".
The AWS CLI is installed as part of the AMI (I can use it once the instance is up), but for some reason the script cannot find it.
Am I missing something? Any chance that the user-data script runs before the image is loaded (I find it hard to believe)?
Maybe the PATH environment variable is not set at that point?
Thanks,
any chance that the user-data script runs before the image is loaded
No, certainly not. It is a service on that image that runs the script.
Maybe the path env variable is not set at this point
This is most likely the issue. The script runs as root, not ec2-user, and doesn't have access to the PATH you may have configured in your ec2-user account. What happens if you try specifying /usr/bin/aws instead of just aws?
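A sketch of that suggestion (check the actual location with which aws while logged in normally; /usr/bin/aws is a common default, and the bundled installer often places it in /usr/local/bin/aws):
#!/bin/bash
# Call the CLI by absolute path so root's minimal PATH doesn't matter.
/usr/bin/aws --version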
You can install the AWS CLI and set up environment variables with your credentials. For example, in the user data script, you can write something like:
#!/bin/bash
apt-get install -y awscli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/
If you are using a CentOS-based AMI, you have to change the apt-get line to yum, and the package is called aws-cli instead of awscli.
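A yum-based variant of the same idea might look like this (a sketch; the bucket and paths are the same placeholders as above, and the package name follows the note just given):
#!/bin/bash
yum install -y aws-cli
export AWS_ACCESS_KEY_ID=your_access_key_id_here
export AWS_SECRET_ACCESS_KEY=your_secret_access_key_here
aws s3 cp s3://test-bucket/something /local/directory/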