I am using EC2 instances with an Ubuntu 18 AMI,
with the following user data script:
#!/bin/bash
sudo apt-get update -y
sudo apt-get install python-pip -y
sudo apt-get install awscli -y
mkdir /home/ubuntu/dir
aws s3 sync s3://art-meta-data ./art-meta-data
The script is only partially executed: it performs apt-get update and installs pip and the awscli, but it does not sync the bucket and does not create the directory.
I don't get any errors (maybe I'm not looking in the right place?), and when I try to create the dir and sync the bucket via SSH, it works perfectly, meaning the S3 permissions and OS permissions are fine.
What can be the issue here? What else should I check?
Edit:
I found this, explaining how to make your script run each time you stop and start the instance, but without an explanation of why the added cloud-init directives change anything. Can anyone point me to a reference for why this script behaves differently from a regular bash script?
It would be better to specify the full path in the sync command, so the directory isn't created in the wrong place.
#!/bin/bash
sudo apt-get update -y
sudo apt-get install python-pip -y
sudo apt-get install awscli -y
mkdir /home/ubuntu/dir
aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data
You can check the EC2 system logs to see the output of the failed command. That is really the only way for you to debug an issue within your user data script.
Double check that your instance profile has access to the bucket and that you are using the correct ARN to reference the bucket.
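For example, a quick sanity check you can run from the instance itself (the bucket name is taken from the question):
# Confirm which IAM identity the instance is actually using
aws sts get-caller-identity
# Confirm that identity can read the bucket
aws s3 ls s3://art-meta-data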
If you run sudo cat /var/log/cloud-init-output.log you can see the log output of everything that happened while the EC2 user data you supplied was run. Here's what you'd likely see if you did that:
mkdir: cannot create directory '/home/ubuntu/dir': No such file or directory
Jul 16 18:57:21 cloud-init[2471]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Jul 16 18:57:21 cloud-init[2471]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Jul 16 18:57:21 cloud-init[2471]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
ci-info: no authorized ssh keys fingerprints found for user ec2-user.
Cloud-init v. 19.3-45.amzn2 finished at Sat, 16 Jul 2022 18:57:21 +0000. Datasource DataSourceEc2. Up 121.29 seconds
It appears that mkdir fails because /home/ubuntu doesn't exist yet at the time the EC2 user data script runs. One way to solve this would be to move the creation of the folder to a script in /etc/profile.d.
To do this, you could modify your user data script as follows:
echo "mkdir -p /home/ubuntu/dir && aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data" >> /etc/profile.d/sync_bucket.sh
Files in /etc/profile.d/ are run when a user logs in, so you're guaranteed that the /home/ubuntu folder exists, and the sync will occur on each login.
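Equivalently, if the quoting inside the echo gets hard to follow, the same profile script can be written with a heredoc (a sketch using the same paths as above):
# Write the profile script with a quoted heredoc so nothing expands at write time
cat > /etc/profile.d/sync_bucket.sh <<'EOF'
mkdir -p /home/ubuntu/dir
aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data
EOF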
Related
I am trying to run a GCE startup script that downloads all dependencies, clones a repository, and runs a Python program. Here is the code:
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7
apt-get -y install git
export HOME=/home/codingassignment
echo $HOME
cd $HOME
rm -rf sshlogin-counter/
git clone https://rutu2605:************@github.com/rutu2605/sshlogin-counter.git
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &
When I run echo $HOME, it displays the path in the log file. However, when I cd into it, it says the directory is not found:
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /home/codingassignment
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /tmp/metadata-scripts701519516/startup-script: line 7: cd: /home/codingassignment: No such file or directory
That's because at the time the script is executed, the /home/codingassignment directory doesn't exist yet. To quote the answer you referred to in the comment:
The startup script is executed as root, when the user has not been created yet and no user is logged in
The home directory for the codingassignment user is created later, when you try to log in through SSH, for example using the SSH button in Cloud Console or the gcloud compute ssh command.
My suggestions:
a) Download the code to some "neutral" directory, like /assignment, and set proper permissions on this folder so that the codingassignment user can access it later.
b) Try creating the user first with adduser - this might solve your problem. First create the user, then use su codingassignment to drop root permissions, if you don't need them when executing the script. A sketch is shown below.
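A minimal sketch of option (b), untested and assuming the Debian/Ubuntu adduser flags are available on the image (the user name, repository and entry point are taken from the question; the clone credentials are omitted):
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7 git
# Create the user non-interactively; this also creates /home/codingassignment
adduser --disabled-password --gecos "" codingassignment
# Run the rest as that user; 'su -' gives it a proper $HOME
su - codingassignment -c '
rm -rf sshlogin-counter/
git clone https://github.com/rutu2605/sshlogin-counter.git
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &
'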
I have an EC2 userdata script which looks like this:
#!/bin/bash
yum -y install python3
yum -y install python3-pip
pip3 install boto3
pip3 install pandas
aws s3 cp s3://<bucket name>/script.py /home/ec2-user/script.py
chown ec2-user:ec2-user /home/ec2-user/script.py
echo "#reboot /home/ec2-user/script.py">> /var/spool/cron/ec2-user
All worked well; it added an entry to the ec2-user crontab when the EC2 instance was created.
However, when I stopped and started the instance, this crontab entry did not get executed - probably because the instance boots as the root user, not ec2-user?
I want the ec2-user crontab entry to be executed on startup. I cannot have entries in the root user's crontab going forward.
You can install a crontab for a specific user by passing a crontab file:
crontab -u ec2-user /home/ec2-user/script.py
-u Specifies the user whose crontab is to be viewed or modified. If this option is not given, crontab opens the crontab of the user who ran crontab.
Say your script needs to run only 5 minutes after boot, i.e. reboot + 5 minutes. The syntax is as follows:
@reboot sleep 300 && /home/ec2-user/script.py
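Putting the two together, a sketch of how the user data line from the question could be rewritten (untested; paths are taken from the question, and note that crontab -u replaces the user's existing crontab with the given file):
# Write the @reboot entry to a temp file, then install it as ec2-user's crontab
echo "@reboot /home/ec2-user/script.py" > /tmp/ec2-user-cron
crontab -u ec2-user /tmp/ec2-user-cron
rm /tmp/ec2-user-cron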
When I install the AWS CLI for the root user on CentOS 7, it installs to /usr/local/bin, as with other users. The problem is, though, that /usr/local/bin isn't in $PATH for the root user. At first I thought this was a bug in CentOS, one that has been around for a very long time, but it's also possible it's for reasons of security; I don't know.
What would be best practice then to install the AWS CLI for the root user?
To complement Chris's answer, you can install the AWS CLI v2 into a folder visible to root, such as /usr/local/sbin, as follows:
sudo yum install unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --bin-dir /usr/local/sbin
then confirm with:
aws --version
which should produce:
aws-cli/2.0.44 Python/3.7.3 Linux/3.10.0-1127.el7.x86_64 exe/x86_64.centos.7
This appears to be a bug logged against CentOS since 2012, in CentOS 6, but as of yet it has not been fixed.
Regarding running the AWS CLI as root, you can still run it by invoking /usr/local/bin/aws with the full path, although I get that this is not ideal. Additionally, you should try to avoid running the AWS CLI as root if possible; instead, run it as a named user.
According to the documentation, you can use either --bin-dir or -b to specify a different bin directory, so you could pick a path that both root and named users have in their $PATH variable.
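For instance, a quick sanity check of which directories root actually has on its path:
# Print PATH as root sees it
sudo sh -c 'echo $PATH'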
What worked for me was:
sudo ./aws/install --bin-dir /usr/bin
I reran the startup script using the following command:
sudo google_metadata_script_runner --script-type startup
All the yum install commands are failing with the following error:
startup-script: INFO startup-script-url: Retrieving key from https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
startup-script: INFO startup-script-url: https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64/repodata/repomd.xml: [Errno -1] repomd.xml signature could not be verified for google-cloud-sdk
startup-script: INFO startup-script-url: Trying other mirror.
startup-script: INFO startup-script-url: One of the configured repositories failed (Google Cloud SDK),
Any idea how I could fix this while instance provisioning or any workaround?
To be honest, I found this on Google; not sure if it helps, but maybe you can try it out anyway.
1) Disable caching in the yum config /etc/yum.conf:
http_caching=none
2) Delete temporary yum files:
rm -r /var/tmp/yum*
3) Restart the machine.
4) Clean up yum:
yum clean metadata
yum clean all
yum update
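If you'd rather do all of that non-interactively, e.g. from a provisioning script, here is a sketch consolidating the same steps (run as root; the reboot step is left out):
# Disable yum HTTP caching, drop cached temp files, and rebuild the metadata
echo "http_caching=none" >> /etc/yum.conf
rm -rf /var/tmp/yum*
yum clean metadata
yum clean all
yum -y update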
I am trying to reproduce the issue on my end. It would be helpful if you could share the information below:
What is the exact OS you are using?
What happens when you try to run the script manually after the VM starts?
Can you please share a sample script, without the confidential information or credentials?
Though I have not tested this, the error can happen due to yum not having enough cached data to continue, and a solution can be found in this public thread: https://community.cloudera.com/t5/Support-Questions/yum-doesn-t-have-enough-cached-data-to-continue/m-p/220862
I just encountered this same error on a Docker build.
Google Cloud's (latest) repo configuration is as follows:
[google-cloud-cli]
name=Google Cloud CLI
baseurl=https://packages.cloud.google.com/yum/repos/cloud-sdk-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
Our Dockerfile's yum repo configuration had to be corrected from repo_gpgcheck=1 to repo_gpgcheck=0, and then the error went away.
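That correction can be scripted; something like this sed would do it (a sketch: the repo file path /etc/yum.repos.d/google-cloud-sdk.repo is an assumption, so check the actual file name in your image):
# Turn off repo-level GPG checking for the Google Cloud repo
# (this would go in a RUN step of the Dockerfile)
sed -i 's/^repo_gpgcheck=1/repo_gpgcheck=0/' /etc/yum.repos.d/google-cloud-sdk.repo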
For me, running yum-config-manager --disable google-cloud-sdk solved it. I got the idea from the error message itself:
...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable google-cloud-sdk
or
subscription-manager repos --disable=google-cloud-sdk
In case anyone's facing the same issue with apt, you can try:
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key --keyring /usr/share/keyrings/cloud.google.gpg add -
I'm typing the following on my working Amazon EC2 Linux server (with the virtualenv activated):
pip install pillow
and I'm getting this error:
Could not install packages due to an EnvironmentError:
[Errno 13] Permission denied: '/home/ec2-user/env/lib64/python3.5/site-packages/Pillow-5.1.0.dist-info'.
Consider using the `--user` option or check the permissions.
If I use --user I get:
Can not perform a '--user' install. User site-packages are not visible in this virtualenv.
Based on your answers, what happened is that you used sudo when you created the virtualenv, so root owns it.
sudo chown ec2-user:ec2-user -R ~ec2-user/env will fix this and make ec2-user the owner of the directory (and its subdirectories) again.
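Alternatively, you can recreate the virtualenv without sudo so the problem doesn't recur (a sketch; the env path is taken from the question, and python3 -m venv is assumed to be available on the instance):
# Remove the root-owned env and recreate it as ec2-user (no sudo)
rm -rf ~ec2-user/env
python3 -m venv ~ec2-user/env
source ~ec2-user/env/bin/activate
pip install pillow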