AWS: execute script from ec2-user user

I have an EC2 userdata script which looks like this:
#!/bin/bash
yum -y install python3
yum -y install python3-pip
pip3 install boto3
pip3 install pandas
aws s3 cp s3://<bucket name>/script.py /home/ec2-user/script.py
chown ec2-user:ec2-user /home/ec2-user/script.py
echo "#reboot /home/ec2-user/script.py">> /var/spool/cron/ec2-user
This worked well: it added an entry to the ec2-user crontab when the EC2 instance was created.
However, when I stopped and started the instance, this crontab entry did not get executed - probably because the instance starts up as the root user, not ec2-user?
I want the ec2-user crontab entry to be executed on startup. I cannot have entries in the root user's crontab going forward.

You can add a crontab entry for a specific user:
crontab -u ec2-user /home/ec2-user/script.py
-u Specifies the user whose crontab is to be viewed or modified. If this option is not given, crontab opens the crontab of the user who ran crontab.
Say your script needs to be run only after 5 minutes, for example: reboot + 5 minutes. The syntax is as follows:
@reboot sleep 300 && /home/ec2-user/script.py
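Since the user data script itself runs as root, one minimal sketch (assuming the same script path as in the question) is to append the @reboot entry through the crontab command instead of writing to /var/spool/cron directly:
( crontab -u ec2-user -l 2>/dev/null; echo "@reboot /home/ec2-user/script.py" ) | crontab -u ec2-user -
chmod +x /home/ec2-user/script.py   # cron runs the file directly, so it needs a shebang and the execute bit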

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permissions to download them), i.e.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
so it's copying all the files correctly. The thing is that when I enter the instance via SSH, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected - all the yum installs, Python, Docker, etc. were successful - but there are no files.
Are the files deleted after the bootstrap script runs?
thanks!
Try copying them to a specific path and then look for them there, because here we don't know which working directory the script is going to use.
Use the following command with a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Otherwise you can do one thing:
use the pwd command to get the current directory and print it with echo, so you know the present working directory.
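For example, a minimal sketch that copies into the ec2-user home directory with explicit destinations (the destination paths are an assumption for illustration; the bucket paths are the ones from the question):
mkdir -p /home/ec2-user/content/images /home/ec2-user/content/themes
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/docker-compose.yaml
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
aws s3 sync s3://jv-pocho/themes/ /home/ec2-user/content/themes
chown -R ec2-user:ec2-user /home/ec2-user   # user data runs as root, so hand the files back to ec2-user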

GCE startup script: can't find $HOME after exporting in startup script

I am trying to run a GCE startup script that downloads all dependencies, clones a repository and runs a python program. Here is the code
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7
apt-get -y install git
export HOME=/home/codingassignment
echo $HOME
cd $HOME
rm -rf sshlogin-counter/
git clone https://rutu2605:************@github.com/rutu2605/sshlogin-counter.git
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &
When I run echo $HOME, it displays the path in the log file. However, when I cd into it, it says the directory is not found:
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /home/codingassignment
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /tmp/metadata-scripts701519516/startup-script: line 7: cd: /home/codingassignment: No such file or directory
That's because at the time when the script is executed, the /home/codingassignment directory doesn't exist yet. To quote the answer you referred to in the comment:
The startup script is executed as root when the user have been not created yet and no user is logged in
The home directory for the codingassignment user is created later, when you try to log in through SSH, for example using the SSH button in Cloud Console or the gcloud compute ssh command.
My suggestion:
a) Download the code to some "neutral" directory, like /assignment and set proper permissions for this folder so that the codingassignment user can access it later.
b) Try first creating the user with adduser - this might solve your problem. First create the user, then use su codingassignment to drop root permissions, if you don't need them when executing the script.
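A minimal sketch of option b), assuming the user name from the question and keeping a placeholder for the access token in the clone URL:
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7 git
# create the user and its home directory up front, so /home/codingassignment exists
useradd -m codingassignment
cd /home/codingassignment
rm -rf sshlogin-counter/
# run the clone and the program as the new user instead of root
su codingassignment -c 'git clone https://rutu2605:<token>@github.com/rutu2605/sshlogin-counter.git'
su codingassignment -c 'nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &'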

Setup VNC for ssm-user on EC2 using user data script

I've attempted to set up an EC2 instance to access the MATE desktop using port forwarding with the SSM agent. I've followed the instructions here. I want to use the user data script to set this up, but I can't get the ssm-user to start the vncserver.
I think the ssm-user is created when I log in, not when the script runs. In any case, if I do log in while the user data script is running, the config files for the vncserver appear to be set up with root access only.
Here is my user data script so far, based on other SO answers:
#!/bin/bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo '## install mate'
amazon-linux-extras install mate-desktop1.x -y
bash -c 'echo PREFERRED=/usr/bin/mate-session > /etc/sysconfig/desktop'
echo '## install tiger vnc'
yum install tigervnc-server -y
echo '## install chromium'
amazon-linux-extras install epel -y
yum install chromium -y
echo '## setup user'
su ssm-user
export HOME=/home/ssm-user
echo '## config vnc password'
umask 0077
mkdir -p "$HOME/.vnc"
chmod go-rwx "$HOME/.vnc"
vncpasswd -f <<<"some_password" >"$HOME/.vnc/passwd"
echo '## start vncserver'
vncserver :1
When I run this, the log shows:
su: user ssm-user does not exist
If I instead let the root user start the vncserver (removing the su ssm-user line), I'm able to connect using the SSM port forwarding session and VNC, but the desktop is blank. I guess this is because I'm logged in as ssm-user? Is there a way to set up the vncserver for the ssm-user via a user data script?
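Two things are worth noting here: a bare su ssm-user in a non-interactive script does not make the subsequent lines run as that user (they keep running as root), and ssm-user does not exist until the SSM agent creates it on the first session. A hedged sketch of one possible workaround, assuming it is acceptable to create the account up front:
# create ssm-user ahead of time if it does not exist yet (normally the SSM agent creates it on the first session)
id ssm-user &>/dev/null || useradd -m ssm-user
# run the per-user VNC setup as ssm-user in a single shell
su - ssm-user -c '
  umask 0077
  mkdir -p ~/.vnc
  vncpasswd -f <<<"some_password" > ~/.vnc/passwd
  vncserver :1
'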

ec2 user data script is only partially executed

I am using EC2 instances with an Ubuntu 18 AMI, with a user data script as follows:
#!/bin/bash
sudo apt-get update -y
sudo apt-get install python-pip -y
sudo apt-get install awscli -y
mkdir /home/ubuntu/dir
aws s3 sync s3://art-meta-data ./art-meta-data
The script is only partially executed: it installed pip, performed the apt-get update, and installed the awscli, but it does not sync the bucket and does not create the directory.
I don't get any errors (maybe I'm not looking in the right place?), and when I try to create the dir and sync the bucket via SSH, it works perfectly, meaning the S3 permissions and OS permissions are fine.
What can be the issue here? What else should I check?
Edit:
I found this - explaining how to make your script run each time you stop and start the instance, but without an explanation of why the added meta coding changes anything. Can anyone point me to some reference for why this script works differently than a regular bash script?
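For reference, the linked approach works by wrapping the user data in a MIME multipart document whose #cloud-config part tells cloud-init to run the scripts-user module on every boot instead of only the first boot; a sketch of that structure (the shell script part is where the regular user data goes):
Content-Type: multipart/mixed; boundary="//"
MIME-Version: 1.0

--//
Content-Type: text/cloud-config; charset="us-ascii"

#cloud-config
cloud_final_modules:
- [scripts-user, always]

--//
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
# ...the regular user data script goes here...
--//--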
It would be better to specify the full path in the sync command, to avoid the directory being created in the wrong place.
#!/bin/bash
sudo apt-get update -y
sudo apt-get install python-pip -y
sudo apt-get install awscli -y
mkdir /home/ubuntu/dir
aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data
You can check the EC2 system logs to see the output of the failed command. That is really the only way for you to debug an issue within your user data script.
Double-check that your instance profile has access to the bucket and that you are using the correct ARN to reference the bucket.
If you run sudo cat /var/log/cloud-init-output.log you can see the log output of everything that happened while the ec2 user-data you supplied was run. Here's what you'd likely see if you did that:
mkdir: cannot create directory '/home/ubuntu/dir': No such file or directory
Jul 16 18:57:21 cloud-init[2471]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Jul 16 18:57:21 cloud-init[2471]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Jul 16 18:57:21 cloud-init[2471]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/site-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
ci-info: no authorized ssh keys fingerprints found for user ec2-user.
Cloud-init v. 19.3-45.amzn2 finished at Sat, 16 Jul 2022 18:57:21 +0000. Datasource DataSourceEc2. Up 121.29 seconds
It appears that mkdir fails because /home/ubuntu doesn't yet exist at the time the ec2 user data script runs. One way to solve this would be to move the creation of the folder to /etc/profile.d.
To do this, you could modify your user data script as follows:
echo "mkdir -p /home/ubuntu/dir && aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data" >> /etc/profile.d/sync_bucket.sh
Files in /etc/profile.d/ are run when a user logs in, so you're guaranteed that the /home/ubuntu folder exists, and the sync will occur on each login.
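A slightly fuller sketch of that idea, assuming the bucket and paths from the question (a heredoc keeps the quoting simpler than a long echo):
#!/bin/bash
sudo apt-get update -y
sudo apt-get install -y python-pip awscli
# write the sync step to /etc/profile.d so it runs at each login, once /home/ubuntu exists
cat > /etc/profile.d/sync_bucket.sh <<'EOF'
mkdir -p /home/ubuntu/dir
aws s3 sync s3://art-meta-data /home/ubuntu/dir/art-meta-data
EOF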

How to create an additional EC2 user in a Linux AMI via UserData with SSH permission

Problem statement: create an additional user, pretty much the same as what is explained here; the only difference is that instead of generating a new key pair I am using the same key pair that is used for ec2-user.
Now if I run the following commands manually after logging into the EC2 instance, it works without any issue and I am able to SSH with the same key as test-user:
sudo adduser test-user
sudo su - test-user
mkdir .ssh
chmod 700 .ssh
cd .ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> authorized_keys
chmod 600 authorized_keys
But if I keep the same instructions in the user data section of the instance, to run on boot-up, it only creates test-user but doesn't perform the rest of the steps. I also couldn't find much detail in /var/log/cloud-init-output.log.
#!/bin/bash
sudo adduser test-user
sudo su - test-user
mkdir .ssh
chmod 700 .ssh
cd .ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> authorized_keys
chmod 600 authorized_keys
First, make sure cloud-init is installed on your instance
sudo yum install cloud-init
Stop the instance (not terminate)
Update the user data with the following script (make sure to replace <YOUR-PUBLIC-SSH-KEY> with your key, e.g. ssh-rsa abc123...):
#cloud-config
cloud_final_modules:
- [users-groups, always]
users:
  - name: username
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - <YOUR-PUBLIC-SSH-KEY>
Start your instance
Now you should be able to log in the same way as for ec2-user.
More information here: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
Apparently, scripts entered as user data are executed as the root user, so any files you create will be owned by root. You therefore have to change the ownership of those files to test-user. The command below needs to be executed at the end:
chown -R test-user:test-user /home/test-user/
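Putting that together, a hedged sketch of the user data doing the file setup as root and handing ownership over at the end (same metadata URL and user name as in the question):
#!/bin/bash
adduser test-user
# create the SSH files as root, then give the whole home directory to test-user
mkdir -p /home/test-user/.ssh
chmod 700 /home/test-user/.ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> /home/test-user/.ssh/authorized_keys
chmod 600 /home/test-user/.ssh/authorized_keys
chown -R test-user:test-user /home/test-user/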