Initiate an EC2 instance with a pack of commands - amazon-web-services

Is there a way to start an AWS EC2 instance with a pack of commands?
I'm creating a new instance, and what I want to achieve is to run some Linux commands automatically after it starts, without connecting to the machine and typing those commands manually.

This is exactly the purpose of UserData.
You provide your script (Bash for Linux, or PowerShell for Windows), and it then runs the first time the instance boots.
An example user data script, taken from the documentation, that sets up a web server is below.
#!/bin/bash
# Install updates and the LAMP packages (Apache, MariaDB, PHP)
yum update -y
amazon-linux-extras install -y lamp-mariadb10.2-php7.2 php7.2
yum install -y httpd mariadb-server
# Start Apache and enable it at boot
systemctl start httpd
systemctl enable httpd
# Give ec2-user group-based write access to the web root
usermod -a -G apache ec2-user
chown -R ec2-user:apache /var/www
chmod 2775 /var/www
find /var/www -type d -exec chmod 2775 {} \;
find /var/www -type f -exec chmod 0664 {} \;
# Drop a test page to confirm PHP works
echo "<?php phpinfo(); ?>" > /var/www/html/phpinfo.php
If you need to debug, take a look at /var/log/cloud-init-output.log once the instance has launched.
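For example, to follow that log while the script is still running:
# watch cloud-init run the user data in real time
sudo tail -f /var/log/cloud-init-output.log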
However, if there is a larger number of steps, it might be preferable to create a pre-baked AMI, which involves setting up a blank server with all the necessary services and configuration (using a tool such as Ansible, Chef or Puppet) and then creating an image from it.
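For completeness, here is a sketch of how a script like the one above can be attached at launch time with the AWS CLI; the AMI ID, instance type, key pair and security group below are placeholders to replace with your own values:
# launch an instance and pass the script as user data
aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t3.micro \
    --key-name my-key-pair \
    --security-group-ids sg-xxxxxxxx \
    --user-data file://userdata.sh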

Related

Files downloaded with user-data deleted?

I have a user-data bootstrap script that creates a folder called content in the root directory and downloads files from an S3 bucket.
#!/bin/bash
sudo yum update -y
sudo yum search docker
sudo yum install docker -y
sudo usermod -a -G docker ec2-user
id ec2-user
newgrp docker
sudo yum install python3-pip -y
sudo pip3 install docker-compose
sudo systemctl enable docker.service
sudo systemctl start docker.service
export PATH=$PATH:/usr/local/bin
mkdir content
docker network create web_todos
docker run -d -p 80:80 --name nginx-proxy --network=web_todos -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
aws s3 cp s3://jv-pocho/docker-compose.yaml .
aws s3 cp s3://jv-pocho/backup.sql .
aws s3 cp s3://jv-pocho/dns-updater.sh .
aws s3 sync s3://jv-pocho/images/ ./content/images
aws s3 sync s3://jv-pocho/themes/ ./content/themes
docker-compose up -d
sleep 30
docker exec -i db_jv sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"' < backup.sql
rm backup.sql
chmod +x dns-updater.sh
This bootstrap works OK: it creates the folder and downloads the files (it has permission to download them), e.g.:
download: s3://jv-pocho/dns-updater.sh to ./dns-updater.sh
[ 92.739262] cloud-init[3203]: Completed 32.0 KiB/727.2 KiB (273.1 KiB/s) with 25 file(s) remaining
So it's copying all the files correctly. The thing is that when I enter the instance via SSH, I don't have any files inside:
[ec2-user@ip-x-x-x-x ~]$ ls
[ec2-user@ip-x-x-x-x ~]$ ls -l
total 0
All commands worked as expected; the yum installs, Python, Docker, etc. were all installed successfully, but there are no files.
Are the files deleted after the bootstrap script runs?
Thanks!
Try copying them to a specific path, then look for them there. As written, the script uses relative paths, and the user data script runs as root, not as ec2-user, so you don't know which directory the files end up in (it is not /home/ec2-user).
Use the following form to copy to a specific path:
aws s3 cp s3://Bucket-name/Object /Path
Alternatively, you can do one more thing:
run the pwd command to get the current directory and print it with echo, so you can see which working directory the script actually used.
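A minimal sketch of both suggestions, reusing the bucket names from the question and assuming /home/ec2-user is where the files should end up:
# print the directory the user data script actually runs from
echo "user data is running in: $(pwd)"
# copy to explicit paths instead of relying on the current directory
mkdir -p /home/ec2-user/content
aws s3 cp s3://jv-pocho/docker-compose.yaml /home/ec2-user/
aws s3 sync s3://jv-pocho/images/ /home/ec2-user/content/images
chown -R ec2-user:ec2-user /home/ec2-user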

Setup VNC for ssm-user on EC2 using user data script

I've attempted to set up an EC2 instance so I can access the MATE desktop over port forwarding through the SSM agent. I've followed the instructions here. I want to use the user data script to set this up, but I can't get the ssm-user to start the vncserver.
I think the ssm-user is created when I log in, not when the script runs. In any case, if I do log in while the user data script is running, the config files for the vncserver appear to be set up with root access only.
Here is my user data script so far, based on other SO answers:
#!/bin/bash
exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1
echo '## install mate'
amazon-linux-extras install mate-desktop1.x -y
bash -c 'echo PREFERRED=/usr/bin/mate-session > /etc/sysconfig/desktop'
echo '## install tiger vnc'
yum install tigervnc-server -y
echo '## install chromium'
amazon-linux-extras install epel -y
yum install chromium -y
echo '## setup user'
su ssm-user
export HOME=/home/ssm-user
echo '## config vnc password'
umask 0077
mkdir -p "$HOME/.vnc"
chmod go-rwx "$HOME/.vnc"
vncpasswd -f <<<"some_password" >"$HOME/.vnc/passwd"
echo '## start vncserver'
vncserver :1
When I run this, the log shows:
su: user ssm-user does not exist
If I instead let the root user start the vncserver (removing the su ssm-user line), I'm able to connect using the SSM port forwarding session and VNC, but the desktop is blank. I guess this is because I'm logged in as ssm-user? Is there a way to set up the vncserver for the ssm-user via the user data script?
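For reference, a bare su in a script does not make the lines after it run as that user; the usual pattern is to run the whole command as the other user with su -c. A sketch of that pattern, which also checks whether ssm-user exists at user data time (the question's own suspicion is that it does not yet):
# does ssm-user exist when the user data runs?
if id ssm-user >/dev/null 2>&1; then
    # run the whole VNC setup as ssm-user in one non-interactive shell
    su - ssm-user -c 'umask 0077; mkdir -p ~/.vnc; echo "some_password" | vncpasswd -f > ~/.vnc/passwd; vncserver :1'
else
    echo 'ssm-user does not exist yet at user data time'
fi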

Let's encrypt certbot on AWS Linux

I am new to both AWS and Let's Encrypt.
I followed an article and simply ran these commands:
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
sudo cp certbot-auto /usr/bin/
Then I ran this command:
sudo /usr/bin/certbot-auto --nginx -d example.com -d www.example.com --debug
This gives me the error
Sorry, I don't know how to bootstrap Certbot on your operating system!
You will need to install OS dependencies, configure virtualenv, and
run pip install manually. Please see
https://letsencrypt.readthedocs.org/en/latest/contributing.html#prerequisites
for more info.
What does this really mean?
How do I set up certbot on AWS Linux?
I have created a fresh Amazon Linux 2 EC2 instance and tested the following for you.
These steps are working for me.
Edit the file /usr/bin/certbot-auto to recognize your version of Linux:
$ sudo vim /usr/bin/certbot-auto
Find this line in the file (likely near line 780):
elif [ -f /etc/redhat-release ]; then
and replace the whole line with this:
elif [ -f /etc/redhat-release ] || grep 'cpe:.*:amazon_linux:2' /etc/os-release > /dev/null 2>&1; then
Save and exit vim (type :wq to do that).
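If you prefer to script that edit instead of opening vim, the same substitution can be done with sed (a sketch; double-check it against your copy of certbot-auto before relying on it):
# make certbot-auto's redhat-release check also match Amazon Linux 2
sudo sed -i "s@elif \[ -f /etc/redhat-release \]; then@elif [ -f /etc/redhat-release ] || grep 'cpe:.*:amazon_linux:2' /etc/os-release > /dev/null 2>\&1; then@" /usr/bin/certbot-auto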
Reference:
Deploying Let’s Encrypt on an Amazon Linux AMI EC2 Instance
Make sure that the system requirements are met; you can find the system requirements here.
Also, here are the best practices for certbot-auto deployment.
Navigate to your home directory (/home/ec2-user).
Download EPEL using the following command:
sudo wget -r --no-parent -A 'epel-release-*.rpm' https://dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/
Install the repository packages as shown in the following command:
sudo rpm -Uvh dl.fedoraproject.org/pub/epel/7/x86_64/Packages/e/epel-release-*.rpm
Enable EPEL as shown in the following command:
sudo yum-config-manager --enable epel*
Confirm that EPEL is enabled with the following command:
sudo yum repolist all
Install and run Certbot
This procedure is based on the EFF documentation for installing Certbot on Fedora and on RHEL 7. It describes the default use of Certbot, resulting in a certificate based on a 2048-bit RSA key.
sudo yum install -y certbot python2-certbot-apache
or, for Nginx:
sudo yum install -y certbot python2-certbot-nginx
Source here
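Once Certbot is installed from EPEL this way, the original goal from the question can be done with the certbot command itself rather than certbot-auto; a sketch, keeping example.com as the placeholder domain:
sudo certbot --nginx -d example.com -d www.example.com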

AWSEBCLI not reading env vars

I am attempting to run AWSEBCLI inside a Docker container. I am passing the access key and security token as env vars, as described in the docs under "Configuration Settings and Precedence", but I get:
ERROR: CredentialsError - Operation Denied. You appear to have no credentials
Here is my Dockerfile:
FROM circleci/golang
ADD . /go/src
WORKDIR /go/src
RUN sudo apt-get -y -qq update --assume-yes
RUN sudo apt-get install python-pip python-dev build-essential --assume-yes
RUN sudo pip install awscli=="1.16.9"
RUN sudo pip install awsebcli=="3.14.4"
RUN echo ${AWS_ACCESS_KEY_ID}
RUN echo ${AWS_SECRET_ACCESS_KEY}
CMD sudo eb deploy Circledocker
The environment defined in your user session and the sudo session are not the same.
RUN echo ${AWS_ACCESS_KEY_ID} -> Works
RUN sudo echo ${AWS_ACCESS_KEY_ID} -> Will not provide you the value.
Take a look at man sudo, the -E flag :
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their
existing environment variables. The security policy may return an error
if the user does not have permission to preserve the environment.
So this normally works:
sudo -E bash -c 'echo $AWS_ACCESS_KEY_ID'
Try your eb deploy command like this:
sudo -E bash -c 'eb deploy Circledocker'
Hope it helps!
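Applied to the Dockerfile above, that means changing the last line and making sure the credentials are actually present in the container at run time; a sketch, assuming they are passed in with docker run -e rather than baked into the image (the image name is a placeholder):
# last line of the Dockerfile: preserve the environment across sudo
CMD sudo -E bash -c 'eb deploy Circledocker'
# when running the container, forward the host's credentials into it
docker run -e AWS_ACCESS_KEY_ID -e AWS_SECRET_ACCESS_KEY my-eb-image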

Amazon AWS Filezilla transfer permission denied

I have my Amazon AWS instance running, and the test page is up.
I am trying to SFTP the files to the server to display my website. I have FileZilla connected to the AWS server, but when I try to move the files from my local machine to the /var/www/html directory, it says permission denied.
I just figured out I CAN move the files to the /home/ec2-user directory, so I guess my files are on the server. But when I try to move them from there to the /var/www/html directory, it still won't move them: permission denied.
I've been researching this for approximately 2 hours now, but I haven't been able to find the answer.
Any help is greatly appreciated, I'm so close! Haha
Thanks
UPDATE
To give the user ec2-user (Amazon AWS) write access to the public web directory (/var/www/html),
enter this command via PuTTY or a terminal, using sudo:
sudo chown -R ec2-user /var/www/html
Then make sure the permissions on that entire folder are correct:
sudo chmod -R 755 /var/www/html
Docs:
Setting up Amazon EC2 instances
Connect to Amazon EC2 file directory using FileZilla and SFTP (video)
Understanding and Using File Permissions
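If the web server should keep group access to those files as well, a variant that mirrors the Apache user data script at the top of this page, assuming the apache group exists on the instance:
# keep the apache group on the web root while giving ec2-user ownership
sudo usermod -a -G apache ec2-user
sudo chown -R ec2-user:apache /var/www/html
sudo find /var/www/html -type d -exec chmod 2775 {} \;
sudo find /var/www/html -type f -exec chmod 0664 {} \;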
If you are using CentOS, then use:
sudo chown -R centos:centos /var/www/html
sudo chmod -R 755 /var/www/html
For Ubuntu:
sudo chown -R ubuntu:ubuntu /var/www/html
sudo chmod -R 755 /var/www/html
For an Amazon Linux AMI:
sudo chown -R ec2-user:ec2-user /var/www/html
sudo chmod -R 755 /var/www/html
In my case, /var/www/html is not a directory but a symbolic link to /var/app/current, so you should change the real directory, i.e. /var/app/current:
sudo chown -R ec2-user /var/app/current
sudo chmod -R 755 /var/app/current
I hope this saves you some time :)
If you're using Ubuntu then use the following:
sudo chown -R ubuntu /var/www/html
sudo chmod -R 755 /var/www/html
This works best for everyone:
chmod ugo+rwx your-folder
https://help.ubuntu.com/community/FilePermissions
In my case, after 30 minutes of changing permissions, I realized that the XLSX file I was trying to transfer was still open in Excel.
For me, the following worked:
chown -R ftpusername /var/app/current