AWS EC2 USERDATA NPM INSTALL - amazon-web-services

I have this user data script in templates to launch ec2 instances:
#!/bin/bash
yum update -y
export HOME=/home/ec2-user
nodev='16.19.0'
nvmv='0.39.3'
gitrepo='https://github.com/ptv1p3r/etl-fuel-priceguide-ec2.git'
su - ec2-user -c "curl https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh | bash"
su - ec2-user -c "nvm install ${nodev}"
su - ec2-user -c "nvm use ${nodev}"
# install git
yum install git -y
# get repository clone
cd /home/ec2-user
su - ec2-user -c "git clone ${gitrepo}"
# install node modules
cd /home/ec2-user/etl-fuel-priceguide-ec2
su - ec2-user -c "npm install"
# start app
su - ec2-user -c "node index.js"
Everything works until the npm install step: instead of running inside the cloned repository folder, it keeps running in /home/ec2-user and then fails because it can't find package.json.
I can't git clone directly into the /home/ec2-user folder because it isn't empty, and I simply can't get the script to move into the cloned folder to run npm install there. Please help.
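A minimal sketch of the usual fix, assuming the clone lands in /home/ec2-user/etl-fuel-priceguide-ec2 (the repo name from the URL above): each su - ec2-user -c starts a fresh login shell in /home/ec2-user, so the cd has to happen in the same invocation as the command that needs it:
# cd and npm install must share one shell; path assumed from the script above
su - ec2-user -c "cd /home/ec2-user/etl-fuel-priceguide-ec2 && npm install"
su - ec2-user -c "cd /home/ec2-user/etl-fuel-priceguide-ec2 && node index.js"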

Related

Trigger Workflow Job with commands in Linux (Cloud-init)

I'm trying to deploy my GitHub repository on my EC2 instance with a Terraform and cloud-init file. For the GitHub part I use GitHub Actions, where I have already made a workflow file for Node.js that runs correctly. Every time I create a new instance with the Terraform file and cloud-init, the workflow should build again to create a _work folder, which is important for further processes. My question is: how can I trigger this workflow build with Linux commands in the YAML?
Cloud-init File:
#cloud-config
runcmd:
- apt-get update && apt-get install -y expect
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {listen 80 default_server;server_name _;location / {root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build;try_files \$uri /index.html;}}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/build
I found this GitHub page, but I can't figure out how to do it:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
The code should go right after this command:
- yes "" | sudo apt install nginx

Executing npm install in user data

I am attempting to create a launch template in aws with the following in the user data
#!/bin/bash
home=/home/ec2-user
nodev='8.11.2'
nvmv='0.33.11'
#install node
su - ec2-user -c "curl
https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh | bash"
su - ec2-user -c "nvm install ${nodev}"
su - ec2-user -c "nvm use ${nodev}"
# install git
yum install git -y
#clone the code
cd /home/ec2-user
su - ec2-user -c "git clone https://github.com/xyz/xdf.git"
cd /home/ec2-user/xdf
#install dependencies
su - ec2-user -c "npm install"
echo "test" > test.txt
#install pm2
su - ec2-user -c "npm install pm2 -g"
#run the server
su - ec2-user -c "pm2 run index.js"
The script executes and the repo is cloned, but the npm install command runs in /home/ec2-user rather than in /home/ec2-user/xdf. The test.txt is created in the correct place, i.e. inside /home/ec2-user/xdf. How do I get npm install to run in /home/ec2-user/xdf? I tried running a plain npm install instead of su - ec2-user -c "npm install", but it still gives the same result.
First of all, user data runs with root permissions, so you don't need sudo or su there. If you want ec2-user to own a directory afterwards, simply run chown -R ec2-user:ec2-user /path/to/dir.
Next, when you run su - ec2-user -c ..., it starts a fresh login shell whose working directory is /home/ec2-user, so the cd /home/ec2-user/xdf done by the outer script has no effect inside it.
Simply remove all the su calls from your script.
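A sketch of the user data with the su wrappers removed; paths and versions are taken from the question, and sourcing nvm.sh is an assumption needed because nvm is a shell function rather than a binary:
#!/bin/bash
# User data already runs as root; no su/sudo needed
export HOME=/root                 # assumption: HOME may be unset in user data
yum install -y git
curl https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
export NVM_DIR="$HOME/.nvm"
. "$NVM_DIR/nvm.sh"               # load nvm into this non-interactive shell
nvm install 8.11.2
cd /home/ec2-user
git clone https://github.com/xyz/xdf.git
cd /home/ec2-user/xdf             # this cd now applies to the commands below
npm install
npm install -g pm2
pm2 start index.js
chown -R ec2-user:ec2-user /home/ec2-user/xdf   # hand the tree back to ec2-user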

Creating an Object Detection Application Using TensorFlow: not able to install and launch the web application

I am not able to launch the web application in track 3 of quest 10, Creating an Object Detection Application Using TensorFlow.
I used the following code to do so:
apt-get update
apt-get install -y protobuf-compiler python3-pil python3-lxml python3-pip python3-dev git
pip3 install --upgrade pip
pip3 install Flask==2.1.2 WTForms==3.0.1 Flask_WTF==1.0.1 Werkzeug==2.0.3 itsdangerous==2.1.2 jinja2==3.1.2
Install TensorFlow:
pip3 install tensorflow==2.9.0
Install the Object Detection API library:
cd /opt
git clone https://github.com/tensorflow/models
cd models/research
protoc object_detection/protos/*.proto --python_out=.
Download the pre-trained model binaries by running the following commands:
mkdir -p /opt/graph_def
cd /tmp
for model in \
  ssd_mobilenet_v1_coco_11_06_2017 \
  ssd_inception_v2_coco_11_06_2017 \
  rfcn_resnet101_coco_11_06_2017 \
  faster_rcnn_resnet101_coco_11_06_2017 \
  faster_rcnn_inception_resnet_v2_atrous_coco_11_06_2017
do
  curl -OL http://download.tensorflow.org/models/object_detection/$model.tar.gz
  tar -xzf $model.tar.gz $model/frozen_inference_graph.pb
  cp -a $model /opt/graph_def/
done
Now you will choose a model for the web application to use. For this lab, use faster_rcnn_resnet101_coco_11_06_2017 and enter the following command:
ln -sf /opt/graph_def/faster_rcnn_resnet101_coco_11_06_2017/frozen_inference_graph.pb /opt/graph_def/frozen_inference_graph.pb
Install and launch the web application
Change to the root home directory:
cd $HOME
Clone the example application from Github:
git clone https://github.com/GoogleCloudPlatform/tensorflow-object-detection-example
Install the application:
cp -a tensorflow-object-detection-example/object_detection_app_p3 /opt/
chmod u+x /opt/object_detection_app_p3/app.py
Create the object detection service:
cp /opt/object_detection_app_p3/object-detection.service /etc/systemd/system/
Reload systemd to reload the systemd manager configuration:
systemctl daemon-reload
The application provides a simple user authentication mechanism. You can change the default username and password by modifying the /opt/object_detection_app_p3/decorator.py file and changing the following lines:
USERNAME = 'username'
PASSWORD = 'passw0rd'
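One way to change them non-interactively, sketched with hypothetical replacement values:
# Swap the default credentials in place (new values are placeholders)
sed -i "s/USERNAME = 'username'/USERNAME = 'mynewuser'/" /opt/object_detection_app_p3/decorator.py
sed -i "s/PASSWORD = 'passw0rd'/PASSWORD = 'mynewpass'/" /opt/object_detection_app_p3/decorator.py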
Launch the application.
systemctl enable object-detection
systemctl start object-detection
systemctl status object-detection
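If the service fails to start, the systemd journal is usually the first place to look; a generic check, not specific to this lab:
# Show the last 50 log lines from the unit
journalctl -u object-detection --no-pager -n 50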

Run conda inside singularity

I would like to run a conda command with singularity.
The command is:
singularity exec ~/dockerimage.sif conda
It yields an error:
/.singularity.d/actions/exec: 9: exec: conda: Permission denied
Here is my Dockerfile:
FROM ubuntu:20.04
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y apt-utils wget=1.20.3-1ubuntu1 python3.8=3.8.2-1ubuntu1.2 python3-pip=20.0.2-5ubuntu1 python3-yaml=5.3.1-1 git=1:2.25.1-1ubuntu3
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.3-Linux-x86_64.sh && chmod +x Miniconda3-py38_4.8.3-Linux-x86_64.sh && ./Miniconda3-py38_4.8.3-Linux-x86_64.sh -b && cp /root/miniconda3/bin/conda /usr/bin/conda
RUN wget https://data.qiime2.org/distro/core/qiime2-2020.8-py36-linux-conda.yml && conda env create -n qiime2-2020.8 --file qiime2-2020.8-py36-linux-conda.yml && conda install -y -n qiime2-2020.8 -c conda-forge -c bioconda -c qiime2 -c defaults q2cli q2template q2-types q2-feature-table q2-metadata vsearch snakemake
What should I add to the Dockerfile? How would it work?
You're installing with conda's default settings, which puts it in the home of the current user. That user is root. Singularity runs as your current user, so unless you're running as root the conda files will not be available. To fix it:
- modify your conda install command to set the install prefix: -p /opt/conda (or some other arbitrary location)
- make sure that any user will be able to access the files installed with conda: chmod -R o+rX /opt/conda
- update PATH to include conda: export PATH="$PATH:/opt/conda/bin"
- when running your image, make sure your environment variables are not overriding those in the container: singularity exec --cleanenv ~/dockerimage.sif conda
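A sketch of those changes applied to the Dockerfile above; the /opt/conda prefix is an arbitrary choice, as noted:
# Install to a world-readable prefix instead of /root, then open permissions
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-py38_4.8.3-Linux-x86_64.sh && \
    chmod +x Miniconda3-py38_4.8.3-Linux-x86_64.sh && \
    ./Miniconda3-py38_4.8.3-Linux-x86_64.sh -b -p /opt/conda && \
    chmod -R o+rX /opt/conda
# Make conda resolvable for any user at runtime
ENV PATH="${PATH}:/opt/conda/bin"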

Deploying a Geodjango Application on AWS Elastic Beanstalk

I'm trying to deploy a geodjango application on AWS Elastic Beanstalk. The configuration is 64bit Amazon Linux 2017.09 v2.6.6 running Python 3.6. I am getting this error when trying to deploy:
Error: Package: gdal-java-1.9.2-8.rhel6.x86_64 (pgdg93)
       Requires: libpoppler.so.5()(64bit)
How do I install the required package? I read through Setting up Django with GeoDjango Support in AWS Beanstalk or EC2 Instance but I am still getting problems. My ebextensions currently looks like:
commands:
  01_yum_update:
    command: sudo yum -y update
  02_epel_repo:
    command: sudo yum-config-manager -y --enable epel
  03_install_gdal_packages:
    command: sudo yum -y install gdal gdal-devel
packages:
  yum:
    git: []
    postgresql95-devel: []
    gettext: []
    libjpeg-turbo-devel: []
    libffi-devel: []
I'm going to answer my own question for the sake of my future projects and anyone else trying to get started with GeoDjango. Updating this answer as of July 2020.
Create an ebextensions file to install GDAL on the EC2 instance at deployment:
01_gdal.config
commands:
  01_install_gdal:
    test: "[ ! -d /usr/local/gdal ]"
    command: "/tmp/gdal_install.sh"
files:
  "/tmp/gdal_install.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo yum-config-manager --enable epel
      sudo yum -y install make automake gcc gcc-c++ libcurl-devel proj-devel geos-devel
      # Geos
      cd /
      sudo mkdir -p /usr/local/geos
      cd usr/local/geos
      sudo wget -O geos-3.7.2.tar.bz2 http://download.osgeo.org/geos/geos-3.7.2.tar.bz2
      sudo tar -xvf geos-3.7.2.tar.bz2
      cd geos-3.7.2
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # Proj4
      cd /
      sudo mkdir -p /usr/local/proj
      cd usr/local/proj
      sudo wget -O proj-5.2.0.tar.gz http://download.osgeo.org/proj/proj-5.2.0.tar.gz
      sudo wget -O proj-datumgrid-1.8.tar.gz http://download.osgeo.org/proj/proj-datumgrid-1.8.tar.gz
      sudo tar xvf proj-5.2.0.tar.gz
      sudo tar xvf proj-datumgrid-1.8.tar.gz
      cd proj-5.2.0
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # GDAL
      cd /
      sudo mkdir -p /usr/local/gdal
      cd usr/local/gdal
      sudo wget -O gdal-2.4.4.tar.gz http://download.osgeo.org/gdal/2.4.4/gdal-2.4.4.tar.gz
      sudo tar xvf gdal-2.4.4.tar.gz
      cd gdal-2.4.4
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
As shown, the script first checks whether GDAL already exists using the test key. It then downloads the Geos, Proj, and GDAL sources and installs them under /usr/local. At the time of writing, GeoDjango (Django 3.0) supports up to GEOS 3.7, PROJ 5.2 (which also requires proj-datumgrid; current PROJ releases do not), and GDAL 2.4. Warning: this installation process can take a long time. Also, I am not a Linux professional, so some of those commands may be redundant, but it works.
Lastly I add the following two environment variables to my Elastic Beanstalk configuration:
LD_LIBRARY_PATH: /usr/local/lib:$LD_LIBRARY_PATH
PROJ_LIB: /usr/local/proj
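For reference, a sketch of how those variables can be declared in an .ebextensions config file, using the standard Elastic Beanstalk environment namespace (values copied from above):
option_settings:
  aws:elasticbeanstalk:application:environment:
    LD_LIBRARY_PATH: /usr/local/lib:$LD_LIBRARY_PATH
    PROJ_LIB: /usr/local/proj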
If you still have trouble, I recommend checking the logs and ssh-ing into the EC2 instance to verify that the installation took place. Original credit to this post.