I have an AutoScaling group on Amazon AWS.
It works perfectly apart from one thing: the userdata bootstrap script is executed as the root user, while my pm2 process runs as the ubuntu user.
Even if I switch users in the script, it still says pm2: command not found.
Here is the output from /var/log/cloud-init-output.log:
Already up to date.
ubuntu
v8.10.0
bash: npm: command not found
From https://bitbucket.org/repo
* branch HEAD -> FETCH_HEAD
Already up to date.
bash: pm2: command not found
Even though the Node version in the AMI is 10.15.1, it prints 8.10.0.
Here is my bootstrap script:
#!/bin/bash
cd /pathtodirectory
git pull repo
cd ..
sudo touch yesiran
cd folder
su ubuntu bash -c "whoami"
su ubuntu bash -c "git config --global core.mergeoptions --no-edit"
su ubuntu bash -c "node -v"
su ubuntu bash -c "npm -v"
su ubuntu bash -c "pm2 reload all"
Related
I have this user data script in a launch template for EC2 instances:
#!/bin/bash
yum update -y
export HOME=/home/ec2-user
nodev='16.19.0'
nvmv='0.39.3'
gitrepo='https://github.com/ptv1p3r/etl-fuel-priceguide-ec2.git'
su - ec2-user -c "curl https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh | bash"
su - ec2-user -c "nvm install ${nodev}"
su - ec2-user -c "nvm use ${nodev}"
# install git
yum install git -y
# get repository clone
cd /home/ec2-user
su - ec2-user -c "git clone ${gitrepo}"
# install node modules
cd /home/ec2-user/etl-fuel-priceguide-ec2
su - ec2-user -c "npm install"
# start app
su - ec2-user -c "node index.js"
Everything works until I get to the npm install: instead of running inside the cloned folder, npm install keeps running in /home/ec2-user and then errors out because it can't find package.json.
I can't git clone directly into the /home/ec2-user folder because it isn't empty, and I simply can't change into the cloned folder to run npm install there. Please help.
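One hedged way to pin the working directory: since su - ec2-user starts a fresh login shell in /home/ec2-user, the cd has to happen inside the command string itself, for example:
# sketch only: cd inside the same -c string so it applies to npm and node
su - ec2-user -c "cd /home/ec2-user/etl-fuel-priceguide-ec2 && npm install"
su - ec2-user -c "cd /home/ec2-user/etl-fuel-priceguide-ec2 && node index.js"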
I am attempting to create a launch template in AWS with the following user data:
#!/bin/bash
home=/home/ec2-user
nodev='8.11.2'
nvmv='0.33.11'
#install node
su - ec2-user -c "curl
https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh | bash"
su - ec2-user -c "nvm install ${nodev}"
su - ec2-user -c "nvm use ${nodev}"
# install git
yum install git -y
#clone the code
cd /home/ec2-user
su - ec2-user -c "git clone https://github.com/xyz/xdf.git"
cd /home/ec2-user/xdf
#install dependencies
su - ec2-user -c "npm install"
echo "test" > test.txt
#install pm2
su - ec2-user -c "npm install pm2 -g"
#run the server
su - ec2-user -c "pm2 run index.js"
The script is executed and the repo is cloned, but the npm install command runs in /home/ec2-user rather than in /home/ec2-user/xdf. The test.txt file is created in the correct place, i.e. inside /home/ec2-user/xdf. How do I get npm install to run in /home/ec2-user/xdf? I tried running plain npm install instead of su - ec2-user -c "npm install", but it gives the same result.
First of all, user data runs with root permissions, so you don't need sudo or su there. If you want ec2-user to own a directory, simply run chown ec2-user:ec2-user /path/to/dir.
Next, when you run su - ec2-user -c ..., the command executes in /home/ec2-user (su - starts a login shell in the user's home directory), so the earlier cd /home/ec2-user/xdf has no effect on it.
Simply remove all the su calls from your script.
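As a rough sketch of that suggestion (an illustration only; it assumes node and npm end up on root's PATH, e.g. installed system-wide rather than per-user via nvm), the user data could end up looking like this:
#!/bin/bash
yum install git -y
cd /home/ec2-user
git clone https://github.com/xyz/xdf.git
cd /home/ec2-user/xdf                          # this cd now applies to the commands below
npm install
npm install pm2 -g
pm2 start index.js                             # pm2's subcommand is start, not run
chown -R ec2-user:ec2-user /home/ec2-user/xdf  # hand ownership to ec2-user, as noted above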
I have a Django application deployed to Amazon Elastic Beanstalk. I have to install Anaconda in order to install the pythonocc-core package. I created a .config file in the .ebextensions folder, added the Anaconda path to my wsgi.py file as shown below, and deployed it successfully.
.config file:
commands:
00_download_conda:
command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
test: test ! -d /anaconda
01_install_conda:
command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
test: test ! -d /anaconda
02_create_home:
command: 'mkdir -p /home/wsgi'
03_conda_activate_installation:
command: 'source ~/.bashrc'
wsgi.py:
sys.path.append('/anaconda/lib/python3.7/site-packages')
However, when I append the 04_conda_install_pythonocc command below to this .config file, I get a command failed error.
04_conda_install_pythonocc:
command: 'conda install -c dlr-sc pythonocc-core=7.4.0'
I SSHed into the instance to check. The /anaconda folder exists, but when I ran conda --version I got -bash: conda: command not found.
Afterwards, I suspected a PATH problem, so I edited the .config file as follows and deployed it successfully.
commands:
00_download_conda:
command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
test: test ! -d /anaconda
01_install_conda:
command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
test: test ! -d /anaconda
02_create_home:
command: 'mkdir -p /home/wsgi'
03_add_path:
command: 'export PATH=$PATH:$HOME/anaconda/bin'
04_conda_activate_installation:
command: 'source ~/.bashrc'
But when I add the conda_install_pythonocc command again to this edited version of the .config file, it fails again with command failed.
Run manually, all of the commands work, but they don't work from my .config file.
How can I fix this issue and install the package with conda?
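Worth noting: each commands: entry in .ebextensions runs in its own shell, so an export PATH=... in one command does not carry over to the next, and source ~/.bashrc doesn't help either, because the batch installer (-b) never adds conda's init lines to any .bashrc. A tiny illustration of the first point:
# each .ebextensions command behaves roughly like a separate, fresh shell:
bash -c 'export PATH=$PATH:/anaconda/bin'   # the PATH change dies with this shell
bash -c 'conda --version'                   # conda: command not found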
I tried to replicate the issue in my sandbox account, and I successfully installed conda using the following (simplified) config file on 64bit Amazon Linux 2 v3.0.3 running Python 3.7:
.ebextensions/60_anaconda.config
commands:
00_download_conda:
command: 'wget https://repo.anaconda.com/archive/Anaconda3-2020.02-Linux-x86_64.sh'
01_install_conda:
command: 'bash Anaconda3-2020.02-Linux-x86_64.sh -b -f -p /anaconda'
05_conda_install:
command: '/anaconda/bin/conda install -y -c dlr-sc pythonocc-core=7.4.0'
Note the use of the absolute path /anaconda/bin/conda and of -y so that no manual confirmation is requested. I only verified the installation procedure, not how to use it afterwards (e.g. how to call it from the Python application), so you will probably need to adjust it to your needs.
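To double-check after deployment, one can also SSH into the instance and call the absolute path directly (a quick sanity check, not part of the config above):
/anaconda/bin/conda --version
/anaconda/bin/conda list | grep pythonocc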
The EB log file showing successful installation is also provided for your reference (shortened for simplicity):
/var/log/cfn-init-cmd.log
I need to have LibreOffice installed on my web server. Since I'm using auto scaling with AWS Elastic Beanstalk, I need to install it on deployment. To do so, I am using .ebextensions files, but I can't get it to work. This is my config file in the .ebextensions folder:
commands:
01-download-libreoffice:
command: wget http://download.documentfoundation.org/libreoffice/stable/6.0.2/rpm/x86_64/LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
02-untar:
command: sudo tar -xvf LibreOffice_6.0.2_Linux_x86-64_rpm.tar.gz
03-install:
command: |
if [ ${APP_ENV} == "production" ]; then
cd LibreOffice_6.0.2.1_Linux_x86-64_rpm/RPMS
sudo yum localinstall *.rpm
fi
04-symlink:
command: sudo ln -fs /opt/libreoffice6.0/program/soffice /usr/bin/soffice
I tried running these commands myself on my EC2 instance one after another as the root user, and everything worked. The only thing I suspect: when I run the localinstall command, I need to confirm (there is a [y/n] prompt) before the installation starts.
If that were the problem, I think I would still find a zipped LibreOffice file on my server, or even the untarred LibreOffice files, but I can't find anything when I SSH into the EC2 instance after deployment.
There is no error message on deployment. Also, I can see that other .ebextensions scripts are running fine, since some processes are running as requested by those scripts.
Any idea where the problem could be?
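One way to see whether those commands ran at all, and what they printed, is to check the cfn-init and Elastic Beanstalk activity logs on the instance (standard log locations; the conda answer above also refers to /var/log/cfn-init-cmd.log):
# on the instance, after a deployment
grep -B 1 -A 5 "01-download-libreoffice" /var/log/cfn-init-cmd.log
less /var/log/eb-activity.log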
In case it helps, here is how I install LibreOffice on my EC2 instances on deployment. This installs LibreOffice 5.4 in /opt/libreoffice5.4.
The following code goes in the file .ebextensions/01-libreoffice-setup.config:
packages:
yum:
libXinerama.x86_64: []
cups-libs: []
dbus-glib: []
commands:
01-download-libreoffice:
command: wget http://download.documentfoundation.org/libreoffice/stable/5.4.6/rpm/x86_64/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
cwd: /tmp
test: "[ ! -f /tmp/LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz ]"
02-untar:
command: sudo tar -xvf LibreOffice_5.4.6_Linux_x86-64_rpm.tar.gz
cwd: /tmp
test: "[ ! -d /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm ]"
03-install:
command: sudo yum localinstall *.rpm -y
cwd: /tmp/LibreOffice_5.4.6.2_Linux_x86-64_rpm/RPMS
test: "[ ! -d /opt/libreoffice5.4 ]"
I have a Django app that I will need to deploy on Amazon's EC2 Container Service. In the meantime, to test the deployment, I am trying to deploy it in a Docker container locally first, but even when running a simple demo Django application, I am unable to see the page at localhost:8000.
Here is my setup.
Create a docker machine:
$ docker-machine create --driver virtualbox testmachine
After this I set up my environment:
$ eval "$(docker-machine env testmachine)"
I set up a Dockerfile for my test container:
FROM ubuntu
RUN echo "deb http://archive.ubuntu.com/ubuntu/ $(lsb_release -sc) main universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install python-pip -y
RUN pip install django
RUN mkdir django_test
RUN cd django_test && \
django-admin.py startproject django_test .
Then I call
$ docker build -t dockertest .
... builds successfully
$ docker run -d -i -t -p 8000:8000 dockertest
cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
$ docker attach cbef144ac068eb61b0c3e032448cc207c8f0384a9a67a710df6d9beb26d2ab32
root@cbef144ac068:/# cd django_test
root@cbef144ac068:/django_test# python manage.py runserver 0.0.0.0:8000
This successfully starts the server on 0.0.0.0:8000 inside the container.
However, when I try to go to localhost:8000 in my browser, I get a "This webpage is not available." What am I missing?
Turns out I was looking at the wrong IP.
To figure out the correct IP, I ran:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
testmachine * virtualbox Running tcp://192.168.99.100:2376
I then loaded 192.168.99.100:8000 in my browser, and it worked like a charm.
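An equivalent, script-friendly way to get that address (assuming the same machine name as above):
$ docker-machine ip testmachine
192.168.99.100
The app is then reachable at http://192.168.99.100:8000.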