Trigger Workflow Job with commands in Linux (Cloud-init) - amazon-web-services

I'm trying to deploy my GitHub repository to my EC2 instance with a Terraform and cloud-init file. For the GitHub part I use GitHub Actions, where I have already made a workflow file for Node.js that runs correctly. Every time I create a new instance with the Terraform file and cloud-init, the workflow should build again to create a _work folder, which is important for further processes. My question is: how can I trigger this workflow build with Linux commands in the YAML?
Cloud-init File:
#cloud-config
runcmd:
- apt-get update && apt-get install -y expect
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server { listen 80 default_server; server_name _; location / { root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build; try_files \$uri /index.html; } }" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build
I found this GitHub page, but I can't figure out how to do it:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
The trigger should go right after this command:
- yes "" | sudo apt install nginx

Related

Linux commands for login to Github CLI in command line

I want to log in to the gh CLI with parameters or flags. I'm setting up a cloud-init file with bash commands, so I can't interact with manual prompts.
This is how it looks if you do it manually in the console:
ubuntu@ip-172-31-54-112:~/react$ gh auth login
? What account do you want to log into? GitHub.com
? What is your preferred protocol for Git operations? HTTPS
? Authenticate Git with your GitHub credentials? Yes
? How would you like to authenticate GitHub CLI? Paste an authentication token
Tip: you can generate a Personal Access Token here https://github.com/settings/tokens
The minimum required scopes are 'repo', 'read:org', 'workflow'.
? Paste your authentication token: ****************************************
- gh config set -h github.com git_protocol https
✓ Configured git protocol
✓ Logged in as yuuval
I tried it with the line of code that is printed there:
- gh config set -h github.com git_protocol https
But that doesn't really log you into the GitHub CLI. This is my cloud-init file:
#cloud-config
keyboard:
  layout: ch
package_update: true
package_upgrade: true
packages:
- nginx
- git
package_reboot_if_required: true
runcmd:
- type -p curl >/dev/null || sudo apt install curl -y
- >
  curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
  && sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg
  && echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
  && sudo apt update
  && sudo apt install gh -y
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- # HERE COMES THE LOGIN PART + GH WORKFLOW RUN FOR THE NODE.JS.YML FILE
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server { listen 80 default_server; server_name _; location / { root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build; try_files \$uri /index.html; } }" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build
Does anyone know how to log in via the command line with no user input? And if you also know how to run the gh workflow run command and select the node.js.yml file without any user interaction, that would be nice.
Set an environment variable with your generated personal access token, e.g. PAT.
Pass it to the gh auth login command with the --with-token flag:
gh auth login --hostname github.com --with-token <<< $PAT
Reference:
https://cli.github.com/manual/gh_auth_login
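For the second part of the question: once authenticated, gh workflow run is scriptable too, provided the workflow has an on: workflow_dispatch trigger. A minimal sketch, assuming the workflow file is node.js.yml and the default branch is main (both assumptions):
gh workflow run node.js.yml --repo yuuval/react-deploy-aws --ref main
No prompts are shown, so it can go straight into runcmd after the gh auth login line.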

Install gdal geodjango on elastic beanstalk

What are the correct steps to install GeoDjango on Elastic Beanstalk?
I have an EB instance, set up the environment and scaled it to two instances. Now I want to use GeoDjango on it; I'm already using it on a separate EC2 instance for testing.
This is my django.config file, and it fails:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: hike.project.wsgi:application
commands:
  01_gdal:
    command: "wget http://download.osgeo.org/gdal/2.1.3/gdal-2.1.3.tar.gz && tar -xzf gdal-2.1.3.tar.gz && cd gdal-2.1.3 && ./configure && make && make install"
Then I tried this instead, and it also failed, pegging the CPU at 100% until it hit the time limit:
option_settings:
  aws:elasticbeanstalk:container:python:
    WSGIPath: hike.project.wsgi:application
commands:
  01_install_gdal:
    test: "[ ! -d /usr/local/gdal ]"
    command: "/tmp/gdal_install.sh"
files:
  "/tmp/gdal_install.sh":
    mode: "000755"
    owner: root
    group: root
    content: |
      #!/usr/bin/env bash
      sudo yum-config-manager --enable epel
      sudo yum -y install make automake gcc gcc-c++ libcurl-devel proj-devel geos-devel
      # GEOS
      sudo mkdir -p /usr/local/geos
      cd /usr/local/geos
      sudo wget -O geos-3.7.2.tar.bz2 http://download.osgeo.org/geos/geos-3.7.2.tar.bz2
      sudo tar -xvf geos-3.7.2.tar.bz2
      cd geos-3.7.2
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # Proj4
      sudo mkdir -p /usr/local/proj
      cd /usr/local/proj
      sudo wget -O proj-5.2.0.tar.gz http://download.osgeo.org/proj/proj-5.2.0.tar.gz
      sudo wget -O proj-datumgrid-1.8.tar.gz http://download.osgeo.org/proj/proj-datumgrid-1.8.tar.gz
      sudo tar xvf proj-5.2.0.tar.gz
      sudo tar xvf proj-datumgrid-1.8.tar.gz
      cd proj-5.2.0
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
      # GDAL
      sudo mkdir -p /usr/local/gdal
      cd /usr/local/gdal
      sudo wget -O gdal-2.4.4.tar.gz http://download.osgeo.org/gdal/2.4.4/gdal-2.4.4.tar.gz
      sudo tar xvf gdal-2.4.4.tar.gz
      cd gdal-2.4.4
      sudo ./configure
      sudo make
      sudo make install
      sudo ldconfig
and I have no idea what to do.
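One likely culprit, given the "100% CPU and time limit" failure: compiling GEOS, PROJ and GDAL from source routinely takes longer than Elastic Beanstalk's default 600-second command timeout. The timeout can be raised to its maximum of one hour in the same config file; a minimal sketch (the rest of the file stays as above):
option_settings:
  aws:elasticbeanstalk:command:
    Timeout: 3600
Keeping the test: "[ ! -d /usr/local/gdal ]" guard from the second attempt still matters, so the build runs only once per instance rather than on every deployment.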

Executing npm install in user data

I am attempting to create a launch template in AWS with the following user data:
#!/bin/bash
home=/home/ec2-user
nodev='8.11.2'
nvmv='0.33.11'
#install node
su - ec2-user -c "curl
https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh | bash"
su - ec2-user -c "nvm install ${nodev}"
su - ec2-user -c "nvm use ${nodev}"
# install git
yum install git -y
#clone the code
cd /home/ec2-user
su - ec2-user -c "git clone https://github.com/xyz/xdf.git"
cd /home/ec2-user/xdf
#install dependencies
su - ec2-user -c "npm install"
echo "test" > test.txt
#install pm2
su - ec2-user -c "npm install pm2 -g"
#run the server
su - ec2-user -c "pm2 run index.js"
The script is executed and the repo is cloned, but the npm install command runs in /home/ec2-user rather than in /home/ec2-user/xdf. The test.txt is created in the correct place, i.e. inside /home/ec2-user/xdf. How do I get npm install to run in /home/ec2-user/xdf? I tried running plain npm install instead of su - ec2-user -c "npm install", but it gives the same result.
First of all, user data runs with root permissions, so you don't need sudo or su there. If you want ec2-user to own that dir, simply execute chown -R ec2-user:ec2-user /path/to/dir afterwards.
Next, when you run su - ec2-user -c ..., the command starts a fresh login shell in /home/ec2-user, so the earlier cd /home/ec2-user/xdf has no effect on it.
Simply remove all the su calls from your script.
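A minimal sketch of the same user data with the su calls removed; it assumes the nvm installer puts nvm.sh under /home/ec2-user/.nvm (its default location):
#!/bin/bash
# everything runs as root in one shell, so cd persists between commands
export HOME=/home/ec2-user
nodev='8.11.2'
nvmv='0.33.11'
curl -o- "https://raw.githubusercontent.com/creationix/nvm/v${nvmv}/install.sh" | bash
. "$HOME/.nvm/nvm.sh"                        # source nvm into this shell
nvm install "${nodev}"
yum install git -y
cd /home/ec2-user
git clone https://github.com/xyz/xdf.git
cd /home/ec2-user/xdf                        # npm install now runs in the repo
npm install
echo "test" > test.txt
npm install pm2 -g
pm2 start index.js
chown -R ec2-user:ec2-user /home/ec2-user    # hand ownership back to ec2-user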

ECS Docker container won't start

I have a Docker container with this Dockerfile:
FROM node:8.1
RUN rm -fR /var/lib/apt/lists/*
RUN echo "deb http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee /etc/apt/sources.list.d/webupd8team-java.list
RUN echo "deb-src http://ppa.launchpad.net/webupd8team/java/ubuntu trusty main" | tee -a /etc/apt/sources.list.d/webupd8team-java.list
RUN apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys EEA14886
RUN apt-get update
RUN echo debconf shared/accepted-oracle-license-v1-1 select true | \
debconf-set-selections
RUN echo debconf shared/accepted-oracle-license-v1-1 seen true | \
debconf-set-selections
RUN apt-get install -y oracle-java8-installer
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/
RUN npm install
# Bundle app source
COPY . /app
# Environment Variables
ENV PORT 8080
# start the SSH daemon service
RUN service ssh start
# create a non-root user & a home directory for them
RUN useradd --create-home --shell /bin/bash tunnel-user
# set their password
RUN echo 'tunnel-user:93wcBjsp' | chpasswd
# Copy the SSH key to authorized_keys
COPY tunnel.pub /app/
RUN mkdir -p /home/tunnel-user/.ssh
RUN cat tunnel.pub >> /home/tunnel-user/.ssh/authorized_keys
# Set permissions
RUN chown -R tunnel-user:tunnel-user /home/tunnel-user/.ssh
RUN chmod 0700 /home/tunnel-user/.ssh
RUN chmod 0600 /home/tunnel-user/.ssh/authorized_keys
# allow the tunnel-user to SSH into this machine
RUN echo 'AllowUsers tunnel-user' >> /etc/ssh/sshd_config
EXPOSE 8080
EXPOSE 22
CMD [ "npm", "start" ]
My ECS task has this definition. I'm using a role which has AmazonEC2ContainerServiceforEC2Role.
When I try to start it as a task in my ECS cluster I get this error:
CannotStartContainerError: API error (500): driver failed programming external connectivity on endpoint ecs-ssh-4-ssh-8cc68dbfaa8edbdc0500 (387e024a87752293f51e5b62de9e2b26102d735e8da500c8e7fa5e1b4b4f0983): Error starting userland proxy: listen tcp 0.0.0
How do I fix this?
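The "Error starting userland proxy: listen tcp ..." message from Docker generally means the host port the task wants is already bound, and because this container maps port 22, the instance's own sshd is the usual suspect. A hedged sketch of the relevant portMappings fragment of the task definition (hostPort 2222 is an arbitrary free port, not something from the question):
"portMappings": [
  { "containerPort": 8080, "hostPort": 8080, "protocol": "tcp" },
  { "containerPort": 22, "hostPort": 2222, "protocol": "tcp" }
]
With bridge networking you can instead set hostPort to 0 and let ECS assign a free ephemeral port.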

Installing CPhalcon on an AWS Docker image

I have a Dockerfile that installs Phalcon into a Docker image. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error: PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows all of the phalcon files installed, but on AWS I don't get any info about CPhalcon. How could the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?
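A hedged way to narrow this down (plain Docker commands on the Beanstalk host; the extension-directory glob is an assumption about the php5 layout on this base image): check whether phalcon.so was actually built and loaded in the container EB produced, since a failed git clone or ./install during the remote build would leave the extension missing while the image still starts.
docker ps                                                      # find the running container's ID
docker exec -it <container-id> php -m | grep -i phalcon        # is the extension loaded for this SAPI?
docker exec -it <container-id> ls /usr/lib/php5/*/phalcon.so   # was the .so built at all?
If the module is absent, the Elastic Beanstalk build log (eb logs, or /var/log/eb-activity.log on the instance) should show where the clone or compile step failed.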