Linux commands for login to GitHub CLI in command line - amazon-web-services

I want to log in to the gh CLI with parameters or flags. I'm setting up a cloud-init file with bash commands, so I can't interact with manual prompts.
This is how it looks if you do it manually in the console:
ubuntu@ip-172-31-54-112:~/react$ gh auth login
? What account do you want to log into? GitHub.com
? What is your preferred protocol for Git operations? HTTPS
? Authenticate Git with your GitHub credentials? Yes
? How would you like to authenticate GitHub CLI? Paste an authentication token
Tip: you can generate a Personal Access Token here https://github.com/settings/tokens
The minimum required scopes are 'repo', 'read:org', 'workflow'.
? Paste your authentication token: ****************************************
- gh config set -h github.com git_protocol https
✓ Configured git protocol
✓ Logged in as yuuval
I tried it with this line, which is printed in the output above:
- gh config set -h github.com git_protocol https
But it doesn't actually log you into the GitHub CLI. This is my cloud-init file:
#cloud-config
keyboard:
  layout: ch
package_update: true
package_upgrade: true
packages:
- nginx
- git
package_reboot_if_required: true
runcmd:
- type -p curl >/dev/null || sudo apt install curl -y
- curl -fsSL https://cli.github.com/packages/githubcli-archive-keyring.gpg | sudo dd of=/usr/share/keyrings/githubcli-archive-keyring.gpg
  && sudo chmod go+r /usr/share/keyrings/githubcli-archive-keyring.gpg
  && echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/githubcli-archive-keyring.gpg] https://cli.github.com/packages stable main" | sudo tee /etc/apt/sources.list.d/github-cli.list > /dev/null
  && sudo apt update
  && sudo apt install gh -y
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
# HERE COMES THE LOGIN PART + gh workflow run FOR THE node.js.yml FILE
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {listen 80 default_server;server_name _;location / {root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build;try_files \$uri /index.html;}}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/build
Does anyone know how to log in via the command line with no user input? And if you know how to do the same with the gh workflow run command, selecting the node.js.yml file without any user interaction, that would be nice too.

Set an environment variable with your generated Personal Access Token, e.g. PAT, and pass it to the gh auth login command via the --with-token flag:
gh auth login --hostname github.com --with-token <<< $PAT
Reference:
https://cli.github.com/manual/gh_auth_login
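For the cloud-init use case specifically, the whole flow can be scripted in runcmd. A minimal sketch, assuming a PAT with the repo, read:org and workflow scopes (the placeholder below is hypothetical; in practice you would inject the token rather than hard-code it) and assuming the workflow declares a workflow_dispatch trigger so it can be started on demand:

- echo "<YOUR_PAT>" | gh auth login --hostname github.com --with-token
- gh workflow run node.js.yml --repo yuuval/react-deploy-aws

gh workflow run with an explicit workflow file name and --repo prompts for nothing, which also covers the second half of the question.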

Related

Trigger Workflow Job with commands in Linux (Cloud-init)

I'm trying to deploy my GitHub repository on my EC2 instance with a Terraform and cloud-init file. For the GitHub part I use GitHub Actions, where I have already made a workflow file for Node.js, and it runs correctly. Every time I create a new instance with the Terraform file and cloud-init, the workflow should build again to create a _work folder, which is needed for further processing. My question is: how can I trigger this workflow build with Linux commands in the YAML?
Cloud-init File:
#cloud-config
runcmd:
- apt-get update && apt-get install -y expect
- mkdir react
- cd react
- curl -o actions-runner-linux-x64-2.301.1.tar.gz -L https://github.com/actions/runner/releases/download/v2.301.1/actions-runner-linux-x64-2.301.1.tar.gz
- tar xzf ./actions-runner-linux-x64-2.301.1.tar.gz
- yes "" | ./config.sh --url https://github.com/yuuval/react-deploy-aws --token AVYXWHVAXX2TB4J63XBJCIDDYB6TA
- sudo ./svc.sh install
- sudo ./svc.sh start
- yes "" | sudo apt install nginx
- cd _work
- cd react-deploy-aws
- cd react-deploy-aws
- cd /etc/nginx/sites-available
- sudo rm default
- echo "server {listen 80 default_server;server_name _;location / {root /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws/build;try_files \$uri /index.html;}}" | sudo tee /etc/nginx/sites-available/default
- sudo service nginx restart
- sudo chmod +x /home
- sudo chmod +x /home/ubuntu
- sudo chmod +x /home/ubuntu/react
- sudo chmod +x /home/ubuntu/react/_work
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/react-deploy-aws
- sudo chmod +x /home/ubuntu/react/_work/react-deploy-aws/build
I found this GitHub page, but I can't figure out how to do it:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions
The code should go right after this command:
- yes "" | sudo apt install nginx

GitHub Actions Self-hosted Runner sibling container: permission denied... Docker daemon socket unix:///var/run/docker.sock

I need to execute on GPU hardware so I have to create a self-hosted runner for github actions to execute my code. The self-hosted runner is hosted on my local machine (ubuntu 20.04).
I'm running the self-hosted runner container locally with -v, binding the socket using: docker run -it --rm -v /var/run/docker.sock:/var/run/docker.sock -e GITHUB_OWNER=<xxx> -e GITHUB_REPOSITORY=<xxxx> -e GITHUB_PAT=<xxxx>
This local self-hosted runner executes successfully until I try to build the second "project" container I need for my project code. I get a permission error on the Docker socket when I try to build (not run) that container. I'm about 70% certain that the -v bind mount used when running the self-hosted runner enables sibling containers rather than Docker-in-Docker (which I've read isn't recommended anymore).
Permission error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post "http://%2Fvar%2Frun%2Fdocker.sock/v1.24/build?buildargs=%7B%7D&cachefrom=%5B%5D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=1&shmsize=0&target=&ulimits=null&version=1": dial unix /var/run/docker.sock: connect: permission denied
I've tried building the project container with -v /var/run/docker.sock:/var/run/docker.sock in the docker build command, but docker build doesn't accept -v. I've also tried the following approaches in the "project" Docker container:
Approach 1.
useradd -m cnncontainer && \
usermod -aG sudo cnncontainer && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
curl -sSL https://get.docker.com/ | sh
usermod -aG docker cnncontainer
Approach 2.
sudo groupadd docker && \
sudo usermod -aG docker "$USER" &&\
newgrp docker
docker run hello-world
Approach 3.
sudo usermod -aG docker $USER
sudo setfacl --modify user:$USER:rw /var/run/docker.sock
docker run hello-world
GitHub actions self-hosted runner Dockerfile:
FROM debian:buster
#tensorflow/tensorflow:2.3.4-gpu - this image doesn't work either
ARG RUNNER_VERSION="2.298.2"
ENV GITHUB_PERSONAL_TOKEN ""
ENV GITHUB_OWNER ""
ENV GITHUB_REPOSITORY ""
RUN apt-get update \
&& apt-get install -y \
curl \
sudo \
git \
jq \
tar \
gnupg2 \
apt-transport-https \
ca-certificates \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN useradd -m github && \
usermod -aG sudo github && \
echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
#setup docker runner
RUN curl -sSL https://get.docker.com/ | sh
RUN usermod -aG docker github
USER github
WORKDIR /home/github
#install github actions cli
RUN curl -O -L https://github.com/actions/runner/releases/download/v$RUNNER_VERSION/actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN tar xzf ./actions-runner-linux-x64-$RUNNER_VERSION.tar.gz
RUN sudo ./bin/installdependencies.sh
COPY --chown=github:github entrypoint.sh ./entrypoint.sh
RUN sudo chmod u+x ./entrypoint.sh
ENTRYPOINT ["/home/github/entrypoint.sh"]
Self-hosted runner entrypoint.sh:
#!/bin/sh
registration_url="https://api.github.com/repos/${GITHUB_OWNER}/${GITHUB_REPOSITORY}/actions/runners/registration-token"
echo "Requesting registration URL at '${registration_url}'"
payload=$(curl -sX POST -H "Authorization: token ${GITHUB_PAT}" ${registration_url})
export RUNNER_TOKEN=$(echo $payload | jq .token --raw-output)
./config.sh \
--name $(hostname) \
--token ${RUNNER_TOKEN} \
--url https://github.com/${GITHUB_OWNER}/${GITHUB_REPOSITORY} \
--work ${RUNNER_WORKDIR} \
--unattended \
--replace
remove() {
./config.sh remove --unattended --token "${RUNNER_TOKEN}"
}
trap 'remove; exit 130' INT
trap 'remove; exit 143' TERM
./run.sh "$*" & #changed from run.sh
### BEGIN
sudo systemctl start docker
sudo systemctl enable docker
export RUNNER_ALLOW_RUNASROOT=true
export AGENT_TOOLSDIRECTORY=/opt/hostedtoolcache
mkdir actions-runner
sudo mkdir /opt/hostedtoolcache
cd actions-runner
# Make /actions-runner/_work
mkdir _work
# Link /opt/hostedtoolcache as /actions-runner/_work/_tool
ln -s /opt/hostedtoolcache _work/_tool
### END
wait $!
Dockerfile I want to run in/with the self-hosted runner
FROM tensorflow/tensorflow:2.3.4-gpu
RUN mkdir -p /app
COPY . main.py /app/
WORKDIR /app
RUN sudo apt install -y make && sudo apt-get install python3-pip -y
RUN pip install -r requirements.txt
RUN sudo usermod -aG docker $USER
RUN sudo setfacl --modify user:$USER:rw /var/run/docker.sock
RUN docker run hello-world
CMD [ "main.py" ]
ENTRYPOINT [ "python" ]

Gitlab returns Permission denied (publickey,password) for digitalocean server

I'm trying to implement CD for my dockerized Django application on the DigitalOcean droplet.
Here's my .gitlab-ci.yml:
image:
  name: docker/compose:1.29.1
  entrypoint: [""]
services:
  - docker:dind
stages:
  - build
  - deploy
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
before_script:
  - export IMAGE=$CI_REGISTRY/$CI_PROJECT_NAMESPACE/$CI_PROJECT_NAME
  - export WEB_IMAGE=$IMAGE/web:web
  - export NGINX_IMAGE=$IMAGE/nginx:nginx
  - apk add --no-cache openssh-client bash
  - chmod +x ./setup_env.sh
  - bash ./setup_env.sh
  - docker login -u $CI_REGISTRY_USER -p $CI_JOB_TOKEN $CI_REGISTRY
build:
  stage: build
  script:
    - docker pull $IMAGE/web:web || true
    - docker pull $IMAGE/web:nginx || true
    - docker-compose -f docker-compose.prod.yml build
    - docker push $IMAGE/web:web
    - docker push $IMAGE/nginx:nginx
deploy:
  stage: deploy
  script:
    - mkdir -p ~/.ssh
    - echo "$PRIVATE_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - cat ~/.ssh/id_rsa
    - chmod 700 ~/.ssh/id_rsa
    - eval "$(ssh-agent -s)"
    - ssh-add ~/.ssh/id_rsa
    - ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
    - chmod +x ./deploy.sh
    - scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
    - bash ./deploy.sh
  only:
    - master
I have copied my public key to the production server (DO droplet).
The build job succeeds, but the deploy stage fails with the following error:
$ chmod 700 ~/.ssh/id_rsa
$ eval "$(ssh-agent -s)"
Agent pid 26
$ ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (abdul12391@gmail.com)
$ ssh-keyscan -H 'gitlab.com' >> ~/.ssh/known_hosts
$ chmod +x ./deploy.sh
$ scp -o StrictHostKeyChecking=no -r ./.env ./docker-compose.prod.yml root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Warning: Permanently added '143.198.103.99' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
root#143.198.103.99: Permission denied (publickey,password).
lost connection
Cleaning up project directory and file based variables
00:01
ERROR: Job failed: exit code 1
The official process is "How to Upload an SSH Public Key to an Existing Droplet", and it usually involves a regular username, not root.
While your pipeline might be executed as root (as the "Identity added: /root/.ssh/id_rsa" message suggests), your scp should use a DO remote user, not the remote DO root account: the same account username whose remote ~/.ssh/authorized_keys you added the public key to.
So:
username@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
# not
root@$DO_PUBLIC_IP_ADDRESS:/Pythonist.org
Try the following on the DigitalOcean server:
cat ~/.ssh/id_rsa.pub
and copy the public key into the authorized keys file:
nano ~/.ssh/authorized_keys
then change the permissions:
chmod 600 ~/.ssh/authorized_keys
chmod 600 ~/.ssh/id_rsa
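Once the key is in authorized_keys, a quick sanity check from the runner side, before the scp, can save a pipeline round-trip. A sketch (username stands for whichever droplet account actually holds the key):

ssh -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no username@$DO_PUBLIC_IP_ADDRESS 'echo connected as $(whoami)'

If that prints the expected account, point the scp at the same username@$DO_PUBLIC_IP_ADDRESS target.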

Installing gcloud on Travis CI

I'm following this tutorial on how to use Travis CI with Google Cloud for Continuous Deployments:
https://cloud.google.com/solutions/continuous-delivery-with-travis-ci
When Travis builds, it tells me that the gcloud command is not found. Here's my .travis.yml file:
sudo: false
language: python
cache:
  directories:
    - "$HOME/google-cloud-sdk/"
env:
  - GAE_PYTHONPATH=${HOME}/.cache/google_appengine PATH=$PATH:${HOME}/google-cloud-sdk/bin PYTHONPATH=${PYTHONPATH}:${GAE_PYTHONPATH} CLOUDSDK_CORE_DISABLE_PROMPTS=1
before_install:
  - openssl aes-256-cbc -K $encrypted_404aa45a170f_key -iv $encrypted_404aa45a170f_iv -in credentials.tar.gz.enc -out credentials.tar.gz -d
  - if [ ! -d "${GAE_PYTHONPATH}" ]; then python scripts/fetch_gae_sdk.py $(dirname "${GAE_PYTHONPATH}"); fi
  - if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
  - tar -xzf credentials.tar.gz
  - mkdir -p lib
  - gcloud auth activate-service-account --key-file client-secret.json
install:
  - gcloud config set project continuous-deployment-192112
  - gcloud -q components update gae-python
  - pip install -r requirements.txt -t lib/
script:
  - python test_main.py
  - gcloud -q preview app deploy app.yaml --promote
  - python e2e_test.py
This is the same file provided by the example repository from the tutorial. The line that fails is:
- gcloud auth activate-service-account --key-file client-secret.json
Even though the config already checks for the SDK and installs it if it isn't there.
I've already tried adding - source ~/.bash_profile after the install, but this doesn't work.
Am I missing a command somewhere?
I ran into the same issue and this has worked for me:
- if [ ! -d "$HOME/google-cloud-sdk" ]; then
    export CLOUD_SDK_REPO="cloud-sdk-$(lsb_release -c -s)";
    echo "deb http://packages.cloud.google.com/apt $CLOUD_SDK_REPO main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list;
    curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - ;
    sudo apt-get update && sudo apt-get install google-cloud-sdk;
  fi
The only issue, however, is that since it needs sudo, the build will run on GCE, which is much slower than EC2:
https://docs.travis-ci.com/user/reference/overview/#Virtualisation-Environment-vs-Operating-System
Updated:
This is the best solution -
How to install Google Cloud SDK on Travis?
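If you keep the curl-based installer from the original config instead, an alternative worth noting (based on the assumption that the failure is a PATH problem: the installer only adds gcloud to ~/.bashrc, which Travis' non-login shell never sources) is to source the SDK's own path script before the first gcloud call:

before_install:
  - if [ ! -d ${HOME}/google-cloud-sdk ]; then curl https://sdk.cloud.google.com | bash; fi
  - source ${HOME}/google-cloud-sdk/path.bash.inc
  - gcloud auth activate-service-account --key-file client-secret.json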

Installing CPhalcon on an AWS Docker image

I have a docker image that installs phalcon onto a Docker image. Here is the Dockerfile:
FROM ubuntu:trusty
MAINTAINER Fernando Mayo <fernando@tutum.co>, Feng Honglin <hfeng@tutum.co>
# Install packages
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && \
sudo apt-get -y install supervisor php5-dev libpcre3-dev gcc make php5-mysql git curl unzip apache2 libapache2-mod-php5 mysql-server php5-mysql pwgen php-apc php5-mcrypt php5-curl && \
echo "ServerName localhost" >> /etc/apache2/apache2.conf
# Add image configuration and scripts
ADD start-apache2.sh /start-apache2.sh
ADD start-mysqld.sh /start-mysqld.sh
ADD run.sh /run.sh
RUN chmod 755 /*.sh
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD supervisord-apache2.conf /etc/supervisor/conf.d/supervisord-apache2.conf
ADD supervisord-mysqld.conf /etc/supervisor/conf.d/supervisord-mysqld.conf
ADD php.ini /etc/php5/cli/php.ini
ADD 000-default.conf /etc/apache2/sites-available/000-default.conf
ADD 30-phalcon.ini /etc/php5/apache2/conf.d/30-phalcon.ini
ADD 30-phalcon.ini /etc/php5/cli/conf.d/30-phalcon.ini
#RUN rm -rd /var/www/html/*
#RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /var/www/html/cphalcon
#RUN chmod 755 /var/www/html/cphalcon/build/install
#CMD["/var/www/html/cphalcon/build/install"]
RUN git clone --depth=1 git://github.com/phalcon/cphalcon.git /usr/local/src/cphalcon
RUN cd /usr/local/src/cphalcon/build && ./install ;\
echo "extension=phalcon.so" > /etc/php5/mods-available/phalcon.ini ;\
php5enmod phalcon
RUN sudo service apache2 stop
RUN sudo service apache2 start
# Remove pre-installed database
RUN rm -rf /var/lib/mysql/*
# Add MySQL utils
ADD create_mysql_admin_user.sh /create_mysql_admin_user.sh
RUN chmod 755 /*.sh
# config to enable .htaccess
RUN a2enmod rewrite
# Copy over private key, and set permissions
ADD .ssh /root/.ssh
# Get aws stuff
RUN curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
RUN unzip awscli-bundle.zip
RUN ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
RUN rm -rd /var/www/html/*
RUN git clone ssh://git-codecommit.us-east-1.amazonaws.com/v1/repos/Demo-Server /var/www/html
#Environment variables to configure php
ENV PHP_UPLOAD_MAX_FILESIZE 10M
ENV PHP_POST_MAX_SIZE 10M
# Add volumes for MySQL
VOLUME ["/etc/mysql", "/var/lib/mysql" ]
EXPOSE 80 3306
CMD ["/run.sh"]
When I run this Docker image locally it works fine, but when I run it on Elastic Beanstalk I get the error: PHP Fatal error: Class 'Phalcon\Loader' not found. To debug this I checked phpinfo() both locally and on the AWS server. Locally it shows all of the phalcon files installed, but on AWS I don't get any info about CPhalcon. How could the Docker image install Phalcon correctly when running on my local machine but not on Elastic Beanstalk?
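A first debugging step, assuming you can SSH into the Elastic Beanstalk instance: check whether the extension actually exists inside the running container there, which separates a build problem from a configuration problem. A sketch (the container ID is a placeholder):

docker ps
docker exec -it <container-id> php -m | grep -i phalcon
docker exec -it <container-id> ls /etc/php5/apache2/conf.d/

If php -m doesn't list phalcon, the cphalcon build step failed on that platform and its output in the EB deploy logs is worth re-reading; if the module loads for the CLI but not for Apache, the 30-phalcon.ini wiring for the Apache SAPI is the place to look.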