How to make a GitLab runner access an EC2 instance to deploy?

I created a script to deploy, but every time it throws this error:
"Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed."
My .gitlab-ci.yml:
make_deploy:
  stage: deploy
  script:
    - apk update
    - apk add bash
    - apk add git
    - apk add openssh
    - bash scripts/deploy.sh
    - echo "Deploy succeeded!"
  only:
    - master
deploy.sh:
#!/bin/bash
user=gitlab+deploy-token-44444
pass=passwordpass
gitlab="https://"$user":"$pass"@gitlab.com/repo/project.git"
ssh-keygen -R 50-200-50-15
chmod 600 key.pem
ssh -tt -i key.pem ubuntu@ec2-50-200-50-15.compute-1.amazonaws.com << 'ENDSSH'
rm -rf project
git clone $gitlab
cd project
npm i
pm2 restart .
ENDSSH
exit

You need to change your authentication type: instead of using a username & password, use SSH key exchange.
This way, your script will not be prompted for username & password input.
But before you do that, you should first create an SSH key pair and upload the public key to your repository settings; it will serve as your primary authentication between the instance and the GitLab server.
More info here.
Test your connection.
ssh -T git@gitlab.com
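For example, a minimal sketch of wiring this up (the key file name is hypothetical; the host names come from the question), which also pre-trusts the hosts so the non-interactive ssh in the CI job does not fail with "Host key verification failed":
# Generate a key pair locally and add the public key as a deploy key in the
# GitLab project settings (hypothetical file name).
ssh-keygen -t ed25519 -N "" -f gitlab_deploy_key
# In the CI job, before running deploy.sh: trust the hosts up front.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
ssh-keyscan ec2-50-200-50-15.compute-1.amazonaws.com >> ~/.ssh/known_hosts
# Verify the GitLab side of the key exchange.
ssh -T git@gitlab.com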

Can't find correct syntax to forward SSH keys

I'm trying to build a custom container with Buildah via a Dockerfile that will run some tasks in Celery, but the tasks need access to a library available in a private repository on our local Gitlab instance. It works if I copy the library from a directory I cloned locally, but it would be best if I could just clone a copy to the container in the Dockerfile. However, I can't get the git clone to work inside the Dockerfile when trying to build it in Buildah. It doesn't seem to be able to read my SSH keys, which are stored on the host at ~/.ssh/id_rsa. I'm trying to follow this from the Buildah man page:
--ssh=default|id[=socket>|<key>[,<key>]]
SSH agent socket or keys to expose to the build. The socket path can be left empty to use the
value of default=$SSH_AUTH_SOCK
To later use the ssh agent, use the --mount flag in a RUN instruction within a Containerfile:
RUN --mount=type=secret,id=id mycmd
So in my Dockerfile:
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan -t ed25519 gitlab.mycompany.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
And when I try to build it in Buildah:
buildah build --ssh=default -f celery/Dockerfile -t celery
And the error when Buildah gets to the step where it's trying to clone the git repository:
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
error building at STEP "RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library": error while running runtime: exit status 128
Finished
git clones work correctly using my default SSH keys on my host, but whatever I'm doing to access the keys when building the Dockerfile in Buildah isn't working correctly. What do I need to change to get Buildah to use the SSH keys?
PS Buildah version, on RHEL8:
$ buildah -v
buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
EDIT: So I figured out how to get it to work via the --secret flag. Dockerfile:
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
Command line:
buildah build --secret id=id_rsa,src=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
This works, although only once. When I try to run this command next in the Dockerfile:
WORKDIR /opt/library
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git fetch --all --tags --prune
I get the following error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0755 for '/run/secrets/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/run/secrets/id_rsa": bad permissions
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Looks like I'll have to figure out how to set permissions on the secret file. But I still have no idea how to get the --ssh flag to work correctly, which should be easier than doing all this stuff with the secret file.
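One hedged workaround (not from the original post, just a sketch) is to copy the mounted secret to a private file inside the same RUN step, since the mounted path shows up with open permissions here; the /tmp path is an assumption:
# Sketch: copy the world-readable mounted secret to a 0600 file, use it for the
# git command, then remove it within the same layer.
RUN --mount=type=secret,id=id_rsa \
    cp /run/secrets/id_rsa /tmp/id_rsa && chmod 600 /tmp/id_rsa && \
    GIT_SSH_COMMAND="ssh -i /tmp/id_rsa" git fetch --all --tags --prune && \
    rm -f /tmp/id_rsa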
EDIT 2: And here is how I managed to run multiple commands that contact the private Gitlab repository - Dockerfile:
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa"
RUN --mount=type=secret,id=id_rsa git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
Still not as convenient as figuring out the correct syntax for the --ssh flag, but it works.
I eventually figured out how to format this to get the --ssh flag to work. Although I've now updated to version 1.27.2, so maybe it was a bug fix.
$ buildah -v
buildah version 1.27.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
But here is how I formatted the buildah command:
buildah build --ssh id=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
And here is the git fetch line in the Dockerfile:
RUN --mount=type=ssh,id=id git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
I don't know why --ssh=default doesn't automatically pull ~/.ssh/id_rsa, but manually specifying that file in this way works.
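A possible explanation (my assumption, not something confirmed in the post) is that --ssh=default forwards the SSH agent socket ($SSH_AUTH_SOCK, as the man page excerpt above suggests) rather than reading key files directly, so the default form would only help if the key is already loaded into a running agent, roughly like this:
# Sketch: load the key into an ssh-agent so --ssh=default has something to forward.
eval "$(ssh-agent -s)"
ssh-add /home/wile_e8/.ssh/id_rsa
buildah build --ssh=default -f celery/Dockerfile -t celery
# The Dockerfile line stays as: RUN --mount=type=ssh git clone ...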

Running a script to upload a file to AWS S3 works, but running the same script via a Jenkins job doesn't work

The simple goal:
I would like to have two containers running on my local machine: one Jenkins container and one SSH server container. Then a Jenkins job can connect to the SSH server container and execute an aws command to upload a file to S3.
My workspace directory structure:
a docker-compose.yml (details below)
a directory named centos7/
Inside centos7/ I have a Dockerfile for building the SSH server image.
The docker-compose.yml:
In my docker-compose.yml I declared the two containers (services):
one Jenkins container, named jenkins,
one SSH server container, named remote_host.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote_host
    image: remote-host
    build:
      context: centos7
    networks:
      - net
networks:
  net:
The Dockerfile for the remote_host is like this (Notice the last RUN installs the AWS CLI):
FROM centos
RUN yum -y install openssh-server
RUN useradd remote_user && \
echo remote_user:1234 | chpasswd && \
mkdir /home/remote_user/.ssh && \
chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user/.ssh/ && \
chmod 600 /home/remote_user/.ssh/authorized_keys
RUN ssh-keygen -A
RUN rm -rf /run/nologin
RUN yum -y install unzip
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip awscliv2.zip && ./aws/install
Current situation with the above setup:
I run docker-compose build and docker-compose up. Both the jenkins container and the remote_host (SSH server) container are up and running successfully.
I can go inside the jenkins container with:
$ docker exec -it jenkins bash
jenkins@7551f2fa441d:/$
I can successfully ssh to the remote_host container with:
jenkins@7551f2fa441d:/$ ssh -i /tmp/remote-key remote_user@remote_host
Warning: the ECDSA host key for 'remote_host' differs from the key for the IP address '172.19.0.2'
Offending key for IP in /var/jenkins_home/.ssh/known_hosts:1
Matching host key in /var/jenkins_home/.ssh/known_hosts:2
Are you sure you want to continue connecting (yes/no)? yes
[remote_user@8c203bbdcf72 ~]$
Inside the remote_host container, I have also configured my AWS access key and secret key under ~/.aws/credentials:
[default]
aws_access_key_id=AKIAIOSFODNN7EXAMPLE
aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
I can successfully run aws command to upload a file from remote_host container to my AWS S3 bucket. Like:
[remote_user@8c203bbdcf72 ~]$ aws s3 cp myfile s3://mybucket123asx/myfile
What the issue is
Now, I would like my Jenkins job to execute the aws command to upload a file to S3. So I created a shell script inside my remote_host container; the script is like this:
#!/bin/bash
BUCKET_NAME=$1
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
In Jenkins, I have configured the SSH connection, and the Jenkins job configuration simply runs the script located in the remote_host container.
When I build the Jenkins job, I always get this error in the console: upload failed: ../../tmp/myfile to s3://mybucket123asx/myfile Unable to locate credentials.
Why does the same s3 command work when executed in the remote_host container but not when run from the Jenkins job?
I also tried explicitly exporting the AWS key ID & secret key in the script (bear in mind that I have ~/.aws/credentials configured in remote_host, which works without explicitly exporting the AWS secret key):
#!/bin/bash
BUCKET_NAME=$1
export aws_access_key_id=AKAARXL1CFQNN4UV5TIO
export aws_secret_access_key=MY_SECRETE_KEY
aws s3 cp /tmp/myfile s3://$BUCKET_NAME/myfile
OK, I solved my issue by changing the exported variable names to upper case. The cause of the issue is that when Jenkins runs the script, it runs as remote_user on remote_host. Although I have ~/.aws/credentials set up on remote_host, that file belongs to root and can't be read by remote_user:
[root@8c203bbdcf72 /]# ls -l ~/.aws/
total 4
-rw-r--r-- 1 root root 112 Sep 25 19:14 credentials
That's why Jenkins got the Unable to locate credentials failure when running the script to upload the file to S3: the credentials file can't be read by remote_user. So I still have to keep the lines that export the AWS key ID and secret key. @Marcin's comment was helpful: the variable names need to be capital letters, otherwise it will not work.
So, overall, what I did to fix the issue was to update my script with:
export AWS_ACCESS_KEY_ID=AKAARXL1CFQNN4UV5TIO
export AWS_SECRET_ACCESS_KEY=MY_SECRET_KEY
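An alternative that follows from the diagnosis above (a sketch, not part of the original answer; the /root/.aws path is an assumption) is to give remote_user its own readable credentials file instead of exporting keys in the script:
# Run once on remote_host as root: copy the credentials into remote_user's home
# and hand over ownership, so the AWS CLI finds them without any exports.
mkdir -p /home/remote_user/.aws
cp /root/.aws/credentials /home/remote_user/.aws/credentials
chown -R remote_user:remote_user /home/remote_user/.aws
chmod 600 /home/remote_user/.aws/credentials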

How to create an additional EC2 user in a Linux AMI via UserData with ssh permission

Problem statement: create an additional user, pretty much the same as what is explained here; the only difference is that instead of generating a new key pair I am using the same key pair that is used for ec2-user.
If I log into the EC2 instance and run the following commands manually, it works without any issue and I am able to ssh with the same key as test-user:
sudo adduser test-user
sudo su - test-user
mkdir .ssh
chmod 700 .ssh
cd .ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> authorized_keys
chmod 600 authorized_keys
But if I put the same instructions in the instance's user data section to run on boot, it only creates test-user and doesn't perform the rest of the steps. I also can't find much detail in /var/log/cloud-init-output.log.
#!/bin/bash
sudo adduser test-user
sudo su - test-user
mkdir .ssh
chmod 700 .ssh
cd .ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key >> authorized_keys
chmod 600 authorized_keys
First, make sure cloud-init is installed on your instance:
sudo yum install cloud-init
Stop the instance (don't terminate it).
Update the user data with the following script (make sure to replace <YOUR-PUBLIC-SSH-KEY> with your key, e.g. ssh-rsa abc123...):
#cloud-config
cloud_final_modules:
  - [users-groups, always]
users:
  - name: username
    groups: [ wheel ]
    sudo: [ "ALL=(ALL) NOPASSWD:ALL" ]
    shell: /bin/bash
    ssh-authorized-keys:
      - <YOUR-PUBLIC-SSH-KEY>
Start your instance.
Now you should be able to log in the same way as for ec2-user.
More information here: https://aws.amazon.com/premiumsupport/knowledge-center/ec2-user-account-cloud-init-user-data/
Apparently, scripts entered as user data are executed as the root user, so any files you create will be owned by root. You therefore have to change the ownership of those files to test-user; the command below needs to be executed at the end:
chown -R test-user:test-user /home/test-user/
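Putting that together, a user-data sketch along these lines could work (paths written out explicitly, since the script runs as root rather than as test-user; this is an illustration, not the original poster's script):
#!/bin/bash
# Runs as root at boot: create the user and its .ssh directory with explicit paths.
adduser test-user
mkdir -p /home/test-user/.ssh
chmod 700 /home/test-user/.ssh
curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key \
  >> /home/test-user/.ssh/authorized_keys
chmod 600 /home/test-user/.ssh/authorized_keys
# Everything above was created by root, so hand it over to test-user.
chown -R test-user:test-user /home/test-user/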

Deploy docker container to digital ocean droplet from gitlab-ci

So here is what I want to do.
Push to master in git
Have gitlab-ci hear that push and start a pipeline
The pipeline builds code and pushes a docker container to the gitlab registry
The pipeline logs into a digital ocean droplet via ssh
The pipeline pulls the docker container from the gitlab registry
The pipeline starts the container
I can get up to step 4 no problem. But step 4 just fails every which way. I've tried the ssh key approach:
https://gitlab.com/gitlab-examples/ssh-private-key/blob/master/.gitlab-ci.yml
But that did not work.
So I tried a plain text password approach like this:
image: gitlab/dind:latest
before_script:
  - apt-get update -y && apt-get install sshpass
stages:
  - deploy
deploy:
  stage: deploy
  script:
    - sshpass -p "mypassword" ssh root@x.x.x.x 'echo $HOME'
This version just exits with code 1, like so:
Pseudo-terminal will not be allocated because stdin is not a terminal.
ln: failed to create symbolic link '/sys/fs/cgroup/systemd/name=systemd': Operation not permitted
/usr/local/bin/wrapdocker: line 113: 54 Killed docker daemon $DOCKER_DAEMON_ARGS &> /var/log/docker.log
Timed out trying to connect to internal docker host.
Is there a better way to do this? How can I at the very least access my droplet from inside the gitlab-ci build environment?
I just answered this related question: Create react app + Gitlab CI + Digital Ocean droplet - Pipeline succeeds but Docker container is deleted right after
Here's the solution he is using to get SSH credentials set up:
before_script:
  ## Install ssh agent (so we can access the Digital Ocean Droplet) and run it.
  - apk update && apk add openssh-client
  - eval $(ssh-agent -s)
  ## Write the environment variable value to the agent store, create the ssh directory and give the right permissions to it.
  - echo "$SECRETS_DIGITAL_OCEAN_DROPLET_SSH_KEY" | ssh-add -
  - mkdir -p ~/.ssh
  - chmod 700 ~/.ssh
  ## Make sure that ssh will trust the new host, instead of asking
  - echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config
  ## Test it!
  - ssh -t ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} 'echo $HOME'
Code credit goes to https://stackoverflow.com/users/6655011/leonardo-sarmento-de-castro
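With SSH access working, the remaining steps from the question (pull the image from the GitLab registry on the droplet and restart the container) could look roughly like this sketch; CI_JOB_TOKEN, CI_REGISTRY and CI_REGISTRY_IMAGE are standard GitLab CI variables, and "myapp" is a placeholder container name:
deploy:
  stage: deploy
  script:
    # Log in to the GitLab registry on the droplet, pull the freshly built image,
    # then replace the running container.
    - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker login -u gitlab-ci-token -p ${CI_JOB_TOKEN} ${CI_REGISTRY}"
    - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker pull ${CI_REGISTRY_IMAGE}:latest"
    - ssh ${SECRETS_DIGITAL_OCEAN_DROPLET_USER}@${SECRETS_DIGITAL_OCEAN_DROPLET_IP} "docker rm -f myapp || true; docker run -d --name myapp ${CI_REGISTRY_IMAGE}:latest"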

How to create stun turn server instance using AWS EC2

Actually I want to use my own STUN/TURN server instance, and I want to use Amazon EC2. If anybody has any idea regarding this, please share the steps to create it or any reference link to follow.
Do an SSH login to your EC2 instance, then run the commands below to install and start the TURN server.
The simple way:
sudo apt-get install coturn
If you want the latest cutting edge instead, you can download the source code from their downloads page and install it yourself, for example:
sudo -i # ignore if you are already in admin mode
apt-get update && apt-get install libssl-dev libevent-dev libhiredis-dev make -y # install the dependencies
wget -O turn.tar.gz http://turnserver.open-sys.org/downloads/v4.5.0.3/turnserver-4.5.0.3.tar.gz # Download the source tar
tar -zxvf turn.tar.gz # unzip
cd turnserver-*
./configure
make && make install
A sample command for running the TURN server:
turnserver -a -o -v -n -u user:root -p 3478 -L INT_IP -r someRealm -X EXT_IP/INT_IP --no-dtls --no-tls
Command description:
-X - your Amazon instance's external IP and internal IP: EXT_IP/INT_IP
-p - port to be used, default 3478
-a - Use long-term credentials mechanism
-o - Run server process as daemon
-v - 'Moderate' verbose mode.
-n - no configuration file
--no-dtls - Do not start DTLS listeners
--no-tls - Do not start TLS listeners
-u - user credentials to be used
-r - default realm to be used, need for TURN REST API
In your WebRTC app, you can use the TURN server like this:
{
url: 'turn:user@EXT_IP:3478',
credential: 'root'
}
One method to install a turnserver on Amazon EC2 would be to choose Debian and to install the coturn package, which is the successor of the RFC5766-server.
The configuration file at /etc/turnserver.conf includes EC2 specific instructions. The information provided within this file is very exhaustive in general and should answer the majority of configuration questions.
Once configured, the coturn server can be stopped and started as you would any other service.
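For instance, on a Debian-based EC2 instance the package-provided service can be managed the usual way (a sketch; the port numbers come from coturn's defaults and the command above, and the security-group change is made in the AWS console rather than on the host):
# Manage coturn like any other systemd service.
sudo systemctl enable coturn
sudo systemctl start coturn
sudo systemctl status coturn
# Remember to open UDP/TCP 3478 (and the relay range, 49152-65535 by default)
# in the instance's security group.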