Can't find correct syntax to forward SSH keys - Dockerfile

I'm trying to build a custom container with Buildah via a Dockerfile that will run some tasks in Celery, but the tasks need access to a library available in a private repository on our local Gitlab instance. It works if I copy the library from a directory I cloned locally, but it would be best if I could just clone a copy to the container in the Dockerfile. However, I can't get the git clone to work inside the Dockerfile when trying to build it in Buildah. It doesn't seem to be able to read my SSH keys, which are stored on the host at ~/.ssh/id_rsa. I'm trying to follow this from the Buildah man page:
--ssh=default|id[=<socket>|<key>[,<key>]]
SSH agent socket or keys to expose to the build. The socket path can be left empty to use the
value of default=$SSH_AUTH_SOCK
To later use the ssh agent, use the --mount flag in a RUN instruction within a Containerfile:
RUN --mount=type=secret,id=id mycmd
So in my Dockerfile:
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan -t ed25519 gitlab.mycompany.com >> ~/.ssh/known_hosts
RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
And when I try to build it with Buildah:
buildah build --ssh=default -f celery/Dockerfile -t celery
And the error when Buildah gets to the step where it's trying to clone the git repository:
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
error building at STEP "RUN --mount=type=ssh git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library": error while running runtime: exit status 128
Finished
git clones work correctly using my default SSH keys on my host, but whatever I'm doing to access the keys when building the Dockerfile in Buildah isn't working correctly. What do I need to change to use the SSH keys inside of Buildah?
PS Buildah version, on RHEL8:
$ buildah -v
buildah version 1.26.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
EDIT: So I figured out how to get it to work via the --secret flag. Dockerfile:
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library
Command line:
buildah build --secret id=id_rsa,src=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
This works, although only once. When I next run these commands in the Dockerfile:
WORKDIR /opt/library
RUN --mount=type=secret,id=id_rsa GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa" git fetch --all --tags --prune
I get the following error:
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0755 for '/run/secrets/id_rsa' are too open.
It is required that your private key files are NOT accessible by others.
This private key will be ignored.
Load key "/run/secrets/id_rsa": bad permissions
Permission denied, please try again.
Permission denied, please try again.
git@gitlab.mycompany.com: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Looks like I'll have to figure out how to set permissions on the secret file. But I still have no idea how to get the --ssh flag to work correctly, which should be easier than all of this juggling with the secret file.
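One possible workaround I haven't fully tested: copy the mounted secret to a private file before handing it to ssh, so the key ssh reads has 0600 permissions. This assumes coreutils' install is available in the image, and the /tmp/id_rsa path is just illustrative:
RUN --mount=type=secret,id=id_rsa \
    install -m 0600 /run/secrets/id_rsa /tmp/id_rsa && \
    GIT_SSH_COMMAND="ssh -i /tmp/id_rsa" git fetch --all --tags --prune && \
    rm -f /tmp/id_rsa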
EDIT 2: And here is how I managed to run multiple commands that contact the private Gitlab repository - Dockerfile:
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/id_rsa"
RUN --mount=type=secret,id=id_rsa git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
Still not as convenient as figuring out the correct syntax for the --ssh flag, but it works.

I eventually figured out how to format this to get the --ssh flag to work. I have since updated to version 1.27.2, though, so it may also have been a bug fix.
$ buildah -v
buildah version 1.27.2 (image-spec 1.0.2-dev, runtime-spec 1.0.2-dev)
But here is how I formatted the buildah command:
buildah build --ssh id=/home/wile_e8/.ssh/id_rsa -f celery/Dockerfile -t celery
And here is the git fetch line in the Dockerfile:
RUN --mount=type=ssh,id=id git clone git@gitlab.mycompany.com:jdoe/library.git /opt/library && \
cd /opt/library && \
git fetch --all --tags --prune && \
git checkout tags/1.0.0 -b 1.0.0
I don't know why --ssh=default doesn't automatically pick up ~/.ssh/id_rsa, but manually specifying that file in this way works.
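My guess, unverified, is that --ssh=default forwards the ssh-agent socket from $SSH_AUTH_SOCK rather than reading key files directly, so it would presumably only work with an agent running on the host and the key loaded into it, roughly:
eval $(ssh-agent)       # start an agent on the host
ssh-add ~/.ssh/id_rsa   # load the default key into it
buildah build --ssh=default -f celery/Dockerfile -t celery
With the agent forwarded, the Dockerfile line would stay RUN --mount=type=ssh git clone ... as in the original attempt.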

Related

Where do I put `.aws/credentials` for Docker awslogs log-driver (and avoid NoCredentialProviders)?

The Docker awslogs documentation states:
the default AWS shared credentials file (~/.aws/credentials of the root user)
Yet if I copy my AWS credentials file there:
sudo bash -c 'mkdir -p $HOME/.aws; cp .aws/credentials $HOME/.aws/credentials'
... and then try to use the driver:
docker run --log-driver=awslogs --log-opt awslogs-group=neiltest-deleteme --rm hello-world
The result is still the dreaded error:
docker: Error response from daemon: failed to initialize logging driver: failed to create Cloudwatch log stream: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors.
Where does this file really need to go? Is it because the Docker daemon isn't running as root but rather some other user and, if so, how do I determine that user?
NOTE: I can work around this on systems using systemd by setting environment variables. But this doesn't work on Google CloudShell where the Docker daemon has been started by some other method.
Ah ha! I figured it out and tested this on Debian Linux (on my Chromebook w/ Linux VM and Google CloudShell):
The .aws folder must be at the filesystem root (/.aws), not in the root user's $HOME folder!
Based on that I was able to successfully run the following:
pushd $HOME; sudo bash -c 'mkdir -p /.aws; cp .aws/* /.aws/'; popd
docker run --log-driver=awslogs --log-opt awslogs-region=us-east-1 --log-opt awslogs-group=neiltest-deleteme --rm hello-world
I initially figured this all out by looking at the Docker daemon's process information:
DOCKERD_PID=$(ps -A | grep dockerd | grep -Eo '[0-9]+' | head -n 1)
sudo cat /proc/$DOCKERD_PID/environ
The confusing bit is that Docker's documentation here is wrong:
the default AWS shared credentials file (~/.aws/credentials of the root user)
The true location is /.aws/credentials. I believe this is because the daemon starts before $HOME is actually defined since it's not running as a user process. So starting a shell as root will tell you a different story for tilde or $HOME:
sudo sh -c 'cd ~/; echo $PWD'
That outputs /root but using /root/.aws/credentials does not work!
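For reference, the file that ends up at /.aws/credentials uses the standard AWS shared-credentials format; the values below are placeholders, not real keys:
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx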

pull access denied, repo does not exist or may require authorization: server message: insufficient_scope: authorization failed, host=registry-1.docker.io

My Docker container works perfectly locally, using the default context and the command "docker compose up". I'm trying to run my Docker image on ECS in AWS, following this guide - https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I've followed all of the steps in the guide. After I set the context to my new context (I've tried all 3 options) and run "docker compose up", I get the above error, here again in detail:
INFO trying next host error="pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed" host=registry-1.docker.io
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
I've also set the user and added all of the permissions I can think of - image below
I've looked everywhere and I can't find traction, please help :)
The image is located on AWS ECS and Docker hub - I've tried both
Here is my Docker file:
FROM php:7.4-fpm
# Arguments defined in docker-compose.yml
ARG user
ARG uid
# Install system dependencies
RUN apt-get update && apt-get install -y \
git \
curl \
libpng-dev \
libonig-dev \
libxml2-dev \
zip \
unzip
# Clear cache
RUN apt-get clean && rm -rf /var/lib/apt/lists/*
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
# Install PHP extensions
RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
# Get latest Composer
COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
# Create system user to run Composer and Artisan Commands
# RUN useradd -G www-data,root -u $uid -d /home/$user $user
RUN mkdir -p /home/$user/.composer && \
chown -R $user:$user /home/$user
# Set working directory
WORKDIR /var/www
USER $user
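For context, the user and uid build arguments at the top of this Dockerfile would be supplied from docker-compose.yml along these lines (the service name and values here are illustrative, not taken from the original compose file):
services:
  app:
    build:
      context: .
      args:
        user: appuser
        uid: 1000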

GCE startup script: can't find $HOME after exporting in startup script

I am trying to run a GCE startup script that downloads all dependencies, clones a repository and runs a python program. Here is the code
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7
apt-get -y install git
export HOME=/home/codingassignment
echo $HOME
cd $HOME
rm -rf sshlogin-counter/
git clone https://rutu2605:************@github.com/rutu2605/sshlogin-counter.git
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &
When I run echo $HOME, it displays the path in the log file. However, when I cd into it, it says the directory is not found:
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /home/codingassignment
May 08 23:15:18 alphaclient google_metadata_script_runner[488]: startup-script: /tmp/metadata-scripts701519516/startup-script: line 7: cd: /home/codingassignment: No such file or directory
That's because at the time when the script is executed, the /home/codingassignment directory doesn't exist yet. To quote the answer you referred to in the comment:
The startup script is executed as root when the user have been not created yet and no user is logged in
The user home directory for the codingassignment user is created later, when you try to login through SSH for example, if you're using the SSH button in Cloud Console or use the gcloud compute ssh command.
My suggestion:
a) Download the code to some "neutral" directory, like /assignment and set proper permissions for this folder so that the codingassignment user can access it later.
b) Try creating the user first with adduser - this might solve your problem. Create the user, then use su codingassignment to drop root permissions if you don't need them when executing the script (a sketch follows below).
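A minimal sketch of option (b), assuming Debian's adduser; the --disabled-password/--gecos flags are one common way to script user creation non-interactively:
#! /usr/bin/bash
apt-get update
apt-get -y install python3.7 git
# create the user (and its home directory) before depending on /home/codingassignment
adduser --disabled-password --gecos "" codingassignment
cd /home/codingassignment
rm -rf sshlogin-counter/
git clone https://rutu2605:************@github.com/rutu2605/sshlogin-counter.git
chown -R codingassignment: /home/codingassignment  # let the user own what root downloaded
nohup python3 -u ./sshlogin-counter/alphaclient.py > output.log 2>&1 &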

How to make gitlab runner access EC2 instance to make deploy?

I created a script to make the deploy, but every time it throws this error:
"Pseudo-terminal will not be allocated because stdin is not a terminal.
Host key verification failed."
My .gitlab-ci.yml:
make_deploy:
  stage: deploy
  script:
    - apk update
    - apk add bash
    - apk add git
    - apk add openssh
    - bash scripts/deploy.sh
    - echo "Deploy succeeded!"
  only:
    - master
deploy.sh:
#!/bin/bash
user=gitlab+deploy-token-44444
pass=passwordpass
gitlab="https://"$user":"$pass"@gitlab.com/repo/project.git"
ssh-keygen -R 50-200-50-15
chmod 600 key.pem
ssh -tt -i key.pem ubuntu@ec2-50-200-50-15.compute-1.amazonaws.com << 'ENDSSH'
rm -rf project
git clone $gitlab
cd project
npm i
pm2 restart .
ENDSSH
exit
You need to change your authentication type: instead of using username & password, use SSH key exchange.
That way, your script will not be prompted for username & password input.
But before you do that, you should first create SSH keys and upload the public key to your repository settings; it will serve as your primary authentication between the instance and the GitLab server.
More info here.
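A rough sketch of that setup, assuming an ed25519 key pair and that the private key reaches the job through a CI/CD variable (the SSH_PRIVATE_KEY name is illustrative):
# on a trusted machine: generate the pair and register the public half
ssh-keygen -t ed25519 -f ~/.ssh/gitlab_deploy -N ""
cat ~/.ssh/gitlab_deploy.pub    # add this in the repository's deploy key / SSH key settings
# in the CI job: load the private key into an agent instead of using key.pem
eval $(ssh-agent -s)
echo "$SSH_PRIVATE_KEY" | ssh-add -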
Test your connection.
ssh -T git@gitlab.com

Create a Jenkins Job to run JMeter tests on a remote EC2 instance

I have a Java application that creates and runs JMeter tests.
Those tests need to be run on a remote EC2 instance.
Is it possible to have some command in Jenkins (which is on a separate AWS machine) to clone a git project to a remote EC2 instance? And run the flow there?
I will appreciate any thoughts and ideas!
So, here is my solution:
in Jenkins, in the Build section, add an 'Execute shell' step and scp pom.xml and the src folder from the Jenkins workspace to the EC2 instance's tmp folder
in my case it looks like this:
scp -i ../../../jobs/utilities/keys/.pem pom.xml ec2-user@ec2-00-000-00.compute.amazonaws.com:/tmp
scp -i ../../../jobs/utilities/keys/.pem -r src ec2-user@ec2-00-000-00.compute.amazonaws.com:/tmp
then add a 'Send files or execute command over SSH' step and put the following in the Exec command section:
sudo rm -rf ../../my_project_folder_name/
sudo mkdir ../../my_project_folder_name
cd ../../tmp
sudo cp pom.xml ../my_project_folder_name/
sudo cp -r src ../my_project_folder_name
cd ../my_project_folder_name
sudo mvn clean test
then add one more 'Execute shell' step to copy all the files from the target folder, to be able to use them for different reports:
scp -i ../../../jobs/utilities/keys/.pem -r ec2-user@ec2-00-000-00.compute.amazonaws.com:/my_project_folder_name/target .
That's it :)
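As a design note, if the 'Send files or execute command over SSH' plugin isn't available, one alternative (untested here; the host and paths are the placeholders from above) is to run the same commands over plain ssh from an 'Execute shell' step, reusing the key from the scp calls:
ssh -i ../../../jobs/utilities/keys/.pem ec2-user@ec2-00-000-00.compute.amazonaws.com '
  sudo rm -rf /my_project_folder_name &&
  sudo mkdir /my_project_folder_name &&
  sudo cp /tmp/pom.xml /my_project_folder_name/ &&
  sudo cp -r /tmp/src /my_project_folder_name &&
  cd /my_project_folder_name &&
  sudo mvn clean test'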