GitLab CI: Error loading key: invalid format

I've been stuck on this problem for two days.
I've tried with both id_rsa.pub and id_rsa from my production server, and I still get the same error...
SSH_PRIVATE_KEY is a variable I created in the CI/CD settings on GitLab.
Edit: it is not protected, not masked.
# This file is a template, and might need editing before it works on your project.
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/node/tags/
image: node:alpine

stages:
  - deploy

deploy:
  stage: deploy
  before_script:
    # Install ssh-agent if not already installed, it is required by Docker.
    # (change apt-get to yum if you use a CentOS-based image)
    - 'which ssh-agent || ( apk add --update openssh )'
    # Add bash
    - apk add --update bash
    # Add git
    - apk add --update git
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
    - echo "$SSH_PRIVATE_KEY"
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # For Docker builds disable host key checking. Be aware that by adding that
    # you are susceptible to man-in-the-middle attacks.
    # WARNING: Use this only with the Docker executor, if you use it with shell
    # you will overwrite your user's SSH config.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    # In order to properly check the server's host key, assuming you created the
    # SSH_SERVER_HOSTKEYS variable previously, uncomment the following two lines
    # instead.
    # - mkdir -p ~/.ssh
    # - '[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts'
  script:
    - npm i -g pm2
    - pm2 deploy ecosystem.config.js production
  only:
    - master
And when I run the pipeline, I still get this error...
$ echo "$SSH_PRIVATE_KEY" | ssh-add -
Error loading key "(stdin)": invalid format
Could you please help? I'm helpless, clueless, hopeless loading...
Thanks very much!

SSH_PRIVATE_KEY is a variable I created in the CI/CD Settings on GitLab.
This is documented here
in the Value field paste the content of your private key that you created earlier.
So make sure you have pasted the full content of id_rsa, including -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- (with five trailing dashes).
(And, as MrDuk comments, a final newline.)
Stephane Paquet adds in the comments:
cat ~/.ssh/id_rsa | pbcopy
to make sure you copy all the required information.
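For reference, the pasted value should look roughly like this (the Base64 body is shortened here; a real key has many more lines):
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEA...
...
-----END RSA PRIVATE KEY-----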

Just as an FYI for anyone else doing this: I had the same problem, but had missed the final dash off the END RSA PRIVATE KEY divider. It must have five dashes as the dividers, apparently.

Also just as an FYI, my issue was that my SSH key was an OpenSSH-format key (ex. -----BEGIN OPENSSH PRIVATE KEY-----) instead of a PEM-format key (-----BEGIN RSA PRIVATE KEY-----). If you want instructions on how to convert an OpenSSH key to a PEM key, you can find the answer here: Openssh Private Key to RSA Private Key
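If you hit this, ssh-keygen can rewrite the key into PEM format in place; a minimal sketch, assuming the key lives at ~/.ssh/id_rsa (note that -p modifies the file, so keep a backup, and you will be prompted for the old passphrase if the key has one):
# Keep a copy, then rewrite the private key in PEM format with an empty passphrase
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.bak
ssh-keygen -p -m PEM -N "" -f ~/.ssh/id_rsa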

My solution was to change the CI/CD variable type from Variable to File.
Then, instead of piping the variable's value into ssh-add, I read the key from the file path that $SSH_PRIVATE_KEY now holds:
chmod 600 $SSH_PRIVATE_KEY
ssh-add $SSH_PRIVATE_KEY
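Put together, a minimal before_script for a File-type variable might look like this (a sketch, assuming the variable keeps the name SSH_PRIVATE_KEY):
before_script:
  - eval $(ssh-agent -s)
  # With a File-type variable, $SSH_PRIVATE_KEY expands to a temporary file path, not the key itself
  - chmod 600 "$SSH_PRIVATE_KEY"
  - ssh-add "$SSH_PRIVATE_KEY"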

Related

How to pull in additional private repositories in Amplify

Trying to use AWS Amplify to deploy a multi-repo Dendron wiki which has a mix of public and private GitHub repositories.
Amplify can be associated with a single repo, but there doesn't seem to be a built-in way to pull in additional private repositories.
Create a custom deploy key for the private repo in GitHub
generate the key
ssh-keygen -f deploy_key -N ""
Encode the deploy key as a base64-encoded env variable for Amplify
cat deploy_key | base64 | tr -d '\n'
add this as a hosting environment variable (eg. DEPLOY_KEY)
Modify the amplify.yml file to make use of the deploy key
there are two key steps:
adding deploy key to ssh-agent
WARNING: this implementation will print the $DEPLOY_KEY to stdout
disabling StrictHostKeyChecking
NOTE: amplify does not have a $HOME/.ssh folder by default so you'll need to create one as part of the deployment process
relevant excerpt below
- ...
- eval "$(ssh-agent -s)"
- ssh-add <(echo "$DEPLOY_KEY" | base64 -d)
- echo "disable strict host key check"
- mkdir ~/.ssh
- touch ~/.ssh/config
- 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ...
full build file here
Now you should be able to use git to clone the private repo.
For a more detailed writeup as well as alternatives and gotchas, see here
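If the key still fails to load, it can help to sanity-check the decoded value in a build step before handing it to ssh-add; a small sketch, assuming DEPLOY_KEY is set as above (this prints only the key header, not the secret body):
# Should print the key header, e.g. -----BEGIN OPENSSH PRIVATE KEY-----
- echo "$DEPLOY_KEY" | base64 -d | head -n 1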
For GitLab repositories:
Create a deploy token in GitLab
Set env variables (eg. DEPLOY_TOKEN_USERNAME, DEPLOY_TOKEN_PASSWORD) in the Amplify panel
Add this line to the amplify config
- git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/.../repo.git".insteadOf "https://gitlab.com/.../repo.git"
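With that rewrite in place, an ordinary HTTPS clone of the private repo should authenticate using the token; a sketch with a hypothetical group/repo path:
git clone https://gitlab.com/my-group/my-private-repo.git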

Dockerfile writing ssh private key environment variable to /root/.ssh/id_rsa

I have an environment variable stored in AWS Parameter Store. When AWS CodeBuild runs, I want to be able to copy or write the environment variable to /root/.ssh/id_rsa so that I can clone the repo. When the image is built, this error is thrown: Load key "/root/.ssh/id_rsa": invalid format.
FROM php:8.0-fpm
ARG SSHPRIVATE_KEY=$GIT_SSHPRIVATE_KEY
ARG SSHPUBLIC_KEY=$GIT_SSHPUBLIC_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSHPRIVATE_KEY}" > /root/.ssh/id_rsa
RUN echo "${SSHPUBLIC_KEY}" > /root/.ssh/id_rsa.pub
I ended up setting things up from the buildspec file. I access the source artifact, then write the Parameter Store value to the id_rsa file.
- cd $CODEBUILD_SRC_DIR_MySourceArtifacts
- echo "$BITBUCKET_SSH_KEY" >> id_rsa
Then in the Dockerfile I copy id_rsa to the appropriate container folder.
COPY $CODEBUILD_SRC_DIR_MySourceArtifacts/id_rsa /root/.ssh
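Put together, the relevant pieces might look like the sketch below (the variable and artifact names follow the question; the ssh-keyscan host and exact paths are assumptions). The chmod matters because ssh rejects private keys with loose permissions:
# buildspec.yml (pre_build phase)
- cd $CODEBUILD_SRC_DIR_MySourceArtifacts
- echo "$BITBUCKET_SSH_KEY" > id_rsa

# Dockerfile
RUN mkdir -p /root/.ssh/
COPY id_rsa /root/.ssh/id_rsa
RUN chmod 600 /root/.ssh/id_rsa && ssh-keyscan bitbucket.org >> /root/.ssh/known_hosts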

Pass my local environment variables values to my ec2 user data

As simple as it sounds, I would like to pass my local environment variable's value into my EC2 user data script. So, for instance, I run this locally:
export PASSWORD=mypassword
printenv PASSWORD
mypassword
then once I ssh into my EC2 instance and run
printenv PASSWORD
I should see the same value, mypassword. I haven't found a way to inject the right code into my user data script. Please help if you can.
This is my user data: I am basically installing some packages, then authenticating to my Vault with the password value I would like to upload from my laptop to my EC2 instance. I just don't want to hardcode mypassword in my user data script. (Not even sure if it's doable?)
# User Data for ASG
user_data = <<EOF
#!/usr/bin/env bash
set -x -v
exec > >(tee -i user-data.log 2>/dev/console) 2>&1
# Install latest AWS cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
# Install VAULT cli
sudo wget https://releases.hashicorp.com/vault/1.8.2/vault_1.8.2_linux_amd64.zip
sudo unzip vault_1.8.2_linux_amd64.zip
sudo mv vault /usr/local/bin/vault
sudo chmod +x /usr/local/bin/vault
vault -v
# Vault env var
export VAULT_ADDR=https://myvault.test
export VAULT_SKIP_VERIFY=true
export VAULT_NAMESPACE=test
# Vault login (to authenticate to Vault, you must export the local value of $PASSWORD)
export VAULT_PASSWORD=$PASSWORD
vault login -namespace=test -method=userpass username=myuser password=$VAULT_PASSWORD
EOF
user_data runs under the root user and has its own shell environment. Thus, when you ssh to the instance as ec2-user or ubuntu, you get your own, different local environment. This is the reason why your export does not work.
To rectify the issue, your user_data must modify the .bashrc (or equivalent, depending on the OS) of your ssh user (often ec2-user or ubuntu). Only then will your exports take effect.
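For example, a user_data fragment along these lines would make the value visible in the ssh user's login shell (a sketch, assuming the ssh user is ec2-user and that persisting the secret in .bashrc is acceptable):
# Runs as root via cloud-init; appends to the ssh user's shell profile
echo "export PASSWORD='mypassword'" >> /home/ec2-user/.bashrc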
I was able to make it work by setting all my sensitive values locally as environment variables and defining them in my variables.tf. Then in my user data field I just referenced the Terraform variable. See below:
Local setup
export TF_VAR_password=password
TF code --> variables.tf
variable "password" {
description = "my password"
type = string
default = ""
}
Now in my app user data script
export MYPASSWORD=${var.password}
VOILA :)
Here is the website as a point of reference --> https://learn.hashicorp.com/tutorials/terraform/sensitive-variables?in=terraform/0-14 (look for "Set values with environment variables")
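That tutorial also suggests marking the variable as sensitive so Terraform redacts its value from plan and apply output; a sketch of the same variable with that flag (requires Terraform 0.14+):
variable "password" {
  description = "my password"
  type        = string
  sensitive   = true
}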

Creating user in ubuntu from AWS

Using AWS (Amazon Web Services), I have created an Ubuntu 16.10 instance, and I am able to log in using a pem file like this:
ssh -i key.pem ubuntu@52.16.73.14.54
After I am logged in, I can see that I am able to execute:
sudo su
(with no password), however the file /etc/sudoers does NOT contain any reference to the current user: ubuntu.
How can I create another user with exactly the same behavior (without touching the sudoers file) from the terminal, in a non-interactive way?
I tried:
sudo useradd -m -c "adding a test user" -G sudo,adm -s /bin/bash testuser
But after I become "testuser", if I invoke:
sudo su
I have to provide a password, which is exactly what I want to avoid.
You can't do this without touching sudo, because the ubuntu user is given passwordless access specifically.
$ for group in `groups ubuntu`; do sudo grep -r ^[[:space:]]*[^#]*$group[[:space:]] /etc/sudoers* ; done
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers.d/90-cloud-init-users:ubuntu ALL=(ALL) NOPASSWD:ALL
/etc/sudoers:%sudo ALL=(ALL:ALL) ALL
But what you can do is create a new sudoers file without touching any existing files. sudo is typically configured these days to read all the configurations in a directory, usually /etc/sudoers.d/, precisely so that one failing config doesn't affect the rest of sudo.
In your case, you might want to give an admin group passwordless access rather than your user. Then you can add access for other users in the future without changing the sudo config, as in the sketch below.
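A sketch of that group-based approach (the group and file names are hypothetical; always syntax-check a new sudoers file with visudo before logging out):
# Create an admin group with passwordless sudo via a new sudoers.d file
sudo groupadd admins
echo '%admins ALL=(ALL) NOPASSWD:ALL' | sudo tee /etc/sudoers.d/91-admins
sudo chmod 440 /etc/sudoers.d/91-admins
sudo visudo -cf /etc/sudoers.d/91-admins   # syntax check
# New users get the same behavior just by joining the group
sudo useradd -m -c "adding a test user" -G admins -s /bin/bash testuser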

Error during raising review request in ReviewBoard

I have installed Review Board and added a local Git repository to it. During creation of a review request, the repository is available to be selected. But when I select a file from the repository and try to add it as a diff, it says 'The selected file does not appear to be a diff.' Please let me know if anyone has an answer to this question. Thanks.
git diff <filename> > <filename>.diff
This can be used for generating the diff file.
Some helpful tips for reviewboard are:
Log Settings:
Check/Tick - Enable logging
Log directory: /var/www/reviewboard
Log Level: Debug
Review Board Git configuration steps:
$ git config --global user.name "Chalpat Rauth"
$ git config --global user.email chalpat.rauth@ap.sony.com
You can verify the entries in ~/.gitconfig
$ ssh-keygen -t rsa
copy the public key /root/.ssh/id_rsa.pub to GitLab as a new key
chmod 700 -R /root/.ssh/
git clone git@gitlab.csx.sony.co.jp:testtest.git
During configuration in ReviewBoard:
Hosting service: None - Custom Repository
Repository Type: Git
Path: /var/www/reviewboard/code/testtest/.git
Note the below:
In path: /var/www/reviewboard/code/testtest/helloworld/src/test/java/com/sony/csx
git add <file_name>
git commit -m "This is second commit"
git push
git diff HEAD >DiffForReview
LDAP Settings:
Check/Tick - Allow anonymous read-only access
Authentication Method: LDAP
LDAP Server: ldap://ldap.csx.sony.co.jp
LDAP Base DN: dc=csx,dc=sony,dc=co,dc=jp
Surname Attribute: csxUsername1
Full Name Attribute: csxUsernameF
E-Mail LDAP Attribute: mail
User Mask: uid=%s