I have an environment variable stored in AWS Parameter Store. When AWS CodeBuild runs, I want to be able to copy or write that environment variable to /root/.ssh/id_rsa so that I can clone the repo. When the image is built, this error is thrown: Load key "/root/.ssh/id_rsa": invalid format.
FROM php:8.0-fpm
ARG SSHPRIVATE_KEY=$GIT_SSHPRIVATE_KEY
ARG SSHPUBLIC_KEY=$GIT_SSHPUBLIC_KEY
RUN mkdir /root/.ssh/
RUN echo "${SSHPRIVATE_KEY}" > /root/.ssh/id_rsa
RUN echo "${SSHPUBLIC_KEY}" > /root/.ssh/id_rsa.pub
I ended up setting things up from the buildspec file. I access the source artifact, then write the Parameter Store value to the id_rsa file.
- cd $CODEBUILD_SRC_DIR_MySourceArtifacts
- echo "$BITBUCKET_SSH_KEY" >> id_rsa
Then in the Dockerfile I copy id_rsa to the appropriate container folder.
COPY $CODEBUILD_SRC_DIR_MySourceArtifacts/id_rsa /root/.ssh
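For reference, a minimal sketch of how the two pieces fit together, assuming the Parameter Store value is exposed to the build as $BITBUCKET_SSH_KEY and the artifact variable is the one above; the chmod is my addition, since ssh rejects keys with loose permissions:
# buildspec pre_build commands (sketch)
cd "$CODEBUILD_SRC_DIR_MySourceArtifacts"
echo "$BITBUCKET_SSH_KEY" > id_rsa     # '>' instead of '>>' so retries don't append a second copy
# and in the Dockerfile, after the COPY above:
# RUN chmod 600 /root/.ssh/id_rsa      # assumed to be needed; ssh refuses world-readable keys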
Elastic Beanstalk is infinitely copying a file to the /tmp folder that I created with a config file in .ebextensions. The file is /tmp/mount-efs.sh, and it causes an issue on initialisation of an environment, so I am trying to get rid of it or at least change its content.
What I already tried:
Deploy an older version that does not have this file.
Result: The EC2 instance does not get deleted, so the file is still there.
Upload the zip instead of using the application version.
Result: The EC2 instance does not get deleted, so the file is still there.
Delete the file /tmp/mount-efs.sh.
Result: The file immediately reappears, along with its ".bak" file.
Remove the '.config' file from /var/app/staging/.ebextensions/.
Result: Same error, and the file mount-efs.sh is still created in the /tmp folder.
I think Elastic Beanstalk is stuck with a version that it thinks works, but that version has an issue. And EB does not allow me to deploy a different version (older or newer).
The strange thing is that the version EB falls back to every time does not have the file in its .ebextensions.
I also tried to rebuild the environment.
Result: The fallback version is loaded, the file is there, and the issue happens.
from eb-engine.log:
Running command /bin/sh -c /opt/aws/bin/cfn-init -s arn:aws:cloudformation:us-west-2:xxxxxxxxxxxx:stack/awseb-e-xxxxxxxxxxx-stack/nnnnnnnn-nnnn-nnnn-nnnn-xxxxxxxxxxxx -r AWSEBAutoScalingGroup --region us-west-2 --configsets Infra-EmbeddedPreBuild
2022/07/14 20:31:13.403626 [INFO] Error occurred during build: Command 01_mount failed
2022/07/14 20:31:13.403667 [ERROR] An error occurred during execution of command [self-startup] - [PreBuildEbExtension]. Stop running the command. Error: EbExtension build failed. Please refer to /var/log/cfn-init.log for more details.
This error happens every 5 seconds, so EB is in an infinite loop here.
So I want to get rid of the /tmp/mount-efs.sh file, or at least change its content. I want to do this directly via SSH on the EC2 instance itself.
My understanding is that EB runs the config files that I added in .ebextensions. Those config files create files in the /tmp folder, and the files in /tmp run on initialization.
So which file do I have to change so that the change is reflected in the file that is created in the /tmp folder (without a deployment)?
Or can I stop the initialization loop somehow?
The infinite loop happens because of a command that calls a file in /var/www/html that did not exist. Why this file did not exist is a riddle to me; the whole /var/www/html folder was empty. Normally Elastic Beanstalk should do this setup before running the commands (create the app and staging folders, unzip the source code into staging, copy it into app/current, and create a symlink to app/current), but this was not the case.
I was able to solve the infinite loop by doing the following:
sudo mkdir -p /var/app/staging
cd $_
sudo unzip /opt/elasticbeanstalk/deployment/app_source_bundle
sudo cp -rpv /var/app/staging /var/app/current
sudo rm -rf /var/www/html
sudo ln -s /var/app/current /var/www/html
mkdir -p: creates the directory along with its parents, so if "app" does not exist it is created before "staging".
$_: refers to the last argument of the previous command; here that was /var/app/staging.
unzip: unzips the source bundle into staging.
cp -rp: copies recursively (-r) and keeps ownership and timestamps (-p) from "staging" into "current".
rm -rf /var/www/html: deletes the existing html folder. Be careful what you delete with this command!
ln -s: creates a symbolic link at /var/www/html pointing to /var/app/current.
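A couple of optional sanity checks (my addition, not part of the original fix) to confirm the recreated layout before cfn-init retries:
ls -ld /var/www/html     # should now show a symlink pointing to /var/app/current
ls /var/app/current      # should contain the unzipped application source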
I'm trying to use AWS Amplify to deploy a multi-repo Dendron wiki which has a mix of public and private GitHub repositories.
Amplify can be associated with a single repo but there doesn't seem to be a built-in way to pull in additional private repositories.
Create a custom deploy key for the private repo in GitHub:
Generate the key:
ssh-keygen -f deploy_key -N ""
Encode the deploy key as a base64-encoded env variable for Amplify:
cat deploy_key | base64 | tr -d \\n
Add this as a hosting environment variable (e.g. DEPLOY_KEY).
Modify the amplify.yml file to make use of the deploy key
There are 2 key steps:
adding the deploy key to ssh-agent
WARNING: this implementation will print the $DEPLOY_KEY to stdout
disabling StrictHostKeyChecking
NOTE: Amplify does not have a $HOME/.ssh folder by default, so you'll need to create one as part of the deployment process
Relevant excerpt below:
- ...
- eval "$(ssh-agent -s)"
- ssh-add <(echo "$DEPLOY_KEY" | base64 -d)
- echo "disable strict host key check"
- mkdir ~/.ssh
- touch ~/.ssh/config
- 'echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
- ...
full build file here
Now you should be able to use git to clone the private repo.
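For example (hypothetical org/repo names), a clone over SSH inside the amplify.yml build commands would then look like:
git clone git@github.com:your-org/your-private-repo.git    # placeholder path, substitute your own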
For a more detailed writeup as well as alternatives and gotchas, see here
For GitLab repositories:
Create a deploy token in GitLab
Set env variables (e.g. DEPLOY_TOKEN_USERNAME, DEPLOY_TOKEN_PASSWORD) in the Amplify panel
Add this line to the amplify config:
- git config --global url."https://$DEPLOY_TOKEN_USERNAME:$DEPLOY_TOKEN_PASSWORD@gitlab.com/.../repo.git".insteadOf "https://gitlab.com/.../repo.git"
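With that rewrite in place, an ordinary https clone of the same (elided) repo path is transparently rewritten by git to include the token, so no clone URL in the build has to change:
git clone https://gitlab.com/.../repo.git    # rewritten via insteadOf to the tokenised URL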
As simple as it sounds, I would like to pass my local environment variable value into my EC2 user data script. So for instance I run this locally:
export PASSWORD=mypassword
printenv PASSWORD
mypassword
then once I SSH to my EC2 instance and run
printenv PASSWORD
I should see the same value, mypassword. I haven't found a way to inject the right code into my user data script. Please help if you can.
This is my user data; I am basically installing some packages and then authenticating to my Vault with the password value I would like to upload from my laptop to my EC2 instance. I just don't want to hardcode mypassword in my user data script (not even sure if it's doable?).
# User Data for ASG
user_data = <<EOF
#!/usr/bin/env bash
set -x -v
exec > >(tee -i user-data.log 2>/dev/console) 2>&1
# Install latest AWS cli
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install --update
# Install VAULT cli
sudo wget https://releases.hashicorp.com/vault/1.8.2/vault_1.8.2_linux_amd64.zip
sudo unzip vault_1.8.2_linux_amd64.zip
sudo mv vault /usr/local/bin/vault
sudo chmod +x /usr/local/bin/vault
vault -v
# Vault env var
export VAULT_ADDR=https://myvault.test
export VAULT_SKIP_VERIFY=true
export VAULT_NAMESPACE=test
# Vault login (to authenticate to Vault, must export local value of $PASSWORD)
export VAULT_PASSWORD=$PASSWORD
vault login -namespace=test -method=userpass username=myuser password=$VAULT_PASSWORD
user_data runs under the root user and has its own shell environment. Thus when you SSH to the instance as ec2-user or ubuntu, you have your own, different local environment. This is the reason why your export does not work.
To rectify the issue, your user_data must modify the .bashrc (or equivalent, depending on the OS) of your SSH user (often ec2-user or ubuntu). Only then will your exports take effect.
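A minimal sketch of what that could look like inside user_data, assuming an AMI whose SSH user is ec2-user; how the value itself gets into user_data is a separate problem (see the Terraform answer below):
# Append the export to the SSH user's shell profile so interactive logins see it.
echo 'export PASSWORD=mypassword' >> /home/ec2-user/.bashrc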
I was able to make it work by setting all variables for my sensitive data locally and defining them in my variables.tf. Then in my user data field I just referenced the Terraform variable. See below:
Local setup
export TF_VAR_password=password
TF code --> variables.tf
variable "password" {
description = "my password"
type = string
default = ""
}
Now in my app user data script
export MYPASSWORD=${var.password}
VOILA :)
Here is the website as a point of reference --> https://learn.hashicorp.com/tutorials/terraform/sensitive-variables?in=terraform/0-14 (look for "Set values with environment variables").
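For completeness, a sketch of the local side of that workflow: Terraform automatically maps any TF_VAR_-prefixed environment variable onto the input variable of the same name, so the secret never has to appear in the .tf files.
export TF_VAR_password='mypassword'
terraform plan     # var.password is populated from the environment
terraform apply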
I am using AWS CodePipeline.
I have 2 CodeCommit repos, say source1 and source2.
I am using CodePipeline for CI/CD.
The CodePipeline I have created uses both CodeCommit repos, i.e. source1 and source2, as its sources.
CodeBuild also uses both sources, i.e. source1 and source2, in its input artifacts.
source1 is the primary and source2 the secondary input artifact.
I have a buildspec.yml file which uses a Dockerfile stored in the root directory of source1 to build the code.
Now the issue is that the Dockerfile is not able to copy the source2 code into the container.
I.e. say source1 has a folder abc in it and source2 has a folder xyz in it.
I am doing the following in the Dockerfile:
COPY ./abc /source1/abc/    # working
COPY ./xyz /source2/xyz/    # not working, getting the error below
COPY failed: stat /var/lib/docker/tmp/docker-builder297252497/xyz: no such file or directory.
Then I tried the following in the Dockerfile:
COPY ./abc /source1/abc/    # working
COPY $CODEBUILD_SRC_DIR_source2/xyz /source2/xyz/    # not working, getting the same error
I also tried to cd into $CODEBUILD_SRC_DIR_source2 and then run the COPY command, but got the same error.
Afterwards, I tried printing PWD, CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_source2 in both the YAML file and the Dockerfile.
It yields the output below.
In the YAML file:
echo $CODEBUILD_SRC_DIR --> /codebuild/output/src886/src/s3/00
echo $CODEBUILD_SRC_DIR_source2 --> /codebuild/output/src886/src/s3/01
echo $PWD --> /codebuild/output/src886/src/s3/00
In the Dockerfile:
echo $CODEBUILD_SRC_DIR --> prints nothing
echo $CODEBUILD_SRC_DIR_source2 --> prints nothing
echo $PWD --> prints '/'
It seems like the Dockerfile doesn't have access to the CODEBUILD_SRC_DIR and CODEBUILD_SRC_DIR_source2 environment variables.
Does anyone have any idea how I can access CODEBUILD_SRC_DIR_source2 or source2 in the Dockerfile so that I can copy them into the container and make the build succeed?
Thanks in Advance !!!
Adding an answer for anyone else who is facing the same issue.
Hope this will help someone!
The issue was with the build context passed to Docker.
When there is only one repo as the input source, CodeBuild uses this directory as the working directory for the build: CODEBUILD_SRC_DIR=/codebuild/output/src894561443/src
The source of the first repo (when there is only one repo) is present in that same directory, i.e. CODEBUILD_SRC_DIR=/codebuild/output/src894561443/src
and in the buildspec.yml file we had the following command to build the image:
docker build -t tag .    (uses the Dockerfile present in the root directory of the first source)
But when we have multiple sources, CodeBuild stores the input artifacts like this:
CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00
CODEBUILD_SRC_DIR_source2=/codebuild/output/src886/src/s3/01
instead of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/
where CODEBUILD_SRC_DIR is the first input artifact (1st CodeCommit repo)
and CODEBUILD_SRC_DIR_source2 is the second input artifact (2nd CodeCommit repo).
In this case CodeBuild was using CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00 as the working directory.
So in the command below, the context was passed as dot '.' (the working directory):
docker build -t tag .
As a result only the first source was passed to Docker (since CODEBUILD_SRC_DIR was the working directory), and Docker failed to find the second source.
To fix this we passed the parent directory of CODEBUILD_SRC_DIR=/codebuild/output/src886/src/s3/00, i.e. /codebuild/output/src886/src/s3/,
as the build context in the docker build command, like this:
docker build -t tag -f $CODEBUILD_SRC_DIR/Dockerfile /codebuild/output/src886/src/s3/
and in the Dockerfile referred to source1 and source2 as below:
source1=./00
source2=./01
and it worked !!!
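Putting it together, a sketch of the build command under the assumptions above, deriving the shared parent directory with dirname instead of hardcoding the /s3/ path:
# CODEBUILD_SRC_DIR ends in .../s3/00, so its parent (.../s3) contains both 00 and 01.
CONTEXT_DIR="$(dirname "$CODEBUILD_SRC_DIR")"
docker build -t tag -f "$CODEBUILD_SRC_DIR/Dockerfile" "$CONTEXT_DIR"
# Inside the Dockerfile the two sources are then reachable as ./00 and ./01, e.g.:
# COPY ./00/abc /source1/abc/
# COPY ./01/xyz /source2/xyz/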
I've been stuck on this problem for 2 days.
I tried with id_rsa.pub and id_rsa from my production server, still the same error...
SSH_PRIVATE_KEY is a variable I created in the CI/CD Settings on GitLab.
Edit: not protected, not masked.
# This file is a template, and might need editing before it works on your project.
# Official framework image. Look for the different tagged releases at:
# https://hub.docker.com/r/library/node/tags/
image: node:alpine
stages:
  - deploy
deploy:
  stage: deploy
  before_script:
    # Install ssh-agent if not already installed, it is required by Docker.
    # (change apt-get to yum if you use a CentOS-based image)
    - 'which ssh-agent || ( apk add --update openssh )'
    # Add bash
    - apk add --update bash
    # Add git
    - apk add --update git
    # Run ssh-agent (inside the build environment)
    - eval $(ssh-agent -s)
    # Add the SSH key stored in the SSH_PRIVATE_KEY variable to the agent store
    - echo "$SSH_PRIVATE_KEY"
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
    # For Docker builds disable host key checking. Be aware that by adding that
    # you are susceptible to man-in-the-middle attacks.
    # WARNING: Use this only with the Docker executor, if you use it with shell
    # you will overwrite your user's SSH config.
    - mkdir -p ~/.ssh
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    # In order to properly check the server's host key, assuming you created the
    # SSH_SERVER_HOSTKEYS variable previously, uncomment the following two lines
    # instead.
    # - mkdir -p ~/.ssh
    # - '[[ -f /.dockerenv ]] && echo "$SSH_SERVER_HOSTKEYS" > ~/.ssh/known_hosts'
  script:
    - npm i -g pm2
    - pm2 deploy ecosystem.config.js production
  only:
    - master
And when I run the pipeline, I still get this error...
$ echo "$SSH_PRIVATE_KEY" | ssh-add -
Error loading key "(stdin)": invalid format
Could you please help? I'm helpless, clueless, hopeless loading...
Thanks very much !
SSH_PRIVATE_KEY is a variable I created in the CI/CD Settings on GitLab.
This is documented here
in the Value field paste the content of your private key that you created earlier.
So make sure you have pasted the full content of id_rsa, including -----BEGIN RSA PRIVATE KEY----- and -----END RSA PRIVATE KEY----- (with 5 final dashes).
(And, as MrDuk comments, a final newline)
Stephane Paquet adds in the comments:
cat ~/.ssh/id_rsa | pbcopy
to make sure you copy all the required information.
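A quick local sanity check (my addition): ssh-keygen refuses to parse an incomplete or truncated private key with the same "invalid format" error, so validating the file before pasting it catches most copy mistakes:
ssh-keygen -y -f ~/.ssh/id_rsa > /dev/null && echo "key parses OK"   # prompts for the passphrase if the key has one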
Just as an FYI for anyone else doing this, I had the same problem but had missed the final dash off the END RSA PRIVATE KEY section. It must have 5 dashes as the dividers, apparently.
Also just as an FYI, my issue was that my SSH key was in the OpenSSH format (e.g. -----BEGIN OPENSSH PRIVATE KEY-----) instead of the PEM format (-----BEGIN RSA PRIVATE KEY-----). If you want instructions on how to convert an OpenSSH key to a PEM key, you can find the answer here: Openssh Private Key to RSA Private Key
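If you hit the same OpenSSH-vs-PEM mismatch, one common way to convert the key in place is the sketch below; double-check against the linked answer first, since -p rewrites the file:
cp ~/.ssh/id_rsa ~/.ssh/id_rsa.bak                 # back up the original key first
ssh-keygen -p -m PEM -f ~/.ssh/id_rsa -N "" -P ""  # assumes no passphrase; drop -N/-P to be prompted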
My solution was to change CI/CD Variable type from Variable to File.
And instead of reading the key from the variable, I read it from the file that SSH_PRIVATE_KEY now points to:
chmod 600 $SSH_PRIVATE_KEY
ssh-add $SSH_PRIVATE_KEY