How to pipe a GitHub secret variable into a file - kubectl

I have a GitHub Actions pipeline and I'm piping a GitHub secret variable into a file, but I get the following error:
/home/runner/work/_temp/c6144b9a-c8e3-489a-ae97-795f592c57f0.sh: line 6: /config: Permission denied
echo: write error: Broken pipe
name: pipeline
on: [ push ]
env:
  KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
jobs:
  deploy:
    name: Deploy
    # if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup Kubectl
        run: |
          sudo apt-get -y install curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
          sudo mkdir -p ~/.kube
          sudo mv config /root/.kube/
EDIT:
I now use a different folder to get past the permission issues (/tmp/config).
However, I still struggle to pipe a GitHub secret variable into a file, because GitHub masks the secret and I'm returned with an error:
base64: invalid input
I believe this is because when you echo a secret you simply get **** instead of the actual value.

I spent 4 hours on this issue, then found the solution, which was actually hidden in the comments.
As pointed out by @Kay, this was caused by the white space. Doing echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config fixed the problem for me.
Just posting this as an official answer so that it becomes easier for someone to find later.
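For reference, here is the same fix as a complete run script (a sketch; the ${KUBECONFIG_B64DATA// /} expansion strips the spaces that break base64, and writing under $HOME sidesteps the earlier permission problem entirely):
# Decode the secret into a user-owned path; no sudo needed
mkdir -p "$HOME/.kube"
echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > "$HOME/.kube/config"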

The original fails because sudo only elevates echo; the redirection to /config is performed by the calling, unprivileged shell, which is what produces "Permission denied".
Change this line:
sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
to
sudo bash -c 'base64 --decode <<< "$KUBECONFIG_B64DATA" > /config'
or
sudo tee /config > /dev/null < <(base64 --decode <<< "$KUBECONFIG_B64DATA")
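The difference is easy to demonstrate (a sketch):
# The redirection is set up by the calling shell *before* sudo runs,
# so it fails unless the unprivileged user can write to the target
sudo echo hello > /config          # Permission denied
# Wrapping the whole command line in a root shell moves the redirection
# inside the elevated process
sudo sh -c 'echo hello > /config'  # works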

Related

AWS EC2 Image Builder issue with authorized_keys

I'm trying to create a custom image of RedHat 8 using the EC2 Image Builder. In one of the recipes added to the pipeline, I've created the ansible user and used S3 to download the authorized_keys and a custom sudoers.d file. The issue I'm facing is that the sudoers file, called "ansible", gets copied just fine, but the authorized_keys file doesn't. CloudWatch says the recipe executes without errors and the files are downloaded, but when I create an EC2 instance from this AMI, the authorized_keys file is not in its path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards, and that cleanup is what deletes your authorized_keys file:
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the cleanup script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
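Based on the test in the excerpt above, dropping an empty marker file into the build's working directory skips the ssh-file cleanup. A sketch of an extra ExecuteBash command for the build phase (the skip_cleanup_ssh_files name comes straight from the quoted script):
# {{workingDirectory}} is resolved by Image Builder at run time
touch {{workingDirectory}}/skip_cleanup_ssh_files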

Bitbucket pipeline: add a description to a newly published Lambda version

I am trying to add a description to the newly created version only, but no luck ;-(
I can add a description to the lambda and to the alias, but not to the newly created version.
You can refer to the screenshot below showing where I want my description from the pipeline.
Here is my code in the yml file:
- step:
    name: Build and publish version
    oidc: true
    script:
      - apt-get update && apt-get install -y zip jq
      - for dir in ${LAMBDA_FUNCTION_NAME}; do
      - echo $dir
      - cd ./$dir && npm install --production
      - zip -r code.zip *
      # lambda config
      # - export ENVIRONMENT="dev"
      # Create lambda configuration file with environment variables
      - export LAMBDA_CONFIG="{}"
      - echo $LAMBDA_CONFIG > ./lambdaConfig.json
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '############################'
          FUNCTION_NAME: $dir
          COMMAND: 'update'
          ZIP_FILE: 'code.zip'
          FUNCTION_CONFIGURATION: "lambdaConfig.json"
          WAIT: "true"
          #PUBLISH_FLAG: "false"
      - BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes"
      # - if [ ${ALIAS} == "" ]; then echo "Setup successful"; else
      - VERSION=$(jq --raw-output '.Version' $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env)
      - cd .. && echo ${VERSION} > ./version.txt
      - echo "Published version:${VERSION}"
      - cat version.txt
      - VERSION=$(cat ./version.txt)
      #- fi;
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '########################'
          FUNCTION_NAME: $dir
          COMMAND: 'alias'
          ALIAS: ${ALIAS}
          VERSION: '${VERSION}'
          DESCRIPTION: ${DESCRIPTION}
      - done
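One way to get a description onto the version itself, as an untested sketch (assuming the OIDC role allows lambda:PublishVersion; the aws lambda publish-version CLI command accepts a --description flag), is to publish the version from the script rather than relying on the pipe, then point the alias at the captured version number:
# Publish the new version with the description attached and capture its number
VERSION=$(aws lambda publish-version \
    --function-name "$dir" \
    --description "${DESCRIPTION}" \
    --query Version --output text)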

Same Dockerfile produces error in CodeBuild

I have a Dockerfile:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER 1001
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
EXPOSE 8080
WORKDIR /app/build
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
If I build this image locally, it starts normally and does what it has to do.
My local Docker version:
Client: Docker Engine - Community
 Version:           20.10.7
 API version:       1.41
 Go version:        go1.13.15
 Git commit:        f0df350
 Built:             Wed Jun 2 11:56:40 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.8
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.16.6
  Git commit:       75249d8
  Built:            Fri Jul 30 19:52:16 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.9
  GitCommit:        e25210fe30a0a703442421b0f60afac609f950a3
 runc:
  Version:          1.0.1
  GitCommit:        v1.0.1-0-g4144b63
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
If I build it in CodeBuild, it does not start:
/app/build/env.sh: 4: /app/build/env.sh: cannot create ./env-config.js: Permission denied
This is the image I am using in CodeBuild: aws/codebuild/amazonlinux2-x86_64-standard:3.0
I have also run the same script locally and still got no error.
What could be the cause of this? If you have something in mind, please let me know; otherwise I will post more code.
This is my env.sh:
#!/usr/bin/env sh
# Add assignment
echo "window._env_ = {" > ./env-config.js
# Read each line in .env file
# Each line represents key=value pairs
while read -r line || [ -n "$line" ];
do
  echo "$line"
  # Split env variables by character `=`
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi
  # Read value of current variable if it exists as an environment variable
  eval value=\"\$"$varname"\"
  # Otherwise use value from .env file
  [ -z "$value" ] && value=${varvalue}
  echo name: "$varname", value: "$value"
  # Append configuration property to JS file
  echo " $varname: \"$value\"," >> ./env-config.js
done < window.env
echo "}" >> ./env-config.js
buildspec:
version: 0.2
env:
  git-credential-helper: yes
  secrets-manager:
    GITHUB_TOKEN: "github:GITHUB_TOKEN"
phases:
  install:
    runtime-versions:
      nodejs: 12
    commands:
      - npm install
  build:
    commands:
      - echo Build started on `date`
      - GITHUB_USERNAME=${GITHUB_USERNAME} GITHUB_EMAIL=${GITHUB_EMAIL} GITHUB_TOKEN=${GITHUB_TOKEN} AWS_REGION=${AWS_DEFAULT_REGION} GITHUB_REPOSITORY_URL=${GITHUB_REPOSITORY_URL} ECR_REPOSITORY_URL=${ECR_REPOSITORY_URL} ENV=${ENV} node release.js
My build project terraform configuration:
resource "aws_codebuild_project" "dashboard_image" {
name = var.project.name
service_role = var.codebuild_role_arn
artifacts {
type = "CODEPIPELINE"
}
environment {
compute_type = "BUILD_GENERAL1_SMALL"
image = "aws/codebuild/amazonlinux2-x86_64-standard:3.0"
type = "LINUX_CONTAINER"
privileged_mode = true
environment_variable {
name = "GITHUB_REPOSITORY_URL"
value = "https://github.com/${var.project.github_organization_name}/${var.project.github_repository_name}.git"
}
environment_variable {
name = "ECR_REPOSITORY_URL"
value = var.project.ecr_repository_url
}
environment_variable {
name = "ECR_IMAGE_NAME"
value = var.project.ecr_image_name
}
environment_variable {
name = "ENV"
value = "prod"
}
}
source {
type = "CODEPIPELINE"
buildspec = "buildspec.yml"
}
}
It's all about your Dockerfile and the user permissions in it. Try to run docker run public.ecr.aws/bitnami/nginx:1.20 whoami: you will see that this image does not run as the default root user. It will be the same if you exec something inside this container; you have to add --user root to run or exec commands. See the section "Why use a non-root container?" in the Bitnami NGINX image documentation.
That's why you don't have permission to create a file inside the /app folder: its owner is root, inherited from the first public.ecr.aws/bitnami/node:15 image (which does run as root by default).
To make it work in your case, change the line USER 1001 to USER root (or another user with the proper permissions) and double check that env.sh has execute permission (chmod +x env.sh).
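The difference between the two base images is quick to check (a sketch):
# The node image runs as root (uid 0); the nginx image runs as uid 1001
docker run --rm public.ecr.aws/bitnami/node:15 id -u
docker run --rm public.ecr.aws/bitnami/nginx:1.20 id -u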
This is the change I had to make to my Dockerfile in order to make it work:
FROM public.ecr.aws/bitnami/node:15 AS stage-01
COPY package.json /app/package.json
COPY package-lock.json /app/package-lock.json
WORKDIR /app
RUN npm ci
FROM stage-01 AS stage-02
COPY src /app/src
COPY public /app/public
COPY tsconfig.json /app/tsconfig.json
WORKDIR /app
RUN PUBLIC_URL=/myapp/web npm run build
FROM public.ecr.aws/bitnami/nginx:1.20
USER root
COPY --from=stage-02 /app/build /app/build
COPY nginx.conf /opt/bitnami/nginx/conf/server_blocks/nginx.conf
COPY ./env.sh /app/build
COPY window.env /app/build
RUN chmod 777 /app/build/env-config.js
EXPOSE 8080
WORKDIR /app/build
USER 1001
CMD ["/bin/sh", "-c", "/app/build/env.sh && nginx -g \"daemon off;\""]
It is probably due to the CodeBuild permissions when cloning the repository.
The 777 is just temporary; later I will probably test whether I can restrict the permissions.
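An untested sketch of a tighter alternative to chmod 777: pre-create the file that env.sh writes and hand it to the runtime user instead of opening it to everyone:
# In the final stage, while still USER root
RUN touch /app/build/env-config.js \
    && chown 1001 /app/build/env-config.js \
    && chmod 644 /app/build/env-config.js
# Then drop privileges again
USER 1001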

Run Elastic Beanstalk .ebextensions config for specific environments

I have the following config
0_logdna.config
commands:
  01_install_logdna:
    command: "/home/ec2-user/logdna.sh"
  02_restart_logdna:
    command: "service logdna-agent restart"
files:
  "/home/ec2-user/logdna.sh":
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ $RACK_ENV == production ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        echo "[logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg" | sudo tee /etc/yum.repos.d/logdna.repo
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
I want to be able to run this config only for my production environment, but each time I run this code I get an error that says:
ERROR [Instance: i-091794aa00f84ab36,i-05b6d0824e7a0f5da] Command failed on instance. Return code: 1 Output: (TRUNCATED)...not found
/home/ec2-user/logdna.sh: line 17: logdna-agent: command not found
/home/ec2-user/logdna.sh: line 18: logdna-agent: command not found
error reading information on service logdna-agent: No such file or directory
logdna-agent: unrecognized service.
Not sure why this is not working. When I echo RACK_ENV I get production as the value, so I know that is correct, but why is it failing my if statement and why is it not working properly?
Your use of echo will lead to a malformed /etc/yum.repos.d/logdna.repo, so yum never finds the repo, logdna-agent never gets installed, and every subsequent logdna-agent command fails with "command not found". To set the repo file up properly, use the following (the indentation around EOL2 is important):
files:
  "/home/ec2-user/logdna.sh":
    mode: "000777"
    owner: root
    group: root
    content: |
      #!/bin/sh
      RACK_ENV=$(/opt/elasticbeanstalk/bin/get-config environment -k RACK_ENV)
      echo "$RACK_ENV"
      if [ "$RACK_ENV" = "production" ]
      then
        rpm --import https://repo.logdna.com/logdna.gpg
        cat >/etc/yum.repos.d/logdna.repo << 'EOL2'
      [logdna]
      name=LogDNA packages
      baseurl=https://repo.logdna.com/el6/
      enabled=1
      gpgcheck=1
      gpgkey=https://repo.logdna.com/logdna.gpg
      EOL2
        LOGDNA_INGESTION_KEY=$(/opt/elasticbeanstalk/bin/get-config environment -k LOGDNA_INGESTION_KEY)
        yum -y install logdna-agent
        logdna-agent -k $LOGDNA_INGESTION_KEY # this is your unique Ingestion Key
        # /var/log is monitored/added by default (recursively), optionally add more dirs here
        logdna-agent -d /var/app/current/log/logstasher.log
        logdna-agent -d /var/app/containerfiles/logs/sidekiq.log
        # logdna-agent --hostname allows you to pass your AWS env metadata to LogDNA (remove # to uncomment the line below)
        # logdna-agent --hostname `{"Ref": "AWSEBEnvironmentName" }`
        # logdna -t option allows you to tag the host with tags (remove # to uncomment the line below)
        #logdna-agent -t `{"Ref": "AWSEBEnvironmentName" }`
        chkconfig logdna-agent on
        service logdna-agent start
      fi
For further troubleshooting, check the /var/log/cfn-init-cmd.log file.
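To confirm the generated repo file came out well-formed on an instance (a sketch; cat -A makes stray indentation and control characters visible):
# Every line of the repo file should start at column 0
sudo cat -A /etc/yum.repos.d/logdna.repo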

How can I build a Docker image and push it to ECR with CircleCI 2.0?

I'm trying to upgrade from CircleCI 1.0 to 2.0 and I'm having trouble getting the Docker images to build. I've got the following job:
... There is another job here which runs some tests
deploy-aws:
  # machine: true
  docker:
    - image: ecrurl/backend
      aws_auth:
        aws_access_key_id: ID1
        aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
  environment:
    TAG: $CIRCLE_BRANCH-$CIRCLE_SHA1
    ECR_URL: ecrurl/backend
    DOCKER_IMAGE: $ECR_URL:$TAG
    STAGING_BUCKET: staging
    TESTING_BUCKET: testing
    PRODUCTION_BUCKET: production
    NPM_TOKEN: $NPM_TOKEN
  working_directory: ~/backend
  steps:
    - run:
        name: Install awscli
        command: sudo apt-get -y -qq install awscli
    - checkout
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker pull $ECR_URL:latest
            docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag -f $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi

workflows:
  version: 2
  build-deploy:
    jobs:
      - build # This one simply runs tests
      - deploy-aws:
          requires:
            - build
Running this throws the following error:
#!/bin/bash -eo pipefail
sudo apt-get -y -qq install awscli
/bin/bash: sudo: command not found
Exited with code 127
All I had to do before was this:
dependencies:
  pre:
    - $(aws ecr get-login --region us-west-2)
deployment:
  staging:
    branch: staging
    commands:
      - docker pull $ECR_URL:latest
      - docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
      - docker tag backend $DOCKER_IMAGE
      - docker push $DOCKER_IMAGE
      - docker tag -f $DOCKER_IMAGE $ECR_URL:latest
      - docker push $ECR_URL:latest
Here is the config I've changed to make this work:
deploy-aws:
  docker:
    - image: docker:17.05.0-ce-git
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install dependencies
        command: |
          apk add --no-cache \
            py-pip=9.0.0-r1
          pip install \
            docker-compose==1.12.0 \
            awscli==1.11.76
    - restore_cache:
        keys:
          - v1-{{ .Branch }}
        paths:
          - /caches/app.tar
    - run:
        name: Load Docker image layer cache
        command: |
          set +o pipefail
          docker load -i /caches/app.tar | true
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker build -t backend --build-arg NPM_TOKEN=$NPM_TOKEN .
          fi
    - run:
        name: Save Docker image layer cache
        command: |
          mkdir -p /caches
          docker save -o /caches/app.tar backend
    - save_cache:
        key: v1-{{ .Branch }}-{{ epoch }}
        paths:
          - /caches/app.tar
    - run:
        name: Tag and push to ECR
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
Check out this link: https://github.com/builtinnya/circleci-2.0-beta-docker-example/blob/master/.circleci/config.yml#L39
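One gap worth flagging: nothing in the 2.0 config above authenticates to ECR, so the docker push steps may still fail until a login step like the 1.0 config's is added, for example (a sketch reusing the aws ecr get-login command shown earlier; newer AWS CLIs replace it with aws ecr get-login-password):
# Run after installing awscli; executes the docker login command that
# aws ecr get-login prints
$(aws ecr get-login --region us-west-2)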