AWS EC2 Image Builder issue with authorized_keys

I'm trying to create a custom image of RedHat 8 using EC2 Image Builder. In one of the recipes added to the pipeline, I create the ansible user and use the S3Download action to fetch the authorized_keys file and a custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys file doesn't. CloudWatch says the recipe executes without errors and the files are downloaded, but when I launch an EC2 instance from this AMI, the authorized_keys file is not at its path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!

AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
    "/etc/ssh/ssh_host_rsa_key"
    "/etc/ssh/ssh_host_rsa_key.pub"
    "/etc/ssh/ssh_host_ecdsa_key"
    "/etc/ssh/ssh_host_ecdsa_key.pub"
    "/etc/ssh/ssh_host_ed25519_key"
    "/etc/ssh/ssh_host_ed25519_key.pub"
    "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
    echo "Skipping cleanup of ssh files"
else
    echo "Cleaning up ssh files"
    cleanup "${SSH_FILES[@]}"
    USERS=$(ls /home/)
    for user in $USERS; do
        echo Deleting /home/"$user"/.ssh/authorized_keys;
        sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
    done
    for user in $USERS; do
        if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
            echo Failed to delete /home/"$user"/.ssh/authorized_keys;
            exit 1
        fi;
    done;
fi;
You can skip individual sections of the clean up script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
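The SSH cleanup can be suppressed by dropping the marker file that the script above checks for. A minimal sketch, assuming it is added as the last command of an ExecuteBash build step (the path reuses the same {{workingDirectory}} reference the cleanup script itself uses):

# Hypothetical final build command: create the marker file checked by the
# post-build cleanup script so /home/ansible/.ssh/authorized_keys survives
# into the AMI instead of being shredded.
touch {{workingDirectory}}/skip_cleanup_ssh_files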

Related

Bitbucket pipeline: add a description to a newly published Lambda version

I am trying to add a description to the newly created Lambda version only, but with no luck.
I can add a description to the Lambda function and to the alias, but not to the newly created version.
The screenshot below shows where I want the description set by the pipeline.
Here is my code in the yml file:
- step:
    name: Build and publish version
    oidc: true
    script:
      - apt-get update && apt-get install -y zip jq
      - for dir in ${LAMBDA_FUNCTION_NAME}; do
      - echo $dir
      - cd ./$dir && npm install --production
      - zip -r code.zip *
      # lambda config
      # - export ENVIRONMENT="dev"
      # Create lambda configuration file with environment variables
      - export LAMBDA_CONFIG="{}"
      - echo $LAMBDA_CONFIG > ./lambdaConfig.json
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '############################'
          FUNCTION_NAME: $dir
          COMMAND: 'update'
          ZIP_FILE: 'code.zip'
          FUNCTION_CONFIGURATION: "lambdaConfig.json"
          WAIT: "true"
          #PUBLISH_FLAG: "false"
      - BITBUCKET_PIPE_SHARED_STORAGE_DIR="/opt/atlassian/pipelines/agent/build/.bitbucket/pipelines/generated/pipeline/pipes"
      # - if [ ${ALIAS} == "" ]; then echo "Setup successful"; else
      - VERSION=$(jq --raw-output '.Version' $BITBUCKET_PIPE_SHARED_STORAGE_DIR/aws-lambda-deploy-env)
      - cd .. && echo ${VERSION} > ./version.txt
      - echo "Published version:${VERSION}"
      - cat version.txt
      - VERSION=$(cat ./version.txt)
      #- fi;
      - pipe: atlassian/aws-lambda-deploy:1.5.0
        variables:
          AWS_DEFAULT_REGION: $AWS_DEFAULT_REGION
          AWS_OIDC_ROLE_ARN: '########################'
          FUNCTION_NAME: $dir
          COMMAND: 'alias'
          ALIAS: ${ALIAS}
          VERSION: '${VERSION}'
          DESCRIPTION: ${DESCRIPTION}
      - done
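One possible approach (not from the original post, just a sketch assuming the AWS CLI is installed in the build image and credentials are available to the step) is to publish the version explicitly with the AWS CLI, which lets you pass a description for the version itself:

# Hypothetical script step: publish the version with a description and
# capture the returned version number for the later alias step.
VERSION=$(aws lambda publish-version \
    --function-name "$dir" \
    --description "$DESCRIPTION" \
    --query Version --output text)
echo "Published version: $VERSION"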

Bash script inside CloudFormation

I am trying to deploy a SageMaker lifecycle configuration with AWS CloudFormation.
The lifecycle config imports ipynb notebooks from an S3 bucket into the SageMaker notebook instance.
The bucket name is specified in the parameters, and I want to use it in a !Sub function inside the bash script of the lifecycle config.
The problem is that CloudFormation first processes the template and resolves its own functions (like !Sub), and only afterwards is the script installed as a bash script in the lifecycle config.
This is my code:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              set -e
              CP_SAMPLES=true
              EXTRACT_CSV=false
              s3region=s3.amazonaws.com
              SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
              Sagedir=/home/ec2-user/SageMaker
              industry=industry
              notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
              download_files(){
                for notebook in "${notebooks[@]}"
                do
                  printf "aws s3 cp s3://${SRC_NOTEBOOK_DIR}/${notebook} ${Sagedir}/${industry}\n"
                  aws s3 cp s3://"${SRC_NOTEBOOK_DIR}"/"${notebook}" ${Sagedir}/${industry}
                done
              }
              if [ ${CP_SAMPLES} = true ]; then
                sudo -u ec2-user mkdir -p ${Sagedir}/${industry}
                mkdir -p ${Sagedir}/${industry}
                download_files
                chmod -R 755 ${Sagedir}/${industry}
                chown -R ec2-user:ec2-user ${Sagedir}/${industry}/.
              fi
            - Consumer2BucketName: !Ref Consumer2BucketName
This raised the following error:
Template error: variable names in Fn::Sub syntax must contain only alphanumeric characters, underscores, periods, and colons
It seemed there was a conflict between the bash variables and the !Sub CF function.
In the following template I changed the bash variables and removed the {}:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64:
            !Sub
              - |
                #!/bin/bash -xe
                set -e
                CP_SAMPLES=true
                EXTRACT_CSV=false
                s3region=s3.amazonaws.com
                SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
                Sagedir=/home/ec2-user/SageMaker
                industry=industry
                notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
                download_files(){
                  for notebook in $notebooks
                  do
                    printf "aws s3 cp s3://$SRC_NOTEBOOK_DIR/${!notebook} $Sagedir/$industry\n"
                    aws s3 cp s3://"$SRC_NOTEBOOK_DIR"/"${!notebook}" $Sagedir/$industry
                  done
                }
                if [ $CP_SAMPLES = true ]; then
                  sudo -u ec2-user mkdir -p $Sagedir/$industry
                  mkdir -p $Sagedir/$industry
                  download_files
                  chmod -R 755 $Sagedir/$industry
                  chown -R ec2-user:ec2-user $Sagedir/$industry/.
                fi
              - Consumer2BucketName: !Ref Consumer2BucketName
The problem here is that the for loop does not run through all the notebooks in the list, but imports only the first one.
After going through some solutions I tried adding [@] to notebooks:
for notebook in $notebooks[@]
and
for notebook in "$notebooks[@]" / "$notebooks[*]" / $notebooks[@]
I got the same error.
It seems that was a conflict between the bash variables and the !Sub CF function.
That's correct. Both bash and !Sub use ${} for variable substitution. You can escape the bash variables with ${!}. For example:
for notebook in "${!notebooks[@]}"
Also mentioned in the docs:
To write a dollar sign and curly braces (${}) literally, add an exclamation point (!) after the open curly brace, such as ${!Literal}. AWS CloudFormation resolves this text as ${Literal}.
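To make the escaping concrete, here is a minimal sketch of what bash actually receives after CloudFormation renders the !Sub (the bucket name is a placeholder standing in for the resolved Consumer2BucketName):

#!/bin/bash -xe
# ${Consumer2BucketName} was replaced by CloudFormation at deploy time, while
# ${!notebooks[@]} was passed through literally as ${notebooks[@]} for bash.
SRC_NOTEBOOK_DIR=example-consumer2-bucket/sagemaker-notebooks   # placeholder
Sagedir=/home/ec2-user/SageMaker
industry=industry
notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
for notebook in "${notebooks[@]}"; do
    aws s3 cp "s3://${SRC_NOTEBOOK_DIR}/${notebook}" "${Sagedir}/${industry}"
done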

How to pipe a GitHub secret variable into a file

I have a GitHub pipeline and I'm piping a GitHub secret variable into a file, but I get the following error:
/home/runner/work/_temp/c6144b9a-c8e3-489a-ae97-795f592c57f0.sh: line 6: /config: Permission denied
echo: write error: Broken pipe
name: pipeline
on: [ push ]
env:
  KUBECONFIG_B64DATA: ${{ secrets.KUBECONFIG_B64DATA }}
jobs:
  deploy:
    name: Deploy
    # if: startsWith(github.ref, 'refs/tags/')
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Setup Kubectl
        run: |
          sudo apt-get -y install curl
          curl -LO https://storage.googleapis.com/kubernetes-release/release/`curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt`/bin/linux/amd64/kubectl
          chmod +x ./kubectl
          sudo mv ./kubectl /usr/local/bin/kubectl
          sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
          sudo mkdir -p ~/.kube
          sudo mv config /root/.kube/
EDIT:
I used a different folder to get past the permission issues (/tmp/config).
However, I still struggle to pipe a GitHub secret variable into a file, because GitHub masks the secret and I'm returned with an error:
base64: invalid input
I believe this is because when you echo a secret you simply get **** instead of the actual value.
I spent 4 hours on this issue, then found the solution, which was actually hidden in the comments.
As pointed out by @Kay, this was caused by the white space. Doing echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > /tmp/config fixed the problem for me.
Just posting this as an official answer, so that it becomes easier for someone to find it later.
Change this line:
sudo echo $KUBECONFIG_B64DATA | base64 --decode > /config
To
sudo bash -c 'base64 --decode <<< "$KUBECONFIG_B64DATA" > /config'
Or
sudo tee /config > /dev/null < <(base64 --decode <<< "$KUBECONFIG_B64DATA")
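Combining the two fixes above into one sketch, assuming the kubeconfig is meant to end up under the runner user's home directory (which is writable without sudo):

# Strip the whitespace that breaks base64 and write to a user-writable path,
# avoiding the sudo-redirection problem entirely.
mkdir -p ~/.kube
echo "${KUBECONFIG_B64DATA// /}" | base64 --decode > ~/.kube/config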

How can I build a Docker image and push it to ECR with CircleCI 2.0?

I'm trying to upgrade from CircleCI 1.0 to 2.0 and I'm having trouble getting the Docker images to build. I've got the following job:
... There is another Job here which runs some tests
deploy-aws:
  # machine: true
  docker:
    - image: ecrurl/backend
      aws_auth:
        aws_access_key_id: ID1
        aws_secret_access_key: $ECR_AWS_SECRET_ACCESS_KEY # or project UI envar reference
  environment:
    TAG: $CIRCLE_BRANCH-$CIRCLE_SHA1
    ECR_URL: ecrurl/backend
    DOCKER_IMAGE: $ECR_URL:$TAG
    STAGING_BUCKET: staging
    TESTING_BUCKET: testing
    PRODUCTION_BUCKET: production
    NPM_TOKEN: $NPM_TOKEN
  working_directory: ~/backend
  steps:
    - run:
        name: Install awscli
        command: sudo apt-get -y -qq install awscli
    - checkout
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker pull $ECR_URL:latest
            docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag -f $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
workflows:
  version: 2
  build-deploy:
    jobs:
      - build # This one simply runs test
      - deploy-aws:
          requires:
            - build
Running this throws the following error:
#!/bin/bash -eo pipefail
sudo apt-get -y -qq install awscli
/bin/bash: sudo: command not found
Exited with code 127
All I had to do before was this:
dependencies:
  pre:
    - $(aws ecr get-login --region us-west-2)
deployment:
  staging:
    branch: staging
    commands:
      - docker pull $ECR_URL:latest
      - docker build -t backend NODE_ENV=$NODE_ENV --build-arg NPM_TOKEN=$NPM_TOKEN .
      - docker tag backend $DOCKER_IMAGE
      - docker push $DOCKER_IMAGE
      - docker tag -f $DOCKER_IMAGE $ECR_URL:latest
      - docker push $ECR_URL:latest
Here is the config I've changed to make this work:
deploy-aws:
  docker:
    - image: docker:17.05.0-ce-git
  steps:
    - checkout
    - setup_remote_docker
    - run:
        name: Install dependencies
        command: |
          apk add --no-cache \
            py-pip=9.0.0-r1
          pip install \
            docker-compose==1.12.0 \
            awscli==1.11.76
    - restore_cache:
        keys:
          - v1-{{ .Branch }}
        paths:
          - /caches/app.tar
    - run:
        name: Load Docker image layer cache
        command: |
          set +o pipefail
          docker load -i /caches/app.tar | true
    - run:
        name: Build Docker image
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker build -t backend --build-arg NPM_TOKEN=$NPM_TOKEN .
          fi
    - run:
        name: Save Docker image layer cache
        command: |
          mkdir -p /caches
          docker save -o /caches/app.tar app
    - save_cache:
        key: v1-{{ .Branch }}-{{ epoch }}
        paths:
          - /caches/app.tar
    - run:
        name: Tag and push to ECR
        command: |
          if [ "${CIRCLE_BRANCH}" == "master" ]; then
            docker tag backend $DOCKER_IMAGE
            docker push $DOCKER_IMAGE
            docker tag -f $DOCKER_IMAGE $ECR_URL:latest
            docker push $ECR_URL:latest
          fi
Check out this link: https://github.com/builtinnya/circleci-2.0-beta-docker-example/blob/master/.circleci/config.yml#L39
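One step from the 1.0 config that the rewritten job does not show is the ECR login. A minimal sketch of how it could be run before the push step, reusing the same command and the awscli installed earlier in the job:

# Authenticate the Docker client to ECR before pushing; eval runs the
# docker login command that aws ecr get-login prints (awscli 1.x).
eval "$(aws ecr get-login --region us-west-2)"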

.ebextensions not executing, not uploading or creating files

I am trying to follow these instructions to force SSL on AWS Elastic Beanstalk.
I believe this is the important part:
files:
  "/tmp/45_nginx_https_rw.sh":
    owner: root
    group: root
    mode: "000644"
    content: |
      #! /bin/bash
      CONFIGURED=`grep -c "return 301 https" /opt/elasticbeanstalk/support/conf/webapp_healthd.conf`
      if [ $CONFIGURED = 0 ]
      then
        sed -i '/listen 80;/a \ if ($http_x_forwarded_proto = "http") { return 301 https://$host$request_uri; }\n' /opt/elasticbeanstalk/support/conf/webapp_healthd.conf
        logger -t nginx_rw "https rewrite rules added"
        exit 0
      else
        logger -t nginx_rw "https rewrite rules already set"
        exit 0
      fi

container_commands:
  00_appdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact
  01_configdeploy_rewrite_hook:
    command: cp -v /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact
  02_rewrite_hook_perms:
    command: chmod 755 /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
  03_rewrite_hook_ownership:
    command: chown root:users /opt/elasticbeanstalk/hooks/appdeploy/enact/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/configdeploy/enact/45_nginx_https_rw.sh
For some reason the file is not being uploaded or created.
I also tried prefixing the container commands starting with 00 and 01 with sudo.
I also manually ssh'd into the server and created the file by hand, then locally ran the aws elasticbeanstalk restart-app-server --environment-name command to restart the server, and this still did not work.
Any help would be greatly appreciated.
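A quick way to check whether the config is being picked up at all (not from the original post, just a sketch; the log paths assume an Amazon Linux Elastic Beanstalk platform):

# After ssh'ing into the instance, these logs usually show whether the files:
# and container_commands: sections of the .ebextensions config ran and
# whether any of them errored.
sudo tail -n 200 /var/log/cfn-init.log
sudo tail -n 200 /var/log/eb-activity.log
ls -l /tmp/45_nginx_https_rw.sh /opt/elasticbeanstalk/hooks/appdeploy/enact/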