Bash script inside CloudFormation

I am trying to deploy a SageMaker lifecycle configuration with AWS CloudFormation.
The lifecycle configuration imports .ipynb notebooks from an S3 bucket into the SageMaker notebook instance.
The bucket name is specified in the template parameters, and I want to reference it with a !Sub function inside the bash script of the lifecycle configuration.
The problem is that CloudFormation processes the template first and resolves its own functions (like !Sub) before the script is uploaded to the lifecycle configuration as bash.
This is my code:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              set -e
              CP_SAMPLES=true
              EXTRACT_CSV=false
              s3region=s3.amazonaws.com
              SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
              Sagedir=/home/ec2-user/SageMaker
              industry=industry
              notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
              download_files(){
                for notebook in "${notebooks[@]}"
                do
                  printf "aws s3 cp s3://${SRC_NOTEBOOK_DIR}/${notebook} ${Sagedir}/${industry}\n"
                  aws s3 cp s3://"${SRC_NOTEBOOK_DIR}"/"${notebook}" ${Sagedir}/${industry}
                done
              }
              if [ ${CP_SAMPLES} = true ]; then
                sudo -u ec2-user mkdir -p ${Sagedir}/${industry}
                mkdir -p ${Sagedir}/${industry}
                download_files
                chmod -R 755 ${Sagedir}/${industry}
                chown -R ec2-user:ec2-user ${Sagedir}/${industry}/.
              fi
            - Consumer2BucketName: !Ref Consumer2BucketName
This raised the following error:
Template error: variable names in Fn::Sub syntax must contain only alphanumeric characters, underscores, periods, and colons
It seems there was a conflict between the bash variables and the !Sub CloudFormation function.
In the following template I changed the bash variables and removed the {}:
LifecycleConfig:
  Type: AWS::SageMaker::NotebookInstanceLifecycleConfig
  Properties:
    NotebookInstanceLifecycleConfigName: !Sub
      - ${NotebookInstanceName}LifecycleConfig
      - NotebookInstanceName: !Ref NotebookInstanceName
    OnStart:
      - Content:
          Fn::Base64: !Sub
            - |
              #!/bin/bash -xe
              set -e
              CP_SAMPLES=true
              EXTRACT_CSV=false
              s3region=s3.amazonaws.com
              SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
              Sagedir=/home/ec2-user/SageMaker
              industry=industry
              notebooks=("notebook1.ipynb" "notebook2.ipynb" "notebook3.ipynb")
              download_files(){
                for notebook in $notebooks
                do
                  printf "aws s3 cp s3://$SRC_NOTEBOOK_DIR/${!notebook} $Sagedir/$industry\n"
                  aws s3 cp s3://"$SRC_NOTEBOOK_DIR"/"${!notebook}" $Sagedir/$industry
                done
              }
              if [ $CP_SAMPLES = true ]; then
                sudo -u ec2-user mkdir -p $Sagedir/$industry
                mkdir -p $Sagedir/$industry
                download_files
                chmod -R 755 $Sagedir/$industry
                chown -R ec2-user:ec2-user $Sagedir/$industry/.
              fi
            - Consumer2BucketName: !Ref Consumer2BucketName
The problem here is that the for loop does not run through all the notebooks in the list; it imports only the first one.
After going through some solutions I tried adding [@] to the notebooks:
for notebook in $notebooks[@]
and
for notebook in "$notebooks[@]" / "$notebooks[*]" / $notebooks[@]
I got the same error.

It seems there was a conflict between the bash variables and the !Sub CF function.
That's correct. Both bash and !Sub use ${} for variable substitution. You can escape the bash variables with ${!}. For example:
for notebook in "${!notebooks[@]}"
Also mentioned in the docs:
To write a dollar sign and curly braces (${}) literally, add an exclamation point (!) after the open curly brace, such as ${!Literal}. AWS CloudFormation resolves this text as ${Literal}.
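Putting it together, here is a condensed sketch of how the OnStart script could look with the escaping applied (the notebook names are shortened placeholders; everything else follows the question's template):

OnStart:
  - Content:
      Fn::Base64: !Sub
        - |
          #!/bin/bash -xe
          # Consumer2BucketName is resolved by CloudFormation before the script is stored
          SRC_NOTEBOOK_DIR=${Consumer2BucketName}/sagemaker-notebooks
          notebooks=("notebook1.ipynb" "notebook2.ipynb")
          # The exclamation mark makes CloudFormation emit a literal dollar-brace,
          # so bash expands these variables at run time instead of !Sub at deploy time
          for notebook in "${!notebooks[@]}"
          do
            aws s3 cp "s3://${!SRC_NOTEBOOK_DIR}/${!notebook}" /home/ec2-user/SageMaker/
          done
        - Consumer2BucketName: !Ref Consumer2BucketName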

Related

AWS EC2 Image Builder issue with authorized_keys

I'm trying to create a custom RedHat 8 image using EC2 Image Builder. In one of the recipes added to the pipeline, I create the ansible user and use S3Download to fetch the authorized_keys file and a custom sudoers.d file. The issue I'm facing is that the sudoers file called "ansible" gets copied just fine, but the authorized_keys file doesn't. CloudWatch says the recipe executes without errors and the files are downloaded, but when I launch an EC2 instance from the resulting AMI, the authorized_keys file is not in the expected path.
What's happening?
This is the recipe I'm using:
name: USER-Ansible
description: Creation and configuration of the ansible user
schemaVersion: 1.0
phases:
  - name: build
    steps:
      - name: UserCreate
        action: ExecuteBash
        inputs:
          commands:
            - groupadd -g 2004 ux
            - useradd -u 4134 -g ux -c "AWX Ansible" -m -d /home/ansible ansible
            - mkdir /home/ansible/.ssh
      - name: FilesDownload
        action: S3Download
        inputs:
          - source: s3://[REDACTED]/authorized_keys
            destination: /home/ansible/.ssh/authorized_keys
            expectedBucketOwner: [REDACTED]
            overwrite: false
          - source: s3://[REDACTED]/ansible
            destination: /etc/sudoers.d/ansible
            expectedBucketOwner: [REDACTED]
            overwrite: false
      - name: FilesConfiguration
        action: ExecuteBash
        inputs:
          commands:
            - chown ansible:ux /home/ansible/.ssh/authorized_keys; chmod 600 /home/ansible/.ssh/authorized_keys
            - chown ansible:ux /home/ansible/.ssh; chmod 700 /home/ansible/.ssh
            - chown root:root /etc/sudoers.d/ansible; chmod 440 /etc/sudoers.d/ansible
Thanks in advance!
AWS EC2 Image Builder cleans up afterwards
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#post-build-cleanup
# Clean up for ssh files
SSH_FILES=(
  "/etc/ssh/ssh_host_rsa_key"
  "/etc/ssh/ssh_host_rsa_key.pub"
  "/etc/ssh/ssh_host_ecdsa_key"
  "/etc/ssh/ssh_host_ecdsa_key.pub"
  "/etc/ssh/ssh_host_ed25519_key"
  "/etc/ssh/ssh_host_ed25519_key.pub"
  "/root/.ssh/authorized_keys"
)
if [[ -f {{workingDirectory}}/skip_cleanup_ssh_files ]]; then
  echo "Skipping cleanup of ssh files"
else
  echo "Cleaning up ssh files"
  cleanup "${SSH_FILES[@]}"
  USERS=$(ls /home/)
  for user in $USERS; do
    echo Deleting /home/"$user"/.ssh/authorized_keys;
    sudo find /home/"$user"/.ssh/authorized_keys -type f -exec shred -zuf {} \;
  done
  for user in $USERS; do
    if [[ -f /home/"$user"/.ssh/authorized_keys ]]; then
      echo Failed to delete /home/"$user"/.ssh/authorized_keys;
      exit 1
    fi;
  done;
fi;
You can skip individual sections of the cleanup script.
https://docs.aws.amazon.com/imagebuilder/latest/userguide/security-best-practices.html#override-linux-cleanup-script
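If keeping the authorized_keys files in the AMI is acceptable for your case, the check in the script above suggests adding a build step that creates the marker file it looks for. A rough sketch, assuming {{workingDirectory}} resolves to the same build working directory in your component as in the cleanup script:

- name: SkipSshCleanup
  action: ExecuteBash
  inputs:
    commands:
      # Create the marker file the post-build cleanup script checks for,
      # so authorized_keys files under /home are left in place.
      - touch {{workingDirectory}}/skip_cleanup_ssh_files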

CodeBuild YML formatting and syntax

I've got a CodeBuild project up and running on an Ubuntu server to clone RDS database servers from snapshots. A lot of it is working as expected, but when I try to include the following line in the buildspec.yml, the job falls over and it doesn't like the command.
I'm guessing the job doesn't like the formatting, but I'm a bit stumped as to where to go with it:
- while [ $(aws rds describe-db-clusters --db-cluster-identifier mysql-dev-20201009|grep -c '"Status": "available"') -eq 0 ]; do echo "sleep 60s"; sleep 60; done
Here's the full buildspec file:
Version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
  pre_build:
    commands:
      - pip install --upgrade pip
      - pip3 install awscli --upgrade --user
      - export SOURCEDBENV=mysql-dev
      - export DATE=`date +%Y%m%d`
      - export TARGETDBENV=$SOURCEDBENV-$DATE
      - echo $TARGETDBENV
      - export PREARNSNAP=$(aws rds describe-db-cluster-snapshots --db-cluster-identifier $SOURCEDBENV --query="reverse(sort_by(DBClusterSnapshots, &SnapshotCreateTime))[0]|DBClusterSnapshotArn" )
      - export ARNSNAP=`echo $PREARNSNAP | tr -d '"'`
      - echo $ARNSNAP
      - aws rds restore-db-cluster-from-snapshot --snapshot-identifier $ARNSNAP --db-cluster-identifier $TARGETDBENV --engine aurora-mysql
      - aws rds create-db-instance --db-instance-identifier $TARGETDBENV --db-instance-class db.t3.medium --db-subnet-group-name db_subnet_grp_2019 --engine aurora-mysql --db-cluster-identifier $TARGETDBENV
      - while [ $(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBNAME | grep -c available) -eq 0 ]; do echo "sleep 60s"; sleep 60; done
      - echo "Temp db ready"
      - export ENDPOINT=$(aws rds describe-db-cluster-endpoints --db-cluster-identifier $DBIDENTIFIER| grep "\"Endpoint\"" | grep -v "\-ro\-" | awk -F '\"' '{print $4}')
      - echo $ENDPOINT
  build:
    commands:
      - echo Build started on `date`
      - echo proceed db connection to $ENDPOINT
      - echo proceed db migrate update, DDL proceed here
      - echo proceed application test, CRUD test run here
  post_build:
    commands:
      - echo Build completed on `date`
      - echo $DBNAME
Perhaps you can take advantage of the wait command in the AWS CLI?
Rather than using the while loop, simply wait for the DB instance to be available:
aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
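As a rough sketch, the polling line in the pre_build commands above could then be replaced with something like:

      - aws rds wait db-instance-available --filters Name=db-cluster-id,Values=$TARGETDBENV
      - echo "Temp db ready"

Note that the CLI waiter polls every 30 seconds and exits with code 255 after 60 failed checks, so a very slow restore may still need a retry around it.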

aws cloudformation userdata: how to use local variable in script

I'm writing a CloudFormation template that includes an EC2 instance. In the UserData block, I want to create a file with some content. In the file, I'm initializing a local variable MY_MESSAGE, but after the template is deployed the variable does not show up in the file.
Original template:
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: ami-03368e982f317ae48
    InstanceType: t2.micro
    KeyName: ec2
    UserData:
      !Base64 |
        #!/bin/bash
        cat <<EOF > /etc/aws-kinesis/start.sh
        #!/bin/sh
        MY_MESSAGE="Hello World"
        echo $MY_MESSAGE
Output file on the EC2 instance:
#!/bin/sh
MY_MESSAGE="Hello World"
echo
As you can see, the variable MY_MESSAGE is missing after echo.
You can put EOF in quotes: "EOF". With an unquoted delimiter, the outer shell expands $MY_MESSAGE while it writes the file (and the variable is empty at that point); quoting the delimiter makes the heredoc contents literal:
UserData:
  !Base64 |
    #!/bin/bash
    cat <<"EOF" > /etc/aws-kinesis/start.sh
    #!/bin/sh
    MY_MESSAGE="Hello World"
    echo $MY_MESSAGE
    EOF
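If you want most of the file to keep being expanded at write time and only some variables to stay literal, an alternative sketch is to leave the delimiter unquoted and escape just those dollar signs:

cat <<EOF > /etc/aws-kinesis/start.sh
#!/bin/sh
MY_MESSAGE="Hello World"
echo \$MY_MESSAGE
EOF

Here \$MY_MESSAGE is written to the file as $MY_MESSAGE, while any unescaped variables in the heredoc would still be expanded by the outer script.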

IAM based ssh to EC2 instance using CloudFormation template

I am using an AWS CloudFormation template for IAM role-based access to an EC2 instance.
I am getting a permission denied error when running the template, and I am not able to access the EC2 machine with just an IAM username (without a .pem file).
Instance:
  Type: 'AWS::EC2::Instance'
  Metadata:
    'AWS::CloudFormation::Init':
      config:
        files:
          /opt/authorized_keys_command.sh:
            content: >
              #!/bin/bash -e
              if [ -z "$1" ]; then
              exit 1
              fi
              SaveUserName="$1"
              SaveUserName=${SaveUserName//"+"/".plus."}
              SaveUserName=${SaveUserName//"="/".equal."}
              SaveUserName=${SaveUserName//","/".comma."}
              SaveUserName=${SaveUserName//"#"/".at."}
              aws iam list-ssh-public-keys --user-name "$SaveUserName" --query
              "SSHPublicKeys[?Status == 'Active'].[SSHPublicKeyId]" --output
              text | while read KeyId; do
              aws iam get-ssh-public-key --user-name "$SaveUserName" --ssh-public-key-id "$KeyId" --encoding SSH --query "SSHPublicKey.SSHPublicKeyBody" --output text
              done
            mode: '000755'
            owner: root
            group: root
          /opt/import_users.sh:
            content: >
              #!/bin/bash
              aws iam list-users --query "Users[].[UserName]" --output text |
              while read User; do
              SaveUserName="$User"
              SaveUserName=${SaveUserName//"+"/".plus."}
              SaveUserName=${SaveUserName//"="/".equal."}
              SaveUserName=${SaveUserName//","/".comma."}
              SaveUserName=${SaveUserName//"#"/".at."}
              if id -u "$SaveUserName" >/dev/null 2>&1; then
              echo "$SaveUserName exists"
              else
              # sudo will read each file in /etc/sudoers.d, skipping file names that end in '~' or contain a '.' character, to avoid causing problems with package manager or editor temporary/backup files.
              SaveUserFileName=$(echo "$SaveUserName" | tr "." " ")
              /usr/sbin/adduser "$SaveUserName"
              echo "$SaveUserName ALL=(ALL) NOPASSWD:ALL" > "/etc/sudoers.d/$SaveUserFileName"
              fi
              done
            mode: '000755'
            owner: root
            group: root
          /etc/cron.d/import_users:
            content: |
              */10 * * * * root /opt/import_users.sh
            mode: '000644'
            owner: root
            group: root
          /etc/cfn/cfn-hup.conf:
            content: !Sub |
              [main]
              stack=${AWS::StackId}
              region=${AWS::Region}
              interval=1
            mode: '000400'
            owner: root
            group: root
          /etc/cfn/hooks.d/cfn-auto-reloader.conf:
            content: !Sub >
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.Instance.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init --verbose
              --stack=${AWS::StackName} --region=${AWS::Region}
              --resource=Instance
              runas=root
        commands:
          a_configure_sshd_command:
            command: >-
              sed -i 's:#AuthorizedKeysCommand none:AuthorizedKeysCommand
              /opt/authorized_keys_command.sh:g' /etc/ssh/sshd_config
          b_configure_sshd_commanduser:
            command: >-
              sed -i 's:#AuthorizedKeysCommandUser
              nobody:AuthorizedKeysCommandUser nobody:g' /etc/ssh/sshd_config
          c_import_users:
            command: ./import_users.sh
            cwd: /opt
        services:
          sysvinit:
            cfn-hup:
              enabled: true
              ensureRunning: true
              files:
                - /etc/cfn/cfn-hup.conf
                - /etc/cfn/hooks.d/cfn-auto-reloader.conf
            sshd:
              enabled: true
              ensureRunning: true
              commands:
                - a_configure_sshd_command
                - b_configure_sshd_commanduser
    'AWS::CloudFormation::Designer':
      id: 85ddeee0-0623-4f50-8872-1872897c812f
  Properties:
    ImageId: !FindInMap
      - RegionMap
      - !Ref 'AWS::Region'
      - AMI
    IamInstanceProfile: !Ref InstanceProfile
    InstanceType: t2.micro
    UserData:
      'Fn::Base64': !Sub >
        #!/bin/bash -x
        /opt/aws/bin/cfn-init --verbose --stack=${AWS::StackName}
        --region=${AWS::Region} --resource=Instance
        /opt/aws/bin/cfn-signal --exit-code=$? --stack=${AWS::StackName}
        --region=${AWS::Region} --resource=Instance
This User Data script will configure a Linux instance to use password authentication.
While the password here is hard-coded, you could obtain it in other ways and set it to the appropriate value.
#!/bin/bash
echo 'secret-password' | passwd ec2-user --stdin
sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
systemctl restart sshd.service
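For example, one way to avoid hard-coding the password (a sketch: the parameter name /demo/ec2-user-password is made up, and the instance role would need permission to read it) is to pull it from SSM Parameter Store at boot:

#!/bin/bash
# Hypothetical SecureString parameter holding the ec2-user password
PASSWORD=$(aws ssm get-parameter --name /demo/ec2-user-password --with-decryption --query Parameter.Value --output text)
echo "$PASSWORD" | passwd ec2-user --stdin
sed -i 's|[#]*PasswordAuthentication no|PasswordAuthentication yes|g' /etc/ssh/sshd_config
systemctl restart sshd.service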

Userdata script not executed on AWS CloudFormation Template

I am trying to create a CloudFormation stack which has a UserData script to install Java, Tomcat, httpd, and a Java application on launch of an EC2 instance.
However, the stack gets created successfully with all the resources, but when I connect to the EC2 instance to check the configuration of the above applications, I don't find any of them. My use case is to spin up an instance with all the above applications/software installed automatically.
UserData:
  Fn::Base64:
    Fn::Join:
      - ' '
      - - '#!/bin/bash -xe\n'
        - 'sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz\n'
        - 'date > /home/ec2-user/starttime\n'
        - 'sudo yum update -y aws-cfn-bootstrap\n'
        # Initialize CloudFormation bits\n
        - ' '
        - '/opt/aws/bin/cfn-init -v\n'
        - ' --stack\n'
        - '!Ref AWS::StackName\n'
        - ' --resource LaunchConfig\n'
        - 'ACCESS_KEY=${HostKeys}&SECRET_KEY=${HostKeys.SecretAccessKey}\n'
        # Start servers\n
        - 'service tomcat8 start\n'
        - '/etc/init.d/httpd start\n'
        - 'date > /home/ec2-user/stoptime\n'
Metadata:
  AWS::CloudFormation::Init:
    config:
      packages:
        yum:
          - java-1.8.0-openjdk.x86_64: []
          - tomcat8: []
          - httpd: []
      services:
        sysvinit:
          httpd:
            enabled: 'true'
            ensureRunning: 'true'
      files:
        - /usr/share/tomcat8/webapps/sample.war:
          - source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
          - mode: 000500
          - owner: tomcat
          - group: tomcat
CfnUser:
  Type: AWS::IAM::User
  Properties:
    Path: '/'
    Policies:
      - PolicyName: Admin
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action: '*'
              Resource: '*'
HostKeys:
  Type: AWS::IAM::AccessKey
  Properties:
    UserName: !Ref CfnUser
The problem is in the way you have formatted your UserData. I would suggest that you launch the EC2 instance and manually test the script first. It has a number of problems in it.
Try formatting your UserData like this:
UserData:
  Fn::Base64:
    !Sub |
      #!/bin/bash -xe
      # FIXME. This won't work either.
      # sudo yum update && install pip && pip install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz
      date > /home/ec2-user/starttime
      sudo yum update -y aws-cfn-bootstrap
      # Initialize CloudFormation bits
      /opt/aws/bin/cfn-init -v \
        --stack ${AWS::StackName} \
        --resource LaunchConfig
      # FIXME. Not sure why these are here.
      # ACCESS_KEY=${HostKeys}
      # SECRET_KEY=${HostKeys.SecretAccessKey}
      # Start servers
      service tomcat8 start
      /etc/init.d/httpd start
      date > /home/ec2-user/stoptime
Things to note:
You can't interpolate here using !Ref notation. Notice I changed it to ${AWS::StackName} and notice the whole block is inside !Sub.
As my comments indicate, the yum update line has invalid commands in it.
As noted in the comments, it is a bad practice to inject access keys. Also, the keys don't seem to be required for anything in this script.
Note also that the files section is specified incorrectly in the Metadata, as arrays instead of hash keys.
It should be:
files:
  /usr/share/tomcat8/webapps/sample.war:
    source: https://s3-eu-west-1.amazonaws.com/testbucket/sample.war
    mode: '000500'
    owner: tomcat
    group: tomcat
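When a UserData script appears not to run at all, it is also worth checking the logs on the instance itself; on Amazon Linux the output usually ends up in these files:

# On the instance, after boot
sudo tail -n 50 /var/log/cloud-init-output.log   # UserData script output
sudo tail -n 50 /var/log/cfn-init.log            # cfn-init / Metadata processing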