I have the following AWS CloudFormation template where I need to perform string manipulation inside the EC2 launch template section (URL is a CloudFormation template parameter):
UserData:
  Fn::Base64: !Join
    - ''
    - - !Sub |
        #!/bin/bash
        set -o xtrace
        .
        .
        git clone ${URL}
      - |
        CFNREPO=${URL/aws.*/aws-cfn.git}
        git clone ${CFNREPO}
      - !Sub |
        GITHUBURL: ${githubURL}
        .
        .
Expected Output:
#!/bin/bash
set -o xtrace
.
.
git clone https://www.example.com/aws-s3.git
CFNREPO=https://www.example.com/aws-cfn.git
git clone https://www.example.com/aws-cfn.git
GITHUBURL: https://github.com/other-repo-url
.
.
Actual Output:
#!/bin/bash
set -o xtrace
.
.
git clone https://www.example.com/aws-s3.git
CFNREPO=${URL/aws.*/aws-cfn.git}
git clone ${CFNREPO}
GITHUBURL: https://github.com/other-repo-url
.
.
I referred to this link, but the solution is not working as expected.
Your code is correct, at least from a YAML perspective. This part:
CFNREPO=${URL/aws.*/aws-cfn.git}
git clone ${CFNREPO}
will be resolved by the shell when the script runs on the instance (that block is outside !Sub on purpose, so CloudFormation leaves it as literal text in the rendered user data). This is not something that YAML or CloudFormation can do for you.
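As a minimal sketch of what the shell does at runtime (it assumes the value has already been assigned to a shell variable named URL, for example via an extra URL=${URL} line inside the !Sub block; note also that ${var/pattern/replacement} uses glob patterns, so aws* is what bash actually matches, rather than the regex-style aws.*):
#!/bin/bash
# Hypothetical value, standing in for whatever CloudFormation injected via !Sub
URL=https://www.example.com/aws-s3.git
# Bash pattern substitution (glob, not regex), evaluated on the instance at boot
CFNREPO=${URL/aws*/aws-cfn.git}
echo "$CFNREPO"          # prints https://www.example.com/aws-cfn.git
# git clone "$CFNREPO"   # would clone the rewritten URL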
Related
I am trying to implement parallel testing in AWS CodeBuild. I created a buildspec.yml file based on this sample project:
https://github.com/cypress-io/cypress-realworld-app/blob/develop/buildspec.yml
My problem is that the environment variables I use in the Cypress command are coming through empty.
- echo $CY_GROUP_SPEC
- CY_GROUP=$(echo $CY_GROUP_SPEC | cut -d'|' -f1)
- CY_BROWSER=$(echo $CY_GROUP_SPEC | cut -d'|' -f2)
- CY_SPEC=$(echo $CY_GROUP_SPEC | cut -d'|' -f3)
- CY_CONFIG=$(echo $CY_GROUP_SPEC | cut -d'|' -f4)
And then the cypress code build fails with this error:
Opening Cypress...
Cypress encountered an error while parsing the argument: --spec
You passed: true
The error was: spec must be a string or comma-separated list
I use this command to run cypress:
- NO_COLOR=1 ./node_modules/.bin/cypress run --browser $CY_BROWSER --spec "$CY_SPEC" --config "$CY_CONFIG" --headless. --record --key $CYPRESS_KEY --parallel --ci-build-id $CODEBUILD_INITIATOR --group "$CY_GROUP"
I defined these env variables like this on the top of the file:
batch:
  build-matrix:
    dynamic:
      env:
        image:
          - ${AWS_ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/cypress:latest
        variables:
          CY_GROUP_SPEC:
            - "UI - Chrome|chrome|cypress/e2e/account/*"
            - "UI - Chrome|chrome|cypress/e2e/auth/*"
            - "UI - Chrome|chrome|cypress/e2e/mastering/*"
            - "UI - Chrome|chrome|cypress/e2e/pages/**/*"
            - "UI - Chrome|chrome|cypress/e2e/user-flows/**/*"
          WORKERS:
            - 1
            - 2
            - 3
            - 4
            - 5
How can I fix this problem?
Thanks
The error definitely tells you that the command is wrong. Check it carefully.
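As a quick sanity check (a sketch with one hard-coded value from the matrix, not a fix by itself), you can run the same splitting locally and confirm every field is non-empty before handing it to Cypress:
#!/bin/bash
# Hypothetical value matching one entry of CY_GROUP_SPEC in the build matrix
CY_GROUP_SPEC="UI - Chrome|chrome|cypress/e2e/account/*"
CY_GROUP=$(echo "$CY_GROUP_SPEC" | cut -d'|' -f1)
CY_BROWSER=$(echo "$CY_GROUP_SPEC" | cut -d'|' -f2)
CY_SPEC=$(echo "$CY_GROUP_SPEC" | cut -d'|' -f3)
echo "group='$CY_GROUP' browser='$CY_BROWSER' spec='$CY_SPEC'"
# If any field prints empty here, the problem is the variable, not Cypress.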
It seems that in AWS CodeBuild, variables are not propagated between commands in the Windows 2019 environment.
With this buildspec.yml
version: 0.2
env:
  variables:
    MY_VAR_0: $(git log -n 1 --date=short --pretty=format:%cd_%h)
phases:
  build:
    commands:
      - $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
      - Get-ChildItem Env:MY_VAR_*
      # build commands here
artifacts:
  name: $MY_VAR_0
I get in the logs:
[Container] 2020/12/14 11:41:27 Entering phase BUILD
[Container] 2020/12/14 11:41:27 Running command $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
[Container] 2020/12/14 11:41:27 Running command Get-ChildItem Env:MY_VAR_*
Name Value
---- -----
MY_VAR_0 $(git log -n 1 --date=short --pretty=format:%...
[Container] 2020/12/14 11:41:28 Phase complete: BUILD State: SUCCEEDED
The problems here are:
MY_VAR_0 is set to the literal string $(git log ... and not to the output of the command.
MY_VAR_1 is not propagated to the following commands in phases.build.commands.
and of course my artifacts end up in the wrong place.
Up to now, the only way I have found to work around this is:
version: 0.2
phases:
  build:
    commands:
      - |
        $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
        $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
        Get-ChildItem Env:MY_VAR_*
        # first build command here
      - |
        $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
        $Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
        Get-ChildItem Env:MY_VAR_*
        # second build command here
artifacts:
  name: $(git log -n 1 --date=short --pretty=format:%cd_%h)
with the following log:
[Container] 2020/12/14 12:25:18 Entering phase BUILD
[Container] 2020/12/14 12:25:18 Running command $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
$Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
Get-ChildItem Env:MY_VAR_*
# first build command here
Name Value
---- -----
MY_VAR_1 2020-12-14_eccfb77
MY_VAR_0 2020-12-14_eccfb77
[Container] 2020/12/14 12:25:19 Running command $Env:MY_VAR_0 = & git log -n 1 --date=short --pretty=format:%cd_%h
$Env:MY_VAR_1 = & git log -n 1 --date=short --pretty=format:%cd_%h
Get-ChildItem Env:MY_VAR_*
# second build command here
Name Value
---- -----
MY_VAR_1 2020-12-14_eccfb77
MY_VAR_0 2020-12-14_eccfb77
[Container] 2020/12/14 12:25:20 Phase complete: BUILD State: SUCCEEDED
What I do not like about this approach is that I have to repeat the code for computing the MY_VAR_* values at the beginning of each build command. (And no, I do not consider it feasible to have a single, huge multiline build command.) Moreover, the same code has to be repeated in artifacts.name.
Questions
How do I propagate environment variables computed at build-time between different phases.*.commands?
Why is $(...) expanded in artifacts.name but not in env.variables.MY_VAR_0?
Answer to #1:
It looks like setting environment variables via the Env: drive silently fails. However, using the local variable syntax $foo = "bar" seems to work across commands as well as phases.
For example, in the following:
version: 0.2
env:
  shell: powershell.exe
phases:
  pre_build:
    commands:
      - $foo = "bar"
      - echo $foo
  build:
    commands:
      - echo $foo
both echo commands print bar.
Answer to #2:
artifacts.name is always evaluated using the (Unix) shell syntax, whereas when setting a variable in commands you are in either cmd.exe or powershell.exe, depending on what env.shell is set to in the buildspec. Reference
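For context, this is the ordinary Unix-shell command substitution that artifacts.name gets to use (a one-line sketch; env.variables, by contrast, is taken literally, as the MY_VAR_0 output above shows):
# In a Unix shell, $(...) runs the command and substitutes its output in place
stamp=$(git log -n 1 --date=short --pretty=format:%cd_%h)
echo "$stamp"    # e.g. 2020-12-14_eccfb77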
I am trying to get environment variables into an EC2 instance (trying to run a Django app on Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type, ami-0ff8a91507f77f867). How do you get them into the newest version of Amazon's Linux, or get logging so the process can be traced?
user-data text (modified from here):
#!/bin/bash
# trying to get a file made
touch /tmp/testfile.txt
echo 'This and that' > /tmp/testfile.txt
# trying to log
echo 'Woot!' > /home/ec2-user/user-script-output.txt
# Trying to get the output logged to see what is going wrong
exec > >(tee /var/log/user-data.log | logger -t user-data) 2>&1
# trying to log
echo "XXXXXXXXXX STARTING USER DATA SCRIPT XXXXXXXXXXXXXX"
# trying to store the ENVIRONMENT VARIABLES
PARAMETER_PATH='/'
REGION='us-east-1'
# Functions
AWS="/usr/local/bin/aws"
get_parameter_store_tags() {
  echo $($AWS ssm get-parameters-by-path --with-decryption --path ${PARAMETER_PATH} --region ${REGION})
}
params_to_env () {
  params=$1
  # If .Tags does not exist we assume an SSM Parameters object.
  SELECTOR="Name"
  for key in $(echo $params | /usr/bin/jq -r ".[][].${SELECTOR}"); do
    value=$(echo $params | /usr/bin/jq -r ".[][] | select(.${SELECTOR}==\"$key\") | .Value")
    key=$(echo "${key##*/}" | /usr/bin/tr ':' '_' | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
    export $key="$value"
    echo "$key=$value"
  done
}
# Get TAGS
if [ -z "$PARAMETER_PATH" ]
then
  echo "Please provide a parameter store path. -p option"
  exit 1
fi
TAGS=$(get_parameter_store_tags ${PARAMETER_PATH} ${REGION})
echo "Tags fetched via ssm from ${PARAMETER_PATH} ${REGION}"
echo "Adding new variables..."
params_to_env "$TAGS"
Notes -
What I think I know but am unsure of
the user-data script is only run when the instance is created, not when I stop and then start it, as mentioned here (although it also says [I think this is outdated] that the output is logged to /var/log/cloud-init-output.log)
I may not be starting the instance correctly
I don't know where to store the bash script so that it can be executed
What I have verified
the user-data text is on the instance: after SSH-ing in, curl http://169.254.169.254/latest/user-data shows the current text (#!/bin/bash …)
What I've tried
editing rc.local directly to export AWS_ACCESS_KEY_ID='JEFEJEFEJEFEJEFE' … and the like
putting them in the AWS Parameter Store (and can see them via the correct call, I just can't trace getting them into the EC2 instance without logs or confirming if the user-data is getting run)
putting ENV variables in Tags and importing them as mentioned here:
tried outputting the logs to other files as suggested here (not seeing any log files on the instance over SSH or in the system log; see the quick check after this list)
viewing the System Log in the AWS console to see any errors/logs, by selecting the instance -> 'Actions' -> 'Instance Settings' -> 'Get System Log' (not seeing any commands run or log statements, only one unrelated occurrence of the word user)
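Regarding the logging mentioned above, a quick way to confirm whether the user-data script ran at all is to check the usual cloud-init locations (a sketch; these are the default paths on Amazon Linux and may differ on other images):
# Run on the instance over SSH
sudo cat /var/log/cloud-init-output.log              # stdout/stderr of the user-data script
sudo cat /var/log/cloud-init.log                     # cloud-init's own log
curl -s http://169.254.169.254/latest/user-data      # the raw user-data the instance received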
In my AWS CloudFormation cfn configset I have a command to set an environment key to the name of the user group Apache belongs to, as it might be apache or www-data depending on the distro.
Something like this:
Metadata:
  AWS::CloudFormation::Init:
    configSets:
      joomla:
        - "set_permissions"
        - "and_some_more..."
    configure_cfn:
      files:
        /etc/cfn/hooks.d/cfn-auto-reloader.conf:
          content: !Sub |
            [cfn-auto-reloader-hook]
            triggers=post.update
            path=Resources.EC2.Metadata.AWS::CloudFormation::Init
            action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2 --configsets joomla --region ${AWS::Region}
          mode: "000400"
          owner: root
          group: root
    .....
    set_permissions:
      commands:
        01_01_get_WebServerGroup:
          env:
            # webserver group might be apache or www-data depending on the distro
            WebServerGp:
              command: "ps -ef | egrep '(httpd|apache2|apache)' | grep -v `whoami` | grep -v root | head -n1 | awk '{print $1}'"
However, when I launch this stack, the configSets process halts at this point and I get an error in cfn_init.log that looks like this:
File "/usr/lib/python2.7/dist-packages/cfnbootstrap/command_tool.py", line 80, in apply
  raise ToolError(u"%s does not specify the 'command' attribute, which is required" % name)
ToolError: 01_01_get_WebServerGroup does not specify the 'command' attribute, which is required
Is this the preferred method to catch and use a grep result in a configset command? Is there a better way? What can I do to address the error thrown in the cfn_init.log?
OK, I guess I can create parameter and mapping elements to capture the distro type at launch and then set the webserver group accordingly, but I am really trying to understand how to set the env: key to a response from the CLI.
The problem in your code is the WebServerGp line.
The command key must be at the same level as env, under the command name, which in your case is 01_01_get_WebServerGroup. So it has to be like this:
commands:
  01_01_get_WebServerGroup:
    env: ..
    command: ..
If you want to use the result of grep, you can put it in a variable and use it later.
You can also specify more than one command under that command key, using \n to separate the commands.
Please check the code below.
command: "result=(ps ef | grep ...)\n echo $result\n ..."
If you have a really long command, you can use Fn::Join to build the value of command.
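For what it's worth, the detection pipeline itself can be tested in a shell before wiring it into cfn-init (a sketch; the pipeline is the one from the question, and it prints the user the web server process runs as):
#!/bin/bash
# Find the first httpd/apache process not owned by root or the current user
# and print its owner (apache, www-data, ...), as in the question's command.
WEBSERVER_GP=$(ps -ef | egrep '(httpd|apache2|apache)' | grep -v "$(whoami)" | grep -v root | head -n1 | awk '{print $1}')
echo "Web server group: ${WEBSERVER_GP}"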
The docker inspect command can be very useful for getting the labels on a Docker image:
# -*- Dockerfile -*-
FROM busybox
LABEL foo="bar"
LABEL com.wherever.foo="bang"
For simple label names, the inspect command has a --format option (which uses Go templates) that works nicely.
$ docker build -t foo .
$ docker inspect -f '{{ .Config.Labels.foo }}' foo
bar
But how do I access labels that have a dot in their name?
$ docker inspect -f '{{ .Config.Labels.com.wherever.foo }}' foo
<no value>
I'm writing this in a bash script, where I'd like to avoid re-parsing the JSON output from docker inspect, if possible.
The index function is what I was looking for. It can look up arbitrary strings in the map.
$ docker inspect -f '{{ index .Config.Labels "com.wherever.foo" }}' foo
bang
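In a bash script this can be captured straight into a variable without re-parsing any JSON, for example (a small sketch using the image and label built above):
#!/bin/bash
# Read a dotted label from the image via the Go template index function
label=$(docker inspect -f '{{ index .Config.Labels "com.wherever.foo" }}' foo)
echo "$label"    # bang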