How to run for loop in buildspec file of AWS CodeBuild? - amazon-web-services

I am trying to run a for loop to traverse multiple folders in the cloned code, using the following method:
commands:
  - folders=`ls`
  - for value in ${folders}
  - do
  - some_code_here
  - done
I've also tried different variations, like:
- for value in ${folders}; do
- some_code_here
- done
But none of them work.

Write the for loop as a single command. CodeBuild runs each entry under commands as a separate shell command, but YAML folds continuation lines into a single entry, so you can still keep the loop readable:
- folders=`ls`
- for value in $folders;
  do
    echo $value;
  done
- echo "run the next command"

I think you can use a YAML multiline string:
- |
  for value in ${folders}; do
    some_code_here
  done
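As a sanity check outside CodeBuild, the same loop can be run as plain shell; the folder names below are made up for illustration:

```shell
# Build a scratch workspace and traverse its folders exactly as the
# buildspec loop does (the "some_code_here" step is just string-collecting).
workdir=$(mktemp -d)
mkdir "$workdir/app" "$workdir/lib"
cd "$workdir"
folders=`ls`
visited=""
for value in ${folders}; do
  visited="${visited}${value} "
done
echo "$visited"
```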

Related

GITHUB ACTIONS replace character in string

I'm trying to replace a character in a variable within a GitHub Actions step:
- name: Set Up DB Name
  run: |
    DB_NAME="${GITHUB_REF_SLUG/-/_}"
    echo $DB_NAME
I'm getting a bad request error
What am I doing wrong?
I successfully made the character replacement work (with GITHUB_REPOSITORY) using this implementation:
job1:
  runs-on: ubuntu-latest
  steps:
    - name: character-replacement-test
      run: |
        REPO=$GITHUB_REPOSITORY
        DB_NAME="${REPO//-/_}"
        echo $DB_NAME
I couldn't get to the same result in two lines (but someone more experienced with bash might help us get there as well).
So in your case it should work if you use this code and replace GITHUB_REPOSITORY with GITHUB_REF_SLUG in your workflow.
I used this post as reference.
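For reference, the replacement itself is plain bash parameter expansion and can be tested locally; the repository name below is a placeholder for the real $GITHUB_REPOSITORY:

```shell
# Two-step replacement as in the workflow: assign to a variable first,
# then use ${var//-/_} to replace every hyphen with an underscore.
REPO="my-org/my-repo-name"   # stands in for $GITHUB_REPOSITORY
DB_NAME="${REPO//-/_}"
echo "$DB_NAME"
```

Note that `${var//pat/repl}` (replace all matches) is a bash extension, so the run step must use bash, not plain sh.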

How to run Gitlab CI only for specific branches and tags?

I would like to set up my project_dev CI job only for 3 branches and for specific tags like dev_1.0, dev_1.1, dev_1.2.
How can I achieve that?
This is what I have now:
project_dev:
  stage: dev
  script:
    - export
    - bundle exec pod repo update
    - bundle exec pod install
    - bundle exec fastlane crashlytics_project_dev
  after_script:
    - rm -rf ~/Library/Developer/Xcode/Archives || true
  when: manual
  only:
    - develop
    - release
    - master
    # here I need to add a condition to run this stage additionally only for specific tags. How can I set up a regexp here?
  tags:
    - iOS
When I write it like:
only:
  - branches
  - /^dev_[0-9.]*$/
it also runs the CI for tags like dev1.2, but it should not. Why? Is there a regexp for tags at all?
This sounds like a regular expression question. I created a test project on gitlab.com to check the regular expression.
File: .gitlab-ci.yml
project_dev:
  # Irrelevant keys skipped
  script:
    - echo "Hello World"
  only:
    - develop
    - release
    - master
    - /^dev_[0-9]+(?:\.[0-9]+)+$/ # regular expression (note the escaped dot)
I pushed all of the tags you mentioned to test this regular expression.
As you can see, it matches tags like dev_1.0 and dev_1.1, but the job project_dev is not triggered by the tag dev1.2. You can check the result on the pipeline pages.
Instead of using only/except you can use rules which are more powerful.
Rules support regex pattern matching.
Your rule for accepting only specific branches/tags like dev_1.0, dev_1.1, dev_1.2 could look like:
rules:
  - if: '$CI_COMMIT_BRANCH =~ /^dev_[0-9]+\.[0-9]+$/ || $CI_COMMIT_TAG =~ /^dev_[0-9]+\.[0-9]+$/'
Predefined environment variables like CI_COMMIT_BRANCH and CI_COMMIT_TAG are described here.
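You can sanity-check the version pattern locally with grep -E, used here as a rough stand-in for GitLab's regex engine; the tag names are just examples:

```shell
# Verify which tag names the pattern accepts: dev_1.0-style names match,
# names missing the underscore or the dotted version part do not.
pattern='^dev_[0-9]+\.[0-9]+$'
for tag in dev_1.0 dev_1.2 dev1.2 dev_12; do
  if echo "$tag" | grep -Eq "$pattern"; then
    echo "$tag matches"
  else
    echo "$tag does not match"
  fi
done
```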
If you're on gitlab.com, you could try a combination of except and only, something like:
only:
  - tags
  - branches
except:
  - /^(?!(branch1|branch2|branch3|dev_[0-9.]*$)$).*$/
The idea is to allow only branches and tags to trigger the job, while excluding everything that is not branch1, branch2, branch3, or a dev_ branch/tag.
And here is the official documentation for this:
GitLab CI/CD pipeline configuration reference
There you find the section for only/except with the supported regex syntax, although it states that:
only and except are not being actively developed. rules is the preferred keyword to control when to add jobs to pipelines.

Environment variables in Google Cloud Build

We want to migrate from Bitbucket Pipelines to Google Cloud Build to test, build and push Docker images.
How can we use environment variables without a CryptoKey? For example:
- printf "https://registry.npmjs.org/:_authToken=${NPM_TOKEN}\nregistry=https://registry.npmjs.org" > ~/.npmrc
To use environment variables in the args portion of your build steps, you need:
- "a shell to resolve environment variables with $$" (as mentioned in the example code here), and
- careful usage of quotes (use single quotes).
See below the break for a more detailed explanation of these two points.
While the Using encrypted resources docs that David Bendory also linked to (and which you probably based your assumption on) show how to do this using an encrypted environment variable specified via secretEnv, this is not a requirement and it works with normal environment variables too.
In your specific case you'll need to modify your build step to look something like this:
# you didn't show us which builder you're using - this is just one example of
# how you can get a shell using one of the supported builder images
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: ['-c', 'printf "https://registry.npmjs.org/:_authToken=%s\nregistry=https://registry.npmjs.org" $$NPM_TOKEN > ~/.npmrc']
Note the usage of %s in the string to be formatted and how the environment variable is passed as an argument to printf. I'm not aware of a way that you can include an environment variable value directly in the format string.
Alternatively you could use echo as follows:
args: ['-c', 'echo "https://registry.npmjs.org/:_authToken=$${NPM_TOKEN}\nregistry=https://registry.npmjs.org" > ~/.npmrc']
Detailed explanation:
My first point at the top can actually be split in two:
1. you need a shell to resolve environment variables, and
2. you need to escape the $ character so that Cloud Build doesn't try to perform a substitution here.
If you don't do 2., your build will fail with an error like: Error merging substitutions and validating build: Error validating build: key in the template "NPM_TOKEN" is not a valid built-in substitution
You should read through the Substituting variable values docs and make sure that you understand how that works. Then you need to realise that you are not performing a substitution here, at least not a Cloud Build substitution. You're asking the shell to perform a substitution.
In that context, 2. is actually the only useful piece of information that you'll get from the Substituting variable values docs (that $$ evaluates to the literal character $).
My second point at the top may be obvious if you're used to working with the shell a lot. The reason for needing to use single quotes is well explained by these two questions. Basically: "You need to use single quotes to prevent interpolation happening in your calling shell."
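You can reproduce what the bash -c step ends up doing locally; the token below is a dummy value standing in for the real NPM_TOKEN:

```shell
# Format the token into the .npmrc contents via printf %s, exactly as
# the build step does once the shell has resolved the variable.
NPM_TOKEN="dummy-token"
npmrc=$(printf "https://registry.npmjs.org/:_authToken=%s\nregistry=https://registry.npmjs.org" "$NPM_TOKEN")
echo "$npmrc"
```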
That sounds like you want to use Encrypted Secrets: https://cloud.google.com/cloud-build/docs/securing-builds/use-encrypted-secrets-credentials

error in grep using a regex expression

I think I have uncovered an error in grep. If I run this grep statement against a db log on the command line it runs fine.
grep "Query Executed in [[:digit:]]\{5\}.\?" db.log
I get this result:
Query Executed in 19699.188 ms;"select distinct * from /xyztable.....
when I run it in a script
LONG_QUERY=`grep "Query Executed in [[:digit:]]\{5\}.\?" db.log`
the asterisk in the result is replaced with a list of all files in the current directory.
echo $LONG_QUERY
Result:
Query Executed in 19699.188 ms; "select distinct <list of files in current directory> from /xyztable.....
Has anyone seen this behavior?
This is not an error in grep. This is an error in your understanding of how scripts are interpreted.
If I write in a script:
echo *
I will get a list of filenames, because an unquoted, unescaped, asterisk is interpreted by the shell (not grep, but /bin/bash or /bin/sh or whatever shell you use) as a request to substitute filenames matching the pattern '*', which is to say all of them.
If I write in a script:
echo "*"
I will get a single '*', because it was in a quoted string.
If I write:
STAR="*"
echo $STAR
I will get filenames again, because I quoted the star while assigning it to a variable, but then when I substituted the variable into the command it became unquoted.
If I write:
STAR="*"
echo "$STAR"
I will get a single star, because double quotes allow variable interpolation but prevent filename expansion of the result.
You are using backquotes - that is, ` characters - around a command. That captures the output of the command into a variable.
I would suggest that if you are going to be echoing the results of the command, and little else, you should just redirect the results into a file. (After all, what are you going to do when your LONG_QUERY contains 10,000 lines of output because your log file got really full?)
Barring that, at the very least do echo "$LONG_QUERY" (in double quotes).
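The whole pitfall can be reproduced in a scratch directory; the file names below are made up:

```shell
# Unquoted expansion of a variable containing * triggers filename
# expansion before echo runs; quoting the expansion suppresses it.
workdir=$(mktemp -d)
cd "$workdir"
touch file_a file_b
STAR="*"
unquoted=$(echo $STAR)    # shell globs: becomes the file list
quoted=$(echo "$STAR")    # quoted: stays a literal *
echo "$unquoted"
echo "$quoted"
```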

Advanced pattern matching in Makefile

Is it possible to create a Makefile pattern rule with two or three varying patterns? I'm using GNU make.
In my current set-up, in simplified form, I'm using two Bash for loops to convert one set of files to another and to create the final result file. Example:
#!/bin/bash
XMIN=$1
XMAX=$2
YMIN=$3
YMAX=$4
z=$5
FINAL_LIST=
for y in `seq $YMIN $YMAX`; do
  SOURCE_LIST=
  echo Processing column $y
  for x in `seq $XMIN $XMAX`; do
    # Convert from file source/something_${x}_${y}_${z} to
    # target/something_else_${x}_${y}_${z}
    echo Processing X ${x} Y ${y} with Z ${z}
    # do_something
    SOURCE_LIST+="target/something_else_${x}_${y}_${z} "
  done
  # Create something for this line
  echo Processing ${SOURCE_LIST} target_line_${y}_${z}
  # process the line
  FINAL_LIST+="target_line_${y}_${z} "
done
# Finally, compose the final thing
echo Process the final result: ${FINAL_LIST} result_${z}
# process the final result
# We're done
I would like to do this more efficiently with a Makefile, as that would allow me to execute things in parallel, and it would also ensure that "line results" are regenerated only when something changes for that particular line.
I'm already using a Makefile to convert single datafiles to another format, with simple pattern matching. Make handles my base of >500k datafiles very well - it detects changed source files quickly and runs the conversion only for those.
The problem here is that I don't know, how to make Makefile patterns with more than one varying pattern. Following is an easy pattern:
%.target : %.source
	# do something
But I don't know, whether the following would be possible (as pseudocode):
<var_pat_Z>_<var_pat_Y>.target: <var_pat_Z>_<var_pat_Y>.source
# do something else
It is not necessary to implement this with a Makefile, but I would still need a way to detect changed source files and the capability to execute things in parallel. Currently I handle change detection in my bash scripts and run the scripts in parallel with GNU parallel, which is most likely not the optimal way.
If I understood your question correctly, you have a bunch of *.source files, and want a rule that turns each into a *.target file, while picking two sub-strings from whatever the * expands to.
Why not pick the stem in $* apart at the underscore? Here's a solution.
If you have these files
$ ls *.source
1_1.source
1_2.source
1_3.source
a_b.source
foo_bar.source
then running this GNUmakefile's default target
# all should depend on all targets for which a source exists.
all: $(shell echo *.source | sed 's/source/target/g')

%.target: %.source
	@z="$*" y="$*"; \
	z=$${z%%_*} y=$${y##*_}; \
	echo z=$$z y=$$y
will give you
$ gmake
z=1 y=1
z=1 y=2
z=1 y=3
z=a y=b
z=foo y=bar
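The stem-splitting idiom in that recipe is plain shell parameter expansion and can be tried directly in a shell (the Makefile's $$ becomes a single $ here; the stem value is an example):

```shell
# Split a pattern stem at the underscore: %% strips the longest
# matching suffix, ## strips the longest matching prefix.
stem="foo_bar"
z=${stem%%_*}   # part before the first underscore
y=${stem##*_}   # part after the last underscore
echo "z=$z y=$y"
```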