I have an env var INTEGRATION that is set to true or false.
I would like to set the values of variables based on whether INTEGRATION is set to true or false.
I have tried:
ifeq ($(INTEGRATION),false)
PRODUCTION_URL="production"
else
PRODUCTION_URL="integration"
endif
and also
if [ ${SNOOTY_INTEGRATION} == false ]; then \
echo "HIT THE TRUTH"; \
PRODUCTION_URL="https://docs.mongodb.com"; \
fi
after running
export INTEGRATION=true
in the terminal, I run make help, where help is:
help:
    echo "prod url ${PRODUCTION_URL}";
In the first case it prints prod url integration, and in the second case it prints just prod url, when the desired behavior is for it to print prod url production.
Running env INTEGRATION=true is wrong.
That doesn't set the environment variable in your current shell: it only sets it in the environment of the env command itself.
You can either use:
$ export INTEGRATION=true
$ make help
(set the variable in the shell, so all subsequent commands can see it until you un-export it again), or you can use:
$ env INTEGRATION=true make help
Which is essentially the same thing as (the more common and shorter):
$ INTEGRATION=true make help
(set the variable only for this invocation of make).
But you can't use:
$ env INTEGRATION=true
$ make help
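To see the difference directly in a shell (a quick demo, assuming a POSIX shell and that INTEGRATION starts out unset):
$ env INTEGRATION=true >/dev/null
$ echo "INTEGRATION=${INTEGRATION}"
INTEGRATION=
$ export INTEGRATION=true
$ echo "INTEGRATION=${INTEGRATION}"
INTEGRATION=true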
ETA (edited to add):
If that doesn't work, there must be something else about your environment you haven't explained; please provide a complete, self-contained example that shows the problem. For example, I don't know what the shell if-statement in your question is for or how it relates to the rest of the question (note that a variable assigned inside a recipe's shell only exists for that shell invocation: it cannot set a make variable). This works for me:
$ cat Makefile
ifeq ($(INTEGRATION),false)
PRODUCTION_URL="production"
else
PRODUCTION_URL="integration"
endif
help:
    echo "prod url ${PRODUCTION_URL}";
$ make help
echo "prod url "integration"";
prod url integration
$ export INTEGRATION=true
$ make help
echo "prod url "integration"";
prod url integration
$ export INTEGRATION=false
$ make help
echo "prod url "production"";
prod url production
Related
I am trying to automate the process of sending my temporary Amazon AWS keys as environment variables to a Docker image on Windows. I have a file, credentials.txt, that contains my AWS credentials (the three ids are always the same, but the string values change regularly). I am using the Windows command prompt.
Input: credentials.txt (includes 2 empty lines at the end):
[default]
aws_access_key_id = STR/+ing1
aws_secret_access_key = STR/+ing2
aws_session_token = STR/+ing3
Desired output:
I need to issue the following command in order to run a Docker image (substituting the strings with the actual strings):
docker run -e AWS_ACCESS_KEY_ID=STR/+ing1 -e AWS_SECRET_ACCESS_KEY=STR/+ing2 -e AWS_SESSION_TOKEN=STR/+ing3 my-aws-container
My idea is to try to use regex on credentials.txt to convert it to:
SET aws_access_key_id=STR/+ing1
SET aws_secret_access_key=STR/+ing2
SET aws_session_token=STR/+ing3
And then run:
docker run -e AWS_ACCESS_KEY_ID=%aws_access_key_id% -e AWS_SECRET_ACCESS_KEY=%aws_secret_access_key% -e AWS_SESSION_TOKEN=%aws_session_token% my-aws-container
Does anyone have any advice on how to achieve this?
You can parse your credentials.txt with a for /f loop to set the variables (tokens 1 and 3 give the name and the value, which effectively removes the spaces and the =):
for /f "tokens=1,3" %%a in ('type credentials.txt ^| find "="') do set "%%a=%%b"
and then run the last code line from your question:
docker run -e AWS_ACCESS_KEY_ID=%aws_access_key_id% -e AWS_SECRET_ACCESS_KEY=%aws_secret_access_key% -e AWS_SESSION_TOKEN=%aws_session_token% my-aws-container
Note: the values should not contain spaces or commas. Also, the doubled %%a/%%b syntax is for use inside a batch file; if you run the command directly at the prompt, use single %a/%b instead.
I've had a go in python that seems to work. Someone else may have a better answer.
I create the python file:
docker_run.py
import re
import os

# full path to the credentials file
myfile = 'C:/fullpath/credentials'

with open(myfile, 'r') as f:
    mystr = f.read()

# capture everything after each '=' up to the end of the line
# (raw string avoids an invalid-escape warning; strip() drops trailing whitespace)
vals = [v.strip() for v in re.findall(r'=[\s]*([^\n]+)', mystr)]
keys = ['AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_SESSION_TOKEN']

# build the ' -e KEY=value' pairs and assemble the docker command
environment_vars = ''.join([' -e ' + k + '=' + v for k, v in zip(keys, vals)])
cmd = 'docker run' + environment_vars + ' my-aws-container'
os.system(cmd)
Then from command prompt I run:
python docker_run.py
This succeeds in running docker
(note: I tried using exec() on the final line rather than os.system(), but got "SyntaxError: invalid syntax"; exec() evaluates Python source, not shell commands, so os.system() or the subprocess module is the right tool here)
I have a branch folder "feature-set"; under this folder there are multiple branches.
I need to run the below script in my Jenkinsfile, with a condition: if this build runs from any branch under the "feature-set" folder (i.e. a branch named like "feature-set/*"), then run the script.
the script is:
sh """
if [ ${env.BRANCH_NAME} = "feature-set*" ]
then
echo ${env.BRANCH_NAME}
branchName='${env.BRANCH_NAME}' | cut -d'\\/' -f 2
echo \$branchName
npm install
ng build --aot --output-hashing none --sourcemap=false
fi
"""
the current output shows the condition never matches:
[ feature-set/swat5 = feature-set* ]
any help?
I would re-write this to be primarily Jenkins/Groovy syntax and only go to shell when required.
Based on the info you provided, I assume your env.BRANCH_NAME always looks like feature-set/<branch>.
// Echo first so we can see value if condition fails
echo(env.BRANCH_NAME)
// startsWith better than contains() based on current usecase
if ( (env.BRANCH_NAME).startsWith('feature-set') ) {
// Split branch string into list based on delimiter
List<String> parts = (env.BRANCH_NAME).tokenize('/')
/**
* Grab everything minus the first part
* This handles branches that include additional '/' characters
* e.g. 'feature-set/feat/my-feat'
*/
branchName = parts[1..-1].join('/')
echo(branchName)
sh('npm install && ng build --aot --output-hashing none --sourcemap=false')
}
This seems to be more on the shell side. Since you are planning to use a shell if condition, the below worked for me.
Administrator1@XXXXXXXX:
$ if [[ ${BRANCH_NAME} = feature-set* ]]; then echo "Success"; fi
Success
Remove the quotes around the pattern and add an additional "[" and "]" at the start and end respectively (i.e. use double brackets, [[ ]]).
The double brackets enable pattern (glob) matching rather than literal string comparison, so feature-set* matches any branch name beginning with feature-set.
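To illustrate the difference (a quick bash sketch, using the branch name from the question):
$ BRANCH_NAME=feature-set/swat5
$ [ "$BRANCH_NAME" = "feature-set*" ] && echo match    # single brackets compare literally: no output
$ [[ $BRANCH_NAME = feature-set* ]] && echo match      # double brackets glob-match
match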
I am trying to get environment variables into an EC2 instance (trying to run a Django app on Amazon Linux AMI 2018.03.0 (HVM), SSD Volume Type, ami-0ff8a91507f77f867). How do you get them in on the newest version of Amazon's Linux, or get logging working so the problem can be traced?
user-data text (modified from here):
#!/bin/bash
#trying to get a file made
touch /tmp/testfile.txt
echo 'This and that' > /tmp/testfile.txt
#trying to log
echo 'Woot!' > /home/ec2-user/user-script-output.txt
#Trying to get the output logged to see what is going wrong
exec > >(tee /var/log/user-data.log|logger -t user-data ) 2>&1
#trying to log
echo "XXXXXXXXXX STARTING USER DATA SCRIPT XXXXXXXXXXXXXX"
#trying to store the ENVIRONMENT VARIABLES
PARAMETER_PATH='/'
REGION='us-east-1'
# Functions
AWS="/usr/local/bin/aws"
get_parameter_store_tags() {
echo $($AWS ssm get-parameters-by-path --with-decryption --path ${PARAMETER_PATH} --region ${REGION})
}
params_to_env () {
params=$1
# If .Tags does not exist we assume an ssm Parameters object.
SELECTOR="Name"
for key in $(echo $params | /usr/bin/jq -r ".[][].${SELECTOR}"); do
value=$(echo $params | /usr/bin/jq -r ".[][] | select(.${SELECTOR}==\"$key\") | .Value")
key=$(echo "${key##*/}" | /usr/bin/tr ':' '_' | /usr/bin/tr '-' '_' | /usr/bin/tr '[:lower:]' '[:upper:]')
export $key="$value"
echo "$key=$value"
done
}
# Get TAGS
if [ -z "$PARAMETER_PATH" ]
then
echo "Please provide a parameter store path. -p option"
exit 1
fi
TAGS=$(get_parameter_store_tags ${PARAMETER_PATH} ${REGION})
echo "Tags fetched via ssm from ${PARAMETER_PATH} ${REGION}"
echo "Adding new variables..."
params_to_env "$TAGS"
Notes:
What I think I know but am unsure of:
the user-data script is only run when the instance is first created, not when I stop and then start it, as mentioned here (although that source also says, possibly outdated, that the output is logged to /var/log/cloud-init-output.log)
I may not be starting the instance correctly
I don't know where to store the bash script so that it can be executed
What I have verified
the user-data text is on the instance by ssh-ing in and curl http://169.254.169.254/latest/user-data shows the current text (#!/bin/bash …)
What I've tried
editing rc.local directly to export AWS_ACCESS_KEY_ID='JEFEJEFEJEFEJEFE' … and the like
putting them in the AWS Parameter Store (and can see them via the correct call, I just can't trace getting them into the EC2 instance without logs or confirming if the user-data is getting run)
putting ENV variables in Tags and importing them as mentioned here:
tried outputting the logs to other files as suggested here (Not seeing any log files in the ssh instance or on the system log)
viewing the System Log on the aws webpage to see any errors/logs via selecting the instance -> 'Actions' -> 'Instance Settings' -> 'Get System Log' (not seeing any commands run or log statements [only 1 unrelated word of user])
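One more check worth noting: cloud-init writes to standard log locations (assuming the AMI uses cloud-init, which Amazon Linux does), so ssh-ing in and inspecting these should show whether the user-data script ran at all:
sudo cat /var/log/cloud-init-output.log   # stdout/stderr from the user-data script
sudo less /var/log/cloud-init.log         # detailed cloud-init execution log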
How to pass argument to Makefile from command line?
I understand I can do
$ make action VAR="value"
value
with Makefile
VAR = "default"
action:
    @echo $(VAR)
How do I get the following behavior?
$ make action value
value
How about
$ make action value1 value2
value1 value2
You probably shouldn't do this; you're breaking the basic pattern of how Make works. But here it is:
action:
    @echo action $(filter-out $@,$(MAKECMDGOALS))
%:          # thanks to chakrit
    @:      # thanks to William Pursell
EDIT:
To explain the first command,
$(MAKECMDGOALS) is the list of "targets" spelled out on the command line, e.g. "action value1 value2".
$@ is an automatic variable for the name of the target of the rule, in this case "action".
filter-out is a function that removes some elements from a list. So $(filter-out bar, foo bar baz) returns foo baz (it can be more subtle, but we don't need subtlety here).
Put these together and $(filter-out $@,$(MAKECMDGOALS)) returns the list of targets specified on the command line other than "action", which might be "value1 value2".
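Putting the pieces together, a run looks like this (assuming the action and %: rules above are the whole Makefile):
$ make action value1 value2
action value1 value2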
Here is a generic working solution based on @Beta's
I'm using GNU Make 4.1 with SHELL=/bin/bash atop my Makefile, so YMMV!
This allows us to accept extra arguments (by doing nothing when we get a goal that doesn't match, rather than throwing an error).
%:
    @:
And this is a macro which gets the args for us:
args = `arg="$(filter-out $@,$(MAKECMDGOALS))" && echo $${arg:-${1}}`
Here is a target which might call it:
test:
    @echo $(call args,defaultstring)
The result would be:
$ make test
defaultstring
$ make test hi
hi
Note! You might be better off using a "Taskfile", which is a bash pattern that works similarly to make, only without the nuances of Maketools. See https://github.com/adriancooney/Taskfile
A much easier approach. Consider a task:
provision:
    ansible-playbook -vvvv \
        -i .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory \
        --private-key=.vagrant/machines/default/virtualbox/private_key \
        --start-at-task="$(AT)" \
        -u vagrant playbook.yml
Now when I want to call it I just run something like:
AT="build assets" make provision
or just:
make provision
(in this case AT is an empty string)
A few years later: let me suggest just for this: https://github.com/casey/just
action v1 v2=default:
    @echo 'take action on {{v1}} and {{v2}}...'
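Invoking it might look like this (a sketch, assuming just is installed and the recipe above sits in a justfile):
$ just action v1
take action on v1 and default...
$ just action v1 v2
take action on v1 and v2...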
You will be better off defining variables and calling your make instead of using parameters:
Makefile
action: ## My action helper
    @echo $$VAR_NAME
(the doubled $$ escapes the dollar sign from make, so the shell expands the VAR_NAME environment variable at run time)
Terminal
> VAR_NAME="Hello World" make action
Hello World
Don't try to do this:
$ make action value1 value2
Instead, create a script:
#! /bin/sh
# rebuild if necessary
make
# do action with arguments
action "$#"
and do this:
$ ./buildthenaction.sh value1 value2
For more explanation of why to do this, and the caveats of makefile hackery, read my answer to another very similar but seemingly not duplicate question: Passing arguments to "make run"
I have an AMI which needs a username/password for login via ssh. I want to create new AMIs from it, in which I can log in with any newly created keypair.
Any suggestions?
I'm not sure what AMI allows username/password login, but when you create an instance from an AMI, you need to specify a key pair.
That key will be ADDED to the authorized_keys for the default user (ec2-user for Amazon Linux, ubuntu for the Ubuntu AMI, etc).
Why don't you just add the user/password to the instance and then build your AMI from there? Then you can change /etc/ssh/sshd_config to permit password logins with PasswordAuthentication yes. By the way, username/password authentication is not recommended for servers in the cloud because of man-in-the-middle attacks (use it at your own risk).
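A minimal sketch of that change (assuming a standard OpenSSH setup; the service name varies by distribution):
# in /etc/ssh/sshd_config
PasswordAuthentication yes
# then restart sshd so the change takes effect
sudo systemctl restart sshd    # or: sudo service sshd restart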
Not sure if I understand the question fully, but if you want to change the behavior of the instance when it boots up, I suggest you look at tweaking cloud-init. The configuration on the instance is under /etc/cloud/cloud.cfg. For example, on Ubuntu the default says something like this:
user: ubuntu
disable_root: 1
preserve_hostname: False
...
If you want to change the default user you can change it there
user: <myuser>
disable_root: 1
preserve_hostname: False
...
The simplest way to do this is by adding the following snippet to /etc/rc.local or its equivalent.
#!/bin/sh
#
# This script will be executed *after* all the other init scripts.
# You can put your own initialization stuff in here if you don't
# want to do the full Sys V style init stuff.
touch /var/lock/subsys/local
if [ ! -d /root/.ssh ] ; then
mkdir -p /root/.ssh
chmod 0700 /root/.ssh
fi
# Fetch public key using HTTP
curl -f http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key > /tmp/aws-key 2>/dev/null
if [ $? -eq 0 ] ; then
cat /tmp/aws-key >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
fi
rm -f /tmp/aws-key
# or fetch public key using the file in the ephemeral store:
if [ -e /mnt/openssh_id.pub ] ; then
cat /mnt/openssh_id.pub >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
fi