Symfony & AWS Beanstalk - Env vars for console outside .env files

I have a symfony API which runs on AWS beanstalk instances. My .env file registers the environment variables that I need in my app. I override all of them in the AWS console to adapt to my environments.
For example:
Env file: DEFAULT_CONNECTION_DSN=bolt://user:password@host:port
Test server: DEFAULT_CONNECTION_DSN=bolt://test:azerty@test.toto.com:7687
Prod server: DEFAULT_CONNECTION_DSN=bolt://prod:azerty@toto.com:7687
This works because AWS overrides the environment variables when the PHP server is started, so the values placed in the .env file are ignored.
The problem is that I am trying to create a cron job on the server. The cron job is executed from the command line, and I noticed that, in this case, the variables still have the values specified in the .env file at runtime.
If I list the environment variables on the server, I see that DEFAULT_CONNECTION_DSN has the value that I want, but if I dump the value in my code (or execute php bin/console debug:container --env-vars), DEFAULT_CONNECTION_DSN has the .env file value. I already tried deleting the entry from my .env file; in that case, I get an error saying my environment variable is not found.
I should point out that locally I work with a .env.local file, which is not versioned, and the deploys are based on git versioning, so it seems difficult to add a .env.env-name file for each environment.
What can I do?

Symfony loads env vars from the .env file only if they are not already present in the environment. Your problem looks more like how to set env vars for cron on AWS. As I don't know Beanstalk, I can't help you with that part.
In your cron entry I think you can still run DEFAULT_CONNECTION_DSN=bolt://prod:azerty@toto.com:7687 php bin/console ...; this will set your env var at runtime.
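A minimal sketch of what that could look like as a crontab entry (the DSN value is the one from the question; the console path and task name app:my-task are assumptions for illustration):

# Prefix the command with the variable assignment so the Symfony runtime
# sees the real DSN instead of the .env fallback value.
0 3 * * * DEFAULT_CONNECTION_DSN="bolt://prod:azerty@toto.com:7687" php /var/app/current/bin/console app:my-task

Since cron hands the command field to /bin/sh -c, the VAR=value prefix applies only to that invocation.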

When you run something from a bash shell, it inherits the exported variables, plus the ones given on the same command line.
Suppose this case:
xavi@bromo:~$ export -p
declare -x HOME="/home/xavi"
declare -x TERM="linux"
declare -x USER="xavi"
xavi@bromo:~$
Say you echo something not defined: You don't get anything, as expected:
xavi@bromo:~$ echo $ABC $XYZ
xavi@bromo:~$
You can place this echo inside a bash script, for example:
xavi@bromo:~$ cat /tmp/test.sh
#!/usr/bin/env bash
echo $ABC $XYZ
Then give it execution permissions:
xavi@bromo:~$ chmod a+x /tmp/test.sh
xavi@bromo:~$
Now, if you execute the script it also prints nothing, but if you prefix the call with variable assignments the values live "exclusively" inside that call. See the examples with hello-bye and orange-banana. If you later just show the values, they are not there:
xavi@bromo:~$ /tmp/test.sh
xavi@bromo:~$ ABC=hello XYZ=bye /tmp/test.sh
hello bye
xavi@bromo:~$ ABC=orange XYZ=banana /tmp/test.sh
orange banana
xavi@bromo:~$ echo $ABC $XYZ
xavi@bromo:~$
This would be a good approach for Fabien Papet's solution: prefix the cron call with the variable assignment.
But if you cannot do that, you can go further.
Env vars are not inherited when they are not exported, but they are inherited when exported. See this:
xavi@bromo:~$ ABC=pita
xavi@bromo:~$ XYZ=pota
xavi@bromo:~$ /tmp/test.sh
xavi@bromo:~$ export ABC=pita
xavi@bromo:~$ export XYZ=pota
xavi@bromo:~$ /tmp/test.sh
pita pota
You could take advantage of the bash dot (source) command . to import variables.
Place in these files those contents:
xavi@bromo:~$ cat /tmp/fruits
export ABC=orange
export XYZ=tangerine
xavi@bromo:~$ cat /tmp/places
export ABC=Paris
export XYZ=Barcelona
xavi@bromo:~$
Note that the previous files do not have a shebang because they are not meant to be executed; they do not create a new bash instance. They are meant for inclusion from an existing bash instance.
Now edit test.sh to include a file that we'll pass via the first command line parameter:
xavi@bromo:~$ cat /tmp/test.sh
#!/usr/bin/env bash
. $1
echo $ABC $XYZ
xavi@bromo:~$
You can now play with the invocation. I still have the pita-pota pair from the last test. See what happens:
xavi@bromo:~$ echo $ABC $XYZ
pita pota
xavi@bromo:~$ /tmp/test.sh /tmp/fruits
orange tangerine
xavi@bromo:~$ /tmp/test.sh /tmp/places
Paris Barcelona
xavi@bromo:~$ echo $ABC $XYZ
pita pota
xavi@bromo:~$
The first line echo $ABC $XYZ displays our current environment.
The second line invokes a new bash (via the shebang of /tmp/test.sh) and, as pita-pota were exported, they are momentarily there. But as soon as . $1 is executed, it is expanded into . /tmp/fruits, which overrides the environment by exporting new values, hence the result.
The second script (the one with the fruits) ends, so that bash is terminated and its environment is destroyed. We return to our main bash, where we still have pita-pota. If we printed now, we'd see pita-pota. We go with the places now.
The reasoning with the places is identical to the reasoning with the fruits.
As soon as we return to the main bash, the child environment is destroyed, so the places have been blown away and we are back in the initial environment with pita-pota, which is then printed.
So... with all this you can:
- Set up a bash script that wraps loading the environment from some place and then calling php bin/console.
- In the cron, do not invoke php directly but your wrapper script.
This allows you to:
- Change the environment values without depending on versioning.
- Keep your credentials and configuration separated from the code.
In conclusion:
- Have your source version control system track the cron entry and the wrapper bash script.
- Have your deployer place a different "includable" parameters file in each environment.
- Make your cron call the wrapper.
- Make your wrapper set up the env vars and call php bin/console.
A minimal sketch of that wiring is shown below.
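Here is a minimal sketch, assuming the deployer drops the per-environment values in /etc/myapp/env.inc and the application lives in /var/app/current (both paths, and the task name app:my-task, are illustrative assumptions):

/etc/myapp/env.inc (written by the deployer, different per environment, not versioned):
export DEFAULT_CONNECTION_DSN="bolt://prod:azerty@toto.com:7687"

bin/cron-wrapper.sh (versioned with the code):
#!/usr/bin/env bash
# Import the per-environment variables, then run the console command
# whose name and arguments were passed to this wrapper.
. /etc/myapp/env.inc
exec php /var/app/current/bin/console "$@"

Crontab entry (also versioned):
0 3 * * * /var/app/current/bin/cron-wrapper.sh app:my-task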
Does this solve your issue?

Related

How to use scl command as a script shebang?

If I want to run a specific command (with arguments) under Software Collections, I can use this command:
scl enable python27 "ls /tmp"
However, if I try to make a shell script that has a similar command as its shebang line, I get errors:
$ cat myscript
#!/usr/bin/scl enable python27 "ls /tmp"
echo hello
$ ./myscript
Unable to open /etc/scl/prefixes/"ls!
What am I doing wrong?
You should try using -- instead of surrounding your command with quotes.
scl enable python27 -- ls /tmp
I was able to make a python script that uses the rh-python35 collection with this shebang:
#!/usr/bin/scl enable rh-python35 -- python
import sys
print(sys.version)
The parsing of arguments in the shebang line is not really defined. From man execve:
The semantics of the optional-arg argument of an interpreter script vary across implementations. On Linux, the entire string following the interpreter name is passed as a single argument to the interpreter, and this string can include white space. However, behavior differs on some other systems. Some systems use the first white space to terminate optional-arg. On some systems, an interpreter script can have multiple arguments, and white spaces in optional-arg are used to delimit the arguments.
No matter what, argument splitting based on quotes is not supported. So when you write:
#!/usr/bin/scl enable python27 "ls /tmp"
It's very possible that what gets invoked is (using bash notation):
'/usr/bin/scl' 'enable' 'python27' '"ls' '/tmp"'
This is probably why it tries to open the "ls file at /etc/scl/prefixes/"ls
But it is just as likely that the shebang evaluates to:
'/usr/bin/scl' 'enable python27 "ls /tmp"'
And that would fail since it won't be able to find a command named enable python27 "ls /tmp" for scl to execute.
There are a few workarounds you can use.
You can call your script via scl:
$ cat myscript
#!/bin/bash
echo hello
$ scl enable python27 ./myscript
hello
You can also use the heredoc notation, but it might lead to subtle issues. I personally avoid this:
$ cat ./myscript
#!/bin/bash
scl enable python27 -- <<EOF
echo hi
echo \$X_SCLS
EOF
$ bash -x myscript
+ scl enable python27 --
hi
python27
You can see one of the gotchas already: I had to write \$X_SCLS to access the environment variable instead of just $X_SCLS.
Edit: Another option is to have two scripts: one that has the actual code, and a second that simply does scl enable python27 $FIRST_SCRIPT. Then you won't have to remember to type scl ... manually; a sketch of that layout follows.
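A minimal sketch of the two-script layout (the file names run_myscript and myscript are hypothetical):

$ cat run_myscript
#!/bin/bash
# Launcher: re-run the real script inside the python27 collection.
scl enable python27 -- "$(dirname "$0")/myscript"
$ cat myscript
#!/bin/bash
# The actual code; it runs with the collection already enabled.
echo hello
$ ./run_myscript
hello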
The software collections documentation may also be helpful. In particular you can try
cat myscript.sh | scl enable python27 -
As well as permanently enabling a software collection
source scl_source enable python27
./myscript.sh

What is `-p` for in the virtualenv command when creating a virtual environment for Django?

To create a virtual environment for Django, one commonly applied command is:
virtualenv -p python3 .
I have typed this command over 30 times.
The meaning of -p is a mystery to me. I have searched for it several times but failed to find an explanation.
I wonder if it might be outside the scope of my concepts and vocabulary.
-p sets which Python interpreter to use.
See the virtualenv reference here: https://virtualenv.pypa.io/en/stable/reference/
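For example (the interpreter path below is just an illustration; point -p at whichever Python you want the environment to use):

# Create a virtualenv in the current directory using python3
virtualenv -p python3 .
# Or point -p at an explicit interpreter path
virtualenv -p /usr/bin/python3.6 venv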

How can I create command autocompletion for Fabric?

I created a fabric file that contains several commands, from short to long and complex. I need an autocomplete feature so that when a user types fab[tab][tab], all available fab commands are shown, just like we have in bash.
i.e.
user@someone-ubuntu:~/path/to/fabfile$ fab[tab][tab]
command1 command2 command3 ..and so on
How can I do this ?
There are instructions you can follow here: http://evans.io/legacy/posts/bash-tab-completion-fabric-ubuntu/
Basically you run a script that calls fab --shortlist, and the output gets fed into complete, which is a bash builtin that you can read more about here: https://www.gnu.org/software/bash/manual/html_node/Programmable-Completion-Builtins.html
A sketch of that approach is below.
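A minimal sketch of that Fabric 1.x approach, assuming fab --shortlist prints one task name per line as the linked post describes (the function name _fab_shortlist is arbitrary):

# Offer the fabfile's task names as completions for the current word.
_fab_shortlist()
{
    local cur tasks
    cur="${COMP_WORDS[COMP_CWORD]}"
    tasks=$(fab --shortlist 2>/dev/null)
    COMPREPLY=($(compgen -W "${tasks}" -- "${cur}"))
}
complete -F _fab_shortlist fab

Source this snippet from your ~/.bashrc (or drop it in /etc/bash_completion.d/) to enable it.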
I did this for my new fabfile using fabric 2.5 & Python 3:
~/.config/fabfile
#!/usr/bin/env zsh
_fab()
{
local cur
COMPREPLY=()
# Variable to hold the current word
cur="${COMP_WORDS[COMP_CWORD]}"
# Build a list of the available tasks from: `python3 -m fabric --complete`
local cmds=$(python3 -m fabric --complete 2>/dev/null)
# Generate possible matches and store them in the
# array variable COMPREPLY
COMPREPLY=($(compgen -W "${cmds}" $cur))
}
# Assign the auto-completion function for our command.
complete -F _fab fab
And in my ~/.zshrc:
source ~/.config/fabfile
I updated the fabric 1.X version here: gregorynichola's gist

aws code deploy can't access bash variable

I have a script that I run as the root user in a BeforeInstall hook. I'm trying to access some variables I have placed into /root/.bashrc, but I've been unable to get the variable contents to display.
Is there something I'm missing that prevents me from accessing the variables?
/root/.bashrc
...
export FOO="bar"
...
deployment_script run as root in a BeforeInstall hook
#!/bin/bash
echo `whoami`  # prints root
...
echo $FOO      # prints nothing
...
MY_VAR=`echo $FOO`
echo $MY_VAR   # prints nothing
...
I've tried sourcing /root/.bashrc, I've tried placing the variables in /root/.profile, and I can't eval anything that includes them because it still comes up empty.
You can try loading the /root/.bashrc script explicitly in your deployment_script:
#!/bin/bash
# Your deployment script
. "/root/.bashrc"
whoami
echo $FOO
# ...

Need to solve "Can't locate VMware/VIRuntime.pm" in cygwin

I have a (maybe) unusual situation. I need to run VMware CLI commands in a Windows box, but via the cygwin CLI inside a shell script. I can NOT change this for now, so any suggestions to "why not do this instead" may be futile, although appreciated. Here's a sample script.
#!/bin/bash
# Paths for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
When I run the above script, I get
Can't locate VMware/VIRuntime.pm in @INC (@INC contains:
/usr/lib/perl5/site_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/site_perl/5.14
/usr/lib/perl5/vendor_perl/5.14/i686-cygwin-threads-64int
/usr/lib/perl5/vendor_perl/5.14
/usr/lib/perl5/5.14/i686-cygwin-threads-64int /usr/lib/perl5/5.14
/usr/lib/perl5/site_perl/5.10 /usr/lib/perl5/vendor_perl/5.10
/usr/lib/perl5/site_perl/5.8 .) at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8. BEGIN
failed--compilation aborted at /cygdrive/c/Program Files
(x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl line 8.
I can run the same vmware-cmd.pl script from a DOS command prompt:
c:\> vmware-cmd.pl
So I know my installation is correct.
Any clues, please?
This post gave me the idea to fix it, but now I get a core dump:
How is Perl's @INC constructed? (aka What are all the ways of affecting where Perl modules are searched for?)
The added line is the export PERL5LIB line.
#!/bin/bash
# Path for vmware-cmd.pl file to run vmware commands from vsphere cli
_vcli_dir="/cygdrive/c/Program Files (x86)/VMware/VMware vSphere CLI"
_vcli_bin="$_vcli_dir/bin"
_vcli_perl="$_vcli_dir/Perl"
_vcli_perl_bin="$_vcli_perl/bin"
_vcli_perl_lib="$_vcli_perl/lib"
_vcli_perl_vlib="$_vcli_perl_lib/VMware"
_vcmd=vmware-cmd.pl
export _orig_path=$PATH
# Add above directories to path variable
export PATH=$PATH:$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
export PERL5LIB=$_vcli_dir:$_vcli_bin:$_vcli_perl:$_vcli_perl_bin:$_vcli_perl_lib:$_vcli_perl_vlib
echo $PATH
$_vcmd /?
export PATH=$_orig_path
echo $PATH
I solved it by going through my elbow to get to my a**, as the saying goes.
What I did was:
- Installed VMware CLI on my Windows box to the default directory
- Added environment variables for the VMware main directory, the bin directory, the Perl directory and the Perl/bin directory
- Added these environment variables to my PATH variable
Then I created a vmware-cli.bat file that takes parameters and concatenates them into a vmware-cli command with the correct values. For example, I call this to list the VMs in the server
cygwin:> ./vmware-cli.bat vmware-cmd.pl --server MyServer --username User --password PW -l
Inside the batch file I essentially do:
REM Get first parm as the command, and then concatenate the rest of the parms
set VCLI_CMD=%1
shift
:LOOP
if %1x==x goto :EXECUTE
set VCLI_CMD=%VCLI_CMD% %1
shift
goto LOOP
:EXECUTE
%VCLI_CMD%
This is an alternative to the previous post that allows you to keep it in the same shell script:
VIMCMD="/cygdrive/C/Program Files (x86)/VMware/VMware vSphere CLI/bin/vmware-cmd.pl"
VIMCMD_DOS=$(cygpath -d "$VIMCMD")
DOS_VIMCMD="cmd /c $VIMCMD_DOS"
Then you can run:
$ $DOS_VIMCMD --version
vSphere SDK for Perl version: 6.0.0
Script 'vmware-cmd.pl' version: 6.0.0