Recipe to run on deployment and standalone

I have a recipe below (migrate.rb) which is run as part of our deployment and works perfectly.
However, one thing I can't work out is how to set it up so it can also be run as a standalone recipe via the Execute Recipes command.
As it stands, if we execute this recipe as a standalone, nothing happens, since node[:deploy].each has nothing to loop over (the deploy key doesn't exist).
The only part that actually relies on the deploy node is the line cwd "#{deploy[:deploy_to]}/current", since I need to know where the code was deployed to.
node[:deploy].each do |application, deploy|
  execute 'DB migrate then seed' do
    cwd "#{deploy[:deploy_to]}/current"
    command 'php artisan migrate; while read -r line || [ -n "$line" ]; do php artisan db:seed --class="$line"; done < "app/database/run.list"'
  end
end

I would relocate that part to a definition (or a provider), so basically split your recipe into two parts:
recipes/deploy.rb:
node[:deploy].each do |application, deploy|
  php_artisan_setup do
    dir "#{deploy[:deploy_to]}/current"
  end
end
definitions/php_artisan_setup.rb:
define :php_artisan_setup do
  execute 'DB migrate then seed' do
    cwd params[:dir]
    command 'php artisan migrate; while read -r line || [ -n "$line" ]; do php artisan db:seed --class="$line"; done < "app/database/run.list"'
  end
end
This way, you can call php_artisan_setup from your "standalone" recipe, too. You still need two recipes, but you don't have to duplicate the relevant part.
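If it helps, such a standalone recipe can also be triggered through the Execute Recipes command from the AWS CLI; a sketch, where the stack ID, instance ID, and the cookbook/recipe name mycookbook::standalone are all hypothetical:

# hypothetical IDs; mycookbook::standalone is the recipe that calls php_artisan_setup
aws opsworks create-deployment \
  --stack-id 12345678-aaaa-bbbb-cccc-dddddddddddd \
  --instance-ids i-0123456789abcdef0 \
  --command '{"Name":"execute_recipes","Args":{"recipes":["mycookbook::standalone"]}}'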

Symfony & AWS BS - Env vars for console outside .env files

I have a symfony API which runs on AWS beanstalk instances. My .env file registers the environment variables that I need in my app. I override all of them in the AWS console to adapt to my environments.
For example:
Env file: DEFAULT_CONNECTION_DSN=bolt://user:password@host:port
Test server: DEFAULT_CONNECTION_DSN=bolt://test:azerty@test.toto.com:7687
Prod server: DEFAULT_CONNECTION_DSN=bolt://prod:azerty@toto.com:7687
This works because AWS overrides the environment variables when the PHP server is started, so the values placed in the .env file are ignored.
The problem is that I am trying to create a cron job on the server. The cron job is executed from the command line, and I saw that, in this case, the variables still have the value specified in the .env file at runtime.
If I list the environment variables on the server, I see that DEFAULT_CONNECTION_DSN has the value that I want, but if I dump the value in my code (or execute php bin/console debug:container --env-vars), DEFAULT_CONNECTION_DSN has the .env file value. I already tried to delete the entry from my .env file; in that case, I get an error saying my environment variable is not found.
I should add that locally I work with a .env.local file, which is not versioned, and since deploys are based on git versioning, it seems difficult to add a .env.env-name file for each environment.
What could I do?
Symfony loads env vars from .env only if they are not already present in the environment. Your problem looks more like a question of how to set env vars for cron in AWS; as I don't know Beanstalk, I can't help you with that part.
In your cron I think you can still run DEFAULT_CONNECTION_DSN=bolt://prod:azerty@toto.com:7687 php bin/console ...; this will set your env var at runtime.
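For instance, a crontab entry along these lines (the schedule, the console path, and the command name app:some-command are hypothetical):

# the assignment applies only to this invocation; cron runs the line through sh
0 3 * * * DEFAULT_CONNECTION_DSN="bolt://prod:azerty@toto.com:7687" php /var/app/current/bin/console app:some-command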
When you run something from a bash shell, it inherits the exported variables, plus the ones given on the same command line.
Suppose this case:
xavi@bromo:~$ export -p
declare -x HOME="/home/xavi"
declare -x TERM="linux"
declare -x USER="xavi"
xavi@bromo:~$
Say you echo something that is not defined: you don't get anything, as expected:
xavi@bromo:~$ echo $ABC $XYZ
xavi@bromo:~$
You can place this echo inside a bash script, for example:
xavi@bromo:~$ cat /tmp/test.sh
#!/usr/bin/env bash
echo $ABC $XYZ
Then give it execution permissions:
xavi@bromo:~$ chmod a+x /tmp/test.sh
xavi@bromo:~$
Now, if you execute the script it also prints nothing, but if you prefix the call with variable assignments, the values live "exclusively" inside that call. See the examples with hello-bye and orange-banana. If you later just show the values, they are not there:
xavi@bromo:~$ /tmp/test.sh
xavi@bromo:~$ ABC=hello XYZ=bye /tmp/test.sh
hello bye
xavi@bromo:~$ ABC=orange XYZ=banana /tmp/test.sh
orange banana
xavi@bromo:~$ echo $ABC $XYZ
xavi@bromo:~$
This would be a good approach for Fabien Papet's solution: prefix the cron call with the variable assignment.
But if you cannot do that, you can go further.
Env vars are not inherited when not exported, but they are inherited when exported. See this:
xavi@bromo:~$ ABC=pita
xavi@bromo:~$ XYZ=pota
xavi@bromo:~$ /tmp/test.sh
xavi@bromo:~$ export ABC=pita
xavi@bromo:~$ export XYZ=pota
xavi@bromo:~$ /tmp/test.sh
pita pota
You could take advantage of bash's dot (source) command . to import variables.
Place these contents in the following files:
xavi@bromo:~$ cat /tmp/fruits
export ABC=orange
export XYZ=tangerine
xavi@bromo:~$ cat /tmp/places
export ABC=Paris
export XYZ=Barcelona
xavi@bromo:~$
Note that these files do not have a shebang: they are not meant to be executed and do not create a bash instance of their own. They are meant for inclusion from an existing bash instance.
Now edit test.sh to include a file that we'll pass as the first command-line parameter:
xavi@bromo:~$ cat /tmp/test.sh
#!/usr/bin/env bash
. $1
echo $ABC $XYZ
xavi@bromo:~$
You can now play with the invocation. I still have the pita-pota pair from the last test. See what happens:
xavi@bromo:~$ echo $ABC $XYZ
pita pota
xavi@bromo:~$ /tmp/test.sh /tmp/fruits
orange tangerine
xavi@bromo:~$ /tmp/test.sh /tmp/places
Paris Barcelona
xavi@bromo:~$ echo $ABC $XYZ
pita pota
xavi@bromo:~$
The first line echo $ABC $XYZ displays our current environment.
The second line invokes a new bash (via the shebang of /tmp/test.sh), and as pita-pota were exported, they are momentarily there. But as soon as . $1 is executed, it expands to . /tmp/fruits, which overrides the environment by exporting new variables, hence the result.
The script run with the fruits then ends, so that bash is terminated and its environment is destroyed. We return to our main bash, where we still have pita-pota; if we printed now, we'd see them. Next we go with the places.
The reasoning with the places is identical to the reasoning with the fruits.
As soon as we return to the main bash, the child env is destroyed, so the places have been blown away and we are back in the initial environment with pita-pota, which is then printed.
So, with all this you can:
Set up a bash script that wraps:
loading the environment from some place,
calling php bin/console.
In the cron, do not invoke php directly but your wrapper script.
This allows you to:
Change the environment per deployment without depending on versioning.
Keep your credentials and configuration separate from the code.
In conclusion:
Keep the cron entry and the wrapper bash script under version control.
Have your deployer place a different "includable" parameters file in each environment.
Have the cron call the wrapper.
Have the wrapper set up the env vars and call php bin/console.
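A minimal sketch of such a wrapper, where the parameters-file path /etc/myapp/env-params and the console path are hypothetical:

#!/usr/bin/env bash
# cron-wrapper.sh: load per-environment variables, then run the console command given as arguments
. /etc/myapp/env-params
exec php /var/app/current/bin/console "$@"

The cron entry then calls something like /usr/local/bin/cron-wrapper.sh app:some-command instead of invoking php directly.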
Does this solve your issue?

.sh file works in terminal but not in python script (rclone w/ Raspberry Pi)

I'm having trouble running a .sh file from Python. When I type the location of the .sh file (/home/pi/file/script.sh) into the terminal, the script runs perfectly.
I'm trying to run this script from my Python 2 script, and I've tried the following methods:
subprocess.Popen(['bash', 'location of .sh'])
subprocess.call(['location of .sh'])
os.popen(['location of .sh'])
When I run the Python script, I get a message from rclone saying "Command sync needs 2 arguments maximum".
My .sh file just includes:
#!/bin/sh
sudo /usr/local/bin/rclone -v sync /home/pi/some_project_data remote:rclone --delete-before --include *.csv --include *.py
I'm not sure why running the .sh file in the terminal works fine while this error pops up when I run it from Python.
Your script fails whenever you run it from a directory containing two or more .csv or .py files. This is true in a terminal as well as via Python.
To avoid that, quote your patterns so the shell doesn't expand them:
#!/bin/sh
sudo /usr/local/bin/rclone -v sync /home/pi/some_project_data remote:rclone \
--delete-before --include "*.csv" --include "*.py"
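To see the expansion at work, here is a quick transcript; the file names are hypothetical, and the assumption is that the working directory contains two matching files:

$ ls
a.csv  b.csv  script.sh
$ echo --include *.csv
--include a.csv b.csv
$ echo --include "*.csv"
--include *.csv

Unquoted, the shell hands rclone a.csv and b.csv as extra positional arguments to sync, hence "Command sync needs 2 arguments maximum"; quoted, the literal pattern reaches rclone intact.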
Please try:
os.popen('bash locationof.sh')
e.g.:
os.popen('bash /home/script.sh')
That worked on my machine. If you place square brackets around the string, Python treats it as a list, and os.popen doesn't accept a list; it accepts a single string.
If the script itself is broken, this won't fix that, but it will at least run it. If it still doesn't work, try replacing the script's contents with something like
touch z.txt
and see if z.txt appears in the file explorer. If it does, the invocation is fine and your .sh file has a problem.

Installed go with homebrew, can't find $GOROOT, causing package failures

I installed Go with Homebrew and it usually works. I'm following the tutorial here on creating a serverless API in Go. When I try to run the unit tests, I get the following error:
# _/Users/pro/Documents/Code/Go/ServerLess
main_test.go:6:2: cannot find package "github.com/strechr/testify/assert" in any of:
/usr/local/Cellar/go/1.9.2/libexec/src/github.com/strechr/testify/assert (from $GOROOT)
/Users/pro/go/src/github.com/strechr/testify/assert (from $GOPATH)
FAIL _/Users/pro/Documents/Code/Go/ServerLess [setup failed]
Pros-MBP:ServerLess Santi$ echo $GOROOT
I have installed the test library with: go get github.com/stretchr/testify
I would appreciate it if anyone could point me in the right direction.
Also confusing: when I run echo $GOPATH it doesn't return anything, and the same goes for echo $GOROOT.
Some things to try/verify:
As JimB notes, starting with Go 1.8 the GOPATH env var is optional and has a default value: https://rakyll.org/default-gopath/ (see the go env check after this list).
While you don't need to set it, the directory does need to have the Go workspace structure: https://golang.org/doc/code.html#Workspaces
Once that is created, create your source file in something like: $GOPATH/src/github.com/DataKid/sample/main.go
cd into that directory, and re-run the go get commands:
go get -u -v github.com/stretchr/testify
go get -u -v github.com/aws/aws-lambda-go/lambda
Then try running the test command again: go test -v
The -v option is for verbose output, the -u option ensures you download the latest package versions (https://golang.org/cmd/go/#hdr-Download_and_install_packages_and_dependencies).
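As a quick check of those defaults, go env prints the effective values even when the shell variables are unset; the paths below are the ones from the question's error output:

$ go env GOPATH GOROOT
/Users/pro/go
/usr/local/Cellar/go/1.9.2/libexec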

Virtualenv has multiple possible locations

A colleague of mine implemented a shell script with the following line:
output="$(venv/bin/python manage.py has_missing_migrations --quiet --settings=project.tests_settings 2>&1)"
Here is the full code:
# Check missing migrations
output="$(venv/bin/python manage.py has_missing_migrations --quiet --settings=project.tests_settings 2>&1)"
[ $? -ne 0 ] \
&& ipoopoomypants "Migrations" "$output" \
|| irock "Migrations"
If I run the script, I obtain
Running pre-commit checks:
[OK] anonymize_db ORM queries
[OK] Forbidden Python keywords
[OK] Forbidden JavaScript keywords
[OK] Forbidden HTML keywords
[FAIL] Migrations
COMMIT REJECTED: .git/hooks/pre-commit: line 88: venv/bin/python: No such file or directory
The problem with the above line is that it assumes the virtual environment was created inside the project itself. However, that is not always the case. As far as I'm concerned, I work with virtualenvwrapper, so my virtualenv is not ./venv but ~/.virtualenvs/venv.
Question: how could I modify the above line so that it considers both paths, ./venv and ~/.virtualenvs/venv?
You should probably use the WORKON_HOME environment variable to point to the location of the virtualenvs instead of hard-coding it.
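As a sketch, the hook could try the project-local interpreter first and fall back to virtualenvwrapper's location; the environment name venv is taken from the question, and ~/.virtualenvs is virtualenvwrapper's default WORKON_HOME:

# Prefer the project-local virtualenv; otherwise fall back to virtualenvwrapper's home
PYTHON="venv/bin/python"
if [ ! -x "$PYTHON" ]; then
    PYTHON="${WORKON_HOME:-$HOME/.virtualenvs}/venv/bin/python"
fi
output="$("$PYTHON" manage.py has_missing_migrations --quiet --settings=project.tests_settings 2>&1)"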

One summary for multiple test files using python unittest

I want to set up automated testing for my Python project, but I'm not sure about the correct way to use the unittest module.
All of my test files are currently in one folder and have this format:
import unittest

class SampleTest(unittest.TestCase):
    def testMethod(self):
        pass  # Assertion here

if __name__ == "__main__":
    unittest.main()
Then I run
find ./tests -name "*_test.py" -exec python {} \;
When there are three test files, it outputs
.
----------------------------------------------------------------------
Ran 1 test in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
..
----------------------------------------------------------------------
Ran 2 tests in 0.000s
OK
It printed one summary for each test file. So the question is: what can I do to make it print only one overall summary, e.g. Ran 5 tests in 0.001s?
Thanks in advance.
Also, I don't want to install any other module.
You are invoking Python multiple times, and each process has no knowledge of the others. You need to run Python once and use unittest's discovery mechanism.
Run in shell:
python -m unittest discover
Depending on your project structure and naming conventions, you may want to tweak the discovery params, e.g. the --pattern option, as described in the help:
Usage: python -m unittest discover [options]

Options:
  -h, --help            show this help message and exit
  -v, --verbose         Verbose output
  -f, --failfast        Stop on first fail or error
  -c, --catch           Catch Ctrl-C and display results so far
  -b, --buffer          Buffer stdout and stderr during tests
  -s START, --start-directory=START
                        Directory to start discovery ('.' default)
  -p PATTERN, --pattern=PATTERN
                        Pattern to match tests ('test*.py' default)
  -t TOP, --top-level-directory=TOP
                        Top level directory of project (defaults to start
                        directory)
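Note that the test files in the question are named *_test.py and live under ./tests, which the default test*.py pattern will not match, so the invocation would presumably be:

python -m unittest discover -s tests -p "*_test.py"

(In Python 2, the tests directory also needs an __init__.py file so that discovery can import the test modules.)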
While you said you don't want to install any other module, I'd still recommend looking at another test runner. There are quite a few out there: pytest or nose, to name two.