I have defined several Postman test collections/folders and their associated test data files. Running them individually through the Postman Collection Runner and Newman works fine. I wanted to batch multiple runs together with a .bat file on Windows, like this:
SET postman_collection=Regression.postman_collection.json
SET postman_environment=Development.postman_environment.json
SET postman_folder="Order details"
SET postman_data="orders.json"
newman run %postman_collection% -r html,cli -e %postman_environment% --folder %postman_folder% -d %postman_data%
SET postman_folder="Fuzzy Search"
SET postman_data="fuzzy search regression.csv"
newman run %postman_collection% -r html,cli -e %postman_environment% --folder %postman_folder% -d %postman_data%
SET postman_folder="Sorting"
SET postman_data=""
newman run %postman_collection% -r html,cli -e %postman_environment% --folder %postman_folder% -d %postman_data%
However, execution ends after the first newman run completes. It seems to terminate the console session for some reason.
How can I achieve what I want to do above? Am I structuring my tests incorrectly? Any help is appreciated!
You just have to use "call" before each newman command. On Windows, newman is installed as a batch wrapper script (newman.cmd), so invoking it from another batch file without "call" transfers control to the wrapper and never returns, which is why your script stops after the first run:
SET postman_collection=Regression.postman_collection.json
SET postman_environment=Development.postman_environment.json
SET postman_folder="Order details"
SET postman_data="orders.json"
call newman run %postman_collection% -r html,cli -e %postman_environment% --folder %postman_folder% -d %postman_data%
SET postman_folder="Fuzzy Search"
SET postman_data="fuzzy search regression.csv"
call newman run %postman_collection% -r html,cli -e %postman_environment% --folder %postman_folder% -d %postman_data%
The script below should run a notebook called prepTimePreProcessing whenever an AWS SageMaker notebook instance starts running.
However, I am getting a "could not find conda environment conda_python3" error from the lifecycle config file.
set -e
ENVIRONMENT=python3
NOTEBOOK_FILE="/home/ec2-user/SageMaker/prepTimePreProcessing.ipynb"
echo "Activating conda env"
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT"
echo "Starting notebook"
nohup jupyter nbconvert --to notebook --inplace --ExecutePreprocessor.timeout=600 --ExecutePreprocessor.kernel_name=python3 --execute "$NOTEBOOK_FILE" &
Any help would be appreciated.
Assuming there are no environment problems, if you open a terminal on the instance in use and run:
conda env list
the result should also contain this line:
python3 /home/ec2-user/anaconda3/envs/python3
After that, you can create a .sh script inside /home/ec2-user/SageMaker containing all the code to run. This way it also becomes versionable, since it lives as a persisted file in the instance's storage rather than inside an external configuration.
The on-start.sh/on-create.sh file (from this point on I will simply call it script.sh) then becomes trivial:
#!/bin/bash
# PARAMETERS
ENVIRONMENT=python3
NOTEBOOK_FILE="/home/ec2-user/SageMaker/prepTimePreProcessing.ipynb"
# activate the conda env
source /home/ec2-user/anaconda3/bin/activate "$ENVIRONMENT"
echo "'$ENVIRONMENT' env activated"
# execute the notebook in place (the command from your question)
echo "Starting notebook"
nohup jupyter nbconvert --to notebook --inplace --ExecutePreprocessor.timeout=600 --ExecutePreprocessor.kernel_name=python3 --execute "$NOTEBOOK_FILE" &
In the lifecycle config, on the other hand, just write a few lines to invoke the previously created script.sh:
#!/bin/bash
set -e
SETUP_FILE=/home/ec2-user/SageMaker/script.sh
echo "Run setup script"
sh "$SETUP_FILE"
echo "Setup completed!"
Extra
If you want to add a safety check so that the .sh file is read correctly regardless of its line endings, I would also add a conversion:
#!/bin/bash
set -e
SETUP_FILE=/home/ec2-user/SageMaker/script.sh
# convert script to unix format
echo "Converting setup script into unix format"
sudo yum -y install dos2unix > /dev/null 2>&1
dos2unix "$SETUP_FILE" > /dev/null 2>&1
echo "Run setup script"
sh "$SETUP_FILE"
echo "Setup completed!"
Given a config file that contains a newline-delimited set of folders in Google Cloud Storage (the complete list of directories cannot be used because it is too large), as follows:
gs://databucket/path/to/dir/441738
gs://databucket/path/to/dir/441739
gs://databucket/path/to/dir/441740
how can one use gsutil inside a bash script to recursively rsync the files, whilst deleting files present in the destination folder that don't exist on the bucket?
I have tried using the following in a bash script
cat ${1} | gsutil -m rsync -r -d ${2}
after which I receive an error code 126
where ${1} references the aforementioned config file and ${2} references the destination folder to which each folder in the config file is to be rsynced.
This works with gsutil cp; however, rsync suits my needs more efficiently and effectively.
cat ${1} | gsutil -m cp -R -I ${2}
How might one accomplish this?
Thanks
As you have noticed, gsutil rsync does not support reading source URLs from stdin the way cp does with the -I flag, so you have to use a different method than with cp.
If you want to synchronize multiple folders, write a script that has one rsync command per line, like below:
gsutil -m rsync -r -d gs://databucket/path/to/dir/441738 *destination_folder1*
gsutil -m rsync -r -d gs://databucket/path/to/dir/441739 *destination_folder2*
gsutil -m rsync -r -d gs://databucket/path/to/dir/441740 *destination_folder3*
Then run the script file you wrote.
This method is a bit cumbersome, but it produces the same result you want.
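Since your folder list is too large to write each command out by hand, you can also generate the same runs with a small loop over the config file. A minimal sketch, assuming ${1} is the config file with one gs:// folder per line and ${2} is the local destination root; syncing each source into a subfolder named after its last path component is my assumption, so adjust it to your layout:
#!/bin/bash
set -e
# $1: config file listing one gs:// folder per line
# $2: local destination root
while IFS= read -r src; do
  [ -z "$src" ] && continue              # skip blank lines
  dest="${2}/$(basename "$src")"         # e.g. <root>/441738 (assumed layout)
  mkdir -p "$dest"
  gsutil -m rsync -r -d "$src" "$dest"
done < "$1"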
I have a startup script I believe is failing, but where can I find its logs? It doesn't seem to appear in Stackdriver. My startup script looks like this:
#!/bin/bash
pwd
whoami
sysctl -w vm.max_map_count=262144
sysctl -w fs.file-max=65536
ulimit -n 65536
ulimit -u 4096
docker run -d --name sonarqube \
-p 80:9000 \
-e sonar.jdbc.username=xxx \
-e sonar.jdbc.password=xxx \
-e sonar.jdbc.url=xxx \
sonarqube:latest
When a Compute Engine instance starts up, you will find the logs for the startup script in the serial console log. You can read about the serial console here:
https://cloud.google.com/compute/docs/instances/interacting-with-serial-console
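For example, you can fetch the serial log from your workstation with gcloud (the instance name and zone below are placeholders):
gcloud compute instances get-serial-port-output my-instance --zone us-central1-a
Startup-script output is written there, so a failing command in your script should show up in that log.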
Two-part question; I would really appreciate help on either part. I'm attempting to install Anaconda followed by numbapro on AWS EB. My options.config in .ebextensions looks like this:
commands:
00_download_conda:
command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
test: test ! -d /anaconda
01_install_conda:
command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh'
command: echo 'Finished installing Anaconda'
test: test ! -d /anaconda
02_install_cuda:
command: 'export PATH=$PATH:$HOME/anaconda2/bin'
command: echo 'About to install numbapro'
command: 'conda install -c anaconda numbapro'
Whenever I attempt to deploy this, I run into a timeout, and when I try to manually stop the current operation from the console, I get an error saying that the environment is not in a state where I can abort the current operation or view any log files.
There are a couple of problems here.
First, you need to make sure that you're properly indenting your YAML file, as YAML is sensitive to whitespace. Your file should look like this:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh'
  ...
Next, you can only have one command: entry per command. The echo commands aren't particularly valuable, as you can see which commands are being executed by looking at /var/log/eb-activity.log. You can also combine the export PATH line with the conda install, something like this:
PATH=$PATH:$HOME/anaconda2/bin conda install -c anaconda numbapro
If you're still having trouble after you clear up those items, check (or post here) eb-activity.log to see what's going on.
Refer to the documentation for more details.
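Putting those fixes together, the whole config might look roughly like this. This is a sketch, not a verified template: the installer's -b/-p flags (batch mode and install prefix) and conda's -y flag are assumptions I've added so that nothing prompts for input during the deploy, since an interactive installer prompt is a likely cause of your timeout:
commands:
  00_download_conda:
    command: 'wget http://repo.continuum.io/archive/Anaconda2-4.3.0-Linux-x86_64.sh'
    test: test ! -d /anaconda
  01_install_conda:
    command: 'bash Anaconda2-4.3.0-Linux-x86_64.sh -b -p /anaconda'
    test: test ! -d /anaconda
  02_install_numbapro:
    command: 'PATH=$PATH:/anaconda/bin conda install -y -c anaconda numbapro'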
I am writing a script, and when I execute it manually inside my RHEL EC2 instance, it works as expected.
However, when I try to automate it using a CloudFormation template (that is, putting the script in an S3 bucket and downloading it in user-data from there), it does not run.
My script contains the following commands, which insert entries into my .bash_profile:
sudo sed -e '11 a export ORACLE_HOME=/usr/lib/oracle/12.1/client64' -i /home/ec2-user/.bash_profile
sudo sed -e '12 a export LD_LIBRARY_PATH=$ORACLE_HOME/lib' -i /home/ec2-user/.bash_profile
sudo sed -e '13 a export PATH=$ORACLE_HOME/bin:$PATH' -i /home/ec2-user/.bash_profile
Elevating the script's permissions made it work.
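For reference, a minimal sketch of what the user-data portion might look like (the bucket name and script path are placeholders, not from the original setup):
#!/bin/bash
# download the setup script from S3, make it executable, and run it
# (user-data already runs as root, so no sudo is needed here)
aws s3 cp s3://my-bucket/setup.sh /tmp/setup.sh
chmod +x /tmp/setup.sh
/tmp/setup.sh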