TravisCI: Set up two jobs with different configurations - unit-testing

I am setting up automated Travis CI builds and was wondering whether it is possible to launch two jobs (running the same tests) with two different configurations.
My app depends on a config.json file, which sets up different DB usages (JSON and Mongo). My use case is simple: run the tests with a config file using JSON, then run the same tests with another config file using Mongo.
To retrieve the config I run a before script which just fetches it from somewhere and saves the file.
Thanks!

My solution for this is quite simple: depending on the configuration (in this case, env variables), I run specific scripts that download a different configuration for each env variable value.
before_script:
- sh -c "if [ '$DB' = 'mongo' ]; then sleep 15; fi"
- sh -c "if [ '$DB' = 'mongo' ]; then wget https://google.com/config.json; fi"
- sh -c "if [ '$DB' = 'mysql' ]; then wget https://google.com/config2.json; fi"
That way, when the code runs, you can load different configurations.
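For completeness, here is a minimal .travis.yml sketch of how the two jobs could be declared through an env matrix, reusing the placeholder URLs above; each env entry starts a separate job with that variable set, so the same test suite runs once per configuration:
env:
  - DB=mongo
  - DB=mysql
before_script:
  - sh -c "if [ '$DB' = 'mongo' ]; then sleep 15; fi"
  - sh -c "if [ '$DB' = 'mongo' ]; then wget https://google.com/config.json; fi"
  - sh -c "if [ '$DB' = 'mysql' ]; then wget https://google.com/config2.json; fi"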

Related

Prisma client is not working with Next.js serverless when deployed on AWS

I am using Next.js "12.1.6" and Prisma version "^3.15.2". When I deploy my app to Vercel, it works very smoothly. But when I try to deploy the same app to AWS using Serverless, it doesn't work. Below is my serverless.yml file.
myNextApp:
  component: "@sls-next/serverless-component"
My next.config.js file contains the configuration for the DB connection.
DATABASE_URL: "mysql://***:***@***:3306/****"
If I remove the Prisma client from my app, the app works on AWS as well, but with the Prisma client it does not.
I have tried the solutions below, which did not work.
LINK TO GIT REPOSITORY
Below is the serverless.yml code.
yourProjectName:
  component: "@sls-next/serverless-component@1.19.0"
  inputs:
    minifyHandlers: true
    build:
      postBuildCommands:
        - PDIR=node_modules/.prisma/client/;
          LDIR=.serverless_nextjs/api-lambda/;
          if [ "$(ls -A $LDIR)" ]; then
            mkdir -p $LDIR$PDIR;
            cp "$PDIR"query-engine-rhel-* $LDIR$PDIR;
            cp "$PDIR"schema.prisma $LDIR$PDIR;
          fi;
        - PDIR=node_modules/.prisma/client/;
          LDIR=.serverless_nextjs/default-lambda/;
          if [ "$(ls -A $LDIR)" ]; then
            mkdir -p $LDIR$PDIR;
            cp "$PDIR"query-engine-rhel-* $LDIR$PDIR;
            cp "$PDIR"schema.prisma $LDIR$PDIR;
          fi;
When I run 'serverless' with the above code (my OS is Windows 10), it says "PDIR is not recognized as an internal or external command, operable program or batch file."
If I try to put the ".prisma/client" folder in '.serverless_nextjs/api-lambda/' and '.serverless_nextjs/default-lambda/' manually, it disappears again after I run serverless.
This is the URL to my repo, which works fine on Vercel but does not work with AWS Serverless.
Just put the lines below in your serverless.yml. This works for Next.js 12 or later and Prisma >= 4:
inputs:
  useServerlessTraceTarget: true
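For context, a minimal serverless.yml sketch with that input in place might look like this (the project name and component version are taken from the question and stand in for your own):
yourProjectName:
  component: "@sls-next/serverless-component@1.19.0"
  inputs:
    minifyHandlers: true
    useServerlessTraceTarget: true
With useServerlessTraceTarget enabled, the build traces and bundles the files each lambda needs, so the manual postBuildCommands copy step for the Prisma engine should no longer be necessary.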

Is it possible to run an Appium test on AWS Device Farm with a Gradle command?

In the YAML file below we run the tests with the command java org.testng.TestNG testng.xml.
Is it possible to run the tests with something like ./gradlew clean runTests testng.xml instead?
version: 0.1
# Phases are a collection of commands that get executed on Device Farm.
phases:
  # The install phase includes commands that install dependencies that your tests use.
  # Default dependencies for testing frameworks supported on Device Farm are already installed.
  install:
    commands:
      # This test execution environment uses Appium version 1.9.1 by default, however we enable you to change it using the Appium version manager (avm). An
      # example "avm" command below changes the version to 1.14.2.
      # For your convenience, we have preinstalled the following versions: 1.9.1, 1.10.1, 1.11.1, 1.12.1, 1.13.0, 1.14.1, 1.14.2, 1.15.1 or 1.16.0.
      # To use one of these Appium versions, change the version number in the "avm" command below to your desired version:
      - export APPIUM_VERSION=1.14.2
      - avm $APPIUM_VERSION
      - ln -s /usr/local/avm/versions/$APPIUM_VERSION/node_modules/.bin/appium /usr/local/avm/versions/$APPIUM_VERSION/node_modules/appium/bin/appium.js
  # The pre-test phase includes commands that set up your test environment.
  pre_test:
    commands:
      # Set up environment variables for Java.
      - export CLASSPATH=$CLASSPATH:$DEVICEFARM_TESTNG_JAR
      - export CLASSPATH=$CLASSPATH:$DEVICEFARM_TEST_PACKAGE_PATH/*
      - export CLASSPATH=$CLASSPATH:$DEVICEFARM_TEST_PACKAGE_PATH/dependency-jars/*
      # We recommend starting the appium server process in the background using the command below.
      # The Appium server log will go to the $DEVICEFARM_LOG_DIR directory.
      # The environment variables below will be auto-populated during run time.
      - echo "Start appium server"
      - >-
        appium --log-timestamp
        --default-capabilities "{\"deviceName\": \"$DEVICEFARM_DEVICE_NAME\", \"platformName\":\"$DEVICEFARM_DEVICE_PLATFORM_NAME\",
        \"app\":\"$DEVICEFARM_APP_PATH\", \"udid\":\"$DEVICEFARM_DEVICE_UDID\", \"platformVersion\":\"$DEVICEFARM_DEVICE_OS_VERSION\",
        \"chromedriverExecutable\":\"$DEVICEFARM_CHROMEDRIVER_EXECUTABLE\"}"
        >> $DEVICEFARM_LOG_DIR/appiumlog.txt 2>&1 &
      - >-
        start_appium_timeout=0;
        while [ true ];
        do
            if [ $start_appium_timeout -gt 60 ];
            then
                echo "appium server never started in 60 seconds. Exiting";
                exit 1;
            fi;
            grep -i "Appium REST http interface listener started on 0.0.0.0:4723" $DEVICEFARM_LOG_DIR/appiumlog.txt >> /dev/null 2>&1;
            if [ $? -eq 0 ];
            then
                echo "Appium REST http interface listener started on 0.0.0.0:4723";
                break;
            else
                echo "Waiting for appium server to start. Sleeping for 1 second";
                sleep 1;
                start_appium_timeout=$((start_appium_timeout+1));
            fi;
        done;
  # The test phase includes commands that start your test suite execution.
  test:
    commands:
      # Your test package is downloaded to $DEVICEFARM_TEST_PACKAGE_PATH, so we first change directory to that path.
      - echo "Navigate to test package directory"
      - echo $DEVICEFARM_TEST_PACKAGE_PATH
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      # By default, the following command is used by Device Farm to run your Appium TestNG test.
      # The goal is to run your tests jar file with all the dependency jars in the CLASSPATH.
      # Alternatively, you may specify your own customized command.
      # Note: For most use cases, the default command works fine.
      # Please refer to "http://testng.org/doc/documentation-main.html#running-testng" for more options on running TestNG tests from the command line.
      - echo "Unzipping TestNG tests jar"
      - unzip tests.jar
      - echo "Start Appium TestNG test"
      - cd suites
      - ls -l
      - java org.testng.TestNG testng.xml
  # The post_test phase includes commands that are run after your tests are executed.
  post_test:
    commands:
      - ls -l
      - zip -r allure.zip allure-results artifacts report test-output
      - ls -l
      - cp allure.zip $DEVICEFARM_LOG_DIR
      - cd $DEVICEFARM_LOG_DIR
      - ls -l
# The artifacts phase lets you specify the location where your test logs and device logs will be stored.
# It also lets you specify the location of the test logs and artifacts which you want to be collected by Device Farm.
# These logs and artifacts will be available through the ListArtifacts API in Device Farm.
artifacts:
  # By default, Device Farm will collect your artifacts from the following directories.
  - $DEVICEFARM_LOG_DIR
Thank you for reaching out. Are you trying to replace "java org.testng.TestNG testng.xml" with "./gradlew clean runTests testng.xml", or are you expecting to run the Gradle command locally?
I found a solution:
Zip the whole project, including your build.gradle files.
Select the Appium Node config and upload your zip.
Use the YAML config for TestNG (or the one from my question), but replace the command java org.testng.TestNG testng.xml with ./gradlew clean runTests (a task in your Gradle build) your_test_suite.xml, as sketched below.
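Applied to the test spec above, the test phase would then look roughly like this. This is only a sketch, assuming the zipped project (with gradlew at its root) is extracted into $DEVICEFARM_TEST_PACKAGE_PATH; runTests and your_test_suite.xml stand in for your own Gradle task and TestNG suite:
  test:
    commands:
      - echo "Navigate to test package directory"
      - cd $DEVICEFARM_TEST_PACKAGE_PATH
      - chmod +x ./gradlew
      - echo "Start Appium TestNG tests via Gradle"
      - ./gradlew clean runTests your_test_suite.xml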

Coveralls doesn't recognize the token when a dockerized Django app is run in Travis CI, but only for pull requests

I'm getting this error from Travis CI when it tries to run a pull request:
coveralls.exception.CoverallsException: Not on TravisCI. You have to provide either repo_token in .coveralls.yml or set the COVERALLS_REPO_TOKEN env var.
The command "docker-compose -f docker-compose.yml -f docker-compose.override.yml run -e COVERALLS_REPO_TOKEN web sh -c "coverage run ./src/manage.py test src && flake8 src && coveralls"" exited with 1.
However, I do have both COVERALLS_REPO_TOKEN and repo_token set as environment variables in my Travis CI settings, and I know they're correct because Travis CI passes my develop branch and successfully sends the results to coveralls.io:
OK
Destroying test database for alias 'default'...
Submitting coverage to coveralls.io...
Coverage submitted!
Job ##40.1
https://coveralls.io/jobs/61852774
The command "docker-compose -f docker-compose.yml -f docker-compose.override.yml run -e COVERALLS_REPO_TOKEN web sh -c "coverage run ./src/manage.py test src && flake8 src && coveralls"" exited with 0.
How do I get TravisCI to recognize my COVERALLS_REPO_TOKEN for the pull requests it runs?
Found the answer: You can't! At least not while keeping your coveralls.io token secret, because:
Defining encrypted variables in .travis.yml
Encrypted environment variables are not available to pull requests from forks due to the security risk of exposing such information to unknown code.
Defining Variables in Repository Settings
Similarly, we do not provide these values to untrusted builds, triggered by pull requests from another repository.

Docker on EC2, RUN command in Dockerfile not reading environment variable

I have two Elastic Beanstalk environments on AWS: development and production. I'm running a GlassFish server on each instance, and it is requested that the same application package be deployable in the production and development environments, without requiring two different .EAR files. The two instances differ in size: dev uses a micro instance while production uses a medium instance, therefore I need to deploy two different configuration files for GlassFish, one for each environment.
The main problem is that the file has to be in the GlassFish config directory before the server starts, so I thought it would be better to move it while the container was being created.
Of course each environment uses a Docker container to host the GlassFish instance, so my first thought was to configure an environment variable for Elastic Beanstalk. In this case
ypenvironment = dev
for the development environment and
ypenvironment = pro
for the production environment. Then in my Dockerfile I put this statement in a RUN command:
RUN if [ "$ypenvironment"="pro" ] ; then \
mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
elif [ "$ypenvironment"="dev" ] ; then \
mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
fi
Unfortunately, when the startup finishes, both GF_domain files are still in /var/app.
Then I read that the RUN command runs things BEFORE the container is fully loaded, maybe missing the Elastic-Beanstalk-injected variables. So I tried to move the code to the ENTRYPOINT directive. No luck again, the container startup fails. I also tried the
ENTRYPOINT ["command", "param"]
syntax, but it didn't work, giving a
System error: exec: "if": executable file not found in $PATH
Thus I'm stuck.
You need:
1/ Not to use ENTRYPOINT for this (or at least to use a sh -c 'if...' syntax there): that is for run-time execution, not the compile-time image build.
2/ To use build-time variables (--build-arg):
You can use ENV instructions in a Dockerfile to define variable values. These values persist in the built image.
However, often persistence is not what you want. Users want to specify variables differently depending on which host they build an image on.
A good example is http_proxy or source versions for pulling intermediate files. The ARG instruction lets Dockerfile authors define values that users can set at build-time using the --build-arg flag:
$ docker build --build-arg HTTP_PROXY=http://10.20.30.2:1234 .
In your case, your Dockerfile should include:
ARG ypenvironment
Then build with: docker build --build-arg ypenvironment=dev ... myDevImage
You will build 2 different images (based on the same Dockerfile).
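To make that concrete, here is a minimal Dockerfile sketch of the build-time approach, reusing the RUN logic from the question (the base image name is a placeholder, and note the spaces around = that the shell test requires):
# base image is a placeholder for whatever GlassFish image you build from
FROM my-glassfish-base
# build-time variable, supplied via --build-arg
ARG ypenvironment
# the argument only exists while the image is built, so the file is moved at build time
RUN if [ "$ypenvironment" = "pro" ] ; then \
        mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    elif [ "$ypenvironment" = "dev" ] ; then \
        mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml ; \
    fi
You would then build it twice, e.g. docker build --build-arg ypenvironment=dev -t myapp:dev . and docker build --build-arg ypenvironment=pro -t myapp:pro .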
I need to be able to use the same EAR package for dev and pro environments,
Then you want your ENTRYPOINT, when run, to move a file depending on the value of an environment variable.
Your Dockerfile still needs to declare the variable (a bare ENV needs a value, so give it a default):
ENV ypenvironment=""
But you need to run your one image with
docker run -e ypenvironment=dev ...
Make sure your script (referenced by your ENTRYPOINT) includes the if [ "$ypenvironment" = "pro" ] ; then ... logic you mention in your question (note the spaces around =, which the shell test requires), plus the actual launch (in the foreground) of your app.
Your script must not exit right away, or your container would switch to the exited status right after having started.
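A minimal entrypoint sketch along those lines, assuming the file paths from the question (the asadmin path and start command are assumptions and may differ in your image):
#!/bin/sh
# entrypoint.sh - pick the GlassFish domain.xml that matches the runtime environment
if [ "$ypenvironment" = "pro" ] ; then
    mv --force /var/app/GF_domain.xml /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
elif [ "$ypenvironment" = "dev" ] ; then
    mv --force /var/app/GF_domain.xml.dev /usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
fi
# start the server in the foreground so the container stays alive
exec /usr/local/glassfish/glassfish/bin/asadmin start-domain --verbose domain1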
When working with Docker you must differentiate between build-time actions and run-time actions.
Dockerfiles are used for building Docker images, not for deploying containers. This means that all the commands in the Dockerfile are executed when you build the Docker image, not when you deploy a container from it.
The CMD and ENTRYPOINT commands are special build-time commands which tell Docker what command to execute when a container is deployed from that image.
Now, in your case a better approach would be to check if Glassfish supports environment variables inside domain.xml (or somewhere else). If it does, you can use the same domain.xml file for both environments, and have the same Docker image for both of them. You then differentiate between the environments by injecting run-time environment variables to the containers by using docker run -e "VAR=value" when running locally, and by using the Environment Properties configuration section when deploying on Elastic Beanstalk.
Edit: In case you can't use environment variables inside domain.xml, you can solve the problem by starting the container with a script which reads the runtime environment variables and puts their values in the correct places in domain.xml using sed, then starts your application as usual. You can find an example in this post.
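As a rough illustration of that approach (the placeholder token, variable name and paths are invented for the example, not taken from the linked post):
#!/bin/sh
# run.sh - substitute runtime environment variables into domain.xml, then start GlassFish
CONFIG=/usr/local/glassfish/glassfish/domains/domain1/config/domain.xml
# replace a placeholder such as @MAX_HEAP@ with the value injected via docker run -e / Elastic Beanstalk
sed -i "s|@MAX_HEAP@|${MAX_HEAP:-512m}|g" "$CONFIG"
# launch in the foreground as usual
exec /usr/local/glassfish/glassfish/bin/asadmin start-domain --verbose domain1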

Multiple supervisor.conf files for two different projects

Can I use a different supervisor.conf file for each django-celery project?
I have created separate supervisor configs for both of them inside the projects themselves, but supervisor only works with one. Is there any way to keep the configuration files separate for both projects and still use the supervisor daemon for both?
Note: I have not created a supervisor.conf file in the /etc/supervisor/conf.d directory.
We had the same issue, running the same project as two different Docker services - one for the API (Flask) and a second for offline processing (Celery).
This is the command section from our docker-compose file:
command: bash -c "echo -e \"[include]\nfiles = /svc/etc/supervisord/*\" > unified.conf
&& supervisord -c unified.conf"
First we create the file, and then we run the supervisord command.
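For illustration, the generated unified.conf ends up containing just an include directive, and each service drops its own program definitions into the included directory. The file names and program commands below are placeholders, assuming the /svc/etc/supervisord path from the compose command above:
; unified.conf (generated by the command above)
[include]
files = /svc/etc/supervisord/*
; e.g. /svc/etc/supervisord/api.conf
[program:api]
command = gunicorn app:app
; e.g. /svc/etc/supervisord/worker.conf
[program:worker]
command = celery -A app worker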