I'm running my Play Framework app using the command sbt run
When I change something in the code, it will "hot recompile" and serve my updated app.
I would like to run my unit tests on each hot recompile.
I have tried things like
sbt test run, but it only runs the tests once. After that, every code change triggers a hot recompile but no unit tests.
I also tried sbt ~test run, but it waits for code changes forever and never launches the app.
Is there a way to configure SBT so that it will always run the command "test" each time there is a hot recompile?
The closest I could get was running sbt and then using the command ~ ;test;run, which will run the tests and then launch the app in a continuous cycle (as long as there are changes) but still requires you to shut down the app with Ctrl-D to get back to running the tests.
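Spelled out as a session, that is (the > prompt is sbt's own shell):

$ sbt
> ~ ;test;run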
My initial approach was trying to disable auto-reloading, but it appears to be hard-coded, and even that wouldn't be enough on its own, as you'd need whatever the auto-reload hook uses to shut down the app on each change. So... technically possible, but not without creating a custom sbt task.
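If you want something reusable without writing a full custom task, a command alias gets close. This is a minimal sketch; the alias name is mine, and it assumes sbt's ~ accepts the aliased command sequence the same way it accepts the raw ;test;run:

// in build.sbt: define a reusable alias for the compound command
addCommandAlias("testAndRun", "; test; run")

Then sbt "~testAndRun" from your terminal (or ~testAndRun from the sbt shell) behaves like the session above.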
Related
What is the right way to run the unit test cases for Django as part of the build process?
We use a Jenkins pipeline to build the Docker image, and the container is started by a startup script.
Do I need to call manage.py test before the nginx container is started?
or
Do I need to add a post-build task to the Jenkins pipeline to run the tests after the container has started?
So basically I'm looking for the best practice for running tests: should unit tests run before the server has started or after?
I know it makes more sense to run the unit tests before we start nginx, but won't that keep increasing build time as more and more test cases are added in the future?
It depends on your test cases. If you are running only unit tests, you don't need the server running at all. If your tests do more, for example calling your APIs (functional testing, etc.), a good approach (in my opinion) is to create different stages in your Jenkinsfile: first build the Docker image, then run the tests, then decide what to do depending on the test results. I see this as a good approach because you will be running the tests against your app inside the same container, under the same conditions it will have in a production environment. Another good practice is to add some plugins to Jenkins and generate reports (e.g., coverage).
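As a rough sketch of that "same conditions" idea, the test stage boils down to something like this (the image tag is illustrative; manage.py test is the Django test runner from the question):

docker build -t myapp:candidate .                       # build stage
docker run --rm myapp:candidate python manage.py test   # test stage, inside the image just built
# the deploy stage runs only if the test command exits 0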
I set up a new Flask Python server, and I created a Dockerfile with all my code. I've written some unit tests, and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to check that I'm querying the database correctly, that the endpoint is exposed correctly, and so on). When should I execute them in a CI/CD pipeline?
I was thinking of executing the tests during docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
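For the unit-test half, the CI order under those assumptions looks something like this (directory names and image tag are illustrative):

pip install -r requirements.txt   # setup steps
pytest tests/unit                 # unit tests, outside of Docker
docker build -t myapp:ci .        # build the image only once they pass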
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
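A sketch of that late-CI integration stage, assuming a Compose file that stands up the application and its database (the test path is mine):

docker compose up -d       # run the application "for real"
pytest tests/integration   # exercise the exposed endpoints
docker compose down        # tear it back down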
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
I have Selenium tests written in Python 3.4. How do I run them from Jenkins after a successful build?
The process is:
1. pull from git repository
2. python setup.py build
3. python setup.py install
After that I need to run the server and the Selenium tests.
- You can add a trigger to your Selenium job so that it runs after the build job completes successfully.
To answer your question accurately, I'd need to know whether you are planning on running the Selenium tests on the Jenkins box itself...
Assuming you aren't (which, IMO, is something you don't want to do), you can take two different directions:
1. Add an "execute shell" step to your build that SSHes into the machine you want to fire your tests from, together with the command needed to run them on that machine. This means the git pull that fetches the latest Selenium code has to happen in this step (see the sketch after this list).
2. If you are outsourcing your browser execution to BrowserStack, Sauce Labs, etc., add an "execute shell" step with the command needed to trigger your tests from Jenkins. This assumes your tests already know to point at the outsourced environment; you will most likely also need a step that starts a tunnel between your CI box and that environment...
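A sketch of the "execute shell" step for the first direction; the host, user, and paths are all placeholders, and run_server.py stands in for however you start the app under test:

# runs on the Jenkins box after a successful build
ssh tester@selenium-host '
  cd /opt/myapp
  git pull
  python setup.py build && python setup.py install
  python run_server.py &            # hypothetical: start the server under test
  python -m unittest discover tests # run the Selenium suite against it
'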
Try using the Selenium and SeleniumHQ plugins for this.
To add a plugin: Manage Jenkins / Manage Plugins / Available
I'm trying to get started with Docker for developing a web application with Clojure and am unsure which way to go. From what I've read so far, and also looking at the official Docker Clojure repo, there are basically two possible ways:
call lein ring server (interactively or as a CMD in a Dockerfile) or
use a Dockerfile to compile your application into an uberjar and use java -jar as the CMD on the resulting jar file.
The former seems problematic to me in that the dev environment is not as close as possible to the production environment, given that we're probably using a :dev Leiningen profile that adds things one would strictly not want in production (exposing as few tools and as little "information", i.e. code, on a production server as possible is always a good idea). The latter, however, seems to have the exact opposite problem: now every change requires essentially a rebuild of the image (think edit-compile-run cycle), so you would lose lein ring's nice recompile-on-modification functionality.
How are people using this combination in practice?
PS: I'm aware that there might be some other modes of operation in practice (e.g. using Immutant or Tomcat as the deployment target or using a CI server like Hudson etc.). I'm asking about the most basic setup first here.
My team and I have chosen to optimize for rapid feedback while developing and to minimize the number of moving parts in our deploys. As a result, we use lein ring server in development and ship an uberjar for deployment. I've done this with code running in Docker containers and without them.
I wouldn't want to go back to a development workflow that didn't let me see the results of changing code as quickly as possible. In my mind, the rapid feedback far outweighs the risk of running the service slightly differently between my local machine and production.
Also, nothing stops me from changing a couple of lines of code and then starting up a local service that runs much closer to my production setup (either running a built Docker image or building an uberjar locally).
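In command terms, the split looks like this (the jar name is just an example of what lein-ring produces under target/):

# development: recompile on modification
lein ring server

# release: build the artifact once, run it the way production does
lein ring uberjar
java -jar target/myapp-standalone.jar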
There's nothing stopping you from running in production mode with Leiningen. Just use:
lein with-profile production ring server
I've used both approaches successfully, although we've settled on the uberjar approach because it gives faster startup times.
I use the second option, java -jar ..., to deploy my web application to production (not using Docker yet). This creates an edit-compile-run cycle, as you said, but I don't recompile for every change; I only create the uberjar when I'm ready to release. Of course, CI is always recommended.
At work we fully test the GUI components. The problem is that, while the test suite is running, the various components pop up, stealing the focus or making it impossible to continue working. The first thing I thought of was Xnest, but I was wondering if there's a more elegant solution to this problem.
I think what you need to do here is have your tests run on a different Display than the one you're working on.
When we moved our TeamCity agents to EC2, we had to figure out a solution to running our UI unit tests on a headless Linux server. I found a way to do it in this blog post, which outlines how to use Xvfb.
For my case, all I had to do was:
yum install xorg-x11-server-Xvfb
Run Xvfb :100 -ac to start the server. I added this to the rc.local file on my EC2 agents so it starts at machine startup.
Then I set env.DISPLAY to :100 in my TeamCity build configuration.
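Putting it together, the whole recipe is (display number :100 as above; the last line stands in for whatever launches your test suite):

yum install -y xorg-x11-server-Xvfb   # install the virtual framebuffer X server
Xvfb :100 -ac &                       # run it headlessly on display :100
DISPLAY=:100 run-your-test-suite      # point the tests at it instead of the real display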