Is it technically possible to run Meteor unit tests against a running server? - unit-testing

Locally I'm running meteor test --driver-package practicalmeteor:mocha --port 4001 to run my unit tests, which refreshes each time I change files. So this is fine for development.
In my CI process I would like to run the unit tests first and, if they pass, do a meteor build and deploy the application to a Docker container. This works, but running meteor test --once --driver-package dispatch:mocha as a CI stage always takes several minutes (5-15 minutes) each time just to start up a Meteor instance, which is not very practical for unit tests. It works, but it takes way too much time...
So my idea is to run a kind of Meteor test server as a Docker container that is already up and running. In the CI unit test stage I would pull the repository (files) and run the tests against it immediately.
But is this possible with Meteor at all?
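For reference, the CI stages described above look roughly like this (a sketch only; the output directory, image name, and deploy step are placeholders, not taken from the question):

    meteor test --once --driver-package dispatch:mocha    # slow: boots a fresh Meteor instance on every run
    meteor build ../build --architecture os.linux.x86_64  # only if the tests pass
    docker build -t myapp:ci .                            # hypothetical image name
    docker push myapp:ci                                  # hypothetical deploy step

The question is essentially whether the first step can be replaced by handing the current files to a test server that is already running.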

Related

Is it possible to stop the embedded server from starting when running the unit tests?

I'm running the Gradle test task on a Micronaut project and the embedded server is getting started.
Locally that is not much of an issue, but when running the unit tests in the CI environment there isn't a database available for the server to connect to. So ideally we would run the Gradle test task without starting the server.
Is it possible to configure Micronaut to not do that?
Yes. The embedded server is not started by default. It is only started if you write code in your test to start it or if you annotate the test class with @MicronautTest. If you don't do either of those things, the server shouldn't start.
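In practice that means a plain test run needs no database at all (a sketch, assuming the Gradle wrapper; the class-name filter is only a hypothetical naming convention, not something Micronaut requires):

    ./gradlew test --tests '*UnitTest'   # plain @Test classes: no embedded server, no database needed
    # Only test classes annotated with @MicronautTest (or tests that start the
    # server explicitly) will boot the server and try to reach its datasource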

Running unit tests inside a Docker container with Django

What is the right way to run the unit tests for Django as part of the build process?
We use a Jenkins pipeline to build the Docker image, and the container is started by a startup script.
Do I need to call manage.py test before the nginx container is started?
or
Do I need to add a post-build task in the Jenkins pipeline to run the tests after the container has started?
So basically I'm looking for the best practice for running tests. Do we need to run unit tests before the server has started or after it has started?
I know that it makes more sense to run unit tests before we start nginx, but won't that increase the build time as more and more test cases are added in the future?
It depends on your test cases. If you are running unit tests only, you don't need to. If your tests do something more, for example calling your APIs (functional testing, etc.), a good approach (in my opinion) is to create different stages in your Jenkinsfile where you first build the Docker image, then run the tests, then decide what to do depending on the test results. I see this as a good thing because you will be running the tests against your app inside the same container (same conditions) it will run in in a production environment. Another good practice is to add some plugins to Jenkins and generate reports (e.g. coverage).
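Expressed as plain shell steps, that ordering looks roughly like this (a sketch only; the image name is made up, the test command assumes the image's WORKDIR contains manage.py, and in Jenkins each step would normally live in its own stage):

    docker build -t myapp:ci .                       # hypothetical image name
    docker run --rm myapp:ci python manage.py test   # tests run inside the same image that will ship
    # only if the previous step exits with 0:
    docker push myapp:ci                             # or start nginx / deploy, depending on the pipeline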

When should I execute unit tests and integration tests in a Dockerfile with Flask installed?

I set up a new Flask Python server and I created a Dockerfile with all my code. I've written some unit tests and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to test whether I'm querying the database correctly, whether the endpoint is exposed correctly, and so on). When should I execute them in a CI/CD pipeline?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
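A minimal sketch of that ordering as generic CI shell steps (assuming a Python project with pytest and a requirements.txt; the test paths are a hypothetical layout):

    pip install -r requirements.txt    # setup step, right after checkout
    pytest tests/unit                  # unit tests: no Docker involved yet
    docker build -t myflaskapp:ci .    # build the image only once the unit tests pass
    # integration tests come later, against a running container (see below)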
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
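For the simple case that can be as small as this (a sketch; the compose file, port, health endpoint, and test path are assumptions):

    docker-compose up -d                                                      # app plus its database from a compose file
    curl --retry 10 --retry-connrefused -fsS http://localhost:5000/health     # hypothetical readiness check
    pytest tests/integration                                                  # exercises the real endpoints and database
    docker-compose down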
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.

Why don't any of my rspec or cucumber tests run locally?

I have inherited a Rails app which has an extensive suite of RSpec and Cucumber tests. When I run these tests on a CI server such as Circle, I see the tests run (most pass, some fail).
The app connects to a back-office API, and I see the code has mocked responses to various API calls; the app also uses Elasticsearch.
When I run the test suite locally using bundle exec rspec, I immediately see a whole bunch of "Elasticsearch::Transport::Transport::Errors::BadRequest" errors from RSpec, along with failures to connect to those back-office APIs I mentioned. However, none of these failures occur on the CI server (yes, I have tried starting ES locally, same issue).
What am I doing wrong here? How can I run my tests locally to mirror what the CI server is doing to get them to run?
Any help is appreciated! Thank you.

Why does my Sonar Jenkins job never become unstable, even with test failures?

I have a job in Jenkins that is run every night. The tasks executed during this build are: compilation, unit tests, integration tests (which are just JUnit tests that take longer to execute than "real" unit tests), and Sonar quality analysis.
When a test fails, however, the job is still considered successful, and thus no email is sent to report the failure.
The Maven command used is mvn clean install sonar:sonar. Removing the install goal does not change anything.
What is wrong with that?
Is there a way to get the expected behavior (i.e. an unstable build when a test fails) with only one Jenkins job, or should I create two jobs: one for the whole "Java part" (compile, unit tests and integration tests), and one for the Sonar analysis?
We are using Maven 2.0.9, Java 1.6, Sonar 2.8, Jenkins 1.413.
Jenkins seems to set that property itself: see "Hudson build successful with unit test failures".
With the property set explicitly (-Dmaven.test.failure.ignore=false), the build stops when there is a test failure.
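For example, with the same goals as in the question:

    mvn clean install sonar:sonar -Dmaven.test.failure.ignore=false
    # with the property set to false, a failing unit or integration test now
    # fails the build instead of letting it finish as successful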
There is a Jenkins plugin for Sonar that seems to run the analysis even if tests fail: http://jira.codehaus.org/browse/SONARPLUGINS-461
In my Sonar installation, I run the tests separately from Sonar and reuse the JUnit/Surefire reports. That way I can control the tests independently from Sonar.
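A sketch of that split, assuming the reuse-reports analysis mode of the Sonar 2.x era (the property names should be checked against the documentation for your Sonar version):

    mvn clean install                                        # run the tests on their own first
    mvn sonar:sonar -Dsonar.dynamicAnalysis=reuseReports \
        -Dsonar.surefire.reportsPath=target/surefire-reports # analysis reuses the existing JUnit/Surefire reports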