I am very new to TeamCity, and I was asked to upgrade JUnit to the newest version in our project. During the upgrade I had to update some tests, because the JUnit API has changed. Now I would like to run all the tests to be sure that I haven't broken anything. We have CI set up using TeamCity, where all the tests should run during the build, but only one test is being triggered (normally it is over 200). I am not sure if it is because I have changed only the unit tests and there was no change in the production code, but I would like to force-trigger all the tests. Could someone please help me achieve this? Thanks a lot.
I am pretty new to unit testing, and I am trying to understand best practice for whether and, if so, how to run Jest alongside webpack.
For context, I am used to using ESLint in the following way. I use eslint-webpack-plugin and configure it so that webpack outputs an error and/or fails the build if there is a linting error. I use this setup for both the development build (using webpack-dev-server) and the production build so that I can be made aware of and address linting issues as they arise. I also use lint-staged and husky to set up a pre-commit hook that runs ESLint before commits for a similar reason.
So, my inclination when learning Jest was to use a similar setup, where tests are run as part of the webpack compilation process and errors are obvious and intrusive, so that I can address them as they arise. I tried following the Babel and webpack tutorials on the Jest site, but I cannot get webpack to throw any errors, and to be honest I'm not sure it's running Jest at all. I looked at how create-react-app and create-next-app set up Jest. They both include an npm script for testing, but it seems users are supposed to run that script manually, separately from the dev/build processes, or as part of a CI workflow.
Any advice appreciated!
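For reference, the separate npm test script those starters ship is essentially just an invocation like the following, run manually or in CI (a sketch, assuming Jest is installed as a dev dependency):

```shell
# Interactive watch mode while developing, alongside webpack-dev-server:
npx jest --watch

# One-shot run for CI; --ci makes Jest non-interactive, and a non-zero
# exit code from a failing test fails the pipeline step:
npx jest --ci
```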
I set up a new Flask Python server and created a Dockerfile with all my code. I've written some unit tests and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to test if I'm querying the database correctly, to understand if the endpoint is exposed correctly, and so on), when should I execute them in a CI/CD?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
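Put together, the ordering described above might look like this as a single CI script (a sketch only; the image name, compose setup, and test paths are placeholders, not taken from the question):

```shell
#!/bin/sh
set -e                               # stop at the first failing step

pip install -r requirements.txt      # setup: install package dependencies
pytest tests/unit                    # unit tests, outside of Docker

docker build -t myapp:ci .           # build the image only if units pass

docker compose up -d                 # stand up the app "for real"
pytest tests/integration             # integration tests, launched from outside
docker compose down
```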
I have Jenkins set up to run tests before anything gets pushed into our QA environment. Recently I added Python coverage to check the code coverage of the tests.
The issue I have is that I now see in the output that tests are failing, but the build still pushes through.
I am running the following in a bash script:
coverage run manage.py test --settings=my.settings.jenkins --noinput
When I was running the tests normally, without coverage, the build would fail if a test failed; this is no longer the case.
The project is a Django project on Python 3. Any help would be greatly appreciated.
I am hitting the same issue. My Jenkins build always passes even if Coverage.py has some failing tests (I use the same command as you).
I have come up with a workaround but would ideally like to know if you figured it out?
My workaround uses the TextFinder plugin: I search the console output for a specific string and fail the build if it is found... hacky, I know!
https://wiki.jenkins-ci.org/display/JENKINS/Text-finder+Plugin
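An alternative to scraping the console is to make the shell build step itself propagate the test exit status, since Jenkins marks a shell step as failed whenever it exits non-zero. A minimal sketch using the question's command:

```shell
#!/bin/sh
# `set -e` aborts the script with a non-zero status as soon as any
# command fails, so a failing test run fails the Jenkins build step.
set -e
coverage run manage.py test --settings=my.settings.jenkins --noinput
```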
I have a job in Jenkins that runs every night. The tasks executed during this build are: compilation, unit tests, integration tests (which are just JUnit tests that take longer to execute than "real" unit tests), and a Sonar quality analysis.
When a test fails, however, the job is still considered successful, and thus no email is sent to notify anyone of the failure.
The Maven command used is mvn clean install sonar:sonar. Removing the install goal does not change anything.
What is wrong with that?
Is there a way to get the expected behavior (i.e. having an unstable build when a test failed) with only one Jenkins job, or should I create two jobs, one for the whole "Java part" (compile, unit test and integration tests), and one for the Sonar analysis?
We are using Maven 2.0.9, Java 1.6, Sonar 2.8, Jenkins 1.413.
Jenkins seems to set that property; see "Hudson build successful with unit test failures".
With the property set explicitly (-Dmaven.test.failure.ignore=false), the build stops when there is a test failure.
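So the question's invocation, with the property made explicit, would be:

```shell
mvn clean install sonar:sonar -Dmaven.test.failure.ignore=false
```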
There is a Jenkins plugin for Sonar that seems to run the analysis even if tests fail: http://jira.codehaus.org/browse/SONARPLUGINS-461
In my Sonar installation, I run the tests separately from Sonar and reuse the JUnit/surefire reports. That way I can control the tests independently of Sonar.
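That separation can be sketched as two Maven invocations (the reuseReports value is the Sonar 2.x-era property for skipping test execution during analysis; double-check it against your Sonar version):

```shell
# 1. Compile and run all tests, producing the JUnit/surefire reports.
mvn clean install

# 2. Run only the Sonar analysis, reusing the existing reports instead
#    of executing the tests again.
mvn sonar:sonar -Dsonar.dynamicAnalysis=reuseReports
```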
I have a maven2 project in hudson and when the cobertura reporting plugin runs, it causes the unit tests to show that they have run twice. I don't mind them running multiple times, but the trend graph shows twice as many tests as we actually are running. Is there a way to make sure the graph only shows them once?
thanks,
Jeff
This is a known bug. Just wait for it to be fixed.
The workaround I use (works in Hudson 1.391) is to configure Cobertura in a separate Maven profile and run it in the Hudson job as a post-build step.
More detailed instructions:
1. Add Cobertura to your project POM in a special profile (so it won't run during the default lifecycle) and configure it to create its report in XML format.
2. Install the "Hudson M2 Extra Steps Plugin".
3. Configure your Hudson job as a Maven 2 project.
4. In your job configuration, in the "Build" section, configure the usual clean/install goals.
5. In the "Build Environment" section, select "Configure M2 Extra Build Steps" and add a Maven post-build step. Configure it to run "cobertura:cobertura -P your_cobertura_profile_name".
6. In "Post-build Actions", select "Publish Cobertura Coverage Report" and configure the proper XML report pattern (the default should work just fine).
I had the same problem recently when I was running the Maven goals test and emma:emma in the same job. emma seems to rerun all the tests, thus doubling the results. When I removed the test goal, my unit tests were still executed but the test results went back to normal. It could be the same with Cobertura.