Hi, we are using TeamCity and Octopus for CI/CD. We have several hundred test cases.
As set up, we have a build server (Machine A), and the application gets deployed to a server (Machine B).
We use TeamCity, and the last step is a Deploy step through Octopus Deploy.
We have several test cases which get executed as Pre-Deploy Tests. Now I want to add a few performance test cases which will run on the server (Machine B). How can I do this?
Thanks in advance
Octopus already helps with this: you can write a PowerShell step that runs your performance test cases, so the actual process of performing the tests is pretty easy. Moreover, if you want to view the test results, Octopus (since 2.0) lets you "attach" files to a deployment via PowerShell; they are then uploaded and made available on the deployment page.
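As a rough sketch (the test runner, its arguments, and the file paths below are placeholders for whatever actually drives your performance tests on Machine B), the PowerShell step could look something like this, using New-OctopusArtifact, the cmdlet Octopus exposes to script steps for attaching files:

# Run the performance tests on the target machine (runner and paths are assumptions)
& "C:\PerfTests\PerfRunner.exe" --out "C:\PerfTests\results.xml"
if ($LASTEXITCODE -ne 0) { throw "Performance tests failed" }

# Attach the results so they appear on the deployment page in Octopus
New-OctopusArtifact -Path "C:\PerfTests\results.xml" -Name "perf-results.xml"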
Alternatively, you can run the load tests from TeamCity using the TeamCity JMeter plugin and parameterise the target environment. The link below can help you with that:
https://devblog.xero.com/run-jmeter-performance-tests-on-teamcity-8315f7ccffc1
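For example (a sketch only; the test plan name and property names are assumptions), a parameterised non-GUI JMeter run against Machine B could look like:

jmeter -n -t load-tests.jmx -Jtarget.host=machine-b.example.com -Jtarget.port=80 -l results.jtl

Inside the test plan the values are read with ${__P(target.host)} and ${__P(target.port)}, so the same plan can be pointed at different environments from TeamCity.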
References and sources:
https://octopus.com/docs/deployment-examples/custom-scripts
Related
What is the right way of running the unit test cases for Django as part of the build process?
We use a Jenkins pipeline for building the Docker image, and the container is started by using a startup script.
Do I need to call manage.py test before the nginx container is started?
or
Do I need to add a post-build task in the Jenkins build pipeline to run the tests after the container has started?
So basically I am looking for the best practice for running tests: do we need to run the unit tests before the server has started or after?
I know that it makes more sense to run the unit tests before we start nginx, but won't that increase the build time as more and more test cases are added in the future?
It depends on your test cases. If you are running unit tests only, you don't need to. If your tests do something more, for example calling your APIs (functional testing, etc.), a good approach in my opinion is to create different stages in your Jenkinsfile where you first build the Docker image, then run the unit tests, then decide what to do depending on the test results. I see this as a good thing because you will be running the tests over your app inside the same container, under the same conditions it will have in a production environment. Another good practice would be to add some plugins to Jenkins and get some reports (e.g. coverage).
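A minimal sketch of that idea, assuming a declarative Jenkinsfile and a Django-style test command (the image name, tag, and deploy script are placeholders):

pipeline {
    agent any
    stages {
        stage('Build image') {
            steps {
                // build the same image that will eventually ship to production
                sh 'docker build -t myapp:$BUILD_NUMBER .'
            }
        }
        stage('Unit tests') {
            steps {
                // run the tests inside that image, i.e. under production-like conditions
                sh 'docker run --rm myapp:$BUILD_NUMBER python manage.py test'
            }
        }
        stage('Deploy') {
            steps {
                // only reached if the previous stages (and therefore the tests) succeeded
                sh './deploy.sh'
            }
        }
    }
}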
Does JaCoCo provide code coverage for integration tests of APIs? That is, I have an instance of my application running locally, and I have integration tests that hit an API offered by that running instance. In this scenario, can I use JaCoCo to find out how many lines of my running application instance were covered when the integration tests were run?
I have already tried the JaCoCo Maven plugin's prepare-agent-integration and report-integration goals, but they report the code coverage as 0. I think it's because JaCoCo only measures code coverage of the currently running instance and not of the instance whose API is hit.
I had forgotten to attach the javaagent while running the service. Running the jar file with the agent in output=tcpserver mode, then dumping the execution data using jacoco:dump and creating the report using jacoco:report, solved the issue.
# Start the service with the JaCoCo agent listening in tcpserver mode
java -javaagent:<path_to_agent>/org.jacoco.agent-0.7.9-runtime.jar=output=tcpserver,address=127.0.0.1 -jar myapp.jar

# Run the integration tests against the running service
mvn clean verify -Pintegration-tests

# Dump the execution data collected by the agent to an .exec file
mvn jacoco:dump -Djacoco.address=localhost -Djacoco.destFile=./service/target/jacoco.exec

# Generate the coverage report (point dataFile at the .exec file written by jacoco:dump)
mvn jacoco:report -DdataFile=./target/jacoco.exec
I set up a new Flask Python server and created a Dockerfile with all my code. I've written some unit tests and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to test whether I'm querying the database correctly, whether the endpoint is exposed correctly, and so on). When should I execute them in a CI/CD pipeline?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
This assumes a true "unit test" that has no external dependencies; it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start up the application at all if the database isn't available. But Rails doesn't run on Python.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
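As a sketch of that part of the CI pipeline (the tool names and paths are assumptions about a typical Flask project):

# after checking out the source, still outside Docker
pip install -r requirements.txt   # install dependencies
pytest tests/unit                 # run the unit tests
docker build -t myapp:ci .        # only build the image once the unit tests pass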
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
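For the simple case, a sketch of the "couple of docker run commands or a simple Docker Compose setup" approach (the compose file, service names, and test paths are assumptions):

docker compose up -d       # start the app and its dependencies (e.g. a database)
pytest tests/integration   # hit the running endpoints from outside the containers
docker compose down        # tear the environment back down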
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.
I have Selenium tests written in Python 3.4. How do I run them from Jenkins after a successful build?
The process is:
1. pull from git repository
2. python setup.py build
3. python setup.py install
After that I need to run the server and the Selenium tests.
How do I run them from Jenkins after a successful build?
- You can add a trigger to your Selenium job so that it runs after the build job completes successfully.
To answer your question accurately, I would need to know whether you are planning on running the Selenium tests on the Jenkins box...
Assuming you aren't planning on running the tests on the Jenkins box (which IMO is something you don't want to do), you can take two different directions:
1. Add an "execute shell" step to your build that SSHes into the machine you want to fire your tests on, along with the command you need to run your tests on that machine (see the sketch after these two options). This would mean the git pull to get the latest Selenium test code would have to happen in this step.
2. If you are outsourcing your browser execution to BrowserStack, Sauce Labs, etc., add an "execute shell" step with the command needed to trigger your tests (firing from Jenkins). This assumes your tests know that they should point to the outsourced environment; you will most likely also need a step to start a tunnel between your CI box and the outsourced environment...
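A sketch of what the "execute shell" step in the first direction might contain (the host, user, paths, and test command are all assumptions about your environment):

ssh tester@test-machine "cd /opt/selenium-tests && git pull && python3 -m pytest tests/"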
Try using the Selenium and Seleniumhq plugins for this.
To add a plugin: Manage Jenkins / Manage Plugins / Available
I have configured TeamCity with Git to get my ASP.NET MVC project.
My solution contains the web app and the corresponding unit tests:
MY_SOLUTION.sln:
- WebAppProject
- SomeCoreLibrary
- SomeCoreLibraryTests
- OtherProjects...
The steps that I have configured in TeamCity are the following:
Get external packages using NuGet
Build the solution and deploy it
Run Unit Tests
Run Automated Tests (using Selenium)
I want to run the unit tests after building but before deployment, and stop the deployment if the unit tests fail. Currently the deployment is done right after the build, using the following command-line parameters:
/p:VisualStudioVersion=11.0
/p:DeployOnBuild=true    (I want this to happen only after the SomeCoreLibraryTests.dll unit tests have passed)
/p:PublishProfile=MyWebDeploy
/P:AllowUntrustedCertificate=True
/P:UserName=username_here
/P:Password=password_here
Thanks,
Ionut
What I've done in similar cases is to use RoboCopy to just mirror the new website into the deployment path. Doesn't that work for you?
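For example (the source and destination paths are assumptions), a mirroring step could be as simple as:

robocopy "%teamcity.build.checkoutDir%\WebAppProject\publish" "\\MachineB\wwwroot\MyWebApp" /MIR

Note that robocopy uses non-zero exit codes even on success (values below 8 mean it worked), so you may need a small wrapper or build-step setting so TeamCity doesn't treat a successful copy as a failure.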
P.S.: if you do get this working, I'd suggest doing a performance improvement change in TeamCity (which would allow you to run the unit tests in parallel to the automated tests):
I assume you are employing a single build configuration for all those steps. If that is the case, what I would recommend instead is using Dependent Build configurations to separate the different concerns. You can see an example here in an open source project of mine:
http://teamcity.codebetter.com/viewLog.html?buildId=112432&buildTypeId=bt1075&tab=dependencies
Log in as Guest and expand the Testeroids :: Publish to NuGet tree node to visualize the build flow.
To achieve this, basically you pass around the result of your build step in the artifacts (e.g. you pass the resulting binaries from Compile into Unit Test). You gain several things by using dependent builds: several independent build steps can run in parallel on different agents, and if one of your build steps fails because of external factors (e.g. let's say Publish failed because the network went down), you can trigger the build again and it will only rebuild the failed steps.
I am not familiar with the tools that you use. However, I would, in general, use a few build configurations for a project:
A build configuration, triggered on change, containing these steps: get the latest source code and packages, build/compile, and run the unit tests. Then create an artifact for the deployment task (see the sketch at the end of this answer).
A build configuration to deploy to a development server, triggered by the successful completion of (1) and using its artifact (via a dependency).
A build configuration for long-running (e.g. integration/functional) testing that is scheduled to run less frequently.
An advantage of (2) is that you can, if necessary, re-deploy a build/artifact without having to rebuild the artifact first. Also, if you have multiple agents, (2) and (3) can run independently of each other.
Furthermore, you can also tag builds in (2) that have passed development checks and then use their artifacts in another build configuration to deploy to a test server, etc.
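A rough sketch of the steps inside configurations (1) and (2), reusing the parameters from the question (the test-runner path and the project file name are assumptions):

REM Configuration (1), step 1: build the solution without deploying
msbuild MY_SOLUTION.sln /p:Configuration=Release /p:VisualStudioVersion=11.0

REM Configuration (1), step 2: run the unit tests; a failure here stops everything downstream
vstest.console.exe SomeCoreLibraryTests\bin\Release\SomeCoreLibraryTests.dll

REM Configuration (2): deploy, triggered only by a successful run of (1)
msbuild WebAppProject\WebAppProject.csproj /p:VisualStudioVersion=11.0 /p:DeployOnBuild=true /p:PublishProfile=MyWebDeploy /p:AllowUntrustedCertificate=True /p:UserName=username_here /p:Password=password_here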