Fake X server for testing? - unit-testing

At work we fully test the GUI components. The problem arises from the fact that, while the test suite is running, the various components pop up, stealing the focus or making it impossible to continue working. The first thing I thought of was Xnest, but I was wondering if there's a more elegant solution to this problem.

I think what you need to do here is have your tests run on a different Display than the one you're working on.
When we moved our TeamCity agents to EC2, we had to figure out a solution to running our UI unit tests on a headless Linux server. I found a way to do it in this blog post, which outlines how to use Xvfb.
For my case, all I had to do was:
yum install xorg-x11-server-Xvfb
Xvfb :100 -ac to run the server. I added this to my rc.local file on my EC2 agents to start it at machine startup.
Then I added env.DISPLAY=:100 as an environment variable in my TeamCity build configuration
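For reference, the whole setup boils down to a few shell commands (display number :100 is arbitrary; any unused display works):
yum install -y xorg-x11-server-Xvfb   # install the virtual framebuffer X server
Xvfb :100 -ac &                       # start it on display :100 with access control disabled
export DISPLAY=:100                   # tests launched from this shell now render off-screen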

Related

Django test api while api is running in dev environment [duplicate]

I have looked at this question, but I am not sure whether I understood it correctly.
I have PyCharm open with one Python script, and it is running (it does topic modeling).
I also have another Python script, which I opened in another PyCharm window on the same server, and I am running it as well.
So these two programs are running on the same server; I should mention that I have not changed any configuration, neither on the server nor in PyCharm.
Do you think this is OK, or will one script technically not run (by which I mean it shows as running but makes no actual progress) until the other one finishes?
Edit Configurations -> Allow parallel run. Done
First, PyCharm will create independent processes on the server, so both scripts will run. You can check it with something like htop - search for processes and verify that they're running.
Second, you don't have to open second PyCharm window to run the second script. You can run both of them from the single one. There are at least two ways: with run configurations or by spawning multiple terminal windows and running scripts from there.
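For example, from a single terminal you could start both scripts in the background (the script names here are placeholders for your own files):
python topic_modeling.py > topic_modeling.log 2>&1 &
python other_script.py > other_script.log 2>&1 &
jobs    # both jobs should be listed as Running; htop will show two separate python processes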
From the Run/Debug Configurations window you can add a Compound configuration that contains multiple configurations to run in parallel. The Allow parallel run option on the child configurations makes no difference in this case.
The default behaviour was changed starting from version 2018.3. You can allow multiple runs by selecting Allow parallel run within the Edit Configurations menu.

When should I execute unit tests and integration tests in a Dockerfile with Flask installed?

I set up a new Flask Python server and created a Dockerfile with all my code. I've written some unit tests and I'm executing them locally. When should I execute them if I want to implement CI/CD?
I also need to write integration tests (to test if I'm querying the database correctly, to understand if the endpoint is exposed correctly, and so on), when should I execute them in a CI/CD?
I was thinking of executing them during the docker build, i.e. putting the test execution in the Dockerfile. Is that correct?
Unit tests: Outside of Docker, before you run your docker build. Within your CI pipeline, after checking out the source code and running any setup steps like installing package dependencies.
Integration tests: Launched from outside of Docker; depending on how complex your setup is, either late in your CI pipeline or as part of your CD pipeline.
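As a rough sketch of that ordering in a CI script (the tool and path names here are illustrative, not taken from your project):
pip install -r requirements.txt    # setup step
pytest tests/unit                  # unit tests run before any image exists
docker build -t myapp:candidate .  # build the image only once the unit tests pass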
These recommendations assume a true "unit test" that has no external dependencies: it depends only on the application/library code, and where it needs things like databases, it either mocks out those dependencies or uses something like an embedded SQLite. (Some frameworks are especially bad at this workflow and make it impossible to start the application at all if the database isn't available; Rails is the usual offender, but that's not a Python concern.)
Running unit tests in a Dockerfile will last until it's midnight, you have a production outage, and either your quick fix that will bring the site back up happens to break one obscure unit test, or you can't wait the 5-minute cycle time to run the whole unit-test suite. Since there shouldn't be dependencies on the Docker-or-not environment in your unit tests, I'd just run them outside Docker.
Often you can stand up enough infrastructure to be able to run your application "for real" with a couple of docker run commands or a simple Docker Compose setup. In that case, it makes sense to run an integration test towards the end of your CI pipeline. With a more complex setup (maybe one involving Kubernetes) you might need to actually deploy into a test environment, and if you have separate CI and CD tools, this would turn into "test deploy", "integration test", "pre-production deploy".
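In the simple case, that integration stage might look roughly like this (the image names, ports, and test layout are assumptions for illustration):
docker network create app-test
docker run -d --network app-test --name db -e POSTGRES_PASSWORD=test postgres:15
docker run -d --network app-test --name app -p 5000:5000 myapp:candidate
pytest tests/integration           # talks to the running app over HTTP on localhost:5000
docker rm -f app db && docker network rm app-test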
As a developer I find having tools not-in-Docker vastly easier to manage than tools that only run in Docker. (I don't subscribe to the "any binary other than /usr/bin/docker is bad" philosophy.) I'd rather just run pytest or curl than remember the 4-line docker run invocation to do some specific task.

Intern - Create local tunnel to run functional tests

So I'm starting to use Intern for functional tests. So far so good: I've got it all working, unit and functional tests.
I followed their intern-tutorial
Whenever you need to run a full test against all platforms, use the test runner. When you are in the process of writing your tests and want to check them for correctness more quickly, you can either use just the Node.js client (for unit tests only) or create an alternate configuration file that only tests against a single local platform, like your local copy of Chrome or Firefox (for all tests, including functional tests).
I searched their documentation, but I didn't find anything specifically about local "tunnels".
I'm using Intern with Gulp; my localhost is localhost:3000, and I want to test on Chrome 54 on my Mac.
Thank you
I guess NullTunnel is what you're looking for?
I found the answer. I had to change the tunnel to Local Selenium.
Download the latest version of ChromeDriver
Set tunnel to 'NullTunnel'
Run chromedriver --port=4444 --url-base=wd/hub
Set your environments capabilities to [ { browserName: 'chrome' } ]
Run the test runner
Notes:
Don't forget to copy the chromedriver file to your project root.
I had to run .\chromedriver --port=4444 --url-base=wd/hub from my project root.
The test runner has to be run in a new command line/terminal/shell
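Roughly, the two terminals end up running the following (tests/intern-local is a placeholder for your own local config module, and the paths depend on your OS):
# terminal 1: ChromeDriver acting as the bare WebDriver endpoint that NullTunnel expects
./chromedriver --port=4444 --url-base=wd/hub
# terminal 2: the Intern test runner, pointed at the local configuration
./node_modules/.bin/intern-runner config=tests/intern-local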
Hope this helps someone who has the same issue.

Development workflow for a Clojure webapp with Docker

I'm trying to get started with Docker for developing a web application with Clojure and am unsure which way to go. From what I've read so far, and also looking at the official Docker Clojure repo, there are basically two possible ways:
call lein ring server (interactively or as a CMD in a Dockerfile) or
use a Dockerfile to compile your application into an uberjar and use java -jar as the CMD on the resulting jar file.
The former seems problematic in the sense that the dev environment is not as close as possible to the production environment, given that we're probably using a :dev Leiningen profile that adds stuff one would strictly not want in production (exposing as few tools and as little code as possible on a production server is always a good idea). The latter, however, seems to have the exact opposite problem: now every change essentially requires a rebuild of the image (think edit-compile-run cycle), so you would lose lein ring's nice recompile-on-modification functionality.
How are people using this combination in practice?
PS: I'm aware that there might be some other modes of operation in practice (e.g. using Immutant or Tomcat as the deployment target or using a CI server like Hudson etc.). I'm asking about the most basic setup first here.
My team and I have opted to optimize for rapid feedback while developing and to minimize the number of moving parts in our deploys. As a result we use lein ring server in development and ship an uberjar for deployment. I've done this both with code running in Docker containers and without.
I wouldn't want to go back to a development workflow that didn't let me see the results of changing code as quickly as possible. In my mind, the rapid feedback far outweighs the risk of running the service slightly differently between my local machine and production.
Also, nothing stops me from changing a couple lines of code and then starting up a local service that is running much closer to my production setup (either running a built docker image or building an uberjar locally).
There's nothing stopping you from running in production mode with Leiningen. Just use:
lein with-profile production ring server
I've used both approaches successfully, although we've settled on the uberjar approach because it gives faster startup times.
I use the second option, java -jar ..., to deploy my web application to production (not using Docker yet). This creates an edit-compile-run cycle, as you said, but I don't recompile for every change; only when I'm ready to release do I create the uberjar. Of course, CI is always recommended.
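For reference, a minimal sketch of that cycle (the jar name depends on your project.clj; myapp-0.1.0 is a placeholder):
lein ring server                             # development: recompiles on modification
lein uberjar                                 # at release time, build the standalone jar
java -jar target/myapp-0.1.0-standalone.jar  # what production (or a Docker CMD) runs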

How to automate installer testing

I'm wondering if anyone has any best practices for automating the testing of installers on various machines with potentially different hardware / software profiles and by specifying various options to the installer. The idea would be that I could write "unit test like" code to set up a machine, run the installer, then test that certain things are true. Tests might look similar to:
Test:
Boot Machine without IIS
Run Installer
Assert Installer Had Errors
Test:
Boot Machine with IIS
Run Installer
Assert Installer Ran
Test_Fixture:
SetUp:
Boot Machine with IIS
Test:
Run Installer without IIS install
Assert Website Not Installed
Test:
Run Installer with IIS install
Assert Website Installed
I know I could create lots of VMs, but waiting for a VM to boot for each functional test sounds like way more work than I want. What I really want is a way to virtualize the installer environment. Any suggestions?
We have created a set of VMs and find it very easy to manage. We run the tests for 13 different Windows installers overnight. The VMs we have created are very bare bones, so it is possible to run a number of tests in parallel.
If you have the installer runnable from the command line, it's easy to have a script to call it automatically.
Then you can use a web app testing tool to check whether the install was successful, like this one: http://seleniumhq.org/ For this you will need a unique way to verify a new install, such as a page showing the current version.
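A minimal sketch of such a script for an MSI-based installer, as a Windows batch file (setup.msi and the /version page are placeholders for your own installer and health check):
rem silent install with a verbose log
msiexec /i setup.msi /qn /l*v install.log
if %ERRORLEVEL% neq 0 exit /b 1
rem hypothetical page that reports the newly installed version
curl -f http://localhost/version || exit /b 1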