I recently started using Dash for developing web applications. In order to improve the quality of my code, I began writing unit tests for these applications.
I followed the tutorial in the Dash documentation (https://dash.plotly.com/testing) and installed ChromeDriver and all the other needed dependencies. When running the example unit tests from the Dash documentation, the tests that don't use dash_duo run just fine, but every single test that uses dash_duo fails with a TestingTimeoutError.
Eventually, I wrote a super simple test:
import dash
from dash import html

def test_001_test_dash_duo(dash_duo):
    x = 2
    assert x == 2
Even this test fails with a timeout error:
msg = 'expected condition not met within timeout'

    def until(
        wait_cond, timeout, poll=0.1, msg="expected condition not met within timeout"
    ):  # noqa: C0330
        res = wait_cond()
        logger.debug(
            "start wait.until with method, timeout, poll => %s %s %s",
            wait_cond,
            timeout,
            poll,
        )
        end_time = time.time() + timeout
        while not res:
            if time.time() > end_time:
>               raise TestingTimeoutError(msg)
E               dash.testing.errors.TestingTimeoutError: expected condition not met within timeout
Any ideas what is needed to get Dash unit tests that use dash_duo to stop failing with a timeout?
Environment: Python 3, tornado 4.4. Normal unit tests cannot be used because the methods are asynchronous. There is http://www.tornadoweb.org/en/stable/testing.html, which explains how to do unit testing for asynchronous code. But that works with tornado coroutines ONLY. The classes I want to test use async def statements, and they cannot be tested this way. For example, here is a test case that uses AsyncHTTPClient.fetch and its callback parameter:
class MyTestCase2(AsyncTestCase):
    def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        client.fetch("http://www.tornadoweb.org/", self.stop)
        response = self.wait()
        # Test contents of response
        self.assertIn("FriendFeed", response.body)
But my methods are declared like this:
class Connection:
    async def get_data(self, url, *args):
        # ....
And there is no callback. How can I "await" this method from a test case?
UPDATE: based on Jessie's answer, I created this MWE:
import unittest
from tornado.httpclient import AsyncHTTPClient
from tornado.testing import AsyncTestCase, gen_test, main

class MyTestCase2(AsyncTestCase):
    @gen_test
    async def test_01(self):
        await self.do_more()

    async def do_more(self):
        self.assertEqual(1 + 1, 2)

main()
The result is this:
>py -3 -m test.py
E
======================================================================
ERROR: all (unittest.loader._FailedTest)
----------------------------------------------------------------------
AttributeError: module '__main__' has no attribute 'all'
----------------------------------------------------------------------
Ran 1 test in 0.000s
FAILED (errors=1)
[E 170205 10:05:43 testing:731] FAIL
There is no traceback. But if I replace tornado.testing.main() with unittest.main(), then it suddenly starts working.
But why? I guessed that for async unit tests I need to use tornado.testing.main (http://www.tornadoweb.org/en/stable/testing.html#tornado.testing.main).
I'm confused.
UPDATE 2: It is a bug in tornado.testing. Workaround:
all = MyTestCase2
main()
Instead of using the self.wait / self.stop callbacks, you can wait for "fetch" to complete by using it in an "await" expression:
import unittest
from tornado.httpclient import AsyncHTTPClient
from tornado.testing import AsyncTestCase, gen_test

class MyTestCase2(AsyncTestCase):
    @gen_test
    async def test_http_fetch(self):
        client = AsyncHTTPClient(self.io_loop)
        response = await client.fetch("http://www.tornadoweb.org/")
        # Test contents of response
        self.assertIn("FriendFeed", response.body.decode())

unittest.main()
The other change I had to make to your code is to call "decode" on the body, in order to compare the body (which is bytes) to "FriendFeed" (which is a string).
My application callback starts a Supervisor that conflicts with my unit tests.
With that callback I am getting something like {:error, {:already_started, #PID<0.258.0>}} when I try to run unit tests, because my processes are already started.
Can I execute the Application callback only for :dev and :prod, keeping the :test environment clean of startup code?
I am looking for something like this:
def application do
  [
    applications: [:logger],
    mod: {MyApplication, [], only: [:dev, :prod]}
  ]
end

only: [:dev, :prod] is the missing piece.
I do not know if this is the correct way to handle testing in this case, but here's how you can do what you're asking for:
In mix.exs:
def application do
  rest = if(Mix.env == :test, do: [], else: [mod: {MyApp, []}])
  [applications: [:logger]] ++ rest
end
For the demo below, I added the following to MyApp.start/2:
IO.puts "starting app..."
Demo:
$ MIX_ENV=dev mix
starting app...
$ MIX_ENV=prod mix
starting app...
$ MIX_ENV=test mix # no output
One solution is to kill that process before the test suite runs. For example, you could do something like the following:
setup do
  Process.exit(pid, :kill)
end

test "do something..." do
  assert 1 == 1
end
This will ensure that before the test is run, that process is already killed.
I have a tool in which I am implementing UPnP discovery of the devices connected to the network.
For that I have written a script that uses a datagram class.
Implementation:
Whenever the scan button is pressed on the tool, it runs that UPnP script and lists the devices in a box created in the tool.
This was working fine.
But when I press the scan button again, it gives me the following error:
Traceback (most recent call last):
  File "tool\ui\main.py", line 508, in updateDevices
    upnp_script.main("server", localHostAddress)
  File "tool\ui\upnp_script.py", line 90, in main
    reactor.run()
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1191, in run
    self.startRunning(installSignalHandlers=installSignalHandlers)
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 1171, in startRunning
    ReactorBase.startRunning(self)
  File "C:\Python27\lib\site-packages\twisted\internet\base.py", line 683, in startRunning
    raise error.ReactorNotRestartable()
twisted.internet.error.ReactorNotRestartable
Main function of the UPnP script:

def main(mode, iface):
    klass = Server if mode == 'server' else Client
    obj = klass
    obj(iface)
    reactor.run()
There is a server class which is sending the M-SEARCH command (UPnP) for discovering devices:

MS = 'M-SEARCH * HTTP/1.1\r\nHOST: %s:%d\r\nMAN: "ssdp:discover"\r\nMX: 2\r\nST: ssdp:all\r\n\r\n' % (SSDP_ADDR, SSDP_PORT)

In the server class constructor, after sending the M-SEARCH, I am stopping the reactor:

reactor.callLater(10, reactor.stop)

From Google I found that we cannot restart a reactor, because that is a known limitation:
http://twistedmatrix.com/trac/wiki/FrequentlyAskedQuestions#WhycanttheTwistedsreactorberestarted
Please guide me how I can modify my code so that I am able to scan devices more than once and don't get this "reactor not restartable" error.
In response to "Please guide me how can I modify my code...": you haven't provided enough code that I would know how to specifically guide you; I would need to understand the (Twisted part of the) logic around your scan/search.
If I were to offer a generic design/pattern/mental-model for the "twisted reactor", though, I would say think of it as your program's main loop. (Thinking about the reactor that way is what makes the problem obvious to me, anyway.)
I.e. most long-running programs have a form something like:

import time

def main():
    while True:
        check_and_update_some_stuff()
        time.sleep(10)
That same code in twisted is more like:

def main():
    # the LoopingCall adds the given function to the reactor loop
    l = task.LoopingCall(check_and_update_some_stuff)
    l.start(10.0)

    reactor.run()  # <--- this is the endless while loop
If you think of the reactor as "the endless loop that makes up the main() of my program", then you'll understand why no one is bothering to add support for "restarting" the reactor. Why would you want to restart an endless loop? Instead of stopping the core of your program, you should only surgically stop the task inside it that is complete, leaving the main loop untouched.
You seem to be implying that the current code will keep sending M-SEARCHes endlessly while the reactor is running. So change your sending code so that it stops repeating the send. (I can't tell you exactly how to do this because you didn't provide the code, but for instance, a LoopingCall can be turned off by calling its .stop method.)
Runnable example as follows:
#!/usr/bin/python
from twisted.internet import task
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, ServerFactory

class PollingIOThingy(object):
    def __init__(self):
        self.sendingcallback = None  # Note I'm pushing sendToAll into here in main()
        self.l = None                # Also being pushed in from main()
        self.iotries = 0

    def pollingtry(self):
        self.iotries += 1
        if self.iotries > 5:
            print "stopping this task"
            self.l.stop()
            return
        print "Polling runs: " + str(self.iotries)
        if self.sendingcallback:
            self.sendingcallback("Polling runs: " + str(self.iotries) + "\n")

class MyClientConnections(Protocol):
    def connectionMade(self):
        print "Got new client!"
        self.factory.clients.append(self)

    def connectionLost(self, reason):
        print "Lost a client!"
        self.factory.clients.remove(self)

class MyServerFactory(ServerFactory):
    protocol = MyClientConnections

    def __init__(self):
        self.clients = []

    def sendToAll(self, message):
        for c in self.clients:
            c.transport.write(message)

# Normally I would define a class of ServerFactory here but I'm going to
# hack it into main() as they do in the twisted chat, to make things shorter

def main():
    client_connection_factory = MyServerFactory()
    polling_stuff = PollingIOThingy()

    # the following line is what this example is all about:
    polling_stuff.sendingcallback = client_connection_factory.sendToAll
    # push the client connections send def into my polling class

    # if you want to run something every second (instead of 1 second after
    # the end of your last code run, which could vary) do:
    l = task.LoopingCall(polling_stuff.pollingtry)
    polling_stuff.l = l
    l.start(1.0)
    # from: https://twistedmatrix.com/documents/12.3.0/core/howto/time.html

    reactor.listenTCP(5000, client_connection_factory)
    reactor.run()

if __name__ == '__main__':
    main()
This script has extra cruft in it that you might not care about, so just focus on the self.l.stop() in PollingIOThingy's pollingtry method, and the l-related stuff in main(), to see the point illustrated.
(This code comes from SO: Persistent connection in twisted; check that question if you want to know what the extra bits are about.)
I am using Python 2.7 and Jenkins.
I am writing some code in Python that will perform a check-in and wait/poll for the Jenkins job to be complete. I would like some thoughts on how I can achieve it:
1. Python function to create a check-in in Perforce. This can be easily done, as P4 has a CLI.
2. Python code to detect when a build got triggered. I have the changelist and the job number. How do I poll the Jenkins API for the build log to check if it has the appropriate changelists? The output of this step is a build URL which is carrying out the job.
3. How do I wait till the Jenkins job is complete?
4. Can I use snippets from the Jenkins REST API or from the Python Jenkins module?
If you need to know whether the job is finished, the buildNumber and buildTimestamp are not enough.
This is the gist of how I find out if a job is complete. I have it in ruby but not python, so perhaps someone could update this into real code (a Python sketch follows after the pseudocode):
lastBuild = get jenkins/job/myJob/lastBuild/buildNumber
get jenkins/job/myJob/build?token=gogogo

currentBuild = get jenkins/job/myJob/lastBuild/buildNumber
while currentBuild == lastBuild
    sleep 1
    currentBuild = get jenkins/job/myJob/lastBuild/buildNumber

buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
while buildInfo["freeStyleBuild/building"] == true
    buildInfo = get jenkins/job/myJob/[currentBuild]/api/xml?depth=0
    sleep 1
I.e. I found I needed to (A) wait until the build starts (new build number) and (B) wait until the building finishes (building is false).
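Since the pseudocode above invites a translation into real code, here is a rough Python sketch of the same flow. It assumes the requests library plus a hypothetical Jenkins URL, credentials, and trigger token, and uses the JSON flavor of the same build API instead of XML:

import time

import requests

# Hypothetical values: adjust the job URL, credentials and trigger token for your setup.
JOB_URL = "http://jenkins.example.com/job/myJob"
AUTH = ("user", "api_token")

def trigger_and_wait():
    # remember the current build number before triggering
    last_build = requests.get(JOB_URL + "/lastBuild/buildNumber", auth=AUTH).text

    # trigger the job (remote triggering with a token must be enabled on the job)
    requests.post(JOB_URL + "/build?token=gogogo", auth=AUTH)

    # A) wait until the build starts, i.e. a new build number appears
    current_build = last_build
    while current_build == last_build:
        time.sleep(1)
        current_build = requests.get(JOB_URL + "/lastBuild/buildNumber", auth=AUTH).text

    # B) wait until the building finishes, i.e. "building" flips to false
    info_url = "%s/%s/api/json?depth=0" % (JOB_URL, current_build)
    info = requests.get(info_url, auth=AUTH).json()
    while info["building"]:
        time.sleep(1)
        info = requests.get(info_url, auth=AUTH).json()
    return info["result"]  # e.g. "SUCCESS" or "FAILURE"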
You can query the last build timestamp to determine whether the build finished. Compare it to what it was just before you triggered the build, and see when it changes. To get the timestamp, add /lastBuild/buildTimestamp to your job URL.
As a matter of fact, in your Jenkins, add /lastBuild/api/ to any job, and you will see a lot of API information. It even has a Python API, but I am not familiar with that, so I can't help you further.
However, if you were using XML, you can add lastBuild/api/xml?depth=0, and inside the XML you can see the <changeSet> object with the list of revisions/commit messages that triggered the build; a minimal sketch of reading it follows below.
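For illustration, here is a small sketch of pulling that XML and printing the commit messages. The requests library, the job URL, and the credentials are assumptions, and the <changeSet>/<item>/<msg> layout shown is what a freestyle build with SCM changes typically returns:

import xml.etree.ElementTree as ET

import requests

# Hypothetical job URL and credentials.
resp = requests.get("http://jenkins.example.com/job/myJob/lastBuild/api/xml?depth=0",
                    auth=("user", "api_token"))
root = ET.fromstring(resp.text)
# Each <item> under <changeSet> is one commit that triggered the build.
for item in root.findall("./changeSet/item"):
    print(item.findtext("msg"))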
Simple solution using invoke and block_until_complete methods (tested with Python 3.7)
import jenkinsapi
from jenkinsapi.jenkins import Jenkins
...
server = Jenkins(jenkinsUrl, username=jenkinsUser,
                 password=jenkinsToken, ssl_verify=sslVerifyFlag)
job = server.create_job(jobName, None)
queue = job.invoke()
queue.block_until_complete()
Inspired by a test method in pycontribs.
This snippet starts a build job and waits until the job is done.
It is easy to start the job, but we need some kind of logic to know when the job is done. First we need to wait for the job ID to be applied, and then we can query the job for details:
import time

from jenkinsapi import jenkins

server = jenkins.Jenkins(jenkinsurl, username=username, password='******')
job = server.get_job(j_name)
prev_id = job.get_last_buildnumber()
server.build_job(j_name)

while True:
    print('Waiting for build to start...')
    if prev_id != job.get_last_buildnumber():
        break
    time.sleep(3)

print('Running...')
last_build = job.get_last_build()
while last_build.is_running():
    time.sleep(1)

print(str(last_build.get_status()))
Don't know if this was available at the time of the question, but the jenkinsapi module's Job.invoke() and/or Jenkins.build_job() return a QueueItem object, which can block_until_building() or block_until_complete():
jobq = server.build_job(job_name, job_params)
jobq.block_until_building()
print("Job %s (%s) is building." % (jobq.get_job_name(), jobq.get_build_number()))
jobq.block_until_complete(5) # check every 5s instead of the default 15
print("Job complete, %s" % jobq.get_build().get_status())
Was going through the same problem, and this worked for me, using Python 3 and python-jenkins:
while "".join([d['color'] for d in j.get_jobs() if d['name'] == "job_name"]) == 'blue_anime':
print('Job is Running')
time.sleep(1)
print('Job Over!!')
Working Github Script: Link
This is working for me:
#!/usr/bin/env python
import jenkins
import time

server = jenkins.Jenkins('https://jenkinsurl/', username='xxxxx', password='xxxxxx')
j_name = 'test'
server.build_job(j_name, {'testparam1': 'test', 'testparam2': 'test'})
while True:
    print('Running....')
    if server.get_job_info(j_name)['lastCompletedBuild']['number'] == server.get_job_info(j_name)['lastBuild']['number']:
        print("Last ID %s, Current ID %s" % (server.get_job_info(j_name)['lastCompletedBuild']['number'],
                                             server.get_job_info(j_name)['lastBuild']['number']))
        break
    time.sleep(3)
print('Stop....')
console_output = server.get_build_console_output(j_name, server.get_job_info(j_name)['lastBuild']['number'])
print(console_output)
The main issue is that build_job doesn't return the number of the job; it returns the number of a queue item (which only lasts 5 minutes). So the trick is:
1. build_job
2. get the queue number
3. with the queue number, get the job number
4. now we know the name of the job and the job number
5. get_job_info and loop over the builds till we find one with our job number
6. check the status
So I made a function for it, with a time_out:
import time
from datetime import datetime, timedelta

import jenkins

def launch_job(jenkins_connection, job_name, parameters={}, wait=False, interval=30, time_out=7200):
    """
    Create a Jenkins job and wait for the job to finish.
    :param jenkins_connection: Jenkins server (jenkins object)
    :param job_name: the name of the job we want to create and watch until it finishes (string)
    :param parameters: the parameters of the job to build (dictionary)
    :param wait: whether we want to wait for the job to finish or not (bool)
    :param interval: how often we want to monitor, in seconds (int)
    :param time_out: break the loop after X seconds (int)
    :return: build number (int), or the build info dictionary when wait is True and the job finished
    """
    # we launch the job; build_job returns a queue_id
    job_id = jenkins_connection.build_job(job_name, parameters)
    # from the queue_id we get the build number that was created
    queue_job = jenkins_connection.get_queue_item(job_id, depth=0)
    build_number = queue_job["executable"]["number"]
    print(f"job_name: {job_name} build_number: {build_number}")
    if wait is True:
        now = datetime.now()
        later = now + timedelta(seconds=time_out)
        while True:
            # we check the current time vs the timeout (later)
            if datetime.now() > later:
                raise ValueError(f"Job: {job_name}:{build_number} has been running for more than {time_out}; "
                                 f"we stop monitoring the job, you can check it in Jenkins")
            b = jenkins_connection.get_job_info(job_name, depth=1, fetch_all_builds=False)
            for i in b["builds"]:
                loop_id = i["id"]
                if int(loop_id) == build_number:
                    result = i["result"]
                    print(f"result: {result}")  # in the JSON this looks like null until the build finishes
                    if result is not None:
                        return i
            time.sleep(interval)
    return build_number
After we ask Jenkins to build the job, we get the queue number, from that the job number, and then loop over the info and check the status until it changes from None to something else.
If it works, it returns the dictionary with the information of that job. (I hope the jenkins library implements something like this.)
I am trying to run tests on Django with coverage. It works fine, but it doesn't detect class definitions, because they are defined before coverage is started. I use the following test runner when I compute coverage:
import sys
import os
import logging
from django.conf import settings

MAIN_TEST_RUNNER = 'django.test.simple.run_tests'

if settings.COMPUTE_COVERAGE:
    try:
        import coverage
    except ImportError:
        print "Warning: coverage module not found: test code coverage will not be computed"
    else:
        coverage.exclude('def __unicode__')
        coverage.exclude('if DEBUG')
        coverage.exclude('if settings.DEBUG')
        coverage.exclude('raise')
        coverage.erase()
        coverage.start()
        MAIN_TEST_RUNNER = 'django-test-coverage.runner.run_tests'

def run_tests(test_labels, verbosity=1, interactive=True, extra_tests=[]):
    # start coverage - if we start it already here and stop it in
    # django-test-coverage, we get correctly computed coverage for
    # statements executed at module import time
    test_path = MAIN_TEST_RUNNER.split('.')
    # Allow for Python 2.5 relative paths
    if len(test_path) > 1:
        test_module_name = '.'.join(test_path[:-1])
    else:
        test_module_name = '.'
    test_module = __import__(test_module_name, {}, {}, test_path[-1])
    test_runner = getattr(test_module, test_path[-1])
    failures = test_runner(test_labels, verbosity=verbosity, interactive=interactive)
    if failures:
        sys.exit(failures)
What can I do to have classes also included in coverage? Otherwise I have quite low coverage, and I can't easily detect the places that really need to be covered.
The simplest thing to do is to use coverage to execute the test runner. If your runner is called "runner.py", then use:
coverage run runner.py
You can put your four exclusions into a .coveragerc file, and you'll keep all of the benefits of your coverage configuration without having to keep any of the coverage code in your test runner.
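For reference, a .coveragerc carrying those four exclusions might look like the sketch below. Note that exclude_lines entries are regexes matched against source lines, and that setting the option replaces the built-in default, so the standard "pragma: no cover" marker is usually re-added:

[report]
exclude_lines =
    pragma: no cover
    def __unicode__
    if DEBUG
    if settings.DEBUG
    raise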