As far as I know, a functional test should be performed within an app context. For this I am using this module-scoped fixture:
@pytest.fixture(scope='module')
def test_client():
    flask_app = create_app("ENV_FILE_LOCATION")

    # Create a test client using the Flask application configured for testing
    with flask_app.test_client() as testing_client:
        # Establish an application context
        with flask_app.app_context():
            yield testing_client  # this is where the testing happens!
Then I use the test_client fixture to test each route, for example:
def test_route(test_client):
    response = test_client.get('/test/url')
    # test response
Testing most routes works fine this way. However, for one route that itself calls another route, I get this error:
requests.exceptions.ConnectionError:
HTTPConnectionPool(host='0.0.0.0', port=5500): Max retries exceeded
with url: /test/nested_route_call
(Caused by NewConnectionError('<urllib3.connection.HTTPConnection
object at 0x7f56f3c30580>: Failed to establish a new connection:
[Errno 111] Connection refused'))
This makes sense, since the test client does not start a real server, so the nested HTTP call is made to an address nothing is listening on.
When I run the tests while the app is running, everything works correctly. However, since I must incorporate my tests into a pre-build process, the app cannot be running then. Is there any solution to this issue?
As @TheifMaster said, we can refactor the shared logic into a separate function instead of having one route call another over HTTP.
However, there is another solution: the live server fixture from pytest-flask.
Please see:
https://pytest-flask.readthedocs.io/en/stable/features.html
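The live server approach works because it binds a real socket that nested HTTP calls can reach, which the test client never does. Here is a minimal standard-library sketch of the same idea (the DemoHandler class and the URL path are made up for illustration; pytest-flask's live_server does the equivalent for your Flask app):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class DemoHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # keep test output quiet

# port 0 asks the OS for a free port, just like live_server does by default
server = HTTPServer(("127.0.0.1", 0), DemoHandler)
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# a nested HTTP request now reaches a real listening socket
url = "http://127.0.0.1:%d/test/url" % server.server_port
body = urllib.request.urlopen(url).read()
server.shutdown()
```

With pytest-flask you would instead request the `live_server` fixture in your test and build URLs from `live_server.url()`.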
Related
I'm trying to execute a long running function (ex: sleep(30)) after a Django view returns a response. I've tried implementing the solutions suggested to similar questions:
How to execute code in Django after response has been sent
Execute code in Django after response has been sent to the client
However, the client's page load only completes after the long running function completes running when using a WSGI server like gunicorn.
Now that Django supports asynchronous views is it possible to run a long running query asynchronously?
Obviously, I am looking for a solution to the same issue: a view that starts a background task and sends a response to the client without waiting for the started task to finish.
As far as I understand, this is not yet one of the objectives of async views in Django. The problem is that all executed code is tied to the worker that was started to handle the HTTP request. Once the response has been sent back to the client, that worker cannot handle any further code or task started asynchronously in the view. Therefore, all async functions require an await in front of them, and consequently the view will only send its response to the client once the awaited function has finished.
As I understand it, all background tasks must be pushed into a task queue from which another worker can pick up each new task. There are several solutions for this, such as Django Channels or Django Q. However, I am not sure which is the most lightweight.
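The queue-plus-worker pattern described above can be sketched with the standard library alone: the view only enqueues the task and returns immediately, while a separate worker picks it up. All names here are illustrative, not Django or Django Q APIs; in production the worker would be a separate process managed by Django Q, Celery, or Channels:

```python
import queue
import threading
import time

task_queue = queue.Queue()
results = []

def worker():
    while True:
        job = task_queue.get()
        if job is None:           # sentinel: shut the worker down
            break
        results.append(job())     # run the long task out of band
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def view(request=None):
    # enqueue the slow work and answer the client right away
    task_queue.put(lambda: (time.sleep(0.1), "done")[1])
    return "202 Accepted"

response = view()     # returns immediately, before the task finishes
task_queue.join()     # only this sketch waits; a real view would not
task_queue.put(None)
```

The key point is that the HTTP worker and the task worker are decoupled, so the response is not held back by the long-running job.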
This is a follow-up to this question: How to stop flask application without using ctrl-c. The problem is that I didn't understand some of the terminology in the accepted answer, since I'm totally new to this.
import dash
import dash_core_components as dcc
import dash_html_components as html

app = dash.Dash()

app.layout = html.Div(children=[
    html.H1(children='Dash Tutorials'),
    dcc.Graph()
])

if __name__ == '__main__':
    app.run_server(debug=True)
How do I shut this down? My end goal is to run a plotly dashboard on a remote machine, but I'm testing it out on my local machine first.
I guess I'm supposed to "expose an endpoint" (I have no idea what that means) via:
from flask import request

def shutdown_server():
    func = request.environ.get('werkzeug.server.shutdown')
    if func is None:
        raise RuntimeError('Not running with the Werkzeug Server')
    func()

@app.route('/shutdown', methods=['POST'])
def shutdown():
    shutdown_server()
    return 'Server shutting down...'
Where do I include the above code? Should it go in the first block of code I showed (i.e. the code containing the app.run_server command), or should it be separate? And then, what exact steps do I need to take to shut down the server when I want to?
Finally, are the steps to shut down the server the same whether I run the server on a local or remote machine?
Would really appreciate help!
The method in the linked answer, werkzeug.server.shutdown, only works with the development server. Creating a view function, with an assigned URL ("exposing an endpoint") to implement this shutdown function is a convenience thing, which won't work when deployed with a WSGI server like gunicorn.
Maybe that creates more questions than it answers:
I suggest familiarising yourself with Flask's wsgi-standalone deployment docs.
And then probably the gunicorn deployment guide. The monitoring section has a number of different examples of service monitors, which you can use with gunicorn allowing you to run the app in the background, start on reboot, etc.
Ultimately, starting and stopping the WSGI server is the responsibility of the service monitor and logic to do this probably shouldn't be coded into your app.
What works in both cases of
app.run_server(debug=True)
and
app.run_server(debug=False)
anywhere in the code is:
os.kill(os.getpid(), signal.SIGTERM)
(don't forget to import os and signal)
SIGTERM should cause a clean exit of the application.
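To see why this exits cleanly: SIGTERM can be caught, so the process gets a chance to shut down in an orderly way before terminating. A small sketch that intercepts the signal instead of exiting, purely to make the effect observable (the handler and the `received` list are illustrative; in the Dash case the default handler terminates the process, which is exactly the shutdown we want):

```python
import os
import signal
import time

received = []

def on_sigterm(signum, frame):
    # a real app would close resources here and then exit
    received.append(signum)

signal.signal(signal.SIGTERM, on_sigterm)  # intercept instead of dying
os.kill(os.getpid(), signal.SIGTERM)       # what a shutdown route would run
time.sleep(0.05)                           # give the handler a moment to run
```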
First off, this is a follow up question from here: Change number of running spiders scrapyd
I've used PhantomJS and Selenium to create a downloader middleware for my Scrapy project. It works well and hasn't really slowed things down when I run my spiders one at a time locally.
But just recently I put a scrapyd server up on AWS. I noticed a possible race condition that seems to be causing errors and performance issues when more than one spider is running at once. I feel like the problem stems from two separate issues.
1) Spiders trying to use phantomjs executable at the same time.
2) Spiders trying to log to phantomjs's ghostdriver log file at the same time.
Guessing here, the performance issue may be the spiders waiting until the resources become available (this could also be related to the fact that I had a race condition on an SQLite database as well).
Here are the errors I get:
exceptions.IOError: [Errno 13] Permission denied: 'ghostdriver.log' (log file race condition?)
selenium.common.exceptions.WebDriverException: Message: 'Can not connect to GhostDriver' (executable race condition?)
My questions are:
Does my analysis of what the problem(s) are seem correct?
Are there any known solutions to this problem, other than limiting the number of spiders that can be run at a time?
Is there some other way I should be handling javascript? (if you think I should create an entirely new question to discuss the best way to handle javascript with scrapy let me know and I will)
Here is my downloader middleware:
class JsDownload(object):

    @check_spider_middleware
    def process_request(self, request, spider):
        if _platform == "linux" or _platform == "linux2":
            driver = webdriver.PhantomJS(service_log_path='/var/log/scrapyd/ghost.log')
        else:
            driver = webdriver.PhantomJS(executable_path=settings.PHANTOM_JS_PATH)
        driver.get(request.url)
        return HtmlResponse(request.url, encoding='utf-8', body=driver.page_source.encode('utf-8'))
note: the _platform code is a temporary work around until I get this source code deployed into a static environment.
I found solutions on SO for the JavaScript problem, but they were spider-based. This bothered me because it meant every request had to be made once in the download handler and again in the spider. That is why I decided to implement mine as a downloader middleware.
Try using scrapy-webdriver to interface with PhantomJS:
https://github.com/brandicted/scrapy-webdriver
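As for the ghostdriver.log permission error: one way to remove that particular race is to give each spider process its own log path instead of a shared file. A sketch under that assumption; `unique_ghost_log` is a hypothetical helper, not part of Scrapy or Selenium:

```python
import os

def unique_ghost_log(base_dir="/var/log/scrapyd"):
    # one log file per process, so concurrent spiders never
    # contend for the same ghostdriver.log
    return os.path.join(base_dir, "ghostdriver-%d.log" % os.getpid())

# inside JsDownload.process_request it would be used like:
# driver = webdriver.PhantomJS(service_log_path=unique_ghost_log())

path = unique_ghost_log(base_dir="/tmp")
```

This does not address contention for the PhantomJS executable itself, only the log file.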
I've noticed that on occasions where I've run my django project without the PostgreSQL server available, the errors produced are fairly cryptic and often appear to be generated by deep django internals as these are the functions actually connecting to the backend.
Is there a simple, clean (and DRY) way to test that the server is running?
Where is the best place to put project level start up checks?
You can register a signal on class_prepared:
https://docs.djangoproject.com/en/dev/ref/signals/#class-prepared
Then try executing custom SQL directly:
https://docs.djangoproject.com/en/dev/topics/db/sql/#executing-custom-sql-directly
If it fails, raise your custom exception.
import time

from django.core.management.base import BaseCommand
from django.db import connections
from django.db.utils import OperationalError

class Command(BaseCommand):
    """Management command that blocks until the database accepts connections."""

    def handle(self, *args, **options):
        self.stdout.write('Waiting for database...')
        db_conn = None
        while not db_conn:
            try:
                db_conn = connections['default']
                db_conn.cursor()  # actually touch the database
            except OperationalError:
                db_conn = None
                self.stdout.write('Database unavailable, waiting 1 second...')
                time.sleep(1)
        self.stdout.write(self.style.SUCCESS('Database available!'))
You can use this snippet wherever you need it. Before accessing the database and making any queries, you should check that the database is up.
I'm using django-on-tornado to build an application similar to the proposed chat application. All tutorials focus on how to run a Django application on the Tornado server, but how can I test an asynchronous feature that depends on Tornado?
My current test does the following:
Starts a thread that sleeps for some time, then sends a chat message
Does a request to ask for messages
When the request ends, checks that the message arrived and that the elapsed time is compatible with the thread's sleep time
When I run the test (with manage.py test), I get an "AttributeError: 'WSGIRequest' object has no attribute '_tornado_handler'", which is expected, since the _tornado_handler property of the request is set in runtornado command.
Is there a way to make this setup so that I can test the asynchronous feature? I use nose with django_nose plugin for tests.
Actually, django-on-tornado does not change Django's manage.py test command in any way, so Tornado is only invoked via runtornado. You will need to add a command to manage.py called something like "testtornado", with an implementation similar to https://github.com/koblas/django-on-tornado/blob/master/myproject/django_tornado/management/commands/runtornado.py: it should set up _tornado_handler and then proceed with launching your test code.