I am running Django on Twisted. I have a special variable, my engine, which is passed to each request. Take a look at the following code:
import os
import sys

from twisted.internet import reactor
from twisted.python import threadpool
from twisted.web import server, wsgi

# Django setup
sys.path.append("shoout_web")
os.environ['DJANGO_SETTINGS_MODULE'] = 'shoout_web.settings'
from django.core.handlers.wsgi import WSGIHandler

def wsgi_resource():
    pool = threadpool.ThreadPool()
    pool.start()
    # Allow Ctrl-C to get you out cleanly:
    reactor.addSystemEventTrigger('after', 'shutdown', pool.stop)
    generic = WSGIHandler()
    def wrapper(environ, start_response):
        # 'engine' is defined elsewhere in this module
        environ['engine'] = engine
        return generic(environ, start_response)
    wsgi_resource = wsgi.WSGIResource(reactor, pool, wrapper)
    return wsgi_resource

wsgi_root = wsgi_resource()
reactor.listenTCP(DJANGO_PORT, server.Site(wsgi_root, logPath=os.path.join(log_dir, '.django.log')))
Note the line "environ['engine'] = engine".
Right now I am interested in writing tests for all my Django views. How should I go about doing this?
Sample view function:
from django.http import HttpResponse
from twisted.internet import reactor
from twisted.internet.threads import blockingCallFromThread
import cjson

def push_message(request):
    engine = request.META['engine']
    if request.method == "POST":
        user_hexid = request.session['user_hexid']
        room_hexid = request.POST['room_hexid']
        message_body = request.POST['message_body']
        ret = blockingCallFromThread(reactor, engine.push_public_message,
                                     user_hexid, room_hexid, message_body)
        return HttpResponse(cjson.encode({'thread_hexid': ret}))
EDIT:
Just to clear up some doubts: I don't think I can put the engine in settings, because the engine is actually a Twisted server listening on a specific port.
Apparently it's not documented, but from looking at the test client code, you can pass extra environ keys using the defaults keyword argument:
client = Client(defaults={'engine': engine})
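For example, a minimal sketch of a view test built on that, using a mock engine and stubbing out blockingCallFromThread so that no running reactor is needed (the module path shoout_web.views, the URL, and the hex values are assumptions):

import mock  # standalone mock package on Python 2
from django.test import TestCase
from django.test.client import Client

class PushMessageTest(TestCase):
    def test_push_message_returns_thread_hexid(self):
        engine = mock.Mock()  # stand-in for the real Twisted engine
        client = Client(defaults={'engine': engine})
        # Put a user id in the session before posting
        session = client.session
        session['user_hexid'] = 'deadbeef'
        session.save()
        # Patch the reactor call so the view runs without Twisted
        with mock.patch('shoout_web.views.blockingCallFromThread',
                        return_value='cafe01'):
            response = client.post('/push_message/', {
                'room_hexid': 'room42',
                'message_body': 'hello',
            })
        self.assertEqual(response.status_code, 200)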
I'd like to host multiple Dash apps on a Flask server. Each Dash app should be accessible with a login and password.
Some users can access different Dash apps.
I tried dash_auth.BasicAuth. It works perfectly, but only for one Dash app.
So I tried to authenticate with flask_httpauth. Here again, it works well for one dashboard, but not for two or more, because of blueprints.
My flask_app.py:
import dash
from flask import Flask, render_template, redirect, Blueprint
import dash_bootstrap_components as dbc
from flask_httpauth import HTTPDigestAuth

from apps.dashboard import Dashboard

app = Flask(__name__)

@app.route('/')
def hello_world():
    return 'Hello from Flask!'

# other routes

auth = HTTPDigestAuth()

users = {
    "john": "hello",
    "susan": "bye"
}

@auth.get_password
def get_pw(username):
    if username in users:
        return users.get(username)
    return None

url1 = '/dashboard1'
dash_app1 = dash.Dash(__name__, server=app, external_stylesheets=[dbc.themes.BOOTSTRAP])
dash_app1.config.suppress_callback_exceptions = True
dash_app1.layout = Dashboard(dash_app1, 'data1', 'Title1', url1).layout

@app.route(url1)
@app.route(url1 + '/')
@app.route('/dash1')
@auth.login_required
def render_dashboard1():
    return dash_app1.index()

url2 = '/dashboard2'
dash_app2 = dash.Dash(name='app2', server=app, external_stylesheets=[dbc.themes.BOOTSTRAP])
dash_app2.config.suppress_callback_exceptions = True
dash_app2.layout = Dashboard(dash_app2, 'data2', 'Title2', url2).layout

@app.route(url2)
@app.route(url2 + '/')
@app.route('/dash2')
@auth.login_required
def render_dashboard2():
    return dash_app2.index()

if __name__ == '__main__':
    app.run(debug=True)
The error:
ValueError: The name '_dash_assets' is already registered for a different blueprint. Use 'name=' to provide a unique name.
I understand that a blueprint is created at each Dash app creation. After the first one,
print(app.blueprints)
returns
{'_dash_assets': <Blueprint '_dash_assets'>}
How can I give a different blueprint name to each Dash app created? Or, more generally, how can I manage authentication for several Dash apps running on one Flask server?
EDIT:
I can solve this problem by using this argument at dashboard creation:
url_base_pathname = '/fake-url/'
But it leads to another problem: I can't protect this route with
@app.route('/fake-url/')
@auth.login_required(role=['admin'])
def render_dashboard():
    return dash_app.app.index()
So the question is: how can I protect the route used in the Dash creation with the argument url_base_pathname?
You may have already solved this by now, but I will leave the solution here for the community. First you will need to set url_base_pathname, for example:
dash_app2 = dash.Dash(
    name='app2',
    server=app,
    url_base_pathname='/your_url_of_choice/',
    external_stylesheets=[dbc.themes.BOOTSTRAP])
This will resolve that error.
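To then protect the Dash pages themselves, one option is to wrap every Flask view function that Dash registered under that base pathname with the auth decorator (a sketch of what dash_auth effectively does; the pathname is an assumption):

# Wrap each view function Dash registered under the base pathname
for view_name, view_func in list(app.view_functions.items()):
    if view_name.startswith('/your_url_of_choice/'):
        app.view_functions[view_name] = auth.login_required(view_func)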
My question is about setting a proxy in Selenium (3.4.3), coding in Python (2.7), for Firefox (geckodriver v0.18.0-win64).
The spec at
http://www.seleniumhq.org/docs/04_webdriver_advanced.jsp
provides only a Java example.
from selenium import webdriver

PROXY = "94.56.171.137:8080"

class Proxy(object):

    def __call__(self):
        self.base_url = "https://whatismyip.com"
        print self.base_url
        # proxy json object
        desired_capability = webdriver.DesiredCapabilities.FIREFOX['proxy'] = {
            "httpProxy": PROXY,
            "ftpProxy": PROXY,
            "sslProxy": PROXY,
            #"noProxy": None,
            "proxyType": "manual"
        }
        firefox_profile = webdriver.FirefoxProfile()
        firefox_profile.set_preference("browser.privatebrowsing.autostart", True)
        self.driver = webdriver.Firefox(executable_path='D:\Code\Drivers\geckodriver',
                                        firefox_profile=firefox_profile,
                                        capabilities=desired_capability)
        self.driver.get(self.base_url)

if __name__ == "__main__":
    proxy_test = Proxy()
    proxy_test()
I am getting the following Error Message:
selenium.common.exceptions.WebDriverException: Message: Can't load the
profile. Possible firefox version mismatch. You must use GeckoDriver
instead for Firefox 48+.
If I comment out the proxy code, I am able to get the page in private mode, as the profile specifies. I think it is the proxy that is messing things up.
Yaso's answer didn't work for me; instead I used this:
proxyString = "ip:port"
desired_capability = webdriver.DesiredCapabilities.FIREFOX
desired_capability['proxy'] = {
    "proxyType": "manual",
    "httpProxy": proxyString,
    "ftpProxy": proxyString,
    "sslProxy": proxyString
}
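You can then pass those capabilities straight to the driver; a minimal sketch (the geckodriver path is an assumption):

driver = webdriver.Firefox(capabilities=desired_capability,
                           executable_path='geckodriver')
driver.get("https://whatismyip.com")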
I spent hours finding an answer and I want to share it.
The simple problem was in the proxy specification. Initially the proxy and port were one string:
PROXY = "94.56.171.137:8080"
The answer is to make the port a separate number:
PROXY = "94.56.171.137"
PORT = 8080
Here is the rest of the code:
from selenium import webdriver

PROXY = "94.56.171.137"
PORT = 8080

class Proxy(object):

    def __call__(self):
        self.base_url = "https://whatismyip.com"
        print self.base_url
        # https://github.com/mozilla/geckodriver
        # proxy json object
        desired_capability = webdriver.DesiredCapabilities.FIREFOX
        desired_capability['proxy'] = {
            "proxyType": "manual",
            "httpProxy": PROXY,
            "httpProxyPort": PORT,
            "ftpProxy": PROXY,
            "ftpProxyPort": PORT,
            "sslProxy": PROXY,
            "sslProxyPort": PORT
        }
        firefox_profile = webdriver.FirefoxProfile()
        firefox_profile.set_preference("browser.privatebrowsing.autostart", True)
        self.driver = webdriver.Firefox(executable_path='D:\Drivers\geckodriver',
                                        firefox_profile=firefox_profile,
                                        capabilities=desired_capability)
        self.driver.get(self.base_url)

if __name__ == "__main__":
    proxy_test = Proxy()
    proxy_test()
Is it possible to create an integration test of a Scrapy pipeline? I can't figure out how to do this. In particular, I am trying to write a test for the FilesPipeline, and I also want it to persist my mocked response to Amazon S3.
Here is my test:
def _mocked_download_func(request, info):
    # note: request.url, not response.url (no response is defined here)
    return Response(url=request.url, status=200, body="test", request=request)


class FilesPipelineTests(unittest.TestCase):

    def setUp(self):
        self.settings = get_project_settings()
        crawler = Crawler(self.settings)
        crawler.configure()
        self.pipeline = FilesPipeline.from_crawler(crawler)
        self.pipeline.open_spider(None)
        self.pipeline.download_func = _mocked_download_func

    @defer.inlineCallbacks
    def test_file_should_be_directly_available_from_s3_when_processed(self):
        item = CrawlResult()
        item['id'] = "test"
        item['file_urls'] = ['http://localhost/test']
        result = yield self.pipeline.process_item(item, None)
        self.assertEquals(result['files'][0]['path'], "full/002338a87aab86c6b37ffa22100504ad1262f21b")
I always run into the following error:
DirtyReactorAggregateError: Reactor was unclean.
How do I create a proper test using twisted and scrapy?
Up to now I have done my pipeline tests without the call to from_crawler, so they are not ideal because they do not test the functionality of from_crawler, but they work.
I do them by using an empty Spider instance:
from scrapy.spiders import Spider
# some other imports for my own stuff and standard libs

@pytest.fixture
def mqtt_client():
    client = mock.Mock()
    return client

def test_mqtt_pipeline_does_return_item_after_process(mqtt_client):
    spider = Spider(name='spider')
    pipeline = MqttOutputPipeline(mqtt_client, 'dummy-namespace')

    item = BasicItem()
    item['url'] = 'http://example.com/'
    item['source'] = 'dummy source'

    ret = pipeline.process_item(item, spider)
    assert ret is not None
(in fact, I forgot to call open_spider())
You can also have a look at how scrapy itself does the testing of pipelines, e.g. for MediaPipeline:
class BaseMediaPipelineTestCase(unittest.TestCase):

    pipeline_class = MediaPipeline
    settings = None

    def setUp(self):
        self.spider = Spider('media.com')
        self.pipe = self.pipeline_class(download_func=_mocked_download_func,
                                        settings=Settings(self.settings))
        self.pipe.open_spider(self.spider)
        self.info = self.pipe.spiderinfo

    def test_default_media_to_download(self):
        request = Request('http://url')
        assert self.pipe.media_to_download(request, self.info) is None
You can also have a look through their other unit tests. For me, these are always good inspiration for how to unit test Scrapy components.
If you want to test the from_crawler function too, you could have a look at their middleware tests. In these tests, they often use from_crawler to create middlewares, e.g. for OffsiteMiddleware:
from scrapy.spiders import Spider
from scrapy.utils.test import get_crawler

class TestOffsiteMiddleware(TestCase):

    def setUp(self):
        crawler = get_crawler(Spider)
        self.spider = crawler._create_spider(**self._get_spiderargs())
        self.mw = OffsiteMiddleware.from_crawler(crawler)
        self.mw.spider_opened(self.spider)
I assume the key component here is to call get_crawler from scrapy.utils.test. It seems they factored out some of the calls you need to make in order to have a testing environment.
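Following that pattern, a from_crawler test for the FilesPipeline from the question could look roughly like this (a sketch; the settings value and spider name are assumptions):

from scrapy.spiders import Spider
from scrapy.utils.test import get_crawler

def test_files_pipeline_from_crawler(tmpdir):
    # FILES_STORE is required for FilesPipeline.from_crawler to succeed
    crawler = get_crawler(Spider, settings_dict={'FILES_STORE': str(tmpdir)})
    pipeline = FilesPipeline.from_crawler(crawler)
    pipeline.download_func = _mocked_download_func
    spider = crawler._create_spider('example')
    pipeline.open_spider(spider)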
The following code is written using the Selenium Python WebDriver and is run in Sauce Labs. I am providing the browser name, version and platform in a list; how do I do the same by providing the browser details through command line arguments? I am using py.test to execute the test cases.
import os
import sys
import httplib
import base64
import json
import new
import unittest
import sauceclient
from selenium import webdriver
from sauceclient import SauceClient

# it's best to remove the hardcoded defaults and always get these values
# from environment variables
USERNAME = os.environ.get('SAUCE_USERNAME', "ranjanprabhub")
ACCESS_KEY = os.environ.get('SAUCE_ACCESS_KEY', "ecec4dd0-d8da-49b9-b719-17e2c43d0165")

sauce = SauceClient(USERNAME, ACCESS_KEY)

browsers = [{"platform": "Mac OS X 10.9",
             "browserName": "chrome",
             "version": ""},
            ]


def on_platforms(platforms):
    def decorator(base_class):
        module = sys.modules[base_class.__module__].__dict__
        for i, platform in enumerate(platforms):
            d = dict(base_class.__dict__)
            d['desired_capabilities'] = platform
            name = "%s_%s" % (base_class.__name__, i + 1)
            module[name] = new.classobj(name, (base_class,), d)
    return decorator


@on_platforms(browsers)
class SauceSampleTest(unittest.TestCase):

    def setUp(self):
        self.desired_capabilities['name'] = self.id()

        sauce_url = "http://%s:%s@ondemand.saucelabs.com:80/wd/hub"
        self.driver = webdriver.Remote(
            desired_capabilities=self.desired_capabilities,
            command_executor=sauce_url % (USERNAME, ACCESS_KEY)
        )
        self.driver.implicitly_wait(30)

    def test_sauce(self):
        self.driver.get('http://saucelabs.com/test/guinea-pig')
        assert "I am a page title - Sauce Labs" in self.driver.title
        comments = self.driver.find_element_by_id('comments')
        comments.send_keys('Hello! I am some example comments.'
                           ' I should be in the page after submitting the form')
        self.driver.find_element_by_id('submit').click()
        commented = self.driver.find_element_by_id('your_comments')
        assert ('Your comments: Hello! I am some example comments.'
                ' I should be in the page after submitting the form'
                in commented.text)
        body = self.driver.find_element_by_xpath('//body')
        assert 'I am some other page content' not in body.text
        self.driver.find_elements_by_link_text('i am a link')[0].click()
        body = self.driver.find_element_by_xpath('//body')
        assert 'I am some other page content' in body.text

    def tearDown(self):
        print("Link to your job: https://saucelabs.com/jobs/%s" % self.driver.session_id)
        try:
            if sys.exc_info() == (None, None, None):
                sauce.jobs.update_job(self.driver.session_id, passed=True)
            else:
                sauce.jobs.update_job(self.driver.session_id, passed=False)
        finally:
            self.driver.quit()
So this is a bit complicated, because you can pass an array of browsers into the @on_platforms decorator. My solution will only work for a single browser, as it looks like that's what you're doing right now.
For the current single-browser situation, you're looking for argparse. Here's my suggested fix:
import argparse

def setup_parser():
    parser = argparse.ArgumentParser(description='Automation Testing!')
    parser.add_argument('-p', '--platform', help='Platform for desired_caps', default='Mac OS X 10.9')
    parser.add_argument('-b', '--browser-name', help='Browser Name for desired_caps', default='chrome')
    parser.add_argument('-v', '--version', default='')
    args = vars(parser.parse_args())
    return args

desired_caps = setup_parser()
browsers = [desired_caps]
print browsers
But if you're looking to test multiple browsers (which I suggest you do!), you should not try to use command line arguments for the desired_caps of each individual browser. You should instead load a JSON config file describing the browsers and the desired_caps for each one that you want Sauce to run.
Maybe have a different config file for each set of browsers, and then use command line arguments to pass in the config files you want to load, as sketched below.
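A minimal sketch of that idea (the file name and structure are assumptions):

import argparse
import json

parser = argparse.ArgumentParser(description='Automation Testing!')
parser.add_argument('-c', '--config', default='browsers.json',
                    help='Path to a JSON file with a list of desired_caps dicts')
args = parser.parse_args()

# browsers.json might contain, e.g.:
# [{"platform": "Mac OS X 10.9", "browserName": "chrome", "version": ""}]
with open(args.config) as f:
    browsers = json.load(f)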
I am trying to access the application configuration inside a blueprint, authorisation.py, which is in a package, api. I initialize the blueprint in __init__.py, which is imported in authorisation.py.
__init__.py
from flask import Blueprint

api_blueprint = Blueprint("xxx.api", __name__, None)

from api import authorisation
authorisation.py
from flask import request, jsonify, current_app

from ..oauth_adapter import OauthAdapter
from api import api_blueprint as api

client_id = current_app.config.get('CLIENT_ID')
client_secret = current_app.config.get('CLIENT_SECRET')
scope = current_app.config.get('SCOPE')
callback = current_app.config.get('CALLBACK')

auth = OauthAdapter(client_id, client_secret, scope, callback)

@api.route('/authorisation_url')
def authorisation_url():
    url = auth.get_authorisation_url()
    return str(url)
I am getting RuntimeError: working outside of application context.
I understand why that is, but then what is the correct way of accessing those configuration settings?
----Update----
Temporarily, I have done this:
@api.route('/authorisation_url')
def authorisation_url():
    client_id, client_secret, scope, callback = config_helper.get_config()
    auth = OauthAdapter(client_id, client_secret, scope, callback)
    url = auth.get_authorisation_url()
    return str(url)
Use flask.current_app in place of app in the blueprint view:
from flask import current_app

@api.route("/info")
def get_account_num():
    num = current_app.config["INFO"]
The current_app proxy is only available in the context of a request.
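If you need the config outside of a request (during setup or in a test, for instance), you can push an application context yourself, assuming the app object is importable:

with app.app_context():
    num = current_app.config["INFO"]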
Overloading the record method seems to be quite easy:
api_blueprint = Blueprint('xxx.api', __name__, None)
api_blueprint.config = {}

@api_blueprint.record
def record_params(setup_state):
    app = setup_state.app
    api_blueprint.config = dict([(key, value) for (key, value) in app.config.iteritems()])
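After the blueprint is registered, views can read from that copied config; a small sketch (the key is an assumption):

@api_blueprint.route('/client')
def client_id():
    return api_blueprint.config.get('CLIENT_ID', '')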
To build on tbicr's answer, here's an example overriding the register method:
from flask import Blueprint

auth = None

class RegisteringExampleBlueprint(Blueprint):
    def register(self, app, options, first_registration=False):
        global auth

        config = app.config
        client_id = config.get('CLIENT_ID')
        client_secret = config.get('CLIENT_SECRET')
        scope = config.get('SCOPE')
        callback = config.get('CALLBACK')

        auth = OauthAdapter(client_id, client_secret, scope, callback)

        super(RegisteringExampleBlueprint,
              self).register(app, options, first_registration)

the_blueprint = RegisteringExampleBlueprint('example', __name__)
And an example using the record decorator:
from flask import Blueprint
from api import api_blueprint as api

auth = None

# Note there's also a record_once decorator
@api.record
def record_auth(setup_state):
    global auth

    config = setup_state.app.config
    client_id = config.get('CLIENT_ID')
    client_secret = config.get('CLIENT_SECRET')
    scope = config.get('SCOPE')
    callback = config.get('CALLBACK')

    auth = OauthAdapter(client_id, client_secret, scope, callback)
Blueprints have a register method which is called when you register the blueprint, so you can override this method or use the record decorator to describe logic which depends on the app.
The current_app approach is fine, but you must have some request context. If you don't have one (in pre-work such as testing, for example), you'd better place
with app.test_request_context('/'):
before the current_app call. Otherwise you will get RuntimeError: working outside of application context.
You either need to import the main app variable (or whatever you have called it) that is returned by Flask():
from someplace import app

app.config.get('CLIENT_ID')
Or do that from within a request:
@api.route('/authorisation_url')
def authorisation_url():
    client_id = current_app.config.get('CLIENT_ID')
    url = auth.get_authorisation_url()
    return str(url)
You could also wrap the blueprint in a function and pass the app as an argument:
Blueprint:
def get_blueprint(app):
    bp = Blueprint('my_blueprint', __name__)  # a Blueprint needs a name and an import name
    return bp
Main:
from . import my_blueprint
app.register_blueprint(my_blueprint.get_blueprint(app))
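A slightly fuller sketch of that factory, actually reading the config when the blueprint is created (all names are illustrative):

from flask import Blueprint

def get_blueprint(app):
    bp = Blueprint('api', __name__)
    client_id = app.config.get('CLIENT_ID')  # read once, at creation time

    @bp.route('/authorisation_url')
    def authorisation_url():
        return str(client_id)

    return bp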
I know this is an old thread, but while writing a Flask service I used a method like this. It's longer than the solutions above, but it gives you the possibility of using a customized class yourself. And frankly, I like to write services like this.
Step 1:
I added a metaclass in a separate module file with which we can make classes singletons. I got this class structure from an already-discussed thread: Creating a singleton in Python
class Singleton(type):
    _instances = {}

    def __call__(cls, *args, **kwargs):
        if cls not in cls._instances:
            cls._instances[cls] = super(Singleton, cls).__call__(*args, **kwargs)
        else:
            cls._instances[cls].__init__(*args, **kwargs)
        return cls._instances[cls]
Step 2:
Then I created a singleton EnvironmentService class from the Singleton metaclass defined above, just for our purpose. Instead of recreating such classes, create them once and import them in other modules, routes, etc.; we can access the class through the same reference.
from flask import Config

from src.core.metaclass.Singleton import Singleton


class EnvironmentService(metaclass=Singleton):
    __env: Config = None

    def initialize(self, env):
        self.__env = env
        return EnvironmentService()

    def get_all(self):
        return self.__env.copy()

    def get_one(self, key):
        return self.__env.get(key)
Step 3:
Now we include the service in the application, in our project root directory. This should be done before the routes.
from flask import Flask

from src.services.EnvironmentService import EnvironmentService

app = Flask(__name__)

# Here is our service
env = EnvironmentService().initialize(app.config)

# Your routes...
Usage:
Yes, we can now access our service from other routes:
from src.services.EnvironmentService import EnvironmentService

key = EnvironmentService().get_one("YOUR_KEY")