I am trying to deploy a spreadsheet model with a web page front end using Django. The web "app" flow is simple:
User enters data in a web form
Send form data to a Django backend view function "run_model(request)"
Parse the request object to get the user inputs, then populate named ranges on the Excel model's input sheet using xlwings (via the sheet.range function)
Run "calculate()" on the spreadsheet
Read outputs from another tab in the spreadsheet using xlwings and named ranges (again via the sheet.range function).
The problem is that the connection to the Excel process keeps getting killed by Django (I believe it handles each request as a separate process), so I can get at most one request to work (by importing xlwings inside the view function); when I send a second request, the connection is dead and it won't reactivate.
Basically, how can I keep the connection to the workbook alive between requests or at least re-open a connection for each request?
OK, I ended up implementing a simple "spreadsheet server" to address the issue of Django killing the connection.
First I wrote the code for the server (server.py), then some code to start it up from command-line arguments (start_server.py), and then had my view open a connection to the model whenever it needs to use it (in views.py).
So I had to separate my (Excel + xlwings) process and Django into independent processes, which keeps the interfaces clean and controls how much access Django has to my spreadsheet model. Works fine now.
start_server.py
"""
Starts spreadsheet server on specified port
Usage: python start_server.py port_number logging_filepath
port_number: sets server listening to localhost:<port_number>
logging_filepath: full path to logging file (all messages directed to this file)
"""
import argparse
import os
import subprocess

_file_path = os.path.dirname(os.path.abspath(__file__))

# command line interface
parser = argparse.ArgumentParser()
parser.add_argument('port_number',
                    help='sets server listening to localhost:<port_number>')
parser.add_argument('logging_filepath',
                    help='full path to logging file (all messages directed to this file)')
args = parser.parse_args()

# set up logging
_logging_path = args.logging_filepath
print("logging output to " + _logging_path)
_log = open(_logging_path, 'wb')

# set up and start server
_port = args.port_number
print('starting Excel server...')
subprocess.Popen(['python', _file_path + '\\server.py', str(_port)],
                 stdin=_log, stdout=_log, stderr=_log)
print("".join(['server listening on localhost:', str(_port)]))
server.py
"""
Imports package that starts Excel process (using xlwings), gets interface
to the object wrapper for the Excel model, and then serves requests to that model.
"""
import os
import sys
from multiprocessing.connection import Listener
_file_path = os.path.dirname(os.path.abspath(__file__))
sys.path.append(_file_path)
import excel_object_wrapper
_MODEL_FILENAME = 'excel_model.xls'
model_interface = excel_object_wrapper.get_model_interface(_file_path+"\\"+_MODEL_FILENAME)
model_connection = model_interface['run_location']
close_model = model_interface['close_model']
_port = sys.argv[1]
address = ('localhost', int(_port))
listener = Listener(address)
_alive = True
print('starting server on ' + str(address))
while _alive:
    print("listening for connections")
    conn = listener.accept()
    print('connection accepted from ' + str(listener.last_accepted))
    while True:
        try:
            input = conn.recv()
            print(input)
            if not input or input == 'close':
                print('closing connection to ' + str(conn))
                conn.close()
                break
            if input == 'kill':
                _alive = False
                print('stopping server')
                close_model()
                conn.send('model closed')
                conn.close()
                listener.close()
                break
        except EOFError:
            print('closing connection to ' + str(conn))
            conn.close()
            break
        conn.send(model_connection(*input))
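For reference, server.py imports an excel_object_wrapper module that isn't shown here. Below is a minimal sketch of what it might look like, assuming a reasonably recent xlwings API and hypothetical sheet/range names ('Inputs', 'Outputs', 'input_a', 'input_b', 'result') that you would replace with your own model's names:
import xlwings as xw


def get_model_interface(workbook_path):
    book = xw.Book(workbook_path)  # starts an Excel instance and opens the model
    inputs = book.sheets['Inputs']
    outputs = book.sheets['Outputs']

    def run_location(param_a, param_b):
        # write the named input ranges, recalculate, read the named output range
        inputs.range('input_a').value = param_a
        inputs.range('input_b').value = param_b
        book.app.calculate()
        return outputs.range('result').value

    def close_model():
        app = book.app
        book.close()
        app.quit()

    return {'run_location': run_location, 'close_model': close_model}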
views.py (from within Django)
from __future__ import unicode_literals

from multiprocessing.connection import Client

from django.shortcuts import render
from django.http import HttpResponse


def run(request):
    # we started the Excel server on localhost:6000 before starting Django
    model_connection = Client(('localhost', 6000))
    params = request.POST
    param_a = float(params['a'])
    param_b = float(params['b'])
    model_connection.send((param_a, param_b))
    results = model_connection.recv()
    return render(request, 'model_app/show_results.html', context={'results': results})
Perhaps I am missing something simple, so I hope this is an easy question. I am using Scrapy to parse a directory listing and then pull down each appropriate web page (actually a text file) and parse it out using Python.
Each page has a set of data I am interested in, and I update a global dictionary each time I encounter such an item in each page.
What I would like to do is simply print out an aggregate summary when all the Request calls are complete; however, nothing after the yield statement runs. I assume that is because yield turns the method into a generator and the code after the loop never gets a chance to run the way I expect.
I'd like to avoid writing a file for each Request object if possible... I'd rather keep it self-contained within this Python script.
Here is the code I am using:
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from scrapy.http import Request

slain = {}


class GameSpider(BaseSpider):
    name = "game"
    allowed_domains = ["example.org"]
    start_urls = [
        "http://example.org/users/deaths/"
    ]

    def parse(self, response):
        links = response.xpath('//a/@href').extract()
        for link in links:
            if 'txt' in link:
                l = self.start_urls[0] + link
                yield Request(l, callback=self.parse_following, dont_filter=True)
        # Ideally print out the aggregate after all Requests are satisfied
        # print "-----"
        # for k, v in slain.iteritems():
        #     print "Slain by %s: %d" % (k, v)
        # print "-----"

    def parse_following(self, response):
        parsed_resp = response.body.rstrip().split('\n')
        for line in parsed_resp:
            if "Slain" in line:
                broken = line.split()
                slain_by = broken[3]
                if slain_by in slain:
                    slain[slain_by] += 1
                else:
                    slain[slain_by] = 1
You have the closed(reason) method; it is called when the spider finishes.
def closed(self, reason):
    # 'slain' is the module-level dict populated in parse_following
    for k, v in slain.iteritems():
        print "Slain by %s: %d" % (k, v)
I am using PyQt4 for creating a custom browser using QtWebKit, but I am stuck on saving bookmarks from the browser. Does anyone know how to achieve that?
You're a little vague about how you want this done, so I'll assume we want to trigger it from a button loaded from a UI file, called bookmarks_Btn. You'll need to use the pickle module.
Here's the example code...
from PyQt4 import QtCore, QtGui, QtWebKit, uic
import pickle
import sys


class window(QtGui.QWidget):
    def __init__(self, parent=None):
        super(window, self).__init__(parent)
        self.ui = uic.loadUi('mybrowser.ui')
        QtCore.QObject.connect(self.ui.bookmarks_Btn, QtCore.SIGNAL('clicked()'), self.bookmarksLoad)
        self.ui.show()

    def bookmarksLoad(self):
        print 'Loading bookmarks'
        try:
            bookOpen = open('bookmarks.txt', 'rb')
            bookmarks = pickle.load(bookOpen)
            bookOpen.close()
            print bookmarks  # Not necessary, but for example purposes
            # Here you decide how the "bookmarks" variable is displayed.
        except:
            bookOpen = open('bookmarks.txt', 'wb')
            bookmarks = 'http://www.stackoverflow.com'
            pickle.dump(bookmarks, bookOpen)
            bookOpen.close()
            print bookmarks  # Not necessary, but for example purposes
            # Here you decide how the "bookmarks" variable is displayed.


def bookmarks():
    url = raw_input('Enter a URL: ')
    bookOpen = open('bookmarks.txt', 'wb')
    pickle.dump(url, bookOpen)
    bookOpen.close()
    print 'Website bookmarked!'


if __name__ == '__main__':
    app = QtGui.QApplication(sys.argv)
    run = window()
    bookmarks()
    sys.exit(app.exec_())
    # You add on here, for example, deleting bookmarks.
However, if you wanted the URL to be retrieved from an address bar (named address), make the following changes...
# In the bookmarks function...
global url  # Add at the beginning
# Remove the raw_input line.

# Add at the end of __init__ in the window class:
global url
url = self.ui.address.text()
That's pretty much the basics. Please note I normally program in Python 3 and PyQt5 so if there are any errors let me know :)
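If it helps, here is a more compact sketch of just the pickle part, assuming the bookmarks are kept as a simple list of URL strings (the file name is made up for illustration):
import os
import pickle

BOOKMARKS_FILE = 'bookmarks.pkl'  # hypothetical file name


def load_bookmarks():
    """Return the pickled list of bookmark URLs, or an empty list if none exist yet."""
    if not os.path.exists(BOOKMARKS_FILE):
        return []
    with open(BOOKMARKS_FILE, 'rb') as fh:
        return pickle.load(fh)


def save_bookmark(url):
    """Append a URL to the pickled bookmark list."""
    bookmarks = load_bookmarks()
    bookmarks.append(url)
    with open(BOOKMARKS_FILE, 'wb') as fh:
        pickle.dump(bookmarks, fh)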
The following code is written using the Selenium Python WebDriver and runs on Sauce Labs. I am providing the browser name, version and platform in a list; how do I do the same by providing the browser details through command-line arguments? I am using py.test to execute the test cases.
import os
import sys
import httplib
import base64
import json
import new
import unittest
import sauceclient
from selenium import webdriver
from sauceclient import SauceClient

# it's best to remove the hardcoded defaults and always get these values
# from environment variables
USERNAME = os.environ.get('SAUCE_USERNAME', "ranjanprabhub")
ACCESS_KEY = os.environ.get('SAUCE_ACCESS_KEY', "ecec4dd0-d8da-49b9-b719-17e2c43d0165")

sauce = SauceClient(USERNAME, ACCESS_KEY)

browsers = [{"platform": "Mac OS X 10.9",
             "browserName": "chrome",
             "version": ""},
            ]


def on_platforms(platforms):
    def decorator(base_class):
        module = sys.modules[base_class.__module__].__dict__
        for i, platform in enumerate(platforms):
            d = dict(base_class.__dict__)
            d['desired_capabilities'] = platform
            name = "%s_%s" % (base_class.__name__, i + 1)
            module[name] = new.classobj(name, (base_class,), d)
    return decorator


@on_platforms(browsers)
class SauceSampleTest(unittest.TestCase):
    def setUp(self):
        self.desired_capabilities['name'] = self.id()
        sauce_url = "http://%s:%s@ondemand.saucelabs.com:80/wd/hub"
        self.driver = webdriver.Remote(
            desired_capabilities=self.desired_capabilities,
            command_executor=sauce_url % (USERNAME, ACCESS_KEY)
        )
        self.driver.implicitly_wait(30)

    def test_sauce(self):
        self.driver.get('http://saucelabs.com/test/guinea-pig')
        assert "I am a page title - Sauce Labs" in self.driver.title
        comments = self.driver.find_element_by_id('comments')
        comments.send_keys('Hello! I am some example comments.'
                           ' I should be in the page after submitting the form')
        self.driver.find_element_by_id('submit').click()
        commented = self.driver.find_element_by_id('your_comments')
        assert ('Your comments: Hello! I am some example comments.'
                ' I should be in the page after submitting the form'
                in commented.text)
        body = self.driver.find_element_by_xpath('//body')
        assert 'I am some other page content' not in body.text
        self.driver.find_elements_by_link_text('i am a link')[0].click()
        body = self.driver.find_element_by_xpath('//body')
        assert 'I am some other page content' in body.text

    def tearDown(self):
        print("Link to your job: https://saucelabs.com/jobs/%s" % self.driver.session_id)
        try:
            if sys.exc_info() == (None, None, None):
                sauce.jobs.update_job(self.driver.session_id, passed=True)
            else:
                sauce.jobs.update_job(self.driver.session_id, passed=False)
        finally:
            self.driver.quit()
So this is a bit complicated, because you can pass an array of browsers into the @on_platforms decorator. My solution will only work for a single browser, as it looks like that's what you're doing right now.
For the current single-browser situation, you're looking for argparse. Here's my suggested fix:
import argparse


def setup_parser():
    parser = argparse.ArgumentParser(description='Automation Testing!')
    parser.add_argument('-p', '--platform', help='Platform for desired_caps', default='Mac OS X 10.9')
    parser.add_argument('-b', '--browser-name', help='Browser Name for desired_caps', default='chrome')
    parser.add_argument('-v', '--version', default='')
    args = vars(parser.parse_args())
    return args


desired_caps = setup_parser()
browsers = [desired_caps]
print browsers
But if you're looking to test multiple browsers (which I suggest you do!), you should not try to use command-line arguments for the desired_caps of each individual browser. You should instead load a JSON config file describing the browsers and the desired_caps for each one that you want Sauce to run.
Maybe have a different config file for each set of browsers, and then use command-line arguments to pass in the config file you want to load, as sketched below.
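A minimal sketch of that idea, assuming a browsers.json file that contains a list of desired_caps dicts (the file name and the --browser-config flag are made-up examples, not part of the original code):
import argparse
import json


def load_browsers():
    parser = argparse.ArgumentParser(description='Automation Testing!')
    parser.add_argument('-c', '--browser-config', default='browsers.json',
                        help='Path to a JSON file containing a list of desired_caps dicts')
    args = parser.parse_args()
    with open(args.browser_config) as fh:
        # e.g. [{"platform": "Mac OS X 10.9", "browserName": "chrome", "version": ""}, ...]
        return json.load(fh)


browsers = load_browsers()
The resulting browsers list can then be fed to the @on_platforms decorator exactly like the hard-coded list above.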
In my Django app, I have a view which handles file upload. The core snippet is like this:
...
if request.method == 'POST':
    if request.FILES.has_key('file'):
        file = request.FILES['file']
        with open(settings.destfolder + '/%s' % file.name, 'wb+') as dest:
            for chunk in file.chunks():
                dest.write(chunk)
I would like to unit test the view. I am planning to test the happy path as well as the fail path, i.e. the case where request.FILES has no key 'file', and the case where request.FILES['file'] is None.
How do I set up the POST data for the happy path? Can somebody tell me?
I used to do the same with open('some_file.txt') as fp:, but then I needed images, videos and other real files in the repo, and I was also testing a part of a Django core component that is already well tested, so currently this is what I have been doing:
from django.core.files.uploadedfile import SimpleUploadedFile


def test_upload_video(self):
    video = SimpleUploadedFile("file.mp4", "file_content", content_type="video/mp4")
    self.client.post(reverse('app:some_view'), {'video': video})
    # some important assertions ...
In Python 3 you need to pass a bytes object instead of str: change "file_content" to b"file_content".
It's been working fine; SimpleUploadedFile creates an in-memory uploaded file that behaves like a regular upload, and you can pick the name, content and content type.
From Django docs on Client.post:
Submitting files is a special case. To POST a file, you need only
provide the file field name as a key, and a file handle to the file
you wish to upload as a value. For example:
c = Client()
with open('wishlist.doc') as fp:
    c.post('/customers/wishes/', {'name': 'fred', 'attachment': fp})
I recommend you take a look at Django's RequestFactory. It's the best way to mock data provided in the request.
That said, I found several flaws in your code.
"Unit" testing means testing just one "unit" of functionality. If you test that view, you'd be testing both the view and the file system, so it's not really a unit test. To make this point clearer: if you run that test and the view works fine, but you don't have permission to save the file, your test would fail because of that.
Another important thing is test speed. If you're doing something like TDD, the execution speed of your tests is really important, and touching any I/O is not a good idea.
So, I recommend you refactor your view to use a function like:
def upload_file_to_location(request, location=None):  # can use the configured default
And do some mocking on that. You can use Python's mock library.
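A rough sketch of what that mocking could look like, assuming the view lives in myapp/views.py and calls upload_file_to_location as suggested above (the module path, view name and URL are hypothetical):
from unittest import mock

from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import RequestFactory, TestCase

from myapp.views import upload_view  # hypothetical view under test


class UploadViewTest(TestCase):
    @mock.patch('myapp.views.upload_file_to_location')  # patch where the helper is used
    def test_view_delegates_to_upload_helper(self, mock_upload):
        upload = SimpleUploadedFile('file.txt', b'file content')
        request = RequestFactory().post('/upload/', {'file': upload})
        upload_view(request)
        self.assertEqual(mock_upload.call_count, 1)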
PS: You could also use the Django test Client, but that would mean adding one more thing to test, because that client makes use of sessions, middleware, etc. Nothing like unit testing.
I do something like this for my own event-related application, but you should have more than enough code here to get on with your own use case:
import tempfile, csv, os


class UploadPaperTest(TestCase):
    def generate_file(self):
        try:
            myfile = open('test.csv', 'wb')
            wr = csv.writer(myfile)
            wr.writerow(('Paper ID', 'Paper Title', 'Authors'))
            wr.writerow(('1', 'Title1', 'Author1'))
            wr.writerow(('2', 'Title2', 'Author2'))
            wr.writerow(('3', 'Title3', 'Author3'))
        finally:
            myfile.close()
        return myfile

    def setUp(self):
        self.user = create_fuser()
        self.profile = ProfileFactory(user=self.user)
        self.event = EventFactory()
        self.client = Client()
        self.module = ModuleFactory()
        self.event_module = EventModule.objects.get_or_create(event=self.event,
                                                              module=self.module)[0]
        add_to_admin(self.event, self.user)

    def test_paper_upload(self):
        response = self.client.login(username=self.user.email, password='foz')
        self.assertTrue(response)

        myfile = self.generate_file()
        file_path = myfile.name
        f = open(file_path, "r")

        url = reverse('registration_upload_papers', args=[self.event.slug])

        # post wrong data type (anything that is not a file)
        post_data = {'uploaded_file': 'not-a-file'}
        response = self.client.post(url, post_data)
        self.assertContains(response, 'File type is not supported.')

        post_data['uploaded_file'] = f
        response = self.client.post(url, post_data)

        import_file = SubmissionImportFile.objects.all()[0]
        self.assertEqual(SubmissionImportFile.objects.all().count(), 1)
        # self.assertEqual(import_file.uploaded_file.name, 'files/registration/{0}'.format(file_path))

        os.remove(myfile.name)
        file_path = import_file.uploaded_file.path
        os.remove(file_path)
I did something like this:
from django.core.files.base import ContentFile
from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import TestCase
from django.core.urlresolvers import reverse
from django.core.files import File
from django.utils.six import BytesIO

from .forms import UploadImageForm

from PIL import Image


def create_image(storage, filename, size=(100, 100), image_mode='RGB', image_format='PNG'):
    """
    Generate a test image, returning the filename that it was saved as.

    If ``storage`` is ``None``, the BytesIO containing the image data
    will be passed instead.
    """
    data = BytesIO()
    Image.new(image_mode, size).save(data, image_format)
    data.seek(0)
    if not storage:
        return data
    image_file = ContentFile(data.read())
    return storage.save(filename, image_file)


class UploadImageTests(TestCase):
    def setUp(self):
        super(UploadImageTests, self).setUp()

    def test_valid_form(self):
        '''
        valid post data should redirect
        The expected behavior is to show the image
        '''
        url = reverse('image')
        avatar = create_image(None, 'avatar.png')
        avatar_file = SimpleUploadedFile('front.png', avatar.getvalue())
        data = {'image': avatar_file}
        response = self.client.post(url, data, follow=True)
        image_src = response.context.get('image_src')

        self.assertEquals(response.status_code, 200)
        self.assertTrue(image_src)
        self.assertTemplateUsed('content_upload/result_image.html')
The create_image function generates the image, so you don't need to provide a static path to an image file.
Note: you can adapt the code to your own project.
This code is for Python 3.6.
from rest_framework.test import force_authenticate
from rest_framework.test import APIRequestFactory

factory = APIRequestFactory()
user = User.objects.get(username='#####')
view = <your_view_name>.as_view()

with open('<file_name>.pdf', 'rb') as fp:
    request = factory.post('<url_path>', {'file_name': fp})

force_authenticate(request, user)
response = view(request)
As mentioned in Django's official documentation:
Submitting files is a special case. To POST a file, you need only provide the file field name as a key, and a file handle to the file you wish to upload as a value. For example:
c = Client()
with open('wishlist.doc') as fp:
    c.post('/customers/wishes/', {'name': 'fred', 'attachment': fp})
More Information: How to check if the file is passed as an argument to some function?
While testing, sometimes we want to make sure that the file is passed as an argument to some function.
e.g.
...
class AnyView(CreateView):
    ...

    def post(self, request, *args, **kwargs):
        attachment = request.FILES['attachment']
        # pass the file as an argument
        my_function(attachment)
        ...
In tests, use Python's mock, something like this:
# Mock 'my_function' and then check the following:
response = do_a_post_request()

self.assertEqual(mock_my_function.call_count, 1)
self.assertEqual(
    mock_my_function.call_args,
    call(response.wsgi_request.FILES['attachment']),
)
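For completeness, a minimal sketch of where mock_my_function could come from, assuming my_function is used in a hypothetical myapp/views.py (the module path and test name are placeholders; do_a_post_request stands for whatever POST the snippet above performs):
from unittest import mock

from django.test import TestCase


class UploadTests(TestCase):
    @mock.patch('myapp.views.my_function')  # hypothetical path: patch where my_function is looked up
    def test_file_is_passed_to_my_function(self, mock_my_function):
        # perform the POST request here, then run the assertions shown above;
        # mock_my_function is the patched stand-in injected by the decorator
        pass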
If you want to send other data along with the file upload, follow the method below:
file = open('path/to/file.txt', 'r', encoding='utf-8')
data = {
    'file_name_to_receive_on_backend': file,
    'param1': 1,
    'param2': 2,
    # ...
}
response = self.client.post("/url/to/view", data, format='multipart')
Only file_name_to_receive_on_backend will be received as a file; the other params are received as normal POST params.
In Django 1.7 there's an issue with the TestCase which can be resolved by using open(filepath, 'rb'), but when using the test client we have no control over it. I think it's probably best to ensure file.read() always returns bytes.
source: https://code.djangoproject.com/ticket/23912, by KevinEtienne
Without the rb option, a TypeError is raised:
TypeError: sequence item 4: expected bytes, bytearray, or an object with the buffer interface, str found
from django.core.files.uploadedfile import SimpleUploadedFile
from django.test import Client

client = Client()
with open(template_path, 'rb') as f:
    file = SimpleUploadedFile('Name of the django file', f.read())

response = client.post(url, format='multipart', data={'file': file})
Hope this helps.
A very handy solution with mock:
from django.test import TestCase, override_settings
# use your own client request factory
from my_framework.test import APIClient
from django.core.files import File
import tempfile
from pathlib import Path
import mock

image_mock = mock.MagicMock(spec=File)
image_mock.name = 'image.png'  # or something else


class MyTest(TestCase):
    # I assume we want to put this file in storage,
    # so to avoid putting garbage in our MEDIA_ROOT
    # we're using temporary storage for test purposes
    @override_settings(MEDIA_ROOT=Path(tempfile.gettempdir()))
    def test_send_file(self):
        client = APIClient()
        client.post(
            '/endpoint/',
            {'file': image_mock},
            format="multipart"
        )
I am using Python==3.8.2, Django==3.0.4, djangorestframework==3.11.0.
I tried self.client.post but got a Resolver404 exception.
The following worked for me:
import requests

upload_url = 'www.some.com/oaisjdoasjd'  # your url to upload

with open('/home/xyz/video1.webm', 'rb') as video_file:
    # if it was a text file we would perhaps do
    # file = video_file.read()
    response_upload = requests.put(
        upload_url,
        data=video_file,
        headers={'content-type': 'video/webm'}
    )
I am using Django REST framework and I had to test the upload of multiple files.
I finally got it working by using format="multipart" in my APIClient.post request.
from rest_framework.test import APIClient
...
self.client = APIClient()
with open('./photo.jpg', 'rb') as fp:
    resp = self.client.post('/upload/',
                            {'images': [fp]},
                            format="multipart")
I am using GraphQL; to upload the file in a test:
with open('test.jpg', 'rb') as fp:
    response = self.client.execute(query, variables, data={'image': [fp]})
The code in the mutation class:
@classmethod
def mutate(cls, root, info, **kwargs):
    if image := info.context.FILES.get("image", None):
        kwargs["image"] = image

    TestingMainModel.objects.get_or_create(
        id=kwargs["id"],
        defaults=kwargs
    )