Django - Send Output Continually

I want to start processing some files from a django view and I want to be able to send the name of the files to the browser as they are processed. Is there a way to do this (easily)? I could probably do this using threads and ajax calls, but I want the simplest solution for now.

I found what I needed in an answer from one of the links that Andre Miller provided.
I found out that it's possible to pass an iterator to HttpResponse, so I used this code and it worked:
def qimport(req):
    def import_iter():
        """ Used to return output as it is generated """
        # First return the template
        t = loader.get_template('main/qimport.htm')
        c = Context()
        yield t.render(c)
        # Now process the files
        if req.method == 'POST':
            location = req.POST['location']
            if location:
                for finfo in import_location(location):
                    yield finfo + "<br/>"
    return HttpResponse(import_iter())

You would need to use some sort of queuing process if you want to kick off the task when the view is rendered; otherwise the process will finish before anything is returned to the browser.
Once the task is running asynchronously you could use either AJAX to update the page with the latest status or simply use a meta-refresh inside the page to load the new content.
There is a Django queue service here that you could use:
http://code.google.com/p/django-queue-service/
It would seem that this question has also been asked a few times before:
How to best launch an asynchronous job request in Django view?
Is there any way to make an asynchronous function call from Python [Django]?
How do you do something after you render the view? (Django)

We are in 201X: yes, you should use WebSockets or AJAX calls!
But since you were asking (for the record) for a streaming solution in Django, you can use StreamingHttpResponse, which Django supports out of the box.
https://docs.djangoproject.com/en/dev/ref/request-response/#django.http.StreamingHttpResponse
The StreamingHttpResponse class is used to stream a response from Django to the browser. You might want to do this if generating the response takes too long or uses too much memory. For instance, it’s useful for generating large CSV files.
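The pattern is to hand the response a generator instead of a fully rendered string. A stdlib-only sketch of the generator half (the StreamingHttpResponse call itself is shown only in a comment, since it needs a running Django project):

```python
import csv
import io

def csv_rows(rows):
    """Yield the CSV output one line at a time instead of building it all in memory."""
    for row in rows:
        buf = io.StringIO()
        csv.writer(buf).writerow(row)
        yield buf.getvalue()

# Inside a Django view you would pass the generator straight to the response:
#   return StreamingHttpResponse(csv_rows(big_queryset), content_type="text/csv")
chunks = list(csv_rows([["id", "name"], [1, "alpha"]]))
```

Because the rows are yielded lazily, the first bytes reach the client before the last row has even been generated.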

If you clear the output buffer, then you should be able to see what has been processed.

First of all, make sure you output a Connection: Keep-Alive header, after which you just have to make sure that the script output isn't being buffered. In Python, you can flush sys.stdout to ensure Python's own buffer is cleared, but you should also check the web server configuration, as some servers buffer all output until the script finishes running.
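For example, writing each processed name and flushing immediately (a sketch using a StringIO stand-in for the real output stream):

```python
import io

def report_progress(names, out):
    """Write each processed file name and flush at once, so nothing sits in the buffer."""
    for name in names:
        out.write(name + "<br/>\n")
        out.flush()  # push this line out now instead of at script exit

out = io.StringIO()
report_progress(["a.csv", "b.csv"], out)
```

With a real socket or sys.stdout the flush is what makes each name appear in the browser as it is processed.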

Related

Embedded graphics in original mail change to attached graphics in reply when using exchangelib in python

We have a python script that replies to incoming emails using exchangelib. User A sends us an email that can contain a picture/graphic (e.g. company logo in signature line). Our script is able to reply to his mail, and user A will get our reply. Unfortunately, the picture/graphic that was embedded in the original mail to us, is now an attached file instead of an embedded picture.
Here is the code that we're using:
origmsg.reply(
    subject='Re: ' + origmsg.subject,
    body="This is my reply to your inquiry...."
)
I understand that for new messages the HTML code needs to include a reference to the attached file to make it embedded. How can this be done in a reply?
Thanks.
https://ecederstrand.github.io/exchangelib/#attachments has some examples of embedding images in emails.
The .reply() method is for simple replies. You may need to call .create_reply() instead and edit the returned ReplyToItem object as needed before calling .send() on it.
If you have even more special requirements, you can call .save() on the ReplyToItem object to save it as a draft, fetch the draft as a plain Message object with account.drafts.get(id=reply_to_item_id) and do whatever you need to do before sending the draft.
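A sketch of that draft-based flow (untested; it assumes the `origmsg` and `account` objects from the question, and whether `.save()` returns the new item id is an assumption here, so treat the lookup step as illustrative):

```python
# Hypothetical sketch based on the answer above -- assumes an exchangelib
# Message `origmsg` and Account `account` already exist.
reply = origmsg.create_reply(
    subject='Re: ' + origmsg.subject,
    body="This is my reply to your inquiry....",
)
item_id = reply.save()  # save the ReplyToItem as a draft instead of sending it

# Re-fetch the draft as a plain Message so its body and attachments can be edited:
draft = account.drafts.get(id=item_id)
# ... attach the original inline images (FileAttachment with is_inline=True and a
# content_id) and reference them from the HTML body as <img src="cid:..."> ...
draft.send()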

A means of viewing task-specific logs via Flower (or some other similar interface)

I have an application/website which I use to run tests. Whenever I run a test, a Celery task is created, and this task goes through the process of running the actual test. The test contacts a 3rd party server, so there are quite a few reasons why the task might fail or hang. This all works fine when everything's run locally; I have direct access to stdout and stderr—they pop up right on the terminal I used to start Celery's worker. If there's an error, a hangup, or any other such thing, I can see it directly, deal with it, and make sure it's handled gracefully in the future.
Eventually, this will be hosted on servers that are independent of my computer, which is where the problem begins:
I'd like a way to access task-specific logs to stdout and stderr (preferably, in real time). I implemented Flower thinking it might do this, but it seems that it doesn't. I've thought about saving the logs to a file, one for each task, and including a "View Log" button link on my site which would allow me to see the logs I'd otherwise see locally—but this is pretty cumbersome. Maybe I could do something like: Generate a link to each running task, and use javascript to update the page in that link w/ the contents of a log file?
I've done some research and haven't found much in the way of this type of logging. Would anyone mind pointing me in the right direction?
You can create a LoggingTask object capturing the logs and publishing them as the result of the task (if you're not using the result).
I did not test this, but it should work with a few tweaks (note that it will also swallow exceptions):
import logging
from io import StringIO

from celery import Task

LOGGER = logging.getLogger(__name__)

class LoggingTask(Task):
    def __call__(self, *args, **kwargs):
        # Only handles the logging capture
        log_stream = StringIO()
        handler = logging.StreamHandler(log_stream)
        try:
            logging.getLogger().addHandler(handler)
            Task.__call__(self, *args, **kwargs)
        except Exception:
            # Note: this swallows the exception and only records it in the log
            LOGGER.exception("Error in task")
        finally:
            logging.getLogger().removeHandler(handler)
            handler.flush()
        return log_stream.getvalue()

Django-piston make piston return full traceback of exception

How can I make Piston return the full traceback of an exception? By default it returns only the last error text, like:
id() takes exactly one argument (0 given)
I need to know which file and which line...
Piston returns an HTTP status response via utils.rc; no errors are raised.
from the documentation:
Configuration variables
Piston is configurable in a couple of ways, which allows more granular
control of some areas without editing the code.
settings.PISTON_EMAIL_ERRORS
    If (when) Piston crashes, it will email the administrators a backtrace (like the Django one you see during DEBUG = True).
settings.PISTON_DISPLAY_ERRORS
    Upon crashing, Piston will display a small backtrace to the client, including the expected method signature.
settings.PISTON_STREAM_OUTPUT
    When enabled, Piston will instruct Django to stream the output to the client, but please read about streaming before enabling it.
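In settings.py those flags look like this (illustrative values; displaying backtraces to clients is usually only sensible in development):

```python
# settings.py -- illustrative values, see the Piston documentation above
PISTON_EMAIL_ERRORS = True    # email admins a backtrace when Piston crashes
PISTON_DISPLAY_ERRORS = True  # show a small backtrace to the client
```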
I'd also recommend setting up a logger; Sentry together with Raven is rather convenient, and you get to configure your own log level and handler.

Selenium wait for download?

I'm trying to test the happy-path for a piece of code which takes a long time to respond, and then begins writing a file to the response output stream, which prompts a download dialog in browsers.
The problem is that this process has failed in the past, throwing an exception after this long amount of work. Is there a way in selenium to wait-for-download or equivalent?
I could throw in a Thread.sleep, but that would be inaccurate and unnecessarily slow down the test run.
What should I do, here?
I had the same problem and came up with a workaround. While the download is still in progress, a temporary file with a '.part' extension exists next to the target file. So, as long as the temp file is still there, the script can wait 10 seconds and then check again whether the file has finished downloading.
import os
from time import sleep

while True:
    if os.path.isfile('ts.csv.part'):
        sleep(10)
    elif os.path.isfile('ts.csv'):
        break
    else:
        sleep(10)
driver.close()
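A more general version of the same polling idea, with a timeout so a failed download can't hang the test forever (a hypothetical helper, stdlib only):

```python
import os
import time

def wait_for_download(path, timeout=60, poll=1.0):
    """Poll until `path` exists and its '.part' temp file is gone, or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if os.path.isfile(path) and not os.path.isfile(path + ".part"):
            return True   # download finished
        time.sleep(poll)
    return False          # timed out
```

In the test you would then assert wait_for_download('ts.csv', timeout=120) before reading the file, instead of sleeping a fixed amount.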
So you have two problems here:
You need to cause the browser to download the file
You need to measure when the downloaded file is complete
Neither problem can be directly solved by Selenium (yet - 2.0 may help), but both are solvable. The first can be handled by GUI automation toolkits such as AutoIt, or simply by sending an automated keypress at the OS level that simulates the Enter key (this works for Firefox, and is a little harder on some versions of Chrome and Safari). If you're using Java, you can use Robot to do that. Other languages have similar toolkits.
The second issue is probably best solved with some sort of proxy solution. For example, if your browser was configured to go through a proxy and that proxy had an API, you could query the proxy with that API to ask when network activity had ended.
That's what we do at http://browsermob.com, which is a startup I founded that uses Selenium to do load testing. We've released some of the proxy code as open source, available at http://browsermob.com/tools.
But two problems still persist:
You need to configure the browser to use the proxy. In Selenium 2 this is easier, but it's possible to do it with Selenium 1 as well. The key is just making sure that your browser launcher brings up the browser with the right profile/settings.
There currently is no API for BrowserMob proxy to tell you when network traffic has stopped! This is a big hole in the concept of the project that I want to fix as soon as I get the time. However, if you're keen to help out, join the Google Group and I can definitely point you in the right direction.
Hope that helps you identify your various options. Best of luck!
This is a Chrome-testing-only solution for controlling downloads with JavaScript.
Using WebDriver (Selenium2) it can be done within Chrome's chrome:// which is HTML/CSS/Javascript:
driver.get("chrome://downloads/");
waitElement(By.cssSelector("#downloads-summary-text"));
// The next JavaScript snippet cancels the last/current download.
// If your test ends while a file attachment is downloading, you'll
// very likely need this if you have more re-instantiated tests left.
((JavascriptExecutor) driver).executeScript("downloads.downloads_[0].cancel_();");
There are other Download.prototype.functions in "chrome://downloads/downloads.js"
This suits you if you just need to test some info note (e.g. one triggered when the file attachment download starts), and not the file itself.
Naturally you need to control step 1 - mentioned by Patrick above - and through this you control step 2 FOR THE TEST, not for the functionality of the actual file download completion/cancellation.
See also : Javascript: Cancel/Stop Image Requests which is relating to Browser stopping.
This falls under the "things that can't be automated" category. Selenium is built with JavaScript and, due to JavaScript sandbox restrictions, it can't access downloads.
Selenium 2 might be able to do this once Alerts/Prompts have been implemented, but that won't happen for a little while yet.
If you want to check for the download dialog, try with AutoIt. I use that for uploading and downloading the files. Using AutoIt with Se RC is easier.
def file_downloaded?(file)
  while File.file?(file) == false
    p "File downloading in progress..."
    sleep 1
  end
end
*Ruby Syntax

How to upload a file by POST in libcurl?

How do I upload a file by POST in libcurl? (C++)
Are you referring to RFC 1867 (i.e., what the browser sends when the user submits an HTML form containing an input field with type="file")?
If that's the case, you may be interested in http://curl.haxx.se/libcurl/c/postit2.html
From the documentation here:
When using libcurl's "easy" interface you init your session and get a handle (often referred to as an "easy handle"), which you use as input to the easy interface functions you use. Use curl_easy_init to get the handle.
You continue by setting all the options you want in the upcoming transfer, the most important among them is the URL itself (you can't transfer anything without a specified URL as you may have figured out yourself). You might want to set some callbacks as well that will be called from the library when data is available etc. curl_easy_setopt is used for all this.
When all is setup, you tell libcurl to perform the transfer using curl_easy_perform. It will then do the entire operation and won't return until it is done (successfully or not).
After the transfer has been made, you can set new options and make another transfer, or if you're done, cleanup the session by calling curl_easy_cleanup. If you want persistent connections, you don't cleanup immediately, but instead run ahead and perform other transfers using the same easy handle.
So it looks like you need to call the following:
curl_easy_init (initialize the curl session)
curl_easy_setopt (setup the session options)
curl_easy_perform (perform the curl)
curl_easy_cleanup (delete the session)
Given that these are C APIs you should have no problem calling them within a C++ source file.
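Putting those four calls together for an RFC 1867 file upload, here is a sketch (not the postit2 example verbatim: this uses the MIME API added in libcurl 7.56, while postit2.html shows the older curl_formadd route; the URL and field/file names are placeholders):

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_ALL);

    CURL *curl = curl_easy_init();                 /* 1. init the session     */
    if (curl) {
        /* Build a multipart/form-data body with one file field. */
        curl_mime *form = curl_mime_init(curl);
        curl_mimepart *part = curl_mime_addpart(form);
        curl_mime_name(part, "file");              /* form field name         */
        curl_mime_filedata(part, "report.txt");    /* file to read and upload */

        /* 2. set the options */
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/upload");
        curl_easy_setopt(curl, CURLOPT_MIMEPOST, form);

        /* 3. perform the transfer */
        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            fprintf(stderr, "upload failed: %s\n", curl_easy_strerror(res));

        /* 4. cleanup */
        curl_mime_free(form);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```

Compile with e.g. cc upload.c -lcurl; since these are C APIs, the same calls work unchanged from a C++ source file.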