How to return data when a file is uploaded with igUpload control - infragistics

I'm using the upload control in the following manner:
uploadContainer.igUpload({
    mode: 'multiple',
    multipleFiles: true,
    maxSimultaneousFilesUploads: 2,
    maxUploadedFiles: 2048,
    labelUploadButton: "Choose File",
    labelAddButton: "Choose File",
    labelClearAllButton: null,
    autostartupload: true,
    onError: function (e, args) { /* ...some function... */ },
    fileUploaded: function (e, args) { /* ...some function... */ },
    allowedExtensions: ['dwg', 'DWG']
});
When a user arrives on the page, they can upload a file.
After uploading, they expect some information to be returned from the uploaded file. For example, I have to display the list of layouts of the DWG file.
In my upload handler class on the back end, the "FinishedUpload" method is called. There I'm reading the file, extracting some information from it, and I want to return this information back to the front end.
How can I accomplish that? Any ideas are welcome!
Thank you!

The ability to send additional data between the server-side Upload MVC wrapper and the igUpload control on the client, and vice versa, will be available in the Ignite UI June 2015 service release.
What you can do currently is handle the client-side fileUploaded event and send an additional Ajax request to the server to get the data you want.
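A minimal sketch of that workaround; the /Upload/GetLayouts endpoint is hypothetical (you would add an action that returns whatever FinishedUpload extracted), and you should check the fileUploaded args object in your Ignite UI version for the exact property that carries the file name:

uploadContainer.igUpload({
    // ...the options shown in the question...
    fileUploaded: function (e, args) {
        $.ajax({
            url: "/Upload/GetLayouts",           // hypothetical endpoint you would add
            data: { fileName: args.fileName },   // assumption: args exposes the uploaded file's name
            success: function (layouts) {
                // render the list of DWG layouts returned by the server
            }
        });
    }
});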

Related

How to show the progress status of request/response downloads in a web page in Django

I built an application in Django, and I have a function to get the attendance log from a fingerprint machine, basically like this:
import requests
from xml.etree import ElementTree
def main(request):
    Key = "xxxxxx"
    url = "http://192.168.2.188:80"
    soapreq = "<GetAttLog><ArgComKey xsi:type=\"xsd:integer\">" + Key + "</ArgComKey><Arg><PIN xsi:type=\"xsd:integer\">All</PIN></Arg></GetAttLog>"
    http_headers = {
        "Accept": "application/soap+xml,multipart/related,text/*",
        "Cache-Control": "no-cache",
        "Pragma": "no-cache",
        "Content-Type": "text/xml; charset=utf-8"
    }
    response = requests.post(url + "/iWsService", data=soapreq, headers=http_headers)
    root = ElementTree.fromstring(response.content)
Now, that process will be repeated for a hundred-plus fingerprint machines, and I need to display some kind of progress status and also error messages (if any, like "connection cannot be established", etc.) on a page, periodically and sequentially, after each event. I mean something like:
....
"machine 1 : download finished."
"downloading data from machine 2 .. Please wait "
"machine 2 : download finished."
...
Thanks.
I'm not really sure what the main hurdle you're facing is. Can you try framing the question in a more precise way?
From what I understand, you want a page that changes dynamically based on something that's happening on the server. I think Django may not be the best tool for that, as its basic use is to compute a full view in one go and then display it. However, there are a few things that can be done.
The easiest (but not the best in terms of server load) would be to use Ajax requests from the webpage:
Have a "main view" that loads the user-facing page as well as some JS libraries (e.g. jQuery), which will be used to query the server for progress.
Have a "progress view" that displays the current status and that is queried from the main view via Ajax.
This is not the best architecture, because you may end up reloading data from the server too often or not often enough. I would suggest having a look at WebSockets, which let you keep a client-server connection open and use it only when needed, but there is no native support for them in Django.
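A minimal client-side sketch of that polling approach, assuming jQuery is loaded, a #status element on the page, and a hypothetical /progress/ view that returns the current status as JSON:

function pollProgress() {
    // assumes a hypothetical Django "progress view" mapped to /progress/ that returns
    // JSON such as {"message": "machine 2 : download finished.", "done": false}
    $.getJSON("/progress/", function (data) {
        $("#status").text(data.message);     // show the latest status line on the page
        if (!data.done) {
            setTimeout(pollProgress, 2000);  // ask again in two seconds
        }
    });
}
pollProgress();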

Kendo upload loses class when events added

I've got a Kendo Upload in my grid view:
@(Html.Kendo().Upload()
    .Name("files")
)
And this works fine.
But when I add events to the upload:
@(Html.Kendo().Upload()
    .Name("files")
    .Events(events => events
        .Complete("onUpload")
        .Remove("onRemoveSuccess"))
)
It loses all CSS classes.
Looks like it just works according to the docs.
Complete handler: http://docs.telerik.com/kendo-ui/api/web/upload#events-complete
complete
Fires when all active uploads have completed either successfully or with errors.
Note: The complete event fires only when the upload is in async mode.
Async mode: http://docs.telerik.com/kendo-ui/getting-started/web/upload/modes#asynchronous-mode
Asynchronous mode
In this mode the Upload requires dedicated server handlers to store and remove uploaded files. Files are uploaded immediately or, optionally, after user confirmation. The upload request is executed out-of-band without interrupting the page flow.
The async mode is implemented using the HTML5 File API. The upload will gracefully degrade and continue to function in legacy browsers using a hidden IFRAME.
So that's it: by subscribing to complete you force async mode, which falls back to the legacy look & feel in legacy browsers.
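For comparison, a minimal sketch of the equivalent jQuery widget configured explicitly for async mode with the same event handlers; the saveUrl/removeUrl endpoints are hypothetical and you would have to implement them on the server:

$("#files").kendoUpload({
    async: {
        saveUrl: "/Home/SaveFile",      // hypothetical server action that stores the file
        removeUrl: "/Home/RemoveFile",  // hypothetical server action that deletes it
        autoUpload: true
    },
    complete: onUpload,         // fires only when async mode is on
    remove: onRemoveSuccess     // same handlers referenced in the MVC wrapper above
});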

Ember data doesn't send id when triggering PUT request

Does anybody know how I can handle this?
So, I execute the following code in my controller:
this.get('newEvent').save();
It gets my data and sends it to the server. Here's some data:
Request URL: http://localhost:1337/api/v1/events/53a9cfee701b870000dc1d01
Data to send:
event: {
    date: "Mon, 18 Aug 2014 07:00:00 GMT",
    host: "Host",
    repeatable: true,
    title: "Event name"
}
But to succeed I need to know the model id on my server, which is normally included in the event object I send. Do you know any solutions? By the way, the DELETE request doesn't exclude the id from the event object.
You can easily override the saving behavior to include the id by overriding the updateRecord hook in the RESTAdapter (https://github.com/emberjs/data/blob/master/packages/ember-data/lib/adapters/rest_adapter.js#L457) to pass includeId: true the same way createRecord does (https://github.com/emberjs/data/blob/master/packages/ember-data/lib/adapters/rest_adapter.js#L436).
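A minimal sketch of that override, modeled on the rest_adapter.js source linked above (hook signatures moved around in later Ember Data releases, so adjust to your version):

App.ApplicationAdapter = DS.RESTAdapter.extend({
    updateRecord: function (store, type, record) {
        var data = {};
        var serializer = store.serializerFor(type.typeKey);

        // includeId: true is what createRecord passes; the stock updateRecord leaves it out
        serializer.serializeIntoHash(data, type, record, { includeId: true });

        var id = record.get('id');
        return this.ajax(this.buildURL(type.typeKey, id), "PUT", { data: data });
    }
});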
There are a few ways of handling this, and it comes down to personal preference, but I would recommend altering your backend code to pull the ID from the URL. Normally when you define the PUT request, you do it like /events/:event_id. If you did it this way on the backend, there's no need for the id to be in the event object that's sent over.
api/v1/events/53a9cfee701b870000dc1d01
You can capture this on the server by looking at the URL parameter that's sent over. If you're using Express on the backend, you can get the ID straight from the URL with little effort. I can't comment yet, but let me know your backend structure, and I can point you in the right direction. Possibly include the PUT request you wrote on the back end as a point of reference.
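For example, if the backend were Express, a minimal sketch of pulling the id from the URL rather than the body (the route path and update logic are placeholders, and body-parsing middleware is assumed):

app.put('/api/v1/events/:event_id', function (req, res) {
    var id = req.params.event_id;   // "53a9cfee701b870000dc1d01" in your example URL
    // look the event up by id, apply req.body.event, save it, then respond
    res.json({ event: req.body.event });
});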
The other thing that you can do (not recommended, but it will work) is to store a temporary id on the record before doing the save.
var newEvent = this.get('newEvent');
newEvent.set('temp_id', newEvent.get('id'));
newEvent.save();
And then you can access temp_id in the payload on the server without relying on the URL parameter.

On server processing

I have a web application that will be doing some processing with submitted data. Instead of making the user wait for what will take at least a few seconds, and maybe up to a few minutes during heavy load, I would like to know if there is some way, within ColdFusion, to have processing that just occurs on the server.
Basically, the data would be passed to the server, and then the user would be redirected back to the main page to allow them to do other things, but not necessarily be able to see the results right away. Meanwhile, the processing of the data would take place on the server, and be entered into the database when complete.
Is this even possible within ColdFusion, or would I need to look into using code that would receive the data and process it as a separate program?
ColdFusion 8 introduced the cfthread tag which may assist you.
<cfthread
    name="thread name"              (required)
    action="run"                    (optional)
    priority="NORMAL|HIGH|LOW"      (optional)
    zero or more application-specific attributes>
    Thread code
</cfthread>
To do this reliably, you can use a database table as a job queue. When the user submits the data, you insert a record into the database indicating there is some work to be done. Then you create a scheduled task in the CF Administrator that polls a script that gets the next job from the queue and does the processing you describe. When it is complete, it can update the database, and you can then alert your user that their job is complete.
Make sense?
Another option that will possibly work for you is to use Ajax to post the data to the server. This is a pretty easy method to use, since you can keep pretty much the exact same CF code that you have now and only need to modify the form-submitting page (and you could even use some unobtrusive JavaScript techniques to have this degrade gracefully if JavaScript isn't present).
Here's an example using jQuery and BlockUI that will unobtrusively submit any form on your page in the background:
<script>
$(function () {
    $("form").on("submit", function (e) {
        var f = $(this);
        e.preventDefault();
        $.ajax({
            type: f.attr("method"),
            url: f.attr("action"),
            data: f.serialize(),
            beforeSend: function (jqXHR, settings) {
                // BlockUI's element-level API: block just this form while the request runs
                f.block({ message: "Loading..." });
            },
            complete: function (jqXHR, textStatus) {
                f.unblock();
            },
            success: function (data, textStatus, jqXHR) {
                // do something useful with the response
            },
            error: function (jqXHR, textStatus, errorThrown) {
                // report the error
            }
        });
    });
});
</script>
You should combine all three of these answers to give yourself a complete solution:
Use CF Thread to "kick off" the work.
Add a record to the DB to tell you the process is underway.
Use Ajax to check the DB record to see if the work is complete. When your thread completes, update the record; Ajax finds the work complete and you display some message or indicator on the user's screen so they can go on to step 2 or whatever.
Each of these answers holds a clue to a complete solution.
Not sure if this should be an answer or a comment (since I'm not adding anything new here).
We use a CF event gateway for this. The user submits a file via a web form, and the event gateway monitors the upload directory. Based upon the file name, the gateway knows how it should process the file into the database. This way the only real delay the user faces is the time for the file to actually transmit from their machine up to the server. We, however, have no need to inform the user of any status related to the process, though I could easily see how to work that in if we did.

Django: Large file uploads - custom processing with mod_wsgi

I'm doing file uploads using Django's File Upload mechanism with a custom handler (by subclassing django.core.files.uploadhandler.FileUploadHandler) which does some additional processing in the
receive_data_chunk(self, raw_data, start) function.
I was curious about when the handler is actually called (i.e., after the file has been completely uploaded to the server, or as it arrives on the socket)?
From my tests I found out that you have access to the data as it arrives on the socket, but I would like someone to confirm this. I'm a little puzzled by this, because I thought mod_wsgi was a content generator in Apache, thus being called after the input filters which pre-process the client's request.
PS: I'm using Apache + mod_wsgi + Django.
In Apache, input filters are only applied to the input content when the request handler reads it. So no preprocessing is done by the input filters; filtering happens inline as the request handler consumes the input content.