In my application, users can upload files to a server. For this I am using WinInet. I want to stop the upload when the user clicks a Stop button, but currently clicking Stop does not stop the ongoing upload. How can I cancel the file upload?
If you are using WinInet, you need to post the file in several smaller chunks. If the user presses "Cancel", you then set a variable to abort the upload. This variable must be checked after each small upload...
A full example of splitting uploads into smaller parts can be found here:
http://support.microsoft.com/kb/177188/en-us
You need to download the "hsrex.exe" file, open it with WinZip or 7-Zip, and extract the "BigPost.cpp" file. I can also post the code here if you want...
Try the CHttpFile::EndRequest() function...
Another idea: try terminating the thread performing the upload operation.
First, you need to be doing the WinInet stuff on a worker thread so that the UI is freed up to get the Cancel button click. When user clicks Cancel, your UI thread should close the handle being used by WinInet to upload chunks of the file. This causes WinInet to instantly abort any upload chunk currently in progress. To cleanly exit the worker thread at this point, the UI thread should set a bool 'done' flag that the worker thread reads, and if it is set, the worker thread exits instead of looping to upload another chunk.
I am using wxWizard. On my third page I need to call a backend function, and when that function takes a long time to respond, my app hangs and shows "Not Responding" in the title bar. Once the response is received from the backend function, the app behaves normally. Why is this happening? Am I doing something wrong? How can I keep the app from hanging? The backend response is delayed because of network issues or long processing time in the backend.
See this answer. Following is an extract:
An application gets the events from a queue provided by Windows.
If the application doesn't poll the event queue for a while (5 seconds), for example while doing a long calculation, Windows assumes that the application is hung and alerts the user.
To avoid that, applications should push expensive calculations to worker threads, or split up the processing and make sure the queue gets polled regularly.
So the problem with your code is that your program has only one thread, the main thread. It takes care of all the activity: UI updates, event handling, responding to the user, and so on (all of which take very little time). But handling the connection to the backend server is time-consuming. You should therefore use another thread for the network operations. That way the main thread stays available for its normal work and the app no longer shows "Not Responding".
I am writing a program to interact with a network-based API using Qt.
The interaction with the API is made with XML messages (both queries and results)
I implemented the communication and data processing in a class in a shared library project and I have a QMainWindow in which the user can enter the connection details. When clicking on the connect button, the following should happen:
1. An instance of the connecting class is created.
2. A connection message is sent to the API to get the session ID. The answer is parsed and a session ID is stored in the class instance.
3. A message is sent to the API the get some field information. The XML is then parsed to extract the required field information to get data from the API.
4. Another message is sent to get the data matching the fields. The XML answer is then parsed and stored in a data structure for processing.
5. The data is then processed and finally displayed to the user.
I made a simple console program to test the library and it is working fine - no message is sent before all the data from the previous message has been processed. However, when I implement the same process in a QMainWindow instance, no wait occurs and messages are sent one after another without waiting.
How can I block the GUI thread to wait for full processing before sending the next message?
Thanks
Blocking the UI isn't achieved by blocking the event loop. It's done by disabling the widgets you don't want to allow interaction with, either by literally calling setEnabled(false) on them, or by guarding the interaction with a state variable, e.g.:
connect(button, &QPushButton::clicked, button, [this]{
    if (!hasAllData) return;
    // react to a button press
});
All you need is to define a set of states your application can be in, and disable relevant widgets in appropriate states. I presume that once the session is established, it'd be fastest to issue all queries in parallel anyway, and asynchronously update the UI as the replies come back in real time.
To prevent the application's data from being lost when the user chooses "End Task" in Task Manager, I am trying to save the data in the handler for the WM_CLOSE message.
The app saves its data successfully when I close it via Alt+F4 or the "Close" button. But when I kill it via Task Manager, the save does not complete properly; it seems the saving process is terminated midway.
I tried to debug it in the VS2015 IDE. The debugger hit a breakpoint in the WM_CLOSE handler, but it could not go further: pressing F10 to step over caused my app to close immediately.
Is there any way to delay the termination process until my application has saved its data completely?
I found two links below but they didn't help.
How to handle "End Task" from Windows Task Manager on a background process?
How does task manager kill my program?
The Task Manager may decide that your application isn't responding and terminate it. There is nothing you can do about that.
If you want to ensure that your data is always saved, you should save constantly in the background (with some heuristics, like at least once every minute, preferably after no change has happened for a few seconds). It's more complex, but it has the advantage of working even when you won't receive WM_CLOSE at all, for example in the case of power loss.
In our web application, each HTTP request triggers a lot of computation on the back end. It can take anywhere from 10 seconds to an hour. While the result is being computed, "Waiting..." is shown on the website for the respective user.
But it can happen that a user abandons the request partway through. What can be done on the back end so that the computation can be stopped midway to save resources? What different tactics can be applied here?
And rather than killing the thread directly, a graceful termination policy works wonders.
I'm not sure if this fits your scenario but here is how I have tackled this issue in the past. We were generating pdf reports for a web-app. Most reports could be generated in under 5 seconds but some would take up to an hour.
When the User clicks on generate button we redirect them to a "Generating..." dialog screen which has a sort of progress bar and a Cancel button. This also launches the generate process on the server in a separate thread (we have a worker pool). The browser then polls the server regularly via ajax to check on the progress (either update the progress bar or redirect to the display page when finished).
The synchronization at the server between the generating process and the ajax process was done via a process synchronization object. The sync-obj was a very simple class instance which could be retrieved quickly from any thread at any time via some unique string.
Both processes could update this shared sync-obj. As the report generated, the repgen thread would update the sync-obj, and the ajax thread would relay the progress to the browser. If the User clicked the Cancel button, the ajax thread would set the "cancel" flag in the sync-obj, and the repgen thread would pick that up and break out of the generate loop.
Clearly the responsiveness of the whole process depends a lot on how frequently the repgen thread checks the sync-obj and that often comes down to how the individual report was coded.
Finally, to answer your question: if the User gets bored, goes "back", and clicks the generate button again, we do not cancel the first report and start a second. We recognise that it is the same report (and the same sync-obj id) and just let it continue. However, if that does not suit your scenario, then starting a generate process could cancel the first one in the same manner the User can via the Cancel button.
How can I return a 400 status code and close the connection, without aborting script execution? I'm looking to initiate execution of the script using <cfhttp> but not wait for the script to end before returning.
You need to run the portion of the script you want to keep running after the connection is closed on a different thread.
Here's a tutorial on how to launch a new thread in ColdFusion.
If you want to trigger a new request from another request without waiting, just set the cfhttp timeout to 0 and it will return right away without waiting for a response. For continuing a script after returning, a separate thread may be a better idea; but if you are hitting something outside the server, or really need a separate request, then cfhttp timeout=0 should do the trick.