I am not sure whether the functionality I need is something Qt is simply not capable of, but since there are multiple valid use cases, maybe I am doing something wrong and someone has already dealt with this.
What I want to do is pretty simple - I want to issue a POST request with a given content length and provide the actual POST contents on the fly from a QIODevice. QNetworkAccessManager has a method QNetworkReply * QNetworkAccessManager::post(const QNetworkRequest & request, QIODevice * data) which seems ideal for this. The content length is large (let's assume 8 GB) and it seems that Qt tries to call QIODevice::readData to obtain all of the data before anything is emitted to the network (at least Wireshark is not showing anything, whereas setting a small content length, for example 4, produces the behavior where 4 bytes are read and everything is transmitted). This has led me to believe that Qt actually wants to buffer all of the POST content. I have explicitly set the QNetworkRequest::DoNotBufferUploadDataAttribute attribute, but that does not change this.
It could be that it actually works like this and there is nothing I can do about it, but in that case, how would huge uploads that simply do not fit in memory work? Anyhow, any feedback from people who have experienced the same problem is welcome while I debug to see whether Qt is actually buffering the whole thing.
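For reference, this is roughly the call pattern I am using - a minimal sketch with a placeholder URL and file name, with the Content-Length header and the no-buffering attribute set explicitly:

```cpp
#include <QCoreApplication>
#include <QFile>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>

int main(int argc, char *argv[])
{
    QCoreApplication app(argc, argv);

    QNetworkAccessManager manager;
    QNetworkRequest request(QUrl("http://example.com/upload")); // placeholder URL

    auto *file = new QFile("huge.bin"); // placeholder file name
    if (!file->open(QIODevice::ReadOnly))
        return 1;

    // Ask Qt not to buffer the upload and declare the length up front.
    request.setAttribute(QNetworkRequest::DoNotBufferUploadDataAttribute, true);
    request.setHeader(QNetworkRequest::ContentLengthHeader, file->size());

    QNetworkReply *reply = manager.post(request, file);
    QObject::connect(reply, &QNetworkReply::finished, [&app, file, reply] {
        file->deleteLater();
        reply->deleteLater();
        app.quit();
    });

    return app.exec();
}
```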
I'll copy here part of my previous question to describe the problem:
I wrote an application in C++ that has two parts - the frontend and the backend. These two communicate using the IPC layer provided by wxWidgets. In the backend I use some legacy functions for image data manipulation. One of these functions sometimes hangs or falls into an infinite loop (I can observe that the process uses 0% of resources after some point), but this happens only if I run the backend as a subprocess of the frontend. Otherwise (when I run it manually) it works just fine.
It turns out that printing too many lines with std::cout was causing this, but I'd like to understand why. Could it be that wxWidgets uses some buffer for storing application output and the printing was simply overflowing it? Or is this rather a native issue of Windows? Or could it be related to the std::cout implementation? I was pretty sure I couldn't reproduce this with printf, but it seems that I was wrong - printf also triggers the issue.
The stdout buffer is of finite size. Something must be reading what you write into the buffer, whether that is a file, a console window or another process. If you write faster than the reader can cope with, the buffer will eventually fill up and block any further writes until the reader has consumed some data.
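You can watch this mechanism in isolation with a few lines of POSIX code (a contrived sketch, not the asker's setup): write into a pipe that nobody reads, and the write call blocks as soon as the kernel's pipe buffer, about 64 KB on Linux, is full:

```cpp
#include <unistd.h>
#include <cstdio>

int main()
{
    int fds[2];
    if (pipe(fds) != 0)             // fds[0] = read end, fds[1] = write end
        return 1;

    char block[4096] = {};
    long total = 0;
    for (;;) {
        // Nobody ever reads fds[0], so this call blocks forever once the
        // pipe buffer is full - just like a child process whose stdout
        // is a pipe the parent never drains.
        ssize_t n = write(fds[1], block, sizeof block);
        if (n <= 0)
            return 1;
        total += n;
        fprintf(stderr, "wrote %ld bytes so far\n", total);
    }
}
```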
I'm developing an updater for my application in Qt, primarily to get to know the framework (I realize there are multiple ready-made solutions available; that's not relevant here). It is a basic GUI application using a QMainWindow subclass for its main window and a MyAppUpdater class to perform the actual program logic.
The update information (version, changelog, files to be downloaded) is stored on my server as an XML file. The first thing the updater should do after it sets up the UI is query that server, get the XML file, parse it and display info to the user. Here's where I have a problem though; coming from a procedural/C background, I'd initiate a synchronous download, set a timeout of maybe 3 seconds, then see what happens - if I manage to download the file correctly, I'll parse it and carry on, otherwise display an error.
However, seeing how inconvenient something like that is to implement in Qt, I've come to believe that its network classes are designed in a different way, with a different approach in mind.
I was thinking about initiating an asynchronous download in, say, InitVersionInfoDownload, and then connecting QNetworkReply's finished signal to a slot called VersionInfoDownloadComplete, or something along these lines. I'd also need a timer somewhere to implement timeout checks - if the slot is not invoked after say 3 seconds, the update should be aborted. However, this approach seems overly complicated and in general inadequate to the situation; I cannot proceed without retrieving this file from the server, or indeed do anything while waiting for it to be downloaded, so an asynchronous approach seems inappropriate in general.
Am I mistaken about that, or is there a better way?
TL;DR: It's the wrong approach in any GUI application.
how inconvenient something like that is to implement in Qt
It's not meant to be convenient, since whenever I see a shipping product that behaves that way, I have an urge to have a stern talk with the developers. Blocking the GUI is a usability nightmare. You never want to code that way.
coming from a procedural/C background, I'd initiate a synchronous download, set a timeout of maybe 3 seconds, then see what happens
If you write any sort of machine or interface control code in C, you probably don't want it to be synchronous either. You'd set up a state machine and process everything asynchronously. When coding embedded C applications, state machines make hard things downright trivial. There are several solutions out there; QP/C would be a first-class example.
was thinking about initiating an asynchronous download in, say, InitVersionInfoDownload, and then connecting QNetworkReply's finished signal to a slot called VersionInfoDownloadComplete, or something along these lines. I'd also need a timer somewhere to implement timeout checks - if the slot is not invoked after say 3 seconds, the update should be aborted. However, this approach seems overly complicated
It is trivial. You can't discuss such things without showing your code: perhaps you've implemented it in some horribly verbose manner. When done correctly, it's supposed to look lean and sweet. For some inspiration, see this answer.
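To give a flavor of how lean it can be, here is a minimal sketch of that pattern; the class name, slot, and URL are made up for illustration. A QNetworkReply is driven entirely by its finished signal, and a single-shot QTimer aborts the request after 3 seconds:

```cpp
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QTimer>
#include <QUrl>

class Updater : public QObject
{
    Q_OBJECT
public:
    void initVersionInfoDownload()
    {
        QNetworkReply *reply =
            m_manager.get(QNetworkRequest(QUrl("https://example.com/version.xml")));

        // Abort the request if nothing has arrived within 3 seconds;
        // the finished signal still fires, with an error set.
        auto *timer = new QTimer(reply);
        timer->setSingleShot(true);
        connect(timer, &QTimer::timeout, reply, &QNetworkReply::abort);
        timer->start(3000);

        connect(reply, &QNetworkReply::finished, this, [this, reply] {
            reply->deleteLater();
            if (reply->error() != QNetworkReply::NoError) {
                emit updateFailed(reply->errorString());
                return;
            }
            parseVersionInfo(reply->readAll()); // hypothetical parser
        });
    }

signals:
    void updateFailed(const QString &why);

private:
    void parseVersionInfo(const QByteArray &xml); // assumed to exist elsewhere
    QNetworkAccessManager m_manager;
};
```

The GUI stays responsive the whole time: the user can cancel, resize, or quit while the request is in flight.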
I cannot proceed without retrieving this file from the server, or indeed do anything while waiting for it to be downloaded
That's patently false. Your user might wish to cancel the update and exit your application, or resize its window, or minimize/maximize it, or check the existing version, or the OS might require a window repaint, or ...
Remember: Your user and the environment are in control. An application unresponsive by design is not only horrible user experience, but also makes your code harder to comprehend and test. Pseudo-synchronous spaghetti gets out of hand real quick. With async design, it's trivial to use signal spy or other products to introspect what the application is doing, where it's stuck, etc.
I am creating a user interface in Qt and attaching it to my C/C++ motion application, using shared memory as my form of inter-process communication.
I currently have a class in my motion application that has many members. Most of these members are used to update data on the UI, and some of them get updated about 20 to 50 times a second, so it is pretty fast (it is tracking motion, after all). My problem is that the data is not getting updated on the UI frequently; it gets updated only every few seconds. I was able to get this to work for variables in structures from my application by declaring them "volatile", but the same does not seem to work for members of my class. I know that the problem is not on the UI (Qt) side, because I saw that the actual member data was not being updated in my application, even though I issue commands to update the data every cycle.
I was pretty sure the problem was some optimization occurring because my members were not declared volatile as in my structures, but when I made them volatile it still did not work. I found that when I threw in a print statement in the function that updates my motion data within my motion application, the UI updated much more frequently, as if the print statement deterred the compiler from optimizing something out.
Has anyone experienced this problem, or does anyone have a possible solution?
Your help is greatly appreciated. Thanks ahead of time!
EDIT:
The interface does not freeze completely; it just updates every few seconds instead of continuously, as I intended it to. Various tests show that the problem is not on the GUI or shared-memory side. The problem lies strictly in the motion application. The function that I am calling is below:

```cpp
int motionUpdate(MOTION_STAT *stat)
{
    positionUpdate(&stat->traj);
    return 0;
}
```

where

```cpp
positionUpdate() { stat->Position = motStatus.pos_fb; }
```
Position is a class member that contains x, y, and z. The function does not seem to update the position values unless I put a print statement before positionUpdate(). I don't track changes in shared memory to decide when to update the UI; instead I just update the UI every cycle.
Especially given that you are using Qt, I would strongly advise against "native" shared memory; use signals instead. Concurrency using message passing (signals/slots are one such mechanism) is much, much easier to reason about and debug than trying to share memory.
I would expect your problem with updating is that the UI thread isn't getting enough time to run, so there is a backlog of updates to process.
Try putting in some code that throws away updates if they happen less than 0.3 seconds apart and see if that helps. You may wish to tune that number but start at the larger end.
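Something along these lines, as a sketch only; readSharedPosition() is a hypothetical stand-in for however you copy the latest values out of shared memory. Instead of repainting on every change, poll and coalesce at a fixed rate with a QTimer:

```cpp
#include <QLabel>
#include <QTimer>
#include <QWidget>

class PositionView : public QWidget
{
    Q_OBJECT
public:
    PositionView()
    {
        auto *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &PositionView::refresh);
        timer->start(300); // at most one repaint every 0.3 s
    }

private slots:
    void refresh()
    {
        double x, y, z;
        readSharedPosition(&x, &y, &z);           // hypothetical IPC read
        m_label.setText(QString("x=%1 y=%2 z=%3") // one coalesced update
                            .arg(x).arg(y).arg(z));
    }

private:
    // Assumed to copy the newest values out of shared memory (not shown).
    void readSharedPosition(double *x, double *y, double *z);
    QLabel m_label{this};
};
```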
Secondly, make sure there aren't any "notspots" in your app, in which the GUI thread is not being given the chance to run. If there are, consider putting code into another thread or, alternatively, calling processEvents() within that part of the code.
If the above really isn't what's happening, I would suggest adding more info about the architecture of your program.
I'm trying to stream (large) files over HTTP into a database. I'm using Tomcat and Jersey as the web framework. I noticed that if I POST a file to my resource, the file is first buffered on disk (in temp\MIME*.tmp) before it is handled in my doPOST method.
This is really undesired behaviour, since it doubles disk I/O and also makes for somewhat bad UX: if the browser is already done uploading, the user still needs to wait a few minutes (depending on file size, of course) until he gets the HTTP response.
I know that it's probably not the best implementation of a large file upload (since you don't even have any resume capabilities) but so are the requirements. :/
So my question is whether there is any way to disable (disk) buffering for multipart POSTs. Memory buffering is obviously too expensive, but I don't really see the need for disk buffering anyway (please explain). How do large sites like YouTube handle this situation? Or is there at least a chance to give the user immediate feedback once the file has been sent? (That could be bad too, since something like an SQLException could still occur.)
In case anybody is still interested, I solved the same issue by using the Apache Commons Streaming API.
The code example on that page worked just fine for me.
Ok, so after days of reading and trying different things I stumbled upon HttpServletRequest. At first I didn't even want to try it, since it takes away all the convenience methods of @FormDataParam, but since I didn't know what else to do...
Turns out it helped. When I use @Context HttpServletRequest request and request.getInputStream(), I don't get disk buffering at all.
Now I just have to figure out how to get to the FormDataContentDisposition without @FormDataParam.
Edit:
Ok. MultiPartFormData probably has to buffer on disk to parse the InputStream of the request. So it seems I have to parse it myself if I want to prevent any buffering. :(
Your best bet is to take full control and write your own servlet that just grabs request.getInputStream (or request.getReader if you are consuming text) and does the streaming itself. Most frameworks make your life "easy" by handling all the upload, temporary storage, etc. for you, and often make it difficult to do things like streaming. It's quite easy to grab the stream yourself and do whatever you want.
I'm pretty sure Jersey is writing the files to disk to ensure memory is not flooded. Since you know exactly what you need to do with the incoming data - stream it into the database - you probably have to write your own MessageBodyReader and get Jersey to use it to process your incoming multipart data.
I'm a newbie C++ developer working on an application which needs to write out a log file every so often, and we've noticed that the log file has been corrupted a few times when running the app. The main scenarios seem to be when the program is shutting down or crashes, but I'm concerned that these aren't the only times something may go wrong, as the application was born out of a fairly "quick and dirty" project.
It's not critical to have the most up-to-date data saved, so one idea someone mentioned was to alternately write to two log files; then, if the program crashes, at least one of them should still have proper integrity. But this doesn't smell right to me, as I haven't seen any other application use this method.
Are there any "best practises" or standard "patterns" or frameworks to deal with this problem?
At the moment I'm thinking of doing something like this -
Write data to a temp file
Check the data was written correctly with a hash
Rename the original file, and put the temp file in place.
Delete the renamed original
Then if anything fails, I can roll back by just deleting the temp file, and the original is untouched.
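In C++17 terms that pattern might look like the sketch below - a compressed variant of the steps above in which a single atomic rename replaces the rename/delete pair, with the hash check from step 2 omitted for brevity (a robust version would also fsync the file and its directory, which needs platform-specific calls):

```cpp
#include <filesystem>
#include <fstream>
#include <string>

namespace fs = std::filesystem;

bool writeLogAtomically(const fs::path &target, const std::string &contents)
{
    fs::path tmp = target;
    tmp += ".tmp";                          // step 1: write to a temp file

    {
        std::ofstream out(tmp, std::ios::binary | std::ios::trunc);
        if (!out.write(contents.data(),
                       static_cast<std::streamsize>(contents.size())))
            return false;
        out.flush();                        // push the data to the OS
        if (!out)
            return false;
    }                                       // stream closed here

    std::error_code ec;
    fs::rename(tmp, target, ec);            // atomically replace the old file
    if (ec) {
        fs::remove(tmp, ec);                // roll back: target stays untouched
        return false;
    }
    return true;
}
```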
You must find the reason why the file gets corrupted. If the app simply crashes, it can't corrupt the file; the only thing that can happen is that the file is truncated (i.e. the last log messages are missing). The app can't really jump around in the file and modify something elsewhere (unless you call seek in the logging code, which would surprise me).
My guess is that the app is multi-threaded and the logging code is being called from several threads, which can easily lead to the data being corrupted before it is written to the log.
You probably forgot to call fsync() every so often, or the data comes in from different threads without proper synchronization among them. Hard to tell without more information (platform, form of corruption you see).
A workaround would be to use log-file rollover, i.e. starting a new file every so often.
I really think that you (and others) are wasting your time when you start adding complexity to log files. The whole point of a log is that it should be simple to use and implement, and should work most of the time. To that end, just write the log to an unbuffered stream (like cerr in a C++ program) and live with the occasional snafu (very occasional, in my experience).
OTOH, if you really need an audit trail of everything your app does, for legal reasons, then you should be using some form of transactional storage such as a SQL database.
Not sure if your app is multi-threaded; if so, consider using the Active Object pattern (PDF) to put a queue in front of the log and funnel all writes through a single thread. That thread can commit the log in the background. All log writes will be asynchronous and in order, but not necessarily written immediately.
The active object can also batch writes.
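A bare-bones sketch of that idea, using only the standard library (class and member names are my own): producers on any thread enqueue lines, and a single worker thread owns the file, writes in order, and batches whatever has accumulated:

```cpp
#include <condition_variable>
#include <fstream>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

class AsyncLogger
{
public:
    explicit AsyncLogger(const std::string &path)
        : m_out(path, std::ios::app), m_worker([this] { run(); }) {}

    ~AsyncLogger()
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_done = true;
        }
        m_cv.notify_one();
        m_worker.join();                      // drain remaining lines
    }

    void log(std::string line)                // safe to call from any thread
    {
        {
            std::lock_guard<std::mutex> lock(m_mutex);
            m_queue.push(std::move(line));
        }
        m_cv.notify_one();
    }

private:
    void run()                                // the single writer thread
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        for (;;) {
            m_cv.wait(lock, [this] { return m_done || !m_queue.empty(); });
            while (!m_queue.empty()) {        // batch whatever is queued
                std::string line = std::move(m_queue.front());
                m_queue.pop();
                lock.unlock();                // write without holding the lock
                m_out << line << '\n';
                lock.lock();
            }
            m_out.flush();                    // commit the batch
            if (m_done && m_queue.empty())
                return;
        }
    }

    std::ofstream m_out;
    std::mutex m_mutex;
    std::condition_variable m_cv;
    std::queue<std::string> m_queue;
    bool m_done = false;
    std::thread m_worker;                     // started last, after the rest
};
```

Usage is just AsyncLogger logger("app.log"); logger.log("started"); from any thread; the file is only ever touched by the one worker, so interleaved writes can't corrupt it.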