I have a script that displays all images found on a server and checks for new ones on a regular basis.
From time to time it downloads an image that hasn't been completely uploaded yet, resulting in a half-loaded JPEG whose lower part is gray.
I'm using a QByteArray to store the received data and load it into a QPixmap with:
QByteArray bytes = reply->readAll(); // bytes
qDebug() << "loading pixmap" ;
qDebug() << pixmap.loadFromData(bytes);
I would like to detect when the load failed and retry 500 ms later, but I cannot find a way to verify that the pixmap contains valid JPEG data.
loadFromData() returns true, but inside this method I get a warning. This is the application output for the lines above:
loading pixmap
Corrupt JPEG data: premature end of data segment
true
Is there any way to check (e.g., get a bool) whether the pixmap contains corrupted data?
Solution
As suggested by vsz, if the images are natural images and not graphics, it's very unlikely that the pixels of the lower row are the exact gray of an invalid JPEG. Thus it is possible to determine whether an image is valid with these lines in a method:
QImage img = pixmap.toImage();
const QRgb invalidGray = 4286611584; // 0xFF808080, the default gray of a truncated JPEG
if (img.pixel(img.width()-1, img.height()-1) == invalidGray && img.pixel(img.width()/2, img.height()-1) == invalidGray
        && img.pixel(0, img.height()-1) == invalidGray)
    return false; // the sampled bottom-row pixels are all the default gray, so the image is likely incomplete
return true;
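A minimal sketch of how this check could be wrapped into a helper and combined with the 500 ms retry mentioned in the question (isValidJpegPixmap and requestImageAgain() are illustrative names, not Qt API):
#include <QImage>
#include <QPixmap>
#include <QTimer>

// Heuristic: a truncated JPEG is filled with the default gray 0xFF808080 (4286611584).
bool isValidJpegPixmap(const QPixmap &pixmap)
{
    const QImage img = pixmap.toImage();
    const QRgb invalidGray = 0xFF808080;
    const int y = img.height() - 1;
    return !(img.pixel(img.width() - 1, y) == invalidGray
             && img.pixel(img.width() / 2, y) == invalidGray
             && img.pixel(0, y) == invalidGray);
}

// In the slot that receives the reply:
// if (!isValidJpegPixmap(pixmap))
//     QTimer::singleShot(500, this, SLOT(requestImageAgain())); // retry 500 ms later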
There are several possible messages on corrupt JPEG data:
Corrupt JPEG data: premature end of data segment
Corrupt JPEG data: 12 extraneous bytes before marker 0xd9
Corrupt JPEG data: bad Huffman code
Unfortunately, the nice vsz approach would not catch the two other errors I have encountered, since no default gray is used in the bad image area for those.
But for the first error it is better than what I propose below, which tries to report all errors but fails with multithreading.
In Qt 4.8.6, the "Corrupt JPEG data" errors were sent directly to the console, with no way to catch them from Qt code.
Not sure when it changed, but in Qt 5.5.1 these messages go through the message handler, so it is possible to catch them.
Since the software only loads pictures from disk, I use the following to find out which picture is corrupted.
In the message handler, I detect any "Corrupt JPEG data" message and set a global variable corrupt_jpg_detected to true:
if (msg.contains("Corrupt JPEG data"))
    setCorrupt_jpg_detected(&corrupt_jpg_detected, true);
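A minimal sketch of such a handler installed with qInstallMessageHandler, using a plain std::atomic flag instead of the setter above (the handler name and the pass-through printing to stderr are my assumptions):
#include <QtGlobal>
#include <QString>
#include <atomic>
#include <cstdio>

std::atomic<bool> corrupt_jpg_detected(false); // global flag checked after loading an image

void messageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg)
{
    Q_UNUSED(type);
    Q_UNUSED(context);
    if (msg.contains("Corrupt JPEG data"))
        corrupt_jpg_detected = true;          // remember that libjpeg complained
    fprintf(stderr, "%s\n", qPrintable(msg)); // keep printing messages as before
}

// early in main():
// qInstallMessageHandler(messageHandler);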
Then in the code where I read the image, I add the following:
QImage img(path_to_my_image); // reading the file will emit the "Corrupt JPEG data" warning if the image is corrupted
if (corrupt_jpg_detected) { /* print to the log file / inform the user of the corruption; can include path_to_my_image */ }
This has a strong weakness: if you are using multiple threads to load pictures (with QtConcurrent::mapped, for example), then the first thread that tests if (corrupt_jpg_detected) will print the image info, and it will most likely not be the thread that encountered the corrupted image...
The decoding of JPEG data happens in the external library libjpeg, so the Qt classes QPixmap and QImage don't do anything about it; they just receive the image that libjpeg decoded. If libjpeg hands an image to QPixmap, QPixmap regards it as a success.
There was a request back in 2009 for Qt to do something about it (see this post), but it seems nothing has been done regarding this issue.
This means it's highly likely that you are on your own: unless QPixmap and QImage are expanded with new features in future releases, you can't verify the integrity of a JPEG image just by calling a method of these classes.
I would suggest one of the following:
access libjpeg by yourself
check the color of the last row of the image. It's unlikely that every pixel of the lowest row will be the same shade of gray as in a corrupted image. This depends on your application, but if you deal with natural images (photos), this is the solution I would advise.
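A sketch of that last-row check, scanning the whole bottom row instead of a few sample pixels (the function name is illustrative, and this remains a heuristic that only catches the default-gray truncation case):
// Returns true when every pixel of the bottom row is the default gray of a truncated JPEG.
bool bottomRowIsDefaultGray(const QImage &img)
{
    const int y = img.height() - 1;
    const QRgb defaultGray = 0xFF808080; // 4286611584, as used in the check above
    for (int x = 0; x < img.width(); ++x)
        if (img.pixel(x, y) != defaultGray)
            return false; // at least one differing pixel, so probably a complete image
    return true;          // uniform default-gray bottom row, so likely truncated
}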
Related
UPD. Okay, so since my question was met mostly with confusion, I'll try to fix it by narrowing the problem down to the essentials.
Given:
a large image of several GB, call it img.jpg (or .png/.bmp);
the coordinates of a small fragment of that image, rect = QRect(x,y,w,h);
Write:
a function that loads the fragment rect from the image on disk into a QPixmap; the function is to be called frequently.
Attempted solution and problems:
the first solution was to use QImageReader, which has a setClipRect(QRect) method to specify the region of the image to be loaded. Perfect, right? Well, the problem is that read() returns a QImage only the first time it is called. After that, it returns empty images with the error message "Unable to read image data". Rewinding the QIODevice that QImageReader uses back to zero doesn't help, and creating the QImageReader again costs dearly in performance.
Links:
Here's a question from 2011 with essentially the same problem (my question might as well be a duplicate). A person attempts to do the same thing I do and faces the same problem. He asks why his reader gives him "Unable to read image data"; well, because it doesn't read twice, and it's unclear how to fix it.
Here they ask basically the same question, and the solution is to rewind the QIODevice by calling reader->device()->seek(0);. The problem is that it doesn't work: after rewinding the device, reader->read() still returns empty QImages and produces the error message "Unable to read image data" (a sketch of this attempt follows below).
Here, here and here are posts that discuss the problem of loading large images in tiles in Qt; they helped me implement my current solution.
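For reference, a rough sketch of the rewinding pattern described in that second link, which as noted still fails here (the filename and tile coordinates are placeholders):
QImageReader reader("img.jpg");
reader.setClipRect(QRect(0, 0, 256, 256));
QImage first = reader.read();   // works the first time
reader.device()->seek(0);       // rewind the underlying QIODevice, as suggested
reader.setClipRect(QRect(256, 0, 256, 256));
QImage second = reader.read();  // still comes back null with "Unable to read image data"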
Below is the original post with more details.
I'm writing a widget that loads an image in fragments and displays it in a QGraphicsScene with tiles that can be loaded and unloaded depending on their visibility in the viewport. Tiles show a placeholder when they are unloaded and load their corresponding fragment into a QPixmap on demand. The widget is supposed to be able to open images of sizes up to several gigabytes without taking much memory.
I was planning to have each tile keep the image opened and ready to be read from a certain place. I'm using a QImageReader, which allows you to set a rectangle defining the area of the image to be loaded. But it turned out that after setting a rectangle I can read the image only once. In order to load it again, I have to create a new QImageReader instance, which opens the file anew. I tried opening ~2 GB images with 256x256 tiles and it loads very slowly: on my computer, it takes about 30 seconds to load one HD screen.
This is the code that I currently use to load tiles:
void TileLoader::load(int posx, int posy, int w, int h, QString filename) {
    QImageReader reader(filename);            // a new reader (and file open) for every tile, the expensive part
    QRect tile(posx, posy, w, h);
    reader.setClipRect(tile);                 // decode only this region of the image
    QImage image(w, h, QImage::Format_RGB32);
    reader.read(&image);                      // works once per reader; a second read() fails
    emit tileLoaded(QPixmap::fromImage(image));
}
Instead of creating a QImageReader instance every time, I was hoping to pass the existing one as an argument. But unfortunately it works only once, so QImageReader doesn't seem to be the way to go. But what is?
I can also reproduce the slow loading with regular images of about 1000x1000 pixels if I set the tile size to 16x16. I can see the little tiles unfolding the image in front of my eyes in a wave-like motion for about 10 seconds, which is a beautiful thing to watch in itself, but completely impractical for the task at hand.
So the question is: is there a way to keep my image file open for the whole time a tile exists, so that parts of it can be read quickly?
So, I have a video decoder written in C++ with the help of the ffmpeg library. No problem, until it comes to decoding JPEG 2000 frames in multiple threads; in that case the image is discontinuous. I set the context to use an even number of threads and to process image slices:
m->context->thread_count = m_cfgHhiThreads->value();
m->context->thread_type = FF_THREAD_SLICE;
Here is a sample image captured after the decoding process (the dimensions are fine); this only happens if multithreading is enabled.
The question is: why is this happening?
FFmpeg does not report any error; it actually thinks the image has been decoded correctly. And it is decoded correctly, the problem is in the slicing.
I found out that if I set up exactly as many threads as the image has slices, it actually works fine.
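A minimal sketch of that workaround, assuming the number of slices in the incoming JPEG 2000 frames is known ahead of time (num_slices is a placeholder):
// Use exactly as many slice threads as the frame has slices.
m->context->thread_type  = FF_THREAD_SLICE;
m->context->thread_count = num_slices; // e.g. the tile/slice count of the J2K stream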
I am capturing images in real time using OpenCV, and I want to show these images in the OGRE window as a background. So, for each frame the background will change.
I am trying to use MemoryDataStream along with loadRawData to load the images into an OGRE window, but I am getting the following error:
OGRE EXCEPTION(2:InvalidParametersException): Stream size does not
match calculated image size in Image::loadRawData at
../../../../../OgreMain/src/OgreImage.cpp (line 283)
An image comes from OpenCV with a size of 640x480, and frame->buffer is of type Mat in OpenCV 2.3. Also, the pixel format I used in OpenCV is CV_8UC3 (i.e., each channel is 8 bits and each pixel has 3 channels (B8G8R8)).
Ogre::MemoryDataStream* videoStream = new Ogre::MemoryDataStream((void*)frame->buffer.data, 640*480*3, true);
Ogre::DataStreamPtr ptr(videoStream,Ogre::SPFM_DELETE);
ptr->seek(0);
Ogre::Image* image = new Ogre::Image();
image->loadRawData(ptr, 640, 480, Ogre::PF_B8G8R8);
texture->unload();
texture->loadImage(*image);
Why am I always getting this error?
Quick idea: maybe memory 4-byte alignment issues? See Link 1 and Link 2.
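One way to test that idea on the OpenCV side is to check whether the Mat's rows are padded; if they are, the buffer is larger than 640*480*3 and OGRE's size check would fail (this is only a diagnostic sketch):
// A continuous 640x480 CV_8UC3 Mat has step == 640*3; anything else means padded rows.
if (!frame->buffer.isContinuous() || (size_t)frame->buffer.step != 640 * 3)
    std::cout << "Mat buffer is padded: step = " << (size_t)frame->buffer.step << std::endl;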
I'm not an Ogre expert, but does it work if you use loadDynamicImage instead?
EDIT: Just for grins, try using the Mat fields to set up the buffer:
Ogre::Image* image = new Ogre::Image();
image->loadDynamicImage((uchar*)frame->buffer.data, frame->buffer.cols, frame->buffer.rows, frame->buffer.channels(), Ogre::PF_B8G8R8);
This will avoid copying the image data, and should let the Mat delete its contents later.
I had similar problems getting image data into OGRE; in my case the data came from ROS (see ros.org). The thing is that your data in frame->buffer is not raw, but has a file header etc.
I think my solution was to search the data stream for the beginning of the image (by finding the appropriate indicator in the data block, e.g. 0x4D 0x00) and inserting the data from that point on.
You would have to find out where in your buffer the header ends and where your data begins.
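A rough sketch of that header-skipping idea; the 0x4D 0x00 marker comes from the ROS case above and is only a placeholder, so you would need to substitute whatever actually precedes your pixel data:
// Scan the buffer for the marker that precedes the raw pixels, then hand OGRE the data from there.
const unsigned char* buf = frame->buffer.data;
const size_t bufLen = frame->buffer.total() * frame->buffer.elemSize();
size_t offset = 0;
while (offset + 1 < bufLen && !(buf[offset] == 0x4D && buf[offset + 1] == 0x00))
    ++offset;
Ogre::MemoryDataStream* videoStream = new Ogre::MemoryDataStream(
    (void*)(buf + offset), 640 * 480 * 3, false /* don't let OGRE free the Mat's memory */);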
Android seems to visibly reduce the quality of PNG files at compile time.
I have an application working with a canvas object. The process writes canvas data over a PNG file that has smaller dimensions than the canvas. The writing process repeats more than once according to user events (maybe 20 times, depending on the user). But after every writing pass, the image quality gets worse and worse; it becomes pixelated.
Is there any way to turn off or disable compression for this?
EditingPartMutable_Bitmap.WriteToStream(out, 100, "PNG")
'quality = 100 didn't work either
Any idea?
PNG is a lossless format, so the quality setting is ignored. The pixelated effect you are getting must have another cause.
I'm trying to process each frame in a pair of video files in OpenCV and then write the resulting frames to an output AVI file. Everything works, except that the output video file looks strange: instead of one solid image, the image is repeated three times and horizontally compressed so that all three copies fit into the window. I suspect something is going wrong with the number of channels the writer is expecting, but I'm giving it 8-bit single-channel images to write. Below are the settings with which I'm initializing my video writer:
//Initialize the video writer (the last argument is is_color = 0)
CvVideoWriter *writer = cvCreateVideoWriter("out.avi", CV_FOURCC('D','I','V','X'), 30, frame_sizeL, 0);
Has anyone encountered this strange output from the OpenCV video writer before? I've been checking the resulting frames with cvSaveImage just to see if my processing step is somehow creating the "tripled" image, but it's not. It's only when I write to the output AVI with cvWriteFrame that the image gets "tripled" and compressed.
Edit: So I've discovered that this only happens when I attempt to write single-channel images using cvWriteFrame. If I write 3-channel 8-bit RGB images, the output video turns out fine. Why is it doing this? I am correctly passing 0 for the color argument when initializing the CvVideoWriter, so it should be expecting single-channel images.
In the C++ version you have to tell cv::VideoWriter that you are sending single-channel images by setting the last parameter to false; are you sure you are doing this?
Edit: alternatively, you can convert a grayscale image to color using cvtColor() and CV_GRAY2RGB.
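A sketch of both suggestions using the OpenCV 2.x C++ API (frame_size, gray_frame and the writer names are placeholders):
// Option 1: declare the writer as grayscale by passing isColor = false.
cv::VideoWriter gray_writer("out.avi", CV_FOURCC('D','I','V','X'), 30, frame_size, false);
gray_writer << gray_frame;                     // 8-bit single-channel frames

// Option 2: keep a color writer and expand each grayscale frame to 3 channels first.
cv::Mat color_frame;
cv::cvtColor(gray_frame, color_frame, CV_GRAY2RGB);
color_writer << color_frame;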