GDI+ gif speed problem - c++

I am using GDI+ in C++ to open a GIF, but I find the frame intervals are really strange.
They are different from how the Windows picture viewer plays the file.
The code I have written is as follows:
m_pMultiPageImg = new Bitmap(XXXXX);
int size = m_pMultiPageImg->GetPropertyItemSize(PropertyTagFrameDelay);
m_pTimeDelays = (PropertyItem*) malloc(size);
m_pMultiPageImg->GetPropertyItem(PropertyTagFrameDelay, size, m_pTimeDelays);
int frameSize = m_pMultiPageImg->GetFrameDimensionsCount();
// the interval of frame FrameNumber:
long lPause = ((long*)m_pTimeDelays->value)[FrameNumber] * 10;
However, I found that for some frames lPause <= 0.
What does this mean?
And is the code I listed correct for getting the interval?
Many thanks!

The frame duration field in the GIF header is only two bytes long (interpreted as hundredths of a second, allowing values from 0 to 655.35 seconds).
You seem to be interpreting it as long, which is probably 4 bytes on your platform, so you may be reading another field along with the duration. It is hard to tell from the code you provided, but I think this is the problem.

Frame delays should not be negative numbers. I think the error comes in during the array type conversion, or "FrameNumber" goes out of bounds.
GetPropertyItem(PropertyTagFrameDelay, ...) fills a raw byte buffer. It is safer to interpret it as an array of 32-bit integers (e.g. INT32) rather than "long": "long" is 4 bytes on 32-bit systems, but can be 8 bytes on some 64-bit systems.
m_pMultiPageImg->GetFrameDimensionsCount() returns the number of frame dimensions in the image, not the number of frames. The dimension of the first frame (master image) is usually used in order to get the frame count.
In your case, the code looks like
int count = m_pMultiPageImg->GetFrameDimensionsCount();
GUID* dimensionIDs = new GUID[count];
m_pMultiPageImg->GetFrameDimensionsList(dimensionIDs, count);
int frameCount = m_pMultiPageImg->GetFrameCount(&dimensionIDs[0]);
delete[] dimensionIDs;
Hope this helps.

Related

Loading large JPG images in qt fails [duplicate]

Are there any documented size/space limitations of QPixmap and/or QImage objects? I did not find any useful information regarding this. I'm currently using Qt 4.7.3 on OS X and Windows. Particularly, I'm interested in:
Width/Height limits?
Limits depending on color format?
Difference between 32/64 bit machines?
Difference regarding OS?
I would naively suspect that memory is the only limitation, so one could calculate the maximum size as
width x height x bytes_per_pixel
I assume there is a more elaborate rule of thumb; 32-bit machines may also run into addressing problems when you reach GB dimensions.
In the end I want to store multiple RGBA images of about 16000x16000 pixel in size and render them using transparency onto each other within a QGraphicsScene. The workstation available can have a lot of RAM, let's say 16GB.
tl;dr: What size limits of QImage/QPixmap are you aware of, or where can I find such information?
Edit: I'm aware of the tiling approach and I'm fine with that. Still it would be great to know the things described above.
Thanks!
Both are limited to 32767x32767 pixels. That is, you can think of them as using a signed 16-bit value for both the X and Y resolution.
No axis can ever exceed 32767 pixels, even if the other axis is only 1 pixel. Operating system "bitness" does not affect the limitation.
The underlying system may run into other limits, such as memory as you mentioned, before such a huge image can be created.
You can see an example of this limitation in the following source code:
http://git.zx2c4.com/qt/plain/src/gui/image/qpixmap_x11.cpp
if (uint(w) >= 32768 || uint(h) >= 32768) {
    w = h = 0;
    is_null = true;
    return;
}
Building on the answer by @charles-burns, here is the relevant source code for QImage:
QImageData *d = 0;
if (format == QImage::Format_Invalid)
    return d;

const int depth = qt_depthForFormat(format);
const int calc_bytes_per_line = ((width * depth + 31)/32) * 4;
const int min_bytes_per_line = (width * depth + 7)/8;
if (bpl <= 0)
    bpl = calc_bytes_per_line;
if (width <= 0 || height <= 0 || !data
    || INT_MAX/sizeof(uchar *) < uint(height)
    || INT_MAX/uint(depth) < uint(width)
    || bpl <= 0
    || height <= 0
    || bpl < min_bytes_per_line
    || INT_MAX/uint(bpl) < uint(height))
    return d; // invalid parameter(s)
So here, bpl is the number of bytes per line, which is effectively width * depth_in_bytes. Using algebra on that final invalid-parameter test:
INT_MAX/uint(bpl) < uint(height)
INT_MAX < uint(height) * uint(bpl)
INT_MAX < height * width * depth_in_bytes
So, your image size in total must be less than 2147483647 (for 32-bit ints).
I actually had occasion to look into this at one time. Do a search in the source code of qimage.cpp for "sanity check for potential overflows" and you can see the checks that Qt is doing. Basically,
The number of bytes required (width * height * depth_for_format) must be less than INT_MAX.
It must be able to malloc those bytes at the point you are creating the QImage instance.
Are you building a 64 bit app? If not, you are going to run into memory issues very quickly. On Windows, even if the machine has 16GB ram, a 32 bit process will be limited to 2GB (Unless it is LARGEADDRESSAWARE then 3GB). A 16000x16000 image will be just under 1 GB, so you'll only be able to allocate enough memory for 1, maybe 2 if you are very lucky.
With a 64 bit app you should be able to allocate enough memory for several images.
When I try to load a 6160x4120 JPEG into a QPixmap, I get the warning "qt.gui.imageio: QImageIOHandler: Rejecting image as it exceeds the current allocation limit of 128 megabytes" and an empty QPixmap is returned.
This seems to be the strictest constraint I have found so far.
There is, however, an option to raise this limit with QImageReader::setAllocationLimit(int mbLimit).

How do you convert a 16-bit unsigned integer vector to a larger 8-bit unsigned integer vector?

I have a function that needs to return a 16-bit unsigned int vector, but another function that calls it needs the output as an 8-bit unsigned int vector. For example, if I start out with:
std::vector<uint16_t> myVec(640*480);
How might I convert it to the format of:
std::vector<uint8_t> myVec2(640*480*4);
UPDATE (more information):
I am working with libfreenect and its getDepth() method. I have modified it to output a 16-bit unsigned integer vector so that I can retrieve the depth data in millimeters. However, I would also like to display the depth data. I am working with some example C++ code from the freenect installation, which uses GLUT and requires an 8-bit unsigned int vector to display the depth. I need the 16-bit data to retrieve the depth in millimeters and log it to a text file, so I was looking to retrieve the data as a 16-bit unsigned int vector in GLUT's draw function and then convert it so that I can display it with the GLUT function that's already written.
As per your update, assuming the 8-bit unsigned int is going to be displayed as a gray scale image, what you need is akin to a Brightness Transfer Function. Basically, your output function is looking to map the data to the values 0-255, but you don't necessarily want those to correspond directly to millimeters. What if all of your data was from 0-3mm? Then your image would look almost completely black. What if it was all 300-400mm? Then it'd be completely white because it was clipped to 255.
A rudimentary way to do it would be to find the minimum and maximum values, and do this:
double scale = 255.0 / (double)(maxVal - minVal);
for (size_t i = 0; i < std::min(myVec.size(), myVec2.size()); ++i)
{
    myVec2.at(i) = (uint8_t)((double)(myVec.at(i) - minVal) * scale);
}
Depending on the distribution of your data, you might need to do something a little more complex to get the most out of your dynamic range.
Edit: This assumes your glut function is creating an image, if it is using the 8-bit value as an input to a graph then you can disregard.
Edit2: An update after your other update. If you want to fill a 640x480x4 vector, you are clearly building an image. You need to do what I outlined above, but the 4 channels it is looking for are Red, Green, Blue, and Alpha. The Alpha channel should be 255 at all times (it controls transparency, and you don't want the image transparent). For the other three, use the scaled value from the function above: if you set all three channels (red, green, and blue) to the same value, the pixel will appear as grayscale. For example, if my data ranged from 0-25mm, for a pixel whose value is 10mm I would set the data to 255/(25-0) * 10 = 102, so the pixel would be (102, 102, 102, 255).
Edit 3: Adding wikipedia link about Brightness Transfer Functions - https://en.wikipedia.org/wiki/Color_mapping
How might I convert it to the format of: std::vector<uint8_t> myVec2, such that myVec2.size() will be twice as big as myVec.size()?
myVec2.reserve(myVec.size() * 2);
for (auto it = begin(myVec); it != end(myVec); ++it)
{
    uint8_t val = static_cast<uint8_t>(*it); // isolate the low 8 bits
    myVec2.push_back(val);
    val = static_cast<uint8_t>((*it) >> 8); // isolate the upper 8 bits
    myVec2.push_back(val);
}
Or you can swap the order of the push_back() calls if it matters which byte comes first (the upper or the lower).
Straightforward way:
std::vector<std::uint8_t> myVec2(myVec.size() * 2);
std::memcpy(myVec2.data(), myVec.data(), myVec.size() * sizeof(std::uint16_t));
Note that memcpy counts bytes, not elements, hence the multiplication by sizeof(std::uint16_t). Alternatively, with the standard library:
std::copy(begin(myVec), end(myVec), begin(myVec2));
but this narrows each 16-bit element to its low byte and fills only the first half of myVec2, which is not the same as the byte-for-byte copy above.

How to deal with values larger than char type

I knew this was going to come back and bite me one day. I'm reading an image, doing a resize to 48 pixels tall (by whatever the width is), then grabbing the total image columns and reading each individual pixel to get the color values. All of this information gets written out to a file. The concise version of the code is this:
unsigned char cols, rows;
unsigned char red, green, blue;
short int myCol, myRow;

cols = processedImage.columns();
rows = processedImage.rows();

myFile.write(reinterpret_cast<const char *>(&cols), sizeof(cols));

for (myCol = cols - 1; myCol >= 0; myCol--) {
    for (myRow = rows - 1; myRow >= 0; myRow--) {
        Magick::ColorRGB rgb(processedImage.pixelColor(myCol, myRow));
        red = rgb.red() * 255;
        green = rgb.green() * 255;
        blue = rgb.blue() * 255;
        myFile.write(reinterpret_cast<const char*>(&red), sizeof(red));
        myFile.write(reinterpret_cast<const char*>(&green), sizeof(green));
        myFile.write(reinterpret_cast<const char*>(&blue), sizeof(blue));
    }
}
The problem here is when the image is wider than 255, the maximum an unsigned char can hold. For example, I'm processing a file that's 494x48 pixels.
When I look at the (binary) file created, the first byte, which holds the column count, says 238 (494 wrapped around modulo 256). The next bytes start the RGB data:
0: 238 // Column count
1: 255 // Red
2: 0 // Green
3: 0 // Blue
4: 255 // Red
5: 0 // Green
6: 0 // Blue
So I'm stuck. How can I store the actual columns value as a single line in the resulting file?
What about using more than one byte instead of one? Say you use 4 bytes to store cols, rows, etc.: since one byte can store 0-255, 4 bytes can store 256x256x256x256 values, i.e. 32 bits, which is long enough.
Answering my own question. Thanks to everyone who responded and helped me figure out what I was doing wrong. The issue stems from months of making assumptions based on Arduino code. Arduino has a single INT/UINT type, and I was using that to read in values from the generated files. I assumed that data type was a uint8_t, when in reality I discovered it's a uint16_t. Because it was messing up other parts of the code (namely what position to seek to in a file), I had switched to a char data type, since that only takes up one byte. But in doing so I ran into the roll-over issue mentioned above. So the solution, now that I know more about Arduino's data types:
change the image file processing to use uint16_t for both rows and columns
(since I have access to it) change the reading on the Arduino side to also use uint16_t
change the file seek command to move one more byte after the "header" so the data being read doesn't get mangled.
And ultimately, I've now stopped using Arduino's built-in data types and switched to platform independent data types that are actually what they say they are.
Chalk this up to another learning experience (in my ongoing process of actually learning C++)...

reinterpret_cast and use with CV_MAT_ELEM

I want to put all the data of an 8-bit color input image (the input file is a .bmp) into a new 16-bit Mat array.
I do this because I need to sum up several image patches and then build the mean value for each pixel.
Mat img = imread(Inputfile); //there now should be values between 0 and 255
Addressing the blue value, for example, as follows:
uchar* ptr = img.data + img.step*row;
cout << ptr[n*col] << endl;
only brings up single letters and no numeric values.
cout << static_cast<short>(ptr[n*col]) << endl;
Casting to short or bigger brings up the correct values, but a cast to unsigned char (which is the correct data type, in my opinion) brings up the same letters as without any cast.
A short has 2 bytes, as far as I know, but a color .bmp should have only 1 byte of color information per channel. As I need to sum up at most 81 (< 128, i.e. 7-bit) pixel values, I thought short would be a good target type.
Any help concerning the right way to get simple access to the 8-bit values and use them in 16-bit arrays would be great.
Thank you.
The cast works correctly, but if you send an unsigned char into the output stream, it will be interpreted and printed as a character.
Also note that OpenCV already has functionality to convert a matrix to a different datatype. You can even read your image into a matrix of the preferred datatype:
cv::Mat3s img = cv::imread(...);
Also, using the data pointer directly is discouraged. Read the OpenCV documentation on how to access single pixels or rows in a clean fashion (iterators, operator(), operator[], ...).

Read From Media Buffer - Pointer Arithmetic C++ Syntax

This may well have come up before but the following code is taken from an MSDN example I am modifying. I want to know how I can iterate through the contents of the buffer which contains data about a bitmap and print out the colors. Each pixel is 4 bytes of data so I am assuming the R G B values account for 3 of these bytes, and possibly A is the 4th.
What is the correct C++ syntax for the pointer arithmetic required (ideally inside a loop) that will store the value pointed to during each iteration into a local variable that I can use, e.g. print to the console?
Many thanks
PS. Is this safe? Or is there a safer way to read the contents of an IMFMediaBuffer? I could not find an alternative.
Here is the code:
hr = pSample->ConvertToContiguousBuffer(&pBuffer); // this is the BitmapData
// Converts a sample with multiple buffers into a sample with a single IMFMediaBuffer which we Lock in memory next...
// IMFMediaBuffer represents a block of memory that contains media data
hr = pBuffer->Lock(&pBitmapData, NULL, &cbBitmapData); // pBuffer is IMFMediaBuffer
/* Lock method gives the caller access to the memory in the buffer, for reading or writing:
pBitmapData - receives a pointer to start of buffer
NULL - receives the maximum amount of data that can be written to the buffer. This parameter can be NULL.
cbBitmapData - receives the length of the valid data in the buffer, in bytes. This parameter can be NULL.
*/
I solved the problem myself and thought it best to add the answer here so that it formats correctly and maybe others will benefit from it. In this situation we have 32 bits of data per pixel, and what is great is that we are reading raw from memory, so there is no bitmap header to skip; this is just raw color information.
NOTE: Across these 4 bytes we have (from byte 0 to byte 3) B G R A, which we can verify by using my code:
int x = 0;
while (x < cbBitmapData) {
    Console::Write("B: {0}", (*(pBitmapData + x++)));
    Console::Write("\tG: {0}", (*(pBitmapData + x++)));
    Console::Write("\tR: {0}", (*(pBitmapData + x++)));
    Console::Write("\tA: {0}\n", (*(pBitmapData + x++)));
}
From the output you will see that the A value is 0 for each pixel because there is no concept of transparency or depth here, which is what we expect.
Also, to verify that all we have in the buffer is raw image data and no other data, I used this calculation, which you may also find of use:
Console::Write("no of pixels in buffer: {0} \nexpected no of pixels based on dimensions:{1}", (cbBitmapData/4), (m_format.imageWidthPels * m_format.imageHeightPels) );
Here we divide cbBitmapData by 4 because it is a count of bytes and, as mentioned, each pixel is 4 bytes (one 32-bit DWORD) wide. We compare this to the image width multiplied by its height. They are equal, and thus we have only pixel color information in the buffer.
Hope this helps someone.