Qt wrongly thinks QImage is loaded properly - c++

I am having an issue when trying to store a sequence of image data with Qt.
Here is a piece of code that shows the problem:
#include <vector>
#include <iostream>
#include <QImage>
...
const int nFrames = 1000;
std::vector<int> sizes(nFrames);
std::vector<uchar*> images(nFrames);
for (int k = 0; k < nFrames; k++)
{
    QImage *img = new QImage("/.../sample.png");
    uchar *data = img->bits();
    sizes.at(k) = img->width() * img->height();
    images.at(k) = data;
}
std::cout << "Data loaded \"successfully\"." << std::endl;
for (int k = 0; k < nFrames; k++)
{
    std::cout << k << ": " << (int) (images.at(k)[0]) << std::endl;
}
In the first loop, the program loads QImage objects and puts the bitmaps in the images vector of pointers. In the second loop, we just read a pixel of each frame.
The problem is that the program proceeds through the first loop without complaining, even if the heap memory becomes full. As a result, I get a crash in the second loop, as shown by the output of the program:
Data loaded "successfully".
0: 128
1: 128
2: 128
...
192: 128
[crash before hitting 1000]
To reproduce the problem, you can use the grayscale image below, and you may need to change the value of nFrames, depending on how much memory you have.
My question is: How can I load the data in the first loop in a way that would allow me to detect if the memory becomes full? I don't necessarily need to keep the QImage objects in memory, but only the data in the images vector.

First of all, the first loop has a memory leak, because the img objects are never deleted.
From Qt documentation:
uchar * QImage::bits()
Returns a pointer to the first pixel data. This
is equivalent to scanLine(0).
Note that QImage uses implicit data sharing. This function performs a
deep copy of the shared pixel data, thus ensuring that this QImage is
the only one using the current return value.
So you can safely delete img at the end of the loop:
....
    images.at(k) = data;
    delete img;
}
To detect when memory runs out, note that a plain new never returns a null pointer on failure; it throws std::bad_alloc. You can catch that exception, or use the non-throwing form, which does return null:
QImage *img = new (std::nothrow) QImage("/.../sample.png");
if (!img) {
    // out of memory
}
Note also that if Qt itself fails to allocate the pixel data, the constructor yields a null image, which you can check with img->isNull().

Partial answer:
The first loop can be replaced by the following:
for (int k = 0; k < nFrames; k++)
{
    QImage *img = new QImage("/.../sample.png");
    sizes.at(k) = img->width() * img->height();
    uchar *data = new uchar[sizes.at(k)];
    std::copy(img->bits(), img->bits() + sizes.at(k), data);
    images.at(k) = data;
    delete img;
}
This stores in images.at(k) a copy of the data that img->bits() points to. (This is also what allows deleting the QImage at the end of the loop.) If memory runs out, new uchar[...] throws std::bad_alloc inside the loop, which can be caught. (Note also that width() * height() only equals the byte count for 8-bit formats with no scan-line padding; img->sizeInBytes() is the safe size to copy.)
However, this is not good enough. I suspect possible issues when nFrames is set to a value such that the maximum memory taken by the program is close to the limit (or when another program frees memory while this is running). My concern is that I still have no guarantee that img->bits() returns a pointer to accurate data.

Related

Why am I getting errors freeing memory allocated by apriltags image_u8_create() and stored in 2D vector of pointers in C++?

I am attempting to detect apriltags in image streams. Because the images come in from multiple sources at a high rate, detecting the tags in the image callback will take too long, causing images to be dropped due to missed callbacks. I have decided to store images for a few seconds at a time, and run detection on the images afterwards. Between each run of images, I would like to free all the used memory, as I will need to store multiple GB of data for each ~5 second run and images/framerates/sources change between runs.
I am using the image_u8_t type that comes with the apriltag library compiled from source:
typedef struct image_u8 image_u8_t;
struct image_u8
{
    const int32_t width;
    const int32_t height;
    const int32_t stride;
    uint8_t *buf;
};
and which has create() and destroy() functions (create() is a wrapper that fills in some default values for the shown image_u8_create_stride(), namely stride = width):
image_u8_t *image_u8_create_stride(unsigned int width, unsigned int height, unsigned int stride)
{
    uint8_t *buf = calloc(height*stride, sizeof(uint8_t));
    // const initializer
    image_u8_t tmp = { .width = width, .height = height, .stride = stride, .buf = buf };
    image_u8_t *im = calloc(1, sizeof(image_u8_t));
    memcpy(im, &tmp, sizeof(image_u8_t));
    return im;
}
void image_u8_destroy(image_u8_t *im)
{
    if (!im)
        return;
    free(im->buf);
    free(im);
}
The first run of images always goes as expected, however, on the second, I consistently get errors freeing memory. It seems that although the vectors report a size=0 after using clear(), it is retaining values at the front of the vector and iterating through them, attempting to double free memory. A minimum example of code that also shows the error is below:
#include <iostream>
#include <apriltag/apriltag.h>
#include <vector>
using namespace std;

vector<vector<image_u8_t *>> images(2);

void create_images(int i){
    image_u8_t *img;
    img = image_u8_create(1920, 1200);
    images.at(i).push_back(img);
}

int main() {
    char c;
    for (int i = 0; i < 3; i++){
        for (int j = 0; j < 2; j++){
            create_images(j);
        }
    }
    // This works fine
    for (auto vec : images){
        for (auto img : vec){
            image_u8_destroy(img);
        }
        vec.clear();
    }
    // Just to pause and inspect output
    cin >> c;
    for (int i = 0; i < 3; i++){
        for (int j = 0; j < 2; j++){
            create_images(j);
        }
    }
    // This causes a segfault/free() error
    for (auto vec : images){
        for (auto img : vec){
            image_u8_destroy(img);
        }
        vec.clear();
    }
}
Printing the pointer to be freed (im->buf) shows what seems to be happening:
Freeing image buffer at **0x7ff8cf22e010**
Freeing image buffer at 0x7ff8cedc8010
Freeing image buffer at 0x7ff8ce962010
Freeing image buffer at 0x7ff8ceffb010
Freeing image buffer at 0x7ff8ceb95010
Freeing image buffer at 0x7ff8ce72f010
c
Freeing image buffer at **0x7ff8cf22e010**
Segmentation fault (core dumped)
and the output from my real program shows a more specific but similar problem:
img u8 vector length: 92
Destroyed all images and cleared vector. New size = 0
Destroyed all images and cleared vector. New size = 0
Destroyed all images and cleared vector. New size = 0
Destroyed all images and cleared vector. New size = 0
Freeing image buffer at 0x7f2834000b80
free(): invalid pointer
Can anyone explain if I am misunderstanding how vectors work, the clear() function specifically, or point me towards where I might be causing this issue?
Editing to add output that shows even after clearing and having size() return 0, on the next push_back()s, the old values seem to reappear in the vector:
Vector size before 1st clear: 3
Freeing image buffer at 0x7f5e34fe6010
Freeing image buffer at 0x7f5e34b80010
Freeing image buffer at 0x7f5e3471a010
Vector size after 1st clear: 0
Vector size before 1st clear: 3
Freeing image buffer at 0x7f5e34db3010
Freeing image buffer at 0x7f5e3494d010
Freeing image buffer at 0x7f5e344e7010
Vector size after 1st clear: 0
c
Vector size before 2nd clear: 6
Freeing image buffer at 0x7f5e34fe6010
Segmentation fault (core dumped)
// This works fine
for (auto vec : images){
    for (auto img : vec){
        image_u8_destroy(img);
    }
    vec.clear();
}
It LOOKS like it works fine, but auto vec in for (auto vec : images) is a value, not a reference. It makes a copy of each vector in images, which means vec.clear(); cleared a copy. The original in images still contains pointers to the now-destroyed image_u8 instances.
If I'd been paying attention to the
Vector size before 2nd clear: 6
diagnostic, I'd have figured this out an hour ago when the question was first asked. Good debugging: the asker was looking at the right stuff and just missed a detail that surprises a lot of people. In C++, unless you ask for a reference (or you're passing around raw arrays), you get a value.
Solution:
// This REALLY works fine
for (auto &vec : images){
    for (auto &img : vec){ // optional; you probably won't save much, since img is already a pointer
        image_u8_destroy(img);
    }
    vec.clear();
}

cvCreateMat memory leak (OpenCV)

Alright; so I'm finding an odd memory leak when attempting to use cvCreateMat to make room for my soon-to-be-filled mat. Below is what I am attempting to do; adaptiveThreshold didn't like it when I put the 3-channel image in, so I wanted to split it into its separate channels. It works! But every time we go through this particular function we gain another ~3MB of memory. Since this function is expected to run a few hundred times, this becomes a rather noticeable problem.
So here's the code:
void adaptiveColorThreshold(Mat *inputShot, int adaptiveMethod, int blockSize, int cSubtraction)
{
    Mat newInputShot = (*inputShot).clone();
    Mat inputBlue = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputGreen = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputRed = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    for(int rows = 0; rows < newInputShot.rows; rows++)
    {
        for(int cols = 0; cols < newInputShot.cols; cols++)
        {
            inputBlue.data[inputBlue.step[0]*rows + inputBlue.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 0];
            inputGreen.data[inputGreen.step[0]*rows + inputGreen.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 1];
            inputRed.data[inputRed.step[0]*rows + inputRed.step[1]*cols] = newInputShot.data[newInputShot.step[0]*rows + newInputShot.step[1]*cols + 2];
        }
    }
    adaptiveThreshold(inputBlue, inputBlue, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    adaptiveThreshold(inputGreen, inputGreen, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    adaptiveThreshold(inputRed, inputRed, 255, adaptiveMethod, THRESH_BINARY, blockSize, cSubtraction);
    for(int rows = 0; rows < (*inputShot).rows; rows++)
    {
        for(int cols = 0; cols < (*inputShot).cols; cols++)
        {
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 0] = inputBlue.data[inputBlue.step[0]*rows + inputBlue.step[1]*cols];
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 1] = inputGreen.data[inputGreen.step[0]*rows + inputGreen.step[1]*cols];
            (*inputShot).data[(*inputShot).step[0]*rows + (*inputShot).step[1]*cols + 2] = inputRed.data[inputRed.step[0]*rows + inputRed.step[1]*cols];
        }
    }
    inputBlue.release();
    inputGreen.release();
    inputRed.release();
    newInputShot.release();
    return;
}
So going through it one line at a time...
newInputShot adds ~3MB
inputBlue adds ~1MB
inputGreen adds ~1MB
and inputRed adds ~1MB
So far, so good - need memory to hold the data. newInputShot gets its data right off the bat, but inputRGB need to get their data from newInputShot - so we just allocate the space to be filled in the upcoming for-loop, which (as expected) allocates no new memory, just fills in the space already claimed.
The adaptiveThresholds don't add any new memory either, since they're simply supposed to overwrite what is already there, and the next for-loop writes straight to inputShot; no new memory needed there. So now we get around to (manually) releasing the memory.
Releasing inputBlue frees up 0MB
Releasing inputGreen frees up 0MB
Releasing inputRed frees up 0MB
Releasing newInputShot frees up ~3MB
Now, according to the OpenCV documentation site: "OpenCV handles all the memory automatically."
First of all, std::vector, Mat, and other data structures used by the
functions and methods have destructors that deallocate the underlying
memory buffers when needed. This means that the destructors do not
always deallocate the buffers as in case of Mat. They take into
account possible data sharing. A destructor decrements the reference
counter associated with the matrix data buffer. The buffer is
deallocated if and only if the reference counter reaches zero, that
is, when no other structures refer to the same buffer. Similarly, when
a Mat instance is copied, no actual data is really copied. Instead,
the reference counter is incremented to memorize that there is another
owner of the same data. There is also the Mat::clone method that
creates a full copy of the matrix data.
TLDR the quote: Related mats get clumped together in a super-mat that gets released all at once when nothing is left using it.
This is why I created newInputShot as a clone (which doesn't get clumped with inputShot): so I could see if this was occurring with the inputRGBs. Well... nope! The inputRGBs are their own beast that refuses to be deallocated. I know it isn't any of the intermediate functions, because this snippet does the exact same thing:
void adaptiveColorThreshold(Mat *inputShot, int adaptiveMethod, int blockSize, int cSubtraction)
{
    Mat newInputShot = (*inputShot).clone();
    Mat inputBlue = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputGreen = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    Mat inputRed = cvCreateMat(newInputShot.rows, newInputShot.cols, CV_8UC1);
    inputBlue.release();
    inputGreen.release();
    inputRed.release();
    newInputShot.release();
    return;
}
That's about as simple as it gets. Allocate - fail to Deallocate. So what's going on with cvCreateMat?
I would suggest not using cvCreateMat; you don't need to clone the original Mat either.
Look into the split() and merge() functions. They will do the dirty work for you and will return Mats that handle memory for you. I don't have OpenCV installed right now, so I can't test any of the code, but I'm sure that's the route you want to take.
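For illustration, here is a rough sketch of what that could look like with cv::split() and cv::merge(); this is untested (as noted above) and assumes the standard OpenCV C++ API:

```cpp
#include <opencv2/imgproc.hpp>
#include <vector>

// Sketch only: split the image into single-channel Mats, threshold each,
// and merge back. All Mats here are reference-counted and free themselves
// when they go out of scope, so there is no manual release() anywhere.
void adaptiveColorThreshold(cv::Mat &inputShot, int adaptiveMethod,
                            int blockSize, int cSubtraction)
{
    std::vector<cv::Mat> channels; // B, G, R
    cv::split(inputShot, channels);
    for (auto &ch : channels)
        cv::adaptiveThreshold(ch, ch, 255, adaptiveMethod,
                              cv::THRESH_BINARY, blockSize, cSubtraction);
    cv::merge(channels, inputShot);
}
```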

Add 1 to vector<unsigned char> value - Histogram in C++

I guess it's such an easy question (I'm coming from Java), but I can't figure out how it works.
I simply want to increment a vector element by one. The reason for this is that I want to compute a histogram of image values. But whatever I try, I can only manage to assign a value to the vector element, not increment it by one!
This is my histogram function:
void histogram(unsigned char** image, int height,
               int width, vector<unsigned char>& histogramArray) {
    for (int i = 0; i < width; i++) {
        for (int j = 0; j < height; j++) {
            // histogramArray[1] = (int)histogramArray[1] + (int)1;
            // add histogram position by one if greylevel occurred
            histogramArray[(int)image[i][j]]++;
        }
    }
    // display output
    for (int i = 0; i < 256; i++) {
        cout << "Position: " << i << endl;
        cout << "Histogram Value: " << (int)histogramArray[i] << endl;
    }
}
But whatever I try to add one to the histogramArray position, it leads to just 0 in the output. I'm only allowed to assign concrete values like:
histogramArray[1] = 2;
Is there any simple and easy way? I thought iterators are hopefully not necessary at this point, because I know the exact index position where I want to increment something.
EDIT:
I'm so sorry, I should have been more precise with my question; thank you for your help so far! The code above is working, but it shows a different mean value for the histogram (a difference of around 90) than it should. Also, the histogram values are way different than in a graphics program, even though the image values are exactly the same! That's why I investigated the function and found out that if I set the histogram to zeros and then just try to increase one element, nothing happens! This is the commented code above:
for (int i = 0; i < width; i++) {
    for (int j = 0; j < height; j++) {
        histogramArray[1]++;
        // add histogram position by one if greylevel occurred
        // histogramArray[(int)image[i][j]]++;
    }
}
So the position 1 remains 0, instead of having the value height*width. Because of this, I think the correct calculation histogramArray[image[i][j]]++; is also not working properly.
Do you have any explanation for this? This was my main question, I'm sorry.
Just for completeness, this is my mean function for the histogram:
unsigned char meanHistogram(vector<unsigned char>& histogram) {
    int allOccurences = 0;
    int allValues = 0;
    for (int i = 0; i < 256; i++) {
        allOccurences += histogram[i] * i;
        allValues += histogram[i];
    }
    return (allOccurences / (float) allValues) + 0.5f;
}
And I initialize the image like this:
unsigned char** image = new unsigned char*[width];
for (int i = 0; i < width; i++) {
    image[i] = new unsigned char[height];
}
But there shouldn't be any problem with the initialization code, since all other computations work perfectly and I am able to manipulate and save the original image. But it's true that I should change width and height; since I had only square images, it didn't matter so far.
The Histogram is created like this and then the function is called like that:
vector<unsigned char> histogramArray(256);
histogram(array, adaptedHeight, adaptedWidth, histogramArray);
So do you have any clue why histogramArray[1]++; doesn't increase my histogram? histogramArray[1] remains 0 all the time! histogramArray[1] = 2; works perfectly. Also, histogramArray[(int)image[i][j]]++; seems to calculate something, but as I said, I think it's calculating wrongly.
I appreciate any help very much! The reason why I used a 2D Array is simply because it is asked for. I like the 1D version also much more, because it's way simpler!
You see, the current problem in your code is not incrementing a value versus assigning to it; it's the way you index your image. The way you've written your histogram function and the image access part puts very fine restrictions on how you need to allocate your images for this code to work.
For example, assuming your histogram function is as you've written it above, none of these image allocation strategies will work: (I've used char instead of unsigned char for brevity.)
char image [width * height]; // Obvious; "char[]" != "char **"
char * image = new char [width * height]; // "char*" != "char **"
char image [height][width]; // Most surprisingly, this won't work either.
The reason why the third case won't work is tough to explain simply. Suffice it to say that a 2D array like this will not implicitly decay into a pointer to pointer, and if it did, it would be meaningless. Contrary to what you might read in some books or hear from some people, in C/C++, arrays and pointers are not the same thing!
Anyway, for your histogram function to work correctly, you have to allocate your image like this:
char** image = new char* [height];
for (int i = 0; i < height; ++i)
    image[i] = new char [width];
Now you can fill the image, for example:
for (int i = 0; i < height; ++i)
    for (int j = 0; j < width; ++j)
        image[i][j] = rand() % 256; // Or whatever...
On an image allocated like this, you can call your histogram function and it will work. After you're done with this image, you have to free it like this:
for (int i = 0; i < height; ++i)
    delete[] image[i];
delete[] image;
For now, that's enough about allocation. I'll come back to it later.
In addition to the above, it is vital to note the order of iteration over your image. The way you've written it, you iterate over your columns on the outside, and your inner loop walks over the rows. Most (all?) image file formats and many (most?) image processing applications I've seen do it the other way around. The memory allocations I've shown above also assume that the first index is for the row, and the second is for the column. I suggest you do this too, unless you've very good reasons not to.
No matter which layout you choose for your images (the recommended row-major, or your current column-major), it is an issue that you should always keep in mind.
Now, on to my recommended way of allocating and accessing images and calculating histograms.
I suggest that you allocate and free images like this:
// Allocate:
char * image = new char [height * width];
// Free:
delete[] image;
That's it; no nasty (de)allocation loops, and every image is one contiguous block of memory. When you want to access row i and column j (note which is which) you do it like this:
image[i * width + j] = 42;
char x = image[i * width + j];
And you'd calculate the histogram like this:
void histogram (
    unsigned char * image, int height, int width,
    // Note that the elements here are pixel-counts, not colors!
    vector<unsigned> & histogram
) {
    // Make sure histogram has enough room; you can do this outside as well.
    if (histogram.size() < 256)
        histogram.resize (256, 0);
    int pixels = height * width;
    for (int i = 0; i < pixels; ++i)
        histogram[image[i]]++;
}
I've eliminated the printing code, which should not be there anyway. Note that I've used a single loop to go through the whole image; this is another advantage of allocating a 1D array. Also, for this particular function, it doesn't matter whether your images are row-major or column major, since it doesn't matter in what order we go through the pixels; it only matters that we go through all the pixels and nothing more.
UPDATE: After the question update, I think all of the above discussion is moot! I believe the problem is in the declaration of the histogram vector. It should be a vector of unsigned ints, not single bytes. Your problem is that the vector elements seem to stay at zero when you simplify the code and increment just one element, and are off from the values they need to be when you run the actual code. Well, this could be a symptom of numeric wrap-around. If the number of pixels in your image is a multiple of 256 (e.g. a 32x32 or 1024x1024 image), then it is natural that their count would be 0 mod 256.
I've already alluded to this point in my original answer. If you read my implementation of the histogram function, you see in the signature that I've declared my vector as vector<unsigned> and have put a comment above it that says this vector counts pixels, so its data type should be suitable.
I guess I should have made it bolder and clearer! I hope this solves your problem.

OpenCV Error: insufficient memory, in function call

I have a function looks like this:
void foo(){
    Mat mat(50000, 200, CV_32FC1);
    /* some manipulation using mat */
}
Then after several loops (in each loop, I call foo() once), it gives an error:
OpenCV Error: insufficient memory when allocating (about 1G) memory.
In my understanding, the Mat is local and once foo() returns, it is automatically de-allocated, so I am wondering why it leaks.
And it leaks on some data, but not all of them.
Here is my actual code:
bool VidBOW::readFeatPoints(int sidx, int eidx, cv::Mat &keys, cv::Mat &descs, cv::Mat &codes, int &barrier) {
    // initialize buffers for keys and descriptors
    int num = 50000; /// a large number
    int nDims = 0; /// feature dimensions
    if (featName == "STIP")
        nDims = 162;
    Mat descsBuff(num, nDims, CV_32FC1);
    Mat keysBuff(num, 3, CV_32FC1);
    Mat codesBuff(num, 3000, CV_64FC1);
    // move overlapping codes from a previous window to buffer
    int idxPre = -1;
    int numPre = keys.rows;
    int numMov = 0; /// number of overlapping points to move
    for (int i = 0; i < numPre; ++i) {
        if (keys.at<float>(i, 0) >= sidx) {
            idxPre = i;
            break;
        }
    }
    if (idxPre > 0) {
        numMov = numPre - idxPre;
        keys.rowRange(idxPre, numPre).copyTo(keysBuff.rowRange(0, numMov));
        codes.rowRange(idxPre, numPre).copyTo(codesBuff.rowRange(0, numMov));
    }
    // the starting row in code matrix where new codes from the updated features to add in
    barrier = numMov;
    // read keys and descriptors from feature file
    int count = 0; /// number of new points that are read in buffers
    if (featName == "STIP")
        count = readSTIPFeatPoints(numMov, eidx, keysBuff, descsBuff);
    // update keys, descriptors and codes matrix
    descsBuff.rowRange(0, count).copyTo(descs);
    keysBuff.rowRange(0, numMov+count).copyTo(keys);
    codesBuff.rowRange(0, numMov+count).copyTo(codes);
    // see if reaching the end of a feature file
    bool flag = false;
    if (feof(fpfeat))
        flag = true;
    return flag;
}
You don't post the code that calls your function, so I can't tell whether this is a true memory leak. The Mat objects that you allocate inside readFeatPoints() will be deallocated correctly, so there are no memory leaks that I can see.
You declare Mat codesBuff(num, 3000, CV_64FC1);. With num = 50000, this means you're trying to allocate 1.2 gigabytes of memory (50000 × 3000 × 8 bytes) in one big block. You also copy some of this data to codes with the line:
codesBuff.rowRange(0, numMov+count).copyTo(codes);
If the value of numMov + count changes between iterations, this will cause reallocation of the data buffer in codes. If the value is large enough, you may also be eating up a significant amount of memory that persists across iterations of your loop. Both of these things may lead to heap fragmentation. If at any point there isn't a contiguous 1.2 GB chunk of memory available, an insufficient memory error occurs, which is what you have experienced.

How to fix the insufficient memory error (openCV)

Please help how to handle this problem:
OpenCV Error: Insufficient memory (Failed to allocate 921604 bytes) in
unknown function, file
........\ocv\opencv\modules\core\src\alloc.cpp, line 52
One of my methods uses cv::Mat::clone and pointers.
The code is as follows. There is a timer firing every 100 ms; in the timer event, I call this method:
void DialogApplication::filterhijau(const Mat &image, Mat &result) {
    cv::Mat resultfilter = image.clone();
    int nlhijau = image.rows;
    int nchijau = image.cols*image.channels();
    for(int j=0; j<nlhijau; j++) {
        uchar *data2=resultfilter.ptr<uchar>(j); // address of each line in result
        for(int i=0; i<nchijau; i++) {
            *data2++ = 0;   //element B
            *data2++ = 255; //element G
            *data2++ = 0;   //element R
        }
        // free(data2); //I added this line but the program hung up
    }
    cv::addWeighted(resultfilter,0.3,image,0.5,0,resultfilter);
    result=resultfilter;
}
The clone() method of a cv::Mat performs a hard copy of the data. So the problem is that for each filterhijau() a new image is allocated, and after hundreds of calls to this method your application will have occupied hundreds of MBs (if not GBs), thus throwing the Insufficient Memory error.
It seems like you need to redesign your current approach so it occupies less RAM memory.
I faced this error before; I solved it by reducing the size of the images while reading them, sacrificing some resolution.
It was something like this in Python:
import cv2

images = []

# Open the Video
cap = cv2.VideoCapture(videoName + '.mp4')
i = 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame = cv2.resize(frame, (900, 900))
    # append the frames to the list
    images.append(frame)
    i += 1
cap.release()
N.B. I know it's not the optimal solution to the problem, but it was enough for me.