Weird QImage compare result - opengl

I want to run a few unit tests on my OpenGL application. That has caused me a few issues in the past (OpenGL rendering differences between two computers), but now I know what I can and cannot do.
Here's a little test I wrote to check the rendering:
QImage display(grabFrameBuffer());
QImage wanted(PATH_TO_RESSOURCES + "/file_010.bmp");
int Qimage_width = display.width();
int Qimage_height = display.height();
for(int i = 1; i < Qimage_width; i++) {
    for(int j = 1; j < Qimage_height; j++) {
        if(QColor(display.pixel(i, j)).name() != QColor(wanted.pixel(i, j)).name()) {
            qDebug() << "different pixel detected" << i << j;
        }
    }
}
QVERIFY(wanted == display);
The QVERIFY() fails, but the message "different pixel detected" is never shown.
If I compare the files with Photoshop (see photo.stackexchange), I can't find any differing pixel. I'm kind of lost.
Edit: I'm using Qt 5.2, and if I manually change one pixel in file_010.bmp, the "different pixel detected" message is displayed.

The QImage equality operator will report that two QImage instances are different if the images have different formats, different sizes, and/or different contents. For the benefit of others who might have trouble understanding why two QImage instances are different, the following function prints out what the differences are (though it may generate a lot of output if there are a lot of differing pixels):
void displayDifferencesInImages(const QImage& image1, const QImage& image2)
{
    if (image1 == image2)
    {
        qDebug("Images are identical");
        return;
    }

    qDebug("Found the following differences:");

    if (image1.size() != image2.size())
    {
        qDebug(" - Image sizes are different (%dx%d vs. %dx%d)",
               image1.width(), image1.height(),
               image2.width(), image2.height());
    }

    if (image1.format() != image2.format())
    {
        qDebug(" - Image formats are different (%d vs. %d)",
               static_cast<int>(image1.format()),
               static_cast<int>(image2.format()));
    }

    int smallestWidth  = qMin(image1.width(),  image2.width());
    int smallestHeight = qMin(image1.height(), image2.height());
    for (int i = 0; i < smallestWidth; ++i)
    {
        for (int j = 0; j < smallestHeight; ++j)
        {
            if (image1.pixel(i, j) != image2.pixel(i, j))
            {
                qDebug(" - Image pixel (%d, %d) is different (%x vs. %x)",
                       i, j, image1.pixel(i, j), image2.pixel(i, j));
            }
        }
    }
}
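If the diagnostic output shows that only the formats differ, which is plausible when comparing a grabFrameBuffer() result against a BMP loaded from disk, one option is to normalise both images to a common format before the pixel-exact comparison. A minimal sketch, assuming the test otherwise stays as in the question:

QImage display(grabFrameBuffer());
QImage wanted(PATH_TO_RESSOURCES + "/file_010.bmp");

// grabFrameBuffer() and a QImage loaded from a file often carry different
// QImage::Format values even when every pixel matches, so convert both to one
// format before comparing.
QVERIFY(wanted.convertToFormat(QImage::Format_RGB32) ==
        display.convertToFormat(QImage::Format_RGB32));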

Related

Applying texture to array of sprites (C++, SFML, tmxlite)

I'm having some difficulty applying a texture to a sprite array. I'm trying to apply the same texture to all of them, so that I can later use setTextRect to decide which part of the texture is used as a tile in my game.
The declaration of the sprite array is in a different class and looks as follows:
sf::Sprite tileSprites[30][40];
Stepping through line by line in the debugger, the stumbling block is the for loop. The window just closes and the program crashes out with no errors.
The game crashes on the line where I try to apply the texture:
tileSprites[idx][idy].setTexture(tileMap);
std::cout << "Creating Map... \n";
// load the image to be used as map/the spritesheet.
if (!tileMap.loadFromFile("Data/Maps/tilemap.png"))
{
std::cout << "Tilemap PNG did not load";
}
//load the generated tilemap
if(!map.load("Data/Maps/test_map.tmx"))
{
std::cout << "TMX map file failed to load";
}
// access the layers in the map
const auto& layers = map.getLayers();
const auto layer = layers[0]->getLayerAs<tmx::TileLayer>();
const auto tiles = layer.getTiles();
int idx = 0;
int idy = 0;
for (int j = 0; j < tiles.size(); ++j)
{
idx = j / 30;
idy = j % 30;
tileSprites[idx][idy].setTexture(tileMap); // <-
}
std::cout << tiles.size();
}
Any advice would be really appreciated.
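One thing worth checking (this assumes the map is 40 tiles wide and 30 tiles tall, i.e. tiles.size() == 1200, which the question doesn't state): with tileSprites declared as [30][40], idx = j / 30 climbs up to 39 and indexes past the first dimension, which by itself is enough to crash. A sketch of indexing that stays in bounds under that assumption:

// Hypothetical indexing, assuming a row-major map that is 40 tiles wide and
// 30 tiles tall, so that both indices stay inside tileSprites[30][40].
for (std::size_t j = 0; j < tiles.size(); ++j)
{
    const int idx = static_cast<int>(j / 40); // row index:    0..29
    const int idy = static_cast<int>(j % 40); // column index: 0..39
    tileSprites[idx][idy].setTexture(tileMap);
}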

change image brightness with c++

I want to change the brightness of an image by only accessing the pixel values, not by using an OpenCV function (e.g. convertTo).
Input: image, num (num is a constant value added for brightness).
Here is my code, and the result looks weird. Is there any problem?
original
result
cv::Mat function(cv::Mat img, int num){
    cv::Mat output;
    output = cv::Mat::zeros(img.rows, img.cols, img.type());
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            for (int c = 0; c < img.channels(); c++)
            {
                output.at<cv::Vec3b>(i, j)[c] = img.at<cv::Vec3b>(i, j)[c] + num;
                if (output.at<cv::Vec3b>(i, j)[c] > 255){
                    output.at<cv::Vec3b>(i, j)[c] = 255;
                }
                else if (output.at<cv::Vec3b>(i, j)[c] < 0)
                {
                    output.at<cv::Vec3b>(i, j)[c] = 0;
                }
            }
        }
    }
    cv::imshow("output", output);
    cv::waitKey(0);
    return img;
}
"not using opencv function": that's somewhat silly, since your code is already using OpenCV's data structures. Trying to do so, you have also reinvented the wheel, albeit a slightly square one...
Check for overflow before assigning to output.
Yes, that's the problem. The correct way to do it would be to assign the sum to something larger than uchar, and then check.
The branch else if (output.at<cv::Vec3b>(i, j)[c] < 0) will never be taken; try to understand why.
But please note that your whole code (a triple loop!) could be rewritten as a simple:
Mat output = img + Scalar::all(num);
(faster, safer, and this will also saturate correctly!)
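For reference, a minimal sketch of what the comments describe, doing the addition in an int and clamping with cv::saturate_cast before writing back (the function name and the CV_8UC3 assumption are mine, not from the question):

#include <opencv2/opencv.hpp>

cv::Mat adjustBrightness(const cv::Mat& img, int num)
{
    CV_Assert(img.type() == CV_8UC3); // assumes a 3-channel, 8-bit image
    cv::Mat output = cv::Mat::zeros(img.size(), img.type());
    for (int i = 0; i < img.rows; i++)
    {
        for (int j = 0; j < img.cols; j++)
        {
            for (int c = 0; c < img.channels(); c++)
            {
                // Do the arithmetic in an int so it cannot wrap around in a
                // uchar, then clamp to [0, 255] when writing back.
                const int sum = img.at<cv::Vec3b>(i, j)[c] + num;
                output.at<cv::Vec3b>(i, j)[c] = cv::saturate_cast<uchar>(sum);
            }
        }
    }
    return output;
}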

Why does imwrite on a BMP image get stuck / not return?

I am reading a bitmap file from disk and write a copy back to disk after some manipulation, also writing a copy of the original file. The bitmaps are relatively small, with a resolution of 31 x 31 pixels.
What I see is that with a resolution of 30 x 30 pixels, cv::imwrite correctly writes out the files; however, if I go for a resolution of 31 x 31 pixels, cv::imwrite just gets stuck and does not return. This is happening in the same directories.
<...>
image = cv::imread(imageName, IMREAD_GRAYSCALE); // Read the file
if( image.empty() )                              // Check for invalid input
{
    cout << "Could not open or find the image" << std::endl ;
    return -1;
}
Mat image_flip (width,height,CV_8U);
int8_t pixel_8b;
for (int i=0; i< width; i++){
    for (int j=0; j < height; j++){
        pixel_8b= image.at<int8_t>(i,j);
        image_flip.at<int8_t>(width-i,j) = pixel_8b;
    }
}
cout << "Writing files" << endl;
result=cv::imwrite("./output_flip.bmp", image_flip);
cout << result << endl;
return 0;
In the good case I get the file output_flip.bmp written to the disk and result is displayed. In the bad case of being stuck the last thing I see is "Writing files" and then nothing anymore. I can switch back and forth between the good and the bad case by just resizing the input image.
Any ideas how to solve that issue?
As already discussed in the comments, you didn't provide a minimal reproducible example (MRE). So, I derived the following MRE from your code, because I wanted to point out several things (and wondered how your code could work at all):
#include <iostream>
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat image = cv::imread("path/to/your/image.png", cv::IMREAD_GRAYSCALE);
    // cv::resize(image, image, cv::Size(30, 30));

    cv::Mat image_flip(image.size().height, image.size().width, CV_8U);
    for (int i = 0; i < image.size().width; i++)
    {
        for (int j = 0; j < image.size().height; j++)
        {
            const uint8_t pixel_8b = image.at<uint8_t>(j, i);
            image_flip.at<uint8_t>(j, image.size().width - 1 - i) = pixel_8b;
        }
    }

    std::cout << "Writing files" << std::endl;
    const bool result = cv::imwrite("./output_flip.bmp", image_flip);
    std::cout << result << std::endl;

    return 0;
}
For a single-channel, 8-bit image (CV_8U), use uint8_t when accessing single pixels.
When using .at, please notice that the syntax is .at(y, x). For square images the two orderings have the same valid range, but in general it's a common source of errors.
Accessing .at(j, width - i) MUST fail for i = 0 if width = image.size().width, since the last valid index of the image is width - 1.
After correcting these issues, I could run your code without problems for larger images, as well as for images resized to 30 x 30 or 31 x 31. So, please have a look whether you can resolve your issue(s) by modifying your code accordingly.
(I'm aware that the actual issue as stated in the question (hanging imwrite) is not addressed at all in my answer, but as I said, I couldn't even run the provided code in the first place...)
Hope that helps!

How do you parallelize the flipping of an image with Pthreads?

I'm new to Pthreads and C++ and trying to parallelize an image-flipping program. Obviously it isn't working. I'm told I need to port some code from an Image class, but I'm not really sure what porting means. I just copied and pasted the code, but I guess that's wrong.
I get the general idea: allocate the workload, initialize the threads, create the threads, join the threads, and define a callback function.
I'm not totally sure what cells_per_thread should be. I'm pretty sure it should be the image width * height / threads. Does that seem correct?
I'm getting multiple errors when compiling with cmake.
It says m_thread_number, getWidth, getHeight, getPixel, and temp are not defined in the scope. I assume that's because the Image class code isn't ported?
PthreadImage.cxx
//Declare a callback function for horizontal flip
void* H_flip_callback_function(void* aThreadData);

PthreadImage PthreadImage::flipHorizontally() const
{
    if (m_thread_number == 0 || m_thread_number == 1)
    {
        return PthreadImage(Image::flipHorizontally(), m_thread_number);
    }
    else
    {
        PthreadImage temp(getWidth(), getHeight(), m_thread_number);

        //Workload allocation
        //Create a vector of ThreadData, which is defined at the top of the class as struct ThreadData. Pass in the number of threads.
        vector<ThreadData> p_thread_data(m_thread_number);

        //Create an integer to hold the last element. Initialize it as -1.
        int last_element = -1;

        //Create an unsigned int to hold how many cells we need per thread. For the image we want the width and height divided by the number of threads.
        unsigned int cells_per_thread = getHeight() * getWidth() / m_thread_number;

        //Next create a variable to hold the remainder of the division.
        unsigned int remainder = getHeight() * getWidth() % m_thread_number;

        //Print the number of cells per thread to the console
        cout << "Default number for cells per thread: " << cells_per_thread << endl;

        //Initialize the threads with a for loop to iterate through each thread and populate it
        for (int i = 0; i < m_thread_number; i++)
        {
            //Thread ids correspond with the for loop index values.
            p_thread_data[i].thread_id = i;
            //Start is last_element + 1, i.e. -1 + 1 = 0.
            p_thread_data[i].start_id = ++last_element;
            p_thread_data[i].end_id = last_element + cells_per_thread - 1;
            p_thread_data[i].input = this;
            p_thread_data[i].output = &temp;

            //If the remainder is > 0, add 1 to the end and remove 1 from the remainder.
            if (remainder > 0)
            {
                p_thread_data[i].end_id++;
                --remainder;
            }

            //Update last_element to the end of this thread's range.
            last_element = p_thread_data[i].end_id;

            //Print to the console which element each thread starts and ends on
            cout << "Thread[" << i << "] starts with " << p_thread_data[i].start_id << " and stops on " << p_thread_data[i].end_id << endl;
        }

        //Create the threads with another for loop
        for (int i = 0; i < m_thread_number; i++)
        {
            pthread_create(&p_thread_data[i].thread_id, NULL, H_flip_callback_function, &p_thread_data[i]);
        }

        //Wait for each thread to complete
        for (int i = 0; i < m_thread_number; i++)
        {
            pthread_join(p_thread_data[i].thread_id, NULL);
        }

        return temp;
    }
}
Callback function
//Define the callback function for horizontal flip
void* H_flip_callback_function(void* aThreadData)
{
    //Convert void* to ThreadData
    ThreadData* p_thread_data = static_cast<ThreadData*>(aThreadData);

    int tempHeight = temp(getHeight());
    int tempWidth = temp(getWidth());

    for (int i = p_thread_data->start_id; i <= p_thread_data->end_id; i++)
    {
        // Process every row of the image
        for (unsigned int j = 0; j < m_height; ++j)
        {
            // Process every column of the image
            for (unsigned int i = 0; i < m_width / 2; ++i)
            {
                (*(p_thread_data->output))(i, j) = getPixel(m_width - i - 1, j);
                (*(p_thread_data->output))(m_width - i - 1, j) = getPixel(i, j);
            }
        }
    }
}
Image class
#include <sstream>   // Header file for stringstream
#include <fstream>   // Header file for filestream
#include <algorithm> // Header file for min/max/fill
#include <numeric>   // Header file for accumulate
#include <cmath>     // Header file for abs and pow
#include <vector>

#include "Image.h"

//-----------------
Image::Image():
//-----------------
    m_width(0),
    m_height(0)
//-----------------
{}

//----------------------------------
Image::Image(const Image& anImage):
//----------------------------------
    m_width(anImage.m_width),
    m_height(anImage.m_height),
    m_p_image(anImage.m_p_image)
//----------------------------------
{}
Image class code to be ported
//-----------------------------------
Image Image::flipHorizontally() const
//-----------------------------------
{
    // Create an image of the right size
    Image temp(getWidth(), getHeight());

    // Process every row of the image
    for (unsigned int j = 0; j < m_height; ++j)
    {
        // Process every column of the image
        for (unsigned int i = 0; i < m_width / 2; ++i)
        {
            temp(i, j) = getPixel(m_width - i - 1, j);
            temp(m_width - i - 1, j) = getPixel(i, j);
        }
    }

    return temp;
}
I feel like it's pretty close. Any help greatly appreciated!
EDIT
OK, so this is the correct code for anyone wasting their time on this.
There were obviously a fair few things wrong.
I don't know why there were 3 for loops. There should be 2: one for rows and one for columns.
cells_per_thread should really be rows per thread (rows / threads), as #Larry B suggested, not ALL the pixels per thread.
You can use -> to access members through a pointer, e.g. setPixel(), getPixel(), etc. Who knew that!?
There was a data structure that was pretty important for you guys, but I forgot to include it:
struct ThreadData
{
    pthread_t thread_id;
    unsigned int start_id;
    unsigned int end_id;
    const Image* input;
    Image* output;
};
Correct Callback
void* H_flip_callback_function(void* aThreadData)
{
    //Convert void* to ThreadData
    ThreadData* p_thread_data = static_cast<ThreadData*>(aThreadData);

    int width = p_thread_data->input->getWidth();

    // Process every row assigned to this thread
    for (unsigned int j = p_thread_data->start_id; j <= p_thread_data->end_id; ++j)
    {
        // Process every column of the image
        for (unsigned int i = 0; i < width / 2; ++i)
        {
            p_thread_data->output->setPixel(i, j, p_thread_data->input->getPixel(width - i - 1, j));
            p_thread_data->output->setPixel(width - i - 1, j, p_thread_data->input->getPixel(i, j));
        }
    }

    return 0;
}
So now this code compiles and flips.
Thanks!
The general strategy for porting single-threaded code to a multi-threaded version is essentially rewriting the existing code to divide the work into self-contained units of work that you can hand off to a thread for execution.
With that in mind, I don't agree with your implementation of H_flip_callback_function:
void* H_flip_callback_function(void* aThreadData)
{
    //convert void to Thread data
    ThreadData* p_thread_data = static_cast<ThreadData*>(aThreadData);

    // Create an image of the right size
    PthreadImage temp(getWidth(), getHeight(), m_thread_number);
    int tempHeight = temp(getHeight());
    int tempWidth = temp(getWidth());

    for (int i = p_thread_data->start_id; i <= p_thread_data->end_id; i++)
    {
        // Process every row of the image
        for (unsigned int j = 0; j < tempHeight; ++j)
        {
            // Process every column of the image
            for (unsigned int i = 0; i < tempWidth / 2; ++i)
            {
                temp(i, j) = getPixel(tempWidth - i - 1, j);
                temp(tempWidth - i - 1, j) = getPixel(i, j);
            }
        }
    }
}
At face value, it looks like all your threads will be operating on the whole image. If this is the case, there is no real difference between your single and multi-thread version as you're just doing the same work multiple times in the multi-thread version.
I would argue that the smallest self-contained unit of work would be to horizontally flip a single row of the image. However, if you have fewer threads than rows, you could allocate (num rows / num threads) rows to each thread. Each thread would then flip the rows assigned to it, and the main thread would collect the results and assemble the final image, as sketched below.
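A minimal sketch of that row allocation (the RowRange struct and allocateRows name are illustrative, not taken from the question's code), assuming the thread count does not exceed the number of rows:

#include <vector>

struct RowRange
{
    unsigned int start_id; // first row this thread flips
    unsigned int end_id;   // last row this thread flips
};

// Give each thread a contiguous block of rows; the first `remainder` threads
// get one extra row so that every row is covered exactly once.
std::vector<RowRange> allocateRows(unsigned int image_height, unsigned int thread_number)
{
    std::vector<RowRange> ranges(thread_number);
    const unsigned int rows_per_thread = image_height / thread_number;
    unsigned int remainder = image_height % thread_number;
    unsigned int next_row = 0;
    for (unsigned int t = 0; t < thread_number; ++t)
    {
        unsigned int count = rows_per_thread;
        if (remainder > 0)
        {
            ++count;
            --remainder;
        }
        ranges[t].start_id = next_row;
        ranges[t].end_id = next_row + count - 1;
        next_row += count;
    }
    return ranges;
}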
With regards to your build warnings and errors, you'll have to provide the complete source code, build settings, environment, etc.

OpenCV C++ multithreading speedups

For the following code, here is a bit of context.
Mat img0; // 1280x960 grayscale
--
timer.start();
for (int i = 0; i < img0.rows; i++)
{
    vector<double> v;
    uchar* p = img0.ptr<uchar>(i);
    for (int j = 0; j < img0.cols; ++j)
    {
        v.push_back(p[j]);
    }
}
cout << "Single thread " << timer.end() << endl;
and
timer.start();
concurrency::parallel_for(0, img0.rows, [&img0](int i) {
    vector<double> v;
    uchar* p = img0.ptr<uchar>(i);
    for (int j = 0; j < img0.cols; ++j)
    {
        v.push_back(p[j]);
    }
});
cout << "Multi thread " << timer.end() << endl;
The result:
Single thread 0.0458856
Multi thread 0.0329856
The speedup is hardly noticeable.
My processor is Intel i5 3.10 GHz
RAM 8 GB DDR3
EDIT
I tried also a slightly different approach.
vector<Mat> imgs = split(img0, 2,1); // `split` is my custom function that, in this case, splits `img0` into two images, its left and right half
--
timer.start();
concurrency::parallel_for(0, (int)imgs.size(), [imgs](int i) {
    Mat img = imgs[i];
    vector<double> v;
    for (int row = 0; row < img.rows; row++)
    {
        uchar* p = img.ptr<uchar>(row);
        for (int col = 0; col < img.cols; ++col)
        {
            v.push_back(p[col]);
        }
    }
});
cout << " Multi thread Sectored " << timer.end() << endl;
And I get much better result:
Multi thread Sectored 0.0232881
So, it looks like I was creating 960 threads or something when I ran
parallel_for(0, img0.rows, ...
And that didn't work well.
(I must add that Kenney's comment is correct. Do not put too much weight on the specific numbers I stated here; when measuring small intervals such as these, there are high variations. But in general, what I wrote in the edit about splitting the image in half improved performance compared to the old approach.)
I think your problem is that you are limited by memory bandwidth. Your second snippet is basically reading from the whole of the image, and that has got to come out of main memory into cache. (Or out of L2 cache into L1 cache).
You need to arrange your code so that all four cores are working on the same bit of memory at once (I presume you are not actually trying to optimize this code - it is just a simple example).
Edit: Insert crucial "not" in last parenthetical remark.
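For reference, a minimal sketch that generalises the edit's split-the-image idea to a fixed number of contiguous row blocks handed to parallel_for, instead of one task per row (the block count of 4 and the helper name are assumptions, not from the original post):

#include <opencv2/opencv.hpp>
#include <ppl.h>
#include <algorithm>
#include <vector>

void copyRowsInChunks(const cv::Mat& img0)
{
    const int chunks = 4; // roughly one block per core
    const int rowsPerChunk = (img0.rows + chunks - 1) / chunks;
    concurrency::parallel_for(0, chunks, [&img0, rowsPerChunk](int c) {
        std::vector<double> v;
        v.reserve(static_cast<size_t>(rowsPerChunk) * img0.cols); // avoid repeated reallocation
        const int rowEnd = std::min((c + 1) * rowsPerChunk, img0.rows);
        for (int i = c * rowsPerChunk; i < rowEnd; ++i)
        {
            const uchar* p = img0.ptr<uchar>(i);
            for (int j = 0; j < img0.cols; ++j)
                v.push_back(p[j]);
        }
    });
}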