What is the fastest way to access a pixel in QImage? - c++

I would like to know what is the fastest way to modify a portion of a QImage.
I have this piece of code that has to run at a frequency of 30 Hz. It displays an image through a sort of keyhole: you cannot see the entire image, only the portion inside a circle. The first for-loop erases the previously displayed keyhole portion and the second draws the keyhole at its new position.
for (int i = (prev_y - r_y); i < (prev_y + r_y); i++){
    QRgb *line = (QRgb *)backgrd->scanLine(i);
    for (int j = (prev_x - r_x); j < (prev_x + r_x); j++){
        if ((i >= 0 && i < this->backgrd->height()) && (j >= 0 && j < this->backgrd->width()))
            line[j] = qRgb(0,0,0);
    }
}
prev_x = new_x; prev_y = new_y;
for (int i = (new_y - r_y); i < (new_y + r_y); i++){
    QRgb *line = (QRgb *)backgrd->scanLine(i);
    QRgb *line2 = (QRgb *)this->picture->scanLine(i);
    for (int j = (new_x - r_x); j < (new_x + r_x); j++){
        if ((((new_x - j)*(new_x - j)/(r_x*r_x) + (new_y - i)*(new_y - i)/(r_y*r_y)) <= 1) && (i >= 0) && (i < this->picture->height()) && (j >= 0) && (j < this->picture->width()))
            line[j] = line2[j];
    }
}
this->current_img = this->backgrd;
this->update(); // Display QImage* this->current_img
If I analyse the timestamps of the program, I find a delay in the flow of execution every time this code runs...
Is accessing a pixel in a QImage really that expensive? Am I doing something wrong?
Is there a better alternative to QImage for a Qt program?

How about prerendering your 'keyhole' in an array/QImage and doing a bitwise AND with the source?
original pixel & black => black
original pixel & white => original pixel
You have a lot of conditions in the innermost loop (some can be moved out, though), and the ellipse test with its multiplies and divides looks costly. The keyhole mask can be reused for every frame, so no per-pixel calculations need to be performed.
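A minimal sketch of that idea, assuming a prebuilt full-size QImage mask in the same 32-bit format as the source, white inside the keyhole and black outside; the names mask, picture and result are illustrative, not from the question:

for (int y = 0; y < picture->height(); ++y) {
    const QRgb *src = reinterpret_cast<const QRgb *>(picture->constScanLine(y));
    const QRgb *msk = reinterpret_cast<const QRgb *>(mask.constScanLine(y));
    QRgb *dst       = reinterpret_cast<QRgb *>(result.scanLine(y));
    for (int x = 0; x < picture->width(); ++x)
        dst[x] = src[x] & msk[x];   // white (0xFFFFFFFF) keeps the pixel, black (0xFF000000) gives opaque black
}

In practice you would only loop over the keyhole's bounding rectangle at its current position rather than the whole image.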

You could move some of the conditions at least to the outer loop, and maybe pre-compute some of the terms inside the conditions, though the compiler may do some of this anyway.
Call update() only for the rectangle(s) you modified (see the sketch below).
Where do you get the timestamp? Maybe you lose time somewhere else?
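A rough sketch of the first two suggestions combined, using the variables from the question (the erase loop would be clamped the same way):

const int y0 = qMax(new_y - r_y, 0);
const int y1 = qMin(new_y + r_y, backgrd->height());
const int x0 = qMax(new_x - r_x, 0);
const int x1 = qMin(new_x + r_x, backgrd->width());
for (int i = y0; i < y1; ++i) {
    QRgb *line = (QRgb *)backgrd->scanLine(i);
    QRgb *line2 = (QRgb *)picture->scanLine(i);
    for (int j = x0; j < x1; ++j) {
        // only the ellipse test remains in the inner loop
        if ((new_x - j)*(new_x - j)/(r_x*r_x) + (new_y - i)*(new_y - i)/(r_y*r_y) <= 1)
            line[j] = line2[j];
    }
}
// repaint only the two affected rectangles instead of the whole widget
update(QRect(prev_x - r_x, prev_y - r_y, 2 * r_x, 2 * r_y));
update(QRect(new_x - r_x, new_y - r_y, 2 * r_x, 2 * r_y));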

Actually I figured out that it wasn't pixel access that was slow, but the rendering.
During my tests I had used plain color images, and those render much faster than complex images loaded from file. Further tests showed that it was the rendering that was slow.
The fastest way to render a QImage is first of all to transform it using
public: static QImage QGLWidget::convertToGLFormat(const QImage &img)
The image can then be manipulated quickly (it preserves the bits(), scanLine(), width() and height() functions)
and it can be displayed very fast by OpenGL (no further conversions are necessary):
QPainter painter(this);
glDrawPixels(img.width(), img.height(), GL_RGBA, GL_UNSIGNED_BYTE, img.bits());
painter.end();
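For completeness, a short sketch of the conversion step that precedes the draw call above (this assumes the code lives in a QGLWidget subclass; source is an illustrative name, not from the original code):

QImage glImage = QGLWidget::convertToGLFormat(source); // RGBA, flipped vertically for OpenGL
// glImage.bits(), scanLine(), width() and height() can still be used to modify pixels,
// and glImage.bits() is what gets passed to glDrawPixels() as shown above.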

As far as I know, the fastest way to access the data of a QImage is to use QImage::bits(), which gives you direct access to the image data.
For your problem, a better approach is to do as Bgie suggested: use an array representing the keyhole and do only a bitwise AND operation.
It will also help to choose the correct format for your image: Format_RGB32 and Format_ARGB32_Premultiplied are the fastest. Don't use Format_ARGB32 if you don't need it.
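A minimal sketch of direct access through bits() with one of the fast formats (sizes and names are illustrative):

QImage img(640, 480, QImage::Format_RGB32);
QRgb *pixels = reinterpret_cast<QRgb *>(img.bits()); // contiguous buffer for 32-bit formats
const int count = img.width() * img.height();
for (int n = 0; n < count; ++n)
    pixels[n] = qRgb(0, 0, 0);                       // e.g. clear the whole image to black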

Related

C++ improving palette indexing algorithm

I have a game engine that indexes the colors of some bitmaps, which allows using some of the crazy effects of olde (color strobing etc.). Sadly, the indexing algorithm is neither particularly slow nor particularly fast, but since spritesheets these days are gigantic it really adds up. Currently, loading a single large spritesheet can take 150+ milliseconds, which is an eternity, relatively speaking.
This is the algorithm:
auto& palette = p->pal;    // vector
auto& lookup = p->lookup;  // vector_map
palette.reserve(200);      // There are on average ~100 unique colors
palette.push_back(0);      // Index zero is the blank color
uint32_t lastColor = 0;
uint32_t lastPalette = 0;
for (size_t i = 0; i < pixels; i++)
{
    const auto color = data[i];
    if (color == lastColor)
    {
        data[i] = lastPalette;
        continue;
    }
    else if (color == 0)
    {
        continue;
    }
    uint32_t j = 0;
    const auto& it = lookup.find(color);
    if (it != lookup.end()) {
        j = it->second;
    }
    else
    {
        j = palette.size();
        palette.push_back(color);
        lookup.emplace(color, j);
    }
    lastColor = color;
    // Write the index back to the bitmap:
    // (this is just a GPU texture encoding, don't mind it)
    data[i] = (j & 255) | ((j >> 8) << (8 + 6));
    lastPalette = data[i];
}
The base algorithm is fairly straightforward:
go through each pixel, find or create a palette entry for its color, and write the index back to the image.
Now, can you parallelize this? Probably not. I have tried with OpenMP and regular threads. It simply isn't going to be fast, because no matter how much time you save by processing each portion of the image separately, at the end you need one common set of indexes that applies to the whole image, and those indexes have to be written back to the image. Sadly, finding the unique colors first and then writing back in parallel is also slower than doing it once, sequentially. Makes sense, doesn't it?
A bitset is of no use here. Knowing whether a color exists is useful, but the colors are 32-bit, which makes for 2^32 bits (about 530 MB). In contrast, 24 bits is only ~2 MB, but that would be a micro-optimization and I'm not really looking for that anyway. I need to cut the time by 10x.
So, any ideas? Would it be possible to process 4 or 8 colors at the same time using SSE/AVX?

Dividing an image into tiles in Qt

I have a very big image (31000x26000 pixels). I need to create tiles of a given size from this image and store them. I'm trying to use Qt's QImageReader, but I've noticed that after calling setClipRect for the second time, it can't read from the image.
The code I have so far works, but is very slow (the first row takes 7 seconds, the second 14, the third 21, and so on...).
for (int i = 0; i < tilesPerRow; i++){
    for (int j = 0; j < tilesPerCol; j++){
        QImageReader reader(curImage);
        reader.setClipRect(QRect(j*(tileSize-OVERLAP), i*(tileSize-OVERLAP), tileSize, tileSize));
        QImage img = reader.read();
        if (img.isNull())
            qDebug() << reader.errorString();
        else {
            retImg.setTile(img, i, j);
        }
    }
}
What am I doing wrong? Is it reasonable that I have to create a new reader each time? Does the location of the tile I'm trying to access affect speed and performance? If you have any suggestions on a better practice, I would appreciate them.

Smoothing a contour with a lookup table / levels mapping (OpenCV)

I'm trying to smooth jagged contours drawn by OpenCV's drawContours() method. I'm applying a Gaussian blur to the contour, then trying to use a lookup table to map the pixel intensities.
However, I don't know what values to use in my lookup table. Right now I'm just guessing at arbitrary numbers. I put together a small mockup: The first two images are results directly from OpenCV. The last image is achieved through Photoshop's levels feature. As you can see it's smoother.
How do I know what values to use in my look up table?
std::vector<uchar> lut(256);   // unsigned bytes: storing 255 in a signed char would overflow
for (int i = 0; i <= 255; ++i) {
    if (i >= 75)      lut[i] = 255;
    else if (i <= 25) lut[i] = 0;
    else              lut[i] = i;   // 25 < i < 75
}
cv::LUT(contoursOverlay, lut, contoursOverlay);
Have you considered applying just a dilation filter (and optionally a blur afterwards)? http://docs.opencv.org/doc/tutorials/imgproc/erosion_dilatation/erosion_dilatation.html
It could be a simpler and better solution. In your case I doubt there is one established best practice, and I'm not sure the LUT approach is the right tool.
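A possible sketch of that idea (contoursOverlay is the image from the question; the kernel and blur sizes are arbitrary starting points):

cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(3, 3));
cv::dilate(contoursOverlay, contoursOverlay, kernel);                  // thicken the contour
cv::GaussianBlur(contoursOverlay, contoursOverlay, cv::Size(5, 5), 0); // then soften its edges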

Kinect for Windows v2 depth to color image misalignment

Currently I am developing a tool for the Kinect for Windows v2 (similar to the one in XBOX ONE). I tried to follow some examples, and have a working example that shows the camera image, the depth image, and an image that maps the depth to the RGB image using OpenCV. But I see that it duplicates my hand when doing the mapping, and I think something is wrong in the coordinate-mapper part.
here is an example of it:
And here is the code snippet that creates the image (rgbd image in the example)
void KinectViewer::create_rgbd(cv::Mat& depth_im, cv::Mat& rgb_im, cv::Mat& rgbd_im){
    HRESULT hr = m_pCoordinateMapper->MapDepthFrameToColorSpace(cDepthWidth * cDepthHeight, (UINT16*)depth_im.data, cDepthWidth * cDepthHeight, m_pColorCoordinates);
    rgbd_im = cv::Mat::zeros(depth_im.rows, depth_im.cols, CV_8UC3);
    double minVal, maxVal;
    cv::minMaxLoc(depth_im, &minVal, &maxVal);
    for (int i = 0; i < cDepthHeight; i++){
        for (int j = 0; j < cDepthWidth; j++){
            if (depth_im.at<UINT16>(i, j) > 0 && depth_im.at<UINT16>(i, j) < maxVal * (max_z / 100) && depth_im.at<UINT16>(i, j) > maxVal * min_z / 100){
                double a = i * cDepthWidth + j;
                ColorSpacePoint colorPoint = m_pColorCoordinates[i*cDepthWidth + j];
                int colorX = (int)(floor(colorPoint.X + 0.5));
                int colorY = (int)(floor(colorPoint.Y + 0.5));
                if ((colorX >= 0) && (colorX < cColorWidth) && (colorY >= 0) && (colorY < cColorHeight))
                {
                    rgbd_im.at<cv::Vec3b>(i, j) = rgb_im.at<cv::Vec3b>(colorY, colorX);
                }
            }
        }
    }
}
Does anyone have a clue of how to solve this? How to prevent this duplication?
Thanks in advance
UPDATE:
If I do a simple depth image thresholding I obtain the following image:
This is more or less what I expected to happen: no duplicate hand in the background. Is there a way to prevent this duplicate hand in the background?
I suggest you use the BodyIndexFrame to identify whether a specific value belongs to a player or not. This way, you can reject any RGB pixel that does not belong to a player and keep the rest of them. I do not think that CoordinateMapper is lying.
A few notes:
Include the BodyIndexFrame source in your frame reader
Use MapColorFrameToDepthSpace instead of MapDepthFrameToColorSpace; this way, you'll get the HD image for the foreground
Find the corresponding DepthSpacePoint and depthX, depthY, instead of ColorSpacePoint and colorX, colorY
Here is my approach when a frame arrives (it's in C#):
depthFrame.CopyFrameDataToArray(_depthData);
colorFrame.CopyConvertedFrameDataToArray(_colorData, ColorImageFormat.Bgra);
bodyIndexFrame.CopyFrameDataToArray(_bodyData);
_coordinateMapper.MapColorFrameToDepthSpace(_depthData, _depthPoints);
Array.Clear(_displayPixels, 0, _displayPixels.Length);
for (int colorIndex = 0; colorIndex < _depthPoints.Length; ++colorIndex)
{
    DepthSpacePoint depthPoint = _depthPoints[colorIndex];
    if (!float.IsNegativeInfinity(depthPoint.X) && !float.IsNegativeInfinity(depthPoint.Y))
    {
        int depthX = (int)(depthPoint.X + 0.5f);
        int depthY = (int)(depthPoint.Y + 0.5f);
        if ((depthX >= 0) && (depthX < _depthWidth) && (depthY >= 0) && (depthY < _depthHeight))
        {
            int depthIndex = (depthY * _depthWidth) + depthX;
            byte player = _bodyData[depthIndex];
            // Identify whether the point belongs to a player
            if (player != 0xff)
            {
                int sourceIndex = colorIndex * BYTES_PER_PIXEL;
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // B
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // G
                _displayPixels[sourceIndex] = _colorData[sourceIndex++]; // R
                _displayPixels[sourceIndex] = 0xff;                      // A
            }
        }
    }
}
Here is the initialization of the arrays:
BYTES_PER_PIXEL = (PixelFormats.Bgr32.BitsPerPixel + 7) / 8;
_colorWidth = colorFrame.FrameDescription.Width;
_colorHeight = colorFrame.FrameDescription.Height;
_depthWidth = depthFrame.FrameDescription.Width;
_depthHeight = depthFrame.FrameDescription.Height;
_bodyIndexWidth = bodyIndexFrame.FrameDescription.Width;
_bodyIndexHeight = bodyIndexFrame.FrameDescription.Height;
_depthData = new ushort[_depthWidth * _depthHeight];
_bodyData = new byte[_depthWidth * _depthHeight];
_colorData = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_displayPixels = new byte[_colorWidth * _colorHeight * BYTES_PER_PIXEL];
_depthPoints = new DepthSpacePoint[_colorWidth * _colorHeight];
Notice that the _depthPoints array has a 1920x1080 size.
Once again, the most important thing is to use the BodyIndexFrame source.
Finally I got some time to write the long-awaited answer.
Let's start with some theory to understand what is really happening, and then a possible answer.
We should start by understanding how to go from a 3D point cloud, with the depth camera as the origin of the coordinate system, to an image in the image plane of the RGB camera. For that it is enough to use the pinhole camera model:
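The equation image is not included here; in standard pinhole form the projection is:

$$
s \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}
= \begin{bmatrix} f_x & 0 & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} R \mid t \end{bmatrix}
\begin{bmatrix} X \\ Y \\ Z \\ 1 \end{bmatrix}
$$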
Here, u and v are the coordinates in the image plane of the RGB camera. The first matrix on the right-hand side of the equation is the camera matrix, a.k.a. the intrinsics of the RGB camera. The following matrix holds the rotation and translation of the extrinsics, or better said, the transformation needed to go from the depth camera's coordinate system to the RGB camera's coordinate system. The last part is the 3D point.
Basically, something like this is what the Kinect SDK does. So what could go wrong that makes the hand get duplicated? Well, actually, more than one point projects to the same pixel...
To put it in other words, and in the context of the problem in the question:
The depth image is a representation of an ordered point cloud, and I am querying the u, v values of each of its pixels, which can easily be converted to 3D points. The SDK gives you the projection, but several points can end up in the same pixel (usually, a large distance along the z axis between two neighboring points produces this problem quite easily).
Now, the big question: how can you avoid this? Well, I am not sure it is possible using the Kinect SDK, since you do not know the Z values of the points AFTER the extrinsics are applied, so a technique like Z-buffering cannot be used directly... However, you may assume the Z values will be quite similar and use those from the original point cloud (at your own risk).
If you were doing it manually, and not with the SDK, you could apply the extrinsics to the points, project them into the image plane, and keep in another matrix a record of which point is mapped to which pixel; if a point is already mapped there, compare the z values and always keep the point closest to the camera. Then you would have a valid mapping without any problems. This is a rather naive way; you can probably find better ones, since the problem is now clear :)
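A naive sketch of that manual approach, assuming the point cloud is given as x/y/z arrays and a hypothetical project() helper that applies the extrinsics and intrinsics described above:

#include <vector>
#include <limits>

struct Pixel { int u, v; };
Pixel project(float x, float y, float z);   // hypothetical: extrinsics + pinhole projection

// For every pixel keep only the point closest to the camera (a simple z-buffer).
void buildMapping(const std::vector<float>& x, const std::vector<float>& y,
                  const std::vector<float>& z, int width, int height,
                  std::vector<int>& pixelToPoint /* size width*height, -1 = empty */)
{
    std::vector<float> zbuffer(width * height, std::numeric_limits<float>::max());
    pixelToPoint.assign(width * height, -1);
    for (size_t i = 0; i < z.size(); ++i) {
        const Pixel p = project(x[i], y[i], z[i]);
        if (p.u < 0 || p.u >= width || p.v < 0 || p.v >= height)
            continue;
        const int idx = p.v * width + p.u;
        if (z[i] < zbuffer[idx]) {          // closer than whatever is already mapped there
            zbuffer[idx] = z[i];
            pixelToPoint[idx] = static_cast<int>(i);
        }
    }
}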
I hope it is clear enough.
P.S.:
I do not have a Kinect 2 at the moment, so I can't check whether there has been an update related to this issue or whether the same thing still happens. I used the first released version (not a pre-release) of the SDK, so a lot of changes may have happened since... If someone knows whether this was solved, just leave a comment :)

Vertically flipping a char array: is there a more efficient way?

Let's start with some code:
QByteArray OpenGLWidget::modifyImage(QByteArray imageArray, const int width, const int height){
    if (vertFlip){
        /* Each pixel consists of four unsigned chars: Red Green Blue Alpha.
         * The field is normally 640*480, which means the whole picture is in fact 640*4 uChars wide.
         * The whole ByteArray is one-dimensional, so index 640*4 is the red of the first pixel of the second row.
         * This function is EXTREMELY SLOW.
         */
        QByteArray tempArray = imageArray;
        for (int h = 0; h < height; ++h){
            for (int w = 0; w < width/2; ++w){
                for (int i = 0; i < 4; ++i){
                    imageArray.data()[h*width*4 + 4*w + i] = tempArray.data()[h*width*4 + (4*width - 4*w) + i];
                    imageArray.data()[h*width*4 + (4*width - 4*w) + i] = tempArray.data()[h*width*4 + 4*w + i];
                }
            }
        }
    }
    return imageArray;
}
This is the code I use right now to vertically flip an image which is 640*480 (the image is actually not guaranteed to be 640*480, but it mostly is). The color encoding is RGBA, which means the total array size is 640*480*4. I get the images at 30 FPS, and I want to show them on the screen at the same FPS.
On an older CPU (Athlon X2) this code is just too much: the CPU is racing to keep up with the 30 FPS, so the question is: can I do this more efficiently?
I am also working with OpenGL; does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
According to this question, you can flip an image in OpenGL by scaling it by (1,-1,1). This question explains how to do transformations and scaling.
You can improve it at least by doing it blockwise, making use of the cache architecture. In your example one of the accesses (either the read or the write) will be off-cache.
For a start it can help to "capture scanlines" if you're using two loops to loop through the pixels of an image, like so:
for (int y = 0; y < height; ++y)
{
    // Capture scanline.
    char* scanline = imageArray.data() + y*width*4;
    for (int x = 0; x < width/2; ++x)
    {
        const int flipped_x = width - x-1;
        for (int i = 0; i < 4; ++i)
            std::swap(scanline[x*4 + i], scanline[flipped_x*4 + i]);
    }
}
Another thing to note is that I used swap instead of a temporary image. That'll tend to be more efficient since you can just swap using registers instead of loading pixels from a copy of the entire image.
But it also generally helps to use a 32-bit integer instead of working one byte at a time if you're going to be doing anything like this. If you're working with pixels through 8-bit types but know that each pixel is 32 bits, as in your case, you can generally get away with a cast to uint32_t*, e.g.
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}
At this point you might parallelize the y loop. Flipping an image horizontally (it should be "horizontally" if I understood your original code correctly) in this way is a little bit tricky with the access patterns, but you should be able to get quite a decent boost using the above techniques.
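For example, with OpenMP enabled (e.g. compiled with -fopenmp) the row loop above could be parallelized like this, since every scanline is independent:

// Requires OpenMP; std::reverse comes from <algorithm>.
#pragma omp parallel for
for (int y = 0; y < height; ++y)
{
    uint32_t* scanline = (uint32_t*)imageArray.data() + y*width;
    std::reverse(scanline, scanline + width);
}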
I am also working with OpenGL; does that have a gimmick I am not aware of that can flip images with relatively low CPU/GPU usage?
Naturally the fastest way to flip images is to not touch their pixels at all and just save the flipping for the final part of the pipeline when you render the result. For this you might render a texture in OGL with negative scaling instead of modifying the pixels of a texture.
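As a rough illustration with legacy OpenGL (textureId is an assumed, already-uploaded texture; modern GL would do the same with a flipped transform or flipped texture coordinates):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glScalef(1.0f, -1.0f, 1.0f);            // negative Y scale draws the quad mirrored vertically
glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
glEnd();
glPopMatrix();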
Another thing that's really useful in video and image processing is to represent an image to process like this for all your image operations:
struct Image32
{
    uint32_t* pixels;
    int32_t width;
    int32_t height;
    int32_t x_stride;
    int32_t y_stride;
};
The stride fields are what you use to get from one scanline (row) of an image to the next vertically, and from one column to the next horizontally. With this representation you can use negative values for the strides and offset the pixels accordingly. You can also use the stride fields to, say, render only every other scanline of an image for fast interactive half-res previews, by using y_stride = width*2 and height /= 2. You can quarter-res an image by setting the x stride to 2 and the y stride to 2*width and then halving the width and height. And you can render a cropped image without making your blit functions accept a boatload of parameters by just modifying these fields, keeping the y stride at the full image width to get from one row of the cropped section to the next:
// Using the stride representation of Image32, this can now
// blit a cropped source, a horizontally flipped source,
// a vertically flipped source, a source flipped both ways,
// a half-res source, a quarter-res source, a quarter-res
// source that is horizontally flipped and cropped, etc,
// and all without modifying the source image in advance
// or having to accept all kinds of extra drawing parameters.
void blit(int dst_x, int dst_y, Image32 dst, Image32 src);

// We don't have to do things like this (and I think I lost
// some capabilities with this version below but it hurts my
// brain too much to think about what capabilities were lost):
void blit_gross(int dst_x, int dst_y, int dst_w, int dst_h, uint32_t* dst,
                int src_x, int src_y, int src_w, int src_h,
                const uint32_t* src, bool flip_x, bool flip_y);
By using negative values and passing it to an image operation (ex: a blit operation), the result will naturally be flipped without having to actually flip the image. It'll end up being "drawn flipped", so to speak, just as with the case of using OGL with a negative scaling transformation matrix.
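For instance, a hypothetical helper (not from the answer above) could build a vertically flipped view of an Image32 without touching any pixel data, assuming the strides are measured in pixels as in the text above:

Image32 flip_vertical_view(Image32 img)
{
    Image32 flipped = img;
    // Point at the last scanline and step backwards one row at a time.
    flipped.pixels = img.pixels + (img.height - 1) * img.y_stride;
    flipped.y_stride = -img.y_stride;
    return flipped;
}

Passing the returned view to blit() would then draw the image upside down with zero preprocessing.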