Speed up drawing transparent images - C++

I need to draw an image and make part of it transparent. I wrote this code:
graphics.SetClip(&nonTransparentRegion);
graphics.DrawImage(pImage, dstRect, srcRect, Gdiplus::UnitPixel);

// Identity color matrix, with the alpha entry scaled to 0.5
Gdiplus::ColorMatrix colorMatrix;
for (int i = 0; i < 5; ++i)
    for (int j = 0; j < 5; ++j)
        colorMatrix.m[i][j] = Gdiplus::REAL(i == j);
colorMatrix.m[3][3] = 0.5f;

Gdiplus::ImageAttributes imageAttr;
imageAttr.SetColorMatrix(&colorMatrix);
graphics.SetClip(&transparentRegion);
graphics.DrawImage(pImage, dstRect, srcRect, Gdiplus::UnitPixel, &imageAttr);
It works fine, but it's too slow. I tried using Bitmap::LockBits and changing the alpha channel directly in the image, but that was even slower. What else can I try?
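One direction that might help (my suggestion, not part of the original post): if the image and the 0.5 alpha factor don't change every frame, pre-compose the semi-transparent version once into a 32bpp premultiplied-alpha (PARGB) bitmap and reuse it, so the color-matrix work is paid only once. A sketch, assuming srcRect is a Gdiplus::Rect and imageAttr is set up as above:

// One-time setup: bake the color matrix into a PARGB bitmap.
// PARGB sources are generally the fastest input format for DrawImage.
Gdiplus::Bitmap cached(srcRect.Width, srcRect.Height, PixelFormat32bppPARGB);
{
    Gdiplus::Graphics g(&cached);
    g.DrawImage(pImage,
                Gdiplus::Rect(0, 0, srcRect.Width, srcRect.Height),
                srcRect.X, srcRect.Y, srcRect.Width, srcRect.Height,
                Gdiplus::UnitPixel, &imageAttr);
}

// Per frame: a plain draw, no ImageAttributes work.
graphics.SetClip(&transparentRegion);
graphics.DrawImage(&cached, dstRect);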

Related

How can I correctly adjust the contrast of an image?

I'm trying to change the contrast of an image with a horizontal slider in Qt. I used cv::saturate_cast for brightness by converting from BGR to Lab, and it worked. It doesn't seem to work for adjusting the contrast: I get black or light-grey images. How can I fix it? This is what I've tried (I got the formulas from https://www.dfstudios.co.uk/articles/programming/image-programming-algorithms/image-processing-algorithms-part-5-contrast-adjustment/ ):
void MainWindow::on_horizontalSlider_2_valueChanged(int value)
{
    QPixmap pm = ui->label2->pixmap();
    if (!pm.isNull())
    {
        for (int i = 0; i < image.rows; i++)
            for (int j = 0; j < image.cols; j++)
            {
                float factor = (259 * (value + 255)) / (255 * (259 - value));
                image.at<cv::Vec3b>(i,j)[0] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[0] - 128) + 128);
                image.at<cv::Vec3b>(i,j)[1] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[1] - 128) + 128);
                image.at<cv::Vec3b>(i,j)[2] = cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i,j)[2] - 128) + 128);
            }
        QPixmap pmNew = MatToPixmap(image);
        QImage imageupdate = pmNew.toImage();
        int w = ui->label2->width();
        int h = ui->label2->height();
        ui->label2->setPixmap(QPixmap::fromImage(imageupdate.scaled(w, h, Qt::KeepAspectRatio)));
    }
}
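A likely culprit (my observation, not part of the original question): the factor expression is evaluated entirely in integer arithmetic, since value and all the literals are ints, so the result is truncated before it is stored in the float. For any negative slider value the factor becomes 0, which maps every channel to 128 and produces exactly the uniform light-grey image described. A minimal sketch of the fix, also hoisting the factor out of the pixel loop:

// Force floating-point division and compute the factor once per slider change.
// Assumes 'image' is the CV_8UC3 cv::Mat member used in the question.
const float factor = (259.0f * (value + 255)) / (255.0f * (259 - value));
for (int i = 0; i < image.rows; i++)
    for (int j = 0; j < image.cols; j++)
        for (int c = 0; c < 3; c++)
            image.at<cv::Vec3b>(i, j)[c] =
                cv::saturate_cast<uchar>(factor * (image.at<cv::Vec3b>(i, j)[c] - 128) + 128);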

OpenCV: Lab color quantization to predefined colors

I am trying to reduce the colors in my image to some predefined colors using the following function:
void quantize_img(cv::Mat &lab_img, std::vector<cv::Scalar> &lab_colors) {
    float min_dist, dist;
    int min_idx;
    for (int i = 0; i < lab_img.rows * lab_img.cols * 3; i += lab_img.cols * 3) {
        for (int j = 0; j < lab_img.cols * 3; j += 3) {
            min_dist = FLT_MAX;
            uchar &l = *(lab_img.data + i + j + 0);
            uchar &a = *(lab_img.data + i + j + 1);
            uchar &b = *(lab_img.data + i + j + 2);
            for (int k = 0; k < lab_colors.size(); k++) {
                double &lc = lab_colors[k](0);
                double &ac = lab_colors[k](1);
                double &bc = lab_colors[k](2);
                dist = (l - lc) * (l - lc) + (a - ac) * (a - ac) + (b - bc) * (b - bc);
                if (min_dist > dist) {
                    min_dist = dist;
                    min_idx = k;
                }
            }
            l = lab_colors[min_idx](0);
            a = lab_colors[min_idx](1);
            b = lab_colors[min_idx](2);
        }
    }
}
However, it does not seem to work properly! For example, the output it produces for the following input is clearly wrong.
if (!(src = imread("im0.png")).data)
    return -1;
cvtColor(src, lab, COLOR_BGR2Lab);
std::vector<cv::Scalar> lab_color_plate_({
    Scalar(100,    0,    0), // white
    Scalar( 50,    0,    0), // gray
    Scalar(  0,    0,    0), // black
    Scalar( 50,  127,  127), // red
    Scalar( 50, -128,  127), // green
    Scalar( 50,  127, -128), // violet
    Scalar( 50, -128, -128), // blue
    Scalar( 68,   46,   75), // orange
    Scalar(100,  -16,   93)  // yellow
});
// Convert from conventional Lab (L in [0,100], a/b in [-128,127])
// to OpenCV's 8-bit Lab encoding (L scaled to [0,255], a/b offset by 128)
for (int k = 0; k < lab_color_plate_.size(); k++) {
    lab_color_plate_[k](0) *= 255.0 / 100.0;
    lab_color_plate_[k](1) += 128;
    lab_color_plate_[k](2) += 128;
}
quantize_img(lab, lab_color_plate_);
cvtColor(lab, lab, COLOR_Lab2BGR);
imwrite("im0_lab.png", lab);
Input image:
Output image:
Can anyone explain where the problem is?
After checking your algorithm, I noticed that it is 100% correct; the problem is your color space. Let's take one of the colors that gets changed "wrongly", like the green of the trees.
Using a color picker tool in GIMP, you can see that at least one of the greens used is RGB (111, 139, 80). Converted to Lab, that is (54.4, -20.7, 28.3). By your formula, the distance to the reference green is 21274.34, while the distance to grey is 1248.74, so the algorithm picks grey over green, even though the pixel is green.
A lot of Lab values can correspond to a green color; you can test out the color ranges on this web page. I would suggest using HSV or HSL instead and comparing only the H (hue) values. The other two channels change only the tone of the green, while a small range of hues determines that it is green at all. This will probably give you more accurate results.
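A rough sketch of that idea (my code, not the answerer's; it assumes bgr_colors and hues are parallel arrays, and white/grey/black would need extra handling since they have no meaningful hue):

#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cstdlib>
#include <vector>

// Snap each pixel to the reference color whose hue is closest.
// OpenCV stores hue for 8-bit images in [0, 180), and hue wraps around,
// so the distance must be measured on the circle.
void quantize_by_hue(cv::Mat &bgr_img,
                     const std::vector<cv::Vec3b> &bgr_colors,
                     const std::vector<int> &hues) {
    cv::Mat hsv;
    cv::cvtColor(bgr_img, hsv, cv::COLOR_BGR2HSV);
    for (int i = 0; i < hsv.rows; ++i) {
        for (int j = 0; j < hsv.cols; ++j) {
            const int h = hsv.at<cv::Vec3b>(i, j)[0];
            size_t best = 0;
            int best_d = 256;
            for (size_t k = 0; k < hues.size(); ++k) {
                int d = std::abs(h - hues[k]);
                d = std::min(d, 180 - d); // circular hue distance
                if (d < best_d) { best_d = d; best = k; }
            }
            bgr_img.at<cv::Vec3b>(i, j) = bgr_colors[best];
        }
    }
}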
As a suggestion to improve your code, use Vec3b and the cv::Mat accessors, like this:
for (int i = 0; i < lab_img.rows; ++i) {
    for (int j = 0; j < lab_img.cols; ++j) {
        // Take a reference so writes go back into the image;
        // at<>() also bounds-checks the access in debug builds.
        Vec3b &pixel = lab_img.at<Vec3b>(i, j);
    }
}
This way the code is more readable, and some checks are done in debug mode.
The other way would be to use a single loop, since you don't care about the indices:
// Note: treating the data as one flat array assumes lab_img is continuous
auto currentData = reinterpret_cast<Vec3b *>(lab_img.data);
for (size_t i = 0; i < (size_t)lab_img.rows * lab_img.cols; i++)
{
    auto &pixel = currentData[i];
}
This way is also better. This last part is just a suggestion; there is nothing wrong with your current code, it is just harder to read and understand for an outside viewer.
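Applied to the question's function, the single-loop form might look like this (a sketch, keeping the same brute-force nearest-color search; it assumes the Mat is continuous, which holds for images loaded with imread):

#include <cfloat>

void quantize_img_flat(cv::Mat &lab_img, const std::vector<cv::Scalar> &lab_colors) {
    auto px = reinterpret_cast<cv::Vec3b *>(lab_img.data);
    const size_t total = (size_t)lab_img.rows * lab_img.cols;
    for (size_t i = 0; i < total; i++) {
        double min_dist = DBL_MAX;
        size_t min_idx = 0;
        for (size_t k = 0; k < lab_colors.size(); k++) {
            const double dl = px[i][0] - lab_colors[k][0];
            const double da = px[i][1] - lab_colors[k][1];
            const double db = px[i][2] - lab_colors[k][2];
            const double dist = dl * dl + da * da + db * db;
            if (dist < min_dist) { min_dist = dist; min_idx = k; }
        }
        px[i] = cv::Vec3b((uchar)lab_colors[min_idx][0],
                          (uchar)lab_colors[min_idx][1],
                          (uchar)lab_colors[min_idx][2]);
    }
}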

How to read data from a UTexture2D in C++

I am trying to read the pixel data from a populated UTexture2D in an Unreal Engine C++ project. Before posting the question here, I tried the method described in this link: https://answers.unrealengine.com/questions/25594/accessing-pixel-values-of-texture2d.html. However, it doesn't work for me: all the pixel values I get from the texture are garbage data.
I just want to get the depth values from a SceneCapture2D with a post-processing material that contains a SceneTexture:Depth node. I need the depth values available in C++ so that I can do further processing with OpenCV. In Direct3D 11 a staging texture can be used for CPU reads, but in Unreal Engine I don't know how to create the equivalent of a staging texture. Since I can't get correct pixel values with the current method, I suspect I may be accessing a texture that is not CPU-readable.
Here is my experimental code for reading data back from an RGB UTexture2D.
Initialize the RGB Texture:
VideoTextureColor = UTexture2D::CreateTransient(640, 480, PF_B8G8R8A8);
VideoTextureColor->UpdateResource();
VideoUpdateTextureRegionColor = new FUpdateTextureRegion2D(0, 0, 0, 0, 640, 480);
ColorRegionData = new FUpdateTextureRegionsData;
PixelDepthData.Init(FColor(0, 0, 0, 255), 640 * 480);
// Populate the texture with blue (the format is B8G8R8A8, so B = 255)
for (int i = 0; i < 640; i++) {
    for (int j = 0; j < 480; j++) {
        int idx = j * 640 + i;
        PixelDepthData[idx].B = 255;
        PixelDepthData[idx].G = 0;
        PixelDepthData[idx].R = 0;
        PixelDepthData[idx].A = 255;
    }
}
// UpdateTextureRegions is a helper defined elsewhere in the project that
// enqueues the region copy on the render thread
// (SrcPitch = 4 * 640 bytes per row, SrcBpp = 4 bytes per pixel)
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
Then I update the texture again with the values stored in PixelDepthData (i.e. its old values), and afterwards read the texture's data back into the PixelDepthData array (a TArray<FColor>):
UpdateTextureRegions(
    VideoTextureColor,
    (int32)0,
    (uint32)1,
    VideoUpdateTextureRegionColor,
    (uint32)(4 * 640),
    (uint32)4,
    (uint8*)PixelDepthData.GetData(),
    false,
    ColorRegionData
);
ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FRealSenseDelegator,
    ARealSenseDelegator*, RealSenseDelegator, this,
    {
        FColor* tmpImageDataPtr = static_cast<FColor*>(
            (RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY));
        for (uint32 j = 0; j < 480; j++) {
            for (uint32 i = 0; i < 640; i++) {
                uint32 idx = j * 640 + i;
                RealSenseDelegator->PixelDepthData[idx] = tmpImageDataPtr[idx];
                RealSenseDelegator->PixelDepthData[idx].A = 255;
            }
        }
        (RealSenseDelegator->VideoTextureColor)->PlatformData->Mips[0].BulkData.Unlock();
    }
);
All I get is a white texture instead of a blue one in the visualization scene.
Does anyone know how to read the data of a UTexture2D object?
I figured it out. You have to get the UTexture2D's RHI texture reference first, and then use RHILockTexture2D to read its data, and you have to do it on the render thread. The following code is just an example:
FTexture2DResource* uTex2DRes = (FTexture2DResource*)(RealSenseDelegator->VideoTexturePixelDepth)->Resource;
uint32 destStride = 0; // filled in by RHILockTexture2D
float* cpuDataPtr = (float*)RHILockTexture2D(
    uTex2DRes->GetTexture2DRHI(),
    0,
    RLM_ReadOnly,
    destStride,
    false);
for (uint32 j = 0; j < 480; j++) {
    for (uint32 i = 0; i < 640; i++) {
        uint32 idx = j * 640 + i;
        // TODO Read the pixel data right here
    }
}
RHIUnlockTexture2D(uTex2DRes->GetTexture2DRHI(), 0, false);
To do this on the render thread, you have to use a macro such as ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER (use this one if you only need to pass one parameter to the render thread).
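Putting the two pieces together, the lock and read could be wrapped in that macro like this (a sketch following the same pattern as the code above; the member names are the ones used earlier in the question):

ENQUEUE_UNIQUE_RENDER_COMMAND_ONEPARAMETER(
    FReadTextureCommand, // arbitrary command type name
    ARealSenseDelegator*, Delegator, this,
    {
        uint32 destStride = 0;
        FTexture2DResource* res =
            (FTexture2DResource*)Delegator->VideoTexturePixelDepth->Resource;
        float* cpuDataPtr = (float*)RHILockTexture2D(
            res->GetTexture2DRHI(), 0, RLM_ReadOnly, destStride, false);
        for (uint32 j = 0; j < 480; j++) {
            for (uint32 i = 0; i < 640; i++) {
                // copy cpuDataPtr[j * 640 + i] into CPU-side storage here
            }
        }
        RHIUnlockTexture2D(res->GetTexture2DRHI(), 0, false);
    }
);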

How does the remap function in OpenCV for undistorting images work?

For debugging purposes, I tried to reimplement the remap function of OpenCV. Without considering interpolation, it should look something like this:
for (int j = 0; j < height; j++)
{
    for (int i = 0; i < width; i++)
    {
        undistortedImage.at<double>(mapy.at<float>(j,i), mapx.at<float>(j,i)) = distortedImage.at<double>(j,i);
    }
}
To test this, I used the following maps to mirror the image around the y-axis:
int width = distortedImage.cols;
int height = distortedImage.rows;
cv::Mat mapx = Mat(height, width, CV_32FC1);
cv::Mat mapy = Mat(height, width, CV_32FC1);
for (int j = 0; j < height; j++)
{
    for (int i = 0; i < width; i++)
    {
        mapx.at<float>(j,i) = width - i - 1;
        mapy.at<float>(j,i) = j;
    }
}
Apart from the interpolation, it works exactly like
cv::remap( distortedImage, undistortedImage, mapx, mapy, CV_INTER_LINEAR);
Now I tried to apply this function to maps created by the OCamCalib Toolbox for undistorting images. This is basically the same thing the OpenCV undistortion does.
My implementation obviously does not consider that several pixels from the source image may end up at the same pixel in the destination image. But it is worse than that: the source image appears three times, in smaller versions, in the destination image. The remap command itself, on the other hand, works perfectly.
After exhaustive debugging, I decided to ask you guys for some help. Can anyone explain what I am doing wrong, or provide a link to the implementation of remap in OpenCV?
I figured it out myself. My original implementation has two fundamental mistakes:
1. a misunderstanding of how the maps are used, and
2. a misunderstanding of how to extract the intensity values.
How to do it correctly:
for (int j = 0; j < height; j++)
{
    for (int i = 0; i < width; i++)
    {
        undistortedImage.at<uchar>(j,i) = distortedImage.at<uchar>(mapy.at<float>(j,i), mapx.at<float>(j,i));
    }
}
I want to highlight two things: the intensity values are now extracted using .at<uchar> instead of .at<double>, and the maps are now applied on the source side, i.e. the destination pixel (j,i) is read from the distorted image at (mapy(j,i), mapx(j,i)), instead of scattering source pixels into the destination.
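In other words, remap is a gather, not a scatter: for every destination pixel, the maps say where to read from in the source. A quick way to sanity-check a hand-rolled version against cv::remap (my sketch, nearest-neighbour only, so compare against INTER_NEAREST rather than linear interpolation):

// Nearest-neighbour remap for an 8-bit single-channel image:
// dst(j,i) = src(mapy(j,i), mapx(j,i)), with out-of-range lookups
// left at 0 (like BORDER_CONSTANT).
cv::Mat manual_remap(const cv::Mat &src, const cv::Mat &mapx, const cv::Mat &mapy)
{
    cv::Mat dst(mapx.size(), src.type(), cv::Scalar(0));
    for (int j = 0; j < dst.rows; j++) {
        for (int i = 0; i < dst.cols; i++) {
            int x = cvRound(mapx.at<float>(j, i));
            int y = cvRound(mapy.at<float>(j, i));
            if (x >= 0 && x < src.cols && y >= 0 && y < src.rows)
                dst.at<uchar>(j, i) = src.at<uchar>(y, x);
        }
    }
    return dst;
}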

How to access all channels of OpenCV image

I have an OpenCV image created like this:
cv::Mat img(XN_VGA_Y_RES, XN_VGA_X_RES, CV_64FC3, cv::Scalar(0));
How can I access all its pixels?
I tried:
for (int x = 0; x < XN_VGA_X_RES; x++) {
    for (int y = 0; y < XN_VGA_Y_RES; y++) {
        img.at<double>(y, x) = 1;
    }
}
However, when I do it this way, only 1/3 of the image is white. I'm guessing this is because the image has 3 channels, but how can I access them all? I tried various things like img.at<double[3]>(y,x) or img.at<cv::Vec3f>(y,x), but they do not seem to work.
Try this:
img.at<cv::Vec3d>(y, x)[0] = 1;
img.at<cv::Vec3d>(y, x)[1] = 1;
img.at<cv::Vec3d>(y, x)[2] = 1;
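The key point is matching the accessor type to the Mat type: the image was created as CV_64FC3, so each pixel is three doubles and the matching accessor is cv::Vec3d (cv::Vec3f is the CV_32FC3 accessor, which is why it appeared not to work). A minimal self-contained sketch that sets every pixel of such an image to white:

#include <opencv2/core.hpp>

int main()
{
    // 3-channel double image, initialized to black (sizes are arbitrary here)
    cv::Mat img(480, 640, CV_64FC3, cv::Scalar(0));
    for (int y = 0; y < img.rows; y++)
        for (int x = 0; x < img.cols; x++)
            img.at<cv::Vec3d>(y, x) = cv::Vec3d(1, 1, 1); // all three channels
    return 0;
}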