How to implement digital fill light in OpenCV - C++

I want to implement a fill-light function using OpenCV, but I have some problems: the black parts of the picture are too dark, the photos become blurred, and I don't know how to optimize the code. Here is my code:
v: value, 0~100, the amplitude of the brightness increase.
s: scope, 0~255, "dark" means everything less than s.
The idea: increase the exposure of dark pixels by an increment and leave the rest unchanged, so that more detail is visible in the dark areas.
m_imgOriginal: original image, type: Mat
m_imgNew: new image, cloned from m_imgOriginal, type: Mat
int OpenCVClass::AddExposure(int v, int s)
{
    // Per-channel increments, scaled by the image's mean channel values.
    // Note: in OpenCV's BGR layout channel 0 is blue, although the
    // variables here are named r/g/b.
    int new_r = v * m_mean_val.val[0] / 150;
    int new_g = v * m_mean_val.val[1] / 150;
    int new_b = v * m_mean_val.val[2] / 150;
    for (int y = 0; y < m_imgOriginal.rows; y++)
    {
        auto ptr = m_imgOriginal.ptr<uchar>(y);
        auto qtr = m_imgNew.ptr<uchar>(y);
        for (int x = 0; x < m_imgOriginal.cols; x++)
        {
            int mean = (ptr[0] + ptr[1] + ptr[2]) / 3;
            if (mean <= s) // dark pixel: brighten it
            {
                int r = ptr[0] + new_r;
                qtr[0] = r > 255 ? 255 : r;
                int g = ptr[1] + new_g;
                qtr[1] = g > 255 ? 255 : g;
                int b = ptr[2] + new_b;
                qtr[2] = b > 255 ? 255 : b;
                // If the boost overshoots the threshold, fall back to a
                // uniform increment of (s - mean) so the new mean stays at s.
                int newMean = (qtr[0] + qtr[1] + qtr[2]) / 3;
                if (newMean > s)
                {
                    int nr = ptr[0] + (s - mean);
                    int ng = ptr[1] + (s - mean);
                    int nb = ptr[2] + (s - mean);
                    qtr[0] = nr > 255 ? 255 : nr;
                    qtr[1] = ng > 255 ? 255 : ng;
                    qtr[2] = nb > 255 ? 255 : nb;
                }
            }
            else // bright pixel: copy through unchanged
            {
                qtr[0] = ptr[0];
                qtr[1] = ptr[1];
                qtr[2] = ptr[2];
            }
            ptr += 3;
            qtr += 3;
        }
        // Note: this is called once per row, not once per frame.
        RenderBuffer(m_imgNew, m_displayBuffer);
    }
    return 0;
}
[Images: before and after optimization]

First, I would suggest calculating a luminance value for each pixel when testing against 's'; I mean, calculate 'mean' a different way (see this link on how to calculate luminance):
http://www.niwa.nu/2013/05/math-behind-colorspace-conversions-rgb-hsl/
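For example, a minimal sketch using the common Rec. 601 luma weights (an assumption on my part; the linked page derives HSL-style lightness, (max+min)/2, as an alternative), assuming OpenCV's default BGR channel order:
// replaces the naive (b + g + r) / 3 inside the pixel loop
int luminance = (299 * ptr[2] + 587 * ptr[1] + 114 * ptr[0]) / 1000;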
Second, you are dealing with an 8-bit-per-channel image; don't expect near-dark or perfectly dark pixels to gain any extra detail when you make them "brighter", they will just become grey or whiter.
Third, when "adding" brightness, I suggest using the HSL representation of the pixel color values and increasing the luminance. In pseudocode (an OpenCV sketch follows the steps):
1) Convert pixel color from RGB to HSL.
2) Increase luminance (or 'lightness').
3) Convert back pixel color to RGB.
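A minimal sketch of those three steps with OpenCV, reusing the question's Mats (the +30 lightness boost is just an example value; note that OpenCV calls the colorspace HLS and keeps lightness in channel 1):
cv::Mat hls;
cv::cvtColor(m_imgOriginal, hls, CV_BGR2HLS); // 1) BGR -> HLS
for (int y = 0; y < hls.rows; y++)
{
    uchar* p = hls.ptr<uchar>(y);
    for (int x = 0; x < hls.cols; x++)
    {
        int l = p[3 * x + 1] + 30;            // 2) increase lightness
        p[3 * x + 1] = l > 255 ? 255 : l;
    }
}
cv::cvtColor(hls, m_imgNew, CV_HLS2BGR);      // 3) back to BGR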

Related

How to downsample a not-power-of-2 texture in Unreal Engine?

I am rendering the viewport at a resolution of something like 1920x1080, multiplied by an oversampling value such as 4. Now I need to downsample from the rendered resolution of 7680x4320 back to 1920x1080.
Are there any functions in Unreal I could use for that? Or any library (Windows only) which handles this nicely?
Or what would be a proper way of writing this myself?
We tried to implement a downsampling, but it only works if SnapshotScale is 2; when it's higher than 2 it doesn't seem to have an effect on image quality.
UTexture2D* AAVESnapShotManager::DownsampleTexture(UTexture2D* Texture)
{
    UTexture2D* Result = UTexture2D::CreateTransient(RenderSettings.imageWidth, RenderSettings.imageHeight, PF_B8G8R8A8);
    void* TextureDataVoid = Texture->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_ONLY);
    void* ResultDataVoid = Result->PlatformData->Mips[0].BulkData.Lock(LOCK_READ_WRITE);
    FColor* TextureData = (FColor*)TextureDataVoid;
    FColor* ResultData = (FColor*)ResultDataVoid;
    int32 WindowSize = RenderSettings.resolutionScale / 2;
    for (int x = 0; x < Result->GetSizeX(); ++x)
    {
        for (int y = 0; y < Result->GetSizeY(); ++y)
        {
            const uint32 ResultIndex = y * Result->GetSizeX() + x;
            uint32_t R = 0, G = 0, B = 0, A = 0;
            int32 Samples = 0;
            for (int32 dx = -WindowSize; dx < WindowSize; ++dx)
            {
                for (int32 dy = -WindowSize; dy < WindowSize; ++dy)
                {
                    int32 PosX = (x * RenderSettings.resolutionScale + dx);
                    int32 PosY = (y * RenderSettings.resolutionScale + dy);
                    if (PosX < 0 || PosX >= Texture->GetSizeX() || PosY < 0 || PosY >= Texture->GetSizeY())
                    {
                        continue;
                    }
                    size_t TextureIndex = PosY * Texture->GetSizeX() + PosX;
                    FColor& Color = TextureData[TextureIndex];
                    R += Color.R;
                    G += Color.G;
                    B += Color.B;
                    A += Color.A;
                    ++Samples;
                }
            }
            ResultData[ResultIndex] = FColor(R / Samples, G / Samples, B / Samples, A / Samples);
        }
    }
    Texture->PlatformData->Mips[0].BulkData.Unlock();
    Result->PlatformData->Mips[0].BulkData.Unlock();
    Result->UpdateResource();
    return Result;
}
I expect a high-quality downsampled texture as output, working with any positive int value of SnapshotScale.
I have a suggestion. It's not really direct, but it involves no hand-written image filtering and no extra libraries:
Make an unlit Material with nodes TextureObject -> TextureSample -> connect to Emissive.
Use the texture you start with in your function to populate the TextureObject on a Material Instance Dynamic of that material.
Use the "Draw Material to Render Target" function to draw the Material Instance Dynamic to a Render Target that is pre-set to your target resolution.
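A rough sketch of the C++ side of that approach (hypothetical names: it assumes the material has a texture parameter called "Input", and that DownsampleMaterial and OutputRenderTarget are properties you have already set up):
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"

UTextureRenderTarget2D* AAVESnapShotManager::DownsampleViaMaterial(UTexture2D* Source)
{
    // Instance the unlit material and feed it the oversampled texture.
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(DownsampleMaterial, this);
    MID->SetTextureParameterValue(TEXT("Input"), Source);
    // The GPU's texture sampler does the filtering while the material is
    // drawn across the lower-resolution target.
    UKismetRenderingLibrary::DrawMaterialToRenderTarget(GetWorld(), OutputRenderTarget, MID);
    return OutputRenderTarget;
}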

Resizing a picture in VC++

Is there any function that quickly resizes a picture in Visual C++? I want to make a copy of the original picture that is x times smaller, and then place it at the center of a black bitmap. The black bitmap has the size of the first picture.
Here is original picture: https://www.dropbox.com/s/6she1kvcby53qgz/term.bmp
and this is effect that i want to receive: https://www.dropbox.com/s/8ah59z0ip6tq4wd/term2.bmp
In my program I use the Pylon libraries. The images are of type CPylonImage.
Some simple code to handle resizes portably:
For all cases the following legend applies:
w1 - the width of the original image
h1 - the height of the original image
pixels - an array of int with the pixel data
w2 - desired width
h2 - desired height
retval - this is the returned value, it is a new pixel array which contains the manipulated image.
For linear interpolation: I cannot find that code on my drive at present (issues with a new HDD), so I have included bilinear instead.
For bilinear interpolation:
int* resizeBilinear(int* pixels, int w1, int h1, int w2, int h2)
{
    int* retval = new int[w2 * h2];
    int a, b, c, d, x, y, index;
    float x_ratio = ((float)(w1 - 1)) / w2;
    float y_ratio = ((float)(h1 - 1)) / h2;
    float x_diff, y_diff, blue, red, green;
    int offset = 0;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x = (int)(x_ratio * j);
            y = (int)(y_ratio * i);
            x_diff = (x_ratio * j) - x;
            y_diff = (y_ratio * i) - y;
            index = (y * w1 + x);
            a = pixels[index];
            b = pixels[index + 1];
            c = pixels[index + w1];
            d = pixels[index + w1 + 1];
            // Each channel is a weighted average of the four neighbours:
            // Y = A(1-x_diff)(1-y_diff) + B(x_diff)(1-y_diff)
            //   + C(y_diff)(1-x_diff) + D(x_diff * y_diff)
            // blue element
            blue = (a & 0xff) * (1 - x_diff) * (1 - y_diff) + (b & 0xff) * (x_diff) * (1 - y_diff) +
                   (c & 0xff) * (y_diff) * (1 - x_diff) + (d & 0xff) * (x_diff * y_diff);
            // green element
            green = ((a >> 8) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 8) & 0xff) * (x_diff) * (1 - y_diff) +
                    ((c >> 8) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 8) & 0xff) * (x_diff * y_diff);
            // red element
            red = ((a >> 16) & 0xff) * (1 - x_diff) * (1 - y_diff) + ((b >> 16) & 0xff) * (x_diff) * (1 - y_diff) +
                  ((c >> 16) & 0xff) * (y_diff) * (1 - x_diff) + ((d >> 16) & 0xff) * (x_diff * y_diff);
            retval[offset++] =
                0xff000000 | // hardcoded alpha
                ((((int)red) << 16) & 0xff0000) |
                ((((int)green) << 8) & 0xff00) |
                ((int)blue);
        }
    }
    return retval;
}
For Nearest Neighbour:
int* resizePixels(int* pixels, int w1, int h1, int w2, int h2)
{
    int* retval = new int[w2 * h2];
    // EDIT: added +1 to remedy an early rounding problem
    int x_ratio = (int)((w1 << 16) / w2) + 1;
    int y_ratio = (int)((h1 << 16) / h2) + 1;
    //int x_ratio = (int)((w1<<16)/w2) ;
    //int y_ratio = (int)((h1<<16)/h2) ;
    int x2, y2;
    for (int i = 0; i < h2; i++) {
        for (int j = 0; j < w2; j++) {
            x2 = ((j * x_ratio) >> 16);
            y2 = ((i * y_ratio) >> 16);
            retval[(i * w2) + j] = pixels[(y2 * w1) + x2];
        }
    }
    return retval;
}
Now, the code above is designed to be portable and should work with very little modification in C++, C, C# and Java (I have used it in all four when needed), which eliminates the need for an external library and lets you process any array of pixels, so long as you can represent them in the format the code expects.
To place the manipulated image in the middle of a black background, all you need to do is copy the resized data into an array the size of the original at the right locations and fill every other location with the value for black, as sketched below :)
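A minimal sketch of that step, following the same int-pixel conventions as the functions above (w2/h2 is the resized image, w1/h1 the black canvas):
// Returns a w1 x h1 opaque-black canvas with the w2 x h2 image centred on it.
int* centerOnBlack(int* pixels, int w2, int h2, int w1, int h1)
{
    int* retval = new int[w1 * h1];
    for (int i = 0; i < w1 * h1; i++)
        retval[i] = 0xff000000; // opaque black
    int offX = (w1 - w2) / 2;
    int offY = (h1 - h2) / 2;
    for (int y = 0; y < h2; y++)
        for (int x = 0; x < w2; x++)
            retval[(y + offY) * w1 + (x + offX)] = pixels[y * w2 + x];
    return retval;
}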
Hope this helps; I have no time to comment it all at present, but I can at a later point today or tomorrow :)

Skin Detection with Gaussian Mixture Models

I'm implementing a skin detection algorithm according to this article. There are two models on page 21: a mixture-of-Gaussians skin color model and a non-skin color model.
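For reference, both models evaluate the standard 16-component mixture-of-Gaussians density with diagonal covariances,

p(x) = \sum_{k=1}^{16} w_k \frac{1}{(2\pi)^{3/2} |\Sigma_k|^{1/2}} \exp\left(-\frac{1}{2}(x - \mu_k)^\top \Sigma_k^{-1} (x - \mu_k)\right),

where x is a pixel's RGB value and w_k, \mu_k, \Sigma_k are the weights, means and covariances given in the article.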
The first model, for skin detection, works very well.
Here are examples:
1) Original image:
2) Skin mask:
But the non-skin model gives wrong results:
Here is my code:
ipl_image_wrapper NudityDetector::filterPixelsWithGMM(const float covarinceMatrix[][3], const float meanMatrix[][3], const float weightVector[], const float probValue) const
{
    ipl_image_wrapper mask = cvCreateImage(cvGetSize(m_image.get()), IPL_DEPTH_8U, 1);
    double probability = 0.0;
    float x[3] = { 0, 0, 0 };
    for (int i = 0; i < m_image.get()->height; ++i)
    {
        for (int j = 0; j < m_image.get()->width; ++j)
        {
            if (m_image.get()->nChannels == 3)
            {
                // BGR in memory, so x = (R, G, B)
                x[0] = (reinterpret_cast<uchar*>(m_image.get()->imageData + i * m_image.get()->widthStep))[j * 3 + 2];
                x[1] = (reinterpret_cast<uchar*>(m_image.get()->imageData + i * m_image.get()->widthStep))[j * 3 + 1];
                x[2] = (reinterpret_cast<uchar*>(m_image.get()->imageData + i * m_image.get()->widthStep))[j * 3];
                double cov_det = 0.0;
                double power = 0.0;
                double A1 = 0.0;
                double A2 = 0.0;
                double A3 = 0.0;
                probability = 0;
                for (int k = 0; k < 16; ++k)
                {
                    // Diagonal covariance: the determinant is the product of
                    // the diagonal entries, and A1..A3 are its cofactors.
                    cov_det = covarinceMatrix[k][0] * covarinceMatrix[k][1] * covarinceMatrix[k][2];
                    A1 = covarinceMatrix[k][1] * covarinceMatrix[k][2];
                    A2 = covarinceMatrix[k][0] * covarinceMatrix[k][2];
                    A3 = covarinceMatrix[k][0] * covarinceMatrix[k][1];
                    power = (std::pow((x[0] - meanMatrix[k][0]), 2) * A1 +
                             std::pow((x[1] - meanMatrix[k][1]), 2) * A2 +
                             std::pow((x[2] - meanMatrix[k][2]), 2) * A3) / (2 * cov_det);
                    // The exponents must be floating point: 3/2 and 1/2 in
                    // integer arithmetic evaluate to 1 and 0.
                    probability += 100 * weightVector[k] * std::exp(-power) / (std::pow(2 * M_PI, 1.5) * std::sqrt(cov_det));
                }
                if (probability < probValue)
                {
                    (reinterpret_cast<uchar*>(mask.get()->imageData + i * mask.get()->widthStep))[j] = 0;
                }
                else
                {
                    (reinterpret_cast<uchar*>(mask.get()->imageData + i * mask.get()->widthStep))[j] = 255;
                }
            }
        }
    }
    cvDilate(mask.get(), mask.get(), NULL, 2);
    cvErode(mask.get(), mask.get(), NULL, 1);
    return mask;
}
ipl_image_wrapper NudityDetector::detectSkinWithGMM(const float probValue) const
{
    //matrices are from article
    ipl_image_wrapper mask = filterPixelsWithGMM(COVARIANCE_SKIN_MATRIX, MEAN_SKIN_MATRIX, SKIN_WEIGHT_VECTOR, probValue);
    return mask;
}

ipl_image_wrapper NudityDetector::detectNonSkinWithGMM(const float probValue) const
{
    //matrices are from article
    ipl_image_wrapper mask = filterPixelsWithGMM(COVARIANCE_NON_SKIN_MATRIX, MEAN_NON_SKIN_MATRIX, NON_SKIN_WEIGHT_VECTOR, probValue);
    return mask;
}
What am I doing wrong? Did I misunderstand the article, or did I translate the formula into code incorrectly?
Thank you in advance!
In fact, there seems to be nothing wrong with the results: the non-skin model correctly identifies non-skin regions as 255 and skin regions as 0. You may just need to tune the parameter probValue to a lower value to get rid of some false negatives (small non-skin regions).
GMM may not be an effective approach for skin detection, and you may want to employ some edge intensity information as a regularization term so that the detected regions do not become fragmented.

Pixels in YUV image

I am using OpenCV for object tracking. I have read that a YUV image is a better option to use than an RGB image. My problem is that I fail to understand the YUV format, although I have spent much time reading notes on it. Y is the brightness, which I believe is calculated from a combination of the R, G and B components.
My main problem is how to access and manipulate the pixels in a YUV image. In RGB format it is easy to access a component and change it with a simple operation like
src.at<Vec3b>(j,i).val[0] = 0; for example
But this is not the case with YUV. I need help accessing and changing the pixel values in a YUV image. For example, if a pixel in RGB is red, then I want to keep only the corresponding pixel in YUV and remove the rest. Please help me with this.
I would suggest operating on your image in HSV or LAB rather than RGB.
The raw image from the camera will be in YCbCr (sometimes called YUV, which I think is incorrect, but I may be wrong), laid out in a way that resembles something like YUYV (repeating), so if you can convert directly from that to HSV you will avoid additional copy and conversion operations, which will save you some time. That may only matter to you if you're processing video or batches of images, however.
Here's some C++ code for converting between YCbCr and RGB (one uses integer math, the other floating point):
Colour::bgr Colour::YCbCr::toBgrInt() const
{
    // BT.601 coefficients in Q14 fixed point (e.g. 22987 / 16384 is about
    // 1.403); the (1 << 13) terms round before the >> 14 shift.
    int c0 = 22987;
    int c1 = -11698;
    int c2 = -5636;
    int c3 = 29049;
    int y = this->y;
    int cb = this->cb - 128;
    int cr = this->cr - 128;
    int b = y + (((c3 * cb) + (1 << 13)) >> 14);
    int g = y + (((c2 * cb + c1 * cr) + (1 << 13)) >> 14);
    int r = y + (((c0 * cr) + (1 << 13)) >> 14);
    if (r < 0)
        r = 0;
    else if (r > 255)
        r = 255;
    if (g < 0)
        g = 0;
    else if (g > 255)
        g = 255;
    if (b < 0)
        b = 0;
    else if (b > 255)
        b = 255;
    return Colour::bgr(b, g, r);
}

Colour::bgr Colour::YCbCr::toBgrFloat() const
{
    float y = this->y;
    float cb = this->cb;
    float cr = this->cr;
    int r = y + 1.40200 * (cr - 0x80);
    int g = y - 0.34414 * (cb - 0x80) - 0.71414 * (cr - 0x80);
    int b = y + 1.77200 * (cb - 0x80);
    if (r < 0)
        r = 0;
    else if (r > 255)
        r = 255;
    if (g < 0)
        g = 0;
    else if (g > 255)
        g = 255;
    if (b < 0)
        b = 0;
    else if (b > 255)
        b = 255;
    return Colour::bgr(b, g, r);
}
And a conversion from BGR to HSV:
Colour::hsv Colour::bgr2hsv(bgr const& in)
{
    Colour::hsv out;
    int const hstep = 255 / 3; // Hue step size between red -> green -> blue
    int min = in.r < in.g ? in.r : in.g;
    min = min < in.b ? min : in.b;
    int max = in.r > in.g ? in.r : in.g;
    max = max > in.b ? max : in.b;
    out.v = max; // v
    int chroma = max - min;
    if (max > 0)
    {
        out.s = 255 * chroma / max; // s
    }
    else
    {
        // r = g = b = 0 // s = 0, v is undefined
        out.s = 0;
        out.h = 0;
        out.v = 0; // it's now undefined
        return out;
    }
    if (chroma == 0)
    {
        out.h = 0;
        return out;
    }
    const int chroma2 = chroma * 2;
    int offset;
    int diff;
    if (in.r == max)
    {
        offset = 3 * hstep;
        diff = in.g - in.b;
    }
    else if (in.g == max)
    {
        offset = hstep;
        diff = in.b - in.r;
    }
    else
    {
        offset = 2 * hstep;
        diff = in.r - in.g;
    }
    int h = offset + (diff * (hstep + 1)) / chroma2;
    // Rotate such that red has hue 0
    if (h >= 255)
        h -= 255;
    assert(h >= 0 && h < 256);
    out.h = h;
    return out;
}
Unfortunately I do not have code to do this in one step.
You can also use the built-in OpenCV functions for colour conversion.
cvtColor(img, img, CV_BGR2HSV);
Also, the U and V components are calculated as linear combinations of RGB values. That means different intensities of red (R,0,0) are mapped to some (y*R + a, u*R + b, v*R + c), which in turn means that to detect "red" in YUV you can check whether the pixel's distance to the line determined by y, u, v, a, b, c (some of which are redundant) is close to zero. That is achievable with a single dot product; see the sketch below. Then set the remaining pixels to (0,128,128) in YUV space (I think that's R=0, G=0, B=0 in almost all varieties of YCrCb, YUV and such).
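A minimal sketch of that dot-product test (BT.601 weights assumed; the threshold for "close to zero" is up to you):
#include <cmath>

// Perpendicular distance of a YCbCr pixel to the line traced by pure reds
// (R,0,0): the line starts at black (0,128,128) and its direction is the
// YCbCr image of pure red under BT.601 weights.
float distanceToRedLine(float y, float cb, float cr)
{
    const float dy = 0.299f, dcb = -0.169f, dcr = 0.5f;  // line direction
    const float len = std::sqrt(dy * dy + dcb * dcb + dcr * dcr);
    const float uy = dy / len, ucb = dcb / len, ucr = dcr / len;
    float vy = y, vcb = cb - 128.0f, vcr = cr - 128.0f;  // pixel minus line origin
    float t = vy * uy + vcb * ucb + vcr * ucr;           // the single dot product
    float ry = vy - t * uy, rcb = vcb - t * ucb, rcr = vcr - t * ucr;
    return std::sqrt(ry * ry + rcb * rcb + rcr * rcr);
}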
There are several YUV formats, but the common ones keep Y at the same resolution as the original image, while U and V are half size and are stored as separate or interleaved planes/channels after the single-channel Y image buffer.
This allows you to efficiently access Y as a 1-channel 8-bit greyscale image.
Pixel access and manipulation code does not depend on the color format, so the same code applies to the Y, U and V components. If you need access in RGB mode, it is probably best to call cv::cvtColor on your region of interest first.

Accessing a certain pixel's RGB value in OpenCV

I have searched the internet and Stack Overflow thoroughly, but I haven't found an answer to my question:
How can I get/set (both) the RGB value of a certain pixel (given by x,y coordinates) in OpenCV? What's important: I'm writing in C++, and the image is stored in a cv::Mat variable. I know there is an IplImage() operator, but IplImage is not very comfortable to use; as far as I know it comes from the C API.
Yes, I'm aware that there was already a "Pixel access in OpenCV 2.2" thread, but it was only about black and white bitmaps.
EDIT:
Thank you very much for all your answers. I see there are many ways to get/set the RGB value of a pixel. I got one more idea from a close friend; thanks, Benny! It's very simple and effective. I think it's a matter of taste which one you choose.
Mat image;
(...)
Point3_<uchar>* p = image.ptr<Point3_<uchar> >(y,x);
And then you can read/write RGB values with:
p->x //B
p->y //G
p->z //R
Try the following:
cv::Mat image = ...do some stuff...;
image.at<cv::Vec3b>(y,x); gives you the RGB vector, of type cv::Vec3b (note that OpenCV orders the channels as BGR by default):
image.at<cv::Vec3b>(y,x)[0] = newval[0];
image.at<cv::Vec3b>(y,x)[1] = newval[1];
image.at<cv::Vec3b>(y,x)[2] = newval[2];
The low-level way would be to access the matrix data directly. In an RGB image (which I believe OpenCV typically stores as BGR), and assuming your cv::Mat variable is called frame, you could get the blue value at location (x, y) (from the top left) this way:
frame.data[frame.channels()*(frame.cols*y + x)];
Likewise, to get B, G, and R:
uchar b = frame.data[frame.channels()*(frame.cols*y + x) + 0];
uchar g = frame.data[frame.channels()*(frame.cols*y + x) + 1];
uchar r = frame.data[frame.channels()*(frame.cols*y + x) + 2];
Note that this code assumes the stride is equal to the width of the image.
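If the stride may differ from the width (e.g. the Mat is a region of interest or has padded rows), a stride-aware variant uses cv::Mat::step instead:
uchar b = frame.data[frame.step * y + frame.channels() * x + 0];
uchar g = frame.data[frame.step * y + frame.channels() * x + 1];
uchar r = frame.data[frame.step * y + frame.channels() * x + 2];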
A piece of code is easier for people who have this problem, so I am sharing my code; you can use it directly. Please note that OpenCV stores pixels as BGR.
cv::Mat vImage_;
if (src_)
{
    cv::Vec3f vec_;
    for (int i = 0; i < vHeight_; i++)
        for (int j = 0; j < vWidth_; j++)
        {
            // Please note that OpenCV stores pixels as BGR.
            vec_ = cv::Vec3f((*src_)[0] / 255.0, (*src_)[1] / 255.0, (*src_)[2] / 255.0);
            vImage_.at<cv::Vec3f>(vHeight_ - 1 - i, j) = vec_;
            ++src_;
        }
}
if (!vImage_.data) // Check for invalid input
    printf("failed to read image by OpenCV.");
else
{
    cv::namedWindow(windowName_, CV_WINDOW_AUTOSIZE);
    cv::imshow(windowName_, vImage_); // Show the image.
}
The current version allows the cv::Mat::at function to handle 3 dimensions. So for a Mat object m, m.at<uchar>(0,0,0) should work.
uchar* value = img2.data; // pointer to the first byte of pixel data
int r = 2;
// Cycles r through 2, 0, 1, ... so that channel 0 of every pixel gets 255 and
// the other two channels get 0 (pure blue in OpenCV's default BGR layout).
for (size_t i = 0; i < img2.cols * (img2.rows * img2.channels()); i++)
{
    if (r > 2) r = 0;
    if (r == 0) value[i] = 0;
    if (r == 1) value[i] = 0;
    if (r == 2) value[i] = 255;
    r++;
}
const double pi = boost::math::constants::pi<double>();

// Marks every pixel inside the rotated ellipse by setting its green channel
// to 255 (the implicit ellipse equation evaluates to <= 1 inside).
cv::Mat distance2ellipse(cv::Mat image, cv::RotatedRect ellipse)
{
    float distance = 2.0f;
    float angle = ellipse.angle;
    cv::Point ellipse_center = ellipse.center;
    float major_axis = ellipse.size.width / 2;
    float minor_axis = ellipse.size.height / 2;
    for (int x = 0; x < image.cols; x++)
    {
        for (int y = 0; y < image.rows; y++)
        {
            // Rotate the pixel into the ellipse's own coordinate frame.
            auto u = cos(angle * pi / 180) * (x - ellipse_center.x) + sin(angle * pi / 180) * (y - ellipse_center.y);
            auto v = -sin(angle * pi / 180) * (x - ellipse_center.x) + cos(angle * pi / 180) * (y - ellipse_center.y);
            distance = (u / major_axis) * (u / major_axis) + (v / minor_axis) * (v / minor_axis);
            if (distance <= 1)
            {
                image.at<cv::Vec3b>(y, x)[1] = 255;
            }
        }
    }
    return image;
}