Image Rotation gives grayscale image - c++

I have a problem with my rotate image function in C++, using OpenCV and Qt.
It kind of does its job, but not as expected: apart from coming out in grayscale, part of the image seems to be duplicated at the top right.
Before
After
void ImgProcessing::rotate(cv::Mat &img, cv::Mat &tmp, int angle){
    float rads = angle*3.1415926/180.0;
    float cs = cos(-rads);
    float ss = sin(-rads);
    float xcenter = (float)(img.cols)/2.0;
    float ycenter = (float)(img.rows)/2.0;
    for(int i = 0; i < img.rows; i++)
        for(int j = 0; j < img.cols; j++){
            int rorig = ycenter + ((float)(i)-ycenter)*cs - ((float)(j)-xcenter)*ss;
            int corig = xcenter + ((float)(i)-ycenter)*ss + ((float)(j)-xcenter)*cs;
            if (rorig >= 0 && rorig < img.rows && corig >= 0 && corig < img.cols) {
                tmp.at<int>(i, j) = img.at<int>(rorig, corig);
            } else
                tmp.at<int>(i, j) = 0;
        }
}
Could the problem be in how I access the image pixels?

It depends on how you read in the image, but I think you are accessing it incorrectly. For a 3-channel 8-bit image it should be something like this (note that cv::Mat::at takes (row, column)):
Vec3b intensity = image.at<Vec3b>(i, j);
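For reference, here is a minimal sketch of the rotate loop with that change applied, assuming the input is a 3-channel 8-bit image (CV_8UC3) and tmp is pre-allocated with the same size and type. Reading with at<int> walks the pixel buffer four bytes at a time, which is what produces the grayscale-looking, partly duplicated output:

#include <opencv2/opencv.hpp>
#include <cmath>

void rotate(cv::Mat &img, cv::Mat &tmp, int angle)
{
    float rads = angle * CV_PI / 180.0f;
    float cs = cos(-rads);
    float ss = sin(-rads);
    float xcenter = img.cols / 2.0f;
    float ycenter = img.rows / 2.0f;
    for (int i = 0; i < img.rows; i++) {
        for (int j = 0; j < img.cols; j++) {
            // inverse mapping: where does output pixel (i, j) come from?
            int rorig = ycenter + (i - ycenter) * cs - (j - xcenter) * ss;
            int corig = xcenter + (i - ycenter) * ss + (j - xcenter) * cs;
            if (rorig >= 0 && rorig < img.rows && corig >= 0 && corig < img.cols)
                tmp.at<cv::Vec3b>(i, j) = img.at<cv::Vec3b>(rorig, corig);
            else
                tmp.at<cv::Vec3b>(i, j) = cv::Vec3b(0, 0, 0);  // outside: black
        }
    }
}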


Warp Image by Diagonal Sine Wave

I'm trying to warp a colour image using the sine function in OpenCV, and I was successful in doing so. However, how can I make a 'diagonal' warp using a sine wave?
My code is this:
Mat result = src.clone();
for (int i = 0; i < src.rows; i++) { // to y
    for (int j = 0; j < src.cols; j++) { // to x
        for (int ch = 0; ch < 3; ch++) { // each colour
            int offset_x = 0;
            int offset_y = (int)(25.0 * sin(3.14 * j / 150));
            if (i + offset_y < src.rows) {
                result.at<Vec3b>(i, j)[ch] = src.at<Vec3b>((i + offset_y) % src.rows, j)[ch];
            }
            else
                result.at<Vec3b>(i, j)[ch] = 0.0;
        }
    }
}
imshow("result", result);
How can I do this? Not drawing a sine graph, but warping an image.
Solved this! A while ago, I received a message from someone telling me the image was stolen. It was actually from Google, but I've deleted it to avoid causing any issues. Thanks!
I think it should look like this:
void deform()
{
    float alpha = 45 * CV_PI / 180.0; // wave direction
    float ox = cos(alpha);
    float oy = sin(alpha);
    cv::Mat src = cv::imread("F:/ImagesForTest/lena.jpg");
    // draw a reference grid so the deformation is visible
    for (int i = 0; i < src.cols; i += 8)
    {
        cv::line(src, cv::Point(i, 0), cv::Point(i, src.rows), cv::Scalar(255, 255, 255));
    }
    for (int j = 0; j < src.rows; j += 8)
    {
        cv::line(src, cv::Point(0, j), cv::Point(src.cols, j), cv::Scalar(255, 255, 255));
    }
    cv::Mat result = src.clone();
    for (int i = 0; i < src.rows; i++)
    { // to y
        for (int j = 0; j < src.cols; j++)
        { // to x
            float t = (i * oy) + (j * ox); // wave parameter
            for (int ch = 0; ch < 3; ch++)
            { // each colour
                int offset_x = ox * (int)(25.0 * sin(3.14 * t / 150));
                int offset_y = oy * (int)(25.0 * sin(3.14 * t / 150));
                if (i + offset_y < src.rows && j + offset_x < src.cols && i + offset_y >= 0 && j + offset_x >= 0)
                {
                    result.at<cv::Vec3b>(i, j)[ch] = src.at<cv::Vec3b>(i + offset_y, j + offset_x)[ch];
                }
                else
                    result.at<cv::Vec3b>(i, j)[ch] = 0;
            }
        }
    }
    cv::imshow("result", result);
    cv::imwrite("result.jpg", result);
    cv::waitKey();
}
The result:
BTW, it may be better to use cv::remap?
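As a sketch of that idea (assuming a 3-channel 8-bit src), the two offset maps tell cv::remap, for every destination pixel, where to sample the source, and remap then does the interpolation and border handling in one call:

#include <opencv2/opencv.hpp>

cv::Mat deformWithRemap(const cv::Mat &src)
{
    float alpha = 45 * CV_PI / 180.0;  // wave direction, as above
    float ox = cos(alpha);
    float oy = sin(alpha);
    cv::Mat map_x(src.size(), CV_32FC1);
    cv::Mat map_y(src.size(), CV_32FC1);
    for (int i = 0; i < src.rows; i++) {
        for (int j = 0; j < src.cols; j++) {
            float t = i * oy + j * ox;                    // wave parameter
            float offset = 25.0f * sin(CV_PI * t / 150.0f);
            map_x.at<float>(i, j) = j + ox * offset;      // source x for (i, j)
            map_y.at<float>(i, j) = i + oy * offset;      // source y for (i, j)
        }
    }
    cv::Mat result;
    cv::remap(src, result, map_x, map_y, cv::INTER_LINEAR,
              cv::BORDER_CONSTANT, cv::Scalar(0, 0, 0));  // black outside
    return result;
}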

Laplacian Sharpening result is kinda greyish C++

I am trying to implement a Laplacian filter for sharpening an image, but the result is kinda grey, and I don't know what went wrong in my code.
Here's my work so far
img = imread("moon.png", 0);

Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    //variable declaration
    //change -5 to -4 for original result.
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int **temp = new int*[height];
    for (int i = 0; i < height; i++) {
        temp[i] = new int[width];
    }
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    temp[i][j] += filter[h - i][w - j] * (int)img.at<uchar>(h, w);
                }
            }
        }
    }
    //find max and min
    int max = 0;
    int min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp 0 - 255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
        }
    }
    //empty the temp array
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    //img - res and store it in temp array
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            //int a = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
            //cout << a << endl;
            temp[y][x] = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
        }
    }
    //find the new max and min
    max = 0;
    min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp it back to 0-255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
            temp[i][j] = (int)res.at<uchar>(i, j);
        }
    }
    return res;
}
And here's the result.
As you can see in my code above, I already normalize the pixel values to 0-255, and I still don't know what went wrong here. Can anyone explain why that is?
The greyness is because, as Max suggested in his answer, you are scaling to the 0-255 range, not clamping (as the comments in your code suggest).
However, that is not the only issue in your code. The output of the Laplace operator contains negative values. You nicely store these in an int. But then you scale and copy them over to a char. Don't do that!
You need to add the result of the Laplace unchanged to your image. This way, some pixels in your image will become darker, and some lighter. This is what causes the edges to appear sharper.
Simply skip some of the loops in your code, and keep one that does temp = img - temp. That result you can freely scale or clamp to the output range and cast to char.
To clamp, simply set any pixel values below 0 to 0, and any above 255 to 255. Don't compute min/max and scale as you do, because that reduces contrast and creates the greyish wash over your image.
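As a minimal sketch (reusing img, temp, res, height and width from the question's code), the min/max search, scaling and subtraction loops can collapse into one subtract-and-clamp pass:

// img - Laplace, clamped to the uchar range instead of rescaled
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        int v = (int)img.at<uchar>(i, j) - temp[i][j];      // temp holds the Laplace
        res.at<uchar>(i, j) = cv::saturate_cast<uchar>(v);  // clamps to 0..255
    }
}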
Your recent question is quite similar (though the problem in the code was different); read my answer there again, as it suggests a way to further simplify your code so that img - Laplace becomes a single convolution.
The problem is that you are clamping and rescaling the image. Look at the bottom left border of the moon: There are very bright pixels next to very dark pixels, and then some gray pixels right besides the bright ones. Your sharpening filter will really spike on that bright border and increase the maximum. Similarly, the black pixels will be reduced even further.
You then determine minimum and maximum and rescale the entire image. This necessarily means the entire image will lose contrast when displayed in the previous gray scale, because your filter outputted pixel values above 255 and below 0.
Look closely at the border of the moon in the output image:
There is a black halo (the new 0) and a bright, sharp edge (the new 255). (The browser image scaling made it less crisp in this screenshot, look at your original output). Everything else was squashed by the rescaling, so what was previous black (0) is now dark gray.

grayscale Laplace sharpening implementation

I am trying to implement Laplace sharpening using C++; here's my code so far:
img = imread("cow.png", 0);

Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    //variable declaration
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
                }
            }
        }
    }
    //img - laplace
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = img.at<uchar>(y, x) - res.at<uchar>(y, x);
        }
    }
    return res;
}
I don't really know what went wrong. I also tried a different filter, (1,1,1),(1,-8,1),(1,1,1), and the result is much the same. I don't think I need to normalize the result, because it is in the range 0-255. Can anyone explain what really went wrong in my code?
Problem: uchar is too small to hold partial results of the filtering operation.
You should create a temporary variable and add all the filtered positions to this variable, then check whether the value of temp is in the range <0,255>; if not, you need to clamp the end result to fit <0,255>.
By executing the line below,
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
the partial result may be greater than 255 (the max value of uchar) or negative (your filter contains -4 or -8). temp has to be a signed integer type to handle the case where the partial result is negative.
Fix:
for (i = 0; i < newImageHeight; i++) {
    for (j = 0; j < newImageWidth; j++) {
        int temp = res.at<uchar>(i,j); // added
        for (h = i; h < i + filterHeight; h++) {
            for (w = j; w < j + filterWidth; w++) {
                temp += filter[h - i][w - j] * img.at<uchar>(h,w); // add to temp
            }
        }
        // clamp temp to <0,255> before writing it back
        if (temp < 0) temp = 0;
        if (temp > 255) temp = 255;
        res.at<uchar>(i,j) = temp;
    }
}
You should also clamp values to the <0,255> range when you do the subtraction of the images.
The problem is partially that you’re overflowing your uchar, as rafix07 suggested, but that is not the full problem.
The Laplace of an image contains negative values. It has to. And you can't clamp those to 0, you need to preserve the negative values. Also, it can reach values up to 4*255 given your version of the filter. What this means is that you need to use a signed 16-bit type to store this output.
But there is a simpler and more efficient approach!
You are computing img - laplace(img). In terms of convolutions (*), this is 1 * img - laplace_kernel * img = (1 - laplace_kernel) * img. That is to say, you can combine both operations into a single convolution. The 1 kernel that doesn’t change the image is [(0,0,0),(0,1,0),(0,0,0)]. Subtract your Laplace kernel from that and you obtain [(0,-1,0),(-1,5,-1),(0,-1,0)].
So, simply compute the convolution with that kernel, and do it using int as intermediate type, which you then clamp to the uchar output range as shown by rafix07.
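A minimal sketch of that combined approach, assuming a grayscale CV_8U img (the function name and border handling are mine):

#include <opencv2/opencv.hpp>

cv::Mat sharpen(const cv::Mat &img)
{
    // the (1 - laplace) kernel derived above
    static const int kernel[3][3] = { {0,-1,0}, {-1,5,-1}, {0,-1,0} };
    cv::Mat res = cv::Mat::zeros(img.size(), CV_8U);  // border pixels stay 0
    for (int i = 1; i < img.rows - 1; i++) {
        for (int j = 1; j < img.cols - 1; j++) {
            int temp = 0;  // signed accumulator: partial sums can leave 0..255
            for (int h = -1; h <= 1; h++)
                for (int w = -1; w <= 1; w++)
                    temp += kernel[h + 1][w + 1] * img.at<uchar>(i + h, j + w);
            res.at<uchar>(i, j) = cv::saturate_cast<uchar>(temp);  // clamp
        }
    }
    return res;
}

OpenCV's cv::filter2D with this kernel and ddepth = -1 performs the same convolution with saturation in a single call.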

normalize histogram in C++ - function normalize in OpenCV

I need to normalize the histogram of an image f, which means applying a transformation of the image's histogram so that the range of values of f is stretched over all available values:
norm(fmin) = Vmin (the minimal value we want to reach) and norm(fmax) = Vmax (the maximal value we want to reach).
I also have this formula: norm(x) = (x - fmin) * (Vmax - Vmin) / (fmax - fmin) + Vmin.
The goal is to get the same result as the normalize function that OpenCV provides.
Mat normalize(Mat image, float minValue, float maxValue)
{
    Mat res = image.clone();
    assert(minValue <= maxValue);
    float Fmax = 0;
    float Fmin = 0;
    for (int i = 0; i < res.rows; i++)
    {
        for (int j = 0; j < res.cols; j++)
        {
            float x = res.at<float>(i, j);
            if (i < minValue)
            {
                Fmin = i;
            }
            if (i > maxValue)
            {
                Fmax = i;
            }
            res.at<float>(i, j) = (x - Fmin) * ((maxValue - minValue) / (Fmax - Fmin)) + minValue;
        }
    }
    return res;
}
I get this error: !!! Warning, saved image values not between 0 and 1.
I think I didn't understand how to calculate Fmin/Fmax.
So, as I explained in my comment, there are some mistakes; here's the corrected version. You need to run the double loop twice: once to find the min and max, and a second time to apply the formula. There were also errors in the comparisons:
cv::Mat normalize(cv::Mat image, float minValue, float maxValue)
{
    cv::Mat res = image.clone();
    assert(minValue <= maxValue);
    // 1) find min and max values
    float Fmax = 0.0f;
    float Fmin = 1.0f; // set it to 1, not 0
    for (int i = 0; i < res.rows; i++)
    {
        float* pixels = res.ptr<float>(i); // this is quicker
        for (int j = 0; j < res.cols; j++)
        {
            float x = pixels[j];
            if (x < Fmin) // compare x and Fmin, not i and minValue
            {
                Fmin = x;
            }
            if (x > Fmax) // compare x and Fmax, not i and maxValue
            {
                Fmax = x;
            }
        }
    }
    // 1 color image => don't normalize + avoid crash
    if (Fmin >= Fmax)
        return res;
    // 2) normalize using your formula
    for (int i = 0; i < res.rows; i++)
    {
        float* pixels = res.ptr<float>(i);
        for (int j = 0; j < res.cols; j++)
        {
            pixels[j] = (pixels[j] - Fmin) * ((maxValue - minValue) / (Fmax - Fmin)) + minValue;
        }
    }
    return res;
}
If your source image is an 8-bit grayscale image, you can convert it like this:
cv::Mat floatImage;
grayImage.convertTo(floatImage, CV_32F, 1.0 / 255, 0);
floatImage = normalize(floatImage, 0, 1.0f);
floatImage.convertTo(grayImage, CV_8UC1, 255.0, 0);
Also, if you use cv::minMaxLoc, your normalize function can be made shorter =>
cv::Mat normalize(cv::Mat image, float minValue, float maxValue)
{
    cv::Mat res = image.clone();
    assert(minValue <= maxValue);
    // 1) find min and max values
    double Fmax;
    double Fmin;
    cv::minMaxLoc(image, &Fmin, &Fmax);
    if (Fmin >= Fmax)
        return res;
    // 2) normalize using your formula
    for (int i = 0; i < res.rows; i++)
    {
        float* pixels = res.ptr<float>(i);
        for (int j = 0; j < res.cols; j++)
        {
            pixels[j] = (pixels[j] - Fmin) * ((maxValue - minValue) / (Fmax - Fmin)) + minValue;
        }
    }
    return res;
}
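Since the stated goal is to match OpenCV's own normalize function, note that the whole operation is also available as a single call with NORM_MINMAX (a sketch, assuming a CV_32F image as above):

cv::Mat res;
// for NORM_MINMAX, the two scalars are the lower and upper range boundaries
cv::normalize(image, res, minValue, maxValue, cv::NORM_MINMAX);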

How to set pixel value of a cv::Mat1b?

I have copied a grayscale image into a cv::Mat1b, and I want to loop through each pixel and read and change its value. How can I do that?
My code looks like this:
cv::Mat1b newImg;
grayImg.copyTo(newImg);
for (int i = 0; i < grayImg.rows; i++) {
    for (int j = 0; i < grayImg.cols; j++) {
        int pixelValue = static_cast<int>(newImg.at<uchar>(i, j));
        if (pixelValue > thresh)
            newImg.at<int>(i, j) = 0;
        else
            newImg.at<int>(i, j) = 255;
    }
}
But on the assignments (inside the if and else), I get the error Access violation writing location.
How do I read and write specific pixels correctly?
Thanks!
Edit
Thanks to @Miki and @Micka, this is how I solved it:
for (int i = 0; i < newImg.rows; i++) {
    for (int j = 0; j < newImg.cols; j++) {
        // read:
        cv::Scalar intensity1 = newImg.at<uchar>(i, j);
        int intensity = intensity1.val[0];
        // write:
        newImg(i, j) = 255;
    }
}
newImg.at<int>(i,j)
should be
newImg.at<uchar>(i,j)
Because cv::Mat1b is of type uchar.
I suggest:
cv::Mat1b newImg;
newImg = grayImg > thresh;
or
cv::Mat1b newImg;
newImg = grayImg < thresh;
Also, look at the OpenCV tutorials to learn how to go through each and every pixel of an image.
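For completeness, a minimal sketch of the row-pointer loop those tutorials describe, applied to the question's threshold logic (thresh is assumed to be defined as in the question):

for (int i = 0; i < newImg.rows; i++) {
    uchar* row = newImg.ptr<uchar>(i);        // pointer to the start of row i
    for (int j = 0; j < newImg.cols; j++) {
        row[j] = (row[j] > thresh) ? 0 : 255; // read and write in one pass
    }
}

The same result is also a single call: cv::threshold(grayImg, newImg, thresh, 255, cv::THRESH_BINARY_INV);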