OpenCV accessing and setting pixels conditionally - C++

Hello, I have an image of text with bounding boxes around the text. I'm given all the coordinates, and I want to white or black out anything not inside a bounding box (background stuff). So far I have something like this in OpenCV:
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        for (int k = 0; k < coor.size(); k++)
        {
            if (!((j >= coor[k][0].x) && (j <= coor[k][2].x) && (i >= coor[k][0].y) && (i <= coor[k][1].y)))
            {
                image.at<Vec3b>(i, j) = 0;
            }
        }
    }
}
coor is a vector of vectors of points that holds all the corner points, and for now I'm checking whether or not a pixel is inside a box and altering its color. If I remove the NOT from the condition I am able to change the pixel colors of what's inside the boxes, but with the NOT in place, every pixel changes :/. Any idea of what is going on?
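For reference: with the NOT in place, a pixel is blacked out as soon as it falls outside any single box, and with more than one non-overlapping box, every pixel is outside at least one of them, so the whole image changes. A minimal sketch of one way to restructure the loop, assuming coor[k][0] is a box's top-left corner and coor[k][2] its bottom-right, is to decide membership across all boxes before touching the pixel:

for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        // first test the pixel against every box...
        bool inAnyBox = false;
        for (size_t k = 0; k < coor.size(); k++)
        {
            if (j >= coor[k][0].x && j <= coor[k][2].x &&
                i >= coor[k][0].y && i <= coor[k][2].y)
            {
                inAnyBox = true;
                break;
            }
        }
        // ...and only black it out if it lies in none of them
        if (!inAnyBox)
            image.at<Vec3b>(i, j) = Vec3b(0, 0, 0);
    }
}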

Laplacian Sharpening result is kinda greyish C++

I am trying to implement a Laplacian filter for sharpening an image, but the result is kinda grey, and I don't know what went wrong with my code.
Here's my work so far:
img = imread("moon.png", 0);
Mat convoSharp() {
    //creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    //variable declaration
    //change -5 to -4 for original result.
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int **temp = new int*[height];
    for (int i = 0; i < height; i++) {
        temp[i] = new int[width];
    }
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    //convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    temp[i][j] += filter[h - i][w - j] * (int)img.at<uchar>(h, w);
                }
            }
        }
    }
    //find max and min
    int max = 0;
    int min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp 0 - 255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
        }
    }
    //empty the temp array
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            temp[i][j] = 0;
        }
    }
    //img - res and store it in temp array
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            //int a = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
            //cout << a << endl;
            temp[y][x] = (int)img.at<uchar>(y, x) - (int)res.at<uchar>(y, x);
        }
    }
    //find the new max and min
    max = 0;
    min = 100;
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            if (temp[i][j] > max) {
                max = temp[i][j];
            }
            if (temp[i][j] < min) {
                min = temp[i][j];
            }
        }
    }
    //clamp it back to 0-255
    for (int i = 0; i < height; i++) {
        for (int j = 0; j < width; j++) {
            res.at<uchar>(i, j) = 0 + (temp[i][j] - min)*(255 - 0) / (max - min);
            temp[i][j] = (int)res.at<uchar>(i, j);
        }
    }
    return res;
}
And here's the result.
As you can see in my code above, I already normalize the pixel values to 0-255. I still don't know what went wrong here. Can anyone explain why that is?
The greyness is because, as Max suggested in his answer, you are scaling to the 0-255 range, not clamping (as your comments in the code suggest).
However, that is not all of the issues in your code. The output of the Laplace operator contains negative values. You nicely store these in an int. But then you scale and copy over to a char. Don't do that!
You need to add the result of the Laplace unchanged to your image. This way, some pixels in your image will become darker, and some lighter. This is what causes the edges to appear sharper.
Simply skip some of the loops in your code, and keep one that does temp = img - temp. That result you can freely scale or clamp to the output range and cast to char.
To clamp, simply set any pixel values below 0 to 0, and any above 255 to 255. Don't compute min/max and scale as you do, because there you reduce contrast and create the greyish wash over your image.
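For illustration, a minimal sketch of that clamping step, assuming temp already holds the img - Laplace result as described:

//clamp each value to [0, 255] instead of rescaling by min/max
for (int i = 0; i < height; i++) {
    for (int j = 0; j < width; j++) {
        int v = temp[i][j];
        if (v < 0)   v = 0;
        if (v > 255) v = 255;
        res.at<uchar>(i, j) = (uchar)v;
    }
}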
Your recent question is quite similar (though the problem in the code was different); read my answer there again: it suggests a way to further simplify your code so that img - Laplace becomes a single convolution.
The problem is that you are rescaling the image rather than clamping it. Look at the bottom left border of the moon: there are very bright pixels next to very dark pixels, and then some gray pixels right beside the bright ones. Your sharpening filter will really spike on that bright border and increase the maximum. Similarly, the black pixels will be reduced even further.
You then determine the minimum and maximum and rescale the entire image. This necessarily means the entire image will lose contrast when displayed at the previous grey levels, because your filter output pixel values above 255 and below 0.
Look closely at the border of the moon in the output image:
There is a black halo (the new 0) and a bright, sharp edge (the new 255). (The browser's image scaling made it less crisp in this screenshot; look at your original output.) Everything else was squashed by the rescaling, so what was previously black (0) is now dark gray.

Set transparency of a pixel by its value in cv::Mat

I have two cv::Mat objects; one is CV_8UC1 and is loaded from a grayscale QImage:
QImage tmp = QImage(path/to/image);
setMap(cv::Mat(tmp.height(),
               tmp.width(),
               CV_8UC1,
               const_cast<uchar *>(tmp.bits()),
               static_cast<size_t>(tmp.bytesPerLine())
               ));
After I load it, I want to get every pixel value, change the transparency of each pixel according to its value, and convert it back to a QImage. Currently, I access the pixels like this:
for(int i = 0; i < getMap().rows; i++)
{
    for(int j = 0; j < getMap().cols; j++){
        uchar v = getMap().at<uchar>(i,j);
        //qDebug() << v;
    }
}
Now, I think I have only one choice - convert it to CV_8UC4 (or copy it somehow) and change its alpha value, but I don't know how to copy/convert it pixel by pixel. As I said, I need to change its transparency according to its grayscale value.
I tried this, but when I did, the program crashed:
getMap().convertTo(requestedMap_, CV_8UC4);
for(int i = 0; i < getMap().rows; i++)
{
    for(int j = 0; j < getMap().cols; j++){
        uchar v = getMap().at<uchar>(i,j);
        if(v < 50)
            requestedMap_.at<cv::Vec4i>(i,j)[3] = 0;
    }
}
How can I solve it?
Thanks for your help!
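For reference, a minimal sketch of one way this could work. Note that cv::Mat::convertTo changes the element depth, not the number of channels, so the converted matrix above still has a single channel; also, an 8-bit 4-channel pixel is a cv::Vec4b, and reading it as cv::Vec4i reads past the end of the pixel. The sketch below uses cv::cvtColor instead (the name bgra is just for illustration):

cv::Mat gray = getMap();                        // CV_8UC1 source
cv::Mat bgra;
cv::cvtColor(gray, bgra, cv::COLOR_GRAY2BGRA);  // now CV_8UC4
for (int i = 0; i < bgra.rows; i++)
{
    for (int j = 0; j < bgra.cols; j++)
    {
        // let the alpha channel follow the grayscale value
        bgra.at<cv::Vec4b>(i, j)[3] = gray.at<uchar>(i, j);
    }
}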

'Project.exe has triggered a breakpoint.'

I am working with C++ and OpenCV in Visual Studio; for my application, I have a set of images and a .mask one that is used to select a ROI in each of them (it is 0 everywhere but in the ROI).
I load the images (duck1.jpg, duck2.jpg, etc.) with:
std::array<cv::Mat, n_imgs> img;
std::string folder = "C:/Users/imgs/";
std::string fname = "duck";
std::string format = ".jpg";
for (int i = 0; i < n_imgs; i++)
    img[i] = cv::imread(folder + fname + std::to_string(i + 1) + format, 0);
Then, I apply a mask:
cv::Mat mask = cv::imread(folder + fname + ".mask" + format, 0);
for (int i = 0; i < img[0].rows; i++)
    for (int j = 0; j < img[0].cols; j++)
        for (int k = 0; k < n_imgs; k++)
            if (mask.at<float>(i, j) == 0)
                img[k].at<float>(i, j) = 0.f;
I keep getting 'Project.exe has triggered a breakpoint.' quite randomly at some recurring points in my subsequent code (which I won't post here because it is quite long); however, these problems disappear when I comment out the masking lines.
Given the symptoms, I suspect it is an allocation problem; am I right? How can I fix it?
This is more of a guess at what is wrong, but I will give you a hint.
Are you sure that img has float as its underlying type? When you do cv::imread(file, IMREAD_GRAYSCALE), the 0 stands for a grayscale image, which is usually CV_8UC1 (8-bit unsigned char). When you address it with float (which is 32 bits wide), you may end up writing memory past the end of the image (24 bits after the last pixel are written). This may sometimes trigger an error and sometimes not; it depends on what is in memory after your image (allocated or not). I would guess that if you run your program in Debug mode it will always fail.
So change:
for (int i = 0; i < img[0].rows; i++)
    for (int j = 0; j < img[0].cols; j++)
        for (int k = 0; k < n_imgs; k++)
            if (mask.at<float>(i, j) == 0)
                img[k].at<float>(i, j) = 0.f;
To:
for (int i = 0; i < img[0].rows; i++)
    for (int j = 0; j < img[0].cols; j++)
        for (int k = 0; k < n_imgs; k++)
            if (mask.at<unsigned char>(i, j) == 0)
                img[k].at<unsigned char>(i, j) = 0;
If you want your code to run faster and you have a binary mask (with values of 0 and 1), you can just multiply them element-wise like this (note that this is for a single image):
cv::Mat afterApplyMask = img.mul(mask);
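Assuming the mask and the images share the same size and type, applying this to every image in the question's array might look like:

for (int k = 0; k < n_imgs; k++)
    img[k] = img[k].mul(mask);  // zeroes everything outside the binary mask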
I was writing C code in Visual Studio when I encountered this error. My project name and the name of the solution in the source file were the same. Once I changed those and debugged, I stopped encountering the error.

Find the contour in a radiograph

I just started learning OpenCV and I want to write a program that can detect the organs in a radiograph. The result I want to have looks like this.
I tried cv2.findContours but it can't detect the correct one; then I used a convex hull and it returned this, which is not the one I want either.
Is there another way to find contours in OpenCV that can help me with this one? I can only find the two ways above.
You can filter down to the valid contours: use this code after calling findContours, and change the boundRect area threshold to find the area that you want.
vector<vector<Point> > contours_poly(contourss.size());
vector<Rect> boundRect(contourss.size());
vector<Point2f> center(contourss.size());
vector<float> radius(contourss.size());
//Get poly contours
for (int i = 0; i < contourss.size(); i++)
{
    approxPolyDP(Mat(contourss[i]), contours_poly[i], 3, true);
}
//Get only important contours, merge contours that are within another
vector<vector<Point> > validContours;
for (int i = 0; i < contours_poly.size(); i++){
    Rect r = boundingRect(Mat(contours_poly[i]));
    if (r.area() < 200) continue;
    bool inside = false;
    for (int j = 0; j < contours_poly.size(); j++){
        if (j == i) continue;
        Rect r2 = boundingRect(Mat(contours_poly[j]));
        if (r2.area() < 200 || r2.area() < r.area()) continue;
        if (r.x > r2.x && r.x + r.width < r2.x + r2.width &&
            r.y > r2.y && r.y + r.height < r2.y + r2.height){
            inside = true;
        }
    }
    if (inside) continue;
    validContours.push_back(contours_poly[i]);
}
//Get bounding rects
for (int i = 0; i < validContours.size(); i++){
    boundRect[i] = boundingRect(Mat(validContours[i]));
}
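For illustration, the surviving boxes could then be drawn on a display image; output below is a hypothetical cv::Mat, not part of the original code:

//draw each valid bounding rect in green
for (int i = 0; i < validContours.size(); i++)
    rectangle(output, boundRect[i], Scalar(0, 255, 0), 2);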

Issues with rotating Rubik's cube faces using OpenGL in C++

I have to design a functional Rubik's cube as part of my homework. I am not using OpenGL directly, but a framework that was provided. (All functions that do not belong to OpenGL and whose bodies are not listed here can be presumed correct.)
Functionality: every face needs to be rotated when selected by pressing a key.
The whole cube must rotate.
The rotation of the whole cube is correct and is not the subject of this question.
To do this, I created the Rubik's cube from 27 smaller cubes (the cube size is 3) and, at the same time, a three-dimensional array: a replica of the cube that holds the small cubes' indexes.
In order to better understand this :
if initially one face was:
0 1 2
3 4 5
6 7 8
after a rotation it should be:
6 3 0
7 4 1
8 5 2
I can rotate the cubes relative to axis X or Y an indefinite number of times and it works perfectly.
However, if I combine the rotations (alternating X rotations with Y rotations in a random way), there are cases where the cube deforms.
As this happens inconsistently, it is difficult for me to find the cause.
This is how I am creating the cube :
int count = 0;
for (int i = -1; i < 2; i++)
    for(int j = -1; j < 2; j++)
        for(int k = -1; k < 2; k++) {
            RubikCube.push_back(drawCube());
            RubikCube.at(count)->translate(4*i, 4*j, 4*k);
            CubIndici[j+1][k+1][i+1] = count;
            count++;
        }
The function drawCube() draws a cube of size 4 centered at the origin.
CubIndici is the 3D array that I use to store the positions of the cubes.
This is the function that I am using to rotate a matrix in the 3D array (I have double-checked it, so it should be correct, but perhaps I am missing something):
void rotateMatrix(int face_index, int axis) {
    if (axis == 0)
    {
        for (int i = 0; i < 3; i++)
            for (int j = i; j < 3; j++)
            {
                swap(&CubIndici[i][j][face_index], &CubIndici[j][i][face_index]);
            }
        for (int i = 0; i < 3; i++)
            for (int j = i; j < 3; j++)
            {
                swap(&CubIndici[i][j][face_index], &CubIndici[2-i][j][face_index]);
            }
    }
    if (axis == 1)
    {
        for (int i = 0; i < 3; i++)
            for (int j = i; j < 3; j++)
            {
                swap(&CubIndici[face_index][i][j], &CubIndici[face_index][j][i]);
            }
        for (int i = 0; i < 3; i++)
            for (int j = i; j < 3; j++)
            {
                swap(&CubIndici[face_index][i][j], &CubIndici[face_index][2-i][j]);
            }
    }
}
The CubIndici 3D array is global, so I need the axis parameter to determine what kind of rotation to perform (relative to X, Y or Z).
On pressing the w key I rotate a (hardcoded, for now) face around axis X:
for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
        RubikCube.at(CubIndici[i][j][1])->rotateXRelativeToPoint(
            RubikCube.at(CubIndici[1][1][1])->axiscenter, 1.57079633);
rotateMatrix(1, 0);
CubIndici[1][1][1] should always contain the cube that is in the center of the face CubIndici[*][*][1].
Similarly,
for (int i = 0; i < 3; i++)
    for (int j = 0; j < 3; j++)
        RubikCube.at(CubIndici[2][i][j])->rotateYRelativeToPoint(
            RubikCube.at(CubIndici[2][1][1])->axiscenter, 1.57079633);
rotateMatrix(2, 1);
for rotating on axis Y.
1.57079633 is the radian equivalent of 90 degrees
For a better understanding I add the detailed display of rotating the left face on the X axis and the top-down one on the Y axis.
The first block of coordinates is the initial cube face. (The CubIndici index matrix is unmodified.)
pre rotatie (pre-rotation) - coordinates and indexes for each of the cubes of the face.
post rotatie (post-rotation) - coordinates and indexes after rotating the objects. (The matrix was not touched.)
post rotatie matrice (post-rotation matrix) - after rotating the matrix as well. If you compare the indexes of "pre rotatie" with "post rotatie matrice" you will notice the 90-degree turn.
This is the first rotation (rotating the left face around X) and it is entirely correct.
On the next rotation, however, the cubes contained in the top-down face (as well as in the left one) should be 2, 5, 8; they appear as 2, 5, 6 instead.
If you look at the first "post rotatie matrice", 2, 5, 8 are indeed the top row.
This is the issue that deforms the cube and I don't know what causes it.
If anything is unclear please let me know and I will edit the post or reply to the comment!
The formula of a PI/2 rotation clockwise for a cell at position <x;y> in a slice of the cube is:
x' = 2 - y
y' = x
similarly, for a counterclockwise rotation:
x' = y
y' = 2 - x
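As a quick check against the example earlier in the question (taking x as the column and y as the row): the top-left cell <0;0> maps clockwise to <2;0>, the top-right position, which matches the value 0 moving from the top-left to the top-right corner.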
but these are rotations, and you want to do in-place modification of your arrays.
We can replace a rotation by a combination of 2 mirror symmetries.
clockwise:
<x;y> -> <y;x>
<x;y> -> <2-x;y>
and counterclockwise is the composition of the same functions in the opposite order.
Given these formulas, you can write your rotation function like this:
void rotateMatrix_X_CW(int face_index) {
    int i, j;
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
        {
            swap(CubIndici[i][j][face_index], CubIndici[2-i][j][face_index]);
        }
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
        {
            swap(CubIndici[i][j][face_index], CubIndici[j][i][face_index]);
        }
}

void rotateMatrix_X_CCW(int face_index) {
    int i, j;
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
        {
            swap(CubIndici[i][j][face_index], CubIndici[j][i][face_index]);
        }
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
        {
            swap(CubIndici[i][j][face_index], CubIndici[2-i][j][face_index]);
        }
}
You should be able to implement the other axes from there.
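For completeness, a sketch of one possible Y-axis analogue under the same scheme, assuming the slice layout of the question's axis == 1 branch (CubIndici[face_index][*][*]):

void rotateMatrix_Y_CW(int face_index) {
    int i, j;
    // mirror the rows of the slice, then transpose it, as in the X version
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
            swap(CubIndici[face_index][i][j], CubIndici[face_index][2-i][j]);
    for (i = 0; i < 2; i++)
        for (j = i; j < 3; j++)
            swap(CubIndici[face_index][i][j], CubIndici[face_index][j][i]);
}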