So I'm making this project where I compute the reflection of an image in OpenCV (without using the flip function), and the only problem left to finish it, I think, is that the image that is supposed to come out reflected is coming out all blue.
Here is the code I have (I took out the usual parts; the problem should be around here):
Mat imageReflectionFinal = Mat::zeros(Size(220,220),CV_8UC3);
for(unsigned int r=0; r<221; r++)
    for(unsigned int c=0; c<221; c++) {
        Vec3b intensity = image.at<Vec3b>(r,c);
        imageReflectionFinal.at<Vec3b>(r,c) = (uchar)(c, -r + (220)/2);
    }
///displays images
imshow( "Original Image", image );
imshow("Reflected Image", imageReflectionFinal);
waitKey(0);
return 0;
}
There are some problems with your code. As pointed out, your iteration variables go beyond the actual image dimensions. Do not use hardcoded bounds; you can use image.cols and image.rows instead to obtain the image dimensions.
There's also a variable (a BGR Vec3b) that is set but never used: Vec3b intensity = image.at<Vec3b>(r,c);
Most importantly, it is not clear what you are trying to achieve. The expression (uchar)(c, -r + (220)/2) does not convey much: the comma operator discards c, and the single uchar that results is assigned to a Vec3b pixel, which fills only the first (blue) channel and leaves the rest at zero. That is why your output comes out all blue. Also, which direction are you flipping the original image around, the X or the Y axis?
Here’s a possible solution to flip your image in the X direction:
//get input image:
cv::Mat testMat = cv::imread( "lena.png" );
//Get the input image size:
int matCols = testMat.cols;
int matRows = testMat.rows;
//prepare the output image:
cv::Mat imageReflectionFinal = cv::Mat::zeros( testMat.size(), testMat.type() );
//the image will be flipped around the x axis, so the "target"
//row will start at the last row of the input image:
int targetRow = matRows - 1;
//loop thru the original image, getting the current pixel value:
for( int r = 0; r < matRows; r++ ){
    for( int c = 0; c < matCols; c++ ) {
        //get the source pixel:
        cv::Vec3b sourcePixel = testMat.at<cv::Vec3b>( r, c );
        //source and target columns are the same:
        int targetCol = c;
        //set the target pixel
        imageReflectionFinal.at<cv::Vec3b>( targetRow, targetCol ) = sourcePixel;
    }
    //for every iterated source row, decrease the number of
    //target rows, as we are flipping the pixels in the x dimension:
    targetRow--;
}
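If you instead want to flip around the Y axis (a horizontal mirror), the idea is symmetric: keep the source row and mirror the column, i.e., int targetCol = matCols - 1 - c; while the target row stays equal to r.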
I am trying to cluster a grayscale image using Kmeans.
First, I have a question:
Is Kmeans the best way to cluster a Mat, or are there newer, more efficient approaches?
Second, when I try this:
Mat degrees = imread("an image" , IMREAD_GRAYSCALE);
const unsigned int singleLineSize = degrees.rows * degrees.cols;
Mat data = degrees.reshape(1, singleLineSize);
data.convertTo(data, CV_32F);
std::vector<int> labels;
cv::Mat1f colors;
cv::kmeans(data, 3, labels, cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.), 2, cv::KMEANS_PP_CENTERS, colors);
for (unsigned int i = 0; i < singleLineSize; i++) {
    data.at<float>(i) = colors(labels[i]);
}
Mat outputImage = data.reshape(1, degrees.rows);
outputImage.convertTo(outputImage, CV_8U);
imshow("outputImage", outputImage);
The result (outputImage) is empty.
When I try to multiply the colors in the for loop, like this: data.at<float>(i) = 255 * colors(labels[i]);
I get this error:
Unhandled exception : Integer division by zero.
How can I cluster a grayscale image properly?
It looks to me like you are parsing the labels and centers info into your output matrix incorrectly.
K-means returns this info:
Labels - an int matrix with all the cluster labels. It is a "column" matrix of size TotalImagePixels x 1.
Centers - this is what you refer to as "Colors". It is a float matrix that contains the cluster centers, of size NumberOfClusters x NumberOfFeatures.
In this case, if you use BGR pixels as "features", Centers has 3 columns: one mean for the B channel, one mean for the G channel and, finally, one mean for the R channel.
So, basically, you loop through the (plain) labels matrix, retrieve each label, and use it as a row index into the Centers matrix to retrieve the 3 color values.
One way to do this is as follows, using the auto type specifier and looping through the input image instead (that way we can index each input label more easily):
//prepare an empty output matrix
cv::Mat outputImage( inputImage.size(), inputImage.type() );
//loop thru the input image rows...
for( int row = 0; row != inputImage.rows; ++row ){
    //obtain a pointer to the beginning of the row
    //alt: uchar* outputImageBegin = outputImage.ptr<uchar>(row);
    auto outputImageBegin = outputImage.ptr<uchar>(row);
    //obtain a pointer to the end of the row
    auto outputImageEnd = outputImageBegin + outputImage.cols * 3;
    //obtain a pointer to the label:
    auto labels_ptr = labels.ptr<int>(row * inputImage.cols);
    //while the end of the row hasn't been reached...
    while( outputImageBegin != outputImageEnd ){
        //current label index:
        int const cluster_idx = *labels_ptr;
        //get the center of that index:
        auto centers_ptr = centers.ptr<float>(cluster_idx);
        //we got an implicit Vec3b vector; we must map the BGR items to the
        //output mat (the centers are float, so cast them back to uchar):
        outputImageBegin[0] = (uchar)centers_ptr[0];
        outputImageBegin[1] = (uchar)centers_ptr[1];
        outputImageBegin[2] = (uchar)centers_ptr[2];
        //advance the "iterators" of our matrices:
        outputImageBegin += 3; ++labels_ptr;
    }
}
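For reference, here is a minimal sketch of the cv::kmeans call that the loop above assumes; inputImage, labels and centers are the names used in the loop, and the cluster count of 3 is just an example:
//flatten the BGR image into a TotalPixels x 3 float matrix (one feature per channel):
cv::Mat samples = inputImage.reshape( 1, inputImage.rows * inputImage.cols );
samples.convertTo( samples, CV_32F );
cv::Mat labels, centers;
cv::kmeans( samples, 3, labels,
            cv::TermCriteria( cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 10, 1.0 ),
            2, cv::KMEANS_PP_CENTERS, centers );
//labels is TotalPixels x 1 (CV_32S); centers is 3 x 3 (CV_32F),
//one row per cluster and one column per channel mean (B, G, R).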
I have an image which I have split into its three separate channels (b, g, r). I want to manipulate just the red band and then re-merge it with the blue and green bands to recompose the image. However, I keep getting a SIGABRT in my function. (The RBandSlider refers to a global int used for a trackbar, which defaults to 1.) I'm almost positive the issue is within the ImageEnhancement function.
Do I need to define redBandsAdjusted as something else, or am I not grabbing the pixel value and rewriting it correctly?
Mat ImageEnhancement(Mat band){
    Mat adjustedBand;
    Scalar mean, std;
    meanStdDev(band, mean, std);
    int pixel, temp;
    for(int i = 0; i < band.rows; i++){
        for(int j = 0; j < band.cols; j++){
            //extract pixel
            pixel = band.at<Vec3b>(i,j)[0];
            //pixel greater than mean
            if( pixel > mean[0] ){
                temp = 255;
                adjustedBand.at<Vec3b>(i,j) = temp;
            }
            else{
                temp = 0;
                adjustedBand.at<Vec3b>(i,j) = temp;
            }
        }
    }
    return adjustedBand;
}
Mat Bands[3], merged, redBandsAdjusted(image.cols, image.rows, CV_8UC1), result;
split(image, Bands);
//loop the enhancement adjustment
while(true){
    //adjust red band and merge
    redBandsAdjusted = ImageEnhancement(Bands[2]);
    vector<Mat> channels = {Bands[0], Bands[1], redBandsAdjusted};
    merge(channels, merged);
}
When you do:
split(image, Bands);
You will get 3 CV_8U images (Bands) from a CV_8UC3 image (image). Everything is good up to this point. Then you go to your adjustment and make two mistakes:
Mat adjustedBand; is never initialized. You can do Mat adjustedBand(band.rows, band.cols, CV_8UC1); or initialize it at a later stage.
pixel = band.at<Vec3b>(i,j)[0]; and adjustedBand.at<Vec3b>(i,j) = temp; are for manipulating a 3-channel image, not a 1-channel one. You need to use uchar instead, like: adjustedBand.at<uchar>(i,j) = temp;
Those are the errors I see. Fix them and try using a debugger; that way you will know whether something is initialized correctly and whether it does the correct operation.
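Putting both fixes together, a minimal corrected sketch of the function could look like this (it keeps your mean-threshold logic, just with a properly initialized single-channel output and uchar access):
Mat ImageEnhancement(Mat band){
    //initialize the output: same size as the input, 1 channel
    Mat adjustedBand(band.rows, band.cols, CV_8UC1);
    Scalar mean, std;
    meanStdDev(band, mean, std);
    for(int i = 0; i < band.rows; i++){
        for(int j = 0; j < band.cols; j++){
            //single-channel access uses uchar, not Vec3b:
            uchar pixel = band.at<uchar>(i, j);
            //threshold the pixel around the band mean:
            adjustedBand.at<uchar>(i, j) = (pixel > mean[0]) ? 255 : 0;
        }
    }
    return adjustedBand;
}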
I'm looking for a way to place one image on top of another image at a set location.
I have been able to place images on top of each other using cv::addWeighted, but when I searched for this particular problem, there weren't any posts that I could find relating to C++.
Quick example: a 200x200 red square and a 100x100 blue square, with the blue square placed on the red square at (70,70), measured to the top-left corner pixel of the blue square.
You can also create a Mat that points to a rectangular region of the original image and copy the blue image to that:
Mat bigImage = imread("redSquare.png", -1);
Mat lilImage = imread("blueSquare.png", -1);
Mat insetImage(bigImage, Rect(70, 70, 100, 100));
lilImage.copyTo(insetImage);
imshow("Overlay Image", bigImage);
Building on beaker's answer, and generalizing to any input image size, with some error checking:
cv::Mat bigImage = cv::imread("redSquare.png", -1);
const cv::Mat smallImage = cv::imread("blueSquare.png", -1);
const int x = 70;
const int y = 70;
cv::Mat destRoi;
try {
    destRoi = bigImage(cv::Rect(x, y, smallImage.cols, smallImage.rows));
} catch (...) {
    std::cerr << "Trying to create roi out of image boundaries" << std::endl;
    return -1;
}
smallImage.copyTo(destRoi);
cv::imshow("Overlay Image", bigImage);
Check cv::Mat::operator()
Note: this will probably still fail if the two images have different formats, e.g. if one is color and the other grayscale.
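A minimal sketch of one way to guard against that, assuming the mismatch is a grayscale small image over a BGR big image:
cv::Mat smallBGR;
if (smallImage.channels() == 1 && bigImage.channels() == 3)
    cv::cvtColor(smallImage, smallBGR, cv::COLOR_GRAY2BGR);  //match the channel count
else
    smallBGR = smallImage;
smallBGR.copyTo(destRoi);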
Suggested explicit algorithm:
1 - Read two images. E.g., bottom.ppm, top.ppm,
2 - Read the location for the overlay. E.g., let the desired top-left corner of "top.ppm" on "bottom.ppm" be (x,y), where 0 <= x and x + top.height() <= bottom.height(), and 0 <= y and y + top.width() <= bottom.width(), so that the top image fits,
3 - Finally, nested loop on the top image to modify the bottom image pixel by pixel:
for(int i = 0; i < top.height(); i++) {
    for(int j = 0; j < top.width(); j++) {
        bottom(x+i, y+j) = top(i,j);
    }
}
4 - Return the bottom image.
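In OpenCV terms, a minimal sketch of that nested loop could look like this, assuming bottom and top are both 3-channel BGR cv::Mat images and the overlay fits inside bottom:
for (int i = 0; i < top.rows; i++) {
    for (int j = 0; j < top.cols; j++) {
        //rows first: (x + i) indexes the row, (y + j) the column, as above
        bottom.at<cv::Vec3b>(x + i, y + j) = top.at<cv::Vec3b>(i, j);
    }
}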
I'm new to OpenCV and I'm using it to change the luminosity of an image.
In my image, here: https://docs.google.com/file/d/0B9LaMgEERnMxQUNKbndBODJ5TXM/edit, there's a big area reflecting the ambient light in just one part of it. At first, I was able to change the luminosity of the whole image. Now I'm trying to reduce that bright area, which means working on a specific place of the image using the V channel of HSV. Here is the code for that:
Mat newImg;
cvtColor(img, newImg, CV_BGR2HSV);
imwrite("C:/Users/amanda.brito/Desktop/test.jpg", newImg);
vector<Mat> hsv_planes;
split(newImg, hsv_planes); //geting the color plans of image
int param = -70; // the value that I'm seting for V
for (int y = 0; y < newImg.rows; y++) {
    for (int x = 0; x < newImg.cols; x++) {
        Vec3b pixel = hsv_planes[2].at<Vec3b>(y, x);
        pixel[0] = 0;
        pixel[1] = 0;
        pixel[2] = param;
        hsv_planes[2].at<Vec3b>(y, x) = pixel;
    }
}
merge(hsv_planes, newImg);
Mat imagem;
cvtColor(newImg, imagem, CV_HSV2BGR);
imwrite("C:/Users/amanda.brito/Desktop/final.jpg", imagem);
Well, with this, either nothing happens or the program crashes.
I've already looked everywhere, but without luck. What am I doing wrong?
Thanks in advance for your help.
I am copying a patch of pixels from one image to another, and as a result I am not getting a 1:1 mapping; the new image intensities differ by 1 or 2 intensity levels from the source image.
Do you know what could be causing this?
This is the code:
void templateCut( IplImage* ptr2Img, IplImage* tempCut, CvBox2D* boundingBox )
{
    /* Upper left corner of target's BB */
    int col1 = (int)boundingBox->center.x;
    int row1 = (int)boundingBox->center.y;
    for(int i = 0; i < tempCut->height; i++)
    {
        /* Pointer to a row */
        uchar * ptrImgBB = (uchar*)( ptr2Img->imageData + (row1+i)*ptr2Img->widthStep + col1 );
        uchar * ptrTemp = (uchar*)( tempCut->imageData + i*tempCut->widthStep );
        for(int i2 = 0; i2 < tempCut->width; i2++)
        {
            *ptrTemp++ = (*ptrImgBB++);
        }
    }
}
Is it a single-channel image or a multi-channel image (such as RGB)? If it is a multi-channel image, you have to account for the channels in your loop: the column offset into the source row should be col1 * nChannels, and each row copy should cover width * nChannels bytes.
By the way, OpenCV supports regions of interest (ROI), which make copying a sub-region of an image convenient. Below is a link with information on ROI usage in OpenCV, followed by a short sketch.
http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
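For example, a minimal sketch with the C API used in your code (the variable names match templateCut; this assumes tempCut already has the same depth and channel count as the source):
//select the sub-region, copy it out, then clear the ROI:
CvRect roi = cvRect( col1, row1, tempCut->width, tempCut->height );
cvSetImageROI( ptr2Img, roi );
cvCopy( ptr2Img, tempCut );  //cvCopy honors the ROI set on the source
cvResetImageROI( ptr2Img );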