I have an image which I have split into its three separate channels (B, G, R). I want to manipulate just the red band and then remerge it with the blue and green bands to recompose the image. However, I keep getting a SIGABRT in my function. RBandSlider refers to a global int used for a trackbar, which defaults to 1. I'm almost positive the issue is within the ImageEnhancement function.
Do I need to define redBandsAdjusted as something else, or am I not grabbing the pixel value and rewriting it correctly?
Mat ImageEnhancement(Mat band){
    Mat adjustedBand;
    Scalar mean, std;
    meanStdDev(band, mean, std);
    int pixel, temp;
    for(int i = 0; i < band.rows; i++){
        for(int j = 0; j < band.cols; j++){
            //extract pixel
            pixel = band.at<Vec3b>(i,j)[0];
            //pixel greater than mean
            if( pixel > mean[0] ){
                temp = 255;
                adjustedBand.at<Vec3b>(i,j) = temp;
            }
            else{
                temp = 0;
                adjustedBand.at<Vec3b>(i,j) = temp;
            }
        }
    }
    return adjustedBand;
}
Mat Bands[3], merged, redBandsAdjusted(image.cols, image.rows, CV_8UC1), result;
split(image, Bands);
//loop the enhancement adjustment
while(true){
    //adjust red band and merge
    redBandsAdjusted = ImageEnhancement(Bands[2]);
    vector<Mat> channels = {Bands[0], Bands[1], redBandsAdjusted};
    merge(channels, merged);
}
When you do:
split(image, Bands);
you get three single-channel CV_8U images (Bands) from the CV_8UC3 image (image). Everything is good up to this point. Then you go on to your adjustment and make two mistakes:
Mat adjustedBand; is never initialized. You can do Mat adjustedBand(band.rows, band.cols, CV_8UC1); or initialize it at a later stage.
pixel = band.at<Vec3b>(i,j)[0]; and adjustedBand.at<Vec3b>(i,j) = temp; are accessors for a 3-channel image, not a 1-channel one. You need to use uchar instead, like: adjustedBand.at<uchar>(i,j) = temp;
Those are the errors I see... fix them and try using a debugger; that way you can check whether everything is initialized correctly and whether each operation does what you expect.
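Putting both fixes together, a minimal sketch of how the corrected function could look (same mean-threshold logic as your original):

Mat ImageEnhancement(Mat band){
    // Allocate a single-channel output the same size as the input band
    Mat adjustedBand(band.rows, band.cols, CV_8UC1);
    Scalar mean, std;
    meanStdDev(band, mean, std);
    for(int i = 0; i < band.rows; i++){
        for(int j = 0; j < band.cols; j++){
            // single-channel access: uchar, not Vec3b
            uchar pixel = band.at<uchar>(i,j);
            adjustedBand.at<uchar>(i,j) = (pixel > mean[0]) ? 255 : 0;
        }
    }
    return adjustedBand;
}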
So I'm making this project where I create the reflection of an image in OpenCV (without using the flip function), and the only problem (I think) left to finish it is that the image that is supposed to come out reflected comes out all blue.
The code I have (I took out the usual part; the problem should be around here):
Mat imageReflectionFinal = Mat::zeros(Size(220,220), CV_8UC3);
for(unsigned int r=0; r<221; r++)
    for(unsigned int c=0; c<221; c++) {
        Vec3b intensity = image.at<Vec3b>(r,c);
        imageReflectionFinal.at<Vec3b>(r,c) = (uchar)(c, -r + (220)/2);
    }
///displays images
imshow( "Original Image", image );
imshow("Reflected Image", imageReflectionFinal);
waitKey(0);
return 0;
}
There are some problems with your code. As pointed out, your iteration variables go beyond the actual image dimensions. Do not use hardcoded bounds; use inputImage.cols and inputImage.rows instead to obtain the image dimensions.
There's also a variable (a BGR Vec3b) that is set but never used: Vec3b intensity = image.at<Vec3b>(r,c);
Most importantly, it is not clear what you are trying to achieve. The expression (uchar)(c, -r + (220)/2) does not give out much info: the comma operator discards c, so you end up assigning a single scalar to a Vec3b pixel, which effectively populates only the first (blue) channel, hence the all-blue output. Also, which direction are you flipping the original image around, the X or the Y axis?
Here’s a possible solution to flip your image in the X direction:
//get input image:
cv::Mat testMat = cv::imread( "lena.png" );
//get the input image size:
int matCols = testMat.cols;
int matRows = testMat.rows;
//prepare the output image:
cv::Mat imageReflectionFinal = cv::Mat::zeros( testMat.size(), testMat.type() );
//the image will be flipped around the x axis, so the "target"
//row will start at the last row of the input image:
int targetRow = matRows - 1;
//loop thru the original image, getting the current pixel value:
for( int r = 0; r < matRows; r++ ){
    for( int c = 0; c < matCols; c++ ) {
        //get the source pixel:
        cv::Vec3b sourcePixel = testMat.at<cv::Vec3b>( r, c );
        //source and target columns are the same:
        int targetCol = c;
        //set the target pixel
        imageReflectionFinal.at<cv::Vec3b>( targetRow, targetCol ) = sourcePixel;
    }
    //for every iterated source row, decrease the number of
    //target rows, as we are flipping the pixels in the x dimension:
    targetRow--;
}
Result: the input image flipped upside down (result image omitted).
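For reference, this loop produces the same output as cv::flip(testMat, imageReflectionFinal, 0) (flip around the x axis), which is handy for checking your manual version even if the exercise forbids using flip directly.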
I'm quite new to OpenCV and I'm now using version 3.4.1 with the C++ implementation. I'm still exploring, so this question is not specific to a project but is more of a "try to understand how it works". In the same spirit, please consider that I know I'm somewhat "reinventing the wheel" with this code, but I wrote this example to understand how it works.
The idea is:
Read an RGB image
Make it binary
Find Connected areas
Colour each area differently
As an example I'm using a 5x5 pixel RGB image saved as BMP. The image is a white box with black pixels all around its contour.
Up to the point where I get the connected-components matrix, named Labels, it all goes fine. If I print the matrix I see exactly what I expect:
11111
10001
10001
10001
11111
Remember that I've inverted the threshold, so it is correct to get 1 on the edges...
I then create a Mat with the same size as Labels but with 3 channels, to colour it with RGB. This one is named ColoredLabels.
The next step is to instantiate a pointer that runs through Labels and, for each position where the value is 1, fills the corresponding ColoredLabels position with a colour.
HERE THINGS GO VERY WRONG! The pointer does not fetch Labels row by row as I would expect, but follows some other order.
Questions:
Am I doing something wrong, or is it "obvious" that the pointer fetching follows some "unpredictable" order?
How can I set the values of one matrix (ColoredLabels) based on the values of another matrix (Labels)?
#include "opencv2\highgui.hpp"
#include "opencv2\opencv.hpp"
#include <stdio.h>
using namespace cv;
int main(int argc, char *argv[]) {
char* FilePath = "";
Mat Img;
Mat ImgGray;
Mat ImgBinary;
Mat Labels;
uchar *P;
uchar *CP;
// Image acquisition
if (argc < 2) {
printf("Missing argument");
return -1;
}
FilePath = argv[1];
Img = imread(FilePath, CV_LOAD_IMAGE_COLOR);
if (Img.empty()) {
printf("Invalid image");
return -1;
}
// Convert to Gray...I know I could convert it right away while loading....
cvtColor(Img, ImgGray, CV_RGB2GRAY);
// Threshold (inverted) to obtain black background and white blobs-> it works
threshold(ImgGray, ImgBinary, 170, 255, CV_THRESH_BINARY_INV);
// Find Connected Components and put the 1/0 result in Mat::Labels
int BlobsNum = connectedComponents(ImgBinary, Labels, 8, CV_16U);
// Just to see what comes out with a 5x5 image. I get:
// 11111
// 10001
// 10001
// 10001
// 11111
std::cout << Labels << "\n";
// Prepare to fetch the Mat(s) with pointer to be fast
int nRows = Labels.rows;
int nCols = Labels.cols * Labels.channels();
if (Labels.isContinuous()) {
nCols *= nRows;
nRows = 1;
}
// Prepare a Mat as big as LAbels but with 3 channels to color different blobs
Mat ColoredLabels(Img.rows, Img.cols, CV_8UC3, cv::Scalar(127, 127, 127));
int ColoredLabelsNumChannels = ColoredLabels.channels();
// Fetch Mat::Labels and Mat::ColoredLabes with the same for cycle...
for (int i = 0; i < nRows; i++) {
// !!! HERE SOMETHING GOES WRONG !!!!
P = Labels.ptr<uchar>(i);
CP = ColoredLabels.ptr<uchar>(i);
for (int j = 0; j < nCols; j++) {
// The coloring operation does not work
if (P[j] > 0) {
CP[j*ColoredLabelsNumChannels] = 0;
CP[j*ColoredLabelsNumChannels + 1] = 0;
CP[j*ColoredLabelsNumChannels + 2] = 255;
}
}
}
std::cout << "\n" << ColoredLabels << "\n";
namedWindow("ColoredLabels", CV_WINDOW_NORMAL);
imshow("ColoredLabels", ColoredLabels);
waitKey(0);
printf("Execution completed succesfully");
return 0;
}
You used the connectedComponents function with the CV_16U parameter. This means that a single element of the label image consists of 16 bits (hence '16') and you have to interpret them as an unsigned integer (hence 'U'). And since ptr returns a pointer, you have to dereference it to get the value.
Therefore you should access label image elements in the following way:
unsigned short val = Labels.ptr<unsigned short>(i)[j]; // or uint16_t
unsigned short val = Labels.at<unsigned short>(y, x);
Regarding your second question: it is as simple as that, but of course you have to understand which type casts result in loss of precision or overflow, and which do not.
mat0.at<int>(y, x) = mat1.at<int>(y, x);  // both matrices have CV_32S type: no cast needed
mat2.at<int>(y, x) = mat3.at<char>(y, x); // CV_32S and CV_8S: implicit widening cast, no information loss
mat4.at<unsigned char>(y, x) = mat5.at<int>(y, x); // CV_8U and CV_32S: implicit narrowing cast,
                                                   // possible information loss: 32-bit values squeezed into 8 bits
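Put together, a sketch of how your coloring loop could look with the correct types (dropping the continuous-memory collapse for clarity):

for (int i = 0; i < Labels.rows; i++) {
    // Labels is CV_16U, so each element is an unsigned short
    const unsigned short *labelRow = Labels.ptr<unsigned short>(i);
    // ColoredLabels is CV_8UC3, so address whole pixels as Vec3b
    Vec3b *colorRow = ColoredLabels.ptr<Vec3b>(i);
    for (int j = 0; j < Labels.cols; j++) {
        if (labelRow[j] > 0)
            colorRow[j] = Vec3b(0, 0, 255); // BGR: paint labelled pixels red
    }
}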
I am looking for a way to take an image and get masks of all objects in it by color. My goal is to be able to separate similarly colored objects into layers so I can further examine each layer. The plan is to use each mask against the original image to create a histogram of the colors in each object and determine the similarity with other objects in the image. If something is similar enough it will be combined with other objects to form a layer.
The problem is that I cannot find a function in OpenCV that finds all objects in an image based on color contiguity. I am sure such an algorithm exists, but it seems to be evading me. Does anyone know of an algorithm or function like this?
The best method that I have found is K-means clustering, which separates the image into different layers based on color by iteratively assigning each pixel to the nearest of k cluster centers. With this I am able to effectively split the image into several layers of similar color.
#define numClusters 7

cv::Mat src = cv::imread("img0.png");
cv::Mat kMeansSrc(src.rows * src.cols, 3, CV_32F);

//resize the image to src.rows*src.cols x 3
//cv::kmeans expects an image that is in rows with 3 channel columns
//this rearranges the image into (rows * columns, numChannels)
for( int y = 0; y < src.rows; y++ )
{
    for( int x = 0; x < src.cols; x++ )
    {
        for( int z = 0; z < 3; z++)
            kMeansSrc.at<float>(y + x*src.rows, z) = src.at<cv::Vec3b>(y,x)[z];
    }
}

cv::Mat labels;
cv::Mat centers;
int attempts = 2;
//perform kmeans on kMeansSrc where numClusters is defined previously as 7
//end either when desired accuracy is met or the maximum number of iterations is reached
cv::kmeans(kMeansSrc, numClusters, labels, cv::TermCriteria( CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 8, 1), attempts, cv::KMEANS_PP_CENTERS, centers );

//create an array of numClusters colors
int colors[numClusters];
for(int i = 0; i < numClusters; i++) {
    colors[i] = 255/(i+1);
}

std::vector<cv::Mat> layers;
for(int i = 0; i < numClusters; i++)
{
    layers.push_back(cv::Mat::zeros(src.rows, src.cols, CV_32F));
}

//use the labels to draw the layers
//using the array of colors, draw the pixels onto each label image
for( int y = 0; y < src.rows; y++ )
{
    for( int x = 0; x < src.cols; x++ )
    {
        int cluster_idx = labels.at<int>(y + x*src.rows, 0);
        layers[cluster_idx].at<float>(y, x) = (float)(colors[cluster_idx]);
    }
}

std::vector<cv::Mat> srcLayers;
//use each layer to mask a portion of the original image
//this leaves us with sections of similar color from the original image
for(int i = 0; i < numClusters; i++)
{
    layers[i].convertTo(layers[i], CV_8UC1);
    srcLayers.push_back(cv::Mat());
    src.copyTo(srcLayers[i], layers[i]);
}
I suggest you convert the image to HSV space (Hue-Saturation-Value). Then make a histogram of the hue channel to find thresholds on the fly, or define them beforehand (depending on whether this is a general problem or a specific one).
Create a one-channel image for each layer you want to form (initialized to black).
Then use the HSV image and mark a layer based on the threshold values. You might want to add some constant thresholds for value and saturation too (to avoid very dark and very light areas).
Does this make sense to you?
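A minimal sketch of that idea; the file name and the hue/saturation/value bounds below are placeholders you would tune, for example from the histogram:

cv::Mat bgr = cv::imread("img0.png"); // assumed input file name
cv::Mat hsv;
cv::cvtColor(bgr, hsv, cv::COLOR_BGR2HSV);
// One single-channel mask (layer) per hue range; starts out all black.
cv::Mat layerMask;
cv::inRange(hsv,
            cv::Scalar(0, 70, 50),    // lower hue/saturation/value bound (placeholder)
            cv::Scalar(10, 255, 255), // upper bound; hue is 0-179 in OpenCV
            layerMask);               // 255 where the pixel falls in range
// The saturation/value bounds exclude very dark and washed-out pixels.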
I think that you should proceed with the following process (a rough sketch follows the list):
Smooth your image if it has too much detail.
Find edges.
Find all contours.
Try to find the color of each contour. Let's say you want to keep all contours which are red; so keep only those contours which are red.
Once you have found the contours you want to keep, create a mask image based on them.
Using the mask image, extract the required objects from the original image.
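A rough sketch of that pipeline; isRedContour is a hypothetical helper sketched here by checking the mean colour inside the contour, and the thresholds are placeholders:

// Hypothetical helper: decide whether a contour's interior is "red enough"
// by comparing the mean colour inside the contour against a threshold.
bool isRedContour(const cv::Mat &src, const std::vector<cv::Point> &contour)
{
    cv::Mat contourMask = cv::Mat::zeros(src.size(), CV_8UC1);
    std::vector<std::vector<cv::Point>> tmp{contour};
    cv::drawContours(contourMask, tmp, 0, cv::Scalar(255), cv::FILLED);
    cv::Scalar meanColor = cv::mean(src, contourMask);
    return meanColor[2] > 150 && meanColor[2] > 1.5 * meanColor[1]; // BGR order
}

cv::Mat src = cv::imread("img0.png");
cv::Mat gray, edges;
cv::cvtColor(src, gray, cv::COLOR_BGR2GRAY);
cv::GaussianBlur(gray, gray, cv::Size(5, 5), 0);  // 1. smooth
cv::Canny(gray, edges, 50, 150);                  // 2. find edges
std::vector<std::vector<cv::Point>> contours;
cv::findContours(edges, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE); // 3. contours
cv::Mat mask = cv::Mat::zeros(src.size(), CV_8UC1);
for (size_t i = 0; i < contours.size(); i++) {
    if (isRedContour(src, contours[i]))           // 4. keep red contours
        cv::drawContours(mask, contours, (int)i, cv::Scalar(255), cv::FILLED); // 5. build mask
}
cv::Mat objects;
src.copyTo(objects, mask);                        // 6. extract from the original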
I'm trying to get the difference between two cv::Mat frames in OpenCV. So here is what I tried:
#include <opencv2\opencv.hpp>
#include <opencv2\calib3d\calib3d.hpp>
#include <opencv2\core\core.hpp>
#include <opencv2\highgui\highgui.hpp>

int main ()
{
    cv::VideoCapture cap(0);
    cv::Mat frame, frame1, frame2;
    int key = 0;
    while(key != 27){
        cap >> frame;
        if(key == 'c'){
            frame1 = frame;
            key = 0;
        }
        if(key == 'x'){
            cv::absdiff(frame, frame1, frame2); // I also tried frame2 = (frame - frame1)*255;
            cv::imshow("difference", frame2);
            key = 0;
        }
        cv::imshow("stream", frame);
        key = cv::waitKey(10);
    }
}
The result is always the same: a zero matrix. Any idea what I'm doing wrong here?
Thanks in advance for your help.
Mat objects share their data by reference; a Mat is essentially a header with a pointer to the pixel data. After assigning frame to frame1 directly with frame1 = frame, both matrices point to the same data, so they always hold the same frame. You have to copy the frame's data using Mat's copyTo method.
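In your loop that means replacing the plain assignment with a deep copy, e.g.:

if(key == 'c'){
    frame.copyTo(frame1); // deep copy; frame1 = frame.clone(); works too
    key = 0;
}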
OpenCV matrices use pointers internally
The documentation of the Mat type states:
Mat is basically a class with two data parts: the matrix header and a pointer to the matrix containing the pixel values.
[...]
Whenever somebody copies a header of a Mat object, a counter is increased for the matrix. Whenever a header is cleaned this counter is decreased. When the counter reaches zero the matrix too is freed. Sometimes you will want to copy the matrix itself too, so OpenCV provides the clone() and copyTo() functions.
cv::Mat F = A.clone();
cv::Mat G;
A.copyTo(G);
OpenCV overloads the assignment operator on cv::Mat objects so that the line mat1 = mat2 only affects the pointer to the data in mat1 (which then points to the same data as mat2). This avoids time-consuming copies of all the image data.
If you actually want to copy the data of a matrix, you have to write mat1 = mat2.clone() or mat2.copyTo(mat1).
I was looking for a similar program and came across your post. Here is a sample I wrote for frame differencing; hope this helps. The function below will give you the difference between two frames.
/** @function differenceFrame */
Mat differenceFrame( Mat prev_frame, Mat curr_frame )
{
    Mat image = prev_frame.clone();
    printf("frame rows %d cols %d\n", image.rows, image.cols);
    for (int rows = 0; rows < image.rows; rows++)
    {
        for (int cols = 0; cols < image.cols; cols++)
        {
            // per-channel absolute difference between the two frames
            image.at<cv::Vec3b>(rows, cols)[0] = abs(prev_frame.at<cv::Vec3b>(rows, cols)[0] -
                curr_frame.at<cv::Vec3b>(rows, cols)[0]);
            image.at<cv::Vec3b>(rows, cols)[1] = abs(prev_frame.at<cv::Vec3b>(rows, cols)[1] -
                curr_frame.at<cv::Vec3b>(rows, cols)[1]);
            image.at<cv::Vec3b>(rows, cols)[2] = abs(prev_frame.at<cv::Vec3b>(rows, cols)[2] -
                curr_frame.at<cv::Vec3b>(rows, cols)[2]);
        }
    }
    return image;
}
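For reference, this per-pixel loop computes the same thing as a single call to cv::absdiff(prev_frame, curr_frame, image); writing it out by hand is mainly useful for seeing what happens on each channel.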
I am copying a patch of pixels from one image to another, and as a result I am not getting a 1:1 mapping; the new image's intensities differ by 1 or 2 intensity levels from the source image.
Do you know what could be causing this?
This is the code:
void templateCut ( IplImage* ptr2Img, IplImage* tempCut, CvBox2D* boundingBox )
{
    /* Upper left corner of target's BB */
    int col1 = (int)boundingBox->center.x;
    int row1 = (int)boundingBox->center.y;

    for(int i = 0; i < tempCut->height; i++)
    {
        /* Pointer to a row */
        uchar * ptrImgBB = (uchar*)( ptr2Img->imageData + (row1+i)*ptr2Img->widthStep + col1 );
        uchar * ptrTemp  = (uchar*)( tempCut->imageData + i*tempCut->widthStep );
        for(int i2 = 0; i2 < tempCut->width; i2++)
        {
            *ptrTemp++ = (*ptrImgBB++);
        }
    }
}
Is it a single-channel image or a multi-channel image (such as RGB)? If it is a multi-channel image, you have to account for the number of channels in your loop: the pixel data is interleaved, so both the starting column offset (col1) and the number of bytes copied per row would need to be multiplied by the channel count, otherwise you land between pixels and the copied intensities come out shifted.
By the way, OpenCV supports regions of interest (ROI), which make it convenient to copy a sub-region of an image. Below is a link with information on ROI usage in OpenCV.
http://nashruddin.com/OpenCV_Region_of_Interest_(ROI)
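For example, with the C++ API a ROI copy is a one-liner; the file name and rectangle values below are placeholders for your image and bounding box (with the C API you would use cvSetImageROI plus cvCopy):

cv::Mat src = cv::imread("image.png"); // assumed input
cv::Rect box(40, 30, 64, 64);          // x, y, width, height (placeholders)
cv::Mat patch;
src(box).copyTo(patch);                // deep copy of the sub-region, all channels handled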