Free-form image selection (preferably in C++)

I am new to image manipulation. I have noticed that you can specify a rectangular region of interest, and other shapes like circles, in image manipulation libraries like OpenCV. Basic paint programs like MS Paint offer free-form selection, but I cannot seem to find a function or tutorial on how to do free-form image selection in OpenCV or other image processing libraries. Any ideas on how to achieve this?
PS: My preferred language is C/C++.

One thing you can try:
If the selection can be represented as a sequence of 2D points, you can think of it as a polygon. Allocate a new 1-channel image that will be your mask and fill it with 0. Then use
void cvFillPoly(CvArr* img, CvPoint** pts, int* npts, int contours, CvScalar color, int lineType=8, int shift=0)
documented on
http://opencv.willowgarage.com/documentation/drawing_functions.html
to draw a non-zero region on the mask image to represent the selected part of the image.
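A minimal sketch of that approach (using the legacy C API to match cvFillPoly; the polygon points here are made-up placeholders for whatever the user traced, and img is your source image):

// Selection traced by the user, represented as a polygon
CvPoint points[4] = { cvPoint(10, 10), cvPoint(200, 30), cvPoint(180, 220), cvPoint(30, 200) };
CvPoint* pts[1] = { points };
int npts[1] = { 4 };

// 1-channel mask, zero-initialized: 0 = unselected, 255 = selected
IplImage* mask = cvCreateImage(cvGetSize(img), IPL_DEPTH_8U, 1);
cvZero(mask);
cvFillPoly(mask, pts, npts, 1, cvScalarAll(255), 8, 0);

// Non-zero mask pixels now mark the selection; many OpenCV functions
// accept such a mask directly, e.g. cvCopy(img, dst, mask).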

I wrote a demo to display an image and paint little green dots while your mouse moves, see below.
You need to know that OpenCV was not designed for this type of interaction, so performance is an issue (and it's bad)! You'll see what I mean.
#include <stdio.h>
#include <cv.h>
#include <highgui.h>

// Mouse callback: paints a small green dot at the current cursor position
void on_mouse(int event, int x, int y, int flags, void* param)
{
    // Uncomment the test below to paint only while the left mouse button is pressed
    //if (event == CV_EVENT_LBUTTONDOWN)
    {
        //fprintf(stderr, "Painting at %dx%d\n", x, y);
        IplImage* img = (IplImage*)param;
        cvCircle(img, cvPoint(x, y), 1, CV_RGB(0, 255, 0), -1, CV_AA, 0);
        cvShowImage("cvPaint", img);
    }
}

int main(int argc, char* argv[])
{
    if (argc < 2)
    {
        fprintf(stderr, "Usage: %s <img>\n", argv[0]);
        return -1;
    }

    IplImage* frame = cvLoadImage(argv[1], CV_LOAD_IMAGE_UNCHANGED);
    if (!frame)
    {
        fprintf(stderr, "Failed: Couldn't load file %s\n", argv[1]);
        return -1;
    }

    cvNamedWindow("cvPaint", CV_WINDOW_AUTOSIZE);
    cvShowImage("cvPaint", frame);
    cvSetMouseCallback("cvPaint", &on_mouse, frame);

    while (1)
    {
        // Keep looping to prevent the app from exiting,
        // so the mouse callback can be called by OpenCV and do some painting
        char key = cvWaitKey(10);
        if (key == 113 || key == 27) // 'q' (113) or ESC (27) was pressed
            break;
    }

    cvReleaseImage(&frame);
    cvDestroyWindow("cvPaint");
    return 0;
}
My suggestion is that you use some other window system for this type of task, where performance is better. Take a look at Qt, for instance. But you can also use platform-native APIs like Win32 or X11 if you prefer.
For the other part of the question, how to crop on user selection, I suggest you take a look at the code available at: OpenCV resizing and cropping image according to pixel value
Also, recording mouse coordinates while the user is painting the image is much more practical than analyzing the image for the painted green dots afterwards. Then analyze these coordinates and retrieve the smallest rectangular area from them. That's when this logic gets useful:
CvScalar s;
// minX/minY are assumed initialized to a large value, maxX/maxY to 0
for (x = 0; x < width; x++)
{
    for (y = 0; y < height; y++)
    {
        s = cvGet2D(binImage, y, x);
        if (s.val[0] == 1)
        {
            minX = min(minX, x);
            minY = min(minY, y);
            maxX = max(maxX, x);
            maxY = max(maxY, y);
        }
    }
}
cvSetImageROI(binImage, cvRect(minX, minY, maxX - minX, maxY - minY));
In this specific case, instead of iterating through the image looking for specific pixels as the user did in that question, you will iterate over the array of coordinates recorded during mouse movement.
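As a rough sketch of that idea using the modern C++ API (selectionPoints is a hypothetical container filled inside the mouse callback, not part of the demo above):

#include <opencv2/opencv.hpp>
#include <vector>

// Crop to the smallest rectangle covering the recorded mouse coordinates.
cv::Mat cropToSelection(const cv::Mat& frame, const std::vector<cv::Point>& selectionPoints)
{
    // boundingRect finds the min/max x and y in one call,
    // replacing the manual loop shown above.
    cv::Rect box = cv::boundingRect(selectionPoints);
    box &= cv::Rect(0, 0, frame.cols, frame.rows); // clamp to image bounds
    return frame(box).clone();
}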

Related

OpenCV ROI on Real time camera

I am trying to set an ROI on a real-time camera feed and copy a picture into the ROI.
I have tried many methods from the Internet, but none were successful.
Part of my code is shown below:
while (!protonect_shutdown)
{
    listener.waitForNewFrame(frames);
    libfreenect2::Frame *ir = frames[libfreenect2::Frame::Ir];
    //! [loop start]
    cv::Mat(ir->height, ir->width, CV_32FC1, ir->data).copyTo(irmat);

    Mat img = imread("button.png");
    cv::Rect r(1, 1, 100, 200);
    cv::Mat dstroi = img(Rect(0, 0, r.width, r.height));
    irmat(r).convertTo(dstroi, dstroi.type(), 1, 0);

    cv::imshow("ir", irmat / 4500.0f);
    int key = cv::waitKey(1);
    protonect_shutdown = protonect_shutdown || (key > 0 && ((key & 0xFF) == 27));
    listener.release(frames);
}
My real-time camera shows the video normally, and the program reports no errors, but the picture is not shown in the ROI.
Does anyone have some ideas?
Any help is appreciated.
I hope I understood your question right and you want an output something like this:
I have created a rectangle of size 100x200 on the video feed and am displaying an image in that rectangle.
Here is the code:
int main()
{
    Mat frame, overlayFrame;
    VideoCapture cap("video.avi"); // use 0 for webcam
    overlayFrame = imread("picture.jpg");
    if (!cap.isOpened())
    {
        cout << "Could not capture video";
        return -1;
    }

    Rect roi(1, 1, 100, 200); // a 100x200 rectangle at point (1,1) on the video feed
    namedWindow("CameraFeed");
    while ((cap.get(CV_CAP_PROP_POS_FRAMES) + 1) < cap.get(CV_CAP_PROP_FRAME_COUNT))
    {
        cap.read(frame);
        // resize the image so it fits in the roi
        resize(overlayFrame, overlayFrame, Size(roi.width, roi.height));
        // copy the picture into the roi
        overlayFrame.copyTo(frame(roi));
        imshow("CameraFeed", frame);
        if (waitKey(27) >= 0)
            break;
    }
    destroyAllWindows();
    return 0;
}

How to ignore/remove contours that touch the image boundaries

I have the following code to detect contours in an image using cvThreshold and cvFindContours:
CvMemStorage* storage = cvCreateMemStorage(0);
CvSeq* contours = 0;
cvThreshold( processedImage, processedImage, thresh1, 255, CV_THRESH_BINARY );
nContours = cvFindContours(processedImage, storage, &contours, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0) );
I would like to extend this code to filter/ignore/remove any contours that touch the image boundaries, but I am unsure how to go about it. Should I filter the thresholded image, or can I filter the contours afterwards? I hope somebody knows an elegant solution, since surprisingly I could not find one by googling.
Update 2021-11-25
updates code example
fixes bugs with image borders
adds more images
adds Github repo with CMake support to build example app
A full out-of-the-box example can be found here:
C++ application with CMake
General info
I am using OpenCV 3.0.0
cv::findContours actually alters the input image, so make sure that you either work on a separate copy specifically for this function or do not use the image further afterwards
Update 2019-03-07: "Since opencv 3.2 source image is not modified by this function." (see corresponding OpenCV documentation)
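For those older versions, a simple guard is to hand the function a throwaway clone (a sketch, not part of the original answer):

cv::Mat contourInput = processedImage.clone(); // findContours may modify this copy
std::vector<std::vector<cv::Point>> contours;
cv::findContours(contourInput, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);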
General solution
All you need to know about a contour is whether any of its points touches the image border. This information can easily be extracted in one of the following two ways:
Check each point of your contour regarding its location. If it lies at the image border (x = 0, x = width - 1, y = 0, or y = height - 1), simply ignore the contour (a sketch of this check follows below).
Create a bounding box around the contour. If the bounding box lies along the image border, you know the contour does, too.
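A minimal sketch of the first procedure (my own illustration, not taken from the code below):

bool contourHasBorderPoint(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    // The contour touches the border if any of its points lies on the
    // outermost pixel row or column of the image.
    for (const cv::Point& p : contour)
    {
        if (p.x == 0 || p.y == 0 ||
            p.x == imageSize.width - 1 || p.y == imageSize.height - 1)
        {
            return true;
        }
    }
    return false;
}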
Code for the second solution (CMake):
cmake_minimum_required(VERSION 2.8)
project(SolutionName)
find_package(OpenCV REQUIRED)
set(TARGETNAME "ProjectName")
add_executable(${TARGETNAME} ./src/main.cpp)
include_directories(${CMAKE_CURRENT_BINARY_DIR} ${OpenCV_INCLUDE_DIRS} ${OpenCV2_INCLUDE_DIR})
target_link_libraries(${TARGETNAME} ${OpenCV_LIBS})
Code for the second solution (C++):
bool contourTouchesImageBorder(const std::vector<cv::Point>& contour, const cv::Size& imageSize)
{
    cv::Rect bb = cv::boundingRect(contour);
    bool retval = false;

    int xMin, xMax, yMin, yMax;
    xMin = 0;
    yMin = 0;
    xMax = imageSize.width - 1;
    yMax = imageSize.height - 1;

    // Use less/greater comparisons to potentially support contours outside of
    // image coordinates, possible future workarounds with cv::copyMakeBorder where
    // contour coordinates may be shifted, and just to be safe.
    // Note that bounding boxes of size 1 have their start point included (of course),
    // but their width/height values are set to 1 even though they do not span 2 pixels.
    // That is why the "search grid" is reduced by 1 below.
    int bbxEnd = bb.x + bb.width - 1;
    int bbyEnd = bb.y + bb.height - 1;
    if (bb.x <= xMin ||
        bb.y <= yMin ||
        bbxEnd >= xMax ||
        bbyEnd >= yMax)
    {
        retval = true;
    }

    return retval;
}
Call it via:
...
cv::Size imageSize = processedImage.size();
for (auto c : contours)
{
    if (contourTouchesImageBorder(c, imageSize))
    {
        // Do your thing...
    }
}
...
Full C++ example:
void testContourBorderCheck()
{
    std::vector<std::string> filenames =
    {
        "0_single_pixel_top_left.png",
        "1_left_no_touch.png",
        "1_left_touch.png",
        "2_right_no_touch.png",
        "2_right_touch.png",
        "3_top_no_touch.png",
        "3_top_touch.png",
        "4_bot_no_touch.png",
        "4_bot_touch.png"
    };

    // Load example images
    //std::string path = "C:/Temp/!Testdata/ContourBorderDetection/test_1/";
    std::string path = "../Testdata/ContourBorderDetection/test_1/";

    for (int i = 0; i < filenames.size(); ++i)
    {
        //std::string filename = "circle3BorderDistance0.png";
        std::string filename = filenames.at(i);
        std::string fqn = path + filename;
        cv::Mat img = cv::imread(fqn, cv::IMREAD_GRAYSCALE);
        cv::Mat processedImage;
        img.copyTo(processedImage);

        // Create copy for contour extraction since cv::findContours alters the input image
        cv::Mat workingCopyForContourExtraction;
        processedImage.copyTo(workingCopyForContourExtraction);

        // Extract contours
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(workingCopyForContourExtraction, contours, cv::RetrievalModes::RETR_EXTERNAL, cv::ContourApproximationModes::CHAIN_APPROX_SIMPLE);

        // Prepare image for contour drawing
        cv::Mat drawing;
        processedImage.copyTo(drawing);
        cv::cvtColor(drawing, drawing, cv::COLOR_GRAY2BGR);

        // Draw contours
        cv::drawContours(drawing, contours, -1, cv::Scalar(255, 255, 0), 1);

        //cv::imwrite(path + "processedImage.png", processedImage);
        //cv::imwrite(path + "workingCopyForContourExtraction.png", workingCopyForContourExtraction);
        //cv::imwrite(path + "drawing.png", drawing);

        const auto imageSize = img.size();
        bool liesOnBorder = contourTouchesImageBorder(contours.at(0), imageSize);
        std::cout << filename << " lies on border: " << liesOnBorder;
        std::cout << std::endl;
        std::cout << std::endl;

        cv::imshow("processedImage", processedImage);
        cv::imshow("workingCopyForContourExtraction", workingCopyForContourExtraction);
        cv::imshow("drawing", drawing);
        cv::waitKey();

        //cv::Size imageSize = workingCopyForContourExtraction.size();
        for (auto c : contours)
        {
            if (contourTouchesImageBorder(c, imageSize))
            {
                // Do your thing...
            }
        }
    }
}
int main(int argc, char** argv)
{
    testContourBorderCheck();
    return 0;
}
Problem with contour detection near image borders
OpenCV seems to have a problem with correctly finding contours near image borders.
For both objects, the detected contour is the same (see images). However, in image 2 the detected contour is not correct, since a part of the object lies along x = 0, but the contour lies at x = 1.
This seems like a bug to me.
There is an open issue regarding this here: https://github.com/opencv/opencv/pull/7516
There also seems to be a workaround with cv::copyMakeBorder (https://github.com/opencv/opencv/issues/4374), however it seems a bit complicated.
If you can be a bit patient, I'd recommend waiting for the release of OpenCV 3.2 which should happen within the next 1-2 months.
New example images: single pixel top left; objects left, right, top and bottom, each touching and not touching the border (1 px distance).
[Example images: object touching the image border; object not touching the image border; contour for the object touching the image border; contour for the object not touching the image border]
Although this question is about C++, the same issue affects OpenCV in Python. A solution to the OpenCV '0-pixel' border issue in Python (which can likely be used in C++ as well) is to pad the image with 1 pixel on each border, call OpenCV with the padded image, and then remove the border afterwards. Something like:
img2 = np.pad(img.copy(), ((1,1), (1,1), (0,0)), 'edge')
# call openCV with img2, it will set all the border pixels in our new pad with 0
# now get rid of our border
img = img2[1:-1,1:-1,:]
# img is now set to the original dimensions, and the contours can be at the edge of the image
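A rough C++ equivalent of that workaround (an untested sketch built around cv::copyMakeBorder, which the linked issue also mentions; img is assumed to be an 8-bit binary image):

// Pad by one replicated pixel so border-touching objects get a clean contour.
cv::Mat padded;
cv::copyMakeBorder(img, padded, 1, 1, 1, 1, cv::BORDER_REPLICATE);

std::vector<std::vector<cv::Point>> contours;
cv::findContours(padded, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

// Undo the 1-pixel offset introduced by the padding.
for (auto& contour : contours)
    for (auto& p : contour)
        p -= cv::Point(1, 1);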
If anyone needs this in MATLAB, here is the function.
function [touch] = componentTouchesImageBorder(C, im_row_max, im_col_max)
% C is a bwconncomp instance
touch = 0;
S = regionprops(C, 'PixelList');
% PixelList columns are [x y], i.e. [column row]
c_col_max = max(S.PixelList(:,1));
c_col_min = min(S.PixelList(:,1));
c_row_max = max(S.PixelList(:,2));
c_row_min = min(S.PixelList(:,2));
if (c_row_max == im_row_max || c_row_min == 1 || c_col_max == im_col_max || c_col_min == 1)
    touch = 1;
end
end

How to increase the contrast of an image with OpenCV C++?

I want to increase the contrast of the picture below with OpenCV C++.
I have used histogram processing techniques, e.g., histogram equalization (HE) and histogram specification, but I cannot reach results as good as the images below:
What ideas would you suggest to solve this task? Or what resources on the internet can help me?
I found a useful OpenCV tutorial on changing image contrast:
#include <cv.h>
#include <highgui.h>
#include <iostream>

using namespace cv;

double alpha; /**< Simple contrast control */
int beta;     /**< Simple brightness control */

int main( int argc, char** argv )
{
    /// Read image given by user
    Mat image = imread( argv[1] );
    Mat new_image = Mat::zeros( image.size(), image.type() );

    /// Initialize values
    std::cout << " Basic Linear Transforms " << std::endl;
    std::cout << "-------------------------" << std::endl;
    std::cout << "* Enter the alpha value [1.0-3.0]: "; std::cin >> alpha;
    std::cout << "* Enter the beta value [0-100]: ";    std::cin >> beta;

    /// Do the operation new_image(i,j) = alpha*image(i,j) + beta
    for( int y = 0; y < image.rows; y++ )
    {
        for( int x = 0; x < image.cols; x++ )
        {
            for( int c = 0; c < 3; c++ )
            {
                new_image.at<Vec3b>(y,x)[c] =
                    saturate_cast<uchar>( alpha*( image.at<Vec3b>(y,x)[c] ) + beta );
            }
        }
    }

    /// Create Windows
    namedWindow("Original Image", 1);
    namedWindow("New Image", 1);

    /// Show stuff
    imshow("Original Image", image);
    imshow("New Image", new_image);

    /// Wait until user presses some key
    waitKey();
    return 0;
}
See: Changing the contrast and brightness of an image!
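As a side note, OpenCV can apply the same linear transform in a single call, which should be equivalent to the triple loop above:

// new_image = saturate(alpha * image + beta); rtype -1 keeps the input type
image.convertTo(new_image, -1, alpha, beta);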
I'm no expert, but you could try to reduce the number of colours by merging grays into darker grays and light grays into whites.
E.g.:
Find the least common colour in the [0.0, 0.5) range and merge it towards black.
Find the least common colour in the [0.5, 1.0] range and merge it towards white.
This would reduce the number of colours and help create a gap between the brighter and darker colours.
This might be late, but you can try the createCLAHE() function in OpenCV. It works fine for me.
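A minimal sketch of that approach (the clip limit and tile grid size below are common starting values, not tuned for this particular image):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat src = cv::imread("input.png", cv::IMREAD_GRAYSCALE);

    // CLAHE: Contrast Limited Adaptive Histogram Equalization.
    // clipLimit bounds the contrast amplification; tileGridSize sets the
    // local regions that are equalized independently.
    cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(2.0, cv::Size(8, 8));
    cv::Mat dst;
    clahe->apply(src, dst);

    cv::imwrite("output.png", dst);
    return 0;
}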

openFrameworks and OpenCV image processing issue: analysing video and rendering manipulated images back to the user with a color palette

I am working on a project with openFrameworks using ofxCv, ofxOpencv and ofxColorQuantizer. The project analyzes live video captured via webcam in real time to gather and output the most prominent color in the current frame. When generating the most prominent color, I use the pixel difference between the current frame and the previous frame to determine which colors have changed, and use those updated or moving areas of the video frame to figure out the most prominent colors.
The reason for using the pixel differences to generate the color palette is that I want to handle the case where a user walks into the video frame: I want to gather the color palette of the person, for instance what they are wearing. For example, a red shirt and blue pants will be in the palette while the white background is excluded.
I have a strong background in JavaScript and canvas but am fairly new to openFrameworks and C++, which is why I think I am running into a roadblock with the problem described above.
Along with openFrameworks I am using ofxCv, ofxOpencv and ofxColorQuantizer as tools for this installation. I take a webcam image, convert it to a cv::Mat, run pyrDown on it twice, follow that with an absdiff of the mats, and then try to pass the result into the ofxColorQuantizer. This is where I think I am running into problems: I don't think ofxColorQuantizer likes the Mat format of the image I am passing it. I've looked for a different image format to convert to in order to solve this issue, but I haven't been able to find a solution.
For efficiency, I am hoping to do the color difference and color prominence calculations on the smaller image (after I pyrDown the image), while displaying the full image on screen with the generated color palette at the bottom left, like in the ofxColorQuantizer example.
I think there may be other ways to speed up the code but at the moment I am trying to get this portion of the app working first.
I have my main.cpp set up as follows:
#include "ofMain.h"
#include "ofApp.h"
#include "ofAppGlutWindow.h"
//========================================================================
int main( ){
ofAppGlutWindow window;
ofSetupOpenGL(&window, 1024,768, OF_WINDOW); // <-------- setup the GL context
// ofSetupOpenGL(1024,768,OF_WINDOW); // <-------- setup the GL context
// this kicks off the running of my app
// can be OF_WINDOW or OF_FULLSCREEN
// pass in width and height too:
ofRunApp(new ofApp());
}
My ofApp.h file is as follows:
#pragma once

#include "ofMain.h"
#include "ofxOpenCv.h"
#include "ofxCv.h"
#include "ofxColorQuantizer.h"

class ofApp : public ofBaseApp{
public:
    void setup();
    void update();
    void draw();

    ofVideoGrabber cam;
    ofPixels previous;
    ofImage diff;

    void kMeansTest();

    ofImage image;
    ofImage img;
    cv::Mat matA, matB;
    ofImage diffCopy;
    ofImage outputImage;

    ofxCv::RunningBackground background;
    ofxColorQuantizer colorQuantizer;

    // a scalar is like an ofVec4f but normally used for storing color information
    cv::Scalar diffMean;
};
And finally my ofApp.cpp is below:
#include "ofApp.h"
using namespace ofxCv;
using namespace cv;
//--------------------------------------------------------------
void ofApp::setup(){
ofSetVerticalSync(true);
cam.initGrabber(320, 240);
// get our colors
colorQuantizer.setNumColors(3);
// resize the window to match the image
// ofSetWindowShape(image.getWidth(), image.getHeight());
ofSetWindowShape(800, 600);
// imitate() will set up previous and diff
// so they have the same size and type as cam
imitate(previous, cam);
imitate(diff, cam);
imitate(previous, outputImage);
imitate(diff, outputImage);
}
//--------------------------------------------------------------
void ofApp::update(){
cam.update();
if(cam.isFrameNew()) {
matA = ofxCv::toCv(cam.getPixelsRef());
ofxCv::pyrDown(matA, matB);
ofxCv::pyrDown(matB, matA);
ofxCv::medianBlur(matA, 3);
ofxCv::toOf(matA, outputImage);
// take the absolute difference of prev and cam and save it inside diff
absdiff(previous, outputImage, diff);
}
}
//--------------------------------------------------------------
void ofApp::draw(){
// If the image is ready to draw, then draw it
if(outputImage.isAllocated()) {
outputImage.update();
outputImage.draw(0, 0, ofGetWidth(), ofGetHeight());
}
ofBackground(100,100,100);
ofSetColor(255);
ofImage diffCopy;
diffCopy = diff;
diffCopy.resize(diffCopy.getWidth()/2, diffCopy.getHeight()/2);
// there is some sort of bug / issue going on here...
// prevent the app from compiling
// comment out to run and see blank page
colorQuantizer.quantize(diffCopy.getPixelsRef());
ofLog() << "the number is " << outputImage.getHeight();
ofLog() << "the number is " << diffCopy.getHeight();
ofSetColor(255);
img.update();
// cam.draw(0, 0, 800, 600);
outputImage.draw(0, 0, 800, 600);
// colorQuantizer.draw(ofPoint(0, cam.getHeight()-20));
colorQuantizer.draw(ofPoint(0, 600-20));
// use the [] operator to get elements from a Scalar
float diffRed = diffMean[0];
float diffGreen = diffMean[1];
float diffBlue = diffMean[2];
ofSetColor(255, 0, 0);
ofRect(0, 0, diffRed, 10);
ofSetColor(0, 255, 0);
ofRect(0, 15, diffGreen, 10);
ofSetColor(0, 0, 255);
ofRect(0, 30, diffBlue, 10);
}
//--------------------------------------------------------------
void ofApp::kMeansTest(){
cv::Mat samples = (cv::Mat_<float>(8, 1) << 31 , 2 , 10 , 11 , 25 , 27, 2, 1);
cv::Mat labels;
// double kmeans(const Mat& samples, int clusterCount, Mat& labels,
cv::TermCriteria termcrit;
int attempts, flags;
cv::Mat centers;
double compactness = cv::kmeans(samples, 3, labels, cv::TermCriteria(), 2, cv::KMEANS_PP_CENTERS, centers);
cout<<"labels:"<<endl;
for(int i = 0; i < labels.rows; ++i)
{
cout<<labels.at<int>(0, i)<<endl;
}
cout<<"\ncenters:"<<endl;
for(int i = 0; i < centers.rows; ++i)
{
cout<<centers.at<float>(0, i)<<endl;
}
cout<<"\ncompactness: "<<compactness<<endl;
}
Apologies in advance for the state of my code — it's getting late and I'm trying to get this done.
My questions are: what image format does openFrameworks use for grabbing the webcam image, what image format does OpenCV expect, what should I use to switch back from a Mat image to an ofImage, and is there a way to getPixelsRef from a Mat image?
The area of code where I think something is wrong is the following logic:
I have this line of code which gets the video frame from the webcam: matA = ofxCv::toCv(cam.getPixelsRef());
Then I run a couple of ofxCv procedures on the frame, such as ofxCv::pyrDown(matA, matB);, which I think changes the image format or pixel format of the frame.
Then I convert the frame back to OF with ofxCv::toOf(matA, outputImage);.
Next I get the difference in pixels between the current frame and the last frame, and create a copy of that difference. Potentially the issue lies here, with the diff output image format.
Then I pass the diff copy to colorQuantizer.quantize(diffCopy.getPixelsRef()); to try to generate the color palette for the change in pixels.
It is the colorQuantizer class and function call that gives me a thread error which reads: [ error ] ofTexture: allocate(): ofTextureData has 0 width and/or height: 0x0
with an EXC_BAD_ACCESS.
And lastly, could there be an alternative cause for the EXC_BAD_ACCESS thread error rather than image formatting? Being new to C++, I'm just guessing and going off instinct about what I think the root cause of my problem is.
Many thanks.

opencv image window/imshow

I am just starting to use the OpenCV library, and one of my first programs is a simple negative transform function.
#include <stdio.h>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

void negative(Mat& input, Mat& output)
{
    int row = input.rows;
    int col = input.cols;
    int x, y;
    uchar *input_data  = input.data;
    uchar *output_data = output.data;

    for( x = 0; x < row; x++)
        for( y = 0; y < col; y++)
            output_data[x*col + y] = 255 - input_data[x*col + y];

    cout << x << y;
}

int main( int argc, char** argv )
{
    Mat image;
    image = imread( argv[1], 1 );
    Mat output = image.clone();

    negative(image, output);

    namedWindow( "Display Image", CV_WINDOW_AUTOSIZE );
    imshow( "Display Image", output );
    waitKey(0);
    return 0;
}
I have added the extra output line to check whether the entire image is processed. The problem I am facing is that the negative transform is applied only to the top half of the output image.
What happens is that the values for x and y are displayed only after I press a key (i.e. once the image is shown).
My question is: why is the window being called before the function is executed?
The fundamental problem in your code is that you are reading in a color image but you try to process it as grayscale. Therefore the indices shift and what really happens is that you only process the first third of the image (because of the 3-channel format).
See opencv imread manual
flags –
Specifies color type of the loaded image:
>0 the loaded image is forced to be a 3-channel color image
=0 the loaded image is forced to be grayscale
You've specified flags=1.
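If single-channel processing was actually the intent, loading the image as grayscale would make the pointer arithmetic valid (just one possible fix, not what the rest of this answer does):

Mat image = imread(argv[1], 0); // flags=0 forces grayscale: one uchar per pixel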
Here's a way of doing it in color:
Vec3b v(255, 255, 255);
for (int i = 0; i < input.rows; i++)
{
    for (int j = 0; j < input.cols; j++)
    {
        output.at<Vec3b>(i,j) = v - input.at<Vec3b>(i,j);
    }
}
Note that here Vec3b is a 3-channel pixel value as opposed to uchar which is a 1-channel value.
For a more efficient implementation you can have a look at Mat.ptr<Vec3b>(i).
EDIT:
If you are processing lots of images, the fastest way for a general iteration over the pixels is:
Vec3b v(255, 255, 255); // or maybe Scalar v(255,255,255), I'm not sure
for (int i = 0; i < input.rows; i++)
{
    Vec3b *p = input.ptr<Vec3b>(i);
    Vec3b *q = output.ptr<Vec3b>(i);
    for (int j = 0; j < input.cols; j++)
    {
        q[j] = v - p[j];
    }
}
See "The OpenCV Tutorials" -- "The efficient way" section.
Try to write:
cout << x << y << endl;
The function is called first, but the output is not flushed directly, which results in your image appearing before the text is written. By adding an endline, you force a flush. You could also use flush(cout); instead of adding an endline.
For the negative, you can use the OpenCV function subtract() directly:
subtract(Scalar(255, 255, 255), input, output);
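For context, a minimal self-contained sketch around that call (my own wiring; for 8-bit images, bitwise_not(input, output) would give the same result):

#include <opencv2/opencv.hpp>
using namespace cv;

int main(int argc, char** argv)
{
    Mat input = imread(argv[1], 1);
    Mat output;

    // 255 - pixel, per channel, with saturation handled by OpenCV
    subtract(Scalar(255, 255, 255), input, output);

    imshow("Negative", output);
    waitKey(0);
    return 0;
}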