I have a grayscale image and I want to read every row of the image.
How can I write this algorithm?
Can anyone help me?
Here is the code I am using:
//#include "stdafx.h"
#include <stdio.h>
#include <iostream>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>

using namespace cv;

int main()
{
    // load the image as single-channel grayscale
    cv::Mat img = cv::imread("capture.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    cv::Mat some_row = img.row(0); // gives the 1st row (shares data with img)

    namedWindow("source", CV_WINDOW_NORMAL);
    imshow("source", img);
    namedWindow("row", CV_WINDOW_NORMAL);
    imshow("row", some_row);

    waitKey(0);
    return 0;
}
From the image I would like to scan the colors and test whether I can find the order white, then black, then white, then black.
For this reason I want to search row by row.
I can share the picture so you can understand me better.
I have already made an algorithm that detects the circle from the right and the left side, and that works. Now I need code that searches only for the colors, not for objects in the image (colors in the order white, black, white, black), to get the right position that I am searching for,
as shown in the image: https://drive.google.com/file/d/0B1WQBCaQu10geG1uTm40ZG9IVjQ/edit?usp=sharing
I suggest that you read the opencv tutorial on scanning images. You should also have a look at the documentation for the Mat class.
There is more than one way of doing what you need, and different methods have different efficiency and safety trade-offs. The more efficient methods work directly with pointers. This is fast, but you must take care to get the pointer arithmetic right, depending on the number of pixels, channels, and padding your image may have in memory.
On the other hand, using an iterator, or even something like Mat::at, is safer, since these calculations are done for you, but it isn't as efficient.
Have a look at the documentation and choose what is right for your problem. Hope that helps.
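For illustration, here is a rough sketch of the pointer-based approach applied to the white/black/white/black search from the question; the 128 threshold and the transition count are assumptions you will need to tune:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    // load the image as a single-channel grayscale matrix
    cv::Mat img = cv::imread("capture.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    for (int y = 0; y < img.rows; ++y)
    {
        const uchar* row = img.ptr<uchar>(y);  // pointer to the first pixel of row y
        int transitions = 0;
        bool startedWhite = row[0] > 128;      // assumed split between black and white
        bool wasWhite = startedWhite;
        for (int x = 1; x < img.cols; ++x)
        {
            bool isWhite = row[x] > 128;
            if (isWhite != wasWhite) ++transitions;
            wasWhite = isWhite;
        }
        // white -> black -> white -> black needs at least 3 colour changes
        // in a row that starts on white
        if (startedWhite && transitions >= 3)
            std::cout << "candidate row: " << y << std::endl;
    }
    return 0;
}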
You can use the .row() method:
cv::Mat img = cv::imread("some_path.jpg",CV_LOAD_IMAGE_GRAYSCALE);//some gray level image
cv::Mat some_row = img.row(0); //gives 1st row
Note that this method shares data with the original matrix img. If you want an actual copy, use the .copyTo() method.
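For example, to see the difference between the shared view and an independent copy (the variable names are just illustrative):
cv::Mat shared_row = img.row(0);   // still points into img's pixel data
cv::Mat copied_row;
img.row(0).copyTo(copied_row);     // owns its own copy of the pixels
copied_row.setTo(0);               // changes only the copy; img is untouched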
This is probably a rather simple task, but I am unsure how to proceed, since I am new to OpenCV in C++.
I was inspired by this code.
The idea I had was then to take a single image, do PCA on the RGB intensities and visualize the projection of the RGB data onto the 3 principal components in grayscale.
The first problem I run into is how to set up the matrix for PCA. Here is my code so far:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;
int main( )
{
    // Open another image
    Mat image, imageMat;
    image = imread("images/0.jpg");

    imageMat = convertForPCA(image);

    // Do pca and visualize channels in grayscale.
    // ...

    return 0;
}
Could you help me implement the convertForPCA function? It should take an image matrix and return an n-by-3 float matrix, where n is the number of pixels in the original image.
I think I can proceed with the rest, but will maybe post more questions if I get stuck and can't find an answer.
I solved my problem, and I put the solution on github in case anyone runs into this later.
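For anyone landing here later, a rough sketch of what convertForPCA and the projection step could look like, assuming a 3-channel 8-bit input image (the solution linked above may differ):
#include <opencv2/opencv.hpp>

// reshape the H x W x 3 image into an n x 3 float matrix, one row per pixel
cv::Mat convertForPCA(const cv::Mat& image)
{
    cv::Mat floatImg;
    image.convertTo(floatImg, CV_32F);                  // 8-bit BGR -> 32-bit float
    return floatImg.reshape(1, (int)image.total());     // n rows, 3 columns
}

int main()
{
    cv::Mat image = cv::imread("images/0.jpg");
    if (image.empty()) return -1;

    cv::Mat samples = convertForPCA(image);
    cv::PCA pca(samples, cv::Mat(), cv::PCA::DATA_AS_ROW); // one sample per row
    cv::Mat projected = pca.project(samples);               // n x 3 projection

    // visualize each principal component as a grayscale image
    for (int c = 0; c < 3; ++c)
    {
        cv::Mat component = projected.col(c).clone().reshape(1, image.rows);
        cv::normalize(component, component, 0, 255, cv::NORM_MINMAX);
        component.convertTo(component, CV_8U);
        cv::imwrite(cv::format("pc_%d.png", c), component);
    }
    return 0;
}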
Here I wrote code to increase brightness using histogram equalization, but it changes the overall brightness of the whole image. I need to increase the brightness only in a specific location.
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
#include <iostream>
using namespace cv;
using namespace std;
int main( int argc, const char** argv {
Mat img = imread("MyPic.JPG", CV_LOAD_IMAGE_COLOR);
if (img.empty())
{
cout << "Image cannot be loaded..!!" << endl;
return -1;
}
cvtColor(img, img, CV_BGR2GRAY);
Mat img_hist_equalized;
equalizeHist(img, img_hist_equalized); //equalize the histogram
namedWindow("Original Image", CV_WINDOW_AUTOSIZE);
namedWindow("Histogram Equalized", CV_WINDOW_AUTOSIZE);
imshow("Original Image", img);
imshow("Histogram Equalized", img_hist_equalized);
waitKey(0); //wait for key press
destroyAllWindows(); //destroy all open windows
return 0;}
Input image:
Output I get:
But the output I expected:
The above code is based on histogram equalization. If there is any other approach, please specify it here.
The answer to your question lies in the method used to generate the last image (your expected output). Which graphical-editor commands did you use to produce that result? Just repeat them with OpenCV.
You can try something like this:
increase contrast,
reduce the number of colors to 4,
replace all three "near black" colors by real black, and replace the one "near white" color by real white.
You will have something like this (top image after second step, bottom image after third step):
Is it good enough for your task? Do you need better quality? In any case, try to process your image in a graphical editor before coding. Only when you understand which operations you need to apply to your image should you try to implement them using OpenCV.
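For illustration only, here is a rough sketch that compresses the steps above into a contrast stretch followed by a hard black/white split; the gain, offset and threshold values are guesses, not values measured from your image:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("input.jpg", CV_LOAD_IMAGE_GRAYSCALE);
    if (img.empty()) return -1;

    // step 1: increase contrast (gain 2.0 and offset -128 are guesses)
    cv::Mat contrasted;
    img.convertTo(contrasted, -1, 2.0, -128.0);

    // steps 2-4: quantising to 4 levels and snapping the three darkest levels
    // to black and the brightest level to white is, in the end, a single
    // threshold at the top quantisation boundary
    cv::Mat result;
    cv::threshold(contrasted, result, 191, 255, CV_THRESH_BINARY);

    cv::imwrite("result.png", result);
    return 0;
}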
To understand the problem better, let's look at the histogram of your initial image:
We can see that the difference between the "expected black" and "expected white" colors is very small. Moreover, inside the "white circle" we can see pixels with "expected black" colors. So it is not enough to change the histogram to fix this mix of pixels. We need to analyze the surroundings of each pixel (in fact, I did this in the first step, when I increased the contrast of the image). We can discuss methods for it, but first of all we need more information and effort from you. Editing the palette without analyzing the surroundings will give you strange results like this:
So you need to open any graphical editor and find a way to convert your input into the expected output.
As the graph shows, I'd like to input an image and get several segments as a result, like that.
It's just like clustering the closest color segments, so I think it's close to the concept of "mean shift"?
I've searched relevant questions here but still don't know how to start and how to construct the structure in OpenCV C++. I'm looking for some advice, and I'd really appreciate a piece of implementation code to use as a reference! Thanks for any help!!
==================================================
Edit 5/19/2015
Let me add that one of the implementations I have tried is the watershed here: (http://blog.csdn.net/fdl19881/article/details/6749976).
It's not perfect, but it gives the kind of result I want. In this implementation the user has to operate manually (draw the watershed lines), so I'm looking for an AUTOMATIC version of it. It sounds a little hard, but I'd appreciate any suggestion or piece of code to do it.
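For reference, here is a rough sketch of what an automatic version could look like, with markers generated from a distance transform instead of manual strokes; every threshold below is a guess and needs tuning:
#include <opencv2/opencv.hpp>
#include <vector>

int main(int argc, char** argv)
{
    if (argc < 2) return -1;
    cv::Mat img = cv::imread(argv[1]);
    if (img.empty()) return -1;

    // binarize: Otsu picks the threshold automatically
    cv::Mat gray, bin;
    cv::cvtColor(img, gray, CV_BGR2GRAY);
    cv::threshold(gray, bin, 0, 255, CV_THRESH_BINARY | CV_THRESH_OTSU);

    // distance transform: bright peaks sit at the centres of the segments
    cv::Mat dist;
    cv::distanceTransform(bin, dist, CV_DIST_L2, 3);
    double maxVal;
    cv::minMaxLoc(dist, 0, &maxVal);
    cv::Mat peaks;
    cv::threshold(dist, peaks, 0.4 * maxVal, 255, CV_THRESH_BINARY); // 0.4 is a guess
    peaks.convertTo(peaks, CV_8U);

    // each connected peak becomes one marker label
    std::vector<std::vector<cv::Point> > contours;
    cv::findContours(peaks, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    cv::Mat markers = cv::Mat::zeros(img.size(), CV_32S);
    for (size_t i = 0; i < contours.size(); ++i)
        cv::drawContours(markers, contours, (int)i, cv::Scalar((int)i + 1), -1);

    // watershed labels every pixel with a segment id (-1 on the boundaries)
    cv::watershed(img, markers);

    cv::Mat vis;
    markers.convertTo(vis, CV_8U, 255.0 / (contours.size() + 1));
    cv::imwrite("watershed.png", vis);
    return 0;
}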
OpenCV documentation: Link
Parameters: here
Sample code for Meanshift filtering:
#include "opencv2/core/core.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
Mat img, res, element;
int main(int argc, char** argv)
{
namedWindow( "Meanshift", 0 );
img = imread( argv[1] );
// GaussianBlur(img, img, Size(5,5), 2, 2);
pyrMeanShiftFiltering( img, res, 20, 45, 3);
imwrite("meanshift.png", res);
imshow( "Meanshift", res );
waitKey();
return 0;
}
This is the output with your image; you might need to do some pre-processing first, or maybe find better parameters:
EDIT: Output with some Gaussian blur applied beforehand (the commented GaussianBlur line in the code above):
The problem with looking at existing segmentation approaches is that they are either implemented in Matlab (which nobody outside of Uni can use) or they are not automatic. An approach where the user needs to preprocess the picture by choosing objects of interest or levels that indicate how to split colors is not useful because it is not automatic. If you like, you can try my OpenCV based implementation of segmentation described in this blog post. It is not perfect, but it is automatic and does most of the job and you can actually download the source and try it out.
I am pretty new to OpenCV and would like a little help.
My basic idea was to use OpenCV to create a small application for interior design.
Problem
How do I differentiate between the walls and the floor in a picture (even when there is some noise in the picture)?
For example:
My idea was that if I can somehow find the edges of the wall or the tiles, then any object used for interior decoration (for example a chair) can be placed perfectly over the floor (i.e. the two images get blended).
My approach
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv/cv.h>
using namespace cv;
using namespace std;
int main()
{
    Mat image = imread("/home/ayusun/Downloads/IMG_20140104_143443.jpg");
    Mat resized_img, dst, contours, detected_edges;

    resize(image, resized_img, Size(1024, 768), 0, 0, INTER_CUBIC);
    threshold(resized_img, dst, 128, 255, CV_THRESH_BINARY);
    //Canny(image, contours, 10, 350);

    namedWindow("resized image");
    imshow("resized image", resized_img);
    //imshow("threshold", dst);

    blur(resized_img, detected_edges, Size(2,2));
    imshow("blurred", detected_edges);

    Canny(detected_edges, contours, 10, 350);
    imshow("contour", contours);

    waitKey(0);
    return 0;
}
I tried the Canny edge detection algorithm, but it seems to find a lot of edges. And I still don't know how to blend the floor of the room with the chair.
Thanks
Sorry for the involuntary advertisement, but IKEA has a catalog smartphone app that uses augmented reality to position objects/furniture over an image of your room. Is that what you're trying to do?
In order to achieve this you would need a "pinpoint": a fixed point to hook your objects to. That is usually what helps differentiate between walls and floor in the app above (and it makes things easier).
Distinguishing walls from floors is hard even for a human if they're hanging by their feet and the walls/floors have the same texture on them (but we manage to do it thanks to our sense of gravity).
Find some keypoints, or please state whether you're planning to do it with a fixed camera (i.e. it will never be put horizontally).
OpenCV's POSIT may be useful for you (here is an example): http://opencv-users.1802565.n2.nabble.com/file/n6908580/main.cpp
Also take a look at augmented-reality toolkits, ArUco for example.
For advanced methods, take a look at PTAM.
And you can find some useful links and papers here: http://www.doc.ic.ac.uk/~ajd/
Segmenting walls and floors out of a single image is possible to some extent, but it requires a lot of work; you will need quite a complex system to achieve decent results. You can probably do much better with a pair of images (stereo reconstruction).
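As a rough illustration of the stereo route (not a complete solution): with a rectified left/right pair you can compute a disparity map, which separates the floor plane from the walls far more reliably than edges in a single image. This sketch uses the OpenCV 2.x StereoBM API to match the rest of the code here; the file names and parameters are only placeholders:
#include <opencv2/opencv.hpp>

int main()
{
    // rectified left/right images are assumed to exist
    cv::Mat left  = cv::imread("left.png",  CV_LOAD_IMAGE_GRAYSCALE);
    cv::Mat right = cv::imread("right.png", CV_LOAD_IMAGE_GRAYSCALE);
    if (left.empty() || right.empty()) return -1;

    // block-matching stereo; 96 disparities and a 21x21 block are placeholders
    cv::StereoBM bm(cv::StereoBM::BASIC_PRESET, 96, 21);
    cv::Mat disparity;
    bm(left, right, disparity, CV_16S);

    // scale to 8 bits for viewing: pixels closer to the camera appear brighter
    cv::Mat vis;
    cv::normalize(disparity, vis, 0, 255, cv::NORM_MINMAX, CV_8U);
    cv::imwrite("disparity.png", vis);
    return 0;
}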
I am using this program to just read and display an image.
I don't know why it shows this odd error:
assertion failed (scn==3 || scn ==4) in unknown function,file......\src\modules\imgproc\src\color.cpp line 3326
I changed some images; sometimes it runs without error, but even when it runs, it shows the window without the image inside it. What is wrong?
#include "stdafx.h"
#include "opencv2/calib3d/calib3d.hpp"
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/imgproc/imgproc.hpp"
using namespace cv;
using namespace std;
void main()
{
Mat leftImg,frame=imread("C:\\Users\\user\\Downloads\\stereo_progress.png");
leftImg=imread("C:\\Users\\user\\Downloads\\dm_sl.gif");//add of left camera
cvtColor(leftImg,leftImg,CV_BGR2GRAY);
imwrite("imreadtest.txt",leftImg);
imshow("cskldnsl",leftImg);
getchar();
}
As answered by others, make sure the first parameter of cvtColor is not a 1-channel image. Check it with type(); it should be CV_8UC3, etc.
Put waitKey() after imshow(). The image will show up.
I do not know why you are saving leftImg to imreadtest.txt (though that is not what causes the error).
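For example, a quick way to check it on the question's leftImg (assuming <iostream> is included):
std::cout << "channels: " << leftImg.channels()
          << ", is CV_8UC3: " << (leftImg.type() == CV_8UC3) << std::endl;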
First, make sure that the image was correctly loaded by testing for leftImg.data != 0.
Then, you can force the number of channels by passing as second parameter to cv::imread() the value CV_LOAD_IMAGE_GRAYSCALE or CV_LOAD_IMAGE_COLOR in order to ensure that you load a grayscale (1 channel) or color (3 channels) image, whatever the type of the image file is.
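For instance (the file name here is only a placeholder):
cv::Mat leftImg = cv::imread("left_image.png", CV_LOAD_IMAGE_COLOR); // force a 3-channel image
if (leftImg.data == 0)
{
    std::cout << "Image cannot be loaded..!!" << std::endl;
    return -1;
}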
You cannot use the same matrix for both the input matrix and the output matrix when using cvtColor(). If you don't need the colored image later on, passing a copy is a straightforward solution:
cvtColor(leftImg.clone(), leftImg, CV_BGR2GRAY);
Another solution is using a fresh output matrix:
Mat leftImgGray;
cvtColor(leftImg, leftImgGray, CV_BGR2GRAY);
imshow("cskldnsl",leftImgGray);