I get this error while trying to run OpenCV C++ code in Qt Creator. The code adds some features to a video file:
OpenCV Error: Assertion failed (!fixedType() || ((Mat*)obj)->type() == mtype) in create, file c:\opencv\sources\modules\core\src\matrix.cpp
I tried adding dWidth and dHeight but it did not work.
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    Mat frame;
    Mat image;
    VideoCapture cap("file source");
    namedWindow("window", 1);
    while (1) {
        cap >> frame;
        GpyrTempIIR g;
        g.processOnFrame(frame, image); // this is the processing I do on each frame
        imshow("window", image);
        waitKey(33);
    }
}
void GpyrTempIIR::processOnFrame(const Mat& src, Mat& out) {
    src.convertTo(src, CV_32F); // convert to float
    resize(srcFloat, blurred, blurredsize, 0, 0, CV_INTER_AREA);
    /* Method in OpenCV to resize a video frame.
     * INTER_AREA is a fast method that takes the average of several pixels,
     * which is good for shrinking an image but not so good for enlarging one. */
    if (first) {
        first = false;
        blurred.copyTo(LowPassHigh); // using storing method
        blurred.copyTo(LowPassLow);
        src.copyTo(out);
    } else {
        // apply temporal filter: subtraction of two IIR low-pass filters
        LowPassHigh = LowPassHigh * (1 - fHigh) + fHigh * blurred;
        LowPassLow = LowPassLow * (1 - fLow) + fLow * blurred;
        blurred = LowPassHigh - LowPassLow;
        blurred *= alpha; // amplify by multiplying by the alpha value
        resize(blurred, outFloat, src.size(), 0, 0, CV_INTER_LINEAR); // resize back
        outFloat += srcFloat; // add back to the original frame
        outFloat.convertTo(out, CV_8U); // convert to 8 bit
    }
}
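One detail stands out in the snippet (a guess from the code shown, not a confirmed diagnosis): src is a const reference, yet the first line tries to convert src into itself, while the rest of the method reads from srcFloat. A minimal sketch of what was presumably intended, assuming srcFloat is a class member:

// Convert into the srcFloat buffer the rest of the method actually uses,
// leaving the const input untouched
src.convertTo(srcFloat, CV_32F);
resize(srcFloat, blurred, blurredsize, 0, 0, CV_INTER_AREA);

Errors of this family often come from a function being asked to write data of one type into an output Mat the code expects to have another type, so it is worth checking that every CV_32F intermediate is converted back before being mixed with CV_8U frames.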
Related
I have been trying to use absdiff to find the motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening around or not, but the output is pitch black for diff1, diff2 and motion. Meanwhile, next_mframe, current_mframe and prev_mframe show grayscale images, while result shows a clear, normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image buffers are loaded with the same frame and then compared, which would explain why the output is pitch black. Is there any other method I am missing? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr) {
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; // to parse image_raw from RTSP
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }
    frame  = new IplImage(cv_image->image); // frame now holds the raw image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);
    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY); // grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY); // grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY); // grayscale
    // maximum deviation of the image; the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;
    // relocate images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;
    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1); // this should show a black-and-white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);
    imshow("Motion Detection", result);
    imshow("diff1", diff1); // I tried to output the image but it is all black
    imshow("diff2", diff2); // same here: all black
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
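For what it's worth, since all four IplImage headers are constructed from the same cv_image->image, every history slot ends up wrapping the same pixel buffer, which matches the "all black diff" symptom described above. A minimal sketch of a deep-copy version of the reordering (an untested assumption, with prev_mframe, current_mframe and next_mframe as members that persist across callbacks):

Mat gray;
cvtColor(cv_image->image, gray, CV_BGR2GRAY); // BGR8 was requested from cv_bridge
prev_mframe    = current_mframe.clone();      // clone() forces separate buffers
current_mframe = next_mframe.clone();
next_mframe    = gray.clone();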
I changed the cv_bridge method to VideoCapture and it seems to work well; cv_bridge just could not save the image even after I changed the IplImage to Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    // check if the camera opened
    if (!cap.isOpened())
    {
        cout << "cannot open the video cam" << endl;
    }
    cout << "camera is opening" << endl;
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY); // capture 3 frames and convert to grayscale
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);
    // relocate images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);
    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
I am performing feature detection in a video/live stream/image using OpenCV C++. The lighting condition varies in different parts of the video, leading to some parts getting ignored while transforming the RGB images to binary images.
The lighting condition in a particular portion of the video also changes over the course of the video. I tried the 'Histogram equalization' function, but it didn't help.
I got a working solution in MATLAB in the following link:
http://in.mathworks.com/help/images/examples/correcting-nonuniform-illumination.html
However, most of the functions used in the above link aren't available in OpenCV.
Can you suggest the alternative of this MATLAB code in OpenCV C++?
OpenCV has the adaptive threshold paradigm available in the framework: http://docs.opencv.org/modules/imgproc/doc/miscellaneous_transformations.html#adaptivethreshold
The function prototype looks like:
void adaptiveThreshold(InputArray src, OutputArray dst,
double maxValue, int adaptiveMethod,
int thresholdType, int blockSize, double C);
The first two parameters are the input image and a place to store the output thresholded image. maxValue is the value assigned to an output pixel should it pass the criteria, adaptiveMethod is the method to use for adaptive thresholding, thresholdType is the type of thresholding you want to perform (more later), blockSize is the size of the windows to examine (more later), and C is a constant subtracted from the mean computed for each window. I've never really needed this constant and I usually set it to 0.
The default method for adaptiveThreshold is to analyze blockSize x blockSize windows and calculate the mean intensity within each window, subtracting C. If the centre of the window is above that mean intensity, the corresponding location in the output image is set to maxValue; otherwise the same position is set to 0. This should combat the non-uniform illumination issue: instead of applying a global threshold to the image, you are thresholding over local pixel neighbourhoods.
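Written out by hand, the mean method with CV_THRESH_BINARY amounts to the following (an illustrative sketch only, assuming gray is your 8-bit grayscale input and using blockSize and C from the prototype above; the real function's border handling may differ slightly):

// Illustrative equivalent of CV_ADAPTIVE_THRESH_MEAN_C + CV_THRESH_BINARY
// with maxValue = 255
Mat localMean, T, thresholded;
blur(gray, localMean, Size(blockSize, blockSize)); // mean of each window
T = localMean - C;                                 // local threshold surface
thresholded = gray > T;                            // 255 where above, 0 otherwise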
You can read the documentation on the other methods for the other parameters, but to get you started, you can do something like this:
// Include libraries
#include <cv.h>
#include <highgui.h>

// For convenience
using namespace cv;

// Example function to adaptively threshold an image
void threshold()
{
    // Load in an image - change "image.jpg" to whatever your image is called
    Mat image;
    image = imread("image.jpg", 1);

    // Convert image to grayscale and show the image
    // Wait for user key before continuing
    Mat gray_image;
    cvtColor(image, gray_image, CV_BGR2GRAY);
    namedWindow("Gray image", CV_WINDOW_AUTOSIZE);
    imshow("Gray image", gray_image);
    waitKey(0);

    // Adaptively threshold the image
    int maxValue = 255;
    int blockSize = 25;
    int C = 0;
    adaptiveThreshold(gray_image, gray_image, maxValue,
                      CV_ADAPTIVE_THRESH_MEAN_C, CV_THRESH_BINARY,
                      blockSize, C);

    // Show the thresholded image
    // Wait for user key before continuing
    namedWindow("Thresholded image", CV_WINDOW_AUTOSIZE);
    imshow("Thresholded image", gray_image);
    waitKey(0);
}

// Main function - run the threshold function
int main(int argc, const char** argv)
{
    threshold();
    return 0;
}
adaptiveThreshold should be your first choice.
But here is the "translation" from MATLAB to OpenCV, so you can easily port your code. As you can see, most of the functions are available in both MATLAB and OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    // Step 1: Read Image
    Mat1b img = imread("path_to_image", IMREAD_GRAYSCALE);

    // Step 2: Use Morphological Opening to Estimate the Background
    Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(15, 15));
    Mat1b background;
    morphologyEx(img, background, MORPH_OPEN, kernel);

    // Step 3: Subtract the Background Image from the Original Image
    Mat1b img2;
    absdiff(img, background, img2);

    // Step 4: Increase the Image Contrast
    // Not needed here; the equivalent would be cv::equalizeHist

    // Step 5(1): Threshold the Image
    Mat1b bw;
    threshold(img2, bw, 50, 255, THRESH_BINARY);

    // Step 6: Identify Objects in the Image
    vector<vector<Point>> contours;
    findContours(bw.clone(), contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);

    for (int i = 0; i < contours.size(); ++i)
    {
        // Step 5(2): bwareaopen
        if (contours[i].size() > 50)
        {
            // Step 7: Examine One Object
            Mat1b object(bw.size(), uchar(0));
            drawContours(object, contours, i, Scalar(255), CV_FILLED);
            imshow("Single Object", object);
            waitKey();
        }
    }
    return 0;
}
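A side note on Step 5(2): contours[i].size() counts boundary points rather than enclosed pixels, so it is only a rough stand-in for MATLAB's bwareaopen. A closer (still approximate) filter would use cv::contourArea inside the same loop:

// Closer analogue of bwareaopen(bw, 50): keep blobs whose filled area is >= 50 px
if (contourArea(contours[i]) >= 50.0)
{
    // ... examine the object as in Step 7 above
}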
I've just completed the Udacity parallel programming course (stage 2), and I'm now implementing what I've learnt in a basic app with OpenCV which applies a Gaussian blur to a constant stream of images coming through a webcam.
I'm loading frames into a Mat object, and within my loop I want to call a method gaussian_cpu; the only problem is that it requires a uchar4 to be passed to both the input and output parameters. How would I convert a Mat object to uchar4?
// Keep processing frames - do CPU first
while (cpu_frames > 0)
{
    cout << cpu_frames << "\n";
    camera >> frameIn;
    gaussian_cpu(frameIn, frameOut, numRows(), numCols(), h_filter__, 9);
    imshow("Source", frameIn);
    imshow("Dest", frameOut);
    // 2 ms delay to prevent the system from being interrupted whilst drawing the new frame
    waitKey(2);
    cpu_frames--;
}
My method signature then looks like this:
void gaussian_cpu(
const uchar4* const rgbaImage, // input image from the camera
uchar4* const outputImage, // The image we are writing back for display
size_t numRows, size_t numCols, // Width and Height of the input image (rows/cols)
const float* const filter, // The value of sigma
const int filterWidth // The size of the stencil (3x3) 9
)
I need to use uchar4 so I can split the channels, do my convolution and then recombine the channels to return the output image. Is there any way to do this?
OpenCV generally uses BGR, 3-channel Mats, but a basic:
Mat bgra;
cvtColor( frameIn, bgra, CV_BGR2BGRA );
will generate an (unused) 4th channel. Now you probably have to allocate memory for your outputImage:
Mat frameOut( bgra.size(), bgra.type() );
then you can feed those into your gaussian_cpu():
int filterWidth=5;
float *filter = ... // your job, not mine ;)
gaussian_cpu( (uchar4*)(bgra.data), (uchar4*)(frameOut.data), bgra.rows, bgra.cols, filter, filterWidth );
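One caveat with those casts: treating bgra.data as a flat uchar4 array assumes both Mats are continuous in memory with no row padding. That holds for freshly allocated frames like these, but it is cheap to assert (a defensive sketch):

// uchar4-style indexing requires one contiguous CV_8UC4 block per Mat
CV_Assert(bgra.isContinuous() && frameOut.isContinuous());
CV_Assert(bgra.type() == CV_8UC4 && frameOut.type() == CV_8UC4);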
I'm trying to show the LiveView image in real time. I use EDSDK 2.14 + Qt5 + OpenCV + MinGW32 under Windows. I'm not very experienced in image processing, and I have the following problem: I use the example from the Canon EDSDK, and all was OK until this part of the code:
//
// Display image
//
I googled a lot of examples, but all of them were written in C#, MFC or VB. I also found advice to use libjpeg-turbo to decompress the image and then show it using OpenCV. I tried to use libjpeg-turbo but failed to understand what to do. Maybe somebody here has a code example of converting the LiveView stream to an OpenCV Mat or a QImage (because I use Qt)?
Here is what worked for me after following SAMPLE 10 from the Canon EDSDK Reference. It's a starting point for a more robust solution.
In the downloadEvfData function, I replaced the "Display Image" part with the code below:
unsigned char* data = NULL;
EdsUInt32 size = 0;
EdsSize coords;

// get image coordinates
EdsGetPropertyData(evfImage, kEdsPropID_Evf_CoordinateSystem, 0, sizeof(coords), &coords);

// get buffer pointer and size
EdsGetPointer(stream, (EdsVoid**)&data);
EdsGetLength(stream, &size);

//
// release stream and evfImage
//

// create Mat object
Mat img(coords.height, coords.width, CV_8U, data);
image = imdecode(img, CV_LOAD_IMAGE_COLOR);
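One thing to double-check here: imdecode expects the raw JPEG byte stream, so wrapping the buffer as a coords.height x coords.width matrix only works if size happens to equal that product. A safer sketch (an assumption on my part, not from the sample) is to wrap the buffer by its actual byte length:

// Wrap the EDSDK stream by its length and let imdecode parse the JPEG
Mat buf(1, (int)size, CV_8U, data);
image = imdecode(buf, CV_LOAD_IMAGE_COLOR);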
I've also changed the function signature:
EdsError downloadEvfData(EdsCameraRef camera, Mat& image)
And in the main function:
Mat image;
namedWindow("main", WINDOW_NORMAL);
startLiveView(camera);
for (;;) {
    downloadEvfData(camera, image);
    imshow("main", image);
    if (waitKey(10) >= 0)
        break;
}
Based on the Canon EDSDK examples, you may append your EdsStreamRef 'stream' data with its correct length to a QByteArray. Then use, for example, the following to parse the raw data from the QByteArray as a JPG into a new QImage:
QImage my_image = QImage::fromData(limagedata, "JPG");
Once it's in a QImage you can convert it into an OpenCV cv::Mat (see How to convert QImage to opencv Mat).
Well, it depends on the format of the LiveView stream.
There must be some kind of delimiter in it, and you then need to convert each image and update your QImage with it.
Check out this tutorial for more information: Canon EDSDK Tutorial in C#
QImage img = QImage::fromData(data, length, "JPG");
m_image = QImageToMat(img);

// -----------------------------------------

cv::Mat MainWindow::QImageToMat(QImage& src)
{
    cv::Mat tmp(src.height(), src.width(), CV_8UC4, (uchar*)src.bits(), src.bytesPerLine());
    cv::Mat result = tmp.clone(); // deep copy so the data outlives the QImage
    return result;
}

// -------------------------

void MainWindow::ShowVideo()
{
    namedWindow("yunhu", WINDOW_NORMAL);
    while (1)
    {
        requestLiveViewImage();
        if (m_image.data != NULL)
        {
            imshow("yunhu", m_image);
            waitKey(50);
        }
    }
}
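Note that QImageToMat assumes the QImage is in a 32-bit-per-pixel format (Format_RGB32 or Format_ARGB32) so that CV_8UC4 matches the bit layout; JPEG data often decodes to Format_RGB888 instead. A hedged variant (QImageToMatSafe is a hypothetical name) normalises the format before wrapping the bits:

cv::Mat QImageToMatSafe(const QImage& src)
{
    // Force a 4-byte-per-pixel layout so CV_8UC4 matches the bit layout
    QImage converted = src.convertToFormat(QImage::Format_ARGB32);
    cv::Mat tmp(converted.height(), converted.width(), CV_8UC4,
                (uchar*)converted.bits(), converted.bytesPerLine());
    return tmp.clone(); // deep copy so the Mat outlives 'converted'
}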
I have a grabber which can capture images and show them on the screen with the following code:
while ((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr + 1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
but I want to use the pointer to the image data and display the frames with OpenCV, because I need to do processing on the pixels. My camera is a mono CCD camera and the pixel depth is 8 bits. I am new to OpenCV: is there any option in OpenCV that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc); and display it on the screen, or get the data from the iPtr pointer and allow me to use the image data?
Creating an IplImage from unsigned char* raw_data takes 2 important instructions: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera; for RGB it would be 3
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error: failed to allocate the image!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
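If you would rather use the C++ API for the same idea, wrapping the grabber's buffer is a one-liner (a sketch, assuming an 8-bit mono image whose width, height and iPtr come from the grabber, and that the rows have no padding):

// Wrap the grabber's buffer without copying; CV_8UC1 matches an 8-bit mono CCD
cv::Mat img(height, width, CV_8UC1, iPtr);
cv::imshow("win1", img);
cv::waitKey(10);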
If you are using C++, I don't understand why you are not doing it the simple way. If your camera is supported, I would do it like this:
cv::VideoCapture capture(0);
if (!capture.isOpened()) {
    // print error
    return -1;
}

cv::namedWindow("viewer");
cv::Mat frame;

while (true)
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);
    int c = cv::waitKey(10);
    if ((char)c == 'c') { break; } // press 'c' to quit
}
I would recommend starting to read the docs and tutorials which you can find here.