Creating a grayscale video using OpenCV - C++

I need to save a grayscale video from a GigE camera using OpenCV on Mac OS X 10.8. I used this code:
namedWindow("My video",CV_WINDOW_AUTOSIZE);
Size frameSize(659, 493);
VideoWriter oVideoWriter("MyVideo.avi", -1, 30, frameSize, false); // isColor = false
while (1)
{
...
Mat Image=Mat(Size(GCamera.Frames[Index].Width,GCamera.Frames[Index].Height),CV_8UC1,GCamera.Frames[Index].ImageBuffer);
oVideoWriter.write(Image);
...
}
I got this error:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cvtColor, file /Users/rosa/OpenCV-2.4.3/modules/imgproc/src/color.cpp, line 3270
libc++abi.dylib: terminate called throwing an exception
The program has unexpectedly finished.

I worked around it this way:
VideoWriter oVideoWriter ("MyVideo.avi",CV_FOURCC('M','J','P','G'), 30, frameSize);
while (1)
{
Mat Image=Mat(Size(GCamera.Frames[Index].Width,GCamera.Frames[Index].Height),CV_8UC1,GCamera.Frames[Index].ImageBuffer);
Mat colorFrame;
cvtColor(Image, colorFrame, CV_GRAY2BGR);
oVideoWriter.write(colorFrame);
}

Your issue is the operating system. According to the documentation, the grayscale mode of VideoWriter (isColor = false) is supported on Windows only.
Easy enough fix, though: replicate the single channel three times so the writer receives a 3-channel image:
cv::Mat imageGrey;
cv::Mat imageArr[] = {Image, Image, Image};
cv::merge(imageArr, 3, imageGrey);
oVideoWriter.write(imageGrey);
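For context, a minimal sketch of the capture loop with this fix applied (assuming the same GCamera frame fields as in the question):
VideoWriter oVideoWriter("MyVideo.avi", -1, 30, frameSize); // isColor defaults to true
while (1)
{
    // wrap the camera buffer in a single-channel Mat (no pixel copy is made)
    Mat Image = Mat(Size(GCamera.Frames[Index].Width, GCamera.Frames[Index].Height),
                    CV_8UC1, GCamera.Frames[Index].ImageBuffer);
    // replicate the grayscale plane into 3 identical channels
    cv::Mat imageArr[] = {Image, Image, Image};
    cv::Mat imageGrey;
    cv::merge(imageArr, 3, imageGrey);
    oVideoWriter.write(imageGrey);
}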

Related

Error: OpenCV 3.4.0 CUDA ORB feature detection

I am using the ORB detector to detect the keypoints of each frame of a video, but it gives me the following error:
OpenCV Error: Assertion failed (img.type() == (((0) & ((1 << 3) - 1)) + (((1)-1) << 3))) in detectAsync
The CPU version of the ORB detector works fine, but somehow the GPU version is not able to detect the keypoints of the image. I have also tried using FastFeatureDetector from CUDA, but it fails as well.
I am attaching my code below:
#include <opencv2/opencv.hpp>
#include <opencv2/cudafeatures2d.hpp>

int main()
{
cv::VideoCapture input("/home/admin/Pictures/cars.mp4");
// Create image matrix
cv::Mat img,desc;
cv::cuda::GpuMat obj1;
// create vector keypoints
std::vector<cv::KeyPoint> keypoints;
// create keypoint detector
cv::Ptr<cv::cuda::ORB> detector = cv::cuda::ORB::create();
for(;;)
{
if(!input.read(img))
break;
obj1.upload(img);
detector->detect(obj1, keypoints, cv::cuda::GpuMat());
obj1.download(desc); // download the frame back to host for drawing
// Create circle at keypoints
for(size_t i = 0; i < keypoints.size(); i++)
    cv::circle(desc, keypoints[i].pt, 2, cv::Scalar(0, 0, 255), 1);
// Display Image
cv::imshow("img", desc);
char c = cv::waitKey();
// NOTE: Press any key to run the next frame
// Wait for a key to press
if(c == 27) // 27 is ESC key code
break;
}
}
The main problem is that the detector takes CV_8UC1 as the input format for Mat, but the format of my image is CV_8UC3. I have tried converting the image using img.convertTo(img1, CV_8UC1), but it is still unable to process it and throws the error OpenCV Error: Bad flag (parameter or structure field) (Unrecognized or unsupported array type) in cvGetMat.
convertTo changes the element depth, not the number of channels, so it does not help here. You have to convert your image from CV_8UC3 (3-channel) to CV_8UC1 (single-channel grayscale). To do so, simply call
cv::cvtColor(img, img, cv::COLOR_BGR2GRAY);
prior to uploading the data to the GPU.
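A minimal sketch of the corrected loop, under the same setup as the question:
cv::Mat img, grey;
cv::cuda::GpuMat obj1;
std::vector<cv::KeyPoint> keypoints;
cv::Ptr<cv::cuda::ORB> detector = cv::cuda::ORB::create();
for(;;)
{
    if(!input.read(img))
        break;
    cv::cvtColor(img, grey, cv::COLOR_BGR2GRAY); // CV_8UC3 -> CV_8UC1
    obj1.upload(grey);                           // upload the single-channel frame
    detector->detect(obj1, keypoints, cv::cuda::GpuMat());
    // draw the keypoints on the original color frame (img) and display as before
}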

Error when converting from HSV to BGR, or HSV to JPEG in openCV after using inRange

I am using OpenCV 3.1.0 (I have tried 2.4.9 as well, with the same problem). I want to output an HSV Mat to JPEG:
// .. Getting JPEG content into memory
// JPEG to mat
Mat imgBuf = Mat(1, jpegSize, CV_8UC1, jpegContent); // wrap the raw JPEG bytes
Mat imgMat=imdecode(imgBuf, CV_LOAD_IMAGE_COLOR);
free(jpegContent);
if(imgMat.data == NULL) {
// Some error handling
}
// Now the JPEG is decoded and reside in imgMat
cvtColor(imgMat, imgMat, CV_BGR2HSV); // Converting to HSV
Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp); // Problem goes here
cvtColor(tmp, imgMat, CV_HSV2BGR);
// Mat to JPEG
vector<uchar> buf;
imencode(".jpg", imgMat, buf, std::vector<int>());
outputJPEG=(unsigned char*)malloc(buf.size());
memcpy(outputJPEG, &buf[0], buf.size());
// ... Output JPEG
The problem is, when I call cvtColor(tmp, imgMat, CV_HSV2BGR) after inRange, my program fails with:
OpenCV Error: Assertion failed (scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F)) in cvtColor, file /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp, line 8176
terminate called after throwing an instance of 'cv::Exception'
what(): /home/pi/opencv/src/opencv-3.1.0/modules/imgproc/src/color.cpp:8176: error: (-215) scn == 3 && (dcn == 3 || dcn == 4) && (depth == CV_8U || depth == CV_32F) in function cvtColor
If I remove inRange, the program works just fine. I have also tried removing the cvtColor call and letting imencode do its job, automatically converting HSV to BGR and then to JPEG. This time there is no assertion failure, but I get a corrupted JPEG image, as GStreamer complains:
gstrtpjpegpay.c(581): gst_rtp_jpeg_pay_read_sof ():
/GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0
WARNING: from element /GstPipeline:pipeline0/GstRtpJPEGPay:rtpjpegpay0: Wrong SOF length 11.
Again, removing inRange solves this issue as well and produces good JPEG data.
So, is it my improper use of inRange that causes the corrupted image data? If yes, what is the correct way to use inRange?
inRange produces a single channel binary matrix, i.e. a CV_8UC1 matrix with values either 0 or 255.
So you cannot convert tmp with HSV2BGR, because the source image tmp doesn't have 3 channels.
OpenCV is telling you exactly this: scn (source channels) is not 3.
Since you probably want to keep, and then convert to BGR, only the part of the image within your range, you can:
set to black everything outside the range: imgMat.setTo(Scalar(0,0,0), ~tmp);
convert the resulting image to BGR: cvtColor(imgMat, imgMat, CV_HSV2BGR);
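Putting it together, a minimal sketch (assuming imgMat has already been converted to HSV as above):
Mat tmp;
inRange(imgMat, Scalar(0, 0, 0), Scalar(8, 8, 8), tmp); // tmp is a CV_8UC1 mask
imgMat.setTo(Scalar(0, 0, 0), ~tmp);  // black out pixels outside the range
cvtColor(imgMat, imgMat, CV_HSV2BGR); // imgMat still has 3 channels, so this works
// imgMat can now be passed to imencode as before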

OpenCV3 Assertion failed on accumulateWeighted

I am trying to compute an average background, but something is wrong and it crashes the app.
cv::Mat firstFrame;
cv::Mat averageBackground;
int frameCounter=0;
// this function is called for every frame of the camera
- (void)processImage:(Mat&)image {
cv::Mat diffFrame;
cv::Mat currentFrame;
cv::Mat colourCopy;
cvtColor(image, currentFrame, COLOR_BGR2GRAY);
averageBackground = cv::Mat::zeros(image.size(), CV_32FC3);
cv::accumulateWeighted(currentFrame, averageBackground, 0.01);
cvtColor(image, colourCopy, COLOR_BGR2RGB);
}
I see in crash logs
OpenCV Error: Assertion failed (_src.sameSize(_dst) && dcn == scn) in accumulateWeighted, file /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp, line 1108
libc++abi.dylib: terminating with uncaught exception of type cv::Exception: /Volumes/Linux/builds/precommit_ios/opencv/modules/imgproc/src/accum.cpp:1108: error: (-215) _src.sameSize(_dst) && dcn == scn in function accumulateWeighted
In cv::accumulateWeighted the input and the output image must have the same number of channels. In your case currentFrame has only one channel because of the COLOR_BGR2GRAY conversion you did before, while averageBackground has three channels.
Also be careful with averageBackground = cv::Mat::zeros(image.size(), CV_32FC3);: with this line you are re-initializing the result image on every frame, deleting the previously accumulated values that are needed to compute the average. You must initialize this image only once, at the beginning of your program.
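A minimal sketch with both fixes applied (assuming processImage is called once per camera frame, as in the question):
cv::Mat averageBackground; // filled lazily on the first frame
- (void)processImage:(Mat&)image {
    cv::Mat currentFrame;
    cvtColor(image, currentFrame, COLOR_BGR2GRAY); // currentFrame is CV_8UC1
    if (averageBackground.empty())                 // initialize only once
        averageBackground = cv::Mat::zeros(image.size(), CV_32FC1); // single channel, matching the input
    cv::accumulateWeighted(currentFrame, averageBackground, 0.01);
}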

OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv:: cvtColor, file ..\..\..\..\opencv\modules\imgproc\src\color.cpp, line 3737

Hi, I am trying to run this sample code from OpenCV:
#include "opencv2\opencv.hpp"
using namespace cv;
int main(int, char**)
{
VideoCapture cap(0); // open the default camera
if (!cap.isOpened()) // check if we succeeded
return -1;
Mat edges;
namedWindow("edges", 1);
for (;;)
{
Mat frame;
cap >> frame; // get a new frame from camera
cvtColor(frame, edges, CV_BGR2GRAY);
GaussianBlur(edges, edges, Size(7, 7), 1.5, 1.5);
Canny(edges, edges, 0, 30, 3);
imshow("edges", edges);
if (waitKey(30) >= 0) break;
}
// the camera will be deinitialized automatically in VideoCapture destructor
return 0;
}
I am currently running Windows 7 x64 under Boot Camp on a MacBook Pro. I'm running this code with Visual Studio 2013 and OpenCV 2.4.9.
This is how I've set up my Config Properties:
VC++ Directories: Include Directories: H:\opencv\build\include;$(IncludePath)
Linker:General:Additional Library Directories: H:\opencv\build\x64\vc12\lib;%(AdditionalLibraryDirectories)
Linker:Input:Additional Dependencies: opencv_calib3d249.lib;opencv_contrib249.lib;opencv_core249.lib;opencv_features2d249.lib;opencv_flann249.lib;opencv_gpu249.lib;opencv_highgui249.lib;opencv_imgproc249.lib;opencv_legacy249.lib;opencv_ml249.lib;opencv_nonfree249.lib;opencv_objdetect249.lib;opencv_ocl249.lib;opencv_photo249.lib;opencv_stitching249.lib;opencv_superres249.lib;opencv_ts249.lib;opencv_video249.lib;opencv_videostab249.lib;%(AdditionalDependencies)
When I click on Local Windows Debugger in Release x64 mode, I get the following error from Visual Studio:
First-chance exception at 0x000007FEFD21B3DD in Project3.exe:
Microsoft C++ exception: cv::Exception at memory location
0x000000000019A8A0.
If there is a handler for this exception, the program may be safely
continued.
When I click Break instead (I'm scared to press Continue), a window named "edges" does pop up, and the camera does turn on, since its green light comes on. But I also get the following error in the command window:
OpenCV Error: Assertion failed (scn == 3 || scn == 4) in cv::
cvtColor, file ........\opencv\modules\imgproc\src\color.cpp, line
3737
I'm pretty new to C++ and Visual Studio; any help would be appreciated. Thanks in advance!
From the conversation in the comments to the question, we saw that this VideoCapture delivers frames that are already grayscale. So the call to cvtColor with CV_BGR2GRAY caused the crash: its source must have 3 or 4 channels, which is exactly what the scn == 3 || scn == 4 assertion checks.
...
Mat frame;
cap >> frame; // frame is already CV_8UC1
//cvtColor(frame, edges, CV_BGR2GRAY); // don't convert here, or it will crash!
edges = frame.clone();
...
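If the same code must also run with cameras that do deliver BGR frames, a defensive variant (a sketch, not part of the original answer) is to branch on the channel count:
Mat frame;
cap >> frame;
if (frame.channels() == 3)
    cvtColor(frame, edges, CV_BGR2GRAY); // color camera: convert
else
    edges = frame.clone();               // already single-channel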

VLC can't load simple openCV video filter

I've written a simple OpenCV test video filter for VLC. It builds with no errors, but on VLC launch I get:
core libvlc warning: cannot load module
`/vlc_build/vlc/modules/.libs/libnormalization_cpp_plugin.so'
(/vlc_build/vlc/modules/.libs/libnormalization_cpp_plugin.so:
undefined symbol: _ZTVN2cv11_InputArrayE)
I'm using OpenCV 3.0.0-beta and VLC media player 3.0.0-git Vetinari (revision 2.2.0-git-2564-gc8549fb)
Here is the main function that transforms the video frame by frame:
static picture_t *FilterNormalizeOpenCV( filter_t *p_filter, picture_t *p_pic ){
IplImage* p_img;
int i_planes = 0;
CvPoint pt1, pt2;
filter_sys_t *p_sys = p_filter->p_sys;
Mat frame, frame_hist_equalized;
vector<Mat> channels;
//picture_t to IplImage without segmentation fault
p_img = cvCreateImageHeader( cvSize( p_pic->p[0].i_pitch, p_pic->p[0].i_visible_lines ),
IPL_DEPTH_8U, 1 );
cvSetData( p_img, p_pic->p[0].p_pixels, p_pic->p[0].i_pitch );
frame = cvarrToMat(p_img);
cvtColor(frame, frame_hist_equalized, COLOR_BGR2YCrCb);
IplImage* z_img = new IplImage(frame_hist_equalized);
cvGetRawData( z_img, (uchar**)&p_pic->p[0].p_pixels, NULL, NULL );
return p_pic;
}
I guess it fails on the cvtColor() call, possibly because of an incorrect conversion from the VLC picture (p_pic) to the OpenCV Mat (frame). I couldn't find any reasonable solution on the web.
Found the solution: the mistake was incorrect linking of the OpenCV libraries during the VLC build. The undefined symbol _ZTVN2cv11_InputArrayE demangles to "vtable for cv::_InputArray", which means libopencv_core was not being linked into the plugin; relinking the plugin against the OpenCV libraries fixed it.