Force Qt Camera video format - C++

I'm trying to use Qt Camera from QML.
I'm developing a custom VideoFilter:
QVideoFrame MyFilterRunnable::run(QVideoFrame* input, const QVideoSurfaceFormat&, RunFlags)
I started deploying the application on Windows, where I get:
the frame is mappable as QAbstractVideoBuffer::ReadWrite
the frame pixel format is PixelFormat::Format_BGR32
When I moved to Linux, unfortunately, everything changed without my changing the camera:
the frame is only mappable as QAbstractVideoBuffer::ReadOnly
the frame pixel format is PixelFormat::Format_YUYV
And now I really don't know how to convert this frame to an OpenCV Mat.
Is there any way to choose which pixel format the camera will use?
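For the closing question: since Qt 5.5 you can at least ask for a format through QCameraViewfinderSettings, although the backend (v4l2 on Linux) is free to refuse the request. An untested sketch, assuming camera is your QCamera:

// Untested sketch, Qt 5.5+: request a pixel format from the backend.
// The backend may ignore it, so check what is actually supported.
QCameraViewfinderSettings settings = camera->viewfinderSettings();
settings.setPixelFormat(QVideoFrame::Format_BGR32);
camera->setViewfinderSettings(settings);

// After the camera is loaded, list the formats the backend supports:
for (auto fmt : camera->supportedViewfinderPixelFormats())
    qDebug() << fmt;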

Right on, Steven:
if (input->pixelFormat() == QVideoFrame::Format_YUYV)
{
    auto input_w  = input->width();
    auto input_h  = input->height();
    auto cam_data = input->bits();
    cv::Mat yuyv(input_h, input_w, CV_8UC2, cam_data);
    cv::Mat rgb (input_h, input_w, CV_8UC3);
    cvtColor(yuyv, rgb, CV_YUV2BGR_YUYV);
    m_mat = rgb;
}
else if (input->pixelFormat() == QVideoFrame::Format_YUV420P ||
         input->pixelFormat() == QVideoFrame::Format_NV12)
{
    m_yuv = true;
    m_mat = yuvFrameToMat8(*input);
}
else
{
    m_yuv = false;
    QImage wrapper = imageWrapper(*input);
    if (wrapper.isNull())
    {
        if (input->handleType() == QAbstractVideoBuffer::NoHandle)
            input->unmap();
        return *input;
    }
    m_mat = imageToMat8(wrapper);
}
ensureC3(&m_mat);
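The helpers yuvFrameToMat8, imageWrapper, imageToMat8 and ensureC3 are not shown above. As a rough sketch, assuming a mapped frame with tightly packed planes, yuvFrameToMat8 could look like this:

// Hypothetical sketch of yuvFrameToMat8; the original helper is not shown.
// Assumes the frame is mapped and its Y/UV planes are tightly packed.
cv::Mat yuvFrameToMat8(const QVideoFrame &frame)
{
    cv::Mat yuv(frame.height() * 3 / 2, frame.width(), CV_8UC1,
                const_cast<uchar*>(frame.bits()));
    cv::Mat bgr;
    int code = (frame.pixelFormat() == QVideoFrame::Format_NV12)
                   ? cv::COLOR_YUV2BGR_NV12
                   : cv::COLOR_YUV2BGR_I420;
    cv::cvtColor(yuv, bgr, code);
    return bgr;
}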

I am facing the same problem between my Linux machine and a Raspberry Pi: I am using the same camera, yet the pixel formats given by QVideoFrame are different. It probably has something to do with v4l2.
As for the conversion from YUYV to an OpenCV Mat, this code worked for me:
QVideoFrame *input;
...
auto input_w  = input->width();
auto input_h  = input->height();
auto cam_data = input->bits();
cv::Mat yuyv(input_h, input_w, CV_8UC2, cam_data);
cv::Mat rgb (input_h, input_w, CV_8UC3);
cvtColor(yuyv, rgb, CV_YUV2BGR_YUYV);
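One thing to watch: bits() only returns valid data while the frame is mapped, so outside of run() you may need to map it yourself (sketch):

// bits() is only valid between map() and unmap()
if (input->map(QAbstractVideoBuffer::ReadOnly)) {
    // ... wrap input->bits() in a cv::Mat and convert here ...
    input->unmap();
}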

Related

How to get frames from video using vlc-qt

I'm using vlc-qt to decode an H.264 video stream, but I need every frame of the video (stream) for further processing. I found this link that describes a solution:
https://discuss.tano.si/t/how-to-get-frame-from-video/253
I made a class that inherits from the VlcVideoStream class and re-implemented the frameUpdated() function as below:
void MyVideoStream::frameUpdated() {
    qDebug() << "frame";
    int rows, cols, matType;
    // convert to shared pointer to const frame to avoid crash
    std::shared_ptr<const VlcYUVVideoFrame> frame =
        std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    rows = frame->height + frame->height/2;
    cols = frame->width;
}
and declared my class as :
MyVideoStream *_stream;
_stream = new MyVideoStream(Vlc::YUVFormat, ui->video);
_stream->init(_player);
where _player is a VlcMediaPlayer object reference. But when I run the program, nothing happens; I don't know what the problem is.
When you subclass VlcVideoStream and re-implement frameUpdated(), you have access to the YUV frame every time it is updated.
If you are familiar with OpenCV, just add this code to the frameUpdated() function and you can see the gray image:
void MyVideoStream::frameUpdated()
{
    std::shared_ptr<const VlcYUVVideoFrame> frame =
        std::dynamic_pointer_cast<const VlcYUVVideoFrame>(renderFrame());
    if (!frame) {
        return; // LCOV_EXCL_LINE
    }
    int width  = frame->width;
    int height = frame->height;
    cv::Mat result = cv::Mat(height, width, CV_8UC1, (void*)frame->frameBuffer.data());
    imshow("result", result);
    waitKey(2);
}

Issues reading and displaying a PNG with a transparency layer in OpenCV

I am doing a simple read and display of a PNG image whose background is transparent. I convert the image to greyscale and then display it, but the converted image looks something like this:
Original Image:
Greyscale image:
Here is the code. What am I doing wrong?
Mat image = imread("3X3_a11.png",IMREAD_GRAYSCALE);
Mat output(image.size(),image.type());
// connectedComponents(image, output);
imshow("Output", image);
waitKey(0);
destroyAllWindows();
Your RGBA image is malformed, or at least very weird. On loading it I see the warning:
libpng warning: iCCP: known incorrect sRGB profile
You can, however, get the binary version needed for cv::connectedComponents with simple processing like:
#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat4b img4b = cv::imread("path_to_image", cv::IMREAD_UNCHANGED);

    // Convert to grayscale, getting rid of the alpha channel
    cv::Mat1b img(img4b.rows, img4b.cols, uchar(0));
    for (int r = 0; r < img4b.rows; ++r) {
        for (int c = 0; c < img4b.cols; ++c) {
            if (img4b(r, c) == cv::Vec4b(255, 255, 255, 255)) {
                img(r, c) = uchar(255);
            }
            if (img4b(r, c)[3] == 0) {
                img(r, c) = uchar(255);
            }
        }
    }

    cv::imshow("img", img);
    cv::waitKey();
}
Result:
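From there the binary image can go straight into cv::connectedComponents, for example:

// Label the connected components of the binary image
cv::Mat1i labels;
int n = cv::connectedComponents(img, labels); // n includes the background label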

Absdiff in OpenCV compiles but shows a black image

I have been trying to use absdiff to find motion in an image, but unfortunately it fails; I am new to OpenCV. The code is supposed to use absdiff to determine whether any motion is happening, but the output is pitch black for diff1, diff2 and motion. Meanwhile, next_mframe, current_mframe and prev_mframe show grayscale images, while result shows a clear, normal image. I used this as my reference: http://manmade2.com/simple-home-surveillance-with-opencv-c-and-raspberry-pi/. I think all the image buffers are loaded with the same frame and compared, which would explain the pitch black. Is there any other method I am missing? I am using RTSP to pass the camera's raw image to ROS.
void imageCallback(const sensor_msgs::ImageConstPtr& msg_ptr)
{
    CvPoint center;
    int radius, posX, posY;
    cv_bridge::CvImagePtr cv_image; // to parse image_raw from RTSP
    try
    {
        cv_image = cv_bridge::toCvCopy(msg_ptr, enc::BGR8);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("cv_bridge exception: %s", e.what());
        return;
    }

    frame  = new IplImage(cv_image->image); // frame now holding raw image
    frame1 = new IplImage(cv_image->image);
    frame2 = new IplImage(cv_image->image);
    frame3 = new IplImage(cv_image->image);

    matriximage = cvarrToMat(frame);
    cvtColor(matriximage, matriximage, CV_RGB2GRAY);        // grayscale
    prev_mframe = cvarrToMat(frame1);
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY);        // grayscale
    current_mframe = cvarrToMat(frame2);
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);  // grayscale
    next_mframe = cvarrToMat(frame3);
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);        // grayscale

    // Maximum deviation of the image; the higher the value, the more motion is allowed
    int max_deviation = 20;
    result = matriximage;

    // relocate images in the right order
    prev_mframe = current_mframe;
    current_mframe = next_mframe;
    next_mframe = matriximage;

    //motion = diffImg(prev_mframe, current_mframe, next_mframe);
    absdiff(prev_mframe, next_mframe, diff1);   // this should show a black-and-white image
    absdiff(next_mframe, current_mframe, diff2);
    bitwise_and(diff1, diff2, motion);
    threshold(motion, motion, 35, 255, CV_THRESH_BINARY);
    erode(motion, motion, kernel_ero);

    imshow("Motion Detection", result);
    imshow("diff1", diff1);   // I tried to output the image but it's all black
    imshow("diff2", diff2);   // same here
    imshow("diff1", motion);
    imshow("nextframe", next_mframe);
    imshow("motion", motion);
    char c = cvWaitKey(3);
}
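A likely cause: every callback rebuilds prev_mframe, current_mframe and next_mframe from the same incoming image, so their differences are zero. A sketch of a fix (untested), keeping the member Mats alive across callbacks and cloning each new frame:

// Sketch: seed the buffers on the first callback, then rotate them,
// so the three Mats hold three different frames.
cv::Mat gray;
cvtColor(cv_image->image, gray, CV_RGB2GRAY);
if (next_mframe.empty()) {
    gray.copyTo(prev_mframe);
    gray.copyTo(current_mframe);
    gray.copyTo(next_mframe);
    return;
}
prev_mframe    = current_mframe;   // shallow handoff is fine: gray below
current_mframe = next_mframe;      // is a fresh buffer every callback
next_mframe    = gray.clone();
absdiff(prev_mframe, next_mframe, diff1);
absdiff(next_mframe, current_mframe, diff2);
bitwise_and(diff1, diff2, motion);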
I changed the cv_bridge method to VideoCapture, which seems to function well; cv_bridge just cannot save the image, even though I changed the IplImage to Mat format. Maybe there are other ways, but for now I will go with this method first.
VideoCapture cap(0);

Tracker(void)
{
    // check if the camera worked
    if (!cap.isOpened())
    {
        cout << "cannot open the video cam" << endl;
    }
    cout << "camera is opening" << endl;

    // capture 3 frames and convert them to grayscale
    cap >> prev_mframe;
    cvtColor(prev_mframe, prev_mframe, CV_RGB2GRAY);
    cap >> current_mframe;
    cvtColor(current_mframe, current_mframe, CV_RGB2GRAY);
    cap >> next_mframe;
    cvtColor(next_mframe, next_mframe, CV_RGB2GRAY);

    // relocate images in the right order
    current_mframe.copyTo(prev_mframe);
    next_mframe.copyTo(current_mframe);
    matriximage.copyTo(next_mframe);

    motion = diffImg(prev_mframe, current_mframe, next_mframe);
}
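For reference, the diffImg helper from the linked tutorial is usually implemented along these lines (sketch, mirroring the absdiff/bitwise_and combination from the question):

// Motion is where both consecutive frame differences agree
cv::Mat diffImg(const cv::Mat &prev, const cv::Mat &current, const cv::Mat &next)
{
    cv::Mat d1, d2, motion;
    cv::absdiff(prev, next, d1);
    cv::absdiff(next, current, d2);
    cv::bitwise_and(d1, d2, motion);
    return motion;
}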

Canon LiveView: image conversion to OpenCV Mat

I'm trying to show the LiveView image in real time. I use EDSDK 2.14 + Qt 5 + OpenCV + MinGW32 under Windows. I'm not very sophisticated in image processing, so I have the following problem: I use the example from the Canon EDSDK, and all was OK until this part of the code:
//
// Display image
//
I googled a lot of examples, but all of them were written in C#, MFC or VB. I also found advice to use libjpeg-turbo for decompressing the image and then showing it using OpenCV. I tried to use libjpeg-turbo but failed to understand what to do :(. Maybe somebody here has a code example of converting the LiveView stream to an OpenCV Mat or QImage (because I use Qt)?
Here is what worked for me after following SAMPLE 10 from the Canon EDSDK Reference. It's a starting point for a more robust solution.
In the downloadEvfData function, I replaced the "Display Image" part with the code below:
unsigned char *data = NULL;
EdsUInt32 size = 0;
EdsSize coords;

// get image coordinates
EdsGetPropertyData(evfImage, kEdsPropID_Evf_CoordinateSystem, 0, sizeof(coords), &coords);

// get buffer pointer and size
EdsGetPointer(stream, (EdsVoid**)&data);
EdsGetLength(stream, &size);

//
// release stream and evfImage
//

// create Mat object and decode the JPEG data
Mat img(coords.height, coords.width, CV_8U, data);
image = imdecode(img, CV_LOAD_IMAGE_COLOR);
I've also changed the function signature:
EdsError downloadEvfData(EdsCameraRef camera, Mat& image)
And in the main function:
Mat image;
namedWindow("main", WINDOW_NORMAL);
startLiveView(camera);
for (;;) {
    downloadEvfData(camera, image);
    imshow("main", image);
    if (waitKey(10) >= 0)
        break;
}
Based on the Canon EDSDK examples, you may append your EdsStreamRef 'stream' data, with its correct length, to a QByteArray. Then use, for example, the following to parse the raw data from the QByteArray as a JPG into a new QImage:
QImage my_image = QImage::fromData(limagedata, "JPG");
Once it's in a QImage you can convert it into an OpenCV cv::Mat (see How to convert QImage to opencv Mat).
Well, it depends on the format of the live view stream.
There must be some kind of delimiter in it, and you then need to convert each image and update your QImage with it.
Check out this tutorial for more information: Canon EDSDK Tutorial in C#
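If the stream is MJPEG, the delimiters are the JPEG start-of-image marker 0xFF 0xD8 and end-of-image marker 0xFF 0xD9; a minimal splitting sketch over an accumulated byte buffer:

// Sketch: extract the first complete JPEG frame from a growing stream buffer
QByteArray takeNextJpeg(QByteArray &buffer)
{
    const QByteArray soi("\xFF\xD8", 2);   // start of image
    const QByteArray eoi("\xFF\xD9", 2);   // end of image
    int start = buffer.indexOf(soi);
    int end   = buffer.indexOf(eoi, start + 2);
    if (start < 0 || end < 0)
        return QByteArray();               // no complete frame yet
    QByteArray jpeg = buffer.mid(start, end + 2 - start);
    buffer.remove(0, end + 2);             // drop the consumed bytes
    return jpeg;
}

Each extracted chunk can then be decoded just like in the snippet below.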
QImage img = QImage::fromData(data, length, "JPG");
m_image = QImageToMat(img);

// -----------------------------------------
cv::Mat MainWindow::QImageToMat(QImage& src)
{
    cv::Mat tmp(src.height(), src.width(), CV_8UC4, (uchar*)src.bits(), src.bytesPerLine());
    cv::Mat result = tmp.clone();
    return result;
}

// -------------------------
void MainWindow::ShowVideo()
{
    namedWindow("yunhu", WINDOW_NORMAL);
    while (1)
    {
        requestLiveViewImage();
        if (m_image.data != NULL)
        {
            imshow("yunhu", m_image);
            cvWaitKey(50);
        }
    }
}

OpenCV: convert a pointer in memory to an image

I have a grabber which can get images and show them on the screen with the following code:
while ((lastPicNr = Fg_getLastPicNumberBlockingEx(fg, lastPicNr+1, 0, 10, _memoryAllc)) < 200) {
    iPtr = (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc);
    ::DrawBuffer(nId, iPtr, lastPicNr, "testing");
}
But I want to use the pointer to the image data and display it with OpenCV, because I need to do processing on the pixels. My camera is a CCD mono camera and the pixel depth is 8 bits. I am new to OpenCV: is there any option that can take the return value of (unsigned char*)Fg_getImagePtrEx(fg, lastPicNr, 0, _memoryAllc) and display it on the screen, or get the data from the iPtr pointer and let me use the image data?
Creating an IplImage from unsigned char* raw_data takes 2 important instructions: cvCreateImageHeader() and cvSetData():
// 1 channel for a mono camera; for RGB it would be 3
int channels = 1;
IplImage* cv_image = cvCreateImageHeader(cvSize(width, height), IPL_DEPTH_8U, channels);
if (!cv_image)
{
    // print error: failed to allocate image!
}
cvSetData(cv_image, raw_data, cv_image->widthStep);

cvNamedWindow("win1", CV_WINDOW_AUTOSIZE);
cvShowImage("win1", cv_image);
cvWaitKey(10);

// release resources
cvReleaseImageHeader(&cv_image);
cvDestroyWindow("win1");
I haven't tested the code, but the roadmap for the code you are looking for is there.
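As an aside, with the C++ API the header-plus-data dance collapses into a single constructor call. For the 8-bit mono buffer above (sketch, assuming width and height are known):

// Sketch: wrap the raw mono buffer without copying, then display it
cv::Mat img(height, width, CV_8UC1, iPtr);
cv::imshow("win1", img);
cv::waitKey(10);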
If you are using C++ and your camera is supported, I don't understand why you are not doing it the simple way:
cv::VideoCapture capture(0);
if (!capture.isOpened()) {
    // print error
    return -1;
}
cv::namedWindow("viewer");
cv::Mat frame;
while (true)
{
    capture >> frame;
    // ... processing here
    cv::imshow("viewer", frame);
    int c = cv::waitKey(10);
    if ((char)c == 'c') { break; } // press 'c' to quit
}
I would recommend starting to read the docs and tutorials which you can find here.