Accessing Raspberry Pi camera using C++

I am trying to run OpenCV in C++ and capture the camera input.
The program looks like this:
#include <iostream>
#include <sstream>
#include <string>
#include <opencv2/opencv.hpp>
#include <opencv2/core.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/highgui.hpp>

#define INPUT_WIDTH 3264
#define INPUT_HEIGHT 2464
#define DISPLAY_WIDTH 640
#define DISPLAY_HEIGHT 480
#define CAMERA_FRAMERATE 21/1
#define FLIP 2

void DisplayVersion()
{
    std::cout << "OpenCV version: " << cv::getVersionMajor() << "."
              << cv::getVersionMinor() << "." << cv::getVersionRevision() << std::endl;
}
int main(int argc, const char** argv)
{
    DisplayVersion();

    std::stringstream ss;
    ss << "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=3264, height=2464, format=NV12, framerate=21/1 ! nvvidconv flip-method=2 ! video/x-raw, width=480, height=680, format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";
    //ss << "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=" << INPUT_WIDTH <<
    //    ", height=" << INPUT_HEIGHT <<
    //    ", format=NV12, framerate=" << CAMERA_FRAMERATE <<
    //    " ! nvvidconv flip-method=" << FLIP <<
    //    " ! video/x-raw, width=" << DISPLAY_WIDTH <<
    //    ", height=" << DISPLAY_HEIGHT <<
    //    ", format=BGRx ! videoconvert ! video/x-raw, format=BGR ! appsink";

    cv::VideoCapture video;
    video.open(ss.str());
    if (!video.isOpened())
    {
        std::cout << "Unable to get video from the camera!" << std::endl;
        return -1;
    }
    std::cout << "Got here!" << std::endl;

    cv::Mat frame;
    while (video.read(frame))
    {
        cv::imshow("Video feed", frame);
        if (cv::waitKey(25) >= 0)
        {
            break;
        }
    }
    std::cout << "Finished!" << std::endl;
    return 0;
}
When running this code I get the following output:
OpenCV version: 4.6.0
nvbuf_utils: Could not get EGL display connection
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 Failed to create CaptureSession
[ WARN:0#0.269] global /tmp/build_opencv/opencv/modules/videoio/src/cap_gstreamer.cpp (1405) open OpenCV | GStreamer warning: Cannot query video position: status=0, value=-1, duration=-1
Got here!
Finished!
If I run video.open() with the commented-out pipeline string instead, I get this output:
OpenCV version: 4.6.0
nvbuf_utils: Could not get EGL display connection
Error generated. /dvs/git/dirty/git-master_linux/multimedia/nvgstreamer/gst-nvarguscamera/gstnvarguscamerasrc.cpp, execute:751 Failed to create CaptureSession
I'm currently running this in headless mode on a Jetson Nano.
I also know that OpenCV and XLaunch work, because I can use MJPEG streamer on my laptop and successfully stream the laptop's camera to the Jetson Nano using video.open(http://laptop-ip:laptop-port/); that works correctly (OpenCV is able to display a live video feed through XLaunch just fine).
I think this command is telling me my camera is successfully installed:
$ v4l2-ctl -d /dev/video0 --list-formats-ext
ioctl: VIDIOC_ENUM_FMT
    Index       : 0
    Type        : Video Capture
    Pixel Format: 'RG10'
    Name        : 10-bit Bayer RGRG/GBGB
        Size: Discrete 3264x2464
            Interval: Discrete 0.048s (21.000 fps)
        Size: Discrete 3264x1848
            Interval: Discrete 0.036s (28.000 fps)
        Size: Discrete 1920x1080
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 1640x1232
            Interval: Discrete 0.033s (30.000 fps)
        Size: Discrete 1280x720
            Interval: Discrete 0.017s (60.000 fps)
Any help would be much appreciated.

The error seems to be saying that you cannot use EGL (i.e. OpenGL) in headless mode, because there is no screen.
If you run in headless mode, wouldn't it make more sense not to try to open a window to display the video at all?
cv::imshow("Video feed", frame);
if (cv::waitKey(25) >= 0)
{
    break;
}
Remove this code and instead use cv::imwrite to write frames to files, or do whatever else you want with the data; for example, something like the sketch below.
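A minimal headless sketch (my illustration; it reuses the video capture object from the question's main(), and the filename pattern and 100-frame cutoff are arbitrary):

cv::Mat frame;
int i = 0;
while (video.read(frame) && i < 100)   // 100 frames is an arbitrary stop condition
{
    // Save each frame as a numbered JPEG instead of displaying it,
    // e.g. frame_0.jpg, frame_1.jpg, ...
    cv::imwrite("frame_" + std::to_string(i++) + ".jpg", frame);
}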
Or, if you connect over SSH, run ssh with the -X option to show the windows on your client computer instead. It can be slow, but if you really want to use cv::imshow it is an option.

Well, I fixed it by rebooting. I had already done a reboot before, and I still get some errors whenever I run the program, but I had recompiled the dlib library, so I do think that when you update the GStreamer library you need to reboot your machine to use it successfully.

Related

GStreamer HLS stream cannot be read

I'm trying to create an HLS stream using OpenCV and GStreamer on Linux (Ubuntu 20.10).
OpenCV was successfully installed with GStreamer support.
I have created a simple application with the help of these two tutorials:
http://4youngpadawans.com/stream-live-video-to-browser-using-gstreamer/
How to use Opencv VideoWriter with GStreamer?
The code is the following:
#include <string>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio/videoio_c.h>

using namespace std;
using namespace cv;

int main()
{
    VideoCapture cap;
    if (!cap.open(0, CAP_V4L2))
        return 0;

    VideoWriter writer(
        "appsrc ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
        0,
        20,
        Size(800, 600),
        true);
    if (!writer.isOpened()) {
        std::cout << "VideoWriter not opened" << endl;
        exit(-1);
    }

    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break;       // end of video stream
        writer.write(frame);
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break;   // stop capturing by pressing ESC
    }
}
The HTTP server was started using the Python command
python3 -m http.server 8080
At first look everything is fine: GStreamer creates all the needed files (the playlist and the xxx.ts segments), and the HTTP server answers the requests (screenshots of the server folder contents and the server responses omitted).
But if I try to play the stream in a browser it does not work, and playing it with the VLC player does not work either (green screen).
Could someone give me a hint as to what I'm doing wrong?
Thanks in advance!
Check what stream format is created, and check what color format you push into the pipeline. If it's RGB, chances are you are creating a non-4:2:0 stream, which has very limited decoder support; see the sketch below for one way to force 4:2:0.
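A minimal sketch of that idea (assuming the paths, address, and backend arguments from the question; an illustration, not a verified fix): pin the format to I420 with a caps filter before x264enc so the encoder produces a 4:2:0 stream.

// Same writer as in the question, with an explicit I420 (4:2:0) caps filter
// inserted before the encoder; videoconvert does the BGR -> I420 conversion.
VideoWriter writer(
    "appsrc ! videoconvert ! videoscale "
    "! video/x-raw,width=640,height=480,format=I420 "
    "! x264enc ! mpegtsmux "
    "! hlssink playlist-root=http://192.168.1.42:8080 "
    "location=/home/sem/hls/segment_%05d.ts "
    "target-duration=5 max-files=5 "
    "playlist-location=/home/sem/hls/playlist.m3u8",
    0,               // fourcc 0 with a GStreamer pipeline, as in the question
    20,              // frame rate; ideally read from the capture device
    Size(640, 480),  // must match the caps above and the frames you write
    true);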
Thanks Florian,
I tried to change the format, but that was not the problem.
First, you should take the real frame rate from the capture device:
int fps = cap.get(CV_CAP_PROP_FPS);
VideoWriter writer(
    "appsrc ! videoconvert ! videoscale ! video/x-raw, width=640, height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
    0,
    fps,
    Size(640, 480),
    true);
Second, the frame size must be the same everywhere it is mentioned.
The captured frame must also be resized:
resize(frame, frame, Size(640,480));
writer.write(frame);
After these changes the chunks generated by GStreamer can be opened in a local player and the video works. Unfortunately, remote access is still failing. :(

Video streaming to Android from PC after processing in OpenCV, using RTSP

I am trying to stream a combined video stream, taken from two webcams, to an Android app after processing in OpenCV (combining the two frames).
Here I am trying to use RTSP to send the video stream from OpenCV to Android (using a GStreamer pipeline).
But I am stuck on how to send the .sdp file configuration to the client (the file name is live.sdp). Here's the code I have used so far:
// basic
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
// opencv libraries
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"

using namespace cv;
using namespace std;

int main(int argc, char **argv) {
    Mat im1;
    Mat im2;
    VideoCapture cap1(1);
    VideoCapture cap2(2);
    VideoWriter video;
    video.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! rtpmp2tpay send-config=true config-interval=10 pt=96 ! udpsink host=localhost port=5000 -v",
               0, (double)20, Size(1280, 480), true);
    if (video.isOpened()) {
        cout << "Video Writer is opened!" << endl;
    } else {
        cout << "Video Writer is Closed!" << endl;
        return -1;
    }
    while (1) {
        cap1.grab();
        cap2.grab();
        bool bSuccess1 = cap1.read(im2);
        bool bSuccess2 = cap2.read(im1);
        // Place the two frames side by side in one combined image
        Size sz1 = im1.size();
        Size sz2 = im2.size();
        Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
        Mat left(im3, Rect(0, 0, sz1.width, sz1.height));
        im1.copyTo(left);
        Mat right(im3, Rect(sz1.width, 0, sz2.width, sz2.height));
        im2.copyTo(right);
        video << im3;
        //imshow("im3", im3);
        if (waitKey(10) == 27) {
            break;
        }
    }
    cap1.release();
    cap2.release();
    video.release();
    return 0;
}
And the .sdp file configuration:
v=0
m=video 5000 RTP/AVP 96
c=IN IP4 localhost
a=rtpmap:96 MP2T/90000
I can play the stream locally with VLC, from the folder containing the file, using:
vlc live.sdp
but not over the network, using:
vlc rtsp://localhost:5000/live.sdp
Using GStreamer with OpenCV solved the problem, but there is still a small lag.
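For completeness, a sketch of a matching receive pipeline (my assumption based on the sender above, not the poster's code; the decoder elements vary by platform). The sender payloads MPEG-TS into RTP (rtpmp2tpay pt=96), so the receiver has to depayload and demux before decoding:

// Hypothetical receiving side for the rtpmp2tpay/udpsink sender above.
cv::VideoCapture rx(
    "udpsrc port=5000 caps=\"application/x-rtp, media=video, "
    "clock-rate=90000, encoding-name=MP2T, payload=96\" "
    "! rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 "
    "! videoconvert ! appsink",
    cv::CAP_GSTREAMER);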

Reading camera image using raspistill from C++ program

I'd like to capture images on the RPi at a rate of at least 60 Hz. My code is in C++, and we have a library here with a C++ interface, but that library maxes out at 30 Hz.
My target is a minimum of 60 Hz.
So far I have found that raspistill can go up to 90 Hz, so I am trying to interface my C++ program with the raspistill code.
I found one library here, PiCam, that has a direct interface to raspistill. I am not sure it can reach 60 Hz; I am still testing it and have a few issues.
My queries are:
(1) How is it possible to get 60 fps on the RPi using C++?
(2) To interface with PiCam, I have already compiled, built, and installed the library with no issues. But I get a black image when I capture. What could be the issue? Part of my code is shown below.
CCamera* cam = StartCamera(640, 480, 60, 1, true);
char mybuffer[640 * 480 * 4];
int ret = cam->ReadFrame(0, mybuffer, sizeof(mybuffer));
cout << " ret " << ret << endl;
Mat img(480, 640, CV_8UC4, mybuffer);
imwrite("img.jpg", img);
img.jpg is captured, but the image is black.
(3) Using PiCam, how can I change to a gray image?
I use Raspicam from here on a Raspberry Pi 3 and get around 90 fps in black-and-white mode.
I am currently re-purposing the code for something else, so it is not 100% matched to your needs, but it should get you going.
#include <ctime>
#include <fstream>
#include <iostream>
#include <raspicam/raspicam.h>
#include <unistd.h>   // for usleep()

using namespace std;

#define NFRAMES 1000
#define WIDTH   1280
#define HEIGHT  960

int main(int argc, char **argv) {
    raspicam::RaspiCam Camera;

    // Allowable values: RASPICAM_FORMAT_GRAY, RASPICAM_FORMAT_RGB,
    // RASPICAM_FORMAT_BGR, RASPICAM_FORMAT_YUV420
    Camera.setFormat(raspicam::RASPICAM_FORMAT_GRAY);

    // Allowable widths: 320, 640, 1280
    // Allowable heights: 240, 480, 960
    // setCaptureSize(width, height)
    Camera.setCaptureSize(WIDTH, HEIGHT);

    // Open camera
    cout << "Opening Camera..." << endl;
    if (!Camera.open()) { cerr << "Error opening camera" << endl; return -1; }

    // Wait until camera stabilizes
    cout << "Sleeping for 3 secs" << endl;
    usleep(3000000);

    cout << "Grabbing " << NFRAMES << " frames" << endl;

    // Allocate memory for camera buffer
    unsigned long bytes = Camera.getImageBufferSize();
    cout << "Width: " << Camera.getWidth() << endl;
    cout << "Height: " << Camera.getHeight() << endl;
    cout << "ImageBufferSize: " << bytes << endl;
    unsigned char *data = new unsigned char[bytes];

    for (int frame = 0; frame < NFRAMES; frame++) {
        // Capture frame
        Camera.grab();
        // Extract the image
        Camera.retrieve(data, raspicam::RASPICAM_FORMAT_IGNORE);
    }
    return 0;
}
By the way, it works very nicely with CImg.
Also, I haven't yet had the time to see whether it is faster to create a new thread to process each frame, or to start a few threads at the beginning and use a condition variable to hand one a frame after each acquisition (a sketch of the latter is below).
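A minimal sketch of the second approach (my illustration, not the poster's code; ProcessFrame and the pool size of 4 are placeholders): pre-started workers wait on a condition variable, and the capture loop pushes each grabbed frame into a shared queue.

#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Hypothetical stand-in for the per-frame processing work.
void ProcessFrame(const std::vector<unsigned char>& frame) { /* ... */ }

int main() {
    std::queue<std::vector<unsigned char>> frames;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;

    // Start a fixed pool of workers once, up front.
    std::vector<std::thread> workers;
    for (int i = 0; i < 4; i++) {
        workers.emplace_back([&] {
            for (;;) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [&] { return done || !frames.empty(); });
                if (frames.empty()) return;   // done and queue drained
                auto frame = std::move(frames.front());
                frames.pop();
                lock.unlock();
                ProcessFrame(frame);          // heavy work outside the lock
            }
        });
    }

    // Acquisition loop: push each captured frame and wake one worker.
    for (int n = 0; n < 1000; n++) {
        std::vector<unsigned char> frame(1280 * 960);  // e.g. filled by Camera.retrieve(...)
        {
            std::lock_guard<std::mutex> lock(m);
            frames.push(std::move(frame));
        }
        cv.notify_one();
    }

    { std::lock_guard<std::mutex> lock(m); done = true; }
    cv.notify_all();
    for (auto& w : workers) w.join();
}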
What Mark Setchell responded is correct.
But I found out that the frame-rate parameter is not exposed at the API level, so you can't set the frame rate that way; the default frame rate is 30 Hz.
You can change it in the src/private/private_impl.cpp file. I set it to 60 Hz and now it works.

Stream video using OpenCV, GStreamer

I am developing a program that captures from the raspicam and streams with GStreamer. The first part, capturing from the raspicam, works without problems, but the next part has a big problem: I wrote two source files (server, client), and the streaming is very slow. Is there a way to improve it?
Please, help me.
Thank you.
----------- Server.cpp (Raspberry Pi, Raspbian) -----------
cap.set(CAP_PROP_FPS, 30);
cap.open(0);

// Movie Frame Setup
fps = cap.get(CAP_PROP_FPS);
width = cap.get(CAP_PROP_FRAME_WIDTH);
height = cap.get(CAP_PROP_FRAME_HEIGHT);
cout << "Capture camera with " << fps << " fps, " << width << "x" << height << " px" << endl;

writer.open("appsrc ! gdppay ! tcpserversink host=192.168.0.29 port=5000", 0, fps, cv::Size(width, height), true);

while (1) {
    printf("AA");
    cap >> frame;
    writer << frame;
}
----------- Client.cpp (PC, Ubuntu) -----------
Mat test;
String captureString = "tcpclientsrc host=192.168.0.29 port=5000 ! gdpdepay ! appsink";
VideoCapture cap(captureString);//0);
namedWindow("t");
while(1)
{
cap >> test;
imshow("t", test);
if( waitKey(10) > 0)
break;
}
}
You might benefit from using a UDP stream instead of TCP. Check out this link for an example where video was streamed from an RPi to a PC with only 100 ms of lag; a rough sketch of such pipelines is below.
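As a rough sketch (these pipelines are my illustration of the UDP approach, not taken from the linked example; the host, port, and encoder settings are placeholders, and fps/width/height are the variables from the server code above):

// Server side: encode H.264 at low latency and send it as RTP over UDP.
cv::VideoWriter writer(
    "appsrc ! videoconvert "
    "! x264enc tune=zerolatency bitrate=800 speed-preset=ultrafast "
    "! rtph264pay config-interval=1 pt=96 "
    "! udpsink host=192.168.0.29 port=5000",
    0, fps, cv::Size(width, height), true);

// Client side: receive the RTP/H.264 stream and decode it.
cv::VideoCapture cap(
    "udpsrc port=5000 caps=\"application/x-rtp, media=video, "
    "clock-rate=90000, encoding-name=H264, payload=96\" "
    "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink",
    cv::CAP_GSTREAMER);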

Write to dummy video stream using OpenCV

I'm using OpenCV and the v4l2loopback kernel module to emulate video devices:
modprobe v4l2loopback devices=2
Then I check what devices I have:
root#blah:~$ v4l2-ctl --list-devices
Dummy video device (0x0000) (platform:v4l2loopback-000):
/dev/video1
Dummy video device (0x0001) (platform:v4l2loopback-001):
/dev/video2
XI100DUSB-SDI (usb-0000:00:14.0-9):
/dev/video0
video0 is my actual camera, which I grab frames from; I then plan to process them via OpenCV and write them to video2 (which is a sink, I believe).
Here is how I attempt to do so:
int width = 320;
int height = 240;
Mat frame(height, width, CVX_8UC3, Scalar(0, 0, 255));
cvtColor(frame, frame, CVX_BGR2YUV);
int fourcc = CVX_FOURCC('Y', 'U', 'Y', '2');
cout << "Trying to open video for write: " << FLAGS_out_video << endl;
VideoWriter outputVideo = VideoWriter(
    FLAGS_out_video, fourcc, 30, frame.size());
if (!outputVideo.isOpened()) {
    cerr << "Could not open the output video for write: " << FLAGS_out_video
         << endl;
}
As far as I know, the video output format should be YUYV (which is equal to YUY2 in OpenCV); please correct me if I'm wrong. In my code I'm not writing anything into outputVideo yet, just trying to open it for writing, but I keep getting outputVideo.isOpened() == false for some reason, with no additional errors/info in the output:
root#blah:~$ main --uid='' --in_video='0' --out_video='/dev/video2'
Trying to open video for write: /dev/video2
Could not open the output video for write: /dev/video2
I'd appreciate any advice or help on how to debug/resolve this issue. Thank you in advance!
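One avenue worth trying (my suggestion, assuming OpenCV was built with GStreamer support; not a confirmed fix): instead of the default writer backend, hand the frames to the loopback device through a GStreamer pipeline ending in v4l2sink.

// Hypothetical GStreamer writer into the v4l2loopback device from the question.
cv::VideoWriter out(
    "appsrc ! videoconvert "
    "! video/x-raw,format=YUY2,width=320,height=240 "
    "! v4l2sink device=/dev/video2",
    cv::CAP_GSTREAMER,
    0,                    // fourcc 0: the pipeline caps define the format
    30,                   // frames per second
    cv::Size(320, 240),   // must match the caps above
    true);                // expect 3-channel BGR input frames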