So I am not sure whether this is the best place to ask this question, but I'll give it a try.
I have some C++ OpenCV code running remotely. The OpenCV code draws different things on the images that are continuously being captured by the camera.
In other words, when I run my software locally I can just do
imshow("some image", image);
and look at what my code draws on the frames.
However, now I would like to be able to see those frames remotely, like a video stream. What are the possibilities?
How can I see what is being output by my OpenCV software?
If you want to create a video stream remotely, you can use GStreamer to create a pipeline over the network. For that you can use cv::VideoWriter to write the frames to a GStreamer pipeline.
cv::VideoWriter writer;
writer.open("appsrc ! videoconvert ! x264enc ! h264parse ! rtph264pay ! udpsink host=localhost port=9999",
            0, (double)30, cv::Size(640, 480), true);
if (!writer.isOpened()) {
    printf("=ERR= can't create video writer\n");
    return -1;
}

cv::Mat frame;
while (true) {
    /* Fill/process the frame here; it must be a BGR image matching the declared size (640x480) */
    writer << frame;
}
You can replace localhost with the IP of your remote machine. On the receiving machine you can use the following command:
gst-launch-1.0 udpsrc port=9999 ! application/x-rtp,encoding-name=H264,payload=96 ! rtph264depay ! h264parse ! avdec_h264 ! autovideosink
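If you would rather view the stream in OpenCV on the receiving machine instead of gst-launch, a minimal receiver sketch could look like the following (assuming the receiving OpenCV build also has GStreamer support; the window name is just an example):

#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Receive the RTP/H264 stream sent by the writer above and hand decoded frames to appsink.
    cv::VideoCapture cap(
        "udpsrc port=9999 ! application/x-rtp,encoding-name=H264,payload=96 "
        "! rtph264depay ! h264parse ! avdec_h264 ! videoconvert ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        printf("=ERR= can't open video capture\n");
        return -1;
    }
    cv::Mat frame;
    while (cap.read(frame)) {
        cv::imshow("remote stream", frame);   // show whatever the remote code drew
        if (cv::waitKey(1) == 27) break;      // Esc to quit
    }
    return 0;
}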
For more information on GStreamer you can see this link.
By using TCP socket programming you can send the image data over the network. There is a project, sable-netcv, at https://code.google.com/archive/p/sable-netcv/ that you can try out.
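As a rough illustration of the socket approach, here is a minimal sketch of pushing one frame's raw pixel data over TCP (POSIX sockets are assumed, and the receiver address, port and file name are placeholders; a matching receiver would first read the small header and then the pixel bytes):

#include <opencv2/opencv.hpp>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

// Send rows/cols/type followed by the raw pixel buffer.
// A real implementation should loop until all bytes are written.
static bool sendFrame(int sock, const cv::Mat& frame) {
    cv::Mat m = frame.isContinuous() ? frame : frame.clone(); // ensure one contiguous buffer
    int32_t header[3] = { m.rows, m.cols, m.type() };
    if (send(sock, header, sizeof(header), 0) != (ssize_t)sizeof(header)) return false;
    size_t bytes = m.total() * m.elemSize();
    return send(sock, m.data, bytes, 0) == (ssize_t)bytes;
}

int main() {
    int sock = socket(AF_INET, SOCK_STREAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);                          // placeholder port
    inet_pton(AF_INET, "192.168.1.10", &addr.sin_addr);   // placeholder receiver IP
    if (connect(sock, (sockaddr*)&addr, sizeof(addr)) != 0) {
        printf("=ERR= can't connect\n");
        return -1;
    }
    cv::Mat frame = cv::imread("frame.jpg");              // or a frame from your capture loop
    bool ok = !frame.empty() && sendFrame(sock, frame);
    close(sock);
    return ok ? 0 : -1;
}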
Related
I am trying to stream a live video feed from a camera connected to a Jetson NX to a computer on the same network. The network works as wireless Ethernet, meaning the Jetson sees it as a wired connection, but in reality it is wireless and limited by bitrate.
On the Jetson side, I am using OpenCV VideoWriter to send frames over the network using this pipeline:
cv::VideoWriter video_write("appsrc ! video/x-raw,format=BGR ! queue ! videoconvert ! video/x-raw,format=BGRx ! nvvidconv "
    "! video/x-raw(memory:NVMM),format=NV12,width=640,height=360,framerate=30/1 ! nvv4l2h264enc insert-sps-pps=1 insert-vui=1 idrinterval=30 "
    "bitrate=1800000 EnableTwopassCBR=1 ! h264parse ! rtph264pay ! udpsink host=169.254.84.2 port=5004 auto-multicast=0",
    cv::CAP_GSTREAMER, 30, cv::Size(640, 360));
On the receiving computer my video capture is:
cv::VideoCapture video("udpsrc port=5004 auto_multicast=0 "
    "! application/x-rtp,media=video,encoding-name=H264 ! rtpjitterbuffer latency=0 "
    "! rtph264depay ! decodebin ! videoconvert ! video/x-raw,format=BGR ! appsink drop=1",
    cv::CAP_GSTREAMER);
The problem is, if the camera jitters a lot or moves a lot in any direction, the video stream either freezes or completely pixelates. I was hoping I could get suggestions for either a better encoder (I'm not limited to nvv4l2h264enc or H264 in general), a solution for the pixelation and freezing, or maybe even a better way to stream the video other than VideoWriter.
I am trying to stream 360p video at 30 fps; my bitrate is limited to either 6 Mbps or 2.5 Mbps depending on the distance I want to limit myself to. It does not seem like a network problem, simply because if I change parameters such as the codec (MJPG instead of the GStreamer H264 pipeline, for example) it changes the behaviour of the video feed; in my case it lowers the amount of freezing but makes the pixelation worse.
I'm trying to create a screenshot (i.e. grab one frame) from an RTSP camera stream using a GStreamer pipeline.
The pipeline used looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
The problem is that the resulting image is always gray, with random artifacts. It looks like it's grabbing the very first frame and not waiting for a key frame.
Is there any way I can modify the pipeline to actually grab the first valid frame of video? Or just have it wait long enough to be sure that at least one key frame has already arrived?
I'm unsure why, but after some trial and error it is now working with decodebin3 instead of decodebin. The documentation is still a bit discouraging though, stating that decodebin3 is "still experimental API and a technology preview. Its behaviour and exposed API is subject to change."
Full pipeline looks like this:
gst-launch-1.0 rtspsrc location=$CAM_URL is_live=true ! decodebin3 ! videoconvert ! jpegenc snapshot=true ! filesink location=/tmp/frame.jpg
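If you need the snapshot from application code rather than from gst-launch, a very similar pipeline can be used through OpenCV's VideoCapture. A sketch, assuming an OpenCV build with GStreamer support (the RTSP URL below is a placeholder for $CAM_URL):

#include <opencv2/opencv.hpp>

int main() {
    // decodebin3 here as well, since plain decodebin produced the gray first frame.
    cv::VideoCapture cap(
        "rtspsrc location=rtsp://camera.example/stream is_live=true "
        "! decodebin3 ! videoconvert ! video/x-raw,format=BGR ! appsink",
        cv::CAP_GSTREAMER);
    if (!cap.isOpened()) return -1;

    cv::Mat frame;
    if (!cap.read(frame) || frame.empty()) return -1;  // first decoded frame
    cv::imwrite("/tmp/frame.jpg", frame);              // save it as the screenshot
    return 0;
}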
I have created a Video0 device using v4l2loopback and used the following sample Git code, V4l2Loopback_cpp, as an application to stream JPG images from a folder sequentially, by altering some conditions in the code. The code reads the images as 24-bit RGB and sends them to the Video0 device, which is fine, because the images play like a proper video when I capture the device in VLC. As I mentioned, if I check the VLC properties of the video it shows the following content.
I need this video0 device to stream RTSP H264 video to VLC using the GStreamer library.
I have used the following command on the command line for testing, but it shows some internal process error:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,width=590,height=332,framerate=30/1 ! x264enc ! rtph264pay name=pay0 pt=96
I don't know what the problem is here: is it the 24-bit image format or the GStreamer command I use? I need a proper GStreamer command line to process the video0 device into an H264 RTSP stream. Any help is appreciated, thank you.
Image format - JPG (sequence of images passed)
Video0 receives - 24-bit RGB image
Output needed - H264 RTSP stream from video0
Not sure this is a solution, but the following may help:
You may try adjusting the resolution according to what V4L reports (width=584):
v4l2src device=/dev/video0 ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96
Note that this error may happen on the v4l2loopback receiver side while actually being a sender error. If you're feeding v4l2loopback from a GStreamer pipeline into v4l2sink, you could try adding identity, such as:
... ! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 ! identity drop-allocation=1 ! v4l2sink
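Also, a launch line ending in rtph264pay name=pay0 pt=96 is normally meant to be handed to gst-rtsp-server (for example its test-launch helper), which is what actually serves the RTSP URL that VLC can open. A minimal sketch using the gst-rtsp-server library (assuming its development package is installed; the mount point and caps are examples):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    GstRTSPServer *server = gst_rtsp_server_new();               // listens on port 8554 by default
    GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);
    GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();

    // Same pipeline as above, wrapped in parentheses as the factory expects.
    gst_rtsp_media_factory_set_launch(factory,
        "( v4l2src device=/dev/video0 "
        "! video/x-raw,format=RGB,width=584,height=332,framerate=30/1 "
        "! videoconvert ! x264enc insert-vui=1 ! h264parse ! rtph264pay name=pay0 pt=96 )");
    gst_rtsp_media_factory_set_shared(factory, TRUE);             // one pipeline shared by all clients

    gst_rtsp_mount_points_add_factory(mounts, "/test", factory);
    g_object_unref(mounts);

    gst_rtsp_server_attach(server, NULL);
    g_main_loop_run(loop);                                        // serve until interrupted
    return 0;
}

VLC should then be able to open rtsp://<board-ip>:8554/test.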
The objective I am trying to achieve is streaming 1080p video from a Raspberry Pi camera and recording the video simultaneously.
I tried recording with the HTTP stream as the source, but it didn't work at 30 fps. A lot of frames were missing and I got only about 8 fps.
As a second approach, I am trying to record the file directly from the camera and then stream the "recording in progress/buffer" file. For this I am trying to use GStreamer. Please suggest whether this is a good option or whether I should try something else.
For recording using GStreamer I used:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=1920,height=1080,framerate=30/1" !
videoflip method=clockwise ! videoflip method=clockwise ! videoconvert ! videorate ! x264enc! avimux ! filesink location=test_video.h264
Result: the recorded video shows 1080p and 30 fps, but frames are dropping heavily.
For streaming the video buffer I have used UDP in GStreamer as:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! capsfilter caps="video/x-raw,width=640,height=480,framerate=30/1" ! x264enc ! queue ! rtph264pay ! udpsink host=192.168.5.1 port=8080
Result: no specific errors on the terminal, but I can't get the stream in VLC.
Please suggest the best method here.
I've looked through tons of threads on OpenCV and GStreamer and simply cannot resolve my error. I am trying to open a GStreamer pipeline in OpenCV. I have built OpenCV with GStreamer, and it says YES at the CMake step, indicating that OpenCV built successfully. The command to retrieve the stream works fine from the command line; however, in OpenCV it just displays one frame and hangs.
My Syntax for Server:
gst-launch-1.0 v4l2src device="/dev/video0" ! video/x-raw,format=I420,width=640,height=480,framerate=15/1 ! jpegenc ! rtpjpegpay ! udpsink host=<IP Address> port=5000
My Syntax in OpenCV for Client (C++):
Mat frame;
//create video capture from video camera
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 "
                 "! rtpjpegdepay ! jpegdec ! autovideosink ! appsink");
cap.set(CV_CAP_PROP_FRAME_WIDTH, 640);
cap.set(CV_CAP_PROP_FRAME_HEIGHT, 480);
for(;;)
{
    cap >> frame;
    char c = (char)waitKey(1);
    //![display]
    imshow(window_name, frame);
    frame.release();
}
The error:
GStreamer Plugin: Embedded video playback halted; module autovideosink0-actual-sink-xvimage reported: Output window was closed
OpenCV Error: Unspecified error (GStreamer: unable to start pipeline) in icvStartPipeline, file /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp, line 383
terminate called after throwing an instance of 'cv::Exception'
  what():  /home/dev/Downloads/OpenCV/opencv-3.0.0/modules/videoio/src/cap_gstreamer.cpp:383: error: (-2) GStreamer: unable to start pipeline in function icvStartPipeline
Please provide any assistance. I've been through at least 20 Stack posts and I am no closer than when I started, with the exception of having GStreamer enabled in OpenCV. I even tried different versions of OpenCV.
Thanks
VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 ! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
After a lot more digging through the GStreamer documentation today, I solved it: adding videoconvert (in place of autovideosink) fixed the pipeline. According to the GStreamer documentation, videoconvert automatically converts the data to the appropriate format for appsink, which allows it to be read correctly by OpenCV's VideoCapture.
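For completeness, a sketch of the full client loop with the corrected pipeline, adding the isOpened() and empty-frame checks that the original snippet was missing:

#include <opencv2/opencv.hpp>
#include <cstdio>
using namespace cv;

int main() {
    // videoconvert feeds appsink instead of autovideosink, so OpenCV receives the frames.
    VideoCapture cap("udpsrc port=5000 ! application/x-rtp,encoding-name=JPEG,payload=26 "
                     "! rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
    if (!cap.isOpened()) {
        printf("=ERR= can't open capture\n");
        return -1;
    }
    Mat frame;
    for (;;) {
        cap >> frame;
        if (frame.empty()) break;          // stream ended or no data yet
        imshow("stream", frame);
        if ((char)waitKey(1) == 27) break; // Esc to quit
    }
    return 0;
}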