I am trying to stream a combined video stream, taken from two webcams, to an Android app after processing it in OpenCV (combining the two frames). Here I am trying to use RTSP to send the video stream from OpenCV to Android (using a GStreamer pipeline).
But I am stuck on how to send the .sdp file configuration to the client (the file name is live.sdp). Here is the code I have used so far:
//basic
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
//opencv libraries
#include "opencv2/highgui/highgui.hpp"
#include "opencv2/opencv.hpp"
using namespace cv;
using namespace std;
int main(int argc, char **argv)
{
    Mat im1;
    Mat im2;
    VideoCapture cap1(1);
    VideoCapture cap2(2);
    VideoWriter video;
    // Note: "-v" is a gst-launch-1.0 flag, not a pipeline element,
    // so it is omitted from the VideoWriter pipeline string.
    video.open("appsrc ! videoconvert ! x264enc noise-reduction=10000 tune=zerolatency byte-stream=true threads=4 ! mpegtsmux ! rtpmp2tpay send-config=true config-interval=10 pt=96 ! udpsink host=localhost port=5000",
               0, (double)20, Size(1280, 480), true);
    if (video.isOpened()) {
        cout << "Video Writer is opened!" << endl;
    } else {
        cout << "Video Writer is Closed!" << endl;
        return -1;
    }
    while (1) {
        // Grab both frames first so they are captured as close together
        // in time as possible, then decode them.
        if (!cap1.grab() || !cap2.grab())
            break;
        bool bSuccess1 = cap1.retrieve(im2);
        bool bSuccess2 = cap2.retrieve(im1);
        if (!bSuccess1 || !bSuccess2)
            break;
        // Place the two frames side by side in a single 1280x480 image.
        Size sz1 = im1.size();
        Size sz2 = im2.size();
        Mat im3(sz1.height, sz1.width + sz2.width, CV_8UC3);
        Mat left(im3, Rect(0, 0, sz1.width, sz1.height));
        im1.copyTo(left);
        Mat right(im3, Rect(sz1.width, 0, sz2.width, sz2.height));
        im2.copyTo(right);
        video << im3;
        //imshow("im3", im3);
        if (waitKey(10) == 27) {
            break;
        }
    }
    cap1.release();
    cap2.release();
    video.release();
    return 0;
}
And here is the .sdp file configuration:
v=0
m=video 5000 RTP/AVP 96
c=IN IP4 localhost
a=rtpmap:96 MP2T/90000
I can play the stream with VLC on the local machine using
vlc live.sdp
but not over the network, using
vlc rtsp://localhost:5000/live.sdp
Using GStreamer with OpenCV solved the problem, but there is still a small lag.
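For anyone hitting the same wall: udpsink only pushes raw RTP packets to one host/port; it does not run an RTSP server, so an rtsp:// URL has nothing to connect to, while the SDP file works because it describes those RTP packets directly. As a sketch (assuming OpenCV was built with GStreamer and the avdec_h264 plugin is available), the receiving side can restate the SDP parameters as caps and skip the file entirely:

// Receive-side sketch: the caps restate what live.sdp declares
// (MP2T, payload 96, 90 kHz clock), so no SDP file is needed.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(
        "udpsrc port=5000 caps=\"application/x-rtp,media=(string)video,"
        "clock-rate=(int)90000,encoding-name=(string)MP2T,payload=(int)96\" ! "
        "rtpmp2tdepay ! tsdemux ! h264parse ! avdec_h264 ! "
        "videoconvert ! appsink");
    if (!cap.isOpened())
        return -1;

    Mat frame;
    while (cap.read(frame)) {
        imshow("received", frame);
        if (waitKey(1) == 27) // ESC quits
            break;
    }
    return 0;
}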
Related
How can I create a simple C++ program using OpenCV to stream over RTSP so that it can be viewed with VLC? I have looked at many examples, but none of them works.
Thanks
For instance:
#include <iostream>
#include <opencv2/opencv.hpp>

using namespace cv;
using namespace std;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened()) {
        cerr << "VideoCapture not opened" << endl;
        exit(-1);
    }

    VideoWriter writer(
        "appsrc ! videoconvert ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! rtpjpegpay ! udpsink host=127.0.0.1 port=5000",
        0,  // fourcc
        30, // fps
        Size(640, 480),
        true); // isColor

    if (!writer.isOpened()) {
        cerr << "VideoWriter not opened" << endl;
        exit(-1);
    }

    while (true) {
        Mat frame;
        cap.read(frame);
        writer.write(frame);
    }
    return 0;
}
The video feed can be read using the command line
gst-launch-1.0 -v udpsrc port=5000
! application/x-rtp, media=video, clock-rate=90000, encoding-name=JPEG, payload=26
! rtpjpegdepay
! jpegdec
! xvimagesink sync=0
However, it cannot be opened with VLC using the rtsp://127.0.0.1:5000 URL.
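The same feed can also be read back into OpenCV itself rather than with gst-launch-1.0; a minimal sketch (again assuming a GStreamer-enabled OpenCV build) just swaps xvimagesink for appsink:

// Sketch of an OpenCV reader for the RTP/JPEG stream above.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(
        "udpsrc port=5000 ! application/x-rtp,media=video,"
        "clock-rate=90000,encoding-name=JPEG,payload=26 ! "
        "rtpjpegdepay ! jpegdec ! videoconvert ! appsink");
    if (!cap.isOpened())
        return -1;

    Mat frame;
    while (cap.read(frame)) {
        imshow("udp stream", frame);
        if (waitKey(1) == 27)
            break;
    }
    return 0;
}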
I got a solution.
Here is an improved version of the code:
#include <iostream>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>

using namespace cv;

int main()
{
    VideoCapture cap("/home/salinas/Descargas/Minions/Minions.avi"); // video file input
    if (!cap.isOpened())
    {
        std::cout << "Video Capture Fail" << std::endl;
        return 0;
    }

    VideoWriter writer;
    // Write this string on one line to be sure!!
    writer.open("appsrc ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc speed-preset=veryfast tune=zerolatency bitrate=800 ! rtspclientsink location=rtsp://localhost:8554/mystream ",
                0, 20, Size(640, 480), true);

    Mat img;
    while (cap.read(img))
    {
        cv::resize(img, img, Size(640, 480));
        cv::imshow("raw", img);
        writer << img;
        cv::waitKey(25);
    }
    return 0;
}
Now, the problem is that this cannot be read directly by a program like VLC. You need to run, at the same time, an instance of rtsp-simple-server (you can download the binaries without dependencies here).
It seems like the OpenCV writer sends the data to rtsp-simple-server, which redirects the stream to the RTSP clients that request it.
Finally, go to VLC and open the URL rtsp://localhost:8554/mystream.
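The republished stream can likewise be consumed from another OpenCV process instead of VLC; a minimal sketch (assuming an FFmpeg or GStreamer backend that understands rtsp:// URLs):

// Sketch: read the stream back from rtsp-simple-server in OpenCV.
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap("rtsp://localhost:8554/mystream");
    if (!cap.isOpened())
        return -1;

    Mat frame;
    while (cap.read(frame)) {
        imshow("rtsp", frame);
        if (waitKey(1) == 27)
            break;
    }
    return 0;
}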
I'm trying to create an HLS stream using OpenCV and GStreamer on Linux (Ubuntu 20.10). OpenCV was successfully installed with GStreamer support.
I created a simple application with the help of these two tutorials:
http://4youngpadawans.com/stream-live-video-to-browser-using-gstreamer/
How to use Opencv VideoWriter with GStreamer?
The code is the following:
#include <string>
#include <iostream>
#include <opencv2/core.hpp>
#include <opencv2/highgui.hpp>
#include <opencv2/videoio.hpp>
#include <opencv2/videoio/videoio_c.h>

using namespace std;
using namespace cv;

int main()
{
    VideoCapture cap;
    if (!cap.open(0, CAP_V4L2))
        return 0;

    VideoWriter writer(
        "appsrc ! videoconvert ! videoscale ! video/x-raw,width=640,height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
        0,
        20,
        Size(800, 600),
        true);

    if (!writer.isOpened()) {
        std::cout << "VideoWriter not opened" << endl;
        exit(-1);
    }

    for (;;)
    {
        Mat frame;
        cap >> frame;
        if (frame.empty()) break; // end of video stream
        writer.write(frame);
        imshow("this is you, smile! :)", frame);
        if (waitKey(10) == 27) break; // stop capturing by pressing ESC
    }
    return 0;
}
The HTTP server was started using the Python command
python3 -m http.server 8080
At first glance everything is fine: the streamer creates all the needed files (the playlist and the xxx.ts segments), and the HTTP server answers the requests for them. But if I try to play the stream, it does not work: it does not open in the browser, and playing it with VLC does not work either (green screen).
Could someone give me a hint as to what I'm doing wrong?
Thanks in advance!
Check what stream format is created, and check what color format you push into the pipeline. If it's RGB, chances are you are creating a non-4:2:0 stream, which has very limited decoder support.
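One way to rule that out (a sketch, not a tested fix) is to pin the caps to I420, a 4:2:0 format, right before the encoder, so videoconvert is forced to do the conversion:

// Only the caps between videoscale and x264enc differ from the
// original pipeline: format=I420 forces 4:2:0 chroma subsampling.
VideoWriter writer(
    "appsrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=640,height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
    0, 20, Size(640, 480), true);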
Thanks Florian, I tried to change the format, but that was not the problem.
First, the real frame rate should be taken from the capture device:
int fps = (int)cap.get(CV_CAP_PROP_FPS);

VideoWriter writer(
    "appsrc ! videoconvert ! videoscale ! video/x-raw, width=640, height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
    0,
    fps,
    Size(640, 480),
    true);
Second, the frame size must be the same everywhere it is mentioned. The captured frame must also be resized:
resize(frame, frame, Size(640,480));
writer.write(frame);
After these changes, the chunks generated by GStreamer can be opened in a local player and the video works. Unfortunately, remote access is still failing. :(
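Put together, the capture-and-write loop with both fixes applied would look roughly like this (same pipeline as above, frame size and fps kept consistent):

// Sketch combining both fixes: fps taken from the device, and one
// frame size (640x480) used for the caps, the writer, and resize().
int fps = (int)cap.get(CV_CAP_PROP_FPS);
VideoWriter writer(
    "appsrc ! videoconvert ! videoscale ! video/x-raw, width=640, height=480 ! x264enc ! mpegtsmux ! hlssink playlist-root=http://192.168.1.42:8080 location=/home/sem/hls/segment_%05d.ts target-duration=5 max-files=5 playlist-location=/home/sem/hls/playlist.m3u8 ",
    0, fps, Size(640, 480), true);

Mat frame;
for (;;) {
    cap >> frame;
    if (frame.empty()) break;
    resize(frame, frame, Size(640, 480)); // match the writer size
    writer.write(frame);
    if (waitKey(10) == 27) break;
}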
I want to use GStreamer to capture the video content of an IP camera and compress it into an H.264 stream, and then use OpenCV+GStreamer to receive the H.264 video stream on an NVIDIA TX1. Here is my GStreamer pipeline:
gst-launch-1.0 -ve rtspsrc location=rtsp://admin:12345#192.168.1.64/Streaming/Channels/1 ! nvvidconv flip-method=6 ! 'video/x-raw(memory:NVMM), width=(int)960, height=(int)540, format=(string)I420, framerate=(fraction)30/1' ! omxh264enc control-rate=2 bitrate=4000000 ! 'video/x-h264, stream-format=(string)byte-stream' ! h264parse ! queue ! omxh264dec ! nvvidconv ! 'video/x-raw, format=(string)UYVY' ! videoconvert ! jpegenc quality=30 ! rtpjpegpay ! udpsink host=$CLIENT_IP port=5000 sync=false async=false
The above pipeline captures the camera content, compresses it into a 960×540, 30 fps H.264 video stream, and sends it through the UDP protocol to port 5000 on the board. It runs successfully, and here is the code of my client:
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>

using namespace cv;

int main(int, char**)
{
    VideoCapture input("./stream.sdp");
    if (!input.isOpened()) { // check if we succeeded
        std::cout << "open failed" << std::endl;
        return -1;
    }

    Mat img, img_gray;
    OrbFeatureDetector detector(7000);
    vector<KeyPoint> img_keypoints, car_keypoints;
    Mat img_descriptors, car_descriptors;

    input.read(img);
    Mat car;
    img(Rect(400, 320, 150, 100)).copyTo(car);
    detector(car, Mat(), car_keypoints, car_descriptors);
    drawKeypoints(car, car_keypoints, car);

    for (;;)
    {
        if (!input.read(img))
            break;
        detector(img, Mat(), img_keypoints, img_descriptors);
        drawKeypoints(img, img_keypoints, img);

        BFMatcher matcher;
        vector<DMatch> matches;
        matcher.match(car_descriptors, img_descriptors, matches);

        vector<Point2f> car_points, img_points;
        for (size_t i = 0; i < matches.size(); ++i) {
            car_points.push_back(car_keypoints[matches[i].queryIdx].pt);
            img_points.push_back(img_keypoints[matches[i].trainIdx].pt); // trainIdx, not queryIdx
        }
        std::cout << "car points count = " << car_points.size() << std::endl;

        if (car_points.size() >= 4) {
            Matx33f H = findHomography(car_points, img_points, CV_RANSAC);
            vector<Point> car_border, img_border;
            car_border.push_back(Point(0, 0));
            car_border.push_back(Point(0, car.rows));
            car_border.push_back(Point(car.cols, car.rows));
            car_border.push_back(Point(car.cols, 0));
            // Project the template corners into the frame with the homography.
            for (size_t i = 0; i < car_border.size(); ++i) {
                Vec3f p = H * Vec3f(car_border[i].x, car_border[i].y, 1);
                img_border.push_back(Point(p[0]/p[2], p[1]/p[2]));
            }
            polylines(img, img_border, true, CV_RGB(255, 255, 0));
            Mat img_matches;
            drawMatches(car, car_keypoints, img, img_keypoints, matches, img_matches);
            imshow("img_matches", img_matches);
        }
        // imshow("car", car);
        // imshow("img", img);
        if (waitKey(27) >= 0) break;
    }
    // the camera will be deinitialized automatically in VideoCapture destructor
    return 0;
}
The configuration file CMakeLists.txt is as follows:
project(hello)
find_package(OpenCV REQUIRED)
include_directories(${OpenCV_INCLUDE_DIRS})
add_executable(cv_hello hello.cpp)
target_link_libraries(cv_hello ${OpenCV_LIBS})
The client code compiles successfully, but when it runs, VideoCapture input("./stream.sdp") fails to open the SDP file and prints "open failed". Here is my stream.sdp file:
c=IN IP4 127.0.0.1
m=video 5000 RTP/AVP 96
a=rtpmap:96 JPEG/4000000
I have tried using an absolute path, and I have tried setting the environment variable
export PKG_CONFIG_PATH=/home/ubuntu/ffmpeg_build/lib/pkgconfig:$PKG_CONFIG_PATH
to add the FFmpeg decoder, but neither solved the problem.
I use OpenCV 2.4.13 and gstreamer-1.0 on the TX1.
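For reference, since this OpenCV build has GStreamer support, a pipeline-based capture string (a sketch, untested on the TX1) that restates the sender's actual RTP parameters would look like this; note that rtpjpegpay defaults to payload type 26 and JPEG's RTP clock rate is 90000, both of which differ from what stream.sdp declares:

// Hypothetical alternative to VideoCapture input("./stream.sdp"):
// restate the sender's RTP parameters as caps and decode in GStreamer.
VideoCapture input(
    "udpsrc port=5000 ! application/x-rtp,media=(string)video,"
    "clock-rate=(int)90000,encoding-name=(string)JPEG,payload=(int)26 ! "
    "rtpjpegdepay ! jpegdec ! videoconvert ! appsink");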
I am trying to display a video stream with OpenCV, but I am having horrible problems with the frame rate. My video source can put out a maximum of 60 fps, but I have limited it to 30. The issue is that I am receiving it at about 2 fps.
I have simplified my program as far as possible to make it easier to read:
#include "opencv2/core/core.hpp"
#include "opencv2/calib3d/calib3d.hpp"
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <stdio.h>
using namespace cv;
using namespace std;
int main(int argc, char* argv[])
{
Mat image1;
int k;
const char* right_cam_gst = "nvcamerasrc sensor-id=0 ! video/x-raw(memory:NVMM),\
width=(int)640,\
height=(int)360,\
format=(string)I420,\
framerate=(fraction)30/1 ! nvvidconv flip-method=2 ! video/x-raw,\
format=(string)I420 ! videoconvert ! video/x-raw,\
format=(string)BGR ! appsink";
VideoCapture cap1 = VideoCapture(right_cam_gst);
for (;;)
{
cap1 >> image1;
imshow("image1", image1);
if(waitKey(1) == 27)
break;
}
}
This should grab and display the image as fast as the stream allows, right?
Thanks for the help, guys!
EDIT: It looks like even if I simply display an image as fast as possible, it only shows at about 1 fps. This eliminates the camera entirely.
SYSTEM: Ubuntu on an NVIDIA Jetson TX1
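A minimal version of that display-only test might look like this (a hypothetical reconstruction: no camera or GStreamer pipeline involved, just repainting a static frame as fast as possible):

// Hypothetical display-only benchmark: if even this runs at ~1 fps,
// the bottleneck is in display, not in capture.
#include <opencv2/highgui/highgui.hpp>
using namespace cv;

int main()
{
    Mat image(360, 640, CV_8UC3, Scalar(0, 255, 0)); // solid green frame
    for (;;) {
        imshow("image", image);
        if (waitKey(1) == 27)
            break;
    }
    return 0;
}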
Found the answer! It looks like even though my Ethernet connection was fast, it was somehow using the server for computing (not sure how). See this post: https://devtalk.nvidia.com/default/topic/1025856/very-slow-framerate-jetson-tx1-and-opencv/?offset=8
I disabled the X server and plugged straight in, and I got the full 60 FPS.
I have a Logitech C920 camera connected via USB to an NVIDIA TX1. I am trying to stream the camera feed over RTSP to a server while simultaneously doing some computer vision in OpenCV. I managed to read H.264 video from the USB camera in OpenCV:
#include <iostream>
#include "opencv/cv.h"
#include <opencv2/opencv.hpp>
#include "opencv/highgui.h"

using namespace cv;
using namespace std;

int main()
{
    Mat img;
    VideoCapture cap;
    int heightCamera = 720;
    int widthCamera = 1280;

    // Start video capture port 0
    cap.open(0);

    // Check if we succeeded
    if (!cap.isOpened())
    {
        cout << "Unable to open camera" << endl;
        return -1;
    }

    // Set frame width and height
    cap.set(CV_CAP_PROP_FRAME_WIDTH, widthCamera);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, heightCamera);
    cap.set(CV_CAP_PROP_FOURCC, CV_FOURCC('X','2','6','4'));

    // Set camera FPS
    cap.set(CV_CAP_PROP_FPS, 30);

    while (true)
    {
        // Copy the current frame to an image
        cap >> img;

        // Show video streams
        imshow("Video stream", img);
        waitKey(1);
    }

    // Release video stream
    cap.release();

    return 0;
}
I have also streamed the USB camera to an RTSP server by using ffmpeg:
ffmpeg -f v4l2 -input_format h264 -timestamps abs -video_size hd720 -i /dev/video0 -c:v copy -c:a none -f rtsp rtsp://10.52.9.104:45002/cameraTx1
I tried to google how to combine these two functions, i.e. open the USB camera in OpenCV and use OpenCV to stream H.264 video over RTSP. However, all I can find is people trying to open an RTSP stream in OpenCV.
Has anyone successfully streamed H.264 video over RTSP using OpenCV with FFmpeg?
Best regards,
Sondre
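One option, in the spirit of the rtsp-simple-server answer earlier in this thread (an untested sketch, assuming OpenCV was built with GStreamer support and that the server behind the ffmpeg command accepts RTSP RECORD at the same URL), is to let a VideoWriter handle encoding and publishing while the frames remain available for vision work in the loop:

// Sketch: grab frames from the C920, leave room for vision code, and
// publish H.264 over RTSP via rtspclientsink (server and URL assumed
// to be the same ones the ffmpeg command above publishes to).
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    VideoCapture cap(0);
    if (!cap.isOpened())
        return -1;
    cap.set(CV_CAP_PROP_FRAME_WIDTH, 1280);
    cap.set(CV_CAP_PROP_FRAME_HEIGHT, 720);

    VideoWriter writer(
        "appsrc ! videoconvert ! x264enc tune=zerolatency bitrate=2000 ! "
        "rtspclientsink location=rtsp://10.52.9.104:45002/cameraTx1",
        0, 30, Size(1280, 720), true);
    if (!writer.isOpened())
        return -1;

    Mat frame;
    while (cap.read(frame)) {
        // ... computer-vision processing on frame goes here ...
        writer.write(frame);
        if (waitKey(1) == 27)
            break;
    }
    return 0;
}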