OpenCV has the VideoCapture class to load video from external sources. I have a network camera on my network that I'm accessing with RTSP (using the FFMPEG plugin).
The sad thing is that it seems the open method and the constructor block on opening the video stream. This means if the stream is down, the program is stuck there forever.
Is there any sort of timeout ability? I tried looking in the source code, but in the end it calls external FFMPEG functions I believe and I'm unable to go further.
If there isn't a timeout, is there any sort of library for a non-blocking VideoCapture method for RTSP or HTTP?
Although this question is quite old, I will add some summarizing info for other people.
It is possible to build the FFmpeg DLL for OpenCV with a wrapper library: the GitHub issue refers to Peter's repo with a wrapper file (on line 198 there is a timeout which can be set). (Sorry, I can't post more links.)
As long as you are comfortable with Make'ing this, you should be set (the wrapper is quite easy, as stated). If you are looking for the actual issue on GitHub, read the issue linked above; there you can find more information on the actual problem being fixed.
To rebuild OpenCV (Python cv2) with CMake: see the Visual Studio example.
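If rebuilding the FFmpeg wrapper is not an option, a common workaround is to run the blocking open on a worker thread and give up after a deadline. Below is a minimal sketch of that idea; openWithTimeout is a hypothetical helper, and if FFmpeg never returns the worker thread is simply leaked. (Newer OpenCV 4.x releases also expose CAP_PROP_OPEN_TIMEOUT_MSEC / CAP_PROP_READ_TIMEOUT_MSEC for the FFmpeg backend, which is the cleaner fix if you can upgrade.)

#include <opencv2/opencv.hpp>
#include <chrono>
#include <future>
#include <memory>
#include <string>
#include <thread>

// Run the blocking VideoCapture constructor on a detached thread and
// wait at most `timeout` for it to finish. If FFmpeg never returns,
// the worker thread is leaked -- this is a workaround, not a cure.
std::shared_ptr<cv::VideoCapture> openWithTimeout(const std::string &url,
                                                  std::chrono::milliseconds timeout)
{
    std::packaged_task<std::shared_ptr<cv::VideoCapture>()> task([url] {
        auto cap = std::make_shared<cv::VideoCapture>(url);
        return cap->isOpened() ? cap : nullptr;
    });
    auto result = task.get_future();
    std::thread(std::move(task)).detach();

    if (result.wait_for(timeout) == std::future_status::ready)
        return result.get();
    return nullptr; // timed out; treat the stream as down
}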
I've been playing around with the QMediaPlayer library. I was curious about how it would work with a streaming video source, so I've used VLC to stream some videos using the UDP protocol.
To make a quick test, I've used the Qt example named MediaPlayer. As the example is designed to work only with offline files, I've added a dumb function to the Player implementation:
void setM(const QUrl &url) {
    player->setMedia(url);
    player->play();
}
Then, in the main.cpp file I call this function like this:
...
player.setM(QUrl("udp://239.1.1.1:1234"));
return app.exec();
What this does is start playing the stream source once the program runs.
The problem is that Qt threw the following error:
DirectShowPlayerService::doSetUrlSource: Unresolved error code 800c000d
Doing this with local files and HTTP streaming works... but when I try UDP or RTP I always get the same error.
I've spent a few hours looking for more information, but I always get the same response... use QMLVLC... For example, look at this.
Has anyone tried this before? What is wrong here?
PS: I know that there is a VLC plugin to make this work, but I would like to make this work with Qt only (or at least understand what is happening here).
PS2: I'm on Windows 8.1, Qt 5.5 (MinGW 4.9.2) and I have all the important codecs installed.
Thanks in advance,
UPDATE
Finally, I managed to dig through the new http://code.qt.io, and here is the code I suspect is blocking the UDP (and other) protocols -> here.
Maybe only "http" and "https" are accepted as valid stream sources in DirectShowPlayerService... I'll try to get some extra time this week to recompile just the multimedia module for Windows in order to add the udp protocol to the doSetUrlSource function and see what happens. If anyone tests it first, please let me know here!
UPDATE 2
First of all, I suspect QMediaPlayer couldn't play UDP/RTP content because of the AddFilter method... Anyway, http, https and rtsp work perfectly.
Secondly, I've found some strange behavior with the udp protocol.
I'm using "udp://#239.1.1.1:1234" as the test multicast address. The strange thing is that during one test I typed the address "udp://#239.1.1.1:1234z" by mistake, and this time no error was thrown. It seems that the address needs to contain a letter.
The project I'm currently working on requires an addition to the already existing VoIP capabilities. The core for speech processing is in C; the remainder is in C++ with Qt, and the audio is handled via PortAudio. The connection between users is currently established via UDP, which I think has to be changed for the planned video connection. The development platform is Windows on VS2012; however, the system is cross-platform.
In a nutshell, what I want to do is: Grab the video signal from a webcam, synchronize audio coming from C core and video from webcam and use a library and codecs for (de-)coding/muxing the signals on the respective sides and sending via RTP. The system should be capable of multicast transmission.
I did some research for possible libraries and stumbled upon ffmpeg and libVLC. For the codec I thought about using x264. And if I'm correct, ffmpeg and libVLC should both be capable of what I'm looking for?
However, I'm not sure which one to pick, and from their documentation I really can't tell which library is the better fit. Has anybody had similar problems and can help me out? I'm quite a newbie when it comes to video processing and encoding.
Extra question: Do you have any hints or approaches on syncing the video and audio signals?
If anyone is interested, this is what I ended up doing:
I am using the WebM container format, currently VP8 with Vorbis (but going to change to VP9 with Opus soon, once they're out of beta), handled by the ffmpeg/libav libraries for encoding/decoding/muxing etc., and SDL for displaying and threading. ffmpeg/libav was cross-compiled on Unix with LGPL support to keep our project closed source.
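On the sync question above: the usual technique (sketched here under my own assumptions, this is not the poster's code) is to stamp every captured audio and video frame with a presentation timestamp (PTS) taken from one shared clock, converted into each stream's time base, and let the muxer and player order frames by PTS.

#include <chrono>
#include <cstdint>

using Clock = std::chrono::steady_clock;

// One shared clock for both capture paths; each frame gets a PTS in
// its own stream's time base (e.g. 1/90000 for video, 1/48000 for Opus).
struct TimestampSource {
    Clock::time_point start = Clock::now();

    int64_t ptsNow(int64_t timebaseDen) const {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      Clock::now() - start).count();
        return us * timebaseDen / 1000000; // elapsed time in time-base ticks
    }
};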
We have a requirement to let users record a video of our 3D application. I can already grab the individual rendered frames, so this question is specifically about how to write frames into a video file.
I don't think writing each frame as a separate file and post-processing is a workable option.
I can look at options either to record to a simple video file for later optimising/encoding, or to write directly to a sensibly encoded format.
FFmpeg was suggested in another post, but it looks a bit daunting to me. Is it the best option? If not, what can you suggest? We can work with LGPL but not full GPL.
We're working on Windows (Win32, not MFC) in C++. Sample/pseudo code with your recommended library is very much appreciated... basically I'm after how to do three functions:
startRecording() does whatever initialization is needed
recordFrame() takes pointer to frame data and encodes it, ideally with timing data
endRecording() finalizes the video file, shuts down video system, etc
Check out the sources to Taksi on sourceforge. http://taksi.sourceforge.net/
You need two things:
1. A codec to compress the frames.
2. A container file format, like AVI or MPG.
Taksi uses the old Video for Windows API and AVI rather than the newer COM APIs, but it still might work for you.
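If Video for Windows feels too dated, the same three functions can be sketched on top of FFmpeg's LGPL libav* libraries instead. Everything below (codec choice, pixel format, the omitted error handling) is an assumption on my part; the muxing example in FFmpeg's doc/examples directory is the authoritative reference.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
}

static AVFormatContext *fmt = nullptr;
static AVCodecContext  *enc = nullptr;
static AVStream        *st  = nullptr;
static int64_t          pts = 0;

bool startRecording(const char *path, int w, int h, int fps) {
    avformat_alloc_output_context2(&fmt, nullptr, nullptr, path);
    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_MPEG4);
    enc = avcodec_alloc_context3(codec);
    enc->width = w;  enc->height = h;
    enc->pix_fmt = AV_PIX_FMT_YUV420P;
    enc->time_base = {1, fps};
    if (fmt->oformat->flags & AVFMT_GLOBALHEADER)
        enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(enc, codec, nullptr);
    st = avformat_new_stream(fmt, nullptr);
    st->time_base = enc->time_base;
    avcodec_parameters_from_context(st->codecpar, enc);
    avio_open(&fmt->pb, path, AVIO_FLAG_WRITE);
    return avformat_write_header(fmt, nullptr) >= 0;
}

// Drain any packets the encoder has ready and write them to the file.
static void drain(AVFrame *frame) {
    avcodec_send_frame(enc, frame); // frame == nullptr flushes the encoder
    AVPacket *pkt = av_packet_alloc();
    while (avcodec_receive_packet(enc, pkt) == 0) {
        av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
        pkt->stream_index = st->index;
        av_interleaved_write_frame(fmt, pkt);
    }
    av_packet_free(&pkt);
}

// `frame` must already be YUV420P here; in practice you would convert
// the renderer's RGB output with libswscale first.
void recordFrame(AVFrame *frame) {
    frame->pts = pts++;
    drain(frame);
}

void endRecording() {
    drain(nullptr); // flush delayed packets
    av_write_trailer(fmt);
    avio_closep(&fmt->pb);
    avcodec_free_context(&enc);
    avformat_free_context(fmt);
}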
My application displays video and audio and I want to add a recording feature.
I've considered FFmpeg, but I have to compile my application with VS, so I can't use it. I'm trying to do it with GStreamer instead, but I'm not finding any example or guide on how to create a video. Any help?
(I can also consider other alternatives, but they must be cross-platform.)
The Application Development Manual explains very well how to use GStreamer from your code. Try reading it first.
Then you can experiment with the gst-launch tool, build a pipeline, and execute it from your application using the gst_parse_launch() function.
You can expose more details of your problem if you want a more helpful answer.
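For example, a first experiment along those lines could record a test source to a file; the pipeline string below is an assumption, so substitute your actual video source and encoder:

#include <gst/gst.h>

int main(int argc, char *argv[]) {
    gst_init(&argc, &argv);

    // Same syntax as the gst-launch command line: encode 300 frames of
    // the test source to an Ogg/Theora file.
    GError *err = nullptr;
    GstElement *pipeline = gst_parse_launch(
        "videotestsrc num-buffers=300 ! theoraenc ! oggmux "
        "! filesink location=capture.ogg", &err);
    if (!pipeline) {
        g_printerr("Parse error: %s\n", err->message);
        return 1;
    }

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Block until the stream finishes or fails.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE,
        (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));

    gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}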
I want to create a Qt widget that can play incoming RTP streams where the video is encoded as H264 and contains no audio.
My basic plan for implementation is this:
Create a Phonon MediaSource object (Stream type).
Connect it with a QIODevice subclass that provides the data
Obtain the video data using either:
The JRTPLIB client library
The GStreamer gstrtpbin plugin. This plugin takes care of depayloading the packets and decoding the video. Maybe this improves the chances that Phonon will recognize the data.
My environment:
Ubuntu 9.10
Qt 4.6
My questions:
Is my approach a good one? Perhaps I'm overlooking a more obvious or simple solution?
I'm currently experiencing this issue: when trying to play the video stream the state of the MediaObject turns to ErrorState with errorType FatalError. Can anyone tell me what I'm doing wrong?
Edit
One solution I found is using libVLC in combination with Qt, which I learned about in this thread. Here's a code sample for those interested.
I'm still looking for a Phonon-based solution.
Ideally I would only need to provide an SDP file and the job would be done.
I was able to get it to work using the libVLC solution. I can't guarantee that this is the best solution, though, as I simply stopped looking after that.
Here's a link to the libVLC sample.
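For reference, the gist of that approach (a minimal sketch of assumed API usage for libVLC 1.1+, not the linked sample itself) is to hand VLC the widget's native window and point it at the SDP file:

#include <vlc/vlc.h>
#include <QWidget>

// Render an RTP stream described by an SDP file into a plain QWidget.
void playRtpInWidget(QWidget *canvas, const char *sdpPath) {
    libvlc_instance_t *vlc = libvlc_new(0, NULL);
    libvlc_media_t *media = libvlc_media_new_path(vlc, sdpPath);
    libvlc_media_player_t *player = libvlc_media_player_new_from_media(media);
    libvlc_media_release(media);

    // Tell VLC to draw into the widget's native X11 window.
    libvlc_media_player_set_xwindow(player, (uint32_t)canvas->winId());
    libvlc_media_player_play(player);
}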
The way I understand it, Phonon (at least on Windows) works through a Qt-provided backend plugin for DirectShow (\plugins\phonon_backend\phonon_ds94.dll), and through GStreamer in your case. You would then either obtain or write your own DirectShow filter which can accept RTP streams as a source. DirectShow takes care of the decoding, and Phonon takes care of the rendering.
So if the backend works, the application code is as simple as:
Phonon::MediaObject *media = new Phonon::MediaObject();
Phonon::VideoWidget *video = new Phonon::VideoWidget();
Phonon::createPath(media, video);
media->setCurrentSource(source); // 'source' is your Phonon::MediaSource
media->play();
It seems the problem lies with the GStreamer backend accepting RTP as a source. Can you play back that source in standalone GStreamer without any problems?