I am currently trying to stream from my webcam to my ARM device. I can get it to stream a few frames, but the pipeline does not work reliably: sometimes it hangs, and sometimes it drops frames or produces other errors.
On the other hand, I notice that cheese seems to stream from my webcam just fine. Is it possible to inspect the GStreamer pipeline used by cheese somehow, so that I can try to replicate it in my app?
Maybe try setting GST_DEBUG_DUMP_DOT_DIR=<log dir> in your environment before running the application. After exiting the application you may end up with some .dot files in the given directory. These files can be read directly or converted to images with the tools from the graphviz package (for example, dot -Tpng pipeline.dot -o pipeline.png).
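The same mechanism can also be used from your own code, so you can dump your app's pipeline and compare it against cheese's graph. A minimal sketch in C (the file name is just an example; the environment variable still has to be set, otherwise nothing is written):

#include <gst/gst.h>

/* Writes <GST_DEBUG_DUMP_DOT_DIR>/my-webcam-pipeline.dot describing the
 * current state of the pipeline, including the caps on every link. */
void dump_pipeline_graph(GstElement *pipeline)
{
    gst_debug_bin_to_dot_file(GST_BIN(pipeline),
                              GST_DEBUG_GRAPH_SHOW_ALL,
                              "my-webcam-pipeline");
}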
Related
I have an application running on an embedded system. This application has 2 video sources (and, theoretically, 1 audio source). Concentrating on the video sources, I have 2 subprocesses that compute different frame sets (unrelated to each other). I want to send these frames to 2 different streams.
I would like to avoid writing a lot of ffmpeg/libav code. I have ffmpeg compiled for the embedded system and I can use it as a tool. For example, I can write the first frame set to stdout and pass it to ffmpeg like this:
./my_app | ffmpeg -an -i - -vcodec copy -f rtp rtp://<remote_ip>
This basically works. But now I would like to send the other frame set as well. How can I do that? Theoretically I need another ffmpeg instance that reads from another source, which can't be the stdout of "my_app" because it is already busy.
I'm thinking of using 2 video files as intermediate storage. I could record the 2 frame sets into 2 video files and then run 2 ffmpeg instances reading from these files. In that case I think I would need a way to limit the size of the video files (like a circular buffer), because the 2 streams can grow really huge over time. Is that a possibility?
This sounds "weird" even to me: I would need to record a video source in real time and stream it via ffmpeg (also in real time). I don't know if it is a good idea; there are real-time problems for sure:
loop:
my_app --write_into--> video_stream1.mp4
ffmpeg <--read_from-- video_stream1.mp4
my_app --write_into--> video_stream2.mp4
ffmpeg <--read_from-- video_stream2.mp4
Do you have any suggestions for addressing this kind of situation?
Many thanks, bye.
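One direction that is not mentioned in the question, offered only as an illustration: instead of regular files that keep growing, each frame set could be written to its own named pipe (FIFO), which takes no disk space, and each ffmpeg instance would read its own pipe with -i /tmp/stream1.fifo and -i /tmp/stream2.fifo. All the names, sizes and the raw-frame assumption below are made up for the sketch; the real frame format depends on what my_app produces.

/* Illustrative writer side only: push two independent frame streams into two
 * FIFOs. Each ffmpeg instance reads one of them, e.g.
 *   ffmpeg -f rawvideo -pix_fmt rgb24 -s 640x480 -i /tmp/stream1.fifo ... */
#include <fcntl.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    mkfifo("/tmp/stream1.fifo", 0666);   /* no-op if the pipes already exist */
    mkfifo("/tmp/stream2.fifo", 0666);

    /* open() on a FIFO blocks until the corresponding ffmpeg reader attaches */
    int fd1 = open("/tmp/stream1.fifo", O_WRONLY);
    int fd2 = open("/tmp/stream2.fifo", O_WRONLY);

    static unsigned char frame1[640 * 480 * 3];  /* placeholder frame buffers */
    static unsigned char frame2[640 * 480 * 3];

    for (;;) {
        /* produce_frame_set1(frame1); produce_frame_set2(frame2);  (hypothetical) */
        if (write(fd1, frame1, sizeof frame1) < 0) break;
        if (write(fd2, frame2, sizeof frame2) < 0) break;
        /* A write blocks if its reader falls behind, so in a real program the
         * two streams would probably be serviced from separate threads. */
    }

    close(fd1);
    close(fd2);
    return 0;
}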
I have a Visual Studio C++ project that renders information from custom telemetry hardware. I need to be able to render that information over video that was shot during the telemetry gathering process. It has been suggested that I use ffmpeg to extract individual frames; this would work for short videos, but longer ones would require ~2 TB of drive space. How do I read and write .mp4s, frame by frame, in VSC++?
ffmpeg has a libavcodec component that is supposed to do the job, but the instructions for building and incorporating ffmpeg are vague and not recently updated.
How do I pull video frames/audio into a VSC++ application from a file, then write them out again to another file?
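For orientation only, the reading side with libavformat/libavcodec usually looks roughly like the minimal decode loop below. It is a sketch, not a tested solution: error handling and the encode/write-back path are omitted, and it assumes the FFmpeg development libraries are already linked into the project.

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Minimal decode loop: open a file, walk it packet by packet, and receive
 * decoded video frames one at a time, so nothing is ever dumped to disk. */
int read_video_frames(const char *path)
{
    AVFormatContext *fmt = NULL;
    if (avformat_open_input(&fmt, path, NULL, NULL) < 0)
        return -1;
    avformat_find_stream_info(fmt, NULL);

    int vidx = av_find_best_stream(fmt, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
    if (vidx < 0)
        return -1;
    const AVCodec *dec = avcodec_find_decoder(fmt->streams[vidx]->codecpar->codec_id);
    AVCodecContext *ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[vidx]->codecpar);
    avcodec_open2(ctx, dec, NULL);

    AVPacket *pkt = av_packet_alloc();
    AVFrame *frame = av_frame_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == vidx) {
            avcodec_send_packet(ctx, pkt);
            while (avcodec_receive_frame(ctx, frame) == 0) {
                /* frame->data / frame->linesize now hold one decoded picture:
                 * overlay the telemetry here and hand the frame to an encoder
                 * instead of writing it out as an image file. */
            }
        }
        av_packet_unref(pkt);
    }

    av_frame_free(&frame);
    av_packet_free(&pkt);
    avcodec_free_context(&ctx);
    avformat_close_input(&fmt);
    return 0;
}

The write-back direction is the mirror image: avcodec_send_frame / avcodec_receive_packet feeding packets into an AVFormatContext opened for output.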
I'm working on a C++ project that generates frames to be converted into a video later.
The project currently dumps all frames as jpg or png files in a folder, and then I run ffmpeg manually to generate an mp4 video file.
The project runs on a web server, and an iOS/Android app (under development) will call this web server to have the video generated and downloaded.
The web service is pretty much done and working fine.
I don't like this approach for obvious reasons: the server dependency, the cost, etc.
I successfully created a POC that exposes the frame-generator lib to Android, and I got it to save the frames in a folder; my next step is to convert them to a video. I considered using one of the ffmpeg libs for Android/iOS and just calling it when the frames are done.
Although that seems to solve half of the problem, I found a new one: depending on the configuration, each frame can end up being 200 kB+ in size, so depending on the number of frames, this will take up a lot of space on the user's device.
I'm sure this will become a huge problem very easily.
So I believe the ideal solution would be to generate the mp4 file on demand as each frame is created; then no storage space would be taken, because I wouldn't need to save a file per frame.
The problem is that I don't know how to do that. I don't know much about ffmpeg; I know it's open source, but I have no idea how to reference it from the frame generator and produce the video "on demand".
I've heard about libav as well, but again, same problem...
I would really appreciate any suggestion on how to do this. What I need is basically a way to generate an mp4 video file from a list of frames.
thanks for any help!
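For what it's worth, with the libav* libraries the usual shape of "push frames in, get an mp4 out" looks roughly like the outline below. It is a heavily trimmed sketch under assumptions that are not in the question (an H.264 encoder available in the build, fixed 1280x720 at 30 fps, YUV420P input), with no error handling; the fill_frame() producer is hypothetical.

#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>

/* Rough outline: one H.264 video stream muxed into an mp4, with frames
 * encoded one by one as they are generated, so nothing is stored per frame. */
void encode_frames_to_mp4(const char *path, int nframes)
{
    AVFormatContext *oc = NULL;
    avformat_alloc_output_context2(&oc, NULL, NULL, path); /* mp4 guessed from the name */

    const AVCodec *codec = avcodec_find_encoder(AV_CODEC_ID_H264);
    AVStream *st = avformat_new_stream(oc, NULL);
    AVCodecContext *enc = avcodec_alloc_context3(codec);
    enc->width = 1280;
    enc->height = 720;
    enc->pix_fmt = AV_PIX_FMT_YUV420P;
    enc->time_base = (AVRational){1, 30};
    if (oc->oformat->flags & AVFMT_GLOBALHEADER)
        enc->flags |= AV_CODEC_FLAG_GLOBAL_HEADER;
    avcodec_open2(enc, codec, NULL);
    avcodec_parameters_from_context(st->codecpar, enc);
    st->time_base = enc->time_base;

    avio_open(&oc->pb, path, AVIO_FLAG_WRITE);
    avformat_write_header(oc, NULL);

    AVFrame *frame = av_frame_alloc();
    frame->format = enc->pix_fmt;
    frame->width = enc->width;
    frame->height = enc->height;
    av_frame_get_buffer(frame, 0);

    AVPacket *pkt = av_packet_alloc();
    for (int i = 0; i <= nframes; i++) {
        AVFrame *in = (i < nframes) ? frame : NULL;   /* NULL flushes the encoder */
        if (in) {
            av_frame_make_writable(frame);
            /* fill_frame(frame, i);  -- hypothetical: copy your generated pixels in */
            frame->pts = i;
        }
        avcodec_send_frame(enc, in);
        while (avcodec_receive_packet(enc, pkt) == 0) {
            av_packet_rescale_ts(pkt, enc->time_base, st->time_base);
            pkt->stream_index = st->index;
            av_interleaved_write_frame(oc, pkt);
        }
    }

    av_write_trailer(oc);
    avcodec_free_context(&enc);
    av_frame_free(&frame);
    av_packet_free(&pkt);
    avio_closep(&oc->pb);
    avformat_free_context(oc);
}

The same code compiles on Android and iOS as long as the libav* libraries are built for those targets, which removes the need to store intermediate image files.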
I would like to implement a gstreamer pipeline for video streaming without using a v4l2 driver in Linux. The thing is that I already have the video frames in RAM (the VDMA core, which is configured by a different OS on a different core, takes care of that). I also had difficulties debugging some DMA slave errors, which always appeared after a DMA completion callback.
Therefore I would be happy if I did not have to use a v4l2 driver in order to run gstreamer on top.
I have found this plugin from Bosch that fits my case:
https://github.com/igel-oss/v4l-gst
My question is whether somebody has experience with this approach and whether it is feasible.
Another question is how to configure the source in the gstreamer pipeline, as it is not a device like /dev/videoxxx but rather a memory location or even a bmp file.
Thanks, Mihaita
You could use appsrc and repeatedly call gst_app_src_push_buffer(). Your application then has full freedom to read the video data from anywhere it likes - memory, files, etc. See also the relevant section of the GStreamer Application Development Manual.
If you want more flexibility, like using the video source in several applications, you should consider implementing your own custom GStreamer element.
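A minimal appsrc sketch in C, to make the above concrete; the caps (BGRx, 640x480, 30 fps), the sink element and the way frames are obtained are all placeholders to adapt to your memory layout:

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

static GstElement *pipeline, *appsrc;

/* Wrap one frame that already lives in memory into a GstBuffer and push it. */
static void push_one_frame(const guint8 *data, gsize size, GstClockTime pts)
{
    GstBuffer *buf = gst_buffer_new_allocate(NULL, size, NULL);
    gst_buffer_fill(buf, 0, data, size);
    GST_BUFFER_PTS(buf) = pts;
    GST_BUFFER_DURATION(buf) = gst_util_uint64_scale(1, GST_SECOND, 30);
    gst_app_src_push_buffer(GST_APP_SRC(appsrc), buf);  /* takes ownership of buf */
}

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    pipeline = gst_parse_launch(
        "appsrc name=src is-live=true format=time "
        "caps=video/x-raw,format=BGRx,width=640,height=480,framerate=30/1 "
        "! videoconvert ! autovideosink", NULL);
    appsrc = gst_bin_get_by_name(GST_BIN(pipeline), "src");

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* ... run your frame loop (or a GMainLoop) here: read frames from the VDMA
     * memory region (or a bmp file) and call push_one_frame() for each of them;
     * when finished, signal EOS with gst_app_src_end_of_stream(). ... */

    return 0;
}

For streaming instead of local display, the autovideosink part would be replaced by an encoder/payloader/sink chain.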
I'm using libVLC to play a video file. If I use my code as a standalone video player, I have no issues: the video plays very well, and I can pause and play it as I like.
When I use the same code, without modifications, in a plugin and then play the same file, something odd happens: VLC creates two audio streams for the same video file. Now if I pause the video using libvlc_media_player_pause(...), it pauses the video and one audio stream; the other audio stream continues playing.
Any suggestions as to why this could be happening?
The application itself is written in Qt5. I have tested this issue with both audio and video files.
The LibVLC version is 3.0.0.
The header file and source file are provided as pastebin links.
The mistake I made was in the plugin code: two instances of NBAVPlayer were created there, leading to two audio streams, one visible video stream and one hidden video stream. I have fixed the plugin, and now everything works properly.