I am a beginner with GStreamer. When I create a pipeline to play a video file, I get the following message: "There may be a timestamping problem, or this computer is too slow". After some searching I found that this can happen when the stream has bad timestamps. Is there a way to figure out whether the video file has bad timestamps?
Here is the pipeline that I'm using:
gst-launch-0.10 filesrc location=.mp4 ! qtdemux ! ffdec_mpeg4 ! dri2videosink
You can insert an identity element between e.g. ffdec_mpeg4 and dri2videosink, use its check-imperfect-timestamp and check-imperfect-offset properties, and watch the debug log.
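For example, based on the pipeline above (the identity properties exist in 0.10), something like:

gst-launch-0.10 -m filesrc location=.mp4 ! qtdemux ! ffdec_mpeg4 ! identity check-imperfect-timestamp=true check-imperfect-offset=true ! dri2videosink

With -m, gst-launch prints the messages elements post on the bus; identity should then report imperfect-timestamp/imperfect-offset element messages whenever buffer timestamps or offsets are not contiguous.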
If you work on an embedded device, also watch the CPU load to see whether the pipeline really is running too slowly.
I found something like a game capture plugin for GStreamer that uses OBS GameCapture.
But the authors stopped providing support a long time ago, and they didn't even leave an example pipeline.
I'm fairly new to GStreamer and I've been messing around with the code, but I couldn't create a pipeline to run it.
Can someone help me create a sample GStreamer pipeline?
gst-inspect-1.0 libgstgamecapture.dll
On Windows, you could use dx9screencapsrc to capture the screen. For example:
gst-launch-1.0 dx9screencapsrc x=100 y=100 width=320 height=240 ! videoconvert ! autovideosink
Context: I have an audio device running Mopidy, which outputs to a GStreamer pipeline. My device has an interface for an equalizer - for this I've set up my ALSA config to go through the ALSA equalizer, and the GStreamer pipeline targets this. The code that handles the interface uses the Python alsamixer bindings to apply the values.
This works, but the ALSA equalizer is a bit janky and has a very narrow range before it distorts the audio. GStreamer has an equalizer plugin which I think is better; I can implement this as per the example launch line:
gst-launch-1.0 filesrc location=song.ogg ! oggdemux ! vorbisdec ! audioconvert ! equalizer-10bands band2=3.0 ! alsasink
However, I want to be able to change the band0-band9 parameters dynamically while the stream is playing - either via Python or from the command line. I'm not sure which direction to look in - is this possible?
An element's properties can be set via the g_object_set() function. Whether they can be changed on the fly or only while the pipeline is stopped depends on the element's implementation.
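As a minimal sketch in C, assuming the launch line above gives the element a name (equalizer-10bands name=eq) and that the pipeline is already built and playing:

/* look the equalizer up by the name given in the pipeline description */
GstElement *eq = gst_bin_get_by_name (GST_BIN (pipeline), "eq");

/* band gains are doubles in dB; for equalizer-10bands the range is
   roughly -24.0 to +12.0 */
g_object_set (eq, "band0", -6.0, "band9", 3.0, NULL);

gst_object_unref (eq);

The equalizer-10bands element accepts these changes while playing. From Python (via GObject introspection) the equivalent is eq.set_property("band0", -6.0).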
Hello everyone,
I am trying to implement low-latency video streaming using WebRTC. I write my code in C++ (websockets etc.) and use only the WebRTC signalling server, which is written in Python (ref1).
When I use a webcam, I do not have any problem streaming video to the client; however, when I try to use a FLIR camera, I run into a lot of problems with the implementation.
There are a few questions in my mind that I'd like to clear up. I hope you guys can give me some recommendations.
Is there a specific data type that my pipeline should feed into WebRTC as a source? I would just like to know what kind of data I should send to WebRTC as a source.
When I try to send an image to check whether my WebRTC implementation works properly (without the webcam), I get the error "Pipeline is empty". What can cause this problem? This is actually the main reason I would like to know the data type etc., so I understand what exactly I should pipe into WebRTC.
ref1: https://github.com/centricular/gstwebrtc-demos/tree/master/signalling
P.S.:
The client and the Jetson Nano are on the same network.
The signalling server is running on the Jetson Nano.
By running gst-inspect-1.0 webrtcbin you will find that both the source and sink capabilities of this element are just application/x-rtp.
Therefore, anything you feed into a webrtcbin sink pad must already be payloaded with an RTP payloader (e.g. rtph264pay for video or rtpopuspay for audio), and to consume its source pads you will need to pipe them into the matching RTP depayloader such as rtph264depay or rtpopusdepay.
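As a rough sketch of the sending direction (the encoder and caps here are assumptions for H.264 video, not anything specific to the FLIR camera), a send pipeline could look like:

videotestsrc ! videoconvert ! queue ! x264enc tune=zerolatency ! rtph264pay ! application/x-rtp,media=video,encoding-name=H264,payload=96 ! webrtcbin name=sendrecv

Replace videotestsrc with whatever source element produces frames from your camera.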
I've got an RTSP cam with backchannel support, and I'm trying to get it to work with the command line tool gst-launch-1.0. The incoming streams are not an issue, but the backchannel, when enabled, doesn't produce a sink. However, I've dug through the sources and got this little hint from the developer of the rtspsrc element:
Set backchannel=onvif to enable, and use the 'push-backchannel-sample'
action signal with the correct stream id.
I can't seem to find any info about (action) signals on the command line for gst-launch-1.0.
Does anyone know if it is even possible to send signals from gst-launch-1.0?
Thanks,
Bram
I think this is meant to be called from code and not usable from gst-launch-1.0.
Just for reference, the signal is called push-backchannel-buffer (not -sample).
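For reference, emitting the action signal from C would look roughly like this (src being the rtspsrc element; the stream id and the GstSample wrapping your encoded audio are placeholders you have to supply):

GstFlowReturn ret;

/* stream_id selects the backchannel stream announced by the camera;
   sample carries the audio buffer plus its caps */
g_signal_emit_by_name (src, "push-backchannel-buffer", stream_id, sample, &ret);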
Also, the manual page for gst-launch-1.0 says:
Please note that gst-launch-1.0 is primarily a debugging tool. You
should not build applications on top of it. For applications, use the
gst_parse_launch() function of the GStreamer API as an easy way to
construct pipelines from pipeline descriptions.
I have been making a few experiments with GStreamer using the gst-launch utility. However, the aim is ultimately to implement this same functionality in my own application using the GStreamer libraries.
The problem is that it's quite difficult (at least for someone who is not used to the GStreamer API) to "port" what I test on the command line to C/C++ code.
An example of a command that I may need to port is:
gst-launch filesrc location="CLIP8.mp4" ! decodebin2 ! jpegenc ! multifilesink location="test%d.jpg"
What's the most straightforward way to take such a command and write it in C in my own app?
Also, as a side question: how could I replace the multifilesink and do this work in memory instead? (I'm using OpenCV to perform a few calculations on a given image that should be extracted from the video.) Is it possible to decode directly to memory and use it right away without first saving to the filesystem? It could (and should) be sequential - I mean it would only move on to the next frame after I'm done processing the current one, so that I wouldn't have to keep thousands of frames in memory.
What do you say?
I found the solution. There's a function built into GStreamer that parses gst-launch arguments and returns a pipeline. The function is called gst_parse_launch and is documented here: http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer/html/gstreamer-GstParse.html
I haven't tested it yet, but it's probably the fastest way to convert what I have been testing on the command line into C/C++ code.
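A minimal sketch of what that looks like in C, using the 0.10-era element names from the command line above (error handling kept short):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GError *err = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* same description string as on the command line */
  pipeline = gst_parse_launch ("filesrc location=CLIP8.mp4 ! decodebin2 "
      "! jpegenc ! multifilesink location=test%d.jpg", &err);
  if (pipeline == NULL) {
    g_printerr ("Parse error: %s\n", err->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* block until the pipeline errors out or reaches end-of-stream */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}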
You could always pop open the source of gst-launch and grab the bits that parse out the command-line and turn it into a GStreamer pipeline.
That way you can just pass in the "command line" as a string, and the function will return a complete pipeline for you.
By the way, there is an interesting GStreamer element that provides a good way to integrate a processing pipeline into your (C/C++) application: appsink.
http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gst-plugins-base-libs/html/gst-plugins-base-libs-appsink.html
With this one you can basically retrieve the frames from the pipeline as a plain C array and do whatever you want with them. You set up a callback function, which will be activated every time a new frame is available from the pipeline thread...
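A hedged sketch of that callback, using the GStreamer 1.x appsink API (in 0.10 the equivalents are the new-buffer signal and gst_app_sink_pull_buffer()):

#include <gst/gst.h>
#include <gst/app/gstappsink.h>

/* called from the streaming thread each time a decoded frame arrives */
static GstFlowReturn
on_new_sample (GstAppSink *sink, gpointer user_data)
{
  GstSample *sample = gst_app_sink_pull_sample (sink);
  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstMapInfo map;

  if (gst_buffer_map (buffer, &map, GST_MAP_READ)) {
    /* map.data / map.size hold the raw frame bytes,
       e.g. to hand over to OpenCV */
    gst_buffer_unmap (buffer, &map);
  }
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}

/* ... after building a pipeline whose last element is an appsink: */
GstAppSinkCallbacks callbacks = { NULL, NULL, on_new_sample };
gst_app_sink_set_callbacks (GST_APP_SINK (appsink), &callbacks, NULL, NULL);

Because the callback runs in the streaming thread, the pipeline waits for your processing to finish before delivering the next frame, which gives you exactly the sequential, one-frame-at-a-time behaviour described above.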