Is it possible to get the equivalent gst-launch string for any gst-play command?
For example, playing an RTSP stream with gst-play could be:
gst-play-1.0.exe rtsp://path/to/source
That command connects to the server and opens an internal (GStreamer) window for playback.
The equivalent command could be (I'm not really sure):
gst-launch-1.0.exe uridecodebin uri=rtsp://path/to/source ! autovideosink
But how can I get it in the general case?
My main purpose is to redirect the video stream to an AVI file when all I know is a working gst-play command, so I need to replace autovideosink with filesink in the resulting command.
After the update, I would say that you have some options:
1. Use gst-play with the --videosink option, but you would also need the AVI mux element there, and the video must also be encoded in H.264, so this approach would need some hacking in the gst-play source code, which you obviously do not want.
1a. You can also use playbin, as suggested by #thiagoss, with the video-sink parameter. Then you can maybe use a named bin and pass it in (not sure whether this is possible this way, but you may try it):
gst-launch-1.0 playbin uri=rtsp video-sink=bin_avi \( name=bin_avi x264enc ! avimux ! filesink location=file.avi \)
2. Get a picture of the pipeline, analyse it and recreate the same thing manually. On Unix-like systems do:
export GST_DEBUG_DUMP_DOT_DIR=`pwd`
gst-play-1.0 rtsp://...
#or use gst-launch and playbin.. its the same thing basically
Check the generated *.dot files, choose the latest one (in the PLAYING state) and use the Graphviz dot tool to turn it into a picture:
dot -T png *PLAYING.dot -o playbin.png
3. Use just the uridecodebin stuff and continue like I wrote in 1a:
uridecodebin ! video/x-raw ! x264enc ! ....
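A complete sketch of that approach for the AVI goal could be (untested; if the stream also carries audio, that branch would have to be consumed or dropped as well, e.g. with fakesink):
gst-launch-1.0 uridecodebin uri=rtsp://path/to/source ! videoconvert ! x264enc ! avimux ! filesink location=file.avi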
HTH
Use playbin:
gst-launch-1.0.exe playbin uri=rtsp://path/to/source
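If the end goal is the AVI file from the question, you could also try handing playbin an encoding bin as its video sink straight from gst-launch (an untested sketch; it assumes gst-launch wraps the quoted description into a bin, the output file name is just an example, and audio is simply discarded):
gst-launch-1.0.exe playbin uri=rtsp://path/to/source video-sink="videoconvert ! x264enc ! avimux ! filesink location=file.avi" audio-sink=fakesink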
I have this working, but I have been unable to get video from my Magewell to integrate and could use help with the correct pipeline.
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc bitrate=700 ! video/x-h264,width=848,height=480,framerate=25/1,stream-format=byte-stream,profile=baseline ! tee name=t \
t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
I do not see the receiver side pipeline in the question description. This is required to verify that there are no issues at the receiver side. Based on your current pipeline I have the following suggestions:
You don't need to set the caps again after the x264enc element, because the output is of type video/x-h264 anyway. What you do need is to add h264parse after x264enc. You also need to add h264parse before passing the data to the decoder you are using at the receiver side.
The bitrate set for x264enc is also quite low. The units are kbit/s, and for video this might be too little. It's best to leave this at the default setting if you do not have any strict resource constraints; otherwise try a higher value.
Also, is there any reason why you are using TCP? Using UDP might be a better idea for video, in case occasional data/packet loss is not an issue.
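Putting the first two suggestions together, the sender could look roughly like this (a sketch only, not verified against your receiver):
gst-launch-1.0 videotestsrc ! video/x-raw,width=848,height=480,framerate=25/1 ! x264enc ! h264parse ! tee name=t \
    t. ! queue ! tcpclientsink host=172.18.0.3 port=8000 \
    t. ! queue ! tcpclientsink host=172.18.0.4 port=8000
For the UDP route you would typically also payload the stream, e.g. add rtph264pay after h264parse and use udpsink in place of each tcpclientsink.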
I'm trying to use a JPG file as a virtual webcam for Skype (or similar). The image file is reloaded every few seconds, and the pipeline should always transmit the newest image.
I started by creating a pipeline like this
gst-launch filesrc location=~/image.jpg ! jpegdec ! ffmpegcolorspace ! freeze ! v4l2sink device=/dev/video2
but it only streams the first image and ignores newer versions of the image file. I read something about concat and dynamically changing the pipeline, but I couldn't get this working.
Could you give me any hints on how to get this working?
Dynamically refreshing the input file is NOT possible (at least with filesrc).
Besides, your example uses freeze, which will prevent the image from changing.
One possible method is using multifilesrc and videorate instead.
multifilesrc can read many files (with a provided pattern similar to scanf/printf), and videorate can control the speed.
For example, say you create 100 images named image0000.jpg, image0001.jpg, ..., image0100.jpg. Then play them continuously, showing each image for 1 second:
gst-launch multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! ffmpegcolorspace ! videorate ! v4l2sink device=/dev/video2
Change the number of images with stop-index=100, and change the speed with caps="image/jpeg,framerate=\(fraction\)1/1".
For more information about these elements, refer to their documentation at gstreamer.freedesktop.org/documentation/plugins.html
EDIT: Looks like you are using GStreamer 0.10, not 1.x.
In this case, please refer to the old 0.10 documentation for multifilesrc and videorate.
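For anyone on GStreamer 1.x instead, a rough equivalent of the pipeline above would be (untested; ffmpegcolorspace was replaced by videoconvert in 1.x):
gst-launch-1.0 multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! videoconvert ! videorate ! v4l2sink device=/dev/video2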
You can use a general file name with multifilesrc if you add some parameter adjustments and pair it with an identity on a delay. It's a bit fragile but it'll do fine for a temporary one-off program as long as you keep your input images the same dimensions and format.
gst-launch-1.0 multifilesrc loop=true start-index=0 stop-index=0 location=/tmp/whatever ! decodebin ! identity sleep-time=1000000 ! videoconvert ! v4l2sink
I have this pipeline:
gst-launch-1.0 rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! mp4mux fragment-duration=10000 streamable=1 ! multifilesink next-file=2 location=file-%03d.mp4
The first segment plays well, the others do not. When I try to view the structure of a damaged MP4, I see an interesting bug:
MOOV
Some data
MOOF
MDAT
MOOF
MDAT
The most interesting thing is the "Some data" part. There is no header; the bytes are simply there. Judging by the block size, I think it is an MDAT. If I find the size of the block and add an MDAT header before it, the file immediately becomes valid and playable. But that unknown piece still can't be played, because there is no MOOF header before it.
The problem occurs with both mp4mux and qtmux. Tested on GStreamer 1.1.0 and 1.2.2; all results are identical.
Am I using multifilesink incorrectly?
If you take a look at the documentation for multifilesink you will find the answer:
It is not possible to use this element to create independently playable mp4 files, use the splitmuxsink element for that instead. ...
So use splitmuxsink, and don't forget to send EOS when you are done so that the last file is finished correctly.
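For example, adapting the pipeline from the question (a sketch: max-size-time is in nanoseconds, so the value below gives roughly 10-second segments, and -e makes gst-launch send EOS when you stop it with Ctrl-C):
gst-launch-1.0 -e rtspsrc location=rtsp://ip/cam ! rtph264depay ! h264parse ! splitmuxsink location=file-%03d.mp4 max-size-time=10000000000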
Update
It looks like at the time the question was asked there was no such element as splitmuxsink.
Can this be reproduced using videotestsrc instead of rtsp?
Try replacing your h264 receiving and depayloading with "videotestsrc num-buffers= ! x264enc ! mp4mux ..."
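For example, a minimal reproduction along those lines might be (the num-buffers value is arbitrary):
gst-launch-1.0 videotestsrc num-buffers=3000 ! x264enc ! mp4mux fragment-duration=10000 streamable=1 ! multifilesink next-file=2 location=test-%03d.mp4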
This might be a bug, please file it at https://bugzilla.gnome.org/enter_bug.cgi?product=GStreamer so it gets proper attention from maintainers.
Also, how are you trying to play it?
Thanks
I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame, meaning that if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't go that fast, so I need another method.
I figured it might work if I have one pipeline capture a video, and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a file like h264 or mp4 just to then decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, while writing to disk can happen at its own pace. Just be aware that the queue can grow quite large if you leave this setup running for too long.
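If you want the queue to really absorb the full 30 fps without ever blocking the capture, you can (as an untested tweak) remove its default size limits, at the cost of the memory growth mentioned above:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"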
The solution I have to offer doesn't use GStreamer but FFmpeg; I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG video before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you might use .jpg, .bmp, ... instead of .png.
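For example, to get JPEG frames instead of PNG:
ffmpeg -i movie.avi frame%05d.jpg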
I ended up doing this in two parts.
Write video to file.
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
Post process.
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"
I'm constructing a GStreamer pipeline that receives two RTP streams from a networked source:
ILBC Audio stream + corresponding RTCP stream
H263 Video stream + corresponding RTCP stream
Everything is put into one GStreamer pipeline so that it can use the RTCP from both streams to synchronize audio and video. So far I've come up with this (using gst-launch for prototyping):
gst-launch -vvv gstrtpbin name=rtpbin
udpsrc caps="application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)H263-2000" port=40000 ! rtpbin.recv_rtp_sink_0
rtpbin. ! rtph263pdepay ! ffdec_h263 ! xvimagesink
udpsrc port=40001 ! rtpbin.recv_rtcp_sink_0
rtpbin.send_rtcp_src_0 ! udpsink port=40002 sync=false async=false
udpsrc caps="application/x-rtp,media=(string)audio,clock-rate=(int)8000,encoding-name=(string)PCMU,encoding-params=(string)1,octet-align=(string)1" port=60000 rtpbin.recv_rtp_sink_1
rtpbin. ! rtppcmudepay ! autoaudiosink
udpsrc port=60001 ! rtpbin.recv_rtcp_sink_1
rtpbin.send_rtcp_src_1 ! udpsink port=60002 sync=false async=false
This pipeline works well if the networked source starts out sending both video and audio. If the video stream is paused later on, GStreamer will still play back the audio, and it will even start playing back the video again when the networked source resumes the video stream.
My problem, however, is that if the networked source starts out with only an audio stream (video might be added later on), the pipeline seems to pause/freeze until the video stream starts as well.
Since video is optional in my application (and can be added/removed at will by the user), is there any way I can hook up, for instance, a videotestsrc that provides some kind of fallback video data to keep the pipeline running when there is no networked video data?
I've tried experimenting with videotestsrc and an element called videomixer, but I think that mixer still requires both streams to be alive. Any feedback is greatly appreciated!
Here is a simple function for pause/resume by swapping bins. In the following example I provide the logic to change the destination bin dynamically, on the fly. This does not completely stop the pipeline, which I believe is what you seek. Similar logic could be used for source bins: you could remove your network source bin and its related decoder/demuxer bins and add videotestsrc bins instead.
private static void dynamic_bin_replacement(Pipeline pipe, Element src_bin, Element dst_bin_new, Element dst_bin_old) {
    pipe.pause();
    // detach and drop the old destination bin
    src_bin.unlink(dst_bin_old);
    pipe.remove(dst_bin_old);
    // add the new destination bin, bring it to the pipeline's state and relink
    pipe.add(dst_bin_new);
    dst_bin_new.syncStateWithParent();
    src_bin.link(dst_bin_new);
    pipe.ready();
    pipe.play();
}
The other technique you may want to try is pad blocking ("PADLOCKING"). Please take a look at the following posts:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-block.txt
and
http://web.archiveorange.com/archive/v/8yxpz7FmOlGqxVYtkPb4
and
Adding and removing audio sources to/from GStreamer pipeline on-the-go
UPDATE
Try the output-selector and input-selector elements, as they seem to be a better alternative. I found them most reliable and have had immense luck with them. I use fakesink or fakesrc, respectively, as the other end of the selector.
The valve element is another alternative; I found that it doesn't even need the fakesink or fakesrc elements. It is also extremely reliable.
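As a minimal illustration of the valve idea (a sketch only; in a real application you would flip the drop property at runtime from code rather than on the command line):
gst-launch-1.0 videotestsrc is-live=true ! valve drop=false ! videoconvert ! autovideosink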
Also, the correct state transition order for a media file source is:
NULL -> READY -> PAUSED -> PLAYING (Upwards)
PLAYING -> PAUSED -> READY -> NULL (Downwards)
The order in my example above should be corrected so that ready() comes before pause(). I would also tend to think the unlinking should be performed after the null() state rather than after pause(). I haven't tried these changes, but theoretically they should work.
See the following link for detailed info:
http://cgit.freedesktop.org/gstreamer/gstreamer/tree/docs/design/part-states.txt?h=BRANCH-RELEASE-0_10_19