I want to play /dev/urandom using gst-launch-1.0, just as I can with aplay:
aplay /dev/urandom
How can I do this?
For a quick start, try this:
gst-launch-1.0 filesrc location=/dev/urandom ! rawaudioparse ! alsasink
Of course, you can modify it to select a specific ALSA device, channel count, sample rate, etc.
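For example, to pick a specific ALSA device and force the format, you can set the corresponding rawaudioparse and alsasink properties (a sketch; the device string hw:0,0 is only a placeholder for your card):
gst-launch-1.0 filesrc location=/dev/urandom ! rawaudioparse format=pcm pcm-format=s16le sample-rate=44100 num-channels=2 ! alsasink device=hw:0,0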
I want to write binary data directly to a GStreamer pipeline, but I'm unable to do so.
I have tried the rawaudioparse plugin: I wrote the binary data to a .raw file and tried this command to play it:
gst-launch-1.0 filesrc location=audio.raw ! rawaudioparse use-sink-caps=false \
format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 ! \
audioconvert ! audioresample ! autoaudiosink
My goal is to write audio binary data to a GStreamer pipeline and stream it over RTMP.
Yes, you can achieve this using the element fdsrc, which takes a file descriptor (by default: standard input) from which it will start reading data.
Your GStreamer pipeline will then look like this:
# Replace "cat audio.raw" with your actual commands
cat audio.raw | gst-launch-1.0 fdsrc ! rawaudioparse (...)
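Combining this with the rawaudioparse settings from your question, the full command could look like this (a sketch; for the RTMP goal the end of the pipeline swaps the audio sink for an encoder, flvmux and rtmpsink, assuming those elements are available in your install, and the rtmp:// URL is only a placeholder):
# Play the raw data arriving on stdin
cat audio.raw | gst-launch-1.0 fdsrc ! rawaudioparse use-sink-caps=false \
format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 ! \
audioconvert ! audioresample ! autoaudiosink
# Stream it over RTMP instead of playing it locally
cat audio.raw | gst-launch-1.0 fdsrc ! rawaudioparse use-sink-caps=false \
format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 ! \
audioconvert ! audioresample ! voaacenc ! flvmux streamable=true ! \
rtmpsink location=rtmp://your-server/live/stream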
I am very new to GStreamer, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server and then be able to view the stream on a website.
The webcam is attached to a Raspberry Pi 3, because there are space constraints on the mounting platform. Since the Pi cannot realistically transcode the video itself, I bought a Logitech C920 webcam, which can output a raw H.264 stream.
So far I have managed to view the stream on my Windows machine, but I haven't managed to get the website part working.
My "achievements":
Sender:
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! video/x-h264,width=1920,height=1080,framerate=30/1 ! rtph264pay pt=96 config-interval=5 mtu=60000 ! udpsink host=192.168.0.132 port=5000
My understanding of this command: grab the signal from video device 0, which is an H.264 stream with a certain width, height and framerate. Then pack it into RTP packets with an MTU high enough to avoid artifacts, encapsulate the RTP packets in UDP packets and stream them to an IP address and port.
Receiver:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
My understanding of this command: receive UDP packets on port 5000; the caps say that RTP packets are inside. I don't know exactly what rtpjitterbuffer does, but it reduces the latency of the video a bit.
rtph264depay extracts the H.264-encoded stream from the RTP packets. To get the raw data that fpsdisplaysink understands, we need to decode the H.264 stream with avdec_h264.
My next step was to change the receiver's sink to a local TCP sink and output that signal with the following HTML5 tag:
<video width=320 height=240 autoplay>
<source src="http://localhost:#port#">
</video>
When I open the website I can't see the stream, but when I analyse the traffic I can see the video data arriving as plain text.
Am I missing a video container like MP4 for my video?
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?
Best regards
Is it possible to get the equivalent gst-launch string for any gst-play command?
For example, playing rtsp stream with gst-play could be:
gst-play-1.0.exe rtsp://path/to/source
That command connects to the server and opens an internal (GStreamer) window for playback.
The equivalent command could be (I'm not really sure):
gst-launch-1.0.exe uridecodebin uri=rtsp://path/to/source ! autovideosink
But how can I get it in the general case?
My main purpose is to redirect the video stream to an AVI file while all I have is a working gst-play command. So I need to replace autovideosink with filesink in the resulting command.
After your update, I would say you have a few options:
1. Use gst-play with the --videosink option, but you would also need the AVI mux element there and the stream would have to be encoded in H.264, so this approach would need some hacking in the source code of gst-play, which you obviously do not want.
1a. You can also use playbin, as suggested by thiagoss, with the video-sink parameter. Then you can maybe use a named bin and pass it in (not sure if this is possible this way, but you may try it):
gst-launch-1.0 playbin uri=rtsp video-sink=bin_avi \( name=bin_avi x264enc ! avimux ! filesink location=file.avi \)
2. Get a picture of the pipeline, analyse it and recreate the same thing yourself manually. On Unix-like systems, do:
export GST_DEBUG_DUMP_DOT_DIR=`pwd`
gst-play-1.0 rtsp://...
# or use gst-launch and playbin.. it's basically the same thing
Check the generated *.dot files, choose the latest one (in the PLAYING state) and use the Graphviz dot tool to turn it into a picture:
dot -T png *PLAYING.dot -o playbin.png
3. Just use uridecodebin and continue as I described in 1a:
uridecodebin ! video/x-raw ! x264enc ! ....
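Spelled out for the RTSP-to-AVI case, that could look something like this (a sketch; element names such as videoconvert, x264enc and avimux are assumed to be available, and any audio stream is simply ignored):
gst-launch-1.0 uridecodebin uri=rtsp://path/to/source ! videoconvert ! x264enc ! avimux ! filesink location=file.avi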
HTH
Use playbin:
gst-launch-1.0.exe playbin uri=rtsp://path/to/source
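If you then need the AVI file rather than a window, you can try pointing playbin's video-sink property at an encoding bin (a sketch; I'm not certain every GStreamer version accepts a bin description as a property value here):
gst-launch-1.0.exe playbin uri=rtsp://path/to/source video-sink="videoconvert ! x264enc ! avimux ! filesink location=file.avi"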
I'm trying to use a JPG file as a virtual webcam for Skype (or similar). The image file is reloaded every few seconds, and the pipeline should always transmit the newest image.
I started by creating a pipeline like this
gst-launch filesrc location=~/image.jpg ! jpegdec ! ffmpegcolorspace ! freeze ! v4l2sink device=/dev/video2
but it only streams the first image and ignores newer versions of the image file. I read something about concat and dynamically changing the pipeline, but I couldn't get it working.
Could you give me any hints on how to get this working?
Dynamically refreshing the input file is NOT possible (at least with filesrc).
Besides, your example uses freeze, which will prevent the image from changing.
One possible method is using multifilesrc and videorate instead.
multifilesrc can read many files (with a provided pattern similar to scanf/printf), and videorate can control the speed.
For example, say you create 100 images named image0000.jpg, image0001.jpg, ..., image0100.jpg. Then play them continuously, showing each image for 1 second:
gst-launch multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! ffmpegcolorspace ! videorate ! v4l2sink device=/dev/video2
Change the number of images with stop-index=100, and change the speed with caps="image/jpeg,framerate=\(fraction\)1/1".
For more information about these elements, refer to their documentation at gstreamer.freedesktop.org/documentation/plugins.html
EDIT: It looks like you are using GStreamer 0.10, not 1.x.
In that case, please refer to the old 0.10 documentation for multifilesrc and videorate.
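For anyone on GStreamer 1.x, the equivalent of the pipeline above mainly swaps ffmpegcolorspace for videoconvert (a sketch):
gst-launch-1.0 multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! videoconvert ! videorate ! v4l2sink device=/dev/video2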
You can use a general file name with multifilesrc if you add some parameter adjustments and pair it with an identity on a delay. It's a bit fragile but it'll do fine for a temporary one-off program as long as you keep your input images the same dimensions and format.
gst-launch-1.0 multifilesrc loop=true start-index=0 stop-index=0 location=/tmp/whatever ! decodebin ! identity sleep-time=1000000 ! videoconvert ! v4l2sink
I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame, meaning that if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't go that fast, so I need another method.
I figured it might work if I have one pipeline capture the video and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a format like H.264 or MP4 just to decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, while writing to disk happens at its own pace. Just be aware that the queue will grow quite large if you leave this setup running for too long.
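If you want the queue to absorb the backlog without ever blocking the capture, you can lift its default size limits (a sketch; 0 means unlimited, so memory use grows with the backlog):
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue max-size-buffers=0 max-size-bytes=0 max-size-time=0 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"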
The solution I have to offer doesn't use GStreamer but ffmpeg. I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG video before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you might use .jpg, .bmp, ... instead of .png.
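For example, to extract JPEGs at a fairly high quality (a small usage sketch; -q:v sets the JPEG quality, where lower values mean better quality):
ffmpeg -i movie.avi -q:v 2 frame%05d.jpg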
I ended up doing this in two parts.
1. Write the video to a file.
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
2. Post-process.
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"