I stream my camera from a PC to a remote VM using VLC.
cvlc v4l2:///dev/video0 :live-caching=300 :sout="#transcode{vcodec=FLV1,scale=0.75,vb=128,acodec=none}:http{dst=:8080/stream.wmv}"
I can see the camera from the VM with port forwarding (ssh -C -X -p 22 user@83.*.*.* -R 8080:localhost:80) and with this command:
wget http://13.*.*.*:8080/stream.wmv -O - | mplayer -cache 8192 -
I would like to take that stream and send it to a virtual camera, let's say /dev/video9. This virtual camera should be readable by Skype.
For the virtual camera I use v4l2loopback. In order to receive the stream I use gstreamer and mjpegtools_yuv_to_v4l.
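I load the loopback module roughly like this, so that the virtual device shows up as /dev/video9 (the device number is just the one I picked):
sudo modprobe v4l2loopback video_nr=9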
I have read, in "How can I use vloopback mjpeg pipe without WebcamStudio", something like this:
gst-launch-1.0 souphttpsrc location=http://13*.*.*.*:8080/stream.wmv ! decodebin ! y4menc ! filesink location=output.yuv & cat output.yuv | mjpegtools_yuv_to_v4l /dev/video9
but I get errors like
sfdemux0: Could not demultiplex stream.
Additional debug info:
EOF in read stream header, stop.
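I was also wondering whether I should instead decode to raw video and push it into the loopback device with v4l2sink, along the lines of this untested sketch (the raw format and caps are just guesses):
gst-launch-1.0 souphttpsrc location=http://13.*.*.*:8080/stream.wmv is-live=true ! decodebin ! videoconvert ! video/x-raw,format=YUY2 ! v4l2sink device=/dev/video9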
Thank you.
I'm writing a C++ application with GStreamer and am trying to achieve the following: connect to an RTP audio stream (Opus), write one copy of the entire stream to an audio file, and additionally, based on events triggered by the user, create a separate series of audio files consisting of segments of the RTP stream (think a start/stop record toggle button).
Currently using udpsrc -> rtpbin -> rtpopusdepay -> queue -> tee (pipeline splits here)
tee_stream_1 -> queue -> webmmux -> filesink
tee_stream_2 -> queue -> webmmux -> filesink
tee_stream_1 should be active during the entire duration of the pipeline. tee_stream_2 is what should generate multiple files based on user toggle events.
An example scenario:
The pipeline receives the rtp audio stream, and tee_stream_1 begins writing audio to full_stream.webm
2 seconds into rtp audio stream, user toggles "start recording". tee_stream_2 begins writing audio to stream_segment_1.webm
5 seconds into rtp audio stream, user toggles "stop recording". tee_stream_2 finishes writing audio to stream_segment_1.webm and closes file.
8 seconds into rtp audio stream, user toggles "start recording". tee_stream_2 begins writing audio to stream_segment_2.webm
9 seconds into rtp audio stream, user toggles "stop recording". tee_stream_2 finishes writing audio to stream_segment_2.webm and closes file.
10 seconds into rtp audio stream, stream ends, full_stream.webm finishes writing audio and closes.
End result being 3 audio files, full_stream.webm with 10 seconds of audio, stream_segment_1.webm with 3 seconds of audio, and stream_segment_2.webm with 1 second of audio.
Attempts to do this so far have been met with difficulty, since the muxers seem to require an EOS event to finish properly writing the stream_segment files; however, this EOS is propagated to the other elements of the pipeline, which has the undesired effect of ending all of the recordings. Any ideas on how to best accomplish this? I can provide code if it would be helpful.
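For reference, a simplified gst-launch sketch of what the application currently builds (in the real code I use rtpbin; here I substitute rtpjitterbuffer for brevity, and the port and file names are placeholders):
gst-launch-1.0 -e udpsrc port=5004 caps="application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000" \
  ! rtpjitterbuffer ! rtpopusdepay ! opusparse ! tee name=t \
  t. ! queue ! webmmux ! filesink location=full_stream.webm \
  t. ! queue ! webmmux ! filesink location=stream_segment_1.webm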
Thank you for any and all assistance!
For such a case, I'd suggest giving a try to RidgeRun's open-source gstd and gst-interpipe plugins, which provide high-level control of dynamic pipelines.
You can install them with something like:
# Some required packages to install first; this list is not exhaustive...
# If it is not enough, you will see errors and can figure out any other missing packages
sudo apt install libsoup2.4-dev libjson-glib-dev libdaemon-dev libjansson-dev libreadline-dev gtk-doc-tools python3-pip
# Get gstd sources from github
git clone --recursive https://github.com/RidgeRun/gstd-1.x.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gstd-1.x
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Get gst-interpipe sources from github
git clone --recursive https://github.com/RidgeRun/gst-interpipe.git
# Configure, build and install (meson may be better, but here using autogen/configure)
cd gst-interpipe
./autogen.sh
./configure
make -j $(nproc)
sudo make install
cd ..
# Tell gstreamer about the new plugins interpipesink and interpipesrc
# First clear gstreamer cache (here using arm64, you would adapt for your arch)
rm ~/.cache/gstreamer-1.0/registry.aarch64.bin
# add new plugins path
export GST_PLUGIN_PATH=/usr/local/lib/gstreamer-1.0
# now any gstreamer command would rebuild the cache, so if ok this should work
gst-inspect-1.0 interpipesink
interpipe pipelines need a daemon (gstd) that manages them, so in a first terminal just start it. It will display the operations performed, and errors if any:
gstd
Now, in a second terminal, try this script (here recording into directory /home/user/tmp/tmp2; adjust for your case):
#!/bin/sh
gstd-client pipeline_create rtpopussrc udpsrc port=5004 ! application/x-rtp,media=audio,encoding-name=OPUS,clock-rate=48000 ! queue ! rtpbin ! rtpopusdepay ! opusparse ! audio/x-opus ! interpipesink name=opussrc
gstd-client pipeline_create audio_record_full interpipesrc name=audiofull listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/full_stream.webm
gstd-client pipeline_play rtpopussrc
gstd-client pipeline_play audio_record_full
sleep 2
gstd-client pipeline_create audio_record_1 interpipesrc name=audio_rec1 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_1.webm
gstd-client pipeline_play audio_record_1
sleep 3
gstd-client pipeline_stop audio_record_1
gstd-client pipeline_delete audio_record_1
sleep 3
gstd-client pipeline_create audio_record_2 interpipesrc name=audio_rec2 listen-to=opussrc is-live=true allow-renegotiation=true stream-sync=compensate-ts ! queue ! audio/x-opus ! opusparse ! webmmux ! filesink location=/home/user/tmp/tmp2/stream_segment_2.webm
gstd-client pipeline_play audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_2
gstd-client pipeline_delete audio_record_2
sleep 1
gstd-client pipeline_stop audio_record_full
gstd-client pipeline_delete audio_record_full
gstd-client pipeline_stop rtpopussrc
gstd-client pipeline_delete rtpopussrc
echo 'Done'
and check the resulting files.
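For a quick sanity check of the durations, you can inspect each file, for example with gst-discoverer-1.0 from gst-plugins-base:
gst-discoverer-1.0 /home/user/tmp/tmp2/full_stream.webm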
I want to play /dev/urandom using gst-launch-1.0, just as I can do it with aplay:
aplay /dev/urandom
How to do this?
For a quick start try using this:
gst-launch-1.0 filesrc location=/dev/urandom ! rawaudioparse ! alsasink
Of course, you can modify it to select a specific ALSA device, channel count, sample rate, etc.
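For example, with explicit format settings and a specific ALSA device (the values below are just placeholders to adapt):
gst-launch-1.0 filesrc location=/dev/urandom \
  ! rawaudioparse format=pcm pcm-format=s16le sample-rate=44100 num-channels=2 \
  ! audioconvert ! audioresample ! alsasink device=hw:0,0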
I want to write binary data directly to a gstreamer pipeline, but I'm unable to do so.
I tried the rawaudioparse plugin. I wrote the binary data into a .raw file and tried this command to play it:
gst-launch-1.0 filesrc location=audio.raw ! rawaudioparse use-sink-caps=false \
    format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 ! \
    audioconvert ! audioresample ! autoaudiosink
My goal is to write audio binary data to a gstreamer pipeline and play it out as an RTMP stream.
Yes, you can achieve this using the element fdsrc, which takes a file descriptor (by default: standard input) from which it will start reading data.
Your GStreamer pipeline will then look like this:
# Replace "cat audio.raw" with your actual commands
cat audio.raw | gst-launch-1.0 fdsrc ! rawaudioparse (...)
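To reach the stated RTMP goal, the parsed raw audio can then be encoded and muxed into FLV before an rtmpsink. A rough, untested sketch (the AAC encoder choice and the RTMP URL are assumptions; adapt to what is available on your system):
cat audio.raw | gst-launch-1.0 fdsrc \
  ! rawaudioparse use-sink-caps=false format=pcm pcm-format=s16le sample-rate=48000 num-channels=2 \
  ! audioconvert ! audioresample ! voaacenc ! aacparse \
  ! flvmux streamable=true ! rtmpsink location="rtmp://example.com/live/stream"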
I am very new to the whole GStreamer thing, so I would be happy if you could help me.
I need to stream a near-zero-latency video signal from a webcam to a server and then be able to view the stream on a website.
The webcam is attached to a Raspberry Pi 3, because there are space constraints on the mounting platform. As a result of using the Pi, I really can't transcode the video on the Pi itself. Therefore I bought a Logitech C920 webcam, which is able to output a raw H.264 stream.
So far I have managed to view the stream on my Windows machine, but I didn't manage to get the whole website part working.
My "achievements":
Sender:
gst-launch-1.0 -e -v v4l2src device=/dev/video0 ! video/x-h264,width=1920,height=1080,framerate=30/1 ! rtph264pay pt=96 config-interval=5 mtu=60000 ! udpsink host=192.168.0.132 port=5000
My understanding of this command: get the signal of video device 0, which is an H.264 stream with a certain width, height and framerate; pack it into RTP packets with an MTU high enough to avoid artefacts; then encapsulate the RTP packets in UDP packets and stream them to an IP address and port.
Receiver:
gst-launch-1.0 -e -v udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! avdec_h264 ! fpsdisplaysink sync=false text-overlay=false
My understanding of this command: receive UDP packets on port 5000; the caps say there are RTP packets inside. I don't know exactly what rtpjitterbuffer does, but it reduces the latency of the video a bit.
rtph264depay extracts the H.264-encoded stream from the RTP packets. To get the raw data that fpsdisplaysink understands, we need to decode the H.264 signal using avdec_h264.
My next step was to change the receiver sink to a local TCP sink and output that signal with the following HTML5 tag:
<video width=320 height=240 autoplay>
<source src="http://localhost:#port#">
</video>
If I view the website I can't see the stream, but when I analyse the data I can see that the video data does arrive (as plain text).
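For completeness, the TCP-sink receiver I pointed the video tag at looked roughly like this (from memory; the port is the same placeholder as in the HTML above):
gst-launch-1.0 udpsrc port=5000 ! application/x-rtp, payload=96 ! rtpjitterbuffer ! rtph264depay ! h264parse ! tcpserversink host=127.0.0.1 port=#port#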
Am I missing a videocontainer like MP4 for my video?
Am I wrong with decoding?
What am I doing wrong?
How can I improve my solution?
How would you solve that problem?
Best regards
What would be an equivalent gstreamer-1.0 command for:
ffmpeg -i <cam-url> -vcodec copy /tmp/h264Vid.avi
Here the camera is giving an H.264 stream and we want to dump it directly to a video file via gstreamer (we do not need decoded data).
If you want to record your rtsp camera stream, this should work:
gst-launch-1.0 -e rtspsrc location=rtsp://<user>:<pass>@<ip>/h264.sdp \
    ! rtph264depay ! h264parse \
    ! avimux ! filesink location=video_test.avi
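If you prefer an MP4 container over AVI, a similar pipeline should work (untested sketch):
gst-launch-1.0 -e rtspsrc location=rtsp://<user>:<pass>@<ip>/h264.sdp \
    ! rtph264depay ! h264parse \
    ! mp4mux ! filesink location=video_test.mp4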