I am trying to receive audio from an OSX audio device with 64 input channels (Soundflower64 in this example) and record them to a multichannel wav.
I get the first 8 channels without problems with this basic command:
gst-launch-1.0 osxaudiosrc device=78 ! wavenc ! filesink location=audio.wav
But I found no way to widen this pipeline to 64 channels. Nothing seems to work...
Can it be done or is this an inherent limitation of GStreamer?
This still looks like a hardcoded limit in GStreamer - https://github.com/GStreamer/gst-plugins-good/blob/972184f434c3212aa313eacb9025869d050e91c3/sys/osxaudio/gstosxcoreaudio.h#L67
Edit: I made a pull request to GStreamer to increase this limit to 64, which has now been released in v1.20. See https://gitlab.freedesktop.org/gstreamer/gstreamer/-/blob/1.20/subprojects/gst-plugins-good/sys/osxaudio/gstosxcoreaudio.h#L66
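On 1.20 or later, a caps filter should be enough to request the full channel count. This is a rough, untested sketch, reusing my device id (78) and assuming the device really exposes all 64 channels (channel-mask=0 marks the channels as unpositioned):
# request all 64 channels from the device before encoding to WAV
gst-launch-1.0 osxaudiosrc device=78 ! audio/x-raw,channels=64,channel-mask=(bitmask)0x0 ! wavenc ! filesink location=audio.wav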
My goal is to stream a color map to RTSP and from there to stream it to KVS.
By color map I mean a 32x24x3 numpy matrix that I've created and write to the screen at around 7 fps, as shown here:
Sending OpenCV output to VLC stream
and convert it to an RTSP stream.
My line is:
python3 rtsp.py | vlc -I dummy --demux=rawvideo --rawvid-fps=7 --rawvid-width=32 --rawvid-height=24 --rawvid-chroma=RV24 - --sout "#transcode{vcodec=mp4v,vb=200,fps=7,width=32,height=24,samplerate=44100}:rtp{sdp=rtsp://127.0.0.1:8557/color.sdp}"
This works, and if I open the VLC GUI I can open the stream and see the frames.
My next goal is to stream it to KVS using GStreamer.
Usually I stream an H.264 source to kvssink, and it looks like this (from Amazon's tutorial):
gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! video/x-h264, format=avc,alignment=au ! kvssink stream-name="colorStream" storage-size=512
But it seems that GStreamer can't read the frames, and I get:
INFO - freeKinesisVideoStream(): Freeing Kinesis Video stream.
DEBUG - curlApiCallbacksShutdownActiveRequests(): pActiveRequests hashtable is empty
I've tried all kinds of codecs but nothing works.
Does anybody have another solution?
Amazon's documentation is either outdated or only works for some system configurations. I had the same issue and had to use a different GStreamer incantation. For you it would look like:
gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! h264parse ! kvssink stream-name="colorStream" storage-size=512
Note the h264parse in place of video/x-h264, format=avc,alignment=au. I spent hours figuring this one out; the lightbulb went off after I used the debug guide to raise the debug level of precisely the part of the pipeline that was dropping frames, and then read this issue thread on the KVS producer SDK repo.
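For reference, raising the debug level for just the suspect elements looks like this (the category names normally match the element names, but double-check them against the debug guide):
# verbose output only for the RTSP source, the depayloader and the KVS sink
GST_DEBUG=rtspsrc:4,rtph264depay:6,kvssink:5 gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! h264parse ! kvssink stream-name="colorStream" storage-size=512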
I'm not a GStreamer expert and can't explain why this works, or why this plugin is considered "bad", but I hope this helps others stuck on the same problem.
I want to read the feed from a webcam and host an RTSP stream without encoding the feed. I have access to a high-bandwidth network, but the CPUs are very low end and have other tasks to fulfill, which is why I want to skip the encoding/decoding steps to save on CPU usage. Before jumping to RTSP I tried a simple MJPEG stream and tried to skip jpegenc (JPEG encoding), hoping it could be done directly with a simple gst pipeline:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000
However, I got a warning:
WARNING: erroneous pipeline: could not link videoscale0 to
rtpjpegpay0, rtpjpegpay0 can't handle caps video/x-raw,
format=(string)I420, width=(int)800, height=(int)600,
framerate=(fraction)25/1
I'm new to GStreamer and not sure whether this is possible or how to move forward. The same command above works if I include the JPEG encoding, as shown below. Any suggestions would be appreciated.
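For reference, the version with encoding that does work for me looks roughly like this, with jpegenc inserted before the payloader:
# working MJPEG variant: encode to JPEG first, then payload as RTP
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! jpegenc ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000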
rtpjpegpay is an element that takes in a Motion JPEG stream and translates it to RTP. The input you're giving it, however, is video/x-raw, which means it is unencoded rather than encoded with Motion JPEG. If you want to use this element, you'll first have to encode the video to Motion JPEG, using something like jpegenc.
Like vermaete already mentions: if you really, really don't want to encode your video, you can use something like rtpvrawpay, which will translate your raw video into RTP packets. However, sending raw, unencoded video over the network is not really advisable (it may not even be workable if you have a bad connection, and is likely impossible if you plan on sending it over the Internet). You might also end up using a lot of CPU just to get everything payloaded properly and sent to your network card.
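Adapted to your pipeline, that would be something like the following (an untested sketch; the receiving side then needs a matching rtpvrawdepay):
# payload raw I420 video directly as RTP, with no encoding step
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpvrawpay ! udpsink host=10.0.1.10 port=5000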
I am currently playing around with an audio-over-IP project and wondered if you could help me out.
I have a LAN with an audio source (a Dante/AES67 RTP stream) which I would like to distribute to multiple receivers (SBCs, e.g. a Raspberry Pi, with an audio output such as the headphone jack):
PC-->Audio-USB-Dongle-->AES67/RTP-Multicast-Stream-->LAN-Network-Switch-->RPI (Gstreamer --> AudioJack)
I currently use GStreamer for the pipeline:
gst-launch-1.0 -v udpsrc uri=udp://239.69.xxx.xx:5004 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)48000,encoding-name=(string)L24" ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0
It all works fine, but if I watch a video on the PC and listen to the audio from the RPi, there is some latency (~200-300 ms), hence my questions:
Am I missing something in my GStreamer pipeline that would allow me to reduce latency?
What is the minimal latency to be expected with RTP streams? Is <50 ms achievable?
Is the latency caused by the network or by the speed of the RPi?
Since my audio input is not a GStreamer input, I assume rtpjitterbuffer or similar would not help to decrease latency?
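For illustration, one variant I could try is shrinking the sink-side buffering (alsasink defaults to about 200 ms of buffer-time); the values below are just guesses, and I don't know whether this is where the delay actually comes from:
# same pipeline, but with explicit, smaller ALSA buffer/latency settings
gst-launch-1.0 -v udpsrc uri=udp://239.69.xxx.xx:5004 caps="application/x-rtp,channels=(int)2,format=(string)S16LE,media=(string)audio,payload=(int)96,clock-rate=(int)48000,encoding-name=(string)L24" ! rtpL24depay ! audioconvert ! alsasink device=hw:0,0 buffer-time=50000 latency-time=10000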
I am trying to capture and store a webcam stream. The requirements are 1920x1080@30fps, and it must be done on a single-board computer (a Raspberry Pi).
The duration to capture is 10 minutes. (For the moment I only capture 10 seconds for testing.)
In general the camera (a usbfhd01m from ELP) is able to provide an MJPEG stream at 1920x1080@30fps. I am just not able to store it, and I don't know why. I tried it with the following pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi
The result is a video file that is far from smooth. What is missing in my pipeline?
When I use the same pipeline, but decode the stream and save it in a raw file like this:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! jpegdec ! filesink location=test.yuv
then the raw video is perfectly smooth. Therefore, I think the pipeline and the device are able to record at 1920x1080@30fps, but something seems to go wrong when saving the stream.
Storing the stream in the Matroska file format does not change my problem, and for transcoding on the fly to H.264 the Raspberry Pi 3 doesn't seem to be powerful enough (even when using omxh264enc).
What happens when you remove do-timestamp=true? This option applies current pipeline timestamps to the sample buffers, overwriting those coming from the device. You probably want to store the original timestamps instead of overwriting them, since the pipeline timestamps can carry some jitter.
In your second pipeline you save the stream as raw data, which basically removes all the timestamp information you have (including the jittery timestamps). So when you play back the raw stream, a constant framerate is assumed instead.
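That is, just your first pipeline with do-timestamp dropped:
# keep the timestamps delivered by the camera instead of restamping in the pipeline
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi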
I am running a 9-second video on my NVIDIA Tegra Jetson TK1 board using GStreamer:
gst-launch-0.10 playbin uri=file:///home/ubuntu/widescreen.avi
I notice this drops a lot of frames and gstreamer prints these messages:
WARNING: from element /GstPlayBin:playbin0/GstBin:vbin/GstAutoVideoSink:videosink/GstXvImageSink:videosink-actual-sink-xvimage: A lot of buffers are being dropped.
Additional debug info:
gstbasesink.c(2875): gst_base_sink_is_too_late (): /GstPlayBin:playbin0/GstBin:vbin/GstAutoVideoSink:videosink/GstXvImageSink:videosink-actual-sink-xvimage:
There may be a timestamping problem, or this computer is too slow.
I ran top while executing this, and indeed GStreamer is taking 95% of the CPU.
Now, when I play this video through the default media player, it plays completely fine and without any lag. Does anyone know why GStreamer is unable to play it properly? I am new to GStreamer and wondering if I can do something to alleviate this.
Bins are many elements combined into one element-like structure (i.e. with an appropriate number of sink and source pads). playbin is a standard bin with an "autovideosink" element, which automatically detects video sinks.
Firstly, I would suggest you upgrade to GStreamer 1.0, in which many bugs were fixed.
Secondly, the problem seems to be with your xvimagesink, so try using ximagesink by specifying it explicitly; your autovideosink is selecting xvimagesink by default.
Try this pipeline (for hardware decoding):
gst-launch-1.0 filesrc location="location of h264 video/file.avi" ! avidemux ! h264parse ! omxh264dec ! videoconvert ! ximagesink
Or, for CPU decoding, use avdec_h264 instead of omxh264dec.
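Spelled out, the CPU-decoding variant would be (same placeholder file location as above):
# software H.264 decode instead of the OMX hardware decoder
gst-launch-1.0 filesrc location="location of h264 video/file.avi" ! avidemux ! h264parse ! avdec_h264 ! videoconvert ! ximagesink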
The default media player is likely using hardware video decoding if the media file is H.264, which lets it decode with far less CPU. Try creating a GStreamer pipeline that explicitly uses the OMX H.264 decoding element, as discussed here (http://elinux.org/Jetson/H264_Codec).
e.g. gst-launch-0.10 filesrc location=/home/ubuntu/widescreen.avi ! avidemux ! h264parse ! nv_omx_h264dec ! autovideosink