I am trying to stream live audio data from alsasrc to a sound card via ALSA. When I stream .wav audio data, it is streamed properly, but when I stream .dts audio data I don't get a proper stream; I get noise instead. When I recorded the .dts audio to a file using filesink, it played back properly from the file.
The command I used to stream both the .wav and the .dts data is:
gst-launch-1.0 alsasrc device=..... ! decodebin ! audioconvert ! alsasink device=...
For .wav audio I get a proper stream, but for .dts audio I get noise.
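For reference, the file-recording test can be as simple as the following rough sketch (not necessarily the exact command I used), which just dumps whatever alsasrc delivers straight to a file:
gst-launch-1.0 alsasrc device=..... ! filesink location=capture.dts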
My goal is to stream a color map to RTSP and from there to stream it to KVS.
By color map I mean a 32x24x3 numpy matrix that I've created and that I write to the screen at around 7 fps, as shown here:
Sending OpenCV output to VLC stream
and then convert it to an RTSP stream.
My command line is:
python3 rtsp.py | vlc -I dummy --demux=rawvideo --rawvid-fps=7 --rawvid-width=32 --rawvid-height=24 --rawvid-chroma=RV24 - --sout "#transcode{vcodec=mp4v,vb=200,fps=7,width=32,height=24,samplerate=44100}:rtp{sdp=rtsp://127.0.0.1:8557/color.sdp}"
This works: if I open the VLC GUI I can read the stream and see the frames.
My next goal is to stream it to KVS using GStreamer.
Usually I stream an H264 source to kvssink, which looks like this (from Amazon's tutorial):
gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! video/x-h264, format=avc,alignment=au ! kvssink stream-name="colorStream" storage-size=512
But it seems that GStreamer can't read the frames and I get:
INFO - freeKinesisVideoStream(): Freeing Kinesis Video stream.
DEBUG - curlApiCallbacksShutdownActiveRequests(): pActiveRequests hashtable is empty
I've tried all kinds of codecs but nothing works.
Does anybody have another solution?
Amazon's documentation is either outdated or only works for some system configurations. I had the same issue and had to use a different gstreamer incantation. For you it would look like:
gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! h264parse ! kvssink stream-name="colorStream" storage-size=512
Note the h264parse in place of video/x-h264, format=avc,alignment=au. I spent hours figuring this one out and the lightbulb in my head went off after using the debug guide to precisely jack up the debug level of the part of the pipeline dropping frames, and then reading this issue thread on the KVS producer sdk repo.
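As an illustration (a sketch rather than the exact command I ran), the GST_DEBUG environment variable lets you raise the log level for just the elements you care about:
GST_DEBUG=rtph264depay:6,h264parse:6 gst-launch-1.0 rtspsrc location="rtsp://127.0.0.1:8557/color.sdp" ! rtph264depay ! h264parse ! kvssink stream-name="colorStream" storage-size=512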
I'm not a GStreamer expert and can't explain why this works, or why this plugin is considered "bad", but I hope this helps others stuck on the same problem.
I want to read the feed from a webcam and host an RTSP stream without encoding the feed. I have access to a high-bandwidth network, but the CPUs are very low-end and have other tasks to fulfill, which is why I want to skip the encoding/decoding steps to save CPU usage. Before jumping to RTSP I tried a simple MJPG stream and tried to skip jpegenc (JPEG encoding), as this can be done directly with a simple gst pipeline:
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000
However, I got a warning:
WARNING: erroneous pipeline: could not link videoscale0 to
rtpjpegpay0, rtpjpegpay0 can't handle caps video/x-raw,
format=(string)I420, width=(int)800, height=(int)600,
framerate=(fraction)25/1
I'm new to GStreamer and not sure if this is possible or how to move forward. The same command above works if I include the JPG encoding. Any suggestions would be appreciated.
rtpjpegpay is an element that takes in a Motion JPEG stream and translates it to RTP. The input you're giving it, however, is video/x-raw, which means it is unencoded rather than encoded with Motion JPEG. If you want to use this element, you'll first have to encode the stream to Motion JPEG, using something like jpegenc.
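A sketch of what that looks like with your pipeline (untested here; same caps and destination as in your command):
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! jpegenc ! rtpjpegpay ! udpsink host=10.0.1.10 port=5000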
As @vermaete already mentions: if you really, really don't want to encode your video, you can use something like rtpvrawpay, which will translate your raw video into RTP packets. However, sending raw, unencoded video over the network is not really advisable (and not even workable if you have a bad connection, or downright impossible if you plan on sending it over the Internet). You might also end up using a lot of CPU just to get everything payloaded properly and sent to your network card.
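For completeness, a sketch of that unencoded variant (again untested, same caps as your command):
gst-launch-1.0 -v autovideosrc ! videoconvert ! videoscale ! video/x-raw,format=I420,width=800,height=600,framerate=25/1 ! rtpvrawpay ! udpsink host=10.0.1.10 port=5000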
How do I put unrelated audio into any generated video stream in a way that keeps them in sync in gstreamer?
Context:
I want to stream audio from Icecast into a Kinesis Video stream, and then view it with Amazon's player. The player only works if there is video as well as audio, so I generate video with videotestsrc.
The video and audio need to be in sync in terms of timestamps, or the Kinesis sink 'kvssink' throws an error. But because they are two separate sources, they are not in sync.
I am using gst-launch-1.0 to run my pipeline.
My basic attempt was like this:
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.
The error message I get translates to:
STATUS_MAX_FRAME_TIMESTAMP_DELTA_BETWEEN_TRACKS_EXCEEDED
This indicates that the audio and video timestamps are too different, so I want to force them to match, maybe by throwing away the video timestamps?
There are different meanings of "sync". Let us ignore lip sync for a moment (where audio and video match each other).
There is sync in terms of timestamps, i.e. whether the streams carry similar timestamps in their representation, and sync in terms of when, in real time, the samples with those timestamps actually arrive at the sink (latency).
It is hard to tell from the error which one the sink is complaining about.
Maybe try x264enc tune=zerolatency for a start; without that option the encoder introduces about two seconds of latency, which may cause issues for certain requirements.
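Applied to your pipeline that is a one-word change in the video branch, something like this (a sketch, everything else left as in your command):
gst-launch-1.0 -v \
videotestsrc pattern=red ! video/x-raw,framerate=25/1 ! videoconvert ! x264enc tune=zerolatency ! h264parse ! video/x-h264,stream-format=avc,alignment=au ! \
queue ! kvssink name=sink stream-name="NAME" access-key="KEY" secret-key="S_KEY" \
uridecodebin uri=http://ice-the.musicradio.com/LBCLondon ! audioconvert ! voaacenc ! aacparse ! queue ! sink.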
Then again, the audio stream will have some latency too, and it may not be easy to tune the two to match. The sink should actually do the buffering and synchronization.
I am trying to capture and store a webcam stream. The requirements are 1920x1080@30fps, and it must be done on a single-board computer (a Raspberry Pi).
The duration to capture is 10 minutes. (For the moment I only capture 10 seconds for testing.)
In general the camera (a usbfhd01m from ELP) is able to provide an MJPEG stream at 1920x1080@30fps; I am just not able to store it, and I don't know why. I tried it with the following pipeline:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi
The result is a video file whose playback is far from smooth. What is missing in my pipeline?
When I use the same pipeline, but decode the stream and save it in a raw file like this:
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 do-timestamp=true ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! jpegdec ! filesink location=test.yuv
then the raw video is perfectly smooth. Therefore, I think the pipeline and the device are able to record at 1920x1080@30fps, but something seems to go wrong when saving the stream.
Storing the stream in the Matroska file format does not change my problem, and for transcoding on the fly to H264 the Raspberry Pi 3 doesn't seem to be powerful enough (even using omxh264enc).
What happens when you remove the do-timestamp=true? This option applies the current pipeline timestamps to the sample buffers, overwriting the ones coming from the device. You probably want to store the original device timestamps instead of overwriting them, as the pipeline timestamps can carry some jitter.
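That is, something along these lines (the same command, only without that option):
gst-launch-1.0 v4l2src device=/dev/video0 num-buffers=300 ! image/jpeg,width=1920,height=1080,framerate=30/1 ! queue ! avimux ! filesink location=test.avi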
In your second pipeline you save the stream as raw data, which basically removes all the timestamp information you have (including the jittery timestamps), so when you play the raw stream back a constant framerate is assumed instead.
I have created an application that uses appsrc to record mp4/mpeg files. An EOS event is sent whenever I have to stop recording, and the file is created successfully. Everything goes well; my pipeline is:
appsrc ! queue ! videorate ! ffmpegcolorspace ! x264enc ! mp4mux ! filesink location=video.mp4
However, if my application crashes (and is unable to send a successful EOS), the recorded data is completely lost.
Is there a way to recover such files in GStreamer? I was wondering whether I could append an EOS by reading such files back in GStreamer. Is there a provision to do that, or something similar, so that I don't lose the data?
Thanks,
Rahul
You may wish to mux the data into an MPEG transport stream (.ts) instead of an MP4 file. The reason the MP4 file is unreadable after an application crash is that mp4mux doesn't get a chance to write the file's 'moov' atom, which can only be done after all the multimedia data has been recorded (i.e., when EOS is processed). A .ts file is built for streaming and can still be read even if the end of the file is incomplete.
To invoke it, change the end of your pipeline to:
... ! x264enc ! mpegtsmux ! filesink location=video.ts
If MP4 is a requirement, the .ts file can easily be losslessly remuxed into an MP4 after recording.
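For example, something along these lines should do the remux (a sketch, assuming the .ts contains only the H264 video stream):
gst-launch-1.0 filesrc location=video.ts ! tsdemux ! h264parse ! mp4mux ! filesink location=video.mp4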
You can use the "moov-recovery-file" property to be able to repair the file in the case of a crash. See atomsrecovery for details.
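For instance, applied to the pipeline above it might look like this (a sketch; the recovery file name is just an example):
appsrc ! queue ! videorate ! ffmpegcolorspace ! x264enc ! mp4mux moov-recovery-file=video.moov-recovery ! filesink location=video.mp4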