This command converts RTMP to TS files, but I get only video. How can I get both audio and video?
rtmpsrc name=rtmpsrc location=rtmp://127.0.0.1/live/test ! flvdemux name=demux demux.video ! h264parse ! mpegtsmux name=mux ! multifilesink location=/folder/to/save/%05d.ts sync=true next-file=key-unit-event post-messages=true
You need another branch: demux.audio ! ... ! mux. You can simply append it to the end of the pipeline. Use a parser that matches your audio codec in place of the ....
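For example, assuming the FLV stream carries AAC audio (swap in the parser that matches your actual audio codec), the full pipeline could look like this:
rtmpsrc name=rtmpsrc location=rtmp://127.0.0.1/live/test ! flvdemux name=demux demux.video ! h264parse ! mpegtsmux name=mux ! multifilesink location=/folder/to/save/%05d.ts sync=true next-file=key-unit-event post-messages=true demux.audio ! aacparse ! mux.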
I need to record 4 RTSP streams into a single Kinesis Video Streams stream.
The streams must be arranged in the video like this:
|----------|----------|
|          |          |
| STREAM 1 | STREAM 2 |
|          |          |
|----------|----------|
|          |          |
| STREAM 3 | STREAM 4 |
|          |          |
|----------|----------|
I was able to insert a single stream and make it work perfectly, using the command below:
gst-launch-1.0 rtspsrc user-id="admin" user-pw="password" location="rtsp://admin:password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, my goal is to combine multiple streams into the same stream in Kinesis Video Streams.
For this, I found the videomixer example below:
gst-launch-1.0 -e rtspsrc location=rtsp_url1 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_0 \
rtspsrc location=rtsp_url2 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_1 \
rtspsrc location=rtsp_url3 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_2 \
rtspsrc location=rtsp_url4 ! rtph264depay ! h264parse ! decodebin ! videoconvert ! m.sink_3 \
videomixer name=m sink_1::xpos=1280 sink_2::ypos=720 sink_3::xpos=1280 sink_3::ypos=720 ! x264enc ! mp4mux ! filesink location=./out.mp4 sync=true
I adapted the example to just two streams and made it work inside the container, using a command like the one below:
gst-launch-1.0 -e rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! m.sink_1 \
videomixer name=m sink_0::xpos=1080 sink_1::ypos=1080 ! x265enc ! h265parse ! video/x-h265, alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
And also in another form:
gst-launch-1.0 -e videomixer name=mix sink_0::xpos=0 sink_0::ypos=0 sink_0::alpha=0 sink_1::xpos=0 sink_1::ypos=0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_0 \
rtspsrc user-id="admin" user-pw="password" location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! rtph265depay ! h265parse ! video/x-h265, alignment=au ! libde265dec ! videoconvert ! videoscale ! video/x-raw,width=1920,height=1080 ! mix.sink_1 \
mix. ! queue ! videoconvert ! x265enc ! queue ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
The container in question is from: https://github.com/awslabs/amazon-kinesis-video-streams-producer-sdk-cpp
However, when I log into Kinesis Video Streams and try to download a clip (GetClip), in both cases I get this error:
MissingCodecPrivateDataException
Missing codec private data in fragment for track 1.
Status code: 400
The logs with GST_DEBUG=1 can be found at https://gist.github.com/vbbandeira/b15ec8af6986237a4cd7e382e4ede261
And the logs with GST_DEBUG=4 can be found at https://gist.github.com/vbbandeira/6bd4b7a014a69da5f46cd036eaf32aec
Can you guys please let me know what is going on there?
Or if possible, help me find the solution to this error.
Thanks!
For those looking for the same solution: I managed to make it work by replacing videomixer, which is deprecated, with compositor. Below is an example of the command I used, and it worked:
gst-launch-1.0 rtspsrc location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
rtspsrc location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
compositor name=comp sink_0::xpos=0 sink_1::xpos=1280 ! x264enc ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"
However, I was only able to get this working with H.264.
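If you still hit the MissingCodecPrivateDataException, one thing worth trying (a sketch I have not verified) is to keep h264parse and an au-aligned caps filter in front of kvssink, mirroring the working single-stream pipeline above:
gst-launch-1.0 rtspsrc location="rtsp://password#192.168.0.1:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_0 \
rtspsrc location="rtsp://password#192.168.0.2:554/cam/realmonitor?channel=1&subtype=0" short-header=TRUE ! decodebin ! videoconvert ! comp.sink_1 \
compositor name=comp sink_0::xpos=0 sink_1::xpos=1280 ! x264enc ! h264parse ! video/x-h264,alignment=au ! kvssink stream-name="test-stream" storage-size=512 access-key="access-key" secret-key="secret-key" aws-region="us-east-1"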
I'm getting a raw H.264 stream from a camera and I need to play it using GStreamer.
At first I tried to save the stream to a file (using my own application, which just writes the stream to a file) and play it back using filesrc:
gst-launch-1.0 filesrc location="file path" ! video/x-h264 ! h264parse ! avdec_h264 ! videoconvert ! videorate ! video/x-raw,framerate=30/1 ! autovideosink
That works OK. Then I tried to play from udpsrc:
gst-launch-1.0 udpsrc port=1234 ! video/x-h264 ! h264parse ! avdec_h264 ! videoconvert ! videorate ! video/x-raw,framerate=30/1 ! autovideosink
And I got the following error (after starting the stream from the camera):
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
Additional debug info:
gstcapsfilter.c(453): gst_capsfilter_prepare_buf (): /GstPipeline:pipeline0/GstCapsFilter:capsfilter0:
Output caps are unfixed: video/x-h264, width=(int)[ 1, 8192 ], height=(int)[ 1, 8192 ], framerate=(fraction){ 30/1, [ 0/1, 2147483647/1 ] }
Execution ended after 0:00:04.234834646
I'm new to GStreamer. Please help me.
The solution:
gst-launch-1.0 -v udpsrc port=5000 ! h264parse ! avdec_h264 ! videoconvert ! videorate ! video/x-raw,framerate=30/1 ! autovideosink
For a real-time stream:
gst-launch-1.0 -v udpsrc port=5000 ! h264parse ! avdec_h264 ! videoconvert ! autovideosink
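An alternative (a sketch I have not tested) is to keep explicit caps by setting them directly on udpsrc, assuming the camera sends an Annex-B byte-stream:
gst-launch-1.0 -v udpsrc port=5000 caps="video/x-h264,stream-format=byte-stream,alignment=nal" ! h264parse ! avdec_h264 ! videoconvert ! autovideosink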
I use the following pipeline to play my video on screen:
gst-launch-1.0 filesrc location=01_189_libxvid_1920x1080_6M.mp4 ! qtdemux ! mpeg4videoparse ! omxmpeg4videodec ! videobalance brightness=100 ! video/x-raw,format=BGRA ! waylandsink --gst-debug=*:2
but now, instead of playing it directly, I want to encode it and save it to a folder. Please suggest a pipeline.
It should be something like this (an example with the H.264 codec):
gst-launch-1.0 -e --gst-debug=3 \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux \
! mpeg4videoparse \
! omxmpeg4videodec \
! queue \
! x264enc \
! qtmux \
! filesink location="/path/output/sample_out.mp4"
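If the input file also has audio you want to keep, a sketch (untested, assuming the MP4 carries AAC audio that can be passed through with aacparse) adds a second branch from qtdemux:
gst-launch-1.0 -e --gst-debug=3 \
filesrc location="/path/input/sample_in.mp4" \
! qtdemux name=demux \
demux. ! mpeg4videoparse ! omxmpeg4videodec ! queue ! x264enc ! queue ! mux. \
demux. ! queue ! aacparse ! mux. \
qtmux name=mux \
! filesink location="/path/output/sample_out.mp4"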
I am trying to add a textoverlay to an MP4 movie with gstreamer-0.10. Yes, I know it's old, but I only need to make a few changes to the MP4. I know how to do it with gst-launch-0.10:
gst-launch-0.10 filesrc location=input.mp4 name=src ! decodebin name=demuxer \
demuxer. ! queue ! textoverlay text="My Text" ! x264enc ! muxer. \
demuxer. ! queue ! audioconvert ! voaacenc ! muxer. \
mp4mux name=muxer ! filesink location=output.mp4
This creates a movie with the text overlay for me. But now I need to add the textoverlay to the following bin in C++; this is my working pipeline creating an MP4:
QGst::BinPtr m_encBin = QGst::Bin::fromDescription(
"filesrc location=\""+path+"videoname.raw.mkv\" ! queue ! matroskademux name=\"demux\" "
"demux.video_00 ! queue ! ffmpegcolorspace ! queue ! x264enc ! queue ! mux.video_00 "
"demux.audio_00 ! queue ! audioconvert ! queue ! faac ! queue ! mux.audio_00 "
"mp4mux name=\"mux\" ! queue ! filesink name=\"filesink\" sync=false ",
QGst::Bin::NoGhost);
Does anyone know how I can add the textoverlay to the bin?
Cheers, Fredrik
I think you should add textoverlay and queue elements to your pipeline description, between the queue that follows ffmpegcolorspace and the x264enc element:
QGst::BinPtr m_encBin = QGst::Bin::fromDescription(
"filesrc location=\""+path+"videoname.raw.mkv\" ! queue ! matroskademux name=\"demux\" "
"demux.video_00 ! queue ! ffmpegcolorspace ! queue ! textoverlay text=\"My Text\" ! queue ! x264enc ! queue ! mux.video_00 "
"demux.audio_00 ! queue ! audioconvert ! queue ! faac ! queue ! mux.audio_00 "
"mp4mux name=\"mux\" ! queue ! filesink name=\"filesink\" sync=false ",
QGst::Bin::NoGhost);
I think you received a downvote because you didn't try to understand the GStreamer pipeline description and asked for a ready-to-use solution.
This command adds text to the video, but the audio is missing from the output MP4 file:
gst-launch-1.0 filesrc location=input.mp4 name=src ! decodebin ! textoverlay text="My Text" ! x264enc ! h264parse ! mp4mux ! filesink location=output.mp4
How can I fix this, so that the audio is preserved?
Thanks
This works:
gst-launch-1.0 \
filesrc location=input.mp4 name=src \
! decodebin name=demuxer \
demuxer. ! queue \
! textoverlay text="My Text" \
! x264enc ! muxer. \
demuxer. ! queue \
! audioconvert ! voaacenc ! muxer. \
mp4mux name=muxer \
! filesink location=output.mp4
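If mp4mux complains about the H.264 stream format, a variant (an untested sketch) keeps h264parse between x264enc and the muxer, as in the single-branch pipeline from the question:
gst-launch-1.0 \
filesrc location=input.mp4 name=src \
! decodebin name=demuxer \
demuxer. ! queue \
! textoverlay text="My Text" \
! x264enc ! h264parse ! muxer. \
demuxer. ! queue \
! audioconvert ! voaacenc ! muxer. \
mp4mux name=muxer \
! filesink location=output.mp4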