I've got a web camera (made in China) that can stream both MJPEG and H.264. I want to save the stream with GStreamer: I can do it with MJPEG, but not with H.264. I know the camera produces an H.264 stream; the vendor's software shows it, and when I save video with that software, the result is H.264.
General
Complete name : C:\RecordFiles\20140319\IPCAM1\20140319_232127_141.avi
Format : AVI
Format/Info : Audio Video Interleave
File size : 4.11 MiB
Duration : 7s 280ms
Overall bit rate : 4 740 Kbps
Video
ID : 0
Format : AVC
Format/Info : Advanced Video Codec
Format profile : Baseline@L3.1
Format settings, CABAC : No
Format settings, ReFrames : 1 frame
Codec ID : h264
Duration : 7s 280ms
Bit rate : 4 733 Kbps
Width : 1 280 pixels
Height : 720 pixels
Display aspect ratio : 16:9
Frame rate : 25.000 fps
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.205
Stream size : 4.11 MiB (100%)
Audio
ID : 1
Format : PCM
Format settings, Endianness : Little
Format settings, Sign : Signed
Codec ID : 1
Duration : 7s 280ms
Bit rate mode : Constant
Bit rate : 128 Kbps
Channel count : 1 channel
Sampling rate : 8 000 Hz
Bit depth : 16 bits
Stream size : 114 KiB (3%)
Alignment : Aligned on interleaves
So that is all the data I have; if somebody can help me tune GStreamer, I will be glad.
This works:
gst-launch souphttpsrc location="http://shonlinecam1.dyndns.org:81/videostream.cgi?loginuse=user&loginpas=123" \
! jpegparse ! jpegdec \
! x264enc bitrate=512 key-int-max=45 speed-preset=superfast threads=1 \
! video/x-h264,stream-format=avc,alignment=au,profile=constrained-baseline \
! h264parse ! fakesink
This doesn't work:
gst-launch souphttpsrc \
is-live=true \
location="http://shonlinecam1.dyndns.org:81/livestream.cgi?user=user&pwd=123&streamid=0" \
! h264parse ! decodebin2 ! fakesink
All links are real; please help.
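One hedged thing worth trying (assuming the camera really does send raw H.264 over HTTP, which the vendor files suggest): skip decoding entirely, hint the caps so h264parse knows what the bytes are, and mux straight to disk. If h264parse still cannot lock on, the camera is probably wrapping each frame in a proprietary header that would need to be stripped first. The output filename here is just an example.

```shell
# Sketch only: force video/x-h264 caps on the HTTP byte stream so h264parse
# can try to find NAL units, then mux to Matroska without re-encoding.
gst-launch souphttpsrc is-live=true \
    location="http://shonlinecam1.dyndns.org:81/livestream.cgi?user=user&pwd=123&streamid=0" \
  ! video/x-h264 \
  ! h264parse \
  ! matroskamux \
  ! filesink location=out.mkv
```

If this produces a playable file, the stream was clean H.264 and the original failure was just caps negotiation; if not, inspect a captured chunk of the stream in a hex editor for vendor framing before the NAL start codes.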
Related
I have a source of progressive video that I want to convert to interlaced fields. From one progressive frame, I want to generate two interlaced fields, each containing either the odd or the even lines.
In GStreamer terminology, each input buffer contains one progressive frame that I want to convert into two buffers, each containing one field.
I was expecting the first pipeline below to produce 20 files of size 640 * 240 * 2 and the second one to produce 10 files of size 640 * 480 * 2.
But they both produce 10 files of size 640 * 480 * 2, so the interlace element seems to do nothing.
Where am I going wrong here?
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 ! video/x-raw,format=YUY2,width=640,height=480 ! interlace ! multifilesink location=c:\x_%d.raw
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 ! video/x-raw,format=YUY2,width=640,height=480 ! multifilesink location=c:\x_%d.raw
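One caveat worth checking (a hedged reading of the interlace element, not a guaranteed fix): interlace weaves fields into full-height interlaced frames; it never emits half-height field buffers, so 640 * 240 * 2 files are not an output it can produce. Its default field-pattern is a telecine pattern, which may silently fail to apply to these caps; with field-pattern=1:1 each progressive input frame contributes one field, so two input frames should weave into one full-height interlaced frame (10 buffers -> 5 files of 640 * 480 * 2):

```shell
# Sketch: explicit framerate plus field-pattern=1:1 so the pattern applies;
# expect full-height interlaced frames at half the input frame count.
gst-launch-1.0 -v videotestsrc pattern=ball num-buffers=10 \
  ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 \
  ! interlace field-pattern=1:1 \
  ! multifilesink location=c:\x_%d.raw
```

To truly get separate odd/even field buffers you would need to split frames yourself (e.g. in an appsink/appsrc pair), since no stock element does it.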
I have a simple program to read an MP4 file, like this:
#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/videoio/videoio.hpp>

int main()
{
    cv::VideoCapture file("test.mp4");
    cv::Mat frame;
    while (true)
    {
        if (!file.read(frame) || frame.empty())  // stop at end of stream
            break;
        cv::imshow("Preview", frame);
        cv::waitKey(42);
    }
    file.release();
    return 0;
}
This works fine. But when I integrate this code into another project I'm working on, the image frame shows with the wrong aspect ratio.
Correct (Side by Side):
Wrong (Only show one side and aspect ratio is wrong):
I'm running on Windows with VS2019. I have removed all other code from my existing project, leaving just the above. The only difference I can think of is the includes and linker settings. I use ceres, glog, d3d11, realsense2, VTK, pcl, eigen3, and OpenXR in the project. Could any of that affect how OpenCV behaves? Or what else might the problem be?
I've already tried setting the frame width and height on VideoCapture, and it doesn't help.
I've tested both OpenCV 4.1 and 4.6.
When accessing frame.cols and frame.rows, I get the correct resolution.
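Since the same code behaves differently in two projects, a plausible suspect is that OpenCV picks a different videoio backend (MSMF vs. FFmpeg) depending on what else is linked. OpenCV exposes environment variables to debug and steer this; `your_app.exe` below is a hypothetical stand-in for your binary. On Windows cmd, use `set NAME=value` instead of `export`.

```shell
# Print which videoio backend OpenCV selects for the capture
# (these are real OpenCV environment variables: OPENCV_VIDEOIO_DEBUG and
#  the OPENCV_VIDEOIO_PRIORITY_<BACKEND> family).
export OPENCV_VIDEOIO_DEBUG=1
# Demote Media Foundation so the FFmpeg backend is tried first:
export OPENCV_VIDEOIO_PRIORITY_MSMF=0
# ./your_app.exe   # hypothetical binary name: re-run your program here
echo "OPENCV_VIDEOIO_DEBUG=$OPENCV_VIDEOIO_DEBUG"
```

If the broken project logs a different backend than the working one, forcing the backend in code (e.g. `cv::VideoCapture file("test.mp4", cv::CAP_FFMPEG);`) is the direct fix.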
UPDATE
The metadata of the file I'm trying to read is below. It is a side-by-side 3D video, and it displays correctly in players such as VLC.
mediainfo test.mp4
General
Complete name : test.mp4
Format : MPEG-4
Format profile : Base Media
Codec ID : isom (isom/iso2/avc1/mp41)
File size : 181 MiB
Duration : 31 s 339 ms
Overall bit rate : 48.4 Mb/s
Writing application : Lavf58.20.100
Video
ID : 1
Format : AVC
Format/Info : Advanced Video Codec
Format profile : High@L5.1
Format settings : CABAC / 1 Ref Frames
Format settings, CABAC : Yes
Format settings, Reference frames : 1 frame
Format settings, GOP : M=1, N=30
Codec ID : avc1
Codec ID/Info : Advanced Video Coding
Duration : 31 s 317 ms
Bit rate : 48.0 Mb/s
Width : 3 840 pixels
Height : 1 080 pixels
Display aspect ratio : 16:9
Original display aspect ratio : 3.556
Frame rate mode : Variable
Frame rate : 60.000 FPS
Minimum frame rate : 59.920 FPS
Maximum frame rate : 60.080 FPS
Color space : YUV
Chroma subsampling : 4:2:0
Bit depth : 8 bits
Scan type : Progressive
Bits/(Pixel*Frame) : 0.193
Stream size : 179 MiB (99%)
Title : SStar Video
Codec configuration box : avcC
Audio
ID : 2
Format : AAC LC
Format/Info : Advanced Audio Codec Low Complexity
Codec ID : mp4a-40-2
Duration : 31 s 339 ms
Bit rate mode : Constant
Bit rate : 140 kb/s
Channel(s) : 2 channels
Channel layout : L R
Sampling rate : 48.0 kHz
Frame rate : 46.875 FPS (1024 SPF)
Compression mode : Lossy
Stream size : 534 KiB (0%)
Title : SStar Audio
Default : Yes
Alternate group : 1
What I hope will be a quick question: does anybody know how to specify the output format of the v4l2convert plugin in GStreamer? Slightly more specifically, the stride alignment or bytes per line, I don't mind which.
To give the full details: I'm playing around with an embedded video processing platform and wish to simultaneously connect multiple outputs to a single input using a GStreamer tee element. The problem is that different outputs need different stride alignments / bytes per line.
I can set the stride alignment on the v4l2src plugin I'm using to read the input device, and can find a combination that "works" for all outputs. But under the hood GStreamer is being helpful and instantiating a buffer copy, to perform the re-alignment, using memcpy, and thus my CPU utilisation goes through the roof.
My proposed solution is to use a hardware DMA loopback device (a V4L2 mem2mem device), controlled by a v4l2convert element, to provide a simple, low-CPU-load way of realigning the data.
I've tried this setup with several pipelines and have been monitoring it with v4l2-ctl, and it appears able to do what I want. If I change the stride-align of the initial v4l2src I can see GStreamer change the format of the data written into the mem2mem device to match. However, the capture/read format always remains at N bytes-per-pixel x number-of-pixels-per-line bytes per line.
Format Video Capture Multiplanar:
Width/Height : 1920/1080
Pixel Format : 'NV16' (Y/CbCr 4:2:2)
Field : None
Number of planes : 1
Flags :
Colorspace : SMPTE 170M
Transfer Function : Rec. 709
YCbCr/HSV Encoding: ITU-R 601
Quantization : Limited Range
Plane 0 :
Bytes per Line : 1920
Size Image : 4147200
Format Video Output Multiplanar:
Width/Height : 1920/1080
Pixel Format : 'NV16' (Y/CbCr 4:2:2)
Field : None
Number of planes : 1
Flags :
Colorspace : SMPTE 170M
Transfer Function : Rec. 709
YCbCr/HSV Encoding: ITU-R 601
Quantization : Limited Range
Plane 0 :
Bytes per Line : 2048
Size Image : 4423680
vs.
Format Video Capture Multiplanar:
Width/Height : 1920/1080
Pixel Format : 'NV16' (Y/CbCr 4:2:2)
Field : None
Number of planes : 1
Flags :
Colorspace : SMPTE 170M
Transfer Function : Rec. 709
YCbCr/HSV Encoding: ITU-R 601
Quantization : Limited Range
Plane 0 :
Bytes per Line : 1920
Size Image : 4147200
Format Video Output Multiplanar:
Width/Height : 1920/1080
Pixel Format : 'NV16' (Y/CbCr 4:2:2)
Field : None
Number of planes : 1
Flags :
Colorspace : SMPTE 170M
Transfer Function : Rec. 709
YCbCr/HSV Encoding: ITU-R 601
Quantization : Limited Range
Plane 0 :
Bytes per Line : 1920
Size Image : 4147200
Is there a way to change v4l2convert's capture/source format properties from within my GStreamer pipeline declaration? gst-inspect-1.0 doesn't show any equivalent of v4l2src's stride-align property on v4l2convert, and caps filters like video/x-raw don't appear to be able to provide what I need (I accept this may be wrong, as I'm very much a noob in this respect).
The best I've found is the extra-controls property, but I can find very little documentation on it, and what I have found suggests it is for setting a V4L2 device's "physical" controls rather than format information, so I'm probably barking up the wrong tree there anyway.
My test pipeline is:
v4l2src name=videosrc device=/dev/video0 ! video/x-raw, width=1920, height=1080, format=NV16, framerate=30/1 ! queue ! v4l2convert device=/dev/video2 disable-passthrough=true capture-io-mode=4 output-io-mode=4 import-buffer-alignment=true ! queue ! kmssink sync=false fullscreen-overlay=true
If it helps, what I want is to provide video at 1920x1080 with 1920 bytes per line (GStreamer does this quite happily for me), but set v4l2convert's capture/source side to 1920x1080 with 2048 bytes per line, as the problem sink device needs a stride alignment of 256.
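One hedged probe worth running first: ask the driver directly, from userspace, whether it will accept a 2048-byte stride on the capture side at all. v4l2-ctl's --set-fmt-video accepts a bytesperline field; whether the driver honours 2048 or rounds it back to 1920 is driver-specific. Note that GStreamer will renegotiate the format when the pipeline starts, so this is a capability check, not a persistent setting.

```shell
# Sketch: request a padded stride on the m2m device's capture queue,
# then read back what the driver actually kept.
v4l2-ctl -d /dev/video2 \
  --set-fmt-video=width=1920,height=1080,pixelformat=NV16,bytesperline=2048
v4l2-ctl -d /dev/video2 --get-fmt-video
```

If the driver refuses 2048 here, no GStreamer property will make it work; if it accepts it, the remaining problem is purely how to express that stride in caps negotiation.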
Thanks
I am working on an application that needs to record my screen at X fps (the more, the better). I'm currently using GStreamer, as it's a command-line tool and very powerful.
My pipeline is :
gst-launch-1.0 -e ximagesrc ! \
video/x-raw, framerate=120/1 ! videoconvert ! \
jpegenc ! avimux ! filesink location=cap.avi
edit: if you want to run this, you will probably want to add endx and endy parameters to ximagesrc (my video is usually 300x100)
This works, with a flaw: the codec is right and the file claims 120 fps, but it takes the 60 frames from each of the first two real seconds to build one second at 120 fps, so playback runs at half speed.
I would like to know if my pipeline is erroneous or if ximagesrc is capped at 60 fps. If so, is there a way to bypass that? Thanks.
"but it takes the 60 frames of the first and second video to build one sec at 120 fps" -- I find that sentence hard to understand.
Anyway: by default the Linux desktop refreshes at 60 Hz. So if you capture at 120 Hz you will capture the same desktop image twice. If you really want to capture 120 distinct frames per second, you need to find a way to run your desktop at 120 Hz.
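If the goal is only that the recorded file be timestamped correctly at 120 fps (accepting that a 60 Hz desktop can give at most 60 distinct images per second), a hedged variant of the pipeline inserts videorate to honestly duplicate frames up to the target rate instead of stretching time:

```shell
# Sketch: videorate duplicates/drops frames so the output is genuinely
# paced at 120 fps; each 60 Hz desktop frame simply appears twice.
gst-launch-1.0 -e ximagesrc use-damage=false \
  ! videoconvert ! videorate \
  ! video/x-raw,framerate=120/1 \
  ! jpegenc ! avimux ! filesink location=cap.avi
```

use-damage=false makes ximagesrc push whole frames on a clock rather than only damaged regions, which tends to give steadier pacing for this kind of capture.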
I have an mpegts video file encoded by a silicondust hdhomerun tuner. The pipeline I have currently:
gst-launch-0.10 filesrc location=filename.mpg ! decodebin name=decoder decoder. ! queue ! audioconvert ! audioresample ! alsasink device=front decoder. ! deinterlace ! ffmpegcolorspace ! glimagesink
Works well except that it does not capture all of the audio channels. I found this out tonight when I recorded a preseason football game and the announcers were not audible while the ref and the crowd noise was. This same file plays fine with all audio channels in xine.
Here is the output of ffmpeg, which describes the streams:
Stream #0:0[0x31]: Video: mpeg2video (Main) ([2][0][0][0] / 0x0002), yuv420p, 1280x720 [SAR 1:1 DAR 16:9], 14950 kb/s, 64.96 fps, 59.94 tbr, 90k tbn, 119.88 tbc
Stream #0:1[0x34](eng): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, 5.1(side), s16, 448 kb/s
Stream #0:2[0x35](spa): Audio: ac3 (AC-3 / 0x332D4341), 48000 Hz, stereo, s16, 192 kb/s (visual impaired)
How can I get all audio channels to playback from a surround sound mpeg in gstreamer?
Extra info:
linux OS
alsa sound system
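One hedged way to take decodebin's stream selection out of the picture is to demux explicitly and decode the AC-3 track yourself, so the 5.1 English stream is always the one that gets linked. This is a sketch against GStreamer 0.10 element names (mpegtsdemux, mpeg2dec, a52dec from gst-plugins-ugly); your installed plugin set may differ:

```shell
# Sketch: explicit demux so the AC-3 audio branch is always built,
# rather than relying on decodebin's automatic (and here, flaky) selection.
gst-launch-0.10 filesrc location=filename.mpg ! mpegtsdemux name=d \
  d. ! queue ! mpeg2dec ! deinterlace ! ffmpegcolorspace ! glimagesink \
  d. ! queue ! a52dec ! audioconvert ! audioresample ! alsasink device=front
```

Note that with a stereo "front" ALSA device the 5.1 stream will be downmixed by audioconvert; to hear all six channels discretely you would need a surround-capable ALSA device string instead of device=front.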
Update:
This problem is actually quite strange. Randomly, it plays back all the required channels, and I'll think I have found the solution, but then the newfound solution stops working and some of the audio channels are missing again.
Even playbin2 is randomly including and excluding these channels:
gst-launch-0.10 -v playbin2 uri=file:filename.mpg
I just submitted a bug report on bugzilla.gnome.org after determining that the intermittent behavior was also present using playbin2.