I want to do some kind of image processing with GStreamer in C, where I read a couple of images and then concatenate them all into one big image (the images in my program are options that the user can select later), and I don't want to use any external library to do that.
Any suggestions would be great.
So basically you want to do compositing of the images, i.e. given images A, B, C and D, produce this image for instance:
 _______________
|       |       |
|   A   |   B   |
|_______|_______|
|       |       |
|   C   |   D   |
|_______|_______|
?
If so, videomixer will be a good choice; I will edit my answer if this is indeed what you want.
Have a nice day !
Edit: As this is what you requested, here is an example of how to composite two images of different sizes with videomixer:
gst-launch-1.0 uridecodebin uri=file:///home/meh/Pictures/questions.jpg ! videoscale \
  ! video/x-raw, width=320, height=240 ! imagefreeze \
  ! videomixer name=m sink_1::xpos=320 ! autovideosink \
  uridecodebin uri=file:///home/meh/Pictures/testsrc.png ! videoscale \
  ! video/x-raw, width=320, height=240 ! imagefreeze ! m.
Explanation:
We create two decoders for the images, resize them with videoscale to an arbitrary size (here 320 x 240), freeze them and send them to videomixer. videomixer has the x position of sink_1 set to 320, which offsets the first image so that the second one appears as well.
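Since you mentioned doing this from C without extra libraries, here is a minimal, untested sketch that simply hands the same pipeline description to gst_parse_launch (the file URIs and sizes are placeholders you would replace):

#include <gst/gst.h>

int main (int argc, char *argv[])
{
  GError *error = NULL;
  GstElement *pipeline;
  GstBus *bus;
  GstMessage *msg;

  gst_init (&argc, &argv);

  /* Same compositing idea as the gst-launch line above, built from a
   * description string. The two file URIs are placeholders. */
  pipeline = gst_parse_launch (
      "uridecodebin uri=file:///path/to/a.jpg ! videoscale ! "
      "video/x-raw,width=320,height=240 ! imagefreeze ! "
      "videomixer name=m sink_1::xpos=320 ! autovideosink "
      "uridecodebin uri=file:///path/to/b.png ! videoscale ! "
      "video/x-raw,width=320,height=240 ! imagefreeze ! m.",
      &error);
  if (pipeline == NULL) {
    g_printerr ("Failed to build pipeline: %s\n", error->message);
    return 1;
  }

  gst_element_set_state (pipeline, GST_STATE_PLAYING);

  /* Block until an error or EOS is posted on the bus. */
  bus = gst_element_get_bus (pipeline);
  msg = gst_bus_timed_pop_filtered (bus, GST_CLOCK_TIME_NONE,
      GST_MESSAGE_ERROR | GST_MESSAGE_EOS);
  if (msg != NULL)
    gst_message_unref (msg);

  gst_element_set_state (pipeline, GST_STATE_NULL);
  gst_object_unref (bus);
  gst_object_unref (pipeline);
  return 0;
}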
If you plan on dynamic support for this, http://gstreamer.freedesktop.org/data/doc/gstreamer/head/gstreamer-editing-services/html/ch01.html , GES will be a good choice for you, feel free to come and drop by in #pitivi on freenode, my nickname is Mathieu_Du if you want to ping me.
Disclaimer : tested with gst 1.3, should work with the 1.X series, not so sure about 0.10.
Use multifilesrc to concatenate many files into one file.
http://gstreamer.freedesktop.org/wiki/MultiFileSrc
Related
I have a series of ts files (H265) which are part of an m3u8 manifest and are fed into the pipeline through fdsrc. I use the following pipeline to transcode them to H264 so they can be played with hls.js in a web browser.
cat 2021-06-30T00-55-41Z_2000000.ts | gst-launch-1.0 -q mpegtsmux name=mux ! fdsink fd=1 fdsrc ! tsdemux name=demux demux. ! queue ! h265parse ! nvh265dec ! videoconvert ! videoscale ! video/x-raw,width=640,height=360 ! nvh264enc ! mux.
The individual ts segments are transcoded successfully and can be played.
However, the DTS is out of alignment, and when these ts segments are played as part of the HLS manifest, playback fails because the DTS is out of order.
[mpegts # 0x7fb69100a400] DTS 6496420096 < 6496446847 out of order
[hls # 0x7fb69580ea00] DTS 6496420096 < 6496446847 out of order
In FFmpeg we have copyts to preserve the timestamps.
Is there something similar in GStreamer to preserve the timestamps? Or at least generate a timestamp from the current time so that the player doesn't complain?
I tried fdsrc do-timestamp=true but that didn't work.
I appreciate any help with this.
Best
Currently I have a setup like this.
my-app | gst-launch-1.0 -e fdsrc ! \
videoparse format=GST_VIDEO_FORMAT_BGR width=640 height=480 ! \
videoconvert ! 'video/x-raw, format=I420' ! x265enc ! h265parse ! \
matroskamux ! filesink location=my.mkv
From my-app I am streaming raw BGR frame buffers to gst. How can I also pass presentation timestamps (PTSs) for those frames? I have somewhat full control over my-app. I can open other pipes to gst from it.
I know I have the option to use gstreamer C/C++ API or write a gstreamer plugin, but I was trying to avoid this.
I guess you can set a framerate for the videoparse element. You can also try do-timestamp=true for the fdsrc - maybe it requires a combination of both.
If you have the PTS in my-app, you would probably need to wrap each frame and its PTS in a real GstBuffer and use gdppay and gdpdepay as the payload format across the pipe.
For example, if your my-app dumped the images in the following format:
https://github.com/GStreamer/gstreamer/blob/master/docs/random/gdp
(not sure how recent this info document is)
You could receive the data with the following pipeline:
fdsrc ! gdpdepay ! videoconvert ! ..
There is no need to specify resolution and format either, as they are part of the protocol too. And you will have the PTS as well, if it was set.
If you can use the GStreamer library in my-app, you could use a pipeline like this:
appsrc ! gdppay ! fakesink dump=true
And you would push your image buffers with PTS to the appsrc.
See https://github.com/GStreamer/gst-plugins-bad/tree/master/gst/gdp for some examples how gdp is used as a protocol.
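For reference, a minimal sketch of the push side, assuming a 640x480 BGR frame pushed into an appsrc taken from the pipeline above (the frame size, frame rate and the way the appsrc is obtained are assumptions for the example):

#include <gst/gst.h>
#include <gst/app/gstappsrc.h>

/* Push one raw BGR frame into an appsrc, carrying an explicit PTS.
 * The appsrc element would come from your pipeline, e.g. via
 * gst_bin_get_by_name(); 640x480x3 bytes per frame is assumed. */
static GstFlowReturn
push_frame (GstElement *appsrc, const guint8 *bgr_data, GstClockTime pts)
{
  gsize size = 640 * 480 * 3;
  GstBuffer *buf = gst_buffer_new_allocate (NULL, size, NULL);

  gst_buffer_fill (buf, 0, bgr_data, size);
  GST_BUFFER_PTS (buf) = pts;                  /* presentation timestamp */
  GST_BUFFER_DURATION (buf) = GST_SECOND / 30; /* assuming ~30 fps */

  /* gst_app_src_push_buffer() takes ownership of the buffer. */
  return gst_app_src_push_buffer (GST_APP_SRC (appsrc), buf);
}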
I have two AXIS IP cameras streaming H264 over RTSP/RTP. Both cameras are set to synchronize with the same NTP server, so I assume both cameras will have the exact same clock (maybe with a minor difference of a few ms).
In my application, both cameras are pointing at the same view, and it is required to process images from both cameras that were captured at the same time. Thus, I want to synchronize the image capture using GStreamer.
I have tried invoking the two pipelines separately on different command prompts, but the videos are 2-3 seconds apart.
gst-launch rtspsrc location=rtsp://192.168.16.136:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam1_video_%d.mp4
gst-launch rtspsrc location=rtsp://192.168.16.186:554/live ! rtph264depay ! h264parse ! splitmuxsink max-size-time=100000000 location=cam2_video_%d.mp4
Can someone suggest a gstreamer pipeline to synchronize both H264 streams and record them into separate video files?
Thanks!
ARM
I am able to launch a single pipeline using gst-launch as shown below. It shows a good improvement in captured frame synchronization compared to launching two pipelines; most of the time they differ by 0-500 msec. However, I still want to synchronize them to better than 150 msec accuracy.
rtspsrc location=rtsp://192.168.16.136:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_136_%d.mp4 \
rtspsrc location=rtsp://192.168.16.186:554/axis-media/media.amp?videocodec=h264 \
! rtph264depay ! h264parse \
! splitmuxsink max-size-time=10000000000 location=axis/video_186_%d.mp4
I would appreciate it if someone could point out other ideas!
~Arm
What do you mean by synchronize? If you record to separate video files, you do not need any synchronization, as this is going to totally separate them. Each RT(S)P stream will contain different timestamps. If you want to align them somehow to the same time (I mean real human time, like "both should start from 15:00"), then you have to configure the cameras that way somehow (this is just an idea).
Also, you did not tell us what's inside those rtp/rtsp streams (is it MPEG-TS or pure IP, etc.). So I will give an example of MPEG-TS encapsulated RTP streams.
We will go step by step:
Suppose this is one camera, just to demonstrate how it may look:
gst-launch-1.0 -v videotestsrc ! videoconvert ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=127.0.0.1 port=8888
Then this would be the receiver (it must use rtpmp2tdepay; we are encapsulating the stream inside an MPEG-TS container):
gst-launch-1.0 udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! decodebin ! videoconvert ! autovideosink
If you test this with your camera, the autovideosink means that a new window will pop up displaying your camera.
Then you can try to store it in a file; we will use mp4mux.
So for the same camera input we do:
gst-launch-1.0 -e udpsrc port=8888 caps=application/x-rtp\,\ media\=\(string\)video\,\ encoding-name\=\(string\)MP2T ! rtpmp2tdepay ! tsdemux ! h264parse ! mp4mux ! filesink location=test.mp4
Explanation: We do not decode and re-encode (a waste of processing power), so I just demux the MPEG-TS stream and then, instead of decoding the H264, I just parse it for mp4mux, which accepts video/x-h264.
Now you could use the same pipeline for each camera, or you can just copy-paste all the elements into one pipeline.
Now, as you did not provide any - at least partial - attempt to make something out of this, that part is going to be your homework :) Or make yourself clearer about the synchronization, as I do not understand it.
UPDATE
After your update to the question this answer is not very useful, but I will keep it here as a reference. I have no idea how to synchronize that.
Another piece of advice: try to look at the timestamps after udpsrc; maybe they are already synchronized. In that case you can use streamsynchronizer to synchronize the two streams, or maybe a video/audio mixer:
gst-launch-1.0 -v udpsrc port=8888 ! identity silent=false ! fakesink
This should print the timestamps (PTS, DTS, Duration ..):
/GstPipeline:pipeline0/GstIdentity:identity0: last-message = chain ******* (identity0:sink) (1328 bytes, dts: 0:00:02.707033598, pts:0:00:02.707033598, duration: none, offset: -1, offset_end: -1, flags: 00004000 tag-memory ) 0x7f57dc016400
Compare the PTS of each stream. Maybe you could combine the two udpsrc elements in one pipeline and put an identity (with a different name=something1) after each udpsrc to make them start reception together.
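If you end up doing this from code rather than gst-launch, a small sketch of reading each buffer's PTS via identity's "handoff" signal could look like the following (the element names and the way the pipeline is built are assumptions for the example):

#include <gst/gst.h>

/* Called for every buffer that passes through the identity element;
 * prints the element name and the buffer's PTS so the two streams
 * can be compared. */
static void
on_handoff (GstElement *identity, GstBuffer *buffer, gpointer user_data)
{
  g_print ("%s PTS: %" GST_TIME_FORMAT "\n",
           GST_ELEMENT_NAME (identity),
           GST_TIME_ARGS (GST_BUFFER_PTS (buffer)));
}

/* After building the pipeline (names are hypothetical): */
/*   GstElement *id1 = gst_bin_get_by_name (GST_BIN (pipeline), "something1"); */
/*   g_object_set (id1, "signal-handoffs", TRUE, NULL);                        */
/*   g_signal_connect (id1, "handoff", G_CALLBACK (on_handoff), NULL);         */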
HTH
I'm trying to use a jpg file as a virtual webcam for Skype (or similar). The image file is reloaded every few seconds, and the pipeline should always transmit the newest image.
I started by creating a pipeline like this
gst-launch filesrc location=~/image.jpg ! jpegdec ! ffmpegcolorspace ! freeze ! v4l2sink device=/dev/video2
but it only streams the first image and ignores newer versions of the image file. I read something about concat and dynamically changing the pipeline, but I couldn't get that working.
Could you give me any hints on how to get this working?
Dynamically refreshing the input file is NOT possible (at least with filesrc).
Besides, your sample uses freeze, which will prevent the image from changing.
One possible method is using multifilesrc and videorate instead.
multifilesrc can read many files (with a provided pattern similar to scanf/printf), and videorate can control the speed.
For example, you create 100 images named image0000.jpg, image0001.jpg, ..., image0100.jpg. Then play them continuously, with each image shown for 1 second:
gst-launch multifilesrc location=~/image%04d.jpg start-index=0 stop-index=100 loop=true caps="image/jpeg,framerate=\(fraction\)1/1" ! jpegdec ! ffmpegcolorspace ! videorate ! v4l2sink device=/dev/video2
Change the number of images with stop-index=100, and change the speed with caps="image/jpeg,framerate=\(fraction\)1/1".
For more information about these elements, refer to their documents at gstreamer.freedesktop.org/documentation/plugins.html
EDIT: It looks like you are using GStreamer 0.10, not 1.x.
In that case, please refer to the old documentation for multifilesrc and videorate.
You can use a general file name with multifilesrc if you add some parameter adjustments and pair it with an identity on a delay. It's a bit fragile but it'll do fine for a temporary one-off program as long as you keep your input images the same dimensions and format.
gst-launch-1.0 multifilesrc loop=true start-index=0 stop-index=0 location=/tmp/whatever ! decodebin ! identity sleep-time=1000000 ! videoconvert ! v4l2sink
I need to capture a video using a webcam and output a single image for each video frame captured.
I have tried using gstreamer with a multifilesink, e.g.:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
However, this does not actually output every frame, meaning that if I record for 2 seconds at 30 fps, I don't get 60 images. I'm assuming this is because the encoding can't go that fast, so I need another method.
I figured it might work if I have one pipeline capture a video, and a separate pipeline convert that video to frames, but I don't know enough about codecs. Do I need to encode the video to a file like h264 or mp4 just to then decode it again?
Does anyone have any thoughts or suggestions? Keep in mind that I need to be able to do this in code, not using an application like Adobe Premiere, for example.
Thanks!
You could simply add a queue in there like this:
gst-launch v4l2src device=/dev/video1 ! video/x-raw-yuv,framerate=30/1 ! queue ! ffmpegcolorspace ! pngenc ! multifilesink location="frame%d.png"
This should make sure the video capture is allowed to run at 30 fps, while writing to disk can happen at its own pace. Just be aware that the queue will grow to quite a large size if you leave this setup running for too long.
The solution I have to offer doesn't use GStreamer but FFmpeg; I hope that's fine for you too.
As described in this forum post, you can use something like this:
ffmpeg -i movie.avi frame%d.png
to get a png/jpg image for each frame of the video.
But depending on the input file you use, you might have to convert it to an MPEG vid before running ffmpeg.
Note:
If you want leading zeroes in your image file names, use %05d instead (for 5-digit numbers, like in C's printf()):
ffmpeg -i movie.avi frame%05d.png
The output file format depends on the file extension, so you might use .jpg, .bmp, ... instead of .png.
I ended up doing this in two parts.
Write video to file.
gst-launch v4l2src device=/dev/video2 ! video/x-raw-yuv,framerate=30/1 ! xvidenc ! queue ! avimux ! filesink location=test.avi
Post process.
gst-launch-1.0 --gst-debug-level=3 filesrc location=test.avi ! decodebin ! queue ! autovideoconvert ! pngenc ! multifilesink location="frame%d.png"