My pipeline:
gst-launch-1.0 v4l2src device=/dev/video1 ! video/x-raw,width=640,height=480 ! avdec_vp9 ! filesink location=vid.webm
It fails with:
WARNING: erroneous pipeline: could not link v4l2src0 to avdec_vp9-0
What's wrong?
This pipeline works; v4l2src outputs raw video, so there is nothing for avdec_vp9 to decode, and the raw frames must be encoded (here with vp9enc) instead:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! vp9enc ! webmmux ! filesink location='raw_dual.webm' sync=false
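If you are unsure which raw formats and resolutions your camera actually offers, v4l2-ctl from the v4l-utils package lists them:
v4l2-ctl --list-formats-ext -d /dev/video0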
Related
I have to use GStreamer 0.10 and am trying to stream an MP4 file.
For that I tried:
gst-launch-0.10 filesrc location=./test.mp4 ! qtdemux ! queue ! h264parse ! video/x-h264,mapping=/stream ! udpsink rtsp://192.168.192.100:12345/test
and received a warning:
WARNING: erroneous pipeline: no element "h264parse"
How can I stream the file as an RTSP stream?
To get the h264parse plugin, run:
sudo apt install gstreamer1.0-plugins-bad
Pipelines
Sender
gst-launch-1.0 -v filesrc location=~/file.mp4 ! qtdemux ! queue ! h264parse ! rtph264pay config-interval=10 ! udpsink host=ip_address_to_stream_to port=9999
Receiver
gst-launch-1.0 udpsrc port=9999 caps="application/x-rtp, media=(string)video, clock-rate=(int)90000, payload=(int)96, encoding-name=(string)H264" ! rtph264depay ! identity silent=0 ! avdec_h264 ! videoconvert ! ximagesink
Here are the steps that worked for me:
sudo apt-get install libgstrtspserver-1.0-dev libgstreamer1.0-dev
wget https://raw.githubusercontent.com/GStreamer/gst-rtsp-server/master/examples/test-launch.c
gcc test-launch.c -o test-launch $(pkg-config --cflags --libs gstreamer-1.0 gstreamer-rtsp-server-1.0)
./test-launch "filesrc location=<full-path-to-your-mp4-video-file> ! qtdemux ! queue ! h264parse ! rtph264pay name=pay0 pt=96"
On the client machine, to test, I run ffplay or VLC:
ffplay rtsp://<your-server-ip>:8554/test
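For context, test-launch.c boils down to roughly this (a condensed sketch, not a replacement for the real example):

#include <gst/gst.h>
#include <gst/rtsp-server/rtsp-server.h>

int main(int argc, char *argv[]) {
  gst_init(&argc, &argv);

  GstRTSPServer *server = gst_rtsp_server_new();  // listens on port 8554 by default
  GstRTSPMountPoints *mounts = gst_rtsp_server_get_mount_points(server);

  // argv[1] is the launch line passed on the command line above.
  GstRTSPMediaFactory *factory = gst_rtsp_media_factory_new();
  gst_rtsp_media_factory_set_launch(factory, argv[1]);

  gst_rtsp_mount_points_add_factory(mounts, "/test", factory);  // rtsp://host:8554/test
  g_object_unref(mounts);

  gst_rtsp_server_attach(server, NULL);

  GMainLoop *loop = g_main_loop_new(NULL, FALSE);
  g_main_loop_run(loop);
  return 0;
}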
I recently implemented a simple program that records an RTSP stream while changing the filesink location dynamically every 10 seconds, by referencing this tutorial and this.
RTSP stream example: rtsp://184.72.239.149/vod/mp4:BigBuckBunny_175k.mov
However, when I tested an x264enc element, the resulting videos seem to lose a lot of frames.
When I open the recorded videos, they start at, for example, 00:07 instead of 00:00.
This is my code:
test.cpp
Compile:
g++ test.cpp -o test `pkg-config --cflags --libs gstreamer-1.0`
gstreamer version: 1.14.4
g++ version: 8.2.1
Could anybody help with this issue?
EDIT:
I finally solved this problem with this concept:
pipeline = rtspsrc ! rtpjpegdepay ! queue ! bin
bin = (ghost pad) ! jpegdec ! openh264enc ! h264parse ! mp4mux ! filesink
The bin is dynamically removed from the pipeline and a new one is added every 10 seconds.
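A minimal sketch of the swap mechanics (illustrative, not the actual test.cpp; it assumes the queue element is named "q" and uses gst_parse_bin_from_description() to build the bin with a ghost sink pad; a robust version should also wait until the EOS actually reaches the filesink before setting the old bin to NULL, so mp4mux can finalize the file):

#include <gst/gst.h>

struct App {
  GstElement *pipeline;
  GstElement *bin;   // jpegdec ! openh264enc ! h264parse ! mp4mux ! filesink
};

// Build a fresh recording bin whose filesink points at the next file.
static GstElement *make_recording_bin() {
  static int n = 0;
  gchar *desc = g_strdup_printf(
      "jpegdec ! openh264enc ! h264parse ! mp4mux ! filesink location=out-%03d.mp4",
      n++);
  GstElement *bin = gst_parse_bin_from_description(desc, TRUE, NULL); // TRUE: ghost pads
  g_free(desc);
  return bin;
}

// Runs while data is blocked on the queue's src pad.
static GstPadProbeReturn swap_bin(GstPad *pad, GstPadProbeInfo *info, gpointer data) {
  App *app = static_cast<App *>(data);

  GstPad *sinkpad = gst_element_get_static_pad(app->bin, "sink");
  gst_pad_unlink(pad, sinkpad);
  gst_pad_send_event(sinkpad, gst_event_new_eos()); // drain so mp4mux writes its index
  gst_object_unref(sinkpad);

  gst_element_set_state(app->bin, GST_STATE_NULL);  // simplified: should wait for EOS
  gst_bin_remove(GST_BIN(app->pipeline), app->bin);

  app->bin = make_recording_bin();
  gst_bin_add(GST_BIN(app->pipeline), app->bin);
  gst_element_sync_state_with_parent(app->bin);

  sinkpad = gst_element_get_static_pad(app->bin, "sink");
  gst_pad_link(pad, sinkpad);
  gst_object_unref(sinkpad);

  return GST_PAD_PROBE_REMOVE;  // unblock; data now flows into the new bin
}

// Called every 10 s via g_timeout_add_seconds(10, rotate, app).
static gboolean rotate(gpointer data) {
  App *app = static_cast<App *>(data);
  GstElement *q = gst_bin_get_by_name(GST_BIN(app->pipeline), "q");
  GstPad *srcpad = gst_element_get_static_pad(q, "src");
  gst_pad_add_probe(srcpad, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM, swap_bin, app, NULL);
  gst_object_unref(srcpad);
  gst_object_unref(q);
  return TRUE;
}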
I would like to use a Fortran library on Windows which contains many modules, so I need to compile this library with MinGW. To use it from the MSVC compiler, I need to build the library so it can be linked externally, as mentioned in these two links:
https://groups.google.com/forum/#!topic/comp.lang.fortran/s8CiQnNmO14
https://blog.kitware.com/fortran-for-cc-developers-made-easier-with-cmake/
This seems to work as long as I do not use modules.
Here is a minimal example:
Fortran code:
!-------------------------------------------------------------------------!
! !
! MODULE EXMOD !
! !
!-------------------------------------------------------------------------!
MODULE EXMOD
INTEGER :: VALUE
CONTAINS
!-------------------------------------------------------------------------!
! !
! INITOPTMOD !
! !
!-------------------------------------------------------------------------!
SUBROUTINE INITOPTMOD(VALUE_IN)
IMPLICIT NONE
INTEGER, INTENT(IN) :: VALUE_IN
VALUE = VALUE_IN
WRITE(*,*) 'The Value is: ', VALUE
RETURN
END SUBROUTINE INITOPTMOD
END MODULE EXMOD
!-------------------------------------------------------------------------!
!-------------------------------------------------------------------------!
! !
! EXTERN CALLER !
! !
!-------------------------------------------------------------------------!
SUBROUTINE FORTCALL( VALUE_ )
USE EXMOD
INTEGER, INTENT(IN) :: VALUE_
CALL INITOPTMOD(VALUE_)
END SUBROUTINE FORTCALL
!-------------------------------------------------------------------------!
I compiled this with:
gfortran forfunc.f90 -c
gfortran -o libfortfunc.dll forfunc.o -shared -Wl,--output-def,libfortfunc.def
lib /MACHINE:x64 /def:libfortfunc.def /out:libfortfunc.lib
After compiling the library, I tried to compile main.cpp with MSVC:
main.cpp:
extern "C" {
void fortcall_( int *value );
}
int main()
{
int value = 12;
fortcall_( &value );
return 0;
}
I tried to compile this with:
cl main.cpp libfortfunc.exp libfortfunc.lib
If there is no module this works fine, but since the Fortran code contains one, the following error occurs:
/out:main.exe
main.obj
libfortfunc.exp
libfortfunc.lib
libfortfunc.exp : error LNK2001: unresolved external symbol "__exmod_MOD_value"
main.exe : fatal error LNK1120: 1 unresolved externals
Can I do something to fix that?
At least I found a way to avoid this error.
I had a look at the file libfortfunc.def:
EXPORTS
__exmod_MOD_initoptmod #1
__exmod_MOD_value #2 DATA
fortcall_ #3
Since I do not want to use the module directly from my C++ code, I simply deleted the two module exports from the .def file, leaving only fortcall_. After that I ran the commands:
lib /MACHINE:x64 /def:libfortfunc.def /out:libfortfunc.lib
cl main.cpp libfortfunc.exp libfortfunc.lib
That works fine and the main.exe file returns the right value.
Since this does not feel like the right way to do it, the remaining questions are:
How can I tell gfortran that it must not export modules?
Why is exporting modules a problem?
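A possibly cleaner route, based on MinGW ld's documented behavior that supplying a .def file on the link line disables auto-export of everything else (not verified here against MSVC interop, so treat it as a sketch): maintain a minimal hand-written libfortfunc.def containing only the C-facing entry point,

EXPORTS
fortcall_

and pass it when linking the DLL, so the module symbols are never exported in the first place:

gfortran -shared -o libfortfunc.dll forfunc.o libfortfunc.def
lib /MACHINE:x64 /def:libfortfunc.def /out:libfortfunc.lib
cl main.cpp libfortfunc.lib

Note also that the .exp file is normally only needed when linking the DLL itself; linking the client against just the import library, as in the last line, may avoid the unresolved __exmod_MOD_value on its own.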
I am basically creating a video from an OpenGL animation using glReadPixels and a Unix pipe:
FILE *ffmpeg = popen("/usr/local/Cellar/ffmpeg/2.5.4/bin/ffmpeg"
" -framerate 30"
" -vcodec rawvideo"
" -f rawvideo"
" -pix_fmt rgb32"
" -s 1080x720"
" -i pipe:0 -vf vflip -vcodec h264"
" -r 60"
" /Users/xamarin/Desktop/out.mp4", "w");
Is it possible to add a watermark to each frame as I am creating the frames for the video? I know how to add a watermark to an existing video, but I want to add the watermark while creating the video, so that in one step I get a video with the watermark in it. Which ffmpeg parameters can do this?
Use the overlay filter:
ffmpeg \
-framerate 30 -f rawvideo -pixel_format rgb32 -video_size 1080x720 -i pipe:0 \
-i overlay.png \
-i audio.foo \
-filter_complex "[0:v]vflip[main];[main][1:v]overlay=format=rgb,format=yuv420p" \
-c:v libx264 -c:a aac -movflags +faststart output.mp4
The rawvideo demuxer documentation lists -pixel_format instead of -pix_fmt and -video_size instead of -s.
You probably don't need -c:v rawvideo when you include -f rawvideo.
I removed -r 60 because it seems unnecessary to duplicate the frames.
You may see a visual improvement when adding format=rgb to the overlay filter for RGB inputs. The format filter is then used to make the H.264 output use YUV 4:2:0 chroma subsampling, which is needed for players that are not FFmpeg-based.
-movflags +faststart is helpful if your viewers will watch via progressive download.
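Folded back into the popen() call from the question, that looks roughly like this (the overlay.png path is a placeholder, and the audio input is dropped since the question has no audio source):

#include <cstdio>

// Raw frames from glReadPixels are fwrite()n to this pipe; pclose() when done.
FILE *start_ffmpeg() {
  return popen(
      "ffmpeg"
      " -framerate 30 -f rawvideo -pixel_format rgb32 -video_size 1080x720"
      " -i pipe:0"          // the raw frames from this process
      " -i overlay.png"     // the watermark image
      " -filter_complex \"[0:v]vflip[main];[main][1:v]overlay=format=rgb,format=yuv420p\""
      " -c:v libx264 -movflags +faststart"
      " /Users/xamarin/Desktop/out.mp4",
      "w");
}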
Now I have an IP camera. I tried to grab an image through ffmpeg (like this: ffmpeg -rtsp_transport tcp -i "my rtsp stream address" -y -f image2 test.jpg). That works. Grabbing a frame through OpenCV works too. But when I open the stream in VLC and try to capture an image at the same time, I just get a gray image.
Why? If I open the stream in VLC twice at the same time, that also works; only capturing an image while viewing the RTSP stream gives a gray image. Is the IP camera the reason?
Try this: https://gist.github.com/jamek/1dda2add62b3f7ac415a
Compile:
g++ -s -o ffmpeg_rtsp rtsp.cpp -lavcodec -lavutil -lswscale -lavformat
Run: ./ffmpeg_rtsp "your rtsp stream address"