OpenCV stream video from Netcat - C++

I am using Netcat and Mplayer to stream video from one device to another like this:
cat [video file] | nc [client ip address] [port] (server)
nc -L -p [port] | mplayer [options] (client)
I would like to ask if there is a way to pick up the stream with OpenCV to perform some image processing.
I have tried something like
VideoCapture stream("udp://#<ip>:<port>/");
but the process gets stuck at this point.
Thank you for your help!

I'm doing a similar thing myself and was able to get it working by simply piping through stdin:
nc -L -p [port] | ./opencvprogram
and then in the opencv program:
VideoCapture stream("/dev/stdin");

Did you try the following?
VideoCapture stream("udp://@:6000"); // 6000 is just an example
Are you sure that your video is streaming as UDP?
You can check this code too.

Related

How to use a video stream as input in a Python/OpenCV program

I am able to stream and receive a webcam feed across two terminals via UDP.
Command for streaming:
ffmpeg -i /dev/video0 -b 50k -r 20 -s 858x500 -f mpegts udp://127.0.0.1:2000
Command for receiving:
ffplay udp://127.0.0.1:2000
Now I have to use this received video stream as input in Python/OpenCV. How can I do that?
I will be doing this with RTP and RTSP as well. In the case of RTSP the receiving terminal has to be initiated first, but if I do that the port becomes busy and my program cannot take the feed. How can this be resolved?
I am currently using OpenCV 2.4.13 and Python 2.7 on Ubuntu 14.04.
Check this tutorial, and use cv2.VideoCapture("udp://127.0.0.1:2000"). You will need to build OpenCV with FFmpeg support for this to work.
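As a minimal sketch of the read loop (assuming the ffmpeg sender above is already running; the window title is arbitrary):

import cv2

cap = cv2.VideoCapture("udp://127.0.0.1:2000")
if not cap.isOpened():
    raise IOError("Cannot open stream - was OpenCV built with FFmpeg support?")
while True:
    ok, frame = cap.read()
    if not ok:  # stream ended or a frame was dropped
        break
    cv2.imshow("udp stream", frame)  # replace with your own processing
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()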

FFMPEG API - Retrieve device camera information

I need to get the compatible framerates/resolutions of the available cameras. How can this be done with the ffmpeg library? I've tried using the functions of avdevice, but all of them seem to return an error, and I am not able to get a list of the available devices either.
This is being done on a Mac using avfoundation (and it will later be ported to Windows with dshow).
Thank you for your time.
Try running
ffmpeg -f dshow -list_options true -i video="Integrated Camera"
replacing 'dshow' and "Integrated Camera" with whatever you have, depending on the platform. In the same way, you can get the name of the video device with
ffmpeg -f dshow -list_devices true -i x
You can then redirect the resulting output to a file with the > operator, or pipe it to a further command-line tool for processing with the | operator. Note that ffmpeg writes these listings to stderr rather than stdout, so redirect that stream. For example,
ffmpeg -f dshow -list_options true -i video="Integrated Camera" 2> test.txt
or
ffmpeg -f dshow -list_options true -i video="Integrated Camera" 2>&1 | grep 'pixel_format'
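If you need the listing programmatically rather than in a shell, here is a small Python 3 sketch (assuming the ffmpeg binary is on your PATH; avfoundation shown for the Mac case, swap in dshow on Windows):

import subprocess

# ffmpeg prints device/option listings to stderr, not stdout,
# so capture and read that stream.
result = subprocess.run(
    ["ffmpeg", "-f", "avfoundation", "-list_devices", "true", "-i", ""],
    capture_output=True, text=True)
for line in result.stderr.splitlines():
    print(line)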

FFMPEG command for audio file streaming

I am trying to stream an audio file in MP3 format using the FFmpeg library to a remote computer located on the same LAN as the sender. The command I used to stream from the sender is given below:
ffmpeg -re -f mp3 -i sender.mp3 -ar 8000 -f mulaw -f rtp rtp://10.14.35.23:1234
I got the command below from the FFmpeg documentation page; it generates audio and streams it to port 1234 on the remote computer:
ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://10.14.35.23:1234
I thought I had made the relevant changes so that the MP3 streaming command would work, only to encounter an error which reads:
"Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"
Can anyone tell me which parameter is wrong here and how to rectify it?
I figured out a way to stream an audio file using FFmpeg. The command is given below:
ffmpeg -re -f mp3 -i sender.mp3 -acodec libmp3lame -ab 128k -ac 2 -ar 44100 -f rtp rtp://10.14.35.23
Here the audio file 'sender.mp3' is located in the same folder as ffmpeg.exe. If it is in a different folder, the full path should be given in the command.

Capturing jpegs from an h264 stream with gstreamer on a Raspberry Pi

I have one of the new camera add-ons for a Raspberry Pi. It doesn't yet have video4linux support but comes with a small program that spits out a 1080p h264 stream. I have verified this works and got it pushing the video to stdout with:
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o -
I would like to process this stream such that I end up with a snapshot of the video taken once a second.
Since it's 1080p I will need to use the rpi's hardware support for H264 decoding. I believe gstreamer is the only app to support this, so solutions using ffmpeg or avconv won't work. I've used the build script at http://www.trans-omni.co.uk/pi/GStreamer-1.0/build_gstreamer to build gstreamer and the plugin for hardware H264 decoding, and it appears to work:
root@raspberrypi:~/streamtest# GST_OMX_CONFIG_DIR=/etc/gst gst-inspect-1.0 | grep 264
...
omx: omxh264enc: OpenMAX H.264 Video Encoder
omx: omxh264dec: OpenMAX H.264 Video Decoder
So I need to construct a gst-launch pipeline that takes video on stdin and spits out a fresh jpeg once a second. I know I can use gstreamer's 'multifilesink' sink to do this, so I have come up with the following short script to launch it:
root@raspberrypi:~/streamtest# cat test.sh
#!/bin/bash
export GST_OMX_CONFIG_DIR=/etc/gst
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc fd=0 ! decodebin ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg
The trouble is, it doesn't work: gstreamer just sits forever in the prerolling state and never spits out my precious jpegs.
root@raspberrypi:~/streamtest# ./test.sh
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
[waits forever]
In case it's helpful output with gstreamer's -v flag set is at http://pastebin.com/q4WySu4L
Can anyone explain what I'm doing wrong?
We finally found a solution to this. My gstreamer pipeline was mostly right, but two problems combined to stop it working:
- raspivid doesn't add timestamps to the h264 frames it produces
- recent versions of gstreamer have a bug which stops them handling untimestamped frames
Run a 1.0 build of gstreamer (be sure to build from scratch and remove all traces of previous attempts) and the problem goes away.
See http://gstreamer-devel.966125.n4.nabble.com/Capturing-jpegs-from-an-h264-stream-tt4660254.html for the mailing list thread.

How to generate a thumbnail of a Flash movie (.flv)

I'm using ColdFusion and need to generate a thumbnail from a Flash movie stored on the server. I have heard of FFmpeg but have no idea how to use it. (Once you put it on your server, what's the next step?)
You can use cfexecute to run a command line on the CF server.
Karthik linked a blog post that suggests the following syntax for ffmpeg:
ffmpeg -itsoffset -4 -i test.avi -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 test.jpg
So you could do something like this:
<cfexecute
    name="c:\pathto\ffmpeg\ffmpeg.exe"
    arguments="-itsoffset -4 -i #sourcevideo# -vcodec mjpeg -vframes 1 -an -f rawvideo -s 320x240 #thumbnaildestination#" />
I haven't run ffmpeg like this and you'll likely need to experiment with the syntax to get a result you like, but once you do your workflow is pretty straightforward.
You may also run into issues executing ffmpeg.exe depending on the user account your ColdFusion server instance is running as.
FFmpeg documentation: http://www.ffmpeg.org/documentation.html
You might want to check: http://blog.prashanthellina.com/2008/03/29/creating-video-thumbnails-using-ffmpeg/
http://www.flashcomguru.com/index.cfm/2006/4/25/ffmpegthumbs
With ColdFusion alone it's not possible, but check this: http://old.nabble.com/Create-a-thumbnail-image-from-.flv-video-file-once-uploaded-td22683497.html