How to use a video stream as input in a Python/OpenCV program - python-2.7

I am able to stream and receive a webcam feed between two terminals via UDP.
Command for streaming:
ffmpeg -i /dev/video0 -b 50k -r 20 -s 858x500 -f mpegts udp://127.0.0.1:2000
Command for receiving:
ffplay udp://127.0.0.1:2000
Now I have to use this received video stream as input in Python/OpenCV. How can I do that?
I will be doing this using RTP and RTSP as well.
But in the case of RTSP it is essential to start the receiving terminal; if I do that, the port becomes busy and my program cannot take the feed. How could that be resolved?
I am currently using OpenCV 2.4.13 and Python 2.7 on Ubuntu 14.04.

Check this tutorial, and use cv2.VideoCapture("udp://127.0.0.1:2000"). You will need to build OpenCV with FFmpeg support for this to work.
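For example, a minimal capture loop might look like the sketch below (assuming OpenCV was built with FFmpeg support; the URL matches the streaming command from the question):

import cv2

# Open the MPEG-TS stream that ffmpeg pushes to udp://127.0.0.1:2000
cap = cv2.VideoCapture("udp://127.0.0.1:2000")
if not cap.isOpened():
    raise IOError("Cannot open stream - was OpenCV built with FFmpeg?")

while True:
    ret, frame = cap.read()  # grab and decode the next frame
    if not ret:
        break  # stream ended or a read error occurred
    cv2.imshow("stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to quit
        break

cap.release()
cv2.destroyAllWindows()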

Related

Capture and play MJPEG - Network Video stream over UDP with OpenCV and ffmpeg

I'm trying to receive and display a live MJPEG network video stream sent over UDP from a network cam.
I can play the video stream by starting VLC with the argument --demux=mjpeg and then typing udp://@:1234 in the network stream field, or with GStreamer via the console line: gst-launch -v udpsrc port=1234 ! jpegdec ! autovideosink. My cam has the IP address 192.168.1.2 and it sends the stream to the address 192.168.1.1:1234.
I've tried to capture the stream with OpenCV with:
cv::VideoCapture cap;
cap.open("udp://@192.168.1.1:1234");
I also tried:
cap.open("udp://@:1234")
cap.open("udp://@localhost:1234")
cap.open("udp://192.168.1.1:1234")
cap.open("udp://192.168.1.1:1234/")
But the function hangs until I press Ctrl+C. I have the same problem when I use ffmpeg with: ffmpeg -i udp://@192.168.1.1:1234 -vcodec mjpeg
What did I do wrong? When I installed FFmpeg I couldn't install the dependency libsdl1.2-dev. Is that the problem?
If so, is there any way to read the UDP frames from the socket, decode the JPEG pictures, and display them with OpenCV?
I have the OS Ubuntu Linaro Oneiric 11.10 with kernel 3.0.35 from Freescale.
Thanks anyway. I have fixed this problem by installing a newer version of FFmpeg and using the C API of FFmpeg.
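If rebuilding FFmpeg is not an option, the read-from-socket idea from the question can also be sketched in Python (to match the main question's language). This is only a sketch, and it assumes each UDP datagram carries one complete JPEG image; a camera that fragments frames across datagrams would need reassembly first:

import socket
import cv2
import numpy as np

# Listen on the port the camera streams to (1234 in the question)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 1234))

while True:
    data, addr = sock.recvfrom(65535)  # largest possible UDP payload
    # Decode the datagram as a JPEG (assumes one full frame per packet)
    frame = cv2.imdecode(np.frombuffer(data, dtype=np.uint8), cv2.IMREAD_COLOR)
    if frame is None:
        continue  # skip packets that do not decode as a complete JPEG
    cv2.imshow("mjpeg", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break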

OpenCV stream video from Netcat

I am using Netcat and Mplayer to stream video from one device to another like this:
cat [video file] | nc [client ip address] [port] (server)
nc -L -p [port] | mplayer [options] (client)
I would like to ask if there is a way to pick up the stream with OpenCV to perform some image processing.
I have tried something like
VideoCapture stream("udp://@<ip>:<port>/");
but the process gets stuck at this point.
Thank you for your help!
I'm doing a similar thing myself and was able to get it working by simply piping through stdin:
nc -L -p [port] | ./opencvprogram
and then in the opencv program:
VideoCapture stream("/dev/stdin");
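The same stdin trick works from Python as well; a minimal sketch (opencvprogram.py is a placeholder name):
nc -L -p [port] | python opencvprogram.py
and in opencvprogram.py:

import cv2

# OpenCV reads the video data that netcat pipes in via /dev/stdin
cap = cv2.VideoCapture("/dev/stdin")

while True:
    ret, frame = cap.read()
    if not ret:
        break
    # ... image processing goes here ...
    cv2.imshow("netcat stream", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()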
Did you try the following?
VideoCapture stream("udp://@:6000"); // 6000 is just an example
Are you sure that your video is streaming as UDP?
You can check this code too.

FFMPEG command for audio file streaming

I am trying to stream an audio file in mp3 format using the FFmpeg library to a remote computer located on the same LAN as the sender. The command I used to stream at the sender is given below:
ffmpeg -re -f mp3 -i sender.mp3 -ar 8000 -f mulaw -f rtp rtp://10.14.35.23:1234
I found the command below on the FFmpeg documentation page; it generates audio and streams it to port 1234 on the remote computer:
ffmpeg -re -f lavfi -i aevalsrc="sin(400*2*PI*t)" -ar 8000 -f mulaw -f rtp rtp://10.14.35.23:1234
I thought I had made the relevant changes so that the mp3 streaming command would work, only to encounter the error which reads:
"Error while opening encoder for output stream #0:0 - maybe incorrect parameters such as bit_rate, rate, width or height"
Can anyone tell me which parameter is wrong here and how to rectify it?
I figured out the way to stream an audio file using FFmpeg. The command is given below:
ffmpeg -re -f mp3 -i sender.mp3 -acodec libmp3lame -ab 128k -ac 2 -ar 44100 -f rtp rtp://10.14.35.23
Here the audio file 'sender.mp3' is located in the same folder as ffmpeg.exe. If it is in a different folder, the full path should be given in the command.
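For reference, when it starts, ffmpeg prints an SDP description of the RTP session. One way to play the stream on the receiving machine (a sketch, assuming a reasonably recent ffplay that supports -protocol_whitelist; stream.sdp is an arbitrary file name) is to save that SDP text to a file and run:
ffplay -protocol_whitelist file,rtp,udp stream.sdp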

youtube-dl command saves as flv and not mp3

Below is the command I am running. It should convert the download to mp3, but it still exports as a video in flv. What am I doing wrong?
$cmd = '/usr/local/bin/youtube-dl -o "%(title)s.%(ext)s" -x --audio-format mp3 -- '.escapeshellarg($url).'';
youtube-dl will download the video before converting it. Most likely, you don't have ffprobe or ffmpeg installed. Make sure both programs are available (i.e. you get a sensible output for ffprobe --help and ffmpeg --help).
You can download the audio directly with youtube-dl by requesting an audio-only format.
For example, in an Ubuntu terminal: youtube-dl youtube.com/watch?v=qn6CMz18lkQ -f 141. Format 141 is a high-quality audio-only stream (AAC in an M4A container rather than true MP3).
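To check which format codes are actually available for a given video rather than guessing, youtube-dl can list them first:
youtube-dl -F youtube.com/watch?v=qn6CMz18lkQ
Each line of the output shows a format code together with its container and whether it carries video, audio, or both.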

Capturing jpegs from an h264 stream with gstreamer on a Raspberry Pi

I have one of the new camera add-ons for a Raspberry Pi. It doesn't yet have video4linux support but comes with a small program that spits out a 1080p h264 stream. I have verified this works and got it pushing the video to stdout with:
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o -
I would like to process this stream such that I end up with a snapshot of the video taken once a second.
Since it's 1080p I will need to use the rpi's hardware support for H264 decoding. I believe gstreamer is the only app to support this, so solutions using ffmpeg or avconv won't work. I've used the build script at http://www.trans-omni.co.uk/pi/GStreamer-1.0/build_gstreamer to make gstreamer and the plugin for hardware H264 decoding, and it appears to work:
root@raspberrypi:~/streamtest# GST_OMX_CONFIG_DIR=/etc/gst gst-inspect-1.0 | grep 264
...
omx: omxh264enc: OpenMAX H.264 Video Encoder
omx: omxh264dec: OpenMAX H.264 Video Decoder
So I need to construct a gst-launch pipeline that takes video on stdin and spits out a fresh jpeg once a second. I know I can use gstreamer's 'multifilesink' sink to do this so have come up with the following short script to launch it:
root@raspberrypi:~/streamtest# cat test.sh
#!/bin/bash
export GST_OMX_CONFIG_DIR=/etc/gst
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc fd=0 ! decodebin ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg
Trouble is it doesn't work: gstreamer just sits forever in the prerolling state and never spits out my precious jpegs.
root@raspberrypi:~/streamtest# ./test.sh
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
[waits forever]
In case it's helpful, output with gstreamer's -v flag set is at http://pastebin.com/q4WySu4L
Can anyone explain what I'm doing wrong?
We finally found a solution to this. My gstreamer pipeline was mostly right but two problems combined to stop it working:
raspivid doesn't add timestamps to the h264 frames it produces
recent versions of gstreamer have a bug which stops them from handling untimestamped frames
Run a 1.0 build of gstreamer (be sure to build from scratch & remove all traces of previous attempts) and the problem goes away.
See http://gstreamer-devel.966125.n4.nabble.com/Capturing-jpegs-from-an-h264-stream-tt4660254.html for the mailing list thread.
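For reference, a workaround sometimes suggested for untimestamped h264 input (untested here; do-timestamp=true and the h264parse element are additions to the original pipeline, not part of the accepted fix above) is:
raspivid -n -t 1000000 -vf -b 2000000 -fps 25 -o - | \
gst-launch-1.0 fdsrc fd=0 do-timestamp=true ! h264parse ! decodebin ! videorate ! video/x-raw,framerate=1/1 ! jpegenc ! multifilesink location=img_%03d.jpeg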