A video encoding technique - python-2.7

I am trying to create an erasure-coding-based technique for video streaming over VANET. I have managed to stream video from the camera and encode it; now I want to apply the XOR operation to two successive frames and send all three in one array (by three I mean each frame, its successor, and the frame resulting from the XOR operation).
Here is my code:
import numpy as np
import cv2
import cPickle as pickle

# cap for capturing the video
cap = cv2.VideoCapture(0)
f = open('videorecord.pkl', 'wb')  # binary mode for pickle
vidrec = []  # list that will hold the encoded frames
while True:
    # capture a frame from the camera
    ret, frame = cap.read()
    if ret == True:
        cv2.imshow('Streaming a Video', frame)
        # encode the frame as JPEG
        ret, frame1 = cv2.imencode('.jpeg', frame)
        # append it to the list
        vidrec.append(frame1)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
# add the list to the pkl file
pickle.dump(vidrec, f, pickle.HIGHEST_PROTOCOL)
# release everything
cap.release()
f.close()
cv2.destroyAllWindows()
Can anyone help me?
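For the XOR step itself, here is a minimal pure-NumPy sketch, assuming the two `imencode` buffers are flat `uint8` arrays (the names `make_parity` and `recover` are mine, not part of any library). Since two JPEG buffers generally differ in length, the shorter one is zero-padded before XORing, and the original lengths are kept so the receiver can trim after recovery:

```python
import numpy as np

def make_parity(a, b):
    # zero-pad the shorter encoded frame so both buffers match in length,
    # then XOR byte-wise to build the parity packet
    n = max(a.size, b.size)
    pa = np.zeros(n, dtype=np.uint8)
    pb = np.zeros(n, dtype=np.uint8)
    pa[:a.size] = a
    pb[:b.size] = b
    return np.bitwise_xor(pa, pb)

def recover(parity, known, lost_size):
    # XOR the parity with the surviving frame to rebuild the lost one,
    # then trim the padding back to the lost frame's original length
    pk = np.zeros(parity.size, dtype=np.uint8)
    pk[:known.size] = known
    return np.bitwise_xor(parity, pk)[:lost_size]

# example: frame_a is lost in transit and rebuilt from frame_b + parity
frame_a = np.array([10, 20, 30], dtype=np.uint8)
frame_b = np.array([40, 50], dtype=np.uint8)
parity = make_parity(frame_a, frame_b)
rebuilt = recover(parity, frame_b, frame_a.size)
```

The triple to transmit would then be (frame1, frame2, parity) together with the two original buffer lengths.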

Related

How to capture images in OpenCV Python after a certain interval of time and simultaneously save a fixed number of those captured images?

Since I am very new to this language, I have written this code with what little knowledge I have.
The loop executes three times, but the three images overwrite one another, and at the end there is just one image available instead of three different images (which is my goal).
import cv2

# helps in turning on the camera
cap = cv2.VideoCapture(0)
# camera takes the image 3 times
a = 0
while a < 3:
    a = a + 1
    # grab a frame
    check, frame = cap.read()
    print(check)
    print(frame)
    # convert the image to grayscale
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # show the frame
    cv2.imshow("capturing", image)
    # save the image (same path every pass, so it gets overwritten)
    status = cv2.imwrite('path of where the image is to be saved.jpg', image)
    print("Image written to file-system : ", status)
# turn off the camera (note: release() must be called, not just referenced)
cap.release()
cv2.waitKey(0)
cv2.destroyAllWindows()
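The overwriting happens because `cv2.imwrite` receives the same path on every pass. One fix is to build a distinct filename from the loop counter; a small sketch (the `frame_filename` helper and the `captured` prefix are my own choices):

```python
def frame_filename(index, prefix='captured'):
    # build a distinct name per iteration, so earlier files survive
    return '%s_%d.jpg' % (prefix, index)

# inside the capture loop, the save line would become, e.g.:
#   status = cv2.imwrite(frame_filename(a), image)
names = [frame_filename(a) for a in range(3)]
```

With this, three passes produce three different files instead of one overwritten file.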

Poor Performance and Strange Array Behavior when run on Linux

So I am working on a script to do some video processing. It reads a video file searching for red dots of a certain size, then finds the center of each and returns the x/y coordinates. Initially I had it working great on my Windows machine, so I sent it over to the Raspberry Pi to see if I would encounter issues, and boy did I.
On Windows the script runs in real time, completing at the same time as the video. On the Raspberry Pi it is very slow. I also noticed, when I looked into the structure of contours, that there is a huge array of 0's first, before my x/y coordinate array. I have no idea what is creating this, but it doesn't happen on the Windows box.
I have the same versions of Python and OpenCV installed on both machines; the only difference is NumPy 1.11 on Windows and NumPy 1.12 on the Raspberry Pi. Note that I had to change the index in np.mean(contours[?]) to 1 to skip the initial array of 0's. What have I done wrong?
Here's a video I made for testing purposes if needed:
http://www.foxcreekwinery.com/video.mp4
import numpy as np
import cv2

def vidToPoints():
    cap = cv2.VideoCapture('video.mp4')
    while cap.isOpened():
        ret, image = cap.read()
        if ret:
            cv2.imshow('frame', image)
            if cv2.waitKey(1) == ord('q'):
                break
            # save frame as image
            cv2.imwrite('frame.jpg', image)
            # load the image
            image = cv2.imread('frame.jpg')
            # define the list of colour boundaries (BGR order)
            boundaries = [
                ([0, 0, 150], [90, 90, 255])
            ]
            # loop over the boundaries
            for (lower, upper) in boundaries:
                # create NumPy arrays from the boundaries
                lower = np.array(lower, dtype="uint8")
                upper = np.array(upper, dtype="uint8")
                # find the colors within the specified boundaries
                mask = cv2.inRange(image, lower, upper)
                if 50 > cv2.countNonZero(mask) > 10:
                    # find contours
                    contours = cv2.findContours(mask, 0, 1)
                    # average the contour points to find the center
                    avg = np.mean(contours[1], axis=1)
                    x = int(round(avg[0, 0, 0]))
                    y = int(round(avg[0, 0, 1]))
                    print [x, y]
                    print cv2.countNonZero(mask)
            # skip ahead a few frames
            for l in range(5):
                cap.grab()
        else:
            break
    cap.release()
    cv2.destroyAllWindows()

vidToPoints()
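One likely cause of the leading array of zeros, assuming the Raspberry Pi build is OpenCV 3.x: there, `cv2.findContours` returns `(image, contours, hierarchy)`, while OpenCV 2.x (and 4.x) returns `(contours, hierarchy)`. Indexing the tuple by a fixed position therefore picks up the binary mask image on one box and the contour list on the other. The contour list is always the second-to-last element, so a small version-agnostic helper (the name `extract_contours` is mine) sidesteps the hard-coded index:

```python
def extract_contours(find_contours_result):
    # cv2.findContours returns (contours, hierarchy) on OpenCV 2.x/4.x
    # but (image, contours, hierarchy) on 3.x; the contour list is
    # always the second-to-last element of the tuple
    return find_contours_result[-2]

# stand-ins for the two return shapes (real contours are point arrays)
v2_style = (['contour'], 'hierarchy')
v3_style = ('image', ['contour'], 'hierarchy')
```

In the script above, `contours = extract_contours(cv2.findContours(mask, 0, 1))` would give the contour list itself on either version, so the same downstream indexing works on both machines.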

Python montage a plot on OpenCV image and record as video

Assume you have temperature data sampled at 512 Hz. I want to record this data synchronized with the camera images; the resulting record should be just a video file.
I can plot this data with matplotlib or pyqtgraph.
I did it with matplotlib, but the video frame rate drops. Here is the code, with random incoming data:
import cv2
import numpy as np
import matplotlib.pyplot as plt

cap = cv2.VideoCapture(0)  # video source: webcam
fourcc = cv2.cv.CV_FOURCC(*'XVID')  # record format: XVID
out = cv2.VideoWriter('output.avi', fourcc, 1, (800, 597))  # output video: output.avi
t = np.arange(0, 512, 1)  # sample time axis, 512 points
while cap.isOpened():  # record loop
    ret, frame = cap.read()  # get a frame from the webcam
    if ret == True:
        nse = np.random.randn(len(t))  # generate a random data sequence
        plt.subplot(1, 2, 1)  # subplot: random data
        plt.plot(t, nse)
        plt.subplot(1, 2, 2)  # subplot: image
        plt.imshow(frame)
        # save the matplotlib figure as last.png
        plt.savefig("last.png")
        plt.clf()
        img = cv2.imread("last.png")  # read last.png back
        out.write(img)  # append last.png to output.avi
        cv2.imshow('frame', img)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # press q in the frame window to exit
            break
    else:
        break
cap.release()  # release the webcam
out.release()  # finalize the video
cv2.destroyAllWindows()  # close all windows
import cv2
import numpy as np

canvas = np.zeros((480, 640), dtype=np.uint8)
t = np.arange(0, 512, 1)  # sample time axis, 512 points
nse = np.random.randn(len(t))
# some normalization to fit the canvas dimensions
t = 640 * t / 512
nse = 480 * nse / nse.max()
pts = np.vstack((t, nse)).T.astype(np.int32)
cv2.polylines(canvas, [pts], False, 255)
cv2.imshow('gray', canvas)
This draws the plot on a fresh zero array (480 × 640); t and nse should be normalized to the canvas dimensions as you like.
If your capture frame is also 480 × 640, you can concatenate frame and canvas with np.concatenate, np.vstack, or np.hstack into a single array and send that to a cv2.VideoWriter opened at the combined size (stacking vertically gives a 960 × 640 array; note that VideoWriter expects the size as (width, height)).
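The concatenation step might look like this (pure NumPy; zero arrays stand in for a real webcam frame and the polyline canvas). The grey canvas needs a channel axis before it can sit next to a BGR frame, and since `cv2.VideoWriter` takes its size as (width, height), a horizontal stack of two 480 × 640 images is opened as (1280, 480):

```python
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)   # stand-in for a BGR webcam frame
canvas = np.zeros((480, 640), dtype=np.uint8)     # stand-in for the polyline canvas

canvas_bgr = np.dstack((canvas, canvas, canvas))  # grey -> 3 channels
montage = np.hstack((frame, canvas_bgr))          # side-by-side, 480 x 1280 x 3

# the writer would then be opened to match the montage, e.g.:
#   out = cv2.VideoWriter('output.avi', fourcc, 20.0, (1280, 480))
#   out.write(montage)
```

Skipping matplotlib entirely like this avoids the savefig/imread round trip, which is what was dragging the frame rate down.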

Why is my OpenCV Video Refusing to Write to Disk?

So I am starting to get very confused by the OpenCV library's ability to write video to disk, because even the OpenCV documentation is not terribly clear about how the video actually gets written in this case. The code below seems to collect the data just fine, but the video file it writes has no data in it. All I want to do is take a video that I know I can read, change the data within it to a ramp between 0 and 255, and then write that data back out to disk. However, the final I/O step is not cooperating, for reasons I don't understand. Can anyone help? Find the code below:
import numpy as np
import cv2
import cv2.cv as cv

cap = cv2.VideoCapture("/Users/Steve/Documents/TestVideo.avi")  # the video
height = cap.get(cv.CV_CAP_PROP_FRAME_HEIGHT)  # get some properties of the video
width = cap.get(cv.CV_CAP_PROP_FRAME_WIDTH)
fps = cap.get(cv.CV_CAP_PROP_FPS)
fourcc = cv2.cv.CV_FOURCC(*'PDVC')  # this is essential for testing
out = cv2.VideoWriter('output.avi', fourcc, int(fps), (int(width), int(height)))
xaxis = np.arange(width, dtype='int')
yaxis = np.arange(height, dtype='int')
xx, yy = np.meshgrid(xaxis, yaxis)
ramp = 256 * xx / int(width)  # horizontal ramp image scaling 0-255 across the width
i = 0
while cap.isOpened():
    if i % 100 == 0: print i
    i += 1
    ret, frame = cap.read()  # grab a frame
    if ret == True:
        # replace the frame data with the ramp instead of the original video
        frame[:, :, 0] = ramp  # the camera is B/W, so the image is B/W
        frame[:, :, 1] = ramp
        frame[:, :, 2] = ramp
        out.write(frame)  # write to disk?
        cv2.imshow('frame', frame)  # the ramp shows up fine in imshow
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
cap.release()  # clear windows
out.release()
cv2.destroyAllWindows()
Your code is generally correct, but it is likely failing silently at some step along the way.
Try adding some debug lines:
out = cv2.VideoWriter('output2.avi',fourcc, int(fps), (int(width),int(height)))
or
else:
    print "frame %d is false" % i
    break
When I tested your code locally, I found fps was set to 0 for most .avi files I read. Manually setting it to 15 or 30 worked.
I also didn't have any luck getting your fourcc to work on my machine (OS X), but this one worked fine:
fourcc = cv2.cv.CV_FOURCC('m', 'p', '4', 'v')
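The fps observation generalizes: `CAP_PROP_FPS` coming back as 0 (or NaN) produces a writer that silently records nothing, so it is worth guarding the value before handing it to `VideoWriter`. A small sketch (`safe_fps` is my own name and 30 is an arbitrary fallback, not anything from OpenCV):

```python
def safe_fps(reported, default=30):
    # some .avi files report 0 (or nothing) via CAP_PROP_FPS; a writer
    # opened with fps=0 fails silently, so fall back to a sane default
    if reported and reported > 0:
        return int(reported)
    return default

# usage with the script above would be, e.g.:
#   out = cv2.VideoWriter('output.avi', fourcc,
#                         safe_fps(cap.get(cv.CV_CAP_PROP_FPS)),
#                         (int(width), int(height)))
```

It also helps to check `out.isOpened()` right after constructing the writer, which surfaces a rejected fourcc immediately instead of at the end of the run.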

OpenCV edge enhancement

I am performing background subtraction to obtain a moving car from a video, as below (running-average background modeling).
I am applying findContours() after this to draw a polygon around the car.
As you can see, the quality of the obtained output is not great. Is there any way I can enhance the edges of the car to make it more prominent and cut out the extraneous noise surrounding it? I tried performing morphological closing (dilate then erode) to fill the gaps, but the output was not as expected.
This might be overkill and far from a quick solution, but Level Set Methods would be well suited to extracting the presegmented car.
You can try to apply background subtraction by using cv2.createBackgroundSubtractorMOG2().
Code:
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorMOG2()
fourcc = cv2.VideoWriter_fourcc(*'MJPG')
output = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480), isColor=False)
while cap.isOpened():
    ret, frame = cap.read()
    if ret == True:
        frame = cv2.flip(frame, 1)  # horizontal flip (the flag is 0/1/-1, not degrees)
        outmask = subtractor.apply(frame)
        output.write(outmask)
        cv2.imshow('original', frame)
        cv2.imshow('Motion Tracker', outmask)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break
    else:
        break
output.release()
cap.release()
cv2.destroyAllWindows()
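On the closing idea from the question: a larger structuring element or a couple of iterations often helps. To make the dilate-then-erode mechanics concrete, here is a pure-NumPy stand-in for `cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)` on a binary 0/1 mask (in practice you would use the cv2 call with a kernel from `cv2.getStructuringElement`; the helper names here are mine):

```python
import numpy as np

def dilate(mask, k=3):
    # a pixel becomes 1 if any pixel under the k x k window is 1
    pad = k // 2
    p = np.pad(mask, pad)
    out = np.zeros_like(mask)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def erode(mask, k=3):
    # a pixel stays 1 only if every pixel under the k x k window is 1
    pad = k // 2
    p = np.pad(mask, pad, constant_values=1)
    out = np.ones_like(mask)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + mask.shape[0], dx:dx + mask.shape[1]]
    return out

def close_mask(mask, k=3):
    # closing = dilation followed by erosion: fills small holes and gaps
    # in the foreground without growing the blob overall
    return erode(dilate(mask, k), k)

# a solid blob with a one-pixel hole: closing fills the hole
blob = np.ones((5, 5), dtype=np.uint8)
blob[2, 2] = 0
closed = close_mask(blob)
```

If closing alone leaves speckle noise around the car, an opening pass (erode then dilate) or a `cv2.medianBlur` on the mask before findContours() usually cleans it up.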