Changing the video frame quality (compressing to JPEG) and rendering - python-2.7

I'm totally new to the OpenCV library and I'm implementing a simple client-server application using OpenCV and Python. The client captures video from the webcam and sends it to the server. I need to compress each video frame in order to reduce the bandwidth usage. As far as I can tell, I can save the frame as a JPEG, which is a lossy compression technique, but the provided method requires writing the frame to a JPEG file on disk. What I need is to render the low-quality (compressed) frame without writing it to an image file. What I'm currently doing is writing the frame to a JPEG and reading it back; two I/O cycles per frame is not efficient at all. Can anyone suggest a better solution?
import cv2  # plus the capture code that produces frame

cv2.imwrite('imageName.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])  # save as JPEG at quality 90
newFrame = cv2.imread('imageName.jpg')  # read the compressed image back
cv2.imshow('preview', newFrame)
cv2.waitKey(1)  # needed so imshow actually draws the window
(frame = the current image frame I captured;
newFrame = the saved image loaded back into the program)
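A minimal in-memory alternative, assuming frame comes straight from cv2.VideoCapture: cv2.imencode compresses the frame to a JPEG byte buffer (the thing you would send over the socket) and cv2.imdecode turns it back into an image, with no disk I/O involved:

import cv2

cap = cv2.VideoCapture(0)  # webcam capture (assumed source of frame)
ret, frame = cap.read()

# Compress to a JPEG byte buffer in memory; buf is a numpy array of bytes
ok, buf = cv2.imencode('.jpg', frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])

# On the receiving side, decode the buffer back into an image
newFrame = cv2.imdecode(buf, cv2.IMREAD_COLOR)
cv2.imshow('preview', newFrame)
cv2.waitKey(1)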

Related

Is there a direct way to render/encode Vulkan output as an ffmpeg video file?

I'm about to generate 2D and 3D music animations and render them to video using C++. I was thinking about using OpenGL, but I've read that, unfortunately, it is being deprecated in favour of Vulkan, which seems to offer higher performance on the GPU but is also a lower-level API, making it harder to learn. I still have almost no knowledge of either OpenGL or Vulkan, and am only beginning to learn them now.
My question is:
is there a way to encode the Vulkan render output (whether a window is shown or not) into a video file, preferably through FFmpeg? If so, how could I do that?
Requisites:
Speed: the performance cost should be close to that of encoding the video alone, not much more than that (as it would be, for example, if I had to save lossless frames as images first and then encode a video from them).
Controllable FPS and resolution: the video fps and frame resolution can be freely chosen.
Reliability, reproducibility: running code that produces the same Vulkan output twice should result in two identical videos regardless of the system, i.e. no dropped frames, no async problems (I want to sync with audio), or anything of the sort. The chosen video fps should stay fixed (e.g. 60 fps), no matter whether the computer can render 300 fps or 3 fps.
What I found out so far:
An example of taking "screenshots" of Vulkan output: it writes a PPM image at the end, which is an uncompressed binary image format.
An encoder for rendering videos from OpenGL output, which is what I want, except that it uses OpenGL.
That Khronos includes a video subset in the Vulkan API.
A video tool to decode, demux, and process videos using FFmpeg and Vulkan.
That it is possible to render the output into a buffer, without needing a screen to display it.
First of all, FFmpeg is a framework for video encoding and decoding. Second, if you have no experience with any GPU rendering API, you should start with OpenGL. Vulkan is very low-level and complicated, whereas OpenGL will be around for a very long time and will not be replaced by Vulkan any time soon.
The off-screen rendering option you mentioned is probably the best one. It doesn't really matter much, though; you can also use the image from the framebuffer. The image is just a matrix of RGBA pixels, and that data is the input for the video encoding. Take a look at how FFmpeg works: you send the rendered frame data into the encoder, which produces video packets that are stored in a video file. You need to choose a container (mp4, mkv, avi, ...) and a video codec (h265, av1, vp9, ...). You can of course implement a frame limiter and render the scene at a constant framerate, or just pick frames at a constant timestep.
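A minimal sketch of that frame-to-encoder-to-container flow, written in Python with the PyAV bindings rather than the C API (the codec, container, resolution, and framerate below are arbitrary choices, and the black test frames stand in for real renders):

import av
import numpy as np

container = av.open('out.mp4', mode='w')           # container choice: mp4
stream = container.add_stream('libx264', rate=60)  # codec choice: h264 at 60 fps
stream.width, stream.height = 1280, 720
stream.pix_fmt = 'yuv420p'

for i in range(120):                               # two seconds of frames
    rgb = np.zeros((720, 1280, 3), dtype=np.uint8) # stand-in for a framebuffer readback
    frame = av.VideoFrame.from_ndarray(rgb, format='rgb24')
    for packet in stream.encode(frame):            # the encoder emits packets...
        container.mux(packet)                      # ...which are written into the container

for packet in stream.encode():                     # flush any buffered packets
    container.mux(packet)
container.close()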
The performance problem arises when you transfer data between RAM and GPU memory, for example when downloading the rendered image from the buffer and passing it to a CPU encoder. The optimal approach would therefore be Vulkan with the new video extension, feeding the rendered frames directly into the hardware-accelerated encoder without any transfers out of GPU memory. You can also run the encoder in a separate thread so that it works asynchronously.
But honestly, it's not trivial. The simplest (non-realtime) way for you to create a video from a 3D render would be to:
Create a fixed-FPS game loop
Take screenshots of the scene by downloading the framebuffer data in OpenGL or Vulkan
Feed the frames to the ffmpeg binary to create a video file (a sketch of this step follows below)
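For that last step, here is a minimal Python sketch, assuming the loop hands you each frame as raw RGBA bytes at a fixed resolution (render_frame and total_frames are hypothetical placeholders): the frames are piped straight into the ffmpeg binary, which encodes and muxes them.

import subprocess

W, H, FPS = 1280, 720, 60  # placeholder resolution and fixed framerate

ffmpeg = subprocess.Popen([
    'ffmpeg', '-y',
    '-f', 'rawvideo', '-pix_fmt', 'rgba',  # raw frames, framebuffer layout
    '-s', '%dx%d' % (W, H), '-r', str(FPS),
    '-i', '-',                             # read the frames from stdin
    '-c:v', 'libx264', '-pix_fmt', 'yuv420p',
    'out.mp4',
], stdin=subprocess.PIPE)

for i in range(total_frames):              # fixed-FPS loop; total_frames assumed
    rgba = render_frame(i)                 # hypothetical: returns W*H*4 bytes of pixels
    ffmpeg.stdin.write(rgba)

ffmpeg.stdin.close()
ffmpeg.wait()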
Another hack would be to use screen-recording software (OBS, Fraps, etc.) to create the video from your 3D app.

How to use the D3D11 video processor for rendering efficiently

I am writing a media player on Windows with C++ and the D3D11 APIs. I decode the video frames on the GPU, and I convert the NV12 frames to RGB for swap-chain presentation using VideoProcessorBlt. The images are displayed in a window successfully.
But when I open the Windows Task Manager, it seems that the 3D, Video Decode, and Video Processing engines of the GPU are all in use, with the utilization of all three staying at about 7%.
By contrast, the Chrome and Edge browsers can play an MP4 file using only the Video Decode and Video Processing engines; the utilization of the 3D engine stays at 0%.
How do they implement this? I am wondering how to render a frame efficiently. Thanks.

What is the best way to upload video frames generated by OpenCV to the GPU?

I am currently working on simulating a DVS camera on an input video file.
I currently read the video frame by frame and send each frame to the GPU to do some computation on it.
What I'm interested in doing is sending all the frames (or at least as many as fit in GPU memory) to the GPU first, and then handling all the computation.
I am using Mat to store the data and upload it to the GPU.
Maybe an array of Mats could be sent to the GPU? But I don't know how to do that.
Any help or hint would be appreciated.
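One way to do it, sketched below in Python and assuming an OpenCV build with CUDA support (the input file and frame cap are placeholders): keep a list of cv2.cuda_GpuMat objects, one uploaded frame each, and stop when you hit your chosen memory budget.

import cv2

cap = cv2.VideoCapture('input.avi')  # hypothetical input video
MAX_FRAMES = 500                     # crude stand-in for a GPU memory budget

gpu_frames = []                      # GpuMats resident in GPU memory
while len(gpu_frames) < MAX_FRAMES:
    ret, frame = cap.read()
    if not ret:
        break
    g = cv2.cuda_GpuMat()            # allocate device memory
    g.upload(frame)                  # host-to-device copy of this Mat
    gpu_frames.append(g)

# run the per-frame computation over gpu_frames here, e.g. with cv2.cuda functions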

Modifying CISCO openh264 to take image frames and output compressed frames

Has anyone tried to modify the CISCO openh264 library to take JPEG images as input and compress them into P- and I-frames (output as individual frames, NOT a video), and similarly to modify the decoder to take compressed P- and I-frames and produce uncompressed frames?
I have a camera looking at a static scene, taking a picture (1280x720) every 30 seconds. The scene is almost static. Currently I am using JPEG compression to compress each frame individually, which results in an image size of ~270 KB. This compressed frame is transferred over the internet to a storage server. Since there is very little motion in the scene, the predicted ('P') frames will be very small (I think ~20-50 KB), so it would be far more cost-effective to transmit them over the internet instead of JPEG images.
Can anyone point me to a project, or explain how to proceed with this task?
You are describing exactly what a codec does: it takes images and compresses them. Their relationship in time is irrelevant to the compression step. The decoder then decides how to display them, or simply writes them to disk. You don't need to modify openh264; what you want to do is exactly what it is designed to do.
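To illustrate the answer: you can test the size savings without touching the openh264 C API at all, for example by driving an H.264 encoder from Python through the ffmpeg CLI (the filename pattern and GOP length below are placeholder choices):

import subprocess

# Encode a numbered sequence of JPEG stills into a raw H.264 stream.
# A long GOP (-g) means one I-frame followed by many small P-frames.
subprocess.check_call([
    'ffmpeg', '-y',
    '-framerate', '1',      # treat each still as one frame of "video"
    '-i', 'frame%04d.jpg',  # hypothetical input filename pattern
    '-c:v', 'libx264',
    '-g', '300',            # I-frame interval: mostly tiny P-frames in between
    'scene.h264',           # Annex-B elementary stream, one compressed frame per picture
])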

Writing variable-framerate videos in OpenCV

The steps I follow for writing a video file in OpenCV are as follows:
CvVideoWriter *writer = cvCreateVideoWriter(fileName, CV_FOURCC('M','J','P','G'), frameRate, frameSize, 1); // create the video writer (the fourcc selects the codec)
cvWriteFrame(writer, frame); // write one frame
cvReleaseVideoWriter(&writer); // release the video writer
The above code snippet writes at a fixed frame rate. I need to write out variable-framerate videos. The approach I had used earlier with libx264 involved writing an individual timestamp to each frame.
So, the question is: how do I write timestamps to a frame in OpenCV, and what is the specific API? More generally, how do I create variable-framerate videos?
I don't think it is possible to do this with OpenCV directly without modifying its code to expose access under the hood. You would need to use a different library such as libvlc, using its imem module to get your raw RGB frames from OpenCV into a file. This link provides an example using imem with raw images loaded from OpenCV; you would just need to change the :sout options to save to the file you want using your preferred codec.
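If true VFR is not strictly required, one common approximation (sketched below in Python; frames, width, and height are assumed to come from your capture code, with frames a list of (timestamp_seconds, image) pairs in increasing order) is to write a high-rate constant-FPS file and repeat each frame for its real duration:

import cv2

# frames, width, height are assumed to be provided by the capture code
TICK_FPS = 120.0  # fine-grained fixed rate of the output file
writer = cv2.VideoWriter('out.avi',
                         cv2.VideoWriter_fourcc(*'MJPG'),
                         TICK_FPS, (width, height))

for (t, img), (t_next, _) in zip(frames, frames[1:]):
    repeats = max(1, int(round((t_next - t) * TICK_FPS)))
    for _ in range(repeats):  # hold each frame for its real duration
        writer.write(img)

writer.write(frames[-1][1])   # the last frame has no successor; write it once
writer.release()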