OpenCV VideoCapture Partial Frame Corruption - C++

I recently started using OpenCV for a project involving reading videos. I followed online tutorials for video reading, and the video seems to be read without problems. However, when I display any frame from the video, the far right column appears to be corrupted. Here is the code I use to read and display the first frame.
#include <opencv2/opencv.hpp>
using namespace cv;

int main() {
    VideoCapture cap("6.avi");
    Mat frame;
    cap >> frame;
    imshow("test", frame);
    waitKey(0);
}
This results in a frame that looks fine for the most part, except for the far right column. See here.
I am making no modifications to the video or frames before displaying them. Can anyone help me figure out why this is happening?
Note: I'm running Ubuntu 14.04, OpenCV version 2.4.8
The full video can be found here.

Your code looks fine to me. Are you certain the frame is corrupted? Resize, maximize, or minimize the "test" GUI window and see if the right edge is still corrupted. Sometimes when displaying very small images, I've seen the right edge of the GUI window render incorrectly even though the frame itself is correct. You could also try imwrite("test.png",frame) to see whether the saved image is still corrupted; a minimal check is sketched below.
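A minimal sketch of that sanity check, assuming the same "6.avi" file and the OpenCV 2.4 API: save the first frame to disk and print the size the container reports next to the size the decoder actually delivers.

#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture cap("6.avi");
    if (!cap.isOpened()) { std::cerr << "cannot open video\n"; return 1; }

    cv::Mat frame;
    cap >> frame;

    // Compare the container's reported size with the decoded frame's size.
    std::cout << "reported: " << cap.get(CV_CAP_PROP_FRAME_WIDTH) << "x"
              << cap.get(CV_CAP_PROP_FRAME_HEIGHT) << "\n";
    std::cout << "decoded:  " << frame.cols << "x" << frame.rows << "\n";

    // Open test.png in an external viewer; if it is clean, the corruption
    // is a display artifact, not a decoding problem.
    cv::imwrite("test.png", frame);
    return 0;
}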
If this doesn't help, it is likely a codec problem. Make sure you have recent versions of OpenCV and FFmpeg.
If that still doesn't help, the video itself may be corrupted. You could try converting it into another format using ffmpeg, as in the command below.
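For example (the target codec and output name here are just an illustration; any codec your ffmpeg build supports will do):

ffmpeg -i 6.avi -c:v mpeg4 6_reencoded.avi

If the re-encoded file plays cleanly, the original stream was the problem rather than your code.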

Related

Blurry Saved Images of Detected Objects using OpenCV

I have a C++ program running on the Parrot AR.Drone version 2.0 that detects objects and then saves images of the detected objects to the controller (computer). As you may know, the AR.Drone has a 720p high-definition camera. However, the saved images are very blurry. I cannot find any OpenCV function that increases the resolution of the saved images; I believe the quality is set to 95/100 by default in OpenCV. Does anyone know a solution to this problem?
Any input or comment would be helpful.
I think you mean 95/100 JPEG quality. You can change the third parameter of cv::imwrite, as described in the OpenCV documentation:
int params[] = {CV_IMWRITE_JPEG_QUALITY, 100}; // 100 instead of the default 95
cv::imwrite("name.jpg", image, std::vector<int>(params, params + 2));
But this only increases the compression quality, not the resolution... and there shouldn't be much visible difference between 95 and 100% anyway.

64bit OpenCV with Visual Studio 2013 - bizarre behaviour for cv::namedWindow()

I am currently trying to display a stream from a uEye camera using OpenCV. For this purpose I have Visual Studio 2013 and OpenCV 2.4.9 (64-bit) at my disposal. Since things are not yet close to a release, I'm using the debug libraries shipped with OpenCV (compiled with Visual Studio 2012).
I was trying to memcpy the image data returned by the camera into a cv::Mat object. After getting a weird error about a NULL pointer (the string name passed to cvNamedWindow), I decided to check whether I could run a very basic piece of code - read a PNG image and show it in a window. Well, it's not working... My mistake is probably still in the memcpy that I use, but if you read below you will see that I have also tested a case where no camera is involved.
No matter whether I give the absolute path of my image or simply point at the file next to the EXE, I get an assertion failure from cv::imshow that the height and/or width are not > 0. One other thing struck me here - the window name was all messed up: weird symbols, blank spaces, etc. Nothing to do with the name I had assigned: "camOutput"
Further, I decided to test things by manually creating a matrix of type CV_8UC3 and filling it with black pixels. OpenCV showed the image, yet the name of the window was again messed up. This time I was able to read the following, which looks like a fragment of the "This program cannot be run in DOS mode" string embedded in every Windows executable:
n in DOS mode.$
O_o I have never seen such weird behaviour, especially when it comes to imshow, imread or namedWindow. Further, I cannot explain why imread returns an empty matrix no matter what I feed it. I tried PNG, JPEG and BMP - always the same crash.
EDIT: I have created an empty C++ project and transferred all my settings from the previous one. Now it's working. Even the memcpy for my uEye camera is fine and I can display the output in an OpenCV window. I have no idea what the problem was with my previous project. I will have to analyze it further since the issue might reoccur, so I will leave this question open.

Video player questions

Given that FFmpeg is the leading multimedia framework and most video/audio players use it, I'm wondering about some things concerning audio/video players that use FFmpeg as an intermediate layer.
I'm studying how audio/video players work and I have some questions.
I was reading the ffplay source code and saw that it handles the subtitle stream. I tried using an mkv file with a subtitle in it, and it doesn't work. I tried arguments such as -sst, but nothing happened. I was also reading about subtitles and how video files (or should I say containers?) use them. I saw that there are two ways of carrying a subtitle: hardsubs and softsubs. Roughly speaking, hardsubs are burned in and become part of the video, while softsubs are a separate stream of subtitles (I might be wrong - please correct me).
The question is: how do players handle this? When the subtitle is part of the video there's nothing to do - the video stream itself shows the subtitle - but what about softsubs? How are they handled? (I heard something about text subs as well.) How does the subtitle appear on the screen, and how can it be configured (fonts, size, colors) without encoding everything again?
I was studying some video player source codes, and some or most of them use OpenGL to render the frame, while others use a canvas of sorts (such as Qt's QWidget). Which one is the most used, and which is fastest and best? OpenGL with shaders and so on? Handling YUV or RGB? How does that work?
It might be a dumb question, but what is the format that an AVFrame holds? For example, when we want to save frames as images, first we need the frame and then we convert it - from which format are we converting? Does it change according to the video codec or is it always the same?
Most of the videos I've been handling use yuv420p. When I tried to save the frames as PNG, I needed to convert to RGB first. I did a test: I paused different players at the same frame, took screenshots, and compared them. The video players show the frames more colorfully. I tried the same with ffplay, which uses SDL (OpenGL underneath), and the colors (quality) of the frames seem really poor. What might cause this? What do they do? Is it shaders (or some kind of magic? haha).
Well, I think that's it for now. I hope you can help me with this.
If this isn't the correct place, please let me know where to ask. I haven't found a better place among the Stack Exchange communities.
There are a lot of questions in one post:
How are 'soft subtitles' handled?
The same way as any other stream:
read the packets for the subtitle stream from the container
give each packet to a decoder
use the decoded result as you wish; with most containers that support subtitles, a presentation time will be present, so all you need to do is take the text and burn it onto the image at the matching presentation time. There are many ways to draw the text on the video, with ffmpeg or another library; a minimal decoding sketch follows this list.
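A minimal sketch of the decode side, written against the FFmpeg 2.x-era API that ffplay used at the time, assuming a file whose subtitle stream libavcodec can decode (most error handling omitted); the rendering/burn-in step is left out:

extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main(int argc, char **argv) {
    av_register_all();

    AVFormatContext *fmt = NULL;
    if (argc < 2 || avformat_open_input(&fmt, argv[1], NULL, NULL) < 0) return 1;
    avformat_find_stream_info(fmt, NULL);

    // Locate the first subtitle stream and open its decoder.
    int subIdx = av_find_best_stream(fmt, AVMEDIA_TYPE_SUBTITLE, -1, -1, NULL, 0);
    if (subIdx < 0) return 1;
    AVCodecContext *ctx = fmt->streams[subIdx]->codec;
    avcodec_open2(ctx, avcodec_find_decoder(ctx->codec_id), NULL);

    AVPacket pkt;
    while (av_read_frame(fmt, &pkt) >= 0) {
        if (pkt.stream_index == subIdx) {
            AVSubtitle sub;
            int got = 0;
            // Decodes one subtitle event; sub carries the presentation
            // window (pts plus start/end display time) and the rects.
            if (avcodec_decode_subtitle2(ctx, &sub, &got, &pkt) >= 0 && got) {
                for (unsigned i = 0; i < sub.num_rects; i++) {
                    if (sub.rects[i]->type == SUBTITLE_ASS)
                        printf("%s\n", sub.rects[i]->ass);   // styled text event
                    else if (sub.rects[i]->type == SUBTITLE_TEXT)
                        printf("%s\n", sub.rects[i]->text);  // plain text event
                    // SUBTITLE_BITMAP rects would be blended onto the frame instead.
                }
                avsubtitle_free(&sub);
            }
        }
        av_free_packet(&pkt);
    }
    avformat_close_input(&fmt);
    return 0;
}

A player does exactly this, except that instead of printing it draws the text (or blends the bitmap) over every video frame whose timestamp falls inside the subtitle's display window; since the text stays text until render time, fonts, sizes and colors can be changed without re-encoding anything.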
What is the most used renderer and which one is fastest and better?
The most used depends on the underlying system. For instance, Qt only wraps native renderers, and even has an OpenGL version.
You can only be as fast as the underlying system allows. Does it support double-buffering? Can it render your decoded pixel format directly, or do you have to perform color conversion first? This topic is too broad.
"Better" depends entirely on the use case; this is also too broad.
What is the format that AVFrame returns?
It is a raw format (enum AVPixelFormat), and it depends on the codec. There is a list of YUV and RGB FOURCCs which covers most formats in ffmpeg. Programmatically you can read the AVCodec::pix_fmts array to obtain the pixel formats a specific codec supports, and convert between them with libswscale; see the sketch below.
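A minimal sketch of the usual save-as-image path, assuming a decoded AVFrame in whatever format the codec produced (e.g. yuv420p): query the frame's own format and convert it to packed RGB with libswscale. Players do roughly the same conversion, often on the GPU with shaders, which is one reason their colors can differ from a naive screenshot.

extern "C" {
#include <libavcodec/avcodec.h>
#include <libswscale/swscale.h>
#include <libavutil/imgutils.h>
}

// Convert a decoded AVFrame (whatever pixel format the codec produced)
// into a caller-provided RGB24 frame of the same dimensions.
static void to_rgb24(const AVFrame *src, AVFrame *dst) {
    dst->format = AV_PIX_FMT_RGB24;
    dst->width  = src->width;
    dst->height = src->height;
    av_image_alloc(dst->data, dst->linesize, dst->width, dst->height,
                   AV_PIX_FMT_RGB24, 1);

    SwsContext *sws = sws_getContext(
        src->width, src->height, (AVPixelFormat)src->format, // e.g. AV_PIX_FMT_YUV420P
        dst->width, dst->height, AV_PIX_FMT_RGB24,
        SWS_BILINEAR, NULL, NULL, NULL);

    sws_scale(sws, src->data, src->linesize, 0, src->height,
              dst->data, dst->linesize);
    sws_freeContext(sws);
}

From the RGB24 buffer you can then write a PNG with whatever image library you prefer.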

Jumpy video processing in OpenCV

Hello!
I have a bunch of old video files converted from old VHS tapes. The problem is, since those tapes were really old, the videos are jumpy (sometimes the bottom of one frame appears in the middle of the screen, followed by the top of the next frame).
My goal is to write something in OpenCV to automatically remove the frames where the image is not lined up properly.
My idea is to measure the difference between the previous frame and the current frame. If the video is smooth, the difference will be minimal. If a frame is jumpy, the difference will be noticeable.
My question: how can OpenCV calculate this difference between two frames?
Thanks!
I hope you know how to grab frames from a video; if not, check here. Conveniently, that same tutorial also covers measuring the similarity between two videos.
What you will learn in this tutorial:
How to open and read video streams
Two ways of checking image similarity: PSNR and SSIM
I think you can adapt it to your requirements with only small changes; the tutorial has all the information you need, and a minimal sketch of the PSNR check is shown below.
You can also check this SO question: Simple and fast method to compare images for similarity
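A minimal sketch of the PSNR-based check from that tutorial, assuming a hypothetical input file "tape.avi" and a hypothetical 20 dB threshold below which a frame counts as jumpy; tune the threshold against your own footage:

#include <opencv2/opencv.hpp>
#include <iostream>
#include <cmath>

// PSNR between two same-sized frames; higher means more similar.
static double psnr(const cv::Mat &a, const cv::Mat &b) {
    cv::Mat diff;
    cv::absdiff(a, b, diff);
    diff.convertTo(diff, CV_32F);
    diff = diff.mul(diff);
    double mse = cv::sum(diff)[0] + cv::sum(diff)[1] + cv::sum(diff)[2];
    mse /= (double)(a.channels() * a.total());
    if (mse < 1e-10) return 100.0;           // frames virtually identical
    return 10.0 * log10(255.0 * 255.0 / mse);
}

int main() {
    cv::VideoCapture cap("tape.avi");        // hypothetical input file
    cv::Mat prev, cur;
    cap >> prev;
    for (int i = 1; cap.read(cur); i++) {
        double p = psnr(prev, cur);
        if (p < 20.0)                        // assumed threshold, tune it
            std::cout << "frame " << i << " looks jumpy (PSNR " << p << ")\n";
        cur.copyTo(prev);
    }
    return 0;
}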

How do I use libavfilter to deinterlace frames in my video player software?

I'm using libavformat/libavcodec/libswscale/libavutil/libavfilter (FFmpeg-related libraries) to make a video player.
I've run into issues with interlaced videos: the fields get paired incorrectly. It always draws the previous bottom field together with the current top field, which results in things I don't want. I've tried messing about with the variables around this, and it just won't work. (I haven't found a player that plays my videos correctly - and no, you can't have them, I'm sorry.)
I managed to find a way around this by re-encoding the video with the following command:
ffmpeg -i video.mp4 -filter:v yadif -vcodec mpeg4 out.avi
Now what I'd need is directions on how to do this in C++ code, inside my video player.
I haven't found any tutorials on the matter, and the ffmpeg.c source code is just too alien to me.
A link to a tutorial would be fine; I just haven't found one.
Edit:
Also, this example was worth checking out:
https://github.com/krieger-od/imgs2video/blob/master/imgs2video.c
It's by a gentleman named Andrey Utkin.
See doc/examples/filtering.c from the FFmpeg source; a condensed sketch of the same idea, specialized to yadif, is below.
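This follows the doc/examples/filtering_video.c pattern with error handling stripped; the buffer-source parameters must of course come from your real decoder context:

extern "C" {
#include <libavfilter/avfilter.h>
#include <libavfilter/buffersrc.h>
#include <libavfilter/buffersink.h>
}
#include <cstdio>

// Build a buffer -> yadif -> buffersink graph for one video stream.
// dec is the already-opened decoder context for that stream.
static AVFilterGraph *build_yadif_graph(AVCodecContext *dec,
                                        AVFilterContext **src,
                                        AVFilterContext **sink) {
    avfilter_register_all();
    AVFilterGraph *graph = avfilter_graph_alloc();

    // The buffer source must be told the geometry/format of incoming frames.
    char args[256];
    snprintf(args, sizeof(args),
             "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:pixel_aspect=%d/%d",
             dec->width, dec->height, dec->pix_fmt,
             dec->time_base.num, dec->time_base.den,
             dec->sample_aspect_ratio.num, dec->sample_aspect_ratio.den);

    avfilter_graph_create_filter(src, avfilter_get_by_name("buffer"),
                                 "in", args, NULL, graph);
    avfilter_graph_create_filter(sink, avfilter_get_by_name("buffersink"),
                                 "out", NULL, NULL, graph);

    // Let avfilter parse "yadif" and wire it between source and sink.
    AVFilterInOut *outputs = avfilter_inout_alloc();
    AVFilterInOut *inputs  = avfilter_inout_alloc();
    outputs->name = av_strdup("in");   outputs->filter_ctx = *src;
    outputs->pad_idx = 0;              outputs->next = NULL;
    inputs->name  = av_strdup("out");  inputs->filter_ctx  = *sink;
    inputs->pad_idx = 0;               inputs->next = NULL;

    avfilter_graph_parse_ptr(graph, "yadif", &inputs, &outputs, NULL);
    avfilter_graph_config(graph, NULL);
    avfilter_inout_free(&inputs);
    avfilter_inout_free(&outputs);
    return graph;
}

// Then, for every decoded frame in your player loop:
//   av_buffersrc_add_frame_flags(src, decoded, AV_BUFFERSRC_FLAG_KEEP_REF);
//   while (av_buffersink_get_frame(sink, out) >= 0) { /* render out */ av_frame_unref(out); }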