Recording and Saving the Screen using C++ on Windows

I'm trying to write an application that records and saves the screen in C++ on the Windows platform. I'm not sure where to start with this. I assume I need some sort of API (FFmpeg, maybe OpenGL?). Could someone point me in the right direction?

You could start by looking at the Windows Remote Desktop Protocol; maybe some programming libraries are provided for that.
I know of a product that intercepts calls into the Windows GDI DLL and uses that to store the screen-drawing activity.
A far simpler approach would be to take screenshots as often as possible and somehow minimize redundant data (parts of the screen that didn't change between frames).
If the desired output of your app is a video file (like MPEG), you are probably better off just grabbing frames and feeding them into a video encoder. I don't know how fast the encoders are these days; FFmpeg would be a good place to start.
If the encoder turns out not to be fast enough, you can try storing the frames and encoding the video file afterwards. Consecutive frames should have many matching pixels, so you could use that to reduce the amount of data stored.
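In case it helps, here is a minimal sketch of the "grab screenshots" approach using plain GDI (GetDC/BitBlt). It copies the primary display into a 32-bit BGRA buffer that you could then feed to an encoder; error handling and multi-monitor support are left out.

```cpp
#include <windows.h>
#include <vector>

// Grab one screenshot of the primary display as top-down 32-bit BGRA pixels.
std::vector<unsigned char> CaptureScreen(int& width, int& height)
{
    width  = GetSystemMetrics(SM_CXSCREEN);
    height = GetSystemMetrics(SM_CYSCREEN);

    HDC screenDC = GetDC(NULL);                        // DC for the whole screen
    HDC memDC    = CreateCompatibleDC(screenDC);       // off-screen DC we copy into
    HBITMAP bmp  = CreateCompatibleBitmap(screenDC, width, height);
    HGDIOBJ old  = SelectObject(memDC, bmp);

    BitBlt(memDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
    SelectObject(memDC, old);                          // deselect before GetDIBits

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;             // negative: top-down rows
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    std::vector<unsigned char> pixels(static_cast<size_t>(width) * height * 4);
    GetDIBits(memDC, bmp, 0, height, pixels.data(), &bmi, DIB_RGB_COLORS);

    DeleteObject(bmp);
    DeleteDC(memDC);
    ReleaseDC(NULL, screenDC);
    return pixels;                                     // BGRA pixel data
}
```

Calling this in a loop gives you the raw frames; how fast you can run it, and how you hand the frames to the encoder, is where the real work is.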

Related

Video players questions

Given that FFmpeg is the leading multimedia framework and most video/audio players use it, I'm wondering about a few things regarding audio/video players that use FFmpeg underneath.
I'm studying how audio/video players work and I have some questions.
I was reading the ffplay source code and saw that ffplay handles the subtitle stream. I tried it with an MKV file that has a subtitle track and it doesn't work; I tried arguments such as -sst but nothing happened. I was also reading about subtitles and how video files (or should I say containers?) use them. I saw that there are two ways of including a subtitle: hardsubs and softsubs. Roughly speaking, hardsubs are burned in and become part of the video, while softsubs are carried as a separate subtitle stream (I might be wrong, please correct me).
The question is: how do players handle this? When the subtitle is part of the video there's nothing to do, the video stream itself shows the subtitle, but what about softsubs? How are they handled? (I heard something about text subs as well.) How does the subtitle appear on the screen, and how can it be configured (fonts, size, colors) without encoding everything again?
I was studying some video player source code, and some, or most, of them use OpenGL to render the frame, while others use something like Qt's QWidget as a canvas. Which is the most used, and which one is fastest and better? OpenGL with shaders and so on? Handling YUV or RGB? How does that work?
It might be a dumb question, but what is the format of the AVFrame we get back? For example, when we want to save frames as images, first we need the frame and then we convert it; which format are we converting from? Does it change according to the video codec, or is it always the same?
Most of the videos I've been handling use YUV420P, and I had to convert to RGB before I could save the frames as PNG. I did a test where I paused several players at the same frame, took screenshots and compared them. The video players show the frames much more colorfully; with ffplay, which uses SDL (OpenGL), the colors (quality) of the frames seem to be really low. Why might that be? What do they do? Is it shaders (or some kind of magic? haha).
Well, I think that's it for now. I hope you can help me with that.
If this isn't the correct place, please let me know where to ask; I haven't found another place among the Stack Exchange communities.
There are a lot of questions in one post:
How are 'soft subtitles' handled?
The same way as any other stream:
Read the packets for that stream from the container.
Give each packet to a decoder.
Use the decoded result as you wish. With most containers that support subtitles, the presentation time will be present, so all you need to do is take the text and burn it onto the image at the matching presentation time. There are a lot of ways to draw the text on the video, with FFmpeg or another library; see the sketch below.
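For what it's worth, here is a rough sketch of that loop against a recent libavformat/libavcodec (the exact signatures vary a bit between FFmpeg versions, and "input.mkv" is just a placeholder path):

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main()
{
    AVFormatContext* fmt = nullptr;
    if (avformat_open_input(&fmt, "input.mkv", nullptr, nullptr) < 0) return 1;
    avformat_find_stream_info(fmt, nullptr);

    // Locate the subtitle stream and its decoder.
    const AVCodec* dec = nullptr;
    int subIdx = av_find_best_stream(fmt, AVMEDIA_TYPE_SUBTITLE, -1, -1, &dec, 0);
    if (subIdx < 0) return 1;                        // no subtitle stream

    AVCodecContext* ctx = avcodec_alloc_context3(dec);
    avcodec_parameters_to_context(ctx, fmt->streams[subIdx]->codecpar);
    avcodec_open2(ctx, dec, nullptr);

    AVPacket* pkt = av_packet_alloc();
    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == subIdx) {
            AVSubtitle sub; int got = 0;
            avcodec_decode_subtitle2(ctx, &sub, &got, pkt);
            if (got) {
                // Each rect carries either bitmap data or text/ASS markup,
                // plus the presentation window (pts + start/end display time).
                for (unsigned i = 0; i < sub.num_rects; ++i)
                    if (sub.rects[i]->ass) std::printf("%s\n", sub.rects[i]->ass);
                avsubtitle_free(&sub);
            }
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    avformat_close_input(&fmt);
    return 0;
}
```

From there, rendering is up to you: take the text (or bitmap rects), and composite it over the decoded video frame whose timestamp falls inside the subtitle's display window.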
What is the most used renderer, and which one is fastest and better?
The most used depends on the underlying system. For instance, Qt only wraps native renderers, and it even has an OpenGL version.
You can only be as fast as the underlying system allows. Does it support double-buffering? Can it render your decoded pixel format directly, or do you have to perform a color conversion first? This topic is too broad.
"Better" depends entirely on the use case; this is also too broad.
What is the format that AVFrame returns?
It is a raw format (enum AVPixelFormat), and it depends on the codec. There is a list of YUV and RGB FOURCCs which covers most of the formats in FFmpeg. Programmatically, you can read the AVCodec::pix_fmts table to obtain the pixel formats a specific codec supports.
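As an illustration (assuming `frame` is a freshly decoded AVFrame from the video stream), you could inspect its pixel format and convert it to RGB24 with libswscale roughly like this:

```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libavutil/pixdesc.h>
#include <libswscale/swscale.h>
}
#include <cstdio>
#include <vector>

// Convert a decoded frame from its native pixel format to packed RGB24.
void FrameToRgb(const AVFrame* frame)
{
    AVPixelFormat srcFmt = static_cast<AVPixelFormat>(frame->format);
    std::printf("decoded pixel format: %s\n", av_get_pix_fmt_name(srcFmt)); // e.g. yuv420p

    SwsContext* sws = sws_getContext(frame->width, frame->height, srcFmt,
                                     frame->width, frame->height, AV_PIX_FMT_RGB24,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);

    std::vector<uint8_t> rgb(static_cast<size_t>(frame->width) * frame->height * 3);
    uint8_t* dst[1]       = { rgb.data() };
    int      dstStride[1] = { frame->width * 3 };

    sws_scale(sws, frame->data, frame->linesize, 0, frame->height, dst, dstStride);
    sws_freeContext(sws);
    // `rgb` now holds packed RGB24 pixels ready to be written out as PNG/BMP.
}
```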

Writing video on memory OpenCV 2

We're currently developing some functionality for our program that needs OpenCV. One of the ideas being tossed around is the use of a "buffer" that keeps a minute of video data in memory; for every event trigger we then need to extract something like a 13-second video file from that buffer.
Currently we don't have enough experience with OpenCV to know whether this is possible. Looking at the documentation, the only functions that write to memory are imencode and imdecode, but those work on images. If we can find a way to write sequences of images to a video file that would be neat, but for now our idea is to use a video buffer.
We're also working against the OpenCV version 2 API.
TL;DR: We want to know if it is possible to write a portion of a video to memory.
In OpenCV, every video is treated as a collection of frames (images). Depending on your camera's FPS you can capture frames periodically and fill the buffer with them, while destroying the oldest frame (taken one minute earlier). A FIFO data structure can therefore be implemented to achieve your goal. Getting a 13-second sample is then easy: just jump to the frame you want and write 13 * FPS frames sequentially to a video file.
But there will be some sync and timing problems, AFAIK and as far as I've used OpenCV.
Here is the link to the OpenCV documentation about video I/O. The last chunk of code in particular is what you will use for writing.
TL;DR: There is no video; there are only sequential images with small differences between them. So you need to treat them as such.
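A rough sketch of that FIFO idea with the OpenCV 2.x API might look like the following; `eventTriggered()` is a stand-in for whatever your real trigger is, and the FPS is assumed to be fixed (real cameras drift, so you would normally timestamp frames too).

```cpp
#include <opencv2/opencv.hpp>
#include <deque>

// Hypothetical trigger; replace with your own event logic (motion, hotkey, ...).
static bool eventTriggered() { return false; }

int main()
{
    const double fps = 30.0;
    const size_t maxFrames = static_cast<size_t>(fps * 60);   // ~1 minute of frames

    cv::VideoCapture cap(0);
    if (!cap.isOpened()) return 1;

    std::deque<cv::Mat> buffer;                                // FIFO of recent frames
    cv::Mat frame;

    while (cap.read(frame)) {
        buffer.push_back(frame.clone());                       // clone: cap reuses its buffer
        if (buffer.size() > maxFrames)
            buffer.pop_front();                                // drop the oldest frame

        if (eventTriggered()) {
            size_t want  = static_cast<size_t>(fps * 13);      // ~13 seconds
            size_t start = buffer.size() > want ? buffer.size() - want : 0;

            cv::VideoWriter out("event.avi", CV_FOURCC('M','J','P','G'),
                                fps, buffer.front().size());
            for (size_t i = start; i < buffer.size(); ++i)
                out << buffer[i];                              // dump the tail of the buffer
        }
    }
    return 0;
}
```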

Best way to load in a video and to grab images using c++

I am looking for a fast way to load a video file and create images from it at certain intervals (every second, every minute, every hour, etc.).
I tried using DirectShow, but it just ran too slowly for me to open the video file, move to a certain location, grab the data, and save it out to an image, even when I disabled the reference clock. I tried OpenCV, but it has trouble opening the AVI file unless I know the exact codec information, so if there's a way to get the codec information out of OpenCV I may give it another shot. I tried FFmpeg, but I don't have as much control over it as I would wish.
Any advice would be greatly appreciated. This is being developed on a Windows box since it has to be hosted on one.
MPEG-4 is not an intra-coded format, so you can't just jump to a random frame and decode it on its own; most frames only encode the differences from one or more other frames. I suspect your decoding is slow because when you land on such a frame, several other frames it depends on have to be decoded first.
One way to improve performance would be to determine which frames are keyframes (sometimes also called 'sync points') and limit your decoding to those frames, since they can be decoded on their own.
I'm not very familiar with DirectShow's capabilities, but I would expect it has some API to expose sync points.
Also, I should mention that the QuickTime SDK on Windows is possibly another good option for decoding frames from movies. You should first check that your AVI movies play correctly in QuickTime Player. The QT SDK does expose sync points; see the section "Finding Interesting Times" in the QT SDK documentation.
ffmpeg's libavformat might work for ya... a sketch of the keyframe-seeking approach follows.
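For reference, here is a sketch of that idea with libavformat/libavcodec (using the modern send/receive decoding API; `fmt`, `ctx` and `videoIdx` are assumed to be an already-opened format context, an opened video decoder context, and the video stream index).

```cpp
extern "C" {
#include <libavformat/avformat.h>
#include <libavcodec/avcodec.h>
}

// Seek near `seconds`, land on the previous keyframe, and decode the first
// frame that comes out. The caller owns the returned AVFrame (av_frame_free).
AVFrame* GrabFrameAt(AVFormatContext* fmt, AVCodecContext* ctx,
                     int videoIdx, double seconds)
{
    AVStream* st = fmt->streams[videoIdx];
    int64_t ts = static_cast<int64_t>(seconds / av_q2d(st->time_base));

    av_seek_frame(fmt, videoIdx, ts, AVSEEK_FLAG_BACKWARD);   // previous keyframe
    avcodec_flush_buffers(ctx);                               // drop stale references

    AVPacket* pkt   = av_packet_alloc();
    AVFrame*  frame = av_frame_alloc();

    while (av_read_frame(fmt, pkt) >= 0) {
        if (pkt->stream_index == videoIdx) {
            avcodec_send_packet(ctx, pkt);
            if (avcodec_receive_frame(ctx, frame) == 0) {     // got a picture
                av_packet_unref(pkt);
                av_packet_free(&pkt);
                return frame;
            }
        }
        av_packet_unref(pkt);
    }
    av_packet_free(&pkt);
    av_frame_free(&frame);
    return nullptr;
}
```

Note that this returns the first decodable frame at or after the keyframe rather than the exact requested time, which is precisely the "limit yourself to sync points" trade-off described above.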

How might one develop a program like FRAPS?

I would like to make a program to capture video.
What is the best way to capture video?
I know C++ and I'm learning assembly. I found in my assembly book that I can get data from the video card. Would that be the best way?
I know FRAPS hooks into programs, but I would like my program to take video of the entire screen.
I would like something fast, with low memory usage if possible. A requirement is that the program must be usable on other computers, despite dissimilar hardware.
The way Fraps works, it's impossible for it to capture the entire screen (unless you're running a full-screen DirectX application, of course). You're apparently trying to emulate the functionality of CamStudio more than that of Fraps.
CamStudio is open source (here is the SourceForge page), so perhaps you could start by studying its source code? I would wager that it's not really for beginners, however.
Capturing the entire screen is simple: in short, you get the desktop device context (GetDC(NULL), or GetDC on the handle from GetDesktopWindow()) and BitBlt() it into your own bitmap.
Now you need to encode it to video, potentially full HD or more, in real time, using the best possible compression, ideally lossless because of the text and vector-graphics nature of a traditional desktop. I don't know of any good custom codec for such requirements, so I would recommend using traditional H.264 and tuning the tradeoff between quality and performance. FFmpeg is probably the most popular library for this; just check the licensing situation around H.264 encoding.
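One low-effort way to wire that up is to pipe raw BGRA frames into the ffmpeg command-line tool and let libx264 do the encoding. A sketch follows; the dimensions, frame data and options are only illustrative (in a real capture loop the frames would come from a BitBlt routine like the one earlier in this page).

```cpp
#include <cstdio>
#include <string>
#include <vector>

int main()
{
    const int width = 1920, height = 1080, fps = 30;

    // Spawn ffmpeg reading raw BGRA video from stdin and encoding to H.264.
    std::string cmd =
        "ffmpeg -y -f rawvideo -pixel_format bgra"
        " -video_size " + std::to_string(width) + "x" + std::to_string(height) +
        " -framerate " + std::to_string(fps) +
        " -i - -c:v libx264 -preset ultrafast capture.mp4";

    FILE* pipe = _popen(cmd.c_str(), "wb");            // Windows: binary write pipe
    if (!pipe) return 1;

    std::vector<unsigned char> frame(static_cast<size_t>(width) * height * 4, 0);
    for (int i = 0; i < fps * 5; ++i)                  // ~5 seconds of (blank) video
        std::fwrite(frame.data(), 1, frame.size(), pipe);  // one raw BGRA frame per write

    _pclose(pipe);
    return 0;
}
```

Using the ffmpeg executable this way also sidesteps linking against the H.264 encoder directly, though the licensing question mentioned above still applies to distribution.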

Displaying a video in DirectX

What is the best/easiest way to display a video (with sound!) in an application using XAudio2 and Direct3D9/10?
At the very least it needs to be able to stream potentially large videos and cope with the window's aspect ratio differing from the video's (e.g. by adding letterboxing), although ideally I'd like the ability to embed the video into a 3D scene.
I could of course work out a way to load each frame into a texture, discarding/reusing the textures once rendered, and play the audio separately through XAudio2. However, as well as writing a loader for at least one format, I'd also have to deal with things like synchronising the video and audio components, so hopefully there is an easier solution available, or even a ready-made free one with a suitable license (commercial distribution in binary form; dynamic linking is fine in the case of, say, the LGPL).
In the Windows SDK there is a DirectShow example for rendering video to a texture. It handles audio output too.
But there are limitations, and I can't honestly call it easy.
Have you looked at Bink Video? It's what lots of games use for video playback. It works great, and you don't have to code all that video stuff yourself from scratch.