GLFW | What is a frame?

I was reading about buffers and found this text:
When the entire frame has been rendered, the buffers need to be swapped with one another, so the back buffer becomes the front buffer and vice versa.
And also this:
The swap interval indicates how many frames to wait until swapping the buffers, commonly known as vsync.
The question is: what is a frame? And how does it differ from a buffer?

What is a frame?
This word comes from the film industry, where the images on a reel of film are referred to as frames.
In filmmaking, video production, animation, and related fields, a frame is one of the many still images which compose the complete moving picture.
And how does it differ from buffer?
A buffer is a region of computer memory allocated for storing data. The image that is generated by rendering a scene is stored in a buffer.
While a still image can be represented by a single buffer, an animation requires many buffers to be presented in a sequence (like a reel of film). Each buffer in that sequence can be considered a frame when given this context.
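In GLFW terms, the relationship looks like this. A minimal sketch, assuming a window and OpenGL context already exist; render_scene is a hypothetical placeholder for your own drawing code:

    glfwMakeContextCurrent(window);
    glfwSwapInterval(1);              /* vsync: wait one vblank per swap */
    while (!glfwWindowShouldClose(window)) {
        render_scene();               /* draw the next frame into the back buffer */
        glfwSwapBuffers(window);      /* the back buffer becomes the front buffer */
        glfwPollEvents();
    }

Each pass through the loop produces one frame; the two buffers are merely the storage that frames are rendered into and presented from.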

Related

What is a buffer in Computer Graphics

Can someone give me a brief, clear definition of a buffer in computer graphics, followed by a short description?
Most of the definitions on the internet only cover the "frame buffer", yet there are other types of buffers in computer graphics, specifically in OpenGL.
Can someone give me a brief, clear definition of a Buffer in Computer Graphics?
There isn't one. "Buffer" is a term that is overloaded and can mean different things in different contexts.
A "framebuffer" (one word) basically has no relation to many other kinds of "buffer". In OpenGL, a "framebuffer" is an object that has attached images to which you can render.
"Buffering" as a concept generally means using multiple, usually identically sized, regions of storage in order to prevent yourself from writing to a region while that region is being consumed by some other process. The default framebuffer in OpenGL may be double-buffered. This means that there is a front image and a back image. You render to the back image, then swap the images to render the next frame. When you swap them, the back image becomes the front image, which means that it is now visible. You then render to the old front image, now the back image, which is no longer visible. This prevents seeing incomplete rendering products, since you're never writing to the image that is visible.
You'll note that while a "framebuffer" may involve "buffering," the concept of "buffering" can be used with things that aren't "framebuffers". The two are orthogonal, unrelated.
The broadest definition of "buffer" might be "some memory that is used to store bulk data". But this would also include "textures", which most APIs do not consider to be "buffers".
OpenGL (and Vulkan) as an API have a more strict definition. A "buffer object" is an area of contiguous, unformatted memory which can be read from or written to by various GPU processes. This is distinct from a "texture" object, which has a specific format that is internal to the implementation. Because a texture's format is not known to you, you are not allowed to directly manipulate the bytes of a texture's storage. Any bytes you upload to it or read from it are done through an API that allows the implementation to play with them.
For buffer objects, you can load arbitrary bytes to a buffer object's storage without the API (directly) knowing what those bytes mean. You can even map the storage and access it like a regular pointer to CPU memory.
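As a concrete illustration of that last point, here is a minimal sketch of an OpenGL buffer object being filled with raw bytes and then mapped like ordinary CPU memory (assuming a context with OpenGL 1.5 or later; verts is a hypothetical vertex array, error handling omitted):

    GLfloat verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);
    /* upload arbitrary bytes; the API does not know what they mean */
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
    /* map the storage and access it through a regular pointer */
    GLfloat *p = (GLfloat *)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_WRITE);
    p[4] = 0.25f;                     /* poke the bytes directly */
    glUnmapBuffer(GL_ARRAY_BUFFER);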
"Buffer" you can simply think of it as a block of memory .
But you have to specify the context here because it means many things.
For Example In the OpenGL VBO concept. This means vertex buffer object which we use it to store vertices data in it . Like that we can do many things, we can store indices in a buffer, textures, etc.,
And For the FrameBuffer you mentioned, It is an entirely different topic. In OpenGL or Vulkan we can create custom framebuffers called frame buffer objects(FBO) apart from the default frame buffer. We can bind FBO and draw things onto it & by adding texture as an attachment we can get whatever we draw on the FBO updated to that texture.
So Buffer has so many meanings here,
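As a concrete example of that FBO setup, a sketch assuming an OpenGL 3.x context (the 512x512 size is arbitrary, completeness checks omitted):

    GLuint tex, fbo;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* empty 512x512 texture */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);   /* texture as color attachment */
    /* ... draw here: everything lands in tex ... */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);            /* back to the default framebuffer */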

Understanding buffer swapping in more detail

This is more of a theoretical question. This is what I understand regarding buffer swapping and vsync:
I - When vsync is off, whenever the developer swaps the front/back buffers, the buffer that the GPU is reading from and sending to the monitor is changed to the new one immediately, regardless of whether the old buffer was still being read (i.e. no vblank is needed).
II - When vsync is on, the buffers are not immediately swapped; they are only changed once the old buffer has been completely read (i.e. a vblank is needed).
III - Turning vsync off can boost the frame rate beyond the monitor refresh rate, but screen tearing can appear when the buffers are swapped while they are being read.
IV - Turning vsync on prevents tearing, but the monitor refresh rate limits the FPS.
Based on this I tried the following experiment: I disabled vsync, and every frame I rendered all pixels with a solid color using glClearColor + glClear, choosing a new random color per frame. I got ~2400 FPS on a 60 Hz monitor. Since I swapped the buffers every frame, and since the monitor takes 1/60 s for each full screen refresh, I expected the buffers to have been swapped roughly 40 times during each refresh (2400 / 60 = 40 swaps per refresh). Since the clear color is different every time the buffers are swapped, I expected to see a really messy image, with lots of different colors, because of the tearing. Instead, taking some screenshots, I didn't see any tearing: every pixel had the same solid color.
Could someone point out the wrong assumptions I had, and explain why I see this behavior?
Thanks in advance!
The problem was related to the window manager: a compositing window manager presents complete frames on its own schedule, which hides the tearing. I could see the expected behavior when I ran in full screen, where the compositor is bypassed.
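For anyone who wants to reproduce the experiment, a minimal GLFW sketch (initialization, includes, and teardown omitted; passing the monitor to glfwCreateWindow is what makes the window full screen):

    GLFWmonitor *mon = glfwGetPrimaryMonitor();
    GLFWwindow *window = glfwCreateWindow(1920, 1080, "tearing test", mon, NULL);
    glfwMakeContextCurrent(window);
    glfwSwapInterval(0);                    /* vsync off */
    while (!glfwWindowShouldClose(window)) {
        glClearColor(rand() / (float)RAND_MAX,
                     rand() / (float)RAND_MAX,
                     rand() / (float)RAND_MAX, 1.0f);  /* new random color per frame */
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);            /* returns immediately with vsync off */
        glfwPollEvents();
    }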

Can I rely on SDL_Surface::pitch being constant?

I'm working on a project which utilises SDL 1.2.15. The application constructs an SDL_Surface whose frame buffer is then retrieved via getDisplaySurface()->pixels and sent over a serial line.
I learned that the pixel buffer pointed to by SDL_Surface::pixels is not necessarily contiguous. The byte sequence might be interrupted by blocks of data which are not part of the visible image area.
That means the image is of size 320×240, but the pixel buffer could be of size, let's say, 512×240. (I imagine speedups possible due to memory alignment could be a valid reason. That's just my assumption which is not backed by actual knowledge, though.)
Question:
In my case, I happen to be lucky and the pixel buffer has exactly the dimensions of my image. Can I trust that the pixel buffer dimensions won't change?
That way I could just send the pixel buffer content to the serial interface and wouldn't have to write code dealing with the removal of those invalid blocks.
SDL uses 4-byte alignment for rows; this also matches OpenGL's default alignment.
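If you'd rather not depend on that, a sketch that copies row by row using SDL_Surface::pitch handles padded buffers as well (send_bytes is a hypothetical stand-in for the serial-line write):

    void send_frame(SDL_Surface *s, void (*send_bytes)(const void *, size_t))
    {
        size_t row_bytes = (size_t)s->w * s->format->BytesPerPixel;
        const Uint8 *row = (const Uint8 *)s->pixels;
        if (SDL_MUSTLOCK(s))
            SDL_LockSurface(s);
        for (int y = 0; y < s->h; ++y) {
            send_bytes(row, row_bytes);  /* only the visible bytes of this row */
            row += s->pitch;             /* pitch may be larger than row_bytes */
        }
        if (SDL_MUSTLOCK(s))
            SDL_UnlockSurface(s);
    }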

Allocating a new buffer per each frame to prevent screen tearing

When I use the SDL library to set the pixel values in the memory and update the screen, screen tearing occurs whenever the update is too fast. I don't know much about the SDL internals, but my understanding from what I see is that:
The update function returns right after signalling the graphics hardware to read the pixel data from (say) buffer1.
The next frame is painted on buffer2, and update is called again, but this is too fast and the read from buffer1 still hasn't completed.
My program doesn't know anything about the hardware and assumes that it's okay to paint again in buffer1, while that buffer is still being sent to the monitor.
The screen is torn.
This isn't a big problem when the object being painted doesn't move too fast. The screen still tears, but it is almost invisible to the human eye; still, I'd be happy if the tearing didn't occur at all. I dislike vertical sync, as it adds consistent latency to every frame.
My idea is that a new screen buffer could be allocated for each frame to be painted on. When the monitor wants to display something, it would read from the newest buffer.
Is this approach already used in practice? If I want to test my idea, what kind of low-level, cross-platform library or API should I use? SDL? OpenGL?
Do you think that updating the screen faster than the human eye can see is productive? If you really must have your engine 100% independent of the retrace, use a triple-buffer system: one buffer to display, and two buffers to update back and forth until the screen is ready for the next one. Triple is as high as you need to go, because if you finish the second back buffer first, you can simply write over the now-defunct first back buffer instead. No GPU lag, and only three buffers.
Here is a nice link describing this technique along with some warnings about using it on modern GPUs...
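The bookkeeping behind that suggestion is small. Here is a sketch of the index juggling in C, independent of any particular graphics API (single-threaded for clarity; a real renderer and scanout would need synchronization around these updates):

    typedef struct {
        int display_idx;   /* buffer the monitor is currently reading */
        int ready_idx;     /* most recently completed frame, or -1 if none */
        int render_idx;    /* buffer the renderer writes next */
    } TripleBuffer;        /* invariant: render_idx != display_idx */

    /* Renderer finished a frame: publish it, then pick the one slot that is
       neither displayed nor just published. A stale ready frame gets reused. */
    void publish_frame(TripleBuffer *tb)
    {
        int finished = tb->render_idx;
        tb->render_idx = 3 - finished - tb->display_idx;  /* indices 0,1,2 sum to 3 */
        tb->ready_idx = finished;
    }

    /* Display is ready for a new frame (e.g. at vblank): take the newest one. */
    int flip_if_ready(TripleBuffer *tb)
    {
        if (tb->ready_idx < 0)
            return 0;                 /* nothing new; keep showing the old frame */
        tb->display_idx = tb->ready_idx;
        tb->ready_idx = -1;
        return 1;
    }

The renderer never blocks (it always has a free slot to write into), and the display always picks up the newest complete frame, which is the behavior the question asks for.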

How to display real time serial port data using OpenGL C/C++

I am writing a C program that reads serial data from COM3 (the data are actually pixel intensities of a stream of video frames); once one frame has been received completely, the program reassembles the frame and displays it using OpenGL; when the next frame comes, it displays that one, so in the end it looks like video.
It seems to me that I need one thread to receive data and another thread to display, since the program must not stop receiving data.
I have finished the data receiving and frame reassembling part, but I have no idea how the display part works.
Can anybody give me any clue how to do this?...
No, you don't have to do this on different threads. Consider this pseudocode:
    /* Single-threaded loop: poll the serial port without blocking,
       then redraw using whatever complete frame arrived last. */
    while (true) {
        if (data_present())   /* non-blocking check for serial input */
            read_data();      /* append the bytes to the frame being assembled */
        display();            /* draw the most recently completed frame */
    }
From what I understood from your question, you want to present raster data on the screen. In that case, place the data in a contiguous memory buffer, create a texture from it, and render the texture on a quad (or two triangles) covering the whole screen.
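A sketch of that approach using legacy fixed-function OpenGL (frame, width, and height are assumed to be your reassembled RGB byte buffer and its dimensions; window and context creation omitted):

    /* one-time setup */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* once per received frame: re-upload the pixels, then draw a
       screen-filling quad in the default [-1, 1] clip space */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, frame);
    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();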