What is a buffer in Computer Graphics - OpenGL

Give me a brief, clear definition of a buffer in computer graphics, followed by a short description of one.
Most of the definitions on the internet answer for "frame buffer", yet there are other types of buffers in computer graphics, and in OpenGL specifically.

Can someone give me a brief, clear definition of a buffer in computer graphics?
There isn't one. "Buffer" is a term that is overloaded and can mean different things in different contexts.
A "framebuffer" (one word) basically has no relation to many other kinds of "buffer". In OpenGL, a "framebuffer" is an object that has attached images to which you can render.
"Buffering" as a concept generally means using multiple, usually identically sized, regions of storage in order to prevent yourself from writing to a region while that region is being consumed by some other process. The default framebuffer in OpenGL may be double-buffered. This means that there is a front image and a back image. You render to the back image, then swap the images to render the next frame. When you swap them, the back image becomes the front image, which means that it is now visible. You then render to the old front image, now the back image, which is no longer visible. This prevents seeing incomplete rendering products, since you're never writing to the image that is visible.
You'll note that while a "framebuffer" may involve "buffering," the concept of "buffering" can be used with things that aren't "framebuffers". The two are orthogonal, unrelated.
The broadest definition of "buffer" might be "some memory that is used to store bulk data". But this would also include "textures", which most APIs do not consider to be "buffers".
OpenGL (and Vulkan), as APIs, have a stricter definition. A "buffer object" is an area of contiguous, unformatted memory which can be read from or written to by various GPU processes. This is distinct from a "texture" object, which has a specific format that is internal to the implementation. Because a texture's format is not known to you, you are not allowed to directly manipulate the bytes of a texture's storage. Any bytes you upload to it or read from it pass through an API that allows the implementation to play with them.
For buffer objects, you can load arbitrary bytes to a buffer object's storage without the API (directly) knowing what those bytes mean. You can even map the storage and access it like a regular pointer to CPU memory.
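As a minimal sketch of what this looks like in OpenGL (not part of the original answer; it assumes a 3.x context with an extension loader such as GLEW already initialized, and the names are illustrative), here a buffer object is created, filled with raw bytes, and then mapped like an ordinary CPU pointer:

```c
#include <GL/glew.h>
#include <string.h>

/* Create a buffer object, upload "size" unformatted bytes to it, and
 * map its storage so it can be touched through a plain pointer. */
void buffer_object_demo(const void *data, GLsizeiptr size)
{
    GLuint buf;
    glGenBuffers(1, &buf);
    glBindBuffer(GL_ARRAY_BUFFER, buf);

    /* The API only sees raw bytes; it does not know whether they are
     * vertices, indices, or anything else. */
    glBufferData(GL_ARRAY_BUFFER, size, data, GL_STATIC_DRAW);

    /* Map the storage and access it like regular CPU memory. */
    void *ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
    if (ptr) {
        memcpy(ptr, data, (size_t)size);  /* rewrite the same bytes */
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }

    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```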

"Buffer" you can simply think of it as a block of memory .
But you have to specify the context here because it means many things.
For example, take the OpenGL VBO concept. VBO means vertex buffer object, which we use to store vertex data. In the same way we can store many other things in buffers: indices, pixel data for textures, and so on.
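A minimal sketch of that VBO idea (my own illustration, assuming a core-profile context with a vertex array object already bound; the function name is made up):

```c
#include <GL/glew.h>

/* Store three vertex positions in a VBO and describe their layout. */
GLuint create_triangle_vbo(void)
{
    static const float triangle[] = {
        -0.5f, -0.5f, 0.0f,
         0.5f, -0.5f, 0.0f,
         0.0f,  0.5f, 0.0f,
    };

    GLuint vbo;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(triangle), triangle, GL_STATIC_DRAW);

    /* Attribute 0: 3 floats per vertex, tightly packed. */
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void *)0);
    glEnableVertexAttribArray(0);
    return vbo;
}
```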
As for the framebuffer you mentioned, it is an entirely different topic. In OpenGL or Vulkan we can create custom framebuffers, called framebuffer objects (FBOs), in addition to the default framebuffer. We can bind an FBO and draw things into it, and by attaching a texture to it, whatever we draw on the FBO ends up in that texture.
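And a rough sketch of the FBO-with-texture-attachment idea just described (again my own illustration for a GL 3.x context; error handling is kept minimal):

```c
#include <GL/glew.h>
#include <stddef.h>

/* Create an FBO whose color attachment is a texture; anything drawn
 * while the FBO is bound ends up in that texture. */
GLuint create_fbo(int width, int height, GLuint *color_tex_out)
{
    GLuint color_tex, fbo;

    glGenTextures(1, &color_tex);
    glBindTexture(GL_TEXTURE_2D, color_tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, color_tex, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle an incomplete framebuffer here */
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);  /* back to the default framebuffer */
    *color_tex_out = color_tex;
    return fbo;
}
```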
So "buffer" has many meanings here.

Related

How to sample Renderbuffer depth information and process it in CPU code, without causing an impact on performance?

I am trying to sample a few fragments' depth data that I need to use in my client code (that runs on CPU).
I tried glReadPixels() on my framebuffer object, but it turns out it stalls the render pipeline as it transfers data from video memory to main memory through the CPU, thus causing unbearable lag (please correct me if I am wrong).
I read about pixel buffer objects: we can use them as copies of other buffers and, very importantly, perform the glReadPixels() operation without stalling the pipeline, though not without compromising and using outdated information. (That's OK for me.)
But I am unable to understand how to use pixel buffers.
What I've learnt is that we need to sample data from a texture to store it in a pixel buffer. But I am trying to sample from a renderbuffer, which I've read is not possible.
So here's my problem: I want to sample the depth information stored in my renderbuffer, store it in RAM, process it and do other stuff, without causing any issues to the rendering pipeline. If I use a depth texture instead of a renderbuffer, I don't know how to use it for depth testing.
Is it possible to copy the entire Renderbuffer to the Pixelbuffer and perform read operations on it?
Is there any other way to achieve what I am trying to do?
Thanks!
glReadPixels can also transfer from a framebuffer to a standard GPU-side buffer object. If you generate a buffer and bind it to the GL_PIXEL_PACK_BUFFER target, the data pointer argument to glReadPixels is instead an offset into the buffer object. (So it should probably be 0 unless you are doing something clever.)
Once you've copied the pixels you need into a buffer object, you can transfer them or map them back to the CPU at a time convenient for you.
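A sketch of that approach (my own, not the answerer's code): the depth values are read into a pixel pack buffer, and the mapping is done later, e.g. the next frame, so the CPU does not wait on the GPU. It assumes the FBO with the depth attachment is bound for reading and that depth is fetched as 32-bit floats; the function names are illustrative.

```c
#include <GL/glew.h>
#include <string.h>

static GLuint depth_pbo;

/* Kick off an asynchronous readback of the depth buffer into a PBO. */
void start_depth_readback(int width, int height)
{
    if (!depth_pbo)
        glGenBuffers(1, &depth_pbo);

    glBindBuffer(GL_PIXEL_PACK_BUFFER, depth_pbo);
    glBufferData(GL_PIXEL_PACK_BUFFER,
                 (GLsizeiptr)width * height * sizeof(float),
                 NULL, GL_STREAM_READ);

    /* With a pack buffer bound, the last argument is an offset into the
     * buffer object, not a CPU pointer, so the call returns quickly. */
    glReadPixels(0, 0, width, height, GL_DEPTH_COMPONENT, GL_FLOAT, (void *)0);
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}

/* Later (ideally a frame or more after the readback was started),
 * map the PBO and copy the depth values into CPU memory. */
void finish_depth_readback(float *dst, int width, int height)
{
    glBindBuffer(GL_PIXEL_PACK_BUFFER, depth_pbo);
    const float *src = (const float *)glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
    if (src) {
        memcpy(dst, src, (size_t)width * height * sizeof(float));
        glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
    }
    glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);
}
```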

Can I rely on SDL_Surface::pitch being constant?

I'm working on a project which utilises SDL 1.2.15. The application constructs an SDL_Surface whose frame buffer is then retrieved via getDisplaySurface()->pixels and sent over a serial line.
I learned, that the pixel buffer pointed to by SDL_Surface::pixels is not necessarily continuous. The byte sequence might be interrupted by blocks of data which are not part of the visible image area.
That means the image is of size 320×240, but the pixel buffer could be of size, let's say, 512×240. (I imagine speedups possible due to memory alignment could be a valid reason. That's just my assumption which is not backed by actual knowledge, though.)
Question:
In my case, I happen to be lucky and the pixel buffer has exactly the dimensions of my image. Can I trust that the pixel buffer dimensions won't change?
That way I could just send the pixel buffer contents to the serial interface and wouldn't have to write code to deal with removing those invalid blocks.
SDL uses 4-byte alignment for rows, which also matches OpenGL's default alignment.
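Still, a safer pattern than assuming pitch == w * BytesPerPixel is to copy the visible bytes row by row using the pitch, so any padding is skipped. A small sketch (not part of the original answer; it assumes an SDL 1.2 surface that does not need locking, see SDL_MUSTLOCK):

```c
#include <SDL/SDL.h>
#include <string.h>

/* Copy only the visible pixels of a surface into a tightly packed
 * destination buffer, skipping any per-row padding. */
void copy_visible_pixels(const SDL_Surface *surf, unsigned char *dst)
{
    const unsigned char *src = (const unsigned char *)surf->pixels;
    const size_t row_bytes = (size_t)surf->w * surf->format->BytesPerPixel;
    int y;

    for (y = 0; y < surf->h; ++y) {
        memcpy(dst + (size_t)y * row_bytes,   /* tightly packed destination */
               src + (size_t)y * surf->pitch, /* possibly padded source row */
               row_bytes);
    }
}
```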

OpenGL: Reusing the same texture with different parameters

In my program I have a texture which is used several times in different situations. In each situation I need to apply a certain set of parameters.
I want to avoid having to create an additional buffer and essentially creating a copy of the texture for every time I need to use it for something else, so I'd like to know if there's a better way?
This is what sampler objects are for (available in core since version 3.3, or using ARB_sampler_objects). Sampler objects separate the texture image from its parameters, so you can use one texture with several parameter sets. That functionality was created with exactly your problem in mind.
Quote from ARB_sampler_objects extension spec:
In unextended OpenGL textures are considered to be sets of image data (mip-chains, arrays, cube-map face sets, etc.) and sampling state (sampling mode, mip-mapping state, coordinate wrapping and clamping rules, etc.) combined into a single object. It is typical for an application to use many textures with a limited set of sampling states that are the same between them. In order to use textures in this way, an application must generate and configure many texture names, adding overhead both to applications and to implementations. Furthermore, should an application wish to sample from a texture in more than one way (with and without mip-mapping, for example) it must either modify the state of the texture or create two textures, each with a copy of the same image data. This can introduce runtime and memory costs to the application.
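For illustration, a short sketch of how sampler objects are used (assuming GL 3.3+; the two parameter sets and names are just examples):

```c
#include <GL/glew.h>

static GLuint sampler_nearest, sampler_trilinear;

/* Create two parameter sets that can be used with the same texture. */
void create_samplers(void)
{
    glGenSamplers(1, &sampler_nearest);
    glSamplerParameteri(sampler_nearest, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glSamplerParameteri(sampler_nearest, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenSamplers(1, &sampler_trilinear);
    glSamplerParameteri(sampler_trilinear, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    glSamplerParameteri(sampler_trilinear, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

/* Bind the same texture with whichever parameter set a draw needs. */
void bind_for_draw(GLuint texture, GLuint sampler)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texture);
    glBindSampler(0, sampler);  /* overrides the texture's own sampling state */
    /* ... issue the draw call ... */
}
```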

Reducing RAM usage with regard to Textures

Currently, my app is using a large amount of memory after loading textures (~200 MB).
I am loading the textures into a char buffer, passing it along to OpenGL and then killing the buffer.
It would seem that this memory is used by OpenGL, which is doing its own texture management internally.
What measures could I take to reduce this?
Is it possible to prevent OpenGL from managing textures internally?
One typical solution is to keep track of which textures you need at a given camera position or time frame, and to load only those when you need them (as opposed to loading every single texture when the app starts). You will need a "manager" which controls the loading, unloading and binding of the respective textures, e.g. a container that associates a string (the name of the texture) with the integer texture name used with glBindTexture.
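A rough sketch of such a manager (my own illustration; load_texture_file() is a hypothetical loader that creates the GL texture and returns its name):

```c
#include <GL/glew.h>
#include <string.h>

#define MAX_TEXTURES 256

struct tex_entry { char name[64]; GLuint id; };
static struct tex_entry cache[MAX_TEXTURES];
static int cache_count = 0;

GLuint load_texture_file(const char *name);  /* hypothetical loader */

/* Return the GL texture for "name", loading it on first use only. */
GLuint get_texture(const char *name)
{
    for (int i = 0; i < cache_count; ++i)
        if (strcmp(cache[i].name, name) == 0)
            return cache[i].id;              /* already resident */

    /* Not loaded yet: load it now, because it is actually needed. */
    GLuint id = load_texture_file(name);
    if (cache_count < MAX_TEXTURES) {
        strncpy(cache[cache_count].name, name, sizeof(cache[0].name) - 1);
        cache[cache_count].id = id;
        ++cache_count;
    }
    return id;
}
```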
Another option is to reduce the overall quality/size of the textures you are using.
It would seem that this memory is used by OpenGL,
Yes
which is doing its own texture management internally.
No, not texture management. It just needs to keep the data somewhere. On modern systems the GPU is shared by several processes running simultaneously, and not all of the data may fit into fast GPU memory, so the OpenGL implementation must be able to swap data out. Fast GPU memory is not storage; it's just another cache level, much like system memory is a cache for system storage.
Also, GPUs may crash, and modern drivers reset them in situ without the user noticing. For this they need a full copy of the data as well.
Is it possible to prevent OpenGL from managing textures internally?
No, because this would either be tedious to do or break things. What you can do, however, is load only the textures you really need for drawing a given scene.
If you look through my writings about OpenGL, you'll notice that for years I have told people not to write silly things like "initGL" functions. Put everything into your drawing code. You'll go through a drawing scheduling phase anyway (sorting translucent objects far-to-near, frustum culling, etc.). That gives you the opportunity to check which textures you need and to load them. You can even go as far as loading only lower-resolution mipmap levels, so that when a scene is first shown it has low detail, and load the higher-resolution mipmaps in the background; this of course requires the minimum and maximum mip levels to be set appropriately, as either a texture or a sampler parameter.
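A sketch of that last idea (my own; the level numbers are illustrative): upload only the small mipmap levels first and restrict sampling to them via GL_TEXTURE_BASE_LEVEL, then lower the base level once the larger levels have been streamed in.

```c
#include <GL/glew.h>

/* Initially only levels 4..N of the texture are uploaded, so restrict
 * sampling to those levels; the scene shows up with low detail. */
void restrict_to_low_detail(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 4);
}

/* Later, after glTexImage2D calls have filled levels 0..3 in the
 * background, allow the full-detail levels to be sampled again. */
void enable_full_detail(GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
}
```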

Read a framebuffer texture like a 1D array

I am doing some gpgpu calculations with GL and want to read my results from the framebuffer.
My framebuffer texture is logically a 1D array, but I made it 2D to have a bigger area. Now I want to read from any arbitrary pixel in the framebuffer texture, with any given length.
That means all calculations are already done on the GPU side, and I only need to pass certain data to the CPU, data that may wrap across the border of the texture.
Is this possible? If yes is it slower/faster than glReadPixels on the whole image and then cutting out what I need?
EDIT
Of course I know about OpenCL/CUDA but they are not desired because I want my program to run out of the box on (almost) any platform.
Also, I know that glReadPixels is very slow, and one reason might be that it offers some functionality that I do not need (operating in 2D). Therefore I asked for a more basic function that might be faster.
Reading the whole framebuffer with glReadPixels just to discard everything except a few pixels/lines would be grossly inefficient. But glReadPixels lets you specify a rectangle within the framebuffer, so why not just restrict it to fetching the few rows of interest? You may end up fetching some extra data at the start and end of the first and last lines fetched, but I suspect the overhead of that is minimal compared with making multiple calls.
Possibly writing your data to the framebuffer in tiles and/or using Morton order might help structure it so that a tighter bounding box can be found and the amount of extra data retrieved minimised.
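A sketch of that suggestion (my own, assuming a W-texel-wide RGBA float framebuffer): map the 1D range onto a band of rows and read only those rows in a single call.

```c
#include <GL/glew.h>

/* Read the rows covering the 1D range [offset, offset + length) from a
 * framebuffer that is W texels wide. dst must hold W * num_rows * 4
 * floats; the caller skips the unwanted texels at the start of the
 * first row and the end of the last. */
void read_rows_for_range(int W, int offset, int length, float *dst)
{
    int first_row = offset / W;
    int last_row  = (offset + length - 1) / W;
    int num_rows  = last_row - first_row + 1;

    glReadPixels(0, first_row, W, num_rows, GL_RGBA, GL_FLOAT, dst);

    /* The wanted data starts at float index (offset - first_row * W) * 4. */
}
```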
You can use a pixel buffer object (PBO) to transfer pixel data from the framebuffer to the PBO, then use glMapBufferARB to read the data directly:
http://www.songho.ca/opengl/gl_pbo.html