Resize OpenGL PBOs or recreate the OpenGL context

I am creating a video player application and I was wondering what the optimal way to handle a media change would be.
For now I am using 2x PBOs as explained here: OpenGL Pixel Buffer Object (very awesome website, by the way) and some textures. In order to keep perfect colour and aspect ratio, I need my PBOs and my textures to be the same size as my video clip. (The resize is done only once, during the shader stage.)
So basically, if the current media is 1920x1080 but the next one on the playlist is 1280x720, what should I do? I am thinking of two solutions right now:
Resizing PBOs and Textures on the fly
Recreating everything, my whole OpenGLPreviewWidget
I know solution 1 is included in solution 2, but should I recreate the OpenGL context, window, etc., or, because that's too slow, would resizing alone do it?

Creating a new context is a fairly heavyweight operation, at least compared to most other things you do in OpenGL. So unless you have a good reason to need a new context, I would avoid it.
Re-allocating the PBOs and textures is easy enough. As you already pointed out, you'll end up doing this in either case. If that alone is enough, I think that's the way to go.
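A sketch of what the re-allocation decision might look like, with the actual GL calls shown as comments (the helper names and the RGBA8 assumption are mine, not from the question):

```python
# Hedged sketch: decide whether the PBOs/texture must be re-allocated when
# the playlist advances to a clip with different dimensions. The GL calls in
# the comments (glBufferData with a null pointer to resize the PBO storage,
# glTexImage2D to resize the texture) are the usual re-allocation path.

BYTES_PER_PIXEL = 4  # assuming RGBA8 frames

def plan_media_change(old_size, new_size):
    """Return (needs_realloc, pbo_byte_size) for the new clip."""
    if old_size == new_size:
        return False, old_size[0] * old_size[1] * BYTES_PER_PIXEL
    w, h = new_size
    size = w * h * BYTES_PER_PIXEL
    # For each of the two PBOs:
    #   glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo)
    #   glBufferData(GL_PIXEL_UNPACK_BUFFER, size, None, GL_STREAM_DRAW)
    # And for the texture:
    #   glBindTexture(GL_TEXTURE_2D, tex)
    #   glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
    #                GL_RGBA, GL_UNSIGNED_BYTE, None)
    return True, size
```

For the example in the question, going from 1920x1080 to 1280x720 would re-allocate each PBO at 1280 * 720 * 4 = 3,686,400 bytes; a same-size media change would skip the re-allocation entirely.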

Related

How to best render to a window, when pixels could be written directly to display buffer?

I have used OpenGL pretty exclusively for all my rendering, to the point that I'm unaware of any other way to write pixels to a window, unfortunately.
This is a problem because my current project is a work tool that emulates an LCD display (pixel perfect, 2D, very few pixels are touched each frame, all 'drawing' can be done with memcpy() to a pixel buffer) and I feel that OpenGL might be too heavy for this, but I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10 circa 2019? Is OpenGL suited for this type of rendering, or should I adopt another rendering method? And if so, what would that method be?
edit:
I should also mention that OpenGL is already available to me. If rendering textured triangles with an optimal setup is the fastest method, then I can already do that. Anything that just acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
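The double-PBO upload idea mentioned above boils down to simple ping-pong bookkeeping: while the texture is updated from one PBO, the CPU memcpy()s the next frame into the other, so the two transfers overlap. A hedged sketch of just that bookkeeping, with the GL calls indicated in comments:

```python
# Sketch of the classic double-PBO streaming pattern. Only the index
# bookkeeping is real code here; the GL calls are shown as comments.

class DoublePBO:
    def __init__(self):
        self.index = 0  # PBO the texture will be updated from this frame

    def next_frame(self):
        """Advance one frame; return (upload_from, write_into) PBO indices."""
        upload_from = self.index
        write_into = (self.index + 1) % 2
        # glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[upload_from])
        # glTexSubImage2D(..., None)          # texture <- PBO (DMA)
        # glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbos[write_into])
        # ptr = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY)
        # memcpy(ptr, next_frame_pixels)      # CPU fills the other PBO
        # glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER)
        self.index = write_into
        return upload_from, write_into
```

Each call swaps the roles, so frame N uploads from PBO 0 while PBO 1 is filled, frame N+1 the reverse, and so on.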

Supersampling AA with PyOpenGL and GLFW

I am developing an application with OpenGL + GLFW, with Linux as the target platform.
The default rasterizing has VERY strong aliasing. I have implemented FXAA on top of my pipeline and I still get pretty strong aliasing. Especially when there's some kind of animation or movement, the edges of meshes flicker. This literally renders the whole project useless.
So I thought I would also add supersampling, and I have been trying to implement it for two weeks already and still can't make it work. I'm starting to think it's not possible with the combination PyOpenGL + GLFW + Ubuntu 18.04.
So, the question is: can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from the different passes to the hard drive, so I thought I would do something like this:
Render the image at 2x/3x resolution to a texture.
Save the texture buffer to an array.
Get the average pixel value from each 2x2/3x3/4x4 block of this array.
Save it to the hard drive.
Obviously, it's going to be slower than multisampling with an OpenGL extension and require more memory, but I don't need high fps and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I'd be glad of any advice.
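The averaging step in the list above is a plain box filter, which can be done in one reshape in NumPy. A hedged sketch, assuming the image is a (height, width, channels) array and that the supersampled dimensions are exact multiples of the target size (as they are for a 2x/3x render):

```python
import numpy as np

def box_downsample(img, factor):
    """Average each factor x factor block of a (H, W, C) image.

    Assumes H and W are divisible by factor, which holds when the
    supersampled render is an exact 2x/3x multiple of the target size.
    """
    h, w, c = img.shape
    blocks = img.reshape(h // factor, factor, w // factor, factor, c)
    return blocks.mean(axis=(1, 3)).astype(img.dtype)

# Example: a 4x4 single-channel ramp averaged down to 2x2.
img = np.arange(16, dtype=np.uint8).reshape(4, 4, 1)
small = box_downsample(img, 2)
```

At 480x640 with a 3x render this is a trivial amount of work per frame, so the CPU-side averaging should not be the bottleneck compared to reading the texture back.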

SpriteKit auto-generated atlases sizes are not powers of 2

So, I'm working on a project that has some big textures, and recently I decided to split them into different atlases by scene, so that when navigating through scenes SpriteKit can actually get rid of unused textures
(since I cannot control memory usage manually, I hope SpriteKit is smart enough to know when a texture atlas is not being used at all and release it if required).
Now, after making this change I went to take a look at the resulting atlases, and to my surprise their sizes aren't powers of 2, which was one of the first things you learned in Cocos2D (i.e. a 540x540 texture would actually end up as 1024x1024 in OpenGL, so it makes more sense to make it 1024x512 if possible, or even fill it with as many sprites as possible so as not to waste memory).
Is this the same in SpriteKit, or any idea how it actually works? Maybe it'd make more sense not to split the atlases, since I'm going to end up using the same or even more memory with the awful auto-generated atlases...
This info is looooong outdated. Textures no longer need to be strictly power-of-two (POT); the exception is PVR-compressed textures, which still have to be POT. NPOT textures have been supported since the iPhone 3GS.
These days it is recommended not to use POT textures when you don't have to, in order to conserve memory.
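To put a number on the 540x540 example from the question, here is a small sketch of what POT padding costs, assuming an uncompressed RGBA8 texture (4 bytes per pixel):

```python
# Illustrates the memory cost of power-of-two padding for the 540x540
# example above, assuming uncompressed RGBA8 (4 bytes per pixel).

def next_pow2(n):
    """Smallest power of two >= n."""
    p = 1
    while p < n:
        p *= 2
    return p

def texture_bytes(w, h, pot=False, bpp=4):
    if pot:
        w, h = next_pow2(w), next_pow2(h)
    return w * h * bpp

npot = texture_bytes(540, 540)            # 540*540*4   = 1,166,400 bytes
pot  = texture_bytes(540, 540, pot=True)  # 1024*1024*4 = 4,194,304 bytes
```

Padding 540x540 up to 1024x1024 costs roughly 3.6x the memory, which is exactly why NPOT textures are recommended when the hardware supports them.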

QGraphicsScene: OpenGL with fallback

I am writing an application that must display images, and potentially loads of them. So I was wondering if there is a proper way of having a QGraphicsScene use OpenGL, and in case it fails, use a software renderer.
I've read the documentation, but what if setting the viewport fails?
You're talking about more than a gigabyte of textures. OpenGL by itself is of no help here, and neither is a raw QGraphicsScene. You'll need to dynamically cache the images, ideally with prediction based on scrolling direction and speed. You'll need a coupling layer between each view and the scene to keep the scene populated with images that are visible or will soon be visible in each view. Once you do that, OpenGL may help, but you absolutely need to profile things and prove to yourself that it helps. Even without OpenGL you can get very decent performance.
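The dynamic-cache-with-prediction idea can be sketched as an LRU cache plus a scroll-direction prefetch. Everything here is an illustrative assumption, including the `load_image` helper, which in a real application would decode the file into a QImage:

```python
from collections import OrderedDict

# Hedged sketch of the suggested caching layer: an LRU cache of decoded
# images plus a simple prefetch in the current scrolling direction.

def load_image(index):
    """Hypothetical loader; stands in for decoding a file into a QImage."""
    return f"pixels-{index}"

class ImageCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()

    def get(self, index):
        if index in self.cache:
            self.cache.move_to_end(index)       # mark as recently used
        else:
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least recently used
            self.cache[index] = load_image(index)
        return self.cache[index]

    def prefetch(self, index, direction, lookahead=2):
        """direction: +1 when scrolling forward, -1 when scrolling back."""
        for step in range(1, lookahead + 1):
            self.get(index + direction * step)
```

The coupling layer between view and scene would call `get` for every image entering the viewport and `prefetch` whenever the scroll direction is known, keeping resident memory bounded by `capacity` instead of the full gigabyte-plus data set.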

Convenient method for statically rendering 3d shapes into image files

My basic problem is to generate 2D renders of 3D objects, such as one could accomplish with OpenGL or DirectX. However, I have no interest in displaying the rendered objects on screen, only in generating the shaded/textured/rotated images as bitmaps (not necessarily written to disk). This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (i.e., not waste time sending the image to the screen), and would be most pleased if I could make use of hardware-accelerated rendering. Does anyone know of a convenient library or tool to help with this?
Right now I would prefer a C/C++ option; however, speed is what I'm going for, so I'm willing to deal with ASM / super-optimized anything if it gets me what I want the fastest.
What you need is a technique called "render to texture". With OpenGL you can do it very easily. Take a look at the example here: http://www.songho.ca/opengl/gl_fbo.html#example
You can do this using "render to texture"
This process is likely to be a problematic bottleneck in my design, so I would prefer to keep my solution as compact as possible (ie, don't waste the time sending the image to the screen)
If you want rendering to be fast, you want to use the GPU for this. Sending an image to the screen comes for free with those things. But reading an image back into CPU memory, that is actually quite a bottleneck. Unavoidable in your case, probably.
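A minimal offscreen render-and-read-back might look like the following PyOpenGL sketch. This is an untested outline: it assumes a current OpenGL context already exists (e.g. from a hidden GLFW window), which is also why the import is deferred into the function so the sketch can be read and loaded without PyOpenGL installed:

```python
def render_to_bitmap(width, height, draw_scene):
    """Render draw_scene() into an offscreen FBO and return raw RGBA bytes.

    Untested sketch: requires PyOpenGL and a current OpenGL context
    (e.g. from a hidden GLFW window), hence the deferred import.
    """
    from OpenGL.GL import (
        glGenTextures, glBindTexture, glTexImage2D, glGenFramebuffers,
        glBindFramebuffer, glFramebufferTexture2D, glViewport, glReadPixels,
        GL_TEXTURE_2D, GL_RGBA, GL_UNSIGNED_BYTE, GL_FRAMEBUFFER,
        GL_COLOR_ATTACHMENT0)

    # Color buffer the scene will be rendered into.
    tex = glGenTextures(1)
    glBindTexture(GL_TEXTURE_2D, tex)
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, None)

    # Framebuffer object: redirects rendering away from the screen.
    fbo = glGenFramebuffers(1)
    glBindFramebuffer(GL_FRAMEBUFFER, fbo)
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0)

    glViewport(0, 0, width, height)
    draw_scene()  # caller's GL drawing commands

    # This read-back is the expensive part mentioned above.
    pixels = glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE)
    glBindFramebuffer(GL_FRAMEBUFFER, 0)
    return pixels
```

A C/C++ version follows the same glGenFramebuffers / glFramebufferTexture2D / glReadPixels sequence shown in the songho.ca example linked above.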