How would I enable the accumulation buffer (glAccum) in SFML? It doesn't appear to exist by default (glGetIntegerv(GL_ACCUM_BLUE_BITS) returns 0). Is this even possible?
Not with SFML 1.6 or 2.0. And I wouldn't hold my breath on them adding such a feature anytime soon. They were incredibly resistant to the simple suggestion of allowing sRGB framebuffers.
I'd suggest switching to Allegro 5, as they provide accumulation buffer support.
I have used OpenGL pretty exclusively for all my rendering, to the point that I'm unaware of any other way to write pixels to a window unfortunately.
This is a problem because my current project is a work tool that emulates an LCD display (pixel perfect, 2D, very few pixels are touched each frame, all 'drawing' can be done with memcpy() to a pixel buffer). I feel that OpenGL might be too heavy for this, but I could absolutely be wrong in that assumption.
My goal is to borrow as little CPU time as possible. What's the best way to draw pixels to a window, in this limited way, on a typical modern machine running Windows 10, circa 2019? Is OpenGL suited to this type of rendering, or should I adopt another rendering method in this case? And if so, what would that method be?
edit:
I should also mention that OpenGL is already set up and available to me. If rendering textured triangles with an optimal setup is the fastest method, I can already do that. Anything that merely acts as an API over OpenGL or DirectX will likely be worse in my case.
edit2:
After some research, and thanks to the comments, I think I may just use OpenGL with Pixel Buffer Objects to optimize pixel uploads and keep rendering inexpensive.
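For reference, a minimal sketch of that PBO streaming approach (assuming a GL 2.1+ context with pixel buffer object support; the texture, buffer, and size names here are placeholders, not part of the original question):

#include <GL/glew.h>
#include <cstring>

void uploadFrame(GLuint pbo, GLuint tex, const void *pixels, int W, int H)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);

    // orphan the old storage so the driver doesn't stall, then map and memcpy
    glBufferData(GL_PIXEL_UNPACK_BUFFER, W * H * 4, nullptr, GL_STREAM_DRAW);
    void *dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    std::memcpy(dst, pixels, W * H * 4);
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    // with a PBO bound, the data pointer is an offset into the buffer,
    // so this upload can proceed asynchronously
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, W, H,
                    GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
    // ... then draw a single textured quad covering the window ...
}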
I want to be able to render into an OpenGL render window using DirectX, because the features I'm after are only supported in DirectX.
I heard a few years ago that this was possible, and I'm hoping it still is.
I'd imagine it will involve pointing DirectX to the correct part of VRAM and the correct depth buffer.
Also a tutorial or simply an explanation would be extremely useful.
At least NVIDIA has the NV_DX_interop extension, which lets you use Direct3D 9 buffers/textures/surfaces directly as OpenGL buffers/textures/renderbuffers (i.e. the other way around). But I don't have any experience with this, and I don't know if it is widely supported or actually works well.
It would be more interesting to know which features you think are only available in Direct3D. Maybe we can show you how to achieve them with OpenGL, as there are not many features (if any) that are available in Direct3D and not in OpenGL. Although if you have an ATI card, being available and actually working correctly can sometimes be two separate things.
Mixing OpenGL and Direct3D will not work, and AFAIK it never did. May I ask which features of Direct3D you require that OpenGL doesn't offer?
You can, at least on NVIDIA: check NV_DX_interop. However, every DirectX feature is supported in OpenGL, as OpenGL is more raw/low-level than DirectX; only the way it is implemented may differ. Again, tell us WHICH feature of DirectX you want and I can give you hints on how to reimplement it.
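As a rough illustration, here is a minimal sketch of the WGL_NV_DX_interop path (this assumes a D3D9 device and a current OpenGL context already exist, and that your loader, e.g. GLEW's wglew.h, exposes the wgl*DX*NV entry points; error checking is omitted and the surface names are placeholders):

#include <GL/glew.h>
#include <GL/wglew.h>
#include <d3d9.h>

void renderD3DSurfaceIntoGL(IDirect3DDevice9 *d3dDevice,
                            IDirect3DSurface9 *d3dColorSurface)
{
    // tie the D3D device to the current GL context
    HANDLE interopDevice = wglDXOpenDeviceNV(d3dDevice);

    // expose the D3D surface as a GL renderbuffer
    GLuint rb;
    glGenRenderbuffers(1, &rb);
    HANDLE interopObject = wglDXRegisterObjectNV(
        interopDevice, d3dColorSurface, rb,
        GL_RENDERBUFFER, WGL_ACCESS_READ_WRITE_NV);

    // lock before touching the shared resource from the GL side
    wglDXLockObjectsNV(interopDevice, 1, &interopObject);
    //   ... attach 'rb' to an FBO and draw with OpenGL here ...
    wglDXUnlockObjectsNV(interopDevice, 1, &interopObject);

    // cleanup
    wglDXUnregisterObjectNV(interopDevice, interopObject);
    glDeleteRenderbuffers(1, &rb);
    wglDXCloseDeviceNV(interopDevice);
}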
One specific example of a difference between OpenGL and Direct3D, which is something I am researching myself: DirectShow (part of DirectX) has video capture filters for my Panasonic DVCPRO-HD based cameras. I would like to use the live streams collected via that API as inputs into OpenGL libraries. One such library is OpenFrameWorks, which is OpenGL based. I am looking to see how efficient I can make this transfer. The existing OpenFrameWorks video API uses a slightly older, deprecated DirectShow API, and a brute-force strategy for getting pixels from DirectShow over to OpenFrameWorks.
By this I mean: does Cairo draw lines, shapes, and everything else using OpenGL-accelerated primitives or not? And if not, is there a library that does?
The OpenGL backend certainly accelerates some functions. But there are many it can't accelerate. The fact that it's written against GL 2.1 (and thus can't use more advanced features of 3.x or 4.x hardware) means that there is a lot that it simply cannot accelerate.
If you are willing to limit yourself to NVIDIA hardware, NVIDIA just came out with the NV_path_rendering extension, which provides a lot of the 2D functionality you would find with Cairo. Indeed, it's possible that you could write a Cairo backend for it. The path rendering extension is only available on GeForce 8xxx hardware and above.
It's nifty in that it's focused on the vertex pipeline. It doesn't do things like gradients or colors or whatever. That's good, because it still allows you the use of a fragment shader. Which means you get to do pretty much whatever you want ;)
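For a feel of how that works, here is a minimal sketch of NV_path_rendering's "stencil, then cover" idiom (this assumes a context exposing GL_NV_path_rendering and a stencil buffer; the SVG path string is just an example):

#include <cstring>

GLuint path = glGenPathsNV(1);
const char *svg = "M 10,10 L 90,10 L 50,90 Z";   // a simple triangle
glPathStringNV(path, GL_PATH_FORMAT_SVG_NV, (GLsizei)std::strlen(svg), svg);

glEnable(GL_STENCIL_TEST);

// 1) rasterize the path's winding counts into the stencil buffer
glStencilFillPathNV(path, GL_COUNT_UP_NV, 0xFF);

// 2) cover the path: your current fragment shader shades the covered pixels,
//    which is why gradients, textures, etc. stay under your control
glStencilFunc(GL_NOTEQUAL, 0, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_ZERO);
glCoverFillPathNV(path, GL_BOUNDING_BOX_NV);

glDeletePathsNV(path, 1);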
Cairo is designed to have a flexible backend for rendering. It can use OpenGL for rendering, though support is still listed as "experimental" at this point. For details, see using cairo with OpenGL.
It can also output to the X Window System, Quartz, Win32, image buffers, PostScript, PDF, SVG, and more.
How can you move 2D objects around in OpenGL? What is the mechanism? E.g., do we need to control frames?
An example code snippet would be welcome!
The short answer is that you move an object by modifying the modelview matrix. This can be done in several ways, depending on your coding preferences and which version of OpenGL you have available.
Here's some partial code to get you started in the right direction:
The following is deprecated in OpenGL 3.0+ (but even then it is still available via the compatibility profile).
// you'll need a game loop to make some opengl calls each frame
while(!done)
{
// clear color buffers, depth buffers, etc
initOpenglFrame();
// save the current modelview matrix so the translation only affects this object
glPushMatrix();
glTranslatef(obj->getX(), obj->getY(), obj->getZ());
obj->draw();
glPopMatrix();
// move the object 0.01 units along +X each tick
obj->setX(obj->getX() + 0.01);
// flush, swap buffers, etc
finishOpenglFrame();
}
For OpenGL 3.0 and beyond, glPushMatrix(), glPopMatrix(), glTranslatef(), and the rest of the fixed-function pipeline interface are all deprecated. If you need to use OpenGL 3.0 or greater, you'll want the same type of solution, but you'll need your own implementation of matrices and a matrix stack. I think that discussion is beyond the scope of your question.
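Just to give the flavor of the modern equivalent, here is a minimal sketch using GLM for the matrix math and a uniform instead of glTranslatef (the shader program, the "model" uniform name, and the obj interface are assumptions for illustration, not part of the answer above):

#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <glm/gtc/type_ptr.hpp>

// inside the game loop, replacing the glPushMatrix/glTranslatef block:
glm::mat4 model = glm::translate(glm::mat4(1.0f),
                                 glm::vec3(obj->getX(), obj->getY(), obj->getZ()));
glUseProgram(shaderProgram);                               // your own shader program
glUniformMatrix4fv(glGetUniformLocation(shaderProgram, "model"),
                   1, GL_FALSE, glm::value_ptr(model));
obj->draw();                                               // issues glDrawArrays/Elements

// move the object each tick, exactly as before
obj->setX(obj->getX() + 0.01f);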
For further reading, and to get started with OpenGL, I'd recommend you check out nehe.gamedev.net. If you would like to get started with OpenGL 3.x, then I'd recommend the OpenGL SuperBible, 5th Edition. Great book!
I'm having problems mipmapping textures on different hardware. I use the following code:
char *exts = (char *)glGetString(GL_EXTENSIONS);
if(strstr(exts, "SGIS_generate_mipmap") == NULL){
// use gluBuild2DMipmaps()
}else{
// use GL_GENERATE_MIPMAP
}
But some cards report that GL_GENERATE_MIPMAP is supported when it's not, so the card tries to read memory from where the mipmap is supposed to be and ends up rendering other textures into those mip levels.
I tried glGenerateMipmapEXT(GL_TEXTURE_2D), but it makes all my textures white; I enabled GL_TEXTURE_2D before calling that function (as instructed).
I could just use gluBuild2DMipmaps() for everyone, since it works, but I don't want to make new cards load 10x slower because there are two users with really old cards.
So how do you choose the mipmapping method correctly?
glGenerateMipmap has been core functionality (not an extension) since OpenGL 3.0.
You have the following options:
Check the OpenGL version; if it is at least the first version that supports glGenerateMipmap, use glGenerateMipmap.
(I'd recommend this one) OpenGL 1.4..2.1 supports glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE), which will generate mipmaps from the base level (see the sketch below). This became deprecated in OpenGL 3, but you should still be able to use it.
Use GLEE or GLEW and use glewIsSupported / gleeIsSupported to check for the extension.
Also, instead of relying on extensions, I think it is easier to stick with the OpenGL specification. A lot of hardware supports OpenGL 3, so you should be able to get most of the required functionality (shaders, mipmaps, framebuffer objects, geometry shaders) as part of the OpenGL specification rather than as extensions.
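As a rough sketch, both paths at texture creation time might look like this (this assumes GLEW for the version check; pixels, w, and h are placeholders):

#include <GL/glew.h>

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

if (GLEW_VERSION_3_0) {
    // core path: upload the base level, then generate the chain explicitly
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glGenerateMipmap(GL_TEXTURE_2D);
} else {
    // OpenGL 1.4..2.1 path: ask the driver to generate mipmaps on upload
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}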
If drivers lie, there's not much you can do about it. Also remember that glGenerateMipmapEXT is part of the GL_EXT_framebuffer_object extension.
What you are doing wrong is checking for the SGIS_generate_mipmap extension and then using GL_GENERATE_MIPMAP, since that enum belongs to core OpenGL, but that's not really the problem here.
The issue you describe sounds like a very nasty OpenGL implementation bug; I would work around it by using gluBuild2DMipmaps on those cards (keeping a list and checking at startup).