I'm making kind of a graphics engine using OpenGL, but I'm having a bit of a problem: the texture is not loading properly. It loads like this:
It should show the "heart" square picture fully as a square, but part of it is missing, and it kind of vibrates whenever the frames change. I realized that if I remove the frames (on screen), the problem goes away.
Like, what is wrong with that thing?
EDIT: it appears that the problem is caused by shaders on AMD GPUs. I don't know why, so if someone knows what could be causing it, please tell me. My code is also quite long, so I don't really know which part is causing the problem...
I also tried a different method: I rendered the label on a different layer and it worked :/
I am developing an application with OpenGL + GLFW, with Linux as the target platform.
The default rasterization has VERY strong aliasing. I have implemented FXAA on top of my pipeline and I still get pretty strong aliasing, especially when there is some kind of animation or movement: the edges of meshes flicker. This practically renders the whole project useless.
So I thought I would also add supersampling, and I have been trying to implement it for two weeks already and still can't make it work. I'm starting to think it's not possible with the combination PyOpenGL + GLFW + Ubuntu 18.04.
So, the question is: can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from the different passes to the hard drive, so I thought I would do something like this:
Render the image at 2x/3x resolution to a texture.
Save the texture buffer to an array.
Get the average pixel value from each 2x2/3x3/4x4 block of this array.
Save it to the hard drive.
Obviously, it's going to be slower than multisampling with the OpenGL extension and require more memory, but I don't need high fps and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I would be glad of any advice.
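For what it's worth, step 3 is just a box filter over the read-back pixels. Here's a minimal C++ sketch of that averaging step, assuming the high-resolution image has already been read back into a float RGB array (e.g. via glReadPixels); the function name and layout are only illustrative, not my actual code:

```cpp
#include <vector>
#include <cstddef>

// Box-filter downsample of an RGB float image by an integer factor.
// `src` holds width*height*3 floats (row-major, RGB); returns the
// averaged (width/factor) x (height/factor) image.
std::vector<float> downsample(const std::vector<float>& src,
                              std::size_t width, std::size_t height,
                              std::size_t factor)
{
    const std::size_t outW = width / factor;
    const std::size_t outH = height / factor;
    std::vector<float> dst(outW * outH * 3, 0.0f);

    for (std::size_t y = 0; y < outH; ++y) {
        for (std::size_t x = 0; x < outW; ++x) {
            float sum[3] = {0.0f, 0.0f, 0.0f};
            // Accumulate the factor x factor block of source pixels.
            for (std::size_t by = 0; by < factor; ++by) {
                for (std::size_t bx = 0; bx < factor; ++bx) {
                    const std::size_t sx = x * factor + bx;
                    const std::size_t sy = y * factor + by;
                    const float* p = &src[(sy * width + sx) * 3];
                    sum[0] += p[0]; sum[1] += p[1]; sum[2] += p[2];
                }
            }
            const float inv = 1.0f / static_cast<float>(factor * factor);
            float* q = &dst[(y * outW + x) * 3];
            q[0] = sum[0] * inv; q[1] = sum[1] * inv; q[2] = sum[2] * inv;
        }
    }
    return dst;
}
```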
I am having a lot of trouble trying to follow Frank Luna's book on DirectX 11 3D programming, and I am currently up to chapter 7, section 2. I have imported the Skull model file and have begun to render it. The strange thing is that when I render it, it appears to be drawing the back faces over the front-facing ones. I am pretty sure this is what is happening, but I am putting this question forward for help and guidance on where I may be going wrong. I will edit this post to include my code if that is required to help me figure out where I am going wrong. Thanks a lot! (photos attached)
Photo - Facing the Skull, Slightly to the Left
Photo - Above the Skull, Facing Downwards
EDIT: I have set a breakpoint in my code after the first draw-call loop, and it does not show any faces drawn over the front ones, so the issue is not present on that frame; but when I continue to the next frame, that is when the problems start.
These kinds of problems are sometimes related to the setup of the CullMode in the D3D11_RASTERIZER_DESC. Try changing from D3D11_CULL_BACK to D3D11_CULL_FRONT or vice versa. Also have a look at the FrontCounterClockwise option of the D3D11_RASTERIZER_DESC.
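For reference, a rough sketch of what that setup could look like; `device` and `context` are assumed to be your existing D3D11 device and immediate context, and the exact values depend on your mesh's winding order:

```cpp
#include <d3d11.h>

// Sketch only: explicit rasterizer-state setup.
// Flip CullMode and/or FrontCounterClockwise to match the winding order
// of the skull mesh.
D3D11_RASTERIZER_DESC rd = {};
rd.FillMode              = D3D11_FILL_SOLID;
rd.CullMode              = D3D11_CULL_BACK;   // try D3D11_CULL_FRONT if it looks inside-out
rd.FrontCounterClockwise = FALSE;             // try TRUE if the model uses CCW front faces
rd.DepthClipEnable       = TRUE;

ID3D11RasterizerState* rasterState = nullptr;
if (SUCCEEDED(device->CreateRasterizerState(&rd, &rasterState)))
    context->RSSetState(rasterState);
```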
I figured out what had been causing my grief. I have been using DirectXTK's SpriteFont and SpriteBatch classes with DrawString to display an FPS counter in the top-left corner of the screen, and it must have been interfering with the ID3D11DeviceContext::DrawIndexed calls. Thank you for all your input and brainstorming!!
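For anyone hitting the same thing: SpriteBatch sets its own blend, depth-stencil, rasterizer and sampler states when it draws, so one way to guard against this is to re-apply your own states after SpriteBatch::End() and before the next DrawIndexed. A rough sketch; the state-object names are placeholders, not my actual code:

```cpp
// After spriteBatch->End(), several pipeline states have been changed.
// Re-apply the states the 3D pass relies on before drawing the skull again.
// myBlendState / myDepthStencilState / myRasterState are placeholders for
// whatever state objects the rest of the pipeline uses (nullptr restores
// the D3D11 defaults).
spriteBatch->End();   // draw the FPS counter

context->OMSetBlendState(myBlendState, nullptr, 0xFFFFFFFF);
context->OMSetDepthStencilState(myDepthStencilState, 0);
context->RSSetState(myRasterState);
// ...then IASetInputLayout / IASetVertexBuffers / DrawIndexed as usual.
```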
I apologise for the limited code in this question, but it's tied into a personal project with much of the OpenGL functionality abstracted behind classes. I'm hoping someone visually recognises the problem and can offer direction.
During the first execution of my animation loop, I'm creating a GL_R32F (format: GL_RED, type: GL_FLOAT) texture and rendering an orthographic projection of a Utah teapot to it (for the purposes of debugging this, I'm writing the same float to every fragment).
The texture, however, renders incorrectly; it should be a solid silhouette.
Re-running the program causes the patches to move around.
I've spent a good few hours tweaking things trying to work out the cause, and I've compared the code to my working shadow-mapping example, which similarly writes to a GL_R32F texture, yet I can't find a cause.
I've narrowed it down and found that this only occurs on the first render pass to the texture. This wouldn't be so much of an issue, except that I don't require more than a single render (and looping the bindFB, setViewport, render, unbindFB sequence doesn't fix it).
If anyone has any suggestions for specific code extracts to provide, I'll try to edit the question.
This was caused by a rogue call to glEnable(GL_BLEND) during an earlier stage of the algorithm.
This makes sense because I was writing to a single channel, so the alpha channel would contain random garbage, leading to a garbled texture.
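For anyone else who runs into this, a minimal sketch of what the fix amounts to; the names here are placeholders from my description above, not code from the project:

```cpp
// Make sure blending is off before rendering the silhouette into the
// single-channel GL_R32F attachment. With GL_BLEND left enabled by an
// earlier pass, the undefined alpha of the source fragments feeds into
// the blend equation and produces the patchy result.
// `silhouetteFBO`, `texWidth`, `texHeight` are placeholder names.
glBindFramebuffer(GL_FRAMEBUFFER, silhouetteFBO);
glViewport(0, 0, texWidth, texHeight);
glDisable(GL_BLEND);
glClear(GL_COLOR_BUFFER_BIT);
// ... render the teapot ...
glBindFramebuffer(GL_FRAMEBUFFER, 0);
```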
I'm playing around with OpenGL (3.3 on OS X Mavericks), and I'm getting random parts of my screen rendered into my window. I'm assuming that's probably clear evidence that I'm doing SOMETHING wrong... but what? Is it something with uninitialized values in a buffer? Am I using a buffer I didn't create? Some weird memory-management thing? Or something like that?
Sorry if the question is a bit vague- I'm just betting that this is one of those bugs that openGL vets will hear and go "Of course! That means that {insert thing I'm doing wrong}".
Here's a screenshot to give an idea of what I'm talking about:
The black circle is what I'm attempting to render; the upside-down Google logo is what I don't understand. Also, every time I run it I get different random textures.
Thanks! And I'd be happy to supply more details, I just don't know what other relevant info to include...
Thanks to @Andon M. Coleman (in the comments above), I've realized that this was simply a result of me not properly clearing the color buffer.
Specifically, my pipeline involves rendering to a texture and then blitting that texture to the screen. I WAS correctly clearing the SCREEN'S color buffer, but I never cleared the intermediate FBO's color buffer.
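In code terms, the fix looks roughly like this; `offscreenFBO` is a placeholder name for my intermediate framebuffer, not the actual identifier:

```cpp
// Clear the intermediate FBO's color buffer before rendering to it,
// not just the default framebuffer.
glBindFramebuffer(GL_FRAMEBUFFER, offscreenFBO);
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// ... render the scene into the texture ...

// Then blit / draw the texture to the default framebuffer as before.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glClear(GL_COLOR_BUFFER_BIT);
```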
When using software rendering, or any graphics card in our development office, our little coloured GL_POINTS render in exactly the colour we expect. Out in the field, some users report points rendered in the wrong colours. Getting them to turn off hardware acceleration fixed their problem, so we've been putting the whole thing down to a third-party issue and using a workaround (tiny pixel-sized rectangles whose colour remains unproblematic). The snag: we are taking a huge performance hit.
My question is, has anyone else had a similar issue, and, if so, did they come up with a way to keep their GL_POINTS and get the colour right?
I haven't encountered a similar problem, but the solution is simple: get the card your user is using and set up the same environment.
Maybe the problem is something as simple as old drivers. I don't see what else could render the wrong color.