Strange behavior with texture rendering when switching from Intel to nVidia graphic card [closed] - opengl

Closed. This question is not reproducible or was caused by typos. It is not currently accepting answers.
Closed 17 days ago.
I'm developing a little image visualizer to learn some OpenGL graphics.
I'm using Dear ImGui for the GUI.
I'm struggling with an issue that appears when I switch from the Intel graphics card to the NVIDIA graphics card.
I'm working on a Dell Precision 7560 laptop running Windows 10.
I'm rendering an image texture to a framebuffer object (FBO) and then displaying the FBO texture in an ImGui::Image widget.
If I use the Intel graphics card, everything works fine:
[texture displayed with the Intel graphics card]
but as soon as I switch to NVIDIA, the texture is rendered incorrectly:
[texture displayed with the NVIDIA graphics card]
I'm using the most basic shader code needed to deal with textures.
Has anyone experienced the same issue?
I've checked and rechecked the code and everything seems fine, so I suspect that something the Intel driver does by default has to be specified explicitly for NVIDIA.

Related

glGenerateMipmap raises GL_INVALID_OPERATION on specific platform [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 2 years ago.
I called glGenerateMipmap(GL_TEXTURE_2D) on a texture that is used as an FBO rendering target. It works well on several Windows computers, but when I test it on a Linux laptop with an Intel HD 3000 graphics chip, it raises a GL_INVALID_OPERATION error.
I checked the program with AMD CodeXL. When the error is raised, GL_TEXTURE_BINDING_2D has the correct value, and the bound texture has the correct properties:
# of mipmaps: 10 levels
dimensions: 600x520
internal format: GL_DEPTH_STENCIL
GL_TEXTURE_MIN_FILTER: GL_LINEAR_MIPMAP_LINEAR
GL_TEXTURE_MIN_LOD: -1000
GL_TEXTURE_MAX_LOD: 1000
GL_TEXTURE_BASE_LEVEL: 0
GL_TEXTURE_MAX_LEVEL: 1000
It seems to be caused by generating mipmaps on a DEPTH24_STENCIL8 texture. As a temporary fix I disabled mipmap generation for all depth-stencil textures, and all warnings of this kind disappeared.
The non-power-of-two texture size does not look like the cause, because I have many other textures of the same size that work well.
I knew that the Intel HD Graphics Linux drivers have many limitations, such as not supporting #version 150 GLSL. And now I've found one more :)
Your texture dimensions are not powers of two; see https://www.khronos.org/registry/OpenGL-Refpages/es2.0/xhtml/glGenerateMipmap.xml
Errors
GL_INVALID_OPERATION is generated if either the width or height of the zero level array is not a power of two.
To fix it, use a power-of-two texture, e.g. 512x512.

Inconsistent OpenGL rendering bug with 3D objects [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 5 years ago.
So I've been hammering away at my code for a while, trying to resolve this bug, with absolutely no progress, mostly because of how utterly random and unpredictable the bug is.
This is how the scene looks when everything is working fine:
And when the bug kicks in:
As you can see, the bug only prevents my cubemap skybox, model, and light-source mesh from rendering; the orthographically projected 2D elements are just fine.
I've ruled out the shaders, as even the simplest shader programs still exhibit the problem. I use Assimp to load mesh files and SOIL to load textures, and until about a day ago they worked flawlessly.
There is absolutely no pattern to when this happens; the only way to resolve it is to keep restarting the program until the desired output appears, which is obviously not a good solution. I'm at a complete loss and need help, as OpenGL doesn't report any error. I don't know where to even begin looking. Could EBOs or framebuffers cause this? I have started implementing those recently.
I have searched far and wide for anything related to this, but have come up with nothing so far.
TL;DR: 3D objects fail to render on some runs and work fine on others; possibly an issue with recently implemented framebuffers and EBOs.
UPDATE:
It turns out that the mouse-look code in my Camera class was causing odd issues: the computed change in camera angle was sometimes set to an extraordinarily large negative value. Turning mouse look off resolved the issue permanently.

How to handle backbuffer in Direct3D11? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 7 years ago.
Recently I've wanted to work through a book called Tricks of the 3D Game Programming Gurus. It uses DDraw to implement a software render engine, but DDraw is too old, so I want to use Direct3D 11 to do the same thing. I got the texture of the main backbuffer and updated it, but it didn't work. What should I do?
You don't have direct access to the true frontbuffer/backbuffer even with DirectDraw on modern platforms.
If you want to do all your rendering into a block of CPU memory without using the GPU, then your best bet for fast presentation is to use a Direct3D 11 texture with D3D11_USAGE_DYNAMIC, and then do a simple full-screen quad render of that texture onto the presentation backbuffer. For that step, you can look at DirectX Tool Kit and the SpriteBatch class.
That said, performance-wise this is likely to be pretty poor, because you are doing everything on the CPU and the GPU is basically idle 99% of the time.

Toshiba Satellite C660, OpenGL 1.1, ATI Radeon Mobility HD 5470 [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
How to update OpenGL 1.1 on Toshiba Satellite C660 with ATI Radeon Mobility HD 5470? The driver from Toshiba support doesn't include the OpenGL update.
Download the drivers directly from ATI/AMD: go to http://support.amd.com/en-us/download
Enter the following values into the filter boxes:
Notebook Graphics
Radeon HD Series
Mobility Radeon HD 5xxx Series
<Your Operating System>
Then click the "Display Results" button.
I tried many things and finally figured out that the problem was Win8. I installed a fresh, stable Win7 and finally got OpenGL up to v4.0. The reason I had to get this done was to start working with jMonkeyEngine3.

video file + fragment shader under Linux [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
Coming from Windows (MSVC++ 2005):
What SDK or similar do you recommend for porting a C++ application (DirectShow + Direct3D) to Linux that plays a video file and uses fragment shaders?
Is there any reason you need a fragment shader at all? (Are you doing post-processing on the video images?) You don't need any shader code to get a video playing with OpenGL.
I would use FFmpeg (libavcodec, actually) to do the video decoding. Displaying a frame just requires an OpenGL texture and a call to glTexSubImage2D each frame to do the update.
Using FFMPEG in C/C++
You need to use OpenGL instead.
Some tips for the implementation:
- To achieve good performance, make sure a good video card driver is installed.
- If you are not familiar with OpenGL, start with the 'Red Book', the OpenGL Programming Guide.
- You may need to download the latest extension headers from http://www.opengl.org/registry/
- The GLEW library may help you identify the available extensions.
- Include the GL/gl.h and glext.h files in your project.
- Link against the driver's OpenGL dynamic library: /usr/lib64/libGL.so or similar.
I would also check out the GStreamer framework on Linux if you need to port a more complicated DirectShow application. It likewise builds a kind of graph for media playback. It is totally different, but if you have experience with, and a need for, complicated DirectShow, you will see some analogies.
GStreamer also has an OpenGL plugin for image effects and shaders:
http://www.gstreamer.net/
http://www.gstreamer.net/releases/gst-plugins-gl/0.10.1.html