I get strange results when I try to copy the rendered texture into a background texture for later use with the CopyResource command. This is what comes out:
I don't get any DX11 warnings or errors.
This only happens when using an ATI Radeon card.
I also tried five other Nvidia cards and the output looks fine.
I installed the newest drivers and also tried older ones, but nothing changed.
I can't post the code; it is too large anyway. I only want to know whether someone else has run into something like this, and if so, how did you solve it?
Is there a better method for copying textures?
It turns out the problem is easily solvable. After a long debugging session I saw that the source texture was still bound as the render output. This gives no warnings or errors and is valid on Nvidia cards, but my Radeon card (AMD Radeon R7 M370) does not like it.
So I changed my code to:
OMSetRenderTargets(1, nullptr, nullptr);
CopyResource(...
and the bug was fixed. Maybe this answer helps someone with the same problem.
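For anyone hitting the same thing, here is a minimal sketch of the pattern; the function and variable names are my own placeholders, not taken from the real code. The idea is to unbind the render target so the copy source is no longer bound for output, then do the copy:

#include <d3d11.h>

// Minimal sketch, assuming 'context', 'renderedTex' (the texture that was
// just rendered into) and 'backgroundTex' (the copy destination) are your
// own D3D11 objects. All names here are placeholders.
void CopyRenderedToBackground(ID3D11DeviceContext* context,
                              ID3D11Texture2D* renderedTex,
                              ID3D11Texture2D* backgroundTex)
{
    // Make sure the source texture is not still bound as a render target
    // while CopyResource reads from it; the Radeon driver chokes on that.
    ID3D11RenderTargetView* nullRTV = nullptr;
    context->OMSetRenderTargets(1, &nullRTV, nullptr);

    // Both textures must have identical size, format and mip count.
    context->CopyResource(backgroundTex, renderedTex);
}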
Related
I'm making a kind of graphics engine using OpenGL, but I'm having a bit of a problem: the texture is not loading properly. It loads like this:
It should show the picture of the "heart" square fully as a square, but some of it is gone, and it kind of vibrates whenever the frames change. I realized that if I remove the frames (on screen), the problem goes away.
What is wrong with that thing?
EDIT: it appears that the problem is caused by shaders on AMD GPUs. I don't know why, and if someone knows what could be causing it, please tell me. My code is long and I don't really know which part is causing the problem...
I also tried a different method: I rendered the label on a different layer and it worked :/
I have an OpenGL test application that is producing incredibly unusual results. When I start up the application it may or may not feature a severe graphical bug.
It might produce an image like this:
http://i.imgur.com/JwPoDrh.jpg
Or like this:
http://i.imgur.com/QEYwhBY.jpg
Or just the correct image, like this:
http://i.imgur.com/zUJbwCM.jpg
The scene consists of one spinning colored cube (made of 12 triangles) with a simple shader on it that colors the pixels based on the absolute value of their model space coordinates. The junk faces appear to spin with the cube as though they were attached to it and often junk triangles or quads flash on the screen briefly as though they were rendered in 2D.
The thing I find most unusual about this is that the behavior is highly inconsistent: starting the exact same application repeatedly, without me changing anything else on the system, will produce different results, sometimes bugged, sometimes not. The arrangement of the junk faces isn't consistent either.
I can't really post source code for the application as it is very lengthy and the actual OpenGL calls are spread out across many wrapper classes and such.
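Since the real source can't be posted, here is a rough reconstruction of my own (not the application's actual code) of what a fragment shader that colors pixels by the absolute value of their model-space coordinates might look like, written as a GLSL source string in C++:

// Rough reconstruction, not the application's actual shader. 'modelPos' is
// assumed to be the model-space position forwarded from the vertex shader.
const char* kFragmentShaderSrc = R"GLSL(
#version 330 core
in vec3 modelPos;
out vec4 fragColor;
void main()
{
    fragColor = vec4(abs(modelPos), 1.0);
}
)GLSL";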
This is occurring under the following conditions:
Windows 10 64 bit OS (although I have observed very similar behavior under Windows 8.1 64 bit).
AMD FX-9590 CPU (Clocked at 4.7GHz on an ASUS Sabertooth 990FX).
AMD 7970HD GPU (It is a couple years old and occasionally areas of the screen in 3D applications become scrambled, but nothing on the scale of what I'm experiencing here).
Using SDL (https://www.libsdl.org/) for window and context creation.
Using GLEW (http://glew.sourceforge.net/) for OpenGL.
Using OpenGL versions 1.0, 3.3 and 4.3 (I'm assuming SDL is indeed using the versions I instructed it to).
AMD Catalyst driver version 15.7.1 (Driver Packaging Version listed as 15.20.1062.1004-150803a1-187674C, although again I have seen very similar behavior on much older drivers).
Catalyst Control Center lists my OpenGL version as 6.14.10.13399.
This looks like a broken graphics card to me. Most likely it is a problem with the memory (either the memory itself or a soldering problem). Artifacts like the ones you see can happen if, for some reason, the address for a memory operation does not fully settle, or is not set at all, before the read starts; that can be caused by a bad connection between the GPU and the memory (failed solder joints) or by the memory itself failing.
Solution: buy a new graphics card. You may try resoldering it using a reflow process; there are some tutorials on how to do this DIY, but a proper reflow oven gives better results.
Whenever I try to render my terrain with point lights, it only works on my Nvidia GPU and driver, not on the Intel integrated GPU and driver. I believe the problem is in my code and that the Nvidia driver is hiding it, since I've heard Nvidia's OpenGL implementation is lenient and will let you get away with things you're not supposed to do. Since I get no errors, I need help debugging my shaders.
Link:
http://pastebin.com/sgwBatnw
Note:
I use OpenGL 2 and GLSL Version 120
Edit:
I was able to fix the problem on my own. For anyone with similar problems: it was not because I used the regular transformation matrix for the normals, since when I did that I set the normals' w value to 0.0. The problem was that on Intel integrated graphics there is apparently a maximum number of array elements per uniform (or a maximum uniform size in general), I was going over that limit, and the driver was not reporting it. Another thing wrong with the code was that I was relying on implicit type conversion (dividing vec3s by floats), so I corrected those things and it started to work. Here's my updated code.
Link: http://pastebin.com/zydK7YMh
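For anyone running into the same limit, a quick way to check it is to query the implementation limits on a current context and compare them with the size of your uniform arrays. This is a sketch of my own, not part of the code above, and it assumes GLEW (or another loader) exposes the GL 2.0 enums; kNumLights and kComponentsPerLight are hypothetical values standing in for your own light-array layout:

#include <cstdio>
#include <GL/glew.h>

// Sketch: call this with a current GL context.
void CheckUniformLimits()
{
    const int kNumLights = 32;               // hypothetical array length
    const int kComponentsPerLight = 3 * 4;   // e.g. three vec4s per light

    GLint maxVertex = 0, maxFragment = 0;
    glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertex);
    glGetIntegerv(GL_MAX_FRAGMENT_UNIFORM_COMPONENTS, &maxFragment);
    printf("max uniform components: vertex %d, fragment %d\n",
           maxVertex, maxFragment);

    if (kNumLights * kComponentsPerLight > maxFragment)
        printf("light array exceeds the fragment-stage uniform limit\n");
}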
I just picked up a new Lenovo Thinkpad that comes with Intel HD Graphics 3000. I'm finding that my old freeglut apps, which use GLUT_MULTISAMPLE, are running at 2 or 3 fps as opposed to the expected 60fps. Even the freeglut example 'shapes' runs this slow.
If I disable GLUT_MULTISAMPLE from shapes.c (or my app) things run quickly again.
I tried multisampling with GLFW (using GLFW_FSAA, or whatever that hint is called), and I think it's working fine. This was with a different app (glgears). GLFW is triggering Norton Internet Security, which thinks it's malware and keeps removing .exes... but that's another problem; my interest is in freeglut.
I wonder if the algorithm that freeglut uses to choose a pixel format is tripping up on this card, whereas glfw is choosing the right one.
Has anyone else come across something like this? Any ideas?
That GLFW triggers Norton is a bug in Norton's virus definitions. If it's still the case with the latest definitions, send them your GLFW dll/app so they can fix it. The same happens on Avira and they are working on it (they have already confirmed that it's a false positive).
As for the HD 3000: that's quite a weak GPU. What resolution is your app running at, and how many samples are you using? Maybe the amount of framebuffer memory gets too high for the little guy?
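Independent of that, it is worth checking what kind of framebuffer the driver actually gave you. A small sketch of my own (not from the original posts): as far as I remember, glutSetOption(GLUT_MULTISAMPLE, n) is a freeglut extension for requesting a specific sample count before the window is created, and GL_SAMPLE_BUFFERS/GL_SAMPLES report what you ended up with.

#include <cstdio>
#include <GL/freeglut.h>

// Older Windows gl.h headers stop at GL 1.1, so define these if missing.
#ifndef GL_SAMPLE_BUFFERS
#define GL_SAMPLE_BUFFERS 0x80A8
#define GL_SAMPLES        0x80A9
#endif

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutSetOption(GLUT_MULTISAMPLE, 4);   // ask for 4 samples per pixel
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("multisample check");

    // Report what the driver actually handed out.
    GLint buffers = 0, samples = 0;
    glGetIntegerv(GL_SAMPLE_BUFFERS, &buffers);
    glGetIntegerv(GL_SAMPLES, &samples);
    printf("sample buffers: %d, samples: %d\n", buffers, samples);
    return 0;
}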
I use this function to read my texture back from the graphics card, but for some reason it doesn't return anything on some cards if miplevel > 0.
Here is the code I'm using to get the image:
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);
Here is the code I use to check which mipmapping method to use:
ext = (char*)glGetString(GL_EXTENSIONS);
if (strstr(ext, "SGIS_generate_mipmap") == NULL) {
    // use gluBuild2DMipmaps()
} else {
    // use GL_GENERATE_MIPMAP
}
So far it has worked properly, and it reports that GL_GENERATE_MIPMAP is supported on the ATI cards listed below.
Here are the tested cards:
ATI Radeon 9550 / X1050 Series
ATI Mobility Radeon HD 3470
ATI Radeon X700
ATI Radeon HD 4870
ATI Radeon HD 3450
At the moment I take mip level 0 and generate the mipmaps with my own code. Is there a better fix for this?
Also, glGetError() returns 0 on all cards, so no error occurs; it just doesn't work. Probably a driver problem?
I'm still looking for a better fix than resizing it myself on the CPU...
Check the error that glGetTexImage is reporting; it most probably tells you what the problem is.
Edit: Sounds like the joys of using ATI's poorly written OpenGL drivers. Assuming your drivers are up to date, your options are pretty much: use an nVidia card, work around it, or accept that it won't work. It might be worth hassling ATI about it, but they will most likely do nothing, alas.
Edit 2: On the problem cards, are you using GL_GENERATE_MIPMAP? It might be that you can't grab the mip levels unless they are explicitly built...? i.e. try gluBuild2DMipmaps() for everything.
Edit 3: The thing is, though, that it "may" be the cause of your problems. It doesn't sound unlikely to me that the ATI driver returns the texture from a local copy; if you use automatic mipmap generation, however, the levels are generated entirely on the card and never copied back. Explicitly try building the mipmaps locally and see if that fixes your issue. It may not, but you need to try these things or you will never figure out the problem. Alas, trial and error is all that works with problems like this. This is why a fair few games keep large databases of driver name, card name and driver version to decide whether a feature will or will not work.
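To make Edits 2 and 3 concrete, here is a rough sketch of the "build the mip chain explicitly, then read it back" idea. The texture setup around it is assumed, not taken from the question; width, height and pixels stand in for your own image data.

#include <cstdio>
#include <vector>
#include <GL/glu.h>   // gluBuild2DMipmaps

// Sketch: assumes the target texture is already bound to GL_TEXTURE_2D
// and that width/height are powers of two.
void UploadWithExplicitMips(int width, int height, const unsigned char* pixels)
{
    // gluBuild2DMipmaps uploads level 0 and every smaller level itself,
    // so the driver has a real copy of each level to hand back later.
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);

    // Read back mip level 1 to verify the levels are actually retrievable.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // tightly packed RGB rows
    std::vector<unsigned char> level1((width / 2) * (height / 2) * 3);
    glGetTexImage(GL_TEXTURE_2D, 1, GL_RGB, GL_UNSIGNED_BYTE, level1.data());
    printf("glGetError() after readback: 0x%x\n", glGetError());
}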