I use glGetTexImage to read my texture back from the graphics card, but for some reason it doesn't return anything on some cards if miplevel > 0.
Here is the code I'm using to get the image:
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);
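For completeness, a minimal sketch of how the readback buffer can be sized for a given mip level (the texture handle and malloc-based allocation are placeholders, not my actual code):
GLint w = 0, h = 0;
glBindTexture(GL_TEXTURE_2D, texture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, miplevel, GL_TEXTURE_WIDTH, &w);
glGetTexLevelParameteriv(GL_TEXTURE_2D, miplevel, GL_TEXTURE_HEIGHT, &h);
unsigned char* data = (unsigned char*)malloc(w * h * 3); // GL_RGB, GL_UNSIGNED_BYTE
glPixelStorei(GL_PACK_ALIGNMENT, 1);                     // avoid row-padding surprises
glGetTexImage(GL_TEXTURE_2D, miplevel, GL_RGB, GL_UNSIGNED_BYTE, data);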
Here is the code I use to check which method to use for mipmapping:
ext = (char*)glGetString(GL_EXTENSIONS);
if(strstr(ext, "SGIS_generate_mipmap") == NULL){
    // use gluBuild2DMipmaps()
}else{
    // use GL_GENERATE_MIPMAP
}
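For context, a rough sketch of what those two branches look like in the upload path (width, height and pixels are placeholders, not the exact code):
if(strstr(ext, "SGIS_generate_mipmap") == NULL){
    // fallback: build and upload every mip level on the CPU
    gluBuild2DMipmaps(GL_TEXTURE_2D, GL_RGB, width, height,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);
}else{
    // let the driver generate the mipmaps when level 0 is uploaded
    glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
                 GL_RGB, GL_UNSIGNED_BYTE, pixels);
}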
So far the extension check has worked properly, and it reports that GL_GENERATE_MIPMAP is supported on the ATI cards listed below.
Here are the tested cards:
ATI Radeon 9550 / X1050 Series
ATI Mobility Radeon HD 3470
ATI Radeon X700
ATI Radeon HD 4870
ATI Radeon HD 3450
At the moment I am reading back mip level 0 and generating the mipmaps with my own code. Is there a better fix for this?
Also, glGetError() returns 0 on all cards, so no error is reported; it just doesn't work. Probably a driver problem?
I'm still looking for a better fix than resizing it myself on the CPU...
Check the error that glGetTexImage is reporting; it most probably tells you what the problem is.
Edit: Sounds like the joys of using ATI's poorly written OpenGL drivers. Assuming your drivers are up to date, your only options are pretty much to use an nVidia card, work around it, or accept that it won't work. It might be worth hassling ATI about it, but they will most likely do nothing, alas.
Edit 2: On the problem cards, are you using GL_GENERATE_MIPMAP? It might be that you can't grab the mip levels unless they are explicitly built, i.e. try gluBuild2DMipmaps() for everything.
Edit 3: The thing is, it "may" be the cause of your problems. It doesn't sound unlikely to me that the ATI driver returns the texture from a local copy, but if you use automatic mipmap generation it builds the mip levels entirely on the card and never copies them back. Explicitly try building the mipmaps locally and see if that fixes your issues. It may not, but you do need to try these things or you will never figure out the problem. Alas, trial and error is all that works with problems like this. This is why a fair few games ship large databases of driver name, card name and driver version to decide whether a feature will or will not work.
I get strange results when I try to copy the rendered texture into a background texture for later use with the CopyResource command; the copied image comes out corrupted.
I don't get any DX11 warnings or errors.
This also only happens when using an ATI Radeon card.
I also tried it on 5 other NVIDIA cards and the output looks fine.
I downloaded the newest drivers and also tried older ones, but nothing changed.
I can't post the code; it's too big anyway. I only want to know whether someone else has seen something like this, and if so, how you solved it.
Is there a better way to copy textures, using another method?
I found out that the problem is easily solvable. After a long debugging session
I saw that the source texture was also still bound as the render output. This gives no warnings or errors and works on NVIDIA cards, but my Radeon card (AMD Radeon R7 M370) does not like it.
So I changed my code to:
OMSetRenderTargets(1, nullptr, nullptr);
CopyResource(...
and the bug was fixed. Maybe this answer helps someone else with the same problem.
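For reference, a minimal sketch of that workaround (context, srcTex and dstTex are placeholder names, not from my actual code; passing an array containing a null view is the more conventional way to unbind):
ID3D11RenderTargetView* nullRTV = nullptr;
context->OMSetRenderTargets(1, &nullRTV, nullptr); // the source texture is no longer bound for output
context->CopyResource(dstTex, srcTex);             // GPU-side copy; both resources must match in type, size and format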
OK, so I'm trying to use the tutorials at http://arcsynthesis.org/gltut/, but I keep getting an error message that pops up for about a second saying "Unable to create OpenGL 3.3 context (flags 1, profile 1)"; there is also a bunch of PDB files missing. I did download the newest drivers for both graphics cards in my laptop (that is, both the Intel(R) HD Graphics 3000 and the NVIDIA GeForce GT 540M), and I did launch a program called "OpenGL Extensions Viewer", which displays that I should be able to run OpenGL version 3.1.
Now, I guess some would say that perhaps my card can't run 3.3, but:
1) My card is said to support 4.0:
http://www.notebookcheck.net/NVIDIA-GeForce-GT-540M.41715.0.html
2) There are people who say that "Any hardware that supports OpenGL 3.1 is capable of supporting OpenGL 3.3. "
OpenGL 3.+ glsl compatibility mess?
3) And finally... A YEAR AGO, I GOT IT TO RUN! Seriously, I got it to work after 2 months of trying. I'm even using some old project files from that time, and sadly they won't launch anymore because of the same error... I have reformatted since then.
I recall that last time it was a whole series of things that I tried... like disabling one graphics card to be able to update the other... or maybe it was that I used some different diagnostic tool, which someone online recommended, saying that "if that program detects that OpenGL isn't working properly, it'll fix it".
Right now I'm busy with other homework, so if anyone has any suggestions about what this could be, please tell!
I am using the open source haptics and 3D graphics library Chai3D running on Windows 7. I have rewritten the library to do stereoscopic 3D with Nvidia nvision. I am using OpenGL with GLUT, and using glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO) to initialize the display mode. It works great on Quadro cards, but on GTX 560m and GTX 580 cards it says the pixel format is unsupported. I know the monitors are capable of displaying the 3D, and I know the cards are capable of rendering it. I have tried adjusting the resolution of the screen and everything else I can think of, but nothing seems to work. I have read in various places that stereoscopic 3D with OpenGL only works in fullscreen mode. So, the only possible reason for this error I can think of is that I am starting in windowed mode. How would I force the application to start in fullscreen mode with 3D enabled? Can anyone provide a code example of quad buffer stereoscopic 3D using OpenGL that works on the later GTX model cards?
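For the fullscreen part, a minimal GLUT game-mode sketch would look roughly like this (the mode string is just an example, and this alone does not make the driver accept GLUT_STEREO):
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_RGB | GLUT_DEPTH | GLUT_DOUBLE | GLUT_STEREO);
glutGameModeString("1920x1080:32@120");      // width x height : bpp @ refresh
if (glutGameModeGet(GLUT_GAME_MODE_POSSIBLE))
    glutEnterGameMode();                     // exclusive fullscreen with the requested mode
else
    glutCreateWindow("fallback window");     // windowed fallback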
What you experience has no technical reason; it is simply NVIDIA product policy. Quad-buffer stereo is considered a professional feature, so NVIDIA offers it only on their Quadro cards, even though the GeForce GPUs could do it just as well. This is not a recent development; it was already like this back in 1999. For example, I had (well, still have) a GeForce2 Ultra back then. Technically it was the very same chip as the Quadro; the only difference was the PCI ID reported back to the system. One could trick the driver into thinking you had a Quadro by tinkering with the PCI ID (either by patching the driver or by soldering an additional resistor onto the graphics card's PCB).
The stereoscopic 3D mode hack for Direct3D was already supported by my GeForce2 back then. At the time the driver duplicated the rendering commands but applied a translation to the modelview matrix and a skew to the projection matrix. These days it's implemented with a shader and multi-render-target trick.
The NVision3D API does allow you to blit images for specific eyes (this is meant for movie players and image viewers). But it also allows you to emulate quad-buffer stereo: instead of the GL_BACK_LEFT and GL_BACK_RIGHT buffers, create two framebuffer objects, which you bind and use as if they were the quad-buffer stereo buffers. Then, after rendering, you blit the resulting images (as textures) through the NVision3D API.
With as little as 50 lines of management code you can build a program that works seamlessly with both NVision3D and quad-buffer stereo. What NVIDIA does is pointless; they should just stop it and properly support quad-buffer stereo pixel formats on consumer GPUs as well.
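A rough sketch of the FBO half of that emulation (sizes and names are placeholders; the final per-eye blit through the NVision3D API is omitted, since it depends on its own headers):
// One color texture + depth renderbuffer + FBO per eye.
GLuint eyeTex[2], eyeDepth[2], eyeFbo[2];
glGenTextures(2, eyeTex);
glGenRenderbuffers(2, eyeDepth);
glGenFramebuffers(2, eyeFbo);
for (int eye = 0; eye < 2; ++eye) {
    glBindTexture(GL_TEXTURE_2D, eyeTex[eye]);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glBindRenderbuffer(GL_RENDERBUFFER, eyeDepth[eye]);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);
    glBindFramebuffer(GL_FRAMEBUFFER, eyeFbo[eye]);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, eyeTex[eye], 0);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                              GL_RENDERBUFFER, eyeDepth[eye]);
}

// Per frame: render each eye into its FBO instead of GL_BACK_LEFT / GL_BACK_RIGHT.
for (int eye = 0; eye < 2; ++eye) {
    glBindFramebuffer(GL_FRAMEBUFFER, eyeFbo[eye]);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // set the per-eye view/projection and draw the scene here
}
glBindFramebuffer(GL_FRAMEBUFFER, 0);
// eyeTex[0] / eyeTex[1] now hold the left/right images; hand them to the NVision3D
// blit, or glBlitFramebuffer them into GL_BACK_LEFT / GL_BACK_RIGHT when a real
// quad-buffer pixel format is available.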
Simple: you can't. Not the way you're trying to do it.
There is a difference between having a pre-existing program do things with stereoscopic glasses and doing what you're trying to do. What you are attempting to do is use the built-in stereo support of OpenGL: the ability to create a stereoscopic framebuffer, where you can render to the left and right framebuffers arbitrarily.
NVIDIA does not allow that with their non-Quadro cards. There are hacks in the driver that will force stereo on applications via nVision and the control panel, but NVIDIA's GeForce drivers do not allow you to create stereoscopic framebuffers.
And before you ask, no, I have no idea why NVIDIA doesn't let you control stereo.
Since I was looking into this issue for my own game, I found this link where somebody hacked the USB protocol: http://users.csc.calpoly.edu/~zwood/teaching/csc572/final11/rsomers/
I didn't follow it through, but when I was researching this it didn't look too hard to make use of that information. So you might have to implement your own code to support it in your app, which should be possible. Unfortunately, a generic solution would be harder, because then you would have to hack the driver or somehow hook into the OpenGL library and intercept the calls.
I want to experiment with some GPGPU. There are five options out there: OpenCL, CUDA, FireStream, Close to Metal, and DirectCompute. Well, not really; after filtering them for my needs, none fits :) I am using a Radeon HD 3870, so CUDA is out; I want cross-platform, so DirectCompute is out; Close to Metal evolved into FireStream (AMD's equivalent of CUDA), and FireStream is now "deprecated" in favor of OpenCL. And guess what? OpenCL is only available from the Radeon 4xxx series onward. So I don't want to learn something that isn't going to be supported, and I don't have the hardware for the newer option.
So until I get a new card, I figured that shaders can do similar things; it's just much harder to get the results back, and slower too. Anyway, I don't plan to do research with this, so it could be good enough for me. Searching for something like that on Google is a job for a garbage man (no offense), so: what are the possibilities for rendering somewhere other than the framebuffer used for display? Can one render into textures, or what other buffers would be best suited for this? In the case of a texture I would like some info on how to access it afterwards; with buffers it shouldn't be much of a problem.
Almost forgot: I'm using OpenGL 3.1 and GLSL 1.50.
Thanks
It's completely possible; GPGPU was done that way before CUDA appeared. Here is a tutorial from that time:
http://www.mathematik.uni-dortmund.de/~goeddeke/gpgpu/tutorial.html
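The core of that approach is rendering a full-screen quad into a floating-point texture attached to an FBO and reading the result back with glReadPixels; a minimal sketch (prog, width and height are placeholders, and the shader and quad drawing are omitted):
// "Output array": a float texture attached to an FBO.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);

glViewport(0, 0, width, height);   // one fragment per output element
glUseProgram(prog);                // the fragment shader does the per-element computation
// ... bind input data as textures and draw a full-screen quad here ...

// Read the results back to the CPU (this is the slow part).
float* results = (float*)malloc(width * height * 4 * sizeof(float));
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, results);
glBindFramebuffer(GL_FRAMEBUFFER, 0);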
To render to something other than a framebuffer, you can also use transform feedback in OpenGL 3.0 to capture the results into a VBO.
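And a hedged sketch of the transform feedback path, assuming a program prog whose vertex shader writes a float varying named "result" (names are placeholders; input attribute setup is omitted):
const char* varyings[] = { "result" };
glTransformFeedbackVaryings(prog, 1, varyings, GL_INTERLEAVED_ATTRIBS);
glLinkProgram(prog);                      // must (re)link after declaring the varyings

GLuint tfbo;
glGenBuffers(1, &tfbo);
glBindBuffer(GL_ARRAY_BUFFER, tfbo);
glBufferData(GL_ARRAY_BUFFER, count * sizeof(float), NULL, GL_STATIC_READ);

glUseProgram(prog);
glEnable(GL_RASTERIZER_DISCARD);          // skip rasterization, we only want the captured data
glBindBufferBase(GL_TRANSFORM_FEEDBACK_BUFFER, 0, tfbo);
glBeginTransformFeedback(GL_POINTS);
glDrawArrays(GL_POINTS, 0, count);        // one "work item" per input vertex
glEndTransformFeedback();
glDisable(GL_RASTERIZER_DISCARD);

// Map the buffer to read the results back on the CPU.
glBindBuffer(GL_ARRAY_BUFFER, tfbo);
float* out = (float*)glMapBuffer(GL_ARRAY_BUFFER, GL_READ_ONLY);
// ... use out[0 .. count-1] ...
glUnmapBuffer(GL_ARRAY_BUFFER);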
Is there a list of 3D cards available that provide full scene antialiasing as well as which are able to do it in hardware (decent performance)?
Pretty much all cards since DX7-level technology (GeForce 2 / Radeon 7000) can do it. The most notable exceptions are Intel cards (the Intel 945, aka GMA 950, and earlier can't do it; I think the Intel 965, aka GMA X3100, can't do it either).
Older cards (GeForce 2 / 4 MX, Radeon 7000-9250) used supersampling (render everything into an internally larger buffer, then downsample at the end). All later cards have multisampling, where this expensive process is only performed at polygon edges (roughly speaking, shaders run once per pixel, while depth/coverage is stored per sample).
Off the top of my head, pretty much any card since a GeForce 2 or so can do it. There's always a performance hit, which varies with the card and the AA mode (of which there are about 100 different kinds), but generally it's quite significant.
Agree with Orion Edwards, pretty much everything new can. Performance also depends greatly on the resolution you run at.
Integrated GPUs are going to be really poor performers with games, FSAA or not. If you want even moderate performance, buy a separate video card.
For something that's not crazy expensive, go with either an NVIDIA GeForce 8000-series card or an ATI 3000-series card. Even as an NVIDIA 8800 GTS owner, I will tell you the ATIs have better support for older games.
Although I personally still like FSAA, it is becoming less important with higher resolution screens. Also, more and more games are using deferred rendering which makes FSAA impossible.
Yes, of course integrated cards are awful. :) But this wasn't a question about gaming, but rather about an application that we are writing that will use OpenGL/D3D for 3D rendering. The 3D scene is relatively small, but antialiasing makes a dramatic difference in terms of the quality of the rendering. We are curious if there is some way to easily determine which cards support these features fully and which do not.
With the exception of the 3100, so far all of the cards we've found that do antialiasing are plenty fast for our purposes (as is my GeForce 9500).
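One simple runtime check might be to request a multisampled pixel format and query what was actually granted; a small GLUT-based sketch (GL_SAMPLE_BUFFERS and GL_SAMPLES may need glext.h with older Windows headers):
#include <GL/glut.h>
#include <stdio.h>

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_RGB | GLUT_DOUBLE | GLUT_DEPTH | GLUT_MULTISAMPLE);
    glutCreateWindow("FSAA probe");

    GLint sampleBuffers = 0, samples = 0;
    glGetIntegerv(GL_SAMPLE_BUFFERS, &sampleBuffers); // 1 if the context is multisampled
    glGetIntegerv(GL_SAMPLES, &samples);              // samples per pixel actually granted
    printf("multisample buffers: %d, samples: %d\n", sampleBuffers, samples);
    return 0;
}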
Having seen a pile of machines recently that don't do it, I don't think that's quite true. The GMA 950 integrated chips don't do it to start with, and I don't think the 3100/X3100 does either (at least not in hardware... the 3100 was enormously slow in a demo). Also, I don't believe the GeForce MX5200 supported it either.
Or perhaps I'm just misunderstanding what you mean when you refer to "AA mode". Are there a lot of cards which support modes that are virtually unnoticeable? :)