I am updating an application that uses the obsolete ChoosePixelFormat function. cRedShift, cGreenShift and cBlueShift (in the PIXELFORMATDESCRIPTOR struct) are set to 0. The ChoosePixelFormat call should be replaced with wglChoosePixelFormatARB. In theory I can pass WGL_RED_SHIFT_ARB, WGL_GREEN_SHIFT_ARB and WGL_BLUE_SHIFT_ARB as wglChoosePixelFormatARB attributes, but the documentation says those attributes are always ignored, even when explicitly specified. I don't understand what the purpose of those WGL attributes is. Could you please explain it? Do cRedShift, cGreenShift and cBlueShift have any effect when used with ChoosePixelFormat?
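For illustration, this is roughly the kind of attribute list I mean (a sketch, not the application's actual code; hdc and a loaded wglChoosePixelFormatARB pointer are assumed to exist):
const int attribs[] = {
    WGL_DRAW_TO_WINDOW_ARB, TRUE,
    WGL_SUPPORT_OPENGL_ARB, TRUE,
    WGL_DOUBLE_BUFFER_ARB,  TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_RED_SHIFT_ARB,      0,   // the attributes in question; the documentation
    WGL_GREEN_SHIFT_ARB,    0,   // says these are always ignored during matching
    WGL_BLUE_SHIFT_ARB,     0,
    0                            // terminator
};
int pixelFormat = 0;
UINT numFormats = 0;
wglChoosePixelFormatARB(hdc, attribs, NULL, 1, &pixelFormat, &numFormats);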
I am working on an augmented reality project. The user's webcam captures video, and the captured frames should be displayed with a cube drawn on top of them.
This is where I get stuck: when I call glBindTexture(GL_TEXTURE_2D, texture_background), I get this error:
(
ArgumentError: argument 2: : wrong type
GLUT Display callback with (),{} failed: returning None argument 2: : wrong type
)
I am completely stuck and have no idea what to do. The project is written in Python 2.7, using OpenCV and PyOpenGL 3.1.0.
You can find the code at this link: click here
Thanks in advance.
Interesting error! So I played around with your source code (by the way, in the future you should probably add the code directly to your question instead of as a separate link), and the issue is actually just one of variable scope, not GLUT or OpenGL usage. The problem is that your texture_background variable does not exist within the scope of the _draw_scene() function. To verify this, try print texture_background in your _draw_scene() function and you will find it is None rather than the desired integer texture identifier.
The simple, hack-y solution is to declare global texture_background before using it within your _handle_input() function. You will also need to define texture_background = None in the main scope of your program (underneath your ##FOR CUBE FROM OPENGL comment). The same global comment applies to x_axis and z_axis.
That said, this solution is not great. The rigid structure required by GLUT, with all of its predefined glut* callbacks, makes it hard to structure the code the way you might want in terms of initializing your app. If you are not forced to use GLUT, I would suggest a more flexible alternative such as pygame, pysdl2 or pyqt for creating your context instead.
In my C++ application, I create a temporary context, make it current, and then try to use wglCreateContextAttribsARB, which is simply undefined. Most answers I have seen say to use PFNWGLCREATECONTEXTATTRIBSARBPROC, which is also undefined. What am I missing?
I'm only using gl.h (provided by VS2015)
SetPixelFormat(g_hDc, chosenPixelFormat, &pfd);
HGLRC temporaryContext = wglCreateContext(g_hDc);
wglMakeCurrent(g_hDc, temporaryContext);
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB...
Both, however, are simply undeclared. I initially tried calling wglCreateContextAttribsARB on its own, anywhere in my code, to no avail.
At this stage, I have a context working: windowed, 480p, updating, stable 60 FPS. So I know my side is working, and I'm getting no GL errors either. Where do I need to declare these two? Am I using the wrong GL header?
I'm using an ASUS Radeon R9 285 with up-to-date drivers.
All data types and constants related to WGL extensions are declared in wglext.h (available from the Khronos OpenGL registry).
You need to query the function pointer of type PFNWGLCREATECONTEXTATTRIBSARBPROC through your current context via the WGL extension mechanism (e.g. wglGetProcAddress()).
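A minimal sketch of what that looks like, continuing from the temporary-context code in the question (g_hDc and temporaryContext are the question's variables; the version and profile attributes are just example values):
// After including wglext.h (from the Khronos OpenGL registry) and with the
// temporary context from the question made current:
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");

if (wglCreateContextAttribsARB != NULL)
{
    const int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,   // example version, pick what you need
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0                                   // terminator
    };

    // Create the real context, make it current, and drop the temporary one.
    HGLRC realContext = wglCreateContextAttribsARB(g_hDc, NULL, attribs);
    wglMakeCurrent(g_hDc, realContext);
    wglDeleteContext(temporaryContext);
}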
I'm attempting to follow this tutorial on MSDN to load an image file from a resource. I've a feeling some of the code provided is guff, but I can't figure out how to make it work. The call to FindResource() keeps failing with error code 1813 (ERROR_RESOURCE_TYPE_NOT_FOUND).
I've added a .rc resource file to my C++ project (I'm using Visual Studio 2013 as my IDE), and added a PNG file with the ID IDB_PNG1.
The tutorial defines the resource as IDR_SAMPLE_IMAGE IMAGE "turtle.jpg", and then calls FindResource() as
FindResource(
NULL, // This component.
L"SampleImage", // Resource name.
L"Image"); // Resource type.
I've a feeling that L"SampleImage" is supposed to be L"IDR_SAMPLE_IMAGE" and L"Image" is supposed to be L"IMAGE", since the provided values don't seem to exist anywhere, but my equivalent call doesn't work:
FindResource(
NULL, // This component
"IDB_PNG1", // Resource name
"PNG", // Resource type
);
What am I doing wrong?
I don't know if it's related, but whenever I use L"string" in my code I get an error (argument of type "const wchar_t *" is incompatible with parameter of type "LPCSTR"), so I've been omitting the L. That has worked for every other example I've followed, so I don't think it's the issue here.
The sample is a little muddled with that "SampleImage" name. The confusing part is that Win32 resources may be identified by either strings or (16-bit) integers. The sample leads you to use strings (e.g. L"SampleImage"), but the Visual Studio IDE (and, frankly, most code I've come across) prefers integers. To allow both kinds, the Win32 resource functions take string-typed parameters (LPCTSTR, i.e. LPCWSTR in Unicode builds), and callers are supposed to use a macro, MAKEINTRESOURCE, to convert an integer ID into a "pseudo string". The same applies to resource types: there are some built-in types (ICON, CURSOR, BITMAP, et al.), but you can always define your own types using strings.
If you look around in your code, you should be able to find a header file (Resource.h, probably) with the definition for IDB_PNG1. It's most likely a small integer, so you need to use the MAKEINTRESOURCE macro. PNG was probably not defined anywhere, and it's not one of the built-in resource types, so the resource compiler treated it as a string and so should you.
e.g.
FindResource(
NULL, // This component
MAKEINTRESOURCE(IDB_PNG1), // Resource name
L"PNG", // Resource type
);
Try that and let us know if it works. (Since your project appears to be built for the ANSI character set, which is why the L prefix gives you that LPCSTR error, write "PNG" or TEXT("PNG") there rather than L"PNG".)
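Once FindResource succeeds, the usual Win32 pattern for getting at the raw PNG bytes is FindResource, LoadResource, LockResource and SizeofResource. A rough sketch (error handling omitted; use whichever string form of the type compiles in your project, as noted above):
HRSRC   hRes  = FindResource(NULL, MAKEINTRESOURCE(IDB_PNG1), L"PNG");
HGLOBAL hData = LoadResource(NULL, hRes);     // maps the resource into memory
void*   pData = LockResource(hData);          // pointer to the raw PNG bytes
DWORD   size  = SizeofResource(NULL, hRes);   // size of those bytes in the module
// pData and size can now be handed to whatever decodes the PNG (WIC, libpng, etc.).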
I'm looking at the documentation for IDXGIKeyedMutex and I'm a bit unsure regarding the following:
You must call the ReleaseSync method when you are done rendering to a
surface.
My question is: what does "when you are done rendering" mean? Is it after I remove the texture as the render target of the immediate context, when I call Flush on the immediate context, or do I need some other form of GPU fence/sync before I can call ReleaseSync?
Also why is D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX preferred over D3D11_RESOURCE_MISC_SHARED?
You should call IDXGIKeyedMutex::ReleaseSync after you have issued the ID3D11DeviceContext::Draw calls, or any other calls that queue GPU commands writing to the surface (e.g. ID3D11DeviceContext::CopyResource). You don't need to explicitly call Flush. For a sample of using AcquireSync/ReleaseSync, please look at http://code.msdn.microsoft.com/DXGISyncSharedSurf
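A rough sketch of that acquire/write/release pattern (all variable names, the key value 0 and the timeout are illustrative; error handling is omitted):
#include <d3d11.h>
#include <dxgi.h>

void WriteToSharedSurface(ID3D11DeviceContext* context,
                          ID3D11Texture2D* sharedTexture,
                          ID3D11RenderTargetView* rtv,
                          UINT vertexCount)
{
    IDXGIKeyedMutex* keyedMutex = NULL;
    sharedTexture->QueryInterface(__uuidof(IDXGIKeyedMutex), (void**)&keyedMutex);

    keyedMutex->AcquireSync(0, INFINITE);        // block until we own the surface for key 0

    context->OMSetRenderTargets(1, &rtv, NULL);  // queue the GPU commands that write to it
    context->Draw(vertexCount, 0);

    keyedMutex->ReleaseSync(0);                  // done queuing writes; no explicit Flush needed
    keyedMutex->Release();
}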
D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX is preferred over D3D11_RESOURCE_MISC_SHARED because it can be used with D3D11_RESOURCE_MISC_SHARED_NTHANDLE, which provides better security for cross-process surface sharing.
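For reference, a sketch of how a texture might be created with those flags (device is assumed to be a valid ID3D11Device*; the size and format are just example values):
D3D11_TEXTURE2D_DESC desc = {};
desc.Width            = 1280;
desc.Height           = 720;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D11_USAGE_DEFAULT;
desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags        = D3D11_RESOURCE_MISC_SHARED_KEYEDMUTEX |
                        D3D11_RESOURCE_MISC_SHARED_NTHANDLE;   // NT handle sharing requires the keyed mutex flag

ID3D11Texture2D* sharedTexture = NULL;
device->CreateTexture2D(&desc, NULL, &sharedTexture);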
I am designing a game engine in DirectX 11, and I have a question about the ID3D11DeviceContext::IASetInputLayout function. From what I can find in the documentation, there is no mention of what the function will do if you set an input layout that has already been set on the device. For example, if I were to do the following:
//this assumes dc is a valid ID3D11DeviceContext* and that
//ia is a valid ID3D11InputLayout*
dc->IASetInputLayout(ia);
//other program lines: drawing, setting vertex shaders/pixel shaders, etc.
dc->IASetInputLayout(ia);
//continue execution
would this incur a performance penalty through device state switching, or would the runtime recognize that the input layout is the same as the one already set and return early?
While I also cannot find anything in the documentation about setting an input layout that is already bound, you could get a pointer to the currently bound layout by calling ID3D11DeviceContext::IAGetInputLayout, or do the check yourself by keeping your own reference to the last layout you set; that way you avoid the extra call to your ID3D11DeviceContext object.
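For example, a minimal sketch of the "keep your own reference" approach (the function name and the static variable are illustrative, not part of any API):
#include <d3d11.h>

// Cache the last input layout we set and skip redundant IASetInputLayout calls.
static ID3D11InputLayout* s_lastInputLayout = NULL;

void SetInputLayoutCached(ID3D11DeviceContext* dc, ID3D11InputLayout* layout)
{
    if (layout != s_lastInputLayout)   // only touch the pipeline when it actually changes
    {
        dc->IASetInputLayout(layout);
        s_lastInputLayout = layout;
    }
}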
As far as I know, it should detect that there are no changes and ignore the call. But it can easily be tested: just call the method 10,000 times each frame and see how bad the FPS drop is :)