How to enable vertical sync in OpenGL?

How do you enable vertical sync?
Is it something simple like glEnable(GL_VSYNC)? (Though there's no such thing as GL_VSYNC, or anything like it, among the options that glEnable accepts.)
Or is there no standard way to do this in OpenGL?

On Windows there is the OpenGL extension function wglSwapIntervalEXT.
From the post by b2b3 at http://www.gamedev.net/community/forums/topic.asp?topic_id=360862:
If you are working on Windows you have to use extensions to get at the wglSwapIntervalEXT function. It is declared in wglext.h. You will also want to download the glext.h file.
The wglext header declares all entry points for Windows-specific extensions; all such functions start with the prefix wgl.
For more information about all published extensions, look in the OpenGL Extension Registry.
wglSwapIntervalEXT comes from the WGL_EXT_swap_control extension. It lets you specify the minimum number of frames to wait before each buffer swap. It is usually used for vertical synchronization (if you set the swap interval to 1). More info about the whole extension can be found here.
Before using this function you need to query whether your card supports WGL_EXT_swap_control and then obtain a pointer to the function using wglGetProcAddress.
To test for support of a given extension you can use a function like this:
#include <windows.h>
#include <cstring>      // strstr
#include "wglext.h"

bool WGLExtensionSupported(const char *extension_name)
{
    // this is a pointer to a function which returns a pointer to a string with the list of all wgl extensions
    PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;

    // determine the pointer to the wglGetExtensionsStringEXT function
    _wglGetExtensionsStringEXT = (PFNWGLGETEXTENSIONSSTRINGEXTPROC)wglGetProcAddress("wglGetExtensionsStringEXT");
    if (_wglGetExtensionsStringEXT == NULL)
    {
        // the extension-string mechanism itself is unavailable
        return false;
    }

    if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL)
    {
        // string was not found
        return false;
    }

    // extension is supported
    return true;
}
To initialize your function pointers you need to:

PFNWGLSWAPINTERVALEXTPROC    wglSwapIntervalEXT    = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported, init pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");

    // this is another function from the WGL_EXT_swap_control extension
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC)wglGetProcAddress("wglGetSwapIntervalEXT");
}
Then you can use these pointers like any other function pointers. To enable vsync you call wglSwapIntervalEXT(1); to disable it you call wglSwapIntervalEXT(0).
To get the current swap interval you call wglGetSwapIntervalEXT().
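For example (a small sketch, assuming the pointers above were obtained successfully):

wglSwapIntervalEXT(1);                       // enable vsync: wait for one v-blank per swap
int interval = wglGetSwapIntervalEXT();      // reports the current interval, 1 here
wglSwapIntervalEXT(0);                       // disable vsync again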

The WGL case is described in the answer by eugensk00.
For CGL (Mac OS X) see this answer to another SO question.
For EGL there's the eglSwapInterval() function, but apparently (see this and this) it doesn't guarantee a tearing-free result; it only waits for the given period (maybe that's just due to broken drivers).
For GLX (Linux with X11 etc.) there are at least 3 similar extensions for this, with varying degrees of functionality (a usage sketch follows the list below). The OpenGL wiki currently lists only one, which is unsupported by Mesa <= 10.5.9 (and maybe higher). Here's a list from the most feature-complete extension (the one listed in the OpenGL wiki) to the least:
GLX_EXT_swap_control
Set swap interval per-drawable per-display: glXSwapIntervalEXT(dpy, drawable, interval)
Get current swap interval: glXQueryDrawable(dpy, drawable, GLX_SWAP_INTERVAL_EXT, &interval)
Get maximum swap interval: glXQueryDrawable(dpy, drawable, GLX_MAX_SWAP_INTERVAL_EXT, &maxInterval)
Disable Vsync: set interval to 0
GLX_MESA_swap_control
Set swap interval per-context: glXSwapIntervalMESA(interval)
Get current swap interval: glXGetSwapIntervalMESA()
Get maximum swap interval: unsupported
Disable Vsync: set interval to 0
GLX_SGI_swap_control
Set swap interval: glXSwapIntervalSGI(interval).
Get current swap interval: unsupported
Get maximum swap interval: unsupported
Disable Vsync: unsupported (interval==0 is an error)
For adaptive Vsync see OpenGL wiki.
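As an illustration of the most capable of these, here is a minimal sketch for GLX_EXT_swap_control (assumptions: a GLX context is current, the extension is advertised by glXQueryExtensionsString, and the EnableVsyncGLX wrapper name is just for illustration):

#include <GL/glx.h>
#include <GL/glxext.h>

void EnableVsyncGLX(Display *dpy, GLXDrawable drawable)
{
    // the entry point still has to be fetched at runtime
    PFNGLXSWAPINTERVALEXTPROC glXSwapIntervalEXT =
        (PFNGLXSWAPINTERVALEXTPROC)glXGetProcAddress((const GLubyte *)"glXSwapIntervalEXT");
    if (glXSwapIntervalEXT == NULL)
        return;   // GLX_EXT_swap_control not available

    glXSwapIntervalEXT(dpy, drawable, 1);   // 1 = wait one v-blank per swap, 0 = disable vsync

    unsigned int interval = 0;
    glXQueryDrawable(dpy, drawable, GLX_SWAP_INTERVAL_EXT, &interval);   // read the interval back
}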

((BOOL(WINAPI*)(int))wglGetProcAddress("wglSwapIntervalEXT"))(1);
https://www.khronos.org/opengl/wiki/Swap_Interval
"wglSwapIntervalEXT(1) is used to enable vsync; wglSwapIntervalEXT(0) to disable vsync."
"A swap interval of 1 tells the GPU to wait for one v-blank before swapping the front and back buffers. A swap interval of 0 specifies that the GPU should never wait for v-blanks"
Alternatively (the wgl function-pointer typedefs are in <GL/wglext.h>):
((PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT"))(1);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
wglSwapIntervalEXT(1);

For the WGL case described in the answer by eugensk:
If you run into a nullptr error, make sure the wglGetProcAddress code runs while an OpenGL context is current,
i.e. after glfwMakeContextCurrent(window);.
See answer here.
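For example, with GLFW the ordering looks roughly like this (a sketch; the window parameters are placeholders):

GLFWwindow *window = glfwCreateWindow(800, 600, "demo", NULL, NULL);
glfwMakeContextCurrent(window);   // the OpenGL context is current from here on

// wglGetProcAddress returns NULL when no context is current
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1);

(GLFW also has glfwSwapInterval(1), which wraps the platform-specific call for you.)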

Related

How to fix processor affinity for Qt 5.9 Thread on Windows with Ryzen 7

I have been developing a program for my Master's thesis with OpenSceneGraph-3.4.0 and a GUI from Qt 5.9 (and otherwise Visual Studio 2015 and 2017). At work everything works fine, but now that I have a new computer at home I tried to get it running there.
However, when I call the frame() method for the viewer, I get a Read Access Violation in QtThread.cpp at the setProcessorAffinity(unsigned int cpunum), specifically in the following line:
QtThreadPrivateData* pd = static_cast<QtThreadPrivateData*>(_prvData);
Here is the complete function (QtThread.cpp is part of OpenThreads of OSG):
// Description: set processor affinity for the thread
//
// Use: public
//
int Thread::setProcessorAffinity(unsigned int cpunum)
{
    QtThreadPrivateData* pd = static_cast<QtThreadPrivateData*>(_prvData);
    pd->cpunum = cpunum;
    if (!pd->isRunning) return 0;

    // FIXME:
    // Qt doesn't have a platform-independent thread affinity method at present.
    // Does it automatically configure threads on different processors, or we have to do it ourselves?
    return -1;
}
The viewer in OSG is set to osgViewer::Viewer::SingleThreaded, but if I remove that line I get an error "Cannot make QOpenGLContext current in a different thread" in GraphicsWindowQt.cpp (which is part of OsgQt), so that's probably a dead end.
Edit for clarification
I call frame() on the osgViewer::Viewer object.
In this function, the viewer calls realize() (which is a function of the Viewer class).
In there, setUpThreading() is called (which is a function of the ViewerBase class).
This in turn calls OpenThreads::SetProcessorAffinityOfCurrentThread(0).
In there, the following code is executed:

Thread* thread = Thread::CurrentThread();
if (thread)
    return thread->setProcessorAffinity(cpunum);

thread (after the first line) has the value 0x00000000fdfdfdfd, which looks like an error to me.
In any case, the last call is the one I posted in my original question.
I don't even have an idea of where to start fixing this. I assume it's some processor-related problem. My processor is a Ryzen 7 1700 (at work it's an Intel i7 3770k), so maybe that helps.
Otherwise, at home I'm using Windows 10, whereas at work it's Windows 7.
I'd be thankful for any help at all.
So in the end, it seems to be a problem with OpenThreads (and thus the OpenSceneGraph part, which I can do nothing about). When running CMake on the OpenSceneGraph source, there is an option "BUILD_OPENTHREADS_WITH_QT" that needs to be disabled.
I found the solution in this thread on the OSG forum, so thanks to that poster.

SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE) return 0

if (SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE, 0, &uiType, 0) != 0) {
    Debug(uiType); // shows 0
}
This happened to me over Remote Desktop with Windows Server 2012 R2.
According to the docs there are 2 possible values:
The possible values are FE_FONTSMOOTHINGSTANDARD (1) and
FE_FONTSMOOTHINGCLEARTYPE (2).
I also found this similar question, but it has no answers:
Meaning of SystemInformation.FontSmoothingType's return value
Does anyone know what uiType 0 means?
EDIT: On that remote machine, SPI_GETFONTSMOOTHING returns 0.
Determines whether the font smoothing feature is enabled.
The docs are obviously wrong. I would assume the correct way is to first check SPI_GETFONTSMOOTHING and only then SPI_GETFONTSMOOTHINGTYPE.
The font smoothing "type" (SPI_GETFONTSMOOTHINGTYPE) is only meaningful if font smoothing is enabled (SPI_GETFONTSMOOTHING). The same is true for all of the other font smoothing attributes, like SPI_GETFONTSMOOTHINGCONTRAST and SPI_GETFONTSMOOTHINGORIENTATION.
You should check SPI_GETFONTSMOOTHING first. If it returns TRUE (non-zero), then you can query the other font smoothing attributes. If it returns FALSE (zero), then you are done. If you request the other font smoothing attributes, you will get meaningless noise.
So, in other words, your edit is correct, and the MSDN documentation could afford to be improved. I'm not sure it is "incorrect"; this seems like a pretty obvious design to me. It is a C API; calling it with the wrong parameters can be assumed to lead to wrong results.
The documentation does say that the only possible return values for SPI_GETFONTSMOOTHINGTYPE are FE_FONTSMOOTHINGSTANDARD and FE_FONTSMOOTHINGCLEARTYPE, so it would not be possible for this parameter to indicate that font smoothing is disabled or not applicable. The current implementation of SystemParametersInfo might return 0 for the case where font smoothing is disabled, but since the documentation doesn't explicitly say that you can rely on that, you shouldn't rely on it.
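To make the ordering concrete, a small sketch (the console output is just illustrative):

#include <windows.h>
#include <iostream>

int main()
{
    BOOL smoothingEnabled = FALSE;
    if (SystemParametersInfo(SPI_GETFONTSMOOTHING, 0, &smoothingEnabled, 0) && smoothingEnabled)
    {
        // only now is the "type" meaningful: FE_FONTSMOOTHINGSTANDARD (1) or FE_FONTSMOOTHINGCLEARTYPE (2)
        UINT uiType = 0;
        if (SystemParametersInfo(SPI_GETFONTSMOOTHINGTYPE, 0, &uiType, 0))
            std::cout << "Font smoothing type: " << uiType << "\n";
    }
    else
    {
        std::cout << "Font smoothing is disabled; SPI_GETFONTSMOOTHINGTYPE would be meaningless.\n";
    }
    return 0;
}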

Anyone tried using glMultiDrawArraysIndirect? Compiler can't find the function

Has anyone successfully used glMultiDrawArraysIndirect? I'm including the latest glext.h, but the compiler can't seem to find the function. Do I need to define something (#define ...) before including glext.h?
error: ‘GL_DRAW_INDIRECT_BUFFER’ was not declared in this scope
error: ‘glMultiDrawArraysIndirect’ was not declared in this scope
I'm trying to implement an OpenGL SuperBible example. Here are snippets from the source code:
GLuint indirect_draw_buffer;
glGenBuffers(1, &indirect_draw_buffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_draw_buffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             NUM_DRAWS * sizeof(DrawArraysIndirectCommand),
             draws,
             GL_STATIC_DRAW);

....
// fill the buffers
.....

glMultiDrawArraysIndirect(GL_TRIANGLES, NULL, 3, 0);
I'm on Linux with a Quadro 2000 and the latest drivers installed (NVIDIA 319.60).
You cannot simply #include <glext.h> and expect this problem to fix itself. This header is only half of the equation: it defines the basic constants, function signatures, typedefs, etc. used by OpenGL extensions, but it does not actually solve the problem of extending OpenGL.
On most platforms you are guaranteed only a certain version of OpenGL (1.1 on Windows), and to use any part of OpenGL newer than that version you must extend the API at runtime. Linux is no different: in order to use glMultiDrawArraysIndirect (...) you have to load this extension from the driver at runtime. This usually means setting up function pointers that are NULL until runtime in order to keep the compiler/linker happy, as in the sketch below.
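As an illustration of the manual route (a sketch, assuming a current GLX context; the typedef comes from glext.h and the pglMultiDrawArraysIndirect name is just for illustration):

#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

// NULL until resolved at runtime
static PFNGLMULTIDRAWARRAYSINDIRECTPROC pglMultiDrawArraysIndirect = NULL;

void LoadIndirectDrawEntryPoint()
{
    pglMultiDrawArraysIndirect = (PFNGLMULTIDRAWARRAYSINDIRECTPROC)
        glXGetProcAddress((const GLubyte *)"glMultiDrawArraysIndirect");
}

// later, guarded by a NULL check:
// if (pglMultiDrawArraysIndirect)
//     pglMultiDrawArraysIndirect(GL_TRIANGLES, NULL, 3, 0);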
By far the simplest solution is going to be to use something like GLEW, which will load all of the extensions your driver supports (for versions up to OpenGL 4.4) at runtime. It takes the place of glext.h; all you have to do is initialize the library after you set up your render context.
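With GLEW, initialization is roughly (a sketch; the header must come before other GL headers, and the context must already be current):

#include <GL/glew.h>   // replaces glext.h; include before gl.h

// ... create the window and make the GL context current ...

glewExperimental = GL_TRUE;    // some GLEW versions need this for core profiles
if (glewInit() != GLEW_OK)
{
    // handle the error: extension entry points will not be available
}

// from here on GL_DRAW_INDIRECT_BUFFER and glMultiDrawArraysIndirect resolve normally
glMultiDrawArraysIndirect(GL_TRIANGLES, NULL, 3, 0);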

OpenGL compressed textures and extensions

I have an NVIDIA Quadro NVS 295/PCIe/SSE2 card. When I do glGetString(GL_EXTENSIONS), print out the values and grep for "compress", I get this list:
GL_ARB_compressed_texture_pixel_storage
GL_ARB_texture_compression
GL_ARB_texture_compression_rgtc
GL_EXT_texture_compression_dxt1
GL_EXT_texture_compression_latc
GL_EXT_texture_compression_rgtc
GL_EXT_texture_compression_s3tc
GL_NV_texture_compression_vtc
But then again, the glCompressedTexImage2D documentation says that glGet with GL_COMPRESSED_TEXTURE_FORMATS returns the supported compressed formats, which only gives
0x83f0 = GL_COMPRESSED_RGB_S3TC_DXT1_EXT
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
these three values.
Now why does glGet not expose the other compression formats that my card can process? Say LATC, RGTC or VTC?
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
Now why does glGet not expose the other compression formats that my card can process?
Because NVIDIA doesn't want to. And really, there's no point. The ARB even decided to deprecate (though not remove) the COMPRESSED_TEXTURE_FORMATS stuff from GL 4.3.
In short, don't rely on that particular glGet. Rely on the extensions. If you have GL 3.0+, then you have the RGTC formats; that's required by GL 3.0+. If you have EXT_texture_compression_s3tc, then you have the "DXT" formats. If you have EXT_texture_sRGB as well, then you have the sRGB versions of the "DXT" formats. And so forth.
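For example, on a 3.0+ context you can check for an extension like this (a sketch; HasExtension is just an illustrative helper, and glGetStringi may itself need to be loaded through the usual extension mechanism on Windows):

#include <cstring>

bool HasExtension(const char *name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char *ext = reinterpret_cast<const char *>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// usage:
if (HasExtension("GL_EXT_texture_compression_s3tc"))
{
    // the DXT1/DXT3/DXT5 formats can be passed to glCompressedTexImage2D
}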
Also why am I not seeing corresponding DXT3 or 5 extensions in the first list?
ahem:
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
Those are just different forms of S3TC.
why am I not seeing corresponding DXT3 or 5 extensions in the first list?
You are. They're covered by GL_EXT_texture_compression_s3tc.

How do I set the DPI of a scan using TWAIN in C++

I am using TWAIN in C++ and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead, the page just scans with set defaults and is stored for them. I need to set the DPI manually, but I cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION, but when I look at the image's info it always shows the same resolution no matter what I set through the ICAPs. Is there another way to set the resolution of a scanned-in image, or is there just an additional step that needs to be done that I cannot find anywhere in the documentation?
Thanks
I use ICAP_XRESOLUTION and ICAP_YRESOLUTION to set the scan resolution for a scanner, and it works at least for a number of HP scanners.
Code snippet:

// cap, val_p and ret_code are assumed to be declared elsewhere, e.g.:
// TW_CAPABILITY cap;  pTW_ONEVALUE val_p;  TW_UINT16 ret_code;
TW_FIX32 FloatToFIX32(float i_float);   // forward declaration, defined below

float x_res = 1200;
cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if (cap.hContainer)
{
    val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;

    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32)&fix32_val);

    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);   // SetCapability is the poster's wrapper around the DSM capability-set call
    GlobalFree(cap.hContainer);
}

TW_FIX32 FloatToFIX32(float i_float)
{
    TW_FIX32 Fix32_value;
    TW_INT32 value = (TW_INT32)(i_float * 65536.0 + 0.5);
    Fix32_value.Whole = LOWORD(value >> 16);
    Fix32_value.Frac  = LOWORD(value & 0x0000ffffL);
    return Fix32_value;
}
The value should be of type TW_FIX32, which is a fixed-point format TWAIN uses to pass floating-point values (strange but true).
I hope it works for you!
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the buggiest drivers out there. Controlling the scanning process with TWAIN has always been a big headache, because most drivers have never been tested without the scan dialog.
As far as I know there is also no test suite for TWAIN drivers, so each of them will behave slightly differently.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners and a scanner module with lots of hacks and workarounds for each different driver.
Take ICAP_XRESOLUTION, for example: the TWAIN documentation says you have to send the resolution as a 32-bit float. Have you tried setting it using an integer instead? Or sending it as a float but putting the bit representation of an integer into the float, or vice versa? All of this could work for the driver you're working with. Or it could not work at all.
I doubt the situation has changed much since then. So good luck getting it working on at least half of the machines out there.