I have an NVIDIA Quadro NVS 295/PCIe/SSE2 card. When I call glGetString(GL_EXTENSIONS), print out the values, and grep for "compress", I get this list:
GL_ARB_compressed_texture_pixel_storage
GL_ARB_texture_compression
GL_ARB_texture_compression_rgtc
GL_EXT_texture_compression_dxt1
GL_EXT_texture_compression_latc
GL_EXT_texture_compression_rgtc
GL_EXT_texture_compression_s3tc
GL_NV_texture_compression_vtc
But then again, the glCompressedTexImage2D documentation says that glGet with GL_COMPRESSED_TEXTURE_FORMATS returns the supported compressed formats, which gives only
0x83f0 = GL_COMPRESSED_RGB_S3TC_DXT1_EXT
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
these three values.
Now why does glGet not expose the other compression formats that my card can process, say LATC, RGTC, or VTC?
Also, why am I not seeing corresponding DXT3 or DXT5 extensions in the first list?
Now why does glGet not expose the other compression formats that my card can process?
Because NVIDIA doesn't want to. And really, there's no point. The ARB even decided to deprecate (though not remove) the COMPRESSED_TEXTURE_FORMATS query as of GL 4.3.
In short, don't rely on that particular glGet. Rely on the extensions. If you have GL 3.0+, then you have the RGTC formats; that's required by GL 3.0+. If you have EXT_texture_compression_s3tc, then you have the "DXT" formats. If you have EXT_texture_sRGB as well, then you have the sRGB versions of the "DXT" formats. And so forth.
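As a minimal sketch of such a check (assuming a GL 3.0+ context where glGetStringi and GL_NUM_EXTENSIONS are available; on older contexts, search the single string returned by glGetString(GL_EXTENSIONS) instead):

#include <string.h>

int has_extension(const char *name)
{
    GLint count = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (i = 0; i < count; ++i)
        if (strcmp((const char *) glGetStringi(GL_EXTENSIONS, i), name) == 0)
            return 1;   /* extension is advertised by the driver */
    return 0;
}

/* e.g. if (has_extension("GL_EXT_texture_compression_s3tc")) the DXT1/3/5 formats are usable */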
Also, why am I not seeing corresponding DXT3 or DXT5 extensions in the first list?
ahem:
0x83f2 = GL_COMPRESSED_RGBA_S3TC_DXT3_EXT
0x83f3 = GL_COMPRESSED_RGBA_S3TC_DXT5_EXT
Those are just different forms of S3TC.
why am I not seeing corresponding DXT3 or DXT5 extensions in the first list?
You are. They're covered by GL_EXT_texture_compression_s3tc.
I am developing a TUI application for myself using the ncurses library (running on Linux).
I cannot seem to find much info regarding the use of a "strikethrough/strikeout" text attribute when adding a string to an ncurses window using addstr and friends.
The only information I've found online was on this site:
https://midnight-commander.org/ticket/3264
Ncurses will not add [strikethrough text] because the bitfield is already fully packed.
I was wondering if there are any workarounds to this, or any official way to do this.
Any help would be appreciated.
Thanks.
ncurses has 16 bits allocated for video attributes. SVr4 curses used 8; X/Open Curses added 7. Those 15 are defined for X/Open Curses compatibility.
Referring to the X/Open Curses documentation, there are two sets of definitions:
A_ALTCHARSET Alternate character set
A_BLINK Blinking
A_BOLD Extra bright or bold
A_DIM Half bright
A_INVIS Invisible
A_PROTECT Protected
A_REVERSE Reverse video
A_STANDOUT Best highlighting mode of the terminal
A_UNDERLINE Underlining
and
WA_ALTCHARSET Alternate character set
WA_BLINK Blinking
WA_BOLD Extra bright or bold
WA_DIM Half bright
WA_HORIZONTAL Horizontal highlight
WA_INVIS Invisible
WA_LEFT Left highlight
WA_LOW Low highlight
WA_PROTECT Protected
WA_REVERSE Reverse video
WA_RIGHT Right highlight
WA_STANDOUT Best highlighting mode of the terminal
WA_TOP Top highlight
WA_UNDERLINE Underlining
WA_VERTICAL Vertical highlight
depending on whether the bits are stored in an attr_t or a chtype (X/Open and SVr4, respectively). In ncurses, those are the same (see the manual page), so it does not matter whether one refers to A_BOLD or WA_BOLD (Solaris xpg4 curses does store those differently).
Discounting the A_ versus WA_ prefix, the two lists are different. The newer names from X/Open Curses are rarely used. Although ncurses doesn't know what such a highlight would look like on the screen, someone could add the corresponding terminfo capability to a terminal description and ncurses would handle it.
The terminfo manual page mentions these:
The XSI Curses standard added these hardcopy capabilities. They were used in some post-4.1 versions of System V curses, e.g., Solaris 2.5 and IRIX 6.x. Except for YI, the ncurses termcap names for them are invented. According to the XSI Curses standard, they have no termcap names. If your compiled terminfo entries use these, they may not be binary-compatible with System V terminfo entries after SVr4.1; beware!
(How to modify a terminal description is explained in thousands of webpages and is off-topic for this forum.)
Possible attributes in ncurses are:
A_NORMAL Normal display (no highlight)
A_STANDOUT Best highlighting mode of the terminal.
A_UNDERLINE Underlining
A_REVERSE Reverse video
A_BLINK Blinking
A_DIM Half bright
A_BOLD Extra bright or bold
A_PROTECT Protected mode
A_INVIS Invisible or blank mode
A_ALTCHARSET Alternate character set
A_CHARTEXT Bit-mask to extract a character
COLOR_PAIR(n) Color-pair number n
Functions like attron(), attroff(), and attrset() may be used to work with attributes.
Strikethrough is not and will not be available.
If you know your terminal, want your software to work only on such a terminal type, AND the terminal supports strikethrough, then you can use control characters or escape sequences to activate such functionality.
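For example, here is a minimal sketch of that route outside of curses (assuming a terminal emulator that implements SGR 9/29, such as xterm or the VTE-based terminals; curses is bypassed entirely):

#include <stdio.h>

int main(void)
{
    /* SGR 9 turns strikethrough on, SGR 29 turns it off */
    printf("\x1b[9mstruck-through text\x1b[29m normal text\n");
    return 0;
}

Mixing raw escape sequences with curses output is fragile, though, since curses assumes it owns the screen state.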
You can use Unicode for that:
(I know it's an old question, but I had a similar issue, and this is the top result for "curses strikethrough" on Google, so this answer might be helpful to someone.)
I made it work using Python, but the strategy should work in any language:
import curses

def strike(text: str) -> str:
    # See <https://stackoverflow.com/a/25244576/4039050>
    return "\u0336".join(text) + "\u0336"

def main(stdscr):
    message = "The quick brown fox jumps over the lazy dog."
    stdscr.addstr(strike(message))
    stdscr.refresh()
    stdscr.getch()

if __name__ == "__main__":
    curses.wrapper(main)
I'm trying to use the OCaml Graphics package. I want to create a GUI for my chat server application. My code is:
let window = Graphics.open_graph "";
Graphics.set_window_title "caml-chat";
Graphics.set_font "ubuntu";
Graphics.set_text_size 12;
Graphics.draw_string "hello!"
However, Graphics.set_font "ubuntu" does not work. The documentation says that the string argument is system-dependent, but I cannot find any more information than that. The only mention I found was in the answers to this question, and it didn't work.
Does anyone know anything else about setting the font? (Or can point me in the direction of a simple graphics library with better documentation?)
Although you didn't specify your system, I will assume that it is Linux (I doubt that Windows has an ubuntu font).
On Linux, the set_font function passes its argument to Xlib's XLoadFont function. You can use the fc-list or xfontsel utilities to query the available fonts on your system, or call the XListFonts function directly.
For example,
fc-list | cut -d: -f2 | sort -u
will give you a list of font families, which you can pass to the set_font function. Some lines will have more than one family per line, separated with a comma (,). There are many more options; you can specify various styles, sizes, etc., but that is out of scope here. You can read the fontconfig guide to learn more about the font subsystem. For example, [here], in the section "Font Names", you can find an explanation of how a font name is constructed.
Has anyone successfully used glMultiDrawArraysIndirect? I'm including the latest glext.h, but the compiler can't seem to find the function. Do I need to define something (#define ...) before including glext.h?
error: ‘GL_DRAW_INDIRECT_BUFFER’ was not declared in this scope
error: ‘glMultiDrawArraysIndirect’ was not declared in this scope
I'm trying to implement an OpenGL SuperBible example. Here are snippets from the source code:
GLuint indirect_draw_buffer;
glGenBuffers(1, &indirect_draw_buffer);
glBindBuffer(GL_DRAW_INDIRECT_BUFFER, indirect_draw_buffer);
glBufferData(GL_DRAW_INDIRECT_BUFFER,
             NUM_DRAWS * sizeof(DrawArraysIndirectCommand),
             draws,
             GL_STATIC_DRAW);
....
// fill the buffers
.....
glMultiDrawArraysIndirect (GL_TRIANGLES, NULL, 3, 0);
I'm on Linux with a Quadro 2000 and the latest drivers installed (NVIDIA 319.60).
You cannot simply #include <glext.h> and expect this problem to fix itself. This header is only half of the equation: it defines the basic constants, function signatures, typedefs, etc. used by OpenGL extensions, but it does not actually solve the problem of extending OpenGL.
On most platforms you are guaranteed a certain version of OpenGL (1.1 on Windows), and to use any part of OpenGL newer than this version you must extend the API at runtime. Linux is no different: in order to use glMultiDrawArraysIndirect (...) you have to load this extension from the driver at runtime. This usually means setting up function pointers that are NULL until runtime in order to keep the compiler/linker happy.
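For instance, a minimal sketch of doing this by hand on Linux (assuming a GLX-based context; the PFNGLMULTIDRAWARRAYSINDIRECTPROC typedef comes from glext.h and glXGetProcAddress from GL/glx.h):

#include <GL/gl.h>
#include <GL/glext.h>
#include <GL/glx.h>

/* NULL until resolved at runtime, to keep the compiler/linker happy */
static PFNGLMULTIDRAWARRAYSINDIRECTPROC pglMultiDrawArraysIndirect = NULL;

/* call once after the context has been created and made current */
void load_entry_points(void)
{
    pglMultiDrawArraysIndirect = (PFNGLMULTIDRAWARRAYSINDIRECTPROC)
        glXGetProcAddress((const GLubyte *) "glMultiDrawArraysIndirect");
}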
By far the simplest solution is going to be to use something like GLEW, which will load all of the extensions your driver supports, for versions up to OpenGL 4.4, at runtime. It will take the place of glext.h; all you have to do is initialize the library after you set up your render context.
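A sketch of that initialization (the only GLEW calls used here are glewInit and glewGetErrorString; everything around context creation is left out):

#include <GL/glew.h>   /* must be included before other GL headers */
#include <stdio.h>

/* call once, after the OpenGL context has been created and made current */
int init_glew(void)
{
    GLenum err = glewInit();
    if (err != GLEW_OK)
    {
        fprintf(stderr, "glewInit failed: %s\n", glewGetErrorString(err));
        return 0;
    }
    return 1;   /* extension entry points, including glMultiDrawArraysIndirect, are now resolved */
}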
I am using TWAIN in C++, and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead, the page just scans with the set defaults and is stored. I need to set the DPI manually, but I cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION, but when I look at the image's info it always shows the same resolution, no matter what I set via the ICAPs. Is there another way to set the resolution of a scanned-in image, or is there just an additional step that needs to be done that I cannot find in the documentation anywhere?
Thanks
I use ICAP_XRESOLUTION and ICAP_YRESOLUTION to set the scan resolution for a scanner, and it works at least for a number of HP scanners.
Code snippet:
// cap is a TW_CAPABILITY; SetCapability() is a helper (not shown) that
// issues the DG_CONTROL / DAT_CAPABILITY / MSG_SET triplet to the data source
TW_CAPABILITY cap;
pTW_ONEVALUE val_p;
TW_UINT16 ret_code;
float x_res = 1200;

cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if (cap.hContainer)
{
    val_p = (pTW_ONEVALUE) GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;
    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32) &fix32_val);
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}
TW_FIX32 FloatToFIX32(float i_float)
{
    TW_FIX32 Fix32_value;
    TW_INT32 value = (TW_INT32) (i_float * 65536.0 + 0.5);
    Fix32_value.Whole = LOWORD(value >> 16);
    Fix32_value.Frac = LOWORD(value & 0x0000ffffL);
    return Fix32_value;
}
The value should be of type TW_FIX32, which is a fixed-point representation of fractional values defined by TWAIN (strange but true).
I hope it works for you!
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the most buggy drivers out there. Controlling the scanning process with TWAIN has always been a big headache because most drivers have never been tested without the scan dialog.
As far as I know there is also no test-suite for twain-drivers, so each of them will behave slightly different.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners and a scanner module with lots of hacks and workarounds for each different driver.
Take ICAP_XRESOLUTION, for example: the TWAIN documentation says you have to send the resolution as a 32-bit float. Have you tried setting it using an integer instead? Or sending it as a float, but putting the bit representation of an integer into the float, or vice versa? Any of this could work for the driver you're working with. Or it could not work at all.
I doubt the situation has changed much since then. So good luck getting it working on at least half of the machines that are out there.
How do you enable vertical sync?
Is it something simple like glEnable(GL_VSYNC)? (Though there's no such thing as GL_VSYNC, or anything like it, among the options that glEnable accepts.)
Or is there no standard way to do this in OpenGL?
On Windows there is the OpenGL extension function wglSwapIntervalEXT.
From the post by b2b3 http://www.gamedev.net/community/forums/topic.asp?topic_id=360862:
If you are working on Windows, you have to use extensions to use the wglSwapIntervalEXT function. It is defined in wglext.h. You will also want to download the glext.h file.
In the wglext file, all entry points for Windows-specific extensions are declared. All such functions start with the prefix wgl.
To get more info about all published extensions, you can look into the OpenGL Extension Registry.
wglSwapIntervalEXT is from the WGL_EXT_swap_control extension. It lets you specify the minimum number of frames before each buffer swap. Usually it is used for vertical synchronization (if you set the swap interval to 1). More info about the whole extension can be found here.
Before using this function, you need to query whether your card has support for WGL_EXT_swap_control and then obtain a pointer to the function using the wglGetProcAddress function.
To test for support of a given extension, you can use a function like this:
#include <windows.h>
#include <cstring>   // for strstr
#include "wglext.h"

bool WGLExtensionSupported(const char *extension_name)
{
    // this is a pointer to a function which returns a pointer to a string with the list of all wgl extensions
    PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;

    // determine the pointer to the wglGetExtensionsStringEXT function
    _wglGetExtensionsStringEXT = (PFNWGLGETEXTENSIONSSTRINGEXTPROC) wglGetProcAddress("wglGetExtensionsStringEXT");

    if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL)
    {
        // string was not found
        return false;
    }

    // extension is supported
    return true;
}
To initialize your function pointers you need to:
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;

if (WGLExtensionSupported("WGL_EXT_swap_control"))
{
    // Extension is supported, init pointers.
    wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress("wglSwapIntervalEXT");

    // this is another function from the WGL_EXT_swap_control extension
    wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC) wglGetProcAddress("wglGetSwapIntervalEXT");
}
Then you can use these pointers like any other function pointers. To enable vsync you can call wglSwapIntervalEXT(1); to disable it, call wglSwapIntervalEXT(0).
To get current swap interval you need to call wglGetSwapIntervalEXT().
WGL case is described in the answer by eugensk00.
For CGL (MacOSX) see this answer to another SO question.
For EGL there's the eglSwapInterval() function, but apparently (see this and this) it doesn't guarantee a tearing-free result: it only waits for the given period (maybe that's just due to broken drivers).
For GLX (Linux with X11 etc.) there are at least 3 similar extensions for this, with varying degrees of functionality. The OpenGL wiki currently lists only one, which is unsupported by Mesa <= 10.5.9 (and maybe higher). Here's a list from the most feature-complete extension (the one listed in the OpenGL wiki) to the least (a usage sketch follows the list):
GLX_EXT_swap_control
Set swap interval per-drawable per-display: glXSwapIntervalEXT(dpy, drawable, interval)
Get current swap interval: glXQueryDrawable(dpy, drawable, GLX_SWAP_INTERVAL_EXT, &interval)
Get maximum swap interval: glXQueryDrawable(dpy, drawable, GLX_MAX_SWAP_INTERVAL_EXT, &maxInterval)
Disable Vsync: set interval to 0
GLX_MESA_swap_control
Set swap interval per-context: glXSwapIntervalMESA(interval)
Get current swap interval: glXGetSwapIntervalMESA()
Get maximum swap interval: unsupported
Disable Vsync: set interval to 0
GLX_SGI_swap_control
Set swap interval: glXSwapIntervalSGI(interval).
Get current swap interval: unsupported
Get maximum swap interval: unsupported
Disable Vsync: unsupported (interval==0 is an error)
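As an illustration, a minimal sketch for the most feature-complete of these, GLX_EXT_swap_control (assuming the extension appears in the string returned by glXQueryExtensionsString; the PFNGLXSWAPINTERVALEXTPROC typedef comes from GL/glxext.h):

#include <GL/glx.h>
#include <GL/glxext.h>

void enable_vsync(Display *dpy, GLXDrawable drawable)
{
    PFNGLXSWAPINTERVALEXTPROC glXSwapIntervalEXT = (PFNGLXSWAPINTERVALEXTPROC)
        glXGetProcAddress((const GLubyte *) "glXSwapIntervalEXT");

    if (glXSwapIntervalEXT)
        glXSwapIntervalEXT(dpy, drawable, 1);   /* 1 = wait for one v-blank per swap; 0 disables vsync */
}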
For adaptive Vsync see OpenGL wiki.
((BOOL(WINAPI*)(int))wglGetProcAddress("wglSwapIntervalEXT"))(1);
https://www.khronos.org/opengl/wiki/Swap_Interval
"wglSwapIntervalEXT(1) is used to enable vsync; wglSwapIntervalEXT(0) to disable vsync."
"A swap interval of 1 tells the GPU to wait for one v-blank before swapping the front and back buffers. A swap interval of 0 specifies that the GPU should never wait for v-blanks"
Alternatively: (wgl function typedefs are in #include <GL/wglext.h>)
((PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT"))(1);
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
wglSwapIntervalEXT(1);
For the WGL case described in the answer by eugensk00: if you run into a nullptr error, make sure you are running the wglGetProcAddress code in an OpenGL context, i.e., after something like glfwMakeContextCurrent(window). See the answer here.