C++ How to detect graphics card model

I'm writing an application that can be sped up a lot if a graphics card is available.
I've got the necessary DLLs to make my application use NVIDIA and AMD cards, but it has to be launched with specific command-line arguments depending on which card is available.
I'd like to make an installer that will detect the specific brand of GPU and then launch the real application with the necessary command line arguments.
What's the best way to detect the type of card?

You can detect it at runtime using OpenGL: call glGetString(GL_VENDOR) (and GL_RENDERER or GL_VERSION for more detail).
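For example, a minimal sketch (assuming a valid OpenGL context has already been created and made current, e.g. with GLFW or SDL):
#include <GL/gl.h>
#include <cstdio>

// Print the vendor/renderer strings of the GPU driving the current context.
void PrintGpuInfo()
{
    const char* vendor   = reinterpret_cast<const char*>(glGetString(GL_VENDOR));   // e.g. "NVIDIA Corporation"
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER)); // e.g. the card model
    if (vendor && renderer)
        std::printf("Vendor: %s, Renderer: %s\n", vendor, renderer);
}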
To do the same using the Direct3D 9 API:
Step 1
D3DADAPTER_IDENTIFIER9 AdapterIdentifier;
Step 2
m_pD3D->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &AdapterIdentifier);
Step 3 Get the max size of the graphics card identifier string
const int cch = sizeof(AdapterIdentifier.Description);
Step 4 Define a TCHAR buffer to hold the description
TCHAR szDescription[cch];
Step 5 Use the DX utility function to convert the ANSI string to a TCHAR string
DXUtil_ConvertAnsiStringToGenericCch( szDescription, AdapterIdentifier.Description, cch );
Credit goes to: Anonymous_Poster_* # http://www.gamedev.net/topic/358770-obtain-video-card-name-size-etc-in-c/
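Pieced together, the steps above amount to something like this minimal sketch (assuming d3d9.lib is linked; it reads the ANSI Description field directly instead of converting to TCHAR):
#include <d3d9.h>
#include <cstdio>

// Print the description of the default display adapter via Direct3D 9.
void PrintAdapterDescription()
{
    IDirect3D9* pD3D = Direct3DCreate9(D3D_SDK_VERSION);
    if (!pD3D) return;

    D3DADAPTER_IDENTIFIER9 id = {};
    if (SUCCEEDED(pD3D->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id)))
        std::printf("GPU: %s (vendor id 0x%04lX)\n", id.Description, static_cast<unsigned long>(id.VendorId));

    pD3D->Release();
}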

#mark-setchell already posted the link from superuser.com above; I just want to make the solution easier for people to find.
wmic path win32_VideoController get name
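If you'd rather do this from C++ than from a batch file, a rough sketch is to run that same query and capture its output with the Microsoft CRT's _popen (Windows only):
#include <cstdio>
#include <iostream>
#include <string>

int main()
{
    std::string output;
    // Run the WMIC query and collect everything it prints to stdout.
    if (FILE* pipe = _popen("wmic path win32_VideoController get name", "r"))
    {
        char buffer[256];
        while (std::fgets(buffer, sizeof(buffer), pipe))
            output += buffer;
        _pclose(pipe);
    }
    std::cout << output; // e.g. "Name\nNVIDIA GeForce GTX 1080\n"
    return 0;
}

You can then search the captured string for "NVIDIA" or "AMD" to decide which command-line arguments to launch the real application with.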

Related

Fatal error: exception Graphics.Graphic_failure("Cannot open display ")

I'm trying to run the code below and it keeps showing the same error.
I start by compiling with ocamlc -o cardioide graphics.cma cardioide.ml and it appears to work, but then I run ./cardioide and the message Fatal error: exception Graphics.Graphic_failure("Cannot open display ") appears.
I've searched all across the internet and I can't find the solution. Can someone please help me?
Thank you
open Graphics
let () = open_graph "300x20"
let () =
  moveto 200 150;
  for i = 0 to 200 do
    let th = atan 1. *. float i /. 25. in
    let r = 50. *. (1. -. sin th) in
    lineto (150 + truncate (r *. cos th))
           (150 + truncate (r *. sin th))
  done;
  ignore (read_key ())
Error message:
Fatal error: exception Graphics.Graphic_failure("Cannot open display ")
The string argument to the open_graph function is not the size or title; it is implementation-dependent information that is passed to the underlying graphical subsystem (in X11 it is the screen number). In modern OCaml, optional arguments are passed using labels, but Graphics was written long before this feature was introduced to the language. Therefore, you have to pass an empty string there (if you don't want to pass any implementation-specific information to the underlying graphical subsystem), e.g.,
open_graph ""
will do the work for you in a system-independent way.
Besides, if you want to resize the window, you can use the resize_window function, and to set the title, use set_window_title.
For historic reference, the string parameter passed to open_graph has the following syntax (it is no longer documented, so there is no reason to believe that it will still be respected):
Here are the graphics mode specifications supported by
Graphics.open_graph on the X11 implementation of this library: the
argument to Graphics.open_graph has the format "display-name
geometry", where display-name is the name of the X-windows display
to connect to, and geometry is a standard X-windows geometry
specification. The two components are separated by a space. Either
can be omitted, or both. Examples:
Graphics.open_graph "foo:0" connects to the display foo:0 and creates a
window with the default geometry
Graphics.open_graph "foo:0 300x100+50-0" connects to the display foo:0 and
creates a window 300 pixels wide by 100 pixels tall, at location (50,0)
Graphics.open_graph " 300x100+50-0" connects to the default display and
creates a window 300 pixels wide by 100 pixels tall, at location (50,0)
Graphics.open_graph "" connects to the default display and creates a
window with the default geometry.
Put a space at the start of the argument to get the window you want (and the height should be 200 for your cardioid):
let () = open_graph " 300x200"
I had the same problem; it was because I was using the Windows Subsystem for Linux (WSL), which needs an X server to run graphical applications. The Ubuntu Wiki for WSL helped solve the problem: I downloaded and installed MobaXterm (it has a free community version!), and it automatically detects WSL and runs it inside the app. Try the same code again and a graphical window will pop up!

D3D11: Reference Rasterizer vs WARP

I have a test for a pixel shader that does some rendering and compares the result to a reference image to verify that the shader produces an expected output. When this test is run on a CI machine, it is on a VM without a GPU, so I call D3D11CreateDevice with D3D_DRIVER_TYPE_REFERENCE to use the reference rasterizer. We have been doing this for years without issue on a Windows 7 VM.
We are now trying to move to a Windows 10 VM for our CI tests. When I run the test here, various API calls start failing after some number of successful tests (on the order of 5000-10000) with DXGI_ERROR_DEVICE_REMOVED, and calling GetDeviceRemovedReason returns DXGI_ERROR_DRIVER_INTERNAL_ERROR. After some debugging I've found that the failure originates during a call to ID3D11DeviceContext::PSSetShader (yes, this returns void, but I found this via a breakpoint in KernelBase.dll!RaiseException). This call looks exactly like the thousands of previous calls to PSSetShader as far as I can tell. It doesn't appear to be a resource issue, the process is only using 8MB of memory when the error occurs, and the handle count is not growing.
I can reproduce the issue on multiple Win10 systems, and it succeeds on multiple Win7 systems. The big difference between the two is that on Win7, the API calls are going through d3d11ref.dll, and on Win10 they are going through d3d10warp.dll. I am not really familiar with what the differences are or why one or the other would be chosen, and MSDN's documentation is quite opaque on the subject. I know that d3d11ref.dll and d3d10warp.dll are both present on the failing and the passing systems; I don't know what the logic is for one or the other being loaded for the same set of calls, or why the d3d10warp library fails.
So, can someone explain the difference between the two, and/or suggest how I could get d3d11ref.dll to load in Windows 10? As far as I can tell it is a bug in d3d10warp.dll and for now I would just like to side-step it.
In case it matters, I am calling D3D11CreateDevice with the desired feature level set to D3D_FEATURE_LEVEL_11_0, and I verify that the same level is returned as achieved. I am passing 0 for creationFlags, and my D3D11_SDK_VERSION is defined as 7 in d3d11.h. Below is the call stack above PSSetShader when the failure occurs. This seems to be the first call that fails, and every call after it with a return code also fails.
KernelBase.dll!RaiseException()
KernelBase.dll!OutputDebugStringA()
d3d11.dll!CDevice::RemoveDevice(long)
d3d11.dll!NDXGI::CDevice::RemoveDevice()
d3d11.dll!CContext::UMSetError_()
d3d10warp.dll!UMDevice::MSCB_SetError(long,enum UMDevice::DDI_TYPE)
d3d10warp.dll!UMContext::SetShaderWithInterfaces(enum PIXELJIT_SHADER_STAGE,struct D3D10DDI_HSHADER,unsigned int,unsigned int const *,struct D3D11DDIARG_POINTERDATA const *)
d3d10warp.dll!UMDevice::PsSetShaderWithInterfaces(struct D3D10DDI_HDEVICE,struct D3D10DDI_HSHADER,unsigned int,unsigned int const *,struct D3D11DDIARG_POINTERDATA const *)
d3d11.dll!CContext::TID3D11DeviceContext_SetShaderWithInterfaces_<1,4>(class CContext *,struct ID3D11PixelShader *,struct ID3D11ClassInstance * const *,unsigned int)
d3d11.dll!CContext::TID3D11DeviceContext_SetShader_<1,4>()
MyTest.exe!MyFunctionThatCallsPSSetShader()
Update: With the D3D Debug layers enabled, I get the following additional output when the error occurs:
D3D11: Removing Device.
D3D11 WARNING: ID3D11Device::RemoveDevice: Device removal has been triggered for the following reason (DXGI_ERROR_DRIVER_INTERNAL_ERROR: There is strong evidence that the driver has performed an undefined operation; but it may be because the application performed an illegal or undefined operation to begin with.). [ EXECUTION WARNING #379: DEVICE_REMOVAL_PROCESS_POSSIBLY_AT_FAULT]
D3D11 ERROR: ID3D11DeviceContext::Map: Returning DXGI_ERROR_DEVICE_REMOVED, when a Resource was trying to be mapped with READ or READWRITE. [ RESOURCE_MANIPULATION ERROR #2097214: RESOURCE_MAP_DEVICEREMOVED_RETURN]
The third line about the call to Map happens after my test fails to notice and handle the device removed and later tries to map a texture, so I don't think that's related. The other is about what I expected; there's an error in the driver, and possibly my test is doing something bad to cause it. I still don't know what that might be, or why it worked in Windows 7.
Update 2: I have found that if I run my tests on Windows 10 in Windows 7 compatibility mode, there is no device removed error and all of my tests pass. It is still using d3d10warp.dll instead of d3d11ref.dll, so that wasn't exactly the problem. I'm not sure how to investigate "what am I doing that's incompatible with Windows 10 or its WARP device"; this might need to be a Microsoft support ticket.
The problem is that you haven't enabled the Windows 10 optional feature "Graphics Tools" on that system. That is how you install the DirectX 11/12 Debug Runtime on Windows 10 including Direct3D 11's reference device, WARP for DirectX 12, the DirectX SDK debug layer for DX11/DX12, etc.
WARP for DirectX 11 is available on all systems, not just those with the "Graphics Tools" feature. Generally speaking, most people have switched to using WARP instead of the software reference driver since it is a lot faster. If you are having crashes under WARP, you should investigate the source of those crashes by enabling the DEBUG device.
See this blog post.
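For reference, a minimal sketch of explicitly requesting the reference driver together with the debug layer (both need the "Graphics Tools" optional feature installed; d3d11.lib is assumed to be linked):
#include <d3d11.h>

HRESULT CreateReferenceDeviceWithDebug(ID3D11Device** device, ID3D11DeviceContext** context)
{
    const D3D_FEATURE_LEVEL requested = D3D_FEATURE_LEVEL_11_0;
    D3D_FEATURE_LEVEL achieved = {};
    return D3D11CreateDevice(
        nullptr,                   // default adapter
        D3D_DRIVER_TYPE_REFERENCE, // software reference rasterizer (exact but slow)
        nullptr,                   // no software rasterizer DLL
        D3D11_CREATE_DEVICE_DEBUG, // enable the SDK debug layer
        &requested, 1,
        D3D11_SDK_VERSION,
        device, &achieved, context);
}

Swap in D3D_DRIVER_TYPE_WARP if you only need a fast software device rather than bit-exact reference behaviour.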

What is the argument for Graphics.set_font from the OCaml Graphics package?

I'm trying to use the OCaml Graphics package. I want to create a GUI for my chat server application. My code is:
let window = Graphics.open_graph "";
Graphics.set_window_title "caml-chat";
Graphics.set_font "ubuntu";
Graphics.set_text_size 12;
Graphics.draw_string "hello!"
However, Graphics.set_font "ubuntu" does not work. The documentation says that the string argument is system dependent, but I cannot find any more information than that. The only mention I found was in the answers to this question, and it didn't work.
Does anyone know anything else about setting the font? (Or can point me in the direction of a simple graphics library with better documentation?)
Although you didn't specify your system, I will assume that it is Linux (I doubt that Windows has an ubuntu font).
On Linux, the set_font function passes its argument to Xlib's XLoadFont function. You can use the fc-list or xfontsel utilities to query the available fonts on your system, or call the XListFonts function directly.
For example,
fc-list | cut -d: -f2 | sort -u
will give you a list of font families, which you can pass to the set_font function. Some lines will have more than one family, separated with a comma (,). There are many more options; you can specify various styles, sizes, etc., but that is all out of scope here. You can read the fontconfig guide to learn more about the font subsystem; for example, its "Font Names" section explains how a font name is constructed.

Directx get VRAM used by game

I'm trying to get the total amount of VRAM my game is currently using. I want to display this in my debug information.
When I was using the Visual Studio Graphics Analyzer I got an idea.
I figured that I could get the amount of VRAM used by adding the size of each of the graphic objects as seen in the Graphics Object Table.
Unfortunately I have no idea how to get each of those objects. Is there a simple way to get these?
I actually found an easier way to do this:
#include <dxgi1_4.h>
...
// Create a DXGI 1.4 factory (requires the Windows 10 SDK).
IDXGIFactory4* pFactory = nullptr;
CreateDXGIFactory1(__uuidof(IDXGIFactory4), (void**)&pFactory);

// Enumerate the default adapter (ID 0); on Windows 10 it also implements IDXGIAdapter3.
IDXGIAdapter3* adapter = nullptr;
pFactory->EnumAdapters(0, reinterpret_cast<IDXGIAdapter**>(&adapter));

// Query the local (on-card) memory segment; CurrentUsage is reported in bytes.
DXGI_QUERY_VIDEO_MEMORY_INFO videoMemoryInfo = {};
adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &videoMemoryInfo);

size_t usedVRAM = videoMemoryInfo.CurrentUsage / 1024 / 1024; // bytes -> MB
This gets the currently used VRAM from the default (ID 0) adapter and converts it to megabytes.
Note: this requires the Windows 10 SDK (dxgi1_4.h, IDXGIAdapter3).

CImg Error : 'gm.exe' is not recognized as an internal or external command,

I am new to C++ programming; today I was trying to save an image using CImg.
CImg is the C++ Template Image Processing Library.
The basic code I wrote is (please forgive any syntax errors, as I copied only part of my code):
#include "CImg.h"// Include CImg library header.
#include <iostream>
using namespace cimg_library;
using namespace std;
const int screen_size = 800;
//-------------------------------------------------------------------------------
// Main procedure
//-------------------------------------------------------------------------------
int main()
{
CImg<unsigned char> img(screen_size,screen_size,1,3,20);
CImgDisplay disp(img, "CImg Tutorial");
//Some drawing using img.draw_circle( 10, 10, 60, RED);
img.save("result.jpg"); // save the image
return 0;
}
But when I run my program it fails with:
Invalid Parameter - 100%
'gm.exe' is not recognized as an internal or external command,
operable program or batch file.
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
[CImg] *** CImgIOException *** [instance(800,800,1,3,02150020,non-shared)] CImg<unsigned char>::save_other() : Failed to save file 'result.jpg'. Format is not natively supported, and no external commands succeeded.
terminate called after throwing an instance of 'cimg_library::CImgIOException'
what(): [instance(800,800,1,3,02150020,non-shared)] CImg<unsigned char>::save_other() : Failed to save file 'result.jpg'. Format is not natively supported, and no external commands succeeded.
Though I can see the image, I cannot save it.
After googling a bit I found people saying to install ImageMagick; I have installed it, but it did not help.
Some forums say to compile against libjpeg, libpng, or libmagick++, but I don't know how to compile against those libraries.
I am using the Eclipse CDT plugin to write the C++ project.
Please help me.
I had the same error, and installing GraphicsMagick (not ImageMagick) helped me.
I downloaded and installed GraphicsMagick-1.3.26-Q8-win64-dll.exe from ftp://ftp.graphicsmagick.org/pub/GraphicsMagick/windows/. You may choose another version if you need:
Note that the QuantumDepth=8 version (Q8) which provides industry
standard 24/32 bit pixels consumes half the memory and about 30% less
CPU than the QuantumDepth=16 version (Q16) which provides 48/64 bit
pixels for high-resolution color. A Q8 version is fine for processing
typical photos intended for viewing on a computer screen. If you are
dealing with film, scientific, or medical images, use ICC color
profiles, or deal with images that have limited contrast, then the Q16
version is recommended.
Important: during installation, don't clear the checkbox "Update executable search path", which updates the %PATH% environment variable, making gm.exe available from anywhere.
In my case, it was also required to install Ghostscript, which GraphicsMagick highly recommends installing. Here is a link to x64 Ghostscript: https://sourceforge.net/projects/ghostscript/files/GPL%20Ghostscript/9.09/gs909w64.exe/download (I've put it here because the links from the GraphicsMagick website lead to the 32-bit version only).
After that, it worked fine for me.
For some image formats (such as .jpg, .png, .tif, and basically all formats that require data compression), CImg will try to use an external tool to save them (such as convert from ImageMagick or gm from GraphicsMagick).
If you don't have any installed, then you won't be able to save .jpg files unless you link your code with the libjpeg library to get native JPEG read/write support (in that case, you'll need to #define cimg_use_jpeg before #include "CImg.h" to tell the library you want to use the libjpeg features).
If you want to keep things simpler, I'd recommend saving your image in another (non-compressed) image format, such as .bmp or .ppm.
These formats are handled natively by CImg and do not require linking with external libraries.
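As an illustration, here is a minimal sketch of both routes (assuming, for the JPEG path, that the libjpeg headers and library are installed and linked, e.g. with -ljpeg):
#define cimg_use_jpeg          // must appear before including CImg.h to enable native JPEG I/O
#include "CImg.h"
using namespace cimg_library;

int main()
{
    CImg<unsigned char> img(800, 800, 1, 3, 20);
    img.save("result.jpg");    // native libjpeg path, no gm.exe/convert needed
    img.save("result.bmp");    // .bmp is always written natively by CImg
    return 0;
}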
I know this question is old, but I kept getting the same error on one project and not on another and this is the only thing on Google.
To get rid of it, you must do two things:
1. Install the dynamic ImageMagick libraries for your OS and architecture (32/64-bit). Link
2. I was using Visual Studio, and the character set must be set to "Unicode". The error would appear again when I reverted to the Multi-Byte character set. I guess this has something to do with the way CImg handles strings and miscompares them.