I'm trying to get the total amount of VRAM my game is currently using. I want to display this in my debug information.
When I was using the Visual Studio Graphics Analyzer I got an idea.
I figured that I could get the amount of VRAM used by adding up the sizes of the graphics objects listed in the Graphics Object Table.
Unfortunately I have no idea how to get each of those objects. Is there a simple way to get these?
I actually found an easier way to do this:
#include <dxgi1_4.h>
...
// Error checks omitted for brevity; each call returns an HRESULT worth testing.
IDXGIFactory4* pFactory = nullptr;
CreateDXGIFactory1(__uuidof(IDXGIFactory4), (void**)&pFactory);

// Grab the default adapter; the cast relies on it implementing IDXGIAdapter3 (Windows 10+).
IDXGIAdapter3* adapter = nullptr;
pFactory->EnumAdapters(0, reinterpret_cast<IDXGIAdapter**>(&adapter));

// Query how much of the local (dedicated) memory segment this process currently uses.
DXGI_QUERY_VIDEO_MEMORY_INFO videoMemoryInfo;
adapter->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &videoMemoryInfo);
size_t usedVRAM = videoMemoryInfo.CurrentUsage / 1024 / 1024;
This gets the currently used VRAM from the default (ID 0) adapter and converts it to megabytes.
Note: this requires the Windows 10 SDK, since QueryVideoMemoryInfo is only available on IDXGIAdapter3.
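The same DXGI_QUERY_VIDEO_MEMORY_INFO structure also carries a Budget field (how much the OS currently lets the process use), which is handy to show next to the usage figure in a debug overlay. A small follow-up sketch reusing videoMemoryInfo from above:
// Budget = what the OS currently grants this process; CurrentUsage = what it uses right now.
// (printf needs <cstdio>.)
unsigned long long budgetMB = videoMemoryInfo.Budget / 1024 / 1024;
unsigned long long usedMB   = videoMemoryInfo.CurrentUsage / 1024 / 1024;
printf("VRAM: %llu / %llu MB\n", usedMB, budgetMB);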
This might be a general question, but this problem really has me confused.
I have two different C++ applications, compiled with Visual Studio 2012, needing an instance of the same object. I have put a breakpoint before the creation of each object to measure the RAM usage by stepping my programs. The first one takes approximately 2.5 MiB more RAM after creating the object, while the second app is taking 30 MiB!
Both objects are created using a simple call to new with the same parameters. The code behind the constructors is the same.
As a detail: the first project contains far fewer .cpp files than the second one, so I thought it might be a problem of internal fragmentation of the exe. I've also tried breaking BEFORE any code was executed inside the main function, and the memory usage already differed a lot (6 MiB for the first app, 35 MiB for the second one).
Does anyone have any idea what might be going on?
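For reference, one way to read those numbers from inside the program at each breakpoint, rather than off the debugger, is GetProcessMemoryInfo. A minimal sketch, not part of either app's code:
#include <windows.h>
#include <psapi.h>   // link against psapi.lib
#include <cstdio>

void printMemoryUsage(const char* label)
{
    PROCESS_MEMORY_COUNTERS pmc = {};
    if (GetProcessMemoryInfo(GetCurrentProcess(), &pmc, sizeof(pmc)))
    {
        // WorkingSetSize is the physical RAM in use, PagefileUsage the committed memory.
        printf("%s: working set %llu KiB, commit %llu KiB\n", label,
               (unsigned long long)(pmc.WorkingSetSize / 1024),
               (unsigned long long)(pmc.PagefileUsage / 1024));
    }
}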
EDIT: The object in question is a DirectX context, whose constructor creates a Direct3D instance and a device. Both the instance AND the device are created the same way, yet both show a different RAM cost in the two apps.
Here is the code for the D3D device creation:
d3d_presentParams = new D3DPRESENT_PARAMETERS;
ZeroMemory(d3d_presentParams, sizeof(D3DPRESENT_PARAMETERS));
d3d_presentParams->Windowed = !window->isFullscreen();
d3d_presentParams->SwapEffect = D3DSWAPEFFECT_DISCARD;
d3d_presentParams->hDeviceWindow = window->getHwnd();
d3d_presentParams->MultiSampleType = antiAlias;
d3d_presentParams->EnableAutoDepthStencil = true;
d3d_presentParams->AutoDepthStencilFormat = D3DFMT_D32F_LOCKABLE;
d3d_presentParams->PresentationInterval = (info.m_vsync ? D3DPRESENT_INTERVAL_ONE : D3DPRESENT_INTERVAL_IMMEDIATE);
{
d3d_presentParams->BackBufferCount = 1;
d3d_presentParams->BackBufferFormat = D3DFMT_A8R8G8B8;
d3d_presentParams->BackBufferWidth = m_viewportSize.x;
d3d_presentParams->BackBufferHeight = m_viewportSize.y;
}
d3d->CreateDevice(
D3DADAPTER_DEFAULT,
D3DDEVTYPE_HAL,
window->getHwnd(),
D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
d3d_presentParams,
&d3d_device);
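For completeness, CreateDevice returns an HRESULT, and checking it would rule out a silently failed device. This check is not in the original code:
HRESULT hr = d3d->CreateDevice(
    D3DADAPTER_DEFAULT,
    D3DDEVTYPE_HAL,
    window->getHwnd(),
    D3DCREATE_HARDWARE_VERTEXPROCESSING | D3DCREATE_MULTITHREADED,
    d3d_presentParams,
    &d3d_device);
if (FAILED(hr))
{
    // e.g. D3DERR_NOTAVAILABLE or D3DERR_INVALIDCALL - bail out rather than use d3d_device
}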
EDIT 2: Problem has been "solved" for now. See the answer below.
Alright, I "solved" it, kind of. You're not going to believe what the problem was.
TL;DR : Renaming the executable lowers the RAM usage from 80 MiB to 28 MiB. It seems that Windows is suspicious of non-verified applications, as I discovered a directory full of logs inside C:/Users/<Me>/AppVerifierLogs/.
It seemed that RAM usage was linked to how many source files the project contained. Thus, I tried copying one of my projects entirely. I had to rename it, because a solution cannot have two projects with the same name... And guess what, memory usage was down to 7.3 MiB at the beginning of main(), instead of 30 MiB previously.
Later, I found out that it was a specific executable name that caused it to use more memory. I simply renamed the .exe file, and the problem was gone. I even tried renaming Firefox's exe to my app's name, and that made Firefox crash.
And finally, I discovered in C:/Users/<me>/AppVerifierLogs/ at least 3000 files named after my application. This immediately made me think about the way Windows tries to verify applications (e.g. to check that they're not viruses).
I have no idea where this is coming from, nor how to turn off this verifier, but it's really annoying as a dev to have Windows panicking about an application you're developing, and injecting (possibly useless) stuff into it.
The fix for now is just to rename the executable. If anyone knows which application might have access to this "AppVerifierLogs" directory, please mention it, because even Google searches don't seem to help at this point...
I'm writing an application that can be sped up a lot if a graphics card is available.
I've got the necessary DLLs to make my application utilize NVIDIA and AMD cards. But it requires being launched with specific command line arguments depending on which card is available.
I'd like to make an installer that will detect the specific brand of GPU and then launch the real application with the necessary command line arguments.
What's the best way to detect the type of card?
You have to detect it at runtime using OpenGL: call glGetString(GL_VENDOR) or glGetString(GL_VERSION).
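A minimal sketch of that approach, assuming an OpenGL context has already been made current (on Windows include <windows.h> before <GL/gl.h>, on macOS use <OpenGL/gl.h>):
#include <GL/gl.h>
#include <cstdio>

void printGpu()
{
    // Only valid while a GL context is current.
    const GLubyte* vendor   = glGetString(GL_VENDOR);   // e.g. "NVIDIA Corporation"
    const GLubyte* renderer = glGetString(GL_RENDERER); // the card name itself
    if (vendor && renderer)
        printf("%s - %s\n", vendor, renderer);
}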
To do so using Direct3D 9 API:
Step 1
D3DADAPTER_IDENTIFIER9 AdapterIdentifier;
Step 2
m_pD3D->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &AdapterIdentifier);
Step 3 Get the max size of the graphics card identifier string
const int cch = sizeof(AdapterIdentifier.Description);
Step 4 Define a TCHAR to hold the description
TCHAR szDescription[cch];
Step 5 Use the unicode DX utility to convert the char string to TCHAR
DXUtil_ConvertAnsiStringToGenericCch( szDescription, AdapterIdentifier.Description, cch );
Credit goes to: Anonymous_Poster_* # http://www.gamedev.net/topic/358770-obtain-video-card-name-size-etc-in-c/
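If all you need is the brand (to pick the right command line arguments), the VendorId field of the same D3DADAPTER_IDENTIFIER9 structure can be compared against the usual PCI vendor IDs; a small sketch:
// Common PCI vendor IDs; other vendors exist.
switch (AdapterIdentifier.VendorId)
{
    case 0x10DE: /* NVIDIA */         break;
    case 0x1002: /* AMD / ATI */      break;
    case 0x8086: /* Intel */          break;
    default:     /* unknown vendor */ break;
}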
@mark-setchell already posted the link from superuser.com above. I just want to make it easier for people to find the solution.
wmic path win32_VideoController get name
This is driving me up the wall...
I've got a very simple SDL2 program.
It has an array of SDL_Texture pointers (3 of them are used here).
These textures are filled as follows:
SDL_Texture *myarray[15];
SDL_Surface *surface;
for (int i = 0; i < 3; i++)
{
    char filename[] = "X.bmp";
    filename[0] = i + '0';
    surface = SDL_LoadBMP(filename);
    myarray[i] = SDL_CreateTextureFromSurface(myrenderer, surface);
    SDL_FreeSurface(surface);
}
This works, no errors.
In the main loop (which is just a standard event loop waiting for SDL_QUIT, keystrokes and a user-event which a SDL_Timer puts in the event queue every second) I just do (for the timer triggered event):
idx = (idx+1) % 3; // idx is global var initially 0.
SDL_RenderClear(myrenderer);
SDL_RenderCopy(myrenderer, myarray[idx], NULL, NULL);
SDL_RenderPresent(myrenderer);
This works fine for 0.bmp and 1.bmp, but the 3rd image (2.bmp) simply shows as a black field.
This is structural.
If I alternate the first 2 images they are both fine.
If I alternate the 2nd and 3rd image the 3rd image doesn't show.
If I use more than 3 images, then images 3 and upwards show as black.
Loading order doesn't matter. It starts going wrong with the 3rd image loaded from disk.
All images are properly formatted BMPs.
I even saved 2.bmp back to disk under a different name by using SDL_SaveBMP() after it was loaded to make sure it got loaded in memory OK. The new file is bit for bit identical to the original.
This program, without modifications and the same bmp files, works fine on OSX (XCode5) and Windows (VC++ 2012 Express).
The problem only shows on the Raspberry Pi.
I have placed explicit error checks on every call that can return a result or error code (not shown in the samples above for brevity), but all of them report "no error".
I have used the latest stable sources from www.libsdl.org and compiled as instructed (configure, make, make install, etc.).
Anybody got any idea what could be going on?
P.S.
Keyboard input doesn't seem to work either on my Pi, but I haven't delved into that yet.
Answering myself as I finally figured it out myself...
I finally went back to the README-raspberrypi.txt that came with the SDL2 sources.
I didn't read it carefully enough the first time around...
Problem 1: I'm running on a full-HD display. The Pi's default GPU memory split is 64 MB, which is not enough for large displays and double-buffering. As suggested in the README I increased this to 128 MB (gpu_mem=128 in /boot/config.txt), and this solved the black image problem.
Problem 2: Text input wasn't working because my user account was not in the input group. I had added the default "pi" account to the input group initially, but when I later started using another account I forgot to add that user to the group (sudo usermod -a -G input <user>).
In short: Caught by my own (too) quick skimming of the documentation.
I need to retrieve the total amount of RAM present in a system and the total RAM currently being used, so I can calculate a percentage. This is similar to: Retrieve system information on MacOS X?
However, in that question the best answer suggests how to get RAM by reading from:
/usr/bin/vm_stat
Due to the nature of my program, I found out that I cannot read from that file - I require a method that provides RAM info without simply opening a file and reading from it. I am looking for something based on function calls, preferably something like getTotalRam() and getRamInUse().
I obviously do not expect it to be that simple, but I was looking for a solution other than reading from a file.
I am running Mac OS X Snow Leopard, but would preferably like a solution that works across all current Mac OS X platforms (i.e. Lion).
Solutions can be in C++, C or Obj-C; however, C++ would be the best possible solution in my case, so if possible please try to provide it in C++.
Getting the machine's physical memory is simple with sysctl:
#include <sys/types.h>
#include <sys/sysctl.h>

int mib[] = { CTL_HW, HW_MEMSIZE };
int64_t value = 0;
size_t length = sizeof(value);
if (sysctl(mib, 2, &value, &length, NULL, 0) == -1)
{
    // An error occurred
}
// Physical memory (in bytes) is now in value
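Equivalently, the same value can be fetched by name with sysctlbyname, which avoids building the MIB array:
#include <sys/types.h>
#include <sys/sysctl.h>

int64_t physicalMemoryBytes()
{
    int64_t value = 0;
    size_t length = sizeof(value);
    if (sysctlbyname("hw.memsize", &value, &length, NULL, 0) == -1)
        return -1; // check errno for the reason
    return value;
}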
VM stats are only slightly trickier:
#include <mach/mach.h>
mach_msg_type_number_t count = HOST_VM_INFO_COUNT;
vm_statistics_data_t vmstat;
if (host_statistics(mach_host_self(), HOST_VM_INFO, (host_info_t)&vmstat, &count) != KERN_SUCCESS)
{
    // An error occurred
}
You can then use the data in vmstat to get the information you'd like:
double total = vmstat.wire_count + vmstat.active_count + vmstat.inactive_count + vmstat.free_count;
double wired = vmstat.wire_count / total;
double active = vmstat.active_count / total;
double inactive = vmstat.inactive_count / total;
double free = vmstat.free_count / total;
There is also a 64-bit version of the interface (host_statistics64 with HOST_VM_INFO64 and vm_statistics64_data_t).
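If you would rather have byte counts than fractions, multiply the page counts by the page size. A short sketch reusing vmstat from above (whether inactive pages count as "used" is up to you):
#include <mach/mach.h>
#include <cstdint>

vm_size_t page_size = 0;
host_page_size(mach_host_self(), &page_size);
uint64_t used_bytes = (uint64_t)(vmstat.wire_count + vmstat.active_count) * page_size;
uint64_t free_bytes = (uint64_t)vmstat.free_count * page_size;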
You're not supposed to read from /usr/bin/vm_stat; rather, you're supposed to run it - it is a program. Look at the first four lines of output:
Pages free: 1880145.
Pages active: 49962.
Pages inactive: 43609.
Pages wired down: 123353.
Add the numbers in the right column and multiply by the system page size (as returned by getpagesize()) and you get the total amount of physical memory in the system in bytes.
vm_stat isn't setuid on Mac OS, so I assume there is a non-privileged API somewhere to access this information and that vm_stat is using it. But I don't know what that interface is.
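If you do go the route of running vm_stat, a rough sketch of invoking it with popen and summing the four page counts described above (error handling kept minimal):
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <unistd.h>   // getpagesize()

long long physicalPagesFromVmStat()
{
    FILE* pipe = popen("/usr/bin/vm_stat", "r");
    if (!pipe) return -1;

    long long pages = 0;
    char line[256];
    while (fgets(line, sizeof(line), pipe))
    {
        // Lines look like: "Pages free:                 1880145."
        bool wanted = strncmp(line, "Pages free", 10) == 0 ||
                      strncmp(line, "Pages active", 12) == 0 ||
                      strncmp(line, "Pages inactive", 14) == 0 ||
                      strncmp(line, "Pages wired down", 16) == 0;
        const char* colon = strchr(line, ':');
        if (wanted && colon)
            pages += atoll(colon + 1);
    }
    pclose(pipe);
    return pages; // multiply by getpagesize() for bytes
}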
You can figure out the answer to this question by looking at the source of the top command. You can download the source from http://opensource.apple.com/. The 10.7.2 source is available as an archive here or in browsable form here. I recommend downloading the archive and opening top.xcodeproj so you can use Xcode to find definitions (command-clicking in Xcode is very useful).
The top command displays physical memory (RAM) numbers after the label "PhysMem". Searching the project for that string, we find it in the function update_physmem in globalstats.c. It computes the used and free memory numbers from the vm_stat member of struct libtop_tsamp_t.
You can command-click on "vm_stat" to find its declaration as a member of libtop_tsamp_t in libtop.h. It is declared as type vm_statistics_data_t. Command-clicking that jumps to its definition in /usr/include/mach/vm_statistics.h.
Searching the project for "vm_stat", we find that it is filled in by function libtop_tsamp_update_vm_stats in libtop.c:
mach_msg_type_number_t count = sizeof(tsamp->vm_stat) / sizeof(natural_t);
kr = host_statistics(libtop_port, HOST_VM_INFO, (host_info_t)&tsamp->vm_stat, &count);
if (kr != KERN_SUCCESS) {
return kr;
}
You will need to figure out how libtop_port is set if you want to call host_statistics. I'm sure you can figure that out for yourself.
It's been 4 years but I just wanted to add some extra info on calculating total RAM.
To get the total RAM, we should also consider "Pages occupied by compressor" and "Pages speculative" in addition to Kyle Jones' answer.
You can check out this post for where the problem occurs.
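For reference, the 64-bit statistics interface exposes those extra counters; a sketch of reading them (field names as in <mach/vm_statistics.h>):
#include <mach/mach.h>

vm_statistics64_data_t vmstat64;
mach_msg_type_number_t count = HOST_VM_INFO64_COUNT;
if (host_statistics64(mach_host_self(), HOST_VM_INFO64,
                      (host_info_t)&vmstat64, &count) == KERN_SUCCESS)
{
    // Beyond the classic four counts, the fields of interest here are:
    //   vmstat64.speculative_count     - "Pages speculative"
    //   vmstat64.compressor_page_count - "Pages occupied by compressor"
    // Sum the counts you consider part of the total and multiply by the page size for bytes.
}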
I am using TWAIN in C++ and I am trying to set the DPI manually so that the user is not shown the scan dialog; instead, the page just scans with set defaults and is stored for them. I need to set the DPI manually but I cannot seem to get it to work. I have tried setting the capability using ICAP_XRESOLUTION and ICAP_YRESOLUTION, but when I look at the image's info it always shows the same resolution no matter what I set via the ICAPs. Is there another way to set the resolution of a scanned-in image, or is there an additional step that I cannot find anywhere in the documentation?
Thanks
I use ICAP_XRESOLUTION and the ICAP_YRESOLUTION to set the scan resolution for a scanner, and it works at least for a number of HP scanners.
Code snippet:
// SetCapability() and ret_code are defined elsewhere in the surrounding code.
TW_CAPABILITY cap;
pTW_ONEVALUE val_p;
float x_res = 1200;

cap.Cap = ICAP_XRESOLUTION;
cap.ConType = TWON_ONEVALUE;
cap.hContainer = GlobalAlloc(GHND, sizeof(TW_ONEVALUE));
if (cap.hContainer)
{
    val_p = (pTW_ONEVALUE)GlobalLock(cap.hContainer);
    val_p->ItemType = TWTY_FIX32;
    TW_FIX32 fix32_val = FloatToFIX32(x_res);
    val_p->Item = *((pTW_INT32)&fix32_val);
    GlobalUnlock(cap.hContainer);
    ret_code = SetCapability(cap);
    GlobalFree(cap.hContainer);
}
TW_FIX32 FloatToFIX32(float i_float)
{
TW_FIX32 Fix32_value;
TW_INT32 value = (TW_INT32) (i_float * 65536.0 + 0.5);
Fix32_value.Whole = LOWORD(value >> 16);
Fix32_value.Frac = LOWORD(value & 0x0000ffffL);
return Fix32_value;
}
The value should be of type TW_FIX32, which is a fixed point format defined by TWAIN (strange but true).
I hope it works for you!
It should work that way.
But unfortunately we're not living in a perfect world. TWAIN drivers are among the buggiest drivers out there. Controlling the scanning process with TWAIN has always been a big headache because most drivers have never been tested without the scan dialog.
As far as I know there is also no test suite for TWAIN drivers, so each of them will behave slightly differently.
I wrote an OCR application back in the '90s and had to deal with these issues as well. What I ended up with was a list of supported scanners and a scanner module with lots of hacks and work-arounds for each different driver.
Take ICAP_XRESOLUTION for example: the TWAIN documentation says you have to send the resolution as a 32-bit float. Have you tried setting it using an integer instead? Or sending it as a float but putting the bit representation of an integer into the float, or vice versa? Any of these could work for the driver you're working with. Or it could not work at all.
I doubt the situation has changed much since then. So good luck getting it working on at least half of the machines that are out there.