Detecting available graphics memory on the Windows platform using C++

I'd like to be able to detect how much graphics memory is available. I've written a C++ project that uses DirectShow.
Some ancient gfx cards can't do video properly and fall back to four colour mode. If I try to allocate more than one video window, the program just crashes on these machines without warning.
This is less than elegant, and I'd like to detect available graphics memory ahead of time, so I can determine if the program has enough gfx mem to run.

A really sneaky way that should work on XP and lower is to read the registry:
For example, I access \HKLM\Hardware\Devicemap\Video and get a GUID:
{3468769C-3D6B-4BB1-85B6-7B5AE7F4E8F8}
Then I access \HKLM\CCS\Control\Video and read "HardwareInformation.MemorySize" for that device:
HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Video\{3468769C-3D6B-4BB1-85B6-7B5AE7F4E8F8}
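A minimal sketch of that registry read (assuming the value lives under the adapter's "0000" subkey, which is where I've usually seen HardwareInformation.MemorySize, and that it comes back as an integer type; on some drivers it is stored as a REG_BINARY blob, so check the returned type):

    #include <windows.h>
    #include <cstdio>
    #pragma comment(lib, "advapi32.lib")

    int main()
    {
        // NOTE: the GUID subkey and the "0000" adapter index are machine-specific;
        // in a real program you would enumerate HKLM\HARDWARE\DEVICEMAP\VIDEO first.
        const wchar_t* subKey =
            L"SYSTEM\\CurrentControlSet\\Control\\Video\\"
            L"{3468769C-3D6B-4BB1-85B6-7B5AE7F4E8F8}\\0000";

        DWORD type = 0;
        unsigned long long memSize = 0;   // big enough for REG_DWORD or REG_QWORD
        DWORD size = sizeof(memSize);

        LSTATUS status = RegGetValueW(
            HKEY_LOCAL_MACHINE, subKey,
            L"HardwareInformation.MemorySize",
            RRF_RT_ANY, &type, &memSize, &size);

        if (status == ERROR_SUCCESS)
            wprintf(L"Reported video memory: %llu bytes\n", memSize);
        else
            wprintf(L"RegGetValue failed: %ld\n", status);

        return 0;
    }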
A much better approach (the recommended approach, in fact) is to use WMI:
GetVideoMemoryViaWMI
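If you don't want to pull in the whole SDK sample, the core of it is a plain WMI query. A minimal sketch (assuming the Win32_VideoController class and its AdapterRAM property; note that AdapterRAM is a 32-bit value, so it caps out at 4 GB, and error handling is kept to a minimum):

    #define _WIN32_DCOM
    #include <windows.h>
    #include <wbemidl.h>
    #include <comdef.h>
    #include <cstdio>
    #pragma comment(lib, "wbemuuid.lib")

    int main()
    {
        HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
        if (FAILED(hr)) return 1;

        CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
            RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
            nullptr, EOAC_NONE, nullptr);

        IWbemLocator* locator = nullptr;
        hr = CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                              IID_IWbemLocator, (void**)&locator);
        if (FAILED(hr)) { CoUninitialize(); return 1; }

        IWbemServices* services = nullptr;
        hr = locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr,
                                    nullptr, 0, nullptr, nullptr, &services);
        if (FAILED(hr)) { locator->Release(); CoUninitialize(); return 1; }

        IEnumWbemClassObject* enumerator = nullptr;
        hr = services->ExecQuery(
            _bstr_t(L"WQL"),
            _bstr_t(L"SELECT Name, AdapterRAM FROM Win32_VideoController"),
            WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
            nullptr, &enumerator);

        if (SUCCEEDED(hr))
        {
            IWbemClassObject* obj = nullptr;
            ULONG returned = 0;
            while (enumerator->Next(WBEM_INFINITE, 1, &obj, &returned) == S_OK && returned)
            {
                VARIANT name, ram;
                obj->Get(L"Name", 0, &name, nullptr, nullptr);
                obj->Get(L"AdapterRAM", 0, &ram, nullptr, nullptr);
                if (ram.vt == VT_UI4 || ram.vt == VT_I4)
                    wprintf(L"%s: %u bytes of adapter RAM\n",
                            name.vt == VT_BSTR ? name.bstrVal : L"(unknown)",
                            ram.uintVal);
                VariantClear(&name);
                VariantClear(&ram);
                obj->Release();
            }
            enumerator->Release();
        }

        services->Release();
        locator->Release();
        CoUninitialize();
        return 0;
    }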

8 MB. IIRC this is the maximum amount according to the AGP standard. All the extra memory on the gfx card was there to buffer main memory so it didn't have to go across the bus.
I'd be surprised if the standard hadn't been revised since.
If you have really old gfx cards to work with, you can try looking at the Video BIOS Extensions (VBE). They include a function for querying memory.

Related

Shutting down fan C++ [duplicate]

Is there a Windows standard way to do things such as "start fan", "decrease speed" or the like, from C/C++?
I have a suspicion it might be ACPI, but I am a frail mortal and cannot read that kind of documentation.
Edit: e.g. Windows 7 lets you select options in your power plan such as "passive cooling" (only when things get hot?) vs. "active cooling" (keep the CPU proactively cool?). It seems the OS does have a way to control the fan generically.
I am at the moment working on a project that, among other things, controls the computer fans. Basically, the fans are controlled by the superIO chip of your computer. We access the chip directly using port-mapped IO, and from there we can get to the logical fan device. Using port-mapped IO requires the code to run in kernel mode, but Windows does not supply any driver for generic port IO (with good reason, since it is a very powerful tool), so we wrote our own driver and used that.
If you want to go down this route, you basically need knowledge in two areas: driver development, and how to access and interpret superIO chip information. When we started the project we didn't know anything in either of these areas, so it has been learning by browsing, reading and finally doing. To gain the knowledge, we were especially helped by these links:
The WDK, the Windows Driver Kit. You need this to compile any driver you write for Windows. It comes with a lot of example driver source code, including a driver for general port-mapped IO called portio.
WinIO has source code for a driver in C, a DLL in C that programmatically installs and loads that driver, and some C# code for a GUI that loads the DLL and reads/writes to the ports. The driver is very similar to the one in portio.
lm-sensors is a Linux project that, among other things, detects your superIO chip. /prog/detect/sensors-detect is the Perl program that does the detecting, and we have spent some time going through the code to see how to interface with a superIO chip.
When we were going through the lm-sensors code, it was very nice to have tools like RapidDriver and RW-Everything, since they allowed us to simulate a run of sensors-detect. The latter is the more powerful and is very helpful in visualising the IO space, while the former provides easier access to some operations that map better to the ones in sensors-detect (read/write a byte to a port).
Finally, you need to find the datasheet of your superIO chip. From the examples I have seen, the environment controllers of each chip provide similar functionality (r/w fan speed, read temperature, read chip voltage) but vary in which registers you have to write to in order to get to this functionality. This place has had all the datasheets we have needed so far.
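To give a feel for what talking to a superIO chip looks like once you have a port-IO driver in place, here is a rough sketch of the usual index/data register dance. Everything chip-specific here is an assumption: the ReadPort/WritePort helpers are stand-ins for whatever your driver or WinIO-style DLL exposes, and the port pair and register numbers vary by chip (check the datasheet):

    #include <cstdint>
    #include <cstdio>

    // Stand-ins for whatever your port-IO driver / WinIO-style DLL exposes.
    // These do NOT exist in the Windows API - user-mode code cannot touch IO
    // ports directly, which is exactly why the kernel driver is needed.
    // Replace the stub bodies with IOCTL calls into your driver.
    uint8_t ReadPort(uint16_t port)             { (void)port; return 0xFF; }
    void    WritePort(uint16_t port, uint8_t v) { (void)port; (void)v; }

    // Many superIO chips expose an index/data register pair at 0x2E/0x2F
    // (some use 0x4E/0x4F) - the datasheet for your chip has the details.
    constexpr uint16_t SIO_INDEX = 0x2E;
    constexpr uint16_t SIO_DATA  = 0x2F;

    // Read a configuration register: write its index, then read the data port.
    uint8_t ReadSioRegister(uint8_t reg)
    {
        WritePort(SIO_INDEX, reg);
        return ReadPort(SIO_DATA);
    }

    int main()
    {
        // Registers 0x20/0x21 commonly hold the chip ID on many superIO parts;
        // this is only an illustration - check your chip's datasheet.
        uint8_t idHigh = ReadSioRegister(0x20);
        uint8_t idLow  = ReadSioRegister(0x21);
        printf("superIO chip ID: %02X%02X\n", idHigh, idLow);

        // Fan control lives in one of the chip's logical devices (the hardware
        // monitor / environment controller), which you select and program the
        // same way, using registers taken from the datasheet.
        return 0;
    }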
If you want something quick to just lower the fans to a level where you know things won't overheat, there's the SpeedFan program. Figuring out how to configure the early versions to automatically lower fans to 50% on computer startup was so painful that my first approach was to simply byte-patch it to start the only superIO-managed fan I had at a lower speed. The newer versions are still a bit tough, but it's doable - there's a graphical slider system that looks like an audio equalizer, except that the x axis is temperature and the y axis is fan speed. You drag them down one by one. After you figure out how to get manual control of the fan you want, this is the next step.
There's a project to monitor hardware (like fans) with C#:
http://code.google.com/p/open-hardware-monitor/
I haven't looked at it extensively, but the source code and its use of WinRing0.sys at least give the impression that, if you know which fan controller you have and have the datasheet, it should be modifiable to also set values instead of just reading them. I don't know what tool (besides a kernel debugger) is suited to looking at what SpeedFan does, if you'd rather snoop around and imitate SpeedFan instead of looking at the datasheets and trying things out.
Yes, it would be ACPI, and to my knowledge Windows doesn't give much (if any) control over that from user space. So you'd have to start mucking with drivers, which is nigh impossible on Windows.
That said, Google reveals there are a few open-source Windows libraries for this for specific hardware... so depending on your hardware you might be able to find something.
ACPI may or may not allow you to adjust the fan settings. Some BIOS implementations may not allow that control though -- they may force control depending on the BIOS/CMOS settings. One might be hard-pressed for a good use case where the BIOS control (even customized) is insufficient. I have come across situations where the BIOS control indeed was insufficient, but not for all possible motherboard platforms.
The Windows Management Instrumentation (WMI) library does provide a Win32_Fan class and even a SetSpeed method. Alas, the docs say this is not implemented, so I guess it's not very helpful. But you may be able to control things by setting the power state.
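For what it's worth, enumerating that class is the usual WMI boilerplate. A minimal sketch (whether you get any Win32_Fan instances at all, and which properties are populated, depends entirely on the machine's BIOS/ACPI support; error handling is kept to a minimum):

    #define _WIN32_DCOM
    #include <windows.h>
    #include <wbemidl.h>
    #include <comdef.h>
    #include <cstdio>
    #pragma comment(lib, "wbemuuid.lib")

    int main()
    {
        if (FAILED(CoInitializeEx(nullptr, COINIT_MULTITHREADED))) return 1;
        CoInitializeSecurity(nullptr, -1, nullptr, nullptr,
            RPC_C_AUTHN_LEVEL_DEFAULT, RPC_C_IMP_LEVEL_IMPERSONATE,
            nullptr, EOAC_NONE, nullptr);

        IWbemLocator* locator = nullptr;
        if (FAILED(CoCreateInstance(CLSID_WbemLocator, nullptr, CLSCTX_INPROC_SERVER,
                                    IID_IWbemLocator, (void**)&locator)))
        { CoUninitialize(); return 1; }

        IWbemServices* services = nullptr;
        if (FAILED(locator->ConnectServer(_bstr_t(L"ROOT\\CIMV2"), nullptr, nullptr,
                                          nullptr, 0, nullptr, nullptr, &services)))
        { locator->Release(); CoUninitialize(); return 1; }

        IEnumWbemClassObject* fans = nullptr;
        services->ExecQuery(_bstr_t(L"WQL"),
            _bstr_t(L"SELECT Name, ActiveCooling FROM Win32_Fan"),
            WBEM_FLAG_FORWARD_ONLY | WBEM_FLAG_RETURN_IMMEDIATELY,
            nullptr, &fans);

        IWbemClassObject* fan = nullptr;
        ULONG count = 0;
        while (fans && fans->Next(WBEM_INFINITE, 1, &fan, &count) == S_OK && count)
        {
            VARIANT name, cooling;
            fan->Get(L"Name", 0, &name, nullptr, nullptr);
            fan->Get(L"ActiveCooling", 0, &cooling, nullptr, nullptr);
            wprintf(L"%s - active cooling: %s\n",
                    name.vt == VT_BSTR ? name.bstrVal : L"(unknown)",
                    (cooling.vt == VT_BOOL && cooling.boolVal) ? L"yes" : L"no");
            VariantClear(&name);
            VariantClear(&cooling);
            fan->Release();
        }

        if (fans) fans->Release();
        services->Release();
        locator->Release();
        CoUninitialize();
        return 0;
    }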

C++, OpenCV, & Kinect: Processing speed goes down

I use C++ (Visual Studio 2015) and OpenCV (ver 3.2.0) to process data sent from a Kinect v1. My C++ program runs fine the first time I start debugging. After I stop debugging and start again, however, it gets very slow.
I suspect that the program closes without releasing some memory (i.e., a memory leak). I am aware that I would need to use delete to release memory allocated with new, but I didn't use new anywhere in the program (nor did I use malloc(), its C equivalent).
For OpenCV, I use the destroyAllWindows function at the end of the program. For Kinect v1, I also use the NuiShutdown(), Release(), and CloseHandle() functions at the end of the program.
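The cleanup at the end of my program looks roughly like this (a trimmed sketch; hNextFrameEvent and pSensor are placeholders for the actual frame-ready event handle and sensor interface, and it assumes the Kinect SDK v1.x headers):

    #include <windows.h>
    #include <NuiApi.h>                 // Kinect for Windows SDK v1.x
    #include <opencv2/highgui.hpp>

    // Called once at the end of the program. hNextFrameEvent / pSensor stand in
    // for the event handle and INuiSensor pointer the program actually holds.
    void Shutdown(HANDLE hNextFrameEvent, INuiSensor* pSensor)
    {
        if (pSensor)
        {
            pSensor->NuiShutdown();     // stop the streams / release the device
            pSensor->Release();         // release the COM interface
        }
        else
        {
            NuiShutdown();              // C-style API, if NuiInitialize() was used
        }

        if (hNextFrameEvent && hNextFrameEvent != INVALID_HANDLE_VALUE)
            CloseHandle(hNextFrameEvent);

        cv::destroyAllWindows();        // close the OpenCV display windows
    }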
Is there anything else I need to do to release the memory (e.g., releasing memory associated with Mat in OpenCV)? Or is something else causing the decrease in processing speed?
I'd appreciate your help. Thanks.
After the first run, disconnect the Kinect, reconnect it, and try a second run.
If all goes well now, then the problem is most likely a stuck thread. Device access is usually handled by separate threads, and especially with USB they can get stuck (in case of an error or a sync problem between access from the host and what the device expects) until you disconnect the device (I'm not sure which Kinect driver you are using, but the JUNGO version that NuiShutdown() implies has this problem). You can also check the task manager before disconnecting to see whether some stuck processes are left over after the first run.
To remedy this you need to find out what you are doing wrong during access. It could be:
wrong USB port
use the back-side ports, not the front slots.
invalid USB transfer request
the device is always waiting for a specific set of commands or a stream, and it blocks everything else until it receives them, so using unsupported commands, or reading at the wrong times or with the wrong packet sizes, can cause this.
USB communication is out of sync
the PC host can time out if you do not have enough CPU power while a critical operation is being processed (or if too many apps are open in the background).
This can also be caused by the wrong gfx driver, as I suspect you are doing some rendering... Intel HD Graphics can generate such problems with ease, especially on notebooks. Try disabling any rendering in your app, or at least limit rendering to OpenGL 1.0, to see whether the speed is the same between runs. If this is the case, the whole desktop usually flickers or does not repaint parts of apps... and animations are sometimes sluggish.
Another problem might be the debugger. If all is well without it, then the debugger is the problem and you cannot do much about it. Debugging while accessing IO can cause sync and timeout problems, especially with USB.
To check for memory leaks you can simply see how much free memory you have before the 1st run and compare it to the values after the 1st, 2nd, 3rd... runs; if the value keeps dropping, you have got something stuck somewhere. After an app closes, all memory belonging to it is freed by the OS, so even if you forget some delete it does not matter, unless some thread is still running...
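To take those before/after readings you can just watch Task Manager, or print the numbers from a tiny helper like this (using GlobalMemoryStatusEx):

    #include <windows.h>
    #include <cstdio>

    // Print the amount of free physical memory; run this before the 1st run
    // and after each run, then compare the numbers.
    int main()
    {
        MEMORYSTATUSEX status{};
        status.dwLength = sizeof(status);
        if (GlobalMemoryStatusEx(&status))
            printf("Free physical memory: %llu MB (%lu%% in use)\n",
                   status.ullAvailPhys / (1024ull * 1024ull),
                   status.dwMemoryLoad);
        return 0;
    }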
Some USB drivers based on libUSB that I have encountered also have a problem with handle leaks. But that behaves differently... everything runs fine until there are no free handles left. After that the OS is non-functional: you cannot open any window, app, anything... until some app is closed.
[Edit1] Front USB slots
Front slots are usually connected to the motherboard with a relatively long cable (usually flat and not very well shielded), so they are more susceptible to noise. Also, as they are usually located near the HDD and above the high-frequency parts of the motherboard, noise is induced into the USB feed from there as well. All this degrades the quality of the USB signal, causing a much higher rejection rate, hence lowering sync capability and also the overall usable bandwidth.
If you compare that with the back-side USB ports, they have no cables and are connected directly on the PCB with short, well-shielded traces, so the connection quality is much better.
So if you use a device demanding high bandwidth or tight synchronization, the front ports are a bad choice.

How to get a pointer to a hardware driver in Windows?

I want to write a program that will monitor memory in a driver and print the memory contents every so often.
However, I'm not finding any resources in the Windows API that seem to allow me to grab a pointer (Handle) to a specific driver.
I'd appreciate any answer either from User space OR kernel space.
If you want to know exactly what I'm doing, I'm attempting to duplicate the results from this paper except on Windows. After I gain the ability to monitor a buffer in a basic windows console program, I intend to monitor from the GPU.
[For the record: I am a Graduate Student who is pursuing this as a summer project... this is ethical malware research.]
============ UPDATE ============
This might technically be better suited as an answer, but not really until I have a working solution.
My initial plan of attack is to use WinDbg to do dynamic analysis on the keyboard driver when it gets loaded, so I can get some idea of its normal loading/unloading behavior. I'm using chapter 10 of this book to guide setting up my testbed, and once I understand more about the keyboard structure and its buffer, I'll work backwards towards getting a permanent reference to this structure and see about passing it to the graphics card and monitoring it with DMA, as the original paper did on Linux.
You won't solve this problem by "grabbing a pointer to a specific driver". You need to locate the specific buffer used by the keyboard driver that resides on top of the USB driver.
You will have to actually grok the keyboard and USB drivers for Windows, at least part of which is probably available if you have a DDK (Driver Development Kit) [a.k.a. WDK, the Windows Driver Kit]. You will definitely need a graphics driver for this part of the project.
You will also have to develop a driver mechanism to map an arbitrary (kernel) lump of memory to your graphics driver - which means you need access to the source code for the graphics driver. (In theory, you could perhaps hack about in the page tables, but Windows itself isn't too keen on software messing with the page tables, and you'd definitely need to be VERY careful if the system is SMP, since modifying page tables on an SMP system requires that you flush the TLBs of the "other" CPU cores in the system after updates.)
To me this seems like a rather interesting project, but a really tough one in a closed-source system like Windows. At least on Linux the developer has the source code to read. When it comes to Windows, most of the relevant source code is completely unavailable (unless your school has a special license to the MS source code - I think there are some that do).

Get GPU temperature

I am really puzzled here. I want to create an application that performs different actions at different temperatures of my graphics card, which is an AMD one.
The first reason I want to make such an application is that I haven't found one for a GPU, and the second is to ensure I never fry my card by reaching enormous temperatures.
However, I have no idea how people (not connected to AMD/Intel/NVIDIA) write applications that monitor temperatures of any kind.
So how is it done? Some APIs I don't know about, or something?
After a little bit of googling, I found this:
I think this is really vendor specific; it will probably involve interfacing directly with the motherboard or video driver and knowing which IOCTL represents the code for requesting the temperature. I reverse engineered a motherboard driver once for this purpose. It's not as hard as it sounds: download a manufacturer motherboard/BIOS utility and try to hook the function that gets called when that app needs to display the temperature to the user. Then watch for a call to DeviceIoControl() on Windows, or ioctl() on Linux, and see what the inputs/outputs are.
This may be your best bet. I found this information here:
http://www.gamedev.net/topic/557599-get-gpucpu-temperature/
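To make it concrete, once you have found the IOCTL, the call from your own code is just DeviceIoControl. Everything device-specific in this sketch (the device name, the IOCTL code, the output layout) is a made-up placeholder that you would have to discover yourself by watching the vendor utility:

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        // Placeholder symbolic-link name of the vendor driver's device object -
        // discover the real one with a tool like WinObj or by hooking the vendor app.
        HANDLE hDevice = CreateFileW(L"\\\\.\\MyGpuVendorDevice",
                                     GENERIC_READ | GENERIC_WRITE, 0, nullptr,
                                     OPEN_EXISTING, 0, nullptr);
        if (hDevice == INVALID_HANDLE_VALUE)
        {
            printf("CreateFile failed: %lu\n", GetLastError());
            return 1;
        }

        // Placeholder IOCTL code and output format - in reality you would capture
        // these from the DeviceIoControl calls the vendor utility makes.
        const DWORD IOCTL_GET_GPU_TEMP = 0x00222004;   // hypothetical
        DWORD temperature = 0;                         // hypothetical: degrees C
        DWORD bytesReturned = 0;

        if (DeviceIoControl(hDevice, IOCTL_GET_GPU_TEMP,
                            nullptr, 0,                      // no input buffer
                            &temperature, sizeof(temperature),
                            &bytesReturned, nullptr))
            printf("GPU temperature: %lu C\n", temperature);
        else
            printf("DeviceIoControl failed: %lu\n", GetLastError());

        CloseHandle(hDevice);
        return 0;
    }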
Edit:
Also found this:
http://msdn.microsoft.com/en-us/library/aa389762%28v=VS.85%29.aspx
http://msdn.microsoft.com/en-us/library/aa394493%28VS.85%29.aspx
Hope it helps.
You could use one of the existing GPU temperature monitoring programs, such as GPU-Z, configure it for continuous monitoring, and read the log entries.
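If you go the log-reading route, it boils down to tailing a text file. A rough sketch - the path and the column index are assumptions, so check the header line of the log you configured in GPU-Z for the actual layout:

    #include <fstream>
    #include <sstream>
    #include <string>
    #include <iostream>

    int main()
    {
        // Path and column index are assumptions - point this at the log file you
        // configured in GPU-Z and pick the column that matches "GPU Temperature"
        // in the log's header line.
        const char* logPath = "C:\\Logs\\GPU-Z Sensor Log.txt";
        const int   tempColumn = 2;

        std::ifstream log(logPath);
        std::string line, lastLine;
        while (std::getline(log, line))
            if (!line.empty())
                lastLine = line;          // keep the most recent entry

        if (lastLine.empty()) { std::cerr << "No log data\n"; return 1; }

        // Split the last line on commas and pull out the temperature column.
        std::stringstream ss(lastLine);
        std::string field;
        for (int i = 0; std::getline(ss, field, ','); ++i)
            if (i == tempColumn)
                std::cout << "Last logged GPU temperature: " << field << "\n";

        return 0;
    }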
RivaTuner is another GPU monitoring program which has a shared-memory interface allowing other programs to get the data in real time, but it is nVidia-focused. As long as your action isn't "reduce the GPU clock speed", it'll probably work well enough with ATI cards.

Why can't I set master volume for USB/Firewire Audio interface with IAudioEndpointVolume::SetMasterVolumeLevelScalar

I am trying to fix an Audacity bug that revolves around portmixer. The output/input level is settable using the Mac version of portmixer, but not always on Windows. I am debugging portmixer's Windows code to try to make it work there.
Using IAudioEndpointVolume::SetMasterVolumeLevelScalar to set the master volume works fine for onboard sound, but with pro external USB or FireWire interfaces like the RME Fireface 400, the output volume won't change, although the change is reflected in Windows' sound control panel for that device, and also in the system mixer.
Also, outside of our program, changing the master slider for the system mixer (in the taskbar) has no effect - the soundcard outputs the same (full) level regardless of the level the system says it is at. The only way to change the output level is to use the custom app that the hardware developers supply with the card.
The IAudioEndpointVolume::QueryHardwareSupport function gives back ENDPOINT_HARDWARE_SUPPORT_VOLUME so it should be able to do this.
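For reference, what we're doing boils down to roughly this (a stripped-down sketch against the default render endpoint; the real code goes through portmixer's device handling, and error checking is minimal):

    #include <windows.h>
    #include <mmdeviceapi.h>
    #include <endpointvolume.h>
    #include <cstdio>
    #pragma comment(lib, "ole32.lib")

    int main()
    {
        CoInitialize(nullptr);

        IMMDeviceEnumerator* enumerator = nullptr;
        if (FAILED(CoCreateInstance(__uuidof(MMDeviceEnumerator), nullptr, CLSCTX_ALL,
                                    __uuidof(IMMDeviceEnumerator), (void**)&enumerator)))
            return 1;

        IMMDevice* device = nullptr;
        if (FAILED(enumerator->GetDefaultAudioEndpoint(eRender, eConsole, &device)))
            return 1;

        IAudioEndpointVolume* volume = nullptr;
        if (FAILED(device->Activate(__uuidof(IAudioEndpointVolume), CLSCTX_ALL,
                                    nullptr, (void**)&volume)))
            return 1;

        // The device claims hardware volume support...
        DWORD support = 0;
        volume->QueryHardwareSupport(&support);
        printf("Hardware volume support: %s\n",
               (support & ENDPOINT_HARDWARE_SUPPORT_VOLUME) ? "yes" : "no");

        // ...but on the interfaces in question this call "succeeds" without the
        // actual output level changing.
        HRESULT hr = volume->SetMasterVolumeLevelScalar(0.5f, nullptr);
        printf("SetMasterVolumeLevelScalar returned 0x%08lX\n", hr);

        volume->Release();
        device->Release();
        enumerator->Release();
        CoUninitialize();
        return 0;
    }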
This behavior exists for both input and output on many devices.
Is this possibly a Windows bug?
It is possible to work around this by emulating (scaling) the output, but this is not preferred as it is not functionally identical - it is better to let the audio interface do the scaling (especially for input if it involves a preamp).
The cards you talk about (like the RME ones) simply do not support setting the master, or any other, level through software, and there is not much you can do about it. This is not a Windows bug. One could argue that returning ENDPOINT_HARDWARE_SUPPORT_VOLUME is a bug, though, but that likely originates at the driver level, not in Windows itself.
The only solution I have found so far is hooking up a debugger (or adding a DLL hook) to the vendor-supplied software and looking at the DeviceIoControl calls it makes (those are the ones used to talk to the hardware) while setting the volume in the vendor software. It's pretty hard to do this for every single card, but probably worth doing for a couple of pro cards. Especially for Audacity; for open-source audio software it's actually not that bad, so I can imagine some people being really happy if the volume on their card could be set by it. (At the time we were exclusively using an RME Multiface; I spent quite some time figuring out the DeviceIoControl calls, but in the end it was definitely worth it, as I could set the volume in dB for any point in the matrix.)