I want to create my own overclocking monitor, for which I need to read information such as the current voltage, clock speeds and other values.
In C++ I can easily get the information from nvidia-smi by typing, for example:
console("nvidia-smi -q -i voltage");
Which then displays me:
==============NVSMI LOG==============
Timestamp : Tue Dec 13 17:55:54 2022
Driver Version : 526.47
CUDA Version : 12.0
Attached GPUs : 1
GPU 00000000:01:00.0
Voltage
Graphics : 806.250 mV
From that I need only the voltage number, in this case "806.25".
I've looked a bit into <cctype>, which was something I'd read about, but I'm not making any progress.
So how can I get only that number into my C++ program? I'd guess the process will be the same for the other commands.
I don't currently have an Nvidia GPU to test this (I'm stuck with Intel integrated graphics), so I can't test code that includes the CUDA headers, but feel free to try this and let me know whether it works.
#include <iostream>
#include <string>
#include <chrono>
#include <ctime>
#include <cuda_runtime.h>
int main() {
    // Get the current timestamp (converted to time_t so it can be printed directly)
    std::time_t current_time = std::chrono::system_clock::to_time_t(std::chrono::system_clock::now());
    // Get the CUDA version supported by the installed driver
    int driver_version = 0;
    cudaDriverGetVersion(&driver_version);
    // Get the CUDA runtime version
    int cuda_version = 0;
    cudaRuntimeGetVersion(&cuda_version);
    // Get the name of the attached GPU
    cudaDeviceProp device_properties;
    cudaGetDeviceProperties(&device_properties, 0);
    std::string gpu_name = device_properties.name;
    // Note: the CUDA runtime API has no call that reports voltage or power draw;
    // that information has to come from NVML or from parsing nvidia-smi output.
    // Output the overclocking data
    std::cout << "Timestamp: " << std::ctime(&current_time);
    std::cout << "Driver version: " << driver_version << std::endl;
    std::cout << "CUDA version: " << cuda_version << std::endl;
    std::cout << "Attached GPU: " << gpu_name << std::endl;
    return 0;
}
If it compiles and runs, the driver version, CUDA version and GPU name should print correctly; the voltage itself still has to come from NVML or from the nvidia-smi output, as noted in the code.
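Since the question is really about pulling that one number out of the nvidia-smi output, here is a minimal sketch that sticks with the command-line approach instead: it runs nvidia-smi through popen and parses the "Graphics" line. It assumes POSIX popen/pclose (use _popen/_pclose on Windows) and that nvidia-smi -q -d VOLTAGE prints a log like the one shown above; the helper name readGpuVoltage is just for illustration, and I can't test it either, so treat it as a starting point.
#include <cstdio>
#include <iostream>
#include <string>
// Run nvidia-smi and extract the graphics voltage in mV.
// Returns -1.0 on failure (command not found, unexpected output, ...).
double readGpuVoltage() {
    // POSIX popen; on Windows use _popen/_pclose instead.
    FILE* pipe = popen("nvidia-smi -q -d VOLTAGE", "r");
    if (!pipe) return -1.0;
    double voltage = -1.0;
    char line[256];
    while (std::fgets(line, sizeof(line), pipe)) {
        std::string s(line);
        // Look for the line "Graphics : 806.250 mV"
        auto pos = s.find("Graphics");
        if (pos == std::string::npos) continue;
        pos = s.find(':', pos);
        if (pos == std::string::npos) continue;
        try {
            voltage = std::stod(s.substr(pos + 1)); // parsing stops at " mV"
        } catch (...) {
            voltage = -1.0;
        }
        break;
    }
    pclose(pipe);
    return voltage;
}
int main() {
    double mv = readGpuVoltage();
    if (mv >= 0.0)
        std::cout << "GPU voltage: " << mv << " mV\n";
    else
        std::cout << "Could not read voltage from nvidia-smi\n";
}
The same find-the-label-then-stod pattern should work for the clock and temperature queries as well.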
I want to create a simple C++ application on Windows that checks the display turn-off time.
After some searching I found this function from windows.h:
int time;
bool check;
check = SystemParametersInfo(SPI_GETSCREENSAVETIMEOUT, 0, &time, 0);
if (check) {
cout << "The Screen Saver time is : " << time << endl;
}
else {
cout << "Sorry dude the windows api can't do it" << endl;
}
But when I use this code the time is always zero, even though in my Windows settings the display is set to turn off after 5 minutes.
I tried a fix myself and changed the type of time to long long, but then I got a garbage value (a very big number). So what did I do wrong in getting the screen turn-off time?
OS: Windows 10
Compiler: MinGW32; I also tested with MSVC 2015
Screen saver timeout and display power-off timeout are two different things.
SPI_GETSCREENSAVETIMEOUT returns the screen saver timeout - the time after which the Screen Saver is activated. If a screen saver was never configured, the value is 0.
The display power-off timeout is the time after which the power to the screen is cut, and is part of the power profile (and can differ e.g. for battery vs. AC power).
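As a quick sanity check on the screen-saver side, the sketch below also queries SPI_GETSCREENSAVEACTIVE, which reports whether a screen saver is configured at all; if it isn't, the zero you are seeing from SPI_GETSCREENSAVETIMEOUT is expected. Untested here, but both flags are documented SystemParametersInfo actions.
#include <iostream>
#include <windows.h>
int main() {
    BOOL active = FALSE;
    int timeout = 0;
    // Is a screen saver configured at all?
    if (SystemParametersInfo(SPI_GETSCREENSAVEACTIVE, 0, &active, 0))
        std::cout << "Screen saver active: " << (active ? "yes" : "no") << "\n";
    // Screen saver timeout in seconds; stays 0 if none is configured.
    if (SystemParametersInfo(SPI_GETSCREENSAVETIMEOUT, 0, &timeout, 0))
        std::cout << "Screen saver timeout: " << timeout << " s\n";
}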
Use CallNtPowerInformation to get the display power-off timeout:
#include <iostream>
#include <windows.h>
#include <powerbase.h>
#pragma comment(lib, "PowrProf.lib")
int main() {
SYSTEM_POWER_POLICY powerPolicy;
DWORD ret;
ret = CallNtPowerInformation(SystemPowerPolicyCurrent, nullptr, 0, &powerPolicy, sizeof(powerPolicy));
if (ret == ERROR_SUCCESS) {
std::cout << "Display power-off timeout : " << powerPolicy.VideoTimeout << "s \n";
}
else {
std::cerr << "Error 0x" << std::hex << ret << std::endl;
}
}
Example output:
Display power-off timeout : 600 s
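If you also need the AC (plugged in) and DC (on battery) values separately, as mentioned above, the same call accepts the SystemPowerPolicyAc and SystemPowerPolicyDc information levels. A small sketch, untested, with the helper name printVideoTimeout used just for illustration:
#include <iostream>
#include <windows.h>
#include <powerbase.h>
#pragma comment(lib, "PowrProf.lib")
// Query one power-policy information level and print its display power-off timeout.
// SystemPowerPolicyAc = while plugged in, SystemPowerPolicyDc = while on battery.
static void printVideoTimeout(POWER_INFORMATION_LEVEL level, const char* label) {
    SYSTEM_POWER_POLICY policy = {};
    DWORD ret = CallNtPowerInformation(level, nullptr, 0, &policy, sizeof(policy));
    if (ret == ERROR_SUCCESS)
        std::cout << label << " : " << policy.VideoTimeout << " s\n";
    else
        std::cerr << label << " : error 0x" << std::hex << ret << std::dec << "\n";
}
int main() {
    printVideoTimeout(SystemPowerPolicyAc, "Display power-off timeout (AC)");
    printVideoTimeout(SystemPowerPolicyDc, "Display power-off timeout (DC)");
}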
In CMake, I built OpenCV with OpenCL enabled. It automatically detected the OPENCL_INCLUDE_DIR path, but OPENCL_LIBRARY stayed empty even after clicking Configure, and for OPENCL_LIBRARY I don't see a browse button either. After generating the OpenCV binaries I ran the code below:
#include <iostream>
#include <fstream>
#include <string>
#include <iterator>
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
using namespace std;
int main()
{
    if (!cv::ocl::haveOpenCL())
        cout << "OpenCL is not available..." << endl;
    else
        cout << "OpenCL is AVAILABLE! :) " << endl; //this is the output
    cv::ocl::setUseOpenCL(true);
    // The snippet used 'context' without declaring it; create one here.
    cv::ocl::Context context;
    if (!context.create(cv::ocl::Device::TYPE_GPU))
        cout << "Failed creating the context..." << endl;
    cout << context.ndevices() << " GPU devices are detected." << endl;
    for (size_t i = 0; i < context.ndevices(); i++)
    {
        cv::ocl::Device device = context.device(i);
        cout << "name: " << device.name() << endl;
        cout << "available: " << device.available() << endl;
        cout << "imageSupport: " << device.imageSupport() << endl;
        cout << "OpenCL_C_Version: " << device.OpenCL_C_Version() << endl;
        cout << endl;
    } //this works & i can see my video card name & opencl version
    cv::ocl::Device(context.device(0));
}
When I measure performance with UMat versus Mat, having OpenCL enabled makes no difference.
I downloaded the AMD APP SDK from this link and tried to build it, but there were no OpenCL binaries (instead I saw OpenGL DLL files [glew32.dll & glut32.dll]). How do I build OpenCV with OpenCL by linking the OPENCL_LIBRARY?
I believe you do have OpenCL, given the result of your call to haveOpenCL and the version query. I'm not sure your performance test shows that you don't.
If you want to understand OpenCL, I would take a step back and figure it out first and then try to understand OpenCV with it.
Your link didn't work; did you try this? It has a link to the current AMD APP SDK (3.0). I would go through that setup and make sure you can build and run the OpenCL samples on your system; then you should be able to troubleshoot why it isn't working in OpenCV (if it truly isn't).
As to performance, well, it depends. Every time you send data to and from the graphics card it comes at a cost; the Transparent API was designed to make that choice for you, sending data to the card only when the faster processing is worth the trip there and back. If it is not worth the trip, you will actually get poorer performance. Additionally, not all of the library will run on the GPU. See some of the explanation on opencv.org.
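To see whether the Transparent API is actually kicking in, one thing you can try is timing the same filter on a Mat versus a UMat on a large image; if OpenCL is really being used, the UMat path should pull ahead once the image is big enough to justify the transfer. A rough sketch only: the image size, kernel and iteration count are arbitrary, and a warm-up call is included because the first OpenCL call also compiles the kernels.
#include <iostream>
#include <opencv2/opencv.hpp>
#include <opencv2/core/ocl.hpp>
int main() {
    cv::ocl::setUseOpenCL(true);
    // A reasonably large image, so the GPU round trip has a chance to pay off.
    cv::Mat src(4096, 4096, CV_8UC3);
    cv::randu(src, cv::Scalar::all(0), cv::Scalar::all(255));
    cv::Mat dstCpu;
    cv::UMat usrc = src.getUMat(cv::ACCESS_READ), dstGpu;
    // Warm-up: the first OpenCL call also compiles the kernels.
    cv::GaussianBlur(usrc, dstGpu, cv::Size(15, 15), 0);
    // CPU path (Mat)
    int64 t0 = cv::getTickCount();
    for (int i = 0; i < 10; ++i)
        cv::GaussianBlur(src, dstCpu, cv::Size(15, 15), 0);
    double cpuMs = (cv::getTickCount() - t0) * 1000.0 / cv::getTickFrequency();
    // OpenCL path (UMat) - only actually offloaded if OpenCL is usable
    int64 t1 = cv::getTickCount();
    for (int i = 0; i < 10; ++i)
        cv::GaussianBlur(usrc, dstGpu, cv::Size(15, 15), 0);
    cv::Mat synced = dstGpu.getMat(cv::ACCESS_READ); // force completion / download
    double gpuMs = (cv::getTickCount() - t1) * 1000.0 / cv::getTickFrequency();
    std::cout << "Mat:  " << cpuMs << " ms\nUMat: " << gpuMs << " ms\n";
}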
According to Microsoft, starting with Windows 10, applications using shared-mode WASAPI can request buffer sizes smaller than 10ms (see https://msdn.microsoft.com/en-us/library/windows/hardware/mt298187%28v=vs.85%29.aspx).
According to the article, achieving such low latencies requires some driver updates, which I did. Using an exclusive-mode render and capture stream, I measured a total round-trip latency (using a hardware loopback cable) of around 13ms. This suggests to me that at least one of the endpoints successfully achieves a latency of < 10ms. (Is this assumption correct?)
The article mentions that applications can use the new IAudioClient3 interface to query the minimum buffer size supported by the Windows audio engine using IAudioClient3::GetSharedModeEnginePeriod(). However, this function always returns 10ms on my system, and any attempt to initialize an audio stream using either IAudioClient::Initialize() or IAudioClient3::InitializeSharedAudioStream() with a period lower than 10ms always results in AUDCLNT_E_INVALID_DEVICE_PERIOD.
Just to be sure, I also disabled any effects processing in the audio drivers.
What am I missing? Is it even possible to get low latency from shared mode?
See below for some sample code.
#include <windows.h>
#include <atlbase.h>
#include <mmdeviceapi.h>
#include <audioclient.h>
#include <iostream>
#define VERIFY(hr) do { \
auto temp = (hr); \
if(FAILED(temp)) { \
std::cout << "Error: " << #hr << ": " << temp << "\n"; \
goto error; \
} \
} while(0)
int main(int argc, char** argv) {
HRESULT hr;
CComPtr<IMMDevice> device;
AudioClientProperties props;
CComPtr<IAudioClient> client;
CComPtr<IAudioClient2> client2;
CComPtr<IAudioClient3> client3;
CComHeapPtr<WAVEFORMATEX> format;
CComPtr<IMMDeviceEnumerator> enumerator;
REFERENCE_TIME minTime, maxTime, engineTime;
UINT32 min, max, fundamental, default_, current;
VERIFY(CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED));
VERIFY(enumerator.CoCreateInstance(__uuidof(MMDeviceEnumerator)));
VERIFY(enumerator->GetDefaultAudioEndpoint(eRender, eMultimedia, &device));
VERIFY(device->Activate(__uuidof(IAudioClient), CLSCTX_ALL, nullptr, reinterpret_cast<void**>(&client)));
VERIFY(client->QueryInterface(&client2));
VERIFY(client->QueryInterface(&client3));
VERIFY(client3->GetCurrentSharedModeEnginePeriod(&format, &current));
// Always fails with AUDCLNT_E_OFFLOAD_MODE_ONLY.
hr = client2->GetBufferSizeLimits(format, TRUE, &minTime, &maxTime);
if(hr == AUDCLNT_E_OFFLOAD_MODE_ONLY)
std::cout << "GetBufferSizeLimits returned AUDCLNT_E_OFFLOAD_MODE_ONLY.\n";
else if(SUCCEEDED(hr))
std::cout << "hw min = " << (minTime / 10000.0) << " hw max = " << (maxTime / 10000.0) << "\n";
else
VERIFY(hr);
// Correctly? reports a minimum hardware period of 3ms and audio engine period of 10ms.
VERIFY(client->GetDevicePeriod(&engineTime, &minTime));
std::cout << "hw min = " << (minTime / 10000.0) << " engine = " << (engineTime / 10000.0) << "\n";
// All values are set to a number of frames corresponding to 10ms.
// This does not change if i change the device's sampling rate in the control panel.
VERIFY(client3->GetSharedModeEnginePeriod(format, &default_, &fundamental, &min, &max));
std::cout << "default = " << default_
<< " fundamental = " << fundamental
<< " min = " << min
<< " max = " << max
<< " current = " << current << "\n";
props.bIsOffload = FALSE;
props.cbSize = sizeof(props);
props.eCategory = AudioCategory_ForegroundOnlyMedia;
props.Options = AUDCLNT_STREAMOPTIONS_RAW | AUDCLNT_STREAMOPTIONS_MATCH_FORMAT;
// Doesn't seem to have any effect regardless of category/options values.
VERIFY(client2->SetClientProperties(&props));
format.Free();
VERIFY(client3->GetCurrentSharedModeEnginePeriod(&format, &current));
VERIFY(client3->GetSharedModeEnginePeriod(format, &default_, &fundamental, &min, &max));
std::cout << "default = " << default_
<< " fundamental = " << fundamental
<< " min = " << min
<< " max = " << max
<< " current = " << current << "\n";
error:
CoUninitialize();
return 0;
}
Per Hans in the comment above, double-check that you've followed the instructions for Low Latency Audio here.
I'd reboot the machine just to be sure; Windows can be a bit finicky with that kind of thing.
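For reference, once GetSharedModeEnginePeriod starts reporting a smaller minimum (i.e. after the low-latency driver requirements are met), the stream would be opened with that period roughly as below. This is only a sketch of the call sequence; the helper name InitializeWithMinimumPeriod is just for illustration, and client3 and format are assumed to be obtained exactly as in the question's code.
#include <windows.h>
#include <audioclient.h>
#include <iostream>
// Open a shared-mode, event-driven stream using the smallest engine period reported.
HRESULT InitializeWithMinimumPeriod(IAudioClient3* client3, const WAVEFORMATEX* format)
{
    UINT32 defaultPeriod = 0, fundamentalPeriod = 0, minPeriod = 0, maxPeriod = 0;
    HRESULT hr = client3->GetSharedModeEnginePeriod(format, &defaultPeriod,
                                                    &fundamentalPeriod, &minPeriod, &maxPeriod);
    if (FAILED(hr))
        return hr;
    std::cout << "Requesting period of " << minPeriod << " frames\n";
    // Event-driven, shared-mode stream at the minimum period (in frames).
    return client3->InitializeSharedAudioStream(AUDCLNT_STREAMFLAGS_EVENTCALLBACK,
                                                minPeriod, format, nullptr);
}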
I want to write the start and end dates of a program's execution to my log file.
I can't install anything and can only use the standard library (I run my code from the command line on Linux).
I want something like this :
[TRACE] 2014-07-24 14:18:50,2014-07-24 14:18:52
I have this result for the moment :
[TRACE] , Start date of execution : Aug 25 2014 : 10:43:02
End date of execution : Mon Aug 25 10:43:06 2014
Here is my code:
#include <iostream>
#include <string>
#include <fstream>
#include <ctime>
using namespace std;
void startDateExecution(fstream& file) {
if (file)
{
file << "[TRACE]" << " , " << "Start date of execution : " << __DATE__ << " : " << __TIME__ << endl;
}
else
cerr << "Unable to open file" << endl;
}
void endDateExecution(fstream& file) {
time_t result = time(NULL);
file << "End date of execution : " << asctime(localtime(&result)) << endl;
file.close();
}
void displayDate(fstream& file) {
startDateExecution(file);
endDateExecution(file);
}
int main(){
fstream file("trace.log", ios::out | ios::trunc);
displayDate(file);
return 0;
}
You can use log4cpp library. It has lots of other features too. There are sample programs available on the following website.
http://log4cpp.sourceforge.net/
You just need to instantiate the appender based on your needs. I have used RollingFileAppender in my project, where I needed the log file to roll over after it reached a threshold (e.g. a file size of 1 MB). Then you need to set the pattern in which you want the logs to be written.
Hope this helps.
As many have commented, __DATE__ and __TIME__ refer to the time of compilation, not execution.
You'll need to retrieve the current time both at the start and at the end of the execution; you'll use the same method, whichever one you use, in both cases.
Here's an example of how you can format time using strftime.
std::string format(time_t when)
{
char timestr[256] = {0};
const char* my_format = "%m/%d/%y # %H:%M:%S";
std::strftime(timestr, sizeof(timestr), my_format, std::localtime(&when));
return timestr;
}
You would use it like this:
int main()
{
time_t start = std::time(NULL);
// Do stuff
time_t end = std::time(NULL);
std::cout << "Start: " << format(start) << std::endl
<< "End: " << format(end) << std::endl;
}
Read the documentation for strftime to learn how to specify your own format.
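Applied to the program in the question, a complete sketch that produces the requested [TRACE] line could look like this. The %Y-%m-%d %H:%M:%S pattern matches the layout shown at the top, trace.log and the trunc open mode are taken from the original code, and the helper name formatTrace is just for illustration.
#include <ctime>
#include <fstream>
#include <string>
// Same idea as format() above, but with a pattern matching the requested output.
std::string formatTrace(time_t when)
{
    char timestr[64] = {0};
    std::strftime(timestr, sizeof(timestr), "%Y-%m-%d %H:%M:%S", std::localtime(&when));
    return timestr;
}
int main()
{
    std::time_t start = std::time(NULL);
    // ... actual work goes here ...
    std::time_t end = std::time(NULL);
    std::ofstream log("trace.log", std::ios::out | std::ios::trunc);
    // Produces e.g.: [TRACE] 2014-07-24 14:18:50,2014-07-24 14:18:52
    log << "[TRACE] " << formatTrace(start) << "," << formatTrace(end) << std::endl;
}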
The following simple program never exits if the cudaMalloc call is executed. Commenting out just the cudaMalloc causes it to exit normally.
#include <iostream>
using std::cout;
using std::cin;
#include "cuda.h"
#include "cutil_inline.h"
void PrintCudaVersion(int version, const char *name)
{
int versionMaj = version / 1000;
int versionMin = (version - (versionMaj * 1000)) / 10;
cout << "CUDA " << name << " version: " << versionMaj << "." << versionMin << "\n";
}
void ReportCudaVersions()
{
int version = 0;
cudaDriverGetVersion(&version);
PrintCudaVersion(version, "Driver");
cudaRuntimeGetVersion(&version);
PrintCudaVersion(version, "Runtime");
}
int main(int argc, char **argv)
{
//CUresult r = cuInit(0); << These two lines were in original post
//cout << "Init result: " << r << "\n"; << but have no effect on the problem
ReportCudaVersions();
void *ptr = NULL;
cudaError_t err = cudaSuccess;
err = cudaMalloc(&ptr, 1024*1024);
cout << "cudaMalloc returned: " << err << " ptr: " << ptr << "\n";
err = cudaFree(ptr);
cout << "cudaFree returned: " << err << "\n";
return(0);
}
This is running on Windows 7 with a CUDA 4.1 driver and the CUDA 3.2 runtime. I've traced the return from main through the CRT to ExitProcess(), from which it never returns (as expected), but the process never ends either. From VS2008 I can stop debugging OK; from the command line I have to kill the console window.
Program output:
Init result: 0
CUDA Driver version: 4.1
CUDA Runtime version: 3.2
cudaMalloc returned: 0 ptr: 00210000
cudaFree returned: 0
I tried making the allocation amount so large that cudaMalloc would fail. It did and reported an error, but the program still would not exit. So it apparently has to do with merely calling cudaMalloc, not the existence of allocated memory.
Any ideas as to what is going on here?
EDIT: I was wrong in the second sentence - I have to eliminate both the cudaMalloc and the cudaFree to get the program to exit. Leaving either one in causes the hang.
EDIT: Although there are many references to the fact that CUDA driver versions are backward compatible, this problem went away when I reverted the driver to V3.2.
It seems like you're mixing the driver API (cuInit) with the runtime API (cudaMalloc).
I don't know if anything funny happens (or should happen) behind the scenes, but one thing you could try is to remove the cuInit and see what happens.
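For what it's worth, here is a minimal sketch of the same allocate/free test done purely through the driver API, with no runtime calls at all, which would at least confirm or rule out the mixing as the cause. It is untested against the CUDA 4.1 driver / 3.2 runtime combination from the question.
#include <iostream>
#include "cuda.h"
// Pure driver-API version of the allocate/free test: no cudaMalloc/cudaFree,
// so the runtime never creates its own context behind the scenes.
int main()
{
    CUresult r = cuInit(0);
    std::cout << "cuInit: " << r << "\n";
    CUdevice dev;
    CUcontext ctx;
    CUdeviceptr ptr = 0;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    r = cuMemAlloc(&ptr, 1024 * 1024);
    std::cout << "cuMemAlloc returned: " << r << "\n";
    r = cuMemFree(ptr);
    std::cout << "cuMemFree returned: " << r << "\n";
    // Explicit teardown before exit.
    cuCtxDestroy(ctx);
    return 0;
}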