OpenCL finds platforms, but cannot open them - C++

I am currently using a Lenovo Yoga 510 which has an AMD Radeon R5 graphics card. OpenCL works on it; however, when I run my code to query and get platform details, the total number of available platforms is returned, but it gives an error that these platforms cannot be opened. Please see the error message below.
Error: Failed to open platforms key SOFTWARE\Intel\OpenCL\Boards to load board library at runtime.
Either link to the board library while compiling your host code or refer to your board vendor's documentation on how to install the board library so that it can be loaded at runtime.
Failed to close platforms key (null), ignoring
Warning: Cannot find any Intel(R) FPGA Board libraries.
No Intel(R) FPGA devices will be loaded.
Please contact your board vender or see section "Linking Your Host Application to the Khronos ICD Loader Library" of the Programming Guide to set up FCD manually.
2 PLATFORM(s) FOUND
See my code below:
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main() {
    cl_int returned;
    cl_int zero = (cl_int)0;

    // SET UP DEVICE EXECUTION ENVIRONMENT
    cl_uint no_of_platforms;
    //cl_uint no_of_entries;
    cl_platform_id* platforms;
    size_t device_info_val_size;
    char* detail;

    // 1. Query and select the vendor-specific platform
    returned = clGetPlatformIDs(zero, NULL, &no_of_platforms);
    if (returned == CL_SUCCESS) {
        printf("%u PLATFORM(s) FOUND \n", no_of_platforms);
    }
    else {
        printf("No Platform Found\n");
        return EXIT_FAILURE; // Terminate program
    }

    platforms = (cl_platform_id*)malloc(sizeof(cl_platform_id) * no_of_platforms); // create enough space to put platform IDs into
    clGetPlatformIDs(no_of_platforms, platforms, NULL); // Fill in platforms with their IDs

    free(platforms);
    return 0;
}
Any idea what I may be doing wrong or have set up wrong? I am wondering why it is looking for Intel FPGAs on my Radeon graphics card.

Based on what you've provided, it sounds like the OpenCL ICD (Installable Client Driver) is configured incorrectly. This can be caused by a number of factors (independently):
Old/outdated graphics drivers
Corruption caused by a system update/Registry edit
The most reliable advice is to update (or, as a last resort, reinstall) your graphics drivers. Unless your GPU/iGPU are too old to have working OpenCL drivers, this should set everything up correctly.
Since you're using MSVC, I'd also recommend downloading the OpenCL SDK provided by Intel (or AMD, if this were an AMD CPU); not only does this ensure that you have the most up-to-date headers and utilities associated with OpenCL, it also installs a CPU ICD for OpenCL, giving you an extra platform to test with.
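To verify which ICDs are actually responding after that, you can extend the platform query to print each platform's name and vendor via clGetPlatformInfo. A minimal sketch (standard OpenCL 1.x API only, nothing vendor-specific assumed):
#include <stdio.h>
#include <stdlib.h>
#include <CL/cl.h>

int main(void) {
    cl_uint count = 0;
    if (clGetPlatformIDs(0, NULL, &count) != CL_SUCCESS || count == 0) {
        printf("No OpenCL platforms found\n");
        return EXIT_FAILURE;
    }

    cl_platform_id *platforms = (cl_platform_id *)malloc(sizeof(cl_platform_id) * count);
    clGetPlatformIDs(count, platforms, NULL);

    // Print the name and vendor of every registered platform (i.e. every ICD that answers).
    for (cl_uint i = 0; i < count; ++i) {
        char name[256] = {0}, vendor[256] = {0};
        clGetPlatformInfo(platforms[i], CL_PLATFORM_NAME, sizeof(name), name, NULL);
        clGetPlatformInfo(platforms[i], CL_PLATFORM_VENDOR, sizeof(vendor), vendor, NULL);
        printf("Platform %u: %s (%s)\n", i, name, vendor);
    }

    free(platforms);
    return 0;
}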

Related

Ampere A100 not supporting Memory Pools?

I am trying to run the "graphMemoryNodes" sample provided with CUDA Toolkit 11.6 on an A100 card. It compiles fine but stops after some compatibility checks:
cudaDeviceGetAttribute(&deviceSupportsMemoryPools,
                       cudaDevAttrMemoryPoolsSupported, device);
if (!deviceSupportsMemoryPools) {
  printf("Waiving execution as device does not support Memory Pools\n");
  exit(EXIT_WAIVED);
} else {
  printf("Setting up sample.\n");
}
Here, deviceSupportsMemoryPools is equal to 0, meaning that the card does not support Memory Pools. Is that correct? It sounds surprising to me, but I couldn't find the answer.
I am using VS2022 on Windows Server 2019. The card driver is 513.46.
Thank you for your help
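For reference, the check can be reproduced standalone to confirm the attribute together with the driver and runtime versions involved; a minimal sketch using only documented CUDA runtime calls (cudaDeviceGetAttribute, cudaDriverGetVersion, cudaRuntimeGetVersion), not taken from the sample:
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int device = 0, supportsPools = 0, driverVer = 0, runtimeVer = 0;

    cudaGetDevice(&device);
    // Same attribute the sample tests before waiving execution.
    cudaDeviceGetAttribute(&supportsPools, cudaDevAttrMemoryPoolsSupported, device);
    cudaDriverGetVersion(&driverVer);
    cudaRuntimeGetVersion(&runtimeVer);

    printf("device %d: memory pools supported = %d\n", device, supportsPools);
    printf("driver version %d, runtime version %d\n", driverVer, runtimeVer);
    return 0;
}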

How to perform raytracing on a non-RTX graphics card?

I want to simulate raytracing on a non-RTX graphics card, but I can't. I get the error "Raytracing not supported on device", which I point out in the code at the bottom. I set m_useWarpDevice to true, so why do I still get the error? As I understand it, WARP lets an application run any feature (including raytracing) even if the hardware does not support it, so why doesn't it work?
Question: How can I perform raytracing on a non-RTX graphics card? The reason I insist is that I already asked this on the Microsoft forum but got no answer.
What is Windows Advanced Rasterization Platform (WARP) Guide?
From https://learn.microsoft.com/en-us/windows/win32/direct3darticles/directx-warp
WARP does not require graphics hardware to execute. It can execute even in situations where hardware is not available or cannot be initialized.
From https://alternativesp.com/software/alternative/windows-advanced-rasterization-platform-warp/
In Windows 10, WARP has been updated to support Direct3D 12 at level 12_1; under Direct3D 12, WARP also replaces the reference rasterizer.
Compiler: Visual Studio 2019
Graphic card: NVIDIA GeForce 920M (non-RTX)
DXSample.cpp
From https://github.com/ScrappyCocco/DirectX-DXR-Tutorials/blob/master/01-Dx12DXRTriangle/Project/DXSample.cpp
At line 19
DXSample::DXSample(const UINT width, const UINT height, const std::wstring name) :
    m_width(width),
    m_height(height),
    m_useWarpDevice(true), // <-- It was false but I set it to true.
    m_title(name)
{
    m_aspectRatio = static_cast<float>(width) / static_cast<float>(height);
}
D3D12HelloTriangle.cpp
From https://github.com/ScrappyCocco/DirectX-DXR-Tutorials/blob/master/01-Dx12DXRTriangle/Project/D3D12HelloTriangle.cpp
At line 91
if (m_useWarpDevice) { // m_useWarpDevice = true
    ComPtr<IDXGIAdapter> warpAdapter;
    ThrowIfFailed(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter))); // <-- Success
    ThrowIfFailed(D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_12_1, IID_PPV_ARGS(&m_device))); // <-- Success
}
else {
    ComPtr<IDXGIAdapter1> hardwareAdapter;
    GetHardwareAdapter(factory.Get(), &hardwareAdapter);
    ThrowIfFailed(D3D12CreateDevice(hardwareAdapter.Get(), D3D_FEATURE_LEVEL_12_1, IID_PPV_ARGS(&m_device)));
}
At line 494
void D3D12HelloTriangle::CheckRaytracingSupport() const {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    ThrowIfFailed(m_device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &options5, sizeof(options5)));
    if (options5.RaytracingTier < D3D12_RAYTRACING_TIER_1_0) // <-- options5.RaytracingTier = 0 on my computer, which means that raytracing is not supported.
        throw std::runtime_error("Raytracing not supported on device"); // <-- I get this error.
}
Off-topic (just to help in the future in case I forget):
https://alternativesp.com/software/alternative/windows-advanced-rasterization-platform-warp/
To force an application to use WARP without disabling the display driver, install the DirectX SDK (http://www.microsoft.com/en-us/download/details.aspx?id=6812), go to C:\Windows\System32, run dxcpl.exe, under "Scope" click "Edit list", and add the path to the application.
I tried to use dxcpl.exe to force WARP but options5.RaytracingTier is always 0.
Instead of using the WARP device, you can use the DX12 raytracing fallback layer.
https://github.com/microsoft/DirectX-Graphics-Samples/tree/e5ea2ac7430ce39e6f6d619fd85ae32581931589/Libraries/D3D12RaytracingFallback
Please note that it has a few limitations (resource binding is slightly different; also, it's unlikely that they will continue to support it).
Also, of course, since it emulates the on-chip RTX hardware with compute shaders, performance is not as good as native.
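If I remember correctly, wiring it up looks roughly like the sketch below, based on the samples in that repository; the helper names (D3D12RaytracingFallback.h, D3D12CreateRaytracingFallbackDevice, CreateRaytracingFallbackDeviceFlags) should be double-checked against the repo, and the experimental shader models must be enabled before the D3D12 device is created:
// Sketch only: names taken from the D3D12RaytracingFallback samples; verify against that repo.
#include "D3D12RaytracingFallback.h"

// 1) Enable the experimental shader models the fallback layer relies on
//    (this must happen before D3D12CreateDevice is called).
ThrowIfFailed(D3D12EnableExperimentalFeatures(1, &D3D12ExperimentalShaderModels, nullptr, nullptr));

// 2) After creating the regular (non-RTX) device, wrap it in a fallback device.
//    ForceComputeFallback makes it use the compute-shader emulation even if
//    the driver reports native DXR support.
ComPtr<ID3D12RaytracingFallbackDevice> fallbackDevice;
ThrowIfFailed(D3D12CreateRaytracingFallbackDevice(
    m_device.Get(),
    CreateRaytracingFallbackDeviceFlags::ForceComputeFallback,
    0,
    IID_PPV_ARGS(&fallbackDevice)));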

How to check if a true hardware video adapter is used

I am developing an application which shows something like a video in its window. I use the technologies described here: Introducing Direct2D 1.1. In my case the only difference is that eventually I create a bitmap using
ID2D1DeviceContext::CreateBitmap
then I use
ID2D1Bitmap::CopyFromMemory
to copy raw RGB data to it and then I call
ID2D1DeviceContext::DrawBitmap
to draw the bitmap. I use the high-quality cubic interpolation mode D2D1_INTERPOLATION_MODE_HIGH_QUALITY_CUBIC for scaling to get the best picture, but in some cases (RDP, Citrix, virtual machines, etc.) it is very slow and has very high CPU consumption. This happens because in those cases a non-hardware video adapter is used. So for non-hardware adapters I am trying to turn off the interpolation and use faster methods. The problem is that I cannot reliably check whether the system has a true hardware adapter.
When I call D3D11CreateDevice, I use it with D3D_DRIVER_TYPE_HARDWARE, but on virtual machines it typically returns "Microsoft Basic Render Driver", which is a software driver and does not use the GPU (it consumes CPU). So currently I check the vendor ID: if the vendor is AMD (ATI), NVIDIA or Intel, then I use the cubic interpolation. Otherwise I use the fastest method, which does not consume much CPU.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            // NVIDIA
            if (desc.VendorId == 0x10DE ||
                // AMD
                desc.VendorId == 0x1002 || // 0x1022 ?
                // Intel
                desc.VendorId == 0x8086) // 0x163C, 0x8087 ?
            {
                bSupported = true;
            }
        }
    }
}
This works for a physical (console) Windows session, even in virtual machines. But for RDP sessions on real machines, IDXGIAdapter still returns those vendors even though the GPU is not used (I can see this via Process Hacker 2 and AMD System Monitor in the case of an ATI Radeon), so I still have high CPU consumption with the cubic interpolation. In the case of an RDP session to Windows 7 with an ATI Radeon, the CPU consumption is about 10% higher than via the physical console.
Or am I mistaken and somehow RDP uses GPU resources and that is the reason why it returns a real hardware adapter via IDXGIAdapter::GetDesc?
DirectDraw
Also I looked at the DirectX Diagnostic Tool. It looks like the "DirectDraw Acceleration" info field returns exactly what I need. In the case of physical (console) sessions it says "Enabled". In the case of RDP and virtual machine (without hardware video acceleration) sessions it says "Not Available". I looked at the sources and theoretically I could use its verification algorithm, but it is actually for DirectDraw, which I do not use in my application. I would like to use something which is directly linked to ID3D11Device, IDXGIDevice, IDXGIAdapter and so on.
IDXGIAdapter1::GetDesc1 and DXGI_ADAPTER_FLAG
I also tried to use IDXGIAdapter1::GetDesc1 and check the flags.
Microsoft::WRL::ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(m_pD3dDevice->QueryInterface(...)))
{
    Microsoft::WRL::ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        Microsoft::WRL::ComPtr<IDXGIAdapter1> adapter1;
        if (SUCCEEDED(adapter->QueryInterface(__uuidof(IDXGIAdapter1), reinterpret_cast<void**>(adapter1.GetAddressOf()))))
        {
            DXGI_ADAPTER_DESC1 desc;
            if (SUCCEEDED(adapter1->GetDesc1(&desc)))
            {
                // desc.Flags
                // DXGI_ADAPTER_FLAG_NONE = 0,
                // DXGI_ADAPTER_FLAG_REMOTE = 1,
                // DXGI_ADAPTER_FLAG_SOFTWARE = 2,
                // DXGI_ADAPTER_FLAG_FORCE_DWORD = 0xffffffff
            }
        }
    }
}
Information about the DXGI_ADAPTER_FLAG_SOFTWARE flag
Virtual Machine RDP Win Serv 2012 (Microsoft Basic Render Driver) -> (0x02) DXGI_ADAPTER_FLAG_SOFTWARE
Physical Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
Physical Win 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 10 (Intel Video) -> (0x00) DXGI_ADAPTER_FLAG_NONE
RDP Win 7 (ATI Radeon) -> (0x00) DXGI_ADAPTER_FLAG_NONE
In the case of an RDP session on a real machine with a hardware adapter, Flags == 0, but as I can see via Process Hacker 2 the GPU is not used. At least on Windows 7 with an ATI Radeon I can see higher CPU usage in an RDP session. So it looks like DXGI_ADAPTER_FLAG_SOFTWARE is set only for the Microsoft Basic Render Driver, and the issue is not solved.
The question
Is there a correct way to check whether a real hardware video card (GPU) is used for the current Windows session? Or maybe it is possible to check whether a specific interpolation mode of ID2D1DeviceContext::DrawBitmap has a hardware implementation and uses the GPU for the current session?
UPD
The topic is not about detecting RDP or Citrix sessions. It is not about detecting whether the application is inside a virtual machine. I already have all those checks and use linear interpolation in those cases. The topic is about detecting whether a real GPU is used for the current Windows session to display the desktop. I am looking for a more sophisticated solution that makes the decision using features of DirectX and DXGI.
If you want to detect the Microsoft Basic Renderer, the best option is to use its VID/PID combo:
ComPtr<IDXGIDevice> dxgiDevice;
if (SUCCEEDED(device.As(&dxgiDevice)))
{
    ComPtr<IDXGIAdapter> adapter;
    if (SUCCEEDED(dxgiDevice->GetAdapter(&adapter)))
    {
        DXGI_ADAPTER_DESC desc;
        if (SUCCEEDED(adapter->GetDesc(&desc)))
        {
            if ( (desc.VendorId == 0x1414) && (desc.DeviceId == 0x8c) )
            {
                // WARNING: Microsoft Basic Render Driver is active.
                // Performance of this application may be unsatisfactory.
                // Please ensure that your video card is Direct3D10/11 capable
                // and has the appropriate driver installed.
            }
        }
    }
}
See Microsoft Docs and Anatomy of Direct3D 11 Create Device
You will probably find for testing/debugging that you don't want to explicitly block these scenarios, but you do want to provide some kind of warning or notice feedback to the user that they are using software rather than hardware rendering.
Remote Desktop detection from Win32 classic desktop applications is better done directly via GetSystemMetrics( SM_REMOTESESSION ).
See Microsoft Docs
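A minimal example of that check (plain Win32, assuming <windows.h>):
#include <windows.h>
#include <cstdio>

int main() {
    // Non-zero when the calling process is running in a Remote Desktop session.
    const bool isRemoteSession = GetSystemMetrics(SM_REMOTESESSION) != 0;
    printf("Remote session: %s\n", isRemoteSession ? "yes" : "no");
    return 0;
}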
Answering a 3-year-old question, as I struggled with this myself.
I had to go through the registry. The first thing is to find the adapter LUID in the registry in order to get the adapter GUID:
private string GetAdapterGuid(long luid)
{
    var directXRegistryKey = Registry.LocalMachine.OpenSubKey(@"SOFTWARE\Microsoft\DirectX");
    if (directXRegistryKey == null)
        return "";

    var subKeyNames = directXRegistryKey.GetSubKeyNames();
    foreach (var subKeyName in subKeyNames)
    {
        var subKey = directXRegistryKey.OpenSubKey(subKeyName);
        if (subKey.GetValueKind("AdapterLuid") != RegistryValueKind.QWord)
            continue;

        var luidValue = (long)subKey.GetValue("AdapterLuid");
        if (luidValue == luid)
            return subKeyName;
    }
    return "";
}
Once you have that GUID, you can search for the details of the graphics card in HKLM like this. If it is virtual, the service name will be "INDIRECTKMD":
private bool IsVirtualAdapter(string adapterGuid)
{
    var videoRegistryKey = Registry.LocalMachine.OpenSubKey($@"SYSTEM\CurrentControlSet\Control\Video\{adapterGuid}\Video");
    if (videoRegistryKey == null)
        return false;

    if (videoRegistryKey.GetValueKind("Service") != RegistryValueKind.String)
        return false;

    var serviceName = (string)videoRegistryKey.GetValue("Service");
    return serviceName.ToUpper() == "INDIRECTKMD";
}
Checking the service name felt easier than parsing the DeviceDesc value.
My use case involved having the GUID ready, so I split this into two functions; you could merge them into one.
Also, this only detects RDP/MSTSC; additional service names might be needed for other virtual adapters. Or you could try to detect only NVIDIA/AMD/Intel driver names... up to you.
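For reference, the LUID passed into GetAdapterGuid above comes from the adapter description on the native side (DXGI_ADAPTER_DESC::AdapterLuid). A minimal C++ sketch of reading it, using a hypothetical GetAdapterLuid() helper and the same device/adapter query as in the question:
#include <wrl/client.h>
#include <d3d11.h>
#include <dxgi.h>

using Microsoft::WRL::ComPtr;

// Returns the adapter LUID packed into a 64-bit value, or 0 on failure.
long long GetAdapterLuid(ID3D11Device* device)
{
    ComPtr<IDXGIDevice> dxgiDevice;
    if (FAILED(device->QueryInterface(IID_PPV_ARGS(&dxgiDevice))))
        return 0;

    ComPtr<IDXGIAdapter> adapter;
    if (FAILED(dxgiDevice->GetAdapter(&adapter)))
        return 0;

    DXGI_ADAPTER_DESC desc = {};
    if (FAILED(adapter->GetDesc(&desc)))
        return 0;

    return (static_cast<long long>(desc.AdapterLuid.HighPart) << 32) |
           desc.AdapterLuid.LowPart;
}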

tensorflow places softmax op on cpu instead of gpu

I have a TensorFlow model with multiple inputs and several layers, and a final softmax layer. The model is trained in Python (using the Keras framework), then saved, and inference is done using a C++ program built against a CMake build of TensorFlow (basically following these instructions: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake).
In python (tensorflow-gpu) all ops use the GPU (using log_device_placement):
out/MatMul: (MatMul): /job:localhost/replica:0/task:0/gpu:0
2017-12-04 14:07:38.005837: I C:\tf_jenkins\home\workspace\rel-in\M\windows-gpu\PY\35\tensorflow\core\common_runtime\simple_placer.cc:872] out/MatMul: (MatMul)/job:localhost/replica:0/task:0/gpu:0
out/BiasAdd: (BiasAdd): /job:localhost/replica:0/task:0/gpu:0
2017-12-04 14:07:38.006201: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\simple_placer.cc:872]
out/BiasAdd: (BiasAdd)/job:localhost/replica:0/task:0/gpu:0
out/Softmax: (Softmax): /job:localhost/replica:0/task:0/gpu:0
2017-12-04 14:07:38.006535: I C:\tf_jenkins\home\workspace\rel-win\M\windows-gpu\PY\35\tensorflow\core\common_runtime\simple_placer.cc:872] out/Softmax: (Softmax)/job:localhost/replica:0/task:0/gpu:0
To save the graph, the freeze_graph script is used (the script that produced the log above reloads the frozen graph in .pb format).
When I use the C++ program and load the frozen graph (closely following the LoadGraph() function in https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/label_image/main.cc - ReadBinaryProto() and session->Create()), and log the device placements again, I find that the Softmax is placed on the CPU (all other ops are on the GPU):
dense_6/MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
dense_6/BiasAdd: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
dense_6/Relu: (Relu): /job:localhost/replica:0/task:0/device:GPU:0
out/MatMul: (MatMul): /job:localhost/replica:0/task:0/device:GPU:0
out/BiasAdd: (BiasAdd): /job:localhost/replica:0/task:0/device:GPU:0
out/Softmax: (Softmax): /job:localhost/replica:0/task:0/device:CPU:0
This placement is also confirmed by high CPU/low GPU utilization, and also apparent from profiling the application. The data type of the out layer is float32 (out/Softmax -> (<tf.Tensor 'out/Softmax:0' shape=(?, 1418) dtype=float32>,)).
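(For reference, the device-placement logging on the C++ side is enabled through the session options; a minimal sketch of how that can be done with a hypothetical CreateLoggingSession() helper, assuming the TF 1.x C++ API:)
#include <memory>
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/public/session_options.h"

std::unique_ptr<tensorflow::Session> CreateLoggingSession() {
    // Equivalent of log_device_placement=True in Python: every op's device
    // assignment is logged when the graph is set up in the session.
    tensorflow::SessionOptions options;
    options.config.set_log_device_placement(true);
    return std::unique_ptr<tensorflow::Session>(tensorflow::NewSession(options));
}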
Further investigation revealed:
Creating the softmax op in C++ and explicitly placing it on the GPU throws this error message:
Cannot assign a device for operation 'tsoftmax': Could not satisfy explicit device specification '/device:GPU:0' because no supported kernel for GPU devices is available.
A call to tensorflow::LogAllRegisteredKernels() also showed that Softmax is only available for CPU!
The build directory contains many files related to "softmax" (e.g. tf_core_gpu_kernels_generated_softmax_op_gpu.cu.cc.obj.Release.cmake). I don't know how to check every compilation step, though.
When I look into tf_core_gpu_kernels.lib (one can open a .lib with 7-Zip ;)), there are files like "tf_core_gpu_kernels_generated_softmax_op_gpu.cu.cc.lib" - so I believe there is nothing wrong with compiling the kernels themselves.
But: inspecting the "tensorflow.dll" (Dependency Walker) shows that only CPU kernels for Softmax are included (there are functions like const tensorflow::SoftmaxOp<struct Eigen::ThreadPoolDevice,double>, but no functions with GPU such as const tensorflow::SoftplusGradOp<struct Eigen::GpuDevice,float>).
Setup: Tensorflow 1.3.0, Windows 10, GPU: NVidia GTX 1070 (8GB RAM, memory utilization also very low).
I found a workaround: include tf_core_gpu_kernels.lib in some of the build steps (create_def_file.py). More details here: GitHub Issue 15254

Error getting number of devices using Libusb-win32-1.2.6.0

I am trying to run a simple program written with libusb-win32 on VS2010 to get information about the connected USB devices. I cannot get it to run successfully.
#include <stdio.h>
#include <string.h>
#include "lusb0_usb.h"

int verbose = 1;

int print_device(struct usb_device *dev, int level);

int main(int argc, char *argv[])
{
    struct usb_bus *bus;

    if (argc > 1 && !strcmp(argv[1], "-v"))
        verbose = 1;

    usb_init();
    usb_set_debug(255);

    int nBusses = usb_find_busses();
    int nDevices = usb_find_devices();
    if (nDevices <= 0) return 0;

    for (bus = usb_get_busses(); bus; bus = bus->next)
    {
        if (bus->root_dev && !verbose)
            print_device(bus->root_dev, 0);
        else
        {
            struct usb_device *dev;
            for (dev = bus->devices; dev; dev = dev->next)
                print_device(dev, 0);
        }
    }
    return 0;
}
The project configuration is Win32/Debug and I am using the x86 DLL & LIB files. I am able to compile the code.
usb_find_busses() returns 1 and usb_find_devices() returns 0. I do not understand why I am not getting the correct numbers. Printing bus->root_dev gives the following output.
Dev #0: 0000 - 0000
bLength: 18
bDescriptorType: 01h
bcdUSB: 0200h
bDeviceClass: 09h
bDeviceSubClass: 00h
bDeviceProtocol: 00h
bMaxPacketSize0: 40h
idVendor: 0000h
idProduct: 0000h
bcdDevice: 0100h
iManufacturer: 0
iProduct: 0
iSerialNumber: 0
bNumConfigurations: 0
Couldn't retrieve descriptors
I ran inf-wizard.exe and there I am able to see all the devices with vendor and device IDs. I do not understand what I am missing.
The usb_find_devices() documentation in the libusb 0.1 API says:
Returns the number of changes since the previous call to this function (total of new device and devices removed).
So, it does not return the number of available devices, like you are expecting. If you need that value, enumerate the bus list counting the devices yourself.
Otherwise, just ignore the count and move on to your bus-printing loop; it will run across any connected devices.
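If you do want an explicit count, a minimal sketch of walking the bus list yourself (a hypothetical count_devices() helper using the same libusb-win32 0.1 API as your code, assuming usb_init() was already called):
// Count all devices by walking every bus (libusb-0.1 / libusb-win32 API).
int count_devices(void)
{
    int count = 0;
    struct usb_bus *bus;
    struct usb_device *dev;

    usb_find_busses();
    usb_find_devices();

    for (bus = usb_get_busses(); bus; bus = bus->next)
        for (dev = bus->devices; dev; dev = dev->next)
            ++count;

    return count;
}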
That being said, there is a newer libusb 1.0 API, but its documentation does not mention usb_find_devices() at all. usb_find_devices() appears to have been replaced with the new libusb_get_device_list() function:
Returns a list of USB devices currently attached to the system.
This is your entry point into finding a USB device to operate.
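A minimal enumeration with that newer API would look roughly like this (a sketch; it assumes the libusb-1.0 header and import library rather than libusb-win32's lusb0_usb.h):
#include <stdio.h>
#include <libusb.h>   // libusb-1.0 header

int main(void)
{
    libusb_context *ctx = NULL;
    libusb_device **list = NULL;

    if (libusb_init(&ctx) != 0)
        return 1;

    // Returns the number of attached devices (or a negative error code).
    ssize_t count = libusb_get_device_list(ctx, &list);
    printf("%ld device(s) attached\n", (long)count);

    for (ssize_t i = 0; i < count; ++i) {
        struct libusb_device_descriptor desc;
        if (libusb_get_device_descriptor(list[i], &desc) == 0)
            printf("  %04x:%04x\n", desc.idVendor, desc.idProduct);
    }

    libusb_free_device_list(list, 1);  // 1 = unref the devices in the list
    libusb_exit(ctx);
    return 0;
}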
You really should be using the newer 1.0 API, per the libusb website:
There are several libusb-0.1 API implementations:
libusb-0.1 is the very first libusb implementation.
libusb-compat-0.1 is a compatibility library which provides the libusb-0.1 API by using the libusb-1.0 API.
libusb-win32 is a Windows-only implementation of the libusb-0.1 API. The libusb-win32 project has also created the open source libusb0.sys Windows kernel driver, which exposes a userspace API that allows USB devices to be accessed outside of the Windows kernel.
...
Because the 0.1 and 1.0 APIs use different prefixes they are compatible with each other. It is common that both are installed in parallel on a system. We strongly recommend using libusb-1.0 together with libusb-compat-0.1 instead of the ancient libusb-0.1 code, so that programs which use both the 0.1 API and the 1.0 API in different parts of the program, or in different libraries used by the program, will all use libusb-1.0 for the actual device access. This is important to avoid potential conflicts between libusb-1.0 and libusb-0.1 being used in the same process.
... libusb-1.0 is recommended for all new development. Developers are encouraged to port existing applications which use libusb-0.1 to use the new API.
...
libusb-0.1 (legacy API)
...
Note that libusb-win32 is a separate project which still sees active development. The next generation libusb-win32 kernel driver (libusbk.sys) is based on KMDF. The ​libusbk library will support the existing libusb-win32 API, libusb-1.0 API and WinUSB-like API. libusb-win32 users who are fine with the libusb-win32 API are recommended to keep using it since it will be supported by the libusb-win32 project. libusb-win32 users who are interested in libusb-1.0 will also be supported once the libusbk backend is integrated. Future enhancement of the libusb-1.0 API (say libusb-1.1) may be required to be more suitable for Windows users.