OpenGL driver crash on glTexImage2D - opengl

With the following Kotlin/JVM/LWJGL + java.nio.ByteBuffer + OpenGL code, I seem to be able to crash one of my drivers:
val texture = glGenTextures()
glBindTexture(GL_TEXTURE_2D, texture)
val w = 1026
val h = 1029
val byteBuffer = ByteBuffer
    .allocateDirect(w * h)
    .position(0)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, w, h, 0, GL_RED, GL_UNSIGNED_BYTE, byteBuffer)
Executing this after the usual GLFW+OpenGL init results in a crash of the application with the following message:
#
# A fatal error has been detected by the Java Runtime Environment:
#
# EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ff98593509c, pid=13572, tid=15424
#
# JRE version: OpenJDK Runtime Environment (12.0.1+12) (build 12.0.1+12)
# Java VM: OpenJDK 64-Bit Server VM (12.0.1+12, mixed mode, sharing, tiered, compressed oops, g1 gc, windows-amd64)
# Problematic frame:
# C [atio6axx.dll+0x1bb509c]
#
# No core dump will be written. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# C:\Users\Antonio\Documents\IdeaProjects\VideoStudio\hs_err_pid13572.log
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
Is there something I can do about this, other than avoiding non-power-of-two textures?
I tested some resolutions, and only got crashes with textures with width or height > 1024.
In the case of 1026 x 1029 (and some more, e.g. 1590 x 2244) I get a crash in 100% of the cases.
I am using an RX 580, an R5 2600, and Windows 10, with the Radeon drivers updated to the Recommended version, in case that changes anything.

The default alignment for rows in the image data is 4 bytes (GL_UNPACK_ALIGNMENT). Unless your texture has a width that is a multiple of 4, you have to supply additional padding bytes at the end of each row. With a width of 1026, the driver assumes each row occupies 1028 bytes, so it reads past the end of your 1026 × 1029-byte buffer, which is what triggers the access violation.
The other option is to change the unpack alignment to 1 byte by calling
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
before calling glTexImage2D.
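For illustration, a minimal sketch of both fixes in plain C++/OpenGL (the LWJGL calls in the question map one-to-one); the helper name and the pixels buffer are placeholders:

#include <vector>
// GL headers via your loader of choice (glad, GLEW, ...); plain <GL/gl.h> may lack GL_R8.

// Uploads a tightly packed single-channel image of width*height bytes.
GLuint uploadR8Texture(const std::vector<unsigned char>& pixels, int width, int height)
{
    GLuint texture = 0;
    glGenTextures(1, &texture);
    glBindTexture(GL_TEXTURE_2D, texture);

    // Option 1: tell GL the rows are 1-byte aligned, so a 1026-pixel row
    // really is 1026 bytes instead of the assumed 1028.
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
                 GL_RED, GL_UNSIGNED_BYTE, pixels.data());

    // Option 2 (not shown): keep the default alignment of 4 and allocate
    // ((width + 3) / 4) * 4 bytes per row, padding each row to that stride.
    return texture;
}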

Related

How to perform raytracing on a non-RTX graphics card?

I want to do raytracing on a non-RTX graphics card, but I can't. I get the error "Raytracing not supported on device" that I point out in the code at the bottom. I set m_useWarpDevice to true, so why do I still get the error? As I understand it, WARP lets an application run any feature (including raytracing) even if the hardware doesn't support it, so why doesn't it work?
Question: How to perform raytracing on a non-RTX graphics card? The reason I insist is that I already asked this on the Microsoft forum and got no answer.
What is the Windows Advanced Rasterization Platform (WARP)?
From https://learn.microsoft.com/en-us/windows/win32/direct3darticles/directx-warp
WARP does not require graphics hardware to execute. It can execute even in situations where hardware is not available or cannot be initialized.
From https://alternativesp.com/software/alternative/windows-advanced-rasterization-platform-warp/
In Windows 10, WARP has been updated to support Direct3D 12 at level 12_1; under Direct3D 12, WARP also replaces the reference rasterizer.
Compiler: Visual Studio 2019
Graphic card: NVIDIA GeForce 920M (non-RTX)
DXSample.cpp
From https://github.com/ScrappyCocco/DirectX-DXR-Tutorials/blob/master/01-Dx12DXRTriangle/Project/DXSample.cpp
At line 19
DXSample::DXSample(const UINT width, const UINT height, const std::wstring name) :
    m_width(width),
    m_height(height),
    m_useWarpDevice(true), // <-- It was false but I set it to true.
    m_title(name)
{
    m_aspectRatio = static_cast<float>(width) / static_cast<float>(height);
}
D3D12HelloTriangle.cpp
From https://github.com/ScrappyCocco/DirectX-DXR-Tutorials/blob/master/01-Dx12DXRTriangle/Project/D3D12HelloTriangle.cpp
At line 91
if (m_useWarpDevice) { // m_useWarpDevice = true
    ComPtr<IDXGIAdapter> warpAdapter;
    ThrowIfFailed(factory->EnumWarpAdapter(IID_PPV_ARGS(&warpAdapter))); // <-- Success
    ThrowIfFailed(D3D12CreateDevice(warpAdapter.Get(), D3D_FEATURE_LEVEL_12_1, IID_PPV_ARGS(&m_device))); // <-- Success
}
else {
    ComPtr<IDXGIAdapter1> hardwareAdapter;
    GetHardwareAdapter(factory.Get(), &hardwareAdapter);
    ThrowIfFailed(D3D12CreateDevice(hardwareAdapter.Get(), D3D_FEATURE_LEVEL_12_1, IID_PPV_ARGS(&m_device)));
}
At line 494
void D3D12HelloTriangle::CheckRaytracingSupport() const {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 options5 = {};
    ThrowIfFailed(m_device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5, &options5, sizeof(options5)));
    if (options5.RaytracingTier < D3D12_RAYTRACING_TIER_1_0) // <-- options5.RaytracingTier = 0 on my computer, which means raytracing is not supported.
        throw std::runtime_error("Raytracing not supported on device"); // <-- This is the error I get.
}
Off-topic (just a note to help my future self in case I forget):
https://alternativesp.com/software/alternative/windows-advanced-rasterization-platform-warp/
To force an application to use WARP without disabling the display driver, install the DirectX SDK (http://www.microsoft.com/en-us/download/details.aspx?id=6812), go to C:\Windows\System32, run dxcpl.exe, and under “Scope” click “Edit list” and add the path to the application.
I tried to use dxcpl.exe to force WARP but options5.RaytracingTier is always 0.
Instead of using the WARP device you can use the DX12 raytracing fallback layer:
https://github.com/microsoft/DirectX-Graphics-Samples/tree/e5ea2ac7430ce39e6f6d619fd85ae32581931589/Libraries/D3D12RaytracingFallback
Please note that it has a few limitations (resource binding is slightly different, and it's unlikely that it will continue to be supported).
Also, of course, since it emulates the on-chip RT hardware with compute shaders, performance is not as good as native.
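For what it's worth, a rough sketch of how the fallback device is created, assuming the D3D12RaytracingFallback library from that repo is built and linked; the names below come from that library's samples and should be double-checked there:

#include "D3D12RaytracingFallback.h"

ComPtr<ID3D12RaytracingFallbackDevice> m_fallbackDevice;

void CreateFallbackDevice(ID3D12Device* device)
{
    // ForceComputeFallback emulates DXR with compute shaders even when the
    // driver reports no raytracing support at all (as on a GeForce 920M).
    ThrowIfFailed(D3D12CreateRaytracingFallbackDevice(
        device,
        CreateRaytracingFallbackDeviceFlags::ForceComputeFallback,
        0 /*NodeMask*/,
        IID_PPV_ARGS(&m_fallbackDevice)));

    // State objects, acceleration structures, DispatchRays etc. then go through
    // the fallback device/command list wrappers instead of ID3D12Device5.
}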

Tensorflow C API Selecting GPU

I am using the Tensorflow C API to run models saved/frozen in Python. We used to run these models on CPU but recently switched to GPU for performance. To interact with the C API we use a wrapper library called CPPFlow (https://github.com/serizba/cppflow). I recently updated this library so that we can pass in GPU config options and control GPU memory allocation. However, we now also have systems with multiple GPUs, which is causing some issues: it seems I can't get Tensorflow to use the same GPU as our software does.
I use the visible_device_list parameter with the same GPU ID as our software. If I set our software to run on device 1 and Tensorflow to device 1, Tensorflow picks device 2. If I set our software to use device 1 and Tensorflow to use device 2, both end up on the same GPU.
How does Tensorflow order GPU devices, and do I need another method to select the device manually? Everywhere I look suggests it can be done through the GPU config options.
One way to set the device is to serialize the config in Python, print the bytes as hex, and then use those bytes with the C API. For example:
Sample 1:
import tensorflow as tf

gpu_options = tf.GPUOptions(allow_growth=True, visible_device_list='1')
config = tf.ConfigProto(gpu_options=gpu_options)
serialized = config.SerializeToString()
print(list(map(hex, serialized)))
Sample 2:
import tensorflow as tf
config = tf.compat.v1.ConfigProto(device_count={"CPU":1}, inter_op_parallelism_threads=1,intra_op_parallelism_threads=1)
ser = config.SerializeToString()
list(map(hex,ser))
Out[]:
['0xa',
'0x7',
'0xa',
'0x3',
'0x43',
'0x50',
'0x55',
'0x10',
'0x1',
'0x10',
'0x1',
'0x28',
'0x1']
Use these bytes in the C API as:
uint8_t config[13] = {0xa, 0x7, 0xa, ... , 0x28, 0x1};
TF_SetConfig(opts, (void*)config, 13, status);
For more details:
https://github.com/tensorflow/tensorflow/issues/29217
https://github.com/cyberfire/tensorflow-mtcnn/issues/1
https://github.com/tensorflow/tensorflow/issues/27114
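Putting it together, a minimal sketch of the C/C++ side, assuming the TF C API header (tensorflow/c/c_api.h) and using the 13 bytes printed by Sample 2 above; substitute the bytes printed for your own config (e.g. Sample 1 with visible_device_list):

#include <cstdint>
#include <cstdio>
#include <tensorflow/c/c_api.h>

TF_SessionOptions* makeSessionOptions()
{
    // Bytes produced by config.SerializeToString() in Python (Sample 2).
    const uint8_t config[] = {0x0a, 0x07, 0x0a, 0x03, 0x43, 0x50, 0x55,
                              0x10, 0x01, 0x10, 0x01, 0x28, 0x01};

    TF_Status* status = TF_NewStatus();
    TF_SessionOptions* opts = TF_NewSessionOptions();
    TF_SetConfig(opts, config, sizeof(config), status);

    if (TF_GetCode(status) != TF_OK)
        std::fprintf(stderr, "TF_SetConfig failed: %s\n", TF_Message(status));

    TF_DeleteStatus(status);
    return opts; // pass to TF_NewSession / TF_LoadSessionFromSavedModel
}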
You can set the Tensorflow GPU order by setting the environment variable CUDA_VISIBLE_DEVICES at runtime, before the first session is created.
// Set TF to use GPU:1 and GPU:0 (in this order)
setenv("CUDA_VISIBLE_DEVICES", "1,0", 1);
// Set TF to use only GPU:0
setenv("CUDA_VISIBLE_DEVICES", "0", 1);
// Set TF to not use any GPUs
setenv("CUDA_VISIBLE_DEVICES", "-1", 1);

How to set camera to auto-exposure with OpenCV 3.4.2?

I am working with a PS-Eye-3 camera, libusb, PSEye driver, OpenCV 3.4.2 and Visual Studio 2015 / C++ on Windows 10.
I can set the exposure of the camera to any value by using this code:
cv::VideoCapture *cap;
...
cap = new cv::VideoCapture(0);
cap->set(CV_CAP_PROP_EXPOSURE, exposure); // exposure = [0, 255]
Now I would like to switch to auto-exposure too. How can I set the camera to auto-exposure mode?
I tried the following:
cap->set(CV_CAP_PROP_EXPOSURE, 0); // not working
cap->set(CV_CAP_PROP_EXPOSURE, -1); // not working
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 1); // not working, exposure stays fixed
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, 0); // not working, exposure stays fixed
cap->set(CV_CAP_PROP_AUTO_EXPOSURE, -1); // not working, exposure stays fixed
Any ideas?
It depends on the capture api you are using. If you are using CAP_V4L2, then automatic exposure is set to 'on' with value 3 and 'off' with value 1.
All settable values can be found by typing v4l2-ctl -l in terminal.
I think for OpenCV < 4.0 the default API is CAP_GSTREAMER, and automatic exposure is set to 'on' with value 0.75 and 'off' with value 0.25.
Try cap->set(CV_CAP_PROP_AUTO_EXPOSURE, X);
where X is a camera-dependent value such as 0.25 or 0.75.
For a similar issue see the discussion:
https://github.com/opencv/opencv/issues/9738
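Since the question uses the C++ API, here is a minimal sketch that tries both conventions and verifies the result with get(); which values are actually honored depends on the backend and driver:

#include <opencv2/opencv.hpp>

// Returns true if the backend accepted one of the two known conventions.
bool setAutoExposure(cv::VideoCapture& cap, bool enable)
{
    const double v4l2Value = enable ? 3.0  : 1.0;   // V4L2-style values
    const double normValue = enable ? 0.75 : 0.25;  // normalized values used by some backends

    cap.set(cv::CAP_PROP_AUTO_EXPOSURE, v4l2Value);
    if (cap.get(cv::CAP_PROP_AUTO_EXPOSURE) == v4l2Value)
        return true;

    return cap.set(cv::CAP_PROP_AUTO_EXPOSURE, normValue);
}

// Usage: setAutoExposure(*cap, false); cap->set(cv::CAP_PROP_EXPOSURE, exposure);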
I think I finally found a solution, at least for my problem:
capture = cv2.VideoCapture(id)
capture.set(cv2.CAP_PROP_AUTO_EXPOSURE, 3) # auto mode
capture.set(cv2.CAP_PROP_AUTO_EXPOSURE, 1) # manual mode
capture.set(cv2.CAP_PROP_EXPOSURE, desired_exposure_value)
I have to first set the auto exposure to 3 (auto mode), then set it to 1 (manual mode), and only then can I set the manual exposure.
You can change the settings from the shell too.
List the available options:
video_id=1
v4l2-ctl --device /dev/video$video_id -l
Set them from Python:
import os

def set_manual_exposure(dev_video_id, exposure_time):
    commands = [
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_auto=3",
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_auto=1",
        "v4l2-ctl --device /dev/video" + str(dev_video_id) + " -c exposure_absolute=" + str(exposure_time)
    ]
    for c in commands:
        os.system(c)

# usage
set_manual_exposure(1, 18)

Vulkan: vkGetPhysicalDeviceSurfaceFormatsKHR No Formats available?

I'm in the process of doing ogldev's Vulkan tutorials and I've run into a problem specifically with the vkGetPhysicalDeviceSurfaceFormatsKHR function. The documentation says that if the pSurfaceFormats argument is NULL, the number of available surface formats is written to the pSurfaceFormatCount pointer.
Here's where my problem comes in: the call doesn't touch that integer at all.
uint NumFormats = 0;
res = vkGetPhysicalDeviceSurfaceFormatsKHR(PhysDev, Surface, &NumFormats, NULL);
if (res != VK_SUCCESS) {
    LIFE_ERROR("vkGetPhysicalDeviceSurfaceFormatsKHR error: %d\n", res);
    assert(0);
}
assert(NumFormats > 0);
(The assert(NumFormats > 0) fails.) I am running Linux with NVIDIA drivers, and I am pretty sure the Vulkan API can see my GPU properly, since my output is this:
Found 6 extensions
Instance extension 0 - VK_KHR_surface
Instance extension 1 - VK_KHR_xcb_surface
Instance extension 2 - VK_KHR_xlib_surface
Instance extension 3 - VK_KHR_wayland_surface
Instance extension 4 - VK_EXT_debug_report
Instance extension 5 - VK_NV_external_memory_capabilities
Surface created
Num physical devices 1
Device name: GTX 980 Ti
API version: 1.0.24
Num of family queues: 2
... (assert fails)
Problem solved. I looked at this answer and figured out that I had forgotten to initialize my xcb window before I tried to check which surface formats and capabilities were available.
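For reference, a minimal sketch of the usual two-call pattern, assuming the xcb window and the VkSurfaceKHR are created first (the step that was missing):

#include <vector>
#include <vulkan/vulkan.h>

std::vector<VkSurfaceFormatKHR> getSurfaceFormats(VkPhysicalDevice physDev, VkSurfaceKHR surface)
{
    uint32_t count = 0;
    // First call: pSurfaceFormats == NULL, only the count is written.
    vkGetPhysicalDeviceSurfaceFormatsKHR(physDev, surface, &count, nullptr);

    std::vector<VkSurfaceFormatKHR> formats(count);
    // Second call: fill the array (both calls also return a VkResult worth checking).
    vkGetPhysicalDeviceSurfaceFormatsKHR(physDev, surface, &count, formats.data());
    return formats;
}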

OpenCL-OpenGL interop stopped working after X upgrade

The following code used to work:
cl_context_properties Properties[] = {
    CL_GL_CONTEXT_KHR,   (cl_context_properties) glXGetCurrentContext(),
    CL_GLX_DISPLAY_KHR,  (cl_context_properties) glXGetCurrentDisplay(),
    CL_CONTEXT_PLATFORM, (cl_context_properties) CL.Platform,
    0
};
CL.Context = clCreateContext(Properties, 1, CL.Device, 0, 0, &err);
if (err < 0) printf("Context error %i!\n", err);
but now prints
Context error -1000!
If I comment out
//CL_GL_CONTEXT_KHR, (cl_context_properties) glXGetCurrentContext(),
//CL_GLX_DISPLAY_KHR, (cl_context_properties) glXGetCurrentDisplay(),
then it works fine. So, it would seem the issue is with the glX calls.
Now, what has changed is that I have upgraded X on my machine. I run AMD Catalyst, and this upgrade resulted in the loss of my display; after purging and reinstalling fglrx I regained my display, but I suspect something got broken in the process. As an aside, I used to play Zandronum on this machine, but since the upgrade any attempt to play yields the following error:
zandronum: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.0.
I don't think this is a coincidence.
However, I'm not sure how to proceed in debugging. I can print the results of the glX calls in gdb:
(gdb) p Properties
$1 = {8200, 8519632, 8202, 6308672, 4228, 140737247522016, 0}
but I don't know how to verify any of it, or get further information on the values these calls are returning. What steps can I take to get to the root of the problem? Am I even looking in the right place?
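One place to start: error -1000 is CL_INVALID_GL_SHAREGROUP_REFERENCE_KHR from cl_gl.h, i.e. the driver rejects the GL context/display handles for sharing. Below is a rough sketch of two quick checks, assuming an OpenCL 1.2 platform and the same Properties array as above (verify the names against your headers):

#include <cstdio>
#include <CL/cl.h>
#include <CL/cl_gl.h>
#include <GL/glx.h>

void checkGLSharing(cl_platform_id platform, cl_context_properties* properties)
{
    // 1. Is a GL context actually current on this thread?
    if (glXGetCurrentContext() == nullptr || glXGetCurrentDisplay() == nullptr) {
        std::printf("No current GLX context/display on this thread\n");
        return;
    }

    // 2. Which CL device does the driver associate with the current GL context?
    auto getGLContextInfo = (clGetGLContextInfoKHR_fn)
        clGetExtensionFunctionAddressForPlatform(platform, "clGetGLContextInfoKHR");
    if (!getGLContextInfo) {
        std::printf("clGetGLContextInfoKHR not exposed by this platform\n");
        return;
    }

    cl_device_id device = nullptr;
    cl_int err = getGLContextInfo(properties, CL_CURRENT_DEVICE_FOR_GL_CONTEXT_KHR,
                                  sizeof(device), &device, nullptr);
    std::printf("clGetGLContextInfoKHR: err = %d, device = %p\n", err, (void*) device);
    // If this fails or returns no device, the GL stack and the OpenCL ICD are
    // probably mismatched after the fglrx reinstall (e.g. Mesa GL with AMD CL).
}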