CUDA + OpenGL: unknown code=4 (cudaErrorLaunchFailure) error

I am doing a simple n-body simulation in CUDA, which I am then trying to visualize with OpenGL.
After I have initialized my particle data on the CPU, allocated the respective memory and transferred that data to the GPU, the program has to enter the following cycle:
1) Compute the forces on each particle (CUDA part)
2) update particle positions (CUDA part)
3) display the particles for this time step (OpenGL part)
4) go back to 1)
I implement the interface between CUDA and OpenGL with the following declarations:
GLuint dataBufferID;
particle_t* Particles_d;
particle_t* Particles_h;
cudaGraphicsResource *resources[1];
I allocate space in OpenGL's array buffer and register the latter as a cudaGraphicsResource using the following code:
void createVBO()
{
// create buffer object
glGenBuffers(1, &dataBufferID);
glBindBuffer(GL_ARRAY_BUFFER, dataBufferID);
glBufferData(GL_ARRAY_BUFFER, bufferStride*N*sizeof(float), 0, GL_DYNAMIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
checkCudaErrors(cudaGraphicsGLRegisterBuffer(resources, dataBufferID, cudaGraphicsMapFlagsNone));
}
Lastly, the program cycle that I described (steps 1 to 4) is realized by the following function update(int)
void update(int value)
{
// map OpenGL buffer object for writing from CUDA
float* dataPtr;
checkCudaErrors(cudaGraphicsMapResources(1, resources, 0));
size_t num_bytes;
//get a pointer to that buffer object for manipulation with cuda!
checkCudaErrors(cudaGraphicsResourceGetMappedPointer((void **)&dataPtr, &num_bytes,resources[0]));
//fill the Graphics Resource with particle position Data!
launch_kernel<<<NUM_BLOCKS,NUM_THREADS>>>(Particles_d,dataPtr,1);
// unmap buffer object
checkCudaErrors(cudaGraphicsUnmapResources(1, resources, 0));
glutPostRedisplay();
glutTimerFunc(milisec,update,0);
}
I compile and I get the following errors:
CUDA error at src/main.cu:390 code=4(cudaErrorLaunchFailure) "cudaGraphicsMapResources(1, resources, 0)"
CUDA error at src/main.cu:392 code=4(cudaErrorLaunchFailure) "cudaGraphicsResourceGetMappedPointer((void **)&dataPtr, &num_bytes,resources[0])"
CUDA error at src/main.cu:397 code=4(cudaErrorLaunchFailure) "cudaGraphicsUnmapResources(1, resources, 0)"
Does anyone know what might be causing these errors? Am I supposed to recreate the data buffer using createVBO() every time prior to the execution of update(int)?
p.s. Just for more clarity, my kernel function is the following:
__global__ void launch_kernel(particle_t* Particles, float* data, int KernelMode){
    int i = blockIdx.x*THREADS_PER_BLOCK + threadIdx.x;
    if(KernelMode == 1){
        //N_d is allocated on device memory
        if(i > N_d)
            return;
        //and update dataBuffer!
        updateX(Particles+i);
        for(int d=0; d<DIM_d; d++){
            data[i*bufferStride_d+d] = Particles[i].p[d]; // update the new coordinate positions in the data buffer!
        }
        // fill in also the RGB data and the radius. In general THIS IS NOT NECESSARY!! NEED TO PERFORM ONCE! REFACTOR!!!
        data[i*bufferStride_d+DIM_d]   = Particles[i].r;
        data[i*bufferStride_d+DIM_d+1] = Particles[i].g;
        data[i*bufferStride_d+DIM_d+2] = Particles[i].b;
        data[i*bufferStride_d+DIM_d+3] = Particles[i].radius;
    } else {
        // if KernelMode = 2 then Update Y
        float* Fold = new float[DIM_d];
        for(int d=0; d<DIM_d; d++)
            Fold[d] = Particles[i].force[d];
        //of course in parallel :)
        computeForces(Particles, i);
        updateV(Particles+i, Fold);
        delete [] Fold;
    }
    // in either case wait for all threads to finish!
    __syncthreads();
}

As I mentioned in one of the comments above, it turned out that I had specified the wrong compute-capability compiler option. I ran cuda-memcheck and saw that the CUDA API launch was failing. After I found the right compiler options, everything worked like a charm.
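For what it's worth, a launch failure like this is only reported by the next CUDA API call (here the interop calls), so checking directly at the launch site narrows it down. A minimal sketch, assuming the same checkCudaErrors() helper used above:
// Hedged sketch: surface kernel launch/execution errors at the launch site
// instead of at the next cudaGraphics* call.
launch_kernel<<<NUM_BLOCKS, NUM_THREADS>>>(Particles_d, dataPtr, 1);
checkCudaErrors(cudaGetLastError());       // catches a bad launch configuration or a kernel built for the wrong -arch
checkCudaErrors(cudaDeviceSynchronize());  // catches errors that occur while the kernel is running
Together with building for the GPU's actual compute capability (e.g. an appropriate -gencode arch=compute_XX,code=sm_XX pair), this usually makes the real failure point obvious; cuda-memcheck then confirms whether the kernel itself is at fault.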

Related

OpenGL, glMapNamedBuffer takes a long time

I've been writing an OpenGL program that generates vertices on the GPU using compute shaders. The problem is that I need to read back, on the CPU, the number of vertices written by one compute shader dispatch, so that I can allocate a buffer of the right size for the next compute shader dispatch to fill with vertices.
/*
* Stage 1- Populate the 3d texture with voxel values
*/
_EvaluateVoxels.Use();
glActiveTexture(GL_TEXTURE0);
GLPrintErrors("glActiveTexture(GL_TEXTURE0);");
glBindTexture(GL_TEXTURE_3D, _RandomSeedTexture);
glBindImageTexture(2, _VoxelValuesTexture, 0, GL_TRUE, NULL, GL_READ_WRITE, GL_R32F);
_EvaluateVoxels.SetVec3("CellSize", voxelCubeDims);
SetMetaBalls(metaballs);
_EvaluateVoxels.SetVec3("StartPos", chunkPosLL);
glDispatchCompute(voxelDim.x + 1, voxelDim.y + 1, voxelDim.z + 1);
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT);
/*
* Stage 2 - Calculate the marching cube's case for each cube of 8 voxels,
* listing those that contain polygons and counting the no of vertices that will be produced
*/
_GetNonEmptyVoxels.Use();
_GetNonEmptyVoxels.SetFloat("IsoLevel", isoValue);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 3, _IntermediateDataSSBO);
glBindBufferBase(GL_ATOMIC_COUNTER_BUFFER, 0, _AtomicCountersBuffer);
glDispatchCompute(voxelDim.x, voxelDim.y, voxelDim.z);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT | GL_ATOMIC_COUNTER_BARRIER_BIT);
//printStage2(_IntermediateDataSSBO, true);
_StopWatch.StopTimer("stage2");
_StopWatch.StartTimer("getvertexcounter");
// this line takes a long time
unsigned int* vals = (unsigned int*)glMapNamedBuffer(_AtomicCountersBuffer, GL_READ_WRITE);
unsigned int vertex_counter = vals[1];
unsigned int index_counter = vals[0];
vals[0] = 0;
vals[1] = 0;
glUnmapNamedBuffer(_AtomicCountersBuffer);
The image below shows the time in milliseconds that each stage of the code takes to run; "timer Evaluate" refers to the method as a whole, i.e. the sum total of the previous stages. "getvertexcounter" refers to only the mapping, reading and unmapping of the buffer containing the number of vertices. Please see the code for more detail.
I've found this to be by far the slowest stage in the process, and I gather it has something to do with the asynchronous nature of the communication between OpenGL and the GPU and the need to synchronize data that was written by the compute shader so it can be read by the CPU. My question is this: is this delay avoidable? I don't think the overall approach is flawed, because I know that someone else has implemented the algorithm in a similar way, albeit using DirectX (I think).
You can find my code at https://github.com/JimMarshall35/Marching-cubes-cpp/tree/main/MarchingCubes , the code in question is in the file ComputeShaderMarcher.cpp and the method unsigned int ComputeShaderMarcher::GenerateMesh(const glm::vec3& chunkPosLL, const glm::vec3& chunkDim, const glm::ivec3& voxelDim, float isoValue, GLuint VBO)
In order to access data from a buffer that you have had OpenGL write some data to, the CPU must halt execution until the GPU has actually written that data. Whatever process you use to access this data (glMapBufferRange, glGetBufferSubData, etc), that process must halt until the GPU has finished generating the data.
So don't try to access GPU-generated data until you're sure the GPU has actually generated it (or you have absolutely nothing better to do on the CPU than wait). Use fence sync objects to test whether the GPU has finished executing past a certain point.
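A minimal sketch of that pattern, assuming a context where sync objects are available (GL 3.2+) and using the buffer name from the question:
// Hedged sketch: fence the dispatch that writes the counters, do other CPU work,
// and only map the buffer once the GPU has signalled the fence.
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
GLenum state = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0); // flush once, then just poll
while (state == GL_TIMEOUT_EXPIRED)
{
    // ... do other useful CPU work here instead of stalling ...
    state = glClientWaitSync(fence, 0, 0);
}
glDeleteSync(fence);
// The GPU has finished, so this map should no longer block on the compute work.
unsigned int* vals = (unsigned int*)glMapNamedBuffer(_AtomicCountersBuffer, GL_READ_WRITE);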

Write code which does not cause many page faults

I have code for rendering an OpenGL scene. This code causes many page faults when the program is started without Visual Studio. The code shown in paintGL() is only a fraction of what happens there, but it takes the most time.
Example code:
void prepareData() {
    std::vector<int> m_indices;    // vector of point indices, that should be connected
    std::vector<float> m_vertices; // vector of the 3d points
    /*
        fill the vectors
    */
}
void MyGLWidget::paintGL() {
    unsigned int startStrip = 0, endStrip = 0; // strip boundaries within m_indices
    glBegin(GL_TRIANGLE_STRIP);
    for (unsigned int i=0; i < m_indices.size(); i++)
    {
        // search end of strip
        if (m_indices[i] < 0)
        {
            // store end of strip
            endStrip = i;
            // we need at least three vertices for a triangle strip
            if (startStrip+2 < endStrip)
            {
                // draw strip
                for (unsigned int j=startStrip; j<endStrip; j++) {
                    int idx = 3 * m_indices[j];
                    glVertex3fv(&m_vertices[idx]);
                }
            }
            // store start of next strip
            startStrip = i+1;
        }
    }
    glEnd();
}
So here is the problem: when the data changes and gets calculated, the next call of paintGL() is very slow, because accessing the new values causes a lot of page faults.
When the data does not change, paintGL() is as fast as it should be.
Both data vectors can get really big; normally we have sizes like 10 million indices and 15 million vertices.
My question is: how can I make paintGL() faster when the values to display have been freshly calculated?
When the application is started from Visual Studio (both are Release builds), there aren't that many page faults and it is faster than normal. How does Visual Studio achieve that, and can I do this too, without Visual Studio monitoring my application?
The problem was already described here, but now I have found the root cause of the problem: Release Build is faster, when started from Visual Studio than started “normally”
Additional information: C++/opengl application running smoother with debugger attached
The increased page fault load is just a secondary symptom of the really poor rendering loop. Modern GPUs operate on (large-ish) buffers of vertex and index data. When using glBegin…glEnd immediate mode, the driver is forced to create such buffers in situ. To speed things up there are a lot of heuristics, including the driver marking pages so that it gets notified if their contents change, so that buffers are recreated only when needed.
Rewrite it to use indexed triangles in a vertex array; this is the mode GPUs and OpenGL drivers work best with.
Even a client-side vertex array will massively speed things up, since the driver can then coalesce the buffer copy. Of course the best thing would be if you could just place m_vertices in a Vertex Buffer Object.
#include <vector>
#include <utility>
// overloaded wrappers to deduce the
// GL type from the type of the index buffer vector
namespace gl_wrap {
    void DrawElements(
        GLenum mode, GLsizei count,
        std::vector<GLubyte> const &idx_buffer,
        size_t offset )
    {
        glDrawElements(mode, count, GL_UNSIGNED_BYTE, idx_buffer.data()+offset);
    }
    void DrawElements(
        GLenum mode, GLsizei count,
        std::vector<GLushort> const &idx_buffer,
        size_t offset )
    {
        glDrawElements(mode, count, GL_UNSIGNED_SHORT, idx_buffer.data()+offset);
    }
    void DrawElements(
        GLenum mode, GLsizei count,
        std::vector<GLuint> const &idx_buffer,
        size_t offset )
    {
        glDrawElements(mode, count, GL_UNSIGNED_INT, idx_buffer.data()+offset);
    }
    // m_indices is a std::vector<int> (negative values mark the end of a strip),
    // so provide an overload for GLint as well; valid indices are non-negative,
    // hence passing them as GL_UNSIGNED_INT is safe.
    void DrawElements(
        GLenum mode, GLsizei count,
        std::vector<GLint> const &idx_buffer,
        size_t offset )
    {
        glDrawElements(mode, count, GL_UNSIGNED_INT, idx_buffer.data()+offset);
    }
}
void MyGLWidget::paintGL() {
    // collect the (start, length) pairs of the triangle strips;
    // negative entries in m_indices mark the end of each strip
    std::vector<std::pair<size_t,size_t>> strips;
    size_t i_prev = 0, i = 0;
    for( auto idx : m_indices ){
        ++i;
        if( idx < 0 ){
            strips.push_back(std::make_pair(i_prev, i-1-i_prev));
            i_prev = i;
        }
    }
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, m_vertices.data());
    for( auto const &s : strips ){
        gl_wrap::DrawElements(GL_TRIANGLE_STRIP, (GLsizei)s.second, m_indices, s.first);
    }
    glDisableClientState(GL_VERTEX_ARRAY);
}
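And if you can take the last step the answer mentions, here is a minimal sketch of moving the vertex data into a Vertex Buffer Object (m_vbo is a hypothetical member added for illustration; upload once whenever m_vertices has been recalculated):
// Hedged sketch: upload the freshly calculated vertices once, instead of letting
// the driver copy them out of client memory on every draw.
glGenBuffers(1, &m_vbo);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glBufferData(GL_ARRAY_BUFFER, m_vertices.size() * sizeof(float), m_vertices.data(), GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, 0);
// In paintGL(), source the vertex array from the VBO instead of a client pointer:
glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (const void*)0); // the data now lives on the GPU
// ... the gl_wrap::DrawElements loop from above stays the same ...
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);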

Can I call `glDrawArrays` multiple times while updating the same `GL_ARRAY_BUFFER`?

In a single frame, is it "allowed" to update the same GL_ARRAY_BUFFER continuously and keep calling glDrawArrays after each update?
I know this is probably not the best and not the most recommended way to do it, but my question is: can I do this and expect the GL_ARRAY_BUFFER to be updated before every call to glDrawArrays?
Code example would look like this:
// setup a single buffer and bind it
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
while (!renderStack.empty())
{
    SomeObjectClass * my_object = renderStack.back();
    renderStack.pop_back();
    // calculate the current buffer size for data to be drawn in this iteration
    SomeDataArrays * subArrays = my_object->arrayData();
    unsigned int totalBufferSize = subArrays->bufferSize();
    unsigned int vertCount = my_object->vertexCount();
    // initialise the buffer to the desired size and content
    glBufferData(GL_ARRAY_BUFFER, totalBufferSize, NULL, GL_STREAM_DRAW);
    // actually transfer some data to the GPU through glBufferSubData
    for (int j = 0; j < subArrays->size(); ++j)
    {
        unsigned int subBufferOffset = subArrays->get(j)->bufferOffset();
        unsigned int subBufferSize = subArrays->get(j)->bufferSize();
        void * subBufferData = subArrays->get(j)->bufferData();
        glBufferSubData(GL_ARRAY_BUFFER, subBufferOffset, subBufferSize, subBufferData);
        unsigned int subAttributeLocation = subArrays->get(j)->attributeLocation();
        // set some vertex attribute pointers
        glVertexAttribPointer(subAttributeLocation, ...);
        glEnableVertexAttribArray(subAttributeLocation);
    }
    glDrawArrays(GL_POINTS, 0, (GLsizei)vertCount);
}
You may ask why I would want to do that and not just preload everything onto the GPU at once ... well, the obvious answer is because I can't do that when there is too much data to fit into a single buffer.
My problem is that I can only see the result of one of the glDrawArrays calls (I believe the first one). In other words, it appears as if the GL_ARRAY_BUFFER is not updated before each glDrawArrays call, which brings me back to my question of whether this is even possible.
I am using an OpenGL 3.2 Core Profile (under OS X) and link with GLEW for OpenGL setup, as well as Qt 5 for the window creation.
Yes, this is legal OpenGL code. It is in no way something that anyone should ever actually do. But it is legal. Indeed, it makes even less sense in your case, because you're calling glVertexAttribPointer for every object.
If you can't fit all your vertex data into memory, or need to generate it on the GPU, then you should stream the data with proper buffer streaming techniques.
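For example, one streaming technique that works on GL 3.2 is cycling through a small pool of buffers, so a new upload never has to wait for a draw that is still reading an earlier one. A minimal sketch (kPoolSize, vbos, batchIndex and batchData are illustrative names, not from the question; the attribute format is a placeholder):
// Hedged sketch: round-robin buffer pool plus orphaning, so each iteration
// writes into storage the GPU is not currently reading from.
const int kPoolSize = 3;
GLuint vbos[kPoolSize];
glGenBuffers(kPoolSize, vbos);
int batchIndex = 0;
// per iteration of the render loop:
GLuint vbo = vbos[batchIndex++ % kPoolSize];
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, totalBufferSize, NULL, GL_STREAM_DRAW); // orphan any storage still in flight
glBufferSubData(GL_ARRAY_BUFFER, 0, totalBufferSize, batchData);      // upload this iteration's vertices
glVertexAttribPointer(subAttributeLocation, 4, GL_FLOAT, GL_FALSE, 0, (const void*)0); // placeholder format
glEnableVertexAttribArray(subAttributeLocation);
glDrawArrays(GL_POINTS, 0, (GLsizei)vertCount);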

DirectX using multiple Render Targets as input to each other

I have a fairly simple DirectX 11 framework setup that I want to use for various 2D simulations. I am currently trying to implement the 2D Wave Equation on the GPU. It requires I keep the grid state of the simulation at 2 previous timesteps in order to compute the new one.
How I went about it was this - I have a class called FrameBuffer, which has the following public methods:
bool Initialize(D3DGraphicsObject* graphicsObject, int width, int height);
void BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const;
void EndRender() const;
// Return a pointer to the underlying texture resource
const ID3D11ShaderResourceView* GetTextureResource() const;
In my main draw loop I have an array of 3 of these buffers. Every loop I use the textures from the previous 2 buffers as inputs to the next frame buffer and I also draw any user input to change the simulation state. I then draw the result.
int nextStep = simStep+1;
if (nextStep > 2)
    nextStep = 0;
mFrameArray[nextStep]->BeginRender(0.0f,0.0f,0.0f,1.0f);
{
    mGraphicsObj->SetZBufferState(false);
    mQuad->GetRenderer()->RenderBuffers(d3dGraphicsObj->GetDeviceContext());
    ID3D11ShaderResourceView* texArray[2] = { mFrameArray[simStep]->GetTextureResource(),
                                              mFrameArray[prevStep]->GetTextureResource() };
    result = mWaveShader->Render(d3dGraphicsObj, mQuad->GetRenderer()->GetIndexCount(), texArray);
    if (!result)
        return false;
    // perform any extra input
    I_InputSystem *inputSystem = ServiceProvider::Instance().GetInputSystem();
    if (inputSystem->IsMouseLeftDown()) {
        int x,y;
        inputSystem->GetMousePos(x,y);
        int width,height;
        mGraphicsObj->GetScreenDimensions(width,height);
        float xPos = MapValue((float)x,0.0f,(float)width,-1.0f,1.0f);
        float yPos = MapValue((float)y,0.0f,(float)height,-1.0f,1.0f);
        mColorQuad->mTransform.position = Vector3f(xPos,-yPos,0);
        result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
        if (!result)
            return false;
    }
    mGraphicsObj->SetZBufferState(true);
}
mFrameArray[nextStep]->EndRender();
prevStep = simStep;
simStep = nextStep;
ID3D11ShaderResourceView* currTexture = mFrameArray[nextStep]->GetTextureResource();
// Render texture to screen
mGraphicsObj->SetZBufferState(false);
mQuad->SetTexture(currTexture);
result = mQuad->Render(&viewMatrix,&orthoMatrix);
if (!result)
    return false;
mGraphicsObj->SetZBufferState(true);
The problem is that nothing is happening. Whatever I draw appears on the screen (I draw using a small quad), but no part of the simulation is actually run. I can provide the shader code if required, but I am certain it works, since I've implemented this before on the CPU using the same algorithm. I'm just not certain how well D3D render targets work and whether I'm just drawing wrong every frame.
EDIT 1:
Here is the code for the begin and end render functions of the frame buffers:
void D3DFrameBuffer::BeginRender(float clearRed, float clearGreen, float clearBlue, float clearAlpha) const {
    ID3D11DeviceContext *context = pD3dGraphicsObject->GetDeviceContext();
    context->OMSetRenderTargets(1, &(mRenderTargetView._Myptr), pD3dGraphicsObject->GetDepthStencilView());
    float color[4];
    // Setup the color to clear the buffer to.
    color[0] = clearRed;
    color[1] = clearGreen;
    color[2] = clearBlue;
    color[3] = clearAlpha;
    // Clear the back buffer.
    context->ClearRenderTargetView(mRenderTargetView.get(), color);
    // Clear the depth buffer.
    context->ClearDepthStencilView(pD3dGraphicsObject->GetDepthStencilView(), D3D11_CLEAR_DEPTH, 1.0f, 0);
}

void D3DFrameBuffer::EndRender() const {
    pD3dGraphicsObject->SetBackBufferRenderTarget();
}
Edit 2: OK, after I set up the DirectX debug layer I saw that I was using an SRV as a render target while it was still bound to the pixel stage in one of the shaders. I fixed that by setting the shader resources to NULL after I render with the wave shader, but the problem still persists - nothing actually gets run or updated. I took the render target code from here and slightly modified it, if it's any help: http://rastertek.com/dx11tut22.html
Okay, as I understand it, you need multipass rendering to texture.
Basically you do it like I've described here: link
You create your textures with both the D3D11_BIND_SHADER_RESOURCE and D3D11_BIND_RENDER_TARGET bind flags.
You create render targets from those textures.
You set the first texture as input (*SetShaderResources()) and the second texture as output (OMSetRenderTargets()).
You Draw().
Then you bind the second texture as input and the third as output.
Draw().
etc.
Additional advice:
If your target GPU is capable of writing to UAVs from non-compute shaders, you can use that. It is much simpler and less error prone.
If your target GPU is suitable, consider using a compute shader. It is a pleasure.
Don't forget to enable the DirectX debug layer. Sometimes we make obvious errors and the debug output can point at them (see the sketch after this list).
Use a graphics debugger to review your textures after each draw call.
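For reference on enabling the debug layer mentioned above: it is turned on when the device is created. A minimal sketch (error handling omitted; this assumes the device is created directly with D3D11CreateDevice rather than through a helper framework):
#include <d3d11.h>
// Hedged sketch: create the device with the debug layer enabled so that binding
// hazards (e.g. an SRV still bound while set as a render target) are reported.
ID3D11Device* device = nullptr;
ID3D11DeviceContext* context = nullptr;
D3D_FEATURE_LEVEL featureLevel;
D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                  D3D11_CREATE_DEVICE_DEBUG,   // the debug layer flag
                  nullptr, 0, D3D11_SDK_VERSION,
                  &device, &featureLevel, &context);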
Edit 1:
As far as I can see, you call BeginRender and OMSetRenderTargets only once, so all rendering goes into mRenderTargetView. What you need is to interleave:
SetSRV(texture1);
SetRT(texture2);
Draw();
SetSRV(texture2);
SetRT(texture3);
Draw();
SetSRV(texture3);
SetRT(backBuffer);
Draw();
Also, we don't know what mRenderTargetView is yet.
So, before
result = mColorQuad->Render(&viewMatrix,&orthoMatrix);
there must be an OMSetRenderTargets call somewhere.
It is probably better to review your Begin()/End() design to make the resource binding more clearly visible.
Happy coding! =)
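To make the interleaving concrete, here is a minimal sketch of one ping-pong step in D3D11. The function and parameter names are illustrative stand-ins for the question's FrameBuffer resources, not an existing API:
#include <d3d11.h>
// Hedged sketch: one simulation step reads the two previous states and writes the next one.
void WaveStep(ID3D11DeviceContext* context,
              ID3D11ShaderResourceView* srvStep,      // state at time t
              ID3D11ShaderResourceView* srvPrevStep,  // state at time t-1
              ID3D11RenderTargetView*   rtvNextStep)  // state at time t+1 (output)
{
    ID3D11ShaderResourceView* inputs[2]   = { srvStep, srvPrevStep };
    ID3D11ShaderResourceView* nullSRVs[2] = { nullptr, nullptr };
    context->OMSetRenderTargets(1, &rtvNextStep, nullptr);  // bind the output first
    context->PSSetShaderResources(0, 2, inputs);            // then the two previous states as inputs
    // ... bind the wave shader, its constant buffers and the quad geometry, then draw ...
    context->Draw(4, 0);  // or DrawIndexed(...), matching how the quad is set up
    // Unbind the inputs so one of them can safely become the render target of the next step.
    context->PSSetShaderResources(0, 2, nullSRVs);
}
On the next step the roles rotate: what was the output becomes srvStep, the old srvStep becomes srvPrevStep, and the oldest buffer is reused as the new render target.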

Using vertex buffers in jogl, crash when too many triangles

I have written a simple application in Java using JOGL which draws 3D geometry. The camera can be rotated by dragging the mouse. The application works fine, but drawing the geometry with glBegin(GL_TRIANGLES) ... calls is too slow.
So I started to use vertex buffers. This also works fine until the number of triangles gets larger than 1000000. If that happens, the display driver suddenly crashes and my monitor goes dark. Is there a limit on how many triangles fit in the buffer? I had hoped to get 1000000 triangles rendered at a reasonable frame rate.
I have no idea how to debug this problem. The nasty thing is that I have to reboot Windows after each launch, since I have no other way to get my display working again. Could anyone give me some advice?
The vertices, triangles and normals are stored in arrays float[][] m_vertices, int[][] m_triangles, float[][] m_triangleNormals.
I initialized the buffer with:
// generate a VBO pointer / handle
if (m_vboHandle <= 0) {
    int[] vboHandle = new int[1];
    m_gl.glGenBuffers(1, vboHandle, 0);
    m_vboHandle = vboHandle[0];
}
// interleave vertex / normal data
FloatBuffer data = Buffers.newDirectFloatBuffer(m_triangles.length * 3*3*2);
for (int t=0; t<m_triangles.length; t++)
    for (int j=0; j<3; j++) {
        int v = m_triangles[t][j];
        data.put(m_vertices[v]);
        data.put(m_triangleNormals[t]);
    }
data.rewind();
// transfer data to VBO
int numBytes = data.capacity() * 4;
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
m_gl.glBufferData(GL.GL_ARRAY_BUFFER, numBytes, data, GL.GL_STATIC_DRAW);
m_gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
Then, the scene gets rendered with:
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, m_vboHandle);
gl.glEnableClientState(GL2.GL_VERTEX_ARRAY);
gl.glEnableClientState(GL2.GL_NORMAL_ARRAY);
gl.glVertexPointer(3, GL.GL_FLOAT, 6*4, 0);
gl.glNormalPointer(GL.GL_FLOAT, 6*4, 3*4);
gl.glDrawArrays(GL.GL_TRIANGLES, 0, 3*m_triangles.length);
gl.glDisableClientState(GL2.GL_VERTEX_ARRAY);
gl.glDisableClientState(GL2.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, 0);
Try checking glGetError() after calling glBufferData (glBufferData itself does not return a value). It will report GL_OUT_OF_MEMORY if the implementation cannot allocate numBytes.