I've been using Go's go-gl package for quite a while now. Everything was working fine until I did some refactoring, and now I'm getting the strangest error:
fatal error: unexpected signal during runtime execution
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x0]
runtime stack:
runtime.throw(0x65d0fe, 0x2a)
/usr/lib/go/src/runtime/panic.go:596 +0x95
runtime.sigpanic()
/usr/lib/go/src/runtime/signal_unix.go:274 +0x2db
runtime.asmcgocall(0x8, 0x97ed40)
/usr/lib/go/src/runtime/asm_amd64.s:633 +0x70
goroutine 1 [syscall, locked to thread]:
runtime.cgocall(0x5b8ad0, 0xc420049c00, 0xc4200001a0)
/usr/lib/go/src/runtime/cgocall.go:131 +0xe2 fp=0xc420049bc0 sp=0xc420049b80
github.com/go-gl/gl/v4.5-core/gl._Cfunc_glowGenVertexArrays(0x0, 0xc400000001, 0xc42006c7d8)
github.com/go-gl/gl/v4.5-core/gl/_obj/_cgo_gotypes.go:4805 +0x45 fp=0xc420049c00 sp=0xc420049bc0
github.com/go-gl/gl/v4.5-core/gl.GenVertexArrays(0x1, 0xc42006c7d8)
...
runtime.main()
/usr/lib/go/src/runtime/proc.go:185 +0x20a fp=0xc420049fe0 sp=0xc420049f88
runtime.goexit()
/usr/lib/go/src/runtime/asm_amd64.s:2197 +0x1 fp=0xc420049fe8 sp=0xc420049fe0
goroutine 17 [syscall, locked to thread]:
runtime.goexit()
/usr/lib/go/src/runtime/asm_amd64.s:2197 +0x1
exit status 2
shell returned 1
I was wondering if anyone has a solution. I've updated my drivers, and an empty OpenGL scene (one that doesn't generate any vertex arrays) works without issue.
Here is my go env:
GOARCH="amd64"
GOBIN=""
GOEXE=""
GOHOSTARCH="amd64"
GOHOSTOS="linux"
GOOS="linux"
GOPATH="/home/<user>/Projects/<project>"
GORACE=""
GOROOT="/usr/lib/go"
GOTOOLDIR="/usr/lib/go/pkg/tool/linux_amd64"
GCCGO="gccgo"
CC="gcc"
GOGCCFLAGS="-fPIC -m64 -pthread -fmessage-length=0 -fdebug-prefix-map=/tmp/go-build983667275=/tmp/go-build -gno-record-gcc-switches"
CXX="g++"
CGO_ENABLED="1"
PKG_CONFIG="pkg-config"
CGO_CFLAGS="-g -O2"
CGO_CPPFLAGS=""
CGO_CXXFLAGS="-g -O2"
CGO_FFLAGS="-g -O2"
CGO_LDFLAGS="-g -O2"
The function making the call:
var vertexArrayID uint32
// ERROR ON LINE BELOW.
gl.GenVertexArrays(1, &vertexArrayID)
gl.BindVertexArray(vertexArrayID)
// Vertex buffer
var vertexBuffer uint32
gl.GenBuffers(1, &vertexBuffer)
gl.BindBuffer(gl.ARRAY_BUFFER, vertexBuffer)
gl.BufferData(gl.ARRAY_BUFFER, len(verticies)*4, gl.Ptr(verticies), gl.STATIC_DRAW)
Thank you
It turns out the OpenGL context was created after the function call instead of before it. Very strange that the empty scene still worked and it only crashed once I tried to generate vertex arrays and buffers.
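For anyone who hits the same crash (note the nil function pointer, pc=0x0, in the trace above): the fix is purely about ordering. Below is a minimal sketch of an initialization order that avoids it, assuming GLFW is used for the window and context; the version hints, window size and title are only illustrative.
package main

import (
	"runtime"

	"github.com/go-gl/gl/v4.5-core/gl"
	"github.com/go-gl/glfw/v3.2/glfw"
)

func init() {
	// GLFW and OpenGL calls must stay on the main OS thread.
	runtime.LockOSThread()
}

func main() {
	if err := glfw.Init(); err != nil {
		panic(err)
	}
	defer glfw.Terminate()

	glfw.WindowHint(glfw.ContextVersionMajor, 4)
	glfw.WindowHint(glfw.ContextVersionMinor, 5)
	glfw.WindowHint(glfw.OpenGLProfile, glfw.OpenGLCoreProfile)

	window, err := glfw.CreateWindow(800, 600, "demo", nil, nil)
	if err != nil {
		panic(err)
	}

	// Context first, then load the GL function pointers,
	// and only then issue calls such as GenVertexArrays.
	window.MakeContextCurrent()
	if err := gl.Init(); err != nil {
		panic(err)
	}

	var vao uint32
	gl.GenVertexArrays(1, &vao) // no longer a nil function pointer
	gl.BindVertexArray(vao)
}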
Related
I am trying to solve an onResume error in my game, created with Cocos2d-x. When I send the game to the background and come back to it, the game crashes. The logcat output is as follows.
11:49:10.997 E/cocos2d-x: cocos2d: warning, Director::setProjection() failed because size is 0
11:49:11.001 D/Cocos2dxActivity: onWindowFocusChanged() hasFocus=true
11:49:11.020 D/FA: Connected to remote service
11:49:11.020 V/FA: Processing queued up service tasks: 4
11:49:11.062 D/Cocos2dxActivity: onPause()
11:49:11.064 D/AudioFocusManager: abandonAudioFocus succeed!
11:49:11.077 D/Cocos2dxActivity: onWindowFocusChanged() hasFocus=false
11:49:11.080 V/FA: onActivityCreated
11:49:11.087 I/SDKBOX_CORE: Sdkbox Droid starting.
11:49:11.087 I/SDKBOX_CORE: Sdkbox got VM.
11:49:11.087 I/SDKBOX_CORE: Initialize is called more than once.
11:49:11.094 D/Cocos2dxActivity: model=MotoE2
11:49:11.094 D/Cocos2dxActivity: product=otus_reteu_ds
11:49:11.094 D/Cocos2dxActivity: isEmulator=false
11:49:11.099 D/AutoManageHelper: starting AutoManage for client 0 false null
11:49:11.103 D/AutoManageHelper: onStart true {0=com.google.android.gms.internal.zzbau$zza#2efa3200}
11:49:11.106 D/Cocos2dxActivity: onResume()
11:49:11.109 D/AudioFocusManager: requestAudioFocus succeed
11:49:11.187 D/Cocos2dxActivity: onWindowFocusChanged() hasFocus=true
11:49:11.208 V/FA: Screen exposed for less than 1000 ms. Event not sent. time: 360
11:49:11.214 V/FA: Activity paused, time: 2492571
11:49:11.215 V/FA: Activity resumed, time: 2492614
11:49:11.674 D/cocos2d-x: reload all texture
11:49:11.674 E/cocos2d-x: cocos2d: warning, Director::setProjection() failed because size is 0
11:49:11.706 D/cocos2d-x debug info: Cocos2d-x-lite v1.6.0
11:49:12.411 A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1aa in tid 16084 (GLThread 1061)
11:49:12.411 A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x0 in tid 16061 (GLThread 1048)
I always see these two error logs:
11:49:12.411 A/libc: Fatal signal 11 (SIGSEGV), code 1, fault addr 0x1aa in tid 16084 (GLThread 1061)
11:49:11.674 E/cocos2d-x: cocos2d: warning, Director::setProjection() failed because size is 0
I have already tried the solution from the question below, but it does not work:
cocos2d-x game crashes when entered background
My OpenCL program doesn't always finish before further host (C++) code is executed. The OpenCL code is only executed up to a certain point (which appears to be random). The code is shortened a bit, so there may be a few things missing.
cl::Program::Sources sources;
string code = ResourceLoader::loadFile(filename);
sources.push_back({ code.c_str(),code.length() });
program = cl::Program(OpenCL::context, sources);
if (program.build({ OpenCL::default_device }) != CL_SUCCESS)
{
exit(-1);
}
queue = CommandQueue(OpenCL::context, OpenCL::default_device);
kernel = Kernel(program, "main");
Buffer b(OpenCL::context, CL_MEM_READ_WRITE, size);
queue.enqueueWriteBuffer(b, CL_TRUE, 0, size, arg);
buffers.push_back(b);
kernel.setArg(0, this->buffers[0]);
vector<Event> wait{ Event() };
Version 1:
queue.enqueueNDRangeKernel(kernel, NDRange(), range, NullRange, NULL, &wait[0]);
Version 2:
queue.enqueueNDRangeKernel(kernel, NDRange(), range, NullRange, &wait, NULL);
Both versions are followed by:
wait[0].wait();
queue.finish();
Version 1 just does not wait for the OpenCL program. Version 2 crashes the program (at queue.enqueueNDRangeKernel):
Exception thrown at 0x51D99D09 (nvopencl.dll) in foo.exe: 0xC0000005: Access violation reading location 0x0000002C.
How would one make the host wait for the GPU to finish here?
EDIT: queue.enqueueNDRangeKernel returns -1000, while it returns 0 for a rather small kernel.
Version 1 says to signal wait[0] when the kernel is finished - which is the right thing to do.
Version 2 is asking your clEnqueueNDRangeKernel() to wait for the events in wait before it starts that kernel [which clearly won't work].
On its own, queue.finish() [or clFinish()] should be enough to ensure that your kernel has completed.
Since you haven't called clCreateUserEvent, and you haven't passed the event into anything else that initializes it, the second variant doesn't work.
It is rather bad that it crashes [it should return "invalid event" or some such - but presumably the driver you are using doesn't have a way to check that the event hasn't been initialized]. I'm reasonably sure the driver I work with will issue an error for this case - but I try to avoid getting it wrong...
I have no idea where -1000 comes from - it is neither a valid error code, nor a reasonable return value from the CL C++ wrappers. Whether the kernel is small or large [and/or completes in short or long time] shouldn't affect the return value from the enqueue, since all that SHOULD do is to enqueue the work [with no guarantee that it starts until a queue.flush() or clFlush is performed]. Waiting for it to finish should happen elsewhere.
I do most of my work via the raw OpenCL API, not the C++ wrappers, which is why I'm referring to what they do, rather than the C++ wrappers.
I faced a similar problem with OpenCL, where some packets of a data stream were not processed by OpenCL.
I realized it only happens while the notebook is plugged into a docking station.
Maybe this helps someone.
(No clFlush or clFinish calls)
I am trying to figure out where I made a mistake in my C++ Poco code. On Ubuntu 14 the program runs correctly, but when recompiled for ARM via gnueabi it just crashes with SIGSEGV.
This is the report from the stack trace (where it crashes):
socket(PF_INET, SOCK_STREAM, IPPROTO_IP) = 3
connect(3, {sa_family=AF_INET, sin_port=htons(8888), sin_addr=inet_addr("192.168.2.101")}, 16) = 0
--- SIGSEGV {si_signo=SIGSEGV, si_code=SEGV_MAPERR, si_addr=0x6502a8c4} ---
+++ killed by SIGSEGV +++
And this is the code where it crashes (it should connect to the TCP server):
this->address = SocketAddress(this->host, (uint16_t)this->port);
this->socket = StreamSocket(this->address); // !HERE
Note that I am catching all exceptions (like ECONNREFUSED), and it fails cleanly when it can't connect. It only crashes when it does connect to the server.
When I try to run it under Valgrind, it aborts with an error. I have no idea what "shadow memory range" means:
==4929== Shadow memory range interleaves with an existing memory mapping. ASan cannot proceed correctly. ABORTING.
Here is the full log: http://pastebin.com/Ky4RynQc
Thank you
I don't know why, but this compiled badly on Ubuntu; when compiled on Fedora (same script, same build settings, same GNU toolchain), it works.
Thank you guys for your comments.
This is somewhat of a follow-up to a question posted earlier last month.
In porting my work to the work computer, I'm experiencing a new convolution problem.
My kernel is 30x30 in size, and now OpenCV complains:
Assertion failed (templ.cols <= 17 && templ.rows <= 17) in convolve, file /home/jeffrey/opencv/src/modules/gpu/src/imgproc.cpp, line 1677
terminate called after throwing an instance of 'cv::Exception'
So far this is the only noticeable problem I'm getting with the port.
Is this new? Is this normal?
I'm also getting two errors which are most likely unrelated, but are worth mentioning:
[swscaler # 0x9422b00]No accelerated colorspace conversion found.
This is likely an ffmpeg error,
but I'm also getting an error stating that the OpenGL version is unsupported.
Could this be the culprit?
OK, the problem was simply a lack of CUFFT.
Thank you
I'm writing a simple OpenGL program using go-gl. While the program runs fine on most machines, it fails with a segfault when running under Windows on my laptop (it works on Linux though - this is what's odd about it). The culprit is my call to glEnableVertexArrayAttrib. I've attached the stack trace and relevant code below.
Partial stack trace:
Exception 0xc0000005 0x8 0x0 0x0
PC=0x0
signal arrived during external code execution
github.com/go-gl/gl/v3.3-core/gl._Cfunc_glowEnableVertexArrayAttrib(0x0, 0x1)
github.com/go-gl/gl/v3.3-core/gl/_obj/_cgo_gotypes.go:4141 +0x41
github.com/go-gl/gl/v3.3-core/gl.EnableVertexArrayAttrib(0x1)
C:/Users/mpron/go/src/github.com/go-gl/gl/v3.3-core/gl/package.go:5874 +0x3a
github.com/caseif/cubic-go/graphics.prepareVbo(0x1, 0xc0820086e0, 0xc0820a7e70)
C:/Users/mpron/go/src/github.com/caseif/cubic-go/graphics/block_renderer.go:145 +0x108
Relevant code:
gl.GenVertexArrays(1, &vaoHandle)
gl.BindVertexArray(vaoHandle)
gl.BindBuffer(gl.ARRAY_BUFFER, handle)
gl.BufferData(gl.ARRAY_BUFFER, len(*vbo) * 4, gl.Ptr(*vbo), gl.STATIC_DRAW)
gl.EnableVertexArrayAttrib(vaoHandle, positionAttrIndex) // line 145
gl.VertexAttribPointer(positionAttrIndex, 3, gl.FLOAT, false, 12, nil)
I made a subtle mistake: I was calling glEnableVertexArrayAttrib, which is only available since OpenGL 4.5, instead of glEnableVertexAttribArray, which has been available since OpenGL 2.0. The former allows attribute arrays to be toggled on a per-VAO basis, which isn't at all necessary in this context.
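For reference, here is a sketch of the corrected setup from the snippet above, with the pre-4.5 call swapped in; the variable names are the ones from the question.
gl.GenVertexArrays(1, &vaoHandle)
gl.BindVertexArray(vaoHandle)
gl.BindBuffer(gl.ARRAY_BUFFER, handle)
gl.BufferData(gl.ARRAY_BUFFER, len(*vbo)*4, gl.Ptr(*vbo), gl.STATIC_DRAW)
// glEnableVertexAttribArray works on the currently bound VAO and exists on a
// 3.3 context; the DSA-style glEnableVertexArrayAttrib requires OpenGL 4.5.
gl.EnableVertexAttribArray(positionAttrIndex)
gl.VertexAttribPointer(positionAttrIndex, 3, gl.FLOAT, false, 12, nil)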