I ported a game I am making from SDL 1.2 to SDL2. After porting the game and getting it to compile properly, I get a segfault when I call TTF_OpenFont here:
bool cargararchivos(SDL_Texture* &background,SDL_Texture* &player,TTF_Font* &font,SDL_Texture* &bullet,Config* placlips,SDL_Renderer* renderer)
{
    string playerss;
    //Open the font
    font = TTF_OpenFont( "lazy.ttf", 28 );
    //If there was an error in loading the font
    if(font==NULL)
    {
        return false;
    }
    try{
        playerss = placlips->lookup("filename").c_str();
    }catch(const SettingNotFoundException &nfex)
    {
        cerr << "No 'filename' setting in configuration file." << endl;
        return false;
    }
    //Open background
    background = cargarimagen("fondo.png",renderer);
    if(background==NULL){
        return false;
    }
    //Open player sprites
    player = cargarimagen(playerss,renderer);
    if(player==NULL){
        return false;
    }
    bullet = cargarimagen("bullet.png",renderer);
    if(bullet==NULL)
        return false;
    return true;
}
The segfault happens before TTF_OpenFont ends. The backtrace I get is:
#0 ?? ?? () (??:??)
#1 0x7ffff7410ce5 TTF_CloseFont(font=0x8af1e0) (SDL_ttf.c:933)
#2 0x7ffff74110fd TTF_OpenFontIndexRW(src=<optimized out>, freesrc=<optimized out>, ptsize=<optimized out>, index=0) (SDL_ttf.c:489)
#3 0x409c9d cargararchivos(background=@0x7fffffffe598: 0x0, player=@0x7fffffffe590: 0x0, font=@0x7fffffffe580: 0x0, bullet=@0x7fffffffe588: 0x0, placlips=0x7fffffffe560, renderer=0x9c25b0) (/home/xxxxx/xxxxx/main.cpp:33)
#4 0x40a526 main(argc=1, args=0x7fffffffe6e8) (/home/xxxxx/xxxxx/main.cpp:173)
If I take out all the SDL_ttf stuff I still get a similar segfault, but with IMG_Load. I suspect it is an issue with my Code::Blocks setup, because I can build the Lazy Foo SDL2 tutorials fine with g++ and run them. Or maybe it is a bug? I am using Debian sid (Linux), by the way. Please help.
D'oh!
There was an #include of the 1.2 version of SDL_ttf that I forgot to change to the new version. Man, I am stupid. Thanks keltar!
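For anyone hitting the same thing, the fix amounts to swapping the old header for its SDL2 counterpart. The paths below are the usual Debian locations for the dev packages, shown as an illustration rather than a quote of my actual source:
// Old SDL 1.2 header - mixing this with the SDL2 libraries gives mismatched
// struct layouts, so calls like TTF_OpenFont can crash:
//#include <SDL/SDL_ttf.h>

// SDL2 version of the header:
#include <SDL2/SDL_ttf.h>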
I use a self-compiled WebRTC source in C++. Regularly, when I use a DataChannel after reconnection, my program just gets stuck.
Here is the debug output I have:
Thread 13 (Thread 0x7f3f39a33700 (LWP 819)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at ../sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x000055a8542d4d05 in rtc::Event::Wait(int, int) ()
#2 0x000055a85451f630 in webrtc::DataChannelProxyWithInternal<webrtc::DataChannelInterface>::reliable() const ()
#3 0x000055a854234c4f in DataChannelInterfaceManager::reliable (this=0x7f3f00006af0) at ./PeerConnectionManager.h:100
What is that rtc::Event::Wait function, and why does the program just get stuck with no errors?
rtc::scoped_refptr<webrtc::DataChannelInterface> dataChannel;
....
bool
reliable(void) {
    if (dataChannel.get() == nullptr) {
        RTC_LOG(LERROR) << __LINE__;
        return false;
    }
    return dataChannel->reliable(); // Freeze here
}
I also added the complete log from gdb:
https://pastebin.com/YGE5qUPi
I used msys64 - mingw32 (on Windows 7, 64 bit) to build a 3rd party library (libosmscout) using a supplied makefile. I was primarily interested in one particular example from the library (Demos/src/DrawMapCairo). With the makefile, the complete library was built successfully, including the demos. The example console application of interest works fine.
My intention, however, is to make my own application using the Code::Blocks IDE, which would use the functionality of the example app. Therefore I tried to build the example in Code::Blocks (New project -> console application, GCC 5.1 MinGW). After a while I managed to get a successful build with 0 errors/warnings. But the application doesn't work; it crashes with a SIGSEGV fault. "cout debugging" and stepping through in the debugger suggest that the issue seems to be (or start) at the line
osmscout::DatabaseRef database(new osmscout::Database(databaseParameter));
Source code with main():
#includes...
static const double DPI=96.0;

int main(int argc, char* argv[])
{
  std::string map;
  std::string style;
  std::string output;
  size_t width,height;
  double lon,lat,zoom;

  if (argc!=9) {
    std::cerr << "DrawMap <map directory> <style-file> <width> <height> <lon> <lat> <zoom> <output>" << std::endl;
    return 1;
  }

  map=argv[1];
  style=argv[2];
  //next 6 lines not exactly as in source, but for shorter code:
  osmscout::StringToNumber(argv[3],width);
  osmscout::StringToNumber(argv[4],height);
  sscanf(argv[5],"%lf",&lon);
  sscanf(argv[6],"%lf",&lat);
  sscanf(argv[7],"%lf",&zoom);
  output=argv[8];

  osmscout::DatabaseParameter databaseParameter;
  osmscout::DatabaseRef database(new osmscout::Database(databaseParameter));
  osmscout::MapServiceRef mapService(new osmscout::MapService(database));

  if (!database->Open(map.c_str())) {
    std::cerr << "Cannot open database" << std::endl;
    return 1;
  }

  osmscout::StyleConfigRef styleConfig(new osmscout::StyleConfig (database->GetTypeConfig()));

  if (!styleConfig->Load(style)) {
    std::cerr << "Cannot open style" << std::endl;
  }

  cairo_surface_t *surface;
  cairo_t *cairo;

  surface=cairo_image_surface_create(CAIRO_FORMAT_RGB24,width,height);

  if (surface!=NULL) {
    cairo=cairo_create(surface);
    if (cairo!=NULL) {
      osmscout::MercatorProjection projection;
      osmscout::MapParameter drawParameter;
      osmscout::AreaSearchParameter searchParameter;
      osmscout::MapData data;
      osmscout::MapPainterCairo painter(styleConfig);

      drawParameter.SetFontSize(3.0);

      projection.Set(lon,
                     lat,
                     osmscout::Magnification(zoom),
                     DPI,
                     width,
                     height);

      std::list<osmscout::TileRef> tiles;

      mapService->LookupTiles(projection,tiles);
      mapService->LoadMissingTileData(searchParameter,*styleConfig,tiles);
      mapService->ConvertTilesToMapData(tiles,data);

      if (painter.DrawMap(projection,
                          drawParameter,
                          data,
                          cairo)) {
        if (cairo_surface_write_to_png(surface,output.c_str())!=CAIRO_STATUS_SUCCESS) {
          std::cerr << "Cannot write PNG" << std::endl;
        }
      }

      cairo_destroy(cairo);
    }
    else {
      std::cerr << "Cannot create cairo cairo" << std::endl;
    }

    cairo_surface_destroy(surface);
  }
  else {
    std::cerr << "Cannot create cairo surface" << std::endl;
  }

  return 0;
}
How can I find exactly what the problem is and solve it? What's really puzzling me is that the same code built with the makefile works just fine.
EDIT:
After running GDB (the GNU Debugger, gdb32.exe) and then bt (backtrace), I get the following output:
[New Thread 3900.0x538]
Program received signal SIGSEGV, Segmentation fault.
0x777ec159 in ntdll!RtlDecodeSystemPointer ()
from C:\Windows\SysWOW64\ntdll.dll
(gdb)
(gdb) bt
#0 0x777e3c28 in ntdll!RtlQueryPerformanceCounter ()
from C:\Windows\SysWOW64\ntdll.dll
#1 0x00000028 in ?? ()
#2 0x00870000 in ?? ()
#3 0x777ec1ed in ntdll!RtlDecodeSystemPointer ()
from C:\Windows\SysWOW64\ntdll.dll
#4 0x777ec13e in ntdll!RtlDecodeSystemPointer ()
from C:\Windows\SysWOW64\ntdll.dll
#5 0x777e3541 in ntdll!RtlQueryPerformanceCounter ()
from C:\Windows\SysWOW64\ntdll.dll
#6 0x00000010 in ?? ()
#7 0x00000028 in ?? ()
#8 0x008700c4 in ?? ()
#9 0x77881dd3 in ntdll!RtlpNtEnumerateSubKey ()
from C:\Windows\SysWOW64\ntdll.dll
#10 0x7783b586 in ntdll!RtlUlonglongByteSwap ()
from C:\Windows\SysWOW64\ntdll.dll
#11 0x00870000 in ?? ()
#12 0x777e3541 in ntdll!RtlQueryPerformanceCounter ()
from C:\Windows\SysWOW64\ntdll.dll
#13 0x00000010 in ?? ()
#14 0x00000000 in ?? ()
(gdb)
What does this error mean, and how can I find what caused it so I can correct the fault?
I got a segmentation fault when trying to call the function getLazyBitcodeModule.
The code that causes the fault is shown below:
// Load the bytecode...
std::string ErrorMsg;
Module *mainModule = 0;
OwningPtr<MemoryBuffer> BufferPtr;
error_code ec = MemoryBuffer::getFileOrSTDIN(InputFile, BufferPtr);
if (ec) {
    printf("error loading program '%s': %s\n", InputFile, ec.message().c_str());
    exit(1);
}
mainModule = getLazyBitcodeModule(BufferPtr.get(), getGlobalContext(), &ErrorMsg);
The gdb debug information is also shown below:
(gdb) backtrace
#0 0x0000000000000241 in ?? ()
#1 0x000000000040630e in ~OwningPtr (this=0x7fffffffdd20, __in_chrg= <optimized out>) at /usr/local/include/llvm/ADT/OwningPtr.h:45
#2 readBitFile (InputFile=InputFile@entry=0xafec23 "test.bc") at main.cpp:36
#3 0x0000000000405fce in main (argc=2, argv=0x7fffffffde78) at main.cpp:71
Any suggestions for getting around the error?
If you look inside the implementation of getLazyBitcodeModule() you'll see the line:
R->setBufferOwned(true);
The BitcodeReader now owns your buffer and will delete it for you when it gets destroyed. Your OwningPtr is also freeing that same memory, so the double free is causing the segfault.
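A minimal sketch of one way to fix it, reusing the variables from the snippet above and assuming the OwningPtr from llvm/ADT/OwningPtr.h (its take() member releases ownership without freeing the buffer):
mainModule = getLazyBitcodeModule(BufferPtr.get(), getGlobalContext(), &ErrorMsg);
if (mainModule) {
    // The BitcodeReader created here owns the buffer now, so give up our
    // ownership; otherwise OwningPtr's destructor frees the same memory again.
    BufferPtr.take();
} else {
    printf("error creating module: %s\n", ErrorMsg.c_str());
    exit(1);
}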
Trying to get set up with SDL and OpenGL in D. Specifically, SDL2 and OpenGL 3.3 core/forward compatible (although I left the last two out in the example, because it breaks at the same point whether or not they're there). The equivalent of the following in GLFW works fine, so apparently I'm screwing something up on the SDL end, or SDL does some magic that breaks Derelict - which seems hard to believe, considering that Derelict-gl doesn't do all that much other than load a few function pointers - but something goes wrong somewhere, and while I wouldn't exclude a bug in Derelict or SDL, it's more likely my code.
I don't see it though, and here it is:
import std.stdio;
import std.c.stdlib;
import derelict.sdl2.sdl;
import derelict.opengl3.gl;

void fatal_error_if(Cond,Args...)(Cond cond, string format, Args args) {
    if(!!cond) {
        stderr.writefln(format,args);
        exit(1);
    }
}

void main()
{
    //set up D bindings to SDL and OpenGL 1.1
    DerelictGL.load();
    DerelictSDL2.load();

    fatal_error_if(SDL_Init(SDL_INIT_VIDEO),"Failed to initialize sdl!");

    // we want OpenGL 3.3
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION,3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION,3);

    auto window = SDL_CreateWindow(
        "An SDL2 window",
        SDL_WINDOWPOS_UNDEFINED,
        SDL_WINDOWPOS_UNDEFINED,
        800,
        600,
        SDL_WINDOW_OPENGL); // we want this window to support OpenGL
    fatal_error_if(window is null,"Failed to create SDL window!");

    auto glprof = SDL_GL_CreateContext(window); // Create the actual context and make it current
    fatal_error_if(glprof is null,"Failed to create GL context!");

    DerelictGL.reload(); //<-- BOOM SIGSEGV

    // just some stuff so we actually see something if nothing exploded
    glClearColor(1,0,0,0);
    glClear(GL_COLOR_BUFFER_BIT);
    SDL_GL_SwapWindow(window);
    SDL_Delay(5000);

    SDL_DestroyWindow(window);
    SDL_Quit();
    writeln("If we got to this point everything went alright...");
}
Like the question title says, it breaks on DerelictGL.reload() (which is supposed to load OpenGL functions, similar to GLEW). Here's the stack trace...
#0 0x00007ffff71a398d in __strstr_sse2_unaligned () from /usr/lib/libc.so.6
#1 0x000000000048b8d5 in derelict.opengl3.internal.findEXT() (extname=..., extstr=0x0)
at ../../../../.dub/packages/derelict-gl3-master/source/derelict/opengl3/internal.d:74
#2 0x000000000048b8b0 in derelict.opengl3.internal.isExtSupported() (name=..., glversion=<incomplete type>)
at ../../../../.dub/packages/derelict-gl3-master/source/derelict/opengl3/internal.d:67
#3 0x0000000000487778 in derelict.opengl3.gl.DerelictGLLoader.reload() (this=0x7ffff7ec5e80)
at ../../../../.dub/packages/derelict-gl3-master/source/derelict/opengl3/gl.d:48
#4 0x0000000000473bba in D main () at source/app.d:36
#5 0x00000000004980c8 in rt.dmain2._d_run_main() ()
#6 0x0000000000498022 in rt.dmain2._d_run_main() ()
#7 0x0000000000498088 in rt.dmain2._d_run_main() ()
#8 0x0000000000498022 in rt.dmain2._d_run_main() ()
#9 0x0000000000497fa3 in _d_run_main ()
#10 0x00000000004809e5 in main ()
The error here seems to occur because glGetString(GL_EXTENSIONS) returns null; why, I don't quite understand. If I remove the call to DerelictGL.reload the rest of the program runs, but that would mean that post-OpenGL 1.1 functions don't get loaded.
To phrase this as an actual question - am I doing something wrong? If so, what?
Additional
I confirmed that an OpenGL 3.3 context was created - glGet returns 3 for both GL_MAJOR_VERSION and GL_MINOR_VERSION.
This seems to be a bug in Derelict-gl3 - if I change this line in gl.d
if( maxVer >= GLVersion.GL12 && isExtSupported( GLVersion.GL12, "GL_ARB_imaging" ) ) {
to
if( maxVer >= GLVersion.GL12 && isExtSupported( maxVer, "GL_ARB_imaging" ) ) {
it works fine. I'll submit an issue on the GitHub repo and see if this is actually the case (I'm not that familiar with how Derelict works, but this appears fairly obvious to me).
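For anyone wondering why glGetString(GL_EXTENSIONS) came back null in the first place: on a 3.2+ core-profile context that query is no longer valid, and extensions have to be enumerated with glGetStringi instead - presumably what the maxVer code path in Derelict ends up using. A rough sketch of that pattern at the GL C API level (the same functions Derelict binds); this assumes a 3.0+ context is current and that the function pointers are already loaded by whatever loader you use:
// Sketch only: core-profile-safe extension check. glGetString(GL_EXTENSIONS)
// returns NULL and raises GL_INVALID_ENUM on core profiles, so enumerate
// the extension strings one by one instead. Assumes GL headers / a loader
// (GLEW, glad, Derelict's bindings, ...) provide glGetIntegerv and glGetStringi.
#include <cstring>

bool isExtSupportedCore(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);           // number of extension strings
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && std::strcmp(ext, name) == 0)
            return true;                                 // extension advertised
    }
    return false;                                        // not advertised by this context
}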
The following code is pretty boilerplate code that runs fine as is, but crashes when run in gdb. By itself I wouldn't care about that, but this is a reduced version of my bigger program, which also crashes with or without gdb. Any help on what I'm doing wrong here would be tremendously appreciated.
It crashes in the very last call into the JVM: "jobject hbase_configuration = env->CallStaticObjectMethod(cls, create_mid);"
I have tried calling HBaseConfiguration.create many times through JNI in different ways, and in all cases it crashes. The stack trace in gdb does not seem very helpful; I can't get any symbols out of it, despite having compiled with -g.
#include <string>
#include <glog/logging.h>
#include <jni.h>
// (edit - this was hidden in the original post).
int main(int argc, char* argv[]) {
JavaVM *jvm;
JNIEnv *env;
JavaVMInitArgs vm_args;
JavaVMOption options[5];
vm_args.nOptions = 5;
vm_args.version = JNI_VERSION_1_6;
vm_args.options = options;
vm_args.ignoreUnrecognized = 1;
JNI_GetDefaultJavaVMInitArgs(&vm_args);
options[0].optionString = "-Djava.class.path=hbase-1.0-SNAPSHOT.jar:activation-1.1.jar:asm-3.1.jar:avro-1.7.1.cloudera.2.jar:commons-beanutils-1.7.0.jar:commons-beanutils-core-1.8.0.jar:commons-cli-1.2.jar:commons-codec-1.4.jar:commons-collections-3.2.1.jar:commons-configuration-1.6.jar:commons-daemon-1.0.3.jar:commons-digester-1.8.jar:commons-el-1.0.jar:commons-httpclient-3.1.jar:commons-io-2.1.jar:commons-lang-2.5.jar:commons-logging-1.1.1.jar:commons-math-2.1.jar:commons-net-3.1.jar:ftplet-api-1.0.0.jar:ftpserver-core-1.0.0.jar:ftpserver-deprecated-1.0.0-M2.jar:guava-11.0.2.jar:hadoop-annotations-2.0.0-cdh4.1.1.jar:hadoop-auth-2.0.0-cdh4.1.1.jar:hadoop-common-2.0.2-alpha.jar:hadoop-common-2.0.2-alpha-tests.jar:hadoop-hdfs-2.0.0-cdh4.1.1.jar:hadoop-test-2.0.0-mr1-cdh4.1.1.jar:hbase-0.92.1-cdh4.1.0.jar:hbase-0.92.1-cdh4.1.0-sources.jar:hbase-0.92.1-cdh4.1.0-tests.jar:high-scale-lib-1.1.1.jar:hsqldb-1.8.0.10.jar:jaxb-api-2.1.jar:jaxb-impl-2.2.3-1.jar:jersey-core-1.8.jar:jersey-json-1.8.jar:jersey-server-1.8.jar:jets3t-0.6.1.jar:jline-0.9.94.jar:jsch-0.1.42.jar:jsp-api-2.1.jar:jsr305-1.3.9.jar:junit-4.10.jar:kfs-0.3.jar:log4j-1.2.17.jar:metrics-core-2.1.2.jar:paranamer-2.3.jar:protobuf-java-2.4.1.jar:servlet-api-2.5.jar:tools.jar";
options[1].optionString = "-verbose:jni";
options[2].optionString = "-Xcheck:jni:pedantic,verbose";
options[3].optionString = "-Xdebug";
options[4].optionString = "-Xrunjdwp:transport=dt_socket,address=4242,server=y,suspend=n";
vm_args.nOptions = 5;
vm_args.version = JNI_VERSION_1_6;
vm_args.options = options;
vm_args.ignoreUnrecognized = 1;
// Load and initialize a Java VM, return a JNI interface
// pointer in env.
long result = JNI_CreateJavaVM(&jvm, (void **)&env, &vm_args);
if (result == JNI_ERR) {
    LOG(ERROR) << "Failed to create a JVM";
    return false;
}
jclass cls = env->FindClass("org/apache/hadoop/hbase/HBaseConfiguration");
if (cls == NULL) {
    LOG(ERROR) << " Could not find class org/apache/hadoop/hbase/HBaseConfiguration";
    return false;
}
jmethodID create_mid = env->GetStaticMethodID(
    cls, "create", "()Lorg/apache/hadoop/conf/Configuration;");
if (create_mid == NULL) {
    LOG(ERROR) << "Could not find static method create in HBaseConfiguration";
    return false;
}
LOG(INFO) << "Creating conf";
jobject hbase_configuration = env->CallStaticObjectMethod(cls, create_mid);
LOG(INFO) << "Created conf";
return 0;
}
Stack trace looks like:
#0 0x00007ffff134a722 in ?? ()
#1 0x00007ffff12e8410 in ?? ()
#2 0x0000000700000000 in ?? ()
#3 0x00007fffffffd150 in ?? ()
#4 0x00007fffffffd108 in ?? ()
#5 0x000000000060e800 in ?? ()
#6 0x000000077fbcaa30 in ?? ()
#7 0x000000000000001b in ?? ()
#8 0x0000000000000000 in ?? ()
This was indeed what technomage had suggested in the comments. The gdb crash was a red herring: the JVM deliberately triggers SIGSEGV internally and is meant to handle it itself.
Once I told gdb "handle SIGSEGV nostop", it worked just fine, and I was able to debug my larger program.
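For reference, a fuller variant of that setting (noprint and pass are optional extras here: noprint silences the notification, and pass keeps forwarding the signal so the JVM's own SIGSEGV handler still runs):
(gdb) handle SIGSEGV nostop noprint pass
(gdb) run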