I got an error with SDL2_ttf and Emscripten - C++

I was porting my game to Emscripten. Everything was fine until SDL_ttf.
I am already using SDL2 + SDL2_image + SDL2_mixer.
Here is an example:
SDL_Color color = {255, 255, 255};
std::cout << "1\n";
font = TTF_OpenFont("saucery/font/font1.otf", 8);
if (!font)
    printf("Unable to load font: %s \n", TTF_GetError());
std::cout << "2\n";
SDL_Surface *surf = TTF_RenderText_Solid(font, "Oieee", color);
std::cout << "3\n";
if (surf) {
    std::cout << (int)surf << "\n";
    texture = SDL_CreateTextureFromSurface(Game::instance->GetRenderer(), surf);
    std::cout << "4\n";
    Uint32 format;
    int access, w, h;
    SDL_QueryTexture(texture, &format, &access, &w, &h);
    dimensions2.x = 0;
    dimensions2.y = 0;
    dimensions2.h = h;
    dimensions2.w = w;
    SDL_FreeSurface(surf);
}
In this code I just open a font (I have already tried changing the size and switching the font to a .ttf). Everything seems to run fine until:
SDL_CreateTextureFromSurface
I put some std::cout calls in to see where the code was "crashing". Every time it reaches SDL_CreateTextureFromSurface the console shows "45" and execution stops there.
With or without the std::cout calls I still get this error.
This is driving me crazy already ._.
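For debugging, a minimal error-check around those same calls might look like the sketch below (TTF_WasInit(), TTF_GetError() and SDL_GetError() are the standard SDL2/SDL2_ttf error queries; this is only a check, not a fix for the Emscripten crash):
// Make sure SDL_ttf was actually initialised before any TTF_* call.
if (!TTF_WasInit() && TTF_Init() == -1)
    printf("TTF_Init failed: %s\n", TTF_GetError());

SDL_Surface *surf = TTF_RenderText_Solid(font, "Oieee", color);
if (!surf)
    printf("TTF_RenderText_Solid failed: %s\n", TTF_GetError());

SDL_Texture *texture = SDL_CreateTextureFromSurface(Game::instance->GetRenderer(), surf);
if (!texture)
    printf("SDL_CreateTextureFromSurface failed: %s\n", SDL_GetError());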

Related

SDL_LoadBMP() is successful, but the window becomes entirely black

I apologize if this question has already been asked, but I've been researching for about a week now and cannot find the answer anywhere.
The problem I am having is that while SDL_LoadBMP() loads the image successfully, the window does not render an image at all and instead renders an entirely black screen. However, I do know something is being loaded (not just because SDL_LoadBMP() returns no error, but also) because when I run the program with the SDL_LoadBMP() call commented out, the window stays entirely white.
If it helps, I have been following along with the Lazy Foo tutorial located here. Code below...
From Main.cpp
int main(int argc, char* args[])
{
    //the surface that we will be applying an image to
    SDL_Surface* ImageSurface = NULL;
    //try to initialize SDL
    try
    {
        initSDL();
    }
    //if an error is caught
    catch (string Error)
    {
        //print out the error
        cout << "SDL error occurred! SDL Error: " << Error << endl;
        //return an error
        return -1;
    }
    //try loading an image onto the ImageSurface
    try
    {
        loadMedia(ImageSurface, "ImageTest.bmp");
    }
    //if an error is caught
    catch (string Error)
    {
        //print the error out
        cout << "SDL error occurred! SDL Error: " << Error << endl;
        //return an error
        SDL_Delay(6000);
        return -1;
    }
    //Apply the image surface to the main surface
    SDL_BlitSurface(ImageSurface, NULL, Surface, NULL);
    //update the surface of the main window
    SDL_UpdateWindowSurface(Window);
    //wait for 10 seconds (10000 milliseconds)
    SDL_Delay(10000);
    //close SDL
    close();
    //return
    return 0;
}
From SDLBackend.cpp (I will only be posting the code relevant to the image loading process)
void loadMedia(SDL_Surface* surface, string path)
{
    cout << "Attempting to load an image!" << endl;
    //load the image at path into our surface
    surface = SDL_LoadBMP(path.c_str());
    //if there was an error in the loading procedure
    if (surface == NULL)
    {
        //make a string to store our error in
        string Error = SDL_GetError();
        //throw our error
        throw Error;
    }
    cout << "Successfully loaded an image!" << endl;
    cout << "Pushing surface into the Surface List" << endl;
    //Put the surface into our list
    SurfaceList.push_back(surface);
    return;
}
I am compiling with Visual Studio 2013, and the image ImageTest.bmp is located in the same directory as the .vcxproj file.
The problem is in loadMedia(): the loaded surface is assigned to a local copy of the pointer, so the caller's ImageSurface never changes. You'll need to use a reference to a pointer,
void loadMedia(SDL_Surface*& surface, string path)
{
    surface = SDL_LoadBMP(path.c_str());
}
or a double pointer (maybe preferred, since it clarifies the intent at the call site),
void loadMedia(SDL_Surface** surface, string path)
{
    *surface = SDL_LoadBMP(path.c_str());
}
Alternatively, you could return the surface, or even retrieve it afterwards from SurfaceList.back().
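If you go with the double-pointer version, the call in main() passes the address of the pointer instead (a small usage sketch based on the question's code, not tested against the rest of the project):
//main() now passes &ImageSurface so loadMedia() can write the loaded surface back to the caller
SDL_Surface* ImageSurface = NULL;
loadMedia(&ImageSurface, "ImageTest.bmp");
//ImageSurface now points at the loaded bitmap (loadMedia() throws on failure)
SDL_BlitSurface(ImageSurface, NULL, Surface, NULL);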

VideoStream::setVideoMode() function doesn't work

I want to change the VideoStream settings in my program, but it doesn't work.
#include <OpenNI.h>

int main()
{
    OpenNI::initialize();
    Device device;
    device.open(ANY_DEVICE);
    VideoStream depthStream;
    depthStream.create(device, SENSOR_DEPTH);
    depthStream.start();
    VideoMode depthMode;
    depthMode.setFps(20);
    depthMode.setResolution(640, 480);
    depthMode.setPixelFormat(PIXEL_FORMAT_DEPTH_100_UM);
    depthStream.setVideoMode(depthMode);
    ...
}
Even if I move the depthStream.start() call to after setVideoMode(), it still doesn't work.
I changed the FPS to 24, 20, 5, and 1, but it doesn't change anything.
P.S.: This is my simplified code, without error handling.
Edit:
Answer:
With the help of "api55" I found that my device (Xbox Kinect) supports only one video mode, so I can't change it.
The only supported mode is:
FPS: 30
Width: 640
Height: 480
I changed the VideoMode successfully in some code I wrote before. After creating the VideoStream you should do something like:
rc = depth.create(device, openni::SENSOR_DEPTH);
if (rc != openni::STATUS_OK)
    error_manager(3);

// set the new resolution and fps
openni::VideoMode depth_videoMode = depth.getVideoMode();
depth_videoMode.setResolution(frame_width, frame_height);
depth_videoMode.setFps(30);
depth.setVideoMode(depth_videoMode);

rc = depth.start();
if (rc != openni::STATUS_OK)
    error_manager(4);
First I get the VideoMode that is already inside the stream, to preserve the other values and only change what I wanted. I think your code should work, but not all settings work on all cameras. To check the possible settings you can use openni::VideoStream::getSensorInfo. The code to check this should be something like:
#include <OpenNI.h>

int main()
{
    OpenNI::initialize();
    Device device;
    device.open(ANY_DEVICE);
    VideoStream depthStream;
    depthStream.create(device, SENSOR_DEPTH);
    depthStream.start();
    const SensorInfo& info = depthStream.getSensorInfo();
    const Array<VideoMode>& videoModes = info.getSupportedVideoModes();
    for (int i = 0; i < videoModes.getSize(); i++) {
        std::cout << "VideoMode " << i << std::endl;
        std::cout << "FPS:" << videoModes[i].getFps() << std::endl;
        std::cout << "Width:" << videoModes[i].getResolutionX() << std::endl;
        std::cout << "Height:" << videoModes[i].getResolutionY() << std::endl;
    }
    ...
}
I haven't tested this last piece of code, so it may have errors, but you get the idea of it. The supported settings change with each camera, but I think the supported FPS values on my camera were 15 and 30.
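Building on that check, one could also pick a supported mode straight from the reported list instead of constructing one by hand (an untested sketch along the same lines, not part of the original answer; the 640x480 values are just an example):
// Apply the first supported mode that matches the resolution we are after, if any.
const Array<VideoMode>& modes = depthStream.getSensorInfo().getSupportedVideoModes();
for (int i = 0; i < modes.getSize(); i++) {
    if (modes[i].getResolutionX() == 640 && modes[i].getResolutionY() == 480) {
        depthStream.setVideoMode(modes[i]); // a mode the device actually reports
        break;
    }
}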
I hope this helps you.

Issue with GLX/X11 on Ubuntu not showing correct Window Contents

I'm in the process of porting my engine across to Linux.
I can successfully create a window and set up an OpenGL context; however, the contents of the window are whatever was displayed behind it at the time of creation. NOTE: this is not a transparent window; if I drag the window around it still contains an 'image' of whatever was behind it at the time of creation. (See attached image.)
Now, I'm not sure where the issue could be. I'm not looking for a solution to a specific issue in my code, mainly just any insight from other Linux/GLX developers who may have seen a similar issue and might know where I should start looking.
I stripped all the code in my update function right down to just:
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glXSwapBuffers(dpy, win);
With no joy. My first thought was that it was garbage, but with just those calls I'd expect to see the glClearColor() colour.
glGetError() returns no errors anywhere in my application.
Immediately after glXCreateContext() I call glXMakeCurrent(), and calling glGetIntegerv() with GL_MAJOR_VERSION and GL_MINOR_VERSION returns 4 and 2 (4.2) respectively, which indicates the GL context has been created successfully.
I tried adding a glXMakeCurrent() call immediately before my glClear()/glXSwapBuffers(), but to no effect.
Further info: this is a multithreaded application, however all X11/GLX/OpenGL calls are made from a single thread. I have also tried calling XInitThreads() from the main application thread and from the rendering thread, with no luck either.
Code for Creating Window
bool RenderWindow::createWindow(std::string title, unsigned int width, unsigned int height)
{
    std::cout << "createWindow() called" << std::endl;
    this->m_Width = width;
    this->m_Height = height;
    this->m_Display = XOpenDisplay(NULL);
    if (this->m_Display == NULL)
    {
        std::cout << "Unable to connect to X Server" << std::endl;
        return false;
    }
    this->m_Root = DefaultRootWindow(this->m_Display);
    this->m_Active = true;
    XSetErrorHandler(RenderWindow::errorHandler);
    return true;
}
Code for initialising OpenGL Context
bool RenderingSubsystem::initialiseContext()
{
    if (!this->m_Window)
    {
        std::cout << "Unable to initialise context because there is no Window" << std::endl;
        return false;
    }
    this->m_Window->createWindow(this->m_Window->GetTitle(), this->m_Window->GetWidth(), this->m_Window->GetHeight());
    int att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    this->m_VI = glXChooseVisual(this->m_Window->GetDisplay(), 0, att);
    if (this->m_VI == NULL)
    {
        std::cout << "Unable to initialise context because no suitable VisualInfo could be found" << std::endl;
        return false;
    }
    this->m_CMap = XCreateColormap(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), this->m_VI->visual, AllocNone);
    this->m_SWA.colormap = this->m_CMap;
    this->m_SWA.event_mask = ExposureMask;
    std::cout << "Width: " << this->m_Window->GetWidth() << " Height: " << this->m_Window->GetHeight() << std::endl;
    this->m_Wnd = XCreateWindow(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), 0, 0,
                                this->m_Window->GetWidth(), this->m_Window->GetHeight(), 0, this->m_VI->depth,
                                InputOutput, this->m_VI->visual, CWColormap | CWEventMask, &this->m_SWA);
    XMapWindow(this->m_Window->GetDisplay(), this->m_Wnd);
    XStoreName(this->m_Window->GetDisplay(), this->m_Wnd, this->m_Window->GetTitle().c_str());
    this->m_DC = glXCreateContext(this->m_Window->GetDisplay(), this->m_VI, NULL, GL_TRUE);
    if (this->m_DC == 0)
    {
        std::cout << "Unable to create GL Context" << std::endl;
        return false;
    }
    glXMakeCurrent(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), this->m_DC);
    int major, minor;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::cout << "InitialiseContext complete (" << major << "." << minor << ")" << std::endl;
    return true;
}
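For comparison, a minimal self-contained GLX clear-and-swap program might look like the sketch below (not taken from the question; two things worth checking against it are that the drawable passed to glXMakeCurrent() is the same window glXSwapBuffers() is called on, and that the window has been mapped before drawing):
#include <GL/gl.h>
#include <GL/glx.h>
#include <X11/Xlib.h>

int main()
{
    Display* dpy = XOpenDisplay(NULL);
    int att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    XVisualInfo* vi = glXChooseVisual(dpy, DefaultScreen(dpy), att);
    Window root = DefaultRootWindow(dpy);
    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
    swa.event_mask = ExposureMask | StructureNotifyMask;
    Window win = XCreateWindow(dpy, root, 0, 0, 640, 480, 0, vi->depth,
                               InputOutput, vi->visual, CWColormap | CWEventMask, &swa);
    XMapWindow(dpy, win);
    // Wait until the window is actually mapped before drawing into it.
    XEvent ev;
    do { XNextEvent(dpy, &ev); } while (ev.type != MapNotify);
    GLXContext ctx = glXCreateContext(dpy, vi, NULL, GL_TRUE);
    glXMakeCurrent(dpy, win, ctx);              // same drawable as the swap below
    glClearColor(0.2f, 0.4f, 0.6f, 1.0f);
    for (int i = 0; i < 300; ++i)               // draw a number of frames of solid colour
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        glXSwapBuffers(dpy, win);
    }
    glXMakeCurrent(dpy, None, NULL);
    glXDestroyContext(dpy, ctx);
    XDestroyWindow(dpy, win);
    XCloseDisplay(dpy);
    return 0;
}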

FreeType error: FT_Load_Char returns 36

I've been trying to get FreeType working in my game, but it doesn't seem to work whenever I try to load the characters from a font. I've looked at a bunch of tutorials on the web, and as far as I can see there doesn't seem to be anything wrong with the code.
I've tried different fonts, but they produce the same error.
This is the code that fails:
int errorCode = FT_New_Face(freetype, filename.c_str(), 0, &face);
if (errorCode != 0)
{
    std::cerr << "Failed to load font: " << filename << ". Error code: " << errorCode << std::endl;
}

for (char i = 0; i < 256; i++)
{
    errorCode = FT_Load_Char(face, i, FT_LOAD_RENDER); // This returns 36 when i is 0
    if (errorCode != 0)
    {
        std::cerr << filename << ": Failed to load character '" << i << "'. Error code: " << errorCode << std::endl;
    }
}
This prints out:
FreeSans.ttf: Failed to load character ' '. Error code: 36
I looked up error code 36 in the FreeType headers and it appears to be "Invalid_Size_Handle". This confounds me, since no size handles were passed to the function. The only size handle I can think of is the face->size property, but there shouldn't be anything wrong with it, since the face struct was initialized by FT_New_Face.
And yes, I do initialize FreeType earlier on in the code.
So, does anybody know what I am doing wrong and what I can do to fix this?
I got that same error code once. I guess "Invalid_Size_Handle" means that the font size setting (rather than some memory size) is invalid.
I fixed the problem by adding a call to FT_Set_Char_Size right after FT_New_Face. Hope that can be of help.
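For reference, that call might look something like the sketch below (the 16 pt size and 96 DPI are just example values, not from the answer):
// FT_Set_Char_Size takes the character size in 1/64th-of-a-point units plus the device DPI.
FT_Error err = FT_Set_Char_Size(face,
                                0,        // char_width: 0 means "same as char_height"
                                16 * 64,  // char_height: 16 pt, expressed in 1/64 pt
                                96,       // horizontal device resolution (DPI)
                                96);      // vertical device resolution (DPI)
if (err)
    std::cerr << "FT_Set_Char_Size failed. Error code: " << err << std::endl;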
Try setting the font size after loading the face:
int errorCode = FT_New_Face(freetype, filename.c_str(), 0, &face);
if (errorCode)
    std::cerr << "Failed to load font: " << filename << ". Error code: " << errorCode << std::endl;

int pensize = 64;
FT_Set_Pixel_Sizes(face, pensize, 0);
// you can now load the glyphs

Hope it helps!

Video display window in OpenCV not sizing up to video

I have written the code below to display a video in OpenCV. It compiles fine, but when I run it, the window that is supposed to show the video opens but is too small to actually see whether the video is playing. Everything else seems to be working fine: the width, height, and number of frames are printed on the command line as coded. Does anyone know what the problem is? Check it out.
void info()
{
    cout << "This program will accept input video with fixed lengths and produce video textures" << endl;
}

int main(int argc, char *argv[])
{
    info();
    if (argc != 2)
    {
        cout << "Please enter more parameters" << endl;
        return -1;
    }
    const string source = argv[1];
    VideoCapture input_vid(source);
    if (!input_vid.isOpened())
    {
        cout << "Error: Could not find input video file " << source << endl;
        return -1;
    }
    //Acquire the size of the input video
    Size S = Size((int) input_vid.get(CV_CAP_PROP_FRAME_WIDTH),
                  (int) input_vid.get(CV_CAP_PROP_FRAME_HEIGHT));
    cout << "Width: = " << S.width << " Height: = " << S.height << " Number of frames: " << input_vid.get(CV_CAP_PROP_FRAME_COUNT) << endl;
    const char* PLAY = "Video player";
    namedWindow(PLAY, CV_WINDOW_AUTOSIZE);
    //imshow(PLAY, 100);
    char c;
    c = (char)cvWaitKey(27);
    //if (c == 27) break;
    return 0;
}
Assuming the video is from a webcam:
CvCapture* capture = cvCaptureFromCAM(0);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_WIDTH, 640);
cvSetCaptureProperty(capture, CV_CAP_PROP_FRAME_HEIGHT, 480);
This will fix your problem.
Another simple tweak could be using CV_WINDOW_NORMAL instead of CV_WINDOW_AUTOSIZE:
namedWindow(PLAY, CV_WINDOW_NORMAL);
which lets you resize the window manually.
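For completeness, a minimal playback loop using the C++ API the question already uses might look like the sketch below (not from either answer); with CV_WINDOW_AUTOSIZE the window only takes on the video's size once a frame is actually passed to imshow(), which the posted code never does:
Mat frame;
namedWindow(PLAY, CV_WINDOW_AUTOSIZE);      //window sizes itself to the first frame shown
while (input_vid.read(frame))               //read() returns false at the end of the video
{
    imshow(PLAY, frame);                    //display the current frame
    if ((char)waitKey(30) == 27)            //~30 ms per frame; press ESC to stop early
        break;
}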