GLFW takes 30 seconds to initialize - c++

I've been trying to get into C++ and OpenGL development and such, but I've run into an issue I can't seem to figure out with the following code:
#include <iostream>
#include <GLFW/glfw3.h>
// ... engine headers providing kreezyEngine::graphics::Window ...

int main() {
    std::cout << "Attempting to load" << std::endl;
    if (!glfwInit()) {
        std::cout << "Error loading GLFW" << std::endl;
        return 0;
    }
    else {
        std::cout << "Loaded GLFW" << std::endl;
    }

    using namespace kreezyEngine;
    using namespace graphics;

    Window window("Kreezy Engine", 800, 600);
    while (!window.isClosed()) {
        window.update();
    }
    return 0;
}
Now, the code works fine; it's just that I noticed it took 30 seconds to even create the window. After some debugging, I found that "Attempting to load" is printed immediately, but it takes around 30 seconds for "Loaded GLFW" to show up in the console. That feels really slow for initializing GLFW, since the tutorial I'm watching takes no more than a second.
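For reference, here's how I measured it, a minimal timing sketch using std::chrono (nothing engine-specific, just glfwInit() in isolation):
#include <chrono>
#include <iostream>
#include <GLFW/glfw3.h>

int main() {
    auto start = std::chrono::steady_clock::now();
    int ok = glfwInit();  // the call being timed
    auto end = std::chrono::steady_clock::now();
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(end - start).count();
    std::cout << "glfwInit() " << (ok ? "succeeded" : "failed")
              << " in " << ms << " ms" << std::endl;
    return 0;
}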
Any help?
Thanks :)


Changing the desktop wallpaper with SystemParametersInfo not working, GetLastError returns 0 - c++

I'm trying to use the following code to change the wallpaper on a Windows 7 machine. I'm compiling with the Multi-Byte Character Set option.
if (SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, L"c:\\temp\\extracted.png", SPIF_SENDCHANGE) != 0)
{
    std::cout << "Success !" << std::endl;
}
else
{
    std::cout << "Failure :(" << std::endl;
    std::cout << "Error: " << GetLastError() << std::endl;
    system("title :(");
}
I have no idea why this isn't working, since it doesn't return an error code (GetLastError gives 0). Needless to say, the wallpaper remains unchanged.
EDIT: I tried changing the code to the following, and pointing it at a .bmp file instead.
int error(0);
if (SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, L"c:\\temp\\extracted.bmp", SPIF_SENDCHANGE) != 0)
{
    std::cout << "Success !" << std::endl;
}
else
{
    error = GetLastError();
    std::cout << "Failure :(" << std::endl;
    std::cout << "Error: " << error << std::endl;
    system("title :(");
}
system("pause");
The console output is Failure :( followed by Error: 0.
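EDIT 2: For completeness, the documented pattern for SPI_SETDESKWALLPAPER combines SPIF_UPDATEINIFILE with SPIF_SENDWININICHANGE so the change is written back to the user profile. A sketch of that variant (untested on my machine):
if (SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0,
                          (PVOID)L"c:\\temp\\extracted.bmp",
                          SPIF_UPDATEINIFILE | SPIF_SENDWININICHANGE) != 0)
{
    std::cout << "Success !" << std::endl;
}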
From the advice in the comments I abandoned SystemParametersInfo and implemented this quick function I found. It worked instantly.
#include <windows.h>
#include <wininet.h>  // needed before shlobj.h for the IActiveDesktop declarations
#include <shlobj.h>   // IActiveDesktop, CLSID_ActiveDesktop

void SetWallpaper(LPCWSTR file)
{
    CoInitializeEx(0, COINIT_APARTMENTTHREADED);

    IActiveDesktop* desktop;
    HRESULT status = CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER, IID_IActiveDesktop, (void**)&desktop);

    WALLPAPEROPT wOption;
    ZeroMemory(&wOption, sizeof(WALLPAPEROPT));
    wOption.dwSize = sizeof(WALLPAPEROPT);
    wOption.dwStyle = WPSTYLE_CENTER;

    // Note: the HRESULTs are collected but never checked here.
    status = desktop->SetWallpaper(file, 0);
    status = desktop->SetWallpaperOptions(&wOption, 0);
    status = desktop->ApplyChanges(AD_APPLY_ALL);

    desktop->Release();
    CoUninitialize();
}
Usage
SetWallpaper(L"c:\\temp\\extracted.png");
This is so much easier than fighting with the old one. I'm still wondering why it wasn't giving an error. Hope this helps someone else.
Thanks for the advice, everyone.

Get the Windows screen saver timeout using Win32 API

I want to create a simple C++ application on Windows which checks the display turn-off time.
After some searching I found this function in windows.h:
int time;
bool check;
check = SystemParametersInfo(SPI_GETSCREENSAVETIMEOUT, 0, &time, 0);
if (check) {
    cout << "The Screen Saver time is : " << time << endl;
}
else {
    cout << "Sorry dude the windows api can't do it" << endl;
}
But when I use this code, the time is always zero, even though in my Windows settings I have set the display to turn off after 5 minutes.
I tried a fix myself: I changed the type of time to long long, and I got a garbage value, a very big number. So what did I do wrong in getting the screen turn-off time?
OS: Windows 10
Compiler: MinGW32; I also tested with MSVC 2015
Screen saver timeout and display power-off timeout are two different things.
SPI_GETSCREENSAVETIMEOUT returns the screen saver timeout - the time after which the Screen Saver is activated. If a screen saver was never configured, the value is 0.
The display power-off timeout is the time after which the power to the screen is cut, and is part of the power profile (and can differ e.g. for battery vs. AC power).
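As a sanity check on the screen saver side, SPI_GETSCREENSAVEACTIVE reports whether a screen saver is enabled at all (a minimal sketch):
BOOL active = FALSE;
SystemParametersInfo(SPI_GETSCREENSAVEACTIVE, 0, &active, 0);
std::cout << "Screen saver enabled: " << (active ? "yes" : "no") << std::endl;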
Use CallNtPowerInformation to get the display power-off timeout:
#include <iostream>
#include <windows.h>
#include <powerbase.h>

#pragma comment(lib, "PowrProf.lib")

int main() {
    SYSTEM_POWER_POLICY powerPolicy;
    DWORD ret;

    ret = CallNtPowerInformation(SystemPowerPolicyCurrent, nullptr, 0, &powerPolicy, sizeof(powerPolicy));
    if (ret == ERROR_SUCCESS) {
        std::cout << "Display power-off timeout : " << powerPolicy.VideoTimeout << " s\n";
    }
    else {
        std::cerr << "Error 0x" << std::hex << ret << std::endl;
    }
}
Example output:
Display power-off timeout : 600 s
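Since the power profile differs between AC and battery, you can also query each policy explicitly via SystemPowerPolicyAc and SystemPowerPolicyDc; a sketch with the same includes and library as above:
int main() {
    SYSTEM_POWER_POLICY acPolicy, dcPolicy;
    if (CallNtPowerInformation(SystemPowerPolicyAc, nullptr, 0, &acPolicy, sizeof(acPolicy)) == ERROR_SUCCESS &&
        CallNtPowerInformation(SystemPowerPolicyDc, nullptr, 0, &dcPolicy, sizeof(dcPolicy)) == ERROR_SUCCESS) {
        std::cout << "Plugged in : " << acPolicy.VideoTimeout << " s\n";
        std::cout << "On battery : " << dcPolicy.VideoTimeout << " s\n";
    }
}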

VideoStream::setVideoMode() function doesn't work

I want to change the VideoStream settings in my program, but it doesn't work:
#include <OpenNI.h>

using namespace openni;

int main()
{
    OpenNI::initialize();
    Device device;
    device.open(ANY_DEVICE);

    VideoStream depthStream;
    depthStream.create(device, SENSOR_DEPTH);
    depthStream.start();

    VideoMode depthMode;
    depthMode.setFps(20);
    depthMode.setResolution(640, 480);
    depthMode.setPixelFormat(PIXEL_FORMAT_DEPTH_100_UM);
    depthStream.setVideoMode(depthMode);
    ...
}
Even if I move the depthStream.start() line after the setVideoMode() call, it still doesn't work.
I changed the FPS to 24, 20, 5, and 1, but it doesn't change anything.
P.S.: This is my simplified code, without error handling.
Edit:
Answer:
With the help of "api55" I found that my device (the Xbox Kinect) supports only one video mode, so I can't change it. Its only supported mode is:
FPS: 30
Width: 640
Height: 480
I changed the VideoMode successfully in code I wrote before. After creating the VideoStream you should do something like:
rc = depth.create(device, openni::SENSOR_DEPTH);
if (rc != openni::STATUS_OK)
    error_manager(3);

// set the new resolution and fps
openni::VideoMode depth_videoMode = depth.getVideoMode();
depth_videoMode.setResolution(frame_width, frame_height);
depth_videoMode.setFps(30);
depth.setVideoMode(depth_videoMode);

rc = depth.start();
if (rc != openni::STATUS_OK)
    error_manager(4);
First I get the VideoMode that is already inside the stream, to preserve the other values and change only what I want. I think your code should work, but not all settings work on all cameras. To check the possible settings you can use openni::VideoStream::getSensorInfo. The code to check this should be something like:
#include <iostream>
#include <OpenNI.h>

using namespace openni;

int main()
{
    OpenNI::initialize();
    Device device;
    device.open(ANY_DEVICE);

    VideoStream depthStream;
    depthStream.create(device, SENSOR_DEPTH);
    depthStream.start();

    const SensorInfo& info = depthStream.getSensorInfo();
    const Array<VideoMode>& videoModes = info.getSupportedVideoModes();
    for (int i = 0; i < videoModes.getSize(); i++) {
        std::cout << "VideoMode " << i << std::endl;
        std::cout << "FPS:" << videoModes[i].getFps() << std::endl;
        std::cout << "Width:" << videoModes[i].getResolutionX() << std::endl;
        std::cout << "Height:" << videoModes[i].getResolutionY() << std::endl;
    }
    ...
}
I haven't tested this last piece of code, so it may have errors, but you get the idea. The supported settings change with each camera; I think the supported FPS values on my camera were 15 and 30.
I hope this helps.

No output from cout when using SDL

Sorry for my newbie-ness.
Does anyone have problems getting std::cout to produce output when using SDL?
I can't seem to get anything shown in the output, even when I comment out the SDL code.
#include <iostream>
//#include <SDL.h>

int main(int argc, char **argv) {
    //if (SDL_Init(SDL_INIT_EVERYTHING) != 0) {
    //    std::cout << "SDL_Init Error: " << SDL_GetError() << std::endl;
    //    return 1;
    //}
    //SDL_Quit();
    std::cout << "Testing" << std::endl;
    return 0;
}
Edited: The window was closed too fast to see anything, so I added SDL_Delay(2000); after my std::cout, and I saw my output :)
Posting @Jonathan's comment as the answer:
One thought might be that the window disappears too quickly to see anything. Try putting sleep(5); after your cout statement.
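In SDL terms that sleep is SDL_Delay(), which takes milliseconds. A minimal sketch of the fixed program (assuming SDL2's SDL.h):
#include <iostream>
#include <SDL.h>

int main(int argc, char **argv) {
    if (SDL_Init(SDL_INIT_EVERYTHING) != 0) {
        std::cout << "SDL_Init Error: " << SDL_GetError() << std::endl;
        return 1;
    }
    std::cout << "Testing" << std::endl;
    SDL_Delay(2000);  // keep the window around for 2 seconds so the output is visible
    SDL_Quit();
    return 0;
}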

Issue with GLX/X11 on Ubuntu not showing correct Window Contents

I'm in the process of porting my engine across to Linux.
I can successfully create a window and set up an OpenGL context; however, the contents of the window are whatever was displayed behind it at the time of creation. NOTE: this is not a transparent window; if I drag the window around, it still contains an 'image' of whatever was behind it at creation time. (See attached image.)
Now, I'm not sure where the issue could be. I'm not looking for a solution to a specific bug in my code, just any insight from other Linux/GLX developers who may have seen a similar issue and might know where I should start looking.
I stripped all the code in my update function right down to just:
glClear(GL_DEPTH_BUFFER_BIT | GL_COLOR_BUFFER_BIT);
glXSwapBuffers(dpy, win);
With no joy. My first thought was that the buffer contained garbage, but with just those calls I'd expect to see the glClearColor() colour.
glGetError() returns no errors anywhere in my application.
Immediately after glXCreateContext() I call glXMakeCurrent(), and calling glGetIntegerv() with GL_MAJOR_VERSION and GL_MINOR_VERSION returns 4 and 2 (4.2) respectively, which indicates the GL context has been created successfully.
I also tried adding a glXMakeCurrent() call immediately before the glClear()/glXSwapBuffers() pair, but to no effect.
Further info: this is a multithreaded application; however, all X11/GLX/OpenGL calls are made by a single thread. I have also tried calling XInitThreads() from the main application thread, and from the rendering thread, with no luck either.
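(For what it's worth, Xlib documents that XInitThreads() must be the first Xlib function a multithreaded program calls; the minimal placement looks like this, a sketch rather than my actual engine code:)
#include <iostream>
#include <X11/Xlib.h>

int main(int argc, char** argv)
{
    // Must run before ANY other Xlib call, including XOpenDisplay().
    if (!XInitThreads())
    {
        std::cout << "XInitThreads() failed" << std::endl;
        return 1;
    }
    // ... start the engine / rendering thread here ...
}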
Code for Creating Window
bool RenderWindow::createWindow(std::string title, unsigned int width, unsigned int height)
{
    std::cout << "createWindow() called" << std::endl;

    this->m_Width = width;
    this->m_Height = height;

    this->m_Display = XOpenDisplay(NULL);
    if (this->m_Display == NULL)
    {
        std::cout << "Unable to connect to X Server" << std::endl;
        return false;
    }

    this->m_Root = DefaultRootWindow(this->m_Display);
    this->m_Active = true;

    XSetErrorHandler(RenderWindow::errorHandler);
    return true;
}
Code for initialising OpenGL Context
bool RenderingSubsystem::initialiseContext()
{
    if (!this->m_Window)
    {
        std::cout << "Unable to initialise context because there is no Window" << std::endl;
        return false;
    }

    this->m_Window->createWindow(this->m_Window->GetTitle(), this->m_Window->GetWidth(), this->m_Window->GetHeight());

    int att[] = { GLX_RGBA, GLX_DEPTH_SIZE, 24, GLX_DOUBLEBUFFER, None };
    this->m_VI = glXChooseVisual(this->m_Window->GetDisplay(), 0, att);
    if (this->m_VI == NULL)
    {
        std::cout << "Unable to initialise context because no suitable VisualInfo could be found" << std::endl;
        return false;
    }

    this->m_CMap = XCreateColormap(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), this->m_VI->visual, AllocNone);
    this->m_SWA.colormap = this->m_CMap;
    this->m_SWA.event_mask = ExposureMask;

    std::cout << "Width: " << this->m_Window->GetWidth() << " Height: " << this->m_Window->GetHeight() << std::endl;

    this->m_Wnd = XCreateWindow(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), 0, 0, this->m_Window->GetWidth(), this->m_Window->GetHeight(), 0, this->m_VI->depth, InputOutput, this->m_VI->visual, CWColormap | CWEventMask, &this->m_SWA);
    XMapWindow(this->m_Window->GetDisplay(), this->m_Wnd);
    XStoreName(this->m_Window->GetDisplay(), this->m_Wnd, this->m_Window->GetTitle().c_str());

    this->m_DC = glXCreateContext(this->m_Window->GetDisplay(), this->m_VI, NULL, GL_TRUE);
    if (this->m_DC == 0)
    {
        std::cout << "Unable to create GL Context" << std::endl;
        return false;
    }

    glXMakeCurrent(this->m_Window->GetDisplay(), this->m_Window->GetHandle(), this->m_DC);

    int major, minor;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::cout << "InitialiseContext complete (" << major << "." << minor << ")" << std::endl;
    return true;
}