Why does invoking a member function of GDI+ kill the app? - c++

The program gets killed as soon as I invoke GetFamilyCount(), so "Exit" is never printed.
Invoking GetLastStatus() prints the value 18, which equals GdiplusNotInitialized.
I'm not sure why it's not initialized. You can clearly see that I initialized it here:
Gdiplus::PrivateFontCollection privateFontCollection;
I'm using MinGW to build the program.
MinGW-w64 9.0.0
GCC Version 11.1
OS is Windows 10
MinGW taken from http://winlibs.com/
#include <iostream>
#include <Windows.h>
#include <Gdiplus.h>

int main() {
    std::cout << "Start" << "\n";
    Gdiplus::PrivateFontCollection privateFontCollection;
    std::cout << privateFontCollection.GetLastStatus() << "\n";
    std::cout << privateFontCollection.GetFamilyCount() << "\n";
    std::cout << "Exit" << "\n";
}
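For context, GDI+ has to be started explicitly with GdiplusStartup before any GDI+ object is used; constructing a PrivateFontCollection by itself does not do that. A minimal sketch of that startup sequence (for illustration only, not taken from the question; with MinGW it would also need linking against gdiplus, e.g. -lgdiplus):

#include <iostream>
#include <Windows.h>
#include <Gdiplus.h>

int main() {
    // Start GDI+ before creating any GDI+ objects.
    Gdiplus::GdiplusStartupInput startupInput;
    ULONG_PTR gdiplusToken = 0;
    if (Gdiplus::GdiplusStartup(&gdiplusToken, &startupInput, nullptr) != Gdiplus::Ok) {
        std::cout << "GdiplusStartup failed" << "\n";
        return 1;
    }

    {
        Gdiplus::PrivateFontCollection privateFontCollection;
        std::cout << privateFontCollection.GetLastStatus() << "\n";  // should now be 0 (Ok)
        std::cout << privateFontCollection.GetFamilyCount() << "\n";
    } // destroy GDI+ objects before shutting GDI+ down

    Gdiplus::GdiplusShutdown(gdiplusToken);
    std::cout << "Exit" << "\n";
}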

Related

ucrtbased.dll: Debug assertion failed

I have a problem when I use csignal.
I use Visual Studio 2019.
#include <Windows.h>
#include <csignal>
#include <iostream>

void signalHandler(int signum)
{
    std::cout << "Interrupt signal (" << signum << ") received.\n";
    exit(signum);
}

int main()
{
    std::signal(SIGINT, signalHandler);
    while (1)
    {
        std::cout << "Going to sleep...." << std::endl;
        Sleep(1);
        raise(0);
    }
    std::cout << "Hello World!\n";
    return 0;
}
When raise is called I get the debug assertion failure from ucrtbased.dll mentioned in the title.
I have ucrtbased.dll in:
C:\Windows\System32
I installed the Windows SDK. I don't understand what is wrong.
You're raising signal 0 (raise(0);), which is probably an invalid signal value.
You should pass one of the standard signal macros (SIGINT, SIGTERM, etc.; their values may be compiler-specific) as the parameter (see the spec).
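For illustration, a minimal sketch that raises a standard signal instead, so the installed handler actually runs (not the original poster's code):

#include <csignal>
#include <cstdlib>
#include <iostream>

void signalHandler(int signum)
{
    std::cout << "Interrupt signal (" << signum << ") received.\n";
    std::exit(signum);
}

int main()
{
    std::signal(SIGINT, signalHandler);
    std::cout << "Raising SIGINT...\n";
    std::raise(SIGINT);   // a valid, standard signal value; the handler above runs
    std::cout << "Never reached: the handler calls exit()\n";
    return 0;
}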

missing output in the eclipse console when debugging

I recently started using Eclipse CDT (version 2019-03) with the Cygwin toolchain and have noticed some bizarre behaviour when using the debugger.
Under the debugger, the following program behaves as you would expect:
#include <iostream>

int main()
{
    std::cout << "hello world\n" << std::flush;
}
However, the following produces no output:
#include <iostream>

int main()
{
    std::cout << "* world\n" << std::flush;
}
And for the following, the output is just "world":
#include <iostream>

int main()
{
    std::cout << "# world\n" << std::flush;
}
This behaviour is completely consistent and reproducible. Does anyone have any explanation or workarounds?

string conversion with boost locale: different behaviour on windows and linux

This is my sample code:
#pragma execution_character_set("utf-8")

#include <boost/locale.hpp>
#include <boost/algorithm/string/case_conv.hpp>
#include <iostream>

int main()
{
    std::locale loc = boost::locale::generator().generate("");
    std::locale::global(loc);
#ifdef MSVC
    std::cout << boost::locale::conv::from_utf("grüßen vs ", "ISO8859-15");
    std::cout << boost::locale::conv::from_utf(boost::locale::to_upper("grüßen"), "ISO8859-15") << std::endl;
    std::cout << boost::locale::conv::from_utf(boost::locale::fold_case("grüßen"), "ISO8859-15") << std::endl;
    std::cout << boost::locale::conv::from_utf(boost::locale::normalize("grüßen", boost::locale::norm_nfd), "ISO8859-15") << std::endl;
#else
    std::cout << "grüßen vs ";
    std::cout << boost::locale::to_upper("grüßen") << std::endl;
    std::cout << boost::locale::fold_case("grüßen") << std::endl;
    std::cout << boost::locale::normalize("grüßen", boost::locale::norm_nfd) << std::endl;
#endif
    return 0;
}
Output on Windows 7 is:
grüßen vs GRÜßEN
grüßen
grußen
Output on Linux (openSuSE 12.3) is:
grüßen vs GRÜSSEN
grüssen
grüßen
On Linux the German letter 'ß' is converted to 'SS' as expected, while the character remains unchanged on Windows.
Question: why is this so? How can I correct the conversion?
Some notes: the Windows console codepage is set to 1252. In both cases the locale is set to de_DE. I tried replacing the default locale setting in the listing above with "de_DE.UTF-8", without any effect.
On Windows this code is compiled with Visual Studio 2013, on Linux with GCC 4.7 with C++11 enabled.
Any suggestions are appreciated - thanks in advance for your support!
Windows doesn't do this conversion because "it would be too confusing" for developers if the string length changed all of a sudden, and boost presumably just delegates all the Unicode conversions to the underlying Windows APIs.
Source
I guess the robust way to handle it would be to use a third-party Unicode library such as ICU.
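As an illustration (not part of the original answer), ICU performs the locale-aware upper-casing directly and maps 'ß' to "SS"; a minimal sketch using ICU's icu::UnicodeString API, assuming the program links against ICU (e.g. -licuuc):

#include <unicode/unistr.h>
#include <unicode/locid.h>
#include <iostream>
#include <string>

int main()
{
    // Locale-aware upper-casing with ICU: 'ß' becomes "SS" under a German locale.
    icu::UnicodeString text = icu::UnicodeString::fromUTF8("grüßen");
    text.toUpper(icu::Locale("de", "DE"));

    std::string utf8;
    text.toUTF8String(utf8);
    std::cout << utf8 << std::endl;   // expected: GRÜSSEN
    return 0;
}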

boost::this_thread::sleep() returns immediately

I'm working with a very simple boost sample on Windows, and I'm running into several strange issues.
Here's the program:
// BoostThreadTest.cpp : Defines the entry point for the console application.
//
#define BOOST_ALL_NO_LIB

#include "stdafx.h"
#include <iostream>
#include <boost/thread.hpp>
#include <boost/date_time.hpp>

void workerFunc()
{
    boost::posix_time::seconds workTime(3);
    std::cout << "Worker: running" << std::endl;
    boost::this_thread::sleep(workTime);
    std::cout << "Worker: finished" << std::endl;
}

int _tmain(int argc, _TCHAR* argv[])
{
    std::cout << "main: startup" << std::endl;
    boost::thread workerThread(workerFunc);
    std::cout << "main: waiting for thread" << std::endl;
    boost::posix_time::seconds workTime(10);
    boost::this_thread::sleep(workTime);
    //workerThread.join();
    std::cout << "main: done" << std::endl;
    return 0;
}
The main issue is that the boost::this_thread::sleep call in workerFunc is not actually sleeping; it returns immediately. I also get a generic exception in the debugger when I attempt to join the thread. The really strange thing is that a call to boost::this_thread::sleep in the main function works fine!
Does anyone have any clue what the issue might be?
I am on Windows 7, using boost 1_53_0. I built the boost thread library as a static library and am linking it in with my application built using Visual Studio 2008.
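As a comparison point only (not the original poster's setup, and it requires a C++11 compiler rather than Visual Studio 2008), the same worker/sleep/join structure written with standard facilities is a useful sanity check that sleeping in a worker thread behaves as expected:

#include <chrono>
#include <iostream>
#include <thread>

void workerFunc()
{
    std::cout << "Worker: running" << std::endl;
    std::this_thread::sleep_for(std::chrono::seconds(3));   // should block for roughly 3 seconds
    std::cout << "Worker: finished" << std::endl;
}

int main()
{
    std::cout << "main: startup" << std::endl;
    std::thread workerThread(workerFunc);
    std::cout << "main: waiting for thread" << std::endl;
    workerThread.join();   // wait for the worker instead of sleeping in main
    std::cout << "main: done" << std::endl;
    return 0;
}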

How do I use CUDA driver functions?

I have a GUI application with a producer thread and an OpenGL thread; the OpenGL thread needs to call CUDA functions and the producer needs to call cudaMemcpy etc.
No matter what I do, I can't seem to get the CUDA driver API to work. Every time I try to use these functions I get a cudaErrorMissingConfiguration.
I want to use multi-threaded CUDA, what is the paradigmatic way to accomplish this?
Original
void program::initCuda()
{
    CUresult a;
    pctx=0;
    cudaSafeCall(cudaSetDevice(0));
    cudaSafeCall(cudaGLSetGLDevice(0));
    a=cuInit(0);
    cudaSafeCall(cudaFree(0));
    cout << "cuInit :" << a << endl;
    assert(a == cudaSuccess);
    //a=cuCtxGetCurrent(pctx);
    a=cuCtxCreate(pctx,CU_CTX_SCHED_AUTO,0);
    cout << "GetContext :" << a << endl;
    assert(a == cudaSuccess);
    //Fails with cudaErrorMissingConfiguration
    a=cuCtxPopCurrent(pctx);
    cout << "cuCtxPopCurrent :" << a << endl;
    assert(a == cudaSuccess);
    cout << "Initialized CUDA" << endl;
}
Revised
void glStream::initCuda()
{
    CUresult a;
    pctx=0;
    cudaSafeCall(cudaSetDevice(0));
    cudaSafeCall(cudaGLSetGLDevice(0));
    // From http://stackoverflow.com/questions/10415204/how-to-create-a-cuda-context
    // it seems that `cudaSetDevice` should make a context.
    cudaFree(0);
    a=cuCtxGetCurrent(pctx);
    cout << "GetContext :" << a << endl;
    assert(a == cudaSuccess);
    a=cuCtxPopCurrent(pctx);
    cout << "cuCtxPopCurrent :" << a << endl;
    assert(a == cudaSuccess);
    cout << "Initialized CUDA" << endl;
}
The simplest version of your second code should look like this:
#include <iostream>
#include <assert.h>
#include <cuda.h>
#include <cuda_runtime.h>

int main(void)
{
    CUresult a;
    CUcontext pctx;

    cudaSetDevice(0); // runtime API creates context here

    a = cuCtxGetCurrent(&pctx);
    std::cout << "GetContext : " << a << std::endl;
    assert(a == CUDA_SUCCESS);

    a = cuCtxPopCurrent(&pctx);
    std::cout << "cuCtxPopCurrent : " << a << std::endl;
    assert(a == CUDA_SUCCESS);

    std::cout << "Initialized CUDA" << std::endl;
    return 0;
}
which yields the following on OS X 10.6 with CUDA 5.0:
$ g++ -I/usr/local/cuda/include -L/usr/local/cuda/lib driver.cc -lcuda -lcudart
$ ./a.out
GetContext :0
cuCtxPopCurrent :0
Initialized CUDA
ie. "just works". Here the context is lazily initiated by the cudaSetDevice call (note I incorrectly asserted that cudaSetDevice doesn't establish a context, but at least in CUDA 5 it appears to. This behaviour may have changed when the runtime API was revised in CUDA 4).
Alternatively, you can use the driver API to initiate the context:
#include <iostream>
#include <assert.h>
#include <cuda.h>
#include <cuda_runtime.h>

int main(void)
{
    CUresult a;
    CUcontext pctx;
    CUdevice device;

    cuInit(0);
    a = cuDeviceGet(&device, 0);
    std::cout << "DeviceGet : " << a << std::endl;

    a = cuCtxCreate(&pctx, CU_CTX_SCHED_AUTO, device); // explicit context here
    std::cout << "CtxCreate : " << a << std::endl;
    assert(a == CUDA_SUCCESS);

    a = cuCtxPopCurrent(&pctx);
    std::cout << "cuCtxPopCurrent : " << a << std::endl;
    assert(a == CUDA_SUCCESS);

    std::cout << "Initialized CUDA" << std::endl;
    return 0;
}
which also "just works":
$ g++ -I/usr/local/cuda/include -L/usr/local/cuda/lib driver.cc -lcuda -lcudart
$ ./a.out
DeviceGet : 0
CtxCreate : 0
cuCtxPopCurrent : 0
Initialized CUDA
What you shouldn't do is mix both as in your first example. All I can suggest is to try both of these and confirm they work for you, then adapt the call sequences to whatever it is you are actually trying to achieve.