Force point and not comma for floating point in Qt - C++

I have a very basic question: how can I enforce the use of a point as the decimal separator in floating-point numbers instead of a comma (I have a French version of my OS) in Qt?
Another question: is it possible to display numbers with a space as the thousands separator?

Try this:
QLocale loc = QLocale::system(); // current locale
loc.setNumberOptions(QLocale::c().numberOptions()); // borrow number options from the "C" locale
QLocale::setDefault(loc); // set as default
If you want all of the options as in the "C" locale, you can simply do
QLocale::setDefault(QLocale::c());
Regarding your second question: Qt does not support custom locales, but you can try borrowing the number options from, say, the Hungarian locale (it should produce 1234 and 12 345.67 - I haven't tried it myself):
QLocale loc = QLocale::system(); // current locale
QLocale hungary(QLocale::Hungarian);
loc.setNumberOptions(hungary.numberOptions()); // borrow number options from the Hungarian locale
QLocale::setDefault(loc); // set as default
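For illustration, here is a minimal, untested sketch of where the default locale then takes effect (the literal values are only examples, not from the original answer):

#include <QLocale>
#include <QDebug>

int main()
{
    // install the system locale with the "C" locale's number options (as above)
    QLocale loc = QLocale::system();
    loc.setNumberOptions(QLocale::c().numberOptions());
    QLocale::setDefault(loc);

    // a default-constructed QLocale now picks up whatever was set with setDefault()
    QLocale def;
    qDebug() << def.toString(1234.56);              // formatting with the default locale
    bool ok = false;
    qDebug() << def.toDouble("1234.56", &ok) << ok; // parsing with the default locale
    return 0;
}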

For Linux:
Since I live in Germany but want my system to use English messages, I have a mixed locale on my PC:
LANG=en_US.UTF-8
LANGUAGE=
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=de_DE.UTF-8
LC_TIME=de_DE.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=de_DE.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=de_DE.UTF-8
LC_NAME=de_DE.UTF-8
LC_ADDRESS=de_DE.UTF-8
LC_TELEPHONE=de_DE.UTF-8
LC_MEASUREMENT=de_DE.UTF-8
LC_IDENTIFICATION=de_DE.UTF-8
LC_ALL=
This caused some trouble with the representation of numbers.
Because I was new to QLocale and was on a time budget, I used a simple hack to "fix" the problem temporarily and it worked quite well for me:
#include <cstdlib>       // for setenv()
#include <QApplication>

int main(int argc, char *argv[]) {
    // make numbers predictable
    // a preliminary hack; will be fixed with native language support
    setenv("LC_NUMERIC", "C", 1);
    QApplication a(argc, argv);
    ...
This had the advantage that numbers got unified both in on-screen presentation and in reading from and writing to persistent storage. The latter was the biggest issue, as e.g. floats got written as '123,45' instead of '123.45'.
Not a clean native solution, but a viable trick to get ahead for the time being.
Just to be complete: the application was just for myself, so this trick of course doesn't claim to be a professional solution.

Related

Not eliding correctly on QListView in Windows (OS)

I work with both operating systems (Windows and Linux). On Linux (image 1) ElideRight is working well, but on Windows (image 2) it is not working well. (...) supposed to be "a" instead.
I use the code below for "eliding".
Also, you should know this is happening in a QListView.
ui->geometry_list->setTextElideMode(Qt::ElideRight);
I tried to reproduce the OP's issue with the following MCVE, testQListViewElide.cc:
// Qt header:
#include <QtWidgets>

// main application
int main(int argc, char **argv)
{
    qDebug() << "Qt Version:" << QT_VERSION_STR;
    QApplication app(argc, argv);
    // setup GUI
    QListWidget qLst;
    qLst.resize(200, 200);
    qLst.setTextElideMode(Qt::ElideRight);
    qLst.addItem(QString("A very long item text to make the elide feature visible"));
    qLst.show();
    // runtime loop
    return app.exec();
}
My platform is Visual Studio 2019 on Windows 10.
Output:
Qt Version: 5.15.1
There is in fact no ellipsis. However… please note the horizontal scrollbar.
So, I added one line to switch the scrollbar off:
qLst.setHorizontalScrollBarPolicy(Qt::ScrollBarAlwaysOff);
Output:
So, this is working in Windows in general. (I would have been surprised if not.)
The OP's claim that an à is shown instead of an ellipsis made me a bit suspicious. This could be a sign of encoding problems. However, as Qt uses Unicode in QString, such issues are unlikely. (I never experienced such issues while developing with Qt on Windows in daily business.)
Just out of curiosity, I compared the Windows-1252 encoding of à (224 = 0xE0) with the Unicode of the ellipsis (U+2026): in UTF-8 encoding e2 80 a6, in UTF-16 encoding 26 20 (LE) or 20 26 (BE).
This doesn't look like a misinterpreted encoding – at least not obviously. However, to sort this out, the OP would have to provide a bit more info, e.g. an MCVE which makes the issue reproducible.
(Thus, the OP could use my MCVE to check whether it reproduces the issue on their platform.)
I suspect this is an encoding problem, but one that happens while the item texts are stored in QStrings. It's just the list view that exposes the broken item text. Thereby, consider that strings retrieved on Linux are very likely already UTF-8 encoded. If a QString is assigned from a std::string, UTF-8 encoding is assumed as well (QString::fromStdString()).
This is different on Windows, where the internal encoding is UTF-16 but the ANSI flavors of system functions (with different meanings of character values depending on the current code page) are still available (which are always good for some encoding damage).
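To make the contrast concrete, here is a small hypothetical helper (the function name and flag are mine, purely for illustration) showing the two interpretations of a narrow string:

#include <QString>
#include <string>

// hypothetical helper: convert a narrow string to QString under two different assumptions
QString fromNarrow(const std::string &s, bool narrowIsLocal8Bit)
{
    if (narrowIsLocal8Bit)
        return QString::fromLocal8Bit(s.c_str(), int(s.size())); // interpret as the local ANSI code page on Windows
    return QString::fromStdString(s);                            // interpreted as UTF-8
}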

Strange error initializing a QApplication before using std::stod

I'm receiving an unexpected behavior in my current project.
I use the DICOM library dcmtk to read information from some dicom files, and Qt to show the images.
During the information extraction, I have to convert fields of the format "<64bit float>\<64bit float>" (DICOM tag PixelSpacing). I split the string into 2 substrings at "\" and convert each into a double. So far, everything works fine.
Well, almost: whenever I create a QApplication object before I convert the strings to doubles, it gives me integers instead of doubles.
The code looks like this:
// Faulty situation
Database db;
QApplication app(&argc, argv);
db.fill_from_source(source); // here I get ints instead of doubles

// Rearranged code, recompiled:
Database db;
db.fill_from_source(source); // now it gets me doubles
QApplication app(&argc, argv);

// The fill function looks like this (simplified)
void Database::fill_from_source(const Source& source){
    string s = source.get_pixel_spacing_string();
    vector<string> s2 = split(s, "\\");
    // get the doubles, which should not be integers!
    double a = stod(s2[0]);
    double b = stod(s2[1]);
}
It confuses me even more that it does work when stepping through the code using Qt Creator and GDB. However, when I run the executable, I get integers again.
So I tracked the issue down to the stod operation: I get the right strings out of the DICOM file, but after stod the digits after the dot are just truncated. The same behavior occurs with stdlib's strtod.
Does the QApplication allocation do something to the std::stod function? Since everything happens at runtime, I do not understand how.
Replacing stod with QString::toDouble resolves the issue...
I'm using gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) and GNU ld (GNU Binutils for Ubuntu) 2.24.
Other code dependencies include Eigen3, Boost.Python. The code is built using a CMake project with QtCreator as IDE.
Has anyone an idea where this problem comes from? Is this a Qt bug?
Being one of the developers of DCMTK, i.e. the DICOM toolkit you use, I wonder why you don't retrieve the floating-point values directly from the DICOM data element "Pixel Spacing", instead of retrieving the entire character string (including the backslash separator) first and then converting its components to individual floating-point numbers. That way, there would be no problem at all with the current locale settings.
By the way, because of the locale settings issue we've introduced our own locale-independent OFStandard::atof() helper function :-)
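Sketched from memory rather than taken from the original answer (so please check the exact DCMTK calls against your version of the toolkit), retrieving the two values directly could look roughly like this:

// rough sketch only - verify the DCMTK API against its documentation
#include "dcmtk/dcmdata/dctk.h"

bool readPixelSpacing(const char *filename, double &rowSpacing, double &colSpacing)
{
    DcmFileFormat fileformat;
    if (fileformat.loadFile(filename).bad())
        return false;
    DcmDataset *dataset = fileformat.getDataset();
    Float64 v0 = 0.0, v1 = 0.0;
    // Pixel Spacing is multi-valued: value 0 = row spacing, value 1 = column spacing
    if (dataset->findAndGetFloat64(DCM_PixelSpacing, v0, 0).bad() ||
        dataset->findAndGetFloat64(DCM_PixelSpacing, v1, 1).bad())
        return false;
    rowSpacing = v0;
    colSpacing = v1;
    return true;
}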
std::stod behaviour depends on the currently installed C locale.
According to cppreference:
During program startup, the equivalent of std::setlocale(LC_ALL, "C"); is executed before any user code is run.
As pointed out by @peppe in the comments, during QApplication's construction, setlocale(LC_ALL, ""); is called on Unix, thus altering the behaviour of std::stod.
You can store the locale and set it back as follows:
std::string backup(
// pass a null pointer to query the current C locale without modifying it
std::setlocale(LC_ALL, nullptr)
);
QApplication app(&argc, argv);
// restore the locale
std::setlocale(LC_ALL, backup.c_str());
EDIT:
After rereading the documentation for QCoreApplication, there is a paragraph about locale settings in the detailed description:
On Unix/Linux Qt is configured to use the system locale settings by default. This can cause a conflict when using POSIX functions, for instance, when converting between data types such as floats and strings, since the notation may differ between locales. To get around this problem, call the POSIX function setlocale(LC_NUMERIC,"C") right after initializing QApplication, QGuiApplication or QCoreApplication to reset the locale that is used for number formatting to "C"-locale.
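A minimal sketch of that documented approach (not the OP's actual code):

#include <clocale>
#include <QApplication>

int main(int argc, char *argv[])
{
    QApplication app(argc, argv);
    // reset only the numeric category right after constructing the application,
    // so std::stod/strtod parse '.' again while everything else stays localized
    std::setlocale(LC_NUMERIC, "C");
    // ... rest of the application
    return app.exec();
}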
However, @J.Riesmeier provided an interesting answer as one of the DCMTK developers.

Printing out Korean in console C++

I am having trouble printing out Korean.
I have tried various methods with no avail.
I have tried
1.
cout << "한글" << endl;
2.
wcout << "한글" << endl;
3.
wprintf(L"한글\n");
4.
setlocale(LC_ALL, "korean");
wprintf("한글");
and more, but none of those print "한글" correctly.
I am using the MinGW compiler, and my OS is Windows 7.
P.S. Strangely, Java prints out Korean fine:
String kor = "한글";
System.out.println(kor);
works.
Set the console code page to UTF-8 before printing the text:
::SetConsoleOutputCP(65001);
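A minimal sketch of that suggestion (it assumes the string literal really ends up as UTF-8 in the executable; see the -fexec-charset remark further below):

#include <windows.h>
#include <iostream>

int main()
{
    // switch the console to UTF-8 (code page 65001) before printing
    ::SetConsoleOutputCP(CP_UTF8);
    std::cout << "한글" << std::endl;
    return 0;
}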
Since you are using Windows 7, you can use WriteConsoleW, which is part of the Windows API. #include <windows.h> and try the following code:
DWORD numCharsToWrite = str.length();
LPDWORD numCharsWritten = NULL;
WriteConsoleW(GetStdHandle(STD_OUTPUT_HANDLE), str.c_str(), numCharsToWrite, numCharsWritten, NULL);
where str is a std::wstring.
More on WriteConsoleW: https://msdn.microsoft.com/en-us/library/windows/desktop/ms687401%28v=vs.85%29.aspx
After having tried other methods this worked for me.
The problem is that there are a lot of places where this could be broken.
Here is an answer I posted some time ago (it covers Korean). The answer is for MSVC, but the same applies to MinGW (the compiler switches are different, and the locale name may be different).
Here are 5 traps which make this hard:
1. Source code encoding. The source has to use an encoding which supports all required characters. Nowadays UTF-8 is recommended. It is best to make sure your editor (IDE) is properly configured to enforce the source encoding.
2. You have to inform the compiler what the encoding of the source file is. For gcc it is -finput-charset=utf-8 (the default).
3. Encoding used by the executable. You have to define what encoding string literals should be encoded with in the final executable. This encoding should also cover the required characters; here UTF-8 is also best. The gcc option is -fexec-charset=utf-8.
4. When you run the application you have to inform the standard library what encoding your string literals are defined in, or what encoding is used in the program logic. So somewhere in your code, at the beginning of execution, you need something like this (here UTF-8 is enforced):
std::locale::global(std::locale{".utf-8"});
5. Finally, you have to instruct the streams which encoding they should use. So for std::cout and std::cin you should set the locale which is the default for the system:
auto streamLocale = std::locale{""};
// this impacts date/time/floating-point formats, so you may want to tweak it
// to use only a specific encoding and the C locale for formatting
std::cout.imbue(streamLocale);
std::cin.imbue(streamLocale);
After this, everything should work as desired without code that explicitly does conversions.
Since there are 5 places to make a mistake, this is the reason people have trouble with it and the internet is full of "hack" solutions.
Note that if the system is not configured to support all needed characters (for example the wrong code page is set), then with this configuration characters which cannot be converted will be replaced with question marks.
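Pulling the runtime pieces together, a rough sketch (it assumes the sources are saved as UTF-8 and compiled with -finput-charset=utf-8 -fexec-charset=utf-8; the ".utf-8" locale name works with recent MSVC/UCRT runtimes and may be rejected by older MinGW runtimes):

#include <iostream>
#include <locale>

int main()
{
    // tell the runtime that program-internal text / string literals are UTF-8
    std::locale::global(std::locale{".utf-8"});
    // let the standard streams use the system's default encoding
    std::cout.imbue(std::locale{""});
    std::cin.imbue(std::locale{""});

    std::cout << "한글" << std::endl;
    return 0;
}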

Can I get a code page from a language preference?

Windows seems to keep track of at least four dimensions of "current locale":
http://www.siao2.com/2005/02/01/364707.aspx
DEFAULT USER LOCALE
DEFAULT SYSTEM LOCALE
DEFAULT USER INTERFACE LANGUAGE
DEFAULT INPUT LOCALE
My brain hurts just trying to keep track of what the hell four separate locales are useful for...
However, I don't grok the relationship between code page and locale (or LCID, or Language ID), all of which appear to be different (e.g. Japanese (Japan) is LANGID = 0x411 location code 1, but the code page for Japan is 932).
How can I configure our application to use the user's desired language as the default MBCS target when converting between Unicode and narrow strings?
That is to say, we used to be an MBCS application. Then we switched to Unicode. Things work well in English, but fail in Asian languages, apparently because Windows conversion functions WideCharToMultiByte and MultiByteToWideChar take an explicit code page (not a locale ID or language ID), which can be set to CP_ACP (default to ANSI code page), but don't appear to have a value for "default to user's default interface language's code page".
I mean, this is some seriously convoluted twaddle. Four separate dimensions of "current language", three different identifier types, as well as (different) string-identifiers for C library and C++ standard library.
In our previous MBCS builds, disk I/O and user I/O worked correctly: everything remained in the DEFAULT SYSTEM LOCALE (Windows XP term: "Language for non-Unicode Programs"). But now, in our UNICODE builds, everything tries to use "C" as the locale, and file I/O fails to properly transcode UNICODE to the user's locale, and vice versa.
We want to have text files written out (when narrow) using the current user's language's code page. And when read in, the current user's language's code page should be converted back to UNICODE.
Help!!!
Clarification: I would ideally like to use the MUI language code page rather than the OS default code page. GetACP() returns the system default code page, but I am unaware of a function that returns the user's chosen MUI language (which auto-reverts to system default if no MUI specified / installed).
I agree with the comments by Jon Trauntvein: the GetACP function does reflect the user's language settings in the Control Panel. Also, based on the link to the "Sorting it all out" blog that you provided, DEFAULT USER INTERFACE LANGUAGE is the language that the Windows user interface will use, which is not the same as the language to be used by programs.
However, if you really want to use DEFAULT USER INTERFACE LANGUAGE then you get it by calling GetUserDefaultUILanguage and then you can map the language id to a code page, using the following table.
Language Identifiers and Locales
You can also use the GetLocaleInfo function to do the mapping, but first you would have to convert the language id that you got from GetUserDefaultUILanguage into a locale id, and I think you will get the name of the code page instead of a numeric value, but you could try it and see.
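For what it's worth, here is a hedged sketch of that mapping using the LOCALE_RETURN_NUMBER flag, so the code page comes back as a number rather than as a string (please verify against the WinAPI documentation):

#include <windows.h>

// sketch: map the user's UI language to its default ANSI code page
UINT uiLanguageCodePage()
{
    LANGID langId = ::GetUserDefaultUILanguage();
    LCID lcid = MAKELCID(langId, SORT_DEFAULT);
    DWORD codePage = 0;
    // LOCALE_RETURN_NUMBER makes GetLocaleInfoW write a DWORD instead of a string
    if (::GetLocaleInfoW(lcid,
                         LOCALE_IDEFAULTANSICODEPAGE | LOCALE_RETURN_NUMBER,
                         reinterpret_cast<LPWSTR>(&codePage),
                         sizeof(codePage) / sizeof(wchar_t)) == 0)
    {
        codePage = ::GetACP(); // fall back to the system default on failure
    }
    return codePage;
}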
If all you want to be able to do is configure a locale object to use the currently selected locale settings, you should be able to do something like this:
std::locale loc = std::locale("");
You can also access the current code page in Windows using the Win32 ::GetACP() function. Here is an example that I implemented in a string class to append multi-byte characters to a Unicode string:
void StrUni::append_mb(char const *buff, size_t buff_len)
{
    UINT current_code_page = ::GetACP();
    int space_needed;
    if(buff_len == 0)
        return;
    space_needed = ::MultiByteToWideChar(
        current_code_page,
        MB_PRECOMPOSED | MB_ERR_INVALID_CHARS,
        buff,
        buff_len,
        0,
        0);
    if(space_needed > 0)
    {
        reserve(this->buff_len + space_needed + 1);
        MultiByteToWideChar(
            current_code_page,
            MB_PRECOMPOSED | MB_ERR_INVALID_CHARS,
            buff,
            buff_len,
            storage + this->buff_len,
            space_needed);
        this->buff_len += space_needed;
        terminate();
    }
}
Just use CW2A() or CA2W() which will take care of the conversion for you using the current system locale (or language used for non-Unicode applications).
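For example (a sketch assuming ATL/MFC's atlconv.h is available; the function and variable names are only illustrative):

#include <atlconv.h>

void convertExample(const wchar_t *wide, const char *narrow)
{
    CW2A narrowCopy(wide);  // wide -> narrow, using CP_ACP by default
    CA2W wideCopy(narrow);  // narrow -> wide, using CP_ACP by default
    // both objects convert implicitly to char* / wchar_t* while they stay in scope
}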
FWIW, this is what I ended up doing:
#define _CONVERSION_DONT_USE_THREAD_LOCALE // force CP_ACP *not* CP_THREAD_ACP for MFC CString auto-converters!!!
In application startup, construct the desired locale: m_locale(FStringA(".%u", GetACP()).GetString(), LC_CTYPE)
Force it to agree with GetACP():
// force C++ and C libraries based on setlocale() to use the system locale for narrow strings
m_locale = ::std::locale::global(m_locale); // we store the previous global so we can restore it before termination to avoid memory loss
This gives me relatively ideal use of MFC's built-in narrow<->wide conversions in CString, automatically using the user's default language when converting to or from MBCS strings for the current locale.
Note: m_locale is of type ::std::locale.

stupid bug with fscanf - comma instead of point

I have a stupid bug in one of my C++ source files for a project. In this part of the source I do I/O operations, and I have a bug where I print the value read by fscanf. See the code below.
First, I don't read the correct value, and when I print a float value, I get a decimal with a comma ',' instead of a point '.' between the integer part and the fractional part.
FILE* file3;
file3=fopen("test.dat","r");
float test1;
fscanf(file3," %f ",&test1);
printf("here %f\n",test1);
float test3 = 1.2345;
printf("here %f\n",test3);
fclose(file3);
where the test.dat file contains "1.1234", and at execution I get:
here 1,000000
here 1,234500
So, I did a simple test C program, compiled with g++:
#include <stdio.h>
#include <stdlib.h>

int main()
{
    FILE* file3;
    float test3;
    file3 = fopen("test.dat", "r");
    fscanf(file3, "%f", &test3);
    printf("here %f\n", test3);
    fclose(file3);
}
and it gives :
here 1.123400
This is the first time I've had a bug like this. Can anyone see what's wrong?
Is your C++ locale somehow set to use a European convention? They use commas where we use points, and points for thousands separators.
Have a look at the settings of the environment variables
LANG
LC_CTYPE
LC_ALL
Try setting en_GB or en_US. Having established that it is a locale problem, next decide what behaviour makes sense. Is displaying 1224,45 a bug at all? The user has the locale set for a reason.
You're using code that uses the locale set in the program's environment. In some locales, such as French-speaking locales, the comma is the decimal separator. So this code is doing what its locale is presumably telling it to do.
In your simple test code, you have not initialised locale support, so this does not happen.
Assuming a Unix-like environment, what is the value of the environment variable LANG, and the various LC_* environment variables?
env | grep -e ^LANG -e ^LC_
For some background reading, try some of the GNU Libc manual (Locales and Internationalisation)
http://www.gnu.org/software/libc/manual/html_node/Locales.html#Locales
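A tiny demonstration of that effect (assuming a comma-decimal locale such as fr_FR.UTF-8 is installed on the system; the locale name is only an example):

#include <stdio.h>
#include <locale.h>

int main()
{
    printf("here %f\n", 1.2345);        /* default "C" locale: here 1.234500 */
    setlocale(LC_ALL, "fr_FR.UTF-8");   /* adopt a comma-decimal locale */
    printf("here %f\n", 1.2345);        /* now: here 1,234500 */
    return 0;
}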
My guess is that the application is setting the locale to the user's preference, with std::locale::global( std::locale( "" ) ), which is what console applications should always do; they should also imbue std::cin, std::cout and std::cerr with this locale. Most linguistic communities use the comma for the decimal, rather than the point. (For daemon processes, it is often more appropriate to use the "C" locale, or the "Posix" locale under Unix, regardless of where the server is actually running. Since the "C" locale is the default, doing nothing is generally fine for daemons and servers.)
The global locale affects all C-style input and output, which is yet another reason for using C++'s iostreams instead. With std::ifstream, just imbue the stream with the locale in which the file was written before doing your first input. For machine-generated files, the "C" locale is the usual default, so your code would become something like:
std::ifstream file3( "test.dat" );
if ( ! file3.is_open() ) {
    // error handling...
}
file3.imbue( std::locale( "C" ) );
float test1;
file3 >> test1;
// ...
For output to the terminal, expect the locale conventions to be followed, and set the environment variables to specify the locale you want to see.