I have a 32-bit and a 64-bit version of my application and need to do something special in case the 32-bit version is running on 64-bit Windows. I'd like to avoid platform-specific calls and instead use Qt or Boost. For Qt I found Q_PROCESSOR_X86_32 besides Q_OS_WIN64, and it seems this is exactly what I need. But it doesn't work:
#include <QtGlobal>
#include <iostream>

#ifdef Q_PROCESSOR_X86_64
std::cout << "64 Bit App" << std::endl;
#endif
#ifdef Q_PROCESSOR_X86_32
std::cout << "32 Bit App" << std::endl;
#endif
This prints nothing when running the 32-bit app on my 64-bit Windows 7. Do I misunderstand the documentation of these global declarations?
Since there's some confusion: This is not about detecting the OS the app is currently running on, it's about detecting the "bitness" of the app itself.
Take a look at QSysInfo::currentCpuArchitecture(): it will return a string containing "64" when running on a 64-bit host. Similarly, QSysInfo::buildCpuArchitecture() will return such a string when the application was built for a 64-bit architecture:
bool isHost64Bit() {
    static bool h = QSysInfo::currentCpuArchitecture().contains(QLatin1String("64"));
    return h;
}

bool isBuild64Bit() {
    static bool b = QSysInfo::buildCpuArchitecture().contains(QLatin1String("64"));
    return b;
}
Then, the condition you want to detect is:
bool is32BuildOn64Host() { return !isBuild64Bit() && isHost64Bit(); }
This should be portable to all architectures that support running 32-bit code on a 64-bit host.
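For example, a minimal usage sketch (assuming the three helpers above are in scope; the messages are just placeholders):
#include <QtGlobal>
#include <QDebug>

int main() {
    // is32BuildOn64Host() is the helper defined above
    if (is32BuildOn64Host())
        qDebug() << "32-bit build on a 64-bit host: apply the special handling";
    else
        qDebug() << "no special handling needed";
}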
You can use Q_PROCESSOR_WORDSIZE. I'm surprised it's not documented, because it's defined in a public header (QtGlobal) and is quite handy.
It may be preferable for some use cases because it doesn't depend on the processor architecture (e.g. it's defined the same way for x86_64 as well as arm64 and many others).
Example:
#include <QtGlobal>
#include <QDebug>

int main() {
#if Q_PROCESSOR_WORDSIZE == 4
    qDebug() << "32-bit executable";
#elif Q_PROCESSOR_WORDSIZE == 8
    qDebug() << "64-bit executable";
#else
    qDebug() << "Processor with unexpected word size";
#endif
}
or even better:
int main() {
    qDebug() << QStringLiteral("%1-bit executable").arg(Q_PROCESSOR_WORDSIZE * 8);
}
Preprocessor directives are evaluated at compile time. What you want to do is compile for 32-bit and check at run time whether you're running on a 64-bit system (note that your process will still be 32-bit):
#ifdef Q_PROCESSOR_X86_32
    std::cout << "32 Bit App" << std::endl;
    // IsWow64Process and GetCurrentProcess come from <windows.h>
    BOOL bIsWow64 = FALSE;
    if (IsWow64Process(GetCurrentProcess(), &bIsWow64) && bIsWow64)
        std::cout << "Running on 64 Bit OS" << std::endl;
#endif
That example is Windows-specific. There is no portable way to do it; on Linux you may run getconf LONG_BIT or uname -m and check the output.
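For illustration only, a rough sketch of the Linux side using popen() instead of system(), so the output can actually be read (POSIX-specific; the helper name osIs64Bit is made up here):
#include <cstdio>
#include <cstring>

// Runs `getconf LONG_BIT` and checks whether the OS reports a 64-bit word size.
bool osIs64Bit() {
    FILE *p = popen("getconf LONG_BIT", "r");
    if (!p)
        return false;                      // can't tell; assume 32-bit
    char buf[16] = {0};
    if (fgets(buf, sizeof(buf), p) == nullptr)
        buf[0] = '\0';
    pclose(p);
    return std::strncmp(buf, "64", 2) == 0;
}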
You can use GetNativeSystemInfo (see below) to determine the OS's bitness. As others already mentioned, the app's bitness is determined at compile time.
SYSTEM_INFO sysinfo;               // requires <windows.h>
GetNativeSystemInfo(&sysinfo);
if (sysinfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_INTEL)
{
    // this is a 32-bit OS
}
if (sysinfo.wProcessorArchitecture == PROCESSOR_ARCHITECTURE_AMD64)
{
    // this is a 64-bit OS
}
The correct name is Q_PROCESSOR_X86_32 or Q_PROCESSOR_X86_64.
However, if your application ever runs on ARM in the future, this will not work. Consider checking sizeof(void *) or QT_POINTER_SIZE instead, for example.
Additionally, note that on Windows GUI applications usually can't output to stdout, so the check may work but not show anything. Use either a debugger, a message box, or output to some dummy file instead.
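For instance, a minimal sketch of such an architecture-neutral check (QT_POINTER_SIZE comes from QtGlobal; the strings are just placeholders):
#include <QtGlobal>
#include <QDebug>

int main() {
#if QT_POINTER_SIZE == 4
    qDebug() << "32-bit build";            // x86, 32-bit ARM, ...
#elif QT_POINTER_SIZE == 8
    qDebug() << "64-bit build";            // x86_64, ARM64, ...
#endif
    // equivalent run-time check without Qt:
    qDebug() << (sizeof(void *) == 8 ? "64-bit build" : "32-bit build");
}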
I'm using Visual Studio 2019: why does this command do nothing?
std::cout << unsigned char(133);
It literally gets skipped by my compiler (I verified it with step-by-step debugging).
I expected it to print à.
Everything output after it up to the next statement is ignored, but not what came before it. (std::cout << "12" << unsigned char(133) << "34"; prints "12")
I've also tried to change it to these:
std::cout << unsigned char(133) << std::flush;
std::cout << (unsigned char)(133);
std::cout << char(-123);
but the result is the same.
I remember that it worked before, and some of my programs that use this statement have mysteriously stopped working... In a blank new project, same result!
I thought my new custom keyboard layout could be the cause, but disabling it doesn't make a difference.
On other online compilers it works properly, so may it be a bug of Visual Studio 2019?
The "sane" answer is: don't rely on extended-ASCII characters. Unicode is widespread enough to make this the preferred approach:
#include <iostream>
int main() {
std::cout << u8"\u00e0\n";
}
This will explicitly print the character à you requested; in fact, that's also how your browser understands it, which you can easily verify by pasting it into e.g. a Unicode character search, which will identify it as LATIN SMALL LETTER A WITH GRAVE, code point U+00E0, which you can spot in the code above.
In your example, there's no difference between using a signed or unsigned char; the byte value 133 gets written to the terminal, but the way the terminal interprets it may differ from machine to machine, depending on how it's set up. In fact, in a UTF-8 console this is simply an invalid sequence (a lone 0x85 byte isn't a valid UTF-8 character); if your OS was switched to UTF-8, that might be why you're seeing no output.
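As an aside, if you do want to print the character on Windows, one option is to switch the console to UTF-8 and emit the UTF-8 bytes yourself; a minimal sketch:
#include <windows.h>
#include <iostream>

int main() {
    // Ask the Windows console to interpret output as UTF-8 (code page 65001).
    SetConsoleOutputCP(CP_UTF8);
    // UTF-8 encoding of U+00E0 (LATIN SMALL LETTER A WITH GRAVE) is 0xC3 0xA0.
    std::cout << "\xC3\xA0\n";
}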
You can try using static_cast:
std::cout << static_cast<unsigned char>(133) << std::endl;
Or
std::cout << static_cast<char>(133) << std::endl;
Since all of this works on my machine, it's hard to pinpoint the problem; common sense would point to some configuration issue.
This is a follow up question to:
std::isgraph asserts, how to fix?
After setting the locale to "en_US.UTF-8", std::isgraph no longer asserts.
However, the Unicode character 架 (U+67B6) is reported as false by the same function. What is going on?
It's a Unicode build on the Windows platform.
If you want to test characters that are too large to fit in an unsigned char, you can try using the wide-character versions, or a Unicode library as already suggested (which is really the better option for portable code, as it removes any system- or locale-based differences in behavior).
This program:
#include <clocale>
#include <cwctype>
#include <iostream>
int main() {
    wchar_t x = L'\u67B6';
    char *loc = std::setlocale(LC_CTYPE, "");
    std::wcout << "Using locale " << loc << ".\n";
    std::wcout << "Character " << x << " is graphical: " << std::boolalpha
               << static_cast<bool>(std::iswgraph(x)) << '\n';
    return 0;
}
when compiled and run on my Ubuntu test system, outputs:
Using locale en_US.utf8.
Character 架 is graphical: true
You said you're using Windows, but I don't have a Windows computer available for testing, so I can't confirm if this'll work there or not.
std::isgraph is not a Unicode-aware function.
It's an antiquity from C.
From the documentation:
The behavior is undefined if the value of ch is not representable as unsigned char and is not equal to EOF.
It only takes an int because... it's an antiquity from C, just like std::tolower.
You should be using something like ICU instead.
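For example, a minimal sketch with ICU's C API (assuming ICU is installed and you link against icuuc, e.g. -licuuc):
#include <unicode/uchar.h>
#include <iostream>

int main() {
    UChar32 c = 0x67B6;  // 架
    // u_isgraph() works on full Unicode code points, unlike std::isgraph().
    std::cout << std::boolalpha << static_cast<bool>(u_isgraph(c)) << '\n';
}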
I am printing the progress of many iterations of a computation, and the output is actually the slowest part of it, but only when I use the Visual C++ compiler; MinGW works fine on the same system.
Consider the following code:
#include <iostream>
#include <chrono>

using namespace std;

#define TO_SEC(Time) \
    chrono::duration_cast<chrono::duration<double> >(Time).count();

const int REPEATS = 100000;

int main() {
    auto start_time = chrono::steady_clock::now();
    for (int i = 1; i <= REPEATS; i++)
        cout << '\r' << i << "/" << REPEATS;
    double run_time = TO_SEC(chrono::steady_clock::now() - start_time);
    cout << endl << run_time << "s" << endl;
}
Now the output I get when compiled with MinGW ("g++ source.cpp -std=c++11") is:
100000/100000
0.428025s
Now the output I get when compiled with Visual C++ Compiler November 2013 ("cl.exe source.cpp") is:
100000/100000
133.991s
Which is quite preposterous. What comes to mind is that VC++ is conducting unnecessary flushes.
Would anybody know how to prevent this?
EDIT: The setup is:
gcc version 4.8.2 (GCC), target i686-pc-cygwin
Microsoft (R) C/C++ Optimizing Compiler Version 18.00.21005.1 for x86
Windows 7 Professional N 64-bit with CPU i7-3630QM, 2.4GHz with 8.00GB RAM
std::cout in MSVC is slow (https://web.archive.org/web/20170329163751/https://connect.microsoft.com/VisualStudio/feedback/details/642876/std-wcout-is-ten-times-slower-than-wprintf-performance-bug-in-c-library).
It is an unfortunate consequence of how our C and C++ Standard Library
implementations are designed. The problem is that when printing to the
console (instead of, say, being redirected to a file), neither our C
nor C++ I/O are buffered by default. This is sometimes concealed by
the fact that C I/O functions like printf() and puts() temporarily
enable buffering while doing their work.
Microsoft suggests this fix (to enable buffering on cout/stdout):
setvbuf(stdout, 0, _IOLBF, 4096)
You could also try:
cout.sync_with_stdio(false);
but it probably won't make a difference.
Avoid using std::endl; use "\n" instead, since std::endl is required to flush according to the standard.
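Putting that together, a minimal sketch of the benchmark loop with the suggested fix applied (4096 is just the buffer size Microsoft used in their suggestion):
#include <cstdio>
#include <iostream>

const int REPEATS = 100000;

int main() {
    // Enable buffering on stdout; since cout is synchronized with stdio by
    // default, this also buffers cout when writing to the console.
    setvbuf(stdout, 0, _IOLBF, 4096);
    for (int i = 1; i <= REPEATS; i++)
        std::cout << '\r' << i << "/" << REPEATS;
    std::cout << "\n";   // '\n' instead of std::endl to avoid a final flush
}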
I have the following code which works on Windows:
bool fileExists(const wstring& src)
{
#ifdef PLATFORM_WINDOWS
    return (_waccess(src.c_str(), 0) == 0);
#else
    // ???? how to make the C access() function accept a wstring on Unix/Linux/MacOS?
#endif
}
How do I make the code work on *nix platforms the same way as it does on Windows, considering that src is a Unicode string and might contain a file path with Unicode characters?
I have seen various StackOverflow answers which partly answer my question, but I have trouble putting it all together. My system relies on wide strings, especially on Windows, where file names might contain non-ASCII characters. I know that generally it's better to write to the file and check for errors, but my case is the opposite - I need to skip the file if it already exists. I just want to check if the file exists, no matter if I can read/write it or not.
On many filesystems other than FAT and NTFS, filenames aren't exactly well defined as strings. They're technically byte sequences. What those byte sequences mean is a matter of interpretation. A common interpretation is UTF-8-like. Not exact UTF-8, because Unicode specifies string equality regardless of encoding. Most systems use byte equality instead. (Again, FAT and NTFS are exceptions, using case-insensitive comparisons)
A good portable solution I use is the following:
ifstream my_file(myFilenameHere);
if (my_file.good())
{
    // file exists; do what you need to do when it exists
}
else
{
    // the file doesn't exist; do what you need to do to create it, etc.
}
For example, a small file-existence checker function could be (this one works on Windows, Linux, and Unix):
// requires <sys/stat.h>, <cstdio>, <fstream> and <string>
inline bool doesMyFileExist (const std::string& myFilename)
{
#if defined(__unix__) || defined(__posix__) || defined(__linux__)
    // all UNIXes, POSIX (including OS X I think (can't remember, been a while)) and
    // all the various flavours of Linus Torvalds' digital offspring :)
    struct stat buffer;
    return (stat(myFilename.c_str(), &buffer) == 0);
#elif defined(__APPLE__) || defined(_WIN32)
    // this includes iOS and OS X as well as Windows (x64 and x86)
    // note the underscore in the Windows define; without it this can cause problems
    if (FILE *file = fopen(myFilename.c_str(), "r"))
    {
        fclose(file);
        return true;
    }
    else
    {
        return false;
    }
#else // a catch-all fallback; this is the slowest method, but works on them all :)
    std::ifstream myFile(myFilename.c_str());
    if (myFile.good())
    {
        myFile.close();
        return true;
    }
    else
    {
        myFile.close();
        return false;
    }
#endif
}
The function above uses the fastest available method to check for the file on each OS variant, and has a fallback in case you are on an OS other than the ones explicitly listed (the original Amiga OS, for example). This has been used with GCC 4.8.x and VS 2010/2012.
The good() method checks that everything is as it should be, and this way you actually have the file open.
The only caveat is pay close attention to how the file name is represented in the OS (as mentioned in another answer).
So far this has worked cross platform for me just fine:)
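A small usage sketch, assuming the function above is in scope (the file name is just an example):
#include <iostream>

int main() {
    if (doesMyFileExist("example.txt"))
        std::cout << "example.txt already exists, skipping it\n";
    else
        std::cout << "example.txt does not exist\n";
}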
I spent some hours experimenting on my Ubuntu machine. It took much trial and error, but finally I got it working. I'm not sure if it will work on MacOS or even on other *nixes.
As many suspected, a direct cast to char* did not work; then I got only the first slash of my test path /home/progmars/абвгдāēī. The trick was to use wcstombs() combined with setlocale(). Although I could not get the text to display in the console after this conversion, the access() function still got it right.
Here is the code which worked for me:
// requires <clocale>, <cstdlib>, <cstring> and <unistd.h> (plus <io.h> for _waccess on Windows)
bool fileExists(const wstring& src)
{
#ifdef PLATFORM_WINDOWS
    return (_waccess(src.c_str(), 0) == 0);
#else
    // hopefully this will work on most *nixes...
    size_t outSize = src.size() * sizeof(wchar_t) + 1; // max possible bytes plus \0 char
    char* conv = new char[outSize];
    memset(conv, 0, outSize);
    // MacOS claims to have wcstombs_l which takes a locale argument,
    // but I could not find something similar on Ubuntu,
    // thus I had to use setlocale();
    // copy the old locale string, since the next setlocale() call may overwrite the returned pointer
    string oldLocale = setlocale(LC_ALL, NULL);
    setlocale(LC_ALL, "en_US.UTF-8"); // let's hope most machines have "en_US.UTF-8" available
    // "Works on my machine", that is, Ubuntu 12.04
    size_t wcsSize = wcstombs(conv, src.c_str(), outSize);
    // we might get an error code (size_t)-1 in wcsSize; ignoring it for now
    // now be good, restore the locale
    setlocale(LC_ALL, oldLocale.c_str());
    bool exists = (access(conv, 0) == 0);
    delete[] conv; // don't leak the conversion buffer
    return exists;
#endif
}
And here is some experimental code which led me to the solution:
// this is crucial to output correct unicode characters in console and for wcstombs to work!
// empty string also works instead of en_US.UTF-8
// setlocale(LC_ALL, "en_US.UTF-8");
wstring unicoded = wstring(L"/home/progmars/абвгдāēī");
int outSize = unicoded.size() * sizeof(wchar_t) + 1;// max possible bytes plus \0 char
char* conv = new char[outSize];
memset(conv, 0, outSize);
size_t szt = wcstombs(conv, unicoded.c_str(), outSize); // this needs setlocale - only then it returns 31. else it returns some big number - most likely, an error message
wcout << "wcstombs result " << szt << endl;
int resDirect = access("/home/progmars/абвгдāēī", 0); // works fine always
int resCast = access((char*)unicoded.c_str(), 0);
int resConv = access(conv, 0);
wcout << "Raw " << unicoded.c_str() << endl; // output /home/progmars/абвгдāēī but only if setlocale has been called; else output is /home/progmars/????????
wcout << "Casted " << (char*)unicoded.c_str() << endl; // output /
wcout << "Converted " << conv << endl; // output /home/progmars/ - for some reason, Unicode chars are cut away in the console, but still they are there because access() picks them up correctly
wcout << "resDirect " << resDirect << endl; // gives correct result depending on the file existence
wcout << "resCast " << resCast << endl; // wrong result - always 0 because it looks for / and it's the filesystem root which always exists
wcout << "resConv " << resConv << endl;
// gives correct result but only if setlocale() is present
Of course, I could avoid all that hassle with ifdefs by defining my own string type, which would be wstring on Windows and string on *nix, because *nix seems more liberal about UTF-8 symbols and doesn't mind having them in plain strings. Still, I wanted to keep my function declarations consistent across all platforms, and I also wanted to learn how Unicode filenames work on Linux.
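As a further aside, if C++17 is an option, std::filesystem sidesteps the manual conversion entirely, since std::filesystem::path accepts wide strings on every platform; a minimal sketch (fileExistsFs is just an illustrative name):
#include <filesystem>
#include <string>

bool fileExistsFs(const std::wstring& src) {
    // path converts the wide string to the platform's native encoding itself,
    // so the same declaration works on Windows and *nix.
    return std::filesystem::exists(std::filesystem::path(src));
}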
How can I get all the physical drive paths (\\.\PhysicalDriveX) on a Windows computer, with C/C++?
The answers in this question suggest getting the logical drive letter, and then getting the physical drive corresponding to that mounted drive. The problem is, I want to get all physical drives connected to the computer, including drives that are not mounted.
Other answers suggest incrementing a value from 0-15 and checking if a drive exists there (\\.\PhysicalDrive0, \\.\PhysicalDrive1, ...) or calling WMIC to list all the drives.
While these seem like they would work, they don't seem like the best approach to take. Isn't there a simple function such as GetPhysicalDrives that simply returns a vector of std::strings containing the paths of all the physical drives?
You can use QueryDosDevice. Based on the description, you'd expect this to list things like C: and D:, but it will also list things like PhysicalDrive0, PhysicalDrive1 and so on.
The major shortcoming is that it will also list a lot of other device names you probably don't care about, so (for example) on my machine, I get a list of almost 600 device names, of which only a fairly small percentage is related to what you care about.
Just in case you care, some (old) sample code:
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <cstring>
#include <iostream>

int main(int argc, char **argv) {
    char physical[65536];
    char logical[65536];

    if (argc > 1) {
        for (int i = 1; i < argc; i++) {
            QueryDosDevice(argv[i], logical, sizeof(logical));
            std::cout << argv[i] << " : \t" << logical << std::endl << std::endl;
        }
        return 0;
    }

    QueryDosDevice(NULL, physical, sizeof(physical));

    std::cout << "devices: " << std::endl;
    for (char *pos = physical; *pos; pos += strlen(pos) + 1) {
        QueryDosDevice(pos, logical, sizeof(logical));
        std::cout << pos << " : \t" << logical << std::endl << std::endl;
    }
    return 0;
}
However, if I run this like devlist | grep "^Physical", it lists only the physical drives.
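If you'd rather do the filtering in the program than pipe through grep, a rough sketch along the same lines (the "PhysicalDrive" prefix check is the only assumption here):
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <cstring>
#include <iostream>
#include <string>
#include <vector>

// Walk QueryDosDevice's double-NUL-terminated list and keep only the
// entries that name a physical drive.
std::vector<std::string> getPhysicalDrives() {
    std::vector<std::string> drives;
    static char buffer[65536];
    if (QueryDosDeviceA(NULL, buffer, sizeof(buffer)) == 0)
        return drives;
    for (char *pos = buffer; *pos; pos += strlen(pos) + 1) {
        if (strncmp(pos, "PhysicalDrive", 13) == 0)
            drives.push_back(std::string("\\\\.\\") + pos);  // e.g. \\.\PhysicalDrive0
    }
    return drives;
}

int main() {
    for (const auto &d : getPhysicalDrives())
        std::cout << d << "\n";
}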
Yes, you can just type NET USE. Here is some example output:
NET USE
New connections will be remembered.
Status       Local     Remote                                    Network
-------------------------------------------------------------------------------
             H:        \\romaxtechnology.com\HomeDrive\Users\Henry.Tanner
                                                                 Microsoft Windows Network
OK           N:        \\ukfs01.romaxtechnology.com\romaxfs      Microsoft Windows Network
OK           X:        \\ukfs03.romaxtechnology.com\exchange     Microsoft Windows Network
OK           Z:        \\ukfs07\Engineering                      Microsoft Windows Network
             \\romaxtechnology.com\HomeDrive                     Microsoft Windows Network
OK           \\ukfs07\IPC$                                       Microsoft Windows Network
The command completed successfully.