How is the OS language represented in Linux? (C++)

My application runs on Linux and will be localized.
How can I get the OS language? How is it represented in Linux? As numeric values?

See the question at SuperUser:
https://superuser.com/questions/62031/how-to-change-the-linux-localization
If you just want to know which locales are supported, you can see which are installed, which can be done in Debian with:
ls -l /usr/share/locale
I guess the locale named "C" (the default locale name, which is just English) should always be installed.
And if you want to set the locale, just use:
export LC_ALL=de_DE
e.g. for German.
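From C++, a minimal sketch of querying the configured locale (the empty string makes setlocale adopt the environment's LANG/LC_* settings; note the locale is identified by a string such as "de_DE.UTF-8", not a numeric value):
#include <clocale>
#include <cstdio>

int main() {
    // "" = adopt the locale configured in the environment (LC_ALL, LANG, ...)
    const char *name = setlocale(LC_ALL, "");
    printf("Active locale: %s\n", name ? name : "(none)");
    return 0;
}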

Since most of your other questions relate to Qt, you might want to read the documentation of the static function QLocale::system().
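For example, a quick sketch (the output values depend on the machine's settings):
#include <QDebug>
#include <QLocale>

int main() {
    QLocale sys = QLocale::system();                        // the OS-configured locale
    qDebug() << sys.name();                                 // e.g. "de_DE"
    qDebug() << QLocale::languageToString(sys.language());  // e.g. "German"
    return 0;
}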

You can find some useful information in the GLibC Manual.

Print colorful ascii art in CPP console [duplicate]

I'm building a lightweight version of the ncurses library. So far, it works pretty well with VT100-compatible terminals, but the Win32 console fails to recognise the \033 code as the beginning of an escape sequence:
#include <stdio.h>
#include "term.h"

int main(void) {
    puts(BOLD COLOR(FG, RED) "Bold text" NOT_BOLD " is cool!" CLEAR);
    return 0;
}
What needs to be done on the C code level, in order that the ANSI.SYS driver is loaded and the ANSI/VT100 escape sequences recognized?
[UPDATE] For the latest Windows 10, please read the useful contribution by @BrainSlugs83, just below in the comments to this answer.
For versions before the Windows 10 Anniversary Update:
ANSI.SYS has a restriction that it can run only in the context of the MS-DOS subsystem under Windows 95 through Vista.
Microsoft KB101875 explains how to enable ANSI.SYS in a command window, but it does not apply to Windows NT. Although we all love colors, modern versions of Windows do not have this nice ANSI support.
Instead, Microsoft provides a set of console API functions, but these are far from what you need to handle ANSI/VT100 escape sequences.
For a more detailed explanation, see the Wikipedia article:
ANSI.SYS also works in NT-derived systems for 16-bit legacy programs executing under the NTVDM.
The Win32 console does not natively support ANSI escape sequences at all. Software such as Ansicon can however act as a wrapper around the standard Win32 console and add support for ANSI escape sequences.
So I think ANSICON by Jason Hood is your solution. It is written in C, supports 32-bit and 64-bit versions of Windows, and the source is available.
I also found some other similar questions and posts that were ultimately answered with ANSICON:
How to load ANSI escape codes or get coloured file listing in WinXP cmd shell?
how to use ansi.sys in windows 7
How can I get cmd.exe to display ANSI color escape sequences?
ansi color in windows shells
enable ansi colors in windows command prompt
Starting from Windows 10 TH2 (v1511), conhost.exe and cmd.exe support ANSI and VT100 Escape Sequences out of the box (although they have to be enabled).
See my answer over at superuser for more details.
Based on @BrainSlugs83's comment, you can activate it on the current Windows 10 version via the registry, with this command line:
REG ADD HKCU\CONSOLE /f /v VirtualTerminalLevel /t REG_DWORD /d 1
For Python 2.7, the following script works fine for me on Windows 10 (v1607):
import os
print '\033[35m'+'color-test'+'\033[39m'+" test end"
os.system('') #enable VT100 Escape Sequence for WINDOWS 10 Ver. 1607
print '\033[35m'+'color-test'+'\033[39m'+" test end"
Result should be:
[35mcolor-test[39m test end
color-test test end
Starting from Windows 10, you can use ENABLE_VIRTUAL_TERMINAL_PROCESSING to enable ANSI escape sequences:
https://msdn.microsoft.com/en-us/library/windows/desktop/mt638032(v=vs.85).aspx
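For illustration, a minimal C++ sketch of enabling it (standard Win32 calls; the fallback #define uses the flag value 0x0004 documented for this mode, in case older SDK headers lack it):
#include <windows.h>
#include <stdio.h>

#ifndef ENABLE_VIRTUAL_TERMINAL_PROCESSING
#define ENABLE_VIRTUAL_TERMINAL_PROCESSING 0x0004
#endif

int main() {
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    DWORD mode = 0;
    if (hOut == INVALID_HANDLE_VALUE || !GetConsoleMode(hOut, &mode))
        return 1;
    // Keep the existing console flags and add VT100/ANSI processing.
    if (!SetConsoleMode(hOut, mode | ENABLE_VIRTUAL_TERMINAL_PROCESSING))
        return 1;
    printf("\x1b[31mred\x1b[0m and \x1b[1mbold\x1b[0m\n");
    return 0;
}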
If ANSICON is not acceptable because it requires you to install something on the system, a more lightweight solution is a library that parses the ANSI codes and translates them into the relevant Win32 console API calls, such as SetConsoleTextAttribute:
https://github.com/mattn/ansicolor-w32.c
For coloring cmd output you need Windows.h and SetConsoleTextAttribute(); more details can be found at http://msdn.microsoft.com/en-us/library/windows/desktop/ms686047%28v=vs.85%29.aspx
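For illustration, a small sketch of that attribute-based approach (saving the original attributes first so they can be restored):
#include <windows.h>
#include <stdio.h>

int main() {
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(hOut, &info);          // remember the current attributes
    SetConsoleTextAttribute(hOut, FOREGROUND_RED | FOREGROUND_INTENSITY);
    printf("Bright red text\n");
    SetConsoleTextAttribute(hOut, info.wAttributes);  // restore the original colors
    return 0;
}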
In the latest Windows 10, it can be done with SetConsoleMode(originalMode | ENABLE_VIRTUAL_TERMINAL_PROCESSING). See https://learn.microsoft.com/en-us/windows/console/console-virtual-terminal-sequences#example
Maybe ANSICON can help you.
Just download and extract the files for your Windows OS: 32-bit or 64-bit.
Install it with: ansicon -i
I personally like clink. It not only processes ANSI codes, it also adds many other features so Windows Console behaves like bash (history, reverse history search, keyboard shortcuts, etc.):
The same line editing as Bash (from GNU's Readline library).
History persistence between sessions.
Context-sensitive completion:
  Executables (and aliases).
  Directory commands.
  Environment variables.
  Third-party tools: Git, Mercurial, SVN, Go, and P4.
New keyboard shortcuts:
  Paste from clipboard (Ctrl-V).
  Incremental history search (Ctrl-R/Ctrl-S).
  Powerful completion (TAB).
  Undo (Ctrl-Z).
  Automatic "cd .." (Ctrl-PgUp).
  Environment variable expansion (Ctrl-Alt-E).
  (press Alt-H for many more...)
Scriptable completion with Lua.
Coloured and scriptable prompt.
Auto-answering of the "Terminate batch job?" prompt.
Ansi.sys is an "MSDOS driver" provided as part of Windows XP, 2000, and earlier versions of NT. In 2000 and XP, it is located in the system32 folder (I don't remember the structure of earlier versions of NT). Programs that run in the DOS subsystem and use standard output can use ANSI.SYS just as they could when running over MSDOS.
To load ansi.sys, you must use the device= or devicehigh= command in config, just as you would in MSDOS. On Windows NT 5 (2K & XP), each copy of the DOS subsystem can be given a separate config file in the pif/shortcut (use the "advanced" button), and there is a default file called CONFIG.NT (also in the system32 folder), which is used if the pif/shortcut does not specify a special config file.
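For reference, the line added to the config file typically looks like this (assuming the standard %SystemRoot% layout):
device=%SystemRoot%\system32\ansi.sys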
When ansi.sys is loaded correctly, mem /d will report that it is loaded. On earlier versions of NT, you can and must load a proper DOS environment to load ansi.sys, and ansi art will work at the prompt. On Win 2K and XP, loading ansi.sys will have no effect on your "CMD prompt" because CMD is not a DOS program: it is a 32-bit Windows console program. For some reason that I do not understand, on WinXP, even if you load a fixed copy of command.com using "command.com /p", the command prompt will not be ansi-enabled: perhaps when you do it that way it only emulates loading command.com?
In any case, when you use an actual DOS version of command.com, ansi is enabled after being loaded: you can demonstrate its use with a bit of ansi art like this:
command /c type ansiart.ans
(here is an example: http://artscene.textfiles.com/ansi/artwork/beastie.ans)
CONFIG.NT (in the system32 folder) contains an example of the syntax for loading device drivers. You will need to be an Administrator to edit that default file, or you can make a copy of it.
On Win 2K and XP, the default "shortcut" for MSDOS is a .PIF file, not a .LNK file. If you create a .lnk file to CMD, you won't be able to set special config and autoexec files, it will use the default CONFIG.NT. If you want to use a special config file for just one DOS application, you can make a copy of the "MSDOS shortcut", or you can make a copy of "_default.pif", found in your Windows folder.
I had the same issue. I installed ConEmu and that solved my problem.
I found this tool to work on my end:
Microsoft Color Tool from GitHub
Unzip the compressed file, then open CMD with administrator permissions.
In CMD, go to the folder where you unzipped the file.
Then execute this command: colortool -b scheme-name
Replace scheme-name with any of the options below:
campbell.ini
campbell-legacy.ini
cmd-legacy.ini
deuteranopia.itermcolors
OneHalfDark.itermcolors
OneHalfLight.itermcolors
solarized_dark.itermcolors
solarized_light.itermcolors
In my case, the command would be: colortool -b solarized_dark.itermcolors
Right-click on the console window and select Properties.
You don't need to change any value; just click "OK" to save the settings. (You will notice that the colors are already applied.)
(screenshot: the console Properties dialog)
Then restart cmd or PowerShell.
ANSI colors should now be enabled, using the color scheme you chose before.
Somehow, on Windows you just need to call any shell command first via the system function. At the start of your main function, put system(""); and don't forget to include stdlib.h.
I noticed this when I looked at some of my old programs that also used ANSI codes, trying to understand why they worked while my new code did not.
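A tiny sketch of that workaround (this relies on an undocumented side effect observed on some Windows 10 builds, so treat it as a quirk rather than a guaranteed API):
#include <stdlib.h>
#include <stdio.h>

int main(void) {
    system("");  // quirk: enables VT/ANSI processing on some Windows 10 consoles
    printf("\x1b[32mgreen text\x1b[0m\n");
    return 0;
}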

How do I get the version of a program in Linux?

In Windows you can do:
CSystemInfo info;
this->m_strVersion = info.GetFileVersion( CFileSystemHelper::GetApplicationPath() + _T("/test.exe") );
to get the version number.
How would I do it in C++ on Linux?
Windows has a version resource system with standard API support; Linux and UNIX have no such high-level concept, for a variety of reasons ranging from legacy to redundancy.
The best options are to query the local packaging system (RPM, APT, etc.) or to try executing the program with the --version command-line parameter, which is a recommended GNU standard.
Example RPM query on command line for the Samba tool smbget:
# rpm -q -f /usr/bin/smbget --queryformat '%{version}\n'
3.0.33
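A sketch of the --version route from C (popen is POSIX; "mytool" here is a hypothetical program name, and the output format varies from tool to tool):
#include <stdio.h>

int main(void) {
    char line[256];
    FILE *p = popen("mytool --version 2>/dev/null", "r");
    if (!p)
        return 1;
    if (fgets(line, sizeof(line), p))   // the first line usually carries the version
        printf("Reported version: %s", line);
    pclose(p);
    return 0;
}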
You probably want to retrieve the path of the currently executing executable.
On Linux, you could use the /proc/ pseudo-file system. Read the proc(5) man page for more.
Specifically, you probably want to do something like
char myexepath[512];
memset(myexepath, 0, sizeof(myexepath));                       /* needs <string.h> */
readlink("/proc/self/exe", myexepath, sizeof(myexepath) - 1);  /* needs <unistd.h>; readlink does not NUL-terminate by itself */
(but you really should check for runtime errors above)
If you simply want to display the version of a program, you should have a convention for it, usually accepting --version as the program's first argument.
I invite you to read Advanced Linux Programming.

How can I find the Linux distribution name and version in code (a .c file)?

I'd like to know how I can find the Linux distribution name and version (like Ubuntu 10.04 or CentOS 5.5) in code (a .c file).
The C function I'm looking for should be like the uname() system call used in .c files to get the kernel version.
Ideally, the function would work on all Linux distributions (a standard).
I'm not looking to get the distribution name and version by running a Linux command line from code (like system("cat /etc/release");).
Any suggestion will be appreciated!
There is no standard for this yet. You can query the following files or check for their existence:
/etc/lsb-release
/etc/issue
/etc/*release
/etc/*version
Well, you can (and should) use fopen and fgets instead of system("cat") for reading /etc/release.
There's no universal method though; I could even build a Linux image that has no filesystem at all (except an initramfs) and definitely no distribution name.
AFAIK there isn't a standard system call to get this if uname(2) doesn't give you enough info.
The safest approach is probably to check for /proc/version and read that.
You could fopen("/etc/lsb-release") and parse its contents. It looks like this:
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.3 LTS"
This method is not universal. You'll need to make sure that it works on all the distros you care about (if it doesn't, I suggest you go with @ott--'s answer).
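A minimal parsing sketch along those lines (key name as in the sample above; error handling kept short):
#include <stdio.h>
#include <string.h>

int main(void) {
    FILE *f = fopen("/etc/lsb-release", "r");
    if (!f) {
        perror("/etc/lsb-release");
        return 1;
    }
    char line[256];
    while (fgets(line, sizeof(line), f)) {
        // Print the human-readable description, e.g. "Ubuntu 10.04.3 LTS"
        if (strncmp(line, "DISTRIB_DESCRIPTION=", 20) == 0)
            fputs(line + 20, stdout);
    }
    fclose(f);
    return 0;
}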
Is it acceptable to run some shell commands?
$ /usr/bin/lsb_release -r
Release: 11.04
$ /usr/bin/lsb_release -d
Description: Ubuntu 11.04
$ /usr/bin/lsb_release -rd
Description: Ubuntu 11.04
Release: 11.04
There is no portable way to do that; you'll have to use some OS detection tool/library.
Fortunately, there are a few out there. I know these two:
Facter, a professional (yet free/open) information-gathering program in Ruby: http://puppetlabs.com/puppet/related-projects/facter/
a shell script: http://www.novell.com/coolsolutions/feature/11251.html
(I used facter via puppet and it is very good.)
With a little additional scripting, you can use one of those programs' output to generate a .h file that you can then use in your code.
You can even integrate this generation as a step in your makefile.
I usually inspect /etc/issue; while (as others pointed out) it is not guaranteed, I've found in the field that it's quite reliable.
As far as I've experienced, it works on Ubuntu, Debian, Red Hat, CentOS, Slackware, and Arch Linux.

Mac OSX and Unix quick questions

I have 3 questions. I am making a C++ executable to launch a Perl program I made. I will compile it for Windows, Mac OSX, and Linux. It's pretty much just: system("perl program.pl");
1. When compiled on Mac OSX, the program starts in ~. How would I get it to start in the dir it was launched from, or is it just a problem with the compiler?
2. I'm using echo -n -e "\033]0;Program\007" in an attempt to make the window title "Program". Is this the best way?
3. I'm using echo -n -e "\033[7;30;47m" to make the background of the window black. Is this the best way?
Thanks.
1. This sounds like something Finder is doing. Launching the app from a shell should work as you expect.
2. Use tput.
3. See answer to 2, above.
On Mac OS/Unix, invoking system does not change the current working directory. When executing program.pl, the current working directory is the same one from which you executed the C++ executable. When you launch the executable using Launch Services (e.g. the Finder), the working directory should be /.
On #1, you can refer to the current directory with ./, so system("perl ./program.pl"); should do it, assuming both files are sitting in the same folder. ../program.pl would be one level higher.
For #1, use getcwd and then pass an explicit path to system:
char *cwd = getcwd(NULL, PATH_MAX);  /* needs <unistd.h> and <limits.h>; glibc and macOS allocate the buffer when it is NULL */
char cmd[PATH_MAX + 64];
snprintf(cmd, sizeof(cmd), "perl %s/program.pl", cwd);
system(cmd);
free(cwd);
(but you really should check for runtime errors above)
If your Perl program itself relies on a specific working directory, then do this instead:
snprintf(cmd, sizeof(cmd), "cd %s && perl program.pl", cwd);
This is probably a silly question, but why are you making an application to launch a Perl script? Just add the following to the top of your Perl script and use chmod a+x to make it executable:
#! /usr/bin/perl
When you use the system command from C or C++, you are basically launching the default system shell and executing the given command in that shell. Doing that is not very portable and somewhat defeats the purpose of using C or C++ (since you could simply create a shell script that does the same thing). If you want to actually do this with C++, you should probably use popen or fork+exec to launch perl.
Generally speaking, it isn't nice to your end users to play with their Terminal in the manner you have proposed; most users, by default, have the Terminal configured to display the most recently executed command, their current directory, or some other information of their choosing, and changing that is considered improper etiquette on UNIX systems such as Mac OS X and Linux. If you are trying to create a terminal interface, though, you might want to look at the curses library.
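For illustration, a minimal fork+exec sketch that launches the script without going through the shell (POSIX calls; error handling abbreviated):
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {
        // Child: replace this process with the perl interpreter.
        execlp("perl", "perl", "program.pl", (char *)NULL);
        _exit(127);  // only reached if exec failed
    }
    int status;
    waitpid(pid, &status, 0);  // parent: wait for the script to finish
    return 0;
}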

std::locale breakage on MacOS 10.6 with LANG=en_US.UTF-8

I have a C++ application that I am porting to MacOSX (specifically, 10.6). The app makes heavy use of the C++ standard library and boost. I recently observed some breakage in the app that I'm having difficulty understanding.
Basically, the boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:
#include <locale>
int main(int argc, char *argv[]) {
    std::locale::global(std::locale(""));
    return 0;
}
This program fails when I build it with g++ and execute the resulting binary in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues, but I'm surprised to see this breakage in the default configuration.
My questions are:
Is this expected behavior for this code on MacOS 10.6?
What would a proper workaround be? I can't really re-write the function because the version of the boost libraries we are using executes this statement internally as part of the filesystem library.
For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode.
Edit: The error given by the above code on 10.6.1 is:
$ ./locale
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Abort trap
Ok I don't have an answer for you, but I have some clues:
This isn't limited to OS X 10.6. I get the same result on a 10.4 machine.
I looked at the GCC source for libstdc++ and hunted around for _S_create_c_locale. What I found is on line 143 of config/locale/generic/c_locale.cc. The comment there says "Currently, the generic model only supports the "C" locale." That's not promising. In fact, if I do LANG=C the runtime error goes away, but any other value for LANG I try causes the same error, regardless of what arguments I give to the locale constructor. (I tried locale::classic(), "C", "", and the default.) This is true as far back as GCC 4.0.
That same page has a reference to a libstdc++ mailing list discussion on this topic. I don't know how fruitful it is: I only followed it a little way down, and it gets very technical very fast.
None of this tells you why the default locale on 10.6 wouldn't work with std::locale but it does suggest a workaround, which is to set LANG=C before running the program.
I have encountered this problem very recently on Ubuntu 14.04 LTS and on a Raspberry Pi running the latest Raspbian Wheezy.
It has nothing to do with OS X, rather with a combination of G++ and Boost (at least up to V1.55) and the default locale settings on certain platforms. There are Boost bug tickets sort of related to this issue, see
ticket #4688 and ticket #5928.
My "solution" was first to do some extra locale setup, as suggested by this AskUbuntu posting:
sudo locale-gen en_US en_US.UTF-8
sudo dpkg-reconfigure locales
But then, I also had to make sure that the environment variable LC_ALL is set to the value of LANG (it is advisable to put this in your .profile):
export LC_ALL=$LANG
In my case I use the locale en_US.UTF-8.
Final remark: the OP said "This program fails when I run this through g++". I understand that this thread was started in 2009, but today there is absolutely no need to use GCC or G++ on the Mac, the much better LLVM/Clang compiler suite is available from Apple free of charge, see the XCode home page.
The situation is still the same. But some functionality may be gained by
setlocale( LC_ALL, "" );
This gets you UTF-8 coding on wide iostreams but not money formatting, for my two data points.
locale::global( locale( "" ) );
should be equivalent, but it crashes if subsequently run in the very same program.
I had the same problem. I checked LANG and LC_MESSAGES, and they are not set when you launch the application through Finder, so the following lines saved the day (unsetenv is the C call, declared in stdlib.h):
unsetenv("LANG");
unsetenv("LC_MESSAGES");
The _S_create_c_locale exception seems to indicate some sort of misconfiguration: check that whatever your LC_ALL or LANG environment variable is set to exists in the output of locale -a.
$ env LC_ALL=xx_YY ./test
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Aborted
$ env LC_ALL=C ./test
$ echo $?
0
But since you're on OS X, I'm not really sure how locale information is supposed to be handled.
Quoting the accepted answer:
It has nothing to do with OS X
I encountered this issue on macOS Big Sur using an outdated macOS utility. The specific utility was VMware's ovftool, but none of the above LANG/LC_ALL workarounds fixed it; updating the tool was the only way to get the error to go away.
In my specific case, the error occurred using ovftool 4.1.0, and the error went away using ovftool 4.4.3.