Extended ASCII in C++

I've tried to execute the following code:
#include <iostream>
using namespace std;

int main() {
    unsigned char datoChar = 168;
    cout << datoChar << endl;
    return 0;
}
It prints a "?" symbol, but it should print "¿".

You have not specified which platform you are on or what the current locale is, but if you are running on a Unix-like operating system, it is very likely that the terminal is interpreting the byte as UTF-8 rather than CP437. CP437, CP1252 and the other "extended ASCII" code pages are mostly obsolete nowadays, and are only used in DOS-like terminals such as Windows's cmd.exe. You may try switching cmd.exe to CP437 using the CHCP command.
In UTF-8, all byte values above 0x7F are reserved for multi-byte sequences. In UTF-8, '¿' is code point U+00BF and is encoded as two separate bytes.
You can put it into a literal using the hexadecimal \uXXXX notation, as in
std::cout << "\u00BF" << std::endl;
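To see the two bytes, here is a minimal sketch (assuming the compiler's narrow execution encoding is UTF-8, which is the usual default on Linux) that dumps the raw bytes of the literal:
#include <iostream>

int main() {
    const char* s = "\u00BF"; // "¿"
    // Print each byte of the literal in hex; on a UTF-8 system this prints "c2 bf"
    for (const unsigned char* p = reinterpret_cast<const unsigned char*>(s); *p; ++p)
        std::cout << std::hex << static_cast<int>(*p) << ' ';
    std::cout << '\n';
    return 0;
}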

Related

Ascii character codes in C++ visual studio don't match up with actual character symbols?

I am trying to represent 8-bit numbers with characters, and I don't want to have to use an int to define the character; I want to use the actual character symbol. But when I use standard ASCII codes, everything outside the range 32 to 100-something is completely different.
So I looped through all 256 ASCII codes and printed them; for instance it said that this character '±' is code 241. But when I copy and paste that symbol from the console, or even use the alt code, it says that character is code -79... How can I get these two things to match up? Thank you!
It is most likely a code page problem. Windows console applications have a code page that "maps" the byte values 0-255 to a subset of Unicode characters. It is something that dates from the pre-Unicode era, to support non-American character sets. There are Windows APIs to select the code page for the console: SetConsoleOutputCP (and SetConsoleCP for input).
#include <iostream>
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>

int main() {
    unsigned int cp = ::GetConsoleOutputCP();
    std::cout << "Default cp of console: " << cp << std::endl;
    ::SetConsoleCP(28591);
    ::SetConsoleOutputCP(28591);
    cp = ::GetConsoleOutputCP();
    std::cout << "Now the cp of console is: " << cp << std::endl;
    for (int i = 32; i < 256; i++) {
        std::cout << (char)i;
        if (i % 32 == 31) {
            std::cout << std::endl;
        }
    }
}
The 28591 code page is ISO-8859-1, which maps the byte values 0-255 to the Unicode code points 0-255 (the number 28591 is taken from https://learn.microsoft.com/en-us/windows/win32/intl/code-page-identifiers).
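On recent Windows versions another option (not part of the original answer) is to switch the console to UTF-8 (code page 65001) and print UTF-8 byte sequences; a hedged sketch, with the characters spelled out as explicit bytes so it does not depend on the compiler's execution character set:
#include <iostream>
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>

int main() {
    ::SetConsoleOutputCP(CP_UTF8); // code page 65001
    // "± ¿ €" written as explicit UTF-8 bytes; the console font must contain the glyphs
    std::cout << "\xC2\xB1 \xC2\xBF \xE2\x82\xAC" << std::endl;
}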

How to apply <cctype> functions on text files with different encoding in c++

I would like to split some files (around 1000) into words and remove numbers and punctuation. I will then process these tokenized words accordingly... However, the files are mostly in German and are encoded in different ways:
ISO-8859-1
ISO Latin-1
ASCII
UTF-8
The problem I am facing is that I cannot find a correct way to apply character-conversion functions such as tolower(), and I also get some weird glyphs in the terminal when I use std::cout on Ubuntu Linux.
For example, in non-UTF-8 files the word französische is shown as franz�sische, für as f�r, etc. Also, words like Örebro or Österreich are ignored by tolower(). From what I know, the "Unicode replacement character" � (U+FFFD) is inserted for any character that the program cannot decode correctly when trying to handle Unicode.
When I open UTF-8 files I don't get any weird characters, but I still cannot convert upper-case special characters such as Ö to lower case... I used std::setlocale(LC_ALL, "de_DE.iso88591"); and some other options that I found on Stack Overflow, but I still don't get the desired output.
My guess on how I should solve this is:
Check encoding of file that is about to be opened
open file according to its specific encoding
Convert file input to UTF-8
Process the file and apply tolower() etc.
Is the above approach feasible, or will the complexity skyrocket?
What is the correct approach for this problem? How can I open the files with some sort of encoding options?
1. Should my OS have the corresponding locale enabled as a global setting in order to process the text (without bothering with how the console displays it)? (On Linux, for example, I do not have de_DE enabled when I run locale -a.)
2. Is this problem only visible due to the terminal's default encoding? Do I need to take any further steps before I process the extracted string normally in C++?
My Linux locale settings (locale), followed by the available locales (locale -a):
LANG=en_US.UTF-8
LANGUAGE=en_US
LC_CTYPE="en_US.UTF-8"
LC_NUMERIC=el_GR.UTF-8
LC_TIME=el_GR.UTF-8
LC_COLLATE="en_US.UTF-8"
LC_MONETARY=el_GR.UTF-8
LC_MESSAGES="en_US.UTF-8"
LC_PAPER=el_GR.UTF-8
LC_NAME=el_GR.UTF-8
LC_ADDRESS=el_GR.UTF-8
LC_TELEPHONE=el_GR.UTF-8
LC_MEASUREMENT=el_GR.UTF-8
LC_IDENTIFICATION=el_GR.UTF-8
LC_ALL=
C
C.UTF-8
el_GR.utf8
en_AG
en_AG.utf8
en_AU.utf8
en_BW.utf8
en_CA.utf8
en_DK.utf8
en_GB.utf8
en_HK.utf8
en_IE.utf8
en_IN
en_IN.utf8
en_NG
en_NG.utf8
en_NZ.utf8
en_PH.utf8
en_SG.utf8
en_US.utf8
en_ZA.utf8
en_ZM
en_ZM.utf8
en_ZW.utf8
POSIX
Here is some sample code that I wrote which doesn't work as I want at the moment.
void processFiles() {
    std::string filename = "17454-8.txt";
    std::ifstream inFile;
    inFile.open(filename);
    if (!inFile) {
        std::cerr << "Failed to open file" << std::endl;
        exit(1);
    }

    //calculate file size
    std::string s = "";
    s.reserve(filesize(filename) + std::ifstream::pos_type(1));
    std::string line;
    while ((inFile.good()) && std::getline(inFile, line)) {
        s.append(line + "\n");
    }
    inFile.close();
    std::cout << s << std::endl;

    //remove punctuation, numbers, tolower,
    //TODO encoding detection and specific transformation (cannot catch Ö, Ä etc) will add too much complexity...
    std::setlocale(LC_ALL, "de_DE.iso88591");
    for (unsigned int i = 0; i < s.length(); ++i) {
        if (std::ispunct(s[i]) || std::isdigit(s[i]))
            s[i] = ' ';
        if (std::isupper(s[i]))
            s[i] = std::tolower(s[i]);
    }
    //std::cout << s << std::endl;

    //tokenize string
    std::istringstream iss(s);
    tokens.clear();
    tokens = {std::istream_iterator<std::string>{iss}, std::istream_iterator<std::string>{}};
    for (auto & i : tokens)
        std::cout << i << std::endl;

    //PROCESS TOKENS
    return;
}
Unicode defines "code points" for characters; a code point is a number up to 0x10FFFF, commonly stored in a 32-bit value. There are several kinds of encodings. ASCII only uses 7 bits, which gives 128 different characters. The 8th bit was used by Microsoft and others to define another 128 characters, depending on the locale; these variants are the "code pages", with names such as "Latin-1" or "ISO-8859-1". Nowadays MS uses UTF-16, which encodes most characters in 2 bytes; because 2 bytes are not enough for the whole Unicode set, the remaining characters are encoded with pairs of 16-bit units (surrogate pairs).
Most used on Linux (typically for files) is UTF-8, which uses a variable number of bytes per character. The first 128 characters are exactly the same as ASCII, with just one byte per character. To represent a character, UTF-8 can use up to 4 bytes. More info on Wikipedia.
While MS uses UTF-16 for both files and RAM, Linux typically uses UTF-32 in RAM (wchar_t is 4 bytes there).
In order to read a file you need to know its encoding. Trying to detect it is a real nightmare and may not succeed. Using std::basic_ios::imbue allows you to set the desired locale for your stream, as in this SO answer.
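For example, a minimal sketch of reading an ISO-8859-1 file through a wide stream with imbue (this assumes a hypothetical input file latin1.txt and that the de_DE.iso88591 locale is generated on the system; check with locale -a):
#include <fstream>
#include <locale>
#include <string>

int main() {
    std::wifstream in("latin1.txt");              // hypothetical Latin-1 encoded file
    in.imbue(std::locale("de_DE.iso88591"));      // the locale's codecvt converts the bytes to wide characters
    std::wstring line;
    while (std::getline(in, line)) {
        // Each wchar_t now holds the code point of one character, so ö, Ö, ü etc.
        // arrive as single elements instead of replacement characters.
    }
    return 0;
}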
tolower and such functions can work with a locale, e.g.
#include <iostream>
#include <locale>

int main() {
    wchar_t s = L'\u00D6'; //latin capital 'o' with diaeresis, decimal 214
    wchar_t sL = std::tolower(s, std::locale("en_US.UTF-8")); //hex= 00F6, dec= 246
    std::cout << "s = " << s << std::endl;
    std::cout << "sL= " << sL << std::endl;
    return 0;
}
outputs:
s = 214
sL= 246
In this other SO answer you can find good solutions, such as using the iconv library (Linux) or iconv for Win32.
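As a rough sketch of the iconv route (assuming glibc's iconv prototype; error handling kept minimal), converting an ISO-8859-1 string to UTF-8 could look like this:
#include <iconv.h>
#include <stdexcept>
#include <string>

std::string latin1_to_utf8(const std::string& in) {
    iconv_t cd = iconv_open("UTF-8", "ISO-8859-1");
    if (cd == (iconv_t)-1)
        throw std::runtime_error("iconv_open failed");
    std::string out(in.size() * 2, '\0');          // a Latin-1 char needs at most 2 UTF-8 bytes
    char* src = const_cast<char*>(in.data());
    char* dst = &out[0];
    std::size_t srcLeft = in.size(), dstLeft = out.size();
    if (iconv(cd, &src, &srcLeft, &dst, &dstLeft) == static_cast<std::size_t>(-1)) {
        iconv_close(cd);
        throw std::runtime_error("iconv conversion failed");
    }
    iconv_close(cd);
    out.resize(out.size() - dstLeft);              // shrink to the bytes actually written
    return out;
}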
In Linux the terminal can be set to use a locale with the help of LC_ALL, LANG and LANGUAGE, e.g.:
//German
LC_ALL="de_DE.UTF-8"
LANG="de_DE.UTF-8"
LANGUAGE="de_DE:de:en_US:en"
//English
LC_ALL="en_US.UTF-8"
LANG="en_US.UTF-8"
LANGUAGE="en_US:en"

C++ Visual Studio Unicode confusion

I've been looking at the Unicode chart, and know that the first 128 code points are equivalent for almost all encoding schemes: ASCII (probably the original), UCS-2, ANSI, UTF-8, UTF-16, UTF-32 and anything else.
I wrote a loop to go through the characters starting from decimal 122, which is lowercase "z". After that there are a couple more characters such as {, |, and }. After that it gets into no-man's land which is basically around 20 "control characters", and then the characters begin again at 161 with an inverted exclamation mark, 162 which is the cent sign with a stroke through it, and so on.
The problem is, my results don't correspond to the Unicode chart, the UTF-8 chart, or the UCS-2 chart; the symbols seem random. By the way, the reason I made the character variable a four-byte int was that when I was using char (which is essentially a one-byte signed data type), after 127 it cycled back to -128, and I thought this might be messing it up.
I know I'm doing something wrong, can anyone figure out what's happening? This happens whether I set the character set to Unicode or Multibyte characters in the project settings. Here is the code you can run.
#include <iostream>
#include <cstdlib> // for system("pause")
using namespace std;

int main()
{
    unsigned int character = 122; // Starting at "z"
    for (int i = 0; i < 100; i++)
    {
        cout << (char)character << endl;
        cout << "decimal code point = " << (int)character << endl;
        cout << "size of character = " << sizeof(character) << endl;
        character++;
        system("pause");
        cout << endl;
    }
    return 0;
}
By the way, here is the Unicode chart
http://unicode-table.com/en/#control-character
Very likely the bytes you're printing are displayed using the console code page (sometimes referred to as OEM), which may be different from the local single- or double-byte character set used by Windows applications (called ANSI).
For instance, on my English language Windows install ANSI means windows-1252, while a console by default uses code page 850.
There are a few ways to write arbitrary Unicode characters to the console, see How to Output Unicode Strings on the Windows Console
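One common approach, likely among those described in the linked article, is WriteConsoleW, which writes UTF-16 text straight to the console buffer and bypasses the code page entirely; a minimal sketch (it only works while stdout really is a console, not a redirected file):
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>

int main() {
    const wchar_t text[] = L"\u00A1 \u00A2 \u00B1 \u20AC\n";  // ¡ ¢ ± €
    DWORD written = 0;
    ::WriteConsoleW(::GetStdHandle(STD_OUTPUT_HANDLE), text,
                    static_cast<DWORD>(sizeof(text) / sizeof(wchar_t) - 1),
                    &written, nullptr);
    return 0;
}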

C++ Non ASCII letters

How do I loop through the letters of a string when it has non-ASCII characters?
This works on Windows!
for (int i = 0; i < text.length(); i++)
{
    std::cout << text[i];
}
But on Linux, if I do:
std::string text = "á";
std::cout << text.length() << std::endl;
It tells me the string "á" has a length of 2, while on Windows it's only 1.
But with ASCII letters it works fine!
In your Windows system's code page, á is a single-byte character, i.e. every char in the string is indeed a character. So you can just loop over the chars and print them.
On Linux, á is represented as the multibyte (2 bytes to be exact) utf-8 character 'C3 A1'. This means that in your string, the á actually consists of two chars, and printing those (or handling them in any way) separately yields nonsense. This will never happen with ASCII characters because the utf-8 representation of every ASCII character fits in a single byte.
Unfortunately, utf-8 is not really supported by C++ standard facilities. As long as you only handle the whole string and neither access individual chars from it nor assume the length of the string equals the number of actual characters in the string, std::string will most likely do fine.
If you need more utf-8 support, look for a good library that implements what you need.
You might also want to read this for a more detailed discussion on different character sets on different systems and advice regarding string vs. wstring.
Also have a look at this for information on how to handle different character encodings portably.
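If all you need is to iterate over code points rather than bytes, a small hand-rolled sketch (assuming the string is valid UTF-8, as on the asker's Linux setup) can count characters by skipping UTF-8 continuation bytes:
#include <iostream>
#include <string>

// Count code points in a UTF-8 string: continuation bytes have the bit pattern 10xxxxxx.
std::size_t utf8_length(const std::string& s) {
    std::size_t count = 0;
    for (unsigned char c : s)
        if ((c & 0xC0) != 0x80)   // this byte starts a code point; it is not a continuation byte
            ++count;
    return count;
}

int main() {
    std::string text = "á";                       // two bytes (C3 A1) in UTF-8
    std::cout << text.length() << std::endl;      // 2 (bytes)
    std::cout << utf8_length(text) << std::endl;  // 1 (code point)
}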
Try using std::wstring. The encoding used isn't specified by the standard as far as I know, so I wouldn't save these contents to a file without a library that handles a specific format. It supports wide characters, so you can use letters and symbols not supported by ASCII.
#include <iostream>
#include <string>

int main()
{
    std::wstring text = L"áéíóú";
    for (int i = 0; i < text.length(); i++)
        std::wcout << text[i];
    std::wcout << text.length() << std::endl;
}
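One caveat not mentioned above: on Linux, std::wcout typically stops outputting as soon as it hits a non-ASCII wide character unless a suitable locale is installed first. A hedged addition at the top of main (assuming the environment locale is UTF-8 capable, and with <locale> included):
    std::locale::global(std::locale(""));  // take the locale from the environment
    std::wcout.imbue(std::locale());       // make wcout use it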

Output arbitrary number as unicode

How can I make an arbitrary number be interpreted as Unicode when outputted to the terminal?
So for example:
#include <iostream>

int main() {
    int euro_dec = 0x20AC;
    std::cout << "from int: " << euro_dec
              << "\nfrom \\u: \u20AC" << std::endl;
    return 0;
}
This prints:
from int: 8364
from \u: €
What does the escape sequence \u do to make the number 0x20AC be interpreted as Unicode?
I tested using wcout and the output was:
from int: 8364
from \u:
A Unicode escape sequence occurring in program text is converted to the equivalent Unicode character in the very first phase of translation (2.2p1b1 [lex.phases]). This occurs even before the program is tokenized or preprocessed.
To convert a Unicode codepoint expressed as an integer to your native narrow multibyte encoding, use c32rtomb:
#include <cuchar>

char buf[MB_CUR_MAX];
std::mbstate_t ps{};
std::size_t ret = std::c32rtomb(buf, euro_dec, &ps);
if (ret != static_cast<std::size_t>(-1)) {
    std::cout << std::string(buf, &buf[ret]); // outputs €
}
Note that cuchar is poorly supported; if you know that your native narrow string encoding is UTF-8 you can use codecvt_utf8<char32_t> but otherwise you'll have to use platform-specific facilities.
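A hedged sketch of that codecvt_utf8 route (note that std::wstring_convert and std::codecvt_utf8 are deprecated since C++17, though still widely available):
#include <codecvt>
#include <iostream>
#include <locale>
#include <string>

int main() {
    std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> conv;
    std::string utf8 = conv.to_bytes(U'\u20AC');    // encode the code point as UTF-8 bytes
    std::cout << "from int: " << utf8 << std::endl; // prints € on a UTF-8 terminal
    return 0;
}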
When you output the integer variable, the library converts the value to decimal text; it doesn't output the code point as a character.
When you use \u, it is the compiler that reads the number and converts it to the appropriate byte sequence, which it inserts directly into the string literal.
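To make that "appropriate byte sequence" concrete, here is a hand-rolled sketch that encodes a code point as UTF-8 at run time (it only handles code points below U+10000 and assumes a UTF-8 terminal):
#include <iostream>
#include <string>

std::string to_utf8(unsigned int cp) {
    std::string out;
    if (cp < 0x80) {                                    // 1 byte: 0xxxxxxx
        out += static_cast<char>(cp);
    } else if (cp < 0x800) {                            // 2 bytes: 110xxxxx 10xxxxxx
        out += static_cast<char>(0xC0 | (cp >> 6));
        out += static_cast<char>(0x80 | (cp & 0x3F));
    } else {                                            // 3 bytes: 1110xxxx 10xxxxxx 10xxxxxx
        out += static_cast<char>(0xE0 | (cp >> 12));
        out += static_cast<char>(0x80 | ((cp >> 6) & 0x3F));
        out += static_cast<char>(0x80 | (cp & 0x3F));
    }
    return out;
}

int main() {
    std::cout << "from int: " << to_utf8(0x20AC) << std::endl;  // prints € (bytes E2 82 AC)
    return 0;
}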