How to access box-drawing characters in ASCII in C++ on Mac

The character I'm first looking for is usually 201 in normal ASCII code, but it's different on a Mac. How do I work around this?

It's possible to input Unicode characters on a Mac by switching to the Unicode Hex Input keyboard layout:
1. Open System Preferences.
2. Choose Keyboard.
3. Add Unicode Hex Input to the list of input sources.
4. Select "Show Input menu in menu bar".
5. Close the preferences.
6. Click the flag that appears in the menu bar.
7. Select Unicode Hex Input.
Then you need the codes, and you can find a nice summary of them in Wikipedia's Box Drawing article.
To enter a code:
1. Hold down Option (Alt).
2. Type the code without the "U+" prefix, e.g. for U+2560, type 2560.
3. Release Option.
I drew this example using that method: ╠╩╬╩╣
After you're finished, you can change your keyboard input back to your normal one using the flag in the menu bar.

This character is not available in any single-byte character set on OS X.
Unlike the Windows environment (which requires special coding to use Unicode), Unicode is readily available in OS X.
Use Unicode U+2554, which is E2 95 94 in UTF-8.
You can just use it directly in a character or string literal: ╔
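For example, a minimal sketch, assuming the terminal is set to UTF-8 (the macOS default) and the source file is saved as UTF-8:

#include <iostream>

int main() {
    // Both lines print ╔; the narrow literal is stored as the UTF-8 bytes E2 95 94.
    std::cout << "╔" << '\n';
    std::cout << "\u2554" << '\n';   // same character, written as a universal character name
}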

There is no such thing as ASCII character 201. ASCII is a 7-bit single byte character encoding, where code points go from 0 to 127, inclusive. Maybe you are referring to “╔” in the original IBM PC character set?
Then you can do this:
1. Use a Windows PC with a keyboard that has a numeric keypad.
2. In a console window with input (e.g. the command interpreter), hold down Alt and type 201 on the numeric keypad, in number mode (NumLock on).
3. Start Word or Windows' WordPad.
4. Copy and paste the character into Word or WordPad.
5. Type Alt+X.
On my laptop WordPad reports 2554, which means it's Unicode character U+2554 (hexadecimal).
In C++ you can express that character as L'\u2554', which is of type wchar_t.
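A minimal sketch of printing that wchar_t, assuming a UTF-8 terminal; since wide-stream locale handling varies by platform, this uses C's wprintf, which converts the wide character via the current locale:

#include <clocale>
#include <cwchar>

int main() {
    std::setlocale(LC_ALL, "");    // adopt the terminal's encoding from the environment
    wchar_t box = L'\u2554';       // ╔ as a wide character
    std::wprintf(L"%lc\n", box);   // converted to the locale's encoding on output
}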

On the other hand, if you prefer names to numbers, ncurses has supported double- and thick-line drawing characters in Unicode terminals since late 2009. That is after the (rather old) ncurses 5.7 bundled with OS X, but newer releases are available with MacPorts, etc.
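A minimal sketch using the wide-character ncurses API; the WACS_D_* names are documented in curs_add_wch(3x), and you may need the ncursesw library from MacPorts or Homebrew:

// Build with a recent ncurses: g++ box.cpp -lncursesw
#define _XOPEN_SOURCE_EXTENDED 1   // some systems require this for the wide-char API
#include <clocale>
#include <ncurses.h>

int main() {
    std::setlocale(LC_ALL, "");    // required so ncurses can emit multibyte output
    initscr();
    add_wch(WACS_D_ULCORNER);      // ╔  double-line upper-left corner
    add_wch(WACS_D_HLINE);         // ═
    add_wch(WACS_D_URCORNER);      // ╗
    refresh();
    getch();                       // wait for a keypress before tearing down
    endwin();
    return 0;
}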

Related

How to store the character ú in char array with std::cin?

In the console the user types several characters, including ú. I would like to store these characters in a char array using std::cin, but the character ú is stored as 163 '£'. I really want to store it as 163 'ú'. How could I do it?
The character set of the console defines how a char value will be displayed. For example:
- if the console uses the ISO 8859-1 or Windows-1252 character set, the value 163 is £;
- if the console uses the old DOS code page 850, the same value 163 is ú.
In principle, if you input a char from the console and output this char on the same console, you should graphically get the same result.
However, if there's some mixing of encodings, this is not the case. For example, if you input ú in a CMD window using code page 850 but then output the result in a Unicode window, you would get £ as output. The same phenomenon occurs if you write a file to disk and open it in an editor that uses another character encoding.
Unfortunately, console settings and default encodings are very much system dependent, and more information is needed to give you accurate advice on the best way to solve the issue.
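One way to see what is actually stored, independent of the glyphs: a small diagnostic sketch that prints the numeric value of every byte read, which stays the same no matter which character the console's code page draws for it:

#include <iostream>

int main() {
    char c;
    while (std::cin.get(c)) {
        // 163 is 163 whether the console would render it as £ or as ú.
        std::cout << static_cast<int>(static_cast<unsigned char>(c)) << ' ';
    }
}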

Unicode Support in MFC Controls

I'm exploring converting an existing MFC app from MBCS to Unicode, and I'm compiling a simple starter app in Unicode mode to check out how edit controls, for example, behave differently in Unicode/W or MBCS/A mode.
But I'm getting some strange results.
If I enter Alt+1702 into Word, for example, I get the Arabic character (ڦ) which is expected from the Unicode table.
But if I enter Alt+1702 into an edit control in the Unicode MFC app, I get a superscript "a" (ª) instead. This is the same behaviour that I get from the existing MBCS app.
This second behaviour also happens in Word (2007) if I use File-Open and enter Alt+1702 in the Filename field. But it comes through properly if I enter it in the Font combo in the Ribbon.
What am I missing here?
Windows disables hex-numpad input by default. You must enable it (a registry setting) and then enter the value by holding Alt, pressing + on the numeric keypad, and typing the hex digits.
How to enable it:
Insert Unicode characters via the keyboard?
How to enter Unicode characters in Microsoft Windows
About the reason why Alt+1702 produces ª:
Alt codes are generally limited to the ANSI or OEM code pages and won't work for code points larger than 255. A few apps (like MS Word, as you experienced) do support larger values, which means Alt+1702 will produce U+06A6 (Arabic letter peheh, ڦ) as expected, since 1702 = 0x06A6. Some other apps simply throw away any digits after the third one. But by default, in almost all applications, if you input a larger value then only the low byte of it is taken as the code point, i.e. the value modulo 256.
So pressing Alt+1702 is equivalent to Alt+166, because 1702 ≡ 166 (mod 256). When you run US Windows, which uses code page 437 as the OEM code page, the character at code point 166 is ª.
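The truncation is easy to check; a tiny sketch of the arithmetic:

#include <iostream>

int main() {
    int altCode = 1702;
    // Apps without full Unicode Alt-code support keep only the low byte:
    std::cout << altCode % 256 << '\n';   // prints 166; CP437 maps 166 to ª
}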

C++ Infinity Sign

Hello, I was just wondering how I can display the infinity sign (∞) in C++. I am using Code::Blocks. I read a couple of Q&As on this topic, but I'm a newbie at this stuff, especially with hex codes and the like. What do I have to include, and what do I type out exactly? If someone could write the code and explain it, that'd be great! Thanks!
The symbol is not part of the ASCII code. However, in the code page 437 (most of the time the default in Windows Command Prompt with English locales/US regional settings) it is represented as the character #236. So in principle
std::cout << static_cast<unsigned char>(236);
should display it, but the result depends on the current locale/encoding. On my Mac (OS X) it is not displayed properly.
The best way to go about it is to use the Unicode character set (which standardizes a huge number of characters and symbols). In this case,
std::cout << "\u221E";
should do the job, as U+221E is the Unicode code point for the infinity sign.
However, to be able to display Unicode, your output device has to support a UTF encoding. On my Mac the Terminal uses UTF-8, but the Windows Command Prompt still defaults to the legacy code page 437 (thanks to @chris for pointing this out). According to this answer, you can switch the console to UTF-8 by typing
chcp 65001
in a Command Prompt.
You can also show it through its Unicode code point:
∞ has the value \u221E.
You can show any character from the Character Map by its Unicode code point.
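Putting it together, a minimal complete program, assuming the terminal uses UTF-8 (e.g. the macOS Terminal, or a Command Prompt after chcp 65001):

#include <iostream>

int main() {
    // "\u221E" compiles to the UTF-8 byte sequence E2 88 9E,
    // which a UTF-8 terminal renders as ∞.
    std::cout << "\u221E" << '\n';
    return 0;
}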

ASCII character problem on Mac: can't print the black square (char(219))

When I try to run this code in C++
cout << char(219);
the output on my Mac is a question mark: ?
However, on a PC it gives me a black square.
Does anyone have any idea why on the Mac there are only 128 characters, when there should be 256?
Thanks for your help.
There's no such thing as ASCII character 219. ASCII only goes up to 127. chars 128-255 are defined in different ways in different character encodings for different languages and different OSs.
MacRoman defines it as €.
IBM code page 437 (used at the Windows command prompt) defines it as █.
Windows code page 1252 (used in Windows GUI programs) defines it as Û.
UTF-8 defines it as a part of a 2-byte character. (Specifically, the lead byte of the characters U+06C0 to U+06FF.)
ASCII is really a 7-bit encoding. If you are printing char(219), it is being interpreted in some other encoding: on Windows, most probably code page 437 in a console window (a GUI program would use code page 1252). On the Mac, I have no idea...
When a character is missing from an encoding, Windows shows a box (it's not character 219, which doesn't exist); Macs show a question mark in a diamond because a designer wanted it that way. But they both mean the same thing: a missing/invalid character.
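If the goal is the glyph itself rather than byte 219, printing its Unicode code point sidesteps the code-page differences. A sketch assuming a UTF-8 terminal (the macOS default):

#include <iostream>

int main() {
    // U+2588 FULL BLOCK; the literal becomes the UTF-8 bytes E2 96 88.
    std::cout << "\u2588" << '\n';
}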

Can all keys be represented as a single char in c++?

I've searched around and I can't seem to find a way to represent arrow keys or the escape key as single char in c++. Is this even possible? I would expect that it would be similar to \t or \n for tab and new line respectively. Whenever I search for escaped characters, there's only ever a list of five or six well known characters.
The short answer is no.
The long answer is that there are a number of control characters in the ASCII character set (decimal 0 through 31, plus 127), among which are the control codes for line feed, carriage return, end-of-transmission, and so on. The escape key maps to one of them (ESC, 27), and terminals encode the arrow keys as multi-byte sequences that begin with it, but that is only a terminal compatibility convention.
Standard PC keyboards send a 2- or 3-byte control code that represents the key that was pressed, what state it's in, which control/alt/shift key is pressed, and a few other things. You'll want to look up "key codes" to see how to handle them. Handling them differs between operating systems and the base libraries you use, and their meaning differs based on the operating system's configured keyboard layout (which may include characters not found in the ANSI character set).
Not possible; keyboards built for some languages have characters that can't be represented in a char, and anyway, how do you represent control-option-command-shift-F11 in a char?
Keyboards send scancodes, which are either some kind of event in a GUI system or a short string of bytes that represents the key. Which codes you get depends on your system, but on most terminal-like systems, ncurses knows how to deal with them.
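For example, a minimal ncurses sketch of reading an arrow key (link with -lncurses; KEY_UP and friends are ncurses' translated key codes):

#include <ncurses.h>

int main() {
    initscr();
    keypad(stdscr, TRUE);         // have ncurses translate escape sequences into key codes
    noecho();
    int ch = getch();             // blocks until a key is pressed
    if (ch == KEY_UP)
        printw("up arrow");
    else
        printw("key code %d", ch);
    refresh();
    getch();                      // pause so the message is visible
    endwin();
    return 0;
}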
char variables usually represent elements of the ASCII table:
http://www.asciitable.com/
There is also man ascii on Unix. If you want arrow keys, you'll need a more direct way to access keyboard input; the arrow keys get translated into sequences of characters before hitting stdin. If you want direct keyboard access, consider a GUI library, SDL, or DirectInput, to name a few.
There aren't any escape characters for the arrow keys; they are represented as key codes, AFAIK. I suggest using a higher-level input library to detect key presses. If you want to do it from scratch, the approach will vary with the specific platform you are programming for.
In any case, back in the days of Turbo C, I used conio.h's getch(), roughly like this:

int ch = getch();              // wait for a keypress and store it in ch
if (ch == 0 || ch == 224) {    // 0 (or 224) prefixes an extended key
    ch = getch();              // scan code: 72=up, 80=down, 75=left, 77=right
}