SendInput: Combine Unicode and VirtualKey - c++

I need to replay remote characters on a local machine. It seems I can use SendInput to send either Unicode chars or virtual keys... but I need both, since I do not know which of them an application needs. Games need virtual-key codes, but sometimes Unicode (for chat), etc...
So how can I replay key input such that the end result is still the unicode char that I got from the remote end, but also transmits the virtual codes for apps that need it?
I was thinking of:
SendInput(virtualKey, DOWN)
SendInput(unicode, DOWN)
SendInput(unicode, UP)
SendInput(virtualKey, UP)
But I tried various combinations and always ended up with duplicated chars.
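For reference, the four-event sequence sketched above would look something like this with the raw API (an untested sketch, not a known fix; note that the plain virtual-key press is itself translated into its own WM_CHAR by the target application, which is the likely source of the duplicated characters):

```cpp
#include <windows.h>

// Send one character as the interleaved sequence proposed above:
// virtual-key down, Unicode down, Unicode up, virtual-key up.
// KEYEVENTF_UNICODE events carry the character in wScan and require wVk == 0.
void SendCharWithVk(WORD vk, wchar_t ch)
{
    INPUT in[4] = {};

    in[0].type = INPUT_KEYBOARD;          // virtual-key down
    in[0].ki.wVk = vk;

    in[1].type = INPUT_KEYBOARD;          // Unicode down
    in[1].ki.wScan = ch;
    in[1].ki.dwFlags = KEYEVENTF_UNICODE;

    in[2] = in[1];                        // Unicode up
    in[2].ki.dwFlags |= KEYEVENTF_KEYUP;

    in[3] = in[0];                        // virtual-key up
    in[3].ki.dwFlags = KEYEVENTF_KEYUP;

    SendInput(4, in, sizeof(INPUT));
}
```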

Related

Linux wide string to multibyte issue

I know there are many questions already asked on this topic but I am facing a very unusual situation here.
I am working on CentOS. My application reads some data into wchar_t, converts it to multibyte (UTF-8 encoding), fills a char buffer in a Google proto message and sends it to another application.
The other application converts it back to a wide string and displays it to the user. I am using wcstombs for the conversion. My locale is "en_US.UTF-8".
For some strings it is working fine. I am facing an issue with one particular wide string (maybe there are several others) for which wcstombs returns -1. The error number is set to 84 (Invalid or incomplete multibyte or wide character).
The problem is, when I am running my application through eclipse, the conversion is successful but when my application is run from root (as a service), the conversion fails.
The same string conversion succeeds on Windows using the WideCharToMultiByte API.
I am not able to understand why this is happening.
Hope the experts can help me out.
EDIT
My wide string is L"\006£æ?Jÿ", which when converted and displayed to the user appears as in the attached image.
L"\006" doesn't appear to be a valid Unicode string (neither in UTF-16 nor in UTF-32). I agree with wcstombs: there's no corresponding UTF-8 sequence.
I suspect you didn't use WC_ERR_INVALID_CHARS on Windows. That would catch the same error.

How to manipulate the terminal output buffer directly

I want to write a game in the Linux terminal (in C/C++), so first I should be able to print the characters I want to it. I tried "printf()", but it seems a little inconvenient. I think there should be a character buffer for the output characters of a terminal. Is there any way to directly manipulate that buffer?
Thanks a lot.
It works in a rather different manner.
A terminal is nothing else but a character device, which means it is practically unbuffered. Despite this, you can still manipulate the screen position with appropriate sequences of characters, called "escape sequences". For example, if you issue the \e[A (0x1B 0x5B 0x41) sequence, the cursor goes one line up while leaving the characters intact, while if you issue \e[10;10H (0x1B 0x5B 0x31 0x30 0x3B 0x31 0x30 0x48), your cursor will go to column 10 of row 10 (exactly what you want). After you move the cursor, the next character you write out goes to that position. For further information on escape sequences, look at this link.
Another important thing to know about is the dimensions of your terminal. ioctl can inform you about the size of the terminal window:
#include <stdio.h>
#include <sys/ioctl.h>
#include <termios.h>
#include <unistd.h>

int main ()
{
    struct winsize ws;
    ioctl (STDOUT_FILENO, TIOCGWINSZ, &ws);
    printf ("Rows: %d, Cols: %d\n", ws.ws_row, ws.ws_col);
    return 0;
}
Note that the technique mentioned above is a solution to send commands to the terminal emulator connected to your pseudo terminal device. That is, the terminal device itself remains unbuffered, the commands are interpreted by the terminal emulator.
You might want to use the setbuf function, which allows you to tell printf which buffer to use. You can use your own buffer and control the contents.
However, this is the wrong approach, for two reasons.
First, it won't save you work compared to printf(), fwrite() and putchar().
Second, and more importantly, even these functions won't help you. From your comment it's clear that you want to manipulate a character already on the screen, for example, replace a '.' (empty floor) with a 'D' (Dragon) when that Dragon approaches. You can't do this by manipulating the output buffer of printf(). Once the '.' is displayed, the output buffer has been flushed to the terminal; if you manipulate that buffer afterwards, it has no effect. The terminal has received a copy of that buffer and has displayed what the data in the buffer instructed it to display. In order to change what is displayed, you have to send new commands.
And this is exactly what ncurses does for you. It keeps track of the state of the terminal, the current content, the cursor position and all the nasty details, like how to make a character appear bold.
You won't succeed with printf. That's hopeless. You need to learn what ncurses can do for you, and then everything else is easy.
TLDR: use ncurses
This answer focuses on the why.
Why?
There probably is a way to modify the buffer used by the terminal emulator on your system, given that you have sufficient privileges to write into the respective memory and maybe even modify other system resources, as required.
As terminals historically were distinct, isolated, physical devices rather than being conceptually emulated in software, you couldn't access them in any way other than sending them data.
(I mean, you could always print a message locally to instruct a human to take a screwdriver and physically mess around with the physical terminal device, but that's not how humans wanted to solve the contemporary issue of "how do I change the cursor position and rewrite characters on my (and potentially any other connected) terminal?".)
As others have pointed out, most physical terminals have (at some point) been built to give special meaning to certain input sequences, instead of printing them, which makes them escape sequences in this context, according to how wikipedia.org defines them, that is.
A behavioral convention of how to respond to certain input sequences emerged (presumably for the sake of interoperability or for reasons of market predominance) and got standardized as ANSI escape codes.
Those input sequences survived the transition from physical terminal devices to their emulated counterparts, and even though you could probably manipulate the terminal emulator's memory using system calls, libraries such as ncurses allow you to easily make use of said ANSI escape codes in your console application.
Also, using such libraries is the obvious solution to make your application work remotely:
Yes, technically, you could ssh into another system (or get access in any other more obscure way that works for you) and cause system calls or any other event that would interfere with the terminal emulator in the desired way.
But firstly, I doubt most users would want to grant you the privilege to modify their terminal emulator's memory merely to enjoy your text adventure.
Also, interoperability would suffer, as you couldn't easily support that one user who still has a VT100 and insists on using it to play your game.

Windows version of wcswidth_l

I have some text to write to the Windows console that I need to know the real width of in columns. wcswidth_l seems to be the best option on platforms that have it (though mbswidth_l() would be better since I have no desire to use wchar_t, but for some reason it doesn't exist). But in addition to other platforms, I need something that works on Windows. Although it's unlikely that there's a portable solution, I don't know of any solution at all on Windows. I think the console has an API for getting cursor position and such, so I could write the text out and check the change in position. That would be accurate I guess, but writing out extra output isn't acceptable at all.
How does one go about getting the column width of a string or character on Windows?
Edit:
wcswidth_l returns the number of console columns used to display a string. Some characters take up one column and others, e.g. Japanese characters, take up two.
As an example, the 'column width' of "a あ" is four: 'a' is one, ' ' is one, and 'あ' is two. (Assuming the console is set up to actually display non-ASCII characters, that is.) Also, it'd be nice if the API supports strings using codepage 65001 (UTF-8).
First of all, the Windows Console API is located here.
Secondly, is the function you're looking for GetConsoleFontSize?
I'll try to quickly type an example in a second.
EDIT: Here you go. Forgive me if there's a small error. I actually found it was even easier: GetCurrentConsoleFont fills in a COORD structure on the way to getting the index to pass to GetConsoleFontSize, so that's a step saved :)
#define _WIN32_WINNT 0x0501 //XP, 0x0601=Windows 7
#include <windows.h>

int main()
{
    HANDLE hStdOutput = GetStdHandle (STD_OUTPUT_HANDLE);
    CONSOLE_FONT_INFO cfi;
    GetCurrentConsoleFont (hStdOutput, FALSE, &cfi);
    //cfi.dwFontSize.X == x size
    //cfi.dwFontSize.Y == y size
}
EDIT:
If you don't mind invisible output, you can use CreateConsoleScreenBuffer to pretty much have an invisible console window at your command while leaving yours unaffected. GetConsoleScreenBufferInfoEx will tell you the cursor position, at which point you can use WriteConsole to write to your buffer (invisibly), and check the cursor location again versus the number of characters actually written. Note that checking the cursor location beforehand would not require clearing the screen to use this method.
If you cannot afford to do extra output, visible or invisible, I'm not sure there really is a possibility.
Portable approach
Since the width of characters depends more on the characters themselves than on the system on which they are displayed (OK, there might be exceptions, but they should be rather rare), one can use a separate function to do that (on Windows too). This requires Unicode characters, as it makes it much easier to analyze the width of strings, but one can surely write a wrapper to convert between encodings.
Available implementation
Here is a suitable and portable implementation, which one can plug into an application and fall back to on Windows.

ASCII user interface in C++ w/ Unix PuTTY terminal using escape sequences

I'm trying to make a simple ASCII user interface for a simple internet chat program. I'm planning it to look like this:
(name): message
(name): message
---------------------------------------------
(you): message |(cursor)
I was going to use ASCII (ANSI?) control characters to accomplish this.
Whenever the chat client receives a message from the server, it should update so that the message appears as the first message above the dash-line, then return the cursor to its previous position so the user can continue typing where they left off.
My initial plan was:
1. save the current cursor position (\e7)
2. move the cursor up 1 line (to the dash-line) and to the beginning of that line (\e[1F)
3. move the dash line down (\n)
4. move the cursor up one line again (to the now empty line) (\e[1A)
5. print the message from the server
6. restore previous cursor position (\e8)
all together: "\e7\e[1F\n\e[1A" << message << "\e8";
Where I'm having trouble is that the newline character seems only to move the cursor to the next line, and not actually insert a blank line. How can I accomplish this behavior?
This is for a homework assignment, but this is just an extra bit of flair I wanted to add on for myself. The actual assignment is already completed.
note: algorithm for handling user's input on their own screen is handled correctly already.
Look into something like pdcurses. It's cross platform. That will make all of those manipulations a lot easier. You can also check into curses on *nix and the ancient conio library on Windows if you don't mind your code not being portable.
If you are using bash, you can do it with special commands using character escapes.
See http://tldp.org/HOWTO/Bash-Prompt-HOWTO/x361.html
Seriously consider using a curses library (see Wikipedia for more information).

Multi-byte character set in MFC application

I have an MFC application to which I want to add internationalization support. The project is configured to use the "multi-byte character set" (the "Unicode character set" is not an option in my situation).
Now, I would expect the CWnd::OnChar() function to send me multi-byte characters if I set my keyboard to some foreign language, but it doesn't seem to work that way. The OnChar() function always sends me a 1-byte character in its nChar variable.
I thought that the _getmbcp() function would give me the current code page for the application, but this function always returns 0.
Any advice would be appreciated.
Any help here? Multibyte Functions in Microsoft C Run-time
As far as changing the default code page:
The default code page for a user (for WinXP - not sure how it is on Vista) is set in the "Regional and Languages options" Control Panel applet on the "Advanced" tab.
The "Language for non-Unicode programs" setting sets the default code page for the current user. Unfortunately it does not actually tell you the codepage number it's configuring; it just gives the language (which might be further specified with a region variant). This makes sense from an end-user perspective, because codepage numbers have no meaning to 99.999% of end users. You need to reboot for a change to take effect. If you use regmon to determine what it changes, you could probably come up with something that sets the default codepage more directly.
Microsoft also has an unsupported utility, AppLocale, for testing localization that changes the codepage for particular applications: http://www.microsoft.com/globaldev/tools/apploc.mspx
Also you can change the code page for a thread by calling SetThreadLocale() - but you also have to call the C runtime's setlocale() function because some CRT functions don't talk to the Win API locale functions (and vice versa). See "Windows SetThreadLocale and CRT setlocale" by Chris Grimes for details.
As always in non Unicode scenarios, you'll get a reliable result only if the system locale (aka in Control Panel "language for non-unicode applications") is set accordingly. If not, don't expect anything good.
For example, if the system locale is Chinese Traditional, you'll receive 2 successive WM_CHAR messages (one for each byte, assuming the user composed a double-byte character).
isleadbyte() should help you determine whether a 2nd byte is coming.
If your system locale is NOT set to Chinese, don't expect to receive correct messages even using a Chinese keyboard/IME. The misleading part is that some scenarios work: e.g. using a Greek keyboard, you'll receive WM_CHAR char values based on the Greek codepage even if your system locale is Latin-based. But you should really stay away from trying to cope with such scenarios: success is not guaranteed and will likely vary according to Windows version and locale.
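To illustrate the two-message scenario, a handler could look roughly like this (a sketch only; CMyWnd and the m_leadByte member are hypothetical names, and it assumes the system locale matches the DBCS code page as described above):

```cpp
// Assemble one complete multi-byte character from the two successive
// WM_CHAR messages Windows sends for a double-byte character.
void CMyWnd::OnChar(UINT nChar, UINT nRepCnt, UINT nFlags)
{
    char mb[3] = {0};

    if (m_leadByte == 0 && isleadbyte((unsigned char)nChar)) {
        m_leadByte = nChar;        // first byte: remember it and wait
        return;
    }
    if (m_leadByte != 0) {
        mb[0] = (char)m_leadByte;  // lead byte from the previous message
        mb[1] = (char)nChar;       // trail byte from this message
        m_leadByte = 0;
    } else {
        mb[0] = (char)nChar;       // ordinary single-byte character
    }
    // mb now holds one complete character in the system code page.

    CWnd::OnChar(nChar, nRepCnt, nFlags);
}
```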
As MikeB wrote, MS AppLocale is your friend to make basic tests.
[ad] and appTranslator is your friend if you need to translate your UI [/ad]
For _getmbcp, MSDN says "A return value of 0 indicates that a single byte code page is in use." That sure makes it not very useful. Try one of these: GetUserDefaultLCID, GetSystemDefaultLCID, GetACP. (Now why is there no "user" equivalent for GetACP?)
Anyway if you want _getmbcp to return an actual value then set your system default language to Chinese, Japanese, or Korean.
There is actually a very simple (but weird) way to force the OnChar function to send Unicode characters to the application even if it's configured for the multi-byte character set:
SetWindowLongW( m_hWnd, GWL_WNDPROC, GetWindowLong( m_hWnd, GWL_WNDPROC ) );
Simply by calling the Unicode version of SetWindowLong, you force the application to receive Unicode characters.