In the application there is a dialog where only numeric string entries are valid. Therefore I would like to set the numeric keyboard layout.
Does anyone know how to simulate key press on the keyboard or any other method to change the keyboard layout?
Thanks!
You don't need to.
Just like full Windows, you can set the edit control to be numeric-input only. You can do it either in code or in the dialog editor via the properties for the edit control.
The SIP should automatically display the numeric keyboard when the numeric-only edit control gets focus.
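If you prefer to do it in code, an untested minimal sketch (hEdit is assumed to be the handle of your edit control) would be adding the ES_NUMBER style:
#include <windows.h>
// Add the ES_NUMBER style to an existing edit control so it accepts digits only.
void MakeEditNumeric(HWND hEdit)
{
    LONG style = GetWindowLong(hEdit, GWL_STYLE);
    SetWindowLong(hEdit, GWL_STYLE, style | ES_NUMBER);
}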
You can use the InputModeEditor:
InputModeEditor.SetInputMode(textBox1, InputMode.Numeric);
There is only one way to do this (edit: this is referring to the SIP in non-smartphone Windows Mobile, so I'm not sure it's relevant to your question), and it does involve simulating a mouse click on the 123 button. That is only half the problem, however, since you also need to know whether the keyboard is already in numeric mode or not. The way to do this is to peek at a pixel near the upper-left corner of the keypad - if you look at how the 123 button works, you'll see that it's system text on a window background, and then inverted in numeric mode (so the pixel will be the system text color only when in numeric mode). There's one more bit of weirdness you have to do to guarantee it will work on all devices (you have to draw a pixel on the keyboard, too).
Lucky for you, I have an easy-to-use code sample that does all this. Unlucky for you, it's in C#, but I think it should at least point you in the right direction.
(this is not MFC)
I created a window which is transparent and covers the whole screen. However, I want it to be merely an overlay: it should not accept any clicks or key presses anywhere, and it only covers parts of the screen (and even there it should not accept input). It should always be on top (that works so far) and should not block input to the windows below it. Is there a way to set this somewhere, or a way to work around it?
EnableWindow(hWnd, false); does not do what I want (obviously).
Ah, sorry for posting. Finally found it out!
WS_EX_TRANSPARENT is the style you want to add.
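For anyone else hitting this, a rough (untested) sketch of adding the style to an existing window; on desktop Windows you typically combine it with WS_EX_LAYERED to get full click-through:
// hWnd is the overlay window; WS_EX_TRANSPARENT (plus WS_EX_LAYERED) lets
// mouse input fall through to whatever window is underneath.
LONG_PTR exStyle = GetWindowLongPtr(hWnd, GWL_EXSTYLE);
SetWindowLongPtr(hWnd, GWL_EXSTYLE, exStyle | WS_EX_TRANSPARENT | WS_EX_LAYERED);
SetLayeredWindowAttributes(hWnd, 0, 255, LWA_ALPHA); // keep it fully opaque/visible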
I searched, but most posts are just telling me what I already have, so below is basically my code right now:
DIKeyboard->Acquire();
DIMouse->Acquire();
DIMouse->GetDeviceState(sizeof(DIMOUSESTATE), (LPVOID)&mouseCurrState);
DIKeyboard->GetDeviceState(sizeof(keyboardState), (LPVOID)&keyboardState);
MousePos.x += mouseCurrState.lX;
MousePos.y += mouseCurrState.lY;
Any post telling me how to get absolute position just says to use those last two lines. But my program is windowed, and the mouse can start anywhere on the screen.
I.e. if my mouse happens to be in the centre of my screen when the program starts, that becomes position 0,0. I basically just want the top left of my window (not my screen) to be my 0,0 mouse coordinate, but I'm having a hard time finding anything relevant.
Thanks for any help! :)
Following the discussion in the comments, you'll have to decide which method works best for you. Unfortunately, having never worked with DirectInput, I do not know the ins-and-outs of it.
However, window messages work best for RTS-style controls, where a cursor is drawn on screen. This is because window messages respect user settings such as mouse acceleration and mouse speed, whereas DirectInput only uses the driver settings (not the Control Panel settings). The user will expect the mouse to feel the same, especially in windowed mode.
DirectInput works better for FPS-style controls, when there is no cursor drawn, as window messages give you only the cursor coordinates, and not offset values. This means that once you are at the edge of the screen, window messages will no longer allow you to detect the mouse being moved further (actually, I am not 100% sure on this, so if someone could verify, please feel free to comment).
For the keyboard, I would definitely suggest window messages, because DirectInput offers no advantages here: WM input is easier to use and quite powerful (the WM_KEYDOWN message contains a lot of useful data), and it allows you (via TranslateMessage) to get proper text input, adjusted to locale, etc.
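As a rough illustration of what that looks like (a standard Win32 message pump and a stripped-down window procedure; names are generic):
#include <windows.h>
// Message loop: TranslateMessage is what turns WM_KEYDOWN into locale-aware WM_CHAR.
void RunMessageLoop()
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0)
    {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}
// Window procedure: WM_KEYDOWN gives the virtual key, WM_CHAR the translated text.
LRESULT CALLBACK WndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_KEYDOWN:
        if (wParam == VK_ESCAPE)   // raw key handling, e.g. game controls
            PostQuitMessage(0);
        return 0;
    case WM_CHAR:
        // wParam holds the translated character, respecting layout and locale
        return 0;
    default:
        return DefWindowProc(hWnd, msg, wParam, lParam);
    }
}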
Solving your problem with DirectInput:
You could probably use GetCursorPos followed by ScreenToClient to initialise your MousePos structure. I'm guessing you'll need to redo this every time you lose mouse input and reacquire it.
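Something like this (untested; hWnd and MousePos are the window handle and position struct from your own code):
// Re-sync the accumulated position with the real cursor, expressed in
// client coordinates of the window (so the window's top left is 0,0).
POINT pt;
GetCursorPos(&pt);           // absolute screen coordinates
ScreenToClient(hWnd, &pt);   // convert to this window's client coordinates
MousePos.x = pt.x;
MousePos.y = pt.y;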
Hybrid solution (for RTS-like controls):
It might be possible to use a hybrid solution for the mouse if you want RTS-like controls. If that is the case, I suggest (though I have not tested this) using window messages for mouse movement, which avoids the need for the workaround mentioned above, and using DirectInput only to detect the additional mouse buttons.
Now, one thing I think you should do in such a hybrid approach is not act on the button directly when you detect it via DirectInput, but rather post a custom application message to your own message queue (using PostMessage and WM_APP) with the relevant information. I suggest this because with WM you do not get the real-time state of the mouse and keyboard, but rather the state at the time of the message. Posting a message that the button was pressed lets you handle the extra buttons in the same state-dependent manner (I don't know how noticeable this 'lag' effect is). It also makes the entire input handling very uniform, as every bit of input then enters as a window message.
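A sketch of that idea (the message id, packing, and handler name are purely illustrative):
// Custom message for "extra mouse button pressed", detected via DirectInput.
const UINT WM_APP_EXTRA_MOUSEBUTTON = WM_APP + 1;
// In the DirectInput polling code, when e.g. button 4 (index 3) is down:
if (mouseCurrState.rgbButtons[3] & 0x80)
    PostMessage(hWnd, WM_APP_EXTRA_MOUSEBUTTON, 3 /* button index */, 0);
// In the window procedure, handle it like any other input message:
// case WM_APP_EXTRA_MOUSEBUTTON: OnExtraMouseButton((int)wParam); break;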
In Visual Studio you need to set the extended window style to get a right-to-left reading order (WS_EX_LAYOUTRTL). Why is this required, since if I'm using UNICODE and displaying Arabic characters the only possible way to display them is right-to-left? I'm surprised the system doesn't simply render it the correct way around. Note: this is on a Windows Mobile system where I've copied the Arial Unicode MS font onto the device, which might explain why it can't cope.
Windows' support for RTL is more complex than just the text: WS_EX_LAYOUTRTL is actually about controlling the layout of the other elements in the window - from MSDN:
The window layout applies to text but also affects the other GDI elements of the window, including bitmaps, icons, the location of the origin, buttons, cascading tree controls, and whether the horizontal coordinate increases as you go left or right. For example, after an application has set RTL layout, the origin is positioned at the right edge of the window or device, and the number representing the horizontal coordinate increases as you move left.
So if you create a dialog that has this, the dialog will be "flipped" automatically (because the coordinates are reversed). If a scrollbar is present, it will be on the left side of the window, not the right. Treeviews will have the expand/collapse box and connecting lines on the right side, not the left - and so on.
In the case of a static, which doesn't contain other windows, the style may not appear to make much difference - but it likely will flip the justification: a static that is right-justified using SS_RIGHT would likely end up actually left-justified when WS_EX_LAYOUTRTL is used.
Also, as the other answer notes, not all text is spans of a single language. It's possible to have a single string that mixes scripts: you can have L-to-R spans within R-to-L, and vice versa, so having Windows "do the right thing" based on the text used would be very fragile.
Also consider the case of a treeview that displays filenames on an Arabic system: the treeview should keep a right-to-left layout (aligned against the right side) even if the user happens to be browsing a directory or file system with English filenames.
Long story short: WS_EX_LAYOUTRTL is really about overall window layout, not specifically text direction itself. Even without this flag, you should still get Arabic/Hebrew rendered correctly as R-to-L if using the standard APIs/controls.
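For reference, setting it at creation time looks roughly like this (class name and title are placeholders):
// Create a top-level window whose layout (origin, scrollbars, child placement)
// is mirrored right-to-left.
HWND hWnd = CreateWindowExW(
    WS_EX_LAYOUTRTL,                    // mirror the window layout
    L"MyWindowClass", L"RTL layout demo",
    WS_OVERLAPPEDWINDOW,
    CW_USEDEFAULT, CW_USEDEFAULT, 400, 300,
    NULL, NULL, GetModuleHandle(NULL), NULL);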
Presumably because it can't be determined at the window level what you're going to display - you could be displaying nothing, a language read left to right, or a language read right to left. Thus you need to set it explicitly rather than having the system attempt to deduce it from incomplete information.
I'm making a random number generator: the program will create several random numbers, choose one of them, and then display that number in the window.
I was wondering if there was a way to make that specific piece of text bigger?
I don't want to change the size of all of the text in the window, as there is other text in the window whose size I don't want to change.
Thanks for any help you can give
No, but you can make it bold, change the font color, or the background color for the specific text. If all you want is to make that specific piece of text stand out, I'd go with colorizing it.
As for how to do that... It's platform dependent. What platform are you on? Windows? Linux? What shell?
Take a look at the Windows Console API. That should have what you need.
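For instance, a minimal (Windows-only) sketch that prints just the chosen number in a different color:
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE hOut = GetStdHandle(STD_OUTPUT_HANDLE);
    CONSOLE_SCREEN_BUFFER_INFO info;
    GetConsoleScreenBufferInfo(hOut, &info);          // remember the current colors

    SetConsoleTextAttribute(hOut, FOREGROUND_RED | FOREGROUND_INTENSITY);
    std::printf("42\n");                              // the number to highlight

    SetConsoleTextAttribute(hOut, info.wAttributes);  // restore the old colors
    return 0;
}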
Console text doesn't allow for the rich formatting you are referring to. You would have to move to a graphical output to render the size differences.
Generally, programs can't control the size of the text in the terminal. You may be able to change the color of a specific part of the text, though. Search for terminal escape sequences for information on how to do that on various terminals. Some terminals also handle bold, italics, and underlining.
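As a rough example on terminals that understand ANSI escape sequences (most Linux/macOS terminals, and recent Windows consoles with virtual terminal processing enabled):
#include <cstdio>

int main()
{
    // \x1b[1;31m switches to bold red, \x1b[0m resets; only the number is affected.
    std::printf("Your number is: \x1b[1;31m42\x1b[0m\n");
    return 0;
}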
No, but you can change the color of the text and of the text's background instead. Would that be a good solution for your problem? There are a lot of specific examples available on the internet.
A possible console mode solution could involve FIGlet. You can tweak the output to write in many different fonts.
The output is larger, but there's no guarantee that it's suitable for your application.
Open your console app, go to the system menu of the console window (top-left corner, right-click), open the Font tab, and choose what you wish. The next time you open this (!) console app, the font will be as you selected; other console windows are not affected.
Right Click the top bar of the window
Click Properties
Click Font and select your font size
This isn't done through code, but it will help on your PC.
I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app; the user interacts with the 3D viewports primarily through the mouse and the three keyboard modifiers (Alt, Shift, Ctrl). Shift and Ctrl modify the currently selected tool options, and Alt operates the camera - much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt + right-button mouse-drag events and interprets them as a window-drag command - even when the user is not holding the title bar, and regardless of the currently highlighted widget.
This is a disaster for me because camera keyboard controls are quite standardised in my target industry. So does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to this one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint on the window, then the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
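Roughly like this (untested; window is assumed to be your top-level QWidget):
// Bypass the window manager entirely - it can no longer intercept Alt+drag,
// but you also lose the native frame, so weigh this carefully.
window->setWindowFlags(window->windowFlags() | Qt::X11BypassWindowManagerHint);
window->show();  // setWindowFlags() hides the window, so it must be shown again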
Another way: if your users are only on one or two varieties of Linux, add something to the installer which asks the user whether they want to adjust the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or equivalent).