I feel like this question must have been asked and answered a thousand times, but I can't seem to find it here. Sorry if I missed it!
I have developed a program using Qt on my desktop, where I sometimes (rarely) have to input some text in a QLineEdit (typically a new username). It works fine, but now I want to run it on a computer with a touch screen and be able to enter text using an on-screen keyboard. I've seen many posts with related questions (usually a bit less trivial) with answers that seem way too complex for such a common task. So my question is:
Isn't there a very basic way to bring up the touch-screen keyboard whenever the QLineEdit field is selected?
I'm wondering whether this would happen automatically on a regular tablet, but maybe not on those silly laptops (like the one I have) which do have a physical keyboard, but also a touch screen that can be folded down over the keyboard so the machine can be used as a tablet.
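For context, the most self-contained approach I've come across so far is Qt's own Virtual Keyboard module, which (if I understand the docs) only needs an environment variable set before the QApplication is created. A minimal sketch, assuming the module is installed:

    // Ask Qt to use its virtual keyboard as the input method. The
    // environment variable must be set before QApplication is created.
    #include <QApplication>
    #include <QLineEdit>

    int main(int argc, char *argv[])
    {
        qputenv("QT_IM_MODULE", QByteArray("qtvirtualkeyboard"));
        QApplication app(argc, argv);

        QLineEdit edit;   // tapping this should raise the on-screen keyboard
        edit.show();
        return app.exec();
    }

But I don't know whether that counts as "basic", or whether it behaves sensibly on a convertible laptop.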
I think I understand the reasoning behind Wayland preventing windows from being manually positioned, but there are a couple of instances in my Qt 5.15.2 application where I really need some degree of control, or even absolute control, over the window's position:
In the first instance the window is a pop-up. I'm using a QMainWindow so I can have it borderless with rounded corners, a coloured background, and some transparency. I need it to pop up when I hover over a certain element on my QGraphicsScene, and I want to ensure that when it appears it is close to the QGraphicsScene element in question but does not obscure it. A call to QMainWindow::move was perfect pre-Wayland and works fine on Windows too.
In the second instance my application has a number of windows which the user may open and close over time. The first time each one opens I don't care too much where it is positioned but if the user moves it before closing it then the next time the user opens it I want it to reappear where the user previously left it. Again, using the move() method of the various things that inherit from QWidget was perfect pre-Wayland.
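For reference, this is roughly the pattern that worked for me pre-Wayland (a minimal sketch; the organisation and key names are illustrative):

    // Remember where the user left a window and restore it next time.
    #include <QSettings>
    #include <QWidget>

    void saveWindowGeometry(const QWidget *w)
    {
        QSettings settings("MyOrg", "MyApp");
        settings.setValue("toolWindow/geometry", w->saveGeometry());
    }

    void restoreWindowGeometry(QWidget *w)
    {
        QSettings settings("MyOrg", "MyApp");
        // Restores size and position on X11/Windows; under Wayland the
        // position part is what gets ignored.
        w->restoreGeometry(settings.value("toolWindow/geometry").toByteArray());
    }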
Does anyone know of any work-arounds that would solve either of these problems under Qt on Wayland? It seems to me that neither is a particularly uncommon requirement and if Wayland won't let me set a specific position it might have a way of achieving something similar (e.g. a way of telling the compositor to position something near something else without obscuring it) and hopefully that maps onto something in the Qt API.
Say I was building a simple game and I had a menu with Play, Settings, Music, etc., all laid out in the middle of the screen. How would I check whether the mouse was over one of those areas, and whether it was clicked while over one of them?
Other people have asked similar things, but I cannot find a working example or a clear solution to my question.
For clicking and mouse input, there are libraries/APIs that provide user input (such as GLFW with OpenGL); it really depends on what platform you're targeting. You can then convert your mouse's XY position to a position relative to the window, and check whether it lies between the bottom-left and top-right corners of the GUI element's rectangle.
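A minimal sketch with GLFW, for instance (note that GLFW reports the cursor relative to the top-left of the window's content area; the menu rectangle below is made up):

    #include <GLFW/glfw3.h>

    struct Rect { double x, y, w, h; };   // a menu item's bounding box

    bool contains(const Rect &r, double px, double py)
    {
        return px >= r.x && px <= r.x + r.w &&
               py >= r.y && py <= r.y + r.h;
    }

    int main()
    {
        if (!glfwInit()) return 1;
        GLFWwindow *win = glfwCreateWindow(640, 480, "Menu", nullptr, nullptr);
        Rect playButton{270, 200, 100, 40};   // hypothetical "Play" item

        while (!glfwWindowShouldClose(win)) {
            double mx, my;
            glfwGetCursorPos(win, &mx, &my);  // cursor in window coordinates

            bool over = contains(playButton, mx, my);
            if (over && glfwGetMouseButton(win, GLFW_MOUSE_BUTTON_LEFT) == GLFW_PRESS) {
                // "Play" was clicked
            }
            glfwPollEvents();
        }
        glfwTerminate();
    }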
I'm no expert in C++ GUI programming, but to make a clickable button you probably don't want to do all that hard work yourself. All you need to find out is how to add a button from a library and bind that button to a function; that's how it's usually done.
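For example, with Qt (just one option among many; the lambda is illustrative):

    // A library-provided button bound to a function.
    #include <QApplication>
    #include <QPushButton>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QApplication app(argc, argv);

        QPushButton button("Play");
        QObject::connect(&button, &QPushButton::clicked, [] {
            qDebug() << "Play clicked";   // bind the click to any function you like
        });

        button.show();
        return app.exec();
    }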
Also, you should provide information about what you have tried and the problem you are facing; it makes it much easier for others to help.
I've been weighing the pros/cons of making a GUI app, and I've decided a console app is much more powerful for my calculator, especially since it does different things like FOIL, quadratic equations, etc. So my question is: how do I make the console look like a GUI-based app?
The answer to your question depends on exactly what you mean by "console." If you're talking about Windows console windows, then the answer is "maybe." Some Windows installations can emulate VGA/EGA graphics within a console window, making them able to play old games for DOS.
Your mission would be to implement every GUI widget you need, such as clickable buttons, text-entry fields, etc. in terms of simple graphics primitives for drawing lines and rectangles. Then you have to write code that figures out where the mouse is and draws the mouse pointer in the right spot. You'd also have to write code to make the cursor blink, to make the arrow keys move the cursor, and to make it possible to select characters in a text entry box and copy, cut, and paste them.
When you got done, you'd have a program that works on some people's computers, but not on others. On some Windows installations, the console windows can't do graphics or go fullscreen. Your app wouldn't work at all on those systems, although you could write a fullscreen Windows app using a 2D game library such as SDL or Allegro instead of writing a console app, which would bring you back to the previous paragraph.
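To give a flavour of what that route looks like even in the simplest case, here's a minimal SDL2 sketch of a single hand-drawn "button" with click detection (sizes and colours are made up; there's no text rendering, no blinking cursor, none of the rest):

    #include <SDL.h>

    int main(int, char **)
    {
        SDL_Init(SDL_INIT_VIDEO);
        SDL_Window *win = SDL_CreateWindow("Calc", SDL_WINDOWPOS_CENTERED,
                                           SDL_WINDOWPOS_CENTERED, 400, 300, 0);
        SDL_Renderer *ren = SDL_CreateRenderer(win, -1, SDL_RENDERER_ACCELERATED);
        SDL_Rect button{150, 130, 100, 40};   // our hand-rolled "button"

        bool running = true;
        while (running) {
            SDL_Event e;
            while (SDL_PollEvent(&e)) {
                if (e.type == SDL_QUIT) running = false;
                if (e.type == SDL_MOUSEBUTTONDOWN) {
                    SDL_Point p{e.button.x, e.button.y};
                    if (SDL_PointInRect(&p, &button)) {
                        // the "button" was clicked
                    }
                }
            }
            SDL_SetRenderDrawColor(ren, 30, 30, 30, 255);    // background
            SDL_RenderClear(ren);
            SDL_SetRenderDrawColor(ren, 90, 120, 200, 255);  // button face
            SDL_RenderFillRect(ren, &button);
            SDL_RenderPresent(ren);
        }
        SDL_DestroyRenderer(ren);
        SDL_DestroyWindow(win);
        SDL_Quit();
    }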
As you might have guessed by now, rolling your own GUI would be a whole lot more work than writing a Windows GUI program in which the buttons, text fields, etc are already implemented for you, the cursor already blinks, the mouse already clicks, etc.
Also, the code that does the actual calculations should be totally separate from the code that gets the input from the user and puts the answers on the screen, so that code shouldn't factor into whether you want to write a GUI or a console app. The calculations shouldn't even be in the same .cpp file as the I/O routines.
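For instance (a minimal sketch; the quadratic solver stands in for whatever your calculator actually does):

    #include <cmath>
    #include <iostream>
    #include <optional>
    #include <utility>

    // Pure calculation: real roots of ax^2 + bx + c = 0, if any.
    // This function knows nothing about consoles or GUIs.
    std::optional<std::pair<double, double>>
    quadraticRoots(double a, double b, double c)
    {
        double d = b * b - 4 * a * c;
        if (d < 0) return std::nullopt;
        double s = std::sqrt(d);
        return std::pair{(-b + s) / (2 * a), (-b - s) / (2 * a)};
    }

    // The I/O lives elsewhere; swapping this for a GUI wouldn't touch the math.
    int main()
    {
        double a, b, c;
        std::cin >> a >> b >> c;
        if (auto roots = quadraticRoots(a, b, c))
            std::cout << roots->first << " " << roots->second << "\n";
        else
            std::cout << "no real roots\n";
    }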
Now, some programmers use the term "console" to refer to xterm windows on Linux. These are not the same thing at all, and cannot draw graphics (and "console" is the wrong name for them to boot). But sometimes you see menus and stuff within them, "drawn" with colored text. Usually, these are drawn and managed using the external dialog shell command.
I'm currently making a whimsical iPhone app that will let you turn your Windows cursor into a space ship controlled by the iPhone (simple rotation and such). I currently have the movement and clicking handled, but I'd like to add additional features, such as bullets that you can shoot around the screen, which move until they die or hit a button, which is then clicked. I have two questions:
Question number one: Is there any way to detect whether the mouse is currently over some clickable button? Or is there any way to see whether a mouse event was handled?
Question number two: Is there any way to overlay the screen with small bullets? (perhaps small [3,3] child windows or something?)
Further Information:
The client program will be in C++
SDL or SFML will likely be the graphics libs, if any are necessary (WinAPI should be fine)
The most reliable route would be the Microsoft Active Accessibility interface. Many tools to help visually impaired people need to answer the question "Is this a button?", and MSAA answers that question.
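A minimal sketch of that route (link against oleacc.lib; assumes COM has been initialised, and most error handling is omitted):

    #include <windows.h>
    #include <oleacc.h>

    // Ask MSAA what is under a screen point and whether it's a push button.
    bool isPushButtonAt(POINT screenPt)
    {
        IAccessible *acc = nullptr;
        VARIANT child;
        if (FAILED(AccessibleObjectFromPoint(screenPt, &acc, &child)))
            return false;

        VARIANT role;
        VariantInit(&role);
        bool isButton = SUCCEEDED(acc->get_accRole(child, &role)) &&
                        role.vt == VT_I4 &&
                        role.lVal == ROLE_SYSTEM_PUSHBUTTON;

        VariantClear(&role);
        VariantClear(&child);
        acc->Release();
        return isButton;
    }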
Overlaying the screen is trivial in a Windows environment: just create a window. :) It can be partially transparent, so you're not restricted to rectangular bullets.
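A minimal WinAPI sketch of one such "bullet" window, topmost, click-through and slightly translucent (move it with SetWindowPos as the bullet travels):

    #include <windows.h>

    HWND createBulletWindow(HINSTANCE inst, int x, int y)
    {
        WNDCLASSW wc = {};
        wc.lpfnWndProc   = DefWindowProcW;
        wc.hInstance     = inst;
        wc.lpszClassName = L"BulletWindow";
        RegisterClassW(&wc);

        // WS_EX_LAYERED allows per-window transparency, WS_EX_TRANSPARENT
        // lets mouse clicks pass through, WS_EX_TOPMOST keeps the bullet
        // above other windows, WS_EX_TOOLWINDOW keeps it off the taskbar.
        HWND hwnd = CreateWindowExW(
            WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
            L"BulletWindow", L"", WS_POPUP,
            x, y, 8, 8, nullptr, nullptr, inst, nullptr);

        SetLayeredWindowAttributes(hwnd, 0, 200, LWA_ALPHA);  // slightly translucent
        ShowWindow(hwnd, SW_SHOWNOACTIVATE);
        return hwnd;
    }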
I'm not sure quite how to phrase the question concisely, so if there is a similar question, please point me in the right direction and close this one.
I am currently building a CAD app; the user interacts with the 3D viewports primarily through the mouse and the three keyboard modifiers (Alt, Shift, Ctrl). Shift and Ctrl modify the currently selected tool's options, and Alt operates the camera, much like any other 3D CAD app.
However, I'm currently developing on a GNOME desktop, and its window manager (AFAIK) catches any Alt+right-button mouse drag events and interprets them as a window-drag command, even when not holding the title bar and regardless of the currently highlighted widget.
This is a disaster for me, because camera keyboard controls are quite standardised in my target industry. Does anyone know of a way to override this behaviour, preferably from within Qt, and preferably scoped to this one scenario in one particular widget class?
Thank you,
Cam
If you use the Qt::X11BypassWindowManagerHint on the window, then the window manager can't steal your keypresses. However, this means you lose the native window frame (including decoration, moving, and resizing), so it is likely you don't want to do this.
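In code (remember that changing the window flags hides the window, so it has to be shown again):

    // Opt this one window out of window-manager handling (X11 only).
    widget->setWindowFlags(widget->windowFlags() | Qt::X11BypassWindowManagerHint);
    widget->show();   // setWindowFlags() hides the window; re-show it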
Another way: if your users are only on one or two varieties of Linux, add something to the installer that asks users whether they want to adjust the GNOME (or whatever) key settings, and if so, changes them via gconftool-2 (or the equivalent); for Metacity the relevant setting is, as far as I recall, the mouse_button_modifier key.