I'm using SDL2 in my program.
The gamepad is initialized using:
SDL_Joystick* Pad1 = NULL;
Pad1 = SDL_JoystickOpen( 0 );
In my event-handling function, I included the following:
switch( event.type ){
    // Button event, as an example:
    case SDL_JOYBUTTONDOWN:
        //printf("Button: %d", event.jbutton.button, " ");
        if(event.jbutton.button == ControllP1.MoveLeftButton)
            MoveLeft = true;
        // lots of other cases
    case SDL_JOYHATMOTION:
        if(event.jhat.value == SDL_HAT_UP)       { MoveUp = true;    MoveLeft = false;  MoveRight = false; MoveDown = false; }
        if(event.jhat.value == SDL_HAT_DOWN)     { MoveDown = true;  MoveUp = false;    MoveLeft = false;  MoveRight = false; }
        if(event.jhat.value == SDL_HAT_LEFT)     { MoveLeft = true;  MoveDown = false;  MoveUp = false;    MoveRight = false; }
        if(event.jhat.value == SDL_HAT_RIGHT)    { MoveRight = true; MoveDown = false;  MoveUp = false;    MoveLeft = false; }
        if(event.jhat.value == SDL_HAT_CENTERED) { MoveDown = false; MoveUp = false;    MoveLeft = false;  MoveRight = false; }
        if(event.jhat.value == SDL_HAT_LEFTUP)   { MoveDown = false; MoveUp = true;     MoveLeft = true;   MoveRight = false; }
        if(event.jhat.value == SDL_HAT_RIGHTUP)  { MoveDown = false; MoveUp = true;     MoveLeft = false;  MoveRight = true; }
        if(event.jhat.value == SDL_HAT_RIGHTDOWN){ MoveDown = true;  MoveUp = false;    MoveLeft = false;  MoveRight = true; }
        if(event.jhat.value == SDL_HAT_LEFTDOWN) { MoveDown = true;  MoveUp = false;    MoveLeft = true;   MoveRight = false; }
        break;
Note that this code isn't targeting only the specified pad; it should react to input from any gamepad.
On openSUSE/Linux this works fine: as soon as I use the hat on any gamepad, it triggers the event. On Windows, however, it doesn't work. The rest of the code runs as intended (including the specified axis, button, etc. events), but using the hat doesn't cause any reaction. What is the reason for this? Do I need to specify a gamepad when using SDL2 under Windows?
Thanks and greetings, mumbo
Edit1:
Surfing around, I probably found an explanation for my problem:
https://forums.libsdl.org/viewtopic.php?p=39991
I suppose the DPAD isn't detected as a hat but rather as an analog stick under Windows when using the Joystick API?
Edit2:
It was a bug in the SDL2.dll on the Windows machine I used for testing. Replacing the SDL2.dll with a fresh one solved the problem; hats are responding as intended :)
Thanks for the help guys, good to know about the GameController API.
I updated SDL2 on the target Windows machine, and the whole thing is working as intended. The code is fine.
Thanks for the help, everyone, good to have learned about the GameController API.
tl;dr: On Windows you might have driver problems if your device is an unusual one, and you might want to use the GameController API if you're targeting gamepads, as it gives you a more consistent interface to work with.
Mumbo: The hat usually has the form of a cross (or a circle with a cross shape on top). You can usually find it on the left side of your gamepad.
So you mean the DPAD.
First, SDL's Joystick API is a bit lower-level: it handles actual joysticks, steering wheels and (in your use case) gamepads without distinguishing between device types. This means the API might not be consistent across devices; for example, two different gamepads might map the same button to different indexes.
Although I think the joyhat is usually mapped to the DPAD on the more common devices, the other buttons might not be (triggers, x, y, a, b, star, circle, etc.). This is where the GameController API comes to the rescue: it gives you a more consistent way to handle the controller (by presenting it as an Xbox-360-style gamepad, backed by a database of mappings for many devices).
In SDL's source tree there is a database of controllers you can load (or it may be loaded by default, I didn't check); you can also check this link, where I think there is another database of mappings for all kinds of controllers that you can load into your program by hand. A sketch of loading such a file follows below.
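For illustration, loading a mapping database by hand might look roughly like this (a minimal sketch; the gamecontrollerdb.txt path is just an assumption about where you keep the file):

#include <SDL2/SDL.h>
#include <cstdio>

// Sketch: load a community mapping database before opening any controllers.
// Call this after SDL_Init(SDL_INIT_GAMECONTROLLER); the file name is a placeholder.
void loadControllerMappings()
{
    int added = SDL_GameControllerAddMappingsFromFile("gamecontrollerdb.txt");
    if (added < 0)
        printf("Could not load mappings: %s\n", SDL_GetError());
    else
        printf("Loaded %d controller mappings\n", added);
}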
This example uses the GameController API instead of the Joystick API and prints values when the DPAD is pressed. I only tested it on Linux; I might hop onto Windows later to try it out.
#include <SDL2/SDL.h>
#include <chrono>
#include <thread>

#define HEIGHT 600
#define WIDTH 800

using namespace std;

int main() {
    SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER);
    SDL_Window *window = SDL_CreateWindow("Test", SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED, WIDTH, HEIGHT, SDL_WINDOW_SHOWN);
    SDL_Event event;
    SDL_GameController *controller = SDL_GameControllerOpen(0);
    bool quit = false;
    //SDL_Joystick *joy = SDL_GameControllerGetJoystick(controller);
    while (!quit) {
        while (SDL_PollEvent(&event)) {
            if (event.type == SDL_QUIT) {
                quit = true;
            }
            if (event.type == SDL_CONTROLLERBUTTONDOWN || event.type == SDL_CONTROLLERBUTTONUP) {
                SDL_ControllerButtonEvent ev = event.cbutton;
                if (ev.button == SDL_CONTROLLER_BUTTON_DPAD_DOWN)
                    printf("SDL_DPAD_HAT_DOWN_UP\n");
                if (ev.button == SDL_CONTROLLER_BUTTON_DPAD_UP)
                    printf("SDL_DPAD_HAT_UP_UP\n");
                if (ev.button == SDL_CONTROLLER_BUTTON_DPAD_RIGHT)
                    printf("SDL_DPAD_HAT_RIGHT_UP\n");
                if (ev.button == SDL_CONTROLLER_BUTTON_DPAD_LEFT)
                    printf("SDL_DPAD_HAT_LEFT_UP\n");
            }
            if (event.type == SDL_CONTROLLERBUTTONDOWN) { puts("DPAD DOWN STATE"); }
        }
        std::this_thread::sleep_for(std::chrono::milliseconds{33});
    }
    SDL_GameControllerClose(controller);
    SDL_DestroyWindow(window);
    SDL_Quit();
    return 0;
}
On the other hand, you might have DRIVER problems (not uncommon on Windows with random controllers) or be up against a gamepad that isn't mapped yet. (I tried on Linux with a PS4 controller and it worked correctly, but with a cheap knockoff of a PS2 controller it didn't.)
I'm working on determining if a certain touchscreen will be compatible with an application and recently got a loaner model of an Elo 2402L touchscreen. I've installed the driver the company provides and was able to see multi-touch events using the evtest utility (parser for /dev/input/eventX).
The thing is that I'm running Scientific Linux 6.4, which uses Linux kernel 2.6.32. I've seen a lot of mixed information on touchscreen compatibility for Linux kernels before 3.x.x. Elo says that their driver only supports single-touch for 2.6.32. Also, I've seen people say that the majority of the compatibility issues with touch events in this kernel version are with Xorg interfaces.
I developed a very simple Qt5 application to test whether Qt could detect the touch events or not, because I'm not sure whether Qt applications are X-based and if they read events directly from /dev/input or something else.
A simple mouse event handler correctly registers mouse events; however, I also created a simple touch event handler and nothing happens when I touch the main screen. There is a beep (part of the driver Elo provides beeps when the screen is touched), so I know that SOMETHING is registering the touch, but neither the desktop nor this application seems to recognize the touch event.
Also, yes, the WA_AcceptTouchEvents attribute is set to true in the window's constructor.
I have a simple mainwindow.h:
...
protected:
int touchEvent(QTouchEvent *ev);
...
And mainwindow.cpp:
MainWindow::MainWindow(QWidget *parent) {
    ...
    setAttribute(Qt::WA_AcceptTouchEvents, true);
    touchPoints = 0;
}
...
int MainWindow::touchEvent(QTouchEvent *ev) {
    switch(ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    }
    ui->statusBar->showMessage("Touch Points: " + touchPoints);
}
Is there something wrong with the way I'm using the touch event handler? Or is there some issue with the device itself? Does Qt read input events directly from /dev/input, or does it get its input events from X?
Very confused here, as I haven't used Qt before and want to narrow down the cause before I say that it's the device causing the issue.
Also, if anyone has any insight into the device / kernel compatibility issue, that would be extremely helpful.
The QTouchEvent documentation says:
Touch events occur when pressing, releasing, or moving one or more
touch points on a touch device (such as a touch-screen or track-pad).
To receive touch events, widgets have to have the
Qt::WA_AcceptTouchEvents attribute set and graphics items need to have
the acceptTouchEvents attribute set to true.
Probably you just need to call setAttribute(Qt::WA_AcceptTouchEvents, true) inside the MainWindow constructor.
Is there something wrong with the way I'm using the touch event handler?
There is no touch event handler. If you change:
int touchEvent(QTouchEvent *ev);
to:
int touchEvent(QTouchEvent *ev) override;
(which you should always do when you are trying to override virtual functions so you can catch exactly this kind of mistake), you'll see that there is no such function for you to override. What you need to override is the event() handler:
protected:
bool event(QEvent *ev) override;
You need to check for touch events there:
bool MainWindow::event(QEvent *ev)
{
    switch (ev->type()) {
    case QEvent::TouchBegin:
        touchPoints++;
        break;
    case QEvent::TouchEnd:
        touchPoints--;
        break;
    default:
        return QMainWindow::event(ev);
    }
    ui->statusBar->showMessage("Touch Points: " + QString::number(touchPoints));
    return true;
}
However, it might be better to work with gestures instead of touch events. But I don't know what kind of application you're writing. If you wanted to let Qt recognize gestures rather than implementing them yourself through touch events, you would first grab the gestures you want, in this case pinching:
setAttribute(Qt::WA_AcceptTouchEvents);
grabGesture(Qt::PinchGesture);
and then handle it:
bool MainWindow::event(QEvent *ev)
{
    if (ev->type() != QEvent::Gesture) {
        return QMainWindow::event(ev);
    }
    auto* gestEv = static_cast<QGestureEvent*>(ev);
    if (auto* gest = gestEv->gesture(Qt::PinchGesture)) {
        auto* pinchGest = static_cast<QPinchGesture*>(gest);
        auto sf = pinchGest->scaleFactor();
        // You could use the pinch scale factor here to zoom an image,
        // for example.
        ev->accept();
        return true;
    }
    return QMainWindow::event(ev);
}
Working with gestures instead of touch events has the advantage of using the platform's gesture recognition facilities, like those of Android and iOS. But again, I don't know what kind of application you're writing and what kind of platform you're working on.
I am writing a C++ program using SDL 2 for the platform layer and OpenGL for graphics and rendering. I have a fully working prototype with keyboard and mouse input. Now I am trying to use SDL's Game Controller API to connect a gamepad (to replace or supplement the keyboard controls). Unfortunately, the controller does not seem to be recognized, despite the fact that it works perfectly with other software. It's a Sony DualShock 4 (for the PlayStation 4 system). My system is Mac OS 10.9.5, and I am using SDL 2.0.5 with the official community controller database for SDL 2.0.5, which contains PS4 controller mappings:
030000004c050000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X,
4c05000000000000c405000000000000,PS4 Controller,a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,start:b9,x:b0,y:b3,platform:Mac OS X
I also added a new mapping using one of the official tools. That also loads successfully according to the relevant function call.
The following is my code, and it's about as close to a minimal example as I can get:
// in main
// window and graphics context initialization here

// initialize SDL
if (SDL_Init(SDL_INIT_VIDEO | SDL_INIT_GAMECONTROLLER | SDL_INIT_HAPTIC) < 0) {
    fprintf(stderr, "%s\n", "SDL could not initialize");
    return EXIT_FAILURE;
}

// load controller mappings; I tested this and 35 mappings load successfully, which is expected
SDL_GameControllerAddMappingsFromFile("./mapping/gamecontrollerdb_205.txt");

// the controller handle
SDL_GameController* controller = nullptr;

// max_joysticks is 1, which means that the device is at least detected
int max_joysticks = SDL_NumJoysticks();
if (max_joysticks < 1) {
    return EXIT_FAILURE;
}

// this returns EXIT_FAILURE, which means that the joystick exists but isn't recognized as a game controller
if (!SDL_IsGameController(0)) {
    return EXIT_FAILURE;
}

// I never get past this.
controller = SDL_GameControllerOpen(0);
fprintf(stdout, "CONTROLLER: %s\n", SDL_GameControllerName(controller));
Has anyone encountered this problem? I've done some preliminary searching as I mentioned, but it seems that usually either the number of joysticks is 0, or everything is recognized.
Also, SDL_CONTROLLERDEVICEADDED isn't firing when I connect the controller.
The controller is connected via USB before I start the program. Also, this is one of the new controllers, and I'm not sure whether the mappings work with that new one. I assume so considering that there are two distinct entries.
Thank you.
EDIT:
I double-checked and the PS4 controller works fine as a joystick, but it isn't recognized as a controller, which means the mapping is incorrect or missing. This may be because my controller is "version 2" of the DualShock 4, and I'm not sure whether a 2.0.5-compatible mapping was added. Hmm.
The controller was recognized as a joystick but not as a controller, meaning that none of the available mappings I could find (in 2.0.5 controller-mapping format) corresponded to the controller. Updating from SDL 2.0.5 to 2.0.8 also updated the available mappings, it seems, and now the controller is recognized as a game controller.
Note: normally it is a terrible idea to upgrade tools mid-project, but in this case it was safe to do.
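If upgrading SDL isn't an option, another route is to register a mapping by hand at runtime. A minimal sketch, reusing the 2.0.5 Mac mapping string already quoted above (for a "version 2" DualShock 4 you would substitute that device's actual GUID/mapping):

#include <SDL2/SDL.h>
#include <cstdio>

// Sketch: register a single mapping before opening the controller.
int registerPs4Mapping()
{
    const char *mapping =
        "030000004c050000c405000000000000,PS4 Controller,"
        "a:b1,b:b2,back:b8,dpdown:h0.4,dpleft:h0.8,dpright:h0.2,dpup:h0.1,"
        "guide:b12,leftshoulder:b4,leftstick:b10,lefttrigger:a3,leftx:a0,lefty:a1,"
        "rightshoulder:b5,rightstick:b11,righttrigger:a4,rightx:a2,righty:a5,"
        "start:b9,x:b0,y:b3,platform:Mac OS X,";
    int rc = SDL_GameControllerAddMapping(mapping);  // 1 = added, 0 = updated, -1 = error
    if (rc == -1)
        fprintf(stderr, "AddMapping failed: %s\n", SDL_GetError());
    return rc;
}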
I'm trying to write a simple mouse clicker for Ubuntu via X11.
First I wrote a variant of the clicking procedure based on XSendEvent:
#include <unistd.h>
#include <cstdlib>
#include <cstring>
#include <iostream>
#include <X11/Xlib.h>
#include <X11/Xutil.h>

void mouseClick(int button)
{
    Display *display = XOpenDisplay(NULL);
    XEvent event;
    if(display == NULL)
    {
        std::cout << "clicking error 0" << std::endl;
        exit(EXIT_FAILURE);
    }
    memset(&event, 0x00, sizeof(event));
    event.type = ButtonPress;
    event.xbutton.button = button;
    event.xbutton.same_screen = True;
    XQueryPointer(display, RootWindow(display, DefaultScreen(display)), &event.xbutton.root, &event.xbutton.window, &event.xbutton.x_root, &event.xbutton.y_root, &event.xbutton.x, &event.xbutton.y, &event.xbutton.state);
    event.xbutton.subwindow = event.xbutton.window;
    while(event.xbutton.subwindow)
    {
        event.xbutton.window = event.xbutton.subwindow;
        XQueryPointer(display, event.xbutton.window, &event.xbutton.root, &event.xbutton.subwindow, &event.xbutton.x_root, &event.xbutton.y_root, &event.xbutton.x, &event.xbutton.y, &event.xbutton.state);
    }
    if(XSendEvent(display, PointerWindow, True, 0xfff, &event) == 0)
        std::cout << "clicking error 1" << std::endl;
    XFlush(display);
    event.type = ButtonRelease;
    event.xbutton.state = 0x100;
    if(XSendEvent(display, PointerWindow, True, 0xfff, &event) == 0)
        std::cout << "clicking error 2" << std::endl;
    XFlush(display);
    XCloseDisplay(display);
}
This code works fine with every application except Chrome (it works fine with Firefox too).
So I wrote a second variant, based on XTestFakeButtonEvent:
#include <X11/extensions/XTest.h>

void SendClick(int button, Bool down)
{
    Display *display = XOpenDisplay(NULL);
    XTestFakeButtonEvent(display, button, down, CurrentTime);
    XFlush(display);
    XCloseDisplay(display);
}
And this code works fine everywhere, including Chrome.
Calling these functions is very simple:
// XSendEvent variant
mouseClick(1);
// XTestFakeButtonEvent variant
SendClick(1, true); // press lmb
SendClick(1, false); // release lmb
1: Help me understand what I'm doing wrong (or what might be wrong in Chrome) in the first variant.
1.1: I think I may be sending the event to the wrong window when I open the display with XOpenDisplay(NULL). Does Chrome connect to the X11 server differently?
2: Is it a good idea to use the second variant in applications? It's pretty short and works fine with every app I have. :)
P.S. To compile this code you need to add the -lX11 and -lXtst libraries.
XSendEvent produces events that are marked as sent. Events generated by the server itself are not marked.
typedef struct {
    int type;
    unsigned long serial;
    Bool send_event;   // <----- here
    Display *display;
    Window window;
} XAnyEvent;
Some applications ignore events that have this flag set, for security reasons. Think of malware that somehow gets access to your X11 server — it can trick any application into doing whatever it wants by sending those events.
It is perfectly OK to use the second variant on your own machine, but it relies on an extension that can be disabled (again, for security reasons) and so does not necessarily work on other people's X11 servers.
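For illustration, this is roughly how a receiving Xlib client could detect such events (a sketch; whether to actually ignore them is up to the application):

#include <X11/Xlib.h>
#include <cstdio>

// Sketch: inside a normal Xlib event loop, check the send_event flag that
// XSendEvent forces to True.
void handleButtonPress(const XEvent &event)
{
    if (event.xany.send_event) {
        // Came from XSendEvent (e.g. the mouseClick above); some apps drop these.
        printf("synthetic ButtonPress ignored\n");
        return;
    }
    printf("real ButtonPress at (%d, %d)\n", event.xbutton.x, event.xbutton.y);
}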
On XCB you can use the following function to verify whether an event was sent via the XSendEvent() / xcb_send_event() API:
static bool fromSendEvent(const void *event)
{
    // From the X11 protocol: Every event contains an 8-bit type code. The most
    // significant bit in this code is set if the event was generated from
    // a SendEvent request.
    const xcb_generic_event_t *e = reinterpret_cast<const xcb_generic_event_t *>(event);
    return (e->response_type & 0x80) != 0;
}
AFAICT, there is no way to tell whether an event was sent via the XTest extension.
You should use XTest, as it will work better: XSendEvent doesn't know anything about internal X server state. From the XSendEvent manual:
"The contents of the event are otherwise unaltered and unchecked by the X server except to force send_event to True".
So with XSendEvent you might run into unexpected issues in some situations.
Although not by using Xlib directly, but through a Python Xlib wrapper library, the first approach currently works on all windows I have open on my desktop, other than IntelliJ.
In the first approach, you are sending the event directly to a target window, and as others have noted, your event is also marked (tainted) with an attribute value flagging it as a simulated one. The receiving window might act on it just the same, as many application windows do.
With the second approach, however, you are emulating the actual thing happening; as I understand it, it is virtually indistinguishable from a user-initiated event. The event goes through the full X11 handling flow for user input (rather than being blindly dispatched directly to the target window), which means it will trickle down to the window (or GNOME desktop widget) under the pointer, just as in the natural flow of real user events.
As such, the second approach appears to be more broadly applicable than the first: it also has the desired effect for windows that opt not to act on events sent through the first approach, as well as on e.g. GNOME desktop elements which are not ordinary windows per se (such as the language and power widgets). You supply the coordinates without any mention of a window, and the click goes through.
If I had to come up with some kind of explanation for this duality of routes, I might think that XSendEvent is more of a general-purpose event-sending facility, whereas XTEST provides means specifically for simulating user input events.
I created an application using Qt on GNU/Linux and I run it in the background. I want to execute certain application functionality when the user presses some key combination, for example Ctrl+Alt+A.
I know it is possible; Gnome Pie does it, but I don't know how I can capture the keys. I tried the examples provided in this question but none of them worked. Also, I wouldn't want to run my application as root.
Can anyone point me to some resources or give me some hints on this?
EDIT:
#iharob suggested I should use libkeybinder. I found it and tried it, but it uses GTK, and GTK doesn't play well with Qt. I'm not even a GTK beginner, I've never worked with it, but I think the GTK event loop conflicts with the Qt event loop: when I emit a Qt signal from the callback that gets called after the key is pressed (which is also after gtk_init was called), the application crashes.
What would be great is a class that emits a signal whenever a keyboard key combination is pressed (e.g. Ctrl+Alt+A).
As far as I can see, and as #SamVarshavchik pointed out, libkeybinder uses libx11 in the background, so you could just use libx11 directly in order to get rid of the GTK event loop, which is not very Qt-friendly. AFAIK KDE's KAction uses the same technique for its global shortcut keys, so I think this technique will play well with Qt's event loop.
That said, you can use a hot-key example as presented here:
x11_hot_key.pro:
#-------------------------------------------------
#
# Project created by QtCreator 2015-05-04T01:47:22
#
#-------------------------------------------------
QT += core
QT -= gui
TARGET = x11_hot_key
CONFIG += console
CONFIG -= app_bundle
TEMPLATE = app
SOURCES += main.cpp
CONFIG += link_pkgconfig
PKGCONFIG += x11
main.cpp:
#include <QCoreApplication>
#include <iostream>
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <X11/keysym.h>

using namespace std;

int main(int argc, char *argv[])
{
    QCoreApplication a(argc, argv);

    Display* dpy = XOpenDisplay(0);
    Window root = DefaultRootWindow(dpy);
    XEvent ev;

    unsigned int modifiers = ControlMask | ShiftMask;
    int keycode = XKeysymToKeycode(dpy, XK_Y);
    Window grab_window = root;
    Bool owner_events = False;
    int pointer_mode = GrabModeAsync;
    int keyboard_mode = GrabModeAsync;

    XGrabKey(dpy, keycode, modifiers, grab_window, owner_events, pointer_mode,
             keyboard_mode);
    XSelectInput(dpy, root, KeyPressMask);

    while(true)
    {
        bool shouldQuit = false;
        XNextEvent(dpy, &ev);
        switch(ev.type)
        {
        case KeyPress:
            cout << "Hot key pressed!" << endl;
            XUngrabKey(dpy, keycode, modifiers, grab_window);
            shouldQuit = true;
        default:
            break;
        }

        if(shouldQuit)
            break;
    }

    XCloseDisplay(dpy);
    return a.exec();
}
Or you could just use this simple library, as presented here, which also comes with some simple examples and a handy Makefile to get you going.
As I don't know of an asynchronous counterpart to XGrabKey, one problem you will have is that the while(true) loop never returns, blocking the main thread and thus the application. What you want is to move that loop into a separate thread and connect it to the main thread using signals and slots, as sketched below. This shouldn't be a big issue and won't affect your application's performance, because AFAIK XNextEvent blocks until your key is hit, so the processor won't be busy-waiting.
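A minimal sketch (untested; HotKeyWatcher, hotKeyPressed() and someGuiObject are made-up names) of wrapping the loop above in a QThread that notifies the GUI thread through a signal:

// Sketch only: wraps the XGrabKey/XNextEvent loop above in a worker thread.
// Requires moc as usual because of Q_OBJECT.
#include <QThread>
#include <X11/Xlib.h>
#include <X11/keysym.h>

class HotKeyWatcher : public QThread
{
    Q_OBJECT
signals:
    void hotKeyPressed();                 // connect this to a slot in the GUI thread

protected:
    void run() override
    {
        Display *dpy = XOpenDisplay(nullptr);
        if (!dpy)
            return;

        Window root = DefaultRootWindow(dpy);
        int keycode = XKeysymToKeycode(dpy, XK_Y);
        unsigned int modifiers = ControlMask | ShiftMask;

        XGrabKey(dpy, keycode, modifiers, root, False,
                 GrabModeAsync, GrabModeAsync);
        XSelectInput(dpy, root, KeyPressMask);

        XEvent ev;
        while (!isInterruptionRequested()) {
            XNextEvent(dpy, &ev);         // blocks until an event arrives
            if (ev.type == KeyPress)
                emit hotKeyPressed();
        }

        XUngrabKey(dpy, keycode, modifiers, root);
        XCloseDisplay(dpy);
    }
};

// Usage (in main, after creating the QApplication):
//   HotKeyWatcher watcher;
//   QObject::connect(&watcher, &HotKeyWatcher::hotKeyPressed,
//                    &someGuiObject, []{ qDebug() << "Hot key pressed!"; });
//   watcher.start();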
Hope this helps.
A brief look at libkeybinder's very small source indicates that all it does is install a key grab on the X display's root window.
This should be doable, but it won't be easy, and requires some knowledge and understanding of the low level X Window System protocol. It should be possible for both Qt and libxcb to coexist peacefully in one process. The way I would try to implement something like this would be as follows:
Start a separate thread.
The thread would open a separate connection to the X server, enumerate all screens on the display, obtain each screen's root window, install a key grab on each root window, then enter a loop reading X events from the xcb_connection_t handle.
Upon receipt of a key event (the only key events I expect to process in this loop would be the ones corresponding to the grabbed key), immediately ungrab the keyboard so that the X server can proceed on its merry way, then notify your application's main thread, in some form or fashion, that the key has been pressed.
Your application will have to have some means of stopping this thread, when it's time to quit.
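For what it's worth, my reading of that outline as xcb code would look roughly like this (an untested sketch; the Ctrl+Alt+A combination, the missing error handling, and the plain printf standing in for "notify the main thread" are all just for illustration; link with -lxcb -lxcb-keysyms):

#include <xcb/xcb.h>
#include <xcb/xcb_keysyms.h>   // from xcb-util-keysyms
#include <X11/keysym.h>        // for XK_A
#include <cstdio>
#include <cstdlib>

int main()
{
    // Separate connection, as it would live in its own thread.
    xcb_connection_t *conn = xcb_connect(nullptr, nullptr);
    if (xcb_connection_has_error(conn))
        return 1;

    xcb_key_symbols_t *syms = xcb_key_symbols_alloc(conn);
    xcb_keycode_t *codes = xcb_key_symbols_get_keycode(syms, XK_A);
    if (!codes)
        return 1;

    // Grab Ctrl+Alt+A on every screen's root window.
    xcb_screen_iterator_t it = xcb_setup_roots_iterator(xcb_get_setup(conn));
    for (; it.rem; xcb_screen_next(&it)) {
        xcb_grab_key(conn, 1, it.data->root,
                     XCB_MOD_MASK_CONTROL | XCB_MOD_MASK_1,   // Mod1 is usually Alt
                     codes[0],
                     XCB_GRAB_MODE_ASYNC, XCB_GRAB_MODE_ASYNC);
    }
    xcb_flush(conn);

    // Block until the grabbed combination arrives.
    while (xcb_generic_event_t *ev = xcb_wait_for_event(conn)) {
        bool pressed = (ev->response_type & ~0x80) == XCB_KEY_PRESS;
        free(ev);
        if (pressed) {
            printf("hot key pressed\n");   // here you would notify the main thread
            break;
        }
    }

    free(codes);
    xcb_key_symbols_free(syms);
    xcb_disconnect(conn);
    return 0;
}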
A possible solution would be to simulate this behavior: have a small standalone application that sends a signal to your background process (there are many ways to do this; a plain POSIX signal would probably be the simplest). Then bind that small application to the desired key combination in the window manager of the particular environment. It may require learning how to do that for various window managers, but the result could be cleaner and faster to implement.
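A rough sketch of that idea (illustrative only; the PID file path is an arbitrary assumption, and on the Qt side you would still need to forward the signal safely into the event loop, e.g. with the socketpair technique described in the Qt documentation):

// Tiny sender program the window manager can bind to Ctrl+Alt+A.
// Assumes the background Qt app wrote its PID to /tmp/myapp.pid (made-up path).
#include <signal.h>   // kill(), SIGUSR1
#include <cstdio>

int main()
{
    std::FILE *f = std::fopen("/tmp/myapp.pid", "r");
    if (!f)
        return 1;

    int pid = 0;
    bool ok = std::fscanf(f, "%d", &pid) == 1 && pid > 0;
    std::fclose(f);

    if (ok)
        kill(pid, SIGUSR1);   // the background process reacts to SIGUSR1
    return ok ? 0 : 1;
}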
Here is a sample from an Allegro 5 tutorial (to see the original sample, follow the link; I've simplified it a bit for illustrative purposes):
#include <allegro5/allegro.h>

int main(int argc, char **argv)
{
    ALLEGRO_DISPLAY *display = NULL;
    ALLEGRO_EVENT_QUEUE *event_queue = NULL;

    al_init();

    display = al_create_display(640, 480);
    event_queue = al_create_event_queue();
    al_register_event_source(event_queue, al_get_display_event_source(display));

    al_clear_to_color(al_map_rgb(0,0,0));
    al_flip_display();

    while(1)
    {
        ALLEGRO_EVENT ev;
        ALLEGRO_TIMEOUT timeout;
        al_init_timeout(&timeout, 0.06);

        bool get_event = al_wait_for_event_until(event_queue, &ev, &timeout);

        //-->// if(get_event && ev.type == ALLEGRO_EVENT_DISPLAY_CLOSE) {
        //-->//     break;
        //-->// }

        al_clear_to_color(al_map_rgb(0,0,0));
        al_flip_display();
    }

    al_destroy_display(display);
    al_destroy_event_queue(event_queue);
    return 0;
}
If I don't manually check for ALLEGRO_EVENT_DISPLAY_CLOSE, then I can't close the window or terminate the program (without killing the process through the task manager). I understand this. But in this case I don't understand how the minimize button works without me manually handling it. Can someone please explain?
Disclaimer: I don't know Allegro.
Minimizing a window at the most basic level only involves work from the process that deals with the windows (the window manager), not your process itself.
Terminating a program usually requires files to be closed, memory to be freed, or something else that only the process itself can do.
The biggest reason that you must handle it yourself via an event is that closing (destroying) a window invalidates the ALLEGRO_DISPLAY * pointer. The request to terminate the window comes from a different thread, so it would be unsafe to destroy it immediately. Allowing you to process it yourself on your own time is safe and easy, and fits in with the event model that Allegro 5 uses for all other things.
There are other ways to solve the problem, but they are no more simple than this method and don't really have any major advantages.
I don't know anything about Allegro, but minimizing windows is generally handled by the window manager without further intervention from your program. The main window is put into a "minimized" state and your program continues running in the background without a visible window.
You can check whether your app is being minimized by intercepting specific window messages (WM_ACTIVATEAPP, WM_ACTIVATE or WM_SIZE). Maybe Allegro provides something like that, too.
In contrast, closing the window does need to be done by your program. Clicking the X simply sends a message to the window (WM_CLOSE) saying that the user has clicked it, and you have to respond accordingly (save state, quit the program, or even prevent the close).
At least that's how the normal WinAPI works, and Allegro seems to work the same way.
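For reference, a rough plain-WinAPI sketch of the two cases described above (not Allegro code; the empty branches are placeholders):

// Sketch: reacting to minimize and close inside a Win32 window procedure.
#include <windows.h>

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg) {
    case WM_SIZE:
        if (wParam == SIZE_MINIMIZED) {
            // The window manager minimized us; the program keeps running.
        }
        break;
    case WM_CLOSE:
        // The user clicked X; closing is up to us, e.g. DestroyWindow(hwnd).
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}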