Hi, I'm trying to make a little UFO bitmap (drawing/painting is already taken care of) draggable around the screen. I can't get the UFO position to update and redraw repeatedly from the MouseButtonDown() function (the mouse event handlers below are simplified). Any suggestions on detecting the drag and redrawing accordingly? The relevant functions are below:
void MouseButtonDown(int x, int y, BOOL bLeft)
{
    if (bLeft)
    {
        while (_bMouseMoving == true && _bMouseDragRelease == false) {
            _iSaucerX = x - (_pSaucer->GetWidth() / 2);
            _iSaucerY = y - (_pSaucer->GetHeight() / 2);
            InvalidateRect(_pGame->GetWindow(), NULL, FALSE);
        }

        // Set the saucer position to the mouse position
        _iSaucerX = x - (_pSaucer->GetWidth() / 2);
        _iSaucerY = y - (_pSaucer->GetHeight() / 2);
    }
    else
    {
        // Stop the saucer
        _iSpeedX = 0;
        _iSpeedY = 0;
    }
}

void MouseButtonUp(int x, int y, BOOL bLeft)
{
    _bMouseDragRelease = true;
}

void MouseMove(int x, int y)
{
    _bMouseMoving = true;
}
To clarify what chris said, you're only going to get the WM_xBUTTONDOWN message once, and you'll need to use it to toggle a dragging state that you can query when you receive a WM_MOUSEMOVE message.
When you get the mouse move message during a dragging state, you'll want to invalidate the rectangle surrounding where the UFO was and the rectangle surrounding where it is now.
Invalidating a rectangle causes WM_PAINT messages, where you redraw whatever was behind the UFO, and the UFO in its new place.
Or you could cheat and make the UFO a cursor when you're dragging :)
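Roughly, the drag-state approach described above could look like this. This is a minimal sketch, not the original poster's code: it reuses the member names from the question (_pSaucer, _iSaucerX, _iSaucerY, _pGame) and assumes a new _bDragging flag.
BOOL _bDragging = FALSE;

void MouseButtonDown(int x, int y, BOOL bLeft)
{
    if (bLeft)
    {
        _bDragging = TRUE;  // enter the dragging state once, on button down
        _iSaucerX = x - (_pSaucer->GetWidth() / 2);
        _iSaucerY = y - (_pSaucer->GetHeight() / 2);
        InvalidateRect(_pGame->GetWindow(), NULL, FALSE);
    }
}

void MouseMove(int x, int y)
{
    if (_bDragging)  // only follow the mouse while dragging
    {
        _iSaucerX = x - (_pSaucer->GetWidth() / 2);
        _iSaucerY = y - (_pSaucer->GetHeight() / 2);
        InvalidateRect(_pGame->GetWindow(), NULL, FALSE);  // queues a WM_PAINT
    }
}

void MouseButtonUp(int x, int y, BOOL bLeft)
{
    _bDragging = FALSE;  // leave the dragging state on release
}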
Related
I added touch input for my game's character. The camera moves when a player's finger is dragged across the screen. However, when the joysticks are used to move, the camera doesn't move at the same time.
The same logic works in Blueprints: the code runs and the camera still moves. When I drag the touch while pressing the joystick to move, I think the problem is in the ETouchIndex::Type. In the Blueprint, when I use the finger index given by the touch event, it works; but when I hard-code touch 1 as the finger index, it does not. I think if I pass that index in my C++ code it will work too, but where can I find the finger index? Can anyone please help me?
//here's my touch code that executes every tick.
FVector2D TouchLocation;
APlayerController* ActivePlayerController = UGameplayStatics::GetPlayerController(this, 0);
ActivePlayerController->GetInputTouchState(TouchType, TouchLocation.X, TouchLocation.Y, IsTouched);
if (!IsTouched)
{
    DidOnce = false;
}

if (IsTouchMoved())
{
    if (!DidOnce)
    {
        ActivePlayerController->GetInputTouchState(TouchType, PrevX, PrevY, IsTouched);
        DidOnce = true;
    }

    ActivePlayerController->GetInputTouchState(TouchType, X, Y, IsTouched);

    float FinalRotYaw = (X - PrevX) * UGameplayStatics::GetWorldDeltaSeconds(this) * 20;
    float FinalRotPitch = (Y - PrevY) * UGameplayStatics::GetWorldDeltaSeconds(this) * 20;

    AddControllerYawInput(FinalRotYaw);
    AddControllerPitchInput(FinalRotPitch);

    ActivePlayerController->GetInputTouchState(TouchType, PrevX, PrevY, IsTouched);
}
I used the GLFW callback functions to move the camera with the mouse.
The mouse callback function is:
void mouse_callback(GLFWwindow *window, double xposIn, double yposIn)
{
    if (is_pressed)
    {
        camera.ProcessMouseMovement((static_cast<float>(yposIn) - prev_mouse.y) / 3.6f,
                                    (static_cast<float>(xposIn) - prev_mouse.x) / 3.6f);
        prev_mouse.x = xposIn;
        prev_mouse.y = yposIn;
    }

    cur_mouse.x = xposIn;
    cur_mouse.y = yposIn;
}

void mouse_btn_callback(GLFWwindow *window, int button, int action, int mods)
{
    if (button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS)
    {
        prev_mouse.x = cur_mouse.x;
        prev_mouse.y = cur_mouse.y;
        is_pressed = true;
    }
    else
    {
        is_pressed = false;
    }
}
However, with this code the camera also moves when I'm interacting with other ImGui windows, as shown below.
I don't know how to handle this.
Should I put this logic between ImGui::Begin and ImGui::End, using something like ImGui::IsWindowHovered()?
like this:
ImGui::Begin("scene");
{
    if (ImGui::IsWindowHovered())
    {
        // camera setting
    }
}
ImGui::End();
I had the same problem today.
For anyone seeing this now, you have to define your glfw callbacks before initializing ImGui. ImGui sets its own callbacks up at this point and handles sending inputs to already existing ones, if not consumed before. If you define your callbacks afterwards you overwrite those created by ImGui.
The answer above is wrong.
This is answered in the Dear ImGui FAQ:
https://github.com/ocornut/imgui/blob/master/docs/FAQ.md#q-how-can-i-tell-whether-to-dispatch-mousekeyboard-to-dear-imgui-or-my-application
TL;DR check the io.WantCaptureMouse flag for mouse.
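Applied to the callbacks above, that check might look like this. This is a minimal sketch; it assumes the Dear ImGui GLFW backend is already initialized, so ImGui::GetIO() is valid inside the callback.
void mouse_callback(GLFWwindow *window, double xposIn, double yposIn)
{
    if (ImGui::GetIO().WantCaptureMouse)
        return;  // Dear ImGui is using the mouse (hovering/dragging one of its windows)

    if (is_pressed)
    {
        camera.ProcessMouseMovement((static_cast<float>(yposIn) - prev_mouse.y) / 3.6f,
                                    (static_cast<float>(xposIn) - prev_mouse.x) / 3.6f);
        prev_mouse.x = xposIn;
        prev_mouse.y = yposIn;
    }

    cur_mouse.x = xposIn;
    cur_mouse.y = yposIn;
}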
I'm not familiar with ImGui, so I don't know what functions might or might not need to be called in ImGui.
But, GLFW is a relatively low level windowing API that has no regard for the higher level abstractions that might exist on the window. When you pass the callback to glfwSetCursorPosCallback, that callback will be called on any accessible part of the window.
If you need to have the mouse movements (or any mouse interactions) only respond when the mouse is hovered over the relevant part of the interface, you need some kind of mechanism to define what that part is. Again: I don't know how you'd do that in ImGui, but it'll probably look something like this:
void mouse_callback(GLFWwindow *window, double xposIn, double yposIn)
{
    // Structured binding; we expect these values to all be doubles.
    auto [minX, maxX, minY, maxY] = // Perhaps an ImGui call to return the bounds of the OpenGL surface?

    if (xposIn < minX || xposIn > maxX || yposIn < minY || yposIn > maxY) {
        return; // We're outside the relevant bounds; do not do anything
    }

    // I'm assuming setting these values should only happen if the mouse is inside the bounds.
    // Move it above the first if-statement if it should happen regardless.
    cur_mouse.x = xposIn;
    cur_mouse.y = yposIn;

    if (is_pressed)
    {
        camera.ProcessMouseMovement((static_cast<float>(yposIn) - prev_mouse.y) / 3.6f,
                                    (static_cast<float>(xposIn) - prev_mouse.x) / 3.6f);
        prev_mouse.x = xposIn;
        prev_mouse.y = yposIn;
    }
}
I want to use glutMotionFunc() to get the x position at the start and end of a dragging motion, i.e. so I can get the change in x. Is there a simple way of doing this?
I think glutMotionFunc() doesn't work the way you expect it to.
The registered callback is not called just once when the drag has finished; instead it gets called every time the mouse moves while the button is held down.
The idea is to be able to update the scene continuously for every little dragging motion.
To get the movement made between calls, you have to store the values you got before.
Oh, and don't forget to mark the stored values as invalid if the button was released in the meantime. Otherwise you will get some funny results.
So here's the general idea:
int old_x = 0;
int old_y = 0;
int valid = 0;

void mouse_func(int button, int state, int x, int y) {
    old_x = x;
    old_y = y;
    valid = state == GLUT_DOWN;
}

void motion_func(int x, int y) {
    if (valid) {
        int dx = old_x - x;
        int dy = old_y - y;
        /* do something with dx and dy */
    }
    old_x = x;
    old_y = y;
}
Don't forget to connect both callbacks with glutMotionFunc and glutMouseFunc
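For completeness, wiring them up might look like this (a minimal sketch, assuming the usual GLUT initialization has already happened in main):
glutMouseFunc(mouse_func);    /* button press/release: records the start point and validity */
glutMotionFunc(motion_func);  /* called repeatedly while a button is held and the mouse moves */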
I'm trying to detect horizontal mouse motion with OpenGL so that, when it is detected, I execute glutPostRedisplay(). The problem is that the scene is also redrawn on vertical mouse movement.
This is the code of the registered callbacks (note mouse_inix and mouse_iniy are global (double) variables):
void mouse(int button, int state, int x, int y)
{
    if (state == GLUT_DOWN) {
        mouse_inix = (double)x;
        mouse_iniy = (double)y;
    }
}

void motion(int x, int y)
{
    if (((double)x) != mouse_inix) {
        angle += 20.0;
        glutPostRedisplay();
    }
}
Are you sure? From the code you've posted, it doesn't look like vertical mouse movement will trigger the glutPostRedisplay() call.
BUT, you've defined "horizontal mouse movement" very narrowly here. If you move the mouse up and down, you're almost sure to get a few pixels of horizontal movement. Maybe you could put a dead zone around the mouse to keep it from moving on every pixel. Something like:
void motion(int x, int y)
{
    if (abs(x - (int)mouse_inix) > 10) {
        angle += 20.0;
        glutPostRedisplay();
    }
}
That's one thing that's going on here. Another is the use of "double". Since GLUT returns mouse coordinates as ints, you're better off sticking with that. The comparison "(double)x != mouse_inix" will almost certainly be true because of precision issues with doubles; you generally don't want to test floating-point numbers for exact equality or inequality. The dead zone will negate that issue, but still, why convert to doubles if you don't need them?
I don't know if "20" is degrees or radians, but it could result in some pretty jumpy moves either way. Consider scaling the size of the move to the size of the mouse move:
void motion(int x, int y)
{
    int deltaX = abs(x - (int)mouse_inix);
    if (deltaX > 10) {
        angle += (double)deltaX;  // or "deltaX/scaleFactor" to make it move more/less
        glutPostRedisplay();
    }
}
It will still only rotate one way. If you used the sign of "deltaX" (rather than its absolute value), you would be able to rotate in both directions depending on how you moved the mouse.
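A minimal sketch of that signed variant, following the suggestion above (still assuming the global mouse_inix from the question):
void motion(int x, int y)
{
    int deltaX = x - (int)mouse_inix;  // keep the sign
    if (abs(deltaX) > 10) {
        angle += (double)deltaX;       // negative deltaX rotates the other way
        glutPostRedisplay();
    }
}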
How can I get the x,y coordinates of a mouse click, to see if it is over my menu button drawn by DirectX? Currently, my codebase has the following mouse-related class, which doesn't seem to be able to give me this... I'm not sure how this might work.
InputMouse::InputMouse() :
    m_LastX(-1),
    m_LastY(-1)
{
    m_MouseActionEvent.clear();
}

InputMouse::~InputMouse()
{
}

void InputMouse::PostUpdate()
{
    m_CurrentAction.clear();
}

bool InputMouse::IsEventTriggered(int eventNumber)
{
    for (unsigned int i = 0; i < m_CurrentAction.size(); i++)
    {
        if (m_MouseActionEvent.size() > 0 && m_MouseActionEvent[m_CurrentAction[i]] == eventNumber)
        {
            return true;
        }
    }

    return false;
}

void InputMouse::AddInputEvent(int action, int eventNumber)
{
    m_MouseActionEvent[action] = eventNumber;
}

void InputMouse::SetMouseMouse(int x, int y)
{
    if (m_LastX != -1)
    {
        if (x > m_LastX)
        {
            m_CurrentAction.push_back(MOUSE_RIGHT);
        }
        else if (x < m_LastX)
        {
            m_CurrentAction.push_back(MOUSE_LEFT);
        }

        if (y > m_LastY)
        {
            m_CurrentAction.push_back(MOUSE_UP);
        }
        else if (y < m_LastY)
        {
            m_CurrentAction.push_back(MOUSE_DOWN);
        }
    }

    m_LastX = x;
    m_LastY = y;
}
DirectX or not, GetCursorPos will retrieve the position of the mouse in screen coordinates, and ScreenToClient will map that screen-relative point to a point relative to the client area of your window/DirectX surface.
If your menu buttons are 2D, this should be as simple as remembering the screen co-ordinates used for your buttons.
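Put together, the 2D case might look like this. A minimal sketch with hypothetical names: hWnd is your window handle and buttonRect is the RECT you remembered when drawing the button.
POINT pt;
GetCursorPos(&pt);          // mouse position in screen coordinates
ScreenToClient(hWnd, &pt);  // convert to client-area coordinates

if (PtInRect(&buttonRect, pt))
{
    // The click landed on the menu button.
}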
If you're trying to determine if a click lands on a 3D object that's been rendered, then the technique you are looking for is called Picking.
A simple Google for "directx picking" comes up with some good results:
http://www.mvps.org/directx/articles/rayproj.htm
http://www.gamedev.net/community/forums/topic.asp?topic_id=316274
Basically, the technique involves converting the mouse click into a ray into the scene. For your menu items, a simple bounding box will probably suffice for determining a 'hit'.
Once an object is drawn, the system has no knowledge of which pixels on the screen it changed, nor do those pixels know which object or objects changed it (nor if those objects even still exist). Therefore, if you need to know where something is on-screen, you have to track it yourself. For buttons and other GUI elements this usually means keeping your GUI in memory along with the rectangles that define the boundaries of each element. Then you can compare your mouse position to the boundary of each element to see which one it is pointing at.
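As a rough sketch of that idea (the element struct and names here are hypothetical, not from the question's codebase):
#include <windows.h>
#include <vector>

struct GuiElement
{
    RECT bounds;  // remembered when the element was drawn
    int  id;
};

std::vector<GuiElement> g_Elements;

// Returns the id of the element under the given client-area point, or -1 if none.
int HitTest(POINT pt)
{
    for (const GuiElement& e : g_Elements)
    {
        if (PtInRect(&e.bounds, pt))
            return e.id;
    }
    return -1;
}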