I have this in my Notepad:
hello
I want to simulate selecting in C++ using Windows' keybd_event function.
Here is my code:
keybd_event(VK_SHIFT, 0, 0, 0);
for (size_t i = 0; i < 5; i++)
{
    keybd_event(VK_LEFT, 0, 0, 0);
    keybd_event(VK_LEFT, 0, KEYEVENTF_KEYUP, 0);
}
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
But after I run this, nothing gets selected; the caret just moves to the start of the file. Why isn't this working?
Adding KEYEVENTF_EXTENDEDKEY to the Shift events makes the selection work correctly.
https://learn.microsoft.com/en-us/windows/win32/api/winuser/ns-winuser-keybdinput#members
#include <windows.h>

int main()
{
    Sleep(2000);
    keybd_event(VK_SHIFT, MapVirtualKey(VK_SHIFT, 0), KEYEVENTF_EXTENDEDKEY, 0);
    for (size_t i = 0; i < 5; i++)
    {
        keybd_event(VK_LEFT, 0, 0, 0);
        keybd_event(VK_LEFT, 0, KEYEVENTF_KEYUP, 0);
        Sleep(20);
    }
    keybd_event(VK_SHIFT, MapVirtualKey(VK_SHIFT, 0), KEYEVENTF_EXTENDEDKEY | KEYEVENTF_KEYUP, 0);
    return 0;
}
When you use VK_SHIFT with keybd_event, there is a known problem where Shift fails to be released, so I recommend using SendInput instead of keybd_event.
For higher-level operations, I also recommend you use UI Automation.
https://learn.microsoft.com/en-us/windows/win32/api/winuser/nf-winuser-sendinput
https://learn.microsoft.com/en-us/windows/win32/winauto/entry-uiauto-win32
I'm writing a program that draws a cube in OpenGL and rotates it continuously on mouse clicks. At particular angles I'm able to see through the cube, as if it were transparent. I've enabled the depth test, so I don't know why this is happening; I'm not sure whether I have enabled it correctly.
#include <math.h>
#include <vector>
#include <Windows.h>
#include <gl\glut.h>
using namespace std;
void myInit() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glClearColor(0, 0, 0, 1);
    glOrtho(-2, 2, -2, 2, 2, -2);
    glMatrixMode(GL_MODELVIEW);
}

float Cube[][3] = { {-1, -1, -1}, {1, -1, -1}, {1, 1, -1}, {-1, 1, -1}, {-1, -1, 1}, {1, -1, 1}, {1, 1, 1}, {-1, 1, 1} };
float Colors[][3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1} };
int axis = 0, theta[3] = {0, 0, 0};

void draw_face(int a, int b, int c, int d) {
    glBegin(GL_QUADS);
    glColor3fv(Colors[a]);
    glVertex3fv(Cube[a]);
    glColor3fv(Colors[b]);
    glVertex3fv(Cube[b]);
    glColor3fv(Colors[c]);
    glVertex3fv(Cube[c]);
    glColor3fv(Colors[d]);
    glVertex3fv(Cube[d]);
    glEnd();
}

void draw_cube() {
    draw_face(0, 3, 2, 1);
    draw_face(2, 3, 7, 6);
    draw_face(0, 4, 7, 3);
    draw_face(1, 2, 6, 5);
    draw_face(4, 5, 6, 7);
    draw_face(0, 1, 5, 4);
}

void spin_cube() {
    theta[axis] += 2;
    if (theta[axis] > 360)
        theta[axis] = -360;
    glutPostRedisplay();
}

void idle_func() {
    Sleep(10);
    spin_cube();
}

void mouse_func(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        axis = 0;
    else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
        axis = 1;
    else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
        axis = 2;
}

void myDrawing() {
    glClear(GL_COLOR_BUFFER_BIT);
    glPushMatrix();
    glRotatef(theta[0], 1, 0, 0);
    glRotatef(theta[1], 0, 1, 0);
    glRotatef(theta[2], 0, 0, 1);
    draw_cube();
    glPopMatrix();
    glFlush();
    glutSwapBuffers();
}

int main(int argc, char *argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glEnable(GL_DEPTH_TEST);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("sample");
    glutDisplayFunc(myDrawing);
    glutIdleFunc(idle_func);
    glutMouseFunc(mouse_func);
    myInit();
    glutMainLoop();
}
Multiple issues:
You aren't requesting a depth buffer from GLUT:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
Solution: OR in GLUT_DEPTH to make sure GLUT requests some depth buffer bits from the OS during GL context creation:
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
You're calling glEnable(GL_DEPTH_TEST) before GLUT has created a GL context:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
// no GL context yet
glEnable(GL_DEPTH_TEST);
Solution: Move the glEnable() to after glutCreateWindow() so it has a current GL context to work with:
glutInit(&argc, argv);
glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
glutInitWindowSize(500, 500);
glutInitWindowPosition(0, 0);
glutCreateWindow("sample");
glEnable(GL_DEPTH_TEST);
...
You never clear the depth buffer:
void myDrawing() {
    // where's GL_DEPTH_BUFFER_BIT?
    glClear(GL_COLOR_BUFFER_BIT);
    ...
Solution: OR in GL_DEPTH_BUFFER_BIT to your glClear() argument:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
All the fixes together:
#include <cmath>
#include <vector>
#include <windows.h>  // for Sleep() in idle_func
#include <GL/glut.h>
using namespace std;
void myInit() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glClearColor(0, 0, 0, 1);
    glOrtho(-2, 2, -2, 2, 2, -2);
    glMatrixMode(GL_MODELVIEW);
}

float Cube[][3] = { {-1, -1, -1}, {1, -1, -1}, {1, 1, -1}, {-1, 1, -1}, {-1, -1, 1}, {1, -1, 1}, {1, 1, 1}, {-1, 1, 1} };
float Colors[][3] = { {0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}, {1, 1, 0}, {0, 1, 1}, {1, 0, 1}, {1, 1, 1} };
int axis = 0, theta[3] = {0, 0, 0};

void draw_face(int a, int b, int c, int d) {
    glBegin(GL_QUADS);
    glColor3fv(Colors[a]);
    glVertex3fv(Cube[a]);
    glColor3fv(Colors[b]);
    glVertex3fv(Cube[b]);
    glColor3fv(Colors[c]);
    glVertex3fv(Cube[c]);
    glColor3fv(Colors[d]);
    glVertex3fv(Cube[d]);
    glEnd();
}

void draw_cube() {
    draw_face(0, 3, 2, 1);
    draw_face(2, 3, 7, 6);
    draw_face(0, 4, 7, 3);
    draw_face(1, 2, 6, 5);
    draw_face(4, 5, 6, 7);
    draw_face(0, 1, 5, 4);
}

void spin_cube() {
    theta[axis] += 2;
    if (theta[axis] > 360)
        theta[axis] = -360;
    glutPostRedisplay();
}

void idle_func() {
    Sleep(10);
    spin_cube();
}

void mouse_func(int button, int state, int x, int y) {
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN)
        axis = 0;
    else if (button == GLUT_MIDDLE_BUTTON && state == GLUT_DOWN)
        axis = 1;
    else if (button == GLUT_RIGHT_BUTTON && state == GLUT_DOWN)
        axis = 2;
}

void myDrawing() {
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glPushMatrix();
    glRotatef(theta[0], 1, 0, 0);
    glRotatef(theta[1], 0, 1, 0);
    glRotatef(theta[2], 0, 0, 1);
    draw_cube();
    glPopMatrix();
    glFlush();
    glutSwapBuffers();
}

int main(int argc, char *argv[]) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(500, 500);
    glutInitWindowPosition(0, 0);
    glutCreateWindow("sample");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(myDrawing);
    glutIdleFunc(idle_func);
    glutMouseFunc(mouse_func);
    myInit();
    glutMainLoop();
}
Using the following code
Sleep(3000);
keybd_event(VK_SHIFT, 0, 0, 0);
keybd_event(VK_DOWN, 0, 0, 0);
keybd_event(VK_DOWN, 0, KEYEVENTF_KEYUP, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
I expect Windows to select a line of text if I place my cursor in an editor during the sleep (so that the editor is the foreground window).
However, this just moves the cursor down a line.
The following works, but surely this isn't the way it's supposed to be done:
Sleep(3000);
keybd_event(VK_SHIFT, 0, KEYEVENTF_EXTENDEDKEY, 0);
keybd_event(VK_DOWN, 0, 0, 0);
keybd_event(VK_DOWN, 0, KEYEVENTF_KEYUP, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP | KEYEVENTF_EXTENDEDKEY, 0);
keybd_event(VK_SHIFT, 0, 0, 0);
keybd_event(VK_SHIFT, 0, KEYEVENTF_KEYUP, 0);
It appears that with the KEYEVENTF_EXTENDEDKEY flag I can hold Shift down across the arrow-key events, but no matter how I try, releasing it with that flag doesn't work; I have to press and release Shift again normally to clear the held Shift key.
I'm currently working on a little game using SFML and C++, but I have a problem. I have a class Character in character.h with two functions inside, but when I try to access these functions in another file (Game.cpp), one works perfectly while the other acts as if it doesn't even exist. This is my first post and I don't know how to properly showcase my code, so please tell me if I'm not clear enough.
Thanks to all of you and have a good day.
error message:
/****CHARACTER.H****/
#ifndef CHARACTER_H
#define CHARACTER_H
#include <SFML/Graphics.hpp>
using namespace std;
class Character{
public:
    Character();
    ~Character();
    void initPlayer(string& fileName, sf::IntRect rect);
    void moveCharacter();
    sf::Sprite m_sprite;
private:
    sf::VertexArray m_vertices;
    sf::Texture m_texture;
};
#endif
/****CHARACTER.CPP*****/
#include "/home/hichem/C++/sfml/Game Engine/character.h"
#include "string"
#include <iostream>
using namespace std;
Character::Character(){
}

Character::~Character(){
}

void Character::initPlayer(string& fileName, sf::IntRect rect){
    if (!m_texture.loadFromFile(fileName, rect)){
        cout << "failed to load image" << endl;
    }
    m_sprite.setTexture(m_texture);
    m_sprite.setPosition(sf::Vector2f(400, 200));
}

void Character::moveCharacter(){
}
/****GAME.H****/
#ifndef GAME_H
#define GAME_H
#include <SFML/Graphics.hpp>
#include "character.h"
#include "tileMap.h"
#include "string"
class Game: public sf::Transformable{
public:
    Game();
    ~Game();
    void run(sf::RenderWindow &window);
private:
    void event(sf::RenderWindow &window);
    void update();
    void draw(sf::RenderWindow &window);
    TileMap carte;
    Character player;
    const int level_1[128] =
    {
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
        0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    };
};
#endif // GAME_H
/****GAME.CPP****/
#include "Game.h"
using namespace std;
Game::Game(){
    string a = "images/tiles.png";
    carte.load(a, sf::Vector2u(50, 50), level_1, 16, 8);
    string b = "images/player.png";
    /*FUNCTION THAT WORKS FROM CHARACTER.H*/
    player.initPlayer(b, sf::IntRect(0,0,50,50));
}

Game::~Game(){
    //dtor
}

void Game::event(sf::RenderWindow &window){
    sf::Event event;
    while (window.pollEvent(event)){
        if (event.type == sf::Event::Closed)
            window.close();
        /*FUNCTION THAT DOES NOT WORK FROM CHARACTER.H*/
        player.moveCharacter();
    }
}

void Game::update(){
}

void Game::draw(sf::RenderWindow &window){
    window.clear();
    window.draw(carte);
    window.draw(player.m_sprite);
    window.display();
}

void Game::run(sf::RenderWindow &window){
    while(window.isOpen()){
        event(window);
        update();
        draw(window);
    }
}
Thanks for your help, guys! I was really tired last night and didn't notice that I was including the wrong path in character.cpp.
I am working on a project that uses OpenGL off-screen rendering. But after I create the OpenGL context, I find that some OpenGL extensions are unavailable. For example:
#include <windows.h>
#include <GL/glew.h>
#include <iostream>
#include <gl/gl.h>
#include <gl/glu.h>
#include <string>
#include <time.h>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
using namespace std;
using namespace cv;
void mGLRender()
{
    glClearColor(0.9f, 0.9f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(30.0, 1.0, 1.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, -5, 0, 0, 0, 0, 1, 0);
    glBegin(GL_TRIANGLES);
    glColor3d(1, 0, 0);
    glVertex3d(0, 1, 0);
    glColor3d(0, 1, 0);
    glVertex3d(-1, -1, 0);
    glColor3d(0, 0, 1);
    glVertex3d(1, -1, 0);
    glEnd();
    glFlush(); // remember to flush GL output!
}

void mGLRender1()
{
    glClearColor(0.3f, 0.3f, 0.3f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(30.0, 1.0, 1.0, 10.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, -5, 0, 0, 0, 0, 1, 0);
    glBegin(GL_TRIANGLES);
    glColor3d(1, 0, 0);
    glVertex3d(0, 1, 0);
    glColor3d(0, 1, 0);
    glVertex3d(-1, -1, 0);
    glColor3d(0, 0, 1);
    glVertex3d(1, -1, 0);
    glEnd();
    glFlush(); // remember to flush GL output!
}
int main(int argc, char* argv[])
{
    clock_t clockBegin, clockEnd;
    const int WIDTH = 400;
    const int HEIGHT = 400;

    // Create a memory DC compatible with the screen
    HDC hdc = CreateCompatibleDC(0);
    if (hdc == 0) cout << "Could not create memory device context";

    // Create a bitmap compatible with the DC
    // must use CreateDIBSection(), and this means all pixel ops must be synchronised
    // using calls to GdiFlush() (see CreateDIBSection() docs)
    BITMAPINFO bmi = {
        { sizeof(BITMAPINFOHEADER), WIDTH, HEIGHT, 1, 32, BI_RGB, 0, 0, 0, 0, 0 },
        { 0 }
    };
    unsigned char *pbits; // pointer to bitmap bits
    HBITMAP hbm = CreateDIBSection(hdc, &bmi, DIB_RGB_COLORS, (void **)&pbits,
                                   0, 0);
    if (hbm == 0) cout << "Could not create bitmap";

    //HDC hdcScreen = GetDC(0);
    //HBITMAP hbm = CreateCompatibleBitmap(hdcScreen,WIDTH,HEIGHT);

    // Select the bitmap into the DC
    HGDIOBJ r = SelectObject(hdc, hbm);
    if (r == 0) cout << "Could not select bitmap into DC";

    // Choose the pixel format
    PIXELFORMATDESCRIPTOR pfd = {
        sizeof(PIXELFORMATDESCRIPTOR),           // struct size
        1,                                       // Version number
        PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL, // use OpenGL drawing to BM
        PFD_TYPE_RGBA,                           // RGBA pixel values
        32,                                      // color bits
        0, 0, 0,                                 // RGB bits shift sizes...
        0, 0, 0,                                 // Don't care about them
        0, 0,                                    // No alpha buffer info
        0, 0, 0, 0, 0,                           // No accumulation buffer
        32,                                      // depth buffer bits
        0,                                       // No stencil buffer
        0,                                       // No auxiliary buffers
        PFD_MAIN_PLANE,                          // Layer type
        0,                                       // Reserved (must be 0)
        0,                                       // No layer mask
        0,                                       // No visible mask
        0,                                       // No damage mask
    };
    int pfid = ChoosePixelFormat(hdc, &pfd);
    if (pfid == 0) cout << "Pixel format selection failed";

    // Set the pixel format
    // - must be done *after* the bitmap is selected into DC
    BOOL b = SetPixelFormat(hdc, pfid, &pfd);
    if (!b) cout << "Pixel format set failed";

    // Create the OpenGL resource context (RC) and make it current to the thread
    HGLRC hglrc = wglCreateContext(hdc);
    if (hglrc == 0) cout << "OpenGL resource context creation failed";
    wglMakeCurrent(hdc, hglrc);

    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        /* Problem: glewInit failed, something is seriously wrong. */
        std::cout << "glew init error" << std::endl;
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
    }
    std::cout << (glewGetExtension("GL_ARB_fragment_shader") == GL_TRUE);
    std::cout << (glewGetExtension("GL_ARB_shader_objects") == GL_TRUE);
    std::cout << (glewGetExtension("GL_ARB_shading_language_100") == GL_TRUE);

    // Draw using GL - remember to sync with GdiFlush()
    clockBegin = clock();
    GdiFlush();
    mGLRender();
    //SaveBmp(hbm,"output.bmp");
    clockEnd = clock();
    printf("%d\n", clockEnd - clockBegin);

    clockBegin = clock();
    GdiFlush();
    mGLRender1();
    //SaveBmp(hbm,"output1.bmp");
    clockEnd = clock();
    printf("%d\n", clockEnd - clockBegin);

    // opencv show img
    Mat img(HEIGHT, WIDTH, CV_8UC4, (void *)pbits);
    imshow("img", img);
    waitKey();
    destroyWindow("img");

    // Clean up
    wglDeleteContext(hglrc); // Delete RC
    SelectObject(hdc, r);    // Remove bitmap from DC
    DeleteObject(hbm);       // Delete bitmap
    DeleteDC(hdc);           // Delete DC

    system("pause");
    return 0;
}
The above code works well in VS2015, but the line
std::cout << (glewGetExtension("GL_ARB_fragment_shader") == GL_TRUE);
prints 0, which suggests the GL_ARB_fragment_shader extension is unavailable. But I am sure my GPU supports this extension, because in a simple freeglut application glewGetExtension("GL_ARB_fragment_shader") returns true. That code is here:
#include <stdlib.h>
#include <GL/glew.h>
#include <GL/glut.h>
#include <iostream>
// Window attributes
static const unsigned int WIN_POS_X = 30;
static const unsigned int WIN_POS_Y = WIN_POS_X;
static const unsigned int WIN_WIDTH = 512;
static const unsigned int WIN_HEIGHT = WIN_WIDTH;
void glInit(int, char **);
int main(int argc, char * argv[])
{
    // Initialize OpenGL
    glInit(argc, argv);

    // A valid OpenGL context has been created.
    // You can call OpenGL functions from here on.
    GLenum err = glewInit();
    if (GLEW_OK != err)
    {
        /* Problem: glewInit failed, something is seriously wrong. */
        std::cout << "glew init error" << std::endl;
        fprintf(stderr, "Error: %s\n", glewGetErrorString(err));
    }
    std::cout << (glewGetExtension("GL_ARB_fragment_shader") == GL_TRUE);
    std::cout << (glewGetExtension("GL_ARB_shader_objects") == GL_TRUE);
    std::cout << (glewGetExtension("GL_ARB_shading_language_100") == GL_TRUE);

    glutMainLoop();
    return 0;
}

void Display()
{
} // end Display()

void glInit(int argc, char ** argv)
{
    // Initialize GLUT
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE);
    glutInitWindowPosition(WIN_POS_X, WIN_POS_Y);
    glutInitWindowSize(WIN_WIDTH, WIN_HEIGHT);
    glutCreateWindow("Hello OpenGL!");
    glutDisplayFunc(Display);
    return;
}
The above code works well in VS2015, and there glewGetExtension("GL_ARB_fragment_shader") returns true. So do different OpenGL contexts support different OpenGL extensions? Please help me.
Yes, different OpenGL contexts may support different OpenGL versions and/or extensions. In your particular case the off-screen context you're creating will use the GDI software rasterizer fallback. The way you create the context it will never be GPU accelerated!
If you want to create a GPU-accelerated OpenGL context, you'll have to either
use a PBuffer (which gives you a HDC without a window),
or
create an OpenGL context on a hidden window and render to an FBO (the most common method these days),
or
use one of the newer pure off-screen context creation methods that are independent of the OS (see e.g. https://devblogs.nvidia.com/parallelforall/egl-eye-opengl-visualization-without-x-server/ for how to do it on NVidia; the approach also applies to Windows).
However, even if OpenGL contexts are GPU accelerated, and even if they happen to be created on the same machine and GPU, they may still differ in version and extension support.