How can I get joystick data using GLFW? - C++

I am trying to get joystick input using GLFW with a Nintendo Pro Controller.
The code seems pretty simple and straightforward, but I can't seem to get accurate joystick data.
I used
int axesCount;
const float *axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axesCount);
and tried to access joystick data by
axes[0], axes[1], axes[2], and axes[3]
glfwGetJoystickAxes gave me an accurate number for axesCount, which is 4.
But even when I moved the joystick, every element of the axes array stayed at 1.52590219e-05.
Below is the whole code for my application:
#include "pch.h"
#include <iostream>
#include <gl/glew.h>
#include <GLFW/glfw3.h>

int main()
{
    GLFWwindow *window;

    // initialize
    if (!glfwInit()) {
        return -1;
    }

    // create window and its opengl context
    window = glfwCreateWindow(640, 480, "OpenGL Project Tutorial", NULL, NULL);
    if (!window) {
        glfwTerminate();
        return -1;
    }

    // make the window's context current
    glfwMakeContextCurrent(window);

    // loop until the user closes the window
    while (!glfwWindowShouldClose(window)) {
        glClear(GL_COLOR_BUFFER_BIT);

        // render OpenGL here
        int present = glfwJoystickPresent(GLFW_JOYSTICK_1);
        std::cout << "Joystick/Gamepad status: " << present << std::endl;

        int axesCount;
        const float *axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axesCount);
        if (1 == present) {
            //std::cout << "Number of axes available: " << axesCount << std::endl;
            std::cout << "Axis 1: " << axes[0] << std::endl;
            std::cout << "Axis 2: " << axes[1] << std::endl;
            std::cout << "Axis 3: " << axes[2] << std::endl;
            std::cout << "Axis 4: " << axes[3] << std::endl;
            std::cout << std::endl;
        }

        // swap front and back buffers
        glfwSwapBuffers(window);

        // poll for and process events
        glfwPollEvents();
    }

    glfwTerminate();
    return 0;
}
It didn't matter whether I put
int axesCount;
const float *axes = glfwGetJoystickAxes(GLFW_JOYSTICK_1, &axesCount);
inside the while loop or not.
Any help would be appreciated, thanks.

Related

Why isn't pigpio - gpioHardwarePWM Working when gpioServo is in c++

I have been reading the documentation. I can get gpioServo to run, and my motor spins.
My goal is to spin my motor at one revolution per second, which, as far as I understand it, means a frequency of 1 Hz and a duty cycle of 100%. But so far I can only get gpioServo to work.
Here is my current code:
//#include <QDebug>
//#include <QApplication>
#include <pigpio.h>
#include <iostream>
#include <chrono>

int main(int argc, char *argv[]) {
    // QApplication a(argc, argv);
    gpioInitialise();

    // system("clear");
    // std::cout << std::endl << "Running..." << std::endl;
    //
    // gpioServo(13, 2000);
    // std::cout << std::endl << "Calibrate hi..." << std::endl;
    // time_sleep(3);
    //
    // gpioServo(13, 1000);
    // std::cout << std::endl << "Calibrate low..." << std::endl;
    // time_sleep(3);
    //
    // gpioServo(13, 0);
    // std::cout << std::endl << "Calibrate stop..." << std::endl;

    std::cout << std::endl << gpioHardwarePWM(13, 10000, 100000) << std::endl;
    std::cout << std::endl << "Run duty cycle..." << std::endl;
    time_sleep(3);

    system("clear");
    std::cout << std::endl << "Done" << std::endl;
    gpioTerminate();

    // qDebug() << "Hello World";
    // return QApplication::exec();
    return 0;
}
I was assuming the duty cycle would be 100 and the frequency would be 1, but the parameters 10000 and 100000 in my call come from a forum post by the author of pigpio. Either way, nothing has worked with gpioPWM for me yet.
I am in the learning stage, so this is very basic, but gpioHardwarePWM returns 0, which the pigpio documentation says means success, yet the motor does not spin. I'm just trying to learn; please help me see what I'm missing.
Documentation: https://abyz.me.uk/rpi/pigpio/

GTK drawing area is not realized

In my application I am using a Gtk::DrawingArea like this:
Window win;
DrawingArea area;
Box box(ORIENTATION_VERTICAL);
area.signal_realize().connect(sigc::ptr_fun(&on_video_area_realize));
box.pack_start(myWidgets, true, true);
box.pack_start(area, false, false);
win.add(box);
win.show_all();
The problem is, the function on_video_area_realize is not being called and if I query the status of the DrawingArea with area.get_realized(), it is false, so it has not been realized yet.
I do not understand why it has not been realized? As far as I understand, a widget is realized when it is added to a window - which, as far as I think, I am doing already.
The realize signal is emitted when the window (and its children) is shown. The following code (I tested this with Gtkmm 3.24.20):
#include <iostream>
#include <gtkmm.h>

void on_video_area_realize()
{
    std::cout << "Video drawing area realized!" << std::endl;
}

int main(int argc, char *argv[])
{
    std::cout << "Gtkmm version : " << gtk_get_major_version() << "."
              << gtk_get_minor_version() << "."
              << gtk_get_micro_version() << std::endl;
    auto app = Gtk::Application::create(argc, argv, "org.gtkmm.examples.base");
    Gtk::Window window;
    Gtk::Box layout{Gtk::ORIENTATION_VERTICAL};
    Gtk::Label label{"Example"};
    std::cout << "Checkpoint 1" << std::endl;
    Gtk::DrawingArea area;
    std::cout << "Checkpoint 2" << std::endl;
    area.signal_realize().connect(sigc::ptr_fun(&on_video_area_realize));
    std::cout << "Checkpoint 3" << std::endl;
    layout.pack_start(label, true, true);
    layout.pack_start(area, false, false);
    std::cout << "Checkpoint 4" << std::endl;
    window.add(layout);
    window.show_all();
    std::cout << "Checkpoint 5" << std::endl;
    return app->run(window);
}
Yields the following output:
Gtkmm version : 3.24.20
Checkpoint 1
Checkpoint 2
Checkpoint 3
Checkpoint 4
Video drawing area realized!
Checkpoint 5
From this, we can see that the drawing area is realized when the window is shown (between "Checkpoint 4" and "Checkpoint 5"), not when it is added to the window.

How to get camera's parameters at PCL?

I installed PCL 1.7.2 and I am trying to use the PCL libraries.
I want to print the camera's parameters to the console, so I need a way to read them, but I don't understand how to get them.
I looked at the "pcl::visualization::Camera Class Reference":
http://docs.pointclouds.org/trunk/classpcl_1_1visualization_1_1_camera.html
and I understand there are focal, pos, view, etc. on the "Camera" object.
I have confirmed that the following code runs, but I can't work out how to read the Camera's members.
This is how to set the Camera's member values:
viewer.setCameraPosition(pos_x, pos_y, pos_z, view_x, view_y, view_z, up_x, up_y, up_z, viewport);
So, could someone please show me how to get the camera's parameters in the following code?
This is the source I am currently running:
#include "stdafx.h"
#include <pcl/visualization/cloud_viewer.h>
#include <iostream>
#include <pcl/io/io.h>
#include <pcl/io/pcd_io.h>

int user_data = 0;

void viewerOneOff(pcl::visualization::PCLVisualizer& viewer)
{
    viewer.setBackgroundColor(1.0, 0.5, 1.0);
    pcl::PointXYZ o;
    o.x = 1.0;
    o.y = 0;
    o.z = 0;
    viewer.addSphere(o, 0.25, "sphere", 0);
    std::cout << "i only run once" << std::endl;
}

void viewerPsycho(pcl::visualization::PCLVisualizer& viewer)
{
    static unsigned count = 0;
    std::stringstream ss;
    ss << "Once per viewer loop: " << count++;
    viewer.removeShape("text", 0);
    viewer.addText(ss.str(), 200, 300, "text", 0);
    //FIXME: possible race condition here:
    user_data++;
}

int _tmain(int argc, const _TCHAR** argv)
{
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZRGB>);
    pcl::io::loadPCDFile("c:\\data\\triceratops\\raw_0.pcd", *cloud);

    pcl::visualization::CloudViewer viewer("Cloud Viewer");

    //blocks until the cloud is actually rendered
    viewer.showCloud(cloud);

    //use the following functions to get access to the underlying more advanced/powerful
    //PCLVisualizer

    //This will only get called once
    viewer.runOnVisualizationThreadOnce(viewerOneOff);

    //This will get called once per visualization iteration
    viewer.runOnVisualizationThread(viewerPsycho);

    while (!viewer.wasStopped())
    {
        //you can also do cool processing here
        //FIXME: Note that this is running in a separate thread from viewerPsycho
        //and you should guard against race conditions yourself...
        user_data++;
    }
    return 0;
}
This should work:
boost::shared_ptr<pcl::visualization::PCLVisualizer> viewer;
std::vector<pcl::visualization::Camera> cam;

//Save the current camera parameters
viewer->getCameras(cam);

//Print the recorded values to the screen:
cout << "Cam: " << endl
     << " - pos: (" << cam[0].pos[0] << ", " << cam[0].pos[1] << ", " << cam[0].pos[2] << ")" << endl
     << " - view: (" << cam[0].view[0] << ", " << cam[0].view[1] << ", " << cam[0].view[2] << ")" << endl
     << " - focal: (" << cam[0].focal[0] << ", " << cam[0].focal[1] << ", " << cam[0].focal[2] << ")" << endl;

Hide marker in ArtoolKit (C)

I am trying to hide the marker in ARToolKit. In my research I found this code for ALVAR, a free AR toolkit made by a technical research centre in Finland.
This code hides the marker, but it is written against OpenCV, and I want to do the same in the simpleVRML example.
Any help on how I can adapt this code for the ARToolKit example?
#include "CvTestbed.h"
#include "MarkerDetector.h"
#include "GlutViewer.h"
#include "Shared.h"
using namespace alvar;
using namespace std;

#define GLUT_DISABLE_ATEXIT_HACK // Needed to compile with MinGW?
#include <GL/gl.h>

const double margin = 1.0;
std::stringstream calibrationFilename;

// Own drawable for showing the hide-texture in OpenGL
struct OwnDrawable : public Drawable {
    unsigned char hidingtex[64*64*4];
    virtual void Draw() {
        glPushMatrix();
        glMultMatrixd(gl_mat);
        glPushAttrib(GL_ALL_ATTRIB_BITS);
        glEnable(GL_TEXTURE_2D);
        int tex = 0;
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0, GL_RGBA, GL_UNSIGNED_BYTE, hidingtex);
        glDisable(GL_CULL_FACE);
        glDisable(GL_LIGHTING);
        glDisable(GL_DEPTH_TEST);
        glEnable(GL_ALPHA_TEST);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glBegin(GL_QUADS);
        glTexCoord2d(0.0, 0.0); glVertex3d(-margin, -margin, 0);
        glTexCoord2d(0.0, 1.0); glVertex3d(-margin, margin, 0);
        glTexCoord2d(1.0, 1.0); glVertex3d(margin, margin, 0);
        glTexCoord2d(1.0, 0.0); glVertex3d(margin, -margin, 0);
        glEnd();
        glPopAttrib();
        glPopMatrix();
    }
};
void videocallback(IplImage *image)
{
    static bool init = true;
    static const int marker_size = 15;
    static Camera cam;
    static OwnDrawable d[32];
    static IplImage *hide_texture;

    bool flip_image = (image->origin ? true : false);
    if (flip_image) {
        cvFlip(image);
        image->origin = !image->origin;
    }

    static IplImage *bg_image = 0;
    if (!bg_image) bg_image = cvCreateImage(cvSize(512, 512), 8, 3);
    if (image->nChannels == 3) {
        bg_image->origin = 0;
        cvResize(image, bg_image);
        GlutViewer::SetVideo(bg_image);
    }

    if (init) {
        init = false;
        cout << "Loading calibration: " << calibrationFilename.str();
        if (cam.SetCalib(calibrationFilename.str().c_str(), image->width, image->height)) {
            cout << " [Ok]" << endl;
        } else {
            cam.SetRes(image->width, image->height);
            cout << " [Fail]" << endl;
        }
        double p[16];
        cam.GetOpenglProjectionMatrix(p, image->width, image->height);
        GlutViewer::SetGlProjectionMatrix(p);
        hide_texture = CvTestbed::Instance().CreateImage("hide_texture", cvSize(64, 64), 8, 4);
    }

    static MarkerDetector<MarkerData> marker_detector;
    marker_detector.Detect(image, &cam, false, false);

    GlutViewer::DrawableClear();
    for (size_t i = 0; i < marker_detector.markers->size(); i++) {
        if (i >= 32) break;
        GlutViewer::DrawableAdd(&(d[i]));
    }
    for (size_t i = 0; i < marker_detector.markers->size(); i++) {
        if (i >= 32) break;
        // Note that we need to mirror both the y- and z-axis because:
        // - In OpenCV we have coordinates: x-right, y-down, z-ahead
        // - In OpenGL we have coordinates: x-right, y-up, z-backwards
        // TODO: A better option might be to use an OpenGL projection matrix that matches our OpenCV approach
        Pose p = (*(marker_detector.markers))[i].pose;
        BuildHideTexture(image, hide_texture, &cam, d[i].gl_mat, PointDouble(-margin, -margin), PointDouble(margin, margin));
        //DrawTexture(image, hide_texture, &cam, d[i].gl_mat, PointDouble(-0.7, -0.7), PointDouble(0.7, 0.7));
        p.GetMatrixGL(d[i].gl_mat);
        for (int ii = 0; ii < 64*64; ii++) {
            d[i].hidingtex[ii*4+0] = hide_texture->imageData[ii*4+2];
            d[i].hidingtex[ii*4+1] = hide_texture->imageData[ii*4+1];
            d[i].hidingtex[ii*4+2] = hide_texture->imageData[ii*4+0];
            d[i].hidingtex[ii*4+3] = hide_texture->imageData[ii*4+3];
        }
    }

    if (flip_image) {
        cvFlip(image);
        image->origin = !image->origin;
    }
}
int main(int argc, char *argv[])
{
    try {
        // Output usage message
        std::string filename(argv[0]);
        filename = filename.substr(filename.find_last_of('\\') + 1);
        std::cout << "SampleMarkerHide" << std::endl;
        std::cout << "================" << std::endl;
        std::cout << std::endl;
        std::cout << "Description:" << std::endl;
        std::cout << " This is an example of how to detect 'MarkerData' markers, similarly" << std::endl;
        std::cout << " to 'SampleMarkerDetector', and hide them using the 'BuildHideTexture'" << std::endl;
        std::cout << " and 'DrawTexture' classes." << std::endl;
        std::cout << std::endl;
        std::cout << "Usage:" << std::endl;
        std::cout << " " << filename << " [device]" << std::endl;
        std::cout << std::endl;
        std::cout << " device integer selecting device from enumeration list (default 0)" << std::endl;
        std::cout << " highgui capture devices are preferred" << std::endl;
        std::cout << std::endl;
        std::cout << "Keyboard Shortcuts:" << std::endl;
        std::cout << " q: quit" << std::endl;
        std::cout << std::endl;

        // Initialise GlutViewer and CvTestbed
        GlutViewer::Start(argc, argv, 640, 480, 15);
        CvTestbed::Instance().SetVideoCallback(videocallback);

        // Enumerate possible capture plugins
        CaptureFactory::CapturePluginVector plugins = CaptureFactory::instance()->enumeratePlugins();
        if (plugins.size() < 1) {
            std::cout << "Could not find any capture plugins." << std::endl;
            return 0;
        }

        // Display capture plugins
        std::cout << "Available Plugins: ";
        outputEnumeratedPlugins(plugins);
        std::cout << std::endl;

        // Enumerate possible capture devices
        CaptureFactory::CaptureDeviceVector devices = CaptureFactory::instance()->enumerateDevices();
        if (devices.size() < 1) {
            std::cout << "Could not find any capture devices." << std::endl;
            return 0;
        }

        // Check command line argument for which device to use
        int selectedDevice = defaultDevice(devices);
        if (argc > 1) {
            selectedDevice = atoi(argv[1]);
        }
        if (selectedDevice >= (int)devices.size()) {
            selectedDevice = defaultDevice(devices);
        }

        // Display capture devices
        std::cout << "Enumerated Capture Devices:" << std::endl;
        outputEnumeratedDevices(devices, selectedDevice);
        std::cout << std::endl;

        // Create capture object from camera
        Capture *cap = CaptureFactory::instance()->createCapture(devices[selectedDevice]);
        std::string uniqueName = devices[selectedDevice].uniqueName();

        // Handle capture lifecycle and start video capture
        // Note that loadSettings/saveSettings are not supported by all plugins
        if (cap) {
            std::stringstream settingsFilename;
            settingsFilename << "camera_settings_" << uniqueName << ".xml";
            calibrationFilename << "camera_calibration_" << uniqueName << ".xml";
            cap->start();
            if (cap->loadSettings(settingsFilename.str())) {
                std::cout << "Loading settings: " << settingsFilename.str() << std::endl;
            }
            std::stringstream title;
            title << "SampleMarkerHide (" << cap->captureDevice().captureType() << ")";
            CvTestbed::Instance().StartVideo(cap, title.str().c_str());
            if (cap->saveSettings(settingsFilename.str())) {
                std::cout << "Saving settings: " << settingsFilename.str() << std::endl;
            }
            cap->stop();
            delete cap;
        }
        else if (CvTestbed::Instance().StartVideo(0, argv[0])) {
        }
        else {
            std::cout << "Could not initialize the selected capture backend." << std::endl;
        }
        return 0;
    }
    catch (const std::exception &e) {
        std::cout << "Exception: " << e.what() << endl;
    }
    catch (...) {
        std::cout << "Exception: unknown" << std::endl;
    }
}

OpenGL context without opening a window - wglMakeCurrent fails with HDC and HGLRC when using HWND made with GetDesktopWindow

This is somewhat a duplicate of this question.
I am trying to make a windowless console application that checks which OpenGL version is supported. In order to do this I need to set up a render context, but without creating a window. I am trying to use the desktop window handle, which I won't write to.
I forgot to set the pixel format in the previous example - that is the probable reason why creating the render context failed - but even with the pixel format set, I cannot activate the context. wglMakeCurrent(hDC, hRC) just returns 0.
Here is the complete source code dump:
#include <iostream>
#include <GL/GLee.h>
#include <windows.h>

HDC hDC = NULL;
HGLRC hRC = NULL;
HWND hWnd = NULL;
HINSTANCE hInstance;

int res = 0;
int pf = 0;

PIXELFORMATDESCRIPTOR pfd = {
    sizeof(PIXELFORMATDESCRIPTOR),
    1,                  /* version */
    PFD_DRAW_TO_WINDOW |
    PFD_SUPPORT_OPENGL |
    PFD_DOUBLEBUFFER,
    PFD_TYPE_RGBA,
    24,                 /* 24-bit color depth */
    0, 0, 0, 0, 0, 0,   /* color bits */
    0,                  /* alpha buffer */
    0,                  /* shift bit */
    0,                  /* accumulation buffer */
    0, 0, 0, 0,         /* accum bits */
    32,                 /* z-buffer */
    0,                  /* stencil buffer */
    0,                  /* auxiliary buffer */
    PFD_MAIN_PLANE,     /* main layer */
    0,                  /* reserved */
    0, 0, 0             /* layer masks */
};

std::string trash;

int main(int argc, char **argv) {
    hInstance = GetModuleHandle(NULL); // Grab An Instance For Our Window
    hWnd = GetDesktopWindow();         // Instead of CreateWindowEx
    if (!(hDC = GetDC(hWnd))) {
        std::cout << "Device context failed" << std::endl;
        std::cout << std::endl << "Press ENTER to exit" << std::endl;
        std::getline(std::cin, trash);
        return 1;
    }

    // pixel format
    pf = ChoosePixelFormat(hDC, &pfd);
    res = SetPixelFormat(hDC, pf, &pfd);

    if (!(hRC = wglCreateContext(hDC))) {
        std::cout << "Render context failed" << std::endl;
        std::cout << std::endl << "Press ENTER to exit" << std::endl;
        std::getline(std::cin, trash);
        return 1;
    }
    if (!wglMakeCurrent(hDC, hRC)) { // fail: wglMakeCurrent returns 0
        std::cout << "Activating render context failed" << std::endl;
        std::cout << std::endl << "Press ENTER to exit" << std::endl;
        std::getline(std::cin, trash);
        return 1;
    }

    std::cout << "OpenGL 1.2 support ... ";
    if (GLEE_VERSION_1_2) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 1.3 support ... ";
    if (GLEE_VERSION_1_3) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 1.4 support ... ";
    if (GLEE_VERSION_1_4) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 1.5 support ... ";
    if (GLEE_VERSION_1_5) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 2.0 support ... ";
    if (GLEE_VERSION_2_0) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 2.1 support ... ";
    if (GLEE_VERSION_2_1) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }
    std::cout << "OpenGL 3.0 support ... ";
    if (GLEE_VERSION_3_0) { std::cout << "OK" << std::endl; } else { std::cout << "FAIL" << std::endl; }

    std::cout << std::endl << "Press ENTER to exit" << std::endl;
    std::getline(std::cin, trash);

    // cleanup
    wglMakeCurrent(NULL, NULL); /* make our context not current */
    wglDeleteContext(hRC);      /* delete the rendering context */
    ReleaseDC(hWnd, hDC);       /* release handle to DC */
    return 0;
}
Edit: I know that wglMakeCurrent() fails if either of the handles passed to it is invalid, or if the rendering context that is to become current is currently current for another thread, but I am not really sure which of these applies in this case.
One must not create an OpenGL context on the desktop window. To create an OpenGL context, you must set the window's pixel format, and doing that on the desktop window is strictly forbidden.
If you want to do offscreen rendering, use a PBuffer instead of a window, or create a window you never make visible and use a Framebuffer Object (FBO) as the render target.
Does it work if you use CreateWindow() rather than GetDesktopWindow()?
I would say GetDesktopWindow() is extremely unlikely to work. I would expect it to behave unlike a normal window, with the value you receive being a special handle value.
If push comes to shove, just open a window without WS_VISIBLE. No one will be the wiser.
P.S. I note you're making this a console application. I will be quite surprised if console applications work with anything graphical, be it OpenGL or just the Windows 2D drawing API.
P.P.S. I'm pretty sure Windows applications can write (one way or another) to the stdout of the command line they're run from. You could simply write a normal Windows app but emit your output to stdout.