I'm trying to figure out how to get OpenAL to pan in 2D (by manipulating the 3D positioning). Ideally I want panning such that the left or right channel can be fully engaged while the other channel is completely silent. OpenAL seems to handle 3D distances and falloffs nicely, but I'm struggling to emulate this kind of 2D panning.
I'm using
alDistanceModel(AL_LINEAR_DISTANCE_CLAMPED);
float sourcePosition[3] = {0.99f,0.f,0.f};
alSourcefv(sourceID, AL_POSITION, sourcePosition);
alSourcei(sourceID, AL_SOURCE_RELATIVE, AL_FALSE);
alSourcef(sourceID, AL_MAX_DISTANCE, 1.f);
alSourcef(sourceID, AL_REFERENCE_DISTANCE, 0.5f);
However there is a substantial amount of audio in the right channel. I don't really want gain to drop off based on distance, just proportion the channels.
Is it possible to emulate 2D panning with OpenAL?
You'll want to set AL_SOURCE_RELATIVE to AL_TRUE, rather than false.
AL_SOURCE_RELATIVE set to AL_TRUE indicates that the position,
velocity, cone, and direction properties of a source are to be
interpreted relative to the listener position.
So says the OpenAL 1.1 Specification (page 34)!
So, changing
alSourcei(sourceID, AL_SOURCE_RELATIVE, AL_FALSE);
to
alSourcei(sourceID, AL_SOURCE_RELATIVE, AL_TRUE);
should achieve the desired result.
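For reference, here's a minimal sketch of the adjusted setup; the hard-right position and the rolloff tweak at the end are assumptions of mine rather than part of your original code:
alDistanceModel(AL_LINEAR_DISTANCE_CLAMPED);

// Interpret the source position relative to the listener rather than in world space.
alSourcei(sourceID, AL_SOURCE_RELATIVE, AL_TRUE);

// Hard-right pan: place the source on the listener's +x axis.
float sourcePosition[3] = {1.f, 0.f, 0.f};
alSourcefv(sourceID, AL_POSITION, sourcePosition);

// Since you don't want gain to drop off with distance, you can also
// disable distance attenuation for this source entirely.
alSourcef(sourceID, AL_ROLLOFF_FACTOR, 0.f);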
I'm absolutely new to the ROS/Gazebo world; this is probably a simple question, but I cannot find a good answer.
I have a simulated depth camera (Kinect) in a Gazebo scene. After some processing, I get a point of interest in the RGB image in pixel coordinates, and I want to retrieve its 3D coordinates in the world frame.
I can't understand how to do that.
I have tried compensating for the distortions given by the CameraInfo msg. I have also tried using a PointCloud with the pcl library, retrieving the point as cloud.at(x,y).
In both cases, the coordinates are not correct (I placed a small sphere at the coordinates returned by the program to check whether they are right).
Any help would be much appreciated. Thank you very much.
EDIT:
Starting from the PointCloud, I try to find the coords of the point by doing something like:
point = cloud.at(xInPixel, yInPixel);
point.x = point.x + cameraPos.x;
point.y = point.y + cameraPos.y;
point.z = point.z - cameraPos.z;
but the x, y, z coords I get in point do not seem to be correct.
The camera has a pitch angle of pi/2, so it points at the ground.
I am clearly missing something.
I assume you've seen the gazebo examples for the kinect (brief, full). You can get, as topics, the raw image, raw depth, and calculated pointcloud (by setting them in the config):
<imageTopicName>/camera/color/image_raw</imageTopicName>
<cameraInfoTopicName>/camera/color/camera_info</cameraInfoTopicName>
<depthImageTopicName>/camera/depth/image_raw</depthImageTopicName>
<depthImageCameraInfoTopicName>/camera/depth/camera_info</depthImageCameraInfoTopicName>
<pointCloudTopicName>/camera/depth/points</pointCloudTopicName>
Unless you need to do your own things with the image_raw for rgb and depth frames (e.g. ML over the rgb frame & finding the corresponding depth point via the camera_infos), the pointcloud topic should be sufficient - it's the same as the pcl pointcloud in c++, if you include the right headers.
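For instance, a rough sketch of reading that topic with PCL (the node name and the example pixel are just placeholders):
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

void cloudCallback(const sensor_msgs::PointCloud2ConstPtr& msg)
{
    pcl::PointCloud<pcl::PointXYZ> cloud;
    pcl::fromROSMsg(*msg, cloud);          // ROS message -> PCL cloud
    // The cloud is organized, so you can index it by pixel:
    pcl::PointXYZ p = cloud.at(320, 240);  // example pixel; use your own (x, y)
    // p.x / p.y / p.z are expressed in the camera frame (msg->header.frame_id)
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "cloud_reader");
    ros::NodeHandle nh;
    ros::Subscriber sub = nh.subscribe("/camera/depth/points", 1, cloudCallback);
    ros::spin();
    return 0;
}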
Edit (in response):
There's a magical thing in ros called tf/tf2. Your pointcloud, if you look at the msg.header.frame_id, says something like "camera", indicating it's in the camera frame. tf, like the messaging system in ros, works in the background: it listens for transformations from one frame of reference to another, and lets you transform/query data in whichever frame you need. For example, if the camera is mounted at a rotation to your robot, you can specify a static transformation in your launch file. It seems like you're trying to do the transformation yourself, but you can make tf do it for you; that makes it easy to figure out where points are in the world/map frame, versus the robot/base_link frame, versus the actuator/camera/etc frame.
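As a rough illustration (the frame names here are assumptions; use the ones from your own setup), letting tf2 do the work looks something like this:
#include <ros/ros.h>
#include <geometry_msgs/PointStamped.h>
#include <tf2_ros/transform_listener.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "point_transformer");
    ros::NodeHandle nh;

    tf2_ros::Buffer tfBuffer;
    tf2_ros::TransformListener tfListener(tfBuffer);

    geometry_msgs::PointStamped ptCamera, ptWorld;
    ptCamera.header.frame_id = "camera_depth_optical_frame"; // your cloud's frame_id
    ptCamera.header.stamp = ros::Time(0);                    // "latest available transform"
    ptCamera.point.x = 0.0;  // fill in the point you read from the cloud
    ptCamera.point.y = 0.0;
    ptCamera.point.z = 1.0;

    // Ask tf2 to express the point in the world/map frame instead of
    // adding camera offsets by hand:
    ptWorld = tfBuffer.transform(ptCamera, "map", ros::Duration(1.0));
    ROS_INFO("world point: %f %f %f", ptWorld.point.x, ptWorld.point.y, ptWorld.point.z);
    return 0;
}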
I would also look at these ros wiki questions which demo a few different ways to do this, depending on what you want: ex1, ex2, ex3
I want to draw a triangle and text using C++ and Cairo like this:
|\
| \
|PP\
|___\
If I add the triangle and the text using Cairo I get:
___
| /
|PP/
| /
|/
So the y-axis is from top to bottom, but I want it from bottom to top. So I tried to change the viewpoint matrix (cairo_transform(p, &mat);) or scale the data (cairo_scale(p, 1.0, -1.0);). I get:
|\
| \
|bb\
|___\
Now the triangle is the way I want it, BUT the TEXT is MIRRORED, which I do not want.
Any idea how to handle this problem?
I was in a similar situation as the OP that required me to change a variety of coordinates in the Cartesian coordinate system with the origin at the bottom left. (I had to port an old video game that was developed with a coordinate system different from Cairo's, and because of time constraints/possible calculation mistakes/port precision I decided it was better not to rewrite the whole bunch.) Luckily, I found an okay approach to change Cairo's coordinate system. The approach is based around Cairo's internal transformation matrix, which transforms Cairo's input to the user device. The solution was to change this matrix to a reflection matrix: a matrix that mirrors its input through the x-axis, like so:
cairo_t *cr;
cairo_matrix_t x_reflection_matrix;
cairo_matrix_init_identity(&x_reflection_matrix); // could not find a oneliner
/* reflection through the x axis equals the identity matrix with the
yy value negated */
x_reflection_matrix.yy = -1.0;
cairo_set_matrix(cr, &x_reflection_matrix);
// On its own the reflection would put the drawing above the visible area of
// the destination surface, so we also shift everything down by the full height
cairo_translate(cr, 0, -SURFACE_HEIGHT); // replace SURFACE_HEIGHT with your surface's height
// ... do your drawing
There is one catch however: text will also get mirrored. To solve this, one could alter the font transformation matrix. The required code for this would be:
cairo_matrix_t font_reflection_matrix;
// We first set the size, and then change it to a reflection matrix
cairo_set_font_size(cr, YOUR_SIZE);
cairo_get_font_matrix(cr, &font_reflection_matrix);
// reverse mirror the font drawing matrix
font_reflection_matrix.yy = font_reflection_matrix.yy * -1;
cairo_set_font_matrix(cr, &font_reflection_matrix);
Answer:
Rethink your coordinates and pass them correctly to cairo. If your coordinate source has an inverted axis, preprocess the data to flip the geometry. That would be called glue code, and it is often necessary.
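For example, a trivial preprocessing step could look like this (just a sketch; surfaceHeight stands in for whatever your drawing height actually is):
// Flip a y coordinate from a bottom-left origin to Cairo's top-left origin.
static double flipY(double y, double surfaceHeight)
{
    return surfaceHeight - y;
}

// usage: cairo_line_to(cr, x, flipY(y, surfaceHeight));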
Stuff:
It is a very common thing in 2D computer graphics to have the origin (0,0) in the top left corner and the y-axis heading downwards (see gimp/photoshop, positioning in html, webgl canvas). As always there are other examples too (PDFs).
I'm not sure what the reason is, but I would assume the reading direction on paper (from top to bottom) and/or the process of rendering/drawing an image on a screen.
To me, it seems to be the easiest way to procedurally draw an image at some position from the first to the last pixel (you don't need to precalculate its size).
I don't think that you are alone with your opinion. But I don't think that there is a standard math coordinate system. Even the very common Cartesian coordinate system is incomplete when the arrows that indicate axis direction are missing.
Summary: From the discussion I assume that there is only one coordinate system used by Cairo: x-axis to the right, y-axis down. If one needs a standard math coordinate system (x-axis to the right, y-axis up) one has to preprocess the data.
I am using OpenGL to create the 3D space.
I have a spaceship which can fire lasers.
Up until now I have had it so that the lasers will simply go deeper into the Z-axis once fired.
Now I am attempting to make a proper aiming system with crosshairs so that you can aim and shoot in any direction, but I have not been successful in updating the laser's path.
I have a direction vector based on the laser's end tip and start tip, which is obtained from the aiming.
How should I update the laser's X,Y,Z values (or vectors) properly so that it looks natural?
I think I see.
Let's say you start with the aiming direction as a 3D vector, call it "aimDir". Then in your update loop add all three components (x, y and z) to the projectile's position. (OK, at the speed of light you wouldn't actually see any movement, but I think I see what you're going for here).
void OnUpdate( float deltaT )
{
    // "move" the laser in the aiming direction, scaled by the amount of time elapsed
    // since our last update (you probably want another scale factor here to control
    // how "fast" the laser appears to move)
    Vector3 deltaLaser = deltaT * aimDir; // calc 3d offset for this frame
    laserEndPoint += deltaLaser;          // add it to the end of the laser
}
then in the render routine draw the laser from the firing point to the new endpoint:
void OnRender()
{
    glBegin(GL_LINES);
    glVertex3f( gunPos.x, gunPos.y, gunPos.z );
    glVertex3f( laserEndPoint.x, laserEndPoint.y, laserEndPoint.z );
    glEnd();
}
I'm taking some liberties because I don't know if you're using glut, sdl or what. But I'm sure you have at least an update function and a render function.
Warning, just drawing a line from the gun to the end of the laser might be disappointing visually, but it will be a critical reference for adding better effects (particle systems, bloom filter, etc.). A quick improvement might be to make the front of the laser (line) a bright color and the back black. And/or make multiple lines like a machine gun. Feel free to experiment ;-)
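For instance, a quick sketch of the bright-front/dark-back idea using per-vertex colors (this assumes smooth shading, which is the GL default):
glBegin(GL_LINES);
glColor3f(0.1f, 0.0f, 0.0f);  // dark at the gun end
glVertex3f(gunPos.x, gunPos.y, gunPos.z);
glColor3f(1.0f, 0.2f, 0.2f);  // bright at the laser tip
glVertex3f(laserEndPoint.x, laserEndPoint.y, laserEndPoint.z);
glEnd();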
Also, if the source of the laser is directly in front of the viewer you will just see a dot! So you may want to cheat a bit and fire from just below or to the right of the viewer, and then have it fire slightly up or in. Especially if you have one on each side (wing?) that appear to converge as in conventional machine guns.
Hope that's helpful.
It seems that this is quite a common question, but I can't find a person with my same circumstances. The closest seems to be: OpenGL: scale then translate? and how?.
The problem I'd really like some help with is to do with moving around while zoomed into (and out of) a 2d scene using OpenGl. The code for zooming out is pretty simple:
void RefMapGLScene::zoomOut(){
    currentScale = currentScale-zoomFactor;
    double xAdjust = (((get_width())*zoomFactor/2));
    double yAdjust = ((get_height()*zoomFactor/2));
    zoomTranslateX -= xAdjust;
    zoomTranslateY -= yAdjust;
}
The code for zooming in is basically the same (add the zoomFactor to currentScale, and increment zoomTranslateX and Y).
The code for rendering everything is also simple:
glPushMatrix();
glTranslated(-zoomTranslateX, -zoomTranslateY, 0);
glScaled(currentScale, currentScale, 1);
glTranslated(totalMovedX, totalMovedY, 0);
graph->draw();
glPopMatrix();
Essentially, zoomTranslate stores an adjustment needed to make the screen move a little towards the middle when zooming. I don't do anything nice like move to where the mouse is pointing, I just move to the middle (ie, to the right and up/down depending on your co-ordinate system). TotalMovedX and Y store the mouse movement as follows:
if (parent->rightButtonDown){
    totalMovedX += (-(mousex-curx))/currentScale;
    totalMovedY += (-(mousey-cury))/currentScale;
}
Dragging while not zoomed in or out works great. Zooming works great. Dragging while zoomed in/out does not work great :) Essentially, when zoomed in, the canvas moves a lot slower than the mouse. The opposite for when zoomed out.
I've tried everything I can think of, and have read a lot of this site about people with similar issues. I also tried reimplementing my code using glOrtho to handle the zooms, but ended up facing other problems, so came back to this way. Could anybody please suggest how I handle these dragging events?
The order of operations matters. Transformations are applied to your geometry in the reverse order in which you multiply the matrices. In your case you apply the canvas movement before the scaling, so your mouse drag is also zoomed.
Change your code to this:
glPushMatrix();
glTranslated(-zoomTranslateX, -zoomTranslateY, 0);
glTranslated(totalMovedX, totalMovedY, 0);
glScaled(currentScale, currentScale, 1);
graph->draw();
glPopMatrix();
Also, after changing that order you don't have to scale your mouse moves, so you can omit the division by currentScale:
if (parent->rightButtonDown){
    totalMovedX += (-(mousex-curx));
    totalMovedY += (-(mousey-cury));
}
I am creating a simple 2D OpenGL game, and I need to know when the player clicks or mouses over an OpenGL primitive. (For example, on a GL_QUADS that serves as one of the tiles...) There doesn't seem to be a simple way to do this beyond brute force or opengl.org's suggestion of using a unique color for every one of my primitives, which seems a little hacky. Am I missing something? Thanks...
My advice: don't use OpenGL's selection mode or OpenGL rendering (the brute force method you are talking about); use a CPU-based ray picking algorithm if you're in 3D. For 2D, as in your case, it should be straightforward: it's just a test to know whether a 2D point is inside a 2D rectangle.
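For example, a minimal point-in-rectangle test could look like this (a sketch; the Tile fields are assumptions about how your tiles are stored):
struct Tile { float x, y, w, h; };  // world-space rectangle of a tile (assumed layout)

// Returns true if the world-space point (px, py) lies inside the tile.
bool pointInTile(const Tile& t, float px, float py)
{
    return px >= t.x && px <= t.x + t.w &&
           py >= t.y && py <= t.y + t.h;
}

// Remember to convert the mouse position from window coordinates into the same
// world coordinates as the tiles (undo your camera/projection transform) first.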
I would suggest using the hacky method if you want a quick implementation (coding time, I mean), especially if you don't want to implement a quadtree with moving objects. If you are using opengl immediate mode, that should be straightforward:
// Rendering part
glClearColor(0,0,0,0);
glClear(GL_COLOR_BUFFER_BIT);
for(unsigned i=0; i<tileCount; ++i){
    unsigned tileId = i+1; // we inc the tile ID in order not to pick up the black
    glColor3ub(tileId &0xFF, (tileId >>8)&0xFF, (tileId >>16)&0xFF);
    renderTileWithoutColorNorTextures(i);
}
// Let's retrieve the tile ID
unsigned tileId = 0;
glReadPixels(mouseX, mouseY, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE,
             (unsigned char *)&tileId);
tileId &= 0x00FFFFFF; // drop the alpha byte so only the encoded RGB ID remains
if(tileId!=0){ // if we didn't pick the black
    tileId--;
    // we picked the tile number tileId
}
// We don't want to show that to the user, so we clean the screen
glClearColor(...); // the color you want
glClear(GL_COLOR_BUFFER_BIT);
// Now, render your real scene
// ...
// And we swap
whateverSwapBuffers(); // might be glutSwapBuffers, glx, ...
You can use OpenGL's glRenderMode(GL_SELECT) mode. Here is some code that uses it, and it should be easy to follow (look for the _pick method)
(and here's the same code using GL_SELECT in C)
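In case those links go stale, the general shape of selection mode looks roughly like this (fixed-function GL; tileCount and renderTile are illustrative names):
GLuint selectBuf[64];
glSelectBuffer(64, selectBuf);        // buffer that receives the hit records
glRenderMode(GL_SELECT);              // switch to selection mode

// Usually you would also restrict the view volume to the pixel under the
// cursor with gluPickMatrix() before re-applying your projection matrix.

glInitNames();
glPushName(0);
for (unsigned i = 0; i < tileCount; ++i) {
    glLoadName(i + 1);                // tag the next primitive with an ID
    renderTile(i);                    // draw the tile as usual
}

GLint hits = glRenderMode(GL_RENDER); // back to normal rendering; returns the hit count
// Each hit record in selectBuf names the primitive(s) under the cursor.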
(There have been cases - in the past - of GL_SELECT being deliberately slowed down on 'non-workstation' cards in order to discourage CAD and modeling users from buying consumer 3D cards; that ought to be a bad habit of the past that ATI and NVidia have grown out of ;) )