OpenGL - Trying to add texture to triangle - C++

I am trying to add a stop-sign texture to a triangle in OpenGL, but the image comes out wrong, as if I have the coordinates in the wrong order: it is (1) mirrored and (2) not angled correctly.
I believe I set the texture coordinates correctly, but I am unsure. Have I got the texture coordinates in the wrong order?
Here is the code I have for it:
#include <glm/glm.hpp>
#include <graphics_framework.h>
#include <memory>
using namespace std;
using namespace graphics_framework;
using namespace glm;
mesh m;
effect eff;
target_camera cam;
texture tex;
bool load_content() {
    // Construct geometry object
    geometry geom;
    // Create triangle data
    // Positions
    vector<vec3> positions{ vec3(0.0f, 1.0f, 0.0f), vec3(-1.0f, -1.0f, 0.0f), vec3(1.0f, -1.0f, 0.0f) };
    // *********************************
    // Define texture coordinates for triangle
    vector<vec2> tex_coords{ vec2(0.0f, 0.0f), vec2(1.0f, 0.0f), vec2(0.5f, 1.0f) };
    // *********************************
    // Add to the geometry
    geom.add_buffer(positions, BUFFER_INDEXES::POSITION_BUFFER);
    // *********************************
    // Add texture coordinate buffer to geometry
    geom.add_buffer(tex_coords, BUFFER_INDEXES::TEXTURE_COORDS_0);
    // *********************************
    // Create mesh object
    m = mesh(geom);
    // Load in texture shaders here
    eff.add_shader("27_Texturing_Shader/simple_texture.vert", GL_VERTEX_SHADER);
    eff.add_shader("27_Texturing_Shader/simple_texture.frag", GL_FRAGMENT_SHADER);
    // *********************************
    // Build effect
    eff.build();
    // Load texture "textures/sign.jpg"
    tex = texture("textures/sign.jpg");
    // *********************************
    // Set camera properties
    cam.set_position(vec3(10.0f, 10.0f, 10.0f));
    cam.set_target(vec3(0.0f, 0.0f, 0.0f));
    auto aspect = static_cast<float>(renderer::get_screen_width()) / static_cast<float>(renderer::get_screen_height());
    cam.set_projection(quarter_pi<float>(), aspect, 2.414f, 1000.0f);
    return true;
}
bool update(float delta_time) {
    // Update the camera
    cam.update(delta_time);
    return true;
}
bool render() {
    // Bind effect
    renderer::bind(eff);
    // Create MVP matrix
    auto M = m.get_transform().get_transform_matrix();
    auto V = cam.get_view();
    auto P = cam.get_projection();
    auto MVP = P * V * M;
    // Set MVP matrix uniform
    glUniformMatrix4fv(eff.get_uniform_location("MVP"), // Location of uniform
                       1,                               // Number of values - 1 mat4
                       GL_FALSE,                        // Transpose the matrix?
                       value_ptr(MVP));                 // Pointer to matrix data
    // *********************************
    // Bind texture to renderer
    renderer::bind(tex, 0);
    // Set the texture value for the shader here
    glUniform1i(eff.get_uniform_location("tex"), 0);
    // *********************************
    // Render the mesh
    renderer::render(m);
    return true;
}
int main() {
    // Create application
    app application("27_Texturing_Shader");
    // Set load content, update and render methods
    application.set_load_content(load_content);
    application.set_update(update);
    application.set_render(render);
    // Run application
    application.run();
}

That's because the order of your vertex positions does not match the order of your texture coordinates. The first position is the top corner (assuming y is up), but the first texture coordinate is the bottom-left corner of the image. Moving the last texture coordinate to the front should do the trick.
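Against the question's code, a minimal sketch of that reordering, so the first coordinate maps to the top vertex:
// Order now matches the positions: top, bottom-left, bottom-right
vector<vec2> tex_coords{ vec2(0.5f, 1.0f), vec2(0.0f, 0.0f), vec2(1.0f, 0.0f) };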
Your texture is probably also mirrored along the y-axis. OpenGL expects texture rows bottom-up, but most image libraries provide them top-down; whether you see this depends on the library you use to load the image, and of course on the side from which you view the triangle.
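If the rows do arrive top-down and you'd rather not flip the image data itself, one sketch of a workaround (assuming the framework does no flipping of its own) is to invert the V coordinate of each vertex before uploading the buffer:
// Compensate for a top-down image by flipping V on every coordinate
for (auto& tc : tex_coords) {
    tc.y = 1.0f - tc.y;
}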

Related

OpenGL doesn't draw my Quad. Everything seems to be fine. No errors from OpenGL

I'm trying to make a system that lets you type in a position and scale, and it will create a vector that automatically generates all the vertices. The problem is that when I try to draw my object, it just won't show up. I used OpenGL's built-in debugging system, but it didn't report anything wrong. I then tried to debug manually, but everything seemed to draw just fine.
Renderer::createQuad() method:
Shape Renderer::createQuad(glm::vec2 position, glm::vec2 scale, Shader shader, Texture texture)
{
    float x = position.x;
    float y = position.y;
    float width = scale.x;
    float height = scale.y;
    std::vector<float> vertices =
    {
        x+width, y+height, 1.0f, 1.0f, // TR
        x+width, y-height, 1.0f, 0.0f, // BR
        x-width, y-height, 0.0f, 0.0f, // BL
        x-width, y+height, 0.0f, 1.0f  // TL
    };
    std::vector<uint32_t> indices =
    {
        0, 1, 3,
        1, 2, 3
    };
    m_lenVertices = vertices.size();
    m_lenIndices = indices.size();
    // these Create methods should be fine as OpenGL does not give me any error
    // also I have another function that requires you to pass in the vertex data and indices that works just fine
    // I bind the thing I am creating
    createVAO();
    createVBO(vertices);
    createEBO(indices);
    createTexture(texture);
    createShader(shader.getVertexShader(), shader.getFragmentShader());
    Shape shape;
    glm::mat4 model(1.0f);
    glUniformMatrix4fv(glGetUniformLocation(m_shader, "model"), 1, GL_FALSE, glm::value_ptr(model));
    shape.setShader(m_shader);
    shape.setVAO(m_VAO);
    shape.setTexture(m_texture);
    shape.setPosition(position);
    return shape;
}
Renderer::draw() method:
void Renderer::draw(Shape shape)
{
    if (!m_usingIndices)
    {
        // Unbinds any other shapes
        glBindVertexArray(0);
        glUseProgram(0);
        shape.bindShader();
        shape.bindVAO();
        shape.bindTexture();
        glDrawArrays(GL_TRIANGLES, 0, m_lenVertices);
    }
    else
    {
        // Unbinds any other shapes
        glBindVertexArray(0);
        glUseProgram(0);
        shape.bindShader();
        shape.bindVAO();
        shape.bindTexture();
        glDrawElements(GL_TRIANGLES, m_lenIndices, GL_UNSIGNED_INT, 0);
    }
}
Projection matrix:
glm::mat4 m_projectionMat = glm::ortho(-Window::getWidth(), Window::getWidth(), -Window::getHeight(), Window::getHeight(), 0.1f, 100.0f);
Creating then rendering the Quad:
// Creates the VBO, VAO, EBO, etc.
quad = renderer.createQuad(glm::vec2(500.0f, 500.0f), glm::vec2(200.0F, 200.0f), LoadFile::loadShader("Res/Shader/VertShader.glsl", "Res/Shader/FragShader.glsl"), LoadFile::loadTexture("Res/Textures/Lake.jpg"));
// In the main game loop we render the quad
quad.setCamera(camera); // Sets the View and Projection matrix for the quad
renderer.draw(quad);
Output:
(screenshot of the output)

How to get keyboard navigation in OpenGL

I'm trying to create a solar system in OpenGL. I have the basic code for the earth spinning on its axis, and I'm trying to set the camera to move with the arrow keys.
using namespace std;
using namespace glm;
const int windowWidth = 1024;
const int windowHeight = 768;
GLuint VBO;
int NUMVERTS = 0;
bool* keyStates = new bool[256]; //Create an array of boolean values of length 256 (0-255)
float fraction = 0.1f; //Fraction for navigation speed using keys
// Transform uniforms location
GLuint gModelToWorldTransformLoc;
GLuint gWorldToViewToProjectionTransformLoc;
// Lighting uniforms location
GLuint gAmbientLightIntensityLoc;
GLuint gDirectionalLightIntensityLoc;
GLuint gDirectionalLightDirectionLoc;
// Materials uniform location
GLuint gKaLoc;
GLuint gKdLoc;
// TextureSampler uniform location
GLuint gTextureSamplerLoc;
// Texture ID
GLuint gTextureObject[11];
//Navigation variables
float posX;
float posY;
float posZ;
float viewX = 0.0f;
float viewY = 0.0f;
float viewZ = 0.0f;
float dirX;
float dirY;
float dirZ;
vec3 cameraPos = vec3(0.0f,0.0f,5.0f);
vec3 cameraView = vec3(viewX,viewY,viewZ);
vec3 cameraDir = vec3(0.0f,1.0f,0.0f);
These are all the variables I'm using to edit the camera.
static void renderSceneCallBack()
{
    // Clear the back buffer and the z-buffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Create our world space to view space transformation matrix
    mat4 worldToViewTransform = lookAt(
        cameraPos,  // The position of your camera, in world space
        cameraView, // where you want to look at, in world space
        cameraDir   // Camera up direction (set to 0,-1,0 to look upside-down)
    );
    // Create our projection transform
    mat4 projectionTransform = perspective(45.0f, (float)windowWidth / (float)windowHeight, 1.0f, 100.0f);
    // Combine the world space to view space transformation matrix and the projection transformation matrix
    mat4 worldToViewToProjectionTransform = projectionTransform * worldToViewTransform;
    // Update the transforms in the shader program on the GPU
    glUniformMatrix4fv(gWorldToViewToProjectionTransformLoc, 1, GL_FALSE, &worldToViewToProjectionTransform[0][0]);
    glEnableVertexAttribArray(0);
    glEnableVertexAttribArray(1);
    glEnableVertexAttribArray(2);
    glBindBuffer(GL_ARRAY_BUFFER, VBO);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), 0);
    glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)12);
    glVertexAttribPointer(2, 2, GL_FLOAT, GL_FALSE, sizeof(aitVertex), (const GLvoid*)24);
    // Set the material properties
    glUniform1f(gKaLoc, 0.8f);
    glUniform1f(gKdLoc, 0.8f);
    // Bind the texture to texture unit 0
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, gTextureObject[0]);
    // Set our sampler to use texture unit 0
    glUniform1i(gTextureSamplerLoc, 0);
    // Draw triangle
    mat4 modelToWorldTransform = mat4(1.0f);
    static float angle = 0.0f;
    angle += 1.0f;
    modelToWorldTransform = rotate(modelToWorldTransform, angle, vec3(0.0f, 1.0f, 0.0f));
    glUniformMatrix4fv(gModelToWorldTransformLoc, 1, GL_FALSE, &modelToWorldTransform[0][0]);
    glDrawArrays(GL_TRIANGLES, 0, NUMVERTS);
    glDisableVertexAttribArray(0);
    glDisableVertexAttribArray(1);
    glDisableVertexAttribArray(2);
    glutSwapBuffers();
}
This is the function that draws the earth onto the screen and determines where the camera is.
void keyPressed(unsigned char key, int x, int y)
{
    keyStates[key] = true; // Set the state of the current key to pressed
    cout << "keyPressed ";
}
void keyUp(unsigned char key, int x, int y)
{
    keyStates[key] = false; // Set the state of the current key to released
    cout << "keyUp ";
}
void keyOperations(void)
{
    if (keyStates['a'])
    {
        viewX += 0.5f;
    }
    cout << "keyOperations ";
}
These are the functions I'm trying to use to edit the camera variables dynamically.
// Create a vertex buffer
createVertexBuffer();
glutKeyboardFunc(keyPressed); //Tell Glut to use the method "keyPressed" for key events
glutKeyboardUpFunc(keyUp); //Tell Glut to use the method "keyUp" for key events
keyOperations();
glutMainLoop();
Finally, here are the few lines in my main method where I call the key-press functions. In the console I can see that it detects my key presses, but the planet doesn't move at all. I think I may be calling keyOperations in the wrong place, but I'm not sure.
You are correct: keyOperations is being called in the wrong place. Where it is now, it is called once and then never again. It needs to go in your update code, where you update the rotation of the planet, so that it is called at least once per frame.
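A minimal sketch of that change, using the question's own functions. Note that cameraView was copied from viewX/viewY/viewZ once at startup, so it also has to be rebuilt from the updated values each frame:
static void renderSceneCallBack()
{
    keyOperations();                        // poll the key states once per frame
    cameraView = vec3(viewX, viewY, viewZ); // rebuild the look-at target from the updated values
    // ... the rest of the rendering code stays as it was ...
}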

Best place to store model matrix in OpenGL?

I'm currently refactoring my OpenGL program (used to be one single enormous file) to use C++ classes. The basic framework looks like this:
I have an interface Drawable with the function virtual void Render(GLenum type) const = 0; and a bunch of classes implementing this interface (Sphere, Cube, Grid, Plane, PLYMesh and OBJMesh).
In my main.cpp I'm setting up a scene containing multiple of these objects, each with its own shader program. After setting uniform buffer objects and each program's individual uniforms, I'm calling glutMainLoop().
In my Display function, called once per frame, I first set up all the transformation matrices and finally call the above-mentioned Render function for every object in the scene:
void Display()
{
    // Clear framebuffer
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    modelViewMatrix = glm::mat4(1.0);
    projectionMatrix = glm::mat4(1.0);
    normalMatrix = glm::mat4(1.0);
    modelViewMatrix = glm::lookAt(glm::vec3(0.0, 0.0, mouse_translate_z), glm::vec3(0.0, 0.0, 0.0), glm::vec3(0.0, 1.0, 0.0));
    modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_x, glm::vec3(1.0f, 0.0f, 0.0f));
    modelViewMatrix = glm::rotate(modelViewMatrix, -mouse_rotate_y, glm::vec3(0.0f, 1.0f, 0.0f));
    projectionMatrix = glm::perspective(45.0f, (GLfloat)WINDOW_WIDTH / (GLfloat)WINDOW_HEIGHT, 1.0f, 10000.f);
    // No non-uniform scaling (only use mat3(normalMatrix) in the shader)
    normalMatrix = modelViewMatrix;
    glBindBuffer(GL_UNIFORM_BUFFER, ubo_global_matrices);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(glm::mat4), glm::value_ptr(modelViewMatrix));
    glBufferSubData(GL_UNIFORM_BUFFER, 1 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(projectionMatrix));
    glBufferSubData(GL_UNIFORM_BUFFER, 2 * sizeof(glm::mat4), sizeof(glm::mat4), glm::value_ptr(normalMatrix));
    glBindBuffer(GL_UNIFORM_BUFFER, 0);
    // ************************************************** //
    // **************** DRAWING COMMANDS **************** //
    // ************************************************** //
    // Grid
    if (grid->GetIsRendered())
    {
        program_GRID_NxN->Use();
        grid->Render(GL_LINES);
        program_GRID_NxN->UnUse();
    }
    // Plane
    ...
    // Sphere
    ...
    // Swap front and back buffer and redraw scene
    glutSwapBuffers();
    glutPostRedisplay();
}
My question is the following: with the current code, I'm using the same ModelView matrix for every object. What if I want to translate only the sphere, or rotate only the plane, without changing the vertex positions? Where is the best place to store the model matrix in a large OpenGL program? What about putting a protected member variable glm::mat4 modelMatrix into the Drawable interface? Also, should the model and view matrices be split (for example, using a Camera class containing only the view matrix)?
My answer is mainly based on Tom Dalling's excellent tutorial, but with some minor changes.
Firstly, all your view and projection matrix operations should go in the Camera class. Camera will provide a convenient way of getting the combined view and projection matrices by calling the matrix() method:
glm::mat4 Camera::matrix() const {
    return projection() * view();
}
Camera.cpp
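For completeness, a minimal sketch of the two methods matrix() relies on, assuming the camera stores a position, a target and the usual lens parameters (these field names are illustrative, not taken from the tutorial):
glm::mat4 Camera::view() const {
    // Where the camera is, what it looks at, and which way is up
    return glm::lookAt(position, target, glm::vec3(0.0f, 1.0f, 0.0f));
}

glm::mat4 Camera::projection() const {
    // Vertical field of view (in whatever units your GLM version expects),
    // aspect ratio, and near/far planes
    return glm::perspective(fieldOfView, aspectRatio, nearPlane, farPlane);
}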
Then, for this example, you'd have a ModelAsset, which contains everything you need to render the geometry. This asset should be unique and stored in a ResourceManager or something similar.
struct ModelAsset {
    Shader* shader;
    Texture* texture;
    GLuint vbo;
    GLuint vao;
    GLenum drawType;
    GLint drawStart;
    GLint drawCount;
};
Then you have a ModelInstance, which has a pointer to the asset plus a unique transform matrix. This way you can create as many instances of a particular asset as you like, each with its own unique transformation.
struct ModelInstance {
    ModelAsset* asset;
    glm::mat4 transform;
};
ModelInstance cube;
cube.asset = &asset; // An asset that you created somewhere else (e.g. ResourceManager)
cube.transform = glm::mat4(); // Your unique transformation for this instance
To render an instance, you pass the combined camera matrix and the model matrix as uniforms to the shader, and the shader does the rest of the work:
shaders->setUniform("camera", camera.matrix());
shaders->setUniform("model", cube.transform);
Finally, it's best to keep all your instances grouped in some resizable container.
std::vector<ModelInstance> instances;
instances.push_back(cube);
instances.push_back(sphere);
instances.push_back(pyramid);
for (ModelInstance& i : instances) { // note the reference: a plain copy would discard the update
    i.transform = glm::rotate(i.transform, getTime(), glm::vec3(0.0f, 1.0f, 0.0f));
}
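The per-frame draw then follows the same pattern. A sketch, under the assumption that Shader exposes use() alongside the setUniform() shown above:
for (const ModelInstance& i : instances) {
    i.asset->shader->use(); // assumed Shader interface
    i.asset->shader->setUniform("camera", camera.matrix());
    i.asset->shader->setUniform("model", i.transform);
    glBindVertexArray(i.asset->vao);
    glDrawArrays(i.asset->drawType, i.asset->drawStart, i.asset->drawCount);
    glBindVertexArray(0);
}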

How to re-write 2D OpenGL app for OpenGL ES?

I am working on an OpenGL 2D game with sprite graphics. I was recently advised to use OpenGL ES calls, since ES is a subset of OpenGL and would let me port the game more easily to mobile platforms. The majority of the code consists of calls to a draw_image function, which is defined like so:
void draw_img(float x, float y, float w, float h, GLuint tex, float r = 1, float g = 1, float b = 1) {
    glColor3f(r, g, b);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f);
    glVertex2f(x, y);
    glTexCoord2f(1.0f, 0.0f);
    glVertex2f(w + x, y);
    glTexCoord2f(1.0f, 1.0f);
    glVertex2f(w + x, h + y);
    glTexCoord2f(0.0f, 1.0f);
    glVertex2f(x, h + y);
    glEnd();
}
What do I need to change to make this OpenGL ES compatible? Also, the reason I am using fixed-function rather than shaders is that I am developing on a machine which doesn't support GLSL.
In OpenGL ES 1.1, use the glVertexPointer(), glColorPointer(), glTexCoordPointer() and glDrawArrays() functions to draw a quad. In contrast to your desktop OpenGL code, you describe the structures (positions, colors, texture coordinates) that your quad consists of, instead of issuing the built-in glTexCoord2f, glVertex2f and glColor3f calls once per vertex.
Here is some example code that should do what you want. (I have used the argument names you used in your function definition, so it should be simple to port your code from the example.)
First, you need to define a structure for one vertex of your quad. This will hold the quad vertex positions, colors and texture coordinates.
// Define a simple 2D vector
typedef struct Vec2 {
    float x, y;
} Vec2;
// Define a simple 4-byte color (unsigned bytes, so components range 0-255)
typedef struct Color4B {
    GLubyte r, g, b, a;
} Color4B;
// Define a suitable quad vertex with a color and tex coords
typedef struct QuadVertex {
    Vec2 vect;      // 8 bytes
    Color4B color;  // 4 bytes
    Vec2 texCoords; // 8 bytes
} QuadVertex;
Then, you should define a structure describing the whole quad consisting of four vertices:
// Define a quad structure
typedef struct Quad {
    QuadVertex tl;
    QuadVertex bl;
    QuadVertex tr;
    QuadVertex br;
} Quad;
Now, instantiate your quad and assign quad vertex information (positions, colors, texture coordinates):
Quad quad;
quad.bl.vect = (Vec2){x, y};
quad.br.vect = (Vec2){w + x, y};
quad.tr.vect = (Vec2){w + x, h + y};
quad.tl.vect = (Vec2){x, h + y};
quad.tl.color = quad.tr.color = quad.bl.color = quad.br.color
    = (Color4B){r, g, b, 255};
quad.tl.texCoords = (Vec2){0, 0};
quad.tr.texCoords = (Vec2){1, 0};
quad.br.texCoords = (Vec2){1, 1};
quad.bl.texCoords = (Vec2){0, 1};
Now tell OpenGL how to draw the quad. The calls to gl...Pointer give OpenGL the right offsets and sizes into your vertex structure, so it can later use that information to draw the quad.
// "Explain" the quad structure to OpenGL ES
#define kQuadSize sizeof(quad.bl)
long offset = (long)&quad;
// vertex
int diff = offsetof(QuadVertex, vect);
glVertexPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
// color
diff = offsetof(QuadVertex, color);
glColorPointer(4, GL_UNSIGNED_BYTE, kQuadSize, (void*)(offset + diff));
// texCoods
diff = offsetof(QuadVertex, texCoords);
glTexCoordPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));
Finally, assign the texture and draw the quad. glDrawArrays tells OpenGL to use the previously defined offsets together with the values contained in your Quad object to draw the shape defined by 4 vertices.
glBindTexture(GL_TEXTURE_2D, tex);
// Draw the quad
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindTexture(GL_TEXTURE_2D, 0);
Please also note that it is perfectly OK to use OpenGL ES 1.x if you don't need shaders. The main difference between ES 1 and ES 2 is that ES 2 has no fixed pipeline, so you would need to implement a matrix stack plus shaders for the basic rendering yourself. If you are fine with the functionality offered by the fixed pipeline, just use OpenGL ES 1.
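For comparison, a hedged sketch of what the same quad submission looks like in ES 2.0 with generic vertex attributes; program is assumed to be an already-compiled shader program whose attributes are named a_position, a_color and a_texCoord (names of my choosing, not from any framework):
// ES 2.0: no fixed-function pointers; bind each field of QuadVertex to a shader attribute
GLint posLoc = glGetAttribLocation(program, "a_position");
GLint colLoc = glGetAttribLocation(program, "a_color");
GLint texLoc = glGetAttribLocation(program, "a_texCoord");
glUseProgram(program);
glEnableVertexAttribArray(posLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, sizeof(QuadVertex), &quad.tl.vect);
glEnableVertexAttribArray(colLoc);
// GL_TRUE normalizes the 0-255 bytes to the 0.0-1.0 range the shader expects
glVertexAttribPointer(colLoc, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(QuadVertex), &quad.tl.color);
glEnableVertexAttribArray(texLoc);
glVertexAttribPointer(texLoc, 2, GL_FLOAT, GL_FALSE, sizeof(QuadVertex), &quad.tl.texCoords);
glBindTexture(GL_TEXTURE_2D, tex);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);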

Matrix Transformation Problem - Z Axis Rotation is Skewing

For a simple 2D game I'm making, I'm trying to rotate sprites around the z-axis using matrices. I'm clearly doing something wrong: when I attempt to rotate my sprite, it looks like it is being rotated around the screen origin (bottom left) and not the sprite origin. I'm confused, as my quad is already at the origin, so I didn't think I needed to translate -> rotate -> translate back. Here's a code snippet and a small video of the erroneous transformation.
void MatrixMultiply(
    MATRIX &mOut,
    const MATRIX &mA,
    const MATRIX &mB);
/*!***************************************************************************
#Function TransTransformArray
#Output pTransformedVertex Destination for transformed vectors
#Input pV Input vector array
#Input nNumberOfVertices Number of vectors to transform
#Input pMatrix Matrix to transform the vectors of input vector (e.g. use 1 for position, 0 for normal)
#Description Transform all vertices in pVertex by pMatrix and store them in
pTransformedVertex
- pTransformedVertex is the pointer that will receive transformed vertices.
- pVertex is the pointer to untransformed object vertices.
- nNumberOfVertices is the number of vertices of the object.
- pMatrix is the matrix used to transform the object.
*****************************************************************************/
void TransTransformArray(
    VECTOR3 * const pTransformedVertex,
    const VECTOR3 * const pV,
    const int nNumberOfVertices,
    const MATRIX * const pMatrix);
RenderQuad CreateRenderQuad(
    const Texture2D & texture,
    float x,
    float y,
    float scaleX,
    float scaleY,
    float rotateRadians,
    int zIndex,
    const Color & color,
    const Quad2 & textureCoord,
    const char * name
) {
    MATRIX mT;
    MATRIX mS;
    MATRIX concat;
    MATRIX mR;
    MatrixTranslation(mT, x, y, 0.0f);
    MatrixRotationZ(mR, rotateRadians);
    MatrixScaling(mS, scaleX, scaleY, 1.0f);
    VECTOR3 quad[] = {
        {-0.5f, 0.5f, 0.0f},  // tl
        {0.5f, 0.5f, 0.0f},   // tr
        {-0.5f, -0.5f, 0.0f}, // bl
        {0.5f, -0.5f, 0.0f},  // br
    };
    MatrixMultiply(concat, mR, mT);
    MatrixMultiply(concat, concat, mS);
    // apply to all the points in the quad
    TransTransformArray(quad, quad, 4, &concat);
Update:
Here are the structs and render code. I'm using the matrix class from oolongengine: code.google.com/p/oolongengine/source/browse/trunk/Oolong%20Engine2/Math/Matrix.cpp
I transform all the quads and then render them later using OpenGL. My data structs and render code:
typedef struct _RenderData {
    VECTOR3 vertex;
    RenderColor3D color;
    RenderTextureCoord textureCoord;
    float zIndex;
    GLuint textureId;
} RenderData;

typedef struct _RenderQuad {
    //! top left
    RenderData tl;
    //! top right
    RenderData tr;
    //! bottom left
    RenderData bl;
    //! bottom right
    RenderData br;
    float zIndex;
    Texture2D * texture; // render quad draws a source rect from here
    ESpriteBlendMode blendMode;
} RenderQuad;
/// Draw
class QuadBatch {
    GLushort * m_indices;
    const Texture2D * m_texture;
    GLuint m_vbos[2];
    RenderData * m_vertices;
};
void QuadBatch::Draw() {
    int offset = (int)&m_vertices[startIndex];
    // vertex
    int diff = offsetof(RenderData, vertex);
    glVertexPointer(3, GL_FLOAT, kRenderDataSize, (void*)(offset + diff));
    // color
    diff = offsetof(RenderData, color);
    glColorPointer(4, GL_FLOAT, kRenderDataSize, (void*)(offset + diff));
    // tex coords
    diff = offsetof(RenderData, textureCoord);
    glTexCoordPointer(2, GL_FLOAT, kRenderDataSize, (void*)(offset + diff));
    // each quad has 6 indices
    glDrawElements(GL_TRIANGLES, vertexCount * elementMultiplier, GL_UNSIGNED_SHORT, m_indices);
}
'Rotation', by definition, is around the origin (0,0,0). If you want a different axis of rotation, you have to apply a Translation component. Say you want to apply a rotation R around an axis a. The transformation to apply to an arbitrary vector x is:
x --> a + R(x - a) = Rx + (a - Ra)
(This might take some staring to digest). So, after applying your rotation - which, as you observed, rotates around the origin - you have to add the constant vector (a - Ra).
[Edit:] This answer is language and platform agnostic - the math is the same wherever you look. Specific libraries contain different structures and API to apply transformations. Both DirectX and OpenGL, for example, maintain 4x4 matrix transforms, to unify rotations and translations into a single matrix multiplication (via an apparatus called homogeneous coordinates).
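Applied to the question's snippet, a sketch using the same helpers; the multiply-order convention here is an assumption about the oolongengine API, so if your library composes matrices the other way around, swap the operands:
MATRIX mT, mR, mS, concat;
MatrixTranslation(mT, x, y, 0.0f);
MatrixRotationZ(mR, rotateRadians);
MatrixScaling(mS, scaleX, scaleY, 1.0f);
// Compose as T * R * S so the quad is scaled, then rotated about its own
// center (which sits at the origin), and only then translated into place.
MatrixMultiply(concat, mT, mR);     // T * R
MatrixMultiply(concat, concat, mS); // (T * R) * S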