Tiles being drawn in the wrong location - C++

I've finally managed to get my tiles drawn on the screen more or less correctly, although the location is a bit off and I can't seem to figure out why...
I'm using SFML for drawing.
Tile.hpp:
#ifndef TILE_HPP
#define TILE_HPP
#include <SFML/Graphics.hpp>
#include <SFML/System.hpp>
#include "textureManager.hpp"
class Tile {
public:
    Tile();
    Tile(sf::Vector2i coord, int biome);
    ~Tile();

    sf::Vector2i getCoord() const { return coord; }
    int getBiome() const { return biome; }

    void setCoord(sf::Vector2i coord) { this->coord = coord; }
    void setBiome(int biome) { this->biome = biome; }

    void draw(int x, int y, sf::RenderWindow* rw);
    void update(sf::Texture& texture);

private:
    sf::Vector2i coord;
    int biome;
    sf::Sprite sprite;
};
#endif
Tile.cpp
#include <SFML/Graphics.hpp>
#include <SFML/System.hpp>
#include "textureManager.hpp"
#include "tile.hpp"
Tile::Tile()
{}

Tile::Tile(sf::Vector2i coord, int biome) {
    this->biome = biome;
    this->coord = coord;
}

Tile::~Tile() {}

void Tile::draw(int x, int y, sf::RenderWindow* rw)
{
    sprite.setPosition(x, y);
    rw->draw(sprite);
}

void Tile::update(sf::Texture& texture)
{
    switch (biome)
    {
        // Not important here
    }
}
Now the more relevant part: the drawing
void StatePlay::draw(const float dt)
{
    game->window.setView(view);
    game->window.clear(sf::Color::Black);

    sf::Vector2f offset = camera.getLocation();
    int newX = (offset.x / map.getTileSize()) - (map.chunkSize / 2);
    int newY = (offset.y / map.getTileSize()) - (map.chunkSize / 2);

    for (int x = 0; x < map.chunkSize; x++)
    {
        for (int y = 0; y < map.chunkSize; y++)
        {
            Tile tile = map.getTile(newX + x, newY + y);
            tile.draw((newX + x) * map.getTileSize(), (newY + y) * map.getTileSize(), &game->window);
        }
    }
    return;
}
StatePlay::StatePlay(Game* game)
{
    this->game = game;

    sf::Vector2f pos = sf::Vector2f(game->window.getSize()); // 1366x768
    view.setSize(pos);
    pos *= 0.5f; // 683x384
    view.setCenter(pos);

    // Initialize map
    map.init(game->gameTime, game->textureManager.getImage("tileset.png"));

    float w = (float) map.getWidth();  // 500
    float h = (float) map.getHeight(); // 500
    w *= 0.5f; // 250
    h *= 0.5f; // 250
    w *= map.getTileSize(); // 250 * 32 = 8000
    h *= map.getTileSize(); // 250 * 32 = 8000

    // Move camera
    // Uses view::move from SFML to move the view by w and h
    // Also sets the camera's private location to w and h, returned later by camera::getLocation()
    camera.setLocation(&view, sf::Vector2f(w, h));
}
The result is that I only see roughly a 10x10 block of tiles, in the bottom left corner of my screen, covering about 3/4 of it.
The correct tiles are chosen, but the draw location is wrong... It should draw the centre of the 64x64 tiles (32 px each), as many as fit on the screen.

I have fixed the problem. It was a very stupid mistake...
At first, without drawing anything, it is normal to center the view on 0.5f * sf::View::getSize() to get the view centered in your window, so the center was already at half of my window size. In Camera::setLocation() I then used sf::View::move() to move the view accordingly. So when trying to center it on the map, it added the x and y correctly, but also half of my window size. This resulted in an incorrect offset. Subtracting those values, or leaving them out, fixed this stupid problem.
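For reference, a minimal sketch of the corrected centering, assuming Camera::setLocation simply stores the location and repositions the view (the Camera class itself is not shown in the question):
// Hedged sketch: sf::View::setCenter is absolute, so it does not stack on top of
// the initial half-window offset the way sf::View::move would.
void Camera::setLocation(sf::View* view, sf::Vector2f location)
{
    this->location = location;   // assumed private member returned by getLocation()
    view->setCenter(location);   // instead of view->move(location)
}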
Thank you for the help.

Related

2D Diamond (isometric) map editor - Textures extended infinitely?

I'm currently developing a 2D isometric map editor.
I display entities (cube, player) which contain points and textures.
Each cube is composed of 12 points (12 points, but handled as 3 sides of 4 points when displayed by SFML via sf::VertexArray).
(I know I include some '.cpp' files from time to time; I have a problem with my IDE (Visual Studio) which I'm trying to resolve, please do not mind it.)
main.cpp
#pragma once
#include "globalfunctions.h" //global functions + main headers + class headers
int main() {
int mapSize = 0;
int cubeSize = 0;
cout << "Map size: "; cin >> mapSize; cout << endl;
cout << "Cube size: "; cin >> cubeSize; cout << endl;
int windowWidth = (mapSize * cubeSize) - (cubeSize * 2);
int windowHeight = ((mapSize * cubeSize) - (cubeSize * 2)) / 2;
renderWindow window(windowWidth, windowHeight, mapSize, cubeSize);
int nbMaxTextures = 9;
for (int t = 0; t < nbMaxTextures; t++) {
window.loadTexture("test", t);
}
window.run();
return EXIT_SUCCESS;
}
globalfunctions.h
#pragma once
#include <SFML/System.hpp>
#include <SFML/Graphics.hpp>
#include <SFML/Window.hpp>
#include <iostream>
#include <math.h>
//#include <sstream>
#include <vector>
using namespace std;
sf::Vector2u isometricToCartesian(int i, int j, int cubeSize) {
sf::Vector2u carth;
carth.x = (j - i) * (cubeSize / 2);
carth.y = (j + i) * (cubeSize / 4);
return carth;
}
sf::Vector2u cartesianToIsometric(int x, int y, int cubeSize) {//TODO
sf::Vector2u iso;
iso.x = 0;
iso.y = 0;
return iso;
}
#include "entity.h"
#include "renderWindow.h"
renderWindow.h
#pragma once
class renderWindow {
public:
renderWindow(float WIDTH, float HEIGHT, int MAPSIZE, int CUBESIZE);
void run();
void loadTexture(sf::String folder, int numTexture);
//SETTERS
//...
//GETTERS
//...
private:
int mCurrentLayerID;
int mMapSize;
int mCubeSize;
int mSelectedTexture;
vector<entity> mMap;
sf::RenderWindow mWindow;
vector<sf::Texture> mTextures;
sf::Texture mMemoryTexture;
void processEvent();
void update(sf::Time deltaTime);
void render();
//CUBE ACTION-------------------------------------------
void addCube(int layerID, float x, float y);
entity& getCube(int ID);
entity& getCubeAt(float x, float y);
vector<sf::VertexArray> loadCube(int cubeID);//UPDATE DATA LIKE COORDINATES -> create/chnge the vertex
void drawCube(int cubeID);//draw the vertex
//VARIABLES
vector<sf::VertexArray> verticesSide1;
vector<sf::VertexArray> verticesSide2;
vector<sf::VertexArray> verticesSide3;
//CUBE ACTION-------------------------------------------
};
#include "renderWindow.cpp"
renderWindow.cpp
#pragma once
renderWindow::renderWindow(float WIDTH, float HEIGHT, int MAPSIZE, int CUBESIZE) : mWindow(sf::VideoMode(WIDTH, HEIGHT), "") {
mMapSize = MAPSIZE;
mCubeSize = CUBESIZE;
mSelectedTexture = 6;
mCurrentLayerID = -1;
int x = 0;
int y = 0;
//default layer
for (int j = 0; j < mMapSize; j++) {
for (int i = 0; i < mMapSize; i++) {
x = isometricToCartesian(i, j, mCubeSize).x;
y = isometricToCartesian(i, j, mCubeSize).y;
addCube(0, x, y);
}
}
for (int c = 0; c < mMap.size(); c++) {
verticesSide1.push_back(loadCube(c)[0]);
verticesSide2.push_back(loadCube(c)[1]);
verticesSide3.push_back(loadCube(c)[2]);
//then only do that when something the cube's coordinate changed
}
}
void renderWindow::run() {
sf::Clock clock;
sf::Time timeSinceLastUpdate = sf::Time::Zero;
sf::Time TimePerFrame = sf::seconds(1.f / 60.f);
while (mWindow.isOpen()) {
processEvent();
timeSinceLastUpdate += clock.restart();
while (timeSinceLastUpdate > TimePerFrame) {
timeSinceLastUpdate -= TimePerFrame;
processEvent();
update(TimePerFrame);
}
render();
}
}
void renderWindow::loadTexture(sf::String folder, int numTexture) {
if (mMemoryTexture.loadFromFile("textures/" + folder + "/" + to_string(numTexture) + ".jpg"))
mTextures.push_back(mMemoryTexture);
else
cout << "Texture n°" << numTexture << " as failed to load." << endl;
}
//SETTERS
//...
//GETTERS
//...
//PRIVATE METHODE
void renderWindow::processEvent() {
sf::Event event;
while (mWindow.pollEvent(event)) {
switch (event.type) {
case sf::Event::Closed:
mWindow.close();
break;
case sf::Event::KeyPressed:
if (event.key.code == sf::Keyboard::Escape)
mWindow.close();
break;
case sf::Event::MouseButtonPressed:
// The event type is already selected by the switch; test the button itself,
// and brace the block so all three calls are guarded by the if.
if (event.mouseButton.button == sf::Mouse::Left) {
getCubeAt(event.mouseButton.x, event.mouseButton.y).setTexture(0, mSelectedTexture);//TEST
getCubeAt(event.mouseButton.x, event.mouseButton.y).setTexture(1, mSelectedTexture + 1);//TEST
getCubeAt(event.mouseButton.x, event.mouseButton.y).setTexture(2, mSelectedTexture + 2);//TEST
}
break;
/*case sf::Event::MouseMoved:
cout << "(" << event.mouseMove.x << ", " << event.mouseMove.y << ")" << endl;
break;*/
}
}
}
void renderWindow::update(sf::Time deltaTime) {
//REMEMBER: distance = speed * time
//MOVEMENT, ANIMATIONS ETC. ...
}
void renderWindow::render() {
mWindow.clear();
for (int c = 0; c < mMap.size(); c++) {
drawCube(c);
}
mWindow.display();
}
//CUBE ACTION-------------------------------------------
void renderWindow::addCube(int layerID, float x, float y) {
//These make the code more readable:
int half_cubeSize = mCubeSize / 2;
int oneQuarter_cubeSize = mCubeSize / 4;
int twoQuarter_cubeSize = oneQuarter_cubeSize * 2;
int treeQuarter_cubeSize = oneQuarter_cubeSize * 3;
mCurrentLayerID = layerID;
entity dummy(mMap.size(), 0, layerID);
dummy.addPoint(12);
dummy.addTexture(6);
dummy.addTexture(7);
dummy.addTexture(8);
//SIDE 1------------------------------------------------
dummy.setPoint(0, x, y + oneQuarter_cubeSize);
dummy.setPoint(1, x + half_cubeSize, y + twoQuarter_cubeSize);
dummy.setPoint(2, x + half_cubeSize, y + mCubeSize);
dummy.setPoint(3, x, y + treeQuarter_cubeSize);
//SIDE 2------------------------------------------------
dummy.setPoint(4, x + half_cubeSize, y + twoQuarter_cubeSize);
dummy.setPoint(5, x + mCubeSize, y + oneQuarter_cubeSize);
dummy.setPoint(6, x + mCubeSize, y + treeQuarter_cubeSize);
dummy.setPoint(7, x + half_cubeSize, y + mCubeSize);
//SIDE 3------------------------------------------------
dummy.setPoint(8, x, y + oneQuarter_cubeSize);
dummy.setPoint(9, x + half_cubeSize, y);
dummy.setPoint(10, x + mCubeSize, y + oneQuarter_cubeSize);
dummy.setPoint(11, x + half_cubeSize, y + twoQuarter_cubeSize);
mMap.push_back(dummy);
}
entity& renderWindow::getCube(int ID) {
for (int c = 0; c < mMap.size(); c++) {
if (mMap[c].getID() == ID)
return mMap[c];
}
}
entity& renderWindow::getCubeAt(float x, float y) {//TO DO
return entity(-1, 0, 0);
}
vector<sf::VertexArray> renderWindow::loadCube(int cubeID) {
vector<sf::VertexArray> vertices;
vertices.push_back(sf::VertexArray());
vertices.push_back(sf::VertexArray());
vertices.push_back(sf::VertexArray());
vertices[0].setPrimitiveType(sf::Quads);
vertices[0].resize(4);
vertices[1].setPrimitiveType(sf::Quads);
vertices[1].resize(4);
vertices[2].setPrimitiveType(sf::Quads);
vertices[2].resize(4);
sf::Vector2f tv0 = sf::Vector2f(0, 0);
sf::Vector2f tv1 = sf::Vector2f(mCubeSize, 0);
sf::Vector2f tv2 = sf::Vector2f(mCubeSize, mCubeSize);
sf::Vector2f tv3 = sf::Vector2f(0, mCubeSize);
sf::Vector2f v0 = sf::Vector2f(getCube(cubeID).getPoint(0, 0), getCube(cubeID).getPoint(0, 1));
sf::Vector2f v1 = sf::Vector2f(getCube(cubeID).getPoint(1, 0), getCube(cubeID).getPoint(1, 1));
sf::Vector2f v2 = sf::Vector2f(getCube(cubeID).getPoint(2, 0), getCube(cubeID).getPoint(2, 1));
sf::Vector2f v3 = sf::Vector2f(getCube(cubeID).getPoint(3, 0), getCube(cubeID).getPoint(3, 1));
sf::Vector2f v4 = sf::Vector2f(getCube(cubeID).getPoint(4, 0), getCube(cubeID).getPoint(4, 1));
sf::Vector2f v5 = sf::Vector2f(getCube(cubeID).getPoint(5, 0), getCube(cubeID).getPoint(5, 1));
sf::Vector2f v6 = sf::Vector2f(getCube(cubeID).getPoint(6, 0), getCube(cubeID).getPoint(6, 1));
sf::Vector2f v7 = sf::Vector2f(getCube(cubeID).getPoint(7, 0), getCube(cubeID).getPoint(7, 1));
sf::Vector2f v8 = sf::Vector2f(getCube(cubeID).getPoint(8, 0), getCube(cubeID).getPoint(8, 1));
sf::Vector2f v9 = sf::Vector2f(getCube(cubeID).getPoint(9, 0), getCube(cubeID).getPoint(9, 1));
sf::Vector2f v10 = sf::Vector2f(getCube(cubeID).getPoint(10, 0), getCube(cubeID).getPoint(10, 1));
sf::Vector2f v11 = sf::Vector2f(getCube(cubeID).getPoint(11, 0), getCube(cubeID).getPoint(11, 1));
vertices[0][0] = sf::Vertex(v0, tv0);
vertices[0][1] = sf::Vertex(v1, tv1);
vertices[0][2] = sf::Vertex(v2, tv2);
vertices[0][3] = sf::Vertex(v3, tv3);
vertices[1][0] = sf::Vertex(v4, tv0);
vertices[1][1] = sf::Vertex(v5, tv1);
vertices[1][2] = sf::Vertex(v6, tv2);
vertices[1][3] = sf::Vertex(v7, tv3);
vertices[2][0] = sf::Vertex(v8, tv0);
vertices[2][1] = sf::Vertex(v9, tv1);
vertices[2][2] = sf::Vertex(v10, tv2);
vertices[2][3] = sf::Vertex(v11, tv3);
return vertices;
}
void renderWindow::drawCube(int cubeID) {
mWindow.draw(verticesSide1[cubeID], &mTextures[getCube(cubeID).getTexture(0)]);
mWindow.draw(verticesSide2[cubeID], &mTextures[getCube(cubeID).getTexture(1)]);
mWindow.draw(verticesSide3[cubeID], &mTextures[getCube(cubeID).getTexture(2)]);
}
//CUBE ACTION-------------------------------------------
entity.h
#pragma once
class entity {
public:
entity();
entity(int id, int type, int numlayer);
void addPoint(int nbPoints);
void addTexture(int numTexture);
//SETTERS
void setPoint(int numPoint, float x, float y);
void setTexture(int textureID, int numTexture);
//GETTERS
int getID();
float getPoint(int numPoint, int numIndex);//if numIndex = 0 -> x || if numIndex = 1 -> y
int getType();
int getNumLayer();
int getTexture(int numTexture);
private:
int mID;
int mType;
int mNumLayer;
vector<sf::Vector2u> mPoints;
vector<int> mTextures;
};
#include "entity.cpp"
entity.cpp
#pragma once
entity::entity() {
mID = 0;
mType = -1;
mNumLayer = 0;
}
entity::entity(int id, int type, int numlayer) {
mID = id;
mType = type;
mNumLayer = numlayer;
}
void entity::addPoint(int nbPoints) {
mPoints.clear();
int newSize = 0;
for (int p = 0; p < nbPoints; p++) {
newSize++;
}
mPoints = vector<sf::Vector2u>(newSize);
}
void entity::addTexture(int numTexture) {
mTextures.push_back(numTexture);
}
//SETTERS
void entity::setPoint(int numPoint, float x, float y) {
mPoints[numPoint].x = x;
mPoints[numPoint].y = y;
}
void entity::setTexture(int textureID, int numTexture) {
mTextures[textureID] = numTexture;
}
//GETTERS
int entity::getID() {
return mID;
}
float entity::getPoint(int numPoint, int numIndex) {
if (numIndex == 0)
return mPoints[numPoint].x;
else
return mPoints[numPoint].y;
}
int entity::getType() {
return mType;
}
int entity::getNumLayer() {
return mNumLayer;
}
int entity::getTexture(int numTexture) {
return mTextures[numTexture];
}
I've done a lot of tests, too many, so I won't post them right now, but if you have any questions, feel free to ask.
Here is the problem described in the title (screenshot), and here are screens with only one face displayed (in the same order as in the code).
The only thing I don't understand is that a cube displayed alone works perfectly fine if you enter the coordinates manually, even the extended ones. But the coordinates formula is fine... (I noticed that cube n°50 for a 15x15 map with 64x64 cubes displays a rectangle of 'infinite' width.)
If the texture is extended (maybe to infinity), does it suggest that the coordinates are continuously increasing somewhere? Then why are the cubes still placed correctly?
Here are the assets (64x64 PNG):
Directories: textures/test/
Not really an answer (as the code will be rewritten anyway), so here are a few hints for the new code instead (some of them are already mentioned in the comments).
Tileset
In the final isometric engine use sprites. They are faster and support pixel art. For my purposes I use a compilation of these two free-to-use tilesets (64x64):
outside tileset
medieval building tileset
Both are compatible. I compiled and edited them to suit the needs of my engine. So this is what I use (still a work in progress):
The white color 0x00FFFFFF means transparent. The sprite alone is not enough, so I added info about the height of each tile and its rotations.
If you look at the first 4 tiles from the upper left corner, they are all the same thing rotated by 90 degrees. So every tile of mine carries an index of 4 tiles (the 90 degree rotations), int rot[4]. This way I can rotate the whole map or just the view. I compile the set so the rotations are next to each other. There are 3 options:
tile[ix].rot[]={ ix,ix,ix,ix }; where ix is a tile with no rotations (ground)
tile[ix].rot[]={ ix,ix+1,ix,ix+1 }; where ix is a tile with 2 rotations (those 2 tiles with a chunk of chopped tree in the middle right)
tile[ix].rot[]={ ix,ix+1,ix+2,ix+3 }; where ix is a tile with 4 rotations (like the first tile)
The indexes are of course valid only for the first tile of a group; the others have the whole rot[] array rotated by 1 value relative to their neighbor. Some rotations are invisible (see the wide trees) but the tile is still present to allow rotations.
The tile height is important for placing tiles while editing and also for automatic map generation.
I also plan to add an A* map for each tile so I can use pathfinding or compute water flows and more.
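A minimal sketch of that rot[] bookkeeping, assuming a simple TileInfo struct (the struct and function names are illustrative; only the indexing scheme comes from the text above):
#include <array>

// Assumed per-tile metadata: the 4 rotation indices plus the tile height mentioned above.
struct TileInfo
{
    int rot[4];  // tile index to draw for the 0/90/180/270 degree views
    int height;  // used when placing tiles and for automatic map generation
};

// Fill rot[] for the FIRST tile of a rotation group, given how many distinct
// rotations the group has (1, 2 or 4).
void setRotations(TileInfo& t, int ix, int rotationCount)
{
    for (int r = 0; r < 4; ++r)
        t.rot[r] = ix + (r % rotationCount);
    // rotationCount 1 -> ix,ix,ix,ix ; 2 -> ix,ix+1,ix,ix+1 ; 4 -> ix,ix+1,ix+2,ix+3
}
// The other tiles of a 2- or 4-rotation group get the same array rotated by one
// position, as described above (e.g. tile ix+1 starts its rot[] at ix+1).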
Map editor
I prefer 3D maps. With bigger resolutions you need to properly select the viewed area to maximize performance. It is also a good idea to create a hollow underground so the rendering is much faster (this can also be done virtually during the rendering process, without the need to update the map).
I recommend coding these features:
make ground hollow
make ground solid
random terrain (diamond & square; see the sketch after this list)
filter out small holes and smooth edges (add the slope tiles to cubic ones)
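For the random-terrain bullet, here is a compact sketch of the diamond-square midpoint-displacement algorithm; the function name, parameters and the post-processing into discrete tile heights are my own, not part of the engine described here:
#include <vector>
#include <random>

// Diamond-square heightmap on a (2^n + 1) x (2^n + 1) grid.
// 'rough' in (0,1) controls how quickly the noise amplitude decays per level.
std::vector<std::vector<float>> diamondSquare(int n, float rough, unsigned seed = 42)
{
    const int size = (1 << n) + 1;
    std::vector<std::vector<float>> h(size, std::vector<float>(size, 0.0f));
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> d(-1.0f, 1.0f);
    float amp = 1.0f;

    // Seed the four corners.
    h[0][0] = d(rng); h[0][size - 1] = d(rng);
    h[size - 1][0] = d(rng); h[size - 1][size - 1] = d(rng);

    for (int step = size - 1; step > 1; step /= 2, amp *= rough)
    {
        const int half = step / 2;

        // Diamond step: centre of each square = average of its 4 corners + noise.
        for (int y = half; y < size; y += step)
            for (int x = half; x < size; x += step)
                h[y][x] = (h[y - half][x - half] + h[y - half][x + half] +
                           h[y + half][x - half] + h[y + half][x + half]) / 4.0f + d(rng) * amp;

        // Square step: midpoint of each edge = average of its (3 or 4) neighbours + noise.
        for (int y = 0; y < size; y += half)
            for (int x = (y + half) % step; x < size; x += step)
            {
                float sum = 0.0f; int cnt = 0;
                if (y >= half)       { sum += h[y - half][x]; ++cnt; }
                if (y + half < size) { sum += h[y + half][x]; ++cnt; }
                if (x >= half)       { sum += h[y][x - half]; ++cnt; }
                if (x + half < size) { sum += h[y][x + half]; ++cnt; }
                h[y][x] = sum / cnt + d(rng) * amp;
            }
    }
    return h; // quantise afterwards into the discrete tile heights used by the tileset
}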
Tile editor
Apart from the obvious paint editor, you should also add other features like:
floor <-> ceiling
left <-> right
front <-> back
divide large sprite into regular tiles
copy/merge/paste
adjust lighting after a left <-> right mirror operation
They are really handy while compiling/editing tileset resources. As you can see, my tileset has many tiles not present in the source tilesets. They were created by these functions plus some minor paint editing... The colored masks at the bottom of the tileset are used to mask out and properly combine parts of tiles to create the missing ones (you can take one side from one tile and the other from another...).
[Notes]
For more info/ideas have a look at some related Q/As:
Improving performance of click detection on a staggered column isometric grid
How to procedurally generate isometric map
And here is my standalone, no-install Win32 demo:
demo v1.000
demo v1.034
In OpenGL, when you create a texture manually, you can assign one of 4 wrap modes:
GL_REPEAT, GL_CLAMP_TO_EDGE, GL_CLAMP and GL_CLAMP_TO_BORDER
If you want to learn more about the differences between the OpenGL wrap modes, take a look here. Basically, clamping extends the last pixel of an image over the rest of the reserved area.
To solve your problem, try loading the texture with modified parameters. I don't know if SFML allows this through the Texture.hpp header; the reference shows setRepeated, so try setting it to true to see if that solves the problem. Another way is loadFromFile with a size, for example sf::IntRect(0, 0, 32, 32).
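If SFML alone is enough for you, here is a hedged sketch of the two options just mentioned (member names follow the renderWindow class from the question; which option helps depends on how the quads' texture coordinates are set up):
void renderWindow::loadTexture(sf::String folder, int numTexture)
{
    // Either clamp the loaded area with the sf::IntRect overload, e.g.
    //   mMemoryTexture.loadFromFile(path, sf::IntRect(0, 0, 64, 64));
    // or mark the texture as repeated so coordinates outside the image tile
    // instead of stretching the last pixel.
    if (mMemoryTexture.loadFromFile("textures/" + folder + "/" + to_string(numTexture) + ".jpg"))
    {
        mMemoryTexture.setRepeated(true);
        mTextures.push_back(mMemoryTexture);
    }
    else
        cout << "Texture n°" << numTexture << " has failed to load." << endl;
}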
This code is not tested, but theoretically, using OpenGL it will work:
void renderWindow::loadTexture(sf::String folder, int numTexture)
{
    if (mMemoryTexture.loadFromFile("textures/" + folder + "/" + to_string(numTexture) + ".jpg"))
        mTextures.push_back(mMemoryTexture);
    else
        cout << "Texture n°" << numTexture << " has failed to load." << endl;

    // Generate an OpenGL texture manually
    GLuint texture_handle;
    glGenTextures(1, &texture_handle);

    // Attach the texture
    glBindTexture(GL_TEXTURE_2D, texture_handle);

    // Upload to the graphics card (sf::Texture has no direct pixel access,
    // so go through an sf::Image copy)
    sf::Image image = mMemoryTexture.copyToImage();
    glTexImage2D(
        GL_TEXTURE_2D, 0, GL_RGBA,
        image.getSize().x, image.getSize().y,
        0,
        GL_RGBA, GL_UNSIGNED_BYTE, image.getPixelsPtr()
    );

    // Set the parameters
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
}
Maybe this helps you to solve your problem.
I ended up finding a better way to write this code, thanks to members of Stack Overflow. For people who got here while looking for a solution to a similar problem, I invite you to look at the comments for some useful links and remarks.

std::vector memory, vector of unwanted 0's

My code works in my pure GLUT implementation, but I am trying to get it to work in Qt.
I have a vector of masspoints for a wire mesh system:
std::vector<masspoint> m_particles;
The problem is that in my Qt version none of what I write really sticks, and I am left with an array of zeros. Basically I am confused why the GLUT version has correct values but the Qt one does not, given that it is basically identical code. What is wrong with the Qt code?
Yes, I only see zeros when using qDebug. When I call my drawing function in the Qt version, all vertex points turn out to be 0 in all components, so nothing is seen.
int myboog = 1;
int county = 0;
// Constructors
Cloth::Cloth(float width, float height, int particles_in_width, int particles_in_height):
m_width(particles_in_width),
m_height(particles_in_height),
m_dimensionWidth(width),
m_dimensionHeight(height),
m_distanceX(width/(float)particles_in_width),
m_distanceY(height/(float)particles_in_height)
{
//Set the particle array to the given size
//Height by width
//mparticles is the name of our vector
m_particles.resize(m_width*m_height);
qDebug() << m_particles.size();
// Create the point masses to simulate the cloth
for (int x = 0; x < m_width; ++x)
{
for (int y=0; y < m_height; ++y)
{
// Place the pointmass of the cloth, lift the edges to give the wind more effect as the cloth falls
Vector3f position = Vector3f(m_dimensionWidth * (x / (float)m_width),
((x==0)||(x==m_width-1)||(y==0)||(y==m_height-1)) ? m_distanceY/2.0f:0,
m_dimensionHeight * (y / (float)m_height));
// The gravity effect is applied to new pmasspoints
m_particles[y * m_width + x] = masspoint(position,Vector3f(0,-0.06,0));
}
}
int num = (int)m_particles.size();
for (int i=0; i<num; ++i)
{
masspoint* p = &m_particles[i];
if(myboog)
{
qDebug() << "test " << *p->getPosition().getXLocation() << county;
county++;
}
}
myboog = 0;
// Calculate the normals for the first time so the initial draw is correctly lit
calculateClothNormals();
}
Code for masspoint involved in the constructor for Cloth
#ifndef MASSPOINT_H
#define MASSPOINT_H
#include <QGLWidget>
#include "vector3f.h"
class masspoint
{
private:
Vector3f m_position; // Current Location of the pointmass
Vector3f m_velocity; // Direction and speed the pointmass is traveling in
Vector3f m_acceleration; // Speed at which the pointmass is accelerating (used for gravity)
Vector3f m_forceAccumulated; // Force that has been accumulated since the last update
Vector3f m_normal; // Normal of this pointmass, used to light the cloth when drawing
float m_damping; // Amount of velocity lost per update
bool m_stationary; // Whether this pointmass is currently capible of movement
public:
masspoint& operator= (const masspoint& particle);
//Some constructors
masspoint();
masspoint(const masspoint& particle);
masspoint(Vector3f position, Vector3f acceleration);
//Like Euler integration
void integrate(float duration);
// Accessor functions
//Get the position of the point mass
inline Vector3f getPosition() const {return m_position;}
Vector stuff involved in the constructor for Cloth
#ifndef VECTOR3F_H
#define VECTOR3F_H
#include <math.h>
// Vector library to be used
class Vector3f
{
private:
float m_x, m_y, m_z;
public:
const float* getXLocation() const { return &m_x; }

Zooming Mandelbrot set a second time does not zoom in the desired place

I am using OpenGL/C++ to draw the Mandelbrot set and I am trying to zoom into it. I am able to zoom the first time and it zooms where I want (by clicking), but when I try to zoom the next time it does not zoom where I intended; instead it shifts and zooms a little bit away from the place I want to zoom.
I use
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>
double dividecubesby = 700;
double left = -2.0;
double right = 2.0;
double bottom = -2.0;
double top = 2.0;
int maxiteration = 128;
int zoomlevel = 3;
double baseSize = 4.0;
double Size = 0.0;
double xco=0.0;
double yco=0.0;
void SetXYpos(int px,int py)
{
xco = left+(right-left)*px/dividecubesby;
yco = top-(top-bottom)*py/dividecubesby;
}
void keyPressed(unsigned char key, int x, int y)
{
int xx= x;
int yy= y;
SetXYpos(xx,yy); // must match the capitalisation of the declaration above
Size = 0.5*(pow(2.0, (-zoomlevel)));
switch(key){
case 'z':
left = xco - Size;
right = xco + Size;
bottom = yco - Size;
top = yco + Size;
dividecubesby = dividecubesby+100;
maxiteration = maxiteration+100;
zoomlevel=zoomlevel+1;
glutPostRedisplay();
break;
}
}
int mandtest(double Cr, double Ci)
{
double Zr = 0.0;
double Zi = 0.0;
int times = 0;
double temp;
Zr = Zr+Cr;
Zi = Zi+Ci;
while ((((Zr*Zr)+(Zi*Zi))<=4) && (times < maxiteration)){
temp = (Zr*Zr)-(Zi*Zi);
Zi = 2*Zr*Zi;
Zr = temp+Cr;
Zi = Zi+Ci;
times = times+1;
}
return times;
}
void display(void)
{
glClear(GL_COLOR_BUFFER_BIT);
glColor3f(1.0f,1.0f,1.0f);
double deltax = ((right - left)/(dividecubesby-1));
double deltay = ((top- bottom)/(dividecubesby-1));
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(left,right,bottom,top);
glBegin(GL_POINTS);
for(double x= left;x<=right;x += deltax ){
for(double y= bottom; y<=top;y += deltay ){
if((mandtest(x,y))==maxiteration){
glColor3f(1.0f,1.0f,1.0f);
glVertex2f(x,y);
}
else {
glColor3f((float)mandtest(x,y)/10,0.0f,(float)mandtest(x,y)/30);
glVertex2f(x,y);
}
}
}
glEnd();
glFlush();
}
SetXYpos calculates where the mouse was clicked in terms of the Cartesian coordinates [-2,2]; px and py are the pixel coordinates.
You have too many variables. What defines the width of your image? (right - left)? baseSize + f(zoomLevel)? SizeReal? It's not clear whose job it is to set whom and who is used by whom, so you cannot hope to update everything consistently.
Also, why does dividecubesby increase by a flat 100 while the image size halves with every zoom? Where is the width/height of your window-system window, which defines the limits of the clicked coordinates?
My suggestion is to start from scratch and maybe draw a graph of who updates whom (left/right -> imageWidth). Make sure that you get the correct clicked coordinates independent of what your drawing window (left/right/top/bottom) is, and go on from there. As it is, I think your first zoom works correctly by accident.
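As a hedged sketch of that "one source of truth" idea: keep only a centre and a half-size, derive left/right/top/bottom from them, and map clicks using the current window size. All names here are illustrative, not taken from the original code:
// View state: the fractal region is [centerX - halfSize, centerX + halfSize] etc.
double centerX = 0.0, centerY = 0.0;
double halfSize = 2.0;
int windowW = 700, windowH = 700;   // whatever the GLUT window was created with

void zoomAt(int px, int py)
{
    // Click position in fractal coordinates, computed from the *current* view.
    double cx = (centerX - halfSize) + 2.0 * halfSize * px / windowW;
    double cy = (centerY + halfSize) - 2.0 * halfSize * py / windowH;
    centerX = cx;
    centerY = cy;
    halfSize *= 0.5;                // zoom in by a factor of two
}
// When drawing: left = centerX - halfSize, right = centerX + halfSize, and so on,
// so every zoom is consistent with the coordinates used for the next click.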

OpenGL draw circle, weird bugs

I'm no mathematician, but I need to draw a filled-in circle.
My approach was to use someone else's math to get all the points on the circumference of a circle and turn them into a triangle fan.
I need the vertices in a vertex array, no immediate mode.
The circle does appear. However, when I try to overlay circles strange things happen: they appear for only a second and then disappear. When I move my mouse out of the window a triangle sticks out from nowhere.
Here's the class:
class circle
{
//every coordinate with have an X and Y
private:
GLfloat *_vertices;
static const float DEG2RAD = 3.14159/180;
GLfloat _scalex, _scaley, _scalez;
int _cachearraysize;
public:
circle(float scalex, float scaley, float scalez, float radius, int numdegrees)
{
//360 degrees, 2 per coordinate, 2 coordinates for center and end of triangle fan
_cachearraysize = (numdegrees * 2) + 4;
_vertices = new GLfloat[_cachearraysize];
for(int x= 2; x < (_cachearraysize-2); x = x + 2)
{
float degreeinRadians = x*DEG2RAD;
_vertices[x] = cos(degreeinRadians)*radius;
_vertices[x + 1] = sin(degreeinRadians)*radius;
}
//get the X as X of 0 and X of 180 degrees, subtract to get diameter. divide
//by 2 for radius and add back to X of 180
_vertices[0]= ((_vertices[2] - _vertices[362])/2) + _vertices[362];
//same idea for Y
_vertices[1]= ((_vertices[183] - _vertices[543])/2) + _vertices[543];
//close off the triangle fan at the same point as start
_vertices[_cachearraysize -1] = _vertices[0];
_vertices[_cachearraysize] = _vertices[1];
_scalex = scalex;
_scaley = scaley;
_scalez = scalez;
}
~circle()
{
delete[] _vertices;
}
void draw()
{
glScalef(_scalex, _scaley, _scalez);
glVertexPointer(2,GL_FLOAT, 0, _vertices);
glDrawArrays(GL_TRIANGLE_FAN, 0, _cachearraysize);
}
};
That's some ugly code, I'd say - lots of magic numbers et cetera.
Try something like:
#include <vector>
#include <cmath>

struct Point {
    Point(float x, float y) : x(x), y(y) {}
    float x, y;
};
std::vector<Point> points;
const float step = 0.1f;
const float radius = 2.0f;
points.push_back(Point(0,0));
// iterate over the angles
for (float a = 0; a < 2*M_PI; a += step) {
    points.push_back(Point(cos(a)*radius, sin(a)*radius));
}
// duplicate the first vertex after the centre
points.push_back(points.at(1));
// rendering:
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2,GL_FLOAT,0, &points[0]);
glDrawArrays(GL_TRIANGLE_FAN,0,points.size());
It's up to you to rewrite this as a class, as you prefer. The math behind it is really simple; don't be afraid to try and understand it.
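For example, a minimal sketch of such a class wrapper (it reuses the Point struct and the OpenGL client-state setup from the snippet above; the class name and 2*pi constant are my own):
class FilledCircle
{
public:
    FilledCircle(float radius, float step = 0.1f)
    {
        const float twoPi = 6.2831853f;
        mPoints.push_back(Point(0.0f, 0.0f));                  // fan centre
        for (float a = 0.0f; a < twoPi; a += step)
            mPoints.push_back(Point(cos(a) * radius, sin(a) * radius));
        mPoints.push_back(mPoints.at(1));                      // close the fan
    }
    void draw() const
    {
        // Vertex data is built once in the constructor; drawing only points at it.
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, &mPoints[0]);
        glDrawArrays(GL_TRIANGLE_FAN, 0, (int)mPoints.size());
    }
private:
    std::vector<Point> mPoints;
};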

How to fix weird camera rotation while moving the camera with SDL and OpenGL in C++

I have a camera object that I have put together from reading on the net. It handles moving forward and backward, strafing left and right, and even looking around with the mouse. But when I move in any direction while also trying to look around, it jumps all over the place; when I don't move and just look around, it's fine.
I'm hoping someone can help me work out why I can't move and look around at the same time.
main.h
#include "SDL/SDL.h"
#include "SDL/SDL_opengl.h"
#include <cmath>
#define CAMERASPEED 0.03f // The Camera Speed
struct tVector3 // Extended 3D Vector Struct
{
tVector3() {} // Struct Constructor
tVector3 (float new_x, float new_y, float new_z) // Init Constructor
{ x = new_x; y = new_y; z = new_z; }
// overload + operator
tVector3 operator+(tVector3 vVector) {return tVector3(vVector.x+x, vVector.y+y, vVector.z+z);}
// overload - operator
tVector3 operator-(tVector3 vVector) {return tVector3(x-vVector.x, y-vVector.y, z-vVector.z);}
// overload * operator
tVector3 operator*(float number) {return tVector3(x*number, y*number, z*number);}
// overload / operator
tVector3 operator/(float number) {return tVector3(x/number, y/number, z/number);}
float x, y, z; // 3D vector coordinates
};
class CCamera
{
public:
tVector3 mPos;
tVector3 mView;
tVector3 mUp;
void Strafe_Camera(float speed);
void Move_Camera(float speed);
void Rotate_View(float speed);
void Position_Camera(float pos_x, float pos_y,float pos_z,
float view_x, float view_y, float view_z,
float up_x, float up_y, float up_z);
};
void Draw_Grid();
camera.cpp
#include "main.h"
void CCamera::Position_Camera(float pos_x, float pos_y, float pos_z,
float view_x, float view_y, float view_z,
float up_x, float up_y, float up_z)
{
mPos = tVector3(pos_x, pos_y, pos_z);
mView = tVector3(view_x, view_y, view_z);
mUp = tVector3(up_x, up_y, up_z);
}
void CCamera::Move_Camera(float speed)
{
tVector3 vVector = mView - mPos;
mPos.x = mPos.x + vVector.x * speed;
mPos.z = mPos.z + vVector.z * speed;
mView.x = mView.x + vVector.x * speed;
mView.z = mView.z + vVector.z * speed;
}
void CCamera::Strafe_Camera(float speed)
{
tVector3 vVector = mView - mPos;
tVector3 vOrthoVector;
vOrthoVector.x = -vVector.z;
vOrthoVector.z = vVector.x;
mPos.x = mPos.x + vOrthoVector.x * speed;
mPos.z = mPos.z + vOrthoVector.z * speed;
mView.x = mView.x + vOrthoVector.x * speed;
mView.z = mView.z + vOrthoVector.z * speed;
}
void CCamera::Rotate_View(float speed)
{
tVector3 vVector = mView - mPos;
tVector3 vOrthoVector;
vOrthoVector.x = -vVector.z;
vOrthoVector.z = vVector.x;
mView.z = (float)(mPos.z + sin(speed)*vVector.x + cos(speed)*vVector.z);
mView.x = (float)(mPos.x + cos(speed)*vVector.x - sin(speed)*vVector.z);
}
and the mousemotion code
void processEvents()
{
int mid_x = screen_width >> 1;
int mid_y = screen_height >> 1;
int mpx = event.motion.x;
int mpy = event.motion.y;
float angle_y = 0.0f;
float angle_z = 0.0f;
while(SDL_PollEvent(&event))
{
switch(event.type)
{
case SDL_MOUSEMOTION:
if( (mpx == mid_x) && (mpy == mid_y) ) return;
// Get the direction from the mouse cursor, set a reasonable maneuvering speed
angle_y = (float)( (mid_x - mpx) ) / 1000; //1000
angle_z = (float)( (mid_y - mpy) ) / 1000; //1000
// The higher the value is the faster the camera looks around.
objCamera.mView.y += angle_z * 2;
// limit the rotation around the x-axis
if((objCamera.mView.y - objCamera.mPos.y) > 8) objCamera.mView.y = objCamera.mPos.y + 8;
if((objCamera.mView.y - objCamera.mPos.y) <-8) objCamera.mView.y = objCamera.mPos.y - 8;
objCamera.Rotate_View(-angle_y);
SDL_WarpMouse(mid_x, mid_y);
break;
case SDL_KEYUP:
objKeyb.handleKeyboardEvent(event,true);
break;
case SDL_KEYDOWN:
objKeyb.handleKeyboardEvent(event,false);
break;
case SDL_QUIT:
quit = true;
break;
case SDL_VIDEORESIZE:
screen = SDL_SetVideoMode( event.resize.w, event.resize.h, screen_bpp, SDL_OPENGL | SDL_HWSURFACE | SDL_RESIZABLE | SDL_GL_DOUBLEBUFFER | SDL_HWPALETTE );
screen_width = event.resize.w;
screen_height = event.resize.h;
init_opengl();
std::cout << "Resized to width: " << event.resize.w << " height: " << event.resize.h << std::endl;
break;
default:
break;
}
}
}
I'm not entirely sure what you are doing above.
Personally I would just use a simple 4x4 matrix. Any implementation will do. To rotate, you simply need to rotate using the change in mouse x and y as Euler inputs for rotation around the y and x axes. There is lots of code available all over the internet that will do this for you.
Some of those matrix libraries won't provide you with a "MoveForward()" function. If this is the case it's OK, moving forward is pretty easy. The third column (or row if you are using row-major matrices) is your forward vector. Extract it. Normalise it (it really should be normalised anyway, so this step may not be needed). Multiply it by how much you wish to move forward and then add it to the position (the 4th column/row).
Now here is the odd part. A view matrix is a special type of matrix. The matrix above defines the view space. If you multiply your current model matrix by this matrix you will not get the answer you expect, because you wish to transform it such that the camera is at the origin. As such you need to, effectively, undo the camera transformation to re-orient things to the view defined above. To do this you multiply your model matrix by the inverse of the camera matrix (that inverse is the view matrix).
You now have an object defined in the correct view space.
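To make the move-forward part concrete, here is a minimal sketch on a hypothetical row-major 4x4 struct; this is not the MathsLib API used below, just an illustration of extracting the forward row and adding it to the position row:
#include <cmath>

// Illustrative layout only: rows 0-2 = right/up/forward basis vectors, row 3 = position.
struct Mat4 { float m[4][4]; };

void moveForward(Mat4& camera, float step)
{
    // Row 2 is the forward (view) direction in this row-major convention.
    float fx = camera.m[2][0], fy = camera.m[2][1], fz = camera.m[2][2];
    float len = std::sqrt(fx * fx + fy * fy + fz * fz);
    fx /= len; fy /= len; fz /= len;          // normalise, in case of drift

    // Row 3 holds the position: translate it along the forward vector.
    camera.m[3][0] += fx * step;
    camera.m[3][1] += fy * step;
    camera.m[3][2] += fz * step;
}
// The matrix handed to the renderer is then the inverse of this camera matrix,
// which is exactly what BaseCamera::GetViewMatrix() below computes.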
This is my very simple camera class. It does not handle the functionality you describe, but hopefully it will give you a few ideas on how to set up the class (be warned, I use row-major, i.e. DirectX-style, matrices).
BaseCamera.h:
#ifndef BASE_CAMERA_H_
#define BASE_CAMERA_H_
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#include "Maths/Vector4.h"
#include "Maths/Matrix4x4.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
class BaseCamera
{
protected:
bool mDirty;
MathsLib::Matrix4x4 mCameraMat;
MathsLib::Matrix4x4 mViewMat;
public:
BaseCamera();
BaseCamera( const BaseCamera& camera );
BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt );
BaseCamera( const MathsLib::Matrix4x4& matCamera );
bool IsDirty() const;
void SetDirty();
MathsLib::Matrix4x4& GetOrientationMatrix();
const MathsLib::Matrix4x4& GetOrientationMatrix() const;
MathsLib::Matrix4x4& GetViewMatrix();
};
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix()
{
return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline const MathsLib::Matrix4x4& BaseCamera::GetOrientationMatrix() const
{
return mCameraMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline bool BaseCamera::IsDirty() const
{
return mDirty;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
inline void BaseCamera::SetDirty()
{
mDirty = true;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
#endif
BaseCamera.cpp:
#include "Render/stdafx.h"
#include "BaseCamera.h"
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera() :
mDirty( true )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const BaseCamera& camera ) :
mDirty( camera.mDirty ),
mCameraMat( camera.mCameraMat ),
mViewMat( camera.mViewMat )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Vector4& vPos, const MathsLib::Vector4& vLookAt ) :
mDirty( true )
{
MathsLib::Vector4 vDir = (vLookAt - vPos).Normalise();
MathsLib::Vector4 vLat = MathsLib::CrossProduct( MathsLib::Vector4( 0.0f, 1.0f, 0.0f ), vDir ).Normalise();
MathsLib::Vector4 vUp = MathsLib::CrossProduct( vDir, vLat );//.Normalise();
mCameraMat.Set( vLat, vUp, vDir, vPos );
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
BaseCamera::BaseCamera( const MathsLib::Matrix4x4& matCamera ) :
mDirty( true ),
mCameraMat( matCamera )
{
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
MathsLib::Matrix4x4& BaseCamera::GetViewMatrix()
{
if ( IsDirty() )
{
mViewMat = mCameraMat.Inverse();
mDirty = false;
}
return mViewMat;
}
/*+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+----+*/
I agree with Goz. You need to use homogeneous 4x4 matrices if you want to represent affine transformations such as rotate + translate.
Assuming a row-major representation, and no scaling or shearing, your 4x4 matrix represents the following:
Rows 0 to 2: the three basis vectors of your local co-ordinate system (i.e. x, y, z)
Row 3: the current translation from the origin
So to move along your local x vector, as Goz says: because you can assume it's a unit vector if there is no scale/shear, you just multiply it by the move step (+ve or -ve) and then add the resultant vector onto row 3 of the matrix.
So taking a simple example of starting at the origin with your local frame equal to the world frame, your matrix would look something like this:
1 0 0 0 <--- x unit vector
0 1 0 0 <--- y unit vector
0 0 1 0 <--- z unit vector
0 0 0 1 <--- translation vector
In terms of the way most game cameras work, the axes map like this:
x axis <=> Camera Pan Left/Right
y axis <=> Camera Pan Up/Down
z axis <=> Camera Zoom In/Out
So if I rotate my entire frame of reference to look at a new point LookAt, then, as Goz does in his BaseCamera overloaded constructor code, you construct a new local co-ordinate system and set it into your matrix (all mCameraMat.Set( vLat, vUp, vDir, vPos ) typically does is set those four rows of the matrix, i.e. vLat would be row 0, vUp row 1, vDir row 2 and vPos row 3).
Then zooming in/out just becomes row 3 = row 3 + row 2 * stepval.
Again, as Goz rightly points out, you then need to transform this back, and that is done by multiplying by the inverse of the camera matrix (i.e. the view matrix).
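A minimal sketch of those row operations, again on a hypothetical row-major struct rather than any particular maths library:
// Illustrative only: rows 0-2 hold the x/y/z basis vectors, row 3 the translation,
// matching the layout shown above.
struct RowMajorMat4 { float r[4][4]; };

// axisRow = 0: pan left/right, 1: pan up/down, 2: zoom in/out.
void translateAlongAxis(RowMajorMat4& camera, int axisRow, float step)
{
    for (int c = 0; c < 3; ++c)
        camera.r[3][c] += camera.r[axisRow][c] * step;
}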