I've got an interesting problem. I'm using matrix multiplication to rotate and scale my images for my game. It works great when I scale the image down by half or more, but if the image stays at its original size, holes start to appear. I've attached some images of the problem below. My drawing code is below as well.
Before rotation
After rotation
Drawing code
void Graphics::drawImage(Graphics::Image i, float x, float y, float rot, float xScale, float yScale)
{
    unsigned char r = 0, g = 0, b = 0;
    Vector<float> pos = Vector<float>::create(0, 0);
    i.setRotation(rot);
    i.setXScale(xScale);
    i.setYScale(yScale);
    for (int j = 0; j < i.getWidth() * i.getHeight(); j++)
    {
        i.getPixel((j % i.getWidth()), (j / i.getWidth()), r, g, b);
        SDL_SetRenderDrawColor(renderer, r, g, b, 255);
        pos.elements[0] = (j % i.getWidth());
        pos.elements[1] = (j / i.getWidth());
        Vector<float> transPos = pos - Vector<float>::create(i.getCenterX(), i.getCenterY());
        Matrix<float> trans = math::scale<float>(i.getXScale(), i.getYScale()) * math::rot<float>((double)i.getRot());
        transPos = math::mult<float, float>(trans, transPos);
        SDL_RenderDrawPoint(renderer, (int)x + transPos.elements[0] + (i.getCenterX() * i.getXScale()), (int)y + transPos.elements[1] + (i.getCenterY() * i.getYScale()));
    }
}
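The holes are the classic symptom of forward mapping: each source pixel lands on at most one destination pixel, so once the image is no longer being shrunk, some destination pixels are never written. The usual fix is to loop over destination pixels and map each one back into source space instead. Below is a minimal sketch of that idea, reusing the hypothetical Graphics/Image API from the code above (assumes <cmath>, and that math::rot follows the standard counter-clockwise convention so the inverse rotation is rotation by the negative angle; the destination bounds here are just the scaled box, so a rotated image may clip at the corners):

void Graphics::drawImageInverse(Graphics::Image i, float x, float y, float rot, float xScale, float yScale)
{
    unsigned char r = 0, g = 0, b = 0;
    int dstW = (int)std::ceil(i.getWidth() * xScale);
    int dstH = (int)std::ceil(i.getHeight() * yScale);
    for (int dy = 0; dy < dstH; dy++)
    {
        for (int dx = 0; dx < dstW; dx++)
        {
            // Undo the forward transform p' = S*R*(p - c) + S*c:
            // recentre and unscale first, then rotate back by -rot.
            float cx = (dx - i.getCenterX() * xScale) / xScale;
            float cy = (dy - i.getCenterY() * yScale) / yScale;
            float sx = cx * std::cos(-rot) - cy * std::sin(-rot);
            float sy = cx * std::sin(-rot) + cy * std::cos(-rot);
            // Round to the nearest source pixel
            int srcX = (int)(sx + i.getCenterX() + 0.5f);
            int srcY = (int)(sy + i.getCenterY() + 0.5f);
            // Skip destination pixels that fall outside the source image
            if (srcX < 0 || srcX >= i.getWidth() || srcY < 0 || srcY >= i.getHeight())
                continue;
            i.getPixel(srcX, srcY, r, g, b);
            SDL_SetRenderDrawColor(renderer, r, g, b, 255);
            SDL_RenderDrawPoint(renderer, (int)x + dx, (int)y + dy);
        }
    }
}

Because every destination pixel gets exactly one lookup, no holes can appear regardless of scale.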
I have programmed a simple dragon curve fractal. It seems to work for the most part, but there is an odd logical error that shifts the rotation of certain lines by one pixel. This wouldn't normally be an issue, but after a few generations, at the right size, the fractal begins to look wonky.
I am using OpenCV in C++ to generate it, but I'm pretty sure it's a logical error rather than a display error. I have printed the values to the console multiple times and seen for myself that there is an off-by-one difference between values that are intended to be exactly the same - meaning a line may have a y of 200 at one end and 201 at the other.
Here is the full code:
#include <iostream>
#include <cmath>
#include <opencv2/opencv.hpp>

const int width = 500;
const int height = 500;
const double PI = std::atan(1) * 4.0;

struct point {
    double x;
    double y;
    point(double x_, double y_) {
        x = x_;
        y = y_;
    }
};

cv::Mat img(width, height, CV_8UC3, cv::Scalar(255, 255, 255));

double deg_to_rad(double degrees) { return degrees * PI / 180; }

point rotate(int degree, int centx, int centy, int ll) {
    double radians = deg_to_rad(degree);
    return point(centx + (ll * std::cos(radians)), centy + (ll * std::sin(radians)));
}

void generate(point& r, std::vector<point>& verticies, int rotation = 90) {
    int curRotation = 90;
    bool start = true;
    point center = r;
    point rot(0, 0);
    std::vector<point> verticiesc(verticies);
    for (point i : verticiesc) {
        double dx = center.x - i.x;
        double dy = center.y - i.y;
        // distance from centre
        int ll = std::sqrt(dx * dx + dy * dy);
        // angle from centre
        curRotation = std::atan2(dy, dx) * 180 / PI;
        // add 90 degrees of rotation
        rot = rotate(curRotation + rotation, center.x, center.y, ll);
        verticies.push_back(rot);
        // endpoint, where the next centre will be
        if (start) {
            r = rot;
            start = false;
        }
    }
}

void gen(int gens, int bwidth = 1) {
    int ll = 7;
    std::vector<point> verticies = {
        point(width / 2, height / 2 - ll),
        point(width / 2, height / 2)
    };
    point rot(width / 2, height / 2);
    for (int i = 0; i < gens; i++) {
        generate(rot, verticies);
    }
    // draw lines
    for (int i = 0; i < verticies.size(); i += 2) {
        cv::line(img, cv::Point(verticies[i].x, verticies[i].y), cv::Point(verticies[i + 1].x, verticies[i + 1].y), cv::Scalar(0, 0, 0), 1, 8);
    }
}

int main() {
    gen(10);
    cv::imshow("", img);
    cv::waitKey(0);
    return 0;
}
First, you use int to store point coordinates - that's a bad idea, since you lose all accuracy of the point position. Use double or float.
Second, your method for drawing fractals is not numerically stable. You'd do better to store the original shape plus the rotation/translation/scale transforms that indicate where and how to draw scaled copies of it.
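As a minimal sketch of that idea, using the point struct from the question (the transform2d name and fields are illustrative, not from the posted code): keep an immutable base shape plus a transform, and regenerate the drawn points from the base every time, so rounding error never accumulates across generations.

struct transform2d {
    double angle;   // rotation in radians
    double tx, ty;  // translation
    double scale;   // uniform scale
};

// Build the drawn copy fresh from the untouched base shape each time;
// since the base is never overwritten, error cannot accumulate.
std::vector<point> apply(const std::vector<point>& base, const transform2d& t) {
    std::vector<point> out;
    out.reserve(base.size());
    for (const point& p : base) {
        double x = p.x * std::cos(t.angle) - p.y * std::sin(t.angle);
        double y = p.x * std::sin(t.angle) + p.y * std::cos(t.angle);
        out.push_back(point(x * t.scale + t.tx, y * t.scale + t.ty));
    }
    return out;
}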
Also, I believe this is a bug:
for (point i : verticies)
{
    ...
    verticies.push_back(rot);
    ...
}
Changing size of vertices while inside such a for-loop might cause a crash or UB.
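For reference, a safe pattern is to iterate over a snapshot copy (as the posted code in fact does with verticiesc), or to capture the size up front and loop by index, since push_back can invalidate iterators but indices stay valid:

// Capture the size first so newly appended points are not revisited
// and the range-for iterator-invalidation problem cannot arise.
const std::size_t n = verticies.size();
for (std::size_t i = 0; i < n; ++i) {
    // ... compute rot from verticies[i] as before ...
    verticies.push_back(rot);
}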
Turns out it was to do with floating-point precision. I changed
x=x_;
y=y_;
to
x=std::round(x_);
y=std::round(y_);
and it works.
I am a beginner in C++ and have coded a for loop that shows a hollow circle when I run the code. However, I was wondering how I could achieve a filled-in circle using the distance formula (d = sqrt((ax-bx)^2 + (ay-by)^2)). Here's what I have so far! Any help would be appreciated!
int MAX = 728;
for (float t = 0; t < 2 * 3.14; t += 0.01)
    SetPixel(MAX / 4 + MAX / 6 * sin(t), MAX / 4 + MAX / 6 * cos(t), 255, 255, 0);
#include <windows.h>
#include <cmath> // for sin and cos
#include <iostream>
using namespace std;

int main()
{
    HWND consoleWindow = GetConsoleWindow(); // Get a console handle
    HDC consoleDC = GetDC(consoleWindow);    // Get a handle to device context
    int max = 628;
    float i = 0;
    float t;
    float doublePi = 6.29;
    for (i = 0.0; i < max; i += 2.0) {
        for (t = 0.0; t < doublePi; t += 0.01) {
            SetPixel(consoleDC, max / 4 + (max - i) / 6 * sin(t), max / 4 + (max - i) / 6 * cos(t), RGB(255, 255, 0));
        }
    }
    ReleaseDC(consoleWindow, consoleDC);
    cin.ignore();
    return 0;
}
Working almost well - it draws and fills in! A little slow, though...
Pffff... do not use sin and cos! Instead, use the sqrt(1-x^2) approach. You can see the curve this formula renders by searching it on Google, for example: https://www.google.com/search?q=sqrt(1-x^2)
I'm editing this answer because it seems it was not clear:
float radius = 50.0f;
// Note: this draws around the origin (0, 0); add your own centre offsets
for (int x = -radius; x <= radius; ++x) {
    int d = round(sqrt(1.0f - (x * x / radius / radius)) * radius);
    for (int y = -d; y <= d; ++y) {
        SetPixel(x, y, 255, 255, 0);
    }
}
Note: each graphic library is different, so I assumed that you used rightfully the "SetPixel" function.
Now, most people say the sqrt(1-x^2) approach should be enough, but it seems some downvoters do not think the same XD.
Inefficient as can be, and probably the last way you really want to draw a circle ... but ...
Over the entire square encompassing your circle, calculate each pixel's distance from the center and set if under or equal the radius.
// Draw a circle centered at (XCenter, YCenter) with given radius using the distance formula
void drawCircle(HDC dc, int XCenter, int YCenter, int radius, COLORREF c) {
    double fRad = radius * 1.0; // Just a shortcut to avoid thrashing data types
    for (int x = XCenter - radius; x < XCenter + radius; x++) {
        for (int y = YCenter - radius; y < YCenter + radius; y++) {
            double d = sqrt(((x - XCenter) * (x - XCenter)) + ((y - YCenter) * (y - YCenter)));
            if (d <= fRad) SetPixel(dc, x, y, c);
        }
    }
}
Caveat: No more caveat, used a C++ environment and tested it this time. :-)
Call thusly:
int main()
{
    HWND consoleWindow = GetConsoleWindow();
    HDC consoleDC = GetDC(consoleWindow);
    drawCircle(consoleDC, 50, 50, 20, RGB(255, 0, 255));
    ReleaseDC(consoleWindow, consoleDC);
    return 0;
}
Issue
I'm trying to implement the Perlin Noise algorithm in 2D with a single octave at a size of 16x16. I'm using this as heightmap data for a terrain; however, it only seems to work along one axis. Whenever the sample point moves to a new Y section in the Perlin Noise grid, the gradient is very different from what I expect (for example, it often flips from 0.98 to -0.97, a very sudden change).
This image shows the staggered terrain in the z direction (which is the y axis in the 2D Perlin Noise grid)
Code
I've put the code that calculates which sample point to use at the end, since it's quite long and I believe it's not where the issue is. Essentially, I scale down the terrain to match the Perlin Noise grid (16x16) and then sample through all the points.
Gradient At Point
So the code that calculates the gradient at a sample point is the following:
// Find the gradient at a certain sample point
float PerlinNoise::gradientAt(Vector2 point)
{
    // Decimal part of float
    float relativeX = point.x - (int)point.x;
    float relativeY = point.y - (int)point.y;
    Vector2 relativePoint = Vector2(relativeX, relativeY);

    vector<float> weights(4);
    // Find the weights of the 4 surrounding points
    weights = surroundingWeights(point);

    float fadeX = fadeFunction(relativePoint.x);
    float fadeY = fadeFunction(relativePoint.y);

    float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
    float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
    float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
    return lerpC;
}
Surrounding Weights of Point
I believe the issue is somewhere here, in the function that calculates the weights for the 4 surrounding points of a sample point, but I can't seem to figure out what is wrong, since all the values look sensible when stepping through the function.
// Find the surrounding weights of a point
vector<float> PerlinNoise::surroundingWeights(Vector2 point) {
    // Produces correct values
    vector<Vector2> surroundingPoints = surroundingPointsOf(point);
    vector<float> weights;
    for (unsigned i = 0; i < surroundingPoints.size(); ++i) {
        // The corner to the sample point
        Vector2 cornerToPoint = surroundingPoints[i].toVector(point);
        // Getting the seeded vector from the grid
        float x = surroundingPoints[i].x;
        float y = surroundingPoints[i].y;
        Vector2 seededVector = baseGrid[x][y];
        // Dot product between the seededVector and the corner-to-sample-point vector
        float dotProduct = cornerToPoint.dot(seededVector);
        weights.push_back(dotProduct);
    }
    return weights;
}
OpenGL Setup and Sample Point
Setting up the heightmap and getting the sample point. Variables 'wrongA' and 'wrongB' are an example of when the gradient flips and changes suddenly.
void HeightMap::GenerateRandomTerrain() {
    int perlinGridSize = 16;
    PerlinNoise perlin_noise = PerlinNoise(perlinGridSize, perlinGridSize);

    numVertices = RAW_WIDTH * RAW_HEIGHT;
    numIndices = (RAW_WIDTH - 1) * (RAW_HEIGHT - 1) * 6;
    vertices = new Vector3[numVertices];
    textureCoords = new Vector2[numVertices];
    indices = new GLuint[numIndices];

    float perlinScale = RAW_HEIGHT / (float)(perlinGridSize - 1);
    float height = 50;

    float wrongA = perlin_noise.gradientAt(Vector2(0, 68.0f / perlinScale));
    float wrongB = perlin_noise.gradientAt(Vector2(0, 69.0f / perlinScale));

    for (int x = 0; x < RAW_WIDTH; ++x) {
        for (int z = 0; z < RAW_HEIGHT; ++z) {
            int offset = (x * RAW_WIDTH) + z;
            float xVal = (float)x / perlinScale;
            float yVal = (float)z / perlinScale;
            float noise = perlin_noise.gradientAt(Vector2(xVal, yVal));
            vertices[offset] = Vector3(x * HEIGHTMAP_X, noise * height, z * HEIGHTMAP_Z);
            textureCoords[offset] = Vector2(x * HEIGHTMAP_TEX_X, z * HEIGHTMAP_TEX_Z);
        }
    }

    numIndices = 0;
    for (int x = 0; x < RAW_WIDTH - 1; ++x) {
        for (int z = 0; z < RAW_HEIGHT - 1; ++z) {
            int a = (x * RAW_WIDTH) + z;
            int b = ((x + 1) * RAW_WIDTH) + z;
            int c = ((x + 1) * RAW_WIDTH) + (z + 1);
            int d = (x * RAW_WIDTH) + (z + 1);
            indices[numIndices++] = c;
            indices[numIndices++] = b;
            indices[numIndices++] = a;
            indices[numIndices++] = a;
            indices[numIndices++] = d;
            indices[numIndices++] = c;
        }
    }
    BufferData();
}
Turned out the issue was in the interpolation stage:
float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
I had the interpolation in the y axis the wrong way around, so it should have been:
lerp(lerpB, lerpA, fadeY)
Instead of:
lerp(lerpA, lerpB, fadeY)
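For reference, the corrected tail of gradientAt with the swap applied (same names as the posted function):

float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
// Y blend order swapped relative to the original, matching the order
// the weights come back in from surroundingWeights
float lerpC = MathUtils::lerp(lerpB, lerpA, fadeY);
return lerpC;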
I am working on an implementation of the Separating Axis Theorem for use in 2D games. It kind of works, but just kind of.
I use it like this:
bool penetration = sat(c1, c2) && sat(c2, c1);
Where c1 and c2 are of type Convex, defined as:
class Convex
{
public:
    float tx, ty;
public:
    std::vector<Point> p;

    void translate(float x, float y) {
        tx = x;
        ty = y;
    }
};
(Point is a structure holding float x, float y.)
The points are entered in clockwise order.
My current code (ignore Qt debug):
bool sat(Convex c1, Convex c2, QPainter *debug)
{
    // Debug
    QColor col[] = {QColor(255, 0, 0), QColor(0, 255, 0), QColor(0, 0, 255), QColor(0, 0, 0)};

    bool ret = true;
    int c1_faces = c1.p.size();
    int c2_faces = c2.p.size();

    // For every face in c1
    for (int i = 0; i < c1_faces; i++)
    {
        // Grab a face (face x, face y)
        float fx = c1.p[i].x - c1.p[(i + 1) % c1_faces].x;
        float fy = c1.p[i].y - c1.p[(i + 1) % c1_faces].y;

        // Create a perpendicular axis to project on (axis x, axis y)
        float ax = -fy, ay = fx;

        // Normalize the axis
        float len_v = sqrt(ax * ax + ay * ay);
        ax /= len_v;
        ay /= len_v;

        // Debug graphics (ignore)
        debug->setPen(col[i]);
        // Draw the face
        debug->drawLine(QLineF(c1.tx + c1.p[i].x, c1.ty + c1.p[i].y, c1.p[(i + 1) % c1_faces].x + c1.tx, c1.p[(i + 1) % c1_faces].y + c1.ty));
        // Draw the axis
        debug->save();
        debug->translate(c1.p[i].x, c1.p[i].y);
        debug->drawLine(QLineF(c1.tx, c1.ty, ax * 100 + c1.tx, ay * 100 + c1.ty));
        debug->drawEllipse(QPointF(ax * 100 + c1.tx, ay * 100 + c1.ty), 10, 10);
        debug->restore();

        // Carve out the min and max values
        float c1_min = FLT_MAX, c1_max = FLT_MIN;
        float c2_min = FLT_MAX, c2_max = FLT_MIN;

        // Project every point in c1 on the axis and store min and max
        for (int j = 0; j < c1_faces; j++)
        {
            float c1_proj = (ax * (c1.p[j].x + c1.tx) + ay * (c1.p[j].y + c1.ty)) / (ax * ax + ay * ay);
            c1_min = min(c1_proj, c1_min);
            c1_max = max(c1_proj, c1_max);
        }

        // Project every point in c2 on the axis and store min and max
        for (int j = 0; j < c2_faces; j++)
        {
            float c2_proj = (ax * (c2.p[j].x + c2.tx) + ay * (c2.p[j].y + c2.ty)) / (ax * ax + ay * ay);
            c2_min = min(c2_proj, c2_min);
            c2_max = max(c2_proj, c2_max);
        }

        // Return if the projections do not overlap
        if (!(c1_max >= c2_min && c1_min <= c2_max))
            ret = false; //return false;
    }
    return ret; //return true;
}
What am I doing wrong? It registers collisions perfectly but is over-sensitive on one edge (in my test using a triangle and a diamond):
// Triangle
push_back(Point(0, -150));
push_back(Point(0, 50));
push_back(Point(-100, 100));

// Diamond
push_back(Point(0, -100));
push_back(Point(100, 0));
push_back(Point(0, 100));
push_back(Point(-100, 0));
I'm getting really hung up on this, please help me out :)
http://u8999827.fsdata.se/sat.png
OK, I was wrong the first time. Looking at your picture of a failure case, it is obvious a separating axis exists and is one of the normals (the normal to the long edge of the triangle). The projection is correct; however, your bounds are not.
I think the error is here:
float c1_min = FLT_MAX, c1_max = FLT_MIN;
float c2_min = FLT_MAX, c2_max = FLT_MIN;
FLT_MIN is the smallest normal positive number representable by a float, not the most negative number. In fact you need:
float c1_min = FLT_MAX, c1_max = -FLT_MAX;
float c2_min = FLT_MAX, c2_max = -FLT_MAX;
or even better for C++
float c1_min = std::numeric_limits<float>::max(), c1_max = -c1_min;
float c2_min = std::numeric_limits<float>::max(), c2_max = -c2_min;
because you're probably seeing negative projections onto the axis.
If I have a texture, is it then possible to generate a normal-map for this texture, so it can be used for bump-mapping?
Or how are normal maps usually made?
Yes. Well, sort of. Normal maps can be accurately made from height maps. Generally, you can also put a regular texture through and get decent results as well. Keep in mind there are other methods of making a normal map, such as taking a high-resolution model, making a low-resolution version of it, and then ray casting to see what the normal should be for the low-resolution model to simulate the higher-resolution one.
For height-map to normal-map conversion, you can use the Sobel operator. This operator can be run in the x-direction, giving you the x-component of the normal, and then in the y-direction, giving you the y-component. The z-component is 1.0 / strength, where strength is the emphasis or "deepness" of the normal map. Then take that x, y, and z, put them into a vector, normalize it, and you have your normal at that point. Encode it into the pixel and you're done.
Here's some older, incomplete code that demonstrates this:
// pretend types, something like this
struct pixel
{
    uint8_t red;
    uint8_t green;
    uint8_t blue;
};

struct vector3d; // a 3-vector with doubles
struct texture;  // a 2d array of pixels

// determine intensity of pixel, from 0 - 1
double intensity(const pixel& pPixel)
{
    const double r = static_cast<double>(pPixel.red);
    const double g = static_cast<double>(pPixel.green);
    const double b = static_cast<double>(pPixel.blue);
    const double average = (r + g + b) / 3.0;
    return average / 255.0;
}

// clamp an index into [0, pMax]
int clamp(int pX, int pMax)
{
    if (pX > pMax)
    {
        return pMax;
    }
    else if (pX < 0)
    {
        return 0;
    }
    else
    {
        return pX;
    }
}

// transform -1 - 1 to 0 - 255
uint8_t map_component(double pX)
{
    return (pX + 1.0) * (255.0 / 2.0);
}

texture normal_from_height(const texture& pTexture, double pStrength = 2.0)
{
    // assume square texture, not necessarily true in real code
    texture result(pTexture.size(), pTexture.size());
    const int textureSize = static_cast<int>(pTexture.size());
    const int maxIndex = textureSize - 1; // clamp against the last valid index
    // use signed indices so row - 1 and column - 1 cannot wrap around
    for (int row = 0; row < textureSize; ++row)
    {
        for (int column = 0; column < textureSize; ++column)
        {
            // surrounding pixels
            const pixel topLeft = pTexture(clamp(row - 1, maxIndex), clamp(column - 1, maxIndex));
            const pixel top = pTexture(clamp(row - 1, maxIndex), clamp(column, maxIndex));
            const pixel topRight = pTexture(clamp(row - 1, maxIndex), clamp(column + 1, maxIndex));
            const pixel right = pTexture(clamp(row, maxIndex), clamp(column + 1, maxIndex));
            const pixel bottomRight = pTexture(clamp(row + 1, maxIndex), clamp(column + 1, maxIndex));
            const pixel bottom = pTexture(clamp(row + 1, maxIndex), clamp(column, maxIndex));
            const pixel bottomLeft = pTexture(clamp(row + 1, maxIndex), clamp(column - 1, maxIndex));
            const pixel left = pTexture(clamp(row, maxIndex), clamp(column - 1, maxIndex));

            // their intensities
            const double tl = intensity(topLeft);
            const double t = intensity(top);
            const double tr = intensity(topRight);
            const double r = intensity(right);
            const double br = intensity(bottomRight);
            const double b = intensity(bottom);
            const double bl = intensity(bottomLeft);
            const double l = intensity(left);

            // sobel filter
            const double dX = (tr + 2.0 * r + br) - (tl + 2.0 * l + bl);
            const double dY = (bl + 2.0 * b + br) - (tl + 2.0 * t + tr);
            const double dZ = 1.0 / pStrength;

            vector3d v(dX, dY, dZ);
            v.normalize();

            // convert to rgb
            result(row, column) = pixel(map_component(v.x), map_component(v.y), map_component(v.z));
        }
    }
    return result;
}
There are probably many ways to generate a normal map, but as others have said, you can do it from a height map, and 3D packages like XSI/3ds Max/Blender (any of them) can output one for you as an image.
You can then output an RGB image with the NVIDIA plugin for Photoshop, use an algorithm to convert it, or you might be able to output it directly from those 3D packages with 3rd-party plugins.
Be aware that in some cases you might need to invert channels (R, G or B) of the generated normal map.
Here are some resource links with examples and more complete explanations:
http://developer.nvidia.com/object/photoshop_dds_plugins.html
http://en.wikipedia.org/wiki/Normal_mapping
http://www.vrgeo.org/fileadmin/VRGeo/Bilder/VRGeo_Papers/jgt2002normalmaps.pdf
I don't think normal maps are generated from a texture; they are generated from a model.
Just as texturing allows you to define complex colour detail with minimal polys (as opposed to using millions of polys and just vertex colours to define the colour on your mesh), a normal map allows you to define complex normal detail with minimal polys.
I believe normal maps are usually generated from a higher-res mesh and then used with a low-res mesh.
I'm sure 3D tools such as 3ds Max or Maya, as well as more specialised tools, will do this for you. Unlike textures, I don't think they are usually done by hand.
But they are generated from the mesh, not the texture.
I suggest starting with OpenCV, due to its richness in algorithms. Here's one I wrote that iteratively blurs the normal map and weights the blurred passes into the overall value, essentially creating more of a topological map.
#define ROW_PTR(img, y) ((uchar*)((img).data + (img).step * (y)))

cv::Mat normalMap(const cv::Mat& bwTexture, double pStrength)
{
    double scale = 1.0; // cv::Sobel takes scale and delta as doubles
    double delta = 127; // bias so signed gradients fit in CV_8U
    cv::Mat sobelZ, sobelX, sobelY;
    // note: cv::Sobel only accepts kernel sizes 1, 3, 5 or 7
    cv::Sobel(bwTexture, sobelX, CV_8U, 1, 0, 7, scale, delta, cv::BORDER_DEFAULT);
    cv::Sobel(bwTexture, sobelY, CV_8U, 0, 1, 7, scale, delta, cv::BORDER_DEFAULT);
    sobelZ = cv::Mat(bwTexture.rows, bwTexture.cols, CV_8UC1);

    // derive the z plane from the gradient magnitudes
    for (int y = 0; y < bwTexture.rows; y++) {
        const uchar* sobelXPtr = ROW_PTR(sobelX, y);
        const uchar* sobelYPtr = ROW_PTR(sobelY, y);
        uchar* sobelZPtr = ROW_PTR(sobelZ, y);
        for (int x = 0; x < bwTexture.cols; x++) {
            double Gx = double(sobelXPtr[x]) / 255.0;
            double Gy = double(sobelYPtr[x]) / 255.0;
            double Gz = pStrength * sqrt(Gx * Gx + Gy * Gy);
            uchar value = uchar(Gz * 255.0);
            sobelZPtr[x] = value;
        }
    }

    std::vector<cv::Mat> planes;
    planes.push_back(sobelX);
    planes.push_back(sobelY);
    planes.push_back(sobelZ);

    cv::Mat normalMap;
    cv::merge(planes, normalMap);

    cv::Mat originalNormalMap = normalMap.clone();
    cv::Mat normalMapBlurred;

    // iteratively blur and blend the blurred copy back in
    for (int i = 0; i < 3; i++) {
        cv::GaussianBlur(normalMap, normalMapBlurred, cv::Size(13, 13), 5, 5);
        addWeighted(normalMap, 0.4, normalMapBlurred, 0.6, 0, normalMap);
    }
    addWeighted(originalNormalMap, 0.3, normalMapBlurred, 0.7, 0, normalMap);
    return normalMap;
}
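A minimal usage sketch, assuming the height source is loaded as a single-channel 8-bit image to match bwTexture (file names are illustrative):

int main()
{
    // Load the height source as 8-bit grayscale
    cv::Mat height = cv::imread("height.png", cv::IMREAD_GRAYSCALE);
    if (height.empty()) return 1;
    cv::Mat result = normalMap(height, 1.0);
    cv::imwrite("normal.png", result);
    return 0;
}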