I'm creating bitmap (BMP) files in C according to the file format specification, and I would like to draw simple primitives on my bitmap. The following code shows how I draw a rectangle:
if(curline->type == 1) // draw a rectangle
{
    int xstart = curline->x;
    int ystart = curline->y;
    int width = curline->width + xstart;   // exclusive end coordinate on the x axis
    int height = curline->height + ystart; // exclusive end coordinate on the y axis
    int x = 0;
    int y = 0;
    for(y = ystart; y < height; y++)
    {
        for(x = xstart; x < width; x++)
        {
            arr[x][y].blue = curline->blue;
            arr[x][y].green = curline->green;
            arr[x][y].red = curline->red;
        }
    }
    printf("rect drawn.\n");
}
...
save_bitmap();
Example output: (image omitted)
So basically I'm setting the red, green and blue values for all pixels within the given x and y field.
Now I'd like to fill a circle, knowing its midpoint and radius. But how do I know which pixels are inside this circle and which pixels aren't? Any help would be appreciated, thanks for reading.
A point lies within the bounds of a circle if the distance from the point to the center of the circle is less than the radius of the circle.
Consider a point (x1,y1) compared to a circle with center (x2,y2) and radius r:
int dx = x2 - x1; // horizontal offset
int dy = y2 - y1; // vertical offset
if ( (dx*dx + dy*dy) <= (r*r) )
{
    // set pixel color
}
You can also try the midpoint circle algorithm, described on Wikipedia.
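Putting this together with the rectangle loop from the question, a minimal fill sketch could look like the following. I'm assuming a hypothetical type value of 2 for circles and hypothetical cx, cy and radius fields on curline; adjust to your actual structure, and add bounds checks if a circle may extend past the bitmap edges.

if(curline->type == 2) // draw a filled circle (hypothetical type value)
{
    int cx = curline->cx;     // assumed field: circle center x
    int cy = curline->cy;     // assumed field: circle center y
    int r  = curline->radius; // assumed field: circle radius
    // scan only the bounding box of the circle
    for(int y = cy - r; y <= cy + r; y++)
    {
        for(int x = cx - r; x <= cx + r; x++)
        {
            int dx = cx - x;
            int dy = cy - y;
            if(dx*dx + dy*dy <= r*r) // point is inside (or on) the circle
            {
                arr[x][y].blue  = curline->blue;
                arr[x][y].green = curline->green;
                arr[x][y].red   = curline->red;
            }
        }
    }
    printf("circle drawn.\n");
}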
Related
Hopefully, my title makes a little bit of sense. I am new to programming and trying to create a for loop that draws pixels along the x-axis and then the y-axis so that they form a square shape that looks filled in. How would I go about doing this? Right now, whenever I run the code I get a diagonal line from the bottom left of the screen to the top right. I know there are probably more optimal ways of doing this, but it is required for the assignment I am working on. Here's what I've got so far, but again, I can't stress this enough: I am very new to C++ programming. Any help would be greatly appreciated! Also, MAX is just an int set to 728, and the signature of SetPixel is SetPixel(int x, int y, unsigned char red, unsigned char green, unsigned char blue).
void drawRectangle(int parameterX, int parameterY) {
    // draw rectangle
    for (int x = 0; x > parameterX; x++) {
        for (int y = 0; y > parameterY; y++) {
            SetPixel(x+y, MAX / 2, 255, 255, 255);
        }
    }
}
There are a couple of issues with your code. First, the loop definition for (int x = 0; x > parameterX; x++) requires x > parameterX to be true for the loop body to execute; since x starts at 0, this is false from the start for any positive parameterX, so the body never runs. The same applies to for (int y = 0; y > parameterY; y++) with y > parameterY. From the code you posted, it would seem that the rectangle (or square, if parameterY == parameterX) goes from 0 to parameterX on the x-axis and from 0 to parameterY on the y-axis. Hence you should change your code to:
void drawRectangle(int parameterX, int parameterY) {
    // draw rectangle
    for (int x = 0; x < parameterX; x++) {
        for (int y = 0; y < parameterY; y++) {
            SetPixel(x, y, 255, 255, 255);
        }
    }
}
Note that SetPixel(int x, int y, unsigned char red, unsigned char green, unsigned char blue) takes x and y as the current pixel's coordinates, which is why it is called as SetPixel(x, y, 255, 255, 255);. The MAX / 2 argument is dropped because it would pin the y-coordinate to that value for the entire loop. Finally, the red, green and blue parameters are the components of an RGB color; the final color is the combination of those three channel values, each ranging from 0 to 255, which is why they are declared as unsigned char. The new code reads: for every pixel in the rectangle going from 0 to parameterX on the x-axis and from 0 to parameterY on the y-axis, set the pixel's red, green and blue components to 255. In RGB color space that color is white.
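For example, a call such as the following would paint a 100x50 white rectangle anchored at the top-left corner (using the fixed drawRectangle above):

drawRectangle(100, 50); // every pixel with 0 <= x < 100 and 0 <= y < 50 becomes white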
Neither the question nor the accepted answer reveals which C++ library the OP uses. I assume it is the Win32 API. If so, SetPixel takes an HDC handle as its first argument (see the documentation: https://learn.microsoft.com/en-us/windows/win32/api/wingdi/nf-wingdi-setpixel). With this in mind, here is code that works and solves the OP's problem:
int r = 123;
int g = 123;
int b = 123;
HWND myconsole1 = GetConsoleWindow();
HDC hdc1 = GetDC(myconsole1);
COLORREF Colors1 = RGB(r, g, b);
// draw rectangle
for (int x = 0; x < parameterX; x++) {
    for (int y = 0; y < parameterY; y++) {
        SetPixel(hdc1, x, y, Colors1);
    }
}
A complete minimal program along the same lines:

#include <iostream>
#include <cstdlib>
#include <Windows.h>

HWND Console = GetConsoleWindow();

int main()
{
    HDC hdc = GetDC(Console);
    for (int x = 200; x < 250; x++)
    {
        for (int y = 200; y < 250; y++)
        {
            SetPixel(hdc, x, y, RGB(0, 200, 0)); // draw a 50x50 green square
        }
    }
    system("pause>0");
    return 0;
}
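One caveat worth adding: a device context obtained with GetDC should be released once you are done drawing with it, for example:

ReleaseDC(Console, hdc); // pair every GetDC with a ReleaseDC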
Issue
I'm trying to implement the Perlin noise algorithm in 2D with a single octave on a 16x16 grid. I'm using the result as heightmap data for a terrain; however, it only seems to work correctly along one axis. Whenever the sample point moves into a new Y section of the Perlin noise grid, the gradient is very different from what I expect (for example, it often flips from 0.98 to -0.97, which is a very sudden change).
This image shows the staggered terrain in the z direction (which is the y axis in the 2D Perlin noise grid).
Code
I've put the code that calculates which sample point to use at the end since it's quite long and I believe it's not where the issue is, but essentially I scale down the terrain to match the Perlin Noise grid (16x16) and then sample through all the points.
Gradient At Point
So the code that calculates the gradient at a sample point is the following:
// Find the gradient at a certain sample point
float PerlinNoise::gradientAt(Vector2 point)
{
    // Decimal part of float
    float relativeX = point.x - (int)point.x;
    float relativeY = point.y - (int)point.y;
    Vector2 relativePoint = Vector2(relativeX, relativeY);

    vector<float> weights(4);
    // Find the weights of the 4 surrounding points
    weights = surroundingWeights(point);

    float fadeX = fadeFunction(relativePoint.x);
    float fadeY = fadeFunction(relativePoint.y);

    float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
    float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
    float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);

    return lerpC;
}
Surrounding Weights of Point
I believe the issue is somewhere here, in the function that calculates the weights for the 4 surrounding points of a sample point, but I can't seem to figure out what is wrong since all the values seem sensible in the function when stepping through it.
// Find the surrounding weight of a point
vector<float> PerlinNoise::surroundingWeights(Vector2 point){
    // Produces correct values
    vector<Vector2> surroundingPoints = surroundingPointsOf(point);
    vector<float> weights;

    for (unsigned i = 0; i < surroundingPoints.size(); ++i) {
        // The corner to the sample point
        Vector2 cornerToPoint = surroundingPoints[i].toVector(point);

        // Getting the seeded vector from the grid
        float x = surroundingPoints[i].x;
        float y = surroundingPoints[i].y;
        Vector2 seededVector = baseGrid[x][y];

        // Dot product between the seededVector and corner to the sample point vector
        float dotProduct = cornerToPoint.dot(seededVector);
        weights.push_back(dotProduct);
    }
    return weights;
}
OpenGL Setup and Sample Point
Setting up the heightmap and getting the sample point. The variables wrongA and wrongB are an example of where the gradient flips and changes suddenly.
void HeightMap::GenerateRandomTerrain() {
    int perlinGridSize = 16;
    PerlinNoise perlin_noise = PerlinNoise(perlinGridSize, perlinGridSize);

    numVertices = RAW_WIDTH * RAW_HEIGHT;
    numIndices = (RAW_WIDTH - 1) * (RAW_HEIGHT - 1) * 6;
    vertices = new Vector3[numVertices];
    textureCoords = new Vector2[numVertices];
    indices = new GLuint[numIndices];

    float perlinScale = RAW_HEIGHT / (float)(perlinGridSize - 1);
    float height = 50;

    float wrongA = perlin_noise.gradientAt(Vector2(0, 68.0f / perlinScale));
    float wrongB = perlin_noise.gradientAt(Vector2(0, 69.0f / perlinScale));

    for (int x = 0; x < RAW_WIDTH; ++x) {
        for (int z = 0; z < RAW_HEIGHT; ++z) {
            int offset = (x * RAW_WIDTH) + z;

            float xVal = (float)x / perlinScale;
            float yVal = (float)z / perlinScale;
            float noise = perlin_noise.gradientAt(Vector2(xVal, yVal));

            vertices[offset] = Vector3(x * HEIGHTMAP_X, noise * height, z * HEIGHTMAP_Z);
            textureCoords[offset] = Vector2(x * HEIGHTMAP_TEX_X, z * HEIGHTMAP_TEX_Z);
        }
    }

    numIndices = 0;
    for (int x = 0; x < RAW_WIDTH - 1; ++x) {
        for (int z = 0; z < RAW_HEIGHT - 1; ++z) {
            int a = (x * (RAW_WIDTH)) + z;
            int b = ((x + 1) * (RAW_WIDTH)) + z;
            int c = ((x + 1) * (RAW_WIDTH)) + (z + 1);
            int d = (x * (RAW_WIDTH)) + (z + 1);

            indices[numIndices++] = c;
            indices[numIndices++] = b;
            indices[numIndices++] = a;
            indices[numIndices++] = a;
            indices[numIndices++] = d;
            indices[numIndices++] = c;
        }
    }

    BufferData();
}
Turned out the issue was in the interpolation stage:
float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpA, lerpB, fadeY);
I had the interpolation in the y axis the wrong way around, so it should have been:
lerp(lerpB, lerpA, fadeY)
Instead of:
lerp(lerpA, lerpB, fadeY)
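With that swap applied, the interpolation stage of gradientAt reads:

float lerpA = MathUtils::lerp(weights[0], weights[1], fadeX);
float lerpB = MathUtils::lerp(weights[2], weights[3], fadeX);
float lerpC = MathUtils::lerp(lerpB, lerpA, fadeY); // y interpolation order swapped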
I've got a 3D terrain environment like so:
I'm trying to get the character (camera) to look up when climbing hills, and look down when descending, like climbing in real life.
This is what it's currently doing:
Right now the camera moves up and down the hills just fine, but I can't get the camera angle to work correctly. The only way I can think of to aim up or down depending on the terrain is to get the height (z value) of the cell my character is currently facing and set that as the focus point, but I really have no idea how to do that.
This is admittedly for an assignment, and we're intentionally not using objects so things are organized a little strangely.
Here's how I'm currently doing things:
const int M = 100; // width
const int N = 100; // height
double zHeights[M+1][N+1]; // 2D array containing the z values of terrain cells
double gRX = 1.5; // x position of character
double gRY = 2.5; // y position of character
double gDirection = 45; // direction of character
double gRSpeed = 0.05; // move speed of character

double getZ(double x, double y) // returns the height of the current cell
{
    double z = .5*sin(x*.25) + .4*sin(y*.15-.43);
    z += sin(x*.45-.7) * cos(y*.315-.31)+.5;
    z += sin(x*.15-.97) * sin(y*.35-8.31);
    double amplitude = 5;
    z *= amplitude;
    return z;
}
void generateTerrain()
{
    glBegin(GL_QUADS);
    for (int i = 0; i <= M; i++)
    {
        for (int j = 0; j <= N; j++)
        {
            zHeights[i][j] = getZ(i,j);
        }
    }
}
void drawTerrain()
{
    for (int i = 0; i < M; i++)
    {
        for (int j = 0; j < N; j++)
        {
            glColor3ub( (i*34525+j*5245)%256, (i*3456345+j*6757)%256, (i*98776+j*6554544)%256);
            glVertex3d(i, j, getZ(i,j));
            glVertex3d(i, j+1, getZ(i,j+1));
            glVertex3d(i+1, j+1, getZ(i+1,j+1));
            glVertex3d(i+1, j, getZ(i+1,j));
        }
    }
}
void display() // callback to glutDisplayFunc
{
    glEnable(GL_DEPTH_TEST);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glLoadIdentity();

    double radians = gDirection /180.*3.141592654; // converts direction to radians
    double z = getZ((int)gRX, (int)gRY); // casts as int to find z value in zHeights[][]
    double dx = cos(radians)*gRSpeed;
    double dy = sin(radians)*gRSpeed;
    double at_x = gRX + dx;
    double at_y = gRY + dy;
    double at_z = z; // source of problem, no idea what to do

    gluLookAt(gRX, gRY, z + 2,      // eye position
              at_x, at_y, at_z + 2, // point to look at, also wrong
              0, 0, 1);             // up vector

    drawTerrain();
    glEnd();
}
void init()
{
    generateTerrain();
}
Firstly, I don't see any reason to cast to int here:
double z = getZ((int)gRX, (int)gRY);
Just use the double values to get a smooth behavior.
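That is, a one-line change:

double z = getZ(gRX, gRY); // sample the height at the exact (double) position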
Your basic approach is already pretty good. You take the current position (gRX, gRY), walk a bit in the viewing direction (dx, dy) and use that as the point to look at. There are just two small things that need adaptation:
double dx = cos(radians)*gRSpeed;
double dy = sin(radians)*gRSpeed;
Multiplying by gRSpeed might seem natural, but in my opinion this factor should not be tied to the character's kinematics. Instead, it represents the smoothness of your view direction: small values make the direction stick very closely to the terrain geometry, larger values smooth it out.
And finally, you need to evaluate the height at your look-at point:
double at_z = getZ(at_x, at_y);
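Putting both changes together, the relevant part of display() might look like the sketch below. lookAhead is a name I'm introducing here for the view-smoothing distance; it is not part of the original code.

double radians = gDirection / 180. * 3.141592654;
double z = getZ(gRX, gRY);                    // smooth height, no int cast
double lookAhead = 1.0;                       // hypothetical smoothing distance, decoupled from gRSpeed
double at_x = gRX + cos(radians) * lookAhead;
double at_y = gRY + sin(radians) * lookAhead;
double at_z = getZ(at_x, at_y);               // terrain height at the look-at point
gluLookAt(gRX, gRY, z + 2,       // eye position
          at_x, at_y, at_z + 2,  // look-at point now follows the terrain
          0, 0, 1);              // up vector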
I've been working on this for about an hour now and I can't figure out what I'm doing wrong. This is the problem statement:
Draw a series of circles along one diagonal of a window. The circles should be different colors, and each circle should touch (but not overlap) the one above and below it. Allow the program user to determine how many circles are to be drawn.
These are some hints that have been given to me:
You will find the geometry involved in putting geometric elements on the diagonals easier if you make your window square. Rather than using getmaxheight() and getmaxwidth(), consider using getmaxheight() for both dimensions.

Don't forget the Pythagorean theorem when working out distances in your code, such as the length of the diagonal. Keep in mind, though, that the units on the screen are pixels, so fractions in the computations are not too useful. This is definitely a place for integer arithmetic.

Use the number of elements you are going to draw (squares, circles, etc.) to divide up the total length into steps for your loops to work with.

Use for loops to draw figures when you know how many to draw and what size they are to be. Determine the count and size before the loop.
So far, this is the code that I have created. Inputting 4 circles only draws 3 on screen, with the third one partially off screen. The circles also do not touch, which makes no sense to me, because moving the center of the next circle down and over by the length of the diameter should leave the two circles touching. Here is my code:
#include <graphics.h>
#include <cmath>
#include <iostream>
using namespace std;

int main()
{
    int foreColor;
    int diagLength;
    int radius, diameter;
    int centerX = 0, centerY = 0;
    int numCircles;    // number of circles
    int width, height; // screen width and height

    cout << "Enter number of circles: ";
    cin >> numCircles;

    width = getmaxheight();
    height = getmaxheight();
    initwindow(width, height, "Circles");

    diagLength = sqrt((width * width) + (height * height));
    diameter = diagLength / numCircles;
    radius = diameter / 2;
    centerX = radius;
    centerY = radius;

    for (int i = 1; i <= numCircles; i++)
    {
        foreColor = i % 16; // 0 <= foreColor <= 15
        setcolor(foreColor);
        setfillstyle(i % 12, foreColor); // Set fill style
        fillellipse(centerX, centerY, radius, radius);
        centerX = centerX + diameter;
        centerY = centerY + diameter;
    }

    getch(); // pause for user
    closegraph();
}
Here's a diagram of what I think you want:
The basic problem comes down to determining:
1. what the diameter D of each circle is, and
2. where the center of each circle is.
The diameter is easy. First calculate the length L of the diagonal using Pythagoras' theorem, then divide by the desired number of circles N. Of course, if you need the radius just divide again by 2.
L = Sqrt(Width * Width + Height * Height);
D = L / N;
The trick to working out the position of the circle centers is to realise that their X coordinates are evenly spaced along the X axis, and the same is true of the Y coordinates, so you can work out the distances I've labelled Dx and Dy really easily with the same kind of division:
Dx = Width / N;
Dy = Height / N;
From there the center of each circle is easily calculated:
for (i = 0; i < N; i++)
{
    centerX = (Dx / 2) + i * Dx;
    centerY = (Dy / 2) + i * Dy;
    /* Draw the circle at (centerX, centerY) with diameter D */
}
That's all there is to it!
By the way, if you were wondering why your code was drawing the circles further apart than they should be, the reason is that you were adding D to centerX and centerY rather than Dx and Dy.
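Applied to the code in the question, the drawing loop might become something like this sketch (keeping the BGI calls as they were; stepX and stepY are new names corresponding to Dx and Dy above):

int stepX = width / numCircles;  // horizontal distance between centers
int stepY = height / numCircles; // vertical distance between centers
diagLength = sqrt((width * width) + (height * height));
radius = (diagLength / numCircles) / 2;
for (int i = 0; i < numCircles; i++)
{
    foreColor = i % 16;
    setcolor(foreColor);
    setfillstyle(i % 12, foreColor);
    fillellipse(stepX / 2 + i * stepX, stepY / 2 + i * stepY, radius, radius);
}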
Due to my terrible math abilities, I cannot figure out how to scale a graph based on its maximum and minimum values so that the whole graph fits into the graphing area (400x420) without parts of it going off screen (the graph is based on an equation given by the user).
Let's say I have this code, and it automatically draws squares and then the line graph based on these values. What is the formula (what do I multiply) to scale it so that it fits into the small graphing area?
vector<int> m_x;
vector<int> m_y; // gets automatically filled by user equation or values
int HeightInPixels = 420;// Graphing area size!!
int WidthInPixels = 400;
int best_max_y = GetMaxOfVector(m_y);
int best_min_y = GetMinOfVector(m_y);
m_row = 0;
m_col = 0;
y_magnitude = (HeightInPixels/(best_max_y+best_min_y)); // probably won't work
x_magnitude = (WidthInPixels/(int)m_x.size());
m_col = m_row = best_max_y; // number of vertical/horizontal lines to draw
////x_magnitude = (WidthInPixels/(int)m_x.size())/2; Doesn't work well
////y_magnitude = (HeightInPixels/(int)m_y.size())/2; Doesn't work well
ready = true; // we have values, graph it
Invalidate(); // uses WM_PAINT
////////////////////////////////////////////
/// Construction of Graph layout on WM_PAINT, before painting line graph
///////////////////////////////////////////
CPen pSilver(PS_SOLID, 1, RGB(150, 150, 150));     // silver
CPen pDarkSilver(PS_SOLID, 2, RGB(120, 120, 120)); // dark silver
dc.SelectObject(pSilver); // silver color
CPoint pt(620, 620); // origin

int left_side = 310;
int top_side = 30;
int bottom_side = 450;
int right_side = 710; // create a rectangle border
dc.Rectangle(left_side, top_side, right_side, bottom_side);

int origin = 310;
int xshift = 30;
int yshift = 30;

// draw scaled rows and columns
for (int r = 1; r <= colrow; r++) { // draw rows
    pt.x = left_side;
    pt.y = (ymagnitude)*r + top_side;
    dc.MoveTo(pt);
    pt.x = right_side;
    dc.LineTo(pt);
    for (int c = 1; c <= colrow; c++) {
        pt.x = left_side + c*(magnitude);
        pt.y = top_side;
        dc.MoveTo(pt);
        pt.y = bottom_side;
        dc.LineTo(pt);
    } // draw columns
}

// grab the center of the graph on x and y dimension
int top_center = ((right_side-left_side)/2)+left_side;
int bottom_center = ((bottom_side-top_side)/2)+top_side;
You are using ax^2 + bx + c (a quadratic equation), and you will get a list of (X, Y) values entered by the user. Let's say the 5 points you get are:
(1,1)
(2,4)
(4,1)
(5,6)
(6,7)
So, here your best_max_y will be 7 and best_min_y will be 1.
Now, your total graph area is:
Dx = right_side - left_side //here, 400 (710 - 310)
Dy = bottom_side - top_side //here, 420 (450 - 30)
So, you can calculate x_magnitude and y_magnitude, the number of pixels per data unit, using the following equations (note that the y scale divides by the range of the data, best_max_y - best_min_y, not their sum):

x_magnitude = Dx / (int)m_x.size();
y_magnitude = Dy / (best_max_y - best_min_y);
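To place an individual data point (m_x[i], m_y[i]) inside the graph rectangle, the mapping might look like this sketch (using the names from the question; screen y grows downward, so the y value is flipped):

int px = left_side + i * x_magnitude;                    // step evenly across the width
int py = top_side + (best_max_y - m_y[i]) * y_magnitude; // flip y so larger values appear higher
dc.SetPixel(px, py, RGB(0, 0, 0));                       // or MoveTo/LineTo for a connected line graph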
What I did was determine how many points I had in the x and y directions, divide the x and y dimensions by those counts, and then divide the result by 3, as I wanted each minimum point to be three pixels so it could be seen.
The trick then is that you have to aggregate the data so that several data points are shown as one point on screen; that might be their average, but it depends on what you are displaying.
Without knowing more about what you are doing, it is hard to make a suggestion.
For this part, subtract rather than add: you want best_max_y - best_min_y, the difference (the range of the data), not the sum.
The only other thing would be to divide y_magnitude and x_magnitude by 3. That was an arbitrary number I came up with just so the users could see the points; you may find some other number works better.
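In code, that adjustment might be as simple as the following sketch (using the names from the question):

x_magnitude = (WidthInPixels / (int)m_x.size()) / 3;
y_magnitude = (HeightInPixels / (best_max_y - best_min_y)) / 3;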