Raylib fails to render Perlin noise - C++

So I was trying to implement Perlin noise in C++ and render it with raylib. I implemented it successfully, but I can't render it: the window just shows a black screen, even though the perlinNoise function returns sensible values. Can anyone help?
#include <math.h>
#include <iostream>
#include <vector>
#include "raylib.h"
using namespace std;
int p[] = {151, 160, 137, 91, 90, 15, 131, 13, 201, 95, 96, 53, 194, 233,
7, 225, 140, 36, 103, 30, 69, 142, 8, 99, 37, 240, 21, 10,
23, 190, 6, 148, 247, 120, 234, 75, 0, 26, 197, 62, 94, 252,
219, 203, 117, 35, 11, 32, 57, 177, 33, 88, 237, 149, 56, 87,
174, 20, 125, 136, 171, 168, 68, 175, 74, 165, 71, 134, 139, 48,
27, 166, 77, 146, 158, 231, 83, 111, 229, 122, 60, 211, 133, 230,
220, 105, 92, 41, 55, 46, 245, 40, 244, 102, 143, 54, 65, 25,
63, 161, 1, 216, 80, 73, 209, 76, 132, 187, 208, 89, 18, 169,
200, 196, 135, 130, 116, 188, 159, 86, 164, 100, 109, 198, 173, 186,
3, 64, 52, 217, 226, 250, 124, 123, 5, 202, 38, 147, 118, 126,
255, 82, 85, 212, 207, 206, 59, 227, 47, 16, 58, 17, 182, 189,
28, 42, 223, 183, 170, 213, 119, 248, 152, 2, 44, 154, 163, 70,
221, 153, 101, 155, 167, 43, 172, 9, 129, 22, 39, 253, 19, 98,
108, 110, 79, 113, 224, 232, 178, 185, 112, 104, 218, 246, 97, 228,
251, 34, 242, 193, 238, 210, 144, 12, 191, 179, 162, 241, 81, 51,
145, 235, 249, 14, 239, 107, 49, 192, 214, 31, 181, 199, 106, 157,
184, 84, 204, 176, 115, 121, 50, 45, 127, 4, 150, 254, 138, 236,
205, 93, 222, 114, 67, 29, 24, 72, 243, 141, 128, 195, 78, 66,
215, 61, 156, 180};
float fade(float t) { return t * t * t * (6 * t * t - 15 * t + 10); }
float lerp(float a0, float a1, float t) { return a0 + t * (a1 - a0); }
float dotp(Vector2 a, Vector2 b) { return a.x * b.x + a.y * b.y; }
Vector2 grad(int a) {
    int l = a % 4;
    Vector2 v;
    if (l == 0) {
        v = {1.0, 1.0};
    } else if (l == 1) {
        v = {-1.0, 1.0};
    } else if (l == 2) {
        v = {-1.0, -1.0};
    } else {
        v = {1.0, -1.0};
    }
    return v;
}
float perlinNoise(float x, float y) {
    // 1 - Topleft, 2 - topright, 3 - bottom left, 4 - bottom right
    int xInd = (int)floor(x) & 255;
    int yInd = (int)floor(y) & 255;
    x -= (int)floor(x);
    y -= (int)floor(y);
    // Direction Vectors
    Vector2 D_1 = {x, y};
    Vector2 D_2 = {x - 1.0, y};
    Vector2 D_3 = {x, y - 1.0};
    Vector2 D_4 = {x - 1.0, y - 1.0};
    // Gradient Vectors
    Vector2 G_1 = grad(p[p[xInd + 1] + yInd + 1]);
    Vector2 G_2 = grad(p[p[xInd] + yInd + 1]);
    Vector2 G_3 = grad(p[p[xInd + 1] + yInd]);
    Vector2 G_4 = grad(p[p[xInd] + yInd]);
    // Dot Products
    float dot1 = dotp(D_1, G_1);
    float dot2 = dotp(D_2, G_2);
    float dot3 = dotp(D_3, G_3);
    float dot4 = dotp(D_4, G_4);
    // Fade Values
    float u = fade(x);
    float v = fade(y);
    return lerp(lerp(dot1, dot2, u), lerp(dot3, dot4, u), v);
}
int main(void) {
    const int screenWidth = 512;
    const int screenHeight = 512;
    cout << perlinNoise(1.5, 1.5) << '\n';
    InitWindow(screenWidth, screenHeight, "perlins go brrrrrrrrrrrrr");
    SetTargetFPS(60);
    while (!WindowShouldClose()) {
        BeginDrawing();
        ClearBackground(BLACK);
        for (int y = 0; y < 512; y++) {
            for (int x = 0; x < 512; x++) {
                float n = perlinNoise(x * 0.01, y * 0.01);
                n += 1;
                n /= 2;
                int c = round(n * 255);
                Color col{c, c, c, (char)256};
                DrawPixel(x, y, col);
            }
        }
        EndDrawing();
    }
    CloseWindow();
    return 0;
}
I try to render it by assigning each pixel a color. I think the problem might be the variable "col"; maybe I don't assign the value correctly.

When you create the color, the alpha channel ends up as 0, because 256 overflows during the conversion to an 8-bit integer. 255 is the correct maximum level of each channel, so if you want a non-transparent color, use 255 for the alpha channel:
Color col{c, c, c, (char)255};
Also note that the channel type is actually unsigned char, so you should ideally cast to unsigned char if you want the cast to be explicit:
Color col{c, c, c, (unsigned char)255};
But you don't have to, because the implicit conversion in the fourth initializer of this aggregate initialization is not considered narrowing: it converts a constant expression whose numerical value is representable in unsigned char. So you can just write:
Color col{c, c, c, 255};


Fast way of summing row entries into diagonal positions of a matrix in Python

Hi, I am trying to solve the equation below, where A is a sparse matrix and ptotal is an array of numbers. I have to sum all the entries in a row and place the result at the diagonal position.
A[ptotal, ptotal] = -sum(A[ptotal, :])
The code seems to give the right answer, but since my ptotal array is long (almost 100000 entries), it is not computationally efficient. Is there a fast method to solve this problem?
First a dense array version:
In [87]: A = np.arange(36).reshape(6,6)
In [88]: ptotal = np.arange(6)
Assuming ptotal covers all the row indices, the expression can be replaced with a sum method call:
In [89]: sum(A[ptotal,:])
Out[89]: array([ 90, 96, 102, 108, 114, 120])
In [90]: A.sum(axis=0)
Out[90]: array([ 90, 96, 102, 108, 114, 120])
We can make an array with those values on the diagonal:
In [92]: np.diagflat(A.sum(axis=0))
Out[92]:
array([[ 90, 0, 0, 0, 0, 0],
[ 0, 96, 0, 0, 0, 0],
[ 0, 0, 102, 0, 0, 0],
[ 0, 0, 0, 108, 0, 0],
[ 0, 0, 0, 0, 114, 0],
[ 0, 0, 0, 0, 0, 120]])
Add it to the original array - and the result is a 'zero-sum' array:
In [93]: A -= np.diagflat(A.sum(axis=0))
In [94]: A
Out[94]:
array([[-90, 1, 2, 3, 4, 5],
[ 6, -89, 8, 9, 10, 11],
[ 12, 13, -88, 15, 16, 17],
[ 18, 19, 20, -87, 22, 23],
[ 24, 25, 26, 27, -86, 29],
[ 30, 31, 32, 33, 34, -85]])
In [95]: A.sum(axis=0)
Out[95]: array([0, 0, 0, 0, 0, 0])
We can do the same with sparse:
In [99]: M = sparse.csr_matrix(np.arange(36).reshape(6,6))
In [100]: M
Out[100]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 35 stored elements in Compressed Sparse Row format>
In [101]: M.sum(axis=0)
Out[101]: matrix([[ 90, 96, 102, 108, 114, 120]], dtype=int32)
A sparse diagonal matrix:
In [104]: sparse.dia_matrix((M.sum(axis=0),0),M.shape)
Out[104]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 6 stored elements (1 diagonals) in DIAgonal format>
In [105]: _.A
Out[105]:
array([[ 90, 0, 0, 0, 0, 0],
[ 0, 96, 0, 0, 0, 0],
[ 0, 0, 102, 0, 0, 0],
[ 0, 0, 0, 108, 0, 0],
[ 0, 0, 0, 0, 114, 0],
[ 0, 0, 0, 0, 0, 120]], dtype=int32)
Take the difference, getting a new matrix:
In [106]: M-sparse.dia_matrix((M.sum(axis=0),0),M.shape)
Out[106]:
<6x6 sparse matrix of type '<class 'numpy.int32'>'
with 36 stored elements in Compressed Sparse Row format>
In [107]: _.A
Out[107]:
array([[-90, 1, 2, 3, 4, 5],
[ 6, -89, 8, 9, 10, 11],
[ 12, 13, -88, 15, 16, 17],
[ 18, 19, 20, -87, 22, 23],
[ 24, 25, 26, 27, -86, 29],
[ 30, 31, 32, 33, 34, -85]], dtype=int32)
There is also a setdiag method:
In [117]: M.setdiag(-M.sum(axis=0).A1)
/usr/local/lib/python3.5/dist-packages/scipy/sparse/compressed.py:774: SparseEfficiencyWarning: Changing the sparsity structure of a csr_matrix is expensive. lil_matrix is more efficient.
SparseEfficiencyWarning)
In [118]: M.A
Out[118]:
array([[ -90, 1, 2, 3, 4, 5],
[ 6, -96, 8, 9, 10, 11],
[ 12, 13, -102, 15, 16, 17],
[ 18, 19, 20, -108, 22, 23],
[ 24, 25, 26, 27, -114, 29],
[ 30, 31, 32, 33, 34, -120]], dtype=int32)
Out[101] is a 2d matrix; .A1 turns it into a 1d array which setdiag can use.
The sparse-efficiency warning is aimed more at iterative use than at a one-time application like this. Still, looking at the setdiag code, I suspect the first approach is faster, but we really need to run time tests.
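For reference, the whole dense recipe above condenses to two lines; a minimal NumPy-only sketch (the sparse variants follow the same pattern):

```python
import numpy as np

A = np.arange(36).reshape(6, 6)

# Subtract the column sums placed on the diagonal, in one shot:
# same as In [93] above, A -= np.diagflat(A.sum(axis=0)).
A = A - np.diagflat(A.sum(axis=0))

# Every column of the result now sums to zero.
print(A.sum(axis=0))  # [0 0 0 0 0 0]
```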

Re-Create Bitmap Color table from 8 bpp bitmap

I have a field device that uses the raster bytes from a bitmap to display an 8-bit-per-pixel image. The device uses its own color table to display bitmaps that were created in MS Paint.
Given that the field device does not store the original color table, is it possible to download the image bytes and recreate the bitmap in Windows? The bpp, height and width are all known; just the color table itself is missing. MS Paint seems to use the same color indexes for 256-color bitmaps, so it seems this should be possible.
I have a bitmap tools class, and I can create a 24-bit bitmap using the function shown below. I am trying to modify this to also create 256-color (8 bpp) bitmaps. What would it take to make this work?
// This function needs to be fixed.
// It only works for 24-BPP bitmaps.
void BitmapTools::SetHbitmap (BYTE* pBitmapBits, LONG lWidth, LONG lHeight, WORD wBitsPerPixel)
{
    if (wBitsPerPixel < 24)
    {
        MessageBox (GetFrame()->m_hWnd,
            "Error at BitmapTools::SetHbitmap(). This function only works with 24 BPP bitmaps.",
            "Error", MB_ICONERROR);
        return;
    }
    // Some basic bitmap parameters
    unsigned long headers_size = sizeof( BITMAPFILEHEADER ) +
                                 sizeof( BITMAPINFOHEADER );
    unsigned long pixel_data_size = lHeight * ( ( lWidth * ( wBitsPerPixel / 8 ) ) );
    BITMAPINFOHEADER bmpInfoHeader = {0};
    // Set the size
    bmpInfoHeader.biSize = sizeof(BITMAPINFOHEADER);
    // Bit count
    bmpInfoHeader.biBitCount = wBitsPerPixel;
    // Use all colors
    bmpInfoHeader.biClrImportant = 0;
    // Use as many colors as the bits per pixel allow
    if (wBitsPerPixel < 24)
    {
        bmpInfoHeader.biClrUsed = (1 << wBitsPerPixel);
    }
    else
    {
        bmpInfoHeader.biClrUsed = 0;
    }
    // Store as uncompressed
    bmpInfoHeader.biCompression = BI_RGB;
    // Set the height in pixels
    bmpInfoHeader.biHeight = lHeight;
    // Width of the image in pixels
    bmpInfoHeader.biWidth = lWidth;
    // Default number of planes
    bmpInfoHeader.biPlanes = 1;
    // Calculate the image size in bytes
    bmpInfoHeader.biSizeImage = pixel_data_size;
    BITMAPFILEHEADER bfh = {0};
    bfh.bfType = 0x4D42;
    // Offset to the RGBQUAD
    bfh.bfOffBits = headers_size;
    // Total size of image including size of headers
    bfh.bfSize = headers_size + pixel_data_size;
    HDC hdc = ::GetDC(NULL);
    UINT usage;
    // This does not work. Is there a way to add an arbitrary color
    // table containing all 256 colors?
    if (wBitsPerPixel < 24)
    {
        usage = DIB_PAL_COLORS;
    }
    else
    {
        usage = DIB_RGB_COLORS;
    }
    //usage = DIB_RGB_COLORS;
    this->H_Bitmap = CreateDIBitmap (hdc, &bmpInfoHeader, CBM_INIT, pBitmapBits, (BITMAPINFO*)&bmpInfoHeader, usage);
}
Edit: I made a new function based on someone else's post to create a 256-color bitmap, and I added the values from the color table used by MS Paint. It almost works, except the bottom-right of the image has a row of black pixels. Here is the code I am now using:
Edit2: Thanks to everyone for your help, esp. Mark. I got it working now. I made the corrections in the code below.
HBITMAP BitmapTools::Create8bppBitmap(HDC hdc, int width, int height, int paddedSize, LPVOID pBits)
{
    BITMAPINFO *bmi = (BITMAPINFO *)malloc(sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * 256);
    BITMAPINFOHEADER &bih(bmi->bmiHeader);
    bih.biSize = sizeof (BITMAPINFOHEADER);
    bih.biWidth = width;
    bih.biHeight = -height;
    bih.biPlanes = 1;
    bih.biBitCount = 8;
    bih.biCompression = BI_RGB;
    bih.biSizeImage = 0;
    //bih.biXPelsPerMeter = 14173;
    //bih.biYPelsPerMeter = 14173;
    bih.biClrUsed = 0;
    bih.biClrImportant = 0;
BYTE red[256] = {0, 128, 0, 128, 0, 128, 0, 192, 192, 166, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96,
128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32,
64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224,
0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192,
224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160,
192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128,
160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96,
128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64,
96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32,
64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0,
32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 192, 224, 0, 32, 64, 96, 128, 160, 255, 160,
128, 255, 0, 255, 0, 255, 0, 255};
BYTE green[256] = {0, 0, 128, 128, 0, 0, 128, 192, 220, 202, 32, 32, 32, 32, 32, 32, 64, 64, 64, 64, 64,
64, 64, 64, 96, 96, 96, 96, 96, 96, 96, 96, 128, 128, 128, 128, 128, 128, 128, 128, 160, 160, 160, 160,
160, 160, 160, 160, 192, 192, 192, 192, 192, 192, 192, 192, 224, 224, 224, 224, 224, 224, 224, 224, 0,
0, 0, 0, 0, 0, 0, 0, 32, 32, 32, 32, 32, 32, 32, 32, 64, 64, 64, 64, 64, 64, 64, 64, 96, 96, 96, 96, 96,
96, 96, 96, 128, 128, 128, 128, 128, 128, 128, 128, 160, 160, 160, 160, 160, 160, 160, 160, 192, 192, 192,
192, 192, 192, 192, 192, 224, 224, 224, 224, 224, 224, 224, 224, 0, 0, 0, 0, 0, 0, 0, 0, 32, 32, 32, 32,
32, 32, 32, 32, 64, 64, 64, 64, 64, 64, 64, 64, 96, 96, 96, 96, 96, 96, 96, 96, 128, 128, 128, 128, 128,
128, 128, 128, 160, 160, 160, 160, 160, 160, 160, 160, 192, 192, 192, 192, 192, 192, 192, 192, 224, 224,
224, 224, 224, 224, 224, 224, 0, 0, 0, 0, 0, 0, 0, 0, 32, 32, 32, 32, 32, 32, 32, 32, 64, 64, 64, 64, 64,
64, 64, 64, 96, 96, 96, 96, 96, 96, 96, 96, 128, 128, 128, 128, 128, 128, 128, 128, 160, 160, 160, 160,
160, 160, 160, 160, 192, 192, 192, 192, 192, 192, 251, 160, 128, 0, 255, 255, 0, 0, 255, 255};
BYTE blue[256] = {0, 0, 0, 0, 128, 128, 128, 192, 192, 240, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,
64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64,
64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 64, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128,
128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 128, 192, 192, 192, 192, 192, 192, 192, 192, 192,
192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192,
192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192, 192,
192, 192, 192, 240, 164, 128, 0, 0, 0, 255, 255, 255, 255};
    for (int i = 0; i <= 255; i++)
    {
        bmi->bmiColors[i].rgbBlue = blue[i];
        bmi->bmiColors[i].rgbGreen = green[i];
        bmi->bmiColors[i].rgbRed = red[i];
        bmi->bmiColors[i].rgbReserved = 0;
    }
    void *Pixels = NULL;
    HBITMAP hbmp = CreateDIBSection(hdc, bmi, DIB_RGB_COLORS, &Pixels, NULL, 0);
    //HBITMAP hbmp = CreateDIBSection(hdc, bmi, DIB_PAL_COLORS, &Pixels, NULL, 0);
    if (pBits != NULL)
    {
        // fill the bitmap
        BYTE* pbBits = (BYTE*)pBits;
        BYTE *Pix = (BYTE *)Pixels;
        memcpy(Pix, pbBits, paddedSize); // --Correction made here--
    }
    free(bmi);
    return hbmp;
}
I use this function to save the bitmap:
BOOL BitmapTools::SaveHBitmap(const char* filename, HBITMAP hbitmap)
{
    BITMAP bitmap;
    if (!GetObjectW(hbitmap, sizeof(BITMAP), (void*)&bitmap))
        return FALSE;
    // Convert the color format to a count of bits.
    WORD clrbits = (WORD)(bitmap.bmPlanes * bitmap.bmBitsPixel);
    if (clrbits == 1) clrbits = 1;
    else if (clrbits <= 4) clrbits = 4;
    else if (clrbits <= 8) clrbits = 8;
    else if (clrbits <= 16) clrbits = 16;
    else if (clrbits <= 24) clrbits = 24;
    else clrbits = 32;
    // clrUsed is zero for 24 bit and higher
    int clrUsed = (clrbits <= 8) ? (1 << clrbits) : 0;
    //TRACE("clrUsed %d\n", clrUsed);
    int bitmapInfoSize = sizeof(BITMAPINFOHEADER) + sizeof(RGBQUAD) * clrUsed;
    PBITMAPINFO bitmapInfo = (PBITMAPINFO)new char[bitmapInfoSize];
    memset(bitmapInfo, 0, bitmapInfoSize);
    // Initialize the fields in the BITMAPINFO structure.
    bitmapInfo->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    bitmapInfo->bmiHeader.biWidth = bitmap.bmWidth;
    bitmapInfo->bmiHeader.biHeight = bitmap.bmHeight;
    bitmapInfo->bmiHeader.biPlanes = bitmap.bmPlanes;
    bitmapInfo->bmiHeader.biBitCount = bitmap.bmBitsPixel;
    bitmapInfo->bmiHeader.biClrUsed = clrUsed;
    bitmapInfo->bmiHeader.biCompression = BI_RGB;
    // Compute the number of bytes in the array of color
    // indices and store the result in biSizeImage.
    // The width must be DWORD aligned unless the bitmap
    // is RLE compressed.
    int dibSize = ((bitmap.bmWidth * clrbits + 31) & ~31) / 8 * bitmap.bmHeight;
    char* dib = new char[dibSize];
    bitmapInfo->bmiHeader.biSizeImage = dibSize;
    // Set biClrImportant to 0, indicating that all of
    // the device colors are important.
    bitmapInfo->bmiHeader.biClrImportant = 0;
    //bitmapInfo->bmiColors [0].rgbBlue
    PBITMAPINFOHEADER bmpInfoHeader = (PBITMAPINFOHEADER)bitmapInfo;
    HDC hdc = CreateCompatibleDC(0);
    if (!GetDIBits(hdc, hbitmap, 0, bmpInfoHeader->biHeight, dib, bitmapInfo, 0))
    {
        delete[] (char*)bitmapInfo; // allocated with new char[]
        delete[] dib;
        return FALSE;
    }
    DWORD dwTmp;
    BITMAPFILEHEADER bmpFileHeader;
    bmpFileHeader.bfType = 0x4d42; // 0x42 = "B" 0x4d = "M"
    bmpFileHeader.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + clrUsed * sizeof(RGBQUAD);
    bmpFileHeader.bfSize = bmpFileHeader.bfOffBits + dibSize;
    HANDLE hfile = CreateFile(filename, GENERIC_READ | GENERIC_WRITE, 0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (hfile != INVALID_HANDLE_VALUE)
    {
        WriteFile(hfile, (LPVOID)&bmpFileHeader, sizeof(BITMAPFILEHEADER), (LPDWORD) &dwTmp, NULL);
        WriteFile(hfile, (void*)bmpInfoHeader, sizeof(BITMAPINFOHEADER) + clrUsed * sizeof(RGBQUAD), (LPDWORD) &dwTmp, NULL);
        WriteFile(hfile, (void*)dib, dibSize, (LPDWORD) &dwTmp, NULL);
        CloseHandle(hfile);
    }
    DeleteDC(hdc);
    delete[] (char*)bitmapInfo; // allocated with new char[]
    delete[] dib;
    return TRUE;
}
Here is the image I get. Note the last row has a set of black pixels. I'm not 100% sure that the problem is with these functions (my next step will be to compare the bytes from the original bitmap to the ones from the field device).
Edit: I checked the bytes from the field device, and they match the original bitmap raster bytes 100%, so I believe the problem is in one of these functions.

My code works correctly, but when I resize the output window, the ground on the screen is distorted. How can I fix it?

My code works correctly, but when I resize the output window, the ground on the screen is distorted. How can I fix it? When I resize the output window, it becomes like the images below.
The correct output is below.
My code is below:
#define _USE_MATH_DEFINES
#include <cmath>
#include <stdlib.h>
#include <math.h>
#include <GL/glut.h>
int x = 0;
int z = 0;
int y;
int data[17][21] =
{ { 14 ,25, 45 ,55 ,68 ,70 ,84 ,91 ,97, 101 ,105 ,105 ,105, 105 ,110 ,110, 110, 110 ,110, 110, 110 },
{ 5, 18, 43, 62 ,73, 82, 88, 94, 99, 102 ,105, 105 ,105, 105, 110, 110 ,110 ,110 ,110, 110, 110 },
{ 5, 18 ,38 ,56, 69, 77, 86, 94, 99, 103, 106, 105, 105, 105, 110, 110, 110, 110, 110, 110, 110 },
{ 5 ,9 ,31, 48, 60, 71, 81, 87, 95, 101, 106, 105, 105, 105, 110, 110, 110, 110, 110, 110, 110 },
{ 5, 5, 18, 37, 49, 56, 62, 81, 91, 94, 101, 105, 105, 105, 110, 110, 110 ,110 ,110, 110, 110 },
{ 5, 5, 12, 23 ,34, 40, 53 ,66 ,77 ,82, 97, 103, 105, 105, 109, 110, 110, 110, 110, 115, 115 },
{ 4 ,5 ,8 ,15, 20, 24, 35, 39, 40, 77, 92, 101, 104, 104 ,105, 110, 110, 110, 115, 115, 115 },
{ 5, 7 ,22, 36, 46, 48, 48, 44 ,50, 58, 80, 96, 96, 97, 106, 110, 110, 115, 115, 115, 115 },
{ 4, 15 ,31 ,46 ,61, 68, 69, 63, 53, 50, 67, 82, 84, 103, 108, 110, 110, 115, 115, 115, 115 },
{ 4, 12, 31, 46, 64, 78, 82, 80, 69, 54, 73, 71, 92, 105, 108, 110, 110, 115, 115, 115, 115 },
{ 6, 26 ,35 ,45, 63, 75, 84, 87, 84, 74 ,77, 80, 96, 103, 108, 110, 110, 110, 115, 115, 115 },
{ 21, 30, 46, 57 ,64 ,76 ,85 ,92 ,92, 87 ,79 ,80 ,86 ,102, 106, 110, 105 ,110, 115, 115, 115 },
{ 27, 40, 48 ,62 ,75 ,84 ,92, 96, 97 ,94 ,88 ,80 ,80 ,91, 104, 105, 105, 105, 110, 115, 115 },
{ 33, 43, 55, 65, 75, 87, 96, 101, 101, 101, 97, 92, 80, 80, 98, 105, 105, 105, 105, 110, 115 },
{ 45, 50, 58, 68, 80, 91, 99, 102, 105, 105, 105, 99, 90, 80, 80, 97, 105, 105, 105, 110, 100 },
{ 50, 60, 65, 71, 84, 95, 101, 105, 105, 107, 107, 106, 102, 101, 92, 80, 98, 104, 105, 100, 100 },
{ 60, 70, 76, 83, 88 ,96, 103, 106 ,107, 108 ,110, 109 ,108 ,108, 106, 101 ,90, 100, 100, 100, 100 } };
//bool tamam=true;
void display(void)
{
    int type = GL_TRIANGLES; // or GL_LINE_LOOP
    glLoadIdentity();
    gluLookAt(350, 600, 400, 280, 300, 300, 0, 0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Sides of the cube as loops or polygons, in anti-clockwise order.
    //glColor3f(1.0, 1.0, 1.0);
    glBegin(type);
    for (int i = 0; i < 16; i++)
    {
        x = 0;
        for (int k = 0; k < 20; k++)
        {
            y = data[i][k];
            if (y >= 80)
                glColor3f(1.0, 0.0, 0.0);
            if (y > 0 && y < 50)
                glColor3f(0.0, 1.0, 0.0);
            if (y >= 50 && y < 80)
                glColor3f(1.0, 1.0, 0.0);
            glVertex3f(x, data[i][k], z);
            glVertex3f(x, data[i + 1][k], z + 20);
            glVertex3f(x + 20, data[i + 1][k + 1], z + 20);
            x = x + 20;
        }
        z = z + 20;
    }
    z = 0;
    for (int i = 0; i < 16; i++)
    {
        x = 0;
        for (int k = 0; k < 20; k++)
        {
            y = data[i][k];
            if (y > 0 && y < 50)
                glColor3f(0.0, 1.0, 0.0);
            if (y >= 50 && y < 80)
                glColor3f(1.0, 1.0, 0.0);
            if (y >= 80)
                glColor3f(1.0, 0.0, 0.0);
            glVertex3f(x + 20, data[i + 1][k + 1], z + 20); //z*i
            glVertex3f(x + 20, data[i][k + 1], z);
            glVertex3f(x, data[i][k], z);
            x = x + 20;
        }
        z = z + 20;
    }
    glEnd(); // front
    /*if(tamam)*/
    /*tamam=false;*/
    glutSwapBuffers();
}
void keyboard(unsigned char key, int x, int y)
{
    switch (key) {
    case 27: case 'q': case 'Q':
        exit(EXIT_SUCCESS);
        break;
    }
}
int main(int argc, char *argv[])
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(1200, 720);
    glutCreateWindow(argv[0]);
    glViewport(0, 0, glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT));
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(300, glutGet(GLUT_WINDOW_WIDTH) /
                        glutGet(GLUT_WINDOW_HEIGHT), 0.0, 300.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    //glutReshapeFunc(reshape);
    //glEnable(GL_DEPTH_TEST);
    glutMainLoop();
    return EXIT_SUCCESS;
}
How can I fix it? Thanks for helping.
This looks like an indexing error. Why are x, y, z (and probably also i) global variables? Most likely you've left one of them in a bad state, and the next time you iterate over your data it causes this mess.
BTW: you should strongly consider ditching that fixed-function-pipeline immediate-mode drawing. Place your data array into a VBO, build an index array that iterates over it forming triangles (like you currently do), and use the vertex shader variable gl_VertexID together with a width-specifying uniform to produce the x and z positions for each vertex.
I wonder whether it is because of the integer division in the call to gluPerspective. Kind of a guess.

I created a ground, but I can't see the ground in OpenGL. How can I fix it?

This code runs with no errors, but the ground I created is not visible on the screen after running the program. I want to see the whole ground on the screen.
How can I fix the code?
#define A glVertex3f (0.5, 1.2, 0)
#define B glVertex3f (1, 2, 0)
#define C glVertex3f (2, 1.4, 0)
#define D glVertex3f ( 0.5, 0.5, -0.5)
#define E glVertex3f (-0.5, 0.5, 0.5)
#define F glVertex3f (-0.5, -0.5, 0.5)
#define G glVertex3f ( 0.5, -0.5, 0.5)
#define H glVertex3f ( 0.5, 0.5, 0.5)
#define _USE_MATH_DEFINES
#include <cmath>
#include <stdlib.h>
#include <math.h>
#include <glut.h>
float distance = 5.0;
int longitude = 0, latitude = 0, ainc = 5;
int lastx = -1, lasty = -1;
int x=0,z=0;
int y;
int data[17][21]=
{{14 ,25, 45 ,55 ,68 ,70 ,84 ,91 ,97, 101 ,105 ,105 ,105, 105 ,110 ,110, 110, 110 ,110, 110, 110},
{5, 18, 43, 62 ,73, 82, 88, 94, 99, 102 ,105, 105 ,105, 105, 110, 110 ,110 ,110 ,110, 110, 110},
{5, 18 ,38 ,56, 69, 77, 86, 94, 99, 103, 106, 105, 105, 105, 110, 110, 110, 110, 110, 110, 110},
{5 ,9 ,31, 48, 60, 71, 81, 87, 95, 101, 106, 105, 105, 105, 110, 110, 110, 110, 110, 110, 110},
{5, 5, 18, 37, 49, 56, 62, 81, 91, 94, 101, 105, 105, 105, 110, 110, 110 ,110 ,110, 110, 110},
{5, 5, 12, 23 ,34, 40, 53 ,66 ,77 ,82, 97, 103, 105, 105, 109, 110, 110, 110, 110, 115, 115},
{4 ,5 ,8 ,15, 20, 24, 35, 39, 40, 77, 92, 101, 104, 104 ,105, 110, 110, 110, 115, 115, 115},
{5, 7 ,22, 36, 46, 48, 48, 44 ,50, 58, 80, 96, 96, 97, 106, 110, 110, 115, 115, 115, 115},
{4, 15 ,31 ,46 ,61, 68, 69, 63, 53, 50, 67, 82, 84, 103, 108, 110, 110, 115, 115, 115, 115},
{4, 12, 31, 46, 64, 78, 82, 80, 69, 54, 73, 71, 92, 105, 108, 110, 110, 115, 115, 115, 115},
{6, 26 ,35 ,45, 63, 75, 84, 87, 84, 74 ,77, 80, 96, 103, 108, 110, 110, 110, 115, 115, 115},
{21, 30, 46, 57 ,64 ,76 ,85 ,92 ,92, 87 ,79 ,80 ,86 ,102, 106, 110, 105 ,110, 115, 115, 115},
{27, 40, 48 ,62 ,75 ,84 ,92, 96, 97 ,94 ,88 ,80 ,80 ,91, 104, 105, 105, 105, 110, 115, 115},
{33, 43, 55, 65, 75, 87, 96, 101, 101, 101, 97, 92, 80, 80, 98, 105, 105, 105, 105, 110, 115},
{45, 50, 58, 68, 80, 91, 99, 102, 105, 105, 105, 99, 90, 80, 80, 97, 105, 105, 105, 110, 100},
{50, 60, 65, 71, 84, 95, 101, 105, 105, 107, 107, 106, 102, 101, 92, 80, 98, 104, 105, 100, 100},
{60, 70, 76, 83, 88 ,96, 103, 106 ,107, 108 ,110, 109 ,108 ,108, 106, 101 ,90, 100, 100, 100, 100}};
void display (void)
{
    float xc, yc, zc;
    int type = GL_TRIANGLE_STRIP; // or GL_LINE_LOOP
    xc = distance * cos (latitude / 180.0 * M_PI) * cos (longitude / 180.0 * M_PI);
    yc = distance * sin (latitude / 180.0 * M_PI);
    zc = distance * cos (latitude / 180.0 * M_PI) * sin (longitude / 180.0 * M_PI);
    glLoadIdentity ();
    gluLookAt (xc, yc, zc, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0);
    glClear (GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glColor3f (1.0, 1.0, 1.0);
    glBegin (type);
    for (int i = 0; i < 17; i++)
    {
        z = z + 20;
        for (int k = 0; k < 21; k = k + 1)
        {
            y = data[i][k];
            if (y > 0 && y < 50)
                glColor3f (0.0, 1.0, 0.0);
            if (y >= 50 && y < 80)
                glColor3f (1.0, 1.0, 0.0);
            if (y >= 80)
                glColor3f (1.0, 0.0, 0.0);
            glVertex3f (x, data[i][k], z);
            x = x + 20;
        }
    }
    glEnd (); // front
    glutSwapBuffers ();
}
void keyboard (unsigned char key, int x, int y)
{
    switch (key) {
    case 27: case 'q': case 'Q':
        exit (EXIT_SUCCESS);
        break;
    }
}
void special (int key, int x, int y)
{
    switch (key) {
    case GLUT_KEY_UP:
        distance *= 2;
        break;
    case GLUT_KEY_DOWN:
        distance /= 2;
        break;
    }
    glutPostRedisplay ();
}
void click (int button, int state, int x, int y)
{
    if (button == GLUT_LEFT_BUTTON && state == GLUT_DOWN) {
        lastx = x;
        lasty = y;
    }
}
void mouse (int x, int y)
{
    if (x > lastx) {
        longitude = (longitude + ainc) % 360;
    } else if (x < lastx) {
        longitude = (longitude - ainc) % 360;
    }
    if (y > lasty) {
        latitude = (latitude + ainc) % 360;
    } else if (y < lasty) {
        latitude = (latitude - ainc) % 360;
    }
    lastx = x;
    lasty = y;
    glutPostRedisplay ();
}
void reshape (int w, int h)
{
    glViewport (0, 0, (GLsizei) w, (GLsizei) h);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    gluPerspective (65.0, (GLfloat) w / (GLfloat) h, 1.0, 20.0);
    glMatrixMode (GL_MODELVIEW);
    glLoadIdentity ();
    glTranslatef (0.0, 0.0, -5.0);
}
int main (int argc, char *argv[])
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow (argv[0]);
    glMatrixMode (GL_PROJECTION);
    glLoadIdentity ();
    gluPerspective (50.0, 1.0, 3.0, 7.0);
    glMatrixMode (GL_MODELVIEW);
    glutDisplayFunc (display);
    glutKeyboardFunc (keyboard);
    glutSpecialFunc (special);
    glutMouseFunc (click);
    glutMotionFunc (mouse);
    glutReshapeFunc (reshape);
    glEnable (GL_DEPTH_TEST);
    glutMainLoop ();
    return EXIT_SUCCESS;
}
I can see a few things you can check:
1) In the reshape function, glTranslatef (0.0, 0.0, -5.0) is called, but right after that the display function calls glLoadIdentity (), which resets the matrix, so the translation may never apply to the object you are drawing.
2) Also try moving the geometry further along the Z axis.
3) Change your projection's far plane and FOV angle, since your X values are very large.
I noticed one more thing: your x, y, z are global variables and are never reset in display, so they keep growing across frames.
Hope it helps.

My OpenGL 3D Tetris game has square cubes, but they are vanishing to infinity in the background. What is wrong?

I am making a 3D Tetris game. I have defined my cubes in 3D and set up a LookAt and a Frustum matrix to view the game.
The game works fine with an orthographic matrix, but when I switch to Frustum, all the cubes vanish to infinity in the background.
The Tetris pieces are all defined in the +,+ quadrant, if you think of it like a Euclidean grid.
The pieces have length, width and height of 33.
I first have a matrix that centers the entire game at (0, 0) so the game spans all four quadrants; then I have a LookAt matrix with the camera at (0, 0, 17) looking at (0, 0, 0).
Now with an orthographic matrix it looks fine, but if I switch to a Frustum, it looks like the image below.
Here is an example of one of the four cubes that make up a Tetris tile's vertices:
( 132, 660, 16.5, 1 )( 165, 660, 16.5, 1 )( 165, 693, 16.5, 1 )( 132, 660, 16.5, 1 )( 165, 693, 16.5, 1 )( 132, 693, 16.5, 1 )
( 165, 660, 16.5, 1 )( 165, 660, -16.5, 1 )( 165, 693, -16.5, 1 )( 165, 660, 16.5, 1 )( 165, 693, -16.5, 1 )( 165, 693, 16.5, 1 )
( 165, 660, -16.5, 1 )( 132, 660, -16.5, 1 )( 132, 693, -16.5, 1 )( 165, 660, -16.5, 1 )( 132, 693, -16.5, 1 )( 165, 693, -16.5, 1 )
( 132, 660, -16.5, 1 )( 132, 660, 16.5, 1 )( 132, 693, 16.5, 1 )( 132, 660, -16.5, 1 )( 132, 693, 16.5, 1 )( 132, 693, -16.5, 1 )
( 132, 660, -16.5, 1 )( 165, 660, -16.5, 1 )( 165, 660, 16.5, 1 )( 132, 660, -16.5, 1 )( 165, 660, 16.5, 1 )( 132, 660, 16.5, 1 )
( 132, 693, 16.5, 1 )( 165, 693, 16.5, 1 )( 165, 693, -16.5, 1 )( 132, 693, 16.5, 1 )( 165, 693, -16.5, 1 )( 132, 693, -16.5, 1 )
Here is where I set up my matrices:
vec4 eye(0, 0, 17, 1.0);
vec4 at(0, 0, 0, 1.0 );
vec4 up( 0.0, 1.0, 0.0, 1.0);
mat4 mv = LookAt( eye, at, up );
mat4 center;
// window is 400 x 720, translating the game
// to make the game center at the origin
center[0][3] = -200;
center[1][3] = -360;
mv = mv * center;
glUniformMatrix4fv( model_view, 1, GL_TRUE, mv );
mat4 p = Frustum(-200, 200, -360, 360, 1, 45);
If I change the Frustum to Ortho above, the game looks like this:
If I change the coordinates of the camera eye at all, the entire game visually messes up, and I don't understand why, even if I move the camera up by one pixel. This also happens with the Ortho projection.