Can't get FastNoise to work for grass patches - C++

I have been trying to implement noise-based grass patches all day. Before, I just had it pick random tiles, but they looked bad because they weren't grouped into patches.
FastNoise GrassNoise;
for (int x = 0; x < MapSizeX; x++) {
    for (int y = 0; y < MapSizeY; y++) {
        if (GrassNoise.GetValue(Map[x][y].Sprite.getPosition().x, Map[x][y].Sprite.getPosition().y) > 0.5) {
            Map[x][y].Sprite.setTexture(*Grass);
        }
    }
}
I'm pretty sure that I'm just not generating the noise correctly. With that code, all of the tiles get turned into grass, whereas I'm looking for just a few patches.
github.com/Auburns/FastNoise
Thanks in advance

For the record, the problem was that the arguments passed to GrassNoise.GetValue were floating-point values in the [0,1] range, whereas the function expects integer coordinates.
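For anyone hitting the same thing, here is a minimal sketch of the kind of loop that gives patches rather than isolated tiles. It feeds tile indices (not sprite pixel positions) into the noise and uses the library's frequency setting to control patch size; the noise type, seed, frequency and the 0.5 threshold below are assumed values for illustration, not taken from the original code.

FastNoise GrassNoise;
GrassNoise.SetNoiseType(FastNoise::SimplexFractal); // any smooth noise type works here
GrassNoise.SetSeed(1337);                           // assumed seed
GrassNoise.SetFrequency(0.08f);                     // lower frequency -> larger patches (assumed value)

for (int x = 0; x < MapSizeX; x++) {
    for (int y = 0; y < MapSizeY; y++) {
        // GetNoise returns a value roughly in [-1, 1]; thresholding keeps only the "peaks" as grass
        if (GrassNoise.GetNoise((float)x, (float)y) > 0.5f) {
            Map[x][y].Sprite.setTexture(*Grass);
        }
    }
}

Raising the threshold gives fewer, smaller patches; lowering the frequency makes the patches larger and smoother.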

Related

How to create a Minecraft chunk in OpenGL?

I'm learning OpenGL and have tried to make a voxel game like Minecraft. In the beginning everything was fine: I created a CubeRenderer class that renders a cube at a given position. The picture below shows what I have done.
https://imgur.com/yqn783x
Then I ran into a serious problem when I tried to create a large terrain: performance dropped badly, to around 15 FPS I think.
Next, I learned that Minecraft-style chunks and face culling can fix this by dividing the world map into small pieces (chunks) and rendering only the visible faces of each cube. So how do I create a chunk the right way, and how is face culling applied within a chunk?
So far, this is what I have tried:
I read about chunks in Minecraft at https://minecraft.gamepedia.com/Chunk
I created a demo chunk with the code below (it is not the complete code, because I later removed it).
I created a CubeData type that contains the cube's position and cube type.
Then I call the GenerateTerrain function to build simple chunk data (16x16x16, where CHUNK_SIZE is 16) like this:
for (int x = 0; x < CHUNK_SIZE; x++) {
    for (int y = 0; y < CHUNK_SIZE; y++) {
        for (int z = 0; z < CHUNK_SIZE; z++) {
            CubeType cubeType = { GRASS_BLOCK };
            Location cubeLocation = { x, y, z };
            CubeData cubeData = { cubeLocation, cubeType };
            this->Cubes[x][y][z] = cubeData;
        }
    }
}
After that, I had a boolean array called "mask" whose entries are 0 (not visible) or 1 (visible), one per cube face, matching the cube data. Then I call the Render function of the Chunk class to render the chunk. The code below is roughly what I had (it is not the complete code, because I later removed it and replaced it with new code):
for (int x = 0; x < CHUNK_SIZE; x++) {
    for (int y = 0; y < CHUNK_SIZE; y++) {
        for (int z = 0; z < CHUNK_SIZE; z++) {
            for (int side = 0; side < 6; side++) {
                if (this->mask[x][y][z][side] == true)
                    cubeRenderer.Render(this->Cubes[x][y][z]);
            }
        }
    }
}
But the result was that everything was still slow (though better than before, going from 15 FPS up to maybe 25-30 FPS).
I guess it is not a GPU problem but a CPU problem, because there are too many loops in the render call.
So I kept researching, because I think my approach was wrong. There must be a right way to create a chunk, right?
Then I found a solution that puts every visible vertex into one VBO, so I only have to bind the VBO and issue a draw call once.
The code below shows what I tried:
cout << "Generating Terrain..." << endl;
for (int side = 0; side < 6; side++) {
for (int x = 0; x < CHUNK_SIZE; x++) {
for (int y = 0; y < CHUNK_SIZE; y++) {
for (int z = 0; z < CHUNK_SIZE; z++) {
if (this->isVisibleSide(x, y, z, side) == true) {
this->cubeRenderer.AddVerticleToVBO(this->getCubeSide(side), glm::vec3(x, y, z), this->getTexCoord(this->Cubes[x][y][z].cubeType, side));
}
}
}
}
}
this->cubeRenderer.GenerateVBO();
And I call render only once:
void CubeChunk::Update()
{
    this->cubeRenderer.Render(); // with the VBO data already initialized above
}
And I got this:
https://imgur.com/YqsrtPP
I think my approach was wrong.
So what should I do to create a chunk? Any suggestions?
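Since no answer is recorded here, below is a minimal sketch of the kind of neighbor test isVisibleSide could perform, which is where most of the face-culling saving comes from. The face ordering, the NEIGHBOR_OFFSET table and the isSolidAt helper are assumptions for illustration, not code from the original post.

// Offset to the neighboring cube for each of the 6 faces
// (assumed ordering: 0:+X, 1:-X, 2:+Y, 3:-Y, 4:+Z, 5:-Z).
static const int NEIGHBOR_OFFSET[6][3] = {
    {  1,  0,  0 }, { -1,  0,  0 },
    {  0,  1,  0 }, {  0, -1,  0 },
    {  0,  0,  1 }, {  0,  0, -1 }
};

// A face is visible only when the cube touching it is empty (air) or lies outside the chunk.
bool CubeChunk::isVisibleSide(int x, int y, int z, int side)
{
    int nx = x + NEIGHBOR_OFFSET[side][0];
    int ny = y + NEIGHBOR_OFFSET[side][1];
    int nz = z + NEIGHBOR_OFFSET[side][2];

    // Neighbor outside this chunk: treat it as empty so border faces get drawn.
    // (A fuller implementation would look into the adjacent chunk instead.)
    if (nx < 0 || nx >= CHUNK_SIZE ||
        ny < 0 || ny >= CHUNK_SIZE ||
        nz < 0 || nz >= CHUNK_SIZE)
        return true;

    // isSolidAt is a hypothetical helper reporting whether the cube at (nx, ny, nz) is a solid block.
    return !this->isSolidAt(nx, ny, nz);
}

With a completely filled 16x16x16 chunk this discards every interior face, so only the outer shell ends up in the VBO, which cuts the vertex count dramatically compared to uploading all six faces of every cube.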

Optical Flow - Motion histograms

I'm currently working on optical flow with OpenCV in C++. I'm using calcOpticalFlowPyrLK with a grid of points (one interest point for each 5x5-pixel square).
What is the best way to:
1) Compute the histogram of the computed values (orientation and distance) for each frame
2) Compute a histogram of the values (orientation and distance) that a given pixel takes over several frames (for instance 100)
Are OpenCV's functions suited to this task? How can I use them in a simple way in combination with calcOpticalFlowPyrLK?
I was searching for the same OpenCV tools a couple of months ago. Unfortunately, OpenCV does not include any motion-histogram implementation. Instead, what you have to do is run calcOpticalFlowPyrLK for each frame and calculate the orientation/length of each displacement. Then you have to create/fill the histograms yourself. Not as hard as it sounds, believe me :)
The OpenCV code for the first part of HOOF (Histograms of Oriented Optical Flow) can look like this:
const int rows = flow1.rows;
const int cols = flow1.cols;
for (int y = 0; y < rows; ++y)
{
    for (int x = 0; x < cols; ++x)
    {
        Vec2f flow1_at_point = flow1.at<Vec2f>(y, x);
        float u1 = flow1_at_point[0];
        float v1 = flow1_at_point[1];
        magnitudeImage += sqrt((u1 * u1) + (v1 * v1));
        orientationImage += atan2(u1, v1);
    }
}
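The histogram itself then has to be filled by hand, as said above. Below is only a sketch of that step, assuming a dense CV_32FC2 flow field like the flow1 used above (with calcOpticalFlowPyrLK you would loop over the tracked point pairs instead); the bin count and the magnitude weighting are arbitrary choices, not part of the original answer.

#include <opencv2/core.hpp>
#include <cmath>
#include <vector>

// Build an orientation histogram from a dense flow field (CV_32FC2),
// weighting each pixel's bin by its displacement magnitude (a common HOOF-style choice).
std::vector<float> flowOrientationHistogram(const cv::Mat& flow, int numBins = 8)
{
    std::vector<float> hist(numBins, 0.0f);
    for (int y = 0; y < flow.rows; ++y) {
        for (int x = 0; x < flow.cols; ++x) {
            cv::Vec2f f = flow.at<cv::Vec2f>(y, x);
            float magnitude = std::sqrt(f[0] * f[0] + f[1] * f[1]);
            float angle = std::atan2(f[1], f[0]);               // in (-pi, pi]
            int bin = (int)((angle + CV_PI) / (2.0 * CV_PI) * numBins);
            if (bin == numBins) bin = numBins - 1;              // guard the upper edge
            hist[bin] += magnitude;
        }
    }
    return hist;
}

For question 2, the same binning can be applied per pixel to the displacements collected over the last N frames (for instance 100) instead of over a single frame.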

Getting the position of all pixels that have a value bigger than X in OpenCV

I'm looking for an efficient way to get the position of all pixels that have a value bigger than, let's say, (100,100,100). I know I could use minMaxLoc to get the positions of the max and the min, but I want all of the matching pixels, and I don't want to use a while loop.
thanks in advance
You could use pseudocode like the following:
// process the loop in a multithreaded way with OpenMP
#pragma omp parallel for
for (int x = 0; x < x_resolution; ++x) {
    for (int y = 0; y < y_resolution; ++y) {
        if (value of the pixel at (x, y) is bigger than (100,100,100)) {
            do_what_you_want;
        }
    }
}
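If the explicit loop is what you want to avoid, here is a sketch using built-in OpenCV calls instead; it assumes an 8-bit, 3-channel image and that "bigger than (100,100,100)" means every channel is above 100 (the function name is made up for the example).

#include <opencv2/core.hpp>
#include <vector>

// Return the positions of all pixels whose three channels are all above 100.
// 'img' is assumed to be a CV_8UC3 image.
std::vector<cv::Point> pixelsAbove100(const cv::Mat& img)
{
    cv::Mat mask;
    // inRange produces a binary mask: 255 where every channel lies in [101, 255], 0 elsewhere.
    cv::inRange(img, cv::Scalar(101, 101, 101), cv::Scalar(255, 255, 255), mask);

    std::vector<cv::Point> positions;
    cv::findNonZero(mask, positions);   // gathers the (x, y) of every non-zero mask pixel
    return positions;
}

Both inRange and findNonZero still scan the whole image internally, but they keep the loop out of your own code and are already optimized.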

XNA 4.0: preparing for multiple collision detection with a list

I have a List of objects, all of which have a bounding rectangle that is kept up to date... How can I iterate over them efficiently? I thought that checking it like this is fine, but any ideas?
for (int i = 0; i < birds.Count; i++)
{
    for (int j = 0; j < birds.Count; j++)
    {
        if (j > i)
        {
            if (birds[j].boundingRectangle.Intersects(birds[i].boundingRectangle))
            {
                birds[i].tintColor = Color.Yellow;
                birds[j].tintColor = Color.Yellow;
            }
            else
            {
                birds[i].tintColor = Color.White;
                birds[j].tintColor = Color.White;
            }
        }
    }
}
I can't see why it would fail to detect the collision; the code seems to be OK. You should output some debug text showing the values of the bounding rectangles, to see whether they're properly set, whether you are executing that code at all, and whether your "birds" list survives the scope in which this code is executed (you may be modifying a copy instead of the actual list).
As for improvements, you can do:
for (int j = i+1; j < birds.Count; j++)
and then you can remove the if (j > i) (j will always be > i).
Another thing I would recommend is not declaring int j in the for statement. It's better to declare it outside than to instantiate it on every iteration of i, so:
int i, j;
for (i = 0; i < birds.Count; i++)
    for (j = i + 1; j < birds.Count; j++)
I don't think there is much more room for improvement there without being able to use pointers. Your method is fine for 2D graphics anyway, unless you're checking for hundreds of objects.
PS: I believe your question could fit in the Game Development - SE site (unless you're using XNA and bounding boxes for something else :P)
Your problem is that when comparing rectangles you set the birds back to white even if they were already set to yellow by an earlier hit: a bird may be set to yellow by one collision and then reset to white when a later test against another bird fails.
How about, at the start of each frame (before collision detection), setting all the birds to white, and then setting them to yellow only when you get a collision (leaving them alone if there's no collision)?

Correct flip/mirror of pixels of an image?

http://tinypic.com/r/fwubzc/5
That shows what a flip should be and what a mirror should be.
Code for both types of mirrors:
void mirrorLeftRight()
{
    for (int x = 0; x < width/2; x++) {
        for (int y = 0; y < height; y++) {
            int temp = pixelData[x][y];
            pixelData[x][y] = pixelData[width-x][y];
            pixelData[width-x][y] = temp;
        }
    }
}

void mirrorUpDown()
{
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height/2; y++) {
            int temp = pixelData[x][y];
            pixelData[x][y] = pixelData[x][height-y];
            pixelData[x][height-y] = temp;
        }
    }
}
Does this seem right for mirrors?
And for a flip, is it just a matter of using the full width and height without dividing by 2?
You need to use width-1-x instead of width-x, and height-1-y instead of height-y. Otherwise for x==0 you'll try to index [width], which is outside the array.
It shouldn't work, since you are swapping pixels when you only need to overwrite the right part of the image with the left part. The same applies to mirrorUpDown.
If you swap the pixels you obtain a flip; if you overwrite them you obtain a mirror.
mirrorLeftRight: take pixels from the left half and use them to overwrite the right half
mirrorUpDown: take pixels from the upper half and use them to overwrite the lower half
flip: in this case you don't overwrite but swap the pixels (which half you start from doesn't matter here)
The code above looks more like the correct way to flip, not mirror.
For a mirror, I would guess you don't swap the pixels but rather copy from one side to the other.
So for a mirror I would guess you need to change
int temp = pixelData[x][y];
pixelData[x][y] = pixelData[width-x][y];
pixelData[width-x][y] = temp;
to something like this only:
pixelData[x][y] = pixelData[width-x][y];
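Putting the two answers together, here is a sketch of what the corrected routines might look like: copy for the mirror, swap for the flip, and use width-1-x to stay inside the array. The pixelData layout and the choice of reflecting the left half onto the right are assumptions based on the code above, and flipLeftRight is a made-up name.

// Mirror: overwrite the right half with a reflected copy of the left half.
void mirrorLeftRight()
{
    for (int x = 0; x < width / 2; x++) {
        for (int y = 0; y < height; y++) {
            pixelData[width - 1 - x][y] = pixelData[x][y];
        }
    }
}

// Flip: swap the two halves so the whole image is reversed left to right.
void flipLeftRight()
{
    for (int x = 0; x < width / 2; x++) {
        for (int y = 0; y < height; y++) {
            int temp = pixelData[x][y];
            pixelData[x][y] = pixelData[width - 1 - x][y];
            pixelData[width - 1 - x][y] = temp;
        }
    }
}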