I've generated a cubic world using FastNoiseLite, but I don't know how to differentiate the top-level blocks as grass and the ones below as dirt when using 3D noise.
TArray<float> CalculateNoise(const FVector& ChunkPosition)
{
    Densities.Reset();
    // ChunkSize is 32
    for (int z = 0; z < ChunkSize; z++)
    {
        for (int y = 0; y < ChunkSize; y++)
        {
            for (int x = 0; x < ChunkSize; x++)
            {
                const float Noise = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z));
                Densities.Add(Noise - ChunkPosition.Z);
            }
        }
    }
    return Densities;
}
void AddCubeMaterial(const FVector& ChunkPosition)
{
    const int32 DensityIndex = GetIndex(ChunkPosition);
    const float Density = Densities[DensityIndex];
    if (Density < 1)
    {
        // Add Grass block
    }
    // Add dirt block
}
float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}
This is the result when using 3D noise:
[image: 3D noise result]
But if I switch to 2D noise it works perfectly fine:
[image: 2D noise result]
This answer applies to Perlin-like noise.
Your integer chunk coordinates are dis-contiguous in noise space: consecutive block positions land on widely separated noise samples.
Position needs to be scaled by 1/Height so the block is sampled as one contiguous region of noise space; the result is then scaled back up by Height.
If you were happy with the XY axes (2D), you could limit the scaling to the Z axis:
FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
This adjustment provides a noise-continuous Z block location with respect to Position (X, Y).
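If you want contiguous sampling on all three axes instead, here is a minimal sketch of GetNoise with the full 1/Height scaling (assuming GetNoise is changed to return float, as corrected above; note that scaling X and Y also changes the horizontal feature size relative to the 2D version):

float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        // Scale every axis by 1/Height so neighboring blocks sample
        // neighboring points in noise space, then scale back up by Height.
        return FastNoiseLiteObj->GetNoise(Position.X / Height, Position.Y / Height, Position.Z / Height) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}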
Edit in response to comments
Contiguous:
The noise algorithm guarantees continuous output in all dimensions.
By sampling every 32 pixels (dis-contiguous sampling), that continuity is broken, perhaps on purpose(?), and further distorted by the Density offset.
To guarantee a top-level grass layer:
Densities.Add(Noise + ((ChunkPosition.Z > Threshold) ? 1.f : 0.f));
Your code's - ChunkPosition.Z term made the grass thicker as it went down. Add it back if you wish.
To add random overhangs/underhangs, reduce the Density threshold randomly:
if (Density < ((rnd() < 0.125) ? 0.5f : 1.f))
I leave the definition of rnd() to your preferred random distribution.
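For example, a minimal sketch in an Unreal Engine context (an assumption on my part; any uniform generator in [0, 1] works):

// Uniform float in [0, 1]; FMath::FRand() is Unreal's built-in helper.
float rnd()
{
    return FMath::FRand();
}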
Almost always having overhangs requires a forward lookup of the next and previous blocks' Z values in noise space.
Precalculate the noise values for the next line into alternating arrays 2 wider than the chunk width, with the edge entries set to 0.
The algorithm is:
// declare arrays currentnoise[ChunkSize + 2] and nextnoise[ChunkSize + 2], and alpha = .2 (see text)
for (int y = 0; y < ChunkSize; y++) // note the reorder y-z-x
{
    // preload currentnoise for z = 0
    currentnoise[0] = 0;
    currentnoise[ChunkSize + 1] = 0;
    for (int x = 0; x < ChunkSize; x++)
    {
        currentnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z));
    }
    for (int z = 0; z < ChunkSize - 1; z++)
    {
        nextnoise[0] = 0;
        nextnoise[ChunkSize + 1] = 0;
        // load the next layer (z + 1)
        for (int x = 0; x < ChunkSize; x++)
        {
            nextnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z + 1));
        }
        // blend the current layer with its left/right neighbors in the next layer
        for (int x = 0; x < ChunkSize; x++)
        {
            Densities.Add(currentnoise[x + 1] * .75 + nextnoise[x + 2] * alpha + nextnoise[x] * alpha);
        }
        // move next to current in a memory-safe manner:
        // it is faster to swap pointers, but this is much safer for portability
        for (int i = 1; i < ChunkSize + 1; i++)
            currentnoise[i] = nextnoise[i];
    }
    // apply the last z (no next layer)
    for (int x = 0; x < ChunkSize; x++)
    {
        Densities.Add(currentnoise[x + 1]);
    }
}
Where alpha is approximately between .025 and .25, depending on the preferred fill amount.
The 2 innermost x for loops could be streamlined into 1 loop, but they are left separate for readability (it requires 2 preloads).
Related
I am trying to implement Laplace sharpening using C++. Here's my code so far:
img = imread("cow.png", 0);

Mat convoSharp() {
    // creating new image
    Mat res = img.clone();
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = 0.0;
        }
    }
    // variable declaration
    int filter[3][3] = { {0,1,0},{1,-4,1},{0,1,0} };
    //int filter[3][3] = { {-1,-2,-1},{0,0,0},{1,2,1} };
    int height = img.rows;
    int width = img.cols;
    int filterHeight = 3;
    int filterWidth = 3;
    int newImageHeight = height - filterHeight + 1;
    int newImageWidth = width - filterWidth + 1;
    int i, j, h, w;
    // convolution
    for (i = 0; i < newImageHeight; i++) {
        for (j = 0; j < newImageWidth; j++) {
            for (h = i; h < i + filterHeight; h++) {
                for (w = j; w < j + filterWidth; w++) {
                    res.at<uchar>(i, j) += filter[h - i][w - j] * img.at<uchar>(h, w);
                }
            }
        }
    }
    // img - laplace
    for (int y = 0; y < res.rows; y++) {
        for (int x = 0; x < res.cols; x++) {
            res.at<uchar>(y, x) = img.at<uchar>(y, x) - res.at<uchar>(y, x);
        }
    }
    return res;
}
I don't really know what went wrong. I also tried a different filter, (1,1,1),(1,-8,1),(1,1,1), and the result is more or less the same. I don't think I need to normalize the result because it is in the range 0-255. Can anyone explain what really went wrong in my code?
Problem: uchar is too small to hold the partial results of the filtering operation.
You should create a temporary variable and add all the filtered positions into it, then check whether the value of temp is in the range <0,255>; if not, you need to clamp the end result to fit <0,255>.
By executing the line below,
res.at<uchar>(i,j) += filter[h - i][w - j] * img.at<uchar>(h,w);
the partial result may be greater than 255 (the max value of uchar) or negative (your filter contains -4 or -8). temp has to be a signed integer type to handle the case when the partial result is negative.
Fix:
for (i = 0; i < newImageHeight; i++) {
    for (j = 0; j < newImageWidth; j++) {
        int temp = res.at<uchar>(i, j); // added
        for (h = i; h < i + filterHeight; h++) {
            for (w = j; w < j + filterWidth; w++) {
                temp += filter[h - i][w - j] * img.at<uchar>(h, w); // add to temp
            }
        }
        // clamp temp to <0,255>
        res.at<uchar>(i, j) = saturate_cast<uchar>(temp);
    }
}
You should also clamp the values to the <0,255> range when you do the subtraction of the images.
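For instance, a sketch of the clamped subtraction, assuming the Laplace result was kept in a hypothetical signed 16-bit Mat named lap (CV_16S) instead of being written into res:

for (int y = 0; y < res.rows; y++) {
    for (int x = 0; x < res.cols; x++) {
        // saturate_cast clamps anything outside 0-255
        res.at<uchar>(y, x) = saturate_cast<uchar>(img.at<uchar>(y, x) - lap.at<short>(y, x));
    }
}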
The problem is partially that you're overflowing your uchar, as rafix07 suggested, but that is not the full problem.
The Laplace of an image contains negative values. It has to. And you can't clamp those to 0; you need to preserve the negative values. Also, it can reach values up to 4*255 given your version of the filter. What this means is that you need to use a signed 16-bit type to store this output.
But there is a simpler and more efficient approach!
You are computing img - laplace(img). In terms of convolutions (*), this is 1 * img - laplace_kernel * img = (1 - laplace_kernel) * img. That is to say, you can combine both operations into a single convolution. The 1 kernel that doesn’t change the image is [(0,0,0),(0,1,0),(0,0,0)]. Subtract your Laplace kernel from that and you obtain [(0,-1,0),(-1,5,-1),(0,-1,0)].
So, simply compute the convolution with that kernel, and do it using int as intermediate type, which you then clamp to the uchar output range as shown by rafix07.
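Putting that together, a minimal sketch (assuming a single-channel 8-bit img as in the question; sharpen is a hypothetical helper name):

Mat sharpen(const Mat& img) {
    // (1 - laplace_kernel), i.e. identity minus the Laplace kernel
    const int kernel[3][3] = { {0,-1,0}, {-1,5,-1}, {0,-1,0} };
    Mat res = Mat::zeros(img.size(), CV_8UC1);
    for (int i = 1; i < img.rows - 1; i++) {
        for (int j = 1; j < img.cols - 1; j++) {
            int temp = 0; // int intermediate avoids uchar overflow
            for (int h = -1; h <= 1; h++) {
                for (int w = -1; w <= 1; w++) {
                    temp += kernel[h + 1][w + 1] * img.at<uchar>(i + h, j + w);
                }
            }
            res.at<uchar>(i, j) = saturate_cast<uchar>(temp); // clamp to 0-255
        }
    }
    return res;
}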
I have this code that implements Prewitt edge detection. What I need to do is implement it with only one buffer, meaning I will not create a copy of the image but edit the original image. So if I want to change a pixel with value 78, I can't write the new value, e.g. 100, until all surrounding pixels have read the value 78 (the color values of the pixels). I have tried all day to figure it out but couldn't. If someone could write me some kind of pseudocode I would be very grateful.
void filter_serial_prewitt(int *inBuffer, int *outBuffer, int width, int height) {
    for (int i = 1; i < width - 1; i++) {
        for (int j = 1; j < height - 1; j++) {
            int Fx = 0;
            int Fy = 0;
            int F = 0;
            for (int m = -1; m <= 1; m++) {
                for (int n = -1; n <= 1; n++) {
                    Fx += inBuffer[(j + n) * width + (i + m)] * n;
                    Fy += inBuffer[(j + n) * width + (i + m)] * m;
                }
            }
            F = abs(Fx) + abs(Fy);
            if (F < THRESHOLD) {
                outBuffer[j * width + i] = 255;
            } else {
                outBuffer[j * width + i] = 0;
            }
        }
    }
}
One thing to know about a Prewitt operator is that it is separable. See the Wikipedia article for details.
To calculate a single output row, you need to do the following (pseudocode):
int* buffer = malloc(sizeof(int) * width);
for (int i = 0; i < width; i++)
{
    // Do the vertical pass of the convolution of the first 3 rows into
    // the buffer.
    buffer[i] = vertical_convolve(inBuffer[i], vertical_kernel);
}
// Next, do the horizontal convolution of the first row. We need to
// keep the previous value in a temp buffer while we work.
int temp0 = horizontal_convolve(buffer[0], horizontal_kernel);
for (int i = 1; i < width; i++)
{
    int temp1 = horizontal_convolve(buffer[i], horizontal_kernel);
    inBuffer[i - 1] = temp0;
    temp0 = temp1;
}
That requires a buffer that is 1 pixel tall and the width of the image.
To work on the whole image, you need to keep 2 of the above buffers around; after you calculate a pixel on the third line, you can replace the first pixel of the first line of the image with the first pixel of the first buffer, and then put the newly calculated value into the buffer.
So in this scenario you won't keep around an entire second image, but you will need two 1-pixel-tall buffers that are as wide as the image.
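As a concrete illustration, here is a minimal sketch of that row-buffer scheme, applied to the original (non-separable) 3x3 loop for simplicity; it assumes THRESHOLD is defined as in the question and height >= 3, and writes each output row back only once it can no longer be read as input:

#include <cstdlib>
#include <cstring>
#include <vector>

void filter_prewitt_inplace(int *buffer, int width, int height) {
    // Two 1-pixel-tall row buffers, as wide as the image.
    std::vector<int> rows[2] = { std::vector<int>(width), std::vector<int>(width) };
    for (int j = 1; j < height - 1; j++) {
        std::vector<int> &out = rows[j % 2];
        out[0] = buffer[j * width];                     // keep border pixels unchanged
        out[width - 1] = buffer[j * width + width - 1];
        for (int i = 1; i < width - 1; i++) {
            int Fx = 0, Fy = 0;
            for (int m = -1; m <= 1; m++) {
                for (int n = -1; n <= 1; n++) {
                    Fx += buffer[(j + n) * width + (i + m)] * n;
                    Fy += buffer[(j + n) * width + (i + m)] * m;
                }
            }
            out[i] = (abs(Fx) + abs(Fy) < THRESHOLD) ? 255 : 0;
        }
        // Row j-1 was last read while computing row j, so it is now safe to overwrite.
        if (j >= 2) {
            std::memcpy(buffer + (j - 1) * width, rows[(j - 1) % 2].data(), width * sizeof(int));
        }
    }
    // Flush the last computed row (height - 2).
    std::memcpy(buffer + (height - 2) * width, rows[(height - 2) % 2].data(), width * sizeof(int));
}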
I'm trying to clone Space Invaders, specifically the collision with the green barriers.
https://www.youtube.com/watch?v=bLAhmnCZym4
Right now I have access to the pixels of the green barrier.
I would like to draw a solid black circle around the collision point of the bullet. Right now I'm using the following code, but it spreads random pixels rather than a solid black circle centered on the bullet's hit point.
The result is shown here:
http://i.stack.imgur.com/mpgkM.png
int radius = 50;
for (int y = -radius; y <= radius; y++)
{
    for (int x = -radius; x <= radius; x++)
    {
        if (x*x + y*y <= radius*radius)
        {
            int j = x + normX;
            int i = y + normY;
            uint8 pixelOffset = j + i;
            ptr += pixelOffset;
            *ptr = 0xff000000;
        }
    }
}
Pixels are normally stored in raster order, so you need to change
uint8 pixelOffset = j + i;
to
int pixelOffset = j + i*pitch;
where pitch is the width of your image.
In addition, each time you write a pixel you are moving ptr to a new location, so you get a diagonal line.
Replace
ptr += pixelOffset;
*ptr = 0xff000000;
with
ptr[pixelOffset] = 0xff000000;
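Putting both fixes together, a minimal sketch of the corrected fill loop (assuming ptr points at pixel (0, 0) of a 32-bit pixel buffer and pitch is the image width in pixels; a real version should also clip j and i to the image bounds):

int radius = 50;
for (int y = -radius; y <= radius; y++)
{
    for (int x = -radius; x <= radius; x++)
    {
        if (x * x + y * y <= radius * radius)
        {
            int j = x + normX;
            int i = y + normY;
            // index from the fixed base pointer; never advance ptr itself
            ptr[j + i * pitch] = 0xff000000;
        }
    }
}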
I have the vertices of a convex polygon, but my area calculation in the image does not come out accurately with the standard formula.
(For simplicity) take a 3x3 square whose vertices are (1,1) (1,3) (3,3) (3,1).
Using the polygon area calculation method depicted here
and dividing the summation by 2, we get the area.
So for the 3x3 data above, we get an area of 4 instead of 9.
This is happening because the vertices are not points but pixels.
This is the corresponding code. The coordinates are cyclic.
int X[] = { 1, 1, 3, 3, 1 };
int Y[] = { 1, 3, 3, 1, 1 };

double Sum1 = 0;
double Sum2 = 0;
int numElements = 5;

for (int k = 0; k < numElements - 1; k++)
{
    Sum1 += X[k] * Y[k + 1];
    Sum2 += Y[k] * X[k + 1];
}
double area = std::abs((double)(Sum1 - Sum2)) / 2;
For a square we can add 1 to the width and height and get the correct area, but what about irregular polygons in the image?
I hope the question makes sense.
If you don't want to work with pixel corners as vertices, consider the following method (it works for simple figures: all convex ones and some concave ones):
Add an additional fake pixel at the right side of every right-border pixel, at the bottom side of every bottom-border pixel, and at the bottom-right of the bottom-right corner pixel. (In the original illustration, the gray pixels are the initial ones and the light blue ones are the fake ones.)
The area can then be calculated in the following steps:
1) fetch the pixels between the vertices
2) sort the pixels by x coordinate (or y coordinate)
3) for each particular x (or y) value, take the difference between the min and max y (or x) coordinates and add one to the difference
4) sum up the total differences
NOTE: the area might vary (if there are slanted edges in the polygon) depending on the line-drawing method chosen.
int compare(const void *a, const void *b)
{
    return (((Point*)a)->x() - ((Point*)b)->x());
}

double CalculateConvexHullArea(vector<int> ConvexHullX, vector<int> ConvexHullY)
{
    std::vector<Point> FillPoints;
    for (int k = 0; k < ConvexHullX.size() - 1; k++)
    {
        drawLine(ConvexHullX[k], ConvexHullX[k + 1], ConvexHullY[k], ConvexHullY[k + 1], FillPoints);
    }

    // sorting coordinates
    qsort(FillPoints.data(), FillPoints.size(), sizeof(Point), compare);

    double area = 0;
    int startY = FillPoints[0].y(), endY = FillPoints[0].y();
    int currX = FillPoints[0].x();

    // traversing x and summing up the diff of min and max Y
    for (int cnt = 0; cnt < FillPoints.size(); cnt++)
    {
        if (FillPoints[cnt].x() == currX)
        {
            startY = startY > FillPoints[cnt].y() ? FillPoints[cnt].y() : startY;
            endY = endY < FillPoints[cnt].y() ? FillPoints[cnt].y() : endY;
        }
        else
        {
            int diffY = endY - startY + 1;
            area += diffY;
            currX = FillPoints[cnt].x();
            startY = endY = FillPoints[cnt].y();
        }
    }
    return area + endY - startY + 1;
}
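Since the result depends on the line-drawing method chosen, here is one possible drawLine, a minimal integer Bresenham sketch matching the (x0, x1, y0, y1, out) argument order used above and assuming a Point type constructible from (x, y):

#include <cstdlib>
#include <vector>

void drawLine(int x0, int x1, int y0, int y1, std::vector<Point>& FillPoints)
{
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    while (true)
    {
        FillPoints.push_back(Point(x0, y0)); // record every pixel on the edge
        if (x0 == x1 && y0 == y1)
            break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; } // step in x
        if (e2 <= dx) { err += dx; y0 += sy; } // step in y
    }
}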
I would like to know how I can create a Texture3D from a Texture2D.
I've found some good examples: Unity 4 - 3D Textures (Volumes), Unity - 3D Textures, and Color Correction Lookup Texture.
int dim = tex2D.height;
Color[] c2D = tex2D.GetPixels();
Color[] c3D = new Color[c2D.Length];
for (int x = 0; x < dim; ++x)
{
    for (int y = 0; y < dim; ++y)
    {
        for (int z = 0; z < dim; ++z)
        {
            int y_ = dim - y - 1;
            c3D[x + (y * dim) + (z * dim * dim)] = c2D[z * dim + x + y_ * dim * dim];
        }
    }
}
But this only works when you have
Texture2D.height = Mathf.FloorToInt(Mathf.Sqrt(Texture2D.width))
or if
Depth = Width = Height
How can I extract the values when the depth is not equal to the width or the height?
It seems simple but I am missing something...
Thank you very much.
You can split the texture as follows:
// Iterate the result
for (int z = 0; z < depth; ++z)
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            c3D[x + y * width + z * width * height] = c2D[x + y * width * depth + z * width];
You can get to this index formula as follows:
Advancing by 1 in the x-direction results in an increment by 1 (just the next pixel).
Advancing by 1 in the y-direction results in an increment by depth * width (skip one atlas row, i.e. depth slices' worth of width).
Advancing by 1 in the z-direction results in an increment by width (move right by one slice width within the atlas row).
Or if you prefer the other direction:
// Iterate the original image
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width * depth; ++x)
        c3D[(x % width) + y * width + (x / width) * width * height] = c2D[x + y * width * depth];
Unfortunately, there's not much documentation about Texture3D. I've tried simply using c2D as the texture's data, but it doesn't give an appropriate result.
For the moment I've tried this, which gives a better result, but I don't know if it's correct:
for (int x = 0; x < width; ++x)
{
    for (int y = 0; y < height; ++y)
    {
        for (int z = 0; z < depth; ++z)
        {
            int y_ = height - y - 1;
            c3D[x + (y * height) + (z * height * depth)] = c2D[z * height + x + y_ * height * depth];
        }
    }
}
From your picture, it looks like you have the planes of the 3D texture you want side by side? So you want a 3D texture with dimensions (width, height, depth) from a 2D texture with dimensions (width * depth, height)? You should be able to do this with something like:
for (int z = 0; z < depth; ++z)
{
    for (int y = 0; y < height; ++y)
    {
        memcpy(c3D + (z * height + y) * width, c2D + (y * depth + z) * width, width * sizeof(Color));
    }
}