I'm trying to clone Space Invaders and I'm working on bullet collision with the green barriers.
https://www.youtube.com/watch?v=bLAhmnCZym4
Right now I have access to the pixels of the green barrier.
I would like to draw a solid black circle centered on the point where the bullet hits. I'm using the following code, but it scatters random pixels instead of drawing a solid black circle centered on the hit point.
The result is shown here:
http://i.stack.imgur.com/mpgkM.png
int radius = 50;
for (int y = -radius; y <= radius; y++)
{
for (int x = -radius; x <= radius; x++)
{
if (x*x + y*y <= radius*radius)
{
int j = x + normX;
int i = y + normY;
uint8 pixelOffset = j + i;
ptr += pixelOffset;
*ptr = 0xff000000;
}
}
}
Pixels are normally stored in raster order, so you need to change
uint8 pixelOffset = j + i;
to
int pixelOffset = j + i*pitch;
where pitch is the number of pixels per row of your image (its width, if rows are tightly packed).
In addition, each time you write a pixel you move ptr to a new location, which is why you get a diagonal line.
Replace
ptr += pixelOffset;
*ptr = 0xff000000;
with
ptr[pixelOffset] = 0xff000000;
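Putting both fixes together, a minimal sketch of the corrected loop (assuming a 32-bit ARGB buffer, its pitch in pixels, and the hit point (normX, normY) as in the question; the bounds check is an addition, to clip circles near the barrier's edge):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Fill a solid circle of `radius` centered at (normX, normY) in a 32-bit
// pixel buffer laid out in raster order with `pitch` pixels per row.
void fillCircle(uint32_t* ptr, int pitch, int height,
                int normX, int normY, int radius)
{
    for (int y = -radius; y <= radius; y++) {
        for (int x = -radius; x <= radius; x++) {
            if (x * x + y * y <= radius * radius) {
                int j = x + normX;
                int i = y + normY;
                if (j < 0 || j >= pitch || i < 0 || i >= height)
                    continue; // clip to the buffer
                ptr[j + i * pitch] = 0xff000000; // solid black; ptr never moves
            }
        }
    }
}
```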
I've generated a cubic world using FastNoiseLite, but I don't know how to mark top-level blocks as grass and the blocks below them as dirt when using 3D noise.
TArray<float> CalculateNoise(const FVector& ChunkPosition)
{
Densities.Reset();
// ChunkSize is 32
for (int z = 0; z < ChunkSize; z++)
{
for (int y = 0; y < ChunkSize; y++)
{
for (int x = 0; x < ChunkSize; x++)
{
const float Noise = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z));
Densities.Add(Noise - ChunkPosition.Z);
}
}
}
return Densities;
}
void AddCubeMaterial(const FVector& ChunkPosition)
{
const int32 DensityIndex = GetIndex(ChunkPosition);
const float Density = Densities[DensityIndex];
if (Density < 1)
{
// Add Grass block
}
// Add dirt block
}
float GetNoise(const FVector& Position) const
{
const float Height = 280.f;
if (bIs3dNoise)
{
return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z) * Height;
}
return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}
This is the result when using 3D noise.
3D Noise result
But if I switch to 2D noise it works perfectly fine.
2D Noise result
This answer applies to Perlin-like noise.
Your integer chunk coordinates are discontiguous in noise space. Position needs to be scaled by 1/Height so that the chunk samples a contiguous block of noise; the result is then scaled back up by Height.
If you were happy with the XY axes (the 2D case), you can limit the scaling to the Z axis:
FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
This adjustment makes the Z coordinate continuous in noise space with respect to Position(X, Y).
Edit in response to comments
Contiguous:
The noise algorithm guarantees continuous output in all dimensions.
By sampling only every 32 units (discontiguous sampling), that continuity is broken, perhaps deliberately, and further modified by the density offset.
To guarantee a top level grass layer:
Densities.Add(Noise + ((ChunkPosition.Z > Threshold) ? 1 : 0));
Your original - ChunkPosition.Z term made the grass thicker as it went down; add it back if you wish.
To add random overhangs/underhangs reduce the Density threshold randomly:
if (Density < ((rnd() < 0.125) ? 0.5 : 1))
I leave the definition of rnd() to your preferred random distribution.
Almost always having overhangs requires looking ahead at the next and previous layers' noise values.
Precalculate the noise values for the next layer into alternating arrays two entries wider than the chunk width, with the edge entries set to 0.
The algorithm is:
// declare arrays currentnoise[ChunkSize + 2] and nextnoise[ChunkSize + 2], and alpha = .2 (see text)
for (int y = 0; y < ChunkSize; y++) // note the reorder y-z-x
{
// pre load currentnoise for z=0
currentnoise[0] = 0;
currentnoise[ChunkSize+1] = 0;
for (int x = 0; x < ChunkSize; x++)
{
currentnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z));
}
for (int z = 0; z < ChunkSize - 1; z++)
{
nextnoise[0] = 0;
nextnoise[ChunkSize+1] = 0;
// load next
for (int x = 0; x < ChunkSize; x++)
{
nextnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z+1));
}
// apply current with next
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1] * .75 + nextnoise[x+2] * alpha + nextnoise[x] * alpha);
}
// move next to current in a memory-safe manner:
// it is faster to swap pointers, but this is much safer for portability
for (int i = 1; i < ChunkSize + 1; i++)
currentnoise[i]=nextnoise[i];
}
// apply last z(no next)
for (int x = 0; x < ChunkSize; x++)
{
Densities.Add(currentnoise[x + 1]);
}
}
Where alpha is approximately between .025 and .25 depending on preferred fill amounts.
The two innermost x loops could be streamlined into one, but are left separate for readability (it requires two preloads).
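A compilable sketch of the algorithm above, assuming a stub GetNoise in place of FastNoiseLite and dropping ChunkPosition for brevity; the loop bounds are arranged so that every layer of the chunk is emitted exactly once:

```cpp
#include <cmath>
#include <vector>

const int ChunkSize = 8;
const float alpha = 0.2f; // between .025 and .25, see text

// Stub standing in for FastNoiseLite; any smooth function in [-1, 1] works.
float GetNoise(float x, float y, float z)
{
    return std::sin(x * 0.1f) * std::cos(y * 0.1f) * std::sin(z * 0.1f);
}

std::vector<float> CalculateDensities()
{
    std::vector<float> Densities;
    float currentnoise[ChunkSize + 2] = {0};
    float nextnoise[ChunkSize + 2] = {0};
    for (int y = 0; y < ChunkSize; y++) { // note the reorder y-z-x
        // preload currentnoise for z = 0; edges stay 0
        currentnoise[0] = currentnoise[ChunkSize + 1] = 0;
        for (int x = 0; x < ChunkSize; x++)
            currentnoise[x + 1] = GetNoise(x, y, 0);
        for (int z = 0; z < ChunkSize - 1; z++) {
            nextnoise[0] = nextnoise[ChunkSize + 1] = 0;
            for (int x = 0; x < ChunkSize; x++) // load the next layer
                nextnoise[x + 1] = GetNoise(x, y, z + 1);
            for (int x = 0; x < ChunkSize; x++) // blend current with next
                Densities.push_back(currentnoise[x + 1] * 0.75f
                                    + nextnoise[x + 2] * alpha
                                    + nextnoise[x] * alpha);
            for (int i = 1; i <= ChunkSize; i++) // move next to current
                currentnoise[i] = nextnoise[i];
        }
        for (int x = 0; x < ChunkSize; x++) // last layer has no next
            Densities.push_back(currentnoise[x + 1]);
    }
    return Densities;
}
```

Each y slice emits ChunkSize layers of ChunkSize densities, so the output matches the ChunkSize^3 voxel count of the chunk.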
I have a circular brush with a diameter of 200 px and a hardness of 0 (the brush is a circular gradient). The spacing between each stamp is 25% of the brush diameter. However, when I compare the stroke my program draws and the stroke Photoshop draws, with all settings equal...
It is clear that Photoshop's is much smoother! I can't reduce the spacing because that causes the edges to become harder.
How can I make my stroke look like Photoshop's?
Here is the relevant code from my program...
//defining a circle
Mat alphaBrush(2*outerRadius,2*outerRadius,CV_32FC1);
float floatInnerRadius = outerRadius * hardness;
for(int i = 0; i < alphaBrush.rows; i++ ){
for(int j=0; j<alphaBrush.cols; j++ ){
int x = outerRadius - i;
int y = outerRadius - j;
float radius=hypot((float) x, (float) y );
auto& pixel = alphaBrush.at<float>(i,j);
if(radius>outerRadius){ pixel=0.0; continue;} // transparent
if(radius<floatInnerRadius){ pixel=1.0; continue;} // solid
pixel=1-((radius-floatInnerRadius)/(outerRadius-floatInnerRadius)); // partial
}
}
/*
(...irrelevant stuff)
*/
//drawing the brush onto the canvas
for (int j = 0; j < inMatROI.rows; j++) {
Vec3b *thisBgRow = inMatROI.ptr<Vec3b>(j);
float *thisAlphaRow = brushROI.ptr<float>(j);
for (int i = 0; i < inMatROI.cols; i++) {
for (int c = 0; c < 3; c++) {
thisBgRow[i][c] = saturate_cast<uchar>((brightness * thisAlphaRow[i]) + ((1.0 - thisAlphaRow[i]) * thisBgRow[i][c]));
}
}
}
I have also tried resultValue = max(backgroundValue, brushValue), but the intersection between the two circles is pretty obvious.
This is the approach: draw a solid thin line, then compute the distance of each pixel to that line.
As you can see there are some artifacts, probably mostly because of only approximated distance values from cv::distanceTransform. If you compute the distances precisely (and maybe in double precision) you should get very smooth results.
int main()
{
cv::Mat canvas = cv::Mat(768, 768, CV_8UC3, cv::Scalar::all(255));
cv::Mat canvasMask = cv::Mat::zeros(canvas.size(), CV_8UC1);
// make sure the stroke always has >= 2 points, otherwise cv::line will not work
std::vector<cv::Point> strokeSampling;
strokeSampling.push_back(cv::Point(250, 100));
strokeSampling.push_back(cv::Point(250, 200));
strokeSampling.push_back(cv::Point(600, 300));
strokeSampling.push_back(cv::Point(600, 400));
strokeSampling.push_back(cv::Point(250, 500));
strokeSampling.push_back(cv::Point(250, 650));
for (int i = 0; i < strokeSampling.size() - 1; ++i)
cv::line(canvasMask, strokeSampling[i], strokeSampling[i + 1], cv::Scalar::all(255));
// computing a distance map:
cv::Mat tmp1 = 255 - canvasMask;
cv::Mat distMap;
cv::distanceTransform(tmp1, distMap, CV_DIST_L2, CV_DIST_MASK_PRECISE);
float outerRadius = 50;
float innerRadius = 10;
cv::Scalar strokeColor = cv::Scalar::all(0);
for (int y = 0; y < distMap.rows; ++y)
for (int x = 0; x < distMap.cols; ++x)
{
float percentage = 0.0f;
float radius = distMap.at<float>(y, x);
if (radius>outerRadius){ percentage = 0.0; } // transparent
else
if (radius<innerRadius){ percentage = 1.0; } // solid
else
{
percentage = 1 - ((radius - innerRadius) / (outerRadius - innerRadius)); // partial
}
if (percentage > 0)
{
// here you could use the canvasMask if you like to, instead of directly drawing on the canvas
cv::Vec3b canvasColor = canvas.at<cv::Vec3b>(y, x);
cv::Vec3b cColor = cv::Vec3b(strokeColor[0], strokeColor[1], strokeColor[2]);
canvas.at<cv::Vec3b>(y, x) = percentage*cColor + (1 - percentage) * canvasColor;
}
}
cv::imshow("out", canvas);
cv::imwrite("C:/StackOverflow/Output/stroke.png", canvas);
cv::waitKey(0);
}
I am currently working on a framework which implements a path tracer. I am having trouble understanding how the final image is written. The result is correct and the image looks nice (low number of samples):
but I have to understand how the code works, since something about the indices seems weird to me. This is the code in short:
struct Vec {
double x, y, z; // position, also color (r,g,b)
Vec(double x_ = 0, double y_ = 0, double z_ = 0){ x = x_; y = y_; z = z_; }
};
Vec *c = new Vec[width * height];
for (int y = 0; y<height; y++){// Loop over image rows
for (unsigned short x = 0; x<width; x++) { // Loop cols
Vec r = calculatePixelColor(x,y);
int i = (height - y - 1) * width + x;
c[i] = c[i] + r;
}
}
FILE *ff = fopen("image.ppm", "w"); // Write image to PPM file.
fprintf(ff, "P3\n%d %d\n%d\n", width, height, 255);
for (int y = 0; y < height; y++) for (int x = 0; x < width; x++){
Vec pixel = c[x + y * width];
int red = CLAMP((int)(sqrtf(pixel.x) * 255.0f), 0, 255);
int green = CLAMP((int)(sqrtf(pixel.y) * 255.0f), 0, 255);
int blue = CLAMP((int)(sqrtf(pixel.z) * 255.0f), 0, 255);
fprintf(ff, "%d %d %d ", (red), (green), (blue));
}
fclose(ff);
Now, we have a pointer to Vec named c which contains all the information for the pixels. This info is stored according to the index i = (height - y - 1) * width + x. It means that Vec* c describes the image starting from the last row, so the first Vec pointed to by c is the pixel at the bottom-left corner of the image (if I am not wrong). Therefore, if I am right, this leads me to ask: how does fprintf work? According to the documentation it just writes the stream from top to bottom, so in theory the image should be flipped. Where is the trick?
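The index arithmetic can be checked directly with a stand-in for calculatePixelColor that just records its y (a sketch, assuming the renderer's y axis points up, as in smallpt-style tracers):

```cpp
#include <vector>

// Store a value per pixel the same way the render loop does:
// renderer row y goes to buffer row (height - y - 1).
std::vector<int> storeFlipped(int width, int height)
{
    std::vector<int> c(width * height);
    for (int y = 0; y < height; y++)         // renderer rows, y = 0 at the bottom
        for (int x = 0; x < width; x++)
            c[(height - y - 1) * width + x] = y; // record which row produced it
    return c;
}
```

Reading c sequentially, as the fprintf loop does, yields renderer row height-1 first; since PPM rows are written top-down, the bottom-up renderer rows come out upright rather than flipped. The flip happens at store time, not at write time.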
If I have the vertices of a convex polygon in an image, the standard formula does not give me an accurate area.
(For simplicity) say I have a 3x3 square and the vertices are (1,1) (1,3) (3,3) (3,1).
By the polygon area calculation method depicted here,
and dividing the summation by 2, we get the area.
So for the 3x3 data above, we get an area of 4 instead of 9.
This happens because the vertices are not points but pixels.
This is the corresponding code; the coordinates are cyclic (the first vertex is repeated at the end).
int X[] = { 1, 1, 3, 3, 1};
int Y[] = { 1, 3, 3, 1, 1};
double Sum1 = 0;
double Sum2 = 0;
int numElements = 5;
for (int k = 0; k < numElements-1; k++)
{
Sum1 += X[k] * Y[k + 1];
Sum2 += Y[k] * X[k + 1];
}
double area = std::abs((double)(Sum1 - Sum2))/2;
For a square, we can add 1 to the width and height and get the correct area. But what about irregular polygons in the image?
I hope the question makes sense.
If you don't want to work with pixel corners as vertices, consider the following method (it works for simple figures: all convex ones and some concave ones):
Add an additional fake pixel at the right side of every right-border pixel, at the bottom side of every bottom-border pixel, and at the bottom-right of the bottom-right corner pixel. In the picture, gray pixels are the initial ones and light blue ones are fake.
The area can then be calculated in the following steps:
1) fetching the pixels between the vertices
2) sorting the pixels by x coordinates (or y coordinates)
3) taking the difference between min and max y coordinates (or x) for a particular x (or y) value and adding one to the difference
4) summing up the total difference
NOTE: the area might vary (if there are slanted edges in the polygon) depending on the line drawing method chosen
int compare(const void * a, const void * b)
{
return (((Point*)a)->x() - ((Point*)b)->x());
}
double CalculateConvexHullArea(vector<int> ConvexHullX, vector<int> ConvexHullY)
{
std::vector<Point> FillPoints;
for (size_t k = 0; k + 1 < ConvexHullX.size(); k++)
{
drawLine(ConvexHullX[k], ConvexHullX[k+1], ConvexHullY[k], ConvexHullY[k+1], FillPoints);
}
//sorting coordinates
qsort(FillPoints.data(), FillPoints.size(), sizeof(Point), compare);
double area = 0;
int startY = FillPoints[0].y(), endY = FillPoints[0].y();
int currX = FillPoints[0].x();
// traversing x and summing up diff of min and max Y
for (int cnt = 0; cnt < FillPoints.size(); cnt++)
{
if (FillPoints[cnt].x() == currX)
{
startY = startY > FillPoints[cnt].y() ? FillPoints[cnt].y() : startY;
endY = endY < FillPoints[cnt].y() ? FillPoints[cnt].y() : endY;
}
else
{
int diffY = endY - startY + 1;
area += diffY;
currX = FillPoints[cnt].x();
startY = endY = FillPoints[cnt].y();
}
}
return area + endY - startY + 1;
}
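The fake-pixel idea can also be expressed directly on the vertices for this axis-aligned case: using each border pixel's outer corner (adding 1 to the maximal x and y coordinates) makes the plain shoelace formula count whole pixels. A sketch for the 3x3 example from the question:

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Plain shoelace formula; vertices need not repeat the first point.
double shoelace(const std::vector<Pt>& v)
{
    double sum = 0;
    for (size_t k = 0; k < v.size(); k++) {
        const Pt& a = v[k];
        const Pt& b = v[(k + 1) % v.size()];
        sum += a.x * b.y - a.y * b.x; // signed cross product per edge
    }
    return std::abs(sum) / 2.0;
}
```

On pixel centers (1,1) (1,3) (3,3) (3,1) this gives 4, the point area from the question; shifting the far corners to (1,4) (4,4) (4,1) gives the pixel-count area 9.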
I would like to know how I can create a Texture3D from a Texture2D.
I've found some good examples: Unity 4 - 3D Textures (Volumes), Unity - 3D Textures, or Color Correction Lookup Texture.
int dim = tex2D.height;
Color[] c2D = tex2D.GetPixels();
Color[] c3D = new Color[c2D.Length];
for (int x = 0; x < dim; ++x)
{
for (int y = 0; y < dim; ++y)
{
for (int z = 0; z < dim; ++z)
{
int y_ = dim - y - 1;
c3D[x + (y * dim) + (z * dim * dim)] = c2D[z * dim + x + y_ * dim * dim];
}
}
}
But this only works when you have
Texture2D.height= Mathf.FloorToInt(Mathf.Sqrt(Texture2D.width))
or if
Depth = Width = Height
How can I extract the values when the depth is not equal to the width or the height ?
It seems simple but I am missing something...
Thank you very much.
You can split the texture as follows:
//Iterate the result
for(int z = 0; z < depth; ++z)
for(int y = 0; y < height; ++y)
for(int x = 0; x < width; ++x)
c3D[x + y * width + z * width * height]
= c2D[x + y * width * depth + z * width]
You can get to this index formula as follows:
Advancing by 1 in the x-direction results in an increment by 1 (just the next pixel).
Advancing by 1 in the y-direction results in an increment by depth * width (skip one full atlas row, i.e. depth tiles of the given width).
Advancing by 1 in the z-direction results in an increment by width (move one tile of width pixels to the right).
Or if you prefer the other direction:
//Iterate the original image
for(int y = 0; y < height; ++y)
for(int x = 0; x < width * depth; ++x)
c3D[(x % width) + y * width + (x / width) * width * height] = c2D[x + y * width * depth];
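A quick way to convince yourself of the formula is to run the mapping over a small synthetic atlas and check that every pixel lands at its intended volume coordinate (a plain C++ sketch with Color reduced to an int tag):

```cpp
#include <vector>

// Map a (width*depth) x height atlas of side-by-side slices into a
// width x height x depth volume using the index formula above.
std::vector<int> atlasToVolume(const std::vector<int>& c2D,
                               int width, int height, int depth)
{
    std::vector<int> c3D(width * height * depth);
    for (int z = 0; z < depth; ++z)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                c3D[x + y * width + z * width * height]
                    = c2D[x + y * width * depth + z * width];
    return c3D;
}
```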
Unfortunately, there's not much documentation about Texture3D. I've tried to simply use c2D as the texture's data, but it doesn't give an appropriate result.
For the moment I tried this, which gives a better result, but I don't know if it's correct.
for (int x = 0; x < width; ++x)
{
for (int y = 0; y < height; ++y)
{
for (int z = 0; z < depth; ++z)
{
int y_ = height - y - 1;
c3D[x + (y * height) + (z * height * depth)] = c2D[z * height + x + y_ * height * depth];
}
}
}
From your picture, it looks like you have the planes of the 3D texture side by side, so you want a 3D texture with dimensions (width, height, depth) from a 2D texture with dimensions (width * depth, height)? You should be able to do this with something like:
for (int z = 0; z < depth; ++z)
{
for (int y = 0; y < height; ++y)
{
memcpy(c3D + (z * height + y) * width, c2D + (y * depth + z) * width, width * sizeof(Color));
}
}
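The memcpy above performs exactly the per-pixel index mapping from the earlier answer, one row at a time. A minimal check (with a stand-in Color struct, since this sketch is plain C++ rather than Unity C#):

```cpp
#include <cstring>
#include <vector>

struct Color { float r, g, b, a; };

// Copy a (width*depth) x height atlas of side-by-side slices into a
// width x height x depth volume, one row of `width` pixels at a time.
void atlasToVolume(Color* c3D, const Color* c2D,
                   int width, int height, int depth)
{
    for (int z = 0; z < depth; ++z)
        for (int y = 0; y < height; ++y)
            memcpy(c3D + (z * height + y) * width,  // dest: row y of slice z
                   c2D + (y * depth + z) * width,   // src: tile z in atlas row y
                   width * sizeof(Color));
}
```

Each destination offset (z * height + y) * width + x equals the x + y * width + z * width * height formula, so the two answers agree.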