I would like to know how I can create a Texture3D from a Texture2D.
I've found some good examples: Unity 4 - 3D Textures (Volumes), Unity - 3D Textures, or Color Correction Lookup Texture.
int dim = tex2D.height;
Color[] c2D = tex2D.GetPixels();
Color[] c3D = new Color[c2D.Length];
for (int x = 0; x < dim; ++x)
{
    for (int y = 0; y < dim; ++y)
    {
        for (int z = 0; z < dim; ++z)
        {
            int y_ = dim - y - 1;
            c3D[x + (y * dim) + (z * dim * dim)] = c2D[z * dim + x + y_ * dim * dim];
        }
    }
}
But this only works when you have
Texture2D.height = Mathf.FloorToInt(Mathf.Sqrt(Texture2D.width))
or when
Depth = Width = Height
How can I extract the values when the depth is not equal to the width or the height? It seems simple, but I am missing something...
Thank you very much.
You can split the texture as follows:
//Iterate the result
for (int z = 0; z < depth; ++z)
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            c3D[x + y * width + z * width * height] = c2D[x + y * width * depth + z * width];
You can get to this index formula as follows:
Advancing by 1 in the x-direction increments the source index by 1 (just the next pixel).
Advancing by 1 in the y-direction increments it by depth * width (skip one full row of the atlas, which holds depth slices of the given width).
Advancing by 1 in the z-direction increments it by width (jump to the same row of the next slice, one image width to the right).
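For example, with width = 4, height = 2 and depth = 3, the voxel (x = 2, y = 0, z = 1) lands at index 2 + 0 * 4 + 1 * 4 * 2 = 10 in c3D and is read from index 2 + 0 * 4 * 3 + 1 * 4 = 6 in c2D.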
Or if you prefer the other direction:
//Iterate the original image
for (int y = 0; y < height; ++y)
    for (int x = 0; x < width * depth; ++x)
        c3D[(x % width) + y * width + (x / width) * width * height] = c2D[x + y * width * depth];
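For completeness, here's a minimal sketch of wiring this split into Unity's Texture3D API (my assumption of the surrounding setup, using GetPixels/SetPixels/Apply; BuildVolume is just an illustrative name, and pick the TextureFormat that matches your data):

// Minimal sketch: build a Texture3D from a horizontal slice atlas.
// Assumes tex2D.width == width * depth and tex2D.height == height.
Texture3D BuildVolume(Texture2D tex2D, int width, int height, int depth)
{
    Color[] c2D = tex2D.GetPixels();
    Color[] c3D = new Color[width * height * depth];

    for (int z = 0; z < depth; ++z)
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                c3D[x + y * width + z * width * height] = c2D[x + y * width * depth + z * width];

    Texture3D tex3D = new Texture3D(width, height, depth, TextureFormat.RGBA32, false);
    tex3D.SetPixels(c3D);
    tex3D.Apply();
    return tex3D;
}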
Unfortunately, there's not much documentation about Texture3D. I've tried to simply use c2D as the texture's data, but it doesn't give an appropriate result.
For the moment I tried this, which gives a better result, but I don't know if it's correct:
for (int x = 0; x < width; ++x)
{
    for (int y = 0; y < height; ++y)
    {
        for (int z = 0; z < depth; ++z)
        {
            int y_ = height - y - 1;
            c3D[x + (y * height) + (z * height * depth)] = c2D[z * height + x + y_ * height * depth];
        }
    }
}
From your picture, it looks like you have the planes of the 3D texture laid out side by side, so you want a 3D texture with dimensions (width, height, depth) from a 2D texture with dimensions (width * depth, height)? You should be able to do this with something like this:
for (int z = 0; z < depth; ++z)
{
    for (int y = 0; y < height; ++y)
    {
        memcpy(c3D + (z * height + y) * width, c2D + (y * depth + z) * width, width * sizeof(Color));
    }
}
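Note that memcpy only applies if c2D and c3D are raw Color buffers; for Unity's managed Color[] arrays, a hedged C# equivalent with the same index arithmetic would use System.Array.Copy:

for (int z = 0; z < depth; ++z)
{
    for (int y = 0; y < height; ++y)
    {
        // Copy one row of 'width' pixels from slice z of the atlas into the volume.
        System.Array.Copy(c2D, (y * depth + z) * width,
                          c3D, (z * height + y) * width,
                          width);
    }
}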
Related
I've generated a cubic world using FastNoiseLite, but I don't know how to differentiate the top-level blocks as grass and the ones below as dirt when using 3D noise.
TArray<float> CalculateNoise(const FVector& ChunkPosition)
{
    Densities.Reset();
    // ChunkSize is 32
    for (int z = 0; z < ChunkSize; z++)
    {
        for (int y = 0; y < ChunkSize; y++)
        {
            for (int x = 0; x < ChunkSize; x++)
            {
                const float Noise = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z));
                Densities.Add(Noise - ChunkPosition.Z);
            }
        }
    }
    return Densities;
}
void AddCubeMaterial(const FVector& ChunkPosition)
{
    const int32 DensityIndex = GetIndex(ChunkPosition);
    const float Density = Densities[DensityIndex];
    if (Density < 1)
    {
        // Add Grass block
    }
    // Add dirt block
}
float GetNoise(const FVector& Position) const
{
    const float Height = 280.f;
    if (bIs3dNoise)
    {
        return FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z) * Height;
    }
    return FastNoiseLiteObj->GetNoise(Position.X, Position.Y) * Height;
}
This is the result when using 3D noise.
3D Noise result
But if I switch to 2D noise it works perfectly fine.
2D Noise result
This answer applies to Perlin-like noise.
Your integer chunk coordinates are dis-contiguous in noise space. 'Position' needs to be scaled by 1/Height so the chunk samples the noise as one contiguous block, and the result is then scaled back up by Height.
If you are happy with the XY axes (2D), you can limit the scaling to the Z axis:
FastNoiseLiteObj->GetNoise(Position.X, Position.Y, Position.Z / Height) * Height;
This adjustment gives a noise-continuous Z block location with respect to Position(X, Y).
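If you want the whole volume contiguous, a minimal sketch (my guess at how the wrapper above would look with all three axes scaled; not taken from the original answer) is:

// Hypothetical variant of GetNoise: sample a contiguous noise block by
// scaling the position down by Height, then scale the output back up.
float GetNoiseContiguous(const FVector& Position) const
{
    const float Height = 280.f;
    return FastNoiseLiteObj->GetNoise(Position.X / Height,
                                      Position.Y / Height,
                                      Position.Z / Height) * Height;
}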
Edit in response to comments
Contiguous:
The noise algorithm guarantees continuous output in all dimensions.
By sampling every 32 units (dis-contiguous sampling), that continuity is broken, perhaps on purpose, and then further modified by the density term.
To guarantee a top-level grass layer:
Densities.Add(Noise + ((ChunkPosition.Z > Threshold) ? 1 : 0));
Your original - ChunkPosition.Z term made the grass thicker as it went down; add it back if you wish.
To add random overhangs/underhangs, reduce the density threshold randomly:
if (Density < ((rnd() < 0.125) ? 0.5 : 1))
I leave the definition of rnd() to your preferred random distribution.
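Purely as an assumption (not part of the original answer), rnd() could simply wrap Unreal's FMath::FRand(), which returns a uniform float in [0, 1):

// Hypothetical rnd() helper; swap in any distribution you prefer.
static float rnd()
{
    return FMath::FRand();
}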
Almost always having overhangs requires a forward lookup of the neighbouring blocks' Z in noise.
Precalculate the noise values for the next line into alternating arrays two entries wider than the chunk width, with the edge entries set to 0.
The algorithm is:
// declare arrays: currentnoise[ChunkSize + 2] and nextnoise[ChunkSize + 2], and alpha = .2; // see text
for (int y = 0; y < ChunkSize; y++) // note the reorder y-z-x
{
    // preload currentnoise for z = 0
    currentnoise[0] = 0;
    currentnoise[ChunkSize + 1] = 0;
    for (int x = 0; x < ChunkSize; x++)
    {
        currentnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z));
    }
    for (int z = 1; z < ChunkSize - 2; z++)
    {
        nextnoise[0] = 0;
        nextnoise[ChunkSize + 1] = 0;
        // load next
        for (int x = 0; x < ChunkSize; x++)
        {
            nextnoise[x + 1] = GetNoise(FVector(ChunkPosition.X + x, ChunkPosition.Y + y, ChunkPosition.Z + z + 1));
        }
        // blend current with next
        for (int x = 0; x < ChunkSize; x++)
        {
            Densities.Add(currentnoise[x + 1] * .75 + nextnoise[x + 2] * alpha + nextnoise[x] * alpha);
        }
        // move next to current in a memory-safe manner:
        // it is faster to swap pointers, but this is much safer for portability
        for (int i = 1; i < ChunkSize + 1; i++)
            currentnoise[i] = nextnoise[i];
    }
    // apply last z (no next)
    for (int x = 0; x < ChunkSize; x++)
    {
        Densities.Add(currentnoise[x + 1]);
    }
}
Where alpha is approximately between .025 and .25, depending on the preferred fill amount.
The two innermost x loops could be streamlined into one, but they are left separate for readability (it requires two preloads).
I have a 1D array (size = 4 * width * height + 1) of pixels of an RGBA PNG image. I want to rotate the image by X degrees clockwise. I already know how to do it for 90 degrees, but I guess I have some problem with the trigonometry.
Here's the code:
std::pair<int, int> move(int x, int y, double rad) {
    return {x * cos(rad) - y * sin(rad), x * cos(rad) + y * sin(rad)};
}

void turn(int deg) {
    if (deg < 0) {
        deg = 360 + deg;
    }
    double rad = deg * (M_PI / (double)180);
    unsigned int oldWidth = width;
    width = lround(sqrt(height * height + width * width));
    height = lround(sqrt(height * height + oldWidth * oldWidth));
    std::vector<unsigned char> output(rawPixels.size());
    for (int X = 0; X < width; ++X) {
        for (int Y = 0; Y < height; ++Y) {
            for (int chan = 0; chan < CHANNELS_COUNT; ++chan) {
                std::pair<int, int> xy = move(X, Y, rad);
                output[(X * height + Y) * CHANNELS_COUNT + chan] = rawPixels[
                    ((height - 1 - xy.second) * width + xy.first) * CHANNELS_COUNT + chan];
            }
        }
    }
    rawPixels = output;
}
It's OK to use additional arrays, but I don't want to use OpenCV or any other libraries.
Iterating through a 1D array (pseudo 2D) with a step of 3:
arr = new int[height * width * 3];

for (int i = 0; i < height * width * 3; i += 3) {
    arr[i] = 1;
}
I have tried this, but what I got is a column one third wide:
for (int y = 0; y < height * 3; y++) {
    for (int x = 0; x < width; x += 3) {
        arr[x + width * y] = 1;
    }
}
Assuming your cells have a 'size' of 3 entries, you should use the * 3 on the inner loop; otherwise you miss two thirds of the cells on each row.
You also need to multiply width by 3 to get to the correct row.
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width * 3; x += 3) {
        arr[x + width * 3 * y] = 1;
    }
}
In general you need the following structure for such situations:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width * cellWidth; x += cellWidth) {
        arr[x + width * cellWidth * y] = 1;
    }
}
(Where cellWidth is 3 in your case.)
To slightly simplify this, you could assume in the loops that your cells have a width of 1 (like a normal situation) and multiply by cellWidth when actually assigning the values:
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        int index = (x + width * y) * cellWidth;
        arr[index + 0] = 1; // First 'cell' entry
        arr[index + 1] = 1; // Second
        ...
        arr[index + cellWidth - 1] = 1; // Last
    }
}
Another solution is to create larger 'items' using a struct, for example:
typedef struct { int r, g, b; } t_rgb;

t_rgb* arr = new t_rgb[height * width];

for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
        arr[x + width * y].r = 1;
    }
}
and you are able to use it as a regular array (the compiler does all the offset calculations for you). This also makes it clearer what is happening in your code.
What are you trying to accomplish exactly? Setting a channel in an RGB image?
I usually do it like this:
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        arr[(x + width * y) * 3] = 1;
In general, to set RGB values, you can simply add an offset like this:
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        size_t base = (x + width * y) * 3;
        arr[base + 0] = r;
        arr[base + 1] = g;
        arr[base + 2] = b;
    }
I am trying to display an RGB image from raw data in C++ without any library. When I input a square image (e.g. 512x512), my program displays it perfectly, but it does not work for a non-square image (e.g. 350x225). I understand that I need padding in this case; I then tried to find a similar case, but it didn't make sense to me how people pad their images.
If anyone could show me how to pad, I would be thankful. Below is what I have done for RGB from raw.
void CImage_MyClass::Class_MakeRGB(void)
{
    m_BMPheader.biHeight = m_uiHeight;
    m_BMPheader.biWidth = m_uiWidth;

    m_pcBMP = new UCHAR[m_uiHeight * m_uiWidth * 3];

    //RGB Image
    {
        int ind = 0;
        for (UINT y = 0; y < m_uiHeight; y++)
        {
            for (UINT x = 0; x < m_uiHeight * 3; x += 3)
            {
                m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x + 2];
                m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x + 1];
                m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x];
            }
        }
    }
}
You need to pad the number of bytes in each line out to a multiple of 4.
void CImage_MyClass::Class_MakeRGB(void)
{
    m_BMPheader.biHeight = m_uiHeight;
    m_BMPheader.biWidth = m_uiWidth;

    // Pad buffer width to the next highest multiple of 4
    const int bmStride = (m_uiWidth * 3 + 3) & ~3;
    m_pcBMP = new UCHAR[m_uiHeight * bmStride];

    // Clear buffer so the padding bytes are 0
    memset(m_pcBMP, 0, m_uiHeight * bmStride);

    //RGB Image
    {
        for (UINT y = 0; y < m_uiHeight; y++)
        {
            for (UINT x = 0; x < m_uiWidth * 3; x += 3)
            {
                const int bmpPos = y * bmStride + x;
                m_pcBMP[bmpPos + 0] = m_pcIBuff[m_uiHeight - y - 1][x + 2];
                m_pcBMP[bmpPos + 1] = m_pcIBuff[m_uiHeight - y - 1][x + 1];
                m_pcBMP[bmpPos + 2] = m_pcIBuff[m_uiHeight - y - 1][x];
            }
        }
    }
}
I also changed the inner for loop to use m_uiWidth instead of m_uiHeight.
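For example, with m_uiWidth = 350, the raw row is 350 * 3 = 1050 bytes, and (1050 + 3) & ~3 = 1052, so each row carries 2 padding bytes.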
@Retired Ninja, thanks anyway for your answer... you showed me a simple way to do this.
By the way, I have fixed mine as well in a different way; here it is:
void CImage_MyClass::Class_MakeRGB(void)
{
    m_BMPheader.biHeight = m_uiHeight;
    m_BMPheader.biWidth = m_uiWidth;

    int padding = 0;
    int scanline = m_uiWidth * 3;
    while ((scanline + padding) % 4 != 0)
    {
        padding++;
    }
    int psw = scanline + padding;

    m_pcBMP = new UCHAR[m_uiHeight * m_uiWidth * 3 + m_uiHeight * padding];

    //RGB Image
    int ind = 0;
    for (UINT y = 0; y < m_uiHeight; y++)
    {
        for (UINT x = 0; x < m_uiHeight * 3; x += 3)
        {
            m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x + 2];
            m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x + 1];
            m_pcBMP[ind++] = m_pcIBuff[m_uiHeight - y - 1][x];
        }
        for (int i = 0; i < padding; i++)
            ind++;
    }
}
I'm trying to rotate an image using openFrameworks, but I have a problem. My rotated image is red instead of its original color.
void testApp::setup(){
    image.loadImage("abe2.jpg");
    rotatedImage.allocate(image.width, image.height, OF_IMAGE_COLOR);

    imageCenterX = image.getWidth() / 2;
    imageCenterY = image.getHeight() / 2;
    w = image.getWidth();
    h = image.getHeight();

    int degrees = 180;
    float radians = (degrees * (PI / 180));

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            int index = image.getPixelsRef().getPixelIndex(x, y);
            int newX = (cos(radians) * (x - imageCenterX) - sin(radians) * (y - imageCenterY) + imageCenterX);
            int newY = (sin(radians) * (x - imageCenterX) + cos(radians) * (y - imageCenterY) + imageCenterY);
            int newIndex = rotatedImage.getPixelsRef().getPixelIndex(newX, newY);

            rotatedImage.getPixelsRef()[newIndex] = image.getPixelsRef()[index];
        }
    }
    rotatedImage.update();
}

void testApp::update(){
}

void testApp::draw(){
    image.draw(0, 0);
    rotatedImage.draw(0, 400);
}
Can someone tell me what I am doing wrong?
If your image has three color components (Red, Green, Blue), you need to transform all three of those. The following should do the trick:
rotatedImage.getPixelsRef()[newIndex] = image.getPixelsRef()[index];
rotatedImage.getPixelsRef()[newIndex+1] = image.getPixelsRef()[index+1];
rotatedImage.getPixelsRef()[newIndex+2] = image.getPixelsRef()[index+2];
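If you'd rather not hard-code the channel count, a hedged variant (assuming ofPixels' getNumChannels(), which openFrameworks provides) is:

// Copy every channel of the source pixel, whatever the image format is.
int channels = image.getPixelsRef().getNumChannels();
for (int c = 0; c < channels; c++) {
    rotatedImage.getPixelsRef()[newIndex + c] = image.getPixelsRef()[index + c];
}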