Adding Ozone to my sky simulation - c++

I implemented a simulation of the colour of the sky a while ago by following the Scratchapixel tutorial: https://www.scratchapixel.com/lessons/procedural-generation-virtual-worlds/simulating-sky
I adapted it to use the actual sun position and can get realistic sky colours during the day. However, I noticed that after sunset / before sunrise the colours are greyish when they should be deep blue. After researching this, I read that it is due to ozone absorption not being present in my model.
I used the extinction coefficients (3.426, 8.298, 0.356) * 0.06e-5, found in https://media.contentapi.ea.com/content/dam/eacom/frostbite/files/s2016-pbs-frostbite-sky-clouds-new.pdf,
and also read that since ozone does not scatter, it should only be added to the transmittance value.
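For reference, a minimal sketch (not my exact code) of how that coefficient vector can be built from the Frostbite values; it is stored here as plain floats, while the actual renderer keeps it in a vec3 used alongside betaR and betaM, and the per-metre units are my assumption:
#include <array>

// Ozone extinction coefficients from the Frostbite slides, scaled by 0.06e-5
// as described above. Plain floats for illustration; the real code keeps this
// in a vec3 named betaO.
const std::array<float, 3> betaO = { 3.426f * 0.06e-5f,
                                     8.298f * 0.06e-5f,
                                     0.356f * 0.06e-5f };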
(Equation image omitted: the ozone optical depth enters the same exponential transmittance term as the Rayleigh and Mie optical depths, as in the tau expression below.)
Therefore, I modified the code from Scratchapixel as follows:
for (uint32_t i = 0; i < numSamples; ++i) {
    vec3 samplePosition = ray_in2.origin() + (tCurrent + segmentLength * 0.5f) * ray_in2.direction();
    float height = samplePosition.length() - atmosphere.earthRadius;
    // compute optical depth along the view ray
    float hr = exp(-height / atmosphere.Hr) * segmentLength;
    float hm = exp(-height / atmosphere.Hm) * segmentLength;
    // ozone reuses the Rayleigh density profile, scaled by 6e-7
    float ho = exp(-height / atmosphere.Hr) * segmentLength * (6e-7);
    opticalDepthR += hr;
    opticalDepthM += hm;
    opticalDepthO += ho;
    // light optical depth
    float t0Light, t1Light;
    ...
    uint32_t j;
    for (j = 0; j < numSamplesLight; ++j) {
        vec3 samplePositionLight = samplePosition + (tCurrentLight + segmentLengthLight * 0.5f) * sunDir;
        float heightLight = samplePositionLight.length() - atmosphere.earthRadius;
        if (heightLight < 0) break;
        opticalDepthLightR += exp(-heightLight / atmosphere.Hr) * segmentLengthLight;
        opticalDepthLightM += exp(-heightLight / atmosphere.Hm) * segmentLengthLight;
        opticalDepthLightO += exp(-heightLight / atmosphere.Hr) * segmentLengthLight * (6e-7);
        tCurrentLight += segmentLengthLight;
    }
    if (j == numSamplesLight) {
        vec3 tau = betaR * (opticalDepthR + opticalDepthLightR) +
                   betaM * 1.1f * (opticalDepthM + opticalDepthLightM) +
                   betaO * (opticalDepthO + opticalDepthLightO);
        vec3 attenuation(exp(-tau.x()), exp(-tau.y()), exp(-tau.z()));
Summary:
I added the variables opticalDepthO and opticalDepthLightO, which are calculated the same way as the Rayleigh optical depths but multiplied by 6e-7.
Then the sum of opticalDepthLightO and opticalDepthO is multiplied by the extinction coefficient for ozone and added to tau.
The problem is that I see no difference in my sky colour before and after adding ozone. Can someone guide me to what I'm doing wrong?

Related

how to implement a c++ function which creates a swirl on an image

imageData = new double*[imageHeight];
for(int i = 0; i < imageHeight; i++) {
    imageData[i] = new double[imageWidth];
    for(int j = 0; j < imageWidth; j++) {
        // compute the distance and angle from the swirl center:
        double pixelX = (double)i - swirlCenterX;
        double pixelY = (double)j - swirlCenterY;
        double pixelDistance = pow(pow(pixelX, 2) + pow(pixelY, 2), 0.5);
        double pixelAngle = atan2(pixelX, pixelY);
        // double swirlAmount = 1.0 - (pixelDistance/swirlRadius);
        // if(swirlAmount > 0.0) {
        //     double twistAngle = swirlTwists * swirlAmount * PI * 2.0;
        double twistAngle = swirlTwists * pixelDistance * PI * 2.0;
        // adjust the pixel angle and compute the adjusted pixel co-ordinates:
        pixelAngle += twistAngle;
        pixelX = cos(pixelAngle) * pixelDistance;
        pixelY = sin(pixelAngle) * pixelDistance;
        // }
        (this)->setPixel(i, j, tempMatrix[(int)(swirlCenterX + pixelX)][(int)(swirlCenterY + pixelY)]);
    }
}
I am trying to implement a C++ function (code above) based on the following pseudo-code,
which is supposed to create a swirl on an image, but I have some continuity problems at the borders.
The function I have for the moment is able to apply the swirl to a disk of a given size and to deform it almost as I wished, but its influence doesn't decrease as we get close to the borders. I tried to multiply the angle of rotation by a factor of 1 - (r/R) (with r the distance between the current pixel and the centre of the swirl, and R the radius of the swirl), but this doesn't give the effect I hoped for.
Moreover, I noticed that at some parts of the border a thin white line appears (which means that the pixel values there are equal to 1), and I can't exactly explain why.
Maybe some of the problems I have are linked to the atan2 C++ standard function.
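For what it's worth, here is a minimal, self-contained sketch of the falloff variant described above, written against a plain row-major grayscale buffer rather than my actual image class (all names are illustrative, and the source coordinates are bounds-checked):
#include <cmath>
#include <vector>

// Sketch of the swirl with the 1 - (r/R) falloff described above.
// 'src' is a row-major grayscale image of size width x height with values in [0, 1];
// pixels outside the swirl radius are copied unchanged.
std::vector<double> swirl(const std::vector<double>& src, int width, int height,
                          double centerX, double centerY,
                          double radius, double twists)
{
    const double PI = 3.14159265358979323846;
    std::vector<double> dst(src); // start from a copy so untouched pixels keep their value

    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            double dx = x - centerX;
            double dy = y - centerY;
            double r  = std::sqrt(dx * dx + dy * dy);

            double amount = 1.0 - r / radius;      // falloff: full twist at the centre, none at the edge
            if (amount <= 0.0) continue;           // outside the swirl disk: leave the pixel alone

            double angle = std::atan2(dy, dx) + twists * amount * 2.0 * PI;
            int srcX = static_cast<int>(centerX + std::cos(angle) * r);
            int srcY = static_cast<int>(centerY + std::sin(angle) * r);

            // skip source coordinates that fall outside the image
            if (srcX < 0 || srcX >= width || srcY < 0 || srcY >= height) continue;
            dst[y * width + x] = src[srcY * width + srcX];
        }
    }
    return dst;
}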

Character recognition from an image C++

*Note: while this post is pretty much asking about bilinear interpolation, I kept the title more general and included extra information in case someone has any ideas on how I can do this better.
I have been having trouble implementing a way to identify letters from an image in order to create a word-search solving program. For mainly educational but also portability purposes, I have been attempting this without the use of a library. It can be assumed that the image the characters will be picked off of contains nothing else but the puzzle. Although this page is only recognizing a small set of characters, I have been using it to guide my efforts, along with this one as well.
As the article suggested, I have an image of each letter scaled down to 5x5 to compare each unknown letter to. I have had the best success by scaling down the unknown to 5x5 using bilinear resampling and summing the squares of the difference in intensity of each corresponding pixel in the known and unknown images (a sketch of this scoring follows the shape example below). To attempt to get more accurate results, I also added the square of the difference in width:height ratios, and the white:black pixel ratios of the top half and bottom half of each image. The known image with the closest "difference score" to the unknown image is then considered the unknown letter.
The problem is that this seems to have only about 50% accuracy. To improve this I tried using larger samples (instead of 5x5 I tried 15x15), but this proved even less effective. I also tried to go through the known and unknown images, look for features and shapes, and determine a match based on two images having about the same amount of the same features. For example, shapes like the following were identified and counted up (where ■ represents a black pixel). This proved less effective than the original method.
■ ■ ■ ■
■ ■
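Here is a minimal sketch of the "difference score" mentioned above, reduced to just the per-pixel sum of squared differences on the 5x5 grids (the ratio terms are left as a comment, and the names are illustrative rather than my actual code):
#include <array>
#include <cstddef>

// Compare a scaled-down unknown letter against one known sample.
// Both are 5x5 grayscale intensity grids in [0, 255]; lower score = closer match.
using Glyph5x5 = std::array<float, 25>;

float differenceScore(const Glyph5x5& unknown, const Glyph5x5& known)
{
    float score = 0.0f;
    for (std::size_t i = 0; i < unknown.size(); ++i) {
        float d = unknown[i] - known[i];
        score += d * d; // sum of squared per-pixel intensity differences
    }
    // In the real program, the squared differences of the width:height ratio
    // and of the top/bottom white:black ratios would be added here as well.
    return score;
}
The unknown letter is then assigned whichever known sample gives the smallest score.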
So here is an example: the following image gets loaded:
The program then converts it to monochrome by determining whether each pixel's intensity is above or below the average intensity of the surrounding 11x11 square, computed with a summed-area table (a sketch of this step is shown below). It then fixes the skew and picks out the letters by identifying areas of relatively equal spacing. I then use the intersecting horizontal and vertical spaces to get a general idea of where each character is. Next I make sure that the entire letter is contained in each square picked out by going line by line, above, below, left and right of the original square, until the square's border detects no dark pixels on it.
Then I take each letter, resample it and compare it to the known images.
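Here is that thresholding sketch, assuming an 8-bit grayscale image stored row-major (illustrative names, not my actual code; the skew correction and letter segmentation are omitted):
#include <algorithm>
#include <cstdint>
#include <vector>

// Binarise a grayscale image by comparing each pixel with the mean intensity
// of the surrounding 11x11 window, computed via a summed-area table.
std::vector<std::uint8_t> binarise(const std::vector<std::uint8_t>& gray, int width, int height)
{
    // sat[(y + 1) * (width + 1) + (x + 1)] = sum of gray over the rectangle [0..x] x [0..y]
    std::vector<long long> sat((width + 1) * (height + 1), 0);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            sat[(y + 1) * (width + 1) + (x + 1)] =
                gray[y * width + x]
                + sat[y * (width + 1) + (x + 1)]
                + sat[(y + 1) * (width + 1) + x]
                - sat[y * (width + 1) + x];

    std::vector<std::uint8_t> mono(width * height);
    const int half = 5; // 11x11 window
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int x0 = std::max(0, x - half), x1 = std::min(width - 1, x + half);
            int y0 = std::max(0, y - half), y1 = std::min(height - 1, y + half);
            long long sum = sat[(y1 + 1) * (width + 1) + (x1 + 1)]
                          - sat[y0 * (width + 1) + (x1 + 1)]
                          - sat[(y1 + 1) * (width + 1) + x0]
                          + sat[y0 * (width + 1) + x0];
            long long area = (long long)(x1 - x0 + 1) * (y1 - y0 + 1);
            // darker than the local mean -> letter (0), otherwise background (255)
            mono[y * width + x] = (gray[y * width + x] * area < sum) ? 0 : 255;
        }
    }
    return mono;
}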
*Note: the known samples use Arial, font size 12, rescaled in Photoshop to 5x5 using bilinear interpolation.
Here is an example of a successful match:
The following letter is picked out:
scaled down to:
which looks like
from afar. This is successfully matched to the known N sample:
Here is a failed match:
is picked out and scaled down to:
which, to no real surprise, does not match the known R sample.
I changed how images are picked out so that the letter is not cut off, as you can see in the above images, so I believe the issue comes from scaling the images down. Currently I am using bilinear interpolation to resample the image. To understand how exactly this works with downsampling, I referred to the second answer in this post and came up with the following code. Previously I tested that this code works (at least to a "this looks OK" point), so it could be a combination of factors causing problems.
void Image::scaleTo(int width, int height)
{
    int originalWidth = this->width;
    int originalHeight = this->height;
    Image * originalData = new Image(this->width, this->height, 0, 0);
    for (int i = 0; i < this->width * this->height; i++) {
        int x = i % this->width;
        int y = i / this->width;
        originalData->setPixel(x, y, this->getPixel(x, y));
    }
    this->resize(width, height); //simply resizes the image, after the resize it is just a black bmp.
    double factorX = (double)originalWidth / width;
    double factorY = (double)originalHeight / height;
    float * xCenters = new float[originalWidth]; //the following stores the "centers" of each pixel.
    float * yCenters = new float[originalHeight];
    float * newXCenters = new float[width];
    float * newYCenters = new float[height];
    //1 represents one of the originally sized pixel's side length
    for (int i = 0; i < originalWidth; i++)
        xCenters[i] = i + 0.5;
    for (int i = 0; i < width; i++)
        newXCenters[i] = (factorX * i) + (factorX / 2.0);
    for (int i = 0; i < height; i++)
        newYCenters[i] = (factorY * i) + (factorY / 2.0);
    for (int i = 0; i < originalHeight; i++)
        yCenters[i] = i + 0.5;
    /*  p[0]   p[1]
            p
        p[2]   p[3]  */
    //the following will find the closest points to the sampled pixel that still remain in this order
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            POINT p[4]; //POINT used is the Win32 struct POINT
            float pDists[4] = { FLT_MAX, FLT_MAX, FLT_MAX, FLT_MAX };
            float xDists[4];
            float yDists[4];
            for (int i = 0; i < originalWidth; i++) {
                for (int j = 0; j < originalHeight; j++) {
                    float xDist = abs(xCenters[i] - newXCenters[x]);
                    float yDist = abs(yCenters[j] - newYCenters[y]);
                    float dist = sqrt(xDist * xDist + yDist * yDist);
                    if (xCenters[i] < newXCenters[x] && yCenters[j] < newYCenters[y] && dist < pDists[0]) {
                        p[0] = { i, j };
                        pDists[0] = dist;
                        xDists[0] = xDist;
                        yDists[0] = yDist;
                    }
                    else if (xCenters[i] > newXCenters[x] && yCenters[j] < newYCenters[y] && dist < pDists[1]) {
                        p[1] = { i, j };
                        pDists[1] = dist;
                        xDists[1] = xDist;
                        yDists[1] = yDist;
                    }
                    else if (xCenters[i] < newXCenters[x] && yCenters[j] > newYCenters[y] && dist < pDists[2]) {
                        p[2] = { i, j };
                        pDists[2] = dist;
                        xDists[2] = xDist;
                        yDists[2] = yDist;
                    }
                    else if (xCenters[i] > newXCenters[x] && yCenters[j] > newYCenters[y] && dist < pDists[3]) {
                        p[3] = { i, j };
                        pDists[3] = dist;
                        xDists[3] = xDist;
                        yDists[3] = yDist;
                    }
                }
            }
            //channel is a typedef for unsigned char
            //getOPixel(point) is a macro for originalData->getPixel(point.x, point.y)
            float r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).r + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).r;
            float r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).r + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).r;
            float interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel r = (channel)round(interpolated);
            r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).g + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).g; //yDist[3]
            r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).g + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).g; //yDist[0]
            interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel g = (channel)round(interpolated);
            r1 = (xDists[3] / (xDists[2] + xDists[3])) * getOPixel(p[2]).b + (xDists[2] / (xDists[2] + xDists[3])) * getOPixel(p[3]).b; //yDist[3]
            r2 = (xDists[1] / (xDists[0] + xDists[1])) * getOPixel(p[0]).b + (xDists[0] / (xDists[0] + xDists[1])) * getOPixel(p[1]).b; //yDist[0]
            interpolated = (yDists[0] / (yDists[0] + yDists[3])) * r1 + (yDists[3] / (yDists[0] + yDists[3])) * r2;
            channel b = (channel)round(interpolated);
            this->setPixel(x, y, { r, g, b });
        }
    }
    delete[] xCenters;
    delete[] yCenters;
    delete[] newXCenters;
    delete[] newYCenters;
    delete originalData;
}
I have the utmost respect for anyone even remotely willing to sift through this to try and help. Any and all suggestions will be extremely appreciated.
UPDATE:
So, as suggested, I started augmenting the known data set with scaled-down letters from word searches. This greatly improved accuracy, from about 50% to 70% (percentages calculated from a very small sample size, so take the numbers lightly). Basically I'm using the original set of chars as a base (this original set was actually the most accurate out of the other sets I've tried, e.g. a set calculated using the same resampling algorithm, a set using a different font, etc.) and I am just manually adding knowns to that set. I basically assign the first 20 or so images picked out in a search their corresponding letter by hand and save that into the known-set folder. I am still choosing the closest match out of the entire known set. Would this still be a good method, or should some kind of change be made?
I also implemented a feature where, if a letter is about a 90% match with a known letter, I assume the match is correct and add the current "unknown" to the list of knowns (a sketch of this rule follows below). I could see this going both ways: I feel it could either (a) make the program more accurate over time or (b) solidify the original guess and possibly make the program less accurate over time. I have actually not noticed this cause a change (either for the better or for the worse). Am I on the right track with this? I'm not going to call this solved just yet, until I get accuracy just a little higher and test the program on more examples.
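A minimal sketch of that 90% rule (purely illustrative names; "similarity" stands for whatever normalised inverse of the difference score ends up being used):
#include <string>
#include <vector>

// Illustrative known-sample record and self-training rule: if the best match
// is similar enough, trust it and fold the unknown glyph into the known set.
struct Known { std::string letter; std::vector<float> pixels5x5; };

void maybeLearn(std::vector<Known>& knowns, const std::vector<float>& unknownPixels,
                const std::string& bestLetter, float bestSimilarity)
{
    const float kLearnThreshold = 0.90f; // "about a 90% match"
    if (bestSimilarity >= kLearnThreshold)
        knowns.push_back({bestLetter, unknownPixels}); // add the unknown to the knowns
}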

UV Mapping issue artifact on Sphere OpenGl

I am UV mapping a 2D texture onto a 3D sphere's X, Y, Z coordinates using the formula
u = (0.5 + atan2(X, Y) / (2 * glm::pi<double>()));
v = (0.5 - asin(Z) / glm::pi<double>());
in modern OpenGL, C++.
I don't know why there is this artifact on the sphere. Can't figure it out.
OK, I have figured this out and corrected it, so I thought I would finally answer here.
Big thanks to BDL and Rabbid76.
Whenever u == 0, I added the same vertex position (X, Y, Z) to the vertices vector (or array) again and also increased the index, but hardcoded the texture u to 1.0f this time.
No issues now; the seam looks perfect.
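A minimal sketch of that seam fix, with an illustrative vertex layout (my real vertex structure and index handling differ, so treat this as the idea only):
#include <vector>

// Whenever a generated vertex sits on the u == 0 meridian, push a duplicate
// with u forced to 1.0f so triangles on the far side of the seam can index it
// instead of wrapping the texture coordinate from 1 back to 0.
struct Vertex { float x, y, z, u, v; };

void appendVertex(std::vector<Vertex>& vertices, float x, float y, float z, float u, float v)
{
    vertices.push_back({x, y, z, u, v});
    if (u == 0.0f) {
        // same position, but texture coordinate moved to the other edge of the map
        vertices.push_back({x, y, z, 1.0f, v});
    }
}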
This is the detail of a textured sphere geometry which is indexed. You should use indices for better performance:
m_meridians and m_latitudes are the detail level of the sphere.
for (size_t i = 0; i < m_meridians + 1; i++)
{
    for (size_t j = 0; j < m_latitudes + 2; j++)
    {
        // texCoord in the range [(0,0), (1,1)]
        QVector2D texCoord((float)i / m_meridians, (float)j / (m_latitudes+1));
        // theta = longitude from 0 to 2pi
        // phi = latitude from -pi/2 to pi/2
        double theta, phi;
        theta = 2*M_PI * texCoord.x();
        phi = M_PI * texCoord.y() - M_PI_2;
        QVector3D pos;
        pos.setY((float)std::sin(phi));
        pos.setX((float)std::cos(phi) * std::cos(theta));
        pos.setZ((float)std::cos(phi) * std::sin(theta));
        m_vertices.push_back({pos, texCoord});
    }
}
// Calculate triangle indices
for (size_t i = 0; i < m_meridians; i++)
{
    // Construct triangles between successive meridians
    for (size_t j = 0; j < m_latitudes + 1; j++)
    {
        m_indices.push_back(i * (m_latitudes+2) + j);
        m_indices.push_back(i * (m_latitudes+2) + j+1);
        m_indices.push_back((i+1) * (m_latitudes+2) + j+1);
        m_triangleCount++;
        m_indices.push_back((i+1) * (m_latitudes+2) + j+1);
        m_indices.push_back((i+1) * (m_latitudes+2) + j);
        m_indices.push_back(i * (m_latitudes+2) + j);
        m_triangleCount++;
    }
}

How do I return an array of float3's from my compute shader?

Basically, I'm trying to handle ray tracing in my compute shader, and I've tested that it works and outputs any individual float3 correctly. I made a much slower implementation where individual pixels were rendered on the GPU and copied back to the CPU every time; very slow, but it proved that the maths in my GPU ray-tracing function was sound and gave the right results.
However, I'm having difficulty outputting an array of float3's, as the RWStructuredBuffer confuses me somewhat.
Here is my RWStructuredBuffer
RWStructuredBuffer<float3> Data: register(u0);
Nothing special, but it's there for reference.
Here is the function in my compute shader that I call with Dispatch:
groupshared uint things;

[numthreads(1, 1, 1)]
void RenderRay(uint3 Gid : SV_GroupID, uint3 DTid : SV_DispatchThreadID, uint3 GTid : SV_GroupThreadID, uint GI : SV_GroupIndex)
{
    float4 empty = { 0, 0, 0, 0 };
    uint offset = 0;
    float3 pixel;
    float invWidth = 1 / float(width.x), invHeight = 1 / float(height.x);
    float fov = 30, aspectratio = width.x / float(height.x);
    float angle = tan(M_PI * 0.5 * fov / 180.);
    GroupMemoryBarrierWithGroupSync();
    for (uint y = 0; y < height.x; ++y)
    {
        if (y > 0)
            offset += height.x;
        for (uint x = 0; x < width.x; ++x)
        {
            float xx = (2 * ((x + 0.5) * invWidth) - 1) * angle * aspectratio;
            float yy = (1 - 2 * ((y + 0.5) * invHeight)) * angle;
            float4 raydirection = { xx, yy, -1, 0 };
            normalize(raydirection);
            pixel = trace(empty, raydirection, 0);
            Data[0][x] = pixel;
        }
    }
    GroupMemoryBarrierWithGroupSync();
}
Setting Data[0] = pixel worked for the pixel-by-pixel implementation, since I only needed to return the one value, but I can't achieve the same result when trying to output the whole image at once, as I can't quite figure out how to add these values into an array.
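For what it's worth, the indexing the inner loop needs is just a row-major flattening of (x, y) into one offset; here is the same arithmetic as a minimal C++ sketch with illustrative names (I assume the HLSL side can index Data with the same flat offset):
#include <cstddef>
#include <vector>

// Writing pixel (x, y) of a width x height image at index y * width + x fills
// a flat buffer in the same order the CPU-side readback would walk it.
struct Float3 { float x, y, z; };

void storePixel(std::vector<Float3>& data, std::size_t width,
                std::size_t x, std::size_t y, Float3 pixel)
{
    data[y * width + x] = pixel;
}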
Writing it out, it sounds like a silly problem, but nonetheless I'm quite stuck.
Thanks in advance!

DirectX/C++: Marching Cubes Indexing

I've implemented the Marching Cubes algorithm in a DirectX environment (to test and have fun). Upon completion, I noticed that the resulting model looks heavily distorted, as if the indices were off.
I've attempted to extract the indices, but I think the vertices are already ordered correctly, using the lookup tables and examples at http://paulbourke.net/geometry/polygonise/. The current build uses a 15^3 volume.
Marching cubes iterates over the array as normal:
for (float iX = 0; iX < CellFieldSize.x; iX++){
    for (float iY = 0; iY < CellFieldSize.y; iY++){
        for (float iZ = 0; iZ < CellFieldSize.z; iZ++){
            MarchCubes(XMFLOAT3(iX*StepSize, iY*StepSize, iZ*StepSize), StepSize);
        }
    }
}
The MarchCubes function is defined as:
void MC::MarchCubes(XMFLOAT3 in_Position, float Scale){
    ...
    int Corner, Vertex, VertexTest, Edge, Triangle, FlagIndex, EdgeFlags;
    float Offset;
    XMFLOAT3 Color;
    float CubeValue[8];
    XMFLOAT3 EdgeVertex[12];
    XMFLOAT3 EdgeNorm[12];
    //Local copy of the field values at the cube's eight corners
    for (Vertex = 0; Vertex < 8; Vertex++) {
        CubeValue[Vertex] = (this->*fSample)(
            in_Position.x + VertexOffset[Vertex][0] * Scale,
            in_Position.y + VertexOffset[Vertex][1] * Scale,
            in_Position.z + VertexOffset[Vertex][2] * Scale
        );
    }
    FlagIndex = 0;
Intersection calculations:
    ...
    //Test vertices for intersection.
    for (VertexTest = 0; VertexTest < 8; VertexTest++){
        if (CubeValue[VertexTest] <= TargetValue)
            FlagIndex |= 1 << VertexTest;
    }
    //Find which edges are intersected by the surface.
    EdgeFlags = CubeEdgeFlags[FlagIndex];
    if (EdgeFlags == 0){
        return;
    }
    for (Edge = 0; Edge < 12; Edge++){
        if (EdgeFlags & (1 << Edge)) {
            Offset = GetOffset(CubeValue[EdgeConnection[Edge][0]], CubeValue[EdgeConnection[Edge][1]], TargetValue); // Get offset function definition. Needed!
            EdgeVertex[Edge].x = in_Position.x + VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0] * Scale;
            EdgeVertex[Edge].y = in_Position.y + VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1] * Scale;
            EdgeVertex[Edge].z = in_Position.z + VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2] * Scale;
            GetNormal(EdgeNorm[Edge], EdgeVertex[Edge].x, EdgeVertex[Edge].y, EdgeVertex[Edge].z); //Need normal values
        }
    }
And the original implementation gets pushed into a holding struct for DirectX.
    for (Triangle = 0; Triangle < 5; Triangle++) {
        if (TriangleConnectionTable[FlagIndex][3 * Triangle] < 0) break;
        for (Corner = 0; Corner < 3; Corner++) {
            Vertex = TriangleConnectionTable[FlagIndex][3 * Triangle + Corner];
            GetColor(Color, EdgeVertex[Vertex], EdgeNorm[Vertex]);
            Data.VertexData.push_back(XMFLOAT3(EdgeVertex[Vertex].x, EdgeVertex[Vertex].y, EdgeVertex[Vertex].z));
            Data.NormalData.push_back(XMFLOAT3(EdgeNorm[Vertex].x, EdgeNorm[Vertex].y, EdgeNorm[Vertex].z));
            Data.ColorData.push_back(XMFLOAT4(Color.x, Color.y, Color.z, 1.0f));
        }
    }
(This is the same ordering as the original GL implementation)
Turns out I had missed the parentheses that control operator precedence:
EdgeVertex[Edge].x = in_Position.x + (VertexOffset[EdgeConnection[Edge][0]][0] + Offset * EdgeDirection[Edge][0]) * Scale;
EdgeVertex[Edge].y = in_Position.y + (VertexOffset[EdgeConnection[Edge][0]][1] + Offset * EdgeDirection[Edge][1]) * Scale;
EdgeVertex[Edge].z = in_Position.z + (VertexOffset[EdgeConnection[Edge][0]][2] + Offset * EdgeDirection[Edge][2]) * Scale;
Corrected, obtained Visine; resumed fun.