I'm writing a simple ray tracer, and to keep it simple for now I've decided to have only spheres in my scene. At this stage I merely want to confirm that my rays are intersecting a sphere in the scene properly, nothing else. I've created a Ray and Sphere class and a function in my main file which goes through each pixel to see if there's an intersection (relevant code is posted below). The problem is that the intersection with the sphere is behaving rather strangely. If I create a sphere with center (0, 0, -20) and a radius of 1, then I get only one intersection, which is always at the very first pixel of what would be my image (upper-left corner). Once I reach a radius of 15 I suddenly get three intersections in the upper-left region. A radius of 18 gives me six intersections, and once I reach a radius of 20+ I suddenly get an intersection for EACH pixel, so something is clearly not behaving as it's supposed to.
I suspected that my ray-sphere intersection code might be at fault, but having looked through it and searched the net for more information, most solutions describe the very same approach I use, so I assume it shouldn't(!) be the problem. So I am not exactly sure what I am doing wrong; it could be my intersection code or it could be something else entirely, but I just can't seem to find it. Could it be that I am choosing the wrong values for the sphere and the rays? The relevant code is below.
Sphere class:
Sphere::Sphere(glm::vec3 center, float radius)
: m_center(center), m_radius(radius), m_radiusSquared(radius*radius)
{
}
//Sphere-ray intersection. Equation: (P-C)^2 - R^2 = 0, P = o+t*d
//(P-C)^2 - R^2 => (o+t*d-C)^2-R^2 => o^2+(td)^2+C^2+2td(o-C)-2oC-R^2
//=> at^2+bt+c, a = d*d, b = 2d(o-C), c = (o-C)^2-R^2
//o = ray origin, d = ray direction, C = sphere center, R = sphere radius
bool Sphere::intersection(Ray& ray) const
{
//Squared distance between ray origin and sphere center
float squaredDist = glm::dot(ray.origin()-m_center, ray.origin()-m_center);
//If the squared distance is less than or equal to the squared radius of the sphere...
if(squaredDist <= m_radiusSquared)
{
//Point is in sphere, consider as no intersection existing
//std::cout << "Point inside sphere..." << std::endl;
return false;
}
//Will hold solution to quadratic equation
float t0, t1;
//Calculating the coefficients of the quadratic equation
float a = glm::dot(ray.direction(),ray.direction()); // a = d*d
float b = 2.0f*glm::dot(ray.direction(),ray.origin()-m_center); // b = 2d(o-C)
float c = glm::dot(ray.origin()-m_center, ray.origin()-m_center) - m_radiusSquared; // c = (o-C)^2-R^2
//Calculate discriminant
float disc = (b*b)-(4.0f*a*c);
if(disc < 0) //If discriminant is negative no intersection happens
{
//std::cout << "No intersection with sphere..." << std::endl;
return false;
}
else //If the discriminant is non-negative, one or two intersections (two solutions) exist
{
float sqrt_disc = glm::sqrt(disc);
t0 = (-b - sqrt_disc) / (2.0f * a);
t1 = (-b + sqrt_disc) / (2.0f * a);
}
//If the second intersection has a negative value then the intersections
//happen behind the ray origin which is not considered. Otherwise t0 is
//the intersection to be considered
if(t1<0)
{
//std::cout << "No intersection with sphere..." << std::endl;
return false;
}
else
{
//std::cout << "Intersection with sphere..." << std::endl;
return true;
}
}
Program:
#include "Sphere.h"
#include "Ray.h"
void renderScene(const Sphere& s);
const int imageWidth = 400;
const int imageHeight = 400;
int main()
{
//Create sphere with center in (0, 0, -20) and with radius 10
Sphere testSphere(glm::vec3(0.0f, 0.0f, -20.0f), 10.0f);
renderScene(testSphere);
return 0;
}
//Shoots rays through each pixel and checks if there's an intersection with
//a given sphere. If an intersection exists then the counter is increased.
void renderScene(const Sphere& s)
{
//Ray r(origin, direction)
Ray r(glm::vec3(0.0f), glm::vec3(0.0f));
//Will hold the total amount of intersections
int counter = 0;
//Loops through each pixel...
for(int y=0; y<imageHeight; y++)
{
for(int x=0; x<imageWidth; x++)
{
//Change ray direction for each pixel being processed
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth), ((imageHeight/2-y)/(float)imageHeight), -1.0f));
//If current ray intersects sphere...
if(s.intersection(r))
{
//Increase counter
counter++;
}
}
}
std::cout << counter << std::endl;
}
Your second solution (t1) to the quadratic equation is wrong in the case disc > 0, where you need something like:
float sqrt_disc = glm::sqrt(disc);
t0 = (-b - sqrt_disc) / (2 * a);
t1 = (-b + sqrt_disc) / (2 * a);
I think it's best to write out the equation in this form rather than turning the division by 2 into a multiplication by 0.5, because the more the code resembles the mathematics, the easier it is to check.
A few other minor comments:
It seemed confusing to re-use the name disc for sqrt(disc), so I used a new variable name above.
You don't need to test for t0 > t1, since you know that both a and sqrt_disc are positive, and so t1 is always greater than t0.
If the ray origin is inside the sphere, it's possible for t0 to be negative and t1 to be positive. You don't seem to handle this case (see the sketch after these comments).
You don't need a special case for disc == 0, as the general case computes the same values as the special case. (And the fewer special cases you have, the easier it is to check your code.)
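Putting those comments together, a minimal sketch of how the whole test might look (same glm-based Sphere/Ray members as in your code; note that, unlike your version, this treats a ray starting inside the sphere as a hit, which you may or may not want):

//Sketch only: no disc == 0 special case, no t0/t1 swap, and a ray origin
//inside the sphere counts as a hit (t0 < 0 <= t1).
bool Sphere::intersection(Ray& ray) const
{
    const glm::vec3 oc = ray.origin() - m_center;
    const float a = glm::dot(ray.direction(), ray.direction());   // a = d*d
    const float b = 2.0f * glm::dot(ray.direction(), oc);         // b = 2d(o-C)
    const float c = glm::dot(oc, oc) - m_radiusSquared;           // c = (o-C)^2-R^2

    const float disc = b*b - 4.0f*a*c;
    if(disc < 0.0f)
        return false; // no real roots, the ray misses the sphere

    const float sqrt_disc = glm::sqrt(disc);
    const float t0 = (-b - sqrt_disc) / (2.0f * a); // nearer root
    const float t1 = (-b + sqrt_disc) / (2.0f * a); // farther root, t1 >= t0 since a > 0

    // t1 < 0: both hits are behind the origin.
    // t0 < 0 <= t1: the origin is inside the sphere; the exit point is in front.
    // 0 <= t0: the usual case, t0 is the first hit.
    return t1 >= 0.0f;
}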
If I understand your code correctly, you might want to try:
r.setDirection(glm::vec3(((x-imageWidth/2)/(float)imageWidth),
((imageHeight/2-y)/(float)imageHeight),
-1.0f));
Right now, you've positioned the camera one unit away from the screen, but the rays can shoot as much as 400 units to the right and down. This is a very broad field of view. Also, your rays are only sweeping one octant of space. This is why you only get a handful of pixels in the upper-left corner of the screen. The code I wrote above should rectify that.
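If it helps, here is a hedged sketch of a more conventional mapping (the function name and the 60° vertical field of view are my own choices, not from your code): each pixel is mapped to [-1, 1], then scaled by the aspect ratio and tan(fov/2), with the image plane one unit in front of the camera.

#include <glm/glm.hpp>
#include <cmath>

// Sketch: map pixel (x, y) to a ray direction; camera looks down -z,
// x grows to the right, y grows upward. Names and fov are illustrative.
glm::vec3 pixelToDirection(int x, int y, int imageWidth, int imageHeight,
                           float verticalFov = glm::radians(60.0f))
{
    // Pixel center in normalized device coordinates, range (-1, 1)
    const float ndcX = ((x + 0.5f) / imageWidth) * 2.0f - 1.0f;
    const float ndcY = 1.0f - ((y + 0.5f) / imageHeight) * 2.0f;
    const float aspect = imageWidth / (float)imageHeight;
    const float halfSpan = std::tan(verticalFov * 0.5f);
    return glm::vec3(ndcX * aspect * halfSpan, ndcY * halfSpan, -1.0f);
}

The loop body would then just call r.setDirection(pixelToDirection(x, y, imageWidth, imageHeight)); the exact field of view is a matter of taste.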
Using the following code (employing GeographicLib), which moves a coordinate and then moves it back again, creates an offset from the original starting point.
The difference is growing with the distance of movement and depends on the Azimuth. The same is true for using both GeodesicExact and Geodesic.
What I want to achieve in the end is creating a latLon shape by moving the starting coordinate.
Is there an exact/better way of doing this, or am I missing something fundamental?
inline double distanceInMeters(const GeoCoords &_c1, const GeoCoords &_c2) {
GeodesicExact geod = geodWGS84(); // GeodesicExact::WGS84()
double meters;
geod.Inverse(_c1.Latitude(), _c1.Longitude(),
_c2.Latitude(), _c2.Longitude(),
meters);
return meters;
}
// move coord _byMeters in direction _azimuth
// inexact with horiz moves !!!
inline GeoCoords move(const GeoCoords &_coords, const double &_azimuth, const double &_byMeters) {
GeodesicExact geod = geodWGS84(); // GeodesicExact::WGS84()
double latOut, lngOut;
geod.Direct(_coords.Latitude(), _coords.Longitude(), _azimuth, _byMeters, latOut, lngOut);
return GeoCoords(latOut, lngOut);
}
inline void testDistanceMove() {
GeoCoords c(12.3456789, 12.3456789);
GeoCoords cc = c;
double dist = 123459998.6789; // meters
bool bHorz = true; // <-- creates some offset???
bool bVert = true; // almost exact
if (bHorz) cc = move(cc, Azimuth::WEST, dist); // 270.
if (bVert) cc = move(cc, Azimuth::SOUTH, dist); // 180
if (bHorz) cc = move(cc, Azimuth::EAST, dist); // 90.
if (bVert) cc = move(cc, Azimuth::NORTH, dist); // 0.
ofLogNotice((__func__)) << "c : " << toString(c);
ofLogNotice((__func__)) << "cc: " << toString(cc);
double diff = distanceInMeters(c, cc);
ofLogNotice((__func__)) << "diff: " << ofToString(diff, 12) << " m";
}
Simple notions of planar geometry don't apply on a sphere or an ellipsoid. For example, the sum of the interior angles of a quadrilateral is more than 360°. You would get approximate closure if the distance were small (on the order of 1 km) and you were not close to a pole; however, your distance is more than 3 times the circumference of the earth, so all bets are off.
ADDENDUM
To help picture the issues, consider starting 1 meter south of the north pole and drawing the 4 edges (successively west, south, east, and north), each of length 1 meter. Because 1 meter is much less than the radius of the earth, this is a planar problem. The polyline then looks like this (the dashed lines are meridians):
The picture looks even more strange if you start within 1 meter of the south pole.
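To see the scale dependence numerically, here is a small standalone sketch (it uses GeographicLib's stock Geodesic class directly rather than the question's wrappers, and the 1 km comparison distance is my own choice) that walks the same west/south/east/north loop and measures the closure error:

#include <GeographicLib/Geodesic.hpp>
#include <iostream>

int main() {
    const GeographicLib::Geodesic& geod = GeographicLib::Geodesic::WGS84();
    const double lat0 = 12.3456789, lon0 = 12.3456789;
    const double dists[] = { 1000.0, 123459998.6789 };     // 1 km vs. the question's distance
    const double azimuths[] = { 270.0, 180.0, 90.0, 0.0 }; // W, S, E, N
    for (double dist : dists) {
        double lat = lat0, lon = lon0;
        for (double azimuth : azimuths) {
            double lat2, lon2;
            geod.Direct(lat, lon, azimuth, dist, lat2, lon2);
            lat = lat2; lon = lon2;
        }
        double closure;
        geod.Inverse(lat0, lon0, lat, lon, closure);
        std::cout << "leg length " << dist << " m -> closure error " << closure << " m\n";
    }
    return 0;
}

For the short loop the closure error is tiny; for the long one it is enormous, which is exactly the effect described above.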
I didn't see another post with a problem similar to mine, so hopefully this is not redundant.
I've been reading a book on the fundamentals of computer graphics (third edition) and I've been implementing a basic ray tracing program based on the principles I've learned from it. I had little trouble implementing parallel and perspective projection, but after moving on to Lambertian and Blinn-Phong shading I've run into a snag that I'm having trouble figuring out on my own.
I believe my problem is related to how I am calculating the ray-sphere intersection point and the vectors to the camera/light. I attached a picture that is output when I run simply perspective projection with no shading.
Perspective Output
However, when I attempt the same scene with Lambertian shading the spheres disappear.
Blank Output
While trying to debug this myself I noticed that if I negate the x, y, z coordinates calculated as the hit point, the spheres appear again. And I believe the light is coming from the opposite direction I expect.
Lambertian, negated hitPoint
I am calculating the hit point by adding the product of the projected direction vector and the t value, calculated by the ray-sphere intersection formula, to the origin (where my "camera" is, 0,0,0) or just e + td.
The vector from the hit point to the light, l, I am setting to the light's position minus the hit point's position (so the light's coords minus the hit point's coords).
v, the vector from the hit point to the camera, I am getting by simply negating the projected view vector.
And the surface normal I am getting by hit point minus the sphere's position.
All of which I believe is correct. However, while stepping through the part that calculates the surface normal, I notice something I think is odd. When subtracting the sphere's position from the hit point's position to get the vector from the sphere's center to the hit point, I believe I should expect a vector where all of the values lie within the range (-r, r); but that is not happening.
This is an example from stepping through my code:
Calculated hit point: (-0.9971, 0.1255, -7.8284)
Sphere center: (0, 0, 8) (radius is 1)
After subtracting, I get a vector where the z value is -15.8284. This seems wrong to me, but I do not know what is causing it. Would a z value of -15.8284 not imply that the sphere center and the hit position are ~16 units away from each other along the z axis? Obviously these two numbers are within 1 of each other in absolute value terms, which is what leads me to think my problem has something to do with this.
Here's the main ray-tracing loop:
auto origin = Position3f(0, 0, 0);
for (int i = 0; i < numPixX; i++)
{
for (int j = 0; j < numPixY; j++)
{
for (SceneSurface* object : objects)
{
float imgPlane_u = left + (right - left) * (i + 0.5f) / numPixX;
float imgPlane_v = bottom + (top - bottom) * (j + 0.5f) / numPixY;
Vector3f direction = (w.negated() * focal_length) + (u * imgPlane_u) + (v * imgPlane_v);
Ray viewingRay(origin, eye, direction);
RayTestResult testResult = object->TestViewRay(viewingRay);
if (testResult.m_bRayHit)
{
Position3f hitPoint = (origin + (direction) * testResult.m_fDist);//.negated();
Vector3f light_direction = (light - hitPoint).toVector().normalized();
Vector3f view_direction = direction.negated().normalized();
Vector3f surface_normal = object->GetNormalAt(hitPoint);
image[j][i] = object->color * intensity * fmax(0, surface_normal * light_direction);
}
}
}
}
GetNormalAt is simply:
Vector3f Sphere::GetNormalAt(Position3f &surface)
{
return (surface - position).toVector().normalized();
}
My spheres are positioned at (0, 0, 8) and (-1.5, -1, 6), both with radius 1.0f.
My light is at (-3, -3, 0) with an intensity of 1.0f.
I ignore any intersection where t is not greater than 0 so I do not believe that is causing this problem.
I think I may be making some kind of mistake when it comes to keeping positions and vectors in the same coordinate system (same transform?), but I'm still learning and admittedly don't understand that very well. If the view direction is always in the -w direction, why do we position scene objects in the positive w direction?
Any help or wisdom is greatly appreciated. I'm teaching this all to myself so far and I'm pleased with how much I've taken in, but something in my gut tells me this is a relatively simple mistake.
Just in case it is of any use, here's the TestViewRay function:
RayTestResult Sphere::TestViewRay(Ray &viewRay)
{
RayTestResult result;
result.m_bRayHit = false;
Position3f &c = position;
float r = radius;
Vector3f &d = viewRay.getDirection();
Position3f &e = viewRay.getPosition();
float part = d*(e - c);
Position3f part2 = (e - c);
float part3 = d * d;
float discriminant = ((part*part) - (part3)*((part2*part2) - (r * r)));
if (discriminant > 0)
{
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
float t_sub = ((d) * (part2)-sqrt(discriminant)) / (part3);
float t = fmin(t_add, t_sub);
if (t > 0)
{
result.m_iNumberOfSolutions = 2;
result.m_bRayHit = true;
result.m_fDist = t;
}
}
else if (discriminant == 0)
{
float t_add = ((d)* (part2)+sqrt(discriminant)) / (part3);
float t_sub = ((d)* (part2)-sqrt(discriminant)) / (part3);
float t = fmin(t_add, t_sub);
if (t > 0)
{
result.m_iNumberOfSolutions = 1;
result.m_bRayHit = true;
result.m_fDist = t;
}
}
return result;
}
EDIT:
I'm happy to report I figured out my problem.
Upon sitting down with my sister to look at this I noticed in my ray-sphere hit detection I had this:
float t_add = ((d) * (part2)+sqrt(discriminant)) / (part3);
Which is incorrect. d should be negative. It should be:
float t_add = ((neg_d * (e_min_c)) + sqrt(discriminant)) / (part2);
(I renamed a couple of variables.) Previously I had a zeroed vector so I could express -d as (zero_vector - d), and I had removed that because I implemented a member function to negate any given vector; but I forgot to go back and call it on d. After fixing that and moving my spheres into the negative z plane, my Lambertian and Blinn-Phong shading implementations work correctly.
Lambertian + Blinn-Phong
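For reference, a minimal sketch of the same solution with the sign convention written out explicitly, t = (-d·(e-c) ± sqrt(disc)) / (d·d). It uses glm types for brevity rather than my Vector3f/Position3f classes, and the struct/function names are illustrative only:

#include <glm/glm.hpp>
#include <cmath>

struct SphereHit { bool hit; float t; };

// Sketch: ray e + t*d against a sphere (center, radius); returns the first
// hit in front of the eye. Note the minus sign on d·(e-c), which was the bug.
SphereHit intersectSphere(const glm::vec3& e, const glm::vec3& d,
                          const glm::vec3& center, float radius)
{
    const glm::vec3 ec = e - center;
    const float dd = glm::dot(d, d);
    const float de = glm::dot(d, ec);
    const float disc = de*de - dd*(glm::dot(ec, ec) - radius*radius);
    if (disc < 0.0f)
        return { false, 0.0f };
    const float sqrtDisc = std::sqrt(disc);
    const float tNear = (-de - sqrtDisc) / dd;
    const float tFar  = (-de + sqrtDisc) / dd;
    const float t = (tNear > 0.0f) ? tNear : tFar;
    return { t > 0.0f, t };
}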
I have a set of 2D points. I need to find a minimum area ellipse enclosing all the points. Could someone give an idea of how the problem has to be tackled? For a circle it was simple: the largest distance between the center and a point. But for an ellipse it's quite complicated, and I do not know how to approach it. I have to implement this in C++.
These don't go as far as giving you C++ code, but they include in-depth discussion of effective algorithms for what you need to do.
https://www.cs.cornell.edu/cv/OtherPdf/Ellipse.pdf
http://www.stsci.edu/~RAB/Backup%20Oct%2022%202011/f_3_CalculationForWFIRSTML/Gaertner%20&%20Schoenherr.pdf
The other answers here give approximation schemes or only provide links. We can do better.
Your question is addressed by the paper "Smallest Enclosing Ellipses -- Fast and Exact" by Gärtner and Schönherr (1997). The same authors provide a C++ implementation in their 1998 paper "Smallest Enclosing Ellipses -- An Exact and Generic Implementation in C++". This algorithm is implemented in a more usable form in CGAL here.
However, CGAL only provides the general equation for the ellipse, so we use a few transforms to get a parametric equation suitable for plotting.
All this is included in the implementation below.
Using WebPlotDigitizer to extract your data while choosing arbitrary values for the lengths of the axes, but preserving their aspect ratio, gives:
-1.1314123177813773 4.316368664322679
1.345680085331649 5.1848164974519015
2.2148682495160603 3.9139687117291504
0.9938150357523803 3.2732678860664475
-0.24524315569075128 3.0455750009876343
-1.4493153715482157 2.4049282977126376
0.356472958558844 0.0699802473037554
2.8166270295895384 0.9211630387547896
3.7889384901038987 -0.8484766720657362
1.3457654169794182 -1.6996053411290646
2.9287101489353287 -3.1919219373444463
0.8080480385572635 -3.990389523169913
0.46847074625686425 -4.008682890214516
-1.6521060324734327 -4.8415723146209455
Fitting this using the program below gives:
a = 3.36286
b = 5.51152
cx = 0.474112
cy = -0.239756
theta = -0.0979706
We can then plot this with gnuplot
set parametric
plot "points" pt 7 ps 2, [0:2*pi] a*cos(t)*cos(theta) - b*sin(t)*sin(theta) + cx, a*cos(t)*sin(theta) + b*sin(t)*cos(theta) +
cy lw 2
to get
Implementation
The code below does this:
// Compile with clang++ -DBOOST_ALL_NO_LIB -DCGAL_USE_GMPXX=1 -O2 -g -DNDEBUG -Wall -Wextra -pedantic -march=native -frounding-math main.cpp -lgmpxx -lmpfr -lgmp
#include <CGAL/Cartesian.h>
#include <CGAL/Min_ellipse_2.h>
#include <CGAL/Min_ellipse_2_traits_2.h>
#include <CGAL/Exact_rational.h>
#include <cassert>
#include <cmath>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>
typedef CGAL::Exact_rational NT;
typedef CGAL::Cartesian<NT> K;
typedef CGAL::Point_2<K> Point;
typedef CGAL::Min_ellipse_2_traits_2<K> Traits;
typedef CGAL::Min_ellipse_2<Traits> Min_ellipse;
struct EllipseCanonicalEquation {
double semimajor; // Length of semi-major axis
double semiminor; // Length of semi-minor axis
double cx; // x-coordinate of center
double cy; // y-coordinate of center
double theta; // Rotation angle
};
std::vector<Point> read_points_from_file(const std::string &filename){
std::vector<Point> ret;
std::ifstream fin(filename);
float x,y;
while(fin>>x>>y){
std::cout<<x<<" "<<y<<std::endl;
ret.emplace_back(x, y);
}
return ret;
}
// Uses "Smallest Enclosing Ellipses -- An Exact and Generic Implementation in C++"
// under the hood.
EllipseCanonicalEquation get_min_area_ellipse_from_points(const std::vector<Point> &pts){
// Compute minimum ellipse using randomization for speed
Min_ellipse me2(pts.data(), pts.data()+pts.size(), true);
std::cout << "done." << std::endl;
// If it's degenerate, the ellipse is a line or a point
assert(!me2.is_degenerate());
// Get coefficients for the equation
// r*x^2 + s*y^2 + t*x*y + u*x + v*y + w = 0
double r, s, t, u, v, w;
me2.ellipse().double_coefficients(r, s, t, u, v, w);
// Convert from CGAL's coefficients to Wikipedia's coefficients
// A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0
const double A = r;
const double B = t;
const double C = s;
const double D = u;
const double E = v;
const double F = w;
// Get the canonical form parameters
// Using equations from https://en.wikipedia.org/wiki/Ellipse#General_ellipse
const auto a = -std::sqrt(2*(A*E*E+C*D*D-B*D*E+(B*B-4*A*C)*F)*((A+C)+std::sqrt((A-C)*(A-C)+B*B)))/(B*B-4*A*C);
const auto b = -std::sqrt(2*(A*E*E+C*D*D-B*D*E+(B*B-4*A*C)*F)*((A+C)-std::sqrt((A-C)*(A-C)+B*B)))/(B*B-4*A*C);
const auto cx = (2*C*D-B*E)/(B*B-4*A*C);
const auto cy = (2*A*E-B*D)/(B*B-4*A*C);
double theta;
if(B!=0){
theta = std::atan(1/B*(C-A-std::sqrt((A-C)*(A-C)+B*B)));
} else if(A<C){
theta = 0;
} else { //A>C
theta = M_PI/2; // 90°: major axis along y when B==0 and A>C (per the Wikipedia canonical form)
}
return EllipseCanonicalEquation{a, b, cx, cy, theta};
}
int main(int argc, char** argv){
if(argc!=2){
std::cerr<<"Provide name of input containing a list of x,y points"<<std::endl;
std::cerr<<"Syntax: "<<argv[0]<<" <Filename>"<<std::endl;
return -1;
}
const auto pts = read_points_from_file(argv[1]);
const auto eq = get_min_area_ellipse_from_points(pts);
// Convert canonical equation for rotated ellipse to parametric based on:
// https://math.stackexchange.com/a/2647450/14493
std::cout << "Ellipse has the parametric equation " << std::endl;
std::cout << "x(t) = a*cos(t)*cos(theta) - b*sin(t)*sin(theta) + cx"<<std::endl;
std::cout << "y(t) = a*cos(t)*sin(theta) + b*sin(t)*cos(theta) + cy"<<std::endl;
std::cout << "with" << std::endl;
std::cout << "a = " << eq.semimajor << std::endl;
std::cout << "b = " << eq.semiminor << std::endl;
std::cout << "cx = " << eq.cx << std::endl;
std::cout << "cy = " << eq.cy << std::endl;
std::cout << "theta = " << eq.theta << std::endl;
return 0;
}
Not sure if I can prove it, but it seems to me that the optimal solution would be characterized by being tangent to (at least) 3 of the points, while all the other points are inside the ellipse (think about it!). So if nothing else, you should be able to brute force it by checking all ~n^3 triplets of points and checking if they define a solution. It should be possible to improve on that by removing all points that would have to be strictly inside any surrounding ellipse, but I'm not sure how that could be done. Maybe by sorting the points by x and y coordinates and then doing something fancy.
Not a complete solution, but it's a start.
EDIT:
Unfortunately 3 points aren't enough to define an ellipse. But perhaps it works if you restrict it to the smallest-area ellipse tangent to 3 of the points?
As Rory Daulton suggests, you need to clearly specify the constraints of the solution, and removing any of them greatly complicates things. For starters, assume this for now:
it is 2D problem
ellipse is axis aligned
center is arbitrary instead of (0,0)
I would attack this as a standard generate-and-test problem with approximation search (which is a hybrid between binary search and linear search) to speed it up (but you can also try brute force from the start so you can see whether it works).
compute constraints of solution
To limit the search you need to find an approximate placement position and size of the ellipse. For that you can use an out-scribed circle for your points. It is clear that the ellipse area will be less than or equal to that of the circle, and the placement will be nearby. The circle does not have to be the smallest one possible, so we can use for example this:
find bounding box of the points
let the circle be centered on that bounding box, with radius equal to the max distance from its center to any of the points.
This will be O(n) complexity where n is number of your points.
search "all" the possible ellipses and remember best solution
So we need to find the ellipse center (x0,y0) and semi-axes rx,ry such that area = M_PI*rx*ry is minimal. With approximation search each variable has a factor of O(log(m)) and each iteration needs to test validity, which is O(n), so the final complexity would be O(n.log^4(m)) where m is the average number of possible variations of each search parameter (dependent on accuracy and search constraints). With a simple brute-force search it would be O(n.m^4), which is really scary, especially for floating point where m can be really big.
To speed this up, we know that the area of the ellipse will be less than or equal to the area of the found circle, so we can ignore all bigger ellipses. The constraints on rx,ry can be derived from the aspect ratio of the bounding box +/- some reserve.
Here is a simple small C++ example using the approx class from the link above:
//---------------------------------------------------------------------------
// input points
const int n=15; // number of random points to test
float pnt[n][2];
// debug bounding box
float box_x0,box_y0,box_x1,box_y1;
// debug outscribed circle
float circle_x,circle_y,circle_r;
// solution ellipse
float ellipse_x,ellipse_y,ellipse_rx,ellipse_ry;
//---------------------------------------------------------------------------
void compute(float x0,float y0,float x1,float y1) // call with the bounding box where you want your points to be generated
{
int i;
float x,y;
// generate n random 2D points inside defined area
Randomize();
for (i=0;i<n;i++)
{
pnt[i][0]=x0+(x1-x0)*Random();
pnt[i][1]=y0+(y1-y0)*Random();
}
// compute bounding box
x0=pnt[0][0]; x1=x0;
y0=pnt[0][1]; y1=y0;
for (i=0;i<n;i++)
{
x=pnt[i][0]; if (x0>x) x0=x; if (x1<x) x1=x;
y=pnt[i][1]; if (y0>y) y0=y; if (y1<y) y1=y;
}
box_x0=x0; box_x1=x1;
box_y0=y0; box_y1=y1;
// "outscribed" circle
circle_x=0.5*(x0+x1);
circle_y=0.5*(y0+y1);
circle_r=0.0;
for (i=0;i<n;i++)
{
x=pnt[i][0]-circle_x; x*=x;
y=pnt[i][1]-circle_y; y*=y; x+=y;
if (circle_r<x) circle_r=x;
}
circle_r=sqrt(circle_r);
// smallest area ellipse
int N;
double m,e,step,area;
approx ax,ay,aa,ab;
N=3; // number of recursions each one improves accuracy with factor 10
area=circle_r*circle_r; // solution will not be bigger that this
step=((x1-x0)+(y1-y0))*0.05; // initial position/size step for the search as 1/10 of avg bounding box size
for (ax.init( x0, x1,step,N,&e);!ax.done;ax.step()) // search x0
for (ay.init( y0, y1,step,N,&e);!ay.done;ay.step()) // search y0
for (aa.init(0.5*(x1-x0),2.0*circle_r,step,N,&e);!aa.done;aa.step()) // search rx
for (ab.init(0.5*(y1-y0),2.0*circle_r,step,N,&e);!ab.done;ab.step()) // search ry
{
e=aa.a*ab.a;
// is ellipse outscribed?
if (aa.a>=ab.a)
{
m=aa.a/ab.a; // convert to circle of radius rx
for (i=0;i<n;i++)
{
x=(pnt[i][0]-ax.a); x*=x;
y=(pnt[i][1]-ay.a)*m; y*=y;
// throw away this ellipse if not
if (x+y>aa.a*aa.a) { e=2.0*area; break; }
}
}
else{
m=ab.a/aa.a; // convert to circle of radius ry
for (i=0;i<n;i++)
{
x=(pnt[i][0]-ax.a)*m; x*=x;
y=(pnt[i][1]-ay.a); y*=y;
// throw away this ellipse if not
if (x+y>ab.a*ab.a) { e=2.0*area; break; }
}
}
}
ellipse_x =ax.aa;
ellipse_y =ay.aa;
ellipse_rx=aa.aa;
ellipse_ry=ab.aa;
}
//---------------------------------------------------------------------------
Even this simple example with only 15 points took around 2 seconds to compute. You can improve performance by adding heuristics, like testing only areas lower than circle_r^2, or selecting the solution area with some better mathematical rule. If you use brute force instead of approximation search, expect the computation time to be minutes or more, hence the O(scary)...
Beware this example will not work for every aspect ratio of the points, as I hardcoded the upper bound for rx,ry to 2.0*circle_r, which may not be enough. Instead you can compute the upper bound from the aspect ratio of the points and/or the condition rx*ry<=circle_r^2...
There are also other ("faster") methods for example variation of CCD (cyclic coordinate descend) can be used. But such methods usually can not guarantee that optimal solution will be found or any at all ...
Here is an overview of the example output:
The dots are individual points from pnt[n], the gray dashed stuff are bounding box and used out-scribed circle. The green ellipse is found solution.
Code for MVEE (minimal volume enclosing ellipse) can be found here, and works even for non-centered and rotated ellipses:
https://github.com/chrislarson1/MVEE
My related code:
bool _mvee(const std::vector<cv::Point> & contour, cv::RotatedRect & ellipse, const float epsilon, const float lmc) {
std::vector<cv::Point> hull;
cv::convexHull(contour, hull);
mvee::Mvee B;
std::vector<std::vector<double>> X;
// speedup: the mve-ellipse on the convex hull should be the same theoretically as the one on the entire contour
for (const auto &points : hull) {
std::vector<double> p = {double(points.x), double(points.y)};
X.push_back(p); // speedup: the mve-ellipse on part of the points (e.g. one every 4) should be similar
}
B.compute(X, epsilon, lmc); // <-- call to the MVEE algorithm
cv::Point2d center(B.centroid()[0], B.centroid()[1]);
cv::Size2d size(B.radii()[0] * 2, B.radii()[1] * 2);
float angle = asin(B.pose()[1][0]) * 180 / CV_PI;
if (B.pose()[0][0] < 0) angle *= -1;
ellipse = cv::RotatedRect(center, size, angle);
if (std::isnan(ellipse.size.height)) {
LOG_ERR("pupil with nan size");
return false;
}
return true;
}
I'm tasked with optimizing the following ray tracer:
void Scene::RayTrace()
{
for (int v = 0; v < fb->h; v++) // all vertical pixels in framebuffer
{
calculateFPS(); // calculates the current fps and prints it
for (int u = 0; u < fb->w; u++) // all horizontal pixels in framebuffer
{
fb->Set(u, v, 0xFFAAAAAA); // background color
fb->SetZ(u, v, FLT_MAX); // sets the Z values to all be maximum at beginning
V3 ray = (ppc->c + ppc->a*((float)u + .5f) + ppc->b*((float)v + .5f)).UnitVector(); // gets the camera ray
for (int tmi = 0; tmi < tmeshesN; tmi++) // iterates over all triangle meshes
{
if (!tmeshes[tmi]->enabled) // doesn't render a tmesh if it's not set to be enabled
continue;
for (int tri = 0; tri < tmeshes[tmi]->trisN; tri++) // iterates over all triangles in the mesh
{
V3 Vs[3]; // triangle vertices
Vs[0] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 0]];
Vs[1] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 1]];
Vs[2] = tmeshes[tmi]->verts[tmeshes[tmi]->tris[3 * tri + 2]];
V3 bgt = ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); // I don't entirely understand what this does
if (bgt[2] < 0.0f || bgt[0] < 0.0f || bgt[1] < 0.0f || bgt[0] + bgt[1] > 1.0f)
continue;
if (fb->zb[(fb->h - 1 - v)*fb->w + u] < bgt[2])
continue;
fb->SetZ(u, v, bgt[2]);
float alpha = 1.0f - bgt[0] - bgt[1];
float beta = bgt[0];
float gamma = bgt[1];
V3 Cs[3]; // triangle vertex colors
Cs[0] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 0]];
Cs[1] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 1]];
Cs[2] = tmeshes[tmi]->cols[tmeshes[tmi]->tris[3 * tri + 2]];
V3 color = Cs[0] * alpha + Cs[1] * beta + Cs[2] * gamma;
fb->Set(u, v, color.GetColor()); // sets this pixel accordingly
}
}
}
fb->redraw();
Fl::check();
}
}
Two things:
I don't entirely understand what ppc->C.IntersectRayWithTriangleWithThisOrigin(ray, Vs); does. Can anyone explain this, in terms of ray-tracing, to me? Here is the function inside my "Planar Pinhole Camera" class (this function was given to me):
V3 V3::IntersectRayWithTriangleWithThisOrigin(V3 r, V3 Vs[3])
{
M33 m; // 3X3 matrix class
m.SetColumn(0, Vs[1] - Vs[0]);
m.SetColumn(1, Vs[2] - Vs[0]);
m.SetColumn(2, r*-1.0f);
V3 ret; // Vector3 class
V3 &C = *this;
ret = m.Inverse() * (C - Vs[0]);
return ret;
}
The basic steps of this are apparent, I just don't see what it's actually doing.
How would I go about optimizing this ray-tracer from here? I've found something online about "kd trees," but I'm unsure how complex they are. Does anyone have some good resources on simple solutions for optimizing this? I've had some difficulty deciphering what's out there.
Thanks!
Probably the largest optimisation by far would be to use some sort of bounding volume hierarchy. Right now the code intersects all rays with all triangles of all objects. With a BVH, we instead ask: "given this ray, which triangles intersect?" This means that for each ray, you generally only need to test for intersection with a handful of primitives and triangles, rather than every single triangle in the scene.
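As a feel for the core building block, here is a hedged sketch of a ray vs. axis-aligned bounding box "slab" test plus the shape of a BVH node (the types and names are illustrative, not from the code above; the scene's V3 class could be used instead of the raw arrays):

#include <algorithm>
#include <limits>
#include <vector>

struct AABB { float lo[3], hi[3]; };

// Slab test: does the ray origin + t*dir (t >= 0) pass through the box?
// invDir[i] = 1.0f / dir[i], precomputed once per ray.
bool rayHitsBox(const float origin[3], const float invDir[3], const AABB& box)
{
    float tmin = 0.0f;
    float tmax = std::numeric_limits<float>::max();
    for (int axis = 0; axis < 3; axis++)
    {
        float t0 = (box.lo[axis] - origin[axis]) * invDir[axis];
        float t1 = (box.hi[axis] - origin[axis]) * invDir[axis];
        if (t0 > t1) std::swap(t0, t1);
        tmin = std::max(tmin, t0);
        tmax = std::min(tmax, t1);
        if (tmin > tmax) return false; // slabs do not overlap: the ray misses the box
    }
    return true;
}

// A BVH node stores a box and either two children or a small set of triangle
// indices. Traversal tests the node's box first and only descends on a hit,
// so most triangles are never touched for a given ray.
struct BVHNode
{
    AABB box;
    int left = -1, right = -1;        // child node indices; -1 means "leaf"
    std::vector<int> triangleIndices; // filled only for leaves
};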
IntersectRayWithTriangleWithThisOrigin
from the look of it:
it creates an inverse transform matrix from the triangle edges (the triangle basis vectors are X,Y)
I do not get the Z axis; I would expect the ray direction there, not the position of the pixel (ray origin)
but I may be misinterpreting something
anyway, the inverse matrix computation is the biggest problem
you are computing it for each triangle, per pixel, and that is a lot
it would be faster to compute the inverse transform matrix of each triangle once, before ray tracing,
where X,Y are the basis and Z is perpendicular to both of them, always facing the same direction relative to the camera
and then just transform your ray into it and check the limits of intersection
that is just a matrix*vector multiply and a few ifs instead of an inverse matrix computation
another way would be to algebraically solve the ray vs. plane intersection
that should lead to a much simpler equation than matrix inversion
after that it is just a matter of basis-vector bound checking (one possible sketch follows below)
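One way to realize the suggestion above, as a rough sketch rather than a drop-in replacement (it uses glm instead of the question's V3/M33 classes, and the struct/function names are mine): precompute, once per triangle, the inverse of the matrix whose columns are the two edges and the (unnormalized) normal; then each ray only needs a ray/plane solve and one matrix*vector multiply.

#include <glm/glm.hpp>

struct PrecomputedTriangle
{
    glm::vec3 v0;      // first vertex
    glm::vec3 normal;  // cross(e1, e2), not normalized
    glm::mat3 inv;     // inverse of [e1 | e2 | normal], computed once per triangle
};

PrecomputedTriangle precompute(const glm::vec3& a, const glm::vec3& b, const glm::vec3& c)
{
    const glm::vec3 e1 = b - a, e2 = c - a;
    const glm::vec3 n = glm::cross(e1, e2);
    return { a, n, glm::inverse(glm::mat3(e1, e2, n)) }; // columns e1, e2, n
}

// Returns t >= 0 on a hit, a negative value on a miss.
float intersect(const PrecomputedTriangle& tri, const glm::vec3& origin, const glm::vec3& dir)
{
    const float denom = glm::dot(tri.normal, dir);
    if (denom == 0.0f) return -1.0f;                      // ray parallel to the plane
    const float t = glm::dot(tri.normal, tri.v0 - origin) / denom;
    if (t < 0.0f) return -1.0f;                           // plane is behind the ray
    // Express the hit point in the edge basis: P - v0 = u*e1 + v*e2
    const glm::vec3 bary = tri.inv * (origin + t * dir - tri.v0);
    const float u = bary.x, v = bary.y;                   // bary.z is ~0 by construction
    return (u >= 0.0f && v >= 0.0f && u + v <= 1.0f) ? t : -1.0f;
}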
I'm attempting to determine if a specific point lies inside a polyhedron. In my current implementation, the method I'm working on takes the point we're looking for and an array of the faces of the polyhedron (triangles in this case, but it could be other polygons later). I've been trying to work from the info found here: http://softsurfer.com/Archive/algorithm_0111/algorithm_0111.htm
Below, you'll see my "inside" method. I know that the nrml/normal thing is kind of weird .. it's the result of old code. When I was running this it seemed to always return true no matter what input I give it. (This is solved, please see my answer below -- this code is working now).
bool Container::inside(Point* point, float* polyhedron[3], int faces) {
Vector* dS = Vector::fromPoints(point->X, point->Y, point->Z,
100, 100, 100);
float T_e = 0;
float T_l = 1;
for (int i = 0; i < faces; i++) {
float* polygon = polyhedron[i];
float* nrml = normal(&polygon[0], &polygon[1], &polygon[2]);
Vector* normal = new Vector(nrml[0], nrml[1], nrml[2]);
delete nrml;
float N = -((point->X-polygon[0][0])*normal->X +
(point->Y-polygon[0][1])*normal->Y +
(point->Z-polygon[0][2])*normal->Z);
float D = dS->dot(*normal);
if (D == 0) {
if (N < 0) {
return false;
}
continue;
}
float t = N/D;
if (D < 0) {
T_e = (t > T_e) ? t : T_e;
if (T_e > T_l) {
return false;
}
} else {
T_l = (t < T_l) ? t : T_l;
if (T_l < T_e) {
return false;
}
}
}
return true;
}
This is in C++ but as mentioned in the comments, it's really very language agnostic.
The link in your question has expired and I could not understand the algorithm from your code. Assuming you have a convex polyhedron with counterclockwise oriented faces (seen from outside), it should be sufficient to check that your point is behind all faces. To do that, you can take the vector from the point to each face and check the sign of the scalar product with the face's normal. If it is positive, the point is behind the face; if it is zero, the point is on the face; if it is negative, the point is in front of the face.
Here is some complete C++11 code, that works with 3-point faces or plain more-point faces (only the first 3 points are considered). You can easily change bound to exclude the boundaries.
#include <vector>
#include <cassert>
#include <iostream>
#include <cmath>
struct Vector {
double x, y, z;
Vector operator-(Vector p) const {
return Vector{x - p.x, y - p.y, z - p.z};
}
Vector cross(Vector p) const {
return Vector{
y * p.z - p.y * z,
z * p.x - p.z * x,
x * p.y - p.x * y
};
}
double dot(Vector p) const {
return x * p.x + y * p.y + z * p.z;
}
double norm() const {
return std::sqrt(x*x + y*y + z*z);
}
};
using Point = Vector;
struct Face {
std::vector<Point> v;
Vector normal() const {
assert(v.size() > 2);
Vector dir1 = v[1] - v[0];
Vector dir2 = v[2] - v[0];
Vector n = dir1.cross(dir2);
double d = n.norm();
return Vector{n.x / d, n.y / d, n.z / d};
}
};
bool isInConvexPoly(Point const& p, std::vector<Face> const& fs) {
for (Face const& f : fs) {
Vector p2f = f.v[0] - p; // f.v[0] is an arbitrary point on f
double d = p2f.dot(f.normal());
d /= p2f.norm(); // for numeric stability
constexpr double bound = -1e-15; // use +1e-15 to exclude boundaries
if (d < bound)
return false;
}
return true;
}
int main(int argc, char* argv[]) {
assert(argc == 3+1);
char* end;
Point p;
p.x = std::strtod(argv[1], &end);
p.y = std::strtod(argv[2], &end);
p.z = std::strtod(argv[3], &end);
std::vector<Face> cube{ // faces with 4 points, last point is ignored
Face{{Point{0,0,0}, Point{1,0,0}, Point{1,0,1}, Point{0,0,1}}}, // front
Face{{Point{0,1,0}, Point{0,1,1}, Point{1,1,1}, Point{1,1,0}}}, // back
Face{{Point{0,0,0}, Point{0,0,1}, Point{0,1,1}, Point{0,1,0}}}, // left
Face{{Point{1,0,0}, Point{1,1,0}, Point{1,1,1}, Point{1,0,1}}}, // right
Face{{Point{0,0,1}, Point{1,0,1}, Point{1,1,1}, Point{0,1,1}}}, // top
Face{{Point{0,0,0}, Point{0,1,0}, Point{1,1,0}, Point{1,0,0}}}, // bottom
};
std::cout << (isInConvexPoly(p, cube) ? "inside" : "outside") << std::endl;
return 0;
}
Compile it with your favorite compiler
clang++ -Wall -std=c++11 code.cpp -o inpoly
and test it like
$ ./inpoly 0.5 0.5 0.5
inside
$ ./inpoly 1 1 1
inside
$ ./inpoly 2 2 2
outside
If your mesh is concave, and not necessarily watertight, that’s rather hard to accomplish.
As a first step, find the point on the surface of the mesh closest to the query point. You need to keep track of the location and the specific feature: whether the closest point is in the middle of a face, on an edge of the mesh, or one of the vertices of the mesh.
If the feature is a face, you're lucky: you can use the winding to find whether it's inside or outside. Compute the normal to the face (you don't even need to normalize it, non-unit length will do), then compute dot( normal, pt - tri[0] ) where pt is your point and tri[0] is any vertex of the face. If the faces have consistent winding, the sign of that dot product will tell you if it's inside or outside.
If the feature is an edge, compute normals to both faces (by normalizing a cross-product), add them together, use that as a normal to the mesh, and compute the same dot product.
The hardest case is when a vertex is the closest feature. To compute the mesh normal at that vertex, you need to compute the sum of the normals of the faces sharing that vertex, weighted by the 2D angle of each face at that vertex. For example, for a vertex of a cube with 3 neighbor triangles, the weights will be Pi/2. For a vertex of a cube with 6 neighbor triangles the weights will be Pi/4. And for real-life meshes the weights will be different for each face, in the range [ 0 .. +Pi ]. This means you're going to need some inverse trigonometry code for this case to compute the angle, probably acos() (see the sketch below).
If you want to know why that works, see e.g. “Generating Signed Distance Fields From Triangle Meshes” by J. Andreas Bærentzen and Henrik Aanæs.
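For the vertex case, a small sketch of that angle-weighted sum (glm is used for brevity; the Triangle struct and function names are illustrative, and the final sign test assumes consistently outward-facing winding as above):

#include <glm/glm.hpp>
#include <vector>
#include <cmath>

struct Triangle { glm::vec3 a, b, c; };

// Interior angle of the triangle at vertex v (v must be one of its corners).
static float angleAt(const Triangle& t, const glm::vec3& v)
{
    glm::vec3 p, q;
    if (v == t.a)      { p = t.b; q = t.c; }
    else if (v == t.b) { p = t.a; q = t.c; }
    else               { p = t.a; q = t.b; }
    const glm::vec3 e1 = glm::normalize(p - v);
    const glm::vec3 e2 = glm::normalize(q - v);
    return std::acos(glm::clamp(glm::dot(e1, e2), -1.0f, 1.0f));
}

// incident = all triangles sharing the vertex; each face normal is weighted
// by the interior angle (in [0, pi]) that the face makes at the vertex.
glm::vec3 pseudoNormalAtVertex(const glm::vec3& vertex, const std::vector<Triangle>& incident)
{
    glm::vec3 n(0.0f);
    for (const Triangle& t : incident)
    {
        const glm::vec3 faceNormal = glm::normalize(glm::cross(t.b - t.a, t.c - t.a));
        n += angleAt(t, vertex) * faceNormal;
    }
    return glm::normalize(n);
}

// As with the face case, the sign of dot(pseudoNormal, pt - vertex) then
// separates outside (positive) from inside (negative).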
I have already answered this question a couple of years ago. But since that time I've discovered a much better algorithm. It was invented in 2018; here's the link.
The idea is rather simple. Given that specific point, compute a sum of the signed solid angles of all faces of the polyhedron as viewed from that point. If the point is outside, that sum should be zero. If the point is inside, that sum should be ±4·π steradians; + or - depends on the winding order of the faces of the polyhedron.
That particular algorithm is packing the polyhedron into a tree, which dramatically improves performance when you need multiple inside/outside queries for the same polyhedron. The algorithm only computes solid angles for individual faces when the face is very close to the query point. For large sets of faces far away from the query point, the algorithm is instead using an approximation of these sets, using some numbers they keep in the nodes of that BVH tree they build from the source mesh.
With the limited precision of FP math, and with losses from the approximation if using that BVH tree, the sum will never be exactly 0 nor exactly ±4·π. But still, the 2·π threshold works rather well in practice, at least in my experience. If the absolute value of that sum of solid angles is less than 2·π, consider the point to be outside.
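For reference, a brute-force sketch of that idea without the BVH, reusing the Vector/Face types from the convex-polyhedron code earlier in this thread (only the first three vertices of each face are used); the per-triangle solid angle is the standard Van Oosterom-Strackee formula:

#include <cmath>
#include <vector>

// Signed solid angle of triangle (a, b, c) as seen from p:
// omega = 2 * atan2( det[A B C], |A||B||C| + (A.B)|C| + (B.C)|A| + (C.A)|B| )
double signedSolidAngle(Point const& p, Vector const& a, Vector const& b, Vector const& c) {
    const Vector A = a - p, B = b - p, C = c - p;
    const double la = A.norm(), lb = B.norm(), lc = C.norm();
    const double numer = A.dot(B.cross(C)); // determinant of [A B C]
    const double denom = la*lb*lc + A.dot(B)*lc + B.dot(C)*la + C.dot(A)*lb;
    return 2.0 * std::atan2(numer, denom);
}

bool isInsideByWindingNumber(Point const& p, std::vector<Face> const& faces) {
    double total = 0.0;
    for (Face const& f : faces)
        total += signedSolidAngle(p, f.v[0], f.v[1], f.v[2]);
    // Inside: |total| is near 4*pi; outside: near 0. 2*pi is a robust threshold.
    return std::abs(total) > 2.0 * M_PI;
}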
It turns out that the problem was my reading of the algorithm referenced in the link above. I was reading:
N = - dot product of (P0-Vi) and ni;
as
N = - dot product of S and ni;
Having changed this, the code above now seems to work correctly. (I'm also updating the code in the question to reflect the correct solution).