When implementing shapes using structures and classes in C++

Hi guys, so I'm trying to draw a tower by implementing shapes using structures and classes in C++. Below is what I'm trying to draw using 6 different shapes: point, shape, rectangle, circle, triangle, square. I have to create a .h and a .cpp file for each shape, and I never really understood what to put in the .h and .cpp files for each shape. I will include what I have so far in my main.cpp, and I would just like to know if I'm going in the right direction, as well as what kind of information/code I would have to write in each shape's .h and .cpp file.
Picture of what I'm trying to draw
main.cpp
#include <iostream>
#include "point.h"
#include "shape.h"
#include "rectangle.h"
#include "square.h"
#include "triangle.h"
#include "circle.h"
using namespace std;
int main(){
//create objects of the shapes
Rectangle r, r1;
Square s;
Triangle t;
Circle c, c1;
//first rectangle
r.setLineType('*');
r.moveBy(5, 5); //x and y coordinates
r.setHeight(5);
r.setWidth(20);
r.computeArea();
r.draw(); // draw a rectangle in 2D array
auto firstRectangleArea = r.computeArea();
auto firstRectangleCircumference = r.computeCircumference();
//second rectangle
r1.setLineType('*');
r1.moveBy(10, 0);
r1.setHeight(20);
r1.setWidth(5);
r1.computeArea();
r1.draw(); // draw a rectangle in 2D array
auto secondRectangleArea = r1.computeArea();
auto secondRectangleCircumference = r1.computeCircumference();
//triangle
t.setLineType('*');
t.moveBy(15, 0);
t.setHeight(5);
t.setBase(5);
//first circle
c.moveBy(15, 0);
c.setRadius(2);
auto firstCircleArea = c.computeArea();
auto firstCircleCircumference = c.computeCircumference();
//second circle
c1.setLineType('*');
c1.moveBy(6, 0);
c1.setRadius(4);
auto secondCircleArea = c1.computeArea();
auto secondCircleCircumference = c1.computeCircumference();
return 0;
}
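A minimal sketch of what point.h, shape.h, and rectangle.h could declare, inferred purely from the calls in main.cpp above (the names and the exact interface are assumptions; the actual assignment may require different members):
// point.h (sketch)
#ifndef POINT_H
#define POINT_H
struct Point {
    int x = 0;
    int y = 0;
};
#endif
// shape.h (sketch): data and behaviour shared by every shape
#ifndef SHAPE_H
#define SHAPE_H
#include "point.h"
class Shape {
public:
    void setLineType(char c) { lineType = c; }
    void moveBy(int dx, int dy) { origin.x += dx; origin.y += dy; }
    virtual double computeArea() const = 0;
    virtual double computeCircumference() const = 0;
    virtual void draw() const = 0; // draw the shape into the shared 2D character grid
    virtual ~Shape() = default;
protected:
    Point origin;
    char lineType = '*';
};
#endif
// rectangle.h (sketch): each derived shape adds its own dimensions
#ifndef RECTANGLE_H
#define RECTANGLE_H
#include "shape.h"
class Rectangle : public Shape {
public:
    void setHeight(int h) { height = h; }
    void setWidth(int w) { width = w; }
    double computeArea() const override { return static_cast<double>(height) * width; }
    double computeCircumference() const override { return 2.0 * (height + width); }
    void draw() const override; // defined in rectangle.cpp
private:
    int height = 0;
    int width = 0;
};
#endif
The matching rectangle.cpp would then only contain the out-of-line definitions (here just draw()); square.h, triangle.h, and circle.h follow the same pattern with their own dimensions (side, base/height, radius).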

Related

How to correctly use VTK ConstrainedDelaunay2D?

I've started from the VTK ConstrainedDelaunay2D example and added my own points:
#include <vtkSmartPointer.h>
#include <vtkDelaunay2D.h>
#include <vtkCellArray.h>
#include <vtkProperty.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkPoints.h>
#include <vtkPolyData.h>
#include <vtkPolygon.h>
#include <vtkMath.h>
#include <vtkRenderer.h>
#include <vtkRenderWindow.h>
#include <vtkRenderWindowInteractor.h>
#include <vtkNamedColors.h>
#include <vtkVersionMacros.h> // For version macros
int main(int, char *[])
{
vtkSmartPointer<vtkPoints> points = vtkSmartPointer<vtkPoints>::New();
int ptsHeight = 400;
std::vector<std::vector<int>> pts{ {166, 127},{103, 220},{166, 190},{174, 291},{189, 226},{227, 282},{213, 187},{242, 105},{196, 131},{182, 83} };
for (size_t i = 0; i < pts.size(); i++)
{
// !important: flip y
int x = pts[i][0];
int y = ptsHeight - pts[i][1];
points->InsertNextPoint(x, y, 0);
}
vtkSmartPointer<vtkPolyData> aPolyData = vtkSmartPointer<vtkPolyData>::New();
aPolyData->SetPoints(points);
// Create a cell array to store the polygon in
vtkSmartPointer<vtkCellArray> aCellArray = vtkSmartPointer<vtkCellArray>::New();
// Define a polygonal hole with a clockwise polygon
vtkSmartPointer<vtkPolygon> aPolygon = vtkSmartPointer<vtkPolygon>::New();
for (unsigned int i = 0; i < pts.size(); i++)
{
aPolygon->GetPointIds()->InsertNextId(i);
}
aCellArray->InsertNextCell(aPolygon);
// Create a polydata to store the boundary. The points must be the
// same as the points we will triangulate.
vtkSmartPointer<vtkPolyData> boundary =
vtkSmartPointer<vtkPolyData>::New();
boundary->SetPoints(aPolyData->GetPoints());
boundary->SetPolys(aCellArray);
// Triangulate the grid points
vtkSmartPointer<vtkDelaunay2D> delaunay =
vtkSmartPointer<vtkDelaunay2D>::New();
delaunay->SetInputData(aPolyData);
delaunay->SetSourceData(boundary);
// Visualize
vtkSmartPointer<vtkPolyDataMapper> meshMapper =
vtkSmartPointer<vtkPolyDataMapper>::New();
meshMapper->SetInputConnection(delaunay->GetOutputPort());
vtkSmartPointer<vtkNamedColors> colors =
vtkSmartPointer<vtkNamedColors>::New();
vtkSmartPointer<vtkActor> meshActor =
vtkSmartPointer<vtkActor>::New();
meshActor->SetMapper(meshMapper);
meshActor->GetProperty()->EdgeVisibilityOn();
meshActor->GetProperty()->SetEdgeColor(colors->GetColor3d("Peacock").GetData());
meshActor->GetProperty()->SetInterpolationToFlat();
meshActor->GetProperty()->SetBackfaceCulling(true);
// Create a renderer, render window, and interactor
vtkSmartPointer<vtkRenderer> renderer = vtkSmartPointer<vtkRenderer>::New();
vtkSmartPointer<vtkRenderWindow> renderWindow = vtkSmartPointer<vtkRenderWindow>::New();
renderWindow->AddRenderer(renderer);
vtkSmartPointer<vtkRenderWindowInteractor> renderWindowInteractor = vtkSmartPointer<vtkRenderWindowInteractor>::New();
renderWindowInteractor->SetRenderWindow(renderWindow);
// Add the actor to the scene
renderer->AddActor(meshActor);
//renderer->AddActor(boundaryActor);
renderer->SetBackground(colors->GetColor3d("Mint").GetData());
// Render and interact
renderWindow->SetSize(640, 480);
renderWindow->Render();
renderWindowInteractor->Start();
return EXIT_SUCCESS;
}
I'm experiencing two issues:
I get different results if I flip the Y coordinates: why is that?
Why are there faces pointing in the wrong direction (flipped normal / wrong winding)?
Here's what I mean by the 1st issue:
If I don't flip the Y coordinates I get this:
I get the same effect if I don't flip the Y axis but insert the boundary polygon in reverse order:
for (unsigned int i = 0; i < pts.size(); i++)
{
aPolygon->GetPointIds()->InsertNextId(pts.size() - 1 - i);
}
I don't think I fully understand how the boundary/constraint works.
I thought that the same points should produce the same triangulation whether the vertices are flipped vertically or not. (I suspect the order of indices changes then?)
Regarding the second issue (unpredictable flipped faces) I'm not sure what the best way forward is. I had a look at the vtkDelaunay2D class and couldn't find anything related.
(I've tried setting projection plane mode to VTK_DELAUNAY_XY_PLANE, but it didn't seem to affect the output)
I've also tried to use vtkPolyDataNormals but got no output:
vtkSmartPointer<vtkPolyDataNormals> normalGenerator = vtkSmartPointer<vtkPolyDataNormals>::New();
normalGenerator->SetInputData(delaunay->GetOutput());
normalGenerator->ComputePointNormalsOff();
normalGenerator->ComputeCellNormalsOn();
normalGenerator->FlipNormalsOn();
normalGenerator->Update();
(normalGenerator's output has 0 cells and points)
Is there a way to compute a constrained Delaunay triangulation for a list of 2D points and ensure all the faces point the same way? (If so, how? Would it be possible to do this with the vtkDelaunay2D class alone, or is it necessary to use other filters?)
Any hints/tips are more than welcome :)
I'm using VTK 8.2 by the way.
The flipping in Y effectively reverses the face orientation (what is clockwise becomes anti-clockwise, like in a mirror).
I'm not sure I can reproduce your example above. A quick test in Python seems to give the expected behavior; maybe you can start from this and map it to your C++ version:
import vedo
pts = [
[166, 127],
[103, 220],
[166, 190],
[174, 291],
[189, 226],
[227, 282],
[213, 187],
[242, 105],
[196, 131],
[182, 83],
]
ids = [[2,4,6], [0,2,8]] # faces to erase by pt-index (clockwise)
dly = vedo.delaunay2D(pts, mode='xy', boundaries=ids)
dly.c('grey5').lc('red4').lw(2)
labels = vedo.Points(pts).labels('id').z(1)
vedo.show(labels, dly, axes=1)
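As a side note for the C++ version (an untested assumption, not verified against the data above): the empty vtkPolyDataNormals output is most likely because SetInputData(delaunay->GetOutput()) grabs the Delaunay filter's output before that filter has ever executed. Connecting the filters through the pipeline avoids this, and a vtkReverseSense filter can flip the cell winding afterwards if it still comes out reversed. A rough sketch (postProcess is just an illustrative wrapper):
#include <vtkSmartPointer.h>
#include <vtkDelaunay2D.h>
#include <vtkPolyDataNormals.h>
#include <vtkReverseSense.h>

void postProcess(vtkSmartPointer<vtkDelaunay2D> delaunay)
{
    // Pipeline connection: Update() on this filter now also updates the Delaunay filter,
    // so its output is no longer empty
    vtkSmartPointer<vtkPolyDataNormals> normalGenerator =
        vtkSmartPointer<vtkPolyDataNormals>::New();
    normalGenerator->SetInputConnection(delaunay->GetOutputPort());
    normalGenerator->ComputePointNormalsOff();
    normalGenerator->ComputeCellNormalsOn();
    normalGenerator->Update();

    // Alternatively, reverse the triangle ordering (and normals) explicitly
    vtkSmartPointer<vtkReverseSense> reverse =
        vtkSmartPointer<vtkReverseSense>::New();
    reverse->SetInputConnection(delaunay->GetOutputPort());
    reverse->ReverseCellsOn();
    reverse->ReverseNormalsOn();
    reverse->Update();
}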

Set object position relative to another object

So I'm trying to learn about game development, and I want to make my character able to move its elbow. The elbow looks like this: it consists of 2 sprites, arm1 and arm2. Arm1 can rotate based on its origin, and arm2 should be located at the tip of arm1 (about 60 px from arm1's origin). But I don't know how to put arm2 at the correct position like in the image. I tried to use polar coordinates because I know the angle and the arm distance:
lines[0].position=Vector2f(arm1.getPosition().x,arm1.getPosition().y);
lines[0].color=Color::Blue;
armPos.x=arm1.getPosition().x+(d*cos(AngleToRad(arm1.getRotation()-toleransi) ));
armPos.y=arm1.getPosition().y+(d*sin(AngleToRad(arm1.getRotation()-toleransi)));
lines[1].position=armPos;
lines[1].color=Color::Blue;
cir.setPosition(armPos);
arm1.setPosition(mc.getPosition().x+10,mc.getPosition().y-50);
arm2.setPosition(arm1.getPosition().x,mc.getPosition().y-10);
but that doesn't work. I use the circle and line just for debugging.
The full code looks like this:
#include <SFML/Graphics.hpp>
#include <math.h>
#include <iostream>
#include <vector>
#include "Player.h"
#include "Particle.h"
using namespace sf;
float AngleToRad(float a) // degrees to radians
{
return (a/180.0f)*3.14159265359f;
}
int main()
{
RenderWindow window(VideoMode(1000,640), "Small Life");
//////////////Setup////////////
//mc//
Texture idle_texture;
idle_texture.loadFromFile("image/idle.png");
IntRect player_rect(264,0,264,264);
Sprite mc(idle_texture,player_rect);
mc.setOrigin(132,264);
Player player(&idle_texture, Vector2u(4,1),0.3f);
mc.setPosition(0,300);
mc.setScale(0.7,0.7);
//arm//
Texture arm1_texture;
arm1_texture.loadFromFile("image/arm1.png");
Sprite arm1(arm1_texture);
Texture arm2_texture;
arm2_texture.loadFromFile("image/arm2.png");
Sprite arm2(arm2_texture);
arm1.setOrigin(70,158);
arm2.setOrigin(79,158);
arm1.setScale(0.5,0.5);
arm2.setScale(0.7,0.7);
//blood//
Texture blood_texture;
blood_texture.loadFromFile("image/blood.png");
CircleShape cir(10);
cir.setOrigin(5,5);
VertexArray lines(LinesStrip,2);
cir.setFillColor(Color::Red);
float deltaTime=0.0f;
Clock clock;
Clock particle_time;
float speed=0.2f;
std::vector<Sprite>bloodVec;
std::cout<<sin(1.5708)<<" "<<cos(AngleToRad(180))<<" "<< AngleToRad(180)<<" "<<" "<<asin(1)<<" "<<acos(1)<<std::endl;
while (window.isOpen())
{
Event event;
deltaTime=clock.restart().asSeconds();
while (window.pollEvent(event))
{
if (event.type == Event::Closed)
window.close();
}
if(Keyboard::isKeyPressed(Keyboard::W)) mc.move(0,-speed);
if(Keyboard::isKeyPressed(Keyboard::S)) mc.move(0,speed);
if(Keyboard::isKeyPressed(Keyboard::A)) mc.move(-speed,0);
if(Keyboard::isKeyPressed(Keyboard::D)) mc.move(speed,0);
//blood particle//
if(particle_time.getElapsedTime().asSeconds()>1.5f)
{
IntRect blRect(0,0,200,200);
Sprite b_blood(blood_texture,blRect);
ParticleConstDrop(b_blood,mc.getPosition());
bloodVec.push_back(b_blood);
particle_time.restart();
}
int bloodCount=bloodVec.size();
for(int i=0;i<bloodCount;i++)
{
window.draw(bloodVec[i]);
}
Vector2f armPos(arm1.getPosition());
float d=30.0f;
float toleransi=90;
lines[0].position=Vector2f(arm1.getPosition().x,arm1.getPosition().y);
lines[0].color=Color::Blue;
armPos.x=arm1.getPosition().x+(d*cos(AngleToRad(arm1.getRotation()-toleransi) ));
armPos.y=arm1.getPosition().y+(d*sin(AngleToRad(arm1.getRotation()-toleransi)));
lines[1].position=armPos;
lines[1].color=Color::Blue;
cir.setPosition(armPos);
arm1.setPosition(mc.getPosition().x+10,mc.getPosition().y-50);
arm2.setPosition(arm1.getPosition().x,mc.getPosition().y-10);
arm1.setRotation(110);//arm1.getRotation()+0.1
player.Update(0,deltaTime);
mc.setTextureRect(player.plRect);
window.draw(arm1);
//window.draw(arm2);
window.draw(mc);
window.draw(cir);
window.draw(lines);
window.display();
window.clear(Color(255,255,255));
}
return 0;
}
Can anyone please tell me what's wrong with my code, or is there another way to implement this?
Relative positions are achieved by transform composition (matrix multiplication). You can try to do it manually, but SFML already implements it, and even better: it is applied under the hood by sf::Sprite::draw.
So let's see: arm2 must have a position relative to arm1, so how do we do that?
Set the origin of arm2 where the elbow joint is in arm2 local coordinates.
Set the position of arm2 where the elbow joint is in arm1 local coordinates.
Pass the arm1 transform to the sf::RenderStates each time you draw arm2. The transform multiplication will be performed underneath.
// Do this once
arm2.setOrigin(elbow_x_in_arm2, elbow_y_in_arm2);
arm2.setPosition(elbow_x_in_arm1, elbow_y_in_arm1);
// But this, each time you draw them
window.draw(arm1);
window.draw(arm2, sf::RenderStates(arm1.getTransform()));
Result:
Whenever you move, rotate, or scale arm1, arm2 will remain attached. Also, if you rotate arm2, it will rotate around the elbow.
Important!
The transform of arm2 will represent only the local transformation, so even though it's drawn in the correct position, the data does not contain the global position/rotation/scale. If you wanted to, for example, check for collisions on arm2, you should take this into account:
// don't use this to get the bounding box
sf::FloatRect boundingBoxBad = arm2.getGlobalBounds(); // WRONG: arm2's transform is only local, so these bounds are not global
// use this instead: apply arm2's local transform first, then arm1's
sf::Transform tr1 = arm1.getTransform();
sf::Transform tr2 = arm2.getTransform();
sf::FloatRect boundingBoxGood = tr1.transformRect(tr2.transformRect(arm2.getLocalBounds()));
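As a small convenience (not part of the original answer), that composition could be wrapped in a helper, assuming the same parent/child sprite setup as above:
// Hypothetical helper: global bounds of a child sprite that is drawn with its parent's transform
sf::FloatRect globalBoundsOf(const sf::Sprite& parent, const sf::Sprite& child)
{
    // child's local transform first, then the parent's, matching
    // window.draw(child, sf::RenderStates(parent.getTransform()))
    return parent.getTransform().transformRect(
        child.getTransform().transformRect(child.getLocalBounds()));
}
// usage:
// sf::FloatRect arm2Bounds = globalBoundsOf(arm1, arm2);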

Shrink/Expand the outline of a polygon with holes

I want to expand/shrink a polygon with holes using boost::polygon. So to clarify that a bit, I have a single data structure
boost::polygon::polygon_with_holes_data<int> inPoly
where inPoly contains data that describes a rectangular outline and a triangle which forms the hole within this rectangle (in the picture below this is the left, black drawing).
Now I want to
a) expand the whole thing so that the rectangle becomes bigger and the hole becomes smaller (resulting in the red polygon in the image below), or
b) shrink it so that the rectangle becomes smaller and the hole bigger (resulting in the green polygon in the image below).
The corners don't necessarily need to be straight; they can also be rounded or somewhat "rough".
My question: how can this be done using boost::polygon?
Thanks!
I answered this in Expand polygons with boost::geometry?
And yes you can teach Boost Geometry to act on Boost Polygon types:
#include <boost/geometry/geometries/adapted/boost_polygon.hpp>
I came up with a test polygon like you described:
boost::polygon::polygon_with_holes_data<int> inPoly;
bg::read_wkt("POLYGON ((0 0,0 1000,1000 1000,1000 0,0 0),(100 100,900 100,500 700,100 100))", inPoly);
Now, apparently we can't just buffer on the adapted polygon, nor can we bg::assign or bg::convert directly. So, I came up with an ugly workaround of converting to WKT and back. And then you can do the buffer, and convert back similarly.
It's not very elegant, but it does work:
poly in;
bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(inPoly)), in);
Full Demo
Including SVG output:
Live On Coliru
#include <boost/polygon/polygon.hpp>
#include <boost/polygon/polygon_set_data.hpp>
#include <boost/polygon/polygon_with_holes_data.hpp>
#include <boost/geometry.hpp>
#include <boost/geometry/strategies/buffer.hpp>
#include <boost/geometry/algorithms/buffer.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/geometry/geometries/multi_polygon.hpp>
#include <boost/geometry/geometries/point_xy.hpp>
#include <boost/geometry/geometries/adapted/boost_polygon.hpp>
#include <fstream>
namespace bp = boost::polygon;
namespace bg = boost::geometry;
using P = bp::polygon_with_holes_data<int>;
using PS = bp::polygon_set_data<int>;
using coordinate_type = bg::coordinate_type<P>::type;
int main() {
P inPoly, grow, shrink;
bg::read_wkt("POLYGON ((0 0,0 1000,1000 1000,1000 0,0 0),(100 100,900 100,500 700,100 100))", inPoly);
{
// define our boost geometry types
namespace bs = bg::strategy::buffer;
namespace bgm = bg::model;
using pt = bgm::d2::point_xy<coordinate_type>;
using poly = bgm::polygon<pt>;
using mpoly = bgm::multi_polygon<poly>;
// define our buffering strategies
using dist = bs::distance_symmetric<coordinate_type>;
bs::side_straight side_strategy;
const int points_per_circle = 12;
bs::join_round join_strategy(points_per_circle);
bs::end_round end_strategy(points_per_circle);
bs::point_circle point_strategy(points_per_circle);
poly in;
bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(inPoly)), in);
for (auto [offset, output_p] : { std::tuple(+15, &grow), std::tuple(-15, &shrink) }) {
mpoly out;
bg::buffer(in, out, dist(offset), side_strategy, join_strategy, end_strategy, point_strategy);
assert(out.size() == 1);
bg::read_wkt(boost::lexical_cast<std::string>(bg::wkt(out.front())), *output_p);
}
}
{
std::ofstream svg("output.svg");
using pt = bg::model::d2::point_xy<coordinate_type>;
boost::geometry::svg_mapper<pt> mapper(svg, 400, 400);
mapper.add(inPoly);
mapper.add(grow);
mapper.add(shrink);
mapper.map(inPoly, "fill-opacity:0.3;fill:rgb(153,204,0);stroke:rgb(153,204,0);stroke-width:2");
mapper.map(grow, "fill-opacity:0.05;fill:rgb(255,0,0);stroke:rgb(255,0,0);stroke-width:2");
mapper.map(shrink, "fill-opacity:0.05;fill:rgb(0,0,255);stroke:rgb(0,0,255);stroke-width:2");
}
}
The output.svg written:
More or less accidentally I found that boost::polygon also provides a single function for this which is quite easy to use: boost::polygon::polygon_set_data offers a function resize() which does exactly what is described above. Using the additional parameters corner_fill_arc and num_segments, rounded corners can be created.
No idea why this function is located in boost::polygon::polygon_set_data and not in boost::polygon::polygon_with_holes_data, which in my opinion would be the more logical place for such a function...
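A rough sketch of that resize()-based approach (the signature is from memory, so verify it against the Boost.Polygon documentation):
#include <boost/polygon/polygon.hpp>
#include <vector>

namespace bp = boost::polygon;

int main() {
    bp::polygon_with_holes_data<int> inPoly;
    // ... fill inPoly with the rectangle-with-triangular-hole as above ...

    bp::polygon_set_data<int> ps;
    ps.insert(inPoly);

    // A positive value grows the outline (and shrinks the hole), a negative one shrinks it.
    // corner_fill_arc = true together with num_circle_segments > 0 produces rounded corners.
    ps.resize(15, true, 8);

    std::vector<bp::polygon_with_holes_data<int>> result;
    ps.get(result); // extract the resized polygons with holes
}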

How to alternate colors in a circle so that the circle looks like it is rotating?

The expected output should be like this, with the colors changing their position as well:
Expected output:
The colors should change their positions in the circle so that it looks like they are moving, without changing the position of the circle itself.
Though my code is written in Code::Blocks in C/C++, I will be happy to get answers in any other programming language.
My present code:
#include<graphics.h>
#include<stdlib.h>
#include<stdio.h>
#include<conio.h>
#include<math.h>
#include<string.h>
#include<iostream>
using namespace std;
void vvcircle(float xk,float yk,float radius);
int i=0;
int main()
{
float xk,yk,radius;
int gdriver=DETECT,gmode,errorcode;
initgraph(&gdriver,&gmode,"C:\\TURBOC3\\BGI");
// cout<<"enter the value of x, y and radius of circle"<<endl;
//cin>>xk>>yk>>radius;
vvcircle(200,200,100);
getch();
closegraph();
return 0;
}
void vvcircle(float xk,float yk,float radius)
{
int color[60]={0,1,2,3,4,5,6,7,8,9};
while(radius>0)
{
float xo,yo;
float P;
xo=0.0;
yo=radius;
P=1-radius;
/// vvcircle(200,200,100);
for(;xo<=yo;)
{
putpixel(xo+xk,yo+yk,1);
putpixel(yo+xk,xo+yk,1);
putpixel(-yo+xk,xo+yk,2);
putpixel(xo+xk,-yo+yk,2);
putpixel(-yo+xk,-xo+yk,4);
putpixel(-xo+xk,-yo+yk,4);
putpixel(yo+xk,-xo+yk,4);
putpixel(-xo+xk,+yo+yk,4);
if(P<0)
{
xo=xo+1;
yo=yo;
P=P+2*xo+1;
}
else
{
xo=xo+1;
yo=yo-1;
P=P+(2*xo)-(2*yo)+1;
// putpixel(xo,yo,WHITE);
}
}
radius=radius-1;
}
}
Present output:
I get many concentric circles with colors, but I want to move the colors so that it looks like the circle is rotating, which is not achieved.
How about something like this:
#include <math.h>
void my_circle(int xc,int yc,int r,float a) // center(x,y), radius, animation angle [rad]
{
const int n=4; // segments count
int x,sx,xx,x0,x1,rr=r*r,
y,sy,yy,y0,y1,i,
dx[n+1],dy[n+1], // segments edges direction vectors
c[n]={5,1,2,3}; // segments colors
float da=2.0*M_PI/float(n);
// BBOX
x0=xc-r; x1=xc+r;
y0=yc-r; y1=yc+r;
// compute segments
for (i=0;i<=n;i++,a+=da)
{
dx[i]=100.0*cos(a);
dy[i]=100.0*sin(a);
}
// all pixels in BBOX
for (sx=x0,x=sx-xc;sx<=x1;sx++,x++){ xx=x*x;
for (sy=y0,y=sy-yc;sy<=y1;sy++,y++){ yy=y*y;
// outside circle?
if (xx+yy>rr) continue;
// compute segment
for (i=0;i<n;i++)
if ((x*dy[i ])-(y*dx[i ])>=0)
if ((x*dy[i+1])-(y*dx[i+1])<=0)
break;
// render
putpixel(sx,sy,c[i]);
}}
}
It simply loops through all pixels of the square circumscribed around your circle, determines whether each pixel is inside the circle, then detects which segment it is in and colors it with that segment's color.
The segments are described by direction vectors from the circle center towards the segment edges. If a pixel is inside a segment, it is CW relative to one edge and CCW relative to the other, so in 2D inspecting the z coordinate of the cross product between the vector to the pixel and the vectors to the edges tells whether the pixel is in the segment or not.
As you can see, I did not use floating point math in the rendering itself; it is needed only to compute the segment edge vectors prior to rendering.
I used the standard 256-color VGA palette (not sure what BGI uses, I expect 16 colors), so the colors might be different on your platform. Here is a preview:
The noise is caused by my GIF capturing tool's dithering; the render itself is clean.
Do not forget to call my_circle repeatedly with a changing angle, as in the sketch below.
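A hypothetical animation loop for the Turbo C/BGI setup from the question (assuming my_circle from above is available; delay() comes from dos.h and kbhit() from conio.h):
#include <graphics.h>
#include <conio.h>
#include <dos.h>

int main()
{
    int gdriver = DETECT, gmode;
    initgraph(&gdriver, &gmode, "C:\\TURBOC3\\BGI");
    float a = 0.0f;
    while (!kbhit())                 // run until a key is pressed
    {
        my_circle(200, 200, 100, a); // redraw the same circle with a new angle
        a += 0.05f;                  // advance the rotation
        delay(20);                   // roughly 50 frames per second
    }
    closegraph();
    return 0;
}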
PS. I encoded this in BDS2006 without BGI, so in a different compiler there might be some minor syntax problems related to language quirks...
I faked the putpixel with this:
void putpixel(int x,int y,BYTE c)
{
static const DWORD pal[256]=
{
0x00000000,0x000000A8,0x0000A800,0x0000A8A8,0x00A80000,0x00A800A8,0x00A85400,0x00A8A8A8,
0x00545454,0x005454FC,0x0054FC54,0x0054FCFC,0x00FC5454,0x00FC54FC,0x00FCFC54,0x00FCFCFC,
0x00000000,0x00101010,0x00202020,0x00343434,0x00444444,0x00545454,0x00646464,0x00747474,
0x00888888,0x00989898,0x00A8A8A8,0x00B8B8B8,0x00C8C8C8,0x00DCDCDC,0x00ECECEC,0x00FCFCFC,
0x000000FC,0x004000FC,0x008000FC,0x00BC00FC,0x00FC00FC,0x00FC00BC,0x00FC0080,0x00FC0040,
0x00FC0000,0x00FC4000,0x00FC8000,0x00FCBC00,0x00FCFC00,0x00BCFC00,0x0080FC00,0x0040FC00,
0x0000FC00,0x0000FC40,0x0000FC80,0x0000FCBC,0x0000FCFC,0x0000BCFC,0x000080FC,0x000040FC,
0x008080FC,0x009C80FC,0x00BC80FC,0x00DC80FC,0x00FC80FC,0x00FC80DC,0x00FC80BC,0x00FC809C,
0x00FC8080,0x00FC9C80,0x00FCBC80,0x00FCDC80,0x00FCFC80,0x00DCFC80,0x00BCFC80,0x009CFC80,
0x0080FC80,0x0080FC9C,0x0080FCBC,0x0080FCDC,0x0080FCFC,0x0080DCFC,0x0080BCFC,0x00809CFC,
0x00B8B8FC,0x00C8B8FC,0x00DCB8FC,0x00ECB8FC,0x00FCB8FC,0x00FCB8EC,0x00FCB8DC,0x00FCB8C8,
0x00FCB8B8,0x00FCC8B8,0x00FCDCB8,0x00FCECB8,0x00FCFCB8,0x00ECFCB8,0x00DCFCB8,0x00C8FCB8,
0x00B8FCB8,0x00B8FCC8,0x00B8FCDC,0x00B8FCEC,0x00B8FCFC,0x00B8ECFC,0x00B8DCFC,0x00B8C8FC,
0x00000070,0x001C0070,0x00380070,0x00540070,0x00700070,0x00700054,0x00700038,0x0070001C,
0x00700000,0x00701C00,0x00703800,0x00705400,0x00707000,0x00547000,0x00387000,0x001C7000,
0x00007000,0x0000701C,0x00007038,0x00007054,0x00007070,0x00005470,0x00003870,0x00001C70,
0x00383870,0x00443870,0x00543870,0x00603870,0x00703870,0x00703860,0x00703854,0x00703844,
0x00703838,0x00704438,0x00705438,0x00706038,0x00707038,0x00607038,0x00547038,0x00447038,
0x00387038,0x00387044,0x00387054,0x00387060,0x00387070,0x00386070,0x00385470,0x00384470,
0x00505070,0x00585070,0x00605070,0x00685070,0x00705070,0x00705068,0x00705060,0x00705058,
0x00705050,0x00705850,0x00706050,0x00706850,0x00707050,0x00687050,0x00607050,0x00587050,
0x00507050,0x00507058,0x00507060,0x00507068,0x00507070,0x00506870,0x00506070,0x00505870,
0x00000040,0x00100040,0x00200040,0x00300040,0x00400040,0x00400030,0x00400020,0x00400010,
0x00400000,0x00401000,0x00402000,0x00403000,0x00404000,0x00304000,0x00204000,0x00104000,
0x00004000,0x00004010,0x00004020,0x00004030,0x00004040,0x00003040,0x00002040,0x00001040,
0x00202040,0x00282040,0x00302040,0x00382040,0x00402040,0x00402038,0x00402030,0x00402028,
0x00402020,0x00402820,0x00403020,0x00403820,0x00404020,0x00384020,0x00304020,0x00284020,
0x00204020,0x00204028,0x00204030,0x00204038,0x00204040,0x00203840,0x00203040,0x00202840,
0x002C2C40,0x00302C40,0x00342C40,0x003C2C40,0x00402C40,0x00402C3C,0x00402C34,0x00402C30,
0x00402C2C,0x0040302C,0x0040342C,0x00403C2C,0x0040402C,0x003C402C,0x0034402C,0x0030402C,
0x002C402C,0x002C4030,0x002C4034,0x002C403C,0x002C4040,0x002C3C40,0x002C3440,0x002C3040,
0x00000000,0x00000000,0x00000000,0x00000000,0x00000000,0x00000000,0x00000000,0x00000000,
};
if ((x<0)||(x>=Main->xs)) return;
if ((y<0)||(y>=Main->ys)) return;
Main->pyx[y][x]=pal[c];
}
Where Main->xs, Main->ys is my window resolution and Main->pyx is direct pixel access to its canvas. For more info see:
Graphics rendering: (#4 GDI Bitmap)

How to generate a Delaunay triangulation from 3D coplanar vertices with CGAL

I'm new to developing with the CGAL library. I have tried the following code to generate a Delaunay triangulation in 2D.
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Constrained_Delaunay_triangulation_2.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <cassert>
#include <iostream>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K> Triangulation;
typedef Triangulation::Point Point;
int main()
{
std::vector<Point> PL;
PL.push_back(Point(0, 0));
PL.push_back(Point(1, 0));
PL.push_back(Point(1, 1));
PL.push_back(Point(0, 1));
auto a = PL.begin();
Triangulation T;
T.insert(PL.begin(),PL.end());
Triangulation::Finite_faces_iterator Finite_face_iterator;
for (Finite_face_iterator = T.finite_faces_begin(); Finite_face_iterator != T.finite_faces_end(); ++Finite_face_iterator)
{
std::cerr << T.triangle(Finite_face_iterator) << std::endl;
}
return 0;
}
That code outputs two faces, and if the vertices are changed to 3D like
Point(0,0,0),
Point(1,0,0),
Point(1,1,0),
Point(0,1,0)
those four vertices are in the same plane; how can I get CGAL to output two non-intersecting faces?
You can use the Delaunay_triangulation_3 class for this purpose. It handles coplanar points as a special case of dimension 2. All your points must be exactly coplanar, then.
Another option is to use Delaunay_triangulation_2, by projecting your points to the plane they belong. This would handle points that are almost coplanar.
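A minimal sketch of the first option, using Delaunay_triangulation_3 with exactly coplanar input (the facet iteration in the degenerate dimension-2 case is an assumption to check against the CGAL documentation):
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <iostream>
#include <vector>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_3<K> Triangulation3;
typedef Triangulation3::Point Point3;
int main()
{
    std::vector<Point3> PL = { Point3(0, 0, 0), Point3(1, 0, 0), Point3(1, 1, 0), Point3(0, 1, 0) };
    Triangulation3 T(PL.begin(), PL.end());
    std::cout << "dimension: " << T.dimension() << std::endl; // 2 for exactly coplanar input
    // In dimension 2 the triangles are the finite facets
    for (auto fit = T.finite_facets_begin(); fit != T.finite_facets_end(); ++fit)
        std::cout << T.triangle(*fit) << std::endl;
    return 0;
}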