Delete multi-line blocks of text with an internal flag in a POV-Ray file

I have a POV-Ray file which defines a lot of cylinders and spheres. Sometimes these shapes are defined to have "color#", which makes the file unrenderable by POV-Ray. One solution I've found is to delete the offending cylinders and spheres. So a file that contains this text
cylinder {
< -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
texture { colorO }
}
sphere {
< -0.00950, 0.00357, 0.00227>, 0.00716
texture { color# }
}
cylinder {
< -0.00950, 0.00357, 0.00227>, < 0.00327, 0.00169, 0.00108>, 0.00716
texture { color# }
}
sphere {
< 0.15373, 0.00601, 0.18223>, 0.00716
texture { colorO }
}
would turn into this text
cylinder {
< -0.17623, 0.24511, -0.27947>, < -0.15220, 0.22658, -0.26472>, 0.00716
texture { colorO }
}
sphere {
< 0.15373, 0.00601, 0.18223>, 0.00716
texture { colorO }
}
Is there some way to do this replacement with a shell script? Preferably in tcsh. Thanks!

egrep -B 2 -A 1 'color[^#]' yourFile | egrep -v -- '^--$'
This should do the trick, provided the example you supplied is exact, i.e. in every block there are exactly 2 lines before the 'color' line and 1 line after it. The first egrep keeps only the blocks whose color is not followed by '#' (plus their surrounding lines), and the second one strips the '--' group separators that egrep inserts between matches.

Related

OSG: Why is there a texture coordinate array but not the texture itself?

I am trying to get the texture file name from an osg::Geometry. I get the texture coordinates like this:
osg::Geometry* geom = dynamic_cast<osg::Geometry*> (drawable);
const osg::Geometry::ArrayList& texCoordArrayList = dynamic_cast<const osg::Geometry::ArrayList&>(geom->getTexCoordArrayList());
auto texCoordArrayListSize = texCoordArrayList.size();
auto sset = geom->getOrCreateStateSet();
processStateSet(sset);
for (size_t k = 0; k < texCoordArrayListSize; k++)
{
const osg::Vec2Array* texCoordArray = dynamic_cast<const osg::Vec2Array*>(geom->getTexCoordArray(k));
//doing sth with vertexarray, normalarray and texCoordArray
}
But I am not able to get the texture file name in the processStateSet() function. I took the processStateSet code from the OSG examples (specifically the osganalysis example). Even though there is a texture file, sometimes it works and gets the name, and sometimes it does not. Here is my processStateSet function:
void processStateSet(osg::StateSet* stateset)
{
if (!stateset) return;
for (unsigned int ti = 0; ti < stateset->getNumTextureAttributeLists(); ++ti)
{
osg::StateAttribute* sa = stateset->getTextureAttribute(ti, osg::StateAttribute::TEXTURE);
osg::Texture* texture = dynamic_cast<osg::Texture*>(sa);
if (texture)
{
LOG("texture! ");
//TODO: something with this.
for (unsigned int i = 0; i < texture->getNumImages(); ++i)
{
auto img (texture->getImage(i));
auto texturefname (img->getFileName());
LOG("image ! image no: " + IntegerToStr(i) + " file: " + texturefname);
}
}
}
}
EDIT:
I just realized that if the model I load is ".3ds", texturefname exists, but if the model is ".flt", there is no texture name.
Is it about loading different file types? I know that both models have textures. What is the difference? I am confused.
Some 3D models don't have texture names. Your choices are to deal with it, or to use model files that do. It also depends on the format: some formats can't store texture names, and some Blender export scripts can't write texture names even though the format supports them. And so on.
3D model formats are not interchangeable - every one is different.
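If you want the loop from the question to cope with both kinds of files, a defensive variant may help. This is only a sketch (the null/empty checks and the placeholder string are my additions; LOG and IntegerToStr are the helpers from the question):
for (unsigned int i = 0; i < texture->getNumImages(); ++i)
{
    const osg::Image* img = texture->getImage(i);
    if (!img)
        continue; // a texture attribute can exist without a loaded image
    std::string texturefname = img->getFileName();
    if (texturefname.empty())
        texturefname = "<no file name stored>"; // e.g. some .flt models
    LOG("image! image no: " + IntegerToStr(i) + " file: " + texturefname);
}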

How to find the length of the upper and lower arcs from an ellipse image

Here I try to find the upper and lower arcs using image vectors (contours of the image), but it couldn't give an exact result. Can you suggest any other method to find the upper and lower arcs of an ellipse image, and their lengths?
Here is my code:
Mat image = cv::imread("thinning/20d.jpg");
int i=0,j=0,k=0,x=320;
for(int y = 0; y < image.rows; y++)
{
if(image.at<Vec3b>(Point(x, y))[0] >= 250 && image.at<Vec3b>(Point(x, y))[1] >= 250 && image.at<Vec3b>(Point(x, y))[2] >= 250){
qDebug()<<x<<y;
x1[i]=x;
y1[i]=y;
i=i+1;
}
}
for(i=0;i<=1;i++){
qDebug()<<x1[i]<<y1[i];
}
qDebug()<<"UPPER ARC";
for(int x = 0; x < image.cols; x++)
{
for(int y = 0; y <= (y1[0]+20); y++)
{
if(image.at<Vec3b>(Point(x, y))[0] >= 240 && image.at<Vec3b>(Point(x, y))[1] >= 240 && image.at<Vec3b>(Point(x, y))[2] >= 240){
x2[j]=x;
y2[j]=y;
j=j+1;
qDebug()<<x<<y;
}}
}
qDebug()<<"Lower ARC";
for(int x = 0; x < image.cols; x++)
{
for(int y = (y1[1]-20); y <= image.rows; y++)
{
if(image.at<Vec3b>(Point(x, y))[0] >= 240 && image.at<Vec3b>(Point(x, y))[1] >= 240 && image.at<Vec3b>(Point(x, y))[2] >= 240){
x3[k]=x;
y3[k]=y;
k=k+1;
qDebug()<<x<<y;
}}
}
With the above code I get the coordinates, and from those coordinate points I can compute the arc lengths, but they don't match the exact result.
Here is the actual image:
Image 1:
After thinning I got:
Expected output:
As you are unable to define what exactly the upper/lower arc is, I will assume you cut the ellipse into halves by a horizontal line going through the ellipse's middle point. If that is not the case, then you have to adapt this on your own... OK, now how to do it:
binarize image
As you provide a JPG, the colors are distorted, so there is more than just black and white.
thin the border to 1 pixel
Fill the inside with white, and then recolor all white pixels not neighboring any black pixels to some unused color or black. There are many other variations of how to achieve this...
find the bounding box
Search all pixels and remember the min and max x,y coordinates of all white pixels. Let's call them x0,y0,x1,y1.
compute center of ellipse
Simply find the middle point of the bounding box:
cx=(x0+x1)/2
cy=(y0+y1)/2
count the pixels for each elliptic arc
Keep a counter for each arc and simply increment the upper-arc counter for any white pixel that has y<=cy, and the lower one if y>=cy. If your coordinate system is different, the conditions may be reversed.
find ellipse parameters
Simply find the white pixel closest to (cx,cy); this will be the endpoint of the minor semi-axis b, let's call it (bx,by). Also find the white pixel farthest from (cx,cy); that will be the major semi-axis endpoint (ax,ay). The distances between them and the center will give you a,b, and their positions with the center subtracted will give you the basis vectors carrying the rotation of your ellipse. The angle can be obtained by atan2, or use the basis vectors as I do. You can test orthogonality with the dot product. There can be more than 2 points for the closest and farthest point; in that case you should find the middle of each group to enhance precision.
Integrate fitted ellipse
You first need to find the angles at which the ellipse points have y=cy, then integrate the ellipse between these two angles. The other half is the same; just integrate the angles + PI. To determine which half it is, just compute the point in the middle of the angle range and decide according to y>=cy ...
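For reference, the quantity the code below approximates is the standard arc-length integral of the ellipse in the same basis-vector parametrization the code uses (this formula is my addition, for clarity):

p(t) = c + A\cos t + B\sin t, \qquad
L = \int_{t_0}^{t_1} \lVert p'(t) \rVert \, dt
  = \int_{t_0}^{t_1} \sqrt{(-a_x \sin t + b_x \cos t)^2 + (-a_y \sin t + b_y \cos t)^2} \, dt

where c = (cx,cy), A = (ax,ay), B = (bx,by). The code does not evaluate this analytically; it sums the lengths of short chords sampled at angle step da, which converges to L as da shrinks.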
[Edit2] Here is the updated C++ code I put together for this:
picture pic0,pic1,pic2;
// pic0 - source
// pic1 - output
float a,b,a0,a1,da,xx0,xx1,yy0,yy1,ll0,ll1;
int x,y,i,threshold=127,x0,y0,x1,y1,cx,cy,ax,ay,bx,by,aa,bb,dd,l0,l1;
pic1=pic0;
// bbox,center,recolor (white,black)
x0=pic1.xs; x1=0;
y0=pic1.ys; y1=0;
for (y=0;y<pic1.ys;y++)
for (x=0;x<pic1.xs;x++)
if (pic1.p[y][x].db[0]>=threshold)
{
if (x0>x) x0=x;
if (y0>y) y0=y;
if (x1<x) x1=x;
if (y1<y) y1=y;
pic1.p[y][x].dd=0x00FFFFFF;
} else pic1.p[y][x].dd=0x00000000;
cx=(x0+x1)/2; cy=(y0+y1)/2;
// fill the inside (gray), leaving a single-pixel-wide border (thinning)
for (y=y0;y<=y1;y++)
{
for (x=x0;x<=x1;x++) if (pic1.p[y][x].dd)
{
for (i=x1;i>=x;i--) if (pic1.p[y][i].dd)
{
for (x++;x<i;x++) pic1.p[y][x].dd=0x00202020;
break;
}
break;
}
}
for (x=x0;x<=x1;x++)
{
for (y=y0;y<=y1;y++) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
for (y=y1;y>=y0;y--) if (pic1.p[y][x].dd) { pic1.p[y][x].dd=0x00FFFFFF; break; }
}
// find min,max radius (semi-axis endpoints)
bb=pic1.xs+pic1.ys; bb*=bb; aa=0;
ax=cx; ay=cy; bx=cx; by=cy;
for (y=y0;y<=y1;y++)
for (x=x0;x<=x1;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
dd=((x-cx)*(x-cx))+((y-cy)*(y-cy));
if (aa<dd) { ax=x; ay=y; aa=dd; }
if (bb>dd) { bx=x; by=y; bb=dd; }
}
aa=sqrt(aa); ax-=cx; ay-=cy;
bb=sqrt(bb); bx-=cx; by-=cy;
//a=float((ax*bx)+(ay*by))/float(aa*bb); // if (fabs(a)>zero_threshold) not perpendicular semiaxes
// separate/count upper,lower arc by horizontal line
l0=0; l1=0;
for (y=y0;y<=y1;y++)
for (x=x0;x<=x1;x++)
if (pic1.p[y][x].dd==0x00FFFFFF)
{
if (y>=cy) { l0++; pic1.p[y][x].dd=0x000000FF; } // red
if (y<=cy) { l1++; pic1.p[y][x].dd=0x00FF0000; } // blue
}
// here is just VCL/GDI info layer output so you can ignore it...
// arc separator axis
pic1.bmp->Canvas->Pen->Color=0x00808080;
pic1.bmp->Canvas->MoveTo(x0,cy);
pic1.bmp->Canvas->LineTo(x1,cy);
// draw analytical ellipse to compare
pic1.bmp->Canvas->Pen->Color=0x0000FF00;
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+ax,cy+ay);
pic1.bmp->Canvas->MoveTo(cx,cy);
pic1.bmp->Canvas->LineTo(cx+bx,cy+by);
pic1.bmp->Canvas->Pen->Color=0x00FFFF00;
da=0.01*M_PI; // dash step [rad]
a0=0.0; // start
a1=2.0*M_PI; // end
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
x=cx+(ax*cos(a))+(bx*sin(a));
y=cy+(ay*cos(a))+(by*sin(a));
pic1.bmp->Canvas->MoveTo(x,y);
a+=da; if (a>=a1) { a=a1; i=0; }
x=cx+(ax*cos(a))+(bx*sin(a));
y=cy+(ay*cos(a))+(by*sin(a));
pic1.bmp->Canvas->LineTo(x,y);
}
// integrate the arclengths from fitted ellipse
da=0.001*M_PI; // integration step [rad] (accuracy)
// find start-end angles
ll0=M_PI; ll1=M_PI;
for (i=1,a=0.0;i;)
{
a+=da; if (a>=2.0*M_PI) { a=0.0; i=0; }
xx1=(ax*cos(a))+(bx*sin(a));
yy1=(ay*cos(a))+(by*sin(a));
b=atan2(yy1,xx1);
xx0=fabs(b-0.0); if (xx0>M_PI) xx0=2.0*M_PI-xx0;
xx1=fabs(b-M_PI);if (xx1>M_PI) xx1=2.0*M_PI-xx1;
if (ll0>xx0) { ll0=xx0; a0=a; }
if (ll1>xx1) { ll1=xx1; a1=a; }
}
// [upper half]
ll0=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
xx1=cx+(ax*cos(a))+(bx*sin(a));
yy1=cy+(ay*cos(a))+(by*sin(a));
// sum arc-line sizes
xx0-=xx1; xx0*=xx0;
yy0-=yy1; yy0*=yy0;
ll0+=sqrt(xx0+yy0);
// pic1.p[int(yy1)][int(xx1)].dd=0x0000FF00; // recolor to visually check the right arc selection
xx0=xx1; yy0=yy1;
}
// lower half
a0+=M_PI; a1+=M_PI; ll1=0.0;
xx0=cx+(ax*cos(a0))+(bx*sin(a0));
yy0=cy+(ay*cos(a0))+(by*sin(a0));
for (i=1,a=a0;i;)
{
a+=da; if (a>=a1) { a=a1; i=0; }
xx1=cx+(ax*cos(a))+(bx*sin(a));
yy1=cy+(ay*cos(a))+(by*sin(a));
// sum arc-line sizes
xx0-=xx1; xx0*=xx0;
yy0-=yy1; yy0*=yy0;
ll1+=sqrt(xx0+yy0);
// pic1.p[int(yy1)][int(xx1)].dd=0x00FF00FF; // recolor to visually check the right arc selection
xx0=xx1; yy0=yy1;
}
// handle if the upper/lower parts are swapped
a=a0+0.5*(a1-a0);
if ((ay*cos(a))+(by*sin(a))<0.0) { a=ll0; ll0=ll1; ll1=a; }
// info texts
pic1.bmp->Canvas->Font->Color=0x00FFFF00;
pic1.bmp->Canvas->Brush->Style=bsClear;
x=5; y=5; i=16; y-=i;
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("center = (%i,%i) px",cx,cy));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("a = %i px",aa));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("b = %i px",bb));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper = %i px",l0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower = %i px",l1));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("upper`= %.3lf px",ll0));
pic1.bmp->Canvas->TextOutA(x,y+=i,AnsiString().sprintf("lower`= %.3lf px",ll1));
pic1.bmp->Canvas->Brush->Style=bsSolid;
It uses my own picture class with these members:
xs,ys - resolution of the image
p[y][x].dd - pixel access as a 32-bit unsigned integer (whole color)
p[y][x].db[4] - pixel access as 4 x 8-bit unsigned integers (color channels)
You can look at the picture::p member as a simple 2D array of
union color
{
    DWORD dd; WORD dw[2]; byte db[4];
    int i; short int ii[2];
    color() {}
    color(color& a) { *this = a; }
    ~color() {}
    color* operator = (const color *a) { dd = a->dd; return this; }
    /*color* operator = (const color &a) { ...copy... return this; };*/
};
int xs, ys;
color p[ys][xs];        // conceptual layout; the real storage is allocated dynamically
Graphics::TBitmap *bmp; // VCL GDI Bitmap object, you do not need this...
where each cell can be accessed as a 32-bit pixel p[][].dd, either as 0xAABBGGRR or 0xAARRGGBB (not sure now which). You can also access the channels directly with p[][].db[4] as 8-bit BYTEs.
The bmp member is a GDI bitmap, so bmp->Canvas-> accesses all the GDI stuff, which is not important for you.
Here is the result for your second image:
The gray horizontal line is the arc boundary line going through the center.
Red and blue are the arc halves (recolored during counting).
Green are the semi-axis basis vectors.
The aqua dashed line is the analytical ellipse overlaid to compare the fit.
As you can see, the fit is pretty close (+/- 1 pixel). The counted arc lengths (upper, lower) are pretty close to the approximated average circle half perimeter (circumference).
You should add an a0 range check to decide if the start is the upper or lower half, because there is no guarantee which side of the major axis this will find. The integration of both halves is almost the same and saturates, at integration step 0.001*M_PI, around 307.3 pixels per arc length, which is only 17 and 22 pixels difference from the direct pixel counts - even better than I anticipated, given the aliasing ...
For more eccentric ellipses the fit is not as good, but the results are still good enough:

Blank screen after compiling OpenGL code

I am really new to OpenGL and I am trying to just make a surface from two triangles. I don't know where I am going wrong with this code. I know that all the positions and colors are getting into the Triangle class and that the triangles are being made, but nothing is being output. Can someone help?
I tried to get just the output from the Triangle class, but it doesn't seem to be working. I don't think there's anything wrong with the way I am calling the display function.
Code:
#include <GL/gl.h>
#include <GL/glu.h>
#include <GL/glut.h>
#include <iostream>
#include <vector>
using namespace std;
class Triangle
{
public:
float position[9],color[3];
Triangle()
{}
Triangle(float position_t[], float color_t[])
{
for(int i=0;i<9;i++)
{position[i] = position_t[i];}
for(int i=0;i<3;i++)
{color[i]= color_t[i];}
}
void makeTriangle()
{
glBegin(GL_TRIANGLES);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[0],position[1],position[2]);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[3],position[4],position[5]);
glColor3f(color[0],color[1],color[2]);glVertex3f(position[6],position[7],position[8]);
glEnd();}
};
class Mesh
{
public:
/*float center[3],position[9],color[3];
float size;*/
vector<Triangle> elements;
float center[3],position[9],color[3];
float size;
Mesh(){}
Mesh(float center_in[3], float color_in[3])
{
for (int i=0;i<3;i++)
{
color[i] = color_in[i];
center[i] = center_in[i];
}
}
void getPositions()
{
position[0] = 1;position[1] = 1; position[2] = 1;
position[3] = -1;position[4] = -1; position[5] = 1;
position[6] = 1;position[7] = -1; position[8] = 1;
}
void getColor()
{
color[0] = 1; color[1]=0; color[2]=0;
}
static Mesh makeMesh()
{
Mesh a;
a.elements.resize(2);
a.getPositions();
a.getColor();
Triangle T(a.position,a.color);
a.elements[0] = T;
//Triangle O(2);
//a.elements[1] = 0;
return a;
}
};
void render()
{
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
Mesh a;
a.elements.resize(2);
a.getPositions();
a.getColor();
Triangle T(a.position,a.color);
//vector<Mesh> m;
//m.push_back(Mesh::makeMesh());
glPushMatrix();
T.makeTriangle();
glPopMatrix();
glFlush();
glutSwapBuffers();
glutPostRedisplay();
}
Full Code: http://pastebin.com/xa3B7166
As I suggested in the comments, you are not setting up the view with gluLookAt(). Everything is being drawn, but you are just not looking at it!
Docs: https://www.opengl.org/sdk/docs/man2/xhtml/gluLookAt.xml
Your code does not specify any transformations. Therefore, your coordinates need to be within the default view volume, which is [-1, 1] in all coordinate directions.
Or more technically, the model/view/projection transformations (or all the transformations applied in your vertex shader if you use the programmable pipeline) transform the coordinates into the clip coordinate space, and after perspective division into the normalized device coordinate (aka NDC) space. The range of the NDC space is [-1, 1] for all coordinates.
If you don't apply any transformations, like is the case in your code, your original coordinates already have to be in NDC space.
With your current coordinates:
position[0] = 1;position[1] = 1; position[2] = 1;
position[3] = -1;position[4] = -1; position[5] = 1;
position[6] = 1;position[7] = -1; position[8] = 1;
all the z-coordinates have values of 1, which means that the whole triangle is right on the boundary of the clip volume. To make it visible, you can simply set the z-coordinates to 0:
position[0] = 1;position[1] = 1; position[2] = 0;
position[3] = -1;position[4] = -1; position[5] = 0;
position[6] = 1;position[7] = -1; position[8] = 0;
This centers it within the NDC space in z-direction, with the vertices being on 3 of the corners in the xy-plane. You will therefore see half of your window covered by the triangle, cutting it in half along the diagonal.
It's of course common in OpenGL to have the original coordinates in a different coordinate space, and then apply transformations to place them within the view volume.
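For instance, here is a minimal fixed-function sketch (not part of the original code; the [-2, 2] extent is an arbitrary choice) that enlarges the view volume so the original z = 1 vertices are strictly inside it:

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
// glOrtho(left, right, bottom, top, near, far): with near = -2 and far = 2,
// the visible eye-space z range becomes [-2, 2], so z = 1 is no longer on
// the clip-volume boundary.
glOrtho(-2.0, 2.0, -2.0, 2.0, -2.0, 2.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();

Run once during setup (or at the top of render()), this would make the triangle visible without touching the vertex data.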
You're probably already aware of this, but I thought I'd mention it anyway: If you're just starting to learn OpenGL, I would suggest that you learn what people often call "modern OpenGL". This includes the OpenGL Core Profile, or OpenGL ES 2.0 or later. The calls you are using now are mostly deprecated in newer versions of OpenGL, and not available anymore in the Core Profile and ES. The initial hurdle is somewhat higher for "modern OpenGL", particularly since you have to write your own shaders, but you will get on the path to acquiring knowledge that is still current.

Ogre3d / accessing passes via compositor listener and pass identifiers

I'm trying to implement deferred shading using Ogre 1.8. This is my final compositor:
compositor DeferredShadingShowLit
{
technique
{
texture rt0 target_width target_height PF_A8R8G8B8
texture_ref mrt_output DeferredShadingGBuffer mrt_output
target rt0
{
input none
shadows off
pass clear
{
identifier 1
}
pass render_quad
{
identifier 2
material DeferredShadingPostQuadLight
input 0 mrt_output 0
input 1 mrt_output 1
}
}
target_output
{
input none
pass render_quad
{
identifier 3
material DeferredShadingFinal
input 0 rt0
}
}
}
}
I need to pass the light's position, which changes every frame, to the DeferredShadingPostQuadLight material (used to render the lights). It's a simple example, and I haven't implemented any optimizations such as z tests or bounding volumes for the lights. For that purpose I'm using a compositor listener that is set up this way:
class LightListener : public Ogre::CompositorInstance::Listener
{
public:
LightListener(Ogre::Vector3 alightPos);
virtual ~LightListener();
virtual void notifyMaterialSetup(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat);
virtual void notifyMaterialRender(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat);
Ogre::Vector3 lightPos;
Ogre::GpuProgramParametersSharedPtr fpParams;
};
LightListener::LightListener(Ogre::Vector3 alightPos)
{
lightPos = alightPos;
}
LightListener::~LightListener()
{
}
void LightListener::notifyMaterialSetup(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat)
{
//if (pass_id == 2) // This gives me an error
fpParams = mat->getBestTechnique()->getPass(pass_id)->getFragmentProgramParameters();
}
void LightListener::notifyMaterialRender(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat)
{
//if (pass_id == 2) // This gives me an error
fpParams->setNamedConstant("lightPos", lightPos);
}
The problem is I can't access the passes by their id, as shown in the commented-out lines above.
However, if those lines stay commented out and I change the compositor script like this:
compositor DeferredShadingShowLit
{
technique
{
texture rt0 target_width target_height PF_A8R8G8B8
texture_ref mrt_output DeferredShadingGBuffer mrt_output
target_output
{
input none
shadows off
pass clear
{
}
pass render_quad
{
material DeferredShadingPostQuadLight
input 0 mrt_output 0
input 1 mrt_output 1
}
}
}
}
then the fragment program of the DeferredShadingPostQuadLight material gets updated every frame without any problems.
The thing is, I need to render to rt0 first and only then to target_output. Can you please show me what I'm doing wrong here? Thank you!
I finally got it figured out! The answers on the Ogre forums were often misleading. I traced the pass_id variables: the one you are given in the inherited virtual function notifyMaterialSetup and the one you put in mat->getBestTechnique()->getPass(pass_id) are actually completely different values. I have no idea why examples all over the internet write this code like this; it is completely wrong. The pass_id argument refers to compositor passes, whereas the pass_id in mat->getBestTechnique()->getPass(pass_id) refers to material passes. I got my own example working simply by altering the code like this:
void LightListener::notifyMaterialSetup(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat)
{
if (pass_id == 2)
fpParams = mat->getBestTechnique()->getPass(0)->getFragmentProgramParameters();
// I put 0 here because my material has only one pass
}
void LightListener::notifyMaterialRender(Ogre::uint32 pass_id, Ogre::MaterialPtr &mat)
{
if (pass_id == 2)
fpParams->setNamedConstant("lightPos", lightPos);
}
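For completeness, this is roughly how such a listener gets attached to the compositor chain (a sketch of the usual Ogre 1.x setup; vp and lightPos are illustrative names, not taken from the code above):

// Register the compositor on a viewport, enable it, and attach the listener.
Ogre::CompositorInstance* inst =
    Ogre::CompositorManager::getSingleton().addCompositor(vp, "DeferredShadingShowLit");
Ogre::CompositorManager::getSingleton().setCompositorEnabled(vp, "DeferredShadingShowLit", true);
inst->addListener(new LightListener(lightPos)); // notifyMaterialSetup/notifyMaterialRender now fire per pass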
Thanks for your attention!

Convert triangle strips to triangles?

I'm using the GPC tessellation library and it outputs triangle strips.
The example shows rendering like this:
for (s = 0; s < tri.num_strips; s++)
{
glBegin(GL_TRIANGLE_STRIP);
for (v = 0; v < tri.strip[s].num_vertices; v++)
glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
glEnd();
}
The issue is that this renders multiple triangle strips, and that is the problem for me. My application renders with VBOs, particularly 1 VBO for 1 polygon. I need a way to modify the above code so that instead it could look something more like this:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
{
// How should I specify vertices here?
}
glEnd();
How could I do this?
particularly 1 VBO for 1 polygon
Whoa. One VBO per polygon won't be efficient; it kills the whole point of a vertex buffer. The idea of a vertex buffer is to cram as many vertices into it as you can. You can put multiple triangle strips into one vertex buffer, or render separate primitives that are stored in one buffer (see the index-buffer sketch at the end of this answer).
I need a way to modify the above code so that instead it could look something more like this:
This should work:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
for (v = 0; v < tri.strip[s].num_vertices-2; v++)
if (v & 1){
glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
}
else{
glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
}
glEnd();
Because triangle-strip triangulation goes like this (numbers represent vertex indices):
0----2
| /|
| / |
| / |
|/ |
1----3
Note: I assume that vertices in triangle strips are stored in the same order as in my picture AND that you want triangle vertices to be sent in counter-clockwise order. If you want them to be CW, then use this code instead:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
for (v = 0; v < tri.strip[s].num_vertices-2; v++)
if (v & 1){
glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
}
else{
glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
}
glEnd();
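Since you render with VBOs anyway, another option (a sketch assuming the usual gpc_tristrip layout from gpc.h, with the same even/odd winding as the CCW snippet above) is to build a GL_TRIANGLES index list once and draw all strips from a single buffer:

#include <vector>

std::vector<double>   verts;   // x,y pairs of all strips
std::vector<unsigned> indices; // 3 indices per triangle

for (int s = 0; s < tri.num_strips; s++)
{
    unsigned base = (unsigned)(verts.size() / 2); // index of this strip's first vertex
    for (int v = 0; v < tri.strip[s].num_vertices; v++)
    {
        verts.push_back(tri.strip[s].vertex[v].x);
        verts.push_back(tri.strip[s].vertex[v].y);
    }
    for (int v = 0; v + 2 < tri.strip[s].num_vertices; v++)
    {
        if (v & 1) { // odd triangle: keep strip order
            indices.push_back(base + v);
            indices.push_back(base + v + 1);
            indices.push_back(base + v + 2);
        } else {     // even triangle: swap the last two to preserve winding
            indices.push_back(base + v);
            indices.push_back(base + v + 2);
            indices.push_back(base + v + 1);
        }
    }
}
// Upload verts and indices to a VBO/IBO pair once, then draw everything with
// glDrawElements(GL_TRIANGLES, indices.size(), GL_UNSIGNED_INT, 0);

This keeps one vertex buffer per polygon if you insist, but the same scheme also lets you merge many polygons into one buffer.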