I have a problem when trying to tessellate a polygon using GLU. The vertex callback is always called with the last vertex passed to gluTessVertex. It seems as though the values stored in GLdouble v[3] are destroyed on each iteration of the for loop. How can I store each GLdouble v[3] so it does not get destroyed?
for (int i = 0; i < vtxcnt; i++)
{
    float lon = dbls[i * 2];
    float lat = dbls[(i * 2) + 1];
    GLdouble v[3] = {lon, lat, 0.0f};
    gluTessVertex(tess, v, v);
}
EDIT: This seems to fix the problem...
GLdouble *vtxs = new GLdouble[vtxcnt * 3];
for (int i = 0; i < vtxcnt; i++)
{
    lon = dbls[i * 2];
    lat = dbls[(i * 2) + 1];
    vtxs[(i * 3) + 0] = (double)lon;
    vtxs[(i * 3) + 1] = (double)lat;
    vtxs[(i * 3) + 2] = (double)0;
    gluTessVertex(tess, &vtxs[(i * 3) + 0], &vtxs[(i * 3) + 0]);
}
gluTessVertex only stores the vertex pointer; the pointed-to data must stay valid until the tessellation is actually performed. That is not the case in your original code: each v is a stack array that goes out of scope at the end of every loop iteration, so all the stored pointers end up referring to the same reused stack slot, which is why the callback always sees the last vertex.
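For reference, a minimal sketch of the same fix (assuming vtxcnt, dbls and tess from the question) that keeps every vertex alive until tessellation finishes and avoids the leak of the raw new[] above, by sizing the storage once up front so no pointer is invalidated:

#include <vector>

std::vector<GLdouble> vtxs(vtxcnt * 3); // sized once; no reallocation, so pointers stay valid
for (int i = 0; i < vtxcnt; i++)
{
    vtxs[i * 3 + 0] = dbls[i * 2];     // lon
    vtxs[i * 3 + 1] = dbls[i * 2 + 1]; // lat
    vtxs[i * 3 + 2] = 0.0;
    gluTessVertex(tess, &vtxs[i * 3], &vtxs[i * 3]);
}
// vtxs must stay in scope until after gluTessEndPolygon().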
I am trying to understand this code:
void stencil(const int nx, const int ny, const int width, const int height,
             double* image, double* tmp_image)
{
    for (int j = 1; j < ny + 1; ++j) {
        for (int i = 1; i < nx + 1; ++i) {
            tmp_image[j + i * height]  = image[j + i * height] * 3.0 / 5.0;
            tmp_image[j + i * height] += image[j + (i - 1) * height] * 0.5 / 5.0;
            tmp_image[j + i * height] += image[j + (i + 1) * height] * 0.5 / 5.0;
            tmp_image[j + i * height] += image[j - 1 + i * height] * 0.5 / 5.0;
            tmp_image[j + i * height] += image[j + 1 + i * height] * 0.5 / 5.0;
        }
    }
}
The 1-d array notation is very confusing. I am trying to convert it to a 2-d notation (which I find easier to read). Could someone point me in the right direction as to how I can accomplish this?
All this code is doing is creating a new image from an original image by taking 60% from the corresponding pixel and 10% from each of the four neighboring pixels.
When you see tmp_image[j + i * height], read it as tmp_image[i][j].
Changing the code to literally use 2-D syntax may require knowing at least one of the dimensions at compile time, whereas here they are runtime arguments. So that might be a non-starter, unless you're using C++ and want to write or use a matrix class instead of plain arrays.
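A lightweight alternative that keeps the dimensions as runtime arguments is a small indexing helper. A minimal sketch (px and stencil2d are hypothetical names, not part of the original code):

inline double& px(double* img, int i, int j, int height)
{
    return img[j + i * height]; // element (i, j): i selects the column of length 'height'
}

void stencil2d(const int nx, const int ny, const int width, const int height,
               double* image, double* tmp_image)
{
    for (int j = 1; j < ny + 1; ++j) {
        for (int i = 1; i < nx + 1; ++i) {
            px(tmp_image, i, j, height) =
                  px(image, i,     j,     height) * 3.0 / 5.0   // 60% centre pixel
                + px(image, i - 1, j,     height) * 0.5 / 5.0   // 10% left neighbour
                + px(image, i + 1, j,     height) * 0.5 / 5.0   // 10% right neighbour
                + px(image, i,     j - 1, height) * 0.5 / 5.0   // 10% upper neighbour
                + px(image, i,     j + 1, height) * 0.5 / 5.0;  // 10% lower neighbour
        }
    }
}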
After advice from krlzlx, I have posted it as a new question.
From here:
3D Line Segment and Plane Intersection
I have a problem with this algorithm; I have implemented it like so:
template <class T>
class AnyCollision {
public:
    std::pair<bool, T> operator()(Point3d &ray, Point3d &rayOrigin, Point3d &normal, Point3d &coord) const {
        // get d value
        float d = (normal.x * coord.x) + (normal.y * coord.y) + (normal.z * coord.z);

        if (((normal.x * ray.x) + (normal.y * ray.y) + (normal.z * ray.z)) == 0) {
            return std::make_pair(false, T());
        }

        // Compute the X value for the directed line ray intersecting the plane
        float a = (d - ((normal.x * rayOrigin.x) + (normal.y * rayOrigin.y) + (normal.z * rayOrigin.z)) / ((normal.x * ray.x) + (normal.y * ray.y) + (normal.z * ray.z)));

        // output contact point
        float rayMagnitude = sqrt(pow(ray.x, 2) + pow(ray.y, 2) + pow(ray.z, 2));
        Point3d rayNormalised(ray.x / rayMagnitude, ray.y / rayMagnitude, ray.z / rayMagnitude);
        Point3d contact(rayOrigin.x + (rayNormalised.x * a), rayOrigin.y + (rayNormalised.y * a), rayOrigin.z + (rayNormalised.z * a)); // make sure the ray vector is normalized
        return std::make_pair(true, contact);
    }
};
Point3d is defined as:
class Point3d {
public:
    double x;
    double y;
    double z;

    /**
     * constructor
     *
     * 0 all elements
     */
    Point3d() {
        x = 0.0;
        y = 0.0;
        z = 0.0;
    }
};
I am forced to use this structure: in the larger system my component runs in, it is defined like this and cannot be changed.
My code compiles fine, but in testing I get incorrect values for the point. The ratio of x, y, z is correct but the magnitude is wrong.
For example if:
rayOrigin.x = 0;
rayOrigin.y = 0;
rayOrigin.z = 0;
ray.x = 3;
ray.y = -5;
ray.z = 12;
normal.x = -3;
normal.y = 12;
normal.z = 0;
coord.x = 7;
coord.y = -5;
coord.z = 10;
I expect the point to be:
(0.63, 1.26, 1.89)
However, it is:
(3.52, -5.87, 14.09)
A magnitude about 5.09 times too big.
And I also tested:
rayOrigin.x = 0;
rayOrigin.y = 0;
rayOrigin.z = 0;
ray.x = 2;
ray.y = 3;
ray.z = 3;
normal.x = 4;
normal.y = 1;
normal.z = 0;
p0.x = 2;
p0.y = 1;
p0.z = 5;
I expect the point to be:
(1.64, 2.45, 2.45)
However, it is:
(3.83761, 5.75642, 5.75642)
A magnitude about 2.34 times too big?
Pseudocode (does not require vector normalization):
Diff = PlaneBaseCoordinate - RayOrigin
d = Normal.dot.Diff
e = Normal.dot.RayVector
if (e)
    IntersectionPoint = RayOrigin + RayVector * d / e
otherwise
    ray belongs to the plane or is parallel
Quick check:
Ray: origin (0,0,0), vector (2,2,2)          // to catch possible scale issues
Plane: base coordinate (0,1,0), normal (0,3,0) // the plane y = 1
Diff = (0,1,0)
d = 3
e = 6
IntersectionPoint = (0,0,0) + (2,2,2) * 3 / 6 = (1, 1, 1)
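Translated into C++ against the question's Point3d, a minimal sketch of that pseudocode (intersect is a hypothetical name; no normalization required):

#include <utility>

std::pair<bool, Point3d> intersect(const Point3d& rayVector, const Point3d& rayOrigin,
                                   const Point3d& normal, const Point3d& planeCoord)
{
    // d = Normal . (PlaneBaseCoordinate - RayOrigin), e = Normal . RayVector
    double d = normal.x * (planeCoord.x - rayOrigin.x)
             + normal.y * (planeCoord.y - rayOrigin.y)
             + normal.z * (planeCoord.z - rayOrigin.z);
    double e = normal.x * rayVector.x + normal.y * rayVector.y + normal.z * rayVector.z;
    if (e == 0.0)
        return std::make_pair(false, Point3d()); // ray is parallel to or lies in the plane

    Point3d contact;
    contact.x = rayOrigin.x + rayVector.x * d / e;
    contact.y = rayOrigin.y + rayVector.y * d / e;
    contact.z = rayOrigin.z + rayVector.z * d / e;
    return std::make_pair(true, contact);
}

With the second test case (origin (0,0,0), ray (2,3,3), normal (4,1,0), plane point (2,1,5)) this yields d = 9, e = 11 and the expected point (1.64, 2.45, 2.45).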
I'm currently working on 2D transformations (translation, scaling, shearing and rotation) in Qt. I have a problem with bilinear interpolation, which I want to use to cover the 'black pixels' in the output image. I'm using matrix calculations to get the new coordinates of the pixels of the input image. Then I use the reverse matrix calculation to check which pixel of the input image corresponds to a given output pixel. The result of that is a pair of float numbers, which I use for the interpolation: I check the four neighbouring points and calculate the value (colour) of the output pixel. I have checked my calculations by hand and they seem correct.
Can anyone find a bug in this code? (I have cut out the parts of the code responsible for the interface, such as sliders.)
Geometric::Geometric(QWidget* parent) : QWidget(parent) {
    resize(1000, 800);
    displayLogoDefault = true;
    a = shx = shy = x0 = y0 = 0;
    scx = scy = 1;
    tx = ty = 0;
    x = 200, y = 200;
    paintT = paintSc = paintR = paintShx = paintShy = false;
    img = new QImage(600, 600, QImage::Format_RGB32);
    img2 = new QImage("logo.jpeg");
}

Geometric::~Geometric() {
    delete img;
    delete img2;
    img = NULL;
    img2 = NULL;
}
void Geometric::makeChange() {
    displayLogoDefault = false;
    // iterate through the whole input image
    for (int i = 0; i < img2->width(); i++) {
        for (int j = 0; j < img2->height(); j++) {
            // calculate new coordinates based on the given 2D transformation values
            // (I derived this formula earlier by multiplying/adding matrices)
            x = cos(a)*scx*(i-x0) - sin(a)*scy*(j-y0) + shx*sin(a)*scx*(i-x0) + shx*cos(a)*scy*(j-y0);
            y = shy*(x) + sin(a)*scx*(i-x0) + cos(a)*scy*(j-y0);
            // tx and ty are the translation, scx and scy the scaling,
            // shx and shy the shearing, and a is the rotation angle
            x += (x0 + tx);
            y += (y0 + ty);
            if (x >= 0 && y >= 0 && x < img->width() && y < img->height()) {
                // reverse matrix calculation to find the matching pixel in the input image
                float tmx = x - x0 - tx;
                float tmy = y - y0 - ty;
                float recX = 1/scx * ( cos(-a)*( (tmx + shx*shy*tmx - shx*tmx) ) + sin(-a)*( shy*tmx - tmy ) ) + x0;
                float recY = 1/scy * ( sin(-a)*(tmx + shx*shy*tmx - shx*tmx) - cos(-a)*(shy*tmx - tmy) ) + y0;
                // the interpolation starts here; the colour is computed from the four
                // input-image points found by the reverse calculation
                // (note: this 'a' shadows the rotation angle 'a' inside this block)
                float a = recX - floorf(recX);
                float b = recY - floorf(recY);
                if (recX + 1 > img2->width())  recX -= 1;
                if (recY + 1 > img2->height()) recY -= 1;
                QColor c1 = QColor(img2->pixel(recX,     recY));
                QColor c2 = QColor(img2->pixel(recX + 1, recY));
                QColor c3 = QColor(img2->pixel(recX,     recY + 1));
                QColor c4 = QColor(img2->pixel(recX + 1, recY + 1));
                float colR = b * ((1.0 - a) * (float)c3.red()   + a * (float)c4.red())   + (1.0 - b) * ((1.0 - a) * (float)c1.red()   + a * (float)c2.red());
                float colG = b * ((1.0 - a) * (float)c3.green() + a * (float)c4.green()) + (1.0 - b) * ((1.0 - a) * (float)c1.green() + a * (float)c2.green());
                float colB = b * ((1.0 - a) * (float)c3.blue()  + a * (float)c4.blue())  + (1.0 - b) * ((1.0 - a) * (float)c1.blue()  + a * (float)c2.blue());
                if (colR > 255) colR = 255;
                if (colG > 255) colG = 255;
                if (colB > 255) colB = 255;
                if (colR < 0) colR = 0;
                if (colG < 0) colG = 0;
                if (colB < 0) colB = 0;
                paintPixel(x, y, colR, colG, colB);
            }
        }
    }
    // x0 and y0 are the starting point of the image
    x0 = abs(x - tx);
    y0 = abs(y - ty);
    repaint();
}
// paints a pixel, writing directly into the image memory
void Geometric::paintPixel(int i, int j, int r, int g, int b) {
    unsigned char *ptr = img->bits();
    ptr[4 * (img->width() * j + i)]     = b;
    ptr[4 * (img->width() * j + i) + 1] = g;
    ptr[4 * (img->width() * j + i) + 2] = r;
}

void Geometric::paintEvent(QPaintEvent*) {
    QPainter p(this);
    p.drawImage(0, 0, *img);
    if (displayLogoDefault == true) p.drawImage(0, 0, *img2);
}
I have a uint8_t YUYV 422 (Interleaved) image array in memory and I want to be able to flip it both vertically and horizontally. I have successfully implemented a vertical flip but I'm having a problem with flipping both horizontally and vertically at the same time.
My code for the vertical flip, below, works perfectly.
int counter = 0;
int array_width = 2; // bytes per pixel in YUYV
// h >= 0 (inclusive) so the first source row is copied as well
for (int h = (m_Width * m_Height * array_width) - m_Width * array_width; h >= 0; h -= m_Width * array_width)
{
    for (int w = 0; w < m_Width * array_width; w++)
    {
        flipped[counter] = buffer[h + w];
        counter++;
    }
}
However, the following vertical and horizontal flip code appears to work but there is a loss of definition. To better understand what I am referring to, please see my sample images.
int x = 0;
for (int n = m_Width * m_Height * 2 - 1; n >= 0; n -= 4)
{
    flipped[x]     = buffer[n - 3]; // Y0
    flipped[x + 1] = buffer[n - 2]; // U
    flipped[x + 2] = buffer[n - 1]; // Y1
    flipped[x + 3] = buffer[n];     // V
    x += 4;
}
As you can see, I am moving the YUYV components and keeping them in the same order. I don't believe I am dropping pixels, so I don't understand why I am losing definition. To reiterate, I don't see this problem when flipping vertically (using the first code snippet).
Here is the reference image, please note the stem of the lamp:
This is the flipped image, the stem of the lamp has lost definition:
You also need to swap Y0 and Y1 in your loop.
int x = 0;
for (int n = m_Width * m_Height * 2 - 1; n >= 3; n -= 4)
{
    flipped[x]     = buffer[n - 1]; // Y1 -> Y0
    flipped[x + 1] = buffer[n - 2]; // U
    flipped[x + 2] = buffer[n - 3]; // Y0 -> Y1
    flipped[x + 3] = buffer[n];     // V
    x += 4;
}
While I was at it, since you're accessing n - 3, I changed the loop condition to n >= 3 to be absolutely sure it is safe.
m_Width * m_Height * 2 is not necessarily a multiple of 4 (the size of one data block in the YUYV format). Try changing the '2' into '4', and also array_width.
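If in doubt, a minimal guard (assuming a tightly packed YUYV buffer, where each 4-byte Y0-U-Y1-V block covers two pixels):

#include <cassert>

const int frameBytes = m_Width * m_Height * 2; // 2 bytes per pixel
assert(frameBytes % 4 == 0); // holds whenever the pixel count is even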
I have a line in a vertex shader
gl_Position = gl_ModelViewProjectionMatrix * vertex;
I need to do the same computation without a shader, like:
float vertex[4];
float modelviewProjection[16];
glGetFloatv(GL_MODELVIEW_MATRIX, modelviewProjection);
glMatrixMode(GL_PROJECTION_MATRIX);
glMultMatrixf(modelviewProjection);

for (counter = 0; counter < numPoints; counter++)
{
    vertex[0] = *vertexPointer + randomAdvance(timeAlive) + sin(ParticleTime);
    vertex[1] = *(vertexPointer + 1) + randomAdvance(timeAlive) + timeAlive * 0.6f;
    vertex[2] = *(vertexPointer + 2);
    glPushMatrix();
    glMultMatrixf(vertex);
    *vertexPointer = vertex[0];
    *(vertexPointer + 1) = vertex[1];
    *(vertexPointer + 2) = vertex[2];
    vertexPointer += 3;
    glPopMatrix();
}
If you have no suitable vector/matrix library, look into GLM (it can do that kind of thing without any fuss).
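A minimal sketch with GLM (the matrix and vertex values here are assumptions, not taken from the question):

#include <glm/glm.hpp>

glm::mat4 modelview(1.0f);  // fill with your modelview matrix
glm::mat4 projection(1.0f); // fill with your projection matrix
glm::vec4 vertex(x, y, z, 1.0f);
glm::vec4 clip = projection * modelview * vertex; // same as the shader line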
If you want to do it manually, the components of the transformed vector are the dot products of the respective rows in the matrix and the untransformed vector. That is because a vector can be seen as a matrix with one column (then just apply the rules of matrix multiplication).
Thus, assuming OpenGL's column-major memory layout, that would be:

x' = x*m[0] + y*m[4] + z*m[8]  + w*m[12]
y' = x*m[1] + y*m[5] + z*m[9]  + w*m[13]
z' = x*m[2] + y*m[6] + z*m[10] + w*m[14]
w' = x*m[3] + y*m[7] + z*m[11] + w*m[15]
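As a minimal C-style sketch of that computation (transform is a hypothetical name; m is in OpenGL's column-major layout):

void transform(const float m[16], const float in[4], float out[4])
{
    for (int row = 0; row < 4; ++row)
    {
        out[row] = in[0] * m[row]       // x * column 0
                 + in[1] * m[row + 4]   // y * column 1
                 + in[2] * m[row + 8]   // z * column 2
                 + in[3] * m[row + 12]; // w * column 3
    }
}

Note that the result goes into a separate output vector: writing x before reading it to compute y would corrupt the computation.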