I have a high-energy physics application that displays particle detectors. I draw hits in the detectors as points, but points in basic JOGL/OpenGL render as plain squares. What I would like to do instead is draw small images (PNGs) at the points. So the basic problem is: draw images from PNG files (resources) at given points.
I start by loading the image from a jar file via
_sprite = TextureIO.newTexture(_url, false, ".png");
This seems to work fine; for instance, the sprite reports the correct image size. Then I try to draw the same sprite at a set of points using
public static void drawSprites(GLAutoDrawable drawable, float coords[],
                               Texture sprite, float size) {
    GL2 gl = drawable.getGL().getGL2();

    gl.glPointSize(size);
    sprite.bind(gl);

    // how many points?
    int np = coords.length / 3;

    gl.glBegin(GL.GL_POINTS);
    for (int i = 0; i < np; i++) {
        int j = i * 3;
        gl.glVertex3f(coords[j], coords[j + 1], coords[j + 2]);
    }
    gl.glEnd();
}
But all I get are the points, i.e. squares. The squares are at the right locations, but they are just squares; no images.
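What seems to be missing (an assumption on my part, since only the drawing method is shown) is the fixed-function point-sprite state: without GL_POINT_SPRITE and COORD_REPLACE, the texture is bound but each point is still rasterized as a flat square. A minimal sketch in C-style OpenGL; the same entry points exist on JOGL's GL2 object (gl.glEnable(GL2.GL_POINT_SPRITE), gl.glTexEnvi(...), and so on):

#include <GL/gl.h>

// Hypothetical sketch: state typically needed for textured GL_POINTS in the
// fixed-function pipeline (GL 2.0 tokens; may require glext.h on some systems).
void enablePointSprites(float size)
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_POINT_SPRITE);                              // treat each point as a sprite
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);  // generate texcoords across the point
    glEnable(GL_BLEND);                                     // honour the PNG's alpha channel
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glPointSize(size);
}

With that state set before sprite.bind() and the glBegin(GL_POINTS) loop, each point should be rasterized with the bound texture instead of a solid square.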
So I am trying to write a raytracer as a personal project, and I have the basic recursion, mesh geometry, and ray-triangle intersection down.
I am trying to get a plausible image out of it, but I encounter the problem that all pixel rows are the same, giving me straight vertical lines.
I found that all pixel positions generated by the camera function are the same on the y-axis, but I cannot find the problem with my vector math here (I use my Vertex structure as a vector too; it's lazy, I know):
void Renderer::CameraShader()
{
    // compute the width and height of the screen based on angle and distance of the near clip plane
    double widthRad  = tan(0.5 * m_Cam.angle) * m_Cam.nearClipPlane;
    double heightRad = ((double)m_Cam.pixelRows / (double)m_Cam.pixelCols) * widthRad;

    // get the horizontal vector of the camera by crossing the direction vector with an up vector (0, 1, 0)
    Vertex cross = ((m_Cam.direction - m_Cam.origin).CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001)) * widthRad;
    // get the up/down vector of the camera by crossing the horizontal vector with the direction vector
    Vertex crossDown = m_Cam.direction.CrossProduct(cross).Normalized(0.0001) * heightRad;

    // generate rays per pixel row and column
    for (int i = 0; i < m_Cam.pixelCols; i++)
    {
        for (int j = 0; j < m_Cam.pixelRows; j++)
        {
            Vertex pixelPos = m_Cam.origin + (m_Cam.direction - m_Cam.origin).Normalized(0.0001) * m_Cam.nearClipPlane // vector to the screen center
                - cross + (cross * ((i / (double)m_Cam.pixelCols) * widthRad * 2))          // horizontal offset based on i
                + crossDown - (crossDown * ((j / (double)m_Cam.pixelRows) * heightRad * 2)); // vertical offset based on j

            // cast a ray through the corresponding screen pixel to get its colour
            m_Image[i][j] = raycast(m_Cam.origin, pixelPos - m_Cam.origin, p_MaxBounces);
        }
    }
}
I hope the comments in the code make clear what is happening.
If anyone sees the problem, help would be appreciated.
The problem was that I had to subtract the camera origin from the direction point. It now actually renders silhouettes, so I guess I can say it's fixed :)
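For reference, here is my reconstruction of that fix (an assumption, not the poster's exact diff): the crossDown computation used the direction point directly, while the other lines already used the direction vector (direction - origin).

// Reconstruction of the fix described above (assumption, not the original diff).
// m_Cam.direction is a point, so the camera's forward vector is direction - origin.
Vertex forward   = (m_Cam.direction - m_Cam.origin).Normalized(0.0001);
Vertex cross     = forward.CrossProduct(Vertex(0, 1, 0)).Normalized(0.0001) * widthRad;
Vertex crossDown = forward.CrossProduct(cross).Normalized(0.0001) * heightRad; // was m_Cam.direction.CrossProduct(cross)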
I have a dynamic body with many polygon shapes for my game character. In order to turn the game character around, I flip the vertices using this code:
void Box2dManager::flipFixtures(bool horizzontally, b2Body* physBody)
{
    b2Fixture* fix = physBody->GetFixtureList();
    while (fix)
    {
        b2Shape* shape = fix->GetShape();
        if (shape->GetType() == b2Shape::e_polygon)
        {
            // flip the x or y coordinates
            b2PolygonShape* ps = (b2PolygonShape*)shape;
            for (int i = 0; i < ps->GetVertexCount(); i++)
                horizzontally ? ps->m_vertices[i].x *= -1 : ps->m_vertices[i].y *= -1;

            // reverse the winding of the vertices (not needed after Box2D 2.3.0,
            // as polygon creation computes the convex hull)
            b2Vec2* reVert = new b2Vec2[ps->GetVertexCount()];
            int j = ps->GetVertexCount() - 1;
            for (int i = 0; i < ps->GetVertexCount(); i++)
                reVert[i] = ps->m_vertices[j--];
            ps->Set(&reVert[0], ps->GetVertexCount());
            delete[] reVert; // Set() copies the vertices, so the temporary buffer can be freed
        }
        fix = fix->GetNext();
    }
}
I also have static edge shapes as walls. It happens that when I flip the character, vertices of the same polygon sometimes end up on different sides of a static edge shape. As a result my character sticks to the wall (it gets trapped in the static edge shape). How should I handle this situation?
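One thing worth trying (my suggestion, not something from the original post) is to rebuild each polygon fixture instead of mutating its vertices in place, so Box2D recreates the broad-phase proxy and contacts for the new geometry. A rough sketch for a single fixture; the helper name is hypothetical, and you would collect the fixtures into a list first, because destroying a fixture while walking GetFixtureList() invalidates the iteration:

// Hypothetical sketch: mirror a polygon fixture by destroying and recreating it,
// so Box2D rebuilds its broad-phase proxy and contacts for the flipped geometry.
// Assumes 'fix' is already known to hold a b2PolygonShape.
void flipPolygonFixture(b2Body* body, b2Fixture* fix, bool horizontally)
{
    b2PolygonShape* old = (b2PolygonShape*)fix->GetShape();

    b2Vec2 mirrored[b2_maxPolygonVertices];
    int count = old->GetVertexCount();
    for (int i = 0; i < count; ++i)
    {
        mirrored[i] = old->GetVertex(i);
        if (horizontally) mirrored[i].x *= -1.0f;
        else              mirrored[i].y *= -1.0f;
    }

    b2PolygonShape flipped;
    flipped.Set(mirrored, count);   // Set() computes the convex hull (Box2D >= 2.3.0)

    b2FixtureDef fd;
    fd.shape       = &flipped;
    fd.density     = fix->GetDensity();
    fd.friction    = fix->GetFriction();
    fd.restitution = fix->GetRestitution();
    fd.filter      = fix->GetFilterData();
    fd.isSensor    = fix->IsSensor();

    body->DestroyFixture(fix);
    body->CreateFixture(&fd);
}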
I am coding a small map editor (with rectangular tiles) and I need a way to draw a large number of images OR one big image. The application is simple: you draw images on an empty screen with your mouse, and when you are finished you can save it. A tile consists of a small image.
I tried out several solutions to display the tiles:

1. Each tile has its own QGraphicsItem. (This works until you have a 1000x1000 map.)

2. Each tile gets drawn onto one big QPixmap. (This means a very large image. Example: a 1000x1000 map where each tile is 32x32 pixels means the QPixmap is 32000x32000. That is a problem for QPainter.)

3. The current solution: iterate through the width and height of the TileLayer and draw each single tile with painter->drawPixmap(). The paint() method of my TileLayer looks like this:
void TileLayerGraphicsItem::paint(QPainter* painter, const QStyleOptionGraphicsItem* option, QWidget* /*widget*/)
{
    painter->setClipRect(option->exposedRect);

    int m_width  = m_layer->getSize().width();
    int m_height = m_layer->getSize().height();

    for (int i = 0; i < m_width; i++)
    {
        for (int j = 0; j < m_height; j++)
        {
            Tile* thetile = m_layer->getTile(i, j);
            if (thetile == NULL) continue;

            const QRectF target(thetile->getLayerPos().x() * thetile->getSize().width(),
                                thetile->getLayerPos().y() * thetile->getSize().height(),
                                thetile->getSize().width(),
                                thetile->getSize().height());
            const QRectF source(0, 0, thetile->getSize().width(), thetile->getSize().height());

            painter->drawImage(target, *thetile->getImage(), source);
        }
    }
}
This works for small maps with 100x100 or even 1000x100 tiles, but not for 1000x1000. The whole application begins to lag, which is of course because the nested for loop is extremely expensive. To make my tool useful I need to handle at least 1000x1000 tile maps without lag. Does anyone have an idea what I can do? How should I represent the tiles?
Update:
I changed the following: only maps that exceed the window size of the minimap are drawn by plotting a single pixel per tile. This is my render function now:
void RectangleRenderer::renderMinimapImage(QPainter* painter, TileMap* map, QSize windowSize)
{
    for (int i = 0; i < map->getLayers().size(); i++)
    {
        TileLayer* currLayer = map->getLayers().at(i);

        // if the layer is small, draw it completely
        if (windowSize.width() > currLayer->getSize().width() && windowSize.height() > currLayer->getSize().height())
        {
            ...
        }
        else // this is the part where the map is so big that only some pixels are drawn!
        {
            painter->fillRect(0, 0, windowSize.width(), windowSize.height(), QBrush(QColor(map->MapColor)));

            for (float i = 0; i < windowSize.width(); i++)
                for (float j = 0; j < windowSize.height(); j++)
                {
                    float tX = i / windowSize.width();
                    float tY = j / windowSize.height();

                    float pX = lerp(i, currLayer->getSize().width(), tX);
                    float pY = lerp(j, currLayer->getSize().height(), tY);

                    Tile* thetile = currLayer->getTile((int)pX, (int)pY);
                    if (thetile == NULL) continue;

                    QRgb pixelcolor = thetile->getImage()->toImage().pixel(thetile->getSize().width() / 2,
                                                                           thetile->getSize().height() / 2);
                    QPen pen;
                    pen.setColor(QColor::fromRgb(pixelcolor));
                    painter->setPen(pen);
                    painter->drawPoint(i, j);
                }
        }
    }
}
This does not work correctly, but it is pretty fast. The problem is my lerp (linear interpolation) call that picks the tile to sample a pixel from.
Does anyone have a better solution for finding the correct tile while I iterate over the minimap pixels? At the moment I interpolate linearly between 0 and the maximum size of the tile map, and it does not work correctly.
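For reference, the lerp used above is presumably the conventional linear interpolation (an assumption, since its definition is not shown in the question):

// Conventional linear interpolation between a and b by t in [0, 1].
// With this definition, lerp(i, size, tX) starts the blend at i rather than 0,
// which matches the error found in Update 3 below.
static float lerp(float a, float b, float t)
{
    return a + (b - a) * t;
}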
UPDATE 2
//currLayer->getSize() returns how many tiles are in the map
// currLayer->getTileSize() returns how big each tile is (32 pixels width for example)
int raw_width = currLayer->getSize().width()*currLayer->getTileSize().width();
int raw_height = currLayer->getSize().height()*currLayer->getTileSize().height();
int desired_width = windowSize.width();
int desired_height = windowSize.height();
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width%2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width;
qreal ratio_y = (qreal)desired_height / raw_height;
qreal floating_factor = 1;
QPointF offset;
if (ratio_x < ratio_y)
{
    floating_factor = ratio_x;
    calculated_height = raw_height * ratio_x;
    calculated_width = desired_width;
    offset = QPointF(0, (qreal)(desired_height - calculated_height) / 2);
}
else
{
    floating_factor = ratio_y;
    calculated_width = raw_width * ratio_y;
    calculated_height = desired_height;
    offset = QPointF((qreal)(desired_width - calculated_width) / 2, 0);
}

for (int r = 0; r < calculated_height; r++)
{
    for (int c = 0; c < calculated_width; c++)
    {
        // trying to do the following: use your code to get the desired pixel,
        // then divide that number by the size of the tile to get the correct pixel
        Tile* thetile = currLayer->getTile((int)((r * floating_factor) * raw_width) / currLayer->getTileSize().width(),
                                           (int)(((c * floating_factor) * raw_height) / currLayer->getTileSize().height()));
        if (thetile == NULL) continue;

        QRgb pixelcolor = thetile->getImage()->toImage().pixel(thetile->getSize().width() / 2,
                                                               thetile->getSize().height() / 2);
        QPen pen;
        pen.setColor(QColor::fromRgb(pixelcolor));
        painter->setPen(pen);
        painter->drawPoint(r, c);
    }
}
I am trying to reverse engineer the example code, but it still does not work correctly.
Update 3
I tried the approach from Update 1 (linear interpolation) again, and while looking at the code I saw the error:
float pX=lerp(i,currLayer->getSize().width(),tX);
float pY=lerp(j,currLayer->getSize().height(),tY);
should be:
float pX=lerp(0,currLayer->getSize().width(),tX);
float pY=lerp(0,currLayer->getSize().height(),tY);
That's it. Now it works.
The Qt chip example below shows how to do it properly: you use a level-of-detail (lod) variable to determine how to draw the elements that are currently visible on the screen, based on the current zoom.
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-example.html
Also don't iterate through all the elements that could be visible, but only go through the ones that have changed, and of those, only the ones that are currently visible.
Your next option is some other form of manual caching, so you are not repeatedly iterating over the whole O(n^2) tile set.
If you can't optimize it for QGraphicsView/QGraphicsScene, then OpenGL is probably what you want to look into. It can do a lot of the drawing and caching directly on the graphics card, so you don't have to worry about it as much.
UPDATE:
Pushing changes to a QImage on a worker thread lets you build and update a cache while keeping the rest of your program responsive; you then use a queued connection to get back on the GUI thread and draw the QImage as a QPixmap.
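A rough sketch of that pattern; the class, slot, and view method names here are hypothetical, not from the project:

#include <QObject>
#include <QImage>
#include <QPainter>

// Hypothetical sketch: rebuild the tile cache off the GUI thread, then hand the
// finished QImage back through a queued connection for display as a QPixmap.
class MapCacheWorker : public QObject
{
    Q_OBJECT
public slots:
    void rebuildCache()
    {
        QImage cache(2048, 2048, QImage::Format_ARGB32_Premultiplied);
        cache.fill(Qt::transparent);
        QPainter p(&cache);
        // ... paint only the tiles that changed into 'cache' ...
        p.end();
        emit cacheReady(cache);   // QImage is implicitly shared, so emitting it is cheap
    }
signals:
    void cacheReady(const QImage &image);
};

// GUI side (sketch):
//   worker->moveToThread(&workerThread);
//   connect(worker, &MapCacheWorker::cacheReady,
//           this, &MapView::onCacheReady, Qt::QueuedConnection);
//   // in onCacheReady: m_cachedPixmap = QPixmap::fromImage(image); update();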
QGraphicsView will let you know which tiles are visible if you ask nicely:
http://qt-project.org/doc/qt-5/qgraphicsview.html#items-5
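For example (a small sketch, assuming the tiles are individual items in the scene):

#include <QGraphicsView>
#include <QGraphicsItem>

// Sketch: collect only the items intersecting the visible viewport.
// The rect is interpreted in viewport coordinates, so this respects zoom and scrolling.
QList<QGraphicsItem *> visibleTiles(QGraphicsView *view)
{
    return view->items(view->viewport()->rect());
}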
UPDATE 2:
http://qt-project.org/doc/qt-5/qtwidgets-graphicsview-chip-chip-cpp.html
You may need to adjust the allowed zoom-out range in the example project to test this feature...
Under the part where it has
const qreal lod = option->levelOfDetailFromTransform(painter->worldTransform());

if (lod < 0.2) {
    if (lod < 0.125) {
        painter->fillRect(QRectF(0, 0, 110, 70), fillColor);
        return;
    }
    QBrush b = painter->brush();
    painter->setBrush(fillColor);
    painter->drawRect(13, 13, 97, 57);
    painter->setBrush(b);
    return;
}
Add in something like:
if (lod < 0.05)
{
    // using some sort of row/col value to know which ones not to draw...
    // This below would only draw 1/3 of the rows and 1/3 of the columns,
    // speeding up the redraw by about 9x.
    if (row % 3 != 0 || col % 3 != 0)
        return; // don't do any painting, return
}
UPDATE 3:
Decimation Example:
// How to decimate an image to any size, properly
// aka fast scaling
int raw_width = 1000;
int raw_height = 1000;
int desired_width = 300;
int desired_height = 200;
int calculated_width = 0;
int calculated_height = 0;
// if dealing with a one dimensional image buffer, this ensures
// the rows come out clean, and you don't lose a pixel occasionally
desired_width -= desired_width%2;
// http://qt-project.org/doc/qt-5/qt.html#AspectRatioMode-enum
// Qt::KeepAspectRatio, and the offset can be used for centering
qreal ratio_x = (qreal)desired_width / raw_width;
qreal ratio_y = (qreal)desired_height / raw_height;
qreal floating_factor = 1;
QPointF offset;
if (ratio_x < ratio_y)
{
    floating_factor = ratio_x;
    calculated_height = raw_height * ratio_x;
    calculated_width = desired_width;
    offset = QPointF(0, (qreal)(desired_height - calculated_height) / 2);
}
else
{
    floating_factor = ratio_y;
    calculated_width = raw_width * ratio_y;
    calculated_height = desired_height;
    offset = QPointF((qreal)(desired_width - calculated_width) / 2, 0);
}
for (int r = 0; r < calculated_height; r++)
{
    for (int c = 0; c < calculated_width; c++)
    {
        // map each destination pixel back to its source pixel
        // (floating_factor is destination/source, so divide to go back)
        pixel[r][c] = raw_pixel[(int)(r / floating_factor)][(int)(c / floating_factor)];
    }
}
Hope that helps.
Suppose we have a 32-bit PNG file of some ghostly/incorporeal character, which is drawn in a semi-transparent fashion. It is not equally transparent in every place, so we need the per-pixel alpha information when loading it to a surface.
For fading in/out, setting the alpha value of an entire surface is a good way; but not in this case, as the surface already has the per-pixel information and SDL doesn't combine the two.
What would be an efficient workaround (instead of asking the artist to provide some awesome fade in/out animation for the character)?
I think the easiest way to achieve the result you want is to start by loading the source surface containing your character sprites and then, for every instance of your ghost, create a working copy of that surface. Every time the alpha value of an instance changes, SDL_BlitSurface your source into the working copy and then apply your transparency (which you should probably keep as a float between 0 and 1) to every pixel's alpha channel.
In the case of a 32-bit surface, assuming you have already loaded source and allocated working SDL_Surfaces, you can probably do something along the lines of:
SDL_BlitSurface(source, NULL, working, NULL);

if (SDL_MUSTLOCK(working))
{
    if (SDL_LockSurface(working) < 0)
    {
        return -1;
    }
}

Uint8 *pixels = (Uint8 *)working->pixels;
int pitch_padding = working->pitch - (4 * working->w);
pixels += 3; // Big Endian will have an offset of 0, otherwise it's 3 (R, G and B come first)

for (int row = 0; row < working->h; ++row)
{
    for (int col = 0; col < working->w; ++col)
    {
        *pixels = (Uint8)(*pixels * character_transparency); // could be optimized, but probably not worth it
        pixels += 4;
    }
    pixels += pitch_padding;
}

if (SDL_MUSTLOCK(working))
{
    SDL_UnlockSurface(working);
}
This code was inspired by SDL_gfx, but if this is all you need, I wouldn't bother linking against a library just for that.
I'm attempting to ray cast an octree on the CPU (I know the GPU is better, but I'm unable to get that working at this time; I believe my octree texture is created incorrectly).
I understand what needs to be done, and so far I cast a ray for each pixel and check if that ray intersects any nodes within the octree. If it does and the node is not a leaf node, I check whether the ray intersects its child nodes. I keep doing this until a leaf node is hit. Once a leaf node is hit, I get the colour for that node.
My question is, what is the best way to draw this to the screen? Currently I'm storing the colours in an array and drawing them with glDrawPixels, but this does not produce correct results: there are gaps in the rendering, and the projection is wrong (I am using glRasterPos3fv).
Edit: Here is some code so far; it needs cleaning up, sorry. I have omitted the octree ray casting code as I'm not sure it's needed, but I will post it if it helps :)
void Draw(Vector cameraPosition, Vector cameraLookAt)
{
    // Calculate the right vector
    Vector rightVector = Cross(cameraLookAt, Vector(0, 1, 0));

    // Set up the screen plane starting X & Y positions
    float screenPlaneX, screenPlaneY;
    screenPlaneX = cameraPosition.x() - ((WINDOWWIDTH / 2) * rightVector.x());
    screenPlaneY = cameraPosition.y() + ((float)WINDOWHEIGHT / 2);

    float deltaX, deltaY;
    deltaX = 1;
    deltaY = 1;

    int currentX, currentY, index = 0;
    Vector origin, direction;
    origin = cameraPosition;

    vector<Vector4<int>> colours(WINDOWWIDTH * WINDOWHEIGHT);

    currentY = screenPlaneY;
    Vector4<int> colour;

    for (int y = 0; y < WINDOWHEIGHT; y++)
    {
        // Set the current pixel along x to be the left most pixel
        // on the image plane
        currentX = screenPlaneX;

        for (int x = 0; x < WINDOWWIDTH; x++)
        {
            // default colour is black
            colour = Vector4<int>(0, 0, 0, 0);

            // Cast the ray into the current pixel. Set the length of the ray to be 200
            direction = Vector(currentX, currentY, cameraPosition.z() + (cameraLookAt.z() * 200)) - origin;
            direction.normalize();

            // Cast the ray against the octree and store the resultant colour in the array
            colours[index] = RayCast(origin, direction, rootNode, colour);

            // Move to the next pixel in the plane
            currentX += deltaX;

            // increase colour array index position
            index++;
        }

        // Move to the next row in the image plane
        currentY -= deltaY;
    }

    // Set the colours for the array
    SetFinalImage(colours);

    // Set the raster position to (0, 0, 0) and pass the array of colours to glDrawPixels
    GLfloat v[3] = { 0.0f, 0.0f, 0.0f };
    glRasterPos3fv(v);
    glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);
}
void SetFinalImage(vector<Vector4<int>> colours)
{
    // The array is a 2D array, with the first dimension
    // set to the size of the window (WINDOW_WIDTH * WINDOW_HEIGHT).
    // The second dimension stores the rgba values for each pixel.
    for (size_t i = 0; i < colours.size(); i++)
    {
        finalImage[i][0] = (float)colours[i].r;
        finalImage[i][1] = (float)colours[i].g;
        finalImage[i][2] = (float)colours[i].b;
        finalImage[i][3] = (float)colours[i].a;
    }
}
Your pixel drawing code looks okay, but I'm not sure that your ray casting routines are correct. When I wrote my raytracer, I had a bug that caused horizontal artifacts on the screen, and it was related to rounding errors in the render code.
I would try this: create a result set of vector<Vector4<int>> where the colours are all red, and render that to the screen. If it looks correct, then the OpenGL routines are correct. Divide and conquer is always a good debugging method.
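A rough sketch of that sanity check, dropped into Draw() in place of the ray-casting loops; it reuses SetFinalImage, finalImage and the window constants from the question, so it is a fragment rather than a standalone program:

// Sketch of the suggested test: bypass RayCast() entirely and push a solid red
// buffer through SetFinalImage()/glDrawPixels. If the screen is not uniformly
// red, the problem is in the GL drawing path, not the ray caster.
// (With GL_FLOAT, components are clamped to [0, 1], so 255 still shows as full red.)
vector<Vector4<int>> testColours(WINDOWWIDTH * WINDOWHEIGHT,
                                 Vector4<int>(255, 0, 0, 255));
SetFinalImage(testColours);

GLfloat rasterPos[3] = { 0.0f, 0.0f, 0.0f };
glRasterPos3fv(rasterPos);
glDrawPixels(WINDOWWIDTH, WINDOWHEIGHT, GL_RGBA, GL_FLOAT, finalImage);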
Here's a question though: why are you using Vector4<int> when later on you write the image as GL_FLOAT? I'm not seeing any int-to-float normalization here (glDrawPixels with GL_FLOAT expects components in the 0..1 range).
Your problem may be in your 3DDDA (octree raycaster), and specifically with adaptive termination. It results from the quantisation of rays into grid-cell form, which causes certain octree nodes that lie slightly behind foreground nodes (i.e. at a greater z depth), and which should therefore be partly visible and partly occluded, not to be rendered at all. The smaller your voxels are, the less noticeable this will be.
There is a very easy way to test whether this is the problem -- comment out the adaptive termination line(s) in your 3DDDA and see if you still get the same gap artifacts.