OpenGL: distorted textures when not divisible by 2

I have a game engine that uses OpenGL for display.
I coded a small menu for it, and then I noticed something odd after rendering the text.
http://img43.imageshack.us/i/garbagen.png/
As you see, the font is somewhat unreadable, but the lower parts (like "Tests") look as intended.
It seems the position on the screen affects readability, as the edges get cut off.
The font is 9x5, a value that I divide by 2 to obtain width/height and render the object from the center.
So, at 4.5x2.5 pixels (I use floats for the x, y, width and height of simple rectangles), the texture is messed up if rendered anywhere other than a coordinate ending in .5 or so. So far it only happens on two computers, but I would hate for this error to ship, since it makes text unreadable. I can make it 4.55x2.55 (by adding a bit of extra size when dividing by 2), and then it renders adequately on all machines (or at least the problem doesn't happen as often on the problematic two), but I fear this hack is too gross to keep and doesn't solve the issue entirely, and it might scale the text up, making the font look..."fat".
So my question is: is there any way I can prevent this without converting those values to integers? (I need the slight differences floats offer.) Can I find out which widths/heights are divisible by two and handle the ones that aren't differently? If it's indeed a video card issue, would it be possible to work around it?
Sorry if anything is lacking from the question; I don't often resort to asking the internet, and I have no formal coding education. I'll be happy to provide any line or chunk of code that might be required.

If you have to draw your text at non-integer coordinates, you should enable texture filtering. Use glTexParameteri to set GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_LINEAR. Your text will be blurred, but that cannot be avoided without resorting to pixel-perfect (= integer) coordinates.
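For example, a minimal sketch (fontTex is a placeholder for your font texture handle, not code from the question):

```cpp
// Sketch: enable bilinear filtering on the font texture before drawing
// the glyph quads. fontTex is a hypothetical texture handle.
glBindTexture(GL_TEXTURE_2D, fontTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
```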
Note that your 0.05 workaround does nothing but change the way the effective coordinates are rounded to integers. When using GL_NEAREST texture filtering, there's no such thing as a half pixel offset. Even if you specify these coordinates, the texture filter will round them for you. You just push it in the right direction with the additional 0.05.

For best reliability, I would find a way to eliminate the fractions. I only have a little experience with XNA and MDX, so I don't know if there is a good reason, but why are you positioning by the center rather than a corner?

Trying to do pixel-perfect stuff like this can be hard in OpenGL due to different resolutions, texture filtering etc.
Some things you could try:
Draw your font into one large texture (say 512x512).
Draw the glyphs larger than you need and anti-alias using the alpha channel (transparency).
Leave some blank space (4 or 8 pixels) around each glyph. If you have them pushed right up against each other (like you would if you were drawing a font for software rendering back in the DOS days), then filtering will make them bleed into each other.
Or you could take a different approach and make them out of line segments. This may work better for fonts on the scales you're dealing with.
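To illustrate the padding point, here is a hedged sketch of computing texture coordinates for a glyph in a 512x512 atlas with gutters; the grid layout and all names are assumptions, not the poster's code:

```cpp
// Hypothetical atlas layout: a 512x512 texture holding glyphs in a grid of
// cells, each cell padded by an 8-pixel gutter so filtering cannot bleed
// one glyph into its neighbours.
struct UVRect { float u0, v0, u1, v1; };

UVRect glyphUV(int col, int row, float cellW, float cellH,
               float glyphW, float glyphH)
{
    const float atlasSize = 512.0f;
    const float pad = 8.0f;
    // Top-left pixel of the glyph itself, inside its padded cell.
    float x = col * cellW + pad;
    float y = row * cellH + pad;
    return { x / atlasSize, y / atlasSize,
             (x + glyphW) / atlasSize, (y + glyphH) / atlasSize };
}
```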


Generate volume out of 3d-matrix in cpp

I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that platform-independently and as fast as possible?
I have already read a bit about using OpenGL, but then I run into the following problem: the matrix can contain up to $4\cdot10^9$ values. If I load the complete matrix into RAM, the program will run out of memory, so drawing directly from the matrix is impossible. Furthermore, I have only found functions for drawing 2D images in OpenGL. Is there a way to draw 3D pixels in OpenGL? Or should I rather use another approach?
I do not need a moving functionality (at least not at the moment), I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OpenGL, taken from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this, either. You can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to 1 pixel, but this is probably completely unsatisfactory for you, because it will only draw a few pixels to the screen (you won't see anything at big resolutions).
A general solution would be to render a cube, using normal triangles, for each point. Sort them back to front if you need alpha blending. If you want a more specific answer, you will need to narrow your request. Ray tracing also has merits in volume rendering. Read up on volume rendering techniques.
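As a starting point, here is a minimal sketch of the GL_POINTS idea in legacy immediate mode; voxel() is an assumed accessor (e.g. one that streams values from disk), and stride thins the volume so it fits in memory:

```cpp
#include <GL/gl.h>

unsigned char voxel(int x, int y, int z); // hypothetical accessor, 0..255

// Draw a downsampled grey-value volume as raw points. With ~4e9 values
// you must downsample (stride) or chunk the data; this is not a full
// volume renderer, just the simplest possible visualization.
void drawVolumePoints(int nx, int ny, int nz, int stride)
{
    glBegin(GL_POINTS);
    for (int z = 0; z < nz; z += stride)
        for (int y = 0; y < ny; y += stride)
            for (int x = 0; x < nx; x += stride) {
                float g = voxel(x, y, z) / 255.0f;
                glColor3f(g, g, g);
                glVertex3f((float)x, (float)y, (float)z);
            }
    glEnd();
}
```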

How to Detect Objects in the Image without using any library in C++?

I am writing an application in C++ that requires a little bit of image processing. Since I am completely new to this field I don't quite know where to begin.
Basically I have an image that contains a rectangle with several boxes. What I want is to be able to isolate that rectangle (x, y, width, height) as well as get the center coordinates of each of the boxes inside (18 total).
I was thinking of using a simple for-loop to loop through the pixels in the image until I find a pattern but I was wondering if there is a more efficient approach. I also want to see if I can do it efficiently without using big libraries like OpenCV.
Here are a couple of example images; any help would be appreciated.
Also, what are some good resources where I could learn more about image processing like this?
The detection algorithm here can be fairly simple. Your box-of-squares (BOS) is always aligned with the edge of the image, and has a simple structure. Here's how I'd approach it.
1. Choose a colorspace. Assume RGB is OK for now, but it may work better in something else.
2. For each line:
2.1. For each pixel, calculate the magnitude of the difference between the pixel and the pixel immediately below it. The difference magnitude is simply $\sqrt{(X-x)^2+(Y-y)^2+(Z-z)^2}$, where X, Y, Z are the color coordinates of the first pixel and x, y, z are the color coordinates of the pixel below it. For RGB, XYZ = RGB, of course.
2.2. Calculate the maximum run length of consecutive difference magnitudes that are below a certain threshold magThresh. You may also choose a forgiving version of this: the maximum run length, but allowing intrusions up to intrLen pixels long that must be followed by runs up to contLen pixels long. This is to take care of possible line-to-line differences at the edges of the squares.
3. Find the largest set of consecutive lines whose maximum run lengths are above minWidth and below maxWidth.
Thus you've found the lines which contain the box, and by recalculating data in 2.1 above, you'll get to know where the boxes are in horizontal coordinates.
Detecting box edges is done by repeating the same thing but scanning left-to-right within the box. At that point you'll have approximate box centroids that take no notice of bleeding between pixels.
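A hedged sketch of steps 2.1 and 2.2 above (the pixel layout and names like magThresh are assumptions):

```cpp
#include <cmath>

struct RGB { unsigned char r, g, b; };

// Step 2.1: magnitude of the colour difference between two pixels.
float diffMag(const RGB& a, const RGB& b)
{
    float dr = float(a.r) - b.r, dg = float(a.g) - b.g, db = float(a.b) - b.b;
    return std::sqrt(dr * dr + dg * dg + db * db);
}

// Step 2.2: longest run of below-threshold differences in row y
// (requires y < height - 1, since it compares against the row below).
int maxRunLength(const RGB* img, int width, int y, float magThresh)
{
    int best = 0, run = 0;
    for (int x = 0; x < width; ++x) {
        const RGB& p = img[y * width + x];
        const RGB& below = img[(y + 1) * width + x];
        run = (diffMag(p, below) < magThresh) ? run + 1 : 0;
        if (run > best) best = run;
    }
    return best;
}
```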
This can all be accomplished by repeatedly running the image through various convolution kernels followed by thresholding, I'd think. The good thing is that both of those operations have very fast library implementations. You do not want to reimplement them by hand; it will likely be significantly slower.
If you insist on doing it yourself (personally I'd use OpenCV, it's industrial-strength and free), you're going to need an edge detection algorithm first. There are a good few out there on the internet, but be prepared for some frightening mathematics...
Many involve iterating over each pixel, lifting it and its neighbours' values into a matrix, and then convolving with a kernel matrix. Be aware that this has to be done for every pixel (in principle, though, in your case you can stop at the first discovered rectangle) and for each colour channel, so it would be highly advisable to push this onto the GPU.
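For reference, the per-pixel kernel operation is small; this is a hedged single-channel sketch (a Sobel or blur kernel would be passed in as k):

```cpp
// Naive 3x3 convolution of one float channel; border pixels are skipped.
// Library or GPU implementations of this are much faster, as noted above.
void convolve3x3(const float* src, float* dst, int w, int h,
                 const float k[3][3])
{
    for (int y = 1; y < h - 1; ++y)
        for (int x = 1; x < w - 1; ++x) {
            float acc = 0.0f;
            for (int j = -1; j <= 1; ++j)
                for (int i = -1; i <= 1; ++i)
                    acc += k[j + 1][i + 1] * src[(y + j) * w + (x + i)];
            dst[y * w + x] = acc;
        }
}
```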

How to create large terrain/landscape

I was wondering how it's possible to create a large terrain in OpenGL. My first idea was to use Blender: create a plane, subdivide it, create the terrain and export it as .obj. After taking a look at Blender I thought this should be possible, but I soon realized that my hexacore + 8 GB of RAM aren't able to keep up with the subdividing needed to support the required precision for a very large terrain.
So my question is, what is the best way to do this?
Maybe trying another 3D application like Cinema 4D?
Creating the terrain step-by-step in Blender and putting it together later? (It might be problematic to maintain the ratio between the segments.)
Some methods I don't know about?
I could create a large landscape with a random generation algorithm, but I don't want a random landscape; I need a customized landscape with many details (heights, depths, paths).
Edit
What I'll do is:
Create 3 different heightmaps (1. cave ground (+maybe half of the wall height), 2. inverted heightmap for cave ceiling, 3. standard surface heightmap)
Combine all three heightmaps
Save them in an .obj file or whatever format is required
do some fine-tuning in a 3D editing tool (if it's too large to handle, I'll create an app with an LOD algorithm where I can edit some minor stuff)
save it again in whatever format is required (maybe do some optimization)
be happy
Edit2
The map I'm creating is so big that Photoshop uses all of my 8 GB of RAM, so I have to split all 3 heightmaps into smaller parts and assemble them on the fly when moving over the map.
I believe you would just want to make a height map. The larger you make the image, the further it can stretch. Perhaps if you made the seams match up, you could tile it, but if you want endless terrain it's probably worth the effort to generate the terrain procedurally.
To make a height map, you'll make an image where each pixel represents a set height (you don't really have to represent it as an image, but it makes it very easy to visualize) encoded as a grey-scale color. You can then scale this value to the desired maximum height (precision is decided by the bit depth of the image).
If you wanted to do this with OpenGL, you could make an interface where you click at points to raise the height of particular points or areas.
Once you have this image, rendering it isn't too hard, because the X and Y coordinates are set for your space and the image will give you the Z coordinate.
This would have the downside of not allowing for caves and similar features (because only one height is given per point). If you needed these features, they might be added with meshes or a 2nd heightmap layer.
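A minimal sketch of the heightmap-to-mesh step (heightAt and maxHeight are assumptions; a real loader would read your image format of choice):

```cpp
#include <vector>

unsigned char heightAt(int x, int y); // hypothetical heightmap accessor, 0..255

struct Vertex { float x, y, z; };

// Build one vertex per heightmap pixel; X/Y come from the grid position,
// Z from the pixel's grey value scaled to the desired maximum height.
std::vector<Vertex> buildTerrain(int w, int h, float maxHeight)
{
    std::vector<Vertex> verts;
    verts.reserve(size_t(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float z = heightAt(x, y) / 255.0f * maxHeight;
            verts.push_back({ (float)x, (float)y, z });
        }
    return verts; // connect neighbouring rows with triangle strips to render
}
```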
If you're trying to store more data than fits in memory, you need to keep most of it on disk. Dividing the map into segments and loading the nearer segments as necessary is the technique. A lot of groups access the map segments via quadtrees, which usually don't need much traversal to get to the "nearby" parts.
Variations include creating lower-resolution versions of larger chunks of map for use in rendering long views, so you're keeping a really low-res version of the Whole Map, a medium-res version of This Valley Here, and a high-res copy of This Grove Of Trees I'm Looking At.
It's complicated stuff, which is why nobody really put the whole thing together until about GTA:San Andreas or Oblivion.
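A hedged sketch of the quadtree idea described above (Chunk and the loading policy are assumptions):

```cpp
struct Chunk; // placeholder for a loaded terrain segment (mesh, textures...)

// Each node covers a square region of the map: distant regions render the
// node's own low-res chunk, nearby regions descend into the four children.
struct QuadNode {
    float cx, cy, halfSize;   // centre and half-extent of this region
    Chunk* chunk = nullptr;   // low-res mesh for long views
    QuadNode* child[4] = {};  // higher-res quadrants, loaded on demand

    bool isNear(float camX, float camY, float loadRadius) const {
        float dx = camX - cx, dy = camY - cy;
        return dx * dx + dy * dy < loadRadius * loadRadius;
    }
};
```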

OpenGL - A way to display lot of points dynamically

I have a question regarding a subject that I am now working on.
I have an OpenGL view in which I would like to display points.
So far, this is something I can handle ;)
For every point, I have its coordinates (X ; Y ; Z) and a value (unsigned char).
I have a color array giving the link between one value and a color.
For example, 255 is red, 0 is blue, and so on...
I want to display those points in an OpenGL view.
I want to use a threshold value so that, depending on it, I can modify the transparency of a point's color based on the point's value.
I also want the performance to stay good even if I have a lot of points (5 billion in the worst case, but 1 to 2 million in a standard case).
I am now looking for the effective way to handle this.
I am interested in VBOs. I have read that they allow good performance and also that I can modify the buffer as I want without recalculating it from scratch (unlike with a display list).
That way I can solve the threshold issue.
However, doing this on a million points dynamically will require some heavy computation (at least a pretty bad for loop), no?
I am open to any suggestions and would like to discuss any of your ideas!
Trying to display a billion points or more is generally (forgive the pun) pointless.
Even an extremely high resolution screen has only a few million pixels. Nothing you can do will get it to display more points than that.
As such, your first step is almost undoubtedly to figure out a way to restrict your display to a number of points that's at least halfway reasonable. OpenGL can (and will) oblige if you ask it to display more, but your monitor won't, and neither will mine or much of anybody else's.
Not directly related to the OpenGL part of your question, but if you are looking at rendering massive point clouds you might want to read up on space partitioning hierarchies such as octrees to keep performance in check.
Put everything into one VBO. Draw it as an array of points: glDrawArrays(GL_POINTS,0,num). Calculate alpha in a pixel shader (using threshold passed as uniform).
If you want to change a small subset of points - you can map a sub-range of the VBO. If you need to update large parts frequently - you can use Transform Feedback to utilize GPU.
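A hedged sketch of that setup (num, data, threshold and thresholdLoc are placeholders; the layout of 3 position floats plus 1 value float per point is an assumption):

```cpp
// One-time setup: upload all points into a single VBO.
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, num * 4 * sizeof(float), data, GL_STATIC_DRAW);

// Per frame: point the attributes into the buffer and draw.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
glVertexAttribPointer(1, 1, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                      (void*)(3 * sizeof(float)));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);
glUniform1f(thresholdLoc, threshold); // threshold lives in a uniform, not the VBO
glDrawArrays(GL_POINTS, 0, num);
```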
If you need to simulate something for the updates, you should consider using CUDA or OpenCL to run the update completely on the GPU. This will give you the best performance. Otherwise, you can use a single VBO and update it once per frame from the CPU. If this gets too slow, you could try multiple buffers and distribute the updates across several frames.
For the threshold, you should use a shader uniform variable instead of modifying the vertex buffer. This allows you to set a value per frame, which can then be combined with the data from the vertex buffer (for instance, you set a uniform float minVal, and every vertex whose attribute is less than minVal gets discarded in the geometry shader).
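For illustration, a hedged fragment-shader variant of that threshold test (legacy GLSL, embedded as a C++ string; colorMap and the varying name are assumptions):

```cpp
// Points below the threshold are discarded per fragment; changing the
// threshold is a single glUniform1f call, no buffer rewrite needed.
const char* pointFragmentShader = R"(
    uniform float minVal;        // threshold, set once per frame
    uniform sampler1D colorMap;  // value-to-color lookup table
    varying float value;         // per-point value, 0..1

    void main() {
        if (value < minVal)
            discard;             // threshold test
        gl_FragColor = texture1D(colorMap, value);
    }
)";
```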

OpenGL GL_SELECT or manual collision detection?

As seen in the image
I draw a set of contours (polygons) as GL_LINE_STRIP.
Now I want to select the curve (polygon) under the mouse to delete it, move it, etc. in 3D.
I am wondering which method to use:
1. Use OpenGL picking and selection (glRenderMode(GL_SELECT)).
2. Use manual collision detection: cast a pick ray and check whether the ray falls inside each polygon.
I strongly recommend against GL_SELECT. This method is very old and absent in new GL versions, and you're likely to get problems with modern graphics cards. Don't expect it to be supported by hardware - probably you'd encounter a software (driver) fallback for this mode on many GPUs, provided it would work at all. Use at your own risk :)
Let me provide you with an alternative.
For solid, big objects, there's an old, good approach of selection by:
enabling and setting the scissor test to a 1x1 window at the cursor position
drawing the scene with no lighting, texturing or multisampling, assigning a unique solid colour to every "important" entity - this colour will become the object ID for picking
calling glReadPixels and retrieving the colour, which would then serve to identify the picked object
clearing the buffers, resetting the scissor to the normal size and drawing the scene normally.
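A hedged sketch of that sequence (mx/my are the cursor position, viewportH the window height, and drawSceneWithIdColours a placeholder for your flat-colour render pass):

```cpp
// Restrict rendering to the single pixel under the cursor, draw each object
// in its unique ID colour, then read that pixel back.
glEnable(GL_SCISSOR_TEST);
glScissor(mx, viewportH - my, 1, 1);   // GL origin is bottom-left
drawSceneWithIdColours();              // hypothetical ID-colour pass

unsigned char pixel[4];
glReadPixels(mx, viewportH - my, 1, 1, GL_RGBA, GL_UNSIGNED_BYTE, pixel);
glDisable(GL_SCISSOR_TEST);

// Reassemble the 32-bit object ID from the RGBA bytes.
unsigned int id = (unsigned int)pixel[0]
                | ((unsigned int)pixel[1] << 8)
                | ((unsigned int)pixel[2] << 16)
                | ((unsigned int)pixel[3] << 24);
```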
This gives you a very reliable "per-object" picking method. Also, drawing and clearing only 1 pixel with minimal per-pixel operations won't really hurt your performance, unless you are short on vertex processing power (unlikely, I think) or have a really large number of objects and are likely to get CPU-bound on the number of draw calls (but then again, I believe it's possible to optimize this away to a single draw call if you could pass the colour as per-pixel data).
The colour in RGB is 3 unsigned bytes, but it should be possible to additionally use the alpha channel of the framebuffer for the fourth byte, so you'd get 4 bytes in total - enough to store any 32-bit pointer to the object as the colour.
Alternatively, you can create a dedicated framebuffer object with a specific pixel format (like GL_R32UI, or even GL_RG32UI if you need 64 bits) for that.
The above is a nice and quick alternative (both in terms of reliability and in implementation time) for the strict geometric approach.
I found that on new GPUs, the GL_SELECT mode is extremely slow. I played with a few different ways of fixing the problem.
The first was to do a CPU collision test, which worked, but wasn't as fast as I would have liked. It definitely slows down when you are casting rays into the screen (using gluUnProject) and then trying to find which object the mouse is colliding with. The only way I got satisfactory speeds was to use an octree to reduce the number of collision tests and then do a bounding-box collision test; however, this resulted in a method that was not pixel-perfect.
The method I settled on was to first find all the objects under the mouse (using gluUnProject and bounding-box collision tests), which is usually very fast. I then rendered each of the objects that had potentially collided with the mouse into the back buffer as a different color. I then used glReadPixels to get the color under the mouse and mapped that back to the object. glReadPixels is a slow call, since it has to read from the frame buffer. However, it is done once per frame, which ends up taking a negligible amount of time. You can speed it up by rendering to a PBO if you'd like.
Giawa
umanga, can't see how to reply inline... maybe I should sign up :)
First of all I must apologize for giving you the wrong algorithm - I did the back-face culling one. But the one you need is very similar, which is why I got confused... d'oh.
Get the camera position to mouse vector as said before.
For each contour, loop through all the coords in pairs (0-1, 1-2, 2-3, ... n-0) and make a vector out of each pair as before, i.e. walk the contour.
Now take the cross product of those two (contour edge and mouse vector) instead of between pairs like I said before; do that for all the pairs and add all the resulting vectors up.
At the end, find the magnitude of the resulting vector. If the result is zero (taking rounding errors into account), then you're outside the shape, regardless of facing. If you're interested in facing, then instead of the magnitude you can take the dot product with the mouse vector to find the facing and test the sign +/-.
It works because the algorithm finds the amount of distance from the vector line to each point in turn. As you sum them up while outside, they all cancel out, because the contour is closed. If you're inside, then they all add up. It's actually Gauss's law of electromagnetic fields from physics...
See http://en.wikipedia.org/wiki/Gauss%27s_law and note that "the right-hand side of the equation is the total charge enclosed by S divided by the electric constant", noting the word "enclosed" - i.e. zero means not enclosed.
You can still do that optimization with the bounding boxes for speed.
In the past I've used GL_SELECT to determine which object(s) contributed the pixel(s) of interest and then used computational geometry to get an accurate intersection with the object(s) if required.
Do you expect to select by clicking the contour (on the edge) or the interior of the polygon? Your second approach sounds like you want clicks in the interior to select the tightest containing polygon. I don't think that GL_SELECT after rendering GL_LINE_STRIP is going to make the interior responsive to clicks.
If this was a true contour plot (from the image I don't think it is, edges appear to intersect) then a much simpler algorithm would be available.
You can't use select if you stay with the lines, because you would have to click on the rendered line pixels, not the space inside the lines bounding them, which I read as what you wish to do.
You can use Kos's answer, but in order to render the space you need to solid-fill it, which would involve converting all of your contours to convex types, which is painful. So I think that would work sometimes and give the wrong answer in some cases, unless you did that.
What you need to do is use the CPU. You have the view extents from the viewport and the perspective matrix. With the mouse coords, generate the view-to-mouse-pointer vector. You also have all the coords of the contours.
Take the first coord of the first contour and make a vector to the second coord. Take the 3rd coord and make a vector from 2 to 3, and repeat all the way around your contour, finally making the last one from coord n back to 0 again. For each pair in sequence, find the cross product and sum up all the results. When you have that final summation vector, keep hold of it and take the dot product with the mouse pointer direction vector. If it's +ve then the mouse is inside the contour, if it's -ve then it's not, and if it's 0 then I guess the plane of the contour and the mouse direction are parallel.
Do that for each contour and then you will know which of them are spiked by your mouse. It's up to you which one you want to pick from that set. Highest Z?
It sounds like a lot of work, but it's not too bad and will give the right answer. You might additionally like to keep bounding boxes of all your contours; then you can early-out the ones away from the mouse vector by doing the same math as for the full vector, but only on the 4 sides, and if the mouse isn't inside then the contour cannot be either.
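For comparison, once a contour and the mouse point are projected into 2D screen space, the conventional even-odd (ray-crossing) test does the same containment check; this is a hedged sketch of that standard alternative, not the summation method described above:

```cpp
#include <vector>

struct Point2 { float x, y; };

// Standard even-odd point-in-polygon test: count how many contour edges a
// horizontal ray from (px, py) crosses; an odd count means "inside".
bool pointInContour(const std::vector<Point2>& c, float px, float py)
{
    bool inside = false;
    for (size_t i = 0, j = c.size() - 1; i < c.size(); j = i++) {
        bool crosses = (c[i].y > py) != (c[j].y > py);
        if (crosses) {
            float xAtY = c[j].x + (py - c[j].y) * (c[i].x - c[j].x)
                                / (c[i].y - c[j].y);
            if (px < xAtY) inside = !inside;
        }
    }
    return inside;
}
```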
The first option (GL_SELECT) is easy to implement and widely used.