How can I render a bunch of hand-drawn shapes in OpenGL 1.x? I know about instancing, but how is something like it possible in old OpenGL? Could I get examples of some sort? This is for a game; I'm expecting a thousand or so shapes, all of which will need to be updated every frame.
Assuming that (at least most of) the shapes remain unchanged from one frame to the next, so that most of the update is just moving them around, you could at least consider building a display list for each shape, then calling the display lists when you render each frame.
The amount of good you'll get from this varies widely depending on the hardware (and possibly driver) in use though. Some hardware supports display lists directly, and gains a lot from it. With other hardware, you'll be hard put to find any difference at all.
The good points are that at worst this won't do any harm, and building/using display lists is pretty quick and easy. So, in the worst case you don't lose much, and in the best case you might gain quite a bit.
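To make that concrete, here is a minimal sketch of the display-list approach, assuming each hand-drawn shape is a closed 2D outline stored as a flat array of x,y coordinates; the Shape struct and function names are only illustrative.

```cpp
// Minimal sketch of the display-list approach (OpenGL 1.x).
// Assumes each hand-drawn shape is a closed 2D outline; the Shape struct
// and function names are illustrative, not from any library.
#include <GL/gl.h>
#include <vector>

struct Shape {
    std::vector<float> xy;   // x0,y0, x1,y1, ... in model space
    GLuint list = 0;         // compiled display list
    float posX = 0.0f, posY = 0.0f, angle = 0.0f; // per-frame transform
};

// Compile the shape's geometry once; only the transform changes per frame.
void buildShapeList(Shape& s) {
    s.list = glGenLists(1);
    glNewList(s.list, GL_COMPILE);
    glBegin(GL_LINE_LOOP);
    for (size_t i = 0; i + 1 < s.xy.size(); i += 2)
        glVertex2f(s.xy[i], s.xy[i + 1]);
    glEnd();
    glEndList();
}

// Called every frame: cheap, because the geometry is already compiled.
void drawShapes(std::vector<Shape>& shapes) {
    for (Shape& s : shapes) {
        glPushMatrix();
        glTranslatef(s.posX, s.posY, 0.0f);
        glRotatef(s.angle, 0.0f, 0.0f, 1.0f);
        glCallList(s.list);
        glPopMatrix();
    }
}
```

Compile each shape's list once (or whenever the shape itself changes); the per-frame work is then just a transform plus one glCallList per shape.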
I'm trying to make a game or 3D application using OpenGL. The program will have many objects drawn to the screen (around 7000 of them). When I render them, I need to calculate the distance between the camera and each object and sort them in order to render the scene correctly. Knowing this, what is the best way to sort them? I really want the sorting to be fast, but I've heard there are trade-offs, so what algorithm should I use to get the best performance out of it?
Any help would be greatly appreciated.
Edit: a lot of people are talking about the z-buffer/depth buffer. As a few people have pointed out, it doesn't work in some cases, which is why I asked this question.
Sorting by distance doesn't solve the transparency problem perfectly. Consider the situation where two transparent surfaces intersect and each has a part which is closer to you. Perhaps rare in games, but still something to consider if you don't want an occasional glitched look to your renderer.
The better solution is order-independent transparency. With the latest graphics hardware supporting atomic operations, you can use an A-buffer to do this with little memory overhead and in a single pass, so it is pretty efficient. See for example this article.
The issue of sorting your scene is still a valid one, though, even if it isn't for transparency: it is still useful to sort opaque objects front to back to allow depth testing to discard unseen fragments. For this, Vaughn provided the great solution of BSP trees; these have been used for this purpose for as long as 3D games have been around.
Use http://en.wikipedia.org/wiki/Insertion_sort which has O(n) complexity for nearly sorted arrays.
In your case, by exploiting temporal coherence, insertion sort gives the fastest results.
It is used for http://en.wikipedia.org/wiki/Sweep_and_prune
From link above:
In many applications, the configuration of physical bodies from one time step to the next changes very little. Many of the objects may not move at all. Algorithms have been designed so that the calculations done in a preceding time step can be reused in the current time step, resulting in faster completion of the calculation.
So in such cases insertion sort is best (or a similar sort with O(n) best-case complexity), as sketched below.
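Here is a minimal sketch of that idea: re-sorting objects by squared distance to the camera each frame with a plain insertion sort, so that when almost nothing has moved the pass runs in close to O(n). The Object struct and function names are illustrative, not from any particular engine.

```cpp
// Sketch: re-sorting objects by camera distance each frame with insertion
// sort. Because objects move little between frames, the array is already
// almost sorted, so this runs in close to O(n). Names are illustrative.
#include <vector>

struct Object {
    float x, y, z;     // world position
    float distSq;      // cached squared distance to the camera
};

void sortByDistance(std::vector<Object>& objs,
                    float camX, float camY, float camZ) {
    // Refresh the sort key (squared distance avoids the sqrt).
    for (Object& o : objs) {
        float dx = o.x - camX, dy = o.y - camY, dz = o.z - camZ;
        o.distSq = dx * dx + dy * dy + dz * dz;
    }
    // Plain insertion sort: near-linear when the order barely changed.
    for (size_t i = 1; i < objs.size(); ++i) {
        Object key = objs[i];
        size_t j = i;
        while (j > 0 && objs[j - 1].distSq > key.distSq) {
            objs[j] = objs[j - 1];
            --j;
        }
        objs[j] = key;
    }
}
```

Sorting ascending gives front-to-back order for opaque geometry; flip the comparison to get back-to-front order for blended geometry.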
For an application I'm developing I need to be able to:
draw lines of different widths and colours
draw solid color filled triangles
draw textured (no alpha) quads
Very easy...but...
All coordinates are integers in pixel space and, very important: reading back all the pixels from the framebuffer (glReadPixels) on two different machines, with two different graphics cards, running two different operating systems (Linux and FreeBSD), must result in exactly the same sequence of bits (given an appropriate constant format conversion).
I think this is impossible to achieve safely using OpenGL and hardware acceleration, since I bet different graphics cards (from different vendors) may implement different algorithms for rasterization. (The OpenGL specs are clear about this: they propose an algorithm, but they also state that implementations may differ under certain circumstances.)
Also, I don't really need hardware acceleration, since I will be rendering very simple graphics at very low speed. Do you think I can achieve this by just disabling hardware acceleration? What happens in that case under Linux: will I fall back to the Mesa software rasterizer? And in that case, can I be sure it will always work, or am I missing something?
The fact that you're reading back rendered pixels and depending strongly on their mathematical exactness/reproducibility sounds like a design flaw. What's the purpose of this? If you, for example, need to extract some information from the image, why don't you try to extract that information from the abstract, vectorized data prior to rendering?
Anyhow, if you depend on external rendering code and there's no way to make your reading code more robust to small errors, you're signing up for lots of pain and maintenance work. Other people could break your code with every tiny patch, because that kind of bit-level pixel exactness is usually a non-issue when they're doing their unit tests etc. Let alone the endless permutations of hardware and software layers that are possible, all of which might influence the exact pixel bits.
If you only need those operations: lines (with different widths and colors) and quads (with/without texture), I recommend writing your own rendering/rasterizer code which operates on an 8-bit uint array representing the image pixels (R8G8B8). The operations you're proposing aren't too nasty, so if performance is unimportant, this might actually be the better way to go in the long run.
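As a rough illustration of that suggestion, here is a sketch of one such operation: an integer-only Bresenham line written into a plain R8G8B8 byte array. Because it never touches floating point or any driver, two machines running it produce bit-identical buffers. The Image struct and its layout are assumptions made for the example, not a prescribed format.

```cpp
// Sketch of the "own rasterizer" idea: a Bresenham line drawn into a plain
// R8G8B8 pixel array. Every machine executing this produces bit-identical
// output, which is the whole point. The buffer layout is an assumption.
#include <cstdint>
#include <cstdlib>
#include <vector>

struct Image {
    int w, h;
    std::vector<uint8_t> px;             // w * h * 3 bytes, row-major RGB
    Image(int w_, int h_) : w(w_), h(h_), px(size_t(w_) * h_ * 3, 0) {}
    void set(int x, int y, uint8_t r, uint8_t g, uint8_t b) {
        if (x < 0 || y < 0 || x >= w || y >= h) return;
        size_t i = (size_t(y) * w + x) * 3;
        px[i] = r; px[i + 1] = g; px[i + 2] = b;
    }
};

// Integer-only Bresenham: no floating point, so no rounding differences.
void drawLine(Image& img, int x0, int y0, int x1, int y1,
              uint8_t r, uint8_t g, uint8_t b) {
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;
    for (;;) {
        img.set(x0, y0, r, g, b);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }
        if (e2 <= dx) { err += dx; y0 += sy; }
    }
}
```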
I hear that GL_QUADS is going to be removed in OpenGL versions > 3.0. Why is that? Will my old programs not work in the future then? I have benchmarked, and GL_TRIANGLES and GL_QUADS show no difference in render speed (GL_QUADS might even be faster). So what's the point?
The point is that your GPU renders triangles, not quads. And it is pretty much trivial to construct a rectangle from two triangles, so the API doesn't really need to be burdened with the ability to render quads natively. OpenGL is going through a major trimming process, cutting a lot of functionality that made sense 15 years ago but no longer matches how the GPU works, or how the GPU is ever going to work. The fixed-function pipeline is gone from the latest versions too, I believe, because, once again, it's no longer necessary and it no longer matches how the GPU works (programmable shaders).
The point is that the smaller and tighter the OpenGL API can be made, the easier it is for vendors to write robust, high-performance drivers, and the easier it is to learn to use the API correctly and efficiently.
A few years ago, practically anything in OpenGL could be done in 3-5 different ways, which put a lot of burden on the developer to figure out which implementation is the right one if you want optimal performance.
So they're trying to streamline the API.
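For illustration, here is a hedged sketch of what that means in old-style immediate mode: the same rectangle drawn once as a (now removed) quad and once as the two triangles the hardware actually rasterizes. The coordinates are arbitrary.

```cpp
// Sketch: the same axis-aligned rectangle, first as a (deprecated) quad,
// then as the two triangles the GPU actually renders. Coordinates are
// illustrative only.
#include <GL/gl.h>

void drawRectAsQuad() {            // old style, removed from core GL 3.1+
    glBegin(GL_QUADS);
    glVertex2f(0.0f, 0.0f);
    glVertex2f(1.0f, 0.0f);
    glVertex2f(1.0f, 1.0f);
    glVertex2f(0.0f, 1.0f);
    glEnd();
}

void drawRectAsTriangles() {       // equivalent, and matches the hardware
    glBegin(GL_TRIANGLES);
    glVertex2f(0.0f, 0.0f);        // triangle 1
    glVertex2f(1.0f, 0.0f);
    glVertex2f(1.0f, 1.0f);
    glVertex2f(0.0f, 0.0f);        // triangle 2
    glVertex2f(1.0f, 1.0f);
    glVertex2f(0.0f, 1.0f);
    glEnd();
}
```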
People have already answered your question quite well. On top of their answers, one of the reasons that GL_QUADS was deprecated is the undefined nature of quads.
For example, try to model a 2D square with the points (0,0,0), (1,0,0), (1,1,1), (0,1,0). This is a flat quad with one corner dragged up, and it is impossible to draw a normal flat square that way. Depending on the driver, it will be split into 2 triangles along one diagonal or the other, which we can't control. Such a model MUST be modeled with two triangles, because the three points of a triangle always lie on the same plane.
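To make the ambiguity explicit, here is a small sketch of the two possible splits of that exact quad; which one a driver picks for GL_QUADS is out of your hands, whereas writing the triangles yourself pins it down. Immediate mode is used only to keep the example short.

```cpp
// The non-planar quad (0,0,0) (1,0,0) (1,1,1) (0,1,0) can be split along
// either diagonal, and the two results look different. Spelling out the
// triangles yourself makes the choice explicit instead of driver-dependent.
#include <GL/gl.h>

void drawSplitAlongFirstDiagonal() {   // diagonal (0,0,0)-(1,1,1)
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0, 0); glVertex3f(1, 0, 0); glVertex3f(1, 1, 1);
    glVertex3f(0, 0, 0); glVertex3f(1, 1, 1); glVertex3f(0, 1, 0);
    glEnd();
}

void drawSplitAlongSecondDiagonal() {  // diagonal (1,0,0)-(0,1,0)
    glBegin(GL_TRIANGLES);
    glVertex3f(0, 0, 0); glVertex3f(1, 0, 0); glVertex3f(0, 1, 0);
    glVertex3f(1, 0, 0); glVertex3f(1, 1, 1); glVertex3f(0, 1, 0);
    glEnd();
}
```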
It isn't "going" to be anything. As with a lot of other functionality, GL_QUADS was deprecated in version 3.0 and removed in version 3.1. Obviously this is all irrelevant if you create a compatibility context.
Any answer that anyone might give for the reason for deprecating them would be sheer speculation.
I searched and found some tutorials on how to do terrain collision, but they were using .raw files and I'm using .x. Still, I think I can do the same thing they did. They took the x,y,z values of an object and checked them against every single triangle in the terrain. It makes sense, but it looks like it will be slow. It's just like picking: checking against every single triangle is slow.
Is there a faster way to do it that is still good?
UPDATE
My terrain is not flat; if it were, I would use bounding boxes.
Last time I did this, I used the Bullet library, and it worked great. It has various collision shapes to choose from, optimised for different scenarios, including general triangle meshes and heightfields. You can use the library's collision routines without the physics.
One common way to significantly reduce the time it takes to detect collisions is to organize the space into an octree, which will allow you to very quickly determine whether or not a collision could occur in a particular node. Generally speaking, it's easier to accomplish these sorts of tasks with a game engine.
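As a rough sketch of that idea (a uniform grid rather than a full octree, but the principle is the same): bin the terrain triangles into cells over the XZ plane once at load time, then each frame test only the triangles in the cell under the object. All names here are illustrative, and the centroid-based binning is a simplification; a robust version would insert each triangle into every cell it overlaps.

```cpp
// Broad-phase sketch for terrain collision: bin terrain triangles into a
// uniform grid over the XZ plane once, then test only the triangles in the
// cell under the query point instead of all of them. An octree refines the
// same idea hierarchically.
#include <algorithm>
#include <vector>

struct Tri { float ax, ay, az, bx, by, bz, cx, cy, cz; };

struct TerrainGrid {
    float minX, minZ, cellSize;
    int cols, rows;
    std::vector<std::vector<int>> cells;   // triangle indices per cell

    TerrainGrid(float mx, float mz, float cs, int c, int r)
        : minX(mx), minZ(mz), cellSize(cs), cols(c), rows(r), cells(c * r) {}

    int cellAt(float x, float z) const {
        int cx = std::clamp(int((x - minX) / cellSize), 0, cols - 1);
        int cz = std::clamp(int((z - minZ) / cellSize), 0, rows - 1);
        return cz * cols + cx;
    }

    // Build once when the terrain is loaded (centroid binning for brevity).
    void build(const std::vector<Tri>& tris) {
        for (int i = 0; i < (int)tris.size(); ++i) {
            float cx = (tris[i].ax + tris[i].bx + tris[i].cx) / 3.0f;
            float cz = (tris[i].az + tris[i].bz + tris[i].cz) / 3.0f;
            cells[cellAt(cx, cz)].push_back(i);
        }
    }

    // Per frame: only these few triangles need the exact collision test.
    const std::vector<int>& candidates(float x, float z) const {
        return cells[cellAt(x, z)];
    }
};
```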
Intuitively, it would seem that, given a dozen or so 2D images of almost any object taken from different angles, it should be easy to construct a 3D representation of that object. Subsequently, a library of 3D representations obtained in this way could be used to identify new 2D images.
What literature is there along these lines, and why has it not yet produced strong object recognition?
It is your word "intuitively" that is causing you trouble there. Your brain is not designed to be very good at certain tasks, like multiplying thousands of numbers in an instant. However, for raw computational power your brain makes the fastest computer look like mere tiddlywinks (a neural response time of only about 10 milliseconds, but all those tens of billions of neurons working in parallel totally beats any modern machine). It's just that your brain is designed to solve problems that are immensely more computationally complex: recognizing objects in a picture, parsing sound data and picking out individual speakers amidst background noise, and learning to classify and deal with tens of thousands of types of objects.
The incredibly computationally intense things your brain is designed to do really well are the things that, to a person, seem "intuitive". The things it isn't designed to do really well seem "unintuitive" or difficult. But the raw computation needed for strong object recognition (because there are just so MANY kinds of objects, many of which really have subobjects, multiple classifications, and non-rigid forms, e.g. "trousers", "water", "dog") is WAY more than what is needed to accomplish things one considers only possible for a computer. Things like using "common sense" to solve an everyday problem are similarly trivial for a person, but computationally incredibly complex.
What you want to do is indeed possible, but (there are quite a few buts)
for the 3D reconstruction:
For anything but the simplest shapes you need more than just a few dozen images.
The shape you are reconstructing needs to have a lot of recognizable features that look similar enough from different angles so that you can match them.
Lighting needs to be fairly constant over your entire set of images, otherwise shadows will throw you off (or you need even more images)
Even with very feature-rich objects (i.e. lots of variation in colour and shape), 3D reconstruction accuracy from any matched pair of features is going to be terrible if you do not have full knowledge of the parameters (position, view direction and opening angle) of the camera used to take each picture.
These are all problems that can be solved, so suppose you solved them, and you now have a new picture of the object that you want to match to your 3D shape.
You could of course try to find a 2D projection of your shape that fits the new picture, but the search space there is enormous. It would probably be a lot easier and faster to use the feature finding and matching system you built for the initial 3D reconstruction to directly match the new picture to the existing set, and find where it fits on the object that way.
So once you've solved the problem of creating the initial 3D reconstruction your second step is basically done as well.
Photosynth is a brilliant example of these two steps. Browse the site, try to find some of the references they have there.
As for your final step, strong object recognition, just imagine the search space! What you need for strong object recognition, apart from a good representation of the objects you want to recognize, is a good way to search the space of objects you know, and a good way to represent your new object (the image of an object in this case) in that space. This is something I know nearly nothing about.
For just matching the same object in different 2D images there are SIFT features. But I don't think this translates well to 3D.
Note that what you're describing is instance recognition. Computers can indeed do a good job of instance recognition these days. For example, Google Goggles is very good at recognizing landmarks like the Golden Gate Bridge and the Eiffel Tower.
However, computers are less good at doing category recognition and classification. Creating dozens of 2D snapshots for all possible objects under all types of lighting conditions etc. becomes intractable very quickly. The fact that certain objects such as a dog can move around makes the space of possibilities even bigger. Computers become much worse at this.
Also, from the biological standpoint, our visual field is around 100 million pixels. Graphics cards have only now started to become capable of rendering that much data in real-time. Making sense of that much data is even more computationally intensive.
One often talks about having a machine reach a 5-year-old's ability to process information. But let's think about how much data that is. 100 million pixels with 3 color channels and 1 byte per pixel = 300 MB per frame. Now multiply that by 30 frames per second, 31,556,926 seconds per year, and 5 years, and you end up with roughly 1.4 exabytes (1.4x10^18 bytes).
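For what it's worth, a quick back-of-the-envelope check of that figure:

```cpp
// Back-of-the-envelope check of the data-volume estimate above.
#include <cstdio>

int main() {
    double bytesPerFrame  = 100e6 * 3.0;          // 100 Mpixels * 3 bytes
    double bytesPerSecond = bytesPerFrame * 30.0; // 30 frames per second
    double bytesIn5Years  = bytesPerSecond * 31556926.0 * 5.0;
    std::printf("%.2e bytes\n", bytesIn5Years);   // ~1.42e18, i.e. ~1.4 exabytes
    return 0;
}
```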