OpenGL drawing lag: points vs. GL_LINE_STRIP

I'm using OpenGL to draw some very large images (3200 x 4000 pixels). I don't have any individual objects anywhere near that large, but I do have many irregular lines that span the width and height. I'm currently displaying the lines by drawing all of the points on them as individual vertices (I know that's kind of ridiculous, but I'm extracting the lines with image processing in OpenCV, and the most convenient representation for me is the set of all the pixels that make up each line). I'm using user commands to rotate the images, and I'm getting really large lag between the user input and the updated display. Would it be faster to instead draw with GL_LINE_STRIP, using all of the pixels on the line as vertices? Or should I just thin out the pixels and use less data?
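For what it's worth, here is a minimal sketch of the GL_LINE_STRIP approach the question asks about (the function names are made up; it assumes GLEW or a similar loader for the buffer-object calls and the fixed-function/compatibility pipeline). Uploading each line's points into a vertex buffer once and drawing it as a single strip also avoids resubmitting every pixel on every frame, which by itself should help with the lag:

#include <GL/glew.h>
#include <vector>

// Upload one line's ordered points (x0, y0, x1, y1, ...) into a VBO once.
GLuint uploadLine(const std::vector<float>& linePoints)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, linePoints.size() * sizeof(float),
                 linePoints.data(), GL_STATIC_DRAW);
    return vbo;
}

// Draw the whole line as a single strip instead of thousands of separate points.
void drawLine(GLuint vbo, int pointCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, nullptr);    // 2 floats (x, y) per point
    glDrawArrays(GL_LINE_STRIP, 0, pointCount);  // one draw call per line
    glDisableClientState(GL_VERTEX_ARRAY);
}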

Related

Am I doing that right? Scaling in Qt

I wrote code to make a 2D transformation: scaling.
( value = variable from slider 1-10 )
// take the most recently added point and scale both coordinates by the slider value
int x = punktx.back();
int y = punkty.back();
DrawPixel(x * value / 6.0, y * value / 6.0, bits, 255, 255, 255);
And I received this output:
As you can see, there are little breaks in that square. Is that okay, or is my code wrong?
That's not how you scale things in Qt. Use the QImage::scaled() or QPixmap::scaled() methods instead.
Regarding your problem, the breaks are a result of the fact that you use the same number of pixels to draw the large square as the small square - you would have to fill the gaps between the pixels. But scaling that way doesn't make sense anyway, as mentioned above.
The problem is that if you iterate over an input image that's e.g. 10x10 pixels and output the same number of pixels, then you're only drawing 100 pixels no matter how much you "scale" it by. If you're scaling it to fill 20x20 pixels but only draw your original 100 pixels, of course it will have holes in it.
If you want to implement a simple scaling function as a learning exercise, some approaches are:
instead of drawing 1 pixel per original pixel, draw a rectangle of the scaled size, in that pixel's color. This has the advantage that it's only a tiny change to your existing code, so it might be worth trying as a first attempt.
loop over the output pixels, then interpolate where each one would be on the input image (reverse the scaling) and draw one pixel of the right color (a small sketch follows below). This avoids some of the overhead of drawing lots of rectangles (which could paint the same output pixels more than once).
as above, but write the data into a bitmap stored in memory, then draw it all at once with a bitmap drawing command.
Also if you really want to get better results you can work out whether an output pixel crosses over several input pixels, then average out the colors, and so on. This will give smoother looking results but could blur things for solid color images.
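As a concrete example of the second approach, here is a minimal nearest-neighbour scaling sketch. It assumes tightly packed RGB buffers instead of the DrawPixel/bits interface from the question, so the names are illustrative only:

#include <cstddef>
#include <cstdint>
#include <vector>

// Loop over the *output* pixels, reverse the scaling to find the source pixel,
// and copy its color. Every output pixel gets a value, so there are no gaps.
std::vector<std::uint8_t> scaleNearest(const std::vector<std::uint8_t>& src,
                                       int srcW, int srcH, double scale)
{
    int dstW = int(srcW * scale);
    int dstH = int(srcH * scale);
    std::vector<std::uint8_t> dst(std::size_t(dstW) * dstH * 3);
    for (int dy = 0; dy < dstH; ++dy)
        for (int dx = 0; dx < dstW; ++dx)
        {
            int sx = int(dx / scale);                 // reverse the scaling
            int sy = int(dy / scale);
            const std::uint8_t* s = &src[(std::size_t(sy) * srcW + sx) * 3];
            std::uint8_t* d = &dst[(std::size_t(dy) * dstW + dx) * 3];
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2];    // copy RGB
        }
    return dst;
}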

Python: Reduce rectangles on images to their border

I have many grayscale input images which contain several rectangles. Some of them overlap and some go over the border of the image. An example image could look like this:
Now I have to reduce the rectangles to their borders. My idea was to make all non-white pixels which are less than N (e.g. 3) pixels away from the border or from a white pixel (using the Manhattan distance) white. The output should look like this (sorry for the different-sized borders):
It is not very hard to implement this. Unfortunately the implementation must be fast, because the input may contain extremely many images (e.g. 100'000) and the user has to wait until this step is finished.
I thought about using fromimage and then doing everything with numpy, but I did not find a good solution.
Maybe someone has an idea or a hint how this problem could be solved very efficiently?
Calculate the distance transform of the image (OpenCV distanceTransform, http://docs.opencv.org/2.4/modules/imgproc/doc/miscellaneous_transformations.html).
In the resulting image, zero all the pixels that have a value bigger than 3.
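The question is about Python/numpy, but as a concrete sketch of that recipe (written in C++ here; the cv2 Python bindings expose the same threshold, distanceTransform and masking calls), assuming dark rectangles on a white background and N = 3:

#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>

int main()
{
    // Grayscale input: rectangles are non-white on a white background.
    cv::Mat img = cv::imread("rects.png", cv::IMREAD_GRAYSCALE);

    // Binary mask: rectangle pixels -> 255, background -> 0.
    cv::Mat mask;
    cv::threshold(img, mask, 250, 255, cv::THRESH_BINARY_INV);

    // Manhattan distance from every rectangle pixel to the nearest background pixel.
    cv::Mat dist;
    cv::distanceTransform(mask, dist, cv::DIST_L1, 3);

    // Pixels deeper than 3 inside a rectangle are interior: paint them white again,
    // which leaves only a 3-pixel-wide border.
    img.setTo(255, dist > 3);

    cv::imwrite("rects_border.png", img);
    return 0;
}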

Generate volume out of 3d-matrix in cpp

I have a function for generating a 3D matrix of grey values (char values from 0 to 255). Now I want to generate a 3D object out of this matrix, i.e. I want to display these values as a 3D object (in C++). What is the best way to do that platform-independently and as fast as possible?
I have already read a bit about using OpenGL, but I run into the following problem: the matrix can contain up to $4\cdot10^9$ values. If I try to load the complete matrix into RAM, it won't fit, so drawing directly from the matrix is impossible. Furthermore, I only found functions for drawing 2D images in OpenGL. Is there a way to draw 3D pixels in OpenGL, or should I rather use another approach?
I do not need any movement functionality (at least not at the moment); I just want to display the data.
Edit 2: To narrow the question down: is there a way to draw pixels in 3D space with OpenGL, taken from a 3D matrix? I did not find a suitable function; I only found 2D functions.
What you're looking to do is called volume rendering. There are various techniques to achieve it, and ultimately it depends on what you want it to look like.
There is no simple way to do this either. You can't just draw 3D pixels. You can draw using GL_POINTS and have each transformed point rasterize to one pixel, but this is probably completely unsatisfactory for you, because it will only put a few scattered pixels on the screen (you won't see much at high resolutions).
A general solution would be to render a cube out of normal triangles for each point, sorted back to front if you need alpha blending. If you want a more specific answer, you will need to narrow your request. Ray tracing also has merits in volume rendering. Learn more about volume rendering.
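Just to illustrate the GL_POINTS approach mentioned above, a minimal immediate-mode sketch (it assumes a current OpenGL context and a dim x dim x dim array of 8-bit gray values; a real renderer would use VBOs and stream the volume in chunks instead of keeping all of it in RAM):

#include <GL/gl.h>
#include <cstddef>
#include <cstdint>

// Emit one point per non-empty voxel, colored by its gray value.
void drawVoxelsAsPoints(const std::uint8_t* volume, int dim)
{
    glBegin(GL_POINTS);
    for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
            for (int x = 0; x < dim; ++x)
            {
                std::uint8_t v = volume[(std::size_t(z) * dim + y) * dim + x];
                if (v == 0) continue;                      // skip empty voxels
                float g = v / 255.0f;
                glColor3f(g, g, g);                        // gray value as color
                glVertex3f(float(x), float(y), float(z));  // voxel position in volume space
            }
    glEnd();
}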

How to create large terrain/landscape

I was wondering how it's possible to create a large terrain in OpenGL. My first idea was to use Blender: create a plane, subdivide it, create the terrain and export it as .obj. After taking a look at Blender I thought this should be possible, but I soon realized that my hexacore + 8 GB RAM can't keep up with the subdividing needed for the precision a very large terrain requires.
So my question is, what is the best way to do this?
Maybe trying another 3D program like Cinema 4D?
Creating the terrain step by step in Blender and putting it together later? (might be problematic to maintain the ratio between the segments)
Some methods I don't know about?
I could create a large landscape with a random generation algorithm, but I don't want a random landscape; I need a customized landscape with many details (heights, depths, paths).
Edit
What I'll do is:
Create 3 different heightmaps (1. cave ground (+maybe half of the wall height), 2. inverted heightmap for cave ceiling, 3. standard surface heightmap)
Combine all three heightmaps
Save them in an .obj file or whatever format is required
Do some fine-tuning in a 3D editing tool (if it's too large to handle, I'll create an app with an LOD algorithm where I can edit some minor stuff)
Save it again in whatever format is required (maybe do some optimization)
Be happy
Edit2
The map I'm creating is so big that Photoshop uses all of my 8 GB of RAM, so I have to split all 3 heightmaps into smaller parts and assemble them on the fly when moving over the map.
I believe you would just want to make a height map. The larger you make the image, the further it can stretch. Perhaps if you made the seams match up you could tile it, but if you want an endless terrain it's probably worth the effort to generate the terrain procedurally.
To make a height map, you make an image where each pixel represents a set height (you don't really have to represent it as an image, but it makes it very easy to visualize), stored as a grey-scale value. You can then scale this value to the desired maximum height (precision is decided by the bit depth of the image).
If you wanted to do this with OpenGL, you could make an interface where you click at points to raise the height of particular points or areas.
Once you have this image, rendering it isn't too hard, because the X and Y coordinates come from the grid and the image gives you the Z coordinate.
This approach has the downside of not allowing for caves and similar features (because there is only one height given for each point). If you needed those features, they might be added with meshes or a second height map.
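To make the height map idea concrete, a small sketch of turning a grayscale image into terrain vertices (assuming an 8-bit buffer; the triangle indices and the upload to OpenGL buffers are left out):

#include <cstddef>
#include <vector>

struct Vertex { float x, y, z; };

// X and Y come from the grid position, Z from the pixel's gray value.
std::vector<Vertex> heightmapToVertices(const unsigned char* pixels,
                                        int width, int height, float maxHeight)
{
    std::vector<Vertex> verts;
    verts.reserve(std::size_t(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float z = pixels[y * width + x] / 255.0f * maxHeight; // scale 0-255 to world units
            verts.push_back({ float(x), float(y), z });
        }
    // Two triangles per grid cell would then be built from these vertices
    // and uploaded together with them to a VBO/IBO.
    return verts;
}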
If you're trying to store more data than fits in memory, you need to keep most of it on disk. Dividing the map into segments and loading the nearer segments as necessary is the standard technique. A lot of engines access the map segments via quadtrees, which usually don't need much traversal to get to the "nearby" parts.
Variations include creating lower-resolution versions of larger chunks of map for use in rendering long views, so you're keeping a really low-res version of the Whole Map, a medium-res version of This Valley Here, and a high-res copy of This Grove Of Trees I'm Looking At.
It's complicated stuff, which is why nobody really put the whole thing together until about GTA:San Andreas or Oblivion.
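A very rough sketch of the quadtree idea (the class name and the distance threshold are made up; a real implementation would also stream tile data from disk and create/destroy children as the camera moves):

#include <array>
#include <cmath>
#include <memory>

// Each node covers a square patch of terrain at one resolution;
// its four children cover the quarters at higher detail.
struct TerrainNode
{
    float centerX, centerZ, halfSize;                 // region this node covers
    std::array<std::unique_ptr<TerrainNode>, 4> children;

    // Draw coarse nodes far away, descend into children when the camera is
    // close relative to the node's size.
    void select(float camX, float camZ)
    {
        float dx = camX - centerX, dz = camZ - centerZ;
        float dist = std::sqrt(dx * dx + dz * dz);
        bool hasChildren = children[0] != nullptr;
        if (!hasChildren || dist > 2.0f * halfSize)
        {
            // drawTile(*this);   // render this node's mesh (not shown)
            return;
        }
        for (auto& c : children)
            c->select(camX, camZ);
    }
};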

How to implement painting (with layer support) in OpenGL?

situation
I'm implementing a height field editor, with two views. The main view displays the height field in 3D enabling trackball navigation. The edit view shows the height field as a 2D image.
On top of this height field, new images can be applied that alter its appearance (cut holes, lower or raise specific areas). These are called patches.
Both the height field and the patches are one-channel grayscale PNG images.
For visualization I'm using the Visualization Library framework (C++) and OpenGL 4.
task
Implement a drawing tool, available in the 2D edit view (orthographic projection), that creates these patches (as separate images) at runtime.
important notes / constraints
the image of the height field may be scaled, rotated and transposed.
the patches need to have the same scale as the height field, so one pixel in the patch covers exactly a pixel in the height field.
as a result of the scaling the size of a framebuffer pixel may be bigger or smaller than the size of the height field/patch image pixel.
the scene contains objects (example: a pointing arrow) that should not appear in the patch.
question
What is the right approach to this task? So far I had the following ideas:
Use some kind of Qt canvas to create the patch, then map it to the height field image proportions and save it as a new patch. This would be done every time the user starts drawing; that way, implementing undo will be easy (just remove the last patch created).
Use a neutral-colored image in combination with texture buffer objects to implement some kind of canvas myself. This way, every time the user stops drawing, the contents of the canvas are mapped to the height field and saved as a patch, and the canvas is reset for the next drawing.
There are some examples using a framebuffer object. However, I'm not sure if this approach fits my needs. When I use OpenGL to draw a sub-image into the framebuffer, won't the resulting image contain everything else in the scene as well?
Here is what I ended up with:
I use the PickIntersector of the Visualization Library to pick against the height field image in the edit view.
This yields local coordinates on the image.
These are transformed to UV coordinates, which in turn are transformed into pixel coordinates.
This is done when the user presses a mouse button, and it continues to happen while the mouse moves, as long as it's over the image.
I have a PatchCanvas class that collects all these points. On command, it uses the Anti-Grain Geometry library to actually rasterize the lines that can be constructed from the points.
After that is done, the rasterized image is divided up into a grid of fixed size. Every tile is scanned for a color different from the neutral one. Tiles that only contain the neutral color are dropped. The others are saved following the appropriate naming scheme, and can be loaded in the next frame.
AGG supports lines of different widths. This isn't implemented yet, but the idea is to pick two adjacent points in screen space, get their UV coordinates, convert them to pixels, and use that as the line thickness. This should result in broader strokes for zoomed-out views.
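For illustration, a tiny sketch of the local -> UV -> pixel conversion described above (the names and the assumption that local coordinates run from (0, 0) to (quadWidth, quadHeight) are mine, not part of the Visualization Library API):

struct Vec2 { float x, y; };

// Intersection point in the height field quad's local space -> pixel coordinates
// of the patch image, independent of how the quad is scaled or rotated on screen.
Vec2 localToPixel(Vec2 local, float quadWidth, float quadHeight,
                  int imageWidth, int imageHeight)
{
    Vec2 uv { local.x / quadWidth, local.y / quadHeight };  // normalize to [0, 1]
    return { uv.x * imageWidth, uv.y * imageHeight };       // scale to patch pixels
}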