situation
I'm implementing a height field editor with two views. The main view displays the height field in 3D and allows trackball navigation. The edit view shows the height field as a 2D image.
On top of this height field, new images can be applied that alter its appearance (cut holes, lower or raise specific areas). These are called patches.
Both the height field and the patches are one-channel grayscale PNG images.
For visualization I'm using the Visualization Library framework (C++) and OpenGL 4.
task
Implement a drawing tool, available in the 2D edit view (orthographic projection), that creates these patches (as separate images) at runtime.
important notes / constraints
the image of the height field may be scaled, rotated and translated.
the patches need to have the same scale as the height field, so one pixel in the patch covers exactly one pixel in the height field.
as a result of the scaling, a framebuffer pixel may be bigger or smaller than a height field/patch image pixel.
the scene contains objects (example: a pointing arrow) that should not appear in the patch.
question
What is the right approach to this task? So far I have had the following ideas:
Use some kind of Qt canvas to create the patch, then map it to the height field image proportions and save it as a new patch. This would be done every time the user starts drawing; that way implementing undo is easy (remove the last patch created).
Use a neutral-colored image in combination with texture buffer objects to implement some kind of canvas myself. Every time the user stops drawing, the contents of the canvas are mapped to the height field and saved as a patch, and the canvas is reset for the next drawing.
There are some examples using a framebuffer object. However, I'm not sure if this approach fits my needs. When I use OpenGL to draw a sub-image into the framebuffer, won't the resulting image contain all the data?
Here is what I ended up with:
I use the PickIntersector of the Visualization Library to pick against the height field image in the edit view.
This yields local coordinates on the image.
These are transformed to UV coordinates, which in turn are transformed into pixel coordinates.
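A minimal sketch of that coordinate chain, assuming the height field quad spans [0, quadWidth] x [0, quadHeight] in its local frame; the names are illustrative, not Visualization Library API:

    #include <cmath>

    struct PixelCoord { int x, y; };

    // local quad coordinates -> UV in [0,1] -> pixel coordinates in the patch image
    PixelCoord localToPixel(double localX, double localY,
                            double quadWidth, double quadHeight,
                            int imgWidth, int imgHeight)
    {
        double u = localX / quadWidth;
        double v = localY / quadHeight;
        PixelCoord p;
        p.x = static_cast<int>(std::lround(u * (imgWidth  - 1)));
        p.y = static_cast<int>(std::lround(v * (imgHeight - 1)));
        return p;
    }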
This is done when the user presses a mouse button, and it continues to happen as the mouse moves, as long as it is over the image.
I have a PatchCanvas class that collects all these points. On command it uses the Anti-Grain Geometry library to actually rasterize the lines that can be constructed from the points.
After that is done, the rasterized image is divided up into a grid of fixed size. Every tile is scanned for a color different from the neutral one. Tiles that only contain the neutral color are dropped; the others are saved following the appropriate naming scheme and can be loaded in the next frame.
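The tile scan itself is straightforward; here is a rough sketch assuming an 8-bit single-channel canvas buffer, a fixed tile size and a hypothetical savePatchTile() that writes one tile using the naming scheme:

    #include <algorithm>

    const int TILE = 64;                 // fixed tile size (assumption)
    const unsigned char NEUTRAL = 127;   // neutral gray value (assumption)

    // hypothetical writer that stores one tile under the naming scheme
    void savePatchTile(const unsigned char* canvas, int width, int height,
                       int tileX, int tileY, int tileSize);

    void saveNonNeutralTiles(const unsigned char* canvas, int width, int height)
    {
        for (int ty = 0; ty < height; ty += TILE)
        {
            for (int tx = 0; tx < width; tx += TILE)
            {
                bool touched = false;
                for (int y = ty; y < std::min(ty + TILE, height) && !touched; ++y)
                    for (int x = tx; x < std::min(tx + TILE, width) && !touched; ++x)
                        if (canvas[y * width + x] != NEUTRAL)
                            touched = true;

                if (touched)                          // drop all-neutral tiles
                    savePatchTile(canvas, width, height, tx, ty, TILE);
            }
        }
    }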
AGG supports lines of different widths. This isn't implemented yet, but the idea is to pick two adjacent points in screen space, get their UV coordinates, convert them to pixels and use the resulting distance as the line thickness. This should result in broader strokes for zoomed-out views.
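Reusing the localToPixel() sketch from above, that width computation would boil down to something like this (illustrative only):

    #include <cmath>

    // distance in patch pixels between two picked points that are one
    // brush radius apart in screen space -> AGG line width
    double strokeWidthInPixels(const PixelCoord& a, const PixelCoord& b)
    {
        double dx = double(a.x - b.x);
        double dy = double(a.y - b.y);
        return std::sqrt(dx * dx + dy * dy);   // larger when zoomed out
    }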
Related
Is it possible to "look at" an OpenGL scene which was rendered at e.g. 60 degrees vertical FoV through a frustum/corridor/hole that has a smaller FoV - and have that fill the resulting projection plane?
I think that's not the same thing as "zooming in".
Background:
I'm struggling with the OpenGL transform pipeline for an optical see-through AR display here.
It could be that I have mixed up my understanding of what the OpenGL transform pipeline of my setup really needs...
I'm creating graphics that are meant to appear properly located in the real world when being overlaid through AR glasses. The glasses are properly tracked in 3D space.
For rendering the graphics, I'm using OpenGL's legacy fixed-function pipeline. Results are good, but I keep struggling with registration errors that seem to have their root in my combination of glFrustum() plus gluLookAt() not recreating the "perspective impression" correctly.
These AR displays usually don't fill the entire field of view of the wearer but the display area appears like a smaller "window" floating in space, usually ~3-6 feet in front of the user, pinned to head movement.
In OpenGL, I use a layout very similar to Bourke's where (I hope I summarize it correctly) the display's aspect ratio (e.g. 4:3), together with windowwidth and windowheight, defines the vertical field of view. So the FoV forms a fixed link with the window dimensions and the "transform frustum" used by OpenGL - while I need to combine two frustums (?):
My understanding is that the OpenGL scene must be rendered with parameters equivalent to the "parameters" of the human eye in order to match up, as the AR glasses let the user look through them.
Let's assume the focal length of the human eye is 22mm (Clark, R.N. Notes on the Resolution and Other Details of the Human Eye. 2007.) and the eyes' "sensor size" is 16mm w x 13mm h (my estimate). The calculated vertical FoV is ~33 degrees then - which we feed into the OpenGL pipeline.
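(For reference, that number follows from the pinhole relation fov = 2 * atan(sensorHeight / (2 * focalLength)) = 2 * atan(13 / 44) ≈ 33 degrees, using the 13 mm height estimate above.)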
The output of such a pipeline would be that I either get the application window filled with this "view", or I get a scaled-down version of it, depending on my glViewport settings.
But as the AR glasses need input for only a sub-section, a "smaller window", of the whole field of view of the human wearer, I think I need a way to "look at" a smaller sub-area of the whole rendered scene - as if I was looking through a tiny hole onto the scene.
These glasses, with their "display window", provide a vertical field of view of a little under 20 degrees - but feeding that into the OpenGL pipeline would be wrong. So, how can I combine these conflicting FoVs? ...or am I on the wrong track here?
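For what it's worth, the relation between a vertical FoV and glFrustum() parameters is simple; a symmetric frustum built directly from the display's ~20 degree FoV would render only that "window", while cropping a sub-rectangle out of the 33 degree eye frustum would need asymmetric left/right/bottom/top values. This is only an illustration of that relation, not an answer from the thread:

    #include <cmath>
    #include <GL/gl.h>

    // build a symmetric projection frustum for a given vertical FoV
    void setFrustumFromFov(double fovYDegrees, double aspect, double zNear, double zFar)
    {
        const double PI = 3.14159265358979323846;
        double top   = zNear * std::tan(fovYDegrees * 0.5 * PI / 180.0);
        double right = top * aspect;
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        glFrustum(-right, right, -top, top, zNear, zFar);
    }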
Using moveto and lineto to draw various lines on a window canvas...
What is the simplest way to determine at run-time if an object, like a bitmap or a picture control, is in "contact" (same x,y coordinates) with a line (or lines) that has been drawn with LineTo on a window canvas?
A simple example would be a ball (bitmap or picture) "contacting" a drawn border and rebounding... What is the easiest way to know if "contact" occurs between the object, picture or bitmap and any line that exists on the window?
If I get it right, you want collision detection/avoidance between a circular object and line(s) while moving. There are several options to do this that I know of...
Vector approach
You need to remember all the rendered stuff in vector form too, so you need a list of all rendered lines, objects, etc. Then for a particular object, loop through all the others and check for collision algebraically with vector math - for example, first detect intersection between bounding boxes and then against the particular line/polyline/polygon or whatever.
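For a ball against a line segment the algebraic check is just "closest point on the segment vs. radius"; a minimal plain C++ sketch (no VCL types):

    #include <algorithm>
    #include <cmath>

    struct Vec2 { double x, y; };

    // true if a circle with center c and radius r touches segment a-b
    bool circleHitsSegment(Vec2 c, double r, Vec2 a, Vec2 b)
    {
        Vec2 ab = { b.x - a.x, b.y - a.y };
        Vec2 ac = { c.x - a.x, c.y - a.y };
        double len2 = ab.x * ab.x + ab.y * ab.y;
        double t = 0.0;
        if (len2 > 0.0)
            t = std::max(0.0, std::min(1.0, (ac.x * ab.x + ac.y * ab.y) / len2));
        double dx = a.x + t * ab.x - c.x;       // closest point on the segment
        double dy = a.y + t * ab.y - c.y;
        return dx * dx + dy * dy <= r * r;      // within the radius -> collision
    }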
Raster approach
This is simpler to implement and sometimes even faster, but less accurate (only pixel precision). The idea is to clear the object's last position with the background color. Then check all the pixels that would be rendered at the new position; if nothing other than the background color is present, no collision occurs and you can render the pixels. If any non-background color is present, render the object at the original position again, as a collision occurred.
You can also check between the old and new positions and place the object at the first non-colliding position, so you end up closer to the edge...
This approach needs fast pixel access, otherwise it would be too slow. The standard Canvas does not allow this without using BitBlt from GDI. Luckily the VCL Graphics::TBitmap has a ScanLine[] property allowing direct pixel access without any performance hit if used right. See an example of it in your other question I answered:
bitmap rotate using direct pixel access
Accessing ScanLine[y][x] is as slow as Pixels[x][y], but you can store the pointers to each line of the bitmap once and then just use those instead, which is the same as accessing your own 2D array. So you really need just bitmap->Height calls of ScanLine[y] for rendering the entire image, repeated after any resize or assignment of the bitmap...
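Something like this (VCL / C++ Builder sketch, assuming a 32-bit bitmap):

    #include <vcl.h>
    #include <vector>

    // cache one row pointer per line; redo after any resize or assignment
    void cacheRows(Graphics::TBitmap *bmp, std::vector<DWORD*> &rows)
    {
        bmp->PixelFormat = pf32bit;       // rows become arrays of 32-bit pixels
        rows.resize(bmp->Height);
        for (int y = 0; y < bmp->Height; ++y)
            rows[y] = reinterpret_cast<DWORD*>(bmp->ScanLine[y]);
    }

    // afterwards pixel access is rows[y][x], as fast as a plain 2D array:
    //   rows[100][200] = 0x00FF0000;     // write a red pixel (0x00RRGGBB)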
If you have a tile-based scene you can use this approach on tiles instead of pixels, something like this:
What is the best way to move an object on the screen? (but it is in asm...)
Field approach
This one is also considered a vector approach, but it does not require collision checks. Instead, each object creates a repulsive force that grows the closer you are to it, which is added to the Newton/D'Alembert physics driving force. When the coefficients are set properly it will avoid collisions on its own. This is also used for automatic placement of items, etc. For more info see:
How to implement a constraint solver for 2-D geometry?
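A tiny sketch of the repulsion term (constants purely illustrative); the result gets added to the driving force before the usual acceleration/velocity/position integration:

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct Vec2 { double x, y; };

    // sum of inverse-square pushes away from every obstacle
    Vec2 repulsion(Vec2 object, const std::vector<Vec2>& obstacles,
                   double strength = 100.0, double minDist = 1e-3)
    {
        Vec2 f = { 0.0, 0.0 };
        for (const Vec2& o : obstacles)
        {
            double dx = object.x - o.x, dy = object.y - o.y;
            double d  = std::max(std::sqrt(dx * dx + dy * dy), minDist);
            double k  = strength / (d * d);    // stronger the closer we are
            f.x += k * dx / d;
            f.y += k * dy / d;
        }
        return f;
    }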
Hybrid approach
You can combine any of the above approaches together to better suit your needs. For example see:
Path generation for non-intersecting disc movement on a plane
I'm at a point where I need to mix the DICOM Region of Interest (ROI) Relative Electron Density (RED) with the information from DICOM CTs, where some of the ROIs should override the CT info. [I'm working in C#, by the way.] I need to draw the ROIs filled, in the correct order, such that lungs for instance are shown with low RED while the body is water-equivalent. I can use the bounding rectangle to get an idea whether one is possibly inside the other, but once that is known, I still need to determine if they overlap or if one is completely contained within another. I can do a raw draw of each ROI on a separate bitmap and do a voxel-by-voxel comparison per slice, but this seems likely to be slow. I have not found a good answer and I'm hoping someone knows a better, faster way to determine the ordering for drawing (painting filled).
Thanks
An ROI in DICOM is normally defined as a list of points forming a polygon (or several) on the plane of the related CT-scan slice (they share the same frame of reference UID). So you can draw your CT slice and then draw the ROI polygons on top, or you can query, for every CT point you draw, whether it belongs to the ROI polygon set, and change the color correspondingly.
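The per-point query is a standard even-odd point-in-polygon test; a minimal sketch (shown in C++, but it translates directly to C#):

    #include <vector>

    struct Pt { double x, y; };

    // even-odd rule: toggle for every polygon edge crossed by a horizontal ray from p
    bool insidePolygon(const Pt& p, const std::vector<Pt>& poly)
    {
        bool inside = false;
        for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++)
        {
            if ((poly[i].y > p.y) != (poly[j].y > p.y) &&
                p.x < (poly[j].x - poly[i].x) * (p.y - poly[i].y) /
                      (poly[j].y - poly[i].y) + poly[i].x)
                inside = !inside;
        }
        return inside;
    }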
I'm using OpenGL to draw some very large images (3200 x 4000 pixels). I don't have objects that are anywhere near that large individually, but I do have many irregular lines that span the width and height. I'm currently displaying the lines by drawing all of the points on them as individual vertices (I know that's kind of ridiculous, but I'm grabbing all the lines using image processing in OpenCV and the best way for me to do that is to create a set of all the pixels that make up the line). I'm using user commands to rotate the images and I'm getting really large lag between the user input and the updated display. Would it be faster to instead draw with GL_LINE_STRIP, using all of the pixels as vertices in the line? Or should I just thin out the pixels and use less data?
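For reference, the GL_LINE_STRIP variant the question asks about boils down to uploading each polyline once into a buffer object and drawing it with a single call; this is only an illustrative sketch (compatibility-profile vertex arrays, GL headers and loader assumed):

    #include <vector>
    // assumes GL headers and a loader (e.g. GLEW) are already set up

    // xy holds interleaved x,y pairs for one polyline
    void drawPolyline(const std::vector<float>& xy, GLuint& vbo)
    {
        if (vbo == 0)
        {
            glGenBuffers(1, &vbo);
            glBindBuffer(GL_ARRAY_BUFFER, vbo);
            glBufferData(GL_ARRAY_BUFFER, xy.size() * sizeof(float),
                         xy.data(), GL_STATIC_DRAW);   // upload once
        }
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(2, GL_FLOAT, 0, nullptr);
        glDrawArrays(GL_LINE_STRIP, 0, GLsizei(xy.size() / 2));   // one call per line
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }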
I receive an array of coordinates (double coordinates with -infinity < x < +infinity and 0 <= y <= 10) and want to draw a polyline using those points. I want the graph to always begin on the left border of my image, and end at the right. The bottom border of my image always represents a 0 y-value, and the top border always a 10 y-value. The width and height of the image that is created are decided by the user at runtime.
I want to realize this using Qt, and QImage in combination with QPainter seem to be my primary weapons of choice. The problem I am currently trying to solve is:
How to convert my coordinates to pixels in my image?
The y-values seem to be fairly simple, since I know the minimum and maximum of the graph beforehand, but I am struggling with the x-values. My approach so far is to find the minimum and maximum x-values and scale each point accordingly.
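As a sanity check, that scaling is a plain linear mapping; something like this (pixel y grows downwards):

    #include <QPointF>

    // map a data point into image pixel space:
    // x from [xMin, xMax] to [0, width-1], y from [0, 10] to [height-1, 0]
    QPointF toPixel(double x, double y, double xMin, double xMax, int width, int height)
    {
        double px = (x - xMin) / (xMax - xMin) * (width - 1);
        double py = (1.0 - y / 10.0) * (height - 1);
        return QPointF(px, py);
    }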
Is there a more native approach?
Since one set of coordinates serves for several images with different widths and heights, I wondered whether a vector graphic (SVG) might be a more suitable approach, but I couldn't find material on creating SVG files within Qt yet, just on working with existing files. I would be looking for something comparable to Windows metafiles.
Is there a close match to metafiles in Qt?
QGraphicsScene may help in this case. You plot the graph with either addPolygon() or addPath(), then render the scene into a bitmap with QGraphicsScene::render().
The sceneRect will automatically grow as you add items to it. At the end of the "plotting" you will get the final size/bounds of the graph. Create a QImage and use it as the painter back store to render the scene.
QGraphicsScene also allows you to manipulate the transformation matrix to fit the orientation and scale to your need.
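A minimal sketch of that, assuming the points are already collected in a QVector:

    #include <QGraphicsScene>
    #include <QImage>
    #include <QPainter>
    #include <QPainterPath>
    #include <QVector>

    // plot a polyline into a user-sized QImage via QGraphicsScene
    QImage plotToImage(const QVector<QPointF>& points, int width, int height)
    {
        QGraphicsScene scene;
        QPainterPath path(points.first());
        for (int i = 1; i < points.size(); ++i)
            path.lineTo(points[i]);
        scene.addPath(path);                 // sceneRect grows to fit the path

        QImage image(width, height, QImage::Format_ARGB32);
        image.fill(Qt::white);
        QPainter painter(&image);
        // map the scene's bounds onto the whole image
        scene.render(&painter, QRectF(0, 0, width, height), scene.sceneRect());
        return image;
    }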
Another alternative is to use QtOpenGL to render your 2D graph to an OpenGL context. No conversion/scaling of coordinates is required. Once you get past the OpenGL basics you can pick appropriate viewport/eye parameters to achieve any zoom/pan level.