I receive an array of coordinates (double coordinates with -infinity < x < +infinity and 0 <= y <= 10) and want to draw a polyline using those points. I want the graph to always begin on the left border of my image, and end at the right. The bottom border of my image always represents a 0 y-value, and the top border always a 10 y-value. The width and height of the image that is created are decided by the user at runtime.
I want to realize this using Qt, and QImage in combination with QPainter seem to be my primary weapons of choice. The problem I am currently trying to solve is:
How to convert my coordinates to pixels in my image?
The y-values seem to be fairly simple, since I know the minimum and maximum of the graph beforehand, but I am struggling with the x-values. My approach so far is to find the minimum and maximum x-values and scale each point accordingly.
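In code, my current idea looks roughly like this (just a sketch, not tested; xMin and xMax are the extremes found over the point set, and Qt's convention of y growing downwards is assumed):

#include <QPointF>

// Map a data point onto a width x height image:
// x is scaled between the observed xMin/xMax, y is fixed to the range 0..10.
QPointF toPixel(double x, double y, double xMin, double xMax,
                int width, int height)
{
    double px = (x - xMin) / (xMax - xMin) * (width - 1);
    double py = (1.0 - y / 10.0) * (height - 1);   // y = 10 -> top, y = 0 -> bottom
    return QPointF(px, py);
}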
Is there a more native approach?
Since one set of coordinates serves for several images with different widths and heights, I wondered whether a vector graphic (SVG) might be a more suitable approach, but I couldn't find material on creating SVG files within Qt yet, only on working with existing files. I would be looking for something comparable to Windows metafiles.
Is there a close match to metafiles in Qt?
QGraphicsScene may help in this case. You plot the graph with either addPolygon() or addPath(), then render the scene into a bitmap with QGraphicsScene::render().
The sceneRect will automatically grow as you add items to it, so at the end of the "plotting" you will have the final size/bounds of the graph. Create a QImage and use it as the painter's backing store when rendering the scene.
QGraphicsScene also allows you to manipulate the transformation matrix to fit the orientation and scale to your need.
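A rough sketch of that idea, assuming the points are already available as a QVector<QPointF> (untested; you may want to flip the y-values, e.g. use 10 - y, since scene coordinates grow downwards):

#include <QGraphicsScene>
#include <QImage>
#include <QPainter>
#include <QPainterPath>
#include <QPen>
#include <QVector>

QImage plotToImage(const QVector<QPointF> &points, int width, int height)
{
    QGraphicsScene scene;

    QPainterPath path(points.first());               // assumes at least one point
    for (int i = 1; i < points.size(); ++i)
        path.lineTo(points[i]);
    scene.addPath(path, QPen(Qt::black, 0));          // cosmetic pen, unaffected by scaling

    QImage image(width, height, QImage::Format_ARGB32);
    image.fill(Qt::white);

    QPainter painter(&image);
    // Source rect: full x-extent of the data, y forced to the fixed 0..10 range.
    QRectF source(scene.sceneRect().left(), 0.0, scene.sceneRect().width(), 10.0);
    scene.render(&painter, QRectF(0, 0, width, height), source, Qt::IgnoreAspectRatio);
    return image;
}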
Another alternative is to use QtOpenGL to render your 2D graph to an OpenGL context. No conversion/scaling of coordinates is required. Once you get past the OpenGL basics you can pick appropriate viewport/eye parameters to achieve any zoom/pan level.
Using moveto and lineto to draw various lines on a window canvas...
What is the simplest way to determine at run-time whether an object, like a bitmap or a picture control, is in "contact" (same x,y coordinates) with a line or lines that have been drawn with LineTo on a window canvas?
A simple example would be a ball (bitmap or picture) "contacting" a drawn border and rebounding... What is the easiest way to know if "contact" occurs between the object (picture or bitmap) and any line that exists on the window?
If I get it right, you want collision detection/avoidance between a circular object and lines while moving. There are several options I know of...
Vector approach
You need to remember all the rendered stuff in vector form too, so you need a list of all rendered lines, objects, etc. Then for a particular object, loop through all the other ones and check for collisions algebraically with vector math, e.g. first detecting intersection between bounding boxes and then against the particular line/polyline/polygon or whatever.
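The core of the circle vs. line case is the distance from the circle center to a line segment; a minimal sketch (plain math, no library needed):

#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };

// Distance from point p to the segment a-b.
double distPointSegment(Vec2 p, Vec2 a, Vec2 b)
{
    double dx = b.x - a.x, dy = b.y - a.y;
    double len2 = dx * dx + dy * dy;
    double t = len2 > 0.0 ? ((p.x - a.x) * dx + (p.y - a.y) * dy) / len2 : 0.0;
    t = std::max(0.0, std::min(1.0, t));            // clamp to the segment ends
    double cx = a.x + t * dx - p.x, cy = a.y + t * dy - p.y;
    return std::sqrt(cx * cx + cy * cy);
}

// Does a circle with center c and radius r touch the segment a-b?
bool circleHitsSegment(Vec2 c, double r, Vec2 a, Vec2 b)
{
    return distPointSegment(c, a, b) <= r;
}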
Raster approach
This is simpler to implement and sometimes even faster, but less accurate (only pixel precision). The idea is to clear the object's last position with the background color. Then check all the pixels that would be rendered at the new position: if nothing other than the background color is present, no collision occurs and you can render the pixels. If any non-background color is present, render the object at its original position again, as a collision has occurred.
You can also check positions between the old and the new one and place the object at the first non-colliding position, so you end up closer to the edge...
This approach needs fast pixel access, otherwise it would be too slow. The standard Canvas does not allow this without using BitBlt from GDI. Luckily VCL Graphics::TBitmap has a ScanLine[] property allowing direct pixel access without any performance hit if used right. See an example of it in your other question I answered:
bitmap rotate using direct pixel access
Accessing ScanLine[y][x] is as slow as Pixels[x][y], but you can store the pointers to each line of the bitmap once and then just use those instead, which is the same as accessing your own 2D array. So you really need just bitmap->Height calls of ScanLine[y] for rendering the entire image, repeated only after any resize or assignment of the bitmap...
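A sketch combining that pointer caching with the pixel check from the raster approach (C++Builder/VCL, assuming a pf32bit bitmap; the object is treated as a plain objW x objH rectangle and bounds checks are omitted):

#include <vector>
#include <vcl.h>   // VCL: Graphics::TBitmap, pf32bit

// Would rendering an objW x objH object at (x0,y0) touch any pixel that is
// not the background color? In real code cache rows[] and rebuild it only
// after a resize/assignment; it is built inline here to keep the sketch short.
bool wouldCollide(Graphics::TBitmap *bmp, int x0, int y0, int objW, int objH,
                  DWORD backgroundColor)
{
    bmp->PixelFormat = pf32bit;                     // every pixel is one 32-bit value
    std::vector<DWORD*> rows(bmp->Height);
    for (int y = 0; y < bmp->Height; ++y)
        rows[y] = (DWORD*)bmp->ScanLine[y];         // bmp->Height ScanLine calls in total

    for (int y = y0; y < y0 + objH; ++y)
        for (int x = x0; x < x0 + objW; ++x)
            if (rows[y][x] != backgroundColor)
                return true;                        // something else already drawn here
    return false;                                   // free to render the object
}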
If you have a tile-based scene you can use this approach on tiles instead of pixels, something like this:
What is the best way to move an object on the screen? but it is in asm ...
Field approach
This one is also considered a vector approach but does not require collision checks. Instead, each object creates a repulsive force that grows the closer you get to it, and this force is added to the Newton/D'Alembert physics driving force. With the coefficients set properly it will avoid collisions on its own. This is also used for automatic placement of items etc... For more info see:
How to implement a constraint solver for 2-D geometry?
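A minimal sketch of that repulsive-force idea for circular objects (the 1/gap falloff and the coefficient k are arbitrary choices you would tune for your scene):

#include <cmath>

struct Obj { double x, y, vx, vy, r; };

// Add a repulsive acceleration to 'a', pushing it away from every other object;
// the force grows as the gap between the two surfaces shrinks.
void addRepulsion(Obj &a, const Obj *others, int count, double k, double dt)
{
    double ax = 0.0, ay = 0.0;
    for (int i = 0; i < count; ++i) {
        const Obj &b = others[i];
        if (&b == &a) continue;
        double dx = a.x - b.x, dy = a.y - b.y;
        double d = std::sqrt(dx * dx + dy * dy);
        double gap = d - (a.r + b.r);               // distance between the surfaces
        if (gap < 1e-6) gap = 1e-6;                 // avoid division by zero
        ax += k * dx / (d * gap);                   // away from b, stronger when close
        ay += k * dy / (d * gap);
    }
    a.vx += ax * dt;                                // added on top of the driving force
    a.vy += ay * dt;
}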
Hybrid approach
You can combine any of the above approaches to better suit your needs. For example see:
Path generation for non-intersecting disc movement on a plane
I'm at a point where I need to mix the DICOM Region of Interest (ROI) Relative Electron Density (RED) with the information from DICOM CTs, where some of the ROIs should override the CT info. (I'm working in C#, by the way.) I need to draw the ROIs filled, in the correct way, so that lungs for instance are shown with low RED while the body is water-equivalent. I can use the bounding rectangle to get an idea whether one is possibly inside the other, but once that is known, I still need to determine whether they overlap or whether one is completely contained within the other. I could do a raw draw of each ROI on a separate bitmap and do a voxel-by-voxel comparison per slice, but this seems likely to be slow. I have not found a good answer and I'm hoping someone knows a better way to determine the ordering of drawing (painting filled) that works quickly.
Thanks
An ROI in DICOM is normally defined as a list of points forming a polygon (or several) on the plane of the related CT slice (they share the same Frame of Reference UID). So you can draw your CT slice and then draw the ROI polygons filled on top, or you can query, for every CT point you draw, whether it belongs to the ROI polygon set, and change the color correspondingly.
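For the "does this CT point belong to the ROI polygon" query, a standard even-odd (ray casting) test is enough. A sketch in C++ (it translates directly to C#); the contour is assumed to be already reduced to a flat list of 2D points in the slice plane:

#include <vector>

struct Pt { double x, y; };

// Even-odd rule: count how many polygon edges a ray from p to the right crosses.
bool pointInPolygon(const Pt &p, const std::vector<Pt> &poly)
{
    bool inside = false;
    for (size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does the edge (j,i) straddle the horizontal line through p?
        if ((poly[i].y > p.y) != (poly[j].y > p.y)) {
            double xCross = poly[i].x + (poly[j].x - poly[i].x) *
                            (p.y - poly[i].y) / (poly[j].y - poly[i].y);
            if (p.x < xCross)
                inside = !inside;
        }
    }
    return inside;
}

The same test can help with drawing order: if all points of one contour lie inside another and the contours do not cross, that ROI is contained and should be painted on top.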
I'm quite a beginner with C++, especially graphics-related things.
I would like to make an animated background for my graphicsview which looks kind of like this:
Gradient Field Airflow
The picture represents the turbulence of an airflow over an object.
The colors must be based on a matrix of values.
I can only find how to do single-direction gradients with Qt.
How do I set this up? How do I get two-directional gradients?
/*edit
It has been pointed out, correctly, that technically speaking this is not a gradient but a color interpolation on a 2D array of nodes.
*/
Well, you have not provided the input data, so no one knows what you really want to achieve!
1. If you have the flow trajectories and mass
Then you can use a particle system plus heavy blurring/smoothing filtering to achieve this. For any known point along the trajectory, plot a dithered circle whose color depends on the mass/temperature/velocity... and the color scale. It should be solid in the middle and transparent at the edges. After rendering, just blur/smooth the image a few times and that should be it. The fewer points used, the bigger the circles must be to cover the area nicely; you can also do it in multiple passes and change the point coordinates randomly to improve the randomness in the image...
2. If you have field strength/speed/temperature or whatever grid values
Then it is similar to #1, but instead of a particle system you can do the rendering via QUADs/squares. The 2D linear gradient is called bilinear filtering:
c00 ----- x ----> c01
 |
 |
 y      c(x,y)
 |
 |
 V
c10               c11
where:
c00, c01, c10, c11 are the corner colors
c(x,y) is the color at position x,y inside the square
x,y are in range <0,1> for simplicity (but you can use any range with the appropriate scaling in the equations)
Bilinear interpolation is 3x linear interpolation:
c0=c(x,0)=c00+((c01-c00)*x)
c1=c(x,1)=c10+((c11-c10)*x)
c(x,y) =c0 +((c1 -c0 )*y)
So render all pixels of the square with the colors computed above and that is what you seek. This kind of filtering usually produces artifacts on the edges between squares or on diagonals; to avoid that, use non-linear filtering or blur/smooth the final image.
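A sketch of rendering one such square into a QImage with exactly those three linear interpolations (assuming Qt; c00/c01 are the top corners and c10/c11 the bottom ones):

#include <QColor>
#include <QImage>

// Linear interpolation between two colors, t in <0,1>.
static QColor lerp(const QColor &a, const QColor &b, double t)
{
    return QColor(int(a.red()   + (b.red()   - a.red())   * t),
                  int(a.green() + (b.green() - a.green()) * t),
                  int(a.blue()  + (b.blue()  - a.blue())  * t));
}

// Fill the w x h rectangle at (x0,y0) of img with a bilinear blend of the four
// corner colors c00 (top-left), c01 (top-right), c10 (bottom-left), c11 (bottom-right).
void fillBilinear(QImage &img, int x0, int y0, int w, int h,
                  QColor c00, QColor c01, QColor c10, QColor c11)
{
    for (int y = 0; y < h; ++y) {
        double fy = h > 1 ? double(y) / (h - 1) : 0.0;
        for (int x = 0; x < w; ++x) {
            double fx = w > 1 ? double(x) / (w - 1) : 0.0;
            QColor c0 = lerp(c00, c01, fx);                       // c(x,0)
            QColor c1 = lerp(c10, c11, fx);                       // c(x,1)
            img.setPixelColor(x0 + x, y0 + y, lerp(c0, c1, fy));  // c(x,y)
        }
    }
}

Rendering one such square per cell of your value matrix (with corner colors taken from the matrix through your color scale) gives the two-directional blend you are after.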
There is a tutorial on gradients in Qt: http://qt-project.org/doc/qt-4.8/demos-gradients.html and a class: http://harmattan-dev.nokia.com/docs/library/html/qt4/qgradient.html I have never used anything other than linear gradients, and according to the docs it seems there are only three basic types of gradients available in Qt: linear, radial and conical. If you cannot compose your desired gradient using these three types, then I am afraid you will need to program your image pixels yourself. Also, it might be worth exploring whether OpenGL could somehow help. Qt has some classes using OpenGL, but I am not familiar enough with them to provide more advice.
situation
I'm implementing a height field editor with two views. The main view displays the height field in 3D, enabling trackball navigation. The edit view shows the height field as a 2D image.
On top of this height field, new images can be applied that alter its appearance (cut holes, lower or raise specific areas). These are called patches.
Both the height field and the patches are one-channel grayscale PNG images.
For visualisation I'm using the Visualization Library framework (C++) and OpenGL 4.
task
Implement a drawing tool, available in the 2D edit view (orthographic projection), that creates these patches (as separate images) at runtime.
important notes / constraints
the image of the height field may be scaled, rotated and translated.
the patches need to have the same scale as the height field, so one pixel in the patch covers exactly one pixel in the height field.
as a result of the scaling, the size of a framebuffer pixel may be bigger or smaller than the size of a height field/patch image pixel.
the scene contains objects (example: a pointing arrow) that should not appear in the patch.
question
What is the right approach to this task? So far I had the following ideas:
Use some kind of Qt canvas to create the patch, then map it to the height field image proportions and save it as a new patch. This would be done every time the user starts drawing; this way implementing undo will be easy (remove the last patch created).
Use a neutral-colored image in combination with texture buffer objects to implement some kind of canvas myself. This way, every time the user stops drawing, the contents of the canvas are mapped to the height field and saved as a patch, and the canvas is reset for the next drawing.
There are some examples using a frame buffer object. However, I'm not sure if this approach fits my needs. When I use OpenGL to draw a sub-image into the frame buffer, won't the resulting image contain all the data?
Here is what I ended up with:
I use the PickIntersector of the Visualization Library to pick against the height field image in the edit view.
This yields local coords of the image.
These are transformed to uv coords, which in turn get transformed into pixel coords.
This is done when the user presses a mouse button, and continues to happen as the mouse moves, as long as it's over the image.
I have a PatchCanvas class that collects all these points. On command it uses the Anti-Grain Geometry (AGG) library to actually rasterize the lines that can be constructed from the points.
After that is done, the rasterized image is divided up into a grid of fixed size. Every tile is scanned for a color different from the neutral one. Tiles that only contain the neutral color are dropped. The others are saved following the appropriate naming scheme, and can be loaded in the next frame.
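The tile scan boils down to something like this (a sketch over a plain 8-bit buffer, which is what the AGG rendering gives me; 'neutral' is the canvas clear value and 'tile' the fixed grid size):

// buf: 8-bit grayscale image of size width x height, row-major.
// Returns true if the tile with top-left corner (tx,ty) contains at least
// one pixel different from the neutral color.
bool tileHasContent(const unsigned char *buf, int width, int height,
                    int tx, int ty, int tile, unsigned char neutral)
{
    for (int y = ty; y < ty + tile && y < height; ++y)
        for (int x = tx; x < tx + tile && x < width; ++x)
            if (buf[y * width + x] != neutral)
                return true;       // something was drawn in this tile, keep it
    return false;                  // all neutral, the tile can be dropped
}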
AGG supports lines of different widths. This isn't implemented yet, but the idea is to pick two adjacent points in screen space, get their uv coords, convert them to pixels and use that as the line thickness. This should result in broader strokes for zoomed-out views.
I have been able to find a lot of information on actual logic development for games. I would really like to make a card game, but I just don't understand how, based on the mouse position, an object can be selected (or at least the proper way to do it). First I thought of bounding-box checking, but not all my bitmaps are rectangles. Then I thought of making a hidden buffer with each object having a different color, but it seems ridiculous to have to do it this way. I'm wondering how it is really done. For example, how does Adobe Flash know which object is under the mouse?
Thanks
Your question is how to tell if the mouse is above a non-rectangular bitmap. I am assuming all your bitmaps are really rectangular, but they have transparent regions. You must already somehow be able to tell which part of your (rectangular) bitmap is transparent, depending on the scheme you use (e.g. if you designate a color as transparent or if you use a bit mask). You will also know the z-order (layering) of bitmaps on your canvas. Then when you detect a click at position (x,y), you need to find the list of rectangular bitmaps that span over that pixel. Sort them by z-order and for each one check whether the pixel is transparent or not. If yes, move on to the next bitmap. If no, then this is the selected bitmap.
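A sketch of that per-pixel test (using Qt types here as an assumption, with each card stored as a QImage with an alpha channel; any scheme that lets you query transparency works the same way):

#include <QColor>
#include <QImage>
#include <QPoint>
#include <QRect>
#include <QVector>

struct Sprite {
    QImage image;   // rectangular bitmap with transparent regions (alpha channel)
    QPoint pos;     // top-left position on the canvas
};

// items must be sorted topmost-first (descending z-order).
// Returns the index of the hit sprite, or -1 if the click hit nothing.
int hitTest(const QVector<Sprite> &items, const QPoint &click)
{
    for (int i = 0; i < items.size(); ++i) {
        QRect bounds(items[i].pos, items[i].image.size());
        if (!bounds.contains(click))
            continue;                                   // click is outside this bitmap
        QPoint local = click - items[i].pos;
        if (qAlpha(items[i].image.pixel(local)) > 0)    // opaque pixel -> this one is hit
            return i;
        // transparent pixel -> fall through to the next bitmap below
    }
    return -1;
}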
Or you may use a geometric solution. You should store/manage the geometry of the card/item, for example as a list of shapes like circles and rectangles.
Maybe triangles or ellipses if you have lots of time. Telling whether a triangle contains a point is a mathematical question and can be numerically unstable if the triangle is very thin (the algorithm has a division). Fix: How to determine if a point is in a 2D triangle?
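One common division-free form of that triangle test checks the sign of three cross products; a sketch:

struct P { double x, y; };

// Cross product of (b - a) and (p - a); its sign tells which side of a-b point p is on.
static double cross(const P &a, const P &b, const P &p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// True if p lies inside (or on the edge of) triangle abc, for either winding order.
bool pointInTriangle(const P &p, const P &a, const P &b, const P &c)
{
    double d1 = cross(a, b, p);
    double d2 = cross(b, c, p);
    double d3 = cross(c, a, p);
    bool hasNeg = (d1 < 0) || (d2 < 0) || (d3 < 0);
    bool hasPos = (d1 > 0) || (d2 > 0) || (d3 > 0);
    return !(hasNeg && hasPos);   // all signs agree -> p is on the same side of every edge
}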