Can't set map padding in HERE Flutter SDK

In previous versions of the HERE SDK there was always a function to set up padding. It allowed overlapping the map with the app's UI while still centering objects such as polylines and polygons inside the visible part of the map.
I can't find a similar property in the Flutter SDK. Is there any alternative?
I checked setPrincipalPoint (on the camera), but it only changes the center point without limiting the camera bounds.

If you want to center objects in the map view, you can try these two options:
mapView.getCamera().getLimits().setTargetArea(route.getBoundingBox());
This will center the route polyline inside the map view.
The second option is to specify a rectangular area (with optional padding):
mapView.getCamera().lookAt(GeoBox target, GeoOrientationUpdate orientation, Rectangle2D viewRectangle)
This makes the camera look at the specified geodetic area; the passed rectangle specifies where that area should appear inside the map view.
The supplied orientation is the orientation of the camera looking at the target, so the resulting camera state will have the same orientation as the one passed to this method.

Related

Coin3D Crop the current View

In Coin3D, I have a scene set up, and the user can view the scene from various directions using X, Y and Z buttons that move the camera.
When one of these buttons is selected, I would like the viewer to basically crop the output: the camera should zoom in as close as possible so that the rendering is fully visible at the current viewing angle.
I have tried various things, including setting up Camera->heightAngle
and calling Camera->viewAll, but viewAll only makes sure the whole scene is visible in all three dimensions.
I need a better solution.
Please assist.
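For reference, a minimal sketch of the viewAll call in question (the function and variable names are placeholders); note that it fits the camera to the scene's 3D bounding box, which is why it leaves slack at some viewing angles:

#include <Inventor/SbViewportRegion.h>
#include <Inventor/nodes/SoPerspectiveCamera.h>
#include <Inventor/nodes/SoSeparator.h>

void zoomToScene(SoPerspectiveCamera *camera, SoSeparator *root,
                 const SbViewportRegion &viewport)
{
    // Repositions and zooms the camera so the whole scene graph is visible;
    // the fit is based on the scene's 3D bounding box, not the 2D projection.
    camera->viewAll(root, viewport);
}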

Best way to detect the window coordinates of a drawn line in c++ Builder

Using MoveTo and LineTo to draw various lines on a window canvas...
What is the simplest way to determine at run time whether an object, such as a bitmap or a picture control, is in "contact" (shares x,y coordinates) with a line that has been drawn with LineTo on the window canvas?
A simple example would be a ball (bitmap or picture) "contacting" a drawn border and rebounding... What is the easiest way to know if "contact" occurs between the object, picture or bitmap and any line that exists on the window?
If I get it right, you want collision detection/avoidance between a circular object and lines while moving. There are several options I know of...
Vector approach
You need to remember all the rendered stuff in vector form too, so you need a list of all rendered lines, objects, etc. Then, for a particular object, loop through all the other ones and check for collisions algebraically with vector math, e.g. first detecting intersections between bounding boxes and then testing against the particular line/polyline/polygon or whatever. A sketch of such a test follows.
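A minimal sketch of the algebraic test for the ball case, a circle against a line segment (the Vec2 type and the names are mine, not from any particular library):

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };

// Squared distance from point p to segment a-b.
float distSqToSegment(Vec2 p, Vec2 a, Vec2 b)
{
    float abx = b.x - a.x, aby = b.y - a.y;
    float apx = p.x - a.x, apy = p.y - a.y;
    float len2 = abx*abx + aby*aby;
    // Closest-point parameter on the infinite line, clamped to the segment.
    float t = (len2 > 0.0f) ? std::max(0.0f, std::min(1.0f, (apx*abx + apy*aby) / len2)) : 0.0f;
    float cx = a.x + t*abx - p.x, cy = a.y + t*aby - p.y;
    return cx*cx + cy*cy;
}

// Does a circle (center c, radius r) touch segment a-b?
bool circleHitsSegment(Vec2 c, float r, Vec2 a, Vec2 b)
{
    return distSqToSegment(c, a, b) <= r*r;
}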
Raster approach
This is simpler to implement and sometimes even faster, but less accurate (only pixel precision). The idea is to clear the object's last position with the background color. Then check all the pixels that would be rendered at the new position: if nothing other than the background color is present, no collision occurs and you can render the pixels. If any non-background color is present, render the object at its original position again, as a collision occurred.
You can also check between the old and new positions and place the object at the first non-colliding position, so you get closer to the edge...
This approach needs fast pixel access, otherwise it would be too slow. The standard Canvas does not allow this without using BitBlt from GDI. Luckily, VCL Graphics::TBitmap has a ScanLine[] property allowing direct pixel access without any performance hit if used right. See an example of it in your other question I answered:
bitmap rotate using direct pixel access
Accessing ScanLine[y][x] is as slow as Pixels[x][y], but you can store the pointers to each line of the bitmap once and then just use those instead, which is the same as accessing your own 2D array. So you really need just bitmap->Height calls of ScanLine[y] for entire-image rendering, repeated after any resize or assignment of the bitmap... A sketch of this caching follows.
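A minimal sketch of that pointer caching, assuming a C++ Builder VCL bitmap forced to 32-bit pixels (the function and variable names are illustrative):

#include <vcl.h>      // Graphics::TBitmap
#include <vector>

// Cache one scanline pointer per row; call again after any resize/assignment.
std::vector<unsigned int*> rows;
void cacheRows(Graphics::TBitmap *bmp)
{
    bmp->PixelFormat = pf32bit;                       // fixed 32-bit pixels
    rows.resize(bmp->Height);
    for (int y = 0; y < bmp->Height; y++)
        rows[y] = (unsigned int*)bmp->ScanLine[y];    // one ScanLine call per row
}

// After caching, pixel access is an ordinary 2D array lookup:
//   unsigned int color = rows[y][x];   // each pixel is one 32-bit 0xAARRGGBB value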
If you have a tile-based scene, you can use this approach on tiles instead of pixels, something like this:
What is the best way to move an object on the screen? (but it is in asm ...)
Field approach
This one is also considered a vector approach, but it does not require collision checks. Instead, each object creates a repulsive force that grows the closer you are to it, which is added to the Newton/D'Alembert physics driving force. With the coefficients set properly, it will avoid collisions on its own. This is also used for automatic placement of items, etc. For more info see:
How to implement a constraint solver for 2-D geometry?
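A minimal sketch of the repulsive-force idea (the coefficient k and all names are arbitrary): each obstacle pushes the object away with a force that grows as the distance shrinks, and the sum is fed into the usual force/acceleration integration.

#include <cmath>

struct Vec2 { float x, y; };

// Repulsive force acting on 'pos' from a single obstacle.
// k tunes the strength; the magnitude falls off with squared distance.
Vec2 repulsion(Vec2 pos, Vec2 obstacle, float k)
{
    float dx = pos.x - obstacle.x, dy = pos.y - obstacle.y;
    float d2 = dx*dx + dy*dy + 1e-6f;       // avoid division by zero
    float d  = std::sqrt(d2);
    float mag = k / d2;                     // stronger when closer
    return { mag*dx/d, mag*dy/d };          // directed away from the obstacle
}

// Per frame: force = driving force + sum of repulsions, then integrate:
//   acc = force / mass;  vel += acc*dt;  pos += vel*dt;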
Hybrid approach
You can combine any of the above approaches to better suit your needs. For example, see:
Path generation for non-intersecting disc movement on a plane

How to make a consistent clicked/Intersects/contains method in SFML?

I have some problems with the intersection functionality in SFML when I resize the window.
I know fairly well how to detect intersections or whether something is clicked, as long as the window is at its predefined size.
But when the window is resized, the global bounds of the shapes/sprites in SFML stay exactly the same while their presentation in the window changes.
So when I now click on something, the normal SFML contains method of an object may tell me that the mouse pointer is not inside it, even if it appears to be on the screen.
The only thing I have in mind is to keep a variable (e.g. an sf::Vector2f) that stores the current change of the window compared to the original size, and then use not the mouse position relative to the current window but the projected mouse position (multiplied by that change).
But this may not be the best solution, so I wonder whether I am missing something; therefore I am asking for advice on what to do here.
You can use the sf::RenderWindow::mapPixelToCoords method to find out the correct position of the mouse.
From the SFML documentation:
Convert a point from target coordinates to world coordinates.
This function finds the 2D position that matches the given pixel of the render target. In other words, it does the inverse of what the graphics card does, to find the initial position of a rendered pixel.
Initially, both coordinate systems (world units and target pixels) match perfectly. But if you define a custom view or resize your render target, this assertion is not true anymore, i.e. a point located at (10, 50) in your render target may map to the point (150, 75) in your 2D world – if the view is translated by (140, 25).
For render-windows, this function is typically used to find which point (or object) is located below the mouse cursor.
This version uses a custom view for calculations, see the other overload of the function if you want to use the current view of the render target.
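For example, a minimal sketch of a resize-safe hit test (the window and shape parameters are placeholders, not from the question):

#include <SFML/Graphics.hpp>

// True if the mouse is over the shape, regardless of window resizing.
bool isMouseOver(const sf::RenderWindow &window, const sf::Shape &shape)
{
    // Mouse position in pixels, relative to the window.
    sf::Vector2i pixel = sf::Mouse::getPosition(window);
    // Undo the view/resize mapping to get world coordinates.
    sf::Vector2f world = window.mapPixelToCoords(pixel);
    return shape.getGlobalBounds().contains(world);
}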

Applying a transformation to a set in Raphael.js

Using Raphael 2.0, I am trying to apply a transform to a set of objects in a way that is relative to all of the objects in the set. However, the effect I am getting is that the transform is applied to each item individually, irrespective of the other objects in the set.
For example: http://jsfiddle.net/tim_iles/VCca9/8/ - if you now uncomment the last line and run the code, each circle is scaled to 0.5x. The actual effect I am trying to achieve would be to scale the whole set of circles, so their relative distances are also scaled, which should put all four of them inside the bounding box of the white square.
Is there a way to achieve this using Raphael's built in tools?
When you scale, the first parameter is the X scale. If you provide no other parameters, it will use that value as the Y scale and will scale around the center of the object.
When you scaled the rectangle, it scaled around the center of the rectangle. If you want the circles to scale around that same point as well, rather than around their own centers, you should provide that point.
So the last line could be set.transform("s0.5,0.5,100,100"); (100,100 being the center of the rectangle you scaled).
At least, I think this is what you're asking for.

How to use and set axes in a 3D scene

I'm creating a simulator coded in Python and based on ODE (Open Dynamics Engine). For visualization I chose VTK.
For every object in the simulation, I create a corresponding source (e.g. vtkCubeSource), mapper and actor. I am able to show objects correctly and update them as the simulation runs.
I want to add axes to have a point of reference and to show the direction of each axis. Doing that I realized that, by default, X and Z are in the plane of the screen and Y points outwards. In my program I have a different convention.
I've been able to display axes in 2 ways:
1)
axes = vtk.vtkAxes()
axesMapper = vtk.vtkPolyDataMapper()
axesMapper.SetInputConnection(axes.GetOutputPort())
axesActor = vtk.vtkActor()
axesActor.SetMapper(axesMapper)
axesActor.GetProperty().SetLineWidth(4)
2) (the colors do not match those of the first case)
axesActor = vtk.vtkAxesActor()
axesActor.AxisLabelsOn()
axesActor.SetShaftTypeToCylinder()
axesActor.SetCylinderRadius(0.05)
In the second one, the user is allowed to set many parameters related to how the axes are displayed. In the first one, I only managed to set the line width, nothing else.
So, my questions are:
What is the correct way to define and display axes in a 3D scene? I just want them in a fixed position and orientation.
How can I set a different convention for the axes orientation, both for their display and the general visualization?
Well, if you do not mess with the objects' transformation matrices for display purposes, it is probably sufficient to just put your camera into a different position while using axes approach 2. The easy methods to adjust your camera position are Pitch(), Azimuth() and Roll().
If you do mess with object transforms, then apply the same transform to the axes.
Dženan Zukić kindly answered this question on the vtkusers@vtk.org mailing list:
http://www.vtk.org/pipermail/vtkusers/2011-November/119990.html
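For reference, a minimal sketch combining approach 2 with a re-oriented camera (C++ shown; the same calls exist in the Python bindings, and the positions/angles are placeholders for your own convention):

#include <vtkAxesActor.h>
#include <vtkCamera.h>
#include <vtkRenderer.h>
#include <vtkSmartPointer.h>

void setupAxesAndCamera(vtkRenderer *renderer)
{
    // Fixed axes triad (approach 2 above).
    auto axes = vtkSmartPointer<vtkAxesActor>::New();
    axes->AxisLabelsOn();
    axes->SetShaftTypeToCylinder();
    renderer->AddActor(axes);

    // Re-orient the camera instead of the objects: here Z points up.
    vtkCamera *cam = renderer->GetActiveCamera();
    cam->SetPosition(10.0, -10.0, 10.0);
    cam->SetFocalPoint(0.0, 0.0, 0.0);
    cam->SetViewUp(0.0, 0.0, 1.0);
    cam->Azimuth(30.0);   // fine adjustment around the view-up axis
}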