I have a CListCtrl with 100,000+ entries. The user is presented with a search box to search among these entries. On finding a match, I set that entry as the selection and scroll to it using EnsureVisible.
This scroll happens instantaneously. I wanted to try and code an animation that looks similar to the ones demoed here (especially the 'Go Top - Easing 2' animation).
I'm thinking, for a basic animation:
1. Get the current selection.
2. Get the target selection.
3. Compute the difference.
4. Get the pixel height of one item.
5. Multiply the results of steps 3 and 4.
6. Scroll by an increment of 1 (or some other more optimal value) with a delay, until the total scrolled equals the result of step 5.
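In rough MFC terms, something like this sketch is what I have in mind (the names, step size, and delay are illustrative only, and Sleep in the UI thread is admittedly crude):

void SmoothScrollTo(CListCtrl& list, int nTarget)   // nTarget: step 2
{
    int nCurrent = list.GetNextItem(-1, LVNI_SELECTED);   // step 1
    if (nCurrent < 0)
        nCurrent = list.GetTopIndex();
    int nDiff = nTarget - nCurrent;                       // step 3

    CRect rc;
    list.GetItemRect(0, &rc, LVIR_BOUNDS);                // step 4: pixel height of one item
    int nItemHeight = rc.Height();

    int nTotalPixels = nDiff * nItemHeight;               // step 5
    int nStep = (nTotalPixels >= 0) ? 1 : -1;

    for (int nScrolled = 0; nScrolled != nTotalPixels; nScrolled += nStep)
    {
        list.Scroll(CSize(0, nStep));                     // step 6: scroll 1 pixel at a time
        Sleep(1);                                         // crude delay; a timer would be nicer
    }
}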
I tried this and I got incredibly confused. Firstly, is my algorithm okay? Secondly, is there another, better way to achieve this (preferably similar to animation 2 in the link above)?
Your algorithm seems ok for a simple linear scroll. However your link points to scrolls using various easing functions.
Easing functions do not scroll by the same amount each time, but increase or decrease in order to look like they're speeding up or slowing down.
A common way to work out easing values is to use the result of a sine. If you picture a sine wave and imagine that you can only see one pixel of it at a time, as the wave progresses, the pixel will "ease" at the extremes and accelerate through the middle values.
Your Easing 2 animation just adds a bit of bounce at the start and end; this is easily achievable by using a bit of the sine wave past the extremes at each end, e.g.
      _
     / \
    /
\_/
If you want some code, I answered a similar question here in C#.
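For a C++ flavour, here is a hedged sketch of such an easing function; the overshoot amount d is an assumption. The trick is to sample the sine slightly past -pi/2 and +pi/2 and renormalize so that t = 0 and t = 1 land exactly on 0 and 1, which pushes the sine's true extremes just outside [0, 1] - that's the bounce:

#include <cmath>

// t runs from 0 to 1 over the animation; the return value is the fraction
// of the total scroll distance (it dips slightly below 0 near the start
// and rises slightly above 1 near the end, then settles).
double EaseWithBounce(double t)
{
    const double PI = 3.14159265358979323846;
    const double d  = 0.35;                    // how far past the extremes to sample
    double a0 = -PI / 2.0 - d;                 // start angle
    double a1 =  PI / 2.0 + d;                 // end angle
    double s  = std::sin(a0 + t * (a1 - a0));  // raw sine value
    double s0 = std::sin(a0);
    double s1 = std::sin(a1);
    return (s - s0) / (s1 - s0);               // renormalize endpoints to 0 and 1
}

On each tick you would then scroll to start + EaseWithBounce(t) * totalPixels, rather than incrementing the position by a fixed amount.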
I've a sprite sheet containing a set of icons as shown here:
I'd like to get the bounding box (at pixel precision) of all icons inside it; some cases, like the list and grid icons, have to be considered as only one icon. Any ideas are more than welcome.
I think the main issue in your problem is that some icons contain disjoint parts.
If all the icons were in only one part, you could just find the "connected components" (groups of white pixels) in your image and isolate them.
I don't know your level in image processing, but to connect the parts of one icon, I would probably use dilation, which is a morphological method to expand (under constraints) the areas of maximum intensity in an image.
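A hedged sketch of that pipeline using OpenCV in C++ (the file name, thresholding, and kernel size are assumptions; note that the dilated components are padded by roughly the kernel radius, so for pixel precision you would intersect each component with the original mask before taking the box):

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("icons.png", cv::IMREAD_GRAYSCALE);
    cv::Mat bin;
    cv::threshold(img, bin, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);

    // Dilation bridges the small gaps between disjoint parts of one icon.
    cv::Mat kernel = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(9, 9));
    cv::Mat merged;
    cv::dilate(bin, merged, kernel);

    // Connected components on the dilated image; `stats` holds the boxes.
    cv::Mat labels, stats, centroids;
    int n = cv::connectedComponentsWithStats(merged, labels, stats, centroids);
    for (int i = 1; i < n; ++i)   // label 0 is the background
    {
        cv::Rect box(stats.at<int>(i, cv::CC_STAT_LEFT),
                     stats.at<int>(i, cv::CC_STAT_TOP),
                     stats.at<int>(i, cv::CC_STAT_WIDTH),
                     stats.at<int>(i, cv::CC_STAT_HEIGHT));
        cv::rectangle(img, box, cv::Scalar(255));
    }
    cv::imwrite("boxes.png", img);
    return 0;
}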
If you need any clarification, please let me know!
In general, it is not possible: only humans have enough context to determine which of the disjoint parts belong together. You can approximate it in various ways, but it's a lost cause, and IMHO completely unnecessary. Imagine writing a test for this functionality: it's impossible, it requires a human in the loop, since the results for any particular icon sheet don't generalize. Knowing that the algorithm works for some sheet tells you nothing about whether it will work for some other sheet that you know nothing about a priori.
It'd be simpler to manually colorize each sprite to have a color different than that of its neighbors. Then a greedy algorithm could find the bounding boxes easily without having to approximate anything.
I'm looking for a suitable algorithm to interpolate and smooth GPS coordinates logged (to file) at 1 Hz up to 60 Hz.
While I've found a couple of interpolation algorithms, I couldn't locate a suitable smoothing algorithm which handles interpolation as well.
ALGLIB sounds good for interpolation, but what about smoothing?
Since GPS coordinates are already heavily Kalman filtered, I would only apply a linear interpolation between two coordinates.
Smoothing makes the positions wrong. When the device moves, coordinates are already smooth. There is usually no need to smooth further.
If you have problems when the device is standing still, then remove those positions.
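For illustration, a minimal sketch of that linear interpolation in C++ (the struct and field names are assumptions; 60 is the upsampling factor from 1 Hz to 60 Hz):

#include <vector>

struct Fix { double lat, lon; };

// Upsample 1 Hz fixes by inserting linearly interpolated positions
// between each pair of consecutive coordinates.
std::vector<Fix> Upsample(const std::vector<Fix>& fixes, int factor = 60)
{
    std::vector<Fix> out;
    for (size_t i = 0; i + 1 < fixes.size(); ++i)
    {
        for (int k = 0; k < factor; ++k)
        {
            double t = static_cast<double>(k) / factor;  // 0 .. <1 between fixes
            out.push_back({ fixes[i].lat + t * (fixes[i + 1].lat - fixes[i].lat),
                            fixes[i].lon + t * (fixes[i + 1].lon - fixes[i].lon) });
        }
    }
    if (!fixes.empty())
        out.push_back(fixes.back());  // keep the final fix
    return out;
}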
Consider using a running average filter to smooth the data. Set the filter window to 0.5-1 s; the current position is at the center of the window. The delay will be half the window size.
Depending on the implementation, you will lose the first and last half window (which would not be a problem).
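A hedged sketch of such a centered running average in C++ (the window size N is an assumption; at 60 Hz, a 0.5-1 s window is roughly N = 31-61, odd so the current sample sits at the center):

#include <algorithm>
#include <vector>

// Centered running average. The window is clamped at both ends of the data,
// which is where the half-window of delay / edge effects comes from.
std::vector<double> RunningAverage(const std::vector<double>& x, int N)
{
    std::vector<double> out(x.size());
    size_t half = static_cast<size_t>(N) / 2;
    for (size_t i = 0; i < x.size(); ++i)
    {
        size_t lo = (i >= half) ? i - half : 0;
        size_t hi = std::min(x.size() - 1, i + half);
        double sum = 0.0;
        for (size_t j = lo; j <= hi; ++j)
            sum += x[j];
        out[i] = sum / static_cast<double>(hi - lo + 1);
    }
    return out;
}

You would run this separately over the latitude and longitude series (or, better, over locally projected x/y).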
I just want to do a simple animation (for example in C++ using OpenGL) of some moving object - let's say simple horizontal movement of a square from left to right.
In OpenGL, I can use the "double-buffering" method, and let's say a user (running my application with the animation) has turned "vertical sync" on - so I can call some function every time the monitor refreshes itself (I can achieve that for example using the Qt toolkit and its function "swapBuffers").
So, I think, the "smoothest" animation that I can achieve is to "move the square by for example 1 pixel (can be other values) every time the monitor refreshes", so at each "frame" the square is 1 pixel further - "I HAVE TESTED THIS, AND IT SURELY WORKS SMOOTHLY".
But the problem arises when I want to have a "separate" thread for "game logic" (moving the square by 1 pixel to the right) and for "animation" (displaying the current position of the square on the screen). Let's say the game logic thread is a while loop where I move the square by 1 pixel and then "sleep" the thread for some time, for example 10 milliseconds, while my monitor refreshes for example every 16 milliseconds. The movement of the square "won't be 100% smooth", because sometimes the monitor will refresh twice while the square has moved by only 1 pixel and not by 2 pixels (because there are two "different" frequencies, of the monitor and of the game logic thread) - and the movement will look "a little jerky".
So, logically, I could stay with the first, super smooth method, but it cannot be used in, for example, "multiplayer" ("server-client") games, because different computers have different monitor frequencies (so I should use different threads for game logic (on the server) and for animation (on the clients)).
So my question is:
Is there some method, using different threads for game logic and animation, which achieves "100% smooth" animation of a moving object? If one exists, please describe it here. Or, if I just had some "more complex scene to render", would I simply not see that "little jerky movement" which I see now, when I move a simple square horizontally and deeply concentrate on it? :)
Well, this is actually typical separate game-loop behavior. You manage all your physics (movement) related actions in one thread, letting the render thread do its work. This is actually desirable.
Don't forget the point of this way of implementing the game loop is to have the maximum available frame rate while preserving a constant physics speed. At higher FPS you cannot see this effect at all, unless there is some other code-related problem - some hooking between frame rate and physics, for example.
If you want to achieve what you describe as perfect smoothness, you could synchronize your physics engine with VSync. Simply do all your physics BEFORE the refresh kicks in, then wait for the next one.
But this all applies to constant-speed objects. If you have an object with dynamic speed, you can never know when to draw it to be "in sync". The same problem arises when you want multiple objects with different constant speeds.
Also, this is NOT what you want in complex scenes. The whole idea of V-sync is to limit the screen tearing effect. You should definitely NOT hook your physics or rendering code to the display refresh rate. You want your physics code to run independently of the user's display refresh rate. This can be a REAL pain in multiplayer games, for example. For a start, look at this page: How A Game Loop Works
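For reference, a hedged sketch of the fixed-timestep pattern that page describes (the step size and the UpdatePhysics/Render calls are placeholders, not a real API):

#include <chrono>

void GameLoop()
{
    using clock = std::chrono::steady_clock;
    const double dt = 1.0 / 120.0;   // fixed physics step, independent of refresh rate
    double accumulator = 0.0;
    auto previous = clock::now();
    bool running = true;

    while (running)
    {
        auto now = clock::now();
        accumulator += std::chrono::duration<double>(now - previous).count();
        previous = now;

        while (accumulator >= dt)     // catch up in fixed physics steps
        {
            // UpdatePhysics(dt);     // e.g. move the square by speed * dt
            accumulator -= dt;
        }

        double alpha = accumulator / dt;  // 0..1: how far we are into the next step
        (void)alpha;                      // silence unused warning in this sketch
        // Render(alpha);             // draw positions blended between the previous
        //                            // and current physics states to hide the mismatch
    }
}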
EDIT:
I'd say your vision of perfect smoothness is unrealistic. You can mask it using the techniques Kevin wrote about, but you will always struggle with HW limits such as refresh rate or display pixelation. For example, you have a window of 640x480 px. Now, you want your object to move diagonally. You can move your object by a vector heading towards the bottom-right corner, BUT you must increment the object's coordinates by a float number (640/480). But in rendering, you go to integers. So your object moves jagged. No way around this. At low speed, you can notice it. You can blur it, or make it move faster, but never get rid of it...
Allow your object to move by fractions of a pixel. In OpenGL, this can be done for your example of a square by drawing the square onto a texture (i.e. with a one-pixel or larger border), rather than letting it be just the polygon edge. If you are rendering 2D sprite graphics, then you get this pretty much automatically (but if you have 1:1 pixel art it will be blurred/sharp/blurred as it crosses pixel boundaries). A sketch of this appears after this list.
Smooth (antialias) the polygon edge (GL_POLYGON_SMOOTH). The problem with this technique is that it does not work with Z-buffer-based rendering since it causes transparency, but if you are doing a 2D scene you can make sure to always draw back-to-front.
Enable multisample/supersample antialiasing, which is more expensive but doesn't have the above problem.
Make your object have a sufficiently animated appearance that the pixel shifts aren't easy to notice because there's much more going on at that edge (i.e. it is itself moving in place at much more than 1 pixel/frame).
Make your game sufficiently complex and engrossing that players are distracted from looking at the pixels. :)
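A minimal sketch of the first technique, fractional-pixel positioning, in legacy fixed-function OpenGL (the smoothing itself then comes from one of the techniques above, such as multisampling or a textured quad):

#include <GL/gl.h>

// Draw the square at a floating-point position; x and y may be fractional
// (e.g. 10.25), so the square can sit between pixel centers.
void DrawSquare(float x, float y, float size)
{
    glPushMatrix();
    glTranslatef(x, y, 0.0f);
    glBegin(GL_QUADS);
    glVertex2f(0.0f, 0.0f);
    glVertex2f(size, 0.0f);
    glVertex2f(size, size);
    glVertex2f(0.0f, size);
    glEnd();
    glPopMatrix();
}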
I am trying to build a graph that will change resolution depending on how far you are zoomed in. Here is what it looks like when you are completely zoomed out.
This looks good, so when I zoom in I get higher-resolution data and my graph looks like this:
The problem is when I zoom out the higher resolution data does not get cleared out of the graph:
The tables below the graphs display what is in the DataTable. This is what the drawing code looks like.
var g_graph = new google.visualization.AnnotatedTimeLine(document.getElementById('graph_div_json'));
var table = new google.visualization.Table(document.getElementById('table_div_json'));

function handleQueryResponse(response) {
    log("Drawing graph");
    var data = response.getDataTable();
    g_graph.draw(data, {allowRedraw: true, thickness: 2, fill: 50, scaleType: 'maximized'});
    table.draw(data, {allowRedraw: true});
}
I am trying to find a way for it to display only the data that is in the DataTable. I have tried removing the allowRedraw flag, but then it breaks the zooming operation.
Any help would be greatly appreciated.
Thanks
See also: Annotated TimeLine when zoomed-out, Too Many Datapoints.
You can remove the allowRedraw flag. In that case you have to put two data points into your data table manually: the latest date in the actual whole data, and the most outdated date in the actual whole data. This will retain your zooming operation.
I think you have already seen that removing the allowRedraw flag works, but with a small problem: flickering of the whole chart.
It seems to me that the best solution would be to draw every nth data point, depending on your level of zoom. On the Google Finance graph(s), the zoom levels are pre-determined at the top: 1m, 5m, 1h, 1 day, 5 days, etc. It seems evident that this is exactly what Google is doing. At the max view level, they're plotting points that fall on the month. If you're polling 1000 times a day (with each poll generating a single point), then you'd be taking every 30,000th point (the first point being the very first one of the month, and the 30,000th one being the last point).
Each of these zoom levels would implement a different plot of the data points. Each point should have a time stamp with accuracy to the second, so you'll easily be able to scale the plot based on the level of detail.
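The chart code in the question is JavaScript, but the decimation itself is language-agnostic; a hedged C++ sketch of the idea (the Point struct and its fields are illustrative):

#include <vector>

struct Point { long long timestampSec; double value; };

// Keep every nth point for the current zoom level; always keep the last
// point so the plotted range still spans the whole data set.
std::vector<Point> Downsample(const std::vector<Point>& pts, size_t n)
{
    std::vector<Point> out;
    for (size_t i = 0; i < pts.size(); i += n)
        out.push_back(pts[i]);
    if (!pts.empty() && (pts.size() - 1) % n != 0)
        out.push_back(pts.back());
    return out;
}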
For my Operating Systems class I'm going to write a scheduling simulator entitled "Jurassic Park".
The ultimate goal is for me to have a series of cars following a set path and passengers waiting in line at a set location for those cars to return to so they can be picked up and be taken on the tour. This will be a simple 2d, top-down view of the track and the cars moving along it.
While I can code this easily without having to visually display anything, I'm not quite sure what the best way would be to implement a car moving along a fixed track.
To start out, I'm going to simply use OpenGL to draw my cars as rectangles but I'm still a little confused about how to approach updating the car's position and ensuring it is moving along the set path for the simulated theme park.
Should I store vertices of the track in a list and have each call to update() move the cars a step closer to the next vertex?
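In rough terms, something like this sketch is what I mean (the struct names and step size are illustrative only):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct Car
{
    Vec2   pos;
    size_t next = 0;   // index of the track vertex we are heading for

    // Move `step` units toward the next vertex; once we reach it, advance
    // to the following vertex, wrapping around the closed loop.
    void update(const std::vector<Vec2>& track, float step)
    {
        const Vec2& target = track[next];
        float dx = target.x - pos.x;
        float dy = target.y - pos.y;
        float dist = std::sqrt(dx * dx + dy * dy);
        if (dist <= step)
        {
            pos  = target;
            next = (next + 1) % track.size();
        }
        else
        {
            pos.x += step * dx / dist;
            pos.y += step * dy / dist;
        }
    }
};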
If you want curved track, you can use splines, which are mathematically defined curves specified by two vector endpoints. You plop down the endpoints, and then solve for a nice curve between them. A search should reveal source code or math that you can derive into source code. The nice thing about this is that you can solve for the heading of your vehicle exactly, as well as get the next location on your path by doing a percentage calculation. The difficult thing is that you have to do a curve length calculation if you don't want the same number of steps between each set of endpoints.
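To make the spline option concrete, here is a hedged sketch of one common choice, a Catmull-Rom segment (p1 and p2 are the endpoints the car travels between, p0 and p3 are their neighbors on the track, and t in [0,1] is the percentage along the segment):

struct Vec2 { float x, y; };

Vec2 CatmullRom(Vec2 p0, Vec2 p1, Vec2 p2, Vec2 p3, float t)
{
    float t2 = t * t;
    float t3 = t2 * t;
    // Standard Catmull-Rom basis, applied per axis.
    auto axis = [&](float a, float b, float c, float d)
    {
        return 0.5f * (2.0f * b
                     + (-a + c) * t
                     + (2.0f * a - 5.0f * b + 4.0f * c - d) * t2
                     + (-a + 3.0f * b - 3.0f * c + d) * t3);
    };
    return { axis(p0.x, p1.x, p2.x, p3.x), axis(p0.y, p1.y, p2.y, p3.y) };
}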
An alternate approach is to use a hidden bitmap with the path drawn on it as a single pixel wide curve. You can find the next location in the path by matching the pixels surrounding your current location to a direction-of-travel vector, and then updating the vector with a delta function at each step. We used this approach for a path traveling prototype where a "vehicle" was being "driven" along various paths using a joystick, and it works okay until you have some intersections that confuse your vector calculations. But if it's a unidirectional closed loop, this would work just fine, and it's dead simple to implement. You can smooth out the heading angle of your vehicle by averaging the last few deltas. Also, each pixel becomes one "step", so your velocity control is easy.
In the former case, you can have specially tagged endpoints for start/stop locations or points of interest. In the latter, just use a different color pixel on the path for special nodes. In either case, what you display will probably not be the underlying path data, but some prettied up representation of your "park".
Just pick whatever is easiest, and write a tick() function that steps to the next path location and updates your vehicle heading whenever the car is in motion. If you're really clever, you can do some radius based collision handling so that cars will automatically stop when a car in front of them on the track has halted.
I would keep it simple:
Run a timer (every 100 msec), and on each tick draw each of the cars at its new location. The location is read from a file, which contains the 2D coordinates of the car (each car?).
If you design the road to be very long (let's say, 30 seconds), writing 30*10 points would be... hard. So how about storing in the file the location at every full second? Then between those two stored points you will have 9 blind spots; just move the car at constant speed (x += dx/9, y += dy/9).
I would like to hear a better approach :)
Well, you could use some path as you describe, either a fixed-point path or a spline, and then move at a fixed 'velocity' along this path. This may look stiff if the car moves at the same speed on the straights as when cornering.
So you could then have speeds for each path section, but you would need many speed set points, or blend the speeds; otherwise you'll get jerky speed changes.
Or you could go for a full car simulation, and use A* to build the optimal path. That's overkill, but very cool.
If the car can only go forward and backward, and you know that you want to go forward, you could just look at the cells around you, find the ones that are the color of the road, and move so you stay in the center of the road.
If you assume that you won't have abrupt curves, then you can assume that the road is directly in front of you and just scan to the left and right to see if the road curves a bit, staying in the center, to cut down on processing.
There are other approaches that could work, but this one is simple, IMO, and allows you to have gentle curves in your road.
Another approach is just to have it be tile-based, so you just look at the tile before you, and have different tiles for changes in road direction, and so you know how to turn the car to stay on the tile.
This wouldn't be as smooth but is also easy to do.