I have decided to rewrite an old Zatacka clone of mine. The old version, running under Allegro 4, uses a logic bitmap, that is, a bitmap kept for non-display purposes. It directly mirrors the visible "what's on the screen and doesn't move" bitmap, but the integers stored in it represent the logical meaning of things on screen, because the game got quite colorful. The things that players see may be of any possible color, but the game just remembers what kind of object each pixel represents.
The new clone is not supposed to use Allegro, so I could write the logic bitmap code myself. That said, I would appreciate it if someone suggested some more efficient and precise alternatives.
The structure must be able to be kept in sync with the bitmap/texture visible to players. Think of a Worms-style game, but using ground type variations that are invisible to the player, or something like that. In addition, the following methods must be implemented:
Checking if all pixels in a circle belong to a small (~6) set of "colors" given as a parameter.
Painting all pixels in a circle with a single "color".
Painting all pixels in a circle, except/only those in a small set of "colors" provided as a parameter, with a single "color".
Painting the silhouette of a rotated bitmap (preprocessed, if you wish) with a single "color". (That's the tricky one: would interpreting the bitmap as a crude polygon with loads of right angles do the job?)
This is the minimum. If your structure supports shapes other than circles, that's great.
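Below is a minimal sketch of how such a logic bitmap could look, assuming 8-bit logic "colors" and a fixed-size playfield; the names (LogicMap, circleIs, paintCircle) are placeholders of my own, not anything from the original code:

```cpp
#include <cstdint>
#include <vector>
#include <initializer_list>

class LogicMap {
public:
    LogicMap(int w, int h) : w_(w), h_(h), data_(w * h, 0) {}

    // Check whether every pixel inside the circle belongs to the given set of logic colors.
    bool circleIs(int cx, int cy, int r, std::initializer_list<uint8_t> set) const {
        for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x) {
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) > r * r) continue;
                uint8_t v = at(x, y);
                bool found = false;
                for (uint8_t s : set) if (s == v) { found = true; break; }
                if (!found) return false;
            }
        return true;
    }

    // Paint every pixel inside the circle with a single logic color.
    void paintCircle(int cx, int cy, int r, uint8_t color) {
        for (int y = cy - r; y <= cy + r; ++y)
            for (int x = cx - r; x <= cx + r; ++x)
                if ((x - cx) * (x - cx) + (y - cy) * (y - cy) <= r * r)
                    set(x, y, color);
    }

private:
    uint8_t at(int x, int y) const {
        if (x < 0 || y < 0 || x >= w_ || y >= h_) return 0;  // treat off-map as "empty"
        return data_[y * w_ + x];
    }
    void set(int x, int y, uint8_t v) {
        if (x < 0 || y < 0 || x >= w_ || y >= h_) return;
        data_[y * w_ + x] = v;
    }
    int w_, h_;
    std::vector<uint8_t> data_;
};
```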
I want to be able to register/de-register objects to a list and check if the mouse is hovering over them to display tool-tips. However, I'm stumbling at the beginning.
I'm going to use: al_get_pixel & al_unmap_rgba to retrieve the alpha of each pixel and decide if it's visible enough to count as a hit when hovering over it with the mouse.
The major problem I'm having is working out how best to store this individual "hitmap" and the reference to the object that generated it, as many different types of objects (as structs) of different sizes may need hitmaps generated. I was hoping I could do something similar to checking if the complex object 'extends' the base object, but I don't see how I can achieve this in C++.
PS: I know I could create an array the size of the screen for each object, but I'm doing this mainly with the purpose of maximizing efficiency. I'd make dynamically sized arrays but...
al_get_pixel will work, but it will be terribly slow, even if you lock all your bitmaps, unless you use something like a picking buffer. The basic idea is to render every interactive area on each object with a different color id. This means you need to draw your scene twice, once normally, and once with picking colors. Then when you need to read back a mouse position, you can use the picking buffer to read a single pixel and get its color id.
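Roughly, in Allegro 5 terms, a picking buffer could be sketched like this; only the al_* calls are real Allegro functions, while the PickBuffer type and the draw_shape callback are my own scaffolding for however you render each object's interactive silhouette:

```cpp
#include <allegro5/allegro.h>
#include <functional>

struct PickBuffer {
    ALLEGRO_BITMAP *bmp;

    PickBuffer(int w, int h) : bmp(al_create_bitmap(w, h)) {}
    ~PickBuffer() { al_destroy_bitmap(bmp); }

    // Redraw the buffer whenever the scene changes.
    void render(int count, const std::function<void(int, ALLEGRO_COLOR)> &draw_shape) {
        ALLEGRO_BITMAP *old = al_get_target_bitmap();
        al_set_target_bitmap(bmp);
        al_clear_to_color(al_map_rgb(0, 0, 0));              // 0 = no object here
        for (int id = 0; id < count; ++id) {
            // Encode id+1 in the red/green channels (supports up to 65535 objects).
            draw_shape(id, al_map_rgb((id + 1) & 0xFF, ((id + 1) >> 8) & 0xFF, 0));
        }
        al_set_target_bitmap(old);
    }

    // Returns the object index under the mouse, or -1 if none.
    int pick(int mx, int my) const {
        unsigned char r, g, b;
        al_unmap_rgb(al_get_pixel(bmp, mx, my), &r, &g, &b);
        return (r | (g << 8)) - 1;
    }
};
```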
You might also want to try different methods of collision detection, such as bounding boxes, bounding circles, or other shapes that are easy to test for collision.
There is a third option, which is pixel perfect collision. It involves making 1bpp masks out of all your objects and then checking for collision between those.
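For the 1bpp-mask route, a minimal sketch could look like this; the Mask layout (one bool per pixel of the object's bounding box) is my own assumption:

```cpp
#include <vector>
#include <algorithm>

struct Mask {
    int w, h;
    std::vector<bool> bits;   // true = opaque pixel
    bool at(int x, int y) const { return bits[y * w + x]; }
};

// Do two masks, placed at (ax,ay) and (bx,by), overlap on any opaque pixel?
bool masks_collide(const Mask &a, int ax, int ay, const Mask &b, int bx, int by) {
    int left   = std::max(ax, bx);
    int top    = std::max(ay, by);
    int right  = std::min(ax + a.w, bx + b.w);
    int bottom = std::min(ay + a.h, by + b.h);
    for (int y = top; y < bottom; ++y)
        for (int x = left; x < right; ++x)
            if (a.at(x - ax, y - ay) && b.at(x - bx, y - by))
                return true;
    return false;
}
```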
Using moveto and lineto to draw various lines on a window canvas...
What is the simplest way to determine at run-time if an object, like a bitmap or a picture control, is in "contact" (same x,y coordinates) with a line (or lines) that has been drawn with lineto on a window canvas?
A simple example would be a ball (bitmap or picture) "contacting" a drawn border and rebounding... What is the easiest way to know if "contact" occurs between the object, picture or bitmap and any line that exists on the window?
If I get it right, you want collision detection/avoidance between a circular object and line(s) while moving. There are several options to do this that I know of...
Vector approach
You need to remember all the rendered stuff in vector form too, so you need a list of all rendered lines, objects, etc. Then, for a particular object, loop through all the other ones and check for collision algebraically with vector math, like detecting intersection between bounding boxes first and then against the particular line/polyline/polygon or whatever. A small test for the ball-versus-line case is sketched below.
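The ball-versus-line test could look like this (the types and names are my own); it clamps the projection of the circle center onto the segment and compares the distance to the radius:

```cpp
#include <algorithm>
#include <cmath>

struct Vec2 { double x, y; };

// Does a circle with center c and radius r touch the segment a-b?
bool circle_hits_segment(Vec2 c, double r, Vec2 a, Vec2 b) {
    double abx = b.x - a.x, aby = b.y - a.y;
    double acx = c.x - a.x, acy = c.y - a.y;
    double len2 = abx * abx + aby * aby;
    double t = len2 > 0.0 ? std::clamp((acx * abx + acy * aby) / len2, 0.0, 1.0) : 0.0;
    double dx = acx - t * abx, dy = acy - t * aby;   // closest point on segment to center
    return dx * dx + dy * dy <= r * r;
}
```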
Raster approach
This is simpler to implement and sometimes even faster, but less accurate (only pixel precision). The idea is to clear the object's last position with the background color. Then check all the pixels that would be rendered at the new position: if nothing other than the background color is present, no collision occurs and you can render the pixels. If any non-background color is present, render the object at the original position again, as a collision occurred.
You can also check positions between the old and the new one and place the object at the first non-colliding position, so you end up closer to the edge...
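A sketch of that idea, assuming generic get_pixel/put_pixel helpers over whatever back buffer you use; the shape-as-pixel-list representation is my own simplification:

```cpp
#include <vector>

struct Px { int x, y; };

// Returns true (and draws) if the object fits at its new position,
// false (and redraws it at the old position) if it would collide.
bool try_move(const std::vector<Px> &shape,            // pixels relative to the object origin
              int old_x, int old_y, int new_x, int new_y,
              unsigned background,
              unsigned (*get_pixel)(int, int),
              void (*put_pixel)(int, int, unsigned),
              unsigned object_color) {
    for (const Px &p : shape) put_pixel(old_x + p.x, old_y + p.y, background); // erase old
    bool collides = false;
    for (const Px &p : shape)
        if (get_pixel(new_x + p.x, new_y + p.y) != background) { collides = true; break; }
    int dx = collides ? old_x : new_x, dy = collides ? old_y : new_y;
    for (const Px &p : shape) put_pixel(dx + p.x, dy + p.y, object_color);     // redraw
    return !collides;
}
```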
This approach needs fast pixel access, otherwise it would be too slow. The standard Canvas does not allow this without using BitBlt from GDI. Luckily the VCL Graphics::TBitmap has a ScanLine[] property allowing direct pixel access without any performance hit if used right. See an example of it in your other question I answered:
bitmap rotate using direct pixel access
Accessing ScanLine[y][x] is as slow as Pixels[x][y], but you can store the pointers to each line of the bitmap once and then just use those instead, which is the same as accessing your own 2D array. So you really need just bitmap->Height calls of ScanLine[y] for entire image rendering, repeated after any resize or assignment of the bitmap...
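A hedged C++ Builder (VCL) sketch of that caching, assuming the bitmap is set to pf32bit; the helper names are my own:

```cpp
#include <vcl.h>
#include <vector>
#include <cstdint>

std::vector<uint32_t*> rows;           // one cached pointer per bitmap line

void cache_rows(Graphics::TBitmap *bmp) {
    bmp->PixelFormat = pf32bit;
    rows.resize(bmp->Height);
    for (int y = 0; y < bmp->Height; ++y)          // the only ScanLine calls needed
        rows[y] = static_cast<uint32_t*>(bmp->ScanLine[y]);
}

inline uint32_t get_pixel(int x, int y)             { return rows[y][x]; }
inline void     put_pixel(int x, int y, uint32_t c) { rows[y][x] = c; }
```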
If you have a tile-based scene you can use this approach on tiles instead of pixels, something like this:
What is the best way to move an object on the screen? but it is in asm ...
Field approach
This one is also considered a vector approach but does not require collision checks. Instead, each object creates a repulsive force, the bigger the closer you are to it, which is added to the Newton/D'Alembert physics driving force. With the coefficients set properly it will avoid collisions on its own. This is also used for automatic placement of items, etc. For more info see:
How to implement a constraint solver for 2-D geometry?
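A tiny sketch of the repulsive part, assuming point-like obstacles and a hand-tuned coefficient k; the result would be added to your ordinary driving force before integrating the motion:

```cpp
#include <vector>
#include <cmath>

struct V2 { double x, y; };

// Sum of repulsive forces pushing pos away from every obstacle, magnitude ~ k / d^2.
V2 repulsion(V2 pos, const std::vector<V2> &obstacles, double k) {
    V2 f{0.0, 0.0};
    for (const V2 &o : obstacles) {
        double dx = pos.x - o.x, dy = pos.y - o.y;
        double d2 = dx * dx + dy * dy + 1e-9;      // avoid division by zero
        double d  = std::sqrt(d2);
        double s  = k / d2;
        f.x += s * dx / d;
        f.y += s * dy / d;
    }
    return f;
}
```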
Hybrid approach
You can combine any of the above approaches together to better suit your needs. For example see:
Path generation for non-intersecting disc movement on a plane
I am writing an app using QPainter and I need an analog of cairo_push_group in the QPainter class to draw, say, a rectangle with a bunch of holes in it, which may intersect.
The problem is that when I draw the holes with the "clear" composition mode, everything is cleared underneath the holes I draw; I want the image that was underneath the hole before I started drawing my complex shape to stay. In other words, everything underneath the hole is cleared, when I just want everything underneath the hole to be seen through.
One solution seems to be using QPainterPath with the odd-even fill option (the default one), but that does not suit me, as in my app the holes may intersect, and this way two holes won't combine (the intersection of two holes is not a hole).
One more solution is to just use the QPainterPath::subtracted method, but for some reason it reduces the quality of the polygons (circles become shapes with a countable number of sides, for example).
The other solution is to save the QImage I am drawing on to a temporary QImage, clear it, draw everything that I need and then using the "destination over" mode draw it again, but that seems to be a very slow and memory-consuming solution.
Is there any other solution to this problem? Maybe there IS an analog of the cairo_push_group function in Qt?
Please don't advise me to switch to cairo.
Pictures explain the problem better:
I've found the answer myself.
One way to do it is to still use the QPainterPath += and -= operators (which are identical to the QPainterPath::united and QPainterPath::subtracted methods), but without any Bezier curves. I've replaced all the arcs, circles, etc. with "polylines" (for example, every circle is replaced with a 100-sided polygon). You can achieve any quality you need just by changing the number of sides, etc. A sketch of this workaround follows.
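Roughly like this; the helper names are my own, and the point is that only straight edges are involved, so intersecting holes merge cleanly:

```cpp
#include <QPainterPath>
#include <QPolygonF>
#include <QRectF>
#include <QtMath>

// Approximate a circle with an n-sided polygon path.
QPainterPath circlePath(QPointF center, qreal radius, int sides = 100) {
    QPolygonF poly;
    for (int i = 0; i < sides; ++i) {
        qreal a = 2.0 * M_PI * i / sides;
        poly << QPointF(center.x() + radius * qCos(a), center.y() + radius * qSin(a));
    }
    QPainterPath path;
    path.addPolygon(poly);
    path.closeSubpath();
    return path;
}

// A rectangle with two (possibly overlapping) circular holes.
QPainterPath shapeWithHoles(const QRectF &rect) {
    QPainterPath shape;
    shape.addRect(rect);
    QPainterPath holes = circlePath(rect.center(), 20).united(
                         circlePath(rect.center() + QPointF(15, 0), 20));
    return shape.subtracted(holes);
}
```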
The other solution, with the temporary QImage, turns out to be not too slow and works just fine. This is also the way it is done in cairo. Just create a QImage with the same size as the original, a QPainter with the same settings as the original one, use the new QPainter to draw on the temporary image, and, finally, use the QPainter::drawImage method to draw everything on the original device.
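A rough sketch of that approach, assuming the original painter targets a QImage; the particular shape drawn here (a filled rectangle with two overlapping elliptical holes) is only an example:

```cpp
#include <QImage>
#include <QPainter>
#include <QRectF>

void drawShapeWithHoles(QImage &target, const QRectF &rect) {
    QImage scratch(target.size(), QImage::Format_ARGB32_Premultiplied);
    scratch.fill(Qt::transparent);

    QPainter p(&scratch);
    p.setRenderHint(QPainter::Antialiasing);
    p.fillRect(rect, QColor(Qt::blue));                       // the solid shape
    p.setCompositionMode(QPainter::CompositionMode_Clear);
    p.drawEllipse(rect.center(), 20, 20);                     // holes may overlap freely
    p.drawEllipse(rect.center() + QPointF(15, 0), 20, 20);
    p.end();

    QPainter out(&target);
    out.drawImage(0, 0, scratch);                             // background stays visible
}
```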
How would you store a game level for a Dizzy-like adventure game? How would you specify walkable areas and graphics? Is it tile-based, pixel-based, or can walkable surfaces be described by vectors?
Very old adventure games (like Sierra's quests from the 80s) used to actually maintain a separate bitmap of the entire screen that represented z-depth and materials to determine where your character could go and where it would be hidden. They would use pixel sampling to check where their small sprites would go.
Though current machines are faster, long side scrolling levels make this sort of approach impractical, IMHO. You have to go with more sparse representations.
One option is to "reduce" your game into invisible tiles, which are easier to represent. The main problem with this is that it can constrain your design (e.g., difficult to do diagonal platforms), and it can make the animations very shoddy (e.g., your characters' feet not actually touching the platform). This option can work IMHO for zelda-like adventure games, but not for action games.
A second option is to represent the game world via some vector representation and implement some collision detection. I would personally go with the second solution, especially if you can be smart about how you organize your data structures to minimize access time (e.g., have faster access to a subset of world elements close to your characters current position).
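As an illustration of that last point, here is a small sketch of a uniform grid that buckets vector shapes by the cells their bounding boxes overlap, so only elements near the character get tested; the Shape type is a placeholder:

```cpp
#include <map>
#include <utility>
#include <vector>

struct Shape { float min_x, min_y, max_x, max_y; /* plus the actual geometry */ };

class SpatialGrid {
public:
    explicit SpatialGrid(float cell_size) : cell_(cell_size) {}

    // Register a shape id in every cell its bounding box touches
    // (assumes non-negative world coordinates for simplicity).
    void insert(int id, const Shape &s) {
        for (int gy = int(s.min_y / cell_); gy <= int(s.max_y / cell_); ++gy)
            for (int gx = int(s.min_x / cell_); gx <= int(s.max_x / cell_); ++gx)
                cells_[{gx, gy}].push_back(id);
    }

    // All shape ids whose cells overlap a box of half-size r around (x, y);
    // may contain duplicates, which is fine for a broad phase.
    std::vector<int> query(float x, float y, float r) const {
        std::vector<int> out;
        for (int gy = int((y - r) / cell_); gy <= int((y + r) / cell_); ++gy)
            for (int gx = int((x - r) / cell_); gx <= int((x + r) / cell_); ++gx) {
                auto it = cells_.find({gx, gy});
                if (it != cells_.end())
                    out.insert(out.end(), it->second.begin(), it->second.end());
            }
        return out;
    }

private:
    float cell_;
    std::map<std::pair<int, int>, std::vector<int>> cells_;
};
```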
I wouldn't be surprised if there are available 2D game engines that provide this sort of capability, as there are definitely 3D engines that do it. In fact, you may find it easier to use an existing 3D game engine and use it to render 2D.
The Dizzy game is probably using a tile-based system. The game world is made up of a palette of tiles that are repeated throughout the level. Each tile would have three elements: the image drawn to the screen, the z-buffer to allow the main character to walk behind parts of the image, and a collision map. The latter two would be implemented as monochrome images such that:
colour | z map               | collision
-------|---------------------|--------------
black  | draw Dizzy in front | collide
white  | draw Dizzy behind   | don't collide
Storing these as monochrome images saves a lot of RAM too.
You would have an editor to build levels that displays a grid where tiles can be dragged and dropped.
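A minimal sketch of how such a tile could be stored, assuming 16x16 tiles and one bit per pixel for the two masks; the field names are mine:

```cpp
#include <cstdint>
#include <array>

constexpr int TILE = 16;

struct Tile {
    uint16_t image_id;                          // which graphic from the tile palette
    std::array<uint16_t, TILE> z_mask;          // bit set = draw Dizzy in front
    std::array<uint16_t, TILE> collision_mask;  // bit set = solid pixel

    bool solid(int x, int y)    const { return (collision_mask[y] >> x) & 1; }
    bool in_front(int x, int y) const { return (z_mask[y] >> x) & 1; }
};
```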
That specific game is a tile-based game with pixel-perfect collision. The collision was controlled by a single bit in the colour byte of the tile (the brightness), and you could also mirror the tile by setting the flashing bit of the colour.
The tiles, however, could only be placed at even x,y coordinates (I suspect this was done to help the collision system a bit.)
The collision system involved a persistent check around the hero. The rules were roughly:
- If it finds a non-collision pixel row below the hero, it drops the hero by 1 pixel.
- If there is a collision intersection with the hero, it raises the hero by 1 pixel.
- When moving left or right, it checks in that direction and:
- if it finds a wall (collision height more than 4 pixels), it denies movement in that direction;
- if it finds a climbable box (collision height up to 4 pixels), it allows movement in that direction.
- If there is enough headroom it allows jumping; otherwise, it stops at the last possible free position.
Combining these simple rules, you got a very smooth collision negotiation, able to walk even arbitrary slopes without extra cost; a condensed sketch of the rules follows.
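This is a compressed sketch of those rules, assuming a solid(x, y) predicate built from the tiles' collision bits and 1-pixel-per-frame resolution; the 4-pixel step height comes straight from the rules above, while the Hero struct and function names are my own framing:

```cpp
#include <functional>

struct Hero { int x, y, w, h; };   // box is (x, y)..(x + w - 1, y + h - 1)

void step(Hero &hero, int dir /* -1 left, +1 right, 0 none */,
          const std::function<bool(int, int)> &solid) {
    // Rule 1: no collision row below the hero, so drop by 1 pixel.
    bool ground = false;
    for (int x = hero.x; x < hero.x + hero.w; ++x)
        if (solid(x, hero.y + hero.h)) { ground = true; break; }
    if (!ground) { hero.y += 1; return; }

    // Rule 2: collision intersecting the hero's bottom row, so raise by 1 pixel.
    for (int x = hero.x; x < hero.x + hero.w; ++x)
        if (solid(x, hero.y + hero.h - 1)) { hero.y -= 1; break; }

    // Rules 3-4: measure the obstacle height in the movement direction.
    if (dir != 0) {
        int edge = (dir < 0) ? hero.x - 1 : hero.x + hero.w;
        int height = 0;
        for (int y = hero.y + hero.h - 1; y >= hero.y; --y) {
            if (!solid(edge, y)) break;
            ++height;
        }
        if (height <= 4) hero.x += dir;   // free or climbable; rule 2 then climbs the box
        // height > 4 is a wall: movement denied
    }
}
```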
In creating such a game, I would use individual tiles, assemble them on layers (background, foreground, etc.) and render them. I would assemble a separate collision map from the tile attributes that indicate a collision tile, and I would use that separate map to negotiate the hero's collisions and movement.
Similar to this demo: http://games.catevaclickuri.ro/
I have been able to find a lot of information on actual logic development for games. I would really like to make a card game, but I just don't understand how, based on the mouse position, an object can be selected (or at least the proper way). First I thought of bounding box checking, but not all my bitmaps are rectangles. Then I thought of making a hidden buffer with each object having a different color, but it seems ridiculous to have to do it this way. I'm wondering how it is really done. For example, how does Adobe Flash know which object is under the mouse?
Thanks
Your question is how to tell if the mouse is above a non-rectangular bitmap. I am assuming all your bitmaps are really rectangular, but they have transparent regions. You must already somehow be able to tell which part of your (rectangular) bitmap is transparent, depending on the scheme you use (e.g. if you designate a color as transparent or if you use a bit mask). You will also know the z-order (layering) of bitmaps on your canvas. Then when you detect a click at position (x,y), you need to find the list of rectangular bitmaps that span over that pixel. Sort them by z-order and for each one check whether the pixel is transparent or not. If yes, move on to the next bitmap. If no, then this is the selected bitmap.
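A short sketch of that hit test, assuming each sprite knows its screen rectangle, its z-order, and an is_transparent(local_x, local_y) query (color key, alpha threshold or bit mask, whichever scheme you use); all of these names are my own:

```cpp
#include <vector>
#include <algorithm>

struct Sprite {
    int x, y, w, h, z;
    bool (*is_transparent)(const Sprite &, int lx, int ly);
};

// Returns the index of the topmost non-transparent sprite at (mx, my), or -1.
int pick(const std::vector<Sprite> &sprites, int mx, int my) {
    std::vector<int> hits;
    for (int i = 0; i < (int)sprites.size(); ++i) {
        const Sprite &s = sprites[i];
        if (mx >= s.x && mx < s.x + s.w && my >= s.y && my < s.y + s.h)
            hits.push_back(i);
    }
    std::sort(hits.begin(), hits.end(),
              [&](int a, int b) { return sprites[a].z > sprites[b].z; });  // topmost first
    for (int i : hits)
        if (!sprites[i].is_transparent(sprites[i], mx - sprites[i].x, my - sprites[i].y))
            return i;
    return -1;
}
```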
Or you may use a geometric solution. You would store/manage the geometry of the card/item, for example as a list of shapes like circles and rectangles.
Maybe triangles or ellipses if you have lots of time. Telling whether a triangle contains a point is a mathematical question and can be numerically unstable if the triangle is very thin (the algorithm involves a division). Fix: How to determine if a point is in a 2D triangle?
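One common formulation of that test uses only cross-product signs and no division, so there is nothing to blow up on thin triangles; a small sketch:

```cpp
struct P { double x, y; };

// z component of (b - a) x (c - a)
static double cross(P a, P b, P c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

bool point_in_triangle(P p, P a, P b, P c) {
    double d1 = cross(a, b, p);
    double d2 = cross(b, c, p);
    double d3 = cross(c, a, p);
    bool has_neg = d1 < 0 || d2 < 0 || d3 < 0;
    bool has_pos = d1 > 0 || d2 > 0 || d3 > 0;
    return !(has_neg && has_pos);   // all on the same side (or on an edge)
}
```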