I have a canvas in Qt (a QGraphicsScene in a QGraphicsView) on which the user can add shapes: circle, square, rectangle, ellipse and triangle (each implemented as a single QGraphicsObject subclass), and change their size. Between these shapes the user can create lines, and the direction of a line is indicated by drawing an arrow at the intersection point of the line and a shape, in a similar manner to Qt's Elastic Nodes example (the connection is also a QGraphicsObject subclass).
Now, in order to support bi-directional connections and multiple connections between the same shapes, I change any subsequent connection line into a quadratic curve with a progressively bigger arc, so that each connection can still be selected and visualised.
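For reference, the curve for the n-th connection between the same pair of shapes is built roughly like this (a simplified sketch, not my exact code; the 30-pixel step is just an illustrative value):

#include <QLineF>
#include <QPainterPath>

QPainterPath makeConnectionPath(const QPointF &from, const QPointF &to, int index)
{
    QLineF line(from, to);
    QLineF normal = line.normalVector();          // perpendicular to the connection
    normal.setLength(30.0 * index);               // progressively bigger arc per extra connection
    const QPointF ctrl = line.pointAt(0.5) + (normal.p2() - normal.p1());

    QPainterPath path(from);
    path.quadTo(ctrl, to);                        // quadratic curve between the two centres
    return path;
}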
The shapes are implemented so that their origin is NOT the top left corner but the centre of the shape.
The connection between them is a line between these two centres, pushed back on the Z axis so it hides behind the shapes.
The position of the arrow at the edge of a shape is determined per shape: using the radius for the circle and trigonometry for the other shapes.
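For the straight-line case with a circle, for example, this boils down to pushing the centre out along the connection line by the radius (a simplified sketch, not my exact code):

#include <QLineF>

QPointF circleEdgePoint(const QPointF &circleCentre, const QPointF &otherCentre, qreal radius)
{
    QLineF line(circleCentre, otherCentre);
    line.setLength(radius);        // stop at the circle's edge
    return line.p2();              // the arrow is drawn here, oriented along 'line'
}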
Now, when using a quadratic curve, I need to reposition the arrow to the intersection point of the curve and the shape. With that point I can then use the same procedure to render the arrow, because I can get the angle at a given point from QPainterPath.
However, the biggest challenge is to detect this intersection point. The only option I can think of is QPainterPath::intersected, but for that to work it requires filled areas (i.e. giving the curve a width of at least 2), and I would still need to somehow extract the correct point from the result - not sure how yet.
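One rough idea I have is to skip intersected() entirely: since I can already query the curve with pointAtPercent() and angleAtPercent(), I could walk back from the destination end of the curve and take the first sample that falls outside the destination shape. A sketch under that assumption (names are made up; a bisection refinement could replace the fixed step count):

#include <QGraphicsItem>
#include <QPainterPath>

QPointF arrowPointOnCurve(const QPainterPath &curveInScene,   // source centre -> destination centre
                          const QGraphicsItem *destItem,
                          qreal *angleOut = nullptr)
{
    const QPainterPath destShape = destItem->mapToScene(destItem->shape());
    const int steps = 200;                                    // sampling resolution
    for (int i = steps; i >= 0; --i) {
        const qreal t = qreal(i) / steps;
        const QPointF p = curveInScene.pointAtPercent(t);
        if (!destShape.contains(p)) {                         // first sample outside the shape
            if (angleOut)
                *angleOut = curveInScene.angleAtPercent(t);   // tangent angle for rendering the arrow
            return p;                                         // approximate edge crossing
        }
    }
    return curveInScene.pointAtPercent(0.0);                  // fallback: source centre
}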
I would appreciate any ideas on how I could go about this.
While trying out Cocos2d V3 physics with debug mode enabled, I noticed that the physics body attached to a sprite has a different anchor point from that of the sprite itself. Here's how it looks:
And this is how I create a sprite with a physics body:
CCSprite *beam=[CCSprite spriteWithSpriteFrame:[[CCSpriteFrameCache sharedSpriteFrameCache] spriteFrameByName:@"w272.png"]];
beam.physicsBody=[CCPhysicsBody bodyWithRect:beam.boundingBox cornerRadius:0];
beam.position=ccp(125, 160);
[physicsWorld addChild:beam];
Do you have any idea how to fix this? I don't set any anchor point anywhere.
Physics objects automatically calculate a center of gravity, which is slightly different from an anchor point. Your real problem, though, is that you are using the sprite's bounding box as the rectangle to create the body, and that is expressed in the parent's coordinate space rather than the node-local coordinates the body expects. You want to make a rect that goes from (0, 0) to the content size, e.g. CGRectMake(0, 0, beam.contentSize.width, beam.contentSize.height).
I have a problem, and I saw that the game Candy Crush Saga deals with it successfully. I would like a sprite to show only when it is inside the board (see image link below). The board can have different shapes, like the levels in the mentioned game.
Does anyone have ideas on how this can be achieved with Cocos2d?
I will be very glad if someone has some tips.
Thank you in advance.
image link: http://www.android-games.fr/public/Candy-Crush-Saga/candy-crush-saga-bonus.jpg
In Cocos2d you can render sprites at different z levels. Images at a lower z level are drawn first and images (sprites) with a higher z value are drawn later. Hence, if an image (say A) is at the same position as another but has a higher z value, you will only see the pixels of image A where the two images intersect.
Cocos2d also uses layers, so you can add sprites to a layer and set the layer to a specific z value. I expect they used a layer for the board (say at z=1) with a PNG image containing transparent areas where the sprites should be visible, and a second layer at z=0 for the sprites. In this way you only see the sprites when they are in the transparent area.
Does this help?
I found out Cocos2d has a class, CCClippingNode, which does exactly what I wanted. At first I thought it could only clip rectangular areas, but after some research I found it can also clip arbitrary paths.
I am familiar with OpenCV, a powerful open source library, and I am using it in a farm industry project where a mouse is injected with a drug and kept on a stage that is surrounded by a cylinder painted with alternating white and black stripes. I need to find out how many times the mouse rotates its head to follow the rotation of the cylinder (because it is under the influence of the drug). How can I achieve this? Any OpenCV experts who can help me out here?
I have added an image below
This seems an interesting one; these are my preliminary suggestions:
It depends on the resolution of the camera and how far your object (the mouse) is from the camera. Because the mouse is a small object, its image needs to cover a good number of pixels for head movement to be distinguishable.
I don't think the mouse will stick to one position; it will keep moving in the cage, so you need to track the mouse.
At every position of the mouse you need to find the position of the head with respect to the body. You can do that using template matching (create templates of the head of the mouse).
So more info and some sample pictures are necessary to get a clear idea of the scene.
EDIT AFTER IMAGE UPLOADED
Since the camera is fixed, create a circular region of interest so that only movement inside this circle concerns you, and not the moving cylinder outside the circle.
Subtract the present frame from the previous frame (frame differencing) and store the absolute difference in an image:
absdiff(frameNow,framePrevs,diffofFrames);
Threshold diffofFrames as required to get the current position of the rat.
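A rough OpenCV sketch of those three steps, assuming frameNow and framePrevs are greyscale frames from the fixed camera; the ROI centre and radius are placeholder values you would tune to your setup:

#include <opencv2/opencv.hpp>

cv::Mat movingPixels(const cv::Mat &frameNow, const cv::Mat &framePrevs)
{
    // circular region of interest covering only the stage
    cv::Mat roiMask = cv::Mat::zeros(frameNow.size(), CV_8UC1);
    cv::circle(roiMask, cv::Point(frameNow.cols / 2, frameNow.rows / 2),
               frameNow.rows / 3, cv::Scalar(255), cv::FILLED);

    cv::Mat diffOfFrames, moving;
    cv::absdiff(frameNow, framePrevs, diffOfFrames);           // frame differencing
    diffOfFrames.copyTo(moving, roiMask);                      // ignore the rotating cylinder outside the ROI
    cv::threshold(moving, moving, 30, 255, cv::THRESH_BINARY);
    return moving;                                             // white blob ~ current position of the rat
}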
Now the task is easier if the image clearly shows the nose: since the nose has a pointed shape, it can be detected by template matching. However, from the image you have given it is difficult to make out the nose against the black background, so I can only suggest the following process. The green circles denote the tip of the nose; all I am trying to do is get the orientation of the head w.r.t. the body. For good results you need good images.
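A sketch of the template-matching idea for the head, assuming headTempl is a small image of the head you cropped beforehand (the name is a placeholder):

#include <opencv2/opencv.hpp>

cv::Point locateHead(const cv::Mat &frame, const cv::Mat &headTempl)
{
    cv::Mat result;
    cv::matchTemplate(frame, headTempl, result, cv::TM_CCOEFF_NORMED);
    double minVal, maxVal;
    cv::Point minLoc, maxLoc;
    cv::minMaxLoc(result, &minVal, &maxVal, &minLoc, &maxLoc);
    // best match, shifted to the centre of the template
    return maxLoc + cv::Point(headTempl.cols / 2, headTempl.rows / 2);
}

The vector from the body centroid (for example from cv::moments on the thresholded blob) to this head position then gives the orientation of the head with respect to the body.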
How does one implement a scrolling map and detect collisions on it?
I have tried a tile map but don't want to use this technique.
I would like to learn how a game's background can be scrolled and how objects are detected for collision.
One example would be this game.
If one is not using tilemaps for this, how can the world be scrolled while physical objects still detect collisions with the character sprite?
Please point me in the right direction on this topic.
Simple collision detection is done by checking to see if bounding boxes (or sometimes, bounding circles) overlap. That is, given object A and object B, if their bounding boxes overlap, then a collision has occurred.
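For axis-aligned bounding boxes that test is just four comparisons. A minimal C++-style sketch (the Box struct is made up for illustration; it assumes x/y is the top-left corner and y grows downward):

struct Box { float x, y, w, h; };

bool boxesOverlap(const Box &a, const Box &b)
{
    return a.x < b.x + b.w && a.x + a.w > b.x &&   // horizontal overlap
           a.y < b.y + b.h && a.y + a.h > b.y;     // vertical overlap
}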
If you want to get fancy, you create polygons for your objects and determine if the bounding polygons overlap. That's more complicated and harder to get right, and also involves considerably more processing.
It's not really much different from using a tilemap. With a tilemap, you're doing collision detection against the tiles. Without a tilemap you're doing collision detection against the bounding boxes of individual objects. The logic is almost identical.
Additional info:
I think the problem you're struggling with is that you're thinking in terms of screens. It's important when writing a game to separate the concept of the game world (the model) from the presentation (the screen).
For example, a simple side scroller game might consist of many "screens" full of information. Whereas the screen might be 1000 units wide, the game's world could be 10,000 or 100,000 units wide. Note that I say "units" rather than "pixels". A game's unit of measure could be inches, centimeters, miles, or whatever. All of your game logic should work in terms of world coordinates. How you project that onto the view (the screen) is irrelevant to what goes on in the world.
Every object in the world is assigned a position in world coordinates. That includes the player sprite, which is moving around in the world. All collision detection and other interaction of objects in the world is done in terms of world coordinates.
To simplify things, assume that there is a one-to-one mapping between world units and screen units. That is, your world is 10,000 pixels wide, and your screen is 1,000 pixels wide. The world, then, is "ten screens" in size. More correctly, the screen can view 1/10 of the world. If that's the case, then your view can be defined by the left-most world coordinate that's visible. So if the world coordinate 2,000 is at the left of your screen, the rightmost column of pixels on your screen will be world coordinate 2,999.
This implies that you need world-to-screen and screen-to-world transformation functions. For this case, those functions are very simple. If we assume that there's no vertical scrolling, then only the x coordinate needs to be transformed, by subtracting the view origin's x coordinate from the world x coordinate. That is, world.x - viewOrigin.x. So in the case above, the screen coordinate of an object that's at world x coordinate 2,315 would be (2,315 - 2,000), or 315. That's the x coordinate of the object's position on the screen, in the current view.
The screen-to-world is the opposite: add the object's screen coordinate to the view's origin: 315 + 2,000 = 2,315.
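In code those two transforms are one-liners (viewOrigin being the world coordinate of the left edge of the screen; the names are illustrative):

float worldToScreenX(float worldX, float viewOriginX) { return worldX - viewOriginX; }
float screenToWorldX(float screenX, float viewOriginX) { return screenX + viewOriginX; }

// example from above: viewOriginX = 2000, worldX = 2315  ->  screen x = 315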
I can't stress enough how important it is to maintain this separation between world and view coordinates. All of your game logic should operate in terms of world coordinates. You update the view to show what's happening in the world. In a side scroller, you typically move the view left and right based on the player's actions. If the player moves left, you move the view origin left. If the player moves right, you move the view origin to the right.
Once you internalize this idea, things become much simpler. Everything done in the game world happens in the world. You make sure that the world state is consistent, and then you worry about displaying a view of the world.
There's no easier way (that I know of) to build a game that's larger than the screen. No matter how you build it, you have to have some concept of showing just part of the world on the view. By formalizing it, you separate the presentation from the game logic and things get much, much simpler.
You are going to want to set your camera dimensions to the dimensions of the part of your level that you want rendered. You can compute the level coordinates according to whatever your camera is supposed to be fixed on, say your hero.
Sprite collision is relatively easy: compare the two sprites' positions and sizes (I don't know about cocos2d specifically, but in most APIs a sprite's position is its upper left corner):
// assumes the position is the upper left corner and y increases upward,
// so a sprite spans from Pos.y down to Pos.y - height
if (player.Pos.x <= (object.Pos.x + object.width) && (player.Pos.x + player.width) >= object.Pos.x)
{
    if (player.Pos.y >= (object.Pos.y - object.height) && (player.Pos.y - player.height) <= object.Pos.y)
    {
        // collision detected
    }
}